Last month I blogged about Red Hat’s move away from Xen to KVM. Many of you are probably asking the same questions I asked: What’s so good about KVM that Red Hat is not only moving away from Xen, but has now announced its intention to acquire Qumranet (the maintainers of KVM)?
First off, KVM is implemented as a host operating system extension, also known as a host-based hypervisor or type 2 hypervisor. To the host operating system (Linux), the hypervisor appears as a very large device driver. This is akin to the general architectures of Microsoft’s Virtual Server, VMware’s VMware Server (formerly GSX), and Parallels’ Parallels Desktop products. Now, don’t go drawing conclusions from this comparison; we have more to dig into on these.
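To make the “hypervisor as device driver” point concrete: KVM exposes itself to userspace as /dev/kvm, driven entirely through ioctl() calls. The following is a minimal illustrative sketch, not production code; the ioctl constant is taken from <linux/kvm.h>, and the probe degrades gracefully on machines where KVM is not loaded.

```python
import fcntl
import os

# From <linux/kvm.h>: the KVM ioctl magic is 0xAE, and
# KVM_GET_API_VERSION is _IO(0xAE, 0x00), which encodes to 0xAE00.
KVM_GET_API_VERSION = 0xAE00

def kvm_api_version():
    """Return the KVM API version, or None if /dev/kvm is unavailable."""
    try:
        fd = os.open("/dev/kvm", os.O_RDWR)
    except OSError:
        # kvm module not loaded, no hardware virtualization, or no permission
        return None
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

print(kvm_api_version())
```

On any host with KVM loaded this prints the stable API version (12); everywhere else it prints None. The point is that the “hypervisor” is reachable through the same open()/ioctl() plumbing as any other device driver.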
Conversely, Xen is what is known as a microkernel-based hypervisor or type 1 hypervisor. It has its own microkernel and does not rely on a host operating system to talk to the CPU, but it does leverage a Linux operating system to manage the system and to load and operate the hardware device drivers for LAN, disk, etc. This model effectively couples two kernels, either loosely or tightly. This is akin to the general architectures of Microsoft’s Hyper-V and VMware’s ESX Server products. Again – no conclusions, I’m not done yet.
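You can actually see this two-kernel split from inside a running system: the management Linux (Dom0) advertises its role through /proc/xen. A small sketch, assuming the conventional "control_d" flag that Xen’s control domain exposes in /proc/xen/capabilities:

```python
def xen_role():
    """Classify this Linux instance's role in a Xen system.

    Dom0 (the management domain) lists 'control_d' in
    /proc/xen/capabilities; a guest domain (domU) has the file but
    not the flag; a non-Xen host has no /proc/xen at all.
    """
    try:
        with open("/proc/xen/capabilities") as f:
            caps = f.read()
    except OSError:
        return "not running under Xen"
    return "dom0 (management domain)" if "control_d" in caps else "domU (guest)"

print(xen_role())
```

Note what this implies architecturally: the Linux kernel you are querying here is not the hypervisor itself, just the privileged domain that loads device drivers on the microkernel’s behalf.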
Let’s get one thing out of the way first: host-based hypervisors tend to be thought of as slow, inefficient, and slaved to the shortcomings and whims of the host operating system upon which they run. The latter point is absolutely true (they are slaved to the host operating system), and it is what yields the slowness and inefficiency in most cases. This perception of host-based hypervisors comes from most people experiencing these products running on Windows as the host OS. Windows was never intended to be a host OS for a hypervisor, and as such it yields a slow, inefficient, and insecure system. But not all host-based hypervisors have to run on Windows. GSX, for example, can run on Linux as well. Strip that Linux host down to the bare bones, removing unnecessary components, and in theory you can build a lean, mean hypervisor. That’s what Red Hat is doing with its Ovirt project. Red Hat will most likely not put KVM on top of RHEL 6 (the full-blown product) but will offer the KVM hypervisor on top of a very minimal, slimmed-down RHEL 6 kernel. You would run the full-blown RHEL 6 as a guest in this model, or on bare metal with no virtualization.
With that out of the way, let’s look at the Pros and Cons of each:
KVM Pros:
-Smaller hypervisor memory footprint (only one kernel)
-Included in Linux mainline kernel (since 2.6.20)
-Supports all device drivers certified/tested on mainline kernel
-Does not require forking the Linux kernel
-Can use standard Linux process monitoring and management tools to monitor VMs
-Smaller attack surface (security)
KVM Cons:
-Each VM runs as a Linux process (scheduled by a scheduler tuned for applications, not VMs)
-Lack of VM scheduling based on resource loading (QoS – though this could change as Virtuozzo container technologies make their way into the kernel over time)
-Requires custom stripped down kernel to achieve efficiency
-Lack of paravirtualized device drivers (this will change as they are built/released)
-Immature (lack of ISV/IHV support such as backup solutions)
-Lack of perception in the industry as a valid contender (time to mature)
Xen Pros:
-Custom microkernel tuned for VM process scheduling and management
-Process scheduling based on resources and states important to virtual machines
-Broad vendor support including IHV OEM distributions
-Increasing production deployments
-Paravirtualized device drivers (significantly improves guest OS performance)
-Microkernel most likely won’t share vulnerabilities with Linux kernel
Xen Cons:
-Not included in the mainline Linux kernel
-Requires forking the Linux kernel
-Requires Linux device driver recertification
-Requires a great deal of work to integrate into the Linux kernel (the paravirt ops work in the Linux kernel could change this over time, allowing Xen, VMware, or others to simply plug in under the Linux kernel)
-Dependency on the Linux kernel (Dom0) for device drivers (although as hardware becomes more virtualizable – e.g., single root I/O virtualization (SR-IOV) LAN adapters – the dependency on Dom0 will fade until it exists only for management of the system)
-Larger attack surface (security)
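Circling back to one KVM pro above – monitoring VMs with standard Linux process tools – here is what that looks like in practice. Because each KVM guest is just an ordinary process (typically a qemu-kvm process), finding them takes nothing fancier than a /proc scan. This is an illustrative sketch; the "qemu" substring match is an assumption about how the guests were launched.

```python
import os

def find_vm_processes(needle=b"qemu"):
    """Scan /proc for processes whose command line contains `needle`.

    Under KVM each guest is an ordinary Linux process, so the same
    tooling that inspects any process (ps, top, /proc) sees the VMs.
    Returns a list of (pid, command line) tuples; empty on systems
    without /proc or without matching processes.
    """
    if not os.path.isdir("/proc"):
        return []
    vms = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                # cmdline is NUL-separated; join it for display
                cmdline = f.read().replace(b"\0", b" ")
        except OSError:
            continue  # process exited while we were scanning
        if needle in cmdline:
            vms.append((int(pid), cmdline.decode(errors="replace")))
    return vms

for pid, cmd in find_vm_processes():
    print(pid, cmd[:60])
```

Contrast this with Xen, where guests are scheduled by the microkernel and Dom0’s process table shows only management stubs, not the VMs themselves.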
The bottom line of all this is twofold. First, technically: KVM is a couple of years behind Xen, but there is really nothing barring it from maturing into a viable solution. It comes down to investment from the open source community, and Red Hat’s purchase of Qumranet is a step toward securing that investment. Second is market perception. KVM appears to the masses as “yet another” hypervisor, and late to the game to boot. This makes for an uphill battle in a rapidly crowding market. Red Hat needs another large vendor to adopt KVM and put muscle behind it. Xen has Citrix, Novell, Oracle, Red Hat, Sun, and Virtual Iron commercially shipping products based on its technology. KVM effectively has nothing shipping until Red Hat releases the Ovirt hypervisor product.
I’m going to look into the crystal ball and take a gander: In a few years KVM will be the preferred open source solution used in desktop virtualization (not to be confused with virtual desktop infrastructures) and Xen will be the preferred open source solution used in server virtualization. Xen offers the ability to custom tune its microkernel for virtual machines (its heritage) and KVM offers broader hardware support needed in desktop/laptop environments (its heritage).
[Posted by: Richard Jones]