
February 23, 2009

Comments

Virtual Man

VirtenSys and Xsigo appear to be doing the same thing. VirtenSys is using PCIe in servers; Xsigo uses InfiniBand.

Nik Simpson

Yes, they are somewhat similar in that both extend the PCIe bus outside the server. The big difference is that the Xsigo approach requires an InfiniBand HCA in every server, while the VirtenSys approach only needs a passive PCIe extender card. So in the long term a PCIe-based solution will be less expensive.

Marcus

Not exactly correct. The VirtenSys product doesn't actually virtualize anything. You can think of it as a PCIe multi-host multiplexor: if you plug in a single 10Gb NIC or FC HBA, each of the up to 16 hosts can see and share that NIC or HBA, but each host only sees one NIC or one HBA. The resource is shared, but not virtualized. Also, there is no QoS to allow some hosts to get more bandwidth than others. Not all hosts are created equal, nor should their access to bandwidth be.

The Xsigo box actually takes a single HBA I/O card and presents up to 128 virtual HBAs to any server in any combination: server 1 can have 4 vHBAs, or 2, or 1; server 2 can have 2, 4, 5, or 10 vHBAs; and so on.
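To make the "any combination, up to 128" model concrete, here is a minimal sketch of that carve-up as a data structure. The class and method names are illustrative inventions, not Xsigo's actual API; the only facts taken from the discussion are the single physical card and the 128-vHBA ceiling.

```python
# Hypothetical sketch: one physical HBA carved into up to 128 virtual
# HBAs, assigned to servers in arbitrary combinations.
class PhysicalHBA:
    MAX_VHBAS = 128  # per the discussion: up to 128 virtual HBAs per card

    def __init__(self):
        self.assignments = {}  # server name -> list of vHBA ids
        self._next_id = 0

    def total_vhbas(self):
        return sum(len(ids) for ids in self.assignments.values())

    def assign(self, server, count):
        """Give `server` `count` additional virtual HBAs."""
        if self.total_vhbas() + count > self.MAX_VHBAS:
            raise ValueError("physical HBA exhausted: 128 vHBA limit")
        ids = list(range(self._next_id, self._next_id + count))
        self._next_id += count
        self.assignments.setdefault(server, []).extend(ids)
        return ids

hba = PhysicalHBA()
hba.assign("server1", 4)   # server 1 gets 4 vHBAs
hba.assign("server2", 10)  # server 2 gets 10, from the same card
```

The point of the sketch is only that the mapping is many-virtual-to-one-physical and per-server counts are arbitrary, which is the property Marcus contrasts with the one-device-per-host multiplexor model.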

Nik Simpson

Marcus,
I've checked with VirtenSys, and they disagree. I've asked them to follow up and post a comment in response.

bob napaa

In response to Marcus's comments:
1. "The VirtenSys product doesn't actually virtualize anything" is not correct. We create multiple virtual devices from a single physical device; that is what virtualization means. How they can be assigned across multiple servers is a different issue.
2. The "Also, there is no QoS …" comment is plainly wrong. Yes, we do support QoS, including minimum bandwidth guarantees, bandwidth capping, traffic priorities, etc.
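The QoS knobs listed above (minimum guarantees plus caps, with priorities) can be illustrated with a generic weighted share-out. This is a textbook allocation sketch under assumed inputs, not VirtenSys's actual implementation; all names and numbers are hypothetical.

```python
# Illustrative QoS sketch: honor each host's minimum bandwidth first,
# then split the spare link capacity by priority weight, then enforce
# per-host caps.
def allocate(link_gbps, hosts):
    """hosts: {name: (min_gbps, cap_gbps, weight)} -> {name: gbps}"""
    alloc = {h: spec[0] for h, spec in hosts.items()}  # minimums first
    spare = link_gbps - sum(alloc.values())
    total_weight = sum(spec[2] for spec in hosts.values())
    for h, (min_g, cap_g, weight) in hosts.items():
        share = alloc[h] + spare * weight / total_weight
        alloc[h] = min(share, cap_g)                   # then cap
    return alloc

# A shared 10Gb link, three hosts with unequal priorities:
shares = allocate(10, {
    "db":  (2.0, 6.0, 3),  # guaranteed 2 Gb, capped at 6, weight 3
    "web": (1.0, 4.0, 2),
    "dev": (0.5, 2.0, 1),
})
```

This one-pass version leaves capacity stranded if a cap bites; a production scheduler would redistribute it, but the sketch captures the guarantee/cap/priority trio the comment names.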

A bit more detail regarding virtualization:
Perhaps what Marcus meant (and would have been more accurate to say) is that it doesn't "abstract anything." That's actually the key difference (in this context) between the VirtenSys approach (PCIe-based, with transparency) and the Xsigo approach (InfiniBand, with abstraction). I would argue that the transparent approach, where the virtual device looks just like the physical device, is the preferred one, since it causes no disruption to the server software stack. All the existing software on the server works exactly as it does today. Much of the feedback we've heard about why users don't like the IB solutions is that they disrupt their server software, and that is one of the reasons there's so much OEM and user interest in VirtenSys.

The issue of assigning multiple virtual devices to a single server is also a red herring. Since our virtual devices are identical to the physical devices (which support multiple VMs), there is little value in assigning multiple virtual devices from a single physical device to a single server; VM support comes for free. You only need to do that if you've gone down the IB route (simple, abstracted virtual devices). We will probably provide multiple virtual device mapping to an individual server later on, just for completeness, but there's just not much need for it with our approach.

The comments to this entry are closed.



