Announcements this week from ScaleMP and 3Leaf Systems offered new ways to build very large x86-based systems from commodity building blocks. The two approaches are similar in concept: both create a single system from multiple x86 servers by tying them together over an InfiniBand fabric. Once built, the result behaves as a single system, so you can construct 8-way, 16-way, or even 32-way x86 servers with terabytes of memory, running a single operating system image.
Where the two companies differ is in their implementation and go-to-market strategies. ScaleMP’s approach uses InfiniBand cards in PCIe slots, so all the inter-node traffic (i.e. memory reads and writes) has to be moved from local memory and sent over the PCIe bus. The big advantage of this approach is that it works with any commodity server, and ScaleMP already has some level of partnership with all the major server OEMs as well as a number of channel partners (mostly in the high-performance computing (HPC) space).
Meanwhile, 3Leaf Systems has built a hardware solution that puts an InfiniBand interface directly into a CPU socket on custom-made 3-socket motherboards that use AMD processors. Technically, this approach has an advantage: latency for inter-node traffic is reduced by taking the PCIe bus out of the equation. The downside is that it can’t be used with off-the-shelf servers, so 3Leaf will have to find hardware partners willing to build suitable systems.
The interesting question is whether these systems have a market outside HPC that can attract the server OEMs. Candidates for possible new markets include:
- Large database server market: Currently dominated by RISC/UNIX platforms. The problem here is that a large server constructed from multiple commodity boxes will not have the availability characteristics of the large RISC/UNIX platforms, so for now at least, this is a non-starter.
- Hosting for large numbers of virtual machines: Today, x86 server virtualization products typically run on 2-socket and 4-socket commodity servers. This places an upper limit on the number of VMs hosted on each server, resulting in the need for multiple physical servers in clusters. A cost-effective way to build 8-, 16-, and 32-socket servers would allow for much higher consolidation ratios. However, many customers would be wary of hosting hundreds of virtual machines this way because of the complexity involved in the event of hardware problems.
- Flexible public/private cloud infrastructure: A cloud built from 2-socket servers can’t host a workload that needs more than a 2-socket server can deliver. You can get around this by having servers of different sizes in the cloud (i.e. a mix of 2- and 4-socket servers), but this means predicting what customers will ask for and increases hardware costs. A cloud that can reconfigure its 2-socket servers into larger systems has much more flexibility. For example, a cloud consisting of sixteen 2-socket servers (32 sockets in total) could be presented to customers as four 8-socket servers; or as two 8-socket servers, two 4-socket servers, and four 2-socket servers; and so on.
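The socket arithmetic in that last scenario can be sketched with a small, hypothetical allocator. This is purely illustrative — the function name and parameters are assumptions for the sketch, not any vendor’s API — but it shows how a fixed pool of 2-socket nodes can be carved into different mixes of logical servers:

```python
import math

def allocate(pool_nodes, requests_sockets, sockets_per_node=2):
    """Carve a pool of identical small nodes into larger logical servers.

    pool_nodes       -- number of physical nodes available (e.g. sixteen 2-socket boxes)
    requests_sockets -- requested logical-server sizes, in sockets
    Returns the number of nodes left over; raises if the pool is too small.
    """
    needed = sum(math.ceil(r / sockets_per_node) for r in requests_sockets)
    if needed > pool_nodes:
        raise ValueError(f"need {needed} nodes, only {pool_nodes} available")
    return pool_nodes - needed

# Sixteen 2-socket servers (32 sockets) presented as four 8-socket servers:
print(allocate(16, [8, 8, 8, 8]))               # 0 nodes left over
# ...or as two 8-socket, two 4-socket, and four 2-socket servers:
print(allocate(16, [8, 8, 4, 4, 2, 2, 2, 2]))   # 0 nodes left over
```

Both mixes consume the pool exactly, which is what makes the reconfigurable approach attractive: the same sixteen boxes can satisfy whatever size mix customers actually request.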
Of these markets, I think the last is the most promising, and it will be interesting to see whether the idea catches on with hardware infrastructure as a service (HIaaS) providers.
Posted by: Nik Simpson