In January, I commented on several articles covering research that showed we would soon hit scalability barriers with multi-core, multi-processor systems (see “Well Duh!”). In brief, I could see the problem (it’s not rocket science), but I was skeptical about the long-term impact. Fast-forward a couple of weeks, and several sites are making similar predictions (see 1,2,3), but based on a different source. Needless to say, I think the articles are just a little “sensational” in their coverage. The basic argument is that organizations may well need to upgrade applications, operating systems, and hypervisors to take best advantage of forthcoming hardware platforms – ya think!
The first thing to point out is that, from a programming perspective, a 4-core x86 processor is not significantly different from a 4-processor x86 machine, and we’ve had those for a decade or more. So most enterprise applications (and all operating systems and hypervisors) are already designed for some degree of parallelism (often referred to as being multi-threaded). When you throw server virtualization into the mix and run multiple applications on the same physical host, it’s not at all difficult to make effective use of any x86 server shipping today or in the foreseeable future.
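To see why the sockets-versus-cores distinction barely matters to application code, here’s a minimal C++ sketch (my own illustration, not taken from any of the articles): the program just asks the OS how many logical processors it has and spawns one worker per processor. It behaves the same whether those processors are four single-core sockets or four cores on one die, because the OS presents both as logical processors.

```cpp
#include <iostream>
#include <thread>
#include <vector>

// A worker doing a slice of the application's job. It neither knows
// nor cares whether it runs on a separate socket or a separate core.
void worker(unsigned id) {
    // ... do a slice of the application's work ...
    std::cout << "worker " << id << " running\n";  // output may interleave
}

int main() {
    // hardware_concurrency() reports logical processors; it gives the
    // same kind of answer for a 4-socket single-core box as for a
    // single-socket quad-core one.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;  // the call may return 0 if the count is unknown

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(worker, i);
    for (auto& t : pool)
        t.join();
}
```

The point of the sketch is that nothing in it mentions sockets, cores, or threads-per-core; code written a decade ago for 4-processor boxes exploits a 4-core chip just as well.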
But let’s step back a moment and look at the core of the argument, which is that software available today isn’t perfectly optimized for systems that haven’t shipped yet. That raises the question: why would somebody writing a hypervisor in 2005 design it to be optimized for a system with 128 execution threads (say, an 8-processor system with eight cores per processor and two threads per core: 8 × 8 × 2 = 128) when the biggest x86 system they were likely to encounter might support eight simultaneous execution threads?
Any sensible software design process will set the design point at the platforms the software is likely to run on, plus a little wiggle room for the hardware advances that will occur over its likely operational life. Doing anything more than that is wasted effort and may actually end up slower on existing hardware (i.e. there is no point in doing all the work to manage 128 execution threads on a server that can only execute 8 threads at a time).
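As a concrete (and entirely hypothetical) illustration of that design-point-plus-wiggle-room idea, a sizing policy might look like this; the multiplier and cap here are my own invented numbers, not anything from a shipping product:

```cpp
#include <algorithm>
#include <thread>

// Hypothetical sizing policy: scale with the hardware you actually run
// on, rather than hard-coding the 128 threads some future box might offer.
unsigned pick_worker_count() {
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 8;  // fall back if the count is unknown

    // A little headroom for near-term hardware, but capped: managing
    // 128 workers on an 8-thread server buys you nothing except
    // scheduling and synchronization overhead.
    return std::min(hw * 2, 32u);
}
```

On today’s 8-thread server this yields 16 workers; on tomorrow’s 16-thread server, 32. When 128-thread boxes actually ship, you raise the cap in a maintenance release, which is exactly the kind of evolution discussed next.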
But software isn’t set in stone (otherwise we’d call it hardware); it evolves with the hardware (as I discussed in “Ticket to Ride (REDUX)”, which is directly relevant to this discussion). One thing I didn’t mention in that blog is that Microsoft is also getting rid of the 32-processor limit that has been part of the NT kernel since it was designed (coincidence? I think not!). I’d be willing to make a pretty big bet that Microsoft isn’t the only one evolving its OS, hypervisor, and applications to make better use of multi-core servers. The fact that a hypervisor shipping today doesn’t support more than 64 cores is no reason to assume that will still be the case by the time such systems are widely available.
So, in conclusion, while the next generation of multi-core CPUs may very well mean “the end of the world as we know it”, I feel fine. More importantly, IT organizations should not shy away from more advanced x86 server platforms if they have any interest in scaling the performance of their applications and/or reducing the amount of hardware and energy they pay for each year.
Posted by: Nik Simpson