Solid state disks (SSDs) made from FLASH memory are suddenly all the rage, with big vendors like EMC and Sun giving details on their strategies for using SSD to improve I/O performance in enterprise storage. It's not surprising that storage is the first to benefit from the technology, for two reasons:
- Performance: Conventional disk cannot address the growing gap between storage and overall system performance. You can't just make disks spin faster or the read/write heads move more quickly; physics gets in the way.
- Ease of use: FLASH-based memory packaged as a disk (i.e. using the same 3.5" and 2.5" form factors and interfaces) is pretty easy to integrate with existing storage systems; it doesn't take a rocket science degree.
The immediate impact will be on sales of high performance SAS and FC disk, where performance is prized over capacity. Why buy (and subsequently power and cool) ten 72GB 15K RPM FC drives if one or two SSD drives can give you better performance and still meet your capacity requirements? Yes, I know the technology isn't quite there yet, particularly for write performance, but it will be, and it will decimate the high-performance disk market.
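To make that trade-off concrete, here's a back-of-envelope comparison. All of the numbers are illustrative assumptions, not vendor specs: a 15K RPM FC drive is assumed to deliver roughly 180 random IOPS at about 15 W, and an early-generation flash SSD a few thousand random read IOPS at a couple of watts.

```python
# Back-of-envelope comparison of ten FC drives vs. two SSDs.
# All figures below are illustrative assumptions, not measured specs:
#   15K RPM FC drive: ~180 random IOPS, 72 GB, ~15 W
#   early flash SSD:  ~5000 random read IOPS, 64 GB, ~2 W
FC_IOPS, FC_GB, FC_WATTS = 180, 72, 15
SSD_IOPS, SSD_GB, SSD_WATTS = 5000, 64, 2

fc_drives, ssd_drives = 10, 2
print(f"{fc_drives} FC drives: {fc_drives * FC_IOPS} IOPS, "
      f"{fc_drives * FC_GB} GB, {fc_drives * FC_WATTS} W")
print(f"{ssd_drives} SSDs:      {ssd_drives * SSD_IOPS} IOPS, "
      f"{ssd_drives * SSD_GB} GB, {ssd_drives * SSD_WATTS} W")
```

Even with generous assumptions for the disks, the SSD pair wins on random IOPS and power by a wide margin; the only axis where the ten-drive array wins is raw capacity.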
With SSD in disk drive form factors a foregone conclusion (my bet is most enterprises will have it for at least some applications by 2010), the more interesting question is how the technology evolves and its long-term impact on several areas:
- Server form factors: Part of the form factor of a typical server or blade is governed by the number of drives supported. But why have drives at all? FLASH memory doesn't have to be packaged that way; it could be installed in slots on a motherboard, for example. So FLASH could bring about new, denser server form factors as storage performance, power consumption, and storage form factors change.
- File systems: All the file systems in widespread use today are designed to overcome, or at least mitigate, the inherent limitations of conventional disk. For example, many file systems optimize the placement of disk blocks for a particular file to minimize the disk head movements caused by fragmentation; others employ defragmentation tools to address the problem. But FLASH-based storage is truly random access: performance is the same for every single block on the device, rendering the whole concept of fragmentation meaningless.
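A toy model makes the fragmentation point vivid. The latency figures below are assumptions chosen for illustration (a few milliseconds of seek and rotational delay per non-adjacent disk block, a flat per-block cost for flash), not measurements of any real device:

```python
import random

# Toy access-time model (assumed costs, for illustration only):
# a spinning disk pays a seek + rotational penalty whenever the next
# block is not adjacent; flash pays the same fixed cost for every block.
SEEK_MS, ROTATE_MS, XFER_MS = 4.0, 2.0, 0.1   # assumed per-block disk costs
FLASH_MS = 0.1                                # assumed per-block flash cost

def disk_read_time(blocks):
    total, prev = 0.0, None
    for b in blocks:
        if prev is not None and b == prev + 1:
            total += XFER_MS                  # sequential: no head movement
        else:
            total += SEEK_MS + ROTATE_MS + XFER_MS
        prev = b
    return total

def flash_read_time(blocks):
    return len(blocks) * FLASH_MS             # layout is irrelevant

contiguous = list(range(1000))                      # well-laid-out file
fragmented = random.sample(range(100_000), 1000)    # badly fragmented file

print(f"disk, contiguous file: {disk_read_time(contiguous):9.1f} ms")
print(f"disk, fragmented file: {disk_read_time(fragmented):9.1f} ms")
print(f"flash, either layout:  {flash_read_time(fragmented):9.1f} ms")
```

Under this model the fragmented file is orders of magnitude slower on the spinning disk, while on flash both layouts cost exactly the same, which is why block-placement optimization stops mattering.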
- I/O interface protocols: Protocols like FC and SAS are designed for use with conventional disk, but they don't make sense for high-performance I/O from memory, which would be better served by a pure memory-style interface without all the storage protocol overhead.
- Main memory: At first glance this one doesn't seem that attractive; after all, FLASH is much slower than DDR2 DRAM, isn't it? But it doesn't have to be that much slower, as Spansion is demonstrating with its NOR FLASH-based DIMMs that plug right into the memory slots on the motherboard. Read performance is not quite up to DDR2 speeds, but it's close enough, and 32GB/DIMM (that's 256GB in eight slots) is very attractive for applications that primarily read from a database that can now be loaded entirely into memory.
- Operating systems: Changing the performance ratio between storage and compute may open up new possibilities in OS design; for example, the paging system could be implemented on FLASH storage interfaced directly to the memory bus.
I could go on (and probably will do in the future :-), but that will do for now.
Posted by: Nik Simpson