A few weeks ago, I wrote about thin provisioning and how it was becoming an increasingly common feature in storage. Well, if you needed any more proof that "thin provisioning" of storage is the latest must-have checkbox, then HP's announcement this week of Dynamic Capacity Management (DCM) for its new line of EVA storage arrays should put any doubts to rest. There's only one problem: Dynamic Capacity Management isn't thin provisioning, and to be fair to HP, they don't call it thin provisioning in any of the materials I could find on their web site.
But a strange thing happened with coverage of the announcement: almost without exception, DCM has been talked about as thin provisioning in web and print coverage. The only question is whether everybody briefed by HP was the victim of a shared delusion, or whether HP deliberately positioned it as thin provisioning in their briefings. Either way, calling DCM "thin provisioning" is at best being "economical with the truth."
So why isn't DCM true "thin provisioning?" Let's look at some of the key features of previously announced thin provisioning and compare them with what DCM has to offer:
| Feature | Thin provisioning | HP DCM |
| --- | --- | --- |
| OS independent | Yes, all the thin provisioning systems announced to this point will work for any Fibre or iSCSI attached host. | Only works with Windows hosts and needs a host agent. |
| Non-disruptive | Thin provisioning is completely transparent to the host; as far as the host is concerned, the LUN size never changes. | DCM relies on the host being able to stretch a volume to use the additional space added to the LUN. This may or may not work depending on the application. |
| Fine grain allocation | Thin provisioning allocates storage in relatively small pieces (typically a few MBs at a time) so that physical storage is consumed slowly and efficiently as hosts write data. | Because of the potential for disruption, DCM will add large chunks of storage to minimize the number of allocation events that cause the OS to stretch a volume. |
This combination of DCM deficiencies means that it fails to deliver on the promises of thin provisioning in just about every respect. Let's look at an example of an application server adding 50GB of new data every week. With DCM we might start with 200GB of storage to contain the OS, application and initial data. With 50GB of new data added each week, we'll exhaust that initial storage before the end of the first month, and then, assuming we add storage in 200GB chunks, we'll have to go through the process about once a month for as long as the server is active. The following chart shows how that looks from an efficiency perspective:
Over the course of twenty-four weeks, the volume will have been resized 6 times (increasing the chunk size to reduce the number of resize events only makes things worse from an efficiency perspective), with a significant part of the capacity wasted until the host actually creates new data. The end result is that you are:
- Paying for storage long before you need it
- Paying to power and cool that storage when it's not actually doing useful work
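The arithmetic behind that chart can be sketched in a few lines of Python. The 200GB starting size, 200GB chunk, and 50GB weekly growth come from the example above; the 50GB of initial data is my assumption, chosen so the initial 200GB runs out before the end of the first month as described:

```python
INITIAL_GB = 200   # initial provisioned capacity (from the example)
CHUNK_GB = 200     # capacity added per DCM resize event (from the example)
GROWTH_GB = 50     # new data written per week (from the example)
START_USED_GB = 50 # OS + app + initial data -- an assumption for illustration
WEEKS = 24

allocated = INITIAL_GB
used = START_USED_GB
resizes = 0

for week in range(1, WEEKS + 1):
    used += GROWTH_GB
    # When the host runs out of space, DCM adds another large chunk
    # and the OS must stretch the volume -- one disruptive event each time.
    while used > allocated:
        allocated += CHUNK_GB
        resizes += 1

print(resizes)           # 6 resize events over 24 weeks
print(allocated - used)  # 150GB allocated but still sitting empty at week 24
```

Under these assumptions the volume is resized 6 times, matching the count above, and at any given moment up to a full chunk of paid-for capacity sits idle waiting for data.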
A real thin provisioning system will cause "allocated capacity" to closely track "required capacity," and the host will never have to stretch its volumes, because with thin provisioning the volume size doesn't change; only the amount of physical storage allocated to the volume changes.
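That allocate-on-write behavior can be illustrated with a minimal sketch. The `ThinVolume` class and 4MB extent size are my assumptions for illustration (real arrays vary in extent size and bookkeeping), but the principle is the one described above: the virtual size is fixed, and physical extents are claimed only when a region is first written:

```python
EXTENT_MB = 4  # small allocation unit, in the "few MBs" range noted earlier

class ThinVolume:
    """Illustrative sketch: the host sees a fixed-size LUN, while physical
    extents are allocated lazily on first write."""

    def __init__(self, virtual_size_mb):
        self.virtual_size_mb = virtual_size_mb  # never changes, host-visible
        self.extents = {}                       # virtual extent -> physical extent

    def write(self, offset_mb):
        if offset_mb >= self.virtual_size_mb:
            raise ValueError("write beyond volume size")
        idx = offset_mb // EXTENT_MB
        if idx not in self.extents:
            # First touch of this region: back it with a physical extent.
            self.extents[idx] = len(self.extents)

    def physical_mb(self):
        return len(self.extents) * EXTENT_MB

vol = ThinVolume(1024 * 1024)   # host sees a 1TB LUN from day one
vol.write(0)
vol.write(1)                    # same extent as the first write
vol.write(4096)                 # a distant region, one new extent
print(vol.physical_mb())        # 8 -- only two 4MB extents consumed
```

The host never sees a resize event; physical consumption simply grows in small steps as data lands.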
So the lesson here is simple: if you want a general, non-disruptive and efficient thin provisioning implementation, look to HDS, 3PAR, Equalogic, DataCore or Network Appliance. If you are happy with all the shortcomings of DCM, then EVA arrays are a solution, but don't be fooled: it's not thin provisioning!