
September 15, 2008

Comments

James

It looks much slower than the AnandTech report. They said random write is 11,171 IOPS. Could you share your IOmeter configuration, i.e., "Maximum Disk Size", "# of Outstanding I/Os", and "Run Time"?
Thank you in advance.

Gene

James, the random write value you mention is too high - it sounds like the random read value. SSDs have much worse random write than read performance because of the erase cycle.
I used IOmeter with 4 workers, 100 outstanding I/Os per target, and maximum disk size - I let the test file take up all 80 GB. The test ran for 60 seconds. I saw no correlation with run time.
By the way, the Intel X25-E claims >3,300 4K write IOPS. Impressive - that's more than 10x better than a 15K rpm hard disk.
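The ">10x" figure can be sanity-checked with a back-of-envelope calculation. The sketch below uses assumed typical 15K rpm drive figures (average seek time and rotational latency are illustrative, not measured):

```python
# Rough check of the ">10x" claim. Random IOPS for a hard disk is
# approximately 1 / (average seek time + half a rotation), since each
# random access pays one seek plus, on average, half a revolution.
avg_seek_s = 0.0035                  # ~3.5 ms average seek (assumed typical)
half_rotation_s = 0.5 * 60 / 15000   # half a revolution at 15,000 rpm = 2 ms
hdd_iops = 1 / (avg_seek_s + half_rotation_s)
ssd_iops = 3300                      # Intel's claimed 4K random write IOPS
print(f"15K HDD: ~{hdd_iops:.0f} IOPS; X25-E is ~{ssd_iops / hdd_iops:.0f}x faster")
```

With those assumptions a 15K rpm drive tops out around 180 random IOPS, so the claimed 3,300 write IOPS is closer to 18x.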
Gene

Scott C.

Random write IOPS of the Intel X25-M drive vary. Depending on how long you run the test, how large a file you write randomly to, what sort of work the drive was doing before the test, and how long you let it "rest" between runs, the results will change.

Yes, you can get ~12K random write IOPS on it (I've done it), but it typically won't last.
The worst-case scenario is the one above.

For sequential writes, you can also get it into a "slow mode" of about 30 MB/sec.

All this is because it has to "garbage collect" pages in the background, and if its Logical Block Address (LBA) to internal flash page mapping gets extremely fragmented with respect to "free" space, it has to do some compaction activities in the background that get in the way. Letting it "rest" for a while, or doing a large sequential write then re-write, tends to kick it back into high gear.
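The LBA-to-page remapping and background compaction described above can be illustrated with a toy page-mapped flash translation layer. This is a hypothetical simplification (real controllers are far more complex), but it shows why overwrites leave stale pages behind and why the drive must copy live data around before it can erase:

```python
# Toy page-mapped FTL: logical writes always land on a free flash page;
# overwriting an LBA just marks its old page stale. When free pages run
# low, garbage collection erases the block with the most stale pages,
# relocating any live pages first - that copying is the background work
# that "gets in the way" of foreground writes.
BLOCK_PAGES = 4          # pages per erase block (toy-sized)
NUM_BLOCKS = 8

flash = [[None] * BLOCK_PAGES for _ in range(NUM_BLOCKS)]   # stored LBA or None
stale = [[False] * BLOCK_PAGES for _ in range(NUM_BLOCKS)]
lba_map = {}             # LBA -> (block, page)
free_pages = [(b, p) for b in range(NUM_BLOCKS) for p in range(BLOCK_PAGES)]

def garbage_collect():
    global free_pages
    # Pick the erase block with the most stale pages.
    victim = max(range(NUM_BLOCKS), key=lambda b: sum(stale[b]))
    free_pages = [fp for fp in free_pages if fp[0] != victim]
    for p in range(BLOCK_PAGES):
        lba = flash[victim][p]
        if lba is not None and not stale[victim][p]:
            nb, np_ = free_pages.pop(0)      # relocate live page elsewhere
            flash[nb][np_] = lba
            lba_map[lba] = (nb, np_)
        flash[victim][p] = None
        stale[victim][p] = False
    free_pages += [(victim, p) for p in range(BLOCK_PAGES)]   # block erased

def write(lba):
    if len(free_pages) < BLOCK_PAGES:
        garbage_collect()
    if lba in lba_map:                        # overwrite: old copy goes stale
        ob, op = lba_map[lba]
        stale[ob][op] = True
    b, p = free_pages.pop(0)
    flash[b][p] = lba
    stale[b][p] = False
    lba_map[lba] = (b, p)

for lba in range(24):                         # fill most of the "drive"
    write(lba)
for _ in range(3):                            # re-writes force GC to kick in
    for lba in range(24):
        write(lba)
print(f"live LBAs: {len(lba_map)}, free pages: {len(free_pages)}")
```

The more fragmented the stale pages are across blocks, the more live data each collection has to copy - which is exactly why a fragmented mapping degrades write performance until the drive has had time to compact.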

Intel claims it has several different garbage collection strategies optimized for different workloads, and that it shifts from one to another based on the usage pattern, so running a series of synthetic tests often tricks it into one mode of operation that is suboptimal for the "next" test.

In the real world, writing tiny random bits across the whole disk just doesn't happen. The same LBAs are usually written over (re-writes) quite often, so pure random write tests are a bit artificial - but they're great stress tests for an SSD. You can weed the good ones out from the bad with such a test, but you can't get a measured value that represents what YOUR workload's performance will be; you can, however, find the worst case.

One test that almost all reviews fail to do, but that is critical for analyzing an SSD, is one with mixed reads and writes running simultaneously.
For example, take 4 or so workers doing random reads, then add 2 concurrent streaming writes and one random writer all at once ... the "bad" SSDs will utterly fail this test, with high 1-2 second latencies on the interleaved reads and very low write bandwidth.

The good ones can actually achieve higher throughput than the linear weighted combination of the individual tasks run alone.
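As a rough illustration of that mixed-workload test (not IOmeter itself, and run against a small scratch file rather than a raw device, so the absolute numbers are meaningless), the shape of it could be sketched like this:

```python
import os
import random
import tempfile
import threading
import time

# Hypothetical miniature of the mixed test: 4 random readers, 2 streaming
# writers, and 1 random writer all hitting the same file at once. A real
# run would target the raw device with far larger sizes and direct I/O.
FILE_SIZE = 8 * 1024 * 1024
BLOCK = 4096
OPS = 200

def random_reader(path, latencies):
    with open(path, "rb") as f:
        for _ in range(OPS):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            t0 = time.perf_counter()
            f.read(BLOCK)
            latencies.append(time.perf_counter() - t0)

def streaming_writer(path):
    with open(path, "r+b") as f:
        for _ in range(OPS):                 # sequential from offset 0
            f.write(os.urandom(BLOCK))

def random_writer(path):
    with open(path, "r+b") as f:
        for _ in range(OPS):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            f.write(os.urandom(BLOCK))

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(FILE_SIZE))
    path = tf.name

read_latencies = []                          # list.append is thread-safe in CPython
threads = ([threading.Thread(target=random_reader, args=(path, read_latencies))
            for _ in range(4)]
           + [threading.Thread(target=streaming_writer, args=(path,))
              for _ in range(2)]
           + [threading.Thread(target=random_writer, args=(path,))])
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(read_latencies)} reads, worst read latency: "
      f"{max(read_latencies) * 1000:.2f} ms")
os.unlink(path)
```

The number to watch is the tail of the read latencies while the writers are running: a drive that stalls reads behind its write/garbage-collection work will show exactly the multi-second spikes described above.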

Gene Ruth

Scott, excellent comments - thanks. The weird performance behavior of SSDs is why I've been calling (see my more recent post) for a standardized way to quote performance, among other things. Misunderstandings will only hurt the industry as it gains a foothold. Your comments are a good reminder that SSDs are NOT HDDs and require careful scrutiny.

The comments to this entry are closed.



