I am doing the final prep work for my upcoming UC presentation on SSDs, and I thought I would throw this out there. Recently there has been a great deal of discussion about the write cache on the Intel X25-E and whether you need to disable it to prevent data loss on a power outage. Most disk caches are not protected by a battery backup and are disabled by default on most high-end controllers; who wants to potentially lose 16-64MB of data on an outage? So it seems like it would make sense to disable the cache on the Intel drives as well. But there is a problem. Vadim over at the MySQL Performance Blog recently published benchmarks that show rather slow results when the disk cache is disabled, and I have noticed a significant slowdown in these cases as well. So this leads to the question: if you need to disable the disk cache on most drives to make them resilient and truly enterprise-ready, why would you produce an enterprise-class drive where disabling that feature gimps the performance to a level that makes the price per IO/GB unattractive?
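For anyone who wants to try this on their own hardware, here is a minimal sketch of toggling the on-drive write cache with hdparm on Linux. The device name `/dev/sdX` is a placeholder, not from any specific test setup, so substitute your actual SSD; the commands need root.

```shell
# Sketch: checking and toggling a drive's volatile write cache via hdparm.
# /dev/sdX is a hypothetical placeholder device -- replace with your SSD.
DEV=${DEV:-/dev/sdX}

if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    hdparm -W "$DEV"     # show the current write-cache state
    hdparm -W0 "$DEV"    # disable the on-drive write cache
    hdparm -W1 "$DEV"    # re-enable it
else
    echo "hdparm not available or $DEV is not a block device; skipping"
fi
```

Note that some RAID controllers ignore or override this setting, so it is worth re-checking the state with `hdparm -W` after a reboot.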
So I went in search of any details I could find on the web about the cache on the Intel SSDs. Let me tell you, there is not a lot out there. But I did find this interesting note over at AnandTech:
“That being said, the root of the problem is still unknown. My first thought was that it was because the MLC drives had no DRAM buffer, and if you’ll notice, Intel’s MLC drive does have a DRAM buffer. I asked Intel about this and it turns out that the DRAM on the Intel drive isn’t used for user data because of the risk of data loss, instead it is used as memory by the Intel SATA/flash controller for deciding exactly where to write data (I’m assuming for the wear leveling/reliability algorithms). Despite the presence of the external DRAM, both the Intel controller and the JMicron rely on internal buffers to cache accesses to the SSD.”
Well, that makes it sound like the cache should not be used for data storage, but rather for figuring out where to write. This, however, seems contradictory to Vadim's test, where he showed completed writes missing after an outage. We may need Intel to weigh in on this… because without the cache enabled the drives are still fast, but not nearly as groundbreaking as they originally appeared to be. In tests with other vendors' drives (some without cache), I have found that SSDs make the most sense for workloads that are heavier on reads than writes (80%-100% reads). In these predominantly read-heavy workloads you get a significant boost in performance, sometimes upwards of 50x (all reads), but more commonly somewhere in the 3-4x range… what makes the Intel drives so impressive is that they showed performance boosts of up to 10-20x on mixed workloads. As I said: fast, but not groundbreaking.
Another, secondary concern with the Intel drive is that disabling the write cache may screw up the wear leveling that is imperative for the continued reliability of these drives… but this still needs to be verified.