Sun/Intel X25-E 4-Disk RAID 10 tests – part 2

So let's test some different configurations and try to build some best practices around multiple SSDs:

Which is better? RAID 5 or RAID 10?

As with regular disks, RAID 10 seems to perform better (except for pure reads). I did see a fair amount of movement from test to test, e.g. comparing the 67% read test vs. the 75% or 80% read tests, but all in all RAID 10 seemed to be the optimal config.
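For anyone who wants to recreate those read mixes, sysbench's fileio mode can approximate them. A minimal sketch, assuming sysbench 0.4-style flags and a placeholder file size:

```
# Prepare a 16GB set of test files on the array's mount point
sysbench --test=fileio --file-total-size=16G prepare

# Random read/write mix; --file-rw-ratio=2 issues roughly 2 reads
# per write, i.e. a ~67% read workload
sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrw \
         --file-rw-ratio=2 --max-time=300 --max-requests=0 run

# Remove the test files afterwards
sysbench --test=fileio --file-total-size=16G cleanup
```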

Should you enable the controller cache? One of the things I have found in my single-drive tests is that "dumb" controllers tend to give better performance numbers than "smart" controllers. Really expensive controllers tend to have extra logic to compensate for the limitations of traditional disks. So I decided to play with some of the controller options. The obvious one is the cache on the controller.

Some tests showed substantially better performance when the disk cache was disabled (both read & write).
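On this box the controller cache itself is toggled from the controller's own BIOS or CLI (arcconf on Adaptec hardware), so I won't reproduce that here. As a hedged sketch, if your drives instead hang off a plain HBA, the on-drive write cache can be checked and flipped with hdparm (the device name is just an example):

```
# Show the drive's current write-cache setting
hdparm -W /dev/sdb

# Turn the drive's write cache off (0 = off, 1 = on)
hdparm -W0 /dev/sdb
```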

If better controllers try to think too much for the SSDs' own good, what if we try software RAID?

We do seem to get an even bigger boost from running software RAID over the Adaptec controller in this machine. Keep in mind this may be very controller-specific; some controllers may do a better job with SSDs, and I think manufacturers will start to optimize their hardware for SSDs shortly.
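For reference, a minimal mdadm sketch of the 4-disk software RAID 10; the device names are examples, and the 64K chunk matches the stripe size I settle on below:

```
# Create a 4-disk RAID 10 array with a 64K chunk (stripe) size
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=64 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync progress
cat /proc/mdstat
```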

Let's switch gears and quickly look at DBT2 benchmarks using "our" best practices. There was one more thing I wanted to test, which was the IO scheduler (no idea why I did it with DBT2 over sysbench). As with other benchmarks running on SSDs, the noop scheduler appears to be best, basically telling the OS not to do any fancy IO scheduling.
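Switching schedulers is a one-liner per underlying block device (with software RAID you set it on each member disk, not the md device; sdb below is just an example), and it can be made permanent with the elevator boot parameter:

```
# Show the available schedulers; the current one is in brackets
cat /sys/block/sdb/queue/scheduler

# Switch the device to noop at runtime
echo noop > /sys/block/sdb/queue/scheduler

# Or set it globally at boot via the kernel command line:
#   elevator=noop
```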


Using software RAID, noop, and disabling all the controller cache leads us to a quick RAID comparison:


In a 100 warehouse (100W) test run with a smaller buffer pool (1.5GB), we end up seeing that our 4-disk RAID 10 without disk cache enabled does slightly outperform an 8-disk RAID 10 system, but I was hoping for a wider margin here. Even with the disk cache enabled we were not even 2x faster. It's fast, don't get me wrong, but in a pure OLTP workload I really was hoping for 15K+ TPM. One explanation could be that DBT2 is not a good measure of flash performance generally, as the test is not really IO-constrained per se. I may revisit Juice with this setup later on.

Speaking of IO-constrained workloads, most people would consider a system with 100% of its data in the buffer pool to be non-IO-constrained, but they are forgetting that there are other activities that will hit disk. I wanted to show that flash can help even in these environments.

I did test moving the logs and ibdata file to "regular" disk, but I did not see any great improvement in my tests. Yoshinori did, however, in some recent tests, so this may be something I need to retest, or it is simply better on some hardware than others.
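If you want to try the split yourself, it is just a matter of pointing InnoDB's log and data homes at the conventional disks in my.cnf; the paths below are made up:

```
[mysqld]
# The datadir (file-per-table .ibd files) stays on the SSD array
datadir                   = /ssd/mysql
innodb_file_per_table     = 1

# The sequentially written logs and shared ibdata file go to
# conventional disks behind a BBU write cache
innodb_log_group_home_dir = /disk/mysql-logs
innodb_data_home_dir      = /disk/mysql-data
innodb_data_file_path     = ibdata1:10M:autoextend
```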

If I were going to deploy these today, I would choose the following as best practices:

* software RAID
* the noop scheduler
* XFS (others have tested this; see the filesystem sketch after this list)
* a RAID 10 setup
* disable all the cache on the controllers for the SSDs
* a 64K stripe size
* possibly separate the log files & ibdata (when using file-per-table) onto BBU'd disks. See Yoshinori's blog on this.
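To round out the list, a sketch of creating and mounting XFS aligned to the array. The su/sw values assume the 4-disk RAID 10 above with a 64K chunk (two data disks wide), and the mount point is just an example:

```
# Align XFS to the array: 64K stripe unit, 2 data disks wide
mkfs.xfs -d su=64k,sw=2 /dev/md0

# Mount without atime updates
mount -o noatime /dev/md0 /var/lib/mysql
```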
