More Details on MySQL & SSD drive performance…

As mentioned earlier, I am testing out SSD disk performance on a 4-core machine with 6GB of memory. I spent last week comparing the drive to a standard 10K RPM SATA Raptor drive (EXT3 file system for now). As noted here and elsewhere, the performance of these drives really shines with a specific workload, but they are not for everyone. The random write performance of these drives leaves a great deal to be desired, while their read performance is outstanding:

Sysbench Random IO's

Above you can see that when we perform 10K random reads with 0 writes, we peak at about 5200 IOPS vs the 161 IOPS on a standard SATA drive. When we flip the IO to all writes, we end up getting around 100 IOPS out of the SSD drive. Not many sites are 100% reads, so some sort of mixed IO load is expected. Here you can see how the number of IOs per second varies under different workloads:

Different Workloads
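As a rough sanity check on the mixed-workload chart (this is a back-of-the-envelope model I am adding here, not something sysbench reports), you can estimate sustainable mixed IOPS as the weighted harmonic mean of the pure-read and pure-write rates, since each IO's average service time is the weighted average of the two:

```python
# Back-of-the-envelope model: a read costs 1/read_iops seconds and a write
# 1/write_iops, so a stream that is `read_frac` reads sustains the weighted
# harmonic mean of the two pure rates.
def mixed_iops(read_iops, write_iops, read_frac):
    return 1.0 / (read_frac / read_iops + (1.0 - read_frac) / write_iops)

# Pure-workload numbers measured above for the Mtron
print(round(mixed_iops(5200, 100, 0.80)))  # 80/20 mix: ~464 IOPS
print(round(mixed_iops(5200, 100, 0.90)))  # 90/10 mix: ~852 IOPS
```

At 80/20 the model predicts roughly 464 IOPS, about 3X the Raptor's 161, which lines up with the measured 3X gain below.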

I have 2 drives; I will rerun these in RAID 1 & RAID 0 to see how things change. But based on the above sysbench tests, in an 80/20 read/write environment the Mtron does show a 3X increase. What is a little odd is what happened when I moved to OLTP testing. I used 100 million rows with 32 threads. According to the raw output from the test there are about 70% reads in the OLTP test, see below:

Maximum number of requests for OLTP test is limited to 50000
Threads started!

OLTP test statistics:
    queries performed:
        read:                            700098
        write:                           250035
        other:                           100014
        total:                           1050147
    transactions:                        50007  (131.93 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 950133 (2506.74 per sec.)
    other operations:                    100014 (263.87 per sec.)

Test execution summary:
    total time:                          379.0310s
    total number of events:              50007
    total time taken by event execution: 12126.7578
    per-request statistics:
         min:                            0.0028s
         avg:                            0.2425s
         max:                            2.3525s
         approx.  95 percentile:         0.7041s

Threads fairness:
    events (avg/stddev):           1562.7188/19.88
    execution time (avg/stddev):   378.9612/0.02
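The "about 70% reads" figure and the reported rates can be recomputed directly from the output above (a quick check I am adding, not part of the original run):

```python
# Figures taken straight from the sysbench OLTP output above.
reads, writes = 700098, 250035
transactions = 50007
total_time = 379.0310

read_pct = reads / (reads + writes)        # share of read/write requests
print(f"{read_pct:.1%}")                   # ~73.7% reads
print(f"{transactions / total_time:.2f}")  # matches the 131.93 trans/sec reported
```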

The results I received show a much higher performance benefit than the raw sysbench IO tests suggested:


Run time dropped from 1979 seconds on a single Raptor drive to 379 seconds on the standalone Mtron drive, an improvement of over 5X. Based on the generic disk benchmarks I would have expected a smaller improvement. The 5X seems to hold up looking at the other numbers as well:
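The 5X claim is just the ratio of the two wall-clock times:

```python
# Wall-clock times for the same sysbench OLTP run, from above.
raptor_secs, mtron_secs = 1979, 379
print(f"{raptor_secs / mtron_secs:.2f}x")  # ~5.22x faster on the SSD
```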

sysbench ops per second

Sysbench Trans per second

On top of sysbench I also ran Orion, a disk benchmarking tool that Oracle released several years ago. It is freely available and does a great job of benchmarking disks, breaking down performance based on the number of threads hitting the disk and showing latency at each point. Let's take a look at the numbers.

We already established that reads were screaming, but let's check Orion just to help validate things:

Orion Reads

Same thing… nothing more to say about that. Now let's look at the 50% read/50% write environment:

Orion IOPS 50-50

The standalone Mtron drive does beat out the Raptor, but only by a small margin. Definitely not worth the price premium in this type of environment. But as we saw in sysbench, as the number of writes diminishes, the number of supportable IOPS will increase. So let's look at the 80/20 & 90/10 splits:

Orion IOPS 80-20

Orion IOPS 90-10

In a 90/10 type of environment the improvement is about 6.5X, while the 80/20 shows only 3X. When you're IO-bound and can only fit 4-6 disks internally, a few thousand dollars to see a 3-7X improvement may be very attractive to certain MySQL users. At that point your alternative is either a bigger box or an external SAN, both of which will dwarf the cost of the 2 or 4 SSD drives. Another time I will put a controller with cache in my server to see if I can boost these numbers further. Back to the numbers… Let's take a look at latency.

In a pure read environment:

Orion Read Latency

At 5 threads the latency on the Mtron drive is only 1ms vs 34.9ms on the Raptor. What is not shown is that at 10 threads the Mtron drive is at 2ms vs 63ms for the Raptor. In the 50-50 test the numbers are very similar:
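These latency numbers hang together with the raw IOPS figures via Little's law: with N outstanding requests and average latency W, sustained throughput is roughly N / W. (This is my own cross-check, not part of the Orion output.)

```python
# Little's law sanity check: throughput ~ outstanding requests / latency.
def iops_from_latency(threads, latency_ms):
    return threads / (latency_ms / 1000.0)

print(round(iops_from_latency(5, 1.0)))   # Mtron:  ~5000 IOPS
print(round(iops_from_latency(5, 34.9)))  # Raptor: ~143 IOPS
```

Those estimates land right next to the ~5200 and 161 read IOPS measured by sysbench earlier, which is reassuring.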

Orion 50-50 Latency

24ms on the Mtron or 28ms on the Raptor drive is an eternity. But as I mentioned, for a 50-50 environment a naked Mtron or two is probably not the best solution. So what about a 90/10? Glad you asked.

Orion Latency 90-10

For my final trick of the day… DBT2 test results. The tests were done with the InnoDB storage engine. I used the following settings:

mysql> show variables like 'innodb%';
| Variable_name | Value |
| innodb_additional_mem_pool_size | 20971520 |
| innodb_autoextend_increment | 8 |
| innodb_buffer_pool_awe_mem_mb | 0 |
| innodb_buffer_pool_size | 2894069760 |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:1500M:autoextend |
| innodb_data_home_dir | /mtron/ |
| innodb_doublewrite | OFF |
| innodb_fast_shutdown | 1 |
| innodb_file_io_threads | 4 |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | O_DIRECT |
| innodb_force_recovery | 0 |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_arch_dir | /data01/ |
| innodb_log_archive | OFF |
| innodb_log_buffer_size | 16777216 |
| innodb_log_file_size | 681574400 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | /data01/ |
| innodb_max_dirty_pages_pct | 90 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_open_files | 300 |
| innodb_rollback_on_timeout | OFF |
| innodb_support_xa | OFF |
| innodb_sync_spin_loops | 20 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 1000 |
| innodb_thread_sleep_delay | 10000 |

dbt2 results

The first 3 tests can hardly be counted: the size of a 20-warehouse database is smaller than the InnoDB buffer pool, so effectively all this is testing is CPU and memory access. The last two results are a little more telling and accurate. Adding more warehouses and more active threads hits both disks hard, but the SSD drive is able to maintain a much higher transaction count than its platter-based brother. This all makes perfect sense: the more active threads requesting data over a wider range of data, the more a traditional disk arm has to move, and the more the disk arm moves, the slower the access time.
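The buffer-pool point is easy to see with some rough arithmetic. A TPC-C-style warehouse is commonly estimated at on the order of 100MB of data (a rule of thumb, not a measured figure from these runs):

```python
# Why the small DBT2 runs mostly exercise memory: 20 warehouses at a
# rule-of-thumb ~100 MB each fit entirely inside the configured buffer pool.
buffer_pool_bytes = 2894069760       # innodb_buffer_pool_size from above
warehouse_bytes = 100 * 1024 ** 2    # ~100 MB per warehouse (assumed)

print(f"{buffer_pool_bytes / 1024**3:.1f} GiB buffer pool")
print(20 * warehouse_bytes < buffer_pool_bytes)  # True: fully cached
```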

More to come later.

This entry was posted in hardware, linux, mysql, performance.