Intel X-25M 80GB SSD DBT2/MySQL Benchmarks

As promised, here are the DBT2 results for the Intel SSD drive:

Configuration              DBT2 result (TPM)
Raid 5, 8 disks            4579
Raid 10, 8 disks           6139
10K Raptor, 1 disk          625
Matt’s Mtron, 1 disk       4900
Matt’s Memoright, 1 disk   4156
Intel X-25M, 1 disk        6558

As you can see, the Intel drive blew away all the competition here… even besting another dbt2 score I got from a nice new shiny 8-disk raid 10 system.


16 Responses to Intel X-25M 80GB SSD DBT2/MySQL Benchmarks

  1. Peter says:

    Matt,

    One test you may consider with SSDs is long-term performance. SSDs can write very quickly while they are clean, but once flash erases and data merging start to happen you can expect performance to drop. How much – that is what I would really like to know.

    I.e., a graph of TPM from DBT2 over 24 hours or so would be a nice indication of long-term performance.

  2. matt says:

    Peter,

    Yep, that would be a good test to run. Some details on the above test though: I generally run back-to-back tests overnight. These are 14 consecutive 1800-second dbt2 runs; after each one it stops, reloads the data, then starts back up and reruns the test, so the drive stays relatively busy. It would be easy enough to keep a sustained load going for an entire day, though.

    Here is what I think I’ll end up running:

    #1 – 24 consecutive runs of dbt2, each running for 3600 seconds, with no reload after shutdown. This should give an interesting TPM graph.
    #2 – 1 long 24 hour dbt2 run, 43200 seconds, using iostat -x 1800 to try and graph 30-minute w/s + r/s (a rough sketch of both is below).

    Another thing: my standard test is about 11GB of data (100 warehouses), so I think I need to up this as well to come closer to the maximum drive size. I am curious whether Intel may be doing some free-space magic with writes (like putting them into free space at the end of the drive and processing deletes later, i.e. the MFT stuff I tested earlier).

    #3 – I think I need to fill the space and retest, or up the # of warehouses…
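
    Something like this sketch is what I have in mind for #1 and #2 – run_workload.sh is the DBT2 driver script (your kit’s script name/path may differ), the flags are my usual ones, and the log file names are made up:

    # 24 back-to-back 3600-second dbt2 runs with no data reload in between,
    # so any long-term flash erase / merge slowdown has a chance to show up
    for i in $(seq 1 24); do
        ./run_workload.sh -s100 -c 16 -w 100 -t 15 -d 3600 -n > dbt2_run_$i.log 2>&1
    done

    # and for #2, leave iostat sampling in 30-minute buckets alongside the long run
    iostat -x 1800 > iostat_30min.log 2>&1 &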

  3. Peter says:

    Right. BTW, you do not have to do many DBT2 runs. There is a tool shipped with DBT2 which lets you simply get a per-minute graph of how the TPM floats. It is very interesting anyway – for example, how large is the checkpoint dip when you use an SSD?

  4. matt says:

    Do you happen to know the name of the tool?

  5. matt says:

    Ahhh, are you talking about the notpm.data file that is produced by mix_analyzer.pl?
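
    If so, graphing it should be a one-liner. Just a sketch – I’m assuming notpm.data ends up with the elapsed interval in column 1 and the NOTPM value in column 2 (worth double-checking against the mix_analyzer.pl output):

    # plot NOTPM over time from the DBT2 per-interval data file
    gnuplot -e "set terminal png; set output 'notpm.png'; set xlabel 'elapsed minutes'; set ylabel 'NOTPM'; plot 'notpm.data' using 1:2 with lines title 'NOTPM'"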

  6. matt says:

    I will let this run sometime over the next week; I am in the middle of testing something else… but I have a 200-minute test, which is fairly consistent:


    8 301 3541 313 3378 221
    9 301 3678 339 3405 219
    10 326 3222 314 3153 193
    11 309 3211 301 3198 203
    12 300 3129 284 3046 176
    13 308 3300 325 3295 189
    14 333 3748 342 3626 231
    15 320 3519 300 3519 217
    16 309 3752 343 3464 216
    17 313 3528 331 3511 224
    18 303 3502 294 3379 211
    19 323 3504 298 3273 192
    20 303 3202 280 3187 187
    21 259 3264 293 3082 195
    22 245 2811 265 2698 169
    23 295 3466 350 3352 192
    24 297 3370 311 3339 199
    25 331 3739 340 3584 225
    26 323 3594 322 3337 198
    27 312 3438 308 3336 239
    28 296 3474 276 3225 198
    29 301 3647 314 3600 220
    30 311 3284 290 3154 202
    31 288 3228 278 3183 172
    32 243 2934 272 2861 176
    33 324 3495 325 3400 178
    34 302 3527 320 3286 201

    179 310 3315 300 3310 236
    180 316 3331 294 3356 206
    181 327 3561 316 3360 199
    182 322 3604 320 3416 217
    183 314 3415 301 3394 219
    184 331 3612 328 3492 243
    185 325 3410 308 3362 187
    186 317 3621 362 3502 230
    187 307 3539 310 3409 207
    188 338 3425 282 3277 218
    189 303 3664 328 3472 207
    190 328 3663 332 3460 208
    191 346 3791 345 3614 204
    192 328 3545 347 3492 223
    193 272 3448 300 3401 245
    194 361 3783 351 3695 219
    195 333 3682 335 3519 231
    196 333 3682 331 3426 222
    197 334 3548 348 3478 196
    198 305 3524 363 3305 188

  7. Robin YANG says:

    hi matt,

    I’m very interested in how you got that result for the Intel SSD.
    Could you give me the following specific parameters?
    Warehouse, Connection, Terminal per warehouse, Duration, Keying and Thinking time?
    Thanks a lot!!

    Robin

  8. Robin YANG says:

    Forgot to say that I’m also interested in the settings you used, e.g. page size, buffer size, etc.
    Thanks a lot!

    Robin

  9. Robin says:

    Hi matt,

    I’m very interested in the detailed settings you had for the Intel SSD. For example, the warehouse number, the connection number, the terminals per warehouse, the keying and thinking time, the file system, and the buffer size?
    I have also done some dbt2 experiments on an Intel SSD, but can’t get that high a TPM…
    Thanks.

    Robin

  10. matt says:

    2.75G buffer pool, 100 warehouses, InnoDB

    Started with:
    -s100 -c 16 -w 100 -t 15 -d 1800 -n

    CentOS 5 with no X interface active (Ubuntu desktop is about 1/2 as fast)

    innodb_data_file_path = ibdata1:15000M:autoextend
    innodb_buffer_pool_size = 2760M
    innodb_additional_mem_pool_size = 20M
    innodb_log_file_size = 650M
    innodb_log_buffer_size = 16M
    innodb_lock_wait_timeout = 50
    innodb_flush_log_at_trx_commit = 0
    innodb_log_files_in_group = 2
    innodb_support_xa = 0
    innodb_doublewrite = 0
    innodb_thread_concurrency = 1000

  11. Robin says:

    I don’t know if it’s because PostgreSQL is not that fast. I can’t get that result with similar settings; I only got around 3300 for the Intel X-25M 80G. I also used CentOS 5 and the ext3 file system. Have you made any changes to the OS?

  12. Robin YANG says:

    I also wonder about the Mtron’s result. Did you use the MFT driver for the Mtron in the test? I can only get a throughput that is slightly higher than a disk’s for the Mtron.
    But in my understanding, in your experiment the random write speed for the Mtron is half that of the disk (100 vs. 200) and the random read speed for the Mtron is 50 times that of the disk, which means the write/read ratio would have to be about 0.05 : 1 to get that 8x throughput for the Mtron over the disk. How can that be?
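
    (Spelling that arithmetic out, under the assumption that combined throughput behaves like a weighted harmonic mean of the read and write rates, with the disk normalized to 1 for both: if the Mtron does writes at 0.5x and reads at 50x, an 8x overall speedup needs a write fraction f with 1 / (f/0.5 + (1-f)/50) = 8, i.e. 2f + 0.02(1-f) = 0.125, so f ≈ 0.053 – roughly that 0.05 : 1 write/read mix.)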

  13. matt says:

    No changes to the OS. I did turn swappiness off (example below), but that should have little effect. No, this was not the MFT-enabled disk. With the 100 warehouse test I was getting anywhere from 4800-5100 TPM with the same settings I posted above. With MFT enabled I ended up getting 9500-10000 TPM.
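
    For reference, “turning swappiness off” just means something along these lines (a sketch; set it however you normally manage sysctls):

    # tell the kernel to avoid swapping application pages where possible
    sysctl -w vm.swappiness=0
    # or equivalently:
    echo 0 > /proc/sys/vm/swappiness
    # add "vm.swappiness = 0" to /etc/sysctl.conf if you want it to survive a reboot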

    If I understand the question, you’re wondering how the dbt2 results at http://www.bigdbahead.com/?p=37 showed 869 for the 10K disk while showing 4900 for the Mtron? Do you think the # for the disk should be higher, or that the Mtron # is too high? DBT2 is heavily read-centric, so the read speed should really help. Also, as the workload gets more mixed, I don’t think the drive performance is going to stay on a straight linear decrease/increase.

    Also, my numbers may vary from yours depending on your hardware. What sort of controller are you using? Early on I found I got better performance from dumb controllers; the smarter controllers try to optimize for slower disks… which sometimes hurts SSDs.

  14. Robin YANG says:

    I think the # for the Mtron is too high compared with my result. Did you try changing the number of terminals per warehouse to get the saturated throughput? What’s the -t # you used for the Mtron with 100 warehouses?

    I don’t think it’s the controller’s problem, because I got 7800+ for the Intel SSD with 100 warehouses, 14 terminals per warehouse, and 30 connections. That is reasonable compared with your result.

  15. matt says:

    Just checked some of my old tests… the noop scheduler worked best for the Mtron; also mount the drive with noatime (example below).
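
    Roughly, that setup is just the following (a sketch – sdb and the mount point are example names, adjust for your box):

    # put the SSD on the noop I/O elevator instead of the default cfq
    echo noop > /sys/block/sdb/queue/scheduler
    # remount the data partition without access-time updates
    mount -o remount,noatime /dev/sdb1 /mnt/data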

    I do not have the Mtron drive to retest, only the Memoright and the Intel drive.

    Additionally, I saw a small bump in performance (not enough to be statistically significant) when I had the logs separated onto a different drive. The numbers posted should be the Mtron with the data and logs both on the same drive.

  16. Pingback: Big DBA Head! - Database Brain Power! » Waffle Grid: Remote Buffer Cache -VS- SSD Grudge Match