SSD and MySQL Tests… The logic behind the tests & wondering did I break it?

Over the last several weeks I have been testing the two Mtron drives that EasyCo (Easy Computing Company) provided me. In fact, I have been testing like a madman. Nightly I would kick off a batch of DBT2 tests, sysbench tests, bonnie++, and more. Each test changed something about the environment I was testing: RAID 1, RAID 0, with EasyCo's MFT, without MFT, with the logs on the SSD, with the logs on a Raptor drive, etc. Each test run, when it completed successfully, would take roughly 12 hours. A typical example of a portion of one of these runs is:

shutdown mysql
remove the data and log files
wait 5 minutes
startup mysql
wait 5 minutes
load a 200 warehouse DBT2 test
wait 5 minutes after the load is complete
run DBT2 test

Rinse and repeat (a rough sketch of one of these iterations as a script is below).
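This is just a placeholder-filled sketch of that iteration, not my actual script; the paths, the init script, and the DBT2 load/run commands all stand in for whatever your environment uses:

    #!/bin/sh
    # One iteration of the nightly run. Every path and command here is a
    # placeholder standing in for my real environment, not the actual script.
    DATADIR=/ssd/mysql/data       # data and log files living on the Mtron (or wherever this test puts them)
    DBT2_LOAD="..."               # whatever loads the 200 warehouse DBT2 database in your setup
    DBT2_RUN="..."                # whatever kicks off the DBT2 run itself

    /etc/init.d/mysql stop        # shutdown mysql
    rm -rf $DATADIR/*             # remove the data and log files
    sleep 300                     # wait 5 minutes
    /etc/init.d/mysql start       # startup mysql; with the old files gone, InnoDB recreates its datafiles
    sleep 300                     # wait 5 minutes

    $DBT2_LOAD                    # load a 200 warehouse DBT2 database
    sleep 300                     # wait 5 minutes after the load is complete
    $DBT2_RUN                     # run the DBT2 test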

At a minimum I have been doing a 40 active warehouse DBT2 test and a 60 active warehouse DBT2 test for each change I make. I was trying to do each test (whether it was XFS vs. ext3 or a RAID 1 comparison) at least 2 times (I aimed for 3, but did not always follow through). Running these multiple times helps validate the findings. So far I have gathered a ton of data.
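The outer loop is nothing fancy; in spirit it just repeats that iteration per pass and per active warehouse count. Here run_one_test is a made-up stand-in for the iteration sketched above, with the DBT2 run driven at the given number of active warehouses:

    # Outer loop over the repetitions and active warehouse counts.
    # run_one_test is hypothetical; it stands for the iteration sketched
    # earlier, parameterized by the number of active warehouses.
    for pass in 1 2 3; do            # aim for 3 passes, settle for 2
        for warehouses in 40 60; do
            run_one_test $warehouses
        done
    done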

These tests were marching along wonderfully until Thursday night. Thursday night (about 3 weeks into my tests) my system raised a kernel panic. Not good; only about half of my test had finished. So I rebooted, fsck'd the disks, and started over. Several hours later… kernel panic. Same place as before. At the time I was running tests on a naked Mtron vs. EasyCo's MFT technology, and I had run similar tests for the previous two days with awesome results. I stepped through my test script, which showed that the kernel panic happens shortly after MySQL starts up. Looking at the console, the only meaningful thing in the panic message is something about device sdd (hey, that is the primary Mtron drive I am using!). After a reboot I checked the MySQL error logs: MySQL was writing out the initial InnoDB datafile, and all of the kernel panics appear to have occurred after it had gotten to about 2GB of the 15GB that datafile will eventually take.
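For context on that 15GB figure: a fixed-size datafile like that comes from the innodb_data_file_path setting. My exact my.cnf is not shown here, but settings roughly along these lines (the paths and size are illustrative only) make a freshly started MySQL allocate the whole file up front, which is exactly the write that keeps dying:

    # my.cnf fragment -- illustrative values, not my actual config
    [mysqld]
    datadir               = /ssd/mysql/data
    innodb_data_home_dir  = /ssd/mysql/data
    innodb_data_file_path = ibdata1:15000M    # one fixed ~15GB datafile, created on first startup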

I tried the other Mtron drive (one that has been used a lot less than the primary), and the build worked perfectly fine (the test ran and no kernel panic occurred). So it does not appear to be controller or OS related (or so I think). I then tried simply copying a file over to the Mtron drive that was giving me the issues. During the file copy, there it is… a kernel panic! I am right now in the middle of running some diagnostics on the drive, but I have to wonder: is this a manufacturing issue, or did I just do too many write cycles? I will post more later.
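The diagnostics are nothing exotic; roughly this kind of thing, assuming smartmontools and badblocks are on the box (the mount point and file size are placeholders, and /dev/sdd is the suspect Mtron):

    # Reproduce the failure with a plain write, then look at the drive itself.
    dd if=/dev/zero of=/mnt/mtron/testfile bs=1M count=4096   # big dumb write, same idea as the file copy
    # (if that panics, the useful detail is whatever the console / dmesg shows after the next boot)
    smartctl -a /dev/sdd       # dump whatever SMART attributes the drive exposes
    badblocks -sv /dev/sdd     # read-only surface scan of the whole device; slow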

This entry was posted in hardware, linux, mysql, performance, raid.

4 Responses to SSD and MySQL Tests… The logic behind the tests & wondering did I break it?

  1. burtonator says:

    Hey. Could you post a quick benchmark with sysbench and rndrw on a file that’s about 2x memory with the MFT enabled?

    I’m curious if you’re getting performance anywhere near their claims.

    Kevin

  2. burtonator says:

    OK… more on your tests.

    Did you try removing EasyCo’s driver? I wonder if that’s causing your kernel panic?

    I wouldn’t think you would have a kernel panic from a bad SSD. I think you’d get a write error or corrupt data.

    Kevin

  3. matt says:

    I will post them. I tried with and without the MFT stuff. Same thing, it panicked. Going to try a few more things.

  4. burtonator says:

    Yeah… PLEASE do. I'm very curious about their MFT work. In general, log-structured filesystems would work really well on SSDs.

    I think I'm going to write a trivial DB that runs as an LSFS to see if I can get the necessary random write performance.

    I think that systems like PBXT would fly on SSD.

    Of course the next generation flash systems all seem to use LSFSs internally.