24 Hour DBT2 run on the Intel X-25M SSD

Following up on my previous post, Peter asked if he could see a 24-hour run on the Intel drive…  it took me a few days because I am doing some testing on a few other things, but I kicked a run off yesterday before leaving the Vegas airport…  Here ya go:

These are roughly 10-minute TPM averages.  As you can see, there is a definite decline in performance the longer the drive is active.   It's relatively small percentage-wise, only about 6% off of peak…  but the decline is easy to spot.
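(For the curious, the bucketing is simple: count transactions in 10-minute windows and divide by ten. A minimal sketch is below, assuming a per-transaction timestamp log; the file name and column layout are made up for the example, not the actual DBT2 output format.)

```python
# Rough sketch of computing 10-minute TPM averages from a per-transaction log.
# The file name and CSV layout are assumptions -- adapt the parsing to
# whatever your test driver actually writes out.
from collections import defaultdict
import csv

BUCKET_SECONDS = 600  # 10-minute buckets

def tpm_averages(path="transaction_log.csv"):
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            ts = float(row[0])          # assumed: epoch timestamp in column 0
            counts[int(ts // BUCKET_SECONDS)] += 1
    # transactions per 10-minute bucket / 10 = average TPM for that bucket
    return {b * BUCKET_SECONDS: n / 10.0 for b, n in sorted(counts.items())}

if __name__ == "__main__":
    for bucket_start, tpm in tpm_averages().items():
        print(bucket_start, round(tpm))
```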

What's this mean? If you beat the hell out of the drive over a long continuous period, things slow down the longer the drive stays in use.   Just an FYI, the drive was 78% full during these tests.


10 Responses to 24 Hour DBT2 run on the Intel X-25M SSD

  1. randybias says:

    Good info. It would be nice to see 24-hour runs on an SLC drive like the Mtron or MemoRight. My interest is more on the ZFS write cache side of things rather than DB, but the perf tests here are very helpful in getting a deeper sense of how these new SSDs perform under different workloads.

  2. burtonator says:

    Are you close to filling up the drive?

    This might be one of the reasons it’s slowing down.

    Also, does the drive have a SMART export of the erase cycle stats?

    You can use this to compute the failure rate.

    I’m still skeptical that these drives can handle random write tests as it might cause the drive to fail faster.
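    For reference, on Linux `smartctl -A` dumps the drive's SMART attribute table, and a throwaway script can pull out the wear-related rows burtonator is asking about. This is only a sketch -- which attributes the X25-M actually exposes, and what they are named, depends on the drive and firmware, and some consumer drives of this era don't report erase-cycle counts at all.

```python
# Rough sketch: filter wear-related attributes out of `smartctl -A` output.
# Attribute names and IDs vary by vendor and firmware, so treat the name
# hints below as examples, not guaranteed fields.
import re
import subprocess

WEAR_HINTS = ("Wear", "Erase", "Program_Fail", "Reallocated")

def wear_attributes(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        # SMART attribute rows start with a numeric attribute ID
        if re.match(r"\s*\d+\s+\S+", line) and any(h in line for h in WEAR_HINTS):
            fields = line.split()
            attrs[fields[1]] = fields[-1]   # attribute name -> raw value
    return attrs

if __name__ == "__main__":
    for name, raw in wear_attributes().items():
        print(name, raw)
```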

  3. Xaprb says:

    Arrrghhhh, registering to post a comment….

    Maybe I missed it in a previous post, but this post doesn’t justify the assertion that the drive is slowing down. It could be any of a number of other things. If you want to know if the drive is slowing down, benchmark the drive — not the database server software.

  4. matt says:

    As mentioned, the difference is only 6%, which could easily be attributed to other things. And yes, if you want to know the long-term effect of usage on the drive, testing just the drive alone would be best. Here I am testing the drive against MySQL to see what the effect of continued use is. For completeness' sake I should run the same test on a normal drive.

  5. Xaprb says:

    Tufte busts you for lying with graphs ;-) The way you scaled the graph makes it look like a lot more than 6%.

  6. matt says:

    Yeah, I did say it was small :)

  7. Robin YANG says:

    In your “Intel X-25M 80GB SSD DBT2/MySQL Benchmarks”, the Intel got 6558 TPM, but why is it only 3000+ in this test? Is this because you have done too many writes on it, and now the garbage collection procedure is triggered more frequently? I got 8000 TPM once, and after doing a lot of tests on the Intel SSD I can't get that high a TPM anymore……only 3000+.
    BTW, the settings are 100W, 30 connections, 15 Terminals/W, 1800s duration.

  8. matt says:

    This is on a different OS, Ubuntu desktop vs. CentOS 5… I can still get the higher TPM on CentOS…

  9. Robin YANG says:

    Oh, I see.
    This is so weird that I can’t get the same TPM as I got 3 days ago even with all the same settings.
    Thanks. I’ll keep on looking for the reasons……

  10. TS says:

    This is the reason why you don’t use the X25-M for databases.

    1. The X25-M uses 20 MLC chips for a total of 80GB. That means it doesn't have extra chips to guarantee copy-on-write. That means, as the drive fills up, the write IOPS drops linearly. As you can see, over 24 hours you are already down to 3000 TPM. When you hit 95% capacity, you will probably require an erase before a write can continue. Enterprise drives that sacrifice one channel for writing make sure the drive always has empty cells to write to. It means you should look for drives that carry extra flash chips, like the Samsung SLCs or the Intel X25-E. (A toy sketch of this point follows this comment.)

    2. Traditional RAID on flash is dangerous. RAID 1 or RAID 10 will write the same amount of data to the mirrored drives. Since the flash cells have the same number of erase cycles, both drives should technically fail around the same time, rendering the whole array dysfunctional. Just be careful.
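A toy illustration of TS's first point: once a drive runs out of pre-erased space, writes start paying the erase cost in the foreground and sustained throughput drops. Nothing here models the real X25-M controller or its garbage collection; the timings and the spare-space threshold are invented purely for the example.

```python
# Toy model only: how shrinking spare space can drag down sustained writes
# once erases can no longer be hidden in the background.  All numbers are
# illustrative, not measurements of the X25-M.
PROGRAM_US = 250      # assumed time to program a page (microseconds)
ERASE_US = 2000       # assumed time to erase a block (microseconds)

def relative_write_speed(spare_fraction):
    """Write throughput relative to a drive with plenty of pre-erased space."""
    # assume the share of writes that wait on a foreground erase grows as
    # spare space shrinks below ~10% (purely illustrative threshold)
    foreground_erase = max(0.0, 1.0 - spare_fraction / 0.10)
    avg_us = PROGRAM_US + foreground_erase * ERASE_US
    return PROGRAM_US / avg_us

for full in (0.50, 0.78, 0.95, 0.99):
    print(f"{full:.0%} full -> {relative_write_speed(1 - full):.2f}x write speed")
```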