XFS & CentOS 5 & MySQL Performance

OK, sometimes you stumble onto things that are just not right. On my own personal set of hardware (independent of the server with the IBM RAID card) I am still running tests with the Mtron flash drives. Late this week I noticed a huge performance regression compared to earlier in the week; in fact it was truly head-scratching. DBT2 results that had been in the 26K TPM range suddenly dropped to 4K TPM.

I hate to admit a big screw-up, but my tests toward the end of the week were tainted. While investigating the problem with the IBM RAID card and XFS, I took one of the Mtron drives and rebuilt it with XFS. The benchmarks (sysbench) showed that on my hardware there was little performance difference between the two filesystems. When I resumed my Mtron DBT2 benchmarks, the drive that housed the log files was this XFS drive.
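For reference, the rebuild-and-compare step looked roughly like this. This is a hedged sketch only: the device name, mount point, and sysbench fileio options here are illustrative placeholders, not the exact commands from my run.

```shell
# Illustrative only: rebuild the spare Mtron drive with XFS and run a
# quick sysbench fileio sanity check on it. /dev/sde and /mnt/mtron are
# placeholders -- double-check the device before running mkfs!
mkfs.xfs -f /dev/sde
mkdir -p /mnt/mtron
mount /dev/sde /mnt/mtron
cd /mnt/mtron

# Classic sysbench 0.4-era fileio syntax (the version shipped around
# the CentOS 5 timeframe): prepare test files, run random read/write,
# then clean up.
sysbench --test=fileio --file-total-size=4G prepare
sysbench --test=fileio --file-total-size=4G \
         --file-test-mode=rndrw --num-threads=16 run
sysbench --test=fileio --file-total-size=4G cleanup
```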

Trying to figure out what was going on, I noticed this (sde is the log drive, sdd is the data drive):

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sdd 16.02 79.76 728.09 79.98 29500.03 20417.09 61.77 4.67 5.78 0.45 35.98
sde 0.00 0.00 0.00 42.85 0.00 1115.12 26.03 0.99 23.11 23.07 98.86

42 writes per second is really low, especially with the drive pegged at nearly 99% utilization.
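If you want to eyeball this programmatically, the iostat -x lines above are easy to parse. Here's a small hypothetical helper (the function name and structure are mine, not part of any tool) that maps each device line to its columns and flags the busiest device:

```python
# Hypothetical helper: parse the iostat -x lines quoted above and flag
# the busiest device by %util. Column positions follow the header line.
def parse_iostat(lines):
    header = lines[0].split()[1:]  # drop the leading "Device:" token
    devices = {}
    for line in lines[1:]:
        parts = line.split()
        devices[parts[0]] = dict(zip(header, map(float, parts[1:])))
    return devices

xfs_run = [
    "Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util",
    "sdd 16.02 79.76 728.09 79.98 29500.03 20417.09 61.77 4.67 5.78 0.45 35.98",
    "sde 0.00 0.00 0.00 42.85 0.00 1115.12 26.03 0.99 23.11 23.07 98.86",
]

stats = parse_iostat(xfs_run)
bottleneck = max(stats, key=lambda d: stats[d]["%util"])
# sde comes out on top: ~99% busy while completing only ~43 writes/s
print(bottleneck, stats[bottleneck]["w/s"], stats[bottleneck]["%util"])
```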

Looking at the iostat -x output from my previous tests, which were run on ext3:

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sdd 22.94 153.62 1895.74 153.96 63260.72 39412.58 50.09 5.30 2.59 0.38 77.82
sde 0.00 307.59 0.00 332.15 0.00 5117.92 15.41 0.07 0.21 0.21 7.00

Wow, the difference is night and day. The bottleneck switches from sde, the log drive, to sdd, the data drive. I redid the tests: ext3 screamed and XFS choked again.

For the sake of documentation:

CentOS 5 (kernel 2.6.18-53.1.6.el5)

The DBT2 test in question used 200 installed warehouses, 100 active warehouses, 16 connections, and 15 terminals per warehouse. I am setting up another test run and will post the results here.
