What’s the Performance impact of the Double Write Buffer?

I have been benchmarking Waffle Grid using the new InnoDB Plugin 1.0.3 for the past couple of days. Let me say, the plugin is fast. Which got me thinking: generally, when you fix one bottleneck, another area becomes the bottleneck… it's a vicious cycle, really. I figured, why not benchmark several different settings just to see what sort of improvement or detriment we get in InnoDB? This hopefully will lead to the next place to look for potential performance improvements. For the tests I chose a somewhat IO-bound setup and a CPU-bound setup.

The IO-bound setup was a 20W test with a 768M buffer pool.
The CPU-bound setup was a 20W test with a 5GB buffer pool.
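For reference, the relevant settings would look roughly like this in my.cnf (my reconstruction — the post does not show its actual configuration file):

```ini
# IO-bound run:
innodb_buffer_pool_size = 768M

# CPU-bound run:
innodb_buffer_pool_size = 5G

# Toggled between runs; 0 disables the doublewrite buffer:
innodb_doublewrite = 0
```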

I decided to start with the double write buffer. For those who are not familiar with it, check out the docs, or the post Peter Z did on it a few years ago.
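For readers who just want the gist: a page write interrupted partway through leaves a "torn" page on disk, and the doublewrite buffer exists so InnoDB can restore such a page from an intact copy on crash recovery. A minimal sketch of the idea in Python (all names here are illustrative, not InnoDB's actual code):

```python
import zlib

def write_page(data_file, dw_area, page_no, page):
    """Doublewrite-protected page write: the page goes to the doublewrite
    area first, then to its real location in the data file."""
    dw_area[page_no] = page     # step 1: write into the doublewrite area (in ibdata1)
    data_file[page_no] = page   # step 2: write to the page's home location

def recover_page(data_file, dw_area, page_no, expected_crc):
    """Crash recovery: if the data-file copy is torn (checksum mismatch),
    restore it from the intact doublewrite copy."""
    if zlib.crc32(data_file.get(page_no, b"")) != expected_crc:
        data_file[page_no] = dw_area[page_no]
    return data_file[page_no]

# Simulate a crash between step 1 and step 2: the doublewrite copy landed,
# but the home location got only half the 16KB page.
data_file, dw_area = {}, {}
page = b"x" * 16384
dw_area[7] = page
data_file[7] = page[:8192]

restored = recover_page(data_file, dw_area, 7, zlib.crc32(page))
assert restored == page   # the torn page was repaired from the doublewrite copy
```

The cost of this safety is the subject of the benchmark: every page is written twice.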

So the IO-bound test actually showed a larger impact than I was expecting:

30-35% overhead is actually huge… so this may be an area worth looking at to boost performance for IO-bound workloads.

On the CPU-bound side of things, I noticed something completely strange, and I had to rerun the tests several times just to make sure I had it right and had not reversed my findings:


Yep, those are correct. In all my runs, I was seeing up to a 7% increase in performance with the double write buffer enabled, which seems to counter logic: more writes should be slower. (The three tests above represent the high, low, and average of 7 test runs.) I could understand a mixed set of results, where 3 tests showed it slower and 4 showed it faster… that could be attributed to tiny changes in the randomness of the tests… but in this case every test with the double write buffer was faster. I am at a loss for why, but it has me curious: what could cause such strange behavior? I am not 100% sure, but I will add it to the ever-growing list of things I want to dig into in the future.

This entry was posted in benchmark, Matt, mysql, performance. Bookmark the permalink.

6 Responses to What’s the Performance impact of the Double Write Buffer?

  1. Shirish Jamthe says:

    Hi Matt,

    Very interesting test. The counter-intuitive results puzzled me, so I took a look at the buf flush & os file code.

    Here is my guess at what is happening when you increase the buffer pool to 5G.
    The double write buffer actually pools up to 128 blocks together, then does two writes to the ibdata1 file followed by several writes (one per block), but the sync is called only after all the blocks are written.
    When you have a large buffer pool, it is possible that the blocks selected for flushing are consecutive (the buf_flush_try_neighbor code tries to do this).

    When you don’t use the double write buffer, every block is written out as it is found (but with async IO). It is possible this leads to more random IOs and more frequent syncs.
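    Shirish's hypothesis can be made concrete with a toy model (my sketch, not InnoDB code): count the writes and syncs each strategy issues for the same set of dirty pages. The doublewrite path issues slightly more writes, but far fewer syncs, and its big writes are sequential.

    ```python
    BATCH = 128   # the doublewrite buffer pools up to 128 pages per flush

    def doublewrite_flush(n_dirty):
        """Per batch: two large sequential writes to the doublewrite area in
        ibdata1, one sync, then one write per page to its home location,
        then a single sync for the whole batch."""
        writes = syncs = 0
        for start in range(0, n_dirty, BATCH):
            batch = min(BATCH, n_dirty - start)
            writes += 2        # two ~1MB sequential writes into ibdata1
            syncs += 1         # sync the doublewrite area
            writes += batch    # scatter each page to its tablespace offset
            syncs += 1         # one sync after all pages in the batch
        return writes, syncs

    def direct_flush(n_dirty, sync_every=1):
        """Without doublewrite, each page is written as it is found; syncs
        are more frequent, modelled here as one per `sync_every` pages."""
        writes = n_dirty
        syncs = (n_dirty + sync_every - 1) // sync_every
        return writes, syncs

    print(doublewrite_flush(256))   # (260, 4)   -- a few extra writes, very few syncs
    print(direct_flush(256))        # (256, 256) -- fewer writes, many more syncs
    ```

    If sync frequency (and write randomness) dominates on this hardware, the doublewrite path winning by a few percent is no longer so counter-intuitive.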

    Did you collect write IOPS during this experiment? If you see more IOPS without the double write buffer, then my logic has merit.

  2. Quite interesting findings.

    What innodb_flush_method did you run? What hardware?

    With the default flush method, it will be a question of 2 fsyncs for large blocks vs. a batch of small fsyncs.

    Also, 30% is a lot of overhead. I’ve typically seen smaller, though it can be very hardware dependent.

  3. matt says:

    O_DIRECT. Hardware was a quad core 2.4GHz, 8GB of memory, 2 10K disks striped.

    Do you have a BBU? Which filesystem?

  5. matt says:

    Ext3… no cache on this controller, it is running au naturel. I am planning to rerun this on a 4450 I have access to as well… will let you know the results. I was also going to try XFS and run my battery of tests between the two as time permits.

  6. matt says:

    Oh yeah… I’d be interested in your results on different hardware as well.