Wafflegrid: DBT2, Dolphin and Innodb Readahead

Ok, I am perplexed… I don’t say that often.   I have the privilege of testing out a couple of Dolphin interconnects with Waffle Grid.   They are proving to substantially improve our transaction throughput, I mean we are getting 3x the performance over 1GbE…. but what is perplexing is that each run going over the faster interconnect results in 1/3 of the memcached sets/gets that occur when testing over 1GbE!  Same datasets, same tests, repeatable results.  See here:

1GbE:
cmd_get: 771811
cmd_set: 784119

Dolphin:
cmd_get: 239423
cmd_set: 271259
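For anyone wanting to reproduce the comparison, those counters come straight out of memcached's `stats` command (text protocol, lines of `STAT <name> <value>` terminated by `END`). A minimal sketch of diffing them between runs, using the numbers from this post:

```python
# Parse memcached "stats" output and compare counters between two runs.
# The sample values are the ones from this post; the line format
# ("STAT <name> <value>") is the standard memcached text protocol.

def parse_stats(raw):
    """Turn raw 'stats' output into a dict of counter name -> int."""
    stats = {}
    for line in raw.strip().splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            try:
                stats[parts[1]] = int(parts[2])
            except ValueError:
                pass  # skip non-numeric stats (version strings, etc.)
    return stats

gbe     = parse_stats("STAT cmd_get 771811\nSTAT cmd_set 784119\nEND")
dolphin = parse_stats("STAT cmd_get 239423\nSTAT cmd_set 271259\nEND")

for key in ("cmd_get", "cmd_set"):
    ratio = dolphin[key] / gbe[key]
    print(f"{key}: {dolphin[key]} vs {gbe[key]} ({ratio:.2f}x)")
# cmd_get: 239423 vs 771811 (0.31x)
# cmd_set: 271259 vs 784119 (0.35x)
```

In a live test you would send `stats\r\n` to the memcached port and feed the reply through the same parser.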

So instead of testing out the interconnect performance, I am really seeing better results from a higher cache hit rate.  Fewer items are hitting the LRU.   So what could it be?  Well, we are doing way fewer read-aheads than normal:


1GbE:
Innodb_buffer_pool_read_ahead_rnd       650
Innodb_buffer_pool_read_ahead_seq       9744

Dolphin:
Innodb_buffer_pool_read_ahead_rnd       1382
Innodb_buffer_pool_read_ahead_seq       3738

Each read-ahead can read up to 64 pages … that means this could account for a lot of the missing gets. I have been looking through the code and have yet to find any answers…   I even debugged the heck out of the read-ahead functions, and the number of times the read-ahead function is called really is 1/3 ( so it’s not exiting early due to some weird condition ).  I guess it’s *** possible *** that a few threads could be requesting the same data, and because the data is retrieved faster ( < 200 microseconds )  less data is being pushed out, but that seems unlikely.  Any interesting thoughts out there?
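That back-of-envelope claim can be checked directly. A minimal sketch, using the counters from this post and assuming every read-ahead pulled a full 64-page extent (an upper bound, since InnoDB may read fewer pages per batch):

```python
# Back-of-envelope check: can the drop in read-aheads account for the
# missing memcached gets? Counter values are from the post; the 64 is
# the size of an InnoDB extent, the unit read-ahead operates on.

PAGES_PER_READAHEAD = 64  # upper bound: one full extent per read-ahead

readaheads_1gbe    = 650 + 9744    # rnd + seq
readaheads_dolphin = 1382 + 3738

extra_pages  = (readaheads_1gbe - readaheads_dolphin) * PAGES_PER_READAHEAD
missing_gets = 771811 - 239423

print(f"pages from extra read-aheads: {extra_pages}")   # 337536
print(f"missing cmd_get operations:   {missing_gets}")  # 532388
print(f"fraction explained (at most): {extra_pages / missing_gets:.2f}")
```

At the full-extent upper bound, the read-ahead difference covers roughly 60% of the gap in gets, so it is a big piece of the puzzle but may not be the whole story.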

This entry was posted in hardware, mysql, performance, Waffle Grid.

2 Responses to Wafflegrid: DBT2, Dolphin and Innodb Readahead

  1. Brooks says:

    I find it interesting that there are more random read-aheads for Dolphin but fewer sequential ones.

    Whenever I try to solve this type of problem I isolate the sql that is running much faster.

  2. Pingback: Big DBA Head! - Database Brain Power! » Waffle Grid: More fast interconnect madness
