InnoDB Compression: When More is Less

So Vadim posted on the MySQL Performance Blog about poor benchmark results when running with InnoDB compressed pages. I ran some tests a few weeks ago and did not see the same results, so I dug into my previous tests and compared them to his numbers. In a roundabout way of verifying his thoughts on mutex contention, I found that increasing the buffer pool size with compressed data actually decreases transactional throughput. The test was run read-only against a data set that is 6GB uncompressed, 3.1GB compressed.
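
For reference, a compressed InnoDB table for this kind of test is created with ROW_FORMAT=COMPRESSED. The sketch below uses a sysbench-style layout and KEY_BLOCK_SIZE=8 purely as an illustration; it is not the exact benchmark schema.

-- Sketch only: assumed sysbench-style schema, not the exact benchmark table.
-- Compressed tables require innodb_file_per_table=1 and
-- innodb_file_format=Barracuda.
CREATE TABLE sbtest (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
  k   INT UNSIGNED NOT NULL DEFAULT 0,
  c   CHAR(120)    NOT NULL DEFAULT '',
  pad CHAR(60)     NOT NULL DEFAULT '',
  PRIMARY KEY (id),
  KEY k_idx (k)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;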

Buffer pool   Compressed   BP instances       TPS
2G            no            1              3217.19
8G            no            1              4479.81
8G            no           16              4424.3
1G            yes           1              1120.3
2G            yes           1              1181.8
4G            yes           1                38
8G            yes           1                33.6
8G            yes           4               226
8G            yes           8               544.7
8G            yes          12              3009.79
8G            yes          16              3026.1

You can see that adding memory to the InnoDB buffer pool actually slows things down once the data is compressed. Looking at the InnoDB status, you can see things getting locked up. What is interesting, though, is that you can mitigate a lot of this simply by making use of multiple buffer pools.

Here is where it is waiting:

----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 529411, signal count 133604
--Thread 140360422000384 has waited at /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0buf.c line 3483 for 0.0000 seconds the semaphore:
S-lock on RW-latch at 0x412fb68 '&buf_pool->page_hash_latch'
a writer (thread id 140360421799680) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0buf.c line 3483
Last time write locked in file /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0lru.c line 1626
--Thread 140355466110720 has waited at /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0lru.c line 813 for 0.0000 seconds the semaphore:
Mutex at 0x412fb28 '&buf_pool->LRU_list_mutex', lock var 1
waiters flag 1
--Thread 140355465910016 has waited at /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0buf.c line 4398 for 0.0000 seconds the semaphore:
Mutex at 0x412fb28 '&buf_pool->LRU_list_mutex', lock var 1
waiters flag 1
--Thread 140360422201088 has waited at /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0buf.c line 4398 for 0.0000 seconds the semaphore:
Mutex at 0x412fb28 '&buf_pool->LRU_list_mutex', lock var 1
waiters flag 1
--Thread 140355466311424 has waited at /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0buf.c line 4398 for 0.0000 seconds the semaphore:
Mutex at 0x412fb28 '&buf_pool->LRU_list_mutex', lock var 1
waiters flag 1
--Thread 140355466512128 has waited at /var/lib/buildbot/slaves/percona-server-51-12/DEB_Ubuntu_maverick_amd64/work/Percona-Server-5.5.10-rc20.1/storage/innobase/buf/buf0buf.c line 4398 for 0.0000 seconds the semaphore:
Mutex at 0x412fb28 '&buf_pool->LRU_list_mutex', lock var 1
waiters flag 1
Mutex spin waits 455865, rounds 16856962, OS waits 524701
RW-shared spins 29949, rounds 192489, OS waits 3591
RW-excl spins 499, rounds 26672, OS waits 117
Spin rounds per wait: 36.98 mutex, 6.43 RW-shared, 53.45 RW-excl
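
The output above comes from SHOW ENGINE INNODB STATUS. If you want to check for the same contention on your own server, look at the SEMAPHORES section of:

-- Run from the mysql client while the workload is active; watch for
-- waits on buf_pool->LRU_list_mutex and buf_pool->page_hash_latch.
SHOW ENGINE INNODB STATUS\G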

Long story short: if you're using compression with InnoDB, you may want to look into using multiple buffer pools until this is fixed.
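
As a sketch, the 8G buffer pool with 16 instances from the table above looks like this in my.cnf (the sizes are from my test; tune them for your own data set):

# my.cnf sketch: 8G buffer pool split into 16 instances to spread out
# contention on buf_pool->LRU_list_mutex and the page hash latch
[mysqld]
innodb_buffer_pool_size      = 8G
innodb_buffer_pool_instances = 16
# compressed (ROW_FORMAT=COMPRESSED) tables also need:
innodb_file_per_table = 1
innodb_file_format    = Barracuda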
