The importance of network latency in application performance – part 2

I harped on this earlier this month: the network is an often overlooked, but vital, component of every application. I have been in many shops content with running 100Mb/s between the application and database simply because they are nowhere near maxing out the available bandwidth between the two servers. What they are forgetting is that there is a big latency difference between 10Mb/s, 100Mb/s, and 1000Mb/s. Speaking from my testing on Waffle Grid, under load a 100Mb/s connection routinely shows network latency in the 3000-4000 microsecond range, while the same load over 1GbE routinely runs at around 1100 microseconds. By the way, the same test using a Dolphin interconnect card finishes with an average latency of less than 300 microseconds. These tests average less than 5Mb/s being pushed over the network, which from a network perspective would not even hit half the available bandwidth of a 100Mb/s link. In other words, you may think a 100Mb/s connection is good enough, but it could be adding 3x more latency to every network operation. 3x more, over how many packets per day? Over how many operations each day? Ouch, that really adds up.
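
If you want to put a number on this for your own gear, the sketch below is one quick way to do it. It is a minimal example, not a benchmark harness: it assumes a memcached instance is listening at the (made-up) address below, and it times a tiny request/response round trip, much like Waffle Grid traffic, reporting the result in microseconds. Point it at a box across your 100Mb/s link, your 1GbE link, or at localhost and compare.

    import socket
    import time

    # Hypothetical target -- point this at a memcached instance on the far side
    # of whichever link you want to measure (100Mb/s, 1GbE, localhost, ...).
    HOST, PORT = "192.168.1.50", 11211
    SAMPLES = 1000

    sock = socket.create_connection((HOST, PORT))

    # Warm-up request so connection setup cost is not counted.
    sock.sendall(b"version\r\n")
    sock.recv(1024)

    latencies_us = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        sock.sendall(b"version\r\n")   # tiny request: we are mostly timing the wire
        sock.recv(1024)                # reply is "VERSION x.y.z\r\n"
        latencies_us.append((time.perf_counter() - start) * 1_000_000)

    sock.close()
    latencies_us.sort()
    print(f"min {latencies_us[0]:.0f}us  "
          f"median {latencies_us[SAMPLES // 2]:.0f}us  "
          f"max {latencies_us[-1]:.0f}us")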

One of the most common performance mistakes I see (it has moved onto my top 10 list now) is returning more data than you actually need on your pages. In fact, at a recent client engagement we cut the data returned from the database on the majority of their pages from 300KB to less than 50KB. They could potentially serve millions of these pages a day, which equates to roughly 292GB vs. 48GB of data per day being shuttled between the database and the application. As I said, it adds up. Less data, fewer packets, less stuff traversing the network, and less contention make things faster. While size reductions like this help, so do faster network connections. You may ask yourself: "Yeah, a 1GbE card is faster than a 100Mb/s card, and it will make things faster... but really, how much faster?" The answer, as always, depends on your application, but it could be huge.
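
Coming back to the data-trimming point for a second: the fix is usually not exotic. Here is a hedged sketch of what it tends to look like; the table, columns, and connection details are all made up, and mysql-connector-python stands in for whatever client library you actually use. The point is simply to let the database return only the columns and rows the page will render, instead of dragging the whole result over the wire and filtering in application code.

    import mysql.connector

    # Connection details are placeholders.
    conn = mysql.connector.connect(
        host="db.internal", user="app", password="secret", database="shop"
    )
    cur = conn.cursor()

    # The expensive habit: pull every column of every row over the network,
    # then pick out the handful of fields the page actually shows.
    cur.execute("SELECT * FROM products")
    all_rows = cur.fetchall()

    # The cheap alternative: let the database trim the result set, so only the
    # needed columns and rows ever touch the wire.
    cur.execute(
        "SELECT id, name, price FROM products WHERE category_id = %s LIMIT 25",
        (42,),
    )
    page_rows = cur.fetchall()

    cur.close()
    conn.close()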

To underscore the benefits of faster network access, I thought I would share a recent experience. While testing the Dolphin interconnect cards I ran into something completely unexpected. The Dolphin interconnects use a socket interface, called Dolphin SuperSockets, to speed up network performance. The goal of my testing was to measure the performance of the SuperSockets interface between MySQL and memcached (Waffle Grid); what I had not expected was that the SuperSockets interface also affects traffic that goes to localhost. Logically this makes sense, but I guess I had never thought about the ramifications, that is, until I saw some very strange DBT2 results. My test run times started coming back 3x faster than anything else I had tested, and I saw odd changes in the number of certain database operations, patterns of data, etc. The only thing that had changed was the SuperSockets interface. So I tested DBT2 on a system without Waffle Grid: DBT2 0.37 against MySQL 5.1.30 on the same server (connecting via localhost), with and without SuperSockets. Survey says:

Yep, that's not a rounding error. I saw 2,800 vs. 27,000 TPM, which works out to nearly a 10x performance bump, and the only change was the SuperSockets interface. Once again, all SuperSockets does is push data over the network faster; it does not "enhance the CPU," supercharge the I/O, or anything of the sort. The localhost improvement here is nothing short of spectacular. I asked the guys over at Dolphin about this, and their reply was:

“We basically apply the same “shortcut” through the kernel network stack that we use for Ethernet sockets. We’ve seen latency *and* bandwidth improvements of factors 5..10. However, you still need a DX adapter in your machine. “

So yes, they have seen this performance bump elsewhere.

A couple of things to note here. The improvement seen here may not be typical for all applications. DBT2 is a generic test that may simply process data in a way that takes advantage of a really fast server-to-client interface (I honestly have not dug into this). My guess, based on the stats I am seeing, is that because there is no significant network delay between the application and the database, hot data is not hitting the InnoDB LRU while waiting for the network to pass data back to the client.

The real message here is that network performance can significantly impact application performance. In this case, simply returning data to the application more quickly allowed it to push 10x more data through the system than it could with a slower network. Raise your hand if you would have guessed you could see this kind of speedup just from boosting the performance of "localhost" traffic.

Once again, thanks to the folks at Dolphin Interconnect Solutions for letting me play with this; it's very cool.


One Response to The importance of network latency in application performance – part 2

  1. Great blog. It's not a complete surprise that Dolphin is very efficient on localhost traffic. We have done quite a few benchmarks of DBT2 on MySQL Cluster where the MySQL Servers and Data Nodes are on the same machines, and some on different machines, and we wanted the speedup in the localhost case as well. So we asked Dolphin to add this feature, which they kindly did. So it's a feature that already benefits Dolphin users of MySQL Cluster.

     Delays using the Dolphin-optimised path are down to well below 1 microsecond, whereas normal localhost was measured at around 12 microseconds.