For those unfamiliar with SQL Server clustering, let me give you a quick blurb. Microsoft markets SQL Server as having active-active clustering. When most people hear active-active or clustering in the database world, they immediately think of features like Oracle RAC. SQL Server's implementation of active-active clustering (as of 2005, anyway) is really an HA setup with one instance active on the A side and a separate, unrelated instance active on the B side. These instances have different ports and install directories, and share nothing at the database level. A SAN is used, with its disks presented to each node in the cluster. Microsoft Cluster Service (MSCS) manages the IP takeover, handles resource transitions, etc. The setup is really active-passive plus passive-active, which ends up confusing and even misleading people in both technical and management positions.
Why is this appearing under a MySQL-related topic? Because I too can mislead the public! No, just kidding. We can learn a thing or two from Microsoft's active-active configuration. One of the biggest knocks I have heard at client sites about using HA or DRBD is that most of the time (in fact, I cannot think of a case otherwise) it's implemented active-passive: Server A is doing all the heavy lifting and Server B is sitting idle. This drives many old-school companies batty: "Idle hardware! For the love of Pete, we cannot have that!"
So why aren’t more people setting up MS-style active-active HA pairs? Heartbeat supports it, and clients with SANs or with DRBD can make use of it. Mind you, I have never set it up either, but this is all theoretically possible. The only argument I have heard that I can buy into is that during a failover your surviving server has two instances running, which could hammer the system. While that’s true, would you rather be slow or dead? I also see many under-utilized servers running HA (active-passive) at under 20% utilization, so for those the argument does not hold. Is it because hardware is so cheap that no one cares about the cost of one extra server?
What would an active-active setup look like?
Here is a very simple sample setup. Server A runs instance mysql1 on port 3306 with a dedicated VIP; disk /dev/sda1 (on the SAN) is mounted on Server A and holds that instance’s MySQL data, logs, and binaries. Server B runs instance mysql2 on port 3307 with a dedicated VIP; disk /dev/sdb1 (on the SAN) is mounted on Server B and holds that instance’s MySQL data, logs, and binaries. Heartbeat manages the resources.
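To make that concrete, here is a minimal sketch of what the Heartbeat (v1-style) haresources file for such a pair might look like. The hostnames (nodea/nodeb), VIP addresses, mount points, and the mysql1/mysql2 init scripts are all hypothetical placeholders:

```
# /etc/ha.d/haresources -- sketch only; hostnames, VIPs, mount
# points, and init scripts are placeholders for illustration.
#
# nodea normally owns the mysql1 resource group:
# its VIP, the /dev/sda1 filesystem, and the mysql1 init script.
nodea 192.168.0.101 Filesystem::/dev/sda1::/mysql1::ext3 mysql1
#
# nodeb normally owns the mysql2 resource group.
nodeb 192.168.0.102 Filesystem::/dev/sdb1::/mysql2::ext3 mysql2
```

A bare IP address in haresources is shorthand for the IPaddr resource script. If either node dies, Heartbeat moves that node's entire group (VIP, filesystem, and MySQL instance) to the survivor, which is exactly the two-instances-on-one-box failover scenario mentioned above.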
This leads me to a client setup where active-active is a perfect fit: the database is under 50GB, and each server has 16 cores and 128GB of memory connected to a big SAN. Just blabbing a bit here, stream of consciousness, but: add 8 NIC ports to each server, dual fiber cards, etc. Server A would run mysql instance #1 by default, with its CPU affinity set to cores 1-8 and configured as if only 64GB of memory were available (a 50GB buffer pool, let’s say). Server B would run mysql instance #2 by default, with its CPU affinity set to cores 9-16 and likewise configured as if 64GB of memory were available (a 50GB buffer pool, let’s say). On each server, two NIC ports would be bonded for instance #1’s public VIP, two more bonded for instance #2’s public VIP, two bonded for the private interface, and the remaining two used for replication, backups, etc. Eight disks in RAID 10 would be presented to servers A and B for instance #1, and a separate eight disks in RAID 10 would be presented to both servers for instance #2. That should work rather nicely, I think…
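As a rough sketch of the per-instance configuration: the option names below are standard mysqld settings, but the file paths, mount points, and config file names are made up for illustration:

```
# Hypothetical /etc/my.mysql1.cnf for instance #1
[mysqld]
port                    = 3306
socket                  = /mysql1/mysql.sock
datadir                 = /mysql1/data
innodb_buffer_pool_size = 50G

# Instance #2 would mirror this with port 3307 and /mysql2 paths.

# Pinning each instance to its half of the cores could be done with
# taskset (Linux numbers cores from 0, so "cores 1-8" above is 0-7):
#   taskset -c 0-7  mysqld_safe --defaults-file=/etc/my.mysql1.cnf &
#   taskset -c 8-15 mysqld_safe --defaults-file=/etc/my.mysql2.cnf &
```

In a Heartbeat setup the taskset wrapping would live inside each instance's init script, so the affinity follows the instance wherever it fails over.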
As for this client, I think active-active deployments could be a great fit for those “under-utilized” 16-core boxes you bought to run MySQL… Anyone running this setup? This should also work with MSCS in Windows land.