It's often quite tempting to make sweeping statements about the superiority of one approach to a problem over another. Various approaches have their advantages, but in the real world there are often many competing criteria, which makes a black-and-white assessment of choices seem rather simplistic. Politicians and marketing folks oversimplify complex problems every day, and it often makes a real discussion of the issue at hand harder - although it does engender a vitriolic us-vs.-them shouting match. Luckily for us, this never happens in the technical world.
Eric Bergen posted an interesting entry on his blog this morning about DRBD. He makes some very good points, but I believe he leaves out some context or assumptions for some of his conclusions, primarily by assuming that there is a single HA setup with a single set of criteria for success and then comparing both DRBD and MySQL Replication to that invented scenario. Since the scenario seems to be one in which MySQL Replication is the obvious choice, it is no surprise that a solution involving DRBD comes up wanting. Eric is 100% correct - DRBD is terrible at performing the tasks that MySQL Replication is well suited for. I would argue, however, that MySQL Replication is just as terrible at performing the tasks that DRBD does well, and neither of them can touch MySQL Cluster for the tasks that MySQL Cluster was designed for. One of the underlying ideas in the MySQL world is that there is no one perfect tool for every occasion - witness the existence of multiple storage engines. The real Zen comes in matching the correct tool (or tools) to the job.
The key difference between MySQL Replication and DRBD is that MySQL Replication is asynchronous whereas DRBD is synchronous. Using MySQL Replication, writes to the primary master are not affected by the health of the secondary master in any way. Using DRBD, writes depend on the health of both boxes. This doesn't mean that in a DRBD world a failure of either box is a failure of both, but if the performance of either box becomes degraded, the performance of the pair becomes degraded. In this, as in many other ways, a DRBD system behaves much like a system involving two machines and shared storage. If your SAN performance becomes degraded, it will affect the overall performance of the system. Thinking of DRBD in terms similar to shared storage is often helpful, as the metaphor actually holds up quite nicely.
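The latency consequence of that difference can be sketched in a few lines. This is a toy model, not MySQL or DRBD code, and the millisecond figures are invented for illustration - the point is only who the client waits for:

```python
def async_commit(primary_ms, secondary_ms):
    """MySQL Replication style: the client waits only on the primary.
    The secondary applies the change later, on its own schedule."""
    return primary_ms

def sync_commit(primary_ms, secondary_ms):
    """DRBD style: the write is acknowledged only once both boxes
    have it, so the slower box sets the pace for every write."""
    return max(primary_ms, secondary_ms)

# Healthy pair: the two schemes look nearly identical.
print(async_commit(5, 6), sync_commit(5, 6))      # 5 6

# Degraded secondary: only the synchronous pair slows down.
print(async_commit(5, 200), sync_commit(5, 200))  # 5 200
```

Which behavior you want depends entirely on whether you care more about primary latency or about both boxes agreeing on the data.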
Even with both boxes' disk subsystems perfectly healthy, DRBD, as a synchronous replication technology, adds overhead to each write, and a performance hit is unavoidable. MySQL Replication does not suffer this penalty. Should we not then, as Eric suggests, avoid DRBD and use MySQL Replication for everything? The answer is an emphatic "No" - it really depends on the problem you are trying to solve.
The very thing that makes MySQL Replication wonderful at some tasks - its asynchronous operation - is the very thing that makes it impossible to use in other situations. MySQL Replication undermines the durability of your transactions. It can't help it. Imagine your database is collecting the data for processing credit card payments, as many databases do. Imagine then that your primary master handles a transaction and returns success on a commit. The expectation now is that this data is saved and durable. Ah, but at that moment your primary master dies. Who knows why - maybe one of the new guys working at the colo facility thought it would be neat to unplug the server from power. It happens. You've got heartbeat or something like it set up, your secondary master takes over, and the system as a whole keeps on running, in next to no time since the secondary master was running a warm copy of the database. Life is good - right? What about that transaction? You've told the rest of the world that it is committed, but you have absolutely no way of knowing whether it made it to the secondary master. If you were running one transaction at a time, you could check and correct it manually, but you probably don't have redundant masters if your query load is that low. You probably have a steady stream of transactions happening. Which means that now your secondary master is in an unknown state.
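The durability gap described above is easy to demonstrate. The following is a toy simulation - nothing here is real replication code - of a primary that acknowledges commits before the secondary has seen them:

```python
class AsyncPair:
    """Toy model of asynchronous master/slave replication."""

    def __init__(self):
        self.primary = []    # transactions durable on the primary
        self.secondary = []  # transactions applied on the secondary

    def commit(self, txn):
        self.primary.append(txn)
        return "OK"          # client is told the commit succeeded

    def replicate_one(self):
        # The secondary applies events later, one at a time.
        if len(self.secondary) < len(self.primary):
            self.secondary.append(self.primary[len(self.secondary)])

pair = AsyncPair()
pair.commit("charge card #1")
pair.replicate_one()
pair.commit("charge card #2")  # acknowledged to the client...

# ...and the primary dies before the event ships. After failover,
# the secondary is the new primary and the acknowledged commit is gone.
lost = [t for t in pair.primary if t not in pair.secondary]
print(lost)  # ['charge card #2']
```

With a synchronous scheme like DRBD, `commit` would not return "OK" until the secondary had the data, closing that window at the cost of the extra write latency discussed earlier.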
Does this mean that MySQL Replication should never be used in a dual-master setup? Of course not. There are plenty of applications (click tracking comes to mind as an excellent example) where the loss of a few records actually doesn't matter in the slightest, and where what you really want is the lowest possible transactional latency and the lowest possible downtime during a failover.
As a general rule (and remember, general rules are made to be broken), MySQL Replication is wonderful where short failover time and consistent primary performance are key. "In a good fail over scenario a problem with the backup master should never cause an issue on the primary master." In a scenario where this is the case, MySQL Replication is a fantastic choice. DRBD is wonderful if you need to be certain about the state of your data and can afford to lose a little performance to gain extra durability; in that case, degraded performance of the secondary affecting the performance of the primary is perfectly acceptable.
Something else to keep in mind is that not only can failovers be automated with DRBD, but so can failbacks (reattaching the original primary to the pair). In fact, you can happily bounce back and forth between two hosts running DRBD all day long with nary a problem. I dare you to do that unattended with a MySQL Replication setup and not run some sort of external consistency checking on your databases.
A few quick points to address directly:
"When DRBD, the operating system, or hardware crashes it crashes hard. Any corruption on the primary master during a nasty failure gets happily propagated over DRBD." There are two pieces of truth here combined in an interesting way. In a normal DRBD setup, I configure DRBD to throw a kernel panic in case of any problems with the underlying IO subsystem. If the error happens on the primary, the secondary happily takes over. If it happens on the secondary, the primary just keeps right on chugging. If, however, there is a bug internal to MySQL that causes corrupt data to be written to disk, this data will be happily written to both disks. Most of the time this doesn't result in an immediate crash of the primary server, so although the secondary server may not have the corruption, it is unknown how long the primary may run with the corruption, and again, the state of the data consistency between servers is unknown and unknowable.
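The panic-on-IO-error behavior I describe is a configuration choice, not a default. As a rough sketch only - the exact handler names vary by DRBD version, and `r0` is a hypothetical resource name, so check the drbd.conf man page for your release before copying anything - it looks something like this:

```
# Hypothetical drbd.conf fragment (older DRBD 0.7-era syntax assumed)
resource r0 {
  disk {
    # On any lower-level I/O error, panic this node so the peer
    # takes over cleanly instead of limping along on bad hardware.
    on-io-error panic;
  }
}
```

The design choice here is deliberate: a hard, immediate death of the sick node is exactly what lets the healthy node take over with confidence.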
Since DRBD replicates blocks, you also don't run into the very common problem MySQL Replication has of the slave getting behind the master due to its single execution thread. The secondary machine doesn't have to process a thing - all it has to do is write blocks to disk. Eric is 100% correct in pointing out that this does not allow you to use the alter-tables-on-the-slave trick. Of course, the real culprit here is the inability to add columns or indexes live. You really don't want to use this trick to remove columns, as all manner of holy hell will break loose if you suddenly have fewer columns on your replication target.
Reports of DRBD loading down pairs of masters to the point of query timeouts are very interesting - or would be, if they were backed up with any real details. The most overhead I've seen DRBD put on a system is a 30% slower disk subsystem response time. Maybe the client is trying to run DRBD over a 10M network link, or has the whole thing horribly, horribly misconfigured. I have not seen anything even remotely like this problem in the field.
DRBD is operationally much more stable and simpler to deal with than failover using MySQL Replication. This still doesn't mean there aren't times when Replication is the answer, but alluding to a "less stable less operationally friendly system" is just plain misdirection. I wholeheartedly agree that we should be working on making MySQL Replication better (right there with you on checksums - how about global transaction IDs of some sort too?), but there is no reason we can't continue to make both tools better and have more ways of dealing with more problems.
If you want to get rid of your DRBD fail over setup, by all means give Eric a shout and he'll be happy to help out. But if you wouldn't mind, give me a shout too. I'd like to hear about situations where DRBD isn't actually working out (I haven't personally run across one yet). Maybe your DRBD isn't set up well, or maybe it's just the wrong tool for the job. And, of course, I'd be remiss if I didn't give Eric some friendly competition for your business in doing the migration, if it's actually warranted.
Technorati Tags: mysql, drbd