Replication Revamp Feature

This page describes a proposed server feature.

This idea allows trusted servers (i.e. probably ones hosted with the main servers, and run by the same admins) to be near-real-time mirrors, instead of only being updated whenever a replication point is reached.

The essence of this idea is to run vanilla "dbmirror" code (well, almost - we already modify the 8.x dbmirror code to use the 7.x schema). You then decouple the idea of a "dbmirror slave" from a "database slave" - so when you read some updates from P/PD (the Pending/PendingData tables that dbmirror writes its changes to), instead of necessarily writing those changes to a database, you could do something else. So some slaves might be "database" slaves (i.e. like dbmirror); some might update derived data stores (e.g. Lucene); some might spool the changes out to a file (replication packets); some might do something else completely (MRTG? Hmmm, there's an idea).
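
To make the decoupling concrete, here is a minimal sketch of such a generic slave loop (Python with psycopg2; the P/PD column layout and the polling approach are assumptions for illustration, not the real dbmirror internals):

  import time
  import psycopg2

  def run_slave(dsn, handle_txn, poll_interval=5):
      """Generic slave loop: read changes from the P/PD tables, then hand
      them to a pluggable handler instead of always writing to a database."""
      conn = psycopg2.connect(dsn)
      while True:
          with conn.cursor() as cur:
              # Assumed P/PD layout: Pending(SeqId, TableName, Op, XID),
              # PendingData(SeqId, IsKey, Data).
              cur.execute('''SELECT p."SeqId", p."TableName", p."Op", pd."Data"
                               FROM "Pending" p
                               JOIN "PendingData" pd USING ("SeqId")
                              ORDER BY p."SeqId"''')
              rows = cur.fetchall()
          if rows:
              handle_txn(rows)  # write to a DB, update Lucene, spool a file...
          conn.commit()
          time.sleep(poll_interval)

Each slave type below is then just a different handle_txn.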

You can then have one or more near-real-time slaves replicating from the master, using vanilla dbmirror code.

For all of this to work, you'd also have to modify ExportAllTables --with-replication to NOT read/delete from the P/PD tables - i.e. to not produce the replication packets itself.

Slave type "Sequence Point Synchroniser"

(Hereafter called the SPS slave).

This slave adds itself to MirrorHost (and updates the MirroredTransaction table to say "I have processed all transactions so far"). It then sucks in and discards all changes, stopping just after it sees the transaction which updates replication_control.

The slave is now synchronised to a replication point (say, "X"). The slave displays the value of X.

This slave type is used to introduce new slaves into the system.
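
A sketch of what the SPS slave might boil down to, assuming the same P/PD layout as above; the MirrorHost column names are guesses, and the real MirroredTransaction bookkeeping and exact transaction boundaries are glossed over:

  import time
  import psycopg2

  def sync_to_sequence_point(dsn, host_name):
      """SPS slave: register, then consume-and-discard until the transaction
      that updates replication_control has gone past."""
      conn = psycopg2.connect(dsn)
      with conn.cursor() as cur:
          # Join the mirror pool, claiming "caught up as of now".
          cur.execute('INSERT INTO "MirrorHost" ("HostName") VALUES (%s)',
                      (host_name,))
          conn.commit()
          while True:
              cur.execute('''SELECT "SeqId", "TableName" FROM "Pending"
                              ORDER BY "SeqId" LIMIT 100''')
              rows = cur.fetchall()
              if not rows:
                  time.sleep(1)
                  continue
              # Discard: consume the rows without applying them anywhere.
              cur.execute('DELETE FROM "Pending" WHERE "SeqId" = ANY(%s)',
                          ([seq for seq, _ in rows],))
              conn.commit()
              if any(t == 'replication_control' for _, t in rows):
                  break
          # Synchronised; display the value of the replication point "X".
          cur.execute('SELECT current_replication_sequence'
                      '  FROM replication_control')
          x = cur.fetchone()[0]
          print("synchronised to replication point", x)
          return x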

Slave type "database"

Updates are written to a pgsql database, just like vanilla dbmirror.

Slave type "replication packet generator"

(Hereafter called the RPG slave).

Since, under this proposal, ExportAllTables would NOT be producing replication packets, something else has to - that's this slave type. All ExportAllTables would do is update replication_control, in the same way that it does now.

Whenever this slave reads P/PD rows, it spools them to a pair of files. If the SQL transaction (to update the slave status) fails, that data would have to be "un-spooled".

Whenever it sees (has just finished applying) the transaction which updated replication_control, it knows that it now needs to produce a replication packet (the very last transaction in every packet is the one which updates replication_control to the sequence number of the packet we need to produce).

It then produces a replication packet, using the spooled/copied data from the P/PD tables, and resets the spooled files to empty. (The schema seq, replication seq, and timestamp can all be read from the just-modified replication_control table).
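
Putting those three paragraphs together, the RPG slave might look something like this sketch (the Txn class, spool file handling and packet naming are all illustrative, and the "un-spool on failure" case from above is left out):

  import tarfile
  from dataclasses import dataclass

  SPOOL = ("spool.Pending", "spool.PendingData")

  @dataclass
  class Txn:                        # hypothetical stand-in for one dbmirror txn
      pending_rows: list
      pendingdata_rows: list
      tables: set

  def handle_txn(txn, conn):
      """RPG slave: spool each transaction's P/PD rows; once the transaction
      that updates replication_control has been applied, cut a packet."""
      with open(SPOOL[0], "a") as p, open(SPOOL[1], "a") as pd:
          p.writelines(line + "\n" for line in txn.pending_rows)
          pd.writelines(line + "\n" for line in txn.pendingdata_rows)
      if "replication_control" in txn.tables:
          make_packet(conn)

  def make_packet(conn):
      with conn.cursor() as cur:
          # Schema seq, replication seq (and timestamp) come straight from
          # the just-modified replication_control row; schema_seq would go
          # into the packet's metadata.
          cur.execute('''SELECT current_schema_sequence,
                                current_replication_sequence
                           FROM replication_control''')
          schema_seq, repl_seq = cur.fetchone()
      with tarfile.open("replication-%d.tar.bz2" % repl_seq, "w:bz2") as tar:
          for f in SPOOL:
              tar.add(f)
      for f in SPOOL:               # reset the spool files to empty
          open(f, "w").close()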

Slave type "Lucene"

Rob mentioned (in an earlier version of this page) that it (might be|should be|is) possible to update Lucene "via a couple of WS calls". So this slave could watch for the changed rows that it cares about, and make those WS calls. Any updates that don't affect Lucene, it just discards.
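
A sketch of the filtering, assuming rows arrive as (table, data) pairs; the table list and the web-service endpoint are entirely made up, since the actual WS calls were never specified:

  import urllib.request

  INDEXED_TABLES = {"artist", "album", "track"}   # illustrative list

  def handle_txn(rows):
      """Lucene slave: act on rows for indexed tables, discard the rest."""
      for table, data in rows:
          if table not in INDEXED_TABLES:
              continue              # doesn't affect Lucene - just discard
          # Entirely hypothetical endpoint - the actual "couple of WS
          # calls" were never pinned down on this page.
          req = urllib.request.Request(
              "http://search.example/reindex/" + table,
              data=data.encode("utf-8"), method="POST")
          urllib.request.urlopen(req)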

Slave type "MRTG"

Provide a COUNTER of some of the following (a minimal sketch follows the list):

  • non-read-only transactions
  • rows modified (inserted / updated / deleted)
  • rows modified (inserted / updated / deleted) per table (probably just for the major tables)
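
The sketch keeps all three counters, assuming the handler sees one (table, op) pair per changed row, with op in dbmirror's insert/update/delete style (assumed here to be 'i'/'u'/'d'):

  from collections import Counter

  # MRTG-style COUNTERs: monotonically increasing; MRTG graphs the deltas.
  txn_count = 0
  row_ops = Counter()               # totals per operation
  row_ops_per_table = Counter()     # totals per (table, operation)

  def handle_txn(rows):
      """MRTG slave: count the changes, then throw them away."""
      global txn_count
      txn_count += 1                # one more non-read-only transaction
      for table, op in rows:        # op assumed to be 'i' / 'u' / 'd'
          row_ops[op] += 1
          row_ops_per_table[(table, op)] += 1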

Slave type "Delay"

This slave deliberately doesn't process updates as quickly as possible - it holds back, so that data is not deleted from P/PD. (TODO: holds back forever, or by time (e.g. 24 hours), or by replication sequence (e.g. 6 packets)?).
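
Whichever TODO option wins, the mechanics are the same; here is a sketch using the 24-hour option, assuming each transaction carries a timestamp:

  import time

  HOLD_BACK = 24 * 60 * 60          # one of the TODO options: 24 hours

  def handle_txn(txn_time, rows):
      """Delay slave: refuse to consume a transaction until it is old
      enough, so its P/PD rows can't be deleted from the master before then."""
      age = time.time() - txn_time
      if age < HOLD_BACK:
          time.sleep(HOLD_BACK - age)
      # ...now consume (and thereby release) the rows as usual...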

How to get there from here

The safe way

Do this at a server upgrade, i.e. stop database writes and make a replication packet (with a full export, just to make sure).

Add the new "host" dbmirror tables. Start the RPG slave. (P/PD should be empty, so the slave starts up idle). Modify the existing ExportAllTables so that P/PD are not emptied and not exported (and no replication packet is generated).

Then start allowing writes again.

The more risky-sounding way

As above but without a full export.

Even more risky still

No server release, no pause in writing. Hmmmm.

Stop the old ExportAllTables.

Add the new "host" dbmirror tables.

Add the RPG slave. It will (I think) start spooling all available P/PD data. (Remember that, at any given time, the earliest rows in P/PD represent the first changes after the replication point boundary. Therefore this slave will neatly start generating the next packet).

Start the new ExportAllTables.

Introducing new slaves

Assume that we're already using the new RPG slave. How do we add another slave - say, a database slave?

The problem is this: if you take the latest fullexport, create a database from that, apply all the replication packets so far, then add the slave to the MirrorHost table, it's almost certain that you'll have missed some data - everything that changed between the end of the last packet and the moment the slave registered itself will be in neither.

Therefore: start at a fullexport point, apply as many packets as you can find (up to packet "X"). Now, add the slave using the SPS slave (this is the reason for its existence). This syncs the slave to replication point "Y" (Y>X). Apply packets from X+1 to Y. Then start reading data using dbmirror.
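
As a sketch, reusing sync_to_sequence_point from the SPS section (every other helper here is a hypothetical stand-in for the real import/apply machinery):

  def bootstrap_slave(export_path, export_seq, dsn):
      """Bring up a new database slave without losing the X..Y gap."""
      load_full_export(export_path)
      x = export_seq
      while packet_exists(x + 1):   # apply as many packets as we can find
          x += 1
          apply_packet(x)
      y = sync_to_sequence_point(dsn, "new-slave")   # the SPS slave; Y > X
      for n in range(x + 1, y + 1):
          apply_packet(n)           # close the gap between X and Y
      run_dbmirror(dsn)             # from here on, replicate live

  # Hypothetical helpers, standing in for the real import/apply machinery:
  def load_full_export(path): ...
  def packet_exists(n): ...
  def apply_packet(n): ...
  def run_dbmirror(dsn): ...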

Run exports against a slave?

Modify one of the "database" slaves so that, once the transaction which modifies replication_control has just been applied, a full export is run against that database. However, this would need to be a COMPLETE copy of the master, not the partial / sanitised copies that we currently have.
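
A sketch of the hook, as a variation on the database slave; apply_to_database is a hypothetical stand-in for vanilla dbmirror, and the ExportAllTables invocation is only a placeholder, since the flags for a complete unsanitised dump don't exist yet:

  import subprocess

  def handle_txn(txn, conn, updated_replication_control):
      """Database slave that doubles as the export host."""
      apply_to_database(txn, conn)    # vanilla dbmirror behaviour
      if updated_replication_control:
          # A COMPLETE, unsanitised dump is needed here; plain
          # ExportAllTables is only a placeholder, since the flags for
          # that don't exist yet.
          subprocess.run(["ExportAllTables"], check=True)

  def apply_to_database(txn, conn): ...   # hypothetical stand-in for dbmirror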