David Shrewsbury <[email protected]> writes: > The options for handling this are: > > 1) Place locks on the replication stream and buffer Transaction > messages, likely to disk, until the transaction is complete and > send them all down the stream together, locking the stream to > do so. This is how MySQL handles writing RBR events to the > binlog. > > 2) Let the replication stream readers do the buffering of the > Transaction messages themselves.
#2. Option #1 throws away potentially useful information about parallelism
in the original transaction mix.

For parallel application on the slave (which one will want to do sooner or
later), seeing intermixed transactions in the replication stream is useful:
it tells us that these transactions could run in parallel on the master. It
is a shame to just throw this away and leave the slave the hard problem of
somehow re-discovering this property by itself. (I understand this proposal
is just for bulk loading, but it can be extended once the concept of
intermixed transactions is allowed.)

On the other hand, it seems trivial to create an optional filter of some
sort on the slave end that reads the intermixed stream and does whatever
buffering is needed to present simple-minded consumers with a non-intermixed
stream. I don't see why this should be any harder to do on the slave than on
the master; a rough sketch of such a filter follows.
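Something along these lines, purely as a sketch (TransactionMessage,
transaction_id and end_segment are stand-ins for whatever the real
replication messages expose, not Drizzle's actual API):

  #include <cstdint>
  #include <map>
  #include <vector>

  /* Stand-in for one piece of a transaction on the replication stream. */
  struct TransactionMessage
  {
    uint64_t transaction_id;  /* which transaction this piece belongs to */
    bool     end_segment;     /* true for the final piece of the transaction */
    /* ... statement payload ... */
  };

  /* A simple-minded consumer that only wants complete, non-intermixed
     transactions. */
  class SimpleConsumer
  {
  public:
    void applyTransaction(const std::vector<TransactionMessage> &messages)
    {
      /* Apply all statements of one complete transaction here. */
      (void) messages;
    }
  };

  /* Optional slave-side filter: buffers intermixed messages per
     transaction id and hands each transaction over in one piece once
     its final segment arrives. */
  class DeinterleaveFilter
  {
  public:
    explicit DeinterleaveFilter(SimpleConsumer &consumer_arg)
      : consumer(consumer_arg)
    {}

    /* Called for each message as it is read off the intermixed stream. */
    void handleMessage(const TransactionMessage &message)
    {
      std::vector<TransactionMessage> &buffer= buffers[message.transaction_id];
      buffer.push_back(message);

      if (message.end_segment)
      {
        /* Transaction complete: deliver it as one unit and drop the buffer. */
        consumer.applyTransaction(buffer);
        buffers.erase(message.transaction_id);
      }
    }

  private:
    SimpleConsumer &consumer;
    std::map<uint64_t, std::vector<TransactionMessage> > buffers;
  };

The buffering cost on the slave should be no worse than what the master
would have to pay under option #1, and consumers that can handle intermixed
transactions simply do not install the filter.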
 - Kristian.
