Hi, 

I agree with Kristian and others that #2 is the best approach provided you can 
make it work. 

#1 creates a big problem at the slave end because a bulk load can monopolize 
the transaction stream and make it very difficult to implement parallel apply 
on slaves.  Large transactions blow out the in-memory buffering approaches 
that are necessary for speed and for keeping the implementation simple.  We 
have this problem now with MySQL replication on Tungsten, and it is a real 
bear to work around.  

#2 is faster but does have some hidden complexities.  One tricky problem is to 
ensure that you can easily recreate the original serialized commit order in 
case you hit something that can't be parallelized due to application-level 
dependencies between apply streams.  This occurs commonly in multi-tenant 
applications that have per-customer databases plus a database with centralized 
reference data.  Such applications will sometimes get very sick if they see an 
out-of-order view of causally dependent data.  
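
To make the reserialization requirement concrete, here is a minimal sketch 
(hypothetical names and structure, not actual Tungsten or Drizzle code) of a 
gate that buffers transactions finished by parallel appliers and releases them 
strictly in the master's commit order, assuming each transaction carries a 
commit sequence number assigned at commit time:

```python
import heapq

class CommitOrderGate:
    """Releases transactions strictly in original commit order.

    Parallel appliers hand finished work to the gate, which holds
    each transaction until all earlier commit sequence numbers
    (csns) have been released.  Hypothetical sketch only.
    """
    def __init__(self):
        self.next_csn = 1   # next commit sequence number to release
        self.pending = []   # min-heap of (csn, txn) not yet releasable

    def submit(self, csn, txn):
        """Queue a transaction; return those now safe to apply, in order."""
        heapq.heappush(self.pending, (csn, txn))
        releasable = []
        while self.pending and self.pending[0][0] == self.next_csn:
            releasable.append(heapq.heappop(self.pending)[1])
            self.next_csn += 1
        return releasable

gate = CommitOrderGate()
gate.submit(2, "t2")   # arrives early, held back -> []
gate.submit(1, "t1")   # gap filled, releases both -> ["t1", "t2"]
```

When an application-level dependency forces serialization, the slave simply 
routes everything through such a gate instead of applying streams 
independently.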

Another practical problem with #2 is how to deal with transactions that run 
for some period of time and then fail.  Would you not write them to the log at 
all, or would you start writing them before they actually commit?  In the 
latter case you would need to do the standard DBMS log thing and write a 
rollback at the end to cancel them. 
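
A minimal sketch of the write-before-commit option (hypothetical event names; 
this assumes row events may appear in the log before the transaction's fate is 
known, with a trailing COMMIT or ROLLBACK deciding the outcome):

```python
def apply_stream(events):
    """Replay a log where row events precede the COMMIT/ROLLBACK outcome.

    Events are (kind, txn_id, payload) tuples.  Row changes are buffered
    per transaction; a trailing ROLLBACK discards the buffer, i.e. the
    standard DBMS-log approach of cancelling an aborted transaction.
    Hypothetical sketch only.
    """
    in_flight = {}   # txn_id -> buffered row payloads
    applied = []
    for kind, txn_id, payload in events:
        if kind == "BEGIN":
            in_flight[txn_id] = []
        elif kind == "ROW":
            in_flight[txn_id].append(payload)
        elif kind == "COMMIT":
            applied.extend(in_flight.pop(txn_id))
        elif kind == "ROLLBACK":
            in_flight.pop(txn_id)   # cancel: never apply these rows
    return applied

events = [("BEGIN", 1, None), ("ROW", 1, "r1"),
          ("BEGIN", 2, None), ("ROW", 2, "r2"),
          ("ROLLBACK", 1, None), ("COMMIT", 2, None)]
apply_stream(events)   # -> ["r2"]
```

The cost is that the slave-side reader must be prepared to buffer and then 
throw away arbitrarily large transactions.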

Yet another parallel apply problem is restart after a crash, but this one is 
less difficult.  There are several ways for slaves to remember their positions 
on different streams, as long as transactions within each stream are fully 
serialized with respect to one another. 
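
One workable scheme is a per-stream high-water mark: the slave records the 
last sequence number it applied on each stream and, on recovery, skips 
anything at or below that position.  A rough sketch (hypothetical structure, 
not how Tungsten actually stores it):

```python
class StreamCheckpoints:
    """Per-stream restart positions for a parallel-apply slave.

    A single high-water mark per stream is enough to resume safely,
    because transactions within one stream are fully serialized with
    respect to one another.  Hypothetical sketch only.
    """
    def __init__(self):
        self.positions = {}   # stream_id -> last applied sequence number

    def should_apply(self, stream_id, seqno):
        """After a restart, skip anything already applied on this stream."""
        return seqno > self.positions.get(stream_id, 0)

    def mark_applied(self, stream_id, seqno):
        # In a real slave this update would commit in the same
        # transaction as the applied changes, making it crash-safe.
        self.positions[stream_id] = seqno

cp = StreamCheckpoints()
cp.mark_applied("streamA", 10)
cp.should_apply("streamA", 10)   # already applied -> False
cp.should_apply("streamA", 11)   # new work -> True
```

The crash-safety hinges on updating the position atomically with the applied 
data, which is straightforward when the target is transactional.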

I'm just starting to look back into drizzle replication for Tungsten.  Does the 
current design handle the reserialization issue?  

Cheers, Robert

On Sep 17, 2010, at 4:47 AM PDT, Kristian Nielsen wrote:

> David Shrewsbury <[email protected]> writes:
> 
>> The options for handling this are:
>> 
>> 1) Place locks on the replication stream and buffer Transaction
>> messages, likely to disk, until the transaction is complete and
>> send them all down the stream together, locking the stream to
>> do so. This is how MySQL handles writing RBR events to the
>> binlog.
>> 
>> 2) Let the replication stream readers do the buffering of the
>> Transaction messages themselves.
> 
> #2.
> 
> #1 throws away potentially useful information about parallelism in the
> original transaction mix. For doing parallel application on the slave (which
> one will want to do sooner or later), seeing intermixed transactions in the
> replication stream is useful, it says that these transactions could run in
> parallel on the master. It is sad to just throw this away, leaving the slave
> the hard problem of somehow re-discovering this property by itself. (I
> understand this proposal is just for bulk loading, but this can be extended
> once the concept of intermixed transactions is allowed).
> 
> On the other hand, it seems trivial to create an optional filter of some sort
> on the slave end that would read the intermixed stream, and do buffering as
> needed to present simple-minded consumers with a non-intermixed stream. I
> don't see why this should be any harder to do on the slave than on the master.
> 
> - Kristian.
> 
> _______________________________________________
> Mailing list: https://launchpad.net/~drizzle-discuss
> Post to     : [email protected]
> Unsubscribe : https://launchpad.net/~drizzle-discuss
> More help   : https://help.launchpad.net/ListHelp

