On Aug 7, 2013, at 3:58 PM, Nick Alexander <[email protected]> wrote:
> Hello sync-dev,
>
> This is intended for warner, ckarlof, and rnewman; but I see no reason
> why others shouldn't chime in.
>
> The merge scenario that most concerns me (right now) is the case when (to
> borrow git terminology) you're rebasing a long series of commits onto an
> upstream head that has advanced from your head, and the first commit has
> non-trivial conflicts. In git, you need to manually update each subsequent
> commit in the series to rebase.
>
> If I understand the queue sync proposal correctly, the merge algorithm
> will also consider each subsequent commit in order and need to rebase
> each. For example:
>
> * if the local commits to rebase onto upstream are
>
>     rename folder A to folder B at time t0
>     insert bookmark X into folder B
>
> * if the new upstream commits are
>
>     rename folder A to folder C at time t1 > t0
>     insert bookmark Y into folder C
>     insert bookmark Z into folder C
>
> * then presumably the merged commits should look like
>
>     rename folder A to folder B (since t0 < t1, B "wins")
>     insert bookmark X into folder B
>     insert bookmark Y into folder B
>     insert bookmark Z into folder B

As Richard said, this isn't a challenging example if the folders have the
same GUIDs. Content dedup is definitely conflict-Mordor, so I'm going to
ignore that for now.

In general, queue sync as described doesn't maintain much history for
merging, so "rebasing on top of upstream changes" would look like this:

1) First upstream push fails.
2) Fetch the upstream changes.
3) Give the upstream changes to the data provider to merge.
4) Data provider merges and sends any new upstream changes to reflect the
   local-changes-merged-with-the-updates.

The data provider uses the current local content versions as guidance for
detecting conflicts. If they match the parentContentVersion of the upstream
change, all is dandy. Otherwise, the local version moved on (or there's a
weird server rollback-then-re-advance thing), and there's risk of
information loss.

Bookmark merging could be tricky in the case of lots of structural changes.
You guys have a lot of experience with that. Merging other data types seems
pretty easy to me.

Queue sync isn't really designed for long disconnected operation, but I
don't think that will necessarily happen often when syncing browser data.
Browser data doesn't change (much) on its own while you're disconnected,
and unlike current sync, we're designing for durable, higher-availability
servers. It's possible you could do a bunch of bookmark rearranging offline
while you're out of sync and then reconnect. I'm skeptical that that's the
motivating example for keeping tons of revision history around, though.

-chris

> The point being that /both/ the Y and Z inserts need to be rebased into
> folder B, since each commit is a delta (roughly corresponding to a DB CRUD
> operation) that implicitly references its "surrounding context". As the
> sequence of deltas grows longer, this requires tracking more and more
> merge context. (For example: renaming the same folder many times in
> sequence. One can imagine coalescing renames and other local
> optimizations, but this is not driving down complexity.)
>
> Is my understanding of how this would work in queue sync correct? Am I
> making this too complex? Am I missing something?
>
> Full disclosure: I'm not clear on how this merge would look in tree sync.
> I'm going to try to sketch that out right now!
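To make the conflict detection in steps 3/4 above concrete, here's a rough
Python sketch. All the names here (UpstreamChange, classify, etc.) are made
up for illustration; none of this is actual queue sync API.

    from dataclasses import dataclass

    @dataclass
    class UpstreamChange:
        record_id: str
        parent_content_version: int  # local version this change was written against
        payload: dict

    def classify(change: UpstreamChange, local_versions: dict) -> str:
        """Compare one upstream change against current local content versions."""
        local = local_versions.get(change.record_id)
        if local is None or local == change.parent_content_version:
            # Local content still matches what the upstream writer saw:
            # the change applies cleanly.
            return "apply"
        # The local version moved on (or the server rolled back and
        # re-advanced): hand the pair to the data provider's merge logic;
        # information loss is possible here.
        return "merge"

    # E.g. a local rename already bumped folder A's version to 4, but the
    # upstream rename was written against version 3: classify() says "merge".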
> Best,
> Nick

_______________________________________________
Sync-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/sync-dev

