> One trick Vlad has looked into is to pre-compute the server-revision,
> using the same algorithm that CouchDB will use (based on the Erlang
> "Term" language). It works, but it feels fragile, and would lock us into
> a specific backend database. We could also use the response to "POST
> /_bulk_docs" to learn the server-generated server-revisions for each
> accepted upstream record.
I believe this is very fragile: if you are running different versions of
Erlang, or do an upgrade, there is every chance it can break. I have
brought up before that revision generation is a leaky abstraction in
CouchDB. PouchDB has the same problem of not being able to generate the
same revisions, though there it has only very minor consequences. Even if
you do generate client-side revisions, the only way to commit them is via
new_edits=false, which means you lose compare-and-swap. Using the return
value from _bulk_docs seems useful here.
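To illustrate the _bulk_docs idea: a minimal sketch (Python, function name
hypothetical) of how a client could learn the server-generated revisions
from the response to POST /_bulk_docs, instead of trying to precompute
them. CouchDB answers with a JSON array where each accepted doc carries
"ok"/"id"/"rev" and each rejected doc carries "error"/"reason":

```python
import json

def revs_from_bulk_docs_response(body):
    """Map doc id -> server-assigned rev from a _bulk_docs response body.

    Accepted docs look like {"ok": true, "id": ..., "rev": ...};
    rejected docs look like {"id": ..., "error": ..., "reason": ...}
    and are skipped here (the caller can retry or merge them).
    """
    revs = {}
    for row in json.loads(body):
        if row.get("ok"):
            revs[row["id"]] = row["rev"]
    return revs

# Example response body, shaped like CouchDB's documented _bulk_docs reply:
body = json.dumps([
    {"ok": True, "id": "record-1", "rev": "2-abc"},
    {"id": "record-2", "error": "conflict",
     "reason": "Document update conflict."},
])
print(revs_from_bulk_docs_response(body))  # {'record-1': '2-abc'}
```

This keeps the server as the sole authority on revision strings, so nothing
depends on reproducing the Erlang term encoding client-side.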
As the Provider is not doing compare-and-swap, doesn't it always know what
its latest version of the record is? (It doesn't seem like it needs to
know intermediary states.)
> I'll have to think about that some more. One note though: the provider
> only gets two records ("mine" and "theirs", not "common"), since we
> don't retain enough history to provide that. We'd need an extra shadow
> copy in the mediator to hold onto the common ancestor.
Similarly, as mentioned above, the third record I was referring to is the
current document in the provider, which it could fetch itself: 'mine'
refers to the version it previously sent to be updated, and 'theirs' is a
new version introduced by another client.
I may have missed some details about the Provider, though. As far as I
understood, its only jobs are to be written to (without having to worry
about version constraints), to provide change notifications to the
Mediator, and to merge conflicts.
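To pin down the shape I have in mind (all names hypothetical, just a toy
sketch of my understanding, not a proposed implementation): a Provider that
accepts writes unconditionally, remembers only its latest version of each
record, and notifies the Mediator, with the current document serving as the
third input to a merge:

```python
class Provider:
    """Toy sketch: accepts writes without version constraints, keeps only
    the latest version of each record, and notifies the Mediator."""

    def __init__(self, notify_mediator):
        self._latest = {}          # record id -> latest version seen
        self._notify = notify_mediator

    def write(self, record_id, version):
        # No compare-and-swap: the write always succeeds.
        self._latest[record_id] = version
        self._notify(record_id, version)

    def latest(self, record_id):
        # The "third record": the provider's own current copy.
        return self._latest.get(record_id)

    def merge(self, record_id, mine, theirs, resolve):
        # 'mine' = what we previously sent to be updated;
        # 'theirs' = a new version introduced by another client.
        merged = resolve(mine, theirs, self.latest(record_id))
        self.write(record_id, merged)
        return merged

# Usage sketch: a resolver that simply prefers 'theirs'.
events = []
p = Provider(lambda rid, v: events.append((rid, v)))
p.write("bookmark-1", {"title": "old"})
merged = p.merge("bookmark-1", {"title": "mine"}, {"title": "theirs"},
                 lambda mine, theirs, current: theirs)
```

Note the provider never needs intermediary states, only its current copy.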
> I don't consider that a significant win *in isolation*. (I also don't
> consider it a lose; just want to be clear that this is not a killer
> feature)
If we can avoid atomic multi-key updates then in my opinion it is a huge
win. The architecture looks very similar to what CouchDB was designed to
handle, and rewriting that on top of a database that wasn't designed the
same way, in a way that is stable and handles a large volume of traffic,
is a huge undertaking. An auth layer is pretty much irrelevant in terms of
additional work to get a working server (an auth layer wouldn't need to
sit in front of CouchDB; it could just be a token server).
If we have a hard requirement on atomic multi-key writes (without any
possibility of introducing conflicts) then it is a different story in
terms of time to get a server running, but still a pretty huge task.
On 31 July 2013 00:39, Ryan Kelly <[email protected]> wrote:
> On 31/07/2013 5:17 AM, Brian Warner wrote:
> > On 7/30/13 4:22 AM, Andreas Gal wrote:
> >> Also, if we implement our own replication mechanism, what is the
> >> advantage of sticking with the CouchDB wire protocol? Its actually
> >> rather clumsy and inefficient (see proposed jsondiff delta
> >> compression). I am not arguing for using something else than CouchDB.
> >> I am merely asking why you think it makes sense to stick with the wire
> >> protocol but abandon the higher level semantics of CouchDB.
> >
> >
> > The biggest advantage of sticking with the CouchDB wire protocol is that
> > we get to use existing CouchDB implementations on the storage server.
> > We'll probably have a proxy in front of it, but that'll be pretty thin.
>
> I don't consider that a significant win *in isolation*. (I also don't
> consider it a lose; just want to be clear that this is not a killer
> feature)
>
> IMO our time-to-production-ready on the server side will be quite
> similar whether we're fronting couchdb or mysql or some other store.
>
> Using CouchDB as a wire protocol becomes more compelling if:
>
> * It lets people stand up their own storage servers really easily,
> e.g. by just running CouchDB and pointing firefox at it.
>
> * We outsource the servers to a company with experience running
> this at scale, such as https://cloudant.com/
>
> * It provides interesting opportunities for interacting with
> a wider software ecosystem.
>
> * It happens to be a nice fit for the problem at hand.
>
> If we're talking about needing a custom write API for transactionality
> and a fronting proxy to do auth etc, then I'm not sure any of those
> points hold.
>
>
> Cheers,
>
> Ryan
> _______________________________________________
> Sync-dev mailing list
> [email protected]
> https://mail.mozilla.org/listinfo/sync-dev
>