On Tue, Sep 1, 2015 at 5:35 PM, Dave Cridland <d...@cridland.net> wrote:
> I think most (or all) of the above only applies if you have rosters that are
> computed on demand, rather than managed by users via clients.
>
> Otherwise all you need on a simple roster (no shared groups) is a counter
> for the version; the value of the latest tombstone *not* retained (i.e., the
> last delete if there are no tombstones); and, per item, the value of the
> last change and whether it's deleted (i.e., whether it's a tombstone). No
> multiple versions of anything. Tombstones are optional, but without them
> the mechanism is only efficient for adds.
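
For anyone following along, a minimal sketch of that bookkeeping (in
Go, with illustrative names, and assuming a single node where a plain
counter is safe to increment) might look like:

    package roster

    // Item is one roster entry; a tombstone is a retained delete.
    type Item struct {
        JID       string
        Changed   uint64 // roster version at which this item last changed
        Tombstone bool   // set if the item was deleted but retained
    }

    // Roster keeps one counter plus a per-item change version; no
    // multiple versions of anything are stored.
    type Roster struct {
        Version    uint64          // bumped on every add, change, or delete
        LastPruned uint64          // version of the newest tombstone *not* retained
        Items      map[string]Item // keyed by JID, tombstones included
    }

    // Diff returns the items changed since the client's cached version,
    // or ok=false when a tombstone newer than since was pruned and the
    // client must re-fetch the full roster.
    func (r *Roster) Diff(since uint64) (changed []Item, ok bool) {
        if since < r.LastPruned {
            return nil, false
        }
        for _, it := range r.Items {
            if it.Changed > since {
                changed = append(changed, it)
            }
        }
        return changed, true
    }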

That's all true for simple rosters, but our entire use case is shared
rosters / groups that are managed by the server, and we're certainly
not the only ones (every company I've ever worked at has used XMPP for
team communication, and they've all had shared rosters).

Further feedback from Doug (who's not on this list):

> I'd agree XEP-0237 is a better spec for smaller deployments, but it fails if
> you want efficient, differential updates for large deployments, because
> you're depending on timestamps or some other shared, monotonically
> increasing resource, which is hard (or nearly impossible) to maintain on
> large clusters. This spec is all about trading upload for download, for the
> benefit of server scalability.
>
> In our case, it was also nice to get differential updates in place (which we
> could have achieved with a proper implementation of XEP-0237, but we would
> have taken on the server challenge of maintaining some kind of reliable,
> cluster-wide sequence generator).
>
> Also, XEP-0237 always assumes we want the full roster or rooms collection.
> With our XEP, the server can selectively issue subsets to certain users
> pretty trivially.
(also part of our scaling story)


(We know that timestamps are [rightfully] discouraged by XEP-0237, but
monotonically increasing version numbers still need to be synced across
cluster nodes.)
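
To make that trade concrete, here's a hedged sketch (illustrative Go,
not from the proto-XEP) where each item's token is derived from its
content, so no node needs a shared counter, and the server diffs
against whatever tokens the client uploads:

    package listsync

    import (
        "crypto/sha256"
        "encoding/hex"
    )

    // Token derives an opaque per-item version from the item's content;
    // any node in a cluster computes the same value with no shared,
    // monotonically increasing resource.
    func Token(content []byte) string {
        sum := sha256.Sum256(content)
        return hex.EncodeToString(sum[:8])
    }

    // Diff compares the tokens a client uploaded against the server's
    // current view of that user's list: fetch is what the client must
    // re-download, drop is what it must delete locally.
    func Diff(client, server map[string]string) (fetch, drop []string) {
        for id, tok := range server {
            if client[id] != tok { // new item, or content changed
                fetch = append(fetch, id)
            }
        }
        for id := range client {
            if _, ok := server[id]; !ok { // removed server-side
                drop = append(drop, id)
            }
        }
        return fetch, drop
    }

The client uploads one short token per item and downloads only the
items whose tokens differ; deletes fall out of the comparison for
free, with no tombstones required.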


> So you're saying you added a bunch of stuff for efficiency, and then had to
> add an efficient synch mechanism due to the inefficiency it caused? ;-)

I'm not sure I follow? All of this was added (or at least conceived)
as one piece to solve the problem. It was then broken into two phases:
the first adds caching to roster and disco#items lists (using the
mechanism described here), and the second adds the ability to download
only part of the list (and fetch the rest only as it's needed).
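
As a rough sketch of what the second phase might look like on the
client (hypothetical names; FetchFunc stands in for whatever wire
request the final protocol ends up defining):

    package cache

    // FetchFunc retrieves a single item's payload over the wire.
    type FetchFunc func(id string) ([]byte, error)

    // Lazy holds only the items that have actually been used.
    type Lazy struct {
        fetch FetchFunc
        items map[string][]byte
    }

    func New(fetch FetchFunc) *Lazy {
        return &Lazy{fetch: fetch, items: make(map[string][]byte)}
    }

    // Get returns the cached payload, downloading it on first use.
    func (l *Lazy) Get(id string) ([]byte, error) {
        if b, ok := l.items[id]; ok {
            return b, nil
        }
        b, err := l.fetch(id)
        if err != nil {
            return nil, err
        }
        l.items[id] = b
        return b, nil
    }

    // Invalidate drops an item whose version token changed, so the
    // next Get re-downloads it instead of serving stale data.
    func (l *Lazy) Invalidate(id string) {
        delete(l.items, id)
    }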

> Amazingly, the simple mechanism I detailed above still works for items
> containing metadata, incidentally.

The metadata's not a showstopper, and I don't mean to suggest that
roster versioning doesn't handle metadata; I just use it as an example
because it means we're downloading a lot more info (uploading 1000
small version tokens is a good trade-off to stop downloading 999 large
metadata blobs).

> As I said before, the difficulty is in
> dealing with multiple views; I think MUC room listing has those, and I don't
> have a solution - at least, not without a changelog.

This is a fairly good solution (once I fix the issue of deletes), and
it's one of the use cases we're using the mechanism for right now:
versioning MUC room lists, which are more or less unique per user
because private rooms don't show up in the list unless you're in the
ACL.
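
For what it's worth, the per-user lists fall out pretty naturally:
filter the global room list through each user's ACL before computing
tokens, so every user versions only the subset they can see. A
hypothetical sketch:

    package muc

    // Room is a minimal stand-in for a MUC room-listing entry.
    type Room struct {
        JID     string
        Private bool
        ACL     map[string]bool // users allowed to see a private room
    }

    // VisibleRooms filters the global list down to what one user may
    // see; version tokens are then computed over this per-user subset.
    func VisibleRooms(all []Room, user string) []Room {
        var out []Room
        for _, r := range all {
            if !r.Private || r.ACL[user] {
                out = append(out, r)
            }
        }
        return out
    }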

—Sam

-- 
Sam Whited
pub 4096R/54083AE104EA7AD3
https://blog.samwhited.com
