On 30 May 2012, at 14:13, Kevin Smith wrote:

> On Wed, May 30, 2012 at 2:09 PM, Theo Cushion <[email protected]> wrote:
>>
>> On 30 May 2012, at 13:21, Kevin Smith wrote:
>>
>>> On Wed, May 30, 2012 at 1:13 PM, Theo Cushion <[email protected]> wrote:
>>>> I agree that there is never going to be a silver bullet that will
>>>> solve all issues. However, there is always going to be a limit on
>>>> the rate of stanzas that can be dealt with in a timely manner,
>>>> whatever the platform.
>>>
>>> I'm not sure there's a silver bullet that'll solve all your problems
>>> trivially - but I'm also not sure that there isn't a solution that
>>> gains you more than what you currently propose.
>>>
>>> So, there are two things being discussed here:
>>>
>>> 1) Your use case and the need to limit the work done by the client on
>>> login. I think this is addressable for your deployment by limiting
>>> the number of rooms that need to be joined prior to there being
>>> activity in them (or possibly by using pubsub nodes rather than MUC
>>> rooms, although this is not a clear win and requires you to do
>>> significantly more client work).
>>>
>>> 2) Allowing servers to 'force' or 'autojoin' users into MUCs - this
>>> is a feature that's generally interesting, and speccing it up seems
>>> sensible even if it won't help your cases (although it might, in
>>> combination with some new server code).
>>
>> It would certainly be nice to be able to get whatever saving is
>> possible from the standards, as then everyone can benefit rather than
>> focusing on application-specific code.
>>
>>>> Anything that can be done to minimise it will create more breathing
>>>> room. By those estimates I'd say losing a third of the stanzas
>>>> across the wire is a significant optimisation.
>>>
>>> Right, it is.
>>>
>>>> Perhaps the saving could be greater; why would there be 300+ back?
>>>> If I were only the occupant, would I not just get my own presence
>>>> back?
>>>
>>> It'll receive presence from anyone in the room (I've not counted
>>> this), its own presence (I did count this), any message history
>>> requested/sent (I've not counted this) and the room subject, which
>>> indicates the join is complete (I did count this).
>>
>> Is this possibly a great fit for the Pubsub/MUC hybrid? Clients can
>> permanently subscribe selectively to the things they are interested
>> in. For example, I don't care about the room subject, but presence
>> and history I might care about. Having this information map onto
>> nodes on the MUC JID gives very fine control over what information is
>> required, using an existing standard. Could the Pubsub/MUC hybrid
>> simply come down to certain predefined mappings, plus room for
>> arbitrary information?
>
> You mean exposing the room as both a MUC and as MEP, both being
> representations of the same data? That would certainly help in your
> case. I wonder what other people think of it?
>
> /K
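To put concrete stanzas on that: the join itself is a single presence
out, and the expensive part is the flood back - one presence per
existing occupant, plus my own, plus any history and the subject. Very
roughly (JIDs made up, details trimmed):

   <!-- client joins the room: one presence out -->
   <presence to='[email protected]/theo'>
     <x xmlns='http://jabber.org/protocol/muc'/>
   </presence>

   <!-- room replies with one presence per existing occupant... -->
   <presence from='[email protected]/alice' to='[email protected]/laptop'>
     <x xmlns='http://jabber.org/protocol/muc#user'>
       <item affiliation='member' role='participant'/>
     </x>
   </presence>
   <!-- ...repeated ~300 times for a 300-occupant room... -->

   <!-- ...then my own presence (status 110 marks self-presence)... -->
   <presence from='[email protected]/theo' to='[email protected]/laptop'>
     <x xmlns='http://jabber.org/protocol/muc#user'>
       <item affiliation='member' role='participant'/>
       <status code='110'/>
     </x>
   </presence>

   <!-- ...any discussion history, and finally the subject -->
   <message from='[email protected]' type='groupchat'>
     <subject>...</subject>
   </message>

So the per-occupant presences are what dominate as rooms get larger.
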
I think we're on the same page. I'll try and illustrate with an example.

We have a normal MUC room residing at "[email protected]". However, we
also have a Pubsub root node living at the same address, and then a
number of predefined child nodes, for the sake of argument (I guess
advertised using disco features):

  - jid = "[email protected]"  node = "users"
  - jid = "[email protected]"  node = "messages"
  - jid = "[email protected]"  node = "subject"
    (I don't think this will conflict with anything?)

If I want to receive all events in Pubsub form, I subscribe to
"[email protected]". If I just want the users and the messages, I subscribe
to node "users" and node "messages" on "[email protected]" respectively.
Or I could request certain items using normal pubsub messages. It gives
me the power of Pubsub and the advantages of MUC without introducing a
load of baggage.

Just as we represent some of this information using Disco (who's in a
room, the subject, etc.), we are doing the same for Pubsub, except we
are doing it in a push fashion. I've sketched some example stanzas in
the P.S. below.

Any merit?

Theo
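P.S. A rough sketch of what the subscribe and a notification might look
like on the wire, reusing the node names above. The JIDs are made up and
the item payload format is deliberately left open:

   <!-- subscribe to just the "messages" node of the room -->
   <iq type='set' to='[email protected]' id='sub1'>
     <pubsub xmlns='http://jabber.org/protocol/pubsub'>
       <subscribe node='messages' jid='[email protected]'/>
     </pubsub>
   </iq>

   <!-- a later notification, pushed only to subscribers of that node -->
   <message from='[email protected]' to='[email protected]'>
     <event xmlns='http://jabber.org/protocol/pubsub#event'>
       <items node='messages'>
         <item id='current'>
           <!-- room message payload here; format to be defined -->
         </item>
       </items>
     </event>
   </message>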
