I was reading the disco categories registry
(http://xmpp.org/registrar/disco-categories.html) and XEP-0030
(http://xmpp.org/extensions/xep-0030.html) and have some questions.

Here at Buddycloud Towers we are running into some component scalability
problems that are hitting us sooner than we thought, and I need to move
from a single component handling messages to multiple components
handling the same message type.

Our main location lookup component, which handles the XEP-0255 stanzas,
is becoming IO-bound, and we need to split it into multiple components
handling the location queries.

What's the current best practice for having multiple components that
offer the same service?

Here are a couple of options I could think of (the component in
question is called butler.buddycloud.com):

* Have the client do a disco query at startup and then choose a
component based on a regex and a bit of randomness for load balancing:
a search for butler*.buddycloud.com would return all the butlers, and
the client would then connect to, say, butler33.buddycloud.com (a
client-side sketch of this follows the list).

* Have the client do a disco query at startup and then choose a random
leaf from a branch: household-staff.buddycloud.com
        butler01.household-staff.buddycloud.com
        butler02.household-staff.buddycloud.com
        etc.

* Have all the butlers sit behind a proxy butler that is registered on
the server and load-balances between the dedicated butlers (a
round-robin dispatcher sketch also follows the list).
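
For the first two options, here's a minimal Python sketch of how a
client might pick a butler from a disco#items result (XEP-0030). The iq
payload, the butlerNN JIDs and the regex are only illustrative, not
what our server actually returns:

    import random
    import re
    import xml.etree.ElementTree as ET

    # Hypothetical disco#items result (XEP-0030); the JIDs are examples only.
    DISCO_ITEMS_RESULT = """\
    <iq type='result' from='buddycloud.com' to='user@buddycloud.com/home' id='items1'>
      <query xmlns='http://jabber.org/protocol/disco#items'>
        <item jid='butler01.buddycloud.com'/>
        <item jid='butler02.buddycloud.com'/>
        <item jid='butler33.buddycloud.com'/>
        <item jid='pubsub.buddycloud.com'/>
      </query>
    </iq>
    """

    ITEM_TAG = '{http://jabber.org/protocol/disco#items}item'
    BUTLER_RE = re.compile(r'^butler\d+\.buddycloud\.com$')

    def pick_butler(iq_xml):
        """Pick one butler component at random from a disco#items result."""
        iq = ET.fromstring(iq_xml)
        jids = [item.get('jid') for item in iq.iter(ITEM_TAG)]
        butlers = [jid for jid in jids if jid and BUTLER_RE.match(jid)]
        if not butlers:
            raise LookupError('no butler components advertised')
        return random.choice(butlers)

    print(pick_butler(DISCO_ITEMS_RESULT))  # e.g. butler33.buddycloud.com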
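
For the proxy option, a rough round-robin dispatcher sketch. The
backend JIDs are made up, and forward() just stands in for whatever
sends stanzas over the real component connection (e.g. XEP-0114):

    import itertools

    class ButlerProxy:
        """Round-robin front end for a pool of dedicated back-end butlers."""

        def __init__(self, backend_jids, forward):
            self._backends = itertools.cycle(backend_jids)  # endless rotation
            self._forward = forward      # callable(to_jid, stanza_xml)
            self._pending = {}           # iq id -> JID of the original requester

        def handle_query(self, iq_id, from_jid, stanza_xml):
            """Hand an incoming XEP-0255 location query to the next butler."""
            self._pending[iq_id] = from_jid
            self._forward(next(self._backends), stanza_xml)

        def handle_result(self, iq_id, stanza_xml):
            """Relay a backend's answer to the client that originally asked."""
            requester = self._pending.pop(iq_id, None)
            if requester is not None:
                self._forward(requester, stanza_xml)

    # Toy wiring: print instead of sending over the component connection.
    proxy = ButlerProxy(['butler01.buddycloud.com', 'butler02.buddycloud.com'],
                        forward=lambda jid, stanza: print('->', jid, stanza))
    proxy.handle_query('geo1', 'user@buddycloud.com/home', '<iq .../>')

Something like this would keep the single well-known
butler.buddycloud.com address for clients while the proxy spreads the
actual lookups across the pool.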

Any tips or pointers would be most useful.  (I'll push the queries about
load balancing components between servers to the ejabberd mailing list.)

S.
-- 
Simon Tennant
Buddycloud
uk: +44 20 7043 6756               de: +49 89 420 955 854
uk: +44 78 5335 6047               de: +49 17 8545 0880
email and xmpp: si...@buddycloud.com