On 17 Apr 2007, at 23:47, Nic James Ferrier wrote:
Gordon Joly <[EMAIL PROTECTED]> writes:
At 10:31 +0100 17/4/07, Ian Forrester wrote:
I think it can scale if they open up the queuing system and stick to
charging for SMSes. I think Kosso has the right idea -
http://kosso.wordpress.com/2007/03/28/os-twitter-and-services/
How will charging affect packets going through routers?
Charging is not necessary... it just has to be designed correctly.
Twitter is just in need of horizontal scaling. Split the namespace
across many servers and it would scale.
No problem.
Which is why I don't understand why they're having some
problems. Well, I do. It's because they're using rails. If you do that
it suggests you don't know what you're doing.
[sits back and waits for everyone to explode with rage]
Nic,
Without being the flag bearer of the Rails brigade [1]: the fact that
they are using Rails has nothing significant to do with their problems -
they'd exist on any platform. It's fortunate it's not a platform
that requires rigmarole to upgrade - I'd hate to see Twitter
having to amend a Volume License Agreement every week. I don't
know what the actual technical competence of this list is, but aside
from joining the dots with mashups, I've yet to see much here that is
truly groundbreaking, impressive or unique - which is what makes this
conversation so empty and pointless.
It's true: Twitter hasn't really done anything magical, other than
connecting mobile, IM and the web in a tangled mesh of ubiquity. Sure,
there are problems - from design to use: bear in mind that the
Twitter crew's original mantra was a tool for telling friends where
you are and what you're up to (the sort of thing that Jaiku et al
are really homing in on, by demoting the conversation).
So, as to your suggestion - adding more servers. That's an easy fix when
you have a service generating income. Twitter, currently, does not.
So who keeps paying for the machines? Who keeps paying for the text
messages? Twitter's SMS bill is large enough to get the attention of
any provider out there.
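
For argument's sake, "split the namespace" boils down to something like
the sketch below - the shard count, hostnames and hash are invented for
illustration, not anything Twitter actually runs:

  import hashlib

  # Toy namespace split: hash the username to pick one of N servers, so all
  # of a user's data and lookups land on the same box. Names are placeholders.
  SHARDS = ["db01.internal", "db02.internal", "db03.internal", "db04.internal"]

  def shard_for(username):
      digest = hashlib.md5(username.encode("utf-8")).hexdigest()
      return SHARDS[int(digest, 16) % len(SHARDS)]

  # e.g. shard_for("gordonjoly") always returns the same host, so reads and
  # writes for that user can be routed consistently.

That's the easy half, and it says nothing about who pays for the extra
boxes or for the SMS bill.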
Developers who understand scalability know that it's often a plumbing
problem: as soon as one pipe is patched, a leak springs somewhere else.
You constantly have to find the pressure points and relieve them until
the system is in balance. Right now Twitter is struggling because it's
run out of compute cycles; next week it may be the database.
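
To make that concrete - this is a toy sketch, not Twitter's code, and
every name in it is made up: cache the rendered timeline and you stop
burning CPU on every page view, but now every new post has to bust one
cache entry per follower, and the pressure lands on the datastore instead.

  messages = {}      # stand-in datastore: user -> list of posts
  page_cache = {}    # stand-in for a cache like memcached: user -> rendered HTML

  def render_timeline(user):
      # Cache hit: no rendering cost at all.
      if user in page_cache:
          return page_cache[user]
      html = "<ul>" + "".join("<li>%s</li>" % m for m in messages.get(user, [])) + "</ul>"
      page_cache[user] = html
      return html

  def post_update(author, text, followers):
      # One post fans out to every follower and busts a cache entry for each:
      # the cost hasn't gone away, it has just moved downstream.
      for follower in followers:
          messages.setdefault(follower, []).append("%s: %s" % (author, text))
          page_cache.pop(follower, None)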
Twitter currently has a traffic rank inside the top 500 websites - and
the site is completely dynamic. Google currently indexes over 220,000
pages from twitter.com. It's not a trivial problem, and it's not
something that a few more servers will fix: Twitter needs a new
architecture so that it can run the service properly. In practice that
means transitioning to a core, Twitter-centric codebase - i.e., doing
exactly what Amazon, eBay and others have done: replace the web
scripting language they prototyped in and roll their own, where it
makes sense.
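
By "new architecture" I mean things like getting the expensive work out
of the web request altogether. A hypothetical sketch (not Twitter's
actual stack - the queue, worker and gateway calls below are all
placeholders): the front end does nothing but accept the post, and a
separate worker pool handles the fan-out and SMS delivery behind a queue.

  import queue
  import threading

  outbox = queue.Queue()

  def handle_post(author, text):
      # Web request path: do the minimum and return immediately.
      outbox.put((author, text))
      return "202 Accepted"

  def lookup_followers(author):
      return []      # placeholder: would hit the follower store

  def send_sms(number, body):
      pass           # placeholder: would call the SMS gateway

  def delivery_worker():
      # Background path: the slow work (fan-out, SMS gateway calls) happens
      # here, at whatever rate the gateway and the budget allow.
      while True:
          author, text = outbox.get()
          for follower in lookup_followers(author):
              send_sms(follower, "%s: %s" % (author, text))
          outbox.task_done()

  threading.Thread(target=delivery_worker, daemon=True).start()

That, roughly, is what "open up the queuing system" amounts to - and it's
a change that is language-agnostic, which is rather the point.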
So hop off the language-hate bandwagon, because no one cares.
Instead, add something constructive.
Sincerely -
James Cox
[1] Seriously, I really don't give a crap what platform you prefer.