Miriam English wrote:
> 
> I have some ideas on that too. I have a feeling that
> absorbing the server stuff into the client stuff and
> running it all on the client machine may be the solution
> to this.
>

 That doesn't work well in practice. It has to do with the
varying bandwidth between users, and the difficulty of 
maintaining a reasonable set of peer interconnections.
It's a math issue, not an implementation issue.
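
 To put rough numbers on the math: in a full peer-to-peer
mesh each client has to push every update to all N-1 other
clients, so per-client upstream grows with N, while a
central server keeps the per-client cost flat. A toy
calculation (the peer count, packet size and rate below are
made-up illustration values, not measurements):

    N = 32              # peers in the world (assumed)
    update_bytes = 200  # one position/orientation update (assumed)
    rate_hz = 10        # updates per second per peer (assumed)

    # full mesh: each peer uploads a copy to every other peer
    p2p_upstream = update_bytes * rate_hz * (N - 1)  # 62,000 bytes/s

    # central server: each peer uploads exactly one copy
    cs_upstream = update_bytes * rate_hz             # 2,000 bytes/s

    print(p2p_upstream, cs_upstream)

 The p2p number lands on every client's (usually thin)
upstream link and keeps growing with N; the client/server
number doesn't grow with N on the client side at all.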

 Multiuser systems running over the internet work much,
much better with a central server architecture. If you
want to get fancy (and very complex) you can use a set of
central servers (SOCS?:-) with high bandwidth
interconnections instead of a single central server. If
you want to get very fancy and even more complex, you can
have some of the SOCS also be clients. If you want to go
overly fancy and very brittle, but buzzword-compliant, you
can have the interconnection topology of the SOCS be
dynamic.
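
 For concreteness, here's roughly what one node in that
SOCS layout looks like. This is a sketch of the idea only
(the class and method names are mine, and it ignores
sequencing, loss, and any loop prevention beyond the
simplest rule):

    # One node in a set-of-central-servers (SOCS) layout.
    class ServerNode:
        def __init__(self, name):
            self.name = name
            self.local_clients = []  # connections to end users
            self.peer_servers = []   # high-bandwidth links to other servers

        def on_client_update(self, update):
            # fan the update out to our own clients...
            for c in self.local_clients:
                c.send(update)
            # ...and hand one copy to each peer server
            for s in self.peer_servers:
                s.on_peer_update(update)

        def on_peer_update(self, update):
            # peers forward only to their local clients, never
            # back out to other servers, so a fully connected
            # server mesh doesn't loop
            for c in self.local_clients:
                c.send(update)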

 But it's all the same in the end, innit? Almost everybody
ends up on the client-only side of things and you're back
to client/server.

 Multicast doesn't help, because the problem is the
clients, and the clients are (generally) all out on their
own individual networks, so multicast degenerates into
plain unicast, and you're no better off than before. OTOH,
for the client/set-of-central-servers architecture,
multicast is a cool implementation hack.

 There have been some good articles about how Quake handles
the problem (the central server maintains perfect state
info, and then filters the updates individually for every
connection). I seem to remember Origin uses the
set-of-central-servers architecture for UOL.
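
 The filtering step is the interesting bit: the server
holds the one true copy of the world and decides, per
connection, which changes each client actually needs. A
minimal sketch of that idea (not Quake's actual code; the
distance cutoff and data shapes are invented for
illustration):

    import math

    # authoritative state, held only on the server:
    # entity id -> (x, y) position
    world = {}

    def visible_to(client_pos, entity_pos, radius=100.0):
        # illustrative interest filter: a plain distance check
        dx = client_pos[0] - entity_pos[0]
        dy = client_pos[1] - entity_pos[1]
        return math.hypot(dx, dy) <= radius

    def updates_for(client_pos, dirty_ids):
        # build the per-connection packet: only the changed
        # entities this particular client can see
        return {eid: world[eid]
                for eid in dirty_ids
                if visible_to(client_pos, world[eid])}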

 Best solution: a single central server, and accept that
it's better to have a limited but simple and useful system
than an overly complex, buzzword-compliant system with
"unlimited" scalability.


-- 
Christopher St. John [EMAIL PROTECTED]
DistribuTopia http://www.distributopia.com
