Hi Christopher,
"Christopher K. St. John" wrote:
> That doesn't work well in practice. It has to do with the
> varying bandwidth between users, and the difficulty of
> maintaining a reasonable set of peer interconnections.
> It's a math issue, not an implementation issue.
Maybe it is a protocol issue.
> Multicast doesn't help, because the problem is the
> clients, and the clients are (generally) all out on their
> own individual networks, so multicast degenerates into
> singlecast, and you're no better off than before.
Except that with multicast, a sender issues one packet, and
it is (in theory) received by everyone. As these are UDP
packets they can get lost for a variety of reasons, but
there are none of the delays you get from a server updating
each client in round-robin fashion. If, say, you have
10,000 clients, the outgoing traffic from the server is
roughly 1/10,000 of what a unicast client/server model
would generate. Remember, with multicast it is the network
that clones packets, and only where needed.
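To make that concrete, here is a rough Python sketch of the
send side and the join side; the group address 239.0.0.1
and port 5004 are just example values, not anything from a
real deployment:

  # sender side: one UDP send to the group; the network
  # replicates the packet to every subscribed receiver.
  import socket
  import struct

  GROUP, PORT = "239.0.0.1", 5004   # example values only

  send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  # TTL 1 keeps it on the local net; raise it to cross routers.
  send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
  send.sendto(b"state update", (GROUP, PORT))

  # receiver side: join the group (via IGMP) and wait for data.
  recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  recv.bind(("", PORT))
  mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                     socket.inet_aton("0.0.0.0"))
  recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
  data, addr = recv.recvfrom(1500)

The point is that the sender's cost is one send() no matter
how many receivers have joined the group.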
And if you are using RTP over UDP with multicast
distribution, the RTCP receiver reports that accompany the
stream give you an idea of how many participants there are
and what kind of data loss they are experiencing, so you
can tailor your output accordingly. There are multicast
monitoring tools out there, including one that I wrote,
MultiMON.
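For example, each report block in an RTCP receiver report
carries a one-byte "fraction lost" field and a 24-bit
cumulative loss count per source. A rough Python sketch of
pulling those out, assuming you already have the raw bytes
of one RTCP RR packet in hand:

  import struct

  def receiver_report_stats(pkt):
      # pkt: raw bytes of one RTCP packet
      version = pkt[0] >> 6
      count   = pkt[0] & 0x1F        # number of report blocks
      assert version == 2 and pkt[1] == 201, "not an RTCP RR"
      stats, offset = [], 8          # skip header + sender SSRC
      for _ in range(count):
          block = pkt[offset:offset + 24]
          ssrc, = struct.unpack("!I", block[0:4])
          # fixed point, binary point at the left edge
          fraction = block[4] / 256.0
          # 24-bit cumulative count (treated as unsigned here)
          lost = int.from_bytes(block[5:8], "big")
          stats.append((ssrc, fraction, lost))
          offset += 24
      return stats

Counting distinct SSRCs seen in these reports over time is
also how you get a rough participant count.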
We actually do work here with various NATO countries,
creating prototype networks with differing levels of
network connectivity, and even with nodes that are in
"no emissions" mode. Reliable multicast works quite
well.
--
John Stewart
[EMAIL PROTECTED] http://www.crc.ca/FreeWRL/