> If there's such a compelling need for native multicast, why 
> has it seen such limited deployment, and why is it available 
> to such a tiny proportion of the Internet?

Other reasons have already been posted, but I wanted to throw mine in:
because multicast was designed from an organizational (presumably a
university) point of view, not from the core.  When a protocol is
designed that way, you get assumptions about Internet topology that
aren't necessarily true, the biggest one being that there is some root
node at the top that connects you to the cloud, and you work down from
there.  If you have a couple of root nodes, it still works fine, but
if you're in the core, with peer connections instead of root nodes and
several orders of magnitude more connections to handle, it ceases to
work so nicely.  So another answer to your question is that, while
people would love to do multicast, it is fundamentally broken in a
wide-deployment scenario.  It simply doesn't scale to Internet size
yet, despite the numerous protocols that have been thrown at the
problem.

Big backbones don't want to deploy a broken protocol that few people
use over wide ranges, and individual sites don't want to develop for a
broken protocol that big backbones aren't deploying.  When somebody
fixes the protocol in such a way that it still benefits the
transmitter and all the interested end stations can use it, then
we'll see wide deployment.
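
To put a rough number on that scaling claim, here's a quick
back-of-the-envelope sketch in Python.  Every figure in it is an
invented assumption, not a measurement; the point is the shape of the
growth, not the exact counts:

    # Rough illustration of why per-(source, group) multicast state is
    # manageable at a campus edge and painful in the core.  All of the
    # numbers below are made-up assumptions for illustration only.

    def state_entries(active_groups, senders_per_group):
        # Source-specific multicast keeps one (S,G) forwarding entry
        # per source, per group, on every router in the tree.
        return active_groups * senders_per_group

    # A campus border router only carries state for the groups its own
    # users have joined, reached through one or two upstream "roots".
    campus = state_entries(active_groups=50, senders_per_group=2)

    # A core router with many peer connections can't assume a single
    # upstream root, so it may end up carrying state for a large share
    # of every active group crossing its peerings.
    core = state_entries(active_groups=500_000, senders_per_group=5)

    print(f"campus edge : ~{campus:,} (S,G) entries")
    print(f"core peering: ~{core:,} (S,G) entries")

The exact numbers don't matter; what matters is that the leaf network
gets to prune down to what its own users asked for, while the core
can't.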

This scenario can be applied to almost any protocol/technology that
folks want to see deployed and can't understand why it hasn't been yet.
Naming the biggest example is left as an exercise for the reader.

-Dave
