Re: MPLS, IETF, etc..

2001-09-04 Thread Jon Crowcroft



a node might be simpler but the system composed of a graph of such
nodes more complex - you (as switch or router vendors) might get to
make your h/w or s/w simpler at the level of forwarding, but the
overall system that manages routes and traffic might be less simple
and (therefore) more failure prone

van jacobson's keynote at SIGCOMM 2001 (last week in san diego) made
this point very clearly.

local optimisations often aren't, globally.

In message [EMAIL PROTECTED], Natale, Robert C (Bob) typed:

  From: Bob Braden [mailto:[EMAIL PROTECTED]]
  Sent: Saturday, September 01, 2001 1:29 PM
 
 Hi Bob,
 
  Simplicity, in this case, seems to be in the eye of the beholder.
 
 There is certainly some universal truth in that statement.
 
  I don't get why label swapping is any simpler than hop/hop forwarding.
 
 It's simpler, IMHO, because it accomplishes more and does so in
 a way that is globally beneficial.
 
 That is, MPLS (in its fundamental goals) goes a long way toward
 integrating L3 and L2 in a way that leverages the strengths and
 discounts the weaknesses of the two paradigms:
 
L3/routing/packet/connectionless
L2/switching/circuit/connection-oriented
 
 The concept of scaling hop/hop forwarding via more capable hardware
 has its benefits (mostly of the short-term economic variety...which
 can be quite powerful, I agree), but is in the long run (I believe)
 inferior (in terms of scalability and synthesis, at least) to a more
 fundamental architecture/software solution.
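 To make the contrast concrete, here is a toy sketch in Python (the
 prefixes, labels, and interface names are all made up): hop/hop
 forwarding does a longest-prefix match against the full destination
 address at every hop, while label swapping is a single exact-match
 lookup on a short fixed-size label that was set up in advance.

    # Toy contrast (made-up tables): longest-prefix match per hop
    # vs. a single exact-match label lookup per hop.
    from ipaddress import ip_address, ip_network

    RIB = {  # hop/hop forwarding: prefix -> outgoing interface
        ip_network("10.0.0.0/8"): "if0",
        ip_network("10.1.0.0/16"): "if1",
        ip_network("0.0.0.0/0"): "if2",
    }

    LFIB = {  # label swapping: in-label -> (out-label, interface)
        17: (42, "if1"),
        18: (99, "if0"),
    }

    def ip_forward(dst):
        # longest-prefix match: consider every matching prefix,
        # then keep the most specific one
        matches = [n for n in RIB if ip_address(dst) in n]
        return RIB[max(matches, key=lambda n: n.prefixlen)]

    def label_forward(label):
        # one exact lookup, then swap the label and forward
        return LFIB[label]

    print(ip_forward("10.1.2.3"))  # -> if1 (matched 10.1.0.0/16)
    print(label_forward(17))       # -> (42, 'if1')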
 
 Thanks,
 
 BobN
 

 cheers

   jon




Re: MPLS, IETF, etc..

2001-09-04 Thread Mahadevan Iyer



Jon Crowcroft wrote:

 a node might be simpler but the system composed of a graph of such
 nodes more complex - you (as switch or router vendors) might get to
 make your h/w or s/w simpler at the level of forwarding, but the
 overall system that manages routes and traffic might be less simple
 and (therefore) more failure prone

 van jacobson's keynote at SIGCOMM 2001 (last week in san diego) made
 this point very clearly.

 local optimisations often aren't, globally.


In general, stronger global guarantees on performance require stronger coupling
between network nodes. The coupling could take the form of coordinated decisions
or at least an exchange of information. Of course, strongly coupled networks
also carry a greater risk of catastrophe (a sudden degradation of performance) if
the coupling is such that the failure of a few nodes quickly degrades local
performance at all the other nodes, or if the strong coupling introduces so much
h/w or s/w complexity in the nodes that they become less reliable.
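
To make that failure mode concrete, here is a toy cascade model in Python
(entirely made up, not a model anyone in this thread proposed): each node
carries load with some headroom of spare capacity; when a node fails, its
load is pushed onto its neighbors, and an overloaded neighbor fails in turn.
With little headroom (tight coupling) one failure can sweep the graph; with
plenty of slack it stays local.

    # Toy cascade model (illustrative only): when a node fails, its
    # load is redistributed to surviving neighbours; a neighbour that
    # is pushed past its capacity fails too and the cascade continues.
    import random

    def cascade(n_nodes=100, degree=4, headroom=0.2, seed=1):
        random.seed(seed)
        load = {i: 1.0 for i in range(n_nodes)}
        cap = {i: 1.0 + headroom for i in range(n_nodes)}
        # random neighbour lists (a crude stand-in for a topology)
        nbrs = {i: random.sample([j for j in range(n_nodes) if j != i],
                                 degree)
                for i in range(n_nodes)}
        failed = {0}      # knock out a single node
        frontier = [0]
        while frontier:
            node = frontier.pop()
            alive = [v for v in nbrs[node] if v not in failed]
            share = load[node] / max(1, len(alive))
            for v in alive:
                load[v] += share
                if load[v] > cap[v]:   # overloaded -> fails too
                    failed.add(v)
                    frontier.append(v)
        return len(failed)

    print(cascade(headroom=0.05))  # tight coupling: large cascade
    print(cascade(headroom=1.00))  # lots of slack: failure stays local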

On the other hand, in a 'cloudy' network of loosely coupled nodes that do not
coordinate their actions, it is very difficult to predict or control end-to-end
performance quantitatively or even qualitatively. It's not surprising that, say,
end-to-end TCP performance in an IP network is analyzable only for a single
bottleneck link. With queuing at more than one link, the analysis quickly becomes
intractable, or at least difficult.
However, loosely coupled networks with simple nodes can be made more robust and
immune to catastrophes.
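
For what it's worth, the single-bottleneck case is the one place where a simple
closed form is well known: the square-root law of Mathis et al. (1997),
throughput ~= (MSS/RTT) * sqrt(3/2) / sqrt(p) for loss rate p. A quick sanity
check in Python (the traffic numbers below are made up):

    # Mathis et al. (1997) approximation for steady-state TCP
    # throughput through a single bottleneck with loss rate p:
    #   BW ~= (MSS / RTT) * sqrt(3/2) / sqrt(p)
    from math import sqrt

    def mathis_throughput(mss_bytes, rtt_s, loss_rate):
        return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_rate)

    # e.g. 1460-byte segments, 100 ms RTT, 1% loss:
    bps = mathis_throughput(1460, 0.100, 0.01)
    print(f"{bps / 1e6:.2f} Mbit/s")   # roughly 1.4 Mbit/s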

I guess many of the above concepts apply to distributed systems in general, and not
just the Internet.

Perhaps the only way to understand and predict global performance in a large cloudy
network is to use gigantic simulations to build massive databases of network
behavior under all possible scenarios. Maybe do a distributed simulation using
volunteer computers all over the net, along the lines of SETI@home or the Mersenne
prime search, etc. That would be using the net to simulate itself, in a way :)
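
A hypothetical sketch of the coordinator side of such a volunteer simulation,
in Python (every name and parameter below is invented; real SETI@home-style
systems add work signing, redundancy, and result validation):

    # Hypothetical sketch of a SETI@home-style volunteer simulation:
    # the scenario space is chunked into work units, volunteers pull
    # units, simulate them locally, and report summaries back.
    import itertools
    import queue
    import random

    def make_scenarios():
        # toy cross-product of simulation parameters
        topologies = ["mesh-16", "ring-64", "tree-128"]
        loss_rates = [0.001, 0.01, 0.05]
        loads = [0.3, 0.6, 0.9]
        return itertools.product(topologies, loss_rates, loads)

    def run_simulation(scenario):
        # stand-in for a real network simulation of one scenario
        topo, loss, load = scenario
        random.seed(hash(scenario))
        return {"scenario": scenario,
                "mean_delay_ms": 10 + 1000 * loss * load * random.random()}

    results = []                    # the "massive database"
    work = queue.Queue()
    for s in make_scenarios():
        work.put(s)
    while not work.empty():         # each iteration = one volunteer unit
        results.append(run_simulation(work.get()))
    print(len(results), "scenarios simulated")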