Hi Noel,

In "Re: Arguments in favour of Core-Edge Elimination vs. Separation?"
you wrote:

> In a system which is designed for a very long lifetime, design for the outer
> envelope of technology today - because tomorrow's will be a lot more capable.
> Yes, it has to be possible, and vaguely economic with today's technology, but
> over the entire life-cycle of the technology, you'll be happier overall if
> you 'push' the design, even though it may cause a little heart-burn in the
> very earliest stages.

I entirely agree with you on this.  I am tempted to cite it as:

  Noel Chiappa's principle of Adventurous Practical Design

This is precisely what I have been doing with Ivip since June 2007.

In that time, no-one from the LISP camp has ever written a critique
of Ivip.  They seem to shy away from it because they know in their
bones that real-time (a few seconds) end-user control of ITRs can't
be done.

I hope you will read the ID I wrote a few days ago:

   http://tools.ietf.org/html/draft-whittle-ivip-fpr-00

Ignoring blank space, there are 30 pages to read - it would take you
an hour or so.

If there is a practical way of getting mapping in real-time to local
full-database query servers, then it is easy to get it all the way to
every ITR which needs it - also in real-time.  Then most of the
problems you mention in your LISP critique disappear.
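To make that path concrete, here is a rough sketch of the ITR side
(the message layouts and names are my own illustration, not the wire
formats defined in draft-whittle-ivip-fpr-00): an ITR with no cached
mapping asks its local full-database query server, then caches the
reply for a short time, so pushing mapping to the query servers in
real-time effectively pushes it to every ITR which needs it.

  #include <stdint.h>
  #include <time.h>

  struct map_query {              /* ITR -> local query server        */
      uint32_t spi;               /* SPI (EID) address to be tunnelled */
  };

  struct map_reply {              /* query server -> ITR              */
      uint32_t spi;               /* echoed SPI address               */
      uint32_t etr;               /* ETR currently mapped to the SPI  */
      uint32_t ttl_secs;          /* short caching time at the ITR    */
  };

  /* ITR-side cache entry.  The local query server holds the full
   * database and receives pushed updates in real time; with short
   * TTLs (and the server notifying caching ITRs when a mapping
   * changes), every ITR's view lags the end-user's command by only
   * seconds. */
  struct itr_cache_entry {
      uint32_t spi;
      uint32_t etr;
      time_t   expires;           /* now + ttl_secs at reply time     */
  };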

There may well be a better method of real-time distribution than
this.  If you think my approach is practical, then please say so.
All this is available for future versions of LISP.

Most of LISP's problems today result from the designers failing to
follow your Doctrine - so the LISP design is boxed in by its
inability to get mapping to ITRs in real-time.  Since its inception,
the designers have been adding elaborations to overcome the resulting
problems - and the result is a complex mess which will never achieve
what can be achieved with relative ease once you have real-time
mapping distribution to ITRs.

If you believe that the approach I suggest - or any other approach
to give end-user networks direct control over ITRs - can never work,
then please state why.

Please give detailed references to the parts of my proposal which
you think can't work - not a generalised statement about query
servers lacking storage capacity, or about past experience
indicating that such things can never work.

Servers today have quad core CPUs running at close to 3GHz. They can
easily have 8, 12, 16 or 24 gigabytes of RAM.

8 gigabytes of server RAM -  Kingston ECC DDR3-1066 SDRAM - costs
$289.88 at *Walmart*.

I can store mapping for 2 billion IPv4 SPI (EID) addresses in 8
gigabytes of RAM - simply by having a separate array for each MAB
(Mapped Address Block), with a 32 bit ETR address per IP address:
2 billion addresses x 4 bytes = 8 gigabytes.
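As a minimal sketch (my own illustrative names, simplified to a bare
ETR address per SPI address), the per-MAB array scheme is just direct
indexing:

  #include <stdint.h>

  /* One Mapped Address Block (MAB): a contiguous range of SPI (EID)
   * addresses, with one 32-bit ETR address per covered IPv4
   * address. */
  struct mab {
      uint32_t  base;    /* first SPI address in the block         */
      uint32_t  count;   /* number of addresses covered            */
      uint32_t *etr;     /* etr[i] = ETR for (base + i); 0 = none  */
  };

  /* Apply one real-time mapping update pushed to the query server. */
  static void mab_update(struct mab *m, uint32_t spi, uint32_t etr_addr)
  {
      if (spi - m->base < m->count)      /* unsigned wrap-around     */
          m->etr[spi - m->base] = etr_addr;  /* = cheap range check  */
  }

  /* Look up the ETR for an SPI address: one check, one array read. */
  static uint32_t mab_lookup(const struct mab *m, uint32_t spi)
  {
      return (spi - m->base < m->count) ? m->etr[spi - m->base] : 0;
  }

An update is one word write and a lookup is one bounds check plus one
array read - so neither storage nor lookup speed is the bottleneck.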

So there's no problem storing or looking up mapping in a server
today.  IPv6 is more bloated, but by the time IPv6 is widely used,
RAM will be still more plentiful.  Ordinary PCs will have 16 core
CPUs and a few hundred gigs of RAM - for common everyday purposes
such as running Second Life.


However, the principle you enunciated above does not mean we should
design Internet protocols which require every host to do more routing
and addressing work, since some or many of those hosts will be on
links which are slow, unreliable and/or expensive.

It's OK to design for greater technical capability in network elements
where this is clearly going to be feasible.

Wireless communication techniques are close to physical limits.
There's only so much spectrum and you can't plaster an entire country
with base-stations every 100 metres.  So it would be unwise to design
as if the limitations of wireless links will be largely eliminated in
the future.

While some or many wireless links will be very good, even in the
not-too-distant future (2020) there will still be billions of
Internet hosts on wireless links which can never be as fast, reliable
and inexpensive as fibre or copper.

Consider, for instance, a commercial airliner flying over the ocean.
Unless the ocean is peppered with fibre-linked buoy-mounted "ground"
stations (an abomination I strongly oppose), the link will probably
be via a geostationary satellite - which involves long latencies and
high costs.  LEO satellites are not out of the question, but there
are real problems with phased-array antennae on the fuselage tracking
multiple such satellites at the same time - and even then, the LEO
satellites are over the ocean too, far from any ground station.  MEO
satellites might be OK, but they too involve significant latency.
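To put a number on "long latencies": straight-line propagation delay
alone - ignoring queueing, coding and processing - already rules out
snappy interaction via GEO.  A back-of-envelope calculation, using
the standard GEO altitude:

  #include <stdio.h>

  int main(void)
  {
      const double c_km_s = 299792.458;  /* speed of light, km/s      */
      const double geo_km = 35786.0;     /* GEO altitude over equator */

      /* One hop: ground -> satellite, or satellite -> ground. */
      double hop_s = geo_km / c_km_s;                    /* ~0.119 s */

      /* A request and its reply each go up and down: four hops. */
      printf("one-way minimum:    %.0f ms\n", 2 * hop_s * 1000.0);
      printf("round-trip minimum: %.0f ms\n", 4 * hop_s * 1000.0);
      return 0;
  }

That is roughly 240 ms one-way and 480 ms round-trip, before any
terrestrial backhaul or queueing is added.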

 - Robin
