RE: Don't feed the trolls

2006-03-31 Thread Soliman, Hesham
Pekka, 

 > I've yet to see Anthony make any useful contribution to the IETF 
 > (various rants on the IETF list don't count).

=> I don't know who he is and have never read his emails, but I find
statements like the above unhelpful and unnecessary. If you really
believe that, then stop reading his emails. Part of working in a large
group is accepting the presence of all sorts of people. Public
blacklisting of people is not a useful contribution either. 

 > 
 > Perhaps we should just ban him from the list.

=> Perhaps not. This is ridiculous and a slippery slope.
In the interest of making useful contributions, this will be my *only*
email on this thread. 

Hesham

 > 
 > -- 
 > Pekka Savola                 "You each name yourselves king, yet the
 > Netcore Oy                    kingdom bleeds."
 > Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
 > 

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was:

2006-03-31 Thread Peter Sherbin
> Immediately blowing 2^125 addresses is absurd.

We want to network the world inside and around us
and then automate it. IPv6 is timely and suits both
purposes well.

[EMAIL PROTECTED]



__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



RE: Stupid NAT tricks and how to stop them.

2006-03-31 Thread Michel Py
Christian,

What you wrote is doubly incorrect.
First, you missed the context:

>> Noel Chiappa wrote:
>> Needless to say, the real-time taken for this process to complete
>> - i.e. for routes to a particular destination to stabilize, after a
>> topology change which affects some subset of them - is dominated by
>> the speed-of-light transmission delays across the Internet fabric.
>> You can make the speed of your processors infinite and it won't make
>> much of a difference.

> Christian Huitema wrote:
> Since events imply some overhead in processing, message passing,
> etc, one can assume that at any given point in time there is a
> limit to what a router can swallow.

This is true indeed, but a) this limit has everything to do with
processing power and available bandwidth and nothing to do with the
speed of light, and b) the context was talking about infinite processing
power anyway.


> Bottom line, you can only increase the number of routes
> if you are ready to dampen more aggressively.

There is no close relation. Dampening affects routes that flap. If the
new routes don't flap, all that is required is more memory to hold them
and slightly more CPU to perform lookups, but not much more, as the
relation between lookup time and table size is logarithmic. Read below
for handling routes that flap, because some of them indeed do.


> There is an obvious "tragedy of the commons" here: if more network
> want to "multi-home" and be declared in the core, then more aggressive
> dampening will be required, and each of the "multi-homed" networks
will
> suffer from less precise routing, longer time to correct outages, etc.

Again I don't see a relation here. Assuming that the newer prefixes in
the core flap about as much as the current ones, what is required to
handle more of them is to increase computing power and bandwidth in
order to keep the load below what a router can swallow.
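Huitema's N*F/D budget quoted above can be put in numbers. A toy sketch (all figures hypothetical, chosen only to illustrate the scaling) of the point that raising a router's event budget substitutes for heavier dampening:

```python
# Toy model of Huitema's constraint: a router sees roughly N*F/D events/sec
# (N prefixes, F flap frequency, D dampening factor). Numbers are made up.
def events_per_second(n_prefixes, flap_hz, dampening):
    return n_prefixes * flap_hz / dampening

budget = 1000.0          # events/sec this router can swallow (assumed)
n, f = 200_000, 0.01     # 200k prefixes, each flapping once per 100 s

# Without dampening the router is over budget...
assert events_per_second(n, f, dampening=1) == 2000.0
# ...dampening twice as aggressively restores it...
assert events_per_second(n, f, dampening=2) == 1000.0
# ...but a router with twice the budget absorbs it with no dampening at all.
assert events_per_second(n, f, dampening=1) <= 2 * budget
```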

> There are different elements at play that also limit the number of
> core routers. Basically, an event in a core router affects all the
> paths that go through it, which, depending on the structure of the
> graph, is somewhere between O(M*log(M)) and O(M^2). In short, the
> routing load grows much faster than linearly with the number of core
> routers.

I agree; the relation between processing power requirements and number
of prefixes is roughly exponential, but back to the real world:

Years ago there was a frantic forklift-upgrade business to get the
biggest, baddest BFR from vendors even before the paint was dry, and
this happened because we were indeed starving for more CPU and more
memory.

This does not happen today. As Stephen points out, even the little guys
aren't complaining anymore and vendors don't even put the latest
technology they can in their products because nobody's screaming for it
anymore.

In short: the IPv6 idea of reducing the size of the routing table would
have been necessary if IPv6 had been deployed and had replaced v4 five
years ago. We have missed that launch window, and as of today this
problem has been solved by time; I hear that we could handle a million
prefixes with today's technology.

If it takes a THz processor to handle 10 million prefixes and a 100THz
one to handle 100 million, I don't care, as long as said processors are
on the shelf at Fry's for $200 apiece and on a vendor's Sup for $100K
apiece.

Michel.




Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: StupidNAT tricks and how to stop them.)

2006-03-31 Thread Anthony G. Atkielski
Iljitsch van Beijnum writes:

> And in reaction to other posts: there is no need to make the maximum
> address length unlimited, just as long as it's pretty big, such as  
> ~256 bits.

But there isn't much reason to not make it unlimited, as the overhead
is very small, and specific implementations can still limit the actual
address length to a compromise between infinity and the real-world
network that the implementation is expected to support.

> The point is not to make the longest possible addresses,
> but to use shorter addresses without shooting ourselves in the foot
> later when more address space is needed.

Use unlimited-length addresses that can expand at _either_ end, and
the problem is solved.  When more addresses are needed in one
location, you add bits to the addresses on the right; when networks
are combined and must have unique addresses, you add bits on the left.
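A minimal sketch of that either-end growth, treating an address purely as a bit string (hypothetical illustration, not any defined address format):

```python
# Hypothetical bit-string addresses that can grow at either end.
def add_hosts(addr: str, extra_bits: str) -> str:
    """More addresses needed at one location: append bits on the right."""
    return addr + extra_bits

def merge_networks(prefix: str, addr: str) -> str:
    """Two networks combined: disambiguate by prepending bits on the left."""
    return prefix + addr

a = "10110011"
assert add_hosts(a, "01") == "1011001101"
assert merge_networks("1", a) == "110110011"
# The old address survives intact as a suffix of the merged one.
assert merge_networks("1", a).endswith(a)
```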





Re: 128 bits should be enough for everyone, was:

2006-03-31 Thread Anthony G. Atkielski
Dave Cridland writes:

> I do understand your argument, and you're correct in all its
> assertions, but not the conclusion. I suspect that's the case for 
> everyone at this point.

Not as long as I still see people claiming that 128 bits will provide
2^128 addresses _and_ that the address can still be divided into
multiple bit fields.

> You state, loosely, that 128 bits will not realistically yield
> 2**128 addresses, which is entirely true.

Yes.

> It's been pointed out that IPv6 wasn't designed for that, instead,
> it was designed to yield 2**64 subnets, and even so, it's
> acknowledged that a considerable amount of that space will be
> wasted. People have agreed with this, but pointed out that the
> "subnet" level can be moved down, since we're only using an eighth
> of the available address space.

I don't think many people appreciate just how quickly such policies
can exhaust an address space--mainly because they keep emphasizing
that 2^n addresses are available in n bits, apparently oblivious to
the multiple factors that will waste most of the addresses.

> Your conclusion, however, is that we should be switching to a
> zero-wastage allocation mechanism preferably based on variable 
> bitlength addresses.

That is one option.  Another is to stop trying to plan the entire
future of IP addressing today.  As I've said, just adding one more bit
to 32-bit addresses would hold the Internet together for years to
come.  Immediately blowing 2^125 addresses is absurd.

> In response to this, several people have commented that this
> is unworkable using both current hardware and any hardware
> predicted to be available within the next few years. I don't
> know about that, but I'm prepared to accept that opinion.

I'll accept the opinion, but as long as it remains opinion, I can
continue to assert the contrary.  I don't see any insurmountable
obstacle that would prevent this type of implementation.  Indeed, I
should think it would greatly simplify routing.

> There's an additional unanswered question your argument has, which is
> whether the - very real - issues you're pointing out with prefix 
> based allocations will cause actual operational problems within a 
> timeframe short enough for anyone to worry over for a few decades, 
> and - a related issue - would these problems hit sufficiently quickly
> that a replacement for IPv6 couldn't be developed in time?

In this respect I'm going by past history.  As I've said, engineers
routinely underestimate capacity and overestimate their own ability to
foresee the future, often with expensive and defect-ridden results.
The Internet gets bigger all the time, and the cost of these mistakes
will be astronomically high in the future--more than high enough to
justify changing this mindset.  I'm just trying to limit the damage by
suggesting changes as early as possible.

Has anyone else noticed that the simplest standards tend to last the
longest, and that complex, committee-designed standards are often
obsolete even before the 6000-page specifications are printed and
bound?  I see that SMTP is still around, but I don't see too many
people using X.400.  I see people writing code in C, but not in Ada.





Re: Proposed 2008 - 2010 IETF Meeting dates

2006-03-31 Thread Dave Crocker



> Also note that local holidays may be city specific not country specific.
> It's quite impractical to consider city holidays three years out.

Not if the city is chosen 2-3 years out.

d/

--

Dave Crocker
Brandenburg InternetWorking




RE: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: StupidNAT tricks and how to stop them.)

2006-03-31 Thread Hallam-Baker, Phillip
I agree with Steve here; we have plenty of tools at our disposal and
eight tries to get it right.

Variable length addresses would be much more expensive to support and there
really is no reason to expect 128 bits to be insufficient unless the
allocation mechanism is completely broken, something that more bits will not
cure.

If variable length addressing had been proposed when IPv4 was being designed
it might well have avoided the need for IPv6, or at least the need for IPv6
to affect the end user to the extent it does.

At this point the IPv6 address space is a decision that has already been
made and would take over a decade to change. IPv4 space runs into
exhaustion first, so that is not an acceptable option.

> -Original Message-
> From: Steven M. Bellovin [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 30, 2006 11:11 PM
> To: ietf@ietf.org
> Subject: Re: 128 bits should be enough for everyone, was: 
> IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: 
> StupidNAT tricks and how to stop them.)
> 
> On Thu, 30 Mar 2006 20:43:14 -0600, "Stephen Sprunk"
> <[EMAIL PROTECTED]> wrote:
> 
> > 
> > That's why 85% of the address space is reserved.  The /3 we are
> > using (and even then only a tiny fraction thereof) will last a long,
> > long time even with the most pessimistic projections.  If it turns
> > out we're still wrong about that, we can come up with a different
> > policy for the next /3 we use.  Or we could change the policy for
> > the existing /3(s) to avoid needing to consume new ones.
> > 
> 
> I really shouldn't waste my time on this thread; I really do know
> better.
> 
> You're absolutely right about the /3 business -- this was a very
> deliberate design decision.  So, by the way, was the decision to use
> 128-bit, fixed-length addresses -- we really did think about this
> stuff, way back when.
> 
> When the IPng directorate was designing/selecting what's now IPv6,
> there was a variable-length address candidate on the table: CLNP.  It
> was strongly favored by some because of the flexibility; 
> others pointed
> out how slow that would be, especially in hardware.
> 
> There was another proposal, one that was almost adopted, for something
> very much like today's IPv6 but with 64/128/192/256-bit addresses,
> controlled by the high-order two bits.  That looked fast enough in
> hardware, albeit with the destination address coming first in the
> packet.  OTOH, that would have slowed down source address checking
> (think BCP 38), so maybe it wasn't a great idea.
> 
> There was enough opposition to that scheme that a compromise was
> reached -- those who favored the 64/128/192/256 scheme would accept
> fixed-length addresses if the length was changed to 128 bits from 64,
> partially for future-proofing and partially for flexibility in usage.
> That decision was derided because it seemed to be too much address
> space to some, space we'd never use.
> 
> I'm carefully not saying which option I supported.  I now think,
> though, that 128 bits has worked well.
> 
>   --Steven M. Bellovin, http://www.cs.columbia.edu/~smb
> 




RE: Proposed 2008 - 2010 IETF Meeting dates

2006-03-31 Thread Hallam-Baker, Phillip

> From: Brian E Carpenter [mailto:[EMAIL PROTECTED] 

> It works the other way round. We fix our dates 2 or 3 years 
> in advance, avoiding clashes with other organizations and 
> international holidays as much as possible. Site selection 
> inevitably comes later, which means local holidays may 
> influence site selection, but not date selection.
> 
> Also note that local holidays may be city specific not 
> country specific.
> It's quite impractical to consider city holidays three years out.

Booking the Moscone three years out is the canonical way that conference
companies fail. 

The IETF is nowhere near big enough for a shortage of venues of
sufficient size to be a serious problem. If you are running a 10,000+
person conference there are few venues that work; if you get to 20,000
there are serious issues and you may be forced into advance booking.

The meeting could have been twice as large without straining the
facilities in Dallas. There are probably at least 500 hotels in the US
designed to take a conference of the IETF's size. The IETF is not a
desperately profitable conference from the hotel's point of view, but it
does make more money than an empty hotel. Hotels are only going to be
willing to offer the kind of rate the IETF is willing to pay if they are
confident that no more profitable alternative is on offer.





Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: StupidNAT tricks and how to stop them.)

2006-03-31 Thread JFC (Jefsey) Morfin

At 04:43 31/03/2006, Stephen Sprunk wrote:

> If IPv6 is supposed to last 100 years, that means we have ~12.5
> years to burn through each /3, most likely using progressively
> stricter policies.

I suppose you mean 16.66 years (only 5 more /3s are available). That is
one way of seeing things.
It means there are still 4 years to go before the next /3 starts being
used. That seems a good forecast, in line with observation and demand.
But it also means that decisions have to be taken now.
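The arithmetic behind the two figures, as a quick check (the "5 available /3s" count is Jefsey's premise above, not an RIR statement):

```python
# Lifetime per /3 under the "IPv6 should last 100 years" assumption.
horizon_years = 100
total_slices = 8         # a /3 is one eighth of the IPv6 address space

# Sprunk's figure divides the horizon over all eight /3s:
assert horizon_years / total_slices == 12.5

# Jefsey counts only the /3 in use plus the 5 he considers free:
usable_slices = 1 + 5
assert abs(horizon_years / usable_slices - 16.66) < 0.01   # ~16.66 years each
```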


> There's also plenty of time to fix it if we develop consensus
> there's a problem.

- don't you think it is clear by now that there is a rough market
consensus?
- what makes you believe that the IETF is the proper place to take
care of such a "fix"? I love competition when it makes sense. I would
certainly favor the IETF and the ITU competing to best provide an
IPv6 service that Internet users dearly miss, before we get other
grassroots solutions (are you sure that NATs are the only one?).


There are obviously two schools of thought about IPv6 numbering:
- "what exists is nearly perfect and we need to implement it to prove
it, but we do not know how to get it implemented."
- "what exists is wrong, and this is the reason why it is not
implemented."

IMHO both schools should be given an equal chance to show they are
right, and probably to address different types of problems.


BTW, to deploy IPv6, I suggest that every new ICANN TLD should only
accept registrations with IPv6 addresses (why new TLDs if not for a
new network?). The day ".xxx" is accepted, the network would turn IPv6.

jfc






Re: Proposed 2008 - 2010 IETF Meeting dates

2006-03-31 Thread Brian E Carpenter

Joel,

Joel Jaeggli wrote:

> On Tue, 28 Mar 2006, JORDI PALET MARTINEZ wrote:
>
>> I think it is clear that we need to fix the meeting dates, and that
>> should be done in advance so we avoid clashes with other events and
>> can negotiate with hotels and sponsors ahead of time enough to make
>> it cheaper.
>>
>> Where I don't agree is in taking national holidays into consideration
>> unless they are (almost) *worldwide* ones. Otherwise, taking the
>> national holidays from one or the other country will be
>> discriminatory for the rest, moreover when we don't know the place we
>> will meet 3-4 years in advance. Otherwise we need to manage at the
>> same time the meeting date and the place for each meeting, which we
>> know is impossible.
>
> I mean at the meeting venue.


It works the other way round. We fix our dates 2 or 3 years in advance,
avoiding clashes with other organizations and international holidays
as much as possible. Site selection inevitably comes later, which means
local holidays may influence site selection, but not date selection.

Also note that local holidays may be city-specific, not country-specific.
It's quite impractical to consider city holidays three years out.

Brian



RE: Stupid NAT tricks and how to stop them.

2006-03-31 Thread Francois Menard


Does that constraint remain if peering happens closer to the edge, to
ASes that are more regional in nature? For instance, say the FCC
mandated peering at the LATA level over IP-based IMTs / bill-and-keep
trunks.


This is not far-fetched: say the PSTN transitions to SIP, and say that
peering on a bill-and-keep basis must extend to video conferencing over
IP and not only to voice. Once the bill-and-keep traffic is IP, it makes
sense to run the rest of the IP traffic there too, including the P2P
traffic transiting through regional ASes.


So my thought is that we can enable multihoming, but keep it regional,
and then we do not have to worry about dampening being lobotomized to
handle a load of O(N*F).


F.

--
[EMAIL PROTECTED]
819 692 1383

On Thu, 30 Mar 2006, Christian Huitema wrote:


Dampening is part of the protocol and has nothing to do with the speed
of light.


Well, not really. Assume a simplistic model of the Internet with M
"core" routers (in the default free zone) and N "leaf" AS, i.e. networks
that have their own non-aggregated prefix. Now, assume that each of the
leaf AS has a "routing event" with a basic frequency, F. Without
dampening, each core router would see each of these events with that
same frequency, F. Each router would thus see O(N*F) events per second.
Since events imply some overhead in processing, message passing, etc,
one can assume that at any given point in time there is a limit to what
a router can swallow. If either N or F is too large, the router is
cooked. Hence dampening at a rate D, so that N*F/D remains lower than
the acceptable limit.

Bottom line, you can only increase the number of routes if you are ready
to dampen more aggressively. There is an obvious "tragedy of the
commons" here: if more networks want to "multi-home" and be declared in
the core, then more aggressive dampening will be required, and each of
the "multi-homed" networks will suffer from less precise routing, longer
time to correct outages, etc.

There are different elements at play that also limit the number of core
routers. Basically, an event in a core router affects all the paths that
go through it, which, depending on the structure of the graph, is
somewhere between O(M*log(M)) and O(M^2). In short, the routing load
grows much faster than linearly with the number of core routers.

-- Christian Huitema




Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: StupidNAT tricks and how to stop them.)

2006-03-31 Thread Iljitsch van Beijnum

On 31-mrt-2006, at 6:11, Steven M. Bellovin wrote:


You're absolutely right about the /3 business -- this was a very
deliberate design decision.  So, by the way, was the decision to use
128-bit, fixed-length addresses -- we really did think about this
stuff, way back when.


I reviewed some old IPng mail archives last year and it was very
illuminating to see people worry both about stuff that is a complete
non-issue today and stuff that's still as big a problem as ever.
However, a lot has changed in over a decade, and even if fixed-length
addresses were the right answer then (which I'm not necessarily
conceding), that doesn't necessarily make them the right answer today.



When the IPng directorate was designing/selecting what's now IPv6,
there was a variable-length address candidate on the table: CLNP.


I'm no OSI expert, but what I gather is that within a domain, all  
addresses must be the same length, so variable length addressing  
doesn't really work out in practice.


It was strongly favored by some because of the flexibility; others
pointed out how slow that would be, especially in hardware.


I guess that argument can be made for the traditional "this address  
is X bits and here are enough bytes to hold them" type of variable  
length address encoding that we also use in BGP, for example. But  
there are other ways to do this that are more hardware-friendly:



There was another proposal, one that was almost adopted, for something
very much like today's IPv6 but with 64/128/192/256-bit addresses,
controlled by the high-order two bits.  That looked fast enough in
hardware, albeit with the the destination address coming first in the
packet.  OTOH, that would have slowed down source address checking
(think BCP 38), so maybe it wasn't a great idea.


On the other hand, having a protocol chain in IPv6 makes checking
TCP/UDP ports a nightmare, so there's more than enough precedent for
that. That's one lesson we can learn from the OSI guys: the port
number should really be part of the address.


A way to encode variable-length addresses that would presumably be
even easier to implement in hardware is to split the address into
several fields, and then have some bits that indicate the
presence/absence of these fields. For instance, the IPv6 address could
be eight 16-bit values. The address 3ffe:2500:0:310::1 would be
transformed into 3ffe-2500-0310-0001 (64 bits) with the control bits
11010001 indicating that the first, second, fourth and eighth 16-bit
values are present but the third and fifth through seventh aren't. It
should be fairly simple to shift the zero bits in and out in hardware
so the full maximum-length version of the address can be available in
places where that's convenient.
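That encoding can be sketched in a few lines (the helper names are mine; this is an illustration of the idea above, not any specified format):

```python
# Compress an IPv6 address given as eight 16-bit groups: one control byte
# whose bit i says whether group i is nonzero and therefore transmitted.
def compress(groups):
    assert len(groups) == 8
    mask, fields = 0, []
    for i, g in enumerate(groups):
        if g:
            mask |= 1 << (7 - i)    # bit 7 = first group, bit 0 = last
            fields.append(g)
    return mask, fields

def expand(mask, fields):
    # Shift the elided all-zero groups back in.
    it = iter(fields)
    return [next(it) if mask & (1 << (7 - i)) else 0 for i in range(8)]

# 3ffe:2500:0:310::1 -> control bits 11010001 plus four 16-bit fields (64 bits)
groups = [0x3FFE, 0x2500, 0x0000, 0x0310, 0, 0, 0, 0x0001]
mask, fields = compress(groups)
assert format(mask, "08b") == "11010001"
assert fields == [0x3FFE, 0x2500, 0x0310, 0x0001]
assert expand(mask, fields) == groups   # lossless round-trip
```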


And in reaction to other posts: there is no need to make the maximum
address length unlimited, just as long as it's pretty big, such as
~256 bits. The point is not to make the longest possible addresses,
but to use shorter addresses without shooting ourselves in the foot
later when more address space is needed. For instance, I have a /48
at home and one for my colocated server. For that server, I could use
the /48 as the actual address, or add a very small number of bits. At
home, stateless autoconf is useful, so 94 bits would be sufficient
(/48 + 46-bit MAC address), maybe plus a couple of bits for future
subnetting. So the server address would be 7 bytes (with the length
field) rather than 16, and the laptop address 13, saving 12 bytes per
packet between the two compared with today's IPv6...
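Checking the byte counts in that example (assuming a one-byte length field, which is my reading of "with the length field", not something specified):

```python
# Bytes on the wire for a variable-length address: 1 length byte + payload,
# rounded up to whole bytes. Assumption: a single-byte length field.
def addr_bytes(bits):
    return 1 + (bits + 7) // 8

assert addr_bytes(48) == 7      # server: the /48 used as the address itself
assert addr_bytes(94) == 13     # home laptop: /48 + 46-bit MAC
fixed = 16                      # today's fixed-length IPv6 address
assert (fixed - addr_bytes(48)) + (fixed - addr_bytes(94)) == 12  # bytes saved
```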


I'm carefully not saying which option I supported. I now think, though,
that 128 bits has worked well.


It would be rather disastrous if 128 bits didn't work well at this  
stage.  :-)




Re: 128 bits should be enough for everyone, was:

2006-03-31 Thread Dave Cridland

On Fri Mar 31 06:17:01 2006, Anthony G. Atkielski wrote:

> It depends.  People with an emotional attachment to a specific notion
> will never be convinced otherwise, but people who simply don't
> understand something may change their mind once they understand.

I do understand your argument, and you're correct in all its
assertions, but not the conclusion. I suspect that's the case for
everyone at this point.


You state, loosely, that 128 bits will not realistically yield 2**128 
addresses, which is entirely true. It's been pointed out that IPv6 
wasn't designed for that, instead, it was designed to yield 2**64 
subnets, and even so, it's acknowledged that a considerable amount of 
that space will be wasted. People have agreed with this, but pointed 
out that the "subnet" level can be moved down, since we're only using 
an eighth of the available address space.


You also state that the relatively coarse prefix-based allocation
will yield higher wastage of addresses, and that increasing the
coarseness of this allocation reduces the available address space
exponentially - again, this is at least loosely true. I'm not utterly
sure that "exponential" is the correct word to use here, but I'll
accept it in lieu of any other term. Many people have agreed with the
implications of this, suggesting that a jump from /64 to /48 is too
much, and I'm inclined to agree - the cost of renumbering is not high,
and given autoconfiguration, is virtually zero.


Your conclusion, however, is that we should be switching to a 
zero-wastage allocation mechanism preferably based on variable 
bitlength addresses. In response to this, several people have 
commented that this is unworkable using both current hardware and any 
hardware predicted to be available within the next few years. I don't 
know about that, but I'm prepared to accept that opinion.


There's an additional unanswered question your argument has, which is 
whether the - very real - issues you're pointing out with prefix 
based allocations will cause actual operational problems within a 
timeframe short enough for anyone to worry over for a few decades, 
and - a related issue - would these problems hit sufficiently quickly 
that a replacement for IPv6 couldn't be developed in time?


Dave.
--
  You see things; and you say "Why?"
  But I dream things that never were; and I say "Why not?"
   - George Bernard Shaw
