Re: Question about IPAM tools for v6

2014-02-03 Thread Sam Wilson

On 3 Feb 2014, at 11:58, Tim Chown  wrote:

> 
> On 3 Feb 2014, at 11:32, Sam Wilson  wrote:
> 
>> 
>> On 3 Feb 2014, at 11:17, Nick Hilliard  wrote:
>> 
>>> On 03/02/2014 11:11, Sam Wilson wrote:
>>>> Let me de-lurk and make the obvious point that using standard Ethernet
>>>> addressing would limit the number of nodes on a single link to 2^47, and
>>>> that would require every unicast address assigned to every possible
>>>> vendor.  Using just the Locally Administered addresses would limit you
>>>> to 2^46.
>>> 
>>> it bothers me that I can't find any switch with 2^46 ports.
>>> 
>>> Damned vendors.
>> 
>> 
>> The back of my envelope says that with my vendor of choice and a 4-deep tree 
>> (7-hop old-style STP limit) of 384-port switches I can't get more than about 
>> 2^34 edge ports.  Very disappointing.  That would need approximately 57 
>> million routers, though, and 170 GW of electrical power, not counting the 
>> cooling requirements.  
> 
> That's a lot of hamsters.


Turns out it's more hamsters than we have in the UK.  


Sam

-- 
Sam Wilson
Communications Infrastructure Section, IT Infrastructure
Information Services, The University of Edinburgh
Edinburgh, Scotland, UK



The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.



Re: Question about IPAM tools for v6

2014-02-03 Thread Tim Chown

On 3 Feb 2014, at 11:32, Sam Wilson  wrote:

> 
> On 3 Feb 2014, at 11:17, Nick Hilliard  wrote:
> 
>> On 03/02/2014 11:11, Sam Wilson wrote:
>>> Let me de-lurk and make the obvious point that using standard Ethernet
>>> addressing would limit the number of nodes on a single link to 2^47, and
>>> that would require every unicast address assigned to every possible
>>> vendor.  Using just the Locally Administered addresses would limit you
>>> to 2^46.
>> 
>> it bothers me that I can't find any switch with 2^46 ports.
>> 
>> Damned vendors.
> 
> 
> The back of my envelope says that with my vendor of choice and a 4-deep tree 
> (7-hop old-style STP limit) of 384-port switches I can't get more than about 
> 2^34 edge ports.  Very disappointing.  That would need approximately 57 
> million routers, though, and 170 GW of electrical power, not counting the 
> cooling requirements.  

That's a lot of hamsters.

Tim

Re: Question about IPAM tools for v6

2014-02-03 Thread Sam Wilson

On 3 Feb 2014, at 11:17, Nick Hilliard  wrote:

> On 03/02/2014 11:11, Sam Wilson wrote:
>> Let me de-lurk and make the obvious point that using standard Ethernet
>> addressing would limit the number of nodes on a single link to 2^47, and
>> that would require every unicast address assigned to every possible
>> vendor.  Using just the Locally Administered addresses would limit you
>> to 2^46.
> 
> it bothers me that I can't find any switch with 2^46 ports.
> 
> Damned vendors.


The back of my envelope says that with my vendor of choice and a 4-deep tree 
(7-hop old-style STP limit) of 384-port switches I can't get more than about 
2^34 edge ports.  Very disappointing.  That would need approximately 57 million 
routers, though, and 170 GW of electrical power, not counting the cooling 
requirements.  
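
The envelope arithmetic can be re-run directly. A quick sketch in Python (the roughly 3 kW per box implied by the 170 GW figure is an assumption, and ports consumed by uplinks are ignored, as the envelope does):

import math

ports_per_switch = 384
depth = 4   # 4-deep tree, inside the 7-hop old-style STP limit

edge_ports = ports_per_switch ** depth                       # 384^4
switches = sum(ports_per_switch ** i for i in range(depth))  # 1 + 384 + 384^2 + 384^3

print(math.log2(edge_ports))    # ~34.3 -> "about 2^34 edge ports"
print(switches)                 # 56,770,945 -> "approximately 57 million"
print(switches * 3000 / 1e9)    # ~170 GW at an assumed ~3 kW per switch/router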

-- 
Sam Wilson
Communications Infrastructure Section, IT Infrastructure
Information Services, The University of Edinburgh
Edinburgh, Scotland, UK



The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.



Re: Question about IPAM tools for v6

2014-02-03 Thread Nick Hilliard
On 03/02/2014 11:11, Sam Wilson wrote:
> Let me de-lurk and make the obvious point that using standard Ethernet
> addressing would limit the number of nodes on a single link to 2^47, and
> that would require every unicast address assigned to every possible
> vendor.  Using just the Locally Administered addresses would limit you
> to 2^46.

it bothers me that I can't find any switch with 2^46 ports.

Damned vendors.

Nick



Re: Question about IPAM tools for v6

2014-02-03 Thread Sam Wilson

On 31 Jan 2014, at 15:26, Alexandru Petrescu  wrote:

> Speaking of scalability - is there any link layer (e.g. Ethernet) that 
> supports 2^64 nodes in the same link?  Any such link deployed? I doubt it.
> 
> I suppose the largest number of nodes in a single link may reach somewhere in 
> the thousands of nodes, but not 2^64.


Let me de-lurk and make the obvious point that using standard Ethernet 
addressing would limit the number of nodes on a single link to 2^47, and that 
would require every unicast address assigned to every possible vendor.  Using 
just the Locally Administered addresses would limit you to 2^46.
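
For reference, the 2^47 and 2^46 figures fall straight out of the 48-bit MAC layout: the I/G bit reserves half the space for multicast, and the U/L bit splits the remaining unicast half into universally and locally administered addresses. A sketch in Python:

MAC_BITS = 48
unicast = 2 ** (MAC_BITS - 1)         # I/G bit = 0: the unicast half
locally_admin = 2 ** (MAC_BITS - 2)   # U/L bit = 1: locally administered unicast
print(unicast == 2**47, locally_admin == 2**46)   # True True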

Sam
-- 
Sam Wilson
Communications Infrastructure Section, IT Infrastructure
Information Services, The University of Edinburgh
Edinburgh, Scotland, UK



The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.



Re: Question about IPAM tools for v6

2014-02-01 Thread Nick Hilliard
>> /64 netmask opens up nd cache exhaustion as a DoS vector.
> 
> FUD.

I probably should have qualified this statement a little better before
posting it.

Large locally-connected l2 domains can open up nd cache exhaustion and
many other problems as DoS vectors if the operating systems connected to
these domains do not have resource exhaustion limitations built in, or
have them built in but not configured properly.

In particular, the large address space prevents operating systems from
implementing certain types of mitigation mechanisms that might be possible
with ipv4 (e.g. slot-based rate limiting).  The ND rate limiters that I've
tested all cause collateral connectivity problems, as they place ND
floods from all hosts in the same RL bucket.
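
To illustrate the difference between that single shared bucket and a per-source design, a hypothetical Python sketch (not any vendor's implementation; the rate and burst values are made up):

import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One shared bucket: a single flooding host starves ND for everyone.
shared = TokenBucket(rate=100, burst=200)

# Per-source buckets: a flooder exhausts only its own allowance.
buckets = defaultdict(lambda: TokenBucket(rate=100, burst=200))
def allow_nd(src):
    return buckets[src].allow()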

While some aspects of this problem are generic rather than specific to
the size of the addressing domain (i.e. they're similar to what's already
seen on ipv4), the fact that the addressing domain is so large helps
neither the o/s implementer nor the operator.  The issues relating to ND
flooding of whatever sort (NS/RA/etc) need to be explicitly understood by
both the o/s implementer and the network operator, because otherwise
connectivity problems can occur in production.

Nick



Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu

On 31/01/2014 18:13, Fernando Gont wrote:

Alex,

On 01/31/2014 01:47 PM, Alexandru Petrescu wrote:

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


I tend to agree, but I think you are talking about a different kind of limit.
This kind of limit, meant to avoid memory overflow and thrashing, is not the
same as one meant to protect against security attacks.


What's the difference between the two? -- intention?


Mostly intention, yes, but there are some differences.

For example, if we talk limits of data structures then we talk mostly
implementations on the end nodes, the Hosts.


Enforce, say, 16K, 32K, or 64K. And document it.


Well, it would be strange to enforce a 16K limit on a sensor which only 
has 4K of memory.  Enforcing that limit already means writing new code 
to enforce limits (ifs and the like are the most cycle-consuming).


On the other hand, the router which connects to that sensor may very well 
need a higher limit.


And there's only one stack.

I think this is the reason why it would be hard to come up with such a 
limit.



For ND, if one puts a limit on the ND cache size on the end Host, one
would need a different kind of limit for the same ND cache size but on the
Router.  The numbers would not be the same.


64K probably accommodates both, and brings a minimum level of sanity.


Depends on whether it's Host or Router... sensor or server, etc.


The protocol limit set at 64 (subnet size) is not something to prevent
attacks.  It is something that allows new attacks.


What actually allows attacks are bad programming habits.


We're too tempted to put that on the back of the programmer.


It's the programmer's fault not to think about limits. And it's our
fault (the IETF's) that we do not make the programmer's life easy -- he
shouldn't have to figure out what a sane limit would be.


:-)


But a
kernel programmer (where the ND sits) can hardly be supposed to have
bad habits.


The infamous "blue screen of death" would suggest otherwise (and this is
just *one* example)...


The fault of the blue-screen-of-death is put on the _other_ programmers 
(namely the non-agreed device programmers). :-) Hell is other people.



If one looks at the IP stack in the kernel one notices that
people are very conservative and very strict about what code gets there.


.. in many cases, after... what? 10? 20? 30 years?



  These are not the kinds of people to blame for stupid errors such as
forgetting to set some limits.


Who else?

And no, I don't just blame the programmer. FWIW, it's a shame that some
see the actual implementation of an idea as less important stuff. A good
spec goes hand in hand with good code.


I agree.


You cannot be something that you cannot handle. I can pretend to be
Superman... but if after jumping over the window somehow I don't start
flying, the thing ain't working and won't be funny when I hit the
floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 in the NC.


I'd say depends on the computer?  The memory size could, I believe.


References, please :-)


Well, I am thinking of a simple computer with RAM, virtual memory and 
terabyte disks.  That would fit a 2^64-entry NC well, no?



Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".


It is tempting.  I would like to take it so.

But what about the holes?  Will the holes be subject to new attacks?
Will the holes represent address waste?


"Unused address space". In the same way that the Earth's surface is not
currently accommodating as many people as it could. But that doesn't mean
that it should, or that you'd like it to.


Hmm, intriguing... I could talk about the Earth and its resources, the 
risks, how long we must stay here together, the rate of population 
growth, and so on.


But this 'unused address space' is something one can't simply live 
with.


Without much advertising, there are predictions of some 80 billion 
devices arriving soon.  Something like the QR codes on objects, etc. 
These'd be connected directly or through intermediaries.  If one 
compares these figures, one realizes that such holes may not be welcome. 
They'd be barriers to deployment.



If we come up with a method to significantly distribute these holes such
that we, the inventors, understand it, will not another attacker
understand it too, and attack it?


Play both sides. And attack yourself. scan6
(http://www.si6networks.com/tools/ipv6toolkit) exploits current
addressing techniques. draft-ietf-6man-stable-privacy-addresses is meant
to defeat it.

Maybe one problem is the usual disconnect between the two: Folks
building stuff as if nothing wrong is ever going to happen. And folks
breaking stuff without ever thinking about how things could be made
better.  -- But not much of a surprise: pointing out weaknesses usually
hurts egos, and fixing stuff doesn't get as much credit as breaking it
in the security world.

Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
On 31/01/2014 16:59, Fernando Gont wrote:

On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.

There are some different needs with this limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), it's good
to limit the size of the buffers (to avoid buffer overflows), but it may
be arguable whether to limit the dynamic sizes of the instantiated data
structures, especially when facing requirements of scalability - they'd
rather be virtually infinite, like in virtual memory.

This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.


I agree.  Or I'd say even less than that: rsh or telnet or SLIP into it, 
because ssh is a rather heavy exchange.



This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


I tend to agree, but I think you are talking about a different kind of limit.
This kind of limit, meant to avoid memory overflow and thrashing, is not the
same as one meant to protect against security attacks.


The protocol limit set at 64 (subnet size) is not something to prevent 
attacks.  It is something that allows new attacks.


An implementation that will restrict the size of an instantiation of a 
data structure (say, limit its size to a maximum of 2^32 nodes) will be 
a clear limit to something else: subnets that want to be of that 
particular 2^32 size.


Also, think that people who develop IP stacks don't necessarily think 
Ethernet; they think of many other link layers.  Once that stack gets into 
an OS as widespread as Linux, there is little control over which link 
layer the IP stack will run on.  Actually, there they want no limit at all.


It is not as simple as saying it is the programmer's fault.


It is unspoken because
RFCs hardly require it (almost not at all).  Similar to the assumption
that the router of the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.


I am trained, thank you.

Alex


Speaking of scalability - is there any link layer (e.g. Ethernet) that
supports 2^64 nodes in the same link?  Any such link deployed? I doubt it.

Scan Google's IPv6 address space, and you'll find one. (scan6 of
http://www.si6networks.com/tools/ipv6toolkit is your friend :-) )

Cheers,





Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu

On 31/01/2014 17:35, Fernando Gont wrote:

On 01/31/2014 01:12 PM, Alexandru Petrescu wrote:



This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f*
ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


I tend to agree, but I think you are talking about a different kind of limit.
This kind of limit, meant to avoid memory overflow and thrashing, is not the
same as one meant to protect against security attacks.


What's the difference between the two? -- intention?


Mostly intention, yes, but there are some differences.

For example, if we talk limits of data structures then we talk mostly 
implementations on the end nodes, the Hosts.


But if we talk limits of protocol, then we may talk implementations on 
the intermediary routers.


For ND, if one puts a limit on the ND cache size on the end Host, one 
would need a different kind of limit for the same ND cache size but on the 
Router.  The numbers would not be the same.



The protocol limit set at 64 (subnet size) is not something to prevent
attacks.  It is something that allows new attacks.


What actually allows attacks are bad programming habits.


We're too tempted to put that on the back of the programmer.  But a 
kernel programmer (where the ND sits) can hardly be supposed to have bad 
habits.  If one looks at the IP stack in the kernel one notices that 
people are very conservative and very strict about what code gets there. 
 These are not the kinds of people to blame for stupid errors such as 
forgetting to set some limits.



The /64 has exposed bad programming habits.. that's it.




An implementation that will restrict the size of an instantiation of a
data structure (say, limit its size to a maximum of 2^32 nodes) will be
a clear limit to something else: subnets that want to be of that
particular 2^32 size.


You cannot be something that you cannot handle. I can pretend to be
Superman... but if after jumping over the window somehow I don't start
flying, the thing ain't working and won't be funny when I hit the floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 in the NC.


I'd say depends on the computer?  The memory size could, I believe.

What is not possible to imagine is that 2^32 computers sit together on 
the same Ethernet link.



Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".


It is tempting.  I would like to take it so.

But what about the holes?  Will the holes be subject to new attacks? 
Will the holes represent address waste?


If we come up with a method to significantly distribute these holes such 
that we, the inventors, understand it, will not another attacker 
understand it too, and attack it?



Also, think that people who develop IP stacks don't necessarily think
Ethernet; they think of many other link layers.  Once that stack gets into
an OS as widespread as Linux, there is little control over which link
layer the IP stack will run on.  Actually, there they want no limit at all.

It is not as simple as saying it is the programmer's fault.


Not enforcing limits is a programmer's fault. Most security exploits
rely on that.


I tend to agree.


It is unspoken because
RFCs hardly require it (almost not at all).  Similar to the assumption
that the router of the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.


I am trained, thank you.


What I meant was: one should train oneself such that one doesn't really
need to think about it. Enforcing limits is one of those. The first thing
your brain must be trained to do is check, before you allocate a data
structure, how big the thing is and how big it's supposed to be.

And it's not just limits. e.g., how many *security* tools need superuser
privileges, but will never give up such superuser privileges once they
are not needed anymore?

"Know thyself" (http://en.wikipedia.org/wiki/Know_thyself). I know my
code is not going to be as good as it should. So I better limit the
damage that it can cause: enforce limits, and release unnecessary
privileges. And fail on the safe side. You could see it as
"compartmentalization", too.


Interesting.

Alex









Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
On 31/01/2014 16:59, Fernando Gont wrote:

On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.

There are some different needs with this limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), it's good
to limit the size of the buffers (to avoid buffer overflows), but it may
be arguable whether to limit the dynamic sizes of the instantiated data
structures, especially when facing requirements of scalability - they'd
rather be virtually infinite, like in virtual memory.

This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.




This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.



It is unspoken because
RFCs hardly require it (almost not at all).  Similar to the assumption
that the router of the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.




Speaking of scalability - is there any link layer (e.g. Ethernet) that
supports 2^64 nodes in the same link?  Any such link deployed? I doubt it.

Scan Google's IPv6 address space, and you'll find one. (scan6 of
http://www.si6networks.com/tools/ipv6toolkit is your friend :-) )


Do you think they have somewhere one single link on which 2^64 nodes 
connect simultaneously?  (2^64 is a relatively large number, larger than 
the current Internet).


Or is it some fake reply?

Alex




Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
On 31/01/2014 16:13, Fernando Gont wrote:

On 01/31/2014 10:59 AM, Aurélien wrote:

I personally verified that this type of attack works with at least one
major firewall vendor, provided you know/guess reasonably well the
network behind it. (I'm not implying that this is a widespread attack type).

I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf

I'm looking for other information sources, do you know other papers
dealing with this problem ? Why do you think this is FUD ?

The attack does work. But the reason it works is because the
implementations are sloppy in this respect: they don't enforce limits on
the size of the data structures they manage.

The IPv4 subnet size enforces an artificial limit on things such as the
ARP cache. A /64 removes that artificial limit. However, you shouldn't
be relying on such a limit. You should add a real one in the
implementation itself.

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.


There are some different needs with this limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), it's good 
to limit the size of the buffers (to avoid buffer overflows), but it may 
be arguable whether to limit the dynamic sizes of the instantiated data 
structures, especially when facing requirements of scalability - they'd 
rather be virtually infinite, like in virtual memory.


This is not a problem of implementation, it is a problem of the unspoken 
assumption that the subnet prefix is always 64.  It is unspoken because 
RFCs hardly require it (almost not at all).  Similar to the assumption 
that the router of the link is always .1.


Speaking of scalability - is there any link layer (e.g. Ethernet) that 
supports 2^64 nodes in the same link?  Any such link deployed? I doubt it.


I suppose the largest number of nodes in a single link may reach 
somewhere in the thousands of nodes, but not 2^64.


The limitation on the number of nodes on a single link comes not only 
from the access contention algorithms, but also from the implementation of 
the core of the highest-performance switches, which are limited in terms 
of bandwidth.  With these figures in mind, one realizes that it is 
hardly reasonable to imagine subnets of 2^64 nodes.
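
To get a feel for why, a back-of-the-envelope sketch in Python (the 1 Mbit/s average per node is an arbitrary assumption):

NODES = 2 ** 64
AVG_BITS_PER_SECOND = 1_000_000       # assumed modest per-node average
aggregate_tbps = NODES * AVG_BITS_PER_SECOND / 1e12
print(aggregate_tbps)                 # ~1.8e13 Tbit/s through a single L2 core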


Alex



If you want to play, please take a look at the ipv6toolkit:
http://www.si6networks.com/tools/ipv6toolkit. On the same page, you'll
also find a PDF that discusses ND attacks, and that tells you how to
reproduce the attack with the toolkit.

Besides, each manual page of the toolkit (ra6(1), na6(1), etc.) has an
EXAMPLES section that provides popular ways to run each tool.

Thanks!

Cheers,







Re: Question about IPAM tools for v6

2014-01-31 Thread Alexandru Petrescu
On 31/01/2014 14:07, Ole Troan wrote:

Consensus around here is that we support DHCPv6 for non-/64 subnets
(particularly in the context of Prefix Delegation), but the immediate
next question is "Why would you need that?"

/64 netmask opens up nd cache exhaustion as a DoS vector.

FUD.


Sigh... as usual with brief statements, it's hard to see clearly.

I think ND attacks may be eased by an always-the-same prefix length (64).

Some attacks may use unsolicited NAs to prevent others from configuring a 
particular address.  That's easier if the attacker can assume the prefix 
length is, as usual, 64.


Additionally, an always-64 prefix length gives a _scanning_ perspective 
to the security dimension, as per section 2.2 "Target Address Space for 
Network Scanning" of RFC5157.


As a side note, security is not the only reason why people would like to 
configure prefixes longer than 64 on some subnets... some of the most 
obvious being the address exhaustion at the very edge.
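
For scale, the brute-force numbers behind that section of RFC 5157, as a Python sketch (the probe rate is an arbitrary assumption); they show why scanners lean on predictable allocation patterns rather than sweeping a /64:

SECONDS_PER_YEAR = 365 * 24 * 3600
PROBES_PER_SECOND = 1_000_000   # assumed scanner speed

for prefix_len in (64, 112, 120):
    addresses = 2 ** (128 - prefix_len)
    years = addresses / PROBES_PER_SECOND / SECONDS_PER_YEAR
    print(prefix_len, addresses, years)
# /64: ~585,000 years of probing; /112 or /120: well under a second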


Alex




cheers,
Ole





RE: Question about IPAM tools for v6

2014-01-31 Thread Templin, Fred L
Hi Erik,

> -----Original Message-----
> From: Erik Kline [mailto:e...@google.com]
> Sent: Friday, January 31, 2014 10:46 AM
> To: Templin, Fred L
> Cc: Nick Hilliard; Cricket Liu; ipv6-ops@lists.cluenet.de; 
> draft-carpenter-6man-wh...@tools.ietf.org;
> Mark Boolootian
> Subject: Re: Question about IPAM tools for v6
> 
> On 31 January 2014 10:22, Templin, Fred L  wrote:
> >> Not if you route a /64 to each host (the way 3GPP/LTE does for mobiles).  
> >> :-)
> >
> > A /64 for each mobile is what I would expect. It is then up to the
> > mobile to manage the /64 responsibly by either black-holing the
> > portions of the /64 it is not using or by assigning the /64 to a
> > link other than the service provider wireless access link (and
> > then managing the NC appropriately).
> 
> 
> 
> Yep.  My point, though, was that we can do the same kind of thing in
> the datacenter.

Sure, that works for me too.

> 
> 
> In general, I think ND exhaustion is one of those "solve it at Layer
> 3" situations, since we have the bits to do so.
> 
> IPv6 gives us a large enough space to see new problems of scale, and
> sometimes the large enough space can be used to solve these problems
> too, albeit with non-IPv4 thinking.

Right - thanks for clarifying.

Thanks - Fred
fred.l.temp...@boeing.com


RE: Question about IPAM tools for v6

2014-01-31 Thread Templin, Fred L
> Not if you route a /64 to each host (the way 3GPP/LTE does for mobiles).  :-)

A /64 for each mobile is what I would expect. It is then up to the
mobile to manage the /64 responsibly by either black-holing the
portions of the /64 it is not using or by assigning the /64 to a
link other than the service provider wireless access link (and
then managing the NC appropriately).

Thanks - Fred
fred.l.temp...@boeing.com


Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 02:30 PM, Alexandru Petrescu wrote:
>>>>> I tend to agree, but I think you are talking about a different kind of
>>>>> limit.  This kind of limit, meant to avoid memory overflow and
>>>>> thrashing, is not the same as one meant to protect against security
>>>>> attacks.

>>>> What's the difference between the two? -- intention?
>>>
>>> Mostly intention, yes, but there are some differences.
>>>
>>> For example, if we talk limits of data structures then we talk mostly
>>> implementations on the end nodes, the Hosts.
>>
>> Enforce, say, 16K, 32K, or 64K. And document it.
> 
> Well, it would be strange to enforce a 16K limit on a sensor which only
> has 4K of memory.

That's why it should be configurable. -- Set a better one at system startup.


> Enforcing that limit already means writing new code
> to enforce limits (ifs and the like are the most cycle-consuming).

That's the minimum pain you should pay for not doing it in the first place.

And yes, writing sloppy code always requires less effort.



> On the other hand, the router which connects to that sensor may very well
> need a higher limit.
> 
> And there's only one stack.
> 
> I think this is the reason why it would be hard to come up with such a
> limit.

Make a good default that handles the general case, and make it
configurable so that non-general cases can be addressed.
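
In code, that pattern is tiny. A sketch (the variable and knob names are invented, not a real sysctl):

import os

ND_CACHE_DEFAULT = 64 * 1024                        # handles the general case
ND_CACHE_MAX = int(os.environ.get("ND_CACHE_MAX",   # admin override hook
                                  ND_CACHE_DEFAULT))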



>>> For ND, if one puts a limit on the ND cache size on the end Host, one
>>> would need a different kind of limit for the same ND cache size but on the
>>> Router.  The numbers would not be the same.
>>
>> 64K probably accommodates both, and brings a minimum level of sanity.
> 
> Depends on whether it's Host or Router... sensor or server, etc.

Do you run a host or router that needs more than 64K entries?



>>> But a
>>> kernel programmer (where the ND sits) can hardly be supposed to have
>>> bad habits.
>>
>> The infamous "blue screen of death" would suggest otherwise (and this is
>> just *one* example)...
> 
> The fault of the blue-screen-of-death is put on the _other_ programmers
> (namely the non-agreed device programmers). :-) Hell is other people.

I don't buy that. Win 95 (?) infamously crashed in front of Bill Gates
himself upon connection of a scanner.

And W95 was infamous for one-packet-of-death crashes (the "nukes" from
the '90s).



>>>> You cannot be something that you cannot handle. I can pretend to be
>>>> Superman... but if after jumping over the window somehow I don't start
>>>> flying, the thing ain't working and won't be funny when I hit the
>>>> floor.
>>>>
>>>> Same thing here: Don't pretend to be able to handle a /32 when you
>>>> can't.
>>>> In practice, you won't be able to handle 2**32 in the NC.
>>>
>>> I'd say depends on the computer?  The memory size could, I believe.
>>
>> References, please :-)
> 
> Well, I am thinking of a simple computer with RAM, virtual memory and
> terabyte disks.  That would fit a 2^64-entry NC well, no?

Consider yourself lucky if your implementation can gracefully handle,
say, 1M entries.



>>>> Take the /64 as "Addresses could be spread all over this /64" rather
>>>> than "you must be able to handle 2**64 addresses on your network".
>>>
>>> It is tempting.  I would like to take it so.
>>>
>>> But what about the holes?  Will the holes be subject to new attacks?
>>> Will the holes represent address waste?
>>
>> "Unused address space". In the same way that the Earth's surface is not
>> currently accommodating as many people as it could. But that doesn't mean
>> that it should, or that you'd like it to.
> 
> Hmm, intriguing... I could talk about the Earth and its resources, the
> risks, how long we must stay here together, the rate of population
> growth, and so on.
> 
> But this 'unused address space' is something one can't simply live
> with.
> 
> Without much advertising, there are predictions of some 80 billion
> devices arriving soon.  Something like the QR codes on objects, etc.
> These'd be connected directly or through intermediaries.  If one
> compares these figures, one realizes that such holes may not be welcome.
> They'd be barriers to deployment.

mm.. what's the problem here?

Cheers,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
Alex,

On 01/31/2014 01:47 PM, Alexandru Petrescu wrote:
>>>> It's as straightforward as this: whenever you're coding something,
>>>> enforce limits. And set it to a sane default. And allow the admin to
>>>> override it when necessary.
>>>
>>> I tend to agree, but I think you are talking about a different kind of
>>> limit.  This kind of limit, meant to avoid memory overflow and thrashing,
>>> is not the same as one meant to protect against security attacks.
>>
>> What's the difference between the two? -- intention?
> 
> Mostly intention, yes, but there are some differences.
> 
> For example, if we talk limits of data structures then we talk mostly
> implementations on the end nodes, the Hosts.

Enforce, say, 16K, 32K, or 64K. And document it.


> For ND, if one puts a limit on the ND cache size on the end Host, one
> would need a different kind of limit for same ND cache size but on the
> Router.  The numbers would not be the same.

64K probably accommodates both, and brings a minimum level of sanity.



>>> The protocol limit set at 64 (subnet size) is not something to prevent
>>> attacks.  It is something that allows new attacks.
>>
>> What actually allows attacks are bad programming habits.
> 
> We're too tempted to put that on the back of the programmer.

It's the programmer's fault not to think about limits. And it's our
fault (the IETF's) that we do not make the programmer's life easy -- he
shouldn't have to figure out what a sane limit would be.


> But a
> kernel programmer (where the ND sits) can hardly be supposed to have
> bad habits.

The infamous "blue screen of death" would suggest otherwise (and this is
just *one* example)...



> If one looks at the IP stack in the kernel one notices that
> people are very conservative and very strict about what code gets there.

.. in many cases, after... what? 10? 20? 30 years?


>  These are not the kinds of people to blame for stupid errors such as
> forgetting to set some limits.

Who else?

And no, I don't just blame the programmer. FWIW, it's a shame that some
see the actual implementation of an idea as less important stuff. A good
spec goes hand in hand with good code.


>> You cannot be something that you cannot handle. I can pretend to be
>> Superman... but if after jumping over the window somehow I don't start
>> flying, the thing ain't working and won't be funny when I hit the
>> floor.
>>
>> Same thing here: Don't pretend to be able to handle a /32 when you can't.
>> In practice, you won't be able to handle 2**32 in the NC.
> 
> I'd say depends on the computer?  The memory size could, I believe.

References, please :-)



>> Take the /64 as "Addresses could be spread all over this /64" rather
>> than "you must be able to handle 2**64 addresses on your network".
> 
> It is tempting.  I would like to take it so.
> 
> But what about the holes?  Will the holes be subject to new attacks?
> Will the holes represent address waste?

"Unused address space". In the same way that the Earth's surface is not
currently accommodating as many people as it could. But that doesn't mean
that it should, or that you'd like it to.



> If we come up with a method to significantly distribute these holes such
> that we, the inventors, understand it, will not another attacker
> understand it too, and attack it?

Play both sides. And attack yourself. scan6
(http://www.si6networks.com/tools/ipv6toolkit) exploits current
addressing techniques. draft-ietf-6man-stable-privacy-addresses is meant
to defeat it.

Maybe one problem is the usual disconnect between the two: Folks
building stuff as if nothing wrong is ever going to happen. And folks
breaking stuff without ever thinking about how things could be made
better.  -- But not much of a surprise: pointing out weaknesses usually
hurts egos, and fixing stuff doesn't get as much credit as breaking it in
the security world.

Cheers,
-- 
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint:  31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492






Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 01:02 PM, Alexandru Petrescu wrote:
>>> Speaking of scalability - is there any link layer (e.g. Ethernet) that
>>> supports 2^64 nodes in the same link?  Any such link deployed? I
>>> doubt it.
>> Scan Google's IPv6 address space, and you'll find one. (scan6 of
>> http://www.si6networks.com/tools/ipv6toolkit is your friend :-) )
> 
> Do you think they have somewhere one single link on which 2^64 nodes
> connect simultaneously?  (2^64 is a relatively large number, larger than
> the current Internet).
> 
> Or is it some fake reply?

Apparently, it's not fake (although I didn't scan the *whole* space). I
bet there's some trick there, though. -- I don't expect them to be
running 2**64 servers...

With a little more research, it shouldn't be hard to check whether
the responses are legitimate or not (TCP timestamps, IP IDs, etc. are
usually your friends here).
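
A rough sketch of the TCP-timestamp check with scapy (assumed available; needs raw-socket privileges; 2001:db8::1 is a documentation placeholder). If many "different" addresses return TSvals advancing along one monotonic clock, they are likely a single box answering for the whole range:

from scapy.all import IPv6, TCP, sr1  # assumes scapy; run as root

def tcp_tsval(dst, dport=80):
    """Send a SYN with the timestamp option; return the peer's TSval, if any."""
    syn = IPv6(dst=dst) / TCP(dport=dport, flags="S",
                              options=[("Timestamp", (0, 0))])
    resp = sr1(syn, timeout=2, verbose=0)
    if resp is None or TCP not in resp:
        return None
    for name, value in resp[TCP].options:
        if name == "Timestamp":
            return value[0]
    return None

print(tcp_tsval("2001:db8::1"))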

Thanks,
-- 
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint:  31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492






Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 01:12 PM, Alexandru Petrescu wrote:
> 
>>> This is not a problem of implementation, it is a problem of unspoken
>>> assumption that the subnet prefix is always 64.
>> Do you know what they say about assumptions? -- "It's the mother of all
>> f* ups".
>>
>> It's as straightforward as this: whenever you're coding something,
>> enforce limits. And set it to a sane default. And allow the admin to
>> override it when necessary.
> 
> I tend to agree, but I think you are talking about a different kind of
> limit.  This kind of limit, meant to avoid memory overflow and thrashing,
> is not the same as one meant to protect against security attacks.

What's the difference between the two? -- intention?



> The protocol limit set at 64 (subnet size) is not something to prevent
> attacks.  It is something that allows new attacks.

What actually allows attacks are bad programming habits.

The /64 has exposed bad programming habits.. that's it.



> An implementation that will restrict the size of an instantiation of a
> data structure (say, limit its size to a maximum of 2^32 nodes) will be
> a clear limit to something else: subnets that want to be of that
> particular 2^32 size.

You cannot be something that you cannot handle. I can pretend to be
Superman... but if after jumping over the window somehow I don't start
flying, the thing ain't working and won't be funny when I hit the floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 in the NC.

Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".



> Also, think that people who develop IP stacks don't necessarily think
> Ethernet, they think many other link layers.  Once that stack gets into
> an OS as widespread as linux, there is little control about which link
> layer the IP stack will run on.  Actually there they want no limit at all.
> 
> It is not as simple as saying it is the programmer's fault.

Not enforcing limits is a programmer's fault. Most security exploits
rely on that.



>>> It is unspoken because
>>> RFCs hardly require it (almost not at all).  Similar to the assumption
>>> that the router of the link is always .1.
>> That's about sloppy programming.
>>
>> Train yourself to do the right thing. I do. When I code, I always
>> enforce limits. If anything, just pick one, and then tune it.
> 
> I am trained, thank you.

What I meant was: one should train oneself such that one doesn't really
need to think about it. Enforcing limits is one of those. The first thing
your brain must be trained to do is check, before you allocate a data
structure, how big the thing is and how big it's supposed to be.

And it's not just limits. e.g., how many *security* tools need superuser
privileges, but will never give up such superuser privileges once they
are not needed anymore?

"Know thyself" (http://en.wikipedia.org/wiki/Know_thyself). I know my
code is not going to be as good as it should. So I better limit the
damage that it can cause: enforce limits, and release unnecessary
privileges. And fail on the safe side. You could see it as
"compartmentalization", too.


-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:
>>
>> And it's not just the NC. There are implementations that do not limit
>> the number of addresses they configure, that do not limit the number of
>> entries in the routing table, etc.
> 
> There are some different needs with this limitation.
> 
> It's good to rate-limit a protocol exchange (to avoid DDoS), it's good
> to limit the size of the buffers (to avoid buffer overflows), but it may
> be arguable whether to limit the dynamic sizes of the instantiated data
> structures, especially when facing requirements of scalability - they'd
> rather be virtually infinite, like in virtual memory.

This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.



> This is not a problem of implementation, it is a problem of unspoken
> assumption that the subnet prefix is always 64.

Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.


> It is unspoken because
> RFCs hardly require it (almost not at all).  Similar to the assumption
> that the router of the link is always .1.

That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.



> Speaking of scalability - is there any link layer (e.g. Ethernet) that
> supports 2^64 nodes in the same link?  Any such link deployed? I doubt it.

Scan Google's IPv6 address space, and you'll find one. (scan6 of
http://www.si6networks.com/tools/ipv6toolkit is your friend :-) )

Cheers,
-- 
Fernando Gont
SI6 Networks
e-mail: fg...@si6networks.com
PGP Fingerprint:  31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492






Re: Neighbor Cache Exhaustion, was Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 11:16 AM, Enno Rey wrote:
> Hi Guillaume,
> 
> willing to share your lab setup / results? We did some testing
> ourselves in a Cisco-only setting and couldn't cause any problems.
> [for details see here:
> http://www.insinuator.net/2013/03/ipv6-neighbor-cache-exhaustion-attacks-risk-assessment-mitigation-strategies-part-1/]
>
>  After that I asked for other practical experience on the
> ipv6-hackers mailing list, but got no responses besides some "I heard
> this is a problem in $SOME_SETTING" and references to Jeff Wheeler's
> paper (which works on the - wrong - assumption that an "incomplete"
> entry can stay in the cache for a long time, which is not true for
> stacks implementing ND in conformance with RFC 4861). So your
> statement is actually the first first-hand proof of NCE being a
> real-world problem I have ever heard of. Thanks in advance for any
> additional detail.

Are we talking about Ciscos, specifically?

I recall reproducing this sort of thing on BSDs, Linux, and Windows.

Note: In some cases, even when the entries in the INCOMPLETE state do
time out, if the rate at which they expire is lower than the rate at
which you "produce" them, it's still a problem.

Too bad -- we do have plenty of experience with this, e.g., managing
the IP reassembly queue.

Thanks,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 09:33 AM, Mohacsi Janos wrote:
> 
>> On 29/01/2014 22:19, Cricket Liu wrote:
>>> Consensus around here is that we support DHCPv6 for non-/64 subnets
>>> (particularly in the context of Prefix Delegation), but the immediate
>>> next question is "Why would you need that?"
>>
>> /64 netmask opens up nd cache exhaustion as a DoS vector.
> 
> ND cache size should be limited by HW/SW vendors: limiting the number of
> ND cache entries per MAC address, limiting the number of outstanding
> ND requests, etc.

+1

Don't blame the subnet size for sloppy implementations.

Cheers,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Re: Question about IPAM tools for v6

2014-01-31 Thread Fernando Gont
On 01/31/2014 10:59 AM, Aurélien wrote:
> 
> I personally verified that this type of attack works with at least one
> major firewall vendor, provided you know/guess reasonably well the
> network behind it. (I'm not implying that this is a widespread attack type).
> 
> I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf
> 
> I'm looking for other information sources, do you know other papers
> dealing with this problem ? Why do you think this is FUD ?

The attack does work. But the reason it works is because the
implementations are sloppy in this respect: they don't enforce limits on
the size of the data structures they manage.

The IPv4 subnet size enforces an artificial limit on things such as the
ARP cache. A /64 removes that artificial limit. However, you shouldn't
be relying on such a limit. You should add a real one in the
implementation itself.
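
What "a real one in the implementation itself" might look like, as a toy Python sketch (the 64K default and LRU eviction are illustrative choices echoing numbers floated elsewhere in this thread, not how any particular stack behaves):

from collections import OrderedDict

class NeighborCache:
    """Toy neighbor cache with a hard entry limit."""
    def __init__(self, max_entries=64 * 1024):
        self.max_entries = max_entries
        self.entries = OrderedDict()   # IPv6 address -> link-layer address

    def learn(self, address, lladdr):
        if address in self.entries:
            self.entries.move_to_end(address)   # refresh LRU position
        elif len(self.entries) >= self.max_entries:
            self.entries.popitem(last=False)    # evict least recently used
        self.entries[address] = lladdr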

And it's not just the NC. There are implementations that do not limit
the number of addresses they configure, that do not limit the number of
entries in the routing table, etc.

If you want to play, please take a look at the ipv6toolkit:
http://www.si6networks.com/tools/ipv6toolkit. On the same page, you'll
also find a PDF that discusses ND attacks, and that tells you how to
reproduce the attack with the toolkit.

Besides, each manual page of the toolkit (ra6(1), na6(1), etc.) has an
EXAMPLES section that provides popular ways to run each tool.

Thanks!

Cheers,
-- 
Fernando Gont
e-mail: ferna...@gont.com.ar || fg...@si6networks.com
PGP Fingerprint: 7809 84F5 322E 45C7 F1C9 3945 96EE A9EF D076 FFF1





Neighbor Cache Exhaustion, was Re: Question about IPAM tools for v6

2014-01-31 Thread Enno Rey
Hi Guillaume,

willing to share your lab setup / results?
We did some testing ourselves in a Cisco-only setting and couldn't cause any 
problems. [for details see here: 
http://www.insinuator.net/2013/03/ipv6-neighbor-cache-exhaustion-attacks-risk-assessment-mitigation-strategies-part-1/]

After that I asked for other practical experience on the ipv6-hackers mailing 
list, but got no responses besides some "I heard this is a problem in 
$SOME_SETTING" and references to Jeff Wheeler's paper (which works on the - 
wrong - assumption that an "incomplete" entry can stay in the cache for a long 
time, which is not true for stacks implementing ND in conformance with RFC 
4861).
So your statement is actually the first first-hand proof of NCE being a 
real-world problem I have ever heard of. Thanks in advance for any additional detail.

best

Enno





On Fri, Jan 31, 2014 at 02:59:24PM +0100, Aurélien wrote:
> On Fri, Jan 31, 2014 at 2:07 PM, Ole Troan  wrote:
> 
> > >> Consensus around here is that we support DHCPv6 for non-/64 subnets
> > >> (particularly in the context of Prefix Delegation), but the immediate
> > >> next question is "Why would you need that?"
> > >
> > > /64 netmask opens up nd cache exhaustion as a DoS vector.
> >
> > FUD.
> >
> >
> Hi Ole,
> 
> I personally verified that this type of attack works with at least one
> major firewall vendor, provided you know/guess reasonably well the network
> behind it. (I'm not implying that this is a widespread attack type).
> 
> I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf
> 
> I'm looking for other information sources, do you know other papers dealing
> with this problem ? Why do you think this is FUD ?
> 
> Thanks,
> -- 
> Aurélien Guillaume

-- 
Enno Rey

ERNW GmbH - Carl-Bosch-Str. 4 - 69115 Heidelberg - www.ernw.de
Tel. +49 6221 480390 - Fax 6221 419008 - Cell +49 173 6745902 

Handelsregister Mannheim: HRB 337135
Geschaeftsfuehrer: Enno Rey

===
Blog: www.insinuator.net || Conference: www.troopers.de
Twitter: @Enno_Insinuator
===


Re: Question about IPAM tools for v6

2014-01-31 Thread Aurélien
On Fri, Jan 31, 2014 at 2:07 PM, Ole Troan  wrote:

> >> Consensus around here is that we support DHCPv6 for non-/64 subnets
> >> (particularly in the context of Prefix Delegation), but the immediate
> >> next question is "Why would you need that?"
> >
> > /64 netmask opens up nd cache exhaustion as a DoS vector.
>
> FUD.
>
>
Hi Ole,

I personally verified that this type of attack works with at least one
major firewall vendor, provided you know/guess reasonably well the network
behind it. (I'm not implying that this is a widespread attack type).

I also found this paper: http://inconcepts.biz/~jsw/IPv6_NDP_Exhaustion.pdf

I'm looking for other information sources, do you know other papers dealing
with this problem ? Why do you think this is FUD ?

Thanks,
-- 
Aurélien Guillaume


Re: Question about IPAM tools for v6

2014-01-31 Thread Ole Troan
>> Consensus around here is that we support DHCPv6 for non-/64 subnets
>> (particularly in the context of Prefix Delegation), but the immediate
>> next question is "Why would you need that?"
> 
> /64 netmask opens up nd cache exhaustion as a DoS vector.

FUD.

cheers,
Ole




Re: Question about IPAM tools for v6

2014-01-31 Thread Mohacsi Janos




On Fri, 31 Jan 2014, Nick Hilliard wrote:


On 29/01/2014 22:19, Cricket Liu wrote:

Consensus around here is that we support DHCPv6 for non-/64 subnets
(particularly in the context of Prefix Delegation), but the immediate
next question is "Why would you need that?"


/64 netmask opens up nd cache exhaustion as a DoS vector.


ND cache size should be limited by HW/SW vendors: limiting the number of 
ND cache entries per MAC address, limiting the number of outstanding ND 
requests, etc.
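
As a sketch of the two knobs named here, in Python (the values are invented for illustration):

from collections import Counter

# Invented illustrative limits -- not values from any vendor.
MAX_ENTRIES_PER_MAC = 8
MAX_INCOMPLETE = 512

entries_per_mac = Counter()   # MAC address -> ND cache entries learned from it
incomplete_count = 0          # resolutions currently outstanding (bookkeeping
                              # on entry add/expiry elided in this sketch)

def may_add_entry(mac):
    return entries_per_mac[mac] < MAX_ENTRIES_PER_MAC

def may_start_resolution():
    return incomplete_count < MAX_INCOMPLETE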



Best Regards,
Janos Mohacsi


Re: Question about IPAM tools for v6

2014-01-31 Thread Nick Hilliard
On 29/01/2014 22:19, Cricket Liu wrote:
> Consensus around here is that we support DHCPv6 for non-/64 subnets
> (particularly in the context of Prefix Delegation), but the immediate
> next question is "Why would you need that?"

/64 netmask opens up nd cache exhaustion as a DoS vector.

Nick



Re: Question about IPAM tools for v6

2014-01-31 Thread Cricket Liu
Hi Mark.

On Jan 29, 2014, at 11:07 AM, Mark Boolootian  wrote:

>> Can anyone say whether existing IP Address Management tools that
>> support IPv6 have built-in assumptions or dependencies on the
>> /64 subnet prefix length, or whether they simply don't care about
>> subnet size?
> 
> We use Infoblox's IPAM.  There aren't any limitations of which I'm
> aware in terms of allocating space and IPv6 prefix length in the IPAM.
> However, I don't know if there are restrictions when it comes to
> DHCPv6, as we've only set up /64s.

Consensus around here is that we support DHCPv6 for non-/64 subnets 
(particularly in the context of Prefix Delegation), but the immediate next 
question is "Why would you need that?"

cricket

Re: Question about IPAM tools for v6

2014-01-29 Thread Brian E Carpenter
On 30/01/2014 11:19, Cricket Liu wrote:
> Hi Mark.
> 
> On Jan 29, 2014, at 11:07 AM, Mark Boolootian  wrote:
> 
>>> Can anyone say whether existing IP Address Management tools that
>>> support IPv6 have built-in assumptions or dependencies on the
>>> /64 subnet prefix length, or whether they simply don't care about
>>> subnet size?
>> We use Infoblox's IPAM.  There aren't any limitations of which I'm
>> aware in terms of allocating space and IPv6 prefix length in the IPAM.
>> However, I don't know if there are restrictions when it comes to
>> DHCPv6, as we've only set up /64s.
> 
> Consensus around here is that we support DHCPv6 for non-/64 subnets 
> (particularly in the context of Prefix Delegation), but the immediate next 
> question is "Why would you need that?"

That's been a reasonably hot topic over on the IETF v6ops list,
which is what prompted a group of us to start writing a draft
(which we want to make fact-based, not opinion-based, hence
my question here).

Thanks to you and the others who have replied with facts!

Brian


Re: [ipv6-ops] Re: Question about IPAM tools for v6

2014-01-29 Thread Aaron Hughes
As one of the founders of 6connect: we initially, years ago, only allowed 
for delegation down to the /64. Client demand dictated support down to the /128, 
and it has been that way for a couple of years. People still implement v6 in very 
odd ways. A common example I have seen is where someone uses, say, a /21 of v4 
per VLAN and matches it with a /118 of v6 to keep with their existing 
provisioning policy. We've had to build in all kinds of unrecommended 
capabilities for customers and I expect the rest will have to do the same. Same 
for DHCPv6 BTW.
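
For concreteness, the sizes such a pairing puts side by side, checked with Python's ipaddress module (the example prefixes are placeholders, not 6connect data):

import ipaddress

v4 = ipaddress.ip_network("198.18.0.0/21")    # placeholder /21 of v4
v6 = ipaddress.ip_network("2001:db8::/118")   # placeholder /118 of v6
print(v4.num_addresses)   # 2048
print(v6.num_addresses)   # 1024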

Cheers,
Aaron

On Wed, Jan 29, 2014 at 08:22:19PM +0100, Nicolas CARTRON wrote:
> Hi Brian,
> 
> On Wed, Jan 29, 2014 at 7:54 PM, Brian E Carpenter <
> brian.e.carpen...@gmail.com> wrote:
> 
> > Hi,
> >
> > We're working on the next version of
> > http://tools.ietf.org/html/draft-carpenter-6man-why64
> >
> > Can anyone say whether existing IP Address Management tools that
> > support IPv6 have built-in assumptions or dependencies on the
> > /64 subnet prefix length, or whether they simply don't care about
> > subnet size?
> >
> 
> I'm working at EfficientIP, a (DNS/DHCP) IPAM vendor, and our IPAM software
> proposes /64 subnets by default,
> but you can increase or decrease the size if needed, so no blocking point
> IMO.
> 
> Cheers,
> 
> -- 
> Nicolas

-- 

Aaron Hughes 
aar...@tcp0.com
+1-703-244-0427
Key fingerprint = AD 67 37 60 7D 73 C5 B7  33 18 3F 36 C3 1C C6 B8
http://www.tcp0.com/


Re: Question about IPAM tools for v6

2014-01-29 Thread Nicolas CARTRON
Hi Brian,

On Wed, Jan 29, 2014 at 7:54 PM, Brian E Carpenter <
brian.e.carpen...@gmail.com> wrote:

> Hi,
>
> We're working on the next version of
> http://tools.ietf.org/html/draft-carpenter-6man-why64
>
> Can anyone say whether existing IP Address Management tools that
> support IPv6 have built-in assumptions or dependencies on the
> /64 subnet prefix length, or whether they simply don't care about
> subnet size?
>

I'm working at EfficientIP, a (DNS/DHCP) IPAM vendor, and our IPAM software
proposes /64 subnets by default,
but you can increase or decrease the size if needed, so no blocking point
IMO.

Cheers,

-- 
Nicolas


Re: Question about IPAM tools for v6

2014-01-29 Thread Mark Boolootian
> Can anyone say whether existing IP Address Management tools that
> support IPv6 have built-in assumptions or dependencies on the
> /64 subnet prefix length, or whether they simply don't care about
> subnet size?

We use Infoblox's IPAM.  There aren't any limitations of which I'm
aware in terms of allocating space and IPv6 prefix length in the IPAM.
 However, I don't know if there are restrictions when it comes to
DHCPv6, as we've only set up /64s.

mark