On 31/01/2014 16:59, Fernando Gont wrote:
On 01/31/2014 12:26 PM, Alexandru Petrescu wrote:
And it's not just the NC (Neighbor Cache). There are implementations that
do not limit the number of addresses they configure, that do not limit the
number of entries in the routing table, etc.
There are several distinct needs behind this kind of limitation.

It's good to rate-limit a protocol exchange (to avoid DDoS), and it's good
to limit the size of buffers (to avoid buffer overflows), but it is
arguable whether to limit the dynamic sizes of instantiated data
structures, especially in the face of scalability requirements - one would
rather have them be virtually infinite, like virtual memory.
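
(As a sketch of the rate-limiting case only: a token bucket of the sort one
might put in front of an ND exchange. All names here are hypothetical, not
taken from any particular stack.)

  #include <stdbool.h>
  #include <time.h>

  /* Hypothetical token bucket: at most 'rate' messages per second,
   * with bursts of up to 'burst'.  Refilled lazily on each check. */
  struct ratelimit {
      double tokens;          /* tokens currently in the bucket */
      double rate;            /* tokens added per second        */
      double burst;           /* bucket capacity                */
      struct timespec last;   /* time of the last refill        */
  };

  void ratelimit_init(struct ratelimit *rl, double rate, double burst)
  {
      rl->tokens = burst;
      rl->rate = rate;
      rl->burst = burst;
      clock_gettime(CLOCK_MONOTONIC, &rl->last);
  }

  /* Returns true if one more message may be processed now. */
  bool ratelimit_allow(struct ratelimit *rl)
  {
      struct timespec now;
      clock_gettime(CLOCK_MONOTONIC, &now);
      rl->tokens += rl->rate * ((now.tv_sec - rl->last.tv_sec)
                              + (now.tv_nsec - rl->last.tv_nsec) / 1e9);
      if (rl->tokens > rl->burst)
          rl->tokens = rl->burst;
      rl->last = now;
      if (rl->tokens < 1.0)
          return false;       /* over the configured rate: drop */
      rl->tokens -= 1.0;
      return true;
  }
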
This means that the underlying hard limit will hit you in the back.

You should enforce limits that at the very least keep the system usable.

At the end of the day, at the very least you want to be able to ssh to it.

I agree. Or I'd settle for even less, such as rsh or telnet or SLIP into it, because ssh is a rather heavy exchange.

This is not an implementation problem; it is a problem of the unspoken
assumption that the subnet prefix is always /64.
Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. Set them to sane defaults, and allow the admin to
override them when necessary.
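
(For concreteness, a minimal sketch of that pattern; the names and the
sysctl-style knob are hypothetical, not any stack's real API:)

  #include <stddef.h>
  #include <stdlib.h>
  #include <errno.h>

  /* Hypothetical cap on a dynamic table: a compiled-in sane default,
   * overridable by the admin at runtime (think sysctl-style knob). */
  #define NCACHE_DEFAULT_MAX 8192

  static size_t ncache_max = NCACHE_DEFAULT_MAX;  /* admin-tunable   */
  static size_t ncache_count;                     /* current entries */

  /* Admin override hook; 0 restores the default. */
  void ncache_set_max(size_t max)
  {
      ncache_max = max ? max : NCACHE_DEFAULT_MAX;
  }

  /* Refuse to grow past the limit, instead of letting the underlying
   * hard limit (physical memory) hit you in the back later. */
  void *ncache_entry_alloc(size_t size)
  {
      if (ncache_count >= ncache_max) {
          errno = ENOSPC;
          return NULL;        /* caller must evict or drop */
      }
      void *e = calloc(1, size);
      if (e != NULL)
          ncache_count++;
      return e;
  }

The shape is what matters: a default, an override, and an allocation path
that fails visibly instead of growing without bound.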

I tend to agree, but I think you are talking about a different kind of limit. A limit meant to avoid memory overflow and thrashing is not the same as one meant to protect against security attacks.

The protocol limit set at /64 (the subnet size) is not something that prevents attacks. It is something that allows new attacks.

An implementation that restricts the size of an instantiated data structure (say, caps it at entries for at most 2^32 nodes) sets a clear limit on something else: subnets that want to be exactly that 2^32 size.
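
(To put numbers on it, assuming a 128-byte neighbor cache entry - a made-up
but plausible figure: even the 2^32-node cap corresponds to roughly 550 GB
of cache, and a fully populated /64 to zettabytes. No cache can ever hold
the subnet; it is always a small window onto it:)

  #include <stdio.h>

  /* What "no limit" would mean on a full /64, at an assumed
   * 128 bytes per neighbor cache entry. */
  int main(void)
  {
      double addrs_64 = 18446744073709551616.0;   /* 2^64 addresses */
      double addrs_32 = 4294967296.0;             /* 2^32 addresses */
      double entry_bytes = 128.0;                 /* assumed entry size */

      printf("full /64:    %.1f zettabytes of cache\n",
             addrs_64 * entry_bytes / 1e21);      /* ~2.4 ZB */
      printf("2^32 subnet: %.0f gigabytes of cache\n",
             addrs_32 * entry_bytes / 1e9);       /* ~550 GB */
      return 0;
  }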

Also, consider that people who develop IP stacks don't necessarily think in terms of Ethernet; they think of many other link layers. Once such a stack gets into an OS as widespread as Linux, there is little control over which link layer the IP stack will run on. There, they actually want no limit at all.

It is not as simple as saying it is the programmer's fault.

It is unspoken because it is hardly required by the RFCs (almost not at
all). It is similar to assuming that the router on the link is always
the .1.
That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.

I am trained, thank you.

Alex

Speaking of scalability - is there any link layer (e.g. Ethernet) that
supports 2^64 nodes on the same link? Any such link deployed? I doubt it.
Scan Google's IPv6 address space, and you'll find one. (scan6 from
<http://www.si6networks.com/tools/ipv6toolkit> is your friend :-) )

Cheers,

