On 31/01/2014 17:35, Fernando Gont wrote:
On 01/31/2014 01:12 PM, Alexandru Petrescu wrote:

This is not a problem of implementation, it is a problem of unspoken
assumption that the subnet prefix is always 64.
Do you know what they say about assumptions? -- "It's the mother of all f* ups".

It's as straightforward as this: whenever you're coding something,
enforce limits. And set it to a sane default. And allow the admin to
override it when necessary.
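
(A minimal sketch of that pattern in C, just to make it concrete; the names and the default value are invented for illustration, not taken from any existing stack:)

  /* Minimal sketch of "enforce a limit, pick a sane default, let the admin
   * override it". All names and values here are invented for illustration. */
  #include <stdbool.h>
  #include <stddef.h>

  #define CACHE_DEFAULT_MAX 1024          /* sane default, not "unbounded" */

  static size_t cache_max = CACHE_DEFAULT_MAX;
  static size_t cache_used;

  /* Hook for a sysctl/config knob so the admin can override the default. */
  void cache_set_max(size_t max)
  {
      cache_max = max;
  }

  /* Fail on the safe side: refuse new entries instead of growing forever. */
  bool cache_try_add(void)
  {
      if (cache_used >= cache_max)
          return false;                   /* caller drops or recycles an entry */
      cache_used++;
      return true;
  }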

I tend to agree, but I think you are talking about a different kind of limit.
A limit meant to avoid memory exhaustion or thrashing is not the same
as one meant to protect against security attacks.

What's the difference between the two? -- intention?

Mostly intention, yes, but there are some differences.

For example, if we talk about limits on data structures, then we are talking mostly about implementations on the end nodes, the Hosts.

But if we talk about limits in the protocol, then we may be talking about implementations on the intermediary routers.

For ND, if one puts a limit on the ND cache size on the end Host, one would need a different kind of limit for the same ND cache but on the Router. The numbers would not be the same.
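
(As a purely hypothetical sketch of what those different limits could look like; the role names and the numbers are invented, not taken from any RFC or implementation:)

  /* Hypothetical illustration only: the same neighbor-cache limit mechanism,
   * but starting from different defaults depending on the node's role. */
  #include <stddef.h>

  enum node_role { NODE_HOST, NODE_ROUTER };

  static size_t nd_cache_default_for(enum node_role role)
  {
      switch (role) {
      case NODE_ROUTER:
          return 16384;   /* a router terminating many links needs more room */
      case NODE_HOST:
      default:
          return 512;     /* a single-homed host needs far less */
      }
  }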

The protocol limit set at 64 (subnet size) is not something to prevent
attacks.  It is something that allows new attacks.

What actually allows attacks are bad programming habits.

We're too tempted to put that on the back of the programmer. But a kernel programmer (which is where ND sits) can hardly be assumed to be using bad habits. If one looks at the IP stack in the kernel, one notices that people are very conservative and very strict about what code gets in there. These are not the kind of people to blame for stupid errors such as forgetting to set some limits.

The /64 has exposed bad programming habits... that's it.



An implementation that restricts the size of an instantiation of a
data structure (say, limits it to hosting at most 2^32 nodes) will be
a clear limit on something else: subnets that want to be of that
particular 2^32 size.

You cannot claim to be something that you cannot handle. I can pretend to be
Superman... but if, after jumping out of the window, somehow I don't start
flying, the thing ain't working... and it won't be funny when I hit the floor.

Same thing here: Don't pretend to be able to handle a /32 when you can't.
In practice, you won't be able to handle 2**32 entries in the NC.

I'd say that depends on the computer?  The memory could hold it, I believe.

What is not possible to imagine is that 2^32 computers sit together on the same Ethernet link.

Take the /64 as "Addresses could be spread all over this /64" rather
than "you must be able to handle 2**64 addresses on your network".

It is tempting.  I would like to take it so.

But what about the holes? Will the holes be subject to new attacks? Will the holes represent address waste?

If we come up with a method to spread these holes around in a way that we, the inventors, understand, will another attacker not understand it too, and attack it?

Also, consider that people who develop IP stacks don't necessarily think
Ethernet; they think of many other link layers.  Once that stack gets into
an OS as widespread as Linux, there is little control over which link
layer the IP stack will run on.  Actually, there they want no limit at all.

It is not as simple as saying it is the programmer's fault.

Not enforcing limits is a programmer's fault. Most security exploits
rely on that.

I tend to agree.

It is unspoken because it is rarely (almost never) required by RFCs.
Similar to the assumption that the router of the link is always the .1.
That's about sloppy programming.

Train yourself to do the right thing. I do. When I code, I always
enforce limits. If anything, just pick one, and then tune it.

I am trained, thank you.

What I meant was: one should train oneself so that you don't really
need to think about it. Enforcing limits is one of those things. The first
thing your brain must be trained to do is, before you allocate a data
structure, check how big the thing is, and how big it's supposed to be.
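
(A small illustrative sketch of that reflex; the bound and the function name are made up:)

  /* Illustrative sketch: validate the requested size against a known bound
   * (and against overflow) *before* allocating. The bound is invented. */
  #include <stdint.h>
  #include <stdlib.h>

  #define TABLE_MAX_ENTRIES 65536

  void *table_alloc(size_t requested_entries, size_t entry_size)
  {
      if (requested_entries == 0 || requested_entries > TABLE_MAX_ENTRIES)
          return NULL;            /* refuse unreasonable sizes */
      if (entry_size != 0 && requested_entries > SIZE_MAX / entry_size)
          return NULL;            /* the multiplication would overflow */
      return calloc(requested_entries, entry_size);
  }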

And it's not just limits. e.g., how many *security* tools need superuser
privileges, but never give up those privileges once they are no longer needed?

"Know thyself" (http://en.wikipedia.org/wiki/Know_thyself). I know my
code is not going to be as good as it should. So I better limit the
damage that it can cause: enforce limits, and release unnecessary
privileges. And fail on the safe side. You could see it as
"compartmentalization", too.

Interesting.

Alex




