At 20:27 +0100 11/14/06, Paul Wouters wrote:
> On Tue, 14 Nov 2006, Edward Lewis wrote:
>
>> The third ingredient is the amplification factor that results from the
>> robustness of the security. When slapping security onto any existing
>> system, some level of robustness is lost.
>
> and gained?
No, robustness is lost - in the sense that if you examine the
protocol's state machine, once-legal states are no longer legal. In
many state machines there are paths that travel from safe states
through questionable states and then back to safe states. It is when
travelling through a questionable state (sometimes known as a
vulnerability) that bad things can happen; so slapped-on security
makes those states unreachable. When the number of possible good
paths is reduced, robustness is lost.
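To make the path-counting intuition concrete, here is a minimal sketch
(a toy state graph of my own invention, not any real protocol): removing
the questionable state also removes one of the good paths, which is the
lost redundancy I mean by "robustness".

```python
# Toy protocol state machine: edges map each state to its successors.
# "Q" is a questionable (potentially vulnerable) intermediate state.
# States and transitions are illustrative assumptions, not a real protocol.
edges = {
    "start": ["safe1", "Q"],
    "safe1": ["done"],
    "Q":     ["done"],
    "done":  [],
}

def count_paths(graph, node, goal):
    """Count distinct simple paths from node to goal in an acyclic graph."""
    if node == goal:
        return 1
    return sum(count_paths(graph, nxt, goal) for nxt in graph.get(node, []))

print(count_paths(edges, "start", "done"))    # 2 good paths before

# "Slapped-on" security makes the questionable state unreachable:
secured = {k: [s for s in v if s != "Q"] for k, v in edges.items()}
print(count_paths(secured, "start", "done"))  # 1 path left: redundancy lost
```

One legitimate path rode through the questionable state, so blocking that
state blocks the path too - the system still works, but with less slack.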
When I say that robustness is lost, I'm looking at the protocol
design. I suppose that if your concern is capacity (i.e., the length
of queues), you could say that security improves robustness by
keeping the system from hitting its limits. That may be, but the
point I was trying to make above is that if you take an operating
system working without a problem and then crimp it where you think
there might be a vulnerability, you would be wise to crimp it in ways
that do not choke it.
>> Because of the flood, the routers couldn't get to the DNS to get the
>> certificates.
>
> Sounds like a design mistake.
Well, something was lost in the translation. The case was that a
router had to pull, from a flood of packets, some information that
would stem the flow of bad packets. To validate the good
information, credentials needed to be judged. Even if the X.509
certificate was sent along in its entirety, the revocation list could
not be sent reliably. The error is in relying on remote security
data when reaching for it may be impossible. (How do you put all the
needed security data local to where an attack can be fended off? If
you push it ahead of time, then you are using resources needlessly in
"peacetime." And, in an open population, to where do you send the
stuff?)
>> In conclusion, there isn't really much the DNSOP community can do
>> about this.
>
> There are things we can do, mostly involving BCPs and sane software
> defaults and things like AS112.
And we could encourage better protocol design. Note that I refer to
"slapped-on" security. Not all security is doomed - it's when we
slap it on that we risk doing a worse job. E.g., if in DNSSEC we
get a signature, it is simple to judge the security of the data. If
we don't get the signature, we have to go see whether the signature
was supposed to be there, etc. The same goes for the spam-defense
approaches. Think of what it would mean to DNS if all records came
with a signature field built in. (For some of us, think "slabs.")
The provable non-existence issue wouldn't be the same. Replays might
declare a zone empty, but you couldn't poison a cache with false data
by downgrading a zone's security.
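A rough sketch of that "built-in signature field" point - the record
layout and the verify() stand-in below are my own illustrative
assumptions, not DNSSEC wire format. When the field is mandatory, a
response with the signature stripped off is rejected on its face,
instead of triggering a "was this zone signed at all?" side quest:

```python
# Hypothetical records where a "sig" field is mandatory on every record.

def verify(record, zone_key):
    # Stand-in for real cryptographic verification of the signature.
    return record.get("sig") == f"signed-by-{zone_key}"

def accept(record, zone_key):
    if "sig" not in record:
        return False  # mandatory field missing: reject, no extra lookups
    return verify(record, zone_key)

good = {"name": "example.", "rdata": "192.0.2.1", "sig": "signed-by-k1"}
stripped = {"name": "example.", "rdata": "203.0.113.9"}  # downgrade attempt

print(accept(good, "k1"))      # True
print(accept(stripped, "k1"))  # False: the downgrade is locally detectable
```

The asymmetry today is that an unsigned answer is ambiguous (unsigned
zone, or stripped signatures?) and resolving the ambiguity costs extra
work; a mandatory field removes the ambiguity.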
>> If we try to protect the DNS, well, there's nothing that can really
>> be done in the protocol about this, other than to ask "please use the
>> DNS less."
>
> Why would we ask that? DNS serves many good purposes. If "protecting"
> comes down to "don't use", then you also have no more need to protect it.

Exactly my point.
> And if you think SMTP spam filtering email is bad, wait for your phones
> to need to do SIP spam filtering. How long before we can't trust our
> phones anymore? I give it two years tops.

Fortunately for phones there is heavy regulation. ;) Do-not-call
lists and the like. (The non-SIP phones, I mean.)
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis +1-571-434-5468
NeuStar
I didn't miss the meeting, I left it before I arrived.
_____________________________________________________
dnsop resources:
web user interface: http://darkwing.uoregon.edu/~llynch/dnsop.html
mhonarc archive: http://darkwing.uoregon.edu/~llynch/dnsop/index.html