Bob Relyea wrote:
> Yes it does. If you can't trust that you've made a connection to the site
> you thought you made a connection to, you have no security. Saying you
> do is like saying "I'm secure because I have an RF shielded cable
> running from my computer".
Hmm... people trust to probabilities every
minute of their lives. They may not have
"security" but they have a great chance of
surviving. Just because there is a small chance
of losing that bet doesn't mean that we can
then turn around and say "that's not secure."
> <begin SOAPBOX>
Soapboxes are great!
> There is a growing myth that you can get most of the security you want
> by using unauthenticated encrypted pipes. This myth has been enhanced
> by such systems as SSH, PGP, and the use of self-signed certs.
Your definition of the word secure is ...
uncompromising and ... not used by
most of the world most of the time!
When people say SSH secures them,
they mean it in the same way that
they say seatbelts secure them, and
cars secure them and stocks and bonds
are securities. That is, there is a risk
element in there, and the wise know
what it is, the less wise hope it doesn't
happen to them, because it isn't worth
their energy to figure out the percentages.
In this way the less wise are wiser than
the wise.
> These tools -- when used properly -- can be secure.
Ah, you mean absolutely secure. No,
no such thing can ever happen. There
is no absolute security. For example,
you could carefully exchange FPs over
a beer with your PGP buddy, and think
you are secure. What you don't know
is that there is a keylogger on your
computer and you are SOL.
I think components can be secure, in
the terms you mean above. But I don't
think complete systems can be secure,
because unlike components, they can't
simply punt inconvenient assumptions
up to the higher layer application; the
buck stops with the system, and it has
to take account of the fact that the
platform might be infected, in its
statement of security.
> It requires a diligent, knowledgeable operator, who painstakingly
> checks all the fingerprints of all the keys he trusts in the system. These
> tools depend on an already existing *human* relationship between the
> people communicating. The problems come because the operator
> verification burden is too high. Most of us --- even those of us who
> know better --- simply rely on the fact that we trust our underlying
> intra- or inter-net infrastructure and click 'accept' when asked to
> do the check.
Right. Most of us do the economic thing:
we take what we can get for free, and we
risk the rest. That's real world security, in the
sense that it incorporates economic risks.
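The fingerprint check being punted on above is mechanically simple; the burden is the out-of-band exchange, not the computation. A minimal sketch, assuming a raw public-key blob and an OpenSSH-style SHA-256 fingerprint (the function names here are illustrative, not from any particular tool):

```python
import base64
import hashlib
import hmac

def fingerprint(pubkey_blob: bytes) -> str:
    """SHA-256 fingerprint in the OpenSSH display style:
    base64 of the digest, with trailing '=' padding stripped."""
    digest = hashlib.sha256(pubkey_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def verify(pubkey_blob: bytes, trusted_fp: str) -> bool:
    """Compare against a fingerprint exchanged out of band
    (e.g. over that beer), using a timing-safe comparison."""
    return hmac.compare_digest(fingerprint(pubkey_blob), trusted_fp)
```

The hard part -- confirming that `trusted_fp` really came from the person you think it did, over a channel the attacker doesn't control -- is exactly the human-relationship step the text describes, and no code removes it.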
> Now add to these the scores of intelligent programmers and admins, who
> understand enough to start these products, but aren't familiar with
> applied cryptography and its protocols. Suddenly everyone thinks
> "look, it's encrypted, so it's safe", without understanding the
> underlying attacks... and they get away with it because, for the most
> part, our unencrypted connections are actually secure enough. They
> clamor for us to remove the warning dialogs and 'just let me get on
> with it...' because they've never been bitten before. We are already
> at risk, and we are only talking about the most intelligent 5-10% of the
> population.
Right. But bear in mind that a lot of
those warnings and popups and so
forth were put in there by techies who
had an exaggerated view of the world.
For SSL it was assumed the credit card
MITM was lurking (no such) and for
PGP it was assumed the NSA was lurking
(who cares?). SSH was the first model
to turn around and *explicitly* say, ok,
here's a risk, it's infinitesimal, and it's
ok to assume it as a risk, especially as
it stops those *passive* password sniffers.
> The issue is, for most of our operations, the risk isn't that we are
> going to lose our sensitive information to some internet snooper.
> SSH doesn't prevent any more practical attacks against my system than
> Telnet does (unless I only turn on client auth).
? SSH prevents every attack after the
first time. It's peace until the key gets
reset. Sniffers are pretty much dead in
the water, as not only do they have to
be there at the right time, they also
have to conduct an active MITM and
leave tracks ...
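The "peace until the key gets reset" model is SSH's key continuity, often called trust-on-first-use (TOFU). A hypothetical sketch of the idea, assuming an in-memory store rather than the real `~/.ssh/known_hosts` file, and hashing keys down to fingerprints purely for illustration:

```python
import hashlib

class TOFUStore:
    """Trust-on-first-use key pinning, in the spirit of SSH's
    known_hosts. Illustrative only: a real client persists the
    store to disk and records the full key, not just a hash."""

    def __init__(self):
        self.known = {}  # hostname -> pinned key fingerprint

    def check(self, host: str, presented_key: bytes) -> str:
        fp = hashlib.sha256(presented_key).hexdigest()
        if host not in self.known:
            # First contact: accept the risk once, pin the key.
            self.known[host] = fp
            return "first-use: key pinned"
        if self.known[host] == fp:
            # Every later connection is protected by the pin.
            return "match: connection continues"
        # A changed key is either a re-key or an active MITM.
        return "ALERT: key changed -- possible MITM"
```

This is why a sniffer who misses the first connection is "dead in the water": to intercept later sessions it must substitute its own key, which trips the alert and leaves tracks.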
> There are very few scenarios where snooping is feasible, but
> redirection of the packets isn't. If we aren't protecting against the
> redirect attack, we aren't supplying enough extra security to warrant
> telling the user he's 'secure'.
Um. It may be that if I can snoop I
can also redirect. But there is a *big*
economic difference between an
active attack and a passive attack.
In doing an active attack I leave
tracks, and basically if that evidence
is found, I'm *nailed*. That's a much
bigger risk.
In doing a passive attack, there are
no tracks ... and even if I'm caught,
I can just say it was a logger for some
other purpose and your passphrase
got caught up in the dragnet. Sorry
'bout that...
Think of it as the rules of war - listen
all you like, watch all you like, but
send one packet and that's an act of
war.
> <end SOAPBOX>
Great stuff!
> One more 'myth' that has been around for a long time is "little bits
> of security is better than no security". This is only true if you
> understand the magnitude of "the little bits".
Mmmm.... that helps. But, it's also true if the
little bit of security is what you can get for
free. If you aren't going to pay any money
or any time, you get ... what you get. A
little is better than none in that sense,
purely because it reduces your probability
of attack by a few orders of magnitude.
> I've heard developers say obscuring the password is better than
> nothing at all, but that's like saying "putting a fake lock on a gate
> is better than just a latch" where the user isn't told the lock is fake.
OK, but the difference here is *what the user
is told* !! and not the strength of the security.
If the user was told that the gate lock was
false, and they accepted that, then fine. The
issue is that the security industry here has a
tendency to say "it's secure" when what they
mean is "it's got some OBscurity features, but
it won't stop a real attack."
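To make the obscurity-versus-security distinction concrete: a password "protected" only by an encoding has no key, so anyone holding the output can invert it. A hypothetical sketch (base64 stands in for whatever obscuring scheme a developer might pick):

```python
import base64

def obscure(password: str) -> str:
    """Encoding, not encryption: there is no secret key,
    so this stops nobody who knows the scheme."""
    return base64.b64encode(password.encode()).decode()

def deobscure(blob: str) -> str:
    # The "casual thief" recognising the fake lock:
    # reversal needs no more knowledge than the format.
    return base64.b64decode(blob).decode()
```

The honest statement for such a feature is "it hides the password from a shoulder-surfer", not "it's secure" -- which is exactly the telling-the-user point above.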
As a practical measure, people use false locks
all the time. We used to call it a dummy lock
where we'd make the chain and padlock look
closed, and tell the team what was going on.
Yeah, we took a risk, but it was way way better
than copying dozens of keys and having to put
up with lost keys and so forth. It actually would
deliver more security than the alternative.
The same thing can - and does - work in crypto.
The canonical case was the GSM crypto which
used dodgy 40 bit crypto, and still provided
peace for 10 years from phone forgers and
phone listeners. Huge value to society.
> It's true the novice thief may pass up the gate because it is too much
> work to get through, but even a casual thief would recognize the lock
> as fake and break in. If the user had known the lock was fake, he may
> not have secured his valuables behind it.
Right. Tell the user how much security there
is.
> In short, providing the connection without accepting it as secure is,
> IMHO, an excellent way to solve the problem and start breaking down
> the myth.
I've been actively breaking down the myth of
absolute security for these last couple of years,
so it's probably me you mean when you talk
about the myth of opportunistic cryptography :-)
iang
--
News and views on what matters in finance+crypto:
http://financialcryptography.com/
_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto