Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-09 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Just to throw in my two cents...

In the early 1990’s I wanted to roll out an encrypted e-mail solution
for the MIT Community (I was the Network Manager and responsible for
the mail system). We already had our Kerberos Authentication system
(of which I am one of the authors, so I have a special fondness for
it). It would do a fine job of helping people exchange session keys
for mail, and everyone at MIT has a Kerberos ID, which would permit
encrypted communication between everyone in the community.

However, as Network Manager, I was also the person who would see legal
requests for access to email and other related data. Whoever ran the
Kerberos KDC would be in a position to retrieve the keys needed to
decrypt any encrypted message, which meant that whoever ran the KDC
could be compelled to turn over those keys. In fact my fear
was that a clueless law enforcement organization would just take the
whole KDC with a search warrant, thus compromising everyone’s
security. Today they may well also use a search warrant to take the
whole KDC, but not because they are clueless...

The desire to offer privacy protection that I, as the administrator,
could not defeat is what motivated me to look into public key systems
and eventually participate in the Internet’s Privacy Enhanced Mail
(PEM) efforts. By using public key algorithms, correspondents are
protected from the prying eyes of even the folks who run the system.

I don’t believe you can do this without using some form of public key
system.

-Jeff
___
Jeffrey I. Schiller
Information Services and Technology
Massachusetts Institute of Technology
77 Massachusetts Avenue  Room E17-110A, 32-392
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@mit.edu
http://jis.qyv.name
___



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iD8DBQFSLhgY8CBzV/QUlSsRAoQ8AKDBC/y/qph+HpE11a+5d7p6a6DqyQCgiN/f
3Dcsr8wLR1H+J9gzz31n4ys=
=84A0
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Jeffrey I. Schiller

On Fri, Sep 06, 2013 at 05:22:26PM -0700, John Gilmore wrote:
> Speaking as someone who followed the IPSEC IETF standards committee
> pretty closely, while leading a group that tried to implement it and
> make so usable that it would be used by default throughout the
> Internet, I noticed some things:
> ...

Speaking as one of the Security Area Directors at the time...

I have to disagree with your implication that the NSA intentionally
fouled the IPSEC working group. There were a lot of people working to
foul it up! I also don’t believe that the folks who participated,
including the folks from the NSA, were working to weaken the
standard. I suspect that the effort to interfere in standards started
later than the IPSEC work. If the NSA was attempting to thwart IETF
security standards, I would have expected to also see bad things in
the TLS working group and the PGP working group. There is no sign of
their interference there.

The real (or at least the first) problem with the IPSEC working group
was that we had a good and simple solution, Photuris. However the
document editor on the standard decided to claim it (Photuris) as his
intellectual property and that others couldn’t recommend changes
without his approval. This effectively made Photuris toxic in the
working group and we had to move on to other solutions. This is one of
the events that led to the IETF’s “Note Well” document and clear
policy on the IP associated with contributions. Then there was the
ISAKMP (yes, an NSA proposal) vs. SKIP debate. As Security AD, I eventually
had to choose between those two standards because the working group
could not generate consensus. I believed strongly enough that we
needed an IPSEC solution so I decided to choose (as I promised the
working group I would do if they failed to!). I chose ISAKMP. I posted
a message with my rationale to the IPSEC mailing list, I’m sure it is
still in the archives. I believe that was in 1996 (I still have a copy
somewhere in my personal archives).

At no point was I contacted by the NSA or any agent of any government
in an attempt to influence my decision. Folks can choose to believe
this statement, or not.

IPSEC never gained significant traction on the Internet at large. It
eventually found an important niche, namely VPNs, but that came later.

IPSEC isn’t useful unless all of the end-points that need to
communicate implement it. Implementations need to be in the OS (for
all practical purposes).  OS vendors at the time were not particularly
interested in encryption of network traffic.

The folks who were interested were the browser folks. They were very
interested in enabling e-commerce, and that required
encryption. However they wanted the encryption layer someplace where
they could be sure it existed. An encryption solution was not useful
to them if it couldn’t be relied upon to be there. If the OS the user
had didn’t have an IPSEC layer, they were sunk. So they needed their
own layer. Thus the Netscape guys did SSL, and Microsoft did PCT, and
in the IETF we were able to get them to work together to create
TLS. This was a *big deal*. We shortly had one deployed interoperable
encryption standard usable on the web.

If I were the NSA and I wanted to foul up encryption on the Internet,
the TLS group is where the action was. Yet from where I sit, I didn’t
see any such interference.

If we believe the Edward Snowden documents, the NSA at some point
started to interfere with international standards relating to
encryption. But I don’t believe they were in this business in the
1990’s at the IETF.

-Jeff


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-07 Thread Jeffrey I. Schiller

On Sat, Sep 07, 2013 at 09:14:47PM +, Gregory Perry wrote:
> And this is exactly why there is no real security on the Internet.
> Because the IETF and standards committees and working groups are all
> in reality political fiefdoms and technological monopolies aimed at
> lining the pockets of a select few companies deemed "worthy" of
> authenticating user documentation for purposes of establishing
> online credibility.
> ...
> Encrypting IPv6 was initially a mandatory part of the spec,
> but then it somehow became discretionary.  The nuts and bolts of
> strong crypto have been around for decades, but the IETF and related
> standards "powers to be" are more interested in creating a global
> police state than guaranteeing some semblance of confidential and
> privacy for Internet users.

I’m sorry, but I cannot let this go unchallenged. I was there, I saw
it. For those who don’t know, I was the IESG Security Area Director
from 1994 to 2003 (by myself until 1998, after which we had two co-ADs
in the Security Area). During this timeframe we formed the TLS working
group and the PGP working group, and IPv6 became a Draft Standard. Scott
Bradner and I decided that security should be mandatory in IPv6, in
the hope that we could drive more adoption.

The IETF was (and probably still is) a bunch of hard working
individuals who strive to create useful technology for the
Internet. In particular, IETF contributors are in theory individual
contributors, not representatives of their employers. Of course,
practice is a bit “noisier” than the theory, but the bulk of the
participants I worked with were honest, hard-working individuals.

Security fails on the Internet for three important reasons that have
nothing to do with the IETF or the technology per se (except for
point 3).

 1.  There is little market for “the good stuff”. When people see that
 they have to provide a password to login, they figure they are
 safe... In general the consuming public cannot tell the
 difference between “good stuff” and snake oil. So when presented
 with a $100 “good” solution or a $10 bunch of snake oil, guess
 what gets bought.

 2.  Security is *hard*, it is a negative deliverable. You do not know
 when you have it, you only know when you have lost it (via
 compromise). It is therefore hard to show return on investment
 with security. It is hard to assign a value to something not
 happening.

 2a. Most people don’t really care until they have been personally
 bitten. A lot of people only purchase a burglar alarm after they
 have been burglarized. Although people are more security aware
 today, that is a relatively recent development.

 3.  As engineers we have totally and completely failed to deliver
 products that people can use. I point out e-mail encryption as a
 key example. With today’s solutions you need to understand PK and
PKI at some level in order to use it. That is like requiring a
 driver to understand the internal combustion engine before they
 can drive their car. The real world doesn’t work that way.

No government conspiracy required. We have seen the enemy and it is...

-Jeff


Re: [Cryptography] Protecting Private Keys

2013-09-07 Thread Jeffrey I. Schiller

On Sat, Sep 07, 2013 at 03:46:10PM -0400, Jim Popovitch wrote:
> $5k USD to anyone one of the thousands of admins with access

Years ago, when key escrow and the Clipper chip were still on the table, I
developed an attack on the key escrow agents. It worked like this:

 1. Approach facility, knock on door.
 2. To the person who answers: “Here is $1 Million, take a walk.”
 3. To anyone else encountered: “Here is $1 Million, go to the
bathroom.”
 4. ... (you get the idea).

The fact that the keys would fit on an Exabyte tape made exfiltrating
them pretty easy.

A few SSL private keys take even less space.

I have a lot of respect for how Google runs its operation. However it
wouldn’t be that hard to arrange for an agent to get a job there
(there are very smart people at NSA, and Google likes hiring smart
people :-) ) for the purpose of obtaining keys.

Of course, this is all speculation...

-Jeff


[Cryptography] Protecting Private Keys

2013-09-07 Thread Jeffrey I. Schiller

While we worry about symmetric vs. public key ciphers, we should not
forget the risk of compromise of our long-term keys. How are they
protected?

One of the most obvious ways to compromise a cryptographic system is
to get the keys. This is a particular risk in TLS/SSL when PFS is not
used. Consider a large scale site (read: Google, Facebook, etc.) that
uses SSL. The private keys of the relevant certificates need to be
on literally hundreds if not thousands of systems. Chances are they
are not encrypted on those systems so those systems can auto-restart
without human intervention. Those systems also break
periodically. What happens to the broken pieces, say a broken hard
drive?

If one of these private keys is compromised, all pre-recorded traffic
can now be decrypted, as long as PFS was not used (and as we know, it
is rarely used).
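The protection PFS provides comes from ephemeral key exchange: the session secret is derived from throwaway keys, so a later theft of the server's long-term key reveals nothing about recorded traffic. Here is a minimal toy sketch in Python; the prime and generator are illustrative choices of mine, not a vetted group (real deployments use standardized groups or X25519):

```python
import secrets

# Toy ephemeral Diffie-Hellman, for illustration only.
P = 2**127 - 1   # a prime modulus (toy; not a standardized DH group)
G = 3

def ephemeral_keypair():
    """A fresh secret per session is what makes the exchange 'ephemeral'."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

# Each session: both sides generate throwaway keys...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...and derive the same session secret from the other side's public value.
secret_a = pow(b_pub, a_priv, P)
secret_b = pow(a_pub, b_priv, P)
assert secret_a == secret_b

# The private exponents are discarded after the session; compromising a
# long-term signing key later does not recover this session secret.
del a_priv, b_priv
```

Without the ephemeral exchange (plain RSA key transport), the session key is encrypted directly under the server's long-term key, and every recorded session falls with it.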

Encrypted email is also at great risk because we have no PFS in any of
these systems. Our private keys tend to last a long time (just look at
the age of my private key!).

If I were the NSA, I would be scavenging broken hardware from
“interesting” venues and purchasing computers for sale in interesting
locations. I would be particularly interested in stolen computers, as
they have likely not been wiped.

The bottom line here is that the NSA has upped the game (and probably
did so quite a while ago, but we are just learning about it now). This
means that commercial organizations that truly want to protect their
customers from the NSA, and other national actors whom I am sure are
just as skilled and probably more brazen, need to up their game, by a
lot!

-Jeff

P.S. I am very careful about which devices my private key touches and
what happens to it when I am through with it.

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Jeffrey I. Schiller

On Sat, Sep 07, 2013 at 10:57:07AM +0300, ianG wrote:
> It's a big picture thing.  At the end of the day, symmetric crypto
> is something that good software engineers can master, and relatively
> well, in a black box sense.  Public key crypto not so easily, that
> requires real learning.  I for one am terrified of it.

Don’t be. There is no magic there. From what I can tell, there are two
different issues with public key.

1. Weaknesses in the math.
2. Fragility in use.

The NSA (or other national actors) may well have found a mathematical
weakness in any of the public key ciphers (they may have found a
weakness in symmetric ciphers as well). Frankly, we just don’t know.
Do we trust RSA more than Diffie-Hellman or any of the elliptic
curve techniques? Who knows. We can make our keys bigger and hope for
the best.

As for fragility: generating random numbers is *hard*, particularly on
a day-to-day basis. When you generate a keypair with GPG/PGP it
prompts you to type in random keystrokes and move the mouse etc., all
in an attempt to gather as much entropy as possible. This is a pain,
but it makes sense for long-lived keys. People would not put up with
it if you had to do this for each session key. Fragile public key
systems (such as Elgamal and all of the variants of DSA) require
randomness at signature time. The consequence for failure is
catastrophic. Most systems need session keys, but the consequence for
failure in session key generation is the compromise of the
message. The consequence for failure in signature generation in a
fragile public key system is compromise of the long term key!
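This catastrophic failure mode can be shown in a few lines. The sketch below is a toy of the DSA signing equation only (parameters far too small for real use; p, g, and the hash function are elided, and the constants are my own): reusing the per-signature nonce k across two signatures lets anyone recover the long-term key x from public values.

```python
# Toy demonstration of DSA's nonce fragility.
q = 2**31 - 1          # a prime group order (toy size; real DSA uses >= 160 bits)
x = 123456789          # long-term private key
k = 987654321          # per-signature nonce -- reused, which is the bug
r = pow(7, k, q)       # stand-in for (g^k mod p) mod q; exact value is irrelevant here

def sign(h):
    """DSA signing equation: s = k^-1 * (h + x*r) mod q."""
    return (pow(k, -1, q) * (h + x * r)) % q

h1, h2 = 1111, 2222    # hashes of two different messages
s1, s2 = sign(h1), sign(h2)

# Attacker's algebra: s1 - s2 = k^-1 * (h1 - h2), so k = (h1-h2)/(s1-s2) mod q
k_rec = ((h1 - h2) * pow(s1 - s2, -1, q)) % q
# Then from s1 = k^-1 * (h1 + x*r):  x = (s1*k - h1) / r mod q
x_rec = ((s1 * k_rec - h1) * pow(r, -1, q)) % q

assert (k_rec, x_rec) == (k, x)   # the long-term key is fully recovered
```

The same algebra is why a merely *biased* nonce generator is also fatal in these schemes, whereas a bad session key compromises only one message.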

I wrote about this in NDSS 1991; I cannot find an on-line reference
to it, though.

Then if you are a software developer, you have the harder problem of
not being able to control the environment your software will run on,
particularly as it applies to the availability of entropy.

So, my advice: use RSA, and choose a key length as long as your
paranoia. Like all systems, you will need entropy to generate keys,
but you won’t need entropy to use them for encryption or for
signatures.
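To make the contrast concrete, here is textbook RSA at toy size (tiny primes, no padding — all choices mine, and real use requires large primes and proper padding such as RSASSA-PSS). The point it illustrates is just the one above: RSA signing is deterministic, so no entropy is consumed after key generation.

```python
# Textbook RSA, toy parameters, illustrating deterministic signing only.
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent

def sign(h):
    return pow(h, d, n)        # no random input at all

def verify(h, sig):
    return pow(sig, e, n) == h

h = 42                          # stand-in for a message hash reduced mod n
sig1, sig2 = sign(h), sign(h)
assert sig1 == sig2             # deterministic: same message, same signature
assert verify(h, sig1)
```

A fragile scheme in the DSA family would instead need a fresh, perfect random value inside `sign()` on every call.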

-Jeff



[Cryptography] Google's Public Key Size (was Re: NSA and cryptanalysis)

2013-09-02 Thread Jeffrey I. Schiller

On Mon, Sep 02, 2013 at 03:09:31PM -0400, Jerry Leichter wrote:
> Google recently switched to 2048 bit keys; hardly any other sites
> have done so, and some older software even has trouble talking to
> Google as a result.

By the way, as a random side note: Google switched to 2048-bit RSA keys on
their search engine. However my connection to mail.google.com is using
a NIST p256r1 ECC key in its certificate.

-Jeff


Re: Five Theses on Security Protocols

2010-08-02 Thread Jeffrey I. Schiller

On 08/01/2010 09:31 AM, Anne & Lynn Wheeler wrote:
> Part of what was recognized by the x9a10 financial standard working
> group (and the resulting x9.59 financial standard) was that relying
> on the merchant (and/or the transaction processor) to provide major
> integrity protection for financial transactions ... is placing the
> responsibility on the entities with the least financial interest
> ... the "security proportional to risk" scenario (where largest
> percentage of exploits occur in the current infrastructure
> ... including data breaches)

Speaking as a merchant (yep, I get to do that too!), albeit one in
Higher Education, we don't view the risk of a compromise of a
transaction simply as the risk of losing the associated profit. Our
concern is more about risk to reputation. MIT wants its name in
the papers associated with scientific breakthroughs and the like, not
with reports of security breaches!

We also need to be concerned with penalties that might be levied from
the various card networks (this is a more recent concern).

I am involved in our PCI security efforts (led them for a while). We
are not at all concerned with issues involved with SSL. Almost all of
our concern is about protecting the PCs of non-technical people who
are processing these transactions.

Similarly the breaches I am aware of were all about compromising
back-end systems, not breaking SSL.

> A decade ago, there were a number of "secure" payment transaction
> products floated for the internet ... with significant upfront
> merchant interest ... assuming that the associated transactions
> would have significant lower interchange fees (because of the
> elimination of "fraud" surcharge). Then things went thru a period of
> "cognitive dissonance" when financial institutions tried to explain
> why these transactions should have a higher interchange fee ... than
> the highest "fraud surchange" interchange fees. The severity of the
> "cognitive dissonance" between the merchants and the financial
> institutions over whether "secure" payment transactions products
> should result in higher fees or lower fees contributed significantly
> to the products not being deployed.

I remember them well. Indeed these protocols, presumably you are
talking about Secure Electronic Transactions (SET), were a major
improvement over SSL, but adoption was killed not only by failing to
give the merchants a break on the fraud surcharge, but also by
requiring the merchants to pick up the up-front cost of upgrading all
of their systems to use these new protocols. And there was the risk
that it would turn off consumers, because it required consumers to set
up credentials ahead of time. So if a customer arrived at my
SET-protected store-front, they might not be able to make a purchase
if they had not already set up their credentials. Many would just go
to a competitor that didn’t require SET rather than establish the
credentials.

So another aspect of the failure of SET was that it didn't provide
real incentives for consumers to go through the hassle of having their
credentials set up, thus creating a competitive advantage for those
merchants who did *not* use SET. Sigh.

It comes back to Perry's theses: you *must* design security systems to
work for *real* people (both as consumers and merchants).

-Jeff


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-07-31 Thread Jeffrey I. Schiller

On 07/31/2010 02:44 AM, Peter Gutmann wrote:

> Apparently the DNS root key is protected by what sounds like a
> five-of-seven threshold scheme, but the description is a bit
> unclear.  Does anyone know more?
>
> (Oh, and for people who want to quibble over "practically-deployed",
>  I'm not aware of any real usage of threshold schemes for anything,
>  at best you have combine-two-key-components (usually via XOR), but
>  no serious use of real n- of-m that I've heard of.  Mind you, one
>  single use doesn't necessarily count as "practically deployed"
>  either).

When we deployed the U.S. Higher Ed. PKI Root (USHER) [1] we secret
shared the root key in a 3 of 5 way. The operator of the PKI had two
of the shares and 3 independent outsiders each had a single share. The
idea being that the operator would need to contact one of the outside
share holders in order to recover the key, but it would require all
three of the outside share holders to get together to recombine the
key without the cooperation of the operator.

Each share consisted of a CD-R with the share written to it thousands
of times (why not, the thing holds ~650 MB and the share is about 1 KB
or so). We also wrote out the source code of the combining program,
written in Python, a few thousand times as well.
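For the curious, the kind of threshold split described can be sketched in a few lines of Python. This is a minimal Shamir-style 3-of-5 scheme over a prime field, an illustration under my own parameter choices and not the actual USHER combining program:

```python
import secrets

# Minimal Shamir secret sharing over a prime field.
PRIME = 2**127 - 1   # field modulus must exceed any secret value
K, N = 3, 5          # threshold, total shares

def split(secret, k=K, n=N):
    """Random degree-(k-1) polynomial with f(0) = secret; shares are (i, f(i))."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(xval):
        return sum(c * pow(xval, j, PRIME) for j, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(123456789)
assert combine(shares[:3]) == 123456789          # any quorum of 3 works
assert combine([shares[0], shares[2], shares[4]]) == 123456789
```

With fewer than three shares, every possible secret remains equally consistent with what is held, which is exactly the property that makes the operator-plus-one-outsider arrangement work.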

-Jeff





Re: Five Theses on Security Protocols

2010-07-31 Thread Jeffrey I. Schiller

On 07/31/2010 12:32 PM, Perry E. Metzger wrote:
> 1 If you can do an online check for the validity of a key, there is
>   no need for a long-lived signed certificate, since you could
>   simply ask a database in real time whether the holder of the key
>   is authorized to perform some action. The signed certificate is
>   completely superfluous.
>
>   If you can't do an online check, you have no practical form of
>   revocation, so a long-lived signed certificate is unacceptable
>   anyway.

In general I agree with you, in particular when the task at hand is
authenticating individuals (or more to the point, Joe
Sixpack). However the use case of certificates for websites has worked
out pretty well (from a purely practical standpoint). The site owner
has to protect their key, because as you say, revocation is pretty
much non-existent.

> 2 A third party attestation, e.g. any certificate issued by any
>   modern CA, is worth exactly as much as the maximum liability of
>   the third party for mistakes. If the third party has no liability
>   for mistakes, the certification is worth exactly nothing. All
>   commercial CAs disclaim all liability.
>
>   An organization needs to authenticate and authorize its own users;
>   it cannot ask some other organization with no actual liability to
>   perform this function on its behalf. A bank has to know its own
>   customers, the customers have to know their own bank. A company
>   needs to know on its own that someone is allowed to reboot a
>   machine or access a database.

This is one of the issues driving "Federated Authentication." The idea
being that each organization authenticates its own users (however it
deems appropriate) and the federation technology permits this
authentication to be used transitively. I view federation as still in
its infancy, so there are plenty of growing pains ahead of us.

As an aside... a number of years ago I was speaking with the security
folks at a large financial organization which does business with
MIT. Their authentication approach was pretty lame. I asked them if
they could instead accept MIT client certificates. They had a simple
question for me. They asked me if MIT would make good if a transaction
went bad and the "badness" could be attributed to us
mis-authenticating someone. I said "No". They said, well, our
authentication may be lame, but we stand behind it. If someone loses
money as a result, we will make them whole. And there you have it.

> 3 Any security system that demands that users be "educated",
>   i.e. which requires that users make complicated security decisions
>   during the course of routine work, is doomed to fail.
>
>   For example, any system which requires that users actively make
>   sure throughout a transaction that they are giving their
>   credentials to the correct counterparty and not to a thief who
>   could reuse them cannot be relied on.
>
>   A perfect system is one in which no user can perform an action
>   that gives away their own credentials, and in which no user can
>   authorizes an action without their participation and knowledge. No
>   system can be perfect, but that is the ideal to be sought after.

Completely agree. One of the appeals of public key credentials, notice
that I didn't say "certificate" here, is that you can prove your
identity without permitting the relying party to turn around and use
your credentials. I call this class of system "non-disclosing" because
you do not disclose sufficient information to permit the relying party
to impersonate you. Passwords are "disclosing"!

We do not require drivers of automobiles to be auto mechanics. We
shouldn't require internet users to be security technologists.

> 4 As a partial corollary to 3, but which requires saying on its own:
>   If "false alarms" are routine, all alarms, including real ones,
>   will be ignored. Any security system that produces warnings that
>   need to be routinely ignored during the course of everyday work,
>   and which can then be ignored by simple user action, has trained
>   its users to be victims.
>
>   For example, the failure of a cryptographic authentication check
>   should be rare, and should nearly always actually mean that
>   something bad has happened, like an attempt to compromise
>   security, and should never, ever, ever result in a user being told
>   "oh, ignore that warning", and should not even provide a simple UI
>   that permits the warning to be ignored should someone advise the
>   user to do so.
>
>   If a system produces too many false alarms to permit routine work
>   to happen without an "ignore warning" button, the system is
>   worthless anyway.

I learned about this from a story when I was a kid. I believe it was
called "The Boy who Cried Wolf."

> 5 Also related to 3, but important in its own right: to quote Ian
>   Grigg:
>
> *** There should be one mode, and it should be secure. ***
>
>   There must not be a confusing c

Re: Security of Mac Keychain, Filevault

2009-11-02 Thread Jeffrey I. Schiller
- "Jerry Leichter"  wrote:
> for iPhone's and iPod Touches, which are regularly used to hold  
> passwords (for mail, at the least).

I would not (do not) trust the iPhone (or iPod Touch) to protect a
high value password. Or more to the point I would change any such
password if my iPhone went unaccounted for.

In the case of the Mac Keychain and Filevault, if implemented
correctly, the security hinges on a secret that you know. Pick a good
secret (high entropy) and you are good. Pick a poor one, well...

However the iPhone’s keychain is not encrypted under a password.
Instead it is encrypted under a key derived from the hardware. The iPhone
Dev-Team, the folks who regularly jail break the iPhone, seem to have
little problem deriving keys from the phone! Note: Setting a phone
lock password doesn’t prevent me from accessing the phone using the
various jail breaking tools. Presumably once I have control of the
phone, I have access to any of the keys on it.

-Jeff

-- 
========
Jeffrey I. Schiller
MIT Network Manager/Security Architect
PCI Compliance Officer
Information Services and Technology
Massachusetts Institute of Technology
77 Massachusetts Avenue  Room W92-190
Cambridge, MA 02139-4307
617.253.0161 - Voice
j...@mit.edu




Re: HSM outage causes root CA key loss

2009-07-14 Thread Jeffrey I. Schiller
- "Peter Gutmann"  wrote:
> I haven't been able to find an English version of this, but the
> following news item from Germany: ...

It is exactly for this reason that when we generated the root key for
the U.S. Higher Education PKI we did it outside of an HSM and then
loaded it into two HSMs. The "raw" key was then manually secret shared
accross five CD's (three being the quorum) which were distributed to
five individuals for safe keeping. Because CD's have 700 Mb of storage
and the share secret is tiny, literally thousands of copies of it were
written on each CD along with the source code of the secret sharing
software (written in Python).

In theory every few years we are supposed to take out the CD's and
verify that they can be read. It's probably time to do that now :-)

Because of prior experience with a SafeKeyper(tm) (a very large HSM),
I learned that when the only copy of your key is in an HSM, the HSM
vendor really owns your key, or at least they own you!




Re: how to properly secure non-ssl logins (php + ajax)

2009-02-20 Thread Jeffrey I. Schiller
I think you are close, but are probably doing way too much work.

First let's define a function HMAC_MD. HMAC is defined in RFC 2104
and represents the current best practice for using a hash to
"sign" a data value. It takes:

  result = hmac_md(key, value)

You can use hmac with MD5, SHA1, SHA256... whatever. You will likely
find libraries that already implement this in various languages.

Below "SHA" means SHA1, SHA256, your choice.

We'll assume each user has a password which is stored on the
server. (For a bit of extra security I would have the server store it
as a SHA hash of the actual password.)


For authentication you do a challenge response.

   Server                               Client
    nonce ----------------------------->
          <----- username, hmac_md(sha(password), nonce)

The trick is to ensure that the nonce is not re-used. There are
several ways to do this. One way is to store it in a table:

  create table nonces (
    nonce varchar(128),   -- probably enough
    ts timestamp,
    used boolean
  );

When the server gets the reply it looks up the user's "sha(password)"
which is stored in the user account table. It then verifies that the
nonce value is in the nonces table and that used is False. It then
verifies that the timestamp is "fresh" (you can decide this). Upon
use, the nonces table is updated to set used to True. A second login
attempt would require a separate nonce. Once a nonce is no longer
"fresh" it can be purged from the nonces table (so you don't have to
store these forever). Obviously the server computes
hmac_md(sha(password), nonce) and verifies it is the value received
from the client.
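The flow above can be sketched in a few lines of Python. This is a minimal illustration, not a drop-in implementation: the usernames, the freshness window, and the dict standing in for the SQL nonces table are all placeholders.

```python
# Sketch of the nonce-based challenge-response described above.
import hashlib
import hmac
import os
import time

FRESH_SECONDS = 300  # how long a nonce stays "fresh" (illustrative)

# Server-side user table: username -> sha(password), stored as hex.
users = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
nonces = {}  # nonce -> [timestamp, used]; stands in for the SQL table

def issue_nonce():
    """Server: generate a fresh nonce and remember it."""
    n = os.urandom(16).hex()
    nonces[n] = [time.time(), False]
    return n

def client_response(password, nonce):
    """Client: compute hmac_md(sha(password), nonce)."""
    key = hashlib.sha256(password.encode()).hexdigest().encode()
    return hmac.new(key, nonce.encode(), hashlib.sha256).hexdigest()

def verify(username, nonce, response):
    """Server: check freshness, burn the nonce, then compare HMACs."""
    entry = nonces.get(nonce)
    if entry is None or entry[1] or time.time() - entry[0] > FRESH_SECONDS:
        return False  # unknown, already used, or stale nonce
    entry[1] = True   # a second login attempt needs a fresh nonce
    expected = hmac.new(users[username].encode(), nonce.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Note that the password itself never crosses the wire; an eavesdropper sees only a nonce and an HMAC over it, and replaying that pair fails because the nonce is burned on first use.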

There are a couple of gotchas here. The biggest is how you initially
setup the shared secret (aka the password). Without public key
operations there is no good way to create accounts (unless this is
done administratively, effectively "off line"). SSL of course can
solve this (but you don't want to use SSL). You can also attempt to
implement RSA in javascript and PHP (well, I'm sure routines exist for
PHP). You can then download a public key in your javascript code for
account registration. The user's browser can then compute
sha(password) and send it encrypted in the public key (or encrypted in
a data encrypting key which is encrypted in a public key).
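The registration idea sketched above can be illustrated with textbook RSA. This is for intuition only, with tiny toy primes and no padding; a real deployment would use a proper crypto library and OAEP padding, and a real modulus would hold the whole digest:

```python
# Toy illustration: the client encrypts sha(password) in the server's
# RSA public key (n, e). Textbook RSA with tiny primes -- never use
# unpadded RSA or keys this small in practice.
import hashlib

p, q, e = 1009, 1013, 65537           # toy primes; real keys are ~2048 bits
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # server's private exponent

# Client side: hash the password, encrypt the digest in the public key.
# Truncated to 2 bytes here only so the integer fits under the toy modulus.
m = int.from_bytes(hashlib.sha256(b"hunter2").digest()[:2], "big")
c = pow(m, e, n)                      # sent to the server for registration

# Server side: decrypt with the private key and store the digest.
recovered = pow(c, d, n)
```

Only the public key ships with the registration page, so an eavesdropper on the non-SSL connection learns nothing about the password.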

I don't know how amenable javascript is to doing RSA. Years ago (when
computers were much slower) I wrote a Java Applet that did RSA in the
applet for account registration at MIT. It wasn't very fast, but it
was good enough for a one-time registration applet. Heh heh, we still
use it today!

Now of course I really cannot end this message without throwing in the
obvious caution that without SSL your authentication is pretty
weak. Even though you have not exposed the user's password, once
logged in PHP uses a session cookie. This cookie, although of limited
lifetime, is now available to the eavesdropper to steal and abuse. I'm
not even sure that PHP ensures that a cookie is coming from the same
IP address it was issued to (and in fact you cannot usually implement
such a restriction because some environments [aka large NATs and other
crud] can result in a legitimate user's traffic coming from different
IP addresses even within the same web session!).

And of course all of your data is also exposed, both for viewing and
for modification in flight.

Last I checked, SSL certificates could be had for the $20/year range,
so I don't see how that is cost prohibitive!

Modern hardware also does SSL pretty darn fast. You really have to
have a very high traffic site before it becomes a problem. There
actually aren't that many high traffic sites out there. Most
organizations may think their sites are high traffic, but they rarely
are.

-Jeff

- "Rene Veerman"  wrote:

> Hi.
>
> Recently, on both the jQuery(.com) and PHP mailinglists, a question
> has arisen on how to properly secure a login form for a non-ssl
> web-application.  But the replies have been "get ssl".. :(
>
> I disagree, and think that with a proper layout of authentication
> architecture, one can really secure a login system without having the
> administrative overhead of installing SSL everywhere, and the monetary
> cost for a SSL certificate for each domain.



Disk Encryption (was: Re: PGP "master keys")

2006-05-01 Thread Jeffrey I. Schiller
I use the following approach to encrypting my disks.

I use an encrypted loopback device. The version of losetup I use
permits me to store the disk key in a PGP encrypted file and decrypt
it (with gpg) when needed. I made many backups of both my personal
keyring and the file with the encrypted loop key. So the only "secret"
I have to remember is the passphrase on my normal PGP key, which I am
not likely to forget.

Of course there is a trade-off here. If my PGP key is compromised, my
disk encryption is at risk (if the encrypted disk key file is
compromised as well).

-Jeff

P.S. If you run a reasonably modern Linux system, and have more than
one system, you can use "drbd" to implement software mirroring between
the two systems. Clever use of openvpn and encrypted loopback devices
can do this securely as well.



Re: ID "theft" -- so what?

2005-07-21 Thread Jeffrey I. Schiller
Btw, there are credit card issuers (AT&T Universal is one) that permit
you to create a virtual one-time-use credit card (with a time limit and
$$ limit if you want).

So when I shop at a merchant I don't want to trust, I open another
browser window, go to my issuer's website, and obtain a one-time card
number to use at the merchant site. I can usually see immediately
after the purchase that the card has been used (on the issuer's website),
so I know the merchant is checking the card in real time.

Apparently there is wallet software that will do this in a more
automated fashion, but it isn't available for my platform (non-Windows).

Jerrold Leichter wrote:

>| Date: Wed, 13 Jul 2005 16:08:20 -0400
>| From: John Denker <[EMAIL PROTECTED]>
>| To: Perry E. Metzger <[EMAIL PROTECTED]>
>| Cc: cryptography@metzdowd.com
>| Subject: Re: ID "theft" -- so what?
>| ...
>| Scenario:  I'm shopping online.  Using browser window #1, I
>| have found a merchant who sells what I want.   in the checkout
>| phase.
>| 
>| Now, in browser window #2, I open a secure connection to my
>| bank.  The bank and I authenticate each other.  (This is
>| two-way authentication;  one possible method is SSL certificate
>| on the bank's part, and a password or some such on my part.)
>| 
>| ...
>| As a refinement, I could ask the bank to issue a number that
>| was not only restricted to a single use, but also restricted
>| as to dollar amount.  As a further refinement, it could be
>| restricted to a particular merchant.
>| 
>| Everything to this point is upward-compatible with existing
>| systems.  The merchant doesn't even need to know there's
>| anything special about this card number;  the charge moves
>| through existing processing channels just like any other.
>| 
>| As a further refinement, once the system is established,
>| the merchant could provide, on the checkout page, a number
>| that encodes in some standard format various details of
>| the transaction:  merchant ID, date, dollar amount, and
>| maybe even a description of the goods sold.  I cut-and-paste
>| this code from the merchant to the bank, and let the bank
>| site decode it.  If it looks correct, I click OK and then
>| the bank generates a response that the merchant will
>| accept in payment.  If necessary I can cut-and-paste this
>| from the bank to the merchant ... but it would make more
>| sense for the bank to transmit it directly to the merchant.
>| This further increases security, and also saves me from
>| having to enter a lot of billing detail
>In effect, what you've done is proposed a digital model of *checks* to
>replace the digital model of *credit cards* that has been popular so far.
>In the old days, paper checks were considered hard to forge and you were
>supposed to keep yours from being stolen.  Your physical signature on the
>check was considered hard to forge.  Checks were constructed in a way
>that made alteration difficult, binding the details of the transaction
>(the payee, the amount being paid) and the signature to the physical
>instrument.
>
>There was a time when checks were accepted pretty freely, though often with
>some additional identification to show that you really were the person whose
>name was on the check.
>
>Credit cards and credit card transactions never bound these various features
>of the transaction nearly as tightly.  Stepping back and looking at the
>two systems, it seems that using the check model as the starting point for
>a digital payment system may well be better than using credit cards, whose
>security model was never really as well developed.  When I handed a check to
>a vendor, I had (and to this day have) excellent assurance that he could
>not change the amount, and (in the old days) reasonable assurance that he
>could not create another check against my account.  (Given digital scanners
>on the one hand, and the "virtualization" of the check infrastructure, from
>the ability to initiate checking transactions entirely over the phone to
>check truncation at the merchant on the other, this is long gone.  It would be
>nice to recover it.)
>   -- Jerry
>



Re: The real problem that https has conspicuously failed to fix

2003-06-12 Thread Jeffrey I. Schiller
Yep, I deployed such a PKI here at MIT back in 1996. Today every student 
and most faculty and staff have certificates.

It really does work, but unfortunately the support for them in the 
common browsers is quirky enough that we have our support fun! I can 
understand why commercial sites shy away.

I have also been involved in efforts to get U.S. Higher Education to 
start deploying client certificates. The big problem there is that 
public key encryption appears to require more than the amount of clue 
that most computer administrators seem to have, so education is a real 
problem.

		-Jeff

Nomen Nescio wrote:
Jeffrey I. Schiller writes:


Oh, and btw, the form posting URL in my message wasn't even https, it 
was just http. So all the futzing in the world with https wouldn't help!


Of course it would help.  Have you been following this discussion
at all?  The idea is to eliminate passwords as being of any value in
getting access to PayPal or other ecommerce sites, by replacing them
with client certificates.  This implies using https or something
cryptographically similar.





Re: The real problem that https has conspicuously failed to fix

2003-06-11 Thread Jeffrey I. Schiller
Oh, and btw, the form posting URL in my message wasn't even https, it 
was just http. So all the futzing in the world with https wouldn't help!

			-Jeff

Pete Chown wrote:
John R. Levine wrote:

Crypto lets someone say "Hi!  I absolutely definitely
have a name somewhat like the name of a large familiar organization,
and I'd like to steal your data!" ...


It might help if browsers displayed some details of the certificate 
without being asked.  For example, instead of a padlock, the browser 
could have an SSL toolbar.  This would show the verified name and 
address of the site you are connected to.

The bar could also show the server name for unverified connections. This 
would avoid the attacks that use URLs like 
http://www.microsoft.com:[EMAIL PROTECTED] .






Re: The real problem that https has conspicuously failed to fix

2003-06-11 Thread Jeffrey I. Schiller
Folks, this isn't an https (or even http) problem. It is a tough user 
interface issue. Note: The form posting goes to www.pos2life.biz, which 
doesn't remotely look like paypal.com!

To make matters worse, there are plenty of businesses that send you
legitimate email that comes from a "random" looking place. Just today I
received one from MIT's Alumni Association, but the actual source was
something like m0.email-foobar.com (or something). Obviously the Alumni
Association outsources the sending of the mail to some third party
company. So even some fancy way of saying "This form doesn't post to
the same place this page came from" [never mind that the origin of an
e-mail form is ill defined] won't help.

I also received this scam mail. There were only two hints of badness
(besides the obvious request for personal info that PayPal shouldn't
need): one was the form posting, and the other was the "Received" line
which my mail system put on the message, which showed its origin at a
suspicious place (I believe in Japan, but I may have remembered wrong;
it didn't look right at the time).

This is a social problem. Technical measures can help, but won't solve 
it, I am afraid.

			-Jeff

Roy M.Silvernail wrote:
On Sunday 08 June 2003 06:11 pm, martin f krafft wrote:

also sprach James A. Donald <[EMAIL PROTECTED]> [2003.06.08.2243 +0200]:

(When you hit the submit button, guess what happens)
How many people actually read dialog boxes before hitting Yes or OK?


It's slightly more subtle.  The action tag of a form submission isn't usually 
visible to the user like links are.  In the scam copy I received, all the 
links save one pointed to legitimate PayPal documents.  Only the 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


