Re: Hashing algorithm needed

2010-09-14 Thread Nicolas Williams
On Tue, Sep 14, 2010 at 03:16:18PM -0500, Marsh Ray wrote:
> On 09/14/2010 09:13 AM, Ben Laurie wrote:
> >Of some interest to me is the approach I saw recently (confusingly named
> >WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
> >allowing UI to be completely controlled by the issuer.
> 
> First, let's hear it for out of the box thinking. *yay*
> 
> Now, a few questions about this approach:
> 
> How do you deliver Javascript to the browser securely in the first
> place? HTTP?

I'll note that Ben's proposal is in the same category as mine (which
was, to remind you, implement SCRAM in JavaScript and use that, with
channel binding using tls-server-end-point CB type).

It's in the same category because it has the same flaw, which I'd
pointed out earlier: if the JS is delivered by "normal" means (i.e., by
the server), then the script can't be used to authenticate the server.

And if you've authenticated the server via HTTPS (TLS), then you might as
well just POST the username and password to the server, since the server
could just as well send you a script that does exactly that.

This approach works only if you deliver the script in some out-of-band
manner, such as via a browser plug-in/add-on (hopefully signed [by a
trustworthy trusted third party]).
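For concreteness, the tls-server-end-point channel-binding value mentioned above is simply a hash over the server's end-entity certificate (RFC 5929). A minimal, hedged sketch in Python; the certificate bytes below are placeholders, not a real certificate:

```python
import hashlib

def tls_server_end_point_cb(der_cert: bytes, sig_hash: str = "sha256") -> bytes:
    """Channel-binding data for the tls-server-end-point type (RFC 5929):
    a hash of the server certificate's DER encoding.  RFC 5929 takes the
    hash from the certificate's signature algorithm, except that MD5 and
    SHA-1 are upgraded to SHA-256."""
    if sig_hash in ("md5", "sha1"):
        sig_hash = "sha256"
    return hashlib.new(sig_hash, der_cert).digest()

# Placeholder bytes standing in for a DER-encoded certificate.
fake_cert = b"\x30\x82\x01\x00" + b"\x00" * 256
cb = tls_server_end_point_cb(fake_cert)
```

A SCRAM exchange that mixes this value into the client and server proofs fails against a man-in-the-middle whose certificate (and hence binding value) differs, which is exactly why the script computing it must not arrive over the attacker-controllable channel.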

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread Steven Bellovin

On Sep 13, 2010, at 11:58:57 PM, John Gilmore wrote:

> http://arstechnica.com/business/news/2010/09/intels-walled-garden-plan-to-put-av-vendors-out-of-business.ars
> 
> "In describing the motivation behind Intel's recent purchase of McAfee
> for a packed-out audience at the Intel Developer Forum, Intel's Paul
> Otellini framed it as an effort to move the way the company approaches
> security "from a known-bad model to a known-good model." Otellini went
> on to briefly describe the shift in a way that sounded innocuous
> enough--current A/V efforts focus on building up a library of known
> threats against which they protect a user, but Intel would love to
> move to a world where only code from known and trusted parties runs on
> x86 systems."
> 
> Let me guess -- to run anything but Windows, you'll soon have to 
> jailbreak even laptops and desktop PC's?
> 

I've written a long blog post on this issue for the Concurring Opinions legal 
blog; see 
http://www.concurringopinions.com/archives/2010/09/a-new-threat-to-generativity.html


--Steve Bellovin, http://www.cs.columbia.edu/~smb







Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread Bill Frantz

On 9/13/10 at 8:58 PM, g...@toad.com (John Gilmore) wrote:


> Intel's Paul
> Otellini framed it as an effort to move the way the company approaches
> security "from a known-bad model to a known-good model."


Does that include monetary indemnity when the "known-good" turns 
out to be bad? I bet not.


If we could "know good", security would be a lot easier, but 
nobody has a clue how to actually achieve that knowledge.



> Let me guess -- to run anything but Windows, you'll soon have
> to jailbreak even laptops and desktop PC's?


I expect Steve Jobs will get them to approve MacOS too.

For the rest, there's always AMD.

Cheers - Bill

---
Bill Frantz        | gets() remains as a monument | Periwinkle
(408)356-8506      | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.             | Los Gatos, CA 95032




Re: Folly of looking at CA cert lifetimes

2010-09-14 Thread Paul Hoffman
At 5:33 PM -0400 9/14/10, Thor Lancelot Simon wrote:
>On Tue, Sep 14, 2010 at 08:14:59AM -0700, Paul Hoffman wrote:
>> At 10:57 AM -0400 9/14/10, Perry E. Metzger did not write, but passed on for 
>> someone else:
>> >This suggests to me that even if NIST is correct that 2048 bit RSA
>> >keys are the reasonable minimum for new deployments after 2010,
>> >much shorter keys are appropriate for most server certificates that
>> >these CAs will sign.  The CA keys have lifetimes of 10 years or more;
>> >the server keys a quarter to a fifth of that.
>>
>> No, no, a hundred times no. (Well, about 250 times, or however many
>> CAs are in the current OS trust anchor piles.) The "lifetime" of a "CA
>> key" is exactly as long as the OS or browser vendor keeps that key,
>> usually in cert form, in its trust anchor pile. You should not
>> extrapolate *anything* from the contents of the CA cert except the key
>> itself and the proclaimed name associated with it.
>
>I don't understand.  The original text seems to be talking about *server*
>certificate lifetimes, and how much shorter they are than CA cert
>lifetimes.  What does that have to do with "a thousand times no" about
>some proposition to do with CA cert lifetimes?
>
>In other words, if CA key lifetimes are longer than indicated by their
>X.509 properties, it seems to me that just makes the quoted text about
>the relationship between server and CA key lifetimes even more true.

Ah, I see what you are saying, and what Perry's anonymous forwarder meant. That 
is, if the "CA keys have lifetimes of 10 years or more" means "because that's 
how long OSs and browsers leave them in the trust anchor pile", then it has 
nothing to do with the built-in notAfter dates in the server certificates.

--Paul Hoffman, Director
--VPN Consortium



Re: Folly of looking at CA cert lifetimes

2010-09-14 Thread Thor Lancelot Simon
On Tue, Sep 14, 2010 at 08:14:59AM -0700, Paul Hoffman wrote:
> At 10:57 AM -0400 9/14/10, Perry E. Metzger did not write, but passed on for 
> someone else:
> >This suggests to me that even if NIST is correct that 2048 bit RSA
> >keys are the reasonable minimum for new deployments after 2010,
> >much shorter keys are appropriate for most server certificates that
> >these CAs will sign.  The CA keys have lifetimes of 10 years or more;
> >the server keys a quarter to a fifth of that.
> 
> No, no, a hundred times no. (Well, about 250 times, or however many
> CAs are in the current OS trust anchor piles.) The "lifetime" of a "CA
> key" is exactly as long as the OS or browser vendor keeps that key,
> usually in cert form, in its trust anchor pile. You should not
> extrapolate *anything* from the contents of the CA cert except the key
> itself and the proclaimed name associated with it.

I don't understand.  The original text seems to be talking about *server*
certificate lifetimes, and how much shorter they are than CA cert
lifetimes.  What does that have to do with "a thousand times no" about
some proposition to do with CA cert lifetimes?

In other words, if CA key lifetimes are longer than indicated by their
X.509 properties, it seems to me that just makes the quoted text about
the relationship between server and CA key lifetimes even more true.

Thor



Re: Debian encouraging use of 4096 bit RSA keys

2010-09-14 Thread Henrique de Moraes Holschuh
On Tue, 14 Sep 2010, Perry E. Metzger wrote:
> The decision that 1024 bit keys are inadequate for code signing is
> likely reasonable. The idea that 2048 bits and not something between
> 1024 bits and 2048 bits is a reasonable minimum is perhaps arguable.
> One wonders what security model indicated 4096 bits is the ideal
> length

Key lifetime in Debian can be very long, 10 to 15 years.

I'd appreciate some input from this list about the Debian bias towards
4096-bit RSA main keys, instead of DSA2 (3072-bit) keys.  Is it justified?

These keys are used as KSKs (key-signing keys), as gpg will happily
attach subkeys to them for the grunt work...

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh



Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread David G. Koontz
On 14/09/10 3:58 PM, John Gilmore wrote:
> http://arstechnica.com/business/news/2010/09/intels-walled-garden-plan-to-put-av-vendors-out-of-business.ars
> 
> "In describing the motivation behind Intel's recent purchase of McAfee
> for a packed-out audience at the Intel Developer Forum, Intel's Paul
> Otellini framed it as an effort to move the way the company approaches
> security "from a known-bad model to a known-good model." Otellini went
> on to briefly describe the shift in a way that sounded innocuous
> enough--current A/V efforts focus on building up a library of known
> threats against which they protect a user, but Intel would love to
> move to a world where only code from known and trusted parties runs on
> x86 systems."

The 'approved application' security model doesn't have to be ubiquitous
any more than the iOS application restrictions on iDevices extend to Mac OS
X.  Just yesterday I tripped across a media item saying Nvidia's Tegra 2 was
being replaced by an Intel Atom CE4100 (due to lack of performance for Full
HD output).
http://liliputing.com/2010/09/boxee-box-up-for-pre-order-nvidia-tegra-2-chip-replaced-with-intel-atom-ce4100.html

If you look in the August 20th Business Week article
http://www.businessweek.com/news/2010-08-20/intel-after-mcafee-may-find-mobile-a-difficult-sell.html

  “As we look at all of the growth areas for Intel silicon, one of the
  consistent purchase criteria for both IT managers and consumers is
  security,” Renee James, the head of Intel’s software division, said in an
  interview yesterday. “This is a pretty natural step for us.”

Growth areas for Intel silicon aren't in the PC market, which is saturated;
Intel is producing silicon to compete with ARM CPUs in mobile and appliance
computing.

  “The number of new security threats identified every month continues to
  rise,” Otellini said. “We have concluded that security has now become the
  third pillar of computing, joining energy-efficient performance and
  Internet connectivity in importance.”

Energy-efficient implies portability.  And:

  Intel will have to persuade customers they need security in non-PC
  electronics in much the same way it has convinced businesses and
  consumers that they required chips that speed computing tasks or ensure
  seamless wireless connections.

Owning an antivirus software company is probably a good license to
scaremonger. It's likely McAfee will suddenly start detecting threats and
offering solutions.

And:

  “As we move from a PC-centric era to a mobile-centric era, Intel needs to
  take advantage of every opportunity to expand its footprint into that
  marketplace.”

The gist of the article is that the intent is to open new markets for
Intel.  In other words, there's more to mobile and appliance computing
than is dreamt of in Mr. Gates's philosophy; Microsoft has, after all,
moved into the antivirus market for PCs (Microsoft Security Essentials).
In a saturated PC market, McAfee's adoption rate has probably been
stagnating or dropping, signaling the need for new markets, hence the
company being available for purchase.

There doesn't appear to be enough information to state authoritatively
what Intel plans, but it does call into question Windows Mobile 7
adoption rates.

Also, when (web) content contains programming (Javascript, etc.), you'd be
faced with the necessity of certifying everyone's content (including blogs)
or impinging on First Amendment uses of the Internet.  It's unlikely the
entire Internet would be transformed into commercial outlets for goods and
services, though providing the means for walled-garden marketing of
specific products appears to be the hot new thing.

While vigilance against the impingement of rights is always a good thing,
there's evidence that the meat of this issue falls on the other side of
the razor's edge.



Re: Hashing algorithm needed

2010-09-14 Thread Marsh Ray

On 09/14/2010 09:13 AM, Ben Laurie wrote:

> On 14/09/2010 12:29, Ian G wrote:
>> On 14/09/10 2:26 PM, Marsh Ray wrote:
>>> On 09/13/2010 07:24 PM, Ian G wrote:
>>>> 1. In your initial account creation / login, trigger a creation of a
>>>> client certificate in the browser.
>>>
>>> There may be a way to get a browser to generate a cert or CSR, but I
>>> don't know it. But you can simply generate it at the server side.
>>
>> Just to be frank here, I'm also not sure what the implementation details
>> are here.  I somewhat avoided implementation until it becomes useful.
>
> FWIW, you can get browsers to generate CSRs and eat the resulting certs.
> The actual UIs vary from appalling to terrible.
>
> Of some interest to me is the approach I saw recently (confusingly named
> WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
> allowing UI to be completely controlled by the issuer.


First, let's hear it for out of the box thinking. *yay*

Now, a few questions about this approach:

How do you deliver Javascript to the browser securely in the first 
place? HTTP?


How do you get the user to save his private key file? Copy and paste?

How does the proper Javascript later access the user's private key securely?

How do they securely wipe memory in Javascript?

How do they resist timing attacks? In practice, an attacker can probably 
get the browser to repeatedly sign random stuff with the client cert 
even while he's running his own script in the same process.



> Ultimately this
> approach seems too risky for real use, but it could be used to prototype
> UI, perhaps finally leading to something usable in browsers.


A sad indictment of browser vendor user interface priorities.


> Slide deck here: http://payswarm.com/slides/webid/#(1)
>
> (note, videos use flash, I think, so probably won't work for anyone with
> their eye on the ball).
>
> Demo here: https://webid.digitalbazaar.com/manage/


"This Connection is Untrusted"

- Marsh



Re: Haystack redux

2010-09-14 Thread Alec Muffett

Obliged, Steve.  Simon Phipps' and my write-up is at Computerworld UK:


http://blogs.computerworlduk.com/simon-says/2010/09/burning-haystack/index.htm

- a


On 14 Sep 2010, at 17:57, Steve Weis wrote:

> There have been significant developments around Haystack since the
> last message on this thread. [...]



Re: Haystack redux

2010-09-14 Thread Steve Weis
There have been significant developments around Haystack since the
last message on this thread. Jacob Applebaum obtained a copy and found
serious vulnerabilities that could put its users at risk. He convinced
Haystack to immediately suspend operations. The developer of Haystack,
Daniel Colascione, has subsequently resigned from the project.

Many of the claims its creators made about Haystack's security and usage
now appear to be inaccurate. These claims were repeated without
verification by the New York Times, Newsweek, the BBC, and the Guardian
UK. Evgeny Morozov wrote several blog posts covering this; his latest
post is here:
http://neteffect.foreignpolicy.com/posts/2010/09/13/on_the_irresponsibility_of_internet_intellectuals



HDCP master key supposedly leaked

2010-09-14 Thread Steven Bellovin
http://arstechnica.com/tech-policy/news/2010/09/claimed-hdcp-master-key-leak-could-be-fatal-to-drm-scheme.ars

--Steve Bellovin, http://www.cs.columbia.edu/~smb







Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread Peter Gutmann
John Gilmore  writes:

>Let me guess -- to run anything but Windows, you'll soon have to jailbreak
>even laptops and desktop PC's?

Naah, we're perfectly safe: like every other similar attempt, after 5-10
years of effort and several hundred million dollars down the drain it'll
come to nothing.  I guess that's one silver lining of the corollary to "We
can't secure PCs against the bad guys", which is "We can't 'secure' them
against their owners either" (with the rider "... although we can cause a
lot of cost and inconvenience in trying").

Peter.



Re: Hashing algorithm needed

2010-09-14 Thread Erwan Legrand
On Tue, Sep 14, 2010 at 13:29, Ian G  wrote:
> On 14/09/10 2:26 PM, Marsh Ray wrote:
>>
>> On 09/13/2010 07:24 PM, Ian G wrote:
>
>>> 1. In your initial account creation / login, trigger a creation of a
>>> client certificate in the browser.
>>
>> There may be a way to get a browser to generate a cert or CSR, but I
>> don't know it. But you can simply generate it at the server side.
>
> Just to be frank here, I'm also not sure what the implementation details are
> here.  I somewhat avoided implementation until it becomes useful.

The French government has been doing this using Java applets for at least
the last decade. This lets the happy French taxpayers generate their own
CSRs and have them automatically signed by the tax administration in one
swoop.

This is the only large-scale deployment of client-side certificates in
browsers that I know of. (And I'd certainly like to hear about others.)

--
Erwan Legrand



Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-14 Thread Tom Ritter
When their talk first started getting hyped on twitter last Thursday,
the focus was on ASP.Net's viewstate [1,2] rather than the cookie
aspect. (Viewstate is a base64 blob of data in a hidden form field
about the current state of controls on the page.) I wonder if
threatpost focused on cookies because it's more accessible to
non-webforms programmers.  On Friday, a tweet mentioned that applying HMAC
to the viewstate was a valid mitigation [3].  This made sense to me
because...


If viewstates are protected with a simple hash by default, you could
append data and still generate a valid hash (because of the
length-extension flaw in many hash functions that created the need for
HMAC in the first place [4]).

So you run the padding oracle attack described (which I won't explain
for fear of explaining it wrong) but you can append encrypted blocks
and generate valid hashes of your appended data.

Using HMAC on the viewstate instead of a vanilla hash function
prevents targeting the viewstate because you can no longer append
blocks and generate a new hash.
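To make the distinction concrete, here is a hedged Python sketch (illustrative, not ASP.NET's actual code) of the two constructions: the naive keyed hash that length extension breaks, and the HMAC construction that blocks it:

```python
import hashlib
import hmac

key = b"server-side-validation-key"
viewstate = b"base64-ish viewstate payload"

# Naive construction: H(key || data).  With Merkle-Damgard hashes such as
# MD5 and SHA-1, knowing H(key || data) and the key length lets an attacker
# compute H(key || data || padding || suffix) for a chosen suffix without
# ever learning the key -- so appended blocks can be given a "valid" hash.
naive_mac = hashlib.sha1(key + viewstate).hexdigest()

# HMAC's nested construction H((key ^ opad) || H((key ^ ipad) || data))
# blocks length extension: extending the inner hash is useless without
# the second, keyed, outer pass.
hmac_mac = hmac.new(key, viewstate, hashlib.sha1).hexdigest()
```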


What's weird is I find confusing literature about what *is* the
default for protecting the viewstate.  In this article[5] it says that
in .Net 1.1 viewstates are HMAC-ed to prevent tampering...

validationKey - This specifies the key that the HMAC algorithm uses
to make ViewState tamper proof.


But this article [6] implies that, in .Net 2.0, only SHA1 uses an HMAC
and MD5 does not.  (.Net 2 also added encrypted viewstate, which is
another story.)

SHA1 - SHA1 is used to tamper proof ViewState and, if configured, the
forms authentication ticket. When SHA1 is selected for the validation
attribute, the algorithm used is HMACSHA1.
MD5 - MD5 is used to tamper proof ViewState and, if configured, the
forms authentication ticket.


And in this article[7], maybe the most recent, which talks about .Net
4.0 it gets even more confusing, adding specific HMAC options:


If your application is built on the .NET Framework 3.5 or earlier,
you can choose SHA1 (the default value), AES, MD5 or 3DES as the MAC
algorithm. If you're running .NET Framework 4, you can also choose
MACs from the SHA-2 family: HMACSHA256, HMACSHA384 or HMACSHA512.

After you choose a MAC algorithm, you'll also need to manually
specify the validation key. Remember to use cryptographically strong
random numbers: if necessary, you can refer to the key generation code
specified earlier. You should use at least 128-byte validation keys
for either HMACSHA384 or HMACSHA512, and at least 64-byte keys for any
other algorithm.
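The key-generation advice quoted above amounts to reading enough bytes from a cryptographically strong RNG. A minimal Python stand-in (not the .NET key generator; the hex form matches what machineKey-style configs expect):

```python
import os

def make_validation_key(algorithm: str = "HMACSHA1") -> str:
    # 128 bytes for HMACSHA384/HMACSHA512, 64 bytes for everything else,
    # per the article's recommendation.  os.urandom is a CSPRNG.
    n = 128 if algorithm in ("HMACSHA384", "HMACSHA512") else 64
    return os.urandom(n).hex().upper()

key = make_validation_key("HMACSHA512")
```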


I'm thoroughly confused about what the default is in each version, and
how each option actually behaves.  Based on some of the documentation
and how I understand POET (their tool for the padding oracle attack)
working, I think there may be a disconnect between the writers and the
security team.  I tried hard to get my company to send one of our
(non-security) Argentinean devs I'm friends with to ekoparty to take
notes and fill me in, but to no avail.  I hope after the presentation
blogs and this list fill with details about it.

Unrelated: at one point a phrase was written and echoed widely:
SHA1 is preferable because it produces a larger hash
http://www.google.com/search?q=%22larger+hash+than+MD5%22+%22and+is+therefore+considered+more+secure%22&filter=0

Anyway, Colin Percival and Thomas Ptacek got in a discussion[x] about
Encrypt-then-MAC, reproduced here because following twitter
discussions is a pain:

  Ptacek: CBC + HMAC decrypt+validate is an infamously tricky piece of
code to get right. I've never seen a generalist's implementation that
did.
  Percival: This is why (a) you should encrypt-then-MAC, not vice
versa, and (b) not use CBC mode.
  Ptacek: What does encrypt-then-MAC have to do with it? That's the
pattern that creates the timing variant of the attack.
  Percival: With encrypt-then-MAC, fake messages are discarded without
having their CBC padding inspected.
  Ptacek: Sorry, I misread. But then: you trust SHA256 as a first-line
defense more than AES?
  Percival: Do I trust HMAC-SHA256 more than AES? Hell yes.

Colin's right, of course: if the HMAC option is used, then it should
throw out the attempts POET makes without indicating whether the padding
is good or bad... It's just that darned documentation that's confusing
me!
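The point about discarding fake messages before their padding is inspected fits in a few lines. A hedged sketch (illustrative, not .NET's implementation) of encrypt-then-MAC verification:

```python
import hashlib
import hmac

MAC_KEY = b"a key independent of the encryption key"
TAG_LEN = 32  # SHA-256 output size

def seal(iv_and_ciphertext: bytes) -> bytes:
    """Encrypt-then-MAC: the tag is computed over the ciphertext (and IV)."""
    tag = hmac.new(MAC_KEY, iv_and_ciphertext, hashlib.sha256).digest()
    return iv_and_ciphertext + tag

def open_sealed(blob: bytes) -> bytes:
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    # Forged or tampered messages die here, before any decryption, so the
    # attacker never observes a padding-valid / padding-invalid signal.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return ciphertext  # decrypt-then-unpad would only happen after this
```

MAC-then-encrypt, by contrast, must strip padding before it can check the MAC, which is what opens the timing/oracle channel being discussed.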

-tom


[1] http://twitter.com/dragosr/status/24070283257
[2] http://twitter.com/tqbf/status/24032786374
[3] http://twitter.com/dragosr/status/24073818333
[4] http://en.wikipedia.org/wiki/HMAC#Design_principles
[5] http://channel9.msdn.com/wiki/wiki/HowToConfigureTheMachineKeyInASPNET2/
[6] http://msdn.microsoft.com/en-us/library/ff649308.aspx
[7] http://msdn.microsoft.com/en-us/magazine/ff797918.aspx
[x] http://twitter.com/tqbf/status/24033073128
http://twitter.com/cperciva/status/24036001435
http://twitter.com/tqbf/status/24036476121
http://twitter.com/cperciva/status/24038505268
http://twi

Folly of looking at CA cert lifetimes

2010-09-14 Thread Paul Hoffman
At 10:57 AM -0400 9/14/10, Perry E. Metzger did not write, but passed on for 
someone else:
>This suggests to me that even if NIST is correct that 2048 bit RSA
>keys are the reasonable minimum for new deployments after 2010,
>much shorter keys are appropriate for most server certificates that
>these CAs will sign.  The CA keys have lifetimes of 10 years or more;
>the server keys a quarter to a fifth of that.

No, no, a hundred times no. (Well, about 250 times, or however many CAs are in 
the current OS trust anchor piles.) The "lifetime" of a "CA key" is exactly as 
long as the OS or browser vendor keeps that key, usually in cert form, in its 
trust anchor pile. You should not extrapolate *anything* from the contents of 
the CA cert except the key itself and the proclaimed name associated with it.

--Paul Hoffman, Director
--VPN Consortium



Re: Debian encouraging use of 4096 bit RSA keys

2010-09-14 Thread Ben Laurie
On 14/09/2010 13:15, Perry E. Metzger wrote:
> The decision that 1024 bit keys are inadequate for code signing is
> likely reasonable. The idea that 2048 bits and not something between
> 1024 bits and 2048 bits is a reasonable minimum is perhaps arguable.
> One wonders what security model indicated 4096 bits is the ideal
> length

Given their constraints, what they say (i.e. "to be on the safe side")
seems entirely reasonable. Code signing and verification do not occur
with great frequency, so a big key is not a big problem.

In general, we should resist the temptation to pare security protocols
down to the bare minimum - it is this tendency that gave us, for
example, the TLS renegotiation attack. A little bit of belt and braces
and that would have been a non-issue.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



Re: Hashing algorithm needed

2010-09-14 Thread Ben Laurie
On 14/09/2010 12:29, Ian G wrote:
> On 14/09/10 2:26 PM, Marsh Ray wrote:
>> On 09/13/2010 07:24 PM, Ian G wrote:
> 
>>> 1. In your initial account creation / login, trigger a creation of a
>>> client certificate in the browser.
>>
>> There may be a way to get a browser to generate a cert or CSR, but I
>> don't know it. But you can simply generate it at the server side.
> 
> Just to be frank here, I'm also not sure what the implementation details
> are here.  I somewhat avoided implementation until it becomes useful.

FWIW, you can get browsers to generate CSRs and eat the resulting certs.
The actual UIs vary from appalling to terrible.
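For the record, the in-browser mechanism being alluded to (in Netscape-descended browsers of that era) is the <keygen> element: the browser generates a keypair, keeps the private key in its own store, and submits a signed public key blob (SPKAC) with the form; it later "eats" the certificate the CA returns via the application/x-x509-user-cert MIME type. A hedged sketch; the field names and URL are illustrative:

```html
<!-- The browser creates a keypair and POSTs an SPKAC blob in "spkac". -->
<form method="post" action="/enroll">
  <keygen name="spkac" keytype="rsa" challenge="random-challenge-string">
  <input type="submit" value="Request certificate">
</form>
```

(Internet Explorer used its own ActiveX certificate-enrollment controls instead, which is part of why the UIs vary so much.)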

Of some interest to me is the approach I saw recently (confusingly named
WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
allowing UI to be completely controlled by the issuer. Ultimately this
approach seems too risky for real use, but it could be used to prototype
UI, perhaps finally leading to something usable in browsers.

Slide deck here: http://payswarm.com/slides/webid/#(1)

(note, videos use flash, I think, so probably won't work for anyone with
their eye on the ball).

Demo here: https://webid.digitalbazaar.com/manage/

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



Re: Debian encouraging use of 4096 bit RSA keys

2010-09-14 Thread Perry E. Metzger
On Tue, 14 Sep 2010 12:01:22 -0300 Henrique de Moraes Holschuh
 wrote:
> On Tue, 14 Sep 2010, Perry E. Metzger wrote:
> > The decision that 1024 bit keys are inadequate for code signing is
> > likely reasonable. The idea that 2048 bits and not something
> > between 1024 bits and 2048 bits is a reasonable minimum is
> > perhaps arguable. One wonders what security model indicated 4096
> > bits is the ideal length
> 
> Key lifetime in Debian can be very long, 10 to 15 years.

That may be longer than is reasonable. Technologies shift, and having
the capability to update keys over the course of years may be
superior to attempting to guess (without sufficient information) what
the right key length in 2025 would be.

Recall that it is also difficult to keep a private key secure for
decades, so 15 years may be longer than it is reasonable to assume
that the physical key is safe from actual outright theft or even
accidental disclosure. Also, every once in a while, it turns out that
one's random number generator or algorithms are not what they should
have been.

One needs a way of updating keys even if one is reasonably sure that
brute force attacks will not work over the period. Given that,
attempting to secure the system with a massive key is probably a bad
tradeoff.

> I'd appreciate some input from this list about the Debian bias
> towards 4096 RSA main keys, instead of DSA2 (3072-bit) keys.  Is it
> justified?

I'm not sure why the tradeoff would be between a particular seemingly
arbitrary RSA size and a particular seemingly arbitrary DSA size. I
would suggest instead selecting the algorithm and key length
independently.
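One way to put "seemingly arbitrary" RSA and DSA modulus sizes on a common scale is the asymptotic cost of the general number field sieve, which attacks both. A rough, hedged sketch; the constant-free asymptotic formula overstates strength somewhat (NIST SP 800-57 tabulates 1024→80, 2048→112, 3072→128 bits of symmetric equivalence):

```python
import math

def gnfs_equiv_bits(modulus_bits: int) -> float:
    """Approximate symmetric-equivalent strength of an integer modulus,
    from the GNFS asymptotic running time L_n[1/3, (64/9)^(1/3)]."""
    ln_n = modulus_bits * math.log(2)  # ln of the modulus
    work = math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))
    return math.log2(work)

for bits in (1024, 2048, 3072, 4096):
    print(bits, round(gnfs_equiv_bits(bits)))
```

On either scale, the jump from 2048 to 4096 bits buys far fewer equivalent bits than the jump from 1024 to 2048 did, while costing roughly 8x in private-key operations.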

> These keys are used as KSK, as gpg will happily attach subkeys to
> them for the grunt work...

I'll open the floor to further discussion now... 

Perry
-- 
Perry E. Metzgerpe...@piermont.com



Re: Debian encouraging use of 4096 bit RSA keys

2010-09-14 Thread Peter Gutmann
"Perry E. Metzger"  writes:

>One wonders what security model indicated 4096 bits is the ideal length

The one that says that if you wind things up past 11 (4096 bits), various
things break.

(D'you really think they applied any kind of security analysis to the choice
of key size?  They just wound it up until they got to 11, then declared that
to be the new key size.)

Peter.



Re: Debian encouraging use of 4096 bit RSA keys

2010-09-14 Thread Perry E. Metzger
[Moderator's note: Anonymously forwarded at the request of the
sender. If you reply to this, please don't attribute it to me, I
didn't send it. --Perry]

Begin forwarded message:

[Perry, please forward this anonymously, if you're permitting that
these days]

On Tue, Sep 14, 2010 at 08:15:52AM -0400, Perry E. Metzger wrote:
> The decision that 1024 bit keys are inadequate for code signing is
> likely reasonable. The idea that 2048 bits and not something between
> 1024 bits and 2048 bits is a reasonable minimum is perhaps arguable.
> One wonders what security model indicated 4096 bits is the ideal
> length  

I ran into a mindboggling one a couple of weeks ago: a customer
complaint that "our new certificate doesn't work" when loaded into
one of my employer's SSL offload devices.

The actual cause was that the customer had loaded a 4096 bit key and
caused end-to-end performance to fall to about 12 TPS from the 1500
TPS they were seeing with their previous 1024 bit key.

When we inquired why they were using a 4096 bit key, they indicated
that their "information security department" had imposed the
requirement that their service keys had to be "twice as long as the
CA's key" so that "we are not the weak link in our customers'
security".

It took some time, but I think we explained the deep folly of this new
policy to them.

I am a big fan of keys in the 1280-1536 bit range for SSL server
certificates.  Surveying a large number of commercially signed
certificates on the Internet I see the overwhelming majority expire
within 3 years of issue.

This suggests to me that even if NIST is correct that 2048 bit RSA
keys are the reasonable minimum for new deployments after 2010,
much shorter keys are appropriate for most server certificates that
these CAs will sign.  The CA keys have lifetimes of 10 years or more;
the server keys a quarter to a fifth of that.

Making 2048 bit keys the standard on individual servers will reduce
server performance to the extent that initiatives like "HTTPS
everywhere" will become impractical.  Yes, I/O is usually the
bottleneck for most servers, but increasing the SSL handshake cost by
a factor of 10 changes that quite dramatically.

Meanwhile, 1280 bit keys offer a huge increase in resistance to
factoring within the next decade and have much less performance impact
for servers (since the performance impact on clients is so widely
distributed for the HTTPS case I think it can be ignored, but this
is of course better for 1280 bit keys too).
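The roughly order-of-magnitude figure above is consistent with the usual back-of-envelope model: RSA private-key operations cost roughly the cube of the modulus length (schoolbook multiplication is quadratic, and the exponent length grows linearly). A sketch of relative handshake cost versus a 1024-bit key:

```python
# Relative RSA private-key operation cost, cubic in modulus length.
# Real implementations (CRT, Karatsuba, etc.) shift the constants, so
# treat these as rough ratios, not benchmarks.
BASE_BITS = 1024

def relative_cost(bits: int) -> float:
    return (bits / BASE_BITS) ** 3

for bits in (1280, 1536, 2048, 4096):
    print(bits, round(relative_cost(bits), 1))
# 1280 -> 2.0, 1536 -> 3.4, 2048 -> 8.0, 4096 -> 64.0
```

So 1280-1536 bit keys cost roughly 2-3.4x what 1024-bit keys do, versus 8x at 2048 and 64x at 4096. (The 1500→12 TPS drop reported earlier is worse than 64x, but end-to-end figures fold in more than the raw modular exponentiation.)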

But people look at the NIST document that recommends 2048 bit keys
after 2010 (which I do think is a somewhat misguided recommendation
for keys as short-lived as web server keys, though definitely correct
for CA keys) and decide to be "double safe" and we get lunacy like
Debian trying to atone for their past OpenSSL sins by using 4096 bit
keys everywhere and, as a practical matter, *reducing* the spread of
service deployment over HTTPS because with 4096 bit keys, you just 
can't.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Debian encouraging use of 4096 bit RSA keys

2010-09-14 Thread Perry E. Metzger
The decision that 1024 bit keys are inadequate for code signing is
likely reasonable. The idea that 2048 bits and not something between
1024 bits and 2048 bits is a reasonable minimum is perhaps arguable.
One wonders what security model indicated 4096 bits is the ideal
length

Perry

Begin forwarded message:

Date: Tue, 14 Sep 2010 00:18:48 -0500
From: Gunnar Wolf 
To: debian-devel-annou...@lists.debian.org
Subject: Bits from keyring-maint


Hi,

So, even small teams more closely related to bureaucracy and
bookkeeping such as ours also deserve to send out some "bits from..."
mails from time to time. And being past midnight, I hope I can keep
this concise and short. For people that were present at my lightning
talk at DebConf, expect no new material in this mail... We just needed
to send it out.

1. PGP (v3) keys are gone!
   ---

The first point is that, with a lot of patience and chasing, and after
over a year of having stated the intention, we can finally say that
older, vulnerable v3 keys are gone from the Debian Developer keyring,
yay! Thanks in no small measure to Jonathan's endless bugging and
chasing, all keys in Debian today are v4 1024D or higher, and that is
a Very Good Thing. And yes, it leads us to the next point...

2. We want stronger keys
   ---

1024D (SHA1) keys are OK-ish for now. No attacks are known on them,
and they are not compromising the archive in any way (if they were, of
course, we would immediately disable them and _then_ look for
solutions, while surely becoming overnight the most hated team in
Debian). Still, to be on the safe side (and to avoid the long and
painful declining curve we had with v3 keys), we are now clearly
pushing Debian towards adopting stronger RSA keys - We have accepted
some 2048R keys, but if you don't have a real reason to keep your key
at that size (i.e. you very often build on underpowered machines where
a 4096R key takes forever, or something like that), we really prefer
to go with 4096R keys.

To create your 4096R key, you are advised to follow Ana Guerrero's
excellent tutorial [1].

The policies for a key upgrade go as follows (and are explained at
greater length at [2]): 

- Your new key should be signed by your old key

- Your new key should be signed by two or more other Debian Developers

- Mail the key replacement request to keyr...@rt.debian.org,
  mentioning 'Debian RT' somewhere in the mail subject

- The request should be _inline_ signed by your old key. If you send a
  MIME-encoded signed message, RT will mangle it and it won't
  validate. Please, inline-sign the message.

- Although we clearly want to transition to a stronger keyring, that
  does not mean we want to loosen the Web of Trust. That means that if
  you have a gazillion signatures in your 1024D key, you should not
  rush to update it with a barely-signed 4096R one. Get it signed by
  as many people as possible. If you are already socially active in
  Debian, that should pose no problem. Otherwise... Well, if you are
  isolated and far from anybody else, we might do it. But remember,
  there is no _pressing_ need to do so.
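The steps above can be outlined with stock GnuPG (real option names, but treat this as a hedged sketch, not a substitute for the referenced tutorial; OLDKEYID/NEWKEYID and request.txt are placeholders):

```shell
# Generate a new 4096-bit RSA key (interactive; choose "RSA and RSA", 4096 bits).
gpg --gen-key

# Sign the new key with your old key.
gpg --default-key OLDKEYID --sign-key NEWKEYID

# Inline-sign (clearsign) the replacement request with the OLD key --
# RT mangles MIME signatures, so do not send a MIME-signed message.
gpg --default-key OLDKEYID --clearsign request.txt
# ...then mail the resulting request.txt.asc with "Debian RT" in the subject.
```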

3. We demand stronger keys!
   ---

But then again, we are not allowing any new 1024D keys
anymore. Anybody who is currently a DD or DM, or that has started his
application towards becoming one, will be allowed with whatever key
they currently have - But effective October 1st, no applications for
DM or DD should be processed with anything less than a 2048R
SHA2-capable key. 

Ok, so, I'm looking forward to process your key update requests!

On behalf of keyring-maint,

   -Gunnar

--

[1] http://keyring.debian.org/creating-key.html

[2] http://keyring.debian.org/replacing_keys.html



Re: Hashing algorithm needed

2010-09-14 Thread Ian G

On 14/09/10 2:26 PM, Marsh Ray wrote:

On 09/13/2010 07:24 PM, Ian G wrote:



1. In your initial account creation / login, trigger a creation of a
client certificate in the browser.


There may be a way to get a browser to generate a cert or CSR, but I
don't know it. But you can simply generate it at the server side.


Just to be frank here, I'm also not sure what the implementation details 
are.  I've somewhat avoided them until they become useful.


Marsh's notes +1 from me.

iang



Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread Ben Laurie
On 14/09/2010 04:58, John Gilmore wrote:
> http://arstechnica.com/business/news/2010/09/intels-walled-garden-plan-to-put-av-vendors-out-of-business.ars
> 
> "In describing the motivation behind Intel's recent purchase of McAfee
> for a packed-out audience at the Intel Developer Forum, Intel's Paul
> Otellini framed it as an effort to move the way the company approaches
> security "from a known-bad model to a known-good model." Otellini went
> on to briefly describe the shift in a way that sounded innocuous
> enough--current A/V efforts focus on building up a library of known
> threats against which they protect a user, but Intel would love to
> move to a world where only code from known and trusted parties runs on
> x86 systems."
> 
> Let me guess -- to run anything but Windows, you'll soon have to 
> jailbreak even laptops and desktop PC's?

They said "known and trusted", right? So that would rule out anything
from MSFT...

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-14 Thread Peter Gutmann
=JeffH  quotes:

>"We knew ASP.NET was vulnerable to our attack several months ago, but we
>didn't know how serious it is until a couple of weeks ago. It turns out that
>the vulnerability in ASP.NET is the most critical amongst other frameworks.
>In short, it totally destroys ASP.NET security," said Thai Duong, who along
>with Juliano Rizzo, developed the attack against ASP.NET.

The earlier work is also pretty devastating against CAPTCHAs (as well as being
a damn good read, "Sudo make me a CAPTCHA" :-).  A great many CAPTCHAs work by
using a hidden form field containing the encrypted solution to the CAPTCHA,
which is then POSTed back to the server along with the client's solution (this
is needed to make the operation stateless).  If the decrypted version matches
what the client provides, they've solved the CAPTCHA.  So all an attacker has
to do is solve one CAPTCHA manually and then replay the encrypted version back
along with the solution as often as they like, you don't need to hire a
Pakistani Internet cafe any more for your CAPTCHA-breaking.  This destroys an
awful lot of CAPTCHAs, and isn't at all easy to fix because of the requirement
to keep it stateless.
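The replay problem is easy to demonstrate with a toy stateless server. Here the "encrypted solution" in the hidden field is modelled as an HMAC-sealed token (an assumption for brevity; the published attack targets real encryption, but the replay property is identical): the server accepts the same token/solution pair forever.

```python
import hmac
import hashlib

SECRET = b"server-side key"  # hypothetical server secret

def issue_challenge(solution: str) -> str:
    """Server: seal the solution into the hidden form field (stateless)."""
    return hmac.new(SECRET, solution.encode(), hashlib.sha256).hexdigest()

def check_answer(token: str, answer: str) -> bool:
    """Server: recompute the seal from the client's answer and compare."""
    expected = hmac.new(SECRET, answer.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(token, expected)

# Attacker solves ONE captcha by hand...
token = issue_challenge("xk4f2")
# ...then replays the same (token, answer) pair as often as they like:
print(all(check_answer(token, "xk4f2") for _ in range(1000)))  # True
```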

Peter.



Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-14 Thread Perry E. Metzger
On Tue, 14 Sep 2010 23:14:36 +1200 Peter Gutmann
 wrote:
> The earlier work is also pretty devastating against CAPTCHAs (as
> well as being a damn good read, "Sudo make me a CAPTCHA" :-).  A
> great many CAPTCHAs work by using a hidden form field containing
> the encrypted solution to the CAPTCHA, which is then POSTed back to
> the server along with the client's solution (this is needed to make
> the operation stateless).  If the decrypted version matches what
> the client provides, they've solved the CAPTCHA.  So all an
> attacker has to do is solve one CAPTCHA manually and then replay
> the encrypted version back along with the solution as often as they
> like, you don't need to hire a Pakistani Internet cafe any more for
> your CAPTCHA-breaking.  This destroys an awful lot of CAPTCHAs, and
> isn't at all easy to fix because of the requirement to keep it
> stateless.

Couldn't one simply include a timestamp in the encrypted data?
Assuming a five minute window (or what have you) would be too much,
one could also keep some state for five minutes (which is not a lot
to ask for.) 

Perry
-- 
Perry E. Metzger    pe...@piermont.com



Intel plans crypto-walled-garden for x86

2010-09-14 Thread John Gilmore
http://arstechnica.com/business/news/2010/09/intels-walled-garden-plan-to-put-av-vendors-out-of-business.ars

"In describing the motivation behind Intel's recent purchase of McAfee
for a packed-out audience at the Intel Developer Forum, Intel's Paul
Otellini framed it as an effort to move the way the company approaches
security "from a known-bad model to a known-good model." Otellini went
on to briefly describe the shift in a way that sounded innocuous
enough--current A/V efforts focus on building up a library of known
threats against which they protect a user, but Intel would love to
move to a world where only code from known and trusted parties runs on
x86 systems."

Let me guess -- to run anything but Windows, you'll soon have to 
jailbreak even laptops and desktop PC's?

John



Re: Hashing algorithm needed

2010-09-14 Thread Marsh Ray

On 09/13/2010 07:24 PM, Ian G wrote:

On 11/09/10 6:45 PM, f...@mail.dnttm.ro wrote:


Essentially, the highest risk we have to tackle is the database.
Somebody having access to the database, and by this to the
authentication hashes against which login requests are verified,
should not be able to authenticate as another user. Which means, I
need an algorithm which should allow the generation of different
hashes for which it can be verified that they stem from the same login
info, without being able to infer this login info from a hash. This
algorithm is the problem I haven't solved yet. Other than that, I see
no way of protecting against a dictionary attack from somebody having
direct access to the database.


flj, I appreciate your systematic and conscientious engineering 
approach. But I haven't heard anything in your requirements that make it 
sound like a journey outside of well established protocols is justified 
here.


There are a few experienced people around here who could probably come 
up with a new custom scheme and get it right the first time. But 
the history of most (even professionally-designed) new security 
protocols usually includes the later discovery of serious weaknesses.



I don't recall the full discussion, but what you described is generally
handled by public key cryptography, and it is built into HTTPS.

Here's my suggestion:


+1

I have a similar setup going in a reasonably big production environment 
and it's working great.



1. In your initial account creation / login, trigger a creation of a
client certificate in the browser.


There may be a way to get a browser to generate a cert or CSR, but I 
don't know it. But you can simply generate it at the server side.



1.b. record the client cert as the authenticator in the database.


2. when someone connects, the application examines the cert used, and
confirms the account indicated.


Note that you can get the full client cert presented by the web server 
and compare it (or a sufficiently long :-) hash of it) directly with 
what you have in the database. There may be no need to check signatures 
and so on if your server-side is centralized.
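A minimal sketch of that lookup (function names are hypothetical; assumes the front end hands the application the presented cert in PEM form, as Apache's mod_ssl can via the SSL_CLIENT_CERT variable):

```python
import hashlib

# account database: cert fingerprint -> user id (populated at enrollment)
ACCOUNTS = {}

def fingerprint(cert_pem: str) -> str:
    """A sufficiently long hash of the full presented cert."""
    return hashlib.sha256(cert_pem.encode()).hexdigest()

def enroll(user_id: str, cert_pem: str) -> None:
    ACCOUNTS[fingerprint(cert_pem)] = user_id

def authenticate(cert_pem: str):
    """Return the account for this cert, or None -> send to the landing page.
    No signature chain to walk: we only recognise certs we enrolled."""
    return ACCOUNTS.get(fingerprint(cert_pem))

pem = "-----BEGIN CERTIFICATE-----\nMIIB...fake...\n-----END CERTIFICATE-----"
enroll("alice", pem)
print(authenticate(pem))                # alice
print(authenticate("some other cert"))  # None
```

In practice you would hash a canonical encoding (the DER bytes) rather than the PEM text, so whitespace differences can't change the fingerprint.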



If an unknown cert, transfer to a
landing page.
2.b note that there is no login per se, each request can as easily check
the client cert listed by Apache.


Most apps will want to ask the human to authenticate explicitly from 
time to time for one reason or another.



3. you just need some way to roll-over keys from time to time. Left for
later.


Make sure you have at least some way of revoking and renewing client 
certs, even if it's a code update. Just on the outside chance that, say, 
the keys got generated by Debian Etch's RNG or something.



3.b There are some other bugs, but if the approximate scheme works...


Three more recommendations:

Don't put anything sensitive in the X509 cert. Just a minimal userid or 
even random junk. You're just looking it up in a database.


Disable TLS renegotiation unless you control both the clients and the 
servers and can ensure they're all patched for CVE-2009-3555. Don't 
expect to be able to use renegotiation to "hide" the contents of the 
client cert; that never worked against an active attacker anyway.


Use a separate dns name for the https site that accepts client certs 
from the one that does not. The reason is that the client cert will have 
to be requested on the initial handshake. Requesting a client cert will 
cause many browsers to pop-up a dialog. Not something you want on your 
secure home page.
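In Apache terms the split looks roughly like this (directive names are mod_ssl's; hostnames and paths are placeholders):

```apache
# www.example.com -- public site, no client cert requested
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/server.crt
    SSLCertificateKeyFile /etc/ssl/server.key
    SSLVerifyClient none
</VirtualHost>

# auth.example.com -- client cert requested in the initial handshake
<VirtualHost *:443>
    ServerName auth.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/server.crt
    SSLCertificateKeyFile /etc/ssl/server.key
    SSLVerifyClient require
    SSLVerifyDepth  1
    SSLCACertificateFile /etc/ssl/enrollment-ca.crt
    # export the PEM client cert to the application for the database lookup
    SSLOptions +ExportCertData
</VirtualHost>
```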


Again this is a good scheme, it's the way SSL/TLS has been intended to 
be used for authenticated clients since SSLv3. It even offers additional 
protections from the server's perspective, too: the server is no longer 
forced to transitively trust the union of all trusted root CA certs of 
all allowed clients in order to prove the non-existence of a 
man-in-the-middle.


- Marsh
