Re: Photos of an FBI tracking device found by a suspect

2010-10-08 Thread Nicolas Williams
On Fri, Oct 08, 2010 at 05:45:16PM -0400, Perry E. Metzger wrote:
> On Fri, 8 Oct 2010 16:13:13 -0500 Nicolas Williams
>  wrote:
> > On Fri, Oct 08, 2010 at 11:21:16AM -0400, Perry E. Metzger wrote:
> > > My question: if someone plants something in your car, isn't it
> > > your property afterwards?
> > 
> > If you left a wallet in someone's car, isn't it still yours?
> 
> Yes. However, that's an accident. If you deliberately leave a package
> on someone's doorstep, they then own the contents. (In fact, if
> someone mails you something, US law is very clear that it is yours.)

I covered that, didn't I?

> I'd be interested in hearing what a lawyer thinks.

Indeed, but I'm pretty sure the FBI wouldn't lose on that question.  If
the surveillance subject said "it's mine now" they could probably arrest
him, and the legal question could get settled later, possibly in a
protracted appeals battle that would likely ultimately favor the FBI
anyways.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Photos of an FBI tracking device found by a suspect

2010-10-08 Thread Nicolas Williams
On Fri, Oct 08, 2010 at 11:21:16AM -0400, Perry E. Metzger wrote:
> My question: if someone plants something in your car, isn't it your
> property afterwards?

If you left a wallet in someone's car, isn't it still yours?  And isn't
that so even if you left it there on purpose (e.g., to test a person's
character)?  But this is not the same situation, of course, since the
item left behind is an active device.

If your planting of the device violates the target's rights you might
(or might not) lose ownership of the device, along with other penalties.
The FBI is a state actor though, so the rules that apply in this case
are different than in the case of a tracking device planted by a private
investigator, and those might be different than the rules that would
apply if the device's owner is a private actor not even licensed as a
PI.

IOW: ask a lawyer.  But I strongly suspect that the answer in this case
is "the FBI still owns the device", and "the question is not moot" (as
it might be if the device had stopped working and then fallen off the
car, e.g., after hitting a number of nasty potholes).  I mean, I
seriously doubt that the relevant laws would be written so as to grant
the subject ownership of devices planted as part of a legal surveillance
of them, and though it's possible that judge-made law would conclude
differently, I doubt that judges would make such law.

Nico
-- 



Re: English 19-year-old jailed for refusal to disclose decryption key

2010-10-07 Thread Nicolas Williams
On Thu, Oct 07, 2010 at 01:10:12PM -0400, Bernie Cosell wrote:
> I think you're not getting the trick here: with truecrypt's plausible 
> deniability hack you *CAN* give them the password and they *CAN* decrypt 
> the file [or filesystem].  BUT: it is a double encryption setup.  If you 
> use one password only some of it gets decrypted, if you use the other 
> password all of it is decrypted.  There's no way to tell if you used the 
> first password that you didn't decrypt everything.  So in theory you 
> could hide the nasty stuff behind the second password, a ton of innocent
> stuff behind the first password and just give them the first password 
> when asked.  In practice, I dunno if it really works or will really let 
> you slide by.

There is no trick, not really.  If decryption results in plaintext much
shorter than the ciphertext (much shorter than can be explained by the
presence of a MAC), then it'd be fair to assume that you're pulling this
"trick".  The law could easily deal with this.
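A toy sketch of the kind of length check this implies; the threshold and the MAC allowance are my own illustrative assumptions, not anything TrueCrypt or any law specifies:

```python
MAC_OVERHEAD = 64  # generous allowance for MAC/IV/padding, in bytes

def looks_like_partial_decrypt(ciphertext_len, plaintext_len,
                               threshold=0.5):
    """Return True if the recovered plaintext is suspiciously small
    relative to the container, beyond what a MAC could explain."""
    explained = plaintext_len + MAC_OVERHEAD
    return explained < ciphertext_len * threshold

# A 100 MB container that "decrypts" to only 10 MB of innocuous files:
print(looks_like_partial_decrypt(100 * 2**20, 10 * 2**20))   # True
# A decryption that accounts for nearly the whole container:
print(looks_like_partial_decrypt(100 * 2**20, 100 * 2**20 - 64))  # False
```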

Plausible deniability with respect to crypto technology used is not
really any different than plausible deniability with respect to
knowledge of actual keys.  Moreover, possession of software that can do
"double encryption" could be considered probable cause that your files
are likely to be encrypted with it.

Repeat after me: cryptography cannot protect citizens from their states.

Nico
-- 



Re: Hashing algorithm needed

2010-09-14 Thread Nicolas Williams
On Tue, Sep 14, 2010 at 03:16:18PM -0500, Marsh Ray wrote:
> On 09/14/2010 09:13 AM, Ben Laurie wrote:
> >Of some interest to me is the approach I saw recently (confusingly named
> >WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
> >allowing UI to be completely controlled by the issuer.
> 
> First, let's hear it for out of the box thinking. *yay*
> 
> Now, a few questions about this approach:
> 
> How do you deliver Javascript to the browser securely in the first
> place? HTTP?

I'll note that Ben's proposal is in the same category as mine (which
was, to remind you, implement SCRAM in JavaScript and use that, with
channel binding using tls-server-end-point CB type).

It's in the same category because it has the same flaw, which I'd
pointed out earlier: if the JS is delivered by "normal" means (i.e., by
the server), then the script can't be used to authenticate the server.

And if you've authenticated the server via HTTPS (TLS) then you might as
well just POST the username & password to the server, since the server
could just as well send you a script that does just that.

This approach works only if you deliver the script in some out-of-band
manner, such as via a browser plug-in/add-on (hopefully signed [by a
trustworthy trusted third party]).

Nico
-- 



Re: Hashing algorithm needed

2010-09-08 Thread Nicolas Williams
On Wed, Sep 08, 2010 at 05:45:26PM +0200, f...@mail.dnttm.ro wrote:
> We do a web app with an Ajax-based client. Anybody can download the
> client and open the app, only, the first thing the app does is ask for
> login.
> 
> The login doesn't happen using form submission, nor does it happen via
> a known, standard http mechanism.
> 
> What we do is ask the user for some login information, build a hash
> out of it, then send it to the server and have it verified. If it
> checks out, a session ID is generated and returned to the client.
> Afterwards, only requests accompanied by this session ID are answered
> by the server.

I understand why you're doing this, but it's not really any better than
sending the password to the server in the first place via a POST from an
HTML form (where the GET and POST of the form happen over HTTPS).  The
reason there's no real difference is that the script comes from the
server, so it could do anything at all (and besides, you'll end up with
a password equivalent at some point if all you do is hash the password,
but read on).

And since you'd still be teaching your users to type passwords into web
page elements, you'd still leave your users susceptible to phishing.

Still, if this is really what you want, then I recommend that you use
SCRAM (RFC 5802).  SCRAM uses HMAC-SHA-1.  You can find implementations
of SHA-1 in JavaScript; implementing SCRAM given a JavaScript SHA-1
implementation isn't hard.
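For reference, the client-proof step of SCRAM is small enough to show in full; here is a sketch in Python (the salt, iteration count, and auth message are made-up values, not a real exchange; a JavaScript port would substitute its own SHA-1/HMAC):

```python
import hashlib, hmac

def scram_client_proof(password: bytes, salt: bytes, iterations: int,
                       auth_message: bytes) -> bytes:
    # SaltedPassword := Hi(password, salt, i), i.e. PBKDF2-HMAC-SHA-1
    salted = hashlib.pbkdf2_hmac('sha1', password, salt, iterations)
    client_key = hmac.new(salted, b'Client Key', hashlib.sha1).digest()
    stored_key = hashlib.sha1(client_key).digest()  # what the server stores
    client_sig = hmac.new(stored_key, auth_message, hashlib.sha1).digest()
    # ClientProof := ClientKey XOR ClientSignature
    return bytes(a ^ b for a, b in zip(client_key, client_sig))

proof = scram_client_proof(b'pencil', b'made-up-salt', 4096,
                           b'made-up-auth-message')
print(len(proof))  # 20, the SHA-1 output size
```

Note that the server never sees the password itself, only StoredKey, which is the point made below about verifiers.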

Note that if mechanisms like SCRAM were provided by the browser, then
it'd be safe (indeed, good!) to use them.  Implementing them in a
server-provided script doesn't provide anywhere near as much security as
implementing them natively in the browser (or in a browser
plug-in/add-on).

If you can afford to develop a browser add-on, then I recommend you
implement SCRAM there.  The key is to make it possible for users to
distinguish web page elements from UI elements of the browser/add-on --
this is hard.

> What we need is a hashing algorithm that:
> - should not generate the same hash every time - i.e. should include
> some random elements

We call those random elements "nonces".  SCRAM has those.  JavaScript
provides a Math.random() method (though it's not cryptographically
strong).

> - should require little code to generate

SCRAM is rather simple.

> - should allow verification of whether two hashes stem from the same
> login data, without having access to the actual login data

I'm not sure what you mean here.  I think you mean that the verifier
shouldn't be the password nor a password equivalent, in which case
you're completely correct.  SCRAM has that, though the server obtains
enough information, the first time that the user authenticates, to keep
a password equivalent for later use.  Also, SCRAM is susceptible to
off-line password dictionary attacks.  SCRAM is best used over TLS
(HTTPS), with channel binding (see below), to defeat passive attacks and
to make MITM attacks harder to mount without the user noticing.

> We need to implement the hashing algorithm in Javascript and the
> verification algorithm in Java, and it needs to execute reasonably
> fast, that's why it has to require little code. None of us is really
> into cryptography, so the best thing we could think of was asking for
> advice from people who grok the domain.

"Little code" does not imply "fast".  Nor does "much code" imply "slow".
If you don't understand this then you have bigger problems than not
being into cryptography.  You should want "little code" so that
development and maintenance are cheap, not so it runs fast.

> The idea is the following: we don't want to secure the connection, we
> just want to prevent unauthenticated/unauthorized requests. Therefore,

You should definitely want to secure the connection.  If you don't then
your users will be subject to MITM attacks.  (But then, your users will
be vulnerable to phishing attacks anyways, since they'll be used to
typing passwords into spoofable web page elements.)

> we only send a hash over the wire and store it in the database when
> the user changes his password, and only send different hashes when the
> user authenticates later on. On the server, we just verify that the
> stored hash and the received hash match, when an authentication
> request arrives. Cleartext passwords aren't stored anyway, and don't
> ever travel over the wire.
> 
> However, we could not imagine a reasonable algorithm for what we need
> until now, and didn't find anything prefabricated on the web.
> Therefore we ask for help.

Well, you did the right thing in coming to such a list.  You should have
done more research, but it's not a big deal that you didn't, because as
you can see, the real problem is that we don't have a good solution for
you (that is, the browsers don't provide what you really need).

> br,
> 
> flj
> 
> PS: reusing the session ID is of course a security risk, since it
> could allow session hijacking. We're aware of this, but don't intend
> to do anything about it other than warn customers/us

Re: questions about RNGs and FIPS 140

2010-09-01 Thread Nicolas Williams
On Sat, Aug 28, 2010 at 07:01:18PM +1200, Peter Gutmann wrote:
> What matters to someone getting something evaluated isn't what NIST thinks or
> what one person's interpretation of the standard says, but what the lab does
> and doesn't allow.  Since what I reported is based on actual evaluations
> (rather than what NIST thinks), how can it be "factually incorrect"?

BTW, FIPS 140-2 is reasonable regarding RNGs: there are no approved
non-deterministic RNGs, but non-deterministic RNGs may be used to
seed a deterministic RNG.  There are a few problems though:

a) nothing in the standard says anything about re-seeding, nor about
   seeding in the absence of TRNGs,
b) the standard speaks of "nondeterministic RNGs approved for use in
   classified applications" without referencing what such RNGs exist
   (despite specifically stating that there are no approved
   non-deterministic RNGs),
c) Annex C (where approved RNGs are listed) is still a Draft.

On the plus side, one of the approved RNGs listed in the draft Annex C
(the DRBG of Special Publication 800-90) specifically addresses the
entropy pool concept and periodic reseeding with entropy from a
non-deterministic RNG (or from a chain of RNGs anchored by a
non-deterministic RNG).

> >The fact is that all of the approved deterministic RNGs have places that you
> >are expected to use to seed the generator.  The text of the standard
> >explicitly states that you can use non-approved non-deterministic RNGs to
> >seed your approved deterministic RNG.
> 
> Yup, and if you look at some of the generators you'll see things like the use
> of a date-and-time vector DT in the X9.17/X9.30 generator, which was the
> specific example I gave earlier of sneaking in seeding via the date-and-time.
> Unfortunately one lab caught that and required that the DT vector really be a
> date and time, specifically the 64-bit big-endian output of time(), the
> Security 101 counterexample for how to seed an RNG.

X9.31 uses two seeds, one a date/time vector, the other a proper seed
vector.  X9.31 is in the FIPS-140-2 draft Annex C.

> In summary it doesn't matter what the standard says, it matters what the labs
> require, and that can be (a) often arbitrary and (b) contrary to what would
> generally be regarded as good security practice.

I would argue that if what you say is true then it derives from the
standard being underspecified.  In other words: what the standard says
(and doesn't say) does matter, very much.  If different labs interpret
critical portions of the standard in significantly different ways, and
in some cases in ways that reduce security, then clearly the standard is
in need of updating.

The spec ought to describe acceptable PRNG seeding in the absence of
TRNGs (e.g., use a factory seed plus date/time and/or an atomically
incremented counter that is stored persistently), it should cover
virtual machines, and it should cover RNGs that are entropy pools
constructed out of TRNGs and PRNGs.
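A sketch of the kind of fallback seeding described above (factory seed plus date/time plus a persistently incremented counter); the file layout and hash choice are illustrative assumptions, not anything the spec mandates:

```python
import hashlib, os, struct, tempfile, time

def fallback_seed(factory_seed: bytes, counter_file: str) -> bytes:
    """Derive a PRNG seed without a TRNG: hash the factory seed together
    with the current time and an atomically incremented, persistently
    stored counter, so seeds never repeat even if the clock is wrong."""
    try:
        with open(counter_file, 'rb') as f:
            counter = struct.unpack('>Q', f.read(8))[0]
    except FileNotFoundError:
        counter = 0
    counter += 1
    with open(counter_file, 'wb') as f:   # persist before using the seed
        f.write(struct.pack('>Q', counter))
        f.flush()
        os.fsync(f.fileno())
    material = factory_seed + struct.pack('>Qd', counter, time.time())
    return hashlib.sha256(material).digest()

path = os.path.join(tempfile.mkdtemp(), 'counter.bin')
s1 = fallback_seed(b'factory-secret', path)
s2 = fallback_seed(b'factory-secret', path)
print(s1 != s2)  # True: the counter guarantees distinct seeds
```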

Nico
-- 



Re: questions about RNGs and FIPS 140

2010-08-27 Thread Nicolas Williams
On Thu, Aug 26, 2010 at 02:13:46PM -0700, Eric Murray wrote:
> On Thu, Aug 26, 2010 at 11:21:35AM -0500, Nicolas Williams wrote:
> > I'm thinking of a system where a deterministic (seeded) RNG and
> > non-deterministic RNG are used to generate a seed for a deterministic
> > RNG, which is then used for the remainder of the system's operation until
> > next boot or next re-seed.  That is, the seed for the run-time PRNG
> > would be a safe combination (say, XOR) of the outputs of a FIPS 140-2
> > PRNG and a non-certifiable TRNG.
> 
> That won't pass FIPS.  It's reasonable from a security standpoint,
> (although I would use a hash instead of an XOR), but it's not FIPS 140
> certifiable.
> 
> Since FIPS can't reasonably test the TRNG output, it can't
> be part of the output.  FIPS 140 is about guaranteeing a certain 
> level of security, not maximizing security.

If the issue is that determinism is necessary during certification
testing, then it should be possible to switch off the TRNG.  If the
issue is that FIPS is braindead, well, then we're at layer 9.

(One would think that gambling systems would be required to have a TRNG
on/off switch that would be set to off for testing, then set to on, with
resin poured over it at the end of testing, to cause it to stay on.  That
way there'd be no risk of seeds being stolen because normal operation
would render possession of those seeds useless... without also attacking
the TRNG physically.  The TRNG design should be such that physical
attacks on it would be noticeable by bystanders and physical security
monitoring.  Yes, I know, a determined engineer working at a gambling
equipment manufacturer could probably find other ways to trojan the
system anyways.)

Nico
-- 



Re: questions about RNGs and FIPS 140

2010-08-26 Thread Nicolas Williams
On Thu, Aug 26, 2010 at 06:25:55AM -0400, Jerry Leichter wrote:
> On Aug 25, 2010, at 4:37 PM,
> travis+ml-cryptogra...@subspacefield.org wrote:
> >
> >I also wanted to double-check these answers before I included them:
> >
> >1) Is Linux /dev/{u,}random FIPS 140 certified?
> >No, because FIPS 140-2 does not allow TRNGs (what they call non-
> >deterministic).  I couldn't tell if FIPS 140-1 allowed it, but
> >FIPS 140-2 supersedes FIPS 140-1.  I assume they don't allow non-
> >determinism because it makes the system harder to test/certify,
> >not because it's less secure.
> No one has figured out a way to certify, or even really describe in
> a way that could be certified, a non-deterministic generator.

Would it be possible to combine a FIPS 140-2 PRNG with a TRNG such that
testing and certification could be feasible?

I'm thinking of a system where a deterministic (seeded) RNG and a
non-deterministic RNG are used to generate a seed for a deterministic
RNG, which is then used for the remainder of the system's operation
until next boot or next re-seed.  That is, the seed for the run-time
PRNG would be a safe combination (say, XOR) of the outputs of a FIPS
140-2 PRNG and a non-certifiable TRNG.

factory_prng = new PRNG(factory_seed, sequence_number, datetime);
trng = new TRNG(device_path);
runtime_prng = new PRNG(factory_prng.gen(seed_size) ^ trng.gen(seed_size), 0, 0);

One could then test and certify the deterministic RNG and show that the
non-deterministic RNG cannot destroy the security of the system (thus
the non-deterministic RNG would not require testing, much less
certification).

To me it seems obvious that the TRNG in the above scheme cannot
negatively affect the security of the system (given a sufficiently large
seed anyways).
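A runnable sketch of that combination (random.Random and os.urandom are mere stand-ins for the certified PRNG and the uncertifiable TRNG, chosen only so the sketch runs anywhere):

```python
import os, random

SEED_SIZE = 32

def combined_seed(factory_seed: int) -> bytes:
    factory_prng = random.Random(factory_seed)  # stands in for the certified PRNG
    prng_out = factory_prng.randbytes(SEED_SIZE)
    trng_out = os.urandom(SEED_SIZE)            # stands in for the uncertified TRNG
    # XOR-combining: even a fully adversarial TRNG output cannot make
    # the result more predictable than the PRNG output alone.
    return bytes(a ^ b for a, b in zip(prng_out, trng_out))

print(len(combined_seed(0x1234)))  # 32
```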

Nico
-- 



Re: towards https everywhere and strict transport security

2010-08-26 Thread Nicolas Williams
On Thu, Aug 26, 2010 at 12:40:04PM +1000, James A. Donald wrote:
> On 2010-08-25 11:04 PM, Richard Salz wrote:
> >>Also, note that HSTS is presently specific to HTTP. One could imagine
> >>expressing a more generic "STS" policy for an entire site
> >
> >A really knowledgeable net-head told me the other day that the problem
> >with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
> >are now more prohibitive than the crypto costs.  I was quite surprised to
> >hear this; he was stunned to find it out.

It'd help amortize the cost of round-trips if we used HTTP/1.1
pipelining more.  Just as we could amortize the cost of public key
crypto by making more use of TLS session resumption, including session
resumption without server-side state [RFC4507].
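A back-of-the-envelope illustration of why round trips dominate (the latency figures are assumptions for illustration, not measurements):

```python
RTT_MS = 100  # assumed transatlantic-ish round-trip time

def fresh_https_request(rtt=RTT_MS):
    tcp = 1 * rtt   # SYN / SYN-ACK
    tls = 2 * rtt   # full TLS handshake
    http = 1 * rtt  # request / response
    return tcp + tls + http

def resumed_request(rtt=RTT_MS):
    tls = 1 * rtt   # abbreviated handshake via session resumption
    http = 1 * rtt  # TCP connection assumed kept alive (pipelining)
    return tls + http

print(fresh_https_request())  # 400
print(resumed_request())      # 200
```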

And if only end-to-end IPsec with connection latching [RFC5660] had been
deployed years ago we could further amortize crypto context setup.

We need solutions, but abandoning security isn't really a good solution.

> This is inherent in the layering approach - inherent in our current
> crypto architecture.

The second part is a correct description of the current state of
affairs.  I don't buy the first part (see below).

> To avoid inordinate round trips, crypto has to be compiled into the
> application, has to be a source code library and application level
> protocol, rather than layers.

Authentication and key exchange are generally going to require 1.5 round
trips at least, which is to say, really, 2.

Yes, Kerberos AP exchanges happen in 1 round trip, but at the cost of
requiring a persistent replay cache (and also there's the non-trivial
TGS exchanges as well).  Replay caches historically have killed
performance, though they don't have to[0], but still, there's the need
for either a persistent replay cache backing store or a trade-off w.r.t.
startup time and clients with slow clocks[0], and even then you need to
worry about large (>1s) clock adjustments.

So, really, as a rule of thumb, budget 2 round trips for all crypto
setup.  That leaves us with amortization and piggy-backing as ways to
make up for that hefty up-front cost.

> Every time you layer one communication protocol on top of another,
> you get another round trip.
> 
> When you layer application protocol on ssl on tcp on ip, you get
> round trips to set up tcp, and *then* round trips to set up ssl,
> *then* round trips to set up the application protocol.

See draft-williams-tls-app-sasl-opt-04.txt [1], a variant of false
start, which alleviates the latter.  See also
draft-bmoeller-tls-falsestart-00.txt [2].

Back to layering...

If abstractions are leaky, maybe we should consider purposeful
abstraction leaking/piercing.

There's no reason that we couldn't piggy-back one layer's initial message
(and in some cases more) on a lower layer's connection setup message
exchange -- provided much care is taken in doing so.

That's what PROT_READY in the GSS-API is for, that's one use for GSS-API
channel binding (see SASL/GS2 [RFC5801] for one example).  It's what TLS
"false start" proposals are about...  draft-williams-tls-app-sasl-opt-04
gets an up to 1.5 round-trip optimization for applications over TLS.

We could apply the same principle to TCP... (Shades of the old, failed?
transaction TCP [RFC1644] proposal from the mid `90s, I know.  Shades
also of TCP-AO and other more recent proposals perhaps as well.)

But there is a gotcha: the upper layer must be aware of the early
message send/delivery semantics.  For example, early messages may not
have been protected by the lower layer, with protection not confirmed
until the lower layer succeeds, which means that the upper layer must
not commit much in the way of resources until the lower layer completes
(e.g., so as to avoid DoS attacks).

I'm not saying that piercing layers is to be done cavalierly.  Rather,
that we should consider this approach, carefully.  I don't really see
better solutions (amortization won't always help).

Nico

[0] Turns out that there is a way to optimize replay caches greatly, so
that an fsync(2) is not needed on every transaction, or even most.

This is an optimization that turned out to be quite simple to
implement (with much commentary), but took a long time to think
through.  Writing a test program and then using it to test the
implementation's correctness was the lion's share of the
implementation work.

You can see it here:


http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/gss_mechs/mech_krb5/krb5/rcache/rc_file.c

Diffs:


http://src.opensolaris.org/source/diff/onnv/onnv-gate/usr/src/lib/gss_mechs/mech_krb5/krb5/rcache/rc_file.c?r2=%252Fonnv%252Fonnv-gate%252Fusr%252Fsrc%252Flib%252Fgss_mechs%252Fmech_krb5%252Fkrb5%252Frcache%252Frc_file.c%4012192%3Ab9153e7686cf&r1=%252Fonnv%252Fonnv-gate%252Fusr%252Fsrc%252Flib%252Fgss_mechs%252Fmech_krb5%252Fkrb5%252Frcache%252Frc_file.c%407934%3A6aeeafc994de

RFE (though IIRC the description is wro

Re: Has there been a change in US banking regulations recently?

2010-08-16 Thread Nicolas Williams
On Fri, Aug 13, 2010 at 02:55:32PM -0500, eric.lengve...@wellsfargo.com wrote:
> There are some possibilities, my co-workers and I have discussed. For
> purely internal systems TLS-PSK (RFC 4279) provides symmetric
> encryption through pre-shared keys which provides us with whitelisting
> as well as removing asymmetric crypto.  [...]

For purely internal systems Kerberos is really the way to go, mostly
because it's so easy to deploy nowadays.

TLS-PSK is not a useful way of building any but the smallest networks,
for two reasons: a) there are no agreed PBKDF and password salting
mechanisms, so passwords are out; b) there's no enrolment mechanism, so
PSK setup is completely ad-hoc.

Nico
-- 



Re: Five Theses on Security Protocols

2010-08-03 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 11:29:32AM -0400, Adam Fields wrote:
> On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
> [...]
> > 3 Any security system that demands that users be "educated",
> >   i.e. which requires that users make complicated security decisions
> >   during the course of routine work, is doomed to fail.
> [...]
> 
> I would amend this to say "which requires that users make _any_
> security decisions".
> 
> It's useful to have users confirm their intentions, or notify the user
> that a potentially dangerous action is being taken. It is not useful
> to ask them to know (or more likely guess, or even more likely ignore)
> whether any particular action will be harmful or not.

But users have to help you establish the context.  Have you ever been
prompted about invalid certs when navigating to pages where you couldn't
have cared less about the server's ID?  On the web, when does security
matter?  When you fill in a field on a form?  Maybe you're just
submitting an anonymous comment somewhere.  But certainly not just when
making payments.

I believe the user has to be involved somewhat.  The decisions the user
has to make need to be simple, real simple (e.g., never about whether to
accept a defective cert).  But you can't treat the user as a total
ignoramus unless you're also willing to severely constrain what the user
can do (so that you can then assume that everything the user is doing
requires, say, mutual authentication with peer servers).

Nico
-- 



Re: GSM eavesdropping

2010-08-03 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 04:19:38PM -0400, Paul Wouters wrote:
> On Mon, 2 Aug 2010, Nicolas Williams wrote:
> >How should we measure success?
> 
> "The default mode for any internet communication is encrypted"

That's... extreme.  There are many things that will not be encrypted,
starting with the DNS itself, and also most public contents (because
their purveyors won't want to pay for the crypto; sad but true).

> >By that measure TLS has been so much more successful than IPsec as to
> >prove the point.
> 
> I never claimed IPsec was more successful.  It was not.

No, but you claimed that APIs weren't a major issue.  I believe they are.

> >But note that the one bit you're talking about is necessarily a part of
> >a resolver API, thus proving my point :)
> 
> Yes, but in some the API is pretty much done. If you trust your (local)
> resolver, the one bit is the only thing you need to check. You let the
> resolver do most of the bootstrap crypto. Once you have that, your app
> can rip out most of the X.509 nonsense and use the public key obtained
> from DNS for its further crypto needs.

You missed the point.  The point was: do not design security solutions
without designing their interfaces.

IPsec has no user-/sysadmin-/developer-friendly interfaces -> IPsec is
not used.  DNS has interfaces -> when DNSSEC comes along we can extend
those interfaces.

Note that IPsec could have had trivial APIs -- trivial by comparison to
the IPsec configuration interfaces that operating systems typically
have.  For example, there's a proposal in the IETF apps area for an API
that creates connections to named servers, hiding all the details of
name resolution, IPv4/v6/v4-mapped-v6 addressing.  Such an API could
trivially have a bit by which the app can request cryptographic
protection (via IPsec, TLS, whatever can be negotiated).  Optional
complexity could be added to deal with subtleties of the secure
transport (e.g., what cipher suites do you want, if not the default).
But back in the day APIs were seen as not really in scope, so IPsec
never got them, so IPsec has been underused (and rightly so).
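A sketch of what such an API could look like; the function name and its one-bit crypto knob are hypothetical, loosely modeled on the connect-by-name proposal described above (and here the bit is always satisfied with TLS, though a real implementation could negotiate IPsec or anything else):

```python
import socket, ssl

def connect_by_name(host: str, port: int, want_crypto: bool = False,
                    timeout: float = 10.0):
    """Connect to a named service, hiding name resolution and the
    IPv4/IPv6 address-family details; if want_crypto is set, return a
    cryptographically protected channel instead of a bare socket."""
    sock = socket.create_connection((host, port), timeout=timeout)
    if want_crypto:
        ctx = ssl.create_default_context()  # could equally request IPsec
        sock = ctx.wrap_socket(sock, server_hostname=host)
    return sock
```

The application asks for one thing (a secure channel to a name) and never touches certificates, addresses, or policy unless it wants to.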

> >...but we grow technologies organically, therefore we'll never have a
> >situation where the necessary infrastructure gets deployed in a secure
> >mode from the get-go.  This necessarily means that applications need
> >APIs by which to cause and/or determine whether secure modes are in
> >effect.
> 
> But by now, upgrades happen more automatic and more quickly. Adding something
> new to DNS won't take 10 years to get deployed. We've come a long way. It's
> time to reap the benefits from our new infrastructure.

No objection there.

Nico
-- 



Re: GSM eavesdropping

2010-08-02 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 01:05:53PM -0400, Paul Wouters wrote:
> On Mon, 2 Aug 2010, Perry E. Metzger wrote:
> 
> >For example, in the internet space, we have http, smtp, imap and other
> >protocols in both plain and ssl flavors. (IPSec was originally
> >intended to mitigate this by providing a common security layer for
> >everything, but it failed, for many reasons. Nico mentioned one that
> >isn't sufficiently appreciated, which was the lack of APIs to permit
> >binding of IPSec connections to users.)
> 
> If that was a major issue, then SSL would have been much more successful
> then it has been.

How should we measure success?  Every user on the Internet uses TLS
(SSL) on a daily basis.  None uses IPsec for anything other than VPN
(the three people who use IPsec for end-to-end protection on the
Internet are too few to count).

By that measure TLS has been so much more successful than IPsec as to
prove the point.

Of course, TLS hasn't been successful in the sense that we care about
most.  TLS has had no impact on how users authenticate (we still send
usernames and passwords) to servers, and the way TLS authenticates
servers to users turns out to be very weak (because of the plethora of
CAs, and because transitive trust isn't all that strong).

> I have good hopes that soon we'll see use of our new biggest
> cryptographically signed distributed database. And part of the
> signalling can come in via the AD bit in DNSSEC (eg by adding an EDNS
> option to ask for special additional records signifying "SHOULD do
> crypto with this pubkey")
> 
> The AD bit might be a crude signal, but it's fairly easy to implement
> at the application level. Requesting specific additional records will
> remove the need for another latency driven DNS lookup to get more
> crypto information.
> 
> And obsolete the broken CA model while gaining improved support for
> SSL certs by removing all those enduser warnings.

DNSSEC will help immensely, no doubt, and mostly by giving us a single
root CA.

But note that the one bit you're talking about is necessarily a part of
a resolver API, thus proving my point :)

The only way we can avoid having such an API requirement is by ensuring
that all zones are signed and all resolvers always validate RRs.  An API
is required in part because we won't get there from day one (that day
was decades ago).

The same logic applies to IPsec.  Suppose we'd deployed IPsec and DNSSEC
back in 1983... then we might have many, many apps that rely on those
protocols unknowingly, and that might be just fine...

...but we grow technologies organically, therefore we'll never have a
situation where the necessary infrastructure gets deployed in a secure
mode from the get-go.  This necessarily means that applications need
APIs by which to cause and/or determine whether secure modes are in
effect.

Nico
-- 



Re: GSM eavesdropping

2010-08-02 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 12:32:23PM -0400, Perry E. Metzger wrote:
> Looking forward, the "there should be one mode, and it should be
> secure" philosophy would claim that there should be no insecure
> mode for a protocol. Of course, virtually all protocols we use right
> now had their origins in the days of the Crypto Wars (in which case,
> we often added too many knobs) or before (in the days when people
> assumed no crypto at all) and thus come in encrypted and unencrypted
> varieties of all sorts.
> 
> For example, in the internet space, we have http, smtp, imap and other
> protocols in both plain and ssl flavors. [...]

Well, to be fair, there is much content to be accessed insecurely for
the simple reason that there may be no way to authenticate a peer.  For
much of the web this is the case.

For example, if I'm listening to music on an Internet radio station, I
couldn't care less about authenticating the server (unless it needs to
authenticate me, in which case I'll want mutual authentication).  Same
thing if I'm reading a random blog entry or a random news story.

By analogy to the off-line world, we authenticate business partners, but
in asymmetric broadcast-type media, authentication is very weak and only
of the broadcaster to the receiver.  If we authenticate broadcasters at
all, we do it by such weak methods as recognizing logos, broadcast
frequencies, etcetera.

In other words, context matters.  And the user has to understand the
context.  This also means that the UI matters.  I hate to demand any
expertise of the user, but it seems unavoidable.  By analogy to the
off-line world, con-jobs happen, and they happen because victims are
naive, inexperienced, ill, senile, etcetera.  We can no more protect the
innocent at all times online than off, not without their help.

"There should be one mode, and it should be secure" is a good idea, but
it's not as universally applicable as one might like.  *sadness*

SMTP and IMAP, then, definitely require secure modes.  So does LDAP,
even though it's used to access -mostly- public data, and so is more
like broadcast media.  NNTP must not even bother with a secure mode ;)

Another problem you might add to the list is tunneling.  Firewalls have
led us to build every app as a web or HTTP application, and to tunnel
all the others over port 80.  This makes the relevant context harder,
if not impossible, to resolve without the user's help.

HTTP, sadly, needs an insecure mode.

Nico
-- 



Re: Five Theses on Security Protocols

2010-07-31 Thread Nicolas Williams
On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
> 5 Also related to 3, but important in its own right: to quote Ian
>   Grigg:
> 
> *** There should be one mode, and it should be secure. ***

6. Enrolment must be simple.

I didn't see anything about transitive trust.  My rule regarding that:

7. Transitive trust, if used at all, should be used to bootstrap
   non-transitive trust (see "enrolment must be simple") or should be
   limited to scales where transitive trust is likely to work (e.g.,
   corporate scale).

Nico
-- 



Re: Persisting /dev/random state across reboots

2010-07-29 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 03:47:01PM -0400, Richard Salz wrote:
> At shutdown, a process copies /dev/random to /var/random-seed which is 
> used on reboots.
> Is this a good, bad, or "shrug, whatever" idea?

If the entropy pool has other reasonable, fast sources of entropy at
boot time, then seeding the entropy pool at boot time with a seed
generated at shutdown time is harmless (assuming a good enough entropy
pool design).  Otherwise, this approach can be a good idea (see below).

> I suppose the idea is that "all startup procs look the same" ?

The idea is to get enough entropy into the entropy pool as fast as
possible at boot time, faster than the system's entropy sources might
otherwise allow.

The security of a system that works this way depends critically on
several things: a) no one reads the seed between the time it's generated
and the time it's used to seed the entropy pool, b) the seed cannot be
used twice accidentally, c) the system can cope with crashes (i.e., no
seed at boot) such as by blocking reads of /dev/random and even
/dev/urandom until enough entropy is acquired, d) the entropy pool
treats the seed as entropy from any other source and applies the normal
mixing procedure to it, e) there is a way to turn off this chaining of
entropy across boots.  (Have I missed anything?)

(a) can't really be ensured.  But one could be sufficiently confident
that (a) is true that one would want to enable this.  (d) means that
every additional bit of entropy obtained from other sources at boot time
will make it harder for an attacker that managed to read this seed to
successfully mount any attacks on you.  (e) would be for the paranoid;
for most users, most of the time, chaining entropy across reboots is
probably a very good idea.  But most importantly, on-CPU RNGs should
make this totally pointless (see previous RNG-on-CPU threads).
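
The save/restore cycle, with requirement (b) enforced by immediately rewriting the seed file at boot, can be sketched as follows.  A toy in-process "pool" stands in for the kernel's so the sketch is self-contained; all names are invented, and a real init script would talk to /dev/urandom instead:

```python
import hashlib
import os
import tempfile

SEED_BYTES = 64

# Toy stand-in for the kernel's entropy pool (invented, for illustration).
_pool = hashlib.sha256(b"boot-time entropy sources")

def pool_write(data: bytes) -> None:
    _pool.update(data)            # (d): the seed gets the normal mixing

def pool_read(n: int) -> bytes:
    out = b""
    while len(out) < n:
        _pool.update(b"\x00")     # advance state so outputs don't repeat
        out += _pool.digest()
    return out[:n]

def save_seed(path: str) -> None:
    """Shutdown: persist a pool-derived seed, owner-readable only (a)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, pool_read(SEED_BYTES))
    finally:
        os.close(fd)

def load_seed(path: str) -> None:
    """Boot: mix the saved seed in, then immediately replace the file so the
    same seed is never used twice (b), even if we crash right after (c)."""
    try:
        with open(path, "rb") as f:
            pool_write(f.read())
    except FileNotFoundError:
        return                    # crashed before a seed existed (c)
    save_seed(path)               # fresh seed drawn from the re-seeded pool

seed_path = os.path.join(tempfile.mkdtemp(), "random-seed")
save_seed(seed_path)              # "shutdown"
old = open(seed_path, "rb").read()
load_seed(seed_path)              # "boot": old seed mixed in and replaced
new = open(seed_path, "rb").read()
```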

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 10:50:10AM +0200, Alexandre Dulaunoy wrote:
> On Thu, Jul 29, 2010 at 3:09 AM, Nicolas Williams
>  wrote:
> > This is a rather astounding misunderstanding of the protocol.  [...]
> 
> I agree on this and but the implementation of OCSP has to deal with
> all "non definitive" (to take the wording of the RFC) answers. That's
> where the issue is. All the "exception case", mentioned in 2.3, are
> all unauthenticated and it seems rather difficult to provide authenticated
> scheme for that part as you already mentioned in [*].
> 
> That's why malware authors are already adding fake entries of OCSP
> server in the host file... simple and efficient.

A DoS attack on OCSP clients (which is all this really is) should either
cause the clients to fall back on CRLs or to fail the larger operation
(TLS handshake, whatever) altogether.  The latter makes this just a DoS.
The former makes this less than a DoS.

The real risk would be OCSP clients that don't bother with CRLs if the
OCSP Responder can't respond successfully, but which proceed anyway as
if peers' certs are valid.  If there exist such clients, don't blame OCSP.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 10:03:08PM +0200, Alexandre Dulaunoy wrote:
> On Wed, Jul 28, 2010 at 5:51 PM, Peter Gutmann
>  wrote:
> > Nicolas Williams  writes:
> >
> >>Exactly.  OCSP can work in that manner.  CRLs cannot.
> >
> > OCSP only appears to work in that manner.  Since OCSP was designed to be 
> > 100%
> > [...]

The protocol allows for more than simple proxy checking of CRLs.  What
implementations do is another matter (which matters, of course, but be
sure to know what you're condemning, the implementations or the
protocol, as they're not the same thing).

> OCSP is even better for an attacker. As the OCSP responses are
> unauthenticated[1], you can be easily fake the response with
> what ever the attacker likes.
> 
> http://www.thoughtcrime.org/papers/ocsp-attack.pdf
> 
> [1] Would be silly to run OCSP over SSL ;-)

This is a rather astounding misunderstanding of the protocol.  An
OCSPResponse does contain unauthenticated plaintext[*], but that
plaintext says nothing about the status of the given certificates -- it
only says whether the OCSP Responder was able to handle the request.  If
a Responder is not able to handle requests it should respond in some
way, and it may well not be able to authenticate the error response,
thus the status of the responder is unauthenticated, quite distinctly
from the status of the certificate, which is authenticated.  Obviously
only successful responses are useful.

The status of a certificate (see SingleResponse ASN.1 type) most
certainly is covered by the signature: SingleResponse is part of
ResponseData, which is the type of tbsResponseData, which is what the
signature covers.

Don't take my word for it, nor that paper's author's.  Read the RFC and
decide for yourself.

[*] It's not generally possible to avoid unauthenticated plaintext
completely in cryptographic protocols.  The meaning of a given bit
of unauthenticated plaintext must be taken into account when
analyzing a cryptographic protocol.
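
A toy model of the RFC 2560 layout may make the distinction clearer.  Field names loosely follow the RFC, but the ASN.1 and the actual signature verification are stubbed out, and `evaluate` is an invented helper:

```python
from dataclasses import dataclass
from typing import Optional

# responseStatus is unauthenticated plaintext; everything inside
# tbsResponseData is covered by the Responder's signature.

@dataclass
class SingleResponse:
    cert_status: str          # "good" | "revoked" | "unknown" -- signed
    this_update: float        # signed

@dataclass
class OCSPResponse:
    response_status: str                          # unauthenticated
    tbs_response_data: Optional[SingleResponse]   # signature-covered payload
    signature_ok: Optional[bool]                  # verification result

def evaluate(resp: OCSPResponse) -> str:
    """What a careful relying party can conclude from each part."""
    if resp.response_status != "successful":
        return "responder-error"   # says nothing about any certificate
    if not resp.signature_ok:
        return "reject"            # the signed part must verify to mean anything
    return resp.tbs_response_data.cert_status
```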


Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 02:41:35PM -0400, Perry E. Metzger wrote:
> On the other edge of the spectrum, many people now use quite secure
> protocols (though I won't claim the full systems are secure --
> implementation bugs are ubiquitous) for handling things like remote
> login and file transfer, accessing shared file systems on networks,
> etc., with little to no knowledge on their part about how their
> systems work or are configured. This seems like a very good thing. One
> may complain about many issues in Microsoft's systems, for example,
> but adopting Kerberos largely fixed the distributed authentication
> problem for them, and without requiring that users know what they're
> doing.

Hear, hear!  But... great for corporate networks, not quite for
Internet scale; still, a great example of how we can make progress when
we want to.

> (I am reminded of the similar death-by-complexity of the IPSec
> protocol's key management layers, where I am sad to report that even I
> can't easily configure the thing. Some have proposed standardizing on
> radically simplified profiles of the protocol that provide almost no
> options -- I believe to be the last hope for the current IPSec suite.)

IPsec is a great example of another kind of failure: lack of APIs.
Applying protection to individual packets without regard to larger
context is not terribly useful.  Apps have no idea what's going on, if
anything, in terms of IPsec protection.  Worse, the way in which IPsec
access control is handled means that typically many nodes can claim any
given IP address, which dilutes the protection provided by IPsec as the
number of such nodes goes up.  Just having a way to ask that a TCP
connection's packets all be protected by IPsec, end-to-end, with similar
SA pairs (i.e., with same peers, same transforms) would have been a
great API to have years ago.
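
No such per-connection API ever shipped; purely to illustrate the shape being asked for, here is a sketch in which every name is invented:

```python
from dataclasses import dataclass

# Hypothetical: no kernel exposed a per-connection IPsec request API.
# This only sketches what the paragraph above says was missing.

@dataclass
class IPsecPolicy:
    require: bool = True                  # fail rather than send cleartext
    peer_identity: str = ""               # who the SAs must authenticate, end-to-end
    transforms: tuple = ("esp-aes-gcm",)  # acceptable transforms for all SAs

class Connection:
    """Stand-in for a TCP connection object with an attached IPsec request."""
    def __init__(self, host: str, port: int, ipsec: IPsecPolicy = None):
        self.host, self.port = host, port
        # A real stack would install the policy before connecting and fail
        # the connection if it could not be satisfied; here we just record
        # it so the application can later *inspect* what protection applied.
        self.ipsec = ipsec

conn = Connection("db.example", 5432, IPsecPolicy(peer_identity="db.example"))
```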

The lack of APIs has effectively relegated IPsec to the world of VPN.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 01:25:21PM -0400, Perry E. Metzger wrote:
> My mother relies on many certificates. Can she make a decision on
> whether or not her browser uses OCSP for all its transactions?
> 
> I mention this only because your language here is quite sticky.
> Saying it is "up to the relying parties" is incorrect. It is really
> up to a host of people who are nowhere near the relying parties. In
> most cases, the relying parties aren't even capable of understanding
> the issue.

Precise and concise language in a fast moving thread with participants
with diverse backgrounds is going to be hard to come by.  Better to quit
than hold out for that (unless you enjoy being disappointed).  I'm
hardly the only "sinner" here on that score.

"up to the relying parties" means "up to the browsers", where users-as-
relying-parties are concerned.  That also means "getting software
updated", which to some degree means "getting my mom to do stuff she
doesn't and shouldn't have to know how to do".  It shouldn't mean
"getting my mom to enable OCSP" -- that would be hopeless.

"up to the relying parties" means "up to the server" as well, since
servers too are relying-parties.

Again, if everything is too hard, why do we bother even talking about
any of this?  ETOOHARD cannot usefully be a retort to every suggestion.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 12:18:56PM -0400, Perry E. Metzger wrote:
> Again, I understand that in a technological sense, in an ideal world,
> they would be equivalent. However, the big difference, again, is that
> you can't run Kerberos with no KDC, but you can run a PKI without an
> OCSP server. The KDC is impossible to leave out of the system. That is
> a really nice technological feature.

Whether PKI can run w/o OCSP is up to the relying parties.  Today,
because OCSP is an afterthought, they have little choice.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 04:23:52AM +1200, Peter Gutmann wrote:
> Nicolas Williams  writes:
> >Sorry, but this is wrong.  The OCSP protocol itself really is an online
> >certificate status protocol.  
> 
> It's not an online certificate status protocol because it can provide neither
> a yes or a no response to a query about the validity of a certificate.

You should be more specific.  I'm looking at RFC2560 and I don't see
this.

OCSP Responses allow the Responder to assert:

 - A time at which the given cert was known to be valid (thisUpdate;
   REQUIRED).

   Relying parties are free to impose a "freshness" requirement (e.g.,
   thisUpdate must be no more than 5 minutes in the past).

   Perhaps you're concerned that protocols that allow for carrying OCSP
   Responses don't provide a way for peers to indicate what their
   freshness requirements are?

 - A time after which the given OCSP Response is not to be considered
   valid (nextUpdate, which is OPTIONAL).

 - The certificate's status (certStatus, one of good, revoked, unknown;
   REQUIRED).

How is responding "certStatus=good, thisUpdate="
not a "yes response to a query about the validity of a certificate"?

What am I missing?
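
A relying party's check over those fields might look like this.  This is a sketch: `accept` and `MAX_AGE` are invented names, and the 5-minute policy is the example above, not anything RFC 2560 mandates:

```python
import time

MAX_AGE = 5 * 60   # relying party's own freshness policy: thisUpdate <= 5 min old

def accept(cert_status: str, this_update: float,
           next_update: float = None, now: float = None) -> bool:
    """Apply local policy to the signed fields of an OCSP response."""
    if now is None:
        now = time.time()
    if cert_status != "good":
        return False                               # revoked or unknown: reject
    if now - this_update > MAX_AGE:
        return False                               # stale: freshness policy violated
    if next_update is not None and now > next_update:
        return False                               # responder said this answer expired
    return True
```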

> (For an online status protocol I want to be able to submit a cert and get back
> a straight valid/not valid response, exactly as I can for credit cards with
> their authorised/declined response.  Banks were doing this twenty years ago
> with creaky mainframes over X.25 and (quite probably) wet bits of string, but
> we still can't do this today with multicore CPUs and gigabit links if we're
> using OCSP).

OCSP gives you that.  Seriously.  In fact, an OCSP Responder either must
not respond or it must give you at least {certStatus, thisUpdate}
information about a cert.  Yes, certStatus can be "unknown", but a
Responder that regularly asserts certStatus=unknown would be a rather
useless responder.

> >Responder implementations may well be based on checking CRLs, but they aren't
> >required to be.
> 
> They may be, or they may not be, but you as a relying party have no way of 
> telling.

And why would a relying party need to know internal details of the OCSP
Responder?

> In any event though since OCSP can't say yes or no, it doesn't matter whether 
> the response is coming from a live database or a month-old CRL, since it's 
> still a fully CRL-bug-compatible blacklist I can trivially avoid it with a 
> manufactured-cert attack.

Manufactured cert attack?  If you can mint certs without having the CA's
private key then who cares about OCSP.  If you can do it only as a
result of hash collisions, well, switch hashes.  Let's not confuse hash
collision issues with whether OCSP does what it's advertised to do.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 11:20:51AM -0500, Nicolas Williams wrote:
> On Wed, Jul 28, 2010 at 12:18:56PM -0400, Perry E. Metzger wrote:
> > Again, I understand that in a technological sense, in an ideal world,
> > they would be equivalent. However, the big difference, again, is that
> > you can't run Kerberos with no KDC, but you can run a PKI without an
> > OCSP server. The KDC is impossible to leave out of the system. That is
> > a really nice technological feature.
> 
> Whether PKI can run w/o OCSP is up to the relying parties.  Today,
> because OCSP is an afterthought, they have little choice.

Also, requiring OCSP will probably take less effort than switching from
PKI to Kerberos.  In other words: everything sucks.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Thu, Jul 29, 2010 at 03:51:33AM +1200, Peter Gutmann wrote:
> Nicolas Williams  writes:
> 
> >Exactly.  OCSP can work in that manner.  CRLs cannot.
> 
> OCSP only appears to work in that manner.  Since OCSP was designed to be 100% 
> bug-compatible with CRLs, it's really an OCQP (online CRL query protocol) and 
> not an OCSP.  Specifically, if I submit a freshly-issued, valid certificate 
> to 
> an OCSP responder and ask "is this a valid certificate" then it can't say 
> yes, 
> and if I submit an Excel spreadsheet to an OCSP responder and ask "is this a 
> valid certificate" then it can't say no.  It takes quite some effort to 
> design 
> an online certificate status protocol that's that broken.
> 
> (For people not familiar with OCSP, it can't say "yes" because a CRL can't 
> say 
> "yes" either, all it can say is "not on the CRL", and it can't say "no" for 
> the same reason, all it can say is "not on the CRL".  The ability to say 
> "vslid certificate" or "not valid certificate" was explicitly excluded from 
> OCSP because that's not how things are supposed to be done).

Sorry, but this is wrong.  The OCSP protocol itself really is an online
certificate status protocol.  Responder implementations may well be
based on checking CRLs, but they aren't required to be.

Don't be confused by the fact that OCSP borrows some elements from CRLs.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 11:38:28AM -0400, Perry E. Metzger wrote:
> On Wed, 28 Jul 2010 09:57:21 -0500 Nicolas Williams
>  wrote:
> > OCSP Responses are much like a PKI equivalent of Kerberos tickets.
> > All you need to do to revoke a principal with OCSP is to remove it
> > from the Responder's database or mark it revoked.
> 
> Actually, that's untrue in one very important respect.
> 
> In a Kerberos style system, you actively ask for credentials to do
> things at frequent intervals, and if the KDCs refuse to talk to you,
> you get no credentials.
> 
> In OCSP, we've inverted that. You have the credentials, for years in
> most cases, and someone else has to actively check that they're okay
> -- and in most instances, if they fail to get through to an OCSP
> server, they will simply accept the credentials.

No, they really are semantically equivalent.  In Kerberos (traditional,
pre-PKINIT Kerberos) the long-term credential is {principal name,
long-term secret key} or {principal name, password}, and the temporary
credential is the Kerberos Ticket.  In PKI+OCSP the long-term credential
is {certificate, private key}, and the temporary credential is
{certificate, private key, fresh OCSP Response}.

Both, Kerberos and PKI+OCSP replace a long-term credential with a
short-lived, temporary credential authenticating the same principal.

> OCSP is hung on to the side of X.509 as an afterthought, so it cannot
> [...]

Yes, but it's still "morally equivalent" to Kerberos as described above,
but with PK instead of KDCs and shared secret keys.

Also, PKI+OCSP is somewhat less dependent on online infrastructure than
Kerberos because: a) just one OCSP Response will do[*], vs. a multitude
of service Tickets, b) OCSP Responders don't need access to the CA's
private key, whereas the KDCs do need access to the TGS keys.  Also,
OCSP Responses can be cached by the network, whereas Kerberos Tickets
cannot be (since they are useless[**] without the corresponding session
key).

[*]  It helps to have protocols where subjects can send OCSP responses
 for their own certs to their peers.  It also helps to have
 protocols where client subjects can get OCSP Responses for their
 own certs from their peers and then re-use those Responses later.

[**] I'm ignoring user-to-user authentication here.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 11:13:36AM -0400, Perry E. Metzger wrote:
> On Wed, 28 Jul 2010 09:30:22 -0500 Nicolas Williams
>  wrote:
> 
> I have no objections to "infrastructure" -- bridges, the Internet,
> and electrical transmission lines all seem like good ideas. However,
> lets avoid using the term "Public Key Infrastructure" for things that
> depart radically from the Kohnfelder and subsequent X.509 models.

Well, OK.  But PKI no longer means that, not with bridges and what not
in the picture.

> > Infrastructure (whether of a pk variety or otherwise) and transitive
> > trust probably have to be part of the answer for scalability
> > reasons, even if transitive trust is a distasteful concept.
> 
> Well, it depends a lot on what kind of trust.
> 
> Let me remind everyone of one of my long-standing arguments.
> 
> Say that Goldman Sachs wants to send Morgan Stanley an order for a
> billion dollars worth of bonds. Morgan Stanley wants to know that
> Goldman sent the order, because the consequences of a mistake on a
> transaction this large would be disastrous.

Indeed.  They must first establish a direct trust relationship.  They
might leverage transitive trust to bootstrap direct trust if doing so
makes the process easier (which it almost certainly does, and which we
use in the off-line world all the time using pieces of paper or plastic
issued by various authorities, such as "drivers' licenses", "passports",
...).

> > However, we need to be able to build direct trust relationships,
> > otherwise we'll just have a house of transitive trust cards.
> > Again, think of the the SSH leap-of- faith and "SSL pinning"
> > concepts, but don't constrain yourselves purely to pk technology.
> 
> I believe we may, in fact, be in violent agreement here.

We are.  Perhaps I hadn't made my point obvious enough: transitive trust
is necessary, but primarily as a method of bootstrapping direct trust
relationships.  I really should have used that specific formulation.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 10:42:43AM -0400, Anne & Lynn Wheeler wrote:
> On 07/28/2010 10:05 AM, Perry E. Metzger wrote:
> >I will point out that many security systems, like Kerberos, DNSSEC and
> >SSH, appear to get along with no conventional notion of revocation at all.
> 
> long ago and far away ... one of the tasks we had was to periodically
> go by project athena to "audit" various activities ... including
> Kerberos. The original PK-INIT for kerberos was effectively
> certificateless public key ... 

And PKINIT today also allows for rp-only user certs if you want them.
They must be certificates, but they needn't carry any useful data beyond
the subject public key, and the KDC must know the {principal,
cert|pubkey} associations.

> An issue with Kerberos (as well as RADIUS ... another major
> authentication mechanism) ... is that account-based operation is
> integral to its operation ... unless one is willing to go to a
> strictly certificate-only mode ... where all information about an
> individuals authority and access privileges are also carried in the
> certificate (and eliminate the account records totally).

This is true any time you have rp-only certs or certs that carry less
information than the rp will require, and the latter is almost always
true.  The account can be local to each rp, however, or centralized --
that's up to the relying parties.

> As long as the account record has to be accessed as part of the
> process ... the certificate remains purely redundant and superfluous
> (in fact, some number of operations running large Kerberos based
> infrastructure have come to realize that they have large redundant
> administrative activity maintaining both the account-based information
> as well as the duplicate PKI certificate-based information).

Agreed.  Certificates should, as much as possible, be rp-only.

> The account-based operations have sense of revocation by updating the
> account-based records. [...]

Exactly.  OCSP can work in that manner.  CRLs cannot.  In terms of
administration updating an account record is much simpler than updating
a CRL (because much less information needs to be available for the
former than for the latter).

> The higher-value operations tend to be able to justify the real-time,
> higher quality, and finer grain information provided by an
> account-based infrastructure ... and as internet and technology has
> reduced the costs and pervasiveness of such operations ... it further
> pushes PKI, certificate-based mode of operation further and further
> into no-value market niches.

Are you arguing for Kerberos for Internet-scale deployment?  Or simply
for PKI with rp-only certs and OCSP?  Or other "federated"
authentication mechanism?  Or all of the above?  :)

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 03:16:32PM +0100, Ben Laurie wrote:
> Maybe it doesn't, but no revocation mechanism at all makes me nervous.
> 
> I don't know Kerberos well enough to comment.
> 
> DNSSEC doesn't have revocation but replaces it with very short
> signature lifetimes (i.e. you don't revoke, you time out).

Kerberos too lacks revocation, and it also makes up for it with short
ticket lifetimes.

OCSP Responses are much like a PKI equivalent of Kerberos tickets.  All
you need to do to revoke a principal with OCSP is to remove it from the
Responder's database or mark it revoked.  To revoke an individual
certificate you need only mark a date for the given subject such that no
cert issued prior to it will be considered valid.

An OCSP Responder implementation could be based on checking a real CRL
or checking a database of known subjects (principals).  Whichever is
likely to be smaller over time is best, though the latter is just
simpler to administer (since you don't need to know the subject public
key nor the issuer&serial, nor the actual TBSCertificate in order to
revoke, just the subject name and current date and time).
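
As a sketch, such an account-backed Responder's status check could be as small as this (all names are invented; times are plain epoch seconds):

```python
from dataclasses import dataclass

@dataclass
class SubjectRecord:
    valid_from: float   # certs for this subject issued before this instant are revoked

def cert_status(db: dict, subject: str, issued_at: float) -> str:
    """Status check backed by an account database rather than a CRL."""
    rec = db.get(subject)
    if rec is None:
        return "revoked"        # principal removed: nothing it held is valid
    if issued_at < rec.valid_from:
        return "revoked"        # cut-off date moved past this cert's issue time
    return "good"
```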

> SSH does appear to have got away without revocation, though the nature
> of the system is s.t. if I really wanted to revoke I could almost
> always contact the users and tell them in person. This doesn't scale
> very well to SSL-style systems.

The SSH ad-hoc pubkey model is a public key pre-sharing (for user keys)
and pre-sharing and/or leap-of-faith (for host keys) model.  It doesn't
scale without infrastructure.  Add infrastructure and you're back to a
PKI-like model (maybe with no hierarchy, but still).

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 10:05:22AM -0400, Perry E. Metzger wrote:
> PKI was invented by Loren Kohnfelder for his bachelor's degree thesis
> at MIT. It was certainly a fine undergraduate paper, but I think we
> should forget about it, the way we forget about most undergraduate
> papers.

PKI alone is certainly not the answer to all our problems.

Infrastructure (whether of a pk variety or otherwise) and transitive
trust probably have to be part of the answer for scalability reasons,
even if transitive trust is a distasteful concept.  However, we need to
be able to build direct trust relationships, otherwise we'll just have a
house of transitive trust cards.  Again, think of the SSH leap-of-faith
and "SSL pinning" concepts, but don't constrain yourselves purely to pk
technology.

Nico
-- 



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Nicolas Williams
On Wed, Jul 28, 2010 at 01:21:33PM +0100, Ben Laurie wrote:
> On 28/07/2010 13:18, Peter Gutmann wrote:
> > Ben Laurie  writes:
> > 
> >> I find your response strange. You ask how we might fix the problems, then 
> >> you 
> >> respond that since the world doesn't work that way right now, the fixes 
> >> won't 
> >> work. Is this just an exercise in one-upmanship? You know more ways the 
> >> world 
> >> is broken than I do?
> > 
> >   [...].  I'm 
> > after effective practical solutions, not just "a solution exists, QED" 
> > solutions.
> 
> The core problem appears to be a lack of will to fix the problems, not a
> lack of feasible technical solutions.
> 
> I don't know why it should help that we find different solutions for the
> world to ignore?

Solutions at higher layers might have a better chance of getting
deployed.  No, I'm not suggesting that we replace TLS and HTTPS with
application-layer crypto over HTTP, not entirely anyways.  I am
suggesting that we use what little TLS does give us in ways that don't
require changing TLS much or at all.

Application-layer authentication with tls-server-end-point channel
bindings seems like a feasible candidate.  This too would require
changes on clients and servers, which makes it not-that-likely to get
implemented and deployed, but not changes at the TLS layer (other than
an API by which to extract a TLS connection's server cert).  It could be
deployed incrementally such that users who can use it get better
security.  Then if the market gives a damn about security, it might get
closer to fully deployed in our lifetimes.

The assumption here is that improvements at the TLS and PKI layers occur
with enormous latency.  If this were true at all layers then we could
just give up, or aim to fix not just today's problems, but tomorrow's, a
decade or three from now (ha).  It'd be nice if that assumption were not
true at all.

Nico
-- 



Re: A mighty fortress is our PKI

2010-07-28 Thread Nicolas Williams
On Tue, Jul 27, 2010 at 10:10:54PM -0600, Paul Tiemann wrote:
> I like the idea of SSL pinning, but could it be improved if statistics
> were kept long-term (how many times I've visited this site and how
> many times it's had certificate X, but today it has certificate Y from
> a different issuer and certificate X wasn't even near its expiration
> date...)

My preference would be for doing something like SCRAM (and other
SASL/GSS mechanisms) with channel binding (using tls-server-end-point CB
type).  It has the effect that the server can confirm that the
certificate seen by the client is the correct one -- whereas the server
cannot do that in the "SSL pinning" approach.  It'd have other major
benefits as well.
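
For reference, the tls-server-end-point binding data is just a hash of the server's certificate.  A sketch per RFC 5929 (the function name is invented):

```python
import hashlib

def tls_server_end_point(cert_der: bytes, cert_sig_hash: str = "sha256") -> bytes:
    """Channel-binding bytes per RFC 5929 'tls-server-end-point': a hash of
    the server's DER-encoded certificate, using the hash from the cert's
    signature algorithm, with MD5 and SHA-1 upgraded to SHA-256."""
    if cert_sig_hash in ("md5", "sha1"):
        cert_sig_hash = "sha256"
    return hashlib.new(cert_sig_hash, cert_der).digest()
```

Both ends mix these bytes into the SASL/GSS exchange; a man-in-the-middle terminating TLS with its own certificate produces different binding data on each side, so authentication fails even though the user never examined the certificate.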

The problem is: there's no standard way to do this in web browser
applications.  Worse, there aren't even any prototypes.

I also like the Moonshot approach.

> Another thought: Maybe this has been thought of before, but what about
> emulating the Sender Policy Framework (SPF) for domains and PKI?
> Allow each domain to set a DNS TXT record that lists the allowed CA
> issuers for SSL certificates used on that domain.  (Crypto Policy
> Framework=CPF?)

Better yet: use DNSSEC and publish TLS EE certs in the DNS.
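
The client-side check would then be a one-line comparison.  This is hypothetical: no such DNS record type existed, so the record contents (a SHA-256 digest of the EE cert, published in a DNSSEC-signed zone) are an assumption of the sketch:

```python
import hashlib

def dns_cert_check(server_cert_der: bytes, published_digest_hex: str) -> bool:
    # Compare the certificate TLS presented against the digest the domain
    # owner published in the (hypothetical) DNSSEC-signed record.
    return hashlib.sha256(server_cert_der).hexdigest() == published_digest_hex
```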

Nico


Re: A mighty fortress is our PKI

2010-07-27 Thread Nicolas Williams
On Tue, Jul 27, 2010 at 06:30:51PM -0600, Paul Tiemann wrote:
> >> **  But talking about TLS/SNI to SSL suppliers is like talking about the
> >> lifeboats on the Titanic ... we don't need it because SSL is unsinkable.
> 
> Apache support for this came out 12 months ago.  Does any one know of
> statistics that show what percentage of installed Apache servers out
> there are running 2.2.12 or greater?  How many of the top 10 Linux
> distributions are past 2.2.12?  

Yet browser SNI support is what matters regarding adoption.  No hosting
service will provision services such that SNI is required if too much of
the browser installed base does not support it.

Of course server support is a requirement in order to get SNI deployed,
but that's much less of an issue than client support.

Thanks for pointing out IE6 though.

Nico


Re: A mighty fortress is our PKI

2010-07-27 Thread Nicolas Williams
On Tue, Jul 27, 2010 at 09:54:51PM +0100, Ben Laurie wrote:
> On 27/07/2010 15:11, Peter Gutmann wrote:
> > The intent with posting it to the list was to get input from a collection of
> > crypto-savvy people on what could be done.  The issue had previously been
> > discussed on a (very small) private list, and one of the members suggested I
> > post it to the cryptography list to get more input from people.  The 
> > follow-up
> > message (the "Part II" one) is in a similar vein, a summary of a problem and
> > then some starters for a discussion on what the issues might be.
> 
> Haven't we already decided what to do: SNI?

But isn't that the problem, that "SNI had to be added therefore it isn't
everywhere therefore site operators don't trust its presence therefore
SNI is irrelevant"?

Do we have any information as to which browsers in significant current
use don't support SNI?  Hopefully at some point site operators could
declare that browsers that don't support SNI will not be supported.

Nico


Re: What if you had a very good patent lawyer...

2010-07-24 Thread Nicolas Williams
On Thu, Jul 22, 2010 at 05:59:50PM -0700, John Gilmore wrote:
> It's pretty outrageous that anyone would try to patent rolling barcoded
> dice to generate random numbers.

If you have children at home you could just point a webcam at their
gameroom, or, depending on how obsessive compulsive their guardians are
regarding cleanliness, anywhere in their homes.  Of course, they won't
be there all the time, and their guardians will sometimes clean up, which
means that such a generator will tend to be biased, which means you need
an entropy extractor and entropy pool (but you knew you needed those
anyways).  Even so, I believe that such an entropy generator will
generally produce better entropy than a geiger counter, at least when
it's operational.
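The debiasing step mentioned above can be as simple as the classic von Neumann
extractor, sketched here (real designs would instead hash everything into an
entropy pool, but this shows why bias alone isn't fatal given independence):

```python
import random

def von_neumann(bits):
    """Classic von Neumann extractor: for each non-overlapping pair of
    independent-but-biased bits, emit the first bit of a (0,1) or (1,0)
    pair and discard (0,0)/(1,1) pairs.  Output bits are unbiased."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased source (roughly 75% ones) still yields ~50/50 output,
# at the cost of throughput (most pairs get discarded).
random.seed(1)
raw = [1 if random.random() < 0.75 else 0 for _ in range(100_000)]
out = von_neumann(raw)
print(len(out), round(sum(out) / len(out), 2))
```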

I wouldn't put it past any PTO, especially the USPTO, to issue a patent
on gathering entropy from a webcam pointed at tiny, human entropy
generators.  But IANAL.

Nico


Re: Intel to also add RNG

2010-07-12 Thread Nicolas Williams
On Mon, Jul 12, 2010 at 01:13:10PM -0400, Jack Lloyd wrote:
> I think it's important to make the distinction between trusting Intel
> not to have made it actively malicious, and trusting them to have
> gotten it perfectly correct in such a way that it cannot fail.
> Fortunately, the second problem, that it is a well-intentioned but
> perhaps slightly flawed RNG [*], could be easily alleviated by feeding
> the output into a software CSPRNG (X9.31, a FIPS 186-3 design, take
> your pick I guess). And the first could be solved by also feeding your
> CSPRNG with anything that you would have fed it with in the absense of
> the hardware RNG - in that case, you're at least no worse off than you
> were before. (Unless your PRNG's security can be negatively affected
> by non-random or maliciously chosen inputs, in which case you've got
> larger problems).

You need an entropy pool anyways.  Adding entropy (from the CPU's RNG,
from hopefully-random event timings, ...) and non-entropy (from a flawed
HW RNG, from sadly-not-random event timings, ...) to the pool results in
having enough entropy (once enough entropy has been added to begin
with).  You'll want multiple entropy sources no matter what, to deal
with HW RNG failures for example.
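The mixing property described above -- bad inputs can't hurt, good inputs help
-- can be sketched with a toy hash-based pool (source labels and the output
derivation are illustrative only, not a production DRBG design):

```python
import hashlib

class EntropyPool:
    """Toy hash-based entropy pool: mixing in any input, good or bad,
    never reduces the entropy already in the pool -- a flawed source
    merely fails to add more.  (A sketch, not a production design.)"""
    def __init__(self):
        self.state = b"\x00" * 32

    def mix(self, source_id, data):
        self.state = hashlib.sha256(self.state + source_id + data).digest()

    def extract(self, n):
        out = hashlib.sha256(b"out:" + self.state).digest()[:n]
        self.mix(b"ratchet", b"")      # ratchet so outputs don't repeat
        return out

pool = EntropyPool()
pool.mix(b"hwrng", b"\xaa" * 32)       # possibly-flawed HW RNG output
pool.mix(b"timings", b"\x07\x19\x2b")  # hopefully-random event timings
key = pool.extract(16)
print(len(key))   # 16
```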

BTW, SPARC CPUs have shipped with on-board HW RNGs; Intel is hardly
first.

Nico


Re: "Against Rekeying"

2010-03-26 Thread Nicolas Williams
On Sat, Mar 27, 2010 at 12:31:45PM +1300, Peter Gutmann (alt) wrote:
> Nicolas Williams  writes:
> 
> >I made much the same point, but just so we're clear, SSHv2 re-keying has been
> >interoperating widely since 2005.  (I was at Connectathon, and while the
> >details of Cthon testing are proprietary, I can generalize and tell you that
> >interop in this area was very good.)
> 
> Whose SSH rekeying though?  I follow the support forums for a range of non-
> mainstream (i.e. not the usual suspects of OpenSSH, ssh.com, or Putty) SSH
> implementations and "why does my connection die after an hour with [decryption
> error/invalid packet/unrecognised message type/whatever]" (all signs of
> rekeying issues) is still pretty much an FAQ across them at the current time.

Several key ones, including SunSSH.  I'd have to go ask permission in
order to disclose, since Connectathon results are private, IIRC.  Also,
it's been five years, so some of the information has fallen off my
cache.

Nico


Re: "Against Rekeying"

2010-03-26 Thread Nicolas Williams
On Fri, Mar 26, 2010 at 10:22:06AM -0400, Peter Gutmann wrote:
> I missed that in his blog post as well.  An equally big one is the SSHv2
> rekeying fiasco, where for a long time an attempt to rekey across two
> different implementations typically meant "drop the connection", and it still
> does for the dozens(?) of SSH implementations outside the mainstream of
> OpenSSH, Putty, ssh.com and a few others, because the procedure is so complex
> and ambiguous that only a few implementations get it right (at one point the
> ssh.com and OpenSSH implementations would detect each other and turn off
> rekeying because of this, for example).  Unfortunately in SSH you're not even
> allowed to ignore rekey requests like you can in TLS, so you're damned if you
> do and damned if you don't [0].

I made much the same point, but just so we're clear, SSHv2 re-keying has
been interoperating widely since 2005.  (I was at Connectathon, and
while the details of Cthon testing are proprietary, I can generalize and
tell you that interop in this area was very good.)

Nico


Re: "Against Rekeying"

2010-03-25 Thread Nicolas Williams
On Thu, Mar 25, 2010 at 01:24:16PM +, Ben Laurie wrote:
> Note, however, that one of the reasons the TLS renegotiation attack was
> so bad in combination with HTTP was that reauthentication did not result
> in use of the new channel to re-send the command that had resulted in a
> need for reauthentication. This command could have come from the
> attacker, but the reauthentication would still be used to "authenticate" it.

It would have sufficed to bind the new and old channels.  In fact, that
is pretty much the actual solution.

> In other words, designing composable secure protocols is hard. And TLS
> isn't one. Or maybe it is, now that the channels before and after
> rekeying are bound together (which would seem to invalidate your
> argument above).

Channel binding is one tool that simplifies the design and analysis of
composable secure protocols.  Had channel binding been used to analyze
TLS re-negotiation earlier the bug would have been obvious earlier as
well.  Proof of that last statement is in the pudding: Martin Rex
independently found the bug when reasoning about channel binding to TLS
channels in the face of re-negotiation; once he started down that path
he found the vulnerability promptly.

(There are several champions of the channel binding technique who could
and should have noticed the TLS bug earlier.  I myself simply took the
security of TLS for granted; I should have been more skeptical.  I
suspect that what happened, ultimately, is that TLS re-negotiation was
an afterthought, barely mentioned in the TLS 1.2 RFC and barely used,
therefore many experts were simply not conscious enough of its existence
to care.  Martin was quite conscious of it while also analyzing a
tangential channel binding proposal.)

Nico


Re: "Against Rekeying"

2010-03-23 Thread Nicolas Williams
On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
> On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
> > Ekr has an interesting blog post up on the question of whether protocol
> > support for periodic rekeying is a good or a bad thing:
> > 
> > http://www.educatedguesswork.org/2010/03/against_rekeying.html
> > 
> > I'd be interested in hearing what people think on the topic. I'm a bit
> > skeptical of his position, partially because I think we have too little
> > experience with real world attacks on cryptographic protocols, but I'm
> > fairly open-minded at this point.
> 
> I fully agree with EKR on this: if you're using block ciphers with
> 128-bit block sizes in suitable modes and with suitably strong key
> exchange, then there's really no need to ever (for a definition of
> "ever" relative to common "connection" lifetimes for whatever protocols
> you have in mind, such as months) re-key for cryptographic reasons.

I forgot to mention that I was referring to session keys for on-the-wire
protocols.  For data storage I think re-keying is easier to justify.

Also, there is a strong argument for changing ephemeral session keys for
long sessions, made by Charlie Kaufman on EKR's blog post: to limit
disclosure of earlier ciphertexts resulting from future compromises.

However, I think that argument can be answered by changing session keys
without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
session keys in such a way would not be trivial, but it may well be
simpler than the alternative.  I've only got, in my mind, a sketch of
how it'd work.)

Nico


Re: "Against Rekeying"

2010-03-23 Thread Nicolas Williams
On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
> Ekr has an interesting blog post up on the question of whether protocol
> support for periodic rekeying is a good or a bad thing:
> 
> http://www.educatedguesswork.org/2010/03/against_rekeying.html
> 
> I'd be interested in hearing what people think on the topic. I'm a bit
> skeptical of his position, partially because I think we have too little
> experience with real world attacks on cryptographic protocols, but I'm
> fairly open-minded at this point.

I fully agree with EKR on this: if you're using block ciphers with
128-bit block sizes in suitable modes and with suitably strong key
exchange, then there's really no need to ever (for a definition of
"ever" relative to common "connection" lifetimes for whatever protocols
you have in mind, such as months) re-key for cryptographic reasons.

There may be reasons for re-keying, but the commonly given one that a
given key gets weak over time from use (meaning the attacker can gather
ciphertexts) and just the passage of time (during which an attacker
might brute force it) does not apply to modern crypto.
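The "modern crypto" claim rests on the birthday bound: with a 128-bit block
cipher in CBC or CTR mode, security only begins to degrade near 2^64 blocks
processed under one key.  A back-of-the-envelope calculation:

```python
# CBC/CTR with a 128-bit block cipher degrades (block collisions become
# likely) only near the birthday bound of 2**(128/2) = 2**64 blocks.
blocks = 2 ** 64
total_bytes = blocks * 16                    # 16-byte (128-bit) blocks
rate = 10e9 / 8                              # a sustained 10 Gb/s link, in bytes/s
years = total_bytes / rate / (3600 * 24 * 365)
print(f"{total_bytes / 2**60:.0f} EiB, ~{years:.0f} years at 10 Gb/s")
```

No plausible connection lifetime approaches that bound, which is why re-keying
for key-wearout reasons alone is unnecessary with such ciphers.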

Ensuring that a protocol that uses modern crypto also supports re-keying
only complicates the protocol, which adds to the potential for bugs.

Consider SSHv2: popular implementations of the server do privilege
separation, but after successful login there's the potential for having
to do re-keys that require privilege (e.g., if you're using SSHv2 w/
GSS-API key exchange), which complicates privilege separation.  But for
that wrinkle the only post-login privsep complications are: logout
processing (auditing, ...), and utmpx processing (if you want tty
channels to appear in w(1) output; this could always be handled in ways
that are not specific to sshd).  What a pain!  (OTOH, the ability to
delegate fresh GSS credentials via re-keying is useful.)

Nico


Re: 1024 bit RSA cracked?

2010-03-16 Thread Nicolas Williams
On Wed, Mar 10, 2010 at 09:27:06PM +0530, Udhay Shankar N wrote:
> Anyone know more?
> 
> http://news.techworld.com/security/3214360/rsa-1024-bit-private-key-encryption-cracked/

My initial reaction from reading only the abstract and parts of the
introduction is that the authors are talking about attacking hardware
that implements RSA (say, a cell phone) by injecting faults into the
system via the power supply of the device.

This isn't really applicable to server hardware in a data center (where
the power, presumably, will be conditioned and physical security will be
provided, also presumably) but this attack is definitely applicable to
portable devices -- laptops, mobiles, smartcards.

> "The RSA algorithm gives security under the assumption that as long as
> the private key is private, you can't break in unless you guess it.
> We've shown that that's not true," said Valeria Bertacco, an associate
> professor in the Department of Electrical Engineering and Computer
> Science, in a statement.

They're not the first ones to show that!  Side-channel attacks have been
around for a while now.  It's not just the algorithms, but the machine
executing them and its physical characteristics that matter.
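One classic instance of this attack family (Boneh, DeMillo, and Lipton, 1997)
shows why a single computational fault is devastating to RSA when the signer
uses the CRT speedup.  A toy demonstration, with deliberately tiny parameters:

```python
import math

# Toy RSA-CRT fault attack.  Deliberately tiny parameters for
# illustration; real keys are >= 1024 bits.
p, q = 1009, 1013
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+)

def sign_crt(m, fault=False):
    # CRT speedup: compute the signature mod p and mod q, then recombine.
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq = (sq + 1) % q            # one glitch in the mod-q half
    return sp + p * (((sq - sp) * pow(p, -1, q)) % q)   # Garner recombination

m = 42
s_good, s_bad = sign_crt(m), sign_crt(m, fault=True)
assert pow(s_good, e, N) == m        # the correct signature verifies
# The faulty signature is still correct mod p but wrong mod q, so
# s_bad**e - m is divisible by p and not by q:
print(math.gcd(pow(s_bad, e, N) - m, N))   # 1009 -- a prime factor of N
```

One glitched signature hands the attacker a factor of N, i.e., the private key
-- which is why conditioned power and physical security matter so much.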

Nico


Re: TLS break

2009-11-25 Thread Nicolas Williams
On Wed, Nov 11, 2009 at 10:57:04AM -0500, Jonathan Katz wrote:
> Anyone care to give a "layman's" explanation of the attack? The 
> explanations I have seen assume a detailed knowledge of the way TLS/SSL 
> handle re-negotiation, which is not something that is easy to come by 
> without reading the RFC. (As opposed to the main protocol, where one can 
> find textbook descriptions.)

Not to sound like a broken record, and not to plug work I've done[*],
but IMO the best tool to apply to this situation -- to understand the
problem, to produce solutions, and to analyze proposed solutions -- is
"channel binding" [0].

Channel binding should be considered whenever one combines two (or more)
two-peer end-to-end security protocols.

In this case two instances of the same protocol are combined, with an
outer/old TLS connection and an inner/new connection negotiated with the
protection of the outer one.  That last part, "negotiated with the
protection of the outer one" may have led people to believe that the
combination technique was safe.  However, applying channel binding as an
analysis technique would have made it clear that that technique was
vulnerable to MITM attacks.

What channel binding does not give you as an analysis technique is
exploit ideas beyond "try being an MITM".

The nice thing about channel binding is that it allows you to avoid
having to analyze the combined protocols in order to understand whether
the combination is safe.  As a design technique all you need to do is
this: (a) design a cryptographically secure "name" for an established
"channel" of the outer protocol, (b) design a cryptographically secure
facility in the inner protocol for verifying that the applications at
both ends observe the same outer channel, and (c) feed (a) to (b).  If
the two protocols, (a), and (b) are all secure, then you'll have a
secure combination.
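Steps (a)-(c) can be sketched in a few lines (all keys, transcripts, and labels
here are hypothetical placeholders; a real deployment would use, e.g., the TLS
server cert or finished messages as the channel's unique input):

```python
import hashlib, hmac

def channel_name(transcript):
    # (a) a cryptographically secure "name" for the outer channel,
    # e.g. a hash over its unique handshake transcript.
    return hashlib.sha256(transcript).digest()

def inner_auth(shared_key, cb_data):
    # (b)+(c) the inner protocol's authenticator covers the channel
    # name each endpoint observed.
    return hmac.new(shared_key, b"auth:" + cb_data, "sha256").digest()

k = b"inner-protocol-shared-key"           # hypothetical inner key
# No MITM: both ends terminate the same outer channel, so they agree.
assert inner_auth(k, channel_name(b"client<->server")) == \
       inner_auth(k, channel_name(b"client<->server"))
# MITM: each end terminates a *different* outer channel; the channel
# names differ, so the inner authenticators fail to match.
print(inner_auth(k, channel_name(b"client<->mitm")) ==
      inner_auth(k, channel_name(b"mitm<->server")))          # False
```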

[*] I've written an RFC on the topic, but the idea isn't mine -- it
goes as far back as 1992 in IETF RFCs.  I'm not promoting channel
binding because I had anything to do with it, but because it's a
useful technique in combining certain cryptographic protocols that I
think should be more widely understood and applied.

[0] On the Use of Channel Bindings to Secure Channels, RFC5056.

Nico


Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-02 Thread Nicolas Williams
On Sun, Nov 01, 2009 at 10:33:34PM -0700, Zooko Wilcox-O'Hearn wrote:
> I don't understand why you need a MAC when you already have the hash  
> of the ciphertext.  Does it have something to do with the fact that  
> the checksum is non-cryptographic by default (http://docs.sun.com/app/ 
> docs/doc/819-5461/ftyue?a=view ), and is that still true?  Your  
> original design document [1] said you needed a way to force the  
> checksum to be SHA-256 if encryption was turned on.  But back then  
> you were planning to support non-authenticating modes like CBC.  I  
> guess once you dropped non-authenticating modes then you could relax  
> that requirement to force the checksum to be secure.

[Not speaking for Darren...]  No, the requirement to use a strong hash
remains, but since the hash would be there primarily for protection
against errors, I don't think the requirement for a strong hash is
really necessary.

> Too bad, though!  Not only are you now tight on space in part because  
> you have two integrity values where one ought to do, but also a  
> secure hash of the ciphertext is actually stronger than a MAC!  A  
> secure hash of the ciphertext tells whether the ciphertext is right  
> (assuming the hash function is secure and implemented correctly).   
> Given that the ciphertext is right, then the plaintext is right  
> (given that the encryption is implemented correctly and you use the  
> right decryption key).  A MAC on the plaintext tells you only that  
> the plaintext was chosen by someone who knew the key.  See what I  
> mean?  A MAC can't be used to give someone the ability to read some  
> data while withholding from them the ability to alter that data.  A  
> secure hash can.

Users won't actually get the data keys, only the data key wrapping keys.
Users who can read the disk and find the wrapped keys and know the
wrapping keys can find the actual data keys, of course, but add in a
host key that the user can't read and now the user cannot recover their
data keys.  One goal is to protect a system against its users, but
another is to protect user data against malicious modification by anyone
else.  A MAC provides the first kind of protection if the user can't
access the data keys, and a MAC provides the second kind of protection
if the data keys can be kept secret.

> One of the founding ideas of the whole design of ZFS was end-to-end  
> integrity checking.  It does that successfully now, for the case of  
> accidents, using large checksums.  If the checksum is secure then it  
> also does it for the case of malice.  In contrast a MAC doesn't do  
> "end-to-end" integrity checking.  For example, if you've previously  
> allowed someone to read a filesystem (i.e., you've given them access  
> to the key), but you never gave them permission to write to it, but  
> they are able to exploit the isses that you mention at the beginning  
> of [1] such as "Untrusted path to SAN", then the MAC can't stop them  
> from altering the file, nor can the non-secure checksum, but a secure  
> hash can (provided that they can't overwrite all the way up the  
> Merkle Tree of the whole pool and any copies of the Merkle Tree root  
> hash).

I think we have to assume that an attacker can write to any part of the
pool, including the Merkle tree roots.  It'd be odd to assume that the
attacker can write anywhere but there -- there's nothing to make it so!

I.e., we have to at least authenticate the Merkle tree roots.  That
still means depending on collision resistance of the hash function for
security.  If we authenticate every block we don't have that dependence
(I'll come back to this).
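"Authenticate the Merkle tree roots" can be sketched like this (a toy binary
tree over SHA-256, with a hypothetical key name; the real ZFS tree is wider,
but the property is the same):

```python
import hashlib, hmac

def merkle_root(blocks):
    """Hash each data block, then hash pairs upward to a single root."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])    # duplicate the odd node out
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-%d" % i for i in range(4)]
root = merkle_root(blocks)
# MAC only the root: an attacker who rewrites any block (or inner node)
# cannot produce a tree whose root verifies, without the MAC key.
tag = hmac.new(b"pool-master-key", root, "sha256").digest()
tampered = blocks[:2] + [b"evil"] + blocks[3:]
forged = hmac.new(b"pool-master-key", merkle_root(tampered), "sha256").digest()
print(hmac.compare_digest(tag, forged))   # False: tampering detected
```

Note this still leans on the hash's collision resistance for every block below
the root, which is exactly the dependence discussed next.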

The interesting thing here is that we want the hash _and_ the MAC, not
just the MAC.  The reason is that we want block pointers (which include
the {IV, MAC, hash} for the block being pointed to) to be visible to the
layer below the filesystem, so that we can scrub/resilver and evacuate
devices from a pool (meaning: re-write all the block pointers point to
blocks on the evacuated devices so that they point elsewhere) even
without having the data keys at hand (more on this below).

We could MAC the Merkle tree roots alone, thus alleviating the space
situation in the block pointer structure (and also saving precious CPU
cycles).  But interestingly we wouldn't alleviate it that much!  We need
to store a 96-bit IV, and if we don't MAC every block then we'll want
the strongest hash we can use, so we'll need at least another 256 bits,
for a total of 352 bits of the 384 that we have to play with.  Whereas
if we MAC every block we might store a 96-bit IV, a 128-bit
authentication tag and 160-bit hash, using all 384 bits.

You get more collision resistance from an N-bit MAC than from a hash of
the same length.  That's because in the MAC case the forger can't check
the forgery without knowing the key, while in the hash case the attacker
can verify offline that some content collides with another's hash.  In
the MAC case an attacker that hasn't broken the MAC's key cannot even
test candidate forgeries offline.
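The asymmetry is easy to see concretely (keys and data here are hypothetical
placeholders):

```python
import hashlib, hmac

published_hash = hashlib.sha256(b"real block").digest()
candidate = b"candidate forgery"
# Hash case: anyone can test candidates against a published hash
# offline, as fast as they can compute SHA-256 -- no secret needed.
print(hashlib.sha256(candidate).digest() == published_hash)   # False

# MAC case: checking whether a candidate would verify requires the MAC
# key itself; an attacker without it can't test forgeries offline.
key = b"secret-mac-key"                    # hypothetical key
tag = hmac.new(key, b"real block", "sha256").digest()
attacker_guess = hmac.new(b"wrong-key-guess", candidate, "sha256").digest()
print(hmac.compare_digest(tag, attacker_guess))               # False
```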

Re: Possibly questionable security decisions in DNS root management

2009-10-19 Thread Nicolas Williams
Getting DNSSEC deployed with sufficiently large KSKs should be priority #1.

If 90 days for the 1024-bit ZSKs is too long, that can always be
reduced, or the ZSK keylength be increased -- we too can squeeze factors
of 10 from various places.  In the early days of DNSSEC deployment the
opportunities for causing damage by breaking a ZSK will be relatively
meager.  We have time to get this right; this issue does not strike me
as urgent.

OTOH, will we be able to detect breaks?  A clever attacker will use
breaks in very subtle ways.  A ZSK break would be bad, but something
that could be dealt with, *if* we knew it'd happened.  The potential
difficulty of detecting attacks is probably the best reason for seeking
stronger keys well ahead of time.

Nico


Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-30 Thread Nicolas Williams
On Sun, Sep 27, 2009 at 02:23:16PM -0700, Fuzzy Hoodie-Monster wrote:
> As usual, I tend to agree with Peter. Consider the time scale and
> severity of problems with cryptographic algorithms vs. the time scale
> of protocol development vs. the time scale of bug creation
> attributable to complex designs. Let's make up some fake numbers,
> shall we? (After all, we're software engineers. Real numbers are for
> real engineers! Bah!)
> 
> [snip]
> 
> Although the numbers are fake, perhaps the orders of magnitude are
> close enough to make the point. Which is: your software will fail for
> reasons unrelated to cryptographic algorithm problems long before
> SHA-256 is broken enough to matter. Perhaps pluggability is a source
> of frequent failures, designed to solve for infrequent and
> low-severity algorithm failures. I would worry about an overfull \hbox
> (badness 1!) long before I worried about AES-128 in CBC mode with
> a unique IV made from /dev/urandom. Between now and the time our

"AES-128 in CBC mode with a unique IV made from /dev/urandom" is
manifestly not the issue of the day.  The issue is hash function
strength.  So when would you worry about MD5?  SHA-1?  By your own
admission MD5 has already been fatally wounded and SHA-1 is headed
that way.

> ciphers and hashes and signatures are broken, we'll have a decade to
> design and implement the next simple system to replace our current
> system. Most software developers would be overjoyed to have a full
> decade. Why are we whining?

We don't have a decade to replace MD5.  We've had a long time to replace
MD5, and even SHA-1 already, but we haven't done it yet.  The reason is
simple: there's more to it than you've stated.  Specifically, for
example, you ignored protocol update development (you assumed one new
protocol per year, but this says nothing about how long it takes to,
say, update TLS) and deployment issues completely, and you supposed that
software development happens at a consistent, fast clip throughout.
Software development and deployment are usually constrained by legacy
and customer behavior, as well as resource availability, all of which
varies enormously.  Protocol upgrade development, for example, is harder
than you might think (I'm guessing though, since you didn't address that
issue).  Complexity exists outside the protocol, too.  This is why we must plan
ahead and make reasonable trade-offs.  Devising protocols that make
upgrade easier is important, supposing that they actually help with the
deployment issues (cue your argument that they do not).

I'm OK with making up numbers for the sake of argument.  But you have to
make up all the relevant numbers.  Then we can plug in real data where
we have it, argue about the other numbers, ...

> What if TLS v1.1 (2006) specified that the only ciphersuite was RSA
> with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode. How
> likely is it that attackers will be able to reliably and economically
> attack those algorithms in 2016? Meanwhile, the comically complex
> X.509 is already a punching bag
> (http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
> and 
> http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
> including the remote exploit in the certificate handling code itself).

We don't have crystal balls.  We don't really know what's in store for
AES, for example.  Conservative design says we should have a way to
deploy alternatives in a reasonably short period of time.

You and Peter are clearly biased against TLS 1.2 specifically, and
algorithm negotiation generally.  It's also clear that you're outside
the IETF consensus on both matters _for now_.  IMO you'll need to make
better arguments, or wait enough time to be proven right by events, in
order to change that consensus.

Nico


Re: Client Certificate UI for Chrome?

2009-09-08 Thread Nicolas Williams
On Thu, Sep 03, 2009 at 04:26:30PM +1200, Peter Gutmann wrote:
> Steven Bellovin  writes:
> >This returns us to the previously-unsolved UI problem: how -- with today's
> >users, and with something more or less like today's browsers since that's
> >what today's users know -- can a spoof-proof password prompt be presented?
> 
> Good enough to satisfy security geeks, no, because no measure you take will
> ever be good enough.  [...]

Well, if you're willing to reserve screen real estate, keyboard key
combinations, and so on, with said reserved screen space used to
indicate unambiguously the nature of other things displayed, and
reserved input combinations used to trigger trusted software paths, then
yes, you can solve that problem.  That's the premise of "trusted
desktops", at any rate.  There are caveats, like just how large the TCB
becomes (including parts of the browser), the complexity of the trusted
information to be presented to users versus the limited amount of screen
real estate available to convey it, the need to train users to
understand the concept of trusted desktops, no fullscreen apps can be
allowed, accessibility issues, it all falls apart if the TCB is
compromised, ...

Nico


Re: RNG using AES CTR as encryption algorithm

2009-09-08 Thread Nicolas Williams
On Wed, Sep 02, 2009 at 10:58:03AM +0530, priya yelgar wrote:
> How ever I searched on the CSRC site, but found the test vectors for
> AES_CBC not for AES CTR.
> 
> Please  can any one tell me where to look for the test vectors to test
> RNG using  AES CTR.

They are trivially constructed from the test vectors for AES in ECB
mode (just as counter mode is trivially constructed from ECB mode).

Nico


Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Nicolas Williams
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?
> 
> It seems to me protocol designers get all excited about this because
> they want to design the protocol once and be done with it. But software
> authors are generally content to worry about the new algorithm when they
> need to switch to it - and since they're going to have to update their
> software anyway and get everyone to install the new version, why should
> they worry any sooner?

Many good replies have been given already.  Here's a few more reasons to
want "pluggability" in the protocol:

 - Yes, we "want to design the protocol once and be done with" the hard
   parts of the design problem that we can reasonably expect to have to
   do only once.  Having to do things only once is not just "cool".

 - Pluggability at the protocol layer enable pluggability in the
   implementations.  A pluggable design does not imply open plug-in
   interfaces, but a pluggable design does imply highly localized
   development of new plug-ins.

 - It's a good idea to promote careful thought about the future,
   precisely what designing a pluggable protocol does and requires.

   We may get it wrong (e.g., the SSHv2 alg nego protocol has quirks,
   some of which were discovered when we worked on RFC4462), but the
   result is likely to be much better than not putting much or any such
   thought into it.

If the protocol designers and the implementors get their respective
designs right, the best case scenario is that switching from one
cryptographic algorithm to another requires less effort in the pluggable
case than in the non-pluggable case.  Specifically, specification and
implementation of new crypto algs can be localized -- no existing
specification nor code need change!  Yes, new SW must still get
deployed, and that's pretty hard, but it helps to make it easier to
develop that SW.
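The "highly localized development of new plug-ins" point can be sketched in a
few lines: adding an algorithm touches only a registry, while the negotiation
logic (here a simplification of SSHv2-style preference-list negotiation) never
changes.  All names are illustrative:

```python
import hashlib

# A pluggable design: new algorithms register locally; no existing
# specification or code changes when one is added.
HASHES = {"sha-256": hashlib.sha256, "sha-1": hashlib.sha1}

def register(name, ctor):
    HASHES[name] = ctor            # the only code a new plug-in touches

def negotiate(client_prefs, server_supported):
    # Pick the client's most-preferred algorithm the server also
    # supports, roughly as SSHv2 algorithm negotiation does.
    for name in client_prefs:
        if name in server_supported:
            return name
    raise ValueError("no common algorithm")

register("sha-512", hashlib.sha512)   # deploying a new alg: one call
chosen = negotiate(["sha-512", "sha-256"], set(HASHES))
print(chosen)   # sha-512
```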

Nico


Re: Fast MAC algorithms?

2009-07-23 Thread Nicolas Williams
On Thu, Jul 23, 2009 at 05:34:13PM +1200, Peter Gutmann wrote:
> "mhey...@gmail.com"  writes:
> >2) If you throw TCP processing in there, unless you are consistantly going to
> >have packets on the order of at least 1000 bytes, your crypto algorithm is
> >almost _irrelevant_.
> >[...]
> >for a Linux 2.2.14 kernel, remember, this was 10 years ago.
> 
> Could the lack of support for TCP offload in Linux have skewed these figures
> somewhat?  It could be that the caveat for the results isn't so much "this was
> done ten years ago" as "this was done with a TCP stack that ignores the
> hardware's advanced capabilities".

How much NIC hardware does both, ESP/AH and TCP offload?  My guess: not
much.  A shame, that.

Once you've gotten a packet off the NIC to do ESP/AH processing, you've
lost the opportunity to use TOE.

Nico
-- 



Re: Fast MAC algorithms?

2009-07-22 Thread Nicolas Williams
On Wed, Jul 22, 2009 at 06:49:34AM +0200, Dan Kaminsky wrote:
> Operationally, HMAC-SHA-256 is the gold standard.  There's wonky stuff all
> over the place -- Bernstein's polyaes work appeals to me -- but I wouldn't
> really ship anything but HMAC-SHA-256 at present time.

Oh, I agree in general.  As far as new apps and standards work I'd make
HMAC-SHA-256 or AES, in an AEAD cipher mode, REQUIRED to implement and
the default.

But that's not what I'm looking for here.  I'm looking for the fastest
MACs, with extreme security considerations (e.g., "warning, warning!
must rekey every 10 minutes") being possibly OK, depending on just how
extreme -- the sort of algorithm that one would not make REQUIRED to
implement, but which nonetheless one might use in some environments
simply because it's fast.

For example, many people use arcfour in SSHv2 in preference to AES
because arcfour is faster.  The SSHv2 AES-based ciphers ought to be RTI
and the default choice, IMO, but that doesn't mean arcfour should not be
available.

In the crypto world one never designs weak-but-fast algorithms on
purpose, only strong-and-preferably-fast ones.  And when an algorithm is
successfully attacked it's usually deprecated, put in the ash heap of
history.  But there is a place for weak-but-fast algos, as long as
they're not too weak.  Any weak-but-fast algos we might have now tend to
be old algos that turned out to be weaker than designed to be, and new
ones tend to be slower because resistance against new attacks tends to
require more computation.  I realized this would make my question seem a
bit pointless, but hoped I might get a surprising answer :(

Nico
-- 



Fast MAC algorithms?

2009-07-21 Thread Nicolas Williams
I've an application that is performance sensitive, which can re-key very
often (say, every 15 minutes, or more often still), and where no MAC is
accepted after 2 key changes.  In one case the entity generating a MAC
is also the only entity validating the MAC (but the MAC does go on the
wire).  I'm interested in any MAC algorithms which are fast, and it
doesn't matter how strong they are, as long as they meet some reasonable
lower bound on work factor to forge a MAC or recover the key, say 2^64,
given current cryptanalysis, plus a comfort factor.

On the other hand, practical MAC forgery / key recovery attacks would
completely break the security of this application.  So stronger MACs
would have to be available as well, as a performance vs. security
trade-off.

Key length is not an issue.  Having to confound the MAC by adding a
nonce is an acceptable and desirable requirement.  MAC and nonce length
are also not an issue (128-bits is acceptable).  Implementation of any
MAC algorithms for this application must be in software; parallelization
is not really an option.  Algorithm agility is also not a problem, and
certainly desirable.

I see many MAC algorithms out there: HMAC, NMAC, OMAC, PMAC, CBC-MAC,
UMAC, ...  And, of course, AEAD ciphers that can be used for
authentication only, (AES-GCM, AES-CCM, Helix/Phelix, ...).  What I'm
interested in is a comprehensive table showing relative strength under
current cryptanalysis and relative performance.  I suspect there's no
such thing, sadly.  UMAC and HMAC-SHA-1 seem like obvious default
candidates.
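Absent such a table, relative performance can at least be roughed out
locally.  The sketch below (Python stdlib only; the numbers are entirely
machine- and implementation-dependent, and say nothing about strength
under cryptanalysis) times a few HMAC constructions on a packet-sized
message.

```python
import hashlib
import hmac
import time

def bench(hashname, key, msg, iters=2000):
    """Return wall-clock seconds to compute `iters` HMACs with the
    named hashlib digest."""
    h = getattr(hashlib, hashname)
    start = time.perf_counter()
    for _ in range(iters):
        hmac.new(key, msg, h).digest()
    return time.perf_counter() - start

key = b"\x55" * 32
msg = b"\x00" * 1500          # roughly one Ethernet-MTU-sized record

for name in ("md5", "sha1", "sha256"):
    print(name, round(bench(name, key, msg), 4))
```

A serious comparison would of course use optimized C implementations of
UMAC, GMAC, etc., not Python, but the harness shape is the same.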

I also see papers like "Differential-Linear Attacks against the Stream
Cipher Phelix", by Wu and Preneel.  Wu and Preneel declare Phelix to be
insecure because if you violate the requirement that nonces not be
reused then the key can be recovered rather easily.  Helix seems to be
stronger than Phelix in this regard, even though the opposite was
intended.  That makes Phelix and Helix seem likely to be in for further
weakening.  For uses such as mine (see above), such weaknesses are fine
-- nonces will not be reused, and keys will be changed very often.  So
I'm willing to consider algorithms, for this particular use, that I'd
not consider for general use cases, though these sorts of weaknesses
make me feel uneasy.

Which MAC algorithms would you recommend?  (Off-list replies will be
summarized to the list with attribution if you'll allow me to.)

Nico
-- 



Re: HSM outage causes root CA key loss

2009-07-14 Thread Nicolas Williams
On Tue, Jul 14, 2009 at 11:09:41PM +0200, Weger, B.M.M. de wrote:
> Suppose this happens in a production environment of some CA
> (root or not), how big a problem is this? I can see two issues:
> - they have to build a new CA and distribute its certificate
>   to all users, which is annoying and maybe costly but not a 
>   security problem,

Not a security problem?  Well, if you have a way to do authenticated
trust anchor distribution that doesn't depend on the lost CA, then sure,
it's not a security problem.  But that's just not likely, or at least
there's no standard for authenticated TA distribution, yet.  If you can
do unauthenticated TA distribution without much trouble (as opposed to
by, say, having to physically visit every host), then chances are you
have no security to begin with.

If there were such a standard you'd want to make really sure that your
TA distribution keys are separate from your CA keys, with similar
physical and other security safeguards.

This goes to show that we do need a TA distribution protocol (not for
the web, mind you), and it needs to use PKI -- a distinct, but related
PKI.  As long as both sets of hardware tokens don't die simultaneously,
then you'll be OK.  Add multiple CAs for TA distro and you get more
redundancy.

> - if they rely on the CA for signing CRLs (or whatever 
>   revocation mechanism they're using) then they have to find 
>   some other way to revoke existing certificates.

The only other ways are: distribute the new CA certs, and/or use OCSP
(which must use a different cert than the CA).  OCSP is the better
answer, if you can get all apps to use it.

Nico
-- 



Re: password safes for mac

2009-07-01 Thread Nicolas Williams
I should add that a hardware token/smartcard, would be even better, but
the same issue arises: keep it logged in, or prompt for the PIN every
time it's needed?  If you keep it logged in then an attacker who
compromises the system will get to use the token, which I bet in
practice is only moderately less bad than compromising the keys
outright.

Nico
-- 



Re: password safes for mac

2009-07-01 Thread Nicolas Williams
On Wed, Jul 01, 2009 at 12:32:40PM -0400, Perry E. Metzger wrote:
> I think he's pointing out a more general problem.

Indeed.  IIRC, the Mac keychain uses your login password as its passphrase
by default, which means that to keep your keychain unlocked requires
either keeping the password around (bad), keeping the keys in cleartext
around (worse?), or prompting for the password/passphrase every time
they are needed (unusable).

This applies to ssh-agent, the GNOME keychain, etcetera.  It also
applies to distributed authentication systems with password-based
options, like Kerberos.

ISTM that keeping the password around (preferably in mlocked memory,
and, to be sure, with swap encrypted with ephemeral keys) is probably
the better solution.  Of course, the keys themselves have to be handled
with care too.

Nico
-- 



Re: password safes for mac

2009-06-30 Thread Nicolas Williams
On Mon, Jun 29, 2009 at 11:29:48PM -0700, Jacob Appelbaum wrote:
> This would be great if LoginWindow.app didn't store your unencrypted
> login and password in memory for your entire session (including screen
> lock, suspend to ram and hibernate).
> 
> I keep hearing that Apple will close my bug about this and they keep
> delaying. I guess they use the credentials in memory for some things
> where they don't want to bother the user (!) but they still want to be
> able to elevate privileges.

Suppose a user's Kerberos credentials are about to expire.  What to do?

If Kerberos TGT renewable lifetime is set long enough then chances are
very good that the user will have to unlock their screen sometime within
a few hours of TGT expiration.  But what if TGT renewable lifetime is
set very short?  Or if the user doesn't lock then unlock their screen in
time?  You have to prompt the user.  But this could be an asynchronous
prompt coming from deep within the kernel (think secure NFS) -- not
impossible, but certainly tricky to implement.  And what if the user
were not using a graphical login (stop thinking Mac all the time :)?
You can't do async prompts on text-based consoles (though you can do
async warnings).

You can see where the temptation to cache the user's password comes
from.

The password can't be cached in encrypted form either (it could be
cached in string2key() form, but that's password-equivalent).  It could
be cached in scrambled form, or encrypted with a key that's stored in
cleartext or in a hardware token (think TPM), but ultimately it'd be
extractable by any sufficiently privileged process.  In any case, the
password must not end up in cleartext on unencrypted swap, and
preferably not on swap at all.
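A sketch of why a string2key()-style cache is password-equivalent (using
PBKDF2 purely as a stand-in for the real Kerberos derivation): the
derivation is deterministic, so whoever holds the derived key can
authenticate exactly as the password holder can.

```python
import hashlib

def string2key(password, salt):
    # Stand-in for Kerberos string2key(): a deterministic, salted KDF.
    # (Real Kerberos uses its own derivation; PBKDF2 here is illustrative.)
    return hashlib.pbkdf2_hmac("sha256", password, salt, 10_000)

salt = b"EXAMPLE.COMuser"   # Kerberos salts on realm+principal
k1 = string2key(b"hunter2", salt)
k2 = string2key(b"hunter2", salt)

# Deterministic derivation: a cached derived key lets an attacker
# authenticate as the user in this realm without ever learning the
# cleartext password -- hence "password-equivalent".
assert k1 == k2
```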

FWIW, Solaris doesn't cache the user's password.

Nico
-- 



Re: Property RIghts in Keys

2009-02-12 Thread Nicolas Williams
On Tue, Feb 03, 2009 at 04:54:48PM -0500, Steven M. Bellovin wrote:
> Under what legal theory might a certificate -- or a key! -- be
> considered "property"?  There wouldn't seem to be enough creativity in
> a certificate, let alone a key, to qualify for copyright protection.

Private and secret keys had better be property.  Public keys are...
well, *public*, and CA public keys really, really had better be public,
so I'm as perplexed as you.

Most likely this is just a case of lawyers gone wild.  Too bad a TV show
or DVD product based on that idea wouldn't be successful.

> I won't even comment on the rest of the CPS, not even such gems as
> "Subscribers warrant that ... their private key is protected and that
> no unauthorized person has ever had access to the Subscriber's private
> key."  And just how can I tell that?

Really, really wild lawyers.  (Or maybe not so wild, in the U.S.,
depending on what happens in the Lori Drew case.)

Nico
-- 



Re: full-disk subversion standards released

2009-01-31 Thread Nicolas Williams
On Fri, Jan 30, 2009 at 03:37:22PM -0800, Taral wrote:
> On Fri, Jan 30, 2009 at 1:41 PM, Jonathan Thornburg
>  wrote:
> > For open-source software encryption (be it swap-space, file-system,
> > and/or full-disk), the answer is "yes":  I can assess the developers'
> > reputations, I can read the source code, and/or I can take note of
> > what other people say who've read the source code.
> 
> Really? What about hardware backdoors? I'm thinking something like the
> old /bin/login backdoor that had compiler support, but in hardware.

Plus: that's a lot of code to read!  A single person can't hope to
understand the tens of millions of lines of code that make up the
software (and firmware, and hardware!) that they use every day on a
single system.  Note: that's not to say that open source doesn't have
advantages over proprietary source.



Re: Proof of Work -> atmospheric carbon

2009-01-29 Thread Nicolas Williams
On Wed, Jan 28, 2009 at 04:35:50PM -0500, Jerry Leichter wrote:
> [Proposals to use reversible computation, which in principle consume  
> no energy, elided.]
> 
> There's a contradiction here between the computer science and economic
> parts of the problem being discussed.  What gives a digital coin value
> is exactly that there is some real-world expense in creating it.

For some definition of "digital coin."

An alternative design, where all coins are double-spend-checked against
on-line infrastructure belonging to the issuer, doesn't have this
constraint, though it has different properties.  For example,
anonymity might then depend on trusting mixmaster-type networks to
exchange coins the issuer knows you have for coins that the issuer
doesn't know you have, but that might make anonymity entirely
impractical.  But then, how practical are POW coins anyways?

I suspect most people in the formal sectors of most economies would
gladly live with digital credit/bank cards most of the time and to heck
with digital coins.

> So, how do you tie the cost of a token to power?  Curiously, something  
> of the sort has already been proposed.  It's been pointed out - I'm  
> afraid I don't have the reference - that CPU's keep getting faster and  
> more parallel and a high rate, but memories, while they are getting  
> enormously bigger, aren't getting much faster.  So what the paper I  
> read proposed is hash functions that are expensive, not in CPU  
> seconds, but in memory reads and writes.  Memory writes are inherently  
> non-reversible so inherently cost power; a high-memory-write algorithm  
> is also one that uses power.

Clever!

Nico
-- 



Re: Obama's secure PDA

2009-01-27 Thread Nicolas Williams
On Mon, Jan 26, 2009 at 04:18:39PM -0500, Jerry Leichter wrote:
> An email system for the White  
> House has the additional complication of the Presidential Records  
> Act:  Phone conversations don't have to be recorded, but mail messages  
> do (and have to remain accessible).

[OT for this list, I know.]

It seems that the President's lawyers believe that IM is covered by the
Presidential Records Act and shouldn't be used in the White House:

http://www.newser.com/tag/31542/1/presidential-records-act.html
http://www.newser.com/story/48239/team-obama-told-to-ditch-instant-messaging.html

One possible workaround might be to allow WH staff to _receive_ IMs, and
follow tweets from outside the WH, but not respond to any of it except
by phone.  (Even phone calls, though not recorded, are dangerous to the
WH since there is a record of calls made and taken.)

Of course, if there's nothing to hide, then, why not just use IM and be
done?  The legal advice seems sound, but it's just advice.  Obama and
his staff could easily use and archive IMs and avoid embarrassment by,
well, keeping discussions above board.

Nico
-- 



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Nicolas Williams
On Mon, Jan 19, 2009 at 01:38:02PM +, Darren J Moffat wrote:
> I don't think it depends at all on who you trust but on what algorithms 
> are available in the protocols you need to use to run your business or 
> use the apps important to you for some other reason.   It also very much 
> depends on why the app uses the crypto algorithm in question, and in the 
> case of digest/hash algorithms whether they are key'd (HMAC) or not.

As Jeff Hutzelman suggested recently, inspired by the SSHv2 CBC mode
vulnerability, hash algorithm agility for PKI really means having more
than one signature, each using a different hash, in each certificate;
this enlarges certificates.  Alternatively, it needs to be possible to
select what certificate to present to a peer based on an algorithm
negotiation; this tends to mean adding round-trips to our protocols.

Nico
-- 



Re: Why the poor uptake of encrypted email?

2008-12-19 Thread Nicolas Williams
On Thu, Dec 18, 2008 at 01:06:37PM +1000, James A. Donald wrote:
> Peter Gutmann wrote:
> > ... to a statistically irrelevant bunch of geeks.
> > Watch Skype deploy a not- terribly-anonymous (to the
> > people running the Skype servers) communications
> > system.
> 
> Actually that is pretty anonymous.  Although I am sure
> that Skype would play ball with any bunch of goons that
> put forward a plausible justification, or threated to
> rip their fingernails off, most government agencies find
> it difficult to deal with anyone that they cannot
> casually have thrown in jail - dealing with equals is
> not part of their mindset.  So if your threat model does
> not include the FBI and the CIA, chances are that  the
> people who are threatening you will lack the
> organization and mindset to get Skype's cooperation.

That's also true for e-mail where the only encryption is in the
transport.  Except that you tend to store your e-mails and not your
phone calls, of course.  But you could always encrypt your filesystem
and not your e-mail itself, and that way avoid all the portability
issues that Alec brought up.



Re: CPRNGs are still an issue.

2008-12-18 Thread Nicolas Williams
On Wed, Dec 17, 2008 at 03:02:54PM -0500, Perry E. Metzger wrote:
> The longer I'm in this field, the more the phrase "use with extreme
> caution" seems to mean "don't use" to me. More and more, I think that
> if you don't have a really good way to test and get assurance about a
> component of your security architecture, you should leave that
> component out.

But do beware of becoming something of a luddite w.r.t. entropy sources.

If you can mix seeds into your entropy pool without destroying the
entropy of your pool (and we agree that you can) while adding some of
any entropy in your seeds (and we agree that you can), then why not?
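A minimal sketch of the mixing idea (assuming SHA-256 as the mixing
function; real pools are more elaborate): each seed is folded into the
pool through a one-way hash, so an attacker who knows a seed but not the
pool learns nothing, and a zero-entropy seed cannot reduce what is
already there.

```python
import hashlib

POOL_BYTES = 32

def mix(pool, seed):
    """Fold `seed` into the pool through a one-way hash.  A known or
    constant seed cannot reduce the pool's entropy, while any real
    entropy the seed carries is absorbed."""
    return hashlib.sha256(pool + seed).digest()

pool = b"\x00" * POOL_BYTES            # imagine accumulated entropy here
pool = mix(pool, b"manufacture-time seed")
pool = mix(pool, b"boot counter: 42")  # low/zero entropy -- still safe to add
```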

Yes, I saw your other message.  Testing entropy pools and sources is
hard if you want real entropy.  One way to test the pool and its mixing
function is to add and use a hook for supplying test vectors instead of
real entropy for each source.  But to test the operational system, if it
has real entropy sources, is harder.  So you might as well add in a
fixed, manufacture-time seed + time/counter-based salting, as you
suggested.  And you'll still want to test the result, but you can only
apply statistical analysis to the outputs to decide if they're
random-*looking*.

Having no entropy sources is not a good option for systems where the
threat model requires good entropy sources (e.g., if you want PFS to
prevent compromise of an end-point from compromising pre-compromise
communications).  IMO it's not wise to trivially reject an "all of the
above" approach to entropy gathering.

Nico
-- 



Re: Why the poor uptake of encrypted email?

2008-12-17 Thread Nicolas Williams
On Tue, Dec 16, 2008 at 03:06:04AM +, StealthMonger wrote:
> Alec Muffett  writes:
> > In the world of e-mail the problem is that the end-user inherits a
> > blob of data which was encrypted in order to defend the message as it
> > passes hop by hop over the store-and-forward SMTP-relay (or UUCP?) e-
> > mail network...  but the user is left to deal with the effects of
> > solving the *transport* security problem.
> 
> > The model is old.  It is busted.  It is (today) wrong.
> 
> But the capabilities of encrypted email go beyond mere confidentiality
> and authentication.  They include also strongly untraceable anonymity
> and pseudonymity.  This is accomplished by using chains of anonymizing
> remailers, each having a large random latency for mixing with other
> traffic.

The subject is "[w]hy the poor uptake of encrypted email?".

Alec's answer shows that "encrypted email" when at rest is not easy to
use.

Providing a suitable e-mail security solution for the masses strikes me
as more important than providing anonymity to the few people who want or
need it.  Not that you can't have both, unless you want everyone to use
PGP or S/MIME as a way to hide anonymized traffic from non-anonymized
traffic.

Nico
-- 



Re: Quantum direct communication: secrecy without key distribution

2008-12-05 Thread Nicolas Williams
[I'm guessing that nobody here wants yet another "quantum crypto is snake
oil, no it's not, yes it is, though it has a bright future, no it's not,
..." thread.]

On Fri, Dec 05, 2008 at 02:16:09PM +0100, Eugen Leitl wrote:
>In the last couple of years, we've seen a number of quantum key
>distribution systems being set up that boast close-to-perfect security
>([4]although they're not as secure as the marketing might imply).
> 
>These systems rely on two-part security. The first is the quantum part
>which reveals whether a message has been intercepted or not. Obviously
>this is no use when it comes to sending secret messages because it can
>only uncover eavesdroppers after the fact.

That's not the most serious, obvious flaw in quantum cryptography.

The most obvious flaw is that when we're talking fiber optics the
eavesdropper might as well be a man in the middle, and so...  well, see
the list archive.

Nico
-- 



Re: Bitcoin P2P e-cash paper

2008-11-18 Thread Nicolas Williams
On Fri, Nov 14, 2008 at 11:04:21PM -0800, Ray Dillinger wrote:
> On Sat, 2008-11-15 at 12:43 +0800, Satoshi Nakamoto wrote:
> > > If someone double spends, then the transaction record 
> > > can be unblinded revealing the identity of the cheater. 
> > 
> > Identities are not used, and there's no reliance on recourse.  It's all 
> > prevention.
> 
> Okay, that's surprising.  If you're not using buyer/seller 
> identities, then you are not checking that a spend is being made 
> by someone who actually is the owner of (on record as having 
> recieved) the coin being spent.  

How do identities help?  It's supposed to be anonymous cash, right?  And
say you identify a double spender after the fact, then what?  Perhaps
you're looking at a disposable ID.  Or perhaps you can't chase them
down.

Double spend detection needs to be real-time or near real-time.
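A toy sketch of the on-line check this implies (coin IDs here are opaque
strings; a real issuer would see unforgeable, blinded tokens): the issuer
records each coin as it is spent and rejects a second spend immediately,
rather than identifying the cheater afterwards.

```python
class Issuer:
    """Toy on-line double-spend check: reject the second spend of any
    coin at spend time, with no reliance on spender identities."""

    def __init__(self):
        self._spent = set()

    def try_spend(self, coin_id):
        if coin_id in self._spent:
            return False          # double spend: reject in real time
        self._spent.add(coin_id)
        return True

bank = Issuer()
assert bank.try_spend("coin-123") is True
assert bank.try_spend("coin-123") is False   # caught at spend time, not after
```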

Nico
-- 



Re: once more, with feeling.

2008-09-23 Thread Nicolas Williams
On Mon, Sep 22, 2008 at 08:59:25PM -1000, James A. Donald wrote:
> The major obstacle is that the government would want a strong binding 
> between sim cards and true names, which is no more practical than a 
> strong binding between physical keys and true names.

I've a hard time believing that this is the major obstacle.  We all use
credit cards all the time -- apparently that's as good a "strong binding
between [credit] cards and true names" as the government needs.  (If
not then throw in cameras at many intersections and along freeways, add
in license plate OCR, and you can tie things together easily enough.
Wasn't that a worry in another recent thread here?)

More likely there are other problems.

First, there's a business model problem.  Everyone wants in: the cell
phone manufacturer, the software developer, the network operators, and
the banks.  With everyone wanting a cut of every transaction done
through cell phones the result would likely be too expensive to compete
with credit cards, even after accounting for the cost of credit card
fraud.  Credit card fraud and online security, in any case, are pretty
low on the list of banking troubles these past few weeks, and not
without reason!

Second, there are going to be standards issues.

Third, the NFC technology has to be commoditized.

Fourth, there's the cost of doing an initial rollout of the POS NFC
terminals and building momentum for the product.  Once momentum is there
you're done.  And there's risk too -- if you fail you lose your
investment.

...

> Trouble is, what happens if the user's email account is stolen?

Trouble is: what happens if the user's cell phone is stolen?

Nico
-- 



Re: once more, with feeling.

2008-09-10 Thread Nicolas Williams
On Wed, Sep 10, 2008 at 01:29:32PM -0400, William Allen Simpson wrote:
> I agree.   I'm sure this is a world-wide problem, and head-in-the-sand
> cyber-libertarianism has long prevented better solutions.  The "market"
> doesn't work for this, as there is a competitive *disadvantage* to
> providing improved security, and it's hard to quantify safety.

Or maybe there's a civil liability law issue that causes the market to
fail in this instance.

Nico
-- 



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 12:35:43PM -0700, Paul Hoffman wrote:
> At 1:47 PM -0500 8/8/08, Nicolas Williams wrote:
> >On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:
> >> The kerberos style of having credentials expire very quickly is one
> >> (somewhat less imperfect) way to deal with such things, but it is far
> >> from perfect and it could not be done for the ad-hoc certificate
> >> system https: depends on -- the infrastructure for refreshing all the
> >> world's certs every eight hours doesn't exist, and if it did imagine
> >> the chaos if it failed for a major CA one fine morning.
> >
> >The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.
> >
> >I understand most current browsers support OCSP.
> 
> ...and only a tiny number of CAs do so.

Not that long ago nothing supported OCSP.  If all that's left (ha) is
the CAs then we're in good shape.  (OCSP services can be added without
modifying a CA -- just issue the OCSP Responders their certs and let
them use CRLs as their source of revocation information.)



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 11:20:15AM -0700, Eric Rescorla wrote:
> At Fri, 08 Aug 2008 10:43:53 -0700,
> Dan Kaminsky wrote:
> > Funnily enough I was just working on this -- and found that we'd end up 
> > adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
> > curious about the feasibility of a large bloom filter that fails back to 
> > online checking though.  This has side effects but perhaps they can be 
> > made statistically very unlikely, without blowing out the size of a browser.
> 
> Why do you say a couple of megabytes? 99% of the value would be
> 1024-bit RSA keys. There are ~32,000 such keys. If you devote an
> 80-bit hash to each one (which is easily large enough to give you a
> vanishingly small false positive probability; you could probably get
> away with 64 bits), that's 320KB.  Given that the smallest Firefox
> [...]

You could store {seed, hash} pairs and check matches for false positives
by generating a key with the corresponding seed and then checking for an
exact match -- slow, but rare.  This way you could choose your false
positive rate / table size comfort zone and vary the size of the hash
accordingly.
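A sketch of that scheme (with a made-up weak_key_from_seed() standing in
for regenerating a key from the Debian PRNG's tiny seed space): truncated
hashes keep the table small, and a hit is confirmed by regenerating the
key from the stored seed and comparing exactly, so false positives cost
one regeneration rather than a wrong answer.

```python
import hashlib

def weak_key_from_seed(seed):
    # Stand-in for regenerating a Debian-PRNG key from its seed (the
    # broken PRNG's output depended on little more than the PID).
    return hashlib.sha256(b"weak-prng:" + seed.to_bytes(2, "big")).digest()

TRUNC = 10  # 80-bit truncated hash, as in the size estimate above

# Blacklist: truncated key hash -> seed that produced it.
blacklist = {}
for seed in range(32768):
    key = weak_key_from_seed(seed)
    blacklist[hashlib.sha256(key).digest()[:TRUNC]] = seed

def is_weak(key):
    trunc = hashlib.sha256(key).digest()[:TRUNC]
    seed = blacklist.get(trunc)
    if seed is None:
        return False
    # Possible false positive: confirm by regenerating and comparing.
    return weak_key_from_seed(seed) == key

assert is_weak(weak_key_from_seed(12345))
assert not is_weak(b"\xaa" * 32)
```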

Nico
-- 



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:
> The kerberos style of having credentials expire very quickly is one
> (somewhat less imperfect) way to deal with such things, but it is far
> from perfect and it could not be done for the ad-hoc certificate
> system https: depends on -- the infrastructure for refreshing all the
> world's certs every eight hours doesn't exist, and if it did imagine
> the chaos if it failed for a major CA one fine morning.

The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.

I understand most current browsers support OCSP.

> One also worries about what will happen in the UI when a certificate
> has been revoked. If it just says "this cert has been revoked,
> continue anyway?" the wrong thing will almost always happen.

No doubt.



Re: The PKC-only application security model ...

2008-07-24 Thread Nicolas Williams
On Wed, Jul 23, 2008 at 05:32:02PM -0500, Thierry Moreau wrote:
> The document I published on my web site today is focused on fielding 
> certificateless public operations with the TLS protocol which does not 
> support client public keys without certificates - hence the meaningless 
> security certificate. Nothing fancy in this technique, just a small 
> contribution with the hope to facilitate the use of client-side PKC.

Advice on how to generate self-signed certs for this purpose would be
good for an FYI, or even a BCP.  I don't think we need extensions to any
protocols that support PKI to support bare PK (though some protocols
have both, e.g., IKE).

Nico
-- 



Re: how bad is IPETEE?

2008-07-11 Thread Nicolas Williams
On Fri, Jul 11, 2008 at 05:08:39PM +0100, Dave Korn wrote:
>   It does sound a lot like "SSL/TLS without certs", i.e. SSL/TLS weakened to
> make it vulnerable to MitM.  Then again, if no Joe Punter ever knows the
> difference between a real and spoofed cert, we're pretty much in the same
> situation anyway.

Note that this is not all that bad because many apps can do
authentication at the application layer, and if you add channel binding
then you can leave session crypto to IPsec while avoiding MITMs (they
get flushed by channel binding).

This is the premise of BTNS + connection latching.  W/o channel binding
it's better than nothing, though not much.  W/ channel binding it should
be much easier to deploy (beyond software updates) than plain IPsec with
similar security guarantees.

Nico
-- 



Re: how bad is IPETEE?

2008-07-10 Thread Nicolas Williams
On Thu, Jul 10, 2008 at 02:31:12PM -0400, James Cloos wrote:
> > "Eugen" == Eugen Leitl <[EMAIL PROTECTED]> writes:
> 
> Eugen> I'm not sure what the status of http://postel.org/anonsec/
> 
> The IETF just created a new list and subscribed all anonsec subscribers:
> 
> https://www.ietf.org/mailman/listinfo/btns

Indeed.  But it's as quiet as the old list :/

Seriously, the work of the BTNS WG is, IMO, crucial to the use of IPsec
as an end-to-end solution (as opposed to as a VPN solution, for which
IPsec is already pretty darned good).  If you care, then please
participate, or even better, implement.

That anyone is working on IPETEE indicates that end-to-end IPsec
solutions are desired.  The in-band nature of the IPETEE key exchange
indicates, to me, a dislike of IKE, or perhaps unawareness of BTNS WG
(man, the WG's name doesn't reflect very well what it does), or perhaps
a misunderstanding of IPsec.

Nico
-- 



Re: how bad is IPETEE?

2008-07-10 Thread Nicolas Williams
On Thu, Jul 10, 2008 at 06:10:27PM +0200, Eugen Leitl wrote:
> In case somebody missed it, 
> 
> http://www.tfr.org/wiki/index.php?title=Technical_Proposal_(IPETEE)

I did miss it.  Thanks for the link.  I don't think in-band key exchange
is desirable here, but, you never know what will triumph in the
marketplace.

> I'm not sure what the status of http://postel.org/anonsec/
> is, the mailing list traffic dried up a while back.

Connection latching, which is the BTNS WG equivalent of 'IPETEE', but
much simpler, is in the IESG's hands now.

Nico
-- 



Re: The wisdom of the ill informed

2008-06-30 Thread Nicolas Williams
On Mon, Jun 30, 2008 at 11:47:54AM -0700, Allen wrote:
> Nicolas Williams wrote:
> >On Mon, Jun 30, 2008 at 07:16:17AM -0700, Allen wrote:
> >>Given this, the real question is, /"Quis custodiet ipsos custodes?"/ 
> >
> >Putting aside the fact that cryptographers aren't custodians of
> >anything, it's all about social institutions.
> 
> Well, I wouldn't say they aren't custodians. Perhaps not in the 
> sense that the word is commonly used, but most certainly in the 
> sense custodians of the wisdom used to make the choices. This is 
> exemplified by Bruce Schneier, an "acknowledged expert,"  changing 
> his mind about the way to do security from "encrypt everything" to 
> "monitor everything." Yes, I have simplified his stance, but just to 
> make the point that even experts learn and change over time.

What does that have to do with anything?  Expert != knowledge cast in
stone.

> >There are well-attended conferences, papers published online and in many
> >journals, etcetera.  So it's not so difficult for people who don't know
> >anything about security and crypto to eventually figure out who does, in
> >the process also learning who else knows who the experts are.
> 
> Actually I think it is just about as difficult to tell who is a 
> trustworthy expert in the field of cryptography as it is in any 
> field of science or medicine. Just look at the junk science and 
> medical studies. One retrospective study of 90+ clinical trials 
> found that over 600 potentially important reactions to the drugs 
> occurred but only 39 were reported in the papers. I suspect if we 
> did the same sort of retrospective study for cryptography we would 
> find some similar issues, just, perhaps, not as large because there 
> is not as much money to be made with junk cryptography as junk 
> pharmaceuticals.

The above does not really refute what I wrote.  It takes effort to
figure out who's an expert.  But I believe that the situation w.r.t.
crypto is similar to that in science (cold fusion frauds were identified
rather quickly, were they not?) and better than in medicine (precisely
because there is not much commercial incentive to fraud here; there is
incentive for intelligence organizations to interfere, I suppose, but
here the risk of getting caught is high and the potential cost of
getting caught high as well).

> I'm curious, how does software get sold for so long that is clearly 
> weak or broken? Detected, yes, but still sold like Windows LANMAN 
> backward compatibility.

I thought we were talking about cryptographers, not marketing
departments, market dynamics, ...  If you want to include the latter in
"custodes" then there is a clear custody hierarchy: the community of
experts in the field is above individual implementors.  Thus we have
reports of snake oil on this list, on various blogs, etc...

So we're back to "quis custodiet ipsos custodes?"  Excluding marketing
here is the right thing to do (see above).  Which brings us back to my
answer.

> >When it comes to expertise in crypto, Quis custodiet ipsos custodes
> >seems like a relatively simple problem.  I'm sure it's much, much more
> >difficult a problem for, say, police departments, financial
> >organizations, intelligence organizations, etc...
> 
> Well, Nico, this is where I diverge from your view. It is the 
> "police departments, financial organizations, intelligence 
> organizations, etc..." who deploy the cryptography. Why should they 

In my experience market realities have much more to do with what gets
deployed than the current state of the art does; never mind who the
experts are.  "We'd love to deploy technology X, but in our
heterogeneous network only one quarter of the vendors support X, and
only if we upgrade  systems, which requires QA testing,
which..." -- surely you've run into that sort of situation, amongst
others.  Legacy, broken code dwarfs snake oil in terms of deployment;
legacy != snake oil -- we're allowed to learn, as you yourself point
out.

Nico
-- 



Re: The wisdom of the ill informed

2008-06-30 Thread Nicolas Williams
On Mon, Jun 30, 2008 at 07:16:17AM -0700, Allen wrote:
> Given this, the real question is, /"Quis custodiet ipsos custodes?"/ 

Putting aside the fact that cryptographers aren't custodians of
anything, it's all about social institutions.

There are well-attended conferences, papers published online and in many
journals, etcetera.  So it's not so difficult for people who don't know
anything about security and crypto to eventually figure out who does, in
the process also learning who else knows who the experts are.

For example, in the IETF there's an institutional structure that makes
finding out who to ask relatively simple.  Large corporations tend to
have some experts in house, even if they are only expert in finding the
real experts.

We (society) have new experts joining the field, with very low barriers
to entry (financial and political barriers to entry are minimal -- it's
all about brain power), and diversity amongst the existing experts.

There's no major personal gain to be had, besides fame, and too much
diversity and openness for anyone to have a prayer of manipulating the
field undetected for too long.

When it comes to expertise in crypto, Quis custodiet ipsos custodes
seems like a relatively simple problem.  I'm sure it's much, much more
difficult a problem for, say, police departments, financial
organizations, intelligence organizations, etc...

Nico
-- 



Re: User interface, security, and "simplicity"

2008-05-06 Thread Nicolas Williams
On Tue, May 06, 2008 at 03:40:46PM +, Steven M. Bellovin wrote:
> Experiment part two: implement remote login (or remote IMAP, or remote
> Web with per-user privileges, etc.) under similar conditions.  Recall
> that being able to do this was a goal of the IPsec working group.
> 
> I think that part one is doable, though possibly the existing APIs are
> incomplete.  I don't think that part two is doable, and certainly not
> with high assurance.  In particular, with TLS the session key can be
> negotiated between two user contexts; with IPsec/IKE, it's negotiated
> between a user and a system.  (Yes, I'm oversimplifying here.)

"Connection latching" and "connection-oriented" IPsec APIs can address
this problem.

Solaris, and at least one other IPsec implementation (OpenSwan?  I
forget) makes sure that all packets for any one TCP connection (or UDP
"connection") are protected (or bypassed) the same way during their
lifetime.  "The same way" -> by similar SAs, that is, SAs with the same
algorithms, same peers, and various other parameters.
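The latching logic itself is simple; here's a minimal sketch (the SA attribute set is a hypothetical simplification of what an implementation actually compares) of how an implementation can latch the protection parameters seen on a connection's first packet and reject later packets arriving under a non-equivalent SA:

```python
from collections import namedtuple

# Hypothetical SA summary: the attributes that must stay constant for
# the lifetime of a latched connection.
SA = namedtuple("SA", "peer auth_alg encr_alg")

class LatchedConnection:
    # Latch the SA parameters seen on the first packet; every later
    # packet must arrive under an equivalent SA or be dropped.
    def __init__(self):
        self.latch = None

    def accept(self, sa):
        if self.latch is None:
            self.latch = sa          # first packet: record the latch
            return True
        return sa == self.latch      # later packets: must match

conn = LatchedConnection()
assert conn.accept(SA("203.0.113.5", "hmac-sha256", "aes-cbc"))
assert conn.accept(SA("203.0.113.5", "hmac-sha256", "aes-cbc"))
assert not conn.accept(SA("198.51.100.9", "hmac-sha256", "aes-cbc"))  # peer changed
```

A rekeyed SA with the same peer and algorithms still matches, which is the "similar SAs" notion above.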

A WGLC is about to start in the IETF BTNS WG on an I-D that describes
this.

Nico
-- 



Re: how to read information from RFID equipped credit cards

2008-04-18 Thread Nicolas Williams
On Tue, Apr 01, 2008 at 12:47:45AM +1300, Peter Gutmann wrote:
> Ben Laurie <[EMAIL PROTECTED]> writes:
> 
> >And so we end up at the position that we have ended up at so many times
> >before: the GTCYM has to have a decent processor, a keyboard and a screen,
> >and must be portable and secure.
> >
> >One day we'll stop concluding this and actually do something about it.
> 
> Actually there are already companies doing something like this, but they've
> run into a problem that no-one has ever considered so far: The GTCYM needs a
> (relatively) high-bandwidth connection to a remote server, and there's no easy
> way to do this.

Cell phones have that.

The bigger problem is pairing with the local POS (or whatever), which is
where NFC comes in -- the "obvious" thing to do here is to make this
pairing not-really-wireless (e.g., the cell phone could scan a barcode
from the POS, or the POS could scan a barcode displayed by the cell
phone, or both, or any number of variants of this).

> (Hint: You can't use anything involving USB because many corporates lock down
> USB ports to prevent data leaking onto other corporates' networks, or
> conversely to prevent other corporates' data leaking onto their networks. Same
> for Ethernet, Firewire, ...).

Right, it's got to be wireless :)

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-06 Thread Nicolas Williams
On Tue, Feb 05, 2008 at 08:17:32AM +1000, James A. Donald wrote:
> Nicolas Williams wrote:
> > Sounds a bit like SCTP, with crypto thrown in.
> 
> SCTP is what we should have done http over, though of
> course SCTP did not exist back then.  Perhaps, like
> quite a few other standards, it still does not quite
> exist.

Proposing something new won't help make that available sooner than SCTP
if that something new, like SCTP, must be implemented in kernel-land.

> > I thought it was the latency caused by unnecessary
> > round-trips and expensive key exchange crypto that
> > motivated your proposal.  The cost of session crypto
> > is probably not as noticeable as that of the latency
> > of key exchange and authentication.
> 
> The big problem is that between the time one logs on to
> one's bank, and the time one logs off, one is apt to
> have done lots and lots of cryptographic key exchanges.
> One key exchange per customer session is a really small
> cost, but we have a storm of them.

This is what session resumption is all about, and now that we have a way
to do it without server-side state (RFC4507) there should be no more
complaints.

If the latency of multiple key exchanges is the issue then we should
push for deployment of RFC4507 before we go push for a brand new
transport protocol.
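The RFC 4507 trick is that the server seals its own session state into a ticket under a key only it holds, hands the ticket to the client, and keeps nothing; the client returns the ticket to resume.  A toy sketch (integrity-only, with a made-up ticket format; real tickets are also encrypted and include key lifetimes):

```python
import hashlib, hmac, json

TICKET_KEY = b"server-only-ticket-protection-key"  # hypothetical server key

def issue_ticket(session_state):
    # Seal the session state so the server can remain stateless.
    blob = json.dumps(session_state, sort_keys=True).encode()
    mac = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    return blob + b"." + mac.hex().encode()

def resume(ticket):
    blob, _, mac_hex = ticket.rpartition(b".")
    expected = hmac.new(TICKET_KEY, blob, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac_hex, expected):
        return None  # forged/corrupt ticket: fall back to a full handshake
    return json.loads(blob)

t = issue_ticket({"master_secret": "ab" * 24, "cipher": "AES-128-CBC-SHA"})
assert resume(t)["cipher"] == "AES-128-CBC-SHA"   # resumed, no server state
assert resume(t[:-2] + b"zz") is None             # tampering is detected
```

Resumption then costs one round-trip and no server-side lookup, which is exactly what a "storm" of short sessions needs.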

> Whenever the web page shows what is particular to the
> individual rather than universal, it uses a session
> cookie, visible to server side web page code.
> Encryption, the bundle of shared secrets that enable
> encrypted communications, should be visible at that
> level, should be a session cookie characteristic rather
> than a low level transport characteristic, should have
> the durability and scope of a session cookie, instead of
> the durability and scope of a transaction.

If I understand what you mean then the ticket in RFC4507 is just that.

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-06 Thread Nicolas Williams
On Sun, Feb 03, 2008 at 09:24:48PM +1000, James A. Donald wrote:
> Nicolas Williams wrote:
> >What, specifically, are you proposing?
> 
> I am still writing it up.
> 
> > Running the web over UDP?
> 
> In a sense.
> 
> That should have been done from the beginning, even before security 
> became a problem.  TCP is a poor fit to a transactional protocol, as the 
> gyrations with "Keep-alive" and its successors illustrate.

In the beginning most pages were simple enough that to speak of
"transactional protocol" is almost an exaggeration.  Web technologies
grew organically.  Solutions to the various resulting problems will, I
bet, also grow organically.

A complete revamping is probably not in the cards.  But if one should be
then it should not surprise you that I'm all in favor of piercing
abstraction layers.  User authentication should happen at the
application layer, and session crypto should happen at the transport
layer, with everything cryptographically bound up.  In any case we
should re-use what we know works (e.g., ESP/AH for transport session
crypto, IKEv2/TLS/DTLS for key exchange, ...).

> In rough summary outline, what I propose is to introduce a distinction 
> between connections and streams, that a single long lasting connection 
> contains many transient streams.  This is equivalent to TCP in the case 
> that a single connection always contains exactly two streams, one in 
> each direction, and the two streams are created when the connection is 
> created and shut down when the connection is shut down, but the main 
> objective is to support usages that are not equivalent to TCP. This is 
> pretty much the same thing as T/TCP, except that a "connection" can have 
> a large shared secret associated with it to encrypt the streams.  For an 
> unencrypted connection, it can be spoof flooded the same way as T/TCP 
> can be spoof flooded, 

Sounds a bit like SCTP, with crypto thrown in.

>   but the main design objective is to make 
> encryption efficient enough that one always encrypts everything.

I thought it was the latency caused by unnecessary round-trips and
expensive key exchange crypto that motivated your proposal.  The cost of
session crypto is probably not as noticeable as that of the latency of
key exchange and authentication.

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-03 Thread Nicolas Williams
On Thu, Jan 31, 2008 at 11:12:45PM -0500, Victor Duchovni wrote:
> On Fri, Feb 01, 2008 at 01:15:09PM +1300, Peter Gutmann wrote:
> > If anyone's interested, I did an analysis of this sort of thing in an
> > unpublished draft "Performance Characteristics of Application-level Security
> > Protocols", http://www.cs.auckland.ac.nz/~pgut001/pubs/app_sec.pdf.  It
> > compares (among other things) the cost in RTT of several variations of SSL 
> > and
> > SSH.  It's not the TCP RTTs that hurt, it's all the handshaking that takes
> > place during the crypto connect.  SSH is particularly bad in this regard.
> 
> Thanks, an excellent reference! Section 6.2 is most enlightening, we were
> already considering adopting HPN fixes in the internal OpenSSH deployment,
> this provides solid material to motivate the work...

To be fair, the "handbrake" in SFTP isn't -- the clients and servers
should be using async I/O and support interleaving the transfers of many
files concurrently, which should allow the peers to exchange data as
fast as it can be read from disk.

The same is true of NFS, and keep in mind that SFTP is more of a remote
filesystem protocol than a file transfer protocol.

But nobody writes archivers that work asynchronously (or which are
threaded, since, e.g., close(2) has no async equivalent, and is required
to be synchronous in the NFS case).  And nobody writes SFTP clients and
server that work asynchronously.  But, we could, and we should.
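The win from interleaving is easy to see in a toy simulation (simulated latencies, not a real SFTP client): with many requests outstanding at once, total time approaches one round-trip instead of one round-trip per request:

```python
import asyncio, time

RTT = 0.05  # simulated network round-trip per file request

async def request(i):
    await asyncio.sleep(RTT)   # stand-in for one request/response exchange
    return i

async def serial(n):
    # One request at a time: n round-trips total -- the "handbrake".
    return [await request(i) for i in range(n)]

async def pipelined(n):
    # Keep all n requests outstanding, as an async SFTP client could:
    # total time is ~1 RTT regardless of n.
    return await asyncio.gather(*(request(i) for i in range(n)))

t0 = time.perf_counter(); asyncio.run(serial(10)); t_serial = time.perf_counter() - t0
t0 = time.perf_counter(); asyncio.run(pipelined(10)); t_pipe = time.perf_counter() - t0
assert t_pipe < t_serial / 2   # pipelining wins by roughly a factor of n
```

The same pattern applies to NFS COMPOUND pipelining and to interleaving whole-file transfers.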

And the handbrake in the SSHv2 connection protocol has its rationale as
well (namely to allow interactive sessions to be responsive).  As
described in Peter's paper, it can be turned off, effectively.  It's
most useful when mixing interactive sessions and X11 display forwarding
(and port forwarding which don't involve bulk data transfers).  It's
most useless when doing bulk transfers.  So use separate connections for
bulk transfers.

Nico
-- 



Re: Dutch Transport Card Broken

2008-02-01 Thread Nicolas Williams
On Fri, Feb 01, 2008 at 07:58:16PM +, Steven M. Bellovin wrote:
> On Fri, 01 Feb 2008 13:29:52 +1300
> [EMAIL PROTECTED] (Peter Gutmann) wrote:
> > (Anyone have any clout with Firefox or MS?  Without significant
> > browser support it's hard to get any traction, but the browser
> > vendors are too busy chasing phantoms like EV certs).
> > 
> The big issue is prompting the user for a password in a way that no one
> will confuse with a web site doing so.  Given all the effort that's
> been put into making Javascript more and more powerful, and given
> things like picture-in-picture attacks, I'm not optimistic.   It might
> have been the right thing, once upon a time, but the horse may be too
> far out of the barn by now to make it worthwhile closing the barn door.

And on top of that web site designers don't want browser dialogs for
HTTP/TLS authentication.



Re: Dutch Transport Card Broken

2008-02-01 Thread Nicolas Williams
On Fri, Feb 01, 2008 at 06:24:25PM +1000, James A. Donald wrote:
> You are asking for a layered design that works better than the existing 
> layered design.  My claim is that you get an additional round trip for 
> each layer - which your examples have just demonstrated.
> 
> SSL has to be on top of a reliable transport layer, hence has to have an 
> extra round trip.  I was not proposing something better *for* SSL, I was 
> proposing something better *instead* *of* SSL.  If one takes SSL as a 
> given, then indeed, *three* round trips are needed before the client can 
> send any actual data - which is precisely my objection to SSL.

What, specifically, are you proposing?  Running the web over UDP?
That's the only alternative that I can see short of modifying TCP or
IPsec.  I doubt any of those three will take the web world by storm, but
HTTP over DTLS over UDP would have to be least unlikely, and even then,
I strongly doubt it.

I think we'll just have to deal with those round-trips.  As long as
there be plenty of other, cheaper or more practical ways to improve web
app performance, that's all we're likely to see pursued.

Nico
-- 



Re: Gutmann Soundwave Therapy

2008-02-01 Thread Nicolas Williams
On Fri, Feb 01, 2008 at 09:24:10AM -0500, Perry E. Metzger wrote:
> > Does tinc do something that IPsec cannot?
> 
> I use a VPN system other than IPSec on a regular basis. The reason is
> simple: it is easy to configure for my application and my OS native
> IPsec tools are very difficult to configure.
> 
> There is a lesson in this, I think.

I agree wholeheartedly.  I'm trying to fix this too.  But for web stuff,
IPsec won't have a chance for a long time, maybe never.



Re: Dutch Transport Card Broken

2008-02-01 Thread Nicolas Williams
On Wed, Jan 30, 2008 at 02:47:46PM -0500, Victor Duchovni wrote:
> If someone has a faster than 3-way handshake connection establishment
> protocol that SSL could leverage instead of TCP, please explain the
> design.

I don't have one that exists today and is practical.  But we can
certainly imagine possible ways to improve this situation: move parts of
TLS into TCP and/or IPsec.  There are proposals that come close enough
to this (see the last IETF SAAG meeting's proceedings, see the IETF BTNS
WG) that it's not too farfetched, but for web stuff I just don't think
they're remotely likely.

Prior to the advent of AJAX-like web design patterns the most noticeable
latency in web apps was in the server (for dynamic content) and the
client (re-rendering the whole page on every click).  Applying GUI
lessons to the web (asynchrony!  callbacks/closures!) fixed that.

TLS was not to blame.

TLS probably still isn't to blame for whatever latency users might be
annoyed by in web apps.

It's *much* easier to look for improvements in the app layer first given
that web app updates are much easier to deploy than TLS (which in turn
is much easier to deploy than changes to TCP or IPsec).

Nico
-- 



Re: two-person login?

2008-01-29 Thread Nicolas Williams
On Tue, Jan 29, 2008 at 06:34:29PM +, The Fungi wrote:
> On Mon, Jan 28, 2008 at 03:56:11PM -0700, John Denker wrote:
> > So now I throw it open for discussion.  Is there any significant
> > value in two-person login?  That is, can you identify any threat 
> > that is alleviated by two-person login, that is not more wisely 
> > alleviated in some other way?
> [...]
> 
> I don't think it's security theater at all, as long as established
> procedure backs up this implementation in a sane way. For example,
> in my professional life, we use this technique for commiting changes
...

I think you missed John's point, which is that two-person *login* says
*nothing* about what happens once logged in -- logging in enables
arbitrary subsequent transactions that may not require two people to
acquiesce.

What if one of the persons leaves the other alone to do whatever they
wish with the system?  Or are the two persons chained to each other?
(And even then, there's no guarantee that they are both conscious at the
same time, that no third person shows up and knocks them out *after*
they've logged in, ...)

> Technology can't effectively *force* procedure (ingenious people
> will always find a way around the better mousetrap), but it can help
> remind them how they are expected to interact with systems.

When you force two people to participate on a *per-transaction* basis
then you probably get both of them to pay attention, though such schemes
might not scale to thousands, or even hundreds of transactions per-team,
per-day.
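Per-transaction dual control is straightforward to sketch (hypothetical officer keys provisioned out of band; a real system would use signatures, not shared-key MACs): a transaction executes only with valid approvals from two *distinct* officers over that exact transaction:

```python
import hashlib, hmac

# Hypothetical per-officer keys, provisioned out of band.
OFFICER_KEYS = {"alice": b"k-alice", "bob": b"k-bob", "carol": b"k-carol"}

def sign(officer, txn):
    return officer, hmac.new(OFFICER_KEYS[officer], txn, hashlib.sha256).digest()

def authorize(txn, approvals):
    # Count only valid MACs over this transaction, from distinct officers.
    valid = {who for who, tag in approvals
             if hmac.compare_digest(
                 tag, hmac.new(OFFICER_KEYS[who], txn, hashlib.sha256).digest())}
    return len(valid) >= 2

txn = b"transfer:acct=42:amount=1000"
assert authorize(txn, [sign("alice", txn), sign("bob", txn)])
assert not authorize(txn, [sign("alice", txn), sign("alice", txn)])  # same person twice
assert not authorize(txn, [sign("alice", txn)])                      # one approval
```

Unlike two-person login, nothing here authorizes any transaction other than the one both officers actually approved.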

Nico
-- 



Re: refactoring crypto handshakes (SSL in 3 easy steps)

2007-11-13 Thread Nicolas Williams
On Thu, Nov 08, 2007 at 01:49:30PM -0600, [EMAIL PROTECTED] wrote:
> PREVIOUS WORK:
> 
> Three messages is the proven minimum for mutual authentication.  Last
> two messages all depend on the previous message, so minimum handshake
> time is 1.5 RTTs.

Kerberos V manages in one round-trip.  And it could do one round-trip
without a replay cache if it used ephemeral-ephemeral DH to exchange
sub-session keys.  (OTOH, high performance, secure replay caches are
difficult to implement, ultimately being limited by the number of
persistent-storage write ops that the system can manage.)
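A minimal in-memory sketch of such a replay cache (real Kerberos rcaches must persist each entry before accepting, which is exactly the write bottleneck; the clock-skew window here is the conventional five minutes):

```python
import time

CLOCK_SKEW = 300  # seconds; typical Kerberos allowance

class ReplayCache:
    # Accept an authenticator only if it is fresh and unseen within the
    # clock-skew window.  (In-memory only; a real cache persists writes.)
    def __init__(self):
        self.seen = {}

    def check(self, auth_id, timestamp, now=None):
        now = time.time() if now is None else now
        if abs(now - timestamp) > CLOCK_SKEW:
            return False                       # too stale or too far ahead
        # Purge entries that have aged out of the window.
        self.seen = {k: t for k, t in self.seen.items()
                     if now - t <= CLOCK_SKEW}
        if auth_id in self.seen:
            return False                       # replay detected
        self.seen[auth_id] = timestamp
        return True

rc = ReplayCache()
now = 1_000_000.0
assert rc.check(("client@REALM", 7), now, now=now)             # first use: accepted
assert not rc.check(("client@REALM", 7), now, now=now)         # replay: rejected
assert not rc.check(("client@REALM", 8), now - 600, now=now)   # stale: rejected
```

Ephemeral-ephemeral DH avoids all of this because a replayed exchange yields a different session key.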

I think you might want to say that "three messages is the minimum for
mutual authentication with neither a replay cache nor a trusted third
party negotiating a key for use during the authentication exchanges."
Or something along those lines.

Of course, you might claim that the TGS exchanges should be added to the
number of messages needed for AP exchanges, but if you re-authenticate
often then you amortize the cost of the TGS exchanges over many AP
exchanges.

I think first and foremost we need authentication protocols to be
secure, while at the same time being algorithm agile.  I think you can
generally manage that in 1.5 round-trips optimistically, more when
optimistic negotiation fails.  And you can do better if you have
something like a KDC that can do negotiation out of band.

Nico
-- 



Re: improving ssh

2007-07-19 Thread Nicolas Williams
Doesn't this belong on the old SSHv2 WG's mailing list?

On Sat, Jul 14, 2007 at 11:43:53AM -0700, Ed Gerck wrote:
> SSH (OpenSSH) is routinely used in secure access for remote server
> maintenance. However, as I see it, SSH has a number of security issues
> that have not been addressed (as far I know), which create unnecessary
> vulnerabilities.

The SSHv2 protocol or OpenSSH (an implementation of SSHv1 and SSHv2)?

> Some issues could be minimized by turning off password authentication,
> which is not practical in many cases. Other issues can be addressed by
> additional means, for example:
> 
> 1. firewall port-knocking to block scanning and attacks

Do you think that implementations of the protocol should implement this?
(From what you say below I think your answer is "yes."  Which brings up
the questions "why?" and "how?")

> 2. firewall logging and IP disabling for repeated attacks (prevent DoS,
> block dictionary attacks)

SSH servers could integrate features like this without needing firewall
APIs.

> 3. pre- and post-filtering to prevent SSH from advertising itself and
> server OS

Unfortunately SSH implementations tend to depend on accurate client and
server software version strings in order to work around bugs.

Anyways, security by obscurity doesn't help.

> 4. block empty authentication requests

What are those?

Are they requests with an empty username?  The only SSHv2 userauth
methods that support that are the GSS ones, and that's a good feature
(it allows the server to derive the username from the client's principal
name).

> 5. block sending host key fingerprint for invalid or no username

Currently the only way to do this is to configure SSH servers to support
only SSHv2 and only the gss-* key exchange algorithms (see RFC4462,
section 2).  There exist implementations that support this.

To get rid of the "host authenticates itself first" attitude for all
non-GSS-based SSHv2 userauth methods will require radical changes to the
protocol and deployment transitions.

> 6. drop SSH reply (send no response) for invalid or no username

The server has to answer with something.  Silence is still an answer.
So is closing the TCP connection.

> I believe it would be better to solve them in SSH itself, as one would
> not have to change the environment in order to further secure SSH.
> Changing firewall rules, for example, is not always portable and may
> have unintended consequences.

Coding to firewall APIs is even less portable (heck, not all OSes have
firewall APIs).

Nico
-- 



Re: Quantum Cryptography

2007-06-27 Thread Nicolas Williams
On Tue, Jun 26, 2007 at 02:03:29PM -0700, Jon Callas wrote:
> On Jun 26, 2007, at 10:10 AM, Nicolas Williams wrote:
> >This too is a *fundamental* difference between QKD and classical
> >cryptography.
> 
> What does this "classical" word mean? Is it the Quantum way to say  
> "real"? I know we're in violent agreement, but why are we letting  
> them play language games?

I don't mind using "classical" here.  I don't think Newtonian physics
(classical) is "bad" -- it works great at every day human scales.

> >IMO, QKD's ability to discover passive eavesdroppers is not even
> >interesting (except from an intellectual p.o.v.) given: its
> >inability to detect MITMs, its inability to operate end-to-end across
> >across middle boxes, while classical crypto provides protection
> >against  eavesdroppers *and* MITMs both *and* supports end-to-end
> >operation across middle boxes.
> 
> Moreover, the quantum way of discovering passive eavesdroppers is  
> really just a really delicious sugar coating on the classical term  
> "denial of service." I'm not being DoSed, I'm detecting a passive  
> eavesdropper!

Heh!  Indeed: with classical (or non-quantum, or standard, or...) crypto
eavesdroppers are passive attackers and passive attackers cannot mount
DoS attacks (oh, I suppose that wiretapping can cause some slightly
noticeable interference in some cases, but usually that's no DoS), but
in QKD passive attackers become active attackers.

But it gets worse!  To eavesdrop on a QKD link requires much the same
effort (splice the fiber) as to be an MITM on a QKD link, so why would
any attacker choose to eavesdrop and be detected instead of being an
MITM, go undetected and get the cleartext they're after?  Right, they
wouldn't.  Attackers aren't stupid, and an attacker that can splice your
fibers can probably afford the QKD HW they need to mount an MITM attack.

So, really, you need authentication.  And, really, you need end-to-end,
not hop-by-hop authentication and data confidentiality + integrity
protection.

This reminds me of Feynman's presentation of Quantum Electrodynamics,
which finished with "QED."  Has it now been sufficiently established
that QKD is not useful that whenever it rears its head we can point
folks at archives of these threads and not spill anymore ink?

Nico
-- 



Re: ad hoc IPsec or similiar

2007-06-26 Thread Nicolas Williams
On Tue, Jun 26, 2007 at 01:20:41PM -0700, Paul Hoffman wrote:
> >For all the other aspects of BTNS (IPsec connection latching [and
> >channel binding], IPsec APIs, leap-of-faith IPsec) agreeing on a
> >globally shared secret does not come close to being sufficient.
> 
> Fully agree. BTNS will definitely give you more than just one-off 
> encrypted tunnels, once the work is finished. But then, it should 
> probably be called MMTBTNS (Much More Than...).

I strongly dislike the WG's name.  Suffice it to say that it was not my
idea :); it created a lot of controversy at the time, though perhaps
that controversy helped sell the idea ("why would you want this silly,
insecure stuff?" "because it enables this other actually secure stuff").

Nico
-- 



Re: ad hoc IPsec or similiar

2007-06-26 Thread Nicolas Williams
On Fri, Jun 22, 2007 at 10:43:16AM -0700, Paul Hoffman wrote:
> Note that that RFC is Informational only. There were a bunch of 
> perceived issues with it, although I think they were more purity 
> disagreements than anything.
> 
> FWIW, if you do *not* care about man-in-the-middle attacks (called 
> active attacks in RFC 4322), the solution is much, much simpler than 
> what is given in RFC 4322: everyone on the Internet agrees on a 
> single pre-shared secret and uses it. You lose any authentication 
> from IPsec, but if all you want is an encrypted tunnel that you will 
> authenticate all or parts of later, you don't need RFC 4322.
> 
> This was discussed many times, and always rejected as "not good 
> enough" by the purists. Then the IETF created the BTNS Working Group 
> which is spending huge amounts of time getting close to purity again.

That's pretty funny, actually, although I don't quite agree with the
substance (surprise!)  :)

Seriously, for those who merely want unauthenticated IPsec, MITMs and
all, then yes, agreeing on a globally shared secret would suffice.

For all the other aspects of BTNS (IPsec connection latching [and
channel binding], IPsec APIs, leap-of-faith IPsec) agreeing on a
globally shared secret does not come close to being sufficient.

Nico
-- 



Re: Quantum Cryptography

2007-06-26 Thread Nicolas Williams
On Mon, Jun 25, 2007 at 08:23:14PM -0400, Greg Troxel wrote:
> Victor Duchovni <[EMAIL PROTECTED]> writes:
> > Secure in what sense? Did I miss reading about the part of QKD that
> > addresses MITM (just as plausible IMHO with fixed circuits as passive
> > eavesdropping)?
> 
> It would be good to read the QKD literature before claiming that QKD is
> always unauthenticated.

No one claimed that it isn't -- the claim is that there is no quantum
authentication, so QKD has to be paired with classical crypto in order
to defeat MITMs, which renders it worthless: if you must rely on
classical crypto anyway, you might as well use only classical crypto,
since QKD adds no security that the classical crypto you still depend
on doesn't already provide.

The real killer for QKD is that it doesn't work end-to-end across middle
boxes like routers.  And as if that weren't enough, there's the
exorbitant cost of QKD kit.

> The generally accepted approach among the physics crowd is to use
> authentication with secret keys and a universal family of hash
> functions.

Everyone who's commented has agreed that authentication is to be done
classically as there is no quantum authentication yet.

But I can imagine how quantum authentication might be done: generate an
entangled pair at one end of the connection, physically carry half of it
to the other end, and then run a QKD exchange that depends on the two
ends having halves of the same entangled particle or photon pair.  I'm
no quantum physicist, so I can't tell how workable that would be
physics-wise, but such a scheme would be analogous to pre-sharing
symmetric keys in classical crypto.  Of course, you'd have to repeat
this physical pre-sharing step every time you restart the connection
after running out of pre-shared entangled pair halves; ouch.

> > Once QKD is augmented with authentication to address MITM, the "Q"
> > seems entirely irrelevant.
> 
> It's not if you care about perfect forward secrecy and believe that DH
> might be broken, and can't cope with or don't trust a Kerberos-like
> scheme.  You can authenticate QKD with a symmetric mechanism, and get
> PFS against an attacker who records all the traffic and breaks DH later.

The end-to-end across middle boxes issue kills this argument about
protection against speculative brokenness of public key cryptography.

All but the smallest networks depend on middle boxes.

Quantum cryptography will be useful when:

 - it can be deployed in an end-to-end fashion across middle boxes

 OR

 - we adopt hop-by-hop methods of building end-to-end authentication

And, of course, quantum kit has got to be affordable, but let's assume
that economies of scale will be achieved once quantum crypto becomes
useful.

Critical breaks of public key crypto will NOT be sufficient to drive
adoption of quantum crypto: we can still build networks out of symmetric
key crypto (and hash/MAC functions) only if need be (with pre-shared
keying, Kerberos, and generally Needham-Schroeder).

> There are two very hard questions for QKD systems:
> 
>  1) Do you believe the physics?  (Most people who know physics seem to.)
> 
>  2) Does the equipment in your lab correspond to the idealized models
> with which the proofs for (1) were done.  (Not even close.)

But the only real practical issue, for Internet-scale deployment, is the
end-to-end issue.  Even for intranet-scale deployments, actually.

> I am most curious as to the legal issue that came up regarding QKD.

Which legal issue?

Nico
-- 



Re: Quantum Cryptography

2007-06-26 Thread Nicolas Williams
On Fri, Jun 22, 2007 at 08:21:25PM -0400, Leichter, Jerry wrote:
> BTW, on the quantum subway tokens business:  In more modern terms,
> what this was providing was unlinkable, untraceable e-coins which
> could be spent exactly once, with *no* central database to check
> against and none of this "well, we can't stop you from spending it
> more than once, but if we ever notice, we'll learn all kinds of
> nasty things about you".  (The coins were unlinkable and untraceable
> because, in fact, they were *identical*.)  Now, of course, they
> were also physical objects, not just collections of bits.  The same
> is true of the photons used in quantum key exchange.  Otherwise,
> it wouldn't work.  We're inherently dealing with a different model
> here.  Where it ends up is anyone's guess at this point.

This relates back to the inutility of QKD as follows: when physical
exchanges are required you cannot run such exchanges end-to-end over an
Internet -- the middle boxes (routers, etc...) get in the way of the
physical exchange.

This too is a *fundamental* difference between QKD and classical
cryptography.

That difference makes QKD useless in *today's* Internet.

IF we had a quantum authentication facility then we could build
hop-by-hop authentication to build an Internet out of QKD and QA
(quantum authentication).  That's a *big* condition, and the change in
security models would be tremendous, and for the worse, since the trust
chains get enormously larger.

IMO, QKD's ability to discover passive eavesdroppers is not even
interesting (except from an intellectual p.o.v.), given its inability
to detect MITMs and its inability to operate end-to-end across middle
boxes; classical crypto, by contrast, provides protection against
eavesdroppers *and* MITMs both, *and* supports end-to-end operation
across middle boxes.

Nico
-- 



Re: Why self describing data formats:

2007-06-23 Thread Nicolas Williams
On Mon, Jun 11, 2007 at 11:28:37AM -0400, Richard Salz wrote:
> >Many protocols use some form of self describing data format, for example
> > ASN.1, XML, S expressions, and bencoding.
> 
> I'm not sure what you're getting at.  All XML and S expressions really get 
> you is that you know how to skip past something you don't understand. This 
> is also true for many (XER, DER, BER) but not all (PER) encodings for 
> ASN.1.

If only it were so easy.  As we discovered in the IETF KRB WG you can't
expect that just because the protocol uses a TLV encoding (DER) you can
just add items to sequences (structures) or choices (discriminated
unions) willy nilly: code generated by a compiler might choke because
formally the protocol didn't allow extensibility and the compiler did
the Right Thing.  Extensibility of this sort requires that one be
explicit about it in the original spec.
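The skip-past-unknown property Richard mentions above can be sketched like this (a toy single-octet tag and length format, purely illustrative; real BER/DER lengths can be multi-octet): the explicit length lets a hand-written decoder step over an element it doesn't recognize, though, as noted, a compiler-generated decoder for a non-extensible module may reject it instead.

```python
def skip_unknown(buf: bytes, known_tags: set) -> list:
    """Walk a toy TLV stream (1-byte tag, 1-byte length), keeping
    elements with recognized tags and skipping the rest."""
    out, i = [], 0
    while i < len(buf):
        tag, length = buf[i], buf[i + 1]
        value = buf[i + 2:i + 2 + length]
        if len(value) != length:
            raise ValueError("truncated element")
        if tag in known_tags:
            out.append((tag, value))
        i += 2 + length       # the length field makes skipping possible
    return out

# An old decoder that only knows tag 0x01 skips the newer 0x7F element:
stream = bytes([0x01, 2, 0xAA, 0xBB, 0x7F, 1, 0xCC])
assert skip_unknown(stream, {0x01}) == [(0x01, bytes([0xAA, 0xBB]))]
```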

> Are you saying why publish a schema?

I doubt it: you can have schemas without self-describing encodings
(again, PER and XDR are examples of non-self-describing encodings, for
ASN.1 and XDR respectively).  Schemas can be good while self-describing
encodings can be bad...

Nico
-- 



Re: Why self describing data formats:

2007-06-21 Thread Nicolas Williams
On Mon, Jun 11, 2007 at 09:28:02AM -0400, Bowness, Piers wrote:
> But what it does help is allowing a protocol to be expanded and enhanced
> while maintaining backward compatibility for both client and server.

Nonsense.  ASN.1's PER encoding does not prevent extensibility.



Re: Why self describing data formats:

2007-06-21 Thread Nicolas Williams
> >But the main motivation (imho) is that it's trendy. And once anyone
> >proposes a heavyweight "standard" encoding, anyone who opposes it is
> >labeled a Luddite.

Maybe.  But there's quite a lot to be said for standards which lead to
widespread availability of tools implementing them, both, open source
and otherwise.

One of the arguments we've heard for why ASN.1 sucks is the lack of
tools, particularly open source ones, for ASN.1 and its encodings.

Nowadays there is one GPL ASN.1 compiler and libraries: SNACC.  (I'm not
sure whether its output is unencumbered, like bison's, but that matters
to the large number of developers who don't want to be forced to
license under the GPL, and there aren't any full-featured ASN.1
compilers and libraries licensed under BSD or BSD-like licenses.)

The situation is markedly different with XML.  Even if you don't like
XML, or its redundancy (as an encoding, but then, see FastInfoSet, a
PER-based encoding of XML), it has that going for it: tool availability.

Nico
-- 



Re: Why self describing data formats:

2007-06-21 Thread Nicolas Williams
On Fri, Jun 01, 2007 at 08:59:55PM +1000, James A. Donald wrote:
> Many protocols use some form of self describing data format, for example 
> ASN.1, XML, S expressions, and bencoding.

ASN.1 is not an encoding, and not all its encodings are self-describing.

Specifically, PER is a compact encoding such that a PER encoding of some
data cannot be decoded without access to the ASN.1 module(s) that
describes the data types in question.

Yes, it's a nit.

Then there's XDR -- which can be thought of as a subset of ASN.1 and a
four-octet aligned version of PER (XDR being both, a syntax and an
encoding).
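The distinction can be sketched with a toy record of two 32-bit fields (illustrative only, not real PER or XDR): a positional encoding emits the fields in schema order with no tags, so the bytes cannot be interpreted without the schema, while a TLV-style encoding labels each field at the cost of extra octets.

```python
import struct

# Schema: a record of (version: uint32, count: uint32).

def encode_positional(version: int, count: int) -> bytes:
    # XDR-like: fixed order, fixed width, no tags -- a decoder must
    # already know the schema to interpret these 8 bytes.
    return struct.pack(">II", version, count)

def encode_tlv(version: int, count: int) -> bytes:
    # TLV-like: each field carries a 1-byte tag and 1-byte length, so
    # a decoder can identify fields without the schema.
    return (bytes([0x01, 4]) + struct.pack(">I", version) +
            bytes([0x02, 4]) + struct.pack(">I", count))

assert len(encode_positional(3, 7)) == 8    # compact, schema-dependent
assert len(encode_tlv(3, 7)) == 12          # same data plus framing
```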

> Why?

Supposedly it is (or was thought to be) easier to write encoders/
decoders for TLV encodings (BER, DER, CER) and S-expressions, but I
don't believe it (though I certainly believe that it was thought to be
easier): rpcgen is a simple enough program, for example.

TLV encodings tend to be quite redundant, in a way that seems dangerous:
a lazy programmer can write code (and many have) that fails to validate
parts of an encoding and mostly get away with it (until the then
inevitable subsequent buffer overflow, of course).

Of course, code generators and libraries for self-describing and non-
self-describing encodings alike are not necessarily bug free (have any
been?) but at least they have the virtue that they are automatic tools
that consume a formal language, thus limiting the number of lazy
programmers involved and the number of different ways in which they can
screw up (and they leave their consumers off the hook, to a point).
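A sketch of the kind of laziness meant here (the same toy single-octet TLV format as above, purely illustrative): a decoder that trusts the declared length without checking it against the remaining buffer silently accepts a malformed encoding that a strict decoder would reject.

```python
def read_value_lazy(buf: bytes) -> bytes:
    # Lazy: trusts the declared length; silently returns a short
    # slice when the buffer is truncated, hiding the malformation.
    length = buf[1]
    return buf[2:2 + length]

def read_value_strict(buf: bytes) -> bytes:
    # Strict: rejects an encoding whose declared length does not
    # match what is actually present.
    length = buf[1]
    value = buf[2:2 + length]
    if len(value) != length:
        raise ValueError("declared length exceeds buffer")
    return value

truncated = bytes([0x01, 10, 0xAA])   # claims 10 value bytes, carries 1
assert read_value_lazy(truncated) == b"\xaa"   # bug goes unnoticed
try:
    read_value_strict(truncated)
except ValueError:
    pass  # the strict decoder catches the malformed encoding
```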

> Presumably both ends of the conversation have negotiated what protocol 
> version they are using (and if they have not, you have big problems) and 
> when they receive data, they need to get the data they expect.  If they 
> are looking for list of integer pairs, and they get a integer string 
> pairs, then having them correctly identified as strings is not going to 
> help much.

I agree.  The redundancy of TLV encodings, XML, etcetera, is
unnecessary.  Note though that I'm only talking about serialization
formats for data in protocols; XML, I understand, was intended for
_documents_, and it does seem quite appropriate for that, and so it can
be expected that there should be a place for it in Internet protocols in
transferring pieces of documents.

Nico
-- 



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-15 Thread Nicolas Williams
On Tue, May 15, 2007 at 11:37:56AM +0200, Ian G wrote:
> Nicolas Williams wrote:
> >The requirement for having providers signed by a vendor's key certified
> >by Sun was to make sure that only providers from suppliers not from,
> >say, North Korea etc., can be loaded by the pluggable frameworks.
> 
> OK, but can we agree that this is a motive outside normal 
> engineering practices?  And it is definately nothing to do 
> with security as understood at the language and application 
> levels?

If we ignore politics, and if we ignore TPMs, yes.  Those are big
caveats.

> >As
> >far as I know the process for getting a certificate for this is no more
> >burdensome to any third parties, whether open source communities or
> >otherwise, than is needed to meet the legal requirements in force
> >then and since.
> 
> From what the guys in Cryptix have told me, this is true. 
> Getting the certificate is simply a bureaucratic hurdle, at 
> the current time.  This part is good.  But, in the big picture:

Good.

> J1.0:  no crypto
> J1.1:  crypto with no barriers
> J1.2:  JCA with no encryption, but replaceable
> J1.4:  JCA with low encryption, stuck, but providers are easy
> J1.5:  JCA, low encryption, signed providers, easy to get a 
> key for your provider
> J1.6:  ??
> 
> (The java version numbers are descriptive, not accurate.)

I'm not sure I understand the significance of the above.  I'm sure that
there are better lists to ask about the prospects for evolution here.

> The really lucky part here is that (due to circumstances 
> outside control) the entire language or implementation has 
> gone open source.

That's not due to luck.

> No more games are possible ==>  outside requirements are 
> neutered.  This may save crypto security in Java.

Save it from what exactly?

> >Of course, IANAL and I don't represent Sun, and you are free not to
> >believe me and try getting a certificate as described in Chapter 8 of
> >the Solaris Security Developers Guide for Solaris 10, which you can find
> >at:
> 
> 
> Sure.  There are two issues here, one backwards-looking and 
> one forwards-looking.
> 
> 1.  What is the way this should be done?  the Java story is 

By whom?  The code is GPLed -- you're free to hack on it.  OpenSolaris
is CDDLed and you're free to hack on that too.

Sun may or may not be subject to more relaxed export rules as a result
of open sourcing these things.  I don't know, IANAL.  The point is that
Sun may not be able to do in the products it ships what the community
can do with the source code.

> 2.  What is needed now?  Florian says the provider is 
> missing and the "root list" is empty.  What to do?  Is it 
> time to reinvigorate the open source Java crypto scene?

Ah, but you're free to: the code is GPLed and you can figure out what to
do to make the crypto framework not require provider signing.

Also, the provider surely can't be missing due to export rules -- the
C/assembler equivalents in Solaris are open source.

Nico
-- 



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-15 Thread Nicolas Williams
On Mon, May 14, 2007 at 11:06:47AM -0600, [EMAIL PROTECTED] wrote:
>  Ian G wrote:
> > * Being dependent on PKI style certificates for signing, 
> ...
> 
> The most important motivation at the time was to avoid the risk of Java being
> export-controlled as crypto.  The theory within Sun was that "crypto with a
> hole" would be free from export controls but also be useful for programmers.

"crypto with a hole" (i.e., a framework where anyone can plug anyone
else's crypto) is what was seen as bad.

The requirement for having providers signed by a vendor's key certified
by Sun was to make sure that only providers from suppliers not from,
say, North Korea etc., can be loaded by the pluggable frameworks.  As
far as I know the process for getting a certificate for this is no more
burdensome to any third parties, whether open source communities or
otherwise, than is needed to meet the legal requirements in force then
and since.

Of course, IANAL and I don't represent Sun, and you are free not to
believe me and try getting a certificate as described in Chapter 8 of
the Solaris Security Developers Guide for Solaris 10, which you can find
at:

http://docs.sun.com

Comments should probably be sent to [EMAIL PROTECTED]

Cheers,

Nico
-- 



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-12 Thread Nicolas Williams
> Subject: Re: no surprise - Sun fails to open source the crypto part of Java

Were you not surprised because you knew that said source is encumbered,
or because you think Sun has some nefarious motive to not open source
that code?

If the latter then keep in mind that you can find plenty of crypto code
in OpenSolaris, which, unless you think the CDDL does not qualify as
open source, is open source.  I've no first hand knowledge, but I
suspect that the news story you quoted from is correct: the code is
encumbered and Sun couldn't get the copyright holders to permit release
under the GPL in time for the release of Java source under the GPL.

Nico
-- 


