Re: Secure erasing Info (fwd from richard@SCL.UTAH.EDU)

2005-05-02 Thread Jason Holt

On Mon, 2 May 2005, sunder wrote:

 Yeah, but these days, I'd go with the largest flash drive I could 
 afford.  USB2 or otherwise.  I don't believe you can recover data from 
 these once you actually overwrite the bits (anyone out there know any 
 different?).

There are lots of pitfalls in secure erasure, even without considering
physical media attacks.  Your filesystem may not overwrite data on the same
blocks used to write the data originally, for instance.  Plaintext may be left
in the journal and elsewhere.  Even filling up the disk may not do it, as some
filesystems keep blocks in reserve.  I did a demo a few years ago where I
wrote plaintext, overwrote, then dumped the filesystem blocks out and found
parts of the plaintext.
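For concreteness, a best-effort in-place overwrite looks something like the sketch below (hypothetical filename).  The point of the demo above is exactly that this can silently fail: even when the *file* reads back as zeros, a journaling or copy-on-write filesystem may have written fresh blocks and left the originals on the medium.

```python
import os

def overwrite_in_place(path, passes=1):
    """Best-effort in-place overwrite.  Caveats from the text apply:
    journaling, copy-on-write and (on flash) wear-leveling may write
    *new* blocks and leave the original data intact on the medium."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push through the OS cache

# Demo: the file-level view shows zeros afterward, which says nothing
# about whether the original blocks on disk were actually overwritten.
with open("secret.txt", "wb") as f:
    f.write(b"attack at dawn")
overwrite_in_place("secret.txt")
with open("secret.txt", "rb") as f:
    print(f.read() == b"\x00" * 14)  # True
```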

For anybody who hasn't read it, the Gutmann paper is "Secure Deletion of Data
from Magnetic and Solid-State Memory", and it is highly recommended.  He shows
that even RAM isn't safe against physical media attacks.

-J




RE: Dell to Add Security Chip to PCs

2005-02-04 Thread Jason Holt

On Thu, 3 Feb 2005, Erwann ABALEA wrote:
 And do you seriously think that "you can't do that, it's technically not
 possible" is a good answer? That's what you're saying. For me, a better
 answer is "you don't have the right to deny my ownership".

Yes, Senator McCarthy, I do in fact feel safer knowing that mathematics
protects my data.  Welcome to cypherpunks.

-J



Hiawatha's research

2004-06-16 Thread Jason Holt
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Hiawatha's Research
Jason Holt [EMAIL PROTECTED]
June, 2004, released into the public domain.
Dedicated to Eric Rescorla, with apologies to Longfellow.
(E. Rescorla may be substituted for Hiawatha throughout.)

Hiawatha, academic,
he could start ten research papers,
start them with such mighty study,
that the last had left his printer,
ere the first deadline extended.

Then, to serve the greater purpose,
he would post these master papers,
post them with such speed and swiftness,
to gain feedback from his cohorts,
for their mighty learned comments.

from his printer, Hiawatha
took his publication paper,
sent it to the preprint archive,
sent it out to all the newsgroups

Then he waited, watching, listening,
for the erudite discussion,
for the kudos and the errors,
that the others soon would send him.

But in this my Hiawatha
was most cruelly mistaken,
for not one did read his papers,
not one got past the simple abstract.

Still did they all grab their keyboards,
writing with great flaming fury
of the folly of his venture,
of his paper's great misgiving.
Of his obvious omissions,
of his great misunderstandings,
of his utter lack of vision,
of his blatant plagiarism.

(This last point he found most galling,
found it really quite dumbfounding,
since for prior art, he'd listed
ninety-three related papers.)

Now the mighty Hiawatha,
in his office still is sitting,
contemplating on his research,
thinking on his chosen topic.
Wondering, in idle moments,
if he had not chosen wrongly,
the position he had taken
as a research paper author

And he thinks, my Hiawatha,
if he might not have been better
served by a more lowly station,
as a cashier at McDonalds,
as a washer at the car wash,
as a cleaner of the bathrooms.
Thus departs my Hiawatha.

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFA0K1inwLgjivV2NERAmuuAKCTCxuOBxSTvFpN++ttTjNcOwCFQACg02WV
/gYbY6V9JyRJl56DtIGx0vw=
=BUXX
-END PGP SIGNATURE-



Re: who goes 1st problem

2004-05-11 Thread Jason Holt

[Adam and I are taking this discussion off-list to spare your inboxes, but
this message seemed particularly relevant.  Perhaps we'll come back later if
we come up with anything we think will be of general interest.]

-J

On Tue, 11 May 2004, Adam Back wrote:

 Anyway the "who goes 1st" problem definition has my interest piqued: I
 am thinking this would be a very practically useful network protocol
 for privacy, if one could find a patent-free end-2-end secure (no
 server trust), efficient solution.  Another desirable feature I think
 is to not use too much funky crypto, people are justifiably nervous
 about putting experimental crypto into standards, even if it has
 security proofs until some peer review has happened.

Agreed.  Ninghui Li's RSA OSBEs might be the answer; they're not quite as
elegant as the IBE version, but they work with blinded RSA signatures, and so
should be patent-free by next year, assuming Ninghui doesn't seek any patents.  
Section 4 of his PODC paper describes the RSA implementation.  He also has a
new paper which does neat things with commitments that I haven't wrapped my
mind around yet.

Actually, we might also consider contacting Dan Boneh at some point; he seems
to be interested in the proliferation of IBE, and might be sympathetic to the
needs of the IETF to have free standards, especially considering the exposure
it'd get for his system.

However, we need to define just what we need to accomplish.  Since my lab
works in trust negotiation, we think in terms of policies a lot, whereas SSL
just assumes you know what certs you want to send to whom.  But let's assume
the SSL model for simplicity.

The second issue, now that I think of it in this context, would be how you
actually get your certs to the other guy.  Hidden credentials, as Ninghui
pointed out, assume you have some means for creating the other guy's cert,
e.g., a template (nym):Senior_Agent:(current year) producing
Bob:Senior_Agent:2004.

The OSBE paper, OTOH, assumes we're going to exchange our certificates, just
without the CA signatures.  Then I can send you messages you can only read if
you really do have a signature on that cert.  But I've always thought that was
problematic, since why would honest people bother to connect then use fake
certs?  The attacker doesn't need to see the signature - he believes you.  So
honest users would need to regularly give out fake certs so they can hide
their legit behavior among the fake connects.  Will Winsborough also suggests
this with the notion of ACK policies - you *always* give people something they
ask for, so they can't tell what you have and what you don't.

So maybe what we really want is some sort of fair exchange or something, where
I can show you my valid certs as you show me the valid certs of your own.  

If one side is guessable, we've discussed this sort of thing with hidden
creds:

E("Hi Bob, since you're a senior agent, you can see my agent credential:
'Alice:Denver field office agent (apprentice):2004'",
Bob:Senior_Agent:2004)

E("Hi Bob, since you're a BYU alumnus, you can see my BYU credential:
'Alice:Senior:computer science:3.96 gpa:2004'",
Bob:Senior_Agent:2004)

etc.

So that's an open problem.  But let's assume guessable-certs, since that's the
only way I know how to really keep certs and policies safe for now. The
OSBE-RSA math still works.  So we're good so far, except that the RSA approach
is interactive.  Section 4 says that in the RSA scheme, Alice sends her cert
/and blinded signature/ to Bob (which may or may not be bogus), and then Bob
can send back an encrypted message.  (In HC and IBE-OSBEs, Bob doesn't need
the blinded signature to use as a public key).

But maybe Robert's improved secret sharing scheme from the new HC paper can 
give us some ideas:

1. Alice sends blinded signatures for each of her relevant certs, not
revealing which signature goes with each cert, and not revealing the cert
contents.

2. Bob generates the contents of each of Alice's certs relevant to his policy,
and simply generates each possible combination of hash-of-cert-contents and
blinded-signature.  One from each row will be a match-up between contents and
signature, and Alice will have to figure out which.  Unfortunately, this
requires n^2 multiplies and exponentiations.
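Schematically, the cost in step 2 comes from Bob having to try every pairing of guessed contents and opaque blinded signature (each pairing standing in for one encryption/exponentiation in the real scheme):

```python
from itertools import product

# Alice's k blinded signatures are opaque to Bob, so he must try every
# (guessed-contents-hash, blinded-signature) pairing; each pairing
# costs an exponentiation in the real scheme, hence the n^2 bill.
k = 3
blinded_sigs = [f"bsig{i}" for i in range(k)]
guessed_hashes = [f"H(cert{i})" for i in range(k)]

pairings = list(product(guessed_hashes, blinded_sigs))
print(len(pairings))  # 9, i.e. k^2
```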

-J




Re: blinding BF IBE CA assisted credential system (Re: chaum's patent expiry?)

2004-05-10 Thread Jason Holt

On Mon, 10 May 2004, Adam Back wrote:

 On Mon, May 10, 2004 at 03:03:56AM +, Jason Holt wrote:
  [...] Actually, now that you mention Chaum, I'll have to look into
  blind signatures with the BF IBE (issuing is just a scalar*point
  multiply on a curve).  
 
 I think you mean so that the CA/IBE server even though he learns
 pseudonym's private key, does not learn the linkage between true name
 and pseudonym.  (At any time during a show protocol, whether the
 private key issuing protocol is blinded or not, the IBE server can
 compute the pseudonym's private key).

Well, he can always generate private keys for any pseudonym, just as in cash
systems where the bank can always issue bank notes.  Here's what I'm
suggesting, where b is a blinding function and n1... are random nyms:

Issuing:

Alice                                FBI TTP

  b(n1,agent)  ------------------->
  b(n2,agent)  ------------------->
  b(n3,agent)  ------------------->

  <------------------- cut & choose: n1,n3

  (n1,agent)   ------------------->
  (n3,agent)   ------------------->

  <------------------- sig(b(n2,agent))

(Alice unblinds and now has a credential for nym n2)
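In toy form, the blind/sign/unblind arithmetic of the issuing round is just Chaum RSA blinding (absurdly small parameters for illustration only; the cut & choose rounds on n1 and n3 are omitted):

```python
import hashlib, secrets

# Toy RSA key for the FBI TTP (far too small for real use).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(cred):  # hash the nym:attribute string into Z_n
    return int.from_bytes(hashlib.sha256(cred).digest(), "big") % n

# Alice blinds (n2, agent) before sending it: b(n2,agent)
cred = b"n2:agent"
r = secrets.randbelow(n - 2) + 2
blinded = (H(cred) * pow(r, e, n)) % n

# FBI signs the blinded value without learning the nym.  (The cut &
# choose on n1, n3 that convinces it the request is well-formed is
# omitted in this sketch.)
blinded_sig = pow(blinded, d, n)

# Alice unblinds; what remains is an ordinary signature on her cred.
sig = (blinded_sig * pow(r, -1, n)) % n
print(pow(sig, e, n) == H(cred))  # True
```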

So it's vanilla Chaum-style blinded credentials.  The FBI signs Alice's agent
cred without learning the nym.  Alice can use the nym, and the server can ask
the FBI the attributes (agent? chief? secretary?) of the person with the nym,
but the FBI won't know.  The FBI could eavesdrop on Alice's connection and
generate whatever creds are necessary to read the resource Bob sends her, but
that's why I was talking about building it in a protocol with PFS.

But now that I think of it, PFS isn't really necessary at all for Alice and Bob
to have a conversation where their policies are respected:

Alice                                     Bob

(Alice generates random nonce na)
  HC_E(na, Bob:agent, FBI)  ------------->

                                          (Bob generates random nb)
  <-------------  HC_E(nb, Alice:member, NRA)

Both generate session keys from Hash(na,nb).

So, Alice wants to connect iff Bob is FBI, and Bob wants to talk iff Alice is
in the NRA, where Alice and Bob are random pseudonyms.  Thus they send
their random nonces na and nb encrypted against those creds (HC_E is a hidden
cred encrypt), then use those nonces for the session keys.
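A sketch of the nonce exchange, with a hash-derived symmetric key standing in for HC_E (a deliberate simplification with hypothetical labels; the real hidden-credential scheme uses IBE, not a public hash anyone could compute):

```python
import hashlib, secrets

def kdf(cred):
    # Stand-in for hidden-credential encryption: a key that only a
    # holder of the named credential could derive.  (Hypothetical
    # labels; real hidden credentials use IBE.)
    return hashlib.sha256(cred).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

na, nb = secrets.token_bytes(32), secrets.token_bytes(32)

msg1 = xor(na, kdf(b"FBI/Bob:agent"))     # Alice -> Bob
msg2 = xor(nb, kdf(b"NRA/Alice:member"))  # Bob -> Alice

# Each side recovers the other's nonce with its own credential,
# then both derive the same session key.
na_rx = xor(msg1, kdf(b"FBI/Bob:agent"))
nb_rx = xor(msg2, kdf(b"NRA/Alice:member"))
key = hashlib.sha256(na_rx + nb_rx).digest()
print(key == hashlib.sha256(na + nb).digest())  # True

# The FBI can mint agent creds and so recover na, but it holds no NRA
# membership cred, so it cannot recover nb or compute the key.
```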

The FBI can *always* impersonate an agent, because, well, they're the CA and
they can make up pseudonymous agents all day long. But in this protocol, I
believe they wouldn't be able to be a MITM and/or just eavesdrop on Alice and Bob.
That's because Bob only wants to talk to NRA members, and the FBI can't
impersonate that.

Now, this is for an interactive session, rather than just sending a single
request/response round like I discuss in the paper.  But even then, policies
are always respected.  Just change na to "request" and nb to "response".
Alice's policy is respected whether she talks to FBI-authorized-Bob or
FBI-authorized-FBI, and the FBI doesn't get to read Bob's NRA-Alice-only
response.

-J



Re: Brands' private credentials

2004-05-10 Thread Jason Holt
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mon, 10 May 2004, Adam Back wrote:
 After that I was presuming you use a signature to convince the server
 that you are authorised.  Your comment however was that this would
 necessarily leak to the server whether you were a doctor or an AIDs
 patient.
 
 However from what I understood from your paper so does your scheme,
 from section 5.1:
 
 P = (P1 or P2) is encoded: HC_E(R,P) = {HC_E(R,P1),HC_E(R,P2)} 
 
 With Hidden Credentials, the messages are in the other direction: the
 server would send something encrypted for your pseudonym with P1 =
 AIDs patient, and P2 = Doctor attributes.  However the server could
 mark the encrypted values by encoding different challenge response
 values in each of them, right?

Yep, that'd be a problem in that case.  In the most recent (unpublished)  
paper, I addressed that by using R as the key for a ciphertext+MAC on the
actual message.  So the server would have to find two R's that both satisfy
the MAC but produce different ciphertexts in order to learn anything from the
response.
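The ciphertext+MAC idea keyed entirely by R can be sketched with stdlib primitives (an illustrative stand-in, not the paper's exact construction): with a single R driving both the keystream and the MAC key, a server trying to mark responses would need two distinct R's that both pass the MAC yet yield different plaintexts.

```python
import hashlib, hmac

def keystream(key, length):
    # Simple counter-mode keystream from SHA-256 (illustration only).
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def seal(R, msg):          # encrypt-then-MAC, keyed entirely by R
    ct = bytes(m ^ k for m, k in zip(msg, keystream(R + b"enc", len(msg))))
    tag = hmac.new(R + b"mac", ct, hashlib.sha256).digest()
    return ct, tag

def unseal(R, ct, tag):
    good = hmac.new(R + b"mac", ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        return None        # wrong R: nothing recovered, nothing leaked
    return bytes(c ^ k for c, k in zip(ct, keystream(R + b"enc", len(ct))))

R = hashlib.sha256(b"recovered secret R").digest()
ct, tag = seal(R, b"the actual message")
print(unseal(R, ct, tag))  # b'the actual message'
```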

In either case, though, you can't just trust that the server encrypted against
patient OR doctor unless you have both creds and can verify that they each
recover the secret.  They might be lying about the doctor part, and really
sending against patient OR nonexistent, in which case your response reveals
that you're a patient.  That's why we recommend that your response (if any)
include the policy for the creds you used in decryption.  So if Alice is
responding to a message she decrypted with her patient cred, which she only
(implicitly) discloses to Medicare, and the response itself is only for AIDS
clinics, she should encrypt against Medicare AND AIDS_clinic.
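The Medicare AND AIDS_clinic conjunction can be realized by XOR secret-splitting the message key across the two credentials.  A sketch with hash-derived stand-ins for the per-credential keys (hypothetical labels; the real scheme wraps the shares with IBE):

```python
import hashlib, secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in credential keys: whatever secret each credential lets its
# holder derive (hypothetical labels, not the real IBE keys).
k_medicare = hashlib.sha256(b"cred:Medicare").digest()
k_clinic = hashlib.sha256(b"cred:AIDS_clinic").digest()

R = secrets.token_bytes(32)   # key protecting Alice's response

# "Medicare AND AIDS_clinic" as XOR secret sharing: R is split into
# two shares, each wrapped under one credential, so both are required.
s1 = secrets.token_bytes(32)
s2 = xor(R, s1)
ct1 = xor(s1, k_medicare)
ct2 = xor(s2, k_clinic)

recovered = xor(xor(ct1, k_medicare), xor(ct2, k_clinic))
print(recovered == R)  # True only with both credential keys
```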

(And you're right, the AIDS example is not very compelling.  The slides give a
better one about FBI agents, but I'm still looking for other examples of
super-sensitive transactions where HCs would fit)


 Another approach to hiding membership is one of the techniques
 proposed for non-transferable signatures, where you use construct:
 
 RSA-sig_A(x),RSA-sig_B(y) and verification is x xor y = hash(message).
 
 Where the sender is proving he is one of A and B without revealing
 which one.  (One of the values is an existential forgery, where you

That's very slick.  I'll check it out.
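Adam's 1-of-2 construction can be sketched with toy RSA parameters: the signer forges B's half by picking a random sB and accepting whatever value it verifies to, then signs the remaining XOR share with A's real key.  (Tiny moduli for illustration; the hash is truncated and the fit-retry loop is my own kludge to keep the shares inside the toy moduli, not part of the construction as quoted.)

```python
import hashlib, secrets

# Two toy RSA keys.  The signer holds A's private key; for B it knows
# only the public key.  (Absurdly small parameters, illustration only.)
pA, qA = 1000003, 1000033
nA, eA = pA * qA, 65537
dA = pow(eA, -1, (pA - 1) * (qA - 1))

nB, eB = 1000037 * 1000039, 65537   # dB unknown to the signer

# Hash truncated so the XOR shares fit under the toy moduli.
h = int.from_bytes(hashlib.sha256(b"the message").digest(), "big") % (1 << 35)

# Existential forgery for B: pick a random sB and accept whatever it
# happens to verify to; retry until A's share fits in Z_nA.
while True:
    sB = secrets.randbelow(nB - 2) + 2
    y = pow(sB, eB, nB)
    x = h ^ y
    if x < nA:
        break

sA = pow(x, dA, nA)   # the one real signature, under A's private key

# Verifier sees two valid-looking halves whose XOR is hash(message)
# and cannot tell which of A, B actually signed.
print((pow(sA, eA, nA) ^ pow(sB, eB, nB)) == h)  # True
```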

 OK so the fact that the server is the AIDs db server is itself secret.
 Probably better example is dissident's server or something where there
 is some incentive to keep the identity of the server secret.  So you
 want bi-directional anonymity.  It's true that the usual protocols can
 not provide both at once; SSL provides neither, the anonymous IP v2
 protocol I designed at ZKS had client anonymity (don't reveal
 pseudonym until authenticate server, and yet want to authenticate
 channel with pseudonym).  This type of bi-directional anonymity pretty
 much is going to need something like the attribute based encryption
 model you're using.

Hugo Krawczyk gave a great talk at Crypto about the "going first" problem in
IPSec, which is where I got the phrase.  He has a nice compromise in letting
the user pick who goes first, but for some situations I think hidden
credentials really would hit the spot.


 I think it would be fair to call it anonymity system, just that the
 trust model includes a trusted server.  There are lots of things
 possible with a trusted server, even with symmetric crypto (KDCs).

Yeah, although I think most of them would require an on-line trusted server.  
But that just makes all sorts of things way too easy to be interesting. :)

-J
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFAn9/HnwLgjivV2NERAkBUAJwLhH7lZBtd/boI6Edn3JWA+eStDQCdEFZi
GI4rzGoiscp0Ze/+iKweu08=
=eX/X
-END PGP SIGNATURE-



Re: more hiddencredentials comments (Re: Brands' private credentials)

2004-05-10 Thread Jason Holt
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mon, 10 May 2004, Adam Back wrote:
 OK that sounds like it should work.  Another approach that occurs is
 you could just take the plaintext, and encrypt it for the other
 attributes (which you don't have)?  It's usually not too challenging
 to make stuff deterministic and retain security.  Eg. any nonces,
 randomizing values can be taken from a PRNG seeded with a seed also sent
 in the msg.  Particularly that is much less constraining on the crypto
 system than what Bert-Jaap Koops had to do to get binding crypto to
 work with elgamal variant.
 
  In either case, though, you can't just trust that the server
  encrypted against patient OR doctor unless you have both creds and
  can verify that they each recover the secret.
 
 The above approach should fix that also right?

I don't quite get what you're suggesting.  Could you give a more concrete
example?  


  Hugo Krawczyk gave a great talk at Crypto about the going-first problem in
  IPSec, which is where I got the phrase.  He has a nice compromise in letting
  the user pick who goes first, but for some situations I think hidden
  credentials really would hit the spot.
 
 Unless it's significantly less efficient, I'd say use it all the time.

Well, I wouldn't complain. :)  (Although pairings are quite slow, on the order
of hundreds of milliseconds.)  Hilarie Orman presented it at an IETF meeting
to what was reportedly a lukewarm response, and they also raised the patent
issue.  Dan Boneh is sensitive to the issue of patented crypto, and was quite
considerate when I asked about it, but www.voltage.com still has the same
vague statement in their FAQ about how they're not going to be evil with the
patent, so it's still up in the air whether IBE will be useful in IETF
standards.

-J
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFAoAQfnwLgjivV2NERAtVcAKC8vQ6wxHeZ5Z3L4zcWPvZL7WKRqACgvB6y
8GxvXfFyewCuAA0FSAjdKoY=
=ukVn
-END PGP SIGNATURE-



Re: Brands' private credentials

2004-05-09 Thread Jason Holt
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Sun, 9 May 2004, Adam Back wrote:

 and seeing that it is a completely different proposal essentially
 being an application of IBE, and extension of the idea that one has
 multiple identities encoding attributes.  (The usual attribute this
 approach is used for is time-period of receipt .. eg month of receipt
 so the sender knows which key to encrypt with).

Right, good summary.


 One claim is that the system should hide sensitive attributes from
 disclosure during a showing protocol.  So the example given an AIDs
 patient could authenticate to an AIDS db server without revealing to
 an outside observer whether he is an AIDs patient or an authorised
 doctor.
 
 However can't one achieve the same thing with encryption: eg an SSL
 connection and conventional authentication?  

How would you use SSL to prove fulfillment without revealing how?  You could
get the CA to issue you a patient or doctor SSL cert, likewise for every
possible combination of things somebody might ask you for, but that's not very
practical.  Presumably this is why the other systems also allow proof of
expressions without revealing all the attributes you used to do so.


 Outside of this, the usual approach to this is to authenticate the
 server first, then authenticate the client so the client's privacy is
 preserved.

If you can trust the server to do so.  Firstly, hidden credentials limit what
the server learns, so you don't *have* to trust the server as much.  But
secondly, they also solve the problem which shifts to the server when it goes
first: now the server has to reveal attributes to a complete stranger.  For
sensitive systems, it's easy to get circular dependencies where neither side
wants to go first.  Hidden credentials let you enforce the policy in the
ciphertext: "if you can read this, let's talk; if not, I didn't want to talk
to you anyway (and you won't learn why)".  (Incidentally, two other similar
systems came out at about the same time as mine, both geared less toward
extreme policy/credential paranoia and more toward resolving such circular
dependencies: OSBE (Li, Du, Boneh) and Secret Handshakes (Balfanz et al)).


 Further more there seems to be no blinding at issue time.  So to
 obtain a credential you would have to identify yourself to the CA /
 IBE identity server, show paper credentials, typically involving True
 Name credentials, and come away with a private key.  So it is proposed
 in the paper the credential would be issued with a pseudonym.  However
 the CA can maintain a mapping between True Name and pseudonym.
 
 However whenever you show the credential the event is traceable back
 to you by collision with the CA.

Right, that is a big consideration with my system; CAs can be nosy.  Of
course, any CA will want you to show paper credentials or some other
real-world proof that they should give you a credential.  But you're right
that the Chaum/Brands/LC family do have a big advantage in limiting the risks
of big-brother CAs once they've issued it to you.


 I would not say your Hidden Credential system _is_ an anonymous
 credential system.  There is no blinding in the system period.  All is
 gated via a "trust me" CA that in this case happens to be an IBE
 server, so providing the communication pattern advantages of an IBE
 system.

If your definition requires anonymity wrt the CA, then you're right.  My
system lets folks:

* authenticate based on attributes rather than identity
* access resources without the server even knowing whether they fulfill the
policy
* hide policies from people who don't fulfill them

So it's definitely in the realm of other privacy systems.  We could define a
new term just to exclude my system from the others, but at this point I don't
think naming confusion is any worse for my system; they all have lots of
different nonorthogonal features.  I have to write a survey paper for my Ph.D.
requirements, and I've been thinking I should write a big feature table as
part of it.


 In particular I don't see any way to implement an anonymous epayment
 system using Hidden Credentials.  As I understand it is simply not

I've never really considered it as a payment system.  It's geared more toward
systems which use extremely sensitive resources, and their corresponding
sensitive policies and credentials.

-J


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFAnuwCnwLgjivV2NERAs/lAKC2B9R0EQJY+fgh46QpjkdmsdjbMwCgziHw
VRCNzAhIdnIImHMyu7Lpvwk=
=wpJ0
-END PGP SIGNATURE-




Re: Trivial OTP generation method? (makernd.c)

2003-02-27 Thread Jason Holt

Several things: 

* Using the output to seed MD5 for the next block exposes that part of the
state of the RNG.  Might be better to use half the MD5 output as seed for the
next block, and the other half as output data.

* Your RNG takes input from an attackable source.  I can significantly reduce
the entropy of your system by placing a transmitter near your machine (even if
I didn't know what frequency you were tuned to, I could try to just overload
the receiver's front end, or burn it out entirely).  If my transmitter and
your receiver are very clean, the entropy could go quite low.  With a better
entropy check, that might just turn into a DoS attack, but even then it might
be attackable - it would depend on how well I could manipulate the /dev/dsp
output via my transmitter.  The present check only requires that some pair of
bytes differ by 16 - something that might be relatively easy to cause with a
transmitter.  Of course, reading 128 bytes buys you a lot of entropy even just
from marginal noise, so you may still be okay.
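The split-the-digest suggestion from the first point might look like this (MD5 retained only to match the discussion; a toy sketch, not a vetted RNG):

```python
import hashlib

def prg_blocks(seed: bytes, n_blocks: int) -> bytes:
    """Hash-chain PRG that emits only half of each digest.

    Keeping the other half as hidden chaining state means an observer
    of the output stream doesn't learn the full internal state, unlike
    emitting the whole digest and reusing it as the next seed."""
    state = seed
    out = []
    for _ in range(n_blocks):
        digest = hashlib.md5(state).digest()  # 16 bytes
        state = digest[:8]                    # hidden half -> next seed
        out.append(digest[8:])                # visible half -> output
    return b"".join(out)

assert len(prg_blocks(b"seed", 4)) == 32
```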

-J



Re: patent free(?) anonymous credential system pre-print - a simple attack and other problems

2002-11-06 Thread Jason Holt

(Re: my paper at http://eprint.iacr.org/2002/151/ )

Stefan Brands wrote:
 - The system is subject to a simple attack. The problem lies with the
 multiplication of the  hashes. Let's take the Chaum blinding as an
[...]

(For our readers at home, that was the vulnerability I mentioned in my 
response to Adam).

[...]
 - My work certainly does provide for revocable anonymity and pooling
 prevention. For  pooling protection, see paragraph 2 on page 193,
 section 5.11 page 210 paragraph 2, and  section 5.5.2 on page 211. For
[...]

When I speak of pooling credentials, I'm talking about Alice
presenting her student ID along with the senior-citizen ID Bob loaned her (or
for which Bob is answering clandestinely), as if they both belonged to her,
in order to get both discounts on her movie tickets.  In my system, you get
your credentials issued in a set associated with a single identity, and it's
hard for Alice to get Bob's credentials included in one of her own sets.  It
works even if the CAs don't trust each other.

Page 211 of your book talks about discouraging lending, which doesn't
help in the case where Bob answers on Alice's behalf when she shows his
credentials.  In any case, section 5.5.2 only adds liability to pooling - it
doesn't prevent it mathematically.  (As to lending in general, I think you're
right that discouragement may be the best we can do).

Pages 193 and 210 do talk about having an identifying value encoded in
the credentials which the holder can prove is or isn't the same as in other
credentials.  However, the discussion on page 193 is with respect to building
digital pseudonyms, and the discussion on page 210 seems to be about showing
that values are *not* the same, following a scenario in which a pseudonym
holder has been identified as a misbehaver. I can think of ways in which this
feature might be leveraged to create otherwise-unlinkable sets of credentials
from different (distrusting) CAs, but it's never addressed directly that I can
see, and would need some specifics filled in.  Nonetheless, I'll point out in
my paper that it's a possibility in your system.


 - The proposed hashing technique for selective disclosure was introduced
 by myself in 1999.  Quoting from page 27 of my MIT Press book titled
[...]

Pages 27 and 184 of your book are now both referenced in my section on
selective disclosure.


 - More seriously, the simple hash technique has numerous drawbacks, as I
 explain on page 27 of my MIT Press book, in the very same
 paragraph: Although certificate holders now have  some control over
 which attributes they reveal to verifiers, they are forced to leave
 behind digital signatures. ...
[...]

What do you mean by "forced to leave behind digital signatures"?


 ...  Worse, nothing prevents the CA and others from tracing and linking all
 the communications and transactions of each certificate holder. ...
[...]

This is of course overcome in my system with blinding and
cut-and-choose.

 [
   Snipped discussion of features which Brands' system has and my system 
   doesn't: boolean formulae, lending prevention, limited show,
   wallet-with-observer, discarding protection, anonymous recertification /
   updating, multi-application certificates, etc.
 ]

From my response to Adam Back:

I'm glad that was clear in my text.  This isn't a do-everything system like
Brands' - rather, it has 2 aims.  1: show how to do simple selective
disclosure in a Chaum/Fiat/Naor-like system using X.509v3 credentials as a
base, and 2: show how to link credentials from multiple issuers to the same
identity without compromising anonymity.

And actually, I forgot to mention the original goal of my paper, which
was to create a system not encumbered by your patents or Chaum's.

I'll expand my related work section to point out that your system and
others have lots of features which my system doesn't attempt to provide.  My
apologies if my terse treatment mischaracterized your work.

-J




Re: patent free(?) anonymous credential system pre-print

2002-11-05 Thread Jason Holt

(Re: my paper at http://eprint.iacr.org/2002/151/ )

Let me first point out that Dr. Stefan Brands noted an insecurity in
my system which would allow malicious users to obtain issuer signatures on
arbitrary documents.

This is due to the fact that users aren't prevented from using the
(bitwise) same document for each element but one in the cut and choose
protocol and making the remaining document malicious.  If the malicious
document isn't selected for inspection, dividing out the signatures on the
(n/2)-1 identical documents is trivial, leaving a signature on the malicious
value.
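A toy numeric illustration of the dividing-out step, using discrete-log style signatures m^k mod p with tiny made-up parameters (and assuming the attacker can learn m^k for the repeated document):

```python
p = 2**13 - 1   # 8191, a small prime; illustration only
k = 999         # issuer's secret exponent

m = 1111        # one document, reused bitwise-identically
bad = 2222      # the malicious document

product = pow(m, 3, p) * bad % p   # 3 identical copies + 1 malicious
sig = pow(product, k, p)           # issuer blindly signs the product

# Knowing m^k for the repeated document, divide it out three times:
mk = pow(m, k, p)
bad_sig = sig * pow(pow(mk, 3, p), -1, p) % p
assert bad_sig == pow(bad, k, p)   # a signature on the malicious value
```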

The credential IDs described in section 4.1 of my paper were designed
to thwart this attack, (and the session random values to thwart similar
attacks over multiple issuing sessions), and do appear to succeed with the
additional requirement that each credential ID be different from the others.
This requirement will be added to the next update to the pre-print.

This requirement is analogous to the variable i in the preimage of y_i
in the Chaum/Fiat/Naor system.  It ensures that each candidate is different,
and therefore that the values of the elements signed will be unpredictable.  


On Thu, 31 Oct 2002, Adam Back wrote:

 Some comments on this paper comparing efficiency, and functionality
 with Camenisch, Chaum, Brands.

Thanks for your feedback!

[...]
 - efficiency
 
 The non-interactive cut and choose protocol results in quite big
 messages in the issuing and showing protocols to attain good security.
[...]
 ... a single credential would be order of 10KB.  Similar for the showing
 protocol.

Indeed, as section 6 points out, a set of 3 credentials could be a
megabyte in size and require half a megabyte of network traffic.  Efficiency
is /not/ a major selling point of this system. :)

[...]
 - functionality

[pulling up from later on in Adam's post]
 
 Most of these short-falls stem from the analogous short-falls in the
 Wagner blinding method they are based on.  Of course (and the point of
 the paper) the credentials do offer over the base Wagner credentials
 (a restrictive) form of selective disclosure which the base
 credentials do not.

I'm glad that was clear in my text.  This isn't a do-everything system
like Brands' - rather, it has 2 aims.  1: show how to do simple selective
disclosure in a Chaum/Fiat/Naor-like system using X.509v3 credentials as a
base, and 2: show how to link credentials from multiple issuers to the same
identity without compromising anonymity.

The feature comparison is appreciated, though; it may be useful for an
expansion of the related work section, and in terms of features to add in the
future.

[...]
 The credentials can be replayed (as there is no credential private
 key, a trace of a credential show offers no defense against replay).
 Brands credentials have a private key so they can defend against this.
 (Chaum's credentials have the same problem).

Section 4.3 specifies that Alice should create a keypair and store the
public key as a selective disclosure field, allowing her to prove ownership as
you describe.

 
 The credentials unavoidably leave the verifier with a transferable
 signed trace of the transaction.  Brands credentials offer a
 zero-knowledge option where the verifier can not transfer any
 information about what he was shown.

Good point.

 The credentials support selective disclosure of attributes, but only
 in a restricted sense.  Attributes can be disclosed with AND
 connectives.  However other connectives (OR, +, -, negation, and
 formulae) are not directly possible.  Brands supports all of these.

Also true.  I point this out in paragraphs 1 and 2 of section 2. 

 
 The credentials do not support lending deterrence (there is no option
 to have a secret associated with a credential that must necessarily be
 revealed to lend the credential as with Brands).

This could be added to my system.  To be honest, I don't consider
Brands' implementation of lending deterrence to be a worthwhile feature.  
Having embarassing information in a credential could be a deterrence against
lending to an untrusted party, but comes at the cost of an equal liability if
the credential is stolen.  It also doesn't prevent the rightful holder from
providing the response to the challenge on that field when the lendee uses the
credential (in real time).  Lending is a problem which I don't believe can be
solved purely mathematically (which Brands also points out, as I recall).  
Thus I prefer to avoid the topic rather than give it unavoidably insufficient
treatment.

 
 The credentials are not suitable for offline use because they offer no
 possibility for a secret (such as user identity, account number etc)
 to be revealed if the user spends more times than allowed.

My credentials aren't designed to do limited-show, although I do point
out in section 4.4.1 that credentials should be used only one 

patent free(?) anonymous credential system pre-print

2002-10-29 Thread Jason Holt

I've submitted a pre-print of my anonymous credential system to the IACR
ePrint server.  Thanks to all of you who responded to the questions I posted
here while working on it.  I'd love to hear feedback from any and all before I
submit it for publication; particularly, I want to make sure I haven't
forgotten to give proper attribution for any previous work.

http://eprint.iacr.org/2002/151/

It mentions how to use the blinding technique Ben Laurie describes in his
Lucre paper, which I don't think has been mentioned in the formal literature,
and also describes what I call a non-interactive cut and choose protocol which
is new AFAICT.  Thanks again!

-J





Re: Chaum's unpatented ecash scheme

2002-08-26 Thread Jason Holt


[...]
 Speaking of anonymous, you should give credit in your paper to Anonymous
 for discovering the possibility of marking Lucre coins, in a coderpunks
 posting at
 http://www.mail-archive.com/coderpunks@toad.com/msg02186.html, and for
 inventing the Type II Defence, both in the posting above and amplifed
 at http://www.mail-archive.com/coderpunks@toad.com/msg02323.html.
 
 It may seem pointless to credit anonymous postings, but it makes the
 historical record more clear.

 Anonymous _is_ credited, but I can add the specific URLs.
[...]

I've got a paper ready to publish that uses this technique for digital
credentials.  It'll be nice to have it at least referenced in the literature,
since Ben's paper is the only place I know of where it's really explained.  I
give a URL for it as well as credit to Anonymous, but it would be helpful to
know who really should be credited with what.

I'm listing it as Laurie's method since he wrote the Lucre paper, but
I haven't been able to tell how much David Wagner had to do with the idea.  
Ben?  David? The URLs to the Anon messages are handy; I'll see about including
them too.

-J




Data Security class programming project

2002-08-20 Thread Jason Holt


I'm working on designing the programming projects for a data security class.  
What do you think of this one?

I love its intrinsic irony, but can we actually get away with requiring it for
a university class?  I mean, Elcomsoft really is in court for this.  My
University is unfortunately not the type of organization to stand at the
forefront and protect our civil rights.

-J

---
DRM Lab
---

Options
---
* Make the lab open-ended - they get to pick what to break and how, as long as
they don't use a pre-packaged tool.  DVDs, CDs, .WMA, etc.

Objective
-
To demonstrate the fundamental futility of current attempts to prevent
unauthorized copying of published works.

Requirements

_Here_ you can find a copy of Dmitry Sklyarov's Defcon slides in Adobe's eBook
format.  They were created with the eBookPro compiler, advertised as "the only
software in the universe that makes your information virtually 100%
burglarproof!".

Extract all text from the slides using the method of your choice and turn in
the resulting text file.  It needs no special formatting.





Tunneling through hostile proxy

2002-07-23 Thread Jason Holt

 Roy M. Silvernail[SMTP:[EMAIL PROTECTED]]
 Given internet access from a private intranet, through an HTTP 
 proxy out of the user's control, is it possible to establish a secure 
 tunnel to an outside server?  I'd expect that ordinary SSL 
 connections will secure user - proxy and proxy - server 
 separately, with the proxy able to observe cleartext.  Could an SSH 
 connection be made under these conditions?
[...]

The default behavior for an SSL proxy is to pass the encrypted bytes
back and forth, allowing you to connect all the way to the other server.  
However, it is possible for the proxy to have its own CA which has been added
to your browser.  Then it acts as a man in the middle and pretends to be the
remote host to you, and vice versa.  In that case, it works as you describe,
watching the data during its interim decryption.

Typically, the proxy would give you generic certificates (like
*.com), but it could conceivably generate a certificate for each site you
visit (secure.yahoo.com, etc.).  The way to tell would be to look at the
issuing authority according to your browser - if it's one of the public ones,
like Thawte, you've got a connection to the far end.  If it's Th4wt3, or
your company's, the proxy is probably watching.
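The issuer check described above amounts to something like the following sketch, where the cert dict mimics the shape Python's ssl.getpeercert() returns; the CA names and the "public CA" list are invented for illustration:

```python
# Hypothetical parsed certificate, shaped like ssl.SSLSocket.getpeercert()
cert = {
    "subject": ((("commonName", "secure.yahoo.com"),),),
    "issuer": ((("organizationName", "Example Corp Proxy CA"),),),
}

PUBLIC_CAS = {"Thawte", "VeriSign"}  # illustrative trusted-issuer list

def issuer_org(cert):
    # Walk the issuer's relative distinguished names for the org name
    for rdn in cert["issuer"]:
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

def likely_mitm(cert):
    # Issuer isn't a known public CA -> a private/proxy CA may be in play
    return issuer_org(cert) not in PUBLIC_CAS

assert likely_mitm(cert)
```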

Incidentally, another company that does private browsing over SSL is
www.orangatango.com (along with other nifty anonymizing stuff).

-J




Re: Tunneling through hostile proxy

2002-07-23 Thread Jason Holt

On Tue, 23 Jul 2002, Adam Back wrote:
[...]
  However, it is possible for the proxy to have its own CA which has
  been added to your browser.  Then it acts as a man in the middle and
  pretends to be the remote host to you, and vice versa.  In that
  case, it works as you describe, watching the data during its interim
  decryption.
 
 While it's _possible_ to do this, I've never heard of a server hosted
 application that advertises that it's doing this.  I would think it
 would be quite hard to get a CA to issue you a certificate if this is
 what you intended to do with it (act as a general MITM on SSL
 connections you proxy).
[...]

I don't know of any other real-world examples.  Rescorla mentions the
technique on pp. 316-319 of SSL and TLS.  Certainly Thawte isn't going to
issue such wildcard certs, for exactly the reasons you mention.  That's why
you (or your government, or company, or whoever keeps an eye on you) create
your *own* CA and tell your browser to trust it.  Then it'll accept the
wildcard certs without complaint.

-J




Safe RSA variant?

2002-06-14 Thread Jason Holt


Well, I got such a good response from my last technical question that I'll try
again :)

If it's actually secure, it'll go really well with my credential system.

Trent generates primes p,q.  He publishes n=pq and some random value g.

Trent calculates a and a' such that aa' = 1 % (p-1)(q-1) and a' is prime.  He
sends Alice a' and g^a%n.  a' is her secret exponent and g^a%n her public
value.

Bob can establish a shared secret with Alice if Alice got a' from Trent.  He
picks a random r and sends her g^ar%n.  She raises it to a' to compute the
shared secret g^r%n.
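A toy run of the exchange with tiny, insecure parameters (Python's pow with exponent -1 computes the modular inverse):

```python
# Trent's setup
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
g = 7                        # public value, gcd(g, n) = 1

a_prime = 53                 # Alice's secret exponent: prime, coprime to phi
a = pow(a_prime, -1, phi)    # so a * a' = 1 mod (p-1)(q-1)
ga = pow(g, a, n)            # Alice's public value g^a mod n

# Bob's side
r = 1234                     # Bob's random exponent
to_alice = pow(ga, r, n)     # he sends g^(a*r) mod n
secret_bob = pow(g, r, n)

# Alice's side: raise to a' to recover the shared secret g^r mod n
secret_alice = pow(to_alice, a_prime, n)
assert secret_alice == secret_bob
```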

So the important questions are:

* Given g^a%n and a', can Alice derive (p-1)(q-1)?  If so, she'd be able to
take over Trent's job.

* Given g^k%n and k' for lots of different k, can we derive (p-1)(q-1) or
otherwise imitate Trent's ability to give out (g^k%n, k') pairs?

So IOW the goal is for Bob to be able to send Alice a message iff she knows a
secret from Trent.  And if Alice's secret is compromised, only messages sent
to (or possibly from)  Alice become vulnerable.

A friend of mine pointed out that Alice can trivially compute another working
pair of keys from her own: g^-a and -a'. And if the a' keys weren't prime,
Alice and Bob could factor them and generate other keypairs. Both of those
seem manageable, though.

The common modulus attack in which you use g^k and g^k' to get information on
g (which is public in this case) by calculating rk+sk'=1 caused me problems in
earlier equations I tried, but doesn't seem to help the attacker here.

-J





Ben's blinding, plus pre-publishing

2002-06-10 Thread Jason Holt


Maybe you could say more about the details of your credential system.
Such a system built on Wagner blinding might be very interesting.

I've been thinking it would be nice to post my entire paper here (and
maybe on sci.crypt.research) before sending it off to the journals.  What are
the issues surrounding that, though?

The academic folks here seem uncomfortable when I talk about it, like
I'd be leaking something secret.  AFAICT, nobody else would be able to apply
for a patent on the idea without telling a lot of lies in the process.  So
that leaves the possibility that somebody whips out another paper on the topic
before mine's all the way done.  Are the journals going to be snippy about
copyright issues?  Why haven't I seen other papers published on usenet and
such before going to press?


 If I replace h1 with (g^b0) and get the issuer to sign:

 ((g^b0)*g^b1 *h2*g^b2 *h3*g^b3...)^k

 I should be able to divide the two results and get h1^k.  But part of the
 cut-and-choose protocol will be to require that the n/2 checked documents
 are all valid and different from any previous instances of the protocol.  
 So it should be extremely hard for the user to sneak lots of previously
 used values and fake h's (which are really blinding factors) into the
 unrevealed documents.  But are there other ways to separate out signatures
 on individual h's?

You're really going to remember all the discarded h values from all the
previous instances of credential issuing?  Seems like it might be a lot of
data.  How many h values do you typically expect?

I get around that by having the issuer issue a new random value for
each issuing session which gets hashed several times along with some other
data before going into the blinded messages.  You have to prove that the value
properly descends from the issuer's random value, which makes it tough to
reuse values from a previous session.

-J




More of Ben's blinding

2002-06-07 Thread Jason Holt


 But actually another solution is much simpler, which is to do blinding
 as just h * g^b, without a y factor.  That works fine as long as the
 bank is known not to be misbehaving.  Ben's paper shows how the bank
 can use a ZK proof to show that it is raising to the same power k every
 time, basically again that same ZK proof regarding discrete logarithms.
 If the bank uses such a proof then you can use simpler blinding without
 a y factor, and you can recover the signature on the product of your h
 values by dividing by g^k^(sum of b's).

Somewhere I got the idea that that was patented, but looking at
undeniable signatures, they're actually much closer to (h^y)(g^b), so your
suggestion should work great.  Thanks!

Anybody know of other patents which might get in the way?  I'm worried
about Chaum's blind signature and undeniable signature patents, and want to
present as patent-free a system as possible.

One more thing.  If the issuer returns the signature:

(h1*g^b1 *h2*g^b2 *h3*g^b3...)^k

Can I separate out any of the h^k values?  My system relies on that
being hard.  If I replace h1 with (g^b0) and get the issuer to sign:

((g^b0)*g^b1 *h2*g^b2 *h3*g^b3...)^k

I should be able to divide the two results and get h1^k.  But part of
the cut-and-choose protocol will be to require that the n/2 checked documents
are all valid and different from any previous instances of the protocol.  So
it should be extremely hard for the user to sneak lots of previously used
values and fake h's (which are really blinding factors) into the unrevealed
documents.  But are there other ways to separate out signatures on individual
h's?
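
For concreteness, here's a toy sketch (illustrative parameters, not secure;
names are mine) of the simpler h * g^b blinding and the division by
g^k^(sum of b's) that recovers the signature on the product of h's:

```python
# Simpler blinding without a y factor: blind each h_i as h_i * g^b_i,
# have the issuer sign the product, then divide by g^k^(sum of b's).
# Toy parameters: p = 2039 is a safe prime, g = 3 a generator.
p, g, k = 2039, 3, 577    # k is the issuer's secret exponent
gk = pow(g, k, p)         # g^k, published (consistency of k shown in ZK)

hs = [11, 22, 33]         # the h values going into the credential
bs = [101, 202, 303]      # user's blinding exponents

# product of blinded terms: (h1*g^b1)(h2*g^b2)(h3*g^b3)
r = 1
for h, b in zip(hs, bs):
    r = r * (h % p) * pow(g, b, p) % p

s = pow(r, k, p)          # issuer signs the product

# unblind: divide by (g^k)^(b1+b2+b3)
unblinded = s * pow(pow(gk, sum(bs), p), -1, p) % p

prod_h = 1
for h in hs:
    prod_h = prod_h * h % p
assert unblinded == pow(prod_h, k, p)   # signature on the product of h's
```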

-J




Laurie's blinding w/cut and choose?

2002-06-05 Thread Jason Holt

In his paper on Lucre (2nd defence against marking):

http://anoncvs.aldigital.co.uk/lucre/

Ben Laurie gives this as a (possibly patent-free) blinding technique,
where h is the message, and g is the public generator:

r = blind(h) = h^y * g^b (mod p)

To sign,

s = sign(r) = r^k

To unblind,

(s/g^k^b)^(1/y) (mod p)

(where k is the signer's secret exponent; of course, nobody but the
signer can verify the signature).  Unfortunately, this doesn't work with cut
and choose, where the signer signs the product of unrevealed documents, since
the 1/y exponent above would distribute to all the internal terms:

((r1 * r2 * r3 ...)^k / (g^k)^b1)^(1/y1)  !=  (h1 * r2 * r3 ...)^k   (mod p)

Can anyone see how to get this to work?  It doesn't matter for Ben's
money system since he doesn't need cut and choose, but I'm working on a
patent-free credential system where the issuer needs to cut and choose to keep
the user from cheating.
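
For reference, the single-message round trip does work; here's a toy sketch
(small illustrative parameters, not secure; 1/y is computed mod p-1):

```python
# Laurie's blinding for one message: r = h^y * g^b, s = r^k,
# unblind as (s / (g^k)^b)^(1/y).  Toy parameters: p = 2039 is a
# safe prime (so odd y is invertible mod p-1), g = 3 a generator.
p, g, k = 2039, 3, 577    # k is the signer's secret exponent
h = 1234                  # the message (hashed into the group in practice)
y, b = 101, 222           # user's blinding exponents (y odd)

r = (pow(h, y, p) * pow(g, b, p)) % p      # blind
s = pow(r, k, p)                           # signer returns s = r^k

gk = pow(g, k, p)                          # g^k is public
y_inv = pow(y, -1, p - 1)                  # 1/y mod (p-1)
t = s * pow(pow(gk, b, p), -1, p) % p      # divide out (g^k)^b
assert pow(t, y_inv, p) == pow(h, k, p)    # recovers h^k
```

With a product of blinded documents, the final (1/y1) step hits every factor,
which is exactly the problem above.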

Alternatively, is there another way to get some sort of blind mark
(that foils the issuer from adding subliminal information that would
compromise the blinding) without stepping on Chaum's patent?  I hear Chaum
mentioned one himself at PET 2002, but I can't find anything about it online.

-J  




Bit commitment with hashes in Applied Cryptography

2002-05-31 Thread Jason Holt

In Applied Cryptography, p. 87 (2nd ed., heading "Bit Commitment Using One-Way
Functions") Schneier specifies that Alice must generate 2 random bit strings
before hashing, and then send one along with the hash as her commitment:

commitment = H(R1, R2, b), R1

Then she sends R2 and her bit to reveal what she committed to.  Why do we need
R1? It should be sufficient to send H(R2, b) as the commitment, then reveal
R2,b later.

Is this to keep her from taking advantage of known collisions?  (Are there
known collisions for md5/sha/etc.?)  In a protocol where the preimage data
must meet a certain format would we need R1?
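
A minimal sketch of the commitment as described, assuming SHA-256 for H and
simple concatenation of the inputs (both assumptions mine):

```python
# Bit commitment with a hash: Alice sends H(R1, R2, b) and R1 now,
# and reveals R2 and b later so Bob can recompute the hash.
import hashlib
import os

def commit(bit):
    r1, r2 = os.urandom(16), os.urandom(16)
    c = hashlib.sha256(r1 + r2 + bytes([bit])).hexdigest()
    return (c, r1), (r2, bit)    # (sent to Bob now), (Alice keeps, reveals later)

def verify(c, r1, r2, bit):
    return c == hashlib.sha256(r1 + r2 + bytes([bit])).hexdigest()

(c, r1), (r2, bit) = commit(1)
assert verify(c, r1, r2, bit)
assert not verify(c, r1, r2, 1 - bit)   # Alice can't open to the other bit
```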

  -J




Re: When encryption is also authentication...

2002-05-31 Thread Jason Holt


Ian Grigg wrote:
[...]
 SSL for commerce is readily in place without batting an eyelid these days.

 Costs are still way too high.  This won't change until
 browsers are shipped that treat self-signed certs as being
 valid.  Unfortunately, browser manufacturers believe in
 cert-ware for a variety of non-security reasons.
[...]

Self signed certs defeat the purpose of the certificate chain mechanism, which
is not just there to make Veri$ign rich.  Mallory can self-sign a cert for
bob.com, and hack Alice's DNS to point bob.com at her own site.  But it's
(theoretically, anyway) much more difficult for her to convince Verisign that
she owns bob.com.  If we trust Verisign to do that, then we know we're really
talking to Bob when we visit bob.com.

Now, the ability to add other CAs which we trust would be a nice feature, and
if there were more trustworthy CAs which were added to the browsers by
default, we could get the costs down closer to the actual overhead of
verifying that the supplicant (er, applicant) actually owns the domain he's
trying to get a cert for.  But anyone can certify themselves as owning
amazon.com, and it's critical that my browser tell me when some stranger makes
such an assertion on their own.

-J




Re: Making Veri$ign rich(er)

2002-05-30 Thread Jason Holt

On Thu, 30 May 2002, Ian Grigg wrote:
[...]
 And, in practice this is how it goes.  No thief ever bothers
 to do an MITM, even over *un*encrypted traffic.  They simply
 hack into the machines and steal it all.  That's why there
 has never been a case of CCs sniffed over the net and being
 used to commit a fraud (at least, no recorded ones).
 
 Change the analysis to small merchants, and it is even worse
 (of course Amazon will have a cert, so even its rich bounty
 is unavailable, you have to do this on small merchants).
 
 
 
 So, how do we make Veri$ign richer?  Easy, switch browsers
 to accepting self-signed certs.  To see this, we have to
 have tried or heard about small enterprises who have tried
 to set up their SSL certs.
[...]

If MITM attacks are so hard that you don't consider them a threat, why
bother with SSL at all?  SSL provides two things:

* A certificate chain that demonstrates who you're talking to
* Secrecy and message integrity between you and the person you're
talking to

You remove the first benefit by using self-signed certs.  The second
one is still nice, but if you're worried about me *watching* your traffic,
shouldn't you also be worried about me intercepting your DNS lookup and
replacing the response with my own IP?  If we all use self-signed certs,
you'll never be the wiser.

Yes, the attack you describe where I get the root nameservers to
redirect *all* amazon.com traffic to me is hard.  And it can be pretty tough
to watch and modify an individual user's traffic.  But it's not nearly as
tough as breaking the crypto behind SSL.  If we use it right, that security
extends to the domain I type into my browser.  If we don't, we reduce it to
the hardness of manipulating the wire.

I certainly agree that merchants need to use better security on the
server end.  But that's orthogonal to the SSL issue.

-J



