[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]
- Forwarded message from Jason Holt <[EMAIL PROTECTED]> -

From: Jason Holt <[EMAIL PROTECTED]>
Date: Sun, 2 Oct 2005 22:23:50 +0000 (UTC)
To: cyphrpunk <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

On Sun, 2 Oct 2005, cyphrpunk wrote:
> 1. Limiting token requests by IP doesn't work in today's internet. Most

Hopeless negativism. I limit by IP because that's what Wikipedia is already
doing. Sure, hashcash would be easy to add, and I looked into it just last
night. Of course, as several have observed, hashcash also leads to
whack-a-mole problems, and the abuser doesn't even have to be savvy enough
to change IPs.

Why aren't digital credential systems more widespread? As has been suggested
here and elsewhere at great length, it takes too much infrastructure. It's
too easy when writing a security paper to call swaths of CAs into existence
with the stroke of a pen, and to assume that any moment now people will
start carrying around digital driver's licenses and social security cards
(issued in the researcher's pet format), which they'll be happy to show the
local library in exchange for a digital library card.

That's why I'm so optimistic about nym. A reasonable number of Tor users, a
technically inclined group of people on average, want to access a single
major site. That site isn't selling ICBMs; they mostly want people to have
access anyway. They have an imperfect rationing system based on IPs. The
resource is cheap, the policy is simple, and the user needs to conceal a
single attribute about herself. There's a simple mathematical solution that
yields certificates which are already supported by existing software. That,
my friend, is a problem we can solve.

> I suggest a proof of work system a la hashcash. You don't have to use
> that directly, just require the token request to be accompanied by a
> value whose sha1 hash starts with say 32 bits of zeros (and record
> those to avoid reuse).
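The hashcash-style check quoted above is small enough to sketch. A minimal version in Python (function names are mine, not nym's; a real server would persist the set of spent values, and 32 bits of difficulty takes ~2^32 hashes to mine, so the difficulty is left as a parameter here):

```python
import hashlib
import os

def check_pow(value: bytes, bits: int) -> bool:
    """True iff sha1(value) begins with `bits` zero bits."""
    digest = hashlib.sha1(value).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

def mine_pow(bits: int) -> bytes:
    """Brute-force a value passing check_pow; ~2**bits attempts expected."""
    while True:
        candidate = os.urandom(8)
        if check_pow(candidate, bits):
            return candidate

# As the email notes, the server must record spent values to prevent reuse:
spent = set()

def redeem(value: bytes, bits: int) -> bool:
    if not check_pow(value, bits) or value in spent:
        return False
    spent.add(value)
    return True
```

A difficulty of 12 bits is enough to exercise the logic in a test; 32 bits is the suggested production setting.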
I like the idea of requiring combinations of scarce resources. It's
definitely on the wishlist for future releases. Captchas could be integrated
as well.

> 2. The token reuse detection in signcert.cgi is flawed. Leading zeros
> can be added to r which will cause it to miss the saved value in the
> database, while still producing the same rbinary value and so allowing
> a token to be reused arbitrarily many times.

Thanks for pointing that out! Shouldn't be hard to fix.

> 3. signer.cgi attempts to test that the value being signed is > 2^512.
> This test is ineffective because the client is blinding his values. He
> can get a signature on, say, the value 2, and you can't stop him.
>
> 4. Your token construction, sign(sha1(r)), is weak. sha1(r) is only
> 160 bits which could allow a smooth-value attack. This involves
> getting signatures on all the small primes up to some limit k, then
> looking for an r such that sha1(r) factors over those small primes
> (i.e. is k-smooth). For k = 2^14 this requires getting less than 2000
> signatures on small primes, and then approximately one in 2^40 160-bit
> values will be smooth. With a few thousand more signatures the work
> value drops even lower.

Oh, I think I see. The k-smooth sha1(r) values then become "bonus" tokens,
so we use a large enough h() that the result is too hard to factor (or, I
suppose, we could make the client present properly PKCS-padded preimages).
I'll do some more reading, but I think that makes sense. Thanks!

-J

- End forwarded message -

--
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
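The leading-zeros bug in point 2 above, and the obvious fix, can be sketched. Assuming (as a guess at signcert.cgi's behavior) that r arrives as a hex string and is stored verbatim, canonicalizing to the integer value before the database check closes the hole:

```python
# The flawed scheme: the textual r is what gets recorded, so "07f" and
# "7f" look distinct to the database even though they decode identically.
seen_flawed = set()

def redeem_flawed(r_hex: str) -> bool:
    if r_hex in seen_flawed:
        return False
    seen_flawed.add(r_hex)
    return True

# The fix: compare canonical integer values, so zero-padding changes nothing.
seen_fixed = set()

def redeem_fixed(r_hex: str) -> bool:
    r = int(r_hex, 16)
    if r in seen_fixed:
        return False
    seen_fixed.add(r)
    return True
```

The same r redeemed twice with a padded spelling succeeds under the flawed check and fails under the fixed one.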
[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]
- Forwarded message from cyphrpunk <[EMAIL PROTECTED]> -

From: cyphrpunk <[EMAIL PROTECTED]>
Date: Sun, 2 Oct 2005 09:12:18 -0700
To: Jason Holt <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

A few comments on the implementation details of
http://www.lunkwill.org/src/nym/:

1. Limiting token requests by IP doesn't work in today's internet. Most
customers have dynamic IPs. Either they won't be able to get tokens, because
someone else has already gotten one using their temporary IP, or they will
be able to get multiple ones by rotating among available IPs. It may seem
that IP filtering is expedient for demo purposes, but actually that is not
true, as it prevents interested parties from trying out your server more
than once, such as to do experimental hacking on the token-requesting code.

I suggest a proof of work system a la hashcash. You don't have to use that
directly; just require the token request to be accompanied by a value whose
sha1 hash starts with, say, 32 bits of zeros (and record those to avoid
reuse).

2. The token reuse detection in signcert.cgi is flawed. Leading zeros can be
added to r which will cause it to miss the saved value in the database,
while still producing the same rbinary value and so allowing a token to be
reused arbitrarily many times.

3. signer.cgi attempts to test that the value being signed is > 2^512. This
test is ineffective because the client is blinding his values. He can get a
signature on, say, the value 2, and you can't stop him.

4. Your token construction, sign(sha1(r)), is weak. sha1(r) is only 160
bits, which could allow a smooth-value attack. This involves getting
signatures on all the small primes up to some limit k, then looking for an r
such that sha1(r) factors over those small primes (i.e. is k-smooth).
For k = 2^14 this requires getting less than 2000 signatures on small
primes, and then approximately one in 2^40 160-bit values will be smooth.
With a few thousand more signatures the work value drops even lower.

A simple solution is to do slightly more complex padding. For example,
concatenate sha1(0||r) || sha1(1||r) || sha1(2||r) || ... until it is the
size of the modulus. Such values will have essentially zero probability of
being smooth, and so the attack does not work.

CP

- End forwarded message -
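The padding proposed above is straightforward to sketch. This assumes a 1024-bit modulus and leaves a byte of headroom so the padded value stays below the modulus (a detail the email doesn't specify):

```python
import hashlib

def pad_full_domain(r: bytes, modulus_bits: int = 1024) -> int:
    """Concatenate sha1(0||r) || sha1(1||r) || ... out to the modulus size.

    The result behaves like a random (modulus_bits - 8)-bit number, so the
    chance of it factoring over small primes (being k-smooth) is negligible,
    defeating the smooth-value attack described above."""
    out = b""
    counter = 0
    while len(out) < modulus_bits // 8:
        out += hashlib.sha1(bytes([counter]) + r).digest()
        counter += 1
    # Truncate to one byte under the modulus size so the value fits below n.
    return int.from_bytes(out[: modulus_bits // 8 - 1], "big")
```

The padding is deterministic (both signer and verifier can recompute it from r) yet spreads r over nearly the full modulus width, unlike bare sha1(r).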
[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]
- Forwarded message from Jason Holt <[EMAIL PROTECTED]> -

From: Jason Holt <[EMAIL PROTECTED]>
Date: Sun, 2 Oct 2005 00:13:02 +0000 (UTC)
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

On Sat, 1 Oct 2005, cyphrpunk wrote:
> All these degrees of indirection look good on paper but are
> problematic in practice.

As the great Ulysses said: "Pete, the personal rancor reflected in that
remark I don't intend to dignify with comment. However, I would like to
address your attitude of hopeless negativism. Consider the lilies of the
g*dd*mn field... or h*ll, look at Delmar here as your paradigm of hope!"
[Pause] Delmar: "Yeah, look at me."

Okay, so maybe there's no personal rancor, but I do detect some hopeless
negativism. Or perhaps it's unwarranted optimism that crypto-utopia will be
here any moment now, flowing with milk and honey, ecash, infrastructure and
multi-show zero-knowledge proofs. Maybe I just need a disclaimer: "Warning:
this product favors simplicity over crypto-idealism; not for use in Utopia."
Did I mention that my code is Free and (AFAIK) unencumbered?

The reason I have separate token and cert servers is that I want to end up
with a client cert that can be used in unmodified browsers and servers. The
certs don't have to have personal information in them, but with indirection
we cheaply get the ability to enforce some sort of structure on the certs.
Plus, I spent as much time as it took me to write *both releases of nym*
just trying to get ahold of the actual digest in an X.509 cert that needs to
be signed by the CA (in order to have the token server sign that instead of
a random token). That would have eliminated the separate token/cert steps,
but required a really hideous issuing process and produced signatures whose
form the CA could have no control over. (Clients could get signatures on
IOUs, delegated CA certs, whatever.)
(Side note to Steve Bellovin: having once again abandoned mortal combat with
X.509, I retract my comment about the system not being broken...)

> the security properties of the system. Hence it makes sense for all of
> them to be run by a single entity. There can of course be multiple
> independent such pseudonym services, each with its own policies.

Sure, there's no reason for one entity not to run all three services; we're
only talking about 2 CGI scripts and a web proxy anyway. Or, run a CA which
serves multiple token servers and issues certs with extensions specifying
what kinds of tokens were "spent" to obtain the cert. Then web servers get
articulated limiting from a single CA's certs.

> In particular it is not clear that the use of a CA and a client
> certificate buys you anything. Why not skip that step and allow the
> gateway proxy simply to use tokens as user identifiers? Misbehaving
> users get their tokens blacklisted.

It buys not having to strap hacked-up code onto your web browser or server.
Run the perl scripts once to get the cert, then use it with any browser and
any server that knows about the CA.

> There are two problems with providing client identifiers to Wikipedia.
> The first is as discussed elsewhere, that making persistent pseudonyms
> such as client identifiers (rather than pure certifications of
> complaint-freeness) available to end services like Wikipedia hurts
> privacy and is vulnerable to future exposure due to the lack of
> forward secrecy.

Great, you guys work up an RFC, then an IETF draft, then some Idemix code
with all the ZK proofs. In the meantime, I'll be setting up my 349 lines of
perl/shell code for whoever wants to use it. Whoops, I forgot the
IP-rationing code; 373 lines.

Actually, if all you want is complaint-free certifications, that's easy to
put in the proxy; just make it serve up different identifiers each time and
keep a table of which IDs map to which client certs.
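The per-edit identifier table described in the last paragraph is only a few lines of state in the proxy. A sketch (class and method names are mine, not nym's):

```python
import secrets

class RotatingIds:
    """Hand out a fresh opaque identifier per edit, remembering which client
    cert each one maps to so complaints can still be traced by the proxy
    admin -- the site itself never sees a stable pseudonym."""

    def __init__(self):
        self._id_to_cert = {}

    def fresh_id(self, cert_fingerprint: str) -> str:
        eid = secrets.token_hex(8)
        self._id_to_cert[eid] = cert_fingerprint
        return eid

    def resolve(self, eid: str) -> str:
        """Used only when a complaint comes in."""
        return self._id_to_cert[eid]
```

As the next paragraph notes, the cost of this unlinkability is that the site's admins can no longer spot abuse patterns themselves; each complaint must round-trip through the proxy admin.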
Makes it harder for the wikipedia admins to see patterns of abuse, though.
They'd have to report each incident and let the proxy admin decide when the
threshold is reached.

> The second is that the necessary changes to the Wikipedia software are
> probably more extensive than they might sound. Wikipedia tags each
> ("anonymous") edit with the IP address from which it came. This
> information is displayed on the history page and is used widely throughout
> the site. Changing Wikipedia to use some other kind of identifier is
> likely to have far-reaching ramifications. Unless you can provide this
> "client identifier" as a sort of virtual IP (fits in 32 bits) which you
> don't mind being displayed everywhere on the site (see objection 1), it is
> going to be expensive to implement on the wiki side.

There's that hopeless negativism again. Do you want a real solution or not?
Because I can think of at least 2 ways to solve that problem in a practical
setting, and that's assuming that your assumption about MediaWiki being
[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]
- Forwarded message from Adam Langley <[EMAIL PROTECTED]> -

From: Adam Langley <[EMAIL PROTECTED]>
Date: Sun, 2 Oct 2005 03:21:41 +0100
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

cyphrpunk:
> Each link in this chain has to trust all the
> others. ... any of these can destroy the security properties
> of the system.

Dude, we're not launching missiles here, it's just Wikipedia.

On 10/2/05, Jason Holt <[EMAIL PROTECTED]> wrote:
> The reason I have separate token and cert servers is that I want to end up
> with a client cert that can be used in unmodified browsers and servers.

First, how do you add client certificates in modern browsers? Oh, actually
I've just found it in Firefox, but what about IE/Opera/whatever else? Can
you do it easily? The blinded signature is just a long bit string, and it
might well be better from a user's point of view for them to 'login' by
pasting the base64-encoded blob into a box. Just a thought (motivated in no
small part by my dislike for all things x509ish).

> > privacy and is vulnerable to future exposure due to the lack of
> > forward secrecy.

The lack of forward secrecy is pretty fundamental in a reputation-based
system. The more you turn up the forward secrecy, the less effective any
reputation system is going to be.

And I'm also going to say well done to Jason for actually coding something.
There do seem to be a lot of couch-geeks on or-talk - just look at the S/N
ratio on the recent wikipedia threads. It might not work, but it's
*something*. No amount of talk is going to suddenly become a solution.
AGL

--
Adam Langley [EMAIL PROTECTED] http://www.imperialviolet.org
(+44) (0)7906 332512
PGP: 9113 256A CC0F 71A6 4C84 5087 CDA5 52DF 2CB6 3D60

- End forwarded message -
[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]
- Forwarded message from cyphrpunk <[EMAIL PROTECTED]> -

From: cyphrpunk <[EMAIL PROTECTED]>
Date: Sat, 1 Oct 2005 15:27:32 -0700
To: Jason Holt <[EMAIL PROTECTED]>
Cc: cryptography@metzdowd.com, [EMAIL PROTECTED]
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

On 9/30/05, Jason Holt <[EMAIL PROTECTED]> wrote:
> http://www.lunkwill.org/src/nym/
> ...
> My proposal for using this to enable tor users to play at Wikipedia is as
> follows:
>
> 1. Install a token server on a public IP. The token server can optionally
> be provided Wikipedia's blocked-IP list and refuse to issue tokens to
> offending IPs. Tor users use their real IP to obtain a blinded token.
>
> 2. Install a CA as a hidden service. Tor users use their unblinded tokens
> to obtain a client certificate, which they install in their browser.
>
> 3. Install a wikipedia-gateway SSL web proxy (optionally also a hidden
> service) which checks client certs and communicates a client identifier to
> MediaWiki, which MediaWiki will use in place of the REMOTE_ADDR (client IP
> address) for connections from the proxy. When a user misbehaves, Wikipedia
> admins block the client identifier just as they would have blocked an
> offending IP address.

All these degrees of indirection look good on paper but are problematic in
practice. Each link in this chain has to trust all the others. Whether the
token server issues tokens freely, or the CA issues certificates freely, or
the gateway proxy creates client identifiers freely, any of these can
destroy the security properties of the system. Hence it makes sense for all
of them to be run by a single entity. There can of course be multiple
independent such pseudonym services, each with its own policies.

In particular it is not clear that the use of a CA and a client certificate
buys you anything. Why not skip that step and allow the gateway proxy simply
to use tokens as user identifiers? Misbehaving users get their tokens
blacklisted.
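The blinded-token step in the proposal above (and why the signer cannot inspect what it signs, which is the crux of objection 3 elsewhere in this thread) can be sketched with textbook RSA blinding. The key below is a toy for illustration only; real keys are 1024+ bits:

```python
import secrets

# Toy RSA key (far too small to be secure; both primes are well known)
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind(m: int):
    """Client: multiply m by b^e so the signer sees only a random residue."""
    while True:
        b = secrets.randbelow(n - 2) + 2
        try:
            b_inv = pow(b, -1, n)   # b must be invertible mod n
        except ValueError:
            continue                # astronomically rare for a real modulus
        return (m * pow(b, e, n)) % n, b_inv

def sign(blinded: int) -> int:
    """Token server: signs blindly. It cannot tell what m is, which is
    exactly why range checks on the submitted value are ineffective."""
    return pow(blinded, d, n)

def unblind(sig: int, b_inv: int) -> int:
    """Client: strip the blinding factor, leaving m^d mod n."""
    return (sig * b_inv) % n
```

The round trip works because (m * b^e)^d = m^d * b (mod n), so multiplying by b^-1 leaves a valid signature on m that the signer never saw.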
There are two problems with providing client identifiers to Wikipedia. The
first is, as discussed elsewhere, that making persistent pseudonyms such as
client identifiers (rather than pure certifications of complaint-freeness)
available to end services like Wikipedia hurts privacy and is vulnerable to
future exposure due to the lack of forward secrecy.

The second is that the necessary changes to the Wikipedia software are
probably more extensive than they might sound. Wikipedia tags each
("anonymous") edit with the IP address from which it came. This information
is displayed on the history page and is used widely throughout the site.
Changing Wikipedia to use some other kind of identifier is likely to have
far-reaching ramifications. Unless you can provide this "client identifier"
as a sort of virtual IP (fits in 32 bits) which you don't mind being
displayed everywhere on the site (see objection 1), it is going to be
expensive to implement on the wiki side.

The simpler solution is to have the gateway proxy not be a hidden service
but a public service on the net which has its own exit IP addresses. It
would be a sort of "virtual ISP" which helps anonymous users to gain the
rights and privileges of the identified, including putting their reputations
at risk if they misbehave. This solution works out of the box for Wikipedia
and other wikis, for blog comments, and for any other HTTP service which is
subject to abuse by anonymous users. I suggest that you adapt your software
to this usage model, which is more general and probably easier to implement.

CP

- End forwarded message -