Amazon.com Inquiry

2005-10-02 Thread Amazon
Dear Amazon member, 

Due to concerns we have for the safety and integrity of the Amazon community we 
have issued this warning. 

Per the User Agreement, Section 9, we may immediately issue a warning, 
temporarily suspend, indefinitely suspend or terminate your membership and 
refuse to provide our services to you if we believe that your actions may cause 
financial loss or legal liability for you, our users or us. We may also take 
these actions if we are unable to verify or authenticate any information you 
provide to us. 

Please follow the link below: 

http://www.amazon.com.encrypted-inquiry.cn?/exec/obidos

and update your account information. 

We appreciate your support and understanding as we work together to keep the
Amazon marketplace a safe place to trade.

Thank you for your attention to this serious matter.

Regards,
Amazon Safety Department


NOTE: This message was sent to you by an automated e-mail system. Please don't 
reply to it. Amazon treats your personal information with the utmost care, and 
our Privacy Policy is designed to protect you and your information.






[EMAIL PROTECTED]: [IP] Guardian Observer (London) on Google Privacy Issues]

2005-10-02 Thread Eugen Leitl
- Forwarded message from David Farber [EMAIL PROTECTED] -

From: David Farber [EMAIL PROTECTED]
Date: Sat, 1 Oct 2005 21:28:29 -0400
To: Ip Ip ip@v2.listbox.com
Subject: [IP] Guardian Observer (London) on Google Privacy Issues
X-Mailer: Apple Mail (2.734)
Reply-To: [EMAIL PROTECTED]


http://observer.guardian.co.uk/business/story/0,6903,1582719,00.html





Our internet secrets stored for decades

Privacy groups want the law changed to stop Google using, or  
divulging to outside agencies, the vast amount of personal data it  
has access to. By Conal Walsh

Sunday October 2, 2005
The Observer

Google took a further step away from its folksy image when it hired  
its first professional lobbyist in Washington earlier this year. But  
it turned out to be a timely move. The world's biggest search engine  
has been under attack on many fronts in 2005 - and its activities  
have spawned a cottage industry of Google critics, who complain above  
all that the company's dramatic rise to prominence is a threat to our  
privacy.
Much protest focuses on the company's use of 'cookies' - small identifier
files - which Google plants on your computer's hard drive when you use its
service.
The cookies enable Google to keep a record of your web-searching  
history. They don't expire until 2038, meaning that potentially  
sensitive information on your interests and peccadilloes could be  
stored for upwards of 30 years. It is sobering to think what  
fraudsters, identity thieves, blackmailers or government snoopers  
could do with this information if they got access to it.
Privacy groups are up in arms. 'We need to re-evaluate the role of  
big search engines, email portals, and all the rest of it,' says  
Daniel Brandt, of the website Google Watch.
'They all track everything. Google was the first to do it, arrogantly  
and without any apologies; now everyone assumes that if Google does  
it, they can do it too.'
Lauren Weinstein, founder of the US-based People for Internet  
Responsibility, says out-of-date privacy laws fail to capture the  
information-gathering powers of youthful but powerful new media  
companies.
'The relevant laws are generally so weak - if they exist at all -  
that it's difficult to file complaints when you can't find out what  
data they're keeping and how they are using it,' says Weinstein.
Google says these fears are unfounded, that it respects privacy and  
keeps strictly within relevant privacy laws. Personal data are logged  
on computer files but 'no humans' access it, says the company;  
safeguards are in place to prevent employees from examining traffic  
data without special permission from senior managers. Nor is personal  
information shared with outsiders. All Google's records are  
impenetrable to hackers.
Besides, say Google devotees, open access and the empowerment of the  
individual are central to the whole philosophy of the company; it  
would never seek to misuse or betray its users' secrets.
Life, though, can be complicated. In repressive countries such as  
China, Google and other portals have little choice but to accommodate  
the authorities, which regularly censor the internet and spy on users.
In the US, Google has declined to say how often it responds to  
requests for information from America's intelligence and law  
enforcement agencies. And there are concerns that what Google is  
building with its data-retention operation is a vast marketing  
database, which one day could be exploited ruthlessly.
Simmering discontent turned into open confrontation earlier this year  
when Google launched Gmail, a free email service designed to compete  
with Yahoo and Microsoft's Hotmail.
To ordinary punters, the great advantage of Gmail was the enormous  
two gigabytes of storage space it offered, enabling users to keep all  
their old messages. But Google planned to make the service pay by  
scanning customers' emails for keywords in order to send them  
targeted advertisements - a flagrant breach of privacy, according to  
opponents.
The Consumer Federation of America demanded that Google rethink the  
scheme, while California politician Liz Figueroa called for changes  
in the law to protect users' 'most intimate and private email  
thoughts'. The London-based campaigners Privacy International filed  
complaints with data protection agencies in several countries,  
including Britain.
The UK Information Commissioner took no action after consulting with  
Google, but campaigners argue that government bodies operating with a  
small staff and obsolete laws are no match for a technology  
superpower like Google, which is expanding at an almost exponential  
rate and continues to innovate in its use of personal data.
In claims denied by Google, Privacy International's Simon Davies  
asserts that there is 'an absence of contractual commitment to the  
security of data' and 'fundamental problems in achieving lawful  
customer consent'.
For now, campaigners may have to 

[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]

2005-10-02 Thread Eugen Leitl
- Forwarded message from cyphrpunk [EMAIL PROTECTED] -

From: cyphrpunk [EMAIL PROTECTED]
Date: Sat, 1 Oct 2005 15:27:32 -0700
To: Jason Holt [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com, [EMAIL PROTECTED]
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

On 9/30/05, Jason Holt [EMAIL PROTECTED] wrote:
> http://www.lunkwill.org/src/nym/
> ...
> My proposal for using this to enable tor users to play at Wikipedia is as
> follows:
>
> 1. Install a token server on a public IP.  The token server can optionally
> be provided Wikipedia's blocked-IP list and refuse to issue tokens to
> offending IPs.  Tor users use their real IP to obtain a blinded token.
>
> 2. Install a CA as a hidden service.  Tor users use their unblinded tokens
> to obtain a client certificate, which they install in their browser.
>
> 3. Install a wikipedia-gateway SSL web proxy (optionally also a hidden
> service) which checks client certs and communicates a client identifier to
> MediaWiki, which MediaWiki will use in place of the REMOTE_ADDR (client IP
> address) for connections from the proxy.  When a user misbehaves, Wikipedia
> admins block the client identifier just as they would have blocked an
> offending IP address.

All these degrees of indirection look good on paper but are
problematic in practice. Each link in this chain has to trust all the
others. Whether the token server issues tokens freely, or the CA
issues certificates freely, or the gateway proxy creates client
identifiers freely, any of these can destroy the security properties
of the system. Hence it makes sense for all of them to be run by a
single entity. There can of course be multiple independent such
pseudonym services, each with its own policies.

In particular it is not clear that the use of a CA and a client
certificate buys you anything. Why not skip that step and allow the
gateway proxy simply to use tokens as user identifiers? Misbehaving
users get their tokens blacklisted.
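
A minimal sketch of that simpler arrangement (hypothetical names, not nym's
actual code; it assumes the token is an RSA signature the proxy can verify
with the token server's public key):

    import hashlib

    blacklist = set()   # identifiers of tokens whose owners misbehaved

    def client_id(token_sig: int) -> str:
        # Stable pseudonymous identifier derived from the token itself.
        width = (token_sig.bit_length() + 7) // 8 or 1
        return hashlib.sha1(token_sig.to_bytes(width, "big")).hexdigest()

    def admit(token_sig: int, padded_hash: int, n: int, e: int):
        # Valid iff the token is the token server's RSA signature over
        # the padded hash: sig^e mod n must equal the padded value.
        if pow(token_sig, e, n) != padded_hash:
            return None
        cid = client_id(token_sig)
        return None if cid in blacklist else cid

    def ban(token_sig: int) -> None:
        blacklist.add(client_id(token_sig))

The proxy then hands cid to the end service wherever it would have reported
an IP address.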

There are two problems with providing client identifiers to Wikipedia.
The first is as discussed elsewhere, that making persistent pseudonyms
such as client identifiers (rather than pure certifications of
complaint-freeness) available to end services like Wikipedia hurts
privacy and is vulnerable to future exposure due to the lack of
forward secrecy. The second is that the necessary changes to the
Wikipedia software are probably more extensive than they might sound.
Wikipedia tags each (anonymous) edit with the IP address from which
it came. This information is displayed on the history page and is used
widely throughout the site. Changing Wikipedia to use some other kind
of identifier is likely to have far-reaching ramifications. Unless you
can provide this client identifier as a sort of virtual IP (fits in 32
bits) which you don't mind being displayed everywhere on the site (see
objection 1), it is going to be expensive to implement on the wiki
side.
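
For what it's worth, squeezing a pseudonym into that 32-bit shape is cheap.
A hypothetical sketch (collisions are possible, and a real deployment would
presumably carve these out of a reserved range so they cannot shadow real
addresses):

    import hashlib

    def virtual_ip(client_id: str) -> str:
        # Hash the pseudonym down to 32 bits and print it as a dotted
        # quad, so it fits anywhere MediaWiki expects REMOTE_ADDR.
        digest = hashlib.sha1(client_id.encode()).digest()
        return ".".join(str(b) for b in digest[:4])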

The simpler solution is to have the gateway proxy not be a hidden
service but to be a public service on the net which has its own exit
IP addresses. It would be a sort of virtual ISP which helps
anonymous users to gain the rights and privileges of the identified,
including putting their reputations at risk if they misbehave. This
solution works out of the box for Wikipedia and other wikis, for blog
comments, and for any other HTTP service which is subject to abuse by
anonymous users. I suggest that you adapt your software to this usage
model, which is more general and probably easier to implement.

CP

- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
__
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]

2005-10-02 Thread Eugen Leitl
- Forwarded message from Adam Langley [EMAIL PROTECTED] -

From: Adam Langley [EMAIL PROTECTED]
Date: Sun, 2 Oct 2005 03:21:41 +0100
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

cyphrpunk:
> Each link in this chain has to trust all the
> others. ... any of these can destroy the security properties
> of the system.

Dude, we're not launching missiles here, it's just Wikipedia.

On 10/2/05, Jason Holt [EMAIL PROTECTED] wrote:
> The reason I have separate token and cert servers is that I want to end up
> with a client cert that can be used in unmodified browsers and servers.

First, how do you add client certificates in modern browsers? Oh,
actually I've just found it in Firefox, but what about
IE/Opera/whatever else? Can you do it easily?

The blinded signature is just a long bit string and it might well be
better from a user's point of view for them to 'login' by pasting the
base64 encoded blob into a box.

Just a thought (motivated in no small part by my dislike for all things x509ish)
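
The server side of that paste-a-blob login could be small. A sketch, on the
assumption (mine, not Adam's) that the service keeps the sha1 of every token
it has issued:

    import base64, hashlib

    issued = set()   # sha1 hex digests of tokens handed out
    banned = set()

    def login(pasted: str):
        # Decode the pasted blob; its hash becomes the account name.
        try:
            token = base64.b64decode(pasted, validate=True)
        except ValueError:
            return None
        h = hashlib.sha1(token).hexdigest()
        return h if h in issued and h not in banned else None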

> > privacy and is vulnerable to future exposure due to the lack of
> > forward secrecy.

The lack of forward secrecy is pretty fundamental in a reputation
based system. The more you turn up the forward secrecy, the less
effective any reputation system is going to be.

And I'm also going to say well done to Jason for actually coding
something. There do seem to be a lot of couch-geeks on or-talk - just
look at the S/N ratio on the recent wikipedia threads. It might not
work, but it's *something*. No amount of talk is going to suddenly
become a solution.


AGL

--
Adam Langley  [EMAIL PROTECTED]
http://www.imperialviolet.org   (+44) (0)7906 332512
PGP: 9113   256A   CC0F   71A6   4C84   5087   CDA5   52DF   2CB6   3D60

- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
__
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]

2005-10-02 Thread Eugen Leitl
- Forwarded message from Jason Holt [EMAIL PROTECTED] -

From: Jason Holt [EMAIL PROTECTED]
Date: Sun, 2 Oct 2005 00:13:02 + (UTC)
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]


On Sat, 1 Oct 2005, cyphrpunk wrote:
> All these degrees of indirection look good on paper but are
> problematic in practice.

As the great Ulysses said,

  Pete, the personal rancor reflected in that remark I don't intend to
  dignify with comment. However, I would like to address your attitude of
  hopeless negativism.  Consider the lilies of the g*dd*mn field...or h*ll,
  look at Delmar here as your paradigm of hope!

  [Pause] Delmar: Yeah, look at me.

Okay, so maybe there's no personal rancor, but I do detect some hopeless 
negativism.  Or perhaps it's unwarranted optimism that crypto-utopia will be 
here any moment now, flowing with milk and honey, ecash, infrastructure and 
multi-show zero-knowledge proofs.  Maybe I just need a disclaimer: Warning: 
this product favors simplicity over crypto-idealism; not for use in Utopia. 
Did I mention that my code is Free and (AFAIK) unencumbered?

The reason I have separate token and cert servers is that I want to end up 
with a client cert that can be used in unmodified browsers and servers.  The 
certs don't have to have personal information in them, but with indirection 
we cheaply get the ability to enforce some sort of structure on the certs. 
Plus, I spent as much time as it took me to write *both releases of nym* 
just trying to get ahold of the actual digest in an X.509 cert that needs to 
be signed by the CA (in order to have the token server sign that instead of 
a random token).  That would have eliminated the separate token/cert steps, 
but required a really hideous issuing process and produced signatures whose 
form the CA could have no control over.  (Clients could get signatures on 
IOUs, delegated CA certs, whatever.)

(Side note to Steve Bellovin: having once again abandoned mortal combat with 
X.509, I retract my comment about the system not being broken...)


> the security properties of the system. Hence it makes sense for all of them
> to be run by a single entity. There can of course be multiple independent
> such pseudonym services, each with its own policies.

Sure, there's no reason for one entity not to run all three services; we're 
only talking about 2 CGI scripts and a web proxy anyway.  Or, run a CA which 
serves multiple token servers, and issues certs with extensions specifying 
what kinds of tokens were spent to obtain the cert.  Then web servers get 
fine-grained limiting from a single CA's certs.


> In particular it is not clear that the use of a CA and a client
> certificate buys you anything. Why not skip that step and allow the
> gateway proxy simply to use tokens as user identifiers? Misbehaving
> users get their tokens blacklisted.

It buys not having to strap hacked-up code onto your web browser or server. 
Run the perl scripts once to get the cert, then use it with any browser and 
any server that knows about the CA.


> There are two problems with providing client identifiers to Wikipedia.
> The first is as discussed elsewhere, that making persistent pseudonyms
> such as client identifiers (rather than pure certifications of
> complaint-freeness) available to end services like Wikipedia hurts
> privacy and is vulnerable to future exposure due to the lack of
> forward secrecy.

Great, you guys work up an RFC, then an IETF draft, then some Idemix code 
with all the ZK proofs.  In the meantime, I'll be setting up my 349 lines of 
perl/shell code for whoever wants to use it.  Whoops, I forgot the 
IP-rationing code; 373 lines.

Actually, if all you want is complaint-free certifications, that's easy to 
put in the proxy; just make it serve up different identifiers each time and 
keep a table of which IDs map to which client certs.  Makes it harder for 
the wikipedia admins to see patterns of abuse, though.  They'd have to 
report each incident and let the proxy admin decide when the threshold is 
reached.
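
That table is about all the machinery required; a sketch with hypothetical
names (the proxy mints a throwaway ID per edit, and only the proxy admin can
fold IDs back to a cert):

    import os

    id_to_cert = {}   # throwaway edit ID -> client cert fingerprint
    complaints = {}   # cert fingerprint -> number of reports filed
    THRESHOLD = 3     # hypothetical cutoff before the proxy admin bans

    def fresh_id(cert_fingerprint: str) -> str:
        # A new identifier for every edit; Wikipedia sees only this.
        edit_id = os.urandom(8).hex()
        id_to_cert[edit_id] = cert_fingerprint
        return edit_id

    def report(edit_id: str) -> bool:
        # Admins report an offending ID; True when the owning cert
        # has crossed the ban threshold.
        cert = id_to_cert.get(edit_id)
        if cert is None:
            return False
        complaints[cert] = complaints.get(cert, 0) + 1
        return complaints[cert] >= THRESHOLD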


> The second is that the necessary changes to the Wikipedia software are
> probably more extensive than they might sound. Wikipedia tags each
> (anonymous) edit with the IP address from which it came. This information
> is displayed on the history page and is used widely throughout the site.
> Changing Wikipedia to use some other kind of identifier is likely to have
> far-reaching ramifications. Unless you can provide this client identifier
> as a sort of virtual IP (fits in 32 bits) which you don't mind being
> displayed everywhere on the site (see objection 1), it is going to be
> expensive to implement on the wiki side.

There's that hopeless negativism again.  Do you want a real solution or not? 
Because I can think of at least 2 ways to solve that problem in a practical 
setting, and that's assuming that your assumption about MediaWiki being 
limited to 4-byte identifiers is 

[EMAIL PROTECTED]: Re: nym-0.2 released (fwd)]

2005-10-02 Thread Eugen Leitl
- Forwarded message from cyphrpunk [EMAIL PROTECTED] -

From: cyphrpunk [EMAIL PROTECTED]
Date: Sun, 2 Oct 2005 09:12:18 -0700
To: Jason Holt [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], cryptography@metzdowd.com
Subject: Re: nym-0.2 released (fwd)
Reply-To: [EMAIL PROTECTED]

A few comments on the implementation details of
http://www.lunkwill.org/src/nym/:

1. Limiting token requests by IP doesn't work in today's internet. Most
customers have dynamic IPs. Either they won't be able to get tokens,
because someone else has already gotten one using their temporary IP,
or they will be able to get multiple ones by rotating among available
IPs. It may seem that IP filtering is expedient for demo purposes, but
actually that is not true, as it prevents interested parties from
trying out your server more than once, such as to do experimental
hacking on the token-requesting code.

I suggest a proof of work system a la hashcash. You don't have to use
that directly, just require the token request to be accompanied by a
value whose sha1 hash starts with say 32 bits of zeros (and record
those to avoid reuse).
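
A sketch of that scheme (the wire format, a challenge string plus an 8-byte
counter, is my invention; the 32-bit difficulty and the no-reuse record are
as suggested above):

    import hashlib, itertools

    DIFFICULTY_BITS = 32   # use a much smaller value when testing
    spent = set()          # nonces already redeemed, to prevent reuse

    def leading_zero_bits(digest: bytes) -> int:
        value = int.from_bytes(digest, "big")
        return 8 * len(digest) - value.bit_length()

    def mint(challenge: bytes) -> bytes:
        # Client: grind counters until sha1 has enough leading zeros.
        for counter in itertools.count():
            nonce = challenge + counter.to_bytes(8, "big")
            if leading_zero_bits(hashlib.sha1(nonce).digest()) >= DIFFICULTY_BITS:
                return nonce

    def accept(nonce: bytes) -> bool:
        # Server: verify the work, then record the nonce.
        if nonce in spent:
            return False
        if leading_zero_bits(hashlib.sha1(nonce).digest()) < DIFFICULTY_BITS:
            return False
        spent.add(nonce)
        return True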

2. The token reuse detection in signcert.cgi is flawed. Leading zeros
can be added to r which will cause it to miss the saved value in the
database, while still producing the same rbinary value and so allowing
a token to be reused arbitrarily many times.
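
The usual fix is to canonicalize r before it touches the database, e.g. (a
sketch; that r arrives hex-encoded is my assumption):

    def canonical_r(r_hex: str) -> str:
        # Parse to an integer and re-serialize, so "00ab" and "ab"
        # index the same row and reuse checks see one token, not many.
        return format(int(r_hex, 16), "x")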

3. signer.cgi attempts to test that the value being signed is > 2^512.
This test is ineffective because the client is blinding his values. He
can get a signature on, say, the value 2, and you can't stop him.

4. Your token construction, sign(sha1(r)), is weak. sha1(r) is only
160 bits which could allow a smooth-value attack. This involves
getting signatures on all the small primes up to some limit k, then
looking for an r such that sha1(r) factors over those small primes
(i.e. is k-smooth). For k = 2^14 this requires getting less than 2000
signatures on small primes, and then approximately one in 2^40 160-bit
values will be smooth. With a few thousand more signatures the work
value drops even lower.
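
Concretely, the forgery leans on RSA being multiplicative: if sha1(r)
factors as p1^a1 * p2^a2 * ..., multiplying the corresponding signatures
yields sha1(r)^d mod n. A sketch (toy trial division; assumes the attacker
already holds sig_p = p^d mod n for each small prime p):

    def forge(h: int, sigs: dict, n: int):
        # sigs maps each small prime p to the server's signature on p.
        forged, remaining = 1, h
        for p, sig_p in sigs.items():
            while remaining % p == 0:
                forged = (forged * sig_p) % n
                remaining //= p
        return forged if remaining == 1 else None   # None: h not smooth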

A simple solution is to do slightly more complex padding. For example,
concatenate sha1(0||r) || sha1(1||r) || sha1(2||r) || ... until it is
the size of the modulus. Such values will have essentially zero
probability of being smooth and so the attack does not work.
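
A sketch of that padding (the function name is mine; the counter byte keeps
the sha1 inputs distinct, exactly the concatenation described above):

    import hashlib

    def full_domain_pad(r: bytes, modulus_bits: int) -> int:
        # sha1(0||r) || sha1(1||r) || ... until modulus-sized.
        out = b""
        counter = 0
        while 8 * len(out) < modulus_bits:
            out += hashlib.sha1(bytes([counter]) + r).digest()
            counter += 1
        padded = int.from_bytes(out[: modulus_bits // 8], "big")
        return padded >> 1   # clear the top bit, keeping the value < modulus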

CP

- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
__
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

