Thesis Announcement: "Rethinking public key infrastructures and digital certificates --- building in privacy"

1999-09-23 Thread Stefan Brands

   A N N O U N C E M E N T


Thesis title:

 "Rethinking public key infrastructures and digital
 certificates --- building in privacy" (ISBN 90-901-3059-4,
 304 pages, September 1999)

Author:

 Stefan Brands

Thesis defense date and location:

 October 4, 1999, Eindhoven University of Technology (Netherlands)

Thesis advisors:

 prof. Henk C.A. van Tilborg (Eindhoven University of Technology)
 prof. Adi Shamir (Weizmann Institute of Science)

Thesis reading committee:

 prof. Ronald L. Rivest (Massachusetts Institute of Technology)
 prof. Claus P. Schnorr (Johann Wolfgang Goethe University)
 prof. Adi Shamir (Weizmann Institute of Science)

Summary:

Paper-based communication and transaction mechanisms are being replaced
by electronic mechanisms at a breathtaking pace. The driving force
behind this unstoppable transition is the desire to combat fraud, to
reduce costs, and to address an array of new opportunities opened up by
the Internet and other telecommunication networks. Public key
infrastructures, which center around the distribution and management of
public keys and digital certificates, are widely regarded as the
foundational technology for secure electronic communications and
transactions, in cyberspace as well as in the real world.

While their future looks bright and shiny, public key infrastructures
have a dark side. Today's public key infrastructures erode privacy in a
manner unimaginable just a few decades ago. If the prevailing visions
about digital certificates turn into reality, then everyone will be
forced to communicate and transact in what will be the most pervasive
electronic surveillance tool ever built.

This thesis analyzes the privacy dangers, and introduces highly
practical digital certificates that can be used to design
privacy-protecting electronic communication and transaction systems. The
new certificates allow individuals, groups, and organizations to
communicate and transact securely, in such a way that at all times they
can determine for themselves when, how, and to what extent information
about them is revealed to others, and to what extent others can link or
trace this information. At the same time, the new techniques overcome
many of the security and efficiency shortcomings of the currently
available mechanisms, minimize the risk of identity fraud, and offer a
myriad of benefits to organizations. They can be implemented in low-cost
smartcards without cryptographic coprocessors, admit elliptic curve
implementations with short keys, and encompass today's views about
digital certificates and public key infrastructures as a special case.

Applications of the new techniques include, but are not limited to,
electronic cash, pseudonyms for online chat rooms and public forums
(virtual communities), access control (to Virtual Private Networks,
subscription-based services, buildings, databases, and so on), health
care information exchange, electronic voting, electronic postage, Web
site personalization, secure multi-agent systems, collaborative
filtering, medical prescriptions, road-toll pricing, public transport
tickets, loyalty schemes, and electronic gambling.

--

See http://www.xs4all.nl/~brands for a detailed overview of the contents
of the thesis, online summaries (in English and in Dutch), several
downloadable chapter parts, and contact information.





Re: snake-oil voting?

1999-09-23 Thread Anonymous

>Did any of you see this
>http://www.votehere.net/content/Products.asp#InternetVotingSystems
>
>that proposes to authenticate the voter by asking for his/her/its SSN#? 

It looked like the idea for this part was to prevent double voting,
plus make sure that only authorized people could vote.  It wasn't
necessarily the SSN; it could be name/address/date of birth or whatever.
Similar to what is done when you go and vote in person.

There was also this idea of what they earnestly called a VERN, Voter
Encrypted Registration Number, which would be distributed in advance
to people who were authorized to vote.  You'd provide your VERN along
with your authenticating info (DOB/SSN/whatever) to prove that you were
authorized.

Any voting system ultimately relies on real world proof like this.
Until we have a worldwide secure system of cryptographic credentials
for proving membership in various groups (like registered voters) you
aren't going to get away from this.

In something like Usenet newsgroup votes, you could still use this
but you wouldn't use SSN, you'd just use names/emails as you do now.
It's not perfectly secure against double voting but it is good enough
in most cases.

The real point of the protocol is to keep people from finding out HOW
each person voted, while assuring that the vote count is correct.  There
has been a lot of work on crypto protocols for secure voting and this
appears to be what they have implemented.

Some systems in the literature involve encrypting votes in a manner such
that the votes can be summed on the encrypted data, without decrypting them.
Sounds like something similar is done here.
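This "summing without decrypting" property can be sketched with the
Paillier cryptosystem, one additively homomorphic scheme (an illustration
only -- not necessarily what VoteHere uses, and with toy parameters far
too small for real security):

```python
import math, random

def keygen(p=101, q=113):
    # toy primes -- a real system needs primes of hundreds of digits
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return (n,), (lam, mu, n)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # with generator g = n + 1: c = g^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
votes = [1, 0, 1, 1, 0]
# multiplying Paillier ciphertexts adds the hidden plaintexts
tally = 1
for v in votes:
    tally = (tally * encrypt(pub, v)) % (pub[0] ** 2)
print(decrypt(priv, tally))  # 3 -- the count, with no ballot decrypted
```

Anyone holding the published ciphertexts can redo the multiplication and
check that the announced total decrypts from the product, which is the
sort of public verifiability being claimed.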

This looks like a good system although it would be nice to see more
details.  It certainly sounds better than alternatives.  With current
Usenet votes everyone gets to see how you voted.  With this VoteHere
system you could be assured that your vote was correct (because it would
match the encryption you sent in), nobody else could see how you voted,
and yet you could be sure that the vote total was correct (by running the
sum operation on the encrypted data, and verifying that the decryption
of this is the claimed sum).

It certainly doesn't look like snake oil, rather an attempt to bring
these theoretical crypto protocols into the real world.  It's always
tough to join theory and practice and so there may be some rough edges
at the interface.  But it looks like the idea has significant potential.
Otherwise we're going to get "just trust us" electronic voting like some
areas are using already.



Re: Why smartcards? (was IP: Smart Cards with Chips encouraged)

1999-09-23 Thread Rachel Willmer



Arnold Reinhold wrote:

> And what is the value proposition for the consumer? SSL works swell.

This is true iff:

(1) the consumer is an adult who has a credit card

(2) the consumer is content that the transaction is traceable through
their credit card statement

(3) the consumer is happy to pay the extra cost of the credit card
hierarchy (which may be hidden in the ticket price but is most certainly
there when the merchant calculates the selling price, factoring in the
credit card charges, potential chargebacks, insurance against
chargebacks, etc.)

(4) the consumer wants to spend money with a merchant who is able to get
a merchant account with a credit card processor (this is a real problem
over here in the UK)

or

(5) the consumer wants to exchange money with a merchant rather than a
friend 

In short, for the commerce model we have today (essentially the old mail
order metaphor taken online), SSL and credit cards work just fine.

For tomorrow's other commerce models, you need (and will have) digital
cash smartcards, loyalty smartcards, identification smartcards (probably
all on the same card). SSL doesn't provide a solution for everything.

Rachel

PS I'm considering starting a new mailing list to look at smartcards on
the Internet - would anyone find this interesting/useful? The list that
this email is getting forwarded to seems rather large...



Re: floppy drive SCRs (was IP: Smart Cards with Chips encouraged)

1999-09-23 Thread Rachel Willmer


> I predict the floppy smart card reader will be a dumb flop. Here's why:

Here's another one. These things are driven from watch batteries, rather
than from the computer's mains power.

There is at least one digital cash smartcard which draws sufficient
power that the battery life just isn't up to reasonable usage. I did a
US trip two and a half years ago (omigod, is it really that long ago...)
demonstrating our Mondex Internet payment software using a Fischer
Smarty as the SCR on my laptop, and discovered fairly quickly that we
needed to replace the battery every day to be sure of finishing the demo
with some battery life left. 

So, I am a huge fan of Mondex for Internet cash usage, as everyone who
knows me knows, but the combination of Mondex and a battery driven SCR
would probably not prove satisfactory for reliable usage. 

(Caveat: haven't checked up on currently available hardware, there may
be a mains driven floppy SCR available now or in the offing...)

On the other hand, I have found an easier acceptance of the idea of
smart card usage when I show one of the floppy SCRs to a potential user
rather than a serial port or USB, just because it *looks* familiar.

And as an aside, I'm not entirely convinced that the choice of the name
"Mr. Floppy" was a good one (which is what the Fischer Smarty sells as
in Asia)

:-)

Rachel



Re: snake-oil voting?

1999-09-23 Thread James Robertson

At 15:20 23/09/1999, Ed Gerck wrote:

>List:
>
>Did any of you see this
>http://www.votehere.net/content/Products.asp#InternetVotingSystems
>
>that proposes to authenticate the voter by asking for his/her/its SSN#? 
>And, by the
>contents of ... an email msg sent to him/her/it?

What about the privacy issues here?

Is this not collecting a substantial database
of Name & SSN details?

Would this not be the ideal start for a widespread
project of identity theft?

(I can't believe people are still using SSNs for
this sort of stuff. Have no lessons been learnt?

Browse almost any issue of RISKS digest for
examples of SSN misuse.)

J

-
James Robertson
Step Two Designs Pty Ltd
SGML, XML & HTML Consultancy
http://www.steptwo.com.au/
[EMAIL PROTECTED]

"Beyond the Idea"
  ACN 081 019 623



Re: Ecash without a mint, or - making anonymous payments practical

1999-09-23 Thread Anonymous

Amir Herzberg says,
> Anonymous says,
>
> > It is still worth considering how to create anonymous payment systems
> > which could be more compatible with other elements of present day society.
>
> I think we can do this, indeed, we can achieve an even stronger goal:
> a payment mechanism that will support anonymous payments for people
> so wishing, while allowing other people to use non-anonymous payments
> (which will always have some advantages), without allowing merchants to
> identify the anonymity-seekers.

Yes, of course you could add identification to an anonymous payment
system simply by having people reveal their identities.  Anonymity
infrastructures offer users the option to hide their identities, but
they can't stop people from revealing pseudonyms or true names.

> The method is simple and can use any anonymous payment mechanism. Consider
> for simplicity a buyer, seller and a billing server (payment system
> provider - bank, telco, etc. - `billing system` is the term we use
> for this party in IBM Micro Payments). The payment system supports
> pre-certified payments, which are payments (to the seller) signed
> directly by the billing server. In this case, the buyer's identity
> obviously does not need to appear in the pre-certified payment (it
> is simply a payment - like a check - from billing server to seller).
> So all the buyer really does is `buy` this pre-certified payment. Now,
> obviously, if the billing system allows, the buyer may use anonymous
> payment protocol to buy the pre-certified payment, in which case (and
> assuming all communication is anonymized) we have complete anonymity
> (from billing system and from seller).

Hmmm... sounds like you are saying that if you had an anonymous payment
system you could use it to buy "checks" in your non-anonymous system.
But if you already had the ability to make anonymous payments, why bother
with your system?  I can go to the bank and buy a cashier's check for
cash, then make a payment with it, but I could just as easily have paid
with cash directly.
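The pre-certified "check" that a buyer obtains without the billing server
seeing its contents can be sketched with Chaum-style RSA blind signatures
(an illustration only -- not the IBM protocol and not Wagner blinding;
the serial number is hypothetical and the key is toy-sized):

```python
import math, random

# toy RSA key for the billing server (real keys: 1024 bits or more)
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

serial = 123456 % n        # hypothetical check serial (a hash, in practice)

# buyer picks a blinding factor and hides the serial from the server
r = random.randrange(2, n)
while math.gcd(r, n) != 1:
    r = random.randrange(2, n)
blinded = (serial * pow(r, e, n)) % n

# billing server signs the blinded value -- it never sees `serial`
blind_sig = pow(blinded, d, n)

# buyer strips the blinding; what remains is a plain RSA signature
sig = (blind_sig * pow(r, -1, n)) % n
print(pow(sig, e, n) == serial)  # True: the seller can verify the check
```

The server can later verify the signature on a deposited check without
linking it to any particular withdrawal, which is the unlinkability the
thread is circling around.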

Of course in practice it is helpful to have money changers who can
convert between different payment systems, since there are so many
competing proposals in the world.  So it would be useful if you could in
fact accept some kind of anonymous payment system and translate it into
your own currency.  This is more of a financial problem than a technical
one, though.

> We actually will have the necessary APIs in merchant and buyer to allow
> integration of such an anonymous payment mechanism with the next release
> of IBM Micro Payment (1.3, next month). We may later on implement this
> ourselves if customers are interested, but frankly I prefer to see others
> implementing it; for one reason, as you know, there are multiple patents
> regarding anonymous payments, so it will be a pain to do this (in IBM).

http://www.ecoin.net/mmdh is a project based on Wagner blinding which
is thought to escape patent protection.  Perhaps this would be a good
starting point for a blind payment system.  Are your APIs going to
be public?



Re: having source code for your CPU chip -- NOT

1999-09-23 Thread Eli Brandt

Arnold Reinhold wrote:
> Perry, if you really believe that the question of whether a given 
> lump of object code contains a Thompson Trap is formally undecidable 
> I'd be interested in seeing a proof. Otherwise Herr Goedel has 
> nothing to do with this.

That sure smells undecidable to me.  Any non-trivial predicate P on
Turing machines (non-trivial meaning that both P and not-P are
non-empty) is undecidable by Rice's Theorem.  There are technical
issues in encoding onto the tape all possible interactions with the
world -- the theorem doesn't apply if some inputs are deemed illegal
-- but, hey, it's all countably infinite; re-encode with the naturals.

The practical impact of this is not immediately apparent.  All
non-trivial theorem-proving about programs is futile in this same
sense, but people do it, or try.  They have a lot of difficulties less
arcane than running into the pathological cases constructed to prove
these undecidability theorems.

> Your argument reminds me of claims I always 
> hear that nothing can be measured because of the Heisenberg 
> Uncertainty principle.

I do feel your pain.

-- 
 Eli Brandt  |  [EMAIL PROTECTED]  |  http://www.cs.cmu.edu/~eli/



Re: having source code for your CPU chip -- NOT

1999-09-23 Thread Martin Minow



"Steven M. Bellovin" <[EMAIL PROTECTED]> wrote:
> 
> In message , Martin Minow writes:
> 
> >
> > Yeah, but 370 Assembler H had a very extensive macro facility and
> > you could hide all kinds of weird stuff in 370 code. Not too many
> > folk left around who can read it.
> 
> And those of us who once could, no longer remember how to -- for me, it's at
> least 20 years (more like 25, actually) since I touched the stuff...
> >

It's been 30 for me and I still have some listings lying around but
haven't the foggiest idea what some of the macros do (same for my
7090 assembler).

> That isn't the real problem -- most crypto routines per se are small enough
> that one could verify the machine code without too much effort.  The problem
> is the environment they're embedded in.  By that I mean not just the
> crypto-using application, but the entire operating system.  By example, I
> could verify the machine code for IDEA, but not PGP and certainly not your
> favorite version of UNIX.

Why run crypto code on Unix? You could write a tiny microkernel
(semaphores, interrupt redirection, static memory allocation, no
memory management or protection) for a PDP-11 (or a similar "modern"
computer such as a 68HC11) in about 1000 lines of C and 200 lines of
assembler. (Or buy one ready-made from any of a half-dozen vendors)
Add a minimal IP stack and web server and you have enough of an
environment to write a complete "crypto machine" that can be verified
with a line-by-line code walk-through. Put the "crypto machine" in a
bullet-proof (and Tempest proof) container and "drive" it with HTML.

While you can't validate the Dallas Semiconductor TINI operating system,
it could serve as a test platform for a Java-based design. The crypto
secrets would stay on an iButton while the TINI provides the network
front-end. Both are programmed in Java.

Martin.
ps: I found Decus C on ftp://ftp.update.uu.se/pub/pdp11/decusc.tar.Z
It looks complete.



Re: having source code for your CPU chip -- NOT

1999-09-23 Thread Bill Frantz

At 10:26 PM -0700 9/22/99, Martin Minow wrote:
>At 9:26 AM -0700 9/22/99, Bill Frantz wrote:
>>
>>My own approach would be to audit the generated code.  In KeyKOS/370, we
>>"solved" the problem by using an assembler which was written before KeyKOS
>>was designed.  (N.B. KeyKOS/370 was written in 370 Assembler H).
>>
>
>Yeah, but 370 Assembler H had a very extensive macro facility and
>you could hide all kinds of weird stuff in 370 code. Not too many
>folk left around who can read it.

The big advantage of Assembler for detecting compiler Trojans is that the
object code has a simple relation to the source code.  It is quite
straightforward to see if the assembler has inserted/deleted code a la Thompson.

It is certainly not impossible to do this for a higher level language such
as C.  If the compiler allows you to associate source line numbers with
offsets in the object code it is even easier.  You "merely" have to look at
the object code line-for-line and see if it is a reasonable result of the
source code.  I have occasionally had to do almost exactly this when
line-by-line stepping of a C program was too coarse-grained for the bug I
was seeking.


-
Bill Frantz | The availability and use of secure encryption may |
Periwinkle  | offer an opportunity to reclaim some portion of   |
Consulting  | the privacy we have lost. - B. FLETCHER, Circuit Judge|





Re: having source code for your CPU chip -- NOT

1999-09-23 Thread Arnold Reinhold

At 9:02 AM -0400 9/23/99, Steven M. Bellovin wrote:
>In message , Martin Minow writes:
>
> >
> > Yeah, but 370 Assembler H had a very extensive macro facility and
> > you could hide all kinds of weird stuff in 370 code. Not too many
> > folk left around who can read it.

If I remember right, you could turn off macros, but even if you could 
not, it is easy to filter your assembler source to ensure there are 
no macro definitions.  And there are plenty of people who are still 
using 370 code. I believe the entire US en-route air traffic control 
system is written in BAL.

>
>And those of us who once could, no longer remember how to -- for me, it's at
>least 20 years (more like 25, actually) since I touched the stuff...
> >
> > I have a copy of Decus C (Open Source PDP-11 C) lying around and
> > wrote enough of its compiler and code generator to know what it can
> > and cannot do, in case anyone is interested. The entire source code
> > of the C compiler is small enough to sight-verify in about a man-month.
> > A "Small C" compiler (see early issues of Dr. Dobbs) can be implemented
> > in about 3 man months and ought to be good enough for crypto work.

Martin, you should Zip or tar those files and sign the archive 
immediately. Then publish the signature here.

>
>That isn't the real problem -- most crypto routines per se are small enough
>that one could verify the machine code without too much effort.  The problem
>is the environment they're embedded in.  By that I mean not just the
>crypto-using application, but the entire operating system.  By example, I
>could verify the machine code for IDEA, but not PGP and certainly not your
>favorite version of UNIX.
>
>   --Steve Bellovin

The point of having a verified small C Compiler is not to compile 
crypto code, it is to compile GNU C++ or an intermediate-sized 
open-source compiler that can then compile GNU C++.  The goal should 
be to produce a signed golden object version of GNU C++ whose 
provenance, in terms of the object code that was used in compiling 
it, is reproducibly traceable to either an ancient compiler or a 
small compiler that any computer science class can build from scratch 
as a term project -- preferably both.
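The check Reinhold has in mind can be illustrated with a toy model of
Thompson's trap (every name and behavior below is a deliberately
simplified stand-in, not real compiler machinery): a trojaned compiler
matches an honest one on ordinary programs and only diverges on the
compiler's own source, so comparing builds made through an independent
trusted root is exactly what exposes it.

```python
# "Compiling" is modeled as tagging source text; the trojaned compiler
# recognizes compiler source and smuggles a back door into the output.

def honest_compile(source):
    return "BIN(" + source + ")"

def make_trojan(compile_fn):
    def trojaned(source):
        out = compile_fn(source)
        if "compiler" in source:   # Thompson's self-recognition trigger
            out += "+BACKDOOR"
        return out
    return trojaned

trojan = make_trojan(honest_compile)

# On ordinary programs the trap is invisible...
print(trojan("login source") == honest_compile("login source"))   # True

# ...but compiling the compiler's own source through the independent
# trusted root yields a different binary, exposing the trap.
print(trojan("compiler source") == honest_compile("compiler source"))  # False
```

The point of the golden bootstrap chain is to make `honest_compile` --
the small, hand-verifiable root -- available for exactly this comparison.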

Our moderator, whose hard work in keeping this list excellent I 
really appreciate, but with whom I must disagree here, interjected:

>[And then how do you trust your assembler? Or the compiler and
>assembler you compiled the C compiler on? And the linker? If you
>really try hard enough on all this, you find yourself smack dab in
>front of Kurt Goedel's door, and he tends to have unpleasant news for
>visitors who come to him looking for solace.
>  --Perry]

Perry, if you really believe that the question of whether a given 
lump of object code contains a Thompson Trap is formally undecidable 
I'd be interested in seeing a proof. Otherwise Herr Goedel has 
nothing to do with this. Your argument reminds me of claims I always 
hear that nothing can be measured because of the Heisenberg 
Uncertainty principle. I don't dispute that building trusted systems 
is hard and time consuming or that Thompson's paper adds another 
dimension to the difficulty, but his work does not prove it is 
impossible.

On the other hand, you did mention me in the same breath as David 
Hilbert, so perhaps I shouldn't complain.

Arnold Reinhold




Re: having source code for your CPU chip -- NOT

1999-09-23 Thread Steven M. Bellovin

In message , Martin Minow writes:

> 
> Yeah, but 370 Assembler H had a very extensive macro facility and
> you could hide all kinds of weird stuff in 370 code. Not too many
> folk left around who can read it.

And those of us who once could, no longer remember how to -- for me, it's at 
least 20 years (more like 25, actually) since I touched the stuff...
> 
> I have a copy of Decus C (Open Source PDP-11 C) lying around and
> wrote enough of its compiler and code generator to know what it can
> and cannot do, in case anyone is interested. The entire source code
> of the C compiler is small enough to sight-verify in about a man-month.
> A "Small C" compiler (see early issues of Dr. Dobbs) can be implemented
> in about 3 man months and ought to be good enough for crypto work.

That isn't the real problem -- most crypto routines per se are small enough 
that one could verify the machine code without too much effort.  The problem 
is the environment they're embedded in.  By that I mean not just the 
crypto-using application, but the entire operating system.  By example, I 
could verify the machine code for IDEA, but not PGP and certainly not your 
favorite version of UNIX.

--Steve Bellovin





Re: having source code for your CPU chip -- NOT

1999-09-23 Thread Martin Minow

At 9:26 AM -0700 9/22/99, Bill Frantz wrote:
>
>My own approach would be to audit the generated code.  In KeyKOS/370, we
>"solved" the problem by using an assembler which was written before KeyKOS
>was designed.  (N.B. KeyKOS/370 was written in 370 Assembler H).
>

Yeah, but 370 Assembler H had a very extensive macro facility and
you could hide all kinds of weird stuff in 370 code. Not too many
folk left around who can read it.

I have a copy of Decus C (Open Source PDP-11 C) lying around and
wrote enough of its compiler and code generator to know what it can
and cannot do, in case anyone is interested. The entire source code
of the C compiler is small enough to sight-verify in about a man-month.
A "Small C" compiler (see early issues of Dr. Dobbs) can be implemented
in about 3 man months and ought to be good enough for crypto work.

Martin Minow
[EMAIL PROTECTED]


[And then how do you trust your assembler? Or the compiler and
assembler you compiled the C compiler on? And the linker? If you
really try hard enough on all this, you find yourself smack dab in
front of Kurt Goedel's door, and he tends to have unpleasant news for
visitors who come to him looking for solace.

And of course, once you've done all this lovely work, the NSA comes in
and puts a microscopic bug into your keyboard cable in the night, or
replaces your hand verified assembler executables, or...

I suggest that in practical terms, one has to set some reasonable
limits on what one is willing to do to overcome risk. Paranoia is a
potential source of infinite work, but there is only a finite amount
of work one can do in a given lifetime. That is not to say that *some*
paranoia isn't of value, but perfect paranoia results in a perfect
absence of progress on one's projects.

   --Perry]



snake-oil voting?

1999-09-23 Thread Ed Gerck


List:

Did any of you see this
http://www.votehere.net/content/Products.asp#InternetVotingSystems

that proposes to authenticate the voter by asking for his/her/its SSN#? And, by the
contents of ... an email msg sent to him/her/it?

Besides confusing authentication with identification, VoteHere also confuses the
problem of non-repudiation (which the PKIX WG has been struggling with for some
years), as they declare to have solved it as well:

 "...also prevents voters from later denying that they cast a ballot."

And, as customary in these cases, by declaring to use very strong keys:

 "Every voted ballot is encrypted using 1024-bit public-key encryption."

which, presumably to them and to the public, must be self-secure. But the "best
claim" is right at the beginning, where they postulate that the VoteHere system,
as commented above, is a "universally verifiable election system", with their own
definition following:

4.  Universally Verifiable Elections - secure, efficient, and maintains the voter's
privacy. Furthermore, anyone can verify that the election was conducted fairly,
without compromising voters' privacy.

Comments?

Cheers,

Ed Gerck