Re: Cryptographic Algorithm Metrics

2001-01-08 Thread lcs Mixmaster Remailer

Burton Rosenberg writes:
> There is the concept of Kolmogorov complexity, the size of the
> smallest algorithm that can generate the message. A perfectly
> compressed message would have Kolmogorov complexity
> equal to itself.
>
> Kolmogorov complexity does have a freedom in its definition,
> but the exciting thing is that any two definitions will differ only
> by a constant.

Yes, the freedom is that the complexity is defined only with respect to a
particular Universal Turing Machine.  For any given string you can find
a UTM which compresses it as small as you like, down to a single bit.
This makes it useless as an "absolute" definition of complexity in the
present context.  (Also, as Nick Szabo pointed out, it is uncomputable
due to the halting problem.)




Re: Cryptographic Algorithm Metrics

2001-01-04 Thread lcs Mixmaster Remailer

"Perfect compression" doesn't make sense anyway.  Perfection of
compression (as with entropy) can be expressed only relative to a specific
probability distribution of possible inputs.  Once you have specified
such a probability distribution, you can evaluate how well a particular
compression algorithm works.  But speaking of absolute compression or
absolute entropy is meaningless.




Re: Is PGP broken?

2000-12-04 Thread lcs Mixmaster Remailer

It is often useful to include some information associated with a signature
that is not in the hashed portion.  There are several reasons for this.

First, some information is not security critical and there is no reason
to hash it.  Second, some such information may be subject to change and
updates, and it is desirable for the document to be edited in place in
order to make changes without invalidating the signature.  And third,
some information cannot be calculated until after the signature hash is
calculated due to the semantics involved.

An example of the first case would be an identifier which indicates the
signing key.  In PGP this would be the key ID; in SMIME, CMS and other
PKCS-7 derived formats it is the IssuerAndSerialNumber.  These fields
are not hashed.  This is not security critical because it is essentially
a hint about where to find the key.  If this data is altered, the wrong
key will be found and the signature won't verify.

Examples of the second case would be other kinds of hints for finding the
signing key, in the form of URLs or database pointers which might change.
PGP's preferred key server subpacket might fall into this category.
Another example would be the KeyInfo field in the XML signature format
(http://www.w3.org/TR/2000/CR-xmldsig-core-20001031/).  This has a
number of options for ways to identify and locate keys.  It is not in
the hashed area.

Examples of the third case would be the UnauthenticatedAttributes of the
PKCS-7 family.  CMS (RFC2630) uses this field to hold a countersignature,
which is a signature on a signature.  This cannot be calculated until
after the signature is calculated so it must be in the unhashed region.
PGP might want to add a countersignature mechanism in the future and an
unhashed subpacket would be a good place for it.
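
A toy sketch in Python may make the split concrete.  The layout and
field names here are invented for illustration (real OpenPGP and CMS
encodings differ considerably), and an HMAC stands in for a public-key
signature:

    import hashlib, hmac

    SIGNING_KEY = b"demo-key"   # stand-in for a real signing key

    def sign(document, hashed_subpackets):
        # Only the document and the hashed subpackets are covered.
        return hmac.new(SIGNING_KEY, document + hashed_subpackets,
                        hashlib.sha256).digest()

    def verify(document, hashed_subpackets, unhashed_subpackets, sig):
        # The unhashed region is deliberately ignored by verification.
        expected = hmac.new(SIGNING_KEY, document + hashed_subpackets,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, sig)

    doc = b"the signed document"
    sig = sign(doc, b"creation-time=2000-12-04")

    # A key server hint can be edited in place (or a countersignature
    # appended) without invalidating the signature:
    assert verify(doc, b"creation-time=2000-12-04", b"keyserver=old", sig)
    assert verify(doc, b"creation-time=2000-12-04", b"keyserver=new", sig)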

If you are really convinced that allowing unhashed data is wrong, you
should lend your expertise not only to PGP, but to the many other ongoing
cryptographic working groups and let them know that they are all on the
wrong track.




Re: migration paradigm (was: Is PGP broken?)

2000-12-04 Thread lcs Mixmaster Remailer

William Allen Simpson <[EMAIL PROTECTED]> writes:
> My requirements were (off the top of my head, there were more):
>
>  4) an agreed algorithm for generating private keys directly from 
> the passphrase, rather than keeping a private key database.  
> Moving folks from laptop to desktop has been pretty hard, and 
> public terminals are useless.  AFS/Kerberos did this with a 
> "well-known" string-to-key algorithm, and it's hard to convince 
> folks to use a new system that's actually harder to use.  We need 
> to design for ease of use!  

This is a major security weakness.  The strength of the key relies
entirely on the strength of the memorized password.  Experience has
shown that keys will not be strong if this mechanism is used.

There must be something more.  At a minimum it can be a piece of paper
with the written-down, long passphrase.  Or it can be a smart card
with your key on it.  Conceivably it could also be a secure server that
you trust and access with a short passphrase, where the server can log
incorrect passphrase guesses.  But if you can attack a public key purely
by guessing the memorized passphrase which generated the secret part,
the system will not be secure.
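
To see why, here is a minimal sketch of the attack in Python.  The group
parameters and string-to-key function are invented for illustration; the
point is that the attacker needs only the public key and a guess list,
with no server anywhere to log or throttle failed attempts:

    import hashlib

    # Toy discrete-log group (far too small for real use).
    P = 2**127 - 1
    G = 5

    def key_from_passphrase(passphrase):
        # Hypothetical string-to-key: the private scalar is a hash of
        # the passphrase, so the keypair is fully determined by it.
        digest = hashlib.sha256(passphrase.encode()).digest()
        x = int.from_bytes(digest, "big") % P
        return x, pow(G, x, P)          # (private, public)

    _, victim_public = key_from_passphrase("letmein")

    # Offline dictionary attack: guess, regenerate, compare.
    for guess in ["password", "qwerty", "letmein", "hunter2"]:
        if key_from_passphrase(guess)[1] == victim_public:
            print("recovered passphrase:", guess)
            break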




Rijndael among the weakest of the AES candidates

2000-10-03 Thread lcs Mixmaster Remailer

NIST has got its web site working again.  The rationale document at
http://csrc.nist.gov/encryption/aes/round2/r2report.pdf has some
troubling aspects.

Pure cipher strength actually played very little role in the selection.
All the ciphers were judged adequately strong.  Rijndael's main advantages
were in practical implementation issues, plus resistance to various
hardware failures.

Section 3.2 analyzes cipher strength.  The results are summarized
in section 3.2.2.  NIST explicitly rejects simple measures of cipher
strength which compare number of rounds attacked/broken against total
numbers of rounds.  This single number is not always meaningful, but
it does give an idea of overall strength.  NIST judged MARS, Serpent
and Twofish to have "high" security margin, and RC6 and Rijndael to
have "adequate" security margin.

Rijndael has attacks on 6 or 7 out of the 10 rounds for 128-bit keys;
7 out of 12 rounds for 192-bit keys; and 7, 8 or 9 out of 14 rounds for
256-bit keys (Rijndael uses more rounds for larger keys).  The attacks
against larger numbers of rounds require prohibitive levels of work.

Apparently, NIST judged all ciphers adequately strong on this basis.
The decision as to which to pick was made on other grounds.  Rijndael is
fast, easy to implement in hardware, and lightweight.  These traits seem
to be what led to its choice.

For those whose primary interest in AES is high security, the emphasis
might have been placed elsewhere.  Rather than choosing a cipher with
merely an "adequate" level of security, they would prefer that the
choice had been made from among those ciphers judged highest in security:
MARS, Twofish and Serpent.  Choosing from among these ciphers by similar
criteria of efficiency would probably have led to Twofish.

Rijndael appears to be a compromise between security and efficiency.
This leaves us in an unhappy and uncomfortable position.  It may well be
that Twofish and perhaps Serpent will continue to be widely used as
alternatives to AES.




Re: Command-line tools supporting both PKCS#12 and PKCS#11

2000-09-21 Thread lcs Mixmaster Remailer

Can someone provide or point to a list of tokens which support the
PKCS-11 ("Cryptoki" interface?  TIA!




Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-11 Thread lcs Mixmaster Remailer

A common misconception about the PGP web of trust is that trust flows
through the web along the signatures.  Actually, PGP's trust model is
founded on the principle that "trust isn't transitive".  A signature
is never trusted in PGP unless the user has explicitly indicated that
he personally trusts the signer.  (The new NAI versions of PGP do have
an exception in that the user can mark a signer as a "meta introducer"
allowing trust to flow an extra step.)

This is in contrast to the practice in the X.509 PKI, where a root CA
has the ability to delegate trust as far as it wishes.  If your browser
trusts Verisign, and Verisign trusts someone else, you automatically
trust that other party.

What does flow along PGP's "web of trust" is validity of name-key
bindings.  You know and trust Alice, so you sign her key and mark it
as trusted.  Alice signs Bob's key.  Since you trusted her, you now have
confidence that this is in fact Bob's key.

You know this is Bob's key, but that doesn't mean you automatically trust
it to issue key signatures.  This is a separate decision you make, based
on your knowledge of Bob's character and qualities.  If you do trust him,
you mark his key as trusted.

Bob now signs Carol's key.  You can make a similar determination of
whether Carol is trustworthy.  If she is, you will then trust the
signatures she has made.

You can end up with a chain of Alice->Bob->Carol->David, and determine
that you know David's key.  The only key you had to explicitly verify
was Alice's.  But you had to determine for yourself whether you choose to
trust Alice, Bob, and Carol, in order for this chain to confer validity
on David's key.
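
A rough sketch of this computation, with invented data structures (the
real PGP algorithm adds marginal trust levels and signature thresholds):

    # Keys the user has verified personally, and signers the user has
    # explicitly marked as trusted: each entry a separate decision.
    verified = {"Alice"}
    trusted = {"Alice", "Bob", "Carol"}

    # signatures[signer] = keys that signer has signed
    signatures = {"Alice": {"Bob"}, "Bob": {"Carol"}, "Carol": {"David"}}

    def valid_keys():
        # Validity flows along signatures, but only out of keys that
        # are both valid and explicitly trusted; trust itself never
        # propagates.
        valid = set(verified)
        changed = True
        while changed:
            changed = False
            for signer in valid & trusted:
                for signed in signatures.get(signer, set()):
                    if signed not in valid:
                        valid.add(signed)
                        changed = True
        return valid

    print(sorted(valid_keys()))  # ['Alice', 'Bob', 'Carol', 'David']
    # Drop "Bob" from `trusted` and Carol and David become invalid,
    # even though Bob's own key remains perfectly valid.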

Trust models make a distinction between the question of whether a
certificate (name-key binding) is true and accurate, and the question of
whether a key holder is trusted to issue certificates (key signatures).
X.509 and PGP both distinguish these uses, although they do so in
slightly different ways.  In X.509, the certificate issuer (key signer)
decides whether to delegate trust.  In PGP, the verifier (end user)
decides which keys are trustworthy.

People unfamiliar with the issues of cryptographic trust models often
do not clearly distinguish these two concepts, which is unfortunate and
leads to much confusion.




Re: What would you like to see in a book on cryptography for programme

2000-08-11 Thread lcs Mixmaster Remailer

William Rowden writes:
> In the tempting-but-wrong category, one could include samples of the
> insecure systems that result when programmers with no cryptanalysis
> background create their own cryptographic algorithms.

Yes, and let us hope that Michael Paul Johnson resists the temptation to
plug his own home-grown ciphers, Sapphire and Diamond.  Hopefully he'll
realize that including his own ciphers in the book will ruin what little
credibility he has as an author.




Re: MS Outlook certs

2000-06-16 Thread lcs Mixmaster Remailer

> bash-2.03$ /usr/bin/openssl x509 -in kurs10.cer -text -noout
> Certificate:
> Data:
> Version: 1 (0x0)
> Serial Number: 932714537 (0x37981829)
> Signature Algorithm: md5WithRSA
> Issuer: C=PL, O=oi-wbd
> Validity
> Not Before: Jun 13 09:54:03 2000
> Not After : Dec 14 09:54:03 2001
> Subject: CN=kurs10, CN=recipients, OU=oi-wbd, O=oi-wbd
> Subject Public Key Info:
> Public Key Algorithm: rsaEncryption
> RSA Public Key: (510 bit)
> Modulus (510 bit):
> 20:55:5f:0b:f3:5c:7a:c1:96:bd:36:72:53:c0:ed:
> a8:b5:24:af:34:d9:c0:66:1f:56:dd:ee:99:32:e1:
> 6a:63:cb:10:43:99:7b:20:1c:08:c3:9d:09:4f:82:
> df:01:76:c4:ad:7b:90:22:de:1f:66:3e:78:5e:1c:
> 01:e4:eb:3d
> Exponent: 65537 (0x10001)
> Signature Algorithm: md5WithRSA
> 85:30:cd:a1:30:19:95:42:f7:c7:1d:f8:1b:bf:0e:c6:2f:f5:
> 80:05:ed:04:07:3c:34:96:c8:04:60:3a:a3:33:90:65:c9:50:
> 27:c1:4f:73:16:63:c8:ab:e6:91:71:4a:7a:09:88:e0:ad:3a:
> 2a:84:f9:43:0f:bf:ef:2d:46:1c

Besides the modulus not being prime, your signature is greater than your
modulus, which should never happen.

Either your cert is totally bogus/corrupted, or your build of openssl is
messed up.  Probably the latter, because you probably built it yourself
while the MS Outlook cert manager came to you already built.  You should
check your build tools and configuration.
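
For reference, the comparison is easy to reproduce in Python directly
from the dump above:

    # Signature and modulus from the certificate dump, as integers.
    # An RSA signature is a value mod n, so it must be smaller than n.
    modulus = int(
        "20555f0bf35c7ac196bd367253c0ed"
        "a8b524af34d9c0661f56ddee9932e1"
        "6a63cb1043997b201c08c39d094f82"
        "df0176c4ad7b9022de1f663e785e1c"
        "01e4eb3d", 16)
    signature = int(
        "8530cda130199542f7c71df81bbf0ec62ff5"
        "8005ed04073c3496c804603aa3339065c950"
        "27c14f731663c8abe691714a7a0988e0ad3a"
        "2a84f9430fbfef2d461c", 16)
    print(modulus.bit_length())    # 510
    print(signature.bit_length())  # 512
    print(signature < modulus)     # False: no valid signature can do this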




Re: Perfect Forward Security def wanted

2000-05-08 Thread lcs Mixmaster Remailer

   $ public-key forward secrecy (PFS)
  (I) For a key agreement protocol based on asymmetric cryptography,
  the property that ensures that a session key derived from a set of
  long-term public and private keys will not be compromised if one
  of the private keys is compromised in the future.

This would disallow unauthenticated DH then.  There are no long term
keys in that case.

   - One concept of "forward secrecy" is that, given observations of
  the operation of a key establishment protocol up to time t, and
  given some of the session keys derived from those protocol runs,
  you cannot derive unknown past session keys or future session
  keys.

Logically, forward secrecy would be secrecy forward in time.  That is,
information cannot be recovered once you go forward in time,
into the future.  Information today cannot be discovered from information
available tomorrow.

Likewise, backward secrecy would be secrecy backward in time.
Information cannot be recovered as you move backward in time, that is,
into the past.  Information today cannot be discovered from information
available yesterday.

  (C) Forward vs. backward: Experts are unhappy with the word
  "forward", because compromise of "this" encryption key also is not
  supposed to compromise the "previous" one, which is "backward"
  rather than forward.

No, that is forward secrecy.  Forward secrecy means today's information is
protected tomorrow.  Equivalently, yesterday's information is protected
today.  That is, yesterday's keys can't be recovered from knowledge of
today's.  This is forward secrecy.

Backward secrecy would mean that it is impossible to derive tomorrow's
keys from knowledge of today's.

It's easy to get these confused, because the existence of two reference
points creates two relative directions.  If one is forward of the other,
the other is backward from the first.  For consistency we adopt the
convention of measuring the direction from the point of view of when
the secret information exists.  Forward secrecy is secrecy against
attackers who are forward in time, that is, in the future.

It is unfortunate that what is basically a simple and well defined
concept has led to so much confusion and inconsistency in usage.




Re: Perfect Forward Security def wanted

2000-05-04 Thread lcs Mixmaster Remailer

What is the difference (if any) between "perfect" forward secrecy and
just plain old ordinary forward secrecy?

Forward secrecy sounds like it means secrecy against attacks forward
(later) in time.  When you burn your one time pad after use you have
forward secrecy, because afterwards there is no way to reconstruct
the message.  Likewise a DH exchange produces forward secrecy once the
secret exponents are destroyed, because again the information necessary
to reconstruct the result is lost.

Usually in cryptography "perfect" refers to information theoretic
security, as distinguished from computational security.

By this definition, the burned OTP would provide perfect forward secrecy.
The DH exchange would not, because computational attacks could in
principle recover the secret.

However DH is widely stated to provide PFS.  Therefore "perfect" must
mean something else in this context.  Can anyone shed light on the
distinction between PFS and FS?




Slow revocation checks (was: X.BlahBlah...)

2000-03-06 Thread lcs Mixmaster Remailer

Peter Gutmann writes:
> The reason why revocation checking is disabled by default is a pragmatic
> one, in practice it acts as a "Delay processing each message by a minute
> or two" facility (or at least it did a year or so back), so by disabling
> it by default the vast masses (who don't know or care about it) get
> their PKI warm fuzzies, and those who turn it on get what they asked for
> (I don't use Outlook but if I did I'd certainly have it turned off).

Can you explain why it has this delay?  Presumably it is because it has
to fetch a CRL?  Is this because:

 - CRLs are not cached but fetched every time?

 - CRLs expire every week or so, and you probably don't get more than
   one encrypted message a week, so your previous CRL has expired?

 - CRLs are issued by dozens of different CAs, and you probably don't
   ever receive two messages from people certified by the same CA, so
   you don't have a CRL from the CA you need?

None of these seem particularly plausible.  Is there another reason?



Re: Legal/patent analysis of Lucre?

2000-02-29 Thread lcs Mixmaster Remailer

James Donald writes:
> What is wrong with the original solution proposed in my original
> article, 
>
> The client uses an existing used coin for blinding the newly created
> coin, preferably a coin that he got from someone else, not a coin
> issued to him by the issuer.  If the coin issuer marks coins by using
> a different key for some coins and not others, the blinding will
> generate unrecognizable garbage and the system will fail. 

This could help, but it might not completely eliminate the problem.

The difficulty from the bank's perspective is that although it can still
mark a coin and recognize it at withdrawal, if that coin is then used as
the base for blinding further coins, those other coins will be completely
bogus.  If the bank does not want to be caught in its marking, it must
be prepared to accept bogus coins, a policy which might be discovered.

However if marking is a rare and seldom applied technique, then the
bank could usually reject bogus coins, and only go into "permissive"
mode for a short while after marking a coin.  In that case it might get
away with it.  Unless people are constantly trying to deposit bogus coins
(a practice that would be difficult to sustain over long periods), the
bank could get through its window of vulnerability.

All in all it seems superior to have the bank prove that it is behaving
properly.  That closes off the possibility of marking and also of the
bank intentionally creating bogus coins which it later pretends are valid.



Re: Brands on privacy

2000-02-27 Thread lcs Mixmaster Remailer

Ben Laurie wrote:
> lcs Mixmaster Remailer wrote:
> > This is powerful writing, but one can't escape the thought that making
> > his advanced technology available on a non-exclusionary basis would be
> > a significant first step in bringing about this desirable outcome.
>
> I wrote to Brands about free implementations last year. His answer was,
> in essence, "forget it".

Maybe now that ZKS has licensed the patents, they would be more amenable
to allowing a free implementation, perhaps with some restrictions.

There could be a BRANDSREF, analogous to the RSAREF library which has
been available for many years for non-commercial implementations of RSA.

Actually there would be good reason for a library such as this.
Unlike with RSA, which is pretty much a no-brainer to implement, Brands'
technology is filled with gotchas, traps and pitfalls.

His four certificate issuing protocols all suffer from the problem
that if the issuer runs them in parallel, the cert recipient can get
certified attributes he is not supposed to have.  Brands provides two
"immunization" techniques to address this problem, both of which impose
costs and complexities of their own.

Then there is the "delegation" attack, where someone wants to show a
cert they don't have, and manages to do so by carrying on an interaction
with the cert issuer in parallel with their cert-showing protocol.
Brands shows that although this can happen, it is usually not a problem
and there are simple countermeasures that address it.

The point is, you'd better know what you're doing if you want to implement
Brands' certificate technology.  Obviously Stefan Brands himself does
understand the issues, probably better than anyone.  Unless he was
personally involved with the creation and validation of a library to
implement his technology, it would be very questionable to trust it.
This would argue in favor of a reference implementation, approved by
Brands, which would make sure that things were done properly.

If this could be provided under some kind of open licensing, even if
just for non-commercial use, it would allow the community to begin
experimenting with Brands certificates in various small applications.
This would be similar to approaches like the KeyNote trust management
system where the code is provided and people can begin to work with it
and see where it is most useful.  Brands certificates would be a natural
match to KeyNote, SPKI, and the other attribute based certification/trust
management systems people are experimenting with today.

This might well be a superior strategy to that which Chaum attempted with
DigiCash, which was to provide a monolithic software application which
attempted to be a fully functional, all-things-to-all-people payment
system.  The startup costs are huge and the difficulty of breaking into
the established payment infrastructure is formidable.

It would be better to start small, with a set of narrowly defined
applications and vertical markets, and hope to build from there into a
more widely used system.  Free licensing could be an important component
of such a strategy.

ZKS no doubt paid a lot of money for Brands' patents, but they should
think of this as an investment which brought them Brands himself.
The man, with his intelligence and creativity, is worth far more than
the work.  Given ZKS' background and philosophy, it would be absurd for
them to wrap themselves in the cloak of intellectual property protection
in order to stifle competition.  Unless they want to join the ranks of
such beloved companies as DigiCash and RSADSI, they will find a way to
make this important technology widely available.



Brands on privacy

2000-02-26 Thread lcs Mixmaster Remailer

Stefan Brands' thesis finally came yesterday from Fatbrain, almost two
months after ordering.  His techniques are very powerful and interesting,
but unfortunately patented and hence of no practical value for anyone
other than the one licensee.  How different the world might be if he
and Chaum had made their technology freely available.

Brands has some very strong and brave words in his epilogue in favor
of privacy, which deserve to be widely read:

"Anyone who considers 'key escrow' as a way of protecting privacy is,
of course, in a state of sin.  On a fundamental level there can be no
mistake about this.  Westin's widely accepted definition of privacy
clearly requires that individuals themselves are in control over their
own information.  Key escrow takes this control completely away, and
therefore offers zero privacy.

"In recent years, many cryptographers have worked fiercely to replace
privacy-protecting systems by key escrow systems:

"The first area that fell victim is electronic voting.  Following
proposals guaranteeing unconditional privacy, [cryptographers] introduced
key escrow electronic voting.  Virtually all electronic voting schemes
proposed since then are key escrow systems...

"The most recent area that has fallen victim is electronic cash.
Starting with [certain cryptographers] a floodgate of papers on key
escrow electronic cash opened...

"Here, the primary excuse to squander privacy has been to combat
money laundering.  However, money laundering concerns can be addressed
effectively without giving up privacy by (prudently) applying one or more
of the following measures: placing limits on amounts; ensuring payee
traceability (by the payer only); limiting the issuing of electronic
cash to regulated institutions; disallowing anonymous accounts; issuing
only personalized paying devices; identifying payers in high-value
transactions; and, checking the identity of parties who convert other
forms of money into electronic cash...

"Many of the key escrow papers sport exaggerated and sometiems
downright ignorant statements about how privacy will hurt individuals,
organizations, and societies at large... [T]he key escrow smoke screen
enables the researcher to downplay the annihilation of privacy by
claiming that the new system provides 'balanced' privacy; many authors
do not even shy from claiming that their key escrow systems 'preserve'
or even 'improve' privacy.

"Privacy is protected only if each individual for him or herself is
able at all times to control and determine which parties, if any, are
capable of recovering a secret.  If a user decides to give up some of
that control, that is his or her choice, but it should not be hardwired
into the design of the system.

"It is time to stop tolerating (let alone promoting) misleading practices
towards privacy, be they self-regulation, seal programs, infomediaries,
key escrow systems, or otherwise.  Schemes in which users do not have
control over their own personal data offer zero privacy.  No smoke and
mirrors can change this fact.

"Today, the foundations for the communication and transaction
technologies of the next century are being laid.  Digital certificates
will be hardwired into all operating systems, network protocols, Web
browsers, chip cards, application programs, household utensils, and
so on.  To avert the doom scenario of a global village founded wholly
on inescapable identification technologies, it is imperative that we
rethink our preconceived ideas about security and identity - and build
in privacy before the point of no return has been reached."

This is powerful writing, but one can't escape the thought that making
his advanced technology available on a non-exclusionary basis would be
a significant first step in bringing about this desirable outcome.



Smartcard anonymity patents

2000-02-24 Thread lcs Mixmaster Remailer

At 10:16 AM 02/23/2000 -0800, Bill Stewart writes:
> At 10:14 PM 02/21/2000 -0800, Greg Broiles wrote:
> >4759063 Blind signature systems (19 Jul 2005)
> >4529870 Cryptographic identification, financial transaction, and credential 
> >device (16 Jul 2002)
>
> Interesting - I wonder how much of the "Credentials Without Identity"
> is part of the 2002 patent as opposed to the 2005 blind signature patents?

The 2002 patent seems to be focused mostly on smartcards, as are a number
of the Brands patents recently licensed by Zero Knowledge Systems.

What are the prospects for smartcard based systems within the U.S.?  Such
cards are essentially nonexistent in commerce.  Apparently in Europe and
Asia they are widely used, though, instead of the credit cards preferred
by Americans.

The one place smartcards might make inroads in the U.S. in the next ten
years would be B2B applications, companies rolling out smartcards to
all their employees to hold their keys and certs.  This will improve
security and allow people to sign messages from wherever they are.

However that would seem to be the environment where the anonymity offered
by Chaum and Brands would be least attractive.  Companies need to know
what their employees are doing.  They have little interest in giving a
credential saying "authorized to purchase up to $10,000" to an employee,
then when the invoice comes in, being unable to determine who ordered,
just that it was an authorized order.  This would be a nightmare for
the accountants.

Are there other applications where the Chaum and Brands smartcard patents
could play a role in the U.S. within the next decade?



ZKS hires Brands, licenses patents

2000-02-22 Thread lcs Mixmaster Remailer

According to Zero Knowledge Systems
http://www.zeroknowledge.com/media/pressrel.asp?rel=0000:

   RENOWNED CRYPTOGRAPHER DR. STEFAN BRANDS JOINS ZERO-KNOWLEDGE SYSTEMS;
   COMPANY GAINS EXCLUSIVE RIGHTS TO HIS SUITE OF PRIVACY PATENTS

   Leading Internet privacy and identity-management company Zero-Knowledge
   Systems announced today that Dr. Stefan Brands, a top cryptographer who
   specializes in privacy, PKI, digital identity authentication systems
   and electronic cash, has joined the company as Distinguished Scientist.
   Zero-Knowledge also gained exclusive rights to the Brands patent suite
   - a collection of five issued patents and a suite of pending privacy
   patents that represent the future of certified identity-management
   and electronic cash in the online and offline worlds.

   The patents make Zero-Knowledge one of only two companies capable of
   developing cryptographically assured private and anonymous e-cash, and
   the only company with the technology to offer it for both the online
   and offline worlds.  In an increasingly networked world, where our
   homes, cars and phones are connected, these patents will enable people
   to ensure privacy and flexibility in their commercial transactions.

See also http://www.wired.com/news/politics/0,1283,34477,00.html.

This should be interesting.  "One of only two companies" apparently
refers to ZKS and eCash Technologies, which bought Chaum's patents.
This might lead to a battle royale where we find out once and for all
whether Brands has succeeded in escaping the minefield of Chaum's blinding
patents.  Of course any such battle could delay the implementation of
the technology for years while the patents are tied up in court, which
would be unfortunate.

ZKS has previously hinted about a commitment to open source, recently
hiring Mike Shaver from mozilla.org as Chief Software Officer.  It would
seem that the linkage of open source software and patented technology is
awkward to say the least.  What use is open source if it is patented?
But allowing people to freely modify the source would be tantamount to
giving the patent away.

Have there been other open source projects which used patented technology
owned by the company releasing the source?  How has the licensing been
handled in those cases?

BTW if you want to search for Brands' patents, he is listed as
Stefanus Brands in the U.S. patent db.  Here are his five patents:

US05696827 12/09/1997 Secure cryptographic methods for electronic transfer of 
information
US05668878 09/16/1997 Secure cryptographic methods for electronic transfer of 
information
US05606617 02/25/1997 Secret-key certificates
US05604805 02/18/1997 Privacy-protected transfer of electronic information
US05521980 05/28/1996 Privacy-protected transfer of electronic information



Re: Coerced decryption?

2000-02-11 Thread lcs Mixmaster Remailer

Russell Nelson writes:
> Nobody's mentioned the possibility of an encryption system which
> always encrypts two documents simultaneously, with two different keys:
> one to retrieves the first (real) document, and the second one which
> retrieves to the second (innocuous) document.

This idea has been discussed in the literature in various forms.  See
http://www.deja.com/[ST_rn=ps]/getdoc.xp?AN=520237545 for a critique of
one method and some discussion of alternatives.

In practice though such a system would be hard to use.  In particular,
coming up with plausible cover traffic for each sensitive message would
be time consuming and tedious.  No one would want to use such a method
except specifically for the purpose of being able to reveal false traffic.
Since the method has no independent justification, it is likely that
a regime which requires people to give up their keys would outlaw this
method if people started to use it.



RE: [PGP]: PGP 6.5.2 Random Number Generator (RNG) support

2000-02-03 Thread lcs Mixmaster Remailer

Lucky Green writes:
> Your post is the third or forth post I have seen in the last year that
> claims that Paul concluded that Intel's RNG outputs strong random numbers.

Such as when they said (http://www.cryptography.com/intelRNG.pdf):

   Cryptographically, we believe that the Intel RNG is strong and that
   it is unlikely that any computationally feasible test will be found
   to distinguish data produced by Intel's RNG library from output from
   a perfect RNG. As a result, we believe that the RNG is by far the most
   reliable source of secure random data available in the PC.

Right, it would be a real stretch from this to claim that Paul concluded
that Intel's RNG outputs strong random numbers.

> Paul and Ben did not draw any conclusions about the quality of the random
> numbers generated by Intel's RNG as fielded. Nor could they have drawn such
> conclusions, since neither was given an opportunity to analyze known (to
> them) unwhitened output of the RNG. Which they carefully mention in their
> paper. You may wish to read Section 4 of the document you cited more
> carefully.

It is true that the analysis relied ultimately on information supplied by
Intel, not just for the random data, but for the architecture of the RNG
as well.  Obviously Jun and Kocher did not put a device under an electron
microscope or start etching off layers.  So sure, Intel could have lied
through their teeth, lied about everything, presented a strong design
producing good data, then put in something completely different.  Or even
if they'd supplied a chip sample and the researchers had independently
verified the mask and data, Intel could have changed the design for the
shipping parts.

But Intel could easily get caught in such a fraud, and imagine the fallout
if this happened.  These parts are in systems now, and Kocher could in
theory take one out and compare it with the design information he got
from Intel.  If there's a SHA-1 hash on that chip as some have proposed,
it would stand out like a sore thumb.

And attempts by Intel to present fake random data would have been
even more foolish.  Everyone who's got a chip now can catch them.
They've published exactly what you need to do to get data from the chip.
It takes two lines of code to pull out a byte.  Kocher, you, I, or anyone
else can grab data from this chip and run exactly the same statistical
tests described in his report.  If the chip is producing crappy random
numbers, people will know.

Paranoids will never be satisfied, short of nationalizing the security
industry and putting them in charge (and even then they'd soon stop
trusting one another).  The bottom line is, as Kocher says, the Intel
RNG is BY FAR the best source of secure random data available in the PC.

Previously the paranoids pointed to the lack of information on accessing
the chip hardware as evidence of a cover-up.  Now that obstacle has been
removed, and they fall back on muttering about fake data.  Note that
no thanks have been offered to Intel for releasing the spec, clearly
a step taken in order to facilitate open source development (drivers
already existed for Windows).  Apparently gratitude is too much to ask
from the open source security community.

It's time to put these ravings aside, and work to incorporate this RNG
as a source for Linux /dev/random.  The Linux IPSEC developers need it
badly, for stand-alone servers which have to generate new session keys at
a high rate.  What they have now sucks, and they know it.  By providing
a high volume source of good randomness, the Intel RNG will tremendously
improve the security of network communications.



Re: [PGP]: PGP 6.5.2 Random Number Generator (RNG) support

2000-02-03 Thread lcs Mixmaster Remailer

On Wed, 2 Feb 2000, Martin Minow wrote:

> > http://www.cryptography.com/intelRNG.pdf.
> 
> The one problem I have with the RNG, based on my reading of the
> analysis, is that programmers cannot access the "raw" bitstream,
> only the stream after the "digital post-processing" that converts
> the bitstream into a stream of balanced 1 and 0 bits.

Why do you want this?  The post-processing is a simple Von Neumann bias
remover that looks for 0-1 and 1-0 transitions (actually slightly more
complex, looking at triplets of bits rather than pairs, but the same
idea).  The benefit you would gain from being able to see this biased
data must be balanced against the harm that will result from some people
accidentally using it in the belief that it is secure.
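
For reference, the classic two-bit Von Neumann extractor looks like this
in Python (the Intel circuit, as noted, works on a slightly different
grouping):

    def von_neumann(bits):
        # Examine non-overlapping pairs of raw bits: 01 -> 0, 10 -> 1,
        # and 00/11 are discarded.  If the raw bits are independent,
        # the output is unbiased however biased the source may be.
        out = []
        for a, b in zip(bits[::2], bits[1::2]):
            if a != b:
                out.append(a)
        return out

    raw = [1, 1, 0, 1, 0, 0, 1, 0]   # biased raw oscillator samples
    print(von_neumann(raw))          # [0, 1]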

Bram replied:

> It not only does that, it hashes the thing using sha-1. For all we know,
> the thing might be producing unacceptably small amounts of entropy for
> crypto purposes but large enough amounts that it hardly ever repeats.

No, it doesn't.  The Intel software library does that, but
if you use the spec referenced earlier to access the chip,
http://developer.intel.com/design/chipsets/manuals/298029.pdf, there is
no SHA hash involved.  Hashing or otherwise munging the output before
use is probably still a good idea, though.

> The work on studying the output of Intel's RNG has only had access
> to the post-processed output, plus I believe a file directly from Intel
> which was claimed to be unprocessed output. Yeah ... right.

The post-processed output was processed via the Von Neumann bias remover,
and that's the way the data comes off the chip. It is entirely appropriate
to analyze such output in looking at the quality of the randomness
produced by the chip.  Paul Kocher is not such an idiot as to try to
analyze the output of a SHA-1 whitener for quality of randomness.

> If Intel wants people to trust them, they should quit acting like they're
> covering for bad engineering.

So, what would satisfy you?  Kocher has published the theory of the
device, but that's not good enough.  What more do you need?  Circuit
diagrams?  Device masks?  Even those could be faked.  Don't you need
an observer standing at the Intel fab plant, empowered to take his own
samples of the chips and subject them to analysis?  Or, better, a team of
observers sitting in on all Intel engineering and management meetings to
make sure they don't do anything untrustworthy?

Short of this level of monitoring, it is impossible to be sure that the
chip in your computer is free of backdoors (and even then you have to
worry about somebody sneaking into your house and swapping CPU boards on
you).  Face it: no matter what they do, people are going to bitch, just
like they do at every other crypto or security company in the industry.
There's no satisfying some people.

Intel has gone to great lengths, they have made the design available to
some of the best minds in the fields for scrutiny, and it has passed with
flying colors.  For months people complained about the lack of access to
the hardware, and now they've even opened that up as well (as predicted
here, BTW).  If you really want more, you should say exactly what would
make you happy.  Get a consensus on that from security professionals
and you might get what you want.



Re: [PGP]: PGP 6.5.2 Random Number Generator (RNG) support

2000-02-02 Thread lcs Mixmaster Remailer

It may not have been mentioned here, but Intel has
released the programmer interface specs to their RNG, at
http://developer.intel.com/design/chipsets/manuals/298029.pdf.
Nothing prevents the device from being used in Linux /dev/random now.

As for the concerns about back doors, the best reference on
the design of the RNG remains cryptography.com's analysis at
http://www.cryptography.com/intelRNG.pdf.  Paul Kocher and his team
concluded that the chip was well designed and that the random numbers were
of good quality.  (Note, BTW that the RNG is extremely small, crammed
into the margins of the device.  An RNG which produced undetectably
backdoored random data would probably be an order of magnitude larger.)

Even if Intel wanted to put in a back door, it would be very difficult
to exploit it successfully.  There is no way for the chip to predict how
any given random bit will be used: it may go into a session key directly,
it may be hashed through some kind of mixing function along with other
sources of randomness, it may seed a PRNG which is then used to find
RSA primes.  There are a multitude of different possibilities and it
would be hard in general to design an effective backdoor without knowing
how the output will be used.

And as pointed out before, this level of paranoia is ultimately self
defeating, as Intel could just as easily put back doors into its CPU.
Unless or until you are willing to use a self-designed and self-fabbed
CPU, you are fundamentally at the mercy of the hardware manufacturer.



Re: The problem with Steganography

2000-01-26 Thread lcs Mixmaster Remailer

> For example, it's possible that this email was written by a political
> prisoner in a 3rd world country and he's used steganography to conceal a
> message to his friends and family right here in these 3 paragraphs.  My
> question is, without prior agreement or access to an outside channel, how
> are his friends to know to look on the [EMAIL PROTECTED] Listserv for the
> ciphertext?  No matter how well concealed (stego) or how well encrypted
> (crypto), does he have any way of notifying his friends that they should
> look here without alerting the enemy of his attempts to communicate?

Ideally, this would be handled by prearrangement, so that anyone involved
in a resistance movement or who might be a target as a political prisoner
would know where to post his messages and what keys to use so that they
could be read.

If this was not done, he could still tell his friends that he has heard
that some people embed messages in that specific location.  He could
describe the software program used to do so, and that their public keys
would be used to read the message.  Maybe to be "politically correct"
he could go on and say that doing this would be wrong.

Presumably this would be enough of a hint for anyone; even the bad
guys, of course, but they would not be able to tell whether any data
was actually being sent by this channel.

[Presumably the bad guys would only need to suspect, not to know for
sure -- if they really don't pay attention to human rights, well, I'm
sure the rubber hose will be cheap enough for them to buy. --Perry]



Re: The problem with Steganography

2000-01-26 Thread lcs Mixmaster Remailer

> The basic notion of stego is that one replaces 'noise' in a document with
> the stego'ed information. Thus, a 'good' stego system must use a crypto
> strategy whose statistical properties mimic the noise properties of the
> carrying document. Our favorite off the shelf crypto algorithms do *not*
> have this property -- they are designed to generate output that looks
> statistically random. So, can't we detect the presence of stego'ed data by
> looking for 'noise' in the document that's *too* random?

Yes, and no.

There is no particular difficulty in altering the statistics of encrypted
data to match whatever distribution is necessary for the noise.  So there
is no reason a priori to expect that stego'd data would be "too random".

The real problem arises in constructing an accurate noise model.
Whatever model is built can be matched, but there is always the worry
that the model is not quite right.  In particular, if the adversary can
spend more to construct an accurate noise model than the steganographer,
then he can detect the stego'd data because its statistics will differ
in subtle ways from natural data.

In these circumstances, it is prudent to assume that an adversary does
have more money to spend than the person hiding the data.  He may well
be a large government agency or a private bureaucracy which is looking
for illicit data.  The attacker's budget will often be far bigger than
that of the people who need to hide from him.

All is not necessarily lost; it becomes a matter of sufficient accuracy
for the purpose.  In order to distinguish the stego data from natural
data it may be necessary to acquire a considerable volume of messages.
The stego noise model only needs to be accurate enough to make the data
indistinguishable from noise for the specific volume of data being
embedded.  If this threshold is reached, then even if the attacker's
model is better the stego can still succeed.

The greater danger is a subtle but catastrophic failure of the noise
model, as for example when a new statistical analysis is used which
the steganographer did not consider, perhaps some kind of higher order
correlation.  The well funded attacker can afford to spend time searching
for such statistics, and if he is lucky, the game may be over before it
has begun.

These considerations are the real art and science of steganography.
Plunking data into LSBs is grade school stuff, analogous to ROT13 as a
cipher.  True steganography goes far beyond such elementary substitutions.



Re: The problem with Steganography

2000-01-25 Thread lcs Mixmaster Remailer

> The problem with Steganography is that there's basically no way to
> clue people in to it's location without clueing everyone into it.

That's not a problem.  By definition, successful steganography
is undetectable even when you know where to look.  Otherwise the
steganography has failed.

Encryption is successful if the attacker can't find information about the
plaintext without the key.  Ideally, he can't answer questions about the
plaintext any better with access to the ciphertext than without.

Steganography is successful if the attacker can't distinguish
message-holding data from ordinary data without the key.  Ideally, he
can't guess whether a message is present any better upon inspecting the
cover data than he could without being able to see it.

With this model there is no problem in making everyone aware of where to
look for cover traffic with stego data in it.



Re: small authenticator

2000-01-21 Thread lcs Mixmaster Remailer

At 12:10 PM 01/19/2000 -0700, [EMAIL PROTECTED] wrote:
>Several people have suggested using a MAC; my problem is that the
>opponent can reverse-engineer the chip and find the key.  I was hoping
>to give the chips a public key and have it encrypt a challenge that I'll
>respond to.  On my side, I'd need to prevent chosen-ciphertext attacks.

How about using a hash chain?  Assume there is some fixed number of times
the remote side will have to authenticate itself to the chip, like say
1000.  Choose a random x_0, compute x_1 = hash(x_0), x_2 = hash(x_1), ...
x_1000 = hash(x_999).  Preload the chip with x_1000.

To authenticate itself the other side supplies x_999, the chip verifies
x_1000 = hash(x_999), and overwrites x_1000 with x_999.  Then the next
time the remote side supplies x_998.  If the remote side is reasonably
powerful it only needs to store x_0 and compute the hash chain on the fly.
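
A sketch of the scheme in Python, with SHA-256 standing in for the hash:

    import hashlib, os

    def H(x):
        return hashlib.sha256(x).digest()

    # Setup: x_0 is random, x_{i+1} = hash(x_i), and the chip is
    # preloaded with x_1000 only.
    N = 1000
    chain = [os.urandom(32)]
    for _ in range(N):
        chain.append(H(chain[-1]))

    chip_state = chain[N]

    def chip_authenticate(x_prev):
        # Accept x_{i-1} iff it hashes to the stored x_i, then roll
        # the stored value back one link.
        global chip_state
        if H(x_prev) == chip_state:
            chip_state = x_prev
            return True
        return False

    assert chip_authenticate(chain[N - 1])      # first login uses x_999
    assert chip_authenticate(chain[N - 2])      # then x_998, and so on
    assert not chip_authenticate(chain[N - 2])  # replays are rejected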



Re: Ten Risks of PKI

1999-12-13 Thread lcs Mixmaster Remailer

Carl Ellison writes:
> The Bloomberg attack didn't require connection hijacking.  All that attacker 
> did was post a newsgroup message with a URL in it.

This is presumably a reference to the incident described in
http://news.cnet.com/news/0-1005-200-341267.html, where a PairGain
employee apparently created a fake web page which resembled that
of trusted financial news source Bloomberg, reporting an impending
acquisition of PairGain.  He then posted to Yahoo discussion groups a
reference to his page's URL, using its IP address to disguise the actual
point of origin and claiming it to be a genuine Bloomberg news story.
The result was a 30% rise in PairGain's stock.

This kind of attack is one of the things that PKIs are intended to
address, but in this case no cryptography was used.  Perhaps it would
make good fodder for your upcoming companion article, "Ten Risks of NOT
Using PKI".

> If you're depending on that little lock in the corner of the browser window 
> to mean you're connected to the page you seem to be connected to, and the 
> "seem to be" is derived only from the page contents, you're in trouble.  
> That's more what we were talking about than connection hijacking -- although 
> if you want to go to that trouble, feel free. :)

Okay, but in the context of the risk you identified with PKIs, that is
in fact what we are talking about: ways to get that little lock to appear
when it shouldn't.  They aren't as easy as the Bloomberg attack.

> This shows up more clearly with e-mail.  Here again, you don't have to 
> hijack a connection if the attacker initiates the exchange (sends the first 
> message) and the victim uses the "reply to" button in his mailer.  [E.g.,
> the attacker asks for a copy of the victim's latest draft -- and the 
> victim sends it.]

Again, isn't this a case where a PKI helps rather than harms security?
Getting a cert accepted with the identity of the person the victim
thinks he is responding to will be more difficult than simply sending
an unsigned message which claims to be from that person.

Many of the issues you raised in your article are legitimate (although
not necessarily specific to PKIs), but there seems to be a danger that
you will just end up sowing confusion and doubt.  The result will be
that people will continue to use the old ways and fall into the traps
you have described here.  It's fair to criticize PKIs with an eye towards
improving them, but your article seems more directed at questioning the
value of cryptography itself.



Re: Ten Risks of PKI

1999-12-13 Thread lcs Mixmaster Remailer

> > While this is true, keep in mind that there is more to mounting
> > a successful cryptographic attack than adding root keys and fake
> > certificates.  It is also necessary to intercept the messages which
> > might have gone to the legitimate recipient, and possibly decrypt and
> > re-encrypt them.  All this implies an attacker who has at least temporary
> > write access to the victim's computer, and long term read/write control
> > over the communication channels he will use.
>
> I do not believe this attack requires "long term read/write" access to
> the victim's computer.  If I want to get a forged certificate into a
> client's browser all I have to do is convince the user to browse my
> secure server with Netscape (or another browser) that will prompt the 
> user to install my unrecognized root certificate.  

That's a good point, most browsers are configured to make it easy to
install root certificates.

However this is just the first step in an effective compromise.  Now you
need to get him to use a bogus certificate when he thinks he is using
a good one.  He tries to connect to a secure site, and you need to step
in and play man in the middle.  You must hijack his connection to, say,
www.amazon.com, and direct it to your own site.  Then you can offer your
bogus cert for www.amazon.com and get it accepted.

You need to have long-term control over the communications channel in
order to be able to do these interceptions and get your bogus certs
accepted.  There is more to it than just getting your root cert installed.

And note that the value to the attacker of a browser compromise is
relatively limited.  In most cases the worst that will happen is that
you could steal a credit card number.  Such a prize is hardly worth
the necessary trouble and expense.  This is probably why the browser
companies have done so little to prevent the installation of untrustworthy
root certs.  There just isn't that much an attacker can do with them.



RE: Two Observations on the IETF Plenary Wiretap Vote

1999-11-15 Thread lcs Mixmaster Remailer

Lucky Green <[EMAIL PROTECTED]> writes:

> Over the years, using Wei Dai's term Pipenet (or Pipe-net, as it was spelled
> originally) has firmly been established as denoting an anonymous IP
> network that uses constant or otherwise data independent "pipes" between the
> nodes of the network. Since Freedom uses link padding, I would consider
> Freedom a Pipenet.
>
> It has been the recognition that data-independent traffic flows are a
> necessary design component of a secure anonymous IP network, especially
> between the end-user and the first network node, that sets Pipenet designs
> apart from naive implementations such as the first generation Onion Routers
> and Crowds.

Does Freedom do this?  The white paper at
http://www.zeroknowledge.com/products/Freedom_Architecture.html describes
padding between AIP (Anonymous Internet Proxy) nodes:

: Reading the list of neighbors, the AIP sends "PADDING" packets through
: UDP to the neighbors. These packets have the same size as payload packets
: to provide "for free" cover traffic. The use of PADDING packets and cover
: traffic introduces the notion of a Heartbeat amongst the AIPs. A heartbeat
: is defined as the time delay at which a packet must leave the machine for
: a specific neighbor, hiding any information of the AIP server's status
: (idle or busy).  The heartbeat concept prevents traffic analysis to a
: significant degree. Since packets are sent out on a regular basis, and
: knowing the rate at which these heartbeat packets arrive at a machine,
: an AIP can determine if a neighbor is unreachable since it will fail to
: send an ALIVE packet after a certain amount of time. PADDING packets
: further prevent traffic analysis by maintaining a constant data flow
: between the AIPs. In addition, all data is link encrypted between two
: adjacent routers with a shared session key.

However the diagram does not show the end user's "client" node as an
AIP node.  The document further identifies the AIP as a subsystem of a
Freedom Server node.  These are the "mix" nodes and are a separate set
than the client nodes.

This documentation would apparently be consistent with the use of link
padding between the nodes of the network but not between the user's
machine and the node where it enters the network.  As Lucky points
out, padding from the end-user to the first network node is important.
We need a clear description of the Freedom architecture which answers
this question.
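
For what it is worth, the heartbeat described in the quoted white paper
amounts to a loop like the following sketch (frame format and constants
invented; payloads are assumed to be pre-chunked to fit):

    import time, queue

    PACKET_SIZE = 512   # every frame on the wire is the same size
    HEARTBEAT = 0.1     # seconds between frames, regardless of load

    def run_link(payloads, send):
        # payloads is a queue.Queue of bytes; send transmits one frame.
        # Exactly one frame leaves per heartbeat: payload if available,
        # otherwise pure padding.  An observer sees a constant-rate,
        # constant-size stream either way.
        while True:
            try:
                data = payloads.get_nowait()
                frame = b"\x01" + data.ljust(PACKET_SIZE - 1, b"\x00")
            except queue.Empty:
                frame = b"\x00" * PACKET_SIZE   # PADDING frame
            send(frame)
            time.sleep(HEARTBEAT)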



Re: Freedom/Pipenet Security

1999-11-15 Thread lcs Mixmaster Remailer

Lucky Green wrote:
> Over the years, using Wei Dai's term Pipenet (or Pipe-net, as it was
> spelled originally) has firmly been established as denoting an anonymous
> IP network that uses constant or otherwise data independent "pipes"
> between the nodes of the network. Since Freedom uses link padding, I
> would consider Freedom a Pipenet.

Problem is, PipeNet also refers to a specific protocol design (it's
published at http://www.eskimo.com/~weidai/pipenet.txt). I suggest using
"pipenet" with lower-case letters to denote the class of protocols that
use data-independent traffic to achieve untraceability, and "PipeNet" to
denote Wei Dai's protocol.

Freedom cannot currently be considered a pipenet because the traffic
padding functionality has not been turned on yet.



The Truth About Encryption (Re: NewsScan Daily, 5 November 1999 ("Ab

1999-11-06 Thread lcs Mixmaster Remailer

> THE TRUTH ABOUT ENCRYPTION
> Cambridge University cryptography expert Ross Anderson says governments'
> efforts to keep encryption technology out of the hands of criminals and
> terrorists are misguided: "If I were to hold a three-hour encrypted
> conversation with someone in the Medellin drug cartel, it would be a dead
> giveaway. In routine monitoring, GCHQ (Britain's signals intelligence
> service) would pick up the fact that there was encrypted traffic and would
> instantly mark down my phone as being suspect. Quite possibly the police
> would then send in the burglars to put microphones in all over my house. In
> circumstances like this, encryption does not increase your security. It
> immediately and rapidly decreases it. You are mad to use encryption if you
> are a villain." (New Scientist 6 Nov 99)
> http://www.newscientist.com/ns/19991106/confidenti.html

This is total bullshit, or at best it is a highly limited and short-
sighted view.  Once encryption becomes more widely used there will
be no such red flags going up just because someone wants to speak to
their cousin in Colombia in privacy.  (Not to mention that with internet
phones going through mixing nets there will be no way even to know who
is talking to whom.)

Ross Anderson is arguing that we should not put restrictions on encryption
because using it makes you stand out as someone who has something to hide.
But the only way to make sure that stays the case is to put as many
restrictions as possible on encryption!  If we listen to him and make
encryption freely available to everyone, his argument no longer works.

Hence, in effect, his argument can be twisted to support more restrictions
on access to encryption, rather than less.  It is misguided and counter-
productive to argue along these lines.



Re: Bridge

1999-06-25 Thread lcs Mixmaster Remailer

"Ge' Weijers" <[EMAIL PROTECTED]> writes:
> On Wed, Jun 23, 1999 at 12:46:43PM -0500, Matt Crawford wrote:
> > > > > There are 52! bridge hands, so a random hand has
> > > > > log2(52!) = 226 bits of entropy or 68 decimal digits worth.
> > No, just 52! / (13!)^4 hands, which is around 2^96.
> The interesting part is to come up with an algorithm that only uses 96 bits.

Not so hard.  Let C(m,n) be the number of ways of choosing n items from
a set of m, unordered.

To choose the first hand there are C(52,13) choices, then to choose the
second hand C(39,13), then to choose the third, C(26,13), and the fourth
is then automatically determined.  These will add up to Matt's formula.

The problem thus reduces to choosing one of C(m,n) choices with minimal
number of bits.

To solve it we will assume a random package which can make choices with
specified probability x out of y, where y may not be a power of two,
without wasting bits.  Such a system was recently described on coderpunks
under the thread "being miserly with randomness".

To choose one of C(m,n) choices, we first determine whether the first
element will be in the selection.  The probability is simply n/m.

After we have done this, the problem reduces to either choosing n-1
elements from m-1 in the case that the first element was chosen, or n
elements from m-1 in the case that it was not chosen.  We can therefore
repeat the process to determine whether the second element was selected,
then the third, and so on.  Continue to repeat until all n elements are
chosen.

In conjunction with an efficient x out of y interface to the RNG, this
will choose your bridge hands without wasting any random bits.
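
A sketch of the whole procedure in Python.  Here random.randrange stands
in for the bit-frugal x-out-of-y interface discussed on coderpunks; with
that interface substituted, the deal consumes close to the 96-bit
minimum:

    import random

    def choose(m, n):
        # Select n of m items: item i is taken with probability
        # (picks remaining) / (items remaining), realizing each of the
        # C(m, n) subsets with equal probability.
        picked = []
        for i in range(m):
            if random.randrange(m - i) < n - len(picked):
                picked.append(i)
        return picked

    def deal_bridge():
        cards = list(range(52))
        hands = []
        for _ in range(3):   # the fourth hand is forced
            hand = [cards[i] for i in choose(len(cards), 13)]
            cards = [c for c in cards if c not in hand]
            hands.append(hand)
        hands.append(cards)
        return hands

    for hand in deal_bridge():
        print(hand)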



Re: strong authentication without strong crypto?

1999-02-04 Thread lcs Mixmaster Remailer

> Quick question:  does anyone know of technology or techniques that would
> facilitate strong authentication (_not_ encryption) for unattended high
> volume electronic transactions and does not require strong crypto along
> the lines of DSA or RSA?  Shared secrets are not an option.

There are strong identification protocols, most of which are
mathematically related to RSA and DSA, but which are for identification
rather than encryption or signature.  Schneier devotes chapter 21 to
discussing several of these.  Most of the ID protocols are closely
related to signature schemes, but if you have a political need to say
that you aren't doing signatures, these might fill the bill.



Re: quantum cryptanalysis

1999-02-01 Thread lcs Mixmaster Remailer

> Suppose someone discovers a way to solve NP-complete problems with a
> quantum computer; should he publish?
>
> Granted, the quantum computers aren't big enough yet, but the
> prospects look bright for larger ones in the near future.  It would
> break all classical cryptography.

It would probably be best for the discoverer to publish his result.

First, being able to solve NP-complete problems may not make much
difference.  Presuming you mean that the problems can be solved
in polynomial time, this does not automatically imply that specific
cryptosystems can be broken in a practical way.  The polynomial exponent
or multiplicative factors could still be too large to be useful.

Second, quantum computers will already threaten much cryptography in
use today.  They would be able to factor numbers and find discrete logs,
breaking the public key systems.  For symmetric ciphers, they effectively
halve the key length, which might even allow breaking 128-bit ciphers.
(Which is why the new AES will support keys up to 256 bits.)

Third, the idea of keeping a crypto breakthrough secret for personal gain
has been hashed around for years.  It appears to be difficult to both
profit from the discovery and keep it secret.  A method which would only
work on quantum computers would be that much more difficult to exploit.

Cryptography is big business these days, and someone who makes a discovery
like this can expect fame and, if he desires it badly enough, fortune.
He has every reason to publish.