Re: Formal notice given of rearrangement of deck chairs on RMS PKItanic

2010-10-06 Thread Simon Josefsson
Jack Lloyd ll...@randombit.net writes:

 On Wed, Oct 06, 2010 at 04:52:46PM +1300, Peter Gutmann wrote:

 Right, because the problem with commercial PKI is all those attackers who are
 factoring 1024-bit moduli, and apart from that every other bit of it works
 perfectly.

 _If_ Mozilla and the other browser vendors actually go through with
 removing all < 2048 bit CA certs (which I doubt will happen because I
 suspect most CAs will completely ignore this), it would have one
 tangible benefit:

 (Some of, though unfortunately not nearly all) the old CA certificates
 that have been floating around since the dawn of time (ie the mid-late
 90s), often with poor chains of custody through multiple iterations of
 bankruptcies, firesale auctions, mergers, acquisitions, and so on,
 will die around 2015 instead of their current expirations of
 2020-2038. Sadly this will only kill about 1/3 of the 124 (!!)
 trusted roots Mozilla includes by default.

Another consequence is that people will explore moving to ECC, which is
less studied than RSA and appears to be a patent minefield.  As much as
I'd like to get rid of old hard coded CAs in commonly used software, I
feel there are better ways to achieve that than a policy like this.

/Simon

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: 2048-bit RSA keys

2010-08-17 Thread Simon Josefsson
Bill Stewart bill.stew...@pobox.com writes:

 Basically, 2048's safe with current hardware
 until we get some radical breakthrough
 like P==NP or useful quantum computers,
 and if we develop hardware radical enough to
 use a significant fraction of the solar output,
 we'll probably find it much easier to eavesdrop
 on the computers we're trying to attack than to
 crack the crypto.

Another breakthrough in integer factoring could be sufficient for an
attack on RSA-2048.  Given the number of increasingly efficient integer
factorization algorithms that have been discovered throughout history,
another breakthrough here seems more likely than not to me.

/Simon



Re: Against Rekeying

2010-03-25 Thread Simon Josefsson
Perry E. Metzger pe...@piermont.com writes:

 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:

 http://www.educatedguesswork.org/2010/03/against_rekeying.html

 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.

One situation where rekeying appears to me not only useful but actually
essential is when you re-authenticate in the secure channel.

TLS renegotiation is used for re-authentication, for example, when you
go from no user authentication to user authenticated, or go from user X
authenticated to user Y authenticated.  This is easy to do with TLS
renegotiation: just renegotiate with a different client certificate.

I would feel uncomfortable using the same encryption keys that were
negotiated by an anonymous user (or another user X) before me when I'm
authenticating as user Y, and user Y is planning to send a considerable
amount of traffic that user Y wants to be protected.  Trusting the
encryption keys negotiated by user X doesn't seem prudent to me.
Essentially, I want encryption keys to always be bound to
authentication.

Yes, the re-authentication use-case could be implemented by tearing down
the secure channel and opening a new one, and that may be overall
simpler to implement and support.

However, IF we want to provide a secure channel for application
protocols that re-authenticate, I have a feeling that the secure channel
must support re-keying to yield good security properties.
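The idea of binding encryption keys to authentication can be sketched
with a toy key ratchet (an illustration only: real TLS derives keys
through its own PRF/key schedule, and the construction below is
hypothetical):

```python
import hashlib
import hmac

def rekey(current_key: bytes, identity: bytes) -> bytes:
    """Derive a fresh traffic key bound to the newly authenticated identity.

    Illustrative only: real TLS derives traffic keys via its PRF/key
    schedule, not via this ad hoc construction.
    """
    return hmac.new(current_key, b"rekey:" + identity, hashlib.sha256).digest()

anon_key = bytes(32)              # key material negotiated before user auth
key_x = rekey(anon_key, b"user X")
key_y = rekey(anon_key, b"user Y")

# Each authenticated identity gets its own traffic keys:
assert key_x != key_y
```

The point of the sketch is only that the channel keys change whenever
the authenticated identity changes, which is what re-keying on
re-authentication buys you.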

/Simon



Re: CPRNGs are still an issue.

2008-12-16 Thread Simon Josefsson
Perry E. Metzger pe...@piermont.com writes:

 This does necessitate an extra manufacturing step in which the device
 gets individualized, but you're setting the default password to a
 per-device string and having that taped to the top of the box anyway,
 right? If you're not, most of the boxes will be vulnerable anyway and
 there's no point...

Not quite, you can optimize that.  Some software (e.g., OpenWRT) forces
users to access the device via (local) ethernet before wireless is
enabled.  This enables security-aware people to configure wireless
security and avoid a window of insecure wireless operation.

Incidentally, this approach also enables devices to collect entropy from
the user session.  That could be useful when generating SSH private
keys.  (Although I believe, unfortunately, OpenWRT generates the SSH key
directly after the first boot.  It is unclear what entropy it can
hope to have at that point.)

I agree with your recommendation to write an AES key to devices at
manufacturing time.  However it always comes with costs, including:

1) The cost of improving the manufacturing process sufficiently to
make it unlikely that compromised AES keys are set in the factory.

2) The cost of individualizing each device.

Each of these costs can be high enough that alternative approaches can
be cost-effective. (*)  My impression is that the costs and risks in 1)
are often under-estimated, to the point where the factory can become a
relatively cheap attack vector.
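For illustration, one common way to individualize devices (an
assumption on my part, not necessarily how any particular product does
it) is to derive each per-device key from a factory master key and the
device serial number:

```python
import hashlib
import hmac

def per_device_key(master_key: bytes, serial: str) -> bytes:
    # Each device gets a unique 128-bit key derived from its serial
    # number.  Compromising one device reveals nothing about another
    # device's key, but compromising the factory master key reveals
    # them all -- exactly the risk described in point 1).
    return hmac.new(master_key, serial.encode(), hashlib.sha256).digest()[:16]

master = bytes(range(32))         # hypothetical factory master key
k1 = per_device_key(master, "SN-0001")
k2 = per_device_key(master, "SN-0002")
assert k1 != k2 and len(k1) == 16
```

This reduces the per-device individualization cost (2) to stamping a
serial number, at the price of concentrating all the risk of (1) in the
master key.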

/Simon

(*) In case anyone wonders how the YubiKey (which I'm affiliated with)
works: we took on the costs in 1) and 2), but they are large costs.  We
considered requiring users to go through an initial configuration step
to set the AES key themselves; however, the usability cost of that is
probably higher than 1) and 2).



Re: Protection mail at rest

2008-06-04 Thread Simon Josefsson
Victor Duchovni [EMAIL PROTECTED] writes:

 On Tue, Jun 03, 2008 at 04:37:20PM -0400, Eric Cronin wrote:

 
 On Jun 3, 2008, at 11:51 AM, Adam Aviv wrote:
 
 Depending on the level of protection you want, you could just add a
 script to your .forward to encrypt your email before delivery using
 PGP/GPG. However, this will leave the headers in the clear, so you
 will likely want to create an entirely new envelope for the message
 with the original message encrypted as the body or an attachment.
 
 Does anybody have a recipe for this first mode handy?  Plain-text
 e-mails seem simple enough, but there needs to be a bit of MIME
 unwrapping and rewrapping to correctly handle attachments so that the
 client sees/decrypts them correctly, I think.  I've searched from time
 to time and never found a good HowTo...

 S/MIME supports enveloped MIME objects, if PGP does not work out for MIME
 entities, you could try that. S/MIME is natively supported in Thunderbird,
 Apple Mail, and similar MUAs.

Actually, PGP/MIME uses the same high-level mechanism to wrap MIME
objects as S/MIME does: http://www.ietf.org/rfc/rfc1847.txt

The PGP/MIME description is in: http://www.ietf.org/rfc/rfc3156.txt

Specification-wise both should work equally well, but implementation
quality may differ.

What is often overlooked is that the e-mail envelope (including the
Subject: header field) is not protected or even encrypted under RFC 3156
unless you forward the entire e-mail as a message/rfc822 MIME part
within the PGP/MIME (or S/MIME) message.  Interoperability of that has
historically been poor, but the more modern MUAs should handle it today.

Writing a .forward proxy that wraps incoming e-mails into PGP/MIME
encrypted message/rfc822 attachments should be simple, probably simpler
than PGP/MIME wrapping all the individual MIME parts in the incoming
e-mail.
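A sketch of the wrapping step using Python's standard email library
(the actual gpg encryption step is omitted; the ciphertext below is
just a placeholder, and the addresses are hypothetical):

```python
from email import message_from_string
from email.mime.application import MIMEApplication
from email.mime.message import MIMEMessage
from email.mime.multipart import MIMEMultipart

raw = ("Subject: secret plans\n"
       "From: a@example.org\n"
       "To: b@example.org\n"
       "\n"
       "hello\n")
original = message_from_string(raw)

# Step 1: wrap the complete original mail, headers and all, as a
# message/rfc822 part, so Subject: etc. end up inside the ciphertext.
inner = MIMEMessage(original)

# Step 2 (not shown): encrypt inner.as_bytes() with gpg.  A placeholder
# stands in for the real PGP ciphertext here.
ciphertext = b"-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n"

# Step 3: build the RFC 3156 multipart/encrypted container.
outer = MIMEMultipart("encrypted", protocol="application/pgp-encrypted")
outer.attach(MIMEApplication(b"Version: 1\n", "pgp-encrypted"))
outer.attach(MIMEApplication(ciphertext, "octet-stream"))

assert inner.get_content_type() == "message/rfc822"
assert outer.get_content_type() == "multipart/encrypted"
```

The .forward script would then re-address `outer` and hand it back to
the MTA for local delivery.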

/Simon



Re: The perils of security tools

2008-05-27 Thread Simon Josefsson
Taral [EMAIL PROTECTED] writes:

 On 5/26/08, Simon Josefsson [EMAIL PROTECTED] wrote:
  For example, reading a lot of data from linux's /dev/urandom will
  deplete the entropy pool in the kernel, which effectively makes reads
  from /dev/random stall.  The two devices use the same entropy pool.

 That's a bug in the way the kernel hands out entropy to multiple
 concurrent consumers. I don't think it's a semantic issue.

Do you have any references?  Several people have brought this up before
and have been told that depleting the entropy pool is an intentional
design decision.

Still, the semantics of /dev/*random are not standardized anywhere, and
the current implementation is sub-optimal from a practical point of
view, so I think we are far from even an OK situation.

/Simon



Re: The perils of security tools

2008-05-26 Thread Simon Josefsson
Ben Laurie [EMAIL PROTECTED] writes:

 Steven M. Bellovin wrote:
 On Sat, 24 May 2008 20:29:51 +0100
 Ben Laurie [EMAIL PROTECTED] wrote:

 Of course, we have now persuaded even the most stubborn OS that
 randomness matters, and most of them make it available, so perhaps
 this concern is moot.

 Though I would be interested to know how well they do it! I did
 have some input into the design for FreeBSD's, so I know it isn't
 completely awful, but how do other OSes stack up?

 I believe that all open source Unix-like systems have /dev/random
 and /dev/urandom; Solaris does as well.

 I meant: how good are the PRNGs underneath them?

For the linux kernel, there is a paper:

http://eprint.iacr.org/2006/086

Another important aspect is the semantics of the devices: none of the
/dev/*random devices is standardized anywhere (as far as I know).
Their semantics can and do differ.  This is a larger practical problem.

For example, reading a lot of data from Linux's /dev/urandom will
deplete the entropy pool in the kernel, which effectively makes reads
from /dev/random stall.  The two devices use the same entropy pool.

I believe a much better approach would be if /dev/urandom was a fast and
secure PRNG, with perfect-forward-secrecy properties, and /dev/random
was a slow device with real entropy (whatever that means..) gathered
from the hardware.  The two devices would share little or no code.  The
/dev/urandom PRNG seed could be fed data from /dev/random from time to
time, or from other sources (like kernel task switching timings).  I
believe designs like this have been proposed from time to time, but
there hasn't been any uptake.
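A minimal sketch of such a design (illustrative only, not a proposal
for actual kernel code):

```python
import hashlib
import os

class ForwardSecurePRNG:
    """Sketch of the /dev/urandom design described above: a fast
    hash-based generator whose state is ratcheted one-way after every
    read, so a later state compromise does not reveal earlier outputs."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(self.state + b"output").digest()
            # One-way ratchet: the previous state cannot be recovered
            # from the new one (perfect forward secrecy for outputs).
            self.state = hashlib.sha256(self.state + b"ratchet").digest()
        return out[:n]

    def reseed(self, entropy: bytes) -> None:
        # Mix in fresh entropy from time to time, e.g. from the slow
        # real-entropy device; independent of normal read() load.
        self.state = hashlib.sha256(self.state + entropy).digest()

prng = ForwardSecurePRNG(os.urandom(32))
a = prng.read(16)
b = prng.read(16)
assert a != b and len(a) == 16
```

Note how read() never touches any shared entropy pool; only the
occasional reseed() would consume real entropy, which is the decoupling
the paragraph above argues for.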

/Simon



Re: Toshiba shows 2Mbps hardware RNG

2008-02-21 Thread Simon Josefsson
David Wagner [EMAIL PROTECTED] writes:

 Crawford Nathan-HMGT87 writes:
One of the problems with the Linux random number generator
is that it happens to be quite slow, especially if you need a lot of
data.

 /dev/urandom is blindingly fast.  For most applications, that's
 all you need.

Alas, reading from /dev/urandom depletes the entropy pool much like
reading from /dev/random does.  So if you read a lot of data from
/dev/urandom, you make /dev/random unusable in practice.  This problem
has hit libgcrypt/GnuTLS via the MTA Exim on a lot of Debian systems.  I
would argue that the Linux kernel's /dev/*random system is sub-optimally
designed: reading a lot of data from /dev/urandom should not cause the
system's /dev/random to be unusable.
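For the curious, the kernel's entropy estimate can be observed via
/proc on Linux (note that this describes kernels of that era; modern
kernels have since reworked the design so that /dev/urandom reads no
longer drain the pool):

```python
import os

def entropy_avail() -> int:
    """Kernel entropy estimate (Linux-specific; -1 where unavailable)."""
    try:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read())
    except OSError:
        return -1

before = entropy_avail()
os.urandom(1024 * 1024)   # on kernels of that era, this drained the pool
after = entropy_avail()
print(before, after)
```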

(Admittedly, there were other design flaws in how exim used gnutls, such
as re-generating the DH parameters every X hours, and doing that
synchronously in the SMTP process, causing them all to stall waiting for
entropy, but that has been fixed.)

/Simon



Re: Seagate announces hardware FDE for laptop and desktop machines

2007-10-02 Thread Simon Josefsson
Following up on an old thread with some new information:

 Hitachi's white paper is available from:

 http://www.hitachigst.com/tech/techlib.nsf/techdocs/74D8260832F2F75E862572D7004AE077/$file/bulk_encryption_white_paper.pdf
...
 The interesting part is the final sentence of the white paper:

Hitachi will be offering the Bulk Data Encryption option on all new
2.5-inch hard disk drive models launched in 2007, including both the
7200 RPM and 5400 RPM product lines. At the request of the customer,
 ^^
this option can be enabled or not, at the factory, without any impact
on the drive's storage capacity, features or performance.

Interestingly, Hitachi has updated that paragraph in the paper (re-using
the same URL), and now it reads:

  Hitachi will be offering the Bulk Data Encryption option on specific
  part numbers of all new 2.5-inch hard disk drive products launched in
  2007, including both the 7200 RPM and 5400 RPM product lines. For a
  list of specific part numbers that include the Bulk Disk Encryption
  feature or for more information on how to use the encryption feature,
  see the "How To Guide" for Bulk Data Encryption Technology available
  on our website.

The How To Guide includes screen shots from BIOS configuration.  The
disk appears to be using the standard ATA BIOS password lock mechanism.
The guide is available from:

http://hitachigst.com/tech/techlib.nsf/products/Travelstar_7K200
http://hitachigst.com/tech/techlib.nsf/techdocs/F08FCD6C41A7A3FF8625735400620E6A/$file/HowToGuide_BulkDataEncryption_final.pdf

Without access to the device (I've contacted Hitachi EMEA to find out if
it is possible to purchase the special disks) it is difficult to infer
how it works, but the final page of the howto seems strange:

   Disable security
   

   For an end user to disable security (i.e., turn off the password
   access control):

 1. Enter the BIOS and unlock the drive (when required, BIOS
 dependent).

 2. Find the security portion of your BIOS and disable the HDD user
 password, NOT the BIOS password. The master password is still set.
...

   NOTE: All data on the hard drive will be accessible. A secure erase
   should be performed before disposing or redeploying the drive to
   avoid inadvertent disclosure of data.

One would assume that if you disable the password, the data would NOT be
accessible.  Making it accessible should require a read+decrypt+write of
the entire disk, which would be quite time consuming.  It may be that
this is happening in the background, although it isn't clear.

Another interesting remark is:

  Note that the access method to the drive is stored in an encrypted
  form in redundant locations on the drive.

It sounds to me as if they are storing the AES key used for bulk
encryption somewhere on the disk, and that it can be unlocked via the
password.  So it may be that the bulk data encryption AES key is
randomized by the device (using what entropy?) or possibly generated in
the factory, rather than derived from the password.

/Simon



Re: Seagate announces hardware FDE for laptop and desktop machines

2007-09-07 Thread Simon Josefsson
Jacob Appelbaum [EMAIL PROTECTED] writes:

 Seagate recently announced a 1TB drive for desktop systems and a 250GB
 laptop drive. What's of interest is that it appears to use a system
 called DriveTrust for Full Disk Encryption. It's apparently AES-128.

 The detail lacking press release is here:
 http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=seagate-unveils-new-giants&vgnextoid=6bb0e0e1f0494110VgnVCM10f5ee0a0aRCRD

 The relevant excerpt of it appears to be:
 The Barracuda FDE (full disc encryption) hard drive is the world's
 first 3.5-inch desktop PC drive with native encryption to prevent
 unauthorized access to data on lost or stolen hard drives or systems.
 Using AES encryption, a government-grade security protocol and the
 strongest that is commercially available, The Barracuda FDE hard drive
 delivers endpoint security for powered-down systems. Logging back on
 requires a pre-boot user password that can be buttressed with other
 layers of authentication such as smart cards and biometrics.


 I found this somewhat relevant paper (though it seriously lacks
 important details) on DriveTrust:
 http://www.seagate.com/docs/pdf/whitepaper/TP564_DriveTrust_Oct06.pdf

 Has anyone read relevant details for this system? It seems like
 something quite useful but I'm not sure that I trust something I can't
 review...

Hitachi's white paper is available from:

http://www.hitachigst.com/tech/techlib.nsf/techdocs/74D8260832F2F75E862572D7004AE077/$file/bulk_encryption_white_paper.pdf

(Btw, it contains something as rare as a reasonable threat analysis!  At
least compared to other advertising materials...)

After acquiring the 1TB device and finding no support for this feature,
I re-read some information.  The interesting part is the final sentence
of the white paper:

   Hitachi will be offering the Bulk Data Encryption option on all new
   2.5-inch hard disk drive models launched in 2007, including both the
   7200 RPM and 5400 RPM product lines. At the request of the customer,
^^
   this option can be enabled or not, at the factory, without any impact
   on the drive's storage capacity, features or performance.

I wonder how easy it would be for a normal customer to request this.
I gave up when my supplier said they didn't offer this configuration.

I would be interested to know which key-derivation function they are
using (I'm assuming the key is derived from a password), and which AES
mode, IV, etc.  Knowing that may enable you to verify that data is
really stored encrypted: buy two devices, set one up to use disk
encryption, swap the logic boards, and then read data from the
supposedly encrypted disk.  As for finding out whether they accidentally
also write the AES key to some hidden part of the disk, that may be
more difficult...

/Simon



Re: open source disk crypto update

2007-04-27 Thread Simon Josefsson
Alexander Klimov [EMAIL PROTECTED] writes:

 Are you afraid of attackers secretly changing your software (to
 monitor you?) while your computer is off?

I believe this is a not completely unreasonable threat.  Modifying files
on the /boot partition to install a keylogger is not rocket science, and
(more importantly) can be done remotely, if you gain unauthorized access
to the machine.

If you boot from a trusted USB stick instead, and check the integrity of
the hard disk, the attacker needs to modify the BIOS in order to install
the keylogger.  This may be sufficiently difficult to do on a large
scale (there are many different ways to update BIOS software) that the
attacker goes away to try some other weakness instead.

There is one aspect that I don't recall seeing in this thread: if you
use a laptop, and suspend it to disk, there is no encryption or
authentication of the data as far as I know.  (I believe swsusp
optionally can use SHA-1 or MD5 to verify integrity, but the hash is not
keyed.)  For example, your SSH or PGP RSA key may be copied to disk
without warning.  In addition, someone could modify the on-disk RAM
image to add a new root process when you restart the machine.
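The difference between an unkeyed hash (what swsusp reportedly offers)
and a keyed check can be sketched as follows; where the key would live,
e.g. somewhere in the trusted boot path, is the hard part and is simply
assumed here:

```python
import hashlib
import hmac

image = b"...suspend-to-disk RAM image..."

# Unkeyed hash: detects accidental corruption only, since anyone who
# can rewrite the image can also recompute the digest.
digest = hashlib.sha1(image).hexdigest()

# Keyed MAC: recomputing the tag requires the key, so tampering by
# someone without the key is detectable.  The key below is hypothetical.
key = b"key-held-by-the-trusted-boot-path"
tag = hmac.new(key, image, hashlib.sha256).digest()

tampered = image + b" + injected root process"
# The attacker simply recomputes the unkeyed hash for the new image:
assert hashlib.sha1(tampered).hexdigest() != digest
# ... but cannot produce a valid MAC without the key:
assert not hmac.compare_digest(
    hmac.new(key, tampered, hashlib.sha256).digest(), tag)
```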

 If so, are you sure that there is no hardware keylogger in your
 keyboard and there is no camera inside a ceiling mounted smoke
 detector [1]?

Installing or enabling such features remotely is difficult, and
(importantly) cannot be done in the same way for all hardware.

/Simon



Re: DNSSEC to be strangled at birth.

2007-04-05 Thread Simon Josefsson
Paul Hoffman [EMAIL PROTECTED] writes:

 At 5:51 PM +0100 4/4/07, Dave Korn wrote:
   Can anyone seriously imagine countries like Iran or China signing up to a
system that places complete control, surveillance and falsification
capabilities in the hands of the US' military intelligence?

 No.

 But how does having the root signing key allow those?

 Control: The root signing key only controls the contents of the root,
 not any level below the root.
...
 Falsification: This is possible but completely trivially detected (it
 is obvious if the zone for furble.net is signed by . instead of
 .net). Doing any falsification will cause the entire net to start
 ignoring the signature of the root and going to direct trust of the
 signed TLDs.

If you control the root signing key, you can sign a new zone key for,
e.g., '.com' and then create whatever content you want, e.g.,
'example.com' and sign it with your newly created '.com' zone key.
The signatures would chain back and verify to the root key.
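The chain-walking argument can be sketched with a toy model (HMAC
stands in for real DNSSEC public-key signatures purely for
illustration; the record contents are hypothetical):

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a DNSSEC signature; real DNSSEC signs RRsets
    # with public-key cryptography.
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"root-zone-key"          # held by whoever controls the root

# An attacker holding the root key mints a brand-new ".com" zone key ...
evil_com_key = b"attacker-com-key"
ds_record = sign(root_key, b".com:" + evil_com_key)

# ... and signs arbitrary data under it:
record = b"example.com A 192.0.2.1"
rrsig = sign(evil_com_key, record)

def chain_verifies(root, com_key, ds, data, sig) -> bool:
    # A resolver trusting only the root key walks the chain downwards.
    return (hmac.compare_digest(sign(root, b".com:" + com_key), ds)
            and hmac.compare_digest(sign(com_key, data), sig))

# The forged chain verifies back to the trusted root key:
assert chain_verifies(root_key, evil_com_key, ds_record, record, rrsig)
```

Configuring additional trust anchors (e.g. the .se key) limits what the
root-key holder can silently forge, which is the mitigation suggested
below.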

However, in practice I don't believe many will trust the root key
alone -- for example, I believe most if not all Swedish ISPs would
configure in trust of the .se key as well.  One can imagine a
web-of-trust based key-update mechanism that avoids the need to trust
a single root key.

/Simon



Re: Exponent 3 damage spreads...

2006-09-25 Thread Simon Josefsson
Leichter, Jerry [EMAIL PROTECTED] writes:

 I agree that there are two issues, and they need to be treated
 properly.  The first - including data after the ASN.1 blob in the
 signature computation but then ignoring it in determining the
 semantics - is, I'll argue, an implementation error.  You list only
 OpenSSL as definitely vulnerable, NSS as ?, so it sounds like
 only one definitive example.

Yes.  I'm only familiar with NSS as a user, not as a developer.  For
some reason, the Mozilla bug tracker hides information about this
problem from us, so it is difficult to track the code down.

I believe I identified the patch that solved the problem in NSS;
search for 350640 in:

http://bonsai.mozilla.org/cvsquery.cgi?branch=HEAD&file=mozilla/security/nss/&date=month

The bug discussion is not public:

https://bugzilla.mozilla.org/show_bug.cgi?id=350640

Possibly also bug reports 351079 and 351848 are related to the same
problem, but these bugs are also hidden.

The actual patch for 350640 is:

http://bonsai.mozilla.org/cvsview2.cgi?diff_mode=context&whitespace_mode=show&subdir=mozilla/security/nss/lib/util&command=DIFF_FRAMESET&file=secdig.c&rev1=1.6&rev2=1.7&root=/cvsroot

If some NSS developer could chime in, that would help.

 Even if NSS has the same problem, one has to look at code
 provenance.

Sure.

 OSS efforts regularly share code, and code to pick apart data fields
 is hardly kind of thing that is going to inspire someone to go out
 and do it better - just share.

I think you want to read up on free software license compatibilities,
and in particular OpenSSL vs. the GPL.  But this is a very different
topic that we shouldn't pursue here...

 The second - embedded parameter fields - is a much deeper issue.
 That's a protocol and cryptographic error.  The implementations
 appear to be correctly implementing the semantics implied by the
 spec - ignore parameters you don't care about.

At least some versions of PKCS#1 do NOT say that, e.g., RFC 3447.

RFC 3447 essentially says to generate a new token and use memcmp().
Such implementations would not be vulnerable to any of the current
attacks, except the Kaliski ASN.1 OID attack (an attack that doesn't
work on existing implementations).

 This is common practice, and a fine idea in *most* situations, since
 it allows for extensions without breaking existing implementations.

I believe the principle of "be conservative in what you do; be liberal
in what you accept from others" is generally bad advice.

The principle hides problems that should be fixed.  Hiding a problem
instead of fixing it typically enables bad things to happen.  We've
seen that lead to security problems for many years, this is just one
example.

/Simon



Re: Exponent 3 damage spreads...

2006-09-23 Thread Simon Josefsson
Leichter, Jerry [EMAIL PROTECTED] writes:

 Granted, one or more implementations got this wrong.  (Has anyone looked
 to see if all the incorrect code all descends from a common root, way
 back when?)

We have at least three independent widely used implementations that
got things wrong: OpenSSL, Mozilla NSS, and GnuTLS.

However, note that this isn't a single problem; we are talking about
at least two related problems.  Some implementations are vulnerable to
only one of them.

The first problem was ignoring data _after_ the ASN.1 blob.
Vulnerable: OpenSSL, NSS?

The second problem was ignoring data _in_ the ASN.1 blob, in
particular, in the parameters field.  Vulnerable: OpenSSL, GnuTLS,
NSS?

A several-year-old paper by Kaliski discussed using the ASN.1 OID to
store data in.  It has slightly different properties, but the lesson
in this context is that implementations must properly check the ASN.1
OID field too.

 Until we know whether this is *one* mistake that was copied from
 implementation to implementation, or the same mistake made by
 multiple developers, it's really premature to draw any conclusions.

I hope that I convinced you that this isn't an open question.

/Simon



Re: Exponent 3 damage spreads...

2006-09-21 Thread Simon Josefsson
[EMAIL PROTECTED] (Peter Gutmann) writes:

Consequently, I think the focus on e=3 is misguided. 

 It's not at all misguided.  This whole debate about trying to hang on to e=3
 seems like the argument about epicycles, you modify the theory to handle
 anomalies, then you modify it again to handle further anomalies, then you
 modify it again, and again, ...  Alternatively, you say that the earth
 revolves around the sun, and all of the kludges upon kludges go away.
 Similarly, the thousands of words of nitpicking standards, bashing ASN.1, and
 so on ad nauseum, can be eliminated entirely by following one simple rule:

   Don't use e=3

 This is never going to be reliably fixed if the fix is to assume that every
 implementor and implementation everywhere can get every miniscule detail right
 every time.  The fix is to stop using e=3 and be done with it.

Not using e=3 when generating a key seems like an easy sell.

A harder sell might be whether widely deployed implementations such as
TLS should start to reject signatures done with an e=3 RSA key.

What do people think, is there sufficient grounds for actually
_rejecting_ e=3 signatures?

One alternative would be to produce a warning, similar to what is
sometimes done for MD2 and MD5 today.

Btw, by default, OpenSSH's ssh-keygen appears to use e=35 (0x23),
GnuPG (libgcrypt), GnuTLS and OpenSSL all appear to use e=65537, and
BIND dnssec-keygen appears to use e=3.

/Simon



Re: Why the exponent 3 error happened:

2006-09-18 Thread Simon Josefsson
Whyte, William [EMAIL PROTECTED] writes:

 This is incorrect. The simple form of the attack
 is exactly as described above - implementations
 ignore extraneous data after the hash. This
 extraneous data is _not_ part of the ASN.1 data.
 
 James A. Donald wrote:
But it is only extraneous because ASN.1 *says* it is
extraneous.

 No. It's not the ASN.1 that says it's extraneous, it's the
 PKCS#1 standard. The problem is that the PKCS#1 standard
 didn't require that the implementation check for the
 correct number of ff bytes that precede the BER-encoded
 hash. The attack would still be possible if the hash
 wasn't preceded by the BER-encoded header.

That's not true -- PKCS#1 implicitly requires that check.  PKCS#1 says
the verification algorithm should generate a new signature encoding and
then compare the two.  See RFC 3447 section 8.2.2.  That solves the
problem.
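The re-encode-and-compare idea can be sketched as follows (the SHA-1
DigestInfo prefix is hard-coded; this is a sketch of the RFC 3447
comparison step only, not a complete verifier, since the RSA public-key
operation that recovers the encoded message is omitted):

```python
import hashlib

# DER prefix of DigestInfo for SHA-1 (AlgorithmIdentifier + OCTET STRING tag)
SHA1_DIGESTINFO = bytes.fromhex("3021300906052b0e03021a05000414")

def emsa_pkcs1_v15(msg: bytes, em_len: int) -> bytes:
    """Re-create the full expected encoded message (EMSA-PKCS1-v1_5)."""
    t = SHA1_DIGESTINFO + hashlib.sha1(msg).digest()
    ps = b"\xff" * (em_len - len(t) - 3)
    return b"\x00\x01" + ps + b"\x00" + t

def verify_by_comparison(em_from_signature: bytes, msg: bytes) -> bool:
    # Compare the ENTIRE octet string, as RFC 3447 section 8.2.2
    # implies; no parsing, so trailing garbage cannot slip through.
    return em_from_signature == emsa_pkcs1_v15(msg, len(em_from_signature))

msg = b"hello"
good = emsa_pkcs1_v15(msg, 128)
assert verify_by_comparison(good, msg)

# A Bleichenbacher-style encoding with garbage after the hash fails:
t = SHA1_DIGESTINFO + hashlib.sha1(msg).digest()
forged = b"\x00\x01" + b"\xff" * 8 + b"\x00" + t
forged += b"\x00" * (128 - len(forged))   # attacker-chosen trailing bytes
assert not verify_by_comparison(forged, msg)
```

A parsing verifier that stops checking after the hash would accept the
second encoding; the comparison verifier rejects it for free.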

Again, there is no problem in ASN.1 or PKCS#1 that is being exploited
here, only an implementation flaw, even if it is an interesting one.

After reading http://www.rsasecurity.com/rsalabs/node.asp?id=2020 it
occurred to me that section 4.2 of it describes a somewhat related
problem, where the hash OID is modified instead.  That attack requires
changes in specifications and implementations, to have the
implementation support the new hash OID.  But it suggests a potential
new problem too: implementations that don't verify that the parsed hash
OID length is correct.  E.g., an implementation that uses

memcmp (parsed-hash-oid, sha1-hash-oid,
 MIN (length (parsed-hash-oid), length (sha1-hash-oid)))

to recognize the hash algorithm used in the ASN.1 structure may also
be vulnerable: the parsed-hash-oid may contain garbage that can be
used to forge signatures against broken implementations, similar to
the two attacks discussed so far.  I don't know of any implementations
that do this, though.
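A toy demonstration of why the MIN-length comparison above is
dangerous:

```python
SHA1_OID = bytes.fromhex("06052b0e03021a")  # DER of OID 1.3.14.3.2.26 (SHA-1)

def buggy_match(parsed_oid: bytes) -> bool:
    # The flawed comparison described above: only min(len) bytes compared,
    # so any trailing bytes in the parsed OID are silently ignored.
    n = min(len(parsed_oid), len(SHA1_OID))
    return parsed_oid[:n] == SHA1_OID[:n]

def correct_match(parsed_oid: bytes) -> bool:
    # Full comparison, lengths included.
    return parsed_oid == SHA1_OID

garbage_oid = SHA1_OID + b"\xde\xad\xbe\xef"  # attacker-controlled tail
assert buggy_match(garbage_oid)        # accepted by the flawed check
assert not correct_match(garbage_oid)  # rejected by the full comparison
```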

/Simon



Re: Exponent 3 damage spreads...

2006-09-13 Thread Simon Josefsson
Jostein Tveit [EMAIL PROTECTED] writes:

 Anyone got a test key with a real and a forged signature to test
 other implementations than OpenSSL?

There are actually two problems to consider...

First, there is the situation by Bleichenbacher at Crypto 06 and
explained in:

http://www.imc.org/ietf-openpgp/mail-archive/msg14307.html

That attack uses the fact that implementations don't check for data
beyond the end of the ASN.1 structure.  OpenSSL was vulnerable to this,
GnuTLS was not; see my analysis for GnuTLS at:

http://lists.gnupg.org/pipermail/gnutls-dev/2006-September/001202.html

Eric already posted test vectors that trigger this problem.

The second problem is that the parameters field can ALSO be used to
store data that may be used to manipulate the signature value into
being a cube.  To my knowledge, this was discovered by Yutaka Oiwa,
Kazukuni Kobara, Hajime Watanabe.  I didn't attend Crypto 06, but as
far as I understand from Hal's post, this aspect was not discussed.
Their analysis isn't public yet, as far as I know.

Both OpenSSL and GnuTLS were vulnerable to the second problem.  My
discussion of this for GnuTLS is in:

http://lists.gnupg.org/pipermail/gnutls-dev/2006-September/001205.html

When I read the OpenSSL advisory, I get the impression that it doesn't
quite spell out the second problem clearly, but if you look at the
patch to correct this:

+   /* Excess data can be used to create forgeries */
+   if(p != s+i)
+   {
+   RSAerr(RSA_F_RSA_VERIFY,RSA_R_BAD_SIGNATURE);
+   goto err;
+   }
+
+   /* Parameters to the signature algorithm can also be used to
+  create forgeries */
+   if (sig->algor->parameter
+       && sig->algor->parameter->type != V_ASN1_NULL)
+   {
+   RSAerr(RSA_F_RSA_VERIFY,RSA_R_BAD_SIGNATURE);
+   goto err;
+   }
+

You'll notice that there are two added checks, one check per problem.

Test vectors for this second problem are as below, created by Yutaka
OIWA.

[EMAIL PROTECTED]:~/src/gnutls/tests$ cat pkcs1-pad-ok.pem
-BEGIN CERTIFICATE-
MIICzTCCAjagAwIBAgIJAOSnzE4Qx2H/MA0GCSqGSIb3DQEBBQUAMDkxCzAJBgNV
BAYTAkpQMRQwEgYDVQQKEwtDQSBURVNUIDEtNDEUMBIGA1UEAxMLQ0EgVEVTVCAx
LTQwHhcNMDYwOTA3MTY0MDM3WhcNMDcwOTA3MTY0MDM3WjBPMQswCQYDVQQGEwJK
UDEOMAwGA1UECBMFVG9reW8xFjAUBgNVBAoTDVRFU1QgMiBDTElFTlQxGDAWBgNV
BAMTD3d3dzIuZXhhbXBsZS5qcDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
vSpZ6ig9DpeKB60h7ii1RitNuvkn4INOfEXjCjPSFwmIbGJqnyWvKTiMKzguEYkG
6CZAbsx44t3kvsVDeUd5WZBRgMoeQd1tNJBU4BXxOA8bVzdwstzaPeeufQtZDvKf
M4ej+fo/j9lYH9udCug1huaNybcCtijzGonkddX4JEUCAwEAAaOBxjCBwzAJBgNV
HRMEAjAAMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NMIEdlbmVyYXRlZCBDZXJ0aWZp
Y2F0ZTAdBgNVHQ4EFgQUK0DZtd8K1P2ij9gVKUNcHlx7uCIwaQYDVR0jBGIwYIAU
340JbeYcg6V9zi8aozy48aIhtfihPaQ7MDkxCzAJBgNVBAYTAkpQMRQwEgYDVQQK
EwtDQSBURVNUIDEtNDEUMBIGA1UEAxMLQ0EgVEVTVCAxLTSCCQDkp8xOEMdh/jAN
BgkqhkiG9w0BAQUFAAOBgQCkGhwCDLRwWbDnDFReXkIZ1/9OhfiR8yL1idP9iYVU
cSoWxSHPBWkv6LORFS03APcXCSzDPJ9pxTjFjGGFSI91fNrzkKdHU/+0WCF2uTh7
Dz2blqtcmnJqMSn1xHxxfM/9e6M3XwFUMf7SGiKRAbDfsauPafEPTn83vSeKj1lg
Dw==
-END CERTIFICATE-

-BEGIN CERTIFICATE-
MIICijCCAfOgAwIBAgIJAOSnzE4Qx2H+MA0GCSqGSIb3DQEBBQUAMDkxCzAJBgNV
BAYTAkpQMRQwEgYDVQQKEwtDQSBURVNUIDEtNDEUMBIGA1UEAxMLQ0EgVEVTVCAx
LTQwHhcNMDYwOTA3MTYzMzE4WhcNMDYxMDA3MTYzMzE4WjA5MQswCQYDVQQGEwJK
UDEUMBIGA1UEChMLQ0EgVEVTVCAxLTQxFDASBgNVBAMTC0NBIFRFU1QgMS00MIGd
MA0GCSqGSIb3DQEBAQUAA4GLADCBhwKBgQDZfFjkPDZeorxWqk7/DKM2d/9Nao28
dM6T5sb5L41hD5C1kXV6MJev5ALASSxtI6OVOmZO4gfubnsvcj0NTZO4SeF1yL1r
VDPdx7juQI1cbDiG/EwIMW29UIdj9h052JTmEbpT0RuP/4JWmAWrdO5UE40xua7S
z2/6+DB2ZklFoQIBA6OBmzCBmDAdBgNVHQ4EFgQU340JbeYcg6V9zi8aozy48aIh
tfgwaQYDVR0jBGIwYIAU340JbeYcg6V9zi8aozy48aIhtfihPaQ7MDkxCzAJBgNV
BAYTAkpQMRQwEgYDVQQKEwtDQSBURVNUIDEtNDEUMBIGA1UEAxMLQ0EgVEVTVCAx
LTSCCQDkp8xOEMdh/jAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4GBABsH
aJ/c/3cGHssi8IvVRci/aavqj607y7l22nKDtG1p4KAjnfNhBMOhRhFv00nJnokK
y0uc4DIegAW1bxQjqcMNNEmGbzAeixH/cRCot8C1LobEQmxNWCY2DJLWoI3wwqr8
uUSnI1CDZ5402etkCiNXsDy/eYDrF+2KonkIWRrr
-END CERTIFICATE-

[EMAIL PROTECTED]:~/src/gnutls/tests$ ../src/certtool -e < pkcs1-pad-ok.pem
Certificate[0]: C=JP,ST=Tokyo,O=TEST 2 CLIENT,CN=www2.example.jp
Issued by: C=JP,O=CA TEST 1-4,CN=CA TEST 1-4
Verifying against certificate[1].
Verification output: Verified.

Certificate[1]: C=JP,O=CA TEST 1-4,CN=CA TEST 1-4
Issued by: C=JP,O=CA TEST 1-4,CN=CA TEST 1-4
Verification output: Verified.

[EMAIL PROTECTED]:~/src/gnutls/tests$ cat pkcs1-pad-broken.pem
-BEGIN CERTIFICATE-
MIICzTCCAjagAwIBAgIJAOSnzE4Qx2H/MA0GCSqGSIb3DQEBBQUAMDkxCzAJBgNV
BAYTAkpQMRQwEgYDVQQKEwtDQSBURVNUIDEtNDEUMBIGA1UEAxMLQ0EgVEVTVCAx
LTQwHhcNMDYwOTA3MTY0MDM3WhcNMDcwOTA3MTY0MDM3WjBPMQswCQYDVQQGEwJK
UDEOMAwGA1UECBMFVG9reW8xFjAUBgNVBAoTDVRFU1QgMiBDTElFTlQxGDAWBgNV

Re: GnuTLS (libgrypt really) and Postfix

2006-02-13 Thread Simon Josefsson
Werner Koch [EMAIL PROTECTED] writes:

 On Sat, 11 Feb 2006 12:36:52 +0100, Simon Josefsson said:

   1) It invokes exit, as you have noticed.  While this only happens
  in extreme and fatal situations, and not during runtime,
  it is not that serious.  Yet, I agree it is poor design to
  do this in a library.

 I disagree strongly here.  Any code which detects an impossible state
 or an error clearly due to a programming error by the caller should
 die as soon as possible.  If you try to resolve the problem by working
 around it, you will increase code complexity and thus errors won't be
 detected.  (Some systems might provide a failsafe mechanism at a top
 layer, e.g. by voting between independently developed code.)

That /dev/random doesn't exist seems like a quite possible state to me.
The application would want to shut down gracefully when the library
detects that condition.  The application may be processing files in
different threads.

Further, a library is not in a good position to report errors.  A
user will sit there wondering why Postfix, or some other complex
application, died without any clues.  Returning an error and providing
a foo_strerror() function at least makes it possible to report a useful
error to the user.

I would agree if we were only talking about truly fatal cases, like
asserts() to check explicit pre-conditions for a function, but I
disagree when we move into the area of easily anticipated problems.

However, looking at the code, it is possible for Postfix to handle
this.  They could have installed a log handler with libgcrypt, and
made sure to shut down gracefully if the log level is FATAL.  The
recommendation to avoid GnuTLS because libgcrypt calls exit suggests
that the Postfix developers didn't care to investigate how to use
GnuTLS and libgcrypt properly.  So I don't think there is any real
reason to change code in libgcrypt here.  Postfix could be changed, if
they care about GnuTLS/libgcrypt.

 It is the same rationale why defining NDEBUG in production code is a
 Bad Thing.

Agreed.

   2) If used in a threaded environment, it wants to have access to
  thread primitives.  The primary reason was for RNG pool locking
  (where it is critical), but I think the primitives are now used
  in other places too.  GnuTLS is thread agnostic, so it can't
  initialize libgcrypt properly.

 Against our advice, Nikos declined to implement a proper
 initialization.  Libraries and threading is actually a deep problem.
 It usually works well on GNU/Linux systems but this is more of a
 coincidence than by design.  We did quite some research on this and
 experimented with different ways of automagically initializing the
 thread primitives correctly; they all fail either at runtime or create
 headaches when trying to write proper build rules.  The current
 approach is by far the most flexible and safest.  But yes, the fact
 that one library needs an initialization can't be hidden from the
 application even if the application is using the lib only indirectly
 (Foo->OpenLDAP->GnuTLS->Libgcrypt).

I'd say that the most flexible approach for a library is to write
thread-safe code that doesn't need access to mutexes to work properly.

Implementing the RNG functions like this is a challenge, and may
require kernel-level support (see below), but giving up and requiring
thread hooks seems sub-optimal.

 list.  I think the Linux /dev/urandom implementation is sub-optimal.

 This is known since Ted Ts'o wrote /dev/random, and is justified by
 the Linux hackers' requirement to keep the memory use of a minimal
 Linux build low.

That seems like a poor argument to me.  It may be valid for embedded
devices, but for most desktop PCs, Linux should provide a useful
/dev/urandom.

It seems that it would be possible to write a new /dev/*random
implementation that is more useful to libgcrypt and other RNG
libraries.

Thanks,
Simon

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RSA-640 factored

2005-11-09 Thread Simon Josefsson
Steven M. Bellovin [EMAIL PROTECTED] writes:

 http://mathworld.wolfram.com/news/2005-11-08/rsa-640/

There are timing details in:

http://www.crypto-world.com/announcements/rsa640.txt

They claim they needed 5 months on 80 machines with 2.2GHz processors.

Using these numbers, I think it would be interesting to come up with
an estimate of how expensive it would be to crack larger RSA keys for
someone who used the same software.  I'll make an attempt to do this
below, but I reckon I will make errors...  please correct me.

The complexity for the GNFS is roughly

O(exp(1.923 * (log n)^(1/3) * (log log n)^(2/3)))

where n is the number to factor, according to
http://mathworld.wolfram.com/NumberFieldSieve.html.

I'm not sure translating complexity into running time is reasonable,
but pending other ideas, this is a first sketch.

Let's input the numbers for 2^640:

octave:26> n=2^640
n =  4.5624e+192
octave:27> a=e^(1.923*(log(n))^(1/3)*(log(log(n)))^(2/3))
a =  1.7890e+21

And let's input them for 2^768:

octave:28> n=2^768
n =  1.5525e+231
octave:29> b=e^(1.923*(log(n))^(1/3)*(log(log(n)))^(2/3))
b =  1.0776e+23

Let's compute the difference:

octave:30> b/a
ans = 60.232

In other words, cracking an RSA-768 key would take 60 times as long,
assuming the running time scales exactly as the complexity (which is
unlikely).

So it seems, if you have 80*60 = 4800 machines, you would be able to
crack an RSA-768 key in 5 months.

Continuing this to 1024-bit keys...  (or rather 1023, since 2^1024
overflows Octave's double precision and becomes Inf)

octave:40> n=2^1023
n =  8.9885e+307
octave:41> c=e^(1.923*(log(n))^(1/3)*(log(log(n)))^(2/3))
c =  1.2827e+26
octave:42> c/a
ans =  7.1697e+04
octave:43>

I.e., RSA-1024 is about 7*10^4 times as difficult as RSA-640 using
GNFS.  If you had 80*7*10^4 = 5.6 million machines, you would be able
to crack a 1024-bit RSA key in 5 months.  Or put differently, if you
had 10,000 CPUs it would take 5*80*7*10^4/10^4/12 = 233 years to
factor an RSA-1024 key.
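For what it's worth, the same back-of-the-envelope estimate can be reproduced outside Octave; here is a Python sketch using the same formula and constants (all the usual caveats about equating complexity with running time still apply):

```python
# Back-of-the-envelope GNFS scaling, reproducing the Octave session above
# with the same formula and constants (floats, natural logarithms).
import math

def gnfs_cost(bits: int) -> float:
    """L(n) = exp(1.923 * (ln n)**(1/3) * (ln ln n)**(2/3)) for n = 2**bits."""
    ln_n = bits * math.log(2)
    return math.exp(1.923 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

ratio_768 = gnfs_cost(768) / gnfs_cost(640)    # ~60: RSA-768 vs RSA-640
ratio_1024 = gnfs_cost(1024) / gnfs_cost(640)  # ~7e4 (no 2^1024 overflow:
                                               # we only ever take ln n)
# RSA-640 took 80 machines 5 months; same software on 10,000 CPUs:
years_1024 = 5 * 80 * ratio_1024 / 10_000 / 12
```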

I know there are many hidden assumptions here, and I probably made
mistakes when computing this.  Please point out flaws so we can get
accurate numbers.

Cheers,
Simon



Re: RSA-640 factored

2005-11-09 Thread Simon Josefsson
Victor Duchovni [EMAIL PROTECTED] writes:

 On Wed, Nov 09, 2005 at 05:27:12PM +0100, Simon Josefsson wrote:

 I'm not sure translating complexity into running time is reasonable,
 but pending other ideas, this is a first sketch.
 

 It is not reasonable, because the biggest constraint is memory, not
 CPU. Inverting the matrix requires increasingly prohibitive quantities
 of RAM. Read the DJB hardware GNFS proposal.

Can we deduce a complexity expression from it that could be used to
(at least somewhat reliably) predict the cost of cracking RSA-768 or
RSA-1024, based on the timing information given in this report?  The
announcement doesn't say how much memory these machines had, though,
but perhaps that information can be disclosed.

Thanks,
Simon



GnuTLS 1.2.9

2005-11-07 Thread Simon Josefsson
I thought that this might be of some interest; I just released a new
version of GnuTLS that disable RSA-MD5 for some X.509 uses by default.
See announcements and details below.

Feedback and suggestions are always welcome.

Cheers,
Simon

From: Simon Josefsson [EMAIL PROTECTED]
Subject: GnuTLS 1.2.9
Date: Mon, 07 Nov 2005 22:34:46 +0100

We are pleased to announce the availability of GnuTLS version 1.2.9.

GnuTLS is a modern C library that implements the standard network
security protocol Transport Layer Security (TLS), for use by network
applications.

This is the last non-bugfix release in the 1.2.x series.  We will open
the 1.3.x branch after this release.  The goal of 1.3.x will be to
merge work currently done on CVS branches, for TLS Pre-Shared-Keys and
TLS Inner Application.  Other planned improvements in 1.3.x are
system-independent resume data structures, modularization of the
bignum operations, and TLS OpenPGP improvements.

This release disables the RSA-MD5 algorithm when verifying untrusted
intermediary X.509 CA certificates.  This decision was made based on
the results in Lenstra, Wang and de Weger's Colliding X.509
Certificates.  This is discussed in more detail, including
instructions on how to re-enable the algorithm for applications that
need backwards compatibility, in:
http://josefsson.org/gnutls/manual/html_node/Digital-signatures.html

Noteworthy changes since version 1.2.8:
- Documentation was updated and improved.
- RSA-MD2 is now supported for verifying digital signatures.
- Due to cryptographic advances, verifying untrusted X.509
  certificates signed with RSA-MD2 or RSA-MD5 will now fail with a
  GNUTLS_CERT_INSECURE_ALGORITHM verification output.  For
  applications that must remain interoperable, you can use the
  GNUTLS_VERIFY_ALLOW_SIGN_RSA_MD2 or GNUTLS_VERIFY_ALLOW_SIGN_RSA_MD5
  flags when verifying certificates.  Naturally, this is not
  recommended default behaviour for applications.  To enable the
  broken algorithms, call gnutls_certificate_set_verify_flags with the
  proper flag, to change the verification mode used by
  gnutls_certificate_verify_peers2.
- Make it possible to send empty data through gnutls_record_send,
  to align with the send(2) API.
- Some changes in the certificate receiving part of handshake to prevent
  some possible errors with non-blocking servers.
- Added numeric version symbols to permit simple CPP-based feature
  tests, suggested by Daniel Stenberg [EMAIL PROTECTED].
- The (experimental) low-level crypto alternative to libgcrypt used
  earlier (Nettle) has been replaced with crypto code from gnulib.
  This leads to easier re-use of these components in other projects,
  leading to more review and simpler maintenance.  The new configure
  parameter --with-builtin-crypto replace the old --with-nettle, and
  must be used if you wish to enable this functionality.  See README
  under Experimental for more information.  Internally, GnuTLS has
  been updated to use the new Generic Crypto API in gl/gc.h.  The
  API is similar to the old crypto/gc.h, because the gnulib code was
  based on GnuTLS's gc.h.
- Fix compiler warning in the anonself self test.
- API and ABI modifications:
gnutls_x509_crt_list_verify: Added 'const' to prototype in gnutls/x509.h.
 This doesn't reflect a change in behaviour,
 so we don't break backwards compatibility.
GNUTLS_MAC_MD2: New gnutls_mac_algorithm_t value.
GNUTLS_DIG_MD2: New gnutls_digest_algorithm_t value.
GNUTLS_VERIFY_ALLOW_SIGN_RSA_MD2,
GNUTLS_VERIFY_ALLOW_SIGN_RSA_MD5: New gnutls_certificate_verify_flags values.
  Use when calling
  gnutls_x509_crt_list_verify,
  gnutls_x509_crt_verify, or
  gnutls_certificate_set_verify_flags.
GNUTLS_CERT_INSECURE_ALGORITHM: New gnutls_certificate_status_t value,
used when broken signature algorithms
are used (currently RSA-MD2/MD5).
LIBGNUTLS_VERSION_MAJOR,
LIBGNUTLS_VERSION_MINOR,
LIBGNUTLS_VERSION_PATCH,
LIBGNUTLS_VERSION_NUMBER: New CPP symbols, indicating the GnuTLS
  version number, can be used for feature existence
  tests.

Improving GnuTLS is costly, but you can help!  We are looking for
organizations that find GnuTLS useful and wish to contribute back.
You can contribute by reporting bugs, improve the software, or donate
money or equipment.

Commercial support contracts for GnuTLS are available, and they help
finance continued maintenance.  Simon Josefsson Datakonsult, a
Stockholm based privately held company, is currently funding GnuTLS
maintenance.  We are always looking for interesting development
projects.

If you need help to use GnuTLS, or want to help others, you are
invited to join our help-gnutls mailing list, see:
http://lists.gnu.org/mailman/listinfo/help-gnutls

Re: Fwd: Tor security advisory: DH handshake flaw

2005-09-01 Thread Simon Josefsson
Werner Koch [EMAIL PROTECTED] writes:

 On Mon, 29 Aug 2005 17:32:47 +0200, Simon Josefsson said:

 which are Fermat pseudoprime in every base.  Some applications,
 e.g. Libgcrypt used by GnuPG, use Fermat tests, so if you have control
 of the random number generator, I believe you could make GnuPG believe
 it has found a prime when it only found a Carmichael number.

 5 Rabin-Miller tests using random bases are run after a passed Fermat
 test.

If you control the random number generator, you control which
Miller-Rabin bases are used, too.

Of course, it must be realized that the threat scenario here is
slightly obscure.  The scenario I have been thinking about is when an
attacker has gained control of the hardware or kernel.  The attacker
might then be able to see when a crypto library requests randomness,
and return carefully constructed data to fool the user.  The
constructed data should be such that the RSA/DH parameters become weak
[for the attacker].  The attacker may not be in a position to send the
generated prime back home over the network, and doing that may also be
detected by firewalls.  The target system might not even be networked.

Designing this fake random number generator is not trivial, and would
likely have to be done separately for each crypto library that is
used.  If software only used prime numbers that came with a primality
certificate, this attack could be defeated.

Too bad you can't mathematically certify that real randomness was
used in choosing the prime, too.  Although perhaps you get pretty
close with algorithms that generate both a prime and a primality
certificate in one go.

Regards,
Simon



Re: Fwd: Tor security advisory: DH handshake flaw

2005-08-31 Thread Simon Josefsson
Ben Laurie [EMAIL PROTECTED] writes:

 Simon Josefsson wrote:
 No, the certificate is verifiable in deterministic polynomial time.
 The test is probabilistic, though, but as long as it works, I don't
 see why that matters.  However, I suspect the ANSI X9.80 or ISO 18032
 paths are more promising.  I was just tossing out URLs.

 Surely Miller-Rabin is polynomial time anyway?

Yes, but it doesn't produce certificates; the algorithm that I cited
does.  The algorithm to _verify_ the certificate is not probabilistic,
only the algorithm to _produce_ the certificates is.

Btw, could you describe the threat scenario where you believe this
test would be useful?

Thanks,
Simon



Re: Fwd: Tor security advisory: DH handshake flaw

2005-08-29 Thread Simon Josefsson
Ben Laurie [EMAIL PROTECTED] writes:

 Simon Josefsson wrote:
 Ben Laurie [EMAIL PROTECTED] writes:
 
[EMAIL PROTECTED] wrote:

So Miller-Rabin is good for testing random candidates, but it is easy to
maliciously construct an n that passes several rounds of
 Miller-Rabin.  

Interesting! So how does one go about constructing such an n?
 I wonder if the original author didn't think of Carmichael numbers,
 which are Fermat pseudoprime in every base.  Some applications,
 e.g. Libgcrypt used by GnuPG, use Fermat tests, so if you have control
 of the random number generator, I believe you could make GnuPG believe
 it has found a prime when it only found a Carmichael number.

 Surely the attack of interest is where the attacker provides the prime - 
 no control of RNGs is required for this.

Right.  The attack I mentioned was a tangent off the Fermat test.

However, controlling the prime numbers seems comparable to
controlling the random number generator: you have some access to the
subject's hardware, and want to trick the software into using
crackable parameters for RSA, DH, etc.  If the application refused to
use prime numbers that lack a proof, neither of these two attacks
would be possible.  So this is actually a class of attacks.

 However, for Miller-Rabin, it has been proven that all composite
 numbers pass the test for at most 1/4 of the possible bases.  So as
 long as you do sufficiently many independent tests (different bases,
 preferably chosen at random), I don't see how you could be fooled.
 http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html
 Doing the test for more than 1/4 of the bases (which would actually
 prove the number prime, although without a succinct witness) for large
 numbers is too expensive though.
 One algorithm that results in a polynomially verifiable witness is:
 Almost All Primes Can be Quickly Certified
 http://theory.lcs.mit.edu/~cis/pubs/shafi/1986-stoc-gk.pdf

 This appears to be a probabilistic certificate, which strikes me as 
 rather pointless.

No, the certificate is verifiable in deterministic polynomial time.
The test is probabilistic, though, but as long as it works, I don't
see why that matters.  However, I suspect the ANSI X9.80 or ISO 18032
paths are more promising.  I was just tossing out URLs.

Regards,
Simon



Re: Fwd: Tor security advisory: DH handshake flaw

2005-08-29 Thread Simon Josefsson
Ben Laurie [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] wrote:
 So Miller-Rabin is good for testing random candidates, but it is easy to
 maliciously construct an n that passes several rounds of
 Miller-Rabin.  

 Interesting! So how does one go about constructing such an n?

I wonder if the original author didn't think of Carmichael numbers,
which are Fermat pseudoprime in every base.  Some applications,
e.g. Libgcrypt used by GnuPG, use Fermat tests, so if you have control
of the random number generator, I believe you could make GnuPG believe
it has found a prime when it only found a Carmichael number.

However, for Miller-Rabin, it has been proven that all composite
numbers pass the test for at most 1/4 of the possible bases.  So as
long as you do sufficiently many independent tests (different bases,
preferably chosen at random), I don't see how you could be fooled.

http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html

Doing the test for more than 1/4 of the bases (which would actually
prove the number prime, although without a succinct witness) for large
numbers is too expensive though.
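To make this concrete, here is a small Python sketch (illustrative only) using the smallest Carmichael number, 561 = 3*11*17: every base coprime to 561 fools the Fermat test, a single Miller-Rabin round with base 2 already detects it, and well under 1/4 of all bases are strong liars:

```python
# The smallest Carmichael number, 561 = 3 * 11 * 17, fools the Fermat test
# for *every* base coprime to it, but not the Miller-Rabin test.
import math

def fermat_probable_prime(n: int, base: int) -> bool:
    """Fermat test: a prime n satisfies base**(n-1) == 1 (mod n)."""
    return pow(base, n - 1, n) == 1

def miller_rabin_round(n: int, base: int) -> bool:
    """One Miller-Rabin round; True means probable prime for this base."""
    d, r = n - 1, 0
    while d % 2 == 0:       # write n - 1 = d * 2**r with d odd
        d //= 2
        r += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(r - 1):  # square up; a genuine prime must hit n - 1
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

carmichael = 561  # composite: 3 * 11 * 17
fermat_fooled = all(fermat_probable_prime(carmichael, a)
                    for a in range(2, carmichael)
                    if math.gcd(a, carmichael) == 1)
mr_detects = not miller_rabin_round(carmichael, 2)
# Strong liars are provably at most 1/4 of the bases for any odd composite:
liars = sum(miller_rabin_round(carmichael, a) for a in range(1, carmichael))
```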

One algorithm that results in a polynomially verifiable witness is:

Almost All Primes Can be Quickly Certified
http://theory.lcs.mit.edu/~cis/pubs/shafi/1986-stoc-gk.pdf

Btw, I've been playing with prime proving in the past, and if you want
to specify a format for prime proofs that OpenSSL would understand, I
would consider supporting the same format in GnuTLS.  Trusting that
numbers are prime for cryptographic purposes should require a proof.
There are several prime proof formats, but I can't tell if they are
practical for this purpose.
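As one concrete (deliberately simplified, hypothetical) example of such a format, here is a Python sketch of a Pratt-style certificate verifier: a certificate for n is a witness a of full order together with recursive certificates for the prime factors of n - 1, and verification is deterministic polynomial time:

```python
# Minimal verifier for Pratt-style primality certificates (a simplified,
# hypothetical format, not one of the standardized ones).
def verify_pratt(n: int, cert) -> bool:
    """cert = (a, [(p, cert_p), ...]) where the p are the distinct prime
    factors of n - 1; returns True iff the certificate proves n prime."""
    if n == 2:
        return True          # base case of the recursion
    if n < 2 or n % 2 == 0:
        return False
    a, factors = cert
    # The listed primes must account for all of n - 1 (with multiplicity).
    m = n - 1
    for p, _ in factors:
        while m % p == 0:
            m //= p
    if m != 1:
        return False
    # Lucas test: a**(n-1) == 1 and a**((n-1)/p) != 1 for every p | n - 1
    # proves a has order n - 1, hence n is prime.
    if pow(a, n - 1, n) != 1:
        return False
    for p, cert_p in factors:
        if pow(a, (n - 1) // p, n) == 1 or not verify_pratt(p, cert_p):
            return False
    return True

# 7 - 1 = 2 * 3, and 3 generates the multiplicative group mod 7.
cert_3 = (2, [(2, None)])
cert_7 = (3, [(2, None), (3, cert_3)])
```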

Regards,
Simon



Re: Monoculture

2003-10-02 Thread Simon Josefsson
Perry E. Metzger [EMAIL PROTECTED] writes:

 Guus Sliepen [EMAIL PROTECTED] writes:
  In that case, I don't see why you don't bend your efforts towards
  producing an open-source implementation of TLS that doesn't suck.
 
 We don't want to program another TLS library, we want to create a VPN
 daemon. 

 Well, then you might consider using an existing TLS library. It is
 rather hard to make a protocol that does TLS things that is both safe
 and in any significant way simpler than TLS.

Several people have now suggested using TLS, but nobody seems to have
refuted the arguments made earlier against building VPNs over TCP, in
http://sites.inka.de/~bigred/devel/tcp-tcp.html.

I have to agree with many things in the paper; using TCP (as TLS does)
to tunnel TCP/UDP is a bad idea.  Off-the-shelf TLS may be a good
security protocol, but it is not a good VPN protocol.  Recommending
TLS without understanding, or caring about, the application domain
seems almost arrogant to me.

Admittedly, you could invent a datagram-based TLS, but that is
neither widely implemented nor specified (although I vaguely recall
WTLS), so then you are back at square one as far as security analysis
goes.

Thanks,
Simon



Re: Attacking networks using DHCP, DNS - probably kills DNSSEC NOT

2003-06-30 Thread Simon Josefsson
Bill Stewart [EMAIL PROTECTED] writes:

* Your laptop sees and uses the name yahoo.com.attackersdomain.com.
   You may be able to verify this using your DNSSEC root key, if the
   attackersdomain.com people have set up DNSSEC for their spoofed
   entries, but unless you are using bad software or judgment, you will
   not confuse this for the real yahoo.com.

 The DNS suffix business is designed so that your laptop tries
 to use yahoo.com.attackersdomain.com, either before yahoo.com
 or after unsuccessfully trying yahoo.com, depending on implementation.
 It may be bad judgement, but it's designed to support intranet sites
 for domains that want their web browsers and email to let you
 refer to "marketing" as opposed to "marketing.webservers.example.com",
 and Netscape-derived browsers support it as well as IE.

It can be a useful feature, but it does not circumvent DNSSEC in any
way that I can see.  DNSSEC sees yahoo.com.attackersdomain.com and can
verify that the IP addresses for that host are the ones that the owner
of the y.c.a.c domain publishes, and that is what DNSSEC delivers.
The bad judgement I referred to was if your software, after DNSSEC
verification, confuses yahoo.com with yahoo.com.attackersdomain.com.

Of course, everything fails if you ALSO get your DNSSEC root key from
the DHCP server, but in this case you shouldn't expect to be secure.
I wouldn't be surprised if some people suggest pushing the DNSSEC root
key via DHCP though, because alas, getting the right key into the
laptop in the first place is a difficult problem.

 I agree with you and Steve that this would be a Really Bad Idea.
 The only way to make it secure is to use an authenticated DHCP,
 which means you have to put authentication keys in somehow,
 plus you need a reasonable response for handling authentication failures,
 which means you need a user interface as well.
 It's also the wrong scope, since the DNSSEC is global information,
 not connection-oriented information, so it's not really DHCP's job.

I think it is simpler to have the DNSSEC root key installed with the
DNSSEC software.  If someone can replace the root key in that
distribution channel, they could also modify your DNSSEC software, so
you are no worse off.




Re: Attacking networks using DHCP, DNS - probably kills DNSSEC

2003-06-29 Thread Simon Josefsson
Bill Stewart [EMAIL PROTECTED] writes:

 At 11:15 PM 06/28/2003 -0400, Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], Bill Stewart writes:
 This looks like it has the ability to work around DNSSEC.
 Somebody trying to verify that they'd correctly reached yahoo.com
 would instead verify that they'd correctly reached
 yahoo.com.attackersdomain.com, which can provide all the signatures
 it needs to make this convincing.
 
 So if you're depending on DNSSEC to secure your IPSEC connection,
 do make sure your DNS server doesn't have a suffix of echelon.nsa.gov...

No, that's just not true of DNSsec.  DNSsec doesn't depend on the
integrity of the connection to your DNS server;
rather, the RRsets are digitally signed.
In other words, it works a lot like certificates,
with a trust chain going back to a magic root key.

 I thought about that, and I think this is an exception,
 because this attack tricks your machine into using the
 trust chain yahoo.com.attackersdomain.com., which it controls,
 instead of the trust chain yahoo.com., which DNSSEC protects adequately.
 So you're getting a trustable answer to the wrong query.

No, I believe only one of the following situations can occur:

* Your laptop sees and uses the name yahoo.com, and the DNS server
  translates it into yahoo.com.attackersdomain.com.  If your laptop
  knows the DNSSEC root key, the attacker cannot spoof yahoo.com, since
  it doesn't know the yahoo.com key.  This attack is essentially a
  man-in-the-middle attack between you and your recursive DNS server.

* Your laptop sees and uses the name yahoo.com.attackersdomain.com.
  You may be able to verify this using your DNSSEC root key, if the
  attackersdomain.com people have set up DNSSEC for their spoofed
  entries, but unless you are using bad software or judgment, you will
  not confuse this for the real yahoo.com.

Of course, everything fails if you ALSO get your DNSSEC root key from
the DHCP server, but in this case you shouldn't expect to be secure.
I wouldn't be surprised if some people suggest pushing the DNSSEC root
key via DHCP though, because alas, getting the right key into the
laptop in the first place is a difficult problem.

