Re: [cryptography] [FORGED] Re: Kernel space vs userspace RNG

2016-05-18 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Sadly, people's prejudices lead them to overcomplicate the issue.

It's certainly true that a Geiger counter measures something that's truly 
random (for some suitable value of "truly random") because of quantum effects. 
But so does a noisy diode or resistor noise. The difference is that radioactive 
decay is sexy because you have to get exotic and dangerous material, whereas a 
resistor is just carbon, and so people are quite sure that it doesn't actually 
have atoms in it, let alone quanta or quarks. Quanta are exotic. It's not 
like they make quantum computers out of atoms, right?

Similarly, the lava lamp is cool, but you get just as good (and often better) 
real randomness out of the same camera with the lens cap on. That's because 
the sensor picks up quantum crap caused by many things (from noise similar to 
the above to virtual particles), but with light coming in, the image washes 
out the quantum crap. It just doesn't *feel* random to take readings from a 
camera with the lens cap on.
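
To make that concrete: a minimal sketch of harvesting lens-cap sensor noise, 
assuming a hypothetical capture_frame() that returns one frame of raw sensor 
bytes (OpenCV, V4L2, whatever). The conditioning hash is the important part:

    import hashlib

    def harvest_sensor_entropy(capture_frame, frames=64):
        # Condition lens-capped sensor noise into a 32-byte seed. Keep
        # only the noisy low-order bit of each sample, and hash it all
        # so any structure in the noise gets mixed away.
        h = hashlib.sha256()
        for _ in range(frames):
            frame = capture_frame()   # raw sensor bytes, lens cap on
            h.update(bytes(b & 1 for b in frame))
        return h.digest()             # seed material for your RNG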

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVzwPwfD9H+HfsTZWAQjO3wf7Bzm4yrerdi4Y0FPh90dCRpLLFo8/t2Va
bcWAxLp/ogpVc5yqfMyJ9UIyF8KBJzhPuY9nUA/yzG1k9xTdAEuw5H2jP8Azfdd3
dxthTh+OVx/GdFjtv9lxbG3YT7JQgO7nwTKZ73n9samQ/sf+HfGmrqwnS5w5Wv6H
3Wb3W0pM6gGHQzq+SJc6zEO8cFPEwCx84qV2E/wz6qFbMzJ6HrN/CF5T4G5wGOQx
t1eXozrKY2h9MsKJFTGoxLRgpRUgnAU/kZvW8sGkxLkonsyI5yHqYUNmAvEh3WCl
IBXmEt/WndnbyFSrUzVGcNxwNrseJwHriWw5u7FqeFTHOvTQjLPD8Q==
=4c1p
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Unbreakable crypto?

2015-03-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Here's an optimization:

* Assume you have a decent One Time Pad generator.

* Assume you have a secure pad delivery system.

* Assume it is reasonably low-latency and high-volume. Say somewhere between 
Usenet and the modern Internet.

Now then -- instead of sending the pads, send the message. It gets delivered 
with the same security as the pad, so it has security identical to using the 
OTP. Even better, you don't have to worry about insecurity of the OTP generator.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVQ8kVvD9H+HfsTZWAQglHAgApSI+gBHAzenSwtoE64g+TRb17tEbD3Vq
dSjtzFlp+j4k4DqoMTXCzmG0xmvVunZqsKFpxActAA6ztbN5gKX1xnOmFDH/dn8z
s5rw8RJNteIxRitTtb8+01yJiR4lzuJuQPcGX+ag6pF1GFOhNWf4sYLDVL0ya61u
wXe4Ykz1E+S2zPDmqAnTvJaBgc+wWvTSe2CT+6T7hOfFf0eCn/h21Js+8vFfdhiJ
K0aOzJH4aFdNuPGqKN48GKmFOvdnbrfZ0v9Y9zk1tnoM1YszX/HXXTxsOKSr4mzX
V3u52AH4viqrR0KbFQ/7aU7pR7lIQtML2fgoWDLQhnr3DJ7Vrn152w==
=1PVt
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PGP word list

2015-02-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

 
 I just realised one barrier -- language.  It uses the English language, and 
 PGP might be stronger in Europe than in the anglo world.
 
 
 So perhaps the wordset should be retuned to being some form of 
 internationalised english, words that are recognisable by a wide set of 
 cultures?
 
 Things like: weekend, manyana, angst, perestroika, bollywood, ...
 
 just a thought.

We're using the PGP word list for verifying short authentication strings.

You're bringing up a great point, and it's one we're dealing with. 

Ultimately, the problem is that any given word is going to be unpronounceable 
gibberish to *someone* and you want that set of words and someones to be small 
enough.

The alternative is to use something like base32 and the ICAO/NATO word list 
(alpha, bravo, charlie, delta, echo, etc.) or even bare letters and numbers to 
get base32.

The PGP word list is really two lists of 256 words each -- one of two-syllable 
words, one of three-syllable words -- with each word encoding eight bits. You 
can either alternate two-syllable and three-syllable words for error 
correction, or combine the lists. That gives you either eight or nine bits per 
word, versus five bits for ICAO.
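
As a sketch of the mechanism (the lists here are truncated to their first few 
entries; the real ones have 256 words each):

    # Even-offset bytes map to the two-syllable list, odd-offset bytes
    # to the three-syllable list, so a dropped or transposed word is
    # immediately detectable.
    EVEN_WORDS = ["aardvark", "absurd", "accrue", "acme"]            # ...256 total
    ODD_WORDS = ["adroitness", "adviser", "aftermath", "aggregate"]  # ...256 total

    def to_words(data: bytes) -> list[str]:
        return [EVEN_WORDS[b] if i % 2 == 0 else ODD_WORDS[b]
                for i, b in enumerate(data)]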

At the end of the day, you're either taking a hit on intelligibility with bare 
letters and numbers, or using English words. You have to pick the way in 
which you want it to suck.

The advantage of the PGP word list is that you get a large number of bits per 
word, but the cost is a high chance of hitting a word that's baffling to 
someone. The ICAO list gives you fewer bits per word, but at least there are 
only 32 words to learn. Bare letters have some of the worst properties of all 
-- they're easily misheard (which is why the ICAO list exists in the first 
place), and even more so across languages.

So pick your poison.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVOYF9PD9H+HfsTZWAQhIRwf8CHlbpHidIYNLE8MpXBRAPq9w1QMbC5ZF
m37Zcei8Cyg9+UbAxZGdn1yWPQ8uRprAbQ60LCP8LVo6KY5e+q8KrmOsFkl/eaQN
9DUgFNaigjQJojMgaB/92DvXZG5FGN6z7Fs1pBPpMmvlEtVWaD9mN2Ny06jzdmai
8JTdJuQv8UD37daB/5Uxeg0AL5ap5WIEzl/MQnzSNHIlQyFvELbfSh/R/sD8yqKB
dA1l2g/54kwPtuVld+RkGQ4NWqha/hi2uJc14v3LO2J+Ubocbcalb1BNkY4de0X9
MTd525ZQi5hTmOynlBNvWDfPGkf985Ubfcei4bEuTOlncdXVNLfQ1Q==
=ptz5
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Gogo inflight Internet uses fake SSL certs to MITM their users

2015-01-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


On Jan 8, 2015, at 3:37 PM, John Levine jo...@iecc.com wrote:

 
 Do the fake certs validate in web browsers?  


No, they do not validate.

If you go (went) to a YouTube, Vimeo, etc. site -- URL, embedded whatever -- 
you'd get the expected browser certificate-failure error.


 If so, who's giving them fake
 *.google.com certs?

I apologize for being a smartass on this, especially since the premise of your 
conditional is false. But I just can't resist; please take this with the humor 
I offer it with:

https://www.google.com/search?q=how+do+I+use+openssl+to+generate+a+certificate

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVK8ZcvD9H+HfsTZWAQhklQf+PFg6a0O6ap3ewKH4hLMz2vGaoDC3d+Ye
HN5LYlvjdQsHqYgizc9QFHdT0/y9ZdWcpS99heaUeYPaGsMxoEId+WfCMfpUj6UD
683KSegfPq+lGev3MHaX6t0Eq0j+VojFuBdRHQ3HyRrnuNgT8yxfs9jnpQS/2AKh
EBbuxS4hB5Ar8pwJdHTjgxjjqqLif0ouhL+GFsWUbAq6RsEIVowcoSNXqzgeRPkr
1b25hk2MlebkZssr7L6PGfNKr6cpDccUCjIdXBBMsG/C7ZLg5W0oqQCiirsOYOk6
Kt2gKL/cDDEezdcbSn9cFtklI35RLXJoM3Oty/iEVzXYuibaHcyqiQ==
=6PT0
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Gogo inflight Internet uses fake SSL certs to MITM their users

2015-01-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


On Jan 6, 2015, at 8:34 AM, shawn wilson ag4ve...@gmail.com wrote:

 You can smartly limit resolution in squid - I don't trust this is what
 they were doing, but you could provide a better experience like this.

It is what they are doing. I am an unhappy (for many reasons) regular (for many 
other reasons) Gogo customer, and noticed pretty quickly when they started 
doing it. I looked at their certs and it's an awful-user-experience way of 
blocking videos, and I strongly suspect that the rotten user experience is the 
intent.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.3.0 (Build 9060)
Charset: us-ascii

wsBVAwUBVK8RMfD9H+HfsTZWAQidnwf9EsXGOIyf1gUq7b2o92SFOdENxhmc0b3H
/7NTBm1beKwq6LA6nwxrl8zunfuxNRVKn9ZCfyCteE+2mpzafFrxHubBPbKcffRX
motiqHmNs6nYrVNNbZe7BCbb6ds23gFuwREe8wPVrCplWz9n65hm+pf7FBhDlVwr
OMsVcMt6yGffnYOZhv/apbRPEUwj+ltkI0RKybAwxnEFDORcKto/MOckClKcbC60
RSAxt7r/R5GOUpCddAPXAI5o9rz6Rd6RsGEgVccnjmYMg/uj0Eb8Ko31GR702uX0
VklDxdH8HCzfkNpgewx7oLktsW1FxTqPsHxfiZPyiEv1uN9pdit+SA==
=UzPn
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] State of the art in block ciphers?

2013-12-07 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Dec 5, 2013, at 12:13 AM, Matthew Orgass darks...@city-net.com wrote:

 
  I recently looked into this and Threefish seems to be the only block cipher 
 I could find that provides major advantages over AES.  The large block sizes 
 and tweak parameter make it a good fit for disk encryption. I don't know how 
 the performance compares to hardware AES.  I haven't so far come across any 
 good reason to start using any block cipher other than AES or Threefish 
 (unless special circumstances are involved).

Thanks. Full disclosure, I'm a Threefish/Skein coauthor.

Performance-wise, software anything is not going to compare against hardware 
anything-else. If you are so desperate for performance that you consider it 
more important than security, then there are lots of good answers for you. XOR, 
for example, is about as fast as you can get, and all you need is a good key 
generation algorithm. AES in hardware (like AES-NI) runs faster than one clock 
per byte if everything's set up correctly. If things aren't set up correctly, 
it runs at about twice software speed. 

In software, Threefish runs at about twice AES speed. There are many, many 
handwaves in my previous statement. Threefish was designed for a 64-bit, 
superscalar architecture with a good handful of registers. At the time, the 
exemplar of such an architecture was an Intel Core 2 CPU. It succeeds at using 
the whole of the CPU so well that Intel uses it internally as a literal burn-in 
test of a CPU. If you have a CPU with heat flow issues, running Threefish on it 
will find them. Often destructively. Notably, it's intentionally an ARX 
construction that uses register flow as a cheap way to get inter-round 
permutations. 

The wide block is a huge advantage. You've noted its use in disk encryption, 
but it's of great use just about anywhere else too. Block ciphers in general 
have the advantage that they, um, well, encrypt in blocks. They didn't exist at 
all until relatively recently. (If you squint at it the right way, you can 
consider ADFGX to be an early block cipher, if not the earliest, but no doubt 
we can debate it). It's relatively easy to turn a block cipher into a stream 
cipher (counter mode), but relatively hard to turn a stream cipher into a block 
cipher. The wider the block is, the more mixing you get. A one-bit change in a 
block cipher affects the whole block, and the wider the block, the closer it 
comes to all-or-nothing encryption.
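
Counter mode fits in a few lines -- here's a sketch over an abstract 
encrypt_block function (any 16-byte block cipher will do, and decryption is 
the identical operation):

    def ctr_encrypt(encrypt_block, nonce: bytes, plaintext: bytes) -> bytes:
        # encrypt_block: one-block encryption, 16 bytes in, 16 bytes out.
        # nonce: 8 bytes, leaving 8 bytes of block counter.
        out = bytearray()
        for i in range(0, len(plaintext), 16):
            keystream = encrypt_block(nonce + (i // 16).to_bytes(8, "big"))
            out.extend(p ^ k for p, k in zip(plaintext[i:i + 16], keystream))
        return bytes(out)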

Tweakable ciphers in general also have huge advantages. You can think of a 
tweak as the generalization of an IV or counter. This is why a tweakable cipher 
is good for disk encryption -- because the LBN of a disk block is 
definitionally not a secret parameter. But once you have a tweakable cipher, 
there are ways that you can re-think chaining modes.

For example, you can re-think counter mode trivially by moving your counter to 
the tweak and now you don't have to worry so much about counter re-use. Yay! 
You can even throw away nonces, at the cost of having to deal with short 
trailing blocks. That's often inconvenient so you can even do something like 
take some static initial data (I am trying not to call it an IV), and encrypt 
that iteratively with an incrementing tweak, XORing the result onto your 
plaintext. Now you have all the convenience of counter mode, and can be pretty 
careless in picking your nonce and counter.
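
In code, that handwave looks something like this sketch, assuming a tweakable 
block cipher exposed as encrypt(key, tweak, block) with a 16-byte tweak, 
Threefish-style:

    def tweak_ctr_encrypt(encrypt, key, static_block: bytes,
                          plaintext: bytes) -> bytes:
        # The counter lives in the tweak; the static initial data is
        # what gets encrypted, and the result is XORed onto the
        # plaintext block by block.
        n = len(static_block)
        out = bytearray()
        for i in range(0, len(plaintext), n):
            tweak = (i // n).to_bytes(16, "big")   # counter as tweak
            keystream = encrypt(key, tweak, static_block)
            out.extend(p ^ k for p, k in zip(plaintext[i:i + n], keystream))
        return bytes(out)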

It's also pretty easy to extend this basic idea (a tweak is a generalized 
IV/nonce) and re-think any mode you care to in the tweakable world.

Now, an obvious (to me) disadvantage of Threefish per se is that it not only 
has a wide block, but a wide key. Some people might consider this an advantage, 
and really, I'm happy to lose this argument. It's my cipher, after all. But 
from an engineering standpoint, it can be inconvenient to have to transport a 
wide key around. The wider your block is, the more inconvenient that will be. 
On the other hand, there's an easy solution to this inconvenience -- a KDF. 
Heck, run your short key through Skein, and then feed that to your Threefish 
operation and poof, Alice is your auntie.
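
That KDF step is a one-liner. Skein isn't in the Python standard library, so 
SHAKE-256 stands in for it in this sketch; the point is just that a short key 
stretches to a full 1024-bit Threefish key:

    import hashlib

    def widen_key(short_key: bytes, width_bytes: int = 128) -> bytes:
        # Derive a Threefish-1024 key (128 bytes) from a short key.
        # Skein would be the natural choice; SHAKE-256 is the stand-in.
        return hashlib.shake_256(short_key).digest(width_bytes)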

The other obvious disadvantage (to me) of Threefish is that the tweak is only 
128 bits wide. If the tweak were full-width, then it would be trivial to do 
what I handwaved above -- produce obvious tweak-based chaining modes that were 
trivially as secure as the underlying tweakable cipher. You could always hold 
your nose and just truncate to 128 bits and show 128-ish bits of security, but 
that's really unappealing at the least.

However, we could also just re-think chaining modes, as well. I am at present 
very fond of McOE mode, which is an authenticated mode. It was developed by a 
team at Bauhaus-Universität Weimar that includes my Skein/Threefish co-author, Stefan 
Lucks. The obvious search should find their paper. They designed it to work 
either with an AES-like cipher or a 

Re: [cryptography] the spell is broken

2013-10-03 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Oct 3, 2013, at 7:13 AM, Jeffrey Goldberg jeff...@goldmark.org wrote:

Jeff,

You might call it "security theatre," but I call it (among other things) 
"protest." I have also called it "trust," "conscience," and other things, 
including "emotional." I'm willing to call it "marketing" in the sense that 
marketing often means non-technical. I disagree with "security theatre" because 
in my opinion security theatre is *empty* or *mere* trust-building, but I don't 
fault you for being upset. I don't blame you for venting in my direction, 
either. I will, however, repeat that I believe this is something gentlepersons 
can disagree on. A decision that's right for me might not be right for you, and 
vice-versa.

Since the AES competition, NIST has been taking a worldwide role in crypto 
standards leadership. Overall, it's been a good thing; one could have one's 
disagreements with a number of things (and I do), but it's been a good 
*standards* process.

A good standard, however, is not necessarily the *best*; it's merely agreed 
upon. A standard that is everyone's second choice is better than a standard 
that is anyone's first choice. I don't think there are any problems with AES, 
but I think Twofish is a better choice. During the AES competition, the OpenPGP 
community as a whole, and I and my PGP colleagues put Twofish into OpenPGP 
*independently* of the then-unselected AES. It was thus our vote for it. When 
Phil, Alan, and I were putting ZRTP together, we put in Twofish as an option 
(RFC 6189, section 5.1.3). Thus in my opinion, if you know my long-standing 
opinions on ciphers, this shouldn't be a surprise. I think Twofish is a better 
algorithm than Rijndael.

ZRTP also has in it an option for using Skein's one-pass MAC instead of 
HMAC-SHA1. Why? Because we think it's more secure in addition to being a lot 
faster, which is important in an isochronous protocol. 

Silent Phone already has Twofish in it, and is already using Skein-MAC.

In Silent Text, we went far more toward the "one true ciphersuite" philosophy. 
I think that Iang's writings on that are brilliant. 

As a cryptographer, I agree, but as an engineer, I want options. I view those 
options as a form of preparedness. One True Suite works until that suite is no 
longer true, and then you're left hanging.

To be fair, there are few options in ZRTP -- it's only AES or Twofish and 
SHA1-HMAC or Skein-MAC, so the selection matrix is small when compared to 
OpenPGP. We have One True Elliptic Curve -- P-384 -- and options for AES-CCM at 
either 128 or 256 bits, paired with SHA-256 or SHA-512 as hash and HMAC as 
appropriate. There's a third option, AES-256 paired with Skein/Skein-MAC, which 
I don't think is in the code, merely defined as a cipher suite. I can't 
remember. So we have to add Twofish there, but it's in Silent Phone now.

Now let me go back to my comment about standards. Standards are not about 
what's *best*, they're about what's *agreed*, and part of what's agreed on is 
that they're good enough. When one is part of a standards regime, one 
sublimates one's personal opinions to the collective good of the standard. That 
collective good of the standard is also "security theatre" in the sense that 
one uses it because it's the thing one uses to be part of the community.

I think Twofish is better than AES. I believe that Skein is better than SHA-2. 
I also believe in the value of standards.

The problem one faces with the BULLRUN documents is a decision tree. The 
first question is whether you think they're credible. If you don't think 
BULLRUN is credible, then there's an easy conclusion -- stay the course. If you 
think it is credible, then the next decision is whether you think that the NIST 
standards are flawed, either intentionally or unintentionally; in short, was 
BULLRUN *successful*. If you think they're flawed, it's easy; you move away 
from them.

The hard decision is the one that comes next. I can state it dramatically as 
"Do you stand with the NSA or not?" which is an obnoxious way to put it, as 
there are few of us who would say, "Yes, I stand with the NSA." You can phrase 
it less dramatically as standing with NIST, or even less dramatically as 
standing with the standard. You can even state it as whether you believe 
BULLRUN was successful, or lots of other ways.

Moreover, it's not all-or-nothing. Bernstein and Lange have been arguing that 
the NIST curves are flawed since before Snowden. Lots of people have been 
advocating moving to Curve25519. I want a 384-or-better curve because my One 
True Curve has been P-384.

If I'm going to move away from the NIST/NSA curve (which seems wise), what 
about everything else? Conveniently, I happen to have alternates for AES and 
SHA-2 in my back pocket, where they've been *alternates* in my crypto going 
back years. They're even in part of the software, sublimated to the goodness of 
the standard. The work is merely pulling them to the 

Re: [cryptography] the spell is broken

2013-10-02 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Oct 2, 2013, at 12:26 PM, coderman coder...@gmail.com wrote:

 On Wed, Oct 2, 2013 at 10:38 AM, Jared Hunter feralch...@gmail.com wrote:
 Aside from the curve change (and even there), this strikes me as a marketing 
 message rather than an important technical choice. The message is we react 
 to a deeper class of threat than our users understand.
 
 
 it is simpler than that.  to signal integrity, and provide assurance,
 it is common not just to avoid impropriety, but to avoid the
 _appearance_ of impropriety.
 
 this change, while not materially affecting security (the weakest link
 in SilentCircle was never the crypto) succeeds in conveying the
 message of integrity as paramount.
 
 so yes, a marketing message, but a simple one. i have no problem with
 this as long as they're not implying that AES or SHA-2 are broken in
 some respect.

Thank you very much for that assessment.

I'm not implying at all that AES or SHA-2 are broken. If P-384 is broken, I 
believe the root cause is more that it's old than that it was backdoored. 

But it doesn't matter what I think. This is a trust issue.

A friend of mine offered this analogy -- what if it were leaked that the 
government had replaced all of a vaccine with salt water because some nasty 
jihadis get vaccinated? This is serious and pretty horrifying.

If you're a responsible doctor, and source your vaccines from the same place, 
even if you test them yourself you're stuck proving a negative and in a place 
where stating the negative can look like you're part of the conspiracy.

I see this as a way out of the madness. Yes, it's "marketing," if by marketing 
you mean non-technical. By pushing this out, we're letting people who believe 
there's a problem have a reasonable alternative. 

If we, the crypto community, decide that the P-384+AES+SHA2 cipher suite is 
just fine, we can walk the decision back. It's just a software change.

Let me also add that I wouldn't fault anyone for deciding differently. We, the 
crypto community, need to work together on security, respecting each other's 
decisions even when we make different decisions and do different things. 
I respect the alternate decision, to stay the course.

Jon




-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSTJzTsTedWZOD3gYRAtsxAJ9CPoZjv+shNwID/ip+9KOcWK/JrQCeKuNv
rZmdU8syRIb+6KmX3xqEHt8=
=W3/0
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [liberationtech] Random number generation being influenced - rumors

2013-09-09 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sep 8, 2013, at 10:10 PM, coderman coder...@gmail.com wrote:

 and so forth and so on, to no effect.  the lines have been drawn, and
 nothing will convince Intel to release raw access to the entropy
 source.

I have to disagree with you. Lots of us have told Intel that we really need to 
see the raw bits, and lots of us have gotten informal feedback that we'll see 
that in a future chip.

In the meantime, don't use it if you don't like it!

Better, however, would be to continue using whatever software RNG you're using, 
reseed it with whatever you're doing now, and throw an RDRAND reading in. It 
won't hurt anything no matter how badly RDRAND is broken, and it helps against 
any number of things. Heck, I've done that with TPM RNGs that I knew were of 
limited quality.
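
The mixing costs nearly nothing. A sketch, where rdrand_read() is assumed to 
be whatever wrapper you have around the instruction (native extension, 
/dev/hwrng, etc.):

    import hashlib, os, time

    def reseed(pool_state: bytes, rdrand_read) -> bytes:
        # Fold an RDRAND reading in with everything else. Because all
        # the inputs are hashed together, a broken or even hostile
        # RDRAND can't make the result worse than the other sources.
        h = hashlib.sha256()
        h.update(pool_state)                         # current RNG state
        h.update(os.urandom(32))                     # OS entropy pool
        h.update(time.time_ns().to_bytes(8, "big"))  # a timestamp
        h.update(rdrand_read())                      # the RDRAND reading
        return h.digest()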

Once Intel better documents the RNG and we have ways to look at the entropy 
source, then we might use it more. Until then, it's somewhere between a toy and 
a curiosity.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSLcg1sTedWZOD3gYRAqbnAJ9uqS5CONA5vWYheiTrsE5C5BDXGgCeM/l/
qprr/56jYSuasPBWiRdqDHs=
=HEOP
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Also at http://silentcircle.wordpress.com/2013/08/17/reply-to-zooko/


# Reply to Zooko

(My friend and colleague, [Zooko 
Wilcox-O'Hearn](https://leastauthority.com/blog/author/zooko-wilcox-ohearn.html)
 wrote an open letter to me and Phil [on his blog at 
LeastAuthority.com](https://leastauthority.com/blog/open_letter_silent_circle.html).
 Despite this appearing on Silent Circle's blog, I am speaking mostly for 
myself, only slightly for Silent Circle, and not at all for Phil.)

Zooko,

Thank you for writing and your kind words. Thank you even more for being a 
customer. We're a startup and without customers, we'll be out of business. I 
think that everyone who believes in privacy should support with their 
pocketbook every privacy-friendly service they can afford to. It means a lot to 
me that you're voting with your pocketbook for my service.

Congratulations on your new release of [LeastAuthority's 
S4](https://leastauthority.com) and 
[Tahoe-LAFS](https://tahoe-lafs.org/trac/tahoe-lafs). Just as you are a fan of 
my work, I am an admirer of your work on Tahoe-LAFS and consider it one of the 
best security innovations on the planet.

I understand your concerns, and share them. One of the highest priority tasks 
that we're working on is to get our source releases better organized so that 
they can effectively be built from [what we have on 
GitHub](https://github.com/SilentCircle/). It's suboptimal now. Getting the 
source releases is harder than one might think. We're a startup and are pulled 
in many directions. We're overworked and understaffed. Even in the old days at 
PGP, producing effective source releases took years of effort to get down pat. 
It often took us four to six weeks to get the sources out even when delivering 
one or two releases per year.

The world of app development makes this harder. We're trying to streamline our 
processes so that we can get a release out about every six weeks. We're not 
there, either.

However, even when we have source code to be an automated part of our software 
releases, I'm afraid you're going to be disappointed about how verifiable they 
are. 

It's very hard, even with controlled releases, to get an exact byte-for-byte 
recompile of an app. Some compilers make this impossible because they randomize 
the branch prediction and other parts of code generation. Even when the 
compiler isn't making it literally impossible, without an exact copy of the 
exact tool chain with the same linkers, libraries, and system, the code won't 
be byte-for-byte the same. Worst of all, smart development shops use the 
*oldest* possible tool chain, not the newest one because tool sets are designed 
for forwards-compatibility (apps built with old tools run on the newest OS) 
rather than backwards-compatibility (apps built with the new tools run on older 
OSes). Code reliability almost requires using tool chains that are 
trailing-edge.

The problems run even deeper than the raw practicality. Twenty-nine years ago 
this month, in the August 1984 issue of Communications of the ACM (Vol. 27, 
No. 8), Ken Thompson's famous Turing Award lecture, "Reflections on Trusting 
Trust," was published. You can find a facsimile of the magazine article at 
https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf and a 
text-searchable copy on Thompson's own site, 
http://cm.bell-labs.com/who/ken/trust.html.

For those unfamiliar with the Turing Award, it is the most prestigious award a 
computer scientist can win, sometimes called the "Nobel Prize of computing." 
The site for the award is at http://amturing.acm.org.

In Thompson's lecture, he describes a hack that he and Dennis Ritchie did in a 
version of UNIX in which they created a backdoor to UNIX login that allowed 
them to get access to any UNIX system. They also created a self-replicating 
program that would compile their backdoor into new versions of UNIX portably. 
Quite possibly, their hack existed in the wild until UNIX was recoded from the 
ground up with BSD and GCC.

In his summation, Thompson says:

The moral is obvious. You can't trust code that you did not totally
create yourself. (Especially code from companies that employ people
like me.) No amount of source-level verification or scrutiny will
protect you from using untrusted code. In demonstrating the
possibility of this kind of attack, I picked on the C compiler. I
could have picked on any program-handling program such as an
assembler, a loader, or even hardware microcode. As the level of
program gets lower, these bugs will be harder and harder to detect.
A well installed microcode bug will be almost impossible to detect.

Thompson's words reach out across three decades of computer science, and yet 
they echo Descartes from three centuries prior to Thompson. In Descartes's 1641 
Meditations, he proposes the thought experiment of an evil demon who 
deceives us by simulating the 

Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Aug 17, 2013, at 2:41 AM, ianG i...@iang.org wrote:

 So back to Silent Circle.  One known way to achieve some control over their 
 closed source replacement vulnerability is to let an auditor into their inner 
 circle, so to speak.

One correction of fact:

Our source is not closed source. It's up on GitHub and has a non-commercial 
BSD-variant license, which I know isn't OSI, but anyone who wants to build, 
use, and even distribute their verified version is free to do so.

Secondly, we have auditors in the mix. We are customers of Leviathan Security 
and their virtual security officer program. They do regular code audits, 
network audits, and are helping us create a software development lifecycle.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSD64VsTedWZOD3gYRAp5iAKDFiDEn9MyTMscvsuznSY5jS83SpACg41F3
WL8vRZBFo747yv4C1DfwFeA=
=FYfS
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas

On Aug 17, 2013, at 12:49 AM, Bryan Bishop kanz...@gmail.com wrote:

 On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas j...@callas.org wrote:
 It's very hard, even with controlled releases, to get an exact byte-for-byte 
 recompile of an app. Some compilers make this impossible because they 
 randomize the branch prediction and other parts of code generation. Even when 
 the compiler isn't making it literally impossible, without an exact copy of 
 the exact tool chain with the same linkers, libraries, and system, the code 
 won't be byte-for-byte the same. Worst of all, smart development shops use 
 the *oldest* possible tool chain, not the newest one because tool sets are 
 designed for forwards-compatibility (apps built with old tools run on the 
 newest OS) rather than backwards-compatibility (apps built with the new tools 
 run on older OSes). Code reliability almost requires using tool chains that 
 are trailing-edge.
 
 Would providing (signed) build vm images solve the problem of distributing 
 your toolchain?

Maybe. The obvious counterexample is a compiler that doesn't deterministically 
generate code, but there's lots and lots of hair in there, including potential 
problems in distributing the tool chain itself: copyrighted tools, 
libraries, etc.

But let's not rathole on that, and get to brass tacks.

I *cannot* provide an argument of security that can be verified on its own. 
This is Gödel's second incompleteness theorem: a set of statements S cannot be 
proved consistent on its own. (Yes, that's a minor handwave.)

All is not lost, however. We can say, "Meh, good enough," and the problem is 
solved. Someone else can construct a *verifier*, which is some set of policies 
(I'm using the word "policy," but it could be a program) that verifies the 
software. However, the verifier can only be verified by a set of policies that 
are constructed to verify it. The only escape is to decide, at some point, 
"meh, good enough."

I brought Ken Thompson into it because he actually constructed a rootkit that 
would evade detection and described it in his Turing Award lecture. It's not 
*just* philosophy and theoretical computer science. Thompson flat-out says that 
at some point you have to trust the people who wrote the software, because 
if they want to hide things in the code, they can.

I hope I don't sound like a broken record, but a smart attacker isn't going to 
attack there, anyway. A smart attacker doesn't break crypto, or suborn 
releases. They do traffic analysis and make custom malware. Really. Go look at 
what Snowden is telling us. That is precisely what all the bad guys are doing. 
Verification is important, but that's not where the attacks come from (ignoring 
the notable exceptions, of course).

One of my tasks is to get better source releases out there. However, I also 
have to prioritize it with other tasks, including actual software improvements. 
We're working on a release that will tie together some new anti-surveillance 
code along with a better source release. We're testing the new source release 
process with some people not in our organization, as well. It will get better; 
it *is* getting better.

Jon



PGP.sig
Description: PGP signature
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Aug 17, 2013, at 10:41 AM, ianG i...@iang.org wrote:

 Apologies, ack -- I noticed that in your post.
 
 (And I think for crypto/security products, the BSD-licence variant is more 
 important for getting it out there than any OSI grumbles.)

Thanks. I agree with your comments in other parts of those notes that I removed 
about issues with open versus closed source. I often wish I didn't believe in 
open source, because the people doing closed source get much less flak than we 
do.

 Ah ok.  Will they be writing an audit report?  Something that will give us 
 trust that more people are sticking their name to it?

I get regular audit reports, and have since last fall. :-)

I haven't been putting them out because it felt like argument from authority: 
"Hey, don't audit this yourself, trust these guys!"

Moreover, those reports are guidance we have from an independent party on what 
to do next. I want those to be raw and unvarnished. If they're going to get 
varnished, I lose guidance and I also lose speed. A report that's made for the 
public is definitionally sanitized. I don't want to encourage sanitizing.

It's a hard problem. I understand what you want, but my goal is to provide a 
good service, not a good report.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFSD7+7sTedWZOD3gYRAtF4AJ4+feoP9wGq6s1Zni9ZhS6aiJx1YwCgwOiy
GHaj1lPMi8gBm3XDSvorr9U=
=HWhT
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Aug 17, 2013, at 11:00 AM, Ali-Reza Anghaie a...@packetknife.com wrote:

 On Sat, Aug 17, 2013 at 1:50 PM, Jon Callas j...@callas.org wrote:
 I hope I don't sound like a broken record, but a smart attacker isn't going
 to attack there, anyway. A smart attacker doesn't break crypto, or suborn
 releases. They do traffic analysis and make custom malware. Really. Go look
 at what Snowden is telling us. That is precisely what all the bad guys are
 doing. Verification is important, but that's not where the attacks come from
 (ignoring the notable exceptions, of course).
 
 Part of the problem is that most people can't even wrap their heads
 around what a State or non-State Tier 1 Actor would even look like.
 They bully, kill leaders, deny resources, .. heck, they kill ~users~
 to dissuade use of a given tool.
 
 Then on the flip side we think about design and architectural
 aspects that don't even ever get the chance to be used against ~any~
 adversary because we force too much philosophy down into a hole that
 may have just one device, maybe just an iPhone - and limited to
 connectivity to even use it.
 
 I've called this the problem of Western Sensibilities where we seem
 to forget the economics and geopolitics of the rest of the world.
 
 Before getting heads wrapped around all these poles that are pretty
 exclusive to the haves - go out to truly hostile territory and live
 like a have not and try to build up the OPSEC routine you want,
 complete with FOSS only and full audits, and work from the field that
 way. It's non-trivial to say the least - even if you've done it a
 hundred times from a hundred different American and European venues.

I've had the privilege on several occasions to talk to people who really do 
this stuff. A couple of things really stuck with me:

* "Don't patronize us. We know what we're doing, we know what we're up 
against." The guy who told me this had his brother murdered horribly. His 
tradecraft was basic and elegant.

* Simple, usable countermeasures are best, because they have to be used by the 
sort of person who decided yesterday that they're not going to take it any 
more. They're newly-minted heroes who are a threat to themselves and others if 
they screw up what they're doing. We asked them what they'd like most, and the 
answer was SSL on websites. This was after DigiNotar, and we'd been talking 
about advanced threats, so we were a bit taken aback. They explained that the 
biggest problems are people putting stuff on websites, as well as mistakes 
like making calendar entries for the times and places of meetings. 

That put a fine point on the admonition not to patronize them. Heck, the 
adversaries don't have to crack anything sophisticated when they can just sniff 
CalDAV.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSD/qksTedWZOD3gYRAsj7AKCXuWr60RLPvsFXVtHzDGZUOS/fuwCgvK6m
6X311tAwXg+lYZD2TAOZAm0=
=C0O6
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] evidence for threat modelling -- street-sold hardware has been compromised

2013-07-30 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jul 30, 2013, at 4:07 AM, ianG i...@iang.org wrote:

 It might be important to get this into the record for threat modelling.  The 
 suggestion that normally-purchased hardware has been compromised by the 
 bogeyman is often poo-pooed, and paying attention to this is often thought to 
 be too black-helicopterish to be serious.  E.g., recent discussions on the 
 possibility of perversion of on-chip RNGs.
 
 This doesn't tell us how big the threat is, but it does raise it to the level 
 of 'evidenced'.

Evidence of what, though?

The rumor isn't a new one. A bunch of government agencies dropped ThinkPads 
from approved lists when they were sold from IBM to Lenovo, and that was pure 
ooo-scary-Chinese stuff, without any actual evidence. It's reasonable enough, 
and jibes with their general mistrust of Huawei, etc. It was a pre-emptive move 
away from ThinkPads.

That mistrust ranges from the reasonable to the quasi-reasonable to whatever. I 
can completely understand moving ThinkPads from fast-track approval to 
needing testing, etc., once they were sold to Lenovo in 2005. This sounds like 
nothing but rumor-mongering based on that.

Evidence would be something like a Black Hat preso.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFR98MAsTedWZOD3gYRAsssAJoCqOCNwDLrIGlk0IQqj2kOL+XQTwCg7BZc
tkFk68doeFMPtaLSCDomeX0=
=Gy/J
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] post-PRISM boom in secure communications (WAS skype backdoor confirmation)

2013-06-30 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 30, 2013, at 12:44 AM, James A. Donald jam...@echeque.com wrote:

 Silent Circle expects end users to manage their own keys, which is of course 
 the only way for end users to be genuinely secure. Everything else is snake 
 oil, or rapidly turns into snake oil in practice.  (Yes, Cryptocat,  I am 
 looking at you)
 
 However, everyone has found it hard to enable end users to manage keys.  User 
 interface varies from hostile, to unbearably hostile.
 
 Silent Circle publish end users public keys, which would seem to create the 
 potential for a man in the middle attack.
 
 I would like to see a review and evaluation of Silent Circle's key management.

This isn't quite correct. You have the gist of it, though.

Silent Phone uses ZRTP, which is ephemeral DH with hash commitments for 
continuity, in the style of SSH. The short authentication string is there for 
explicit MITM protection. There's no explicit public key.

Silent Text uses SCIMP, which is also an EDH+hash-commitment protocol, and also 
has no explicit public keys. The problem there is that unlike a voice protocol, 
where you can use a voice recitation of a short authentication string, there's 
no implicit second channel in a text protocol. We're working on improvements 
there.
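
For flavor, here's a toy of the commit-then-reveal idea (not the actual ZRTP 
or SCIMP wire format): committing to your DH public value before seeing the 
other side's means a man in the middle gets exactly one blind guess at the 
short authentication string:

    import hashlib

    def commit(dh_public: bytes) -> bytes:
        return hashlib.sha256(dh_public).digest()  # sent before the DH value

    def check_commit(commitment: bytes, dh_public: bytes) -> bool:
        return commitment == hashlib.sha256(dh_public).digest()

    def short_auth_string(shared_secret: bytes) -> str:
        # Truncate a hash of the shared secret; the parties compare it
        # aloud (ZRTP renders it as PGP word list words, not hex).
        return hashlib.sha256(b"sas" + shared_secret).hexdigest()[:4]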

There's a SCIMP paper up on silentcircle.com. Please look at it.

Jon





-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFR0KhvsTedWZOD3gYRAiYEAJ4w96a0qdNjeDRAlii7qaF/dZ1TsACfUVJI
zfGnH862J4muQrTHag9sL48=
=ZqZE
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-29 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 28, 2013, at 10:27 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 There are a couple interesting lessons from LocationGate. 

[...]

 The second lesson has to do with the the status of iOS protection classes 
 that can leave things unencrypted even when the phone is locked. There are 
 things that we want our phones to do before they are unlocked with a 
 passcode. 

[...]

 
 The trick is how to communicate this the people...

[...]

Very well put in all of those.

 What's the line? Never attribute to malice what can be explained by 
 incompetence.

That is the line. And also that stupidity is the second most common 
element in the universe, after hydrogen. (And variants on that.)

 
 At the same time we are in the business of designing system that will protect 
 people and their data under the assumption that the world is full of hostile 
 agents. As I like to put it, I lock my car not because I think everyone is a 
 crook, but because I know that car thieves do exist.

And in many cases a cheap lock will work because it deters and deflects, not 
because it actually prevents. This doesn't apply so much with information 
security, but I think it does in places.

For example, I think that the most important thing about a password is that it 
not be a dictionary word. If it is one, length doesn't matter. If it isn't, 
length only matters a little, because most attackers just want *someone's* 
password, not yours. If they do want yours, either spearphishing or malware 
like Zeus is a better bang for the buck. They won't actually bother cracking 
it; they'll go around it.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRVTsEsTedWZOD3gYRAhDeAKDYJOTTA9mBBebl4ccMbAbqZQzg9ACdG7A7
XRwwSV8OBtA8JufBO4YsAJ0=
=/Olb
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 28, 2013, at 1:21 PM, ianG i...@iang.org wrote:

 
 Correct me if I'm wrong, but the parity bits in DES guard the key, which 
 doesn't need correcting?  And the block which does need correcting has no 
 space for parity bits?

"Guard" is perhaps a bit strong. They're just parity bits. 

In those days, people bought parity memory, and it was worth it. As Steve says, 
hardware errors that would just happen were pretty common. 

Now, there is a little more to it than that -- remember that when Lucifer 
became DES, it was knocked down from a 64-bit key to a 56-bit key. When they 
did that, they chose to knock one bit off of each octet (note that I'm saying 
"octet," not "byte," because in those days it was not presumed that bytes 
had eight bits) rather than have 56 packed bits.

If you do it that way, using the orphaned bits as parity is a pretty reasonable 
use for them. 
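
Concretely, the convention puts the parity bit in the low-order bit of each 
octet, set for odd parity -- a sketch:

    def set_des_parity(key: bytes) -> bytes:
        # Force each octet of a DES key to odd parity, with the
        # low-order bit of the octet as the parity bit.
        out = bytearray()
        for octet in key:
            data = octet & 0xFE                  # the 7 key bits
            parity = 0 if bin(data).count("1") % 2 else 1
            out.append(data | parity)
        return bytes(out)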

 
 Layering was the big idea of the ISO 7 layer model.  From memory this first 
 started appearing in standards committees around 1984 or so?  So likely it 
 was developed as a concept in the decade before then -- late 1970s to early 
 1980s.

Earlier than that. Arguably, though, the full seven layers are still 
aspirational; the word "conceptual" was attached to the model for a long, long 
time. For the bottom four layers, it's pretty easy to know what goes where. But 
what makes a protocol belong in layer 5, 6, or 7 is subject to debate.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRVKvZsTedWZOD3gYRAsIWAKCFLl335xfo5ivgyqSAOk+PbMY5rgCeMcvd
wdXEKz5QaHIzaKwDo5uXlHg=
=SgaG
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

[Not replied-to cryptopolitics as I'm not on that list -- jdcc]

On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?
 
 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.

I've been involved in these sorts of questions at various companies where I've 
worked. Let's look at it coolly and rationally.

If you make a bunch of devices with keys burned into them, and you *wanted* to 
retain the keys, you'd have to keep them in some database, protect them, create 
access controls and procedures so that only the good guys (by your definition) 
got them, and so on. It's expensive.

You're also setting yourself up for a target of blackmail. Once some bad guy 
learns that they have such a thing, they can blackmail you for the keys they 
want lest they reveal that the keys even exist. Those bad guys include 
governments of countries you operate or have suppliers in, mafiosi, etc. Heck, 
once some good guy knows about it, the temptation to break protocol on who gets 
keys when will be too great to resist, and blackmail will happen.

Eventually, so many people know about the keys that it's not a secret. Your 
company loses its reputation, even among the sort of law-and-order types who 
think that it's good for *their* country's LEAs to have those keys because they 
don't want other countries having those keys. Sales plummet. Profits drop. 
There are civil suits, shareholder suits, and most likely criminal charges in 
lots of countries (because while it's not a crime to give keys to their LEAs, 
it's a crime to give them to that other bad country's LEAs). Remember, the only 
difference between lawful access and espionage is whose jurisdiction it is.

On the other hand, if you don't retain the keys it doesn't cost you any money 
and you get to brag about how secure your device is, selling it to customers in 
and out of governments the world over.

Make the mental calculation. Which would a sane company do?

 
 I suspect Apple has the methods/processes to provide it.
 
 I have no more evidence than you do, but my guess is that they don't, for
 the simple reason that if they did that fact would leak out. Secret
 conspiracies (and that's what it would take) grow less plausible
 as a function of the number of people who have to be in on it.
 (Furthermore I suspect that implausibility rises super-linearly with
 the number of people in on a conspiracy.)

And that's just what I described above. I just wanted to put a sharper point on 
it. I don't worry about it because truth will out. Or as Dr. Franklin put it, 
"three people can keep a secret if two of them are dead."

 
 I think there's much more to it than a simple brute force.
 
 We know that those brute force techniques exist (there are several vendors
 of forensic recovery tools), and we've got very good reasons to believe
 that only a small portion of users go beyond the default 4 digit passcode.
 In case of LEAs, they can easily hold on to the phones for the 20 minutes
 (on average) it takes to brute force them.

The unlocking feature on iOS uses the hardware to spin crypto operations on 
your passcode, so you have to do it on the device (the hardware key is involved 
-- you can't just image the flash), and you get about 10 brute-force checks per 
second. For a four-digit code, that's about 1,000 seconds.
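
The arithmetic generalizes in the obvious way:

    def crack_seconds(alphabet: int, length: int, rate: float = 10.0) -> float:
        # Worst case to exhaust the passcode space at `rate` on-device
        # guesses per second (about 10/sec here).
        return alphabet ** length / rate

    print(crack_seconds(10, 4))            # 4 digits: 1,000 s, ~17 minutes
    print(crack_seconds(10, 6))            # 6 digits: ~28 hours
    print(crack_seconds(62, 8) / 3.15e7)   # 8 alphanumerics: ~700,000 years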

See http://images.apple.com/ipad/business/docs/iOS_Security_May12.pdf for 
many details on what's in iOS specifically.

Also, surprisingly often, if the authorities ask someone to unlock the phone, 
people comply. 

 
 So I don't see why you suspect that there is some other way that only
 Apple (or other relevant vendor) and the police know about.

Yeah, me either. We know that there are countries that have special national 
features in devices made by hardware makers that are owned by that country's 
government, but they're very careful to keep them within their own borders, for 
all the obvious reasons. It just looks bad and could lead to losing contracts 
in other countries.

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRVNHisTedWZOD3gYRAnLPAKCA3BW64XmpIlJJL8vMIwEZ9qBQzwCcDQiJ
OvnvTSUXUdELynnYxnT0lEA=
=JuD+
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas

On Mar 28, 2013, at 4:07 PM, shawn wilson ag4ve...@gmail.com wrote:

 
 On Mar 27, 2013 11:38 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 
 
 
  http://blog.agilebits.com/2012/03/30/the-abcs-of-xry-not-so-simple-passcodes/
 
 
 Days? Not sure about the algorithm but both ocl and jtr can be run in 
 parallel and idk why you'd try to crack a password on an arm device anyway 
 (there's a jtr page that compares platforms and arm is god awful slow)
 
 

You have to run the password cracker on the device, because it involves mixing 
the hardware key in with the passcode, and that's done in the security chip. 
You can't parallelize it unless you pry the chip apart. I'm not saying it's 
impossible, but it is risky. If you screw that up, you lose totally, as then 
breaking the passcode is breaking AES-256. And if you have about 2^90 memory, 
it's easier than breaking AES-128!

Jon




PGP.sig
Description: PGP signature
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 28, 2013, at 5:24 PM, Kevin W. Wall kevin.w.w...@gmail.com wrote:

 
 All excellent, well articulated points. I guess that means that
 RSA Security is an insane company then since that's
 pretty much what they did with the SecurID seeds. Inevitably,
 it cost them a boatload too. We can only hope that Apple
 and others learn from these mistakes.

No, RSA was careless and stupid. It's not the same thing at all.

SecurID seeds are shared secrets and the authenticators need them. They did 
nothing like what we were talking about -- handing them out so the security of 
the device could be compromised. They kept their own crown jewels on some PC on 
their internal network and they were hacked for them.

 
 OTOH, if Apple thought they could make a hefty profit by
 selling to LEAs or friendly governments, that might change
 the equation enough to tempt them. Of course that's doubtful
 though, but stranger things have happened.

Excuse me, but Apple in particular is making annual income in the same ballpark 
as the GDP of Ireland, the Czech Republic, or Israel. They could bail out 
Cyprus with pocket change.

If you want to go all tinfoil hat, you shouldn't be thinking about friendly 
governments buying them off; you should be thinking about *them* buying their 
own country.

Jon
-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFRVPGKsTedWZOD3gYRAmKzAKDkD8/myOnUQjpSQzohZ7i3OqC6QwCeJ69T
e81n4nVL+KTK7g72TLMeHow=
=JqMQ
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 28, 2013, at 6:59 PM, Jeffrey Walton noloa...@gmail.com wrote:

 On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 [Not replied-to cryptopolitics as I'm not on that list -- jdcc]
 
 On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 
 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?
 
 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.
 
 I've been involved in these sorts of questions in various companies that 
 I've worked.
 Somewhat related: are you bound to some sort of non-disclosure with
 Apple? Can you discuss all aspects of the security architecture, or is
 it [loosely] limited to Apple's public positions?

From being there, I can say that Apple's culture and practices are such that 
everything they do is focused on making cool things for the customers. Apple 
fights for the users. The users' belief and faith in Apple saved it from near 
death. Everything there focuses on how it's good for the users. Also remember 
that there are many axes of "good for the users." User experience, cost, 
reliability, etc. are part of the total equation along with security. People 
like you and me are not the target; it's more the proverbial "my mom" sort of 
user.

Moreover, they're not in it for the money. They're in it for the cool. 
Obviously, one has to be profitable, and obviously high margins are better than 
low ones, but the motivator is the user, and being cool. Ultimately, they do it 
for the person in the mirror, not for the cash.

I believe that Apple is too closed-mouthed about a lot of very, very cool 
things that they do security-wise. But that's their choice, and as a gentleman, 
I don't discuss things that aren't public because I don't blab. NDA or no NDA, 
I just don't blab.


 I regard these as the positive talking points. There's no sleight of
 hand in your arguments, and I believe they are truthful. I expect them
 to be in the marketing literature.
 
 I suspect Apple has the methods/processes to provide it.
 I have no more evidence than you do, but my guess is that they don't, for
 the simple reason that if they did that fact would leak out. ...
 And that's just what I described above. I just wanted to put a sharper point 
 on it.
 I don't worry about it because truth will out. ...
 A corporate mantra appears to be 'catch me if you can', 'deny deny
 deny', and then 'turn it over to marketing for a spin'.
 
 We've seen it in the past with for example, Apple and location data,
 carriers and location data, and Google and wifi spying. No one was
 doing it until they got caught.
 
 Please forgive my naiveness or my ignorance if I'm seeing things is a
 different light (or shadow).

Well, with locationgate at Apple, that was a series of stupid and unfortunate 
bugs and misfeatures. Heads rolled over it.

From what I have read of the Google wifi thing, it was also stupid and 
unfortunate. The person who coded it up was a pioneer of wardriving. People 
realized they could do cool things and did them without thinking it through. 
Thinking it through means that there are things to do that are cool if you are 
just a hacker, but not if you are a company. If that had been written up here, 
or submitted at a hacker con, everyone would have cheered -- and basically did, 
since arguably a pre-alpha of that hack was a staple of DefCon contests. The 
superiors of the brilliant hackers didn't know or didn't grok what was going on.

In neither of those cases was anyone trying to spy. In each differently, people 
were building cool features and some combination of bugs and failure to think 
it through led to each of them. It doesn't excuse mistakes, but it does explain 
them. Not every bad thing in the world happens by intent. In fact, most of them 
don't.

 
 Apple designed the hardware and hold the platform keys. So I'm clear
 and I'm not letting my imagination run too far ahead:
 
 Apple does not have or use, for example, custom boot loaders signed by
 the platform keys used in diagnostics, for data extraction, etc.
 
 There are no means to recover a secret from the hardware, such as a
 JTAG interface or a datapath tap. Just because I can't do it, it does
 not mean Apple, a University with EE program, Harris Corporation,
 Cryptography Research, NSA, GCHQ, et al cannot do it.

I alluded to that before. Prying secrets out of hardware is known technology. 
If you're willing to destroy the device, there's a lot you can do, from 
decapping the chip, to just x-raying it, etc.

 
 A naturally random event is used to select the hardware keys, and not
 a deterministic event such as hashing a serial number and date of
 manufacture.
 
 These are some of the goodies I would expect a manufacturer to provide
 to select

Re: [cryptography] why did OTR succeed in IM?

2013-03-23 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 23, 2013, at 6:36 AM, Ben Laurie b...@links.org wrote:

 On 23 March 2013 09:25, ianG i...@iang.org wrote:
 Someone on another list asked an interesting question:
 
 Why did OTR succeed in IM systems, where OpenPGP and x.509 did not?
 
 Because Adium built it in?
 

Yeah. And it just worked. It took me two hours to find a Jabber client that 
actually worked (Psi) and get Psi working with OpenPGP support, and even then 
it was just weird, from a UX perspective.

But there's also one other thing, and that is that there was no other real 
competitor. So:

* Greenfield advantage
* Better UX
* Better out-of-the-box experience.

Jon




-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRTeWPsTedWZOD3gYRAgcxAJ9RLtQdYAsdluIKa/+hyBLDfCIVjwCg2bIq
pZT24itMJrs0CHuTSIeVm3o=
=WS8Z
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Workshop on Real-World Cryptography

2013-03-03 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 3, 2013, at 7:05 PM, Patrick Pelletier c...@funwithsoftware.org wrote:

 
 This article surprised me, because it could almost be read as an argument 
 against AES (or even against block ciphers in general).  Which seems to 
 contradict the common cryptographic wisdom of "just use AES and be done with 
 it."
 
 Besides the argument about AES having timing side-channels in #9, the room 
 101 section at the end suggests we should do away with not only CBC, but also 
 AES-GCM, which is commonly touted as the solution to CBC's woes.  (He admits 
 it was his most controversial point, and I'm curious how it was received when 
 the talk was given.)  But I believe that if we rule out both CBC and AES-GCM 
 ciphersuites in TLS, that leaves us with only RC4.  (And indeed, 
 unsurprisingly given the author, RC4 seems to be what Google's sites prefer.)

Sadly, it's more complex than that. There are a bunch of rules of thumb that 
are independent of any particular cipher. Here are a few:

* Stream ciphers are typically a seeded PRNG that XORs the pseudo-random stream 
(colloquially called a keystream, but I think would be better called an 
r-stream) onto the plaintext. Everything from Lorenz to GCM works this way. 
This means that known plaintext means known keystream. That means that if you 
reuse the keystream, then there's a cipher break and it's independent of the 
cipher construction or key size. So they are very bad to use on jobs like 
encrypting disk blocks.
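
To make that concrete, here's a minimal sketch (pure Python; the keystream
bytes are invented stand-ins for any seeded PRNG's output):

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = bytes(range(1, 33))     # pretend output of a stream cipher

    p1 = b"attack the east gate at dawn...."
    p2 = b"the payroll account is 55512345."
    c1, c2 = xor(p1, keystream), xor(p2, keystream)

    # XORing the two ciphertexts cancels the keystream entirely:
    assert xor(c1, c2) == xor(p1, p2)
    # and known plaintext p1 hands the attacker p2 outright:
    assert xor(xor(c1, p1), c2) == p2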

* Block ciphers need chaining modes to be effective, otherwise you can get a 
codebook built up. This is why ECB is suboptimal. Every chaining mode has its 
own plusses and minuses. CBC has weaknesses when you use it in a data stream, 
as opposed to a data block. The recent SSL attacks are attacks on the chaining 
mode more than on the cipher. Don't use CBC for a data stream. Counter mode 
turns a block cipher into a stream cipher and makes it good for streams, but 
then it gets all the drawbacks of stream ciphers. If you forget that counter 
mode is no longer a block cipher but a stream cipher, you can hurt yourself. 
But similarly, we've learned that CBC is tetchy when used in a data stream.
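
A toy makes the codebook problem visible. A sketch assuming an 8-bit block so
the repeats jump out (real ciphers use 128-bit blocks, but the structural leak
is identical):

    import random

    rng = random.Random(2013)
    perm = list(range(256))
    rng.shuffle(perm)       # a fixed secret permutation: our toy block cipher

    def ecb_encrypt(data: bytes) -> bytes:
        return bytes(perm[b] for b in data)  # every block encrypted independently

    ct = ecb_encrypt(b"HELLO HELLO HELLO")
    # Equal plaintext blocks give equal ciphertext blocks, so an eavesdropper
    # builds a codebook of repeats without ever touching the key.
    print(ct.hex())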

CFB mode is kinda part stream cipher and part block cipher. It's CBC mode's 
poor relation for no good reason. There are many cases where a CBC weakness 
(particularly one that boils down to a padding attack) could be fixed by using 
CFB mode. People don't though, for no good reason. There are plenty of places 
to use it -- but also look at the Katz-Schneier attack against OpenPGP, that 
was essentially an attack on CFB mode. Ironically, the easiest way to mitigate 
that attack is to compress your data before encrypting.
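
For reference, a minimal sketch of full-block CFB, with a hash standing in for
the block cipher's forward direction (an assumption made only so the sketch is
self-contained; note that both encryption and decryption use just that one
direction):

    import hashlib

    def prf(key: bytes, block: bytes) -> bytes:
        # stand-in for E_K(block), the cipher's encrypt direction
        return hashlib.sha256(key + block).digest()[:16]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cfb_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
        ct, prev = b"", iv
        for i in range(0, len(pt), 16):
            block = xor(pt[i:i+16], prf(key, prev))  # keystream chained off prior ciphertext
            ct += block
            prev = block
        return ct

    def cfb_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
        pt, prev = b"", iv
        for i in range(0, len(ct), 16):
            pt += xor(ct[i:i+16], prf(key, prev))
            prev = ct[i:i+16]
        return pt

Because the keystream is re-derived from the ciphertext itself, a garbled
ciphertext block damages only its own block and the next one, and a short
final block needs no padding.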

* Every cipher and system is going to have weak points. There are ones worth 
worrying about and ones not worth worrying about. There are even ones worth 
arguing over or even deciding that gentlepersons can disagree. There's a very 
old saying, "there ain't a lock that can't be picked," and it's true of crypto, 
too.

If you start hyperventilating about too many things, you *will* just throw your 
hands up in the air. Side channels are important. Pay attention to them. But if 
you start thinking too hard and expect perfect security, you won't do anything, 
and plaintext is always worse than ciphertext. That sounds obvious, but you 
would be surprised how hard it is for people to internalize that.

You can use PKCS#1 properly, if you know what you're doing. You can screw up 
GCM if you don't. (Personally, I don't like GCM. I think it's too tetchy. But 
I'm pretty blasé about PKCS#1, because I'm used to poring over it to make sure 
it's done right.)

* There are many crypto problems that good engineering can paper over. There 
are many that don't really show up in the real world. There are others that 
manifest themselves for whatever reason. Engineering is hard. Don't panic.

* There is a common thing that people do that I call engineering from 
ignorance as opposed to engineering from knowledge. For example, if you jump 
from AES or RC4 because of what you know about it to a cipher that hasn't been 
analyzed, you are engineering from ignorance. You're jumping from the devil you 
know to the devil you don't know. People like to do that, especially ones who 
want to live in a perfect world where ciphers have no drawbacks and there's no 
friction.

 
 It seems like we've been told for ages that RC4 is old and busted, and that 
 AES is the one-size-fits-all algorithm, and yet recent developments like 
 BEAST and Lucky 13 seem to be pushing us back into the arms of RC4 and away 
 from AES.

What do you mean "we"?

RC4 got a bad rep because it has some weaknesses and because a lot of people 
didn't realize that you never send a stream cipher to do a block cipher's job. 
It has some other issues, like that its construction makes it hard to 
accelerate. For a cipher of its age, it's not bad, really, assuming you 

Re: [cryptography] Which CA sells the most malware-signing certs?

2013-02-18 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Feb 18, 2013, at 7:07 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 I've just done a quick tally of the certs posted to
 http://www.ccssforum.org/malware-certificates.php, a.k.a. Digital
 Certificates Used by Malware.  Looks like Verisign (and its sub-brand Thawte)
 are the malware-authors' CA of choice, selling more certs used to sign malware
 than all other CAs combined.  GeoTrust comes second, and everything below that
 is in the noise.  GoDaddy, the most popular CA, barely rates.  Other CAs
 who've sold their certs to malware authors include ACNLB, Alpha SSL (which
 isn't supposed to sell code-signing certificates at all as far as I can tell),
 Certum, CyberTrust, DigiCert, GeoTrust, GlobalSign, GoDaddy, Thawte,
 StarField, TrustCenter, VeriSign, and WoSign.  Everyone's favourite whipping-
 boy CAs CNNIC and TurkTrust don't feature at all.
 
 Caveats: These are malware certs submitted by volunteers, so they're not a
 comprehensive sample.  The site tracks malware-signing certs and not criminal-
 website certs, for which the stats could be quite different.

Interesting, but I have a raised eyebrow.

As Andy Steingruebl pointed out, there are a lot of malware certs that are 
stolen, so this data needs to be normalized against market share. Similarly 
relevant would be the CAs with significantly fewer certs there than market 
share would indicate. My former employer, Entrust, has zero certs in that 
database. What does that mean? Anything?

Why pick on the CAs at all? Frankly, the real problem with signed malware is 
that the *platforms* have the policy that equates a signature with reputation. 
That's the thing that to me is mind-bogglingly daft. It's the equivalent of the 
TSA wanting a government-issued ID, because as we all know, terrorists can't 
get ID.

If you separate signatures from reputation, then anti-malware scanners can 
detect malware by a database of known malware signatures, and then infer 
upwards from a piece of malware to a key owned by or suborned by a malware 
author. They could conveniently kill malware by code signature or signing cert, 
as appropriate. They could even go beyond malware to disable things like known 
buggy or exploitable versions of software. I don't see why they aren't doing 
that now. They don't even need the platform makers to play along.
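
A sketch of that inference, with a hypothetical data model (the hash and
fingerprint values stand for whatever the scanner actually records):

    # known-bad binary digests, e.g. from an anti-malware vendor's database
    malware_hashes = {"hash-of-malware-1", "hash-of-malware-2"}
    distrusted_signers = set()

    def observe(binary_hash: str, signer_fingerprint: str) -> None:
        # infer upward: a known-bad binary taints the key that signed it
        if binary_hash in malware_hashes:
            distrusted_signers.add(signer_fingerprint)

    def allow(binary_hash: str, signer_fingerprint: str) -> bool:
        # reputation attaches to the code and the key, not to the mere
        # existence of a signature
        return (binary_hash not in malware_hashes
                and signer_fingerprint not in distrusted_signers)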

An alliance of the platforms and the anti-malware people would make it 
unnecessary to even have a CA-issued code signing cert.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRIrpGsTedWZOD3gYRAs9gAKDtpTwIOjAIRCxfhcDubT2i/4whXACg6BHa
Mrh87nc4QUybQUCxAbLX1/Y=
=kgfC
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks for your comments, Ian. I think they're spot on.

At the time that the so-called Arab Spring was going on, I was invited to a 
confab where there were a bunch of activists and it's always interesting to 
talk to people who are on the ground. One of the things that struck me was 
their commentary on how we can help them.

A thing that struck me was one person who said, "Don't patronize us. We know 
what we're doing, we're the ones risking our lives." Actually, I lied. That 
person said, "don't fucking patronize us" so as to make the point stronger. One 
example this person gave was that they talked to people providing some social 
meet-up service and they wanted that service to use SSL. They got a lecture on how 
SSL was flawed and that's why they weren't doing it. In my opinion, this was 
just an excuse -- they didn't want to do SSL for whatever reason (very likely 
just the cost and annoyance of the certs), and the imperfection was an excuse. 
The activists saw it as being patronizing and were very, very angry. They had 
people using this service, and it would be safer with SSL. Period.

This resonates with me because of a number of my own peeves. I have called this 
the "security cliff" at times. The gist is that it's a long way from no 
security to the top -- what we'd all agree on as adequate security. The cliff 
is the attitude that you can't stop in the middle. If you're not going to go 
all the way to the top, then you might as well not bother. So people don't 
bother.

This effect is also the same thing as the best being the enemy of the good, and 
so on. We're all guilty of it. It's one of my major peeves about security, and 
I sometimes fall into the trap of effectively arguing against security because 
something isn't perfect. Every one of us has at one time said that some 
imperfect security is worse than nothing because it might lull people into 
thinking it's perfect -- or something like that. It's a great rhetorical 
flourish when one is arguing against some bit of snake oil or cargo-cult 
security. Those things really exist and we have to argue against them. However, 
this is precisely being patronizing to the people who really use them to 
protect themselves.

Note how post-DigiNotar, no one is arguing any more for SSL Everywhere. Nothing 
helps the surveillance state more than blunting security everywhere.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFRFVFhsTedWZOD3gYRAjX5AKCw+SBcR1TDlDuPorgri2makt30wACgs3iI
2f+SwEqjbAVyPhf9SH67Aa8=
=tB7/
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I am separating this from my previous message, as I went into a rant.

As we were designing Silent Text, we talked to a lot of people about what they 
needed. I don't remember who told me this anecdote, but this person went over 
to a colleague's office after they'd been texting to just talk. They walked 
into the colleague's office and noticed their phone open with a conversation 
plainly visible with someone else. A third party who was their mutual colleague 
was texting about that meeting.

In short: Alice goes to Bob's office for a meeting and sees texts from Charlie 
about that meeting, including comments about Alice.

There wasn't anything untoward about the texting. No insults about Alice or 
anything, but there was an obvious privacy loss here. What if it *had* 
included an intemperate comment about our Alice? Alice said nothing about it to 
Bob, but I got an earful. That earful included the opinion that the threat of 
accidental disclosure of messages within a group of people is greater than 
either the messages being plucked out of the air or seizure and forensic 
groveling over the device. Alice's opinion was that when people have a secure 
communications channel, they loosen up and say things that are more dramatic 
than they would be otherwise. It's not that they're more honest, they're less 
honest. They're exaggerated to the point of hyperbolic at times. Alice said 
that she knew that she'd texted some things to Bob that she really wouldn't 
want the person she'd said them about to see them. They were said quickly, in 
frustration, and so on. It's not that they'd be taken out of context, it's 
 that they'd be taken *in* context.

It's interesting that, underlying the story, Alice suddenly saw Bob not as an ally in 
snark, but a threat -- the sort of person who leaves their phone unlocked on 
their desk. Bob, of course, would say something like: if the texts had been 
potentially offensive, he'd have locked his phone. This explanation would thus 
convince Alice that Bob is *really* not to be trusted with snark.

This is incredibly perceptive, that the greatest security threat is not the 
threat from outside, it's the threat from inside. It is exactly Douglas Adams's 
point about the Babel fish: that by removing barriers to communication, it 
created more and bloodier wars than anything else.

That's where Burn Notice came from. It's a safety net so that when Charlie 
texts Bob, "I'm tired of Alice always..." it goes away.

What I find amusing is the reaction to it all around. There's a huge 
manic-depressive, bimodal reaction. Lots of people get ahold of this and 
they're like girls who've gotten ahold of makeup for the first time. "ZOMG! You 
mean my eyelids can be PURPLE and SPARKLY?" This is the same thing that happens 
when people discover font libraries or text-to-speech systems. For a couple of 
days that someone gets the new app, there's nothing but text messages that are 
self-destructing, purple, sparkly eyelids with font-laden Tourette's Syndrome 
with the Mission Impossible theme song playing in the background. (Note, if you 
are using Silent Text, you can't actually make the text purple, nor sparkly, 
nor change fonts. You need to put all of that in a PDF or an animated GIF -- 
and you will. This is a metaphor, not a requirements document.)

The next thing that happens is that they are so impressed with some 
particularly inspired bit of self-destructing childishness that they take a 
screen shot. As they gaze at the screen shot, or sometimes just as they take 
the screen shot, light dawns. "Oh. You mean... Oh." Then the depressive phase 
kicks in.

Back in the dark ages, PGP had the For Your Eyes Only feature. This is pretty 
much the ancestor of Burn Notice. Simultaneously useful and worthless. It's 
useful because it signals to your partner that this is not only secret but 
sensitive and does something to stop accidental disclosure. It is utterly 
ineffective against a hostile partner for many of the same reasons. We did all 
sorts of silly things with FYEO that included an anti-TEMPEST/Van Eck font, and 
other things. Silent Text actually has an FYEO feature that isn't exposed, 
thank heavens.

I mention all of that because once you're in the depressive phase, it's easy to 
go down the same rathole we did with FYEO. I spent time researching if you can 
prevent screen shots on iOS (you can't). I did this while telling people that 
it was dumb because I can take a picture of my iPhone with my iPad. I held up 
my phone to video chat and said, "Here, see this? This is what you can do!"

Sanity prevailed, but I think that fifteen years of FYEO helped a lot. When you 
stare into self-destructing messages, trying to figure out how to make them really 
go away flawlessly, they stare back. You will end up trying to figure out how 
to do a destructive two-phase commit, what class libraries need to be patched 
so those that non-mutable strings inherit from 

Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-06 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Feb 6, 2013, at 3:35 PM, Jeffrey Walton wrote:

 On Wed, Feb 6, 2013 at 7:17 AM, Moti m...@cyberia.org.il wrote:
 Interesting read.
 Mostly because the people behind this project.
 http://www.slate.com/articles/technology/future_tense/2013/02/silent_circle_s_latest_app_democratizes_encryption_governments_won_t_be.html
 
 No offense to folks like Mr. Zimmermann, but I'm very suspect of his
 claims. I still remember the antithesis of the claims reported at
 http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/.
 
 I'm also suspect of ... the sender of the file can set it [the
 program?] on a timer so that it will automatically “burn” - deleting
 it [encrypted file] from both devices after a set period of, say,
 seven minutes. Apple does not allow arbitrary background processing -
 it's usually limited to about 20 minutes. So the process probably won't
 run on schedule or it will likely be prematurely terminated. In
 addition, with Flash Drives and SSDs it is notoriously difficult to wipe an
 unencrypted secret.
 
 Perhaps a properly scoped PenTest with published results would allay my
 suspicions. It would be really bad if people died: ... a handful of
 human rights reporters in Afghanistan, Jordan, and South Sudan have
 tried Silent Text’s data transfer capability out, using it to send
 photos, voice recordings, videos, and PDFs securely.

No offense is taken. You don't even need a pen test. I'll tell you how it works.

There's no magic there. Every message that we send has metadata on it that is a 
timeout. The timer starts when you get the message. So if I send you a seven 
minute timeout while you're on an airplane, the seven minutes starts when you 
receive the message.

And you are correct, the iOS app model doesn't allow background tasks, so if 
you switch away from the app for an hour, the delete doesn't happen until you 
switch back to the app. Until Apple lets us do something in the background, 
we're stuck with that limitation. It's that simple. We hope to do better on 
Android. And if someone from Apple happens to be listening in, we'd love to be 
able to schedule some deletions.

Deleting the things, however, is trivial. This is a place where iOS shines. 
Every file is encrypted with a unique key and if you delete the file, it is 
cryptographically erased. You're correct that it *is* notoriously difficult to 
wipe unencrypted secrets from flash. Fortunately for us, all the flash on iOS 
is encrypted and the crypto management is easy to use.
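
Cryptographic erasure is easy to sketch. A minimal illustration using the
pyca/cryptography package's Fernet, which is a stand-in here and not what iOS
itself uses:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                 # unique key for this one file
    token = Fernet(key).encrypt(b"the file's contents")

    # "Deleting" the file means destroying its key. The ciphertext can sit in
    # flash forever; without the key it is indistinguishable from noise.
    key = None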

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFRE1VKsTedWZOD3gYRAvfHAJ0dd9tSABRZkJxtdM4QbcI+d/jQqACgnPN7
nZ0rsFPcGCU9KNQEqSu70HU=
=nsyj
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] yet another certificate MITM attack

2013-01-12 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jan 12, 2013, at 1:27 AM, ianG wrote:

 Oh, I see.  So basically they are breaking the implied promise of the https 
 component of the URL.
 
 In words, if one sticks https at the front of the URL, we are instructing the 
 browser as our agent to connect securely with the server using SSL, and to 
 check the certs are in sync.
 
 The browser is deciding it has a better idea, and is redirecting that URL to 
 a cloud server somewhere.
 
 (I'm still just trying to understand the model.  Yes, I'm surprised, I had 
 never previously heard of this.)

I suppose you can look at it as breaking the implied promise. You can also 
look at it as a service.

Many of these systems work in an environment where connectivity is very 
expensive. In such an environment, saving money by having someone filter your 
HTTP comes with the cost that you have to trust them not to do bad things with 
your data.

But if you get into a cab, you're trusting them not to drive you into oncoming 
traffic. If that threat bothers you, don't take a cab. Every time you eat in a 
restaurant, you're trusting them to have reasonable food safety practices and 
not spit on your food. If that bothers you, don't do that.

 
 
 
 That can be converted pictures, edits to the HTML proper, and so on.
 
 The security characteristics are a mixed bag. They can send smaller 
 pictures, scan for malware, but obviously they can't process your SSL 
 connections. So they send the URL to the cloud server, make the SSL 
 connection, and then send you the optimized page over SSL.
 
 One could interpret the browser as being a combined service between the 
 client on the phone, and the cloud support services, sure.
 
 I think this interpretation would be unusual to any ordinary user.  At a 
 contractual level, it would also need to be agreed by both ends.  We can 
 easily ensure the end-users' agreement by means of the phone agreement, but 
 it is less easy to imply the banks' agreement.

In some parts of the world and under some conditions, it's *usual*. The network 
is bad and expensive. It's really easy for us rich Westerners who can afford 
data roaming plans and travel SIMs to go into high dudgeon over it. I share 
your disdain, but my disdain is similar to my disdain for payday check cashing 
places etc. I don't approve. I understand, but I don't approve.

 
 And, if a security case were to result in a bank being held for damages, it 
 could easily expand to Nokia.  Given the complexity of modern day online 
 banking sites (that's a kind description) I can't see how they could be agile 
 enough to avoid making mistakes.

Sure. Nokia is taking a risk, as is Opera (who supply that browser). That risk 
is mitigated by a click-through license that no one reads, but heck, someday 
some judge is going to hack up a hairball on click-throughs.

 
 Yes, ok, it's not an attack if there isn't an attacker.  Or more generally, 
 is it an attack when the attack is done by self?  We have met the enemy, and 
 he is us.

Exactly, and the answer is no. It's a service voluntarily offered and 
subscribed to (for some suitable definition of the word voluntary).

 
 So more properly, it might be a breach-of-contract issue, where the contract 
 to provide a browser that does the 'right thing' has been breached (in the 
 view of the outraged).
 
 Nokia will argue that their contract is clearly expressed, they can do this 
 and they claim so in their contract.  OK.
 
 Question remains -- what to make of a vendor that does tricksy things with 
 the implied secure browsing contract?

Well, that's like a short-term lender who does 
something tricksy with the interest rate. There's a big smear from accepted to 
dodgy to unfair to evil. 

 
 If Nokia can do this, can the other vendors?  Why can't Firefox and Chrome 
 start clouding the https connection?

They could, sure. As I pointed out, Google Reader is almost the same sort of 
thing, but is an RSS reader. I have quibbles with them, and my quibbles are 
actually the opposite. Amazon Silk does pretty much the same thing as the 
Nokia/Opera thing. A lot of pixels have been spilt over it. I don't use Silk, 
but I don't think Amazon are evil for offering it. I don't think the people who 
use it are either stupid or dupes. It's just not my thing.

(The quibble I have is over partial security. My quibble is that lots of 
partial-security systems get labeled as being worse than no 
security. I believe that partial security is always better than no security.)

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQ8b6MsTedWZOD3gYRAvfNAKDU1sQjOqV+8SRzHWzg1sBYbGZ+tACgoFhi
78lRhcT0rG+0afgTRktaII4=
=TPRD
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] yet another certificate MITM attack

2013-01-10 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Others have said pretty much the same in this thread; this isn't an MITM 
attack, it's a proxy browsing service.

There are a number of optimized browsers around. Opera Mini/Mobile, Amazon 
Silk for the Kindle Fire, and likely others. Lots of old WAP proxies did 
pretty much the same thing. The Nokia one is essentially Opera.

These optimized browsers take your URL, process it on their server and then 
send you back an optimized page. That can be converted pictures, edits to the 
HTML proper, and so on.

The security characteristics are a mixed bag. They can send smaller pictures, 
scan for malware, but obviously they can't process your SSL connections. So 
they send the URL to the cloud server, make the SSL connection, and then send 
you the optimized page over SSL.

Some of these browsers let you turn off the optimizations for SSL pages. The 
Amazon Silk browser does. 

You can find information about Opera at:

http://www.opera.com/mobile/specs/

Here's articles with various concerns about Silk:

http://www.zdnet.com/blog/networking/amazons-kindle-fire-silk-browser-has-serious-security-concerns/1516

http://www.theinquirer.net/inquirer/news/2203964/amazon-confirms-kindle-fires-silk-browser-tracks-users

They're not doing certificate hinkiness. They are straightforward cloud 
services, or perhaps more formally proxy services. Heck, Google Reader is more 
or less the same thing, itself, albeit as an RSS reader rather than a web browser.

If one wants to get upset about them, there's plenty to grumble over. There's 
the explicit security concerns, concerns about tracking, concerns about 
misrepresentation to the users about what's really going on, and so on. The 
meta concern that smart people like us are even discussing it is also a 
security concern.

But they provide services that some people find valuable. I don't use them, but 
I wouldn't even call them a MITM, myself. When we say MITM we're eliding the 
word "attack." It's not an attack.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFQ71XksTedWZOD3gYRAoShAKDyXR3LPirRscaxA1RDTPQFrjl/jgCgpiMF
TMyJCoC77oZ9uaaWWomVuEg=
=f2UH
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How much does it cost to start a root CA ?

2013-01-05 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I'm really glad you asked this question. It gives me a chance to tell a story I've 
wanted to tell for some time. I know the answer to your question because I've 
done it.

Some years ago, PGP Corporation toyed off and on with the idea of becoming a 
CA. We looked at ways to get there through the side door, like buying the 
assets of some company that was going out of business, and managed to be too 
little, too late.

So after a lot of dithering, we started a project to create a CA from scratch. 
I led the project and it had a budget of US$250K. I code-named the project 
Casablanca. Partially because Casablanca begins and ends with a CA, but mostly 
because I really like the phrase, "I am shocked, shocked that PGP is issuing 
X.509 certificates."

The process for setting up a CA is straightforward and exacting. You have to 
have physical and logical controls on things, dual-authentication and 
separation of duties on just about everything, but it's straightforward. You 
have to write a lot of documents, create a lot of procedures, and have all of 
that audited. You have to get audited regularly and often as you start out, and 
then the audits taper off after you show that you're running a tight ship. 

The main thing you're looking to do is to pass the WebTrust audit and 
associated practices that the platforms will require you to do. Microsoft has 
the most mature process. They have a set of rules and guidelines. If you follow 
them, you're in. One of those, by the way, is that you have to be a retail CA, 
as opposed to an internal one or a government one. It's best to work with 
Microsoft first, and once you're in their root program move to the others. They 
are fair, disciplined, and helpful. Most of all, once you've gone through all 
that, it's easier to get into the other important root stores.

If you go into this business with the attitude that you're doing a job that 
protects the Internet at large, defends the public trust, and so on, then 
you'll find the requirements completely reasonable and easy to do. 

Now, that $250K I spent got an offline root CA and an intermediate online 
CA. The intermediate was not capable of supporting workloads that would make 
you a major business. You need a data center after that, that supports the 
workloads that your business requires. But of course, you can grow that with 
your customer workload, and you can buy the datacenter space you need.

The costs got split out to about 40% hardware, etc. and 60% people. It does not 
include the people costs of the internal PGP personnel who worked on it. I 
raided part time help from around the company. It took about fourteen months 
from start to end.

PGP bought an existing company, TrustCenter. TrustCenter was the remaining end 
of GeoTrust (spun out of Equifax) that Verisign did not buy. The plan was that the 
PGP-branded Casablanca roots would be put into the TrustCenter machinery and 
datacenters, and then you have a major CA. That got interrupted by Symantec 
buying PGP and then buying Verisign. Casablanca is now rolled up into their 
Norton CA business along with Verisign and Thawte, GeoTrust, etc.

There are rumors, which you've read here about how there are lots of 
underhanded obstacles in the way of becoming a CA. My experience is that the 
only underhanded part of the industry is that no one in it dispels the rumors 
that there are underhanded obstacles in your path. This is pretty much the 
first time I have, so I suppose I'm as guilty as anyone else.

Furthermore, there are lots of overblown rumors about the CA/Browser Forum. You 
don't have to be a Forum member to be a CA. If you plan to issue EV 
certificates, you have to follow the EV guidelines which are produced by the 
CA/Browser Forum, but that is because the platforms won't put your EV root in 
their stores unless you do. You don't have to be a member of the Forum to be a 
CA. As a matter of fact, there are a large number of CAs that are not members.

The situation is similar to Internet protocols and the IETF. If you want to 
make routers, you don't have to be a member of the IETF. You *will* have to 
follow IETF documents, but you don't have to participate. Obviously, there are 
advantages in participating, but there are also costs.

I was involved in the CA/Browser Forum for a few years, first with Apple (on 
the browser end) and then with Entrust (on the CA end). I heard the stories 
about how it's a cartel, etc. At PGP, we had no plans to be members because we 
had no interest in being part of a cartel. It was a huge disappointment to be 
there and find out that it isn't a cartel at all, it's a volunteer organization 
that handles lots of the rough edges of web PKI with the same combination of 
spurts of efficiency and spurts of fecklessness that you find in just about any 
organization that tries to get a bunch of organizations with different goals to 
work together.

Presently, the Forum is 

Re: [cryptography] Tigerspike claims world first with Karacell for mobile security

2012-12-26 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I took a look at it. Amusing. I didn't spend a lot of time on it. Probably not 
more than twice what it took me to write this.

It has an obvious problem with known plaintext. You can work backward from 
known plaintext to get a piece of their tumbler and since the tumbler is just 
a big bitstring, work from there to pull out the whole thing.

The encrypted Karacell file format has 64 bits that must decrypt to zero. Since 
encryption is an XOR onto a pseudo-one-time-pad, this leaks 64 bits of the 
tumbler. Similarly, the checksum at the end is a bunch of hash blocks of 
their special hash all XORed together. This doesn't work against malicious 
modification; you can cut-and-paste through XOR, etc.
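
Here is what cut-and-paste through XOR means, sketched with an invented pad
standing in for the tumbler-derived one:

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pad = bytes(range(100, 108))        # stand-in for the tumbler-derived pad
    ct = xor(b"PAY $100", pad)

    # With no knowledge of the pad, XOR in the difference between the guessed
    # plaintext and the desired one:
    forged = xor(ct, xor(b"PAY $100", b"PAY $900"))
    assert xor(forged, pad) == b"PAY $900"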

There are obvious vulnerabilities to linear and differential cryptanalysis. It 
is a lot of XORing on large-ish fixed long-term secrets with only bit-rotating 
through the secrets, and between the vulnerabilities of known plaintext as well 
as the leaks in it, I don't see a lot of long-term strength. I bet that you can 
use known structure of plaintext (like that it's ASCII/UTF8, let alone things 
like known headers on XML files) to start prying bits out of the tumblers and 
you just work backwards. 

But beyond that, it isn't even particularly fast. Since it needs a lot of bit 
extraction and rotations, I doubt it would be as fast as AES on a processor 
with AES-NI instructions. The whole thing is based on doing 16-bit calculations 
and some bit sliding; I don't expect it to be as fast as RC4 or some of the 
fast eSTREAM ciphers.

Obviously, I could be missing something, but there are other errors of art that 
lead me to think there isn't a lot here. For example, if your basic encryption 
system is to take a one-time-pad and try to expand that out to more uses, zero 
constants are errors of art. You should know better. There are similar errors 
like easily deducible parameters that give more known plaintext. The author 
discusses using a text string directly as a key, which is very bad with his 
expansion system. He invented his own message digest functions, and they look 
like complete linear functions to me. They're in uncommented C that's light on 
indenting and whitespace. Confirmation bias might be making me miss something, 
but it's not like he made it easy for me.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQ225dsTedWZOD3gYRArauAKC5vrbr9HKPd0a0NoXL+eVQq428uQCgiiFE
GFlyVpZAY6w80CBqxXl2qHs=
=gncJ
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why using asymmetric crypto like symmetric crypto isn't secure

2012-11-04 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Nov 3, 2012, at 7:03 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Jon Callas j...@callas.org writes:
 
 Which immediately prompts the question of what if it's long or secret? [1]
 This attack doesn't work on that.
 
 The asymmetric-as-symmetric was proposed about a decade ago as a means of
 protecting against new factorisation attacks, and was deployed as a commercial
 product.  I don't recall them keeping the exponent secret because there wasn't
 any need to... until now that is.  So I think Taral's comment about not using
 crypto in novel ways is quite apropos here, the asymm-as-sym concept only
 protected you against the emergence of novel factorisation attacks (or the use
 of standard factorisation attacks on too-short keys) as long as no-one
 bothered trying to attack the public-key-hiding itself.

Point taken. I'm being too grumpy. 

I think this is a brilliant result because it gives us a "see, see" reference 
to give to people.

I'm big on sneering at proofs of security because they often do not relate to 
real security in the real world in ways that upset me (a guy whose degree is in 
mathematical logic) to my core. If you want the same sort of rigor that math 
has, security is useless.

On the other hand, and Hal Finney drove this home to me many times, they do 
tell you what sort of things you can ignore. 

This one is great because of the way it slaps intuition around.

 
 If you believe that the only attack against RSA is factoring the modulus,
 then you can be seduced into thinking that hiding the modulus makes the
 attacker's job harder. 
 
 Yup, and that was the flaw in the reasoning behind the keep-the-public-key-
 secret system.  So this a nice textbook illustration of why not to use crypto
 in novel ways based purely on intuition.

There are all sorts of things people do based on an intuition. Hell, I've done 
them. Sometimes they just present themselves. If I had a protocol that didn't 
expose public keys (suppose they're all wrapped in a secure transfer), I might 
point out that hey, this system has hidden RSA keys. But this points out that 
unless there is a lot of extra work you do, you didn't do squat. It also 
suggests that the conservative engineering approach, which is to say that 
unless you can characterize added security it's just fluff, has new backing in 
fact.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQluTIsTedWZOD3gYRAvvGAKDAGkbALD3jqLq8kmG7VIXWtJ2sWACfWOwG
DFFKn3LjBEqvpwv4lqHYn04=
=G0xh
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why using asymmetric crypto like symmetric crypto isn't secure

2012-11-03 Thread Jon Callas
 In the past there have been a few proposals to use asymmetric cryptosystems,
 typically RSA, like symmetric ones by keeping the public key secret, the idea
 behind this being that if the public key isn't known then there isn't anything
 for an attacker to factor or otherwise attack.  Turns out that doing this
 isn't secure:
 
  http://eprint.iacr.org/2012/588
 
  Breaking Public Keys - How to Determine an Unknown RSA Public Modulus
  Hans-Joachim Knobloch
 
  [...] We show that if the RSA cryptosystem is used in such a symmetric
  application, it is possible to determine the public RSA modulus if the
  public exponent is known and short, such as 3 or F4=65537, and two or more
  plaintext/ciphertext (or, if RSA is used for signing, signed
  value/signature) pairs are known.

Great paper; however, the conclusions here and in replies are not quite right. 
The paper itself says,

it is possible to determine the public RSA modulus if the public exponent is 
known and short, such as 3 or F4=65537, 


Which immediately prompts the question of "what if it's long or secret?" [1] 
This attack doesn't work on that.

What it tells you is that if for some strange reason, you are going to keep the 
public key secret, you need to make the exponent part of the secret. That's the 
real, real lesson here -- an RSA key has an exponent and a modulus and unless 
the exponent is secret, the key isn't secret. And of course secret doesn't mean 
the usual ones just put in a cabinet.

And for us logic weenies, he does not show that a secret public key is 
insecure. He shows that there is no added security for secret public keys where 
the exponent is known and short. Those keys are just as secure as they would be 
if they had known public keys (which could be not at all).

The danger is not using a public key algorithm in a novel way; it's using it in 
a novel way and thinking that your intuition is correct. It's failing to think 
through the consequences of your actions.

If you believe that the only attack against RSA is factoring the modulus, then 
you can be seduced into thinking that hiding the modulus makes the attacker's 
job harder. The brilliance of this paper is that it concisely shows that unless 
you take care in selecting an exponent, the modulus leaks easily.
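
The paper's core trick fits in a few lines. A toy-sized sketch with tiny
primes (real moduli are 2048+ bits):

    from math import gcd

    p, q, e = 1000003, 1000033, 3
    N = p * q

    m1, m2 = 123456789, 987654321             # two known plaintexts
    c1, c2 = pow(m1, e, N), pow(m2, e, N)     # their observed ciphertexts

    # m**e - c is an exact multiple of N, so two pairs nearly pin it down:
    candidate = gcd(m1**e - c1, m2**e - c2)
    # the gcd can carry a small spurious cofactor; trial division strips it
    for f in range(2, 1000):
        while candidate % f == 0:
            candidate //= f
    print(candidate == N)                     # recovers the "secret" modulus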

Obviously, a secret public key isn't *less* secure. (The reductio ad absurdum 
is left as an exercise for the reader.) It must be as secure or greater. But if 
it's greater, by how much and how would you know? You need to be able to answer 
that question, or at least handwave in the direction of an answer.

If you don't have a lower bound on the improved security of that tweak, then 
you should consider it to be zero. This is why although it's still left open as 
to whether a truly secret public key adds security, we should assume there's no 
added security.

The engineering dope-slapping that needs to happen is over getting distracted. 
Security systems are designed to meet certain assumptions. Changing the 
assumptions changes the result. Public-key cryptosystems are designed in such a 
way that the public key is a public parameter. They are not designed to have 
added security when the public key is secret. This paper shows a case in which 
there is no added security, and as a matter of fact, the modulus leaks from the 
ciphertext.

If you want to make the public key secret, you have to do more work and there's 
no indication of how much added security there is -- it could be zero. No one 
has ever done a keygen with any thought given to the care you need to 
make the exponent a secret parameter. On the contrary, it's usually a 
quasi-constant.

All that added work could be put somewhere else, and as we all know there's 
plenty of ways to induce bugs by doing the extra work. Therefore, in the words 
of Elvis Costello, "don't get cute." If you use reasonable parameters in 
off-the-shelf subsystems, you'll be just fine. Getting cute at best adds in some 
undefinable bit of good-feeling, which isn't the same thing as security.

Jon

[1] Operationally, long or secret will be long *and* secret because there are 
no commonly used long exponents, and all the common exponents are short. 
Phrased another way, the short exponents are easily iterated over.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] hashed passwords, iteration counts, and PBKDF2

2012-10-31 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Oct 31, 2012, at 1:58 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

 Thinking out loud;
 
 One reason why PBKDF2 requires the original password is so that you don't 
 repeatedly
 hash the same thing, and end up in a short cycle, where e.g. hash(x) = x.  At 
 that
 point, repeated iterations don't do anything.
 
 I just realized, you don't necessarily need to put the original password in; 
 you
 could just hash something else that varies to keep it out of a short cycle; 
 for
 example, the round number.
 
 This would allow you to update an iteration count post-facto without knowing 
 the
 original password.  Would it break any security goals?

Almost certainly not. There aren't proofs of security, but I can wave my hand 
at some.

This basic technique is something that a number of modern (SHA-3 etc.) hash 
functions do. It's more or less what Skein does.

Consider what you're doing as creating a hash function with a compression 
function that is your base hash function (which is likely to actually be a 
keyed HMAC), and then you chain it with a counter that provides uniqueness per 
iteration.
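
A minimal sketch of that idea -- an illustration, not PBKDF2 itself: feed a
round counter into every HMAC iteration, which both kills fixed points and
lets you raise the count later without the password:

    import hashlib
    import hmac

    def stretch(state: bytes, start: int, rounds: int) -> bytes:
        for i in range(start, start + rounds):
            # the counter makes each round's input unique, so a fixed point
            # hash(x) == x can't trap the chain in a short cycle
            state = hmac.new(state, i.to_bytes(4, "big"), hashlib.sha256).digest()
        return state

    h1 = stretch(b"password-derived seed", 0, 10_000)
    # later, raise the work factor post-facto with no password in hand:
    h2 = stretch(h1, 10_000, 5_000)
    assert h2 == stretch(b"password-derived seed", 0, 15_000)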

Skein takes the Threefish tweakable cipher as its compression function and uses 
a counter and other stuff in the tweak to create the UBI chaining mode which 
has per-chunk uniqueness to get some security guarantees.

There's a handwave. If you are indeed using HMAC with a small quantity of 
smarts, you can almost certainly chain the HMAC proofs into a proof of security.

It's trivially no weaker than the base PRF/compression-function, which if it's 
an HMAC is not bad, security-wise. I can think of some ways to screw it up, but 
I think those imply a drastic weakness in either the underlying base hash 
function or HMAC itself. Even those can probably be papered over with a 
Luby-Rackoff argument that enough rounds covers all sins. If you're doing a few 
tens of thousands of rounds (which is just a good idea with PBKDF2), I'm sure 
that you can end up with a security floor that is much greater than the entropy 
in the password itself (which is going to have lots of suck -- you're lucky to 
*ever* get over 32 bits, and only an insane person would be much over 64, and 
even those are likely to be illusory).

In short, it sounds okay to me. I'm sure you can screw it up if you try, but it 
sounds okay to me.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQkaC1sTedWZOD3gYRAtSmAKCuQSeeeq2uwuVDx9S7T/6wQquW7QCeJwH0
Tox5gJds6vvt/PmIY7GwkbE=
=6f0G
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] DKIM: Who cares?

2012-10-24 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

As someone who is one of the DKIM authors, I can but roll my eyes and shrug.

It's an interesting, intentional facet of DKIM that any given key 
only has to last as long as it takes the email to go from the sender's domain 
to the receiver's.

You could set things up so there's one key per message and take them down as 
the message is used. That's a lot of trouble, but you *could* do that.

However, RFC 4871 says on the subject:

3.3.3.  Key Sizes

   Selecting appropriate key sizes is a trade-off between cost,
   performance, and risk.  Since short RSA keys more easily succumb to
   off-line attacks, signers MUST use RSA keys of at least 1024 bits for
   long-lived keys.  Verifiers MUST be able to validate signatures with
   keys ranging from 512 bits to 2048 bits, and they MAY be able to
   validate signatures with larger keys.  Verifier policies may use the
   length of the signing key as one metric for determining whether a
   signature is acceptable.

   Factors that should influence the key size choice include the
   following:

   o  The practical constraint that large (e.g., 4096 bit) keys may not
  fit within a 512-byte DNS UDP response packet

   o  The security constraint that keys smaller than 1024 bits are
  subject to off-line attacks

   o  Larger keys impose higher CPU costs to verify and sign email

   o  Keys can be replaced on a regular basis, thus their lifetime can
  be relatively short

   o  The security goals of this specification are modest compared to
  typical goals of other systems that employ digital signatures

   See [RFC3766] for further discussion on selecting key sizes.

Note the weasel words "long-lived." I think that the people caught out in this 
were risking things -- but let's also note that the length of exposure is the 
TTL of the DNS entries.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFQiL2lsTedWZOD3gYRAou1AJ0W4HQMn/pfT00nvQcJB+B8MqUVXQCdGL9R
PxLZSoy7Qeax8ABpvdTc214=
=phnF
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] cryptanalysis of 923-bit ECC?

2012-06-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 22, 2012, at 2:01 AM, James A. Donald wrote:

 On 2012-06-22 6:21 PM, James A. Donald wrote:
 Is this merely a case where 973 bits is equivalent to ~60 bits symmetric?
 
 As I, not an authority, understand this result, this result is not oops, 
 pairing based cryptography is broken
 
 It is oops, pairing based cryptography requires elliptic curves over a 
 slightly larger field than elliptic curve based cryptography does

Indeed. So kudos to the Fujitsu guys, and we make the curves bigger. Even 77 
bits is really too small for serious work.

Does anyone know what the ratio is for equivalences, either before or after?

The usual rule of thumb is 2x bits for symmetric security equivalence on hashes 
and normal ECC; with integer public keys, 1024 maps to 80 symmetric, 2048 
to 112, and 3K to 128.
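
Laid out as a table (the usual NIST-style equivalences; the ECC column is the
2x rule):

    symmetric   ECC    integer (RSA/DH)
       80       160        1024
      112       224        2048
      128       256        3072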

What creates the 923 - 153 relation? Then of course there's the obvious 153 
halved, but do we know at all how we'd compensate for the new result?

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP5LFxsTedWZOD3gYRAi2oAKDTs9aRZVTc2IoFlaKPbEJw9pd6jACeOSqe
WMl+TXGl/i+KHfW9p88dxHA=
=0+9/
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] cryptanalysis of 923-bit ECC?

2012-06-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jun 22, 2012, at 11:20 AM, Samuel Neves wrote:

 
 Not exactly. If the target is ~80-bit security, ~160-bit elliptic curves are 
 still fine, even for pairing-based crypto. The failure there was the choice 
 of the particular *field* and *curve parameters*. Namely, choosing both the 
 characteristic (3) and the embedding degree (6) to be small left it open to 
 faster attacks.

Yeah, but we're all supposed to retire 80-bit crypto.

I'm well aware of my own lackadaisicalness in this regard (to wit, the 1024-bit 
DSA key that this message is signed with). That doesn't make the point invalid, 
it only means that I am a sinner, too.

I'm interested in knowing what the equivalent values for uprating are, and the 
rationales for them.

If ~1000 bit pairing is equivalent to 80 bits, what's equivalent to 128?

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFP5N6CsTedWZOD3gYRAlBZAKDf1Yl6Z9sw7HY2kZYSJos8QAaa8ACfYFEO
6UmICgYZia5H9rw2b9IVTM8=
=SUPa
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-19 Thread Jon Callas

On Jun 18, 2012, at 9:03 PM, Matthew Green wrote:

 On Jun 18, 2012, at 4:21 PM, Jon Callas wrote:
 
 Reviewers don't want a review published that shows they gave a pass on a 
 crap system. Producing a crap product hurts business more than any thing in 
 the world. Reviews are products. If a professional organization gives a pass 
 on something that turned out to be bad, it can (and has) destroyed the 
 organization.
 
 
 I would really love to hear some examples from the security world. 
 
 I'm not being skeptical: I really would like to know if any professional 
 security evaluation firm has suffered meaningful, lasting harm as a result of 
 having approved a product that was later broken.
 
 I can think of several /counterexamples/, a few in particular from the 
 satellite TV world. But not the reverse.
 
 Anyone?

The canonical example I was thinking of was Arthur Andersen, which doesn't meet 
your definition, I'm sure.

But we'll never get to requiring security reviews if we don't start off seeing 
them as desirable.

Jon



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jun 18, 2012, at 4:12 PM, Marsh Ray wrote:

 
 150 clocks (Intel's figure) implies 18.75 clocks per byte.
 

That's not bad at all. It's in the neighborhood of what I remember my DRBG 
running at with AES-NI. Faster, but not by a lot. However, I was getting the 
full 16 bytes out of the AES operation and RDRAND is doing 64 bits at a time, 
right?

 
 Note that Skein 512 in pure software costs only about 6.25 clocks per byte. 
 Three times faster! If RDRAND were entered in the SHA-3 contest, it would 
 rank in the bottom third of the remaining contestants.
 http://bench.cr.yp.to/results-sha3.html

As much as it warms my heart to hear you say that, it's not a fair comparison. 
A DRBG has to do a lot of other stuff, too. The DRBG is an interesting beast 
and a subject of a whole different conversation.
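
To give a flavor of that other stuff, here's a bare-bones hash-based sketch,
loosely in the spirit of the SP 800-90 designs and not Intel's actual DRBG;
every output block pays for a state update and bookkeeping on top of the raw
primitive:

    import hashlib

    class TinyDRBG:
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(b"instantiate" + seed).digest()
            self.blocks_since_reseed = 0

        def generate(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hashlib.sha256(b"output" + self.state).digest()
                # roll the state forward after each block so compromising
                # the state later can't reveal earlier outputs
                self.state = hashlib.sha256(b"update" + self.state).digest()
                self.blocks_since_reseed += 1
            return out[:n]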

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: windows-1252

wj8DBQFP4B3lsTedWZOD3gYRAkegAJ0Z491IAfNVXX3hKOdOghPczZmWMACgztIG
Ym7qE1e/es0m0o+macE+Iv0=
=GJXv
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] non-decryptable encryption

2012-06-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I am reminded of an article my dear old friend, Martin Minow, did in 
Cryptologia ages ago. He wrote the article I think for the April 1984 issue. It 
might not have been 1984, but it was definitely April.

In it, he described a cryptosystem in which you set the key to be the same as 
the plaintext and then XOR them together. There is a two-fold beauty to this. 

First that you have full information-theoretic security on the scheme. It is 
every bit as secure as a one-time pad without the restrictions of a one-time 
pad as to randomness of the keys and so on. 

The second wonderful property is that the ciphertext is compressible. Usually 
ciphertext is not compressible, but in this case it is. Moreover, it is 
*maximally* compressible. The ciphertext can be compressed to a single bit and 
the ciphertext length recovered after key distribution.

I think that "non-decryptable encryption" really needs to cite Minow's pioneering 
work.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP4CW6sTedWZOD3gYRAgW8AKCpdVUpa1CpDpn5F6ZB4hezweGa9gCgz/62
m2eb/GnTagRxb6O0ct0a2oQ=
=Gwp3
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] non-decryptable encryption

2012-06-19 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 19, 2012, at 12:09 AM, Jon Callas wrote:

 I am reminded of an article my dear old friend, Martin Minow, did in 
 Cryptologia ages ago. He wrote the article I think for the April 1984 issue. 
 It might not have been 1984, but it was definitely April.

1986. Cryptologia, Volume 10, Issue 2, 1986. The article is entitled NO 
TITLE. The first page is available here:

http://www.tandfonline.com/doi/abs/10.1080/0161-118691860912

but sadly the rest of it is behind a paywall that wants $43 for the issue (or 
the whole volume for $58, such a bargain).

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP4CoEsTedWZOD3gYRAouxAKDSMxRISY7BgZ7aLZ8TxCbm2uX+9gCg8T8E
J/rdgBl2nIaHES8X2nWp0QY=
=LZvI
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:

 The fact that something occurs routinely doesn't actually make it a good 
 idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
 
 This is CRI, so I'm fairly confident nobody is cutting corners. But that 
 doesn't mean the practice is a good one. 

I don't understand.

A company makes a cryptographic widget that is inherently hard to test or 
validate. They hire a respected outside firm to do a review. What's wrong with 
that? I recommend that everyone do that. Un-reviewed crypto is a bane.

Is it the fact that they released their results that bothers you? Or perhaps 
that there may have been problems that CRI found that got fixed?

These also all sound like good things to me.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFP32NnsTedWZOD3gYRAuxbAKCvzWt3/+jKq5VadSBLBo6hfT9L8wCeJT15
8e6Ll1xBvXe8IojvRDvksXw=
=jAzX
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Intel RNG

2012-06-18 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jun 18, 2012, at 11:15 AM, Jack Lloyd wrote:

 On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
 On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
 
 The fact that something occurs routinely doesn't actually make it a good 
 idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
 
 This is CRI, so I'm fairly confident nobody is cutting corners. But that 
 doesn't mean the practice is a good one. 
 
 I don't understand.
 
 A company makes a cryptographic widget that is inherently hard to
 test or validate. They hire a respected outside firm to do a
 review. What's wrong with that? I recommend that everyone do
 that.
 
 When the vendor of the product is paying for the review, _especially_
 when the main point of the review is that it be publicly released, the
 incentives are all pointed away from looking too hard at the
 product. The vendor wants a good review to tout, and the reviewer
 wants to get paid (and wants repeat business).

Not precisely.

Reviewers don't want a review published that shows they gave a pass on a crap 
system. Producing a crap product hurts business more than anything else in the 
world. Reviews are products. If a professional organization gives a pass on 
something that turns out to be bad, it can destroy (and has destroyed) the 
organization.

The reviewer is actually in a win-win situation: no matter what the result is, 
they win. But ironically, or perhaps perversely, a bad review is better for 
them than a good one; the reviewer gains far more from it.

A positive review not only lacks the titillation that comes from slagging 
something; it also can't prove anything secure. When you give a 
good review, you lay the groundwork for the next people to come along and find 
something you missed -- and I guarantee it, you missed something. There's no 
system in the world with zero bugs.

Of course there are perverse incentives in reviews. That's why when you read 
*any* review, you have to have your brain turned on and see past the marketing 
hype and get to the substance. Ignore the sizzle, look at the steak.

 
 I have seen cases where a FIPS 140 review found serious issues, and
 when informed the vendor kicked and screamed and threatened to take
 their business elsewhere if the problem did not 'go away'. In the
 cases I am aware of, the vendor was told to suck it and fix their
 product, but I would not be so certain that there haven't been at
 least a few cases where the reviewer decided to let something slide. I
 would also imagine in some of these cases the reviewer lost business
 when the vendor moved to a more compliant (or simply less careful)
 FIPS evaluator for future reviews.

I agree with you completely, but that's somewhere between irrelevant and a 
straw man.

FIPS 140 is exasperating because of the way it is bi-modal in many, many 
things. NIST itself is cranky about calling it a "validation" as opposed to a 
"certification" because it recognizes such problems.

However, this paper is not a FIPS 140 evaluation. Anything one can say, 
positive or negative, about FIPS 140 is at best tangential to it. I just 
searched the paper for the string "FIPS" and there are six occurrences of that 
word. One reference discusses how a bum RNG can blow up DSA/ECDSA (FIPS 
186). The other five are in this paragraph:

In addition to the operational modes, the RNG supports a FIPS
mode, which can be enabled and disabled independently of the
operational modes. FIPS mode sets additional restrictions on how
the RNG operates and can be configured, and is intended to
facilitate FIPS-140 certification. In first generation parts, FIPS
mode and the XOR circuit will be disabled. Later parts will have
FIPS mode enabled. CRI does not believe that these differences in
configuration materially impact the security of the RNG. (See
Section 3.2.2 for details.)

So while we can have a bitch-fest about FIPS-140 (and I have, can, do, and will 
bitch about it), it's orthogonal to the discussion.

It appears that you're suggesting the syllogism:

FIPS 140 does not demonstrate security well.
This RNG has FIPS 140.
Therefore, this RNG is not secure.

Or perhaps a conclusion of "Therefore, this paper does not demonstrate the 
security of the RNG", which is less provocative.

What they're actually saying is that they don't think that FIPSing the RNG will 
materially impact the security of the RNG -- which if you think about it, is 
pretty faint praise.


 
 I am not in any way suggesting that CRI would hide weaknesses or
 perform a lame review.

But that is *precisely* what you are saying.

Jon Stewart could parody that argument far better than I can. You're not saying 
that CRI would hide things, you're just saying that accepting payment sets the 
incentives all the wrong way and that all companies would put out shoddy work 
so long as they got paid, especially

Re: [cryptography] Master Password

2012-05-31 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On May 30, 2012, at 4:28 AM, Maarten Billemont wrote:

 If I understand your point correctly, you're telling me that while scrypt 
 might delay brute-force attacks on a user's master password, it's not 
 terribly useful a defense against someone building a rainbow table.  
 Furthermore, you're of the opinion that the delay that scrypt introduces 
 isn't very valuable and I should just simplify the solution with a hash 
 function that's better trusted and more reliable.
 
 Tests on my local machine (a MacBook Pro) indicate that scrypt can generate 
 10 hashes per second with its current configuration while SHA-1 can generate 
 about 1570733.  This doesn't quite seem like a trivial delay, assuming 
 rainbow tables are off the... table.  Though I certainly wish to see and 
 understand your point of view.

My real advice, as in what I would do (and have done) is to run PBKDF2 with 
something like SHA1 or SHA-256 HMAC and an absurd number of iterations, enough 
to take one to two seconds on your MBP, which would be longer on ARM. There is 
a good reason to pick SHA1 here over SHA256 and that is that the time 
differential will be more predictable.
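
(A minimal sketch of that advice, assuming Python's hashlib; the calibration 
loop and the names are mine, not a prescription:)

    import hashlib
    import time

    def calibrated_pbkdf2(password: bytes, salt: bytes, target_seconds: float = 1.0):
        # Time a probe run, then scale the iteration count to hit the target.
        probe = 10_000
        start = time.perf_counter()
        hashlib.pbkdf2_hmac('sha1', password, salt, probe)
        elapsed = time.perf_counter() - start
        iterations = max(probe, int(probe * target_seconds / elapsed))
        # Derive the real key; the iteration count is stored with the salt.
        return hashlib.pbkdf2_hmac('sha1', password, salt, iterations), iterations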

Let me digress for a moment. It's a truism of security that complexity is the 
enemy of security and simple things are more secure. We just had such a 
discussion on this list in the last week. However, saying that is kinda like 
saying that you should love your mother because she loves you. Each of those is 
something it's impossible to disagree with -- how can you be against simplicity 
or loving your mother? But neither of them is actionable. The truism about my 
mother doesn't say how much I should spend on her birthday present. It doesn't 
tell me if I should come home early from work so I can spend more time with her 
or spend more time at work so I can be successful and she'll be proud of me. 
They are each true, but each meaningless, since I can use the principle to 
defend either A or ~A.

Having said that, the sort of simplicity that I strive for and that I like is 
something that I call understandability. It isn't code size, or anything like 
that, it's how well I can not only understand it, but intuit it. It is similar 
to the mathematical concept of elegance in that it is an aesthetic principle. 
Like loving my mother, I might do something today and its opposite tomorrow 
because it is aesthetic within its context.

I've read the scrypt paper maybe a half-dozen times. As a matter of fact, I 
just went and read it again while writing this note. Each time I read it, I nod 
as I follow along and when I get to the end of the paper, I'm not sure I 
understand it any more. I remain unconvinced. I think it complex. I think it 
inelegant. It fails my understandability test. This is not rational, and I know 
that; this is why I said that this is something that gentlepersons can disagree 
on. I don't think that because *I* don't like it that *you* shouldn't like it. 
I also mean no disrespect to Colin Percival, who is jaw-droppingly brilliant. I 
read his paper and say Wow even as I remain unconvinced.

I also start to poke at it in some odd ways, mentally. I have a friend who 
builds custom supercomputers. These things have thousands of CPUs and tens of 
terabytes of memory. Would scrypt hold up to its claims in such an environment? 
I don't know, and my eyebrow is raised. 

Let us suppose that someone were to spend billions of dollars making a 
supercomputing site out in the desert somewhere. Would scrypt stand up to the 
analytic creativity that they show? I don't know. Moreover, I am irrationally 
skeptical; I believe that it would not, and I have no rational reason for it.

Lastly, I fixate on Table 1 of the scrypt paper, on page 14. Estimated cost of 
hardware to crack a password in 1 year. In the middle row-section we see an 
estimate for a 10 character password. It would take (according to the paper) 
$160M to make a computer that would break 100ms of PBKDF2-HMAC-SHA256. The 
comparison is against 64ms of scrypt and a cost estimate of $43B. In the next 
row-section down, it gives a comparison of 5s of PBKDF2 for $8.3B versus 3.8s 
of scrypt for $175T.

PBKDF2 is understandable. It's simple. In my head, I can reach into my mental 
box of cryptographic Lego and pull out a couple SHA blocks, snap them to an 
HMAC crossbar, and then wrap the thing in a PBKDF2 loop and see the whole thing 
in my head. It's understandable. I *believe* Colin Percival's number that 100ms 
of iteration will cost $160M (assuming 2009 hardware costs, at standard 
temperature and pressure) to break, and I think Wow. That's good enough. And 
if it isn't -- we can up it to 200ms, and handwave out to $300M hardware cost. 
I can also mentally adjust those against using GPUs and other accelerators 
because it's understandable.
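
(The mental Lego snaps together in about ten lines. A sketch of PBKDF2's core 
loop for a single SHA-1-sized output block, built from Python's hmac module; 
for this block size it should agree with hashlib.pbkdf2_hmac:)

    import hashlib
    import hmac

    def pbkdf2_block(password: bytes, salt: bytes, iterations: int) -> bytes:
        # U_1 = HMAC(P, salt || INT(1)); U_i = HMAC(P, U_{i-1});
        # the output block is the XOR of all the U_i (RFC 2898).
        u = hmac.new(password, salt + (1).to_bytes(4, 'big'), hashlib.sha1).digest()
        out = bytearray(u)
        for _ in range(iterations - 1):
            u = hmac.new(password, u, hashlib.sha1).digest()
            out = bytearray(a ^ b for a, b in zip(out, u))
        return bytes(out)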

In contrast, I can't get a mental model of scrypt. It is mentally complex and 
because of that complexity I don't 

Re: [cryptography] Master Password

2012-05-31 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On May 30, 2012, at 12:59 PM, Nico Williams wrote:

 
 Are you saying that PBKDFs are just so much cargo cult now?

No. PBKDF2 is what I suggest, actually. C.F. my entirely too long missive to 
Maarten that I just sent.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPxxfxsTedWZOD3gYRAiUHAJ4wLHhlM4220R3nOryUVitaC83ShACg5yjk
MjpQdcrhZywKmrWdPgjHoG0=
=wYQ9
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Master Password

2012-05-30 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Your algorithm is basically okay, but there are a couple of errors you've made, 
things you and I will disagree over, and one flaw that I consider to wreck the 
whole thing. But all of the problems are correctable, easily. If I have not 
understood something, or read it too quickly and gotten confused, I apologize 
in advance.

Let me walk through a reduction of your system:

(1) You take the master password and run it through a 512-bit hash function, 
producing a master binary secret.

You pick scrypt for your hash function, because you think burning time and 
space adds to security. I do not. This is a place where gentlepersons can 
disagree, and I really don't expect to convince you that SHA-512 or Skein would 
be better options. I'm convinced that I know why you're doing it, and it would 
be a waste of both our times to go further. We just disagree.

At the end of it, it hardly matters because if an attacker wishes to construct 
a rainbow table, the correct way to do it is to assemble a list of likely 
passwords and just go from there. It will take longer if they use scrypt than 
with a real hash function, but once it's done it is done. They have the rainbow 
table.

This isn't a flaw, it just is. The goal of your system requires that you have a 
master secret. But security-wise, there's no win here other than burning some 
cycles in a way that the attacker can trivially replicate.

Security-wise, I'm quite certain that scrypt isn't nearly as secure as any real 
hash function you'd pick, but I'm just whining. We know that the security is 
the password. If they pick "puppies" as their password, it really doesn't matter 
what hash function you run it through. Almost certainly, there is not enough 
security in their password to make it a difference what function you picked.

Let's call the parameters P for password and M for the master key.

(2) You take M and construct site-specific keys. We'll call the site name S, 
your counter C, and the site keys K_s.

You compute a given site-specific key, K_s, with:

K_s = SHA1(S + M + C), where + is the function that concatenates a null byte 
and then the second string.

Strictly speaking, you really ought to do it in the order M + C + S, because 
that's more collision-resistant. It's good practice when computing a keyed hash 
to hash the key first. In reality, it probably doesn't matter, but it *does* 
save you lots of debates with people like me.

You also want to hash in the length of S, because that's also more 
collision-resistant.

So you really want it to be K_s = SHA1(M + C + S.length + S), but those are the 
only real security problems I can see. The ordering is a nit, and omitting the 
length is only a problem if the hash function is broken. Speaking of which, why 
not use a non-broken hash function, like SHA256, or SHA512 or SHA512/160, if 
the output size matters to you? Given that you're using scrypt, why not use a 
better hash function, even if it is slower? But that's also a nit.

The real problem you have, however, is in the counter. First of all, a counter 
is not a salt. A salt is an arbitrary non-security parameter, where "arbitrary" 
means random, just not secret. A counter is a counter.

The counter has two problems. One is that it doesn't add to the security 
of the system. The other is that it makes the system utterly useless. 

An attacker can easily brute force through the site keys by just running a 
counter. That's why I say it doesn't add to the security. Even if you used 
scrypt here, too, it wouldn't matter much. It's still easy to brute force.

However, this completely ruins the system for the end user. The end user can't 
just remember their master password and the site name. They have to remember 
what *order* they did it in too, which ruins everything. You can't sync this 
across devices, you have to keep track of orders, and so on. You need to remove 
the counter.

I understand why you did it. You did it because that way two people with the 
same master password on the same site aren't going to have the same password -- 
which would give away the master password to the other one, as well as leaking 
to an attacker that you picked a lousy password, even if they don't immediately 
know what it is (cue the rainbow tables). Ironically, this is kinda like the 
recent RSA/GCD thing in that your security oops can be made into a disaster if 
someone else makes the same oops.

You can fix this one by substituting the person's site username for the 
counter. In this revision, you have K_s = HASH(M + U.length + U + S.length + 
S). That's a much better construction.
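
(A sketch of that revised construction in Python, with SHA-256 standing in for 
the "real hash function"; the 4-byte big-endian length encoding is my own 
choice, made so the concatenation cannot be ambiguous:)

    import hashlib

    def site_key(master: bytes, username: bytes, site: bytes) -> bytes:
        # K_s = HASH(M + U.length + U + S.length + S)
        h = hashlib.sha256()
        h.update(master)
        h.update(len(username).to_bytes(4, 'big'))
        h.update(username)
        h.update(len(site).to_bytes(4, 'big'))
        h.update(site)
        return h.digest()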

(3) You run the site key, K_s, through some pretty printer. I really didn't read 
this and really don't care. It doesn't matter to me. TL;DR.

All in all, it's cute. I like it enough to write this note. I advise that you:

* Pick a real hash function or two. We can debate scrypt, but you can do better 
than SHA1. Even a second scrypt is better than 

Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-25 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

My money would be on a combination of traffic analysis and targeted malware. We 
know that the Germans have been pioneering using targeted malware against 
Skype. Once you've done that, you can pick apart anything else. Just a simple 
matter of coding.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPv800sTedWZOD3gYRAomcAJ4uNmrVjVFy3TzjDaqxqE/fm8xPvgCcCV+a
5F0VUjuKacwHqQEdCzQv//g=
=qVnU
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Forensic snoops: It doesn't take a Genius to break into an iPhone

2012-04-10 Thread Jon Callas

On Apr 10, 2012, at 10:32 AM, Natanael wrote:

 Just FYI, there's been claims that these guys faked it. But on the other 
 hand, there ARE other tools that can extract data from iPhones so you can 
 bruteforce the encryption later.
 

I'm pretty certain they faked it. The question is how they faked it. They may 
have faked it in a quasi-defensible way.

It takes ~1000 seconds to brute force a four-digit PIN, because the hardware 
calibrates each iteration to ~100ms (and it must be done on the device itself, 
because there's a hardware key that's part of the calculation, and if you don't 
want to destroy the device, you do it on the device). That's 16 2/3 minutes.

If you then say that well, you can get one on average in 8 1/3 minutes, that 
has merit, but we've definitely wandered into marketing. If you note that some 
large percentage of PINs start with a zero or one, that average pulls down, 
particularly since you'll do everything starting with a one in ~100 seconds, 
and really, part of the human factors of pincodes is that a frighteningly large 
number of them are under 1231. 

If you're selling a forensic toolkit, it is not untrue that you could do it in 
a few minutes on average. It's not what I'd call responsible, though. It 
implies that the best pincode is 9999, or perhaps 9989 (no triple-repeated 
digit). :-)
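
(The arithmetic, as a few lines of Python for anyone who wants to check it:)

    SECONDS_PER_TRY = 0.1            # the hardware-calibrated ~100ms

    def minutes_to_exhaust(digits: int) -> float:
        return (10 ** digits) * SECONDS_PER_TRY / 60

    print(minutes_to_exhaust(4))     # 16.67 minutes: the ~1000 seconds above
    print(minutes_to_exhaust(4) / 2) # 8.33 minutes: the "on average" pitch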

Jon




PGP.sig
Description: PGP signature
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Key escrow 2012

2012-03-29 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 29, 2012, at 2:48 PM, mhey...@gmail.com wrote:

 On Tue, Mar 27, 2012 at 1:17 PM, Nico Williams n...@cryptonector.com wrote:
 On Tue, Mar 27, 2012 at 5:18 AM, Darren J Moffat
 
 For example an escrow system for ensuring you can decrypt data written by
 one of your employees on your company's devices when the employee forgets or 
 loses their key material.
 
 Well, the context was specifically the U.S. government wanting key
 escrow.
 
 Hmm - these are not mutually exclusive.
 
 Back in the mid to late 90s, the last time the U.S. government
 required key escrow for international commerce with larger key sizes,
 they allowed key escrow systems that were controlled completely by the
 company. Specifically, they allowed Trusted Information System's
 RecoverKey product (I worked on this one, still have the shirt, and am
 not aware of any other similar products available at the time - PGP's
 came later and was more onerous to use).
 
 RecoverKey simply wrapped a session key in a corporate public key
 appended to the same session key wrapped with the user's public key.
 If the U.S. Government wanted access to the data, the only thing they
 got was the session key after supplying the key blob and a warrant to
 the corporation in question. The U.S. government even allowed us to
 sell RecoverKey internationally to corporations that kept their
 RecoverKey data recovery centers offshore but agreed to keep them in a
 friendly country.

I'd have to disagree with you on much of that.

The US Government never required key escrow for international commerce. 
Encrypted data was never restricted; what was restricted was the export of 
software and the like. If you were of a mind to think that the only way to get 
cryptographic software was from the US, then you'd think this might be 
something like effective. In reality, the idea was absurd from the get-go 
because encrypted data was never restricted.

The people who wanted to push key escrow never had a good way to explain to 
anyone why they'd want it. They never had a good carrot, either, for it. At one 
point, they tried to sugar-coat it by offering fast-tracks on export for it, 
but Commerce granted export easily. Furthermore, Commerce's own rules 
progressed so fast with so many exemptions that it was all obviated before it 
could be developed.

Amusingly, I ended up having TIS's RecoverKey under my bailiwick because 
Network Associates bought PGPi and then TIS. The revenues from it were so small 
that I don't think they even covered marketing material like that shirt you 
had. In a very real sense, it didn't exist as anything more than a 
proof-of-concept that proved the concept was silly.

Also, there wasn't a PGP system. The PGP additional decryption key is really 
what we'd call a data leak prevention hook today, but that term didn't exist 
then. Certainly, lots of cypherpunks called it that at the time, but the 
government types who were talking up the concept blasted it as merely a way to 
mock (using that very word) the concept.

Jon





-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPdOR+sTedWZOD3gYRAtc6AKD/GlvCO3/cs+xuaPTz5I0sqjfUzwCdGcw2
4PlzXeIu0dK9EqfgDQBfpLI=
=GfnU
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

2012-03-25 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 25, 2012, at 1:22 PM, coderman wrote:

 now they pay to side step crypto entirely:
 
 iOS up to $250,000
 Chrome or IE up to $200,000
 Firefox or Safari up to $150,000
 Windows up to $120,000
 MS Word up to $100,000
 Flash or Java up to $100,000
 Android up to $60,000
 OSX up to $50,000
 
 via 
 http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/
 
 plenty of weak links between you and privacy...

This is precisely the point I've made: the budget way to break crypto is to buy 
a zero-day. And if you're going to build a huge computer center, you'd be 
better off building fuzzers than key crackers.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: iso-8859-1

wj8DBQFPb4NssTedWZOD3gYRAijMAKDNSNKcPYXxUZX2ekzFusz0cEEHTgCgqi8x
lDqmYv4yOLL0C7hc+RDrpVI=
=V0YJ
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] RSA Moduli (NetLock Minositett Kozjegyzoi Certificate)

2012-03-23 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 23, 2012, at 6:39 AM, Peter Gutmann wrote:

 Jon Callas j...@callas.org writes:
 On Mar 23, 2012, at 6:03 AM, Peter Gutmann wrote:
 Jeffrey Walton noloa...@gmail.com writes:
 Is there any benefit to using an exponent that factors? I always thought 
 low
 hamming weights and primality were the desired attributes for public
 exponents. And I'm not sure about primality.
 
 Seeing a CA put a key like this in a cert is a bit like walking down the
 street and noticing someone coming towards you wearing their underpants on
 their head, there's nothing inherently bad about this but you do tend to 
 want
 to cross the street to make sure that you avoid them.
 
 But Peter, CAs don't *precisely* put keys into certs. CAs certify a key that
 the key creator wants to have in their cert.
 
 This is a self-signed cert from the CA, so the key creator was the CA.

So it's like issuing yourself an Artistic License card with a color printer and 
laminator. :-) Good for lots of laughs.

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPbIAAsTedWZOD3gYRAo4KAKDuG0OgEg81mxGUJDGlYp5OzLMI/gCgkRRq
/G3T3NLS/8k1L4njuxMJMd0=
=tHSy
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [info] The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)

2012-03-22 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mar 22, 2012, at 10:02 AM, Marsh Ray wrote:

 
 
 Or it could be complete BS.
 

The race is not always to the swift, nor the battle to the strong, but that's 
the way to bet.
  -- Damon Runyon.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPa2cfsTedWZOD3gYRAvxtAJ9wVuVfkJVV3cn+NpTpN+8sxxUEIwCeKEvo
4a7DfTy0flJyn96s49GBcyM=
=re6+
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Number of hash function preimages

2012-03-11 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 10, 2012, at 5:24 PM, Eitan Adler wrote:

 On Sat, Mar 10, 2012 at 7:28 PM, Jon Callas j...@callas.org wrote:
 
 2) Is it known if every (valid) digest has always more than one
 preimage? To state it otherwise: Does a collision exist for every
 message? (i.e., is the set h^{-1}(x) larger than 1 for every x in the
 image of h?).
 
 
 Sure, by the pigeonhole principle. Consider a hash with 256-bit (32-byte) 
 output. For messages larger than 32 bytes, by the pigeonhole principle, 
 there must be collisions. For messages smaller than 32 bytes, they might 
 collide on themselves but don't have to, but there will be some large 
 message that collides with it.
 
 I think you are misunderstanding the question (or at the very least I
 am). The pigeonhole principle only shows that there exist collisions
 not that collisions exist for every element in the codomain.
 Think about the function over the natural numbers:
 f(x) = {
 1, if x = 0
 2, if x  0
 2, if x > 0
 3, if x < 0
 
 While there exist collisions within N it isn't true that every element
 in the co-domain has a collision.

You're right, I misunderstood the question.

Your example gets to what I was saying about a lot being there in the ellipsis 
on hash functions. 

Let's take your function -- which is a hash function, it's just not generally 
useful -- and if we define its codomain to be [1..3], then yes. But if its 
codomain is [0..3] (i.e. it's a two-bit hash function), then no because it 
never returns a zero. 

It's my intuition that a hash function that's made up of a block cipher and a 
chaining mode is going to cover its codomain when operating on natural sizes. 
For example, I expect that Skein512-512 is both surjective and well-behaved on 
the codomain. It's an ARX block cipher with simple chaining between the blocks. 

However, Skein512-1024 (Skein512 with 1024 bit output) is obviously not 
surjective (512 bits of state yield 1024 bits of output), but I'd expect the 
output codomain to be evenly covered. Also note that not only is it a fine 
output for an RSA-1024 signature, but arguably better than a smaller output.

For smaller output of an unnatural size (e.g. 511 bits), I'd also expect it to 
cover the codomain, and I think it would have to.

I think you'd have to look at other constructions on a case-by-case basis. If 
we look again at my trivial modification of a hash function that makes it not 
return zero but a one instead, it's not surjective, it doesn't evenly cover its 
codomain, and yet for any practical purpose it's a fine hash function. 

For some purposes, it's even more secure than the original. Consider using it 
as a KDF for a cipher for which zero is a weak key (like DES). By not returning 
a weak key, it's more secure than the base function. That's interesting in that 
a flaw in it being an ideal hash function makes it actually superior as a KDF. 
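
(A literal rendering of that H' in Python, with SHA-256 assumed as the base 
hash; the remapping of an all-zero digest to 1 is exactly the construction 
described above:)

    import hashlib

    ZERO = bytes(32)

    def h_prime(message: bytes) -> bytes:
        # H': identical to SHA-256, except a digest of all zeros is
        # remapped to 1, so H' can never emit the all-zero weak key.
        digest = hashlib.sha256(message).digest()
        return bytes(31) + b'\x01' if digest == ZERO else digest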

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPXGlXsTedWZOD3gYRAq3+AJwK2l3SNm84mvjdqvAzZV2+bWbmpQCgtsfc
SHd+g57nXlOylLOLUsekgCQ=
=3rTZ
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Number of hash function preimages

2012-03-10 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 9, 2012, at 3:25 AM, Florian Weingarten wrote:

 Hello list,
 
 first, excuse me if my questions are obvious (or irrelevant).

No, they're interesting and subtle.

 
 I am interested in the following questions. Let h be a cryptographic
 hash function (let's say SHA1, SHA256, MD5, ...).

There's a lot to put in that ellipsis. 

 
 1) Is h known to be surjective (or known to not be surjective)? (i.e.,
 is the set h^{-1}(x) non-empty for every x in the codomain of h?)

No. I would bet that the standard ones are all surjective, but I don't know 
that it's ever been demonstrated for any given hash function. The main property 
we want from a hash function is that is is one-way, and demonstrating that a 
one-way function is or isn't surjective flirts with that, at least. Some will 
be easy (modulo is one way and surjective, for example), others will be harder. 
When you add in other desirable hash function properties such as being 
reasonably collision-free, it becomes harder to show.

However, if you show that a hash function is a combination of surjective 
functions that all preserve surjectivity, I think it's an easy proof.

All the ones that use a block cipher and a chaining mode are likely easy to 
prove. 

 
 2) Is it known if every (valid) digest has always more than one
 preimage? To state it otherwise: Does a collision exist for every
 message? (i.e., is the set h^{-1}(x) larger than 1 for every x in the
 image of h?).
 

Sure, by the pigeonhole principle. Consider a hash with 256-bit (32-byte) 
output. For messages larger than 32 bytes, by the pigeonhole principle, there 
must be collisions. For messages smaller than 32 bytes, they might collide on 
themselves but don't have to, but there will be some large message that 
collides with it.
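
(The pigeonhole argument can be watched in action by truncating a real hash. A 
toy Python sketch, with the three-byte truncation chosen only so it finishes 
quickly -- the birthday bound finds a collision after roughly 2^12 tries:)

    import hashlib
    from itertools import count

    def find_collision(nbytes: int = 3):
        # Truncate SHA-256 to nbytes; collisions must exist, and a
        # birthday search finds one after about 2**(4 * nbytes) messages.
        seen = {}
        for i in count():
            msg = str(i).encode()
            digest = hashlib.sha256(msg).digest()[:nbytes]
            if digest in seen:
                return seen[digest], msg, digest.hex()
            seen[digest] = msg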

 3) Are there any cryptographic protocols or other applications where the
 answers to 1) or 2) are actually relevant?

Very likely not.

Let's construct a trivial non-surjective hash function. Start with H, and 
construct H' so that for any message that produces a hash of 0, we emit 1 
instead. H' is therefore not surjective, since it can never emit a zero. 

It isn't a *useful* non-surjectivity because we don't usefully know a preimage 
of a zero or a one. 

But now let's construct H'' that emits H(M2) when calculating H(M1). This is 
just like H', but with different constants. The difference here is that we have 
artificially created a collision between M1 and M2 instead of a preimage of 0 
and a preimage of 1, which we don't know in advance. Is this a useful 
collision? That's a philosophical question. I'd say no, myself, but I'd 
understand why someone said yes, I'd merely disagree with them.

That's why I say very likely not, instead of just no.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPW/HEsTedWZOD3gYRAhvWAJ4rL6Zxp9eCUpxqDEYPQTLxKQu0VwCeJqHG
IVoDJYQIMASPi03Hl19LxXE=
=68//
-END PGP SIGNATURE-
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-26 Thread Jon Callas

On Feb 25, 2012, at 3:18 PM, Kevin W. Wall wrote:

 On Sat, Feb 25, 2012 at 2:50 AM, Jon Callas j...@callas.org wrote:
 
 [snip]
 
 But to get to the specifics here, I've spoken to law enforcement and
 border control people in a country that is not the US, who told me
 that yeah, they know all about TrueCrypt and their assumption is
 that *everyone* who has TrueCrypt has a hidden volume and if they
 find TrueCrypt they just get straight to getting the second password.
 They said, We know about that trick, and we're not stupid.
 
 Well, they'd be wrong with that assumption then.

Only from your point of view. From their point of view, the user is the one 
with wrong assumptions.

Remember what I said -- they're law enforcement and border control. In their 
world, Truecrypt is the same thing as a suitcase with a hidden compartment. 
When someone crosses a border (or they get to perform a search), hidden 
compartments aren't exempt. They get to search them. 

Also to them, Truecrypt is a suitcase that advertises a hidden compartment, and 
that's pretty useless, in their world.

 
 I asked them about the case where someone has TrueCrypt but doesn't
 have a hidden volume, what would happen to someone doesn't have one?
 Their response was, Why would you do a dumb thing like that? The whole
 point of TrueCrypt is to have a hidden volume, and I suppose if you
 don't have one, you'll be sitting in a room by yourself for a long
 time. We're not *stupid*.
 
 That's good to know then. I never had anything *that* secret to protect,
 so never bothered to create a hidden volume. I just wanted a good, cheap
 encrypted volume solution where I could keep my tax records and other
 sensitive personal info. And if law enforcement ever requested the password
 for that, I wouldn't hesitate to hand it over if they had the proper
 subpoena / court order. But I'd be SOL when they went looking for a second
 hidden volume simply because one doesn't exist. Guess if I ever go out of
 the country with my laptop, I'd just better securely wipe that partition.

Or just put something in it that you can show. 

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-26 Thread Jon Callas

On Feb 25, 2012, at 6:35 PM, James A. Donald wrote:

 Jon Callasj...@callas.org  writes:
   I've spoken to law enforcement and border control people
   in a country that is not the US, who told me that yeah,
   they know all about TrueCrypt and their assumption is that
   *everyone* who has TrueCrypt has a hidden volume and if
   they find TrueCrypt they just get straight to getting the
   second password. They said, We know about that trick, and
   we're not stupid.
 
 They may assume that - but they cannot prove it.

You're assuming that they operate with the same security model that you do.

Your security model presupposes US law, to start with. I can see that in the 
glib comment asking if I'd ever heard of "innocent until proven guilty" -- 
which is a US principle. It is one that I have not only heard of, but think is 
a pretty darn good idea, too!

Nonetheless, it does not exist everywhere in the world, and I said this was not 
the US. In fact the very reason I said it wasn't the US was because I wanted to 
point out that objections to the story based upon US law are irrelevant. 
Moreover, innocent until proven guilty is interpreted differently depending on 
what sort of case there is. The term *proven* is context-dependent. There are 
different ways they prove, different burdens of proof. "Beyond reasonable 
doubt" and "clear and convincing evidence" are two used in criminal cases in 
the US. "Preponderance of evidence" is usually used in civil cases.

None of these are plausible deniability. As I said before, this is a term of 
spycraft and statecraft. Usually it's used to describe how a powerful entity 
like a nation state can defend itself against attacks by less-powerful 
entities. There are forms of torture that are popular because they leave no 
marks on the victim and therefore give the state plausible deniability. 
Bureaucracies also use this technique to spread blame or leave the blame with 
some other person. 

In a number of cases involving spectacularly failed companies, the CEO has 
tried to stick someone else with the blame through plausible denial. Or perhaps 
the family and associates of a fraudster use a form of plausible denial to 
avoid conviction or trial. (I am not saying that using plausible denial means you're 
guilty -- it only means you don't have a better defense.) It works sometimes 
and doesn't work others. It didn't work for Bernie Ebbers, for example. 
Plausible denial combined with a lack of evidence works really well, but it's 
not a legal principle at all.

Most people who use the term plausible denial, particularly us crypto people, 
would be better served to say reasonable doubt. It's a better marketing term 
at the very least.

But anyway, back to deniable encryption and what is a language-theoretic issue.

If your security model includes technical issues and policy issues, but your 
attacker has different policies, then your security might fail for 
language-theoretic reasons.

To a border control person (and that's who I was talking about), Truecrypt is 
the same thing as a suitcase with a false bottom. Technically, we'd say that it 
is a container that (assuming it works correctly) *might* have a secret 
compartment, and that one that does have a secret compartment is 
information-theoretically indistinguishable from one that does not. But if you 
read the previous sentence to a border control person, they might hear, "...it 
is a container ... that ... has a secret compartment." 

The difference is policy, not technical. If their security model includes the 
policy that there's no reason to have a suitcase with a false bottom except to 
put something in it, then how you make a denial becomes everything.

If your denial is "don't be ridiculous, I *know* you guys can spot hidden 
volumes, and that's why I'd never use one -- I use it because I'm cheap," then 
you're doing well. If your denial is "you can't prove there's a hidden volume 
there," then you're not doing so well.

My point is that there are security models out there that know about hidden 
volumes and have their own defenses against them. I used the word defenses 
intentionally. They are border control people. Their model considers a hidden 
volume to be an attack, not a defense. They have developed their own defenses 
against smuggling that take hidden volumes into account.

 Evidently in the case of
 http://www.ca11.uscourts.gov/opinions/ops/201112268.pdf They
 were totally unable to get information out of John Doe
 
 For the entire case turned on the fact that John Doe never
 admitted the existence of the hidden drive, and forensics were
 entirely unable to prove the existence of the hidden drive.
 
 Customs may have the authority to search through your stuff,
 but if they cannot find what they are looking for, they have
 no authority to make you tell them that it exists and where
 it is.
 
 But if you *do* tell them that it exists, then they can make
 you tell them where it is.

Absolutely. This is a 

Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-24 Thread Jon Callas

On Feb 24, 2012, at 5:43 PM, James A. Donald wrote:

 Truecrypt supports an inner and outer encrypted volume, encryption hidden 
 inside encryption, the intended usage being that you reveal the outer 
 encrypted volume, and refuse to admit the existence of the inner hidden 
 volume.
 
 To summarize the judgment: Plausible deniability, or even not very 
 plausible deniability, means you don't have to produce the key for the inner 
 volume.  The government first has to *prove* that the inner volume exists, 
 and contains something hot.  Only then can it demand the key for the inner 
 volume.
 
 Defendant revealed, or forensics discovered, the outer volume, which was 
 completely empty.  (Bad idea - you should have something there for plausible 
 deniability, such as legal but mildly embarrassing pornography, and a 
 complete operating system for managing your private business documents, 
 protected by a password that forensics can crack with a dictionary attack)
 
 Forensics felt that with FIVE TERABYTES of seemingly empty truecrypt drives, 
 there had to be an inner volume, but a strong odor of rat is no substitute 
 for proof.
 
 (Does there exist FIVE TERABYTES of child pornography in the entire world?)
 
 Despite forensics suspicions, no one, except the defendant, knows whether 
 there is an inner volume or not, and so the Judge invoked the following 
 precedent.
 
 http://www.ca11.uscourts.gov/opinions/ops/201112268.pdf
 
 That producing the key is protected if conceding the existence, possession, 
 and control of the documents tended to incriminate the defendant.
 
 The Judge concluded that in order to compel production of the key, the 
 government has to first prove that specific identified documents exist, and 
 are in the possession and control of the defendant, for example the 
 government would have to prove that the encrypted inner volume existed, was 
 controlled by the defendant, and that he had stored on it a movie called 
 Lolita does LA, which the police department wanted to watch.

There is no such thing as plausible deniability in a legal context.

Plausible deniability is a term that comes from conspiracy theorists (and like 
many things contains a kernel of truth) to describe a political technique where 
everyone knows what happened but the people who did it just assert that it 
can't be proven, along with a wink and a nudge.

But to get to the specifics here, I've spoken to law enforcement and border 
control people in a country that is not the US, who told me that yeah, they 
know all about TrueCrypt and their assumption is that *everyone* who has 
TrueCrypt has a hidden volume and if they find TrueCrypt they just get straight 
to getting the second password. They said, "We know about that trick, and we're 
not stupid."

I asked them about the case where someone has TrueCrypt but doesn't have a 
hidden volume, what would happen to someone who doesn't have one? Their response 
was, "Why would you do a dumb thing like that? The whole point of TrueCrypt is 
to have a hidden volume, and I suppose if you don't have one, you'll be sitting 
in a room by yourself for a long time. We're not *stupid*."

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Jon Callas
It was (2), they didn't wait.

Come on -- every one of these devices is some distribution of Linux that comes 
with a stripped-down kernel and Busybox. It's got stripped-down startup, and no 
one thought that it couldn't have enough entropy. These are *network* people, 
not crypto people, and the distribution didn't have a module to handle 
initial-boot entropy generation.

Period, that's it. It's not malice, it's not even stupidity, it's just 
ignorance.

The answer to "what were they thinking?" is almost always "they weren't."

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Applications should be the ones [GishPuppy]

2012-02-17 Thread Jon Callas

On Feb 17, 2012, at 4:55 AM, Jack Lloyd wrote:

 On Thu, Feb 16, 2012 at 09:41:04PM -0600, Nico Williams wrote:
 
 developers agree).  I can understand *portable* applications (and
 libraries) having entropy gathering code on the argument that they may
 need to run on operating systems that don't have a decent entropy
 provider.
 
 Another good reason to do this is resilience - an application that
 takes some bits from /dev/(u)random if it's there, but also tries
 other approaches to gather entropy, and mixes them into a (secure)
 PRNG, will continue to be safe even if a bug in the /dev/random
 implementation (or side channel in the kernel that leaks pool bits,
 etc) causes the conditional entropy of what it is producing to be
 lower than perfect. I'm sure at some point we'll see a fiasco on the
 order of the Debian OpenSSL problem with /dev/random in a major
 distribution.

Really?

Let's suppose I've completely compromised your /dev/random and I know the bits 
coming out. If you pull bits out of it and put them into any PRNG, how is that 
not just Bits' = F(Bits)? Unless F is a secret function, I just compute Bits' 
myself. If F is a secret function, then the security is exactly the secrecy of F.
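
(To make the point concrete: if F is public and deterministic, the attacker 
runs the same F over the bits they already know. A trivial Python sketch, with 
SHA-256 standing in for the PRNG's mixing function:)

    import hashlib

    def whiten(pool_bits: bytes) -> bytes:
        # F: deterministic, public post-processing of the pool output.
        return hashlib.sha256(pool_bits).digest()

    known = b'/dev/random output the attacker already knows'
    # Victim and attacker compute the identical Bits' = F(Bits).
    assert whiten(known) == whiten(known)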

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Jon Callas

On 16 Feb, 2012, at 3:30 AM, Bodo Moeller wrote:

 On Thu, Feb 16, 2012 at 12:05 PM, Werner Koch w...@gnupg.org wrote:
  
 You are right that RFC4880 does not demand that the key expiration date
 is put into a hashed subpacket.  But not doing so would be stupid.
 
 I call it a protocol failure, you call it stupid, but Jon calls it a 
 feature (http://article.gmane.org/gmane.ietf.openpgp/4557/).

That's not what I said. Or perhaps not what I meant.

I think it is indeed a feature that the expiry is a part of the certification, 
not an intrinsic property of the key material. That permits you to do very 
cool things like rolling certification lifetimes.

Putting that into an unhashed packet is stupid, as Werner said.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Jon Callas

On Feb 14, 2012, at 7:42 AM, ianG wrote:

 On 14/02/12 21:40 PM, Ralph Holz wrote:
 Ian,
 
 Actually, we thought about asking Mozilla directly and in public: how
 many such CAs are known to them?
 
 It appears their thoughts were none.
 
 Of course there have been many claims in the past.   But the Mozilla CA desk 
 is frequently surrounded by buzzing small black helicopters so it all becomes 
 noise.

I've asked about this, too, and the *documented* evidence of this happening is 
exactly that -- zero.

I believe it happens. People I trust have told me, whispered in my ear, and 
assured me that someone they know has told them about it, but there's 
documented evidence of it zero times.

I'd accept a screen shot of a cert display or other things as evidence, myself, 
despite those being quite forgeable, at this point.

Their thoughts of it being none are reasonably agnostic on it.

Those who have evidence need to start sharing.

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Jon Callas

On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:

 The practical import is unclear, since there's (as far as is known) no
 way to predict or control who has a bad key.
 
 To me, the interesting question is how to distribute the results.  That
 is, how can you safely tell people you have a bad key, without letting
 bad guys probe your oracle.  I suspect that the right way to do it is to
 require someone to sign a hash of a random challenge, thereby proving
 ownership of the private key, before you'll tell them if the
 corresponding public key is in your database.

Yeah, but if you're a bad guy, you can download the EFF's SSL Observatory and 
just construct your own oracle. It's a lot like rainbow tables in that once you 
learn the utility of the trick, you just replicate the results. If you 
implement something like the Certificate Transparency, you have an 
authenticated database of authoritative data to replicate the oracle with.

Waving my hand and making software magically appear, I'd combine Certificate 
Transparency and such an oracle, and compute the status of the key as part of 
the certificate logs and proofs.
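
(A toy sketch of such an oracle in Python -- pairwise GCDs over a corpus of 
RSA moduli; quadratic for clarity, where the published attacks use product 
trees to scale to millions of keys:)

    from math import gcd

    def shared_factor_oracle(moduli):
        # Any GCD strictly between 1 and a modulus is a prime factor
        # shared by two keys, which breaks both of them.
        broken = {}
        for i in range(len(moduli)):
            for j in range(i + 1, len(moduli)):
                g = gcd(moduli[i], moduli[j])
                if 1 < g < moduli[i]:
                    broken[moduli[i]] = broken[moduli[j]] = g
        return broken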

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Proving knowledge of a message with a given SHA-1 without disclosing it?

2012-02-01 Thread Jon Callas

On Feb 1, 2012, at 1:49 AM, Francois Grieu wrote:

 The talk does not give much details, and I failed to locate any article
 with a similar claim.
 I would find that result truly remarkable, and it is against my intuition.
 
 Any info on the Hal Finney protocol, or a protocol giving a similar
 result, or the (in)feasibility of such a protocol?

As I remember Hal's protocol, it requires about eight megabytes of data to be 
transferred back and forth to prove that you know a preimage of the SHA1 hash. 
It's not so 
much to be obviously absurd, but not efficient enough to be something you'd 
want to do often.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Well, that's depressing. Now what?

2012-01-27 Thread Jon Callas

On Jan 27, 2012, at 5:22 PM, Noon Silk wrote:

 
 So why didn't one of these real world people point this out, to
 researchers? It's a bit too easy to claim something as obvious when
 someone just told you.

There are any number of us who have been quantum skeptics for years, and the 
responses that have come back to us have been essentially that the fact that we 
were skeptical showed ipso facto that we didn't know what we were talking 
about. The quantum folks have just insisted that doubting quantum cryptography 
was like doubting evolution or gravity.

Nonetheless, as prettily fragrant as the schadenfreude is this evening, I'm not 
sure I buy this paper, either. I'm immediately reminded of Clarke's First Law. 
(Not the technology and magic one, but one about elderly and distinguished 
scientists making predictions.)

The quantum crypto people have earned contempt from us math people by 
high-handedly dismissing any operational concerns, by fake competition -- 
insisting on the false dilemma that quantum and mathematical techniques are 
product and technological competitors, and even by the very *word* 
"cryptography." Quantum cryptography is not cryptography. It is an amazing bit 
of physics. In the last few years, they've backed off to "quantum key 
distribution," but "quantum *secrecy*" would be not only more accurate but 
less snake oil, and far cooler than either of those terms.

Heck, just this week, an article, "Quantum mechanics enables perfectly secure 
cloud computing," showed up on physorg.com at 
http://www.physorg.com/news/2012-01-quantum-mechanics-enables-perfectly-cloud.html.
 It manages to put the same snake oil into the very headline by using the word 
"perfect." It's been a relatively few days since I read something else where 
they were claiming that devices to do quantum crypto to mobile devices are 
around the corner, unironically including the trusted third party in the middle 
that acts as a key router. That one's perfect, too.

I can hardly wait to see the rebuttals to this paper.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-18 Thread Jon Callas

On Dec 18, 2011, at 10:19 AM, M.R. wrote:

 On 2011-12-07 16:31, Jon Callas wrote:
 There are many things about code signing that I don't think I understand.
 
 same here.
 
 But I do understand something about the code creation, dissemination
 and the trust between code creator and code user (primary parties),
 and the role of the operating system vendor (a tertiary party) as
 an intermediary between the code creator and the code user.
 
 With that said, I propose that code signing and then enforcing some
 kind of use sanctioning protocol by the operating system vendor is
 an idiotic idea, and fortunately one that has been proven as completely
 impractical and ill-aligned with the interest of the two primary parties, and 
 thus continually rejected in practice.
 
 What should be signed and trusted (or not trusted) is not the code,
 but the channel by which the code is distributed.

Which is precisely what can't be done, in the general case.

It's really, really doable in the singular case. If the channel signs the code 
(which is what Apple does on the App Store), then sure, Alice is your auntie. 

But when developer D has code they sign *themselves* with a cert given from 
signatory S, and delivered to marketplace M, you end up with some sort of 
DSM-defined insanity. There's no responsibility anywhere. The worst, though, is 
to go to the signer and say, This is another fine mess you've gotten me into, 
Stanley.

Jon
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-11 Thread Jon Callas
On 10 Dec, 2011, at 11:58 PM, Peter Gutmann wrote:

 Jon Callas j...@callas.org writes:
 
 If someone actually built such combination of OS and marketplace, it would
 work for the users very well, but developers would squawk about it. Properly
 done, it could drop malware rates to close to nil.
 
 Oh, developers would do more than squawk about it.  Both Java and .NET
 actually support the capability-based security that you mentioned, but it's so
 painful to use that it's either turned off by default (.NET's 'trust
 level=Full') or was turned off after massive developer backlash (Java).
 Even the very minimal capabilities used by Android are failing because of the
 dancing bunnies and confused deputy problems, and because developers request
 as close to any/any as they can get just in case (exacerbating the confused
 deputy problem).
 
 (One of the nice things about Android is that it's fairly easy to decompile
 and analyse the code, so there have been all sorts of papers published on its
 capability-based security mechanisms using this technique.  It's serving as a
 nice real-world empirical evaluation of failure modes of capability-based
 security systems.  I'm sure someone could get a good thesis out of it at some
 point).
 
 Properly done, it could drop malware rates to close to nil.
 
 Objection, tautology: Properly done, any (malware-related) security measure
 would drop malware rates close to nil.  The problem is doing it properly...
 

Yes, doing it properly is the key and I'll assert that Apple is doing a pretty 
good approximation of it. They are doing more or less what I described -- good 
coding enforcement backed up with digital signatures. There are plenty of 
people squawking about it. I know developers who've thrown up their hands and 
there is plenty of grumpiness I've heard. Some of it reasonable grumpiness, too.

But the end result for the users is that malware rate is close to zero. The 
system is by no means perfect, and has side-effects. But the times when 
something slipped through the net are so few that they're notable still. (And 
some of the malware has been kinda charming, like the flashlight app that had a 
hidden SOCKS proxy that let people use it for tethering.) More importantly, the 
system does not throw things at the users that they're incapable of handling, 
like the Android way of just informing you what capabilities an app needs. 
People can and do just hand devices to their kids and let them use them with no 
ill effects.

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-10 Thread Jon Callas

On 9 Dec, 2011, at 9:15 PM, Peter Gutmann wrote:

 Jon Callas j...@callas.org writes:
 
 If it were hard to get signing certs, then we as a community of developers
 would demonize the practice as having to get a license to code.
 
 WHQL is a good analogy for the situations with certificates, it has to be made
 inclusive enough that people aren't unfairly excluded, but exclusive enough
 that it provides a guarantee of quality.  Pick any one of those two.
 
 (I have a much longer analysis of this, a bit too much to post here, but
 there's a long history of vendors gaming WHQL and the certifiers looking the
 other way, just as there is with browser vendors looking the other way when a
 CA screws up, although in the case of hardware vendors the action is
 deliberate rather than accidental).

Sure, and that's why the assurance system and the signatures have to be tied 
together and the incentives have to be aligned. In a software market where the 
app store itself is doing the validation, doing the enforcement, signing the 
code, and taking the responsibility for both delivering the software and 
backfilling the inevitable errors, you'll see the *system* lower malware. But 
even in that, it's the system that's doing it, not digital signatures. The 
signatures are merely the wax seals. The quality system has to be built to 
create and deliver quality. That is the sine qua non of this whole thing.

I think we agree that trying to build quality by giving certificates to 
developers is a fantasy at best.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-09 Thread Jon Callas

On 8 Dec, 2011, at 8:27 PM, Peter Gutmann wrote:

 In any case getting signing certs really isn't hard at all.  I once managed 
 it 
 in under a minute (knowing which Google search term to enter to find caches 
 of 
 Zeus stolen keys helps :-).  That's as an outsider, if you're working inside 
 the malware ecosystem you'd probably get them in bulk from whoever's dealing 
 in them (single botnets have been reported with thousands of stolen keys and 
 certs in their data stores, so it's not like the bad guys are going to run 
 out 
 of them in a hurry).
 
 Unlike credit cards and bank accounts and whatnot we don't have price figures 
 for stolen certs, but I suspect it's not that much.

If it were hard to get signing certs, then we as a community of developers 
would demonize the practice as having to get a license to code.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas
There are many things about code signing that I don't think I understand.

I think that code-signing is a good thing, and that all things being equal, 
code-signing is a good thing, and that code should be signed.

However, there seem to be strange, mystical beliefs about it.

As an example, there's the notion that if you have signed code and you revoke 
the signing key (whatever revoke means, and whatever a key is) then the 
software will automagically stop working, as if there's some sort of quantum 
entanglement between the bits of the code and the bits of the key, and 
invalidating the key therefore invalidates the code.

This seems to me to be daft -- I don't see how this *could* work in a general 
case against an attacker who doesn't want that code to stop working (and that 
attacker could be either a malware writer or the owner of the computer). I can 
see plenty of special cases where it works, but it is fundamentally not 
reliable and a security system that wants to stop malware or whatever by 
revoking keys is even less reliable because we now have three or four parties 
(malware writer, machine owner, certifier, anti-virus maker).

It also seems to me that discussions on this list hit this situation from two 
strange directions. One is the general sneering at the daft belief. The other 
is continuing to discuss it. I don't care who is using it (even effectively); 
we're all smart enough to know both that DRM cannot work, and yet there are 
users of it that are happy with it. Whatever.

Slightly tangential to this is a discussion of expiration of signing keys. In 
reality, they don't expire. Unless you make a device that can be
permanently broken by setting the clock forward (which is certainly possible, 
merely not desirable), then expiry can be hacked around. The rough edge of what 
happens to code that expires while it is executing generalizes out to a set of 
other problems that just show that in fact, you can't really expire a code 
signing key any more than you can revoke it -- that is to say there are many 
edge conditions in which it works and many of these are useful to some people 
and some circumstances, but in the general case, it doesn't and cannot work.

But that doesn't mean that code signing is a bad thing. On the contrary, code 
signing is very useful because you can use the key, the signature, or the hash 
as a way to detect malware and form a blacklist, as well as detect software 
that should be whitelisted.

Simply stated, an anti-malware scanner can detect (and remove) a specific piece 
of malware by the simple technique of comparing its signature to a blacklist. 
It can compare a single object's hash to a list of hashes and that only 
requires the scanner to hash the code object; this catches the simple case of 
malware that is merely re-signed with a new key. It also permits it to do more 
complex operations than a simple hash (like hashing pieces, or hash at 
different times) to identify a piece of malware. It can also use the key to 
detect whole classes of malware (or good-ware).
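
As a concrete illustration of the blacklist half of that, here's a minimal
sketch in Java. The hash choice, the class name, and the hard-coded list are
all illustrative assumptions, not anyone's actual scanner:

import java.nio.file.*;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Set;

public class BlacklistCheck {
    // Hypothetical known-bad SHA-256 values; a real scanner ships a
    // signed, regularly updated list. The all-zero entry is a placeholder.
    static final Set<String> BLACKLIST = Set.of(
        "0000000000000000000000000000000000000000000000000000000000000000");

    // Hash the code object and look it up in the blacklist.
    static boolean isKnownBad(Path file) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                                   .digest(Files.readAllBytes(file));
        return BLACKLIST.contains(HexFormat.of().formatHex(hash));
    }
}

(HexFormat needs Java 17; on an older JVM, format the hex by hand.)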

Code signing is good because it gives the anti-malware people a set of tools 
that augment what they have with some easy, fast, effective ways to categorize 
software as known goods or known bads. 

But that's it -- you don't get the spooky action at a distance aspects that 
some people think you can do with revocation. You get something close, if you 
feed the blacklist/whitelist information to whatever the code-scanner is. 
Nonetheless, this answers how you deal with signed malware (once it's known to 
be malware, you stop it via signature), or bogus 512-bit signing keys (just 
declare anything signed by such to either be treated as malware or as unsigned).

So am I missing something? I feel like I'm confused about this discussion 
because *of* *course* you can't revoke a key and have that magically transmit 
to software. Perhaps some people believe that daft notion and have built 
systems that assume that this is true. So what? Maybe it works for them. The 
places where it doesn't work aren't even interesting. Perhaps observing when 
this daft notion meets the real world is helpful as an object lesson. Perhaps 
it works for *them* but not *us*.

But really, I think that code signing is a great thing, it's just being done 
wrong because some people seem to think that spooky action at a distance works 
with bits.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Jon Callas

On 7 Dec, 2011, at 11:34 AM, ianG wrote:

 
 Right, but it's getting closer to the truth.  Here is the missing link.
 
 Revocation's purpose is one and only one thing:  to backstop the liability to 
 the CA.

I understand what you're saying, but I don't agree.

CAs have always punted liability. At one point, SSL certs came with a huge 
disclaimer in them in ASCII disclaiming all liability. Any CA that accepts 
liability is daft. I mean -- why would you do that? Every software license in 
the world has a liability statement in it that essentially says they don't even 
guarantee that the software contains either ones or zeroes. Why would 
certificates be any different?

I don't think it really exists, not the way it gets thrown around as a term. 
Liability is a just a bogeyman -- don't go into the woods alone at night, 
because the liability will get you!

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-12-06 Thread Jon Callas

On 6 Dec, 2011, at 3:43 AM, ianG wrote:

 The promise of PKI in secure browsing is that it addresses the MITM.  That's 
 it, in a nutshell.  If that promise is not true, then we might as well use 
 something else.

Is it?

I thought that the purpose of a certificate was to authenticate the server to 
the client. This is a small, but important difference. If you properly 
authenticate the server, then (one hopes) that we've tacitly eliminated both an 
impersonation attack and a MiTM (an MiTM is merely a real-time, two-way 
impersonation).

The problem is that we're authenticating the server by naming, and there are 
many entities with a reason to lie about names. There are legitimate and 
illegitimate reasons to lie about names, and while we know that it's going on, 
we don't have a characterization of what reality even *is*.

We're seeing this in this very discussion. I also want to see proof that this 
is going on. I know it is, but I want to see it. These bogus certs are a lot 
like dark matter -- we know they're there, but we have little direct 
observation of them.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digest comparison algorithm

2011-12-01 Thread Jon Callas

On Dec 1, 2011, at 2:37 PM, Jerrie Union wrote:

 I’m wondering, if it’s running as some authenticated server application, if 
 it should be considered as resistant to time attacks nowadays. I’m aware 
 that’s
 not a good practice, but I’m not clear if I should consider it as exploitable 
 over the
 network (on both intranet and internet scenarios). 
 
 I would like to run some tests, but I’m not sure if I should follow some 
 specific
 approach. Anyone has done some research recently?

I agree with Ian. You have correctly observed that the check algorithm is not 
constant time. This is a flaw. But you're doing a hash, and consequently that 
flaw may not be observable. It is therefore a very small flaw. 

I might rewrite the routine differently than Ian did. Let me apologize in 
advance for being a C guy writing Java, but I'd do approximately this:

public boolean check(byte[] digest, byte[] secret) {
    int failure = 0;
    byte[] hash = md5(secret);   // md5() as given; returns the raw digest bytes

    failure |= (digest.length != hash.length) ? 1 : 0;   // Is the hash of the same length?

    for (int i = 0; i < Math.min(digest.length, hash.length); i++) {
        failure |= (digest[i] != hash[i]) ? 1 : 0;   // Check each byte for non-match
    }

    return failure == 0;   // return true if we didn't fail. Yeah, confusing.
}

I don't guarantee that this works, but it looks okay from here. The intent is 
that you always OR together a length check and each byte check, with a 
low-order 1 bit indicating a failure. Then you reverse polarity and convert to 
a boolean. I hope I didn't embarrass myself in this pseudocode.

You have to have a wart in one of two places. I chose to have a wart that if 
the sizes mismatch, you still do the byte checks. Alternatively, if you return 
early on a size mismatch, you leak a size mismatch, which is small potatoes in 
the grand scheme of things. My way of doing it leaks the size mismatch and its 
size if you can somehow force in a secret of variable size. I went back and 
forth on which is better and decided I don't care at the end.

I don't think there's anything wrong with what Ian did, but I stuck to having 
most of my work be an OR because I'm that paranoid.

Jon
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Newbie Question

2011-12-01 Thread Jon Callas

On Dec 1, 2011, at 8:43 PM, Randall Webmail wrote:

 From: ianG i...@iang.org
 
 It does store certs.  It just takes above  beyond to get at them.  
 Unknown whether it stores certs that you reject.
 
 I spend a lot of time in hotels, and it is VERY common for me to get one of 
 those popups complaining about certificates when I connect to the hotel WiFi.
 
 I am an almost-complete greenie WRT crypto, which is why I'm here to learn.
 
 What is the proper thing to do when one of those things pops up?   (It is NOT 
 a rare event).
 
 I use the https everywhere firefox extension on my OSX laptop.   I do not 
 access my bank accounts on public WiFi, but I really don't have a choice but 
 to access webmail and gmail.What should I do when I get one of those cert 
 warnings?

Click Cancel and then try again.

The usual reason for the message is that some network client has bumped up 
against the captive portal and gotten either a network error or something that 
is an HTTP response and thus a completely protocol-illegal answer. They then
interpret it as an SSL error when it's really nothing but the captive portal.

But you want to click cancel, because if there's someone who wants to hack you, 
that's how they'd do it.

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] No one bothers cracking the crypto (real life edition)

2011-12-01 Thread Jon Callas
http://pauldotcom.com/2011/11/cracking-md5-passwords-with-bo.html

BozoCrack is a depressingly effective MD5 password hash cracker with almost 
zero CPU/GPU load. Instead of rainbow tables, dictionaries, or brute force, 
BozoCrack simply finds the plaintext password. Specifically, it googles the MD5 
hash and hopes the plaintext appears somewhere on the first page of results.

It works way better than it ever should.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-30 Thread Jon Callas

On Nov 29, 2011, at 8:33 PM, Ilya Levin wrote:

 On Tue, Nov 29, 2011 at 5:52 PM, Jon Callas j...@callas.org wrote:
 
 But the other one is Drew Gross's observation. If you think like an 
 attacker, then you're a fool to worry about the crypto.
 
 While generally true, this is kind of an overstatement. I'd say that
 if you think like an attacker then crypto must be the least of your
 worries.  But you still must worry about it.
 
 I've seen real life systems were broken because of crypto combined
 with other thins. Well, I broke couple of these in old days (whitehat
 legal stuff)
 
 For example, the Internet banking service of the bank I would not name
 here was compromised during a blind remote intrusion simulating
 exercise because of successful known plaintext attack on DES. Short
 DES keys together with key derivation quirks and access to ciphertext
 made the attack very practical and very effective.
 
 Again, I'm not arguing with Drew Gross's observation. It is just a bit
 extreme to say it like this.

Let me try to restate what I was saying, because I think the point is getting 
lost in the words.

If I were an attacker who wanted to compromise your computers, I would not 
attack your crypto. I would attack your software. Even if what I wanted to do 
was ultimately to get to your crypto, I wouldn't mount a cryptanalytical 
attack, I'd attack your system. That's it.

We are seeing this in the real world now. The targeted malware that the German
government uses to compromise Skype is not cryptanalysis; it is a systems-level
attack that then gets at the crypto.

Robert Morris gave the famous advice, first, check for plaintext. I'm just
saying that checking first for Flash is today's equivalent.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-11-30 Thread Jon Callas
On Nov 30, 2011, at 9:32 PM, Rose, Greg wrote:

 I run a wonderful Firefox extension called Certificate Patrol. It keeps a 
 local cache of certificates, and warns you if a certificate, CA, or public 
 key changes unexpectedly. Sort of like SSH meets TLS. As soon as I went to my 
 stockbroker's web site, the warnings started to appear. Then it was just 
 checking IP addresses and stuff.

And I presume you didn't save the cert.

Of course, we just need to have people look for these and then save them.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-29 Thread Jon Callas

On Nov 27, 2011, at 12:10 PM, Steven Bellovin wrote:

 Does anyone know of any (verifiable) examples of non-government enemies
 exploiting flaws in cryptography?  I'm looking for real-world attacks on
 short key lengths, bad ciphers, faulty protocols, etc., by parties other
 than governments and militaries.  I'm not interested in academic attacks
 -- I want to be able to give real-world advice -- nor am I looking for
 yet another long thread on the evils and frailties of PKI.

Steve, it's hard to know how to answer that, really. I often quote Drew Gross, 
I love crypto, it tells me what part of the system not to bother attacking. 
I'd advise anyone wanting to attack a system that they should look at places 
other than the crypto. Drew cracked wise about that to me in 1999 and I'm still 
quoting him on it.

If you look at the serious attacks going on of late, none of them are crypto, 
to the best of my knowledge, anyway. The existing quote-quote APT attacks are 
simple spear-phishing at best. A number of them are amazingly simplistic. 

We know that the attack against EMC/RSA and SecureID was done with a vuln in a 
Flash attachment embedded in an Excel spreadsheet. According to the best news I 
have heard, the Patient Zero of that attack had had the infected file 
identified as bad! They pulled it out of the spam folder and opened it anyway. 
That attack happened because of a security failure on the device that sits 
between the keyboard and chair, not for any technology of any sort.

There are also a number of cases where suspects or convicted criminals in the 
hands of powerful governments along with their encrypted data have not had 
their crypto broken. Real world evidence says that if you pick a reasonably 
well-designed-and-implemented cryptosystem (like PGP or TrueCrypt) and exercise 
good OPSEC, then your crypto won't be broken, even if you're up against the 
likes of First World TLAs.

I have, however, hidden many details in a couple of phrases above, especially 
the words exercise good OPSEC.

If we look at it from the other angle, though, one of the cautionary tales I'd 
tell, along with a case study is the TI break. The fellow who did it announced 
on a web board that very long number equals long number_1 times long 
number_2. People didn't get it, so he wrote it out in hex. They still didn't 
get it, and he pointed out that the very long number could be found in a 
certain certificate. The other people on the board went through all of 
Kübler-Ross's stages in about fifteen posts. It's hilarious to read. The
analyst said that he'd sieved the key on a single computer in -- I remember it 
being about 80 days, but it could be 60ish. Nonetheless, he just went and did 
it.

On the one hand, he broke the crypto. But on the other hand, we had all known 
that 512-bit numbers can be quasi-easily factored. It was a shock, but not a 
surprise. 

Another thing to look at would be the cryptanalysis of A5/n over the years. 
Certainly, there's been brilliant cryptanalysis on those ciphers. But it's also 
true that the people who put them in place willfully avoided using ciphers 
known to be strong. It is as if they built their protocols so that they could 
hack them but they presumed we couldn't. We proved them wrong. Does that really 
count as cryptanalysis as opposed to puncturing arrogance?

If you want to look at protocol train wrecks, WEP is the canonical one. But 
that one had at its core the designers cheaping out on the crypto so that the 
hardware could be cheaper. I think it is a good exercise to look the mistakes 
in WEP, but a better one is to look at creating something significantly more 
secure within the same engineering constraints. You *can* do better with about 
the same constraints, and there are a number of ways to do it, even.

I can list a number of oopses of lesser degrees, where someone took reasonable 
components and there were still problems with it. But I really don't think 
that's what you're asking for, either.

The good news we face today is that there really isn't any snake oil any more. 
If there is anything that we can be proud of as a discipline, it's that the 
problems we face are genuine mistakes as opposed to a genuine or malicious
failure to understand the problem.

The bad news is that there are two major problems left. One is mis-use of 
otherwise mostly okay protocols. Users picking crap passwords is the most 
glaring example of this. There are a number of well-tested cryptosystems out 
there that are nearly universally used badly.

But the other one is Drew Gross's observation. If you think like an attacker, 
then you're a fool to worry about the crypto. Go buy a few zero days, instead. 
But that's only if you don't want to be discovered afterwards. If you don't 
care, there are so many unpatched systems out there that scattershotting 
well-crafted spam with a Flash exploit works just fine.

What I'm really saying here is that in the chain of real security, crypto is
the strongest link.

Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-28 Thread Jon Callas
 
 WEP?  Again, we all know how bad it is, but has it really been used?
 Evidence?
 
 Yes, WEP was a confirmed vector in the Gonzales TJX hack:
 http://www.jwgoerlich.us/blogengine/post/2009/09/02/TJ-Maxx-security-incident-timeline.aspx
 
 http://en.wikipedia.org/wiki/TJX_Companies#Computer_systems_intrusion
 
 Ah --- I'll check.  I knew they attacked WiFi; I didn't recall that they'd
 cracked WEP.  Thanks.

I don't believe the TJX attack cracked WEP. I believe that the post-hack 
auditors identified WEP as a weak point, but the attackers got in through an 
easily-cracked network. By easily cracked I mean something like a stupid 
password or an unsecured network. The attackers were not sophisticated.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] HMAC over messages digest vs messages

2011-11-02 Thread Jon Callas

On Nov 2, 2011, at 12:59 PM, Leandro Meiners wrote:

 I thought of that, but I could not convince myself because it seems to
 depend on the particular application.
 
 For example, lets assume the following scenario: m is a message that it
 authenticated by the HMAC.
 
 For example, in the HMAC(HASH(m)) scenario, you might find a collision,
 however it might be gibberish and therefore useless. However, it might
 be that m lacks structure so that HMAC(m) might be the valid signature
 for two different messages m1 and m2 that both give the same m to be
 signed. In this case, the HMAC(HASH(m)) could save you from such a
 situation.
 
 Nevertheless, I am not sure of how to factor this into the reasoning as
 there are probably cases where an example can be found the other way around.
 
 Am I making any sense?

I think I understand where you're going. However, in the general case, as Marsh
and Greg have pointed out, there are length issues, etc., such that you'd want
at the very least to hash the length + the message. Very likely more tweaks are
needed, too.

But I have to ask why you're bothering? The best way in the world to introduce 
a crypto flaw is to improve an existing, known construction. Really. Don't. If 
you don't have a specific problem you're trying to fix, or feature you need to 
enable, treat the standard set of constructions like a box of Legos. Just put 
them together, and you'll almost always be fine. When you're not fine, you'll 
have problems that lots of people will understand, too. 

The construction you have, an HMAC of a hash, computes three hashes, as opposed 
to the HMAC proper which only does two. So it's slower. On the security axis, 
we're now tweaking your construction to remove flaws that wouldn't be there if
you just used an HMAC.

Ask yourself what problem are you trying to solve that HMAC doesn't solve? If 
you don't have a good answer to that question, just use an HMAC.
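
For reference, the standard construction really is just a few lines with the
stock JCA classes. A minimal sketch -- the HmacSHA256 choice and the names are
mine, purely for illustration:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSketch {
    // HMAC over the raw message, no extra outer hash; HMAC itself already
    // does the two hash passes internally.
    static byte[] hmacSha256(byte[] key, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message);
    }
}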

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PFS questions (was SSL *was* broken by design)

2011-10-03 Thread Jon Callas
 At the risk of feeding the conspiracy angle, I note that there is only one 
 stream cipher for SSL/TLS (RC4). All the others in common use are CBC modes, 
 with that same predictable IV weakness as IPsec (i.e. BEAST). There are no 
 DHE cipher suites defined for RC4. So if you want PFS, you have to accept 
 predictable IVs. If you want resistance to BEAST, you have to give up PFS.
 
 Personally, I don't interpret this as anything more than the IETF process and 
 some vendor biases back in the 90s. But it shows that designing for this 
 concept of 'agility' is important, in particular for reasons you don't know 
 at the time.

Oh, please.

I'm sorry, Marsh, but this is just silly, suggesting that there are vendor 
biases against stream ciphers and agility. After all, if we look through the 
library of publicly-available, well-trusted stream ciphers there is, u, 
well, there's always, ur, well. Oh, I know! Counter mode! Yeah, that's it. 
On the agility front, most people seem to be against it. Weren't we in a huge 
no-choices-are-the-only-security mood a few weeks ago?

Stream ciphers are hard. They're hard to build correctly, hard to use 
correctly, and have been the red-headed-stepchild of cipher design for really 
good reasons. Remember WEP? The most damning problem in it to my mind was the 
order-2^24 attack caused by using a stream cipher (and a 24-bit pseudo-IV).

Any stream cipher that gets created has to answer this really good question: 
Why are you better than AES-CTR? The next question would be why it's better 
than Serpent-CTR, or Twofish-CTR, or heck why not use Threefish-CTR? 

Of course right now, the best thing to do stream-cipher-wise is to use GCM 
mode, which is in TLS 1.2, but hardly deployed at all, no doubt because of bias
against wanting to use something that's authenticated, right? After all, 
wouldn't the surveillance state want us all to be vulnerable to CBC attacks 
like BEAST, and people who are preventing that must be in cahoots with the NSA, 
right? 

But of course, GCM mode is part of Suite B, and that's the NSA's push for using 
an authenticated data stream. So that means that the people who are pushing for 
stream ciphers are also in cahoots with the surveillance state by pushing for 
authenticated modes, too!

In case anyone missed it, the sarcasm bits should have been showing up in the 
UTF-8 over the last couple of paragraphs at some point or other.
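
Sarcasm fully off, the underlying point stands: GCM gives you the
stream-cipher behavior plus authentication in one stock mode. A minimal sketch
with the standard JCA classes -- the key handling and the nonce-prefix framing
are my simplifying assumptions, not a protocol:

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class GcmSketch {
    // Encrypt-and-authenticate in one pass; output is nonce || ciphertext || tag.
    static byte[] seal(byte[] key, byte[] plaintext) throws Exception {
        byte[] nonce = new byte[12];                  // 96-bit nonce; never reuse under one key
        new SecureRandom().nextBytes(nonce);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new GCMParameterSpec(128, nonce));     // 128-bit authentication tag
        byte[] sealed = c.doFinal(plaintext);         // ciphertext with tag appended
        byte[] out = new byte[nonce.length + sealed.length];
        System.arraycopy(nonce, 0, out, 0, nonce.length);
        System.arraycopy(sealed, 0, out, nonce.length, sealed.length);
        return out;
    }
}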

Come on. This discussion has descended past whacko, which is where it went once 
the broken by design discussion started. Yeah, security is hard, but it's 
software. We know how to do that, once we understand the problems. The wrong 
questions have been asked for so long in this long discussion that I think the 
only reasonable people are the ones ignoring the whole thing.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Tell Grandma to remember the Key ID and forget the phone number. [was: Re: Let's go back to the beginning on this]

2011-09-26 Thread Jon Callas
 Drill Grandma on one thing:
 
 FORGET THE TELEPHONE NUMBER.  REMEMBER THE KEY ID.
 
 If she's smart enough to know to write down or remember the telephone
 number, she's smart enough to re-channel that to the Key ID.
 
 Merchants and banks proudly and prominently display their Key IDs on
 their front pages and with all ads likely to catch Grandma's eye.
 
 The rest is done by a local or on-line cryptographically-secure
 directory indexed by Key ID.
 
 Now retire the CAs and forget about them.

That's a big if. It's an if that's so big that it's guaranteed to be false. 
Human beings don't do very well remembering such things. It's worse if you want 
to roll them over periodically.

Now what you're suggesting could work if you did something like made some 
directories that stored the key IDs and web sites they belonged to. This could 
be something that could easily be stored in Google, Yahoo!, or Bing, for 
example. This has a downside of a privacy leak every time someone wants to look 
one up.

If that privacy leak bothers you, or you want to offload the lookup requests 
from the search engine infrastructure, we could always store it on the web 
server itself. A digital signature would provide the proper integrity check.

And yes, with that system, we could retire the CAs.
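
To make that integrity check concrete, here's a minimal sketch of verifying
such a signed name-to-key-ID record. The record format, the algorithm choice,
and how you got the site's public key in the first place are all assumptions
left open -- which is of course where the hard part lives:

import java.security.PublicKey;
import java.security.Signature;

public class KeyIdRecord {
    // Verify that the record bytes (e.g. a site name paired with its key ID)
    // carry a valid signature from the site's long-term key.
    static boolean verify(byte[] record, byte[] sig, PublicKey siteKey)
            throws Exception {
        Signature v = Signature.getInstance("SHA256withRSA");
        v.initVerify(siteKey);
        v.update(record);
        return v.verify(sig);
    }
}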

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] SSL is not broken by design

2011-09-23 Thread Jon Callas

On Sep 23, 2011, at 11:17 AM, Ben Laurie wrote:

 On Thu, Sep 22, 2011 at 4:46 PM, Peter Gutmann
 pgut...@cs.auckland.ac.nz wrote:
 Ben Laurie b...@links.org writes:
 
 Well, don't tease. How?
 
 The link I've posted before (but didn't want to keep spamming to the list):
 
 http://www.cs.auckland.ac.nz/~pgut001/pubs/pki_risk.pdf
 
 That was a fun read and I mostly agree, but it raises some questions...
 
 a) Key continuity is nice, but ... are you swapping one set of
 problems for another? What happens when I lose my key? How do I roll
 my key? I just added a second server with a different key, and now a
 bunch of users have the wrong key - what do I do? How do I deal with
 a compromised key?

Great rhetorical questions, Ben. You nail it.

Continuity is great, but it has its own set of problems that include all the 
ones you mention. Rolling keys is the easiest one of them and can be solved 
pretty much the same way. But all the others are problems that continuity 
introduces. I brought up these issues in my long rant. Continuity can solve 
some, but not all of the problems.

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PKI - and the threat model is ...?

2011-09-12 Thread Jon Callas

On Sep 12, 2011, at 7:15 AM, M.R. wrote:

 In these long and extensive discussions about fixing PKI there
 seems to be a fair degree of agreement that one of the reasons
 for the current difficulties is the fact that there was no precisely
 defined threat model, documented and agreed upon ~before~ the
 SSL system was designed and deployed.
 
 It appears to me that it is consequently surprising that again,
 in these discussions for instance, there is little or nothing
 offered to remedy that; i.e., to define the threat model
 completely independent of what the response to it might or
 might not be.

Bingo.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitcoin observation

2011-07-07 Thread Jon Callas

On Jul 5, 2011, at 10:53 PM, Alfonso De Gregorio wrote:

 Let's assume there is a way to convince market participants that some 
 Bitcoins has been destroyed, what would happen then? The value of the current 
 Bitcoin supply would slightly increase, that's correct.
 
 Would market participants be willing to invest more in order to secure their 
 liquid assets against Bitcoin assassination attacks? How the attack rate 
 would increase/decrease?
 
 The tragedy of the commons suggests us that when both risks and benefits are 
 socialized between the elements of the population, individuals lack the 
 incentive to unilaterally invest in security.
 
 On the other hand, as long as the reduction of money supply increases the 
 value of the survived assets (ie, there's demand), some elements of the 
 population will have an incentive to attack.
 
 It would be interesting to investigate further.

Some of market participants would certainly be willing to do that.

Consider a participant who is also a payment processor, similar to a credit 
card, etc. They're a financial institution, so they're certainly interested in 
assets and commodities, but they also receive money on normal transactions and 
do not get this when bitcoins are directly transferred. (Obviously, they could 
bill transactions in bitcoins just as easily as dollars or euros or sterling, 
too.)

Assume they get 2% of a transaction. This means that (hand wave) their
Highlander Function is proportional to 50, since 1/0.02 = 50. In other words,
if assassinating a bitcoin causes 50 transactions of comparable value to move
to traditional currencies, the fees on those transactions cover the coin's
value, and the net cost of killing it is zero.

Also obviously, if they just start collecting bitcoins themselves, they can 
take them out of circulation, which as we've all noted, may be equivalent to 
destroying them.

But this institution also profits by instability. If the price of a bitcoin 
fluctuates wildly, then they are less attractive as a currency to small holders.

For example, let's assume 5% wild fluctuation. I'll put this in mathematical 
terms as saying that if its nominal price is 100, then the actual price is a 
random function between 95 and 105. If I am a receiver of bitcoins, I deal with 
this simply. I just say that the value of a bitcoin is 95. I push the cost of 
fluctuation to the payer. I'm not interested in anything else -- after all, I 
have the supply and if the supply is of something that is attractive for you to 
use a bitcoin for, I just make you eat the fluctuation. Note that the 
manipulator can also use the fluctuation as a lever to get bitcoins cheaper. 
They keep querying the price of a bitcoin and buy it if it's (e.g.) 98 or less, 
thus getting 2% profit on them.

If the fluctuation creator can induce more fluctuation, it's even better. It's 
even to their benefit if the fluctuation is lopsided -- e.g. 90% of the time 
the value is 110, but 10% of the time the value is 50. Pump and dumps can get 
this behavior. 

If this manipulation causes people to flee bitcoins, they win because they get 
people to use normal credit cards again. If they fail, and despite all this, 
bitcoins are still a valuable currency, they win again. They have a lot of a 
valuable currency.

Note that I've been sketching this attack assuming that it's Just Business. If 
there's genuine animus against the currency by which I mean they just want it 
to fail and don't care about cost, then all these are accentuated.

There's an old security maxim that it's easy to make a system that you can't 
break yourself, but it's hard to make one that other people can't break. This 
is especially true when the attacker has motivations you don't know or 
understand.

A government, for example, could turn a blind eye to people manipulating the 
market. Easy and passive and there's no fingerprints. They could set up private 
deals so that they pay a bounty over the market price of bitcoins (and make 
that be a moving average to encourage fluctuation) and let the financials take 
care of it. They could set up their own hacking networks to steal or destroy 
bitcoins.

It is my intuition that nation states of all stripes aren't going to like them. 
Some set of them would be happy to let the banks and speculators take care of 
it. Some of them would engage in actual hacking to hurt the currency, and the 
interesting property that destroying a bitcoin is a worthwhile attack makes it 
even more interesting. 

What if the Western governments gave Lulzsec letters of marque and told them 
they'd pay them by the lul? This would make a great movie plot. The merry 
hackers get caught and then turned loose on the (according to that plot) bad 
guys.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Jon Callas
On Jul 4, 2011, at 10:10 PM, coderman wrote:

 H3 should be Gospel: There is Only One Mode and it is Secure
 
 anything else is a failure waiting to happen…

Yeah, sure. I agree completely. How could any sane person not agree? We could 
rephrase this as, The Nineties Called, and They Want Their Exportable Crypto 
Back. Exportable crypto was risible at the time and we all knew it.

But how is this actionable? How can I use this principle as a touchstone to let 
me know the right thing to do? I suppose we could consider it a rule of thumb
instead, but that flies in the face of making it Gospel.

Rather than rant, I'll propose a practical problem and pose a question.

You're writing an S/MIME system. Do you include RC2/40 or not? Why?

Hint: Gur pbeerpg nafjre vf gung lbh vaqrrq fubhyq vapyhqr vg. Ohg V yrnir gur 
jurersberf nf na rkrepvfr. Ubjrire, guvf uvag vf nyfb n zrgn-uvag nf gb gur 
ernfbaf jul lbh fubhyq vapyhqr vg.
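
For anyone without a rot13 filter handy, a throwaway decoder -- plain Caesar
rotation by 13, nothing specific to this list:

public class Rot13 {
    // Rotate each ASCII letter 13 places; everything else passes through.
    static String rot13(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            if (c >= 'a' && c <= 'z')      out.append((char) ('a' + (c - 'a' + 13) % 26));
            else if (c >= 'A' && c <= 'Z') out.append((char) ('A' + (c - 'A' + 13) % 26));
            else                           out.append(c);
        }
        return out.toString();
    }
}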

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Bitcoin observation

2011-07-05 Thread Jon Callas
I was sitting around the other weekend with some friends and we were talking 
about Bitcoin, and gossiping furiously about it. While we were doing so, an 
interesting property came up.

Did you know that if a Bitcoin is destroyed, then the value of all the other 
Bitcoins goes up slightly? That's incredible. It's amazing and leads to some 
emergent properties.

If you have a bunch of Bitcoins and you want to increase your worth, you can do 
this by one of three ways:

(1) Create more Bitcoins.
(2) Buy up more Bitcoins, with the end state of that strategy being that you've 
cornered the market.
(3) Destroy other people's Bitcoins. The end state of that is also that you've 
cornered the market.

I also observe that if the player succeeds at either strategy (2) or (3), then 
Bitcoins are no longer a decentralized currency. They're a centralized 
currency. (And presumably, that player wins the Bitcoin Game.)

I'll go further and note that if a self-stable oligarchy manages to buy or 
destroy all the other  Bitcoins, they win as a group, too. With enough value in 
the Bitcoin universe, and properly motivated players, that could easily happen.

I wonder myself when it is more efficient to destroy a Bitcoin than to buy or
create one. Let's call the value of the energy to create one C. We'll call the
value to buy one B. There must be some constant H where H*C or H*B makes it as
efficient to destroy one as to buy or create one. I suppose there are really
two separate constants, H_c and H_b.

Nonetheless, I call this H because it's the Highlander Constant. You know -- 
there can only be one! If H is large enough, then you have unrestricted 
economic war that leads to a stable end where a single player or an oligarchy 
holds all the bitcoins.

So if we consider a universe of N total coins and a total market value of V, 
and a player's purse size of P coins, what's the value of H? I think it's an
interesting question.
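
A back-of-the-envelope sketch of one corner of it, under the strong
simplifying assumptions (mine, not part of the puzzle as posed) that the total
market value V stays fixed and is spread evenly over the N coins:

   W_before = P * V/N                       (your purse, before)
   W_after  = P * V/(N-1)                   (after destroying someone else's coin)
   Delta_W  = P*V * (1/(N-1) - 1/N) = P*V / (N*(N-1))

Buying a coin costs roughly B = V/N, so the destroy-versus-buy payoff ratio is
Delta_W / B = P/(N-1). Under these assumptions, destruction pays off fastest
for the players with the biggest purses, which at least points in the same
direction as the centralization hypothesis below.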

I have some other related things to muse over as well, like what it means to 
destroy a bitcoin. If you silently destroy one, the value of the remaining 
coins increases passively through deflation. But if you publicly destroy one, 
you could see an immediate uptick. Does revealing the value of a coin destroy 
it? Do you need to publicly destroy one through some zero-knowledge proof? 

Also, does public destruction actually hurt the market by making people tend to 
not want to put money into Bitcoins? Might this form some sort of negative 
feedback on the value of H, by cheapening Bitcoins as a whole? But is there a 
double-negative feedback through the fact that if people want to sell coins 
cheaply, the big players just buy them cheap and run the market back up that 
way?

The end of all this musing, though, is that I believe that a decentralized 
coinage that has the property that destroying a coin has value *inevitably* 
leads to centralization through the Highlander Constant.

Discuss.

Jon



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitcoin observation

2011-07-05 Thread Jon Callas

On Jul 5, 2011, at 12:07 AM, coderman wrote:

 On Mon, Jul 4, 2011 at 11:44 PM, Jon Callas j...@callas.org wrote:
 ...
 Did you know that if a Bitcoin is destroyed, then the value of all the other 
 Bitcoins goes up slightly? That's incredible. It's amazing and leads to some 
 emergent properties.
 
 this is not completely correct. it is only true if you destroy a
 bitcoin in circulation. (for whatever interpretation of in
 circulation is reasonable.)
 
 for example, there's one guy sitting on a cache of 371,000 bitcoins
 generated when the network was small and computation was minuscule.
 these were never in circulation, and for sake of argument, consider
 the physical media containing the keys for the coins is lost (not
 destroyed).
 
 how does this affect the value of other coins?

Yeah, whatever.

 
 
 If you have a bunch of Bitcoins and you want to increase your worth, you can 
 do this by one of three ways:
 
 (1) Create more Bitcoins.
 (2) Buy up more Bitcoins, with the end state of that strategy being that 
 you've cornered the market.
 (3) Destroy other people's Bitcoins. The end state of that is also that 
 you've cornered the market.
 
 0. steal other people's bitcoins. this is currently the most
 productive means for obtaining more bitcoin value, as proven over the
 last few months. cue rant on building secure software systems here
 
 1. is only available while there are coins to be generated. at some
 point all coins will be mined and:
 
 4. collecting transaction fees for participating in the network is the
 last option in this list.

Good points. But nonetheless, it's a really, really cool property of the system 
that you can gain by destroying bitcoins. I mean heck -- let's create another 
sub-constant, H_s, which is the constant that shows when it is better to destroy
one than steal one. Obviously, if you have zero bitcoins, then stealing them 
has some value. But heck -- what if you're sitting on a cache of 371,000 coins. 
My intuition is that it's going to be better to destroy than steal. If you're 
found with a stolen bitcoin, you have some 'splaining to do. But if you 
silently destroy one -- then you see a market float.


 
 
 regarding the remainder of your argument, the ability to divide
 bitcoins into arbitrarily smaller and smaller units implies that such
 an attacker will be chasing an asymptote; never able to reach
 definitive control while leaving a large network trading amongst
 themselves in millionths of a coin…

I think it only changes the value of H. It doesn't invalidate the argument. You 
can subdivide them into 10^8 pieces, right?

If you think of naive H and complete H, we know that complete H is at most
10^8 times the naive value.

 
 the impacts of a large H are still interesting, even if never leading
 to one player takes all...

Very interesting. I stand by my hypothesis. I think it inevitably leads to 
centralization through coin assassination.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Jon Callas

On Jul 4, 2011, at 11:35 PM, coderman wrote:

 On Mon, Jul 4, 2011 at 11:11 PM, Jon Callas j...@callas.org wrote:
 ...
 Yeah, sure. I agree completely.
 
 no you don't ;)
 

Actually I do. I also believe in truth and justice and beauty, too. And 
simplicity. I just value actionable, as well.


 
 How can I use this principle as a touchstone to let me know the right thing 
 to do. I suppose we could consider it a rule of thumb instead, but that 
 flies in the face of making it Gospel.
 
 what are the good reasons for options that don't include:
 - backwards compatibility
 - intentional crippling (export restrictions)
 - patents or other license restrictions
 - interoperability with others
 ?
 
 there may be a pragmatic need for options dealing with existing
 systems or business requirements, however i have yet to hear a
 convincing argument for why options are necessary in any new system
 where you're able to apply lessons learned from past mistakes.
 
 

Pragmatic. That's what I'm talking about pragmatism. It's not pragmatic to go 
write a new protocol all the time. Especially if the time to create one with no 
known flaws is longer than the time to find a flaw.


 You're writing an S/MIME system...
 
 well there's your problem right there!
 

Hey, you mentioned backwards compatibility, yourself.

Jon
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-04 Thread Jon Callas

On Jul 4, 2011, at 4:28 PM, Sampo Syreeni wrote:

 (I'm not sure whether I should write anything anytime soon, because of Len 
 Sassaman's untimely demise. He was an idol of sorts to me, as a guy who Got 
 Things Done, while being of comparable age to me. But perhaps it's equally 
 valid to carry on the ideas, as a sort of a nerd eulogy?)
 
 Personally I've slowly come to believe that options within crypto protocols 
 are a *very* bad idea. Overall. I mean, it seems that pretty much all of the 
 effective, real-life security breaches over the past decade have come from 
 protocol failings, if not trivial password ones. Not from anything that has 
 to do with hard crypto per se.

Let me be blunt here. The state of software security is so immature that 
worrying about crypto security or protocol security is like debating the 
options between hardened steel and titanium, when the thing holding the chain 
of links to the actual user interaction is a twisted wire bread tie. 

Lots of other discussion is people noting that if you coated that bread tie 
with plastic rather than paper, it would be a lot more resistant to rust. And 
you know what, they're right!

In general, the crypto protocols are not the issue. I can enumerate the obvious 
exceptions where they were a problem as well as you can, and I think that they 
prove the rule. Yeah, it's hard to get the crypto right, but that's why they 
pay us. It's hard to get bridges and buildings and pavement right, too.

There are plenty of people who agree with you that options are bad. I'm not one 
of them. Yeah, yeah, sure, it's always easy to make too many options. But just 
because you can have too many options that doesn't mean that zero is the right 
answer. That's just puritanism, the belief that if you just make a few absolute 
rules, everything will be alright forever. I'm smiling as I say this -- 
puritanism: just say no.

 
 So why don't we make our crypto protocols and encodings *very* simple, so as 
 to resist protocol attacks? X.509 is a total mess already, as Peter Gutmann 
 has already elaborated in the far past. Yet OpenPGP's packet format fares not 
 much better; it might not have many cracks as of yet, but it still has a very 
 convoluted packet structure, which makes it amenable to protocol attacks. Why 
 not fix it into the simplest, upgradeable structure: a tag and a binary blob 
 following it?

Meh. My answer to your first question is that you can't. If you want an 
interesting protocol, it can't resist protocol attacks. More on that later.

As for X.509, want to hear something *really* depressing? It isn't a total 
mess. It actually works very well, even though all the mess about it is quite 
well documented. Moreover, the more that X.509 gets used, the more elegant its 
uses are. There are some damned fine protocols using it and just drop it in. 
Yeah, yeah, having more than one encoding rule is madness, but to make that 
make you run screaming is to be squeamish. However, the problems with PKI have 
nothing to do with the encoding.

OpenPGP is a trivially simple protocol at its purest structure. It's just tag, 
length, binary blob. (Oh, so is ASN.1, but let's not clutter the issue.) You 
know where the convolutedness comes from? A lack of options. That and 
over-optimization, which is actually a form of unneeded complexity. One of the 
ironies about protocol design is that you can make something complex by making 
it too simple.

I recommend Don Norman's new book, Living With Complexity. He quotes what he 
calls Tesler's law of complexity, which is that the complexity of a system 
remains constant. You can hide the complexity, or expose it. If you give the 
user of your system no options, it means you end up with a lot of complexity 
underneath. If you expose complexity, you can simplify things underneath. The 
art is knowing when to do each.

If you create a system with truly no options, you create brittleness and 
inflexibility. It will fail the first time an underlying component fails and 
you can't revise it. 

If you want a system to be resilient, it has to have options. It has to have 
failover. Moreover, it has to fail over into the unknown. Is it hard? You bet. Is
it impossible? No. It's done all the time.

I started off being a mathematician and systems engineer before I got into 
crypto. I learned about building complex systems before I learned crypto, and 
complexity doesn't scare me. I look askance at it, but I don't fear it.

Yes, yes, simpler systems are more secure. They're also more efficient, easier 
to build, support, maintain, and everything else. Simplicity is a virtue. But 
it is not the *only* virtue, and I hope you'll forgive me for invoking 
Einstein's old saw that a system should be as simple as possible and no 
simpler. 

I think that crypto people are scared of options because options are hard to 
get right, but one doesn't get away from options by not having them. The only 
thing that happens is that when one's system fails, there are no options left
for dealing with the failure.

Re: [cryptography] Oddity in common bcrypt implementation

2011-06-29 Thread Jon Callas
On Jun 29, 2011, at 11:36 AM, James A. Donald wrote:

 
 Thus the password has to be normalized before being hashed.
 
 Further, often a variants of a single character with a single meaning also 
 have multiple codes - there is no sharp boundary between the string, and 
 formatting information, though this is more a problem for unicode searching, 
 than for unicode passwords.

Meh.

I've dealt with this issue in cases where the password is (sadly) by necessity
not even unicode, but the series of raw scan codes from the keyboard. I've dealt
with them like Peter -- they're just a blob of bytes -- and had only slightly
more problems than he's had. There are a couple of possible solutions, but the
use-the-same-keyboard one has practical appeal.

My experience says that while strictly speaking, you are correct, in practice
it's not an issue because you almost always are hashing the same blob of bytes.

Note that I'm not saying it can't go wrong, nor that it won't go wrong. Merely
that I don't see it as an issue that must be solved up front. It's an issue 
that needs to be considered, but it's less of a problem than you'd think.
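
If you do decide the inputs are text rather than a blob of bytes, here's a
minimal sketch of the normalize-then-hash approach being debated, using Java's
built-in Normalizer. The NFC choice and the bare SHA-256 are my illustrative
assumptions; a real password store would feed the canonical bytes to bcrypt or
some other slow KDF rather than a bare hash:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.text.Normalizer;

public class PasswordCanonicalizer {
    // Map the password to a canonical Unicode form so that visually
    // identical inputs produce identical bytes, then hash those bytes.
    static byte[] canonicalHash(String password) throws Exception {
        String canonical = Normalizer.normalize(password, Normalizer.Form.NFC);
        return MessageDigest.getInstance("SHA-256")
                            .digest(canonical.getBytes(StandardCharsets.UTF_8));
    }
}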

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Repeated Encryptions Considered.... ?

2011-06-19 Thread Jon Callas

On Jun 18, 2011, at 8:44 PM, Tom Ritter wrote:

 I'm wondering what the general opinion of folks is for repeated
 encryptions - either accidentally or on purpose.  Applied Cryptography
 devotes a chapter to it, and I'm more interested in cascades -
 multiple algorithms: RC4 k1(AES k2(plaintext)) .  The general opinion
 I've heard is It's a bad idea, you shouldn't do it - but I want to
 revisit that.

I think it comes down to my old mentor Larry Kenah's question: what problem are 
you trying to solve?

If you don't trust AES, what makes you think that RC4 will fix the problem? 
Similarly, if you don't trust RC4 as a good crypto algorithm, why not just use 
base64, which is not a good crypto algorithm, either?

Looking at it another way, let's presume you like AES. Let's presume that means 
you think there is no better attack on the algorithm than brute force, why 
would putting another algorithm on top of it help at all? It just slows things 
down.

I presume that you're considering it because there's some nagging part of your 
head that says, but what if and you're hedging your bet. But at the end 
of the day, it's hard to know what an effective hedge is going to be. Very 
rarely is crypto actually broken. It's almost always that the *system* is 
broken. Two ciphers create a key management issue, or you use a KDF and then 
you've just created a more complex cipher.

If you take a key and run it through a KDF to get two subkeys, each passed to a
cipher, it's just a big cipher with a fancy key schedule.
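
To see the point, here's what that fancy key schedule looks like in practice
-- a toy sketch using HMAC-SHA256 as the KDF. The labels and the HMAC-as-KDF
choice are my illustrative assumptions:

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SubkeySketch {
    // Derive an independent-looking 32-byte subkey per label from one
    // master key. Feed subkey(k, "cipher-1") and subkey(k, "cipher-2")
    // to the two ciphers; the cascade is then just one big cipher keyed
    // by the master key.
    static byte[] subkey(byte[] masterKey, String label) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(masterKey, "HmacSHA256"));
        return mac.doFinal(label.getBytes(StandardCharsets.UTF_8));
    }
}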

That brings us back to the main question: what problem are you trying to solve?

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Repeated Encryptions Considered.... ?

2011-06-19 Thread Jon Callas

On Jun 19, 2011, at 5:54 PM, Nico Williams wrote:

 On Sun, Jun 19, 2011 at 7:01 PM, Jon Callas j...@callas.org wrote:
 That brings us back to the main question: what problem are you trying to 
 solve?
 
 The OP meantioned that the context was JavaScript crypto, and whether
 one could forego the use of TLS if crypto were being applied at a
 higher layer.
 

Uh huh, but what problem are you trying to solve?

Why not send *all* your network traffic over TLS?

Jon


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] encrypted storage, but any integrity protection?

2011-01-15 Thread Jon Callas

On Jan 15, 2011, at 4:23 PM, Ivan Krstić wrote:

 On Jan 14, 2011, at 4:13 PM, Jon Callas wrote:
 XTS in particular is a wide-block mode that takes a per-block tweak. This 
 means that if you are using an XTS block of 512 bytes, then a single-bit 
 change to the ciphertext causes the whole block to decrypt incorrectly. If 
 you're using a 4K data block, even better, as the single bit error 
 propagates to the whole 4K.
 
 No, XTS is not a wide-block mode. Its diffusion properties are an improvement 
 upon CBC, which allows arbitrary bits to be flipped in a target block at the 
 expense of randomizing the entire previous block. XTS doesn't let you do 
 that; you can only randomize entire 128-bit blocks, just as with LRW and XEX 
 (from which XTS is derived). Where diffusion beyond 128 bits is a 
 requirement, the options are wide-block modes like CMC (now deprecated) and 
 EME, or wide-block ciphers like BEAR and LION. All of these require making 
 more than one pass over the data; two in the case of EME, three in the case 
 of BEAR and LION.

Sorry. My brain fart on that.

Nonetheless, wide block modes are called by some people Poor Man's 
Authentication because they approximate authentication to some degree. 

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] encrypted storage, but any integrity protection?

2011-01-14 Thread Jon Callas

On Jan 13, 2011, at 10:40 AM, travis+ml-rbcryptogra...@subspacefield.org wrote:

 * PGP Signed by an unknown key
 
 So does anyone know off the top of their head whether dm-crypt or
 TrueCrypt (or other encrypted storage things) promise data integrity
 in any way, shape or form?

This depends on what you mean by data integrity. In a strict, formal way, where 
you'd want to have encryption and a MAC, the answer is no. I don't know of one 
that does, but if there *is* one that does, it's likely got other issues. 
Disks, for example, pretty much assume that a sector is 512 bytes (or 
whatever). There's no slop in there. It wouldn't surprise me if someone were 
doing one, but it adds a host of other operational issues.

However -- a number of storage things (including TrueCrypt) are using modes 
like XTS-AES. These modes are sometimes called PMA modes for Poor Man's 
Authentication. XTS in particular is a wide-block mode that takes a per-block 
tweak. This means that if you are using an XTS block of 512 bytes, then a 
single-bit change to the ciphertext causes the whole block to decrypt 
incorrectly. If you're using a 4K data block, even better, as the single bit 
error propagates to the whole 4K. On top of that, there's the use of the tweak 
parameter; in disk storage, it's typically a function of the LBA of the data. 

Together, this severely limits what an attacker can do to a storage system. 
Single bit changes make a whole sector go bad, and you can't shuffle sectors. 
While that isn't authentication in a formal sense, operationally the 
constraints it puts on the attacker make it look a lot like authentication.

XTS has the additional advantage that it's a small overhead on top of AES.

So while it's not actual data integrity, once you start lowering your
requirements by saying in any way, shape or form, anyone who is using XTS,
EME, or other wide-block, tweakable modes is getting close to what you're
asking for.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] anonymous surveys

2011-01-06 Thread Jon Callas
 1) What are the limits of what we can achieve?  I'm thinking that
 after the first submission occurs, if we can view it, then we know
 that person's results and they are not anonymous since we can tell
 that everyone else hasn't filled out the survey.  And by induction,
 this applies to every new submission.  Can we do better?

There are a number of papers and works on de-anonymizing data. Do some 
searching.

The bottom line is that either you trust the people doing the survey to protect 
the respondents, or you don't. If they do not actively protect the respondents, 
there's no anonymity no matter what you do.

 
 2) How much can we achieve with crypto?

Not nearly as much as you think.

 
 3) How much guarantee can we give the end user, who may not write the
 client software himself?  This is similar to a problem in e-voting.

I have to blink with a bit of incredulity at this statement, particularly the 
phrase "... who may not write the client software himself." This seems to 
imply that you think writing the software yourself is either necessary or 
sufficient. For the necessary part, I'll say that if security requires running 
only software you wrote yourself, then it's all over; just hang it up now. 
Even people who can (like you and me) have better things to do with their time. 
Saying that people have to write their own software is saying you want to end 
innovation. It's like saying that we'll make do with living in caves; not 
precisely that, but very like it. As for the sufficiency part, the major 
problem is that the standards one needs in order to interoperate are themselves 
the major threat.

Any philosophy of security that limits users to software they write themselves 
not only makes security an elite activity practiced by the very, very few, but 
also limits that elite to the activities they care enough about to actually 
write the software for.

Lastly, let's also note that you and I have each assumed that said software 
will be bug-free. That may be overly simplistic.

Jon


All the cryptography in the world isn't going to protect survey anonymity if 
the people running the survey don't protect their respondents.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-17 Thread Jon Callas
Let's get back to the matter at hand.

I believe that there's another principle, which is that he who proposes, 
disposes. I'll repeat -- it's up to the person who says there was/is a back 
door to find it.

Searching the history for stupid-ass bugs is carrying their paranoid water. 
*Finding* a bug is not only carrying their water, but accusing someone of being 
underhanded. The difference between a stupid bug and a back door is intent. By 
calling a bug a back door, or even entertaining the idea, we're accusing that 
coder of being underhanded. You're doing precisely what the person throwing the 
paranoia around wants: you're sowing fear and paranoia. 

Of course there are stupid bugs in the IPsec code. There are stupid bugs in 
every large system. It is difficult to assign intent to bugs, though, as that 
ends up being a discussion of the person.

I also think that in this case, the accusation is laughable. I'll be happy to 
laugh in the face of anyone who makes it, in person.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-15 Thread Jon Callas
Me, I figure that extraordinary claims require a smidgen of actual evidence.

It's really easy to say that a decade ago, system foo had back doors snuck into 
it. But -- what were the back doors? A bum random number generator? Keygen that 
made RSA keys with a known, fixed prime? What?
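For anyone who hasn't seen why a fixed prime is so deadly, here's a toy 
illustration of my own, with small, well-known primes standing in for 1024-bit 
ones; the gcd attack is just as instant at real key sizes:

    # Why "keygen with a known, fixed prime" is a catastrophic back door:
    # the gcd of any two victims' public moduli recovers the planted prime.
    import math

    P_PLANTED = 104729                  # the fixed prime baked into keygen
    q1, q2 = 1299709, 15485863          # per-user "random" second primes

    n1 = P_PLANTED * q1                 # victim 1's public modulus
    n2 = P_PLANTED * q2                 # victim 2's public modulus

    shared = math.gcd(n1, n2)           # computable from public keys alone
    assert shared == P_PLANTED
    print("n1 =", shared, "*", n1 // shared)
    print("n2 =", shared, "*", n2 // shared)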

My view is that if the claim is merely that there are back doors, without 
saying what they are, there's an obvious reason for that -- if you said that 
flaw X is in module M.c on lines 23-137, someone could actually go look at M.c 
and check. But this way, the slur has been made in a way that is impossible to 
discuss. I think evidence is called for, or failing that, an actual description 
of the flaw.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-15 Thread Jon Callas
Oh, come on.

The summary of this is: "I worked on X_1, X_2, ..., X_n. I used to have a 
clearance but now I don't, therefore what I'm saying is true. You can trust 
me because I've been quoted by the New York Times. Here's what I said."

It is certainly possible that there are back doors somewhere. But this is just 
chaff. It has not yet risen to the level of gossip. It's far more like the 
Cretan paradox than anything else. It's kinda saying: "All government people are 
evil; you can trust me on that because I used to be a government person." 
Riiiight.

I want to see a description of the back door. Part of the reason is that 
without a description we can't even assess whether it is a real back door, as 
opposed to (e.g.) a suboptimal implementation. For example, I can remember 
people saying that a fixed shared secret is a back door. I can empathize with 
frustration over something like that, but even if a fixed shared secret is done 
with ill intent, it's not what I'd call a back door.

Facts. I want facts. Failing facts, I want a *testable* accusation. Failing 
that, I want a specific accusation.

Jon

On Dec 15, 2010, at 1:23 AM, Marsh Ray wrote:

 On 12/15/2010 02:31 AM, Jon Callas wrote:
 But this way,
 the slur has been made in a way that is impossible to discuss. I
 think evidence is called for, or failing that, an actual description
 of the flaw.
 
 
 Hot off the presses. Haven't yet decided how much this counts as 
 information. But he does come closer to naming source files.
 
 - Marsh
 
 
 
 http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd
 
 The OCF was a target for side channel key leaking mechanisms, as well
 as pf (the stateful inspection packet filter), in addition to the
 gigabit Ethernet driver stack for the OpenBSD operating system; all
 of those projects NETSEC donated engineers and equipment for,
 including the first revision of the OCF hardware acceleration
 framework based on the HiFN line of crypto accelerators.
 
 The project involved was the GSA Technical Support Center, a circa
 1999 joint research and development project between the FBI and the
 NSA; the technologies we developed were Multi Level Security controls
 for case collaboration between the NSA and the FBI due to the Posse
 Comitatus Act, although in reality those controls were only there
 for show as the intended facility did in fact host both FBI and NSA
 in the same building.
 
 We were tasked with proposing various methods used to reverse
 engineer smart card technologies, including Piranha techniques for
 stripping organic materials from smart cards and other embedded
 systems used for key material storage, so that the gates could be
 analyzed with Scanning Electron and Scanning Tunneling Microscopy.
 We also developed proposals for distributed brute force key cracking
 systems used for DES/3DES cryptanalysis, in addition to other methods
 for side channel leaking and covert backdoors in firmware-based
 systems.  Some of these projects were spun off into other sub
 projects, JTAG analysis components etc.  I left NETSEC in 2000 to
 start another venture, I had some fairly significant concerns with
 many aspects of these projects, and I was the lead architect for the
 site-to-site VPN project developed for Executive Office for United
 States Attorneys, which was a statically keyed VPN system used at
 235+ US Attorney locations and which later proved to have been
 backdoored by the FBI so that they could recover (potentially) grand
 jury information from various US Attorney sites across the United
 States and abroad.  The person I reported to at EOUSA was Zal Azmi,
 who was later appointed to Chief Information Officer of the FBI by
 George W. Bush, and who was chosen to lead portions of the EOUSA VPN
 project based upon his previous experience with the Marines (prior to
 that, Zal was a mujahideen for Usama bin Laden in their fight against
 the Soviets, he speaks fluent Farsi and worked on various incursions
 with the CIA as a linguist both pre and post 911, prior to his tenure
 at the FBI as CIO and head of the FBI’s Sentinel case management
 system with Lockheed).  After I left NETSEC, I ended up becoming the
 recipient of a FISA-sanctioned investigation, presumably so that I
 would not talk about those various projects; my NDA recently expired
 so I am free to talk about whatever I wish.
 
 Here is one of the articles I was quoted in from the NY Times that
 touches on the encryption export issue:
 
 In reality, the Clinton administration was very quietly working
 behind the scenes to embed backdoors in many areas of technology as a
 counter to their supposed relaxation of the Department of Commerce
 encryption export regulations – and this was all pre-911 stuff as
 well, where the walls between the FBI and DoD were very well
 established, at least in theory.
 

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-15 Thread Jon Callas
 That said, I would not recommend that people write their own crypto, as
 cryptography is hard enough that any kind of fault, glitch, or defect
 can creep in. In turn, this may lead to incidents that promise to be no
 less severe than those arising from a backdoor in the OpenBSD IPsec stack,
 if any.

Perhaps a bit more succinctly, the best way to eavesdrop on someone is to tell 
them that their crypto is broken. 

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


  1   2   >