Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-07-26 Thread james hughes


On Jul 24, 2009, at 9:33 PM, Zooko Wilcox-O'Hearn wrote:

[cross-posted to tahoe-...@allmydata.org and  
cryptogra...@metzdowd.com]


Disclosure:  Cleversafe is to some degree a competitor of my Tahoe-LAFS project.

...
I am tempted to ignore this idea that they are pushing about encryption
being overrated, because they are wrong and it is embarrassing.


and probably patent pending regardless of there being significant amounts
of prior art for their work. One reference is “POTSHARDS: Secure Long-Term
Storage Without Encryption” by Storer, Greenan, and Miller at
http://www.ssrc.ucsc.edu/Papers/storer-usenix07.pdf. Maybe they did include
this in their application. I certainly do not know. They seem to have one
patent (http://tinyurl.com/njq8yo) and 7 pending (http://tinyurl.com/ntpsj9).

...
But I've decided not to ignore it, because people who publicly  
spread this kind of misinformation need to be publicly contradicted,  
lest they confuse others.

...

The trick is cute but, I would argue, largely irrelevant. What follows is a
response to this web page; it can probably be broadened into a criticism of
any system that claims security while also claiming that key management of
some sort is not a necessary evil.


http://dev.cleversafe.org/weblog/?p=111 # Response Part 2:  
Complexities of Key Management


I agree with many of your points. I would like to make a few of my own.
1) If you are already paying the large penalty of Reed-Solomon encoding the
encrypted data, the cost of your secret-sharing scheme is a small additional
cost to bear, agreed. Using the hash to “prove” you have all the pieces is
cute, and it does turn Reed-Solomon into an AONT (a toy sketch of that
packaging step follows these points). I will argue that if you were to do a
Blakley split of a random key and append one portion to each portion of the
encrypted file, you would get similar performance results. I will give you
that your scheme is simpler to describe.


2) In my opinion, key management is more about process than  
cryptography. The whole premise of Shamir and Blakley is that each  
share is independently managed. In your case, they are not. All of the  
pieces are managed by the same people, possibly in the same data  
center, etc. Because of this, some could argue that the encryption has  
little value, not because it is bad crypto, but because the shares are  
not independently controlled. I agree that if someone steals one piece,
they have nothing. They will argue that if someone can steal one piece, it
is feasible to steal the rest.


3) Unless broken drives are degaussed before they are discarded, they must
be considered lost. Because of this, there will be drive loss all the time
(3% per year according to several papers). As long as all N pieces are not
on the same media, you can actually lose the media, no problem. This extends
to the loss of a server, RAID controller, NAS box, etc., as long as it holds
at most N-1 pieces. But what if you lose N chunks (drives, systems, etc.)
over time? Are you sure you have not lost control of someone’s data? Have
you tracked what was on each and every lost drive? What is your process when
you do a technology refresh and retire a complete configuration? If media
destruction is still necessary, will the resulting operational process really
be any easier or safer than if the data were just split?


4) What do you do if you believe your system has been compromised by a  
hacker? Could they have read N pieces? Could they have erased the logs?


5) I also suggest that there is other prior art out there for this kind of
storage system. See the paper “POTSHARDS: Secure Long-Term Storage Without
Encryption” by Storer, Greenan, and Miller at
http://www.ssrc.ucsc.edu/Papers/storer-usenix07.pdf, which covers the same
space and has a good set of references to other systems.
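
Referring back to point 1: below is a toy sketch, in Python, of the kind of
hash-based packaging that turns dispersal into an AONT, in the spirit of
Rivest's package transform. The SHA-256 counter-mode keystream, the 32-byte
key, and the function names are illustrative choices of mine, not anything
Cleversafe specifies, and the Reed-Solomon dispersal of the resulting package
is elided.

import hashlib, itertools, os

def _keystream(key, length):
    # throwaway SHA-256-in-counter-mode keystream, for illustration only
    out = bytearray()
    for ctr in itertools.count():
        out.extend(hashlib.sha256(key + ctr.to_bytes(8, "big")).digest())
        if len(out) >= length:
            return bytes(out[:length])

def aont_package(data):
    k = os.urandom(32)                          # random one-time key
    c = bytes(a ^ b for a, b in zip(data, _keystream(k, len(data))))
    d = bytes(a ^ b for a, b in zip(k, hashlib.sha256(c).digest()))
    return c + d   # without *all* of c you cannot recover k, hence not the data

def aont_unpackage(package):
    c, d = package[:-32], package[-32:]
    k = bytes(a ^ b for a, b in zip(d, hashlib.sha256(c).digest()))
    return bytes(a ^ b for a, b in zip(c, _keystream(k, len(c))))

The package is what would then be Reed-Solomon encoded and dispersed;
appending a Blakley (or Shamir) share of a random key to each dispersed piece
of a conventionally encrypted file buys roughly the same all-or-nothing
property, which is the comparison I am making in point 1.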


My final comment is that you raised the bar, yes. I will argue that you did
not make the case that key management is not needed. Secrets are still needed
to get past the residual problems described in these comments. Keys are small
secrets that can be secured at lower cost than securing the entire bulk of
the data. Your system requires the bulk of the data to be protected, and
thus, in the long run, does not offer the operational efficiency that simple
bulk encryption with traditional key management provides.


Jim




Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-07-26 Thread Jerry Leichter

On Jul 26, 2009, at 12:11 AM, james hughes wrote:



On Jul 24, 2009, at 9:33 PM, Zooko Wilcox-O'Hearn wrote:

[cross-posted to tahoe-...@allmydata.org and cryptography@metzdowd.com]


Disclosure:  Cleversafe is to some degree a competitor of my Tahoe-LAFS project.

...
I am tempted to ignore this idea that they are pushing about encryption
being overrated, because they are wrong and it is embarrassing.


The trick is cute but, I would argue, largely irrelevant. What follows is a
response to this web page; it can probably be broadened into a criticism of
any system that claims security while also claiming that key management of
some sort is not a necessary evil.

It seems to me there's a much simpler critique.  The Cleversafe approach -
which is not without its nice points - solves the key management problem in
exactly the same way that some version of Windows solved the frequent
General Protection Fault crashes problem (by eliminating the error message).


The key management problem comes down to:  I have encrypted data  
stored somewhere (where we assume attackers can access it, but not  
make use of it without the key).  To make that data meaningful, I need  
to be able to locate the key appropriate to that data.  What's a key?   
It's some private information.  In Cleversafe's approach, I have data  
stored in pieces all over the place.  To get at it, I need to know  
where the pieces of some data are.  That information has to be secret,  
since anyone who has access to it can do the same computation and  
recover the data just as I can.


Alternatively, I can rely not on the secrecy of that information, but  
on the discretion of those who hold the pieces.  OK, but I could have  
done that with a simpler technique:  Encrypt the data conventionally,  
then split the key among the trusted holders.  That's a tiny, and more  
to the point, *fixed* overhead beyond the size of the data, which will  
always beat the cleverest Reed-Solomon or erasure coding.  (It also  
has - if I use an appropriate mode - such nice features as random  
access to small parts of the data without the need to decrypt the  
whole thing first.)
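
To make the fixed-overhead claim concrete, here is a minimal k-of-n Shamir
split of a 256-bit key over a prime field, in Python, as a sketch of what I
mean by splitting the key among the trusted holders. The prime, the share
encoding, and the function names are illustrative choices of mine; the
conventional encryption of the bulk data is assumed to happen separately.

import secrets

P = 2**521 - 1   # a Mersenne prime comfortably larger than any 256-bit key

def split_key(key, k, n):
    # shares (x, f(x)) of a random degree-(k-1) polynomial with f(0) = key
    secret = int.from_bytes(key, "big")
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_key(shares, key_len=32):
    # Lagrange interpolation at x = 0 from any k of the shares
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret.to_bytes(key_len, "big")

Each holder keeps one small (x, y) pair, a fixed overhead no matter how large
the encrypted data grows; any k of them reconstruct the key, and fewer reveal
nothing.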


Granted, Cleversafe has other nice features.  But other than changing the
"key management problem" to the "secret information needed to get at the
data, which won't be used as a crypto key" problem, I don't see how they've
actually *solved* anything.


Further:  If I'm only encrypting stuff for myself, there's little  
reason to use multiple keys.  The key management problem becomes  
interesting when there is different encrypted data with different  
access rights for different groups of users.  It's beyond me how  
Cleversafe's approach makes this easier - or harder.

-- Jerry



Re: XML signature HMAC truncation authentication bypass

2009-07-26 Thread Peter Gutmann
Jon Callas j...@callas.org writes:
On Jul 17, 2009, at 8:39 PM, Peter Gutmann wrote:
 PGP Desktop 9 uses as its default an iteration count of four
 million (!!) for its password hashing, which looks like a DoS to
 anything that does sanity-checking of input.

That's precisely what it is -- a denial of service to password crackers.

In that case why not use a billion iterations (or at least bytes of output)?
That would really slow down attackers.

In the implementation, we upped the default because of more password
cracking, but also added a twist. We time how many iterations take 1/10 of a
second on the computer you're using, and use that value. The goal is to have
the iteration count scale as computers get faster without having to make
software changes.

Where this falls apart completely is when there are asymmetric capabilities
across sender and receiver.  Having an embedded device suspend (near) real-time
processing while it iterates away at something generated on a multicore 3GHz
desktop PC isn't really an option in a production environment (the actual
diagnosis was "messages generated by PGP Desktop cause our devices to crash"
because they were triggering a deadman timer that soft-restarted them; it
wasn't until they used an implementation that sanity-checked input values that
they realised what the problem was).
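
For reference, the calibration Jon describes amounts to something like the
following sketch, with the standard library's PBKDF2-HMAC-SHA256 standing in
for PGP's actual S2K construction (which differs) and with an arbitrary probe
count and target time:

import hashlib, time

def calibrate_iterations(target_seconds=0.1, probe=10000):
    # time a fixed probe, then scale to hit the target on *this* machine
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"probe passphrase", b"probe salt", probe)
    elapsed = time.perf_counter() - start
    return max(probe, int(probe * target_seconds / elapsed))

Run that on a 3GHz desktop and on an embedded CPU and the resulting counts
differ by orders of magnitude, which is exactly the asymmetry problem.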

Peter.



The latest Flash vulnerability and monoculture

2009-07-26 Thread Perry E. Metzger

This is purely about security, not about crypto.

For those of you not in the know, there is an exploitable hole in
Adobe's Flash right now, and there is no fix available yet:

http://www.adobe.com/support/security/advisories/apsa09-03.html

(See also:
http://www.us-cert.gov/cas/techalerts/TA09-204A.html )

The responsible thing would be to advise everyone to turn off flash
until Adobe comes up with a fixed binary, but of course, if they did,
large numbers of companies -- from the obvious Youtube and Hulu to the
less obvious business down the street that uses Flash to handle their
video catalog -- would be screwed. (Instead, of course, just about
everyone out there with a web browser is screwed.)

This highlights an unfortunate instance of monoculture -- nearly
everyone on the internet uses Flash for nearly all the video they watch,
so just about everyone in the world is using a binary module from a
single vendor day in, day out.

This is a bit of a wakeup call -- the use of standards based
technologies to deliver content to users would likely have led to
multiple implementations being in wide use, which would at least
mitigate such problems.

It would also help quite a bit if we had better encapsulation
technology. Binary plug-ins for browsers are generally a bad idea --
having things like video players in separate processes where operating
system facilities can be used to cage them more effectively would also
help to mitigate damage.

(By the way, for those that aren't aware, because recent versions of
Acrobat Reader include the ability for PDFs to embed Flash, you are
better off reading PDFs with third party PDF readers.)

Perry



Re: Fast MAC algorithms?

2009-07-26 Thread James A. Donald

From: Nicolas Williams nicolas.willi...@sun.com

For example, many people use arcfour in SSHv2 over AES because arcfour
is faster than AES.


Joseph Ashwood wrote:
I would argue that they use it because they are stupid. ARCFOUR should 
have been retired well over a decade ago, it is weak, it meets no 
reasonable security requirements,


No one can break arcfour used correctly - unfortunately, it is tricky to 
use it correctly.




ADMIN: slight list hiccup today

2009-07-26 Thread Perry E. Metzger

If you submitted a post to the list for about an hour this afternoon
(as measured by the US/Eastern timezone), it probably bounced. There was
a brief period where email on the list server was misconfigured. My
apologies, and the problem has been fixed.

Perry



Re: Fast MAC algorithms?

2009-07-26 Thread james hughes


On Jul 27, 2009, at 4:50 AM, James A. Donald wrote:


From: Nicolas Williams nicolas.willi...@sun.com
For example, many people use arcfour in SSHv2 over AES because arcfour is
faster than AES.


Joseph Ashwood wrote:
I would argue that they use it because they are stupid. ARCFOUR  
should have been retired well over a decade ago, it is weak, it  
meets no reasonable security requirements,


No one can break arcfour used correctly - unfortunately, it is  
tricky to use it correctly.


RC4 is broken when used as intended. The output has a statistical bias and
can be distinguished:
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/FluhrerMcgrew.pdf
and there is an exceptional bias in the second byte:
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/bc_rc4.ps
The latter is the basis for breaking WEP:
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/wep_attack.ps
These are not attacks on a reduced version of the algorithm; they apply to
the full algorithm.


If you take these into consideration, can it be used correctly? I guess
tossing the first few words of keystream gets rid of the exceptional bias,
and maybe changing the key often limits the statistical bias? Is this what
you mean by "used correctly"?
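
For concreteness, the "toss the first few words" part is easy enough to
state; below is a toy RC4-drop[n] keystream generator in Python. The drop of
1536 bytes is just a commonly cited figure, not a recommendation, and this
does nothing about the longer-term distinguishing bias or the need to rekey.

def rc4_keystream(key, drop=1536):
    # key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation algorithm (PRGA), discarding the first `drop` bytes
    i = j = 0
    produced = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
        produced += 1
        if produced > drop:
            yield out

XOR the plaintext against the generator's output; whether that, plus frequent
rekeying, adds up to "used correctly" is exactly my question.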




Re: XML signature HMAC truncation authentication bypass

2009-07-26 Thread Jon Callas
Where this falls apart completely is when there are asymmetric capabilities
across sender and receiver.


You are of course correct, Peter, but are you saying that we shouldn't  
do anything?


I don't believe that we should roll over and die. We should fight  
back, even if the advantage is to the attacker.




Having an embedded device suspend (near) real-time processing while it
iterates away at something generated on a multicore 3GHz desktop PC isn't
really an option in a production environment (the actual diagnosis was
"messages generated by PGP Desktop cause our devices to crash" because they
were triggering a deadman timer that soft-restarted them; it wasn't until
they used an implementation that sanity-checked input values that they
realised what the problem was).


You are wrong about this.

*Messages* don't have this property, so long as they were encrypted to  
a public key. It is unlocking the *key* that has this problem.


That problem *only* exists when you import a key from a fast client into a
slow client. That problem can be fixed either through some smart software
(look at the iteration count and, if it's higher than you like, change it
the next time you use the key), or the user can do it manually: set your
passphrase once to the same thing it used to be.


Jon



Re: The latest Flash vulnerability and monoculture

2009-07-26 Thread Jerry Leichter

On Jul 26, 2009, at 2:27 PM, Perry E. Metzger wrote:

...[T]here is an exploitable hole in Adobe's Flash right now, and there is
no fix available yet

This highlights an unfortunate instance of monoculture -- nearly everyone on
the internet uses Flash for nearly all the video they watch, so just about
everyone in the world is using a binary module from a single vendor day in,
day out.

This is a bit of a wakeup call -- the use of standards based
technologies to deliver content to users would likely have led to
multiple implementations being in wide use, which would at least
mitigate such problems.
While I agree with the sentiment and the theory, I'm not sure that it  
really works that way.  How many actual implementations of typical  
protocols are there?  With open source, once there's a decent  
implementation, there's little
incentive for anyone to start from scratch on an independent one.  Why  
not just improve the one that's already there?


One way or another, a single implementation usually wins out in the OSS
community.  Even if along the way a competition - based on code size or
speed or whatever - breaks out between two implementations, in the long run
someone usually takes the best from both and produces the ultimate winner.


So while standard, openly defined protocols *make it possible* for  
multiple OSS implementations to thrive, they certainly don't  
*guarantee* it, and in many cases that's just not what we end up with.


In fact, the scenario most likely to produce multiple *usable*  
implementations is probably:  An open protocol, and multiple *closed  
source* competing implementations.  As an example, not of a protocol,  
but of another kind of software - consider C compilers.  There  
continues to be a market for proprietary C compilers, and quite a few  
of them exist.  In the OSS world, gcc dominates.  (Perhaps a new LLVM-based
compiler will displace it - though more likely gcc will just absorb LLVM as
an alternate back end.  That would hardly leave behind all the gcc bugs.)


In the hardware world, one is typically very leery of buying from a
sole-source supplier.  It's common to require that the vendor who developed
some new chip license someone else to build the thing, too - just in case.
(Of course, if you buy a couple of hundred chips a year from Intel, you're
not going to have much luck getting them to work with you.  But the *big*
buyers definitely force second sourcing when they can.)  It would be nice if
Flash users told Adobe "find someone to do another implementation or we stop
using Flash."  But since the space of Flash users has two components - those
who *produce* Flash, who generally won't care about this; and those who
merely view it - it's difficult to generate such pressures.  The Flash
generators don't have any reason to care about this, and the users of Flash
files - who pay nothing - have little leverage unless they seriously follow
through on a strike plan.

-- Jerry







Re: The latest Flash vulnerability and monoculture

2009-07-26 Thread Perry E. Metzger

Jerry Leichter leich...@lrw.com writes:
 While I agree with the sentiment and the theory, I'm not sure that it
 really works that way.  How many actual implementations of typical
 protocols are there?

I'm aware of at least four TCP/IP implementations in common use, several
common HTTP servers (though there are far more uncommon ones), at least
four or six common web browsers (depending on whether you count the
several that use webkit as a single implementation or not), a half dozen
jpeg libraries, three different opentype implementations, etc., etc.

 One way or another, a single implementation usually wins out in the
 OSS community.

See above -- even counting only open source, we have *many*
implementations. Heck, there are even multiple independent open source
SSL, SSH and PGP implementations.

Perry
