Re: [cryptography] Cryptome’s searing critique of Snowden Inc.

2016-02-14 Thread Kevin W. Wall
(Note: Removed some mailing lists that I am not subscribed to.)

On Sun, Feb 14, 2016 at 5:38 AM, John Young  wrote:
>
> Cryptome's searing critique of Snowden Inc.
>
> http://timshorrock.com/?p=2354

One thing that I'm not quite getting here that perhaps you can
explain. Ms. Natsios made this comment in the partial interview
transcript posted to http://timshorrock.com/?p=2354:

But these are taxpayer-paid documents belonging in the public
domain. What authority does he have to open the spigot where he
is now controlling in a fairly doctrinaire and authoritarian way
what happens to this trove, this cache?…

I am not disputing that the handling of this by Snowden and
others has been rather dubious and somewhat self-serving. However, I would
question whether these documents (legally speaking) "belong in the public
domain" simply because they were paid for by US taxes and have been
leaked in part. It is a fair question whether they _should_ be
regarded in this manner, but I am sure that the USG would dispute
that, since most of these documents were classified Secret or
Top Secret and thus never intended for public viewing. It's not
as if, had we known that these documents existed pre-Snowden disclosure,
we would have had any prayer of getting them released via a
FOIA request, even if there were prior proof of their existence.
After all, if you believed that, you could make a FOIA request
for the missing pages of the PRISM report and obtain them that
way. Yeah, good luck with that.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/ | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Java RNG

2015-12-30 Thread Kevin W. Wall
On Wed, Dec 30, 2015 at 10:24 AM, Givon Zirkind  wrote:
> Does anyone have any thoughts on the randomness of the Java random number
> generator?

You really need to be more specific.  Here are some things to
consider in no particular order:

1) java.util.Random vs. java.security.SecureRandom
The former is not suitable at all for most cryptographic purposes.
2) Which JDK version are you using it with? (Makes a difference because
 of bug fixes and implementation changes in entropy gathering.)
3) If you are referring to SecureRandom, which provider are you intending
to use? The default Sun provider or Bouncy Castle or some other provider?
4) Have you tweaked any of the relevant settings in
$JAVA_HOME/jre/lib/java.security or set -Djava.security.egd?
5) Are you planning on using it with a Java Security Manager? (Hahahahaha!)
6) What's your threat model?
7) Probably a dozen or more questions that I'm forgetting to ask.
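
To make point 1 concrete, here's a minimal sketch of the distinction (assuming a stock Oracle/OpenJDK with its default provider; nothing here is specific to a particular build):

```java
import java.security.SecureRandom;
import java.util.Random;

public class RngDemo {
    public static void main(String[] args) {
        // java.util.Random is a 48-bit linear congruential PRNG; its
        // future outputs are predictable from observed outputs, so it is
        // unsuitable for keys, nonces, session IDs, etc.
        Random weak = new Random();

        // java.security.SecureRandom delegates to the configured provider
        // (e.g., NativePRNG on Linux, which draws from /dev/urandom).
        SecureRandom strong = new SecureRandom();
        byte[] key = new byte[32];
        strong.nextBytes(key);          // suitable for cryptographic use
        System.out.println(key.length); // 32
    }
}
```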

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.


Re: [cryptography] fonts and viruses

2015-12-15 Thread Kevin W. Wall
On Dec 15, 2015 9:49 AM, "Marcus Brinkmann" <
marcus.brinkm...@ruhr-uni-bochum.de> wrote:
>
> I'd start here:
>
>
http://www.cvedetails.com/vulnerability-list/vendor_id-9705/product_id-17354/opec-1/Pango-Pango.html
>
> But if you are looking for specific examples, I don't know any.
>
> What you are looking for is bugs in the font rendering libraries, which
are system dependent.

Googling for
vulnerabilities in font libraries
is also a good starting place.

-kevin
Sent from my Droid; please excuse typos.


[cryptography] Fwd: [SC-L] Silver Bullet: Whitfield Diffie

2015-01-01 Thread Kevin W. Wall
Seems as though this interview might be of interest to those on these
lists. I've not listened to it yet so I don't know how interesting it may
be.

-kevin
P.S. - Happy Gnu Year to all of you.
Sent from my Droid; please excuse typos.
-- Forwarded message --
From: Gary McGraw g...@cigital.com
Date: Jan 1, 2015 9:44 AM
Subject: [SC-L] Silver Bullet: Whitfield Diffie
To: Secure Code Mailing List s...@securecoding.org

hi sc-l,

Merry New Year to you all!!

Episode 105 of Silver Bullet is an interview with Whitfield Diffie.  Whit
co-invented PKI among other things.  We have an in depth talk about crypto,
computation, LISP, AI, quantum key distro, and more

http://bit.ly/SB-diffie

As always, your feedback on Silver Bullet is welcome.

gem

company www.cigital.com
blog www.cigital.com/justiceleague
book www.swsec.com



___
Secure Coding mailing list (SC-L) s...@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [cryptography] Browser JS (client side) crypto FUD

2014-07-27 Thread Kevin W. Wall
[Note: Dropped cypherpunks list as I'm not subscribed to that list.]

On Sat, Jul 26, 2014 at 11:03 AM, Lodewijk andré de la porte
l...@odewijk.nl wrote:
 http://matasano.com/articles/javascript-cryptography/

 Is surprisingly often passed around as if it is the end-all to the idea of
 client side JS crypto.

 TL;DR:

I don't see how you can claim that it was TL;DR, especially when you
put in as much time as you apparently did in your almost blow-by-blow
reply. It was a mere 9 pages if you printed it out, and had they used
the header formatting normally used when presenting white papers, I
would guess it would come out to no more than 5 or 6 pages.

 It's a fantastic load of horse crap, mixed in with some extremely
 generalized cryptography issues that most people never thought about before
 that do not harm JS crypto at all.

 I'm not sure why the guy wrote it. Maybe he's NSA motivated? Maybe he's
 worked a lot on secure systems and this just gives him the creeps? Maybe
 he's the kind of guy that thinks dashJS/dash dynamic scripted languages
 are not a real languages?

Really? You're going to go there and imply he's an NSA shill? That's pretty
unprofessional.


 Somebody, please, give me something to say against people that claim JS
 client side crypto can just never work!

I can't do that, because I wouldn't claim that it can never work. I could
see it being useful if used correctly in the right context, by which I mean
as part of a defense-in-depth approach. But if one is referring to moving
all the crypto to the client side, I think that would generally be a huge
mistake.

 -
 Aside from that it's, well, fundamentally moronic to claim that something is
 harmful when you actually means it does nothing, it's also just (almost!)
 never true that no attacks are prevented.

 But, let's go with the flow of the article. Rants won't really settle
 arguments.

 Two example usages are given.

 The first is client-side hashing of a password, so that it's never sent in
 the clear. This is so legitimate it nearly makes me drop my hat, but, the
 author decides to use HMAC-SHA1 instead of SHA2 for reasons that are fully
 beyond me. Perhaps just trying to make things less secure?

I think it more likely had to do with the prevalence of SHA1. When was
this written, anyway? The only date that I saw was 2008, in reference
to browser maturity, where it stated:

   Check back in 10 years when the majority of people aren't still running
   browsers from 2008.

(Of course that's still largely true today and will remain so until all
those unsupported WinXP systems get replaced.)

So assume HMAC-SHA2 here if you like. I don't think that changes things
much. But I think the reason for the HMAC was because you clearly want
a keyed hash where you are hashing a nonce in the sort of challenge-response
authN system that the author is describing.

But if the goal is to build something like SRP, it would be much better to
build that into the HTTP specification so that browsers and web servers
could support it directly, similar to how they do with HTTP Digest
Authentication.
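
For illustration only, the keyed-hash-of-a-nonce shape described above might look roughly like this on the client side, with HMAC-SHA256 swapped in for the article's HMAC-SHA1 as suggested. This is a hypothetical sketch, not the article's scheme; the method name `respond` and the key handling are made up for the example:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class ChallengeResponse {
    // Answer a server-supplied nonce with HMAC(key, nonce), so the
    // shared secret itself is never sent over the wire.
    static byte[] respond(byte[] sharedKey, String nonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        return mac.doFinal(nonce.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // Demo values only; a real key would come from a KDF over the
        // password plus a server-side salt.
        byte[] demoKey = "not-a-real-key-just-a-demo"
                .getBytes(StandardCharsets.UTF_8);
        byte[] proof = respond(demoKey, "server-nonce-12345");
        System.out.println(proof.length); // 32 (HMAC-SHA256 output size)
    }
}
```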

 The second is using AES keys to client side encrypt. The author must've
 thought he was being helpful when he imagined the scheme for this. Or maybe
 he was drunk. So you generate an AES key for each note, send it to the
 user's browser to store locally, forget the key, and let the user wrap and
 unwrap their data.. Somehow trusting the transport layer is all back in
 vogue. The only key-generation problem in JS is entropy, which is a problem
 everywhere tbh. If you really want to ensure entropy, send a random data
 blob and XOR it with whatever client-side best-shot at randomness. Whatever.

 The author bluntheadedly claims They will both fail to secure users. In
 principle I agree, his methods sucked balls. He, however, blames it on JS.
 Okay.. Let's go on.

 REALLY? WHY?
 For several reasons, including the following:
 1 Secure delivery of Javascript to browsers is a chicken-egg problem.
 2 Browser Javascript is hostile to cryptography.
 3 The view-source transparency of Javascript is illusory.

 Until those problems are fixed, Javascript isn't a serious crypto research
 environment, and suffers for it.

 (points numbered for pointwise addressing)

 1 - Yeah. Duh. What do you think of delivering anything client side? There's
 the whole SSL infrastructure, if that doesn't cut it for you, well, welcome
 to the Internet. (I suggest the next article is about how the Internet is
 fundamentally flawed.) I would suggest, however, that once your delivery
 pathway is exploited you're fundamentally screwed in every way. You can't
 communicate anything, you can't authenticate anyone, you really can't do
 anything! So let's leave out the Javascript part of this point, and just
 do whatever we're already doing to alleviate this issue.

Well, it's really more than that and the problem goes beyond just JS 

Re: [cryptography] Best practices for paranoid secret buffers

2014-05-07 Thread Kevin W. Wall
On Wed, May 7, 2014 at 8:15 AM, Jeffrey Walton noloa...@gmail.com wrote:
 On Tue, May 6, 2014 at 11:56 PM, Tony Arcieri basc...@gmail.com wrote:
 Can anyone point me at some best practices for implementing buffer types for
 storing secrets?

 There are the general coding rules at cryptocoding.net for example, that say
 you should use unsigned bytes and zero memory when you're done, but I'm more
 curious about specific strategies, like:

 - malloc/free + separate process for crypto
 I think this is a good idea. I seem to recall the new FIPS 140 will
 have some language for it. I also seem to recall something about
 Microsoft's CryptNG, but I don't recall the details.

 - malloc/free + mlock/munlock + secure zeroing
 On Microsoft platforms, you have `SecureZeroMemory`
 (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366877(v=vs.85).aspx).
 It is guaranteed *not* to be removed by the optimizer. On Linux, you
 have `bzero`, but I'm not sure about any guarantees. On OpenSSL, you
 have OpenSSL_cleanse. OpenSSL_cleanse is most acrobatic of the three.

 - mmap/munmap (+ mlock/munlock)
 Keeping secrets out of the page file or swap file can be tricky. VMs
 can be trickier.

 Should finalizers be explicit or implicit? (or should an implicit finalizer
 try to make sure buffers are finalized if you don't do it yourself?)
 Not all languages have finalizers.

 Java has finalizers but tells you to put secrets in a char[] or byte[]
 so you can overwrite them manually: See, for example,
 http://docs.oracle.com/javase/1.4.2/docs/guide/security/jce/JCERefGuide.html#PBEEx
 (I think that link may be dead now).

Right; in Java you can't count on when (or whether) finalizers will be
called (they aren't like DTORs in C++), so you'd best overwrite secrets
immediately, probably in a 'finally' block to make sure it is done.
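
That pattern looks roughly like this (the method name `useSecret` is illustrative, not from any API):

```java
import java.util.Arrays;

public class SecretWipe {
    // Keep the secret in a char[] (not a String, which is immutable and
    // may linger on the heap until GC), and overwrite it in a finally
    // block rather than waiting for a finalizer that may never run.
    static void useSecret(char[] password) {
        try {
            // ... derive a key, authenticate, etc. ...
        } finally {
            Arrays.fill(password, '\0'); // manual scrub, per the JCE guide
        }
    }

    public static void main(String[] args) {
        char[] pw = "hunter2".toCharArray();
        useSecret(pw);
        System.out.println((int) pw[0]); // 0
    }
}
```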

Also, in Java, I suspect that you have to beware of JIT optimizers like
HotSpot. Disabling HotSpot or any other JIT on the server side is just not
going to happen, and AFAIK (I would love to be shown I'm ignorant here) you
can't disable JIT optimization for just a few classes. [If you can, someone
*please* tell me how.]

[snip]

 Are paranoid buffers worth the effort? Are the threats they'd potentially
 mitigate realistic? Are there too many other things that can go wrong (e.g.
 rewindable VMs) for this to matter?
 I think they are worth the effort. Target's data breach was the result
 of (among others): memory scraping malware. At minimum, they cost next
 to nothing.

 You also have wrapping. That is, a buffer get a quick dose of XOR to
 mask the secrets while in memory but not in use.

 .Net's SecureString uses wrapping
 (http://msdn.microsoft.com/en-us/library/system.security.securestring(v=vs.80).aspx),
 and NIST has a key wrap for symmetric encryption keys
 (http://csrc.nist.gov/groups/ST/toolkit/documents/kms/key-wrap.pdf).

Yes, in Java, the Cipher class supports key wrapping, at least for AES.
I've never tried it for anything else like DESede.
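
For the record, a minimal sketch of that, assuming the default SunJCE provider (which registers the "AESWrap" transformation, i.e., RFC 3394 key wrap):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.Key;
import java.util.Arrays;

public class KeyWrapDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey kek = kg.generateKey();  // key-encrypting key
        SecretKey data = kg.generateKey(); // key to protect

        // Wrap the data key under the KEK, then unwrap and compare.
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.WRAP_MODE, kek);
        byte[] wrapped = c.wrap(data);

        c.init(Cipher.UNWRAP_MODE, kek);
        Key unwrapped = c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
        System.out.println(
            Arrays.equals(data.getEncoded(), unwrapped.getEncoded())); // true
    }
}
```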

 Maybe the later would have helped with Heartbleed, too... who knows.

Perhaps; it's doubtful that it would have hurt unless it was done in some
way that would introduce some sort of blatant timing side-channel
attack, which seems very unlikely if you always do it in the same place and
in the same manner regardless.

However, I don't think it's a panacea. Didn't someone have an
attack where they were able to reconstruct AES encryption keys
by recovering some fraction of the S-box values? I thought that
was either Felten, et al, Cold Boot attack or something that
was discussed in the literature around that time. Maybe I'm
just blabbering here since I can barely remember what I had
for lunch two days ago much less recall details of papers that
I've read from 5 or 6 years ago. Anyhow, I'm sure someone
on this list knows the details and I probably have it all wrong
anyway.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.


Re: [cryptography] question about heartbleed on Linux

2014-04-10 Thread Kevin W. Wall
On Thu, Apr 10, 2014 at 1:09 PM, Scott G. Kelly sc...@hyperthought.com wrote:
 A friend and I were discussing this. If the memory management is lazy
 (doesn't clear on page allocation/free), and if processes don't clear their
 own memory, I wondered if heartbleed would expose anything. My friend thinks
 modern operating systems clear memory to prevent inter-process data
 leakage. Of course, I agree that this is security goodness, but I wonder if,
 in the name of performance, this is optional.

 I'm poking around in linux memory management code in between other tasks,
 but I'll bet somebody here knows the answer. Anyone?

Last I remember (and this was a long time ago, 10+ years, so
things may have changed), the heap managed by malloc / free
generally does not automatically clear the freed or newly allocated
memory using something like memset by default. That is up to
the application. Usually that is done by the application calling calloc()
rather than malloc() when requesting memory from the heap. There may
also be some explicit alternate memory allocation libraries with this
ability (e.g., libmalloc might have it; I'm too lazy to look it up and
it's been a LONG time).

Also, the memory allocated on the stack (e.g., local variables and
function arguments) is usually not cleared before use, although I
suppose there could be some compilers that might / could do that.

When memory is returned to the operating system, things may be
different because a different process could grab that memory
segment. So, in most of the cases that I've seen, it is considered
good practice for the OS to clear memory whenever the kernel
maps a memory page into user address space. I believe that
Linux does this, but I've never done kernel programming in Linux,
only on AT&T SVR[2,3,4] UNIX. At the time, SVR4 did it, but it was
inconsistent in that not all of the kernel used the same memory
allocation routines. (The kernel itself didn't generally clear memory
for its own use, as it already had access to the whole memory
address space.)

I'm not sure if that answers your question or not. If not, well, like I
said, it's been a long time since I've written C/C++ programs and even
longer since I've done any serious kernel work.

-kevin


Re: [cryptography] NSA Molecular Nanotechnology hardware trojan

2014-01-06 Thread Kevin W. Wall
On Jan 6, 2014 10:29 AM, Krassimir Tzvetanov mailli...@krassi.biz wrote:

 Guys, are you trying to kill this list as well?

 Can you, please, move this discussion to the sci-fi or theory of
conspiracy _forums_.

Indeed; let's not feed the trolls!

-kevin
Sent from my Droid; please excuse typos.



Re: [cryptography] To Protect and Infect Slides

2014-01-05 Thread Kevin W. Wall
On Tue, Dec 31, 2013 at 3:13 PM, Jacob Appelbaum ja...@appelbaum.netwrote:

 Kevin W. Wall:
  On Tue, Dec 31, 2013 at 3:10 PM, John Young j...@pipeline.com wrote:
 
  30c3 slides from Jacob Appelbaum:
 
  http://cryptome.org/2013/12/appelbaum-30c3.pdf (3.8MB)
 
 
  And you can find his actual prez here:
  https://www.youtube.com/watch?v=b0w36GAyZIA
 
  Worth the hour, although I'm sure your blood
  pressure will go up a few points.
 

 I'm also happy to answer questions in discussion form about the content
 of the talk and so on. I believe we've now released quite a lot of
 useful information that is deeply in the public interest.


Jacob,

Okay, here's a question for you that I hope you
can answer. Unfortunately, it may be a little OT
for this list, so I apologize for that in advance.

In your talk, you mentioned the interdiction
that the NSA was using on laptops ordered online.

I'm assuming it would be too expensive and of little
return for them to do that on all laptops ordered
online, so likely they are only doing this for
certain targeted individuals.

If indeed that is the case, my question is, do you
have any idea of how common this interdiction
practice is and how they pull it off? Specifically,
with respect to the how part, I mean how do they
learn of a person of interest ordering a new laptop
online to begin with? If it is via the POI's already
compromised system, that is one thing, but if they
are doing this via snooping on all the orders of all
vendors who handle online laptop orders, that is
much more disturbing.

Informed speculation is okay as well, although we
would appreciate you stating it as such.

Thanks in advance for your response,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.


Re: [cryptography] To Protect and Infect Slides

2013-12-31 Thread Kevin W. Wall
On Tue, Dec 31, 2013 at 3:10 PM, John Young j...@pipeline.com wrote:

 30c3 slides from Jacob Appelbaum:

 http://cryptome.org/2013/12/appelbaum-30c3.pdf (3.8MB)


And you can find his actual prez here:
https://www.youtube.com/watch?v=b0w36GAyZIA

Worth the hour, although I'm sure your blood
pressure will go up a few points.

-kevin

-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.


Re: [cryptography] Password Blacklist that includes Adobe's Motherload?

2013-11-14 Thread Kevin W. Wall
On Thu, Nov 14, 2013 at 6:07 PM, Patrick Mylund Nielsen
cryptogra...@patrickmylund.com wrote:
 On Thu, Nov 14, 2013 at 5:57 PM, Ben Laurie b...@links.org wrote:

 On 14 November 2013 03:29, shawn wilson ag4ve...@gmail.com wrote:
  This is the only thing I've seen (haven't really looked):
  http://stricture-group.com/files/adobe-top100.txt

 I have to ask: snoopy1 more popular than snoopy? wtf?


 Probably people who reuse passwords and are used to sites that require a
 number in the password (or picked their go-to password when signing up for
 a site that did) -- snoopy1 works more often.

The digit is obviously there because of today's password
complexity rules, used by most sites, that demand at least one digit or
3 of the 4 character sets: uppercase, lowercase, digits, and special
characters.

Besides that, (unfortunately) it's a lot easier to change 'snoopy1' to
'snoopy2' and then to 'snoopy3', etc., when your password inevitably has
to change. Plus, it's a lot easier to remember than starting out with
'sn00py' and then going to 'sn11py', 'sn22py', etc. :-)

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
NSA: All your crypto bit are belong to us.


Re: [cryptography] urandom vs random

2013-08-22 Thread Kevin W. Wall
On Fri, Aug 23, 2013 at 12:54 AM, Patrick Pelletier
c...@funwithsoftware.org wrote:

 On 8/22/13 9:40 AM, Nico Williams wrote:

 My suggestion is /dev/urandomN where N is one of 128, 192, or 256, and
 represents the minimum entropy estimate of HW RNG inputs to date to
 /dev/urandomN's pool.  If the pool hasn't received that much entropy
 at read(2) time, then block, else never block and just keep stretching
 that entropy and accepting new entropy as necessary.


 That sounds like the perfect interface!  The existing dichotomy between random
 and urandom (on Linux) is horrible, and it's nice to be able to specify how
 much entropy you are in need of.

Instead of a bunch of additional devices in /dev, they could add support
for the fcntl(2) and ioctl(2) system calls to control it. That would allow
more granular control (although it would not be as convenient from
languages where fcntl and ioctl are not supported, such as the shell or
Java; on second thought, scrap that idea). Of course, as far as
blocking / non-blocking I/O goes, one should be able to change that
behavior with a flag to the open(2) system call; e.g.,

   int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
or for /dev/urandom,
   int fd = open("/dev/urandom", O_RDONLY);  /* i.e., without O_NONBLOCK */

At least that much could be supported from Java if not from the shell.

Then if it is opened not to block, any read(2) request should either
return whatever is available or -1 with errno set to EWOULDBLOCK
when the normal result would be to block because there is not sufficient
entropy. It would be up to the application to repeat the read() attempt
(hopefully sleeping awhile in between) if they haven't read enough
bytes.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] best practices for hostname validation when using JSSE

2013-08-10 Thread Kevin W. Wall
On Fri, Aug 9, 2013 at 3:03 PM, Patrick Pelletier
c...@funwithsoftware.org wrote:
 One thing mentioned in the Most Dangerous Code in the World paper (and
 I've verified experimentally) is that JSSE doesn't validate the hostname
 against the X.509 certificate, so if one uses JSSE naively, one is open to
 man-in-the-middle attacks.  The best solution I've been able to figure out
 is to borrow the hostname validation code from Apache HttpComponents.  But
 I'm curious what other people who use JSSE are doing, and if there's a best
 practice for doing this.

 Apologies if this isn't on-topic for this list; I know you guys mostly
 discuss higher-level issues, rather than APIs.  I already tried asking on
 Stack Overflow, and they said it was off-topic for Stack Overflow:

 http://stackoverflow.com/questions/18139448/how-should-i-do-hostname-validation-when-using-jsse

I recall using HttpsURLConnection and that it supports hostname
verification. I know you said you are not using HTTPS, but somewhere
under the hood HttpsURLConnection is still handling the SSL connection,
retrieving the certificate, and checking the server-side cert for a match
against the subjectDN or subjectAlternativeName attributes.
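
For what it's worth, on Java 7 and later there is also a pure-JSSE option: setting the endpoint identification algorithm to "HTTPS" makes the handshake itself check the peer certificate against the hostname, which is essentially the check HttpsURLConnection performs under the hood. A sketch only (the `connect` helper is illustrative, not from any library):

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.IOException;

public class VerifiedConnect {
    // Open an SSLSocket that validates the server cert *and* the
    // hostname; a mismatch makes startHandshake() throw an
    // SSLHandshakeException instead of silently succeeding.
    static SSLSocket connect(String host, int port) throws IOException {
        SSLSocket s = (SSLSocket)
            SSLSocketFactory.getDefault().createSocket(host, port);
        SSLParameters params = s.getSSLParameters();
        params.setEndpointIdentificationAlgorithm("HTTPS");
        s.setSSLParameters(params);
        s.startHandshake();
        return s;
    }
}
```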

I haven't studied this yet (and may not have time to do so in the near future),
but I figure that this analysis of HttpsUrlConnection might help. Check out:
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/com/sun/net/ssl/HttpsURLConnection.java

If you just search for HostnameVerifier on that page, it should lead you in
the right direction.  If you have a specific question about the code, ping
me off-list and I'll see if I can answer.

HTH,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


[cryptography] Recommendations for glossary of cryptographic terms

2013-07-04 Thread Kevin W. Wall
I am trying to wrap up the writing of the cryptography section
of the new OWASP Dev Guide 2013, and rather than writing all
my own definitions, my thought was to just refer to some good
glossary of cryptographic terms rather than doing all that work
over again (and probably not as well).

Does anyone have any recommendations for one that would
be understandable by most in the development community
who have little or no understanding of cryptography?

At a minimum, the glossary needs to be searchable. Ideally,
it would allow the ability to link to a specific term. There are
quite a few that I've looked at (and obviously even more that
I haven't), but I thought someone might be able to recommend
something that would be suitable for the development community.

Note that I am hoping some consensus will develop rather
than getting 20 different recommendations from 15 different
people, but if you have one that you've created yourself,
don't let that dissuade you from recommending it.

Thanks,
-kevin
P.S.- If there are any takers for reviewing this once I've completed the
 initial draft, please let me know off-list. It's looking like it will be
 somewhere between 12 and 15 pages when I'm finished. You will be
 given appropriate credit.  Thanks!
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


[cryptography] Interesting presentation on CryptDB

2013-04-28 Thread Kevin W. Wall
There is a very interesting presentation at Microsoft Research by MIT
PhD candidate Raluca Ada Popa on CryptDB over at:
http://research.microsoft.com/apps/video/default.aspx?id=178914

CryptDB works as a trusted proxy used on the application side and is
completely transparent to the database and to the application (after some
metadata configuration to identify the sensitive data from the schema).

The presentation runs for an hour 17 minutes but is definitely worth a watch.
CryptDB definitely looks to be a better choice for encrypting sensitive data
than using something like Oracle's or SQL Server's Transparent Data
Encryption (TDE) solutions and it's probably a lot more practical than
expecting application developers to handle the encryption entirely within
their application.

The main website for CryptDB is at:
http://css.csail.mit.edu/cryptdb

There are some papers there that I've not yet had the chance to read,
but this looks really interesting and a very innovative approach. Full
source code is also hosted on GitHub. (URL provided at the main site.)

One of the major things discussed in the presentation is how they've
developed a way with CryptDB to implement order-preserving encryption
in a more or less practical way. OPE does compromise the security, but
they have done it in a way that it doesn't get used unless comparative
queries are run against the encrypted data.

Nothing was said about side-channel attacks, and I expect that there
may very well be some in the implementation, but I didn't see anything
particularly in the design that was a show-stopper in that regard.

Anyhow, I'd be interested in hearing others' opinions on this, especially
since it is a problem that I regularly face when it comes to application
security.

Thanks,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


[cryptography] OT: Skype-Based Malware Forces Computers into Bitcoin Mining

2013-04-17 Thread Kevin W. Wall
You know Bitcoin must have arrived when this is going on.
(For that matter, I even heard Bitcoin mentioned on NPR a few
days ago.)

As reported on IEEE Computer Society's _Computing Now_
news site:
http://www.computer.org/portal/web/news/home/-/blogs/skype-based-malware-forces-computers-into-bitcoin-mining

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


[cryptography] Privacy-Preserving Photo Sharing via crypto

2013-04-12 Thread Kevin W. Wall
http://www.usc.edu/uscnews/newsroom/news_release.php?id=3017

Interesting use of crypto, though there are not a lot of details here. I
haven't checked the USENIX proceedings yet. However, it is somewhat
disturbing that software developed via NSF grants, on the U.S. taxpayer's
dime, can be patented.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] ICIJ's project - comment on cryptography tools

2013-04-09 Thread Kevin W. Wall
Some OT comments to an OT response...

On Mon, Apr 8, 2013 at 8:30 AM, ianG i...@iang.org wrote:
 On 7/04/13 09:38 AM, Nico Williams wrote:
[big snip]
 We've built a house of cards, not so much on the Internet as
 on the web (but not only!).  Web application security is complete
 mess.  And anyways, we build on foundations, but the foundations
 (operating systems) we built on are now enormous and therefore full of
 vulnerabilities.  We're human -fallible-, and our systems reflect this
 -our failures-.

 Yeah, this is the popular explanation -- we're not good enough.

 Let me pose another thought question.  Most of the long termers here
 understand how Skype, SSH and now Bitcoin were constructed.  Peter adds
 iMessage to the list of successful crypto systems.

 Many of us here could make a fair stab at duplicating that in another
 product.  I'd personally have confidence in that statement -- given the
 budget I'd reckon Steve, Jon, Peter, James, and a dozen other frequent
 posters could do that job well, or a similar one.

Sorry, but I agree with Nico on this one. The problem is the brittleness
of our systems. One tiny problem can cause the entire system to
break down and suffer vulnerabilities.  An attacker only has to find
one way in. And to be clear, as bad as developers handle cryptography,
cryptography, even when used poorly, is seldom the weakest link.
No...the problem is that humans just suck at writing secure code...
for that matter, we suck at writing _correct_ code (which often
results in insecure code).

And while I can't comment on Bitcoin or iMessage, I do know that
both Skype and OpenSSH have had their share of vulnerabilities and
probably an order of magnitude or more of non-security related bugs.

As humans, we make lots of mistakes in many other
endeavors, but in many of those cases, the human element
itself is the end recipient / consumer of those systems
and it is a lot more resilient than our computer systems
are to errors. Case in point, see how many typos you can
find in this particular email thread...spelling errors, grammatical
errors, etc. Most of us probably read right through them. I'm
pretty sure that none of those errors made our brain reboot. ;-)
Try the analogous thing with computer code and at best you have
a harmless bug, but often you get a security vulnerability.  So far,
we haven't invented computer systems that work on a Do What I Mean,
Not What I Say basis. Fortunately, the human brain seems to grok DWIMNWIS.
(Google for Cna Yuo Raed Tihs? for one popular example.)

 I therefore suggest the popular explanation doesn't really pass muster.  I
 say we really are good enough.

That depends on what you mean by good enough. I would agree that
most crypto is good enough, but one reason for that is that there
generally are so many more easily exploitable vulnerabilities that there
is little point in bothering with the crypto. For instance, when your
web app is full of XSS and SQLi, why would an attacker try some attack
against TLS? It would be pointless.

On the other hand, if all other vulnerabilities were somehow magically
removed and only the crypto ones remained so that they were indeed
the weakest link, I think the crypto-related exploits would start getting
a lot more play.

 Why did they succeed, as an exception, but we did not, as the general rule?

 The strange names and origins are a possible clue.  I suggest the same
 reason that a couple of bored scientists succeeded in creating a games
 platform that was then turned into a document preparation platform that then
 became a standard OS teaching tool and eventually by many steps is now in
 the hands of most of the planet:

  they did it without interference.

They were in Area 11 (research) and back in the day, that research wasn't
required to be directly applicable.  Today I think something like this would
be rare, at least outside of universities, because there is just too much
pressure to turn everything into product in order to make profits.

 PS: ok, that last comment about Unix requires some mental juggery.  The
 bored scientists did something that they were banned from doing.  At the
 time, ATT was party to a cartel agreement with IBM that reserved computing
 to IBM and networking to ATT.  How quaint!

 This had perverse effect of turning Ritchie & Kerninghams' toy into a skunk

Uh, that would actually be Ritchie and Thompson, but I'm sure you knew that. :)

 works project, in effect allowing everyone to politely ignore it.  Unix
 survived and grew within Bell Labs because ATT could not commercialise it,
 and therefore the project was purely an academic exercise.  Hence, the
 corporate interference was untypically low to non-existent.  Hence, it grew
 in Universities only.

OK, that last part is a bit misleading.  I worked at Bell Labs from
'79-'96, and Unix was used in many of our internal systems, not just as
development platforms but also as operations support systems, call
routing systems, etc. So it was commercialized in a sense. ATT 

Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Kevin W. Wall
On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 [Not replied-to cryptopolitics as I'm not on that list -- jdcc]

Ditto.

 On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?

 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.

 I've been involved in these sorts of questions in various companies that I've 
 worked. Let's look at it coolly and rationally.

 If you make a bunch of devices with keys burned in them, if you *wanted* to 
 retain the keys, you'd have to keep them in some database, protect them, 
 create access  controls and procedures so that only the good guys (to your 
 definition) got them, and so on. It's expensive.

 You're also setting yourself up for a target of blackmail. Once some bad guy 
 learns that they have such a thing, they can blackmail you for the keys they 
 want lest they reveal that the keys even exist. Those bad guys include 
 governments of countries you operate or have suppliers in, mafiosi, etc. 
 Heck, once some good guy knows about it, the temptation to break protocol on 
 who gets keys when will be too great to resist, and blackmail will happen.

 Eventually, so many people know about the keys that it's not a secret. Your 
 company loses its reputation, even among the sort of law-and-order types who 
 think that it's good for *their* country's LEAs to have those keys because 
 they don't want other countries having those keys. Sales plummet. Profits 
 drop. There are civil suits, shareholder suits, and most likely criminal 
 charges in lots of countries (because while it's not a crime to give keys to 
 their LEAs, it's a crime to give them to that other bad country's LEAs). 
 Remember, the only difference between lawful access and espionage is whose 
 jurisdiction it is.

 On the other hand, if you don't retain the keys it doesn't cost you any money 
 and you get to brag about how secure your device is, selling it to customers 
 in and out of governments the world over.

 Make the mental calculation. Which would a sane company do?


All excellent, well articulated points. I guess that means that
RSA Security is an insane company then since that's
pretty much what they did with the SecurID seeds. Inevitably,
it cost them a boatload too. We can only hope that Apple
and others learn from these mistakes.

OTOH, if Apple thought they could make a hefty profit by
selling to LEAs or friendly governments, that might change
the equation enough to tempt them. Of course that's doubtful
though, but stranger things have happened.

-kevin


[cryptography] RSA SecurID breach (was Re: Here's What Law Enforcement Can Recover From A Seized iPhone)

2013-03-28 Thread Kevin W. Wall
Note subject change.

On Thu, Mar 28, 2013 at 9:36 PM, Steven Bellovin s...@cs.columbia.edu wrote:
 All excellent, well articulated points. I guess that means that
 RSA Security is an insane company then since that's
 pretty much what they did with the SecurID seeds.

 Well, we don't really know what RSA stores; it's equally plausible
 that they have a master key and use it to encrypt the device serial
 number to produce the per-device key.  But yes, that's isomorphic.
 However...

 What Jon left out of his excellent analysis is this: what is the
 purpose of having such a database?  For Apple, which pushes a host
 or cloud backup solution, there's a lot less point; if a phone is
 dying, you restore your state onto a new phone.  They simply have no
 reason to need such keys.  With RSA, though, it's a different story.
 They're shipping boxes with hundreds or thousands of tokens to
 customers; these folks need some way to get the per-token keys into
 a database.  How do they do that?  For that matter, how does RSA
 get keys into the devices?  The outside of the devices has a serial
 number; the inside has a key.  How does provisioning work?  It's
 all a lot simpler, for both manufacturing and the customer, if
 the per-device key is a function of a master key and the serial
 number.  You then ship the customer a file with the serial number
 and the per-device key.  When I look at p. 64 of
 ftp://ftp.rsa.com/pub/docs/AM7.0/admin.pdf that sounds like what
 happens: there's a per-token XML file that you have to import
 into your system.

Yes; that's exactly what you do. And RSA has told us that they do
not have a master key used to generate them, but that the seeds are
generated randomly.  They told us that the DB that was snatched
contained the seeds and serial #s. If a master key and serial #
were in themselves sufficient, it wouldn't make much sense for them
to also store the seeds.  Of course, they could have been lying,
but if that's the case, it's an RSA conspiracy theory, because
I've heard the same story from two very independent sources
at RSA who work in separate divisions.
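
Steve's hypothesized provisioning scheme (per-token key as a function of a
master key and the serial number) is easy to sketch with an HMAC standing
in as the PRF. This is purely illustrative: per the above, RSA says the
real seeds were generated randomly, and every key and serial number below
is made up.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical illustration only: per-token seed = PRF(masterKey, serial),
// with HMAC-SHA256 standing in for the PRF. RSA's stated practice was to
// generate seeds randomly, NOT like this.
public class DerivedTokenKey {
    static byte[] deriveSeed(byte[] masterKey, String serialNumber) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(masterKey, "HmacSHA256"));
        return hmac.doFinal(serialNumber.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] master = "demo-master-key-not-for-real-use".getBytes(StandardCharsets.UTF_8);
        byte[] a = deriveSeed(master, "000123456789");
        byte[] b = deriveSeed(master, "000123456789");
        byte[] c = deriveSeed(master, "000123456790");
        System.out.println(Arrays.equals(a, b)); // true: same serial, same seed
        System.out.println(Arrays.equals(a, c)); // false: different serial
    }
}
```

The operational win Steve describes falls out directly: the vendor only
has to escrow one master key, and any token's seed can be re-derived on
demand from its serial number.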

 Translation: at some point in every token's life, RSA has to have
 a database with the keys.  Do they delete it?  Is it available
 to help customers who haven't backed up their own database properly?
 I don't know the answer to those questions; I do claim that they
 at least have a reason, which Apple apparently does not.

Long ago, they apparently deleted it, at least after a short
period of time, once they were confident that the fobs were
delivered and the seeds from the XML file imported.

But then they started to get calls from customers who had
lost the XML file or had their local DBs munged and needed
the seeds to restore usage to their fobs. Ultimately this led
to RSA replacing the customer's fobs (at some cost to the
customer).  RSA claims that they saw an opportunity to
save their customers grief and, according to them, started
offering their SecurID customers a free service of keeping a
backup of their customer's serial #s and seeds. What I was
told is that this was originally part of the contract and the
customer could opt-out of the free backup service if they
desired. But at some point, following numerous contract
revisions they apparently stopped even mentioning this
service in their purchase contracts. (One RSA person
speculated that this was probably because so few customers
had opted-out and all the customers who availed themselves
of the service were so happy their current fobs weren't toast
that someone in sales figured it would just be a good idea
to provide the service to all of their customers.)

Now, as I heard the story, RSA originally did everything
right and they had completely air-gapped their DB and
web interface to it, and their CSRs had to use a manual
swivel-chair process to copy the seed into an email
destined for the customer who needed the seeds for their
fobs.  However, eventually there was pressure from their
customers to speed up the process of seed recovery and
they removed the air gap. The rest is history... a few
well-targeted spear phishing attacks with a 0day Adobe
Flash exploit in an Excel spreadsheet and eventually
we were introduced to the new APT acronym.

 Btw: I've never been convinced that what was stolen from RSA was,
 in fact, keys or master keys.  Consider: when someone logs in
 to a system with an RSA token, they enter a userid, probably a PIN,
 and the code displayed on the token.  This hypothetical database
 or master key maps serial numbers -- not userids, and definitely
 not PINs since RSA wouldn't have those -- to keys.  How does an
 attacker with this database figure out which userid goes with
 which serial number?


Mostly answered above. We were also told that there was
a separate database of customers and SecurID serial #s
that was not air-gapped. What was not clear is whether
or not all individual serial #s were there. It seems unlikely,
but for bulk shipments, RSA usually ships 

Re: [cryptography] Cryptographers win Turing award

2013-03-14 Thread Kevin W. Wall
On Mar 14, 2013 7:52 AM, ianG i...@iang.org wrote:

snip
 ACM Press release is helpful:
 http://www.acm.org/press-room/news-releases/2013/turing-award-12
 Wikipedia is too:
 http://en.wikipedia.org/wiki/Probabilistic_encryption
 better copy of the 1984 article:
 http://groups.csail.mit.edu/cis/pubs/shafi/1984-jcss.pdf

 That article in networkworld is fatally flawed, and thus meets and
exceeds the standard for press commentary.

:-) Ironically,  the NetworkWorld article was the one cited by ACM
TechNews, which was where I first encountered it. I was somewhat surprised
that the URL they cited for their Probabilistic Encryption paper was at
Peking University. I wasn't even able to access it from work because that
domain is blocked because of malware and spam.

-kevin


[cryptography] Recommendations for crypto package for ASP.NET 4.5

2013-03-12 Thread Kevin W. Wall
Hi list,

I'm looking for some crypto package (preferably FOSS) that supports
some sort of authenticated encryption cipher mode (prefer GCM or CCM,
but anything without patent encumbrances will probably do) that will
work for ASP.NET 4.5 out-of-the-box. It can be built from C code if
there is a managed C++ wrapper around it so that the entire code base
is a managed assembly. (Our company no longer permits non-managed
application code in our ASP.NET deployments.)

Ideally, it would also be something simple for developers with little
or no crypto experience to use correctly (e.g., something like NaCL).
I checked and Sodium didn't mention any Windows ports, at least
as of yet, but would it be possible to use its Python port with
ASP.NET? There are Python implementations that apparently work
with the .NET CLR so perhaps those assemblies could be used with
C#??? E.g., http://pythonnet.sourceforge.net/; just not sure
what would be involved in making that work. Or perhaps there
are better options?

The .NET 4.5 framework itself only seems to support ECB, CBC, OFB,
CFB, and CTS modes, but no AE cipher modes. :-(
(
http://msdn.microsoft.com/en-us/library/system.security.cryptography.ciphermode.aspx
)

Thanks for your help,
-kevin


Re: [cryptography] side channel analysis on phones

2013-03-09 Thread Kevin W. Wall
Ian,

Hopefully some more food for thought.  However, given that neither
Android development nor side-channels is where my expertise lies, I
can't guarantee that such food won't cause undue illness. ;-)

On Sat, Mar 9, 2013 at 5:06 AM, ianG i...@iang.org wrote:
 On Mar 8, 2013 5:46 AM, Ethan Heilman eth...@gmail.com
 mailto:eth...@gmail.com wrote:

 It depends what sort of side channel attacks you are worried about
 and what sort of crypt algorithms you are using.

 Sure.  RSA signing is the algorithm.  The side channel is another app that
 is also running on the same phone, and has some ability to measure what else
 is going on.  Although there is sandboxing and so forth in the Android, I'm
 expecting this to be weak, and I'm expecting there to be a way to measure
 the rough CPU / energy consumption, etc, of other apps. Enough to determine
 (for example) the beginning and end of an RSA sig.

For timing / CPU side-channels, why can't you just deal with that by
adding random noise to the timing, e.g., by sleeping a random # of
microseconds, inserting various spin-loops at various places (you can
work on calculating the next Mersenne prime for a short duration :),
etc.? I suppose there are a lot of ways to add random consumption of
power. Of course, none of those things are very user friendly, but like
everything in security (and most of engineering), there are trade-offs.
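
As a rough illustration of the random-delay idea (with the caveat that
added noise only increases the number of samples an attacker needs; it
does not remove the underlying data-dependent timing the way
constant-time code would), something like:

```java
import java.security.SecureRandom;

// Sketch of the random-delay countermeasure: append a random sleep to a
// sensitive operation so its externally observable duration is noisy.
// The constants are arbitrary; this adds noise but is NOT a substitute
// for constant-time implementations.
public class TimingJitter {
    private static final SecureRandom RNG = new SecureRandom();

    // Sleep for a uniformly random 0..maxMillis milliseconds.
    static void randomPad(int maxMillis) {
        try {
            Thread.sleep(RNG.nextInt(maxMillis + 1));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // ... imagine the RSA signing operation happening here ...
        randomPad(5); // up to 5 ms of cover noise after the real work
        System.out.println("elapsed ~" + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```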

Also, what types of Android permissions are you assuming for this
malicious app? And are you assuming that the Android phone has been
rooted or not? A rooted device is probably always going to be an issue,
because then you have to worry about another rooted app collecting the
side channel leakage, and such apps are no longer confined to just what
can be done via the Dalvik sandbox.  If that's part of your threat
model, it's likely that the only countermeasures that you can apply
against that would, ironically, require that your own app run with root
privs as well so that you can go beyond the Android sandbox to protect
things. (E.g., in theory, you could send a SIGSTOP to all non-essential,
non-system processes while your process was running and then send them
all a SIGCONT afterward, so they couldn't gather any side channel
leakage while your app was running.) Of course, that's apt to be
somewhat non-portable as well, although not as bad as trying to rely on
certain hardware accelerator support.

-kevin


Re: [cryptography] Q: CBC in SSH

2013-02-11 Thread Kevin W. Wall
On Mon, Feb 11, 2013 at 6:20 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 snip



  ... I don't understand the resistance either, in the case
 of TLS it's such a trivial change (in my case it was two lines of code
 added
 and two lines swapped, alongside hundreds of lines of ad-hockery dealing
 with
 MAC-then-encrypt vulnerabilities sidelined) that it was a complete
 no-brainer.
 In case anyone's interested, the bikeshedding starts here:

 http://www.ietf.org/mail-archive/web/tls/current/msg09161.html

 The full thread is:

 http://www.ietf.org/mail-archive/web/tls/current/threads.html#09161

 We really need a few more cryptographers to weigh in (hint, hint), at the
 moment the opposition to the change seems to be mostly based on speculation
 and/or I don't want to change my code.


It would be great if we could really get this fixed in TLS 1.3. Then
ten years down the road, when it finally reaches a critical mass and we
can turn off all the previous broken versions, we might actually reach
the state where we have a secure communication channel. (Well, that, and
if we can do cert pinning, etc., or get rid of all the CAs, but that's a
discussion where we've already pummeled cadaverous equines, so let's
skip that this time around, okay?)

Seriously, I'd like to be optimistic, but looking at this from an industry
practitioner's perspective it truly will take us decades to kill off older,
insecure versions of SSL / TLS. With some software distributions,
SSLv2 still comes enabled, and many browsers still in use only support
SSLv3 and TLS 1.0. (And given that WinXP seems to be the Cobol of
the OS world, indeed those two may never die as well.)  So yeah,
by the time TLS 1.3 has reached critical mass that most businesses
are willing to disable support for TLS 1.2 and earlier, I'll be looking at
retirement. Just sayin'...
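
For anyone following along who hasn't seen the construction being
debated, here's a minimal, illustrative encrypt-then-MAC sketch (not
TLS's actual record processing; the framing and key handling here are
invented for the example): compute the MAC over IV || ciphertext with an
independent key, and verify it before attempting any decryption.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;

// Illustrative encrypt-then-MAC: MAC covers IV || ciphertext and is
// checked before any decryption or padding processing happens.
public class EncryptThenMac {
    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = Arrays.copyOf(a, a.length + b.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }

    static byte[] seal(SecretKey encKey, byte[] macKey, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, encKey, new IvParameterSpec(iv));
        byte[] body = concat(iv, c.doFinal(plaintext));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        return concat(body, mac.doFinal(body));             // IV || ct || tag
    }

    static byte[] open(SecretKey encKey, byte[] macKey, byte[] sealed) throws Exception {
        byte[] body = Arrays.copyOfRange(sealed, 0, sealed.length - 32);
        byte[] tag  = Arrays.copyOfRange(sealed, sealed.length - 32, sealed.length);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        if (!MessageDigest.isEqual(mac.doFinal(body), tag)) // constant-time compare
            throw new SecurityException("bad MAC; refusing to decrypt");
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, encKey,
               new IvParameterSpec(Arrays.copyOfRange(body, 0, 16)));
        return c.doFinal(Arrays.copyOfRange(body, 16, body.length));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey encKey = kg.generateKey();
        byte[] macKey = new byte[32];
        new SecureRandom().nextBytes(macKey);
        byte[] sealed = seal(encKey, macKey, "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(open(encKey, macKey, sealed), StandardCharsets.UTF_8)); // hello
    }
}
```

The point of the ordering is that a forged or tampered record is
rejected before the CBC padding is ever examined, which is what closes
off the padding-oracle style attacks that plague MAC-then-encrypt.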

-kevin


Re: [cryptography] any reason to prefer one java crypto library over another

2013-01-29 Thread Kevin W. Wall
At long last, a question that I can (almost) answer! ;-)

On Tue, Jan 29, 2013 at 9:05 PM,
travis+ml-rbcryptogra...@subspacefield.org wrote:
 First, are there any documented vulns in java cryptography providers,
 such that one would prefer one over another?

I'm not aware of any outstanding vulnerabilities, but there have been a few
in the past. Those in open source JCE providers such as Bouncy Castle
are easier to find details about as they are a lot more transparent than
Sun or (especially) Oracle.

As far as it goes, I suppose one could consider the lack of any AE modes
in the standard SunJCE a vulnerability. Bouncy Castle at least supports
CCM and GCM.  That's one of the shortcomings that I tried to address with
OWASP ESAPI 2.0 for Java.

Add to that the recent thread about the OAEP weakness with RSA,
and I'd say there are no secure padding schemes for RSA encryption, at
least in SunJCE. (I've not checked what Bouncy Castle has to offer.)

 Second, is there any significant reason (e.g. usability) to prefer a
 different API than the JCA/JCE?

None that I'm aware of, but if you are looking for something that
is FIPS 140-2 compliant, there are only two JCE-compatible
libraries that I know of:
 IBM Java JCE FIPS 140-2 Cryptographic Module (aka, IBMJCEFIPS)
http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/1401val2005.htm#497
and
 RSA Security BSafe Crypto-J
http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/1401val2008.htm#1048

so I suppose that could be viewed as a disadvantage.  JCA/JCE also really
offers very little in the way of key management and PKI support is fairly weak
(well, sparse) as well.



 In short, if devs ask 'which crypto library should I use in java' is
 there any reason to prohibit anything in particular, or recommend
 anything in particular?

As far as usability, it's a very consistent approach... init/update/doFinal...
that I, at least, find fairly easy and flexible to use as long as you understand
basic constructs like cipher modes and padding schemes. But I wouldn't
allow non-crypto-literate developers to use it, or otherwise you end up with
everyone using AES in ECB mode (which is the default if
you just use Cipher.getInstance("AES")). [Seen from way too many cases
of personal experience, including the ESAPI 1.4 release, which is why
I rewrote it.]  ESAPI 2.0 is not perfect, but the approach is sound...
just provide safe options and make the tweaks available via configuration
properties that you can allow your security team to set. If you want
more details on it, contact me off-list as I doubt others really care.
ESAPI still has a *long* way to go. (But I'd really love to get some
volunteers! Hint! Hint!)
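
To make that ECB pitfall concrete, here's a small self-contained demo
(any modern JDK; in SunJCE the bare string "AES" selects
AES/ECB/PKCS5Padding):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

// Demonstrates the pitfall described above: Cipher.getInstance("AES")
// silently gives you AES/ECB/PKCS5Padding, so identical plaintext
// blocks encrypt to identical ciphertext blocks, leaking structure.
public class DefaultModePitfall {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher ecb = Cipher.getInstance("AES");   // really AES/ECB/PKCS5Padding
        ecb.init(Cipher.ENCRYPT_MODE, key);
        byte[] twoEqualBlocks = new byte[32];     // two identical 16-byte blocks
        byte[] ct = ecb.doFinal(twoEqualBlocks);

        // ECB leak: the first and second ciphertext blocks are identical.
        boolean leak = Arrays.equals(Arrays.copyOfRange(ct, 0, 16),
                                     Arrays.copyOfRange(ct, 16, 32));
        System.out.println(leak); // true
    }
}
```

The fix is simply to always spell out the full transformation string
(e.g., "AES/CBC/PKCS5Padding" with a random IV, or an AE mode from a
provider that has one) rather than relying on the provider default.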

I think the other good crypto API is KeyCzar, done by Ben Laurie and others
at Google. If *all* you are looking for is a crypto lib, I'd probably recommend
it as ESAPI has way too much baggage (30 or so dependencies). But if you
also need lots of other things like XSS protection, etc., then ESAPI is
probably worth a look.

 I've been digging around this evening and haven't found much, and
 I'm sure someone on this list has done this research before.

Hope this helps.

-kevin


[cryptography] Rocra malware targets files encrypted by Acid Cryptofiler

2013-01-16 Thread Kevin W. Wall
May be of some interest to this group.

Looks like another piece of US-intelligence cyber-espionage malware has
been reported by Kaspersky, this time primarily targeting former
Soviet-bloc republics.

Full story is here:
http://www.scmagazine.com/red-october-spy-campaign-uncovered-rivals-flame-virus/printarticle/276016/

I found it interesting that this SC Magazine report stated:

... the campaign deploys malware to steal sensitive information,
including files encrypted by Acid Cryptofiler, classified software
used to safeguard confidential data maintained by such organizations
as the European Union, the North Atlantic Treaty Organization (NATO)
and European Parliament. ...

I'm guessing that means that this Acid Cryptofiler is some
severely flawed crypto software (or was written by the NSA and
has some back door or side channel).

-kevin


Re: [cryptography] phishing/password end-game (Re: Why anon-DH ...)

2013-01-16 Thread Kevin W. Wall
On Wed, Jan 16, 2013 at 9:21 PM,  d...@geer.org wrote:

   To clarify:  I think everyone and everything should be identified by
   their public key,...

 Would re-analyzing all this in a key-centric model rather than
 a name-centric model offer any insight?  (key-centric meaning
 that the key is the identity and Dan is an attribute of that
 key; name-centric meaning that Dan is the identity and the key
 is an attribute of that name)

Hmm... in which case identity fraud would take on a whole new
meaning and man who lose key, get no new key.

Sorry; couldn't resist.

-kevin


Re: [cryptography] yet another certificate MITM attack

2013-01-12 Thread Kevin W. Wall
Relevant to this thread, but OT to the charter of this list.

On Sat, Jan 12, 2013 at 5:46 AM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sat, Jan 12, 2013 at 4:27 AM, ianG i...@iang.org wrote:
 On 11/01/13 02:59 AM, Jon Callas wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Others have said pretty much the same in this thread; this isn't an MITM
 attack, it's a proxy browsing service.

 There are a number of optimized browsers around. Opera Mini/Mobile,
 Amazon Silk for the Kindle Fire, and likely others. Lots of old WAP
 proxies did pretty much the same thing. The Nokia one is essentially Opera.

 These optimized browsers take your URL, process it on their server and
 then send you back an optimized page.

 Oh, I see.  So basically they are breaking the implied promise of the https
 component of the URL.

 In words, if one sticks https at the front of the URL, we are instructing
 the browser as our agent to connect securely with the server using SSL, and
 to check the certs are in sync.

 The browser is deciding it has a better idea, and is redirecting that URL to
 a cloud server somewhere.

 (I'm still just trying to understand the model.  Yes, I'm surprised, I had
 never previously heard of this.)
 It's right up there with the PenTesters using BurpSuite to destroy
 a secure channel. I look at the PenTest reports and shake my head in
 disbelief that no one took exception to what the PenTesters did

Whoa...hold on there Jeff. I'm hoping that I'm misunderstanding your
last statement about what the pen testers did to destroy a secure
channel.

Are you implying that _authorized_ PenTesters using software such as
BurpSuite (or Fiddler2 or Paros Proxy, or any other software that
leverages the browser's _forward_ proxy ability) is a violation of some
law or morals? If so, I would wholeheartedly disagree. They are not
capturing arbitrary HTTPS traffic of others, but only that originating
from their own browser. How is that any different from doing it from a
browser plug-in, such as Tamper Data in Firefox? [Note: I'm not debating
the case where some arbitrary person tries to pen test their bank or
some other application that they have not been properly authorized to
test. That is a different story entirely and is a violation of the law,
but probably NOT because it is destroying a secure channel... DMCA
notwithstanding.]

There is a big difference between forward proxies and reverse proxies. A forward
proxy is (generally) under your control. When it is not under the user's
control, which appears to be the case here, that is completely different. It matters
little (to me at least) that Nokia has probably buried this under the fine
print legalese of their TOS.  But IMHO, that's a far cry from a pen tester
configuring their browser's forward proxy capability to use BurpSuite or
Fiddler2, or some other proxy. Keep in mind that it's not only pen testers
who do this, but many web application developers use these tools as well
to aid them in debugging their web applications.

-kevin


Re: [cryptography] current limits of proving MITM (Re: Gmail and SSL)

2012-12-17 Thread Kevin W. Wall
[A bit OT. Sorry]

On Sun, Dec 16, 2012 at 5:51 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sun, Dec 16, 2012 at 4:48 AM, ianG i...@iang.org wrote:
 On 16/12/12 11:47 AM, Adam Back wrote:
[snip]
 On Sun, Dec 16, 2012 at 10:52:37AM +0300, ianG wrote:

 [...] we want to prove that a certificate found in an MITM was in the
 chain
 or not.

 But (4) we already have that, in a non-cryptographic way.  If we find
 a certificate that is apparently signed by say VeriSign root and was
 found in an MITM, we can simply publish it with the facts.  Verisign
 are then encouraged to disclose (a) it was ours, (b) it wasn't ours,
 or (c) ummm...

 Verisign cant claim it wasnt theirs because the signing CA it will be
 signed
 by one of their roots, or a sub-CA thereof.

 Just to nitpick on this point, a CA certainly can claim that they or an
 agent did not sign a certificate.  And, they can provide the evidence, and
 should have the ability to do this:  CAs internally have logs as to what
 they did or did not sign, and this is part of their internal process.
 That brings up a good point: the CA should be responsible for their
 reseller's or agent's actions. The CA entered into the relationship,
 and no one forced them into the partnering.

 I also envision a scenario where a CA sets up a subsidiary (that is, a
 distinct corporate entity) and then uses the new corporate entity to
 subvert the spirit and intentions of the system. Later, the CA claims
 it was them, not us.

 Lack of responsibility and accountability are part of the problem. It
 needs to be addressed.

IANAL (thank God! ;), but I really don't see how this could work, at
least unless there were laws specific to, and restricted to, narrow
cases like this, and I don't see that as likely. I think that this will
continue to be enforced by legally binding contractual agreements
rather than regulatory issues. There would be great resistance from
most businesses to have it otherwise.

What you propose all sounds good on the surface, specifically if
the intent of the CA is to create such a subsidiary for potentially nefarious
purposes, but intent is difficult to regulate as well as difficult
to prove.

I'm sure that if the scenario you outline were to happen and a breach
resulted because of it, both the CA and their subsidiary could be sued
even without any specific existing laws governing this.  In many (most?)
states in the USA (well, at least in Ohio), one cannot completely waive
tort liability despite what the contract one signs says. (Or so my attorney
informs me.) If it can be shown that there is an intent to defraud or that
negligence is involved (especially if it is intentional), the contract is
thrown out the window. (Of course, showing enough evidence in court is
another matter entirely.) In some such cases, even criminal charges are
possible.

But there is also an inherent risk in doing business, and no business in their
right mind would ever sign an agreement making them liable for some
other service provider's screw-ups. Usually, businesses want contractual
agreements with their service providers where the service providers accept
liability for their own screw-ups (e.g., where their buggy software causes
an unexpected service outage). In my experience, most service
providers--at least those with deep pockets--are reluctant to agree to
even that much. So it is highly unlikely that businesses are going to support
any type of legislation that makes them liable for what their service providers
do. They naturally want to shed liability, not take it on.

In the specific case that you mention, even if there were such specific laws,
it would likely mean an end to CAs creating such CA subsidiaries, which
probably means higher certificate prices for all of us.

If you think about what you are asking for in the *general* sense, I think you
might reconsider. For example, consider a case where a merchant wants to
do a credit check, so they send a credit bureau an SSN and DOB and get back
a credit rating. Let's suppose the merchant does all this securely and doesn't
even permanently store the SSN / DOB, but only holds it long enough to get a
credit rating back from some credit bureau. Surely, you would not hold that
merchant responsible for the credit bureau's lack of security, would you?
Would you want that merchant to be able to be (successfully) sued even though
a security breach of the credit bureau resulted in the identity fraud of all
the merchant's customers? And of course a similar scenario could be possible
with credit cards. Why should the merchant be responsible in such cases when
that merchant has pretty much lost control of how someone downstream is
handling sensitive data?

I realize that we are somewhat comparing apples and oranges here, but law
often tends to become more encompassing than originally intended
because it gets used as precedent in similar cases that are brought to
trial. There is enough similarity here 

Re: [cryptography] Questions about crypto in Oracle TDE

2012-11-11 Thread Kevin W. Wall
On Sun, Nov 11, 2012 at 7:34 AM, Florian Weimer f...@deneb.enyo.de wrote:
 * Kevin W. Wall:

 Oracle TDE is being looked at as one option because it is thought to be
 more or less transparent to the application itself and its JDBC code.

 If it's transparent, it's unlikely to help against relevant attacks,
 such as dumping the database over JDBC after the application server
 has been compromised.  Non-cryptographic approaches, such
 database-level access controls, seem better suited for this task
 (assuming that the database has been set up in a suitable fashion and
 is itself robust enough to withstand attacks over the client
 interface).

Of course; the threat model that Oracle TDE supposedly addresses
does nothing to address SQLi vulnerabilities. Even having the encryption
being done by the application does not necessarily mitigate that attack
vector in all situations. As usual, that is best handled by ensuring the
use of prepared statements (aka, parameterized queries).
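For anyone unfamiliar, the parameterized-query approach boils down to
something like this (an illustrative sketch with made-up table and column
names, not code from any actual project):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AccountDao {
    // The SQL text is fixed; user input only ever fills the "?" placeholder.
    static final String FIND_BY_CARD =
        "SELECT acct_id, balance FROM accounts WHERE card_number = ?";

    public static ResultSet findByCard(Connection conn, String cardNumber)
            throws Exception {
        PreparedStatement ps = conn.prepareStatement(FIND_BY_CARD);
        ps.setString(1, cardNumber); // bound as a value by the driver
        return ps.executeQuery();
    }
}
```

The key point is that the card number is bound as a value, so quote
characters in the input can never change the SQL statement itself.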

At first I thought that the attack vector Oracle TDE was intended to address
was that of a rogue DBA with access to the database just dumping sensitive data
from the DB. I got that impression because Oracle's documentation recommends
having a separate security administrator. However, as I thought about it,
it seems that this isn't really right either. Anyone who has SELECT ability
on the table's encrypted column can dump the sensitive data. Even if a DBA
for this database didn't have SELECT privilege directly, it would seem that
indirectly they could create another DB user that *did* have the needed
SELECT access and smash & grab the sensitive data that way.

So looking back at it, I'm not really sure what threat Oracle TDE is supposed
to prevent. Perhaps an OS administrator stealing the data? Possibly. More likely
it was there to satisfy some inept auditor's checklist mentality toward
security. A lot of security in the real world is of this CYA variety, so it
wouldn't surprise me in the least. That doesn't mean that CYA security
approaches are always pointless, though. In the event of lawsuits resulting
from some data breach, such approaches are often considered to be following
best practice and thus doing due diligence, keeping you from getting sued
for negligence and paying treble damages.

I'm leaning heavily towards making the application handle the encryption, but I
think it depends on how much they have left in the budget for this change
request.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] Questions about crypto in Oracle TDE

2012-11-08 Thread Kevin W. Wall
On Thu, Nov 8, 2012 at 6:22 PM, Morlock Elloi morlockel...@yahoo.com wrote:
 We have been using a different approach for securing particular fields in the 
 database.

 The main issue with symmetric ciphers inside (distributed) systems is that 
 the encrypting entity is always the most numerous weak point. Whoever 
 subverts your input flow - and there are lots of opportunities there - gets 
 keys to everything. Your distributed system is not really distributed - it's 
 mostly distributing vulnerabilities.

 However, if you use asymmetric crypto (say, 1024 or 2048-bit RSA), give only 
 public key(s) to encrypting flows, and reserve the secret key(s) for modules 
 that need the actual plaintext access (a rare situation in practice), then:

 - the storage size remains the same;

 - you can use the first 512 bits or so for indexing (you may get a collision 
 once before the Universe cools down, or whatever your belief about the 
 curvature is.)

 - there are no ECB issues (for field sizes < 1024 or 2048 bits, most are);

 - the extra CPU cost for modular arithmetic (to insert or search) is 
 negligible (at least in use cases we've seen so far);

 - the security requirements on the input side drop down big time. You can 
 (continue to) have bozos code your 'apps'.

 More philosophically, the database is just a wire with a delay. You would 
 never directly use symmetric keys in other communications (by sharing them 
 under the table), would you?

They are using Oracle 10.2, where Oracle TDE only supports AES and 3DES,
so if I wanted to do something like this, TDE is not going to be the
solution; in that case I'm not going to have the encryption done by the DB.
Instead, I would have the application do it using our standard crypto
libraries. TDE was being looked at because, from the application's point of
view, it is transparent.

I think that doing the encryption / decryption in the application is
more secure (much better
separation of duties; the DBA can't decrypt the tables), but it is
also more effort.  There's a
security vs. effort trade-off.  I'm just trying to determine where the
acceptable risk level is
at and that means understanding how TDE works.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] Public Key Pinning Extension for HTTP (draft-ietf-websec-key-pinning-01)

2012-11-01 Thread Kevin W. Wall
On Nov 1, 2012 5:23 PM, Jeffrey Walton noloa...@gmail.com wrote:

 Hi All,

 I was reading through Public Key Pinning Extension for HTTP
 (draft-ietf-websec-key-pinning-01,
 http://tools.ietf.org/html/draft-ietf-websec-key-pinning-01).

 Section 3.1. Backup Pins, specifies that a backup should be available
 in case something goes awry with the current pinset. The backup pinset
 is a hash of undisclosed certificates or keys. Appendix A. Fingerprint
 Generation, then offers a program to hash a PEM encoded certificate.
snip
 Would it be
 better to retain a hash of the public key instead since the public key
 rarely changes?

Or perhaps public key plus SubjectDN since that also rarely
changes??? At least that would still allow us to associate the two.

-kevin


Re: [cryptography] Data breach at IEEE.org: 100k plaintext passwords.

2012-09-25 Thread Kevin W. Wall
-kevin
Sent from my Droid; please excuse typos.
On Sep 25, 2012 1:39 PM, Jeffrey Walton noloa...@gmail.com wrote:

 In case anyone on the list might be affected... [Please note: I am not
 the I' in the text below]

 http://ieeelog.com

For shame. This should make for a nice article in a future _IEEE Security
& Privacy_.


Re: [cryptography] Data breach at IEEE.org: 100k plaintext passwords.

2012-09-25 Thread Kevin W. Wall
I'm thinking the IEEE should pick up the membership dues for 2013 for all
those 100k users. :-p

-kevin
Sent from my Droid; please excuse typos.


Re: [cryptography] Key extraction from tokens (RSA SecurID, etc) via padding attacks on PKCS#1v1.5

2012-07-02 Thread Kevin W. Wall
On Mon, Jul 2, 2012 at 1:56 AM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sat, Jun 30, 2012 at 11:11 PM, Noon Silk noonsli...@gmail.com wrote:
 From: 
 http://blog.cryptographyengineering.com/2012/06/bad-couple-of-years-for-cryptographic.html

[snip]

 Direct link to the paper:
 http://hal.inria.fr/docs/00/70/47/90/PDF/RR-7944.pdf - Efficient
 Padding Oracle Attacks on Cryptographic Hardware by Bardou, Focardi,
 Kawamoto, Simionato, Steel and Tsay
 RSA says that its tokens are secure,
 http://www.h-online.com/security/news/item/RSA-says-that-its-tokens-are-secure-1627326.html

 After a significantly improved attack on crypto hardware made the
 news, RSA's Sam Curry has said that the affected SecurID 800 token is
 secure. The token has not been cracked, and the attack is not useful,
 explained Curry, adding that the attack does not allow private RSA
 keys to be extracted from the token.

Curry's actual blog post
http://blogs.rsa.com/curry/dont-believe-everything-you-read-your-rsa-securid-token-is-not-cracked/
gives a bit of additional info that I didn't see mentioned in
_The H Security_ article.  In particular, it fails to mention this:

*Utilize PKCS #1 v2.0 with OAEP in applications that require encryption.*
It has been RSA’s position for some time that customers should utilize
the higher PKCS #1 v2.0 standard with OAEP, which is not subject to this
type of vulnerability. The RSA SecurID 800 technology supports this
standard.

So it would seem to me that those using the SecurID 800 with PKCS #1 v1.5
only have themselves to blame??? I suspect that PKCS #1 v1.5 is still supported
by SecurID 800 tokens for backward compatibility, and so yes, there are
likely lots of RSA customers who are still running things that way. But
if it supports PKCS #1 v2.0 and the OAEP padding scheme and their customers
are not using it, then I fail to see how the vendor is to blame. (Unless it
defaults to v1.5. Anyone know?)
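For reference, switching to OAEP in the JCE is just a different
transformation string. A minimal round-trip sketch (my own illustration,
not RSA's code, using the Sun provider's standard transformation names):

```java
import javax.crypto.Cipher;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class OaepDemo {
    // Encrypt with the public key, decrypt with the private key,
    // using OAEP padding instead of PKCS#1 v1.5.
    public static boolean roundTrip() {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair kp = kpg.generateKeyPair();
            Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            enc.init(Cipher.ENCRYPT_MODE, kp.getPublic());
            byte[] ct = enc.doFinal("secret".getBytes());
            Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            dec.init(Cipher.DECRYPT_MODE, kp.getPrivate());
            return Arrays.equals(dec.doFinal(ct), "secret".getBytes());
        } catch (Exception e) {
            return false;
        }
    }
}
```

Unlike PKCS#1 v1.5, OAEP is randomized and not subject to the
Bleichenbacher-style padding oracle attacks discussed above.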

Also, this supposedly does not affect SecurID authentication tokens, but
only the smartcard functionality.

 [Wasn't RSA caught lying about their breach also? Didn't they claim
 the phishing campaign was an APT?]

Well, spin-doctoring, for sure. There was a phishing campaign that started
it all, but according to sources inside RSA, it was a spear phishing
attack aimed at only 7 or so different RSA employees in HR. The email
addresses were forged to appear as if they were coming from a trusted
contracting company that they did business with, and the attached Excel
spreadsheet had a 0-day Adobe Flash exploit.

After listening to the complete inside story, where I found fault with RSA
was:
1) They were escrowing the SecurID seeds, often without their customers
   explicit knowledge. (Presumably it was in the fine print of the contract
   that most customers probably did not read and there supposedly was an
   opt-out policy, but it really should have been an opt-in.)
2) RSA decided to eliminate their air gap where they previously had a
   manual swivel chair process where they burned CD-ROMs to recover lost
   seeds to send to their customers and replaced that with a web interface.
   RSA claims it was a business decision that was encouraged by their
   customers. (Note: The air gap has since been restored.)

Of course the whole APT term itself is IMO somewhat misnamed. I think
a better term would be Targeted Persistent Threat, but of course that
would not allow as much spin doctoring, which is why we are probably stuck
with the APT term.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] Intel RNG

2012-06-22 Thread Kevin W. Wall
Marsh,

Am I missing something?

On Fri, Jun 22, 2012 at 1:06 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 06/21/2012 09:05 PM, ianG wrote:


 On 22/06/12 06:53 AM, Michael Nelson wrote:
[snip]

 It's a natural human question to ask. I want to see what's under the
 hood. But it seems there is also a very good response - if you can
 see under the hood, so can your side-channel-equipped attacker.

 It seems to me that the bits one gets to see via RdRand aren't a side
 channel, by defintion. But if the attacker gets to see a disjoint set of
 samples from the same oscillator then we only need to worry about
 dependencies lurking between the sample sets.

 The oscillator is a fairly simple circuit, so it should be straightforward
 to show it has a memory capacity of only a bit or two. Allowing the oscillator
 to run for a few cycles between sample sets going to different consumers
 should eliminate the possibility of short term dependencies.

You wrote going to DIFFERENT consumers. I am interpreting that as
different processes, but I don't see how a CPU instruction like RdRand
or anything else is going to be process- or thread- (or insert your favorite
security context here) aware. If you had omitted the different, then it
would have made sense.

So am I just reading too much into your statement and you didn't really
mean *different* consumers, or am I simply not understanding what
you meant? If the latter, could you kindly explain?

Thanks,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning?

2012-05-02 Thread Kevin W. Wall
On Wed, May 2, 2012 at 5:01 AM, Darren J Moffat
darren.mof...@oracle.com wrote:
 On 05/02/12 06:33, Kevin W. Wall wrote:

 primitives that do not include *any* AE cipher modes at all. Some
 great examples are in the standard SunJCE that comes with the
 JDK (you have to use something like BouncyCastle to get things
 like GCM or CCM for Java and that's often a hard political sell so
 most developers won't bother). Another example is with the .NET
 framework. It too has no authenticated mode. Both Java and
 .NET only support ECB, CBC, CFB, OFB modes and starting
 with JDK 1.6, Java also offers CTR mode. (.NET may too; I haven't
 actually looked in a while.)


 JEP 115: AEAD CipherSuites

 http://openjdk.java.net/jeps/115

Darren,

Well, that is definitely good news to be sure, but as I read JEP 115,
it appears that:
1) The interfaces are only defined in JDK 7. A reference implementation
won't come until JDK 8. (Meanwhile, most applications are still on
JDK 6 and some even on earlier, unsupported versions.)
2) As I read it, the reference implementation is only going to address
the Java Secure Sockets Extension (JSSE) and PKCS#11 support. In other
words, all they are doing is adding support for some new TLS cipher
suites and adding PKCS#11 support for NSA's Suite B compliance for TLS.

Specifically, from the cited URL, it states:
Note that in order to support the GCM AEAD cipher
suite in JSSE, the GCM cipher implementation is
required in the JCA/JCE PKCS11 provider.

So at this point, without having looked at the interfaces in JDK 7,
I am not sure that one will be able to use GCM with AES when
using the Cipher class.  However, I will look more deeply. If nothing
else though, it's a step in the right direction, so thanks for the
pointer.
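A quick way to check any given JDK/provider is a GCM round trip through the
ordinary Cipher class. A sketch (my own, postdating this thread:
GCMParameterSpec only appeared in JDK 7, and SunJCE didn't ship a GCM
implementation until JDK 8):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class GcmProbe {
    // Returns true if the default provider can do an AES/GCM round trip
    // through the generic Cipher API.
    public static boolean gcmRoundTrip() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();
            byte[] iv = new byte[12];          // 96-bit nonce, GCM's recommended size
            new SecureRandom().nextBytes(iv);
            Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
            enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = enc.doFinal("hello".getBytes());
            Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return Arrays.equals(dec.doFinal(ct), "hello".getBytes());
        } catch (Exception e) {
            return false;
        }
    }
}
```

If getInstance throws NoSuchAlgorithmException (caught above), the provider
simply doesn't expose GCM, which was the situation in the stock SunJCE
at the time of this thread.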

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

2012-03-24 Thread Kevin W. Wall
On Mar 24, 2012 3:29 AM, Marsh Ray ma...@extendedsubset.com wrote:

 On 03/24/2012 01:28 AM, J.A. Terranson wrote:


 Ah... Probably not.  Think Jim Bell et al.   I suspect it is far more
 likely that the vast majority of subscribers here are listed in the
 Potentially Dangerous category, if not the flat out Budding Terrorist
 label.

[snip]

 If you're looking for someplace to feel subversive around, this isn't it.
Crypto is a mainstream engineering discipline these days, and one greatly
needed by modern civilization.

 Can we kill this thread now please?

Ah, shucks. Does that mean I can't add Budding Terrorist to my resume?

-kevin
--
Please excuse typos. Sent from my DROIDX.


Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-27 Thread Kevin W. Wall
On Mon, Feb 27, 2012 at 6:08 PM, coderman coder...@gmail.com wrote:
 On Sat, Feb 25, 2012 at 4:54 PM, Marsh Ray ma...@extendedsubset.com wrote:
...
 Still it might be worth pointing that if Wells Fargo really wanted to forbid
 a Trustwave network-level MitM, SSL/TLS provides the capability to enforce
 that policy at the protocol level. They could configure their web app to
 require a client cert (either installed in the browser or from a smart
 card).

 many years ago at $my_old_telco_employer they supported web based call
 monitoring. they required a client side cert purchased from verisign
 specifically for the purpose. we had pages of documentation detailing
 how to generate the request, and add the cert into your browser.

 this was the first and only time i had ever used client certificates
 from a CA vendor in such a manner.

 mutual authentication... what a concept. is it really that rare?

Very rare for residential consumers; not quite as rare for B2B
transactions. For instance, we regularly use it for B2B web services
and require it when ILECs or CLECs are retrieving CPNI data.
YMMV depending on your telco.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-26 Thread Kevin W. Wall
On Sun, Feb 26, 2012 at 8:36 PM, James A. Donald jam...@echeque.com wrote:
 On 2012-02-27 3:35 AM, Jon Callas wrote:
 Remember what I said -- they're law enforcement and border
 control. In their world, Truecrypt is the same thing as a
 suitcase with a hidden compartment. When someone crosses a
 border (or they get to perform a search), hidden
 compartments aren't exempt. They get to search them.

 Hidden compartment?  What hidden compartment?  If I have one,
 you are welcome to search it.  Go knock yourselves out.

Well, we're already considerably OT, but since the moderator seems to
be letting this thread play itself out, I'll use that to segue to a related
topic on a newly proposed Ohio law and hidden compartments.

[I just literally finished posting this to my G+ account moments ago, but
will repost here rather than making all of you go to Google+.]

Ohio Gov. John Kasich is advocating a law that would make it a 4th-degree
felony to own any vehicle equipped with hidden compartments. Conviction
under this proposed law could mean up to 18 months in jail and a
potential $5,000 fine.

So someone please tell me why the ACLU is not jumping all over this? I
just don't see how this law is a good thing. It seems to me that this
could trap a lot of innocent people. Imagine the following scenario:

A drug dealer whose car has a secret compartment decides to
get some new wheels so he trades in his old car for a hot new
one to some legitimate auto dealer. The auto dealer does not
know this person is a drug dealer so they have no reason to
suspect anything. Sometime later, the car dealer sells the
car to someone. That someone then happens to get in an accident
where they get rear ended. The ensuing damage reveals a hidden
compartment such as that described in the Columbus Dispatch
article (see below). The officer on the scene of the accident
notices the secret compartment, and even though there are no
drugs present, decides to arrest the driver of the damaged car
solely because she or he can observe the secret compartment.
Thereby some innocent person is charged with a fourth degree
felony and at least has to go through a bunch of legal hoops
to clear his or her name.

Now how is this a _good_ thing? So much for the presumed innocent until
proven guilty.

The original Columbus Dispatch article is here in case anyone wishes
to read it:
http://www.dispatch.com/content/stories/local/2012/02/25/secret-compartments-could-get-drivers-busted.html

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-25 Thread Kevin W. Wall
On Sat, Feb 25, 2012 at 2:50 AM, Jon Callas j...@callas.org wrote:

[snip]

 But to get to the specifics here, I've spoken to law enforcement and
 border control people in a country that is not the US, who told me
 that yeah, they know all about TrueCrypt and their assumption is
 that *everyone* who has TrueCrypt has a hidden volume and if they
 find TrueCrypt they just get straight to getting the second password.
 They said, We know about that trick, and we're not stupid.

Well, they'd be wrong with that assumption then.

 I asked them about the case where someone has TrueCrypt but doesn't
 have a hidden volume, what would happen to someone doesn't have one?
 Their response was, Why would you do a dumb thing like that? The whole
 point of TrueCrypt is to have a hidden volume, and I suppose if you
 don't have one, you'll be sitting in a room by yourself for a long
 time. We're not *stupid*.

That's good to know then. I never had anything *that* secret to protect,
so never bothered to create a hidden volume. I just wanted a good, cheap
encrypted volume solution where I could keep my tax records and other
sensitive personal info. And if law enforcement ever requested the password
for that, I wouldn't hesitate to hand it over if they had the proper
subpoena / court order. But I'd be SOL when then went looking for a second
hidden volume simply because one doesn't exist. Guess if I ever go out of
the country with my laptop, I'd just better securely wipe that partition.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] Combined cipher modes

2012-02-20 Thread Kevin W. Wall
First of all, let me thank all who have responded for lending
your expertise. I am just picking out Ian's to respond to
because of his suggesting dividing up the IV into

random||counter||time

but I do appreciate everyone else's comments as well.
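For concreteness, that random||counter||time split might look something
like this (my own rough sketch of the idea, not anything from ESAPI): a
per-instance random prefix, a monotonically increasing counter, and a
timestamp, packed into one 16-byte AES-block-sized IV.

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicLong;

public class SplitIv {
    // random || counter || time: 4 random bytes fixed at startup,
    // an 8-byte monotonically increasing counter, and a 4-byte timestamp.
    private final int randomPart;
    private final AtomicLong counter = new AtomicLong();

    public SplitIv() {
        this.randomPart = new SecureRandom().nextInt();
    }

    public byte[] next() {
        ByteBuffer buf = ByteBuffer.allocate(16);   // one AES block
        buf.putInt(randomPart);
        buf.putLong(counter.getAndIncrement());
        buf.putInt((int) (System.currentTimeMillis() / 1000L));
        return buf.array();
    }
}
```

The counter guarantees uniqueness within a process lifetime, the timestamp
helps across restarts, and the random prefix hedges against two instances
starting with identical counters and clocks.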

On Mon, Feb 20, 2012 at 7:11 AM, ianG i...@iang.org wrote:
 On 20/02/12 18:11 PM, Kevin W. Wall wrote:

 Hi list,

 This should be a pretty simple question for this list, so please pardon
 my ignorance. But better to ask than to continue in ignorance. :-)

 NIST refers to combined cipher modes as those supporting *both*
 authenticity and confidentiality, such as GCM and CCM.

 My personal impression of such things is that although they can give a paper
 sense of authenticity, it is not good enough to rely on at an application
 level because of software layering issues.  In the past, I've preferred to
 use a heavyweight signed plaintext packet, then encrypted with a light-level
 HMAC or similar.  So, yes it is authenticated twice, but they are at
 different strengths / semantics / layers.

Yes, well, that is all well and good for some things, but when the primary
use of encryption nowadays is to encrypt short strings like credit card
numbers and bank account numbers, most developers are not going to
put up with the additional space & CPU overhead of both a dsig and an
HMAC. Based on your recommendation from several years ago, we had originally
used an HMAC-SHA1, but changed it to an HMAC-SHA256 after recommendations
from the initial NSA review. However, we (OWASP ESAPI) only do this for
when the user decides to use an unauthenticated cipher mode.

 So my first question: Are there ANY combined cipher modes
 for block ciphers that do not cause the ciphers to act as a key
 stream? (That seems to be because most of the ones I found build
 the confidentiality piece around CTR mode.) If yes, please name
 a few (especially those with no patent restrictions).

 I know when you have a cipher that acts in a streaming mode,
 that you need to be careful to use a unique IV for every encryption
 performed with the same key.


 Well.  With basic modes like CBC, if there is no variation in the early
 parts of the packet, those blocks will encrypt the same.

 A good plaintext packet design can push strong variation into the first
 bytes.  e.g., the MAC can go at the beginning not the end.  It used to be
 customary to put the MAC at the end because hardware calculated it and
 streamed it at the same time, but software doesn't work that way.

 (There was a paper suggesting that encrypt-then-mac was better than
 mac-then-encrypt, but I vaguely recall this result only applies under some
 circumstances.  Does anyone recall how important this issue was?)

I've read a few papers and blogs on this topic. The one that sticks in
my mind was one of Nate Lawson's blogs.  I just looked it up and I
think it was:
http://rdist.root.org/2010/09/10/another-reason-to-mac-ciphertext-not-plaintext/

Based on this and some additional comments from Duong & Rizzo, we decided to
use the encrypt-then-MAC approach to ensure the integrity of the ciphertext.
(Keep in mind this was designed around the time that Duong and Rizzo
automated a padding oracle attack with their POET software.)

However, we skipped the MAC calculation if the cipher mode chosen
was an authenticated mode like CCM or GCM. The assumption (hopefully
a correct one), was that an authenticated cipher mode was sufficient.
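For the non-authenticated modes, the encrypt-then-MAC flow distills down to
something like the following (a simplified sketch assuming separate
encryption and MAC keys, not ESAPI's actual code):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;

public class EncryptThenMac {
    private static final SecureRandom PRNG = new SecureRandom();

    // Output layout: IV (16 bytes) || ciphertext || HMAC-SHA256 tag (32 bytes).
    // The MAC covers IV || ciphertext, per the MAC-the-ciphertext advice.
    public static byte[] seal(SecretKey encKey, SecretKey macKey, byte[] plain)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        byte[] iv = new byte[16];
        PRNG.nextBytes(iv);
        c.init(Cipher.ENCRYPT_MODE, encKey, new IvParameterSpec(iv));
        byte[] ct = c.doFinal(plain);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(iv);
        byte[] tag = mac.doFinal(ct);
        byte[] out = new byte[iv.length + ct.length + tag.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        System.arraycopy(tag, 0, out, iv.length + ct.length, tag.length);
        return out;
    }

    // Verify the MAC first; only decrypt if the ciphertext is authentic.
    public static byte[] open(SecretKey encKey, SecretKey macKey, byte[] msg)
            throws Exception {
        byte[] iv = Arrays.copyOfRange(msg, 0, 16);
        byte[] ct = Arrays.copyOfRange(msg, 16, msg.length - 32);
        byte[] tag = Arrays.copyOfRange(msg, msg.length - 32, msg.length);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(iv);
        if (!MessageDigest.isEqual(mac.doFinal(ct), tag)) {
            throw new SecurityException("MAC verification failed");
        }
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, encKey, new IvParameterSpec(iv));
        return c.doFinal(ct);
    }

    // Self-check with throwaway keys (demo only; never use a fixed MAC key).
    public static boolean demo() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey ek = kg.generateKey();
            SecretKey mk = new SecretKeySpec(new byte[32], "HmacSHA256");
            byte[] plain = "4111-1111-1111-1111".getBytes();
            return Arrays.equals(open(ek, mk, seal(ek, mk, plain)), plain);
        } catch (Exception e) {
            return false;
        }
    }
}
```

Note that the MAC is verified before any decryption is attempted, which is
the whole point of MACing the ciphertext rather than the plaintext.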

In ESAPI (ignoring all error / exception handling, etc.), using such
a combined mode, distills down essentially to something like this
for the encryption:

// This is in essence what ESAPI does for combined cipher modes like CCM & GCM.
// Assume a 128-bit AES key for this example.
public class Encryptor {
    private static SecureRandom prng = new SecureRandom();
    ...
    public byte[] encrypt(SecretKey sk, byte[] plain) throws Exception {
        ...
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        byte[] ivBytes = new byte[ 128 / 8 ];   // 16 bytes (AES block size)
        prng.nextBytes(ivBytes);
        IvParameterSpec ivSpec = new IvParameterSpec(ivBytes);
        cipher.init(Cipher.ENCRYPT_MODE, sk, ivSpec);
        return cipher.doFinal( plain );  // the IV must also be conveyed with the ciphertext
    }
    ...
}
///

However, as you can plainly see, there is no attempt here to prevent
the reuse of an IV for a given key. The original assumption was that a
random IV was sufficient, but after reading recent comments on entropy
pools at boot time (when application servers are typically started from
/etc/init.d scripts), I'm not so sure.

However, the above is more or less considered best practice for doing
symmetric encryption in Java when you are using an authenticated mode.

The question is whether or not this is sufficient. I suppose like
most everything in information security, it depends on one's threat
model. Unfortunately, when trying to provide a generally reusable
(and simple) security API that does

[cryptography] Combined cipher modes

2012-02-19 Thread Kevin W. Wall
Hi list,

This should be a pretty simple question for this list, so please pardon
my ignorance. But better to ask than to continue in ignorance. :-)

NIST refers to combined cipher modes as those supporting *both*
authenticity and confidentiality, such as GCM and CCM.

So my first question: Are there ANY combined cipher modes
for block ciphers that do not cause the ciphers to act as a key
stream? (That seems to be because most of the ones I found build
the confidentiality piece around CTR mode.) If yes, please name
a few (especially those with no patent restrictions).

I know when you have a cipher that acts in a streaming mode,
that you need to be careful to use a unique IV for every encryption
performed with the same key.

So my second question is: if the combined cipher modes all
cause a cipher to act as if it is in a streaming mode, is it okay
to just choose a completely RANDOM IV for each encryption?
Because it sure doesn't seem to be feasible to record all the IVs
for a given key to make sure that an IV isn't reused. If that is not
acceptable, then how does one ever address this?
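One standard way to sidestep recording IVs entirely (not something proposed in this thread, but a sketch of the deterministic construction from NIST SP 800-38D, section 8.2.1) is to build the IV from a fixed field plus a per-key invocation counter, which makes reuse under a given key structurally impossible as long as the counter state survives and is never reset under the same key:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Deterministic IV construction: a fixed field (e.g., device/instance id)
// concatenated with a monotonically increasing invocation counter.
// Uniqueness per key holds as long as the counter never wraps or resets
// while the same key is in use (rotate the key on restart otherwise).
public class CounterIv {
    private final int fixedField;                     // instance identifier
    private final AtomicLong invocation = new AtomicLong(0);

    public CounterIv(int fixedField) { this.fixedField = fixedField; }

    // 96-bit IV: 32-bit fixed field || 64-bit invocation counter
    public byte[] next() {
        return ByteBuffer.allocate(12)
                         .putInt(fixedField)
                         .putLong(invocation.getAndIncrement())
                         .array();
    }
}
```

The trade-off is that the counter state must be persisted across restarts, which is exactly the operational burden a random IV avoids; hence the tension discussed above.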

Thanks,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-15 Thread Kevin W. Wall
On Wed, Feb 15, 2012 at 12:49 AM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sun, Feb 12, 2012 at 8:17 PM, Steven Bellovin s...@cs.columbia.edu wrote:

 On Feb 12, 2012, at 6:31 AM, Harald Hanche-Olsen wrote:

 [Jeffrey Walton noloa...@gmail.com (2012-02-12 10:57:02 UTC)]

 (1) How can a company actively attack a secure channel and tamper with
 communications if there are federal laws prohibiting it?

 IANAL, as they say, but I guess they are acting under the presumption
 that any communication originating on the company's own network is the
 company's own communication, and so they can do anything they please
 with it. It could be argued that the notion of tampering with your
 own communications doesn't make sense, and so there is no breach of
 federal law.

 I am not defending the above interpretation, nor am I saying for sure
 that it holds water. But I think it is a reasonable guess, at least
 that that the company's lawyers will use arguments along those lines
 (albeit argued in more legalese terms) if they had to defend this
 practice.


 Although I'm not a lawyer, I've worked with a number of lawyers on the
 wiretap act, and have been studying it for close to 20 years.  I do not
 see any criminal violation.

Nor do I. If anything, I think this would be a civil matter.

 18 USC 2512 (http://www.law.cornell.edu/uscode/text/18/2512) bars devices
 if design of such device renders it primarily useful for the purpose of
 the surreptitious interception of wire, oral, or electronic communications.
 Is a private key or certificate a device?  Not as I read 18 USC 2510(5)
 (http://www.law.cornell.edu/uscode/text/18/2510).  Paragraph (12) of that
 section would seem to say that intra-company wires aren't covered.  But
 a better explanation of that can be found in Ruel Torres Hernandez, ECPA
 and online computer privacy, Federal Communications Law Journal, 
 41(1):17–41,
 November 1988.  He not only concluded that the ECPA did not bar a company
 from monitoring his own devices, he quoted a participant in the law's
 drafting process as saying that that was by intent.  California law bars
 employers from monitoring employee phone calls, but in 1991 a court there
 explicitly ruled that monitoring email was permissible -- or rather, that
 it wasn't barred by a statute that only spoke of phone calls.
 I looked at the cited cases. As a layman, I'm not contesting the fact
 that an employer has a right to monitor its employees, and I
 understand why some of the plaintiff positions were undefensible in
 civil court.

 I'm talking about violation of US Code and criminal cases. Remember, a
 lot of these corporations wanted harsh regulations for folks breaking
 into their [insecure] networks. Obviously, they don't want to eat
 their own dog food. But some of this stuff is sufficiently broad so
 that their actions are criminal despite their intentions or desires.

I'd agree that their actions are immoral / unethical, but that doesn't
make their actions criminal, especially if their users consent to monitoring
of all company computer and network usage. And, the AUPs that
I've seen at all the companies that I've worked for as both employee
and contractor all make you sign those...otherwise, you won't
be collecting a paycheck.

If the company did not inform the employees that they were being
monitored, then _perhaps_ a criminal case might be made based on
illegal wiretap statutes, but I do not have enough knowledge
to judge that. As they say, IANAL.

 Whether they like or or not (or agree or disagree), they were only
 authorized to transmit traffic.

Perhaps, if you are talking about someone who is merely acting
in the role of provider / carrier of services, but I thought this discussion
was about employee / employer relationships.  Maybe I'm misunderstanding
something that you are trying to communicate.

 Here, I speak of the communications
 between two parties - A and B. When they peeled away SSL/TLS, they
 exceeded their authorization. Even if party A agreed to be monitored,
 I doubt party B also agreed 'a priori,' especially if party B did not
 reside on the same corporate network. Hence a criminal violation of
 federal code.

In some states, both parties do not need to be informed that they are
being monitored...only one of the parties needs to be aware. However,
regardless of that, I don't see how this is any different in principle
from a company deciding to install a keystroke logger on your company
PC and take constant video of your screen. Is that illegal? Probably
not if the employees consent to it. How about if I monitor your
network traffic by decrypting your SSL connection at your PC's endpoint
by some SSL DLL that would leak the SSL master key and record
that and the SSL keystream to some central server? Again, I think
that would only be illegal if employees did not consent to monitoring.

That said, I do think that companies may be in trouble from a civil suit
perspective, especially if it had been widely known that 

Re: [cryptography] Password non-similarity?

2012-01-03 Thread Kevin W. Wall
On Tue, Jan 3, 2012 at 8:07 PM,  d...@geer.org wrote:

   So I would conjecture, at least in cases like this where users only
   login infrequently, that the password change policy every N days
   be done away with, or at the very least, we make N something
   reasonably long, like 365 or more days.

 Kevin, are you suggesting a 50 uses and change it rule?

Well, in the cases where users login infrequently, such as their telco
or wireless carrier where users only login once a month to pay their bill,
I think that makes more sense than requiring them to change it
every 90 days or so. Very few people are going to be able to memorize
their password when they only use it once a month and you make them
change it every 3 months (3 tries). In such cases, you could get almost
the same effect by making the change period very long. For instance,
instead of requiring a password change every 90 days, make them
change it once every 2 years. And if you do that by uses instead of
by days, it makes it a LOT easier / more relevant to warn them that
they have a password change coming up so it won't take them by
surprise. IMO, that's another reason why people have such a problem
logging in. We have a policy something like "warn the user 10 days in
advance that their password is going to expire," but they only log in
every 30 days, so at the end of those (say) 90 days, they are surprised
by a "Your password has expired. Please change it." message. Not only
do they not get a chance to think of a decent password that they can
remember, but they may not be prepared to safely record it. (For example,
maybe they use something like PasswordSafe to store it, but it's on
a USB flash drive that they don't happen to have at the moment b/c you've
taken them by surprise.) If instead, they could be greeted by a message
something like "You have 2 more uses of your current password allowed.
Would you like to change it now?", then they are not going to be hit
out of the blue that their password has expired. Unlike warnings that
are based on time (D days before password is scheduled to expire)
that the user might never see, at least they would always see these
warnings. Hopefully less surprise means better, stronger passwords.
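The use-count warning mechanism described above could be tracked with something as simple as the following (all class names, messages, and thresholds are hypothetical, for illustration only):

```java
// Tracks how many times a password has been used and produces
// a warning as the usage-based expiration approaches.
public class PasswordUseTracker {
    private final int maxUses;   // e.g., 50 uses per password
    private final int warnAt;    // start warning with this many uses remaining
    private int uses = 0;

    public PasswordUseTracker(int maxUses, int warnAt) {
        this.maxUses = maxUses;
        this.warnAt = warnAt;
    }

    // Called on each successful login; returns a warning message, or null if none.
    public String recordLogin() {
        uses++;
        int remaining = maxUses - uses;
        if (remaining <= 0) {
            return "Your password has expired. Please change it.";
        } else if (remaining <= warnAt) {
            return "You have " + remaining
                 + " more uses of your current password. Change it now?";
        }
        return null;   // no warning yet
    }
}
```

Because the warning fires at login time, the user is guaranteed to see it, unlike a calendar-based warning window they may never log in during.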

I don't think this is suitable for everything though. For example, if you
use Active Directory passwords inside your corporation for also logging
into lots of different servers, I think time-based expiration would work
better than usage-based expiration there. Otherwise, you'd have some
people that would have to be changing their password every 10 days
and others that would only be changing it every 250 days. There,
employee turnover also probably makes time-based expiration more
suitable.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2012-01-02 Thread Kevin W. Wall
On 2012/1/2 lodewijk andré de la porte lodewijka...@gmail.com:
 The reason for regular change is very good. It's that the low-intensity
 brute forcing of a password requires a certain stretch of time. Put the
 change interval low enough and you're safer from them.

This may make sense in specific cases, but in the general case,
say for web sites that have a large # of public users, there are other
things that this has to be weighed against. Specifically consider
cases where users might only login once a month to pay a bill. If
you require those users to change their passwords every 30, 60,
or 90 days, they probably will never actually have time to learn it.
And since we've tried to teach people not to write down their passwords
on PostIt notes, etc. many of these users don't write them down
at all.

So the end result is that many of these types of users frequently
forget their passwords, because they only use them 2 or 3 times
before they have to change them again. So that has the undesirable
effect of increasing calls to the helpdesk to have users' passwords
reset.  To drive this additional helpdesk cost down, IT then decides
to implement an "I forgot my password" mechanism that is generally
based on some set of trivial Q & A such as "What is your favorite
sports team?" or "Where did you attend elementary school?", etc.,
thus causing other major security issues.

So I would conjecture, at least in cases like this where users only
login infrequently, that the password change policy every N days
be done away with, or at the very least, we make N something
reasonably long, like 365 or more days.

That's why I've said and will say again, that your security policies
should be driven by your specific threat model. Unfortunately, most
companies don't do this. Instead they just perpetuate the myth
that everyone should be required to change their password every N
days because this is obviously best security practice for everyone.
It may be for your specific threat model, but it also might not be.

 We've had someone talk on-list about a significant amount of failed remote
 ssh login attempts. Should he chose not to force user to change their
 passwords they wouldn't. And the likelihood of a successful login
 would improve with the years (given coordination) to somewhere above the
 admin's comfort zone.

 The timeframe in which a password has to change also limits the maximum time
 exposed once someone has cracked it. This is relevant when the adversary
 needs multiple opportunities to coincide. The amount of time it'll have
 access without triggering resource-counting or other suspicious behavior
 alarms becomes limited, as changing a password would either lock him or the
 legitimate user out.

Although requiring the use of SSH public/private keys probably would
be better way to go here. The big problem here is for *nix systems
at least, even if you remember your password and could change it,
trying to remember 20+ different passwords for 20+ different servers,
all which expire at different times is, at a minimum, a major pain in
the ass, and generally will cost you in terms of requiring a password
having to be reset by some system administrator plus all the helpdesk
overhead.

 For most systems though, it's a complete waste of time.

Agree.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2012-01-02 Thread Kevin W. Wall
On Mon, Jan 2, 2012 at 7:12 PM, Craig B Agricola cr...@theagricolas.org wrote:
 On Sun, Jan 01, 2012 at 03:16:39AM -, John Levine wrote:
 Where's this log?  Wherever it is, it's on a system that also has their
 actual password.

 If I wanted to reverse engineer passwords, this doesn't strike me as a
 particularly efficient way to do so.

 R's,
 John

 Well, the log is presumedly unencrypted on the same machine that has a
 *hash* of their actual password.  It takes a lot longer to crack against
 the hashed password list than it does to scan the log for these type of
 log messages, which they can then check against the hashed password
 database quickly and easily.  I agree with Kevin that this scenario
 isn't enough justification for the overhead and user annoyance that is
 forced password rotation, but it's not an unreasonable scenario to want
 to mitigate.  Some web servers even make it easy to accidentally export
 the logs, since often HTTP is the access method of choice for the people
 who actually should be able to review the logs...

Agree that cracking effort far exceeds the effort of scanning the logs,
but keep in mind that in most cases, if you can break in and have the
password hash readable, then you likely already have admin permissions
and it's game over. (E.g., consider that /etc/shadow is usually only readable
by root and group 'shadow'.) OTOH, depending on where you log such
failures, that may or may not be world readable. (It really shouldn't be,
but many times it is.) And even if you are using syslog and a remote
log server and sending this to some SIEM product, keep in mind that
those monitoring these logs via a SIEM usually do not have superuser access
on those servers.

But, please understand that I was not trying to imply that this means
that periodically requiring password changes is a good idea. Generally,
it's a bad idea when we try to enforce a one-size-fits-all security policy
to everything. One needs to evaluate this on a risk basis on a case
by case basis.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Kevin W. Wall
On Tue, Dec 27, 2011 at 6:12 PM, Steven Bellovin s...@cs.columbia.edu wrote:
[snip]
 Here's a heretical thought: require people to change their passwords --
 and publish the old ones.  That might even be a good idea...

I'm not sure if you were just being facetious here or if you were serious, but
you know, I think you might just be onto something here...especially
if we could do this and allow some degree of anonymity. Maybe if we
could post the passwords, run them through a password cracker for
T minutes to see if they could be cracked that way or allow people
to comment on them. It would give people an opportunity to teach
how to create secure passwords and to critique weak ones by
showing why they are weak.

If this were something that was voluntary as well as anonymous,
I think it has a chance for the greater good. Without anonymity,
we would definitely have to make it only voluntary, or
at least grant an amnesty period where people could opt out.
Otherwise, you'd end up with a lot of lawsuits and likely fired
employees.

But I think you may be onto something here.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Kevin W. Wall
On Sat, Dec 31, 2011 at 9:02 PM, Bernie Cosell ber...@fantasyfarm.com wrote:
 On 1 Jan 2012 at 11:02, Peter Gutmann wrote:

 Bernie Cosell ber...@fantasyfarm.com writes:
 On 31 Dec 2011 at 15:30, Steven Bellovin wrote:
  Yes, ideally people would have a separate, strong password, changed
  regularly for every site.
 
 This is the very question I was asking: *WHY* changed regularly?  What
 threat/vulnerability is addressed by regularly changing your password?  I
 know that that's the standard party line [has been for decades and is
 even written into Virginia's laws!], but AFAICT it doesn't do much of
 anything other than encourage users to be *LESS* secure with their
 passwords.

 This requires an answer that's waaay too long to post here, I've made an
 attempt (with lots of references to historical docs) in the chapter
 Passwords in http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf (it's
 easier to post the link than to post large extracts here, since the 
 discussion
 is fairly in-depth).

 Actually, it isn't too large an extract to, basically, make my point:

    Another tenet of best-practice password management is to expire
    passwords at regular intervals and sometimes to enforce some sort of
    timeout on logged-in sessions [64].  Requiring password changes is
    one of those things that many systems do but no-one has any idea
    why.  Purdue professor Gene Spafford thinks this may have its
    origins in work done with a standalone US Department of Defence
    (DoD) mainframe system for which the administrators calculated that
    their mainframe could brute-force a password in x days and so a
    period slightly less than this was set as the password- change
    interval [65].  Like the ubiquitous Kilroy was here there are
    various other explanations floating around for the origins of this
    requirement, but in truth no-one really knows for sure where it came
    from.  In fact the conclusion of the sole documented statistical
    modelling of password change, carried out in late 2006, is that
    changing passwords doesn't really matter ...

Well, I can think of one real risk, but IMHO, it is minimal and
hardly justifies the hassle that we enforce upon millions of
users. I have seen this from personal observation as a sysadmin,
as well as having done it accidentally myself several times. What
is it, you ask? Well, on more than a few occasions, I've observed cases
where users have accidentally entered their password into the
username field (either alone, or with the username prepended).
Of course, the login attempt fails and, more to the point, the
invalid user name is logged. The users almost immediately
realize their mistakes, and then login correctly. Unfortunately,
most users don't realize that their password has just been logged
as an invalid user name, and their subsequent (logged) successful login
makes it rather trivial to associate that password with the actual
username of the user. And because they don't realize this, they
don't immediately change their password. (I confess that I have
even been guilty of this at times, but generally for sites that
probably shouldn't be requiring passwords in the first place.)
Anyhow, requiring a user change their password every 30/60/90/N
days mitigates this risk to some degree. Apparently, the idea
is that hopefully by the time an adversary discovers this from
the captured log files in question, the user will already have been forced
to change their password.  I think that's the theory at least.
I think the risk is low, especially if you train people to change
their passwords when they make this sort of screw-up. (It
wouldn't be too hard to conceive of an authN system that could
even automate this, although it would have to retain some
state information for login attempts that failed b/c of an
invalid user.) I suspect a similar argument could be made for
someone scribbling a username / password on a scrap of
paper and then tossing it in the trash where it might be
discovered by some dumpster diver.

Of course, IMO, this risk is hardly justification for requiring
that users periodically be forced to change their passwords.

To get to the real reason, I suspect you'd have to chat with
the corporate attorneys. I think the rationale in their mind
goes something like this and involves a misconception:

"Best security practice is to regularly change your
password. So if we force users to do that, we've
done due diligence, and should a security breach be
discovered, at least we can't be sued for treble the
damages, because we've done due diligence by following
industry best practice."

Or whatever. The misconception is of course, that this
truly is best practice. Pretty sure that it's some CYA
policy along this line that is driving this. And IT has learned
it's just easier to implement whatever legal requests than to
argue the rationality of the decision with their legal department.
(Besides, it's more work for IT, and thus job security to

Re: [cryptography] Password non-similarity?

2011-12-31 Thread Kevin W. Wall
On Sat, Dec 31, 2011 at 9:56 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sat, Dec 31, 2011 at 9:05 PM, Kevin W. Wall kevin.w.w...@gmail.com wrote:
 On Tue, Dec 27, 2011 at 6:12 PM, Steven Bellovin s...@cs.columbia.edu 
 wrote:
 [snip]
 Here's a heretical thought: require people to change their passwords --
 and publish the old ones.  That might even be a good idea...

 I'm not sure if you were just being facetious here or if you were serious, 
 but
 you know, I think you might just be onto something here...especially
 if we could do this and allow some degree of anonymity. Maybe if we
 could post the passwords, run them through a password cracker for
 T minutes to see if they could be cracked that way or allow people
 to comment on them.
 Google as a password cracker,
 http://www.lightbluetouchpaper.org/2007/11/16/google-as-a-password-cracker/.
 No need to waste local cycles (someone else previously posted a
 similar link).

True that.

 It would give people an opportunity to teach
 how to create secure passwords and to critique weak ones by
 showing why they are weak.
 I think this would be a bad idea. I imagine it would promote stemming
 related attacks. If not completely anonymous and coupled with some
 reconnaissance (IP = Company, find some users at company.com), it
 could prove to be a very dangerous practice.

Well, I wasn't referring to making the results public, but rather treating
them as proprietary, within the confines of a company. Should have made
that clear.

Of course, I'm pretty sure that you'd never be able to get this past the
corporate lawyers even if you did treat it as proprietary information and
made it completely voluntary on the part of the users. So probably the
best we could do is to run it as a science experiment, as a collaboration
between some CS and psych department at some university.
(Professor Bellovin: Hint, hint! ;-)

I think it would at least make for some interesting reading...in particular,
would users adjust their practice as they got feedback from prior
passwords.

 Besides, there's plenty of password lists floating around.
 http://www.google.com/#q=password+list.

That wasn't my point. My goal would be to see the effect of
feedback provided to users to see if it would change their
behavior of how they create passwords. For example, at
every chance I get, I suggest to my friends, colleagues,
and students whom I have taught that they can create
a strong password by simply thinking of some sufficiently
long, memorable phrase and using the first character of
each word and toss in some numbers and punctuation to
satisfy the password character constraints. So for example,
I might think of the lead-in phrase from Lincoln's Gettysburg
Address, "Four score and seven years ago, our fathers brought forth...",
and then translate that to something like "Fs7ya,ofbf".
(Of course, now that I mention this, someone will put "Fs7ya,ofbf..."
into a cracker dictionary--if it is not already there [I've written about this
before way back in 1999]--so you would be best to
avoid that particular phrase. ;-)
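The transformation described can even be mechanized. Below is a simplified sketch of my own (it keeps every word and handles only a few number words, so its output differs slightly from the hand-made "Fs7ya,ofbf" example above, which also dropped "and"): take the first letter of each word, substitute digits for common number words, and carry along trailing punctuation:

```java
import java.util.Map;

// Simplified first-letter passphrase transform: first letter of each word,
// digits substituted for common number words, trailing punctuation kept.
public class PhrasePassword {
    private static final Map<String, String> DIGITS = Map.of(
        "one", "1", "two", "2", "three", "3", "four", "4", "five", "5",
        "six", "6", "seven", "7", "eight", "8", "nine", "9", "ten", "10");

    public static String fromPhrase(String phrase) {
        StringBuilder sb = new StringBuilder();
        for (String word : phrase.split("\\s+")) {
            // Trailing punctuation (e.g., the comma in "ago,") is preserved.
            String trailing = word.replaceAll("^.*?([,.!?;:]*)$", "$1");
            String bare = word.replaceAll("[,.!?;:]", "").toLowerCase();
            if (bare.isEmpty()) continue;
            sb.append(DIGITS.getOrDefault(bare, String.valueOf(word.charAt(0))));
            sb.append(trailing);
        }
        return sb.toString();
    }
}
```

Obviously one would never publish the phrase actually used; the point is only that the mapping is mechanical and memorable.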

Indeed, Ross Anderson did some study of this in one of his
classes (sorry, I don't have the citation, but Ross, if you're
listening, feel free to pipe in) and discovered that passwords
created this way were almost as strong as completely
random passwords but were much more memorable.

Anyhow, that's only one technique. It's the one I use,
but there are others. See my write-up from 1999 here:
https://sites.google.com/site/kevinwwall/Home/presentations/good-passwords

It's a bit outdated, but IMO, the best thing about it is that
it provides both good and bad examples of each technique
and tells why the bad examples are bad.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Kevin W. Wall
On Sat, Dec 31, 2011 at 10:24 PM, Randall  Webmail rv...@insightbb.com wrote:
 From: Kevin W. Wall kevin.w.w...@gmail.com

Boy, the latter sounds like advice that a black hat hacker would give someone 
to
 ensure simple dictionary attacks are successful. Your dog's name? Really???

 Beats the usual method of writing it on a Post-It note where the janitorial 
 staff can see.

Nothing wrong with writing your password on a Post-It note. The problem is
*where* you keep that Post-It note. Put it in your wallet or purse or store
it in your locked desk drawer and the janitor isn't going to casually
see it.

 The current state of security in corporate America is somewhere between 
 parlous and laughable.

 I've been in a Fortune 100 CEO's office -- his login/pw were indeed on a 
 Post-It, stuck to his monitor.

That's true, but IMO, that's because most of corporate security is
driven by CYA policies rather than ones with any particular rational
threat model in mind. So instead of engaging real risks, we waste
our time fighting windmills.

 The most common password is "Password".

See, that would never fly at our company. They'd have to make
it "Passw0rd" or "Password1" because our AD policy requires
one uppercase, one lower case, and one numeric. :-P

 I know of at least one global company whose database password was "Oracle".

More common for our DBAs is the username written out backwards. (Their
excuse: "We tell the developers and/or operations teams to change it."
But very few ever do.)

 For a time in the 1980s, the BUPERS password on at least one dialup node was 
 "Letmein".

 If you're wanting thousands of users to change their passwords once a month 
 and you're NOT going to allow them to use Post-Its, you'd better plan to hire 
 hundreds of kids for Tech Support.

As Prof. Bellovin so aptly remarked, a better approach would be to
train people to use a password wallet / vault, e.g., Password Safe or
KeePass, etc. Then keep the file on a flash drive that you carry with
you, or if you are more trusting, keep it in the cloud somewhere.
Then you only have a small handful of passwords to worry about.

Train the users how to create intelligent, strong passwords (which we seldom do)
and they won't have to write them down. But teach them that it's OK to write
them down and put them in a secure place where only they have access to them.
(E.g., treat them like you treat your money!)

It's really not that hard.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Kevin W. Wall
On Sat, Dec 31, 2011 at 10:32 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sat, Dec 31, 2011 at 10:29 PM, Kevin W. Wall kevin.w.w...@gmail.com 
 wrote:
 On Sat, Dec 31, 2011 at 9:56 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Sat, Dec 31, 2011 at 9:05 PM, Kevin W. Wall kevin.w.w...@gmail.com 
 wrote:
 On Tue, Dec 27, 2011 at 6:12 PM, Steven Bellovin s...@cs.columbia.edu 
 wrote:
 [snip]
[snip]

 It would give people an opportunity to teach
 how to create secure passwords and to critique weak ones by
 showing why they are weak.
 I think this would be a bad idea. I imagine it would promote stemming
 related attacks. If not completely anonymous and coupled with some
 reconnaissance (IP = Company, find some users at company.com), it
 could prove to be a very dangerous practice.

 Well, I wasn't referring to making the results public, but rather treating
 them as proprietary, within the confines of a company. Should have made
 that clear.
 Gotcha. Treat it as IP - perhaps a creative work - and protect it
 through Copyright and DRM in case of loss ;)

Seriously, that's a great idea. I'm going to see if I can get our
attorneys to patent it before you. Ha!

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] Password non-similarity?

2011-12-30 Thread Kevin W. Wall
On Fri, Dec 30, 2011 at 8:40 PM, Randall  Webmail rv...@insightbb.com wrote:
 On Tue, 27 Dec 2011 15:54:35 -0500 (EST), Jeffrey Walton noloa...@gmail.com 
 wrote:
Hi All,

We're bouncing around ways to enforce non-similarity in passwords over
 time: "password1" is too similar to "password2" (and similar to
 password3, etc).

I'm not sure it's possible with one-way functions and block cipher residues.

Has anyone ever implemented a system to enforce non-similarity business rules?

 You are going to run into massive resistance from the user base, almost all
 of whom have been told of the organization's "Change your password every X
 days" rule, and almost the same number of whom have been told "Just pick a
 password you'll remember, like your dog's name, and then when you have to
 change it, just add a 1 on the end."

Boy, the latter sounds like advice that a black hat hacker would give someone to
ensure simple dictionary attacks are successful. Your dog's name? Really???
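On the non-similarity question that opened the thread: one-way hashes do make comparing against *old* passwords hard, but at change time the old and new passwords are both available in cleartext, so similarity can be enforced there. A minimal sketch (the edit-distance threshold is arbitrary, for illustration only):

```java
// Reject a new password that is within a small edit distance of the old one.
// Both values are available in cleartext only during the change operation,
// so this check must run there, not against stored hashes.
public class SimilarityCheck {
    // Classic dynamic-programming Levenshtein distance.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // "password1" -> "password2" has distance 1 and is rejected here.
    public static boolean tooSimilar(String oldPw, String newPw) {
        return editDistance(oldPw.toLowerCase(), newPw.toLowerCase()) < 3;
    }
}
```

This only catches similarity to the immediately previous password, which is all you can check without retaining older cleartext, and that you should never do.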

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein


Re: [cryptography] implementation of NIST SP-108 KDFs?

2011-12-28 Thread Kevin W. Wall
Adam,

On Wed, Dec 28, 2011 at 5:51 PM, Adam Back a...@cypherspace.org wrote:
 As there are no NIST KAT / test vectors for the KDF defined in NIST SP 108,
 I wonder if anyone is aware of any open source implementations of them to
 use for cross testing?

I am not aware of any NIST test vectors, but ESAPI Java does have a FOSS
implementation (under the new BSD license) at:
http://owasp-esapi-java.googlecode.com/svn/trunk/src/main/java/org/owasp/esapi/crypto/KeyDerivationFunction.java
that you could try comparing results to. It should be noted that we
interpreted section
7.6 of NIST SP 800-108 to imply that the context should be _optional_
rather than required (it says SHOULD rather than MUST), so we set it to the
empty string by default.
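For anyone wanting a quick cross-check target, the SP 800-108 counter-mode construction itself is short. Below is a minimal sketch assuming HMAC-SHA-256 as the PRF and big-endian 32-bit encodings for the counter and output length; those encodings (and the optional context) are exactly the knobs the spec leaves open, so don't expect byte-for-byte agreement with ESAPI's implementation without matching its choices:

```python
import hmac
import hashlib
import struct

def kdf_ctr(key_in, label, context=b"", out_bits=256):
    """SP 800-108 KDF in counter mode, sketch only.
    Fixed input data per iteration: [i]_32 || Label || 0x00 || Context || [L]_32,
    with HMAC-SHA-256 as the PRF and big-endian 32-bit integers (assumptions)."""
    out_bytes = out_bits // 8
    result = b""
    i = 1
    while len(result) < out_bytes:
        msg = (struct.pack(">I", i) + label + b"\x00" +
               context + struct.pack(">I", out_bits))
        result += hmac.new(key_in, msg, hashlib.sha256).digest()
        i += 1
    return result[:out_bytes]
```

Distinct labels (e.g. separating encryption and authenticity keys derived from one master key) yield independent-looking outputs, which is the main point of the construction.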

In addition, Jeff Walton (CC'd) is working on a C++ port of the ESAPI
Java crypto,
so he may have a working C++ implementation that he can point you to.

If you get different results than what ESAPI's KeyDerivationFunction produces
or if you run across any NIST test vectors, I would appreciate it if you could
let me know.

Thanks,
-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-12-03 Thread Kevin W. Wall
On Fri, Dec 2, 2011 at 1:07 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
[snip]
 OK, so it does appear that people seem genuinely unaware of both the fact that
 this goes on, and the scale at which it happens.  Here's how it works:

 1. Your company or organisation is concerned about the fact that when people
 go to their site (even if it's an internal, company-only one), they get scary
 warnings.

 2. Your IT people go to a commercial CA and say we would like to buy the
 ability to issue padlocks ourselves rather than having to buy them all off
 you.

When it is *only* company-only, I think it's much more common for companies
to set up their internal CAs and just do something like an SMS or WSUS push
to get their internal root CA certs into all the trusted keystores of all the
company computers. I've only seen the latter case used when it involves
residential customers. We can't take the approach of forcing them to
add our internal CA cert chain to their trust stores, and even if we could, it
would likely result in so many help desk calls as to make it infeasible.
However, we have occasionally used that approach with business partners.

 3. The CA goes through an extensive consulting exercise (billed to the
 company), after which they sell the company a padlock-issuing license, also
 billed to the company.  The company is expected to keep records for how many
 padlocks they issue, and pay the CA a further fee based on this.

 4. Security is done via the honour system, the CA assumes the company won't do
 anything bad with their padlock-issuing capability (or at least I've never
 seen any evidence of a CA doing any checking apart from for the fact that
 they're not getting short-changed).

Through the honor system? Does that mean that we can use a pair
of dice rolled two or three times for our hardware key generation? ;-)

Actually, more surprisingly, I've been told by those who manage
something like this for our company that even the reported
number of padlocks that they issue and are expected to
compensate the CA for is kept on the honor system--at least
for the CA with whom we interact. (Of course, I'm assuming that
this CA retains the right to periodically do audits, etc.)

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Bitcoin featured in the IEEE Spectrum

2011-10-20 Thread Kevin W. Wall
In case anyone is interested...
http://spectrum.ieee.org/computing/networks/the-worlds-first-bitcoin-conference/

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] validating SSL cert chains timestamps

2011-10-07 Thread Kevin W. Wall
On Fri, Oct 7, 2011 at 5:56 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 travis+ml-rbcryptogra...@subspacefield.org writes:

 If we assume that the lifetime of the cert is there to limit its window of
 vulnerability to factoring, brute force, and other attacks against
 computational security properties,

 Which only occurs in textbooks.  It's probably not necessary to mention
 that
 in real life the lifetime of a cert exists to enforce a CA's billing cycle,
 but beyond that, that it's common practice to re-certify the same key year
 in,
 year out, without changing it.  So even if you have a cert issued last
 year,
 it may contain a key generated a decade ago.

 It does, however, seem to ensure a subscription-based revenue model for
 CAs.

 That's it exactly.


As evidenced by the fact that the typical SSL server cert has a 1-year
lifetime and the typical CA cert has a 10-year (or longer) lifetime. The CAs
are all about minimizing the hassle and cost to themselves and maximizing
the cost (and thus profits) to everyone else. Unfortunately, there isn't
much push back on this. IMO, there should be a browser tweak one could set
to prevent the "Danger, Will Robinson! The sky is falling and evil aliens
are approaching" pop-ups that the browsers seem to unanimously give.
Sometimes it seems like they are in cahoots with the CAs.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Duong-Rizzo TLS attack (was 'Re: SSL is not broken by design')

2011-09-19 Thread Kevin W. Wall
On Mon, Sep 19, 2011 at 12:42 PM, Marsh Ray ma...@extendedsubset.com wrote:
 IMHO, as far as crypto protocols go the TLS protocol itself is pretty solid
 as long as the endpoints restrict themselves to negotiating the right
 options.

 On that note, there's a little more info coming out on the Duong-Rizzo
 attack:
 http://threatpost.com/en_us/blogs/new-attack-breaks-confidentiality-model-ssl-allows-theft-encrypted-cookies-091611

So does anyone know any more details on this? Specifically, is it an
implementation flaw or a design flaw?

Duong & Rizzo's previous work relied on padding oracle attacks, whereas
this one is categorized as a chosen-plaintext attack, so it looks like it's
not building on their previous work.

Lastly, would anyone care to speculate whether (for instance) using RC4
instead of AES/CBC would protect you from this chosen-plaintext attack? The
article cited by the URL that Marsh mentioned only mentions AES,
so perhaps other cipher choices are immune. Just not a lot of details
available yet. Guess we'll have to wait until Friday.
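For what it's worth, the generic weakness a chosen-plaintext attacker gets from chained CBC IVs (where the next record's IV is the previous record's last ciphertext block, as in TLS 1.0) can be shown in a few lines. A toy sketch only — the "block cipher" here is a hash-based stand-in, not real AES, and all record framing is omitted; it just illustrates how a guess about an earlier plaintext block can be confirmed:

```python
import hashlib
from os import urandom

BS = 16

def prp(key, block):
    # Toy 16-byte pseudorandom permutation standing in for AES-ECB (NOT a real cipher).
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = urandom(BS)
secret = b"top secret block"        # victim's unknown 16-byte plaintext block
iv0 = urandom(BS)
c1 = prp(key, xor(iv0, secret))     # victim's record: C1 = E(IV0 xor P)
iv_next = c1                        # chained IV: next IV = last ciphertext block

# Attacker submits P' = guess xor iv_next xor iv0; the cipher then computes
# E(iv_next xor P') = E(guess xor iv0), which equals C1 exactly when
# guess == secret.
guess = b"top secret block"
probe = xor(xor(guess, iv_next), iv0)
c2 = prp(key, xor(iv_next, probe))
match = (c2 == c1)                  # True confirms the guess
```

A stream cipher like RC4 has no IV chaining at all, which is consistent with the speculation above that cipher choice might matter for this particular attack class.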

Thanks,
-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] DigiNotar news

2011-09-15 Thread Kevin W. Wall
The DigiNotar breach made the IEEE Spectrum:

http://spectrum.ieee.org/riskfactor/telecom/security/diginotar-certificate-authority-breach-crashes-egovernment-in-the-netherlands/?utm_source=techalertutm_medium=emailutm_campaign=091511

I only skimmed it and while I didn't see anything new, it is a pretty
good synopsis of all the events. And other than the hyped headlines,
the rest of the article is pretty much even-handed.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] An appropriate image from Diginotar

2011-08-30 Thread Kevin W. Wall
On Tue, Aug 30, 2011 at 1:02 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 http://www.diginotar.com/Portals/0/Skins/DigiNotar_V7_COM/image/home/headerimage/image01.png

 The guy in the background must have removed his turban/taqiyah for the photo.

In keeping with the impersonation theme and Peter Steiner's
famous New Yorker cartoon, it would be more appropriate if the
background image were of the canine variety.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OT: Found: the missing link in RSA SecurID hack Read more: Found: the missing link in RSA SecurID hack

2011-08-27 Thread Kevin W. Wall
On Fri, Aug 26, 2011 at 11:36 PM, Jeffrey Walton noloa...@gmail.com wrote:
 It kind of takes the wind out of the sails of the Advanced Persistent
 Threat defense

 http://www.pcpro.co.uk/news/security/369556/found-the-missing-link-in-rsa-securid-hack:

Pretty much what I've been saying all along, ever since the story of the
RSA SecurID breach broke back in mid-March.

To me, the only really surprising thing is, that according to the article,
this spear phishing *only* targeted a single individual, or at most four.
(Only one person targeted on To: line and 3 others CC'd.) If that
is true, then I'd say that the attackers must have really done their
homework and had a high degree of certainty that one of those recipients
would follow their instructions and open the infected Excel spreadsheet.
They must have also known that AV software RSA was using would not
identify it as malware. But I definitely see no evidence of any APT here.
In my personal opinion, the whole APT thing was just a BS cover story that
Art Coviello fed the media. I stand by the conclusions of my original SC-L post
on this, archived at:

http://krvw.com/pipermail/sc-l/2011/002605.html

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OT: RSA's Pwnie Award

2011-08-08 Thread Kevin W. Wall
On Mon, Aug 8, 2011 at 8:00 PM, Jeffrey Walton noloa...@gmail.com wrote:
 In case anyone is interested, RSA won a Pwnie for lamest vendor
 response for its RSA SecurID token compromise:
 http://pwnies.com/winners/

What, you didn't like that APT excuse? ;-)

Rightly deserved, I'd say.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Kevin W. Wall
On Wed, Jul 13, 2011 at 11:39 AM, Andy Steingruebl a...@steingruebl.com wrote:
 On Wed, Jul 13, 2011 at 7:11 AM, Peter Gutmann
 pgut...@cs.auckland.ac.nz wrote:
 Andy Steingruebl a...@steingruebl.com writes:

The way it went for everyone I knew that went through it was:

1. Sniffing was sort of a problem, but most people didn't care
2. Telnet was quite a bit of a pain, especially when using NAT, and wanting
to do X11 forwarding
3. Typing in your password again and again over telnet (which did have
advantages over rlogin/rsh) was a pain.

Enter SSH.  It solved #1, but the big boon that got sysadmins to figure it out
and install it was that it *really* solved #2 and #3, hence major adoption.

 Uhh, this seems like a somewhat unusual reinterpretation of history.  SSH was
 primarily an encrypted telnet, and everything else was an optional add-on
 (when it was first published it was almost rejected with the comment this is
 just another encrypted telnet).  The big boon to sysadmins was that (a) you
 could now safely type in your root password without having to walk to the 
 room
 the box was in to sit at the console and (b) you could build and run it on
 pretty much everything without any hassle or cost.  That combination was what
 made it universal.

 Hmm, do you know that many sysadmins outside high-security conscious
 areas that really cared about typing the root password over telnet,
 especially back in 1997?  I don't.  Academia and banks cared, and
 often deployed things like securid or OPIE/SKEY to get away from this
 problem, but your average IT shop didn't care at all.

 Or are you really suggesting we got massive SSH adoption because of
 the security properties?   Certainly not in my experience...

 Maybe this calls for a survey/retrospective on reasons for adoption of SSH? :)

I can't speak of the experience of other companies, but I had a bunch of
sysadmins reporting to me at the time, and my recollection is that the main
reason why SSH caught on over other secure versions of telnet or rsh
is because it could be used in scripts without having to place the
user's password in plaintext anywhere. That was a major improvement because
SSH allowed one to authenticate to a remote system and execute a command
without hard-coding passwords or requiring manual input of said password.
As such, it was ideal for running automated scripts from crontab, at
bootup, etc.

The fact that it did all this over a secure channel was really not
that important
to the sysadmins who worked with me. In fact, I can't recall a single one of
them who was concerned about that. Then again, network sniffing was pretty
rare back then, but they were definitely concerned about leaving passwords in
scripts where some unauthorized person could see them. (And yes, this meant
that they didn't protect the SSH private key with a passphrase...a practice that
is still common today when SSH is used for scripting.)

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-27 Thread Kevin W. Wall
On Mon, Jun 27, 2011 at 8:59 PM, Arshad Noor arshad.n...@strongauth.com wrote:
 In 2008, I sent the following e-mail to my representatives and both
 Presidential candidates:

 http://seclists.org/dataloss/2008/q3/133

 Its intent was to initiate a change in policy wrt breach disclosures.
 There was not even the courtesy of a form-response from most of them,
 so it's not surprising that we continue to fly blind in 2011.

That's because you obviously forgot to attach the letter to your $5000
campaign contribution. (Where's an emoticon for sarcasm when you
need it? Guess this will have to do. :-P  )

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.        -- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] crypto security/privacy balance (Re: Digital cash in the news...)

2011-06-16 Thread Kevin W. Wall
On Thu, Jun 16, 2011 at 5:27 PM, James A. Donald jam...@echeque.com wrote:

 On 2011-06-17 4:02 AM, Nico Williams wrote:

  Crypto is no more than an equivalent of doors, locks, keys, safes, and
 hiding.


 The state can break locks, but it cannot break crypto.

 Hiding *is* effectual against the state - and long has been even before
 crypto.


The key word here being *effectual*. Crypto is effective, but some of your
posts make it seem to be a panacea, similar to how Bruce Schneier originally
thought (see the preface of *Applied Cryptography*) that cryptography was
going to be the salvation of information security. Crypto certainly has a
major role to play in ensuring confidentiality and integrity, but it is not
a be-all and end-all. The point is, the state doesn't always *need* to
*break* crypto to get your secrets.

To that end, I think you are misinterpreting what Nico was trying to say,
which was: crypto is no guarantee that you can hide things from the state,
at least as it is practiced by the general populace.

Specifically, if that state is some corrupt regime, crypto *may*[1] help,
but it will not ensure with 100% certainty that your secrets will remain
confidential from the state.

For that to be true, everything would have to be secure, from the OS all
the way down to all the firmware. (See Ken Thompson's ACM Turing Award
lecture, *Reflections on Trusting Trust*.) You'd also have to eliminate all
possible side channel attacks such as EMF leaks. And even if you are secure
from attacks coming from all those threat sources, an unscrupulous state
will have no compunctions about using a rubber hose attack on you or ones
you care about to get your secrets or get you to divulge your crypto keys.
(Someone in an earlier post mentioned how it is already getting close to
that in certain criminal cases in England. How much worse would it be with
a corrupt regime not following principled rule-of-law at all?)

While I don't want to put words into Nico's mouth, I think he was merely
trying to point out the difference between the use of crypto in theory and
crypto in practice.
_
[1] Using crypto in a fascist or otherwise corrupt state where crypto is
not the norm may have the opposite effect of drawing attention to yourself
and arousing the suspicion of the state. So in such cases, one at least
needs to account for plausible deniability; otherwise you'd be better off
keeping your head low so as not to be noticed in the first place.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digital cash in the news...

2011-06-11 Thread Kevin W. Wall
;-)

On Sat, Jun 11, 2011 at 6:29 PM, Jeffrey Walton noloa...@gmail.com wrote:

 On Sat, Jun 11, 2011 at 4:13 PM, John Levine jo...@iecc.com wrote:
 Unlike fiat currencies, algorithms assert limit of total volume.
 And the mint and transaction infrastructure is decentral, so there's
 no single point of control. These both are very useful properties.
 
  Useful for something, but not useful for money.  I can't help but note
  that the level of economic knowledge in the digital cash community is
  pitifully low, and much of what people think they know is absurd.
 OK, I bite - who has the knowledge? Is it the expert folks who have
 the US 14 trillion or so in debt? Or is it embodied in experts in
 other countries, such as Greece?

 
  [SNIP]
 

 Jeff
 ___
 cryptography mailing list
 cryptography@randombit.net
 http://lists.randombit.net/mailman/listinfo/cryptography




-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Mobile Devices and Location Information as Entropy?

2011-04-02 Thread Kevin W. Wall
On 04/02/2011 11:36 PM, Randall Webmail wrote:
 First, join the Navy ...

Too old...afraid they wouldn't take me. I'd just hang
out with an ex-Navy submariner instead. Or I guess in
some cases, an ex-Marine might qualify. :)

-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OTR algos for multi-user chat

2010-12-30 Thread Kevin W. Wall
On 12/30/2010 12:14 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:
 On Tue, Dec 21, 2010 at 07:33:23PM -0500, Kevin W. Wall wrote:
 On 12/21/2010 04:28 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

 PS: If you know any coders who are bored,

 http://www.subspacefield.org/~travis/good_ideas.txt

 Or maybe I should have said, if I respond to those that *HAVE* been
 done, would you update your list?
 
 To everyone who might do so, the answer is an unqualified yes.
 
 Finding out they're already done might solve a need I have.
 
 You may reply directly to this to get it seen and not mixed up 
 with list mail
 
 Thanks :-)

Travis,

I've commented on the stuff that I know about, so hope this helps a bit.
See below and look for lines prefixed by 'kww '.
-kevin
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME


== http://www.subspacefield.org/~travis/good_ideas.txt ==

Often I hear from people, especially younger ones, that they don't know what
to do.  I have compiled a list of ideas that I think would be great for someone
to work on.  Next time someone says they can't think of something to do, or
that they are bored, point them to this page.

This page is kinda old so check around and make sure the problem hasn't
already been solved.

# Programming and development ideas:

Create a tool that figures out (like make) in what order to run the
startup scripts on Linux.  Get rid of /etc/rcN.d altogether.  Cheat by
checking on how other OSes do it, NetBSD had a tool like this IIRC.

kww For starters, this might help:
kwwhttp://en.wikipedia.org/wiki/Init#Other_styles
kww Of those listed here, I have read that Ubuntu's upstart and
kww Fedora's systemd are gaining quite a following. I believe that
kww systemd is scheduled to be the default mechanism in FC15 and
kww that upstart has already replaced the old SysV style init under
kww the hood for Ubuntu (and I think for Fedora as well). Not sure
kww if that's what you are getting at or not.

Create a web front-end for managing asterisk.

Create a web front-end for a firewall like OpenBSD's pf or Linux's
iptables.  Show the last N blocked packets, the top N destination
ports of blocked packets over different periods of time, the top N
source IPs of blocked packets, etc.  This is open-ended; you can get
creative with graphics, such as the gd library for PERL, or even
visualization packages like graphviz, LGL, VolSuite, OpenQVIS, etc.

kww Some of the GUI-based firewalls that use iptables or pf provide
kww some of these things and others are provided by add-ons. I am quite
kww happy with IPCop (mostly because it works well on ancient hardware)
kww and its add-ons. I've also heard good things about Smoothwall and
kww PfSense. You can find a more complete list here:
kww
http://en.wikipedia.org/wiki/List_of_Linux_router_or_firewall_distributions
kww Finally, the 'ntop' and 'ngrep' programs might provide you with some
kww of these things as well.

Create a secure and standard way to tell routers and firewalls
(e.g. my DFD) to open up a port to a particular machine.  See SPA, uPNP.

kww IIRC, there is an emerging standard for this that I think that
kww Ivan Ristic and a few others have been pushing for and that has been
kww adopted by some of the commercial firewall vendors, but for the life
kww of me, I can't recall what the standard is named. I think it uses
kww some XML format (but what doesn't nowadays).

Create a FLOSS standard, possibly based on XML, for calendar entries
that works with cell phones, and a format for mailing meeting
invitations to people, and MUA plug-ins or helpers to add them to your
calendar.  Also let it scrape sites (like RSS aggregators) for import
into your own calendar. Like Google Calendar, but on your own systems.
I think Google calendar uses ical, so maybe look at that.  Also look at:
* Chandler
* Citadel
* Claws Mail (vCalendar plugin is required to handle iCalendar in Claws 
Mail)
* Darwin Calendar Server
* Drupal with its event module
* Evolution the Gnome email/calendar client
* Horde
* Kontact (namely KOrganizer and KMail)
* Lightning (a Mozilla extension for Thunderbird)
* Moodle will export iCalendar data or let you subscribe to a Moodle
iCalendar feed
* Mulberry
* OLAT - LMS supporting import and export of personal and shared calendars
via iCal
* OpenCRX
* Opengroupware.org
* Open-Xchange
* PHP iCalendar web based display of shared calendars
* Plone open source content management system
* Simple Groupware
* SPIP a CMS that allows the export of its site calendar in the iCal format
* Sunbird (a Mozilla stand-alone application)
* TYPO3 via its Calendar Base Extension
* WebCalendar
* WebGUI
* Webical
* Zimbra Collaboration

Re: [cryptography] OTR algos for multi-user chat

2010-12-21 Thread Kevin W. Wall
On 12/21/2010 04:28 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:
snip
 PS: If you know any coders who are bored,
 
 http://www.subspacefield.org/~travis/good_ideas.txt

Are you aware that more than a few things on this list have already
been done?

-kevin
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] OTR algos for multi-user chat

2010-12-21 Thread Kevin W. Wall
On 12/21/2010 04:28 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

 PS: If you know any coders who are bored,
 
 http://www.subspacefield.org/~travis/good_ideas.txt

Or maybe I should have said, if I respond to those that *HAVE* been
done, would you update your list?

-kevin
--
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-17 Thread Kevin W. Wall
On 12/17/2010 07:42 AM, Ian G wrote:
 (resend, with right sender this time)
 
 On 17/12/10 3:30 PM, Peter Gutmann wrote:
 
 To put it more succinctly, and to paraphrase Richelieu, give me six
 lines of
 code written by the hand of the most honest of coders and I'll find
 something
 in there to backdoor.
 
 
 This is the sort of extraordinary claim which I like.
 
 So, how to explore this claim and turn it into some form of
 scientifically validated proposition?
 
 Perhaps we should run a competition?
 
Come one, come all!  Bring your K&R!
 
Submit the most subtle backdoor into open source crypto thingumyjob.
 
Win fame, fortune, and a free holiday in a disputed part of Cuba ...
 
Judged by a panel of extremely crotchety and skeptical cryptoplumbers
 
(aka, assembled herein).
 
 Fancy?

I like it. And I propose that this be the 6 lines of code:

int a;
int b;
int c;
int d;
int e;
int f;

Not impossible, but good luck with that!  OK, don't like that one? How about
these 6 lines:

}
}
}
}
}
}

or maybe 6 arbitrary #include lines? Or to be *really* mean, try to do
something with this?

void someNeverCalledFcn()
{
// Any 6 lines you would like
}

Oh, and BTW, did I mention that these *NINE* lines are the LAST 9 lines of
the C source file and as the function name indicates, it's just dead
code that someone left lying around and is never called??? I'm pretty sure
this one is especially hard to do much with other than perhaps causing
compilation errors. (Or maybe you can exploit a BoF in the C compiler!!!
Does that count? Works for me.)

OK, obviously, such a contest would need some additional constraints, such
as the one attempting the back door gets to see the rest of the program! Fair
enough.

Also, such a contest should not be CONTRIVED code, but actual working code.
So, the greater chore might be to pick something suitable to attempt to
back door.

Lastly, since this whole discussion arose from allegations of a OpenBSD IPSec
back door, I contend that 1) not only should the code be open source, but
2) the back door must be implemented in a way that is NOT obvious!

Why do I want the latter constraint (back door not obvious)? Because,
the OpenBSD team is very thorough about doing manual code inspection
of all the code that is in the OpenBSD kernel. So in the case of this
specific allegation, such back doored code would have had to slip by
any original as well as subsequent code inspections. If the back door
were obvious (and I realize that's a subjective term, but we are all
likely to say I'd know an *obvious* back door if I saw it...at least
if it is in my area of subject matter expertise), then it would have been
useless.

Anyway, I like Ian's idea. This could replace the Obfuscated C Code
Contest that they (used to? still?) hold, which was getting really boring
anyway.

Thoughts?
-kevin
--
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-17 Thread Kevin W. Wall
On 12/17/2010 12:34 PM, Jon Callas wrote:
 ...snip...
 Searching the history for stupid-ass bugs is carrying their paranoid
 water. *Finding* a bug is not only carrying their water, but accusing
 someone of being underhanded. The difference between a stupid bug and
 a back door is intent. By calling a bug a back door, or considering
 it, we're also accusing that coder of being underhanded. You're doing
 precisely what the person throwing the paranoia wants. You're sowing
 fear and paranoia.

 Of course there are stupid bugs in the IPsec code. There's stupid bugs
 in every large system. It is difficult to assign intent to bugs, though,
 as that ends up being a discussion of the person.

Or, put another way, when it comes to maliciousness versus human stupidity,
I'll pick human stupidity almost every time.

-kevin
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography