Re: [whatwg] Cryptographically strong random numbers

2011-02-22 Thread Brendan Eich
On Feb 22, 2011, at 2:00 PM, Jorge wrote:

 On 22/02/2011, at 22:36, Brendan Eich wrote:
 (...)
 
 However, Math.random is a source of bugs as Amit Klein has shown, and these 
 can't all be fixed by using a better non-CS PRNG underneath Math.random and 
 still decimating to an IEEE double in [0, 1]. The use-cases Klein explored 
 need both a CS-PRNG and more bits, IIRC. Security experts should correct 
 amateur-me if I'm mistaken.
 
 .replace( /1]/gm, '1)' ) ?

Right.

Reading more of Amit Klein's papers, the rounding to IEEE double also seems 
problematic. Again, I'm not the crypto-droid you are looking for.

/be



Re: [whatwg] Cryptographically strong random numbers

2011-02-22 Thread Brendan Eich
On Feb 22, 2011, at 2:49 PM, Erik Corry wrote:
 I can find Klein's complaints that the implementation of Math.random is 
 insecure but not his complaints about the API.  Do you have a link?

In the paper linked from http://seclists.org/bugtraq/2010/Dec/13, section 3
("The non-uniformity bug"), viz:

"Due to issues with rounding when converting the 54 bit quantity to a double
precision number (as explained in
http://www.trusteer.com/sites/default/files/Temporary_User_Tracking_in_Major_Browsers.pdf
section 2.1), x2 may not accurately represent the state bits if the whole
double precision number is ≥ 0.5."

but that link dangles, and I haven't had time to read more.

The general concern about the API arises because Adam's API returns a typed 
array result that could have length > 1, i.e., not a random result that fits 
in at most 32 (or even 53) bits.

/be

Re: [whatwg] Cryptographically strong random numbers

2011-02-22 Thread Brendan Eich
On Feb 22, 2011, at 3:45 PM, Erik Corry wrote:
 Thanks for the link. Having read the section in question I am satisfied that 
 the author has no problem with the API.
 
In theory, sure. Bits are bits.

The practical issue is usability, where less usable interfaces tend to breed 
more bugs, as I argued was a hazard of the proposal to return a plain old Array 
containing uint16 values as elements. Glenn Maynard's point about there being 
more to go wrong with IEEE doubles seems to be validated by the IE9 preview 
release Math.random bugs that Amit Klein found. From the crypto-hacker point of 
view, anything that makes it harder than necessary to get random uint{8,16,32} 
values is that much less good.

If we have only the number type for the result, then Math.random is the API 
template to match. Given typed arrays / binary data, Adam's API looks more 
usable, even counting the cost of differing from Math.random in its API 
signature.

/be





Re: [whatwg] Cryptographically strong random numbers

2011-02-14 Thread Brendan Eich
On Feb 14, 2011, at 8:40 AM, Boris Zbarsky wrote:

 On 2/14/11 11:31 AM, Mark S. Miller wrote:
 On Mon, Feb 14, 2011 at 2:47 AM, Adam Barth w...@adambarth.com
 mailto:w...@adambarth.com wrote:
 
That's a pretty long time horizon.  You're going to start discussing
it in 2-4 months?  That seems a bit overwrought for what amounts to
four lines of code.
 
 For what it's worth, it's a lot more than 4 lines of code in Gecko, because 
 we don't have an existing source of strong randomness in Spidermonkey.  It'd 
 be about that much code in the DOM, where we could pass the buck to NSS, but 
 for Spidermonkey it would take a good bit more work.

Not really. We use callback-style APIs to delegate such chores as l10n to the 
embedding, and the same can be done for randomness-hunting. It's just 
interfacing, or buck-passing -- but I'm still wondering what magical four lines 
of code provide useful lower bounds on bits of entropy in WebKit. Is this just 
non-interoperable delegation to the host OS?

Adam, I like moving fast (but beware, that's how JS and the DOM were created), 
but we need a cross-browser spec of some kind, even if we all add 
crypto.getRandomValues quickly. The original y2k-buggy, 
cloned-from-java.util.Date JS Date object of 1995 required a lot of work for 
ES1 to remove OS dependencies that made for interop hell.

For a core language standard, we would want some kind of minimum quality 
guarantee on the randomness.

Quick thought on the name: to avoid vagueness and the "why not wider int or 
even float element types" questions that have already come up, call the method 
getRandomBytes.

/be



Re: [whatwg] Cryptographically strong random numbers

2011-02-14 Thread Brendan Eich
On Feb 14, 2011, at 11:31 AM, Adam Barth wrote:

 What's non-interoperable about filling an ArrayBuffer with random bytes?  I'm 
 not sure I understand your question.

The question is what OSes fail to provide enough random bits these days.

This may just be a sanity-checking step (my sanity, at least; I lived through 
the great entropy hunt of 1995; 
http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html [link courtesy 
dwagner]).


 However, I'm disinclined to wait on the basic best-effort PRNG for that to 
 happen.

What would you be waiting for? Ignoring Ecma, just adding code to WebKit 
doesn't make a cross-browser standard. Never mind Firefox (we'll do something 
soon enough to match). What about IE?

It seems to me we (whatwg members, w3c members; browser vendors in general) 
need something more than IDL in the way of a spec.


 I added support for all the integer ArrayBuffer types, so getRandomBytes 
 isn't a particularly accurate name.

Ok, that seems fine (now that I have read your patch -- thanks for the link!).

/be



Re: [whatwg] Cryptographically strong random numbers

2011-02-14 Thread Brendan Eich
On Feb 14, 2011, at 12:26 PM, Adam Barth wrote:

 On Mon, Feb 14, 2011 at 11:56 AM, Brendan Eich bren...@mozilla.org wrote:
 On Feb 14, 2011, at 11:31 AM, Adam Barth wrote:
 What's non-interoperable about filling an ArrayBuffer with random bytes?  
 I'm not sure I understand your question.
 The question is what OSes fail to provide enough random bits these days.
 
 This may just be a sanity-checking step (my sanity, at least; I lived through 
 the great entropy hunt of 1995; 
 http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html [link courtesy 
 dwagner]).
 
 Somehow OpenSSL and NSS seem to solve that problem given that cryptographic 
 entropy is required to make HTTPS secure.  I'm certainly willing to believe 
 I've goofed it up, but I suspect it's not that much of a limitation these 
 days.

I'm happy if the answer is all OSes, mobile and desktop, provide enough high 
quality randomness. Looking for data if anyone has it.


 However, I'm disinclined to wait on the basic best-effort PRNG for that to 
 happen.
 
 What would you be waiting for? Ignoring Ecma, just adding code to WebKit 
 doesn't make a cross-browser standard. Never mind Firefox (we'll do something 
 soon enough to match). What about IE?
 
 Given that we've missed IE9, we're talking about IE10 at the earliest.  If 
 history is a guide (which it might not be), that means we're talking about 
 something 2 years in the future.

Oh, who knows what any vendor will do under modern competitive pressure. Let's 
not assume :-/. My point is: to engage all the major browser vendors, we need a 
spec, not four-line (net of #ifdefs) patches.

Here is another spec-worthy issue that IDL can't capture (and, if we choose one 
way, can't even express): the #ifdefs raise the question (which David Wagner 
and I discussed briefly in private correspondence, and which I see came up on 
the whatwg list already) of whether it is better to provide a throwing stub 
when the OS randomness configury is not enabled, or better not to provide an 
object-detectible method at all.

I'm inclined toward not providing a method that always throws. I've seen too 
much fool's gold mislead developers. Bugs and throwing stubs underlying 
detectible methods have led web developers to not just object-detect, but 
unit-test online! (See 
http://www.slideshare.net/jeresig/the-dom-is-a-mess-yahoo .) But testing 
detectible methods for correctness further bloats downloaded JS and slows 
first-run execution.


 Ok.  I'll write up a spec later today.

Thanks.


 On Mon, Feb 14, 2011 at 12:15 PM, Mark S. Miller erig...@google.com wrote:
 As has already been discussed, since these issues are general EcmaScript 
 issues, applying not just in the browser but in many places JavaScript is run 
 (e.g., nodejs), I suggest that this be the last message on this thread 
 cross-posted to whatwg. Unless someone has a reason for retaining whatwg in 
 the addressee list, I will drop it from all my further postings on this 
 thread.
 
 Actually, that is precisely what we're discussing.  My suggestion is that we 
 do both: browsers implement a simple DOM-based API today that handles the 
 basic arc4random-like use case and TC39 go through whatever multi-month 
 (year?) process it likes and spec out whatever all-sing, all-dance crypto 
 library it likes.

There seems to be some spin here. What does "today" mean, and why the loaded 
"multi-month (year?) process" and "all-sing[ing]" etc. imputations to Ecma? I 
hope you are not describing how quickly you can hack on WebKit code, because 
while I can hack quickly on Mozilla code, that does not set the pace of a 
standard, never mind make a new feature available cross-browser to web 
developers.

While it indeed takes years to produce new Ecma (ISO) specs, we on TC39 support 
early prototyping of harmonious proposals, so web developers can start using 
such candidate features. But for this to work we need a hand-shake on what is 
harmonious.

If the idea is to promulgate a de-facto standard via Chrome and let other 
browsers reverse-engineer it, that can work, but it could backfire.

Extending the old window.crypto object we added ages ago at Netscape may be a 
good idea and a faster route to a high-quality cross-browser RBG. I won't argue 
about that. My objection here is to your choice of words and the way they might 
foreclose better cooperation among vendors: such an RBG API is not "done 
today", and it needs more than just one implementation to be a standard other 
browsers will implement.

/be



Re: [whatwg] Cryptographically strong random numbers

2011-02-14 Thread Brendan Eich
On Feb 14, 2011, at 1:29 PM, Adam Barth wrote:

 On Mon, Feb 14, 2011 at 12:49 PM, Brendan Eich bren...@mozilla.org wrote:
 On Feb 14, 2011, at 12:26 PM, Adam Barth wrote:
 On Mon, Feb 14, 2011 at 11:56 AM, Brendan Eich bren...@mozilla.org wrote:
 On Feb 14, 2011, at 11:31 AM, Adam Barth wrote:
 What's non-interoperable about filling an ArrayBuffer with random bytes?  
 I'm not sure I understand your question.
 The question is what OSes fail to provide enough random bits these days.
 
 This may just be a sanity-checking step (my sanity, at least; I lived 
 through the great entropy hunt of 1995; 
 http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html [link courtesy 
 dwagner]).
 
 Somehow OpenSSL and NSS seem to solve that problem given that cryptographic 
 entropy is required to make HTTPS secure.  I'm certainly willing to believe 
 I've goofed it up, but I suspect it's not that much of a limitation these 
 days.
 
 I'm happy if the answer is all OSes, mobile and desktop, provide enough high 
 quality randomness. Looking for data if anyone has it.
 
 As far as I can tell, /dev/urandom works great on all modern Unix-derived 
 operating systems (e.g., Mac, Linux, BSD, iOS, Android).  Windows since 
 Windows 2000 has CryptGenRandom:
 
 http://msdn.microsoft.com/en-us/library/aa379942(v=vs.85).aspx
 
 Are there other operating systems you're worried about?

Not for Mozilla (see below about BeOS). So as not to put up stop energy, let 
me say this is great news, and I hope we're done.

In parallel, hearing "all clear" from Ecma and W3C members who care about 
Symbian, Nokia's Linux-based stuff, anything not modern Windows (older mobile 
stuff), or the various proprietary real-time/embedded OSes would be helpful.


 Ok.  We can change the API to be like that, if you like.  In WebKit, 
 USE(OS_RANDOMNESS) is defined on virtually every port.  There might be some 
 ports that aren't covered (e.g., BeOS) though.

Heh, Mozilla has a BeOS port, along with other OS/CPU targets that are not 
tier 1 (where tier 1 has /dev/urandom or CryptGenRandom). I think the Amiga 
port died years ago ;-). But these can do without. The question is what the 
mechanics of doing without should be (runtime exception, or no method 
detected).


 There seems to be some spin here. What does today mean, and why the loaded 
 multi-month (year?) process and all-sing[ing] etc. imputations to Ecma? I 
 hope you are not describing how quickly you can hack on WebKit code, because 
 while I can hack quickly on Mozilla code, that does not set the pace of a 
 standard, never mind make a new feature available cross-browser to web 
 developers.
 
 Maybe I misunderstood TC39's intentions.  My understanding is that your 
 aspirations include a full-featured crypto library, e.g., at the level of 
 complexity of OpenSSL rather than at the complexity of arc4random.  Certainly 
 designing and implementing such a feature is a longer-term prospect.

Mark suggested such a program, and I like the idea (as you clearly do, cited 
below), but TC39 as a committee has not bought into it yet.

Putting aside the idea of a larger crypto library TC39 task group, Mark did 
make a special case for the RBG in the core language, since Math.random is in 
the core language, cannot be removed, yet is also a footgun.

This has been TC39's position: whatever we do with a separate task group for a 
crypto library, we have a Harmony agenda item, represented by the place-holder 
proposal at

http://wiki.ecmascript.org/doku.php?id=strawman:random-er


 While it indeed takes years to produce new Ecma (ISO) specs, we on TC39 
 support early prototyping of harmonious proposals, so web developers can 
 start using such candidate features. But for this to work we need a 
 hand-shake on what is harmonious.
 
 If the idea is to promulgate a de-facto standard via Chrome and let other 
 browsers reverse-engineer it, that can work, but it could backfire.
 
 If that were my intention, we wouldn't be having this discussion.

I think you're probably right that whatwg could get some crypto.getRandomValues 
spec together faster. For one, you've already done a bunch of work in that 
context!

But I see the situation as fluid, no matter what standards body claims 
jurisdiction. If we prototype and test something successfully, then (with name 
changes if necessary) it could be put into either standard process. Neither 
process is fast, since for IPR release the whatwg one still must flow into 
w3c.

So my point is that nothing in the current standards bodies necessitates that 
an RBG proto-spec appear "today" in the whatwg context, vs. years from now in 
Ecma. Maybe we should have two APIs, but as David Bruant just argued, wouldn't 
it be better to have only one?

The Chrome idea is not only a matter of your intentions. It could happen no 
matter what you intend, and that could be a good thing, too -- in the best 
case. I've promulgated de-facto standards, some of which did not suck. I did 
this during

Re: [whatwg] Cryptographically strong random numbers

2011-02-14 Thread Brendan Eich
On Feb 14, 2011, at 3:03 PM, Allen Wirfs-Brock wrote:

 And why overwrite the elements of an existing array?  Why not just creating a 
 new Array and use the argument to specify the desired length?

Just to respond to this, I believe the reusable buffer is an optimization 
(premature? perhaps not for those JS VMs that lack super-fast generational GC) 
to allow the API user to amortize allocation overhead across many calls to 
getRandomValues. Of course, with a fast enough GC or few enough calls, this 
optimization doesn't matter.

The IDL's use of an array inout parameter also supports efficient bindings for 
languages with stack allocation, which is a non-trivial win in C and C++ not 
only compared to malloc performance-wise, but also for automated cleanup (vs. 
intrinsic cost of free, plus on some OSes, which free do I call?).

/be