On 30/09/13 16:43, Adam Back wrote:
On Mon, Sep 30, 2013 at 02:34:27PM +0100, Wasa wrote:
On 30/09/13 10:47, Adam Back wrote:
Well clearly passwords are bad and near the end of their lifetime with
GPU advances, and even amplified password authenticated key exchanges like
EKE have a (so far) unavoidable design requirement to have the server
store something offline-grindable, which could be key-stretched, but that's
it.  PBKDF2 + current GPU or ASIC farms = game over for passwords.

What about stronger password-based key exchanges like SRP and JPAKE?

What I mean there is that a so-far unavoidable aspect of the AKE design
pattern is that the server holds a verifier v = PBKDF2(count, salt, password),
so the server, if hostile, or of even more concern an attacker who steals the
whole database of user verifiers from the server, can grind passwords against
it.  A new such server hashed-password database attack is disclosed, or hushed
up, every few weeks.
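
To make the grinding concrete, here is a minimal sketch of that attack against
a stolen verifier; the hash, iteration count, salt and wordlist are all
illustrative placeholders, not anyone's production settings:

    import hashlib

    def verifier(password, salt, count=100_000):
        # v = PBKDF2(count, salt, password), as above; SHA-256 and the
        # iteration count here are illustrative choices
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, count)

    # what the server stores, and what a database thief walks away with
    salt = b"per-user-salt"
    stolen_v = verifier("correct horse", salt)

    # the offline grind: one PBKDF2 per guess, trivially parallel across
    # guesses and across users
    for guess in ["123456", "letmein", "correct horse"]:
        if verifier(guess, salt) == stolen_v:
            print("recovered:", guess)
            break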

Passwords don't scale up and are very inconvenient, but are you sure your
argument "PBKDF2 + current GPU or ASIC farms = game over for passwords"
really holds?  What about scrypt?  And theoretically, you can always
increase the number of rounds in the hash...  I refer to this link too:
http://www.lightbluetouchpaper.org/2013/01/17/moores-law-wont-kill-passwords/

You know GPUs are pretty good at computing scrypt.  E.g. look at Litecoin
(Bitcoin with hashcash mining changed to scrypt mining; people use GPUs for a
~10x speed-up over CPUs).  Litecoin was originally proposed, as I understood
it, to be more efficient on CPU than GPU, so that people could CPU-mine and
GPU-mine without competing for resources, but they chose a 128kB
memory-consumption parameter, and it transpired that GPUs can compute on that
memory size fine (any decent GPU has > 1GB of RAM and quite a nice caching
hierarchy).  Clearly it's desirable to have modest memory usage on a CPU, or
if it fills the L3 cache the CPU will slow down significantly for other
applications.

Depends on the context, I guess.  If you are using a smartphone to log in to your banking app, it does not matter so much if you slow down other _background_ apps.

Even 128kB is going to fill L1 and half of L2, which has to cost generic
performance.  Anyway, in the Bitcoin context that coincidentally was fine,
because then FPGAs & ASICs became the only way to profitably mine
hashcash-based Bitcoin, and so GPUs were freed up to mine scrypt-based
Litecoin.  Also, for Bitcoin purposes, higher-memory scrypt parameters
increase the cost of the validation phase (where all full nodes check all
hashes and signatures); a double SHA256 is a lot faster than a scrypt at even
128KB, and changing that to e.g. 128MB will only make it worse.
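
For reference, scrypt's working set is roughly 128 * N * r bytes, which is
where the 128kB and 128MB figures above come from.  A small sketch with
illustrative parameters (Litecoin's published parameters are N=1024, r=1,
p=1; the password, salt and dklen below are placeholders):

    import hashlib

    def scrypt_memory_bytes(n, r):
        # scrypt's big vector is N blocks of 128*r bytes each
        return 128 * n * r

    print(scrypt_memory_bytes(1024, 1))    # 131072 ~ 128kB (Litecoin's N=1024, r=1)
    print(scrypt_memory_bytes(2**20, 1))   # ~128MB, the larger setting mentioned above

    # deriving a key at the 128kB setting (Python 3.6+ with OpenSSL scrypt)
    key = hashlib.scrypt(b"password", salt=b"salt", n=1024, r=1, p=1, dklen=32)
    print(key.hex())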

Also, the PBKDF2 / scrypt happens on the client side: how do you think your
ARM-powered smartphone will compare to a 9x 4096-core GPU monster?  Not
well :)
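
One way to see that asymmetry: the iteration count is bounded by what the
slowest legitimate client will tolerate per login, while the attacker's
per-guess cost is set by whatever hardware is cheapest.  A rough calibration
sketch; the ~250 ms budget and starting count are arbitrary assumptions:

    import hashlib, time

    def calibrate(target_seconds=0.25, iters=10_000):
        # double the iteration count until one PBKDF2 call takes roughly
        # target_seconds on *this* device; the slowest client you care
        # about sets the ceiling
        while True:
            t0 = time.perf_counter()
            hashlib.pbkdf2_hmac("sha256", b"password", b"salt", iters)
            if time.perf_counter() - t0 >= target_seconds:
                return iters
            iters *= 2

    print("iterations a ~250ms budget buys here:", calibrate())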

I had this in the back of my mind when I replied to your email, so I tend to be on your side here.  How much would it help to delegate PBKDF2 / scrypt to the smartphone's GPU to break this asymmetry?

Since SRP and JPAKE use modular-exponentiation sorts of computation rather than a hash, any idea how this impacts attackers?
How well can you parallelize a dictionary brute force for the DL problem?
I'm no expert, so glad to hear about it.
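
For what it's worth, with an SRP-style verifier v = g^x mod N (x a hash of
salt and password; the exact hash layout varies by SRP version) an attacker
holding a stolen verifier needs no discrete log: each candidate password
costs one hash plus one modular exponentiation, and candidates are
independent, so the search parallelizes trivially.  A toy sketch, with a toy
modulus rather than real SRP group parameters:

    import hashlib

    N = 2**127 - 1   # toy prime modulus; real SRP uses a much larger safe prime
    g = 2

    def x_of(salt, password):
        # simplified: x = H(salt || password); real SRP hashes in the username too
        return int.from_bytes(hashlib.sha256(salt + password.encode()).digest(), "big")

    def verifier(salt, password):
        return pow(g, x_of(salt, password), N)

    salt = b"per-user-salt"
    stolen_v = verifier(salt, "hunter2")

    # one hash + one modexp per guess; guesses are independent, so this
    # parallelizes across as many cores/GPUs as you can rent
    for guess in ["letmein", "password", "hunter2"]:
        if verifier(salt, guess) == stolen_v:
            print("recovered:", guess)
            break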

So yes, I stand by that.  One could use higher-memory scrypt parameters, and
the claim with memory-bound functions goes that memory I/O is less dissimilar
between classes of machines (smartphone vs GPU) than CPU speed is.
However, you have to bear in mind also that scrypt actually has CPU/memory
trade-offs, known about and acknowledged by its designer.

I believe it's relatively easy to construct a tweaked scrypt that doesn't have
this problem.

Also, on the Bitcoin/Litecoin side of things, I heard a rumor that there were
people working on a Litecoin ASIC.  Bitcoin FTW in terms of proving the
vulnerability of password-focused crypto KDFs to ASIC hardware.  The scrypt
time/memory trade-off issue may be useful for efficient scrypt ASIC design.

But there is a caveat, which is that the client/server imbalance is related to
the difference in CPU power between mobile devices and server GPU or ASIC
farms.  While it is true that Moore's law seems to have slowed down in terms
of clock rates and serial performance, the number of cores and memory
architectures are still moving forward, and for ASICs density, clock rates
and energy efficiency are increasing, and that's what counts for password
cracking.  But yes, the longer-term picture depends on the trend of the ratio
between server GPU/ASIC performance and mobile CPU performance.

Another factor is that mobiles are more elastic (variable clock, more cores),
but to get full performance you end up with power drain, and people don't
thank you for draining their phone battery.  It would be possible for e.g.
ARM to include scrypt or a new ASIC-unfriendly password KDF on the die,
perhaps, if there were enough interest.  The ready availability of cloud is
another dynamic: you don't even have to own the GPU farm to use it.  You can
rent it by the hour or even minute, or use paid password-cracking services
(with some disclaimer that it had better be for an account owned by you).

Anyway, all that because we are seemingly allergic to using client-side keys,
which kill the password problem dead.  For people with smartphones to hand
all the time, e.g. something like oneid.com (*) can avoid passwords: keys are
split between the smartphone and the server, there is nothing on the server
to grind, and even a stolen smartphone can't have its encrypted key store
offline-ground to recover the private keys (they are also split, so you need
to compromise the server and the client simultaneously).  Also, the user can
lock the account at any time in the event of theft or device loss.
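
A generic sketch of the 2-of-2 split-key idea (not oneid's actual protocol,
which isn't specified here; the modulus is a toy value):

    import secrets

    q = 2**127 - 1   # toy modulus; a real scheme uses its signature curve's group order

    def split(d):
        # 2-of-2 additive split: each share alone is a uniformly random value,
        # so there is nothing password-derived for either party to grind
        d_phone = secrets.randbelow(q)
        d_server = (d - d_phone) % q
        return d_phone, d_server

    d = secrets.randbelow(q)
    d_phone, d_server = split(d)
    assert (d_phone + d_server) % q == d   # illustration only; a real protocol
                                           # never recombines the shares in one place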

I like the idea.  Any issues/complications with re-provisioning, or with multiple devices sharing the same identity?


Adam

(*) Disclaimer: I designed the crypto as a consultant to oneid.

