On Tue, Jun 2, 2015 at 9:56 AM, Daniel Kahn Gillmor <[email protected]> wrote:
>
> So it looks like it's saying "if we slow down the user by a factor of X,
> then we slow down an attacker by the same factor."
>
> The assumption here appears to be that there is no speedup for an
> attacker that is better than brute force. If an attacker has some
> clever precomputation, or a way to reuse intermediate results
> efficiently, it seems like the slowdown for the attacker may be a
> smaller factor than the slowdown for the user, unfortunately.
>
If you're salting the hash (basically, if you're doing password-based decryption of a private key file instead of password-based re-derivation of the private key), then there should be no effective precomputation as long as the hash is cryptographically secure. If you can't salt, then rainbow tables are a possibility. But iterating the same hash function still makes rainbow tables linearly more expensive to compute.

As dkg said, this is just a linear numbers game, but I believe that's the worst-case scenario: if you take N times longer to hash on the client side, you should always be able to slow the attacker down by at least a factor of N. Using a memory-hard function, or some other function designed to be difficult to implement in hardware, can slow some attackers down by a greater factor if it prevents them from building hardware as cheaply.

As Taylor said, there isn't a definitive paper on this yet as far as I know, though there are quite a few papers arising from the password-hashing competition with different design approaches, threat models, terminology, etc. Hopefully we'll eventually settle on a model that can give simpler "bits of security" estimates, as Nadim asked for.
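Purely as an illustration (not part of the original message), here is a minimal Python sketch of the two approaches discussed above: a salted, iterated hash (PBKDF2), whose cost scales roughly linearly with the iteration count, and a memory-hard function (scrypt), intended to also raise the cost of hardware attacks. The password, salt size, and cost parameters are illustrative assumptions, not recommendations.

    # Sketch only: salted iterated hashing vs. a memory-hard KDF.
    import hashlib
    import os

    password = b"correct horse battery staple"  # hypothetical example password

    # A random per-key salt defeats precomputation (e.g. rainbow tables):
    # the attacker cannot usefully hash candidate passwords before seeing it.
    salt = os.urandom(16)

    # PBKDF2-HMAC-SHA256: making the legitimate user spend N iterations
    # slows a brute-force attacker down by roughly the same factor N.
    key_pbkdf2 = hashlib.pbkdf2_hmac(
        "sha256", password, salt, iterations=600_000, dklen=32
    )

    # scrypt: memory-hard, so an attacker cannot shrink the per-guess cost
    # as cheaply with custom hardware. Cost is governed by n (CPU/memory
    # cost), r (block size), and p (parallelism); n=2**14, r=8 uses ~16 MiB.
    key_scrypt = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

    print(key_pbkdf2.hex())
    print(key_scrypt.hex())

Either derived key could then be used to decrypt a stored private key file; the salt and cost parameters must be stored alongside it so the same key can be re-derived later.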
