Are we going in the right direction?
Is it worth solving the task of making puzzle difficulties
less erratic if the computing power of clients may differ
by an order of magnitude (or even several orders of magnitude)?

Can we make the process more flexible?
For example, the server may indicate two difficulty levels in the puzzle
request - the desired one and the acceptable one.
Say the desired level is 20 bits and the acceptable level is 16 bits.
This means that if the client is able to find a solution with 20 zero bits,
it secures the right to be served. But if it cannot (for any reason -
insufficient power, or the puzzle being too hard by chance), then any
solution with at least 16 zero bits will still be accepted.
That doesn't mean the server will immediately serve the
request if the solution contains fewer than 20 zero bits - depending
on the number of zero bits the client managed to get,
the amount of time it took to solve, the number of puzzles this
client has already solved and the current server load, the server
may issue a new puzzle to such a client. The process continues
until the server decides that the client has spent enough resources
and can now be served, taking the current server load into consideration.
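
To make the flow concrete, here is a rough Python sketch of the
server-side decision; the bit levels, the load metric and the effort
heuristic are all illustrative placeholders, not anything taken from
the drafts:

import hashlib

DESIRED_BITS = 20      # level the server would like to see
ACCEPTABLE_BITS = 16   # minimum level it will consider at all

def leading_zero_bits(data: bytes) -> int:
    """Count leading zero bits of SHA-256(data)."""
    n = 0
    for byte in hashlib.sha256(data).digest():
        if byte:
            return n + 8 - byte.bit_length()
        n += 8
    return n

def handle_solution(cookie: bytes, solution: bytes,
                    puzzles_already_solved: int, server_load: float) -> str:
    """Return 'serve', 'retry' (issue a new puzzle) or 'reject'."""
    bits = leading_zero_bits(cookie + solution)
    if bits >= DESIRED_BITS:
        return "serve"               # client earned the right to be served
    if bits < ACCEPTABLE_BITS:
        return "reject"              # below the acceptable level
    # In between: an entirely arbitrary heuristic - the busier the server,
    # the more accumulated effort it demands before agreeing to serve.
    effort = puzzles_already_solved + bits / DESIRED_BITS
    required = 1.0 + 3.0 * server_load   # server_load assumed in [0.0, 1.0]
    return "serve" if effort >= required else "retry"

The client side is unchanged: it submits the best solution it managed
within its budget, and the server either serves it, rejects it, or
hands it another puzzle.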

The advantage of such an approach is that it makes the whole process
more adaptable to varying factors, including the wide variety of
clients and their computational power. The disadvantage is higher
server load (it needs to prepare and verify more puzzles) and
higher network bandwidth consumption. But unlike the previously
suggested approaches it doesn't increase the size of a single message,
which is very good as it decreases the chance of IP fragmentation.

Valery.


Results are actually way better, with the 4X ratio brought down to 2X. However, it seems to me that Scott's proposal is better - slightly more complex but certainly more deterministic.

Thanks,
Yaron

On 02/02/2015 08:31 AM, Yoav Nir wrote:
The three-sigma rule applies even with a non-normal distribution.
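
For reference, the distribution-free version of that rule is Chebyshev's
inequality:

\[
\Pr(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}
\quad\Longrightarrow\quad
\Pr(|X - \mu| < 3\sigma) \ge 1 - \tfrac{1}{9} \approx 88.9\%
\]

so at least roughly 89% of samples fall within three standard deviations
of the mean for any distribution, versus about 99.7% for a normal one.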

Anyway, I tried the 64-key sample. Results are slightly better.

Yoav




On Feb 1, 2015, at 7:36 PM, Yaron Sheffer <yaronf.i...@gmail.com> wrote:

Hi Yoav,

This is good, but I'm not sure if it's good enough. The ratio between min and max (which I trust more than the "mean +/- 3s" rule, because this is not a normal distribution) is consistently around 4. So if you have to design your timeouts on a particular machine, you would still have a lot of uncertainty. For comparison, could you try again with 64 replacing the 16 tries, and with lower bit lengths?

Thanks,
Yaron

On 02/01/2015 11:46 AM, Yoav Nir wrote:
And now it’s really attached.




On Feb 1, 2015, at 11:45 AM, Yoav Nir <ynir.i...@gmail.com> wrote:


On Jan 31, 2015, at 12:35 AM, Yoav Nir <ynir.i...@gmail.com> wrote:


On Jan 30, 2015, at 3:37 PM, Yaron Sheffer <yaronf.i...@gmail.com> wrote:

What I would suggest is: we give the client a single puzzle, and ask it to return 16 different solutions. Indeed each puzzle then should be 16X easier. The nice thing is, the server should only check *one* of them, at random. The client would still need to solve all of them because it doesn't want to risk the exchange being rejected because some solutions are invalid (the game theory is probably more complex than that, but I think what I'm saying is still close to the truth).

So: the client does the same amount of work, the server does the same amount of work, but the client run-time is still much more deterministic.
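
A minimal sketch of that scheme in Python (the 16-solution count, the
hash and the counter encoding are illustrative assumptions, not text
from any draft):

import hashlib
import random

def zero_bits(data: bytes) -> int:
    """Count leading zero bits of SHA-256(data)."""
    n = 0
    for byte in hashlib.sha256(data).digest():
        if byte:
            return n + 8 - byte.bit_length()
        n += 8
    return n

def solve(cookie: bytes, difficulty: int, count: int = 16) -> list:
    """Client: find `count` distinct solutions, each `difficulty` bits hard
    (so `difficulty` can be 4 bits lower than a single equivalent puzzle)."""
    solutions, counter = [], 0
    while len(solutions) < count:
        candidate = counter.to_bytes(8, "big")
        if zero_bits(cookie + candidate) >= difficulty:
            solutions.append(candidate)
        counter += 1
    return solutions

def verify(cookie: bytes, solutions: list, difficulty: int,
           count: int = 16) -> bool:
    """Server: check that the solutions are distinct, then verify only one,
    chosen at random."""
    if len(solutions) != count or len(set(solutions)) != count:
        return False
    picked = random.choice(solutions)
    return zero_bits(cookie + picked) >= difficulty

A client that skips some of the sub-solutions gets caught with
probability equal to the fraction it skipped, which is what makes
"solve all of them" the safe strategy.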

<snip />

Note that these are still single-core results, and most laptops can do two or four times as much. Now, I know that what I SHOULD be doing is randomly generating 100 “cookies”, calculating the times for different bit lengths for each of them, and then calculating the mean and standard deviation. But just by looking, it seems much closer to what we want. 16 bits would be a fine puzzle level for my laptop. No idea about a phone, although I could try to compile this and run it on an ARM-based appliance, which should be comparable to phones.
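
In rough Python, the "do it right" measurement would look something like
this (the hash, the counter encoding and the exact bit range are
placeholders for whatever the real benchmark uses):

import hashlib
import os
import statistics
import time

def zero_bits(data: bytes) -> int:
    """Count leading zero bits of SHA-256(data)."""
    n = 0
    for byte in hashlib.sha256(data).digest():
        if byte:
            return n + 8 - byte.bit_length()
        n += 8
    return n

def solve_time(cookie: bytes, difficulty: int) -> float:
    """Time how long it takes to find one solution for this cookie."""
    start = time.perf_counter()
    counter = 0
    while zero_bits(cookie + counter.to_bytes(8, "big")) < difficulty:
        counter += 1
    return time.perf_counter() - start

# 100 random "cookies" per bit length; report mean, stdev, min and max.
for bits in range(12, 18):
    times = [solve_time(os.urandom(16), bits) for _ in range(100)]
    print("%2d bits: mean %.2fs  stdev %.2fs  min %.2fs  max %.2fs" %
          (bits, statistics.mean(times), statistics.stdev(times),
           min(times), max(times)))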

OK. Now I have done it right. See attached. The data suggests that 15 or 16 bits is a puzzle level that is challenging but not too onerous for this kind of hardware. Add another bit if we assume (probably correctly) that the vast majority of laptops have at least dual cores.

I would like to run a similar test on an ARM processor, though. The capabilities of phones and tablets are all over the place, what with different versions of ARM processors and devices having anything from dual to octo-core, but it would be nice to get ballpark figures.

Yoav




