> The random tests are encrypting small blocks of data (the first between
> 0 and 15 bytes per write, the second between 65 and 128) using the same
> key and IV - this is basically the worst case scenario and the
> byte-at-a-time algorithm is almost certainly more performant.
>
> However, this is not likely to be a good example of real-world use - it
> probably matches SSH interactive use, although even then the IV/block
> counter is reset between calls. For SSH bulk you're likely to be
> encrypting 8-16KB blocks at a time, for SSL up to 16KB blocks and for
> file encryption, anything up to 2^70 bytes.
>
> If you are interested in doing further benchmarks it would be
> interesting to measure performance for 8KB, 16KB and larger blocks,
> changing the IV in between. Encrypting/decrypting a single large file
> (using a single key/IV) would also be an interesting measurement.
>

It appears you are correct. I tested with 100,000 16KB blocks, and the code
in OpenBSD shows a small performance advantage.
However, when compiled with clang, the code on the blog does significantly
better than it does with GCC, and at that point the difference between the
two implementations is minuscule.



>
> It is also worth noting that the performance of the block optimised code is
> still going to be well below the performance of a well implemented assembly
> version.
>

Are you guys waiting for OpenSSL to put out optimized assembly code for
ChaCha20 to use?


> Just to be clear, chacha_encrypt_bytes() is not my code - the original
> was written by D. J. Bernstein (http://cr.yp.to/chacha.html). I just
> imported it and applied KNF to it.
>

I'm sorry, I didn't mean to imply you wrote it. I meant to say that the
code you guys were currently using just looked awful.


>
> > Looking at this new code, I also see it has an interesting comment
> > regarding incrementing the counter. Is there any validity to the approach
> > its taking? Would that be more secure for very long streams?
>
> It should only make a difference if you encrypt 2^70 bytes using a
> single key and IV. For the time being I do not see that being a
> problem. Any sensible application is going to chunk the data and
> change the IV.
>

Okay, thank you.

I had a look around the different implementations I could find on GitHub,
and it seems each library takes its own unique approach to incrementing
the nonce/IV. I wonder if this is going to cause interoperability issues.

J
