ni...@lysator.liu.se (Niels Möller) writes:

> I'm not that fond of the struct cbc_aes128_ctx though, which includes
> both (constant) subkeys and iv. So I'm considering changing that to
>
>   void
>   cbc_aes128_encrypt(const struct aes128_ctx *ctx, uint8_t *iv,
>                      size_t length, uint8_t *dst, const uint8_t *src);
>
> I.e., similar to cbc_encrypt, but without the arguments
> nettle_cipher_func *f, size_t block_size.

I had almost forgotten about this, but I did push these changes last
autumn (including x86_64 aesni implementation), and I just added
documentation.

If I remember correctly, the separate assembly functions bring a large
speedup on s390x (and may also bring a significant speedup on other
archs where AES is fast, similar to the x86_64 aesni case). There's
still no parallelism, so I expect the speedup to come mainly from
reduced overhead: not having to reload subkeys, keeping the iv in
registers between blocks, and, in the s390x case, using the special
instruction for aes + cbc.

Regards,
/Niels

-- 
Niels Möller. PGP key CB4962D070D77D7FCB8BA36271D8F1FF368C6677.
Internet email is subject to wholesale government surveillance.
_______________________________________________
nettle-bugs mailing list -- nettle-bugs@lists.lysator.liu.se
To unsubscribe send an email to nettle-bugs-le...@lists.lysator.liu.se