Re: All CPU threads

2023-09-11 Thread Jacob Bachmeyer via Gnupg-users

Werner Koch via Gnupg-users wrote:

[...]

> On Sat,  9 Sep 2023 22:07, Robert J. Hansen said:
>> and for the vast majority of users isn't worth it.  The easy wins (28%
>> cost savings on RSA encryption!  Whee, almost half a millisecond!) are
>
> The blinding we use for RSA (to mitigate side-channel attacks) should be
> in the same range as these wins.  I bet that by adding threads to the
> computation you will open another can of side-channel attacks.

So using threads to compute a blinded RSA operation would just about
recover the computational cost of blinding the calculation?  How would
hypothetical thread-related side channels matter if we are using
blinding around the parallel calculation?
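
To make concrete what "blinding around the parallel calculation" would
look like, here is a minimal sketch in Python.  The tiny textbook key
and the modexp_private() placeholder (the spot where a parallelized CRT
exponentiation would run) are illustrative assumptions only, not GnuPG
or Libgcrypt code:

  # Toy RSA private-key operation with base blinding (demo values only).
  import math
  import secrets

  n, e, d = 3233, 17, 2753   # textbook demo key (p=61, q=53); NOT real key material

  def modexp_private(x: int) -> int:
      # The expensive core step.  A threaded implementation (e.g. the two
      # CRT half-exponentiations on separate threads) would live here.
      return pow(x, d, n)

  def blinded_private_op(c: int) -> int:
      while True:                                # pick r coprime to n
          r = secrets.randbelow(n - 2) + 2
          if math.gcd(r, n) == 1:
              break
      c_blind = (c * pow(r, e, n)) % n           # blind the input: c * r^e mod n
      m_blind = modexp_private(c_blind)          # core op only sees blinded data
      return (m_blind * pow(r, -1, n)) % n       # unblind: multiply by r^-1 mod n

  c = pow(42, e, n)                              # "encrypt" a toy message
  assert blinded_private_op(c) == 42

In this sketch the blinding wrapper adds one exponentiation with the
short public exponent, one modular inversion and two modular
multiplications per private-key operation, which is the kind of
overhead Werner compares against the parallelization win.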



-- Jacob



Re: All CPU threads

2023-09-11 Thread Werner Koch via Gnupg-users
Hi!

Thanks Rob for your comments.  Here are some additional points:

On Sat,  9 Sep 2023 22:07, Robert J. Hansen said:
> and for the vast majority of users isn't worth it.  The easy wins (28%
> cost savings on RSA encryption!  Whee, almost half a millisecond!) are

The blinding we use for RSA (to mitigate side-channel attacks) should be
in the same range as these wins.  I bet that by adding threads to the
computation you will open another can of side-channel attacks.

> performance. I'm sure that if and when the next RFC is officially
> released, there will be interest in getting parallelization support

OCB mode has already been in use and deployed for years.  With a decent
Libgcrypt (1.10) I get these figures for the old mode (CFB) and the new
mode (OCB):

 AES256 |  nanosecs/byte   mebibytes/sec   cycles/byte  auto Mhz
CFB enc |     0.691 ns/B      1379 MiB/s      5.14 c/B    7440±1
CFB dec |     0.064 ns/B     14959 MiB/s     0.470 c/B    7372±2

OCB enc |     0.070 ns/B     13547 MiB/s     0.522 c/B    7415±2
OCB dec |     0.071 ns/B     13451 MiB/s     0.520 c/B    7336±3
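
For readers not used to this bench output, the three value columns are
just unit conversions of one another at the reported clock.  A quick
check in Python against the CFB-enc row (small differences are expected
because the printed ns/B value is itself rounded):

  # Relate the benchmark columns: ns/byte, MiB/s and cycles/byte at the clock.
  ns_per_byte = 0.691          # "nanosecs/byte" column, CFB enc row
  clock_mhz   = 7440           # "auto Mhz" column

  bytes_per_sec     = 1e9 / ns_per_byte
  mebibytes_per_sec = bytes_per_sec / 2**20            # 1 MiB = 2^20 bytes
  cycles_per_byte   = ns_per_byte * clock_mhz / 1e3    # MHz * ns = 10^-3 cycles

  print(f"{mebibytes_per_sec:.0f} MiB/s")   # ~1380 MiB/s (table: 1379 MiB/s)
  print(f"{cycles_per_byte:.2f} c/B")       # ~5.14 c/B   (table: 5.14 c/B)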


These values are for the low-level crypto routines.  In reality we also
do SHA-1 hashing in addition to CFB, which makes it even slower.  OTOH,
the protocol requires buffering, and the way gpg implements things has a
large impact on performance.  Fortunately, Jussi Kivilinna also worked
on gpg's buffering and gained a lot of extra speed:

  * gpg: Threefold decryption speedup for large files.
https://dev.gnupg.org/rGab177eed51  (For the old CFB mode)

  * gpg: Nearly double the AES256.OCB encryption speed.
https://dev.gnupg.org/rG99e2c178c7

Thus in 2.4 we get this for symmetric encryption of a 4 GiB file from
RAM to /dev/null on a Ryzen 5800X:

   AES256.CFB encryption  1.3 GiB/s
   AES256.OCB encryption  4.2 GiB/s

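A rough way to reproduce that kind of figure is to time gpg writing to
/dev/null.  The Python sketch below is only an approximation of the
setup above: the file name, the throwaway passphrase and the loopback
pinentry options are assumptions added merely to keep the run
non-interactive:

  # Rough gpg symmetric-encryption throughput measurement (sketch only).
  # Assumes gpg is on PATH and "bigfile" exists, ideally on a RAM disk
  # (tmpfs) so that disk I/O does not dominate the measurement.
  import os
  import subprocess
  import time

  src = "bigfile"                               # hypothetical test file
  size_gib = os.path.getsize(src) / 2**30

  t0 = time.perf_counter()
  subprocess.run(
      ["gpg", "--batch", "--yes",
       "--pinentry-mode", "loopback", "--passphrase", "test",
       "--symmetric", "--cipher-algo", "AES256",
       "--compress-algo", "none",               # keep compression out of the picture
       "-o", "/dev/null", src],
      check=True,
  )
  elapsed = time.perf_counter() - t0
  print(f"symmetric encryption: {size_gib / elapsed:.2f} GiB/s")
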
FWIW there are also improvements in signature verification:

  * gpg: Up to five times faster verification of detached signatures.
Doubled detached signing speed. 
https://dev.gnupg.org/rG4e27b9defc
https://dev.gnupg.org/rGf8943ce098

YMMV depending on what kind of data you encrypt and whether signing and
compression come into play.  Compression is a major performance hog:
feeding gpg from a (threaded) bzip2 and using -z0 will in general give
better performance than using the internal compressor code.
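
As a sketch of that pipeline (pbzip2 stands in for any threaded bzip2
implementation; file names and passphrase are again only illustrative):

  # Feed gpg from an external threaded compressor and disable gpg's own
  # compression with -z 0 (sketch; assumes pbzip2 and gpg are installed).
  import subprocess

  compressor = subprocess.Popen(
      ["pbzip2", "-c", "bigfile"],              # parallel bzip2 to stdout
      stdout=subprocess.PIPE,
  )
  gpg = subprocess.Popen(
      ["gpg", "--batch", "--yes",
       "--pinentry-mode", "loopback", "--passphrase", "test",
       "--symmetric", "--cipher-algo", "AES256",
       "-z", "0",                               # no internal compression
       "-o", "bigfile.bz2.gpg"],                # gpg reads the pipe on stdin
      stdin=compressor.stdout,
  )
  compressor.stdout.close()                     # pbzip2 gets EPIPE if gpg exits early
  gpg.wait()
  compressor.wait()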


Shalom-Salam,

   Werner


-- 
The pioneers of a warless world are the youth that
refuse military service. - A. Einstein

