Hi all,
we would like your help in understanding the behavior of the SoftHSM token 
described below.

We developed a multi-threaded, C++-implemented FastCGI module for Apache that 
embeds an instance of a SoftHSM v2 token.

The service, named HBM (short for HSM Broker Module), accepts HTTP POST 
requests for encryption and decryption, and runs on a bare-metal Dell R710 
server with the following characteristics:

2 x Intel(R) Xeon(R) E5645 CPUs (6 cores each) @ 2.40 GHz
Hyper-threading enabled
RAM: 96 GB DDR3 @ 1333 MHz (8 GB modules)
6 x 600 GB HDD in hardware RAID 10 (usable capacity: about 1.5 TB)
Operating system: CentOS 7.2, 64-bit

The installed version of the SoftHSM package is:
softhsm.x86_64                     2.1.0-2.el7                     @base

On another machine with the same characteristics, we ran the Siege 
benchmarking utility, with the Siege input file populated half with HBM 
encryption commands and half with decryption commands.
Each Siege run lasts five minutes at a concurrency level of 16, and is 
launched with the following command:

/home/kafka/siege/bin/siege -q -f/tmp/siege.in.$$ -l$PWD/h.log -m "$MESSAGE" -b 
-i -t300S -c16

Each encryption and decryption request carries an (AES-256-CBC) key 
identifier, but in the tests we always used the same key identifier.
Each request is handled by a freshly created thread, which is associated with 
a private (but non-volatile) cache containing the PKCS#11 handle of the key 
needed to satisfy the request. We therefore do not need to call the 
C_FindObjects() PKCS#11 routine for every request: after a few requests, new 
threads are likely to find their private cache already filled by a previously 
executed thread (which dies once its request has been serviced). A rough 
sketch of this scheme is shown below.
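
To make the caching concrete, this is roughly the idea (a simplified sketch, 
not the actual HBM code: the names are hypothetical, and a single shared, 
mutex-protected cache stands in here for the per-thread caches). The first 
request that misses the cache pays for the C_FindObjects() lookup; later 
requests reuse the stored handle.

// Sketch of the key-handle cache described above (hypothetical names).
#include <map>
#include <mutex>
#include <string>
#include "pkcs11.h"   // Cryptoki header, e.g. the one shipped with SoftHSM

static std::map<std::string, CK_OBJECT_HANDLE> g_handleCache;  // key id -> handle
static std::mutex g_cacheMutex;

// Look the key up by CKA_LABEL once, then serve later requests from the cache.
CK_OBJECT_HANDLE getKeyHandle(CK_FUNCTION_LIST_PTR p11,
                              CK_SESSION_HANDLE session,
                              const std::string& keyId)
{
    {
        std::lock_guard<std::mutex> lock(g_cacheMutex);
        auto it = g_handleCache.find(keyId);
        if (it != g_handleCache.end())
            return it->second;                 // cache hit: no C_FindObjects()
    }

    CK_OBJECT_CLASS keyClass = CKO_SECRET_KEY;
    CK_ATTRIBUTE tmpl[] = {
        { CKA_CLASS, &keyClass, sizeof(keyClass) },
        { CKA_LABEL, const_cast<char*>(keyId.c_str()), (CK_ULONG)keyId.size() }
    };

    CK_OBJECT_HANDLE handle = CK_INVALID_HANDLE;
    CK_ULONG found = 0;
    if (p11->C_FindObjectsInit(session, tmpl, 2) == CKR_OK) {
        p11->C_FindObjects(session, &handle, 1, &found);
        p11->C_FindObjectsFinal(session);
    }
    if (found == 1) {
        std::lock_guard<std::mutex> lock(g_cacheMutex);
        g_handleCache[keyId] = handle;         // later threads find it already filled
    }
    return (found == 1) ? handle : CK_INVALID_HANDLE;
}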

We measured the throughput reported by Siege after each modification (keys 
added to or removed from the SoftHSM token); the results are in the following 
table, where the "HBM reboot" column indicates whether the HBM service was 
restarted before the corresponding run.

Run ID | Number of keys | HBM reboot | Throughput
    1  |        52      |      Y     |  12951.84
    2  |         1      |      N     |  13456.31
    3  |        52      |      N     |   3566.23
    4  |        52      |      Y     |  12930.80
    5  |        26      |      N     |   6427.34
    6  |        38      |      N     |   4583.19
    7  |        52      |      N     |   3505.33
    8  |        52      |      Y     |  12865.59
    9  |        26      |      N     |   6427.05
   10  |         1      |      N     |  13310.76

As can be seen, adding keys to the token without restarting HBM (run 2 to run 
3) causes a notable drop in throughput, and the same happens when removing 
keys (runs 4 to 5 and 8 to 9). However, when only the single key actually used 
is left in the token, the throughput rises again (runs 9 to 10).

The throughput also recovers when the HBM service is restarted, even though 
all 52 keys are still in the token (runs 3 to 4, and 7 to 8).
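
For reference, a standalone loop along the following lines could be used to 
time the token outside the FastCGI/HTTP stack (only a sketch: p11, session and 
the key handle are assumed to come from the usual C_Initialize / 
C_OpenSession / C_Login / C_FindObjects sequence, which is omitted here):

// Minimal timing loop (a sketch, not the HBM code): encrypt a fixed buffer
// repeatedly with the cached handle and report operations per second.
#include <chrono>
#include <cstdio>
#include "pkcs11.h"

void timeEncrypt(CK_FUNCTION_LIST_PTR p11, CK_SESSION_HANDLE session,
                 CK_OBJECT_HANDLE hKey, unsigned long iterations)
{
    CK_BYTE iv[16] = {0};
    CK_MECHANISM mech = { CKM_AES_CBC, iv, sizeof(iv) };

    CK_BYTE plain[1024] = {0};                 // multiple of the AES block size
    CK_BYTE cipher[1024];
    CK_ULONG cipherLen;

    auto start = std::chrono::steady_clock::now();
    for (unsigned long i = 0; i < iterations; ++i) {
        cipherLen = sizeof(cipher);
        if (p11->C_EncryptInit(session, &mech, hKey) != CKR_OK) return;
        if (p11->C_Encrypt(session, plain, sizeof(plain),
                           cipher, &cipherLen) != CKR_OK) return;
    }
    double elapsed = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    std::printf("%lu encryptions in %.2f s (%.1f ops/s)\n",
                iterations, elapsed, iterations / elapsed);
}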

We do not understand this behavior, and any hint on how to interpret it would 
be much appreciated.
Regards,

Marco Spadoni



