On Mon, Jun 27, 2016 at 2:13 PM, Rainer Jung <rainer.j...@kippdata.de> wrote:
> On 27.06.2016 at 18:47, Nate Clark wrote:
>>
>> On Mon, Jun 27, 2016 at 12:17 PM,  <therealnewo...@gmail.com> wrote:
>>>
>>> On Mon, Jun 27, 2016 at 12:04 PM,  <therealnewo...@gmail.com> wrote:
>>>>
>>>> On Mon, Jun 27, 2016 at 11:54 AM, Rainer Jung <rainer.j...@kippdata.de>
>>>> wrote:
>>>>>
>>>>> Great analysis. I was really wondering what could make the hash map so
>>>>> huge and hadn't thought about the hash function as the problem.
>>>>>
>>>>> Before OpenSSL 1.1.0 there's a callback for applications to provide
>>>>> their
>>>>> own thread IDs:
>>>>>
>>>>>
>>>>> https://www.openssl.org/docs/man1.0.2/crypto/CRYPTO_THREADID_set_callback.html
>>>>>
>>>>> So we could probably work around the problem of the poor hashing
>>>>> function by passing in IDs that work well for hashing (a pre-hashed
>>>>> ID?). Of course, then we lose the direct association of the OpenSSL
>>>>> thread ID with the real platform thread id.
>>>>>
>>>>> Currently our callback in tcnative is ssl_set_thread_id(), which refers
>>>>> to ssl_thread_id(), which on Linux gets the ID from the APR function
>>>>> apr_os_thread_current(). So we could add some hashing formula in
>>>>> ssl_thread_id().
>>>>
>>>>
>>>> I think just using the real thread id would work, since right now it
>>>> isn't using the real thread id; instead it is using the address of
>>>> errno. If this were the real thread id, I think the hash algorithm and
>>>> bucket selection they have now would work much better, since thread
>>>> ids are basically numerically increasing and aren't aligned to powers
>>>> of 2.
>>>
>>>
>>> Sorry, I need to read the code a bit more closely. I misread the
>>> OPENSSL_VERSION_NUMBER < 0x10100000L check as version < 1.0.1 rather
>>> than 1.1.0.
>>>
>>> I would still expect real thread ids to provide enough of a
>>> distribution in a map where the hash bucket ends up being
>>> ((ID * 13) & 0x7F).
>>
>>
>> OK, this now makes more sense: apr_os_thread_current() calls
>> pthread_self(), which returns a pthread_t, which is basically a handle
>> to pthread's internal state and not an actual ID. It might be better
>> to use gettid(), but that isn't as portable.
>
>
> I wouldn't care too much about portability right now. If you can prove
> Mark's analysis by using gettid() on Linux and checking that this fixes the
> performance issue, that would be very useful feedback. If it doesn't help, we
> can go down the path of using the hash statistics to get more info about
> what's happening. If it helps, we can think further about how to reduce hash
> collisions for every platform.

I can try that, but unfortunately I won't be able to run the test
right away. The machine I was using is a shared resource, which
somebody else needs for the next few days.
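
For reference, what I have in mind is roughly the following change to
ssl_thread_id() in tcnative (untested sketch, written from memory; it
assumes Linux and uses syscall(SYS_gettid) because older glibc has no
gettid() wrapper):

    #include <unistd.h>
    #include <sys/syscall.h>      /* SYS_gettid */
    #include "apr_portable.h"     /* apr_os_thread_current() */

    static unsigned long ssl_thread_id(void)
    {
    #if defined(__linux__) && defined(SYS_gettid)
        /* Kernel thread ids are small, mostly sequential integers, so
         * they spread across OpenSSL's 128 buckets under (ID * 13) & 0x7F.
         * The current value (in effect the address of errno) differs
         * between threads by multiples of the page size, and any two ids
         * that differ by a multiple of 128 land in the same bucket. */
        return (unsigned long)syscall(SYS_gettid);
    #else
        return (unsigned long)apr_os_thread_current();
    #endif
    }

That keeps the existing behavior everywhere except Linux, which should
be enough to confirm or refute the analysis.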

Even if that does help the performance and gives the hash map a better
distribution, it won't help with long-term bloat. If a system runs long
enough and threads are started and stopped, the hash map could grow to
include almost max-pid entries, 32768 by default on Linux. Given the
size of the ERR_STATE struct, that is roughly 12MB of memory; not
horrible, but not ideal. I still think ERR_remove_thread_state() should
probably be invoked before a thread exits. The only way I can think of
doing that while keeping everything in native code, and not calling it
for every request, would be to call
pthread_cleanup_push(ERR_remove_thread_state, NULL) the first time a
thread is used to perform any SSL operation. That would then require
tracking per thread whether the cleanup function had been registered.
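
One wrinkle: pthread_cleanup_push()/pthread_cleanup_pop() are macros
that have to be paired in the same lexical scope, so they can't simply
be armed once and left in place for the life of the thread. A
thread-specific key with a destructor gives the same effect and doubles
as the per-thread "cleanup registered" flag. Rough, untested sketch
(the ssl_err_state_* names are made up for illustration):

    #include <pthread.h>
    #include <openssl/err.h>

    static pthread_key_t  ssl_err_state_key;
    static pthread_once_t ssl_err_state_once = PTHREAD_ONCE_INIT;

    /* Runs automatically when a thread that did SSL work exits. */
    static void ssl_err_state_cleanup(void *unused)
    {
        (void)unused;
        ERR_remove_thread_state(NULL);
    }

    static void ssl_err_state_key_init(void)
    {
        (void)pthread_key_create(&ssl_err_state_key, ssl_err_state_cleanup);
    }

    /* Call on the SSL code paths; cheap after the first call per thread. */
    static void ssl_register_err_state_cleanup(void)
    {
        (void)pthread_once(&ssl_err_state_once, ssl_err_state_key_init);
        if (pthread_getspecific(ssl_err_state_key) == NULL) {
            /* Any non-NULL value marks this thread as registered; the
             * destructor only fires for threads with a non-NULL value. */
            (void)pthread_setspecific(ssl_err_state_key, (void *)1);
        }
    }

APR's thread-private-key API (apr_threadkey_private_create()) would do
the same thing if we want to stay off raw pthreads.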

Also, any solution which makes the thread id random rather than a
bounded set of values would make the maximum memory footprint of the
hash map virtually unbounded, and would require that
ERR_remove_thread_state() be invoked to clean up the error states.

-nate
