Hi,
I had another look into the segmentation fault issue in my application. What I
observe is that OPENSSL_malloc returns NULL in EVP_DigestInit_ex()
in digest.c:
if (ctx->digest != type)
    {
    if (ctx->digest && ctx->digest->ctx_size)
        OPENSSL_free(ctx->md_data);
The SHA_CTX *c is getting corrupted. GDB indicated ctx=0x0 in init(); however,
that was not actually the case.
static int init(EVP_MD_CTX *ctx)
{
    if (ctx != NULL)
    {
        return SHA1_Init(ctx->md_data);
    }
    else
    {
        printf("ctx is NULL\n"); /* never seen, though */
        return 0;
    }
}
David,
The OpenSSL version I use is openssl-0.9.8e. Your guess about the methods
being called is right. It appears to be stack corruption.
Gayathri,
I don't suspect gdb. I checked the CTX status in HASH_INIT(SHA_CTX *c)
under stress; 'c' was indeed NULL and the application crashed immediately.
At times, the following traces are obtained as well:
(gdb) bt
#0 MD5_Init (c=0x0) at md5_dgst.c:75
#1 0x405b2a90 in init (ctx=0x0) at m_md5.c:73
#2 0x405afc91 in EVP_DigestInit_ex (ctx=0x8e29b44, type=0x4061f560,
impl=0x0) at digest.c:207
#3 0x403819f5 in ssl3_init_finished_mac (s=0x8e298c8)
This is really one of those "don't do that, then" things.
Thread-per-connection is well known to break down at about 750
connections.
Just curious how the number 750 was calculated or deduced. And
is this a Linux-specific limit?
On Windows, it's usually more like 800 on older versions.
Even reducing the thread stack size didn't help. I observe that thread
creation as such is not a problem. I create about 1000 threads and delay
the SSL_connect in each thread for about 10 sec.
Once the delay expires and each client makes a connection to the server,
the seg fault occurs.
Regards,
The stack trace showing a NULL SHA1 context kind of caught my attention
here. I wouldn't go by the GDB call trace, because it's obviously a memory
leak and the gdb stack could have been corrupted; many times I see 0x0
in the frames, but when you actually try to print the ctx address it would
be valid.
Hi Gayathri,
I couldn't entirely grasp what you mentioned. I didn't find sha1 in the
lsmod command output.
If you could briefly describe the issue you experienced, that would be
very helpful.
Thanks & Regards,
Prabhu. S
On 10/15/07, Gayathri S [EMAIL PROTECTED] wrote:
Hi Prabhu,
Hi David,
Yes, the design of one thread per connection is a bit odd. Our application is
used to test an SSL server for its performance. The application simulates
hundreds of clients that all try connecting to the server at the same time. The
server is thus tested for burst connection handling.
On 10/16/07, Prabhu S [EMAIL PROTECTED] wrote:
Hi David,
Yes, the design of one thread per connection is a bit odd. Our application is
used to test an SSL server for its performance. The application simulates
hundreds of clients that all try connecting to the server at the same time. The server
--- David Schwartz [EMAIL PROTECTED] wrote:
This is really one of those "don't do that, then" things.
Thread-per-connection is well known to break down at about 750 connections.
[snip]
It may help to reduce the stack size for each thread, but you really should
re-architect.
Hi Prabhu,
Can you check the sha1 usage count in lsmod?
I am thinking you have not freed the SHA tfm and eventually ran out of it.
I hit a similar issue when making use of the Linux kernel's sha1.
Thanks
--Gayathri
On Mon, 15 Oct 2007, Prabhu S wrote:
Hi,
The SSL-enabled client application seg faults.
The application creates about 800 threads on a Linux 2.6 kernel.
This is really one of those "don't do that, then" things.
Thread-per-connection is well known to break down at about 750 connections.
#0 SHA1_Init (c=0x0) at sha_locl.h:150
#1 0x405b2bb0 in init (ctx=0x0) at m_sha1.c:72
#2