Um... sorry, please disregard the parent message.
The application to which I am adding SSL support insists on
periodically sending a 0-byte buffer, and apparently SSL_write() doesn't
like being called with 0 bytes to send. I'd still call it a bug in
OpenSSL (since the error code is set incorrectly), but the workaround is trivial.
Hi,
I'm running the following setup:
client and server, both using OpenSSL 0.9.8a on win2k3/win2k. The server
uses blocking sockets; the client uses non-blocking sockets. Periodically on
the client, SSL_write() returns 0 and SSL_get_error() indicates
SSL_ERROR_SYSCALL. If I read the docs right, that should mean the
connection was dropped by the server without a proper shutdown. However, if I
just ignore the error, the connection seems to be fine and can still be used (I
haven't confirmed this with 100% certainty, but it sure looks that way).
Just in case, I checked WSAGetLastError() (the Windows errno for WinSock) and it is 0.
The server does not notice anything wrong. I believe this only happens under
stress load, but again I'm not 100% sure. The test I'm running right now has
280 clients on a single system connecting to two servers, sending lots of
small (as well as some big) messages back and forth.
I'd appreciate any help in troubleshooting this!
On a completely unrelated note: SSL_get_current_cipher() can return NULL if
called on a context that has not yet negotiated, and SSL_CIPHER_description(NULL, ...)
will crash and corrupt the stack, which makes troubleshooting not as user-friendly
as it could be.
______________________________________________________________________
OpenSSL Project http://www.openssl.org
User Support Mailing List openssl-users@openssl.org
Automated List Manager [EMAIL PROTECTED]