> I have an SSL client and a server application. The client connects to an
> SSL server in a TCP socket persistence mode, i.e., it does a data
> exchange with the server through an SSL connection, tears down the SSL
> connection, but then sends out a client_hello in the same TCP socket
> connection it had earlier established with the server to perform another
> cycle of data exchange.

You are trying to do this without any protocol that would assure that it
works. As far as I know, there is no reason this should work, and you
haven't coded such a protocol yourself.

How do you determine, unambiguously, whether a particular chunk of data is
part of the first SSL session or the second? How do you make sure the final
data from the first connection isn't mistaken for the start of the second?

If you want to multiplex two SSL connections over a single TCP connection,
you need to make sure that both ends agree unambiguously on what data is
part of the first SSL connection and what data is part of the second. Unless
you wrote code to do that, you are basically expecting it to happen by
accident.
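For what it's worth, here is a sketch of one way to get that agreement. It
is not anything SSL itself gives you: the idea is to detach each SSL session
from the socket using OpenSSL memory BIOs, and move its bytes across the
wire inside session-tagged, length-prefixed frames. The frame format and the
helper names below are my own invention for illustration.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>

/* Invented frame format: 1-byte session id, 4-byte big-endian payload
 * length, then that many bytes of raw TLS record data. */
static int write_frame(int sock, uint8_t session_id,
                       const unsigned char *buf, uint32_t len)
{
    unsigned char hdr[5];
    uint32_t nlen = htonl(len);

    hdr[0] = session_id;
    memcpy(hdr + 1, &nlen, 4);
    /* Sketch only: partial writes and EINTR handling omitted. */
    if (write(sock, hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
        return -1;
    return write(sock, buf, len) == (ssize_t)len ? 0 : -1;
}

/* Give the SSL handle a pair of memory BIOs instead of the socket, so
 * SSL_read()/SSL_write() never touch the wire directly. */
static void attach_mem_bios(SSL *ssl, BIO **rbio, BIO **wbio)
{
    *rbio = BIO_new(BIO_s_mem());   /* wire -> SSL */
    *wbio = BIO_new(BIO_s_mem());   /* SSL -> wire */
    SSL_set_bio(ssl, *rbio, *wbio); /* ssl takes ownership of both */
}

/* After SSL_write() (or during a handshake), drain whatever the SSL
 * engine queued in its write BIO and ship it in a tagged frame. */
static int flush_to_wire(int sock, uint8_t session_id, BIO *wbio)
{
    unsigned char buf[4096];
    int n;

    while ((n = BIO_read(wbio, buf, sizeof(buf))) > 0)
        if (write_frame(sock, session_id, buf, (uint32_t)n) < 0)
            return -1;
    return 0;
}

The point of the memory BIOs is that the SSL engine never reads or writes
the socket itself; your framing layer is the only thing that does, so it
alone decides where one session's bytes end and another's begin.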

There is simply no way to prevent the code that receives the last bytes of
the first SSL connection from also reading some of the first bytes of the
new one, and it has no place to "put them back", since all of the code
reads directly from the socket.
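Continuing the same illustrative sketch (same invented frame format and
includes as above): on the receiving side, a demultiplexer reads exactly one
frame at a time and feeds the payload into the read BIO of whichever session
the frame is tagged for. Bytes for the new session never reach the old
session's SSL object, so nothing ever needs to be put back.

/* Assumed helper: loop until exactly n bytes have been read. */
static int read_exactly(int fd, unsigned char *p, size_t n)
{
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0)
            return -1;
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Read one frame and push its payload into the read BIO of the session
 * it is tagged for (rbio_by_id indexed by session id; validation of the
 * id is omitted in this sketch). */
static int demux_one_frame(int sock, BIO *rbio_by_id[256])
{
    unsigned char hdr[5], buf[4096];
    uint32_t len;

    if (read_exactly(sock, hdr, sizeof(hdr)) < 0)
        return -1;
    memcpy(&len, hdr + 1, 4);
    len = ntohl(len);

    while (len > 0) {
        uint32_t chunk = len < sizeof(buf) ? len : (uint32_t)sizeof(buf);
        if (read_exactly(sock, buf, chunk) < 0)
            return -1;
        if (BIO_write(rbio_by_id[hdr[0]], buf, (int)chunk) != (int)chunk)
            return -1;
        len -= chunk;
    }
    return 0;
}

After a frame has been delivered, SSL_read() on the matching session
consumes only that session's bytes; anything left over stays queued in that
session's own memory BIO rather than being lost down the socket.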

Theoretically, SSL could have been designed to allow this, but as far as I
know, it wasn't. Do you have any reason to believe otherwise?

DS


