If you have control over the parent process's source code, I think it is easier to accept() the incoming connection in the parent process, then fork() and let the child establish the SSL channel using the inherited socket returned by accept(). This way you don't need to share memory, because all SSL operations happen in the child process. You only need to initialize the OpenSSL library in the parent process (load certificates and keys, set up algorithms, ...) and use the inherited structures in the child to create the SSL channel.
----- Original Message -----
From: <[EMAIL PROTECTED]>
To: <openssl-users@openssl.org>
Sent: Wednesday, April 20, 2005 11:32 AM
Subject: Multi process Server and openssl

Folks,

We have come up against a problem while trying to integrate the OpenSSL library into our server. The server architecture is multi-process, where child processes handle requests. Each process attaches to a single shared memory segment which holds common configuration data.

Our problem is: during the TLS negotiation, and after the secure channel is set up, different child processes will handle the request and will need access to the SSL connection. The SSL connections are allocated and freed using OpenSSL library calls and are therefore in the address space of the process that allocated them. There is no way of telling OpenSSL to use our block of shared memory for its needs.

We solved a similar problem with LDAP connections by putting a tag into shared memory; each process has its own real LDAP connection to the server in local memory, which it finds using the tag. We don't think that this approach can be applied to OpenSSL.

Does anyone have any ideas how this problem can be solved without threading the server?

thanks,
Martin.

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           [EMAIL PROTECTED]