1) That doesn't make sense. Maybe you mean the socket comes from (TCP-level) accept and you give it to SSL_set_fd? That does make sense and should work for one connection=socket at a time, i.e. accept #3, connect SSL to #3, do send and receive until the connection is closed, close the socket and SSL_clear, accept #7, ditto.

2) 1 is not a real error code. If SSL_get_error(ssl) returns 1 == SSL_ERROR_SSL, you should call ERR_get_error or its variants, or just ERR_print_errors[_fp] for the simplest handling. Note ERR not SSL. If SSL_get_error() is 5 == SSL_ERROR_SYSCALL, you must also look at your *OS* error: errno on Unix or [WSA]GetLastError() on Windows. On Unix perror or strerror gives a nice decode; Windows is harder.

However, if your keys or certs were bad you would get the error on loading them, or at the latest at the handshake, which if you don't do it explicitly would happen on the first SSL_read or SSL_write. Not the second. This error is almost certainly something else, and the ERR_* details above should help spot it.
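To illustrate (1), here is a rough sketch of that one-connection-at-a-time pattern, assuming the SSL_CTX already has your certificate and key loaded; serve_loop, ctx and listen_fd are just placeholder names, and the echo loop is only for illustration:

/* Sketch only: serve one TLS connection at a time, reusing one SSL object.
 * Assumes ctx was created with SSL_CTX_new and already has the certificate
 * and private key loaded; listen_fd is a bound, listening TCP socket. */
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

static void serve_loop(SSL_CTX *ctx, int listen_fd)
{
    SSL *ssl = SSL_new(ctx);            /* one SSL object, reused serially */
    if (ssl == NULL)
        return;

    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);   /* TCP-level accept */
        if (fd < 0)
            break;

        SSL_set_fd(ssl, fd);            /* attach this socket to the SSL */
        if (SSL_accept(ssl) == 1) {     /* TLS handshake as the server */
            char buf[4096];
            int n;
            while ((n = SSL_read(ssl, buf, sizeof buf)) > 0)
                SSL_write(ssl, buf, n); /* echo back, just for illustration */
            SSL_shutdown(ssl);
        } else {
            ERR_print_errors_fp(stderr);
        }

        close(fd);
        SSL_clear(ssl);                 /* reset state before the next accept */
    }
    SSL_free(ssl);
}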
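And to illustrate the ERR_* handling in (2), a minimal sketch of decoding a failed SSL_read/SSL_write; report_ssl_error is just a made-up helper name, and the errno/strerror part is the Unix path (on Windows you would use [WSA]GetLastError as noted above):

/* Sketch only: decode a failed SSL_read/SSL_write.
 * "ret" is the value returned by the failed call. */
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

static void report_ssl_error(SSL *ssl, int ret)
{
    int err = SSL_get_error(ssl, ret);  /* note: takes ssl AND the return value */

    switch (err) {
    case SSL_ERROR_SSL:                 /* 1: look in the ERR queue, not errno */
        ERR_print_errors_fp(stderr);
        break;
    case SSL_ERROR_SYSCALL:             /* 5: also look at the OS error */
        fprintf(stderr, "I/O error: %s\n", strerror(errno));
        ERR_print_errors_fp(stderr);    /* queue may hold extra detail */
        break;
    case SSL_ERROR_ZERO_RETURN:         /* clean TLS close from the peer */
        fprintf(stderr, "connection closed cleanly\n");
        break;
    default:
        fprintf(stderr, "SSL_get_error() = %d\n", err);
        break;
    }
}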
From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] On Behalf Of kasthurirangan balaji
Sent: Friday, September 05, 2014 13:49
To: openssl-users@openssl.org
Subject: design clarification using openssl

Hi,

After searching the web, I am writing to this address as my questions are still unanswered.

1) Can an SSL structure, allocated once via SSL_CTX, be used with various socket descriptors just by changing the descriptors using SSL_set_fd? The socket descriptor used would have been passed through SSL_accept before reaching SSL_set_fd. The socket is in blocking mode only.

2) I generated key and certificate files locally using the openssl commands. Does anything else need to be done before loading them? I ask this because the first read via SSL_read always succeeds and subsequent reads fail with error:00000001:lib(0): func(0): reason 1.

If this is not the right place to ask, please direct me to the right place so that I can get my queries cleared.

Thanks,
Balaji.