Thor Lancelot Simon wrote:
I have an application which reads data into fixed-size buffers which it
maintains per session.  It uses non-blocking IO and select() when a read
returns SSL_ERROR_WANT_{READ,WRITE}.

To conserve memory I reduced the buffer size from 16384 to 8192 and saw
sessions suddenly hang.  A coworker diagnosed this as follows:

To be clear: which buffer?

Case 1) The internal buffer inside libssl.so used to decode a full packet? (Its size, as others have commented, is set by the protocol standard.)

Case 2) The application-space buffer used to receive decrypted application data, which is usually passed to SSL_read()?



If you mean case 1, then the issue is that you may be violating the protocol spec (though there may be a negotiation option/setting to lower the max packet size).


If you mean case 2, then I see no reason why SSL should hang: even if your application chooses to read application data 1 byte at a time with SSL_read(), that should work 100%.

What OpenSSL does internally is decode the full packet and maintain pointers to the data left over from the previous packet. Once there is no more data left to convey to the application, it attempts to read more data in from the lower layers (from the socket) until it gets another full packet.

Yes, all data is double buffered (technically triple buffered, since there is encrypted data, then unencrypted data, then the application's own buffer). Perhaps a new OpenSSL API could let the caller request a read-only pointer and length for new data, removing the extra buffering overhead, since SSL_read() does a full copy (from the unencrypted data into application space).

Sorry for not taking the time to read every email in this thread today.


Darryl
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           [EMAIL PROTECTED]