Bodo Moeller wrote:
On Thu, Jun 22, 2006 at 10:41:14PM +0100, Darryl Miles wrote:
SSL_CTX_set_mode(3)
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Make it possible to retry SSL_write() with changed buffer
location (the buffer contents must stay the same). This is not the
default to avoid the misconception that non-blocking SSL_write()
behaves like non-blocking write().
What is that all about? My application makes no guarantee about the
exact address given to SSL_write(); it only guarantees that the first so
many bytes are my valid data. Why do I need to give it such guarantees?
Thanks, this is the clearest explanation so far.
When using SSL_write() over a non-blocking transport channel, you may
have to call SSL_write() multiple times until all your data has been
transferred. In this case, the data buffer needs to stay constant
between calls until SSL_write() finally returns a positive number
since (unless you are using SSL_MODE_ENABLE_PARTIAL_WRITE) some of the
calls to SSL_write() may read some of your data, and if the buffer
changes, you might end up inadvertently transferring incoherent data.
To help detect such potential application bugs, OpenSSL includes a
simple sanity check -- if SSL_write() is called again but the data
buffer *location* has changed, OpenSSL suspects that this is a mistake
and returns an error.
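To make this concrete, here is a small annotated sketch of the contract. Note that fake_ssl_write() is a hypothetical stand-in invented for illustration, not real OpenSSL code; it mimics SSL_write() in the default mode, which either accepts the whole buffer or asks to be retried, and which insists that every retry use the same buffer location:

```c
#include <stddef.h>
#include <assert.h>

/* Hypothetical stand-in for SSL_write() in the default mode (no
 * SSL_MODE_ENABLE_PARTIAL_WRITE): it asks to be retried twice, then
 * accepts the whole buffer at once.  Returns -1 meaning "retry"
 * (like SSL_write() returning -1 with SSL_ERROR_WANT_READ), or
 * len on success. */
static const char *first_buf = NULL;  /* buffer pointer seen on first call */
static int calls = 0;

static int fake_ssl_write(const char *buf, int len)
{
    if (first_buf == NULL)
        first_buf = buf;
    /* This is the sanity check: the retry must use the SAME location. */
    assert(buf == first_buf);
    return (++calls < 3) ? -1 : len;
}

/* The correct retry loop: identical pointer and length every time. */
static int write_retrying(const char *buf, int len)
{
    int n;
    do {
        n = fake_ssl_write(buf, len);  /* same buf, same len, each retry */
    } while (n < 0);
    return n;
}
```

If write_retrying() instead passed a different pointer on the retry, the assert (i.e. the sanity check) would fire.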
"Some of the calls to SSL_write() may read some of your data" -- I am
still not sure how the reading of data impacts the write operation. Are
you saying that when WANT_READ is returned from SSL_write(), the OpenSSL
library has already committed some number of bytes from the buffer given,
but because it is returning -1 WANT_READ it fails to report that
situation back to the application during the first SSL_write() call?
An under-reporting of committed bytes, if you want to call it that. This
would also imply you can't reduce the amount of data given to a
subsequent SSL_write() after one that failed. Or it implies that OpenSSL
may access bytes outside the range given to the currently executing
SSL_write(), in that it is somehow still using the buffer address given
during a previous SSL_write() call.
I still have not gotten to the bottom of the entire scope of
situations that can cause an SSL_write() to return -1 WANT_READ. If
only renegotiation can, then it is always instigated either by an
SSL_renegotiate() (from my side) or by an SSL_read() that causes a
renegotiation request (from the remote side) to be processed.
Back to your clarification on the modes.
It is still unclear how this would work; here is the strictest pseudo-code
case I can think up. This is where:
* the exact address of the 4096th byte to send is always at the same
address for every repeated SSL_write() call, and
* I don't change or reduce the amount of data to be written during
subsequent SSL_write() calls, until all 4096 bytes of the first
SSL_write() have been committed into OpenSSL.
char pinned_buffer[4096];
int want_write_len = 4096;
int offset = 0;
int left = want_write_len;

do {
    int n = SSL_write(ssl, &pinned_buffer[offset], left);
    if (n <= 0) {
        /* on -1, check SSL_get_error() for WANT_READ/WANT_WRITE */
        sleep_as_necessary();
    } else {
        offset += n;
        left -= n;
    }
} while (left > 0);
In practice many applications may copy their data to a local stack
buffer and give that stack buffer to SSL_write(). This means the data
shuffles up and the next 4096-byte window is used for SSL_write().
So what I am asking now is: what is the _LEAST_ strict case that can be
allowed, if the one above is what I see as the most strict usage?
The need for this dumbfounds me. If SSL_write() is returning (<= 0)
then it should not have taken any data from my buffer, nor be retaining
my buffer address (or accessing data outside the scope of the function
call).
It is also valid for me to "change my mind" about exactly what
application data I want to write at the next SSL_write() call. This
may be a change of application data contents or a change in the amount
of data to write (length).
In fact I have an application that does exactly this: it implements a
priority queue of packetized data, and the decision about what to send
next is made right at the moment it knows it can call write().
But sometimes, you might want to change the buffer location for some
reason, e.g. since the SSL_write() data buffer is just a window in a
larger buffer handled by the application. To tell OpenSSL that such
an address change is intentional in your application, and that the
application will make sure that any buffer contents will be preserved
until SSL_write() reports success, you can set the
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag. This will not change
OpenSSL's operation in any way except disabling the sanity check,
since setting this flag indicates that your application does not
require this check.
When you say "change the buffer location", do you mean the exact address
given to SSL_write() in the 2nd argument? Or do you mean that for
repeated calls to SSL_write() the last byte's address (the 4096th byte,
from the example) remains constant until OpenSSL gives an indication
that the last byte has been committed?
Here I am asking "which buffer, when?" and "what location?" in relation
to a previous failed SSL_write().
In the case of my example usage when copying to the stack, I have a much
larger buffering system in place and a small temporary window on the
stack is used to prepare data for SSL_write(). This is because
SSL_write() doesn't support IOV/writev() scatter-gather buffers which I
am using with unencrypted sockets.
It's still unclear what guarantees my application needs to make for
OpenSSL; the sanity check looks at first glance to be there for its own
sake, and redundant. Is there a direct relationship between
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER and SSL_MODE_ENABLE_PARTIAL_WRITE ?
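For concreteness, here is the moving-window loop I have in mind, sketched against a toy stand-in for SSL_write() (cap5_write and send_all are invented for illustration). My understanding is that the two modes are related but distinct: SSL_MODE_ENABLE_PARTIAL_WRITE lets each successful call accept fewer bytes than requested, and because the retry pointer then advances, the real SSL_write() would also need SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER to pass the sanity check:

```c
#include <assert.h>

/* xwrite stands in for SSL_write() with SSL_MODE_ENABLE_PARTIAL_WRITE
 * set: each successful call may accept fewer bytes than requested. */
typedef int (*write_fn)(void *ctx, const char *buf, int len);

/* Partial-write loop: the pointer passed on each retry advances
 * (buf + off), which is exactly the "moving write buffer" case. */
int send_all(write_fn xwrite, void *ctx, const char *buf, int len)
{
    int off = 0;
    while (off < len) {
        int n = xwrite(ctx, buf + off, len - off);  /* moving window */
        if (n <= 0)
            return -1;          /* real code would check SSL_get_error() */
        off += n;
    }
    return off;
}

/* Toy stand-in that accepts at most 5 bytes per call. */
static int cap5_write(void *ctx, const char *buf, int len)
{
    (void)ctx; (void)buf;
    return len < 5 ? len : 5;
}
```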
Sorry for more questions. Seeing an example of what is right and what
is wrong in annotated code form would be ideal.
Darryl
______________________________________________________________________
OpenSSL Project http://www.openssl.org
User Support Mailing List openssl-users@openssl.org
Automated List Manager [EMAIL PROTECTED]