On Mon, Jun 26, 2006 at 12:35:57PM +0100, Darryl Miles wrote:

> "Some of the calls to SSL_write() may read some of your data", I am 
> still not sure how the reading of data impacts the write operation.  Are 
> you saying that when WANT_READ is returned from SSL_write() the OpenSSL 
> library has already committed some number of bytes from the buffer given 
> but, because it's returning -1 WANT_READ, it is failing to report that 
> situation back to the application during the first SSL_write() call ? 

Yes.  During the first call to SSL_write(), OpenSSL may take as many
bytes as fit into one TLS record, and encrypt this for transport.
Then SSL_write() may fail with WANT_WRITE or WANT_READ both before and
after this first record has been written, until finally all the data
has been broken up into records and all these records have been sent.


> An under-reporting of committed bytes, if you want to call it that.  This 
> would also imply you can't reduce the amount of data given to SSL_write() 
> on a subsequent call after one that failed.  Or it implies that OpenSSL 
> may access bytes outside of the range given by the currently executing 
> SSL_write(), in that it's somehow still using the buffer address given 
> during a previous SSL_write() call.
> 
> I still have not gotten to the bottom of the entire scope of 
> situations that can cause an SSL_write() to return -1 WANT_READ.  If it's 
> only renegotiation that can, then this is always instigated by an 
> SSL_renegotiate() (from my side) or an SSL_read() that causes a 
> re-negotiation request (from the remote side) to be processed.

Yes.  The other party is always allowed to start renegotiation, so
your SSL_write() call might be during renegotiation; but OpenSSL won't
try to read data from the other party during SSL_write() unless it
already knows that a renegotiation is going on.


> It is still unclear how this would work, here is the strictest pseudo 
> code case I can think up.  This is where:
> 
>  * the exact address for the 4096th byte to send is always at the same 
> address for every repeated SSL_write() call and
> 
>  * I don't change or reduce the amount of data to be written during 
> subsequent SSL_write(), until all 4096 bytes of the first SSL_write() 
> have been committed into OpenSSL.

Exactly.  You should not change the amount of data (n), and you should
not change the contents of these n bytes.  You may change the address
of that buffer (provided that the contents remain the same) if you
set the flag that you asked about.  However ...


> char pinned_buffer[4096];
> int want_write_len = 4096;
> int offset = 0;
> int left = want_write_len;
> 
> do {
>       int n = SSL_write(ssl, &pinned_buffer[offset], left);
>       if(n < 0) {
>               sleep_as_necessary();
>       } else if(n > 0) {
>               offset += n;
>               left -= n;
>       }
> } while(left > 0);

... once SSL_write() returns a positive number, this indicates that
this number of bytes, and *only* this number of bytes, has been
processed.  So any subsequent SSL_write() is "detached" from this
SSL_write(); OpenSSL no longer cares what you change in the buffer,
because the reported bytes have already been copied out and encrypted.

Note that you'll have to set SSL_MODE_ENABLE_PARTIAL_WRITE to cause
OpenSSL to return success before *all* of the application buffer has
been written.  The default is that OpenSSL will write all the data,
using multiple records if necessary; with
SSL_MODE_ENABLE_PARTIAL_WRITE, SSL_write() will report success
once a single record has been written.


> In practice many applications may copy their data to a local stack 
> buffer and give that stack buffer to SSL_write().  This means the data 
> shuffles up and the next 4096-byte window is used for SSL_write().
> 
> So what I am asking now is: what is the _LEAST_ strict case that can be 
> allowed too, if the one above is what I see as the most strict usage?
> 
> 
> 
> The need for this dumbfounds me.  If SSL_write() is returning (<= 0) 
> then it should not have taken any data from my buffer, nor be retaining 
> my buffer address (or accessing data outside the scope of the function 
> call).

If SSL_write() has started writing a first record, but delayed other
data to later records, then it may have to return -1 to indicate
a "WANT_WRITE" or "WANT_READ" condition.


> It is also valid for me to "change my mind" about exactly what 
> application data I want to write at the next SSL_write() call.  This 
> maybe a change of application data contents or a change of amount of 
> data to write (length).

Not if OpenSSL has already started handling the application data.
In that case you should buffer the application data so that you
can repeat the SSL_write() call properly.


> In fact I have an application that does exactly this: it implements a 
> priority queue of packetized data and the decision about what to send 
> next is made right at the moment it knows it can call write().
> 
> 
> >But sometimes, you might want to change the buffer location for some
> >reason, e.g. since the SSL_write() data buffer is just a window in a
> >larger buffer handled by the application.  To tell OpenSSL that such
> >an address change is intentional in your application, and that the
> >application will make sure that any buffer contents will be preserved
> >until SSL_write() reports success, you can set the
> >SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag.  This will not change
> >OpenSSL's operation in any way except disabling the sanity check,
> >since setting this flag indicates that your application does not
> >require this check.
> 
> When you say "change the buffer location" do you mean the exact offset 
> given to SSL_write() in 2nd argument ?  Or do you mean for repeated 
> calls to SSL_write() the last byte (4096th byte from example) address 
> remains constant until OpenSSL gives indication that the last byte has 
> been committed ?
> 
> Here I am asking "which buffer when?" and "what location?" in relation 
> to previous failed SSL_write() ?

I don't quite understand these questions.  The second argument
to SSL_write() provides a pointer to a buffer, the third argument
provides the length.  Without SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER,
a sanity check done by OpenSSL requires the buffer pointer to remain
unchanged until SSL_write() returns a non-negative number, since
otherwise there might be unintended behaviour, as explained above.


> In the case of my example usage when copying to the stack, I have a much 
> larger buffering system in place and a small temporary window on the 
> stack is used to prepare data for SSL_write().  This is because 
> SSL_write() doesn't support IOV/writev() scatter-gather buffers which I 
> am using with unencrypted sockets.
> 
> 
> It's still unclear what guarantees my application needs to make for 
> OpenSSL, the sanity check looks at first glance to be there for its own 
> sake and redundant.  Is there a direct relationship between 
> SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER and SSL_MODE_ENABLE_PARTIAL_WRITE ?

Well, if you use SSL_MODE_ENABLE_PARTIAL_WRITE, then it doesn't really
matter if the write buffer changes between calls to SSL_write(), since
SSL_write() reports success as soon as a single record has been written
and thus never carries unreported application data from one call to the
next.  So in that case, you probably won't encounter problems if you
disable the sanity check by setting SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
as well.

Bodo

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           [EMAIL PROTECTED]
