Geoff Thorpe <[EMAIL PROTECTED]>:
> Bodo Moeller:

>> But programs are not going to call SSL_peek() just because they don't
>> have anything better to do, are they?  When SSL_peek() is called, it is
>> usually because control flow depends on it; so SSL_peek() cannot just
>> be omitted without further changes to the program.

> Depends on what the peek()-like operations are being used for. Like I
> said, "peek()" may be used *optionally* - if the application feels the
> need (or the user feels the need) to scan ahead independently of the code
> doing the actual read/write looping, then it can. My point is that the
> "peek" ideally should not alter the state in any way -

But OpenSSL error reporting is stateful.  How should the error queue
behave?
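
To make that statefulness concrete, consider the existing ERR_*
interface: the queue already distinguishes a non-destructive peek from
a destructive read, and any errors that an "optional" SSL_peek() pushes
onto it become visible to completely unrelated code.  A small
illustration:

#include <stdio.h>
#include <openssl/err.h>

/* ERR_peek_error() looks at the oldest queued error without removing
 * it, while ERR_get_error() consumes entries as it reports them. */
static void dump_error_queue(void)
{
    unsigned long e;

    if ((e = ERR_peek_error()) != 0)       /* non-destructive look */
        printf("head of queue: %s\n", ERR_error_string(e, NULL));

    while ((e = ERR_get_error()) != 0)     /* destructive drain */
        printf("consumed: %s\n", ERR_error_string(e, NULL));
}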

>                                                         so that things in
> the regular read/write loops can't be affected by whether some other chunk
> of code feels like peeking the incoming data before it actually gets
> consumed.
> 
> A simple example would be an SSL server that can proxy to different
> addresses inside a LAN: if the first cleartext data to come out of the
> tunnel gives some clue as to which backend address would be best to
> proxy to, the server would peek at that data *before* setting up the
> proxied connection. The point of the example is that the "peek"ing
> might be a switch or some other user-configurable feature, and as such
> the rest of the "regular" logic should be unaltered by whether a peek
> is performed or not.
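
Concretely, I assume you have something like this in mind (an untested
sketch; choose_backend() is a made-up routing function):

#include <openssl/ssl.h>

/* Hypothetical routing decision based on the first cleartext bytes. */
extern int choose_backend(const char *data, int n);

/* Peek at the first decrypted data without consuming it, so that the
 * regular read/write loop later sees exactly the same bytes. */
static int pick_backend(SSL *ssl)
{
    char buf[256];
    int n = SSL_peek(ssl, buf, sizeof(buf));

    if (n <= 0)
        return -1;              /* no usable data (yet), or an error */
    return choose_backend(buf, n);
}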

But for this purpose, SSL_peek will have to do something -- usually it
will have to read network data from some socket, and that read opens up
the TCP window for the incoming half of the connection, which may avoid
deadlocks.

If you want to avoid this, i.e. just peek at data that has already
been received and decrypted, you'll have to call SSL_pending first
(the corrected version of it, not the current one) and call SSL_peek
only if SSL_pending returns non-zero.  In this case, SSL_peek will not
change the state.  But the SSL server will not be able to peek at the
client's first request, because that data is still at the very beginning
of an application record, which we cannot look at without changing the state
of the connection.  (And if we *do* look at it and notice that instead
of the expected application data it contains a closure alert, then we
suddenly know that the connection is dead, and we have to disable
write attempts -- this certainly is a significant state change!)
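
In code, the only truly passive variant looks something like this
(assuming the corrected SSL_pending mentioned above):

#include <openssl/ssl.h>

/* Peek only at data that has already been received and decrypted:
 * if nothing is buffered, return without touching the wire, so no
 * network I/O and no state change can occur. */
static int passive_peek(SSL *ssl, char *buf, int len)
{
    if (SSL_pending(ssl) <= 0)
        return 0;                    /* nothing buffered; stay passive */
    return SSL_peek(ssl, buf, len);  /* served from the buffer only */
}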


[...]
> Having said that, I'm not sure I grasp the current SSL state machine
> implementation - more specifically, when/where the machine gets "churned".
> If any "write" (either from the clear or encrypted side) causes the
> maximal amount of flushing then implementing an entirely passive peek
> shouldn't be a problem (ie. the data is ready to peek at, without needing
> to advance the state machine) - if on the other hand all flushing is done
> via read attempts (on either side) then this is obviously not going to be
> an option.

When you try to read fresh data, you may find that you have to send an
alert and possibly close down the connection, or you may find that the
peer requests renegotiation; so the state change can affect more than
just the TCP window.  With some extra effort you might be able to
pretend that you did not really see those bytes (there's no such thing
as absolute time anyway [1], and there are no required upper limits on
delays, so the peer cannot tell if the new protocol data has actually
been processed by the SSL implementation).  My intended changes to the
SSL library will allow applications to disable sending data over the
underlying BIO, so you can avoid actually sending your response to the
peer, but it would be hard to avoid changing the internal state of the
SSL object.
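
To illustrate why a read attempt is never passive, consider how the
result of SSL_read (or SSL_peek) has to be classified with
SSL_get_error: several of the possible outcomes already imply a state
change.  A rough sketch:

#include <openssl/ssl.h>

/* Classify the outcome of a read attempt.  Several cases mean the
 * connection state has already changed: a closure alert was processed
 * (ZERO_RETURN), or a handshake message left the library wanting
 * another read or even a write (renegotiation). */
static int careful_read(SSL *ssl, char *buf, int len)
{
    int n = SSL_read(ssl, buf, len);

    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_NONE:
        return n;                 /* plain application data */
    case SSL_ERROR_ZERO_RETURN:
        return 0;                 /* closure alert: connection is dead */
    case SSL_ERROR_WANT_READ:
    case SSL_ERROR_WANT_WRITE:    /* e.g. renegotiation in progress */
        return -1;                /* retry later */
    default:
        return -2;                /* protocol or I/O error */
    }
}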


[1]  Microsoft's IIS relies on the assumption that there is absolute
     time, though.  I haven't checked current versions, but I remember
     very well seeing RSTs instead of correct TCP connection closure
     from such servers (not to mention TLS closure alerts, which would
     be reported as "I/O errors" by earlier versions of MSIE).
     If, instead of reading the server's application data immediately
     (before the RST can be processed by the client's TCP), a client
     tries to send a new pipelined request over the existing
     connection, it may find that the server has reset the connection
     before the client has had a chance to receive the reply to its
     first request: the incoming RST instructs the local TCP to throw
     away any data that has been buffered locally, so the client
     application might never see that reply.