Hey there,

On Fri, 1 Dec 2000, Bodo Moeller wrote:

> But programs are not going to call SSL_peek() just because they don't
> have anything better to do, are they?  When SSL_peek() is called, then
> usually because control flow depends on it; so SSL_peek() cannot just
> be omitted without further changes to the program.

Depends on what the peek()-like operations are being used for. Like I
said, "peek()" may be used *optionally* - if the application (or the
user) feels the need to scan ahead independently of the code doing the
actual read/write looping, then it can. My point is that the "peek"
ideally should not alter the state in any way - so that nothing in the
regular read/write loops can be affected by whether some other chunk
of code feels like peeking at the incoming data before it actually gets
consumed.

A simple example would be an SSL server that can proxy to different
addresses inside a LAN: if the first cleartext data to come out of the
tunnel gives some clue as to which backend address would be best to
proxy to, the server would peek at that data *before* setting up the
proxied connection. The point of the example is that the peeking might
be a switch or some other user-configurable feature, and as such the
rest of the "regular" logic should be unaltered by whether a peek is
performed or not. If the decision between peek and no-peek is more
subtle and embedded deep inside an application (and perhaps happening,
or not happening, only in certain special circumstances), then an
incorrectly coded application may appear to work 'most of the time' -
i.e. SSL_peek() can generate heisenbugs.
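
Something along these lines is what I have in mind - a rough sketch
only, where choose_backend() and default_backend are invented for the
illustration:

    #include <openssl/ssl.h>

    struct backend;                         /* opaque, invented */
    extern struct backend *choose_backend(const char *buf, int n);
    extern struct backend *default_backend;

    /* "ssl" is non-blocking and has completed its handshake.
     * Returns the backend to proxy to, or NULL on a real error. */
    static struct backend *sniff_backend(SSL *ssl)
    {
        char sniff[256];
        int n = SSL_peek(ssl, sniff, sizeof(sniff));

        if (n > 0) {
            /* The peeked bytes stay queued inside the SSL object, so
             * the regular read loop still sees them via SSL_read(). */
            return choose_backend(sniff, n);
        }
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:
        case SSL_ERROR_WANT_WRITE:
            /* Nothing to sniff yet - don't stall connection setup. */
            return default_backend;
        default:
            return NULL;                    /* real error */
        }
    }

The important property is that compiling this out (or switching it off
at run-time) should leave the regular read/write loop behaving
identically.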

> As far as unnoticed deadlocks are concerned, there are more likely
> ways to obtain them; e.g. by selecting for readable data before trying
> SSL_read() even if the previous error was SSL_ERROR_WANT_WRITE.  Once
> in a while the client cert chain sent during renegotiation might be
> too long for your host's TCP send buffer ...

Absolutely, but my philosophical point was that a "peek" should preferably
be a "const" operation :-) Any alteration to the SSL state from a peek
leaves open a window of opportunity for applications to use peeks and
miss a step in their non-blocking behaviour, because the peek() was
effectively advancing the state anyway. Any later attempt to switch the
peek() operations out might then cause the application to lock up -
something that would have been spotted much earlier if peek() were
strictly passive. (And that's just one possible manifestation of
quantum-mechanical program logic :-).
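
To make your deadlock example concrete, here's a sketch of the pattern
you describe - the application has to select() on the direction the SSL
layer asks for, not the direction it "expects" (fd is assumed to be the
socket underneath ssl):

    #include <sys/select.h>
    #include <openssl/ssl.h>

    /* Returns bytes read, 0 on clean shutdown, -1 on fatal error. */
    static int ssl_read_retry(SSL *ssl, int fd, void *buf, int len)
    {
        for (;;) {
            fd_set fds;
            int n = SSL_read(ssl, buf, len);
            if (n > 0)
                return n;
            FD_ZERO(&fds);
            FD_SET(fd, &fds);
            switch (SSL_get_error(ssl, n)) {
            case SSL_ERROR_WANT_READ:
                select(fd + 1, &fds, NULL, NULL, NULL);
                break;
            case SSL_ERROR_WANT_WRITE:
                /* e.g. a renegotiation blocked on the send buffer -
                 * selecting for readability here is the deadlock. */
                select(fd + 1, NULL, &fds, NULL, NULL);
                break;
            case SSL_ERROR_ZERO_RETURN:
                return 0;
            default:
                return -1;
            }
        }
    }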

Having said that, I'm not sure I grasp the current SSL state machine
implementation - more specifically, when/where the machine gets "churned".
If any "write" (from either the clear or the encrypted side) causes the
maximal amount of flushing, then implementing an entirely passive peek
shouldn't be a problem (i.e. the data is ready to peek at without needing
to advance the state machine); if, on the other hand, all flushing is
done via read attempts (on either side), then this is obviously not
going to be an option.
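
For what it's worth, SSL_pending() seems to expose at least the easy
half of this: it reports cleartext bytes already sitting in the SSL
object from records that have been processed, and those can be peeked
without the library going anywhere near the transport. A sketch of what
I mean:

    #include <openssl/ssl.h>

    /* Peek only what is already buffered inside the SSL object;
     * returns 0 when a "real" peek would have to churn the machine. */
    static int passive_peek(SSL *ssl, char *buf, int len)
    {
        if (SSL_pending(ssl) <= 0)
            return 0;
        return SSL_peek(ssl, buf, len);  /* bytes remain queued */
    }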

Cheers,
Geoff

