> On November 27, 2002 03:24 pm, Kenneth R. Robinette wrote:
> > Um, well that's one approach.  But it's a little like saying "Let's let
> > SSL/TLS take care of agreeing on a cipher type, and then leave it up to
> > the user application to take care of the actual encryption/decryption."
> > I would rather see the most commonly used methods implemented within
> > SSL/TLS itself.
>
> If the SSL/TLS implementation is doing the (de)compression I don't see
> what your point is. I.e. with this compression method negotiated by the
> client and server, the SSL/TLS layer would still be responsible for
> handling compression - it would just handle it on the application data
> before applying the SSL/TLS framing rather than compressing data inside
> it. From the application point of view, there's no need to implement
> anything. Did you misunderstand me or vice versa?


Geoff,

I can't speak for Kenneth, but I'm not sure I get what you're saying
here. The data is first compressed and then encrypted according to
RFC2246. In my mind, once the application hands the data to OpenSSL
via SSL_write() or BIO_write() or _puts() or whatever it is no longer
application data, even if compression has been negotiated.

I think it is best to first get the decompressor right. My belief
is that a single decompressor can transparently handle the following
three possible compression scenarios:

1) Each record is compressed independently. The dictionary is reset
before each record. This appears to be the way OpenSSL currently works
(flush is Z_FULL_FLUSH). Compression ratio is worst of the three.

2) The dictionary is not reset between records.  However, the current
compression buffer can be flushed (Z_SYNC_FLUSH), so that uncompressed
data does not span an SSL record boundary. Compression ratio is better
than #1.

3) The compression buffer is not flushed between records. Uncompressed
data may span SSL record boundaries. Best compression ratio.

#1 is the 'safest' in that it makes compression as transparent to
client applications as possible. #2 is almost as safe; for the most
part it will be just as safe as #1. In fact, I can't really think of
any reasonable scenario in which this is not true, though strange
things are possible with accelerators, proxies, shims and whatnot.
Support for at least #2 is absolutely necessary, e.g. for client
protocols like EAP-TLS.

A decompressor that has this functionality would be backward
compatible with the current OpenSSL scheme and forward compatible with
almost any reasonable implementation of ZLIB over TLS.
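As a sketch of that claim (again in Python's zlib bindings rather than
the C API, with class and helper names invented for illustration), a
single inflater whose state is never reset can decode records framed
under any of the three scenarios:

```python
import zlib

class RecordInflater:
    """One long-lived inflater whose state is never reset between
    records (the name is illustrative, not OpenSSL's)."""

    def __init__(self):
        self._inflater = zlib.decompressobj()

    def decompress_record(self, payload):
        # Under scenario 3 a record may end mid-symbol; the undecodable
        # tail simply stays buffered in the inflater's internal state
        # until the next record supplies the rest.
        return self._inflater.decompress(payload)

def frame(records, flush):
    """Compress records on one shared stream, flushing at each record
    boundary with the given mode."""
    c = zlib.compressobj()
    return [c.compress(r) + c.flush(flush) for r in records]

records = [b"alpha " * 20, b"beta " * 20, b"gamma " * 20]
plain = b"".join(records)

# Scenario 1 (full flush) and scenario 2 (sync flush): one inflater each.
decoded_full = b"".join(map(RecordInflater().decompress_record,
                            frame(records, zlib.Z_FULL_FLUSH)))
decoded_sync = b"".join(map(RecordInflater().decompress_record,
                            frame(records, zlib.Z_SYNC_FLUSH)))

# Scenario 3: no flushing; chop the finished compressed stream at
# arbitrary 7-byte "record" boundaries and feed the pieces one by one.
c = zlib.compressobj()
blob = c.compress(plain) + c.flush()
inflater = RecordInflater()
decoded_nofl = b"".join(inflater.decompress_record(blob[i:i + 7])
                        for i in range(0, len(blob), 7))
```

All three decoded outputs round-trip back to the original plaintext.
This works because a full or sync flush is itself a valid point inside
a continuous deflate stream, so an inflater that simply keeps going
never needs to know which flush policy the peer chose.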




______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]
