Eric: Thanks for your feedback. Please contribute your code. I suggest you use
eayZLIBstream(226).

- Jeff

> In the current implementation of OpenSSL, compression/decompression state is
> initialized and destroyed per record. It cannot possibly interoperate with
> a compressor that maintains compression state across records. The
> decompressor does care, unfortunately. The other way around could work,
> though: a compressor that works per record, sending to a decompressor that
> maintains state.
>
> Personally I am adding a separate compression scheme that I called
> COMP_streamzlib to the already existing COMP_zlib and COMP_rle methods
> defined in OpenSSL. The only (but significant) difference is that it will
> maintain the compression state across records. For the time being, I will
> just use one of the private IDs mentioned in the previous emails (193 to
> 255), as it is not compatible with the current zlib/openssl compression.
>
> Eric Le Saux
> Electronic Arts
>
> -----Original Message-----
> From: pobox [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, November 24, 2002 2:43 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: OpenSSL and compression using ZLIB
>
> ----- Original Message -----
> From: "Jeffrey Altman" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> Sent: Sunday, November 24, 2002 8:26 AM
> Subject: Re: OpenSSL and compression using ZLIB
>
> > http://www.ietf.org/internet-drafts/draft-ietf-tls-compression-03.txt
> >
> > defines the compression numbers to be:
> >
> >   enum { null(0), ZLIB(1), LZS(2), (255) } CompressionMethod;
> >
> > Therefore proposed numbers have been issued. I suggest that OpenSSL
> > define the CompressionMethod numbers to be:
> >
> >   enum { null(0), ZLIB(1), LZS(2), eayZLIB(224), eayRLE(225), (255) }
> >   CompressionMethod;
> >
> > as values in the range 193 to 255 are reserved for private use.
> >
> > Where does the above draft state that the dictionary must be reset?
> > It states that the engine must be flushed but does not indicate that
> > the dictionary is to be reset. Resetting the dictionary would turn
> > ZLIB into a stateless compression algorithm, and according to the draft
> > ZLIB is most certainly a stateful algorithm:
> >
> >   "the compressor maintains its state through all compressed records"
> >
> > I do not believe that compatibility will be an issue. It will simply
> > result in the possibility that the compressed data is distributed
> > differently among the TLS frames that make up the stream.
>
> The draft clearly implies that the dictionary need not be reset and probably
> should not be reset, but it is not clear to me that it prohibits this.
> However, the draft talks about ...
> "If TLS is not being used with a protocol that provides reliable, sequenced
> packet delivery, the sender MUST flush the compressor completely" ...
> I find this confusing because I've always understood that TLS assumes it is
> running over just such a protocol. If I read it correctly, even EAP-TLS (RFC
> 2716) will handle sequencing, duped, and dropped packets before TLS
> processing is invoked. So what's this clause alluding to?
>
> In any event, I think I agree that the compressor can compatibly behave in
> different ways as long as the decompressor doesn't care. I'm just not sure I
> understand RFC 1950 and RFC 1951 well enough to know what is possible. Is
> "flush the compressor completely" (as in the TLS compression draft language)
> equivalent to compressing all the current data and emitting an end-of-block
> code (value 256 in the language of RFC 1951)? I'm guessing it is. Is
> "resetting the dictionary" equivalent to compressing all the current data
> and sending the block with the BFINAL bit set? If so, then it seems like the
> decompressor can always react correctly and therefore compatibly in any of
> the three cases.
> If the dictionary is reset for every record (current OpenSSL behavior),
> then the decompressor knows this because the BFINAL bit is set for every
> record. If the dictionary is not reset but is flushed for every record,
> then the decompressor knows this because every record ends with an
> end-of-block code. If the most optimal case is in play, which implies a
> single uncompressed plaintext byte might be split across multiple records,
> the decompressor can recognize and react properly to this case. If all
> this is correct, then the next question is ...
> What will the current implementation of the decompressor in OpenSSL do in
> each of these cases?
>
> --greg
> [EMAIL PROTECTED]
>
> ______________________________________________________________________
> OpenSSL Project                                 http://www.openssl.org
> Development Mailing List                        [EMAIL PROTECTED]
> Automated List Manager                          [EMAIL PROTECTED]

Jeffrey Altman * Volunteer Developer       Kermit 95 2.1 GUI available now!!!
The Kermit Project @ Columbia University   SSH, Secure Telnet, Secure FTP, HTTP
http://www.kermit-project.org/             Secured with MIT Kerberos, SRP, and
[EMAIL PROTECTED]                          OpenSSL.
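[Editor's note: the three compressor behaviors Greg distinguishes above (reset the dictionary per record, flush but keep the dictionary, or stream with no per-record flush) and a stateful decompressor's reaction to each can be sketched with Python's zlib module, which wraps the same RFC 1950/1951 engine the thread discusses. This is an illustration of the protocol question only, not OpenSSL code; the record contents and the helper names `compress_records` and `stateful_decompress` are invented for the sketch.]

```python
import zlib

# Four identical records: repetition is exactly where a shared dictionary pays off.
records = [b"GET /index.html HTTP/1.0\r\nHost: example.com\r\n\r\n"] * 4


def compress_records(style):
    """Compress the records in one of the three styles Greg distinguishes."""
    if style == "reset":
        # Current OpenSSL behavior: compression state is initialized and
        # destroyed per record, so each record is a complete zlib stream
        # (BFINAL set, adler32 trailer emitted).
        return [zlib.compress(r) for r in records]
    c = zlib.compressobj()
    if style == "flush":
        # One stream for the connection, flushed at every record boundary
        # (end-of-block code emitted) but keeping the dictionary.
        return [c.compress(r) + c.flush(zlib.Z_SYNC_FLUSH) for r in records]
    # style == "stream": the "most optimal" case; compressed output need not
    # align with record boundaries, and state is flushed only at close.
    chunks = [c.compress(r) for r in records]
    chunks.append(c.flush())
    return chunks


def stateful_decompress(chunks):
    """A decompressor that keeps state across records, restarting only when
    a stream genuinely ended (a BFINAL block followed by the trailer)."""
    d = zlib.decompressobj()
    out = b""
    for chunk in chunks:
        out += d.decompress(chunk)
        if d.eof:  # previous zlib stream is complete; start a fresh one
            leftover, d = d.unused_data, zlib.decompressobj()
            out += d.decompress(leftover)
    return out


plaintext = b"".join(records)
for style in ("reset", "flush", "stream"):
    assert stateful_decompress(compress_records(style)) == plaintext

# Keeping the dictionary across records is what makes Eric's scheme
# attractive: after record 1, repeated data compresses to a few bytes.
assert len(compress_records("flush")[1]) < len(compress_records("reset")[1])
```

Note that a fresh per-record decompressor (what Eric says OpenSSL currently has) would fail on record 2 of the "flush" and "stream" styles, since the deflate window those records back-reference has been thrown away; that incompatibility is why a private CompressionMethod ID (193 to 255) is used for the stateful scheme rather than the existing zlib number.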