If we are not going to use compression, or we are handling it in our own code, why link the zlib library?

You said you are not using the v7 component, so none of this currently
affects your applications.  It is not realistic to make the code more
complicated just to satisfy compatibility for one user who may or may
not use it sometime in the future.
Changing this

if (ZStreamType <> zsZLib) then begin

to this

if (ZStreamType <> zsZLib) {$IFNDEF USE_ZLIB_OBJ}and ZLibLoaded {$ENDIF} then begin

and adding a ZLibLoaded function, which is needed anyway to keep the code bug-free if a user decides to use ZLIBDLL, is hardly turning the code much more complicated!
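For reference, a minimal sketch of what such a ZLibLoaded function could look like when the DLL is loaded dynamically. The DLL name 'zlib1.dll' and the variable name are assumptions for illustration, not the actual ICS identifiers:

```pascal
{ Sketch only: assumes zlib is loaded on demand with LoadLibrary and the
  module handle is kept in a unit-level variable. }
uses
    Windows;

var
    ZLibDllHandle: THandle = 0;

function ZLibLoaded: Boolean;
begin
    { Try to load the DLL the first time we are asked }
    if ZLibDllHandle = 0 then
        ZLibDllHandle := LoadLibrary('zlib1.dll');
    Result := ZLibDllHandle <> 0;
end;
```

With that in place, the extra condition in the IFNDEF branch simply skips compression when the DLL is absent instead of raising an error.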


The size increase adding the zlib code is not that much, about 21K,
particularly compared to the vast Delphi visual runtime library that
brings in hundreds of kilobytes of components that are rarely used, like
action lists, image lists, docking, etc.
Agreed, but I know it is there, and a small code change can remove those extra bytes without affecting component functionality in any way. The VCL's poor use of resources, and the size occupied by unused code that results mostly from the not-so-smart linker, are not good examples to follow.
And what about the other suggestion: always call OnContentEncode, if it is assigned and hoContentEncoding is in the server options, regardless of the internal content-type and stream-size checks?

That would mean the user had to repeat the content type and size tests in
the event, before deciding whether to look up a cached compressed file.
Content-Encoding is not just compression. It is an elegant way to use custom, non-standard encodings in custom client-server applications. Your internally imposed rule of calling OnContentEncode only after the internal checks pass is not optimal and completely defeats the purpose of that event. And you can always add a new parameter to the event that can be used to force compression, bypassing the internal check:

ForceCompression := False;
ContentEncodingHandled := False;

TriggerContentEncode(ContentEncoding, ContentEncodingHandled, ForceCompression);

if ContentEncodingHandled then
    PutStringInSendBuffer('Content-Encoding: ' + ContentEncoding + #13#10)
else if ForceCompression or
        (((ContType = '') or (Pos('text/', ContType) > 0)) and  { only compress textual stuff }
         (FDocStream.Size >= FServer.SizeCompressMin) and       { too small is a waste of time }
         (FDocStream.Size < FServer.SizeCompressMax)) then begin { too large will block the server and use a lot of memory }
...
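With such a parameter, an application-side handler could look like this sketch. The event signature, the Client parameter type, and the DocContentType property are assumptions for illustration, not the actual ICS declarations:

```pascal
{ Sketch of a handler for the proposed event signature; names and types
  are assumed for illustration only. }
procedure TMyForm.HttpServerContentEncode(Sender: TObject;
    Client: TMyHttpConnection; var ContentEncoding: String;
    var Handled: Boolean; var ForceCompression: Boolean);
begin
    if Pos('application/x-mydata', Client.DocContentType) > 0 then begin
        { A custom, non-standard encoding handled entirely by the
          application; the component only writes the header. }
        ContentEncoding := 'x-myencoding';
        Handled := True;
    end
    else if Pos('application/xml', Client.DocContentType) > 0 then
        { Let the component compress, bypassing the internal 'text/'
          and size checks. }
        ForceCompression := True;
end;
```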


But I accept the tests need to be improved, I forgot about compressing
XML content for instance.  So I will move the tests into a virtual
function that can be overridden in an application to give total control
(and save some duplicated code).
Yes, all the current content-encoding code is duplicated in AnswerStream and SendDocument, so it is a perfect candidate for a shared procedure. For the check of which content types are valid to compress, it would probably be more flexible to use a published string property, holding all the valid content-type substrings concatenated, which can be checked by a simple string search against the stream's content type.
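As a sketch of that idea, a helper along these lines could check a semicolon-separated property value such as 'text/;application/xml;application/json'. The function name and the separator are assumptions, not existing ICS code:

```pascal
{ Sketch: AllowedTypes would come from a published String property of the
  server; the check is a plain substring search per list item. }
function IsCompressibleContentType(const ContType, AllowedTypes: String): Boolean;
var
    Remaining, Item: String;
    P: Integer;
begin
    Result := False;
    Remaining := AllowedTypes;            { e.g. 'text/;application/xml' }
    while Remaining <> '' do begin
        P := Pos(';', Remaining);
        if P > 0 then begin
            Item := Copy(Remaining, 1, P - 1);
            Delete(Remaining, 1, P);
        end
        else begin
            Item := Remaining;
            Remaining := '';
        end;
        if (Item <> '') and (Pos(Item, ContType) > 0) then begin
            Result := True;
            Exit;
        end;
    end;
end;
```

The size limits would stay as separate numeric properties, so the virtual function you propose would only combine this check with the SizeCompressMin/SizeCompressMax tests.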


Rui
--
To unsubscribe or change your settings for TWSocket mailing list
please goto http://lists.elists.org/cgi-bin/mailman/listinfo/twsocket
Visit our website at http://www.overbyte.be
