Hi Eli...
Kevin Kiley here..
In a message dated 01-09-04 03:54:15 EDT, Eli Marmor writes...
> Ian Holsman wrote:
>
> > 4. mod_gzip is better
> > It could be, and it probably does a better job of compression.
> > Once your code is published we'll see. If it is better it will
> > replace 'gz' (if gz gets accepted). My only issue with mod_gzip
> > (1.3 not 2) is its size, and its replication of what looks like
> > gzip code inside of it.
Couple of points here...
1. mod_gzip is ALREADY 'published'. It has been freely available for just
about one year now, and except for a few early bug reports and the addition
of user-requested features it has worked fine since DAY ONE. I don't know why
everyone keeps acting like a 2.0 version of a module is so significantly
'different' from a 1.3.x module that you have to 'see it to believe it'.
I can tell you right now that converting from 1.3.x to 2.0 (once the
filtering APIs were finally stabilized) was simply no big deal, and I most
certainly did not 'rewrite' the thing from scratch just because BUFF.C went
away. Just look at the I/O section of any module and it doesn't take much
imagination to visualize what it will look like with 2.0 filtering calls added.
2. The 'size' issue has been beaten to death. Look at the code. As someone
already reported, about 90 percent of it doesn't even need to be there.
It's simply 'storytelling'-style debug code... like SQUID Proxy Cache debug
on steroids.
3. Any implementation of patent-free LZ77 + Huffman is going to 'look' like
the code known as 'deflate', GZIP and/or ZLIB... because that's what they are.
All Mark and Jean did years ago was rewind to (inferior but free) versions
of LZ and Huffman when Sperry Rand and IBM patented LZW and locked it up for
themselves. Anyone can do it (well, maybe not anyone) and you don't need
GZIP or ZLIB.
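The relationship is easy to demonstrate: 'deflate' IS the LZ77 + Huffman
combination, and GZIP is just deflate wrapped in a small header/trailer.
A minimal sketch using Python's standard zlib and gzip modules, chosen here
purely for illustration:

```python
import gzip
import zlib

data = b"<html>" + b"bloated markup " * 200 + b"</html>"

# Raw 'deflate' (LZ77 + Huffman) via zlib, and the same payload in a
# GZIP container -- the only difference is the header/trailer framing.
deflated = zlib.compress(data)
gzipped = gzip.compress(data)

# Both round-trip back to the original bytes.
assert zlib.decompress(deflated) == data
assert gzip.decompress(gzipped) == data

# Highly repetitive HTML compresses dramatically.
print(len(data), len(deflated), len(gzipped))
```

The point stands regardless of language: any honest deflate implementation
will resemble zlib because they are implementations of the same two
patent-free algorithms.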
> Since I don't see that this discussion is going anywhere, let me suggest
> the following, in order to make some progress:
>
> Kevin will publish his code for 2.0, although he prefers to wait for a more
> stable release of 2.0.
>
> "In return", nobody will judge it according to its current status, since
> Kevin already proved (in the 1.3's release) that he could write a high
> quality mod_gzip, so the assumption will be that if it happened with 1.3,
> it will happen with 2.0 too, once 2.0 will meet Kevin's standards. In
> addition, Kevin has more experience in this field than anybody else.
Geesus... it sounds like we are arranging a prisoner exchange on
some bridge that crosses the Rhine.
More points here...
No, Kevin does NOT have more experience in this field 'than anyone else'.
WE ( as a company ) just happen to have a lot of it. We have been compressing
Internet traffic since before RFC2016, and since the first time we pressed
'View Source' on an HTML page and saw all that bloated crap arriving over a
TCP/IP connection. We had ( and still have ) client-side software that made
ANY TCP/IP installation able to receive compressed data regardless of whether
some dumb browser can or not. IETF Content-Encoding is only one way to do it,
and if it's available, we use it... that's all. It's just something we do,
and it's certainly not the only thing we do.
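For anyone new to the thread, the IETF Content-Encoding mechanism in question
is just a header handshake: the client advertises what it accepts, and the
server compresses the body and labels it. A minimal sketch of the round trip
(standard library only; the `serve` helper is hypothetical, the header names
are from the HTTP/1.1 specification):

```python
import gzip


def serve(body: bytes, request_headers: dict) -> tuple:
    """Compress the response body only when the client advertises gzip."""
    accepts = request_headers.get("Accept-Encoding", "")
    if "gzip" in accepts:
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body


body = b"x" * 1000

# A gzip-capable client gets a labeled, compressed payload.
headers, payload = serve(body, {"Accept-Encoding": "gzip, deflate"})
assert headers.get("Content-Encoding") == "gzip"
assert gzip.decompress(payload) == body

# A client that never said 'gzip' gets the identity response.
headers, payload = serve(body, {})
assert "Content-Encoding" not in headers
assert payload == body
```

Everything else a real module does ( negotiation rules, item mapping,
statistics ) is layered on top of this simple exchange.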
Also... 'Kevin' does not have any complicated 'standards' for Apache 2.0.
I just want the people who WROTE it to say they believe in it enough to
take it out of skunk-works status, and that they TRUST it enough to let
real people use it. That's not a lot to ask for something that's been
worked on for YEARS now. Just get it out the damn door.
( I am SURE I am not alone on this point )
> The "size" problem will be judged simply:
>
> Once the code is published, it will be compiled under a standard platform
> (Linux?), and with the default flags (i.e. no "-g" or "-DMOD_GZIP_DEBUG1").
>
> Then, the command "size mod_gzip.o" will be invoked, and its output, which
> is based only on the REAL code (without the 90% debugging stuff, comments,
> etc.) will be the base for a decision.
Huh?
Not sure I followed that one.
I actually had no intention of submitting anything that has any SQUID-style
'storytelling' debug in it, because history has proven that most of the
Apache folks just can't deal with it... they always bitch about it and/or
use it as a reason to reject something. 'Too much to read' or 'Too hard to
read' or 'Please submit smaller files' or something like that.
> I suggest also to take into account that this code contains EVERYTHING (I
> believe that zlib alone is bigger than it), so you don't need anything else
> or any dependency.
I have already gone over the 'licensing' issues as I understand them with
regard to Apache's fears about including other public domain source code
in Apache, and the possibility of one day being held to the 'least
restrictive OpenSource license in a codebase shall apply to the work as a
whole' issue. That alone might be the 'showstopper' for there ever being an
actual ZLIB or GZIP source code base in the Apache tree. See previous
messages on this thread regarding 'license' concerns as we understood them
following the firestorm that erupted the last time ( WE ) suggested that
ZLIB be added to the Apache source tree.
ASIDE: If you read the top of the mod_gzip.c source code you will discover
something quite unique. Since Apache REJECTED it for 1.3.x yet we still
desired to retain the Apache license in case anyone changed their minds
you will notice that the ONLY license there is on mod_gzip.c is the
Apache 2.0 style license itself. It was personally approved and 'allowed' to
be there via private emails by a minimum of 3 of Apache's own top level board
members as is required by Apache's own bylaws.
There is NO OTHER license. In other words... there is no possibility that
anyone who uses mod_gzip ( or any part of it ) can be subject to the
'least restrictive OpenSource license in a codebase shall apply to the
work as a whole' thingamabob because there IS no other LICENSE
even though it's not 'officially' part of Apache at this time.
It doesn't get any easier than that.
> Also, the fact that much of its size is dedicated to meeting the RFC must
> be taken into account. Once another gzip filter meets it, it will become
> large too.
Of course. Add all of the same configuration directives that mod_gzip has
to any other module and it will be about the same size.
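For a sense of that directive surface, here is a sketch of a typical
mod_gzip 1.3.x configuration; the directive names are from mod_gzip's
published documentation, but treat the exact values as illustrative only:

```apache
<IfModule mod_gzip.c>
    mod_gzip_on                 Yes
    mod_gzip_minimum_file_size  500
    mod_gzip_maximum_file_size  500000
    mod_gzip_item_include       file  \.html$
    mod_gzip_item_include       mime  ^text/.*
    mod_gzip_item_exclude       file  \.gif$
</IfModule>
```

Every one of those knobs needs parsing, validation, and runtime checks,
which is where much of the 'size' actually comes from.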
What really baffles me here is the way some people are acting like they
are 'negotiating' to 'get' something that has a 'sole source' like trying
to get IBM to cough up all their internal changes ( and improvements? )
to Apache.
Fer chrissakes... we are talking about a single .C file that has been fully
available to ANYONE in the public domain for more than a year. You
don't HAVE to 'get it from us'. If you guys are itching to have a fully
IETF compliant module that can deliver compressed responses before
the sun sets tomorrow in the west and/or before mod_include or mod_perl
or mod_ssl for 2.0 are even finished ( they are not ) then be my guest...
just add the filtering calls yourselves and dump it into the tree. You do NOT
need my permission. It's ALREADY been donated to Apache and has ALREADY
been downloaded and tested ( and is currently USED ) by thousands of people.
Don't like all the code you see?... fine... then just beg, borrow or steal
whatever the hell you want. Take out the ITEM MAPPING ( it works like
a champ ) or the compressor ( it works like a champ ) or the alternative
content-negotiation for compressed objects ( it works like a champ ) or
the complex user-agent identification schemes ( they work like a champ )
or the compression results delivery determination rules ( tried and true )
or the sophisticated Apache log compression statistics update code
and/or the dozens of log analysis scripts from the website ( also totally
free ) etc... etc... etc...
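To make one slice of that list concrete: the content-negotiation piece
ultimately boils down to parsing Accept-Encoding q-values before deciding
whether a compressed variant may be served. A simplified, hypothetical
sketch of that rule ( NOT mod_gzip's actual code ):

```python
def gzip_acceptable(accept_encoding: str) -> bool:
    """Return True if the client permits a gzip response.

    Follows the HTTP/1.1 rule of thumb: a listed coding (or '*') is
    acceptable unless its qvalue is 0.
    """
    for item in accept_encoding.split(","):
        parts = [p.strip() for p in item.split(";")]
        coding = parts[0].lower()
        if coding not in ("gzip", "x-gzip", "*"):
            continue
        q = 1.0  # default quality when no q= parameter is given
        for p in parts[1:]:
            if p.startswith("q="):
                try:
                    q = float(p[2:])
                except ValueError:
                    q = 0.0
        return q > 0.0
    return False


assert gzip_acceptable("gzip, deflate")
assert gzip_acceptable("*;q=0.5")
assert not gzip_acceptable("gzip;q=0")
assert not gzip_acceptable("identity")
```

Multiply this kind of careful rule-following across item mapping,
user-agent quirks, and logging, and the module's real size makes sense.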
That's what 'free' code is about... it's 'free'...
However... if you are actually negotiating for ME, personally, to get
involved with Apache having its own IETF Content-Encoding module, then I
have already stated I am ready to do that ( Jumpin' lizards... I'm the one
who submitted it all in the first place and have been the strongest advocate
for Apache being able to do IETF Content-Encoding ) but I will do that
( get involved with you ) on MY terms... not YOURS.
The base pay rate that you offer is too low for it to happen any other way.
AFAIK the only people actually 'paid' to work on Apache are the Covalent
boys. For the rest of 'us'... it's a bigger decision whether to promise
'full time' attention.
Also... something else that bothered me in this thread needs attention
if people are currently holding their breath for me to press the upload key...
Someone ( an Apache committer? ) said that if a module shows up that
can support BOTH Apache 1.3.x AND Apache 2.0 in the same piece
of code, it will AUTOMATICALLY be REJECTED for that reason alone.
That's a joke, right?
Is Apache seriously saying that submitted modules are now NOT ALLOWED
to try and support ALL in-use releases of their own damn Server?
If that's true... then what is this 2 hour old quote from Ryan about?
[snip]
State-Changed-From-To: open-closed
State-Changed-By: rbb
State-Changed-When: Mon Sep 3 18:38:02 PDT 2001
State-Changed-Why:
I have backed out this change. It is very important that
we be compatible with older versions of Apache...
[snip]
If this is now some kind of serious weird requirement for module
submissions ( Ignore all previous versions of Apache ) then someone
please pipe up and confirm because I have news for you... before I
hit the upload key I have to turn a single .C file that supports all
versions of Apache seamlessly in the same file into some hack that
never cares about anything but Apache 2.0.
There is no reason that a simple module should NOT be able to support
ALL versions of Apache, because it's really very simple... but if we
are talking about some kind of hard-and-fast rule for 2.0 submissions then
somebody say so. I assure you... Apache 1.3.x is going to be widely used
for YEARS to come... no matter when 2.0 gets 'out the door'.
> I apologize in advance if I offend anybody; I'm just trying to settle the
> argument and to make some progress; I believe that finally there is a
> consensus that having a gzipping filter is critical for the success of
> Apache. I hope we will not be back to sayings like "let PHP and JSP
> compress themselves"; it is not acceptable, and it's like saying "let PHP
> and JSP SSL themselves". Didn't we invent the I/O filtering for these
> purposes?
>
> Sorry again,
Don't be sorry... email is all there is at Apache and anyone who sends
a lot of emails knows that no matter how you try to present something
someone usually takes it the wrong way. The key is to just
keep on truckin'....
Thanks for your ( reasonable ) suggestions, Eli.
> --
> Eli Marmor
> [EMAIL PROTECTED]
> CTO, Founder
> Netmask (El-Mar) Internet Technologies Ltd.