RE: Apachelounge has to remove Apachelounge Feather, be warned

2007-08-19 Thread Peter J. Cranstone
Guys,

How about everyone takes a deep breath here. Right now it's about helping, not 
hurting. It's about trying to deliver a product, which clearly needs some vision, 
to a customer base that is increasingly becoming IIS dependent (check the 
Netcraft numbers).

You're all missing the bigger picture. Apache is on the decline. You should be 
doing anything and everything to come up with a consistent, compelling, 
credible product that gives your customer base confidence that Apache is still 
relevant. 

I've been watching these threads and feel for the Apache Lounge guy. I remember 
the wars Kevin and I went through when we tried to donate mod_gzip to the 
Apache Foundation. mod_gzip succeeded beyond all imagination, but as the saying 
goes, there has to be a better way than dealing with all this nonsense.

What's important here is your customer base. It's in decline because there are 
too many inconsistent versions of Apache out there without any clear 
differentiator over the competition (Microsoft) which is starting to eat 
everyone's lunch. 

Steffen was trying to help. How about helping him to succeed? Let's put the 
personalities to one side and attack the problem, not the people. 

Cheers,


Peter
_
Peter J. Cranstone
5o9, Inc.
Boulder, CO  USA

Mobile: 303.809.7342 | GMT -7 
Skype:  Cranstone
Email:  [EMAIL PROTECTED]
Blog:   http://petercranstone.blogspot.com

Making Web Services Contextually Aware
Web site: www.5o9inc.com 

-Original Message-
From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED] 
Sent: Sunday, August 19, 2007 1:41 PM
To: dev@httpd.apache.org
Subject: Re: Apachelounge has to remove Apachelounge Feather, be warned

Steffen wrote:
 
 On request I have to remove the Feather, see the mail below.

You are welcome to share that private post, of course.  I mailed you
privately so that you could ask any questions of the prc@ folks, and
even ask them for permission, at a more leisurely pace.  I was also
trying to handle that issue more tactfully than I had the first issue.

I said (nicely)

  If you would please remove the Apache feather, and indicate the site is
  not affiliated with the Apache Software Foundation nor the Apache httpd
  Server Project, I believe this would address all of the Foundation's
  concerns.

which was to say, the only issue we have with you, as part of our community, is
avoiding confusion between the ASF site and your site.  By making sure your
users aren't confused you earn the goodwill of the developers and community.
Your site is part of the wider httpd user community, and that's a good thing.
Your site isn't part of the Foundation.

Our logo integrated into yours could be misleading.  We have an imperative
to defend our mark; that's how trademarks work.  Again, I politely offered
for you to ask [EMAIL PROTECTED]  They are the final word: if they say to you
not to use it, don't.  Or if they answered "no, we don't find that confusing,
you have permission to use it in that way", then you would be able to add
'Feather logo used with permission of the ASF' or something similar on your
own site.

Now, more about confusion.  In your favor, you very clearly indicate that
it's your build and how you've gone about building it.  I've supported all
of you, including Hunter, yourself and countless others when you bring back
problem reports.  We don't always agree on the one right fix, but it's
always fixed.  And you are one of the first to bring us trouble reports
about a release candidate.  Please don't decide we don't appreciate you if
we simply point out problems with your site.

We don't discriminate, we bring these up to all the sites where we find
such problems, as we find them.  I'm sorry if you feel singled out today,
or if my tone rubbed you the wrong way.

Jeff and I offered comments in January about how you presented the release
candidates as releases.  You ended the conversation saying that you would take
them down, but that you didn't see our reasoning.

http://mail-archives.apache.org/mod_mbox/httpd-dev/200701.mbox/[EMAIL PROTECTED]

Now, I'd offered you some of the reasoning (not all, for sure) of why it's
not a good idea *in your interests*, and also why it's not helpful to your
users if they are confused by an unreleased package.  Maybe you still don't
see the reasoning.  But I hoped you would understand this is not a
territorial dispute, but for your benefit.  If I disliked you I would have
said nothing.

The bottom line is that nobody took issue with Jeff's or my comments.  They
are free to do so.  Colm has this time around.  His points don't quite jibe:
if you offered a patch set and said "hey, this is the difference between
the ASF's 2.2.4 and my binaries here", then his point would be spot-on and
we'd all agree there is no issue.  Or change it radically and don't name it
Apache 2.2.4.  That's fine too.  I couldn't find the argument for releasing
our *candidates* on external sites from Colm's

RE: 3.0 - Proposed Goals

2007-02-19 Thread Peter J. Cranstone
So might I make a humble suggestion?

Ask your 65 million customers what they would like in Apache 3.0 - this time
around let someone else tell you what they want.

It's the only way to build something.


Peter J. Cranstone
5o9, Inc.
303.809.7342 | [EMAIL PROTECTED]

Making Web Applications Location and User Aware
URL: www.5o9inc.com 

-Original Message-
From: Nick Kew [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 19, 2007 2:44 AM
To: dev@httpd.apache.org
Subject: Re: 3.0 - Proposed Goals

On Tue, 13 Feb 2007 23:33:27 -0800
Paul Querna [EMAIL PROTECTED] wrote:

 So, I've been kicking around some ideas about where I personally would
 like trunk to go for a couple months now.

You've missed the most important consideration here.
Namely, don't break everything that's gone before.

Specifically, a big -1 on forcing substantial rewrites of
existing applications.  Or in other words, the API must
continue to work (with at most trivial breakages).

Of course, deprecating things is fine.  And where parts of
the existing API do not fit well, they might be moved outwards
from the core to a compatibility layer - provided that's
going to be maintainable.

The breakage between 1.x and 2.0 was far too much.  If we
do it again, the world will rightly conclude that Apache
is not a solution fit for the long term.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/



RE: pgp trust for https?

2005-11-09 Thread Peter J. Cranstone
Something else you all need to consider, which currently no one is talking
about: root trust and a chain of trust from the PC, through the OS, to the
application, and on to the customer (client).

The issue is simple - how do I know I can trust the machine, let alone the
applications running on the machine? The current answer is a TPM chip. Sorry,
that's not good enough. TPM allows you to verify that the boot-up sequence
hasn't been tampered with; after that, you need to ensure that you can build
a chain of trust through the firmware, then the operating system, and
finally to the application.

Currently Windows, Linux and Unix only use two levels of privilege - Ring 3
and Ring 0. Everybody and their uncle's code wants to run at Ring 0. Another
really bad idea, as once I introduce a network/video/keyboard/whatever
driver at that level I can execute malicious code. From there I can control
the machine.

The web of trust starts much lower in the stack than people are currently
talking about. 

What you've outlined below is the following:

Client <---> attacker <---> Server

Substitute the attacker with a legitimate use, i.e. SSL proxy acceleration
via compression. The man in the middle is now a legitimate user. The benefit
is faster downloads to the customer and conservation of bandwidth. Many
people currently use mod_gzip this way. Works like a champ.

Now of course there's the announcement on News.com today that Blue Coat is
selling a box that sits in the attacker spot and decrypts the traffic and
scans it to ensure no spyware etc.

Foiled again. If you want a web of trust, start at the machine and extend it
to the operating system and applications via a chain of trust, and finally to
the client's desktop. Anything else is simply a band-aid which can be used
for either malicious or legitimate purposes.


Peter
-Original Message-
From: Brian Candler [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 09, 2005 2:22 AM
To: Nick Kew
Cc: dev@httpd.apache.org
Subject: Re: pgp trust for https?

On Tue, Nov 08, 2005 at 12:46:18PM +, Nick Kew wrote:
 On Tuesday 08 November 2005 12:02, Brian Candler wrote:
 
 [twice - please don't]

Not sure what you mean by that. I probably did a group reply, so you got one
copy directly and one via the list. I'm afraid it's impossible to please
everyone on this point: some people insist on getting a direct reply (so
they see it sooner), and some people insist on replies to the list only.
Unfortunately my brain and my MUA are not able to record individual
preferences here, so everyone gets a 'G'roup reply.

   I'll sign my server.  Same as I'll sign an httpd tarball if I roll one
   for public consumption.  You sign your server.  Where's the problem?
 
  The problem is that you'll have no protection against man-in-the-middle
  attacks, whereby an attacker impersonates you, or intercepts your
traffic
  (decrypting it and re-encrypting it, allowing them to read and/or modify
  all communication on your supposedly 'secure' connection)
 
 Nonsense.  The encryption is unaffected by this.  It's only the server
 identity we're verifying.

Which fundamentally affects whether the encryption is actually performing
the job it was intended to do, namely to ensure that the data between the
two endpoints is (a) not visible to any other party, and (b) not subject to
tampering by any other party.

Simple example: attacker inserts a machine between the client and the server
(the archetypal man in the middle attack)

   Client <---> attacker <---> Server

The client negotiates an encrypted session with the attacker, and the
attacker negotiates a separate encrypted session with the server.

The client *thinks* they are talking to the server, and that the session is
strongly encrypted (which it is). The server *thinks* it is talking to the
client, and that the session is strongly encrypted (it is).

In fact, the attacker can read *everything* which goes between the client
and the server - and either just copy it back and forth, or modify any parts
of it that it wishes. Both confidentiality and integrity have failed.

The fundamental problem is down to identity: can the client be sure it is
talking directly to the server, or is it talking to an imposter? Without
this assurance of identity, encryption is useless. Well, it's not 100%
useless because it protects you against passive attacks (sniffing only), but
you have to assume you're dealing with a *determined* attacker. These
man-in-the-middle attacks are trivial to implement, and there are toolkits
for doing so.
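
For illustration, here is a minimal sketch of the check a client has to make
to get that assurance of identity - verify the peer's certificate chain *and*
that the certificate actually names the host you meant to reach. (A sketch
only, assuming OpenSSL 1.1.0 or later; the host name is a placeholder.)

    /* Minimal sketch: refuse to talk to a peer whose identity can't be
     * verified.  Assumes OpenSSL >= 1.1.0; www.yourdomain.com is a
     * placeholder.  Build with: cc mitm_check.c -lssl -lcrypto */
    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>
    #include <stdio.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL *ssl = NULL;

        /* Trust the system CA bundle and require a valid chain. */
        SSL_CTX_set_default_verify_paths(ctx);
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

        BIO *bio = BIO_new_ssl_connect(ctx);
        BIO_get_ssl(bio, &ssl);

        /* The crucial step: bind the expected identity to the session, so a
         * certificate presented by a man in the middle for some other name
         * is rejected during the handshake. */
        SSL_set1_host(ssl, "www.yourdomain.com");
        BIO_set_conn_hostname(bio, "www.yourdomain.com:443");

        if (BIO_do_connect(bio) <= 0
            || SSL_get_verify_result(ssl) != X509_V_OK) {
            fprintf(stderr, "peer identity not verified - refusing to talk\n");
            BIO_free_all(bio);
            SSL_CTX_free(ctx);
            return 1;
        }

        printf("verified connection established\n");
        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }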

  For example, an attacker could redirect requests for www.yourdomain.com to
  their server, perhaps by spoofing the DNS, or by unplugging the cable
  somewhere between your ISP and your server and inserting their own server.
  Clients would be none the wiser.
 
 Nonsense.  My server is signed with my (private) key.  If they've got my
 key and passphrase, then the whole thing is dead, just as if they got
 my 

RE: pgp trust for https?

2005-11-09 Thread Peter J. Cranstone
No problem - Itanium has the architecture you need. You can isolate all the
physical memory into compartments controlled by a protection key. Each
compartment has the ability to individually control read, write and execute
privileges.  

Peter
[EMAIL PROTECTED] 

-Original Message-
From: Paul A Houle [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 09, 2005 1:07 PM
To: dev@httpd.apache.org
Subject: Re: pgp trust for https?

Peter J. Cranstone wrote:

Currently Windows, Linux and Unix only use two levels of privilege - Ring 3
and Ring 0. Everybody and their uncle's code wants to run at Ring 0. Another
really bad idea, as once I introduce a network/video/keyboard/whatever
driver at that level I can execute malicious code. From there I can control
the machine.

  

You'd need a new hardware architecture for ring 1 drivers to be 
worth it.  The trouble is that drivers can initiate DMA operations 
against physical memory.  Unless you devise some system where the OS can 
veto DMA operations,  protection in the CPU is worthless.



RE: pgp trust for https?

2005-11-09 Thread Peter J. Cranstone
Follow up. For those of you who are interested in reading more about how
Itanium supports a secure platform, you can read all about it in US patent
application US 2002/0194389 A1.

Here's a snip from the abstract:

The combined-hardware-and-software secure-platform interface employs a
hardware platform that provides at least four privilege levels,
non-privileged instructions, non-privileged registers, privileged
instructions, privileged registers, and firmware interfaces. The
combined-hardware-and-software secure-platform interface conceals all
privileged instructions,
privileged registers, and firmware interfaces and privileged registers from
direct access by operating systems and custom control programs, providing to
the operating systems and custom control programs the non-privileged
instructions and non-privileged registers provided by the hardware platform
as well as a set of callable software services. The callable services
provide a set of secure-platform management services for operational control
of hardware resources that neither exposes privileged instructions,
privileged registers, nor firmware interfaces of the hardware nor simulates
privileged instructions and privileged registers. The callable services also
provide a set of security-management services that employ internally
generated secret data, each compartmentalized security-management service
managing internal secret data without exposing the internal secret data to
computational entities other than the security-management service itself.

To solve the security problems, you (us, whatever) will have to use a
combined hardware and software architecture. It can't be done in software
alone, and it all has to start with root trust. If you don't have that, then
you have something else.

Cheers.

Peter
[EMAIL PROTECTED]  

-Original Message-
From: Peter J. Cranstone [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 09, 2005 1:12 PM
To: dev@httpd.apache.org
Subject: RE: pgp trust for https?

No problem - Itanium has the architecture you need. You can isolate all the
physical memory into compartments controlled by a protection key. Each
compartment has the ability to individually control read, write and execute
privileges.  

Peter
[EMAIL PROTECTED] 

-Original Message-
From: Paul A Houle [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 09, 2005 1:07 PM
To: dev@httpd.apache.org
Subject: Re: pgp trust for https?

Peter J. Cranstone wrote:

Currently Windows, Linux and Unix only use two levels of privilege - Ring 3
and Ring 0. Everybody and their uncle's code wants to run at Ring 0. Another
really bad idea, as once I introduce a network/video/keyboard/whatever
driver at that level I can execute malicious code. From there I can control
the machine.

  

You'd need a new hardware architecture for ring 1 drivers to be 
worth it.  The trouble is that drivers can initiate DMA operations 
against physical memory.  Unless you devise some system where the OS can 
veto DMA operations,  protection in the CPU is worthless.



RE: pgp trust for https?

2005-11-09 Thread Peter J. Cranstone
Bill,

You're no fun ;-)

But seriously though, Apache now has 70% of the Internet web servers running
its software. The single most important thing on IT minds is web services,
followed by security.

Apache needs to think about what it's going to do to make the server more
secure. If you don't, someone is going to come in and steal the limelight. I
doubt it will be Microsoft, but who knows, maybe Google does it.

Security is a big deal - and it's not off-topic for any forum these days
unless of course you're tired of being the lead dog on the Internet.

Reminds me of the good old days and the debate around compressing data from
a web server. LOL.

Peter
[EMAIL PROTECTED] 
-Original Message-
From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 09, 2005 1:43 PM
To: dev@httpd.apache.org
Subject: Re: pgp trust for https?

Folks, somehow this thread diverged from HTTP/1.1 PGP based TLS mechanisms
into a fun-with-hardware-trust thread.

Please take this discussion to an appropriate security-wonk debating
club forum, such as vuln-dev or bugtraq, as it's all entirely off topic
on this forum.

Yours,

Bill



RE: Multi-threaded proxy? was Re: re-do of proxy request body handling - ready for review

2005-02-02 Thread Peter J. Cranstone
Ron,

Who is trying to serve up 2GB files?

 
Peter J. Cranstone


-Original Message-
From: Ronald Park [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 02, 2005 11:24 AM
To: dev@httpd.apache.org
Subject: Re: Multi-threaded proxy? was Re: re-do of proxy request body
handling - ready for review

Imagine, just as a wild theoretical scenario (:D), that you have
the following setup:

Apache - (proxy) - Squid - (cache miss) - Apache - (docroot)

Where the back-end Apache serves up large files (in the 2G range)
(and, yes, there are far more files than can be effectively cached).
Now imagine you have thousands of clients trying to get those files,
some of which have very slow connections.  And also imagine that
there are more front-end Apache instances than back-ends.

The backend Apache could quickly deliver the file through to
the frontend Apache's mod_proxy if it wasn't held up by waiting
for each chunk to be spoonfed over to the slow client.  Even for
relatively good clients, it's likely a number of them are going
to tie up a thread in the back-end for longer than they would if
the front-end gobbled up the proxy response faster.

The problem with 'gobbling up the whole proxy response' all at
once, though, is that for these huge files the original client might
not get any response for a noticeable amount of time.  Further, an
impatient client might give up and reissue the request again,
tying up another set of threads (and internal bandwidth). :(
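
For what it's worth, a minimal sketch of the front-end side of that setup
might look like the config below (host names, ports and paths are made up;
the directives are standard mod_proxy):

    # Hypothetical front-end httpd reverse-proxying to the Squid tier.
    # Host names, ports and paths are placeholders.
    LoadModule proxy_module      modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    ProxyRequests Off
    ProxyPass        /files/ http://squid.internal.example:3128/files/
    ProxyPassReverse /files/ http://squid.internal.example:3128/files/

    # Larger buffers let the front end read bigger chunks from the back end,
    # but don't change the fundamental spoon-feeding problem described above.
    ProxyIOBufferSize 65536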

Ron

On Wed, 2005-02-02 at 18:51 +0100, Mladen Turk wrote:
 Paul Querna wrote:
  
  One thought I have been tossing around for a long time is tying the 
  proxy code into the Event MPM.  Instead of a thread blocking while it 
  waits for a backend reply, the backend server's FD would be added to the

  Event Thread, and then when the reply is ready, any available worker 
  thread would handle it, like they do new requests.
  
  This would work well for backend servers that might take a second or two

   for a reply, but it does add at least 3 context switches.  (in some use

  cases this would work great, in others, it would hurt performance...)
 
 
 I don't think it would give any benefit. Well perhaps only on
 forward proxies it could spare some keep-alive connections.
 
 Regards,
 Mladen.
-- 
Ronald Park [EMAIL PROTECTED]



RE: [RFC] Patch for mod_log_config to allow conditioning on status code

2004-10-16 Thread Peter J. Cranstone
 Of course.  Apache 1.3 is an old, legacy application, and vastly less
 capable than current versions.

But millions and millions of users rely on it every day. What might help
migration is a simple chart showcasing the differences between 2.x and 1.x.

I'm no power user of Apache, but I still can't see why Apache 2.x is a MUST
HAVE vs. a stable product in use by millions of users.

Apache is facing a marketing problem, not a technology problem.


Peter
 

-Original Message-
From: Nick Kew [mailto:[EMAIL PROTECTED] 
Sent: Saturday, October 16, 2004 6:31 AM
To: [EMAIL PROTECTED]
Subject: Re: [RFC] Patch for mod_log_config to allow conditioning on status
code

On Sat, 16 Oct 2004, Glenn Strauss wrote:

 I don't want to discourage Luc, but there's a steep uphill battle
 to getting anything into Apache 1.3.

Of course.  Apache 1.3 is an old, legacy application, and vastly less
capable than current versions.  It's still maintained, but no one is in
the business of adding new *features*.

2.1 is where interesting things happen, while 2.0 is intermediate: new
features may be added, but stability and binary-compatibility are more
important.  I might review and incorporate a third-party patch for 2.x,
but certainly wouldn't for 1.x unless someone was paying.

 diff -ruN apache_1.3.31/src/main/http_log.c apache_1.3.31-new/src/main/http_log.c
 --- apache_1.3.31/src/main/http_log.c   2004-02-16 17:29:33.0 -0500
 +++ apache_1.3.31-new/src/main/http_log.c   2004-05-24 12:26:06.0 -0400

Bugzilla is a good place for patches like that.  People who want it can
help themselves, without compromising stability.

-- 
Nick Kew



RE: Aborting a filter.

2004-06-21 Thread Peter J. Cranstone
Can you provide the name of the site?

Thanks,

Peter

-Original Message-
From: Nick Kew [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 21, 2004 8:53 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Aborting a filter.


I have a problem with mod_deflate's inflate filter (the one I wrote a
couple of months ago).  I've found a site that (repeatably) provides
compressed data that causes zlib to return -3 (data error). I hope I can
solve it by more detailed study of zlib, but it raises a broader issue.

The inflate filter is handling the error in what seems an appropriate manner -
aborting the decompress and returning APR_EGENERAL.  But that then
propagates NOBODY_WROTE, and the client is getting an unexpected end
of connection, which is not good.
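
For reference, the zlib condition involved can be reproduced outside the
server; a small standalone sketch (plain zlib, nothing Apache-specific, the
input data is obviously made up):

    /* Standalone sketch: feeding corrupt data to inflate() yields
     * Z_DATA_ERROR (-3), the same condition the inflate filter hits
     * mid-stream.  Build with: cc inflate_err.c -lz */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char bogus[] = "this is not a valid zlib stream";
        unsigned char out[256];
        z_stream zs;
        int rc;

        memset(&zs, 0, sizeof(zs));
        if (inflateInit(&zs) != Z_OK)
            return 1;

        zs.next_in   = bogus;
        zs.avail_in  = sizeof(bogus) - 1;
        zs.next_out  = out;
        zs.avail_out = sizeof(out);

        rc = inflate(&zs, Z_NO_FLUSH);
        printf("inflate returned %d (%s)\n", rc, zs.msg ? zs.msg : "no message");
        /* rc is Z_DATA_ERROR (-3) here - the point at which the filter has
         * to choose between APR_EGENERAL (an aborted connection for the
         * client) and some cleaner way to signal a 500. */

        inflateEnd(&zs);
        return 0;
    }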

The cop-out of a remove_output_filter and a pass_brigade is no use here,
as the error could be mid-stream, and the Content-Encoding header has
already been stripped in any case.

Instinctively I'd like to see the Client get a 500 errordoc in such cases.
But that's not an option mid-stream either.  OTOH, if it happens before
core_output_filter has got any data, is there any cleanup core can do
about this to improve the client experience?

Any thoughts?  Has this been discussed before?

-- 
Nick Kew


RE: mod_deflate vs mod_gzip

2004-03-30 Thread Peter J. Cranstone
What about trying mod_gzip with Apache 2.x?

-Original Message-
From: Henri Gomez [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 30, 2004 7:06 AM
To: [EMAIL PROTECTED]
Subject: mod_deflate vs mod_gzip

Hi to all,

One of my customers is trying to use an Apache 2.0.47 with mod_deflate.

Its HTTP implementation works with Apache 1.3.x and mod_gzip but
not with Apache 2.0.47 and mod_deflate.

The PHP gzinflate and gzuncompress were used but without luck,
even when skipping the first 10 chars.

Any help welcome.

A beer to winner.




RE: 2.0.48 worker mpm on RH3 NPTL results

2004-01-09 Thread Peter J. Cranstone
1. What was the CPU utilization during the tests?
2. What size of file was being benched?

Regards,


Peter


From: Jean-Jacques Clar [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 09, 2004 9:29 AM
To: [EMAIL PROTECTED]
Subject: Re: 2.0.48 worker mpm on RH3 NPTL results

HyperThreading enabled or not when limited to 1/2 CPUs, if these are HT
CPUs?  That can make a difference (either way) when benchmarking IIRC.
 
CPUs are not HT.


Apache + Windows

2003-11-18 Thread Peter J. Cranstone
Bill,

Here is an interesting link to a problem someone encountered running Apache
on Windows. If he's right there is little hope for Apache to ever run
properly on newer versions of Windows.

http://grumet.net/weblog/archives/2003/11/18/questions_about_windows_apache.html

Regards,



Peter


RE: consider reopening 1.3

2003-11-17 Thread Peter J. Cranstone
Bill,

I've done some thinking about this - price/performance is only part of the
equation.

Someone needs to take a step back and see where Apache wants to *be* in two
years' time. I agree with Jim that 1.x probably is just about done; it works,
people understand it and have ported their modules to it, and performance is
good enough. So that leaves you with 2.x. The debate will rage on whether
it's faster than 1.x, but the benchmarking is a slippery slope and we've all
been down it at some time.

Personally I think you have to focus on the customer via market segment. The
super user wants to play with Apache 2.x, work out the kinks, port his
modules and really fine-tune it. That's about 1/10 of 1% of your addressable
market. He is not going to pull this product into the mainstream market.

Therefore, who is your classic Apache user and what does he want that he
can't get right now? Probably not much. Let me use mod_gzip as an example.
When we released it we were busy for about 4 months fixing it for the real
world, i.e. with a ton of feedback. Since then (over 2 years) we haven't
touched it. Why? Because it does all it needs to do and people are happy
with the status quo. How do we improve on that? While I'm sure we can come
up with a few ideas, will people buy into it? I can't justify the resources
in this economy to find out.

Right now the status quo is prevailing with 1.x, and 2.x is not a must
have. To prove my point, look no further than Covalent. When money is on the
table the rules change. Covalent have not been able to monetize 2.x in the
way everyone thought it could be done. Therefore what has the current
management team done - change direction to a new focus of web application
management and security. Sure, there's still a web server in there somewhere,
but now there's a different agenda, i.e. make money for the shareholders. 

So what would I do...


Apache 1.x  - mass market - status quo prevails - boring but stable

Apache 2.x  - tiny market - **risk takers only** - not stable enough,
tough to understand

So what's the differentiator that drives the next wave? Personally (my
opinion only, guys) it's hardware. 64-bit is now here. There are only going
to be a few choices: either Sun, AMD, PowerPC or Itanium. Sun has had 64-bit
for a long time and is already entrenched at the web edge. AMD adds CISC
compatibility with a hint of RISC. The upside is that it runs x86 code about
10% faster than Pentium PCs. PowerPC is a different beast but runs Apache
just fine.

AMD, Sun and PowerPC handle between 1 and 4 instructions per cycle; Itanium
(which is a whole new architecture) handles 6-8 instructions per cycle.
Bottom line: if you code it correctly, Itanium is going to be the flat-out
winner in a drag race. It also happens to run Apache fine, and as you've seen
from our benchmarks you can really tune this box to do some real magic.
That's a differentiator. Up until now there has been one big problem with
Itanium - it costs too much. That's going to change. 

We're coming up to a new hardware cycle (the technology adoption life cycle,
TALC). Machines purchased in 1999 are going to be replaced and people will
move to 64-bit. Why? Because we all like new toys, especially if they are
cheap. What we don't want is lots of hard work to port our apps. AMD has
made this easy, and next year Intel will do the same thing for Itanium by
releasing btrans (a binary translator) which will allow you to run all your
x86 apps at Xeon speeds on an Itanium. 

So what are we going to see - lots of cheap web edge processors, and people
will start moving their apps etc. over. So what's the opportunity for Apache?

My opinion only - optimize it for 64-bit. Focus 80% of your available
resources on Apache 1.x because it has such a HUGE user base. The remaining
20% of your resources should focus on 2.x.

Why? Because it's still too darn difficult to move people's apps to the 2.x
architecture. They will continue to take the path of least resistance which
means hardware. If you want to drop 1.3 and focus on 1.4 then make it for
64-bit, because it gives you a hardware performance boost.

It's all about focus and market segmentation - Covalent just learned this
the hard way: even with mod_compat to make your 1.x modules run in a 2.x
environment, they couldn't sell something with a delta of over 1,000
dollars over something that is free. So they switched gears and went in a
different direction.

Learn from what they did - you don't need to do the same, but don't continue
doing what you're doing; the message is too confusing.

So for Jim J...

1.3   Done                12 million happy customers
1.4   64-bit              12 million potential customers

2.x   Work in progress    not sure how many customers?

I know where I would go.

Regards,


Peter 









-Original Message-
From: Bill Stoddard [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 16, 2003 6:03 PM
To: [EMAIL PROTECTED]
Subject: Re: consider reopening 1.3

Peter J. Cranstone wrote:

 In today's

RE: consider reopening 1.3

2003-11-17 Thread Peter J. Cranstone
To support my comments on cheap 64-bit computing see this link:
http://www.siliconvalley.com/mld/siliconvalley/7281990.htm

Sun, AMD announce plans for line of low-cost servers - 64-bit!

People will move Apache 1.x to this platform because there is virtually NO
migration cost (i.e. recoding modules etc.) and they get a performance boost
while replacing an aging infrastructure.

12 million users on the move - make it easy for them: buy a cheap AMD Opteron
and optimize and improve Apache 1.4 on that box.

Regards,


Peter


-Original Message-
From: Peter J. Cranstone [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2003 7:18 AM
To: [EMAIL PROTECTED]
Subject: RE: consider reopening 1.3

Bill,

I've done some thinking about this - price/performance is only part of the
equation.

Someone needs to take a step back and see where Apache wants to *be* in two
years' time. I agree with Jim that 1.x probably is just about done; it works,
people understand it and have ported their modules to it, and performance is
good enough. So that leaves you with 2.x. The debate will rage on whether
it's faster than 1.x, but the benchmarking is a slippery slope and we've all
been down it at some time.

Personally I think you have to focus on the customer via market segment. The
super user wants to play with Apache 2.x, work out the kinks, port his
modules and really fine-tune it. That's about 1/10 of 1% of your addressable
market. He is not going to pull this product into the mainstream market.

Therefore, who is your classic Apache user and what does he want that he
can't get right now? Probably not much. Let me use mod_gzip as an example.
When we released it we were busy for about 4 months fixing it for the real
world, i.e. with a ton of feedback. Since then (over 2 years) we haven't
touched it. Why? Because it does all it needs to do and people are happy
with the status quo. How do we improve on that? While I'm sure we can come
up with a few ideas, will people buy into it? I can't justify the resources
in this economy to find out.

Right now the status quo is prevailing with 1.x, and 2.x is not a must
have. To prove my point, look no further than Covalent. When money is on the
table the rules change. Covalent have not been able to monetize 2.x in the
way everyone thought it could be done. Therefore what has the current
management team done - change direction to a new focus of web application
management and security. Sure, there's still a web server in there somewhere,
but now there's a different agenda, i.e. make money for the shareholders. 

So what would I do...


Apache 1.x  - mass market - status quo prevails - boring but stable

Apache 2.x  - tiny market - **risk takers only** - not stable enough,
tough to understand

So what's the differentiator that drives the next wave? Personally (my
opinion only, guys) it's hardware. 64-bit is now here. There are only going
to be a few choices: either Sun, AMD, PowerPC or Itanium. Sun has had 64-bit
for a long time and is already entrenched at the web edge. AMD adds CISC
compatibility with a hint of RISC. The upside is that it runs x86 code about
10% faster than Pentium PCs. PowerPC is a different beast but runs Apache
just fine.

AMD, Sun and PowerPC handle between 1 and 4 instructions per cycle; Itanium
(which is a whole new architecture) handles 6-8 instructions per cycle.
Bottom line: if you code it correctly, Itanium is going to be the flat-out
winner in a drag race. It also happens to run Apache fine, and as you've seen
from our benchmarks you can really tune this box to do some real magic.
That's a differentiator. Up until now there has been one big problem with
Itanium - it costs too much. That's going to change. 

We're coming up to a new hardware cycle (the technology adoption life cycle,
TALC). Machines purchased in 1999 are going to be replaced and people will
move to 64-bit. Why? Because we all like new toys, especially if they are
cheap. What we don't want is lots of hard work to port our apps. AMD has
made this easy, and next year Intel will do the same thing for Itanium by
releasing btrans (a binary translator) which will allow you to run all your
x86 apps at Xeon speeds on an Itanium. 

So what are we going to see - lots of cheap web edge processors, and people
will start moving their apps etc. over. So what's the opportunity for Apache?

My opinion only - optimize it for 64-bit. Focus 80% of your available
resources on Apache 1.x because it has such a HUGE user base. The remaining
20% of your resources should focus on 2.x.

Why? Because it's still too darn difficult to move people's apps to the 2.x
architecture. They will continue to take the path of least resistance which
means hardware. If you want to drop 1.3 and focus on 1.4 then make it for
64-bit, because it gives you a hardware performance boost.

It's all about focus and market segmentation - Covalent just learned this
the hard way: even with mod_compat to make your 1.x modules run in a 2.x
environment, they couldn't sell

RE: Antw: RE: consider reopening 1.3

2003-11-17 Thread Peter J. Cranstone
Oh yes - forgot about IPv6... that's a must-have for Apache. Is it available
for 1.x? If not, that would be the first feature to add.

Peter

-Original Message-
From: Andre Schild [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2003 10:07 AM
To: [EMAIL PROTECTED]
Subject: Antw: RE: consider reopening 1.3

People will move Apache 1.x to this platform because there is virtually NO
migration cost (i.e. recoding modules etc.) and they get a performance boost
while replacing an aging infrastructure.

12 million users on the move - make it easy for them: buy a cheap AMD Opteron
and optimize and improve Apache 1.4 on that box.
Today perhaps, but tomorrow with IPv6?

André

aarboard ag
internet - networks - screenprint design - multimedia
Egliweg 10 - Postfach 214 - CH-2560 Nidau (Switzerland)
Phone +41 32 332 9714 - Fax +41 32 332 9715
www.aarboard.ch - [EMAIL PROTECTED]



RE: Antw: RE: consider reopening 1.3

2003-11-17 Thread Peter J. Cranstone
 then *what* is the driver for 1.4 over 2.x??

Right now I think it's unknown - but with some reasoned debate I think a
path will emerge. 

One other thought - Apache needs an enemy - and I mean this in the nicest
possible terms. Having been on the receiving end of the forum's venom before,
I know how everybody responds to a threat. With 66% market share and IIS the
only real threat, why does everyone come to work in the morning?

Larry Ellison said it best in Softwars - "we pick our enemies very
carefully". I think this is an important point because it gives you focus
and a reason for being.

2.x is adrift because there is no threat to it apart from apathy - so what
do you do to get the focus back on Apache? Personally I think the driver for
1.4 over 2.x is 64-bit because that's what people are going to be buying
next.

64-bit offers you three things:

1. Performance  1-4 IPC (instructions per cycle)
2. Memory       >4GB RAM
3. Security 

Sun, AMD and PowerPC all offer 1 & 2; Itanium offers you 6-8 IPC and some very
sophisticated security capabilities not found in any other chip
architecture. The downside of Itanium is its perception in the
marketplace. Intel has done a terrible job of marketing it - however, they do
have to protect the Pentium cash cow. This will all change sometime next year
with the btrans software, which will give you Xeon performance for x86
programs running under the EPIC architecture.

Regards,



Peter



-Original Message-
From: Jim Jagielski [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2003 12:05 PM
To: [EMAIL PROTECTED]
Subject: Re: Antw: RE: consider reopening 1.3

Glenn wrote:
 
 On Mon, Nov 17, 2003 at 01:31:55PM -0500, Bill Stoddard wrote:
  Apache 1.4, an APR'ized version of Apache 1.3 (to pick up IPv6 and 64-bit
  support) with all the Windows-specific code stripped out and source
  compatibility (to the extent possible) with Apache 1.3 modules would
  probably see rapid uptake. I can't say that thrills me but it's probably

  true...
 
 +1
 

Again, unless there is 100% binary compatibility, which I do NOT
see with 1.4, then *what* is the driver for 1.4 over 2.x??

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
 will lose both and deserve neither - T.Jefferson


RE: consider reopening 1.3

2003-11-16 Thread Peter J. Cranstone
 What would 1.4 have or be for that to happen?

You have 12 million users - shouldn't be hard to simply ask them what they
would like to see.

Give the customer what he wants and he will be back for more. HTTP ain't
finished yet, plenty of room for some serious improvement.

And I'd also be seriously thinking about 64-bit and getting Apache around
the 4GB memory limitation.

Regards,


Peter




-Original Message-
From: Jim Jagielski [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 16, 2003 1:37 PM
To: [EMAIL PROTECTED]
Subject: Re: consider reopening 1.3

On Nov 16, 2003, at 4:12 AM, Glenn wrote:


 - lack of clear leadership and even basic direction
   scratch-an-itch development is fine and good, but not in total chaos

Umm... this *is* the ASF. It's *developer* driven. The direction
is defined by the developers.

 - cathedral development
   it appears that more than a few serious discussions have not happened
   on-list and instead happen on IRC or elsewhere (board rooms?) without
   apprising the list of what transpired.  (Or have there been 
 absolutely
   no recent design discussions?)

I agree that in some cases, irc is replacing dev@, which is
Not Good. Thank God we haven't started using stupid wikis.

 - patch management
   many patches posted to this list or the bug db languish in limbo.
   Very little happens until a core contributor decides to take over a 
 patch
   (more often than not it is more than simply shepherding it)
   Little feedback; it often feels like nobody's home to answer the 
 phone...
 - insufficient (developer) documentation
   sure, there's the source, but it takes a lot longer to wrap ones head
   around the Apache2 paradigms than it did for Apache 1.3 BUFFs and 
 such.
   The barrier to entry is much higher; solid design documents would be
   infinitely helpful
 - many new contributors are frustrated and discouraged
   see all of the above
   The voluble Kevin Kiley said it well:
   Make it EASY to contribute... not a nightmare.

The above are *not* 1.3 issues, per se, but httpd ones.

 *** We need to get back many of the disenfranchised Apache 1.3 
 developers

 Killing Apache 1.3 is not a good option.  There is a strong business
 need in many places to stay with Apache 1.3.

 The better option is reopening the 1.3 tree.
 Release 1.4 and open a 1.5 dev tree, with the specific intent on
 having the 1.4 release eventually map _directly_ into a _seamless_
 upgrade to Apache 2.x, with very easy and clear directions for using
  a reverse proxy for those legacy 1.3 third-party modules.)  While
 upgrading is not hard for developers, Apache is not a simple product.
 We need an even-better (tm) way to get users from There (Apache 1.3)
 to Here (Apache 2.x).  Yes, more trees are extra work, but this
 community is rapidly deteriorating without them.



As noted many times, 1.3 is actively maintained but not
actively developed. To be honest, I've not seen that
many people saying "I *really* want to add this to 1.3!".
If they had, chances are good that I'd +1 (not that what
goes into 1.3 is my decision...).

I'm curious how a 1.4 or whatever would make it easier for
people to make that transition. What would 1.4 have or be
for that to happen?



RE: Apache 2.0 Uptake thoughts

2003-11-14 Thread Peter J. Cranstone
Bill,

Thanks for the great link. Here's one for you:

http://www.securityspace.com/s_survey/server_graph.html?type=httpdomaindir=month=200310servbase=YToxOntpOjA7czoxMzoiQXBhY2hlLzEuMy4yNyI7fQ==serv1=QXBhY2hlLzIuMC40Nw==

It's the historical market share of all servers overlaid with 2.0.47

2.0.47 is making progress but you can clearly see that it's taken many years
to get any traction. 

Here's a question for you, seeing that you're closely related to Covalent -
where are the Covalent stats for sites running their version of Apache 2.x?

How many servers have they shipped?

Thanks


Peter



-Original Message-
From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 14, 2003 1:33 AM
To: [EMAIL PROTECTED]
Subject: Apache 2.0 Uptake thoughts

For those interested in the question of Apache 2.0 uptake, my favorite
resource
is http://www.securityspace.com/s_survey/data/index.html - where you get
gobs of details.  The upgrade/downgrade report helps identify if a release
was
a winner (mostly upgrading to, or through, that version) - or a loser (when
you
see some significant percentage fall back on earlier releases.)

Drill down to Theft and Upgrades, choose Apache, then a specific release,
e.g.
2.0.47.  Scroll down to the version upgrade/downgrade list.  

Some of this is going to be random noise - multiple versions working in a 
distributed farm, pre-adoption testing, or difficulty reconfiguring the
server
(in the case of 1.3 -> 2.0 transitions.)  But notably, 29.4k sites upgraded 
to .47 in October, and 1k sites backed down.  Good retention, it indicates 
that the 2.0.47 release solved problems.  (191 moved forward to 2.0.48-dev, 
not a bad thing at all.)

The server details is also fun, no matter if you are comparing products or
very specific releases.  Here's where it's interesting.

IIS 6.0 has 1.28% of the servers out there, that's about 5 1/3% of all IIS
servers deployed.  This, with a version that rolls out-of-the-box with
specific
flavors of the Windows OS.

About the same time as IIS 6, Apache 2.0 rolled out.  Ignoring for a moment
the 9.13% of Apache servers that don't reveal their version whatsoever,
and ignoring rounding errors, 3.57% of the servers out there use some 2.0
version of Apache, so that 6% of Apache servers (identifying themselves)
run 2.0 as opposed to another version.

Personally,  I'm pleased by a 6% uptake in a software application that
doesn't 
have to change till someone needs the new features, given that we continue
to provide the security patches people need for their existing 1.3
infrastructure.

Of course it will only grow higher if folks trust 2.0 and can get their
problems
solved, which the current dialog in [EMAIL PROTECTED] I hope will help address.  

Just statistics to ponder as we approach next week.  See you all in Vegas!

Bill



RE: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-13 Thread Peter J. Cranstone
Good response - ask the customer what he wants and then help him achieve it.

It all starts with stability - compatibility - performance. The ASF has a
tough job ahead of it, getting millions of users to change.

Not an easy task in today's environment.


Peter

-Original Message-
From: Ben Hyde [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 13, 2003 7:24 AM
To: [EMAIL PROTECTED]
Subject: Re: the wheel of httpd-dev life is surely slowing down, solutions
please

I've quite a few ideas and opinions about why things might be quiet 
these days.  I'd recommend against taking any of these ideas too 
seriously.  Here is an idea:  we have gotten out in front of the users.

Products have features and over time they get more and more features.  
Users have some ability to absorb features and over time their ability 
to do so rises.  For young products the users are ahead of the product 
(What do you mean it doesn't do CGI scripting!?).  For older products 
the mismatch gets smaller and smaller (I need massive virtual 
hosting!).

My joke about this is that the process is a perfect behaviorist 
training loop.  Young products get strong feedback which trains their 
sponsors to listen to customer demand and add features.  Over time that 
feedback peters out, which trains them to listen more and more 
carefully.  Finally they get to the point where they listen to the wind 
and they hear feature demands in its ghostly moans.  That's called 
market research. :-)

Commercial products tend to overshoot the users.  Consider Microsoft's 
office products!

Open source is less given to overshoot.  If features only go in because 
a user volunteered to do the work, that acts, to some degree, to temper 
the chance of overshoot.

Open source can still overshoot.  Exceptional users may add features 
that are rarely needed.  Firms, lead by market research or other 
in-house demands, may volunteer.

So, one theory is that we have overshot the user's ability to absorb 
new features.  Some amount of overshoot is to be expected on any major 
release.

Solutions?  Well if you buy this model - and like I said it's only one 
- then the trick is to aid users in climbing the learning curve.  
Figuring out where the user demand is and then helping to bridge the 
gap using the new features.  Helping all the complementary products 
absorb the new stuff.

This may seem like an argument that we have filled out our ecological 
niche; but it's slightly different than that.  The niche isn't fixed, 
it is a free variable as well.

  - ben



RE: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-11 Thread Peter J. Cranstone
Something else to think about...

What's the differentiator in the marketplace between 1.x and 2.x?

(hint: it's not a feature list)

If I were to go out and buy Apache (Covalent), apart from some management
tools (features), what's the biggest differentiator between the old version
(public domain 1.x) and the new version (2.x)?

Invariably three things come to mind - performance, stability,
compatibility? 

So is Apache 2.x faster or slower than 1.x? Has anyone shown definitively
that 2.x can serve pages faster than 1.x? and if so by what percentage or
factor?

Looks like 2.1 is going to be more stable than the old 2.x versions but is
it more stable than 1.x?

Finally compatibility? Can I run my 1.x modules unmodified in a 2.x
environment? 

HTTPD dev life is slowing down - simply fixing bugs and adding new features
is not the way to differentiate; there are probably 50,000 users of 2.x while
there are millions and millions of 1.x users.

What's it going to take to shift them to 2.x?

Not features (only the minority of users now care) - not compatibility
(although it helps) - my bet is raw performance.

So, anyone got any hard data that shows Apache 2.x serving pages factors
faster than 1.x?

Regards,


Peter


-Original Message-
From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 11, 2003 2:00 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: the wheel of httpd-dev life is surely slowing down, solutions
please

At 07:14 PM 11/10/2003, Stas Bekman wrote:
I have several reasons to believe that the wheel of httpd-dev life is
slowing down and something has to be done to get this wheel up to the
speed like in the good old days. The following observation are listed
in no particular order. I've also tried to offer suggestions how to
resolve problems.

Always good to discuss, but httpd is about folks contributing and the team
reviewing patches.  When there is lots of activity, things move quickly.
Summer is a slow period for a lot of projects.  Stability slows things down
too (and yes, 2.0.48 is far more stable than anything released before on
the 2.0 branch.)

1) Bugs

searching for NEW and REOPENED bugs in httpd-2.0 returns: 420 entries
http://nagoya.apache.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=REOPENED&product=Apache+httpd-2.0

Yes - these are important, I'm the first to admit I have a number I track
that
I haven't resolved.  And more eyes are needed :)  However...

Suggestion: make bugzilla CC bug-reports to the dev list. Most
developers won't just go and check the bugs database to see if there
is something to fix, both because they don't have time and because
there are too many things to fix. Posting the reports to the list
raises a chance for the developer to have a free minute and may be to
resolve the bug immediately or close it.

Nope, it assures a good number of interested contributors SHUT DOWN
their subscriptions to httpd-dev.  Trust me, I subscribe to bug traffic as
well.

Guess which of the two I read daily?  If I didn't split them, I would never
be caught up on the developer discussions.  (Even those I don't understand
completely I do read.)

2) Lack of Design

In my personal opinion, the move to CTR from RTC had very bad
implications on the design of httpd 2.1.

You seem to be misinformed here...  Apache 2.0 was ALWAYS CTR
until we decided to put a roadblock in the way of breaking the stable
version, and encourage that experimentation to move to an unstable
branch.  RTC was only introduced after the last ApacheCon.  And very
very resisted, at that.

2a). Design of new on-going features (and changes/fixes) of the
existing features is not discussed before it's put to the code. When
it's committed it's usually too late to have any incentive to give a
design feedback, after all it's already working and we are too busy
with other things, so whatever.

However, "Show me the code" remains the mantra of httpd.  So continuing
to use CTR, the code is in place.  With httpd-test/perl-framework the coder
can even verify that their new change doesn't break any tests that are
already defined.

The worst part is that it's now easy to sneak in code which otherwise
would never be accepted (and backport it to 2.0). I don't have any
examples, but I think the danger is there.

That's always been true in httpd.  We didn't lock 1.3 until 2.0 was nearing
completion.  We didn't lock 2.0 until 2.1 was available for folks to hack
on.

It is VERY hard to get code into 2.0.  That code is RTC.  Gratuitous changes
and new features aren't encouraged.  Time to move on to 2.1 for all of the
great and wonderful things our web server ought to do.  In fact, it's why
I've
suggested point blank that the 'which pass at the startup file' patch you
offered
will never belong in 2.0 - anyone designing for an arbitrary 2.0 can't even
count
on such a feature.  But anyone using the future 2.2 release should trust
that
it is present.

2b). As a side-effect when someone asks a design question 

RE: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-11 Thread Peter J. Cranstone
 It's not anymore cool to work on Apache.

You nailed it - because no one knows where it's going. Where's the focus,
what does Apache really want to be, who's leading the charge?

I've been following this forum a long, long time and the change in the last
2 years has been the most dramatic - the old guard has gone, there is little
leadership and even less reason to do anything.

It takes a tremendous amount of work to build a quality software project and
sadly there is little enthusiasm to really improve Apache. 

One reason is obvious - with 66% of the market you're (close to) a monopoly,
and we've all seen what happens when competition disappears from the
marketplace.

Regards,


Peter


-Original Message-
From: Daniel Lorch [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 11, 2003 7:10 AM
To: [EMAIL PROTECTED]
Subject: Re: the wheel of httpd-dev life is surely slowing down, solutions
please

hi,

  2d). CTR seemed to come as a replacement for design discussions. It's
  very easy to observe from the traffic numbers:

Please excuse the total ignorance of passive Apache-Dev readers, but
these abbreviations were new to me. I've found them in the Apache
Glossary, though, and provide them to all who didn't know them
either:

   http://incubator.apache.org/learn/glossary.html#CommitThenReview
   http://incubator.apache.org/learn/glossary.html#ReviewThenCommit

Not trying to start a flamewar, but could the Jakarta Project have
had an influence on the decline? Hal Flynn's article [1] points out quite
well that for most server-side applications Java (and its clone .NET)
provides a viable platform for secure applications with negligible
impact on performance. And with all the fuss around J2EE (JBoss vs.
Geronimo) a new feature in Apache might not catch as much attention
in the community anymore. OpenSource is a development model based
on peer ego-gratification, and if the incentives to work
on Apache aren't that high anymore, the Apache httpd server might not
be able to attract as many developers anymore. Or to put it in other
words: It's not anymore cool to work on Apache.

uuhmm .. asbestos or not? No intention to provoke, seriously. We're
all grown-up people and our kids are going to read these mailing lists
in CS history classes, so behave, please ;)

[1] http://www.theregister.co.uk/content/4/33859.html

-daniel




RE: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-11 Thread Peter J. Cranstone
There is no flame - just a couple of points and a request for data.

 If you want to improve something, you should provide solutions,
not criticism

Certainly - early next year you will see them. Here are some current
performance stats with some new technology we're working on.


Configuration                                     Tool          Elapsed Time  Data Transfer Rate  Requests per  Requests per  Performance
                                                                (sec's)       (KB/sec)            Second        Minute        Gain Factor
Apache                                            Apache Bench  38.735        882.92              2581.64       154,898       1.0
Cyclone Proxy Cache + Apache                      Apache Bench  15.663        2387.79             6384.47       383,068       2.47
Apache                                            Zeus Bench    39.961        855.83              2502.44       150,146.4     1.0
Squid + Apache                                    Zeus Bench    28.910        1314.42             3459.01       207,540.6     1.38
Cyclone Proxy Cache + Apache                      Zeus Bench    15.176        2464.42             6589.35       395,361       2.63
Cyclone Proxy Cache (Tuned Parser) + Apache       Zeus Bench    13.505        2769.34             7404.67       444,280.2     2.95
Cyclone Proxy Cache (4 Tuned Functions) + Apache  Zeus Bench    13.006        2875.6              7688.76       461,325.6     3.07

These numbers were obtained using a single-processor Itanium® 1.0GHz
(Madison) chip. By tuning certain HTTP string-handling functions we have
seen up to a factor-of-11 performance improvement.

Our next benchmark is due by year end. Essentially we will be adding one
more line to the stats above. The goal is very simple - transmit greater
than 1 million requests in a single minute on a single-processor Itanium
1GHz machine: a factor-of-10 performance improvement. A single-processor
Deerfield Itanium® chip costs $744 - our solution doesn't require a current
OS or hard drive to operate, it scales to multiple chips, and it can support
a cache of up to 1 terabyte of RAM.

 Revolution is for new players, carefully crafted evolutions are for the 
Mass

Yep…  Support for a 1TB cache, no hard drive, no current OS required, and
the ability to pump data faster than any other platform on the planet should
do the trick. Only thing left is to get the Itanium® platform into a single
1RU box at sub $5,000. I doubt we will have to wait long for that.

Long live the revolution

Regards,


Peter

-Original Message-
From: Henri Gomez [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 11, 2003 7:41 AM
To: [EMAIL PROTECTED]
Subject: Re: the wheel of httpd-dev life is surely slowing down, solutions
please

Peter J. Cranstone wrote:

It's not anymore cool to work on Apache.
 
 
 You nailed it - because no one knows where it's going. Where's the focus,
 what does Apache really want to be, whose leading the charge?
 
 I've been following this forum a long, long time and the change in the
last
 2 years has been the most dramatic - the old guard has gone, there is
little
 leadership and even less reason to do anything.
 
 It takes a tremendous amount of work to build a quality software project
and
 sadly there is little enthusiasm to really improve Apache. 
 
 One reason is obvious - with 66% of the market you're a monopoly (close)
and
 we've all seen what happens when competition disappears from the market
 place.

I'm not sure http-dev is the place to flam ASF and its commiters.

If you want to improve something, you should provide solutions,
not criticism.

HTTPD has 66% of the market, and it's great to see that
an open-source solution is well ahead of M$.

Sun, Oracle and other major corporations stopped dreaming of 50% market
share years ago.

At least we can say the Apache Software Foundation does it and
maintains its leading position.

How ?

- By producing solutions like HTTPD which are stable,
   full featured and work on so many platforms.

Revolution is for new players, carefully crafted evolutions are for the 
mass.




RE: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-11 Thread Peter J. Cranstone
 Well the http tuning of string handling is a known factor of
 optimization

You're right - nothing new about optimizing string handling - just doing it

 BTW, if you post these benchmarks on the httpd-dev list, should I
assume you'll give the ASF your optimized and tuned algorithms?

I wouldn't assume anything at this point. However, if you remember correctly,
we did give the ASF mod_gzip (last time I checked, even Google was
compressing their output), and there will be an open source contribution
at some point.

 Do you know that IBM does some nice optimization using FRCA on its
Apache 2.0 implementation on iSeries

Great - where are the benchmarks? As I said in my earlier post, what's the
differentiator between 1.x with 66% of the market and 2.x with 0% of the
market?

Remember your audience - it either has to make me money or save me money. If
I'm going to implement 2.x then I should be able to see a return on the time
I've invested either through a hardware/performance improvement or
productivity.

We all know why 2.x is struggling - the bar was set with 1.x and 2.x failed
to move it more than about 1 inch. 


Peter



-Original Message-
From: Henri Gomez [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 11, 2003 8:25 AM
To: [EMAIL PROTECTED]
Subject: Re: the wheel of httpd-dev life is surely slowing down, solutions
please

Peter J. Cranstone a écrit :

 There is no flame - just a couple of points and a request for data.
 
 
If you want to improve something, you should provide solutions,
 not criticism
 
 Certainly - early next year you will see them. Here are some current
 performance stats with some new technology we're working on.
 
 
 Configuration                            Tool          Elapsed     Data Transfer  Requests    Requests    Perf. Gain
                                                        Time (sec)  Rate (KB/sec)  per Second  per Minute  Factor
 ---------------------------------------  ------------  ----------  -------------  ----------  ----------  ----------
 Apache                                   Apache Bench  38.735      882.92         2581.64     154,898     1.0
 Cyclone Proxy Cache + Apache             Apache Bench  15.663      2387.79        6384.47     383,068     2.47
 Apache                                   Zeus Bench    39.961      855.83         2502.44     150,146.4   1.0
 Squid + Apache                           Zeus Bench    28.910      1314.42        3459.01     207,540.6   1.38
 Cyclone Proxy Cache + Apache             Zeus Bench    15.176      2464.42        6589.35     395,361     2.63
 Cyclone Proxy Cache (Tuned Parser)
   + Apache                               Zeus Bench    13.505      2769.34        7404.67     444,280.2   2.95
 Cyclone Proxy Cache (4 Tuned Functions)
   + Apache                               Zeus Bench    13.006      2875.6         7688.76     461,325.6   3.07

 
 These numbers were obtained using a single processor Itanium® 1.0Ghz
 (Madison) chip. By tuning certain HTTP string handling functions we have
 seen up to a factor 11 performance improvement.
 
 Our next benchmark is due by year end. Essentially we will be adding one
 more line for the stats above. The goal is very simple - transmit greater
 than 1 million requests in a single minute on a single processor Itanium
 1Ghz machine. A factor 10 performance improvement. A single processor
 Deerfield Itanium® chip costs $744 - our solution doesn't require a current
 OS, nor hard drive to operate - it scales to multiple chips and can support
 a cache of up to 1 terabyte of RAM
 
 
Revolution is for new players, carefully crafted evolutions are for the
 mass.

 Yep…  Support for a 1TB cache, no hard drive, no current OS required, and
 the ability to pump data faster than any other platform on the planet should
 do the trick. Only thing left is to get the Itanium® platform into a single
 1RU box at sub $5,000. I doubt we will have to wait long for that.
 
 Long live the revolution
 
 Regards,

Well, the HTTP tuning of string handling is a known factor of
optimization; just study Tomcat 3.2, 3.3 and Coyote 1.1 and
you'll see that it could still be optimized.

BTW, if you post these benchmarks on the httpd-dev list, should I
assume you'll give the ASF your optimized and tuned algorithms?

Do you know that IBM does some nice optimization using FRCA on its
Apache 2.0 implementation on iSeries?

A proof that Apache 2.0 is a great platform for such games ;)



RE: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-11 Thread Peter J. Cranstone
Thanks for the stats - our environment supports throughput of 1Gbps on a
single processor. Near-linear scalability is expected with additional chips,
up to 256 CPUs.

I agree with your comments on IPv6 - it's already here - might as well
embrace the horror.

Regards,


Peter

-Original Message-
From: Colm MacCarthaigh,,, [mailto:[EMAIL PROTECTED] On Behalf Of Colm
MacCarthaigh
Sent: Tuesday, November 11, 2003 9:53 AM
To: [EMAIL PROTECTED]
Subject: Re: the wheel of httpd-dev life is surely slowing down, solutions
please

On Tue, Nov 11, 2003 at 06:02:36AM -0700, Peter J. Cranstone wrote:
 So, anyone got any hard data that shows Apache 2.x serving pages factors
 faster than 1.x?

Yes, plenty :) ftp.heanet.ie serves about 1 million requests, well over
a terabyte of data per day, and maintains an average of about 220Mbps for
HTTP. It has roughly 10 times lower latency, and serves about 5 times more
per second, under 2.0 than 1.3, and I benchmarked that in the early 2.0 days.
Performance under 2.x has improved since.

If I could use sendfile (my hardware is still broken in IPv6) I'd see
a much bigger increase in performance. 2.x knocks the socks off of 1.x.
I've benchmarked many other transitions, and though the improvement
is smaller for dynamic content I've never seen the numbers get worse.
Of course this is all going from 1.x prefork to a 2.0 worker MPM.

More importantly, though: 2.x has IPv6 support. And whilst many people
reading this mail may think IPv6 is an obscure requirement, many of us
are in parts of the internet where it's the de facto standard. We couldn't
even consider rolling out an app which didn't have reliable IPv6 support.
Many people are finding themselves in such parts of the internet, at an
increasing rate.

As an aside: it's been my experience that most of the industry doesn't
rate application performance all that highly in evaluation criteria
(mainly because buying beefier hardware is an easier solution).

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


RE: [PATCH] mod_deflate extensions

2002-11-21 Thread Peter J. Cranstone
 And finally, what about adding mod_deflate by default in
 Apache 2.0.44?

Here's a reason for this: content encoding and the ability to send
compressed data are part of the HTTP standard, and if Apache 2.x is really
HTTP compliant then it should support it.

Why build and offer a new version of Apache which is not 100% HTTP
compliant?
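
As a rough illustration of what the content-coding side of the spec amounts
to in practice, here is a minimal zlib sketch that gzips a response body so
it could be sent with Content-Encoding: gzip. This is not mod_deflate's or
mod_gzip's actual code; the function name and buffer handling are assumptions
made for the example (deflateBound() can be used to size the output buffer
conservatively).

#include <string.h>
#include <zlib.h>

/* Illustrative sketch, not mod_deflate: gzip a body buffer in one shot so it
 * could be served with a "Content-Encoding: gzip" header.  Returns 0 on
 * success and fills *out_len with the compressed size. */
static int gzip_body(const unsigned char *in, unsigned long in_len,
                     unsigned char *out, unsigned long out_size,
                     unsigned long *out_len)
{
    z_stream zs;
    memset(&zs, 0, sizeof(zs));

    /* windowBits of 15 + 16 asks zlib to emit a gzip header and trailer. */
    if (deflateInit2(&zs, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                     15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return -1;

    zs.next_in   = (Bytef *)in;
    zs.avail_in  = (uInt)in_len;
    zs.next_out  = out;
    zs.avail_out = (uInt)out_size;

    /* Single-pass compression; out_size must be large enough for the result. */
    if (deflate(&zs, Z_FINISH) != Z_STREAM_END) {
        deflateEnd(&zs);
        return -1;
    }
    *out_len = zs.total_out;
    deflateEnd(&zs);
    return 0;
}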

Regards,


Peter



-Original Message-
From: Henri Gomez [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, November 21, 2002 2:55 AM
To: [EMAIL PROTECTED]
Subject: Re: [PATCH] mod_deflate extensions

Bill Stoddard wrote:
Pie is rarely free at a truck stop.

 
 At least none that you would want to dip your fingers into

And finally, what about adding mod_deflate by default in
Apache 2.0.44?

And if so, should we use the mod_gzip compression functions instead of
depending on zlib?



RE: mod_blanks

2002-09-26 Thread Peter J. Cranstone

Fabio,

Mod_gzip for Apache is a better solution. Prior to its release, Kevin
and I both looked at what we call "poor man's compression", i.e. just
removing the blank spaces, lines and other garbage in a served page.

Here was what we learned.

No one was interested. It didn't save much on the overall page, and
people really don't like their HTML etc being messed with.

Also, if you are going to spend the CPU cycles, it's easier to simply use
gzip compression to squeeze the page by upwards of 80% and preserve all the
formatting in the author's HTML.

Mod_gzip already saves a ton of bandwidth, and with a current browser
there is no need to install a client-side decoder.
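
For contrast, the "poor man's compression" described above boils down to a
pass like the toy sketch below (illustrative only, not the code we actually
wrote). Collapsing whitespace typically buys only a few percent, which is why
gzip won out.

#include <ctype.h>
#include <stddef.h>

/* Toy sketch of "poor man's compression": collapse runs of whitespace in a
 * served page, in place.  Real HTML is messier (<pre> blocks, scripts,
 * attribute values), which is part of why this approach was dropped. */
static size_t collapse_whitespace(char *buf, size_t len)
{
    size_t r, w = 0;
    int in_space = 0;

    for (r = 0; r < len; r++) {
        if (isspace((unsigned char)buf[r])) {
            if (!in_space)
                buf[w++] = ' ';      /* keep a single space per run */
            in_space = 1;
        } else {
            buf[w++] = buf[r];
            in_space = 0;
        }
    }
    return w;                        /* new, slightly smaller length */
}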

Regards,


Peter J. Cranstone


-Original Message-
From: fabio rohrich [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, September 26, 2002 6:38 AM
To: [EMAIL PROTECTED]
Subject: mod_blanks

I'm going to develop this topic for my thesis.
Does anybody have any suggestions for it? Something to
add in the development (like compression of the strings)
or some feature to implement?

And, the last thing: what do you think about it?

Thanks a lot,
Fabio

- mod_blanks: a module for the Apache web server which would, on the fly,
remove unnecessary blank space, comments and other non-interesting
things from the served page.  Skills needed: the C language, a bit of
text parsing techniques, HTML, learning the Apache API.  Complexity: low to
moderate (after learning the API).  Usefulness: moderate to low (but
maybe better than that; it's a kind of nice toy topic that could be
shown to save a lot of bandwidth on the Internet :-).






RE: c-l filter and buffering of the entire response

2002-07-03 Thread Peter J. Cranstone

Couple of thoughts...

1. What if the content is compressed?
2. What about compressed chunked encoding?
3. What if servers start supporting compressed headers (RFC 1144)?

Regards,


Peter

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Jeff Trawick
Sent: Wednesday, July 03, 2002 5:55 AM
To: [EMAIL PROTECTED]
Subject: c-l filter and buffering of the entire response

The c-l filter thinks that a partial send is not okay if using HTTP
0.9 while connection->keepalive is AP_CONN_UNKNOWN.  If this were set to
AP_CONN_CLOSE, the c-l filter would allow a partial send.  I dunno what
else that would break.

Right now Apache is eating lots and lots of storage on a big CGI
response to an HTTP 0.9 request.  The entire response is getting
buffered.

With HTTP/1.0, the entire response is getting buffered too.  Do we
absolutely have to get the content length, or can we set connection
close and omit the content length field?

(Haven't we been through this before?)
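
A rough sketch of that suggestion against the 2.0 output filter API might
look like the following. It is untested and offered only to make the idea
concrete (the filter function name is made up), not as a patch to the c-l
filter.

#include "httpd.h"
#include "http_protocol.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "apr_tables.h"

/* Untested sketch: instead of buffering the whole response to compute a
 * Content-Length, mark the connection close, drop any C-L header and let
 * the data stream out. */
static apr_status_t pass_without_length(ap_filter_t *f, apr_bucket_brigade *bb)
{
    request_rec *r = f->r;

    r->connection->keepalive = AP_CONN_CLOSE;          /* no keep-alive */
    apr_table_unset(r->headers_out, "Content-Length"); /* omit the C-L field */

    return ap_pass_brigade(f->next, bb);               /* stream, don't buffer */
}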

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



RE: c-l filter and buffering of the entire response

2002-07-03 Thread Peter J. Cranstone

I stand corrected... 

But there's no reason why the HTTP header cannot be compressed either.
This is especially critical when conserving bandwidth in a wireless
environment. 1200-byte headers are not uncommon, and in a latency-laden
environment every bit saved enhances the consumer's experience.

Is Apache ready for HTTP header compression?


Peter

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, July 03, 2002 6:15 AM
To: [EMAIL PROTECTED]
Subject: RE: c-l filter and buffering of the entire response


 3. What if servers start supporting compressed headers. RFC 1144

The 'header' as referred to by RFC 1144 is not the HTTP header but the
TCP/IP header.

Or in other words, Van Jacobson compression and other Ethernet-, IP- and
TCP-level compression techniques have fundamentally nothing to do with the
socket-based TCP stream level on which Apache, or any internet protocol
-application-, functions.

Dw




RE: OT: whither are we going?

2002-02-27 Thread Peter J. Cranstone

Roy,

 People outside the community can only influence what they do by
performing the work necessary to eventually be considered part of the
community, or by paying someone within the community to do it for them.

I agree with you; however, I think what everyone is looking for is
leadership. Stop the feature creep, decide what is necessary to get to
RC and then release a finished version.

Will the real person in charge of Apache 2.0 stand up... it can't be a
democracy anymore; there has to be leadership. Apache is losing ground
to IIS, like it or not. 2.0 is important to the community, so someone needs
to lay down the law (that should be you) and say here is a set of
guidelines for the beta release of 2.x.

Once people see leadership, then the final part of your email will be
realized (i.e. "The only responsibility we have is to keep the
community open to new volunteers") - which will never happen without
leadership.



 Peter J. Cranstone



-Original Message-
From: Roy T. Fielding [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, February 26, 2002 2:35 PM
To: [EMAIL PROTECTED]
Subject: Re: OT: whither are we going?

 As far as having no responsibility to the people/companies that USE
 Apache, I put forth this argument.  When a company bases its business
 or a person bases their career on a program, in MY OPINION, there then
 springs into being an implied responsibility on the development team
 to support the product and keep it alive.  I.e. they have put THEIR MONEY
 behind this product.  When a web hosting company says "I use Apache",
 that means that they are backing Apache with THEIR MONEY.  No, they did
 NOT pay the ASF to RENT a license of Apache, but they are STILL spending
 money on Apache.

That is total bullshit.  When a company pays someone to support a product,
whether that someone be a company like Covalent or an independent software
developer, THEN and only then is there any implied responsibility to that
person's needs.  It is completely insane to think that a volunteer group
of developers is going to be responsible to all 60 million or so users just
because they happen to like the free product.

If you aren't contributing, you aren't part of the Apache community.
People within the community will work on the problems that they consider
to be most important.  People outside the community can only influence
what they do by performing the work necessary to eventually be considered
part of the community, or by paying someone within the community to do it
for them.

The only responsibility we have is to keep the community open to new
volunteers.

Roy



RE: Some Benchmark Numbers

2001-11-27 Thread Peter J. Cranstone

While we're on the subject of benchmarks, are there any numbers from the
Covalent Apache 2.0 version? I.e. how does it perform against the PD version
and/or Apache 1.3.x?


Peter

-Original Message-
From: Ryan Bloom [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, November 27, 2001 7:46 AM
To: [EMAIL PROTECTED]; Brian Pane
Subject: Re: Some Benchmark Numbers

On Monday 26 November 2001 06:25 pm, Brian Pane wrote:
 Ian Holsman wrote:
 [...]

 Summary:
 2.0 HEAD is approaching (and in some cases exceeding) the performance
 of Apache 1.3, but there is still some work needed to reduce the CPU
 utilization and locking of the pools.

 Notably, there are just two things that stand out as big
 performance problems in 2.0 right now:

 * mutexes in the pools

 * mallocs in the bucket code

What ever happened to the bucket free list code?
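
For anyone who missed that discussion, the free-list idea is roughly the
pattern below - a generic, single-threaded sketch, not the actual APR bucket
allocator - where fixed-size nodes are recycled so the hot path avoids a
malloc()/free() pair per bucket.

#include <stdlib.h>

/* Generic free-list sketch (not the real APR bucket allocator): recycle
 * fixed-size nodes instead of hitting malloc()/free() for every bucket.
 * Single-threaded for clarity; a real allocator needs per-thread lists
 * or locking, which ties back to the pool-mutex point above. */
typedef struct node {
    struct node *next;
    char payload[64];            /* stand-in for per-bucket data */
} node;

static node *free_list = NULL;

static node *node_get(void)
{
    node *n = free_list;
    if (n) {
        free_list = n->next;     /* hot path: reuse a recycled node */
        return n;
    }
    return malloc(sizeof(*n));   /* cold path: fall back to malloc */
}

static void node_put(node *n)
{
    n->next = free_list;         /* push back for reuse instead of free() */
    free_list = n;
}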

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--




RE: cvs commit: httpd-2.0 STATUS

2001-09-10 Thread Peter J. Cranstone

 All I keep thinking, is that we are trying to spite RC by adding a
different GZ module

Don't worry about it. Let's see if we can make a decision on what is
good for the survival of Apache irrespective of what that means for RC.


Peter

-Original Message-
From: Ryan Bloom [mailto:[EMAIL PROTECTED]] 
Sent: Monday, September 10, 2001 7:46 AM
To: [EMAIL PROTECTED]; Greg Stein
Subject: Re: cvs commit: httpd-2.0 STATUS


On Monday 10 September 2001 03:59, Greg Stein wrote:
 On Fri, Sep 07, 2001 at 10:38:45PM -0700, William A. Rowe, Jr. wrote:
 ...
  Ryan's veto has effectively tabled this for now.  I'm beginning to
  respect this from the perspective of putting a release in people's
  hands.  It can be introduced soon afterwards, or if someone likes, a
  subproject can be created.  This has been too long, people; let's put
  2.0 to bed.

 Some people believe his veto is illegitimate -- that there is no 
 technical reason for vetoing the inclusion into modules/experimental.

I have removed my veto.  Although, I would point out that, illegitimate
veto or not, nobody in this group has ever gotten away with going
through a veto. The only reason I have removed my veto is that it really
looks like everybody was about to ignore it anyway.  This whole thing
just leaves me with a bad taste in my mouth.  All I keep thinking is
that we are trying to spite RC by adding a different GZ module.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



RE: why not post mod_gzip 2.0? (was: Re: [PATCH] Add mod_gz to httpd-2.0)

2001-09-08 Thread Peter J. Cranstone

 The absolute best way to stay on top of API changes is to make your
code available to the people making those changes.

Rasmus... We don't want any distractions to the core code until it's
stable. It took 5-6 months to get mod_gzip stable for 1.3.x. I doubt it
will take that long in Apache 2.x, but until it's stable why release
something which could cause a problem?

Kevin and I have asked that mod_gzip be included in Apache 1.3.x because
it's stable, has been tested by thousands of users and is part of the spec.
We'd like to repeat the same performance for 2.x.


Peter

-Original Message-
From: Rasmus Lerdorf [mailto:[EMAIL PROTECTED]] 
Sent: Friday, September 07, 2001 12:08 AM
To: [EMAIL PROTECTED]
Subject: RE: why not post mod_gzip 2.0? (was: Re: [PATCH] Add mod_gz to
httpd-2.0)


  Why won't you post mod_gzip 2.0 *today*?

 Because Apache 2.x is not STABLE, not In BETA and the API set is not 
 yet FROZEN... When it is, we will release mod_gzip as a third party 
 module, which we will support and maintain.

I have stayed far away from this thread, but this just doesn't make any
sense to me.  The absolute best way to stay on top of API changes is to
make your code available to the people making those changes.  As soon as
we had an Apache2 PHP filter it was on our public CVS server and both
Doug and Ryan have tweaked it every now and then and many people have
tested it and found problems.  I don't understand how this could
possibly be a bad thing and definitely reduces the amount of
tail-chasing we will have to do later on.

-Rasmus




RE: zlib inclusion and mod_gz(ip) recap

2001-09-08 Thread Peter J. Cranstone

 I do not believe that adding new functionality to the server is the 
 way to get a release out the door.

Ryan, I agree with you on this point. Apache has to get to solid beta
before ANY new functionality is included. I believe I have backed you on
this subject before. It is simply too much to ask of everyone to get MPM
and Filtering working and then throw in something new to the mix which
has been untested and unproven.

When we released mod_gzip (through official channels at Apache) we did
so for Apache 1.3.x. It took 6 months for the code to get to its current
release, and it's now considered stable. It's been tested by tens of
thousands of people. Kevin and I did nothing for 6 months after the
release except respond to issues and educate people about the HTTP 1.1
content encoding spec.

There is a reason BOTH of us have insisted we will not release mod_gzip
for 2.x until Apache is in beta. You have more than enough to do to get
the whole server stable and in use by your user base before we blast
another module into the mix.

Kevin and I have been on this forum for years. Sure we've broken a few
rules but then that's what you'd expect from people who are going to
push the limits. The flame war is over, they never last more than a few
days anyway and business is returning to normal. Both Kevin and I are
passionate about compression on the web and have no need to see Apache
fail. We want you to succeed and we devote company resources to that
goal. It would be irresponsible to suddenly throw mod_gzip into 2.x
until you have it stable. 

It's another distraction which no one needs right now.

Even though my vote doesn't count: -1 on including mod_gz for now.

Later...


Peter

-Original Message-
From: Ryan Bloom [mailto:[EMAIL PROTECTED]] 
Sent: Friday, September 07, 2001 10:08 PM
To: [EMAIL PROTECTED]; Rodent of Unusual Size
Subject: Re: zlib inclusion and mod_gz(ip) recap


On Friday 07 September 2001 18:28, Rodent of Unusual Size wrote:
 * On 2001-09-07 at 21:21,

   Ryan Bloom [EMAIL PROTECTED] excited the electrons to say:
  On Friday 07 September 2001 17:46, Greg Stein wrote:
   Current consensus appears to be to add it to modules/experimental.
 
  I don't see how that could possibly be the consensus, since I have 
  -1 in the STATUS file.

 And I pointed out that I think the bases you quoted for your veto were
 specious if the module in question is in /experimental/. :-)

 Since several people seem to be in favour of putting it into 
 experimental, I wonder how much effort is going to go into trying to 
 get you to rescind your veto instead of into real work on 2.0.. :-D

I would hope none.  I am incredibly unlikely to rescind my veto.  I do not
believe that adding new functionality to the server is the way to get a
release out the door.

I also do not believe that we should be making this decision right now.
I am 100% in agreement with Jim about this.  We should table this whole
discussion until emotions have calmed, the patent issue has been finally
resolved (Dirk told me today that he knows of two patents, and he would
post about them), and Apache 2.0 has shipped.

Putting the module in experimental essentially means that at some point,
we expect to move it out of that directory.  I do not know that is the
case.

I would also point out that the consensus as I count it doesn't have
this going into the server at all right now.  I am counting five people
who have said on list that they would rather this module didn't go into
the server at this point (although I am the only one to veto).  I also
count five who have said they would like it to go into the server,
either in filters or experimental. That does not sound like any kind of
consensus.  My count is below.

Would prefer not:  Ryan, Bill, Doug, Jim, Ben
Would prefer: Ken, Ian, Justin, Cliff, Greg

I may have missed one or two, but even if I did, that would not be a
very 
strong majority.  

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--




RE: why not post mod_gzip 2.0? (was: Re: [PATCH] Add mod_gz to httpd-2.0)

2001-09-06 Thread Peter J. Cranstone

 Why won't you post mod_gzip 2.0 *today*?

Because Apache 2.x is not STABLE, not in BETA, and the API set is not yet
FROZEN... When it is, we will release mod_gzip as a third-party module,
which we will support and maintain.

In the meantime use mod_gz.

Peter

-Original Message-
From: Greg Stein [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, September 05, 2001 10:34 PM
To: [EMAIL PROTECTED]
Subject: why not post mod_gzip 2.0? (was: Re: [PATCH] Add mod_gz to
httpd-2.0)


On Wed, Sep 05, 2001 at 01:46:55PM -0600, Peter J. Cranstone wrote:
 I suppose the only thing we can do is contribute. Kevin has, mod_gzip 
 was released under an ASF license which was approved by the ASF Board.

 If there is a hidden agenda there then you're better than I at 
 spotting it.
 
 Mod_gzip is available for 1.3.x
 
 It will be available for 2.x when you hit beta.
...
 Now tell me where the hidden agenda is.

You and Kevin never answered my simple question:

Why won't you post mod_gzip 2.0 *today*?

I asked several times, but it never got answered. I seem to recall
somebody stating it was being held, pending stability of the internal
Apache APIs. But that isn't an issue if it is to be incorporated into
Apache.

Another response was "look at 1.3 and port it forwards to 2.0." Why
should that happen, when you've done the work? If you intend to
contribute it, then why the delay, making us repeat your work?

Without that simple question answered, then yes: I would think there is
some kind of hidden agenda. Avoiding that question, and not posting
mod_gzip 2.0 makes it appear that you are trying to hide something.

The more conspiratorial-minded of us would simply believe this whole
thread is to create a division and weaken the voting for mod_gz. Much
like politicians will introduce a second, similar bill to confuse and
divide supporters and then crush both bills. But of course that wouldn't
happen here, now would it? :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-05 Thread Peter J. Cranstone

After 3-4 years we know exactly how you work.


Peter

-Original Message-
From: Rodent of Unusual Size [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, September 05, 2001 11:58 AM
To: [EMAIL PROTECTED]
Subject: Re: [PATCH] Add mod_gz to httpd-2.0


[EMAIL PROTECTED] wrote:
 
 Ian... are you a committer?
 What do you say about adding ZLIB to Apache source ASAP.
 Yea or nay?

This only demonstrates your non-understanding of how we work, and/or how
to work with us.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!



RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-05 Thread Peter J. Cranstone

I suppose the only thing we can do is contribute. Kevin has: mod_gzip
was released under an ASF license which was approved by the ASF Board.
If there is a hidden agenda there then you're better than I at spotting
it.

Mod_gzip is available for 1.3.x

It will be available for 2.x when you hit beta.

It will contain the same license as before.

There are no patents, TM's or anything else associated with it.

We will continue to support both versions.

Now tell me where the hidden agenda is. 

If it's not technical, then it's social (you just plain don't like us...
Not a problem) or political (the powers that be don't like us... Again
not a problem)

From a political standpoint I'm pissed that Covalent Technologies can
cut a deal with Compaq for the new Compaq Apache server (wonder if it
will ship with or without compression; details are tough to find on this
whole deal). But you know what, more power to Ryan and his crew for
doing something like that. Did I ever see a vote for something like
that? No... I even checked the ASF minutes... Nothing since February.
Whatever.

This whole conversation is moot; include, exclude, revoke, whatever -
mod_gzip will always be available from Kevin and me, and we will support
it.

If you don't include it, all it means is another click to our website.

Later...


Peter



-Original Message-
From: Rodent of Unusual Size [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, September 05, 2001 12:20 PM
To: [EMAIL PROTECTED]
Subject: Re: [PATCH] Add mod_gz to httpd-2.0


Peter J. Cranstone wrote:
 
 After 3-4 years we know exactly how you work.

Oh?  Then what is the explanation for Kevin publicly soliciting an
individual to do something that recent discussion has shown the group
considers moot?

Regardless of facts, it is perception that matters.  Not speaking for
anyone else, my perception of the practises in which you and Kevin have
seemingly engaged makes me personally wary and unwilling to take
anything you write at face value.  Little things, like Kevin's post just
now, and the multiplicity of 'I'm not with RC' mail origins a couple of
years ago, and the overall tenor of your posts..

I try very hard to keep an open mind; when I committed to you to get you
a session at ApacheCon to talk about the generic content compression issue,
I meant it -- but I was overruled 4 to 1. Despite my best efforts at
open-mindedness, something about your collective tactics and polemic
keeps making me want to close my mind against you.  And I suspect (but
do not know) that others have the same perception, which may have been
the cause of that 4-1 vote.

Most people I take at face value -- but you seem to change positions so
much that I feel I cannot but suspect you of having a hidden agenda.
Maybe you do not, and maybe you do -- and maybe it is no more than
trying to get RC's package into the Apache distribution because of the
marketing bulge that would give RC.  But.. maybe it is more than that.
I cannot tell, and you have not made it easy to tell, and I am not sure
I would blindly accept it if you did.

This is not technical, this is social and political.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-05 Thread Peter J. Cranstone

Guys,

Conversation is over. I have nothing more to add. This whole
conversation is degenerating into meaningless nonsense. 

Someone else can carry the thread.


Peter

-Original Message-
From: Thomas Eibner [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, September 05, 2001 2:21 PM
To: [EMAIL PROTECTED]
Subject: Re: [PATCH] Add mod_gz to httpd-2.0


Okay, I'll bite.

On Wed, Sep 05, 2001 at 01:46:55PM -0600, Peter J. Cranstone wrote:
[Snip: nothing that hasn't been said in this thread before]

 If it's not technical, then it's social (you just plain don't like 
 us... Not a problem) or political (the powers that be don't like us...

 Again not a problem)
 
 From a political standpoint I'm pissed that Covalent Technologies can
 cut a deal with Compaq for the new Compaq Apache server (wonder if it 
 will ship with or without compression (details are tough to find on 
 this whole deal). But you know what, more power to Ryan and his crew 
 for doing something like that. Did I ever see a vote for something 
 like that, no... I even checked the ASF minutes... Nothing since 
 February. Whatever.

Why are you dragging this into the discussion? I can't see that it has
anything to do with it. Anyone else seeing this as a bad thing for 
Apache?

I don't see why they shouldn't be allowed to do this, anyone should be
able to do this, even your company. But do you have the expertise?

Looking at the license of Apache, it doesn't sound like they
wouldn't be able to do so, as long as they state, as written in the
license: "This product includes software developed by the Apache Group
for use in the Apache HTTP server project (http://www.apache.org/)."
Which I am quite sure they will; Covalent will probably use every chance
they get to promote Apache.

The reason why there might not be more information on this deal than
what Covalents website gives[1] might be that the rest is to be worked
out?

When I heard this I was kind of happy for Apache, 'cause it can only be
a good thing if Covalent gets a deal like this. More money makes it more
likely that Ryan, Randy, Doug, William, etc. will keep up their very good
work on Apache.

my $cent = 2;

[1] http://www.covalent.net/company/press/news-20010828.php

-- 
  Thomas Eibner




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-05 Thread Peter J. Cranstone

 If somebody does find that name as a product anyplace, please let me
know ASAP.

It was on a recent CNET release:
http://news.cnet.com/news/0-1003-200-6963955.html

"Compaq Computer has signed a deal with Covalent Technology to jointly
develop and market Covalent's Apache Web server software, the companies
plan to announce Monday."


-Original Message-
From: Ryan Bloom [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, September 05, 2001 2:27 PM
To: [EMAIL PROTECTED]; Rodent of Unusual Size
Subject: Re: [PATCH] Add mod_gz to httpd-2.0



  From a political standpoint I'm pissed that Covalent Technologies 
  can cut a deal with Compaq for the new Compaq Apache server (wonder 
  if it will ship with or without compression (details are tough to 
  find on this whole deal).

 This is news to me, and certainly no permission has been given to 
 either Compaq nor Covalent to call anything a 'Compaq Apache server.'

 I am on the ASF board and I can tell you this has not come before us.

I should point out that AFAIK, Compaq Apache server is not a product
name that I have ever heard before.  A quick look at Compaq's web site
also does not come up with that name anywhere.

If somebody does find that name as a product anyplace, please let me
know ASAP.  However, Covalent knows very well what we can and can not
call products, so I can't imagine that we would use that name.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-05 Thread Peter J. Cranstone

Ken,

Kiss my ass... I have work to do. If you want to continue the conversation,
take it offline; you know where I am.


Peter

-Original Message-
From: Rodent of Unusual Size [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, September 05, 2001 2:42 PM
To: [EMAIL PROTECTED]
Subject: Re: [PATCH] Add mod_gz to httpd-2.0


Peter J. Cranstone wrote:
 
 Conversation is over. I have nothing more to add. This whole 
 conversation is degenerating into meaningless nonsense.
 
 Someone else can carry the thread.

This clever technique of ducking out of the conversation rather than
answering pointed questions is just *so* endearing, Peter. And it is one
of the tactics to which I alluded earlier as making me wary of your
motives.  You bring something up, we question you about it, you say the
conversation is meaningless. You made remarks, statements, and
allegations -- stand up to them.

As for 'nothing more to add' -- well, you could add the answers to the
questions you have been asked..

But you do have one thing partly right, IMO -- trying to converse with
you seems to frequently be an exercise in futility.  That is a social
issue, and if the rest of the group cannot have a reasonable
conversation with a module developer, no technical merit of the module
is going to overcome the irritation and frustration the rest of the
group is going to experience if it gets included.

So, in short, if you have any interest in mod_gzip being included, stop
behaving like an ass and *converse*.  'Conversation over,' forsooth!
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!



RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-03 Thread Peter J. Cranstone

Jim,

1.  There are no patents on any of the technologies contained within
mod_gzip. Neither Remote Communications, HyperSpace Communications,
nor Kevin or I have any patent coverage in this module.

2.  Kevin has already covered the licensing issue in detail. (see
previous threads)

3.  Mod_gzip was released under the Apache style license and you
are free to include it.

 I see no real blocks to the ASF seriously looking at adding the code.

I agree. We've worked hard to release and support a module which
Apache users will find useful. It's coming up on a year now and people are
still downloading it.

If the issue is debug code we can always upload a copy which will be
about 90% smaller but tougher to understand for the new user. You could
use this with Apache distributions and we could carry the full debug
version on our site.

Regards


Peter

-Original Message-
From: Jim Jagielski [mailto:[EMAIL PROTECTED]] 
Sent: Monday, September 03, 2001 9:49 AM
To: [EMAIL PROTECTED]
Subject: RE: [PATCH] Add mod_gz to httpd-2.0


At 12:42 PM -0600 9/2/01, Peter J. Cranstone wrote:

It's an amazing analysis of mod_gzip on HTTP traffic and includes all
different browser types. Here is what is amazing: check out the saved
column and the average savings for all the different stats... About 51%.

That's a HUGE benefit to ALL Apache users. Why wouldn't you use it?


Here are my comments regarding mod_gzip...

  1. Yes, it's incredibly useful and a worthwhile module.
  2. Re: why wouldn't you use it?? As an end-user (sys-admin) I can't
 think of any real compelling reasons why not...

But I think the question you meant was why wouldn't the ASF 'bundle' it
with Apache, and the reasons are:

  1. Patent issues:
 I seem to recall that mod_gzip was somehow patented, and with
 some words to the effect that if it's included with software, then
 the software follows suit. Before the ASF can consider the module,
 we must know *exactly* the patent and licensing aspects of the
code.
  2. ZLIB issues:
 Because mod_gzip uses ZLIB, we also need to concern ourselves of
 the nature (patent, licensing, etc...) of that as well.

If you can assure us of no viral aspects of the code (or any required
code libraries of mod_gzip), no patent issues of any aspects of the code
(or it's supporting libraries) and no other conditions of the code
donation, then I see no real blocks to the ASF seriously looking at
adding the code.

As a side point, we really need to do a better job regarding 3rd party
modules... Of course, we can't include every 3rd party module that comes
down the path, and hopefully module authors realize that. But we do need
to make it easier for people to find them, etc...
-- 

===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
   will lose both and deserve neither




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-03 Thread Peter J. Cranstone

Guys,

Whatever you want to do. I don't care. Vote on mod_gz for 2.x and
mod_gzip for 1.3.x (we submitted the latter to the ASF on October 13, 2000).

It's really that simple; you can debate it forevermore. Kevin and I are
focused on mod_gzip 2.x, which will be released when 2.x goes solid beta.

This is my last 2 cents' worth. Time's a-wasting.


Peter

-Original Message-
From: Sander Striker [mailto:[EMAIL PROTECTED]] 
Sent: Monday, September 03, 2001 2:32 PM
To: [EMAIL PROTECTED]
Subject: RE: [PATCH] Add mod_gz to httpd-2.0


 Marc,
 
 Rather than continue this thread let's see if we can put this subject 
 into the end zone.
 
  Think then act, not the other way around.
 
 Then vote on it. Either +1 or -1 on including mod_gzip into the Apache
 distribution.
 
 Simple.

It isn't as simple as that.  You can't just call out a vote to push this
through.  Let's let this issue settle down first and focus on getting 2.0
good enough for beta.  In the meantime mod_gz and mod_gzip (if code is
posted) can be reviewed.  If there is a maintainer within the ASF, there
can be a vote if needed.  Otherwise, we don't need a vote, since if the
ASF isn't going to maintain it, it isn't going in *. This is what Marc
was trying to say as well, I think.  I don't think the majority of the
httpd developers are going to +1 putting mod_gzip in.  Remember that
they are the ones having to maintain it and ensure its quality, not you
(unless of course you join the development team).

Right now it seems like some people are getting worked up and that is
not a good environment to make decisions in.  Personally I don't even
think it is time for this decision.

Sander

*) At least that is how I understand it and would find it logical to
   be.

 Peter.
 
 PS. (If I remember rightly, I think you already voted +1 on the license
 for mod_gzip, so this should be an easy decision.)

Things are getting twisted...




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Peter J. Cranstone

Hi All,

I think Sander summed it up nicely.

-   It is part of the spec. Apache should implement the spec.
-   Almost all new browsers support IETF content encoding/transfer
    encoding. In testing with MSIE 6.x and Netscape 6.1,
    compression works fine.
-   The biggest users of mod_gzip are outside the USA. Why? Because
    they pay for bandwidth.
-   There are some large institutions (financial markets) that use
    mod_gzip to reduce HTML/JavaScript etc.
-   It supports dynamic and static content.
-   You can compress SSL (with some hacks).

A couple of other issues.

-   Netware. With a little help this can be fixed. However, the
    majority of the net runs either Apache, IIS, iPlanet or Zeus.
-   Apache 2.x is not yet stable for all platforms.
-   Debug code and size of mod_gzip... We can remove the debug code.
    It's stable enough now, after 9 months of solid testing, to pull it.

In closing... here is the biggest reason to include mod_gzip:

-- compression (transfer encoding/content encoding) is part of the HTTP
spec! --

for those who hate long emails you can stop reading here

soap box

Apache's market share dropped last month. Micro$oft IIS 5.0 is making
headway and IT INCLUDES AN ISAPI GZ FILTER. It happens to be a pig and
it does not support compressed POST transactions (mod_gzip does) and it
has issues compressing Javascript. But bottom line, Microsoft is
supporting the standard and even though the first pass is rough it will
get better. Which means that if people figure out what European users
have been saying for nearly a year - that compressing HTML etc. really
makes a difference - then Apache needs to embrace the light. Mod_gzip was
released with an Apache style license with this thought in mind. The
writing is on the wall, if Micro$oft sees a benefit to adding
compression then it's only a matter of time before everyone is demanding
it be there. My thought is that it would be better for Apache to be
first rather than playing catch up.

On a personal note: Kevin and I have been on this forum long enough to
know what the rules are. We released mod_gzip under an Apache style
license for one reason. So Apache would benefit. Sure Kevin is the
author and has continued to do an incredible job supporting the code,
but now others have joined the mod_gzip forum and have taken up the
challenge. On October 13th 2001 it will have been a year since the code
came out. It has not undergone any changes since March 2001 and is now
considered stable for Apache 1.3.x users. The 2.x version is only
waiting for one thing which is *beta*. What the server is stable enough
to run for months and users are upgrading to the new version we will
release mod_gzip for 2.x under exactly the same license as 1.x version.

end

Regards


Peter


-Original Message-
From: Sander Striker [mailto:[EMAIL PROTECTED]] 
Sent: Sunday, September 02, 2001 6:12 AM
To: [EMAIL PROTECTED]
Subject: RE: [PATCH] Add mod_gz to httpd-2.0


Hi,

From what I have seen on the list, I am on the +1 side of
adding mod_gz(ip) to the distribution.  Of course, my vote doesn't count
since I don't have httpd commit.

I find the following arguments convincing (summarized):

 - The gzip content encoding is part of the HTTP spec.
 - Most clients support gzip transfer coding.
 - It is a real solution to the problem of network bandwidth
   being the limiting factor on many heavily-loaded web servers
   and on thin-piped clients.
 - It makes the compression transparent to the admin of the
   site and allows for dynamically generated content (which
   can grow quite large) to be compressed as well.

I haven't seen anything that holds up on the negative side yet.

Sander