TestConfig-new deletes vhosts (et al)
Hi all, I'm getting problems with tests passing if I run:

    % perl t/TEST

but failing if I run:

    % perl t/TEST -start-httpd
    % perl t/TEST -run-tests
    % perl t/TEST -stop-httpd

It seems that this chunk of code, from TestConfig.pm (~ line 140), is the culprit. It deletes the section of the config file that contains the virtual host stuff, which breaks the tests which need them.

    # regenerating config, so forget old
    if ($args->{save}) {
        for (qw(vhosts inherit_config modules inc)) {
            delete $thaw->{$_} if exists $thaw->{$_};
        }
    }

Does anyone know _why_ it deletes the vhosts stuff -- is it required by something else, or was it once required but not any more? Cheers, Gary

[ Gary Benson, Red Hat Europe ][ [EMAIL PROTECTED] ][ GnuPG 60E8793A ]
Re: [PATCH] Add mod_gz to httpd-2.0
In a message dated 01-09-03 04:55:08 EDT, Henri Gomez writes...

Ryan Bloom [EMAIL PROTECTED] wrote: If you want to use gzip, then zip your data before putting it on-line. That doesn't help generated pages, but perl can already do gzip, as can PHP.

Let me share my mod_gzip user experience. I've been using it for more than 9 months on Apache 1.3 servers and never had any problems with it. It's really a great piece of code and all my end users are more than happy to get their stuff quicker. What about asking Jean-loup Gailly and Mark Adler about the leaks in the zlib library, and possible fixes? In case of severe problems, you could still add a warning for potential mod_gzip users.

Hi Henri... This is Kevin Kiley. It isn't necessary to ask Jean or Mark about leaks in ZLIB with regard to mod_gzip, or to add any 'warnings', because mod_gzip does NOT USE ZLIB. There are no 'leaks' of any kind in mod_gzip, and since it uses its own context-based control deck for all compression tasks it is 100% thread-safe. Your suggestion is a good one, but it would only apply to things that actually use ZLIB, such as Ian's 2.0 filtering demo.

And Tomcat 4.x :) Pier

Hello, Pier, happy to see you here also. Compression is a time-consuming task and I'd rather see it handled by native code instead of Java code. Of course the same thing is true for crypto operations, and that's why I was more than happy to see mod_ssl contributed to Apache 2.0 :)))

Yours... Kevin Kiley
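Kevin's thread-safety claim rests on a general principle: if every compression task owns its entire context and shares nothing mutable, no locking is needed. Here is a minimal sketch of that principle in Python, using zlib's streaming API purely as a stand-in for mod_gzip's own (non-zlib) compression engine -- the code is illustrative, not mod_gzip's:

```python
import threading
import zlib

def compress_response(body: bytes) -> bytes:
    # A fresh compression context per request -- nothing shared across threads.
    ctx = zlib.compressobj(wbits=31)  # wbits=31 selects gzip-format output
    return ctx.compress(body) + ctx.flush()

bodies = [b"hello world " * 100, b"abc" * 500, b"apache httpd " * 50]
results = [None] * len(bodies)

def worker(i: int) -> None:
    results[i] = compress_response(bodies[i])

threads = [threading.Thread(target=worker, args=(i,)) for i in range(len(bodies))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each thread produced an independent, valid gzip stream.
for body, out in zip(bodies, results):
    assert zlib.decompress(out, 31) == body
```

Because each `compressobj` carries its own dictionary and state, the workers never contend for anything -- the same property Kevin attributes to mod_gzip's per-task control deck.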
Re: [PATCH] Add mod_gz to httpd-2.0
Hi Henri... This is Kevin Kiley. It isn't necessary to ask Jean or Mark about leaks in ZLIB with regards to mod_gzip or add any 'warnings' because mod_gzip does NOT USE ZLIB.

Hi Kevin, happy to see you there :))) You're right, the gzip code is now included in mod_gzip.c and no longer relies on the external zlib library. And I didn't even notice :( Question: when did you include the gzip code in mod_gzip? I remember I had to add -lz -lm when compiling in the early days of mod_gzip.

There are no 'leaks' of any kind in mod_gzip and since it uses its own context-based control deck for all compression tasks it is 100% thread-safe.

True...

Your suggestion is a good one but it would only apply to things that actually use ZLIB such as Ian's 2.0 filtering demo. And Tomcat 4.x :) Pier

Maybe the APR team could use your code to make it available to other modules or apps :!!! Thanks Kevin, I also updated my mod_gzip RPM :
Re: [PATCH] RE: make distclean doesn't
On Sun, Sep 02, 2001 at 02:01:15PM -0700, Ryan Bloom wrote: On Sunday 02 September 2001 10:28, Jim Winstead wrote: ... it may be worth following the gnu project's lead on these targets, since they use the same names. http://www.gnu.org/prep/standards_55.html#SEC55 (for them, distclean == what is in the tarball.) +1. If we are going to use their syntax, we should also use their semantics. I will check with some other packages later today to see what they do with make distclean. Few projects have a file like config.nice, so it doesn't apply. Don't sweat your time. I've explained my reasons, Cliff and Justin seem to agree quite wholeheartedly. And I'll repeat: -1 on anything rm'ing config.nice Cheers, -g -- Greg Stein, http://www.lyra.org/
Re: cvs commit: httpd-2.0/modules/http mod_mime.c
I'm in complete agreement with Justin on this one. Add says add to me. And filters *are* additive. I wouldn't agree with Joshua's comments about tossing filter directives and relying on each module to provide their own (how would you order them?), but I think his meta-comment that this stuff is confoozled applies. A step back and a rethink is probably necessary, rather than poking and prodding at the various config directives. Cheers, -g

On Sun, Sep 02, 2001 at 09:05:50PM -0700, Justin Erenkrantz wrote: On Sun, Sep 02, 2001 at 10:49:52PM -0500, William A. Rowe, Jr. wrote: Not this way. No other mod_mime variable behaves the way you are trying. I'm not kidding about adding a Set{Input|Output}FilterByType/SetHandlerByType so when we ask folks to rely upon mime types, they can actually do so.

And that's additive? So I could do:

    SetOutputFilterByType BAR text/*
    SetOutputFilterByType FOO text/plain

As a user, I *expect* that both filters are activated. I think you make it sound like we have to do:

    SetOutputFilterByType BAR text/*
    SetOutputFilterByType BAR;FOO text/plain

Yuck, yuck, yuck, yuck, yuck. (Did I mention I think this is yucky?)

Yes... please read mod_mime.html. AddSomething is not additive, and can't be. The server config is often a nightmare to grok as things stand today. Don't make things harder by abusing fairly consistent definitions such as AddSomething or SetSomething. The inner container always overrides the outer.

As a user, I expect that to be additive *in the case of a filter*. I expect it to override in the other cases - just not this case. You can't have multiple handlers, but you can certainly have multiple filters.

So the inner needs AddOutputFilter FOO;BAR html - as of today. I suggested an entire +|- syntax as well; it was somewhat booed since existing +|- syntaxes are perceived as confusing. Here, well, I think it's necessary.

That's confusing.
I think the cleanest way is for it to be additive (with a RemoveOutputFilter to remove one from a higher level - ignoring this directive if the filter doesn't exist from a prior level). None of this is addressing filter ordering across commands yet. I said 8 months ago we've done an inadequate job of defining the use-filters syntax. I'm saying the very same thing today. Yeah, I expect that the ordering of filter execution isn't going to be right given the code we have now. -- justin -- Greg Stein, http://www.lyra.org/
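The additive-with-removal scheme Greg describes can be sketched as a merge over config scopes, outermost first. This is an illustration of the *proposed* semantics only -- RemoveOutputFilter was hypothetical at this point in the thread, and the merge function below is not httpd code:

```python
def merge_filters(scopes):
    """scopes: outermost-first list of (added, removed) filter-name lists."""
    active = []
    for added, removed in scopes:
        for name in removed:
            if name in active:          # ignore removal of never-set filters
                active.remove(name)
        for name in added:
            if name not in active:      # additive, no duplicates
                active.append(name)
    return active

# Outer scope: SetOutputFilterByType BAR text/*
# Inner scope: SetOutputFilterByType FOO text/plain
# Under additive semantics the user gets BOTH filters for text/plain:
assert merge_filters([(["BAR"], []), (["FOO"], [])]) == ["BAR", "FOO"]

# An inner RemoveOutputFilter BAR would peel off the inherited filter:
assert merge_filters([(["BAR"], []), (["FOO"], ["BAR"])]) == ["FOO"]
```

Note this sketch deliberately punts on the open question in the thread: the order filters *execute* in, as opposed to the order they accumulate in.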
RE: Memory management requirements buckets/brigades, new pools
Hi Sander. Is it possible to post patches against the most recent CVS version? That would make it much easier for people to apply them. Thanks ..Ian

On Sun, 2001-08-26 at 08:29, Sander Striker wrote: [this time including the attachment...] Hi, It seems that the memory management requirement for buckets is that they have to be able to control their own lifetime. In other words, they need to be allocated and freed on an individual basis. It seems that their lifetime is bound by the lifetime of the connection. The above leads me to believe that buckets need a free function to complement apr_palloc. Hence the attached patch that introduces apr_pfree *). I know this patch introduces some extra overhead, although not much, which could be unacceptable. OTOH, this would make it possible to use one memory management scheme throughout Apache... Maybe something to consider, maybe not. I don't even know if these are the criteria or not ;) Sander

*) patch is against the recently posted possible replacement code for pools.

-- Ian Holsman Performance Measurement & Analysis CNET Networks - 415 364-8608
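The apr_pfree idea -- individual lifetime control layered on top of pool allocation -- can be modeled with a toy pool. This is purely a sketch of the semantics Sander describes (palloc hands out blocks, pfree returns one early for reuse, pool destruction reclaims everything); it is not how APR pools or the posted patch are actually implemented:

```python
class ToyPool:
    def __init__(self):
        self.live = set()       # blocks still owned by callers
        self.freelist = []      # blocks returned via pfree, available for reuse
        self.next_id = 0

    def palloc(self, size):
        if self.freelist:                       # reuse a freed block
            block = self.freelist.pop()         # (size ignored on reuse -- toy model)
        else:
            block = ("block", self.next_id, size)
            self.next_id += 1
        self.live.add(block)
        return block

    def pfree(self, block):
        self.live.remove(block)                 # individual lifetime control
        self.freelist.append(block)

    def destroy(self):
        self.live.clear()                       # pool death still frees everything
        self.freelist.clear()

pool = ToyPool()
a = pool.palloc(64)
b = pool.palloc(64)
pool.pfree(a)                 # a bucket can now die before the connection does
c = pool.palloc(64)
assert c == a                 # freed storage was reused, not leaked
pool.destroy()
assert not pool.live
```

The "extra overhead" Sander mentions shows up here as the freelist bookkeeping on every palloc/pfree; the payoff is that long-lived connections no longer accumulate dead bucket memory until pool destruction.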
per-dir config
Are per-dir configs available before the uri-to-filename translation handler in 1.3.x, or do they apply to the translated filenames, meaning any config directives accessed by the filename translation hook can only be server-wide? And is this the same in 2.0.x? The latter would make sense to me, but I am just looking for a sanity check here to verify what I am seeing in my code. -Rasmus
Re: [PATCH] Add mod_gz to httpd-2.0
On Monday 03 September 2001 03:32, Gomez Henri wrote: You're right, the gzip code is now included in mod_gzip.c and no longer relies on the external zlib library. And I didn't even notice :( Question: when did you include the gzip code in mod_gzip? I remember I had to add -lz -lm when compiling in the early days of mod_gzip. Maybe the APR team could use your code to make it available to other modules or apps :!!!

No. APR-util has already become too much of a kitchen sink. There is no reason to include compression in APR or APR-util. If you want to have a compression library, then create a compression library. Do not try to put it into another library. Ryan __ Ryan Bloom [EMAIL PROTECTED] Covalent Technologies [EMAIL PROTECTED] --
RE: [PATCH] Add mod_gz to httpd-2.0
At 12:42 PM -0600 9/2/01, Peter J. Cranstone wrote: It's an amazing analysis of mod_gzip on HTTP traffic and includes all different browser types. Here is what is amazing: check out the saved column and the average savings for all the different stats... About 51%. That's a HUGE benefit to ALL Apache users. Why wouldn't you use it?

Here are my comments regarding mod_gzip...

1. Yes, it's incredibly useful and a worthwhile module.
2. Re: why wouldn't you use it?? As an end-user (sys-admin) I can't think of any real compelling reasons why not...

But I think the question you meant was why wouldn't the ASF 'bundle' it with Apache, and the reasons are:

1. Patent issues: I seem to recall that mod_gzip was somehow patented, with some words to the effect that if it's included with software, then the software follows suit. Before the ASF can consider the module, we must know *exactly* the patent and licensing aspects of the code.
2. ZLIB issues: Because mod_gzip uses ZLIB, we also need to concern ourselves with the nature (patent, licensing, etc...) of that as well.

If you can assure us of no viral aspects of the code (or any required code libraries of mod_gzip), no patent issues with any aspects of the code (or its supporting libraries), and no other conditions on the code donation, then I see no real blocks to the ASF seriously looking at adding the code. As a side point, we really need to do a better job regarding 3rd party modules... Of course, we can't include every 3rd party module that comes down the path, and hopefully module authors realize that. But we do need to make it easier for people to find them, etc... -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ A society that will trade a little liberty for a little order will lose both and deserve neither
RE: [PATCH] Add mod_gz to httpd-2.0
Jim,

1. There are no patents on any of the technologies contained within mod_gzip. Neither Remote Communications, HyperSpace Communications, nor either Kevin or I have any patent coverage in this module.
2. Kevin has already covered the licensing issue in detail (see previous threads).
3. mod_gzip was released under the Apache-style license and you are free to include it.

I see no real blocks to the ASF seriously looking at adding the code.

I agree. We've worked hard to release and support a module which Apache users will find useful. It's coming up on a year now and people are still downloading it. If the issue is debug code, we can always upload a copy which will be about 90% smaller but tougher to understand for the new user. You could use this with Apache distributions and we could carry the full debug version on our site. Regards Peter
Re: [PATCH] Add mod_gz to httpd-2.0
On Mon, Sep 03, 2001 at 04:40:15AM -0400, [EMAIL PROTECTED] wrote: I suggest (again) that the entire ZLIB source code package be IMMEDIATELY added to the Apache source code tree. Like... TOMORROW. It seems silly to discuss adding anything like mod_gz ( or our Enhanced ApacheBench or any built-in IETF Content-encoding support ) unless it is first determined if either the GZIP source code (GPL license) or ZLIB ( ZLIB/LIBPNG license) will ever make it into the Apache source tree. Votes, anyone? My interpretation of the ZLIB license at http://www.gzip.org/zlib/zlib_license.html leads me to believe that zlib is compatible with the ASF license. I believe OtherBill has already commented on that fact. I do not think that there are any patent issues on zlib - AFAIK, that was the point of writing zlib in the first place. I also think that we do not need to redistribute zlib in our source tree. I think it is common enough now that most OSes come with it. (I look at how we handle the OpenSSL library and think zlib falls in the same category.) If you are willing to post a version of mod_gzip for httpd-2.0 to this list, I will take the time to review it. However, I think there is an advantage to using zlib in this particular case rather than writing our own compression algorithms. -- justin
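Justin's suggestion -- lean on zlib rather than write new compression code -- fits the 2.0 filter model naturally, because zlib's streaming interface can compress a response chunk by chunk as it passes through. A sketch in Python (the chunking and function name are illustrative, not proposed httpd code; `wbits=31` selects zlib's gzip container so the output is valid for Content-Encoding: gzip):

```python
import zlib

def gzip_filter(chunks):
    """Yield gzip-encoded output incrementally from an iterable of body chunks."""
    ctx = zlib.compressobj(wbits=31)   # 31 = gzip framing around deflate
    for chunk in chunks:
        out = ctx.compress(chunk)
        if out:                        # zlib may buffer; emit only what's ready
            yield out
    yield ctx.flush()                  # end-of-stream: emit trailer + remainder

# Model a brigade: the body arrives in several pieces, is compressed in
# passing, and round-trips back to the original bytes.
chunks = [b"<html>", b"<body>" + b"x" * 1000, b"</body></html>"]
encoded = b"".join(gzip_filter(chunks))
assert zlib.decompress(encoded, 31) == b"".join(chunks)
assert len(encoded) < len(b"".join(chunks))  # compression actually saved bytes
```

The streaming shape is the point: no full-response buffering is required, which is exactly what a transfer/content filter in the 2.0 stack needs.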
Re: [PATCH] Add mod_gz to httpd-2.0
On Mon, 3 Sep 2001, Justin Erenkrantz wrote: On Mon, Sep 03, 2001 at 04:40:15AM -0400, [EMAIL PROTECTED] wrote: I suggest (again) that the entire ZLIB source code package be IMMEDIATELY added to the Apache source code tree. Like... TOMORROW. Like, no. It makes zero sense to rush into doing something just to do something without any clear concept of where it is going or what steps really need to be taken to get there. If you are willing to post a version of mod_gzip for httpd-2.0 to this list, I will take the time to review it. However, I think there is an advantage to using zlib in this particular case rather than writing our own compression algorithms. -- justin Also note that, IMO, we do _NOT_ want a mod_g* in 2.0 that has a lot of ugly support trying to make it function in 1.3. I would also like to see more support for the claims that zlib is this horrible thing that just doesn't work properly in a huge number of ways, as tested by the major internet testing companies (whatever the heck that means). In any case, it makes a whole lot more sense to use a library for supporting gzip than to stick it all as custom code into any place that needs it (eg. ab, mod_g*, etc.), even if zlib isn't the library to use.
Re: [PATCH] Add mod_gz to httpd-2.0
From: Justin Erenkrantz [EMAIL PROTECTED] Sent: Monday, September 03, 2001 12:57 PM I also think that we do not need to redistribute zlib in our source tree. I think it is common enough now that most OSes come with it. (I look at how we handle the OpenSSL library and think zlib falls in the same category.)

We don't distribute OpenSSL because it's a huge chunk of code!!! We certainly can't rely on folks having 0.9.6b installed (or even 0.9.6a, the absolute minimum to avoid some pretty significant holes, leaving a problem or two remaining.) But we aren't about to distribute that much code, we have a relationship with the maintainers (one sits on the ASF board), and _new_ crypto development still has hardships within the US. There is nothing new or novel about mod_ssl, which is why we have no problem falling under the crypto export relaxation for 'publicly available open sources'. I have no issue with dropping in the current (and httpd-maintained) zlib, returning all patches to the authors. If there are problems with threading support + leaks, we will need to fix them if we will call this 'supported'. Same as we do for pcre and expat, which aren't as firmly established as the ASF or even the OpenSSL organization. It adds some 160kb to the tarball, as distributed at zlib.org. Bill
RE: [PATCH] Add mod_gz to httpd-2.0
On Mon, 3 Sep 2001, Peter J. Cranstone wrote: Marc, It makes zero sense to rush into doing something just to do something without any clear concept of where it is going or what steps really need to be taken to get there. Here's a concept: Save bandwidth. Here's another one: it's part of the spec. Finally, how about: we released it under an ASF license.

That is irrelevant. We are talking about the process of doing something. Just because something _may_ be desirable does NOT mean the proper way to implement it is to jump into doing something tomorrow! Sheesh. The discussion is about adding support to Apache for sending output using the gzip content encoding. There are various different ways to do this, and there is not yet any clear consensus on what the best way is. There are a number of issues that have been raised that do need to be investigated before such a decision can be made. Think then act, not the other way around.

Also note that, IMO, we do _NOT_ want a mod_g* in 2.0 that has a lot of ugly support trying to make it function in 1.3.

People around the world have been using mod_gzip on Apache 1.3.x for nearly a year. Kevin has supported the product. It has been stable since March. You've abandoned the 1.3.x people for 2.x which is getting closer to beta. As soon as it is you'll have a copy of mod_gzip for 2.x and we will support it.

Sorry, that isn't how it works. When 2.0 goes beta, we want to start to _STOP_ adding features, instead of starting to add even more. We (whenever I speak for we, I mean me + my knowledge acquired over the past x years of how things are done around here) will _NOT_ release a product that is Apache 2.0 + some third party module bundled. The product will be Apache 2.0. Either mod_gzip will be an integrated part of that product in terms of development process, etc. or it won't be there at all.
A module is not integrated into Apache by some third party saying ok, we will let you have this code when we think you are ready for it, and then we will support it for you. A module is integrated into Apache by the third party becoming a part of us. I'm not sure you have shown the interest in doing so or the ability to understand how the development process works. That certainly doesn't mean we won't consider your code, if you choose to make it available at a time in the product development cycle where we still want to consider adding such functionality, but it does mean that we would take the _code_, at which point you could either find a way to participate in the ongoing Apache development process, or not. I would also like to see more support for the claims that zlib is this horrible thing that just doesn't work properly in a huge number of ways, as tested by the major internet testing companies (whatever the heck that means). Sometimes I wonder where you've been. Only sometimes? I always wonder where I've been... Mercury Interactive has about 60% of the market when it comes to Internet testing. EVERY and I mean EVERY person who is serious about benchmarking uses their Load Runner Umh... maybe you haven't noticed, but there is more to the Internet than the web. Umh... maybe you haven't noticed, but there is a lot more to benchmarking than what Load Runner can do. So by the major internet testing companies, do you mean Mercury Interactive? companies is plural, so I assume there are others. Is that true? Regardless, vague rumors are of no help. Do you have any exact reference that talks about various issues with zlib on a technical level? product... Which, guess what, has just been overhauled to SUPPORT content encoding GZIP which is being used by M$ in IIS 5.0 Guess why the overhauled it. Because people (large financial institutions) are using mod_gzip and Apache and IIS 5.0 and want to know if there is a difference in performance. 
As the link to those stats yesterday shows, there is indeed a BIG difference and, as the latest NetCraft survey shows, Apache is falling every month while IIS gains. This is not a feature, this is PART OF THE SPEC and should have been included from the get-go.

blah blah blah. It is a feature, pure and simple. There is NOTHING in the HTTP/1.1 spec that says you must send content gzipped to be unconditionally compliant with the spec. You do not advance your argument with random marketoid babble, and I think we would be much more receptive to what you are trying to say if you kept it to a technical discussion.

Apache is falling behind the curve. People want to save bandwidth. I don't really care if you include mod_gzip or not, the train has already left the station on this one. Soon it will be the majority who run compression, not the minority. If you don't believe me... Do an FTP search for sites that carry mod_gzip. You'll be surprised. Some very smart people have figured out that this is here to stay. I'm glad that Apache has proven itself to be
Re: [PATCH] Add mod_gz to httpd-2.0
On Mon, Sep 03, 2001 at 12:22:33PM -0700, Justin Erenkrantz wrote: My point is that almost every OS comes with a copy of zlib now. We can't expect most people to have pcre and expat, but I think we can with zlib though. I'd rather not build zlib if we didn't need to. The exception here is probably Win32 (which is why I think you want the source). -- justin How many people compile Apache on Win32 themselves anyway? It should be enough if the person that creates the Apache Win32 binary has zlib installed right? So it really shouldn't be that big of a problem? Of course the sources could be bundled with a guide on how to install zlib on Win32 if you really wanted it anyway.. -- Thomas Eibner http://thomas.eibner.dk/ DnsZone http://dnszone.org/ mod_pointer http://stderr.net/mod_pointer
Re: [PATCH] Add mod_gz to httpd-2.0
On Mon, Sep 03, 2001 at 01:37:38PM -0600, Jerry Baker wrote: Of course, if it's made as easy as dropping the zlib dist into /srclib (like OpenSSL), then it doesn't matter to me. Oh, I see Makefile.win now. Yes, the Unix build doesn't do that, but for Win32, I bet this is a reasonable solution. If we need to replace zlib, we just check our zlib into srclib. Works for me. -- justin
RE: [PATCH] Add mod_gz to httpd-2.0
Guys, Whatever you want to do. I don't care. Vote on mod_gz for 2.x and mod_gzip for 1.3.x (we submitted this to the ASF last October, 13 October 2000). It's really that simple; you can debate it for evermore. Kevin and I are focused on mod_gzip 2.x, which will be released when 2.x goes solid beta. This is my last 2 cents' worth. Time's a-wasting. Peter

-Original Message- From: Sander Striker [mailto:[EMAIL PROTECTED]] Sent: Monday, September 03, 2001 2:32 PM To: [EMAIL PROTECTED] Subject: RE: [PATCH] Add mod_gz to httpd-2.0

Marc, Rather than continue this thread let's see if we can put this subject into the end zone. Think then act, not the other way around. Then vote on it. Either +1 or -1 on including mod_gzip into the Apache distribution. Simple.

It isn't as simple as that. You can't just call out a vote to push this through. Let's let this issue settle down first and focus on getting 2.0 good enough for beta. In the meantime mod_gz and mod_gzip (if code is posted) can be reviewed. If there is a maintainer within the ASF, there can be a vote if needed. Otherwise, we don't need a vote, since if the ASF isn't going to maintain it, it isn't going in *. This is what Marc was trying to say as well, I think. I don't think the majority of the httpd developers are going to +1 putting mod_gzip in. Remember that they are the ones having to maintain it and ensure its quality, not you (unless of course you join the development team). Right now it seems like some people are getting worked up and that is not a good environment to make decisions in. Personally I don't even think it is time for this decision. Sander

*) At least that is how I understand it and would find it logical to be.

Peter. PS. (If I remember rightly I think you already voted +1 on the license for mod_gzip so this should be an easy decision)

Things are getting twisted...
RE: IncreaseStartServers
Don't you think that is too much??

[Mon Sep 3 17:32:12 2001] [error] server reached MaxClients setting, consider raising the MaxClients setting

In my httpd.conf:

    MaxClients      256
    StartServers    300  (it was 5, I increased it just for testing)
    MinSpareServers 20
    MaxSpareServers 800

I can see in my access_log that one of my virtual domains is getting a lot of accesses, but it looks normal... what else now?? Tks for your help. Daniel

-Original Message- From: Justin Erenkrantz To: [EMAIL PROTECTED] Sent: 3/9/2001 17:28 Subject: Re: IncreaseStartServers

On Mon, Sep 03, 2001 at 05:16:33PM -0300, Daniel Abad wrote: What does it mean?? Is it an attack?? [Mon Sep 3 17:04:22 2001] [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers), spawning 16 children, there are 0 idle, and 35 total children

What this means is that Apache is detecting that it doesn't have enough children to service all incoming requests. Therefore, it is increasing the number of children to handle the load. I would look at your access logs or at mod_status (with ExtendedStatus enabled) to see what URLs are being requested. It may be an attack, or just that you have been /.ed. =-) -- justin
RES: IncreaseStartServers
So, what do you suggest?

-Original Message- From: George Schlossnagle [mailto:[EMAIL PROTECTED]] Sent: Monday, 3 September 2001 18:52 To: [EMAIL PROTECTED] Cc: 'Justin Erenkrantz' Subject: Re: IncreaseStartServers

If you set StartServers to 300 but MaxClients (which should probably be called MaxServers) to 256, how will that ever be satisfied? You'll always reach MaxClients immediately on startup.
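George's observation can be made concrete with a small consistency check over the quoted settings. The rules below paraphrase prefork behavior as commonly documented for Apache 1.3; the function itself is illustrative, not httpd code:

```python
def check_prefork(start, max_clients, min_spare, max_spare):
    """Return a list of problems with a prefork child-process configuration."""
    problems = []
    if start > max_clients:
        problems.append("StartServers > MaxClients: cap hit at startup")
    if min_spare > max_spare:
        problems.append("MinSpareServers > MaxSpareServers")
    if max_spare > max_clients:
        problems.append("MaxSpareServers > MaxClients: spare target unreachable")
    return problems

# Daniel's configuration from the earlier message:
issues = check_prefork(start=300, max_clients=256, min_spare=20, max_spare=800)
assert "StartServers > MaxClients: cap hit at startup" in issues
assert len(issues) == 2   # the MaxSpareServers value can never be reached either
```

With StartServers above MaxClients, the "server reached MaxClients" error is guaranteed at startup regardless of traffic, which is why the error log looked alarming even though the load was normal.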
Re: RES: IncreaseStartServers
What were your original settings? The defaults?

On Monday, September 3, 2001, at 05:48 PM, Daniel Abad wrote: So, what do you suggest?
Re: cvs commit: httpd-2.0/modules/experimental mod_cache.c
Ryan Bloom wrote: But keeping it simple would essentially make the cache less useful. If I request a pdf file using three different browsers, the server will most likely have three different copies of the same file: one with byteranges, one with chunks, and one with neither.

This returns to the point about whether the cache should store data with the transfer encodings applied, or not. I think that the cache should store data *without* transfer encodings applied, i.e. not chunked and not byteranged. This solves your problem. If we do need to do anything clever (AKA potentially confusing) like apply a gzip transfer encoding before caching content (or applying a gunzip filter if the opposite is true), it's done within the cache_storage.c code - thus keeping potentially confusing stuff out of the Apache filter stacks. Regards, Graham -- - [EMAIL PROTECTED] There's a moon over Bourbon Street tonight...
Re: cvs commit: httpd-2.0/modules/experimental mod_cache.c
Multiple filters in the chain for each classification can exist (ordering between classifications shouldn't matter...). This way you can have a URL handled by mod_include and mod_php (but the mod_include portion can't create PHP tags and vice versa, since you can't guarantee when it will run in relation to the other). I have no clue how this relates to our code or how others see this. This is my quick, uninformed bird's-eye view. Thoughts?

This ordering seems to be a problem that we need to address. Definitely. One of the most requested features in the past has been stacked content handlers, and if we now say that we support stacked modules as long as they aren't content handlers, then a lot of people will wonder what the heck we are doing. There should be a way to specify the ordering of the content handlers in the stack. -Rasmus
Re: cvs commit: httpd-2.0/modules/experimental mod_cache.c
Justin Erenkrantz wrote:

  handler -> content-filter -> cache-filter -> transfer-filter -> network
  (default)  (mod_include,    (mod_cache)     (mod_gz{ip},       (core)
              mod_php)             ^           mod_proxy,
                                   |           byte ranges)
                          can short-cut handler and content-filters

Almost - mod_proxy is a handler, not a transfer filter (otherwise you could never cache proxy requests): Basically, there are three states of operation, answering the question "is this URL cached" with the answers yes, no and maybe. The mod_cache handler works out each state, and in each state, the filter stack is set up to look like this:

  o NO - a virgin URL:
    handler -> content -> CACHE_IN -> transfer -> core
    (mod_cache,  (mod_include,       (mod_gzip,   (mod_core)
     mod_proxy)   mod_php)            BYTERANGE)

  o YES - a cached URL:
    handler -> CACHE_OUT -> transfer -> core

  o MAYBE - a stale URL:
    handler -> content -> CACHE_CONDITIONAL -> CACHE_IN|CACHE_OUT -> transfer -> core

To sum up, the cache is always first and last (last except for transfer encodings) so ordering of content filters should not matter to mod_cache. Regards, Graham -- [EMAIL PROTECTED] There's a moon over Bourbon Street tonight... S/MIME Cryptographic Signature
RE: [PATCH] Add mod_gz to httpd-2.0
On Mon, 3 Sep 2001, Peter J. Cranstone wrote: Marc, Rather than continue this thread let's see if we can put this subject into the end zone. There are numerous unresolved issues and unanswered questions that have been brought up. The only way to get anywhere is to change them from unresolved to resolved and unanswered to answered and go from there. Think then act, not the other way around. Then vote on it. Either +1 or -1 on including mod_gzip into the Apache distribution. thinking == discussing the issues and coming to a conclusion acting == voting based on that conclusion and then implementing the results of the vote Note again: think _then_ act. There isn't even any code to vote on. Simple. Ok, so what you are saying by not wanting to continue discussion on the issues raised or to answer the questions presented to you is that you aren't interested in working with us to contribute a 2.0 mod_gzip to Apache. That is, of course, a decision that you are free to make. But make sure you understand that it is _you_ deciding this, not anyone else. Alright then, end of thread.
[PATCH] pre-merge of per-dir configs to speed up directory_walk
This is an updated version of a patch that I posted a while back. I've cleaned up my code a bit and modified it to work with the latest version of mod_include. This patch attempts to pre-merge all of the non-regex, non-.htaccess per-directory configuration structures at server startup, so that directory_walk can use the pre-merged values. The motivation for this is to eliminate expensive dir_merge functions during request processing. Notes: * The pre-merge logic depends on dir_merge functions respecting the constness of the 'base' config that they're passed. If we later need to support dir_merge functions that modify the base in the 'same pool' case (like Bill Rowe was investigating for mod_mime), it's possible to do so by giving each pre-merged config its own pool. * The comments in core.c describe the pre-merge algorithm and the design tradeoffs that I selected. Does anybody have time to review this code? Thanks, --Brian

Index: include/http_core.h
===
RCS file: /home/cvspublic/httpd-2.0/include/http_core.h,v
retrieving revision 1.51
diff -u -r1.51 http_core.h
--- include/http_core.h 2001/08/30 05:10:53 1.51
+++ include/http_core.h 2001/09/03 23:19:34
@@ -488,6 +488,14 @@
     char *access_name;
     apr_array_header_t *sec_dir;
     apr_array_header_t *sec_url;
+
+    /* Pre-merged per-dir configs */
+#define NUM_PRE_MERGE_DIRS 6
+    struct {
+        const char *dirname;
+        ap_conf_vector_t *conf;
+    } pre_merged_dir_conf[1 << NUM_PRE_MERGE_DIRS];
+
 } core_server_config;

 /* for http_config.c */
Index: server/core.c
===
RCS file: /home/cvspublic/httpd-2.0/server/core.c,v
retrieving revision 1.58
diff -u -r1.58 core.c
--- server/core.c 2001/08/31 13:45:16 1.58
+++ server/core.c 2001/09/03 23:19:36
@@ -3352,6 +3352,166 @@
     ap_set_version(pconf);
 }

+
+static int is_possible_predecessor(const char *dir1, const char *dir2)
+{
+    char c1, c2;
+    while ((c1 = *dir1++)) {
+        c2 = *dir2++;
+        if (c1 != c2) {
+            if ((c1 == '*') || (c2 == '*')) {
+                while ((c1 = *dir1) && (c1 != '/'))
+                    dir1++;
+                while ((c2 = *dir2) && (c2 != '/'))
+                    dir2++;
+            }
+            else {
+                return 0;
+            }
+        }
+    }
+    return 1;
+}
+
+
+/* Pre-merge the per-directory configurations, to avoid the
+ * overhead of doing this for each request (see directory_walk
+ * in request.c).
+ *
+ * Background:
+ *
+ * There are 'n' per-directory configurations. The ap_directory_walk()
+ * function (server/request.c) scans through them from i=0 through i=n-1
+ * and either merges 'per-dir-config[i]' into its pending request
+ * configuration or leaves that config out (depending on whether
+ * the directory name associated with per-dir-config[i] matches the
+ * requested URI or not).
+ *
+ * It's possible to represent the set of merges done for a request
+ * as a vector of bits, V[0] through V[n-1]. V[i] is 1 if
+ * per-dir-config[i] is used for the request, or 0 if it isn't.
+ * Conceptually, pre_merge_per_dir_configs() is responsible for
+ * pre-building the final per-dir configuration associated with
+ * each possible value of this bit vector. Given this pre-merged
+ * config structure, directory_walk() can just look up the
+ * end result instead of actually doing all of the merging at
+ * request time.
+ *
+ * In the general case, it's not feasible to precompute all
+ * permutations of V, because there are O(2^n) possibilities.
+ * Thus pre_merge_per_dir_configs() computes the pre-merge
+ * for the first NUM_PRE_MERGE_DIRS directories. It still
+ * uses O(2^m) storage, but m is constrained to a small value.
+ * As an additional space optimization, the function skips
+ * impossible permutations (e.g., if you have <Directory>
+ * blocks for /news and /downloads/pc, their configs can't
+ * ever be merged).
+ */
+static void pre_merge_per_dir_configs(apr_pool_t *p, server_rec *s)
+{
+    core_server_config *sconf;
+    apr_array_header_t *sec;
+    int nelts;
+    ap_conf_vector_t **elts;
+    int num_bits;
+    int i;
+    int max;
+    int mask;
+    int threshold;
+    int j;
+
+    sconf = ap_get_module_config(s->module_config, &core_module);
+    sec = sconf->sec_dir;
+    nelts = sec->nelts;
+    elts = (ap_conf_vector_t **)sec->elts;
+
+    num_bits =
+        (sec->nelts < NUM_PRE_MERGE_DIRS) ? sec->nelts : NUM_PRE_MERGE_DIRS;
+    max = (1 << num_bits);
+
+    sconf->pre_merged_dir_conf[0].conf = s->lookup_defaults;
+    sconf->pre_merged_dir_conf[0].dirname = "";
+    mask = 1;
+    threshold = 2;
+    j = 0;
+    /* i holds the value of the bit vector 'V' in the
+     * algorithm described in the comments at the top
+     * of this function. To make the code simpler, the
+     * bits are 'backwards'; the low order bit of i is V[0].
+     *
Re: [PATCH] Add mod_gz to httpd-2.0
Ryan Bloom wrote: You know what's really funny? Every time this has been brought up before, the Apache core has always said, if you want to have gzip'ed data, then gzip it when you create the site. That way, your computer doesn't have to waste cycles while it is trying hard to serve requests. I personally stand by that statement. If you want to use gzip, then zip your data before putting it on-line. That doesn't help generated pages, but perl can already do gzip, as can PHP. Neither mod_perl nor mod_php should have to do gzip, as it's a transfer encoding - it should be done transparently. The job of making mod_gzip efficient (AKA not gzipping every file on every request) is the job of mod_cache with cache_storage.c optimisation. mod_gzip should be applied to all output where possible, not just static files. Regards, Graham -- - [EMAIL PROTECTED]There's a moon over Bourbon Street tonight... S/MIME Cryptographic Signature
Re: cvs commit: httpd-2.0/modules/experimental mod_cache.c
From: Graham Leggett [EMAIL PROTECTED] Sent: Monday, September 03, 2001 11:30 AM Ryan Bloom wrote: But keeping it simple would essentially make the cache less useful. If I request a pdf file using three different browsers, the server will most likely have three different copies of the same file. One with byteranges, one with chunks, and one with neither. This returns to the point about whether the cache should store data with the transfer encodings applied, or not. I think that the cache should store data *without* transfer encodings applied: i.e. not chunked and not byteranged. This solves your problem. WTF?!? I've been letting this conversation slide (to avoid IO) but this is absurd. OF COURSE you don't apply transfer mechanics (e.g. byteranges, chunking etc) to a cache!!! They aren't reusable - they are likely meaningless to any other request. You will waste more time searching the cache than you save with occasional hits to it. A transfer encoding isn't a byterange or chunking output. It's a compression scheme, and that we _want_ to cache, to avoid the cpu overhead.

  Handler (e.g. Core/autoindex/mod_webapp etc)
    v
  Includes and other Filters
    v
  Charset Translation (transform to the client's preference)
    v
  Content Encoding (gz) (body -> large packets -> higher compression)
    v
  X cache here
    v
  Byterange
    v
  Chunking
    v
  Headers
    v
  SSL Crypto
    v
  Network I/O
Re: cvs commit: httpd-2.0/modules/experimental mod_cache.c
From: Justin Erenkrantz [EMAIL PROTECTED] Sent: Monday, September 03, 2001 5:39 PM I'll jump in here now that I have an idea what you are talking about. I think our filter classifications can be:

  handler -> content-filter -> cache-filter -> transfer-filter -> network
  (default)  (mod_include,    (mod_cache)     (mod_gz{ip},       (core)
              mod_php)             ^           mod_proxy,
                                   |           byte ranges)
                          can short-cut handler and content-filters

And we don't proxy proxies??? mod_proxy (viewed as a handler, or origin of a request response body) is most definitely cached. gzip can be cached. Your 'transfer filter' category is _way_ too broad.
Re: [PATCH] Add mod_gz to httpd-2.0
On Monday 03 September 2001 11:36, William A. Rowe, Jr. wrote: From: Justin Erenkrantz [EMAIL PROTECTED] Sent: Monday, September 03, 2001 12:57 PM I also think that we do not need to redistribute zlib in our source tree. I think it is common enough now that most OSes come with it. (I look at how we handle the OpenSSL library and think zlib falls in the same category.) We don't distribute OpenSSL because it's a huge chunk of code!!! We certainly can't rely on folks having 0.9.6b installed (or even 0.9.6a, the absolute minimum to avoid some pretty significant holes, leaving a problem or two remaining.) But we aren't about to distribute that much code, we have a relationship with the maintainers (one sits on the ASF board), and _new_ crypto development still has hardships within the US. There is nothing new or novel about mod_ssl, which is why we have no problem falling under the crypto export relaxation for 'publicly available open sources'. I have no issue with dropping the current (and httpd-maintained) zlib, returning all patches to the authors. If there are problems with threading support + leaks, we will need to fix them if we will call this 'supported'. Same as we do for pcre and expat, which aren't as firmly established as the ASF or even the OpenSSL organization. It adds some 160kb to the tarball, as distributed at zlib.org. I have a big problem with this. We had a hard enough time contributing patches back to MM. The only reason we keep expat and pcre up to date is that we NEVER make any changes to them. I would be very much against adding zlib to our tree. Ryan __ Ryan Bloom [EMAIL PROTECTED] Covalent Technologies [EMAIL PROTECTED] --
Re: cvs commit: httpd-2.0/modules/experimental mod_cache.c
On Monday 03 September 2001 15:42, Rasmus Lerdorf wrote: Multiple filters in the chain for each classification can exist (ordering between classifications shouldn't matter...) This way you can have a URL handled by mod_include and mod_php (but the mod_include portion can't create PHP tags and vice versa since you can't guarantee when it will run in relation to the other). I have no clue how this relates to our code or how others see this. This is my quick uninformed bird's-eye view. Thoughts? This ordering seems to be a problem that we need to address. Definitely. One of the most requested features in the past has been stacked content handlers, and if we now say that we support stacked modules as long as they aren't content handlers, then a lot of people will wonder what the heck we are doing. There should be a way to specify ordering of the content handlers in the stack. There is. In fact, there are TWO ways to configure this. The first way is to do it in the config file. If you do: AddOutputFilter INCLUDES PHP then mod_include will always be run before mod_php. The second way is for module authors to do this. Each module has an insert_filters phase. By ordering the insert_filters phase, you can order which filter is called when. If for some reason the PHP developers always wanted to run before mod_include, then they should specify that the php insert_filters phase runs before the mod_include insert_filters phase. Ryan __ Ryan Bloom [EMAIL PROTECTED] Covalent Technologies [EMAIL PROTECTED] --
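Ryan's first option can be sketched as a configuration fragment. The extension mapping and the `FILTER;FILTER` spelling below follow later Apache 2.0 documentation and are illustrative; the filter name `PHP` is taken from Ryan's message:

```apache
# Filters run in the order listed: mod_include first, then PHP,
# for responses mapped from .shtml files
AddOutputFilter INCLUDES;PHP .shtml
```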
Re: cvs commit: httpd-2.0/server util_script.c
On Mon, 3 Sep 2001, William A. Rowe, Jr. wrote: This patch entirely blows up internal redirection and subrequests, I would expect. I'm building HEAD now in order to test it with httpd-test. Results to follow shortly. --Cliff -- Cliff Woolley [EMAIL PROTECTED] Charlottesville, VA
Re: [PATCH] pre-merge of per-dir configs to speed up directory_walk
From: Brian Pane [EMAIL PROTECTED] Sent: Monday, September 03, 2001 8:59 PM William A. Rowe, Jr. wrote: [...] About your patch - if it goes in this direction (I simply skimmed it so far) then I'd be happy to apply, but it looks like we are trying to accomplish two different things. Since the broader 'dynamic cache' schema is where we want to end up (IMHO) and will invalidate the usefulness of an additional 'static cache' (preconstructed at boot time), I'd like to pursue one chunk of code that everyone can focus energy at, rather than two sets of code with their own debugging cycles. Of course, I could be entirely off base. What are your thoughts on these two different strategies? I've definitely focused just on the static preconstruction approach, and I've specifically avoided the dynamic caching approach. Here's my rationale: 1. I'm not convinced that the dynamic caching technique can be implemented without creating a scalability problem (due to the need for mutexes around the cache). Agreed [to the mutex] - dunno if we can effect some scalability in any case by not locking the table for the duration of the update, but just for the invalidate - validate toggle. I suppose we would have to force another thread to lock while we finish constructing a given node on the cache, though. 2. From a return-on-investment perspective, the static cache looks a lot more appealing than the dynamic cache. I.e., I anticipate that a static cache will offer 80% of the performance benefit of a dynamic one for 20% of the development cost. (In particular, a static cache is a *lot* cheaper to test, because it's not vulnerable to the race conditions that make a dynamic cache tricky.) Agreed that this is simple to validate. I'll take some time to look at it, but I (imagine) that the tree walking *could* become a nightmare. I tend to think in terms of caching all cited <Directory> blocks, Alias targets, etc., which the user has identified with AllowOverride None. 
Could we implement your cache on an explicit - rather than tree-walking basis? I would think an info level warning that a given directory/alias could not be cached (due to AllowOverrides) would help the administrator troubleshoot their performance questions. Sound like a plan? Bill