On Tue, Aug 21, 2001 at 01:09:08PM -0700, Justin Erenkrantz wrote:
On Tue, Aug 21, 2001 at 01:05:04PM -0700, Doug MacEachern wrote:
anybody have the configure fu to leave -O2 out of the build if
maintainer-mode is enabled? or know of another way i can turn off -O2?
I do:
CFLAGS=-g
Please see my responses to Lars. I think if you read carefully, you will
agree that the prefix rule is one way, and it doesn't allow the server to
return en for a client that requests en-us. I'm not going to argue that
this is what makes the most intuitive sense. However, we do need to
If the user says, --enable-modules=all, then the user has said, I want
ALL of the modules in the server. We have just disabled that, because
now even if I ask for all modules, I won't get them all. That is bad.
No, it means compile all of the modules available for that platform.
It has
I disagree. I specifically used --enable-module=all earlier today, because I
wanted mod_ssl. I was VERY surprised to find that it wasn't compiled. Now,
I understand that mod_ssl isn't enabled using that option. But I believe
that is a big mistake.
Not really. SSL is still illegal in
On daedalus, that ends up driving mod_mime_magic for many files every
single time a browser accesses a distribution directory. This generates
tons of overhead, including gunzip'ing at least part of every .gz file
in the directory. I suppose we should try to shake out bugs in m_m_m,
but we
The sensible thing would be to stop putting so many implementation
assumptions throughout the code. If we separated the act of checking
the value from the nature of the config data structure, then we could
have look-up routines that acted on a list of layered configs instead
of copying each
r->content_type is NULL. I don't know why it is NULL at this point.
All I can say is that the code surrounding that statement SUCKS and
it has been there for over a year. Someone must have fixed another
bug that was hiding this one.
Roy
On Sat, Aug 11, 2001 at 03:12:29PM -0400, Greg Ames
On Mon, Aug 13, 2001 at 05:09:42PM -, [EMAIL PROTECTED] wrote:
bnicholes 01/08/13 10:09:42
Modified: src/include httpd.h
Log:
Redefined ap_http_method(r) to ap_os_http_method(r) for NetWare so that
we can appropriately reconstruct the URL in ap_construct_url() based on a
On Mon, Aug 13, 2001 at 03:20:42PM -0700, Ryan Bloom wrote:
Wild guess here, but look at line 827, and all the rest of the if statements
around it. Shouldn't all of those
((type = exinfo->forced_type)))
be
((type == exinfo->forced_type)))
I would bet that if they were, we
On Mon, Aug 13, 2001 at 08:26:12PM -0400, Bill Stoddard wrote:
I was just able to recreate this on my Windows machine. Will post an analysis (and
perhaps
a fix) later on.
I have a potential fix but no ability to recreate the fault. ;-)
Given the butt-ugly nature of the current code, I'll
On Tue, Aug 07, 2001 at 04:41:38PM -0500, RCHAPACH Rochester wrote:
Is directory indexing supposed to work in <Location ...> containers? The
doc doesn't really say either way. We're having problems getting things
like IndexOptions, AddIcon..., AddAlt..., ReadmeName to work when defined
within
On Wed, Aug 01, 2001 at 03:09:53AM -0600, Jerry Baker wrote:
Now trying to access a page via https with mod_tls loaded causes Apache
to crash with the following call stack:
Better call stack.
memcpy(unsigned char * 0x00571e98, unsigned char * 0x8cc832c0, unsigned long 766)
line 171
On Mon, Aug 06, 2001 at 10:42:59PM -, [EMAIL PROTECTED] wrote:
wrowe 01/08/06 15:42:59
Modified: modules/mappers mod_negotiation.c
Log:
Thanks goes to Manoj, while commenting on another issue, for triggering
this idea. If we find files matching (e.g.
If the file with the unrecognized extension did not exist, would the
result be a 404 or a 200? If it would have been 200, then 500 would be
a reasonable error response. If it would have been 404, then 404 is the
only reasonable response --- an admin can look at the error_log to find
out why.
Understand, this isn't a theoretical concern for me. I have modules that
walk the scoreboard on every request. They are looking to determine what
each of the other workers is doing.
Requiring any locking to walk the scoreboard is a non-starter.
Well, that's bizarre. Doing that in a
I'll be committing this tonight. The only change I will be adding, is that
the macro is going to be capitalized, because all macros should always be
capitalized.
No, that is wrong. A macro that is taking the place of a past or potentially
future function call should always be lowercase,
WTF is this supposed to mean: APACHE_2_0_22_dev ???
If we are going to keep a running marker on what appears to be the stable
revision set, then call it something sensible like APACHE_STABLE or
APACHE_BEST. The version number only has meaning for versions intended
for release. That way things
On Fri, Jul 27, 2001 at 04:14:00PM -0400, Bill Stoddard wrote:
There another bug lurking in mod_asis (reported by Ken Bruinsma in IBM).
We are creating the file bucket with file offset of 0. Problem is that
we have already read in part of the file (the headers) a bit earlier.
Need to give
Did you know that by two lines in CVSROOT/modules
apr and apr-util can be checked out automatically
to srclib when httpd-2.0 is checked out?
Add these two lines to CVSROOT/modules:
httpd-2.0 httpd-2.0 &httpd-2.0/srclib
httpd-2.0/srclib -d srclib apr apr-util
Figured as much when I had posted the message. Are there any httpd
developers without apr privileges actually?
Yep, most of them (at least until they ask for it).
Maybe the construct is something for the public cvs?
That's a good idea. I need to get root to fix the public cvs config
BTW ... until it goes to http://www.apache.org/dist/ it isn't so much as an
alpha, by Roy's scheme, correct? Or did I miss something? Did we ever really
document the acceptance and steps for tag - alpha - beta - release on the
new schema?
Alpha means a packaged release to developers. Beta
Hey, what's going on with the retagging? There exists on developer dist:
-rw-r--r-- rbb httpd 5383847 Jul 19 16:59 httpd-2_0_21-alpha.tar.Z
-rw-r--r-- rbb httpd 466 Jul 19 16:59 httpd-2_0_21-alpha.tar.Z.asc
-rw-r--r-- rbb httpd 3226188 Jul 19 16:59
I don't have an issue with retagging things before a notice gets sent to
testers to download a tarball, or with retagging the set of config files
needed to build the binary install, but in all cases the previously tagged
tarball should be deleted as soon as there is a perceived need for
As these files never left http://dev.apache.org/dist for
http://www.apache.org/dist
I don't see an issue with using the 2.0.21 identifier.
Disagreement?
I think it is okay for this release.
Roy
I have a question regarding unsupported methods in LIMIT directives.
Let's say that you could limit arbitrary methods that different modules
have implemented. For this example, a module has defined a new method FOO,
but no module handles method FOO2:
In the config file:
<Limit FOO>
It doesn't logically follow that just because the server has a limit
directive with that method name, it necessarily implements that method.
There is no following to the logic -- it is a definition.
In Apache httpd, LIMITable == implemented.
The only time 501 is preferred is if the
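The configuration under discussion might look like the following sketch; FOO is the example method name from the message, and the auth directives are placeholders:

```apache
<Location /foo-api>
    <Limit FOO>
        Require valid-user
    </Limit>
</Location>
```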
That architecture was explored in detail by Netscape. It isn't reliable
and slows your web server to a crawl whenever dynamic content is produced.
It should only be used for static file servers and caching gateways, and
people implementing those might as well use an in-kernel server like
The correct fix, as I see it, is to kill off the interprocess
accept lock by removing the possibility of having other processes
in a *threaded* MPM. -- justin
That architecture was explored in detail by Netscape. It isn't reliable
and slows your web server to a crawl whenever dynamic
Is it possible to create a httpd sub-project for this work? Doug, Ryan, and
Will can make sure the code is committed once the project is set up. A
separate mailing list for people working on the test harness and scripts
would allow anyone not necessarily interested in this work to ignore
ok studying the mpm threaded.c code i see that we give each thread a
sub_pool of pchild. but i think the following patch would be safe,
because each thread won't exit until it has done its own cleanup.
The last time I looked at the pool code it was bogus because clean_child_exit
assumed
Somebody just needs to commit it -- I haven't been doing it on every
change to the binary API because there were so many. I just bumped it.
Roy
They have configured their system for the general case. We are talking
about the edge case here. The user has asked for 10 processes with 25
threads each. What they are saying, is I want 250 threads. If we are
all of a sudden under heavy load, then we have to give them 250 threads,
227
if -X causes DEBUG to be defined and prefork uses that as a hint to do
ONE_PROCESS, that sounds perfectly fine to me.
Me too. +1 on the patch.
and when will unix thread debugging catch up with the 1980s? i was
debugging multithreaded programs with ease on OS/2 1.x over a decade ago.
On Thu, Jun 21, 2001 at 02:21:11PM -0700, Brian Pane wrote:
I spent some time recently profiling the latest 2.0 httpd source with gprof.
One surprising bottleneck that showed up in the results was the find_ct
function in mod_mime, which spends a lot of time in apr_table_get
calls. Using the
On Fri, Jun 01, 2001 at 03:53:47PM -0700, Greg Stein wrote:
Why hasn't the 2.0.17 tarball been moved to the public area as an alpha? For
that matter, where did 2.0.18 go?
I think the alphas should go to the public site. Sure, they aren't betas,
but they are certainly a lot newer than the
On Fri, May 25, 2001 at 07:56:28AM -0700, Brian Behlendorf wrote:
On Fri, 25 May 2001, William A. Rowe, Jr. wrote:
That would be I, and I simply routed it straight through the normal means
(sending it from [EMAIL PROTECTED], if I remember right.)
Any oddity appears to be header munging
On Wed, May 23, 2001 at 01:51:58PM -0700, Doug MacEachern wrote:
fresh update/build with the threaded mpm.. there are no headers in the
response when a handler returns an error:
% telnet localhost 8082
Trying 127.0.0.1...
Connected to mako.covalent.net.
Escape character is '^]'.
GET /
WIBNIF we reintroduced the notion of a network layer IOL? For the above
mentioned SSL API, we would insert the appropriate IOL during the
pre_connection phase. The core filter would use iol_read, iol_write, etc.
to do the network i/o. Thoughts?
How is that different from swapping the
On Tue, May 22, 2001 at 04:13:34PM -0600, Charles Randall wrote:
2) Is configure emitting conformant sh code for this example?
Probably not.
3) Is there a workaround?
Use ./config.nice instead. I don't bother with config.status myself.
Roy
It seems that only certain headers are sent on these responses, all
others are stripped. Is this a requirement of HTTP? Can anyone explain
why this is like this?
section 10.3.5 of RFC 2068 spells out exactly which headers SHOULD and
SHOULD NOT, and MUST and MUST NOT, be a part of a 304
I fixed it.
Roy
Oh, crumbs. -1. That just increases the work of people who install our
software. Never obsolete a config command without a damn good reason.
I don't mind the feature, but it needs a better config syntax and it can't
deprecate the old one. Besides, In and Out are ambiguous for HTTP.
Roy
what does a server do when it has no default listener? i.e. what's the
point again?
Without my patch, a listener is created on port 80 if none has been
configured. With my patch, no such listener is created, and the
appropriate return code is set such that the main loop is broken, the
On Thu, May 17, 2001 at 12:26:08PM -0500, William A. Rowe, Jr. wrote:
The 1.3.20 tarball is available from http://dev.apache.org/dist/
Seems this announce never went out amidst all the email delivery fooness
on Wednesday.
Schedule is to queue up the announce for first thing tomorrow, I'll
On Tue, May 15, 2001 at 09:36:47AM -0400, Bill Stoddard wrote:
At global scope I have the following config:
Port 80
Listen 8080
As documented, Apache listens on port 8080 (the Port setting is ignored and
the server will not listen on port 80). Here is the bug... If I use the %p
log
I feel that httpd should pick up the port from the Host header, if one is
present. The documentation of UseCanonicalName, as I read it, indicates this
behavior.
No. The reason we do not do this is because the security context depends
on the physical incoming port, and allowing the
In message [EMAIL PROTECTED], Roy T. Fielding writes:
On Tue, May 15, 2001 at 09:36:47AM -0400, Bill Stoddard wrote:
At global scope I have the following config:
Port 80
Listen 8080
As documented, Apache listens on port 8080 (the Port setting is ignored and
the server will not listen
Following a suggestion by a Usenet user, I'm forwarding you my findings
about an incompatibility between Apache (v. 1.3 at least) and Microsoft Web
Publishing Wizard.
[...]
OK, I have understood what happens, but I don't know how to fix it.
Briefly, WPW violates HTTP 1.1 specs in three
Fuck the freeze, just commit it. Nobody here has a right to freeze the
tree unless they intend to stay up all night working on it until it
can be unfrozen.
Roy
On Thu, May 10, 2001 at 11:12:04PM -0700, Dirk-Willem van Gulik wrote:
This fixes all warnings and bugs I know of on
I wonder if no longer running the libtool configuration that time
keeps $GCC from being set at the point where we add -DAP_DEBUG (and
gcc-specific flags) for --enable-maintainer-mode.
Oooh, thanks for finding that one -- I noticed it last night but was
too tired to figure out what was
If at any time progress is stopped because someone committed a bad
patch to the tree, the best way to fix it is to revert the patch
(unless for some reason that patch is necessary for the release).
I'd say you can revert all of the changes for isnan and ab if you
are in a hurry to get this
Of course, last-minute bug fixes have also resulted in problems
that have required re-releases in short order. So yeah, maybe freeze
isn't a good term, but generally what we mean is we plan on
tagging and rolling soon and want to reduce the risk of
problems on the present code tree so we
* checked out httpd-2.0 from scratch and now it works. I suppose that
'make distclean' is not very clean, since with 'cvs up' + 'make distclean'
the error didn't go away.
You probably didn't update srclib/apr first. Any changes to the *.m4 or
*.in files won't work until both httpd-2.0 and
Should the apr/configure.in prevent -O... options when it notes a -g?
Seems like it's an awkward reasonable default for debug / maintainer
mode.
I don't think it should default to anything, but then I am not Gnu.
I've never had any problems stepping through -O2 code, but that was
before all
Ok, got back to playing with this today: looks like the default CFLAGS
setting for autoconf (created by AC_PROG_CC) is -g -O2. That seems
like an odd combination to me, but it's as-installed on two RedHats and
a FreeBSD.
Yes, that is the standard reasonable default per the Gnu project's
On Tue, May 08, 2001 at 01:42:54PM -0400, Bill Stoddard wrote:
Is it reasonable for a client that claims to support HTTP/1.0 to -require- a
Content-Length header on all responses? The client I am working with will
discard a response if it does not have a Content-Length header. This doesn't
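For context: without a Content-Length, an HTTP/1.0 response body is delimited only by the server closing the connection, which is presumably why the client insists on the header. A minimal response carrying an explicit length:

```
HTTP/1.0 200 OK
Content-Type: text/plain
Content-Length: 13

Hello, world!
```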
I expect -Wall does, but since we don't have -Werror (why not?) I expect
no-one notices...
It can be added with
NOTEST_CFLAGS=-Werror ./configure ...
for everything except pcre (not wanted) and apr-util (coming soon).
I would have done that automatically with --enable-maintainer-mode but
Aye - shame - or do I misunderstand? I was kind of looking at mod_tls,
and hoping it would be the underlying basis for mod_ssl, so that
something like the client side of the proxy could build on mod_tls.
Or is this still the plan - and I am misunderstanding?
Yes it is, but it makes
And why does each worker grab a mutex before checking the pipe of death?
All that a worker does is check to see if there is a character on the
pipe and, if so, it tells all of the workers to exit. So why does it
matter if more than one worker gets inside that exclusion zone? *shrug*
All of this stuff about apxs is bogus right now -- does anyone feel
inclined to update apxs?
Why is it all bogus? The last time I checked, apxs worked just fine with
2.0.
It doesn't include half the symbols that have been added since 1.3 and
still thinks it is building shared
I disagree completely. Neither is the Apache Group going to get to
a point where the political disagreement becomes any better,
nor will Apache simply come with a solution within the next years.
Well then, we are screwed until some people lose their attitude problem,
or someone else comes
Once upon a time, httpd would create a global pool as the result from
alloc_init and use that pool as the parent of almost all of the other
pools (I say almost only because there is one pcommands pool that was
separate, though I don't know why).
Now, httpd tells apr to initialize itself and
A) Every version of Apache 2.0 has used these signals, so it does seem to
work.
Granted -- more importantly, I wouldn't want to have to change every site's
custom log rotation script just because we changed the signals.
B) We made a conscious decision a few months ago to move ALL MPMs to
On Thu, Apr 12, 2001 at 10:58:53AM -0400, Chuck Murcko wrote:
I'd wager this is what's broken the install now...
...
make[2]: Leaving directory `/usr/local/src/httpd-2.0/srclib/apr-util'
Making install in pcre
make[2]: Entering directory `/usr/local/src/httpd-2.0/srclib/pcre'
make[3]:
downstream is the wrong name -- see how it is used in the HTTP spec.
Every data stream has an upstream (where data is coming from) and
a downstream (where data is going to be forwarded), so every connection
consists of two streams: an upstream and a downstream. What you want
is to differentiate
r->connection is the inbound conn_rec.
The original filters conversion on mod_proxy used 'conn_rec *origin' -
how about c->origin?
Nope, you don't know it is the origin server (and origin isn't sufficient
anyway, since the user agent is the origin of a request message).
c->outbound_server
On Tue, Apr 10, 2001 at 08:45:13PM -0700, Justin Erenkrantz wrote:
Now that the dust from ApacheCon has settled a bit, any further comments on
the following patches for updating the distclean targets?
I have been working on it -- forgot to tell you that yesterday. Thanks
for the updated
That URL doesn't work for me.
Roy
On Mon, Apr 09, 2001 at 06:08:31AM -0700, Frank Carlos wrote:
Thought you guys may want to take a look at this.
A new tutorial from IBM developerWorks that introduces the Apache
administrator to the directory layouts used for a given installation.
Has anyone considered permitting a hash database (instead of, or in
addition to, the flatfile setup) for redirect directives? The advantage
would be super-fast lookups by Apache of a URL redirection target,
without being responsible for keeping the data in memory (leaving it up
to the Unix
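One existing way to get hash-backed redirects in Apache is mod_rewrite's dbm map type, which does keyed lookups against an on-disk dbm file rather than holding a flat map in memory. A hedged sketch (the paths and URL pattern are placeholders):

```apache
RewriteEngine On
# keyed lookups against a dbm file instead of a flat text map
RewriteMap redirects dbm:/usr/local/apache/conf/redirects.dbm
RewriteRule ^/r/(.*)$ ${redirects:$1|/notfound.html} [R=301,L]
```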
On Fri, Mar 30, 2001 at 05:37:57PM -, [EMAIL PROTECTED] wrote:
bnicholes 01/03/30 09:37:57
Modified: src/main util.c
Log:
no message
This is not acceptable -- a description of the change is MANDATORY
on every commit to the code.
Roy
On Tue, Mar 27, 2001 at 06:34:32AM -0800, [EMAIL PROTECTED] wrote:
I rolled the 2.0.15 tarball on Saturday, and said I would release it today
unless there were any -1's. I have heard of a few bugs through Ed
Korthoff, and Dale Ghent. Nobody has given the exact tarballs a +1, -1,
or even a
On Tue, Mar 27, 2001 at 12:12:19PM -0500, Rodent of Unusual Size wrote:
Here is a patch that will fix this, without breaking anything
else as far as I can tell. ErrorDocument 400 will be honoured
regardless of the cause of the HTTP_BAD_REQUEST condition.
For 1.3.next consideration.
Nope,
On Wed, Mar 21, 2001 at 04:29:52PM -0500, Bill Stoddard wrote:
Are pipelined requests really used?
Only in W3C libwww and benchmarks, though I wouldn't be surprised if
the latest version of MSIE's request dll uses them -- Henrik has been
at Microsoft long enough, though I don't know if he has
Opera uses W3C libwww, which uses pipelined requests to get most of
its performance improvements.
Roy
I agree, the no-freeze model just doesn't work in this environment.
The no-freeze model hasn't even been tested in this environment.
It is necessary for the code to be in a stable state in order to do
a release at any time, regardless of a freeze. At no time in the past
six months has the
If, at any time while processing the request, the URL is rewritten such
that a bogus identifier is replaced with an identifier for a valid resource,
then that request should result in a 301 response. The only exceptions
are internal subrequests for SSI and places where there is a known
browser
I find this discussion a bit odd. I am 100% supportive of Ryan making
these changes as soon as he feels like it, and preferably before the beta,
because I think they will improve maintainability. But I honestly don't
care whether the next release is called alpha, beta, or prepubescent pink:
Actually, why don't we simplify this a bit more. I have been meaning to
finish the httpd_rolling script for a while, but I haven't had time. It
should be easy to add a few steps to the beginning, something like:
cvs co httpd-2.0
cd httpd-2.0
( cd srclib; cvs co apr apr-util )
(
WTF? First of all, Webfolders is buggy because it should always be
including the trailing slash. How the brainiacs at Microsoft got that
one wrong is beyond me -- I personally explained to them that Apache
returns a 301 on any directory without a slash because those are two
different resources
That's how it was originally. It was changed to this model not long
after the original code was committed. One of the problems with using
seconds and a separate microsecond field, is that platforms other than
Unix don't have the same reliance on seconds. I believe Windows uses 100
APR needs to be written and made correct once. there will be more folks
using APR than there are needs to fix the time code within APR.
I'd certainly hope so, but that is not a very relevant metric.
There will be just as many folks, if not more, using it after it
is fixed.
time arithmetic
Are people actually using such constructs? According to RFC 2396 (and
1738), neither the scheme nor the hostname is allowed to contain escaped
characters:
RFC 2396, Appendix A:
|
| scheme        = alpha *( alpha | digit | "+" | "-" | "." )
|
| [...]
|
| host
I am having a hard time understanding why we are requiring our first beta
to be seg fault free as well. The server works. It has been running on
apache.org for at least four or five days. This is not GA code. We are
stable, and things seem to work. A beta cycle means just that. We