Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
What would happen in the case where httpd would infer a body but no body
is found there?
 * In the case of a 'connection close': nothing, an empty body would be
   found.
 * In the case of a 'persistent connection', RFC 2616 section 8.1.2.1:
     In order to remain persistent, all messages on the connection MUST
     have a self-defined message length (i.e., one not defined by closure
     of the connection), as described in section 4.4.
   Therefore a 'persistent connection' is not allowed in this case.

Therefore it should be safe to assume that if no Content-Length and no
chunked headers are present, an optional body may follow, with a
connection close afterwards, since a 'persistent connection' MUST NOT be
present.
No, because looking for a body when no body is present is an expensive
operation.  An HTTP request with no content-length and no transfer-encoding
has no body, period:

   The presence of a message-body in a request is signaled by the
   inclusion of a Content-Length or Transfer-Encoding header field in
   the request's message-headers.
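The rule Roy quotes reduces to a trivial predicate. A minimal sketch (the helper name is hypothetical; headers are given as a name-to-value mapping, matched case-insensitively):

```python
def request_has_body(headers):
    """True if an HTTP/1.1 request carries a message body.

    Per RFC 2616 section 4.3, a request body is signalled solely by the
    presence of a Content-Length or Transfer-Encoding header field;
    nothing else (not even the method) implies one.
    """
    names = {name.lower() for name in headers}
    return "content-length" in names or "transfer-encoding" in names
```

A request with neither field has no body, so a server never needs to go looking for one.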
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
On Saturday, August 7, 2004, at 01:17  PM, André Malo wrote:
* Nick Kew [EMAIL PROTECTED] wrote:
It occurs to me that a similar situation arises with CGI and chunked
input.  The CGI spec guarantees a content-length header,
ah, no.
| * CONTENT_LENGTH
|
| The length of the said content as given by the client.
That's rather: *if* the client says something about the length, then
CONTENT_LENGTH tells about it. One should not trust it anyway, since
inflating compressed content with mod_deflate (for example) changes the
length, but changes neither the header nor the environment variable.
CGI would happen after mod_deflate.  If mod_deflate changes the request
body without also (un)setting content-length, then it is broken.  However,
I suspect you are thinking of a response body, not the request.

Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
Thanks for the great support - httpd-2.0 HEAD 2004-08-07 really fixes it.
It even provides the env variable proxy-sendchunks to select between
compatible Content-Length (default) and performance-wise chunked.
Sounds pretty complete to me.  Of course you'd need to stick to C-L unless
you *know* the backend accepts chunks.
If the client sent chunks, then it is safe to assume that the proxy
can send chunks as well.  Generally speaking, user agents only send
chunks to applications that they know will accept chunks.
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
If the client sent chunks, then it is safe to assume that the proxy
can send chunks as well.  Generally speaking, user agents only send
chunks to applications that they know will accept chunks.
The client could be sending chunks precisely because it's designed to
work with a proxy that is known to accept them.  That doesn't imply
any knowledge of the backend(s) proxied, which might be anything up to
and including the 'net in general.
Theoretically, yes.  However, in practice, that is never the case.
Either a user agent is using generic stuff like HTML forms, which
will always result in a content-length if there is a body, or it
is using custom software designed to work with custom server apps.
There are no other real-world examples, and thus it is safe to use
chunks if the client used chunks.
Also bear in mind that we were discussing (also) the case where the
request came with C-L but an input filter invalidated it.
I was not discussing that case.  The answer to that case is: don't do that.
Fix the input filter if it is doing something stupid.
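The heuristic argued for above -- send chunks to the backend only when the client itself sent chunks, otherwise keep the client's Content-Length -- can be sketched as follows (a hypothetical helper for illustration, not mod_proxy's actual code):

```python
def backend_framing(client_headers):
    """Pick the request-body framing to use toward the backend.

    If the client sent Transfer-Encoding: chunked, assume the backend
    application accepts chunks too; otherwise preserve Content-Length.
    """
    te = client_headers.get("Transfer-Encoding", "").lower()
    if "chunked" in te:
        return "chunked"
    if "Content-Length" in client_headers:
        return "content-length"
    return "none"   # no body at all
```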

Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
CGI would happen after mod_deflate.  If mod_deflate changes the request
body without also (un)setting content-length, then it is broken.
Huh? Input filters are pulled, so they run *after* the handler has been
started. And CONTENT_LENGTH (if any - it's unset for chunked as well)
still reflects the Content-Length sent by the client. So the current
behaviour is correct in all cases.
No, it is broken in all cases.  CGI scripts cannot handle chunked input
and they cannot handle bodies without content-length -- that is how the
interface was designed.  You would have to define a CGI+ interface to
get some other behavior.
A CGI script therefore should never trust Content-Length, but just read
stdin until it meets an EOF.
We cannot redefine CGI.  It is a legacy crap interface.  Input filters
either have to be disabled for CGI or replaced with a buffering system
that takes HTTP/1.1 in and supplies CGI with the correct metadata and body.

Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
A CGI script therefore should never trust Content-Length, but just read
stdin until it meets an EOF.
That is well-known to fail in CGI.  A CGI must use Content-Length.
Hmm, any pointers to where this is specified? I didn't have any problems
with this until now - but in trusting the C-L variable.
CGI doesn't require standard input to be closed by the server -- Apache
just happens to do that for the sake of old scripts that used fgets to
read line-by-line.  Other servers do things differently, which is why
reading til EOF does not work across platforms.
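The portable pattern Roy is describing -- read exactly CONTENT_LENGTH bytes and never read to EOF -- might look like this in a CGI script (an illustrative sketch; the helper name and its arguments are hypothetical, with the stream and environment passed in for clarity):

```python
def read_cgi_body(stdin, environ):
    """Read a CGI request body portably.

    Per the advice above: read exactly CONTENT_LENGTH bytes, because
    CGI does not require the server to close stdin, so reading to EOF
    can hang on some servers.
    """
    length = environ.get("CONTENT_LENGTH")
    if not length:
        return b""   # no body signalled by the server
    return stdin.read(int(length))
```

In a real script the arguments would be `sys.stdin.buffer` and `os.environ`.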
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
On the contrary!  I myself have done a great deal of work on a proxy
for mobile devices, for a household-name Client.  The client software
makes certain assumptions of the proxy that would not be valid on the
Web at large.  But the backend *is* the web at large.
But then the client is either using non-standard HTML forms or
non-standard HTTP, neither of which is our concern.  It doesn't make
any sense to code a general proxy that assumes all chunked requests are
meant to be length-delimited just because someone might write
themselves a custom client that sends everything chunked.  Those
people can write their own proxies (or at least configure them to
be sub-optimal).
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
Since the Apache server can not know if CGI requires C-L, I conclude
that CGI scripts are broken if they require C-L and do not return
411 Length Required when the CGI/1.1 CONTENT_LENGTH environment
variable is not present.  It's too bad that CGI.pm and cgi-lib.pl
are both broken in this respect.  Fixing them would be simple and
that would take care of the vast majority of legacy apps.
CGI was defined in 1993.  HTTP/1.0 in 1993-95.  HTTP/1.1 in 1995-97.
I think it is far-fetched to believe that CGI scripts are broken
because they don't understand a feature introduced three years
after CGI was done.  I certainly didn't expect CGI scripts to
change when I was editing HTTP.
I probably expected that someone would define a successor to CGI
that was closer in alignment to HTTP, but that never happened
(instead, servlets were defined as a copy of the already-obsolete
CGI interface rather than something sensible like an HTTP proxy
interface).  *shrug*
CGI is supposed to be a simple interface for web programming.
It is not supposed to be a fast interface, a robust interface,
or a long-term interface -- just a simple one that works on
multiple independent web server implementations.
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T. Fielding
On Saturday, August 7, 2004, at 05:21  PM, Jan Kratochvil wrote:
This whole thread started due to a commercial GSM mobile phone:
	User-Agent: SonyEricssonP900/R102 Profile/MIDP-2.0 Configuration/CLDC-1.0 Rev/MR4

, it sends HTTP/1.1 chunked requests to its HTTP proxy even when you
access general web sites. The chunked body is apparently created on the
fly, each chunk being a specific body element generated by a part of the
P900 code.
So stick a proxy in front of it that waits for the entire body
on every request and converts it to a content-length.  I am not
saying that it isn't possible -- it is just stupid for a
general-purpose proxy to do that (just as it is stupid to deploy
a cell phone with such a lazy HTTP implementation).
Roy


Re: [PATCH] mod_cache fixes: #9

2004-08-02 Thread Roy T. Fielding
On Monday, August 2, 2004, at 10:55  AM, Justin Erenkrantz wrote:
Avoid confusion when reading mod_cache code.  write_ and read_ often imply
network code; save_ and load_ are more understandable prefixes in this
context.
Hmm, IIRC, loading a cache means writing to it, not reading from it.
Why not just change them to cache_write and cache_read?
Or store and recall?
Kudos on the other changes -- those are some significant improvements.
Roy


Re: cvs commit: httpd-2.0 STATUS

2004-07-29 Thread Roy T. Fielding
On Thursday, July 29, 2004, at 05:58  AM, André Malo wrote:
* Mladen Turk [EMAIL PROTECTED] wrote:
William A. Rowe, Jr. wrote:
  /* Scoreboard file, if there is one */
  #ifndef DEFAULT_SCOREBOARD
 @@ -118,6 +119,7 @@
  typedef struct {
  int server_limit;
  int thread_limit;
 +int lb_limit;
  ap_scoreboard_e sb_type;
  ap_generation_t running_generation;  /* the
generation of children which
   * should still
be serving
requests. */
This definitely breaks binary compatibility.
Moving the lb_limit to the end of the struct will not break the binary
compatibility. Correct?
Not Correct. It *may* be the case. Depending on who allocates the stuff.
Then the question to ask is whether any independent modules
(those that are not installed when the server is installed)
are likely to use that structure, and how they are expected
to use it.
I'd be surprised if it were even possible for an independent
module to allocate a scoreboard struct, but it has been a while
since I looked at that code.
Roy


Re: mod_proxy distinguish cookies?

2004-05-04 Thread Roy T. Fielding
Rather just use URL parameters. As I recall, RFC 2616 does not consider a
request with a different cookie a different variant, so even if you patch
your server to allow it to differentiate between cookies, neither the
browsers nor the transparent proxies in the path of the request will do
what you want them to do :(
Well, that truly sucks. If you pass options around in params then
whenever someone follows a link posted by someone else, they will
inherit that person's options.
I do wish people would read the specification to refresh their memory
before summarizing.  RFC 2616 doesn't say anything about cookies -- it
doesn't have to because there are already several mechanisms for marking
a request or response as varying.  In this case
   Vary: Cookie

added to the response by the server module (the only component capable
of knowing how the resource varies) is sufficient for caching clients
that are compliant with HTTP/1.1.  Expires and Cache-Control are usually
added as well if HTTP/1.0 caches are a problem.
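The server-module side of what Roy describes can be sketched like this (a hypothetical helper; the Cache-Control value is an illustrative choice for dealing with HTTP/1.0 caches, not a fixed rule):

```python
def mark_cookie_variant(response_headers):
    """Mark a response as varying on the Cookie request header.

    Vary: Cookie is what compliant HTTP/1.1 caches need; Cache-Control
    is a belt-and-braces addition for older caches that ignore Vary.
    """
    response_headers["Vary"] = "Cookie"
    response_headers.setdefault("Cache-Control", "private")
    return response_headers
```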
Roy



Re: fix_hostname() in 1.3.30-dev broken

2004-03-18 Thread Roy T. Fielding
Ugg... fix_hostname() in 1.3.30-dev (and previous) is
broken such that it does *not* update parsed_uri with
the port and port_str value from the Host header.
This means that with a request like:
% telnet localhost 
GET / HTTP/1.1
Host: foo:
that the '' port value from the Host header is ignored!
When is fix_hostname() used?  If it is used anywhere other than
ProxyPass redirects, then it must ignore that port value.  To do
otherwise would introduce a security hole in servers that rely on
port blocking at firewalls.  I agree that ProxyPass needs to
know that port number, but that should be handled within the
proxy itself.
Roy



Re: 1.3 (apparently) can build bogus chunk headers

2004-03-18 Thread Roy T. Fielding
That is a common thread on http-wg.  Spaces are allowed after the
chunk-size, or at least will be allowed by future specs.  The whole
HTTP BNF needs to be revamped, eventually.
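A lenient chunk-size parser in the spirit of that thread -- tolerate trailing spaces after the hex digits and ignore any chunk-extension -- might look like this (an illustrative sketch, not Apache's actual parser):

```python
def parse_chunk_size(line):
    """Parse one chunk-size line from a chunked message body.

    RFC 2616 defines chunk-size as hex digits optionally followed by a
    chunk-extension, but some senders append spaces; strip them rather
    than rejecting the message.
    """
    size_part = line.split(b";", 1)[0]   # drop any chunk-extension
    return int(size_part.strip(), 16)
```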
Roy



Re: apr/apr-util python dependence

2004-02-19 Thread Roy T. Fielding
However I completely disagree that Python (or Perl or PHP) is
a good choice for use in build systems.
As part of the configure process, I would agree with you, but as part of
buildconf, I disagree--not everyone needs to run buildconf--only
developers, and if you're a developer, it's *really* not asking that
much to have Python on your dev box.
Sure it is.  If I wasn't so busy I would have vetoed the change on
the grounds that it causes httpd to no longer be buildable by developers
on the Cray MP.  And no, I don't care whether anyone else thinks that
is an important requirement.  Creating entry barriers is what prevents
development on new platforms that you haven't even heard of yet.
We haven't been using sh/sed/awk as our build platform because we
thought those were good languages.  I'm sorry, but being too busy to
maintain the existing scripts is no excuse for rewriting them in a
less portable language.  As soon as someone has the time to write
it in a portable language, the python should be removed.
So no... switching to a shell script would not be beneficial, as it would
cut off future capabilities.
I doubt that.  .dsp and .dsw files are just other text files
which can easily be created using sh, grep, sed, tr etc.
Ick. Ick ick ick ick ick.  "Easily" is obviously a subjective term.  Who
wants to write (and, more importantly, *maintain*) hundreds (or
thousands) of lines of /bin/sh code?  Not to mention the fact that
Python can be much more compact than /bin/sh after you hit a certain
level of complexity.
Irrelevant to the task at hand.

Anyway, I suppose that agreeing to disagree may be for the best here.
Subversion has required python to run autogen.sh for years now, and it's
been great for us.
Subversion has zero deployment when compared to httpd.  It should
be learning lessons from httpd's history, not casting it aside.
Roy



Re: [SECURITY-PATCH] cygwin: Apache 1.3.29 and below directory traversal vulnerability

2004-02-04 Thread Roy T. Fielding
-1.  Reject the request with a 400 error instead.

Roy



Re: [PATCH 1.3] work around some annoyances with ab error handling

2004-01-14 Thread Roy T. Fielding
+1, though it would probably be better to add a parameter to err
to pass errno (or 0) rather than using the global in this way.
Roy



Re: httpd 2.1 project plan vs LINK method

2004-01-14 Thread Roy T. Fielding
On Wednesday, January 14, 2004, at 01:04  PM, Julian Reschke wrote:

From...:

http://httpd.apache.org/dev/project-plan.html

- Implementation of the LINK Method

Can anybody tell me what this is?
See RFC 2068, section 19.6.1.2 and 19.6.2.4
(you might want to look at the description of PATCH as well).
Just ignore the project-plan page -- it hasn't been updated since 1996.

Roy



Re: Copyrights

2004-01-12 Thread Roy T. Fielding
On Saturday, January 3, 2004, at 11:10  AM, William A. Rowe, Jr. wrote:
At 06:32 AM 1/2/2004, you wrote:
[EMAIL PROTECTED] wrote:
 update license to 2004.
Why? Unless the file changes in 2004, the copyright doesn't. And, in any
case, the earliest date applies, so it gets us nowhere.
In fairness this has been Roy's practice, so let's not beat on Andre.
Roy's logic is that this is a single work.  If someone obtains a new
tarball in 2004, all of the files will be marked with 2004, as some
changes will have (undoubtedly) been made.  Old tarballs of the combined
work retain their old copyright dates.
That logic seems a bit odd to me -- we only need to change the date in
the LICENSE file for it to apply to the collection as a whole.
The reason the copyright was being updated by me within all of the
source code files was because I have traditionally been the person who
can write a perl script that can do the update without also changing
a million other things.  The logic behind doing the update had nothing
to do with copyright law -- folks were just tired of the inconsistency
and hassle of remembering to do it when a file is significantly updated.
BTW, the real rule is that the date must include the year that the
expression was originally authored and each year thereafter in which
the expression contains an original derivative work that is separately
applicable to copyright.  Since that distinction is almost impossible
to determine in practice, software folks tend to use a date range that
begins when the file was created and ends in the latest year of
publication.  And, since we are open source, that means 2004.
The main reason for doing so has more to do with ending silly questions
about whether or not to update the year than it does with copyright law,
which for the most part doesn't care.  Also, it cuts down on irrelevant
change clutter from appearing in cvs commit messages for later review
and makes it easier to make global changes to the license itself.
Roy



Re: new ETag suppression/weakening API

2003-12-15 Thread Roy T. Fielding
one of the issues that needed working out was dealing with multiple ETag
headers.  my original idea was to have ap_weaken_etag guarantee that ETag
headers would be weak.  with ETag headers entering err_headers_out via a
third party, there exists the possibility that the server would send
multiple ETag headers for a single request.  while I'm not sure if this
is actually legal, I can't find anything that says it isn't.
RFC 2616, section 3.11, BNF does not allow multiple ETag header fields.

I think you need to work on making this patch more efficient -- it is
doing too much work for an activity in the critical path of servicing
a request.  BTW, an entity tag does not identify the entity -- it merely
acts as a key for cache and range request handling.  If a filter
consistently produces the same content, then it should not modify the
entity tag (the routine that arranges the filters must do that if needed).

Roy



Re: new ETag suppression/weakening API

2003-12-15 Thread Roy T. Fielding
RFC 2616, section 3.11, BNF does not allow multiple ETag header fields.
                  ^^^^
                  ^ 14.19 + 3.11
Roy



Re: new ETag suppression/weakening API

2003-12-15 Thread Roy T. Fielding
BTW, an entity tag does not identify the entity -- it merely
acts as a key for cache and range request handling.
right.  and what I was trying to do was make it possible for
content-altering filters to handle that key a bit more intelligently than
just removing it altogether.  the situation I initially had in mind was
when a filter was bitwise altering the content but not the semantics of
it, an HTML scrubber perhaps.  in this case, it seems that allowing the
default ETag is wrong, but that removing it can be avoided (thus keeping
to the spirit that ETags should be sent if feasible).  granted, the
circumstances are probably very rare that filters would behave that way.
are you saying that weakening the ETag is the wrong behavior here?  if so
I'm kinda wasting my time (as well as everyone else's).
If the filter is tied to the URI such that every GET request on that
URI will invoke the filter, then there is no reason to weaken the tag
(and many reasons why you wouldn't want to).  If, however, the filter
is only sometimes invoked, then the filter should define its own
strong entity tag based on the original etag.  Basically, the only time
a weak entity tag should be produced is if the server is unsure about
the content actually reflecting the conditions evaluated in creating
the ETag itself (like last-modification dates that indicate the content
may have changed during the request).
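One way to follow that advice -- a sometimes-invoked filter deriving its own strong tag from the original instead of weakening it -- could look like this (the derivation scheme here is an illustrative assumption, not Apache's):

```python
import hashlib

def filter_etag(original_etag, filter_name):
    """Derive a strong entity tag for a filtered variant.

    Keyed to both the original tag and the filter's identity, so the
    filtered variant validates independently of the unfiltered entity.
    """
    raw = (original_etag + "|" + filter_name).encode("utf-8")
    return '"%s"' % hashlib.sha1(raw).hexdigest()[:16]
```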
If a filter
consistently produces the same content, then it should not modify the
entity tag (the routine that arranges the filters must do that if needed).
hmm...  I don't see how that would work given the current API.  but it
does seem like the API could be a bit better.  perhaps filters could
supply criteria that ap_make_etag can draw from when the time comes.  is
that what you had in mind?
I don't know. My original criticism of the API still holds true: a filter
cannot process the metadata separately from the data -- the metadata must
flow through the same filter chain rather than be operated upon as if
it were global variables.  Nobody has taken up that suggestion, and I
haven't had time to implement it myself.

Roy



Re: Creating HTTPD Tarballs

2003-11-16 Thread Roy T. Fielding
-1.  I'm still of the mind that _every_ release should be recreatable.
Anything we put out there is going to be at least perceived as official,
and we should take that into account.
Every release is tagged.  A tarball is not a release.  Nothing is a
release until AFTER the associated tarball has three +1 votes, at which
point it becomes a release and should be tagged as such.
Roy



Re: Creating HTTPD Tarballs

2003-11-16 Thread Roy T. Fielding
So you're basically saying that we retag a release candidate tag with the
final release tagname, when a tarball rolled from such a tag receives
three +1s for release?
I am saying that the contents of a release tarball must match the tag
of that release in cvs.  How that happens will depend on the mechanisms
used by the RM in constructing the tarball.  Tags are just a tool
to make the RM's job easier, at least until the release is approved,
after which the official release tag is needed for everyone.
Roy



Re: [DRAFT] configure documentation

2003-11-01 Thread Roy T. Fielding
at http://cvs.apache.org/~kess/programs/ you'll find a draft for a
configure script documentation. There are still some open ends - mostly
commented within the xml file - and there might be a lot of typos and
spelling mistakes, but it is ready for a review now...
It would be nice if someone could also go through the text and improve
it. I do not trust my English skills ;)
It looks great -- go ahead and commit it.  If there are any typos they
won't get fixed until after you commit.
I suggest adding some mention of config.nice as well, but that can wait.

Roy



Re: [1.3 PATCH] another ap_die() issue related to error documents

2003-10-17 Thread Roy T. Fielding
On Friday, October 17, 2003, at 12:27  PM, Jeff Trawick wrote:
For ErrorDocument nnn http://url, ap_die() will respond with a 302
redirect, and r->status will be updated to indicate that.  But the
original error could have been one that keeps us from being able to
process subsequent requests on the connection.  Setting r->status to
REDIRECT keeps us from dropping the connection later, since it hides
the nature of the original problem.

Example:

client uses HTTP/1.1 to POST a 2MB file, to be handled by a module...
module says no way and returns 413...
admin has ErrorDocument 413 http://file_too_big.html...
Apache sends back 302 with Location=http://file_too_big.html, but
since this is HTTP/1.1, Apache then tries to read the next request and
blows up (invalid method in request)...
It sends 302?  Don't you mean it does a subrequest?  I'd hope so.

Anyway, +1 to the patch.

Roy



Re: [PATCH] ErrorLogsWithVhost for Apache 1.3.28

2003-10-13 Thread Roy T. Fielding
On Tue, Jul 08, 2003 at 12:41:09AM -0400, Glenn wrote:
With the talk about a minor MMN bump, I put together this patch which
adds a flag at the end of server_rec.  This also changes ErrorLog to
a TAKE12, with an optional style of default or vhosts, where the
vhosts includes the server name and port in the error log entries.
The TAKE12 maintains backwards compatibility to existing config files.
Comments appreciated on the method(s) that would most likely get this
accepted into 1.3.28 or 1.3.29.  (global flag, server_rec addition,
other ...)  Thanks!
In general, it would require a great deal of value added to justify
a new feature in 1.3.  I don't see that here.  It certainly doesn't
justify a change to server_rec (even an append is a risky change).
A global flag is possible, but a compile-time ifdef would be sufficient
for 1.3.
Roy



Re: [PATCH] Add .svn to IndexIgnore

2003-07-19 Thread Roy T. Fielding
Patch to add Subversion .svn directories to the default IndexIgnore in
httpd-[std|win].conf.
I'd rather you explain why the first entry (.??*) is not sufficient:

-IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t
+IndexIgnore .??* *~ *# HEADER* README* .svn RCS CVS *,v *,t
It should already be hiding the .svn directories.

Roy



Re: Changes to mime.types causes warnings...

2003-07-14 Thread Roy T. Fielding
The recent changes to the mime.types file for apache 1.3 causes
mod_mime to throw warnings due to the inline comments.  It now throws a
warning each time it hits # unregistered or # invalid while parsing
the file.
Warnings?  In error_log?  Hmmm, I must have tested under the wrong log
level.  I'll delete the comments.
Roy



Re: Getting close to 1.3.28

2003-07-10 Thread Roy T. Fielding
There is one final commit which we are waiting for before the
TR of 1.3.28. It's to close a bug in one of our support
programs distributed with Apache and affects Win32 and OS/2.
Whoa, sorry, I didn't realize that we were in just-before-release mode
on 1.3.x.  Are the mime types config changes I just committed okay?
Roy



Re: cvs commit: httpd-2.0/docs/conf httpd-std.conf.in httpd-win.conf mime.types

2003-07-10 Thread Roy T. Fielding
  -#AddType image/x-icon .ico
Though it's an x-type, I'd suggest not to remove it from the default
config.  It matches the default user needs very well.
It matches the default user needs very well.
I added it to the default mime.types first, where it belongs.

Roy



+1 on public release of 2.0.47

2003-07-09 Thread Roy T. Fielding
The tarball checks out okay, verifies with signature and md5, and all
of my simple tests on OS X 10.2.6 seem to work great.
+1

I have a few corrections to make on the conf files, but those can wait
until the next release.
Roy



Re: Removing Server: header

2003-03-25 Thread Roy T. Fielding
On Saturday, March 22, 2003, at 07:15  AM, Brass, Phil (ISS Atlanta) wrote:
The point of stripping Date and Last-modified headers is that HTTP
fingerprinting tools look at things like header order, the formatting of
dates and times, etc.
So change the format and order.  Stripping them is a protocol violation.

Alternately, does anybody know why the Server, Date, Accept-Ranges,
Last-Modified, and other headers are put in last, after things like
mod_headers run?  Perhaps a better patch would be to move the code that
adds these headers to the response earlier in the code so that users can
simply use mod_headers to strip whichever ones they want, or a module
for randomizing header order could be written, etc.
They are put in last specifically to prevent them from being randomized
by buggy modules.
Roy



more FrontPage extension idiocy

2003-02-26 Thread Roy T. Fielding
The patches at http://www.rtr.com/fpsupport/ for including
FrontPage support in Apache modify the request_rec to add an
execfilename field in the *middle* of the structure, thus
blowing binary compatibility with all other Apache modules.
Move the execfilename to the end of the request_rec to avoid
the support headache, or just shoot the bastard who installed
the extensions.
Roy



Re: HTTP TRACE issues (text-only)

2003-02-24 Thread Roy T. Fielding
There is no reason to discuss this on the security or pmc lists.

Which brings us back to the start... How should we address this, umm...
concern. Seems to me the 3 options are:
1. (continue to) Ignore it.
As far as the XSS concern, I'd ignore it.  However, it is perfectly
reasonable for server owners to want to allow or disallow this thing,
provided that the default is allow.
2. Address it via documentation (and relay our
   POV regarding the risks associated)
I would include a link to the response that other person made to
the original bugtraq posting, but I don't have it.
3. Add AllowTrace enable|disable
AllowTRACE yes|no (not available in .htaccess)

The right way to implement it would be to have the input filter
retain the original message as a read-only brigade and have the
parsed headers be a data structure that simply pointed to
places in that buffer, but that is obviously not feasible for 1.3
and won't even work efficiently with 2.0.  That would allow TRACE
to be implemented in a module.
In any case, disabling TRACE will not make a site more secure.

Roy



Re: cvs commit: httpd-2.0/modules/loggers mod_log_config.c mod_log_config.h

2003-02-13 Thread Roy T. Fielding
  change optional function to return the previous writer, allowing
  multiple types of writers in the same server. (previously you could
  only have one)

  it needs a mmn bump.. sorry guys

Umm, okay, I give up... why does it need a major bump?  Would older
modules really blow up because of this change?  I'm curious if it has
something to do with the nature of the optional hook macro.

Roy



Re: cvs commit: httpd-2.0/modules/dav/main util.c

2003-01-29 Thread Roy T. Fielding
  Allow mod_dav to do a weak entity comparison function rather than a
  strong entity comparison function.  (i.e. it will optionally strip
  the W/ prefix.)

That doesn't really follow the spirit of etag validation in HTTP.
In theory, the client is not allowed to use weak etags for anything
other than cache consistency checks, so this won't help any client
that is actually HTTP-compliant.  It would be better to send a strong
entity tag in the first place.

Roy




Re: [1.3 PATCH] enhance some trace messages

2003-01-16 Thread Roy T. Fielding
First, the NETWARE part has to be above your additions.


The reason I put the NETWARE part below the first new code was because
I assumed (perhaps incorrectly) that there was no way that Apache or
library functions it called were going to mess with the value returned
by WSAGetLastError(), but possibly they might mess with errno, so by
setting errno right before calling ap_log_error() there wouldn't be any
problems.


Doesn't a file write mess with WSAGetLastError?  *shrug*
log_error_core uses a temp variable to save and restore errno, which
is why setting errno first is more reliable.


  Second, change the ap_log_error to the variable args version rather
  than using a temporary buffer and ap_snprintf.


I'm afraid you've lost me here.  What function is there to use in
place of ap_log_error()?  Somehow use ap_pstrcat() and pass the buffer
it builds to ap_log_error()?


Er, right, what Jim said -- I forgot that ap_log_error is already varargs.
The important bit was to get rid of the buf.

Roy



Re: [1.3 PATCH] enhance some trace messages

2003-01-16 Thread Roy T. Fielding
I can certainly understand that :)  Here is a new patch along those lines.

+1, but you might want to reduce the severity on those error messages
if this is actually a common occurrence.  After all, there is nothing
that the server can do about it, and the client won't be complaining,
though it is still useful info for looking at DoS attacks.

Roy




Re: Tagged the tree, one more time

2003-01-14 Thread Roy T. Fielding
Private tags are getting pretty annoying.  You should only use one
and only one private tag per RM (without a version number) and just
move it around to reflect the state of your private tree.

On a related note, I would like to remove all of the non-official
tags that are older than a few months.  There are currently 18 such
tags on httpd-2.0 alone.  Any objections?

Roy




Re: cvs commit: httpd-2.0 STATUS

2002-11-26 Thread Roy T. Fielding
On Monday, November 25, 2002, at 04:58  PM, Aaron Bannert wrote:

I guess I just didn't read that much in to it. I just want
to see us move forward without getting bogged down in
misinterpreted emails and already acknowledged mistakes,
and to do that I'm trying to stay objective (eg. a Vote).

To me this looks like the set of concerns:

1) we want 2.0 maintenance
2) we want 2.1 development
3) we want parallel development of each
4) a bad name for a combined 2.0+2.1 CVS module is httpd-2.0
5) having separate CVS modules means we lose future history
6) creating a brand new CVS module means we lose past history

(does this cover everyone's concerns?)


Those aren't concerns -- they are answers.  One recent problem
I've noted is that we have lost the art of phrasing votes so that
they don't cut across several issues at once.  The vote on establishing
separate development trees of stable and unstable versions was fine,
but none of that implied a single new repository would be created
with a variety of branches interwoven within it.  We can decide that
now in STATUS.


Therefore I'm proposing that we just keep the httpd-2.0 CVS
module we have for a little longer, eventually on some
well-in-advance forewarned flag day we rename it to something
more generic, like just httpd and then keep a readonly
artifact of the old httpd-2.0 CVS module around for posterity.


Too many issues at once.  Do we want the new repository in order
to clean up legacy stuff, or simply because having 2.0 in the name
is confusing?  In either case, httpd is the wrong name -- httpd-2
would be okay.  Working within an ancient CVS module makes sense while
directory names and the purposes of files remain essentially the same,
but people are fooling themselves if they think CVS merge will work
after a large-scale change such as async-io.  Personally, I have a
hard time keeping track of branch-based modifications, even though
I know where to look in the commit message.  Maybe we could move the
branch tag into the subject?

I don't think we have a contract with developers to maintain the
httpd-2.0 module name for eternity, though the right solution is an
alias in CVSROOT modules, not a symlink.  FYI, a symlink is *never*
appropriate under /home/cvs, for any reason, because it doubles
the committable space while at the same time bifurcating access
control, commitlogs, notices, etc.  It is better to break existing
commit access and force people to checkout a fresh tree.

But, as OtherBill suggested, my main objection was that the changes
were made without discussion, and hence without a chance for me to
point out that symlinks are bad under /home/cvs, and it seemed better
to revert that change than try to accommodate it the right way with
changes to modules, avail, and apmail.  I still prefer new modules
for 2.1 and 2.2 simply because I know the performance will be better,
but that won't be substantial for another six months or so of dabbling,
and I wasn't even planning to vote on that because I am more
interested in 3.0.

Roy




Re: FW: Older version of apache2

2002-11-26 Thread Roy T. Fielding
So you suggest initially populating old/ and then symlinking the
now-current version in the main download directory at the old/
target, instead?  It would still initially download the package
twice, and then simply unlink it later on, right?

Or what's the right approach here?


I suggest moving the old directory out of dist completely and instead
put old distributions under history.apache.org (not mirrored).  That
way the rsync'ers don't have to sync gigabytes and will simply delete
the files that are no longer worth being mirrored.

Roy




Re: karma and cvs commit messages

2002-11-23 Thread Roy T. Fielding
Since we renamed the repository to httpd from httpd-2.0 (there is
a symlink for now), the CVSROOT/avail file doesn't match
the repository name, and therefore I can't commit. Can we
fix that so I can commit to the new httpd repository directly?


Why the heck was that done?  Too many things get screwed over
when you change a module name in cvs.

-1 -- I am reverting that change to cvs.  Don't screw with this stuff
without a clear plan in STATUS, and notify apmail and cvsadmin before
screwing with the filesystem.  It would have been far more sensible
not to branch 2.0 and instead create a new module that doesn't suffer
from legacy versions.

Roy




Re: Apache 1.3 and invalid headers

2002-11-20 Thread Roy T. Fielding
Does anyone know what the behaviour of Apache 1.3 is
under the circumstances where the HTTP request or
response contains an invalid request header?

Specifically, when the Connection header contains
something other than 'close'?


There is nothing invalid about that -- connection is completely
extensible.


It appears to immediately close the connection - can
anyone confirm or deny that this is Apache's behaviour
for both requests and responses?


It does not close the connection on that basis.  What you are
probably seeing is a server that is configured with keepalive off,
in which case all connections are closed regardless of what is
received in Connection.

Roy
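The point that Connection is "completely extensible" — a comma-separated list of arbitrary tokens, not just "close" — can be sketched with a standalone, case-insensitive membership test. This is an illustrative helper (`connection_has_token` is hypothetical, not httpd's actual parser):

```c
#include <ctype.h>
#include <string.h>
#include <strings.h>

/* Return 1 if `token` appears in a comma-separated Connection
 * header value, comparing case-insensitively (sketch only). */
static int connection_has_token(const char *value, const char *token)
{
    size_t tlen = strlen(token);
    const char *p = value;

    while (*p) {
        while (*p == ',' || isspace((unsigned char)*p))
            ++p;                     /* skip separators */
        const char *start = p;
        while (*p && *p != ',')
            ++p;                     /* end of this token */
        const char *end = p;
        while (end > start && isspace((unsigned char)end[-1]))
            --end;                   /* trim trailing space */
        if ((size_t)(end - start) == tlen &&
            strncasecmp(start, token, tlen) == 0)
            return 1;
    }
    return 0;
}
```

A server that only ever tests for the literal string "close" would mishandle perfectly legal values such as "Keep-Alive, TE".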




Re: workaround for encoded slashes (%2f)

2002-10-30 Thread Roy T. Fielding
Your patch will simply let the %2F through, but then a later section
of code will translate them to / and we've opened a security hole
in the main server.  I'd rather move the rejection code to the
place where a decision has to be made (like the directory walk),
but I have no time to do it myself.  I think it is reasonable to
allow %2F under some circumstances, but only in content handlers
and only as part of path-info and not within the real directory
structure.

Roy
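The decision point Roy describes can be illustrated: an unescaping routine that decodes other percent-escapes but refuses %2F outright, so that nothing downstream can collapse it into a real slash. A sketch of the idea only, not the httpd code:

```c
#include <string.h>

static int hexval(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Decode %XX escapes from `src` into `dst`; return -1 if the path
 * contains %2F (an encoded slash) or a malformed escape. */
static int unescape_reject_2f(char *dst, const char *src)
{
    while (*src) {
        if (*src == '%') {
            int hi = hexval(src[1]), lo = hexval(src[2]);
            if (hi < 0 || lo < 0)
                return -1;           /* malformed escape */
            int c = hi * 16 + lo;
            if (c == '/')
                return -1;           /* reject encoded slash */
            *dst++ = (char)c;
            src += 3;
        } else {
            *dst++ = *src++;
        }
    }
    *dst = '\0';
    return 0;
}
```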




Re: strace of SPECWeb99 static workload

2002-10-30 Thread Roy T. Fielding
One of them is probably here (in function ap_meets_conditions). Is there
any reason we cannot use r->request_time here?

I can't tell for sure right now, but the original concern was that
dynamically generated pages that are forked into a cache (something
done by RobH for IMDB) would have a modification time after the
request time, and thus a later request for that same page (which has
not changed in the interim) would be later than the IMS time and
thus always result in a 200 instead of 304.  I don't know if that
was the same concern that motivated it here.

Roy




Re: [1.3 PATCH^H^H^H^H^HBUG] chunked encoding problem

2002-10-19 Thread Roy T. Fielding
I'm sure there's a great reason for setting B_EOUT flag here, but it
sure does suck if you have data waiting to be sent to the client since
setting B_EOUT convinces ap_bclose() not to write any more data.


It is only set when the connection is aborted or the fd is gone,
both indicating that we can't write any more data.  I think you
need to figure out why the conditional above it is false and
then fix the root of the problem.  My guess is that

  r->connection->aborted

is being set incorrectly somewhere.

Roy




Re: [1.3 PATCH^H^H^H^H^HBUG] chunked encoding problem

2002-10-19 Thread Roy T. Fielding

On Friday, October 18, 2002, at 07:44  PM, Roy T. Fielding wrote:


I'm sure there's a great reason for setting B_EOUT flag here, but it
sure does suck if you have data waiting to be sent to the client since
setting B_EOUT convinces ap_bclose() not to write any more data.


It is only set when the connection is aborted or the fd is gone,
both indicating that we can't write any more data.  I think you
need to figure out why the conditional above it is false and
then fix the root of the problem.  My guess is that

  r->connection->aborted

is being set incorrectly somewhere.


Or you are simply seeing what happens on a timeout in debug mode
and the actual problem has nothing to do with the flag being set.

In any case, it is safe to simply delete the line where the flag
is being set, since the reason for setting it is simply for performance
(we don't want to waste time writing to a socket that has aborted).

Roy




distributing encryption software

2002-10-19 Thread Roy T. Fielding
Ryan asked for a clarification about whether or not we have the ability
to redistribute SSL binaries for win32.

Last year, the board hired a lawyer to give us an opinion on whether
we can distribute encryption software, or hooks to such software.
The exact opinion we got back is, unfortunately, not online, but it
is essentially the same (with less detail) as the one given to Debian
and visible at http://debian.org/legal/cryptoinmain.  Basically,
we have the right to distribute encryption software in source or
executable form if we also distribute that same software as open
source for free to the public, provided we first notify the U.S.
authorities once per new encryption-enabled product.

This is sufficient for Debian because they distribute the source code
to everything in Debian within a single repository.  Note, however,
that we do not do the same for OpenSSL.  Not only is OpenSSL not in
our CVS, but it isn't normally distributed by us at all, and the
authors of OpenSSL aren't likely to want us to distribute it because
doing so pollutes the recipients' rights with U.S. crypto controls
whereas they could simply grab the same distribution from the origin
and not be polluted.

I think that Bill Rowe at one point requested that we seek out a
lawyer's opinion on this specific matter, but that was not followed
through by the board because we already know the legal aspects.
The issue isn't legal -- it is social.  We can download a released
version of OpenSSL, compile it, and make both available from our
website provided we first notify the BXA as described in the Debian
opinion above.  However, it is still preferable for our users to
get the DLL themselves, from a distribution outside the U.S., and
avoid having to maintain our distribution of OpenSSL up-to-date.

I think a reasonable and defensible compromise would be to make
it part of the win32 installation script -- to select no SSL or,
if SSL is selected, to guide/automate the user in downloading an
appropriate DLL from some other site.  Besides, that would allow
the user to pick some other SSL library, such as one of the
optimized ones available commercially that may already be
installed on their system.  There is such a thing as being too
concerned about ease of installation.

Finally, it should also be noted that the exception for Apache ONLY
applies to non-commercial distributions.  Any commercial distribution,
even if it is simply Apache slapped onto a CD and sold for a buck,
remains subject to the old US export controls that everyone hates,
and must be approved via a separate process.

Roy




Re: [Patch]: ap_cache_check_freshness 64 bit oddities

2002-10-12 Thread Roy T. Fielding

 At first glance, I think there's an even more fundamental problem:
 the code in ap_cache_check_freshness() appears to be mixing times
 measured in microseconds (the result of ap_cache_current_age())
 with times measured in seconds (everything that it gets from the
 HTTP header).

And does that surprise you?  Probably not.  Add one more to the
continuing saga of errors due to flagrant type name abuse.

Roy




Re: PHP POST handling

2002-10-02 Thread Roy T. Fielding

Output filters cannot handle methods -- only input filters can do that.
It sounds to me like you guys are just arguing past each other -- the
architecture is broken, not the individual modules.  Just fix it.

Greg is right -- the default handler is incapable of supporting any
method other than GET, HEAD, and OPTIONS, and must error if it sees
anything else.  OTOH, mod_dav should not be monkeying with the content
hook if it isn't the content handler.  If you don't fix both problems
then the security issue will resurface at a later time.

Roy




Re: Cached response: 304 send as 200

2002-09-12 Thread Roy T. Fielding

On Wednesday, September 11, 2002, at 06:04  PM, Graham Leggett wrote:
 Kris Verbeeck wrote:

 The response:

  HTTP/1.0 200
  Date: Tue, 10 Sep 2002 09:45:39 GMT
  Server: web server
  Connection: close
  etag: b9829-2269-3cd12aa1

 Another bug - why is an HTTP/1.1 response prefixed with HTTP/1.0...?
 Nope, there is a force-response-1.0 in httpd.conf for this request.  So
 normal behaviour.

 Both Etag and Connection: close are HTTP/1.1 headers, simply changing the 
 version string on a forced-response-1.0 is wrong as I understand it.

 Can someone clarify what should happen in this case?

No, they are HTTP/1.x headers (there is no such thing as a 1.1 header,
only features that cannot be sent in response to a 1.0 request).
Both connection and etag should be sent regardless of protocol version.

Roy




Re: cvs commit: apache-1.3/src/modules/standard mod_digest.c

2002-09-10 Thread Roy T. Fielding

   +/* There's probably a better way to do this, but for the time being...
   + *
   + * Right now the parsing is very 'slack'. Actual rules from RFC 2069 are:

The relevant spec is RFC 2617.  Were there significant changes since 2069?

Roy




Re: cvs commit: apache-1.3/src/modules/standard mod_digest.c

2002-09-10 Thread Roy T. Fielding

 Not in this section. Comma separation made clearer (but no explicit
 wording on white space eating) - and our old code was still at fault
 when insisting that any non-alphanumeric MUST be quoted.

Odd that the BNF doesn't require that -- it cannot be parsed
unambiguously without the quotes.

Roy




Re: cvs commit: httpd-2.0 acinclude.m4

2002-08-09 Thread Roy T. Fielding

-1.  Please revert the change.  The purpose of the check is to identify
incompatible APIs, not security holes.

Roy




Re: cvs commit: httpd-2.0 acinclude.m4

2002-08-09 Thread Roy T. Fielding

 -1.  Please revert the change.  The purpose of the check is to identify
 incompatible APIs, not security holes.

 should apache be allowed to be built against a version of OpenSSL that
 has a known problem - I don't think so. But if everybody thinks against -
 then, so be it.

People need to be able to build against older versions specifically so
that they can test those older versions and so that they can introduce
our newer versions into an environment that has privately patched the
other library.

 Also, as per your argument, I'd question the validity of the following
 checks in acinclude.m4. Does it make sense to eliminate them ??.
 OpenSSL [[1-9]]*
 OpenSSL 0.[[1-9]][[0-9]]*

Those are to accept all future versions, not deny them.  I would be
happier if the entire check was removed, but the reason it exists is
to check for multiple installed versions and pick the first one that
passes the minimum compilable requirement.

Roy




Re: cvs commit: httpd-2.0 acinclude.m4

2002-08-09 Thread Roy T. Fielding

 -1.  Please revert the change.  The purpose of the check is to identify
 incompatible APIs, not security holes.

I have a patch to turn it into a warning -- will commit once tested.

Roy




Re: cvs commit: httpd-2.0 acinclude.m4

2002-08-09 Thread Roy T. Fielding

 Cool. I believe something is better than nothing :).

 (I'm sure you're already aware of this - but thought it'd be better to let
 you know)
 I believe my patch went into r1.127 - and has been labelled for the 2.0.40
 release. So, you might want to bump the label before it's released.

It has already been released.  And where did the three +1 come from
anyway?  That is still required on the tarball (not the tag) before
the announcement is supposed to go out, even for security releases.

2.0.40 will fail to compile for future releases of OpenSSL 0.9.x
except for those that also happen to end in e-z or are specifically
asked for via the --with-ssl=DIR option in configure.
Maybe that could go on the known bugs page.

I have no idea why the patch was applied just prior to the tag.

Roy




Re: cvs commit: httpd-2.0/modules/experimental mod_mem_cache.c

2002-07-18 Thread Roy T. Fielding

On Thursday, July 18, 2002, at 12:49  PM, [EMAIL PROTECTED] wrote:
}
   -    if (sconf->max_cache_object_size >= sconf->max_cache_size) {
   +    if (sconf->max_cache_object_size >= sconf->max_cache_size*1000) {
            ap_log_error(APLOG_MARK, APLOG_CRIT, 0, s,
                         "MCacheSize must be greater than "
                         "MCacheMaxObjectSize");

Umm, that should be 1024, but wouldn't it be better to store
max_cache_size in bytes?

Roy
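The unit mix-up Roy points out can be sketched: if the directive value is configured in KBytes, the comparison must scale by 1024 (not 1000) — or better, both limits should be stored in bytes from the start. Names here are hypothetical:

```c
/* Validate cache limits (sketch): max_object_size_bytes stands in
 * for MCacheMaxObjectSize, cache_size_kb for MCacheSize.  Scaling
 * by 1024, not 1000, converts KBytes to bytes.  Returns 1 when the
 * configuration is sane. */
static int cache_limits_ok(long max_object_size_bytes, long cache_size_kb)
{
    long cache_size_bytes = cache_size_kb * 1024;
    return max_object_size_bytes < cache_size_bytes;
}
```

Storing both values in bytes would make the comparison direct and remove the chance of picking the wrong multiplier.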




Re: cvs commit: apache-1.3/src/main http_protocol.c

2002-07-09 Thread Roy T. Fielding

WTF?  -1   Jim, that code is doing an error check prior to the
strtol.  It is not looking for the start of the number, but
ensuring that the number is non-negative and all digits prior
to calling the library routine.  A simple check of *lenp would
have been sufficient for the blank case.

I need to go through the other changes as well, so I'll fix it,
but don't release with this code.

Roy


On Tuesday, July 9, 2002, at 07:47  AM, [EMAIL PROTECTED] wrote:

 jim 2002/07/09 07:47:24

   Modified:src  CHANGES
src/main http_protocol.c
   Log:
   Allow for null/all-whitespace C-L fields as we did pre-1.3.26. However,
   we do not allow for the total bogusness of values for C-L, just this
    one special case. IMO a C-L field of "iloveyou" is bogus as is one
    of "123yabbadabbado", which older versions appear to have allowed
    (and in the 1st case, assume 0 and in the 2nd assume 123). Didn't
   make sense to make this runtime, but a documented special case
   instead.

   Revision  ChangesPath
   1.1836+8 -0  apache-1.3/src/CHANGES

   Index: CHANGES
   ===
   RCS file: /home/cvs/apache-1.3/src/CHANGES,v
   retrieving revision 1.1835
   retrieving revision 1.1836
   diff -u -r1.1835 -r1.1836
   --- CHANGES 8 Jul 2002 18:06:54 -   1.1835
   +++ CHANGES 9 Jul 2002 14:47:23 -   1.1836
   @@ -1,5 +1,13 @@
Changes with Apache 1.3.27

   +  *) In 1.3.26, a null or all blank Content-Length field would be
   + triggered as an error; previous versions would silently ignore
   + this and assume 0. As a special case, we now allow this and
   + behave as we previously did. HOWEVER, previous versions would
   + also silently accept bogus C-L values; We do NOT do that. That
   + *is* an invalid value and we treat it as such.
   + [Jim Jagielski]
   +
  *) Add ProtocolReqCheck directive, which determines if Apache will
 check for a valid protocol string in the request (eg: HTTP/1.1)
 and return HTTP_BAD_REQUEST if not valid. Versions of Apache



   1.324 +8 -2  apache-1.3/src/main/http_protocol.c

   Index: http_protocol.c
   ===
   RCS file: /home/cvs/apache-1.3/src/main/http_protocol.c,v
   retrieving revision 1.323
   retrieving revision 1.324
   diff -u -r1.323 -r1.324
   --- http_protocol.c 8 Jul 2002 18:06:55 -   1.323
   +++ http_protocol.c 9 Jul 2002 14:47:24 -   1.324
   @@ -2011,10 +2011,16 @@
const char *pos = lenp;
int conversion_error = 0;

   -while (ap_isdigit(*pos) || ap_isspace(*pos))
   +while (ap_isspace(*pos))
++pos;

if (*pos == '\0') {
   +/* special case test - a C-L field NULL or all blanks is
   + * assumed OK and defaults to 0. Otherwise, we do a
   + * strict check of the field */
   +r->remaining = 0;
   +}
   +else {
char *endstr;
errno = 0;
r->remaining = ap_strtol(lenp, &endstr, 10);
   @@ -2023,7 +2029,7 @@
}
}

   -if (*pos != '\0' || conversion_error) {
   +if (conversion_error) {
ap_log_rerror(APLOG_MARK, APLOG_NOERRNO|APLOG_ERR, r,
              "Invalid Content-Length");
return HTTP_BAD_REQUEST;
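The special case in the patch above can be sketched in standalone form: a null or all-blank Content-Length defaults to 0, while anything that is not entirely a non-negative decimal number is rejected. Illustrative only, not the committed httpd code:

```c
#include <ctype.h>
#include <errno.h>
#include <stdlib.h>

/* Parse a Content-Length field.  Returns 0 and sets *len on
 * success; blank or empty input is the special case meaning 0.
 * Returns -1 for anything that is not a valid number. */
static int parse_content_length(const char *field, long *len)
{
    const char *pos = field;
    char *endstr;

    while (isspace((unsigned char)*pos))
        ++pos;
    if (*pos == '\0') {          /* null/all-blank: assume 0 */
        *len = 0;
        return 0;
    }
    errno = 0;
    *len = strtol(pos, &endstr, 10);
    if (errno != 0 || *endstr != '\0' || *len < 0)
        return -1;               /* "123yabbadabbado" etc. fail */
    return 0;
}
```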








Re: URL parsing changed between 1.3.23 and 1.3.26?

2002-07-02 Thread Roy T. Fielding

 That's true.  But '&' is definitely the one used by convention.  (Maybe it's
 in the CGI spec?  Not sure on that one.)  And that doesn't change the fact
 that in this case ':' was used in place of both the '?' and the '&',
 which is definitely wrong.

No, it's just a different way of naming the path segment.  Any http
resource is free to construct its own namespace with the exception
that / and ? have a reserved meaning *when* they are used.

Roy




Re: CAN-2002-0392 : what about older versions of Apache?

2002-06-25 Thread Roy T. Fielding

On Tuesday, June 25, 2002, at 02:05  PM, Arliss, Noah wrote:
 Hopefully this is not a redundant question.. Does this patch cover issues 
 in
 mod_proxy as well, or were the issues introduced in 1.3.23 and later?

They were introduced later.  The patch says that it is not sufficient for
the releases after 1.3.22.

Roy




Re: Karma please

2002-06-23 Thread Roy T. Fielding

User rasmus already has karma.  apache-2.0 is not what you are looking for,
try the module httpd-2.0.

Roy


On Sunday, June 23, 2002, at 05:30  PM, Rasmus Lerdorf wrote:

 Could someone karma me for apache-2.0 please?

 -Rasmus





Re: CAN-2002-0392 : what about older versions of Apache?

2002-06-23 Thread Roy T. Fielding

 I don't remember seeing any +1's for this patch on the list.

I don't remember needing any.  There were no -1 with explanations.
There certainly hasn't been any effort spent, aside from my own, to
address the needs of those who cannot upgrade.  You guys punted, so
I picked up the ball and finished the task.  Somebody has to do it.
I refuse to consider votes based on "I haven't looked at it yet".

 Please remove this patch until one can be made that addresses the same
 issues with the proxy code (which also uses get_chunk_size()).

No.  Aaron, use your brain.  First, the proxy code that implemented chunked
reading was introduced after 1.3.22 (hence my NUMEROUS comments to the 
effect
that it wasn't applicable).  Second, the bogus type casts were not present
until after 1.3.22.  Third, the pointless ap_strtol addition was only done
because someone wanted to check the errno field, which is totally
irrelevant to the security hole itself.

My patch does fix the problem, certainly far better than no patch at all.
If you disagree, then tell me why it doesn't fix the problem.  If all you
are going to do is pontificate about the subject without taking the five
minutes necessary to review the change, then piss off.

Roy




Re: [PATCH httpd 1.2] chunk size overflow

2002-06-21 Thread Roy T. Fielding

 This patch should be sufficient to fix the security hole for most
 versions of Apache httpd 1.2.  Should we put it up on dist/httpd?

It turns out that this small patch is sufficient to plug the hole
on all 1.2 and 1.3.* versions up until 1.3.24 if mod_proxy is in use.
I have placed it in the relevant dist/httpd/patches directories.
It probably should have been sent to CERT along with the advisory,
or at least linked from our info file.  I'll leave that to others.

Roy




[PATCH httpd 1.2] chunk size overflow

2002-06-20 Thread Roy T. Fielding

This patch should be sufficient to fix the security hole for most
versions of Apache httpd 1.2.  Should we put it up on dist/httpd?

Roy



--- apache-1.2/src/http_protocol.c  Thu Jan  4 01:21:10 2001
+++ apache-1.2/src/patched_http_protocol.c  Thu Jun 20 18:13:04 2002
@@ -1535,6 +1535,10 @@
     }
 
     len_to_read = get_chunk_size(buffer);
+    if (len_to_read < 0) {
+        r->connection->keepalive = -1;
+        return -1;
+    }
 
     if (len_to_read == 0) {  /* Last chunk indicated, get footers */
         if (r->read_body == REQUEST_CHUNKED_DECHUNK) {
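The class of bug this patch guards against: a hex chunk-size accumulated into a signed long can wrap negative, so the parser itself should detect overflow and the caller must treat a negative result as fatal. A standalone sketch of that guard:

```c
#include <limits.h>

/* Parse a hex chunk-size, returning -1 on overflow or if no hex
 * digits are present (sketch of the intent of the 1.2/1.3 fix). */
static long parse_chunk_size(const char *line)
{
    long size = 0;
    int seen = 0;

    for (; *line; ++line) {
        int d;
        if (*line >= '0' && *line <= '9')      d = *line - '0';
        else if (*line >= 'a' && *line <= 'f') d = *line - 'a' + 10;
        else if (*line >= 'A' && *line <= 'F') d = *line - 'A' + 10;
        else break;                 /* extension or CRLF starts here */
        if (size > (LONG_MAX - d) / 16)
            return -1;              /* would overflow: reject */
        size = size * 16 + d;
        seen = 1;
    }
    return seen ? size : -1;
}
```

Without the explicit check, an attacker-supplied chunk-size line of enough hex digits silently becomes a negative length, which is exactly the hole the advisory describes.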



Re: cvs commit: apr/include apr_time.h

2002-06-12 Thread Roy T. Fielding

There is no reason for them to be all-uppercase.  I hate it when people
use uppercase for functions, including macro functions.  All-uppercase
is a convention for symbolic constants, not functions.

Roy




Re: [patch] reduce conversions of apr_time_t

2002-06-12 Thread Roy T. Fielding

Why do that when it is more effective to just blow away apr_time_t and
use the already-portable time_t when we want to store seconds?  I have
no need for microseconds outside of struct tm (which does need a more
portable apr structure type).

Roy




Re: apr_time_t -- apr_time_usec_t

2002-06-10 Thread Roy T. Fielding


On Monday, June 10, 2002, at 03:22  PM, Cliff Woolley wrote:

 On Mon, 10 Jun 2002, Roy T. Fielding wrote:

 I know of one existing bug in httpd that I would consider a
 showstopper, if I were RM, due to the way APR handles time.

 Are you going to tell me what it is?  :)

If-Modified-Since doesn't work because an HTTP time based on
seconds x10^6 is being compared to a file modification time
based directly on microseconds.

Roy
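The comparison bug described above, sketched with plain integers: HTTP dates carry one-second resolution, so the microsecond mtime must be truncated to seconds before comparing against the If-Modified-Since value. Illustrative only, not the actual httpd fix:

```c
#define APR_USEC_PER_SEC 1000000LL

/* Return 1 (send 304) when the resource has not been modified
 * since the If-Modified-Since time.  mtime_usec is microseconds
 * (APR style); ims_sec is whole seconds parsed from the HTTP date. */
static int not_modified(long long mtime_usec, long long ims_sec)
{
    long long mtime_sec = mtime_usec / APR_USEC_PER_SEC; /* truncate */
    return mtime_sec <= ims_sec;
}
```

Comparing the raw microsecond value against a seconds value would make every file look newer than any HTTP date, so the 304 path could never fire.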




Re: apr_time_t -- apr_time_usec_t

2002-06-10 Thread Roy T. Fielding

 If-Modified-Since doesn't work because an HTTP time based on
 seconds x10^6 is being compared to a file modification time
 based directly on microseconds.

 I thought I fixed that already!?  Oh boy, did the patch not get 
 committed?
 It might be sitting in the PR waiting for somebody to test it.

 I'll go check.

 No, I committed a patch for this on May 8.  It's still broken for you?  In
 HEAD?  On Unix or Win32?

No, I missed that you had mostly fixed it --- I had saved the original
report for later work.

I still think it is insane to multiply or divide every time we want to
use seconds.  Not a showstopper, though.

Roy




Re: cvs commit: httpd-2.0 STATUS

2002-05-28 Thread Roy T. Fielding

if (ap_status_drops_connection(r->status) ||
    (r->main && ap_status_drops_connection(r->main->status))) {
 return OK;
 }

 The idea is that if our status code is such that we're trying to
 avoid reading the body, we shouldn't actually read it.  We need
the r->main trick as well because of subreqs (the .html.var file
 is a subreq handled by default_handler, so it will call discard_body
 as well on each subreq!).

Hmm, I may not be remembering this correctly, but is there any situation
in which a subrequest would be allowed to call discard_body?  If not,
it can simply check

if (r->main || ap_status_drops_connection(r->status)) {
return OK;
}

Roy




Re: cvs commit: httpd-2.0 STATUS

2002-05-28 Thread Roy T. Fielding

 Sounds good, but I disagree with your STATUS code.  It is a 400, not a
 413.  The request is completely invalid, not too large.  It would be too
 large if we had set a limit on the size of requests, but that isn't the
 problem.  The problem is that they have sent an invalid chunk.

No, it is a valid chunk-size.  It is simply too large for the data type
we chose to implement it with.  413 is the correct response, unless
somebody wants to bother implementing large-file input chunks.

Roy




Re: Stripping Content-Length headers

2002-05-05 Thread Roy T. Fielding


On Sunday, May 5, 2002, at 11:25  AM, Justin Erenkrantz wrote:

 On Sun, May 05, 2002 at 08:03:24PM +0200, Graham Leggett wrote:
 I understand the Content Length filter is responsible for sorting out
 Content-Length, and that chunked encoding will be enabled should the
 length be uncalculate-able, so it works as it is - but the question is,
 if we already have a content-length, should we not just keep it?

 Nah, this allows us flexibility in optimizing the data sent to our
 client.  If we can send chunked-encoding, I believe that is a better
 than using C-L.  I believe that the RFC allows us to do these sorts
 of optimizations.

 IIRC, the PR wasn't saying there was a problem with our approach - it
 was just that the admin didn't understand that was legal.  -- justin

It is legal, but not advisable.  It is not better to use chunking than
it is to use C-L.  C-L works better with HTTP/1.0 downstream and allows
progress bars to exist on big downloads.  Any filter that does not
transform the content should not modify the C-L.

Roy
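For contrast with Content-Length, chunked framing hides the total length that a C-L header would advertise to HTTP/1.0 clients and download progress bars. A sketch of the wire format for a single-chunk body:

```c
#include <stdio.h>
#include <string.h>

/* Frame `body` as one HTTP/1.1 chunk plus the terminating
 * last-chunk, writing into `out` (sketch of chunked framing). */
static int chunk_frame(char *out, size_t outsz, const char *body)
{
    /* hex chunk-size, CRLF, data, CRLF, "0" last-chunk, final CRLF */
    return snprintf(out, outsz, "%zx\r\n%s\r\n0\r\n\r\n",
                    strlen(body), body);
}
```

Note the receiver only learns the total size after the final "0" chunk arrives, which is why C-L is friendlier when the length is already known.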




Re: [Patch] Concept: have MPM identify itself in Server header

2002-05-03 Thread Roy T. Fielding

I do not believe that the Server string should be used to describe
implementation details of the server software.  I know we already
do that, over my objections.

Roy




Re: cvs commit: httpd-2.0/server/mpm/worker worker.c

2002-05-01 Thread Roy T. Fielding


On Wednesday, May 1, 2002, at 01:49  PM, Aaron Bannert wrote:

 And, consider my position on your calloc change in this patch as a
 veto.  If you want to remove calloc, then you should do so throughout
 the code rather than in sporadic places that may make maintaining the
 code a nightmare if we were to fix calloc.  But, that is an issue
 that is now open in APR's STATUS.

 What exactly are you vetoing? My use of apr_palloc()+memset() is
 technically superior to apr_pcalloc(). What is your technical reason
 for forcing me to use apr_pcalloc()?

Umm, no it isn't.  The reason is that it makes the code harder to
understand.

 If the end result of the calloc vote is that we should remove calloc,
 then feel free to do a search and replace across the entire tree.
 Until then, let's please remain consistent to our API.  -- justin

 I'm really not sure about the macro yet. On the one hand it's too
 late to remove apr_pcalloc(). The macro stinks the same way #define
 safe_free()-like stuff does, but at least it is a compromise. OTOH,
 as Cliff brought up, we'll get a SEGV if apr_palloc() returns NULL. I
 don't see how:

 foo = apr_palloc(p, sizeof(*foo));
 memset(foo, 0, sizeof(*foo));

 is any less desirable than:

 foo = apr_pcalloc(p, sizeof(*foo));

 It is quite obvious how the memory is being initialized in both cases,
 and for the reasons discussed earlier it is obvious why the first would
 be optimal.

Because we have to keep the old API working, and because duplicating code
everywhere is a bad thing.  The arguments have already been made.  I don't
even understand why people are voting on the macro -- just commit it.
Let's save the arguments for things where actual disagreement is useful.

And while we are on the topic, anything that is posted to the mailing
list is open for others to commit to the code base.  That is how we work.
People here are expected to be part-time volunteers, so if one person does
60% of the work and posts it, others should feel free to do the other 40%
and commit the sucker while the originator is sleeping.  The only necessary
part is that it be appropriately attributed in Submitted By.

In this case, there is no excuse for sitting on a bug fix just because
there are stylistic issues about a patch.  The appropriate thing to do
is remove the style changes and commit the fix.

Roy
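The equivalence under discussion, sketched outside APR with a toy bump allocator (all names hypothetical): a pcalloc-style wrapper is exactly palloc followed by memset, which is why spelling out the memset at every call site duplicates code without changing behavior:

```c
#include <string.h>

static char pool_buf[4096];
static size_t pool_used;

/* Toy bump allocator standing in for apr_palloc (sketch only;
 * no bounds checking or alignment handling). */
static void *toy_palloc(size_t size)
{
    void *p = pool_buf + pool_used;
    pool_used += size;
    return p;
}

/* calloc-style wrapper: identical to palloc + memset. */
static void *toy_pcalloc(size_t size)
{
    void *p = toy_palloc(size);
    memset(p, 0, size);
    return p;
}
```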




Re: Bumping tags

2002-04-30 Thread Roy T. Fielding

 Well then why are the patches in the tree??? I'm not sure I like the 
 idea of
 tagging and then tagging just some files. Seems like if we haven't got a
 stable HEAD we shouldn't be tagging. We got into this whole business of
 tagging often as a way of avoiding having this sort of thing. Ifw e 
 tagged
 and it wasn't stable, who cares. Just retag when it is and move on...

 This seems to be a growing trend and one I think we should stop.

 I disagree.  I see a lot of value in managing a release by tagging then 
 selectively
 picking up showstopper fixes. And the RM should make the decision if this 
 is the way he
 wants to get the release out.

I strongly dislike the action of tagging the tree with a version number
and then moving that tag.  If we aren't sure about the version, then the
RM should use a personal tag and only replace it with the real version tag
when we are sure.  If people aren't willing to run up the version numbers,
then they shouldn't tag them as such until the version is ready for 
tarball.

Justin already showed that an RM can do it this way effectively.

Roy




Re: cvs commit: httpd-2.0/server core.c

2002-04-26 Thread Roy T. Fielding

I don't understand why you didn't simply reverse the test and
enclose the frequent case inside the if {} block.  I assume it
was just to avoid indenting a large block of code, which is not
sufficient justification for a goto.

A goto often has unforeseen effects on high-level optimizations
that can be as bad as a pipeline flush.

Roy




Re: REQUEST_CHUNKED_DECHUNK question

2002-04-26 Thread Roy T. Fielding


On Thursday, April 25, 2002, at 03:27  PM, Justin Erenkrantz wrote:

 On Thu, Apr 25, 2002 at 04:39:18PM -0400, Bill Stoddard wrote:
 From http_protocol.c...

 * 1. Call setup_client_block() near the beginning of the request
 *    handler. This will set up all the necessary properties, and will
 *    return either OK, or an error code. If the latter, the module should
 *    return that error code. The second parameter selects the policy to
 *    apply if the request message indicates a body, and how a chunked
 *    transfer-coding should be interpreted. Choose one of
 *
 *    REQUEST_NO_BODY          Send 413 error if message has any body
 *    REQUEST_CHUNKED_ERROR    Send 411 error if body without Content-Length
 *    REQUEST_CHUNKED_DECHUNK  If chunked, remove the chunks for me.
 *
 *    In order to use the last two options, the caller MUST provide a buffer
 *    large enough to hold a chunk-size line, including any extensions.
 *

 Anyone know off the top of their head what the last sentence really means?
 In 1.3 and 2.0?

It means that the buffer passed for the later get_client_block calls
must be large enough to handle the chunk-size integer in character form,
since the parser will fail if it has to stop in mid-read of the integer
(not mid-read of the data within the chunk).  I don't know if it still
applies to 2.0.
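
[Editorial sketch: a minimal, self-contained illustration of the buffer requirement Roy describes -- the parser must see the whole chunk-size line (hex digits, optional extensions, CRLF) in one read, since it cannot resume mid-line. This is not the httpd parser, just a sketch of the constraint.]

```c
#include <ctype.h>

/* Returns the chunk size if buf holds a complete chunk-size line
 * (hex size, optional ";ext" extensions, terminating LF), or -1 if
 * the line is truncated -- i.e., the caller's buffer was too small. */
static long parse_chunk_size_line(const char *buf, long len)
{
    long size = 0;
    long i = 0;

    while (i < len && isxdigit((unsigned char)buf[i])) {
        char c = (char)tolower((unsigned char)buf[i]);
        size = size * 16 + (isdigit((unsigned char)c) ? c - '0'
                                                      : c - 'a' + 10);
        i++;
    }
    /* Skip any chunk extensions up to the end-of-line. */
    while (i < len && buf[i] != '\n')
        i++;
    if (i == len)
        return -1;  /* chunk-size line did not fit in the buffer */
    return size;
}
```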

 FWIW, all of this code is essentially a no-op in 2.0 now since the
 filters handle the chunking transparently.  Right now, there is no
 way to get the chunks without bypassing the filters.  I assume we
 could either setup a flag or the module needs to explicitly remove
 HTTP_IN.  (Indeed, I believe that removing the HTTP_IN filter is
 the best way to go if you want the real body.)  -- justin

Harrumph.  I hate it when features disappear.  The right solution is
for the HTTP_IN filter to obey that parameter, not ignore it.

Roy




Re: Can't force http 1.0

2002-04-22 Thread Roy T. Fielding


On Monday, April 22, 2002, at 11:11  AM, Joshua Slive wrote:

 Bill Stoddard wrote:

 SetEnv force-response-1.0

 According to the docs here:
 http://httpd.apache.org/docs/env.html#special
 The point of that was to deal with silly proxies that belched when they
 saw HTTP/1.1 (regardless of the actual protocol version of the
 response).

 Really? I don't intuit that from the doc though you may be right. The 
 behaviour being
 observed is how 1.3 has been working for years (pretty sure anyway) and 
 to the best of my
 knowledge, it is not breaking anything.  Would be interested in knowing 
 what exactly is
 breaking with this PR.

 I guess I'm reading that in the context of
 http://httpd.apache.org/info/aol-http.html
 and I'm also asking the question "What does force-response-1.0 do that 
 downgrade-1.0 doesn't do?"

downgrade-1.0 is for ignoring client requests that indicate HTTP/1.1 but
we know the client is broken and cannot deal with HTTP/1.1 features.
We send an HTTP/1.1 response using only 1.0 features.

force-response-1.0 is for dealing with clients that simply cannot parse
the HTTP/1.1 version number.  For these we send an HTTP/1.0 response
using only 1.0 features.

They are both needed, though I wouldn't consider it a high priority.

Roy
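
[Editorial sketch: the distinction Roy draws can be summed up in a small decision helper. Both flags restrict the response to HTTP/1.0 features; only force-response-1.0 also rewrites the protocol in the status line. The struct and names below are illustrative, not httpd's own.]

```c
#include <string.h>

struct response_policy {
    const char *protocol;    /* what the status line advertises */
    int allow_11_features;   /* chunking, keep-alive, etc. */
};

/* downgrade-1.0: keep the HTTP/1.1 status line, use only 1.0 features.
 * force-response-1.0: advertise HTTP/1.0 outright for clients that
 * cannot even parse the 1.1 version number. */
static struct response_policy choose_policy(int downgrade_10,
                                            int force_response_10)
{
    struct response_policy p;
    p.allow_11_features = !(downgrade_10 || force_response_10);
    p.protocol = force_response_10 ? "HTTP/1.0" : "HTTP/1.1";
    return p;
}
```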




Re: eWeek: Apache 2.0 Beats IIS at Its Own Game

2002-04-16 Thread Roy T. Fielding

Nice article.  However, looking at their test results, I'd say they
are only measuring the limits of their test tool.  At least it is nice
to see that they have similar performance up to the test limitation.

Roy


   http://www.eweek.com/article/0,3658,s=702&a=25458,00.asp




Re: Move perchild to experimental?

2002-04-16 Thread Roy T. Fielding

+1

Roy




Re: [PATCH] Move 100-Continue into HTTP_IN

2002-04-12 Thread Roy T. Fielding

We have to do more work than this.  The 100 has to be sent before 
attempting
to read the first chunk (if chunked) or only if C-L > 0 (if length).

Also, the code that reads the chunk length is failing to check for errors.

Also, the code that reads the chunk end is failing to read the trailers.

In other words, this isn't even remotely HTTP/1.1 compliant right now.
I suspect the same is true of the proxy code.

Roy
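
[Editorial sketch: Roy's first requirement -- send the interim 100 only before reading the first chunk, or when Content-Length > 0 -- can be expressed as a predicate. Illustrative only, not the httpd implementation; content_length < 0 here stands for "header absent".]

```c
/* Should a 100 (Continue) interim response be sent before reading
 * the request body?  Only if the client sent Expect: 100-continue
 * and a body is actually forthcoming. */
static int should_send_100(int expecting_100, int is_chunked,
                           long content_length)
{
    if (!expecting_100)
        return 0;
    return is_chunked || content_length > 0;
}
```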

On Friday, April 12, 2002, at 05:01  PM, Justin Erenkrantz wrote:

 As Ryan has pointed out, ap_http_filter doesn't properly
 handle 100-Continue requests.  Rather than call
 ap_should_client_block, HTTP_IN (aka ap_http_filter) should
 handle this transparently (since I am in the camp that
 HTTP_IN should handle all HTTP protocol issues).

 Untested.  Can I get any concept +1s (or -1s)?  -- justin

 Index: modules/http/http_protocol.c
 ===
 RCS file: /home/cvs/httpd-2.0/modules/http/http_protocol.c,v
 retrieving revision 1.407
 diff -u -r1.407 http_protocol.c
 --- modules/http/http_protocol.c  1 Apr 2002 22:26:09 -   1.407
 +++ modules/http/http_protocol.c  12 Apr 2002 23:54:34 -
 @@ -736,6 +736,23 @@
  return APR_EGENERAL;
  }
  }
 +
 +/* Since we're about to read data, send 100-Continue if needed. */
 +if (f->r->expecting_100 && f->r->proto_num >= HTTP_VERSION(1,1)) {
 +char *tmp;
 +apr_bucket_brigade *bb;
 +
 +tmp = apr_pstrcat(f->r->pool, AP_SERVER_PROTOCOL, " ",
 +  status_lines[0], CRLF CRLF, NULL);
 +bb = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
 +e = apr_bucket_pool_create(tmp, strlen(tmp), f->r->pool,
 +   f->c->bucket_alloc);
 +APR_BRIGADE_INSERT_HEAD(bb, e);
 +e = apr_bucket_flush_create(f->c->bucket_alloc);
 +APR_BRIGADE_INSERT_TAIL(bb, e);
 +
 +ap_pass_brigade(f->c->output_filters, bb);
 +}
  }

   if (!ctx->remaining) {
 @@ -1576,24 +1593,6 @@

   if (r->read_length || (!r->read_chunked && (r->remaining <= 0))) {
  return 0;
 -}
 -
 -if (r->expecting_100 && r->proto_num >= HTTP_VERSION(1,1)) {
 -conn_rec *c = r->connection;
 -char *tmp;
 -apr_bucket *e;
 -apr_bucket_brigade *bb;
 -
 -/* sending 100 Continue interim response */
 -tmp = apr_pstrcat(r->pool, AP_SERVER_PROTOCOL, " ",
 -  status_lines[0], CRLF CRLF, NULL);
 -bb = apr_brigade_create(r->pool, c->bucket_alloc);
 -e = apr_bucket_pool_create(tmp, strlen(tmp), r->pool, c->bucket_alloc);
 -APR_BRIGADE_INSERT_HEAD(bb, e);
 -e = apr_bucket_flush_create(c->bucket_alloc);
 -APR_BRIGADE_INSERT_TAIL(bb, e);
 -
 -ap_pass_brigade(r->connection->output_filters, bb);
  }

  return 1;





Re: [PATCH] convert worker MPM to leader/followers design

2002-04-11 Thread Roy T. Fielding

 Ok, now we're on the same page. I see this as a problem as well, but I
 don't think this is what is causing the problem described earlier in this
 thread. Considering how unlikely it is that all of the threads on one
 process are on long-lived connections, I don't see this as a critical
 short-term problem. What is more likely is that 'ab', used to observe
 this phenomenon, is flawed in a way that prevents it from truly testing the
 concurrent processing capabilities of the worker MPM, when it is possible
 for a request on a different socket to be returned sooner than another.
 Flood would be much more appropriate for this kind of a test.

So, what you are saying is that it isn't common for Apache httpd to be used
for sites that serve large images to people behind modems.  Right?  And
therefore we shouldn't fix the only MPM that exists solely because sites
that mostly serve large images to people behind modems didn't want the
memory overhead of prefork.  Think about it.

Roy




1.3.24 +1

2002-03-21 Thread Roy T. Fielding

Tarball tested on RH Linux 2.2.16-22 with no problems.  +1

Roy



Re: PCRE status?

2002-03-20 Thread Roy T. Fielding

 We've upgraded to the latest PCRE now (thanks to Cliff for fixing the 
 Win32 build).

Thanks.

 I checked with the PCRE maintainer and learned that the next release is
 several months away.  In the meantime, that leaves me with two options
 for speeding up ap_regexec():
 
   * Commit a change to the PCRE regexec() function (the same change
 that I've submitted for the next release of PCRE) into the Apache
 copy of PCRE for now.

Yeah, do that, but surround it with comments saying that it is a change
that has been submitted to the maintainer.

   * Or change ap_regexec() to bypass regexec() and call the PCRE native
 regexp exec function directly.  (The PCRE regexec() is a thin wrapper
 around pcre_exec(), so this shouldn't be difficult.)

Ummm, do we always use our PCRE for the regexec library?  I was under the
impression that it is configurable.

Roy



Re: PCRE status?

2002-03-19 Thread Roy T. Fielding

On Tue, Mar 19, 2002 at 06:07:05PM -0800, Brian Pane wrote:
 Is the copy of PCRE within httpd-2.0 a separately maintained fork
 of PCRE, or is it supposed to be an unmodified copy?  (The one in
 the httpd tree appears to be a few releases out of date.)

It is supposed to be maintained up to date with the source, but has
no current maintainer.  It should not be a fork.

 The reason I ask is that I want to fix a performance problem in PCRE's
 regexec() function...

Do a vendor import and merge the latest stuff first -- it needs to be
done anyway for licensing reasons.

Roy



Re: [1.3 PATCH/QUESTION] Win32 ap_os_is_filename_valid()

2002-03-14 Thread Roy T. Fielding

 Apache 1.3 on Win32 assumes that the names of files served are comprised 
 solely of characters from character sets which are a superset of ASCII,
 such as UTF-8 or ISO-8859-1.  It has no logic to determine whether or not 

You wanted to say "from character encodings that are a superset".

A character set is a different animal.

Roy



Re: Copyright year bumping

2002-03-13 Thread Roy T. Fielding

On Sat, Mar 09, 2002 at 12:20:23PM +0800, Stas Bekman wrote:
 Sander Striker wrote:
 Hi,
 
 Should we bump the copyright year on all the files?
 Anyone have a script handy?
 
 find . -type f -exec perl -pi -e 's|2000-2001|2000-2002|' {} \;

That would change a lot more, and a lot less, than we want.  I've committed
the change for 2.0 and will do 1.3 next.

Roy



Re: Content-length returned from HEAD requests?

2002-03-13 Thread Roy T. Fielding

On Tue, Mar 12, 2002 at 10:57:50AM -0800, Brian Pane wrote:
 Aaron Bannert wrote:
 
 Is it valid for Content-length to be returned from these types
 of requests? daedalus is showing it, and I'm seeing it in current CVS.
 
 -aaron
 
 
 I don't think so, unless it's Content-Length: 0, due to this
 part of section 10.2.7 in RFC 2616:
 If a Content-Length header field is present in the response,
 its value MUST match the actual number of OCTETs transmitted in
 the message-body.

That section is on the 206 response, not a 200 response.

Section 4.3:

   For response messages, whether or not a message-body is included with
   a message is dependent on both the request method and the response
   status code (section 6.1.1). All responses to the HEAD request method
   MUST NOT include a message-body, even though the presence of entity-
   header fields might lead one to believe they do. All 1xx
   (informational), 204 (no content), and 304 (not modified) responses
   MUST NOT include a message-body. All other responses do include a
   message-body, although it MAY be of zero length.

4.4 Message Length

   The transfer-length of a message is the length of the message-body as
   it appears in the message; that is, after any transfer-codings have
   been applied. When a message-body is included with a message, the
   transfer-length of that body is determined by one of the following
   (in order of precedence):

   1.Any response message which MUST NOT include a message-body (such
 as the 1xx, 204, and 304 responses and any response to a HEAD
 request) is always terminated by the first empty line after the
 header fields, regardless of the entity-header fields present in
 the message.

Roy
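
[Editorial sketch: the two sections Roy quotes reduce to a small predicate over the request method and response status. Illustrative helper, not httpd code.]

```c
#include <string.h>

/* Per RFC 2616 sections 4.3/4.4: responses to HEAD, and all 1xx, 204,
 * and 304 responses, MUST NOT include a message-body regardless of the
 * entity-header fields present.  All other responses do include one,
 * though it MAY be of zero length. */
static int response_has_body(const char *method, int status)
{
    if (strcmp(method, "HEAD") == 0)
        return 0;
    if (status >= 100 && status < 200)
        return 0;
    if (status == 204 || status == 304)
        return 0;
    return 1;
}
```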



Re: PR 10163, location of config_vars.mk

2002-03-13 Thread Roy T. Fielding

On Wed, Mar 13, 2002 at 01:09:27PM -0500, Jeff Trawick wrote:
 short form: 
 
 I want to move config_vars.mk from top_builddir to
 top_builddir/build/config_vars.mk.  Okay?

+1

Roy



Re: [1.3 PATCH/QUESTION] Win32 ap_os_is_filename_valid()

2002-03-13 Thread Roy T. Fielding

On Wed, Mar 13, 2002 at 02:12:18PM -0500, Jeff Trawick wrote:
 Jeff Trawick [EMAIL PROTECTED] writes:
 
  This function is checking for several characters which, at least in
  ASCII, are supposedly not valid characters for filenames.  But some of
  these same characters can appear in valid non-ASCII filenames, and the
  logic to check for these characters breaks Apache's ability to serve
  those files.
  
  A user reported the inability to request a file with the Chinese
  character %b5%7c in the name.  The %7c byte tripped up the check for
  invalid ASCII characters.
 
 I think this is an accurate statement regarding the use of non-ASCII
 characters in filenames with Apache 1.3 on Win32.  Comments?
 
 ---cut here--
 Names of file-based resources with Apache 1.3 on Win32
 
 Apache 1.3 on Win32 assumes that the names of files served are comprised 
 solely of characters from the US-ASCII character set.  It has no logic to
 determine whether or not a possible file name contains invalid non-ASCII
 characters.  It has no logic to properly match actual non-ASCII file names 
 with names specified in the Apache configuration file.  Because Apache
 does not verify that the characters in file names are all ASCII, files 
 containing various non-ASCII characters in their names can be 
 successfully served by Apache.  However, this is not recommended for the
 following reasons:

No, it doesn't.  It treats all names as raw bytes, regardless of charset,
but the filtering process of preventing some filesystem-specific magic
characters from creating security holes on a server prevents the use
of unfiltered 16-bit Unicode or similar wide character sets from being used
directly.  This is true in general for the Web -- wide character encodings
are not allowed to appear in URI under any circumstances.

The solution is to use UTF-8 encoding for non-ASCII characters and not
allow any access via wide character function calls.

Roy
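
[Editorial sketch: the approach Roy suggests -- accept non-ASCII file names only as valid UTF-8 -- can be illustrated with a byte-wise validity check. Minimal sketch only; a real server must also reject overlong forms, surrogates, and filesystem magic characters. Note that the 0xb5 0x7c sequence from the reported Chinese file name is not valid UTF-8 and would be rejected here.]

```c
/* Returns 1 if s[0..len) is structurally valid UTF-8 (correct lead
 * bytes each followed by the right number of 0x80-0xBF continuation
 * bytes), 0 otherwise. */
static int is_valid_utf8(const unsigned char *s, long len)
{
    long i = 0;
    while (i < len) {
        unsigned char c = s[i];
        int follow;
        if (c < 0x80)                follow = 0;  /* ASCII */
        else if ((c & 0xE0) == 0xC0) follow = 1;
        else if ((c & 0xF0) == 0xE0) follow = 2;
        else if ((c & 0xF8) == 0xF0) follow = 3;
        else return 0;               /* invalid lead byte */
        i++;
        while (follow-- > 0) {
            if (i >= len || (s[i] & 0xC0) != 0x80)
                return 0;            /* missing/bad continuation byte */
            i++;
        }
    }
    return 1;
}
```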



Re: [1.3 PATCH/QUESTION] Win32 ap_os_is_filename_valid()

2002-03-13 Thread Roy T. Fielding

 Regarding your key comment "treats all file names as raw bytes,
 regardless of charset"...  
 
 I would agree with that for Unix, but on Win32, in an attempt to match
 the semantics of the native filesystem (case preserving but not case
 significant), Apache will perform case transformations on file names*.
 This, along with the filtering code to check for specific ASCII
 values, is why I claimed that it assumes ASCII.

ISO-8859-1 or UTF-8 both contain ASCII as a subset.  It is therefore more
accurate to say that it assumes some character encoding that is a superset
of ASCII, rather than just ASCII.  It keeps you from getting your butt
flamed by the i18n crowd as well.

 *see ap_os_canonical_filename(), which is used to generate r->filename

Hey, I prefer to keep my sanity.  I mentioned a while back that the way
to do this right is to define directives for the Directory container that
define when a directory tree is case insensitive, Unicode, etc., since
this has very little to do with the operating system.

Roy



Re: [PATCH] (1.3, 2.0) config.layout update for SuSE

2002-02-28 Thread Roy T. Fielding

 Should I just create a new section labeled "Layout SuSE7"?

No, just replace it.  The worst that could happen is the man directory not
being found on an old SuSE 6.x, which is an easy fix for the user.  Keep in
mind that the layout is normally only used by the package installers prior
to burning the CD.  A normal user would default to the Apache layout.
The Layout config is just a way for us to reduce vendor-specific changes
to our released code.

Roy



Re: OT: whither are we going?

2002-02-26 Thread Roy T. Fielding

 As far as having no responsibility to the people/companies that USE
 Apache, I put forth this argument.  When a company bases its business
 or a person bases their career on a program, in MY OPINION, there then
 springs into being an implied responsibility on the development team
 to support the product and keep it alive.  I.e., they have put THEIR MONEY
 behind this product.  When a web hosting company says "I use Apache",
 that means that they are backing Apache with THEIR MONEY.  No, they did
 NOT pay the ASF to RENT a license of Apache but they are STILL spending
 money on Apache.

That is total bullshit.  When a company pays someone to support a product,
whether that someone be a company like Covalent or an independent software
developer, THEN and only then is there any implied responsibility to that
person's needs.  It is completely insane to think that a volunteer group
of developers is going to be responsible to all 60 million or so users just
because they happen to like the free product.

If you aren't contributing, you aren't part of the Apache community.
People within the community will work on the problems that they consider
to be most important.  People outside the community can only influence
what they do by performing the work necessary to eventually be considered
part of the community, or by paying someone within the community to do it
for them.

The only responsibility we have is to keeping the community open to new
volunteers.

Roy



Re: cvs commit: httpd-2.0/server protocol.c

2002-02-06 Thread Roy T. Fielding

  I think you may have done the opposite of what you expected..
  Aren't NOTICE messages *always* logged, regardless of LogLevel?
 
 Oh, man.  That sucks.  It's #defined to be priority 5 in http_log.h,
 but we ignore that level.  Bah.  That's bogus.  NOTICE should be
 priority 0 if we always print it.  Switching it to DEBUG.  -- justin

syslog priorities are lame in general.  We needed something that was
both not an error and always printed (for the startup/shutdown messages)
and NOTICE is the only one that made sense.  That leaves INFO and DEBUG
for priority-based logging.

Roy




Re: Releases, showstoppers, and vetos

2002-02-06 Thread Roy T. Fielding

A showstopper, aside from a yet-to-be-reverted veto, can be moved from
one section of STATUS to another by the RM (or anyone, for that matter)
whenever they want.  It is only a showstopper if we ALL agree it is.
The category only exists to simply remind us of what needs to be fixed.

Roy




Re: Releases, showstoppers, and vetos

2002-02-06 Thread Roy T. Fielding

Nobody can veto a release, period.  It is therefore impossible for
anything to be a showstopper unless it is a pending veto of a commit
or the group makes a decision by majority of -1 on any release until
the problem is fixed.  If the RM doesn't think that is the case,
then they should move the issue out of that category.  If anyone else
has a problem with that, they are perfectly capable of calling for
a vote on not releasing the code until the issue is fixed.

The only reason the showstopper category exists is because I needed
a place to keep track of problems that we all agreed to fixing before
I could cut a release.  That is all.  Now it is just being abused.

Roy




Re: Releases, showstoppers, and vetos

2002-02-06 Thread Roy T. Fielding

On Wed, Feb 06, 2002 at 03:33:04PM -0500, Rodent of Unusual Size wrote:
 Roy T. Fielding wrote:
  
  A showstopper, aside from a yet-to-be-reverted veto, can be
  moved from one section of STATUS to another by the RM (or
  anyone, for that matter) whenever they want.  It is only
  a showstopper if we ALL agree it is. The category only exists
  to simply remind us of what needs to be fixed.
 
 Not codified, and certainly not clear:

Yes it is codified -- the status file and its categories have no bearing
whatsoever on our process guidelines except in that it keeps track of
the status of voting/releases.  It does not change the meaning of the
votes or what is required to do a release.  It is just a tool.

Roy




Re: Releases, showstoppers, and vetos

2002-02-06 Thread Roy T. Fielding

 I add a showstopper to STATUS. One other person says -1, that's
 not a showstopper. By my interpretation of the rules, they CANNOT
 demote it from showstopper until there are enough people who would
 vote to release (more +1s than -1s). This means that in order to
 demote it, there would have to be two -1s to offset my +1.

A showstopper is just an issue!  Damnit guys, if you can't figure this out
I am going to remove the whole category from STATUS as being obviously bad
for your brain cells.  A problem that is a showstopper is simply AN OPINION
that there won't be a majority +1 approval of a release until it is fixed.
Obviously, if there is *ANY* debate on whether or not something is a
showstopper, then it doesn't belong in that category -- it doesn't
become a showstopper until it has the effect of stopping the show.
It is just an opinion until someone calls for a vote.

The only person who can declare an issue as being a showstopper is the RM,
since they are the one waiting until after the fix is made before creating a
tarball.  Otherwise, they are free to ignore *any* issue that doesn't
involve an outstanding veto on HEAD.  Those are the rules that we've lived
by for a long time now, and there is no way in hell that I'll support the
notion that anyone can stop a release without a formal vote.

Roy




Re: UseCanonicalName considered harmful

2002-02-05 Thread Roy T. Fielding

On Tue, Feb 05, 2002 at 12:58:35PM -0800, Ryan Bloom wrote:
  Rodent of Unusual Size wrote:
  
   When enabled, UseCanonicalName causes the server to
   create any server-self-referential URLs using the name
   by which it knows itself -- as opposed to what the client
   may have called it.  In many cases this is entirely
   reasonable and good -- but it completely borks up the
   ability to run on a non-default port.
  
  Sorry, possibly a bit of missing info here: the canonicalisation
  currently forces the use of the server's own name for itself,
  and the *default* port for the scheme (e.g., 80 for http:, 443
  for https:, ...).  If the server is listening only on port 8080,
  the canonicalisation process will result in a useless and incorrect
  URL.
 
 Shouldn't we fix the canonicalisation then?  If you have configured your
 server so that it can't be reached through the canonical name, then you
 have an incorrect config.  The problem right now, is that if you don't
 specify a port in the ServerName directive, we assume you want the
 default port, instead of assuming you want whatever port your server is
 configured for.  If we change our assumption to be the port that you
 have configured your server for (which makes more sense IMO), then we
 would have solved this bug, right?
 

I'm with Ryan (and we've had this discussion before).  The code is busted.
Just fix it -- no need for a config change.

Roy
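
[Editorial sketch: the fix Ryan proposes and Roy endorses -- a self-referential URL should use the port the server is actually configured on, falling back to the bare host form only when that port is the scheme default. The function name and signature below are illustrative, not httpd's.]

```c
#include <stdio.h>

/* Build a self-referential URL into out[], omitting the port only
 * when the configured port equals the scheme's default port. */
static void self_referential_url(char *out, size_t outlen,
                                 const char *scheme, const char *host,
                                 int configured_port, int default_port)
{
    if (configured_port == default_port)
        snprintf(out, outlen, "%s://%s/", scheme, host);
    else
        snprintf(out, outlen, "%s://%s:%d/", scheme, host, configured_port);
}
```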



