Re: Copyrights

2004-01-12 Thread Roy T. Fielding
On Saturday, January 3, 2004, at 11:10  AM, William A. Rowe, Jr. wrote:
At 06:32 AM 1/2/2004, you wrote:
[EMAIL PROTECTED] wrote:
 update license to 2004.
Why? Unless the file changes in 2004, the copyright doesn't. And, in 
any case, the earliest date applies, so it gets us nowhere.
In fairness this has been Roy's practice, so let's not beat on Andre.
Roy's logic is that this is a single work.  If someone obtains a new
tarball in 2004, all of the files will be marked with 2004, as some
changes will have (undoubtedly) been made.  Old tarballs of the
combined work retain their old copyright dates.
That logic seems a bit odd to me -- we only need to change the date in
the LICENSE file for it to apply to the collection as a whole.
The reason the copyright was being updated by me within all of the
source code files was because I have traditionally been the person who
can write a perl script that can do the update without also changing
a million other things.  The logic behind doing the update had nothing
to do with copyright law -- folks were just tired of the inconsistency
and hassle of remembering to do it when a file is significantly updated.
BTW, the real rule is that the date must include the year that the
expression was originally authored and each year thereafter in which
the expression contains an original derivative work that is separately
applicable to copyright.  Since that distinction is almost impossible
to determine in practice, software folks tend to use a date range that
begins when the file was created and ends in the latest year of
publication.  And, since we are open source, that means 2004.
The main reason for doing so has more to do with ending silly questions
about whether or not to update the year than it does with copyright law,
which for the most part doesn't care.  Also, it cuts down on irrelevant
change clutter from appearing in cvs commit messages for later review
and makes it easier to make global changes to the license itself.
Roy



Re: [PATCH 1.3] work around some annoyances with ab error handling

2004-01-14 Thread Roy T. Fielding
+1, though it would probably be better to add a parameter to err
to pass errno (or 0) rather than using the global in this way.
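A minimal sketch of that suggestion (the function and caller names are
assumed for illustration, not taken from the committed patch):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Pass the saved errno explicitly instead of reading the global,
     * so callers report the error that actually occurred. */
    static void err(const char *s, int errnum)
    {
        if (errnum) {
            fprintf(stderr, "%s: %s (%d)\n", s, strerror(errnum), errnum);
        }
        else {
            fprintf(stderr, "%s\n", s);
        }
        exit(1);
    }

Callers would then write err("connect", errno) after a failed system
call, or err("usage", 0) for plain messages.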
Roy



Re: httpd 2.1 project plan vs "LINK" method

2004-01-14 Thread Roy T. Fielding
On Wednesday, January 14, 2004, at 01:04  PM, Julian Reschke wrote:

From...:



"- Implementation of the "LINK" Method"

Can anybody tell me what this is?
See RFC 2068, section 19.6.1.2 and 19.6.2.4
(you might want to look at the description of PATCH as well).
Just ignore the project-plan page -- it hasn't been updated since 1996.

Roy



Re: [SECURITY-PATCH] cygwin: Apache 1.3.29 and below directory traversal vulnerability

2004-02-04 Thread Roy T. Fielding
-1.  Reject the request with a 400 error instead.

Roy



Re: apr/apr-util python dependence

2004-02-19 Thread Roy T. Fielding
However I completely disagree that Python (or Perl or PHP) is
a good choice for use in build systems.
As part of the configure process, I would agree with you, but as part of
buildconf, I disagree--not everyone needs to run buildconf--only
developers, and if you're a developer, it's *really* not asking that
much to have Python on your dev box.
Sure it is.  If I wasn't so busy I would have vetoed the change on
the grounds that it causes httpd to no longer be buildable by developers
on the Cray MP.  And no, I don't care whether anyone else thinks that
is an important requirement.  Creating entry barriers is what prevents
development on new platforms that you haven't even heard of yet.
We haven't been using sh/sed/awk as our build platform because we
thought those were good languages.  I'm sorry, but being too busy to
maintain the existing scripts is no excuse for rewriting them in a
less portable language.  As soon as someone has the time to write
it in a portable language, the python should be removed.
So no... switching to a shell script would not be beneficial, as it 
would
cut off future capabilities.
I doubt that.  .dsp and .dsw files are just other text files
which can easily be created using sh, grep, sed, tr etc.
Ick. Ick ick ick ick ick.  "Easily" is obviously a subjective term.
Who wants to write (and, more importantly, *maintain*) hundreds (or
thousands) of lines of /bin/sh code?  Not to mention the fact that
Python can be much more compact than /bin/sh after you hit a certain
level of complexity.
Irrelevant to the task at hand.

Anyway, I suppose that agreeing to disagree may be for the best here.
Subversion has required python to run autogen.sh for years now, and
it's been great for us.
Subversion has zero deployment when compared to httpd.  It should
be learning lessons from httpd's history, not casting it aside.
Roy



Re: fix_hostname() in 1.3.30-dev broken

2004-03-18 Thread Roy T. Fielding
Ugg... fix_hostname() in 1.3.30-dev (and previous) is broken such that
it does *not* update parsed_uri with the port and port_str values from
the Host header.  This means that with a request like:
% telnet localhost 
GET / HTTP/1.1
Host: foo:
the '' port value from the Host header is ignored!
When is fix_hostname() used?  If it is used anywhere other than
ProxyPass redirects, then it must ignore that port value.  To do
otherwise would introduce a security hole in servers that rely on
port blocking at firewalls.  I agree that ProxyPass needs to
know that port number, but that should be handled within the
proxy itself.
Roy



Re: 1.3 (apparently) can build bogus chunk headers

2004-03-18 Thread Roy T. Fielding
That is a common thread on http-wg.  Spaces are allowed after the
chunk-size, or at least will be allowed by future specs.  The whole
HTTP BNF needs to be revamped, eventually.
Roy



Re: mod_proxy distinguish cookies?

2004-05-04 Thread Roy T. Fielding
Rather just use URL parameters. As I recall RFC2616 does not consider a
request with a different cookie a different variant, so even if you
patch your server to allow it to differentiate between cookies, neither
the browsers nor the transparent proxies in the path of the request
will do what you want them to do :(
Well, that truly sucks. If you pass options around in params then
whenever someone follows a link posted by someone else, they will
inherit that person's options.
I do wish people would read the specification to refresh their memory
before summarizing.  RFC 2616 doesn't say anything about cookies -- it
doesn't have to because there are already several mechanisms for marking
a request or response as varying.  In this case
   Vary: Cookie

added to the response by the server module (the only component capable
of knowing how the resource varies) is sufficient for caching clients
that are compliant with HTTP/1.1.  Expires and Cache-Control are usually
added as well if HTTP/1.0 caches are a problem.
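For concreteness, a minimal 2.x handler sketch (the handler name and
response body are hypothetical; apr_table_mergen and ap_set_content_type
are the real httpd/APR calls):

    #include <string.h>
    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"

    static int cookie_vary_handler(request_rec *r)
    {
        if (strcmp(r->handler, "cookie-vary-example") != 0) {
            return DECLINED;
        }
        /* Tell HTTP/1.1 caches that this response varies by Cookie. */
        apr_table_mergen(r->headers_out, "Vary", "Cookie");
        /* Belt and suspenders for HTTP/1.0-era caches. */
        apr_table_setn(r->headers_out, "Cache-Control", "private");
        ap_set_content_type(r, "text/plain");
        ap_rputs("response selected by Cookie\n", r);
        return OK;
    }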
Roy



Re: cvs commit: httpd-2.0 STATUS

2004-07-29 Thread Roy T . Fielding
On Thursday, July 29, 2004, at 05:58  AM, André Malo wrote:
* "Mladen Turk" <[EMAIL PROTECTED]> wrote:
William A. Rowe, Jr. wrote:
  /* Scoreboard file, if there is one */
  #ifndef DEFAULT_SCOREBOARD
 @@ -118,6 +119,7 @@
  typedef struct {
  int server_limit;
  int thread_limit;
 +int lb_limit;
  ap_scoreboard_e sb_type;
   ap_generation_t running_generation;  /* the generation of children which
                                         * should still be serving requests. */
This definitely breaks binary compatibility.
Moving the lb_limit to the end of the struct will not break the binary
compatibility. Correct?
Not Correct. It *may* be the case, depending on who allocates the stuff.
Then the question to ask is whether any independent modules
(those that are not installed when the server is installed)
are likely to use that structure, and how they are expected
to use it.
I'd be surprised if it were even possible for an independent
module to allocate a scoreboard struct, but it has been a while
since I looked at that code.
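To make the question concrete, a simplified sketch (member types reduced
to int; this is not the real scoreboard declaration):

    /* Layout an old module was compiled against: */
    typedef struct {
        int server_limit;
        int thread_limit;
        int sb_type;              /* offset 8 */
        int running_generation;   /* offset 12 */
    } global_score_v1;

    /* Inserting lb_limit mid-struct shifts every later member: */
    typedef struct {
        int server_limit;
        int thread_limit;
        int lb_limit;
        int sb_type;              /* now offset 12: old binaries read garbage */
        int running_generation;
    } global_score_v2;

Appending lb_limit at the end keeps existing offsets stable, but a
module that allocates the struct itself (rather than using the pointer
the server hands out) still embeds the old sizeof, which is why "who
allocates the stuff" is the deciding question.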
Roy


Re: [PATCH] mod_cache fixes: #9

2004-08-02 Thread Roy T . Fielding
On Monday, August 2, 2004, at 10:55  AM, Justin Erenkrantz wrote:
Avoid confusion when reading mod_cache code.  write_ and read_ often
imply network code; save_ and load_ are more understandable prefixes in
this context.
Hmm, IIRC, "load"ing a cache means writing to it, not reading from it.
Why not just change them to "cache_write" and "cache_read"?
Or "store" and "recall"?
Kudos on the other changes -- those are some significant improvements.
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
What would happen in the case where httpd infers a body but no body is
actually there?
 * In the case of a 'connection close': nothing, an empty body would be
   found.
 * In the case of a 'persistent connection':
   * RFC2616 section 8.1.2.1:
     In order to remain persistent, all messages on the connection MUST
     have a self-defined message length (i.e., one not defined by closure
     of the connection), as described in section 4.4.
   Therefore 'persistent connection' is not allowed in this case.

Therefore it should be safe to assume that if no Content-Length and no
"chunked" headers are present, there MUST follow an optional body with
the connection-close afterwards, as 'persistent connection' MUST NOT be
present.
No, because looking for a body when no body is present is an expensive
operation.  An HTTP request with no content-length and no
transfer-encoding has no body, period:

   The presence of a message-body in a request is signaled by the
   inclusion of a Content-Length or Transfer-Encoding header field in
   the request's message-headers.
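In code terms, that rule reduces to something like this sketch (a
hypothetical helper; httpd's real handling lives in
ap_setup_client_block and friends):

    #include "apr_tables.h"

    /* An HTTP/1.1 request has a message body if and only if the client
     * declared one via Content-Length or Transfer-Encoding. */
    static int request_has_body(const apr_table_t *headers_in)
    {
        return apr_table_get(headers_in, "Content-Length") != NULL
            || apr_table_get(headers_in, "Transfer-Encoding") != NULL;
    }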
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
On Saturday, August 7, 2004, at 01:17  PM, André Malo wrote:
* Nick Kew <[EMAIL PROTECTED]> wrote:
It occurs to me that a similar situation arises with CGI and chunked
input.  The CGI spec guarantees a content-length header,
ah, no.
| * CONTENT_LENGTH
|
| The length of the said content as given by the client.
That's rather: *if* the client says something about the length, then
CONTENT_LENGTH tells about it. One should not trust it anyway, since
inflating compressed content with mod_deflate (for example) changes the
length, but changes neither the header nor the environment variable.
CGI would happen after mod_deflate.  If mod_deflate changes the request
body without also (un)setting content-length, then it is broken.
However, I suspect you are thinking of a response body, not the request.

Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
Thanks for the great support - httpd-2.0 HEAD 2004-08-07 really fixes it.
It even provides the env variable "proxy-sendchunks" to select between
compatible "Content-Length" (default) and performance-wise "chunked".
Sounds pretty complete to me.  Of course you'd need to stick to C-L
unless you *know* the backend accepts chunks.
If the client sent chunks, then it is safe to assume that the proxy
can send chunks as well.  Generally speaking, user agents only send
chunks to applications that they know will accept chunks.
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
If the client sent chunks, then it is safe to assume that the proxy
can send chunks as well.  Generally speaking, user agents only send
chunks to applications that they know will accept chunks.
The client could be sending chunks precisely because it's designed to
work with a proxy that is known to accept them.  That doesn't imply
any knowledge of the backend(s) proxied, which might be anything up to
and including the 'net in general.
Theoretically, yes.  However, in practice, that is never the case.
Either a user agent is using generic stuff like HTML forms, which
will always result in a content-length if there is a body, or it
is using custom software designed to work with custom server apps.
There are no other real-world examples, and thus it is safe to use
chunks if the client used chunks.
Also bear in mind that we were discussing (also) the case where the
request came with C-L but an input filter invalidated it.
I was not discussing that case.  The answer to that case is "don't do
that".  Fix the input filter if it is doing something stupid.

Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
CGI would happen after mod_deflate.  If mod_deflate changes the request
body without also (un)setting content-length, then it is broken.
Huh? Input filters are pulled, so they run *after* the handler has been
started. And CONTENT_LENGTH (if any - it's unset for chunked as well)
still reflects the Content-Length sent by the client. So the current
behaviour is correct in all cases.
No, it is broken in all cases.  CGI scripts cannot handle chunked input
and they cannot handle bodies without content-length -- that is how the
interface was designed.  You would have to define a CGI+ interface to
get some other behavior.
A CGI script therefore should never trust Content-Length, but just read
stdin until it meets an EOF.
We cannot redefine CGI.  It is a legacy crap interface.  Input filters
either have to be disabled for CGI or replaced with a buffering system
that takes HTTP/1.1 in and supplies CGI with the correct metadata and
body.

Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
A CGI script therefore should never trust Content-Length, but just
read stdin until it meets an EOF.
That is well-known to fail in CGI.  A CGI must use Content-Length.
Hmm, any pointers to where this is specified? I didn't have any
problems with this until now - but in trusting the C-L variable.
CGI doesn't require standard input to be closed by the server -- Apache
just happens to do that for the sake of old scripts that used fgets to
read line-by-line.  Other servers do things differently, which is why
reading til EOF does not work across platforms.
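A sketch of a CGI that follows the interface as designed -- trust
CONTENT_LENGTH and never read to EOF -- and, anticipating the next
message, answers 411 when it needs a body that was not declared
(illustrative code, not taken from CGI.pm or cgi-lib.pl):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *cl = getenv("CONTENT_LENGTH");
        char buf[4096];
        long remaining;

        if (cl == NULL) {
            /* This hypothetical script requires a request body. */
            printf("Status: 411 Length Required\r\n");
            printf("Content-Type: text/plain\r\n\r\n");
            return 0;
        }
        remaining = atol(cl);
        while (remaining > 0) {
            size_t want = remaining < (long)sizeof(buf)
                          ? (size_t)remaining : sizeof(buf);
            size_t got = fread(buf, 1, want, stdin);
            if (got == 0) {
                break;      /* short body: bail out, don't block on EOF */
            }
            /* ... process buf[0..got) ... */
            remaining -= (long)got;
        }
        printf("Content-Type: text/plain\r\n\r\nbody consumed\r\n");
        return 0;
    }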
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
On the contrary!  I myself have done a great deal of work on a proxy
for mobile devices, for a household-name Client.  The client software
makes certain assumptions of the proxy that would not be valid on the
Web at large.  But the backend *is* the web at large.
But then the client is either using non-standard HTML forms or
non-standard HTTP, neither of which is our concern.  It doesn't make
any sense to code a general proxy that assumes all chunked requests are
meant to be length-delimited just because someone might write
themselves a custom client that sends everything chunked.  Those
people can write their own proxies (or at least configure them to
be sub-optimal).
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
Since the Apache server can not know if CGI requires C-L, I conclude
that CGI scripts are broken if they require C-L and do not return
411 Length Required when the CGI/1.1 CONTENT_LENGTH environment
variable is not present.  It's too bad that CGI.pm and cgi-lib.pl
are both broken in this respect.  Fixing them would be simple and
that would take care of the vast majority of legacy apps.
CGI was defined in 1993.  HTTP/1.0 in 1993-95.  HTTP/1.1 in 1995-97.
I think it is far-fetched to believe that CGI scripts are broken
because they don't understand a feature introduced three years
after CGI was done.  I certainly didn't expect CGI scripts to
change when I was editing HTTP.
I probably expected that someone would define a successor to CGI
that was closer in alignment to HTTP, but that never happened
(instead, servlets were defined as a copy of the already-obsolete
CGI interface rather than something sensible like an HTTP proxy
interface).  *shrug*
CGI is supposed to be a simple interface for web programming.
It is not supposed to be a fast interface, a robust interface,
or a long-term interface -- just a simple one that works on
multiple independent web server implementations.
Roy


Re: POST without Content-Length

2004-08-07 Thread Roy T . Fielding
On Saturday, August 7, 2004, at 05:21  PM, Jan Kratochvil wrote:
This whole thread started due to a commercial GSM mobile phone:
	User-Agent: SonyEricssonP900/R102 Profile/MIDP-2.0 Configuration/CLDC-1.0 Rev/MR4

It sends HTTP/1.1 "chunked" requests to its HTTP proxy even when you
access general web sites. The "chunked" body is apparently created on
the fly, each "chunk" being a specific body element generated by a part
of the P900 code.
So stick a proxy in front of it that waits for the entire body
on every request and converts it to a content-length.  I am not
saying that it isn't possible -- it is just stupid for a
general-purpose proxy to do that (just as it is stupid to deploy
a cell phone with such a lazy HTTP implementation).
Roy


Re: cvs commit: apache-1.3/src/main http_protocol.c http_request.c

2004-08-27 Thread Roy T . Fielding
This doesn't look right.  Checking the notes table is a serious
performance hit, and why does it matter how many times keepalives
is incremented on an error path? There must be a better way to do this.
Roy
On Friday, August 27, 2004, at 04:44  PM, [EMAIL PROTECTED] wrote:
jim 2004/08/27 16:44:42
  Modified:src  CHANGES
   src/main http_protocol.c http_request.c
  Log:
  Make ap_set_keepalive more statefully aware, allowing it
  to be called multiple times (to correctly set keepalive)
  but not increment keepalives when not needed. This allows
  us to handle a special case where we need to discard
  body content "early"
  Revision  ChangesPath
  1.1949+3 -2  apache-1.3/src/CHANGES
  Index: CHANGES
  ===
  RCS file: /home/cvs/apache-1.3/src/CHANGES,v
  retrieving revision 1.1948
  retrieving revision 1.1949
  diff -u -r1.1948 -r1.1949
  --- CHANGES   27 Aug 2004 19:29:57 -  1.1948
  +++ CHANGES   27 Aug 2004 23:44:41 -  1.1949
  @@ -24,9 +24,10 @@
was not checked properly. This affects mod_usertrack and
core. PR 28218.  [André Malo]
  -  *) No longer breaks mod_dav, frontpage and others.  Backs out
  -     a patch which prevented discarding the request body for requests
  +  *) No longer breaks mod_dav, frontpage and others.  Repair a patch
  +     in 1.3.31 which prevented discarding the request body for requests
        that will be keptalive but are not currently keptalive. PR 29237.
  +     [Jim Jagielski]

 *) COMPATIBILITY: Added new compile-time flag: UCN_OFF_HONOR_PHYSICAL_PORT.
    It controls how UseCanonicalName Off determines the port value if


  1.336 +12 -1 apache-1.3/src/main/http_protocol.c
  Index: http_protocol.c
  ===
  RCS file: /home/cvs/apache-1.3/src/main/http_protocol.c,v
  retrieving revision 1.335
  retrieving revision 1.336
  diff -u -r1.335 -r1.336
  --- http_protocol.c	15 Apr 2004 15:51:51 -	1.335
  +++ http_protocol.c	27 Aug 2004 23:44:41 -	1.336
  @@ -391,6 +391,7 @@
    int wimpy = ap_find_token(r->pool,
                   ap_table_get(r->headers_out, "Connection"), "close");
    const char *conn = ap_table_get(r->headers_in, "Connection");
   +const char *herebefore = ap_table_get(r->notes, "ap_set_keepalive-called");

    /* The following convoluted conditional determines whether or not
     * the current connection should remain persistent after this response
   @@ -442,7 +443,17 @@
    int left = r->server->keep_alive_max - r->connection->keepalives;

   r->connection->keepalive = 1;
  -r->connection->keepalives++;
  + /*
  +  * ap_set_keepalive could be called multiple times (eg: in
  +  * ap_die() followed by ap_send_http_header()) during this
  +  * one single request. To ensure that we don't incorrectly
  +  * increment the keepalives counter for each call, we
  +  * use notes to store a state flag.
  +  */
  + if (!herebefore) {
  +r->connection->keepalives++;
  +ap_table_setn(r->notes, "ap_set_keepalive-called", "1");
  + }
   /* If they sent a Keep-Alive token, send one back */
   if (ka_sent) {

  1.176 +7 -1  apache-1.3/src/main/http_request.c
  Index: http_request.c
  ===
  RCS file: /home/cvs/apache-1.3/src/main/http_request.c,v
  retrieving revision 1.175
  retrieving revision 1.176
  diff -u -r1.175 -r1.176
  --- http_request.c28 May 2004 12:07:02 -  1.175
  +++ http_request.c27 Aug 2004 23:44:41 -  1.176
  @@ -1051,12 +1051,18 @@
   }
   /*
  + * We need to ensure that r->connection->keepalive is valid in
  + * order to determine if we can discard the request body below.
  + */
  +ap_set_keepalive(r);
  +
  +/*
 * If we want to keep the connection, be sure that the request body
 * (if any) has been read.
 */
    if ((r->status != HTTP_NOT_MODIFIED) && (r->status != HTTP_NO_CONTENT)
        && !ap_status_drops_connection(r->status)
   -    && r->connection && (r->connection->keepalive != -1)) {
  +&& r->connection && (r->connection->keepalive > 0)) {

   (void) ap_discard_request_body(r);
   }





Re: Bug 18388: cookies

2004-08-31 Thread Roy T. Fielding
[sent this yesterday, but it bounced]
personally, I tend to see it more from doug and nick's perspective and
would be inclined to fix a long-standing issue that never made sense to
me, but roy wrote the book and has unique insight here, so...
Umm, not really -- cookies are just broken by design.  That's why they
aren't in HTTP/1.1 and why they are not listed in 304. However, it is
kind of pointless to only partly implement them, so go ahead and add
Set-Cookie and Set-Cookie2 to the 304 list.  Both the original Netscape
spec and RFC 2965 allow Set-Cookie* to be sent on any response and
expect it to be passed along in a 304, so we might as well allow folks
to do totally moronic things with cookies.
Roy


Re: cvs commit: httpd-2.0/modules/generators mod_info.c

2004-09-03 Thread Roy T. Fielding
-0.9.   This change needs review.  The coding style should stick to
        the httpd guidelines.  Use of sprintf into a small fixed-size
        buffer is unwise (even when we know it fits) -- use apr_snprintf.
        Also, we don't add comments to a credit log inside the file
        (instead of just CHANGES), and whether or not God intended
        recursion is not relevant (though that part of the changes
        does look good).

It is unfortunate that the diff gets confused about the new functions
and treats them as changes -- it makes review a pain in the butt.
Roy
On Sep 2, 2004, at 7:31 PM, [EMAIL PROTECTED] wrote:
pquerna 2004/09/02 19:31:06
  Modified:.CHANGES
   modules/generators mod_info.c
  Log:
  Rewrote config tree walk using recursion the way God intended.
  Added ?config option. Added printout of config filename and line numbers.

  PR: 30919
  Submitted by: Rici Lake 
  Revision  ChangesPath
  1.1584+4 -0  httpd-2.0/CHANGES
  Index: CHANGES
  ===
  RCS file: /home/cvs/httpd-2.0/CHANGES,v
  retrieving revision 1.1583
  retrieving revision 1.1584
  diff -u -r1.1583 -r1.1584
  --- CHANGES   2 Sep 2004 19:49:19 -   1.1583
  +++ CHANGES   3 Sep 2004 02:31:05 -   1.1584
  @@ -2,6 +2,10 @@
 [Remove entries to the current 2.0 section below, when backported]
  +  *) mod_info: Rewrote config tree walk using a recursive function.
  +     Added ?config option. Added printout of config filename and
  +     line numbers.
  +     [Rici Lake , Paul Querna]
  +
 *) mod_proxy: Fix type error that prevents proxy-sendchunks from working.
    [Justin Erenkrantz]


  1.57  +98 -95httpd-2.0/modules/generators/mod_info.c
  Index: mod_info.c
  ===
  RCS file: /home/cvs/httpd-2.0/modules/generators/mod_info.c,v
  retrieving revision 1.56
  retrieving revision 1.57
  diff -u -r1.56 -r1.57
  --- mod_info.c	9 Feb 2004 20:29:19 -	1.56
  +++ mod_info.c	3 Sep 2004 02:31:06 -	1.57
  @@ -25,6 +25,7 @@
* GET /server-info?server - Returns server configuration only
 * GET /server-info?module_name - Returns configuration for a single module
* GET /server-info?list - Returns quick list of included modules
  + * GET /server-info?config - Returns full configuration
*
* Rasmus Lerdorf <[EMAIL PROTECTED]>, May 1996
*
  @@ -39,6 +40,12 @@
 * 8.11.00 Port to Apache 2.0.  Read configuation from the configuration
 * tree rather than reparse the entire configuation file.
*
  + * Rici Lake <[EMAIL PROTECTED]>
  + *
  + * 2004-08-28 Rewrote config tree walk using recursion the way God intended.
  + * Added ?config option. Added printout of config filename and line numbers.
  + * Fixed indentation.
  + *
*/

   #define CORE_PRIVATE
  @@ -86,108 +93,97 @@
   return new;
   }
  -static void mod_info_html_cmd_string(request_rec *r, const char *string,
  -                                     int close)
  +static void mod_info_indent(request_rec * r, int nest, const char *thisfn,
  +                            int linenum)
   {
  -const char *s;
  -
  -s = string;
  -/* keep space for \0 byte */
  -while (*s) {
  -    if (*s == '<') {
  -        if (close) {
  -            ap_rputs("&lt;/", r);
  -        }
  -        else {
  -            ap_rputs("&lt;", r);
  -        }
  -    }
  -    else if (*s == '>') {
  -        ap_rputs("&gt;", r);
  -    }
  -    else if (*s == '&') {
  -        ap_rputs("&amp;", r);
  -    }
  -    else if (*s == ' ') {
  -        if (close) {
  -            ap_rputs("&gt;", r);
  -            break;
  -        } else {
  -            ap_rputc(*s, r);
  -        }
  -    } else {
  -        ap_rputc(*s, r);
  -    }
  -    s++;
  +int i;
  +const char *prevfn = ap_get_module_config(r->request_config, &info_module);
  +char buf[32];
  +if (thisfn == NULL) thisfn = "*UNKNOWN*";
  +if (prevfn == NULL || 0 != strcmp(prevfn, thisfn)) {
  +    thisfn = ap_escape_html(r->pool, thisfn);
  +    ap_rprintf(r, "In file: %s\n", thisfn);
  +    ap_set_module_config(r->request_config, &info_module, thisfn);
   }
  +ap_rputs("", r);
  +if (linenum > 0) sprintf(buf, "%d", linenum);
  +else buf[0] = '\0';
  +for (i = strlen(buf); i < 4; ++i) ap_rputs(" ", r);
  +ap_rputs(buf, r);
  +ap_rputs(": ", r);
  +for (i = 1; i <= nest; ++i) ap_rputs("  ", r);
   }

  -static void mod_info_module_cmds(request_rec * r, const command_rec * cmds,
  -                                 ap_directive_t * conftree)
  +static void mod_info_show_cmd(request_rec * r, const ap_directive_t * dir,
  +                              int nest)
   {
  -const command_rec *cmd;
  -ap_directive_t *tmptree = conftree;
  -char htmlstring[MAX_STRING_LEN];
  -int block_start = 0;
  -int nest = 0;
  -
  -while (tmptree != NULL) {
  -	cmd = cmds;

Re: cvs commit: httpd-2.0/modules/generators mod_info.c

2004-09-03 Thread Roy T. Fielding
On Sep 3, 2004, at 6:06 PM, Paul Querna wrote:
On Fri, 2004-09-03 at 16:06 -0700, Roy T. Fielding wrote:
-0.9.   This change needs review.  The coding style should stick to
 the httpd guidelines.
Yes, but I did not commit a correct style patch, because that would
have been *impossible* to review.  The style of mod_info does not
follow the guidelines, and Rici's original patch was much larger
(including style fixes).  My plan was to make his functional changes
first, and come back later this weekend for a style cleanup of mod_info.
okay, that's a reasonable plan -- feel free to mention that in the
cvs log next time.
 Also, we don't add comments to a credit log inside the file
 (instead of just CHANGES), and whether or not God intended
 recursion is not relevant (though that part of the changes
 does look good).
What is the policy on this?  Should we remove the old ones from
existing files?  Sort of like how @author tags have been removed from
other ASF projects...
We keep around things like "Originally written by ..." for entire
files, but nothing else.  Feel free to send a patch to the list if
you are unsure about something -- stuff like that takes time to
figure out the unwritten policy.
It is unfortunate that the diff gets confused about the new functions
and treats them as changes -- it makes review a pain in the butt.
Like I said, this is the minimal patch to get the functional
differences for mod_info.  mod_info already did not fit the style
guide, and making functional and style changes in the same commit is
even harder to review.
Well, in this case the cvs diff screwed the review anyway, but I
was actually talking about the style of the new code.  Feel free
to fix that later, but I figured I should mention it just in case
you hadn't seen the guidelines yet.
Style is important because it makes it easier to review changes
like that one.  I actually prefer to commit the style fix first
and then the change, since things like the usage of sprintf are
easier to see under our normal style.  This part just makes me
uncomfortable, like walking down a dark alley at night:
  +char buf[32];
  +if (thisfn == NULL) thisfn = "*UNKNOWN*";
  +if (prevfn == NULL || 0 != strcmp(prevfn, thisfn)) {
  +    thisfn = ap_escape_html(r->pool, thisfn);
  +    ap_rprintf(r, "In file: %s\n", thisfn);
  +    ap_set_module_config(r->request_config, &info_module, thisfn);
   }
  +ap_rputs("", r);
  +if (linenum > 0) sprintf(buf, "%d", linenum);
  +else buf[0] = '\0';
  +for (i = strlen(buf); i < 4; ++i) ap_rputs(" ", r);
  +ap_rputs(buf, r);
  +ap_rputs(": ", r);
  +for (i = 1; i <= nest; ++i) ap_rputs("  ", r);

Roy


Re: mod_cache 2 questions

2004-09-07 Thread Roy T. Fielding
Just a thought... why does this restriction exist in the first place?
Because, a long time ago, queries contained mostly user-defined
strings that were not likely to result in a later hit, so it wasn't
worth the effort.  Now, some web applications use a bogus query
string in order to override caching because of this default behavior.
It would be fine with me to change the default, but that may result
in very inefficient caches.  It is better to default to no-cache unless
it is specifically configured or indicated (cache-control) as cacheable.
Roy


Re: multiple host headers

2004-09-13 Thread Roy T. Fielding
Why do we merge multiple Host headers?  I am getting weird things like
this for headers_in host: "www.cnn.com, www.cnn.com"
This may be correct, but it caught me by surprise!
Well, it is an invalid HTTP request.  The question is, should we
"fix" it for the client by choosing either the first or last field
(potentially masking a security hole), or simply respond with 400?
What is the user agent?
Roy


Re: Moving httpd-2.0 to Subversion

2004-09-17 Thread Roy T. Fielding
+1  Subversion still lacks a few features in commit notices, and
I don't see the equivalent of viewcvs diff (must be hidden
somewhere), but the developer interaction is much better.
What are we going to use for trunk names?  httpd-1.3 and httpd-2?
I wonder how hard it would be to make cvs2svn overlay apache-1.*
with httpd-2.* into one httpd repo.  Probably not worth it given
all of the parallel development.
Roy


Re: cvs commit: httpd-2.0/server core.c protocol.c request.c scoreboard.c util.c util_script.c

2004-10-22 Thread Roy T . Fielding
whoa!  -1
Was this even discussed on the list?  You just changed the
entire module API and introduced a dozen potential security holes.
Why on earth is it changing nvec to apr_size_t and then downcasting
its use?  Why is any of this even needed?
Roy
On Oct 22, 2004, at 8:22 AM, [EMAIL PROTECTED] wrote:
ake 2004/10/22 08:22:05
  Modified:.CHANGES
   include  ap_mmn.h http_protocol.h httpd.h scoreboard.h
util_script.h
   modules/http http_protocol.c
   server   core.c protocol.c request.c scoreboard.c util.c
util_script.c
  Log:
  WIN64: API changes to clean up Windows 64bit compile warnings
  Revision  ChangesPath
  1.1614+3 -0  httpd-2.0/CHANGES
  Index: CHANGES
  ===
  RCS file: /home/cvs/httpd-2.0/CHANGES,v
  retrieving revision 1.1613
  retrieving revision 1.1614
  diff -u -r1.1613 -r1.1614
  --- CHANGES   18 Oct 2004 00:49:30 -  1.1613
  +++ CHANGES   22 Oct 2004 15:22:03 -  1.1614
  @@ -2,6 +2,9 @@
 [Remove entries to the current 2.0 section below, when backported]
  +  *) WIN64: API changes to clean up Windows 64bit compile warnings
  + [Allan Edwards]
  +
 *) mod_cache: CacheDisable will only disable the URLs it was meant to
    disable, not all caching. PR 31128.
    [Edward Rudd , Paul Querna]


  1.70  +3 -2  httpd-2.0/include/ap_mmn.h
  Index: ap_mmn.h
  ===
  RCS file: /home/cvs/httpd-2.0/include/ap_mmn.h,v
  retrieving revision 1.69
  retrieving revision 1.70
  diff -u -r1.69 -r1.70
  --- ap_mmn.h	4 Jun 2004 22:40:46 -	1.69
  +++ ap_mmn.h	22 Oct 2004 15:22:04 -	1.70
  @@ -84,14 +84,15 @@
 *  changed ap_add_module, ap_add_loaded_module,
 *  ap_setup_prelinked_modules, ap_process_resource_config
 * 20040425.1 (2.1.0-dev) Added ap_module_symbol_t and ap_prelinked_module_symbols
  + * 20041022   (2.1.0-dev) API changes to clean up 64bit compiles
*/

   #define MODULE_MAGIC_COOKIE 0x41503230UL /* "AP20" */
   #ifndef MODULE_MAGIC_NUMBER_MAJOR
  -#define MODULE_MAGIC_NUMBER_MAJOR 20040425
  +#define MODULE_MAGIC_NUMBER_MAJOR 20041022
   #endif
  -#define MODULE_MAGIC_NUMBER_MINOR 1 /* 0...n */
  +#define MODULE_MAGIC_NUMBER_MINOR 0 /* 0...n */
   /**
 * Determine if the server's current MODULE_MAGIC_NUMBER is at least a


  1.93  +10 -10httpd-2.0/include/http_protocol.h
  Index: http_protocol.h
  ===
  RCS file: /home/cvs/httpd-2.0/include/http_protocol.h,v
  retrieving revision 1.92
  retrieving revision 1.93
  diff -u -r1.92 -r1.93
  --- http_protocol.h   18 Jul 2004 20:06:38 -  1.92
  +++ http_protocol.h   22 Oct 2004 15:22:04 -  1.93
  @@ -338,9 +338,9 @@
* @param str The string to output
* @param r The current request
* @return The number of bytes sent
  - * @deffunc int ap_rputs(const char *str, request_rec *r)
  + * @deffunc apr_ssize_t ap_rputs(const char *str, request_rec *r)
*/
  -AP_DECLARE(int) ap_rputs(const char *str, request_rec *r);
  +AP_DECLARE(apr_ssize_t) ap_rputs(const char *str, request_rec *r);
   /**
* Write a buffer for the current request
  @@ -357,9 +357,9 @@
* @param r The current request
* @param ... The strings to write
* @return The number of bytes sent
  - * @deffunc int ap_rvputs(request_rec *r, ...)
  + * @deffunc apr_ssize_t ap_rvputs(request_rec *r, ...)
*/
  -AP_DECLARE_NONSTD(int) ap_rvputs(request_rec *r,...);
  +AP_DECLARE_NONSTD(apr_ssize_t) ap_rvputs(request_rec *r,...);
   /**
* Output data to the client in a printf format
  @@ -367,9 +367,9 @@
* @param fmt The format string
* @param vlist The arguments to use to fill out the format string
* @return The number of bytes sent
  - * @deffunc int ap_vrprintf(request_rec *r, const char *fmt, va_list vlist)
  + * @deffunc apr_ssize_t ap_vrprintf(request_rec *r, const char *fmt, va_list vlist)
 */
  -AP_DECLARE(int) ap_vrprintf(request_rec *r, const char *fmt, va_list vlist);
  +AP_DECLARE(apr_ssize_t) ap_vrprintf(request_rec *r, const char *fmt, va_list vlist);

   /**
* Output data to the client in a printf format
  @@ -377,9 +377,9 @@
* @param fmt The format string
* @param ... The arguments to use to fill out the format string
* @return The number of bytes sent
  - * @deffunc int ap_rprintf(request_rec *r, const char *fmt, ...)
  + * @deffunc apr_ssize_t ap_rprintf(request_rec *r, const char *fmt, ...)
 */
  -AP_DECLARE_NONSTD(int) ap_rprintf(request_rec *r, const char *fmt,...)
  +AP_DECLARE_NONSTD(apr_ssize_t) ap_rprintf(request_rec *r, const char *fmt,...)
   __attribute__((format(printf,2,3)));
   /**
* Flush all of the data for the current

Re: cvs commit: httpd-2.0/server core.c protocol.c request.c scoreboard.c util.c util_script.c

2004-10-22 Thread Roy T. Fielding
The precursor to this patch "[PATCH] WIN64: httpd API changes"
was posted 10/7 so I thought we had had suitable time for
discussion. I have addressed the one issue that was raised.
That explains why I didn't see it -- I was in Switzerland.
There have also been several other threads on the httpd & apr
lists and the feedback I had received indicated that it was
appropriate to sanitize the 64 bit compile even if it incurred
httpd API changes. However, if there are specific security issues
that this has brought up, I am more than anxious to address them.
Are you opposed to changing the API to fix 64 bit warnings, or
are there specific issues that I can address and continue to
move forward rather than back out the entire patch?
I am opposed to changing the API just to mask warnings within
the implementations.  In any case, these changes cannot possibly
be correct -- the API has to be changed from the bottom-up, not
top-down.  It is far safer to cast module-provided data from int
up to 64 bits than it is to cast it down from 64 bit to int.
Fix mismatches of the standard library functions first, then
fix APR, then carefully change our implementation so that it works
efficiently on the right data types as provided by APR, and finally
fix the API so that modules can work.  If that isn't possible, then
just live with those warnings on win64.
In any case, changes like
  +/* Cast to eliminate 64 bit warning */
  +rv = apr_file_gets(buf, (int)bufsiz, cfp);
are absolutely forbidden.
Roy


Re: cvs commit: httpd-2.0/server protocol.c

2004-10-25 Thread Roy T . Fielding
What would make more sense is "Error while reading HTTP request line
(remote browser didn't send a request?)". This indicates exactly what
httpd was trying to do when the error occurred, and gives a hint of
why the error might have occurred.
We used to have such a message.  It was removed from httpd because too
many users complained about the log file growing too fast, particularly
since that is the message which will be logged every time a browser
connects and then its initial request packet gets dropped by the
network.

This is not an error that the server admin can solve -- it is normal
life on the Internet.  We really shouldn't be logging it except when
on DEBUG level.
Roy


Re: More informative SVN subject line (Re: svn commit: r76284 - apr/apr/trunk)

2004-11-19 Thread Roy T. Fielding
I happen to agree that the commit messages suck, but the right thing
to do is have a look at the script and suggest a patch on the
infrastructure mailing list.  I would do it myself, but have a paper
to write first.  I also think that placement of the Log text after
the long list of files is obviously broken, and the commit template
does not include prefixes for
   Submitted by:
   Reviewed by:
   Obtained from:
which are really really important.
I don't think that this discussion belongs on every mailing list
that uses subversion -- move it to infrastructure.
Roy


Re: [PATCH] another mod_deflate vs 304 response case

2004-11-22 Thread Roy T. Fielding
Quoting "William A. Rowe, Jr." <[EMAIL PROTECTED]>:

> >Okay, but why the next three lines?  Why would Content-Encoding: gzip
> >*ever* be set on a 304?

Because Content-* header fields in a 304 response describe
what the response entity would contain if it were a 200 response.
Therefore, the header field must be the same as it would be for
a 200.  The body must be dropped by the HTTP filter.
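So an illustrative 304 (header values invented for the example) carries
the same Content-* metadata as the 200 it stands in for, with no body:

    HTTP/1.1 304 Not Modified
    Date: Mon, 22 Nov 2004 17:00:00 GMT
    ETag: "abc123-gzip"
    Content-Encoding: gzip
    Vary: Accept-Encoding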

Roy


Re: svn commit: r109866 - /httpd/httpd/trunk/modules/loggers/mod_log_config.c

2004-12-06 Thread Roy T. Fielding
-1 (veto) -- the message is copied to a single buffer for the write
because that is the only way to guarantee an atomic append under
Unix without locks, thus preventing multiple children from scribbling
over each other's log entries.  Please revert this change ASAP.
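The invariant being defended looks roughly like this sketch (simplified;
mod_log_config's real code formats into a pool-allocated buffer):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* One log record == one write() on an O_APPEND descriptor, so
     * entries from different children cannot interleave. */
    static void log_line(int fd, const char *host, const char *reqline,
                         int status, long bytes)
    {
        char buf[4096];
        int len = snprintf(buf, sizeof(buf), "%s \"%s\" %d %ld\n",
                           host, reqline, status, bytes);
        if (len > (int)sizeof(buf)) {
            len = (int)sizeof(buf);   /* truncated, but still one record */
        }
        if (len > 0) {
            (void)write(fd, buf, (size_t)len);
        }
    }

    /* The descriptor would be opened once, e.g.:
     *   int fd = open("access_log", O_WRONLY | O_APPEND | O_CREAT, 0644);
     */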
Roy
On Dec 5, 2004, at 12:58 PM, Paul Querna wrote:
Joe Orton wrote:
On Sun, Dec 05, 2004 at 07:05:23AM -, Paul Querna wrote:
Author: pquerna
Date: Sat Dec  4 23:05:23 2004
New Revision: 109866
URL: http://svn.apache.org/viewcvs?view=rev&rev=109866
Log:
mod_log_config.c: Use iovecs to write the log line to eliminate a memcpy
IIRC, writev'ing several small blocks to a file is actually generally
more expensive than doing a memcpy in userspace and calling write.
Did you benchmark this to be faster/better/...?
I did a local mini-benchmark of write w/ memcpy vs writev... and they
came out to almost exactly the same on average with small sets of data.

-Paul




Re: svn commit: r109866 - /httpd/httpd/trunk/modules/loggers/mod_log_config.c

2004-12-07 Thread Roy T . Fielding
On Dec 6, 2004, at 7:53 PM, Paul Querna wrote:
I do not agree that using writev() would allow multiple children to
scribble over each other's log entries.  I have not been able to cause
this to happen on my local machines.
You might want to consider what happens on all of the not-so-recent
operating systems that run Apache, especially those that don't even
implement writev.  See what happens when APR_HAVE_WRITEV is not
defined to 1.

Roy


Re: svn commit: r111386 - /httpd/httpd/trunk/CHANGES /httpd/httpd/trunk/include/httpd.h /httpd/httpd/trunk/modules/http/http_protocol.c

2004-12-09 Thread Roy T . Fielding
On Dec 9, 2004, at 8:46 AM, Justin Erenkrantz wrote:
--On Thursday, December 9, 2004 11:26 AM -0500 Geoffrey Young <[EMAIL PROTECTED]> wrote:

well, I guess it depends on whether the goal is to help (for some
definition of help) support official HTTP variants (if indeed that's
what 3229 is), or just for things we actually take the time to
implement fully.
I think it only makes sense for us to have the status lines for the
things we actually implement.  I'm not going to veto it, but just that
I think it's foolish for us to add status lines for the goofy
'variants' of HTTP that we'll never support.  IETF's stamp of approval
means little as they've produced their fair share of crappy RFCs
trying to hop on the HTTP bandwagon.
I will veto it.  -1.  I consider 3229 to be harmful to HTTP and do not
wish to support it in the current form.  Folks can still implement it
with extensions if needed.

Roy


removing AddDefaultCharset from config file

2004-12-10 Thread Roy T . Fielding
I've looked back at the Jan-Feb 2000 discussion regarding cross-site
scripting in an attempt to find out why AddDefaultCharset is being
set to iso-8859-1 in 2.x (but not in 1.3.x).  I can't find any rationale
for that behavior -- in fact, several people pointed out that it would
be inappropriate to set any default, which is why it was not set in 1.3.
The purpose of AddDefaultCharset is to provide sites that suffer from
poorly written scripts and cross-site scripting issues an immediate
handle by which they can force a single charset.  As it turns out,
forcing a charset does nothing to reduce the problem of cross-site
scripting because the browser will either auto-detect (and switch) or
the user, upon seeing a bunch of gibberish, will go up to the menu and
switch the charset just out of curiosity.  The real solutions were to
stop reflecting client-provided data back to the browser without first
carefully validating or percent-encoding it.
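The module-level version of that fix is a one-liner; a sketch using
httpd's existing helper (the surrounding function is hypothetical):

    #include "httpd.h"
    #include "http_protocol.h"

    /* Never echo client-supplied data raw: HTML-escape it first. */
    static void echo_query_safely(request_rec *r)
    {
        const char *q = r->args ? r->args : "";
        ap_rprintf(r, "<p>You searched for: %s</p>\n",
                   ap_escape_html(r->pool, q));
    }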

To make matters worse, the documentation in the default config is
completely wrong:
# Specify a default charset for all pages sent out. This is
# always a good idea and opens the door for future internationalisation
# of your web site, should you ever want it. Specifying it as
# a default does little harm; as the standard dictates that a page
# is in iso-8859-1 (latin1) unless specified otherwise i.e. you
# are merely stating the obvious. There are also some security
# reasons in browsers, related to javascript and URL parsing
# which encourage you to always set a default char set.
#
AddDefaultCharset ISO-8859-1

First, it only applies to text/plain and text/html, in spite of the
convoluted implementation in core.c.  Second, setting a default in the
server config actually hinders internationalization because normal
authors don't understand config files.  Furthermore, it causes harm
because it overrides the indicators present in the content.  There is
some argument to make for doing that to CGI and SSI output for the sake
of protecting idiots from themselves, but not for flat files that do
not contain any generated content.  And the security reasons are not
fixed by overriding the charset anyway -- that just makes it easier for
people to ignore the real problems of unencoded data.  All that is
really needed is the availability of the directive so that *if* a site
or tree is subject to the XSS problem, then the server admins can set a
default.

In short, unless someone can think of a justification for the above
being in the default config for 2.x, I will delete it soon and close
the festering PR 23421.
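For a site that genuinely needs the workaround, the directive can be
scoped instead of global -- a sketch (path hypothetical):

    # Do not set a server-wide default charset.  Where legacy scripts
    # echo untrusted input, force one only for that tree:
    <Directory "/var/www/legacy-cgi">
        AddDefaultCharset ISO-8859-1
    </Directory>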
Roy


Re: removing AddDefaultCharset from config file

2004-12-10 Thread Roy T. Fielding
On Dec 10, 2004, at 4:19 AM, Joe Orton wrote:
My understanding was that the forced default charset *does* prevent
browsers (or maybe, MSIE) from guessing the charset as UTF-7; UTF-7
being the special case as it's already an "escaped" encoding and hence
defies normal escaping-of-client-provided-data tricks.  Is that not
correct?
Yes and no -- it is both the source of the problem and the biggest
reason that we should NOT set charset as a default.
Consider the following two identical content resources, the first
being sent as
 Content-Type: text/html; charset=ISO-8859-15
  http://www-uxsup.csx.cam.ac.uk/~jw35/docs/cross-site-demo.html
and the second being sent with only
 Content-Type: text/html
  http://www.ics.uci.edu/~fielding/xss-demo.html
I've tested the above with all of my browsers.  Safari and MSIE-Mac do
not support utf-7 at all.  Firefox (Mac and Win) supports utf-7 but
only when manually set (it does not auto-detect utf-7, even when read
from a local file).

MSIE (Windows), of course, does the least intelligent thing -- it does
not allow users to select utf-7 manually, but does auto-detect and
interpret utf-7 if it is read from a local file, or if "auto-detect" is
enabled regardless of the content-type charset parameter -- setting
charset has no effect on MSIE's auto-detect results.  In other words,
it is only at risk for XSS via utf-7 if auto-detect is enabled.

The problem we have created is that AddDefaultCharset causes entire
sites to default to one charset, usually iso-8859-1.  And because it
is set by default (no brains spent thinking about the right value),
it is often set that way even when installed in non-Latin countries
[and there is also a problem in Europe, since iso-8859-15 is where
the euro symbol was added].  As a result, normal users get a higher
frequency of wrong charset declarations in HTTP, for which the only
"standards-compliant" solution short of manually adjusting every
page received is to turn on auto-detect!  In other words, our default
is now causing more users to be vulnerable to utf-7 XSS attacks than
they would otherwise be if we never sent a default charset.
In any case, the only tutorials on cross-site scripting that still
emphasize setting charset are our own (written by Marc) and CERT's
(based on input from Marc).  Those were intended to be temporary
workarounds until folks had a chance to fix the real problems, which
were non-validating scripts that echo untrusted content to users.
After doing another afternoon of research on this one, I am now
convinced that AddDefaultCharset does far more harm than good.

Roy


Re: ALPN patch comments

2015-06-04 Thread Roy T. Fielding
> On Jun 4, 2015, at 9:19 AM, Stefan Eissing wrote:
> 
> I think we need to clarify some things:
> 
> 1. ALPN is initiated by the client. When a client does not send ALPN as part 
> of client helo, the SSL alpn callbacks are not invoked and the server does 
> not send any ALPN information back. This is different from NPN.
> 
> 2. SSLAlpnPreference is intended as the final word to make a choice when 
> either one ALPN callback proposes many protocols or of several callbacks 
> propose protocols. So, when mod_spdy and mod_h2 are active *and* the client 
> claims to support spdy/3.1 and h2, the SSLAlpnPreference determines what gets 
> chosen and sent to the client. This was not necessary with NPN as in that SSL 
> extension the client was making the choice.
> 
> 3. Independent of the client proposal, as I read the spec, the server is free 
> to chose any protocol identifier it desires. This might result in the client 
> closing the connection. So, if the client uses ALPN and the server does not 
> want/cannot do/is configured not to support any of the clients proposals, 
> httpd can always send back „http/1.1“ since this is what it always supports.
> 
> In this light, and correct me if I’m wrong, I see no benefit and only 
> potential harm by introducing a „SSLALpn on|off“ configuration directive. I 
> think the current implementation covers all use cases and if one is missing, 
> please point out the scenario.

Ultimately, what we need is a single configuration that defines how the
host will respond to connections.  I suggest that this should be done
on a per-vhost basis if SNI is present, or a per-server basis if not.
It should not depend on either ALPN or TLS being present.  This needs
to be defined by the server admin, not hard-coded in the h2 code.  We
should also have a way for the end of a response to reset the
connection to a possibly different set of protocols (i.e., Upgrade),
but that's an independent concern.

Hence, we might need a configurable way to ignore a client's ALPN,
though I doubt that "SSLalpn off" is the right way to express that.
Likewise, neither is SSLAlpnPreference.  The server protocol(s)
preference should be independent of the session/connection protocol.
Our internal configuration and use of ALPN should be based on the
overall configuration, not a configuration specific to the SSL code.
Many configurations won't include ALPN.
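A sketch of the kind of per-vhost declaration this implies (using the
Protocols directive already under discussion; the host name and values
are illustrative):

    <VirtualHost *:443>
        ServerName example.org
        Protocols h2 http/1.1
    </VirtualHost>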

> As with the register once or on every connection optimization, yes, there 
> might be some performance to gain. But I think it is not so straightforward 
> to implement this, as not only the address and port influences this but also 
> the SNI that gets send in the client helo. So, one would have at least to 
> verify that registering an ALPN callback *after* the connection is open and 
> SNI has been received has any effect. 

I would hope that SNI is received before our connection is established
(our connection is the virtual session over TLS, not the TCP
connection).  There shouldn't be any need to mess with SSL internals
within mod_h2.  Otherwise, it will be difficult to support h2c and h2
over SSL with the same code.

Roy



Re: RFC 7540 (HTTP/2) wrt reusable connections and SNI

2015-06-09 Thread Roy T. Fielding
> On Jun 9, 2015, at 3:42 AM, Yann Ylavic wrote:
> 
> It just needed to get out :)
> 
> But I agree that since we are to implement the RFC, we must comply,
> and find a way to still comply with HTTP/1.
> Both checks on SNI and renegotiation occur in the post_read_request
> hook, so we should be able to deal with vhost's parameters (configured
> Protocols, ProtocolTransports...), and do the right thing.
> 
> On Tue, Jun 9, 2015 at 12:09 PM, Stefan Eissing wrote:
>> Yann, I am with you and feel at least unease about this mixing.
>> 
>> But the RFC has been approved and browsers will adhere to it. So if we do 
>> not enforce some policies in the server, connections will fail for 
>> mysterious reasons. And tickets will be raised...

Well, don't be too hasty.  There are a number of requirements in the
RFC that have nothing to do with HTTP and should be summarily ignored
in the core implementation.  There are other requirements in the RFC
that might turn out to be wrong or unnecessary, just as we found in
RFC2068, and it is our task to implement what works and change the
RFCs later.

However, the server as a whole should be configurable to be compliant
(by default) in the relevant code.  All of the requirements around TLS,
for example, need to be available in the SSL configs, but it is not
h2's responsibility to ensure that it has an RFC7540-compliant TLS
config.  That is the admin's responsibility/choice.

WRT renegotiation, it is fair to say that the WG punted on the idea due
to lack of time.  If someone figures out a way to safely renegotiate an
h2 connection (and all of its streams), then go ahead and implement it,
describe it in an I-D, and submit it to the httpbis WG.  There is
nothing wrong with Apache leading by example.

Cheers,

Roy



Re: TWS ";" LWS permitted by RFC 7230 4.1.1? Apparently, no.

2015-06-15 Thread Roy T. Fielding
> On Jun 15, 2015, at 9:33 AM, William A Rowe Jr wrote:
> 
> Reviewing the spec, I cannot find where Sambar server is permitted to insert 
> whitespace. I further reviewed the ABNF appendix, and it does not appear 
> there, either.

Right, this was a deliberate decision to reduce the number of infinite
stream possibilities.  We can still read a few SP and discard for
robustness, but it should be limited to the same few characters as
leading zeros.
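A sketch of that bounded robustness (the limit of 4 is an assumption for
illustration; overflow checking omitted for brevity):

    #include <ctype.h>

    #define MAX_CHUNK_PAD 4  /* assumed cap on leading zeros / trailing SP */

    /* Parse a chunk-size line; return -1 on malformed or over-padded
     * input instead of consuming an unbounded stream of padding. */
    static int parse_chunk_size(const char *p, unsigned long *size)
    {
        int pad = 0;
        unsigned long n = 0;

        while (*p == '0' && isxdigit((unsigned char)p[1])) {
            if (++pad > MAX_CHUNK_PAD) return -1;  /* bounded zeros */
            p++;
        }
        if (!isxdigit((unsigned char)*p)) return -1;
        while (isxdigit((unsigned char)*p)) {
            int c = tolower((unsigned char)*p++);
            n = (n << 4) | (unsigned long)(c <= '9' ? c - '0' : c - 'a' + 10);
        }
        pad = 0;
        while (*p == ' ' || *p == '\t') {          /* bounded trailing SP */
            if (++pad > MAX_CHUNK_PAD) return -1;
            p++;
        }
        if (*p != ';' && *p != '\r' && *p != '\0') return -1;
        *size = n;
        return 0;
    }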

Sambar Server has been EOL for 7 years with no available source code
for review, so its behavior is no longer relevant to the standard.

Roy



Re: HTTP_MISDIRECTED_REQUEST

2015-08-27 Thread Roy T. Fielding
> On Aug 26, 2015, at 3:15 PM, William A Rowe Jr wrote:
> 
> Should this exception have a protocol version guard for HTTP/2.0 requests, 
> and leave the response as HTTP_BAD_REQUEST for HTTP/1.1 and earlier?
> 
> @@ -203,6 +204,9 @@
>  ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, 
> APLOGNO(02032)
>  "Hostname %s provided via SNI and hostname %s 
> provided"
>  " via HTTP are different", servername, host);
> +if (r->connection->keepalives > 0) {
> +return HTTP_MISDIRECTED_REQUEST;
> +}
>  return HTTP_BAD_REQUEST;
>  }
>  }

IIRC, it is applicable to HTTP/1.1 as well.  Think misdirected requests
containing an absolute request URI that points to some other server.  I
don't think the conditional is needed at all -- just return
HTTP_MISDIRECTED_REQUEST.

Hmm, I wonder how this impacts Google's desire to allow multiple hosts
to reuse the same SPDY connection ... was that dropped for h2?

Roy


Re: svn commit: r1708593 - in /httpd/httpd/trunk: docs/manual/mod/mod_http2.xml modules/http2/h2_config.c modules/http2/h2_config.h modules/http2/h2_conn.c modules/http2/h2_h2.c modules/http2/h2_h2.h

2015-10-14 Thread Roy T. Fielding
Can you please choose a more specific directive name? Like "LimitTLSunderH2".

We don't have switches for RFC compliance. We do have switches for stupid WG 
political positions that contradict common sense and are not applicable to 
non-Internet deployments.

Roy


> On Oct 14, 2015, at 5:10 AM, ic...@apache.org wrote:
> 
> Author: icing
> Date: Wed Oct 14 12:10:11 2015
> New Revision: 1708593
> 
> URL: http://svn.apache.org/viewvc?rev=1708593&view=rev
> Log:
> mod_http2: new directive H2Compliance on/off, checking TLS protocol and
> cipher against RFC7540
> 
> Modified:
>httpd/httpd/trunk/docs/manual/mod/mod_http2.xml
>httpd/httpd/trunk/modules/http2/h2_config.c
>httpd/httpd/trunk/modules/http2/h2_config.h
>httpd/httpd/trunk/modules/http2/h2_conn.c
>httpd/httpd/trunk/modules/http2/h2_h2.c
>httpd/httpd/trunk/modules/http2/h2_h2.h
>httpd/httpd/trunk/modules/http2/h2_switch.c
> 
> Modified: httpd/httpd/trunk/docs/manual/mod/mod_http2.xml
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/docs/manual/mod/mod_http2.xml?rev=1708593&r1=1708592&r2=1708593&view=diff
> ==
> --- httpd/httpd/trunk/docs/manual/mod/mod_http2.xml (original)
> +++ httpd/httpd/trunk/docs/manual/mod/mod_http2.xml Wed Oct 14 12:10:11 2015
> @@ -74,11 +74,11 @@
> Direct communication means that if the first bytes received by the
> server on a connection match the HTTP/2 preamble, the HTTP/2
> protocol is switched to immediately without further negotiation.
> -This mode falls outside the RFC 7540 but has become widely implemented
> -on cleartext ports as it is very convenient for development and testing.
> +This mode is defined in RFC 7540 for the cleartext (h2c) case. Its
> +use on TLS connections is not allowed by the standard.
> 
> -Since this detection implies that the client will send data on
> +Since this detection requires that the client will send data on
> new connection immediately, direct HTTP/2 mode is disabled by
> default.
> 
> 
> Modified: httpd/httpd/trunk/modules/http2/h2_config.c
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_config.c?rev=1708593&r1=1708592&r2=1708593&view=diff
> ==
> --- httpd/httpd/trunk/modules/http2/h2_config.c (original)
> +++ httpd/httpd/trunk/modules/http2/h2_config.c Wed Oct 14 12:10:11 2015
> @@ -49,6 +49,7 @@ static h2_config defconf = {
> 0,/* serialize headers */
> 0,/* h2 direct mode */
> -1,   /* # session extra files */
> +1,/* rfc 7540 compliance */
> };
> 
> static int files_per_session = 0;
> @@ -100,6 +101,7 @@ static void *h2_config_create(apr_pool_t
> conf->serialize_headers= DEF_VAL;
> conf->h2_direct= DEF_VAL;
> conf->session_extra_files  = DEF_VAL;
> +conf->rfc_compliance   = DEF_VAL;
> return conf;
> }
> 
> @@ -138,6 +140,7 @@ void *h2_config_merge(apr_pool_t *pool,
> n->serialize_headers = H2_CONFIG_GET(add, base, serialize_headers);
> n->h2_direct  = H2_CONFIG_GET(add, base, h2_direct);
> n->session_extra_files = H2_CONFIG_GET(add, base, session_extra_files);
> +n->rfc_compliance = H2_CONFIG_GET(add, base, rfc_compliance);
> 
> return n;
> }
> @@ -162,6 +165,8 @@ int h2_config_geti(h2_config *conf, h2_c
> return H2_CONFIG_GET(conf, &defconf, alt_svc_max_age);
> case H2_CONF_SER_HEADERS:
> return H2_CONFIG_GET(conf, &defconf, serialize_headers);
> +case H2_CONF_COMPLIANCE:
> +return H2_CONFIG_GET(conf, &defconf, rfc_compliance);
> case H2_CONF_DIRECT:
> return H2_CONFIG_GET(conf, &defconf, h2_direct);
> case H2_CONF_SESSION_FILES:
> @@ -332,8 +337,25 @@ static const char *h2_conf_set_direct(cm
> return "value must be On or Off";
> }
> 
> -#define AP_END_CMD AP_INIT_TAKE1(NULL, NULL, NULL, RSRC_CONF, NULL)
> +static const char *h2_conf_set_compliance(cmd_parms *parms,
> +  void *arg, const char *value)
> +{
> +h2_config *cfg = h2_config_sget(parms->server);
> +if (!strcasecmp(value, "On")) {
> +cfg->rfc_compliance = 1;
> +return NULL;
> +}
> +else if (!strcasecmp(value, "Off")) {
> +cfg->rfc_compliance = 0;
> +return NULL;
> +}
> +
> +(void)arg;
> +return "value must be On or Off";
> +}
> +
> 
> +#define AP_END_CMD AP_INIT_TAKE1(NULL, NULL, NULL, RSRC_CONF, NULL)
> 
> const command_rec h2_cmds[] = {
> AP_INIT_TAKE1("H2MaxSessionStreams", h2_conf_set_max_streams, NULL,
> @@ -354,6 +376,8 @@ const command

Re: Enforce rewriting of Host header when an absolute URI is given

2015-10-26 Thread Roy T. Fielding
> On Oct 26, 2015, at 10:33 AM, Jacob Champion  wrote:
> 
> Yann,
> 
> I found this while trying to understand the corner cases for Origin header 
> checks for mod_websocket, and I do actually have some thoughts on it...
> 
> On 03/04/2015 07:21 AM, Yann Ylavic wrote:
>> (by default, not only with "HttpProtocol strict", which is trunk only btw).
>> 
>> Per RFC7230#section-5.4 :
>>When a proxy receives a request with an absolute-form of
>>request-target, the proxy MUST ignore the received Host header field
>>(if any) and instead replace it with the host information of the
>>request-target.  A proxy that forwards such a request MUST generate a
>>new Host field-value based on the received request-target rather than
>>forward the received Host field-value.
>> 
>> The first part is already honored, but not the forwarding part: the
>> Host header is not rewritten with the one extracted from the absolute
>> URI, neither at the protocol (core) level nor at proxy level.
>> 
>> There are PR56718 (and duplicate PR57563) about this.
>> Still the same question about whether providing the headers to some
>> CGI or module is a forward or not,
> 
> I don't buy this. IMO, CGI/plugins/modules/etc. are implementation details of 
> the server.

Correct.  The word "proxy" in HTTP only applies to the forwarding of HTTP
messages by a client-selected (forward) proxy.  There was no such thing as
a "reverse proxy" (an idiotic marketing term invented by Netscape) when the
original HTTP specs were written.  Those are gateways (as in Common Gateway 
Interface).

>> I think personally it would be sane
>> to do this at the protocol level (beginning of the request).
>> I proposed a patch there and refined it a bit (attached), so that
>> section 5.4 is applied in vhost.c::fix_hostname().
>> 
>> It also implements this part of section 5.4 (second point, _underlined_) :
>>A server MUST respond with a 400 (Bad Request) status code to any
>>HTTP/1.1 request message that lacks a Host header field and _to any
>>request message that contains more than one Host header field_ or a
>>Host header field with an invalid field-value.
>> 
>> The first point is already handled, and the third is "HttpProtocol
>> strict" dependent (which is enough I think, but maybe deserve a
>> backport too).

We should always be strict on received Host handling because misplaced routing
information is often used to bypass security filters.  That is, we should not
allow an invalid Host header field to pass through.  It should at least be
rejected by default (if non-reject is configurable).
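
A rough sketch of those rules (hypothetical names and types, not the
actual vhost.c patch):

    #include <stddef.h>

    typedef struct {
        const char *uri_host;   /* host from an absolute-form request-target */
        const char *host_hdr;   /* value of the Host header field, or NULL */
        int host_count;         /* number of Host header fields received */
    } req_info;

    /* RFC 7230 section 5.4: the absolute-form authority wins; otherwise
     * a missing or repeated Host field earns a 400. */
    static int fix_host(req_info *r)
    {
        if (r->uri_host != NULL) {
            r->host_hdr = r->uri_host;   /* ignore and replace Host */
            return 0;
        }
        if (r->host_hdr == NULL || r->host_count > 1) {
            return 400;                  /* reject rather than guess */
        }
        return 0;
    }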

Roy



Re: svn commit: r1710723 - in /httpd/httpd/trunk: CHANGES modules/cache/cache_util.h

2015-10-27 Thread Roy T. Fielding
> On Oct 26, 2015, at 11:45 PM, jaillet...@apache.org wrote:
> 
> Author: jailletc36
> Date: Tue Oct 27 06:45:03 2015
> New Revision: 1710723
> 
> URL: http://svn.apache.org/viewvc?rev=1710723&view=rev
> Log:
> RFC2616 defines #rules as:
>   #rule
>  A construct "#" is defined, similar to "*", for defining lists of
>  elements. The full form is "<n>#<m>element" indicating at least
>  <n> and at most <m> elements, each separated by one or more commas
>  (",") and OPTIONAL linear white space (LWS). This makes the usual
>  form of lists very easy; a rule such as
> ( *LWS element *( *LWS "," *LWS element ))
>  can be shown as
> 1#element
> 
> It also defines Linear White Space (LWS) as:
>   LWS = [CRLF] 1*( SP | HT )
> 
> 
> The actual implementation only accepts SP (Space) and not HT (Horizontal Tab) 
> when parsing cache related header fields (i.e. "Vary", "Cache-Control" and 
> "Pragma")

Well, to be more accurate: RFC7230 defines these (2616 no longer applies) and
the original algorithm did handle HT.  My bet is that someone screwed up an
automated TAB -> two space conversion and the code change got lost in the noise.

Your fix looks right, though.
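
A minimal sketch of the point being fixed (not the cache_util.h code
itself): the list parser has to treat HTAB exactly like SP when it
skips LWS around separators.

    /* Skip optional whitespace (SP and HTAB) around list separators. */
    static const char *skip_lws(const char *p)
    {
        while (*p == ' ' || *p == '\t') {
            p++;
        }
        return p;
    }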

Roy



Re: Upgrade Summary

2015-12-08 Thread Roy T. Fielding
> On Dec 8, 2015, at 2:07 AM, Stefan Eissing wrote:
> 
> Trying to summarize the status of the discussion and where the issues are 
> with the current Upgrade implementation.
> 
> Clarified:
> A. any 100 must be sent out *before* a 101 response
> B. request bodies are to be read in the original protocol, input filters like 
> chunk can be used, indeed are necessary, as if the request is being processed 
> normally

Yes.

> C. whether a protocol supports upgrade on request bodies is up to the protocol 
> implementation and needs to be checked in the "propose" phase

In some respects, yes, but Upgrade is defined by HTTP/1 and no other
protocol applies until after it is done.  That means ignoring any
idiotic handshake requirements of the "new" protocol that aren't
consistent with HTTP/1.  It also means h2's requirements on Upgrade
are irrelevant except for when it requires not upgrading to h2.

The client's assumption must be that the Upgrade will fail and any
attempt to use Expect will timeout, so the entire message will be
sent eventually regardless of anyone's stupid hacks to try and game
the protocol.  Hence, the server has no choice but to receive the
entire message as HTTP/1.1 even if it thinks it could have responded
fast enough to interrupt the client in mid-send.

> 
> Open:
> 1. Protocols like Websocket need to take over the 101 sending themselves in 
> the "switch protocol" phase. (correct, Jacob?). Should we delegate the 
> sending of the 101 to the protocol switch handler?

That seems unlikely.

> 2. General handling of request bodies. Options:
>  a setaside in core of up to nnn bytes before switch invocation
>  b do nothing, let protocol switch handler care about it

I think folks are confusing implementation with protocol.  There is no need
for the protocol being read on a connection to be the same protocol that is
being written in response.  In other words, the incoming connection remains
reading HTTP/1 bits until the message is finished, regardless of the decision
to upgrade the response stream -- the other protocol engine doesn't care.

This should be easily handled by adding a filter that translates the HTTP/1
incoming body (if any) to a single channel of the new protocol.  Just fake it.
There is no need to wait or set aside the bytes, unless that is desired for
other reasons (e.g., denial of denial of service attacks).
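
A sketch of that idea against the filter API (feed_new_protocol is a
hypothetical bridge into the new protocol engine; error handling trimmed):

    #include "httpd.h"
    #include "http_protocol.h"
    #include "util_filter.h"

    /* Drain the HTTP/1 request body through the normal input filter
     * chain and hand each brigade to the new protocol engine. */
    static apr_status_t drain_http1_body(request_rec *r)
    {
        apr_bucket_brigade *bb = apr_brigade_create(r->pool,
                                     r->connection->bucket_alloc);
        int seen_eos = 0;

        while (!seen_eos) {
            apr_status_t rv = ap_get_brigade(r->input_filters, bb,
                                             AP_MODE_READBYTES,
                                             APR_BLOCK_READ, 8192);
            if (rv != APR_SUCCESS) {
                return rv;
            }
            seen_eos = !APR_BRIGADE_EMPTY(bb)
                       && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb));
            feed_new_protocol(r, bb);    /* hypothetical bridge call */
            apr_brigade_cleanup(bb);
        }
        return APR_SUCCESS;
    }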

> 3. When to do the upgrade dance:
>  a post_read_request: upgrade precedes authentication
>  b handler: upgrade only honored on authenticated and otherwise ok requests
>  c both: introduce separate hooks? have an additional parameter? more 
> complexity

(a).  We do want to upgrade non-ok responses.  If the "new" protocol wants to
send a canned HTTP/1.1 error, it can do so without our help.

> 4. status code of protocol switch handler: if we move the task of 101 
> sending, the switch handler might not do it and keep the connection on the 
> "old" protocol. Then a connection close is not necessary. So, we would do the 
> close only when the switch handler returns APR_EOF.

Eh?

> 5. Will it be possible to migrate the current TLS upgrade handling to this 
> revised scheme?

TLS upgrade is never done with a body (and is broken code anyway).  Just fix it.
Note that the upgrade token is "TLS" -- it does not have to be "TLS/1.0".

Roy



Re: svn commit: r1725349 - /httpd/httpd/trunk/docs/manual/env.xml

2016-01-20 Thread Roy T. Fielding
I don't understand this comment.  RFC7230 doesn't recommend sending HTTP/1.0.
It certainly allows it as a workaround for a broken client, but
force-response-1.0 is not recommended for general use.
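
The old default config paired it with BrowserMatch for exactly those
clients, e.g.:

    BrowserMatch "RealPlayer 4\.0" force-response-1.0
    BrowserMatch "Java/1\.0" force-response-1.0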

Roy

> On Jan 18, 2016, at 1:14 PM, cove...@apache.org wrote:
> 
> Author: covener
> Date: Mon Jan 18 21:14:46 2016
> New Revision: 1725349
> 
> URL: http://svn.apache.org/viewvc?rev=1725349&view=rev
> Log:
> emphasize http/1.0 clients, mention RFC7230 calling this
> envvar a SHOULD.
> 
> --This line, and those below, will be ignored--
> 
> Menv.xml
> 
> Modified:
>httpd/httpd/trunk/docs/manual/env.xml
> 
> Modified: httpd/httpd/trunk/docs/manual/env.xml
> URL: 
> http://svn.apache.org/viewvc/httpd/httpd/trunk/docs/manual/env.xml?rev=1725349&r1=1725348&r2=1725349&view=diff
> ==
> --- httpd/httpd/trunk/docs/manual/env.xml (original)
> +++ httpd/httpd/trunk/docs/manual/env.xml Mon Jan 18 21:14:46 2016
> @@ -322,12 +322,15 @@
> 
> force-response-1.0
> 
> -  This forces an HTTP/1.0 response to clients making an HTTP/1.0
> -  request. It was originally
> -  implemented as a result of a problem with AOL's proxies. Some
> +  This forces an HTTP/1.0 response to clients making an 
> +  HTTP/1.0 request. It was originally
> +  implemented as a result of a problem with AOL's proxies during the
> +  early days of HTTP/1.1. Some
>   HTTP/1.0 clients may not behave correctly when given an HTTP/1.1
> -  response, and this can be used to interoperate with them.
> -
> +  response, and this can be used to interoperate with them.  Later
> +  revisions of the HTTP/1.1 spec (RFC 7230) recommend this behavior 
> +  for HTTP/1.0 clients.
> + 
> 
> 
> 
> 
> 



where to put update_mime_types.pl?

2016-02-25 Thread Roy T. Fielding
I have a perl script (see below) for updating the mime.types file with the 
latest
registered IANA media types.  I would like to add it to our version control,
but I am unsure whether to place it in

  httpd/trunk/support/

or in

  httpd/docs-build/trunk/

I guess it depends on whether we want to distribute it as part of the product
or just use it ourselves as an occasional tool.  It is generally useful, though
not intended to be bullet proof.

Roy

==

#!/usr/bin/perl
#
# update_mime_types.pl: Read an existing Apache mime.types file and
# merge its entries with any new types discovered within an
# IANA media-types.xml file (see below for obtaining it).
#
# All existing mime.types entries are preserved as is (aside from sorting).
# Any new registered types are merged as a commented-out entry without
# an assigned extension, and then the entire file is printed to stdout.
#
# Typical use would be something like:
# 
#  wget -N http://www.iana.org/assignments/media-types/media-types.xml
#  ./update_mime_types.pl > new.types
#  diff -u mime.types new.types   ; check the differences
#  rm mime.types && mv new.types mime.types   ; only if diffs are good
#
# Note that we assume all files are in the current working directory
# and efficiency is not an issue.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
my $mity = 'mime.types';
my $medy = 'media-types.xml';

die "no $mity here\n" unless (-e $mity);
die "no $medy here\n" unless (-e $medy);

my $in_head = 1;
my @header = ();
my %mtype = ();

# Read through the Apache httpd mime.types file to create tables
# keyed on the minor type names.  We save the entire input line as
# the hash value so that existing configs won't change when output.
# We assume the type names are already lowercased tokens.
#
die "cannot open $mity: $!" unless open (MIME, "<", $mity);

while (<MIME>) {
    if ($in_head) {
        push @header, $_;
        if (/^# =/) {
            $in_head = 0;
        }
        next;
    }
    if (/^(# )?([a-z_\+\-\.]+\/\S+)/) {
        $mtype{$2} = $_;
    }
    else {
        warn "Skipping: ", $_;
    }
}
close MIME;

# Read through the IANA media types registry, in XML form, and extract
# whatever looks to be a registered type based on the element structure.
# Yes, this is horribly fragile, but the format isn't expected to change.
#
die "cannot open $medy: $!" unless open (IANA, "<", $medy);

my $major = 'examples';
my $thistype = '';

while (<IANA>) {
    last if (/^\s*<people>/);
    next if (/(OBSOLETE|DEPRECATE)/);

    if (/^\s*<registry id="([a-z]+)">/) {
        $major = $1;    # sub-registry elements name the major type
    }
    elsif (/^\s*<name>([^<]+)<\/name>/) {
        $thistype = lc "$major/$1";
        if (!defined($mtype{$thistype})) {
            $mtype{$thistype} = "# $thistype\n";
        }
    }
}
close IANA;

# Finally, output a replacement for Apache httpd's mime.types file
#
print @header;

foreach my $key (sort(keys %mtype)) {
    print $mtype{$key};
}

exit 0;

Fwd: RFC 7804 on Salted Challenge Response HTTP Authentication Mechanism

2016-03-09 Thread Roy T. Fielding
For folks looking for a new feature to develop,

Roy


> Begin forwarded message:
> 
> From: rfc-edi...@rfc-editor.org
> Subject: RFC 7804 on Salted Challenge Response HTTP Authentication Mechanism
> Date: March 9, 2016 at 11:01:55 AM PST
> To: ietf-annou...@ietf.org, rfc-d...@rfc-editor.org
> Cc: drafts-update-...@iana.org, http-a...@ietf.org, rfc-edi...@rfc-editor.org
> Reply-To: i...@ietf.org
> List-Archive: 
> 
> A new Request for Comments is now available in online RFC libraries.
> 
> 
>RFC 7804
> 
>Title:  Salted Challenge Response HTTP Authentication 
>Mechanism 
>Author: A. Melnikov
>Status: Experimental
>Stream: IETF
>Date:   March 2016
>Mailbox:alexey.melni...@isode.com
>Pages:  18
>Characters: 39440
>Updates/Obsoletes/SeeAlso:   None
> 
>I-D Tag:draft-ietf-httpauth-scram-auth-15.txt
> 
>URL:https://www.rfc-editor.org/info/rfc7804
> 
>DOI:http://dx.doi.org/10.17487/RFC7804
> 
> This specification describes a family of HTTP authentication
> mechanisms called the Salted Challenge Response Authentication
> Mechanism (SCRAM), which provides a more robust authentication
> mechanism than a plaintext password protected by Transport Layer
> Security (TLS) and avoids the deployment obstacles presented by
> earlier TLS-protected challenge response authentication mechanisms.
> 
> This document is a product of the Hypertext Transfer Protocol Authentication 
> Working Group of the IETF.
> 
> 
> EXPERIMENTAL: This memo defines an Experimental Protocol for the
> Internet community.  It does not specify an Internet standard of any
> kind. Discussion and suggestions for improvement are requested.
> Distribution of this memo is unlimited.
> 
> This announcement is sent to the IETF-Announce and rfc-dist lists.
> To subscribe or unsubscribe, see
>  https://www.ietf.org/mailman/listinfo/ietf-announce
>  https://mailman.rfc-editor.org/mailman/listinfo/rfc-dist
> 
> For searching the RFC series, see https://www.rfc-editor.org/search
> For downloading RFCs, see https://www.rfc-editor.org/retrieve/bulk
> 
> Requests for special distribution should be addressed to either the
> author of the RFC in question, or to rfc-edi...@rfc-editor.org.  Unless
> specifically noted otherwise on the RFC itself, all RFCs are for
> unlimited distribution.
> 
> 
> The RFC Editor Team
> Association Management Solutions, LLC
> 
> 



Re: "Upgrade: h2" header for HTTP/1.1 via TLS (Bug 59311)

2016-04-20 Thread Roy T. Fielding
> On Apr 20, 2016, at 4:29 AM, Stefan Eissing wrote:
> 
>> 
>> On 20.04.2016 at 13:16, Yann Ylavic wrote:
>> 
>> On Wed, Apr 20, 2016 at 1:09 PM, Yann Ylavic  wrote:
>>> On Wed, Apr 20, 2016 at 11:25 AM, Stefan Eissing wrote:
 Done in r1740075.
 
 I was thinking of a nicer solution, but that involved inventing new hooks 
 which seems not worth it.
 
 Since this area of protocol negotiation has already been talked about in 
 regard to TLS upgrades
 and websockets, I do not want to invest in the current way of handling 
 this too much time.
>>> 
>>> I really don't see why we couldn't upgrade to h2 from "http:" (not
>>> "https:" since ALPN did not take place already, or would have done
>>> it).
>>> ISTM that "Upgrade: h2" could be valid in response to a (plain) HTTP/1
>>> request, and the client could upgrade from there...
>> 
>> More on this and Michael's quote of RFC 7540 ("A server MUST ignore an
>> "h2" token...").
>> An HTTP/2 server must indeed ignore the inner HTTP/1 request's
>> "Upgrade: h2" header since it's RFC states it, but and HTTP/1 server
>> (AFAICT) is not concerned by this RFC, and should not...
> 
> Totally agree. And, although untested, in principle we would upgrade
> such a request, as our protocol negotiation framework is somewhat agnostic
> to which RFC can leak water farther than others.

I think we need to be clear when the RFC applies and when it does not.

For example, ANY requirement in the RFC about TLS must be ignored by our
code (but not our default config) because we intend to implement h2c over
anything within some environments.  The RFC was written for the sole
use case of browsers making requests over the open Internet and we should
expect to deviate where its requirements are clearly for that context
rather than for technical reasons.

In any case, RFC7540 does allow an "Upgrade: h2" to be sent by the server
when the connection is already TLS; the fact that it actually means
"Upgrade: HTTP/2.0, TLS" can be handled as we see fit.  There is no
need to be pedantic because we are doing HTTP/1.1 at that point, not h2.

If there is a problem with sending h2 or h2c in a non-TLS response, then
we can send

   Upgrade: HTTP/2.0

instead (with or without the TLS, depending on how we are configured).
It says the same thing as h2c (or h2), as far as HTTP/1.x is concerned.

If that is still a problem with NodeJS, then we should have a conditional
workaround that is limited to the NodeJS client string (including its version);
we cannot allow broken clients to define the protocol in the long term.
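
Sketched in configuration alone, such a workaround might look like the
following (the env variable name is made up, and the User-Agent pattern
would have to be narrowed to the broken versions only):

    SetEnvIf User-Agent "^node" suppress-upgrade
    Header unset Upgrade env=suppress-upgrade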

Cheers,

Roy



Re: svn commit: r1754548 - /httpd/httpd/trunk/server/protocol.c

2016-08-03 Thread Roy T. Fielding
> On Aug 3, 2016, at 11:44 AM, Jacob Champion  wrote:
> 
> On 07/31/2016 09:18 AM, William A Rowe Jr wrote:
>>> So all the trailing SP/HTAB are part of obs-fold IMHO.
>>> Should we replace all of them (plus the CRLF) with a single SP or with
>>> as many SP?
>> 
>> Hmmm... Good point. Advancing over them in our HTTP_STRICT mode seems
>> best, if we have a consensus on this.
> 
> Agreed that we should process all the obs-fold whitespace, and not just one 
> byte.
> 
> Replacing each byte with a separate space (as opposed to condensing into a 
> single space) *might* help prevent adversaries from playing games with header 
> length checks in more complicated/layered systems. That's probably a stretch 
> though. And if we consume the CRLF in a different layer of logic, adding on 
> two spaces just to keep everything "consistent" may also be a stretch. I'm 
> not feeling strongly either way.

What the spec is trying to say is that we can either replace all those bytes
with a single SP (semantically speaking they are the same) or we can replace
them all with a sequence of SP (still the same, but doesn't require splitting
or recomposing the buffer).
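
A sketch of the second option (in-place and length-preserving; assumes
buf holds a field value with embedded obs-folds and no final CRLF):

    #include <stddef.h>

    /* Overwrite each obs-fold (the CRLF plus the SP/HTAB run) with SP,
     * leaving the buffer length unchanged. */
    static void unfold_in_place(char *buf, size_t len)
    {
        size_t i;

        for (i = 0; i + 1 < len; i++) {
            if (buf[i] == '\r' && buf[i + 1] == '\n') {
                buf[i] = ' ';
                buf[i + 1] = ' ';
                i += 2;
                while (i < len && (buf[i] == ' ' || buf[i] == '\t')) {
                    buf[i++] = ' ';    /* HTAB becomes SP as well */
                }
                i--;    /* compensate for the loop increment */
            }
        }
    }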

> >> > So the obs-fold itself consists of CR LF [ SP | TAB ]
> >>
> >>obs-fold = CRLF 1*( SP / HTAB )
> >>
> 
> Note that this section of the spec has Errata associated with it; I'm reading 
> through the conversation [1] and it's seeming like they *may* want to treat 
> OWS preceding the CRLF as part of the obs-fold as well. I don't know what our 
> position is on adopting pieces of Errata that have been Held for Document 
> Update.

No, that is just an ABNF issue for matching purposes.  We don't use it.

Roy



Re: svn commit: r1754548 - /httpd/httpd/trunk/server/protocol.c

2016-08-04 Thread Roy T. Fielding
> On Aug 3, 2016, at 2:28 PM, William A Rowe Jr  wrote:
> 
> So AIUI, the leading SP / TAB whitespace in a field is a no-op (usually
> represented by a single space by convention), and trailing whitespace 
> in the field value is a no-op, all leading tabs/spaces (beyond one SP) 
> in the obs-fold line is a no-op. Is there any reason to preserve trailing 
> spaces before the obs-fold?

Not given our implementation.  The buffer efficiency argument is for other
kinds of parsers that are not reading just one line at a time.

> If not, then stripping trailing whitespace from the line prior to obs-fold and
> eating all leading whitespace on the obs-fold line will result in a single SP
> character, which should be just fine unless spaces were significant within
> a quoted value. The only way for the client to preserve such significant 
> spaces would be to place them after the opening quote before the obs-fold.

obs-fold is not allowed inside quoted text, so we need not worry about
messing with such a construct.

Note that obs-fold has been formally deprecated outside of message/http.
We can remove its handling at any time we are willing to accept the risk
of strange error reports.  I do not believe it is part of our versioning 
contract.

Roy



Re: HTTP/1.1 strict ruleset

2016-08-04 Thread Roy T. Fielding
> On Aug 3, 2016, at 4:33 PM, William A Rowe Jr  wrote:
> 
> So it seems pretty absurd we are coming back to this over
> three years later, but is there any reason to preserve pre-RFC 2068
> behaviors? I appreciate that Stefan was trying to avoid harming
> existing deployment scenarios, but even as I'm about to propose
> that we backport all of this to 2.4 and 2.2, I have several questions;

In general, I don't see a need for any "strict" options. The only changes we
made to parsing in RFC7230 were for the sake of security and known failures
to interoperate. This is exactly the feature we are supposed to be handling
automatically on behalf of our users: secure, correct, and interoperable
handling and generation of HTTP messaging.  We should not need to configure it.

Note that the MUST requirements in RFC7230 are not optional. We either implement
them as specified or we are not compliant with HTTP.  So, the specific issues of

https://tools.ietf.org/html/rfc7230#section-3

   A sender MUST NOT send whitespace between the start-line and the
   first header field.  A recipient that receives whitespace between the
   start-line and the first header field MUST either reject the message
   as invalid or consume each whitespace-preceded line without further
   processing of it (i.e., ignore the entire line, along with any
   subsequent lines preceded by whitespace, until a properly formed
   header field is received or the header section is terminated).

   The presence of such whitespace in a request might be an attempt to
   trick a server into ignoring that field or processing the line after
   it as a new request, either of which might result in a security
   vulnerability if other implementations within the request chain
   interpret the same message differently.  Likewise, the presence of
   such whitespace in a response might be ignored by some clients or
   cause others to cease parsing.

and

https://tools.ietf.org/html/rfc7230#section-3.2.4

   No whitespace is allowed between the header field-name and colon.  In
   the past, differences in the handling of such whitespace have led to
   security vulnerabilities in request routing and response handling.  A
   server MUST reject any received request message that contains
   whitespace between a header field-name and colon with a response code
   of 400 (Bad Request).  A proxy MUST remove any such whitespace from a
   response message before forwarding the message downstream.


must be complied with regardless of any "strict" config setting.
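
The section 3.2.4 rule, for instance, is a one-line check in the field
parser (sketch only, not the protocol.c code):

    #include <string.h>

    /* Return 0 if the "name:" prefix of a header line is well formed,
     * 400 if the name is empty or has whitespace before the colon. */
    static int check_field_name(const char *line)
    {
        const char *colon = strchr(line, ':');

        if (colon == NULL || colon == line) {
            return 400;
        }
        if (colon[-1] == ' ' || colon[-1] == '\t') {
            return 400;    /* RFC 7230 section 3.2.4 */
        }
        return 0;
    }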

Some of those other things under "strict" seem a bit wonky. For example,
changing the Host header field when the incoming request URI is absolute
is fine by default but needs to be a configurable option for gateways.
Trying to validate IPv4/6 vs DNS doesn't work in intranet environments
that use local name servers.  The Location field-value is no longer required
to be absolute (https://tools.ietf.org/html/rfc7231#section-7.1.2).

> 1. offer a logging-only option? Why? It seems like a simple
>choice, follow the spec, or don't. If you want to see what's
>going on, Wireshark, Fiddler and dozens of other tools let
>you inspect the conversation.
> 
> 2. leave the default as 'not-strict'? Seems we should most
>strongly recommend that the server observe RFC's 2068,
>2616 and 723x, and not tolerate ancient behavior by default
>unless the admin insists on being foolish.

As far as the Internet is concerned, RFC723x is the new law of the land.
There is no reason to support obsolete RFCs.  No reason at all.  This has
nothing to do with semantic versioning or binary compatibility -- it is
simply doing what the product says it does: serve HTTP.

> 3. retain these legacy faulty behaviors in httpd 2.next/3.0?
>Seems that once we agree on a backport, the ancient
>side of this logic should all just disappear from trunk.
> 
> 4. detail the error to the error log? Again, there are inspection
>tools, but more importantly, no visual user-agent is going
>to send this garbage, and automated requests are going
>to discard the 400 response. Seems we can save a lot of
>code throwing away the details that just don't help, and
>are generally the product of abusive traffic.
> 
> Thoughts?

I think we just need to state in the log the reason for a 400 error. I don't
like sending invalid client-provided data back in a response, even when
encoded.

Whitespace before the first header field can log a static message.
Whitespace after a field-name could log the field-name (no need to log the
field value). Invalid characters can be noted as "in a field-name" without
further data, or as "in a field-value" with only the field-name logged.

These are all post-error details off the critical path, so I don't buy the CPU
argument.  However, I do think our error handling in protocol.c has become
so verbose that it obscures the rest of the code.  Maybe it would be better if
we just stopped caring about 80-

Re: HTTP/1.1 strict ruleset

2016-08-04 Thread Roy T. Fielding
> On Aug 4, 2016, at 3:02 PM, William A Rowe Jr  wrote:
> 
> On Thu, Aug 4, 2016 at 3:46 PM, Roy T. Fielding <field...@gbiv.com> wrote:
> > On Aug 3, 2016, at 4:33 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> >
> > So it seems pretty absurd we are coming back to this over
> > three years later, but is there any reason to preserve pre-RFC 2068
> > behaviors? I appreciate that Stefan was trying to avoid harming
> > existing deployment scenarios, but even as I'm about to propose
> > that we backport all of this to 2.4 and 2.2, I have several questions;
> 
> In general, I don't see a need for any "strict" options. The only changes we
> made to parsing in RFC7230 were for the sake of security and known failures
> to interoperate. This is exactly the feature we are supposed to be handling
> automatically on behalf of our users: secure, correct, and interoperable
> handling and generation of HTTP messaging.  We should not need to configure 
> it.
> 
> Note that the MUST requirements in RFC7230 are not optional. We either 
> implement
> them as specified or we are not compliant with HTTP. 
>  
> Understood. And that describes my attitude toward 2.6/3.0 next release.
> 
> We live in an ecosystem with literally hundreds of thousands of legitimate
> custom http clients asking httpd server for datum. Most projects would
> effectively declare their last major.minor release static, and fix the defects
> while doing all enhancement in their next release. This isn't that project.

By that logic, we can't make any changes in minor versions.  I disagree that
we have ever treated versioning in that way.  If a user doesn't like a bug fix,
they don't have to install the new version, or they have the code to unfix it.

> Because httpd fixes and introduces dozens of bugs each major.minor
> subversion release, and we truly agree that we want every user to move
> to the most recently released major.minor, breaking hundreds of these
> applications with *no recourse* in their build or configuration is 
> frustrating.

Leaving existing users in a broken state of non-compliance with the primary
Internet standard we are claiming to implement just because of unsubstantiated
FUD is far more frustrating.  Bugs get fixed. Users choose whether or not
to install.  If we find a real problem in a deployed client that causes the
bug fix to be intolerable, then of course we will need to configure
workarounds.  But we are not in that place.

> If consensus here agrees that no out-of-spec behavior should be tolerated
> anymore, I'll jump on board. I'm already in the consensus block that says
> we should not ship a new major.minor without disallowing all of this garbage.
> 
> It would be helpful if other PMC members would weigh in yea or nay on
> dropping out-of-spec behaviors from 2.4 and 2.2 maintenance branches. 

That would be weird.  One of us is going to create a patch.  That specific
patch is going to be voted upon for backport.  If anyone wants to veto it,
they are free to do so with justification.

Roy



Re: HTTP/1.1 strict ruleset

2016-08-12 Thread Roy T. Fielding
> On Aug 11, 2016, at 9:59 AM, William A Rowe Jr  wrote:
> 
> On Thu, Aug 11, 2016 at 11:54 AM, Eric Covener wrote:
> On Thu, Aug 11, 2016 at 12:44 PM, William A Rowe Jr wrote:
> > Just to be clear, that is now 2 votes for eliminating the 'classic parser'
> > from all
> > of trunk, 2.4.x and 2.2.x branches, and using only the strict parser,
> > unconditionally.
> >
> > That's actually 3 votes for removing it from trunk, which I planned to do
> > already,
> > after 2.4 and 2.2 backports are in-sync with trunk.
> 
> Without yet reviewing the votes, I would (personally) think this kind
> of split makes it your call as the one neck deep in the issue & doing
> all the work.Thank you for your work on this.
> 
> Maybe one last summary of your call, and a short window for strong
> objection/veto?
> 
> Certainly, that's what the backport proposal of everything from the initial
> commit by sf all the way to the present state will accomplish in STATUS.
> 
> With so many evolutions of various bits, a summary patch will be provided,
> of course. But it's helpful to me to know the opinions of Jim and Roy and
> everyone else in advance of proposing that backport.

I am having trouble keeping up while dealing with summer parenting issues.

I have no doubt that a strict parser is necessary, for some definition of 
strict.
I have no idea why there is any need to discuss EBCDIC here, since HTTP itself
is never EBCDIC.  We should not be transforming any input to EBCDIC until
after the request has been parsed.

I am not convinced that we need a wholesale rewrite of the parser code to
accomplish anything. Since most of the changes to trunk were tossed and repeated
multiple times due to unfamiliarity with the trunk parsing code or unawareness
that the read is already handling obs-fold and spurious newlines, I still think
we should just commit the simple fix (with your added logging statements)
and remove the bits from trunk that we don't actually need.

That doesn't mean we shouldn't attempt a better parser.  However, I would like
to review that as one clean diff with demonstrated performance tests.
That means setting up a test harness and proving that it is actually better.
For example, we might want to try using (or at least learning from) other
parsers that can work on the entire received buffer in one pass, rather than
limit ourselves to the existing line-at-a-time process, and simultaneously
deprecate or remove
handling of obs-fold and HTTP/0.9.

In any case, if you have a working parser implementation, I will be happy to
review it regardless of my preferences.  If it is better than what we have, then
it will still get my +1 for trunk regardless of longer term plans.

Roy



Re: svn commit: r1756531 - /httpd/httpd/trunk/modules/proxy/proxy_util.c

2016-08-16 Thread Roy T. Fielding
> On Aug 16, 2016, at 9:21 AM, yla...@apache.org wrote:
> 
> Author: ylavic
> Date: Tue Aug 16 16:21:13 2016
> New Revision: 1756531
> 
> URL: http://svn.apache.org/viewvc?rev=1756531&view=rev
> Log:
> Follow up to r1750392: reduce AH03408 level to INFO as suggested by wrowe/jim.

It used to be that we always log INFO because we only use it for noting
configuration details.  Has that changed?

Roy

Re: svn commit: r1756531 - /httpd/httpd/trunk/modules/proxy/proxy_util.c

2016-08-16 Thread Roy T. Fielding
> On Aug 16, 2016, at 9:51 AM, Eric Covener  wrote:
> 
> On Tue, Aug 16, 2016 at 12:26 PM, Roy T. Fielding  wrote:
>> It used to be that we always log INFO because we only use it for noting
>> configuration details.  Has that changed?
> 
> You're probably thinking of the special handling of NOTICE level, so n/a here.

Oh, right. Brain fart.

Roy



Re: StrictURI in the wild [Was: Backporting HttpProtocolOptions survey]

2016-09-14 Thread Roy T. Fielding
> On Sep 14, 2016, at 6:28 AM, William A Rowe Jr  wrote:
> 
> On Tue, Sep 13, 2016 at 5:07 PM, Jacob Champion wrote:
> On 09/13/2016 12:25 PM, Jacob Champion wrote:
> What is this? Is this the newest "there are a bunch of almost-right
> implementations so let's make yet another standard in the hopes that it
> won't make things worse"? Does anyone know the history behind this spec?
> 
> (My goal in asking this question is not to stare and point and laugh, but 
> more to figure out whether we are skating to where the puck is going. It 
> would be nice for users to know which specification StrictURI is being strict 
> about.)
> 
> RFC3986 as incorporated by and expanded upon by reference in RFC7230. 
> 
> IP, TCP, HTTP and it's data and framing are defined by the IETF. HTTP's
> definition depends on the meaning of many things, including ASCII, URI
> syntax, etc, see its table of citations. The things it depends on simply
> can't be moving targets any more than those definitions that the TCP 
> protocol is dependent upon. The IETF process is to correct a broken 
> underlying spec with a newly revised spec subject to peer review, and 
> then update the consuming specs to leverage the changes in the 
> underlying, where necessary (in some cases the revised underlying
> spec, once applied, has no impact on the consuming spec.)
> 
> HTML folks use URL's, and therefore forked the spec they perceived as
> too rigid and inflexible. In fact, it wasn't, but it appears so if you read 
> the
> spec as requiring -users- to -type- valid URI's, which was never the case.
> Although it gets prickly if you consider handling badly authored href= links 
> in html. HTML became a "living spec" subject to perpetual evolution;
> this results in a state where all implementations are perpetually broken.
> But the key take-away is that whatwg URI does not and cannot
> supersede RFC3986 for the purposes of RFC7230. Rather than improve
> the underlying spec, the group decided to overlay an unrelated spec.
> 
> https://daniel.haxx.se/blog/2016/05/11/my-url-isnt-your-url/ does one
> decent job explaining some of this. Google "URI whatwg vs. ietf" for
> an excessively long list of references.
> 
> So in short, whatwg spec describes URI's anywhere someone wants
> to apply their definition; HTML5 is based upon this. The wire protocol 
> of talking to an http: schema server is defined by RFC7230, which 
> subordinates to the RFC3986 definition of a URI. How you choose to 
> apply these two specs depends on your position in the stack.

I don't consider the WHATWG to be a standards organization, nor should
anyone else. It is just a selective group (a clique) with opinions about
software that they didn't write and a desire to document it in a way that
excludes the interests of everyone other than browser developers.

The main distinction between the WHATWG "URL standard" (it isn't)  and
the IETF URI standard (it is, encompassing URL and URN) is that HTML5
needs to define the url object in DOM (what is basically an object containing
a parsed URI reference), whereas the IETF needs to define a grammar for
the set of uniform identifiers believed to be interoperable on the Internet.

Obviously, if one spec wants to define everything a user might input as a
reference and call that "URL", while the other wants to define the interoperable
identifier output after uniform parsing of a reference relative to a base URI
as a "URL", the two specs are not going to be compatible.

Do you think the empty string ("") is a URL?  I don't.

A normal author would have used two different terms to define the two
different things (actually, four different things, since the URL spec also uses
url to describe two other things related to URL processing). The IETF chose a
different term, 23 years ago, when it created the term URL instead of just
defining them as "WWW Addresses" or universal document identifiers.

Instead of making a rational effort to document references in HTML, the
WHATWG decided to go on an ego trip about what "real developers" call
a "URL", and then embarked on yet another political effort to reject IETF
standards (that represent the needs of all Internet users, not just
browser developers) in favor of their own "living standards" that only
reflect a figment of the author's imagination (not implementations).

Yes, a user agent will send invalid characters in a request URI.  That is a bug
in the user agent.  Even if every browser chose to do it, that is still a bug in
the browser (not a bug in the spec). The spec knows that those addresses
are unsafe on the command-line and therefore unable to be properly
handled by many parts of the Internet that are not browsers, whereas
the correctly encoded equivalent is known to be interoperable. Hence,
the real standard requires that they be sent in an interoperable form.

Anyway, we have to be careful when testing to 

Re: svn commit: r1764961 - in /httpd/httpd/trunk: docs/manual/mod/core.xml modules/http/http_filters.c server/core.c server/gen_test_char.c server/protocol.c server/util.c

2016-10-14 Thread Roy T. Fielding
Right, though several people have requested it now as errata. Seems likely to 
be in the final update for STD.

Roy


> On Oct 14, 2016, at 2:16 PM, William A Rowe Jr  wrote:
> 
>> On Fri, Oct 14, 2016 at 3:48 PM,  wrote:
>> Author: wrowe
>> Date: Fri Oct 14 20:48:43 2016
>> New Revision: 1764961
>> 
>> URL: http://svn.apache.org/viewvc?rev=1764961&view=rev
>> Log:
>> [...]
>> Apply HttpProtocolOptions Strict to chunk header parsing, invalid
>> whitespace is invalid, line termination must follow CRLF convention.
>> 
>> [...]
>  
>> static apr_status_t parse_chunk_size(http_ctx_t *ctx, const char *buffer,
>> [...]
>  
>> -else if (c == ' ' || c == '\t') {
>> +else if (!strict && (c == ' ' || c == '\t')) {
>>  /* Be lenient up to 10 BWS (term from rfc7230 - 3.2.3).
>>   */
>>  ctx->state = BODY_CHUNK_CR;
> 
> I'm not sure where this myth came from... 
> 
> https://tools.ietf.org/html/rfc7230#section-4.1
> 
> has *NO* provision for BWS in the chunk size.
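
For reference, the grammar as published reads:

    chunk      = chunk-size [ chunk-ext ] CRLF
                 chunk-data CRLF
    chunk-size = 1*HEXDIG
    chunk-ext  = *( ";" chunk-ext-name [ "=" chunk-ext-val ] )

The held errata would add BWS around the ";" in chunk-ext, not inside
chunk-size itself.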


Re: svn commit: r1764961 - in /httpd/httpd/trunk: docs/manual/mod/core.xml modules/http/http_filters.c server/core.c server/gen_test_char.c server/protocol.c server/util.c

2016-10-17 Thread Roy T. Fielding
> On Oct 15, 2016, at 2:10 AM, William A Rowe Jr  wrote:
> 
> On Sat, Oct 15, 2016 at 3:54 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> On Fri, Oct 14, 2016 at 4:44 PM, Roy T. Fielding <field...@gbiv.com> wrote:
> Right, though several people have requested it now as errata. Seems likely to 
> be in the final update for STD.
> 
> In the HttpProtocolOptions Unsafe mode, it is tolerated.
> 
> Should it be the proper 'Strict' behavior to parse (never generate) such 
> noise? 
> 
> FWIW, I see very little harm in potentially unsafe chunk headers because
> it becomes a serious chore to inject between alternating \r-only vs \n-only 
> vs space trailing chunk headers. I'm not suggesting it can't be done, but
> most requests-with-body are intrinsically not idempotent, so one must be
> extremely clever to affect cache history. 
> 
> But it isn't impossible, so if the editors follow the way of BWS vs. follow 
> the absolute explicit statements about HTTP request field names and
> the trailing ':', I'd be somewhat disappointed. Tighten ambiguity where
> there was little ambiguity before. Make explicit the real ambiguity for
> all user-agents and servers to implement. /shrug.

We tried.  People complained.

In any case, BWS only includes *( SP / HTAB ).  Not much ambiguity there.

Roy



Re: 2.2 mod_http_proxy and "partial" pages

2005-12-16 Thread Roy T. Fielding

On Dec 16, 2005, at 12:41 AM, Plüm, Rüdiger, VIS wrote:

I do not intend to close the connection myself. Currently it
will be closed because c->keepalive is set to AP_CONN_CLOSE
(an approach also suggested in Roy's patch).


Right, the important bit is that the code managing the client
connection is what should close the client connection, not the
code that is managing the outbound (server) connection.  For all
we know, the client connection might be an SMTP notifier.

If we wanted to get really fancy, we could check to see if the
response code has been sent yet and change the whole response to
a 502 bad gateway, but that probably isn't worth the effort.


The only addition I want to make is that in the chunked case
the chunked filter should not send the closing chunk to make
it clear to the client that something had broken.


I agree with that change.


The question that remains to me: Does it hurt that the core output
filter removes the error bucket once it has seen it?


I have no idea. I looked at the filtering code and the way it uses
error buckets (in fact, the only place that uses error buckets)
and I don't understand why it was written this way at all.  It is
more efficient to just use the EOS bucket data to indicate an
error in all of those cases, since they all result in an EOS
anyway and we just waste memory/time with the special bucket type.

Roy

Re: mod_mbox 0.2 goes alpha

2005-12-21 Thread Roy T. Fielding

You guys are confusing each other to bits.  We don't have release
candidates, we NEVER use m.n.v.rc1 versioning, and the thing that
Sam produced is called a tarball.  We call it that because we don't
want people to believe it is a release until the PMC has voted to
release it.  That's all there is to it.

Sam, you just need to follow the same process as all of our other
releases -- send a message to dev asking for votes on the tarball
for declaring it the 0.2.0 alpha release. Give it three days and,
at the end, if you have at least three +1s  (including your own)
and a majority of positive votes for release, then you can move it
to the actual release dist and work on an announcement to go out
24hrs later.

Roy



Re: Vote for mod_mbox 0.2 release

2005-12-21 Thread Roy T. Fielding

On Dec 21, 2005, at 2:54 PM, Sander Temme wrote:

On Dec 21, 2005, at 11:16 PM, Maxime Petazzoni wrote:


Of course, since this tag is currently running like a charm on Ajax,
my vote is +1 for GA.


Like a charm, indeed. Zero cores since this morning CET.

+1 for GA.


-1.  Er, sorry, I was about to vote +1 and then noticed that the
legal NOTICE file contains

==
This product includes software developed by
The Apache Software Foundation (http://www.apache.org/).

Originally developed by Justin Erenkrantz, eBuilt.

The SLL sort in mbox_sort.c is based on the public-domain algorithm by
Philip J. Erdelsky ([EMAIL PROTECTED]).  You may find the algorithm and
notes at: 

The threading code in mbox_thread.c is based on Jamie Zawinski's
description of the Netscape 3.x threading algorithm at:


The 'mime_decode_qp' and `mime_decode_b64' routines are taken from
metamail 2.7, which is copyright (c) 1991 Bell Communications
Research, Inc. (Bellcore).

This product includes software developed by Edward Rudd and Paul
Querna (http://www.outoforder.cc/).

Most of mod_mbox development is now handled by Maxime Petazzoni.

==

Sorry, NOTICE is not a credits file.  Add a README instead that lists
all of the contributors.  Likewise, the scripts directory sucks beyond
description.  If you aren't going to fix it, then remove it from the
release.

Roy


Re: Vote for mod_mbox 0.2 release

2005-12-21 Thread Roy T. Fielding

On Dec 21, 2005, at 4:45 PM, Paul Querna wrote:

Roy T. Fielding wrote:

Sorry, NOTICE is not a credits file.  Add a README instead that lists
all of the contributors.  Likewise, the scripts directory sucks beyond
description.  If you aren't going to fix it, then remove it from the
release.


What specifically is wrong with the scripts directory?


They don't work.  They include actual company names that have since
been bought by people outside our influence.  They include a domain
name that I own which isn't active right now.  They are written in
a language that nobody other than Justin understands.  And, well,
they don't work.

Roy



Re: Vote for mod_mbox 0.2 release

2005-12-21 Thread Roy T. Fielding

On Dec 21, 2005, at 4:51 PM, Maxime Petazzoni wrote:


* Paul Querna <[EMAIL PROTECTED]> [2005-12-21 16:45:09]:


What specifically is wrong with the scripts directory?


Scripts are mostly specific to the ASF setup for
mail-archives.apache.org. They're not involved in making the module
work for the lambda user or even for anybody who's not part of the
infrastructure team ...

I have not changed my mind on this point: these scripts do not belong
in the mod_mbox repository, or at least not in trunk/ (and thus,
releases).


I think they should stay in trunk.  I just don't think they should
be included in the tarball until someone cares enough to fix or
replace them.

Roy


Re: Vote for mod_mbox 0.2 release

2005-12-21 Thread Roy T. Fielding

On Dec 21, 2005, at 5:15 PM, Paul Querna wrote:
I agree with all points, except the last.  They do work, they are  
running mail-archives.apache.org.


No, an edited version of them is doing a poor job of occasionally
being manually used to update our mailing lists.  I just ran into
that problem again today when I looked at wadi-dev at incubator,
which does not yet appear in the archives because the scripts
don't actually work.  They really need to be updated so that they
pick up all the archives in a dir tree automatically -- something
we could not do at the time, but can now that all of our public
archives are in a single tree.


If I had spare time, I would love to remove ZSH :)


It's on my list.  Right below finishing waka. ;-)

Roy


Re: Vote for mod_mbox 0.2 release

2005-12-23 Thread Roy T. Fielding

On Dec 23, 2005, at 10:34 AM, Maxime Petazzoni wrote:

Ok, updated tarballs have been uploaded to
. Vote is restarted.


They should be called 0.2.1, though I'll let that pass as there were
no code changes.  However, you do need to remember to check the
file permissions after uploading the files.  They need to be

   chmod 664 *

I fixed them myself the last time, but don't have time right now.

Roy



erain removed from list

2005-12-29 Thread Roy T. Fielding

It looks like an autobot was fooled into subscribing here.
I have removed it and added it to the deny list.

Roy


Re: Event MPM: Spinning on cleanups?

2005-12-30 Thread Roy T. Fielding

On Dec 30, 2005, at 5:51 PM, Brian Pane wrote:

I haven't been able to find the bug yet.  As a next step, I'll try using
valgrind on a build with pool debugging enabled.


On entry to allocator_free, if

   (node == node->next && node->index > current_free_index)

is true, then the do { } while ((node = next) != NULL);
will go into an infinite loop.  This is because

if (max_free_index != APR_ALLOCATOR_MAX_FREE_UNLIMITED
&& index > current_free_index) {
node->next = freelist;
freelist = node;
}

does not update current_free_index.  I don't know if that is the
problem, but it may be the symptom.

Roy



Re: svn commit: r360461 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/httpd.h server/protocol.c

2005-12-31 Thread Roy T. Fielding

On Dec 31, 2005, at 3:45 PM, [EMAIL PROTECTED] wrote:


Author: brianp
Date: Sat Dec 31 15:45:11 2005
New Revision: 360461

URL: http://svn.apache.org/viewcvs?rev=360461&view=rev
Log:
Refactoring of ap_read_request() to store partial request state
in the request rec.  The point of this is to allow asynchronous
MPMs to do nonblocking reads of requests.  (Backported from the
async-read-dev branch)


Umm, this needs more eyes.

It doesn't seem to me to be doing anything useful.  It just adds
a set of unused input buffer fields to the wrong memory structure,
resulting in what should be a minor (not major) MMN bump, and then
rearranges a critical-path function into several subroutines.

The nonblocking yield should happen inside ap_rgetline (or its
replacement), not in the calling routine.  The thread has nothing
to do until that call is finished or it times out.  In any case,
this code should be independent of the MPM and no MPM is going
to do something useful with a partial HTTP request.

I say -1 to adding the input buffer variables to the request_rec.
Those variables can be local to the input loop.  I don't see any
point in placing this on trunk until it can do something useful.

Roy


Re: svn commit: r360461 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/httpd.h server/protocol.c

2006-01-02 Thread Roy T. Fielding

On Dec 31, 2005, at 9:55 PM, Brian Pane wrote:

On Dec 31, 2005, at 6:50 PM, Roy T. Fielding wrote:


The nonblocking yield should happen inside ap_rgetline (or its
replacement), not in the calling routine.  The thread has nothing
to do until that call is finished or it times out.


On the contrary, the thread has some very important work to do before
that call finishes or times out: it has other connections to process.  If
the thread waits until the ap_rgetline completes, a server configuration
sized for multiple threads per connection will be vulnerable to a
trivially implementable DoS.


When I say "thread", I mean a single stream of control with execution
stack, not OS process/thread.  An event MPM is going to have a single
stream of control per event, right?  What I am saying is that the
control should block in rgetline and yield (return to the event pool)
inside that function.  That way, the complications due to yielding are
limited to the I/O routines that might block a thread rather than
being spread all over the server code.

Am I missing something?  This is not a new topic -- Dean Gaudet had
quite a few rants on the subject in the archives.


 In any case,
this code should be independent of the MPM and no MPM is going
to do something useful with a partial HTTP request.

I say -1 to adding the input buffer variables to the request_rec.
Those variables can be local to the input loop.


Are you proposing that the buffers literally become local variables?
That generally won't work; the input loop isn't necessarily contained
within a single function call, and the reading of a single request's
input can cross threads.


I am saying it doesn't belong in the request_rec.  It belongs on the
execution stack that the yield routine has to save in order to return
to this execution path on the next event.  The request does not care
about partial lines.


It would be feasible to build up the pending request in a structure
other than the request_rec, so that ap_read_async_request() can
operate on, say, an ap_partial_request_t instead of a request_rec.
My preference so far, though, has been to leave the responsibility
for knowing how to parse request headers encapsulated within
the request_rec and its associated "methods."


Maybe you should just keep those changes on the async branch for now.
The rest of the server cannot be allowed to degrade just because you
want to introduce a new MPM.  After the async branch is proven to be
significantly faster than prefork, then we can evaluate whether or
not the additional complications are worth it.

Roy


Re: svn commit: r360461 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/httpd.h server/protocol.c

2006-01-02 Thread Roy T. Fielding

On Jan 2, 2006, at 1:37 PM, Brian Pane wrote:


"Significantly faster than prefork" has never been a litmus test for
assessing new features, and I'm -1 for adding it now.  A reasonable
technical metric for validating the async changes would be "significantly
more scalable than the 2.2 Event MPM" or "memory footprint competitive
with IIS/Zeus/phttpd/one's-competitive-benchmark-of-choice."


Those aren't features.  They are properties of the resulting system
assuming all goes well.


The bit about degrading the rest of the server is a wonderful sound
bite, but we need to engineer the httpd based on data, not FUD.


I said leave it on the async branch until you have data.  You moved
it to trunk before you've even implemented the async part, which I
think is wrong because the way you implemented it damages the
performance of prefork and needlessly creates an incompatible MMN.
Maybe it would be easier for me to understand why the event loop is
being controlled at such a high level if I could see it work first.

Now, if you want to tell me that those changes produced a net
performance benefit on prefork (and thus are applicable to other MPMs),
then I am all ears.  I am easily convinced by comparative performance
figures when the comparison is meaningful.

Roy


Re: svn commit: r360461 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/httpd.h server/protocol.c

2006-01-02 Thread Roy T. Fielding

On Jan 2, 2006, at 2:14 PM, Roy T. Fielding wrote:


Now, if you want to tell me that those changes produced a net
performance benefit on prefork (and thus are applicable to other  
MPMs),

then I am all ears.  I am easily convinced by comparative performance
figures when the comparison is meaningful.


BTW, part of the reason I say that is because I have considered
replacing the same code with something that doesn't parse the
header fields until the request header/body separator line is
seen, since that would allow the entire request header to be parsed
in-place for the common case.
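
A sketch of the first step of that approach -- find the separator, then
parse the whole block in place:

    #include <stddef.h>

    /* Return the length of the header block including the blank line,
     * or 0 if the CRLF CRLF separator has not arrived yet. */
    static size_t header_block_len(const char *buf, size_t used)
    {
        size_t i;

        for (i = 3; i < used; i++) {
            if (buf[i - 3] == '\r' && buf[i - 2] == '\n'
                && buf[i - 1] == '\r' && buf[i] == '\n') {
                return i + 1;
            }
        }
        return 0;
    }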

Roy


Re: svn commit: r360461 - in /httpd/httpd/trunk: CHANGES include/ap_mmn.h include/httpd.h server/protocol.c

2006-01-03 Thread Roy T. Fielding

On Jan 3, 2006, at 12:02 AM, William A. Rowe, Jr. wrote:


Roy T. Fielding wrote:

On Jan 2, 2006, at 2:14 PM, Roy T. Fielding wrote:

Now, if you want to tell me that those changes produced a net
performance benefit on prefork (and thus are applicable to other MPMs),
then I am all ears.  I am easily convinced by comparative performance
figures when the comparison is meaningful.


lol, of course you choose the non-threaded MPM as a reference, which
therefore should receive no meaningful performance change.  The difference
between an async wakeup and a poll result should be null for one socket
pool, one process, one thread (of course select is a differently ugly
beast, and if there were a platform that supported async with no poll,
I'd laugh.)


You seem to misunderstand me -- if I compare two prefork servers, one
with the change and one without the change, and the one with the change
performs better (by whatever various measures of performance you can test),
then that is an argument for making the change regardless of the other
concerns.

If, instead, you say that the change improves the event MPM by 10% and
degrades performance on prefork by 1%, then I am -1 on that change.
Prefork is our workhorse MPM.  The task then is to isolate MPM-specific
changes so that they have no significant impact on the critical path
of our primary MPM, even if that means using #ifdefs.

Alternatively, rewrite the server to remove all MPMs other than
event and then show that the new server is better than our existing
server, and we can adopt that for 3.0.


BTW, part of the reason I say that is because I have considered
replacing the same code with something that doesn't parse the
header fields until the request header/body separator line is
seen, since that would allow the entire request header to be parsed
in-place for the common case.


Well ... you are using protocol knowledge that will render us http-bound
when it comes time to efficiently bind waka (no crlf delims in a binary
format protocol, no?) or ftp (pushes a 'greeting' before going back to
sleep.)


Well, I am assuming that the MIME header parsing code is in the
protocol-specific part of the server, yes.

Roy



Fwd: I-D ACTION:draft-eastlake-sha2-01.txt

2006-01-04 Thread Roy T. Fielding

It might be a good project for someone to take this I-D and convert
it to an apache utility library.
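
As a sketch of the shape such a library might take, mirroring the
existing apr_sha1.h style (every name below is hypothetical; none of
these exist in APR today):

    /* Hypothetical apr-util style interface for one SHA-2 family
     * member, modeled on the apr_sha1_* functions. */
    #include "apr.h"

    typedef struct apu_sha256_ctx_t apu_sha256_ctx_t;   /* invented */

    void apu_sha256_init(apu_sha256_ctx_t *ctx);
    void apu_sha256_update(apu_sha256_ctx_t *ctx,
                           const unsigned char *input, apr_size_t len);
    void apu_sha256_final(unsigned char digest[32], apu_sha256_ctx_t *ctx);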

Roy

Begin forwarded message:


From: [EMAIL PROTECTED]
Date: January 4, 2006 12:50:01 PM PST
To: i-d-announce@ietf.org
Subject: I-D ACTION:draft-eastlake-sha2-01.txt
Reply-To: [EMAIL PROTECTED]
Message-Id: <[EMAIL PROTECTED]>

A New Internet-Draft is available from the on-line Internet-Drafts
directories.



Title   : US Secure Hash Algorithms (SHA)
Author(s)   : D. Eastlake 3rd, T. Hansen
Filename: draft-eastlake-sha2-01.txt
Pages   : 99
Date: 2006-1-4

The United States of America has adopted a suite of secure hash
   algorithms (SHAs), including four beyond SHA-1, as part of a Federal
   Information Processing Standard (FIPS), specifically SHA-224 [RFC
   3874], SHA-256, SHA-384, and SHA-512.  The purpose of this document
   is to make open source code performing these hash functions
   conveniently available to the Internet community. The sample code
   supports input strings of arbitrary bit length. SHA-1's sample code
   from [RFC 3174] has also been updated to handle an input string of
   arbitrary length. Most of the text herein was adapted by the authors
   from FIPS 180-2.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-eastlake-sha2-01.txt


Re: svn commit: r366174 - /httpd/mod_mbox/trunk/module-2.0/mod_mbox_mime.c

2006-01-05 Thread Roy T. Fielding

On Jan 5, 2006, at 4:49 AM, [EMAIL PROTECTED] wrote:


+In order to handle empty boundaries, we'll look for the
+boundary plus the \n. */
+
+   boundary_line = apr_pstrcat(p, "--", mail->boundary, "\n", NULL);

/* The start boundary */
-   bound = ap_strstr(mail->body, mail->boundary);
+   bound = ap_strstr(mail->body, boundary_line);


That seems a bit risky -- MIME parts are supposed to have CRLF for
line terminators, but that code will only search for LF on Unix.

Would it make more sense to use a regex?
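
Either way, the match needs to tolerate both terminators.  One possible
shape for the fix, sketched with a hypothetical helper (not the
committed code):

    /* Sketch: accept a boundary line ending in either CRLF or bare LF,
     * instead of binding the search string to "\n".  A full parser
     * would also require the "--" to start a line. */
    #include <string.h>

    static const char *find_boundary(const char *body, const char *boundary)
    {
        size_t blen = strlen(boundary);
        const char *p = body;

        while ((p = strstr(p, "--")) != NULL) {
            if (strncmp(p + 2, boundary, blen) == 0) {
                const char *eol = p + 2 + blen;
                if (eol[0] == '\n' || (eol[0] == '\r' && eol[1] == '\n'))
                    return p;
            }
            ++p;
        }
        return NULL;
    }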

Roy



Re: Merging branch authz-dev - Authorization and Access Control 2.3 vs. 2.2

2006-01-11 Thread Roy T. Fielding

On Jan 11, 2006, at 7:19 AM, Joshua Slive wrote:


[Your merge today prompted me to dig out a response I started but
never finished.]

I am still worried that we are underestimating the pain that this will
cause.  In my opinion, a config change that requires substantial
changes to every httpd.conf and many .htaccess files requires a major
version bump (to 3.0) unless it can, in some way, be made seamless to
the end user.  And there is no way to deny that this will put a large
roadblock in the way of upgraders.


It isn't just your opinion -- incompatible configuration changes
means third-parties have to change their source code, which means a
major version bump is required.  So either somebody gets busy on
implementing backward-compatibility or this stuff gets bumped to 3.x.

We could decide that the next release will be 3.0, but I doubt it.

Roy


filesystem directives

2006-01-11 Thread Roy T. Fielding

For someone looking for something to do,

The authorization code makes an assumption that filesystems allowing
file ownership is a platform-specific define.  That is not
the case for the same reason that case-sensitivity is not based
on the platform.  All of the filesystem characteristics should be
a runtime configuration scoped within the Directory directives, e.g.

   FileSystemCaseSensitive no
   FileSystemHasOwners yes

which can then be looked up in the per-dir config while performing
operations. [How to do this efficiently is unknown to me.]
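
As a rough illustration, such directives could hang off a per-dir
module config and be fetched with the usual lookup (module, struct,
and function names below are invented):

    /* Sketch: per-directory filesystem traits, retrieved through the
     * standard per-dir config mechanism. */
    #include "httpd.h"
    #include "http_config.h"

    module AP_MODULE_DECLARE_DATA fs_traits_module;    /* hypothetical */

    typedef struct {
        int case_sensitive;     /* FileSystemCaseSensitive */
        int has_owners;         /* FileSystemHasOwners */
    } fs_traits_conf;

    static int fs_is_case_sensitive(request_rec *r)
    {
        fs_traits_conf *conf = ap_get_module_config(r->per_dir_config,
                                                    &fs_traits_module);
        return conf->case_sensitive;
    }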

Note that both of those are settable per disk in OS X, and in other
cases they will be dependent on the remote filesystem host (SMB).
The default should be based on some combination of platform and
perhaps platform-specific tests.  We've needed this feature for a
long time.

Roy


please set up a mod_python core group

2006-01-18 Thread Roy T. Fielding

It looks like mod_python is making good progress and everyone
is collaborating in the Apache way of testing and voting.
That's great!

Unfortunately, I have almost no insight into who these great people
are that are doing the RM task and testing and voting and preparing
for a next release.  That's not so great, since it is my job (as
VP of Apache HTTP Server Project) to be sure that the ASF knows all
this work is being done in its name and so that all of the people
doing it are appropriately recognized for their work.

So, please, take a few moments to decide amongst yourselves who
should have binding votes on mod_python (i.e., who has earned it),
keeping in mind that you need at least three binding +1 votes in
order to make any release at Apache, and send me a list of names
and email addresses of those people so that I can properly
record them in our records.

Cheers,

Roy T. Fielding <http://roy.gbiv.com/>
for the Apache HTTP Server PMC


Re: CHANGES attribution reminder

2006-01-21 Thread Roy T. Fielding

On Jan 21, 2006, at 2:29 PM, Ruediger Pluem wrote:

Ok. Then I had a different understanding from my osmosis :-).
Any other comments on this?
I have no problem adopting the above rules for future CHANGE entries.


Jim is correct.

It is easy to forget now because Subversion doesn't have the
rcstemplate feature, but commits should still have:

  PR:
  Obtained from:
  Submitted by:
  Reviewed by:

in the log when they are applicable.

CVS: ----------------------------------------------------------------------
CVS: PR:
CVS:   If this change addresses a PR in the problem report tracking
CVS:   database, then enter the PR number(s) here.
CVS: Obtained from:
CVS:   If this change has been taken from another system, such as NCSA,
CVS:   then name the system in this line, otherwise delete it.
CVS: Submitted by:
CVS:   If this code has been contributed to Apache by someone else; i.e.,
CVS:   they sent us a patch or a new module, then include their name/email
CVS:   address here. If this is your work then delete this line.
CVS: Reviewed by:
CVS:   If we are doing pre-commit code reviews and someone else has
CVS:   reviewed your changes, include their name(s) here.
CVS:   If you have not had it reviewed then delete this line.


Roy


Re: ECONNRESET, low keepalives, and pipelined requests?

2006-02-09 Thread Roy T. Fielding

On Feb 9, 2006, at 9:36 PM, Justin Erenkrantz wrote:


Has anyone ever seen a situation where httpd (or the OS) will RST a
connection because there's too much unread data or such?

I'm doing some pipelined requests with serf against a 2.0.50 httpd on
RH7.1 server (2.4.2 kernel?).  I'm getting ECONNRESET on the client
after I try to read or write a large number of requests.  httpd's side
is sending the RSTs - but there's nothing in the httpd logs.

Can an RST happen without a process dying?  Isn't that the most common
reason for the RST flag?  (We've checked and no httpd are dying,
AFAICT.)

Bumping the MaxKeepAliveRequests from 10 to 100 apparently solves this;
but that's just odd - yet it implies that httpd is in some control over
this behavior.

Yet, if it were httpd, why isn't it responding to all of the previous
requests before it hit the MaxKeepAliveRequests?  (There is no response
with 'Connection: Close' being sent - it just drops off in the middle
of writing the response body as far as we can see.)  So, why would it
terminate the connection *before* responding to all of the outstanding
responses that are under the MaxKeepAliveRequests limit?  Is httpd
writing the response and Linux just dropping it?


Keep in mind that a RST also tells the recipient to throw away any
data that it has received since the last ACK.  Thus, you would never
see the server's last response unless you use an external network
monitor (like another PC running ethereal connected to your client PC
with a non-switching hub).

My guess is that, when MaxKeepAliveRequests is reached, the server
process closes the connection and tells the client.  If lingering
close hasn't been broken, it will then continue reading some data
from the client precisely to avoid this lost response problem.
Serf should be looking for Connection: close on the last response
it received and close the connection, starting again on a different one.

I suggest you check the lingering close code to see if someone has
disabled it on Linux.  People do that some times because they think
it is a performance drag to linger on close, neglecting to consider
that the client will be left clueless if a RST is sent before the
client acks receipt of the server's response.
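
In plain POSIX terms, the lingering-close pattern described above is
roughly this (a sketch; httpd's real ap_lingering_close() adds timeouts
and filter flushing):

    /* Sketch: stop sending, then keep draining the client briefly so
     * the final response is not destroyed by an early RST. */
    #include <sys/socket.h>
    #include <unistd.h>

    static void lingering_close_sketch(int sd)
    {
        char junk[512];

        shutdown(sd, SHUT_WR);              /* send FIN; client sees EOF */
        while (read(sd, junk, sizeof junk) > 0)
            ;                               /* drain; the real server
                                               bounds this with a timer */
        close(sd);
    }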

Roy


Re: ECONNRESET, low keepalives, and pipelined requests?

2006-02-09 Thread Roy T. Fielding

On Feb 9, 2006, at 10:17 PM, Justin Erenkrantz wrote:

On IRC, Paul pointed out this bug (now fixed):

http://issues.apache.org/bugzilla/show_bug.cgi?id=35292

2.0.50 probably has this bug - in that it won't do lingering close
correctly - and perhaps that's what I'm running into.


You're testing against 2.0.50?  Crikey.


Any cute ideas on how to work around this?  The real problem is that
there's no way for the server to tell me what its configured
MaxKeepAliveRequests setting is.  If I knew that, I could respect it -
instead I have to discover it experimentally...


That's why we used to send a Keep-Alive: header on responses that
indicated how many requests were left.  Don't get me started...
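
That advisory header took the form "Keep-Alive: timeout=N, max=M", with
max counting down as requests were consumed.  A sketch of emitting it
(illustrative only, not the 1.3 source):

    /* Sketch: advertise how many keep-alive requests remain on this
     * connection.  Values are computed by the caller. */
    #include "httpd.h"
    #include "apr_strings.h"

    static void set_keepalive_header(request_rec *r, int timeout_s, int left)
    {
        apr_table_setn(r->headers_out, "Connection", "Keep-Alive");
        apr_table_setn(r->headers_out, "Keep-Alive",
                       apr_psprintf(r->pool, "timeout=%d, max=%d",
                                    timeout_s, left));
    }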

Roy



Re: svn commit: r393037 - in /httpd/httpd/trunk: CHANGES server/protocol.c

2006-04-10 Thread Roy T. Fielding

On Apr 10, 2006, at 2:50 PM, Ruediger Pluem wrote:
I also thought initially to fix this in apr-util, but right now I am
not sure about it, because IMHO apr_uri_parse should do generic uri
parsing.  Setting an empty uri to "/" seems to be HTTP specific, so I
am not sure if we should do this in apr_uri_parse. At least we would
need to check whether the scheme is http or https.


It probably needs to be updated for RFC 3986 anyway.  The path should
be set to "", not NULL.  The HTTP server should take care of the
redirect from "" to "/", which in this case means the http-proxy
needs to check for "" when it sends a request and respond with a
redirect that adds the "/".
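
A sketch of that caller-side handling, wrapped around the real
apr_uri_parse() (the helper itself is hypothetical):

    /* Sketch: treat an absent path as "" and let the HTTP side issue
     * the redirect that adds the "/". */
    #include "apr_uri.h"

    static const char *effective_path(apr_pool_t *p, const char *uri_str)
    {
        apr_uri_t uri;

        if (apr_uri_parse(p, uri_str, &uri) != APR_SUCCESS)
            return NULL;                /* unparsable URI */
        return (uri.path && *uri.path)
               ? uri.path
               : "";                    /* "" -> redirect target "/" */
    }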

Roy



Re: svn commit: r393037 - in /httpd/httpd/trunk: CHANGES server/protocol.c

2006-04-11 Thread Roy T. Fielding

On Apr 11, 2006, at 2:55 PM, Nick Kew wrote:


On Tuesday 11 April 2006 22:10, William A. Rowe, Jr. wrote:

Ruediger Pluem wrote:

On 04/11/2006 04:00 AM, Roy T. Fielding wrote:
It probably needs to be updated for RFC 3986 anyway.  The path should
be set to "", not NULL.  The HTTP server should take care of the
redirect from "" to "/", which in this case means the http-proxy
needs to check for "" when it sends a request and respond with a
redirect that adds the "/".


Um, it's not really a redirect; it's just a normalisation.  Shouldn't
really invoke any redirect logic, whether internal or external.


The server should redirect any time the characters in the request URI
are changed, since that impacts the digests used in various access
control mechanisms.

Roy



Re: [VOTE] 2.0.56 candidate

2006-04-18 Thread Roy T. Fielding

On Apr 18, 2006, at 1:35 PM, Colm MacCarthaigh wrote:


Also, what are people's thoughts on including sha1 signatures in our
official dist? We havn't heretofore, is there any benefit? The PGP
signatures are there to confirm veracity, the simple checksums are
really only to detect corrupted downloads, but some users do make the
md5 = insecure equation very readily.


No, there is no reason.  sha1 is just as "insecure" for hashes as md5.

Roy



Re: What are we doing about...

2006-04-19 Thread Roy T. Fielding

On Apr 19, 2006, at 8:55 AM, Colm MacCarthaigh wrote:


On Wed, Apr 19, 2006 at 08:31:25AM -0700, Justin Erenkrantz wrote:

On 4/19/06, Jim Jagielski <[EMAIL PROTECTED]> wrote:

Before I t/r 1.3, I'll be updating the files to reflect the
new copyright. We can determine some better way of doing it
post-release :)


No.  Please do not update any copyright years.


Eek, This has already been done, for trunk and for the 3 branches.


We are only supposed to indicate the year of *first* publication.


That won't have changed, so I don't think the update will have caused
any harm.


Ah, fer cryin out loud.  If this was actually needed, I would have
done it before the first release of the year.  I thought we already
had this discussion and I said don't update the years, but that may
have been a different dev list.

In any case, it requires great care -- you actually changed at
least one (maybe more) copyright lines belonging to other people,
which is somewhat illegal.  I really like all the energy you have
going right now, but we can't do a release until the commits are
checked and stuff like

--- httpd/httpd/trunk/server/util_pcre.c (original)
+++ httpd/httpd/trunk/server/util_pcre.c Wed Apr 19 05:23:42 2006
@@ -12,7 +12,7 @@

 Written by: Philip Hazel <[EMAIL PROTECTED]>

-   Copyright (c) 1997-2004 University of Cambridge
+   Copyright (c) 1997-2006 University of Cambridge

reverted.


Please let's just stick with what we have until Cliff gives a
definitive ruling.  I really would like to get this resolved soon, but
we're also going to be altering the license block as well.  (What
Jackrabbit just used is fairly close to what we should use, but I'd
like legal review before we switch all projects to it.)  -- justin


We had a legal review last year, and the review said that what we had
been doing is incorrect and the replacement text is fine.  It has simply
been held up because Cliff changed priorities. Jackrabbit used the text
that the lawyers approved.

 http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit/HEADER.txt

The only thing I would change is the phrase from our existing headers

   "you may not use this file except in compliance"

which I think was copied from MPL.  I would have preferred a more
positive statement, but that would have required another review.

I have a script that does the conversion, which I'll add to committers.

Note that I didn't have a choice with Jackrabbit.  I know exactly
who owns the copyright and I did the paperwork for the licenses, so
(IMO) that choice was to obey the law (as described by ASF attorneys)
or not.  httpd has more leeway since it is such an old project, but
I would like the board to make a decision soon.

Roy


Re: What are we doing about...

2006-04-19 Thread Roy T. Fielding

On Apr 19, 2006, at 3:38 PM, Justin Erenkrantz wrote:


On 4/19/06, Roy T. Fielding <[EMAIL PROTECTED]> wrote:
  http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit/HEADER.txt


The only pedantic item I see with that wording is that it says "the
Apache Software Foundation" instead of "The".  ;-)  *ducks and run*


Corporation legal names are not case-sensitive.  ;-p

Roy


copyright notices

2006-04-21 Thread Roy T. Fielding

Wow, this discussion is getting out of hand.  It is not a technical
issue and thus isn't going to get resolved by throwing paint cans
at the shed.  For years, the ASF had been following the examples
commonly seen in commercial software products of placing a general
copyright header all over the place to indicate the collective work.
This was fine when we started because we used to say

   Copyright © The Apache Group.

which, as a semiformal group of individuals holding joint copyright,
was both legal and correct.  However, incorporating the ASF created
an entity that only held licenses (CLAs and license grants) to the
individual contributions.  Thus, our old practice of copyright notices
should have changed accordingly.  Unfortunately (or fortunately),
I am not a lawyer and did not know the finer details of US law
regarding copyright notices, and never had a reason to discuss it
with lawyers until last year.

I requested a related opinion from our ASF lawyers and, as part of a
lengthy discussion, was informed that we can't place a notice on
anything that the ASF does not own copyright (not just a license,
but ownership in the corporate name).  So, then we tried adding the
"or its licensors, as applicable" suffix to the notice, which
satisfied 2 out of 3 lawyers (resulting in another round of finger
wrestling, and eventual agreement by all that it was bogus).
It was finally suggested that the Berne convention (and our liberal
license) made the notice unnecessary, and I came up with a minimal
replacement text for the existing header that simply informs
recipients of the licensor and terms.

A policy proposal was drafted by Cliff Schmidt, based on my proposed
header changes, and reviewed again (last October).  That policy was
placed on the queue of things Cliff was going to present to the board.
Unfortunately, Cliff's free time got hit by the truck that we all
know of as the third-party licensing issue.

None of that, however, changes the fact that I (as an ASF officer)
asked our lawyer for a legal opinion and received an answer to the
effect of "You are doing what?  No, don't do that -- the law considers
it a misrepresentation, even if it does no harm to others."  Having
received such guidance, I am responsible for implementing it even
if the board never sets a policy for the ASF as a whole.

So, why didn't I apply the changes to httpd already?  The answer is
because I am waiting for a board decision, and because the httpd source
contains so much collective work that it is very hard to find a
file that cannot be argued as (at least) a joint effort by the ASF.

That is not the case for Apache Jackrabbit.  Much of the Jackrabbit
code was developed long before it was licensed to the ASF in a
CCLA+grant. Furthermore, all of the contributions since then have
been under CLAs.  Therefore, I do know the copyright owner of those
files and I do know what can't be said in the notices.  Even so,
I put off the change until the prep for our formal FCS 1.0 release
made it necessary.

On Apr 21, 2006, at 11:46 AM, William A. Rowe, Jr. wrote:

This just isn't making sense as I read the svn tree for jackrabbit
however.

Individual files contain -no- copyright, -no- indication of where the
copyright claim is

http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit/src/main/java/org/apache/jackrabbit/

has JcrConstants.java pointing to
http://www.apache.org/licenses/LICENSE-2.0 and claiming the license but
no copyright.  That LICENSE.txt is bundled in the tree (not, you'll
note, LICENSE-2.0) at that level.  NOTICE.txt tells us...


This product includes software developed by
The Apache Software Foundation (http://www.apache.org/).

Based on source code originally developed by
Day Software (http://www.day.com/).

which is all well and good, but doesn't assert copyrights.


The only way to "assert copyright" is to accuse someone of infringing
those exclusive rights.  A notice is not an assertion -- it is supposed
to be a simple statement of fact.


It's not until you climb all the way up to

http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit/

(outside of even the src/ tree!) that you discover...

http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit/README.txt


The source tree is trunk.  The "src" tree is a directory that Maven
uses to look for java source code.  The README is included in all
of our release jars.  Thus, the ASF collective work copyright notice
is present in all of our work products and our website.

I'm really completely unclear how this protects the files we author,
the files authored by others (which we have appropriately appropriated)
and the files on which no copyright is claimed (e.g. apr/ examples
public domain.)


That is irrelevant.  They are protected by copyright law, regardless
of the notice or lack thereof.

Jackrabbit is the test animal (so to speak).  Let's give the board
time to consider what, if anything, should be done for general policy.
I don't kn

Re: [VOTE] 2.0.57 candidate

2006-04-21 Thread Roy T. Fielding

On Apr 21, 2006, at 10:39 AM, William A. Rowe, Jr. wrote:
-1 to adopting Jackrabbits' license until Roy's (very reasonable) nit
on the language is addressed.  -1 to removing copyright until we have
an absolute, documented policy from ASF legal.  I'm glad you and Roy
feel entirely assured that you speak for legal/privy to its workings
and, of course, its final conclusions.  For the sanity of all the rest
of us project members, let us please work from documented policy
though, can we?  And feh - let's just have done with this tarball
release and revisit once policy is *set*.


FTR, we are not going to vote on legal policy.  The board will vote,
if anyone.  Legal policies are not a PMC thing.  I implement them as
needed or directed by the board.

I don't really care about the nit (it is present in the existing
header text).  It is just something I noticed while implementing
the changes for Jackrabbit.

I don't concur with Colm, the tarball is the release and changing the
legal text is more significant, perhaps, than even the code itself.  So
it's yet another bump that strikes me as silly.


*shrug*  version numbers are cheap.  I thought we only required them
to change if the compiled bits would change or if the release was
already announced.

Roy


Re: Possible new cache architecture

2006-05-03 Thread Roy T. Fielding

On May 3, 2006, at 5:56 AM, Davi Arnaut wrote:


On Wed, 3 May 2006 14:31:06 +0200 (SAST)
"Graham Leggett" <[EMAIL PROTECTED]> wrote:


On Wed, May 3, 2006 1:26 am, Davi Arnaut said:

Then you will end up with code that does not meet the requirements of
HTTP, and you will have wasted your time.


Yeah, right! How?  Hey, you are using the Monty Python argument style.

Can you point to even one requirement of HTTP that my_cache_provider
won't meet?


Yes. Atomic insertions and deletions, the ability to update headers
independently of body, etc etc, just go back and read the thread.


I can't argue with a zombie, you keep repeating the same
misunderstandings.


Seriously, please move this off list to keep the noise out of people's
inboxes.


Fine, I give up.


For the record, Graham's statements were entirely correct,
Brian's suggested architecture would slow the HTTP cache,
and your responses have been amazingly childish for someone
who has earned zero credibility on this list.

I suggest you stop defending a half-baked design theory and
just go ahead and implement something as a patch.  If it works,
that's great.  If it slows the HTTP cache, I will veto it myself.

There is, of course, no reason why the HTTP cache has to use
some new middle-layer back-end cache, so maybe you could just
stop arguing about vaporware and simply implement a single
mod_backend_cache that doesn't try to be all things to all people.

Implement it and then convince people on the basis of measurements.
That is a heck of a lot easier than convincing everyone to dump
the current code based on an untested theory.

Roy


Re: Generic cache architecture

2006-05-03 Thread Roy T. Fielding

On May 3, 2006, at 12:53 PM, William A. Rowe, Jr. wrote:


Brian Akins wrote:
Is anyone else interested in having a generic cache architecture?
(not http).  I have plenty of cases where I re-invent the wheel for
caching various things (IP's, sessions, whatever, etc.).  It would
be nice to have a provider based architecture for such things.


Let's talk about httpd.  We have a cache of ssl sessions.  We have a
cache of httpd response bodies.  We have a cache of ldap credentials.
A really thorough mod_usertrack would have a cache of user sessions.

So really, it doesn't make sense to have these four wheels spinning out
of sync at different stages of stability and performance.  I'm strongly
+1 to provide this functionality once, and reuse.


On the contrary, it makes no sense whatsoever to use a generic
storage facility for cached HTTP responses in a front-end cache
because those responses can only be delivered at maximum speed
through a single system call IFF they are not generic.  That is
why our front-end cache is not, and has never needed to be, a
generic cache.
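
The "single system call" here is sendfile-style delivery of a
pre-serialized response.  A sketch of the idea (Linux sendfile();
error handling elided):

    /* Sketch: a front-end cache that stores header + body as one flat
     * file can push the whole response to the client in one syscall,
     * which is exactly what a generic storage layer would forfeit. */
    #include <sys/sendfile.h>

    static ssize_t serve_cached(int client_fd, int cache_fd, size_t len)
    {
        off_t off = 0;
        return sendfile(client_fd, cache_fd, &off, len);
    }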

A front-end cache is a completely different beast from a
back-end cache.  It doesn't make any sense to me to try to
make them the same, and it certainly isn't elegant.  SSL
session, ldap credentials, sessions, and all those related
things are trivial memory blocks that *are* suitable for
back-end caching.

I have no objection to creating a module for back-end caching.
I have no objection to creating five different forms of caching
modules, each with its own qualities, that can be selected by
configuration (preferably based on some suggested site profile).
However, I see no reason to start by changing the existing
module names and assuming that one cache fits all.

Roy


Re: test/zb.c

2006-05-08 Thread Roy T. Fielding

On May 8, 2006, at 4:24 PM, Garrett Rooney wrote:


On 5/8/06, Sander Temme <[EMAIL PROTECTED]> wrote:


Found on http://svn.apache.org/viewcvs.cgi?rev=80572&view=rev

Does an archive of that apache-core mailing list mentioned above exist?


Yes, it does.  The first few years of archives of the httpd pmc
mailing list are actually the archives of the old apache-core list.
It's not public, but you're a member, so you should be able to read it.


Right.


Do we need zb.c to be in our tree? Or can we declare it superseded by
ab.c? If only to help out our friends over at Debian?


I can't see why we'd need it...


we don't.

Roy



Re: [Fwd: Re: LICENSE file(s)]

2006-05-10 Thread Roy T. Fielding

On May 10, 2006, at 2:10 PM, William A. Rowe, Jr. wrote:

Just a footnote from legal-discuss that the win32 nmake -f Makefile.win
install isn't moving NOTICE (yet) to the target tree, and once we do
that, we need to then staple it into the installer.  Trivial but needed
to be noted.


yes, please -- it is mandatory on all distributions.

LICENSE still has our mongo-long-list-of-collected licenses.  IIUC this
is no longer the way we do things.


Eh? It is still the way we do things.

Roy



Re: httpd-apreq AND /www/www.apache.org/dist/httpd/KEYS

2006-05-23 Thread Roy T. Fielding

On May 23, 2006, at 12:45 AM, Philip M. Gollucci wrote:

Sometime hopefully in the next week, I'll be releasing httpd-apreq
(2.08).

So I added and committed my gpg key to the KEYS file.


I think you meant to say that sometime, hopefully in the next week,
you will RM a signed tarball for httpd-apreq-2.08 that will be proposed
to the project members to vote for release.

BTW, your key has no signatures and expires on 2006-09-07.  That isn't
very useful, as keys go.  I recommend making a new key (with no expiry),
keeping it secure, and finding someone in our web of trust to sign it.

Roy



restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

After quite a bit of delving into the US export requirements for
encryption-related software, I have found that we are able to
distribute 100% open source packages with identifiable source code
to anyone not in the banned set of countries.  However,

  a) we have to file export notices prior to each release in which
 the crypto capabilities are changed;

  b) we have to file export notices prior to publishing each binary
 package that includes mod_ssl and the notice must include a
 URL to the 100% open source version of that package;

  c) each redistributor (re-exporter) of our packages must do the same
 [I am unsure if that means every mirror is supposed to file as well,
 but for now I am guessing that they don't];

  d) we can only distribute binary versions of openssl if we can point
 to the 100% open source package from which it was built *and*
 file an export notice for that package prior to our publication;

  e) people who are in the banned set of countries and people in
 countries that forbid encryption cannot legally download the
 current httpd-2 packages because they include mod_ssl even when
 it won't be used.

Given those constraints, I would prefer to separate the httpd releases
into a non-crypto package and a crypto overlay, similar to what most
of the packaging redistributors do (fink, apt, etc.).

I think the best way to accomplish that is to separate mod_ssl into
a subproject that is capable of producing overlay releases for each
release of httpd.  In other words, each package would depend on an
installed instance of httpd and (depending on platform) install
mod_ssl on top along with, optionally, a specific version of openssl.
We can then limit our crypto export notices to releases of the ssl
code (where we are much more likely to remember the export process).

Thoughts?  Anyone have any better ideas?

Roy


Re: restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

On Jun 7, 2006, at 1:30 PM, Colm MacCarthaigh wrote:

  e) people who are in the banned set of countries and people in
  countries that forbid encryption cannot legally download the current
  httpd-2 packages because they include mod_ssl even when it won't be
  used.


I don't see how this can possibly be the case. If crypto code is
illegal locally, then it is illegal locally and people need to figure
that out for themselves.


The point is that they may want to download a web server which doesn't
have that problem, and right now they are limited to 1.3.x.  I consider
Web servers to be something we would want people in those countries
to be able to download without concern.  Freedom of the press.


If a person happens to live in a country which is on the USA's banned
list, there's nothing illegal (purely from their perspective) about
their act of download, US law does not apply to them.


Right, but it does apply to us (and to Ireland as well, AFAIK) if we
encourage people in those countries to download the web server but
do not also provide a non-crypto alternative.


Surely the illegality is that the ASF exports the code to those
countries, and if anyone is answerable to those particular laws it is
any US-based exporter of the code. I just want to be clear about this
distinction, if it's correct.


Mostly.  The banned countries are also banned by the EU (the
anti-terrorism laws), so it isn't as simple as you might think.

And pointing out the fact that this is all just a stupid waste
of time doesn't work either, apparently, as the current government
is technologically illiterate.


Or is there a suggestion that the situation invalidates the user's
license? (contracts can be invalidated when a law is broken, but it's
complex stuff).


No, it is strictly an advertising problem placed on the ASF.


I think the best way to accomplish that is to separate mod_ssl into a
subproject that is capable of producing overlay releases for each
release of httpd.


yuck! -1


Okay, let me put it in a different way.  The alternatives are

 1) retain the status quo, forbid distributing ssl binaries, and include
in our documentation that people in banned countries are not allowed
to download httpd 2.x.

 2) split the distribution into plain and crypto parts and only have to
deal with the export controls within the crypto distribution.

 3) delete mod_ssl from httpd

Pick one.


Thoughts?  Anyone have any better ideas?


Is the mere legal registration of the ASF within US borders a solid
stumbling block here? As in, could the situation be remedied by
forbidding US-based distributors? (Similar to what Debian used to do
with its non-US repositories).


The ASF is within US borders and is a US corp.  And, no, whatever it
was that Debian was trying to do is not even remotely sufficient for
the US because it just makes each developer the exporter.

Roy


Re: restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

On Jun 7, 2006, at 1:39 PM, William A. Rowe, Jr. wrote:
On the T-8 prohibited countries list, note it is a crime to export
technologies to them (it's hard for the US to define a crime to obtain
said technologies in a foreign jurisdiction - let's not get into that
debate).  However, as a 'public download' I believe we are now exempted
from trying to discern where these parties are.  Providing both the
base server and an ssl feature demonstrates good faith that we are
providing unrestricted access to our httpd sources, and permitting our
users to avoid mod_ssl/crypto.


Exactly.  It avoids us getting into trouble for asking people to
download httpd without specific reference to the export controls.

Note that Cliff only looked at what was needed for crypto -- he didn't
look at the general issue of producing controlled versus uncontrolled
(for export purposes) software.


On very important points;

 we have to file export notices prior to each release in which
 the crypto capabilities are changed;

This means the strength of the crypto or feature set.


Right.


Does apr's sha1 and md5 hashing
still fall into this category?


No, one-way hashing and crypto technologies used for the sole purpose
of authentication (not data privacy) are specifically excluded.


 we have to file export notices prior to publishing each binary
 package that includes mod_ssl and the notice must include a
 URL to the 100% open source version of that package;

and

 we can only distribute binary versions of openssl if we can point
 to the 100% open source package from which it was built *and*
 file an export notice for that package prior to our publication;

seem in one sense to be the same issue.  Package mod_ssl + OpenSSL
0.9.7i and does this become one notification or two separate
notifications?

When/If OpenSSL 0.9.8 is distributed 'by us', its crypto capabilities
are changed (notably ECC) so renotification is absolutely required.
Less clear when we go from 0.9.7i to 0.9.7j (it happened to be a
buildfix release) what is required.


It is impossible for us to distribute OpenSSL without also providing
a URL to the exact 100% open source distribution from which it was
built.  As you note, we can't do that by pointing to openssl.org, so
we would have to provide our own copy of the distribution or include
the source code directly in our product, just to comply with EAR.
My preference is to not distribute OpenSSL.

But if I understood Cliff's research and even our earliest legal
advice, it's not the 'binaries' we notify BIS of, it's actually the
"source", so it wouldn't seem that once we have notified them of the
source code to our packages, that any renotification would be needed
for individual binaries.


Notification is for products.  We must have one notification per product
that includes export-controlled code and 100% of the source code for
that product must be available from a single URL.  The notification is
made each time the URL changes or the crypto capability changes.
Note that the single URL may contain a list of package versions and
docs on how to build each such version from a list of source packages.

If we have five different products that use mod_ssl or openssl
(e.g., httpd, tomcat, ftpd, flood, fubar) then we need five different
notifications and each must be distributed as 100% open source to
qualify for the TSU exception.

This also explains why we don't have to provide a notice for everything
in our subversion repository.

Are we 100% certain the 'hooks' for plugging in mod_ssl themselves are
now totally and completely clear of all this garbage?  That was once a
concern back in the 90's, and I'm almost certain it's technically
irrelevant now.


The module hooks are not a concern.  The calls within mod_ssl itself
are sufficient to be controlled, as Cliff said:

   The 5D002 ECCN includes software specially designed to use other
   technology controlled by 5D002.  That would imply that mod_ssl is
   also subject to export regulation and is allowed the TSU exception.

Here are some other links worth looking at

   http://www.apache.org/dev/crypto.html
   http://www.access.gpo.gov/bis/ear/ear_data.html

   http://www.hecker.org/mozilla/eccn

   http://www.adobe.com/support/exportcompliance.html
   http://www.adobe.com/support/eccnmatrix.html

Note that most of Adobe's products are classified as 5D992, which is
either because they requested a specific review and the result was
NLR (no license required) or possibly because of the regulation of

 c.  "Software" designed or modified to protect
 against malicious computer damage, e.g., viruses.

and I don't need to speculate as to why such software is not allowed
to be exported to the T-8 countries.

One weird thing about the ECCNs is that there is no classification
number for "not controlled". *shrug*

Roy


Re: restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

On Jun 7, 2006, at 3:02 PM, Colm MacCarthaigh wrote:


On Wed, Jun 07, 2006 at 02:51:12PM -0700, Cliff Schmidt wrote:

Here's the page that I've put together right now:
http://apache.org/dev/crypto.html.  Unfortunately, it  needs a little
more detail.


Thank you very much, that's already answered a few of my questions and
given me some good pointers.


The US export laws do not require us to offer a non-crypto version of
products we place on the web that do include export-controlled crypto.

The only thing we cannot do is knowingly export to a handful of
particular countries; however, placing an item on the web does not
qualify as knowingly exporting to any particular country.


That would be excellent.


We also cannot go to one of those countries and agitate for people
to download a copy of httpd and run their own web server, though
I imagine Brian, Dirk, and Sally are the only ones likely to travel
that far.  More to the point, I'd prefer not to have all the warnings
scrawled across the top of our downloads page.


However, if there are httpd users in countries that have *import*
restrictions that would like to use the non-ssl version of httpd, that
might be a reason to do what is being suggested here.  But there is no
U.S. regulation that I am aware of that requires us to distribute a
non-SSL version ... but maybe I'm not understanding the concern.


From the sound of things, we could put up ssl-capable downloads right
now with no liability for the ASF or anyone other than users in
countries with such restrictions, which is useful to know.


If and only if we FIRST notify BIS and SECOND place text similar to
what Adobe has on the download page, and that assumes we either
do not include openssl or we distribute the source code for that
as well.

So, I'm wondering how effective a liability shield it is for a US-based
corporation to export such content via non-US-based distributors. It
seems odd that this would work legally, but that SPI/Debian did it for
so long sparks my interest; maybe there is a path through.


I have no idea what the Debian story is, but that is not an option for
a number of reasons.  Here's the biggest reason, the same U.S.
government entity that controls our exports also controls reexport
from any other country of goods that were previously exported from the
U.S.


I've been reading http://www.debian.org/legal/cryptoinmain and it looks
like they shifted the liability to their developers personally, who
exported-by-proxy.


Yep.  However, Debian has no real problem because they do have a URL
to associate with the source code of whatever they distribute.  The
problem for us is because we don't distribute OpenSSL as it would be
built for mod_ssl *and* we wouldn't be controlled at all if it were
not for that single module.  That is why our dilemma is actually
worse than Mozilla (which requires SSL and binds it statically).

Roy



Re: restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

On Jun 7, 2006, at 4:53 PM, Colm MacCarthaigh wrote:


On Wed, Jun 07, 2006 at 04:32:40PM -0700, Roy T. Fielding wrote:

We also cannot go to one of those countries and agitate for people
to download a copy of httpd and run their own web server


Who's "we"? Members of the ASF? Members of the PMC? committers?
developers?


"We" is anyone representing the ASF.  How (or who) would determine
that is anyone's guess.

More links

   http://www.exportcontrolblog.com/

   http://www.stanford.edu/dept/DoR/exp_controls/index.html

   http://www.bis.doc.gov/deemedexports/deemedexportsfaqs.html#17

The EAR guidelines are insanely complicated because they are basically
a summary of various laws and executive directives.  It is FUBAR, but
violating them can be subject to civil and criminal penalties in the US.
I think that's why most of the companies stay conservative and simply
ban all export to anyone on the lists.

Roy


Re: restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

On Jun 7, 2006, at 4:02 PM, Roy T. Fielding wrote:


One weird thing about the ECCNs is that there is no classification
number for "not controlled". *shrug*


It seems that "EAR 99" is the catch-all name for things that might
be controlled but are not specifically classified already.

Roy



Re: restructuring mod_ssl as an overlay

2006-06-07 Thread Roy T. Fielding

On Jun 7, 2006, at 2:35 PM, Ruediger Pluem wrote:

On 06/07/2006 10:53 PM, William A. Rowe, Jr. wrote:
There's another gray point, without OpenSSL, mod_ssl is a noop, that
is, it does no crypto.  There is more crypto in mod_auth_digest,
util_md5 or in apr-util than there is in mod_ssl.


I think this is an excellent point regarding the source. Without an SSL
toolkit like openssl mod_ssl does nothing. It is no crypto software.
Otherwise you could argue that httpd without mod_ssl is also crypto
software, because you can use mod_ssl with httpd. So separating it into
a subproject would not help either.


The controlled software under 5D002 includes both crypto software for
the purpose of information privacy (not authentication) and any software
specifically designed to use 5D002-covered software.  Any SSL library
is controlled by 5D002 and mod_ssl is specifically designed to use
an SSL library.  In contrast, httpd module hooks are not specifically
designed to use mod_ssl -- they are general-purpose.

So provided mod_auth_digest, util_md5 or apr-util do not impose further
problems


One-way hash algorithms are not encryption technology.  Related, yes,
but "encryption" as it has been commonly defined is specific to
bidirectional transforms for information privacy applications.

Roy



Re: restructuring mod_ssl as an overlay

2006-06-08 Thread Roy T. Fielding

Sorry, I did a poor job of explaining -- the binaries issue is about
openssl.  The openssl issue is what required me to read the EAR
guidelines, but my response is based on what I learned about the
EAR in general.

The mere presence of mod_ssl source code appears to be sufficient to
make the product as a whole covered by 5D002 export controls, which means
we can distribute both source and binaries under the TSU exception iff
the binaries are built from a 100% open source package that we can point
to with a URL.  That is no big deal.  The big deal is that 5D002
classification also means that it is illegal for the ASF to knowingly
allow anyone residing in, or a citizen of, the T-8 countries, or anyone
on the "denied persons list", to even participate in our project,
let alone download packages, since that participation would be a
"deemed export".  That is why I suggested a separate (sub)project,
so that the "httpd" product could exist separately and be completely
open to participation and downloads.  Just making it a release-time
build separation is not sufficient.

However, if the group would prefer to keep mod_ssl within the package,
then we have to take the appropriate actions in our documentation and
committer policies.  I do not think we would be in any danger of the
FBI making an example of us provided that we publish the same export
guidelines as all the other software companies.

So, I guess the real question is: do we follow the example of Mozilla
et al and simply publish as 5D002 with the appropriate documentation,
or do we make an attempt to separate the products in a way that one
half is unrestricted and the other is 5D002?

Those are the two choices that *we* need to discuss (choosing to do
neither is not an option now that I have a vague understanding of EAR
and how larger institutions like Stanford U. have chosen to enforce it).

If anyone can think of another option, I'd like to hear it before
proposing a vote.  Once we make a decision on the technical contents
of the project, Cliff and I can work out the legal requirements and
BIS notices in a way that can be applied across the ASF.

Roy

