Re: libapreq 2.17 POST upload with empty filename parameter

2023-07-05 Thread Raymond Field via dev

Hi,

After building and installing from trunk, I can see all of the 
parameters being parsed as expected.


Thank you for your help,

Kind regards,

Raymond Field

On 04/07/2023 22:01, Joe Schaefer wrote:

2.17 was a dud security release.  Use trunk

Joe Schaefer, Ph.D

+1 (954) 253-3732
SunStar Systems, Inc.
Orion - The Enterprise Jamstack Wiki



Re: libapreq 2.17 POST upload with empty filename parameter

2023-07-04 Thread Joe Schaefer
2.17 was a dud security release.  Use trunk

Joe Schaefer, Ph.D

+1 (954) 253-3732
SunStar Systems, Inc.
Orion - The Enterprise Jamstack Wiki





libapreq 2.17 POST upload with empty filename parameter

2023-07-04 Thread Raymond Field via dev

Hi,

I don't know if this is the correct place to report an issue with
libapreq2; please let me know where I should send this report if this
isn't the correct place.

If I POST a form to the server that contains unfilled file-upload fields,
the library seems to give up processing at the first empty filename.
For example, if I POST:

-15448443913271751721417945010
Content-Disposition: form-data; name="postticket"


-15448443913271751721417945010
Content-Disposition: form-data; name="uid"

1263741688468911
-15448443913271751721417945010
Content-Disposition: form-data; name="new_doc_file";
filename="some_test.txt"
Content-Type: text/plain

this is some text


-15448443913271751721417945010
Content-Disposition: form-data; name="new_doc_type"

Document
-15448443913271751721417945010
Content-Disposition: form-data; name="vidlinkhtml"


-15448443913271751721417945010
Content-Disposition: form-data; name="new_doc_thumbnail"; filename=""
Content-Type: application/octet-stream


-15448443913271751721417945010
Content-Disposition: form-data; name="new_doc_file_thumbnail"; filename=""
Content-Type: application/octet-stream


-15448443913271751721417945010
Content-Disposition: form-data; name="new_doc_title"

joe_wicks_crispy_sesame_chicken
-15448443913271751721417945010
Content-Disposition: form-data; name="new_access"

General
-15448443913271751721417945010
Content-Disposition: form-data; name="new_port_name"


-15448443913271751721417945010
Content-Disposition: form-data; name="new_doc_desc"


-15448443913271751721417945010
Content-Disposition: form-data; name="role_7_priv_2"

21
-15448443913271751721417945010
Content-Disposition: form-data; name="new_comments"

YES
-15448443913271751721417945010
Content-Disposition: form-data; name="new_notify"

YES
-15448443913271751721417945010
Content-Disposition: form-data; name="add_submit"

Submit
-15448443913271751721417945010
Content-Disposition: form-data; name="add_submit_button"

Submit
-15448443913271751721417945010--

When looking at $apr->param I only see the following names: postticket,
uid, new_doc_file, vidlinkhtml, i.e. everything up to, but not including,
the first parameter with filename="".

If I submit the form without the parameters that have empty filenames, I
see all of the parameter names.

This started happening when I upgraded a server from Debian 11 to Debian
12; it worked OK with libapreq 2.13.  The libapreq libraries are not
currently included in the Bookworm package list, so I added them from
testing.  I've also tried installing directly from CPAN, but I see the
same issue.

Kind regards,

Raymond Field



Fwd: Returned post for annou...@httpd.apache.org

2021-11-01 Thread ste...@eissing.org
Huh? Halloween?

> Begin forwarded message:
> 
> From: announce-h...@httpd.apache.org
> Subject: Returned post for annou...@httpd.apache.org
> Date: 1 November 2021 at 00:16:48 CET
> To: ic...@apache.org
> 
> 
> Hi! This is the ezmlm program. I'm managing the
> annou...@httpd.apache.org mailing list.
> 
> I'm sorry, the list moderators for the announce list
> have failed to act on your post. Thus, I'm returning it to you.
> If you feel that this is in error, please repost the message
> or contact a list moderator directly.
> 
> --- Enclosed, please find the message you sent.
> 
> 
> 
> 
>   October 07, 2021
> 
>   The Apache Software Foundation and the Apache HTTP Server Project
>   are pleased to announce the release of version 2.4.51 of the Apache
>   HTTP Server ("Apache").  This version of Apache is our latest GA
>   release of the new generation 2.4.x branch of Apache HTTPD and
>   represents fifteen years of innovation by the project, and is
>   recommended over all previous releases. This release of Apache is
>   a security, feature and bug fix release.
> 
>   We consider this release to be the best version of Apache available, and
>   encourage users of all prior versions to upgrade.
> 
>   Apache HTTP Server 2.4.51 is available for download from:
> 
> https://httpd.apache.org/download.cgi
> 
>   Apache 2.4 offers numerous enhancements, improvements, and performance
>   boosts over the 2.2 codebase.  For an overview of new features
>   introduced since 2.4 please see:
> 
> https://httpd.apache.org/docs/trunk/new_features_2_4.html
> 
>   Please see the CHANGES_2.4 file, linked from the download page, for a
>   full list of changes. A condensed list, CHANGES_2.4.51 includes only
>   those changes introduced since the prior 2.4 release.  A summary of all 
>   of the security vulnerabilities addressed in this and earlier releases 
>   is available:
> 
> https://httpd.apache.org/security/vulnerabilities_24.html
> 
>   This release requires the Apache Portable Runtime (APR), minimum
>   version 1.5.x, and APR-Util, minimum version 1.5.x. Some features may
>   require the 1.6.x version of both APR and APR-Util. The APR libraries
>   must be upgraded for all features of httpd to operate correctly.
> 
>   This release builds on and extends the Apache 2.2 API.  Modules written
>   for Apache 2.2 will need to be recompiled in order to run with Apache
>   2.4, and require minimal or no source code changes.
> 
> https://svn.apache.org/repos/asf/httpd/httpd/trunk/VERSIONING
> 
>   When upgrading or installing this version of Apache, please bear in mind
>   that if you intend to use Apache with one of the threaded MPMs (other
>   than the Prefork MPM), you must ensure that any modules you will be
>   using (and the libraries they depend on) are thread-safe.
> 
>   Please note the 2.2.x branch has now passed the end of life at the Apache
>   HTTP Server project and no further activity will occur including security
>   patches.  Users must promptly complete their transitions to this 2.4.x
>   release of httpd to benefit from further bug fixes or new features.
> 
> 
> 
> 
> 



Asking for Help: Processing POST enctype="multipart/form-data", libapreq2 /mod_upload examples cannot find

2021-05-05 Thread 巍才凌

Hi experts,

I am developing an httpd module (mod_example) for my web application. I
want to process an HTML form POSTed with enctype="multipart/form-data",
in order to upload files to my server. I searched Google but did not find
any API for processing a "multipart/form-data" POST, analogous to
ap_parse_form_data for the default POST encoding.
I found that libapreq2 may help, but I cannot find an example at
https://httpd.apache.org/apreq/docs/libapreq2/index.html. I also searched
for a mod_upload; my mod_example cannot call the mod_upload_form defined
in mod_upload, and I did not find an example of mod_upload.
Can any experts advise on examples for processing POST requests of type
"multipart/form-data"?
Thanks,

Forrest
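Assuming libapreq2 and its mod_apreq2 filter are installed alongside httpd 2.4, a handler along the following lines is one way to approach this. It is an untested sketch, not an official example: the module name, handler name, and response format are invented here, and error handling is abbreviated.

```c
/* Sketch of an httpd 2.4 content handler that uses libapreq2 to parse a
 * POST body sent as multipart/form-data.  Requires the httpd and
 * libapreq2 development headers. */
#include <string.h>
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "apreq_module_apache2.h"  /* apreq_handle_apache2() */
#include "apreq_param.h"           /* apreq_value_to_param(), apreq_param_t */

static int upload_example_handler(request_rec *r)
{
    if (!r->handler || strcmp(r->handler, "upload-example") != 0)
        return DECLINED;
    if (r->method_number != M_POST)
        return HTTP_METHOD_NOT_ALLOWED;

    apreq_handle_t *req = apreq_handle_apache2(r);
    const apr_table_t *body = NULL;

    /* apreq_body() drives the multipart parser and exposes every field. */
    if (apreq_body(req, &body) != APR_SUCCESS || body == NULL)
        return HTTP_BAD_REQUEST;

    ap_set_content_type(r, "text/plain");

    const apr_array_header_t *arr = apr_table_elts(body);
    const apr_table_entry_t *elt = (const apr_table_entry_t *)arr->elts;
    for (int i = 0; i < arr->nelts; i++) {
        apreq_param_t *p = apreq_value_to_param(elt[i].val);
        /* p->upload is a bucket brigade holding the file data, or NULL
         * for ordinary fields; see apreq_brigade_fwrite() for spooling. */
        ap_rprintf(r, "%s%s\n", elt[i].key,
                   p->upload ? " (file upload)" : "");
    }
    return OK;
}

static void upload_example_hooks(apr_pool_t *pool)
{
    ap_hook_handler(upload_example_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA upload_example_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    upload_example_hooks
};
```

Wiring it up would take something like loading mod_apreq2 plus a `SetHandler upload-example` inside a Location block; the exact directive placement is an assumption here, not taken from this thread.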

Re: [PATCH 62186] POST request getting logged as GET request

2018-04-10 Thread Micha Lenk

This is a kind reminder that I still haven't received any response.

Is there any additional information needed from my side?





[PATCH 62186] POST request getting logged as GET request

2018-03-29 Thread Micha Lenk
Hi Apache httpd committers,

I think I've found a bug that triggers under the following conditions:

* Apache is configured to serve a local customized error page, e.g.
   using something like "ErrorDocument 404 /var/www/errors/404.html"

* Apache is configured to log the original request's method, e.g.
   using something like a CustomLog format containing method=\"%<m\"

Under these conditions Apache sets r->method to "GET" and r->method_number
to M_GET before calling ap_internal_redirect(custom_response, r) to serve
the configured error document, so a POST request ends up logged as GET.

I've tried to fix this issue by taking a backup of the original
request's method and restoring it as soon as ap_internal_redirect()
returns (see attached patch bz62186_httpd_bugfix.patch). So far the
tests I've done are successful, i.e. the request is now correctly logged
as a POST request.

I've filed this issue some days ago as
https://bz.apache.org/bugzilla/show_bug.cgi?id=62186 , but so far it
didn't get any comments yet. Could anybody please take a look?


Kind regards,
Micha

--- t/apache/errordoc_method_logging.t	(nonexistent)
+++ t/apache/errordoc_method_logging.t	(working copy)
@@ -0,0 +1,34 @@ 
+use strict;
+use warnings FATAL => 'all';
+
+use Data::Dumper;
+use Apache::Test;
+use Apache::TestRequest;
+use Apache::TestUtil qw/t_cmp
+t_start_file_watch
+t_finish_file_watch/;
+
+Apache::TestRequest::module('error_document');
+
+plan tests => 3, need_lwp;
+
+{
+t_start_file_watch 'method_log';
+
+my $response = POST '/method_logging', content => 'does not matter';
+chomp(my $content = $response->content);
+
+ok t_cmp($response->code,
+     404,
+ 'POST /method_logging, code');
+
+ok t_cmp($content,
+ 'Error 404 Test',
+ 'POST /method/logging, content');
+
+    my @loglines = t_finish_file_watch 'method_log';
+chomp @loglines;
+ok t_cmp($loglines[0],
+ qr/"POST \/method_logging HTTP\/1.1" .* method="POST"/,
+ 'POST /method/logging, log');
+}
--- t/conf/extra.conf.in	(revision 1826815)
+++ t/conf/extra.conf.in	(working copy)
@@ -742,7 +742,11 @@ 
 ## 
 
 ErrorDocument 404 "per-server 404
- 
+
+CustomLog logs/method_log "%h %l %u %t \"%r\" %>s %b method=\"%<m\""
+
+
 
 ErrorDocument 404 "per-dir 404
 
@@ -760,6 +764,10 @@ 
 ErrorDocument 404 default
 
 
+
+ErrorDocument 404 /apache/errordoc/404.html
+
+
 
  ErrorDocument 404 "testing merge
 
--- t/htdocs/apache/errordoc/404.html	(nonexistent)
+++ t/htdocs/apache/errordoc/404.html	(working copy)
@@ -0,0 +1 @@ 
+Error 404 Test

--- modules/http/http_request.c	(revision 1826989)
+++ modules/http/http_request.c	(working copy)
@@ -187,7 +187,8 @@ 
 apr_table_setn(r->headers_out, "Location", custom_response);
 }
 else if (custom_response[0] == '/') {
-const char *error_notes;
+const char *error_notes, *original_method;
+int original_method_number;
 r->no_local_copy = 1;   /* Do NOT send HTTP_NOT_MODIFIED for
  * error documents! */
 /*
@@ -205,9 +206,13 @@ 
  "error-notes")) != NULL) {
 apr_table_setn(r->subprocess_env, "ERROR_NOTES", error_notes);
 }
+original_method = r->method;
+original_method_number = r->method_number;
 r->method = "GET";
 r->method_number = M_GET;
 ap_internal_redirect(custom_response, r);
+r->method = original_method;
+r->method_number = original_method_number;
 return;
 }
 else {



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2017-01-04 Thread Jim Jagielski

> On Jan 3, 2017, at 8:04 PM, Noel Butler  wrote:
> 
> On 03/01/2017 23:11, Jim Jagielski wrote:
> 
>> Back in the "old days" we used to provide complimentary builds
>> for some OSs... I'm not saying we go back and do that necessarily,
>> but maybe also providing easily consumable other formats when we
>> do a release, as a "service" to the community might make a lot
>> of sense.
>>  
> 2 years ago it was decided to stop the official -deps (despite they are 
> included in dev still)... now you want to bring it back? (you'd have to if 
> you're going to roll usable binary packages or your "community service" 
> re-built packages are going to be broken)

Nope. Didn't say that. And the inclusion on dev still is known
and even explicitly addressed.



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2017-01-03 Thread William A Rowe Jr
On Tue, Jan 3, 2017 at 7:04 PM, Noel Butler  wrote:
>
> On 03/01/2017 23:11, Jim Jagielski wrote:
>
> Back in the "old days" we used to provide complimentary builds
> for some OSs... I'm not saying we go back and do that necessarily,
> but maybe also providing easily consumable other formats when we
> do a release, as a "service" to the community might make a lot
> of sense.
>
>
> 2 years ago it was decided to stop the official -deps (despite they are 
> included in dev still)... now you want to bring it back? (you'd have to if 
> you're going to roll usable binary packages or your "community service" 
> re-built packages are going to be broken)

I don't think he said that. For years httpd shipped the compiled
current openssl, expat, pcre sources as a binary. There was no sources
package of these, although we did provide the .diff to get the
packages to build correctly.

Because HTTP/2 requires OpenSSL 1.0.2, that will have to be part of
most packages, including semi-modern Linux flavors.

PCRE[2] is unavoidable, and while libxml2 can sub in for libexpat, the
SVN project would rather we bound to libexpat for specific features
they rely on.


> Although I as many others here prefer to roll our own due to our configs, and 
> not having to deal with bloat, I can see this having a positive effect for 
> users of a couple of distros who when they release brand new releases, come 
> with antiquated junk thats outdated and stays outdated, to give those users a 
> choice of using a modern code set would be good, but requires long term 
> dedication.

Agreed - it simply has to land somewhere like /opt/apache/httpd/ or
whatnot, to disambiguate whatever the user builds for themself in
/usr/local/ and what the OS might provision in /usr/


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2017-01-03 Thread Noel Butler
On 03/01/2017 23:11, Jim Jagielski wrote:

> Back in the "old days" we used to provide complimentary builds
> for some OSs... I'm not saying we go back and do that necessarily,
> but maybe also providing easily consumable other formats when we
> do a release, as a "service" to the community might make a lot
> of sense.

2 years ago it was decided to stop the official -deps (despite they are
included in dev still)... now you want to bring it back? (you'd have to
if you're going to roll usable binary packages or your "community
service" re-built packages are going to be broken) 

Although I as many others here prefer to roll our own due to our
configs, and not having to deal with bloat, I can see this having a
positive effect for users of a couple of distros who when they release
brand new releases, come with antiquated junk thats outdated and stays
outdated, to give those users a choice of using a modern code set would
be good, but requires long term dedication.

-- 
Kind Regards, 

Noel Butler 

This Email, including any attachments, may contain legally 
privileged
information, therefore remains confidential and subject to copyright
protected under international law. You may not disseminate, discuss, or
reveal, any part, to anyone, without the authors express written
authority to do so. If you are not the intended recipient, please notify
the sender then delete all copies of this message including attachments,
immediately. Confidentiality, copyright, and legal privilege are not
waived or lost by reason of the mistaken delivery of this message. Only
PDF [1] and ODF [2] documents accepted, please do not send proprietary
formatted documents 

 

Links:
--
[1] http://www.adobe.com/
[2] http://en.wikipedia.org/wiki/OpenDocument



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2017-01-03 Thread William A Rowe Jr
On Jan 3, 2017 07:11, "Jim Jagielski"  wrote:

Back in the "old days" we used to provide complimentary builds
for some OSs... I'm not saying we go back and do that necessarily,
but maybe also providing easily consumable other formats when we
do a release, as a "service" to the community might make a lot
of sense.


It could be really helpful. Or we can follow svn's lead and hand it
entirely off to the broader community, which proved really effective on
Windows, given the number of distros to now choose between. I haven't seen
similar for RHEL users, for example.

That said, only one major Linux distro (April Ubuntu LTS) is at OpenSSL
1.0.2, which is a necessary part of http/2's special sauce.


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2017-01-03 Thread Jim Jagielski
Back in the "old days" we used to provide complimentary builds
for some OSs... I'm not saying we go back and do that necessarily,
but maybe also providing easily consumable other formats when we
do a release, as a "service" to the community might make a lot
of sense.


Re: Post 2.4.25

2016-12-31 Thread David Zuelke
On 31 Dec 2016, at 00:09, Stefan Fritsch  wrote:
> * the longer 2.6/3.0 takes the more half-baked/half-finished stuff 
> accumulates 
> that needs to be fixed before a release.
> 
> But I don't have any ideas how to resolve this.

Did you see my "A new release process?" thread? :)




Re: Post 2.4.25

2016-12-30 Thread Stefan Fritsch
On Saturday, 24 December 2016 08:29:35 CET Rich Bowen wrote:
> From my perspective, watching Nginx gain traction through superior
> marketing, and channeling Dilbert's Pointy Haired Boss in assuming that
> everything which I have never done must be simple, I, for one, would
> like to see us release a 2.6, and more generally, to release a 2.x every
> 2 years, or less, rather than every 4 years, or more.

There is the problem that, on the one hand, one should make some invasive 
changes in trunk to improve the architecture. On the other hand, this is 
problematic if the 2.6/3.0 release is not coming soon, because

* it makes it difficult to backport stuff to 2.4.x

* there is the danger that the people who did the invasive changes are no 
longer around when 2.6/3.0 is actually released. We had this problem with the 
authn/authz stuff for 2.4, which took quite some time to get fixed.

* the longer 2.6/3.0 takes the more half-baked/half-finished stuff accumulates 
that needs to be fixed before a release.

But I don't have any ideas how to resolve this.

Cheers,
Stefan



RE: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-30 Thread Houser, Rick
I agree with a lot of what Daniel says, and I'm in a similar role with 
maintaining my organization's httpd RPM packages.

However, I don't look at this suggestion so much as a replacement, but rather 
an additional option end users can use if they aren't up to the challenge of 
using sources, but can't get by with ancient builds in RHEL, etc.  I personally 
doubt this would affect that many of the bigger users (let alone those on this 
list), as we would just keep using sources to keep up with what the LTS distros 
leave off (a 5+ year cycle is just too slow for the modern web tier).  As 
someone who does distro packaging, I think this is completely the wrong 
distribution model, but it's also the quick and dirty one.

Just throwing this out there, but a better middle-of-the-road option for 
similar user coverage may be more aggressive backporting of bleeding-edge 
Apache-related packages from development distros like Fedora to repositories 
maintained for the LTS distros.  A lot of people already do this work 
independently, so perhaps much of the labor overhead could be eliminated with 
a bit more initial organizational effort, and referral/hosting support from 
the httpd project?


Rick Houser
Web Administration

> -Original Message-
> From: Daniel Ruggeri [mailto:drugg...@primary.net]
> Sent: Friday, December 30, 2016 10:12
> To: dev@httpd.apache.org
> Subject: Re: The Version Bump fallacy [Was Re: Post 2.4.25]
> 



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-30 Thread Daniel Ruggeri
On 12/28/2016 6:40 PM, Yehuda Katz wrote:
> On Wed, Dec 28, 2016 at 12:35 AM, William A Rowe Jr
> mailto:wr...@rowe-clan.net>> wrote:
>
> Our adoption is *broadly* based on the OS distributions
> from vendors, not from people picking up our sources.
> Yes - some integrate directly from source, and others
> use a non-OS distribution.
>
>
> I think a significant number of users of nginx add the official nginx
> yum/apt sources and keep up to date that way
> (http://nginx.org/en/linux_packages.html#mainline).
> This is particularly true because the vendor-supplied version are so
> old. You can see this in the w3techs data: nginx 1.10.2 came out in
> October and already makes up 75% of all nginx 1.10 users. nginx 1.11.8
> usage has similar trends.
>
> A possible solution to this would be to start publishing binaries in a
> package-manager-accessible format.
> I am confident it would see a much higher rate of adoption.
>
> - Y

I feel strongly about this...

As a package builder/maintainer at $dayjob, this idea terrifies me.
Given the huge variation in distributions and what is current on those
platforms, the "best" option I see is to build for the least common
denominator (minimum common libc, APR, APR-UTIL, openssl, openldap,
etc). Otherwise, the package may only work on sufficiently modern
installations. Things like Docker containers for the different distros
are nice, but I'm not sure those are guaranteed to be current or
accurately represent what an installation will look like. Additionally,
vendors set different prefixes or split their configurations up
differently meaning we would then have to bite off the work of creating
vendor-specific packages (sucks for us) or force a standard installation
format (sucks for operators of the servers). A really good illustration
of this challenge is the layout differences between Debian and CentOS
where even the name of the server binary is changed from "httpd" to
"apache2" in the former distro.

Or worse... we would have to bundle/vendor a copy of the dependencies
inside the httpd package. This becomes a nightmare for the package
builders because (as wrowe pointed out recently) it requires us to build
these updated libraries and push the new package at some cadence as well
as changing library search paths to potentially funky locations. It also
becomes a challenge for server operators because a library now exists in
two locations on the machine so compliance auditing gets forked (my
httpd installation may be using openssl 1.0.2j but my postfix server may
be using 0.9.8zh).

Also, I'm sure it goes without saying, but we can't reasonably consider
either approach without full CI... doing all this manually is
unmaintainable (heh - ask me how I know).

-- 
Daniel Ruggeri



Re: Post 2.4.25

2016-12-29 Thread William A Rowe Jr
On Thu, Dec 29, 2016 at 8:23 AM, Jim Jagielski  wrote:
>
>> On Dec 28, 2016, at 6:28 PM, William A Rowe Jr  wrote:
>>
>> Because fixing r->uri is such a priority, trust that I'll be voting every 
>> 2.6 candidate a -1 until it exists. I don't know why the original httpd 
>> founders are so hung up on version number conservation, they are cheap, but 
>> we are breaking a key field of a core request structure and no other OSS 
>> project would be stupid enough to call that n.m+1.
>
> Who is digging in their heels and blocking new development
> now?
>
> So you are admitting that you will "veto" (although you
> can't veto a release) any 2.5.* "releases" unless and
> until r->uri is "fixed".

Wow, Jim, how did you misread my assertion that I'd oppose 2.6 GA
or 3.0 GA release until feature "X", where "X" represents the heavy-lift
of not using filesystem syntax as the uri path except for files, honoring
the URI and HTTP RFC, and therefore requiring some module authors to
re-think how they consumed or changed/assigned r->uri. Modules such
as proxy would actually pass on the *presented* uri (if valid) to the back
end http server - just imagine that. That change I'm expecting we all
want to call out as 3.0 for our developers, even though there are no
directives changed for our users so administration doesn't change.

How did you jump to the conclusion that I'd block an -alpha or -beta
release on the 2.5.x trunk? Usually takes some number of incremental
-alpha/-beta tags to get to GA.

And how did you translate 'vote -1' to veto?


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-29 Thread William A Rowe Jr
On Thu, Dec 29, 2016 at 8:25 AM, Jim Jagielski  wrote:
> It wasn't the paste that was the problem, but the inability
> of other email clients to determine from your email what
> parts/sections are quoted from *previous* emails.

Yann pointed me in the right direction, I believe it is fixed now.

Thanks for the heads-up!

>> On Dec 28, 2016, at 5:49 PM, William A Rowe Jr  wrote:
>>
>> Hi Jim,
>>
>> Talk to Google and the OpenOffice Team, that was a paste from OpenOffice 
>> Calc.
>>
>> I'll be happy to start summarizing as a shared Google sheet.

Google sheet might still be useful, so I'll maintain that as a general purpose
collection of shared-with-httpd-dev tabs.


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-29 Thread Jim Jagielski

> On Dec 28, 2016, at 7:40 PM, Yehuda Katz  wrote:
> 
> On Wed, Dec 28, 2016 at 12:35 AM, William A Rowe Jr  
> wrote:
> Our adoption is *broadly* based on the OS distributions
> from vendors, not from people picking up our sources.
> Yes - some integrate directly from source, and others
> use a non-OS distribution.
> 
> I think a significant number of users of nginx add the official nginx yum/apt 
> sources and keep up to date that way 
> (http://nginx.org/en/linux_packages.html#mainline).
> This is particularly true because the vendor-supplied versions are so old. You 
> can see this in the w3techs data: nginx 1.10.2 came out in October and 
> already makes up 75% of all nginx 1.10 users. nginx 1.11.8 usage has similar 
> trends.
> 
> A possible solution to this would be to start publishing binaries in a 
> package-manager-accessible format.
> I am confident it would see a much higher rate of adoption.
> 

Good point. +1



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-29 Thread Jim Jagielski
It wasn't the paste that was the problem, but the inability
of other email clients to determine from your email what
parts/sections are quoted from *previous* emails.

> On Dec 28, 2016, at 5:49 PM, William A Rowe Jr  wrote:
> 
> Hi Jim,
> 
> Talk to Google and the OpenOffice Team, that was a paste from OpenOffice Calc.
> 
> I'll be happy to start summarizing as a shared Google sheet.
> 
> Cheers,
> 
> Bill
> 
> 
> On Dec 28, 2016 14:22, "Jim Jagielski"  wrote:
> Bill, I don't know if it's just my Email client or not (doesn't
> look like it) but could you fix your Email client? It's impossible to
> reply and have the quoted parts parsed out correctly. I think
> it's to do w/ your messages being RTF or something.
> 
> Thx!
> 
> Included is an example of how a Reply misses quote levels...
> 
> > On Dec 28, 2016, at 1:34 PM, William A Rowe Jr  wrote:
> >
> > On Wed, Dec 28, 2016 at 9:13 AM, Jim Jagielski  wrote:
> > cPanel too... They are moving to EA4 which is Apache 2.4.
> >
> > If not moved yet, that example wouldn't be helpful, it reinforces my point
> > four years later. But EA itself seems to track pretty closely to the most
> > contemporaneous versions, looks like within a month.
> >
> >
> > So the idea that supplemental (ie: 2.4.x->2.4.y) patches don't
> > have the reach or range of larger ones (2.4.x->2.6/3.0) isn't
> > quite accurate.
> >
> > It's entirely accurate. It isn't all-encompassing. We have that data too,
> > let's tear down SecuritySpace's Nov '16 dataset;
> > http://www.securityspace.com/s_survey/data/201611/servers.html
> >
> 



Re: Post 2.4.25

2016-12-29 Thread Jim Jagielski

> On Dec 28, 2016, at 6:28 PM, William A Rowe Jr  wrote:
> 
> 
> Because fixing r->uri is such a priority, trust that I'll be voting every 2.6 
> candidate a -1 until it exists. I don't know why the original httpd founders 
> are so hung up on version number conservation, they are cheap, but we are 
> breaking a key field of a core request structure and no other OSS project 
> would be stupid enough to call that n.m+1.
> 

Who is digging in their heels and blocking new development
now?

So you are admitting that you will "veto" (although you
can't veto a release) any 2.5.* "releases" unless and
until r->uri is "fixed". Which implies, obviously, a
very substantial refactoring. Which implies time. Which
implies that if you also say "no new enhancements in 2.4"
that it will be a long time until anything new and useful
will be added to, or available to, our end-users until
some unknown future time.

And that is acceptable to you?

And no one I know of in any way is "hung up on version
number conservation", and that is moot and strawman anyway.

As fair warning, I fully expect that we will release 2.4.26
within the next 3 months. I also fully expect that some
"new enhancements" from trunk to be backported and be in
that release.

I simply care about continuing to keep Apache httpd relevant
and a continued viable offering for our community. That
means us working on next-gen, of course, but also maintaining
and fostering a community until next-gen exists.

Re: Post 2.4.25

2016-12-29 Thread Reindl Harald



Am 29.12.2016 um 07:08 schrieb William A Rowe Jr:

(Again, it's gmail, /shrug. I can attempt to undecorate but doubt I'm
moving to a local client/mail store again. If anyone has good gmail
formatting tips for their default settings, I'd love a pointer.)


yes: set up Thunderbird against gmail with IMAP for the mailing lists, so 
that your sent and received mail stay on the server as they do now, or set 
up Roundcube to access gmail via IMAP/SMTP and configure it to prefer plaintext


or complain loudly enough to Google that they are fools when they convert 
a plaintext message to HTML as you press reply


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-29 Thread Stefan Eissing

> Am 29.12.2016 um 01:40 schrieb Yehuda Katz :
> 
> On Wed, Dec 28, 2016 at 12:35 AM, William A Rowe Jr  
> wrote:
> Our adoption is *broadly* based on the OS distributions
> from vendors, not from people picking up our sources.
> Yes - some integrate directly from source, and others
> use a non-OS distribution.
> 
> I think a significant number of users of nginx add the official nginx yum/apt 
> sources and keep up to date that way 
> (http://nginx.org/en/linux_packages.html#mainline).
> This is particularly true because the vendor-supplied versions are so old. You 
> can see this in the w3techs data: nginx 1.10.2 came out in October and 
> already makes up 75% of all nginx 1.10 users. nginx 1.11.8 usage has similar 
> trends.
> 
> A possible solution to this would be to start publishing binaries in a 
> package-manager-accessible format.
> I am confident it would see a much higher rate of adoption.

Very good point. I myself use a PPA for my Ubuntu server via

deb http://ppa.launchpad.net/ondrej/apache2/ubuntu trusty main

which updates very quickly; it already has 2.4.25. There are other people doing 
this for various distros. The least we could do is document the ones we know 
of, and talk to those people about how they see it continuing. Maybe offer an 
https place and visibility on Apache servers?

Does that make sense?

Does that make sense?

> - Y

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de



On the subject of r->uri [was: Post 2.4.25]

2016-12-28 Thread William A Rowe Jr
On Wed, Dec 28, 2016 at 6:42 PM, Yann Ylavic  wrote:

> [Bill, you definitely should do something with your email client, e.g.
> using plain text only, replying to your messages breaks indentation
> level (like the number of '>' preceding/according to the initial
> message)].
>

(Again, it's gmail, /shrug. I can attempt to undecorate but doubt I'm
moving to a local client/mail store again. If anyone has good gmail
formatting tips for their default settings, I'd love a pointer.)


> On Thu, Dec 29, 2016 at 12:28 AM, William A Rowe Jr 
> wrote:
> >
> > On Dec 24, 2016 07:57, "Jim Jagielski"  wrote:
> >
> > Well as breaking changes go, changing URI to remain an encoded value and
> to
> > mandate module authors accept that req_rec is free threaded are breaking
> > changes.
>
> Not sure what the second point means, but preserving r->uri while also
> allowing to (and internally work with) an escaped form of the URI does
> not necessarily require an API change.
> (We could possibly have an opaque struct to manipulate the URI in its
> different forms and some helper(s) be compatible with the API changes'
> requirements, e.g. 2.4.x).
>

To be clear, this isn't possible.

There are multiple meanings of every path segment character which is
in the reserved set. There is no way to preserve these multiple meanings
in a decoded context. The parallel entities may exist in any undecoded
string. So r->uri, if it still exists, will be subsumed by some variable
like r->uri_path_unencoded and be retrievable into a decoded form.

Functions such as ap_hook_map_to_storage, in the filesystem backend,
will only be interested in the decoded form. Functions such as the http
proxy module will only be interested in passing a never-mangled version
of the encoded uri.

Even if r->uri is available as a read-only input, there is no simple way
for httpd to resolve r->uri manipulations made in place (it isn't const),
to decide which form is canonical when r->uri and r->uri_path_unencoded
disagree, or to untangle what mishmash the legacy abuser of r->uri made
of these parallel reserved characters in their encoded and unencoded
forms. We are stuck with the current mess of various %-escape workarounds
until we replace the core assumption.

This deserves a long discussion which already exists in the security@
list, but needs to be pushed outward on dev@, preferably by the original
authors of these thoughts. That includes the r->uri preserving flavor
that you mention above, as well as the various discussions about the
% entity encoding, and my concerns about canonicalization. With some
first-level triage already complete, there is no reason for uri discussion
to remain 'behind the curtain.'


Re: Post 2.4.25

2016-12-28 Thread Yann Ylavic
[Bill, you definitely should do something with your email client, e.g.
using plain text only, replying to your messages breaks indentation
level (like the number of '>' preceding/according to the initial
message)].

On Thu, Dec 29, 2016 at 12:28 AM, William A Rowe Jr  wrote:
>
> On Dec 24, 2016 07:57, "Jim Jagielski"  wrote:
>
[For example, here I had to add a '>' for Jim's original text to be
associated with the above "Jim ... wrote:"]
>>
>> My point is that even having a 6 month minimum (and that
>> is, IMO, wildly optimistic and unrealistic) of "no new
>> features for 2.4" means that we are basically giving people
>> reasons to drop httpd. It would be a wildly different story
>> if (1) trunk was ready to release and (2) we "committed" to
>> releasing trunk quickly by focusing on low-hanging fruit
>> which would make lives happier and better for our end-users.
>> As I said, my fear is that we will not be able to "control"
>> ourselves in limiting what is in 2.6, which will push the
>> actual release far past the point where it is even relevant.
>
> Well as breaking changes go, changing URI to remain an encoded value and to
> mandate module authors accept that req_rec is free threaded are breaking
> changes.

Not sure what the second point means, but preserving r->uri while also
allowing to (and internally work with) an escaped form of the URI does
not necessarily require an API change.
(We could possibly have an opaque struct to manipulate the URI in its
different forms and some helper(s) be compatible with the API changes'
requirements, e.g. 2.4.x).

Regarding the changes in configuration/behaviours, I don't think we
should break things anyway (even across majors, if possible/relevant
of course), but rather provide options to have the one or the other
behaviour (trunk having the now considered better behaviour, stable(s)
the compatible one).

My point is mainly that rather than focusing on version numbers, we
probably should focus on:
1. what we have now (in trunk), and want to backport
2. what we don't have now, do it (the better wrt trunk), and go to 1.

There's (almost) always a way to backport things, though it should not
prevent us from doing the *necessary* changes (in trunk) for new
improvements/features.

Yet, first step is the "inventory" of what we have/want, each/all of
us involved and constructive...

Once this is done, let's see what is compatible or not (and if yes at
which cost).
If we are going to introduce incompatible features, let's do 3.x.
If we are going to introduce many features at once, let's do 2.6.x
(that's an announce/marketing "value": the user does not care about
the version, (s)he cares about the features).
Otherwise let's continue improving trunk with 2.4.x.

When we start implementing new features, first discuss/specify them,
then implement, and see if it's compatible/backportable.
For now, I don't see many (if any) incompatibilities in trunk (I
surely don't have an exhaustive view), but many improvements to
backport.

Just my 2 cts...

Regards,
Yann.


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread Yehuda Katz
On Wed, Dec 28, 2016 at 12:35 AM, William A Rowe Jr 
wrote:

> Our adoption is *broadly* based on the OS distributions
> from vendors, not from people picking up our sources.
> Yes - some integrate directly from source, and others
> use a non-OS distribution.
>

I think a significant number of users of nginx add the official nginx
yum/apt sources and keep up to date that way (
http://nginx.org/en/linux_packages.html#mainline).
This is particularly true because the vendor-supplied versions are so old.
You can see this in the w3techs data: nginx 1.10.2 came out in October and
already makes up 75% of all nginx 1.10 users. nginx 1.11.8 usage has
similar trends.

A possible solution to this would be to start publishing binaries in a
package-manager-accessible format.
I am confident it would see a much higher rate of adoption.

- Y


Re: Post 2.4.25

2016-12-28 Thread William A Rowe Jr
On Dec 24, 2016 08:32, "Eric Covener"  wrote:

>> I'm not saying we don't do one so we can do the other; I'm
>> saying we do both, at the same time, in parallel. I still
>> don't understand why that concept is such an anathema to some
>> people.
>
> I also worry about our ability to deliver a 3.0 with enough
> re-architecture for us and function for users, vs a more
> continuous delivery (apologies for bringing buzzwords to dev@httpd)
> cadence on 2.4 as we've been in.


Here is the confusion (see the versioning thread.)

2.6 is a break in ABI compatibility.

3.0 is a break in API compatibility.

Size in this case doesn't matter. Any break at all merits these changes.

We are not a commercial product. We are httpd. Nobody cares what the
version number is other than us; they very largely install and forget. OS
vendors grab the latest at one point in their distribution-gathering phase
and don't revisit.

Adoption outside of OS distros is largely irrelevant. Talk about
do-nothing, PCRE2 has been out a very long time with all the activity and
no adoption, PCRE 8.x is on life support with little pulse and is the
de facto standard.

Your assumptions don't reflect the actual adoption behaviors.


Re: Post 2.4.25

2016-12-28 Thread William A Rowe Jr
On Dec 24, 2016 07:57, "Jim Jagielski"  wrote:


> On Dec 24, 2016, at 8:29 AM, Rich Bowen  wrote:
>
> On 12/23/2016 03:52 PM, Jim Jagielski wrote:
>> Personally, I don't think that backporting stuff to
>> 2.4 prevents or disallows development on 2.6/3.0. In
>> fact, I think it helps. We can easily do both...
>> after all, we are still "working" on 2.2.
>>
>> As I have also stated, my personal belief is that
>> 2.4 is finally reaching some traction, and if we
>> "turn off" development/enhancement of 2.4, we will
>> stop the uptake of 2.4 in its tracks. We need to keep
>> 2.4 viable and worthwhile while we, at the same time, work
>> on 2.6/3.0. I think we all understand that getting
>> 2.6/3.0 out will not be a quick and/or painless
>> action.
>
> From my perspective, watching Nginx gain traction through superior
> marketing, and channeling Dilbert's Pointy Haired Boss in assuming that
> everything which I have never done must be simple, I, for one, would
> like to see us release a 2.6, and more generally, to release a 2.x every
> 2 years, or less, rather than every 4 years, or more.
>
> My opinion on this, I would emphasize, is 100% marketing, and 0%
> technical. I realize we "don't do" marketing, but if we want to still be
> having the fun of doing this in another 20 years, it may be necessary to
> get our name out there a little more frequently in terms of doing new
> things. We are frankly not great at telling the world about the cool new
> things we're doing.
>

> Yeah, right now the way we are "doing marketing" is by
> continually adding features and enhancements to 2.4... It
> is what keeps 2.4 relevant and is what either keeps current
> httpd users using httpd or maybe help those on the fence decide
> on httpd instead of nginx/whatever.

And again to play devil's advocate, how has that worked out in the four
years of httpd 2.4?

> My point is that even having a 6 month minimum (and that
> is, IMO, wildly optimistic and unrealistic) of "no new
> features for 2.4" means that we are basically giving people
> reasons to drop httpd. It would be a wildly different story
> if (1) trunk was ready to release and (2) we "committed" to
> releasing trunk quickly by focusing on low-hanging fruit
> which would make lives happier and better for our end-users.
> As I said, my fear is that we will not be able to "control"
> ourselves in limiting what is in 2.6, which will push the
> actual release far past the point where it is even relevant.


Well as breaking changes go, changing URI to remain an encoded value and to
mandate module authors accept that req_rec is free threaded are breaking
changes. We did that on conn_rec post-2.2. But I suspect that we have now
done this to req_rec with the event mpm and sf's http2 enhancements already
on 2.4?

> To be clear, if our goal was "Fork trunk as 2.5 NOW, polish
> and tune 2.5 'as-is' with minimal major refactoring with
> the goal of getting 2.6 out ASAP" then yeah, sure, the idea
> of "no new features in 2.4" would have some merit. But based
> on current conversation, it's obvious, at least to me, that
> that won't happen and we will be continually refactoring 2.6
> to make it a 3.0.


Two mistakes. If we commit to a new contract on these two things, there is
never 2.6.

Because fixing r->uri is such a priority, trust that I'll be voting every
2.6 candidate a -1 until it exists. I don't know why the original httpd
founders are so hung up on version number conservation, they are cheap, but
we are breaking a key field of a core request structure and no other OSS
project would be stupid enough to call that n.m+1.

So you can presume there is no such thing as 2.6.

Error 2, and you ignored the first reply: there is no branch until 3.0
occurs.

I'll save that detail for the next post.

> Again, you don't "stop" development on a
> current release until the next release is ready or, at least,
> "this close" to being ready. You don't sacrifice the present,
> nor do you leave your users in that limbo.


Users had been in limbo 10 months for security fixes and far longer for bug
fixes marked PatchAvailable in Bugzilla, and you are worried about feature
development on a maintenance branch. Sigh...


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread William A Rowe Jr
On Dec 28, 2016 10:34, "William A Rowe Jr"  wrote:


Series         Count     Of all   Most recent   Count    Of m.m   Of all
Apache/1.3.x     391898    3.33%  1.3.42          42392  10.82%    0.36%
Apache/2.0.x     551117    4.68%  2.0.64          36944   6.70%    0.31%
Apache/2.2.x    7129391   60.49%  2.2.31        1332448  18.78%   11.31%
Apache/2.4.x    3713364   31.51%  2.4.17+       1502061  42.90%   12.74%
Total          11785770          2.4.23          754385  21.54%    6.40%


Since this table is illegible to some, please see the second tab of

https://docs.google.com/spreadsheets/d/1aOxBRZ2IHsUJJcQNXu-oe6la4wMRIHN2mOlJCQGRy0k/edit?usp=drivesdk

The first tab is a crossref of many OS distribution components used by
httpd, as well as httpd itself.


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread William A Rowe Jr
Hi Jim,

Talk to Google and the OpenOffice Team, that was a paste from OpenOffice
Calc.

I'll be happy to start summarizing as a shared Google sheet.

Cheers,

Bill


On Dec 28, 2016 14:22, "Jim Jagielski"  wrote:

> Bill, I don't know if it's just my Email client or not (doesn't
> look like it) but could you fix your Email client? It's impossible to
> reply and have the quoted parts parsed out correctly. I think
> it's to do w/ your messages being RTF or something.
>
> Thx!
>
> Included is an example of how a Reply misses quote levels...
>
> > On Dec 28, 2016, at 1:34 PM, William A Rowe Jr 
> wrote:
> >
> > On Wed, Dec 28, 2016 at 9:13 AM, Jim Jagielski  wrote:
> > cPanel too... They are moving to EA4 which is Apache 2.4.
> >
> > If not moved yet, that example wouldn't be helpful, it reinforces my
> point
> > four years later. But EA itself seems to track pretty closely to the most
> > contemporaneous versions, looks like within a month.
> >
> >
> > So the idea that supplemental (ie: 2.4.x->2.4.y) patches don't
> > have the reach or range of larger ones (2.4.x->2.6/3.0) isn't
> > quite accurate.
> >
> > It's entirely accurate. It isn't all-encompassing. We have that data too,
> > let's tear down SecuritySpace's Nov '16 dataset;
> > http://www.securityspace.com/s_survey/data/201611/servers.html
> >
>
>


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread Jim Jagielski
Bill, I don't know if it's just my Email client or not (doesn't
look like it) but could you fix your Email client? It's impossible to
reply and have the quoted parts parsed out correctly. I think
it's to do w/ your messages being RTF or something.

Thx!

Included is an example of how a Reply misses quote levels...

> On Dec 28, 2016, at 1:34 PM, William A Rowe Jr  wrote:
> 
> On Wed, Dec 28, 2016 at 9:13 AM, Jim Jagielski  wrote:
> cPanel too... They are moving to EA4 which is Apache 2.4.
>  
> If not moved yet, that example wouldn't be helpful, it reinforces my point
> four years later. But EA itself seems to track pretty closely to the most
> contemporaneous versions, looks like within a month.
> 
> 
> So the idea that supplemental (ie: 2.4.x->2.4.y) patches don't
> have the reach or range of larger ones (2.4.x->2.6/3.0) isn't
> quite accurate.
> 
> It's entirely accurate. It isn't all-encompassing. We have that data too,
> let's tear down SecuritySpace's Nov '16 dataset;
> http://www.securityspace.com/s_survey/data/201611/servers.html 
> 



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread Jan Ehrhardt
William A Rowe Jr in gmane.comp.apache.devel (Wed, 28 Dec 2016 10:46:51
-0600):
>On Wed, Dec 28, 2016 at 9:05 AM, Jan Ehrhardt  wrote:
>
>> Do not underestimate the influence of control panels. On all my Centos
>> servers I am running Directadmin. DA always offers to upgrade to the
>> latest release within a day after the release. Hence, I am running
>> Apache 2.4.25 everywhere at the moment.
>
> Excellent pointer. Thanks Jan.

BTW: I would be more hesitant to install a new release directly if I
had not tested the dev/dist release candidates on my Windows
dev-server. But I am quite sure a lot of Directadmin users will follow
suit soon.
-- 
Jan



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread William A Rowe Jr
On Wed, Dec 28, 2016 at 9:13 AM, Jim Jagielski  wrote:

> cPanel too... They are moving to EA4 which is Apache 2.4.
>

If not moved yet, that example wouldn't be helpful, it reinforces my point
four years later. But EA itself seems to track pretty closely to the most
contemporaneous versions, looks like within a month.


> So the idea that supplemental (ie: 2.4.x->2.4.y) patches don't
> have the reach or range of larger ones (2.4.x->2.6/3.0) isn't
> quite accurate.
>

It's entirely accurate. It isn't all-encompassing. We have that data too,
let's tear down SecuritySpace's Nov '16 dataset;
http://www.securityspace.com/s_survey/data/201611/servers.html

First off, if you follow that link, you'll find much larger numbers
associated
to those specific revisions shipped with the likes of RHEL or CentOS, Ubuntu
(particularly -LTS flavors), etc etc etc. That was my contention in the top
post. But let's quantify 'accuracy' as you defined it in the reply...

Series         Count     Of all   Most recent   Count    Of m.m   Of all
Apache/1.3.x     391898    3.33%  1.3.42          42392  10.82%    0.36%
Apache/2.0.x     551117    4.68%  2.0.64          36944   6.70%    0.31%
Apache/2.2.x    7129391   60.49%  2.2.31        1332448  18.78%   11.31%
Apache/2.4.x    3713364   31.51%  2.4.17+       1502061  42.90%   12.74%
Total          11785770          2.4.23          754385  21.54%    6.40%

The applicable data are 37.47% of all 'Apache[/n[.n[.n]]]' items, meaning
that some 2/3rds of users drop the ServerTokens down to product only
or major version only, and we can't derive anything useful from them, so
we will ignore the Apache and Apache/2 references for our % evaluations,
'Of all' refers to those with at least Apache/2.x designations.

I included 2.4.17-2.4.23 as an item, because that group are the versions
that released within the past year of this particular survey data (that does
include the then-current 2.4.23.)

The 'Of m.m' - same major.minor - backs out that Apache/2.x (without a
known subversion) from the calculation because we can't tell whether they
are the corresponding or a different subversion.

Of httpd users we can quantify, 6.4% updated within months of the 2.4.23
release (your 'power users' classification.) That minority doesn't move the
needle much on total adoption of httpd vs. others.

Only 11.3% bothered to pick up the final 2.2.31 that has been out
over a year, and combined with 12.74% running some 2.4.17...2.4.23,
*** only 24% *** run a version that had been a current release within
the preceding year.  E.g. of those running a somewhat-current version,
more than 1/4 are running the July 2.4.23 release by the end of November.
Note that Fedora 25 didn't move the needle much on this, it shipped GA
in December.

> IMO, people who are comfortable with "whatever the OS provides"
> aren't the ones we are talking about in the 1st place. We are
> talking about real, "power" users, who want/need the latest
> and greatest.

Not if you are talking overall adoption rate. As illustrated, those
users adopting 2.4.23 already are a nearly accidental minority:
after 5 mos, half of the 'current' 2.4 users are running 2.4.23, the
other half are running a flavor between 12 and 6 mos old. That
looks like an overall random distribution by deployment date, with
no particular effort expended on 'staying current'.


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread William A Rowe Jr
On Wed, Dec 28, 2016 at 9:05 AM, Jan Ehrhardt  wrote:

> William A Rowe Jr in gmane.comp.apache.devel (Tue, 27 Dec 2016 23:35:50
> -0600):
> >But the vast majority of httpd, nginx, and yes - even IIS
> >users are all running what they were handed from their
> >OS distribution.
>
> Do not underestimate the influence of control panels. On all my Centos
> servers I am running Directadmin. DA always offers to upgrade to the
> latest release within a day after the release. Hence, I am running
> Apache 2.4.25 everywhere at the moment.
>
> Excellent pointer. Thanks Jan.


Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread Jim Jagielski
cPanel too... They are moving to EA4 which is Apache 2.4.

So the idea that supplemental (ie: 2.4.x->2.4.y) patches don't
have the reach or range of larger ones (2.4.x->2.6/3.0) isn't
quite accurate.

IMO, people who are comfortable with "whatever the OS provides"
aren't the ones we are talking about in the 1st place. We are
talking about real, "power" users, who want/need the latest
and greatest.

just my 2c

> On Dec 28, 2016, at 10:05 AM, Jan Ehrhardt  wrote:
> 
> William A Rowe Jr in gmane.comp.apache.devel (Tue, 27 Dec 2016 23:35:50
> -0600):
>> But the vast majority of httpd, nginx, and yes - even IIS
>> users are all running what they were handed from their
>> OS distribution.
> 
> Do not underestimate the influence of control panels. On all my Centos
> servers I am running Directadmin. DA always offers to upgrade to the
> latest release within a day after the release. Hence, I am running
> Apache 2.4.25 everywhere at the moment.
> -- 
> Jan
> 



Re: The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-28 Thread Jan Ehrhardt
William A Rowe Jr in gmane.comp.apache.devel (Tue, 27 Dec 2016 23:35:50
-0600):
>But the vast majority of httpd, nginx, and yes - even IIS
>users are all running what they were handed from their
>OS distribution.

Do not underestimate the influence of control panels. On all my Centos
servers I am running Directadmin. DA always offers to upgrade to the
latest release within a day after the release. Hence, I am running
Apache 2.4.25 everywhere at the moment.
-- 
Jan



The Version Bump fallacy [Was Re: Post 2.4.25]

2016-12-27 Thread William A Rowe Jr
On Fri, Dec 23, 2016 at 2:52 PM, Jim Jagielski  wrote:

>
> As I have also stated, my personal belief is that
> 2.4 is finally reaching some traction, and if we
> "turn off" development/enhancement of 2.4, we will
> stop the uptake of 2.4 in its tracks.


This is where I think we have a disconnect.

Our adoption is *broadly* based on the OS distributions
from vendors, not from people picking up our sources.
Yes - some integrate directly from source, and others
use a non-OS distribution.

But the vast majority of httpd, nginx, and yes - even IIS
users are all running what they were handed from their
OS distribution. This is why an amazing number of people
run 2.4.3-2.4.10 and soon, 2.4.18, even though these are
all already out of date. Once RHEL, Ubuntu LTS, SUSE
or others pick up a specific rev, that's where the typical
user is going to land for the next several years.

The raw stats show a couple of interesting things, IMO;
https://w3techs.com/technologies/overview/web_server/all
While we have slipped somewhat, the old adage that
httpd or another "Web Server" must sit in front of the
cobbled-together app servers doesn't apply anymore.
Code like Tomcat, etc, is now far more robust and
capable of sitting on the outward facing edge of the DMZ.

The two runners up in web server space have essentially
switched places, nginx now has the market penetration
that IIS once enjoyed. IIS now amounts to a fraction of
what it once did, essentially the 'everything else' share
that used to be held by webservers we don't think about
any more, such as Sun's, lighttpd, etc. And of course
custom server agents of the top 10 data providers skew
the results significantly.

Other surveys paint the data a little differently;
https://news.netcraft.com/archives/2016/12/21/december-
2016-web-server-survey.html
http://www.securityspace.com/s_survey/data/201611/index.html

Next up, we will see broad distribution of 2.4.23 - why?
Because that shipped in Fedora 25, which will very likely
become RHEL 8. In other words, we missed the boat. Generally
the Fedora release a year out from RHEL GA becomes
the shipping package, and the pattern suggests this
early winter release becomes an early winter '17 RHEL.

If we don't ship improvements, we wither and fall into
oblivion. It does not matter that these are called 2.4.26,
because *no vendor will ship it*. Not until they start
gathering the components of their next major release.
Which means they are shipping our least interesting
sources over and over, because we aren't shipping new
major releases.

So I'd respectfully suggest that adding a feature to
2.4 vs. releasing the feature in 3.0 makes not one
iota of difference in goodwill/adoption. The next major
releases whose code freeze falls after 3.0 has shipped will
be in a position to pick up and distribute 3.0. All the
rest will be stuck in the past.

But as a bottom line, all those users stuck in the past
until their OS catches up will have little opinion about
a feature in a 2.4.x release they will never see, since
their vendor keeps shipping the same 2.4.n that their
OS revision had initially shipped.
.


Re: Post 2.4.25

2016-12-24 Thread Mark Blackman

> On 24 Dec 2016, at 16:32, Eric Covener  wrote:
> 
>> I'm not saying we don't do one so we can do the other; I'm
>> saying we do both, at the same time, in parallel. I still
>> don't understand why that concept is such an anathema to some
>> people.
> 
> I also worry about our ability to deliver a 3.0 with enough
> re-architecture for us and new function for users, vs. a more
> continuous-delivery (apologies for bringing buzzwords to dev@httpd)
> cadence on 2.4, as we've been doing.

If you can find a way with limited resources, I would encourage doing both in 
parallel as well.

What are the 2.6/3.0 re-architecture goals/vision out of curiosity?

- Mark

Re: Post 2.4.25

2016-12-24 Thread Eric Covener
> I'm not saying we don't do one so we can do the other; I'm
> saying we do both, at the same time, in parallel. I still
> don't understand why that concept is such an anathema to some
> people.

I also worry about our ability to deliver a 3.0 with enough
re-architecture for us and new function for users, vs. a more
continuous-delivery (apologies for bringing buzzwords to dev@httpd)
cadence on 2.4, as we've been doing.


Re: Post 2.4.25

2016-12-24 Thread Rich Bowen
On Dec 24, 2016 10:57, "Jim Jagielski"  wrote:



Yeah, right now the way we are "doing marketing" is by
continually adding features and enhancements to 2.4... It
is what keeps 2.4 relevant and is what either keeps current
httpd users using httpd or maybe help those on the fence decide
on httpd instead of nginx/whatever.

My point is that even having a six-month minimum (and that
is, IMO, wildly optimistic and unrealistic) of "no new
features for 2.4" means that we are basically giving people
reasons to drop httpd.


Oh, sure, I agree with that. Six months of (perceived) inaction would tell
the world we're all done. I'm probably answering a different question.  :)


Re: Post 2.4.25

2016-12-24 Thread Jim Jagielski

> On Dec 24, 2016, at 8:54 AM, Eric Covener  wrote:
> 
> On Fri, Dec 23, 2016 at 3:28 PM, William A Rowe Jr  
> wrote:
>> Next step is to actually end enhancements altogether
>> against 2.4 (we've done that some time ago, security
>> issues notwithstanding, on 2.2), and push all of the
>> enhancement effort towards 3.0 (2.5-dev). Of course,
>> we should continue to pick up bug fixes and help those
>> still on 2.4 have a good day.
>> 
>> Let those users looking for cool new things pick up
>> the 3.0 release.
> 
> What's the carrot for users/developers in a 2.6/3.0? I'm not sure
> they'd come along for this ride.  To play devil's advocate, it seems
> like many of the breaking changes could be imposed by having
> deprecated fields/accessors (maybe moving to more of the latter) and
> preferred alternatives (to avoid major MMN bumps).
> 

Yeah, that is kind of alluded to in my thoughts. For 3.0 to
*really* be a major carrot, we are talking (IMO), a major
refactoring. A super streamlining of filters, etc. I used
to think making use of Serf would be it, but instead I'm
thinking libmill/libdill would be better (plus, to be honest,
I still can't figure out all the ins and outs of Serf and
there's no documentation at all)... 

In other words, to ensure that people come along for the
ride, the ride has to be revolutionary, at least at some
level. And that, IMO, takes time to architect, design,
implement and test. If we say "no new stuff for 2.4 until
then" then, as I have stated, we have given all our current
users a great reason and rationale for leaving, and they
will.

I'm not saying we don't do one so we can do the other; I'm
saying we do both, at the same time, in parallel. I still
don't understand why that concept is such an anathema to some
people.

> Anyone with ideas about what they'd want in a new release is
> encouraged to add them to the trunk STATUS file, even if they are just
> wishlist items -- it's not a commitment.

Added some of mine already :)



Re: Post 2.4.25

2016-12-24 Thread Jim Jagielski

> On Dec 24, 2016, at 8:29 AM, Rich Bowen  wrote:
> 
> 
> 
> On 12/23/2016 03:52 PM, Jim Jagielski wrote:
>> Personally, I don't think that backporting stuff to
>> 2.4 prevents or disallows development on 2.6/3.0. In
>> fact, I think it helps. We can easily do both...
>> after all, we are still "working" on 2.2.
>> 
>> As I have also stated, my personal belief is that
>> 2.4 is finally reaching some traction, and if we
>> "turn off" development/enhancement of 2.4, we will
>> stop the uptake of 2.4 in its tracks. We need to keep
>> 2.4 viable and worthwhile while we, at the same time, work
>> on 2.6/3.0. I think we all understand that getting
>> 2.6/3.0 out will not be a quick and/or painless
>> action.
> 
> From my perspective, watching Nginx gain traction through superior
> marketing, and channeling Dilbert's Pointy Haired Boss in assuming that
> everything which I have never done must be simple, I, for one, would
> like to see us release a 2.6, and more generally, to release a 2.x every
> 2 years, or less, rather than every 4 years, or more.
> 
> My opinion on this, I would emphasize, is 100% marketing, and 0%
> technical. I realize we "don't do" marketing, but if we want to still be
> having the fun of doing this in another 20 years, it may be necessary to
> get our name out there a little more frequently in terms of doing new
> things. We are frankly not great at telling the world about the cool new
> things we're doing.
> 

Yeah, right now the way we are "doing marketing" is by
continually adding features and enhancements to 2.4... It
is what keeps 2.4 relevant and is what either keeps current
httpd users using httpd or maybe help those on the fence decide
on httpd instead of nginx/whatever. 

My point is that even having a six-month minimum (and that
is, IMO, wildly optimistic and unrealistic) of "no new
features for 2.4" means that we are basically giving people
reasons to drop httpd. It would be a wildly different story
if (1) trunk was ready to release and (2) we "committed" to
releasing trunk quickly by focusing on low-hanging fruit
which would make lives happier and better for our end-users.
As I said, my fear is that we will not be able to "control"
ourselves in limiting what is in 2.6, which will push the
actual release far past the point where it is even relevant.

To be clear, if our goal was "Fork trunk as 2.5 NOW, polish
and tune 2.5 'as-is' with minimal major refactoring with
the goal of getting 2.6 out ASAP" then yeah, sure, the idea
of "no new features in 2.4" would have some merit. But based
on the current conversation it is obvious, at least to me,
that this won't happen and we will be continually refactoring 2.6
to make it a 3.0. Again, you don't "stop" development on a
current release until the next release is ready or, at least,
"this close" to being ready. You don't sacrifice the present,
nor do you leave your users in that limbo.


Re: Post 2.4.25

2016-12-24 Thread Eric Covener
On Fri, Dec 23, 2016 at 3:28 PM, William A Rowe Jr  wrote:
> Next step is to actually end enhancements altogether
> against 2.4 (we've done that some time ago, security
> issues notwithstanding, on 2.2), and push all of the
> enhancement effort towards 3.0 (2.5-dev). Of course,
> we should continue to pick up bug fixes and help those
> still on 2.4 have a good day.
>
> Let those users looking for cool new things pick up
> the 3.0 release.

What's the carrot for users/developers in a 2.6/3.0? I'm not sure
they'd come along for this ride.  To play devil's advocate, it seems
like many of the breaking changes could be imposed by having
deprecated fields/accessors (maybe moving to more of the latter) and
preferred alternatives (to avoid major MMN bumps).

Anyone with ideas about what they'd want in a new release is
encouraged to add them to the trunk STATUS file, even if they are just
wishlist items -- it's not a commitment.


Re: Post 2.4.25

2016-12-24 Thread Rich Bowen


On 12/23/2016 03:52 PM, Jim Jagielski wrote:
> Personally, I don't think that backporting stuff to
> 2.4 prevents or disallows development on 2.6/3.0. In
> fact, I think it helps. We can easily do both...
> after all, we are still "working" on 2.2.
> 
> As I have also stated, my personal belief is that
> 2.4 is finally reaching some traction, and if we
> "turn off" development/enhancement of 2.4, we will
>> stop the uptake of 2.4 in its tracks. We need to keep
>> 2.4 viable and worthwhile while we, at the same time, work
> on 2.6/3.0. I think we all understand that getting
> 2.6/3.0 out will not be a quick and/or painless
> action.

From my perspective, watching Nginx gain traction through superior
marketing, and channeling Dilbert's Pointy Haired Boss in assuming that
everything which I have never done must be simple, I, for one, would
like to see us release a 2.6, and more generally, to release a 2.x every
2 years, or less, rather than every 4 years, or more.

My opinion on this, I would emphasize, is 100% marketing, and 0%
technical. I realize we "don't do" marketing, but if we want to still be
having the fun of doing this in another 20 years, it may be necessary to
get our name out there a little more frequently in terms of doing new
things. We are frankly not great at telling the world about the cool new
things we're doing.


-- 
Rich Bowen - rbo...@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon



signature.asc
Description: OpenPGP digital signature


Re: Post 2.4.25

2016-12-23 Thread William A Rowe Jr
On Dec 23, 2016 9:58 PM, "Jim Jagielski"  wrote:

Well, since I am actively working on trunk, I am obviously interested in
seeing continued work being done on it and the work being usable to our
users in a timely fashion. Since backports to 2.2 have not affected work on
2.4 or trunk, it is obvious as well that any backport efforts for 2.4 won't
be any issue at all, so work on trunk will be unrestricted.


Restrictions, no, never. But if I had to ask how that worked out for me,
merging antique commits back to 2.4 and from 2.4 to 2.2, you don't want my
opinion on your theorem.

I hope your enthusiasm regarding timeframes is warranted and accurate.
Obviously my efforts are directed towards what is best for our community
and am looking forward to how we continue with next gen.


As do I.

A different refutation of your underlying theorem on versioning will follow on
or after the weekend. In the meantime...

A joyous belated Solstice, happy Hanukkah, merry Christmas, or just wishing
everyone a good weekend. You have all been some of my most favorite people
now for 15+ years.  See you all next year :)


Re: Post 2.4.25

2016-12-23 Thread Jim Jagielski

> On Dec 23, 2016, at 5:50 PM, William A Rowe Jr  wrote:
> 
> Just a couple quick thoughts...
> 
> On Dec 23, 2016 2:55 PM, "Jim Jagielski"  wrote:
> 
> . We need to keep
> 2.4 viable and worthwhile
> 
> So long as we fix the bugs, it is.
> 

Personally, especially considering the current landscape, I
believe that statement is simply wrong. Saying "just bug fixes"
for 2.4 for some unknown number of months is just flat out
incorrect when we haven't even EOLed it and, in fact, when
2.2 is still available, supported, and would be in that
self-same mode.

"actually end enhancements altogether against 2.4" at this
point is a sure fire way to completely kill Apache httpd and
is not required in the least. You seem to forget that people
can, and want, to do both. We do not, and should not, control
and restrict, without very good, solid, reasons, what people
do on their own free time here.

Just as it is "unwise" or "authoritarian" to "block" work
on trunk, it is the same for 2.4, considering the situation
that we are in *right now*. We need to continue to be relevant
and innovative in 2.4 while we are, at the same time, creating
the next rev. Suffocating one before its "replacement" is
even in pre-alpha stage is simply not needed nor is it a
wise move project-management-wise. It is unfair to our users.

It's like saying you can't have another kid until your youngest
is 18 :)

Cheers.


Re: Post 2.4.25

2016-12-23 Thread Jim Jagielski
Well, since I am actively working on trunk, I am obviously interested in seeing 
continued work being done on it and the work being usable to our users in a 
timely fashion. Since backports to 2.2 have not affected work on 2.4 or trunk, 
it is obvious as well that any backport efforts for 2.4 won't be any issue at 
all, so work on trunk will be unrestricted. I hope your enthusiasm regarding 
timeframes is warranted and accurate. Obviously my efforts are directed towards 
what is best for our community and am looking forward to how we continue with 
next gen. 

On 2016-12-23 17:50 (-0500), William A Rowe Jr  wrote: 
> Just a couple quick thoughts...
> 
> On Dec 23, 2016 2:55 PM, "Jim Jagielski"  wrote:
> 
> 
> As I have also stated, my personal belief is that
> 2.4 is finally reaching some traction, and if we
> "turn off" development/enhancement of 2.4, we will
> stop the uptake of 2.4 in its tracks.
> 
> 
> I think you might be confusing our flaws in httpd with our version
> numbering scheme.
> 
> There is only one other project with our longevity that refuses to bump
> version majors, and they are suddenly 2 versions ahead of us in only a few
> short years. If you haven't guessed, that's the Linux Kernel.
> 
> 
> . We need to keep
> 2.4 viable and worthwhile
> 
> 
> So long as we fix the bugs, it is.
> 
> Maybe the whole thing revolves around us mistakenly
> using the term "2.6/3.0"...
> 
> 
> I ceased doing this. After another admonishment that version numbers are
> cheap, and our team's consensus that treating r->uri as a decoded value was
> a wrong call, we won't have a release that can be called 2.next.
> 
> During its incubation of alphas and betas, it still remains 2.5.x, but on
> completion I can't imagine calling this 2.6. This will be a fundamental
> change that requires a 3.0 designation.
> 
> I don't see us taking shortcuts to get to that point, but believe it is a
> change that will occur in a very short timespan, because several committers
> want to see this happen.
> 
> So long as it is foretold that nobody is blocking 3.0, unlike 3 years ago,
> I expect that sort of energy and enthusiasm to take hold toward a GA
> release in the next six months, if we don't get bogged down in more
> backport type of activity.
> 


Re: Post 2.4.25

2016-12-23 Thread William A Rowe Jr
Just a couple quick thoughts...

On Dec 23, 2016 2:55 PM, "Jim Jagielski"  wrote:


As I have also stated, my personal belief is that
2.4 is finally reaching some traction, and if we
"turn off" development/enhancement of 2.4, we will
stop the uptake of 2.4 in its tracks.


I think you might be confusing our flaws in httpd with our version
numbering scheme.

There is only one other project with our longevity that refuses to bump
version majors, and they are suddenly 2 versions ahead of us in only a few
short years. If you haven't guessed, that's the Linux Kernel.


. We need to keep
2.4 viable and worthwhile


So long as we fix the bugs, it is.

Maybe the whole thing revolves around us mistakenly
using the term "2.6/3.0"...


I ceased doing this. After another admonishment that version numbers are
cheap, and our team's consensus that treating r->uri as a decoded value was
a wrong call, we won't have a release that can be called 2.next.

During its incubation of alphas and betas, it still remains 2.5.x, but on
completion I can't imagine calling this 2.6. This will be a fundamental
change that requires a 3.0 designation.

I don't see us taking shortcuts to get to that point, but believe it is a
change that will occur in a very short timespan, because several committers
want to see this happen.

So long as it is foretold that nobody is blocking 3.0, unlike 3 years ago,
I expect that sort of energy and enthusiasm to take hold toward a GA
release in the next six months, if we don't get bogged down in more
backport type of activity.


Re: Post 2.4.25

2016-12-23 Thread Jim Jagielski
Personally, I don't think that backporting stuff to
2.4 prevents or disallows development on 2.6/3.0. In
fact, I think it helps. We can easily do both...
after all, we are still "working" on 2.2.

As I have also stated, my personal belief is that
2.4 is finally reaching some traction, and if we
"turn off" development/enhancement of 2.4, we will
stop the uptake of 2.4 in its tracks. We need to keep
2.4 viable and worthwhile while we, at the same time, work
on 2.6/3.0. I think we all understand that getting
2.6/3.0 out will not be a quick and/or painless
action.

Maybe the whole thing revolves around us mistakenly
using the term "2.6/3.0"... I see trunk as something
that could become 2.6 in "short order", if that's
the direction we want to go. But there is also the
need and desire to really clean-up the codebase (r->uri
is the common example used), which means a codebase
which is more 3.0 related, and therefore, more extensive
and thus taking more time.

However, by us using the term "2.6/3.0" it muddies
the water, and implies that 2.6 could be much
more pervasive than it actually is.

The long and short is that there is good stuff in trunk.
It should be available to our users sooner rather than
later. If you want to call that 2.6, fine. What I don't
want to see, since I don't think it is a viable solution,
is for us to say "OK, let's tag trunk as 2.5 with the goal
of getting 2.6 out soon... But hold on, this is broken and
we need to completely refactor this. And this is weird, let's
strip this out and replace it with this... And while we
are at it, let's change this to do that" with the end
result that 2.5/2.6 takes ages and 2.4 is left fallow. And,
to be honest, I think that is exactly what will happen.
The turd will never be polished enuff.

And our community suffers.

So, to make it crystal clear, I am 100% FOR httpd-next-gen.
All I am saying is that we have an existing user base
which is still mostly on 2.2, much less 2.4, and they
are currently at a disadvantage by not having access
to the latest and greatest stuff which is locked away
in trunk and could be available for them, *while httpd-next-gen
is being worked on in parallel*.

Nothing is preventing people from playing on trunk... But my
feeling is that most people like hacking code that people
eventually run, in short order and in a timely fashion. Waiting
6-12-18 months for "new features" is how commercial s/w works,
not FOSS.

  https://w3techs.com/technologies/details/ws-apache/2/all


I will ignore the likelihood that httpd-next-gen will require
APR 2.0 which may also take a long time to be released.

> On Dec 23, 2016, at 3:28 PM, William A Rowe Jr  wrote:
> 
> On Fri, Dec 23, 2016 at 2:20 PM, Jim Jagielski  wrote:
> For me, it would be moving as much as we can from
> trunk to 2.4
> 
> -1. To echo your frequent use of media to emphasize
> the point, with a song nearly as old as us;
> https://www.youtube.com/watch?v=EsCyC1dZiN8
> 
> Next step is to actually end enhancements altogether
> against 2.4 (we've done that some time ago, security
> issues notwithstanding, on 2.2), and push all of the
> enhancement effort towards 3.0 (2.5-dev). Of course,
> we should continue to pick up bug fixes and help those
> still on 2.4 have a good day.
> 
> Let those users looking for cool new things pick up
> the 3.0 release.
> 
> Or else you are kicking 'everything we can't' further
> down the road, again dismissing all of the activity 
> of so many of our fellow committers. Stale stuff on
> trunk/ now dates to over 4 years ago.
> 
> That state of things simply sucks.
> 



Re: Post 2.4.25

2016-12-23 Thread William A Rowe Jr
On Fri, Dec 23, 2016 at 2:20 PM, Jim Jagielski  wrote:

> For me, it would be moving as much as we can from
> trunk to 2.4


-1. To echo your frequent use of media to emphasize
the point, with a song nearly as old as us;
https://www.youtube.com/watch?v=EsCyC1dZiN8

Next step is to actually end enhancements altogether
against 2.4 (we've done that some time ago, security
issues notwithstanding, on 2.2), and push all of the
enhancement effort towards 3.0 (2.5-dev). Of course,
we should continue to pick up bug fixes and help those
still on 2.4 have a good day.

Let those users looking for cool new things pick up
the 3.0 release.

Or else you are kicking 'everything we can't' further
down the road, again dismissing all of the activity
of so many of our fellow committers. Stale stuff on
trunk/ now dates to over 4 years ago.

That state of things simply sucks.


Post 2.4.25

2016-12-23 Thread Jim Jagielski
Now that we have 2.4.25 done, I'd like us to take the
next few weeks thinking about how we'd like to see
the next release shape up.

For me, it would be moving as much as we can from
trunk to 2.4, again, to enable current users to
leverage and enjoy the goodness which is currently
"stuck" in trunk. Some can be backported, some can't
of course, but it seems wise to try to backport what
we can. Other stuff, like brotli, seems like low-hanging fruit
which is ready to be plucked.

We should also, now that 2.4.25 is out with fixes/work-
arounds for some issues, tighten them up as needed.

No rush, of course, but assuming that many of us
have the next week or so as some "time off", it
might be a good opportunity for us to spend some of
our own time thinking what's next.


mod_websocket cross-post

2015-11-13 Thread Jacob Champion

Hi all,

(If you're already subscribed to modules-dev@ or users@, you've already 
seen this -- sorry -- but Rich Bowen suggested that I post here as well.)


I recently released a 0.1.0 version of mod_websocket, which was at one 
point [1] under consideration for folding into the httpd project, but it 
was abandoned sometime in 2012. I'm picking it up.


If you're interested, you can see a copy of the users@ announcement at


http://mail-archives.apache.org/mod_mbox/httpd-users/201511.mbox/%3C56425497.60301%40gmail.com%3E

or take a look at the project page at

https://github.com/jchampio/apache-websocket

If you'd like to talk about its design, or long-term goals, or the 
appropriateness of its being an httpd module -- or even if you just have 
questions like "who would use this thing?!" -- I would love to discuss it.


Thanks,
--Jacob

[1] 
https://github.com/disconnect/apache-websocket/issues/27#issuecomment-145019603


Re: buckets and connections (long post)

2015-10-22 Thread Graham Leggett
On 22 Oct 2015, at 6:04 PM, Stefan Eissing  wrote:

>> mod_ssl already worries about buffering on its own, there is no need to
>> recreate this functionality. Was this not working?
> 
> As I wrote "it has other bucket patterns", which do not get optimized by the 
> coalescing filter of mod_ssl.

Then we must fix the coalescing filter in mod_ssl.

Regards,
Graham
—



Re: buckets and connections (long post)

2015-10-22 Thread Graham Leggett
On 22 Oct 2015, at 6:03 PM, Stefan Eissing  wrote:

> This is all true and correct - as long as all this happens in a single 
> thread. If you have multiple threads and create sub pools for each from a 
> main pool, each and every create and destroy of these sub-pools, plus any 
> action on the main pool must be mutex-protected, as I found out.

Normally if you’ve created a thread from a main pool, you need to create a pool 
cleanup for that thread off the main pool that is registered with 
apr_pool_pre_cleanup_register(). In this cleanup, you signal the thread to shut 
down gracefully and then apr_thread_join to wait for the thread to shut down, 
then the rest of the pool can be cleaned up.

The “pre” is key to this - the cleanup must run before the subpool is cleared.

> Similar with buckets. When you create a bucket in one thread, you may not 
> destroy it in another - *while* the bucket_allocator is being used. 
> bucket_allocators are not thread-safe, which means bucket_brigades are not, 
> which means that all buckets from the same brigade must only be used inside a 
> single thread.

“…inside a single thread at a time”.

The event MPM is an example of this in action.

A connection is handled by an arbitrary thread until that connection must poll. 
At that point it goes back into the pool of connections, and when ready is 
given to another arbitrary thread. In this case the threads are handled “above” 
the connections, so the destruction of a connection doesn’t impact a thread.

> This means for example that, even though mod_http2 manages the pool lifetime 
> correctly, it cannot pass a response bucket from a request pool in thread A 
> for writing onto the  main connection in thread B, *as long as* the response 
> is not complete and thread A is still producing more buckets with the same 
> allocator. etc. etc.
> 
> That is what I mean with not-thread-safe.

In this case you have different allocators, and so must pass the buckets over.

Remember that being lock free is a feature, not a bug. As soon as you add 
mutexes you add delay and slow everything down, because the world must stop 
until the condition is fulfilled.

A more efficient way of handling this is to use some kind of IPC so that the 
requests signal the master connection and go “I’ve got data for you”, after 
which the requests don't touch that data until the master has said “I've got
it, feel free to send more”. That IPC could be a series of mutexes, or a socket 
of some kind. Anything that gets rid of a global lock.

That doesn’t mean request processing must stop dead, that request just gets put 
aside and that thread is free to work on another request.

I’m basically describing the event MPM.

Regards,
Graham
—



Re: buckets and connections (long post)

2015-10-22 Thread Graham Leggett
On 22 Oct 2015, at 5:55 PM, Stefan Eissing  wrote:

>> With the async filters this flow control is now made available to every 
>> filter in the ap_filter_setaside_brigade() function. When mod_http2 handles 
>> async filters you’ll get this flow control for free.
> 
> No, it will not. The processing of responses is very different.
> 
> Example: there is individual flow control of responses in HTTP/2. Clients do 
> small window sizes on images, like 64KB in order to get small images 
> completely or only the meta data of large ones. For these large files, the 
> client does not send flow-control updates until it has received all other
> resources it is interested in. *Then* it tells the server to go ahead and 
> send the rest of these images.
> 
> This means a file bucket for such images will hang around for an indefinite 
> amount of time and a filter cannot say, "Oh, I have n file buckets queued, 
> let's block write them first before I accept more." The server cannot do that.

What you’re describing is a DoS.

A client can’t tie up resources for an arbitrary amount of time, the server 
needs to be under control of this. If a client wants part of a file, the server 
needs to open the file, send the part, then close the file and be done. If the 
client wants more, then the server opens up the file again, sends more, and 
then is done.

> I certainly do not want to reinvent the wheel here and I am very glad about 
> any existing solution and getting told how to use them. But please try to 
> understand the specific problems before saying "we must have already a 
> solution for that, go find it. you will see…"

The http2 code needs to fit in with the code that is already there, and most 
importantly it needs to ensure it doesn’t clash with the existing mechanisms. 
If an existing mechanism isn’t enough, it can be extended, but they must not be 
bypassed.

The mechanism in the core keeps track of the number of file buckets, in-memory 
buckets and requests “in flight”, and then blocks if this gets too high. Rather 
block and live to fight another day than try to open too many files and get
spurious failures as you run out of file descriptors.

The async filters gives you the ap_filter_should_yield() function that will 
tell you if downstream is too full and you should hold off sending more data. 
For example, don’t accept another request if you’ve already got too many 
requests in flight.

Regards,
Graham
—



Re: buckets and connections (long post)

2015-10-22 Thread Graham Leggett
On 22 Oct 2015, at 5:43 PM, Stefan Eissing  wrote:

>> The blocking read breaks the spirit of the event MPM.
>> 
>> In theory, as long as you enter the write completion state and then not 
>> leave until your connection is done, this problem will go away.
>> 
>> If you want to read instead of write, make sure the CONN_SENSE_WANT_READ 
>> option is set on the connection.
> 
> This does not parse. I do not understand what you are talking about. 
> 
> When all streams have been passed into the output filters, the mod_http2 
> session does a 
> 
>status = ap_get_brigade(io->connection->input_filters,...)  (h2_conn_io.c, 
> line 160)
> 
> similar to what ap_read_request() -> ap_rgetline_core() does. (protocol.c, 
> line 236)
> 
> What should mod_http2 do different here?

What ap_read_request does is:

- a) read the request (parse)
- b) handle the request (make decisions on what to do, internally redirect, 
rewrite, etc etc)
- c) exit, and let the MPM complete the request in the write_completion phase.

What you want to do is move the request completion into a filter, like 
mod_cache does. You start by setting up your request, you parse headers, you do 
the HTTP2 equivalent of ap_read_request(), then you do the actual work inside a 
filter. Look at the CACHE_OUT and CACHE_SAVE filters as examples.

To be more specific, in the handler that detects HTTP/2 you add a filter that 
processes the data, then write an EOS bucket to kick off the process and leave. 
The filter takes over.

The reason for this is you want to escape the handler phase as soon as 
possible, and leave the MPM to do its work.

Regards,
Graham
—





Re: buckets and connections (long post)

2015-10-22 Thread Stefan Eissing

> Am 21.10.2015 um 16:48 schrieb Graham Leggett :
> 
> On 21 Oct 2015, at 4:18 PM, Stefan Eissing  
> wrote:
> 
>> 7. The buckets passed down on the master connection are using another buffer 
>> - when on https:// -
>>  to influence the SSL record sizes on write. Another COPY is not nice, but 
>> write performance
>>  is better this way. The ssl optimizations in place do not work for HTTP/2 
>> as it has other
>>  bucket patterns. We should look if we can combine this into something 
>> without COPY, but with
>>  good sized SSL writes.
> 
> mod_ssl already worries about buffering on its own, there is no need to
> recreate this functionality. Was this not working?

As I wrote "it has other bucket patterns", which do not get optimized by the 
coalescing filter of mod_ssl.

//Stefan

Re: buckets and connections (long post)

2015-10-22 Thread Stefan Eissing

> Am 21.10.2015 um 16:48 schrieb Graham Leggett :
> 
> On 21 Oct 2015, at 4:18 PM, Stefan Eissing  
> wrote:
>> 6. pool buckets are very tricky to optimize, as pool creation/destroy is not 
>> thread-safe in general
>>  and it depends on how the parent pools and their allocators are set up. 
>>  Early hopes get easily crushed under load.
> 
> As soon as I see “buckets aren’t thread safe” I read it as “buckets are being 
> misused” or “pool lifetimes are being mixed up”.
> 
> Buckets arise from allocators, and you must never try to add a bucket from one 
> allocator into a brigade sourced from another allocator. For example, if you 
> have a bucket allocated from the slave connection, you need to copy it into a 
> different bucket allocated from the master connection before trying to add it 
> to a master brigade.
> 
> Buckets are also allocated from pools, and pools have different lifetimes 
> depending on what they were created for. If you allocate a bucket from the 
> request pool, that bucket will vanish when the request pool is destroyed. 
> Buckets can be passed from one pool to another, that is what “setaside” means.
> 
> It is really important to get the pool lifetimes right. Allocate something 
> accidentally from the master connection pool on a slave connection and it 
> appears to work, because generally the master outlives the slave. Until the 
> master is cleaned up first, and suddenly memory vanishes unexpectedly in the 
> slave connections - and you crash.
> 
> There were a number of subtle bugs in the proxy where buckets had been 
> allocated from the wrong pool, and all sorts of weirdness ensued. Make sure 
> your pool lifetimes are allocated correctly and it will work.

This is all true and correct - as long as all this happens in a single thread. 
If you have multiple threads and create sub-pools for each from a main pool, 
then each and every create and destroy of these sub-pools, plus any action on 
the main pool, must be mutex-protected. I found that out the hard way. 

It is similar with buckets. When you create a bucket in one thread, you may not 
destroy it in another - *while* the bucket_allocator is being used. 
bucket_allocators are not thread-safe, which means bucket_brigades are not, 
which means that all buckets from the same brigade must only be used inside a 
single thread.

This means, for example, that even though mod_http2 manages the pool lifetime 
correctly, it cannot pass a response bucket from a request pool in thread A for 
writing onto the main connection in thread B, *as long as* the response is not 
complete and thread A is still producing more buckets with the same allocator. 
etc. etc.

That is what I mean by not-thread-safe.

//Stefan
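The sub-pool hazard described above can be modeled in a few lines of plain C (invented names, not APR): child pools are recycled through a shared parent free list, so every create and destroy mutates parent state and must be serialized when multiple threads are involved.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model (invented names, not APR): child pools are recycled through
 * the parent's free list, so create/destroy mutate shared parent state.
 * The lock is modeled by a flag with a reentrancy assert; in real code
 * it would be a mutex, and *every* create/destroy of a child, plus any
 * allocation from the parent itself, must take it. */
typedef struct child_pool { struct child_pool *next; } child_pool;

typedef struct {
    int locked;                /* stands in for a mutex */
    child_pool *free_list;
    int live_children;
} parent_pool;

static void pool_lock(parent_pool *p)   { assert(!p->locked); p->locked = 1; }
static void pool_unlock(parent_pool *p) { p->locked = 0; }

static child_pool *child_create(parent_pool *p) {
    child_pool *c;
    pool_lock(p);              /* unsynchronized, two threads could pop
                                * the same free-list node */
    c = p->free_list;
    if (c) p->free_list = c->next;
    else   c = malloc(sizeof(*c));
    p->live_children++;
    pool_unlock(p);
    return c;
}

static void child_destroy(parent_pool *p, child_pool *c) {
    pool_lock(p);
    c->next = p->free_list;    /* recycle into the parent's free list */
    p->free_list = c;
    p->live_children--;
    pool_unlock(p);
}
```

The recycled free list is exactly why "each and every create and destroy" needs the parent's lock: the linked-list pops and pushes are not atomic.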

Re: buckets and connections (long post)

2015-10-22 Thread Stefan Eissing

> Am 21.10.2015 um 16:48 schrieb Graham Leggett :
> 
> On 21 Oct 2015, at 4:18 PM, Stefan Eissing  
> wrote:
> [...]
>> 3. The amount of buffered bytes should be more flexible per stream and 
>> redistribute a maximum for 
>>  the whole session depending on load.
>> 4. mod_http2 needs a process wide Resource Allocator for file handles. A 
>> master connection should
>>  borrow n handles at start, increase/decrease the amount based on load, to 
>> give best performance
>> 5. similar optimizations should be possible for other bucket types (mmap? 
>> immortal? heap?)
> 
> Right now this task is handled by the core network filter - it is very likely 
> this problem is already solved, and you don’t need to do anything.
> 
> If the problem still needs solving, then the core filter is the place to do 
> it. What the core filter does is add up the resources taken up by different 
> buckets and if these resources breach limits, blocking writes are done until 
> we’re below the limit again. This provides the flow control we need.

I know that code and it does not help HTTP/2 processing.

> With the async filters this flow control is now made available to every 
> filter in the ap_filter_setaside_brigade() function. When mod_http2 handles 
> async filters you’ll get this flow control for free.

No, it will not. The processing of responses is very different.

Example: there is individual flow control of responses in HTTP/2. Clients set 
small window sizes on images, like 64KB, in order to get small images 
completely or only the metadata of large ones. For these large files, the 
client does not send flow-control updates until it has received all the other 
resources it is interested in. *Then* it tells the server to go ahead and send 
the rest of these images.

This means a file bucket for such images will hang around for an indefinite 
amount of time, and a filter cannot say, "Oh, I have n file buckets queued, 
let's blocking-write them first before I accept more." The server cannot do 
that.
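A minimal model of this per-stream flow control (plain C, invented names, not the mod_http2 or nghttp2 API): the sender may write only up to the current window, and a withheld WINDOW_UPDATE parks the remainder indefinitely.

```c
#include <assert.h>

/* Toy model of HTTP/2 per-stream flow control: the server may send at
 * most `window` bytes on a stream; only a WINDOW_UPDATE from the client
 * replenishes it. A 64KB initial window lets a client take small images
 * whole while parking the tail of large ones indefinitely. */
typedef struct { long window; long pending; } h2_stream;

/* returns how many bytes may actually be written now */
static long stream_send(h2_stream *s, long want) {
    long n = want < s->window ? want : s->window;
    s->window -= n;
    s->pending = want - n;   /* the rest waits, possibly forever */
    return n;
}

static void window_update(h2_stream *s, long increment) {
    s->window += increment;  /* client finally asks for more */
}
```

Since only the client decides when `window_update` happens, the server cannot schedule a blocking write of the queued tail; it can only hold the data (or the file handle) until then.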

I certainly do not want to reinvent the wheel here, and I am very glad about 
any existing solution and being told how to use it. But please try to 
understand the specific problems before saying "we must already have a 
solution for that, go find it, you will see..."

//Stefan




Re: buckets and connections (long post)

2015-10-22 Thread Stefan Eissing
(I split these up, since answers touch on different topics):



> Am 21.10.2015 um 16:48 schrieb Graham Leggett :
> 
> On 21 Oct 2015, at 4:18 PM, Stefan Eissing  
> wrote:
> 
>> How good does this mechanism work for mod_http2? On the one side it's the 
>> same, on the other quite different.
>> 
>> On the real, main connection, the master connection, where the h2 session 
>> resides, things are
>> pretty similar with some exceptions:
>> - it is very bursty. requests continue to come in. There is no pause between 
>> responses and the next request.
>> - pauses, when they happen, will be longer. clients are expected to keep 
>> open connections around for
>> longer (if we let them).
>> - When there is nothing to do, mod_http2 makes a blocking read on the 
>> connection input. This currently
>> does not lead to the state B) or C). The worker for the http2 connection 
>> stays assigned. This needs
>> to improve.
> 
> The blocking read breaks the spirit of the event MPM.
> 
> In theory, as long as you enter the write completion state and then not leave 
> until your connection is done, this problem will go away.
> 
> If you want to read instead of write, make sure the CONN_SENSE_WANT_READ 
> option is set on the connection.

This does not parse. I do not understand what you are talking about. 

When all streams have been passed into the output filters, the mod_http2 
session does a 

status = ap_get_brigade(io->connection->input_filters,...)  (h2_conn_io.c, 
line 160)

similar to what ap_read_request() -> ap_rgetline_core() does. (protocol.c, line 
236)

What should mod_http2 do different here?

//Stefan

Re: buckets and connections (long post)

2015-10-21 Thread Graham Leggett
On 21 Oct 2015, at 4:18 PM, Stefan Eissing  wrote:

> How good does this mechanism work for mod_http2? On the one side it's the 
> same, on the other quite different.
> 
> On the real, main connection, the master connection, where the h2 session 
> resides, things are
> pretty similar with some exceptions:
> - it is very bursty. requests continue to come in. There is no pause between 
> responses and the next request.
> - pauses, when they happen, will be longer. clients are expected to keep open 
> connections around for
>  longer (if we let them).
> - When there is nothing to do, mod_http2 makes a blocking read on the 
> connection input. This currently
>  does not lead to the state B) or C). The worker for the http2 connection 
> stays assigned. This needs
>  to improve.

The blocking read breaks the spirit of the event MPM.

In theory, as long as you enter the write completion state and then not leave 
until your connection is done, this problem will go away.

If you want to read instead of write, make sure the CONN_SENSE_WANT_READ option 
is set on the connection.

(You may find reasons that stop this working, if so, these need to be isolated 
and fixed).

> This is the way it is implemented now. There may be other ways, but this is 
> the way we have. If we
> continue along this path, we have the following obstacles to overcome:
> 1. the master connection probably can play nicer with the MPM so that an idle 
> connection uses less
>   resources
> 2. The transfer of buckets from the slave to the master connection is a COPY 
> except in case of
>   file buckets (and there is a limit on that as well to not run out of 
> handles).
>   All other attempts at avoiding the copy, failed. This may be a personal 
> limitation of my APRbilities.

This is how the proxy does it.

Buckets owned by the backend conn_rec are copied and added to the frontend 
conn_rec.
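A toy sketch of that copy step (plain C, invented names, not the APR bucket API): the payload is duplicated with the target side's allocator (modeled by plain `malloc` here) before it is queued on the master brigade, so the two connections' lifetimes stay independent.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy sketch (invented names, not APR): data owned by the slave
 * connection's allocator is duplicated with the master connection's
 * allocator before it is queued on the master brigade, so either side
 * can be torn down independently. */
typedef struct { char *data; size_t len; } bucket;

static bucket bucket_copy_for_master(const bucket *slave_b) {
    bucket b;
    b.len = slave_b->len;
    b.data = malloc(b.len);          /* "master allocator" */
    memcpy(b.data, slave_b->data, b.len);
    return b;                        /* original stays with the slave */
}
```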

> 3. The amount of buffered bytes should be more flexible per stream and 
> redistribute a maximum for 
>   the whole session depending on load.
> 4. mod_http2 needs a process wide Resource Allocator for file handles. A 
> master connection should
>   borrow n handles at start, increase/decrease the amount based on load, to 
> give best performance
> 5. similar optimizations should be possible for other bucket types (mmap? 
> immortal? heap?)

Right now this task is handled by the core network filter - it is very likely 
this problem is already solved, and you don’t need to do anything.

If the problem still needs solving, then the core filter is the place to do it. 
What the core filter does is add up the resources taken up by different buckets 
and if these resources breach limits, blocking writes are done until we’re 
below the limit again. This provides the flow control we need.

With the async filters this flow control is now made available to every filter 
in the ap_filter_setaside_brigade() function. When mod_http2 handles async 
filters you’ll get this flow control for free.

> 6. pool buckets are very tricky to optimize, as pool creation/destroy is not 
> thread-safe in general
>   and it depends on how the parent pools and their allocators are set up. 
>   Early hopes get easily crushed under load.

As soon as I see “buckets aren’t thread safe” I read it as “buckets are being 
misused” or “pool lifetimes are being mixed up”.

Buckets arise from allocators, and you must never try to add a bucket from one 
allocator into a brigade sourced from another allocator. For example, if you 
have a bucket allocated from the slave connection, you need to copy it into a 
different bucket allocated from the master connection before trying to add it 
to a master brigade.

Buckets are also allocated from pools, and pools have different lifetimes 
depending on what they were created for. If you allocate a bucket from the 
request pool, that bucket will vanish when the request pool is destroyed. 
Buckets can be passed from one pool to another, that is what “setaside” means.

It is really important to get the pool lifetimes right. Allocate something 
accidentally from the master connection pool on a slave connection and it 
appears to work, because generally the master outlives the slave. Until the 
master is cleaned up first, and suddenly memory vanishes unexpectedly in the 
slave connections - and you crash.

There were a number of subtle bugs in the proxy where buckets had been 
allocated from the wrong pool, and all sorts of weirdness ensued. Make sure 
your pool lifetimes are allocated correctly and it will work.

> 7. The buckets passed down on the master connection are using another buffer 
> - when on https:// -
>   to influence the SSL record sizes on write. Another COPY is not nice, but 
> write performance
>   is better this way. The ssl optimizations in place do not work for HTTP/2 
> as it has other
>   bucket patterns. We should look if we can combine this into something 
> without COPY, but with
>   good sized SSL writes.

mod_ssl already worries about buffering on its own; there is no need to 
recreate this functionality. Was this not working?

buckets and connections (long post)

2015-10-21 Thread Stefan Eissing
(Sorry for the long post. It was helpful for me to write it. If it does not 
hold your interest long enough, just ignore it, please.)

As I understand it - and that understanding is incomplete - the usual request 
processing looks like this:

A)
worker:
  conn <--- cfilter <--- rfilter
 |--b-b-b-b-b-b-b-b...

with buckets trickling to the connection through connection and request 
filters, state being
held on the stack of the assigned worker.

Once the filters are done, we have

B)
  conn 
 |--b-b-b-b-b...

just a connection with a bucket brigade yet to be written. This no longer needs 
a stack. The worker can (depending on the MPM) be re-assigned to other tasks. 
Buckets are streamed out based on io events (for example).

To go from A) to B), the connection needs to set aside buckets, which is only 
real work for some particular types of buckets. Transient ones are an example: 
their data may reside on the stack, which is exactly what we need to free in 
order to reuse the worker.
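The setaside step can be sketched like this (plain C, invented names, not `apr_bucket_setaside` itself): only transient buckets, whose data lives on the worker's stack, require a copy before the worker can be released.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of "setaside" (invented names, not apr_bucket_setaside):
 * a transient bucket points at the worker's stack, so before the worker
 * is released its data must move to storage with a longer lifetime.
 * Heap, file or immortal buckets need no work. */
typedef enum { TRANSIENT, HEAP } bucket_type;
typedef struct { bucket_type type; const char *data; size_t len; } bucket;

static void bucket_setaside(bucket *b) {
    if (b->type == TRANSIENT) {
        char *copy = malloc(b->len);
        memcpy(copy, b->data, b->len);
        b->data = copy;      /* safe after the worker's stack unwinds */
        b->type = HEAP;
    }
}
```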

This is beneficial when the work for setting buckets aside has much less impact 
on the system
than keeping the worker threads allocated. This is especially likely when slow 
clients are involved
that take ages to read a response.

In HTTP/1.1, usually a response is fully read by the client before it makes the 
next request. So,
at least half the roundtrip time, the connection will be in state

C)
  conn 
 |-

without anything to read or write. But when the next request comes in, it gets 
assigned a worker and is
back in state A). Repeat until connection close.

Ok, so far?


How good does this mechanism work for mod_http2? On the one side it's the same, 
on the other quite different.

On the real, main connection, the master connection, where the h2 session 
resides, things are
pretty similar with some exceptions:
- it is very bursty. requests continue to come in. There is no pause between 
responses and the next request.
- pauses, when they happen, will be longer. clients are expected to keep open 
connections around for
  longer (if we let them).
- When there is nothing to do, mod_http2 makes a blocking read on the 
connection input. This currently
  does not lead to the state B) or C). The worker for the http2 connection 
stays assigned. This needs
  to improve.

On the virtual, slave connection, the one for HTTP/2 streams, aka requests, 
things are very different:
- the slave connection has a socket purely for the looks of it; there is no 
real connection.
- eventing for input/output is done via condition variables and a mutex shared 
with the thread working on
  the main connection
- the "set-aside" happens, when output is transferred from the slave connection 
to the main one. The main
  connection allows a configurable number of maximum bytes buffered (or 
set-aside). Whenever the rest
  of the response fits into this buffer, the slave connection will be closed 
and the slave worker is
  reassigned. 
- Even better, when the response is a file bucket, the file handle is 
transferred, which is not counted 
  against the buffer limit (as it is just a handle). Therefore, static files 
are only looked up 
  by a slave connection; all IO is done by the master thread.

So state A) is the same for slave connections. B) applies only insofar as the 
set-aside is replaced with the 
transfer of buckets to the master connection - which happens all the time. So, 
slave connections are
just in A) or are gone. Slave connections are not kept open.


This is the way it is implemented now. There may be other ways, but this is the 
way we have. If we
continue along this path, we have the following obstacles to overcome:
1. the master connection probably can play nicer with the MPM so that an idle 
connection uses less
   resources
2. The transfer of buckets from the slave to the master connection is a COPY 
except in case of
   file buckets (and there is a limit on that as well to not run out of 
handles).
   All other attempts at avoiding the copy, failed. This may be a personal 
limitation of my APRbilities.
3. The amount of buffered bytes should be more flexible per stream and 
redistribute a maximum for 
   the whole session depending on load.
4. mod_http2 needs a process wide Resource Allocator for file handles. A master 
connection should
   borrow n handles at start, increase/decrease the amount based on load, to 
give best performance
5. similar optimizations should be possible for other bucket types (mmap? 
immortal? heap?)
6. pool buckets are very tricky to optimize, as pool creation/destroy is not 
thread-safe in general
   and it depends on how the parent pools and their allocators are set up. 
   Early hopes get easily crushed under load.
7. The buckets passed down on the master connection are using another buffer - 
when on https:// -
   to influence the SSL record sizes on write. Another COPY is not nice, but 
write performance
   is better this way. The ssl optimizations in place do not work for HTTP/2 as 
it has other
   bucket patterns. We should look if we can combine this into something 
without COPY, but with
   good sized SSL writes.

Re: PR56729: reqtimeout bug with fast response and slow POST

2014-11-24 Thread Yann Ylavic
On Sun, Nov 23, 2014 at 12:11 AM, Eric Covener  wrote:
> On Thu, Nov 20, 2014 at 9:57 AM, Yann Ylavic  wrote:
>> On Wed, Nov 19, 2014 at 1:13 PM, Eric Covener  wrote:
>>> On Wed, Nov 19, 2014 at 4:47 AM, Yann Ylavic  wrote:
 Errr, this is in 2.2.x/STATUS only (not 2.4.x).
 Is it already proposed/backported to 2.4.x (I can't find the commit)?
>>>
>>> I diff'ed trunk and 2.4 and it seems to be absent.
>>>
>>> I don't have the best handle on this, but if we're about to go down
>>> into a blocking read, wouldn't we want to check the time left and
>>> reduce the timeout?
>>
>> Yes, good point.
>>
>> Maybe this way then?
>>
>> Index: modules/filters/mod_reqtimeout.c
>> ===================================================================
>> --- modules/filters/mod_reqtimeout.c	(revision 1640032)
>> +++ modules/filters/mod_reqtimeout.c	(working copy)
>> @@ -311,7 +311,12 @@ static apr_status_t reqtimeout_filter(ap_filter_t
>>      else {
>>          /* mode != AP_MODE_GETLINE */
>>          rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
>> -        if (ccfg->min_rate > 0 && rv == APR_SUCCESS) {
>> +        /* Don't extend the timeout in speculative mode, wait for
>> +         * the real (relevant) bytes to be asked later, within the
>> +         * currently alloted time.
>> +         */
>> +        if (ccfg->min_rate > 0 && rv == APR_SUCCESS
>> +            && mode != AP_MODE_SPECULATIVE) {
>>              extend_timeout(ccfg, bb);
>>          }
>>      }
>
> Looks good

Commited in r1641376.

However, I now think you were right with your original proposal to
bypass the filter based on EOS :p
I don't see any reason why we should not do the same for blocking and
nonblocking reads.
The call from check_pipeline() is an exception that shouldn't
interfere with the real calls from modules.

So the current code (including r1641376) is probably "good enough" for
2.2.x, but maybe we could make it better for trunk and 2.4.x.

A first proposal is straightforward from your original patch:
- bypass the filter when the request is over (based on EOS seen),
- check the timeout (but don't extend it) in speculative mode for both
blocking and nonblocking reads.
See httpd-trunk-reqtimeout_filter_eos.patch attached.

A second proposal would be to have an optional function from
mod_reqtimeout to (de)activate itself on demand.
Thus check_pipeline() can use it (if not NULL, i.e. mod_reqtimeout
loaded) before/after the read to deactivate/reactivate the checks.
This is maybe more intrusive (requires a new
ap_init_request_processing() function in http_request.h) but is less
dependent on mod_reqtimeout seeing the EOS (and it could also be used
where currently mod_reqtimeout is forcibly removed from the chain).
See httpd-trunk-reqtimeout_set_inactive.patch attached.

WDYT?
Index: modules/filters/mod_reqtimeout.c
===================================================================
--- modules/filters/mod_reqtimeout.c	(revision 1641376)
+++ modules/filters/mod_reqtimeout.c	(working copy)
@@ -64,6 +64,7 @@ typedef struct
 } reqtimeout_con_cfg;
 
 static const char *const reqtimeout_filter_name = "reqtimeout";
+static const char *const reqtimeout_filter_eos_name = "reqtimeout_eos";
 static int default_header_rate_factor;
 static int default_body_rate_factor;
 
@@ -176,23 +177,13 @@ static apr_status_t reqtimeout_filter(ap_filter_t
     apr_status_t rv;
     apr_interval_time_t saved_sock_timeout = UNSET;
     reqtimeout_con_cfg *ccfg = f->ctx;
+    int extendable;
 
     if (ccfg->in_keep_alive) {
         /* For this read, the normal keep-alive timeout must be used */
-        ccfg->in_keep_alive = 0;
         return ap_get_brigade(f->next, bb, mode, block, readbytes);
     }
 
-    if (block == APR_NONBLOCK_READ && mode == AP_MODE_SPECULATIVE) {
-        /*  The source of these above us in the core is check_pipeline(), which
-         *  is between requests but before this filter knows to reset timeouts
-         *  during log_transaction().  If they appear elsewhere, just don't
-         *  check or extend the time since they won't block and we'll see the
-         *  bytes again later
-         */
-        return ap_get_brigade(f->next, bb, mode, block, readbytes);
-    }
-
     if (ccfg->new_timeout > 0) {
         /* set new timeout */
         now = apr_time_now();
@@ -212,6 +203,12 @@ static apr_status_t reqtimeout_filter(ap_filter_t
         ccfg->socket = ap_get_conn_socket(f->c);
     }
 
+    /* Don't extend the timeout in speculative mode, wait for
+     * the real (relevant) bytes to be asked later, within the
+     * currently alloted time.
+     */
+    extendable = (ccfg->min_rate > 0 && mode != AP_MODE_SPECULATIVE);
+
     rv = check_time_left(ccfg, &time_left, now);
     if (rv != APR_SUCCESS)
         goto out;
@@ -219,7 +216,7 @@ static apr_status_t reqtimeout_filter(ap_filter_t
     if (block == APR_NONBLOCK_READ || mode == AP_MODE_INIT
         || mode == AP_MODE_EATCRLF) {
         rv = ap_ge

Re: PR56729: reqtimeout bug with fast response and slow POST

2014-11-22 Thread Eric Covener
On Thu, Nov 20, 2014 at 9:57 AM, Yann Ylavic  wrote:
> On Wed, Nov 19, 2014 at 1:13 PM, Eric Covener  wrote:
>> On Wed, Nov 19, 2014 at 4:47 AM, Yann Ylavic  wrote:
>>> Errr, this is in 2.2.x/STATUS only (not 2.4.x).
>>> Is it already proposed/backported to 2.4.x (I can't find the commit)?
>>
>> I diff'ed trunk and 2.4 and it seems to be absent.
>>
>> I don't have the best handle on this, but if we're about to go down
>> into a blocking read, wouldn't we want to check the time left and
>> reduce the timeout?
>
> Yes, good point.
>
> Maybe this way then?
>
> Index: modules/filters/mod_reqtimeout.c
> ===================================================================
> --- modules/filters/mod_reqtimeout.c	(revision 1640032)
> +++ modules/filters/mod_reqtimeout.c	(working copy)
> @@ -311,7 +311,12 @@ static apr_status_t reqtimeout_filter(ap_filter_t
>      else {
>          /* mode != AP_MODE_GETLINE */
>          rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
> -        if (ccfg->min_rate > 0 && rv == APR_SUCCESS) {
> +        /* Don't extend the timeout in speculative mode, wait for
> +         * the real (relevant) bytes to be asked later, within the
> +         * currently alloted time.
> +         */
> +        if (ccfg->min_rate > 0 && rv == APR_SUCCESS
> +            && mode != AP_MODE_SPECULATIVE) {
>              extend_timeout(ccfg, bb);
>          }
>      }

Looks good


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-11-20 Thread Yann Ylavic
On Wed, Nov 19, 2014 at 1:13 PM, Eric Covener  wrote:
> On Wed, Nov 19, 2014 at 4:47 AM, Yann Ylavic  wrote:
>> Errr, this is in 2.2.x/STATUS only (not 2.4.x).
>> Is it already proposed/backported to 2.4.x (I can't find the commit)?
>
> I diff'ed trunk and 2.4 and it seems to be absent.
>
> I don't have the best handle on this, but if we're about to go down
> into a blocking read, wouldn't we want to check the time left and
> reduce the timeout?

Yes, good point.

Maybe this way then?

Index: modules/filters/mod_reqtimeout.c
===================================================================
--- modules/filters/mod_reqtimeout.c	(revision 1640032)
+++ modules/filters/mod_reqtimeout.c	(working copy)
@@ -311,7 +311,12 @@ static apr_status_t reqtimeout_filter(ap_filter_t
     else {
         /* mode != AP_MODE_GETLINE */
         rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
-        if (ccfg->min_rate > 0 && rv == APR_SUCCESS) {
+        /* Don't extend the timeout in speculative mode, wait for
+         * the real (relevant) bytes to be asked later, within the
+         * currently alloted time.
+         */
+        if (ccfg->min_rate > 0 && rv == APR_SUCCESS
+            && mode != AP_MODE_SPECULATIVE) {
             extend_timeout(ccfg, bb);
         }
     }


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-11-19 Thread Eric Covener
On Wed, Nov 19, 2014 at 4:47 AM, Yann Ylavic  wrote:
> Errr, this is in 2.2.x/STATUS only (not 2.4.x).
> Is it already proposed/backported to 2.4.x (I can't find the commit)?

I diff'ed trunk and 2.4 and it seems to be absent.

I don't have the best handle on this, but if we're about to go down
into a blocking read, wouldn't we want to check the time left and
reduce the timeout?


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-11-19 Thread Yann Ylavic
On Wed, Nov 19, 2014 at 10:26 AM, Yann Ylavic  wrote:
> Eric, Jeff, since you already voted for r1621453 in 2.4.x/STATUS

Errr, this is in 2.2.x/STATUS only (not 2.4.x).
Is it already proposed/backported to 2.4.x (I can't find the commit)?


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-11-19 Thread Yann Ylavic
On Sat, Aug 30, 2014 at 3:19 PM, Yann Ylavic  wrote:
> On Sat, Aug 30, 2014 at 3:02 PM, Eric Covener  wrote:
>> On Tue, Aug 26, 2014 at 5:22 AM, Yann Ylavic  wrote:
>>> I don't think mod_reqtimeout should handle/count speculative bytes,
>>> they ought to be read for real later (and taken into account then).
>>> Otherwise, the same bytes may be counted multiple times.
>>>
>>> How about simply forwarding the ap_get_brigade() call?
>>
>> Makes sense --  I did limit it to nonblock as well. Can you take a
>> look before I propose? http://svn.apache.org/r1621453
>
> I'm not sure we should limit it to nonblock: speculative mode is
> mainly meant to be used non-blocking, but ap_rgetline_core() for example
> does not do so (when folding headers), so mod_reqtimeout may still count
> header bytes twice.

Eric, Jeff, since you already voted for r1621453 in 2.4.x/STATUS, how
about this additional patch?
As said above, IMHO we really shouldn't count any speculative bytes in
mod_reqtimeout (nonblocking or not), relevant data should be asked
"for real" soon or later.

Index: modules/filters/mod_reqtimeout.c
===================================================================
--- modules/filters/mod_reqtimeout.c	(revision 1640032)
+++ modules/filters/mod_reqtimeout.c	(working copy)
@@ -183,12 +183,13 @@ static apr_status_t reqtimeout_filter(ap_filter_t
         return ap_get_brigade(f->next, bb, mode, block, readbytes);
     }
 
-    if (block == APR_NONBLOCK_READ && mode == AP_MODE_SPECULATIVE) {
+    if (mode == AP_MODE_SPECULATIVE) {
         /*  The source of these above us in the core is check_pipeline(), which
          *  is between requests but before this filter knows to reset timeouts
-         *  during log_transaction().  If they appear elsewhere, just don't
-         *  check or extend the time since they won't block and we'll see the
-         *  bytes again later
+         *  during log_transaction(), or ap_rgetline_core() to handle headers'
+         *  folding (next char prefetch).  Likewise, if they appear elsewhere,
+         *  just don't check or extend the time since we should see the
+         *  relevant bytes again later.
          */
         return ap_get_brigade(f->next, bb, mode, block, readbytes);
     }
>
> Regards,
> Yann.


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-08-30 Thread Yann Ylavic
On Sat, Aug 30, 2014 at 3:02 PM, Eric Covener  wrote:
> On Tue, Aug 26, 2014 at 5:22 AM, Yann Ylavic  wrote:
>> I don't think mod_reqtimeout should handle/count speculative bytes,
>> they ought to be read for real later (and taken into account then).
>> Otherwise, the same bytes may be counted multiple times.
>>
>> How about simply forwarding the ap_get_brigade() call?
>
> Makes sense --  I did limit it to nonblock as well. Can you take a
> look before I propose? http://svn.apache.org/r1621453

I'm not sure we should limit it to nonblock: speculative mode is
mainly meant to be used non-blocking, but ap_rgetline_core() for example
does not do so (when folding headers), so mod_reqtimeout may still count
header bytes twice.

Otherwise, looks good to me.

Regards,
Yann.


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-08-30 Thread Eric Covener
On Tue, Aug 26, 2014 at 5:22 AM, Yann Ylavic  wrote:
> On Mon, Aug 25, 2014 at 10:05 PM, Eric Covener  wrote:
>> But it seemed a little hokey, and I didn't really understand if we
>> could instead treat that speculative read as some kind of reset point,
>> and couldn't think of any other hook to tell reqtimeout to bail out.
>>
>> Any alternatives?
>
> I don't think mod_reqtimeout should handle/count speculative bytes,
> they ought to be read for real later (and taken into account then).
> Otherwise, the same bytes may be counted multiple times.
>
> How about simply forwarding the ap_get_brigade() call?

Makes sense --  I did limit it to nonblock as well. Can you take a
look before I propose? http://svn.apache.org/r1621453


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-08-26 Thread Eric Covener
On Mon, Aug 25, 2014 at 4:05 PM, Eric Covener  wrote:
> I am looking at this PR which I was able to recreate:
>
> https://issues.apache.org/bugzilla/show_bug.cgi?id=56729


Whoops, I got the topic backwards. Fast post, slow response.

-- 
Eric Covener
cove...@gmail.com


Re: PR56729: reqtimeout bug with fast response and slow POST

2014-08-26 Thread Yann Ylavic
On Mon, Aug 25, 2014 at 10:05 PM, Eric Covener  wrote:
> But it seemed a little hokey, and I didn't really understand if we
> could instead treat that speculative read as some kind of reset point,
> and couldn't think of any other hook to tell reqtimeout to bail out.
>
> Any alternatives?

I don't think mod_reqtimeout should handle/count speculative bytes,
they ought to be read for real later (and taken into account then).
Otherwise, the same bytes may be counted multiple times.

How about simply forwarding the ap_get_brigade() call?

Regards,
Yann.
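The double-counting risk described above can be illustrated with a toy model of the rate-based timeout extension (plain C, invented names, not mod_reqtimeout's actual code): crediting the same bytes on a speculative read and again on the real read would extend the deadline twice for one transfer.

```c
#include <assert.h>

/* Toy model (invented names) of a rate-based extension: each byte read
 * at a configured minimum rate of min_rate bytes/second buys the client
 * that much more time. Speculative bytes are not credited, because the
 * same bytes will be read (and credited) for real later. */
typedef long long usec_t;

static usec_t extend_deadline(usec_t deadline_us, long bytes_read,
                              long min_rate, int speculative)
{
    if (speculative || min_rate <= 0)
        return deadline_us;  /* these bytes will be read for real later */
    return deadline_us + (usec_t)bytes_read * 1000000 / min_rate;
}
```

With 5 seconds left and 1000 bytes read at a 500 B/s minimum rate, a real read extends the deadline by 2 seconds; a speculative read of the same bytes leaves it untouched.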


RE: PR56729: reqtimeout bug with fast response and slow POST

2014-08-25 Thread Plüm , Rüdiger , Vodafone Group


> -Original Message-
> From: Eric Covener [mailto:cove...@gmail.com]
> Sent: Montag, 25. August 2014 22:05
> To: Apache HTTP Server Development List
> Subject: PR56729: reqtimeout bug with fast response and slow POST
> 
> I am looking at this PR which I was able to recreate:
> 
> https://issues.apache.org/bugzilla/show_bug.cgi?id=56729
> 
> mod_reqtimeout thinks the body is still being read when it gets called
> with mode=AP_MODE_SPECULATIVE during check_pipeline() near the end of
> a request.
> 
> Since all of the handler's processing time has gone by, it thinks the
> read of the body has timed out, and it returns an error, setting
> AP_CONN_CLOSE and a short linger time.
> 
> Since mod_reqtimeout is below the protocol level (AP_FTYPE_CONNECTION +
> 8), it cannot even see the HTTP input filter's EOS bucket if it's
> looking for it.  I was able to add a 2nd filter that shares the
> conn_config with the normal filter and sits up higher looking for the
> EOS -- this seems to work
> 
> http://people.apache.org/~covener/patches/2.4.x-reqtimeout_post_error.diff
> 
> But it seemed a little hokey, and I didn't really understand if we
> could instead treat that speculative read as some kind of reset point,
> and couldn't think of any other hook to tell reqtimeout to bail out.
> 
> Any alternatives?

I thought about looking for an EOR bucket in an output filter, but your 
approach seems better as it detects the end of the input stream faster.
Speculative reads might also be used by the handler for some reason, so I 
don't think they can be used as a signal to the mod_reqtimeout filter.
OTOH, speculative reads are not handled by the HTTP_INPUT filter. Hence, if a 
handler uses a speculative read, it doesn't get the dechunking from the 
HTTP_INPUT filter where applicable. This only seems to make sense if the 
handler doesn't deal with a possibly chunked body for whatever reason.

Regards

Rüdiger
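Eric's two-filter approach (patch linked above) can be sketched roughly as
follows; the config struct, filter name, and body are illustrative
assumptions, not the actual patch:

```c
/* Hypothetical sketch of the two-filter idea: a second input filter
 * registered at AP_FTYPE_PROTOCOL, sharing the module's conn_config with
 * the connection-level filter, that watches for the EOS bucket the
 * connection-level filter cannot see. */
#include "httpd.h"
#include "http_config.h"
#include "util_filter.h"

extern module AP_MODULE_DECLARE_DATA reqtimeout_module;

typedef struct {
    int in_body;  /* set while the request body is still being read */
} reqtimeout_con_cfg;

static apr_status_t reqtimeout_eos_filter(ap_filter_t *f,
                                          apr_bucket_brigade *bb,
                                          ap_input_mode_t mode,
                                          apr_read_type_e block,
                                          apr_off_t readbytes)
{
    reqtimeout_con_cfg *ccfg =
        ap_get_module_config(f->c->conn_config, &reqtimeout_module);
    apr_status_t rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
    apr_bucket *b;

    if (rv != APR_SUCCESS) {
        return rv;
    }
    /* The HTTP_INPUT filter below us sends EOS once the body is consumed;
     * record that so the connection-level filter stops applying the body
     * timeout to later speculative reads. */
    for (b = APR_BRIGADE_FIRST(bb); b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        if (APR_BUCKET_IS_EOS(b)) {
            ccfg->in_body = 0;
            break;
        }
    }
    return rv;
}
```

The key point is that this second filter sits at protocol level, so it can
see the EOS that HTTP_INPUT generates, while the connection-level filter at
AP_FTYPE_CONNECTION + 8 cannot.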




PR56729: reqtimeout bug with fast response and slow POST

2014-08-25 Thread Eric Covener
I am looking at this PR which I was able to recreate:

https://issues.apache.org/bugzilla/show_bug.cgi?id=56729

mod_reqtimeout thinks the body is still being read when it gets called
with mode=AP_MODE_SPECULATIVE during check_pipeline() near the end of
a request.

Since all of the handler's processing time has gone by, it thinks the
read of the body has timed out and it returns an error, setting
AP_CONN_CLOSE and a short linger time.

Since mod_reqtimeout is below the protocol level (AP_FTYPE_CONNECTION +
8), it cannot even see the HTTP input filter's EOS bucket if it's
looking for it.  I was able to add a 2nd filter that shares the
conn_config with the normal filter and sits up higher looking for the
EOS -- this seems to work

http://people.apache.org/~covener/patches/2.4.x-reqtimeout_post_error.diff

It seemed a little hokey, though; I didn't really understand whether we
could instead treat that speculative read as some kind of reset point,
and couldn't think of any other hook to tell reqtimeout to bail out.

Any alternatives?

-- 
Eric Covener
cove...@gmail.com


Re: mod_ssl post-read-request error checking on internal redirects

2014-07-12 Thread Jeff Trawick
On Fri, Jul 11, 2014 at 5:09 PM, Yann Ylavic  wrote:

> On Fri, Jul 11, 2014 at 10:25 PM, Jeff Trawick  wrote:
> > A patch:
> >
> > Index: modules/ssl/ssl_engine_kernel.c
> > ===
> > --- modules/ssl/ssl_engine_kernel.c (revision 1609790)
> > +++ modules/ssl/ssl_engine_kernel.c (working copy)
> > @@ -164,7 +164,7 @@
> >  return DECLINED;
> >  }
> >  #ifdef HAVE_TLSEXT
> > -if (r->proxyreq != PROXYREQ_PROXY) {
> > +if (!r->prev && r->proxyreq != PROXYREQ_PROXY) {
> >  if ((servername = SSL_get_servername(ssl,
> > TLSEXT_NAMETYPE_host_name))) {
> >  char *host, *scope_id;
> >  apr_port_t port;
> >
> >
> > This path in the post-read-request hook is performing some SNI-related
> error
> > checking, catching situations where it will return 400 or 403.
> >
> > I noticed with StrictSNIVHostCheck failures that this code is triggering
> an
> > error on a subrequest to generate an error document after catching the
> same
> > error on the initial request.
> >
> > Is there a reason either of the checks here needs to be made on a
> > subrequest?
>
> I don't see any, the post-read-request hooks are always run on the
> initial request, and the SSL* will always be the one of the initial
> request for all its subrequests (unless some third-party module plays
> really bad with subr->connection).
>
> You probably could use !ap_is_initial_req(r) but post-read-request
> hooks are never run on ap_sub_req()uests (having r->main) AFAIK.
>

Thanks for looking, Yann.  I did change it to use ap_is_initial_req()
(without the ! :) )

r1609914


Re: mod_ssl post-read-request error checking on internal redirects

2014-07-11 Thread Yann Ylavic
On Fri, Jul 11, 2014 at 10:25 PM, Jeff Trawick  wrote:
> A patch:
>
> Index: modules/ssl/ssl_engine_kernel.c
> ===
> --- modules/ssl/ssl_engine_kernel.c (revision 1609790)
> +++ modules/ssl/ssl_engine_kernel.c (working copy)
> @@ -164,7 +164,7 @@
>  return DECLINED;
>  }
>  #ifdef HAVE_TLSEXT
> -if (r->proxyreq != PROXYREQ_PROXY) {
> +if (!r->prev && r->proxyreq != PROXYREQ_PROXY) {
>  if ((servername = SSL_get_servername(ssl,
> TLSEXT_NAMETYPE_host_name))) {
>  char *host, *scope_id;
>  apr_port_t port;
>
>
> This path in the post-read-request hook is performing some SNI-related error
> checking, catching situations where it will return 400 or 403.
>
> I noticed with StrictSNIVHostCheck failures that this code is triggering an
> error on a subrequest to generate an error document after catching the same
> error on the initial request.
>
> Is there a reason either of the checks here needs to be made on a
> subrequest?

I don't see any, the post-read-request hooks are always run on the
initial request, and the SSL* will always be the one of the initial
request for all its subrequests (unless some third-party module plays
really bad with subr->connection).

You probably could use !ap_is_initial_req(r) but post-read-request
hooks are never run on ap_sub_req()uests (having r->main) AFAIK.


mod_ssl post-read-request error checking on internal redirects

2014-07-11 Thread Jeff Trawick
A patch:

Index: modules/ssl/ssl_engine_kernel.c
===
--- modules/ssl/ssl_engine_kernel.c (revision 1609790)
+++ modules/ssl/ssl_engine_kernel.c (working copy)
@@ -164,7 +164,7 @@
 return DECLINED;
 }
 #ifdef HAVE_TLSEXT
-if (r->proxyreq != PROXYREQ_PROXY) {
+if (!r->prev && r->proxyreq != PROXYREQ_PROXY) {
 if ((servername = SSL_get_servername(ssl,
TLSEXT_NAMETYPE_host_name))) {
 char *host, *scope_id;
 apr_port_t port;


This path in the post-read-request hook is performing some SNI-related
error checking, catching situations where it will return 400 or 403.

I noticed with StrictSNIVHostCheck failures that this code is triggering an
error on a subrequest to generate an error document after catching the same
error on the initial request.

Is there a reason either of the checks here needs to be made on a
subrequest?

Thanks!


-- 
Born in Roswell... married an alien...
http://emptyhammock.com/
http://edjective.org/



Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-11 Thread Joe Orton
On Fri, Jun 08, 2012 at 08:19:22AM -0400, Jeff Trawick wrote:
> On Fri, Jun 8, 2012 at 4:58 AM, Joe Orton  wrote:
> > Yes, but that was exactly the previous state: the security implication
> > of doing crazy stuff with rewrite rules really is totally unknown.  I
> > wouldn't say "infrequently used features", I'd say "undocumented
> > behaviour which happened to work previously".
> 
> "crazy stuff"/"happened to work" seems a bit convenient for referring
> to some useful functionality which was regressed :(  But as far as we
> know Right Now it is practical for a user to ensure that all their
> rewrite rules are well formed and turn on this option without fear.
> Right?

Right, so long as the rule set is safe for all possible input strings, 
and users realise mod_rewrite does not constrain that set of strings.

Yeah, this is perhaps a "convenient" position to take.  We'd be open to 
the same accusation had we decided that 3368/4317 were config issues not 
security issues, just with a different set of disgruntled users.  I'd 
still go this route, I think; default to safe + config option for 
"unsafe" mode.

> I guess there is no desire among the group to take any of the reported
> regressions and deem the "feature" supported in the normal manner.

Without a config option?  I've no objection but neither any desire to 
climb that mountain myself.  The problem I see is that we'd need a 
better specification for the "rule set input string" to replace 
"URL-path"; I've no handle on how complex that would be.

Regards, Joe


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-08 Thread Jeff Trawick
On Fri, Jun 8, 2012 at 4:58 AM, Joe Orton  wrote:
> On Thu, Jun 07, 2012 at 01:14:37PM -0400, Jeff Trawick wrote:
>> On Thu, Jun 7, 2012 at 11:55 AM, Joe Orton  wrote:
>> > I like Eric's suggestion of an opt-in RewriteOption.  This will avoid
>> > having to iterate yet again if the whitelist is either too broad or too
>> > narrow, and can make the security implications (such as they are)
>> > explicit.
>>
>> Doesn't that just mean that the security implications are unknown when
>> you want mod_rewrite to process a proxied http request or a CONNECT?
>> I.e., you have to turn off the sanity checks in order to use certain
>> infrequently used features.
>
> Yes, but that was exactly the previous state: the security implication
> of doing crazy stuff with rewrite rules really is totally unknown.  I
> wouldn't say "infrequently used features", I'd say "undocumented
> behaviour which happened to work previously".

"crazy stuff"/"happened to work" seems a bit convenient for referring
to some useful functionality which was regressed :(  But as far as we
know Right Now it is practical for a user to ensure that all their
rewrite rules are well formed and turn on this option without fear.
Right?

I guess there is no desire among the group to take any of the reported
regressions and deem the "feature" supported in the normal manner.

-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-08 Thread Rainer Jung

On 08.06.2012 10:58, Plüm, Rüdiger, Vodafone Group wrote:

>> -----Original Message-----
>> From: Joe Orton
>> Sent: Freitag, 8. Juni 2012 10:38
>> To: dev@httpd.apache.org
>> Subject: Re: post-CVE-2011-4317 (rewrite proxy unintended
>> interpolation) rewrite PR's
>>
>> On Thu, Jun 07, 2012 at 01:23:29PM -0400, Eric Covener wrote:
>>> e.g. RewriteOptions +"I know I'm running this regex against something
>>> that's not guaranteed to look like a URL-path, and I'll write a regex
>>> that carefully matches/captures the input"
>>
>> How about this?  I'm not sure how to put the right level of fear into
>> the name.  AllowUnsafeURI?  AllowInsecureURIMatch?
>
> +1 for the patch as such. Option name discussion may take some time :-)

+1 as well.

Rainer



RE: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-08 Thread Plüm, Rüdiger, Vodafone Group


> -----Original Message-----
> From: Joe Orton 
> Sent: Freitag, 8. Juni 2012 10:38
> To: dev@httpd.apache.org
> Subject: Re: post-CVE-2011-4317 (rewrite proxy unintended
> interpolation) rewrite PR's
> 
> On Thu, Jun 07, 2012 at 01:23:29PM -0400, Eric Covener wrote:
> > e.g. RewriteOptions +"I know I'm running this regex against something
> > that's not guaranteed to look like a URL-path, and I'll write a regex
> > that carefully matches/captures the input"
> 
> How about this?  I'm not sure how to put the right level of fear into
> the name.  AllowUnsafeURI?  AllowInsecureURIMatch?

+1 for the patch as such. Option name discussion may take some time :-)

Regards

Rüdiger



Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-08 Thread Joe Orton
On Thu, Jun 07, 2012 at 01:14:37PM -0400, Jeff Trawick wrote:
> On Thu, Jun 7, 2012 at 11:55 AM, Joe Orton  wrote:
> > I like Eric's suggestion of an opt-in RewriteOption.  This will avoid
> > having to iterate yet again if the whitelist is either too broad or too
> > narrow, and can make the security implications (such as they are)
> > explicit.
> 
> Doesn't that just mean that the security implications are unknown when
> you want mod_rewrite to process a proxied http request or a CONNECT?
> I.e., you have to turn off the sanity checks in order to use certain
> infrequently used features.

Yes, but that was exactly the previous state: the security implication 
of doing crazy stuff with rewrite rules really is totally unknown.  I 
wouldn't say "infrequently used features", I'd say "undocumented 
behaviour which happened to work previously".

Regards, Joe


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-08 Thread Joe Orton
On Thu, Jun 07, 2012 at 01:23:29PM -0400, Eric Covener wrote:
> e.g. RewriteOptions +"I know I'm running this regex against something
> that's not guaranteed to look like a URL-path, and I'll write a regex
> that carefully matches/captures the input"

How about this?  I'm not sure how to put the right level of fear into 
the name.  AllowUnsafeURI?  AllowInsecureURIMatch?

(This patch works for the CONNECT rewriting case, I haven't tested the 
other problematic cases.)

Index: modules/mappers/mod_rewrite.c
===
--- modules/mappers/mod_rewrite.c   (revision 1347667)
+++ modules/mappers/mod_rewrite.c   (working copy)
@@ -190,6 +190,7 @@
 #define OPTION_INHERIT  1<<1
 #define OPTION_INHERIT_BEFORE   1<<2
 #define OPTION_NOSLASH  1<<3
+#define OPTION_ANYURI   1<<4
 
 #ifndef RAND_MAX
 #define RAND_MAX 32767
@@ -2895,6 +2896,9 @@
  "LimitInternalRecursion directive and will be "
  "ignored.");
 }
+else if (!strcasecmp(w, "allowanyuri")) {
+options |= OPTION_ANYURI;
+}
 else {
 return apr_pstrcat(cmd->pool, "RewriteOptions: unknown option '",
w, "'", NULL);
@@ -4443,8 +4447,14 @@
 return DECLINED;
 }
 
-if ((r->unparsed_uri[0] == '*' && r->unparsed_uri[1] == '\0')
-|| !r->uri || r->uri[0] != '/') {
+/* Unless the anyuri option is set, ensure that the input to the
+ * first rule really is a URL-path, avoiding security issues with
+ * poorly configured rules.  See CVE-2011-3368, CVE-2011-4317. */
+if ((dconf->options & OPTION_ANYURI) == 0
+&& ((r->unparsed_uri[0] == '*' && r->unparsed_uri[1] == '\0')
+|| !r->uri || r->uri[0] != '/')) {
+rewritelog((r, 8, NULL, "Declining, request-URI '%s' is not a URL-path",
+r->uri));
 return DECLINED;
 }
 
Index: docs/manual/mod/mod_rewrite.xml
===
--- docs/manual/mod/mod_rewrite.xml (revision 1347667)
+++ docs/manual/mod/mod_rewrite.xml (working copy)
@@ -188,6 +188,37 @@
   later.
   
 
+  AllowAnyURI
+  
+
+  When RewriteRule
+  is used in VirtualHost or server context with
+  version 2.2.22 or later of httpd, mod_rewrite
+  will only process the rewrite rules if the request URI is a URL-path.  This avoids
+  some security issues where particular rules could allow
+  "surprising" pattern expansions (see
+  http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3368 and
+  http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-4317).
+  To lift the restriction on matching a URL-path, the
+  AllowAnyURI option can be enabled, and
+  mod_rewrite will apply the rule set to any
+  request URI string, regardless of whether that string matches
+  the URL-path grammar required by the HTTP specification.
+
+  
+  Security Warning 
+
+  Enabling this option will make the server vulnerable to
+  security issues if used with rewrite rules which are not
+  carefully authored.  It is strongly recommended
+  that this option is not used.  In particular, beware of input
+  strings containing the '@' character which could
+  change the interpretation of the transformed URI.
+  
+  
+
   
 
 

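With a build carrying the patch above, opting back in (the option was later
documented in mod_rewrite as AllowAnyURI) would look something like this;
the rule itself is a hypothetical example:

```
# Server/vhost context: opt back in to rewriting request lines that are
# not URL-paths (e.g. CONNECT host:port targets or proxy: URIs).
# Only safe if every pattern fully anchors and validates its input --
# see the security warning in the patched documentation.
RewriteEngine On
RewriteOptions AllowAnyURI

# Hypothetical example: reject CONNECT attempts to a specific host:port
RewriteRule ^forbidden\.example\.com:25$ - [F]
```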

RE: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-07 Thread Plüm, Rüdiger, Vodafone Group


> -----Original Message-----
> From: Eric Covener []
> Sent: Donnerstag, 7. Juni 2012 19:23
> To: dev@httpd.apache.org
> Subject: Re: post-CVE-2011-4317 (rewrite proxy unintended
> interpolation) rewrite PR's
> 
> On Thu, Jun 7, 2012 at 1:14 PM, Jeff Trawick  wrote:
> > Eric, what was the opt-in exactly?  In what scope would you need to
> > enable it in order to process a CONNECT request?
> 
> e.g. RewriteOptions +"I know I'm running this regex against something
> that's not guaranteed to look like a URL-path, and I'll write a regex
> that carefully matches/captures the input"

Makes sense. +1.

Regards

Rüdiger


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-07 Thread Eric Covener
On Thu, Jun 7, 2012 at 1:14 PM, Jeff Trawick  wrote:
> On Thu, Jun 7, 2012 at 11:55 AM, Joe Orton  wrote:
>> On Wed, Jun 06, 2012 at 09:08:02PM -0400, Jeff Trawick wrote:
>>> Here are some valid requests which fail the 4317 checks:
>>>
>>> CONNECT foo.example.com[:port]
>>> GET http://foo.example.com
>>> GET proxy:http://foo.example.com/    (rewriting something which was
>>> already proxied internally)
>>>
>>> I am leaning towards the likely minority view that it is problematic
>>> to not know what the valid inputs to a ~15 year old module really are,
>>> and we should whitelist a few more patterns such as those above and
>>> see how far it gets us.  Unfortunately this breaks a few users but
>>> they are holding the testcases.
>>
>> Some thoughts:
>>
>> 1) FUD: if we start relaxing those checks again something else is going
>> to break in an unexpected way.
>
> Certainly a valid fear :)
>
>> 2) mod_rewrite's behaviour should match mod_rewrite's documentation.  If
>> mod_rewrite guarantees that the input to the first rule set (in vhost
>> context) is a URL-path, it shouldn't arbitrarily ignore that guarantee
>> for "special" URIs.
>>
>> I like Eric's suggestion of an opt-in RewriteOption.  This will avoid
>> having to iterate yet again if the whitelist is either too broad or too
>> narrow, and can make the security implications (such as they are)
>> explicit.
>
> Doesn't that just mean that the security implications are unknown when
> you want mod_rewrite to process a proxied http request or a CONNECT?
> I.e., you have to turn off the sanity checks in order to use certain
> infrequently used features.
>
> Eric, what was the opt-in exactly?  In what scope would you need to
> enable it in order to process a CONNECT request?

e.g. RewriteOptions +"I know I'm running this regex against something
that's not guaranteed to look like a URL-path, and I'll write a regex
that carefully matches/captures the input"


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-07 Thread Jeff Trawick
On Thu, Jun 7, 2012 at 11:55 AM, Joe Orton  wrote:
> On Wed, Jun 06, 2012 at 09:08:02PM -0400, Jeff Trawick wrote:
>> Here are some valid requests which fail the 4317 checks:
>>
>> CONNECT foo.example.com[:port]
>> GET http://foo.example.com
>> GET proxy:http://foo.example.com/    (rewriting something which was
>> already proxied internally)
>>
>> I am leaning towards the likely minority view that it is problematic
>> to not know what the valid inputs to a ~15 year old module really are,
>> and we should whitelist a few more patterns such as those above and
>> see how far it gets us.  Unfortunately this breaks a few users but
>> they are holding the testcases.
>
> Some thoughts:
>
> 1) FUD: if we start relaxing those checks again something else is going
> to break in an unexpected way.

Certainly a valid fear :)

> 2) mod_rewrite's behaviour should match mod_rewrite's documentation.  If
> mod_rewrite guarantees that the input to the first rule set (in vhost
> context) is a URL-path, it shouldn't arbitrarily ignore that guarantee
> for "special" URIs.
>
> I like Eric's suggestion of an opt-in RewriteOption.  This will avoid
> having to iterate yet again if the whitelist is either too broad or too
> narrow, and can make the security implications (such as they are)
> explicit.

Doesn't that just mean that the security implications are unknown when
you want mod_rewrite to process a proxied http request or a CONNECT?
I.e., you have to turn off the sanity checks in order to use certain
infrequently used features.

Eric, what was the opt-in exactly?  In what scope would you need to
enable it in order to process a CONNECT request?

>
> Regards, Joe

-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-07 Thread Joe Orton
On Wed, Jun 06, 2012 at 09:08:02PM -0400, Jeff Trawick wrote:
> Here are some valid requests which fail the 4317 checks:
> 
> CONNECT foo.example.com[:port]
> GET http://foo.example.com
> GET proxy:http://foo.example.com/    (rewriting something which was
> already proxied internally)
> 
> I am leaning towards the likely minority view that it is problematic
> to not know what the valid inputs to a ~15 year old module really are,
> and we should whitelist a few more patterns such as those above and
> see how far it gets us.  Unfortunately this breaks a few users but
> they are holding the testcases.

Some thoughts:

1) FUD: if we start relaxing those checks again something else is going 
to break in an unexpected way.

2) mod_rewrite's behaviour should match mod_rewrite's documentation.  If 
mod_rewrite guarantees that the input to the first rule set (in vhost 
context) is a URL-path, it shouldn't arbitrarily ignore that guarantee 
for "special" URIs.

I like Eric's suggestion of an opt-in RewriteOption.  This will avoid 
having to iterate yet again if the whitelist is either too broad or too 
narrow, and can make the security implications (such as they are) 
explicit.

Regards, Joe


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-06-06 Thread Jeff Trawick
On Sat, May 26, 2012 at 9:19 AM, Rainer Jung  wrote:
> On 24.05.2012 17:12, Eric Covener wrote:
>>
>> There are a couple of PR's going around about people who were using
>> rewrite to operate on URL's now kicked out of mod_rewrite by default
>> (IIRC at least proxy:blah and CONNECT arg)
>>
>> Should we just add a mod_rewrite directive or RewriteOption that opts
>> in to handling any URL and document the cautions in the directive?  I
>> don't mind doing that code and doc work to skip the new check to
>> unblock people before 2.2.23.  Please comment!
>
>
> I thought the original problem with mod_rewrite existed only for rules with
> the proxy flag. So rules without the proxy flag should always be OK. Right?
> All bugzilla issues I am aware of only use such OK rules. If we would allow
> them, we would fix the problem for most users.

AFAIK the original problem was just for [P].  I don't know if it is
reasonable to let everything else through, on the theory that there's
no telling what can happen with mod_rewrite :)  (But thus far there
has been no telling what existing behavior became broken by NOT
letting everything else through.)

Elsewhere was reported another legacy configuration with [P] which
does not work with the checks added with 4317.  So just limiting the
new check to cases with [P] isn't sufficient.

>
> For rules with the proxy flag I don't know what the "right" solution would
> be. I think the original CVE issue was triggered by interpreting some URL
> prefix as a userinfo (the "@" separated part).
>
> Jeff at some point was also looking at it, the patch attached to PR 52774
> and my suggestion of only restricting rewrite rules with proxy flag set. But
> it seems he also didn't come to a result.

What happened was that I signed up for a handful of courses on Udacity
and Coursera and am just now catching my breath this week :)

Here are some valid requests which fail the 4317 checks:

CONNECT foo.example.com[:port]
GET http://foo.example.com
GET proxy:http://foo.example.com/    (rewriting something which was
already proxied internally)

I am leaning towards the likely minority view that it is problematic
to not know what the valid inputs to a ~15 year old module really are,
and we should whitelist a few more patterns such as those above and
see how far it gets us.  Unfortunately this breaks a few users but
they are holding the testcases.

-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Re: post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-05-26 Thread Rainer Jung

On 24.05.2012 17:12, Eric Covener wrote:

There are a couple of PR's going around about people who were using
rewrite to operate on URL's now kicked out of mod_rewrite by default
(IIRC at least proxy:blah and CONNECT arg)

Should we just add a mod_rewrite directive or RewriteOption that opts
in to handling any URL and document the cautions in the directive?  I
don't mind doing that code and doc work to skip the new check to
unblock people before 2.2.23.  Please comment!


I thought the original problem with mod_rewrite existed only for rules 
with the proxy flag. So rules without the proxy flag should always be 
OK. Right? All bugzilla issues I am aware of only use such OK rules. If 
we allowed them, we would fix the problem for most users.


For rules with the proxy flag I don't know what the "right" solution 
would be. I think the original CVE issue was triggered by interpreting 
some URL prefix as a userinfo (the "@" separated part).


Jeff at some point was also looking at it, the patch attached to PR 
52774 and my suggestion of only restricting rewrite rules with proxy 
flag set. But it seems he also didn't come to a result.


Regards,

Rainer


post-CVE-2011-4317 (rewrite proxy unintended interpolation) rewrite PR's

2012-05-24 Thread Eric Covener
There are a couple of PR's going around about people who were using
rewrite to operate on URL's now kicked out of mod_rewrite by default
(IIRC at least proxy:blah and CONNECT arg)

Should we just add a mod_rewrite directive or RewriteOption that opts
in to handling any URL and document the cautions in the directive?  I
don't mind doing that code and doc work to skip the new check to
unblock people before 2.2.23.  Please comment!


Re: segfault with POST

2010-11-29 Thread William A. Rowe Jr.
On 11/26/2010 6:57 AM, Oden Eriksson wrote:
> Hello.
> 
> We're currently experiencing a strange segfault in Mandriva Cooker (the 
> development branch) with the latest apache-2.2.17, gcc-4.5.1 and all that.
> 
> Compiling apache using "-DDEBUG=1 -DAPR_BUCKET_DEBUG=1 -DAPR_RING_DEBUG=1" or 
> without "-fomit-frame-pointer" makes the segfault go away. This seems to 
> happen on 32bit cooker only.
> 
> More info here:
> 
> https://qa.mandriva.com/show_bug.cgi?id=61384
> 
> A possible related issue in WebDAV:
> 
> https://qa.mandriva.com/show_bug.cgi?id=61655

Sounds like an over-aggressive optimization on gcc's part?


segfault with POST

2010-11-26 Thread Oden Eriksson
Hello.

We're currently experiencing a strange segfault in Mandriva Cooker (the 
development branch) with the latest apache-2.2.17, gcc-4.5.1 and all that.

Compiling apache using "-DDEBUG=1 -DAPR_BUCKET_DEBUG=1 -DAPR_RING_DEBUG=1" or 
without "-fomit-frame-pointer" makes the segfault go away. This seems to 
happen on 32bit cooker only.

More info here:

https://qa.mandriva.com/show_bug.cgi?id=61384

A possible related issue in WebDAV:

https://qa.mandriva.com/show_bug.cgi?id=61655

Cheers.
-- 
Regards // Oden Eriksson
Security team manager - Mandriva
CEO NUX AB


Re: Read post data

2010-03-09 Thread Jeff Trawick
On Tue, Mar 9, 2010 at 7:45 AM, simon simon  wrote:
> Hi,
>  Many thanks for the tip.
>  I have two modules: one has already received the body with
> ap_get_client_block() (I have no source for it) and handles the content;
>  the other one needs to dispatch the original body to some servers.
>
>  So I don't know how I can get the body.

several modules in the 2.2.x distribution show how to read the body
from an input filter; modules/experimental/mod_case_filter_in.c is one
example; mod_deflate.c is another (the list goes on)
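The shape of such an input filter, sketched with illustrative names and
loosely modeled on mod_case_filter_in.c (module boilerplate omitted):

```c
/* Minimal input-filter sketch for observing the request body as the
 * handler reads it.  Filter name and buffering strategy are illustrative. */
#include "httpd.h"
#include "http_config.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t body_tap_in_filter(ap_filter_t *f,
                                       apr_bucket_brigade *bb,
                                       ap_input_mode_t mode,
                                       apr_read_type_e block,
                                       apr_off_t readbytes)
{
    /* Pull the body from the filters below (ultimately HTTP_INPUT). */
    apr_status_t rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
    apr_bucket *b;

    if (rv != APR_SUCCESS) {
        return rv;
    }
    /* Inspect (or copy) each data bucket the handler is about to consume. */
    for (b = APR_BRIGADE_FIRST(bb); b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *data;
        apr_size_t len;
        if (!APR_BUCKET_IS_METADATA(b)
            && apr_bucket_read(b, &data, &len, block) == APR_SUCCESS) {
            /* e.g. buffer the bytes here for later dispatch elsewhere */
        }
    }
    return rv;
}

static void register_hooks(apr_pool_t *p)
{
    ap_register_input_filter("BODY_TAP", body_tap_in_filter, NULL,
                             AP_FTYPE_RESOURCE);
}
```

Because the filter only watches brigades passing through, the handler still
receives the body unchanged, which is exactly what the second module needs.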


Re: Read post data

2010-03-09 Thread Jeff Trawick
On Mon, Mar 8, 2010 at 11:47 PM, simon simon  wrote:
> hi there,
> I am using the ap_setup_client_block() and ap_get_client_block() methods of
> the API to read a POST request. The request body is being read properly, but
> there is another module waiting for these data; it never receives them (it
> also uses ap_setup_client_block() and ap_get_client_block())
>
> Any ideas on how to solve this?

What role are the modules trying to play?

Only one handler can retrieve the body with ap_get_client_block().  n
filters can see the body that some handler retrieves.

(And ap_get_client_block() isn't the only way for a handler to
retrieve the body, but that is beside the point.)
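The handler side of that contract, as a rough sketch of the 2.2-era
ap_get_client_block() idiom (error handling abbreviated):

```c
/* Sketch of the single handler that owns the body via ap_get_client_block();
 * any other module must observe the body from an input filter instead. */
#include "httpd.h"
#include "http_protocol.h"

static int read_body(request_rec *r)
{
    char buf[8192];
    long nread;
    /* Negotiate body reading; dechunk a chunked body transparently. */
    int rc = ap_setup_client_block(r, REQUEST_CHUNKED_DECHUNK);

    if (rc != OK) {
        return rc;
    }
    if (ap_should_client_block(r)) {
        /* Drain the body; each call returns up to sizeof(buf) bytes. */
        while ((nread = ap_get_client_block(r, buf, sizeof(buf))) > 0) {
            /* consume buf[0..nread) */
        }
        if (nread < 0) {
            return HTTP_BAD_REQUEST;
        }
    }
    return OK;
}
```

Once this loop has run, the body is gone from the input stream; a second
handler calling ap_get_client_block() sees nothing, which is the behaviour
simon describes.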


Re: Read post data

2010-03-09 Thread simon simon
Hi,
 Many thanks for the tip.
 I have two modules: one has already received the body with
ap_get_client_block() (I have no source for it) and handles the content;
 the other one needs to dispatch the original body to some servers.

 So I don't know how I can get the body.



2010/3/9 Jeff Trawick 

> On Mon, Mar 8, 2010 at 11:47 PM, simon simon  wrote:
> > hi there,
> > I am using the ap_setup_client_block() and ap_get_client_block() methods
> > of the API to read a POST request. The request body is being read
> > properly, but there is another module waiting for these data; it never
> > receives them (it also uses ap_setup_client_block() and
> > ap_get_client_block())
> >
> > Any ideas on how to solve this?
>
> What role are the modules trying to play?
>
> Only one handler can retrieve the body with ap_get_client_block().  n
> filters can see the body that some handler retrieves.
>
> (And ap_get_client_block() isn't the only way for a handler to
> retrieve the body, but that is beside the point.)
>


Read post data

2010-03-08 Thread simon simon
hi there,
I am using the ap_setup_client_block() and ap_get_client_block() methods of
the API to read a POST request. The request body is being read properly, but
there is another module waiting for these data; it never receives them (it
also uses the ap_setup_client_block() and ap_get_client_block() methods).

Any ideas on how to solve this?


Thanks


Re: POST subrequests via mod_proxy

2010-01-13 Thread Sorin Manolache
On Wed, Jan 13, 2010 at 12:01, Graham Leggett  wrote:
> On 13 Jan 2010, at 12:39 PM, Sorin Manolache wrote:
>
>> Exactly. I thought of the same thing. However, if this "whatever" is a
>> ap_run_sub_req and the requests passes through mod_proxy, mod_proxy
>> does not include the request body for subrequests.
>> ap_proxy_http_request in mod_proxy_http.c contains
>>
>> if (r->main) {
>>  ...
>>  e = apr_bucket_eos_create(input_brigade->bucket_alloc);
>>  APR_BRIGADE_INSERT_TAIL(input_brigade, e);
>>  goto skip_body;
>> }
>>
>> My suggestion was to remove this code from mod_proxy_http.c.
>
> One option you can use is to set r->main to NULL on the subrequest. This
> causes the subrequest to be treated as a main request, which means an
> attempt will be made to read the request body. You need to make sure your
> input filter is in place to provide the request body before you do this, and
> that no attempt is made to read from the connection to the client.

Yes, I am aware of setting r->main to NULL but I think it is too
disruptive. For example, mod_deflate behaves differently if it deals
with a main or a sub-request. Also, some headers are forwarded or not
by mod_proxy depending on whether the request is a main request or a
sub-request.

So I would really ask the developers of mod_proxy to consider removing
the three lines from the if-block in ap_proxy_http_request.

if (r->main) {
 ...
 e = apr_bucket_eos_create(input_brigade->bucket_alloc);
 APR_BRIGADE_INSERT_TAIL(input_brigade, e);
 goto skip_body;
}

Thank you,
Sorin


Re: POST subrequests via mod_proxy

2010-01-13 Thread Graham Leggett

On 13 Jan 2010, at 12:39 PM, Sorin Manolache wrote:


Exactly. I thought of the same thing. However, if this "whatever" is a
ap_run_sub_req and the requests passes through mod_proxy, mod_proxy
does not include the request body for subrequests.
ap_proxy_http_request in mod_proxy_http.c contains

if (r->main) {
 ...
 e = apr_bucket_eos_create(input_brigade->bucket_alloc);
 APR_BRIGADE_INSERT_TAIL(input_brigade, e);
 goto skip_body;
}

My suggestion was to remove this code from mod_proxy_http.c.


One option you can use is to set r->main to NULL on the subrequest.  
This causes the subrequest to be treated as a main request, which  
means an attempt will be made to read the request body. You need to  
make sure your input filter is in place to provide the request body  
before you do this, and that no attempt is made to read from the  
connection to the client.


If you want to read the output of the subrequest and not send it to  
the client, add an output filter that captures the output without  
passing it down the filter stack. You can then do with the response  
what you will, and continue with the real main request when you are  
ready.


Regards,
Graham
--

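Graham's capturing output filter can be sketched like this; the names and
context handling are illustrative assumptions:

```c
/* Sketch of the capture idea: an output filter attached to the subrequest
 * that saves the response brigade into a context instead of passing it
 * down the stack toward the client. */
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

typedef struct {
    apr_bucket_brigade *saved;  /* accumulated subrequest output */
} capture_ctx;

static apr_status_t capture_out_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    capture_ctx *ctx = f->ctx;

    if (ctx->saved == NULL) {
        ctx->saved = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    }
    /* Move the buckets aside instead of sending them to the client; the
     * caller inspects ctx->saved once ap_run_sub_req() returns, then
     * continues with the real main request. */
    APR_BRIGADE_CONCAT(ctx->saved, bb);
    return APR_SUCCESS;
}
```

Note the filter deliberately never calls ap_pass_brigade(), so nothing from
the subrequest reaches the client connection.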


Re: POST subrequests via mod_proxy

2010-01-13 Thread Sorin Manolache
On Wed, Jan 13, 2010 at 11:21, Graham Leggett  wrote:
> On 13 Jan 2010, at 12:07 PM, Sorin Manolache wrote:
>
>> I understand. However, I don't want to make a subrequest with the body
>> of the main request. I want to be able to make a subrequest with a
>> totally new request body.
>>
>> For example:
>>
>> The client sends to my server:
>>
>> POST /server_url
>> (main) request body: body_from_client_to_server
>>
>> My server would make a subrequest to a backend:
>>
>> POST /backend_url
>> subrequest body: body_from_server_to_backend
>>
>> Or even this scenario:
>>
>> Client to my server:
>> GET /server_url
>>
>> My server to a backend:
>>
>> POST /backend_url
>> subrequest body: blabla
>
> In this case, you want to create a simple input filter, which puts your
> intended request body into a brigade, and then passes the brigade
> (containing your body) to whatever is making the request.

Exactly. I thought of the same thing. However, if this "whatever" is a
ap_run_sub_req and the requests passes through mod_proxy, mod_proxy
does not include the request body for subrequests.
ap_proxy_http_request in mod_proxy_http.c contains

if (r->main) {
  ...
  e = apr_bucket_eos_create(input_brigade->bucket_alloc);
  APR_BRIGADE_INSERT_TAIL(input_brigade, e);
  goto skip_body;
}

My suggestion was to remove this code from mod_proxy_http.c.

Sorin

