Re: [VOTE] Release httpd-2.4.51-rc1 as httpd-2.4.51

2021-10-07 Thread Mark J. Cox
+1 on Fedora 34

On 2021/10/07 13:17:36, "ste...@eissing.org"  wrote: 
> Hi all,
> 
> due to security weaknesses found in our 2.4.50 release, the security team
> feels it is necessary to do a new release on very short notice. We will skip
> the usual 3 day voting period and close the vote once we feel comfortable
> with our testing.
> 
> Please find below the proposed release tarball and signatures:
> 
> https://dist.apache.org/repos/dist/dev/httpd/
> 
> I would like to call a VOTE over the next few days^h^h^h^hhours to release
> this candidate tarball httpd-2.4.51-rc1 as 2.4.51:
> [ ] +1: It's not just good, it's hopefully good enough!
> [ ] +0: Let's have a talk.
> [ ] -1: There's trouble in paradise. Here's what's wrong.
> 
> The computed digests of the tarball up for vote are:
> sha1: 516128e5acb7311e6e4d32d600664deb0d12e61f *httpd-2.4.51-rc1.tar.gz
> sha256: c2cedb0b47666bea633b44d5b3a2ebf3c466e0506955fbc3012a5a9b078ca8b4 
> *httpd-2.4.51-rc1.tar.gz
> sha512: 
> 507fd2bbc420e8a1f0a90737d253f1aa31000a948f7a840fdd4797a78f7a4f1bd39250b33087485213a3bed4d11221e98eabfaf4ff17c7d0380236f8a52ee157
>  *httpd-2.4.51-rc1.tar.gz
> 
> The SVN candidate source is found at tags/candidate-2.4.51-rc1.
> 
> Kind Regards,
> Stefan
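Voters verify the posted digests against their downloaded tarball before casting a +1. A minimal sketch of that check in Python (the file name and digest below are stand-ins; a real check would use the downloaded httpd-2.4.51-rc1.tar.gz and the sha256 from the mail above):

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream the file in 1 MiB chunks so a large tarball need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Compare against a published digest, ignoring case and stray whitespace."""
    return sha256_of(path) == expected_hex.strip().lower()

# Demo with a stand-in file, not the real release artifact.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"stand-in for the release tarball")
    demo = f.name
good = verify(demo, sha256_of(demo))   # matching digest accepted
bad = verify(demo, "0" * 64)           # wrong digest rejected
os.unlink(demo)
assert good and not bad
```

The same streaming pattern works for the sha1 and sha512 values by swapping the `hashlib` constructor; checking the PGP signature additionally requires the committer's key from the project KEYS file.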


Re: [httpd-site] branch main updated: publishing release httpd-2.4.49

2021-09-16 Thread Mark J Cox
Hi; at the moment the ASF customisation to the tool is tracked in my github
fork along with issues.  There's no specific place to discuss it other than
secur...@apache.org.  That's all just because there's only me having worked
on it.

There are going to be some big changes needed to the tool and running
instance in the coming months to support the new CVE Project v5.0 JSON
schema, as that is required for more of the future CVE project automation
(such as live submission to their database), so that will likely take up
all the time I can personally spend updating the tool in the near future.

Issues:
https://github.com/iamamoose/Vulnogram/issues

ASF changes from the upstream Vulnogram code:
https://github.com/Vulnogram/Vulnogram/compare/master...iamamoose:asfmaster

Regards, Mark J Cox
ASF Security


On Thu, Sep 16, 2021 at 4:57 PM Ruediger Pluem  wrote:

>
>
> On 9/16/21 3:16 PM, Eric Covener wrote:
> > On Thu, Sep 16, 2021 at 9:07 AM ste...@eissing.org 
> wrote:
> >>
> >>
> >>
> >>> Am 16.09.2021 um 15:01 schrieb Ruediger Pluem :
> >>>
> >>>
> >>>
> >>> On 9/16/21 2:59 PM, ste...@eissing.org wrote:
> >>>> And thanks, Rüdiger, for noticing and the quick fixes.\o/
> >>>
> >>> And thanks to you for all the release and scripting work.
> >>
> >> I think we should request some download url feature from the
> cveprocess, so that we can automate that part as well. The timeline entry
> should be added automatically. The "affected_by" we can at least check and
> report.
> >
> > I'm not sure we have Mark watching here, best to take it to the two
>
> I fear that as well, but I wanted to avoid cross-posts on dev@ and security@
> at the same time due to their different visibility.
> In general I think improvements to the CVE tool can be discussed in
> public, but I am not sure what the correct venue aka list is
> for this topic.
> @Mark: Can you give us a hint what is the correct forum to talk about
> improvements of the CVE tool?
>
> > security lists.
> >
>
> Regards
>
> Rüdiger
>


Re: Changing the httpd security process

2020-08-17 Thread Mark J. Cox
> > This roughly reverts the httpd process to what we used prior to adopting 
> > the Tomcat-esque policy for the whole ASF.  We would have to document 
> > this and possibly need it approved by the ASF security team.
> 
> Not sure if we need to have it approved, but at least we should discuss with 
> the ASF security team.

https://s.apache.org/cveprocess allows projects to deviate from the default 
policy with "review" from the ASF security team.  So once you have agreement, 
have the PMC present the proposed policy.  

This is not an uncommon plan; outside the ASF, projects such as OpenSSL have 
similar policies where lower-severity issues (low/moderate) are committed as 
security fixes prior to and independently of releases.  Dealing with security 
issues in private is a pain, both in the process and in getting the right fix 
with limited reviewers.  It's worth that pain when the issue is an actual risk 
to users, less so for the low-risk issues.

Mark


Re: hardening mod_rewrite and mod_proxy like mod_jk with servletnormalize

2020-06-11 Thread Mark Thomas
On 11/06/2020 07:51, jean-frederic clere wrote:
> On 10/06/2020 11:53, Ruediger Pluem wrote:
>>
>>
>> On 6/9/20 12:05 PM, jean-frederic clere wrote:
>>> Hi,
>>>
>>> Basically it adds servletnormalizecheck to mod_proxy for
>>> ProxyPass/ProxyPassMatch and mod_rewrite when using P
>>> I have tested the following uses:
>>> #ProxyPass  /docs ajp://localhost:8009/docs secret=%A1b2!@
>>> servletnormalizecheck
>>>
>>> #ProxyPassMatch  "^/docs(.*)$" "ajp://localhost:8009/docs$1"
>>> secret=%A1b2!@ servletnormalizecheck
>>>
>>> #RewriteEngine On
>>> #RewriteRule "^/docs(.*)$" "ajp://localhost:8009/docs$1" [P,SNC]
>>> #
>>> #ProxySet connectiontimeout=5 timeout=30 secret=%A1b2!@
>>> #
>>>
>>> #
>>> #  ProxyPass  ajp://localhost:8009/docs secret=%A1b2!@
>>> servletnormalizecheck
>>> #
>>>
>>> What is not supported is
>>> curl -v --path-as-is
>>> "http://localhost:8000/docs/..;foo=bar/;foo=bar/test/index.jsp";
>>>
>>> that could be remapped to
>>> ProxyPass  /test ajp://localhost:8009/test secret=%A1b2!@
>>> servletnormalizecheck
>>> or a 
>>>
>>> Comments?
>>
>> I understood from Mark that the request you do above with curl should
>> not be denied but just mapped to /test.
>> But rethinking that, it becomes real fun: For mapping we should use
>> the URI stripped off path parameters and then having done the
>> shrinking operation (servlet normalized) but we should use the
>> original URI having done the shrinking operation with path
>> parameters to send to the backend. That might work for a simple prefix
>> matching, but it seems to be very difficult for regular
>> expression scenarios where you might use complex captures from the
>> matching to build the result. But if the matching was done
>> against the servlet-normalized URI, the captures might be different
>> from the ones you would have got when matching against the
>> non-normalized URI. So I am a little bit lost here.

I can see how this gets complicated for regular expression scenarios.

Since the servlet specification doesn't have the concept of regular
expression mapping, I don't think the rationale for servletnormalize
applies in that case. There is no expectation of how the mapping will
occur from a servlet perspective so the httpd behaviour cannot be
unexpected.

Coming from a servlet perspective I have no view on what the 'correct'
behaviour is in this case. I'll happily support whatever the httpd
community thinks is best.

>> What if we just have an option on virtual host base to drop path
>> parameters of the following kind
>>
>> s#/([.]{0,2})(;[^/]*)/#/$1/#g
>>
>> do the usual shrinking operation afterwards and just process them
>> afterwards as usual.
> 
> I think it makes sense to have it there but separated from the
> servletnormalizecheck because that changes the whole  mapping
> So I will add something like MergeSlashes which will map
> http://localhost:8000/docs/..;foo=bar/;foo=bar/test/index.jsp
> to /test
> And arrange the proxy so that /docs/..;foo=bar/;foo=bar/test/index.jsp
> is sent to the back-end.

That sounds good to me. That is the expected mapping from a servlet
perspective.

Thanks for all your efforts on this.

Mark
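The mapping jean-frederic describes above, drop per-segment path parameters and then apply the usual shrinking, can be sketched as follows. This is an illustrative approximation of servlet-style normalization, not the actual mod_proxy code:

```python
def servlet_normalize(path):
    """Strip ';name=value' path parameters from each segment, then resolve
    '.'/'..' and merge duplicate slashes, roughly as a servlet container
    normalizes a request URI before mapping it."""
    out = []
    for seg in path.split("/"):
        seg = seg.split(";", 1)[0]   # '..;foo=bar' -> '..', ';foo=bar' -> ''
        if seg in ("", "."):
            continue                 # merged slash or same-directory segment
        if seg == "..":
            if out:
                out.pop()            # climb one level
            continue
        out.append(seg)
    return "/" + "/".join(out)

# The curl example from the thread maps under /test, not /docs:
assert servlet_normalize("/docs/..;foo=bar/;foo=bar/test/index.jsp") == "/test/index.jsp"
```

This also illustrates Ruediger's concern: regex captures taken against the normalized form can differ from captures against the raw URI, which is why the thread settles on using the normalized URI only for mapping while sending the original (shrunk) URI to the backend.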


Re: Cloudflare, Google Chrome, and Firefox Add HTTP/3 Support

2019-09-27 Thread Mark Blackman


> On 26 Sep 2019, at 18:54, Alex Hautequest  wrote:
> 
> https://news.slashdot.org/story/19/09/26/1710239/cloudflare-google-chrome-and-firefox-add-http3-support
> 
> With that, the obvious question: what about Apache?

What’s the incentive to add it? 


- Mark


RE: release?

2019-07-30 Thread Mark Blackman
Hi,

Is this release likely to be ready before August 10? I am guessing "no" at this 
point, but wanted to get an idea early.

Cheers,
Mark

-Original Message-
From: Rainer Jung [mailto:rainer.j...@kippdata.de]
Sent: 20 July 2019 10:44
To: dev@httpd.apache.org
Subject: Re: release?

Am 20.07.2019 um 10:38 schrieb Marion & Christophe JAILLET:
> Hi,
>
> PR60757 and corresponding r1853560 could be a good candidate for backport.
>
> I don't have a configuration for testing so I won't propose it myself
> for backport, but the patch looks simple.

I have added this one (mod_proxy_hcheck in BalancerMember) and two other ones 
("Mute frequent debug message in mod_proxy_hcheck" and "bytraffic needs 
byrequests") to STATUS right now.

Regards,

Rainer

> Le 18/07/2019 à 16:06, Stefan Eissing a écrit :
>> It would be great if we could make a release this month. There are
>> several fixes and improvements already backported and a few
>> outstanding issues that need a vote or two.
>>
>> Please have a look if you find the time. I think Daniel expressed
>> willingness to RM this? That'd be great!
>>
>> Cheers, Stefan




Re: http workshop

2019-02-14 Thread Mark Thomas
On 14/02/2019 19:52, William A Rowe Jr wrote:
> On Mon, Jan 28, 2019 at 9:22 AM Stefan Eissing 
> wrote:



>> The HTTP WS organisers expressed the wish to have someone from "Apache"
>> present. Anyone interested? Could also be someone from another HTTP related
>> Apache project, of course.

It appears that the deadline to submit a statement of interest in
attending was a month ago.

Has it been extended?

Mark


Re: Fwd: [Bug 53579] httpd looping writing on a pipe and unresponsive (httpd )

2019-01-06 Thread Mark Thomas
Hi all,

Your project's Bugzilla database suffered some minor abuse earlier today
when a new user - ithanr...@gmail.com - started removing entries from CC
lists and replacing them with just themselves.

This is just a quick note to let you know that:

- The idiot concerned has had their account disabled.
- They have been removed from any CC list they added themselves to
- Any CC list they edited has been restored to its original value.

The corrective action has all been performed directly in the database
partly because it was a lot easier than going via the UI and partly so
your lists aren't spammed as each of the CC's was restored.

Mark


On 06/01/2019 15:23, Mark Thomas wrote:
> On 06/01/2019 13:28, Eric Covener wrote:
>> Some kind of weird abusive behavior this morning from
>> ithanr...@gmail.com hitting many bugs.
> 
> Seen it on Tomcat as well. I'm on it.
> 
> Mark
> 
> 
>>
>> -- Forwarded message -
>> From: 
>> Date: Sun, Jan 6, 2019 at 8:24 AM
>> Subject: [Bug 53579] httpd looping writing on a pipe and unresponsive
>> (httpd )
>> To: 
>>
>>
>> https://bz.apache.org/bugzilla/show_bug.cgi?id=53579
>>
>> ithan  changed:
>>
>>What|Removed |Added
>> 
>>  CC|ar...@maven.pl, |ithanr...@gmail.com
>>|loic.etienne@tech.swisssign |
>>|.com, mahud...@gmail.com,   |
>>|mark_a_ev...@dell.com,  |
>>|santhoshmukka...@gmail.com, |
>>|stix...@gmail.com,  |
>>|szg0...@freemail.hu |
>>
>> --
>> You are receiving this mail because:
>> You are the assignee for the bug.
>> -
>> To unsubscribe, e-mail: bugs-unsubscr...@httpd.apache.org
>> For additional commands, e-mail: bugs-h...@httpd.apache.org
>>
>>
>>
> 



Re: Load balancing and load determination

2018-10-30 Thread Mark Blackman



> On 30 Oct 2018, at 12:53, Jim Jagielski  wrote:
> 
> As some of you know, one of my passions and area of focus is
> on the use of Apache httpd as a reverse proxy and, as such, load
> balancing, failover, etc are of vital interest to me.
> 
> One topic which I have mulling over, off and on, has been the
> idea of some sort of universal load number, that could be used
> and agreed upon by web servers. Right now, the reverse proxy
> "guesses" the load on the backend servers which is OK, and
> works well enough, but it would be great if it actually "knew"
> the current loads on those servers. I already have code that
> shares basic architectural info, such as number of CPUs, available
> memory, loadavg, etc which can help, of course, but again, all
> this info can be used to *infer* the current status of those backend
> servers; it doesn't really provide what the current load actually
> *is*.
> 
> So I was thinking maybe some sort of small, simple and "fast"
> benchmark which could be run by the backends as part of their
> "status" update to the front-end reverse proxy server... something
> that shows general capability at that point in time, like Hanoi or
> something similar. Or maybe some hash function. Some simple code
> that could be used to create that "universal" load number.
> 
> Thoughts? Ideas? Comments? Suggestions? :)

What problem are you trying to solve? Broadly, I think the best you can do is 
ask the backends to include a response header indicating their current appetite 
for more connections.

- Mark
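Mark's suggestion, that backends self-report willingness rather than the proxy inferring load, could look roughly like the sketch below on the proxy side. The header name `X-Load-Appetite` and the 0..100 scale are invented for illustration; nothing like this is standardized or implemented in mod_proxy:

```python
def pick_backend(appetite):
    """appetite: backend name -> last value of a hypothetical
    X-Load-Appetite response header, 0..100, where higher means
    'send me more work'. A backend advertising 0 opts out entirely."""
    willing = {b: v for b, v in appetite.items() if v > 0}
    if not willing:
        return None  # all backends saturated; caller fails over or queues
    return max(willing, key=willing.get)

# Values as last seen on each backend's responses:
assert pick_backend({"app1": 80, "app2": 20, "app3": 0}) == "app1"
assert pick_backend({"app3": 0}) is None
```

The appeal of a self-reported number is that it sidesteps Jim's "universal benchmark" problem: each backend translates its own CPU, memory, and queue state into a single scalar it is willing to stand behind.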



Re: t/modules/buffer.t failing in 2.4.36, LWP bug [Was: [VOTE] Release httpd-2.4.36]

2018-10-14 Thread Mark Blackman


> On 14 Oct 2018, at 12:33, Rainer Jung  wrote:
> 
> Am 13.10.2018 um 11:46 schrieb Rainer Jung:
>> Am 11.10.2018 um 20:55 schrieb Ruediger Pluem:
>>> 
>>> 
>>> On 10/11/2018 08:10 PM, Christophe JAILLET wrote:
>>>> No issue on my Ubuntu 18.04 VM.
>>>> 
>>>> On what configuration are you running your tests, Rüdiger? macOS, just 
>>>> like Jim?
>>> 
>>> Centos 7.5 64 Bit
>>> 
>>> Regards
>>> 
>>> Rüdiger
>> The test fails for me as well for 2.4.36 on SLES12. Small bodies are OK, 
>> large not. The limit is somewhere between 1.3 and 1.5 MB, not always the 
>> same. The test hangs there until mod_reqtimeout times out after a minute, 
>> complaining that it could not read more data from the client. If I reduce 
>> the multiplicator 100 to eg. 20 it always passes.
>> If I start the test server using "t/TEST -start-httpd" and then use curl to 
>> POST data, I can even POST much bigger data and get the correct result back. 
>> I use
>>   curl -v --data-binary @BIGFILE http://localhost:8529/apache/buffer_in/ > 
>> response-body
>> So I assume it is a problem of interaction between the server reading the 
>> POST body and the client sending it.
>> My test framework was freshly assembled recently, so lots of current modules.
>> The setup is based on OpenSSL 1.1.1 in the server and in the test framework, 
>> but the actual test runs over http, so I don't expect any OpenSSL related 
>> reason for the failure.
> 
> I did some more tests including using LWP directly and sniffing the packets 
> on the network plus with mod_dumpio and also doing truss / strace.
> 
> I can reproduce even when sending using LWP directly or just the POST binary 
> coming with LWP. I can not reproduce with curl.
> 
> With mod_dumpio and in a network sniff plus truss it looks like the client 
> simply stops sending once it got the first response bytes. LWP seems to 
> select the socket FD for read and write. As long as only write gets 
> signalled, it happily sends data. Once it gets write plus read signalled, it 
> switches over to read and no longer checks for write. Since our server side 
> implementation is streaming and starts to send the reflected bytes right 
> away, this LWP behavior breaks the request.

Hmm, it almost seems like that test/reflector module doesn’t reflect the 
protocol definition though, https://tools.ietf.org/html/rfc7231#section-1
"A server listens on a connection for a request,
   parses each message received, interprets the message semantics in
   relation to the identified request target, and responds to that
   request with one or more response messages”
I would interpret that “message received” as meaning the server is expected to 
wait until the entire request is received, aside from the case of “Expect: 
100-continue”, and even that alludes to waiting.
https://tools.ietf.org/html/rfc7231#section-6.2.1
"The server intends to send a final response after the request has been fully 
received and acted upon."
What do you think?
- Mark
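The LWP behaviour Rainer describes, switching to read-only at the first response byte, deadlocks against a streaming reflector once the request body exceeds the kernel socket buffers: the server blocks writing the echo, so it stops reading, so the client's remaining body never drains. A correct client keeps servicing both directions. A minimal sketch of that select-based pattern (this is an illustration of the principle, not LWP's or the test framework's actual code):

```python
import os
import select
import socket
import threading

def echo_peer(conn, total):
    # Streaming "reflector": echo bytes back as soon as they arrive,
    # like the buffer_in test handler described in the thread.
    seen = 0
    while seen < total:
        data = conn.recv(65536)
        if not data:
            break
        seen += len(data)
        conn.sendall(data)
    conn.close()

def stream(sock, payload):
    # Full-duplex client: keep reading while there is still data to write,
    # instead of abandoning the write direction at the first readable byte.
    sock.setblocking(False)
    to_send = memoryview(payload)
    received = bytearray()
    while len(received) < len(payload):
        wlist = [sock] if len(to_send) else []
        readable, writable, _ = select.select([sock], wlist, [])
        if readable:
            chunk = sock.recv(65536)
            if not chunk:
                break
            received += chunk
        if writable:
            sent = sock.send(to_send[:65536])
            to_send = to_send[sent:]
    return bytes(received)

a, b = socket.socketpair()
payload = os.urandom(1_500_000)  # ~1.5 MB, around where the hangs were seen
t = threading.Thread(target=echo_peer, args=(b, len(payload)))
t.start()
echoed = stream(a, payload)
t.join()
a.close()
assert echoed == payload
```

Dropping the `if readable` branch until `to_send` is empty reproduces the reported hang on most systems, which matches the observation that small bodies (fitting in socket buffers) passed while ~1.3-1.5 MB bodies stalled.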





Re: NOTICE: Intent to T&R 2.4.36

2018-10-10 Thread Mark Blackman



> On 10 Oct 2018, at 21:04, Jim Jagielski  wrote:
> 
>> 
>> Does the TLSv1.3 support need to be production ready?
>> 
>> TLSv1.3 is presumably an opt-in feature and as long as it doesn’t endanger 
>> existing behaviours, I would have assumed it’s relatively safe to release 
>> with caveats in the docs. 
>> Of course, once there’s more take-up of TLSv1.3, then the test suite needs 
>> to be useful. Getting real-world feedback about something completely new 
>> that doesn’t endanger existing behaviours outside of TLSv1.3 is probably 
>> worthwhile.
> 
> The issue is that such a major feature enhancement touches a lot of code. 
> That can cause regressions.
> 
> Sometimes, some people try to reduce and restrict development and new 
> features using that as an argument. I, and numerous others, have consistently 
> disagreed with that as a convincing argument against adding stuff to 2.4.x. 
> In this particular situation, the "usual suspect(s)" were actually very 
> gung-ho on release, despite this being the exact kind of situation they would 
> normally balk against. I was noting the discrepancy and wondering the 
> reasoning…

Fair enough, I hadn’t checked to see how invasive the change was. I had assumed 
a lot of "#ifdef TLSV13” protecting current behaviours.

- Mark

Re: NOTICE: Intent to T&R 2.4.36

2018-10-10 Thread Mark Blackman


> On 10 Oct 2018, at 20:28, Jim Jagielski  wrote:
> 
> 
> 
>> On Oct 10, 2018, at 3:01 PM, William A Rowe Jr > <mailto:wr...@rowe-clan.net>> wrote:
>> 
>> On Wed, Oct 10, 2018 at 1:45 PM Jim Jagielski > <mailto:j...@jagunet.com>> wrote:
>> I thought the whole intent for a quick 2.4.36 was for TLSv1.3 support.
>> 
>> If that's not ready for prime time, then why a release??
>> 
>> AIUI, it isn't that httpd isn't ready for release, or even httpd-test 
>> framework.
>> Until all the upstream CPAN modules behave reasonably with openssl 1.1.1
>> we will continue to see odd test results.
> 
> The question is How Comfortable Are We That TLSv1.3 Support Is Production 
> Ready?
> 
> This release seems very, very rushed to me. It seems strange that for someone 
> who balks against releasing s/w that hasn't been sufficiently tested, or 
> could cause regressions, and that the sole reason for this particular release 
> is TLSv1.3 support which seems insufficiently tested, you are 
> uncharacteristically cool with all this.

Does the TLSv1.3 support need to be production ready?

TLSv1.3 is presumably an opt-in feature and as long as it doesn’t endanger 
existing behaviours, I would have assumed it’s relatively safe to release with 
caveats in the docs. 
Of course, once there’s more take-up of TLSv1.3, then the test suite needs to 
be useful. Getting real-world feedback about something completely new that 
doesn’t endanger existing behaviours outside of TLSv1.3 is probably worthwhile.

- Mark



Re: slotmem + balancer

2018-05-12 Thread Mark Blackman

> On 8 May 2018, at 18:19, Jim Jagielski  wrote:
> 
> I am under the impression is that we should likely restore mod_slotmem_shm
> back to its "orig" condition, either:
> 
>o http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1822341
>o http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/slotmem/mod_slotmem_shm.c?view=markup&pathrev=1782069
> 
> and try to rework all of this. I fear that all the subsequent work has really 
> made this module extremely fragile. We need, IMO, a very minimal fix for PR 
> 62044
> 
> Just for the heck of it, didn't r1822341 actually *FIX* the PR?


Hi,


To follow up, as the reporter of 62044: the original problem was a segmentation 
fault due to an entirely different 3rd party vendor module, which Apache 2.4.32 
magically fixed, no idea how. However, the segfaults meant that the SHMs and/or 
SHM placeholder files weren't getting correctly cleaned up on restarts (in 
2.4.29). I think it is important for httpd to handle the segfault case well 
because 3rd party modules can cause problems that httpd can’t anticipate. 
Ultimately, httpd is creating a bunch of persistent external state that it 
should make an effort to deal with cleanly when httpd stops unexpectedly and is 
subsequently restarted.

We restart/reload Apache frequently enough that preserving balancer state is 
useful but not critical.

I think you will find it difficult to re-work effectively unless you can 
identify representative test cases possibly including a segfault.

For me the most important characteristics of the fix were (a) to more 
accurately identify genuine virtual host changes (rather than simple line 
number shifts) that might invalidate balancer state, and (b) at least in some 
cases, to pick up existing SHMs left over from the last httpd start. 

- Mark

Re: A proposal...

2018-04-23 Thread Mark Blackman


> On 23 Apr 2018, at 19:17, Christophe Jaillet  
> wrote:
> 
> Le 23/04/2018 à 16:00, Jim Jagielski a écrit :
>> It seems that, IMO, if there was not so much concern about "regressions" in 
>> releases, this whole revisit-versioning debate would not have come up. This 
>> implies, to me at least, that the root cause (as I've said before) appears 
>> to be one related to QA and testing more than anything. Unless we address 
>> this, then nothing else really matters.
>> We have a test framework. The questions are:
>>  1. Are we using it?
>>  2. Are we using it sufficiently well?
>>  3. If not, what can we do to improve that?
>>  4. Can we supplement/replace it w/ other frameworks?
>> It does seem to me that each time we patch something, there should be a test 
>> added or extended which covers that bug. We have gotten lax in that. Same 
>> for features. And the more substantial the change (ie, the more core code it 
>> touches, or the more it refactors something), the more we should envision 
>> what tests can be in place which ensure nothing breaks.
>> In other words: nothing backported unless it also involves some changes to 
>> the Perl test framework or some pretty convincing reasons why it's not 
>> required.
> 
> Hi,
> +1000 on my side for more tests.
> 
> But, IMHO, the perl framework is complex to understand for most of us.

Do you believe the Perl element is contributing to the complexity? I’d say Perl 
is perfect for this case in general, although I would have to look at it first 
to confirm.

I certainly believe adequate testing is a bigger and more important problem to 
solve than versioning policies, although some versioning policies might make it 
simpler to allow enough time for decent testing to happen. I personally have a 
stronger incentive to help with testing, than I do with versioning policies.

- Mark

Re: Start using RCs (Was: Re: So... when should we do 2.4.34? [WAS: Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)])

2018-04-20 Thread Mark Blackman


> On 20 Apr 2018, at 01:39, Daniel Ruggeri  wrote:
> 
> I'm not sure where in the conversation to add this, but I do want to
> point out a mechanical concern.
> 
> 
> If we end up with API and feature freeze on branch 2.4, then we'd expect
> to roll 2.6. Soon enough, we'll hit a situation where 2.6 (as a release
> branch) can't get a feature or the next great thing because of a
> required API incompatible change. We would then kick out 2.8, 2.10 and
> so on and so forth. This seems like it would satisfy both the
> keep-the-stuff-stable as well as the give-users-new-stuff "sides".
> 
> 
> Mechanically, this can become tiresome for a volunteer as we work
> through the STATUS ceremony for each of the branches. I remember that
> being less than enjoyable when 2.0, 2.2 and 2.4 were all release
> branches. I'm fearful of a future where we may have five branches and a
> bugfix applicable to all (or worse... a security fix that would dictate
> all should be released/disclosed at the same time).

Presumably most/all of that toil can be automated away, as you have started 
doing, so that human intervention is only required where human judgement is 
actually needed?  Does httpd have some massive CI/CD pipeline in the 
background I don’t see, or could it do with one? 

- Mark





Re: Revisit Versioning? (Was: 2.4.3x regression w/SSL vhost configs)

2018-04-19 Thread Mark Blackman


> On 19 Apr 2018, at 21:35, David Zuelke  wrote:
> 
> On Thu, Apr 19, 2018 at 8:25 PM, Jim Jagielski  wrote:
>> 
>> 
>>> On Apr 19, 2018, at 11:55 AM, David Zuelke  wrote:
>>> 
>>> 
>>> I hate to break this to you, and I do not want to discredit the
>>> amazing work all the contributors here are doing, but httpd 2.4 is of
>>> miserable, miserable quality when it comes to breaks and regressions.
>>> 
>> 
>> Gee Thanks! That is an amazing compliment to be sure. I have
>> NO idea how ANYONE could take that in any way as discrediting
>> the work being done.
>> 
>> Sarcasm aside, could we do better? Yes. Can we do better? Yes.
>> Should we do better? Yes. Will we do better? Yes.
>> 
>> BTW, you DID see how h2 actually came INTO httpd, didn't you??
> 
> Of course, but that's exactly my point. It was introduced not in
> 2.4.0, but in 2.4.17. Five "H2…" config directives are available in
> 2.4.18+ only, one in 2.4.19+, and three in 2.4.24+.
> 
> I'm not saying no directives should ever be added in point releases or
> anything, but the constant backporting of *features* to 2.4 has
> contributed to the relatively high number of regressions, and to a
> lack of progress on 2.6/3.0, because, well, if anything can be put
> into 2.4.next, why bother?
> 
> David

What’s the rule for *features*?

- Mark

Re: "Most Popular Web Server?"

2018-04-18 Thread Mark Blackman


> On 18 Apr 2018, at 17:29, William A Rowe Jr  wrote:
> 
> 
> Many will always carry a deep fondness or appreciation for Apache
> httpd; how much traffic it actually carries in future years is another
> question entirely, and has everything to do with the questions we
> should have solved some time ago, and aught to solve now. Better late
> than never.

Is most popular the right thing to aim for? I would advise continuing to trade 
on Apache’s current strengths (for me: versatility, documentation, and relative 
stability) and let the chips fall where they may. It’s an open source 
project with a massive first-mover advantage and no investors to please. Just 
do the right thing, stay visible and the rest will sort itself out.

Corporates are pretty wedded to Apache due to 3rd party module support.

- Mark



Re: [Bug 62044] shared memory segments are not found in global list, but appear to exist in kernel.

2018-02-02 Thread Mark Blackman


> On 2 Feb 2018, at 14:19, Jim Jagielski  wrote:
> 
> To be honest, I don't think we ever envisioned an actual environ
> where the config files change every hour and the server gracefully
> restarted... I think our working assumptions have been that actual
> config file changes are "rare", hence the number of modules that
> allow for "on-the-fly" reconfiguration which avoid the need for
> restarts.
> 
> So this is a nice "edge case"
> 

Think mass hosting of 1+ reverse-proxy front-ends all in the same Apache 
instance, with self-service updates to configs as well as a staging 
environment. The 24-hour cycle is like this:

1am: Full stop (SIGTERM) and start of Apache with all configurations, primarily 
to permit log file rotation.

Then, on the hour, any configuration changes requested will be made live by 
auto-generation of a giant 200k+ configuration then a HUP (not a USR1) signal 
to keep the same parent, but a bunch of fresh children. As these are mostly 
reverse proxies, we generate thousands of balancer and balancermember 
directives per configuration.

In the background, once a minute, a background process is always checking for 
responses and forcibly restarting Apache (SIGTERM then SIGKILL if necessary) if 
it doesn’t respond.

Finally, bear in mind that line number changes can occur merely because a new 
virtualhost was added ahead of a given virtualhost, so some kind of tracking 
UUID for a virtualhost based on whatever non-line-number properties is probably 
useful.

- Mark
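Mark's last point, keying balancer state on properties of the vhost rather than its definition line number, could look something like the sketch below. The property set hashed here is illustrative only; it is not what mod_proxy_balancer actually uses (the thread elsewhere notes the real id includes `s->defn_line_number`):

```python
import hashlib

def vhost_shm_tag(server_name, listen_addrs, balancer_name):
    """Derive a stable SHM tag from properties identifying the vhost itself,
    so inserting an unrelated vhost above it (which only shifts line numbers)
    does not orphan the balancer's shared memory on the next restart."""
    key = "|".join([server_name, ",".join(sorted(listen_addrs)), balancer_name])
    return hashlib.sha1(key.encode("utf-8")).hexdigest()[:16]

# Same vhost, regardless of where it appears in the generated config -> same tag:
t1 = vhost_shm_tag("shop.example.com", ["*:443"], "mycluster")
t2 = vhost_shm_tag("shop.example.com", ["*:443"], "mycluster")
assert t1 == t2
# A genuinely different vhost -> different tag:
assert vhost_shm_tag("blog.example.com", ["*:443"], "mycluster") != t1
```

With a tag like this, regenerating a 200k-line config to add one new vhost would leave every other balancer's SHM name, and therefore its state, untouched.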




Re: [Bug 62044] shared memory segments are not found in global list, but appear to exist in kernel.

2018-02-01 Thread Mark Blackman


> On 1 Feb 2018, at 16:27, Yann Ylavic  wrote:
> 
>> On Thu, Feb 1, 2018 at 5:15 PM, Yann Ylavic  wrote:
>> On Thu, Feb 1, 2018 at 4:32 PM, Mark Blackman  wrote:>
>> 
>>> SHM clean-up is the key here and any patch that doesn’t contribute to
>>> that has no immediate value for me.
>> 
>> What you may want to try is remove "s->defn_line_number" from the id there:
>> https://github.com/apache/httpd/blob/trunk/modules/proxy/mod_proxy_balancer.c#L787
>> If your configuration file changes often, that contributes to changing
>> the name of the SHM...
> 
> FWIW, here is (attached) the patch I'm thinking about.
> 

Thanks, the configuration changes once an hour or so. Typically, we have about 
1000 active shared memory segments (yes, they are SHMs) attached to the httpd 
processes.

For now, we’ll just have to implement a SHM clean-up in the start/stop wrappers 
until we can address the root cause or find a cleaner mitigation, which your 
patch might help with.

- Mark
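A start/stop-wrapper clean-up like the one Mark mentions could, on Linux, parse `ipcs -m` and remove segments owned by the httpd user that have no remaining attachments. The sketch below runs against a canned sample of the output; the column layout is an assumption about Linux `ipcs` and should be verified on the target platform before use:

```python
def orphaned_shm_ids(ipcs_output, owner="apache"):
    """Return shmids of segments owned by `owner` with nattch == 0,
    i.e. segments left behind after a crashed parent. Assumes the Linux
    `ipcs -m` column order: key shmid owner perms bytes nattch [status]."""
    ids = []
    for line in ipcs_output.splitlines():
        cols = line.split()
        # Data rows start with a hex key; headers and separators do not.
        if len(cols) >= 6 and cols[0].startswith("0x"):
            shmid, seg_owner, nattch = cols[1], cols[2], cols[5]
            if seg_owner == owner and nattch == "0":
                ids.append(shmid)
    return ids

SAMPLE = """\
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 32768      apache     600        65536      0
0x00000000 32769      apache     600        65536      2
0x00000000 32770      root       600        4096       0
"""

# The wrapper would then run `ipcrm -m <shmid>` for each returned id.
assert orphaned_shm_ids(SAMPLE) == ["32768"]
```

Filtering on `nattch == 0` is what keeps this safe to run while httpd is up: segments still attached to live children are never touched.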


Re: [Bug 62044] shared memory segments are not found in global list, but appear to exist in kernel.

2018-02-01 Thread Mark Blackman

> On 1 Feb 2018, at 12:36, Yann Ylavic  wrote:
> 
> Hi Mark,
> 
> On Thu, Feb 1, 2018 at 10:29 AM, Mark Blackman  wrote:>
>> 
>> 
>> Just to confirm, you expect that patch to handle SHM clean-up even in
>> the “nasty error” case?
> 
> Not really, no patch can avoid a crash for a crashing code :/
> The "stop_signals-PR61558.patch" patch avoids a known httpd crash in
> some circumstances, but...

Well, I just mean: if sig_coredump gets called, will the patch result in the 
normal SHM clean-up routines getting called, where they would not have been 
called before?  SHM clean-up is the key here and any patch that doesn’t 
contribute to that has no immediate value for me.

> 
>> I suspect that nasty error is triggered by
>> the Weblogic plugin based on the adjacency in the logs, but the
>> tracing doesn’t reveal any details, so an strace will probably be
>> required to get more detail.

Tracing has confirmed this really is a segmentation fault despite the lack of 
host-level messages and that reading a 3rd party module (but not Weblogic) is 
the last thing that happens before the segmentation fault and that pattern is 
fairly consistent. Now we need to ensure coredumps are generated.

Finally, there are no orphaned child httpd processes with a PPID of 1.  Just 
thousands and thousands of SHM segments with no processes attached to them.

Regards,
Mark


Re: [Bug 62044] shared memory segments are not found in global list, but appear to exist in kernel.

2018-02-01 Thread Mark Blackman
On 31 Jan 2018, at 22:41, Yann Ylavic  wrote:
> 
> Hi Mark,
> 
> let's continue this debugging on dev@ if you don't mind..
> 
>> On Wed, Jan 31, 2018 at 10:15 PM,   wrote:
>> https://bz.apache.org/bugzilla/show_bug.cgi?id=62044
>> 
>> --- Comment #32 from m...@blackmans.org ---
>> so sig_coredump is being triggered by an unknown signal, multiple times a 
>> day.
>> It's not a segfault, nothing in /var/log/messages. That results in a bunch of
>> undeleted shared memory segments and probably some that will no longer be in
>> the global list, but still present in the kernel.
> 
> In 2.4.29, i.e. without patch [1], sig_coredump might be triggered by
> any signal received by httpd during a restart, and the signal handler
> crashes itself (double fault) so the process is forcibly SIGKILLed
> (presumably, no trace in /var/log/messages...).
> This was reported and discussed in [2], and seems to quite correspond
> to what you observe in your tests.
> 
> Moreover, if the parent process crashes nothing will delete the
> IPC-SysV SHMs (hence the leak in the system), while children processes
> may continue to be attached which prevents a new parent process to
> start (until children stop or are forcibly killed)...
> 
> When this happens, you should see non-root processes attached to PPID
> 1 (e.g. with "ps -ef"), "-f /path/to/httpd.conf" in the command line
> might help distinguish the different httpd instances to monitor
> processes.
> 
> If this is the case, you probably should try patch [1].
> If not, I can't explain why in httpd logs a process with a different
> PID appears after the SIGHUP, it must have been started
> (automatically?) after the previous one crashed.
> Here the generation number can't help, a new process always start at
> generation #0.
> 
> Regards,
> Yann.
> 
> [1] 
> https://svn.apache.org/repos/asf/httpd/httpd/patches/2.4.x/stop_signals-PR61558.patch
> [2] https://bz.apache.org/bugzilla/show_bug.cgi?id=61558

Thanks, for now, we will treat the “nasty error” as a separate question to 
resolve and hope that clean-up patch deals with the immediate issue.

I had originally treated that “nasty error” as a reference to the “file exists” 
error.  However, based on your feedback and reviewing the logs, I would 
conclude that the “nasty error” is the trigger, as you suggest, and the lack of 
SHM clean-up and consequent collisions are collateral damage.

Just to confirm, you expect that patch to handle SHM clean-up even in the 
“nasty error” case?  I suspect that nasty error is triggered by the Weblogic 
plugin based on the adjacency in the logs, but the tracing doesn’t reveal any 
details, so an strace will probably be required to get more detail.
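For the strace step, one hedged invocation (the pidfile path and output prefix are assumptions, not from this thread) that follows children and timestamps each syscall:

```shell
# Attach to the running parent and follow forks; -ff writes one trace file
# per process (/tmp/httpd-trace.<pid>), -tt adds microsecond timestamps.
# Assumes the parent PID is recorded in /var/run/httpd.pid.
strace -ff -tt -o /tmp/httpd-trace -p "$(cat /var/run/httpd.pid)"
```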

Bugzilla was slightly easier to get log data into as I cannot use work email 
for these conversations.

Cheers,
Mark





Re: svn commit: r1822341 - in /httpd/httpd/trunk: CHANGES modules/slotmem/mod_slotmem_shm.c

2018-01-27 Thread Mark Blackman


> On 27 Jan 2018, at 15:55, Yann Ylavic  wrote:
> 
>> On Sat, Jan 27, 2018 at 4:41 PM, Yann Ylavic  wrote:
>> On Sat, Jan 27, 2018 at 4:39 PM, Yann Ylavic  wrote:
>>> On Sat, Jan 27, 2018 at 3:14 PM, Yann Ylavic  wrote:
>>>> 
>>>> So I wonder if we'd better:
>>>> 1. have a constant file name (no generation suffix) for all systems,
>>>> 2. not unlink/remove the SHM file on pconf cleanup,
>>>> 2'. eventually unlink the ones unused in ++generation (e.g. retained
>>>> global list),
>>>> 3. unlink all on pglobal cleanup.
>>>> 
>>>> Now we'd have a working shm_attach() not only for persisted slotmems,
>>>> while shm_create() would only be called for new SHMs which we know can
>>>> be pre-removed (to work around crashes leaving them registered
>>>> somewhere).
>>>> Also, if attach succeeds on startup (first generation) but the SHM is
>>>> not persisted (an old crash still), we can possibly pass the sizes
>>>> checks and re-create the SHM regardless.
>>>> 
>>>> WDYT?
>>> 
>>> Something like the attached patch (it talks better sometimes...).
>>> Not even tested of course :)
>> 
>> Oops, more than needed in there, here is the good one.
> 
> OK, v2 already, with correct usage of gpool (i.e. pconf), distinct
> from ap_pglobal.
> 

Thanks, do you recommend we test this version in our pre-production 
environments?

Regards,
Mark

shared memory segment issue in 2.4.29

2018-01-25 Thread Mark Blackman
Hi,

Any chance I could persuade anyone on the list to look over 
https://bz.apache.org/bugzilla/show_bug.cgi?id=62044 
<https://bz.apache.org/bugzilla/show_bug.cgi?id=62044> and provide feedback?

Cheers,
Mark

Re: DISCUSS: Establishing a release cadence

2017-11-02 Thread Mark Blackman


> On 2 Nov 2017, at 10:51, Daniel Ruggeri  wrote:
> 
> Hi, all;
> 
>I know we've chatted about this somewhat in the past so I wanted to
> kick off a formal topic to see if we can establish consensus for a
> release cadence of httpd. As a strawman proposal, I'd like to suggest
> that we...
> 
>  * Ensure stable branches are always in a releasable state (I believe
> this to currently be the case, right?)
> 
>  * Codify and automate the creation/handling of the release bits in a
> CI/CD job plan
> 
>  * Plan to release whatever is in stable branches every quarter via the job
> 
>  * Plan to "release" unstable/trunk bundles monthly
> 
>  * Support the ability to run the job ad-hoc for emergency fixes
> 
> 
> As a swag, all of the above is to make the work of the RM relatively
> light so more folks from the community can/will participate. There have
> been a few cases where features or fixes have sat in 2.4 without release
> because Jim or Bill just hadn't kicked off the process and committed
> their free time to doing it. The other thought is to help provide some
> degree of regularity our community (users and devs alike) can come to
> expect as well as do whatever we can to easily get experimental trunk
> bits in peoples' hands for testing.
> 
> 
> Thoughts?

Broadly reasonable, I can volunteer some online hardware if it helps. You might 
be shorter of people than hardware though.

- Mark

Re: Thoughts on 2.5.0

2017-10-24 Thread Mark Blackman


> On 24 Oct 2017, at 14:42, Jim Jagielski  wrote:
> 
> I would like to start a discussion on 2.5.0 and give
> some insights on my thoughts related to it.
> 
> First and foremost: there is cool stuff in 2.5.0
> that I really, REALLY wish were in a releasable, usable
> artifact. Up to now, we've been doing these as backports
> to the 2.4.x tree with, imo at least, good success.
> 
> So I think the main questions regarding 2.5.0 is a list
> of items/issues that simply *cannot* be backported, due
> to API/ABI concerns. And then gauge the "value" of
> the items on that list.
> 
> Another would be to look at some of the items currently
> "on hold" for backporting, due to outstanding questions,
> tech issues, more needed work/QA, etc... IMO, if these
> backports lack "support" for 2.4.x, then I wonder how
> "reliable" they are (or how tested they are) in the 2.5.o
> tree. And if the answer is "we pull them out of 2.5.0"
> then the question after that is what really *are* the
> diffs between 2.5.0 and 2.4.x... If, however, the
> answer is "tagging 2.5.0 will encourage us to address
> those issues" then my question is "Why aren't we doing
> that now... for 2.4.x".
> 
> And finally: 2.4.x is now, finally, being the default
> version in distros, being the go-to version that people
> are using, etc... I would like us to encourage that
> momentum.

As an observer, I’d like to ask what your goals are for these branches and what 
kinds of expectations would you like consumers of these branches to have?

- Mark 

Re: [Discuss] Rolling a 'final' 2.2.33 release

2017-06-25 Thread Mark Blackman

> On 14 Jun 2017, at 22:12, William A Rowe Jr  wrote:
> 
> 
> Thoughts/comments? Patches to hold for before we roll? If I don't hear
> otherwise, and we stick to the simpler alternative, then I'd plan to roll
> these candidates Thursday.

Would it be an option to get a fix in for the single-character header bug? ( 
https://bz.apache.org/bugzilla/show_bug.cgi?id=61220 
<https://bz.apache.org/bugzilla/show_bug.cgi?id=61220> ) 

If you add

HttpProtocolOptions Unsafe LenientMethods Allow0.9

to a default httpd.conf, single-character header lines are still rejected with a 
400 code.

macmini:httpd-2.2.33 mark$ telnet localhost 8033
Trying ::1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: foobar
x: 0

HTTP/1.1 400 Bad Request
Date: Sun, 25 Jun 2017 21:43:53 GMT
Server: Apache/2.2.33 (Unix)
Content-Length: 226
Connection: close
Content-Type: text/html; charset=iso-8859-1



<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
</body></html>


Connection closed by foreign host.



Re: Post 2.4.25

2016-12-24 Thread Mark Blackman

> On 24 Dec 2016, at 16:32, Eric Covener  wrote:
> 
>> I'm not saying we don't do one so we can do the other; I'm
>> saying we do both, at the same time, in parallel. I still
>> don't understand why that concept is such an anathema to some
>> people.
> 
> I also worry about our ability to deliver a 3.0 with enough
> re-architecture for us and function for users, vs a more
> continuous delivery (apologies for bringing buzzwords to dev@httpd)
> cadence on 2.4 as we've been in.

If you can find a way with limited resources, I would encourage doing both in 
parallel as well.

What are the 2.6/3.0 re-architecture goals/vision out of curiosity?

- Mark

module advice

2016-11-13 Thread Mark Blackman
Hi,

Not sure if this is the right mailing list for this question (feel free to 
redirect me if not). I want to write a module that examines an HTTP POST body 
and executes some action with the contents of the body, but only if the body 
matches some condition. Should that be done with an input filter, taking a copy 
of the body and then, at the end of the request, doing the matching and/or 
executing the action? Or is there some way to plant a hook that gets executed 
after all the input filters have run, so that I just have the whole request 
body sitting there ready for examination?

Currently, I believe an input filter is required to copy the request as it 
comes in, but it might be cleaner to just execute some code after the body has 
been received if practical.

Cheers,
Mark 

RE: [ANNOUNCE] Apache HTTP Server 2.4.23 Released [I]

2016-08-05 Thread Mark Blackman
Thanks for the speedy turn-around!

From: Luca Toscano [mailto:toscano.l...@gmail.com]
Sent: 04 August 2016 14:08
To: Apache HTTP Server Development List 
Subject: Re: [ANNOUNCE] Apache HTTP Server 2.4.23 Released [I]


https://httpd.apache.org front page updated!

Luca


---
This e-mail may contain confidential and/or privileged information. If you are 
not the intended recipient (or have received this e-mail in error) please 
notify the sender immediately and delete this e-mail. Any unauthorized copying, 
disclosure or distribution of the material in this e-mail is strictly forbidden.

Please refer to https://www.db.com/disclosures for additional EU corporate and 
regulatory disclosures and to 
http://www.db.com/unitedkingdom/content/privacy.htm for information about 
privacy.


RE: [ANNOUNCE] Apache HTTP Server 2.4.23 Released [I]

2016-08-04 Thread Mark Blackman
Classification: For internal use only

Hi,

Could I recommend that text about Apache 2.2 EOL notification be added to the 
2.2 section on http://httpd.apache.org please?

Regards,
Mark

> -Original Message-
> From: Jim Jagielski [mailto:j...@jagunet.com]
> Sent: 05 July 2016 14:04
> To: httpd 
> Cc: us...@httpd.apache.org
> Subject: [ANNOUNCE] Apache HTTP Server 2.4.23 Released.
>
> ..
>
> Please note that Apache Web Server Project will only provide maintenance
> releases of the 2.2.x flavor through June of 2017, and will provide some
> security patches beyond this date through at least December of 2017.
> Minimal maintenance patches of 2.2.x are expected throughout this period,
> and users are strongly encouraged to promptly complete their transitions
> to the 2.4.x flavor of httpd to benefit from a much larger assortment
> of minor security and bug fixes as well as new features.




Re: End of the road of 2.2.x maintenance?

2016-05-11 Thread Mark Blackman

> On 10 May 2016, at 21:38, William A Rowe Jr  wrote:
> 
> 
> Are we ready to start the 12 month countdown as of the next/final bug
> fix release of 2.2, and highlight this in both the 2.2 and 2.4 announce
> broadcasts?
> 
> I'm hoping we conclude some fixes of 2.4 scoreboard regressions and
> get to the point of releasing 2.4 with mod_proxy_http2 sometime within
> the next month or so, and that we can reach a consensus about how
> we will proceed on the 2.2 branch, before we get to that release.
> 
> Feedback desired

As a big consumer of Apache 2.2 in my day job, where we are obliged to track 
Apache’s policies very closely, I would prefer to delay this a bit. When Apache 
announces the formal end-of-life date of 2.2, we will be required to engineer 
the migration of 6000+ wildly diverse sites to Apache 2.4 to meet internal 
audit policies. I would propose the 12 month countdown starts no earlier than 
Jan 2017 (as a consumer).

What’s the cost of maintaining (but maybe not updating) Apache 2.2?

Cheers,
Mark



Re: reverse proxy wishlist

2015-12-03 Thread Mark Thomas


On 2015-12-03 14:59, Jim Jagielski  wrote: 
> I put out a call on Twitter regarding this, but wanted to
> close the loop here as well.
> 
> What would *you* like to see as new features or enhancements
> w/ mod_proxy, esp reverse proxy. I was thinking about some
> sort of active backend monitoring, utilizing watchdog, which
> could also maybe, eventually, pull in performance and load
> data for the backend for a more accurate LB provider. But
> what about new LB methods? Any ideas there?
> 
> tia.

With my Tomcat hat on:

HTTP/2 support for mod_proxy_http
HTTP upgrade support for mod_proxy_ajp (we'll need to do work on the Tomcat 
side as well)
Improved WebSocket support in mod_proxy_wstunnel [1].

I'm happy to help out on the Tomcat side of things where required.

Mark

[1] mod_proxy_wstunnel assumes (at least it did the last time I looked at it) 
that all requests under a given URL space will be WebSocket requests. That 
doesn't seem to be the way many apps are being implemented. It would be great 
if mod_proxy would allow both mod_proxy_[ajp|http] and mod_proxy_wstunnel to be 
mapped to the same URL space with the 'right' one being selected based on the 
request.
--
Sent via Pony Mail for dev@httpd.apache.org. 
View this email online at:
https://pony-poc.apache.org/list.html?dev@httpd.apache.org


Re: 2.2 and 2.4 and 2.6/3.0

2015-05-30 Thread Mark Blackman

> On 27 May 2015, at 13:54, Jim Jagielski  wrote:
> 
> Anyone else think it's time to EOL 2.2 and focus
> on 2.4 and the next gen? My thoughts are that http/2
> and mod_h2 will drive the trunk design efforts and so
> it would be nice to focus energy on 2.4 and later...

Depends on what EOL means practically, of course. As someone whose day job is 
engineering the web platform for one of the big investment banks, I can tell 
you we only just got everyone moved over to 2.2 (i.e. several years after 1.3 
got canned in early 2010).

So while I’m not looking for the latest and greatest features in 2.2 at this 
stage, I do want to continue to see bug fixes and security issues addressed for 
at least another five years (ideally), and really 2 years at a minimum. I can 
easily see this might be asking too much of volunteers, but those are my 
“enterprise” expectations. 

- Mark

Re: mod_proxy_fcgi issues

2014-12-04 Thread Mark Montague

On 2014-12-04 13:27, Eric Covener wrote:

On Thu, Dec 4, 2014 at 1:11 PM, Jim Riggs  wrote:

This all may certainly be true, but just for clarity's sake (since it was my quote 
that started this new mod_proxy_fcgi thread), my mod_proxy_balancer -> 
mod_proxy_fcgi -> php-fpm issue is NOT an httpd issue...at least that is not how I 
have treated it. It is actually a code fix I have had to make in PHP to get it to 
work.

[...] It doesn't seem that usable values for these things should be so unique 
to php-fpm.


My experience has been that the PHP FPM SAPI function 
init_request_info() in sapi/fpm/fpm/fpm_main.c, which I think was 
originally copied from the CGI SAPI, is very old code that goes to great 
lengths to preserve old, not always standards-compliant behavior in 
order to avoid breaking backward compatibilities. Hence, I'm not 
convinced that the things Eric refers to above might not be unique to 
php-fpm.


After struggling to get php-fpm working with mod_proxy_fcgi, I 
eventually completely rewrote the whole init_request_info function the 
way I thought it "should be" without any regards to backwards 
compatibility; this solved the problems I was having.


If memory serves (it's been a few years) the main problems I was 
encountering were with serving the index file for directories and 
correct handling of PATH_INFO.


I've attached the patch I'm using (a completely new version of the 
init_request_info function) in case anyone wants to either play with it 
or compare it to the code that PHP currently uses.


--
  Mark Montague
  m...@catseye.org

diff -up php-5.6.3/sapi/fpm/fpm/fastcgi.c.fpm-init-request php-5.6.3/sapi/fpm/fpm/fastcgi.c
--- php-5.6.3/sapi/fpm/fpm/fastcgi.c.fpm-init-request   2014-11-18 20:33:20.313769152 +0000
+++ php-5.6.3/sapi/fpm/fpm/fastcgi.c    2014-11-18 20:33:38.424369147 +0000
@@ -488,6 +488,7 @@ static int fcgi_get_params(fcgi_request
             ret = 0;
             break;
         }
+        zlog(ZLOG_DEBUG, "fcgi_get_params: %s=%s", tmp, s);
         zend_hash_update(req->env, tmp, eff_name_len+1, &s, sizeof(char*), NULL);
         p += name_len + val_len;
     }
@@ -1093,12 +1094,14 @@ char* fcgi_putenv(fcgi_request *req, cha
 {
     if (var && req) {
         if (val == NULL) {
+            zlog(ZLOG_DEBUG, "fcgi_putenv: %s=", var);
             zend_hash_del(req->env, var, var_len+1);
         } else {
             char **ret;

             val = estrdup(val);
             if (zend_hash_update(req->env, var, var_len+1, &val, sizeof(char*), (void**)&ret) == SUCCESS) {
+                zlog(ZLOG_DEBUG, "fcgi_putenv: %s=%s", var, val);
                 return *ret;
             }
         }
diff -up php-5.6.3/sapi/fpm/fpm/fpm_main.c.fpm-init-request php-5.6.3/sapi/fpm/fpm/fpm_main.c
--- php-5.6.3/sapi/fpm/fpm/fpm_main.c.fpm-init-request  2014-11-12 13:52:21.000000000 +0000
+++ php-5.6.3/sapi/fpm/fpm/fpm_main.c   2014-11-18 20:33:38.425369123 +0000
@@ -1422,6 +1422,317 @@ static void init_request_info(TSRMLS_D)
 }
 /* }}} */
+static char *fpm_cgibin_saveenv(char *name, char *val)
+{
+    int name_len = strlen(name);
+    char *old_val = sapi_cgibin_getenv(name, name_len TSRMLS_CC);
+    char save_name[256];
+
+    if (val != NULL && old_val != NULL && strcmp(val, old_val) == 0) {
+        return old_val;
+    }
+
+    if (name_len < 256 - strlen("ORIG_") - 1) {
+        strcpy(save_name, "ORIG_");
+        strcat(save_name, name);
+    } else {
+        save_name[0] = '\0';
+    }
+
+    /* Save the old value only if one was not previously saved */
+    if (old_val && save_name[0] != '\0' &&
+        sapi_cgibin_getenv(save_name, strlen(save_name) TSRMLS_CC) == NULL) {
+        _sapi_cgibin_putenv(save_name, old_val TSRMLS_CC);
+    }
+
+    return _sapi_cgibin_putenv(name, val TSRMLS_CC);
+}
+
+static void init_request_info0(TSRMLS_D)
+{
+    char *document_root;
+    int document_root_len;
+    char *script_filename;
+    int script_filename_len;
+    char *script_filename_part = NULL;
+    char *s = NULL;
+    char *path = NULL;
+    char *path_info = NULL;
+    char *path_translated = NULL;
+    char *content_type;
+    char *content_length;
+    const char *auth;
+    char *ini;
+    int result;
+    struct stat st;
+    int add_index = 0;
+
+    zlog(ZLOG_DEBUG, "initializing request info:");
+
+    /* initialize the defaults */
+    SG(request_info).path_translated = NULL;
+    SG(request_info).request_method = NULL;
+    SG(request_info).proto_num = 1000;
+    SG(request_info).query_string = NULL;
+    SG(request_info).request_uri = NULL;
+SG(request_i

Re: commercial support

2014-11-23 Thread Mark Blackman
Is Apache 2.4 really just as fast as nginx for response times for an arbitrary 
number of concurrent connections?

Apache is great, and it's now so mature that most enterprises are very 
comfortable with it; nginx, by contrast, started with a very simple premise and 
has kept its scope restricted.

For me, Apache is a Swiss army knife that has a solution for nearly every use 
case and nginx is more like a surgeon’s scalpel, best of breed in that domain 
(maximal concurrent connections for minimum resource cost with minimum feature 
set), but unhelpful in other places.


Cheers,
Mark

> On 20 Nov 2014, at 22:00, Jim Jagielski  wrote:
> 
> It's a shame that there isn't a company like Covalent
> around anymore that focuses on the Apache httpd web-server.
> nginx.com shows kinda clearly that having a motivated
> company behind a web-server helps grab market share and
> market awareness (they can continue to beat the drum about
> how fast and reliable they are, when so many benchmarks
> lately show that 2.4 is just as fast if not faster in
> actual response time, but we have limited ability to
> do that, and most reporters see us as Old "news" and
> nginx as the new hotness anyway)...
> 
> So who wants to get together and create a company
> around httpd? :)
> 
> Honestly though, how much of the uptake in nginx
> do people think is actually due to nginx being "better"
> or the "best" choice, and how much do you think is
> due simply because it's *seen* as better or that we
> are seen as old and tired?
> 
> This is our 20year anniversary... It would be cool
> to use that to remind people! :)



Re: [RFC] enhancement: mod_cache bypass

2014-08-24 Thread Mark Montague

On 2014-08-23 12:36, Graham Leggett wrote:

On 23 Aug 2014, at 3:40 PM, Mark Montague  wrote:


[root@sky ~]# httpd -t
AH00526: Syntax error on line 148 of /etc/httpd/conf/dev.catseye.org.conf:
CacheEnable cannot occur within  section
[root@sky ~]#

The solution here is to lift the restriction above. Having a generic mechanism 
to handle conditional behaviour, and then having a special case to handle the 
same behaviour in a different way is wrong way to go.


I've looked into allowing CacheEnable directives within  sections.  
This can be done by removing the NOT_IN_FILES flag from the call to 
ap_check_cmd_context() in modules/cache/mod_cache.c:add_cache_enable()


The problem is that  sections are currently walked only during 
mod_cache's normal handler phase, not during the quick handler phase.  
It looks easy enough to add a call to ap_if_walk() to 
cache_quick_handler(), but this would add significant extra processing 
to the quick handler phase, as all  expressions for the enclosing 
context would be evaluated, and I think that at the end we'd have to 
discard the results that ap_if_walk() caches in the request record so 
that they can be recomputed later during normal request processing after 
all information about the request is available.   Is this acceptable?


Also, the proof of concept patch that I sent yesterday, which adds an 
expr= clause to the CacheEnable directive, records cache bypasses in the 
request notes (for logging), in the X-Cache and X-Cache-Detail headers, 
in the cache-status subprocess environment variable, and adds a new 
subprocess environment variable named cache-bypass.  If we enable using 
CacheEnable in  sections, conditional cache bypasses will no longer 
be called out explicitly to the server administrator; they will need to 
infer a bypass from comparing the URL path to their configuration.  I do 
not see this as a large problem, but I thought I would mention it for 
consideration.


Given these things, what thoughts does the developer community have?  
Would a patch to allow CacheEnable within  sections have a better 
chance of being accepted than one that adds a expr= clause to the 
CacheEnable directive?


Or should mod_cache not allow cache bypassing at all?  "Use NGINX ( 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_bypass 
) if you want that" or "use Varnish ( 
https://www.varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html#cookies 
) if you want that" are answers I'm fine with, if there's no interest in 
this feature for httpd's mod_cache.


--
  Mark Montague
  m...@catseye.org



Re: [RFC] enhancement: mod_cache bypass

2014-08-23 Thread Mark Montague

On 2014-08-23 17:43, Mark Montague wrote:
> - Back-end sets response header "Cache-Control: max-age=0, 
s-maxage=14400" so that mod_cache
> caches the response, but ISP caches and browser caches do not.  
(mod_cache removes s-maxage

> and does not pass it upstream).
mod_cache shouldn’t remove any Cache-Control headers.


It apparently does, although I haven't found where in the code yet. I 
would be interested to see if anyone can reproduce my experience. As 
far as I know, I don't have any configuration that would result in this.


Please ignore this part of my previous reply, I found out what was going on:

When the content is first requested, mod_cache has a miss and it stores 
the content.  But when it sends it on to the client, it does so without 
any Cache-control header at all:


GET /test.php HTTP/1.1
Host: dev.catseye.org

HTTP/1.1 200 OK
Date: Sat, 23 Aug 2014 22:01:26 GMT
Server: Apache/2.4
X-Cache: MISS from dev.catseye.org
X-Cache-Detail: "cache miss: attempting entity save" from dev.catseye.org
Transfer-Encoding: chunked
Content-Type: text/html;charset=UTF-8

The second time the resource was requested, it is served by mod_cache 
from the cache with the original Cache-Control header (I added "foo=1" 
to the header track this when I generated the page):


GET /test.php HTTP/1.1
Host: dev.catseye.org

HTTP/1.1 200 OK
Date: Sat, 23 Aug 2014 22:02:21 GMT
Server: Apache/2.4
Cache-Control: max-age=0, foo=1, s-maxage=14400
Content-Security-Policy: default-src 'self'; script-src 'self' 
'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 
'self' data: ; font-src 'self' data: ; report-uri /csp-report.php

Age: 54
X-Cache: HIT from dev.catseye.org
X-Cache-Detail: "cache hit" from dev.catseye.org
Content-Length: 33
Content-Type: text/html;charset=UTF-8

What led me to assume that mod_cache was "editing" the header (which, I now 
see, it wasn't) was that I had the following directives, which escaped my 
attention when I was composing my previous reply:


ExpiresActive on
ExpiresDefault "access plus 1 week"
ExpiresByType text/html "access plus 0 seconds"

This resulted in a "Cache-control: max-age=0" header being 
unconditionally added to the response headers, even if another header 
was already there.  So for a cache miss, I would see:


Cache-control: max-age=0

while for a cache hit I would see

Cache-control: max-age=0
Cache-Control: max-age=0, foo=1, s-maxage=14400

Mystery solved.  I apologize for the red herring and waste of people's 
time and attention.


I'm still looking for a solution to the original problem:  how to 
indicate to mod_cache that cached content for a particular URL path 
should be served to some clients (ones without login cookies), but not 
to other clients (ones with login cookies).
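For reference, the proof-of-concept patch under discussion expresses exactly that condition on the directive itself. This is the proposed patch syntax, not something stock httpd 2.4 accepts:

```
# Proposed patch syntax: only consult/populate the cache when the request
# carries no Cookie header at all.
CacheEnable disk / "expr=-z %{req:Cookie}"
```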


--
  Mark Montague
  m...@catseye.org



Re: [RFC] enhancement: mod_cache bypass

2014-08-23 Thread Mark Montague

On 2014-08-23 12:36, Graham Leggett wrote:

On 23 Aug 2014, at 3:40 PM, Mark Montague  wrote:

AH00526: Syntax error on line 148 of 
/etc/httpd/conf/dev.catseye.org.conf: CacheEnable cannot occur within 
 section 

The solution here is to lift the restriction above. Having a generic mechanism 
to handle conditional behaviour, and then having a special case to handle the 
same behaviour in a different way is wrong way to go.


I assumed this would be OK because the Header directive has a similar 
expr=expression clause.


But, I'll look into whether if restriction on If could be removed. If I 
rewrite things to use the If directive, do you see bypass functionality 
as something worth including?  I ask because from your points below I 
get the impression that the answer is "no".




The proposed enhancement is about the server deciding when to serve items from 
the cache.  Although the client can specify a Cache-Control request header in 
order to bypass the server's cache, there is no good way for a web application 
to signal to a client when it should do this (for example, when a login cookie 
is set). The behavior of other caches is controlled using the Cache-Control 
response header.

There is - use “Cache-Control: private”. This will tell all public caches, 
including mod_cache and ISP caches, not to cache content with cookies attached, 
while at the same time telling browser caches that they should.


The problem is not whether the content should be cached:  it should.  
The problem is, to which clients should the cached content be served?  
If the client's request does not contain a login cookie, that client 
should get the cached copy.  If the client's request does contain a 
login cookie, the cache should be bypassed and the client should get a 
copy of the resource generated specifically for it.


"Cache-Control: private" cannot be used in a request, only in a 
response, where it works as you said.  The problem is that the first 
request for a given resource where the client includes a login cookie 
gets intercepted by mod_cache and served from the cache (if you assume 
that other clients without login cookies have already requested it).  
There must therefore be some way to tell mod_cache that this client 
needs something different. One way to do this would be by having 
different URL paths for logged in versus non-logged in users, but this 
is awkward, user-visible, and may not be feasible with all web applications.




> - Back-end sets response header "Cache-Control: max-age=0, s-maxage=14400" so 
that mod_cache
> caches the response, but ISP caches and browser caches do not.  (mod_cache 
removes s-maxage
> and does not pass it upstream).
mod_cache shouldn’t remove any Cache-Control headers.


It apparently does, although I haven't found where in the code yet. I 
would be interested to see if anyone can reproduce my experience. As far 
as I know, I don't have any configuration that would result in this.


httpd 2.4.10 with mod_proxy_fcgi (Fedora 19 build)
PHP 5.5.5 with PHP-FPM

Relevant configuration:

CacheEnable disk /
CacheDefaultExpire 86400
CacheIgnoreHeaders Set-Cookie
CacheHeader on
CacheDetailHeader on
# We'll be paying attention to "Cache-Control: s-maxage=xxx" for all
# of our caching decisions.  The browser will use max-age=yyy for its
# decisions.  So we drop the Expires header. See the following page
# from Google which says, "It is redundant to specify both Expires and
# Cache-Control: max-age"
# https://developers.google.com/speed/docs/best-practices/caching?hl=sv
Header unset Expires
RewriteRule ^(.*\.php)$ 
fcgi://127.0.0.1:9001/www/dev.catseye.org/content/$1 [P,L]


File test.php, containing:


Hello!

Browser transaction for https://dev.catseye.org/test.php:

GET /test.php HTTP/1.1
Host: dev.catseye.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:31.0) 
Gecko/20100101 Firefox/31.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive

HTTP/1.1 200 OK
Date: Sat, 23 Aug 2014 20:11:00 GMT
Server: Apache/2.4
Cache-Control: max-age=0
X-Cache: MISS from dev.catseye.org
X-Cache-Detail: "cache miss: attempting entity save" from dev.catseye.org
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html;charset=UTF-8

And mod_cache definitely receives s-maxage from the backend:

[root@sky cache]# cat ./J/k/WPiKG0bwW@R_H4YvSOdw.header
(binary data omitted)https://dev.catseye.org:443/test.php?Cache-Control: 
max-age=0

Cache-Control: max-age=0, s-maxage=14400
Content-Security-Policy: default-src 'self'; script-src 'self' 
'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 
'self' data: ; fon

Re: [RFC] enhancement: mod_cache bypass

2014-08-23 Thread Mark Montague

On 2014-08-23 5:19, Graham Leggett wrote:
On 23 Aug 2014, at 03:50, Mark Montague  wrote:


I've attached a proof-of-concept patch against httpd 2.4.10 that 
allows mod_cache to be bypassed under conditions specified in the 
conf files.


Does this not duplicate the functionality of the If directives?


No, not in this case:


CacheEnable disk /


[root@sky ~]# httpd -t
AH00526: Syntax error on line 148 of /etc/httpd/conf/dev.catseye.org.conf:
CacheEnable cannot occur within <If> section
[root@sky ~]#

Also, any solution has to work within both the quick handler phase and 
the normal handler phase of mod_cache.



# Only serve cached data if no (login or other) cookies are present 
in the request:

CacheEnable disk / "expr=-z %{req:Cookie}"


As an aside, trying to single out and control just one cache using 
directives like this is ineffective, as other caches like ISP caches 
and browser caches will not be included in the configuration.


Rather control the cache using the Cache-Control headers in the formal 
HTTP specs.


The proposed enhancement is about the server deciding when to serve 
items from the cache.  Although the client can specify a Cache-Control 
request header in order to bypass the server's cache, there is no good 
way for a web application to signal to a client when it should do this 
(for example, when a login cookie is set). The behavior of other caches 
is controlled using the Cache-Control response header.


This functionality is provided by Varnish Cache: 
https://www.varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html#cookies


Squid does not currently provide this functionality, but it seems like 
there is consensus that it should: 
http://bugs.squid-cache.org/show_bug.cgi?id=2258


Here is a more detailed example scenario, in case it helps.  There are 
also many other scenarios in which conditionally bypassing mod_cache is 
useful.


- Reverse proxy setup using mod_proxy_fcgi
- Static resources served through httpd front-end with response header 
"Cache-Control: max-age=14400" so that they are cached by mod_cache, ISP 
caches, and browser caches.
- Back-end pages are dynamic (PHP), but very expensive to generate (1-2 
seconds).
- Back-end sets response header "Cache-Control: max-age=0, 
s-maxage=14400" so that mod_cache caches the response, but ISP caches 
and browser caches do not.  (mod_cache removes s-maxage and does not 
pass it upstream).
- When back-end content changes (e.g., an author makes an update), the 
back-end invokes "htcacheclean /path/to/resource" to invalidate the 
cached page so that it is regenerated the next time a client requests it.
- Clients have multiple cookies set.  Tracking cookies and cookies used 
by JavaScript should not cause a mod_cache miss.
- Dynamic pages that are generated when a login cookie is set should not 
be cached.  This is accomplished by the back-end setting the response 
header "Cache-Control: max-age=0".
- However, when a login cookie is set, dynamic pages that are currently 
cached should not be served to the client with the login cookie, while 
they should still be served to all other clients.
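
A sketch of how the proposed CacheEnable expression could express the last
two requirements (the cookie name "login" is purely illustrative):

```
# Hypothetical use of the proposed syntax: serve from mod_cache only
# when no login cookie is present in the request; logged-in clients
# always reach the back-end, everyone else can be served cached pages.
CacheEnable disk / "expr=! %{req:Cookie} =~ /login/"
```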


--
  Mark Montague
  m...@catseye.org



[RFC] enhancement: mod_cache bypass

2014-08-22 Thread Mark Montague
I've attached a proof-of-concept patch against httpd 2.4.10 that allows 
mod_cache to be bypassed under conditions specified in the conf files.  
It adds an optional fourth argument to the CacheEnable directive:


CacheEnable cache_type [url-string] [expr=expression]

If the expression is present, data will only be served from the cache 
for requests for which the expression evaluates to true.  This permits 
things such as:


# Only serve cached data if no (login or other) cookies are present in 
the request:

CacheEnable disk / "expr=-z %{req:Cookie}"

# Do not serve cached pages to our testing network:

CacheEnable disk "expr=! ( %{REMOTE_ADDR} -ipmatch 
192.168.0.0/16 )"



Is there interest in such an enhancement?  If so, I'll make any 
requested changes to the implementation, port the patch forward to 
trunk, put in real APLOGNOs, make sure it passes the test suite, create 
a documentation patch, and create a bugzilla for all this.


--
  Mark Montague
  m...@catseye.org

diff -urd httpd-2.4.10.orig/modules/cache/cache_util.c 
httpd-2.4.10/modules/cache/cache_util.c
--- httpd-2.4.10.orig/modules/cache/cache_util.c2014-05-30 
13:50:37.0 +
+++ httpd-2.4.10/modules/cache/cache_util.c 2014-08-23 01:34:25.521689874 
+
@@ -27,6 +27,10 @@
 
 extern module AP_MODULE_DECLARE_DATA cache_module;
 
+extern int cache_run_cache_status(cache_handle_t *h, request_rec *r,
+apr_table_t *headers, ap_cache_status_e status,
+const char *reason);
+
 /* Determine if "url" matches the hostname, scheme and port and path
  * in "filter". All but the path comparisons are case-insensitive.
  */
@@ -129,10 +133,30 @@
 }
 
 static cache_provider_list *get_provider(request_rec *r, struct cache_enable 
*ent,
-cache_provider_list *providers)
+cache_provider_list *providers, int *bypass)
 {
-/* Fetch from global config and add to the list. */
 cache_provider *provider;
+
+/* If an expression is present, evaluate it and make sure it is true */
+if (ent->expr != NULL) {
+const char *err = NULL;
+int eval = ap_expr_exec(r, ent->expr, &err);
+if (err) {
+ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
+APLOGNO(06668) "Failed to evaluate expression (%s) - skipping 
CacheEnable for uri %s", err, r->uri);
+return providers;
+}
+if (eval <= 0) {
+ap_log_rerror(APLOG_MARK, APLOG_DEBUG, APR_SUCCESS, r,
+  APLOGNO(06667) "cache: CacheEnable expr at %s(%u) is 
FALSE for uri %s", ent->expr->filename, ent->expr->line_number, r->uri);
+(*bypass)++;
+return providers;
+}
+ap_log_rerror(APLOG_MARK, APLOG_DEBUG, APR_SUCCESS, r,
+  APLOGNO(0) "cache: CacheEnable expr at %s(%u) is 
TRUE for uri %s", ent->expr->filename, ent->expr->line_number, r->uri);
+}
+
+/* Fetch from global config and add to the list. */
 provider = ap_lookup_provider(CACHE_PROVIDER_GROUP, ent->type,
   "0");
 if (!provider) {
@@ -172,6 +196,7 @@
 {
 cache_dir_conf *dconf = ap_get_module_config(r->per_dir_config, 
&cache_module);
 cache_provider_list *providers = NULL;
+int bypass = 0;
 int i;
 
 /* per directory cache disable */
@@ -193,7 +218,7 @@
 for (i = 0; i < dconf->cacheenable->nelts; i++) {
 struct cache_enable *ent =
 (struct cache_enable 
*)dconf->cacheenable->elts;
-providers = get_provider(r, &ent[i], providers);
+providers = get_provider(r, &ent[i], providers, &bypass);
 }
 
 /* loop through all the global cacheenable entries */
@@ -201,10 +226,16 @@
 struct cache_enable *ent =
 (struct cache_enable *)conf->cacheenable->elts;
 if (uri_meets_conditions(&ent[i].url, ent[i].pathlen, &uri)) {
-providers = get_provider(r, &ent[i], providers);
+providers = get_provider(r, &ent[i], providers, &bypass);
 }
 }
 
+if (providers == NULL && bypass > 0) {
+/* we're bypassing the cache. tell everyone who cares */
+cache_run_cache_status(NULL, r, r->headers_out, AP_CACHE_BYPASS,
+   apr_psprintf(r->pool, "cache bypass: %d 
conditions not satisfied", bypass));
+}
+
 return providers;
 }
 
diff -urd httpd-2.4.10.orig/modules/cache/cache_util.h 
httpd-2.4.10/modules/cache/cache_util.h
--- httpd-2.4.10.orig/modules/cache/cache_util.h2014-08-20 
14:50:12.251792173 +
+++ httpd-2.4.10/modules/cache/cache_util.h 2014-08-22 00:20:24.946556676 
+
@@ -109,6 +109,7 @@
 apr_uri_t url;
 const char *typ

Apache 2.2.28 release timing.

2014-08-05 Thread Mark Blackman
Hi,

This might be more of a user than a dev question, but as the discussions about 
timing were here, I’ll go with here.

http://mail-archives.apache.org/mod_mbox/httpd-dev/201407.mbox/<20140721075315.ec908e91c20de17e6e448089a4bc3ed2.f963b4ea46.wbe%40email11.secureserver.net>

suggested the 2.2.28 tagging and presumably release is imminent,  
however, http://svn.apache.org/repos/asf/httpd/httpd/tags/2.2.28 is still a 404.

I understand the mechanics of open source projects, so this is not a 
“hurry-up”, 
it’s just a "can I get Apache 2.2.28 into my next hosting platform release or 
not”, 
the contents of which will be frozen on Aug. 15.

I’m mostly interested in the CVE updates, so I can tell users we’re clear of 
them. 
If the 2.2.28 release is not likely before Aug. 15, that’s fine, I just wanted 
to be sure.

Cheers,
Mark

Re: bug? Inconsistent handling of query string

2013-02-15 Thread Mark Stosberg
On 02/15/2013 01:49 PM, Eric Covener wrote:
> On Fri, Feb 15, 2013 at 12:57 PM, Mark Stosberg  wrote:
>>
>> Thanks for the response, William --
>>
>> On 02/15/2013 11:49 AM, William A. Rowe Jr. wrote:
>>> On Fri, 15 Feb 2013 10:38:40 -0500
>>> Mark Stosberg  wrote:
>>>
>>>>
>>>> I'd like feedback on whether the following behavior is a bug, or
>>>> intentionally inconsistent.
>>>>
>>>> I was looking at the environment variables generated by this case:
>>>>
>>>>Browser URL: /file%3Fa=b?c=d
> 
>>   # Otherwise, pass everything through to the dispatcher
>>   RewriteRule ^home/project/www/(.*)$ /cgi-bin/dispatch.cgi/$1
>> [last,qsappend]
> 
> Here you'll match the decoded version and copy it into the path as a
> literal ? -- maybe you need [B] here to be safe, or capture
> %{THE_REQUEST} in a condition which has the still-encoded request.

Eric,

You were correct. Here's the result after adding [B] to my RewriteRule:

  'QUERY_STRING' => 'c=d',
  'SCRIPT_URL' => '/file?a=b',
  'SCRIPT_URI' => 'http://www.mark.net.adoptapet.com/file?a=b',
  'REQUEST_URI' => '/file%3Fa=b?c=d',

In summary, all variables look consistent and correct. The bug was on my
end!

Thanks for the feedback,

Unfortunately, this a subtle detail of how rewriting works that is
commonly overlooked.

A quick search on Github shows nearly 5,000 projects that use this
pattern, and none of them in the random sampling I looked at had
included [B] in their flags:

https://github.com/search?q=+RewriteCond+%25%7BREQUEST_FILENAME%7D+%21-f++%241&type=Code&ref=searchresults
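
For the record, the dispatching recipe with Eric's suggested flag applied
would look like this (paths as in the earlier message):

```
# If an actual file or directory is requested, serve directly
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d

# [B] re-escapes the captured path, so an encoded "%3F" in the request
# is not copied into the substitution as a literal "?"
RewriteRule ^home/project/www/(.*)$ /cgi-bin/dispatch.cgi/$1 [B,last,qsappend]
```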



   Mark





Re: bug? Inconsistent handling of query string

2013-02-15 Thread Mark Stosberg

Thanks for the response, William --

On 02/15/2013 11:49 AM, William A. Rowe Jr. wrote:
> On Fri, 15 Feb 2013 10:38:40 -0500
> Mark Stosberg  wrote:
> 
>>
>> I'd like feedback on whether the following behavior is a bug, or
>> intentionally inconsistent.
>>
>> I was looking at the environment variables generated by this case:
>>
>>Browser URL: /file%3Fa=b?c=d
>>
>>'QUERY_STRING' => 'a=b&c=d',
>>'SCRIPT_URL' => '/file?a=b',
>>    'SCRIPT_URI' => 'http://example.com/file?a=b',
>>'REQUEST_URI' => '/file%3Fa=b?c=d',
>>
>> The "%3F" is an encoded question mark.
>>
>> Note that SCRIPT_URI and SCRIPT_URL treat the query string as starting
>> after the unencoded question mark, while the 'QUERY_STRING' variable
>> treats the query string as starting at the encoded question mark.
>>
>> From my reading of RFC 3875 (CGI), my understanding is that only an
>> unencoded question mark should mark the beginning of the query string.
>> Thus, it appears that the QUERY_STRING variable being returned here is
>> incorrect. Is it?
> 
> I agree with your interpretation.  However, we pick apart QUERY_STRING
> long after %-escapes are transcoded.  That could be the issue.
> 
>> This data was generated with Apache/2.2.14.
> 
> Was mod_rewrite involved in any way with processing this request?

Yes, a standard dispatching recipe was in play:

  # If an actual file or directory is requested, serve directly
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d

  # Otherwise, pass everything through to the dispatcher
  RewriteRule ^home/project/www/(.*)$ /cgi-bin/dispatch.cgi/$1
[last,qsappend]

Should I open a bug report? (If someone could check quickly against
HEAD, that could be helpful as well).

Mark



bug? Inconsistent handling of query string

2013-02-15 Thread Mark Stosberg

I'd like feedback on whether the following behavior is a bug, or
intentionally inconsistent.

I was looking at the environment variables generated by this case:

   Browser URL: /file%3Fa=b?c=d

   'QUERY_STRING' => 'a=b&c=d',
   'SCRIPT_URL' => '/file?a=b',
   'SCRIPT_URI' => 'http://example.com/file?a=b',
   'REQUEST_URI' => '/file%3Fa=b?c=d',

The "%3F" is an encoded question mark.

Note that SCRIPT_URI and SCRIPT_URL treat the query string as starting
after the unencoded question mark, while the 'QUERY_STRING' variable treats
the query string as starting at the encoded question mark.

From my reading of RFC 3875 (CGI), my understanding is that only an
unencoded question mark should mark the beginning of the query string.
Thus, it appears that the QUERY_STRING variable being returned here is
incorrect. Is it?

This data was generated with Apache/2.2.14.

Thanks!

   Mark  Stosberg




Re: TRACE still enabled by default

2012-03-21 Thread Mark Montague

On March 21, 2012 16:02 , Greg Stein  wrote:

TRACE won't work at all if the most popular end-point doesn't support it.

Why would this be a bad thing?  Or, to phrase it another way, what are the
situations in which it is desirable that TRACE be already-enabled on a web
server as opposed to having the owner of the web server enable the TRACE
method in response to a specific debugging need?

Roy means that if we don't set the precedent for TRACE being present
and how it is supposed to work, then nobody else will. The Apache HTTP
server is effectively the embodiment and leader of the HTTP
specification.


Yes, that was clear.  But why would setting a precedent and leading the 
way for TRACE only being present when explicitly enabled by the owner of 
a specific web server be bad?  For the sake of discussion, what real 
world problems -- troubleshooting, debugging, or other problems -- would 
such a course of action actually cause?


--
  Mark Montague
  m...@catseye.org



Re: TRACE still enabled by default

2012-03-21 Thread Mark Montague

On March 21, 2012 15:33 , "Roy T. Fielding"  wrote:
TRACE won't work at all if the most popular end-point doesn't support it. 


Why would this be a bad thing?  Or, to phrase it another way, what are 
the situations in which it is desirable that TRACE be already-enabled on 
a web server as opposed to having the owner of the web server enable the 
TRACE method in response to a specific debugging need?


--
  Mark Montague
  m...@catseye.org



Re: questions about document_root

2011-12-08 Thread Mark Montague

On December 8, 2011 1:48 , Rui Hu  wrote:

2011/12/8 Rui Hu :

Is $DOCUMENT_ROOT in php-cgi determined by ap_add_common_vars() in
Apache? It seems not to me. I commented the line 237 assigning
DOCUMENT_ROOT and re-compiled apache. php-cgi still works fine. It
seems that $DOCUMENT_ROOT in php-cgi is not determined by this
function.



What you say is correct.  This is an Apache HTTP Server mailing list.  
You said, "in apache, I cannot find any code which assign this var." and 
I showed you where in Apache HTTP Server the DOCUMENT_ROOT environment 
variable is set when running CGIs.


What you are now asking should be sent to the PHP users mailing list, 
since it is a question about PHP.  But, see the function 
init_request_info() in the PHP source code, in the file 
sapi/cgi/cgi_main.c (lines 1123-1137)


http://svn.php.net/viewvc/php/php-src/branches/PHP_5_3_8/sapi/cgi/cgi_main.c?revision=315335&view=markup

How PHP determines the value of $_SERVER[DOCUMENT_ROOT] depends on all 
of the following:


- The value for the PHP directive cgi.fix_pathinfo
- The value for the PHP directive doc_root
- The value of the environment variable DOCUMENT_ROOT (set by Apache 
HTTP Server)


--
  Mark Montague
  m...@catseye.org



Re: questions about document_root

2011-12-07 Thread Mark Montague

On December 7, 2011 23:23 , Rui Hu  wrote:
I looked up the code of PHP and apache2, and found that PHP gets 
docroot from environment var "$DOCUMENT_ROOT". However in apache, I 
cannot find any code which assign this var.


I googled but got nothing. Can you please show me the detailed process 
generating $DOCUMENT_ROOT in $_SERVER from apache to php. Thank you 
very much!


If you invoke PHP as a CGI, then Apache HTTP Server sets DOCUMENT_ROOT 
in the function ap_add_common_vars() which is in the file 
server/util_script.c


See line 237,

https://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/server/util_script.c?revision=1100216&view=markup


--
  Mark Montague
  m...@catseye.org



Re: mod_proxy_fcgi + mod_proxy_balancer vs. php-fpm and query strings

2011-09-19 Thread Mark Montague

On September 19, 2011 8:37 , Jim Riggs  wrote:

httpd ->  balancer ->  fcgi balancer members ->  php-fpm

Issue 1: PHP-FPM does not handle the "proxy:balancer" prefix in SCRIPT_FILENAME. It does handle 
"proxy:fcgi" as a special case (see https://bugs.php.net/bug.php?id=54152 fix by jim). So, it seems we need 
to also add a "proxy:balancer" exception there unless a balanced mod_proxy_fcgi member should actually be 
using "proxy:fcgi" instead. What are people's thoughts on the prefix that should be sent by httpd in this 
case? To address this for now, I have modified PHP (fpm_main.c alongside jim's existing changes).


As the person who wrote the changes that Jim later modified and 
committed, this seems reasonable to me, assuming it is correct (I say 
"assuming" only because I have never used mod_proxy_fcgi in a balancer 
configuration).




Issue 2: Once I got Issue 1 addressed, everything started working except in the case of a query string. I spent 
considerable time tracing and trying to figure out where the issue is occurring, but I am hoping one of you who is much 
more familiar with the code than I will be able to say, "Oh, look right here." The problem is that the query 
string is getting appended to SCRIPT_FILENAME if proxied through a balancer. FPM does not like this. It does not seem to 
happen in the case of proxying directly to "fcgi://...", but once I change this to "balancer://...", 
the query string gets added to SCRIPT_FILENAME. I believe this happened with both ProxyPass* and mod_rewrite [P]. In 
mod_rewrite, this should get handled in splitout_queryargs(), but somehow it is getting added back (probably in 
proxy_balancer_canon() which adds the query string back to r->filename?). For right now, I have done a brute-force 
"fix" for this by adding the code below to the beginning of send_environment() in mod_proxy_fcgi.c, before the 
calls to ap_add_common_vars() and ap_add_cgi_vars(). I am guessing that this isn't the ultimate fix for this issue, so I 
am interested in others' thoughts.

+/* Remove query string from r->filename (r->args is already set and passed 
via QUERY_STRING) */
+q = ap_strchr_c(r->filename, '?');
+if (q != NULL) {
+*q = '\0';
+}



This sounds like it is related to 
https://issues.apache.org/bugzilla/show_bug.cgi?id=51077 as well.  
Probably a new patch is needed to consistently and properly fix all of 
the cases (regular, mod_proxy_{f,s}cgi, mod_proxy_{f,s}cgi + balancer).


--
  Mark Montague
  m...@catseye.org




mod_rewrite proxy patch review request

2011-06-17 Thread Mark Montague

 On April 27, 2011 14:03 , Mark Montague   wrote:

Could someone with commit access take a look at the following [...]

https://issues.apache.org/bugzilla/show_bug.cgi?id=51077

Fixes two issues with how mod_rewrite handles rules with the [P] flag:
- Makes query string handling for requests destined for mod_proxy_fcgi 
and mod_proxy_scgi consistent with how query strings are already 
handled for mod_proxy_ajp and mod_proxy_http.
- Makes logic for handling query strings in directory context the same 
as in server context.


Also, any advice for how to submit better bug reports and/or make the 
report and patch easier for people to review would be appreciated.  Thanks!


--
  Mark Montague
  m...@catseye.org



Re: Apache janitor ?

2011-06-08 Thread Mark Montague
 On June 8, 2011 20:11 , Igor Galić  wrote:

One of the many good suggestions they propose is to have a
"Patch Manager" - someone who makes sure that patches
submitted via Bugzilla or directly to the list don't get lost
in the noise and that people get some feedback, even if it's
just a one liner like "Thanks, we're looking into this",
"Nope, that's really not in our scope", etc...


Committers, is there anything that list/community members could do to 
pitch in and help?  What, if anything, would be useful and accepted?  
For example, is there a list of things you'd like to be done before you 
commit a patch, and are there parts of that list that could be delegated 
to one or more non-committer janitors?  I'd be willing to try and help 
(for example), if such help would be useful.


Big thanks to Igor for his message, his suggestions, and recommending 
the "Open Source Projects and Poisonous People" talk.  What Igor says 
has been bothering me for a while, too.


--
  Mark Montague
  m...@catseye.org



Patch review request

2011-04-27 Thread Mark Montague


Could someone with commit access take a look at the following patches?  
Neither of these are high priority, but I don't want to let them get too 
far out of date.


https://issues.apache.org/bugzilla/show_bug.cgi?id=51077

Fixes two issues with how mod_rewrite handles rules with the [P] flag:
- Makes query string handling for requests destined for mod_proxy_fcgi 
and mod_proxy_scgi consistent with how query strings are already handled 
for mod_proxy_ajp and mod_proxy_http.
- Makes logic for handling query strings in directory context the same 
as in server context.



https://issues.apache.org/bugzilla/show_bug.cgi?id=50880

Prevents mod_proxy_scgi from setting PATH_INFO unless requested, for 
better compliance with RFC 3875.  This will hopefully be an easy patch 
to review, since it was just submitted for consistency with a patch 
which was already committed for mod_proxy_fcgi, 
https://issues.apache.org/bugzilla/show_bug.cgi?id=50851



Thanks in advance.

--
  Mark Montague
  m...@catseye.org



Re: Need information about Apache module development.

2011-03-30 Thread Mark Montague


dev@httpd.apache.org is for discussions related to development of httpd 
itself.  Your questions below are more appropriate for the Third Party 
Module Authors' List.  See http://httpd.apache.org/lists.html#modules-dev



A rules execution engine that is able to accept the request, evaluate 
a set of ops-defined rules, and execute various responses. There is a 
preference for using DROOLS Fusion. Is there any functionality in Apache 
based on rules?


Most Apache HTTP Server configuration directives can be thought of as 
rules.  But this is probably not very helpful.


Note that Drools is written in Java, while Apache HTTP Server is written 
in C.  If you want to use Drools, you may want to consider using a web 
server that is written in Java.



What approaches need to be taken for dynamic load balancing? For 
example, suppose I have 3 instances of Apache running and, due to some 
issue, one of the instances goes down. I would expect the traffic to be 
balanced properly by the remaining 2 instances.


This will not happen unless you have a load balancer that is external to 
Apache HTTP Server.



For load balancing apart from mod_proxy_balancer any other Apache 
modules can be worth looking into?


mod_proxy_balancer runs on a front-end (proxy) server to balance 
requests that the front-end server receives between multiple back-end 
servers.  If one of the back-end servers goes down, the front-end server 
will detect this and split the traffic between the remaining back-end 
servers.   However, this may not be what you want.



As an alert broadcast engine that has the ability to distribute 
events to multiple end sources.


Apache HTTP Server is not a broadcast engine.  This is functionality 
that you would have to write and include in your module.


As a storage layer that allows for data persistence for long-term 
tracking of keys and values. We have a target of good performance. Would 
Apache be a good choice, or would another web server be a better 
suggestion?


Apache HTTP Server is not a storage layer.  Apache HTTP Server just 
processes HTTP requests.  In order to process these requests, Apache 
HTTP Server normally serves files from some filesystem or other storage 
layer which is external to Apache HTTP Server itself.  (Normally, this 
would be some local filesystem such as ext3 or NTFS; but it could also 
be a remote or distributed filesystem such as NFS, CIFS, or AFS.  In 
turn, the remote filesystem could be based on iSCSI, FibreChannel, or 
other technologies.)




Inputs would be very much appreciated.


Many thanks in advance


Good luck.

--
  Mark Montague
  m...@catseye.org



Re: mod_fcgid in httpd tarball?

2011-03-23 Thread Mark Montague

 On March 23, 2011 7:37 , Graham Leggett   wrote:
Do we want to introduce mod_fcgid now into httpd 2.3.x for the next 
beta?


How do we reconcile mod_fcgid with mod_proxy_fcgi?


Do they need to be reconciled?  Each currently has strengths the other 
lacks.  I'd be fine with having both in future httpd 2.3.x betas and 
2.4, at least until one clearly becomes redundant compared to the other.


--
  Mark Montague
  m...@catseye.org



Re: mod_fcgid in httpd tarball?

2011-03-18 Thread Mark Montague
 On March 18, 2011 18:07 , "William A. Rowe Jr."  
wrote:

It seems like mod_fcgid has made huge progress and is now in a much
more stable bugfix epoch of it's life, similar to how mod_proxy had
progressed when development was kicked out of core for major http/1.1
rework, and brought back in when a vast percentage of it's bugs had
been addressed.

Do we want to introduce mod_fcgid now into httpd 2.3.x for the next beta?


For what it's worth, on the systems I'm deploying, I'm using 
mod_proxy_fcgi and putting in as much effort as necessary to fix any 
bugs, add features I need to it, etc., simply because mod_proxy_fcgi is 
a core module, while mod_fcgid is not.  If mod_fcgid were in core, I may 
have wound up putting the effort there instead.  (I say "may have" 
because I've come to think that mod_proxy_fcgi is actually a better 
choice for my particular needs, anyway).


--
  Mark Montague
  m...@catseye.org



Adding ProxyErrorOverride support to mod_proxy_fcgi

2011-03-18 Thread Mark Montague
 I've created a patch to add support for the ProxyErrorOverride 
directive to mod_proxy_fcgi:


https://issues.apache.org/bugzilla/show_bug.cgi?id=50913

Could someone review this patch, please, and get back to me with 
feedback and/or requests for changes?  It's not important to me to have 
this functionality in 2.4, per se, but I'd like to address any concerns 
while everything is still relatively fresh in my mind.


Many thanks!

--
  Mark Montague
  m...@catseye.org



Re: How do I set KeepAliveTimeout for Apache 2.2 ?

2011-02-24 Thread Mark Watts
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/23/2011 08:27 PM, Datta321 wrote:
> 
>  Can anyone let me know where exactly and in which files do I have to make
> the changes in order to set the following variables in Apache server 2.2 ?
>  
> - KeepAlive
> - KeepAliveTimeout
> - MaxKeepAliveRequests
> - MaxClients

They're all things you set in your httpd.conf (or apache2.conf if you're
Debuntu). I'd be surprised if many of these aren't set already so grep
your configs for them.
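
A minimal sketch of what such a stanza typically looks like (values are
common starting points only, not recommendations; tune for your workload):

```
# In httpd.conf (or apache2.conf on Debian/Ubuntu)
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
MaxClients 256
```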


- -- 
Mark Watts BSc RHCE
Senior Systems Engineer, MSS Secure Managed Hosting
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org/

iEYEARECAAYFAk1mIKwACgkQBn4EFUVUIO10DwCg/K8Z2gwZqO+/ZYA9u/DtYO0E
gy0An1o31XQPW25hJJBqWG7KpbKXl/qk
=jcqc
-END PGP SIGNATURE-


Re: weird error after upgrade of cpan libs

2011-02-02 Thread Mark Hedges

Everything was working (as far as I knew) until I ran the
CPAN upgrade of ExtUtils::CBuilder and Module::Build.

Hrmm, the CGI example may be bad, I picked that up from a
search result... $r is undefined.  Handlers loaded from
apache conf work fine, so this does not appear to affect
production as far as I can tell right now.

`./Build test` shows the error, but `prove -r t` does not.

I use the CentOS binary packages for perl, mod_perl, httpd,
etc.  I built libapreq2 by hand, but it was working prior to
doing these CPAN upgrades.  mod_perl seems to work
otherwise.

Mark


On Wed, 2 Feb 2011, Fred Moyer wrote:

> Did you build mod_perl with /usr/bin/perl or another binary?
>
> Did you 'LoadModule perl_module modules/mod_perl.so'?
>
> This looks like more of a mod_perl issue than apreq.
>
> On Wed, Feb 2, 2011 at 12:44 PM, Mark Hedges  wrote:
> >
> > I upgraded some CPAN libs including Module::Build and
> > ExtUtils::CBuilder and started running into some problems.
> >
> > https://rt.cpan.org/Public/Bug/Display.html?id=65382
> >
> > Hrmm, this seems to also affect cgi's in production:
> >
> > [Wed Feb 02 12:41:48 2011] [error] [client 64.22.103.163] /usr/bin/perl: 
> > symbol lookup error: 
> > /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/auto/APR/Request/Apache2/Apache2.so:
> >  undefined symbol: modperl_xs_sv2request_rec
> >
> >  #!/usr/bin/perl
> >
> >  use strict;
> >  use warnings FATAL => 'all';
> >  use English '-no_match_vars';
> >  use YAML;
> >  use Apache2::Request;
> >  my $r = shift;
> >  my $req = Apache2::Request->new($r);
> >  $r->print("Content-type: text/plain\n\n");
> >  $r->print(Dump(\%ENV));
> >
> > Help?
> >
> > Mark
> >
> >
>

weird error after upgrade of cpan libs

2011-02-02 Thread Mark Hedges

I upgraded some CPAN libs including Module::Build and
ExtUtils::CBuilder and started running into some problems.

https://rt.cpan.org/Public/Bug/Display.html?id=65382

Hrmm, this seems to also affect cgi's in production:

[Wed Feb 02 12:41:48 2011] [error] [client 64.22.103.163] /usr/bin/perl: symbol 
lookup error: 
/usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/auto/APR/Request/Apache2/Apache2.so:
 undefined symbol: modperl_xs_sv2request_rec

  #!/usr/bin/perl

  use strict;
  use warnings FATAL => 'all';
  use English '-no_match_vars';
  use YAML;
  use Apache2::Request;
  my $r = shift;
  my $req = Apache2::Request->new($r);
  $r->print("Content-type: text/plain\n\n");
  $r->print(Dump(\%ENV));

Help?

Mark



Compiling httpd with older apr

2010-12-06 Thread Mark Watts
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Hi,

Can someone enlighten me as to any issues that may arise due to
compiling a modern httpd (2.2.17) with an older apr/apr-util (1.2.7)?

I've built httpd 2.2.17 (Fedora 14 SRPM) on a CentOS 5.5 box, and it
builds cleanly, but I'm not sure if I'm missing functionality by not
having a more recent apr.

Regards,

Mark.

- -- 
Mark Watts BSc RHCE
Senior Systems Engineer, Secure Managed Hosting
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org/

iEYEARECAAYFAkz8z34ACgkQBn4EFUVUIO2ZeACgiXuimBZDHLnAVm5yQzlW+M/m
bPAAnAwz1FcK05C4wDDCm3tr8YMX6buI
=28SL
-END PGP SIGNATURE-


Re: Cipher suite used in default Apache

2010-10-28 Thread Mark Montague

 On October 28, 2010 17:30 , smu johnson   wrote:
Unfortunately, I cannot figure out a single way for apache2ctl to tell 
me what ciphers apache is using.  Not what it supports, but what it is 
currently allowing when clients use https://.


You can configure httpd to log which ciphers are actually being
used for each request; see:
http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#logformats
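As a sketch, a CustomLog line along those lines (using mod_ssl's standard
%{...}x variables; the file path is an example) might look like:

```apache
# Log the negotiated protocol and cipher for each request (sketch;
# requires mod_ssl and mod_log_config).
CustomLog logs/ssl_request_log \
    "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
```

This is essentially the stock ssl_request_log format shipped in httpd's
default SSL configuration.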



The reason is I'm worried that it's allowing 40-bit encryption, and I 
would like to see actual verification from Apache whether or not my 
current setup is allowing it.


To see if 40-bit encryption is permitted, run the following from the 
command line:


openssl s_client -connect your-web-server.example.com:443 -cipher LOW

If you get a line that looks like

140735078042748:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 
alert handshake failure:s23_clnt.c:658:


then 40-bit encryption is not supported and you are safe.  If, however, 
you get an SSL-Session section in the output, then the Cipher line will 
indicate which cipher was actually negotiated and used in this test.


More information and additional tests and examples are available at

http://idlethreat.com/site/index.php/archives/181
http://stephenventer.blogspot.com/2006/07/openssl-cipher-strength.html

--
  Mark Montague
  m...@catseye.org



Re: mod_cache: serving stale content during outages

2010-10-19 Thread Mark Nottingham
FYI, while you're doing this it might be interesting to make it explicitly 
controllable by the origin:
   http://tools.ietf.org/html/rfc5861

Cheers,


On 12/10/2010, at 9:43 AM, Graham Leggett wrote:

> Hi all,
> 
> RFC2616 allows us to serve stale content during outages:
> 
>/* RFC2616 13.8 Errors or Incomplete Response Cache Behavior:
> * If a cache receives a 5xx response while attempting to revalidate an
> * entry, it MAY either forward this response to the requesting client,
> * or act as if the server failed to respond. In the latter case, it MAY
> * return a previously received response unless the cached entry
> * includes the "must-revalidate" cache-control directive (see section
> * 14.9).
> */
> 
> The next patch teaches mod_cache how to optionally serve stale content should 
> a backend be responding with 5xx errors, as per the RFC above.
> 
> In order to make this possible, the cache_out_filter needed to be cleaned up 
> so that it cleanly discarded data before the EOS bucket (instead of ignoring 
> it, as before). The cache_status hook needed to be updated so that 
> r->err_headers_out could be passed to it.
> 
> Regards,
> Graham
> --
> 

--
Mark Nottingham   http://www.mnot.net/
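A minimal configuration sketch of serving stale content on backend errors,
assuming the CacheStaleOnError directive that mod_cache later gained in
httpd 2.4 (paths are examples):

```apache
# Sketch: cache responses on disk and keep serving stale entries when
# the backend answers with a 5xx (assumes httpd 2.4 mod_cache plus
# mod_cache_disk).
CacheEnable disk /
CacheRoot "/var/cache/httpd"
CacheStaleOnError on
```

Origin servers can also opt in explicitly via the RFC 5861
stale-if-error Cache-Control extension mentioned above.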





Re: threads apache http

2010-10-06 Thread Mark Watts
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/06/2010 02:39 AM, Paulo Eustáquio wrote:
> Hello!!
> I would like to know how apache schedule their threads? for example: how
> the workers are scheduled??
> thaks a lot

This is entirely down to the Operating System concerned.

Mark.

- -- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, IPR Secure Managed Hosting
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org/

iEYEARECAAYFAkysNtcACgkQBn4EFUVUIO10uACfTmL5cPShzaxfnt11AlLo5tGE
0+cAnRXrosKTErrzaoHI1veLwzvCX/p+
=vOYy
-END PGP SIGNATURE-


Re: Talking about proxy workers

2010-08-06 Thread Mark Watts
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/08/10 12:13, Jeff Trawick wrote:
> On Fri, Aug 6, 2010 at 3:54 AM, Rainer Jung  wrote:
>> On 05.08.2010 21:30, Eric Covener wrote:
>>>>
>>>>
>>>> http://people.apache.org/~rjung/httpd/trunk/manual/mod/mod_proxy.html.en#workers
>>>
>>> "A direct worker is usually configured using any of ProxyPass,
>>> ProxyPassMatch or ProxySet."
>>
>>> I don't know much about Proxy, but can this hammer home a bit more
>>> that these directives create a new worker implicitly based on [some
>>> parts of?] the destination URL?
>>
>> Good point. I updated the patch and HTML page to stress the identification
>> of workers by their URL.
>>
>>> And what happens when there's
>>> overlap?
>>
>> There's a warning box at the end of the Workers section talking about that.
>> I slightly rephrased it to also contain the term "overlap".
>>
>> New patch:
>>
>> http://people.apache.org/~rjung/patches/mod_proxy_docs_workers-v2.patch
> 
> nits:
> 
> +  There are two builtin workers, the default forward proxy worker and the
> 
> "built-in"
> 
> +  optionally included in a <directive type="section" module="mod_proxy">Proxy</directive>
> +  directive.
> 
> How about using "container" at the end instead of "directive"? (shrug)
> 
> +  . Direct workers can use connection pooling,
> +  HTTP Keep-Alive and individual configurations for example
> +  for timeouts.
> 
> (dumb question: what's the diff between keepalive and connection
> pooling?  reuse for one client vs. reuse for any client?)
> 
> That last part sounds a little awkward.  Maybe something like this:
> 
> A number of processing options can be specified for direct workers,
> including connection pooling, HTTP Keep-Alive, and I/O timeout values.
> 
> +   Which options are available is depending on the
> +  protocol used by the worker (and given in the origin server URL).
> +  Available protocols include ajp, fcgi,
> +  ftp, http and scgi.
> +
> 
> The set of options available for the worker depends on the protocol,
> which is specified in
> the origin server URL.  Available protocols include 
> 
> 
> +  A balancer worker is created, if its worker URL uses
> 
> no comma
> 
> +  Worker sharing happens, if the worker URLs overlap. More precisely
> +  if the URL of some worker is a leading substring of the URL of another
> +  worker defined later in the configuration file.
> 
> Worker sharing happens if the worker URLs overlap, which occurs when
> the URL of some worker is a leading substring of the URL of another
> worker defined later in the configuration file.
> 
> +  In this case the later worker isn't actually created. Instead
> the previous
> +  worker is used. The benefit is, that there is only one connection pool,
> +  so connections are more often reused. Unfortunately all the
> configuration attributes
> +  given explicitly for the later worker overwrite the respective
> configuration
> +  of the previous worker!
> 
> 
> This sounds like a discussion of pros and cons.  There's no pro, since
> the user didn't intend to configure it this way, right?


Can we have some examples put in that section - it's a little wordy and I
found it a little hard to understand, and I use mod_proxy quite a lot!

Mark
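A sketch of the overlap case the patch text describes (hypothetical backend
URLs and parameters):

```apache
# The first worker's URL is a leading substring of the second, so the
# second ProxyPass does not create a new worker: it reuses the first
# one, and its explicitly given parameter (max=20) overwrites max=10
# on the shared worker.
ProxyPass /app     http://backend.example.com/app     max=10
ProxyPass /app/img http://backend.example.com/app/img max=20
```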

- -- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, IPR Secure Managed Hosting
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org/

iEYEARECAAYFAkxb+FIACgkQBn4EFUVUIO2c8gCg1Qd2zkUxtvzMRAi1FrccaK0+
UXcAoOR/Nmle+KN0/PaIKTMGMo6DJkf1
=KyEK
-END PGP SIGNATURE-


Re: HTTP trailers?

2010-08-04 Thread Mark Nottingham
Just to kick this discussion a bit;

The use cases that I've come across:

- adding debug / trace information to responses (e.g., how long it took, 
resource consumption, errors encountered)
- adding a late-bound ETag or Last-Modified to the response (so that you don't 
have to buffer)
- adding Content-MD5 or perhaps even something cryptographically signed (ditto)

I've had discussions with a few browser folks who have shown interest in making 
trailers available on their side (e.g., showing trace information in Firebug).

I don't agree that Apache should 5xx if trailers are set and they aren't able 
to be sent (e.g., because of a HTTP/1.0 client); the semantics of trailers in 
HTTP/1.1 are that they have to be able to be ignored by clients anyway (unless 
TE: trailers is received in the request, which IME is very uncommon). Dropping 
them on the floor is a fine solution -- as long as the code inserting the 
trailers knows this.

In the long run, it would also be interesting to have Apache examine the TE 
request header to determine whether trailers are supported; if they aren't, it 
could buffer the response and put the trailers up into the headers 
transparently. Of course, large responses might make this impractical, but in 
some cases it could be useful.
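As background, a chunked response carrying a trailer looks roughly like this
on the wire (a sketch per RFC 2616 section 3.6.1; the chunk data and the
Content-MD5 value are made up, and every line is CRLF-terminated):

```http
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Trailer: Content-MD5

7
Mozilla
0
Content-MD5: 4e9sCSKbyVHKtNC0Q6W6qQ==

```

The Trailer header announced up front is what lets a client that sent
"TE: trailers" know which fields to expect after the last chunk.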

Finally -- HTTPbis gives us an opportunity to refine the design of trailers if 
there are issues. 

Cheers,



On 24/04/2010, at 2:01 AM, William A. Rowe Jr. wrote:

> On 4/23/2010 10:25 AM, Brian J. France wrote:
>> 
>> On Apr 23, 2010, at 10:08 AM, William A. Rowe Jr. wrote:
>> 
>>> On 4/23/2010 9:03 AM, Brian J. France wrote:
>>>> 
>>>> You can build a module that is able to insert a trailer by adding a filter 
>>>> and ap_hook_create_request call.
>>> 
>>> But doesn't this defeat the purpose of using a modular server
>>> architecture?  It seems this should be a facility of the core HTTP
>>> filter, if anyone wants to offer the patch for 2.3.
>> 
>> 
>> I agree, my module was more of a proof of concept that I can do it and then 
>> get some other server to be able to use it.
> 
> :)
> 
>> Not sure what the best solution would be because multiple things need to 
>> happen.  First part is you have to force chunk encoding either by removing 
>> content_length filter or tweaking the code to not add it if doing a trailer 
>> (which you might not know until it is time to insert a tailer).
> 
> Well, you also have to insert the 'Trailers' header, which must be known at 
> the
> beginning of the request, so that becomes a simple trigger for dropping the
> content-length and forcing chunked encoding.
> 
> "If no Trailer header field is present, the trailer SHOULD NOT include any 
> header
> fields" is a very explicit statement :)
> 
> This could be constructed from r->trailers_out, however users need to 
> understand
> that after the beginning of the response, r->trailers out cannot be extended, 
> only
> modified.
> 
>> Then you have to tweak modules/http/chunk_filter.c to allow others to insert 
>> a trailer, like adding an ap_hook_http_trailer or an optional function for 
>> inserting it.  I don't know if multiple modules should be allowed to add a 
>> trailer; if you do, how do you join them, since a trailer is nothing but a 
>> string ending with ASCII_CRLF (just strcat?).  Should we just grab 
>> r->notes['http_trailer'] and let modules just add/set/append values?
>> 
>> I think there is a bigger design discussion that should happen, but I might 
>> have a patch down the road as a starter if all goes well at work.
> 
> These pieces seem more like implementation details.


--
Mark Nottingham http://www.mnot.net/



Re: Missing proxy_balancer feature

2010-06-30 Thread Mark Watts

> As a reasonably heavy user of mod_proxy - all our web sites are
> proxied through a pair of reverse proxies - would you be interested in
> the problems we have encountered using it?
> 
> In particular, bug 45950 is symptomatic of our issues. We can't
> adjust/tune our reverse proxy servers and gracefully restart, because
> the proxy worker state is utterly mangled following a graceful
> restart.
> This means we must do a full restart to avoid mangling state, which in
> turn means any balancer that has had its configuration dynamically
> changed via balancer_manager will have its configuration reset to the
> config on disk.
> 
> The usual result of this is that sysadmin A will restart the proxies,
> unaware that sysadmin B has configured site S to be reverse proxied to
> application server app2 rather than app1, as app1 is undergoing
> maintenance and is down => we just broke one of our sites.
> 
> The other net result is that we are now loath to adjust/restart apache
> on the proxy servers, since we must do a hard restart, disrupting any
> sessions on that server. This is so onerous that we are now discussing
> moving away from apache as a reverse proxy, and instead looking at
> things like pound, varnish, perlbal.
> 
> This is a shame, as mod_proxy has many plus points - particularly that
> it's apache underneath, with all the flexibility that allows, and the
> mindshare that apache has already.
> 
> Cheers
> 
> Tom

A restart of httpd (graceful or otherwise) has no understanding that the
running config can be any different from that on disk.

Indeed, how would httpd differentiate between a restart to reconfigure a
given balancer (which may itself have a different in-memory loadfactor)
and a restart for some other part of the server?

One 'solution' may be to change the way you disable a balancer member
such that you change it in httpd.conf and do a graceful restart,
combined with the quiescence changes mentioned earlier so that balancer
members are withdrawn nicely.

I'd guess that the only true way to handle this problem is to somehow
compare the balancer configuration on disk with that in memory, and
restore each BalancerMember's loadfactor on restart if the
configurations match.

I suspect, but have no proof, that other load balancer software has this
same issue.

Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


Re: canned deflate conf in manual -- time to drop the NS4/vary?

2010-06-04 Thread Mark Nottingham
Changing the semantics of Accept-Encoding / Content-Encoding is likely out of 
scope for HTTPbis; I have a hard time believing it wouldn't make existing 
implementations non-conformant, which we can really only do if there's a 
serious security or interoperability concern.

OTOH I think it would be reasonably easy to change Squid and other 
intermediaries to "pass through" TE; i.e., if the client asks for hop-by-hop 
compression, ask the server for hop-by-hop compression as well (so they don't 
have to dynamically decompress and possibly buffer responses for clients that 
don't support it). 

The question would be whether any reasonable number of browsers would start 
sending TE. Given that both Chrome and FF are on a perf kick these days, I 
think it's possible. The problem with hop-by-hop compression has always been 
that "no-one else does it"... if Apache were to start, that would be a step.

Interestingly, it appears that Mozilla had partial support at one time:
  http://www-archive.mozilla.org/projects/apache/gzip/

> Here we hope to use the new HTTP1.1 TE: gzip header to request compressed 
> versions of HTML files. Then the server would need to do streaming 
> compression to generate the results. To minimize the overhead on the server 
> it should keep a cache of the compressed files to quickly fill future 
> requests for the same compressed data.
> 
> The current Mozilla source can already accept and decode Transfer-encoding: 
> gzip data, but does not currently send the TE: header.

but I can't find the corresponding code in the current Mozilla source.
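For illustration, the hop-by-hop variant discussed above would look roughly
like this on the wire (a sketch; header semantics per RFC 2616, hostname
hypothetical):

```http
GET /page.html HTTP/1.1
Host: www.example.com
TE: gzip

HTTP/1.1 200 OK
Content-Type: text/html
Transfer-Encoding: gzip, chunked
```

Unlike Content-Encoding, the Transfer-Encoding applies only to this hop, so
an intermediary is free to decompress before forwarding, and no Vary header
is needed.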

Regards,


On 04/06/2010, at 11:36 PM, Brian Pane wrote:

> On Fri, Jun 4, 2010 at 6:10 AM, "Plüm, Rüdiger, VF-Group"
>  wrote:
> [...]
>> Isn't that what Transfer-Encoding is designed for?
> 
> Yes, and in fact if we were talking about a brand new protocol, I'd
> probably argue in favor of putting the compression specifier in the
> Transfer-Encoding.  I think a change in the semantics of
> Content-Encoding in HTTP/1.1 might be a way to obtain similar benefits
> without breaking existing software.
> 
> -Brian


--
Mark Nottingham http://www.mnot.net/



Re: canned deflate conf in manual -- time to drop the NS4/vary?

2010-06-04 Thread Mark Nottingham

On 04/06/2010, at 6:51 PM, toki...@aol.com wrote:
> 
> I think you need to do a reboot on your definition of 'anecdotal'.

Good for you.


> The thread above was a focused discussion about what ACTUALLY
> happens if you try to 'Vary:' on 'User-Agent' in the real world
> these days accompanied by some additional (relevant) information about
> what COULD (actually) happen if you (alternatively) try to 'Vary:' on
> 'Accept-encoding:'. If you still think any of it 'lacks veracity'
> and is 'not trustworthy' then my only suggestion would be to spend
> a little time on Google or Bing. It's an ongoing 'story'.

I'm not sure why you're using so many quotes, unless you're trying to put words 
into my mouth. Please stop.


> > certainly no reproducible tests.
> 
> What sort of tests would you like to see?

Ones that can be reproduced. Preferably in an automated fashion, or at least 
with demonstrable proof. Waving around the phrase "kernel debugger" doesn't 
count.


> The 2.5 release of SQUID ( Early 2004 ) was the very FIRST version of that 
> Proxy Server that made any attempt to handle 'Vary:' headers at all. Prior to
> that, they were just doing the same thing all the browsers would. If a 'Vary:'
> header of ANY description arrived in the stream, it was simply treated as if
> it was 'Vary: *' ( STAR ) and there was no attempt to cache it at all.

What's your point? The deployment footprint of 2.4 is vanishingly small, given 
that it had a LOT of bugs, hasn't been supported for years, and still uses 
select/poll. 


> If you Google 'Vary Accept-Encoding Browsers SQUID' but also include
> Robert Collins name you'll find more than if you use 'Henrik's' name
> since he was ultra-focused on the ETag thing. ( He still is ).

Yes, as am I, and Roy for that matter, last time I talked to him about it.


> Only about 12 months ago one of the SQUID User's forum lit up with another
> 'discovered' problem surrounding all this 'Vary:' stuff and this had
> to do with non-compliance on the actual 'Accept-Encoding:' fields
> themselves coming from Browsers/User-Agents. ( Browser BUGS ).
> In some cases the newly discovered problem reflects the same nightmare 
> seen TODAY with the out-of-control use of 'User-Agent'. 

It's not a bug in the implementations, it's a grey area in 2616 that HTTPbis 
has since worked to resolve; 
  http://trac.tools.ietf.org/wg/httpbis/trac/ticket/147


> Too many variants being generated.
> 
> Squid User's Forum...
> http://www.pubbs.net/200904/squid/57482-re-squid-users-strange-problem-regarding-accept-encoding-and-compression-regex-anyone.html
> 
> Here's just a sampling of what was being shown from REAL WORLD
> Server logs just 12 months ago... 
> 
> Accept-Encoding: , FFF
> Accept-Encoding: mzip, meflate
> Accept-Encoding: identity, deflate, gzip
> Accept-Encoding: gzip;q=1.0, deflate;q=0.8, chunked;q=0.6,
> identity;q=0.4, *;q=0
> Accept-Encoding: gzip, deflate, x-gzip, identity; q=0.9
> Accept-Encoding: gzip,deflate,bzip2
> Accept-Encoding: ndeflate
> Accept-Encoding: x-gzip, gzip
> Accept-Encoding: gzip,identity
> Accept-Encoding: gzip, deflate, compress;q=3D0.9
> Accept-Encoding: gzip,deflate,X.509
> Yada, yada, yada...

Yes, yes, but in the REAL WORLD (as you like to say), there are only a few 
common browser families, and there is a high degree of similarity within those 
families. Caches may see some duplication, but the replacement algorithms will 
generally do the right thing. In the meantime, we'll fix header normalisation 
in Squid, Traffic Server and other caches.

I'm not necessarily agreeing with those who say that GZIP should be turned on 
by default in Apache now, but I hate to see the argument against it made with 
so many shoddy straw-men.


> People get REALLY PISSED these days when everything was running along
> just fine and suddenly there are 'problems'. Heads can roll.

Why don't you just shout "BOO" and get it over with?

*shakes head*



--
Mark Nottingham http://www.mnot.net/



Re: canned deflate conf in manual -- time to drop the NS4/vary?

2010-06-03 Thread Mark Nottingham
On 02/06/2010, at 9:00 AM, toki...@aol.com wrote:

> > Sergey wrote...
> > That's new to me that browsers don't cache stuff that has Vary only on 
> > Accept-Encoding - can you post some statistics or describe the test you ran?
> 
> Test results and statistics...
> 
> Apache DEV forum...
> http://www.pubbs.net/200908/httpd/55434-modcache-moddeflate-and-vary-user-agent.html

I don't see anything there but anecdotal evidence, certainly no reproducible 
tests.

> apache-modgzip forum...
> http://marc.info/?l=apache-modgzip&m=103958533520502&w=2

Seven and a half years old, and again anecdotal.

> Etc, etc. Lots of discussion about this has taken place over
> on the SQUID forums as well.

Yes; most of it in the past few years surrounding the ETag bug in Apache, not 
browser bugs. 

Regards,


--
Mark Nottingham http://www.mnot.net/



Re: HTTP trailers?

2010-04-22 Thread Mark Nottingham
Yes; see
  http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6.1

Cheers,


On 23/04/2010, at 11:44 AM, Sander Temme wrote:

> Mark, 
> 
> On Apr 22, 2010, at 5:40 PM, Mark Nottingham wrote:
> 
>> I couldn't find any obvious way to set HTTP trailers in Apache 2.x without 
>> taking over all response processing (a la nph).
> 
> Stupid question: what is an HTTP trailer?  Is this in the context of Chunked 
> transfer-encoding? 
> 
> S.
> 
> -- 
> Sander Temme
> scte...@apache.org
> PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF
> 
> 
> 


--
Mark Nottingham http://www.mnot.net/



HTTP trailers?

2010-04-22 Thread Mark Nottingham
I couldn't find any obvious way to set HTTP trailers in Apache 2.x without 
taking over all response processing (a la nph).

Did I miss something?

Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: Soliciting thoughts about this proposed patch

2010-03-29 Thread Mark Watts
On Mon, 2010-03-29 at 10:29 -0500, Daniel Ruggeri wrote:
> Hello devs;
>I have written and submitted patch 48939 and wanted to solicit any
> thoughts or feelings about the idea of adding this configuration
> directive. For a quick summary, the directive would allow the server
> administrator to configure HTTPD such that it will force a worker into
> error status if one or X number of  HTTP status codes are seen after
> the request completes. I have noticed a fair amount of buzz about the
> topic around the web and thought this would be a good addition. How do
> you folks feel about this notion? For me, this seems to be a good
> answer to some of the slow-starting Application Servers out there.
> 
> Thank you kindly for your time
> -Daniel Ruggeri

I'm all for it - sounds like a no-brainer to me.

Mark.
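For the record, a parameter with the behaviour Daniel describes later
appeared in mod_proxy_balancer as failonstatus; a configuration sketch
(hostnames and ports hypothetical):

```apache
# Sketch: force a balancer member into error state whenever the
# backend returns one of the listed status codes (failonstatus is
# the httpd 2.4 balancer parameter).
ProxyPass /app balancer://mycluster failonstatus=500,503
<Proxy balancer://mycluster>
    BalancerMember http://app1.example.com:8080
    BalancerMember http://app2.example.com:8080
</Proxy>
```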

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




Listen syntax RFE

2010-02-08 Thread Mark Watts

I often need to configure httpd to Listen on more than one IP in a
range, and on more than one port.

It would be nice to be able to do the following, for example:

Listen 192.168.1.1-10:80,443,8000-8020

This would replace a whole bunch of listen statements, and make configs
much more concise.

Regards,

Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




RE: ErrorDocument and ProxyErrorOverride

2010-01-20 Thread Mark Watts
On Tue, 2010-01-19 at 16:54 -0800, Jeff Tharp wrote:
> That does fix this issue, but the browser still gets a 200 instead of a 404.  
> I know that's caused some confusion for our operation as well.  Think about 
> SEO here -- we have a site behind an Apache-based reverse proxy.  We want to 
> use ProxyErrorOverride and ErrorDocument to make sure we send proper error 
> pages no matter what the backend application spits out (because often times 
> it's more like a stack trace than a nice human-readable page).  Yet, if we 
> trigger a 404, we send a 200 back, which of course means a search engine 
> crawler misses the original 404.  I need ProxyErrorOverride on to deal with 
> the 500/503 type errors from the backend.  And thus I can't send a nice 404 
> from the backend, because the proxy will still override it.  So how do I 
> return a clean 404 in that scenario?

That's annoying - I'd only been looking at the logs since that was what I
was worried about, but now you mention it, returning a 404 to the client
is just as important :/

Mark.

>  Jeff Tharp, System Administrator ESRI - Redlands, CA
>  http://www.esri.com 
> 
> -Original Message-
> From: Mark Watts [mailto:m.wa...@eris.qinetiq.com] 
> Sent: Tuesday, January 19, 2010 1:12 AM
> To: dev@httpd.apache.org
> Subject: Re: ErrorDocument and ProxyErrorOverride
> 
> 
> > What appears in the log file of the proxy depends on how the access
> > log line is configured.
> > 
> > Have a look here
> > http://httpd.apache.org/docs/2.2/mod/mod_log_config.html#formats
> > 
> > If you have %s in your CustomLog directive, you'll log the 404. If you
> > have %>s you'll log the 200.
> 
> Bingo!
> 
> I do indeed have %>s in my LogFormat, which I'd never noticed before
> (ahh, the joys of cut 'n paste)
> 
> Thanks for this.
> 
> Mark.
> 

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




Re: ErrorDocument and ProxyErrorOverride

2010-01-19 Thread Mark Watts

> What appears in the log file of the proxy depends on how the access
> log line is configured.
> 
> Have a look here
> http://httpd.apache.org/docs/2.2/mod/mod_log_config.html#formats
> 
> If you have %s in your CustomLog directive, you'll log the 404. If you
> have %>s you'll log the 200.

Bingo!

I do indeed have %>s in my LogFormat, which I'd never noticed before
(ahh, the joys of cut 'n paste)

Thanks for this.

Mark.
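A sketch making the distinction explicit (both format strings use standard
mod_log_config items):

```apache
# %s  logs the original status (the 404, before ErrorDocument handling);
# %>s logs the final status sent to the client (the 200 here).
LogFormat "%h %l %u %t \"%r\" %s %b"  log_original_status
LogFormat "%h %l %u %t \"%r\" %>s %b" log_final_status
```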

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




ErrorDocument and ProxyErrorOverride

2010-01-18 Thread Mark Watts

[I originally sent this to users@ but here might be a better place to
ask]


I have a two Apache 2.2.13 servers. One is a straight proxy (doing SSL
offload) through to the other.

The proxy has the following settings relating to my issue:


ErrorDocument 404 /errors/error404.html
ProxyErrorOverride on

ProxyPass/  http://192.168.1.1/
ProxyPassReverse /  http://192.168.1.1/


This means 404's caused by content not existing on the back-end server
are captured by the proxy, which in turn pulls the page from the
back-end server. (The real configuration is more complicated, since in
reality I'm proxying another location to some IIS boxes, and they don't
have the customised error page).

I'm under the impression from the ErrorDocument documentation
(http://httpd.apache.org/docs/2.2/mod/core.html#errordocument) that my
custom 404 page should be being returned, while retaining the 404 status
code since I'm not technically redirecting it to another server from the
client PoV.
Indeed, testing this locally (without a proxy) returns me a customised
page and a 404 status code.


Does this still hold when your ErrorDocument is actually behind a proxy?
Testing here suggests not - I'm getting the custom page but with a 200.
Naturally, this means that logs from the proxy never include 404's,
which isn't the case. (I can't really use logs from the back-end server,
since they don't reflect the true source IP).

Mark.


-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




Re: A fundamentally secure Apache server, any interest?

2009-11-16 Thread Mark Watts
On Mon, 2009-11-16 at 08:42 -0500, Sweere, Kevin E CTR USAF AFRL/RYT
wrote:
> Greetings,
>  
> I work for the US Air Force.  We have a prototype that dramatically,
> fundamentally increases a web server's security.  
>  
> We run an Apache server within a minimized, user-level-only, Linux variant
> only within RAM and from only a DVD (no harddrive).  With no shells, hackers
> have nowhere to go.  With no persistent memory, malware has no place to
> reside.  A simple reboot restores the website to a pristine state within
> minutes.  
>  
> Because a LiveDVD holds the OS, apps and content, it's best for static,
> non-interactive, low-volume, high-value, highly-targeted websites.  Any
> change means burning a new DVD, but this also makes testing easier and less
> noisy.  Logs are tricky to extract. 
>  
> While it has worked well, some of us believe its usability drawbacks (e.g.
> limited ability to receive input from users, every change needs a new DVD)
> outweigh its great security benefits making it unmarketable (in govt or
> industry) and thus just another prototype to leave on the shelf.
>  
> I'm curious what your group thinks.  Thanks in advance -- I don't quite know
> with whom to discuss this idea.
>  
> Kevin Sweere

Hi Kevin,

The idea of a CD/DVD-ROM based webserver isn't new; I know we did some
internal research into it many years ago and came to the same
conclusions you have - the level of security offered seriously impedes
your ability to use/manage the server.

You also run into problems if your servers don't actually have an
optical drive (eg: Blades).

If I was looking for that level of assurance that my data hasn't been
tampered with, I'd be looking at using a mechanism of snapshotting your
webserver in some way such that a rollback is trivial. Linux LVM,
Solaris ZFS or even VMWare all offer this kind of snapshot and rollback.
I'd also be using TripWire or something similar to verify my content
directories.

Apache configured with minimum modules to simply serve static ASCII and
image files is about as secure as it gets for that type of content.
SELinux stops a rogue CGI from reading /etc/shadow, and mod_security
helps to block a lot of crud from ever generating a response from the
server.


Read-Only web servers are certainly secure but by their nature, very
time-consuming to manage.


Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




balancer-manager and server-status feature request.

2009-11-16 Thread Mark Watts

The statistics one gets from both /balancer-manager and mod_status are
useful but of course only exist until httpd is restarted.

It would be nice if they could be configured to periodically write some
lines to the error log (at LogLevel info or so) with these statistics so
the data can be preserved.
This would make it easy to parse by log monitoring tools and also allow
for analysis if desired.

XML output of both would be the icing on the cake :)


Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg




Re: Zero Config

2009-11-13 Thread Mark Watts
On Fri, 2009-11-13 at 07:15 -0500, Rich Bowen wrote:
> On Nov 13, 2009, at 06:39 , Igor Galić wrote:
> 
> >
> > It's been a long time now, but I still remember Rich asking for a
> > Apache httpd starting *without* a config.
> > I'm not sure this is easily doable, but I'm pretty confident that
> > we can slim down the basic config (httpd.conf) to a very minimalistic
> > level and put all the fluff in extra
> 
> Of course it's doable. Easily? I don't know. But certainly doable. You  
> define and document default values for everything, and then config  
> directives are only ever needed when things differ from the default.  
> Lots of products do this.
> 
> The default for LoadModule could be that you load all modules in  
> HTTPD_ROOT/modules unless there are LoadModule directives specifying  
> otherwise. We could (and should) protect  by default, and  
> allow access to HTTPD_ROOT/htdocs by default. And so on. Much of the  
> stuff that's in those configs you referenced can already be omitted  
> and fall back to default values.

+1, and can we _please_ default RewriteEngine to "on".

Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


Re: dumping running config

2009-10-23 Thread Mark Watts
On Fri, 2009-10-23 at 15:04 +0100, Nick Kew wrote:
> Mark Watts wrote:
> > This may have been asked for before so apologies if it has.
> > 
> > In #httpd on FreeNode, we often get people asking if apache httpd can
> > dump its running config to a file for use on other servers or whatever.
> > 
> > Is this at all possible; mod_info does some of it so I would think yes,
> > (but I'm not a programmer).
> 
> Alternative suggestion: use a static config-analysis scripts.
> 
> I don't recollect names, but I do recollect searching CPAN and
> finding two likely-looking candidates, of which one did a
> good job of what I needed.
> 

Granted, these would parse the configs on disk into a single file; what
about the case where you want the actual running config?
I agree there should be no difference, and configs should be archived
before modification, but the case exists where the config on disk
doesn't reflect running config.

As an aside, I suspect a tool to generate a single httpd.conf file from
a multi-file installation (Debian anyone?) would be a useful addition to
the other tools.

Mark.



-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


dumping running config

2009-10-23 Thread Mark Watts

This may have been asked for before so apologies if it has.

In #httpd on FreeNode, we often get people asking if apache httpd can
dump its running config to a file for use on other servers or whatever.

Is this at all possible; mod_info does some of it so I would think yes,
(but I'm not a programmer).


Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


Re: Feature Request for balancer-manager

2009-10-21 Thread Mark Watts
On Wed, 2009-10-21 at 06:32 -0400, Jeffrey E Burgoyne wrote:
> I am not using the apache balancing, but using a network level load
> balancer, but this concept may apply. We append an HTTP header on output
> that tells you which machine you were on. As long as each machine has a
> separate config file of some sort (in our setup it is http.conf unique per
> machine, with a global config for all machines) you can do this. We use :
> 
> Header always append ContentServer "strategis1"
> 
> Which gives an HTTP header value in your output saying where it has gone.
> Also, this works well with a reverse proxy setup if you balance front and
> back ends as you can use this on both ends, and the data will appear in
> one HTTP header (hence why we use append in the command).
> 
> This allows us to properly trace back exactly what machine handled the
> request, which I assume is what you wish to do.
> 

Nothing quite so complicated:
Eg: (** is my addition)



 Load Balancer Manager for 192.168.1.100

   Server Version: Apache/2.2.13 (Unix) mod_ssl/2.2.13
  OpenSSL/0.9.8e-fips-rhel5 Apache

   ** Server Hostname: lb01.example.com **

   Server Built: Sep 17 2009 15:37:59
 __

  LoadBalancer Status for balancer://static-web

   StickySession Timeout FailoverAttempts   Method
   - 0 1 byrequests

Worker URLRoute RouteRedir Factor Set Status Elected To  From
   http://web01  1  0   Ok 29  15K 669K
   http://web02      1  0   Ok 28  14K 258K


Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


Re: Feature Request for balancer-manager

2009-10-21 Thread Mark Watts
On Wed, 2009-10-21 at 09:49 +0100, Mark Watts wrote:
> I hope this is the right place to ask...
> 
> Would it be possible to add the (real) hostname of the server serving
> a /balancer-manager URI?
> Reason being, if you have a pair of load-balancers in HA fail-over, it
> tells you which server you're looking at.
> 
> Cheers,
> 
> Mark.
> 

I should clarify - can the hostname be added to the /balancer-manager
output page?

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


Feature Request for balancer-manager

2009-10-21 Thread Mark Watts

I hope this is the right place to ask...

Would it be possible to add the (real) hostname of the server serving
a /balancer-manager URI?
Reason being, if you have a pair of load-balancers in HA fail-over, it
tells you which server you're looking at.

Cheers,

Mark.

-- 
Mark Watts BSc RHCE MBCS
Senior Systems Engineer, Managed Services Manpower
www.QinetiQ.com
QinetiQ - Delivering customer-focused solutions
GPG Key: http://www.linux-corner.info/mwatts.gpg


signature.asc
Description: This is a digitally signed message part


Re: libapreq 2.12 failing with apache 2.2.14

2009-10-07 Thread Mark Hedges

> If you were referring to 5.10*, I can't use that version
> because of certain bugs that haven't been fixed - they
> keep throwing errors in my app.

Duh yeah that's what I meant.

I know with 5.8.8 there were a lot of problems because of a
bug in ExtUtils::ParseXS. There was a recent fix that was
supposed to address this; does it help to upgrade that and/or
other core ExtUtils modules?

Mark


Re: can't build mod_perl2, libapreq2 glue test failures in perl 5.8.8 after cpan upgrades

2009-07-23 Thread Mark Hedges

Argh, why do they try to backport bugfixes to three-year-old
Apache 2.2.3 instead of using the current stable minor
revision 2.2.11?  *tears out hair* thanks --mark--

On Thu, 23 Jul 2009, Fred Moyer wrote:

> Looks like d...@httpd is aware of the issue and will be releasing a
> fix.  Haven't tried 5.3 centos but this sounds like they shipped a
> version of apache that caused this.
>
> http://www.mail-archive.com/dev@httpd.apache.org/msg44177.html
>
> 2009/7/22 Mark Hedges :
> >
> > CentOS 5.3, perl 5.8.8, apache2, mod_perl2
> >
> > I am really stressed.  Seems like CentOS CPAN is breaking down.
> >
> > Something screwed up in ExtUtils (::ParseXS?).  It broke use
> > of DBD::SQLite under mod_perl2.
> >
> > I can't build mod_perl2 with CPAN:
> >
> > gcc -I/root/.cpan/build/mod_perl-2.0.4-Jjpb0E/src/modules/perl 
> > -I/root/.cpan/build/mod_perl-2.0.4-Jjpb0E/xs -I/usr/include/apr-1 
> > -I/usr/include/apr-1  -I/usr/include/httpd -D_REENTRANT -D_GNU_SOURCE 
> > -fno-strict-aliasing -pipe -Wdeclaration-after-statement 
> > -I/usr/local/include -I/usr/include/gdbm 
> > -I/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE -DMOD_PERL 
> > -DMP_COMPAT_1X -DLINUX=2 -D_LARGEFILE64_SOURCE -O2 -g -pipe -Wall 
> > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
> > --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
> > -fasynchronous-unwind-tables -fPIC \
> >    -c modperl_config.c && mv modperl_config.o modperl_config.lo
> > modperl_config.c: In function ‘modperl_config_insert’:
> > modperl_config.c:525: error: ‘OPT_INCNOEXEC’ undeclared (first use in this 
> > function)
> > modperl_config.c:525: error: (Each undeclared identifier is reported only 
> > once
> > modperl_config.c:525: error: for each function it appears in.)
> > make[1]: *** [modperl_config.lo] Error 1
> >
> > I can't get the Apache2::Request glue tests to work when building
> > libapreq2-2.12:
> >
> > t/api/cookie.t ... ok
> > t/api/error.t  ok
> > t/api/module.t ... ok
> > t/api/param.t  ok
> > t/apreq/big_input.t .. ok
> > t/apreq/cgi.t  32/? # Failed test 52 in t/apreq/cgi.t at line 198 
> > fail #6
> > # Failed test 56 in t/apreq/cgi.t at line 198 fail #7
> > # Failed test 60 in t/apreq/cgi.t at line 198 fail #8
> > # Failed test 64 in t/apreq/cgi.t at line 198 fail #9
> > # Failed test 68 in t/apreq/cgi.t at line 198 fail #10
> > t/apreq/cgi.t  Failed 5/71 subtests
> > t/apreq/cookie.t . ok
> > t/apreq/cookie2.t  ok
> > t/apreq/inherit.t  ok
> > t/apreq/request.t  ok
> > t/apreq/upload.t . 17/80 # Failed test 41 in t/apreq/upload.t at line 
> > 67 fail #11
> > # Failed test 45 in t/apreq/upload.t at line 67 fail #12
> > # Failed test 49 in t/apreq/upload.t at line 67 fail #13
> > # Failed test 53 in t/apreq/upload.t at line 67 fail #14
> > # Failed test 57 in t/apreq/upload.t at line 67 fail #15
> > # Failed test 61 in t/apreq/upload.t at line 67 fail #16
> > # Failed test 65 in t/apreq/upload.t at line 67 fail #17
> > # Failed test 69 in t/apreq/upload.t at line 67 fail #18
> > # Failed test 73 in t/apreq/upload.t at line 67 fail #19
> > # Failed test 77 in t/apreq/upload.t at line 67 fail #20
> > t/apreq/upload.t . Failed 10/80 subtests

possible integration into mod_proxy, mod_proxy_balancer of similar complementary load balancer module

2007-12-07 Thread mark wolgemuth
I spoke with a couple of you at ApacheCon, and there was some interest
in pursuing a conversation about how load balancing in mod_proxy might
be advanced and/or extended by reusing, merging or just adding some of
the code we have. Thanks especially to Jim Jagielski for your time.

code can be retrieved here:
http://code.google.com/p/ath/downloads/list
or svn'ed from here:
svn checkout http://ath.googlecode.com/svn/trunk/ ath-read-only

a quick overview can be garnered from the slightly out of date docs,
especially directives:
http://ath.sourceforge.net/mod_athena_doc/html/mod_athena_directives.html

The code base in question works very much like mod_proxy_balancer, but
differs in a couple key points, some here:

1) uses separate shared memory segment to store state tables (no
overlap on scoreboard), with configurable lock granularity, down to
row level

2) uses the destination "host" field of a ProxyPass / ProxyPassReverse
declaration to map incoming requests to target farms

3) allows statistics on members to be pushed to the lb via a simple
GET query, allowing for load balancing on arbitrary metrics such as
cpu, load average, or higher-level data like open database connections
or JVM thread count, etc.

4) there is a mechanism, via secret key protected cookie, for member
servers to set instructions to change load balancing decisions (eg
move to another farm, be sticky, be sticky safely (stick as long as
server is up, but otherwise move)).

5) the configuration and distinction in the reverse proxy of systems
that are "down" vs "administratively offline" and the ability to
redirect selectively on this state

... and some others that can be further discussed.

I'd say in simple examination, that where the conception of
mod_proxy_balancer may be more network and content centric, this
design is more application server centric.

It could be that you may want to just add it as an alternate and
complement to mod_proxy_balancer, with a new name of mod_proxy_??, or
that we get together on bringing this projects into a single expanded
lb solution.

This code has been out in the wild on AP2 license for a number of
years, and has >5 yrs production deployment in high traffic with
extensive performance testing and code path analysis, so its pretty
robust. My company pays me to support this code base, among other
things, so I can continue significant contributions, and have a few
developers here who could also assist.

Next steps for me, regardless, are to go through the code and
rationalize the function naming to fit in better with mod_proxy_* and
also to retire some code that is still in use from when this was
conceived as being potentially a standalone product; basically,
eliminating all our custom collections types in favor of apr tables,
lists, and hashes.

Any thoughts on this would be greatly appreciated.

--Mark Wolgemuth


-- 
--
Mark Wolgemuth


2.0.61 RPM

2007-10-04 Thread Mark Watts

Hi,

I'm trying to compile httpd 2.0.61 using the included spec file on Mandriva 
2005LE.

I get the following errors part-way through the build:


Configuring PCRE regular expression library ...

updating cache config.cache
configuring package in srclib/pcre now
configure: loading cache 
/home/mwatts/rpm/BUILD/httpd-2.0.61/prefork/config.cache
configure: error: `CFLAGS' has changed since the previous run:
configure:   former value:  -O2 -fomit-frame-pointer -pipe -march=i586 
-mtune=pentiumpro
configure:   current value: -O2 -fomit-frame-pointer -pipe -march=i586 
-mtune=pentiumpro
configure: error: changes in the environment can compromise the build
configure: error: run `make distclean' and/or `rm 
/home/mwatts/rpm/BUILD/httpd-2.0.61/prefork/config.cache' and start over
configure failed for srclib/pcre
error: Bad exit status from /var/tmp/rpm-tmp.15615 (%build)


I'm using the command "rpmbuild -ta httpd-2.0.61.tar.gz" to build it (I get the 
same error extracting the spec file and building from that too).

Does anyone have an idea how I can fix this?

Mark.


Re: Decompression with Bucket Brigade

2007-08-28 Thread Mark Harrison

Nick Kew wrote:

On Tue, 28 Aug 2007 10:12:47 +0530
prasanna <[EMAIL PROTECTED]> wrote:


I have added filter as below in conf file.

AddOutputFilter INFLATE;URLParser;DEFLATE html


I take it this is a proxy?  (If not, why are the contents coming
from the backend ever compressed?)

That scenario is exactly what the INFLATE output filter was
written for, with parsing modules such as mod_proxy_html and
mod_publisher (both of which process links) in the middle.
It works, though there are also some recently-fixed bugs,
so if you're using something older than 2.2.5 you could
usefully upgrade.


and also I am using apr_bucket_copy in my module, but whenever I use
it I get a segmentation fault; is there anything behind this?


Sounds like you're using it wrong.


Is there any way to solve this problems!


See my .sig and/or my existing modules :-)


Hi Nick,

clicking on the book link gives me:

http://service.bfast.com/clients/notify/exmerchant-1.html

It looks like a good book, I've just placed an order
with amazon.


Cheers,
Mark

--
Mark Harrison
Pixar Animation Studios


2.0.59: ETag mtimes on 32- and 64-bit machines

2007-08-24 Thread Mark Drayton
Hi there

Forgive me if this is the wrong list. It's not really a user question but
I'm not sure it's a dev question, either, because I'm just looking for
clarification that my changes are correct.

We have a mix of 32- and 64-bit machines in our server farm across which
we'd like to guarantee consistent ETags (inodes turned off, of course). As
discussed here:

  http://issues.apache.org/bugzilla/show_bug.cgi?id=40064

the ETags differ between architectures:

[EMAIL PROTECTED] draytm01]$ GET -ed http://32bit/images/test.jpg | egrep
'(ETag|Last-M|Length)'
ETag: "2e30-9b91cfc0"
Content-Length: 11824
Last-Modified: Fri, 17 Sep 2004 10:27:19 GMT

[EMAIL PROTECTED] draytm01]$ GET -ed http://64bit/images/test.jpg | egrep
'(ETag|Last-M|Length)'
ETag: "2e30-3e4469b91cfc0"
Content-Length: 11824
Last-Modified: Fri, 17 Sep 2004 10:27:19 GMT

The problem is that the 64-bit (apr_uint64_t) mtime is cast to an unsigned
long before conversion to hex, effectively wiping out the high 32 bits of
the mtime on a 32-bit machine.

Issue #40064 has a patch for Apache 2.2 which changes etag_ulong_to_hex() to
etag_uint64_to_hex() and avoids casting the mtime to an (arch-dependent)
unsigned long. We can't move to 2.2 at the moment so instead I patched
2.0.59 with the same changes (diff below -- note 2.2.x moved this code out
to http_etag.c). Initially it didn't work -- the 32-bit machine still
returned a truncated ETag. I fixed it with (in etag_uint64_to_hex()):

-int shift = sizeof(unsigned long) * 8 - 4;
+int shift = sizeof(apr_uint64_t) * 8 - 4;

Is this right? I'm not a C programmer but it seems right to me: without this
change etag_uint64_to_hex() only converts the low 32 bits (ie, length of an
unsigned int on a 32-bit machine). So now I have:

[EMAIL PROTECTED] draytm01]$ GET -ed http://32bit/images/test.jpg | egrep
'(ETag|Last-M|Length)'
ETag: "2e30-3e4469b91cfc0"
Content-Length: 11824
Last-Modified: Fri, 17 Sep 2004 10:27:19 GMT

[EMAIL PROTECTED] draytm01]$ GET -ed http://64bit/images/test.jpg | egrep
'(ETag|Last-M|Length)'
ETag: "2e30-3e4469b91cfc0"
Content-Length: 11824
Last-Modified: Fri, 17 Sep 2004 10:27:19 GMT

If this is the correct fix then perhaps it should be applied to 2.2.x. If
it's not, perhaps you could point me in the right direction :~) I'm guessing
there won't be any interest in incorporating it into 2.0.x as 32-bit users
will suddenly see their ETags change between point releases.

Looking forward to any comments,

Mark Drayton


diff -ur httpd-2.0.59-orig/modules/http/http_protocol.c httpd-2.0.59
/modules/http/http_protocol.c
--- httpd-2.0.59-orig/modules/http/http_protocol.c  2006-07-12 08:40:
55.0 +0100
+++ httpd-2.0.59/modules/http/http_protocol.c   2007-08-24 16:06:
56.0 +0100
@@ -2698,16 +2698,17 @@
 l->method_list->nelts = 0;
 }

-/* Generate the human-readable hex representation of an unsigned long
+/* Generate the human-readable hex representation of an apr_uint64_t
  * (basically a faster version of 'sprintf("%lx")')
+ * (basically a faster version of 'sprintf("%llx")')
  */
 #define HEX_DIGITS "0123456789abcdef"
-static char *etag_ulong_to_hex(char *next, unsigned long u)
+static char *etag_uint64_to_hex(char *next, apr_uint64_t u)
 {
 int printing = 0;
 int shift = sizeof(unsigned long) * 8 - 4;
 do {
-unsigned long next_digit = ((u >> shift) & (unsigned long)0xf);
+unsigned short next_digit = ((u >> shift) & (apr_uint64_t)0xf);
 if (next_digit) {
 *next++ = HEX_DIGITS[next_digit];
 printing = 1;
@@ -2717,12 +2718,12 @@
 }
 shift -= 4;
 } while (shift);
-*next++ = HEX_DIGITS[u & (unsigned long)0xf];
+*next++ = HEX_DIGITS[u & (apr_uint64_t)0xf];
 return next;
 }

 #define ETAG_WEAK "W/"
-#define CHARS_PER_UNSIGNED_LONG (sizeof(unsigned long) * 2)
+#define CHARS_PER_UINT64 (sizeof(apr_uint64_t) * 2)
 /*
  * Construct an entity tag (ETag) from resource information.  If it's a
real
  * file, build in some of the file characteristics.  If the modification
time
@@ -2785,7 +2786,7 @@
  * FileETag keywords.
  */
 etag = apr_palloc(r->pool, weak_len + sizeof("\"--\"") +
-  3 * CHARS_PER_UNSIGNED_LONG + 1);
+  3 * CHARS_PER_UINT64 + 1);
 next = etag;
 if (weak) {
 while (*weak) {
@@ -2795,21 +2796,21 @@
 *next++ = '"';
 bits_added = 0;
 if (etag_bits & ETAG_INODE) {
-next = etag_ulong_to_hex(next, (unsigned long)r->finfo.inode);
+next = etag_uint64_to_hex(next, r->finfo.inode);
 bits_added |= ETAG_INODE;
 }
 if

HTTP BoF at IETF Chicago

2007-06-25 Thread Mark Nottingham
A Birds-of-a-Feather (BoF) session has been scheduled for Tuesday,
July 24th, 9am US/Central* at the IETF Chicago meeting
<http://www3.ietf.org/meetings/69-IETF.html> to discuss proposed work in
revising the HTTP specification.


The proposed charter <http://www.w3.org/mid/392C98BA- 
[EMAIL PROTECTED]> discusses one possible scope  
for this work. Note that this is NOT the final charter; if you have  
opinions about the charter of such a group, the BoF is the place to  
express them.


Besides the charter itself, it's likely that discussion will also  
touch on whether there are enough people willing to do the work to  
make this happen.


Therefore, if you're interested in this work, we encourage you to  
attend, either in-person or remotely. In particular, we'd like to  
encourage all HTTP implementers to send a representative.


More information about the BoF will be available soon.  
Discussion will primarily take place on the HTTP-WG mailing list  
<http://lists.w3.org/Archives/Public/ietf-http-wg/>; if you're  
interested, please make sure you're subscribed to this list.


* Note that the time and date are tentative at this point; the final  
meeting agenda will be at the IETF site on July 2nd.



--
Mark Nottingham http://www.mnot.net/



ap_hook_monitor example

2007-06-14 Thread Mark W. Humphries

Hi,

I'm looking for an example of module code that uses the monitor hook. 
Any help would be greatly appreciated.


Cheers,
Mark Humphries
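
For the archive, a minimal sketch of what such a module can look like. This is
my own untested illustration, not code from httpd: the module and function
names are made up, and the one-argument hook signature is the 2.0/2.2-era one
(it gained a server_rec argument later), so check ap_mpm.h in your target
version.

```c
#include "httpd.h"
#include "http_config.h"
#include "http_log.h"
#include "ap_mpm.h"      /* ap_hook_monitor() is declared here in 2.x */

/* Hypothetical monitor callback: runs in the parent process each time
 * it wakes up (roughly once per second in the prefork/worker MPMs). */
static int example_monitor(apr_pool_t *p)
{
    ap_log_perror(APLOG_MARK, APLOG_INFO, 0, p,
                  "example_monitor: parent process tick");
    return OK;
}

static void example_register_hooks(apr_pool_t *p)
{
    ap_hook_monitor(example_monitor, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA example_monitor_module = {
    STANDARD20_MODULE_STUFF,
    NULL,                   /* create per-directory config */
    NULL,                   /* merge per-directory config  */
    NULL,                   /* create per-server config    */
    NULL,                   /* merge per-server config     */
    NULL,                   /* command table               */
    example_register_hooks  /* register hooks              */
};
```

Built with apxs and loaded, the log line should show up at LogLevel info
whenever the parent wakes up; note the hook never runs in child processes.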



Output Filtering

2007-05-17 Thread Mark Zetts
  I'm processing some custom XML of considerable length in an output 
filter and I collect the entire response, flatten it, process it, then 
pass it on.  One particular response body comprises 17 brigades of which 5 
are zero length.  As I'm coalescing these bucket brigades, is it correct 
behavior to:

1) Pass on any brigades of 0 length?

2) Delete non-EOS metadata buckets?

  Everything seems to work, but I'd like to know if there are any pitfalls 
to avoid when processing response bodies in the whole.

Thanks,
Mark Zetts 
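
On the coalescing questions above, here is a rough sketch of the
collect/flatten/re-emit pattern (my own untested illustration against the
2.2-era API; the filter name and the transform step are placeholders).
Concatenating every incoming brigade into a held brigade makes zero-length
brigades a non-issue, and flattening treats metadata buckets as zero-length
data, so only EOS needs explicit handling:

```c
#include "httpd.h"
#include "http_config.h"
#include "apr_buckets.h"
#include "util_filter.h"

static apr_status_t collect_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket_brigade *held = f->ctx;
    char *body;
    apr_size_t len;

    if (held == NULL) {
        held = f->ctx = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    }

    /* Move everything (data and metadata buckets alike) into the held
     * brigade; empty brigades simply contribute nothing.  Production
     * code should also setaside transient buckets at this point. */
    APR_BRIGADE_CONCAT(held, bb);

    /* Keep buffering until the brigade carrying EOS arrives. */
    if (APR_BRIGADE_EMPTY(held)
        || !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(held))) {
        return APR_SUCCESS;
    }

    /* Flatten the whole body into one buffer; metadata buckets (EOS,
     * FLUSH) read as zero length, so they need no special-casing. */
    apr_brigade_pflatten(held, &body, &len, f->r->pool);

    /* ... process body/len here ... */

    apr_brigade_cleanup(held);
    apr_brigade_write(held, NULL, NULL, body, len);
    APR_BRIGADE_INSERT_TAIL(held,
                            apr_bucket_eos_create(f->c->bucket_alloc));
    return ap_pass_brigade(f->next, held);
}
```

One pitfall worth noting: buffering the whole response means upstream FLUSH
buckets cannot be honoured, which is unavoidable when the transform needs the
complete body.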
 

Re: [Fwd: iDefense Final Notice [IDEF1445]]

2007-03-29 Thread Mark J Cox
For reference, Mitre assigned:

CVE-2007-1741 - Path Checking Race Condition Vulnerability
CVE-2007-1742 - Path Checking Design Error Vulnerability
CVE-2007-1743 - Arbitrary GID Input Validation Vulnerability

We can supply statements to Mitre for any we dispute.

Mark
--
Mark J Cox | www.awe.com/mark





Email from apache c module

2007-02-07 Thread Mark Sasson
Hi there,

Does anyone know how to send email from a handler?
Executing system() does not work for me.
Any ideas?

Mark



2.0 vs 2.2 API

2006-09-17 Thread Mark Constable
Howdy all, is a module written under v2.0.54 supposed to be
able to work with Apache 2.2 without being recompiled?

Apologies if this is a rtfm question.

--markc


CGI Script Source Code Disclosure Vulnerability in Apache for Windows

2006-08-18 Thread Mark J Cox
See 
http://marc.theaimsgroup.com/?l=bugtraq&m=115527423727441&w=2

which basically reports "if you put cgi-bin under docroot then you can
view cgi scripts on OS which have case insensitive filesystems"

Joe replied: 
http://marc.theaimsgroup.com/?l=bugtraq&m=115574424402976&w=2
and I submitted that as "DISPUTED" to CVE

But the original reporter disagrees:
http://marc.theaimsgroup.com/?l=bugtraq&m=115583509231594&w=2

I think the right response here is to make it more explicit in the
documentation that putting a ScriptAlias cgi-bin inside document root is
bad.

Mark
--
Mark J Cox | www.awe.com/mark





Re: [VOTES] please, 2.2.3, 2.0.59, 1.3.37 releases ASAP

2006-07-27 Thread Mark J Cox
   [+1]  apache_1.3.37

Was easy to compile and test this, and a source diff shows the only
code change is the vulnerability fix.

Mark




Re: [Fwd: 2.2+ security page empty?]

2006-05-03 Thread Mark J Cox
>There is nothing on the security page any more for 2.2, is there a bug
> with the report you use to populate it?

Fixed

Cheers, Mark




Re: svn commit: r398494 - in /httpd/site/trunk: docs/security/vulnerabilities_13.html docs/security/vulnerabilities_20.html docs/security/vulnerabilities_22.html xdocs/security/vulnerabilities_22.xml

2006-05-01 Thread Mark J Cox
> This killed the list of vulnerabilities for all versions. Was this intended?
> And if yes, where can they be found now?

Must be someone with bad java foo, fixing.

Mark
--
Mark J Cox | www.awe.com/mark




