Re: apr_dbd_mysql for apache2.2

2006-10-29 Thread Philip M. Gollucci

Is anyone actually using MySQL(5) for authentication with apache2.2 ?

Yes, me.  /me thinks it's your OS :)
I just compiled this on my desktop to be sure, but this combo works as 
of now:

 apr - svn trunk
 apr-util svn trunk
 httpd trunk
 mysql 5.0.24
 FreeBSD 7.0-current

Attached are my config.nice files (in this order: apr, apr-util, httpd) and 
the relevant httpd.conf stuff.


#! /bin/sh
#
# Created by configure

CFLAGS="-g3 -O0"; export CFLAGS
LIBS="-g3 -O0"; export LIBS
"./configure" \
"--prefix=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/apr/r469074" \
"--enable-debug" \
"--enable-nonportable-atomics" \
"--disable-ipv6" \
"--enable-maintainer-mode" \
"--disable-threads" \
"CFLAGS=-g3 -O0" \
"$@"
#! /bin/sh
#
# Created by configure

"./configure" \
"--prefix=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/apr-util/r469077-5.0.24"
 \
"--with-apr=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/apr/trunk" \
"--with-mysql=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/mysql/5.0.24"
 \
"--enable-maintainer-mode" \
"--with-expat=/usr/local" \
"$@"
#! /bin/sh
#
# Created by configure

CFLAGS="-g3 -O0 -DAP_UNSAFE_ERROR_LOG_UNESCAPED"; export CFLAGS
"./configure" \
"--prefix=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/httpd/r469078/prefork"
 \
"--with-apr=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/apr/trunk" \
"--with-apr-util=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/apr-util/trunk-5.0.24"
 \
"--with-perl=/usr/local/bin/perl" \
"--with-mpm=prefork" \
"--enable-ssl" \
"--enable-debug" \
"--enable-modules=all" \
"--enable-mods-shared=all" \
"--enable-so" \
"--enable-deflate-shared" \
"--enable-proxy-shared" \
"--enable-proxy" \
"--enable-proxy-connect" \
"--enable-proxy-ftp" \
"--enable-proxy-http" \
"--enable-maintainer-mode" \
"--with-mysql=/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/mysql/5.0.24"
 \
"--with-expat=/usr/local" \
"--with-ssl" \
"CFLAGS=-g3 -O0 -DAP_UNSAFE_ERROR_LOG_UNESCAPED" \
"$@"
#
# This is the main Apache HTTP server configuration file.  It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/trunk/> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/trunk/mod/directives.html>
# for a discussion of each configuration directive.
#
# Do NOT simply read the instructions in here without understanding
# what they do.  They're here only as hints or reminders.  If you are unsure
# consult the online docs. You have been warned.  
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path.  If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "logs/foo.log"
# with ServerRoot set to
# "/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/httpd/r469078/prefork"
# will be interpreted by the server as
# "/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/httpd/r469078/prefork/logs/foo.log".

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path.  If you point
# ServerRoot at a non-local disk, be sure to point the LockFile directive
# at a local disk.  If you wish to share the same ServerRoot for multiple
# httpd daemons, you will need to change at least LockFile and PidFile.
#
ServerRoot "/home/pgollucci/dev/software/freebsd-7.0-current/3.4.6/httpd/r469078/prefork"

#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to 
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80

#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_dbd_module modules/mod_authn_dbd.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authz_dbd_module modules/mod_authz_dbd.so
LoadModule authz_core_module modules/mod_authz_core.so
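The attached httpd.conf is cut off at this point. For completeness, the "relevant httpd.conf stuff" for MySQL-backed authentication via mod_dbd/mod_authn_dbd typically looks something like the sketch below; the connection parameters, table, and column names are placeholders, not values from the original mail:

```
# Sketch: hook mod_authn_dbd up to MySQL (placeholder credentials/schema)
LoadModule dbd_module modules/mod_dbd.so

DBDriver mysql
DBDParams "host=localhost dbname=auth user=apache pass=secret"

<Location /private>
    AuthType Basic
    AuthName "MySQL-backed auth"
    AuthBasicProvider dbd
    # column holding the crypt()-style password hash for the matching user
    AuthDBDUserPWQuery "SELECT password FROM users WHERE username = %s"
    Require valid-user
</Location>
```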

Bug report for Apache httpd-1.3 [2006/10/29]

2006-10-29 Thread bugzilla
+---+
| Bugzilla Bug ID   |
| +-+
| | Status: UNC=Unconfirmed NEW=New ASS=Assigned|
| | OPN=ReopenedVER=Verified(Skipped Closed/Resolved)   |
| |   +-+
| |   | Severity: BLK=Blocker CRI=CriticalMAJ=Major |
| |   |   MIN=Minor   NOR=Normal  ENH=Enhancement   |
| |   |   +-+
| |   |   | Date Posted |
| |   |   |  +--+
| |   |   |  | Description  |
| |   |   |  |  |
| 8329|New|Nor|2002-04-20|mime_magic gives 500 and no error_log on Microsoft|
| 8372|Ass|Nor|2002-04-22|Threadsaftey issue in Rewrite's cache [Win32/OS2/N|
| 8849|New|Nor|2002-05-07|make install errors as root on NFS shares |
| 8882|New|Enh|2002-05-07|[PATCH] mod_rewrite communicates with external rew|
| 9037|New|Min|2002-05-13|Slow performance when acessing an unresolved IP ad|
| 9126|New|Blk|2002-05-15|68k-next-openstep v. 4.0  |
| 9726|New|Min|2002-06-09|Double quotes should be flagged as T_HTTP_TOKEN_ST|
| 9894|New|Maj|2002-06-16|getline sub in support progs collides with existin|
| |New|Nor|2002-06-19|Incorrect default manualdir value with layout.|
|10038|New|Min|2002-06-20|ab benchmaker hangs on 10K https URLs with keepali|
|10073|New|Maj|2002-06-20|upgrade from 1.3.24 to 1.3.26 breaks include direc|
|10166|Opn|Min|2002-06-24|HTTP/1.1 proxy requests made even when client make|
|10169|New|Nor|2002-06-24|Apache seg faults due to attempt to access out of |
|10178|New|Maj|2002-06-24|Proxy server cuts off begining of buffer when spec|
|10195|New|Nor|2002-06-24|Configure script erroneously detects system Expat |
|10199|New|Nor|2002-06-24|Configure can't handle directory names with unders|
|10243|New|Maj|2002-06-26|CGI scripts not getting POST data |
|10354|New|Nor|2002-06-30|ErrorDocument(.htaccess) fails when passed URL wit|
|10446|Opn|Blk|2002-07-03|spaces in link to http server seen as foreign char|
|10666|New|Enh|2002-07-10|line-end comment error message missing file name  |
|10744|New|Nor|2002-07-12|suexec might fail to open log file|
|10747|New|Maj|2002-07-12|ftp SIZE command and 'smart' ftp servers results i|
|10760|New|Maj|2002-07-12|empty ftp directory listings from cached ftp direc|
|10939|New|Maj|2002-07-18|directory listing errors  |
|11020|New|Maj|2002-07-21|APXS only recognise tests made by ./configure |
|11236|New|Min|2002-07-27|Possible Log exhaustion bug?  |
|11265|New|Blk|2002-07-29|mod_rewrite fails to encode special characters|
|11765|New|Nor|2002-08-16|.apaci.install.tmp installs in existing httpd.conf|
|11986|New|Nor|2002-08-23|Restart hangs when piping logs on rotation log pro|
|12096|New|Nor|2002-08-27|apxs does not handle binary dists installed at non|
|12574|New|Nor|2002-09-12|Broken images comes from mod_proxy when caching ww|
|12583|New|Nor|2002-09-12|First piped log process do not handle SIGTERM |
|12598|Opn|Maj|2002-09-12|Apache hanging in Keepalive State |
|12770|Opn|Nor|2002-09-18|ErrorDocument fail redirecting error 400  |
|13188|New|Nor|2002-10-02|does not configure correctly for hppa64-hp-hpux11.|
|13274|Ass|Nor|2002-10-04|Subsequent requests are destroyed by the request e|
|13607|Opn|Enh|2002-10-14|Catch-all enhancement for vhost_alias?|
|13687|New|Min|2002-10-16|Leave Debug symbol on Darwin  |
|13822|New|Maj|2002-10-21|Problem while running Perl modules accessing CGI::|
|14095|Opn|Nor|2002-10-30|Change default Content-Type (DefaultType) in defau|
|14250|New|Maj|2002-11-05|Alternate UserDirs don't work intermittantly  |
|14443|New|Maj|2002-11-11|Keep-Alive randomly causes TCP RSTs   |
|14448|Opn|Cri|2002-11-11|Apache WebServer not starting if installed on Comp|
|14518|Opn|Nor|2002-11-13|QUERY_STRING parts not incorporated by mod_rewrite|
|14670|New|Cri|2002-11-19|Apache didn't deallocate unused memory|
|14748|New|Nor|2002-11-21|Configure Can't find DBM on Mac OS X  |
|15011|New|Nor|2002-12-03|Apache processes not timing out on Solaris 8  |
|15028|New|Maj|2002-12-03|RedirectMatch does not escape properly|
|16013|Opn|Nor|2003-01-13|Fooling mod_autoindex + IndexIgnore   |
|16236|New|Maj|2003-01-18|Include directive in Apache is not parsed within c|
|16241|New|Maj|2003-01-19|Apache processes takes 100% CPU until killed manua|
|16492|

Re: [Fwd: Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h]

2006-10-29 Thread Justin Erenkrantz

On 10/29/06, William A. Rowe, Jr. <[EMAIL PROTECTED]> wrote:

I strongly disagree because MOST of the flaws in the HTTP/1.1 implementation,
mod_proxy and even mod_cache exist because the development happened with
insufficient oversight.

Only code that's actively reviewed on trunk/ is going to get the level of
scrutiny required; let's all row in the same direction, shall we?


As long as the ideas are discussed on list first and we come to a
consensus, I'm fine with changes going into trunk instead of a branch.
But, when large changes get dropped into trunk without any warning,
it's extremely annoying.  -- justin


Re: svn commit: r468373 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h modules/cache/mod_mem_cache.c

2006-10-29 Thread Justin Erenkrantz

On 10/29/06, Graham Leggett <[EMAIL PROTECTED]> wrote:

The current expectation that it be possible to separate completely the
storing of the cached response and the delivery of the content is broken.

We have a real-world case where the cache is expected to process a file of
many MB, or even many GB, completely before sending that same response to the
network. This is too slow, and takes up too much RAM, resulting in a
broken response to the client.


In short, I haven't seen any evidence presented by you or others that
this is due to a design flaw in the cache/provider abstraction.  At
best, mod_disk_cache could be smarter about storing the file when it's
large - but that's just a few lines of code to fix - it doesn't
require any massive changes or new bucket types.  Just copy the file
bucket and consume that within store_body()'s implementation.

If that isn't enough, please identify here on list and backed by
references to the implementation and timings that it is 'too slow',
'takes up too much RAM', and results in a 'broken response to the
client' and why we must break all of our cache structures and
abstractions.

I'm tired of reviewing large code changes with only vague generalities
about why the code must change.  These decisions need to be explained
and reviewed on list before any more code is committed.  Your recent
commits have been chock full of mistakes and style violations - which
make it almost impossible to review what's going on.  If you are going
to commit, please take care to follow all of our style guidelines and
please ensure the commit works before using our trunk as your personal
dumping ground.  If your changes are incomplete, feel free to post
them to the list instead of committing.

Looking at the current implementation of mod_disk_cache, someone has
turned it into unreadable and unmanageable code.

Take a look at:

http://svn.apache.org/repos/asf/httpd/httpd/tags/2.2.3/modules/cache/mod_disk_cache.c

versus

http://svn.apache.org/repos/asf/httpd/httpd/trunk/modules/cache/mod_disk_cache.c

The code somehow went from 9KB total to 15KB for no good reason.
Somewhere, we went really wrong here.


So, we have disagreement over the right way to solve the problem of the
cache being expected to swallow mouthfuls too big for it to handle.

I agree with you that a design needs to be found on list first, as I
have wasted enough time going round in circles coming up with solution
after solution nobody is happy with.

Do we put this to a vote?


We're not even close to knowing what we'd be voting on.  So, please
draft up a proposed design that explains in detail why you think there
is a problem and what specific changes you propose and why that is the
best solution.  If you want to submit a patch to go with your
rationale, cool.  Yet, I certainly don't see any fundamental reason
why Joe's concerns and mine can't both be addressed in the same
design.

I will also note that the mod_cache provider system has explicit
versioning, so any modifications to the providers should be
represented with a new version number.  (i.e. providers for version
"0" should work while offering new features in version "1"-class
providers.)  We do not arbitrarily tweak the old provider structures
any more - instead, we introduce new versions.  -- justin


Re: [Fwd: Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h]

2006-10-29 Thread William A. Rowe, Jr.
Ruediger Pluem wrote:
> 
> Apart from this, Paul created a branch a while ago for mod_cache refactoring.
> As it has turned out the whole thing creates some bigger discussion and 
> patches
> go in and out. So I think it would be a good idea to do this on a dev branch 
> instead
> of the trunk. So I propose the following thing:

You mean sandboxes (at least that's what we normally refer to them as,
trunk/ IS the dev branch :) ...

I strongly disagree because MOST of the flaws in the HTTP/1.1 implementation,
mod_proxy and even mod_cache exist because the development happened with
insufficient oversight.

Only code that's actively reviewed on trunk/ is going to get the level of
scrutiny required; let's all row in the same direction, shall we?

Bill


Re: Problems with apreq2 on OS X

2006-10-29 Thread Patrick Galbraith

Dave,

Speaking of which - how do you use gdb with mod_perl/libapreq? I'm used 
to using it and other debuggers (Visual Studio, etc, DDD with gdb, 
Xcode) with mysqld and DBD::mysql, but how do you attach it to a 
mod_perl script, httpd, mod_perl, libapreq (?) to see what's going on? I 
have libapreq working on OS X, but something is giving me a bus error 
and I'd like to know where that's coming from.
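For what it's worth, the usual approach (sketched here with example paths, which will differ per install) is to run httpd in single-process, foreground mode under gdb, so the mod_perl handler and libapreq code execute inside the debugger:

```
# Sketch, example paths: run httpd single-process (-X) under gdb
gdb /usr/local/apache2/bin/httpd
(gdb) run -X -f /usr/local/apache2/conf/httpd.conf
# ...trigger the failing request in a browser; gdb stops on the bus error...
(gdb) bt    # backtrace shows where the crash originated
```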


Thanks!

Patrick

Dave Viner wrote:


no problemo...

i spent many hours staring, recompiling, messing with GDB, and  
swearing when i hit this same error.


dave



On Oct 29, 2006, at 2:36 PM, Fred Moyer wrote:


Dave Viner wrote:

this might be a dumb question, but have you checked that the apreq  
module is loaded?

LoadModule apreq_module modules/mod_apreq2.so
?



Egads - that was it.  I've only been using this module for how many  
years?  Somehow that line went missing from my httpd.conf in one of  
my latest development sessions.  Thanks for the spot Dave.  Patrick  
sorry for not seeing this earlier but I guess I overlooked the basics.


I'm going to go hide in the corner now for a while :)


Re: svn commit: r468373 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h modules/cache/mod_mem_cache.c

2006-10-29 Thread Graham Leggett

Justin Erenkrantz wrote:


-1.

This breaks the abstraction between the cache providers and the filter streams.
The cache providers should not be in the business of delivering content down to
the next filter - that is the job of mod_cache.  Following this route is
completely antithetical to the separation between storing the cached response
and delivery of the content.


The current expectation that it be possible to separate completely the 
storing of the cached response and the delivery of the content is broken.


We have a real-world case where the cache is expected to process a file of 
many MB, or even many GB, completely before sending that same response to the 
network. This is too slow, and takes up too much RAM, resulting in a 
broken response to the client.


On wednesday night I wrote a patch that solved the large file problem, 
while maintaining the current separation between write-to-cache and 
write-to-network as you assert. This mod_cache code broke up the brigade 
into bite sized chunks inside mod_cache before passing it to 
write-to-cache, then write-to-network, and so on.


Joe vetoed the patch, saying that it duplicated the natural behaviour of 
apr_bucket_read().


The wednesday night patch was reverted, and thursday night was spent 
instead changing the cache_body() signature to make its own better 
judgement on how to handle cached files.


Now you veto this next patch, saying it breaks the abstraction.

So, we have disagreement over the right way to solve the problem of the 
cache being expected to swallow mouthfuls too big for it to handle.


I agree with you that a design needs to be found on list first, as I 
have wasted enough time going round in circles coming up with solution 
after solution nobody is happy with.


Do we put this to a vote?

Regards,
Graham
--




Re: Problems with apreq2 on OS X

2006-10-29 Thread Patrick Galbraith

Dave,

Ok, I feel dumb.

I would have never guessed this. I thought libapreq was part of 
mod_perl, and that when you compiled it it just was used by mod_perl. I 
kept seeing that library, but not making the connection!


Not a dumb question at all, but a good question. I'm just so out of date 
that I haven't fully realised mod_perl and libapreq2 are separate. In 
the Old'n days, it worked if you just installed mod_perl.


The reason I didn't have to do this on my Fedora box is that with 
Fedora Core, yum install does a pretty good job installing libapreq, and 
the module is included in the packaged httpd.conf.


use D'oh;

Thanks, and sorry for my grumblings! If I had read the INSTALL I 
probably would've seen that too.


Patrick



Dave Viner wrote:

this might be a dumb question, but have you checked that the apreq  
module is loaded?


LoadModule apreq_module modules/mod_apreq2.so

?

dave

On Oct 29, 2006, at 12:23 PM, Patrick Galbraith wrote:


Fred Moyer wrote:


Patrick Galbraith wrote:


Fred,

Ok: I have this failure on
1. OS X
2. Suse 10.0 amd 64
3. Suse 9.3   intel 32

Has anyone addressed this? This is what I would call severely  
broken. I would prefer not to use CGI. After this week, I think  
maybe the universe is telling me to learn PHP after all these  
years of being a perl developer.




The short answer would be to use an earlier release of libapreq -  
this only happened to me with the latest release.  Or use CGI for  
now until it's fixed.  There's nothing wrong with using CGI while  
this bug gets worked out, the api is the same for the most part.   
libapreq is still in a development version, and chances are that  
your application will not bottleneck on CGI.


If you go to PHP, you should not expect a trouble free life :) I  
don't have anything against PHP, but it has its own set of  
problems.  With development in any language, you need to make sure  
that you keep a tight hold on your versions.  Using the latest  
version of something isn't always the best move, as experience has  
taught me.  It's nice to try it out and report bugs back, and  
helpful to the development of the project, but use the version  that 
works for you.


I've dug around a bit in the code trying to resolve this issue,  but 
it's a bit above my head right now.



Fred,

I know, I'm just frustrated and venting after losing many hours of  
dev time. The reason CGI won't work in my case is I've written all  
this code as a handler and putting into CGI seems like it'd be a  lot 
of work.  Maybe not, I'm not sure what I would have to change  since 
I rely so much on the request object.


When you say to use an earlier version of libapreq, do you mean  
version 1.0? That won't work because all the linux dists I deal  with 
are ones with pre-packaged mod_perl2 and apache2 (but haven't  been 
able to get apreq to compile correctly against those pre-packaged 
versions, trying everything from source).


Thanks for your replies!

Patrick






Patrick

Fred Moyer wrote:


Patrick Galbraith wrote:

[Sun Oct 29 12:38:27 2006] [notice] Apache/2.2.3 (Unix) mod_ssl/ 
2.2.3 OpenSSL/0.9.8d DAV/2 mod_perl/2.0.2 Perl/v5.8.8  configured 
-- resuming normal operations
dyld: lazy symbol binding failed: Symbol not found:  
_apreq_handle_apache2
 Referenced from: /opt/local/lib/perl5/vendor_perl/5.8.8/ 
darwin-2level/auto/APR/Request/Apache2/Apache2.bundle

 Expected in: dynamic lookup

dyld: Symbol not found: _apreq_handle_apache2
 Referenced from: /opt/local/lib/perl5/vendor_perl/5.8.8/ 
darwin-2level/auto/APR/Request/Apache2/Apache2.bundle

 Expected in: dynamic lookup

[Sun Oct 29 12:38:38 2006] [notice] child pid 11206 exit signal  
Trace/BPT trap (5)


OS X version: Darwin radha.local 8.8.1 Darwin Kernel Version  
8.8.1: Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/ 
RELEASE_I386 i386 i386


Not sure what this is. Anyone encountered this before?





I ran into this also, same platform.  I have been digging around  
a bit to see if I can resolve it but no luck so far - my foo in  
this area isn't quite where it needs to be.  This works fine for  
me on Linux though.


Also, is there a way to have access to things like $rec->param  
without having to use Apache2::Request/libapreq2? I ask this in  
case there is no solution for getting this to work, as well as  
on linux distributions I cannot get libapreq2 working.





You can use CGI.  Are you hitting this same issue on Linux?




Re: svn commit: r468373 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h modules/cache/mod_mem_cache.c

2006-10-29 Thread Justin Erenkrantz
On Fri, Oct 27, 2006 at 01:28:57PM -, [EMAIL PROTECTED] wrote:
> Author: minfrin
> Date: Fri Oct 27 06:28:56 2006
> New Revision: 468373
> 
> URL: http://svn.apache.org/viewvc?view=rev&rev=468373
> Log:
> mod_cache: Pass the output filter stack through the store_body()
> hook, giving each cache backend the ability to make a better
> decision as to how it will allocate the tasks of writing to the
> cache and writing to the network. Previously the write to the
> cache task needed to be complete before the same brigade was
> written to the network, and this caused timing and memory issues
> on large cached files. This fix replaces the previous fix for
> PR39380.

-1.

This breaks the abstraction between the cache providers and the filter streams.
The cache providers should not be in the business of delivering content down to
the next filter - that is the job of mod_cache.  Following this route is
completely antithetical to the separation between storing the cached response
and delivery of the content.

As others have mentioned, I would highly recommend that you take a step back
and come up with a design on-list first, run it through the gauntlet on dev@,
and test it before breaking carefully designed abstractions.  -- justin


Re: AW: mod_deflate and flush?

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 11:41 PM, Nick Kew wrote:
> On Sun, 2006-10-29 at 23:21 +0100, Ruediger Pluem wrote:
> 
> 
>>>Backport to 2.2.x ? I'm still using 2.0.x - LOL.
>>
>>Have you tried to apply the patches for 2.2.x to 2.0.x? I haven't tried so, 
>>but
>>I think mod_deflate has not changed that much between 2.2.x and 2.0.x, so that
>>might work. If you do so please report back, such that the patches can be 
>>proposed
>>for backport to 2.0.x.
> 
> 
> FWIW, I think mod_deflate 2.2 can be dropped in to 2.0 (source,
> not binary, of course).  But it does contain a very significant
> change: the inflate output filter.

Thanks for this hint. Since I split my patches into separate changes for the
deflate output filter and the inflate output filter, and the deflate output
filter changes need to go in first, this can also work with 2.0.x :-).

Just checked applying

http://people.apache.org/~rpluem/patches/mod_deflate_rework/deflate_output.diff

to 2.0.x which had two failures. They are fixed in

http://people.apache.org/~rpluem/patches/mod_deflate_rework/2.0.x/deflate_2.0.x.diff

which applies cleanly to 2.0.x and compiles. Maybe Sven should just try the one
above.

Regards

Rüdiger




Re: mod_cache and its ilk

2006-10-29 Thread Graham Leggett

Roy T. Fielding wrote:


As far as *I* am concerned, changes to the cache code must be correct
first and then perform second, and both of those should be proven by
actual testing before being committed to trunk.


+1.

We have an existing cache that breaks in real world environments.

We have a contributed patch set from Niklas Edmundsson that addresses 
these issues, and is used in production. It works. A significant amount 
of work has been done to ensure that after each patch was committed, the 
code was tested and still worked.


It works for me, very well.

We have some very valid objections to some of the methods used in this 
patch set, and based on these objections a major part of one patch was 
rewritten, and the last patch in the set was never committed.


We also have some very clear things that the patch is not allowed to do 
- including but not limited to threading and forking.


In response to the above, the following has been identified:

- APR needs significantly improved documentation attached to its doxygen 
comments.


- APR needs a notifier API to determine whether ap_core_output_filter() 
will block. This addresses Joe's objection to the assumption that 
ap_core_output_filter() won't block on files. This also removes the need 
for any threading or forking.


- The notifier is also needed so that the need to fstat then sleep is 
removed.


Further work is being done to solve the above issues, but too few people 
are testing this code. This is another call to download trunk and to try 
it out, and to identify any issues encountered so that they may be 
fixed.


Regards,
Graham
--




Re: AW: mod_deflate and flush?

2006-10-29 Thread Nick Kew
On Sun, 2006-10-29 at 23:21 +0100, Ruediger Pluem wrote:

> > Backport to 2.2.x ? I'm still using 2.0.x - LOL.
> 
> Have you tried to apply the patches for 2.2.x to 2.0.x? I haven't tried so, 
> but
> I think mod_deflate has not changed that much between 2.2.x and 2.0.x, so that
> might work. If you do so please report back, such that the patches can be 
> proposed
> for backport to 2.0.x.

FWIW, I think mod_deflate 2.2 can be dropped in to 2.0 (source,
not binary, of course).  But it does contain a very significant
change: the inflate output filter.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/



Re: AW: mod_deflate and flush?

2006-10-29 Thread Ruediger Pluem


On 10/25/2006 04:46 PM, Sven Köhler wrote:
>>>Hi,
>>>
>>>JSP (via mod_jk) and maybe other plugins sometimes flush the 
>>>connection,
>>>so that the browsers receive everything that's stuck in some internal
>>>buffer. Here's a quote from mod_jk's docs:
>>>
>>>
>>>JkOptions +FlushPackets
>>>JkOptions FlushPackets, you ask mod_jk to flush Apache's connection
>>>buffer after each AJP packet chunk received from Tomcat.
>>>
>>>
>>>mod_deflate breaks that. I know the issue from ssh already. 
>>
>>I know. There are some patches to fix that. These are proposed for backport 
>>to 2.2.x:
> 
> 
> Backport to 2.2.x ? I'm still using 2.0.x - LOL.

Have you tried to apply the patches for 2.2.x to 2.0.x? I haven't tried so, but
I think mod_deflate has not changed that much between 2.2.x and 2.0.x, so that
might work. If you do so please report back, such that the patches can be 
proposed
for backport to 2.0.x.

Regards

Rüdiger


mod_cache and its ilk

2006-10-29 Thread Roy T. Fielding

As far as mod_*cache is concerned, we should work out the technical
definition of what those modules are supposed to be doing and just
stick with one direction on trunk.  Once that decision is made,
folks can veto code on the basis of technical concerns (such as, "that
module should be for small items -- big items belong in mod_cache_blah",
or "that change causes performance to decrease by 5%, therefore -1").

If the goal needs to change, a new module can be created with that
goal as its name (and a new set of config directives to avoid abusing
folks who expect their cache to work the same way on the next release).
If the current code does not accomplish the technical goal, then it
will be fixed on the basis of that goal or removed.

If we can't agree on what it is that the modules are supposed to be
designed to accomplish, then we should delete them from trunk.
If other folks don't like that result, they can bloody well distribute
their own cache module.

As far as *I* am concerned, changes to the cache code must be correct
first and then perform second, and both of those should be proven by
actual testing before being committed to trunk.

Roy



[jira] Work started: (MODPYTHON-200) Can't use signed and marshalled cookies together.

2006-10-29 Thread Graham Dumpleton (JIRA)
 [ http://issues.apache.org/jira/browse/MODPYTHON-200?page=all ]

Work on MODPYTHON-200 started by Graham Dumpleton.

> Can't use signed and marshalled cookies together.
> -
>
> Key: MODPYTHON-200
> URL: http://issues.apache.org/jira/browse/MODPYTHON-200
> Project: mod_python
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.2.10
>Reporter: Graham Dumpleton
> Assigned To: Graham Dumpleton
> Fix For: 3.3
>
>
> As reported by Clodoaldo Pinto Neto on mailing list:
>   http://www.modpython.org/pipermail/mod_python/2006-October/022427.html
> one cannot use signed and marshalled cookies together.
> For example, with publisher code example:
> from mod_python import Cookie
> def makecookies(req):
>     c = Cookie.MarshalCookie('marshal', 'value', 'secret')
>     d = Cookie.SignedCookie('signed', 'value', 'secret')
>     Cookie.add_cookie(req, c)
>     Cookie.add_cookie(req, d)
>     return 'made\n' + str(req.headers_out)
> def showcookies(req):
>     cookies = Cookie.get_cookies(req, Cookie.MarshalCookie, secret='secret')
>     s = 'There are %s cookies' % len(cookies)
>     for c in cookies.values():
>         s += '\n%s %s' % (str(c), type(c))
>     return 'read\n' + repr(cookies) + '\n' + s + '\n' + str(req.headers_in)
> if one accesses makecookies and then showcookies, you get:
> Traceback (most recent call last):
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/importer.py",
>  line 1519, in HandlerDispatch
> default=default_handler, arg=req, silent=hlist.silent)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/importer.py",
>  line 1224, in _process_target
> result = _execute_target(config, req, object, arg)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/importer.py",
>  line 1123, in _execute_target
> result = object(arg)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/publisher.py",
>  line 213, in handler
> published = publish_object(req, object)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/publisher.py",
>  line 425, in publish_object
> return publish_object(req,util.apply_fs_data(object, req.form, req=req))
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/util.py",
>  line 546, in apply_fs_data
> return object(**args)
>   File "/Users/grahamd/public_html/cookies/index.py", line 11, in showcookies
> cookies = Cookie.get_cookies(req, Cookie.MarshalCookie, secret='secret')
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/Cookie.py",
>  line 352, in get_cookies
> return Class.parse(cookies, **kw)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/Cookie.py",
>  line 254, in parse
> c.unmarshal(secret)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/Cookie.py",
>  line 282, in unmarshal
> self.value = marshal.loads(base64.decodestring(self.value))
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/base64.py",
>  line 44, in decodestring
> return binascii.a2b_base64(s)
> Error: Incorrect padding
> The problem is that Cookie.get_cookies() makes the assumption that all cookies 
> sent by the browser will be of the same derived type, or are basic 
> cookies. If derived types are mixed and they are not compatible as far as 
> unpacking goes, the code will fail.
> For starters, there should be a new function called Cookie.get_cookie() where 
> you name the cookie and it only tries to decode that one cookie. This new 
> method should also be used in the Session class instead of using 
> Cookie.get_cookies().

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
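The "Incorrect padding" failure in the traceback above can be reproduced outside mod_python. The values below are hand-built stand-ins for the two cookie formats (a MarshalCookie value is base64-encoded marshal data; a SignedCookie value is a hex digest prepended to the plain value, here a made-up placeholder), not the exact wire format:

```python
import base64
import binascii
import marshal

# Hand-built stand-ins for the two cookie value formats (the digest below
# is a made-up placeholder, not a real HMAC):
marshal_value = base64.encodebytes(marshal.dumps("value")).decode()
signed_value = "0123456789abcdef0123456789abcdef" + "value"

def unmarshal(raw):
    # What MarshalCookie parsing effectively does to *every* cookie value:
    return marshal.loads(base64.decodebytes(raw.encode()))

print(unmarshal(marshal_value))  # the marshalled cookie decodes fine
try:
    unmarshal(signed_value)      # the signed cookie's value is not base64
except binascii.Error as err:
    print("decode failed:", err)
```

Since Cookie.get_cookies() runs every cookie through the same parse path, the signed cookie hits the marshal path and the base64 decode blows up, exactly as in the traceback.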




Re: Problems with apreq2 on OS X

2006-10-29 Thread Dave Viner
this might be a dumb question, but have you checked that the apreq  
module is loaded?


LoadModule apreq_module modules/mod_apreq2.so

?

dave

On Oct 29, 2006, at 12:23 PM, Patrick Galbraith wrote:


Fred Moyer wrote:


Patrick Galbraith wrote:


Fred,

Ok: I have this failure on
1. OS X
2. Suse 10.0 amd 64
3. Suse 9.3   intel 32

Has anyone addressed this? This is what I would call severely  
broken. I would prefer not to use CGI. After this week, I think  
maybe the universe is telling me to learn PHP after all these  
years of being a perl developer.



The short answer would be to use an earlier release of libapreq -  
this only happened to me with the latest release.  Or use CGI for  
now until it's fixed.  There's nothing wrong with using CGI while  
this bug gets worked out, the api is the same for the most part.   
libapreq is still in a development version, and chances are that  
your application will not bottleneck on CGI.


If you go to PHP, you should not expect a trouble free life :) I  
don't have anything against PHP, but it has its own set of  
problems.  With development in any language, you need to make sure  
that you keep a tight hold on your versions.  Using the latest  
version of something isn't always the best move, as experience has  
taught me.  It's nice to try it out and report bugs back, and  
helpful to the development of the project, but use the version  
that works for you.


I've dug around a bit in the code trying to resolve this issue,  
but it's a bit above my head right now.


Fred,

I know, I'm just frustrated and venting after losing many hours of  
dev time. The reason CGI won't work in my case is I've written all  
this code as a handler and porting it to CGI seems like it'd be a  
lot of work.  Maybe not, I'm not sure what I would have to change  
since I rely so much on the request object.


When you say to use an earlier version of libapreq, do you mean  
version 1.0? That won't work because all the linux dists I deal  
with are ones with pre-packaged mod_perl2 and apache2 (but haven't  
been able to get apreq to compile correctly against those pre-packaged 
versions, trying everything from source).


Thanks for your replies!

Patrick






Patrick

Fred Moyer wrote:


Patrick Galbraith wrote:

[Sun Oct 29 12:38:27 2006] [notice] Apache/2.2.3 (Unix) mod_ssl/ 
2.2.3 OpenSSL/0.9.8d DAV/2 mod_perl/2.0.2 Perl/v5.8.8  
configured -- resuming normal operations
dyld: lazy symbol binding failed: Symbol not found:  
_apreq_handle_apache2
 Referenced from: /opt/local/lib/perl5/vendor_perl/5.8.8/ 
darwin-2level/auto/APR/Request/Apache2/Apache2.bundle

 Expected in: dynamic lookup

dyld: Symbol not found: _apreq_handle_apache2
 Referenced from: /opt/local/lib/perl5/vendor_perl/5.8.8/ 
darwin-2level/auto/APR/Request/Apache2/Apache2.bundle

 Expected in: dynamic lookup

[Sun Oct 29 12:38:38 2006] [notice] child pid 11206 exit signal  
Trace/BPT trap (5)


OS X version: Darwin radha.local 8.8.1 Darwin Kernel Version  
8.8.1: Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/ 
RELEASE_I386 i386 i386


Not sure what this is. Anyone encountered this before?




I ran into this also, same platform.  I have been digging around  
a bit to see if I can resolve it but no luck so far - my foo in  
this area isn't quite where it needs to be.  This works fine for  
me on Linux though.


Also, is there a way to have access to things like $rec->param  
without having to use Apache2::Request/libapreq2? I ask this in  
case there is no solution for getting this to work, and also because  
on some linux distributions I cannot get libapreq2 working.




You can use CGI.  Are you hitting this same issue on Linux?












[jira] Created: (MODPYTHON-200) Can't use signed and marshalled cookies together.

2006-10-29 Thread Graham Dumpleton (JIRA)
Can't use signed and marshalled cookies together.
-

 Key: MODPYTHON-200
 URL: http://issues.apache.org/jira/browse/MODPYTHON-200
 Project: mod_python
  Issue Type: Bug
  Components: core
Affects Versions: 3.2.10
Reporter: Graham Dumpleton
 Assigned To: Graham Dumpleton
 Fix For: 3.3


As reported by Clodoaldo Pinto Neto on mailing list:

  http://www.modpython.org/pipermail/mod_python/2006-October/022427.html

one cannot use signed and marshalled cookies together.

For example, with publisher code example:



from mod_python import Cookie

def makecookies(req):
    c = Cookie.MarshalCookie('marshal', 'value', 'secret')
    d = Cookie.SignedCookie('signed', 'value', 'secret')
    Cookie.add_cookie(req, c)
    Cookie.add_cookie(req, d)
    return 'made\n' + str(req.headers_out)

def showcookies(req):
    cookies = Cookie.get_cookies(req, Cookie.MarshalCookie, secret='secret')
    s = 'There are %s cookies' % len(cookies)
    for c in cookies.values():
        s += '\n%s %s' % (str(c), type(c))
    return 'read\n' + repr(cookies) + '\n' + s + '\n' + str(req.headers_in)



If one accesses makecookies and then showcookies, you get:



Traceback (most recent call last):

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/importer.py",
 line 1519, in HandlerDispatch
default=default_handler, arg=req, silent=hlist.silent)

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/importer.py",
 line 1224, in _process_target
result = _execute_target(config, req, object, arg)

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/importer.py",
 line 1123, in _execute_target
result = object(arg)

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/publisher.py",
 line 213, in handler
published = publish_object(req, object)

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/publisher.py",
 line 425, in publish_object
return publish_object(req,util.apply_fs_data(object, req.form, req=req))

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/util.py",
 line 546, in apply_fs_data
return object(**args)

  File "/Users/grahamd/public_html/cookies/index.py", line 11, in showcookies
cookies = Cookie.get_cookies(req, Cookie.MarshalCookie, secret='secret')

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/Cookie.py",
 line 352, in get_cookies
return Class.parse(cookies, **kw)

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/Cookie.py",
 line 254, in parse
c.unmarshal(secret)

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/mod_python/Cookie.py",
 line 282, in unmarshal
self.value = marshal.loads(base64.decodestring(self.value))

  File 
"/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/base64.py",
 line 44, in decodestring
return binascii.a2b_base64(s)

Error: Incorrect padding



The problem is that Cookie.get_cookies() makes the assumption that all cookies 
sent by the browser will be of the same derived type, or are basic cookies. 
If derived types are mixed and they are not compatible as far as unpacking 
goes, the code will fail.

For starters, there should be a new function called Cookie.get_cookie() where 
you name the cookie and it only tries to decode that one cookie. This new 
method should also be used in the Session class instead of using 
Cookie.get_cookies().

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
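A sketch of how the proposed Cookie.get_cookie() could behave: decode only the one named cookie, so other cookies of incompatible derived types in the same header are never touched. This is a simplified, hypothetical stand-in (the helper and class bodies here are invented for illustration), not the mod_python implementation:

```python
class Cookie:
    """Minimal stand-in for mod_python's basic Cookie."""
    def __init__(self, name, value):
        self.name, self.value = name, value

class MarshalCookie(Cookie):
    """Stand-in derived type; real parsing/unmarshalling omitted."""

def parse_cookie_header(header):
    """Split 'a=1; b=2' into plain Cookie objects keyed by name."""
    cookies = {}
    for part in header.split(';'):
        name, _, value = part.strip().partition('=')
        cookies[name] = Cookie(name, value)
    return cookies

def get_cookie(header, name, Class=Cookie):
    """Decode only the named cookie, so cookies of other derived
    types in the same header are never (mis)interpreted."""
    raw = parse_cookie_header(header).get(name)
    if raw is None:
        return None
    return Class(raw.name, raw.value)

# Only the 'marshal' cookie is run through MarshalCookie decoding;
# the 'signed' cookie is left alone:
c = get_cookie('marshal=abc; signed=def', 'marshal', MarshalCookie)
print(type(c).__name__, c.value)
```

The Session class could then look up just its own session cookie this way instead of calling Cookie.get_cookies() and decoding everything the browser sent.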




[jira] Deleted: (MODPYTHON-199) Can

2006-10-29 Thread Graham Dumpleton (JIRA)
 [ http://issues.apache.org/jira/browse/MODPYTHON-199?page=all ]

Graham Dumpleton deleted MODPYTHON-199:
---


> Can
> ---
>
> Key: MODPYTHON-199
> URL: http://issues.apache.org/jira/browse/MODPYTHON-199
> Project: mod_python
>  Issue Type: Bug
>Reporter: Graham Dumpleton
> Assigned To: Graham Dumpleton
>


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MODPYTHON-199) Can

2006-10-29 Thread Graham Dumpleton (JIRA)
Can
---

 Key: MODPYTHON-199
 URL: http://issues.apache.org/jira/browse/MODPYTHON-199
 Project: mod_python
  Issue Type: Bug
  Components: core
Affects Versions: 3.2.10
Reporter: Graham Dumpleton
 Assigned To: Graham Dumpleton
 Fix For: 3.3




-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 05:42 PM, Mladen Turk wrote:
> Ruediger Pluem wrote:
> 
>>
>>
>> I guess we should create a directive like DefineWorker (I do not
>> really care about
>> the exact name), that enables the administrator to define / create a
>> worker.
> 
> 
> Then you can easily just use
> 
>ProxySet ...
> 
> 
> It will define a 'known' worker.
> There is no need for an additional directive.

Yes, this idea also came to me after sending the mail.
It seems to be an acceptable solution for this problem, but
we should really add ProxySet to the documentation to make this
more obvious to other people :-).
Furthermore I think there should be some hints that mod_rewrite
proxy requests will use the default worker (which has no pooling)
if the worker is not defined explicitly.

Regards

Rüdiger
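For illustration, a minimal httpd.conf fragment along the lines discussed here. The backend URL and worker parameters are hypothetical; ProxySet is the httpd 2.2 directive for setting worker parameters inside a <Proxy> block:

```apache
# Hypothetical backend; defining the worker up front means later
# mod_rewrite [P] requests reuse this (pooled) worker instead of
# falling back to the default worker:
<Proxy http://backend.example.com:8080>
    ProxySet max=20 ttl=120
</Proxy>

RewriteEngine On
RewriteRule ^/app/(.*)$ http://backend.example.com:8080/app/$1 [P]
```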


Re: mod_cache summary and plan

2006-10-29 Thread Davi Arnaut
Graham Leggett wrote:
> Davi Arnaut wrote:
> 
>>> You are not going to bully anybody on this list into accepting any 
>>> patch, it's not how this project works.
>> I'm not bullying anyone. This is not a personal attack, it was a public
>> call for you to "adjust" the process.
> 
> Let's not fool ourselves, it was a personal attack.

I'm not fooling myself.

> Bullying me is not going to force me to "adjust" the Apache process that 
> has been in place on this project since I joined it 8 years ago. That 
> process is not mine to change, and the sooner you realise that the 
> better for us all.
> 
>  > I will let this thread die now, which was created to gather
>  > a consensus but failed miserably. I just hope our minor
>  > disagreements won't interfere with us working on mod_cache in
>  > the future. I will repeat again, I'm not attacking you. I was
>  > pursuing what I thought was better for mod_cache.
> 
> mod_cache doesn't get better when the maintainers get fed up by the 
> constant barrage of your abuse and give it up as a bad idea. To declare 
> end of thread doesn't give us any guarantee that we won't see more of 
> the same trolling from you.

Fair enough, I withdraw all my patches and I'm going to unsubscribe
from the list. Have a nice day.

--
Davi Arnaut


Re: Problems with apreq2 on OS X

2006-10-29 Thread Patrick Galbraith

Fred Moyer wrote:


Patrick Galbraith wrote:


Fred,

Ok: I have this failure on
1. OS X
2. Suse 10.0 amd 64
3. Suse 9.3   intel 32

Has anyone addressed this? This is what I would call severely broken. 
I would prefer not to use CGI. After this week, I think maybe the 
universe is telling me to learn PHP after all these years of being a 
perl developer.



The short answer would be to use an earlier release of libapreq - this 
only happened to me with the latest release.  Or use CGI for now until 
it's fixed.  There's nothing wrong with using CGI while this bug gets 
worked out, the api is the same for the most part.  libapreq is still 
in a development version, and chances are that your application will 
not bottleneck on CGI.


If you go to PHP, you should not expect a trouble free life :) I don't 
have anything against PHP, but it has its own set of problems.  With 
development in any language, you need to make sure that you keep a 
tight hold on your versions.  Using the latest version of something 
isn't always the best move, as experience has taught me.  It's nice to 
try it out and report bugs back, and helpful to the development of the 
project, but use the version that works for you.


I've dug around a bit in the code trying to resolve this issue, but 
it's a bit above my head right now.


Fred,

I know, I'm just frustrated and venting after losing many hours of dev 
time. The reason CGI won't work in my case is I've written all this code 
as a handler and porting it to CGI seems like it'd be a lot of work.  
Maybe not, I'm not sure what I would have to change since I rely so much 
on the request object.


When you say to use an earlier version of libapreq, do you mean version 
1.0? That won't work because all the linux dists I deal with are ones 
with pre-packaged mod_perl2 and apache2 (but haven't been able to get 
apreq to compile correctly against those pre-packaged versions, trying 
everything from source).


Thanks for your replies!

Patrick






Patrick

Fred Moyer wrote:


Patrick Galbraith wrote:

[Sun Oct 29 12:38:27 2006] [notice] Apache/2.2.3 (Unix) 
mod_ssl/2.2.3 OpenSSL/0.9.8d DAV/2 mod_perl/2.0.2 Perl/v5.8.8 
configured -- resuming normal operations
dyld: lazy symbol binding failed: Symbol not found: 
_apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle 


 Expected in: dynamic lookup

dyld: Symbol not found: _apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle 


 Expected in: dynamic lookup

[Sun Oct 29 12:38:38 2006] [notice] child pid 11206 exit signal 
Trace/BPT trap (5)


OS X version: Darwin radha.local 8.8.1 Darwin Kernel Version 8.8.1: 
Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/RELEASE_I386 
i386 i386


Not sure what this is. Anyone encountered this before?




I ran into this also, same platform.  I have been digging around a 
bit to see if I can resolve it but no luck so far - my foo in this 
area isn't quite where it needs to be.  This works fine for me on 
Linux though.


Also, is there a way to have access to things like $rec->param 
without having to use Apache2::Request/libapreq2? I ask this in 
case there is no solution for getting this to work, and also because on 
some linux distributions I cannot get libapreq2 working.




You can use CGI.  Are you hitting this same issue on Linux?










Re: Problems with apreq2 on OS X

2006-10-29 Thread Fred Moyer

Patrick Galbraith wrote:

Fred,

Ok: I have this failure on
1. OS X
2. Suse 10.0 amd 64
3. Suse 9.3   intel 32

Has anyone addressed this? This is what I would call severely broken. I 
would prefer not to use CGI. After this week, I think maybe the universe 
is telling me to learn PHP after all these years of being a perl developer.


The short answer would be to use an earlier release of libapreq - this 
only happened to me with the latest release.  Or use CGI for now until 
it's fixed.  There's nothing wrong with using CGI while this bug gets 
worked out, the api is the same for the most part.  libapreq is still in 
a development version, and chances are that your application will not 
bottleneck on CGI.


If you go to PHP, you should not expect a trouble free life :) I don't 
have anything against PHP, but it has its own set of problems.  With 
development in any language, you need to make sure that you keep a tight 
hold on your versions.  Using the latest version of something isn't 
always the best move, as experience has taught me.  It's nice to try it 
out and report bugs back, and helpful to the development of the project, 
but use the version that works for you.


I've dug around a bit in the code trying to resolve this issue, but it's 
a bit above my head right now.




Patrick

Fred Moyer wrote:


Patrick Galbraith wrote:

[Sun Oct 29 12:38:27 2006] [notice] Apache/2.2.3 (Unix) mod_ssl/2.2.3 
OpenSSL/0.9.8d DAV/2 mod_perl/2.0.2 Perl/v5.8.8 configured -- 
resuming normal operations
dyld: lazy symbol binding failed: Symbol not found: 
_apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle 


 Expected in: dynamic lookup

dyld: Symbol not found: _apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle 


 Expected in: dynamic lookup

[Sun Oct 29 12:38:38 2006] [notice] child pid 11206 exit signal 
Trace/BPT trap (5)


OS X version: Darwin radha.local 8.8.1 Darwin Kernel Version 8.8.1: 
Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/RELEASE_I386 
i386 i386


Not sure what this is. Anyone encountered this before?



I ran into this also, same platform.  I have been digging around a bit 
to see if I can resolve it but no luck so far - my foo in this area 
isn't quite where it needs to be.  This works fine for me on Linux 
though.


Also, is there a way to have access to things like $rec->param 
without having to use Apache2::Request/libapreq2? I ask this in case 
there is no solution for getting this to work, as well as on linux 
distributions I cannot get libapreq2 working.



You can use CGI.  Are you hitting this same issue on Linux?







Re: Problems with apreq2 on OS X

2006-10-29 Thread Patrick Galbraith

Fred,

Ok: I have this failure on
1. OS X
2. Suse 10.0 amd 64
3. Suse 9.3   intel 32

Has anyone addressed this? This is what I would call severely broken. I 
would prefer not to use CGI. After this week, I think maybe the universe 
is telling me to learn PHP after all these years of being a perl developer.


Patrick

Fred Moyer wrote:


Patrick Galbraith wrote:

[Sun Oct 29 12:38:27 2006] [notice] Apache/2.2.3 (Unix) mod_ssl/2.2.3 
OpenSSL/0.9.8d DAV/2 mod_perl/2.0.2 Perl/v5.8.8 configured -- 
resuming normal operations
dyld: lazy symbol binding failed: Symbol not found: 
_apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle 


 Expected in: dynamic lookup

dyld: Symbol not found: _apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle 


 Expected in: dynamic lookup

[Sun Oct 29 12:38:38 2006] [notice] child pid 11206 exit signal 
Trace/BPT trap (5)


OS X version: Darwin radha.local 8.8.1 Darwin Kernel Version 8.8.1: 
Mon Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/RELEASE_I386 
i386 i386


Not sure what this is. Anyone encountered this before?



I ran into this also, same platform.  I have been digging around a bit 
to see if I can resolve it but no luck so far - my foo in this area 
isn't quite where it needs to be.  This works fine for me on Linux 
though.


Also, is there a way to have access to things like $rec->param 
without having to use Apache2::Request/libapreq2? I ask this in case 
there is no solution for getting this to work, and also because on some linux 
distributions I cannot get libapreq2 working.



You can use CGI.  Are you hitting this same issue on Linux?





Re: mod_cache summary and plan

2006-10-29 Thread Graham Leggett

Davi Arnaut wrote:

You are not going to bully anybody on this list into accepting any 
patch, it's not how this project works.


I'm not bullying anyone. This is not a personal attack, it was a public
call for you to "adjust" the process.


Let's not fool ourselves, it was a personal attack.

Bullying me is not going to force me to "adjust" the Apache process that 
has been in place on this project since I joined it 8 years ago. That 
process is not mine to change, and the sooner you realise that the 
better for us all.


> I will let this thread die now, which was created to gather
> a consensus but failed miserably. I just hope our minor
> disagreements won't interfere with us working on mod_cache in
> the future. I will repeat again, I'm not attacking you. I was
> pursuing what I thought was better for mod_cache.

mod_cache doesn't get better when the maintainers get fed up by the 
constant barrage of your abuse and give it up as a bad idea. To declare 
end of thread doesn't give us any guarantee that we won't see more of 
the same trolling from you.


Enough is enough.

Regards,
Graham
--


smime.p7s
Description: S/MIME Cryptographic Signature


Re: mod_cache summary and plan

2006-10-29 Thread Davi Arnaut
Graham Leggett wrote:
> Davi Arnaut wrote:
> 
>> I've just described that. Maybe my English was poor in the e-mail.
> 
> Your English is spot on, unfortunately the aggressive nature of your 
> email isn't.
> 
> You are not going to bully anybody on this list into accepting any 
> patch, it's not how this project works.

I'm not bullying anyone. This is not a personal attack, it was a public
call for you to "adjust" the process.

> It is quite clear to me that you are upset that your patches were not 
> accepted as is. Unfortunately your patches break existing compliance 
> with RFC2616 in the cache, and in the process introduce a significant 
> performance penalty. This has been pointed out to you before, and not 
> just by me.

That's exactly the problem, I'm not trying to compete with you. I'm not
upset if my patches are not accepted, I just want the best possible
solution that satisfies the community. I called you to work together
with everybody on the list before committing.

My patches were intended as an experiment; they weren't even targeted at
trunk (they were against the cache refactor branch). I don't care if the patches are going to
be committed or not. I don't contribute to prove to anyone that I'm
better or anything else, I contribute because I really enjoy working on
some parts of httpd/apr.

I'm not going to dispute whether my suggestions or yours get accepted. I
just want everybody to be heard in the process and the final result to
please the majority.

> Your recent comments on patches contributed have made it clear that you 
> neither understand the patches so committed, nor have you actually run 
> the code in question. I respectfully request you fix both these issues 
> before continuing any work on this cache.

I don't want to solve this problem alone, I don't have all the answers.
But I do know that last week's jumbo patches didn't advance the issue any
further because they were vetoed -- not because they were wrong, but
because you didn't work with everybody before committing them.

I will let this thread die now, which was created to gather a consensus
but failed miserably. I just hope our minor disagreements won't
interfere with us working on mod_cache in the future. I will repeat
again, I'm not attacking you. I was pursuing what I thought was better 
for mod_cache.

--
Davi Arnaut


Problems with apreq2 on OS X

2006-10-29 Thread Patrick Galbraith

Hi all,

Trying to move development to my local mac, since the only box on which I 
could get apache2, mod_perl2 and libapreq2 working together is remote, and 
the connection is terrible.


So, I have tried both using source and also with ports, and get the same 
problem upon trying to load my handler:


[Sun Oct 29 12:38:27 2006] [notice] Apache/2.2.3 (Unix) mod_ssl/2.2.3 
OpenSSL/0.9.8d DAV/2 mod_perl/2.0.2 Perl/v5.8.8 configured -- resuming 
normal operations

dyld: lazy symbol binding failed: Symbol not found: _apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle

 Expected in: dynamic lookup

dyld: Symbol not found: _apreq_handle_apache2
 Referenced from: 
/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle

 Expected in: dynamic lookup

[Sun Oct 29 12:38:38 2006] [notice] child pid 11206 exit signal 
Trace/BPT trap (5)



OS X version: Darwin radha.local 8.8.1 Darwin Kernel Version 8.8.1: Mon 
Sep 25 19:42:00 PDT 2006; root:xnu-792.13.8.obj~1/RELEASE_I386 i386 i386



Not sure what this is. Anyone encountered this before?

Also, is there a way to have access to things like $rec->param without 
having to use Apache2::Request/libapreq2? I ask this in case there is no 
solution for getting this to work, and also because on some linux 
distributions I cannot get libapreq2 working.


Thanks!

Patrick
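One way to narrow down a dyld "Symbol not found" error like the one above is to check which objects reference or export the symbol. The paths below are guesses based on the error log (the MacPorts layout); adjust them for your install:

```shell
# Hypothetical paths based on the error log above -- adjust as needed.
BUNDLE=/opt/local/lib/perl5/vendor_perl/5.8.8/darwin-2level/auto/APR/Request/Apache2/Apache2.bundle
MODULE=/opt/local/apache2/modules/mod_apreq2.so

# List symbol tables and look for the missing symbol in each object:
for f in "$BUNDLE" "$MODULE"; do
    if nm "$f" 2>/dev/null | grep -q apreq_handle_apache2; then
        echo "$f: mentions apreq_handle_apache2"
    else
        echo "$f: symbol not listed (or file missing)"
    fi
done
```

If the Perl bundle references the symbol as undefined but nothing Apache has loaded exports it (for example, because the `LoadModule apreq_module` line is missing, as suggested elsewhere in this thread), the dynamic lookup fails exactly as in the log.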


Re: mod_cache summary and plan

2006-10-29 Thread Graham Leggett

Davi Arnaut wrote:


I've just described that. Maybe my English was poor in the e-mail.


Your English is spot on, unfortunately the aggressive nature of your 
email isn't.


You are not going to bully anybody on this list into accepting any 
patch, it's not how this project works.


It is quite clear to me that you are upset that your patches were not 
accepted as is. Unfortunately your patches break existing compliance 
with RFC2616 in the cache, and in the process introduce a significant 
performance penalty. This has been pointed out to you before, and not 
just by me.


Your recent comments on patches contributed have made it clear that you 
neither understand the patches so committed, nor have you actually run 
the code in question. I respectfully request you fix both these issues 
before continuing any work on this cache.


Regards,
Graham
--


smime.p7s
Description: S/MIME Cryptographic Signature


Re: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-29 Thread Mladen Turk

Ruediger Pluem wrote:



I guess we should create a directive like DefineWorker (I do not really 
care about the exact name), that enables the administrator to define / 
create a worker.


Then you can easily just use

   ProxySet ...


It will define a 'known' worker.
There is no need for an additional directive.

Regards,
Mladen


Re: mod_cache summary and plan

2006-10-29 Thread Davi Arnaut
Ruediger Pluem wrote:
> 
> On 10/29/2006 04:39 PM, Davi Arnaut wrote:
>> Graham Leggett wrote:
>>
>>> Davi Arnaut wrote:
>>>
>>>
 . Problem:
>>> You have described two separate problems below.
>>
>> No, and it seems you are deeply confused about what buckets and brigades
>> represent. You already committed what, four fixes to the same problem?
>> Each time we point out your wrong assumptions you come up with yet another
>> bogus fix. Could you please stop for a moment and listen?
>>
>> IMHO, you haven't presented any acceptable fix and you keep trying to
>> fix things by yourself without discussing on the list first. And more
>> importantly, discussing on the list means that you have to hear other
>> people's comments.
>>
>>
> 
>>> The solution was to pass the output filter through the save_body() hook, 
>>> and let the save_body() code decide for itself when the best time is to 
>>> write the bucket(s) to the network.
>>>
>>> For example in the disk cache, the apr_bucket_read() loop will read 
>>> chunks of the 4.7GB file 4MB at a time. Each chunk will be cached, and 
>>> then that chunk will be written to the network, then cleaned up. Rinse, 
>>> repeat.
>>>
>>> Previously, save_body() was expected to save all 4.7GB to the cache, and 
>>> then only write the first byte to the network possibly minutes later.
>>>
>>> If a filter was present before cache that for any reason converted file 
>>> buckets into heap buckets (for example mod_deflate), then save_body() 
>>> would try and store 4.7GB of heap buckets in RAM to pass to the network 
>>> later, and boom.
>>
>> You just described what I said in other words. Listen, if you
>> don't change your attitude a bit I won't continue arguing with you; it's
>> pointless.
> 
> I do not really like the way the discussion is going here. If I remember
> correctly we had a very similar discussion between you and Graham several
> months ago regarding the need for mod_cache to be RFC compliant.

Yes, but it was about the generic cache architecture.

> It may be that
> we circle around the same things again and again and sometimes this may
> even be unproductive. But this way for sure we do not get anything
> productive out of this (and the past proved it). As we had this in the
> past I try to throw a flag very early to avoid wasting time for everybody
> with the back and forth following such things. If you are frustrated by
> Graham's responses and the situation please try to express this a little
> differently and less personalized.

I'm not frustrated, I just wanted to tell him what I thought. I think he
is a smart guy and works hard on the issues but we could achieve much
more by collaborating constructively in a sane manner (step by step).

--
Davi Arnaut
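The streaming save_body() flow described in the quoted text above (cache a chunk, forward it, release it, repeat) can be sketched schematically, in Python rather than the actual C filter code, with file-like objects standing in for the cache provider and the downstream filter chain:

```python
import io

CHUNK_SIZE = 4 * 1024 * 1024  # e.g. 4MB chunks, as in the disk cache example

def save_body(source, cache, network, chunk_size=CHUNK_SIZE):
    """Cache and forward one chunk at a time; the full body is never
    held in memory, so a 4.7GB response needs only ~chunk_size of RAM."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        cache.write(chunk)    # persist this chunk first...
        network.write(chunk)  # ...then send it downstream right away
        # chunk is dropped here, keeping memory use bounded

# Tiny demonstration with in-memory streams and a 4-byte chunk size:
source = io.BytesIO(b"x" * 10)
cache, network = io.BytesIO(), io.BytesIO()
save_body(source, cache, network, chunk_size=4)
print(cache.getvalue() == network.getvalue() == b"x" * 10)
```

Contrast this with the old behavior the quote describes: save the entire body to the cache first, then start writing to the network, which forces everything into RAM when an upstream filter (e.g. mod_deflate) has turned file buckets into heap buckets.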



Re: mod_cache summary and plan

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 04:39 PM, Davi Arnaut wrote:
> Graham Leggett wrote:
> 
>>Davi Arnaut wrote:
>>
>>
>>>. Problem:
>>
>>You have described two separate problems below.
> 
> 
> No, and it seems you are deeply confused about what buckets and brigades
> represent. You already committed what, four fixes to the same problem?
> Each time we point out your wrong assumptions you come up with yet another
> bogus fix. Could you please stop for a moment and listen?
> 
> IMHO, you haven't presented any acceptable fix and you keep trying to
> fix things by yourself without discussing on the list first. And more
> importantly, discussing on the list means that you have to hear other
> people's comments.
> 
> 

> 
>>The solution was to pass the output filter through the save_body() hook, 
>>and let the save_body() code decide for itself when the best time is to 
>>write the bucket(s) to the network.
>>
>>For example in the disk cache, the apr_bucket_read() loop will read 
>>chunks of the 4.7GB file 4MB at a time. Each chunk will be cached, and 
>>then that chunk will be written to the network, then cleaned up. Rinse, 
>>repeat.
>>
>>Previously, save_body() was expected to save all 4.7GB to the cache, and 
>>then only write the first byte to the network possibly minutes later.
>>
>>If a filter was present before cache that for any reason converted file 
>>buckets into heap buckets (for example mod_deflate), then save_body() 
>>would try and store 4.7GB of heap buckets in RAM to pass to the network 
>>later, and boom.
> 
> 
> You just described what I said in other words. Listen, if you don't
> change your attitude a bit I won't continue arguing with you; it's
> pointless.

I do not really like the way the discussion goes here. If I remember
correctly, we had a very similar discussion between you and Graham several
months ago regarding the need for mod_cache to be RFC compliant. It may be
that we circle around the same things again and again, and sometimes this
may even be unproductive. But this way we surely do not get anything
productive out of this (and the past proved it). As we had this in the
past, I try to throw a flag very early to avoid wasting everybody's time
with the back and forth following such things. If you are frustrated by
Graham's responses and the situation, please try to express this a little
differently and less personalized.


Regards

Rüdiger


Re: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-29 Thread Jess Holle

Ruediger Pluem wrote:

I guess we should create a directive like DefineWorker (I do not really
care about the exact name) that enables the administrator to define /
create a worker. That would be really handy for mod_rewrite, as in the
reverse proxy case the number of different backend targets is usually
limited and known to the administrator. This would avoid the need for
nasty tricks like "pseudo balancers" with only one member.

Sounds good.

On a related note, our practice with mod_jk is to route only *.jsp, 
/servlet/*, and a few other URL patterns to Tomcat and let Apache handle 
everything else.  We also want to support load balancing with sticky 
sessions, of course.


That combination is pretty easy and straightforward with mod_jk.  It has 
been *baffling* with mod_proxy_ajp.  Perhaps we just haven't spent long 
enough on mod_rewrite, etc, but so far we're not getting anywhere...


--
Jess Holle


Re: mod_cache summary and plan

2006-10-29 Thread Davi Arnaut
Graham Leggett wrote:
> Davi Arnaut wrote:
> 
>> . Problem:
> 
> You have described two separate problems below.

No, and it seems you are deeply confused about what buckets and brigades
represent. You already committed what? Four fixes to the same problem?
Each time we point out your wrong assumptions you come up with yet
another bogus fix. Could you please stop for a moment and listen?

IMHO, you haven't presented any acceptable fix, and you keep trying to
fix things by yourself without discussing on the list first. And more
importantly, discussing on the list means that you have to hear other
people's comments.

>> For a moment forget about file buckets and large files; what's really at
>> stake is proxy/cache brigade management when the arrival rate is too
>> high (e.g. a single 4.7GB file bucket: high-rate input data to be
>> consumed at a relatively low rate).
>>
>> By operating as a normal output filter mod_cache must deal with
>> potentially large brigades of (possibly) different (other than the stock
>> ones) bucket types created by other filters on the chain.
> 
> This first problem has largely been solved, bar some testing.

Those "fixes" were vetoed if I remember correctly.

> The solution was to pass the output filter through the save_body() hook, 
> and let the save_body() code decide for itself when the best time is to 
> write the bucket(s) to the network.
> 
> For example in the disk cache, the apr_bucket_read() loop will read 
> chunks of the 4.7GB file 4MB at a time. Each chunk will be cached, 
> then written to the network, then cleaned up. Rinse, repeat.
> 
> Previously, save_body() was expected to save all 4.7GB to the cache, and 
> then only write the first byte to the network possibly minutes later.
> 
> If a filter was present before cache that for any reason converted file 
> buckets into heap buckets (for example mod_deflate), then save_body() 
> would try and store 4.7GB of heap buckets in RAM to pass to the network 
> later, and boom.

You just described what I said in other words. Listen, if you don't
change your attitude a bit I won't continue arguing with you; it's
pointless.

> How mod_disk_cache chooses to send data to the network is an entirely 
> separate issue, detailed below.

NO! It's the same problem.

>> The problem arises from the fact that mod_disk_cache's store function
>> traverses the brigade by itself, reading each bucket in order to write
>> its contents to disk, potentially filling memory with large chunks
>> of data allocated/created by the bucket type's read function (e.g. the
>> file bucket).
> 
> To put this another way:
> 
> The core problem in the old cache code was that the assumption was made 
> that it was practical to call apr_bucket_read() on the same data _twice_ 
> - once during caching, once during network write.

No, the core problem is how it manages the bucket/brigade (deep down
it's the same problem, but ...).

> This assumption isn't valid, thus the recent fixes.
> 
>> . Constraints:
>>
>> No threads/forked processes.
>> Bucket type specific workarounds won't work.
>> No core changes/knowledge, easily back-portable fixes are preferable.
>>
>> . Proposed solution:
>>
>> File buffering (or a part of Graham's last approach).
>>
>> The solution consists of using the cache file as an output buffer by
>> splitting the buckets into smaller chunks and writing them to disk. Once
>> written (apr_file_write_full), a new file bucket is created with the
>> offset and size of the just-written buffer. The old bucket is deleted.
>>
>> After that, the bucket is inserted into a temporary (empty) brigade and
>> sent down the output filter stack for (probably) network I/O.
>>
>> At a quick glance, this solution may sound absurd -- the chunk is
>> already in memory, and the output filter might need it again in memory
>> soon. But there's no silver bullet, and it's a simple enough approach to
>> solve the growing memory problem without incurring performance
>> penalties.
> 
> As soon as apr_file_write_full() is executed, the bucket just saved to 
> disk cache is also in kernel buffer memory - meaning that a 
> corresponding apr_bucket_read() afterwards in the network code reads 
> already kernel memory cached data.

I've just said that in the e-mail, but you deleted it.

> In performance testing, on files small enough to be buffered by the 
> kernel (a few MB), the initial part of the download after caching is 
> very fast.
> 
> What this technique does is guarantee that regardless of the source of 
> the response, be it a file, a CGI, or proxy, what gets written to the 
> network is always a file, and always takes advantage of kernel based 
> file performance features.

I've just described that. Maybe my English was poor in the e-mail.

--
Davi Arnaut


Re: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 04:15 PM, Jess Holle wrote:
> Ruediger Pluem wrote:
> 
>> I guess we should create a directive like DefineWorker (I do not
>> really care about
>> the exact name), that enables the administrator to define / create a
>> worker.
>> That would be really handy for mod_rewrite as in the reverse proxy
>> case the number of
>> different backend targets are usually limited and known to the
>> administrator.
>> This would avoid the need for nasty tricks like "pseudo balancers"
>> with only one member.
>>   
> 
> Sounds good.
> 
> On a related note, our practice with mod_jk is to route only *.jsp,
> /servlet/*, and a few other URL patterns to Tomcat and let Apache handle
> everything else.  We also want to support load balancing with sticky
> sessions, of course.
> 
> That combination is pretty easy and straightforward with mod_jk.  It has
> been *baffling* with mod_proxy_ajp.  Perhaps we just haven't spent long
> enough on mod_rewrite, etc, but so far we're not getting anywhere...

How about

RewriteEngine On
RewriteRule ^(.*\.jsp|/servlet/.*)$ balancer://mycluster$1 [P]

<Proxy balancer://mycluster>
ProxySet stickysession=JSESSIONID nofailover=On
BalancerMember ajp://1.2.3.4:8009 route=tomcat1 max=10
BalancerMember ajp://1.2.3.5:8010 route=tomcat2 max=10
</Proxy>

Regards

Rüdiger


Re: [Fwd: Re: apr_brigade_create() produces a corrupt brigade]

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 03:53 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
> 
>> Just two curious questions:
>>
>> 1. Did APR_BRIGADE_EMPTY return true on this newly created brigade?
> 
> 
> No idea, didn't try it.
> 
>> 2. Shouldn't the code take care never to process the sentinel because
>> of the
>>problems you pointed out above (invalid data, especially in the
>> jump table)?
> 
> 
> Which code, apr or the client code?

Client code.

> 
> In the case of the client code, it shouldn't have to take care about
> anything - if an entry in the jump table is unimplemented for any
> reason, it should be initialised to NULL, and attempts to call those
> methods should return APR_ENOTIMPL.
> 
> At the moment, no clean error occurs, as the code falls off the rails and
> eventually crashes randomly later on.

This sounds reasonable. At least this produces reliable error situations
if you use the sentinel by mistake, and makes things much easier to debug.
Guess that needs to be fixed inside of apr-util.

Regards

Rüdiger



Re: [Fwd: Re: svn commit: r466865 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_authn_dbd.xml modules/aaa/mod_auth.h modules/aaa/mod_authn_dbd.c modules/aaa/mod_authnz_ldap.c]

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 03:47 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
> 
>> Yes, this is correct. It is set by AuthDBDUserPWQuery.
>>
>>> What sql statement would correspond with "USER_" above?
>>
>>
>> The one set by AuthDBDUserRealmQuery. It is used inside
>>
>> authn_dbd_realm
>>
>> OK, USER_ might be the wrong word, but we definitely have two possibly
>> different queries with possibly the same field names which are put in
>> the same environment namespace.
> 
> 
> My understanding of the code is that either the realm query will get
> run, or the password query will get run - otherwise we would be checking
> the password twice.

Ok, this is true. I had not checked that before. The password query is for
basic auth and the realm query is for digest auth. I don't think that they
get used in the same request.

> 
> AUTHENTICATE_ entries are only added to the environment for the second
> and subsequent columns in each query.
> 
> If two sql queries are being done, then the admin need only add the
> extra columns to one of the queries.
> 
> If this is ever a problem, the admin can simply give the second query
> different column names to the first, assuming there are two queries at all.

Yes, but the rows selected could be different, and thus the contents of
the fields; but as stated above it is very unlikely that both queries are
run for the same request, so this does not matter.

> 
> The point behind the AUTHENTICATE_ is that it is the same as that of
> mod_authnz_ldap. If you put the sql ones in different namespaces, then
> it seriously reduces the usefulness of putting this info in the
> environment, as users of this information now have to care which module
> did the authz and authn.

This is clear. I was just worried that we would overwrite the contents of
one of the AUTHENTICATE_ variables we had just written a stage before, but
as this is not the case there is no point in having different namespaces
and thus reducing usefulness.

Regards

Rüdiger



Re: mod_deflate ignores Content-Encoding header

2006-10-29 Thread Sven Köhler
>> imagine a simple CGI-script:
>>
>>
>> #!/usr/bin/perl
>> print "Content-Encoding: identity\n";
>> print "Content-Type: text/plain\n";
>> print "\n";
>> print "test";
>>
>>
>> AFAIK, "identity" indicates that no transformation is being done on the
>> content.
>>
>> IMHO, mod_deflate should implement the following logic:
>>
>> Content-Encoding-header already present?
>>   yes: do nothing, just forward content
>>   no: add Content-Encoding header and do compression
> 
> It's a valid behaviour.  So's the current one.

Hmm, but at the moment mod_deflate just adds another Content-Encoding
header, even if one is already present. The client then gets two of them.
Is that intended?

On the other hand, mod_deflate doesn't need to recompress something that
is already compressed (when indicated by the Content-Encoding header).

I don't think these cases are currently handled properly.
(I can at least confirm the double Content-Encoding header using the
CGI script above.)

> Putting it under the control of the admin is a reasonable
> proposition.  mod_filter does that: you can conditionally
> insert mod_deflate.

I'm trying to understand mod_filter, but can I also insert a filter
if and only if the Content-Encoding header is not already present in the
response?
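mod_filter can express roughly this condition. The sketch below uses the 2.4-style FilterProvider expression syntax purely as an illustration (2.2's match syntax differs, so treat the dispatch line as an assumption and check the mod_filter documentation for your version):

```apache
# Hedged sketch: only hand the response to DEFLATE when no
# Content-Encoding header has been set yet ("-z" tests for empty).
FilterDeclare  gzip CONTENT_SET
FilterProvider gzip DEFLATE "-z %{resp:Content-Encoding}"
FilterChain    gzip
```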





Re: [Fwd: Re: apr_brigade_create() produces a corrupt brigade]

2006-10-29 Thread Graham Leggett

Ruediger Pluem wrote:


Just two curious questions:

1. Did APR_BRIGADE_EMPTY return true on this newly created brigade?


No idea, didn't try it.


2. Shouldn't the code take care never to process the sentinel because of the
   problems you pointed out above (invalid data, especially in the jump table)?


Which code, apr or the client code?

In the case of the client code, it shouldn't have to take care about 
anything - if an entry in the jump table is unimplemented for any 
reason, it should be initialised to NULL, and attempts to call those 
methods should return APR_ENOTIMPL.


At the moment, no clean error occurs, as the code falls off the rails and 
eventually crashes randomly later on.


Regards,
Graham
--




Re: [Fwd: Re: svn commit: r466865 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_authn_dbd.xml modules/aaa/mod_auth.h modules/aaa/mod_authn_dbd.c modules/aaa/mod_authnz_ldap.c]

2006-10-29 Thread Graham Leggett

Ruediger Pluem wrote:


Yes, this is correct. It is set by AuthDBDUserPWQuery.


What sql statement would correspond with "USER_" above?


The one set by AuthDBDUserRealmQuery. It is used inside

authn_dbd_realm

OK, USER_ might be the wrong word, but we definitely have two possibly
different queries with possibly the same field names which are put in the
same environment namespace.


My understanding of the code is that either the realm query will get 
run, or the password query will get run - otherwise we would be checking 
the password twice.


AUTHENTICATE_ entries are only added to the environment for the second 
and subsequent columns in each query.


If two sql queries are being done, then the admin need only add the 
extra columns to one of the queries.


If this is ever a problem, the admin can simply give the second query 
different column names to the first, assuming there are two queries at all.


The point behind the AUTHENTICATE_ is that it is the same as that of 
mod_authnz_ldap. If you put the sql ones in different namespaces, then 
it seriously reduces the usefulness of putting this info in the 
environment, as users of this information now have to care which module 
did the authz and authn.
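As a hedged illustration of the behaviour described above (the table and column names here are invented): every column after the first one returned by the configured query is exported to the environment with the AUTHENTICATE_ prefix.

```apache
# First column must be the password; an extra column such as "email"
# would surface to CGI/SSI as AUTHENTICATE_EMAIL (schema is hypothetical).
AuthDBDUserPWQuery "SELECT password, email FROM authn WHERE username = %s"
```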


Regards,
Graham
--




Re: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-29 Thread Ruediger Pluem


On 10/28/2006 05:26 PM, Jim Jagielski wrote:
> Ruediger Pluem wrote:
> 
>>
>>On 10/27/2006 06:20 PM, Jess Holle wrote:
>>
>>
>>>On the other hand, if I use:
>>>
>>>   ProxyPass /jsp-examples ajp://localhost:8010/jsp-examples
>>>
>>>This works fine!
>>>
>>>I assume I should file a bug against mod_proxy -- or is this a known issue?
>>
>>At least it is known to me :-). See 
>>http://issues.apache.org/bugzilla/show_bug.cgi?id=40275#c6
>>
> 
> 
> As well as anyone who knows how the proxy module works... :)
> 
> The main issue is that with ProxyPass the proxy module knows
> that an external proxy will be used at config time and is
> able to pro-actively construct the worker and pool. With
> mod_rewrite, it's tough, since what is proxied is runtime
> determined... Now maybe one could be maintained after
> the fact, but you would need some sort of GC, esp if
> the mod_rewrite proxy is totally indeterminant.

I guess we should create a directive like DefineWorker (I do not really
care about the exact name) that enables the administrator to define /
create a worker. That would be really handy for mod_rewrite, as in the
reverse proxy case the number of different backend targets is usually
limited and known to the administrator. This would avoid the need for
nasty tricks like "pseudo balancers" with only one member.

Regards

Rüdiger




Re: [Fwd: Re: apr_brigade_create() produces a corrupt brigade]

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 01:59 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
> 
>>> This runs fine - a brigade is created, containing a single bucket.
>>>
>>> The trouble is, the bucket inside the brigade is corrupt - its name
>>> consists of random bytes, and the pointers to its methods are either
>>
>>
>> Maybe a stupid thought, but isn't this bucket the sentinel, and doesn't
>> APR_BRIGADE_EMPTY return true on this brigade?
> 
> 
> There definitely was one bucket in the new empty brigade, and it makes
> sense that this bucket was the sentinel. What didn't make sense though
> was that most of the fields in this bucket were uninitialised, so the
> jump table for code that implements the various bucket methods consisted
> of bogus addresses.

Just two curious questions:

1. Did APR_BRIGADE_EMPTY return true on this newly created brigade?
2. Shouldn't the code take care never to process the sentinel because of the
   problems you pointed out above (invalid data, especially in the jump table)?

Regards

Rüdiger



Re: [Fwd: Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h]

2006-10-29 Thread Ruediger Pluem


On 10/29/2006 01:56 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
> 
>> Apart from this, Paul created a branch a while ago for mod_cache
>> refactoring.
>> As it has turned out the whole thing creates some bigger discussion
>> and patches
>> go in and out. So I think it would be a good idea to do this on a dev
>> branch instead
>> of the trunk. So I propose the following thing:
>>
>> 1. Create a dev branch again for mod_cache based on the current trunk.
>> 2. Roll back the patches on trunk.
>> 3. Work out the whole thing on the dev branch until there is consensus
>>    about the solution and only minor issues need to be addressed.
>> 4. Merge the dev branch back into trunk.
>> 5. Address the minor issues on trunk and tweak it there.
>>
>> This gives people who cannot follow the whole history the chance to
>> review the whole thing at step 4, as some sort of review of a complete
>> new module :-)
> 
> 
> A trunk by any other name, will still smell as sweet.
> 
> If the branch was created beforehand, then this would have made sense,

It was there, but sadly it was not used. But I admit that I should have
pointed this out much earlier, so this is also my fault.

> but to have created the branch so late in the process, we are just
> creating work for ourselves that will ultimately end up with the same
> result.
> 
> Currently history reflects the reality of what was tried along the road,
> what was objected to, backed out, and tried again.
> 
> If we redo this without the objected-to bits, we are simply rewriting
> history (literally), thus removing the history's real value.

We actually do not remove the history. It will still live in the branch,
but yes, on the trunk, once we merge the branch back in, it would look
like some kind of magical jump. To get all the gory details you would
need to go through the logs of the (then deleted) branch to get the
history of the back and forth and the rationales.

Regards

Rüdiger



Re: [Fwd: Re: svn commit: r466865 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_authn_dbd.xml modules/aaa/mod_auth.h modules/aaa/mod_authn_dbd.c modules/aaa/mod_authnz_ldap.c]

2006-10-29 Thread Ruediger Pluem
On 10/29/2006 01:50 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
> 
>> Does it really make sense to put this in the same environment namespace?
>> What if we have rows with the same name here and for the password query?
>> Shouldn't the prefix be AUTHN_PREFIX + (USER_|PASSWORD_)?
> 
> 
> My understanding of the code is that only one password query is ever
> executed - is this correct?

Yes, this is correct. It is set by AuthDBDUserPWQuery.

> 
> What sql statement would correspond with "USER_" above?

The one set by AuthDBDUserRealmQuery. It is used inside

authn_dbd_realm

OK, USER_ might be the wrong word, but we definitely have two possibly
different queries with possibly the same field names which are put in the
same environment namespace.

Regards

Rüdiger


Re: ROADMAP for mod_cache

2006-10-29 Thread Graham Leggett

Davi Arnaut wrote:


Graham, could you please summarize the problems we want to solve and the
possible solutions and send then to the list ?


The cache needs a notifier API because, as Joe pointed out, it cannot be
guaranteed that ap_core_output_filter() will not block. You have one in
the pipeline; this is definitely a big next step.

The cache also needs some testing, both for performance, and to make 
sure there are no lingering memory leaks.


Regards,
Graham
--




Re: [Fwd: Re: apr_brigade_create() produces a corrupt brigade]

2006-10-29 Thread Graham Leggett

Ruediger Pluem wrote:


This runs fine - a brigade is created, containing a single bucket.

The trouble is, the bucket inside the brigade is corrupt - its name
consists of random bytes, and the pointers to its methods are either


Maybe a stupid thought, but isn't this bucket the sentinel, and doesn't
APR_BRIGADE_EMPTY return true on this brigade?


There definitely was one bucket in the new empty brigade, and it makes 
sense that this bucket was the sentinel. What didn't make sense though 
was that most of the fields in this bucket were uninitialised, so the 
jump table for code that implements the various bucket methods consisted 
of bogus addresses.


Regards,
Graham
--




Re: [Fwd: Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h]

2006-10-29 Thread Graham Leggett

Ruediger Pluem wrote:


Apart from this, Paul created a branch a while ago for mod_cache refactoring.
As it has turned out, the whole thing creates some bigger discussion, and
patches go in and out. So I think it would be a good idea to do this on a
dev branch instead of the trunk. So I propose the following:

1. Create a dev branch again for mod_cache based on the current trunk.
2. Roll back the patches on trunk.
3. Work out the whole thing on the dev branch until there is consensus about
   the solution and only minor issues need to be addressed.
4. Merge the dev branch back into trunk.
5. Address the minor issues on trunk and tweak it there.

This gives people who cannot follow the whole history the chance to review
the whole thing at step 4, as some sort of review of a complete new module :-)


A trunk by any other name will still smell as sweet.

If the branch was created beforehand, then this would have made sense, 
but to have created the branch so late in the process, we are just 
creating work for ourselves that will ultimately end up with the same 
result.


Currently history reflects the reality of what was tried along the road, 
what was objected to, backed out, and tried again.


If we redo this without the objected-to bits, we are simply rewriting 
history (literally), thus removing the history's real value.


Regards,
Graham
--




Re: [Fwd: Re: svn commit: r466865 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_authn_dbd.xml modules/aaa/mod_auth.h modules/aaa/mod_authn_dbd.c modules/aaa/mod_authnz_ldap.c]

2006-10-29 Thread Graham Leggett

Ruediger Pluem wrote:


Does it really make sense to put this in the same environment namespace?
What if we have rows with the same name here and for the password query?
Shouldn't the prefix be AUTHN_PREFIX + (USER_|PASSWORD_)?


My understanding of the code is that only one password query is ever 
executed - is this correct?


What sql statement would correspond with "USER_" above?

I don't follow.

Regards,
Graham
--




Re: mod_cache summary and plan

2006-10-29 Thread Graham Leggett

Davi Arnaut wrote:


. Problem:


You have described two separate problems below.


For a moment forget about file buckets and large files; what's really at
stake is proxy/cache brigade management when the arrival rate is too
high (e.g. a single 4.7GB file bucket: high-rate input data to be
consumed at a relatively low rate).

By operating as a normal output filter mod_cache must deal with
potentially large brigades of (possibly) different (other than the stock
ones) bucket types created by other filters on the chain.


This first problem has largely been solved, bar some testing.

The solution was to pass the output filter through the save_body() hook, 
and let the save_body() code decide for itself when the best time is to 
write the bucket(s) to the network.


For example in the disk cache, the apr_bucket_read() loop will read 
chunks of the 4.7GB file 4MB at a time. Each chunk will be cached, 
then written to the network, then cleaned up. Rinse, repeat.


Previously, save_body() was expected to save all 4.7GB to the cache, and 
then only write the first byte to the network possibly minutes later.


If a filter was present before cache that for any reason converted file 
buckets into heap buckets (for example mod_deflate), then save_body() 
would try and store 4.7GB of heap buckets in RAM to pass to the network 
later, and boom.


How mod_disk_cache chooses to send data to the network is an entirely 
separate issue, detailed below.



The problem arises from the fact that mod_disk_cache's store function
traverses the brigade by itself, reading each bucket in order to write
its contents to disk, potentially filling memory with large chunks
of data allocated/created by the bucket type's read function (e.g. the
file bucket).


To put this another way:

The core problem in the old cache code was that the assumption was made 
that it was practical to call apr_bucket_read() on the same data _twice_ 
- once during caching, once during network write.


This assumption isn't valid, thus the recent fixes.


. Constraints:

No threads/forked processes.
Bucket type specific workarounds won't work.
No core changes/knowledge, easily back-portable fixes are preferable.

. Proposed solution:

File buffering (or a part of Graham's last approach).

The solution consists of using the cache file as an output buffer by
splitting the buckets into smaller chunks and writing them to disk. Once
written (apr_file_write_full), a new file bucket is created with the
offset and size of the just-written buffer. The old bucket is deleted.

After that, the bucket is inserted into a temporary (empty) brigade and
sent down the output filter stack for (probably) network I/O.

At a quick glance, this solution may sound absurd -- the chunk is
already in memory, and the output filter might need it again in memory
soon. But there's no silver bullet, and it's a simple enough approach to
solve the growing memory problem without incurring performance
penalties.


As soon as apr_file_write_full() is executed, the bucket just saved to 
disk cache is also in kernel buffer memory - meaning that a 
corresponding apr_bucket_read() afterwards in the network code reads 
already kernel memory cached data.


In performance testing, on files small enough to be buffered by the 
kernel (a few MB), the initial part of the download after caching is 
very fast.


What this technique does is guarantee that regardless of the source of 
the response, be it a file, a CGI, or proxy, what gets written to the 
network is always a file, and always takes advantage of kernel based 
file performance features.


Regards,
Graham
--




Re: apr_dbd_mysql for apache2.2

2006-10-29 Thread Joachim Zobel
Am Sonntag, den 29.10.2006, 14:39 +1000 schrieb Mark Constable:
> *** glibc detected *** /usr/sbin/httpd: double free or corruption (!prev): 
> 0x08278360 ***

This is often mentioned as "the dreaded ...". From a certain version on,
glibc has treated double free as a fatal error. I remember lots of apps
on my laptop blew up when debian testing (etch) hit this.

According to

http://mail-archives.apache.org/mod_mbox/httpd-bugs/200605.mbox/%
[EMAIL PROTECTED]/bugzilla/%3E

setting the environment variable MALLOC_CHECK_=1 or 0 should work around it.

Sincerely,
Joachim




Re: mod_disk_cache summarization

2006-10-29 Thread Graham Leggett

Henrik Nordstrom wrote:


How ETags are generated is extremely server dependent, and they are not
guaranteed to be unique across different URLs. You cannot at all count
on two files having the same ETag but different URLs being the same
file, unless you are also responsible for the server providing all the
URLs in question and know that the server guarantees this behaviour of
ETag beyond what the HTTP specification says.


Exactly - this is the case here.

The problem that is being solved is when a server serves the same file, 
within the same server, but at multiple different URLs. The file in this 
case is interpreted as a different file each time, and is cached over 
and over again.


Obviously such a feature wouldn't be something you would switch on by 
default, and the admin would be expected to know what they are doing when 
they switch this behaviour on.


Regards,
Graham
--


