Re: add cookie in handler process

2010-09-20 Thread Issac Goldstand
Because a redirect isn't considered a normal (e.g., 200) response, it
uses the err_headers_out table on output.

Issac

On 9/20/2010 3:49 AM, whut_jia wrote:
Thank you. Following your suggestion, I used r->err_headers_out and the question is
resolved! But I want to ask why. Why can't I set the cookie directly in the main
request's r->headers_out?
 Thanks,
 Jia  




 At 2010-09-20 02:09:54, Sorin Manolache sor...@gmail.com wrote:

 2010/9/19 whut_jia whut_...@163.com:
 Hello,
 I am new to Apache module development, and now I have a problem.
 I am writing a handler module. In this module, I need to validate the
 username/password information sent by the user. After validating, I set a cookie
 in headers_out (apr_table_set(r->headers_out, "Set-Cookie", ...)) and
 then do an external redirection (apr_table_setn(r->headers_out, "Location",
 URL)). The question is that I get the Location header but no Set-Cookie
 header when I access the Apache server. Why?
 Many thanks,
 Jia
 Use r->err_headers_out
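[Editor's note] A minimal sketch of the pattern Sorin suggests, against the httpd module API (not compilable standalone; the cookie value and URL are placeholders). When httpd generates a non-2xx response such as a redirect, it rebuilds the outgoing header table, preserving only a few fields from headers_out (Location among them) but always merging in err_headers_out, which is why the Set-Cookie header must go there:

```c
/* Sketch: assumes the usual httpd headers (httpd.h, http_protocol.h,
 * apr_tables.h) and that the username/password check already succeeded. */
static int login_handler(request_rec *r)
{
    /* err_headers_out survives the error/redirect path; headers_out
     * is mostly discarded for non-2xx responses. */
    apr_table_set(r->err_headers_out, "Set-Cookie",
                  "session=...; Path=/");        /* value elided */

    /* Location is one of the few fields preserved from headers_out. */
    apr_table_setn(r->headers_out, "Location", "http://example.com/welcome");

    return HTTP_MOVED_TEMPORARILY;               /* external redirect, 302 */
}
```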



[mod_fcgid-2.3.5] app classes based on the host header

2010-09-20 Thread Naresh Kumar

Hi,

mod_vhost_alias has a feature to serve files based on the Host header, using the
VirtualDocumentRoot /usr/local/apache/vhosts/%0 setting.

Similarly, I am wondering what process I should follow to make mod_fcgid spawn
and manage processes based on the Host header.

I have made the following changes in the modules/fcgid/fcgid_pm_unix.c file:

command->deviceid = deviceid;
command->inode = inode;
command->share_grp_id = share_grp_id;
// command->virtualhost = r->server->server_hostname;
command->virtualhost = r->hostname;

ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
              "%s", command->virtualhost);

fcgid.conf:
FcgidWrapper /usr/bin/php-wrapper .php virtual
FcgidMaxProcessesPerClass 10

But the fcgid process manager is crashing after that. (I am not using any
virtualhost setting in the Apache config.)

It looks like I need to understand how the classes are managed. Any pointers?
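[Editor's note] A guess at the crash, offered as an assumption rather than a diagnosis of the 2.3.5 sources: the command struct is handed to the separate process-manager process (over a pipe), so storing the raw pointer r->hostname only transfers a pointer into the request pool, which is meaningless in the PM process. If the struct carries a character array for the class name, copying the string in would be the safer shape (the field name and size below are hypothetical):

```c
/* Sketch: copy the Host value into the command struct rather than
 * storing a pointer into the request pool. The server_hostname
 * field name and its size are hypothetical. */
apr_cpystrn(command->server_hostname,
            r->hostname ? r->hostname : r->server->server_hostname,
            sizeof(command->server_hostname));
```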

... Frobo.






Re: Re: add cookie in handler process

2010-09-20 Thread whut_jia
I want to ask what the relationship is between an external redirect and a
subrequest.

Thanks,

Jia

Re: mod_disk_cache: making commit_entity() atomic

2010-09-20 Thread Niklas Edmundsson

On Fri, 17 Sep 2010, Graham Leggett wrote:


On 17 Sep 2010, at 1:41 PM, Niklas Edmundsson wrote:


I personally favor designs that need at most O_EXCL-style write locking.

Having been bitten by various lock-related issues over the years, I'm in
favor of an explicit-lock-free design, if it can be done cleanly and with
good performance.


If going this route, I'd suggest putting the entire path to the data file in
the header, and not just a uniquifying string (to make it easier to split the
hashing of header and data in the future).


The problem with this is that if you bake the location of the file into the 
cache, you would never be able to move the cache around.


Just store the path relative to the cache root.


Is there a benefit to keeping headers and bodies separate?


As we cache files from an NFS mount, we hash on device:inode as a
simple method of reducing duplicates of files (say, a dozen URLs all
resolving to the same DVD image). We see a huge benefit from being able
to do this, as we get a grotesque amount of data duplication otherwise.


So we usually have multiple header files all pointing to the same data 
file.


For the more generic cache it might also be useful, provided that you
have a mechanism to identify duplicated data. The only thing I can
think of is hashing on the data blocks, but that isn't really feasible
for large files. I suspect there are use cases with a backend that can
provide hints for this, though.




/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se  | ni...@acc.umu.se
---
 Come on, higher now! A watcher scoffs at gravity! - Giles
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


Re: mod_disk_cache: making commit_entity() atomic

2010-09-20 Thread Graham Leggett

On 20 Sep 2010, at 12:52 PM, Niklas Edmundsson wrote:

As we cache files from an NFS mount, we hash on device:inode as a
simple method of reducing duplicates of files (say, a dozen URLs all
resolving to the same DVD image). We see a huge benefit from being
able to do this, as we get a grotesque amount of data duplication
otherwise.


So we usually have multiple header files all pointing to the same  
data file.


For the more generic cache it might also be useful, provided that you
have a mechanism to identify duplicated data. The only thing I can
think of is hashing on the data blocks, but that isn't really feasible
for large files. I suspect there are use cases with a backend that can
provide hints for this, though.


I think this use case is bordering on something that would need to be
in its own module, rather than trying to stretch mod_disk_cache to be
aware of FILE buckets. Something like mod_diskfile_cache (or
something; mod_file_cache already exists and probably should have been
called mod_fd_cache, but oh well).


Hmmm...

I notice the interface for create_entity() in the cache provider  
doesn't pass the output bucket brigade through to the provider.


This would be useful in this case, because a dedicated file caching  
provider module might want to look inside the brigade to see if it  
contains a single FILE bucket, and if not, to DECLINE the request to  
cache.


Does such a change sound sensible?

int (*create_entity)(cache_handle_t *h, request_rec *r,
                     const char *urlkey, apr_off_t len,
                     apr_bucket_brigade *bb);
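[Editor's note] With the brigade in hand, the check Graham describes might look roughly like this (a sketch against the APR bucket API; the helper name and the exact DECLINE convention are assumptions):

```c
/* Sketch: report whether the brigade is a single FILE bucket,
 * optionally followed by EOS; a dedicated file-caching provider
 * could DECLINE in create_entity() when this returns 0. */
static int brigade_is_single_file(apr_bucket_brigade *bb)
{
    apr_bucket *b = APR_BRIGADE_FIRST(bb);

    if (b == APR_BRIGADE_SENTINEL(bb) || !APR_BUCKET_IS_FILE(b)) {
        return 0;
    }
    b = APR_BUCKET_NEXT(b);
    return b == APR_BRIGADE_SENTINEL(bb) || APR_BUCKET_IS_EOS(b);
}
```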


Regards,
Graham
--



Re: Remove <Limit> and <LimitExcept>?

2010-09-20 Thread William A. Rowe Jr.
On 9/18/2010 5:45 PM, Stefan Fritsch wrote:
 
 What do other people think about removing <Limit> and <LimitExcept> 
 and adding mod_allowmethods from the sandbox to easily forbid some 
 methods? Or would this create too much trouble when upgrading 
 configurations?

I've been distracted by patches which I owe to specific users, customers
and so forth, so I haven't actually had cycles to commit the patches that
float my own boat :)

But what I have are two patches: one which rips out <Limit> entirely from
the httpd code, along with the method-limit logic from the handful of
directives which actually supported the feature, and a second patch which
introduces <Method> as a section and supports all directives, just as any
other first-class section such as <Location> or <Directory>.

The unfinished bit of that patch is deciding how and where the section
merge will occur.  Since it's a NTP that could lead to some degree of
confusion about scoping, it really seems like that should happen every
time a per-dir merge occurs.

What I'm thinking of for the solution is to have a post-merge hook, so
that re-merges can occur for any registered section provider.  The whole
<Files> merge could become one consumer of this hook.  That hook would,
of course, return a newly merged section, or the identity of the source
dir config if everyone declines.  It's recursive, in that you could
end up with a nested <Files> in a <Location>, which in turn has nested
methods, which are acceptable.

Does anyone have thoughts on the best way to handle per-dir nesting?






Re: Remove <Limit> and <LimitExcept>?

2010-09-20 Thread William A. Rowe Jr.
On 9/20/2010 11:12 AM, William A. Rowe Jr. wrote:
 On 9/18/2010 5:45 PM, Stefan Fritsch wrote:

 What do other people think about removing <Limit> and <LimitExcept> 
 and adding mod_allowmethods from the sandbox to easily forbid some 
 methods? Or would this create too much trouble when upgrading 
 configurations?
 
 The unfinished bit of that patch is deciding how and where the section
 merge will occur.  Since it's a NTP that could lead to some degree of
 confusion about scoping, it really seems like that should happen every
 time a per-dir merge occurs.
 
 What I'm thinking of for the solution is to have a post-merge hook, so
 that re-merges can occur for any registered section provider.  The whole
 <Files> merge could become one consumer of this hook.  That hook would,
 of course, return a newly merged section, or the identity of the source
 dir config if everyone declines.  It's recursive, in that you could
 end up with a nested <Files> in a <Location>, which in turn has nested
 methods, which are acceptable.
 
 Does anyone have thoughts on the best way to handle per-dir nesting?

I think perhaps the best solution is a sandbox where we could collaborate
on the unfinished aspect of this patch, and abuse the proxy/location/dir/files
handlers into cooperating with a new schema.

If no one objects, I'll fork the sandbox for this experiment later this evening.


Re: Remove <Limit> and <LimitExcept>?

2010-09-20 Thread Greg Stein
The <Limit>/<LimitExcept> directives are *very* handy and important when
mod_dav is being used. In fact, <LimitExcept> was created specifically
to avoid listing every new method that might come along via
DAV specs and such.
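[Editor's note] For reference, the canonical mod_dav pattern Greg is describing: protect everything except plain reads, without having to enumerate PROPFIND, MKCOL, LOCK, and whatever future specs add (a fragment only; it assumes AuthType/AuthName and a user database are configured elsewhere, and the path is illustrative):

```apache
<Location /repos>
    Dav On
    <LimitExcept GET HEAD OPTIONS>
        Require valid-user
    </LimitExcept>
</Location>
```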

As long as an alternative is available, then I don't care. But the
functionality is very important.

Cheers,
-g

On Sat, Sep 18, 2010 at 18:45, Stefan Fritsch s...@sfritsch.de wrote:
 This is from https://issues.apache.org/bugzilla/show_bug.cgi?id=49927

 On Saturday 18 September 2010, bugzi...@apache.org wrote:
 --- Comment #3 from Nick Kew n...@webthing.com 2010-09-18
 06:38:34 EDT ---

  No, the current documentation is correct. The semantics of
  <Limit>/<LimitExcept> are just insane. We should really get rid of it
  in 2.4 and improve the docs for 2.2. Maybe the 'unprotected' warning
  should be big, red, and blinking ;-)

 Agreed.  We can even document it as superseded by
 <If $request->method ...>,
 having of course checked the expression parser, which probably
 needs updating to support things like
    ... in GET,HEAD,OPTIONS,TRACE
 without some nasty great OR expression.

 What do other people think about removing <Limit> and <LimitExcept>
 and adding mod_allowmethods from the sandbox to easily forbid some
 methods? Or would this create too much trouble when upgrading
 configurations?


 BTW, we could also add an authz provider to allow things like

 Require method GET,HEAD,...

 Though this would be slower than mod_allowmethods, because authz
 providers have to parse the Require line on every request.
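[Editor's note] A sketch of what the alternatives under discussion might look like in configuration, with the caveat that all of this syntax is tentative at this point in the thread (the 'in' operator depends on the expression-parser update Nick mentions, and the authz provider does not exist yet):

```apache
# mod_allowmethods, from the sandbox:
AllowMethods GET HEAD OPTIONS

# Expression-based, assuming the parser gains an 'in' operator:
<If "!(%{REQUEST_METHOD} in { 'GET', 'HEAD', 'OPTIONS' })">
    Require all denied
</If>

# Proposed authz provider:
Require method GET HEAD OPTIONS
```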



Re: Remove <Limit> and <LimitExcept>?

2010-09-20 Thread William A. Rowe Jr.
On 9/20/2010 12:27 PM, Greg Stein wrote:
 The <Limit>/<LimitExcept> directives are *very* handy and important when
 mod_dav is being used. In fact, <LimitExcept> was created specifically
 to avoid listing every new method that might come along via
 DAV specs and such.
 
 As long as an alternative is available, then I don't care. But the
 functionality is very important.

Agreed.  Although I have seen (from the DAV perspective) far too many
configurations with directives other than the three, with the author's
expectation that they were triggered only for the <Limit>/<LimitExcept> methods.

That's the idea of both the <Method>[Except] block and Nick's illustration
of the <If> block.  In 2.4, we shouldn't be accepting directives that don't
actually respect the container they are placed within and that operate without
any indication that it is a misconfiguration.

The ability to control by [unknown] methods isn't going anywhere; in fact
there will be MTOWTDI (more than one way to do it), reliably, out of the box :)