Re: mod_perl and Transfer-Encoding: chunked

2013-07-03 Thread Joseph Schaefer
When you read from the input filter chain, as $r->read does, the HTTP input 
filter automatically handles the chunked protocol and passes the dechunked data 
up to the caller. It does not spool the stream at all.

You'd have to look at how mod_perl implements read() to see whether it loops its 
ap_get_brigade calls on the input filter chain to fill the passed buffer to the 
desired length or not.  But under no circumstances should you have to deal with 
chunked data directly.
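
For example, here is a minimal sketch (a hypothetical handler; the 8K buffer 
size is arbitrary) of draining a request body that way, looping until read() 
returns 0:

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::RequestIO ();
    use Apache2::Const -compile => qw(OK);

    sub handler {
        my $r = shift;
        my $body = '';
        my $buf;
        # The HTTP input filter hands back raw, already-dechunked data;
        # read() returns the number of bytes placed in $buf, 0 at end
        # of the request body.
        while ($r->read($buf, 8192)) {
            $body .= $buf;
        }
        $r->content_type('text/plain');
        $r->print('received ' . length($body) . " bytes\n");
        return Apache2::Const::OK;
    }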

HTH

Sent from my iPhone

On Jul 3, 2013, at 2:44 PM, Bill Moseley  wrote:

> Hi Jim,
> 
> This is the Transfer-Encoding: chunked I was writing about:
> 
> http://tools.ietf.org/html/rfc2616#section-3.6.1
> 
> 
> 
> On Wed, Jul 3, 2013 at 11:34 AM, Jim Schueler  wrote:
>> I played around with chunking recently in the context of media streaming: 
>> The client is only requesting a "chunk" of data.  "Chunking" is how media 
>> players perform a "seek".  It was originally implemented for FTP transfers, 
>> e.g., to transfer a large file in (say 10K) chunks.  In the case that you 
>> describe below, if no Content-Length is specified, that indicates "send the 
>> remainder".
>> 
>> From what I know, a "chunk" request header is used this way to specify the 
>> server response.  It does not reflect anything about the data included in 
>> the body of the request.  So first, I would ask if you're confused about 
>> this request information.
>> 
>> Hypothetically, some browsers might try to upload large files in small 
>> chunks and the "chunk" header might reflect a push transfer.  I don't know 
>> if "chunk" is ever used for this purpose.  But it would require the 
>> following characteristics:
>> 
>>   1.  The browser would need to originally inquire if the server is
>>   capable of this type of request.
>>   2.  Each chunk of data will arrive in a separate and independent HTTP
>>   request.  Not necessarily in the order they were sent.
>>   3.  Two or more requests may be handled by separate processes
>>   simultaneously that can't be written into a single destination.
>>   4.  Somehow the server needs to request a resend if a chunk is missing.
>>   Solving this problem requires an imaginative use of HTTP.
>> 
>> Sounds messy.  But might be appropriate for 100M+ sized uploads.  This *may* 
>> reflect your situation.  Can you please confirm?
>> 
>> For a single process, the incoming content-length is unnecessary. Buffered 
>> I/O automatically knows when transmission is complete.  The read() argument 
>> is the buffer size, not the content length.  Whether you spool the buffer to 
>> disk or simply enlarge the buffer should be determined by your hardware 
>> capabilities.  This is standard I/O behavior that has nothing to do with HTTP 
>> chunking.  Without a "Content-Length" header, after looping your read() 
>> operation, determine the length of the aggregate data and pass that to 
>> Catalyst.
>> 
>> But if you're confident that the complete request spans several smaller 
>> (chunked) HTTP requests, you'll need to address all the problems I've 
>> described above, plus the problem of re-assembling the whole thing for 
>> Catalyst.  I don't know anything about Plack, maybe it can perform all this 
>> required magic.
>> 
>> Otherwise, if the whole purpose of the Plack temporary file is to pass a 
>> file handle, you can pass a buffer as a file handle.  Used to be IO::String, 
>> but now that functionality is built into the core.
>> 
>> By your last paragraph, I'm really lost.  Since you're already passing the 
>> request as a file handle, I'm guessing that Catalyst creates the temporary 
>> file for the *response* body.  Can you please clarify?  Also, what do you 
>> mean by "de-chunking"?  Is that the same thing as re-assembling?
>> 
>> Wish I could give a better answer.  Let me know if this helps.
>> 
>> -Jim
>> 
>> 
>> 
>> On Tue, 2 Jul 2013, Bill Moseley wrote:
>> 
>>> For requests that are chunked (Transfer-Encoding: chunked and no
>>> Content-Length header) calling $r->read returns unchunked data from the
>>> socket.
>>> That's indeed handy.  Is that mod_perl doing that un-chunking or is it
>>> Apache?
>>> 
>>> But, it leads to some questions.   
>>> 
>>> First, if $r->read reads unchunked data then why is there a
>>> Transfer-Encoding header saying that the content is chunked?   Shouldn't
>>> that header be removed?   How does one know if the content is chunked or
>>> not, otherwise?
>>> 
>>> Second, if there's no Content-Length header then how does one know how much
>>> data to read using $r->read?   
>>> 
>>> One answer is until $r->read returns zero bytes, of course.  But, is
>>> that guaranteed to always be the case, even for, say, pipelined requests?  
>>> My guess is yes because whatever is de-chunking the request knows to stop
>>> after reading the last chunk, trailer and empty line.   Can anyone elaborate
>>> on how Apache/mod_perl is doing this? 
>>> 
>>> 
>>> Perhaps I'm approaching this incorrectly, but this is all a bit un

Re: mod_perl and Transfer-Encoding: chunked

2013-07-03 Thread Joseph Schaefer
Dechunked means it strips out the lines containing metadata about the next 
block of raw data.  The metadata is just the length (in hex) of the next block 
of data.  Think of a chunked stream as having partial Content-Length headers 
embedded in the data stream.
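
For illustration, a chunked body carrying the nine bytes "Wikipedia" looks like 
this on the wire (each size line is a hex byte count; a zero-length chunk ends 
the stream):

    4\r\n
    Wiki\r\n
    5\r\n
    pedia\r\n
    0\r\n
    \r\n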

The HTTP filter embedded in httpd takes care of the metadata so you don't have 
to parse the stream yourself. $r->read will always provide only the raw data in 
a blocking call, until the stream is finished, in which case it should return 0 
or an error code.  Check the mod_perl docs, or better the source, to see whether 
the semantics are more like Perl's sysread or more like read.

Sent from my iPhone

On Jul 3, 2013, at 4:31 PM, Jim Schueler  wrote:

> In light of Joe Schaefer's response, I appear to be outgunned.  So, if 
> nothing else, can someone please clarify whether "de-chunked" means 
> re-assembled?
> 
> -Jim
> 
> On Wed, 3 Jul 2013, Jim Schueler wrote:
> 
>> Thanks for the prompt response, but this is your question, not mine.  I 
>> hardly need an RTFM for my trouble.
>> 
>> I drew my conclusions using a packet sniffer.  And as far-fetched as my 
>> answer may seem, it's more plausible than your theory that Apache or mod_perl 
>> is decoding a raw socket stream.
>> 
>> The crux of your question seems to be how the request content gets
>> magically re-assembled.  I don't think it was ever disassembled in the first 
>> place.  But if you don't like my answer, and you don't want to ignore it 
>> either, then please restate the question.  I can't find any definition for 
>> unchunked, and Wiktionary's definition of de-chunk says to "break apart a 
>> chunk", that is (counter-intuitively) chunk a chunk.
>> 
>> 
>>>   Second, if there's no Content-Length header then how
>>>   does one know how much
>>>   data to read using $r->read?   
>>> 
>>>   One answer is until $r->read returns zero bytes, of
>>>   course.  But, is
>>>   that guaranteed to always be the case, even for,
>>>   say, pipelined requests?  
>>>   My guess is yes because whatever is de-chunking the
>> 
>> read() is blocking.  So it never returns 0, even in a pipeline request (if 
>> no data is available, it simply waits).  I don't wish to discuss the merits 
>> here, but there is no technical imperative for a Content-Length field in 
>> the request header.
>> 
>> -Jim
>> 
>> 
>> 
>> 
>> 
>> 
>> On Wed, 3 Jul 2013, Bill Moseley wrote:
>> 
>>> Hi Jim,
>>> This is the Transfer-Encoding: chunked I was writing about:
>>> http://tools.ietf.org/html/rfc2616#section-3.6.1

Re: lost directory indexes

2014-09-15 Thread Joseph Schaefer
You have to recompile mod_perl so its response handler hook runs before 
mod_dir's.  Have it run first or really first (APR_HOOK_FIRST / 
APR_HOOK_REALLY_FIRST) instead of middle (APR_HOOK_MIDDLE).

Sent from my iPhone

> On Sep 15, 2014, at 6:02 PM, Ruben Safir  wrote:
> 
> On Mon, Sep 15, 2014 at 05:40:00PM -0400, Ruben Safir wrote:
 
 
 Now the error log for an index request says:
 
 [Mon Sep 15 12:14:09 2014] [error] [client 10.0.0.57] Attempt to serve
 directory: /usr/local/apache/htdocs/resources/, referer:
 http://www.mrbrklyn.com/
 INDEXES ON
 SYMLINKS OFF
 CGI OFF
 
 
 It knows that INDEXES is on, but it is ignoring the config file's setting.
 
 I suppose I need to identify the directory request and then
 decline to respond to it.
>>> 
>>> Yes.
>>> 
 But I have no idea how to do this.
>>> 
>>> Have you tried
>>> 
>>> return DECLINED; # ?
>> 
>> 
>> I think I need return Apache2::Const::DECLINED;
> 
> This is the module that is failing to work
> 
> https://httpd.apache.org/docs/2.2/mod/mod_dir.html
> 
> As soon as you pull up mod_perl it stops working.  It will not pick up
> from the response cycle once you interject a mod_perl module.
> 
> 
> 
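
A minimal sketch of the DECLINED approach discussed above, assuming the code is 
installed as a PerlResponseHandler (the hook-ordering caveat from Joseph's reply 
still applies):

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::RequestIO ();
    use Apache2::Const -compile => qw(DECLINED OK);

    sub handler {
        my $r = shift;

        # If the request maps to a directory, step aside so
        # mod_dir/mod_autoindex can generate the index instead.
        return Apache2::Const::DECLINED if -d $r->filename;

        $r->content_type('text/plain');
        $r->print("handled by mod_perl\n");
        return Apache2::Const::OK;
    }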


save_gp segfaults during restart

2014-11-10 Thread Joseph Schaefer
Something odd is going on with ERRSV in trunk.  Without preloading APR::Error I 
see lots of segfaults, but even preloading it doesn't fix them all.  I'm running 
the latest 2.4 release with the event MPM.
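
For anyone following along, "preloading" here just means loading the module at 
server startup, e.g. via a hypothetical startup.pl:

    # startup.pl, pulled in with "PerlRequire /path/to/startup.pl"
    # in httpd.conf; the empty import list suppresses import().
    use APR::Error ();
    1;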

Testing now with the save_gp calls commented out...

Sent from my iPhone

Re: [RELEASE CANDIDATE]: mod_perl-2.0.9 RC3

2015-06-11 Thread Joseph Schaefer
+1 looking good Steve!

Sent from my iPhone

> On Jun 11, 2015, at 6:44 PM, Fred Moyer  wrote:
> 
> +1, all tests passed on httpd 2.4.12, perl 5.20.1, Centos 6.5. Nice work 
> Steve!
> 
>> On Wed, Jun 10, 2015 at 10:13 AM, Steve Hay  
>> wrote:
>> Please download, test, and report back on this release candidate of
>> the long-awaited mod_perl 2.0.9.
>> 
>> http://people.apache.org/~stevehay/mod_perl-2.0.9-rc3.tar.gz
>> 
>> MD5 = 61d07fe00919d9da2b49dbf7b821b1a7
>> SHA1 = 09e1d5f19312742db9da38c8e7f8955a77d29dfd
>> 
>> Changes since RC2:
>> 
>> Fix t/api/aplog.t for apr-1.5.2. [Steve Hay]
>> 
>> Note that Perl 5.22.x is currently not supported. This is logged as
>> CPAN RT#101962 and will hopefully be addressed in 2.0.10. [Steve Hay]
>> 
>> Fix unthreaded build, which was broken in 2.0.9-rc2. [Steve Hay]


Re: How to read request content without eating it ?

2016-02-28 Thread Joseph Schaefer
Use apreq.

Sent from my iPhone

> On Feb 28, 2016, at 1:04 PM, Ben RUBSON  wrote:
> 
> Hello,
> 
> I need to implement an access control handler based on request content.
> 
> So here is my (very simplified) PerlAccessHandler code:
> 
>  use Apache2::RequestRec ();
>  use Apache2::RequestIO ();
>  use Apache2::Const -compile => qw(OK AUTH_REQUIRED);
> 
>  sub handler {
>    my $r = shift;
>    my $content;
>    # read the whole body, using the declared Content-Length
>    $r->read($content, $r->headers_in->{'Content-Length'});
>    if ($content =~ /signature=expected_signature/) {
>      return Apache2::Const::OK;
>    }
>    return Apache2::Const::AUTH_REQUIRED;
>  }
> 
> It works.
> My problem is further, when handler returns OK and Apache runs the user 
> requested CGI script.
> The request content provides some additional parameters the target CGI script 
> needs.
> However, as soon as $r->read is used, request content is no more available to 
> the CGI script.
> 
> So my question is, how to read request content without making it unavailable 
> to the final requested CGI ?
> 
> Thank you very much,
> 
> Best regards,
> 
> Ben
> 
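
"Use apreq" in practice: a minimal sketch, assuming libapreq2 and its mod_apreq2 
input filter are installed, of rewriting Ben's access handler with 
Apache2::Request.  apreq reads the body through its own filter, which spools and 
re-injects the data, so the downstream CGI script still sees the full request 
body:

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::Request ();
    use Apache2::Const -compile => qw(OK AUTH_REQUIRED);

    sub handler {
        my $r = shift;

        # Parse the body via apreq instead of $r->read, so the data
        # is not consumed away from later handlers.
        my $req = Apache2::Request->new($r);

        # "signature" is the field name from Ben's example.
        my $sig = $req->body('signature');

        return Apache2::Const::OK
            if defined $sig && $sig eq 'expected_signature';

        return Apache2::Const::AUTH_REQUIRED;
    }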


Re: Question about Apache 2.4 and libapreq2 (Apache2::Request)

2017-01-18 Thread Joseph Schaefer
We've been using apreq with 2.4 for two years without issue at work.  I can't 
imagine why anyone would have a problem with it on any version of httpd 2.x.

Sent from my iPhone

> On Jan 18, 2017, at 3:06 PM, JW  wrote:
> 
> 
> Hi,
> 
> I currently use Apache 2.2, mod_perl and libapreq2 (for Apache2::Request and 
> Apache2::Cookie). I did a test installation of Apache 2.4 (yum), mod_perl 
> (source) and libapreq2-2.13 (source), and it seems to work fine. 
> 
> The last update of libapreq2 was in 2010. I'm aware that not every library 
> has to be updated and frankly I'm pleased that it still works. However, 
> before I make a permanent switch to Apache 2.4, I was wondering if anyone 
> doing a similar upgrade experienced problems using libapreq2 and what 
> alternative(s) they chose. 
> 
> Thank you. 
> 
> John
> 
> 
> 


Need a recommendation from an apreq user

2017-04-11 Thread Joseph Schaefer
Hi folks,

As one of the core developers for apreq and apreq2,  I'm currently in need of a 
recommendation from a happy user of the software for a private business lead.

If you'd be willing to write one on my behalf, please contact me offlist for 
further details.

Thanks all.

Sent from my iPhone


Re: Need a recommendation from an apreq user

2017-04-11 Thread Joseph Schaefer
My company is bidding on a state contract and they want letters of 
recommendation about past work I've done in the development arena.  The 
recommendation can be just about mod_perl itself too.

Sent from my iPhone

> On Apr 11, 2017, at 5:54 PM, Joseph Schaefer  wrote:
> 
> Hi folks,
> 
> As one of the core developers for apreq and apreq2,  I'm currently in need of 
> a recommendation from a happy user of the software for a private business 
> lead.
> 
> If you'd be willing to write one on my behalf, please contact me offlist for 
> further details.
> 
> Thanks all.
> 
> Sent from my iPhone



Re: New release of libapreq2

2020-02-05 Thread Joseph Schaefer
I’m no longer a part of Apache, sorry.

Sent from my iPhone

> On Jan 31, 2020, at 4:49 AM, p...@cpan.org wrote:
> 
> On Thursday 24 October 2019 20:58:41 Steve Hay wrote:
>>> On Thu, 24 Oct 2019 at 15:50,  wrote:
>>> 
>>> On Wednesday 06 September 2017 08:23:12 Steve Hay wrote:
 On 19 January 2017 at 14:25, Issac Goldstand  wrote:
> That release was canceled due to lack of votes,
>>> 
>>> Hello Issac! Have you released this version on CPAN as a trial release for 
>>> testing?
>>> 
>>> I have not found it on https://metacpan.org/release/libapreq2 so the Perl
>>> community has not noticed it.
>>> 
> but regardless there was
> very little effective difference between that and 2.13 - mostly around
> tests, docs and build scripts.  2.13 should run just fine on 2.4
 
 Somehow, it only came to my attention yesterday that 2.14 never
 officially got released. That's a great shame because 2.13 doesn't
 build out-of-the-box on Windows, at least not with httpd-2.4, whereas
 2.14 does.
 
 Is there any chance of resurrecting it, or else just going for a new
 release numbered 2.15?
>>> 
>>> In svn repository are some fixes for NULL pointer dereference.
>>> https://svn.apache.org/viewvc/httpd/apreq/trunk/?view=log
>>> 
>>> So it would be great to see a new version with these fixes released.
>>> 
>> 
>> +1
> 
> Hello! Could you please do an official release of libapreq2 with
> mentioned fixes which are already in svn?