Re: Segmentation fault when downloading large files

2002-09-05 Thread Brian Pane

Joe Schaefer wrote:

>There's also a refcount problem in http_protocol.c wrt chunked
>transfer codings. The problem is that the bucket holding the
>chunk size isn't ever freed, so the corresponding data block
>is overcounted.
>
>Here's a diff for http_protocol.c against current anon-cvs (which
>doesn't seem to have any of the newer changes to ap_get_client_block).
>Light testing seems to indicate that this fixes the refcount problem.
>

Thanks, I'll commit this now.  I've applied a slightly simplified
form of this change: clearing the brigade right after the
apr_brigade_flatten regardless of the return code.
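
In other words, the committed version looks roughly like this (a sketch
of the idea just described, not the diff verbatim):

    rv = apr_brigade_flatten(bb, line, &len);
    apr_brigade_cleanup(bb);   /* drop the buckets whether or not the
                                * flatten succeeded */
    if (rv == APR_SUCCESS) {
        ctx->remaining = get_chunk_size(line);
    }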

I think there may be a couple of other bucket leaks in that same
file; I'll scan through it and fix any other leaks I can find.

Brian

>diff -u -r1.454 http_protocol.c
>--- http_protocol.c 13 Aug 2002 14:27:39 -  1.454
>+++ http_protocol.c 5 Sep 2002 17:15:12 -
>@@ -901,6 +901,7 @@
>         if (rv == APR_SUCCESS) {
>             rv = apr_brigade_flatten(bb, line, &len);
>             if (rv == APR_SUCCESS) {
>+                apr_brigade_cleanup(bb);
>                 ctx->remaining = get_chunk_size(line);
>             }
>         }
>@@ -966,6 +967,7 @@
>         if (rv == APR_SUCCESS) {
>             rv = apr_brigade_flatten(bb, line, &len);
>             if (rv == APR_SUCCESS) {
>+                apr_brigade_cleanup(bb);
>                 ctx->remaining = get_chunk_size(line);
>             }
>         }
>  
>






Re: Segmentation fault when downloading large files

2002-09-05 Thread Joe Schaefer

Graham Leggett <[EMAIL PROTECTED]> writes:

> Peter Van Biesen wrote:
> 
> > Does anybody have another idea for me to try ?
> 
> Have you tried the latest fix for the client_block stuff? I think I saw
> a very recent CVS checkin...
> 
> There could of course be more than one leak, and we'll only fix the 
> problem once all of them are found...


There's also a refcount problem in http_protocol.c wrt chunked
transfer codings. The problem is that the bucket holding the
chunk size isn't ever freed, so the corresponding data block
is overcounted.

Here's a diff for http_protocol.c against current anon-cvs (which
doesn't seem to have any of the newer changes to ap_get_client_block).
Light testing seems to indicate that this fixes the refcount problem.

diff -u -r1.454 http_protocol.c
--- http_protocol.c 13 Aug 2002 14:27:39 -  1.454
+++ http_protocol.c 5 Sep 2002 17:15:12 -
@@ -901,6 +901,7 @@
         if (rv == APR_SUCCESS) {
             rv = apr_brigade_flatten(bb, line, &len);
             if (rv == APR_SUCCESS) {
+                apr_brigade_cleanup(bb);
                 ctx->remaining = get_chunk_size(line);
             }
         }
@@ -966,6 +967,7 @@
         if (rv == APR_SUCCESS) {
             rv = apr_brigade_flatten(bb, line, &len);
             if (rv == APR_SUCCESS) {
+                apr_brigade_cleanup(bb);
                 ctx->remaining = get_chunk_size(line);
             }
         }
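
For anyone following along: get_chunk_size() is httpd's helper that parses
the hex chunk-size line at the start of each chunk of the chunked transfer
coding. Roughly, the idea is the sketch below; parse_chunk_size is just an
illustrative stand-in, not the httpd implementation:

    #include <apr_lib.h>   /* apr_isxdigit(), apr_isdigit(), apr_tolower() */

    /* Illustration only: the chunk header is the chunk length in hex,
     * possibly followed by chunk extensions, then a CRLF. */
    static apr_size_t parse_chunk_size(const char *line)
    {
        apr_size_t size = 0;

        while (apr_isxdigit(*line)) {
            int c = apr_tolower(*line);
            size = size * 16 + (apr_isdigit(c) ? c - '0' : c - 'a' + 10);
            line++;
        }
        return size;
    }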




Re: Segmentation fault when downloading large files

2002-09-05 Thread Brian Pane

Peter Van Biesen wrote:

>Hi,
>
>I've recompiled the server with only the proxy and the core modules :
>

>But the problem remains : 
>

>Does anybody have another idea for me to try ?
>

There was a fix for http_protocol.c last night that addressed
at least one problem that could cause the proxy to run out of
memory on a large request.  I recommend trying the latest code
in cvs.

Brian





Re: Segmentation fault when downloading large files

2002-09-05 Thread Graham Leggett

Peter Van Biesen wrote:

> Does anybody have another idea for me to try ?

Have you tried the latest fix for the client_block stuff? I think I saw
a very recent CVS checkin...

There could of course be more than one leak, and we'll only fix the 
problem once all of them are found...

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-09-05 Thread Peter Van Biesen

Hi,

I've recompiled the server with only the proxy and the core modules :

Compiled in modules:
  core.c
  mod_proxy.c
  proxy_connect.c
  proxy_ftp.c
  proxy_http.c
  prefork.c
  http_core.c

But the problem remains : 

[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:07:39 2002] [notice] child pid 17923 exit signal
Segmentation fault (11)

Does anybody have another idea for me to try ?

Thanx,

Peter.



Re: Segmentation fault when downloading large files

2002-09-04 Thread Graham Leggett

Peter Van Biesen wrote:

> Recompiled and tested, the problem remains ... :
> 
> [Wed Sep 04 13:22:27 2002] [info] Server: Apache/2.0.41-dev, Interface:
> mod_ssl/2.0.41-dev, Library: OpenSSL/0.9.6c
> [Wed Sep 04 13:22:27 2002] [notice] Apache/2.0.41-dev (Unix)
> mod_ssl/2.0.41-dev OpenSSL/0.9.6c DAV/2 configured -- resuming normal
> operations
> [Wed Sep 04 13:22:27 2002] [info] Server built: Sep  3 2002 16:31:17
> [Wed Sep 04 13:38:28 2002] [notice] child pid 29748 exit signal
> Segmentation fault (11)
> 
> Crash after 71 MB. When I have the time, I'll investigate further!

Can you try configuring apache with no modules installed whatsoever, 
to see if just mod_proxy plus core has this problem...?

I have a feeling one of the filters in the stack is leaking buckets 
(leaking in the sense that the buckets are only freed at the end of the 
request, which is too late). If we can remove all the filters we can 
(specifically mod_include), we might be able to more clearly isolate 
where the problem lies.

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-09-04 Thread Peter Van Biesen

Recompiled and tested, the problem remains ... :

[Wed Sep 04 13:22:27 2002] [info] Server: Apache/2.0.41-dev, Interface:
mod_ssl/2.0.41-dev, Library: OpenSSL/0.9.6c
[Wed Sep 04 13:22:27 2002] [notice] Apache/2.0.41-dev (Unix)
mod_ssl/2.0.41-dev OpenSSL/0.9.6c DAV/2 configured -- resuming normal
operations
[Wed Sep 04 13:22:27 2002] [info] Server built: Sep  3 2002 16:31:17
[Wed Sep 04 13:38:28 2002] [notice] child pid 29748 exit signal
Segmentation fault (11)

Crash after 71 MB. When I have the time, I'll investigate further!

Peter.

Graham Leggett wrote:
> 
> Peter Van Biesen wrote:
> 
> > I've checked out the latest version from CVS, but I see there's no
> > configure script in there. How do I get/generate it ? Do I need it to
> > compile ?
> 
> Pull the following three from cvs:
> 
> - httpd-2.0
> - apr
> - apr-util
> 
> Copy both the apr and apr-util directories to httpd-2.0/srclib, like so:
> 
> [root@jessica srclib]# pwd
> /home/minfrin/src/apache/sandbox/proxy/httpd-2.0/srclib
> [root@jessica srclib]# ls -al
> total 40
> drwxr-xr-x7 minfrin  minfrin  4096 Sep  2 10:31 .
> drwxr-xr-x   14 minfrin  minfrin  4096 Sep  2 10:39 ..
> drwxr-xr-x   32 minfrin  minfrin  4096 Sep  2 10:34 apr
> drwxr-xr-x   21 minfrin  minfrin  4096 Sep  2 10:35 apr-util
> drwxr-xr-x2 minfrin  minfrin  4096 Jun 20 00:01 CVS
> -rw-r--r--1 minfrin  minfrin32 Jun 20 00:01 .cvsignore
> -rw-r--r--1 root root0 Sep  2 10:31 .deps
> drwxr-xr-x3 minfrin  minfrin  4096 Dec  6  2001 expat-lite
> -rw-r--r--1 root root  477 Sep  2 10:31 Makefile
> -rw-r--r--1 minfrin  minfrin   136 May 20 10:53 Makefile.in
> drwxr-xr-x7 minfrin  minfrin  4096 Sep  2 10:36 pcre
> 
> In the httpd-2.0 directory, run the following to create the ./configure
> scripts:
> 
> ./buildconf
> 
> Then run ./configure as you normally would.
> 
> Regards,
> Graham
> --
> -
> [EMAIL PROTECTED]
> "There's a moon
> over Bourbon Street
> tonight..."



Re: Segmentation fault when downloading large files

2002-09-02 Thread Graham Leggett

Peter Van Biesen wrote:

> I've checked out the latest version from CVS, but I see there's no
> configure script in there. How do I get/generate it ? Do I need it to
> compile ?

Pull the following three from cvs:

- httpd-2.0
- apr
- apr-util
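
If you haven't used apache.org's anonymous cvs before, the checkout looks
something like this (the pserver root is from memory, so double-check it;
the login password should be "anoncvs"):

    cvs -d :pserver:anoncvs@cvs.apache.org:/home/cvspublic login
    cvs -d :pserver:anoncvs@cvs.apache.org:/home/cvspublic checkout httpd-2.0 apr apr-util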

Copy both the apr and apr-util directories to httpd-2.0/srclib, like so:

[root@jessica srclib]# pwd
/home/minfrin/src/apache/sandbox/proxy/httpd-2.0/srclib
[root@jessica srclib]# ls -al
total 40
drwxr-xr-x7 minfrin  minfrin  4096 Sep  2 10:31 .
drwxr-xr-x   14 minfrin  minfrin  4096 Sep  2 10:39 ..
drwxr-xr-x   32 minfrin  minfrin  4096 Sep  2 10:34 apr
drwxr-xr-x   21 minfrin  minfrin  4096 Sep  2 10:35 apr-util
drwxr-xr-x2 minfrin  minfrin  4096 Jun 20 00:01 CVS
-rw-r--r--1 minfrin  minfrin32 Jun 20 00:01 .cvsignore
-rw-r--r--1 root root0 Sep  2 10:31 .deps
drwxr-xr-x3 minfrin  minfrin  4096 Dec  6  2001 expat-lite
-rw-r--r--1 root root  477 Sep  2 10:31 Makefile
-rw-r--r--1 minfrin  minfrin   136 May 20 10:53 Makefile.in
drwxr-xr-x7 minfrin  minfrin  4096 Sep  2 10:36 pcre

In the httpd-2.0 directory, run the following to create the ./configure 
scripts:

./buildconf

Then run ./configure as you normally would.

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-09-02 Thread Peter Van Biesen

Hi,

I've checked out the latest version from CVS, but I see there's no
configure script in there. How do I get/generate it ? Do I need it to
compile ?

Thanx,

Peter.

Brian Pane wrote:
> 
> Peter Van Biesen wrote:
> 
> >I've continued to investigate the problem, maybe you know what could
> >cause it.
> >
> >I'm using a proxy chain: a proxy running internally and forwarding all
> >requests to another proxy in the DMZ. Both proxies are identical. It is
> >always the internal proxy that crashes; the external proxy has no
> >problem downloading large files ( I haven't tested the memory usage yet
> >). Therefore, when the proxy connects directly to the site, the memory is
> >freed, but when it forwards the request to another proxy, it is not.
> >
> >Anyhow, I'll wait until 2.0.41 is released; maybe this will
> >solve the problem. Does anybody know when this will be ?
> >
> 
> There's no specific date planned for 2.0.41 yet.  My own thinking
> is that we should release 2.0.41 "soon," because it contains a few
> important performance and reliability fixes (mostly related to cases
> where 2.0.40 and prior releases were trying to buffer unreasonably
> large amounts of data).  In the meantime, if you have time, can you
> try your proxy test case against the current CVS head?  I ran some
> reverse-proxy tests successfully today using the latest 2.0.41-dev
> code, and it properly streamed large responses without buffering,
> but I'm not certain that my test case covered all the code paths
> involved in your two-proxy setup.
> 
> Thanks,
> Brian
> 
> >
> >Peter.
> >
> >Graham Leggett wrote:
> >
> >
> >>Brian Pane wrote:
> >>
> >>
> >>
> >>>But the memory involved here ought to be in buckets (which can
> >>>be freed long before the entire request is done).
> >>>
> >>>In 2.0.39 and 2.0.40, the content-length filter's habit of
> >>>buffering the entire response would keep the httpd from freeing
> >>>buckets incrementally during the request.  That particular
> >>>problem is gone in the latest 2.0.41-dev CVS head.  If the
> >>>segfault problem still exists in 2.0.41-dev, we need to take
> >>>a look at whether there's any buffering in the proxy code that
> >>>can be similarly fixed.
> >>>
> >>>
> >>The proxy code doesn't buffer anything; it basically goes "get a bucket
> >>from backend stack, put the bucket to frontend stack, cleanup bucket,
> >>repeat".
> >>
> >>There are some filters (like include, I think) that "put away" buckets as
> >>the response is handled; it is possible one of these filters is also
> >>causing a "leak".
> >>
> >>Regards,
> >>Graham
> >>--
> >>-
> >>[EMAIL PROTECTED]
> >>"There's a moon
> >>over Bourbon Street
> >>tonight..."
> >>
> >>



Re: Segmentation fault when downloading large files

2002-09-02 Thread Brian Pane

Peter Van Biesen wrote:

>I've continued to investigate the problem, maybe you know what could
>cause it.
>
>I'm using a proxy chain: a proxy running internally and forwarding all
>requests to another proxy in the DMZ. Both proxies are identical. It is
>always the internal proxy that crashes; the external proxy has no
>problem downloading large files ( I haven't tested the memory usage yet
>). Therefore, when the proxy connects directly to the site, the memory is
>freed, but when it forwards the request to another proxy, it is not.
>
>Anyhow, I'll wait until 2.0.41 is released; maybe this will
>solve the problem. Does anybody know when this will be ?
>

There's no specific date planned for 2.0.41 yet.  My own thinking
is that we should release 2.0.41 "soon," because it contains a few
important performance and reliability fixes (mostly related to cases
where 2.0.40 and prior releases were trying to buffer unreasonably
large amounts of data).  In the meantime, if you have time, can you
try your proxy test case against the current CVS head?  I ran some
reverse-proxy tests successfully today using the latest 2.0.41-dev
code, and it properly streamed large responses without buffering,
but I'm not certain that my test case covered all the code paths
involved in your two-proxy setup.

Thanks,
Brian

>
>Peter.
>
>Graham Leggett wrote:
>  
>
>>Brian Pane wrote:
>>
>>
>>
>>>But the memory involved here ought to be in buckets (which can
>>>be freed long before the entire request is done).
>>>
>>>In 2.0.39 and 2.0.40, the content-length filter's habit of
>>>buffering the entire response would keep the httpd from freeing
>>>buckets incrementally during the request.  That particular
>>>problem is gone in the latest 2.0.41-dev CVS head.  If the
>>>segfault problem still exists in 2.0.41-dev, we need to take
>>>a look at whether there's any buffering in the proxy code that
>>>can be similarly fixed.
>>>  
>>>
>>The proxy code doesn't buffer anything; it basically goes "get a bucket
>>from backend stack, put the bucket to frontend stack, cleanup bucket,
>>repeat".
>>
>>There are some filters (like include, I think) that "put away" buckets as
>>the response is handled; it is possible one of these filters is also
>>causing a "leak".
>>
>>Regards,
>>Graham
>>--
>>-
>>[EMAIL PROTECTED]
>>"There's a moon
>>over Bourbon Street
>>tonight..."
>>
>>






Re: Segmentation fault when downloading large files

2002-09-02 Thread Graham Leggett

Peter Van Biesen wrote:

> I'm using a proxy chain: a proxy running internally and forwarding all
> requests to another proxy in the DMZ. Both proxies are identical. It is
> always the internal proxy that crashes; the external proxy has no
> problem downloading large files ( I haven't tested the memory usage yet
> ). Therefore, when the proxy connects directly to the site, the memory is
> freed, but when it forwards the request to another proxy, it is not.

Not necessarily. The outer proxy that doesn't crash might have more RAM 
in the machine, or the inner proxy may have crashed first, never giving 
the outer proxy a request large enough to crash it.

Have you tested the outer proxy with a large file on its own, i.e. a file 
greater in size than the box's RAM+swap?

> Anyhow, I'll wait until 2.0.41 is released; maybe this will
> solve the problem. Does anybody know when this will be ?

Pull the latest HEAD from cvs and give it a try to see if it is fixed.

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-09-02 Thread Peter Van Biesen

I've continued to investigate the problem, maybe you know what could
cause it.

I'm using a proxy chain: a proxy running internally and forwarding all
requests to another proxy in the DMZ. Both proxies are identical. It is
always the internal proxy that crashes; the external proxy has no
problem downloading large files ( I haven't tested the memory usage yet
). Therefore, when the proxy connects directly to the site, the memory is
freed, but when it forwards the request to another proxy, it is not.

Anyhow, I'll wait until 2.0.41 is released; maybe this will
solve the problem. Does anybody know when this will be ?

Peter.

Graham Leggett wrote:
> 
> Brian Pane wrote:
> 
> > But the memory involved here ought to be in buckets (which can
> > be freed long before the entire request is done).
> >
> > In 2.0.39 and 2.0.40, the content-length filter's habit of
> > buffering the entire response would keep the httpd from freeing
> > buckets incrementally during the request.  That particular
> > problem is gone in the latest 2.0.41-dev CVS head.  If the
> > segfault problem still exists in 2.0.41-dev, we need to take
> > a look at whether there's any buffering in the proxy code that
> > can be similarly fixed.
> 
> The proxy code doesn't buffer anything; it basically goes "get a bucket
> from backend stack, put the bucket to frontend stack, cleanup bucket,
> repeat".
> 
> There are some filters (like include, I think) that "put away" buckets as
> the response is handled; it is possible one of these filters is also
> causing a "leak".
> 
> Regards,
> Graham
> --
> -
> [EMAIL PROTECTED]
> "There's a moon
> over Bourbon Street
> tonight..."



Re: Segmentation fault when downloading large files

2002-09-01 Thread Graham Leggett

Brian Pane wrote:

> But the memory involved here ought to be in buckets (which can
> be freed long before the entire request is done).
> 
> In 2.0.39 and 2.0.40, the content-length filter's habit of
> buffering the entire response would keep the httpd from freeing
> buckets incrementally during the request.  That particular
> problem is gone in the latest 2.0.41-dev CVS head.  If the
> segfault problem still exists in 2.0.41-dev, we need to take
> a look at whether there's any buffering in the proxy code that
> can be similarly fixed.

The proxy code doesn't buffer anything; it basically goes "get a bucket 
from backend stack, put the bucket to frontend stack, cleanup bucket, 
repeat".

There are some filters (like include, I think) that "put away" buckets as 
the response is handled; it is possible one of these filters is also 
causing a "leak".

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-09-01 Thread Brian Pane

Graham Leggett wrote:

> Peter Van Biesen wrote:
>
>> I now have a reproducible error, a httpd which I can recompile ( it's
>> still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
>> I've seen in the code of ap_proxy_http_request that the variable e is
>> used many times but I can't seem to find a free somewhere ...
>
>
> This may be part of the problem. In apr, memory is allocated from a 
> pool of memory and is then freed in one go. In this case, there is 
> one pool per request, which is only freed when the request is 
> complete. But during the request, 100MB of data is transferred, 
> resulting in buckets which are allocated but not freed (yet). The 
> machine runs out of memory and that process segfaults.


But the memory involved here ought to be in buckets (which can
be freed long before the entire request is done).

In 2.0.39 and 2.0.40, the content-length filter's habit of
buffering the entire response would keep the httpd from freeing
buckets incrementally during the request.  That particular
problem is gone in the latest 2.0.41-dev CVS head.  If the
segfault problem still exists in 2.0.41-dev, we need to take
a look at whether there's any buffering in the proxy code that
can be similarly fixed.

Brian






Re: Segmentation fault when downloading large files

2002-09-01 Thread Graham Leggett

Peter Van Biesen wrote:

> I now have a reproducible error, a httpd which I can recompile ( it's
> still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> I've seen in the code of ap_proxy_http_request that the variable e is
> used many times but I can't seem to find a free somewhere ...

This may be part of the problem. In apr, memory is allocated from a pool 
of memory and is then freed in one go. In this case, there is one pool 
per request, which is only freed when the request is complete. But 
during the request, 100MB of data is transferred, resulting in buckets 
which are allocated but not freed (yet). The machine runs out of memory 
and that process segfaults.
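
To make the pool model concrete, a minimal standalone example (plain apr,
not httpd code):

    #include <apr_general.h>
    #include <apr_pools.h>

    int main(void)
    {
        apr_pool_t *p;
        char *block;

        apr_initialize();
        apr_pool_create(&p, NULL);      /* httpd makes one such pool per request */
        block = apr_palloc(p, 8192);    /* note: no per-allocation free exists */
        (void)block;
        apr_pool_destroy(p);            /* everything is freed here, in one go */
        apr_terminate();
        return 0;
    }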

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-08-29 Thread Jess M. Holle

Really?

I've built mod_jk v1.2.0 (i.e. from jtc 4.0.4 sources) against 2.0.40 on
Windows, Solaris, and AIX (and HP provides one for 2.0.39 on HPUX, but hasn't
gotten to 2.0.40 last I saw) -- though on AIX I had crashes until Jeff Trawick
helped me navigate the insanity of AIX linking (which the 2.0.40 build process
did not reliably do out-of-the-box).

--
Jess Holle

Peter Van Biesen wrote:

> That's a bit of a problem for the moment; I've compiled 2.0.40, but
> httpd complains at runtime about mod_jk; apparently something has
> changed in the module API ... I'm using the latest version of the
> connectors ( 4.0.4 ). Or is there a newer version somewhere ?
>
> Peter.
>
> Justin Erenkrantz wrote:
> >
> > On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> > > I now have a reproducible error, a httpd which I can recompile ( it's
> > > still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> >
> > Can you upgrade to at least .40 or better yet the latest CVS
> > version?  -- justin






Re: Segmentation fault when downloading large files

2002-08-29 Thread Peter Van Biesen

I give up; I can't find what's wrong ...

Peter.

Peter Van Biesen wrote:
> 
> That's a bit of a problem for the moment; I've compiled 2.0.40, but
> httpd complains at runtime about mod_jk; apparently something has
> changed in the module API ... I'm using the latest version of the
> connectors ( 4.0.4 ). Or is there a newer version somewhere ?
> 
> Peter.
> 
> Justin Erenkrantz wrote:
> >
> > On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> > > I now have a reproducible error, a httpd which I can recompile ( it's
> > > still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> >
> > Can you upgrade to at least .40 or better yet the latest CVS
> > version?  -- justin



Re: Segmentation fault when downloading large files

2002-08-28 Thread Peter Van Biesen

That's a bit of a problem for the moment; I've compiled 2.0.40, but
httpd complains at runtime about mod_jk; apparently something has
changed in the module API ... I'm using the latest version of the
connectors ( 4.0.4 ). Or is there a newer version somewhere ?

Peter.

Justin Erenkrantz wrote:
> 
> On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> > I now have a reproducible error, a httpd which I can recompile ( it's
> > still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> 
> Can you upgrade to at least .40 or better yet the latest CVS
> version?  -- justin



Re: Segmentation fault when downloading large files

2002-08-28 Thread Justin Erenkrantz

On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> I now have a reproducible error, a httpd which I can recompile ( it's
> still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,

Can you upgrade to at least .40 or better yet the latest CVS
version?  -- justin



Re: Segmentation fault when downloading large files

2002-08-28 Thread Peter Van Biesen

I now have a reproducible error, a httpd which I can recompile ( it's
still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
I've seen in the code of ap_proxy_http_request that the variable e is
used many times but I can't seem to find a free somewhere ...

I'm sorry I'm not trying to find the error myself, but I haven't got the
time to familiarize myself with the apr code ...

Peter.

"William A. Rowe, Jr." wrote:
> 
> At 07:06 AM 8/28/2002, Graham Leggett wrote:
> >Peter Van Biesen wrote:
> >
> Program received signal SIGSEGV, Segmentation fault.
> 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> (gdb) where
> #0  0xc1bfb06c in apr_bucket_alloc () from
> /opt/httpd/lib/libaprutil.sl.0
> #1  0xc1bf8d18 in socket_bucket_read () from
> /opt/httpd/lib/libaprutil.sl.0
> #2  0x00129ffc in core_input_filter ()
> #3  0x0011a630 in ap_get_brigade ()
> #4  0x000bb26c in ap_http_filter ()
> #5  0x0011a630 in ap_get_brigade ()
> #6  0x0012999c in net_time_filter ()
> #7  0x0011a630 in ap_get_brigade ()
> >
> >The ap_get_brigade() is followed by an ap_pass_brigade(), then an
> >apr_brigade_cleanup(bb).
> >
> >What could be happening is that either:
> >
> >a) brigade cleanup is hosed or leaks
> >b) one of the filters is leaking along the way
> 
> Or it simply tries to slurp all 100s of MBs of this huge download.
> 
> As I guessed, we are out of memory.
> 
> Someone asked why I asserted that input filtering still sucks.  Heh.
> 
> Bill



Re: Segmentation fault when downloading large files

2002-08-28 Thread William A. Rowe, Jr.

At 07:06 AM 8/28/2002, Graham Leggett wrote:
>Peter Van Biesen wrote:
>
>Program received signal SIGSEGV, Segmentation fault.
>0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
>(gdb) where
>#0  0xc1bfb06c in apr_bucket_alloc () from
>/opt/httpd/lib/libaprutil.sl.0
>#1  0xc1bf8d18 in socket_bucket_read () from
>/opt/httpd/lib/libaprutil.sl.0
>#2  0x00129ffc in core_input_filter ()
>#3  0x0011a630 in ap_get_brigade ()
>#4  0x000bb26c in ap_http_filter ()
>#5  0x0011a630 in ap_get_brigade ()
>#6  0x0012999c in net_time_filter ()
>#7  0x0011a630 in ap_get_brigade ()
>
>The ap_get_brigade() is followed by an ap_pass_brigade(), then an 
>apr_brigade_cleanup(bb).
>
>What could be happening is that either:
>
>a) brigade cleanup is hosed or leaks
>b) one of the filters is leaking along the way

Or it simply tries to slurp all 100s of MBs of this huge download.

As I guessed, we are out of memory.

Someone asked why I asserted that input filtering still sucks.  Heh.

Bill




Re: Segmentation fault when downloading large files

2002-08-28 Thread William A. Rowe, Jr.

At 03:41 AM 8/28/2002, Peter Van Biesen wrote:
>As far as I can see, no ranges supplied. I've downloaded a 'small file'
>with my browser :
>
>193.53.20.83 - - [28/Aug/2002:10:33:25 +0200] "-" "GET
>http://hpux.cs.utah.edu/ftp/hpux/Gnu/gdb-5.2.1/gdb-5.2.1-sd-11.00.depot.gz
>HTTP/1.1" 200 7349572
>
>the "-" is the range.
>
>Since the child crashes, nothing gets written in the access log, but I
>added code to print out the headers :

Hmmm.  I was asking about the client-request headers, not the server
response headers.  But no matter, this doesn't appear to be the problem.

It's definitely not the same problem as byteranges-not-handled-correctly
by mod_proxy.  I'm going to take a wild stab that you may simply be
running out of memory?


>[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
>= Transfer-Encoding: chunked
>[Wed Aug 28 10:30:04 2002] [debug] proxy_http.c(893): proxy: start body
>send
>[Wed Aug 28 10:36:23 2002] [notice] child pid 7534 exit signal
>Segmentation fault (11)
>
>I'm installing gdb, as you can see ;-)
>
>Peter.
>
>"William A. Rowe, Jr." wrote:
> >
> > At 07:53 AM 8/27/2002, Peter Van Biesen wrote:
> > >What should I call it then ? not-so-tiny-files ? 8-)
> >
> > Nah... large or big files is just fine :-)
> >
> > I'm guessing $$$s to OOOs [donuts] that your client is starting
> > some byteranges somewhere along the way.   Try adding the bit
> > \"%{Range}i\" in one of your access log formats to see if this is
> > the case.
> >
> > As I understand it today, proxy plus byteranges is totally borked.
> >
> > Bill





Re: Segmentation fault when downloading large files

2002-08-28 Thread Graham Leggett

Peter Van Biesen wrote:

>>>Program received signal SIGSEGV, Segmentation fault.
>>>0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
>>>(gdb) where
>>>#0  0xc1bfb06c in apr_bucket_alloc () from
>>>/opt/httpd/lib/libaprutil.sl.0
>>>#1  0xc1bf8d18 in socket_bucket_read () from
>>>/opt/httpd/lib/libaprutil.sl.0
>>>#2  0x00129ffc in core_input_filter ()
>>>#3  0x0011a630 in ap_get_brigade ()
>>>#4  0x000bb26c in ap_http_filter ()
>>>#5  0x0011a630 in ap_get_brigade ()
>>>#6  0x0012999c in net_time_filter ()
>>>#7  0x0011a630 in ap_get_brigade ()

The ap_get_brigade() is followed by an ap_pass_brigade(), then an 
apr_brigade_cleanup(bb).

What could be happening is that either:

a) brigade cleanup is hosed or leaks
b) one of the filters is leaking along the way

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-08-28 Thread Graham Leggett

Peter Van Biesen wrote:

> Program received signal SIGSEGV, Segmentation fault.
> 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> (gdb) where
> #0  0xc1bfb06c in apr_bucket_alloc () from
> /opt/httpd/lib/libaprutil.sl.0

> The resources used by the process increase linearly until the maximum
> per process is reached, after which the crash occurs. Did we do an alloc
> without a free ?

It looks like each bucket is being created but never freed, which 
eventually causes a segfault when buckets can no longer be created.

This might be the bucket code leaking, or it could be the proxy code not 
freeing buckets after the buckets are sent to the client.

Anyone know how you free buckets?
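
From a quick look at apr_buckets.h, it seems to be apr_bucket_delete() for
a single bucket and apr_brigade_cleanup() for everything in a brigade -- a
sketch, assuming a brigade bb:

    apr_bucket *e = APR_BRIGADE_FIRST(bb);
    apr_bucket_delete(e);      /* APR_BUCKET_REMOVE() plus apr_bucket_destroy() */
    apr_brigade_cleanup(bb);   /* destroys every remaining bucket in bb */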

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-08-28 Thread Peter Van Biesen

Er, in the function apr_bucket_heap_make. Sorry.

Peter Van Biesen wrote:
> 
> Hi,
> 
> Can anybody look into apr_buckets_heap.c ? I'm not familiar with the apr
> code, but I don't see the free_func called anywhere ( which frees up the
> memory ), or am I mistaken ?
> 
> Thanks !
> 
> Peter.
> 
> Peter Van Biesen wrote:
> >
> > Hi,
> >
> > I started my server with MaxClients=1, started the download and attached
> > to the process with gdb. The process crashed; this is the trace :
> >
> > vfsi3>gdb httpd 7840
> > GNU gdb 5.2.1
> > Copyright 2002 Free Software Foundation, Inc.
> > GDB is free software, covered by the GNU General Public License, and you
> > are
> > welcome to change it and/or distribute copies of it under certain
> > conditions.
> > Type "show copying" to see the conditions.
> > There is absolutely no warranty for GDB.  Type "show warranty" for
> > details.
> > This GDB was configured as "hppa2.0n-hp-hpux11.00"...
> > Attaching to program: /opt/httpd/bin/httpd, process 7840
> >
> > warning: The shared libraries were not privately mapped; setting a
> > breakpoint in a shared library will not work until you rerun the
> > program.
> >
> > Reading symbols from /opt/openssl/lib/libssl.sl.0.9.6...done.
> > Reading symbols from /opt/openssl/lib/libcrypto.sl.0.9.6...done.
> > Reading symbols from /opt/httpd/lib/libaprutil.sl.0...done.
> > Reading symbols from /opt/httpd/lib/libexpat.sl.1...done.
> > Reading symbols from /opt/httpd/lib/libapr.sl.0...done.
> > Reading symbols from /usr/lib/libnsl.1...done.
> > Reading symbols from /usr/lib/libxti.2...done.
> > Reading symbols from /usr/lib/libpthread.1...done.
> > Reading symbols from /usr/lib/libc.2...done.
> > Reading symbols from /usr/lib/libdld.2...done.
> > Reading symbols from /usr/lib/libnss_files.1...done.
> > Reading symbols from /usr/lib/libnss_nis.1...done.
> > Reading symbols from /usr/lib/libnss_dns.1...done.
> > 0xc0115b68 in _select_sys () from /usr/lib/libc.2
> > (gdb) continue
> > Continuing.
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> > (gdb) where
> > #0  0xc1bfb06c in apr_bucket_alloc () from
> > /opt/httpd/lib/libaprutil.sl.0
> > #1  0xc1bf8d18 in socket_bucket_read () from
> > /opt/httpd/lib/libaprutil.sl.0
> > #2  0x00129ffc in core_input_filter ()
> > #3  0x0011a630 in ap_get_brigade ()
> > #4  0x000bb26c in ap_http_filter ()
> > #5  0x0011a630 in ap_get_brigade ()
> > #6  0x0012999c in net_time_filter ()
> > #7  0x0011a630 in ap_get_brigade ()
> > #8  0x00092f3c in ap_proxy_http_process_response ()
> > #9  0x000935e0 in ap_proxy_http_handler ()
> > #10 0x0008484c in proxy_run_scheme_handler ()
> > #11 0x0008259c in proxy_handler ()
> > #12 0x000fdc40 in ap_run_handler ()
> > #13 0x000fea04 in ap_invoke_handler ()
> > #14 0x000c0d9c in ap_process_request ()
> > #15 0x000b8348 in ap_process_http_connection ()
> > #16 0x00115a00 in ap_run_process_connection ()
> > #17 0x001160c0 in ap_process_connection ()
> > #18 0x000fae00 in child_main ()
> > #19 0x000fb0ac in make_child ()
> > #20 0x000fb47c in perform_idle_server_maintenance ()
> > #21 0x000fbc88 in ap_mpm_run ()
> > #22 0x001079f0 in main ()
> > (gdb)
> >
> > The resources used by the process increase linearly until the maximum
> > per process is reached, after which the crash occurs. Did we do an alloc
> > without a free ?
> >
> > Peter.



Re: Segmentation fault when downloading large files

2002-08-28 Thread Peter Van Biesen

Hi,

Can anybody look into apr_buckets_heap.c ? I'm not familiar with the apr
code, but I don't see the free_func called anywhere ( which frees up the
memory ), or am I mistaken ?
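
The one place it plausibly gets called is the heap bucket's destroy
callback, which from memory looks roughly like this (worth verifying
against the tree):

    static void heap_bucket_destroy(void *data)
    {
        apr_bucket_heap *h = data;

        if (apr_bucket_shared_destroy(h)) {   /* only once the refcount hits zero */
            (*h->free_func)(h->base);         /* here the data block is freed */
            apr_bucket_free(h);
        }
    }

If that's right, a refcount that is never decremented would mean free_func
is never reached.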

Thanks !

Peter.

Peter Van Biesen wrote:
> 
> Hi,
> 
> I started my server with MaxClients=1, started the download and attached
> to the process with gdb. The process crashed; this is the trace :
> 
> vfsi3>gdb httpd 7840
> GNU gdb 5.2.1
> Copyright 2002 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you
> are
> welcome to change it and/or distribute copies of it under certain
> conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB.  Type "show warranty" for
> details.
> This GDB was configured as "hppa2.0n-hp-hpux11.00"...
> Attaching to program: /opt/httpd/bin/httpd, process 7840
> 
> warning: The shared libraries were not privately mapped; setting a
> breakpoint in a shared library will not work until you rerun the
> program.
> 
> Reading symbols from /opt/openssl/lib/libssl.sl.0.9.6...done.
> Reading symbols from /opt/openssl/lib/libcrypto.sl.0.9.6...done.
> Reading symbols from /opt/httpd/lib/libaprutil.sl.0...done.
> Reading symbols from /opt/httpd/lib/libexpat.sl.1...done.
> Reading symbols from /opt/httpd/lib/libapr.sl.0...done.
> Reading symbols from /usr/lib/libnsl.1...done.
> Reading symbols from /usr/lib/libxti.2...done.
> Reading symbols from /usr/lib/libpthread.1...done.
> Reading symbols from /usr/lib/libc.2...done.
> Reading symbols from /usr/lib/libdld.2...done.
> Reading symbols from /usr/lib/libnss_files.1...done.
> Reading symbols from /usr/lib/libnss_nis.1...done.
> Reading symbols from /usr/lib/libnss_dns.1...done.
> 0xc0115b68 in _select_sys () from /usr/lib/libc.2
> (gdb) continue
> Continuing.
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> (gdb) where
> #0  0xc1bfb06c in apr_bucket_alloc () from
> /opt/httpd/lib/libaprutil.sl.0
> #1  0xc1bf8d18 in socket_bucket_read () from
> /opt/httpd/lib/libaprutil.sl.0
> #2  0x00129ffc in core_input_filter ()
> #3  0x0011a630 in ap_get_brigade ()
> #4  0x000bb26c in ap_http_filter ()
> #5  0x0011a630 in ap_get_brigade ()
> #6  0x0012999c in net_time_filter ()
> #7  0x0011a630 in ap_get_brigade ()
> #8  0x00092f3c in ap_proxy_http_process_response ()
> #9  0x000935e0 in ap_proxy_http_handler ()
> #10 0x0008484c in proxy_run_scheme_handler ()
> #11 0x0008259c in proxy_handler ()
> #12 0x000fdc40 in ap_run_handler ()
> #13 0x000fea04 in ap_invoke_handler ()
> #14 0x000c0d9c in ap_process_request ()
> #15 0x000b8348 in ap_process_http_connection ()
> #16 0x00115a00 in ap_run_process_connection ()
> #17 0x001160c0 in ap_process_connection ()
> #18 0x000fae00 in child_main ()
> #19 0x000fb0ac in make_child ()
> #20 0x000fb47c in perform_idle_server_maintenance ()
> #21 0x000fbc88 in ap_mpm_run ()
> #22 0x001079f0 in main ()
> (gdb)
> 
> The resources used by the process increase linearly until the maximum
> per process is reached, after which the crash occurs. Did we do an alloc
> without a free ?
> 
> Peter.



Re: Segmentation fault when downloading large files

2002-08-28 Thread Peter Van Biesen

Hi,

I started my server with MaxClients=1, started the download and attached
to the process with gdb. The process crashed; this is the trace :


vfsi3>gdb httpd 7840
GNU gdb 5.2.1
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you
are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "hppa2.0n-hp-hpux11.00"...
Attaching to program: /opt/httpd/bin/httpd, process 7840

warning: The shared libraries were not privately mapped; setting a
breakpoint in a shared library will not work until you rerun the
program.

Reading symbols from /opt/openssl/lib/libssl.sl.0.9.6...done.
Reading symbols from /opt/openssl/lib/libcrypto.sl.0.9.6...done.
Reading symbols from /opt/httpd/lib/libaprutil.sl.0...done.
Reading symbols from /opt/httpd/lib/libexpat.sl.1...done.
Reading symbols from /opt/httpd/lib/libapr.sl.0...done.
Reading symbols from /usr/lib/libnsl.1...done.
Reading symbols from /usr/lib/libxti.2...done.
Reading symbols from /usr/lib/libpthread.1...done.
Reading symbols from /usr/lib/libc.2...done.
Reading symbols from /usr/lib/libdld.2...done.
Reading symbols from /usr/lib/libnss_files.1...done.
Reading symbols from /usr/lib/libnss_nis.1...done.
Reading symbols from /usr/lib/libnss_dns.1...done.
0xc0115b68 in _select_sys () from /usr/lib/libc.2
(gdb) continue
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
(gdb) where
#0  0xc1bfb06c in apr_bucket_alloc () from
/opt/httpd/lib/libaprutil.sl.0
#1  0xc1bf8d18 in socket_bucket_read () from
/opt/httpd/lib/libaprutil.sl.0
#2  0x00129ffc in core_input_filter ()
#3  0x0011a630 in ap_get_brigade ()
#4  0x000bb26c in ap_http_filter ()
#5  0x0011a630 in ap_get_brigade ()
#6  0x0012999c in net_time_filter ()
#7  0x0011a630 in ap_get_brigade ()
#8  0x00092f3c in ap_proxy_http_process_response ()
#9  0x000935e0 in ap_proxy_http_handler ()
#10 0x0008484c in proxy_run_scheme_handler ()
#11 0x0008259c in proxy_handler ()
#12 0x000fdc40 in ap_run_handler ()
#13 0x000fea04 in ap_invoke_handler ()
#14 0x000c0d9c in ap_process_request ()
#15 0x000b8348 in ap_process_http_connection ()
#16 0x00115a00 in ap_run_process_connection ()
#17 0x001160c0 in ap_process_connection ()
#18 0x000fae00 in child_main ()
#19 0x000fb0ac in make_child ()
#20 0x000fb47c in perform_idle_server_maintenance ()
#21 0x000fbc88 in ap_mpm_run ()
#22 0x001079f0 in main ()
(gdb)

The resources used by the process increase linearly until the maximum
per process is reached, after which the crash occurs. Did we do an alloc
without a free ?

Peter.



Re: Segmentation fault when downloading large files

2002-08-28 Thread Peter Van Biesen

As far as I can see, no ranges supplied. I've downloaded a 'small file'
with my browser :

193.53.20.83 - - [28/Aug/2002:10:33:25 +0200] "-" "GET
http://hpux.cs.utah.edu/ftp/hpux/Gnu/gdb-5.2.1/gdb-5.2.1-sd-11.00.depot.gz
HTTP/1.1" 200 7349572

the "-" is the range.

Since the child crashes, nothing gets written in the access log, but I
added code to print out the headers :

[Wed Aug 28 10:30:03 2002] [debug] proxy_http.c(109): proxy: HTTP:
canonicalising URL
//download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
[Wed Aug 28 10:30:03 2002] [debug] mod_proxy.c(442): Trying to run
scheme_handler against proxy
[Wed Aug 28 10:30:03 2002] [debug] proxy_http.c(1051): proxy: HTTP:
serving URL
http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
[Wed Aug 28 10:30:03 2002] [debug] proxy_http.c(221): proxy: HTTP
connecting
http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
to download.microsoft.com:80
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(1164): proxy: HTTP: fam
2 socket created to connect to vlafo3.vlafo.be
[Wed Aug 28 10:30:04 2002] [debug] proxy_http.c(370): proxy: socket is
connected
[Wed Aug 28 10:30:04 2002] [debug] proxy_http.c(404): proxy: connection
complete to 193.190.145.66:80 (vlafo3.vlafo.be)
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Date: Wed, 28 Aug 2002 08:30:04 GMT
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Server: Microsoft-IIS/5.0
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Content-Type: application/octet-stream
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Accept-Ranges: bytes
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Last-Modified: Tue, 23 Jul 2002 16:23:09 GMT
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= ETag: "f2138b3b6532c21:8f9"
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Via: 1.1 download.microsoft.com
[Wed Aug 28 10:30:04 2002] [debug] proxy_util.c(444): proxy: headerline
= Transfer-Encoding: chunked
[Wed Aug 28 10:30:04 2002] [debug] proxy_http.c(893): proxy: start body
send
[Wed Aug 28 10:36:23 2002] [notice] child pid 7534 exit signal
Segmentation fault (11)

I'm installing gdb, as you can see ;-)

Peter.

"William A. Rowe, Jr." wrote:
> 
> At 07:53 AM 8/27/2002, Peter Van Biesen wrote:
> >What should I call it then ? not-so-tiny-files ? 8-)
> 
> Nah... large or big files is just fine :-)
> 
> I'm guessing $$$s to OOOs [donuts] that your client is starting
> some byteranges somewhere along the way.   Try adding the bit
> \"%{Range}i\" in one of your access log formats to see if this is
> the case.
> 
> As I understand it today, proxy plus byteranges is totally borked.
> 
> Bill



Re: Segmentation fault when downloading large files

2002-08-27 Thread William A. Rowe, Jr.

At 07:53 AM 8/27/2002, Peter Van Biesen wrote:
>What should I call it then ? not-so-tiny-files ? 8-)

Nah... large or big files is just fine :-)

I'm guessing $$$s to OOOs [donuts] that your client is starting
some byteranges somewhere along the way.   Try adding the bit
\"%{Range}i\" in one of your access log formats to see if this is
the case.
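
Something like this should do it (assuming the stock mod_log_config
directives; the format nickname is arbitrary):

    LogFormat "%h %l %u %t \"%{Range}i\" \"%r\" %>s %b" common_with_range
    CustomLog logs/access_log common_with_range

A request that sends no Range header will log "-" in that field.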

As I understand it today, proxy plus byteranges is totally borked.

Bill





Re: Segmentation fault when downloading large files

2002-08-27 Thread Cliff Woolley

On Tue, 27 Aug 2002, Peter Van Biesen wrote:

> > APR's concept of a "large file" (which is the concept used by file
> > buckets, btw) is >2GB.
>
> What should I call it then ? not-so-tiny-files ? 8-)

hehehe  I was just pointing out that < 70MB vs. > 70MB shouldn't make any
difference as far as APR or APR-util are concerned.

Can you give us a backtrace on the segfaulting child please?  See
http://httpd.apache.org/dev/debugging.html for tips on how to do that if
you're not familiar with gdb.
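
The short version, assuming a prefork child you can attach to (this is
what the debugging page walks you through):

    $ gdb /path/to/httpd <child-pid>
    (gdb) continue
      ... reproduce the download and wait for the SIGSEGV ...
    (gdb) where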

Thanks,
Cliff




Re: Segmentation fault when downloading large files

2002-08-27 Thread Peter Van Biesen

What should I call it then ? not-so-tiny-files ? 8-)

Cliff Woolley wrote:
> 
> On Tue, 27 Aug 2002, Graham Leggett wrote:
> 
> > The filter code behaves differently depending on where the data is
> > coming from, eg an area in memory, or a file on a disk. As a result it
> > is quite possible that a large file from disk works and a large file
> > from proxy does not.
> 
> APR's concept of a "large file" (which is the concept used by file
> buckets, btw) is >2GB.
> 
> --Cliff



Re: Segmentation fault when downloading large files

2002-08-27 Thread Cliff Woolley

On Tue, 27 Aug 2002, Graham Leggett wrote:

> The filter code behaves differently depending on where the data is
> coming from, eg an area in memory, or a file on a disk. As a result it
> is quite possible that a large file from disk works and a large file
> from proxy does not.

APR's concept of a "large file" (which is the concept used by file
buckets, btw) is >2GB.

--Cliff




Re: Segmentation fault when downloading large files

2002-08-27 Thread Graham Leggett

Peter Van Biesen wrote:

> However, downloading a large file from the server itself ( not
> using the proxy ) works fine ... so it's either a problem in the proxy or a
> timeout somewhere ( locally is a lot faster ).

The proxy is very "dumb" code; it relies almost exclusively on the 
filter code to do everything. As a result it's very unlikely this 
problem is in the proxy.

The filter code behaves differently depending on where the data is 
coming from, eg an area in memory, or a file on a disk. As a result it 
is quite possible that a large file from disk works and a large file 
from proxy does not.

Regards,
Graham
-- 
-
[EMAIL PROTECTED] 
"There's a moon
over Bourbon Street
tonight..."




Re: Segmentation fault when downloading large files

2002-08-27 Thread Peter Van Biesen

However, downloading a large file from the server itself ( not
using the proxy ) works fine ... so it's either a problem in the proxy or a
timeout somewhere ( locally is a lot faster ).

Peter.

Dirk-Willem van Gulik wrote:
> 
> This looks like a filter issue I've seen before but never could quite
> reproduce. You may want to take this to [EMAIL PROTECTED], as this is
> most likely related to the filters in apache and not proxy specific.
> 
> Dw.
> 
> On Tue, 27 Aug 2002, Peter Van Biesen wrote:
> 
> > Hello,
> >
> > I'm using an apache 2.0.39 on an HPUX 11.0 system as a webserver/proxy.
> > When I try to download large files through the proxy, I get the
> > following error :
> >
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(109): proxy: HTTP:
> > canonicalising URL
> > //download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> > [Tue Aug 27 11:44:08 2002] [debug] mod_proxy.c(442): Trying to run
> > scheme_handler against proxy
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(1051): proxy: HTTP:
> > serving URL
> > http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(221): proxy: HTTP
> > connecting
> > http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> > to download.microsoft.com:80
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_util.c(1164): proxy: HTTP: fam
> > 2 socket created to connect to vlafo3.vlafo.be
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(370): proxy: socket is
> > connected
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(404): proxy: connection
> > complete to 193.190.145.66:80 (vlafo3.vlafo.be)
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Date: Tue, 27 Aug 2002 09:44:09 GMT
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Server: Microsoft-IIS/5.0
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Content-Type: application/octet-stream
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Accept-Ranges: bytes
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Last-Modified: Tue, 23 Jul 2002 16:23:09 GMT
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = ETag: "f2138b3b6532c21:8f9"
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Via: 1.1 download.microsoft.com
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Transfer-Encoding: chunked
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(893): proxy: start body
> > send
> > [Tue Aug 27 11:57:45 2002] [notice] child pid 7099 exit signal
> > Segmentation fault (11)
> >
> > I'm sorry for the example ... ;-))
> >
> > Anyway, I've tried on several machines that are configured differently (
> > swap, memory ), but the download always stops around 70 MB. Does anybody
> > have an idea what's wrong ? Is there a core I could gdb ( I didn't find
> > any ) ?
> >
> > Thanks !
> >
> > Peter.
> >



Re: Segmentation fault when downloading large files

2002-08-27 Thread Dirk-Willem van Gulik


This looks like a filter issue I've seen before but never could quite
reproduce. You may want to take this to [EMAIL PROTECTED], as this is
most likely related to the filters in apache and not proxy specific.

Dw.

On Tue, 27 Aug 2002, Peter Van Biesen wrote:

> Hello,
>
> I'm using an apache 2.0.39 on an HPUX 11.0 system as a webserver/proxy.
> When I try to download large files through the proxy, I get the
> following error :
>
> [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(109): proxy: HTTP:
> canonicalising URL
> //download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> [Tue Aug 27 11:44:08 2002] [debug] mod_proxy.c(442): Trying to run
> scheme_handler against proxy
> [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(1051): proxy: HTTP:
> serving URL
> http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(221): proxy: HTTP
> connecting
> http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> to download.microsoft.com:80
> [Tue Aug 27 11:44:08 2002] [debug] proxy_util.c(1164): proxy: HTTP: fam
> 2 socket created to connect to vlafo3.vlafo.be
> [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(370): proxy: socket is
> connected
> [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(404): proxy: connection
> complete to 193.190.145.66:80 (vlafo3.vlafo.be)
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Date: Tue, 27 Aug 2002 09:44:09 GMT
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Server: Microsoft-IIS/5.0
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Content-Type: application/octet-stream
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Accept-Ranges: bytes
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Last-Modified: Tue, 23 Jul 2002 16:23:09 GMT
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = ETag: "f2138b3b6532c21:8f9"
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Via: 1.1 download.microsoft.com
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Transfer-Encoding: chunked
> [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(893): proxy: start body
> send
> [Tue Aug 27 11:57:45 2002] [notice] child pid 7099 exit signal
> Segmentation fault (11)
>
> I'm sorry for the example ... ;-))
>
> Anyway, I've tried on several machines that are configured differently (
> swap, memory ), but the download always stops around 70 MB. Does anybody
> have an idea what's wrong ? Is there a core I could gdb ( I didn't find
> any ) ?
>
> Thanks !
>
> Peter.
>