Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson


I am currently in the process of transferring a database driven site from
IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
10-20 minutes and the only way to get things going again is to restart
Apache (Apache is running from the console not as a service).

The site isn't particularly heavily loaded, currently handling a request
every 5-10 seconds.

I have also noticed that on some occasions (after the lock-up) the
error.log contains an entry stating that one of my content handlers is not
defined; the content handler works fine until that point.  I have checked
the FAQs etc. and I am almost 100% certain that I don't have a problem with
my namespace.

When the server locks up netstat lists a number of clients who have a
TIME_WAIT status on port 8080 but these connections are not listed in
/server-status.

I am using Apache/1.3.9 (Win32) with mod_perl/1.21 which I downloaded in
December last year.  My worry is that there is a problem with Apache on NT
and mod_perl, given that Apache on NT is multi-threaded.

Unfortunately, I am stuck in NT due to parts of the legacy system,
otherwise I would move to Linux or FreeBSD.  If anyone can offer any
suggestions I would be most grateful as the only alternative I have is to
re-engineer the site in IIS.

If anyone has any suggestions, or would like further specific detail then
please let me know.

Thanks

Matt

--
Matthew Robinson    E: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



Re: squid performance

2000-01-17 Thread Joshua Chamas

Gerald Richter wrote:
 
 I have seen this in the source too, that's why I wrote it will not work with
 Apache, because most pages will be greater than 8K. Patching Apache is one
 possibility, that's right, but I just looked at the
 ProxyReceiveBufferSize which Oleg pointed to, and this one sets the socket
 options and therefore should do the same job (as far as the OS supports it).
 Look at proxy_http.c line 263 (Apache 1.3.9):
 
 if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
(const char *) &conf->recv_buffer_size, sizeof(int))
 
 I am not an expert in socket programming, but the setsockopt man page on my
 Linux says: "The system places an absolute limit on these values", but
 doesn't say where this limit is.
 

On Solaris, default seems to be 256K ...

tcp_max_buf

 Specifies the maximum buffer size a user is allowed to specify with the
 SO_SNDBUF or SO_RCVBUF options. Attempts to use larger buffers fail with
 EINVAL. The default is 256K. It is unwise to make this parameter much larger
 than the maximum buffer size your applications require, since that could
 allow malfunctioning or malicious applications to consume unreasonable
 amounts of kernel memory.

I needed to buffer up to 3M files, which I did by dynamically 
allocating space in ap_proxy_send_fb.  I didn't know that you 
could up the tcp_max_buf at the time, and would be interested 
in anyone's experience in doing so, whether this can actually 
be used to buffer large files.  Save me a source tweak in 
the future. ;)

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com    1-714-625-4051



RE: accessing request headers from CGI?

2000-01-17 Thread Gerald Richter


 i am trying to read the headers of an incoming HTTP request
 in a CGI script. It seems to me that the only way to do so is
 to use mod_perl methods since the standard CGI interface does not
 provide request header access - or am I missing something here?

 The problem is that in this specific project I am stuck with
 a non-apache http-server (NS fasttrack) - do I have
 a chance to get the http headers without mod_perl?


Normally only the ones which your server sets up in the environment for you.
Print out the whole %ENV to see what you can get.

Gerald

-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-




Re: Apache locking up on WinNT

2000-01-17 Thread Waldek Grudzien

 I am currently in the process of transferring a database driven site from
 IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
 10-20 minutes and the only way to get things going again is to restart
 Apache (Apache is running from the console not as a service).

Well - I am using the NT distribution
(APACHE 1.3.9 / mod_perl 1.21 / PERL 5.005_003)
downloaded from:
ftp://theoryx5.uwinnipeg.ca/pub/other/

and noticed no problem with it.
Maybe the problem lies in your scripts (have
you analysed the log entry you described)?

Regards,

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796



Re: Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson

At 12:20 PM 1/17/00 +0100, Waldek Grudzien wrote:
 I am currently in the process of transferring a database driven site from
 IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
 10-20 minutes and the only way to get things going again is to restart
 Apache (Apache is running from the console not as a service).

Well - I am using the NT distribution
(APACHE 1.3.9 / mod_perl 1.21 / PERL 5.005_003)
downloaded from:
ftp://theoryx5.uwinnipeg.ca/pub/other/

and noticed no problem with it.
Maybe the problem lies in your scripts (have
you analysed the log entry you described)?

I have just gone back and checked the logs.  The majority of the time the
server locks up without putting anything in the error log.  Currently,
there are only about 10 content handlers in the system and I am fairly
confident they work.

When it locks up I can still telnet into the server and get a connection
immediately but I don't get a response.  I have waited considerably longer
than the Timeout (60 secs) and the connections are not terminated.

On some occasions /server-status has reported that almost all of the
threads are reading (although they don't give the remote addresses).
Normally, I would expect a maximum of about 5 threads doing concurrent
processing.

I am fairly happy to accept that the problem is in my scripts but I am not
entirely sure what I am doing wrong.

Matt


Regards,

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796

--
Matthew Robinson    E: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



RE: Apache locking up on WinNT

2000-01-17 Thread Gerald Richter


 I have just gone back and checked the logs.  The majority of the time the
 server locks up without putting anything in the error log.  Currently,
 there are only about 10 content handlers in the system and I am fairly
 confident they work.

 When it locks up I can still telnet into the server and get a connection
 immediately but I don't get a response.  I have waited considerably longer
 than the Timeout (60 secs) and the connections are not terminated.


I guess it locks up somewhere in the Perl part, because Perl isn't reentrant, so
everything else has to wait!

 On some occasions /server-status has reported that almost all of the
 threads are reading (although they don't give the remote addresses).
 Normally, I would expect a maximum of about 5 threads doing concurrent
 processing.

 I am fairly happy to accept that the problem is in my scripts but I am not
 entirely sure what I am doing wrong.


I would suggest putting

warn "foo";

all over your Perl code, and then looking in the error log to see where the
last warn came from; that is the place where it locks up.
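That bracketing idea can be sketched as a small wrapper (the module name and the `traced` sub here are illustrative, not from the original code):

```perl
use strict;
use warnings;

# Wrap a handler so "start"/"exit" markers go to STDERR, which Apache
# timestamps into the error log.  A "start" with no matching "exit"
# points at the code that was running when the server locked up.
sub traced {
    my ($name, $code) = @_;
    return sub {
        warn "$name start\n";
        my $ret = $code->(@_);
        warn "$name exit\n";
        return $ret;
    };
}

# Hypothetical stand-in for a real content handler.
my $handler = traced('Module::A::handler', sub { return 200 });
print $handler->(), "\n";
```

In a real mod_perl handler you would simply drop the warn lines into the handler body itself, as suggested above.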

Gerald



RE: squid performance

2000-01-17 Thread radu



On Mon, 17 Jan 2000, Gerald Richter wrote:

 Look at proxy_http.c line 263 (Apache 1.3.9):
 
   if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
  (const char *) &conf->recv_buffer_size, sizeof(int))
 
 I am not an expert in socket programming, but the setsockopt man page on my
 Linux says: "The system places an absolute limit on these values", but
 doesn't say where this limit is.


For 2.2 kernels the max limit is in /proc/sys/net/core/rmem_max and the
default value is in /proc/sys/net/core/rmem_default. It's good to note the
following comment from the kernel source:

"Don't error on this BSD doesn't and if you think about it this is right.
Otherwise apps have to play 'guess the biggest size' games. RCVBUF/SNDBUF
are treated in BSD as hints."

So, if you want to increase the RCVBUF size above 65535, the default max
value, you first have to raise the absolute limit in
/proc/sys/net/core/rmem_max; otherwise you might think that by
calling setsockopt you increased it to, say, 1 MB, when in fact the RCVBUF
size is still 65535.
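For reference, the ceiling can be read programmatically too; a minimal Perl sketch, assuming the 2.2-era /proc layout described above:

```perl
use strict;
use warnings;

# Read the kernel's hard ceiling for SO_RCVBUF; on Linux, setsockopt()
# requests above this value are silently clamped rather than rejected.
sub rmem_max {
    open(my $fh, '<', '/proc/sys/net/core/rmem_max') or return undef;
    my $max = <$fh>;
    close($fh);
    chomp $max;
    return $max;
}

my $max = rmem_max();
print( defined($max) ? "rmem_max = $max\n" : "no such /proc entry here\n" );
```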


HTH,
Radu Greab



RE: Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson

At 01:26 PM 1/17/00 +0100, Gerald Richter wrote:
I would suggest, put

warn "foo" ;

allover in your perl code and then look in the error log and see where the
last warn came from, then you have the place where it lock up.

Gerald

I added the warns to the scripts and it appears that access to the modules
is serialised.  Each call to the handler has to run to completion before
any other handlers can execute.

I think the problem I am getting is that somebody on a slow link comes
along and effectively limits every user to run at that speed.  If I had
been more patient I would have got a response once the queue of requests
cleared.

Can anyone verify this to be the case?  If so, I will have to
go back to IIS (for the time being) as I don't have time to work around this.


If you look at the following section of the log you will see that all of
the accesses are sequential but there is a period when a number of requests
appear to be bunched up after a slowish response. Names have been changed
to protect the innocent.

[Mon Jan 17 13:48:53 2000] [warn] Module::A::handler start
[Mon Jan 17 13:48:53 2000] [warn] Module::A::handler exit
[Mon Jan 17 13:48:57 2000] [warn] Module::B::handler start
[Mon Jan 17 13:48:57 2000] [warn] Module::B::handler exit
[Mon Jan 17 13:49:07 2000] [warn] Module::C::handler start
[Mon Jan 17 13:49:29 2000] [warn] Module::C::handler exit
[Mon Jan 17 13:49:29 2000] [warn] Module::D::handler start
[Mon Jan 17 13:49:29 2000] [warn] Module::D::handler exit
[Mon Jan 17 13:49:29 2000] [warn] Module::C::handler start
[Mon Jan 17 13:50:02 2000] [warn] Module::C::handler exit
[Mon Jan 17 13:50:14 2000] [warn] Module::A::handler start
[Mon Jan 17 13:50:14 2000] [warn] Module::A::handler exit
[Mon Jan 17 13:50:14 2000] [warn] Module::C::handler start
[Mon Jan 17 13:50:21 2000] [warn] Module::C::handler exit
[Mon Jan 17 13:50:22 2000] [warn] Module::C::handler start
[Mon Jan 17 13:50:51 2000] [warn] Module::C::handler exit

Matt
--
Matthew Robinson    E: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



Re: squid performance

2000-01-17 Thread Ask Bjoern Hansen

On Sun, 16 Jan 2000, DeWitt Clinton wrote:

[...]
 On that topic, is there an alternative to squid?  We are using it
 exclusively as an accelerator, and don't need 90% of its admittedly
 impressive functionality.  Is there anything designed exclusively for this
 purpose?

At ValueClick we can't use the caching for obvious reasons so we're using
a bunch of apache/mod_proxy processes in front of the apache/mod_perl
processes to save memory.

Even with our average 1KB per request we can keep hundreds of mod_proxy
childs busy with very few active mod_perl childs.


  - ask

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 60M impressions per day, http://valueclick.com



Re: squid performance

2000-01-17 Thread G.W. Haywood

Hi there,

On Mon, 17 Jan 2000, Joshua Chamas wrote:

 On Solaris, default seems to be 256K ...

As I remember, that's what Linux defaults to.  Don't take my word for
it, I can't remember exactly where or when I read it - but I think it
was on this list some time during the last couple of months!

 I needed to buffer up to 3M files, which I did by dynamically 
 allocating space in ap_proxy_send_fb.

For such large transfers between proxy and server, is there any reason
why one shouldn't just dump it into a tempfile in a ramdisk for the
proxy to deal with at its leisure, and let the OS take care of all the
virtual and sharing stuff?  After all, that's what it's for...

73
Ged.



RE: Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson

 I added the warns to the scripts and it appears that access to the modules
 is serialised.  Each call to the handler has to run to completion before
 any other handlers can execute.


Yes, on NT all accesses to the Perl part are serialized. This will not
change before mod_perl 2.0.

Gerald

I had a horrible feeling it was going to have something to do with the fact
that Apache on NT is multi-threaded and perl isn't (yet). 

I assume that Apache::Registry has the same problems.  However, good
old-fashioned CGI scripts in the /cgi-bin directory should be OK?  Does anybody
have any performance stats comparing Perl running externally under Apache with
PerlIS on IIS?

Matt
--
Matthew Robinson    E: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



Re: Program very slow

2000-01-17 Thread Stas Bekman

 An Englishman asked an Irishman for directions to a place some
 distance away.  The Irishman replied, "T'be sure, if oi was going
 there, oi wouldn't start from here!".
 
 This *is* a bit off-topic, but the guy needs help.

Ged, this is a wonderful thing that you do.

But, if you help someone with off-topic questions, please reply in person,
not to the list (like others did). Also make sure you stress in your
answer that the question should be asked somewhere else.

Why's that? Because we have worked hard to avoid a situation where the
list becomes ask-everything-and-you-will-be-answered and loses its value,
and, worse, its best contributors.

Please try to keep it clean and not encourage off-topic questions. 

Thank you for understanding!

 Press `D' if you're bored already.
 
 On Sun, 16 Jan 2000, Kader Ben wrote:
 
  I want to check if @rec contains the string "Unknown" but when I do
  so the program is very very slow (this reads a 6M file into the @rec
  array). Is there any other way to rewrite this code?
 
  for ($i = 0; $i < scalar(@rec); $i++) { $rec[$i] = '"'.$rec[$i].'"'; }
  if ($rec[16] eq '"Unknown"') { Alert_Unknown_ChannelID($rec[0]); }
  else { my $out = join(',', @rec) . "\n"; print (G $out); }
  }
 
 I'd really need more to go on than you've given, so I'll make some
 wild assumptions, and here goes...
 
 It's horribly inefficient to read a big file into an array with a
 large number of elements only to process it with things like:
 
 $rec[$i] = '"'.$rec[$i].'"';
 
 Think about what you're asking.  Each element has to grow by a couple
 of bytes...
 
 Maybe you can manipulate smaller chunks of the file?  If you must add
 the quotes, do it before the pieces go into the array.  If you don't
 need to do any more processing on the array, just put the first 16
 elements into it (I assume they're relatively small), something like
 the code below.  Process the 16 element array as you do now, but deal
 with the remaining input on the fly, without putting it in an array.
 Try to use $_ wherever you can.
 
 I take it you *are* using the `-w' switch and `use strict;'?
 
 73,
 Ged.
 
 #!/usr/bin/perl -w
 # Read a file, put quotes around all the lines.
 # You can probably tell I'm really a `C' programmer.
 
 use strict;
 
 my @rec=();# Small array, big file
 my $fileName = "/home/ged/website/input/create/data/input/catalogue.srt";
 
 open(FD, $fileName) or die "can't open $fileName: $!";
 
 # Read first bit of file
 my $i=0;
 while( $rec[$i++] = <FD> ) { last if $i==16; }
 
 # My file has newlines so chop 'em off before wrapping with quotes.
 # More efficient to print this inside the body of the while() above,
 # maybe you don't need the array at all...
 for( $i=0; $i<=$#rec; ) { chop($rec[$i]); print "\"$rec[$i++]\"\n"; }
 
 # Announce
 print "* Here we are at line 16. *\n";
 
 # Add quotes on the fly.  O'course we don't have to do it at all...
 if( 1 ) { while( <FD> ) { chop; print "\"$_\"\n"; } }
 
 close(FD);
 
 # EOF: ged.pl
 
 
 



___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com




Run away processes

2000-01-17 Thread Bill Moseley

The httpd.conf Timeout setting doesn't affect mod_perl, it seems, even if
the client breaks the connection.

Is there a recommendation on how to catch & stop runaway mod_perl programs
in a way that's _not_ part of the runaway program?  Or is this even
possible?  Some type of watchdog, just like the httpd.conf Timeout?

Thanks,

Bill Moseley
mailto:[EMAIL PROTECTED]



Re: squid performance

2000-01-17 Thread G.W. Haywood

Hi there,

On Mon, 17 Jan 2000, Ask Bjoern Hansen wrote:

 At ValueClick we can't use the caching for obvious reasons so we're using
 a bunch of apache/mod_proxy processes in front of the apache/mod_perl
 processes to save memory.
 
 Even with our average 1KB per request we can keep hundreds of mod_proxy
 childs busy with very few active mod_perl childs.

Would it be breaching any confidences to tell us how many
kilobyterequests per memorymegabyte or some other equally daft
dimensionless numbers?

73,
Ged.



Re: Run away processes

2000-01-17 Thread Stas Bekman

 The httpd.conf Timeout setting doesn't affect mod_perl, it seems, even if
 the client breaks the connection.
 
 Is there a recommendation on how to catch & stop runaway mod_perl programs
 in a way that's _not_ part of the runaway program?  Or is this even
 possible?  Some type of watchdog, just like the httpd.conf Timeout?

Try Apache::SafeHang
http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz

It should be renamed one day when I get back to work on it, into something
like Apache::Watchdog::RunAwayProc, as was kindly suggested by Ken Williams
(the Apache::Watchdog:: part :)
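The usual building block for this kind of watchdog inside a handler is alarm(); a minimal sketch of the general pattern (this is not the Apache::SafeHang implementation, just the idea):

```perl
use strict;
use warnings;

# Run a piece of work with a hard time limit: SIGALRM fires, the die
# unwinds out of the eval, and the caller regains control.
sub with_timeout {
    my ($limit, $code) = @_;
    my $result;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm($limit);
        $result = $code->();
        alarm(0);                 # cancel the pending alarm
    };
    alarm(0);                     # belt and braces if the eval died
    return $@ ? undef : $result;
}

my $r = with_timeout(5, sub { 42 });
print defined($r) ? "finished\n" : "timed out\n";
```

A watchdog that kills hung children from a separate process (as the module above does) avoids the main limitation of alarm(), namely that a handler blocked inside certain system calls may never reach the signal handler.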

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Markus Wichitill

  So, if you want to increase the RCVBUF size above 65535, the default max
  value, you first have to raise the absolute limit in
  /proc/sys/net/core/rmem_max, 

Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do this? I
don't have much experience with /proc, but this seems to work. If it's ok, it
could be added to the Guide, which already mentions how to change it in
FreeBSD.



redhat apache and modperl oh my!

2000-01-17 Thread Clay

so i am just wanting to know what anyone
has found out on mod perl not working properly
under redhat 6.1?

thanks





Re: Run away processes

2000-01-17 Thread Bill Moseley

At 06:48 PM 1/17/00 +0200, Stas Bekman wrote:
 The httpd.conf Timeout setting doesn't affect mod_perl, it seems, even if
 the client breaks the connection.
 
 Is there a recommendation on how to catch & stop runaway mod_perl programs
 in a way that's _not_ part of the runaway program?  Or is this even
 possible?  Some type of watchdog, just like the httpd.conf Timeout?

Try Apache::SafeHang
http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz

Oh, ya.  Thanks.

I'm curious.  What is the reason Timeout doesn't work?   Does Timeout only
work with mod_cgi?




Bill Moseley
mailto:[EMAIL PROTECTED]



Re: redhat apache and modperl oh my!

2000-01-17 Thread Gerd Kortemeyer

Clay wrote:
 
 so i am just wanting to know what anyone
 has found out on mod perl not working properly
 under redhat 6.1?

If you install everything (including mod_perl) from Red Hat's RPMs, no problem (I
did this on five very different boxes, some new, some upgraded). If you try to
do it yourself by building from sources, etc., ... oh well - then Red Hat gets in
the way and gets all confused.




Re: redhat apache and modperl oh my!

2000-01-17 Thread Clay


No, I have only used the Red Hat packages. I have extensively searched all
related newsgroups etc.

The startup file I've included is the one I borrowed from the mod_perl
book or the guide online.
I have never had problems up until Red Hat 6.1 (Stampede, Slackware and Red Hat 6
all worked fine).

I realize this is not mod_perl's fault, but if anyone has any hints please let
me know.

I've included the startup file; notice the commented-out ones - if I load any of
them it bails out, and I know they are installed!

 startup.pl


Re: redhat apache and modperl oh my!

2000-01-17 Thread Aaron Johnson

I have had the same experience as Stas.

The Red Hat RPM uses Dynamic Shared Objects (DSO) for all the
modules.  This is NOT the ideal way to run mod_perl; I am not saying
you can't, but a lot of modules won't preload under these conditions.

my $.02

Aaron Johnson

Stas Bekman wrote:
 
  Clay wrote:
  
   so i am just wanting to know what anyone
   has found out on mod perl not working properly
   under redhat 6.1?
 
 Clay, did you try to find your answer in the list's archives? (hint: at
 perl.apache.org) There is no need to roll the broken record again. Thank
 you!
 
  If you install everything (including modperl) from RedHat's RPMs, no
  problem (I did this on five very different boxes, some new, some
  upgraded). If you try to do it yourself by building from sources, etc,
  ... oh well - then RedHat is in the way and gets all confused..
 
 Gerd, there are no problems building mod_perl from scratch. I have done it with
 all versions from the past 2.5 years, I think. Make sure to remove the
 Apache and mod_perl RPMs first!!!
 
 ___
 Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
 Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
 perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
 single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Stas Bekman

 On Mon, 17 Jan 2000, Markus Wichitill wrote:
 
   So, if you want to increase the RCVBUF size above 65535, the default max
   value, you first have to raise the absolute limit in
   /proc/sys/net/core/rmem_max, 
  
  Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do
  this? I don't have much experience with /proc, but this seems to work.
 
 Yes, that's the way described in the Linux kernel documentation and the one I
 use myself.

So you should put this into /etc/rc.d/rc.local ?

  If it's ok, it could be added to the Guide, which already mentions how
  to change it in FreeBSD.
 
 I'd also like to see this info added to the Guide.

Of course! Thanks for this factoid!

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



RE: Off Topic Questions

2000-01-17 Thread G.W. Haywood

Hi all,

On Mon, 17 Jan 2000, Stas Bekman wrote:

 Please try to keep it clean and not encourage off-topic questions. 

Sorry, Stas, you're quite right.  I often do reply privately to the
off-topic questions, and I suppose even that might be construed as
encouraging them.  I do however also try to point out gently that it
*is* off topic, and so shouldn't be here on the List, and also when
you should press `D'.  Er, now.

It's not always easy to know where to draw the line.  And unlike many,
I like the prickly feeling I get at the back of my neck when I think
that the likes of the Camel Book's authors are probably dissecting my
replies and will blow me out of the water without mercy if I get it
wrong.  It's kind of like doing homework exercises, and it's amazing
what you learn when you try to explain something to someone else.  I
guess I'll lose some of these joys if I don't reply to the completely
off-topic questions that go to the List.  But that's OK.

For the record, I'm happy to answer Perl-specific questions if mailed
to me privately.  I can't guarantee to cope with the demand, even if
there's only one question in my mailbox.  And it has to be said that
if you have to ask me (and if I can answer it:) then it's probably a
dumb question.  But that's OK, too.

As for keeping it clean, well I suppose it was a racist joke...

73,
Ged.



redhat 6.1 apache and modperl woes !!

2000-01-17 Thread Naren Dasu

I seem to be having problems with Red Hat 6.1 & mod_perl, compiled and
installed from scratch.  This is an out-of-the-box Penguin Computer running
RH6.1.

The strange behavior that manifests itself as follows: 

Config 1

<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from .divatv.com
</Location>

The above configuration fails, I get the "Client does not have permission"
in the error_log file. 


Config 2

BUT this works ... I did this to test if the server-status modules were
properly installed. 

<Location /server-status>
SetHandler server-status
Order allow,deny
Allow from all
</Location>

I am running out of ideas on why Config 1 fails.  I also tried a bunch of
chmod/chgrp commands to change the permissions on the files, but no luck.
Could someone shed some light on this ? 

thanks a bunch 
naren 



At 12:26 PM 1/17/00 -0500, you wrote:
Clay wrote:
 
 so i am just wanting to know what anyone
 has found out on mod perl not working properly
 under redhat 6.1?

If you install everything (including mod_perl) from Red Hat's RPMs, no
problem (I did this on five very different boxes, some new, some upgraded).
If you try to do it yourself by building from sources, etc., ... oh well -
then Red Hat gets in the way and gets all confused.




Re: squid performance

2000-01-17 Thread Ask Bjoern Hansen

On Mon, 17 Jan 2000, G.W. Haywood wrote:

  At ValueClick we can't use the caching for obvious reasons so we're using
  a bunch of apache/mod_proxy processes in front of the apache/mod_perl
  processes to save memory.
  
  Even with our average 1KB per request we can keep hundreds of mod_proxy
  childs busy with very few active mod_perl childs.
 
 Would it be breaching any confidences to tell us how many
 kilobyterequests per memorymegabyte or some other equally daft
 dimensionless numbers?

Uh, I don't understand the question.

The replies to the requests are all redirects to the real content (which
is primarily served by Akamai) so it's quite non-typical.


 - ask

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 60M impressions per day, http://valueclick.com



RE: squid performance

2000-01-17 Thread Stas Bekman

   No, that's the size of the system call buffer.  It is not an
   application buffer.
 
  So how one should interpret the info at:
  http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize
 
  QUOTE
  The ProxyReceiveBufferSize directive specifies an explicit network buffer
  size for outgoing HTTP and FTP connections, for increased throughput. It
  has to be greater than 512 or set to 0 to indicate that the system's
  default buffer size should be used.
  /QUOTE
 
  So what's the application buffer parameter? A hardcoded value?
 
 
 Yes, as Joshua posted today morning (at least it was morning in germany :-),
 the application buffer size is hardcoded, the size is 8192 (named
 IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
 
 The ProxyReceiveBufferSize set the receive buffer size of the socket, so
 it's an OS issue.

Which means that setting ProxyReceiveBufferSize higher than 8K is
useless unless you modify the sources. Am I right? (I want to make it as
clear as possible in the Guide.)

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: redhat apache and modperl oh my!

2000-01-17 Thread Aaron Johnson

Clay,

Well, I agree after seeing your startup.pl that the problem you are
experiencing is not with the DSO; however, from past posts about problems
there are several modules that do not play well with DSO.

However as far as any problem with using the Red Hat distributed RPM
of Apache and mod_perl on 6.1 I can not say, I always compile my own.

Aaron Johnson

Clay wrote:
 
 DSO is not the issue -
 
 that works just fine,
 I know it does;
 the problem seems to be with the default binary packages that Red Hat 6.1
 comes with.
 
 I've just uninstalled/installed twice with the same results,
 and there are no errors in the error log.
 
 I need this for work or I wouldn't be stressin'!



ASP->Loader result in 'Attempt to free non-existent shared...'

2000-01-17 Thread Dmitry Beransky

Hi again, folks,

Last Saturday, after manually relinking SDBM_File with a reference to
mod_perl's libperl.so, I was able to preload Apache::ASP and precompile the
asp scripts from startup.pl without any segfaults.  This, however, resulted
in a different problem.  I didn't notice it right away (don't know how I
could've missed it), but now every time a child process is shut down a
whole slew of 'null: Attempt to free non-existent shared string during
global destruction' messages (on the order of 2500 per process) is
dumped into the error log.  I've narrowed the problem down to the
ASP->Loader call in startup.pl.  Any chance anybody knows what's going
on?  Is it possible to at least somehow disable this error?

Thanks
Dmitry 



Re: ASP->Loader result in 'Attempt to free non-existent shared...'

2000-01-17 Thread Joshua Chamas

Dmitry Beransky wrote:
 
 Hi again, folks,
 
 Last Saturday, after manually relinking SDBM_File with a reference to
 mod_perl's libperl.so, I was able to preload Apache::ASP and precompile the
 asp scripts from startup.pl without any segfaults.  This, however, resulted
 in a different problem.  I didn't notice it right away (don't know how I
 could've missed it), but now every time a child process is shut down a
 whole slew of 'null: Attempt to free non-existent shared string during
 global destruction' messages (on the order of 2500 per process) is
 dumped into the error log.  I've narrowed the problem down to the
 ASP->Loader call in startup.pl.  Any chance anybody knows what's going
 on?  Is it possible to at least somehow disable this error?
 

You could try a " local $^W = 0; " before the Apache::ASP->Loader()
call; this might do away with these altogether.  But there
is something else wrong lurking here - 2500 errors!! - so keep
it in the back of your mind in case anything else comes up.

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com    1-714-625-4051



RE: squid performance

2000-01-17 Thread Gerald Richter

Hi Stas,

 
  Yes, as Joshua posted this morning (at least it was morning in
 Germany :-),
  the application buffer size is hardcoded, the size is 8192 (named
  IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
 
  The ProxyReceiveBufferSize sets the receive buffer size of the socket, so
  it's an OS issue.

 Which means that setting of ProxyReceiveBufferSize higher than 8k is
 useless unless you modify the sources. Am I right? (I want to make it as
 clear as possible in the Guide)


No, that means that Apache reads (and writes) the data of the request in
chunks of 8K, but the OS provides a buffer with the size of
ProxyReceiveBufferSize (as long as you don't hit a limit). So the proxied
request data is buffered by the OS, and if the whole page fits inside the OS
buffer, the sending Apache should be released immediately after sending the
page, while the proxying Apache can read and write the data in 8K chunks as
slowly as the client allows.
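To make the OS-buffer point concrete: as the thread notes, ProxyReceiveBufferSize boils down to a setsockopt(SO_RCVBUF) call on the proxy's socket, which you can observe from any language. A short Python sketch (the 65536 value is illustrative):

```python
import socket

# ProxyReceiveBufferSize ends up as a setsockopt(SO_RCVBUF) call on the
# proxy's socket -- the buffer itself lives in the OS, not in Apache.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

default_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
tuned_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Kernels may round the value (Linux, e.g., doubles it for internal
# bookkeeping), so check the direction of the change, not an exact number.
print(default_size > 0, tuned_size >= 65536)
s.close()
```

This is also why raising the directive beyond 8K still helps even though Apache's own application buffer is fixed at IOBUFSIZE: the extra space is kernel-side.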

That's the result of the discussion. I haven't tried it out myself yet to
see if it really behaves this way. I will do so next time and let you know
if I find any different behaviour.

Gerald




RE: squid performance

2000-01-17 Thread Stas Bekman

   Yes, as Joshua posted this morning (at least it was morning in
  Germany :-),
   the application buffer size is hardcoded, the size is 8192 (named
   IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
  
   The ProxyReceiveBufferSize sets the receive buffer size of the socket, so
   it's an OS issue.
 
  Which means that setting of ProxyReceiveBufferSize higher than 8k is
  useless unless you modify the sources. Am I right? (I want to make it as
  clear as possible in the Guide)
 
 
 No, that means that Apache reads (and writes) the data of the request in
 chunks of 8K, but the OS provides a buffer with the size of
 ProxyReceiveBufferSize (as long as you don't hit a limit). So the proxied
 request data is buffered by the OS, and if the whole page fits inside the OS
 buffer, the sending Apache should be released immediately after sending the
 page, while the proxying Apache can read and write the data in 8K chunks as
 slowly as the client allows.

Gerald, thanks for your answer.
I'm still confused... which is the right scenario:

1) a mod_perl process generates a response of 64k; if the
ProxyReceiveBufferSize is 64k, the process gets released immediately, as
all 64k are buffered at the socket; then a proxy process comes in, picks
up 8k of data at a time and sends it down the wire.

2) a mod_perl process generates a response of 64k; a proxy process reads
from the mod_perl socket in 8k chunks and sends them down its own socket.
No matter what the client's speed is, the data gets buffered once again at
that socket, so even if the client is slow, the proxy server completes the
proxying of the 64k of data before the client has been able to absorb it.
Thus the system socket serves as another buffer on the way to the client.

3) neither of them

Also, if scenario 1 is the right one and it looks like:

  [  socket  ]
[mod_perl] = [  ] = [mod_proxy] = wire
  [  buffer  ]

When the buffer size is 64k and the generated data is 128k, is it a
shift-register (pipeline) style buffer, so that every time an 8k chunk is
picked up by mod_proxy, another 8k can enter the buffer? Or can no new data
enter the buffer before it gets empty, i.e. before all 64k have been read
by mod_proxy?

As you can see, the pipeline mode provides better performance, as it
releases the heavy mod_perl process as soon as the amount of data awaiting
to be sent to the client equals the socket buffer size + 8k. I suspect it's
not a shift-register type of buffer, though...
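The shift-register question can actually be poked at directly: kernel socket buffers are drained continuously, so a writer can push more bytes as soon as the reader frees any space, without waiting for the buffer to empty. A small Python sketch (using a socketpair to stand in for the mod_perl/mod_proxy pair; buffer sizes are illustrative and the kernel may round them):

```python
import socket

# A connected pair: w plays mod_perl (writer), r plays mod_proxy (reader).
# Shrink the buffers so the effect shows up with modest amounts of data.
w, r = socket.socketpair()
w.setblocking(False)
w.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16384)
r.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16384)

# Fill the in-kernel buffers: a non-blocking send accepts data only
# until the buffer is full, then raises BlockingIOError.
sent_first = 0
try:
    while True:
        sent_first += w.send(b"x" * 8192)
except BlockingIOError:
    pass

# Drain one 8k chunk on the reading side, as mod_proxy would.
chunk = r.recv(8192)

# As soon as some space is freed, the writer can push more bytes --
# the buffer drains like a pipeline, not all-or-nothing.
sent_more = 0
try:
    while True:
        sent_more += w.send(b"y" * 1024)
except BlockingIOError:
    pass

print(sent_first > 0, len(chunk) > 0, sent_more > 0)
```

So at the socket level the buffer does behave like a pipeline; whether Apache's proxy loop takes full advantage of that is the open question in this thread.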

Thank you!

 
 That's the result of the discussion. I didn't tried it out myself until now
 if it really behaves this way. I will do so the next time and let you know
 if I find any different behaviour.
 
 Gerald
 
 
 



___
Stas Bekman     mailto:[EMAIL PROTECTED]      http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o->+ = singlesheaven    http://www.singlesheaven.com



Re: DynaLoader/MakeMaker problem? - Apache::ASP: crash when placed in startup.pl

2000-01-17 Thread Matt Sergeant

On Sun, 16 Jan 2000, Alan Burlison wrote:
 I think we have a strong case for:
 
 a) Requesting that MakeMaker adds a dependency between the .so files it
 generates and the perl libperl.so
 
 b) Requesting that a 'remove a module' method is added to DynaLoader

Option b would be very useful for mod_perl, because you could remove modules
used at startup but not needed throughout the life of the system. For
example, say in your <Perl> section you want to parse an XML config file
that changes your httpd configuration somehow, so you load in XML::Parser.
But now you've got XML::Parser in each of your child processes, where you
don't need it and can't unload it. Being able to call
DynaLoader::RemoveFromMemory('XML::Parser') would be ideal (yes, it could be
dangerous - so are most power tools).
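Sketching the idea: RemoveFromMemory() is purely hypothetical (no such function exists in DynaLoader), and parse_my_config() is a placeholder for whatever config-reading code the <Perl> section runs, but the shape of the usage would be something like:

```perl
# httpd.conf <Perl> section -- hypothetical API, for illustration only
use XML::Parser ();

my $conf = parse_my_config('/path/to/site-config.xml');   # your own sub
# ... adjust server configuration from $conf here ...

# The proposed (non-existent) call: drop the shared object, then clean
# up %INC so the Perl side forgets the module as well, before the
# parent forks its children.
DynaLoader::RemoveFromMemory('XML::Parser');
delete $INC{'XML/Parser.pm'};
```

The %INC cleanup alone works today but leaves the XS shared object mapped; freeing that mapping is exactly the part option b would add.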

-- 
Matt/

Details: FastNet Software Ltd - XML, Perl, Databases.
Tagline: High Performance Web Solutions
Web Sites: http://come.to/fastnet http://sergeant.org
Available for Consultancy, Contracts and Training.