[OT?] What exactly is forwarding?

2002-03-12 Thread Martin Haase-Thomas

Hi all,

I know this isn't quite the right list for my topic, but rather than 
subscribe to another list for just one question... forgive me if I'm 
being boring, especially since my question is rather philosophical.

If you consider JSPs, there is a tag called <jsp:forward page="..." />. 
My question is: how should I understand 'forward'? The Java 
documentation isn't very verbose, and I can imagine two possible meanings:
1. Forwarding is some sort of internal redirect within the servlet engine, 
which the browser will not be informed of.  From this point of view 
forwarding would be nearly the same as a dynamic include.
2. Forwarding is the same as a redirect.

Maybe a superfluous question for some of you, but for me it isn't. 
So, if anyone knows an answer - or the number of the RFC where I'll find 
the information - you're welcome!

Thanx a lot in advance
Martin


-- 
   http://www.meome.de
---
Martin Haase-Thomas |   Tel.: 030 43730-558
meOme AG|   Fax.: 030 43730-555
Software Development|   [EMAIL PROTECTED]
---





Re: [OT] Thought for the Day

2002-03-12 Thread Carlos Ramirez

I wrote an article about Apache::Motd for UNIX SysAdmin magazine (March 
2001 issue). Here's the link: 
http://www.samag.com/documents/s=1153/sam0103a/

-Carlos


Geoffrey Young wrote:

 John Eisenschmidt wrote:
 
Sinister, aren't I? <G>

For the record, I did email directly an explanation of what -o does and how to 
exclude it for clean fortunes, and that it's more fun to make your own. I have 
quotes from all the Dogma movies that are called within my mod_perl index.html 
on my website. =)

Suddenly this thread is on-topic.

 
 everyone might want to look at Apache::MOTD - it's a similar idea to
 the Unix motd functionality, and its implementation is quite clever.
 
 --Geoff
 
 





Re: [OT] Thought for the Day

2002-03-12 Thread Martin Haase-Thomas

Should we tell the yellow press in the end?

;)
Martin


Carlos Ramirez wrote:

 I wrote an article about Apache::Motd for UNIX SysAdmin magazine 
 (March 2001 issue). Here's the link: 
 http://www.samag.com/documents/s=1153/sam0103a/

 -Carlos


 Geoffrey Young wrote:

 John Eisenschmidt wrote:

 Sinister, aren't I? <G>

 For the record, I did email directly an explanation of what -o does 
 and how to exclude it for clean fortunes, and that it's more fun to 
 make your own. I have quotes from all the Dogma movies that are 
 called within my mod_perl index.html on my website. =)

 Suddenly this thread is on-topic.


 everyone might want to look at Apache::MOTD - it's a similar idea to
 the Unix motd functionality whose implementation is quite clever.

 --Geoff






-- 
   http://www.meome.de
---
Martin Haase-Thomas |   Tel.: 030 43730-558
meOme AG|   Fax.: 030 43730-555
Software Development|   [EMAIL PROTECTED]
---






loss of shared memory in parent httpd

2002-03-12 Thread Bill Marrs

I'm a heavy mod_perl user, running 3 sites as virtual servers, all with 
lots of custom Perl code.  My httpd's are huge (~50MB), but with the help of 
a startup file I'm able to get them sharing most of their 
memory (~43MB).  With the help of GTopLimit, I'm able to keep the memory usage 
under control.

But... recently, something happened, and things have changed.  After some 
random amount of time (1 to 40 minutes or so, under load), the parent httpd 
suddenly loses about 7-10mb of share between it and any new child it 
spawns.  As you can imagine, the memory footprint of my httpds skyrockets 
and the delicate balance I set up is disturbed.  Also, GTopLimit is no help 
in this case - it actually causes flailing because each new child starts 
with memory sharing that is out of bounds and is thus killed very quickly.

Restarting Apache resets the memory usage and restores the server to smooth 
operation.  Until, it happens again.

Using GTop() to get the shared memory of each child before and after 
running my perl for each page load showed that it wasn't my code causing 
the jump, but suddenly the child, after having a good amount of shared 
memory in use, loses a 10MB chunk and from then on the other children 
follow suit.

So, something I did on the server (I'm always doing stuff!) has caused this 
change to happen, but I've been pulling my hair out for days trying to 
track it down.  I am now getting desperate.  One of the recent things I did 
was to run tux (another web server) to serve my images, but I don't see 
how that could have any effect on this.

If anyone has any ideas what might cause the httpd parent (and new 
children) to lose a big chunk of shared memory between them, please let me 
know.

Thanks in advance,

-bill




Re: loss of shared memory in parent httpd

2002-03-12 Thread Elizabeth Mattijsen

At 09:18 AM 3/12/02 -0500, Bill Marrs wrote:
If anyone has any ideas what might cause the httpd parent (and new 
children) to lose a big chunk of shared memory between them, please let me 
know.

I've seen this happen many times.  One day it works fine, the next you're 
in trouble.  And in my experience, it's not a matter of why this avalanche 
effect happens, but more a matter of why it didn't happen 
before.  You may not have realised that you were just below a 
threshold and now you're over it.  And the change can be as small as the 
size of a heavily used template that suddenly gets over an internal memory 
allocation border, which in turn causes Perl to allocate more, which in 
turn causes memory to become unshared.

I have been thinking about a perl/C routine that would internally use all 
of the memory that was already allocated by Perl.  Such a routine would 
need to be called when the initial start of Apache is complete so that any 
child that is spawned has a saturated memory pool, so that any new 
variables would need to use newly allocated memory, which would be 
unshared.  But at least all of that memory would be used for new 
variables and not have the tendency to pollute old memory segments.

I'm not sure whether my assessment of the problem is correct.  I would 
welcome any comments on this.

I have two ideas that might help:
-
- other than making sure that you have the most up-to-date (kernel) version 
of your OS.  Older Linux kernels seem to have this problem a lot more than 
newer kernels.

I wish you strength in fixing this problem...


Elizabeth Mattijsen




Re: loss of shared memory in parent httpd (2)

2002-03-12 Thread Elizabeth Mattijsen

Oops. Premature sending...

I have two ideas that might help:
- reduce number of global variables used, less memory pollution by lexicals
- make sure that you have the most up-to-date (kernel) version of your 
OS.  Newer Linux kernels seem to be a lot savvier at handling shared memory 
than older kernels.

Again, I wish you strength in fixing this problem...


Elizabeth Mattijsen




Cookies and redirects

2002-03-12 Thread Axel Andersson

Hello,
I'm having trouble with both setting a cookie and redirecting the user to
another page at the same time. It would appear the cookie is only sent
when a normal header is sent by server.

If I do the following (having baked the cookie first), where $r is the
Apache->request() object:

$r->content_type("text/html; charset=iso-8859-1");
$r->send_http_header();

I get this header:

  Connection: close
  Date: Tue, 12 Mar 2002 10:39:05 GMT
  Server: Apache/1.3.23 (Unix) mod_perl/1.26
  Content-Type: text/html; charset=iso-8859-1
  Client-Date: Tue, 12 Mar 2002 10:39:05 GMT
  Client-Response-Num: 1
  Client-Transfer-Encoding: chunked
  Set-Cookie: user=12::7c786c222596437b; domain=animanga.nu; path=/;
  expires=Wed, 12-Mar-2003 10:39:05 GMT

Very nice and all, with cookie set. However, doing:

$r->method_number(M_GET);
$r->method("GET");
$r->headers_in->unset("Content-length");
$r->headers_out->add(Location => "/users.pl");
$r->status(REDIRECT);
$r->send_http_header();

Which I gather is the normal way to redirect a user, I get this header:

  Connection: close
  Date: Tue, 12 Mar 2002 10:38:36 GMT
  Server: Apache/1.3.23 (Unix) mod_perl/1.26
  Content-Type: text/html; charset=iso-8859-1
  Client-Date: Tue, 12 Mar 2002 10:38:36 GMT
  Client-Response-Num: 1
  Client-Transfer-Encoding: chunked

Right, no Set-cookie there. So what's up? How do I redirect a browser,
and set a cookie at the same time?

Thanks in advance,
Axel Andersson

-- 
[EMAIL PROTECTED]
http://www.animanga.nu/morris/

38. Feel cosmos as translucent ever-living presence




Re: loss of shared memory in parent httpd

2002-03-12 Thread Stas Bekman

Bill Marrs wrote:
 I'm a heavy mod_perl user, running 3 sites as virtual servers, all with 
 lots of custom Perl code.  My httpd's are huge (~50MB), but with the help 
 of a startup file I'm able to get them sharing most of their 
 memory (~43MB).  With the help of GTopLimit, I'm able to keep the memory 
 usage under control.
 
 But... recently, something happened, and things have changed.  After 
 some random amount of time (1 to 40 minutes or so, under load), the 
 parent httpd suddenly loses about 7-10mb of share between it and any new 
 child it spawns.  As you can imagine, the memory footprint of my httpds 
 skyrockets and the delicate balance I set up is disturbed.  Also, 
 GTopLimit is no help in this case - it actually causes flailing because 
 each new child starts with memory sharing that is out of bounds and is 
 thus killed very quickly.
 
 Restarting Apache resets the memory usage and restores the server to 
 smooth operation.  Until, it happens again.
 
 Using GTop() to get the shared memory of each child before and after 
 running my perl for each page load showed that it wasn't my code causing 
 the jump, but suddenly the child, after having a good amount of shared 
 memory in use, loses a 10MB chunk and from then on the other children 
 follow suit.
 
 So, something I did on the server (I'm always doing stuff!) has caused 
 this change to happen, but I've been pulling my hair out for days trying 
 to track it down.  I am now getting desperate.  One of the recent things 
 I did was to run tux (another web server) to serve my images, but I 
 don't see how that could have any effect on this.
 
 If anyone has any ideas what might cause the httpd parent (and new 
 children) to lose a big chunk of shared memory between them, please let 
 me know.

I assume that you are on linux (tux :).  Look at the output of 'free' -- 
how much swap is used before you start the server and when the horrors 
begin?  Your delicate balance could be ruined when the system starts to 
swap and the load doesn't go away; in that case what you see is normal. 
Notice that it's possible that you didn't add a single line of code to 
your webserver, but updated some other app running on the same machine 
which started to use more memory, and that throws the balance off.

Hope that my guess was right.  If so, make sure that your system never 
swaps.  Swap is for emergency short-term extra memory requirements, not 
for normal operation.
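Stas's check can be scripted; here is a minimal sketch that reads swap usage straight from /proc/meminfo (Linux-specific, and what counts as "too much" is your call):

```shell
# Report how much swap is in use; any sizeable nonzero figure under
# normal load is a sign the sharing balance may be at risk.
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {printf "swap used: %d kB\n", t-f}' /proc/meminfo
```

Comparing this figure before starting the server and once the losses begin should show whether swapping is the trigger.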

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: loss of shared memory in parent httpd

2002-03-12 Thread Paolo Campanella

On Tue, 12 Mar 2002 09:18:32 -0500
Bill Marrs [EMAIL PROTECTED] wrote:

 But... recently, something happened, and things have changed.  After some 
 random amount of time (1 to 40 minutes or so, under load), the parent httpd 
 suddenly loses about 7-10mb of share between it and any new child it 
 spawns.  As you can imagine, the memory footprint of my httpds skyrockets 
 and the delicate balance I set up is disturbed.  Also, GTopLimit is no help 

 Restarting Apache resets the memory usage and restores the server to smooth 
 operation.  Until, it happens again.

Hi Bill

I can't give you a decent answer, but I have noticed this as well, and my
impression is that this happens when your httpd's are swapped out
(when your system runs short of free memory) - or perhaps just when the parent
httpd is swapped out (it's been a while - I can't remember exactly what
symptoms I observed). I think the whole phenomenon is a side-effect of normal
memory management - I'm sure someone on the list will have a proper
explanation.


Bye

Paolo




Re: loss of shared memory in parent httpd

2002-03-12 Thread Stas Bekman

Elizabeth Mattijsen wrote:
 At 09:18 AM 3/12/02 -0500, Bill Marrs wrote:
 
 If anyone has any ideas what might cause the httpd parent (and new 
 children) to lose a big chunk of shared memory between them, please 
 let me know.
 
 
 I've seen this happen many times.  One day it works fine, the next 
 you're in trouble.  And in my experience, it's not a matter of why this 
 avalanche effect happens, but more a matter of why it didn't 
 happen before.  You may not have realised that you were just below a 
 threshold and now you're over it.  And the change can be as small as 
 the size of a heavily used template that suddenly gets over an internal 
 memory allocation border, which in turn causes Perl to allocate more, 
 which in turn causes memory to become unshared.
 
 I have been thinking about a perl/C routine that would internally use 
 all of the memory that was already allocated by Perl.  Such a routine 
 would need to be called when the initial start of Apache is complete so 
 that any child that is spawned has a saturated memory pool, so that 
 any new variables would need to use newly allocated memory, which would 
 be unshared.  But at least all of that memory would be used for new 
 variables and not have the tendency to pollute old memory segments.
 
 I'm not sure whether my assessment of the problem is correct.  I would 
 welcome any comments on this.

Nope Elizabeth, your explanation is not quite correct. ;)
Shared memory is not about sharing the pre-allocated memory pool (heap 
memory). Once you re-use a bit of preallocated memory, the sharing goes away.

Shared memory is about 'text'/read-only memory pages which never get 
modified, and pages that can get modified but remain shared as long as 
they aren't modified. Unfortunately (in this aspect, though fortunately 
in many other aspects) Perl is not a strongly-typed (or whatever you 
call it) language, therefore it's extremely hard to share memory, 
because in Perl almost everything is data. Though as you could see, Bill 
was able to share 43M out of 50M, which is damn good!


_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: Cookies and redirects

2002-03-12 Thread Geoffrey Young

Axel Andersson wrote:
 
 Hello,
 I'm having trouble with both setting a cookie and redirecting the user to
 another page at the same time. It would appear the cookie is only sent
 when a normal header is sent by server.
 

this is a common problem - you have to add the cookie to the
err_headers_out table instead of the headers_out table. 

if you are using Apache::Cookie then this is done for you, otherwise
you have to populate the correct set of headers.

see 

http://perl.apache.org/guide/snippets.html#Sending_Cookies_in_REDIRECT_Resp

or Recipes 3.7 and 3.13 in the mod_perl cookbook
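For reference, a minimal sketch of what that recipe amounts to (untested here, and assuming a mod_perl 1.x handler with Apache::Constants; the cookie string mirrors the one in Axel's question):

```perl
# Cookies added to err_headers_out are sent even with non-2xx responses
# such as REDIRECT; cookies left in headers_out are dropped in that case.
use Apache::Constants qw(REDIRECT);

sub handler {
    my $r = shift;
    my $cookie = "user=12::7c786c222596437b; domain=animanga.nu; path=/";
    $r->err_headers_out->add('Set-Cookie' => $cookie);
    $r->headers_out->set(Location => '/users.pl');
    return REDIRECT;   # Apache sends the 302 along with the Set-Cookie
}
```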

--Geoff



Re: [OT?] What exactly is forwarding?

2002-03-12 Thread Paul Lindner

On Tue, Mar 12, 2002 at 10:01:42AM +0100, Martin Haase-Thomas wrote:
 Hi all,
 
 I know this is not perfectly the right list for my topic, but before 
 subscribing to another for just one question... forgive me if I'm going 
 to be boring. Even more, because my question is rather philosophical.
 
 If you consider JSPs, there is a tag called <jsp:forward page="..." />. 
 My question is: how should I understand 'forward'? The Java 
 documentation isn't very verbose, and I can imagine two possible meanings:
 1. Forwarding is some sort of an internal redirect to the servlet engine 
 which the browser will not be informed of.  From this point of view 
 forwarding will be nearly the same as a dynamic include.

*ding* correct!  Basically it says, dump any buffered output and start
a new request.

 2. Forwarding is the same as a redirect.

nope, see above.

 Maybe a superfluous question for some of you, for me it isn't actually. 
 So, if anyone knows an answer - or knows the number of the RFC I'll find 
 the information: you're welcome!

You'll find that $r->internal_redirect() is the mod_perl equivalent.
Also Apache::ASP contains the Transfer() method, which accomplishes
the same thing.
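In mod_perl terms a forward might be sketched like this (a hypothetical handler and URI; internal_redirect hands the request off inside the server, so the browser never sees a 3xx):

```perl
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    # roughly what <jsp:forward page="/other.jsp"/> does: abandon this
    # response and let another URI generate the output instead
    $r->internal_redirect('/other.pl');
    return OK;
}
```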

-- 
Paul Lindner[EMAIL PROTECTED]   | | | | |  |  |  |   |   |

mod_perl Developer's Cookbook   http://www.modperlcookbook.org/
 Human Rights Declaration   http://www.unhchr.ch/udhr/



Re: File upload example

2002-03-12 Thread Stas Bekman

Rich Bowen wrote:
 I am sure that this is a FAQ, but I had a very hard time finding
 examples of code for doing file upload. I wanted to post this here in
 order to have it in the permanent record so that other folks don't have
 to spend days figuring this out.

Great, Rich! I think we can do better than just keeping it in the 
archive. How about adding it here?
http://perl.apache.org/guide/snippets.html
If you like the idea, can you please make it a complete section and send 
it to the list/me and I'll add it to the guide? Thanks!



_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




BerkeleyDB Problems

2002-03-12 Thread Mark Matthews

Hello,

I am moving a website that now resides on an i686 server running RedHat
6.2 with perl v5.005_03 to another i686 server running Suse 7.1 with
perl v5.6.1.

The website uses a number of cgi scripts that read and write from
BerkeleyDB files using the tie function.

The site is currently running fine on the RedHat server, but when
testing the scripts on the Suse box I am finding that the scripts are
failing, complaining that the db file cannot be opened.

The code calling tie() is as follows...
$db = "blurb";
tie(%BLURB, 'DB_File', $db, O_RDONLY, 0664) || die("Error: could not
open $db: $!\n");

Things I have tried so far:
- I have checked that the BerkeleyDB file (blurb) is in the right path, and
is readable/writable.
- I have checked that the DB file is not corrupt by ftping it back to
the RedHat box and testing it.  It works fine.
- the command 'file blurb' shows that the db file is "Berkeley DB (Hash,
version 5, native byte-order)", and my guess is that the version of DB_File
cannot read that DB version.  I have installed earlier versions of
DB_File on the Suse box with no luck.
- I have successfully created a new db file using tie.  The file created
is version 7.

Since these scripts do in fact work on the RedHat server, what do I need
to do to get them to work on the Suse server?

Any help would be greatly appreciated..

Mark Matthews




Re: loss of shared memory in parent httpd

2002-03-12 Thread Elizabeth Mattijsen

At 11:46 PM 3/12/02 +0800, Stas Bekman wrote:
I'm not sure whether my assessment of the problem is correct.  I would 
welcome any comments on this.
Nope Elizabeth, your explanation is not so correct. ;)

Too bad...  ;-(


Shared memory is not about sharing the pre-allocated memory pool (heap 
memory). Once you re-use a bit of preallocated memory the sharing goes away.

I think the phrase is Copy-On-Write, right?  And since RAM is allocated in 
chunks, let's assume 4K for the sake of the argument, changing a single 
byte in such a chunk causes the entire chunk to be unshared.  In older 
Linux kernels, I believe I've seen that when a byte gets changed in a 
chunk of any child, that chunk becomes changed for _all_ children.  Newer 
kernels only unshare it for that particular child.  Again, if I'm not 
mistaken, and someone please correct me if I'm wrong...

Since Perl is basically all data, you would need to find a way of 
localizing all memory that is changing to as few memory chunks as 
possible.  My idea was just that: by filling up all used memory before 
spawning children, you would use up some memory, but that would be shared 
between all children and thus not so bad.  But by doing this, you would 
hopefully cause all changing data to be localized to newly allocated memory 
by the children.  I wish someone with more Perl-guts experience could tell 
me whether that really is an idea that could work or not...


Shared memory is about 'text'/read-only memory pages which never get 
modified and pages that can get modified but as long as they aren't 
modified they are shared. Unfortunately (in this aspect, but fortunately 
for many other aspects) Perl is not a strongly-typed (or whatever you call 
it) language, therefore it's extremely hard to share memory, because in 
Perl almost everything is data. Though as you could see Bill was able to 
share 43M out of 50M which is damn good!

As a proof of concept I have run more than 100 200MB+ children on a 1 GB 
RAM machine and had sharing go up so high that top's number-of-bytes field 
for shared memory cycled through its 32-bit range multiple 
times...  ;-) .  It was _real_ fast (it had all of the data it needed as 
Perl hashes and lists) and ran ok until something would start an avalanche 
effect and it would all go down in a whirlwind of swapping.  So in the end, 
it didn't work reliably enough  ;-(  But man, was it fast when it ran...  ;-)


Elizabeth Mattijsen




Re: BerkeleyDB Problems

2002-03-12 Thread Paul Lindner

On Tue, Mar 12, 2002 at 11:06:00AM -0500, Mark Matthews wrote:
 Hello,
 
 I am moving a website that now resides on an i686 server running RedHat
 6.2 with perl v5.005_03 to another i686 server running Suse 7.1 with
 perl v5.6.1.

 The website uses a number of cgi scripts that read and write from
 BerkeleyDB files using the tie function.
 
 The site is currently running fine on the RedHat server, but when
 testing the scripts on the Suse box I am finding that the scripts are
 failing, complaining that the db file cannot be opened.

 The code calling tie() is as follows...
 $db = "blurb";
 tie(%BLURB, 'DB_File', $db, O_RDONLY, 0664) || die("Error: could not
 open $db: $!\n");
 
 Things I have tried so far:
 - I have checked that the BerkeleyDB file (blurb) is in the right path, and
 is readable/writable.
 - I have checked that the DB file is not corrupt by ftping it back to
 the RedHat box and testing it.  It works fine.
 - the command 'file blurb' shows that the db file is "Berkeley DB (Hash,
 version 5, native byte-order)", and my guess is that the version of DB_File
 cannot read that DB version.  I have installed earlier versions of
 DB_File on the Suse box with no luck.
 - I have successfully created a new db file using tie.  The file created
 is version 7.
 
 Since these scripts do in fact work on the RedHat server, what do I need
 to do to get them to work on the Suse server?

DB_File is usually built on top of whatever the latest available Berkeley 
DB is.  The file formats are usually not compatible from one 
major version to another.  I believe RH 6.2 uses bdb v2 
and Suse uses a newer version, 3 or 3.1.  Use the 'rpm -q -a' command to 
find out which versions are which.

Anyway, you need to try out the db_upgrade command; it should upgrade 
the db file to the latest format.  It might not be installed by 
default, so check your local docs.  (I think it's in db3-utils, or 
some such..)

You might also try out the BerkeleyDB module for access to more
interesting features provided by the later versions of this library.
In particular the transactions subsystem is very, very cool.
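As a rough command-line sketch of those steps (package and file names are examples; db_upgrade rewrites the file in place, so work on a copy):

```shell
# See which Berkeley DB packages are installed, and whether db_upgrade
# is on hand; fall back to a message where the tools are absent.
rpm -q -a 2>/dev/null | grep -i '^db' || echo "rpm not available here"
command -v db_upgrade || echo "db_upgrade not installed (try db3-utils)"
# then, after backing up the database file:
#   cp blurb blurb.bak && db_upgrade blurb
```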

Good Luck

-- 
Paul Lindner[EMAIL PROTECTED]   | | | | |  |  |  |   |   |

mod_perl Developer's Cookbook   http://www.modperlcookbook.org/
 Human Rights Declaration   http://www.unhchr.ch/udhr/



Re: loss of shared memory in parent httpd

2002-03-12 Thread Elizabeth Mattijsen

At 12:43 AM 3/13/02 +0800, Stas Bekman wrote:
Doug has plans for a much improved opcode tree sharing for mod_perl 2.0, 
the details are kept as a secret so far :)

Can't wait to see that!


 This topic is covered (will be) in the upcoming mod_perl book, where 
we include the following reference materials
 which you may find helpful for understanding the shared memory concepts.

Ah... ok...  can't wait for that either...  ;-)


Don't you love mod_perl for what it makes you learn :)

Well, yes and no...  ;-)


Elizabeth Mattijsen




Re: [OT?] What exactly is forwarding?

2002-03-12 Thread Perrin Harkins

Paul Lindner wrote:
 You'll find that $r->internal_redirect() is the mod_perl equivalent.
 Also Apache::ASP contains the Transfer() method, which accomplishes
 the same thing.

Personally, I always thought this was sort of a strange part of JSP.  It 
really shows the page-centric thinking behind it.  Doing a forward is 
essentially a GOTO statement for JSP.  When I write mod_perl apps, I 
never feel the need for that sort of thing, with so many better ways to 
accomplish things (OO, method calls, dispatch tables, template includes).

- Perrin




Debugging mod_perl

2002-03-12 Thread Nico Erfurth

Hi,


I'm creating an accounting system for my employer; the web frontend is 
created with mod_perl.

Today I had a big problem, and I don't know how to track it down.
After changing one of my tool-modules, apache segfaults on startup.

So, how can I debug something like this?
Are there any hooks inside of mod_perl I could use?
Do I need to debug apache itself?

Where could I start such debugging?

-- 
Mit freundlichen Grüßen
-
Nico Erfurth
Headlight Housingfactory GmbH
Email: [EMAIL PROTECTED]
-






Re: loss of shared memory in parent httpd

2002-03-12 Thread Perrin Harkins

Elizabeth Mattijsen wrote:
 Since Perl is basically all data, you would need to find a way of 
 localizing all memory that is changing to as few memory chunks as 
 possible.

That certainly would help.  However, I don't think you can do that in 
any easy way.  Perl doesn't try to keep compiled code on separate pages 
from variable storage.

- Perrin




Re: loss of shared memory in parent httpd

2002-03-12 Thread Graham TerMarsch

On Tuesday 12 March 2002 06:18, you wrote:
 I'm a heavy mod_perl user, running 3 sites as virtual servers, all with
 lots of custom Perl code.  My httpd's are huge (~50MB), but with the help
 of a startup file I'm able to get them sharing most of their
 memory (~43MB).  With the help of GTopLimit, I'm able to keep the memory
 usage under control.

 But... recently, something happened, and things have changed.  After
 some random amount of time (1 to 40 minutes or so, under load), the
 parent httpd suddenly loses about 7-10mb of share between it and any new
 child it spawns.  As you can imagine, the memory footprint of my httpds
 skyrockets and the delicate balance I set up is disturbed.  Also,
 GTopLimit is no help in this case - it actually causes flailing because
 each new child starts with memory sharing that is out of bounds and is
 thus killed very quickly.

We saw something similar here, running on Linux servers.  Turned out to be 
that if the server swapped hard enough to swap an HTTPd out, then you 
basically lost all the shared memory that you had.  I can't explain all of 
the technical details and the kernel-ness of it all, but from watching our 
own servers here this is what we saw on some machines that experienced 
quite a high load.

Our quick solution was first to reduce the number of mod_perls that 
we had running, using the proxy-front-end/modperl-back-end technique, and 
then to supplement that by adding another gig of RAM to the machine.

And yes, once you've lost the shared memory, there isn't a way to get it 
back as shared again.  And yes, I've also seen that when this happens 
that it could full well take the whole server right down the toilet with 
it (as then your ~800MB of shared memory becomes ~800MB of _physical_ 
memory needed, and that could throw the box into swap city).

-- 
Graham TerMarsch
Howling Frog Internet Development, Inc.   http://www.howlingfrog.com



Re: Debugging mod_perl

2002-03-12 Thread Stas Bekman

Nico Erfurth wrote:
 Hi,
 
 
 I'm creating an accounting system for my employer; the web frontend is 
 created with mod_perl.
 
 Today I had a big problem, and I don't know how to track it down.
 After changing one of my tool-modules, apache segfaults on startup.
 
 So, how can I debug something like this?
 Are there any hooks inside of mod_perl I could use?
 Do I need to debug apache itself?
 
 Where could I start such debugging?


Try to use:
http://perl.apache.org/guide/debug.html#Using_the_Interactive_Debugger
If you cannot figure out how to do it yourself, read the SUPPORT file in 
the mod_perl distribution and follow the instructions on how to send the 
gdb backtrace to the list.
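Concretely, the usual recipe looks something like this (paths are examples; -X keeps Apache in single-process mode so gdb stays attached to the process that segfaults):

```shell
# Run httpd under gdb in single-process mode and print a backtrace
# once it segfaults (adjust the paths for your installation).
gdb /usr/local/apache/bin/httpd <<'EOF'
run -X -f /usr/local/apache/conf/httpd.conf
bt
quit
EOF
```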


_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: Debugging mod_perl

2002-03-12 Thread Perrin Harkins

Nico Erfurth wrote:
 Today i had a big problem, and i don't know how to track it down.
 After changing one of my tool-modules apache segfaults on startup.
 
 So, how can i debug something like this?

Do you know exactly what you changed?  In that case, you have a small 
amount of code to look through for the problem.  Take stuff out until it 
stops segfaulting.  If you want some help figuring out why that part 
segfaults when you find it, post it here.

- Perrin




Re: File upload example

2002-03-12 Thread David Wheeler

On Tue, 2002-03-12 at 03:57, Rich Bowen wrote:

 my $form = Your::Class::form(); # Wherever you put this function
 if (my $file = $form->{UPLOAD}) {
 my $filename = $file->filename; # If you need the name

Actually, if you want the name, it's a really good idea to just get the
basename, since some browsers on some platforms (e.g., IE/Mac) send the
complete path name to the file on the browser's local file system (e.g.,
':Mac:Foo:Bar:image.jpg'). This is trickier than it sounds, because you
have to tell basename() what platform to assume the file is from. Here's
how I suggest doing it.

use File::Basename;
use HTTP::BrowserDetect;

my $browser = HTTP::BrowserDetect->new($ENV{HTTP_USER_AGENT});

# Tell File::Basename what platform to expect.
fileparse_set_fstype($browser->mac ? 'MacOS' :
                     $browser->windows ? 'MSWin32' :
                     $browser->dos ? 'MSDOS' :
                     $browser->vms ? 'VMS' :
                     $browser->amiga ? 'AmigaOS' :
                     $^O);

# Get the file name.
my $filename = basename($file->filename);

# Be sure to set File::Basename to the local file system again.
fileparse_set_fstype($^O);

HTH,

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
http://david.wheeler.net/  Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]



Serious bug, mixing mod-perl content

2002-03-12 Thread Miroslav Madzarevic



It seems that my mod-perl virtual hosts are mixing content :(
I don't know why.

I have virthost1 and virthost2 on mod-perl apache; most of the time you 
get the right content when calling the respective virthost, but sometimes 
when you call virthost2 you get the response from virtual host 1. This is 
a rare bug, but it happens.

We're using Mandrake Linux and it's 2 apaches (1 mod-perl enabled and the 
other without mod-perl - this one uses mod_proxy and mod_rewrite).

Can someone please direct me how I can solve this problem?

Best regards,

Miroslav Madzarevic, Senior Perl Programmer
[EMAIL PROTECTED]
Mod Perl Development  http://www.modperldev.com
Telephone: +381 64 1193 501
jamph

$_=",,.,,.,,,.,,,.,,,..,,.,,,.,.,,,";s/\s//gs;tr/,./05/;my(@a,$o,$i)=split//;$_=<DATA>;tr/~`'"^/0-4/;map{$o.=$a[$i]+$_;$i++}split//;@a=$o=~m!...!g;map{print chr}@a;
__DATA__
`~^`~~``^`~`~`^``~`~``''~^'`~^``'``^```~^``'```'~`~


Re: loss of shared memory in parent httpd

2002-03-12 Thread Perrin Harkins

Bill Marrs wrote:
 But... recently, something happened, and things have changed.  After 
 some random amount of time (1 to 40 minutes or so, under load), the 
 parent httpd suddenly loses about 7-10mb of share between it and any new 
 child it spawns.

One possible reason is that a perl memory structure in there might be 
changing.  Perl is able to grow variables dynamically by allocating 
memory in buckets, and it tends to be greedy when grabbing more.  You 
might trigger another large allocation by something as simple as 
implicitly converting a string to a number, or adding one element to an 
array.

Over time, I always see the parent process lose some shared memory.  My 
advice is to base your tuning not on the way it looks right after you 
start it, but on the way it looks after serving pages for a few hours. 
Yes, you will underutilize the box just after a restart, but you will 
also avoid overloading it when things get going.  I also recommend 
restarting your server every 24 hours, to reset things.

One more piece of advice: I find it easier to tune memory control with a 
single parameter.  Setting up a maximum size and a minumum shared size 
is not as effective as setting up a maximum *UNSHARED* size.  After all, 
it's the amount of real memory being used by each child that you care 
about, right?  Apache::SizeLimit has this now, and it would be easy to 
add to GTopLimit (it's just $SIZE - $SHARED).  Doing it this way helps 
avoid unnecessary process turnover.
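The difference is trivial to compute. A tiny illustrative helper (the names and the 10MB threshold are invented, not taken from Apache::SizeLimit or GTopLimit):

```perl
# Illustrative only: decide whether a child process exceeds a
# maximum *unshared* size, given total and shared sizes in bytes.
use strict;
use warnings;

sub over_unshared_limit {
    my ($size_bytes, $shared_bytes, $max_unshared_kb) = @_;
    my $unshared_kb = ($size_bytes - $shared_bytes) / 1024;
    return $unshared_kb > $max_unshared_kb ? 1 : 0;
}

# 20MB total, 14MB shared => 6MB unshared, under a 10MB cap
print over_unshared_limit(20 * 2**20, 14 * 2**20, 10 * 1024), "\n"; # 0
# 20MB total, 4MB shared => 16MB unshared, over the cap
print over_unshared_limit(20 * 2**20, 4 * 2**20, 10 * 1024), "\n";  # 1
```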

- Perrin




Re: Serious bug, mixing mod-perl content

2002-03-12 Thread Stephen Gray

Are you using 2 separate apache processes or 2 virtual hosts within the 
same apache process?

If it's the latter, according to Apache's documentation:
"If no matching virtual host is found, then the first listed virtual 
host that matches the IP address will be used."
(http://httpd.apache.org/docs/vhosts/name-based.html)

So it's possible that the client that is getting the wrong answers is
not specifying the Host: hostname parameter in the HTTP request and 
Apache is therefore sending the request to the first virtual host.
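A minimal name-based setup illustrating that behaviour (the hostnames are invented):

```
NameVirtualHost *

<VirtualHost *>
    # First listed vhost: also serves any request whose Host:
    # header is absent or matches no ServerName/ServerAlias.
    ServerName www.virthost1.example
</VirtualHost>

<VirtualHost *>
    ServerName www.virthost2.example
</VirtualHost>
```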

Steve

On Tue, 12 Mar 2002, Miroslav Madzarevic wrote:

 It seems that my mod-perl virtual hosts are mixing content :(
 I don't know why ?
 
 I have virthost1 and virthost2 on mod-perl apache, most of the time you get the 
right content when calling respective virthost but sometimes when you call virthost2 
you get response from virt. host 1. This is a rare bug but happens.
 
 We're using Mandrake Linux and it's 2 apache's (1 mod-perl enabled and the other 
without mod-perl - this one uses mod proxy and mod rewrite).
 
 Can someone please direct me how can I solve this problem ?
 -
 Best regards,
 
 Miroslav Madzarevic, Senior Perl Programmer
 [EMAIL PROTECTED]
 Mod Perl Development http://www.modperldev.com
 Telephone: +381 64 1193 501
 jamph
 
 

-- 

===
Stephen M. Gray
www.frontiermedia.net





Re: Serious bug, mixing mod-perl content

2002-03-12 Thread Stas Bekman

Miroslav Madzarevic wrote:
 It seems that my mod-perl virtual hosts are mixing content :(
 
 I don't know why ?
 
  
 
 I have virthost1 and virthost2 on mod-perl apache, most of the time you 
 get the right content when calling respective virthost but sometimes 
 when you call virthost2 you get response from virt. host 1. This is a 
 rare bug but happens.
 
  
 
 We're using Mandrake Linux and it's 2 apache's (1 mod-perl enabled and 
 the other without mod-perl - this one uses mod proxy and mod rewrite).
 
 Can someone please direct me how can I solve this problem ?

Could this be your problem?
http://perl.apache.org/guide/config.html#A_Script_From_One_Virtual_Host_C

-- 


_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: Cookies and redirects

2002-03-12 Thread Geoffrey Young


 Geoff: I think I did this with my own module with no success... I'd end
 up with an extra set of headers, if I was _lucky_...  

perhaps that is due to a general misunderstanding of err_headers_out -
they are sent _even_ on Apache errors (of which REDIRECT is considered
one), not _only_ on errors.  so, if you were setting headers_out to
capture normal transactions and err_headers_out for errors, you might
get an extra set of headers if you were not careful in your own coding
methodology.

 Also, when I got
 it to redirect OK, even when I saw the cookie, sometimes the browser
 would not eat the cookie properly...  I don't have more specific
 details, because this was months ago and the project was not (then)
 under CVS control (now it is, of course)...

well, details are good :) this sounds like a browser issue, though -
if you populate the err_headers_out table with a cookie it will be
presented to the client on a REDIRECT response.

nevertheless, Axel emailed me privately saying that err_headers_out()
solved his issues.

--Geoff



Re: BerkeleyDB Problems

2002-03-12 Thread Michael Robinton

 On Tue, Mar 12, 2002 at 11:06:00AM -0500, Mark Matthews wrote:
  Hello,
 
  I am moving a website that now resides on a i686 server running RedHat
  6.2 with perl v5.005_03 to another i686 server running Suse 7.1 with
  perl v5.6.1.

  The website uses a number of cgi scripts that read and write from
  BerkeleyDB files using the tie function.
 
  The site is currently running fine on the RedHat server, but when
  testing the scripts on the Suse box I am finding the the scripts are
  failing complaining that the db file cannot be opened.

  The function calling the script is as follows...
  $db = "blurb";
  tie(%BLURB, "DB_File", $db, O_RDONLY, 0664) || die("Error: could not
  open $db: $!\n");
 
  Things I have tried so far..
  - I have checked that the BerkeleyDB file (blurb) in the right path,
and
  is readable/writable.
  - I have checked  that the DB file is not corrupt by ftping it back to
  the RedHat box and testing it.. Works fine..
  - the command "file blurb" shows that the db file is "Berkeley DB (Hash,
  version 5, native byte-order)" and my guess is the version of DB_File
  version 5, native byte-order) and my guess is the version of DB_File
  cannot read that DB version.   I have installed earlier versions of
  DB_File on the Suse box with no luck.
  - I have successfully created a new db file using tie. The file
created
  is version 7.
 
  Since these scripts do infact work on the RedHat server, what do I
need
  to do to get them to work on the Suse server

 DB_File is usually implemented on top of whatever the latest
 Berkeley DB is available.  The file formats are usually not
 compatible from major version to another major version.  I believe
 RH 6.2 uses bdb v2 and Suse uses a newer version 3 or 3.1.  Use the
 rpm -q -a command to find out which versions are which.

 Anyway, you need to try out the db_upgrade command, it should
 upgrade the db file to the latest format.  It might not be installed
 by default, so check your local docs.  (I think it's in db3-utils,
 or some such..)

 You might also try out the BerkeleyDB module for access to more
 interesting features provided by the later versions of this library.
 In particular the transactions subsystem is very, very cool.

 Good Luck

 --

It's more complicated than that :-(

Newer versions of Linux (e.g. RedHat 6, SuSe 6) ship with a C library
that has version 2.x of Berkeley DB linked into it. My particular
version has 2.x with header files for 3.x. To make matters worse,
prior to Perl 5.6.1, the perl binary itself included the Berkeley DB
library. This has caused me some headaches which I've solved by
building and installing BDB v 4.x and including the following at the
beginning of the apachectl / httpsdctl file.

# fix up problem with C-lib database
export LD_PRELOAD=/usr/local/BerkeleyDB.4.0/lib/libdb.so

Set the LD_PRELOAD environment variable to point to the new shared
library, and Perl will use it instead of the version of Berkeley DB that
shipped with your Linux distribution.

Maybe this will work for you as well. If you use DBD, make sure to
re-install the DBD / DBI modules with the new pointers to the
database lib.

Michael Robinton
BizSystems
4600 El Camino Real - Ste 206
Los Altos, CA 94022
650-947-3351




Re: File upload example

2002-03-12 Thread Rich Bowen

On Tue, 12 Mar 2002, Stas Bekman wrote:

 Rich Bowen wrote:
  I am sure that this is a FAQ, but I had a very hard time finding
  examples of code for doing file upload. I wanted to post this here in
  order to have it in the permanent record so that other folks don't have
  to spend days figuring this out.

 Great Rich! I think we can do better than just keeping it in the
 archive. How about adding it here?
 http://perl.apache.org/guide/snippets.html
 If you like the idea, can you please make it a complete section and send
 it to list/me and I'll add it to the guide? Thanks!

Absolutely. I will try to do that later this week.

-- 
http://www.CooperMcGregor.com/
Apache Support and Training




Re: Serious bug, mixing mod-perl content

2002-03-12 Thread C.Hauser - IT assistance GmbH

Basel, Tuesday, 12 March 2002, 19:27:24
.


Hello Miroslav


Assuming, that you are using a handler, I ask you how you separate
each calls to your vhosts? I also assume that the vhosts are on the
same Apache runtime?

Probably you mix up namespaces ...


Best Regards Christian  -  [EMAIL PROTECTED]  -

.



== begin original ==
Date: Tuesday, 12 March 2002, 18:32:57
Subject: Serious bug, mixing mod-perl content

It seems that my mod-perl virtual hosts are mixing content :(
I don't know why ?

I have virthost1 and virthost2 on mod-perl apache, most of the time you get the right 
content when calling respective virthost but sometimes when you call virthost2 you get 
response from virt. host 1. This is a rare bug but happens.

We're using Mandrake Linux and it's 2 apache's (1 mod-perl enabled and the other 
without mod-perl - this one uses mod proxy and mod rewrite).

Can someone please direct me how can I solve this problem ?
-
Best regards,

Miroslav Madzarevic, Senior Perl Programmer
[EMAIL PROTECTED]
Mod Perl Development http://www.modperldev.com
Telephone: +381 64 1193 501
jamph


== end original ==




Re: File upload example

2002-03-12 Thread Robert Landrum

At 9:01 AM -0800 3/12/02, David Wheeler wrote:
On Tue, 2002-03-12 at 03:57, Rich Bowen wrote:

 my $form = Your::Class::form(); # Wherever you put this function
 if (my $file = $form-{UPLOAD}) {
 my $filename = $file-filename; # If you need the name

Actually, if you want the name, it's a really good idea to just get the
basename, since some browsers on some platforms (e.g., IE/Mac) send the
complete path name to the file on the browser's local file system (e.g.,
':Mac:Foo:Bar:image.jpg'). This is trickier than it sounds, because you
have to tell basename() what platform to assume the file is from. Here's
how I suggest doing it.

Since when?  I just wrote something that did just this (in CGI), but 
it only uploaded the basename.  I'm using Mac OS 9, IE 5.0.  That 
sounds a lot like IE 3.0.

The other way to go is

$filename = $1 if($filename =~ /[\:\/\\]([^\:\/\\]+)$/);

How many people use / : or \ in their paths?  Can we shoot those 
people?  I especially don't want people who use those characters 
uploading files that might be downloaded by someone on another 
platform.  Just think what would happen if I downloaded Foo:Bar.txt, 
as uploaded by my windows friends.
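For what it's worth, that one regex does cope with all three separator styles (the sample paths below are invented):

```perl
# Exercise the separator-agnostic regex on a few made-up paths:
# Mac (:), Windows (\), Unix (/), and a bare file name.
use strict;
use warnings;

for my $path (':Mac:Foo:Bar:image.jpg', 'C:\\Dir\\file.txt',
              '/tmp/a.png', 'plain.gif') {
    my $filename = $path;
    $filename = $1 if $filename =~ /[\:\/\\]([^\:\/\\]+)$/;
    print "$filename\n";
}
# prints: image.jpg, file.txt, a.png, plain.gif
```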

Rob

--
When I used a Mac, they laughed because I had no command prompt. When 
I used Linux, they laughed because I had no GUI.  



Apache::TicketAccess

2002-03-12 Thread Ray Recendez








I am new to perl/mod_perl
and I am trying to implement secure authentication with expirable
ticket/cookies on our website (Apache 1.3.9-Solaris 2.8). I am trying to use
Apache::TicketAccess with Apache 1.3.9, modssl, openssl, and mod_ssl installed
but I am having problems even though everything compiled and installed without
errors. It seems like Apache/mod_perl can't locate some of the *.pm files even
though I add the lib paths using "use lib". What is the difference between the
/usr/local/lib/perl5/5.6.1 directory and /usr/local/lib/perl/site_perl? Is
site_perl platform specific? Where should modules be installed? Can anyone see
any problems? Is there an easier/better solution? I know it's a lot of
questions, but I hit a wall and need some help.



[Thu Mar 7 23:12:43 2002] [error] [Thu Mar 7 23:12:43 2002] TicketAccess.pm: [Thu
Mar 7 23:12:43 2002]
TicketTool.pm: [Thu Mar 7 23:12:43
2002] TicketTool.pm: [Thu Mar 7
23:12:43 2002] MD5.pm: Can't locate loadable object for module MD5 in @INC
(@INC contains: /usr/local/apache/lib/perl/
/usr/local/lib/perl5/5.6.1//sun4-solaris /usr/local/lib/perl5/5.6.1/
/usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/
/usr/local/lib/perl5/site_perl/5.6.1//sun4-solaris
/usr/local/lib/perl5/site_perl/5.6.1/ /export/home
/usr/local/lib/perl5/5.6.1/sun4-solaris /usr/local/lib/perl5/5.6.1
/usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris
/usr/local/lib/perl5/site_perl/5.6.1 /usr/local/lib/perl5/site_perl
/usr/local/apache/lib/perl . /usr/local/apache/) at /usr/local/apache/lib/perl//Apache/TicketTool.pm
line 11

[Thu Mar 7 23:12:43 2002] TicketAccess.pm: [Thu
Mar 7 23:12:43 2002]
TicketTool.pm: [Thu Mar 7 23:12:43
2002] TicketTool.pm: [Thu Mar 7
23:12:43 2002] MD5.pm: Compilation failed in require at /usr/local/apache/lib/perl//Apache/TicketTool.pm
line 11.

[Thu Mar 7 23:12:43 2002] TicketAccess.pm: [Thu
Mar 7 23:12:43 2002]
TicketTool.pm: [Thu Mar 7 23:12:43
2002] TicketTool.pm: BEGIN failed--compilation aborted at
/usr/local/apache/lib/perl//Apache/TicketTool.pm line 11.

[Thu Mar 7 23:12:43 2002] TicketAccess.pm: [Thu
Mar 7 23:12:43 2002]
TicketTool.pm: Compilation failed in require at
/usr/local/apache/lib/perl/Apache/TicketAccess.pm line 11.

[Thu Mar 7 23:12:43 2002] TicketAccess.pm: BEGIN
failed--compilation aborted at
/usr/local/apache/lib/perl/Apache/TicketAccess.pm line 11.

Compilation failed
in require at (eval 220) line 3.



Here is /usr/local/apache/lib/perl/Apache/TicketAccess.pm:

#!/usr/local/bin/perl -w -I /usr/local/lib/perl5/site_perl/5.6.1:/usr/local/apache/lib/perl:/usr/local/apache/lib/perl/Apache

package Apache::TicketAccess;

# file: Apache/TicketAccess.pm

use strict;
use warnings;

use lib '/usr/local/lib/perl5/site_perl/5.6.1/';
use lib '/usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/';
use lib '/usr/local/lib/perl5/5.6.1/';
use lib '/usr/local/apache/lib/perl/';

use Apache::Constants qw(:common);
use Apache::TicketTool ();

sub handler {
    my $r = shift;
    my $ticketTool = Apache::TicketTool->new($r);
    my ($result, $msg) = $ticketTool->verify_ticket($r);

    unless ($result) {
        $r->log_reason($msg, $r->filename);
        my $cookie = $ticketTool->make_return_address($r);
        $r->err_headers_out->add('Set-Cookie' => $cookie);
        return FORBIDDEN;
    }

    return OK;
}

1;



Ray








Re: Serious bug, mixing mod-perl content

2002-03-12 Thread Ernest Lergon

 Miroslav Madzarevic wrote:
 
 I have virthost1 and virthost2 on mod-perl apache, most of the time
 you get the right content when calling respective virthost but
 sometimes when you call virthost2 you get response from virt. host 1.
 This is a rare bug but happens.
 
Do you have this in your httpd.conf?

<VirtualHost *>
ServerName www.virthost1.de
ServerAlias *.virthost1.de
ServerAlias virthost1.de
#   ...
</VirtualHost>

<VirtualHost *>
ServerName www.virthost2.de
ServerAlias *.virthost2.de
ServerAlias virthost2.de
#   ...
</VirtualHost>

If ServerAlias for the virthost2 is missing and the user types the
server name without 'www', the pages will be served from the first
server apache finds.

I use it like above -  just as wearing braces and belt ;-))

Ernest


-- 

*
* VIRTUALITAS Inc.   *  *
**  *
* European Consultant Office *  http://www.virtualitas.net  *
* Internationales Handelszentrum *   contact:Ernest Lergon  *
* Friedrichstraße 95 *mailto:[EMAIL PROTECTED] *
* 10117 Berlin / Germany *   ums:+49180528132130266 *
*




Re: Cookies and redirects

2002-03-12 Thread Hans Poo

El Mar 12 Mar 2002 11:23, Axel Andersson escribió:
 Hello,
 I'm having trouble with both setting a cookie and redirecting the user to
 another page at the same time. It would appear the cookie is only sent
 when a normal header is sent by server.

 If I do the following (having baked the cookie first), where $r is the
 Apache-request() object:

   $r->content_type("text/html; charset=iso-8859-1");
   $r->send_http_header();

 I get this header:

   Connection: close
   Date: Tue, 12 Mar 2002 10:39:05 GMT
   Server: Apache/1.3.23 (Unix) mod_perl/1.26
   Content-Type: text/html; charset=iso-8859-1
   Client-Date: Tue, 12 Mar 2002 10:39:05 GMT
   Client-Response-Num: 1
   Client-Transfer-Encoding: chunked
   Set-Cookie: user=12::7c786c222596437b; domain=animanga.nu; path=/;
 expires=Wed,
   12-Mar-2003 10:39:05 GMT

 Very nice and all, with cookie set. However, doing:

   $r->method_number(M_GET);
   $r->method("GET");
   $r->headers_in->unset("Content-length");
   $r->headers_out->add(Location => "/users.pl");
   $r->status(REDIRECT);
   $r->send_http_header();

 Which I gather is the normal way to redirect a user, I get this header:

   Connection: close
   Date: Tue, 12 Mar 2002 10:38:36 GMT
   Server: Apache/1.3.23 (Unix) mod_perl/1.26
   Content-Type: text/html; charset=iso-8859-1
   Client-Date: Tue, 12 Mar 2002 10:38:36 GMT
   Client-Response-Num: 1
   Client-Transfer-Encoding: chunked

 Right, no Set-cookie there. So what's up? How do I redirect a browser,
 and set a cookie at the same time?

 Thanks in advance,
 Axel Andersson

Have you tried printing the headers_out hashref after sending the http header 
to see if the cookie is there?

my $headers_out = $r->headers_out;
foreach (keys %$headers_out) {
    warn $_ => $headers_out->{$_};
}

Hans



Re: Apache::TicketAccess

2002-03-12 Thread Perrin Harkins

Ray Recendez wrote:
 I am new to perl/mod_perl and I am trying to implement secure 
 authentication with expirable ticket/cookies on our website (Apache 
 1.3.9-Solaris 2.8). I am trying to use Apache::TicketAccess with Apache 
 1.3.9, modssl, openssl, and mod_ssl installed but I am having problems 
 even though everything compiled and installed without errors. It seems 
 like Apache/mod_perl can't locate some of the *.pm files even though I 
 add the lib paths using "use lib."

The error message says it's looking for the MD5 module.  Do you have it?

 What is the difference between 
 /usr/local/lib/perl5/5.6.1 directory and /usr/local/lib/perl/site_perl?

The site_perl directory is for modules you install, as opposed to Perl's 
standard library.

 Is site_perl platform specific?

There are subdirectories under it for platform specific stuff.  Usually 
only XS modules will have anything there.

 Where should modules be installed?

The installation scripts for CPAN modules know where to install 
themselves: site_perl.
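You can see where your own perl draws that line using the core Config module:

```perl
# Where this perl keeps its standard library vs. site-installed modules.
use strict;
use warnings;
use Config;

print "version: $Config{version}\n";   # e.g. the 5.6.1 in the paths above
print "privlib: $Config{privlib}\n";   # Perl's standard library
print "sitelib: $Config{sitelib}\n";   # where CPAN modules install themselves
```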

If you need more information on module installation, I suggest checking 
out man perlmod and the CPAN FAQ.  There's also lots of info in the 
Programming Perl book.

- Perrin




Re: loss of shared memory in parent httpd

2002-03-12 Thread Bill Marrs

Thanks for all the great advice.

A number of you indicated that it's likely due to my apache processes being 
partially swapped to disk.  That seems likely to me.  I haven't had a 
chance to prove that point, but when it does it again and I'm around, I 
plan to test it with free/top (top has a SWAP column which should show if 
my apaches are swapped out at all).

I am in the process of getting a memory upgrade, so that should ease this 
problem.  Meanwhile, I can set MaxClients lower and see if that keeps me 
out of trouble as well.

I suspect adding the tux server disrupted the balance I had (apparently, I 
was tuned pretty close to my memory limits!)

Yes, I am running on Linux...

One more piece of advice: I find it easier to tune memory control with a 
single parameter.  Setting up a maximum size and a minumum shared size is 
not as effective as setting up a maximum *UNSHARED* size.  After all, it's 
the amount of real memory being used by each child that you care about, 
right?  Apache::SizeLimit has this now, and it would be easy to add to 
GTopLimit (it's just $SIZE - $SHARED).  Doing it this way helps avoid 
unnecessary process turnover.

I agree.  For me, with my ever more bloated Perl code, I find this unshared 
number to be easier to keep a lid on.  I keep my apache children under 10MB 
each unshared as you say.  That number is more stable that the 
SIZE/SHARED numbers that GTopLimmit offers.  But, I have the GTopLimit 
sources, so I plan to tweak them to allow for an unshared setting.  I think 
I bugged Stas about this a year ago and he had a reason why I was wrong to 
think this way, but I never understood it.

-bill




trouble with GTop and

2002-03-12 Thread Bill Marrs

When I install the recent Redhat 7.2 updates for glibc:

glibc-2.2.4-19.3.i386.rpm
glibc-common-2.2.4-19.3.i386.rpm
glibc-devel-2.2.4-19.3.i386.rpm

It breaks my Apache GTop-based Perl modules, in a way that I don't understand.

Here is are the error messages from my httpd/error_log:

[Tue Mar 12 15:44:37 2002] [error] Can't load 
'/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/GTop/GTop.so' for module 
GTop: /usr/lib/libgdbm.so.2: undefined symbol: gdbm_errno at 
/usr/lib/perl5/5.6.1/i386-linux/DynaLoader.pm line 206.
  at /usr/lib/perl5/site_perl/5.6.0/i386-linux/GTop.pm line 12
Compilation failed in require at 
/usr/lib/perl5/site_perl/5.6.0/Apache/GTopLimit.pm line 144.
BEGIN failed--compilation aborted at 
/usr/lib/perl5/site_perl/5.6.0/Apache/GTopLimit.pm line 144.
Compilation failed in require at /home/httpd/startup.pl line 32.
BEGIN failed--compilation aborted at /home/httpd/startup.pl line 32.
Compilation failed in require at (eval 1) line 1.

Syntax error on line 1017 of /etc/httpd/conf/httpd.conf:
Can't load '/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/GTop/GTop.so' 
for module GTop: /usr/lib/libgdbm.so.2: undefined symbol: gdbm_errno at 
/usr/lib/perl5/5.6.1/i386-linux/DynaLoader.pm line 206.
  at /usr/lib/perl5/site_perl/5.6.0/i386-linux/GTop.pm line 12
Compilation failed in require at 
/usr/lib/perl5/site_perl/5.6.0/Apache/GTopLimit.pm line 144.
BEGIN failed--compilation aborted at 
/usr/lib/perl5/site_perl/5.6.0/Apache/GTopLimit.pm line 144.
Compilation failed in require at /home/httpd/startup.pl line 32.
BEGIN failed--compilation aborted at /home/httpd/startup.pl line 32.
Compilation failed in require at (eval 1) line 1.


Anyone have a clue about what I'd need to do to get this working?  I am 
able to force the old glibc rpms back on to fix the problem.  The previous 
versions that work for me are:

glibc-common-2.2.4-13
glibc-devel-2.2.4-13
glibc-2.2.4-13

-bill







Re: File upload example

2002-03-12 Thread D. Hageman


Mozilla sends a fully qualified path name ... just an FYI ...

On Tue, 12 Mar 2002, Robert Landrum wrote:

 At 9:01 AM -0800 3/12/02, David Wheeler wrote:
 On Tue, 2002-03-12 at 03:57, Rich Bowen wrote:
 
  my $form = Your::Class::form(); # Wherever you put this function
  if (my $file = $form-{UPLOAD}) {
  my $filename = $file-filename; # If you need the name
 
 Actually, if you want the name, it's a really good idea to just get the
 basename, since some browsers on some platforms (e.g., IE/Mac) send the
 complete path name to the file on the browser's local file system (e.g.,
 ':Mac:Foo:Bar:image.jpg'). This is trickier than it sounds, because you
 have to tell basename() what platform to assume the file is from. Here's
 how I suggest doing it.
 
 Since when?  I just wrote something that did just this (in CGI), but 
 it only uploaded the basename.  I'm using Mac OS 9, IE 5.0.  That 
 sounds a lot like IE 3.0.
 
 The other way to go is
 
 $filename = $1 if($filename =~ /[\:\/\\]([^\:\/\\]+)$/);
 
 How many people use / : or \ in their paths?  Can we shoot those 
 people?  I especially don't want people who use those characters 
 uploading files that might be downloaded by someone on another 
 platform.  Just think what would happen if I downloaded Foo:Bar.txt, 
 as uploaded by my windows friends.
 
 Rob
 
 --
 When I used a Mac, they laughed because I had no command prompt. When 
 I used Linux, they laughed because I had no GUI.  
 

-- 
//\\
||  D. Hageman[EMAIL PROTECTED]  ||
\\//




Re: trouble with GTop and

2002-03-12 Thread Perrin Harkins

Bill Marrs wrote:
 When I install the recent Redhat 7.2 updates for glibc:
 
 glibc-2.2.4-19.3.i386.rpm
 glibc-common-2.2.4-19.3.i386.rpm
 glibc-devel-2.2.4-19.3.i386.rpm
 
 It breaks my Apache GTop-based Perl modules, in a way that I don't 
 understand.
[...]
 Anyone have a clue about what I'd need to do to get this working?

Re-install/re-compile GTop?  RedHat probably changed some of the Gnome 
libs that it uses.




Re: File upload example

2002-03-12 Thread John Saylor

Hi

( 02.03.12 06:57 -0500 ) Rich Bowen:
 Comments welcome, YMMV, Caveat Emptor, and all that.

I have found that some browsers put the file in the value matching the
parameter name instead of putting a file upload object there. So your
code should check the value to see if it is a path AND a one liner
BEFORE trying to create the file upload object.

-- 
\js extend scalable infomediaries



Cache::SharedMemoryCache locking up on Solaris

2002-03-12 Thread Chris Allen

I have recently upgraded our modperl based web server
to use Cache::SharedMemoryCache. The code is changed very
little from in the docs - retrieve an entry from the 
cache if it is there, otherwise get an entry from the
database and store it in the cache.

However, after about fifteen minutes of moderate use - probably
with about 50 2-entry arrays stored in the cache and about
1 or so cache hits, the whole system ground to a halt.

You could do $cache=Cache::SharedMemoryCache->new() fine,
but doing:

$cache->set('foo','bar');

would never return.

The server is a Sun E220 running Solaris 7, with Apache 1.3.20
and modperl 1.26

The same configuration on a Linux 2.4 box works fine - over a
soak test of many hundreds of thousands of hits.


If anybody has any ideas, I'd be glad to hear from you!


In desperation, I have switched to Cache::FileCache - which
works fine, but I would be interested to know, for a system
that handles several hundred database queries per minute:

- What is the performance difference between SharedMemoryCache
and FileCache?

- What is the performance difference between FileCache and a
local MySQL database (on a simple indexed query)?

- What about with a LAN connected MySQL database running on
its own machine?

- The SharedMemoryCache docs say that it shouldn't be used
for large amounts of information. What size would be considered
large on a machine with 4GB RAM?


many thanks,


Chris Allen
[EMAIL PROTECTED]



RE: Apache::TicketAccess

2002-03-12 Thread Ray Recendez


-Original Message-
From: Perrin Harkins [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 12, 2002 12:16 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Apache::TicketAccess

Ray Recendez wrote:
 I am new to perl/mod_perl and I am trying to implement secure
 authentication with expirable ticket/cookies on our website (Apache
 1.3.9-Solaris 2.8). I am trying to use Apache::TicketAccess with Apache
 1.3.9, modssl, openssl, and mod_ssl installed but I am having problems
 even though everything compiled and installed without errors. It seems
 like Apache/mod_perl can't locate some of the *.pm files even though I
 add the lib paths using "use lib."

The error message says it's looking for the MD5 module.  Do you have it?

 What is the difference between
 /usr/local/lib/perl5/5.6.1 directory and /usr/local/lib/perl/site_perl?

The site_perl directory is for modules you install, as opposed to Perl's
standard library.

 Is site_perl platform specific?

There are subdirectories under it for platform specific stuff.  Usually
only XS modules will have anything there.

 Where should modules be installed?

The installation scripts for CPAN modules know where to install
themselves: site_perl.

If you need more information on module installation, I suggest checking
out man perlmod and the CPAN FAQ.  There's also lots of info in the
Programming Perl book.

- Perrin

Yes I have MD5 installed. However, MD5.pm is located in the following
locations: /usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/MD5.pm ;
/usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/MD5/MD5.pm ; and
/usr/local/lib/perl5/site_perl/5.6.1/MD5.pm. Which one is correct? Is there
another similar authentication package or is Apache::TicketAccess the best
one out there.

Thanks,
Ray




Re: Apache::TicketAccess

2002-03-12 Thread Perrin Harkins

Ray Recendez wrote:
 Yes I have MD5 installed. However, MD5.pm is located in the following
 locations: /usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/MD5.pm ;
 /usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/MD5/MD5.pm ; and
 /usr/local/lib/perl5/site_perl/5.6.1/MD5.pm. Which one is correct?

All of them.  There are platform-specific parts installed under the 
paths with solaris in them.

Does it work when you use it from command-line?

perl -MMD5 -e 'print "ok\n";'

 Is there
 another similar authentication package or is Apache::TicketAccess the best
 one out there.

I've never used Apache::TicketAccess, but it looks fine.  Anyway, you 
aren't having problems with Apache::TicketAccess, you're having problems 
with MD5.  Any auth scheme is likely to want a working MD5.

- Perrin




RE: Apache::TicketAccess

2002-03-12 Thread Ray Recendez


-Original Message-
From: Perrin Harkins [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 12, 2002 12:48 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Apache::TicketAccess

Ray Recendez wrote:
 Yes I have MD5 installed. However, MD5.pm is located in the following
 locations: /usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/MD5.pm ;
 /usr/local/lib/perl5/site_perl/5.6.1/sun4-solaris/MD5/MD5.pm ; and
 /usr/local/lib/perl5/site_perl/5.6.1/MD5.pm. Which one is correct?

All of them.  There are platform-specific parts installed under the
paths with solaris in them.

Does it work when you use it from command-line?

perl -MMD5 -e 'print "ok\n";'

 Is there
 another similar authentication package or is Apache::TicketAccess the best
 one out there.

I've never used Apache::TicketAccess, but it looks fine.  Anyway, you
aren't having problems with Apache::TicketAccess, you're having problems
with MD5.  Any auth scheme is likely to want a working MD5.

- Perrin

Running it from the command line seems to work:
rift_root> perl -MMD5 -e 'print "ok\n";'
ok
rift_root>

--Ray




Re: Cache::SharedMemoryCache locking up on Solaris

2002-03-12 Thread Perrin Harkins

Chris Allen wrote:
 In desperation, I have switched to Cache::FileCache - which
 works fine, but I would be interested to know, for a system
 that handles several hundred database queries per minute:
 
 - What is the performance difference between SharedMemoryCache
 and FileCache?

You can test it yourself with the supplied benchmark script.  In 
general, file caching is faster for most applications.

 - What is the performance difference between FileCache and a
 local MySQL database (on a simple indexed query)?

FileCache will probably beat MySQL, but maybe not by much.  There were 
some benchmarks posted here a while back which might interest you.  The 
thread starts here:
http://marc.theaimsgroup.com/?l=apache-modperl&m=10081212375&w=2

- Perrin
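For reference, the Cache::Cache interface being compared here looks roughly like this; the namespace and expiry values are illustrative, not from the thread:

```perl
use strict;
use warnings;
use Cache::FileCache;

# A file-backed cache; entries are spooled to a directory on disk.
my $cache = Cache::FileCache->new({
    namespace          => 'query_results',   # illustrative name
    default_expires_in => 600,               # seconds
});

# Cache a (hypothetical) database row under a key...
$cache->set('user:42', { name => 'alice' });

# ...and read it back; get() returns undef on a miss or after expiry.
my $row = $cache->get('user:42');
print $row->{name}, "\n" if $row;
```

Cache::SharedMemoryCache is a drop-in replacement with the same set/get interface, which makes the supplied benchmark script an apples-to-apples comparison.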




Re: Apache::TicketAccess

2002-03-12 Thread Perrin Harkins

Ray Recendez wrote:
 Running it from the command line seems to work:
 rift_root> perl -MMD5 -e 'print "ok\n";'
 ok

Is it possible that you may have installed this module using a different 
compiler from the one you used for mod_perl?  or maybe built mod_perl 
against a different perl installation?

Also, take all of those 'use lib' statements out of your script.  If 
you are using the perl installed at /usr/local/lib/perl5/5.6.1/ and 
those things aren't in your @INC already, you have serious problems with 
your installation and should probably rebuild perl and mod_perl from 
scratch.

- Perrin




performance testing - emulating real world use

2002-03-12 Thread Bryan Henry

Anyone know of good guides or general info on 
performance testing and emulating real use of 
an application.

I would like to understand how to identify 
potential bottlenecks before I deploy web apps.

thank you,
~ b r y a n





Re: performance testing - emulating real world use

2002-03-12 Thread clayton cottingham

Bryan Henry wrote:
 
 Anyone know of good guides or general info on
 performance testing and emulating real use of
 an application.
 
 I would like to understand how to identify
 potential bottlenecks before I deploy web apps.
 
 thank you,
 ~ b r y a n


try httpd.apache.org/test/

and the Perl framework
therein

as well look on freshmeat for

siege
it does testing too



Re: performance testing - emulating real world use

2002-03-12 Thread Paul Lindner

On Tue, Mar 12, 2002 at 01:52:36PM -0800, clayton cottingham wrote:
 Bryan Henry wrote:
  
  Anyone know of good guides or general info on
  performance testing and emulating real use of
  an application.
  
  I would like to understand how to identify
  potential bottlenecks before I deploy web apps.
  
  thank you,
  ~ b r y a n

I've used HTTPD::Bench::ApacheBench (available on CPAN) to do load
testing.  It seems to do a good job.  The hardest part is writing the
testing script  (especially for form transactions..).  

However, if you can do your requests with LWP it's fairly
straightforward to convert over to the ApacheBench data structures.

I'm considering writing a little mod_perl proxy server that records
the different transactions.  Then I could just munge the separate
Authorization: headers to do some serious load testing...

 
 try httpd.apache.org/test/
 
 and the Perl framework
 therein
 
 as well look on freshmeat for
 
 siege
 it does testing too

-- 
Paul Lindner[EMAIL PROTECTED]   | | | | |  |  |  |   |   |

mod_perl Developer's Cookbook   http://www.modperlcookbook.org/
 Human Rights Declaration   http://www.unhchr.ch/udhr/



Re: performance testing - emulating real world use

2002-03-12 Thread Andrew Ho

Heyas,

BH>Anyone know of good guides or general info on 
BH>performance testing and emulating real use of 
BH>an application.

As a general rule, it's easiest if you have a production system already
running. Record all information that you need to reproduce the requests
(typically, HTTP request headers and POST data if applicable), from a
production server and you can replay any amount of data on a sandboxed QA
environment. You can either eliminate or proportionally shorten the time
period between requests to space out load arbitrarily.

This is extremely effective if you have enough real user data because
you're not inventing user load. You're using real user load.

I don't know of any product that does this all at once, but it's not hard
to hack together. If your site is entirely GET based, you can probably
just make do with parsing access logs and turning those into requests. I
believe Apache::DumpHeaders might get you most of the way on the capturing
side if you need special headers, cookies, or POST information.
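A homebrew version of the GET-replay idea is only a few lines of Perl. This is a sketch: the QA base URL and log path are hypothetical, and a realistic test would also preserve (or proportionally shrink) the inter-request timing as described above:

```perl
use strict;
use warnings;

# Pull the requested URIs out of Common Log Format lines.
sub requests_from_log {
    my ($fh) = @_;
    my @uris;
    while (my $line = <$fh>) {
        push @uris, $1 if $line =~ m{"GET (\S+) HTTP/[\d.]+"};
    }
    return @uris;
}

# Replay against a sandboxed QA box; base URL and log path are hypothetical.
if (open my $log, '<', 'access_log') {
    require LWP::UserAgent;
    my $ua = LWP::UserAgent->new;
    for my $uri (requests_from_log($log)) {
        my $res = $ua->get('http://qa.example.com' . $uri);
        printf "%s %s\n", $res->code, $uri;
    }
}
```

POST bodies and cookies are the part this simple approach misses, which is where something like Apache::DumpHeaders comes in.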

Feeding scripts into commercial products like SilkPerformer will give you
the best client side testing and reports. However, a homebrew Perl user
agent will do okay, too. Unfortunately, ab doesn't support taking in a
custom URL workload.

For a simple record/replay load test tool that works remarkably well,
check out the resource CD that ships with Windows 2000 and you will find
the Microsoft Web Stress Tester. It's free and GUI based and can record IE
sessions and replay them with an arbitrary number of threads. It uses
Access databases to hold the tests and results so you can probably use
Perl on Windows to populate it with your custom tests.

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
--




Re: File upload example

2002-03-12 Thread Rich Bowen

On Tue, 12 Mar 2002, John Saylor wrote:

 Hi

 ( 02.03.12 06:57 -0500 ) Rich Bowen:
  Comments welcome, YMMV, Caveat Emptor, and all that.

 I have found that some browsers put the file in the value matching the
 parameter name instead of putting a file upload object there. So your
 code should check the value to see if it is a path AND a one liner
 BEFORE trying to create the file upload object.

That's not really necessary, as Apache::Request does that for you. If
the upload method fails, then you won't get anything in the UPLOAD key.
The generic form handler does not know what field(s) in your form were
file upload forms, and so this method just lets you check the one key
(UPLOAD) and, if it is defined, then you know you got something.

Hopefully, *all* browsers put the file name in the parameter, since that
is the defined behavior. However, regardless of this, the file upload
object contains all the necessary information to reconstruct the file,
so you don't even have to look in that field.
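The single-key check described above can be sketched as a mod_perl 1.x handler. The package name, field name, and target path are hypothetical, and a real handler must sanitize the client-supplied filename before using it:

```perl
package My::UploadHandler;   # hypothetical package name
use strict;
use Apache::Constants qw(OK SERVER_ERROR);
use Apache::Request ();

sub handler {
    my $r   = shift;
    my $apr = Apache::Request->new($r);

    # upload() returns undef unless the browser really sent a file part,
    # so this one check also covers the broken-browser case.
    if (my $upload = $apr->upload('file')) {
        my $in   = $upload->fh;         # filehandle on the spooled upload
        my $name = $upload->filename;   # client-supplied; do NOT trust as a path
        open my $out, '>', "/tmp/upload.$$" or return SERVER_ERROR;
        local $/;                       # slurp mode for the copy
        print {$out} <$in>;
        close $out;
    }

    $r->send_http_header('text/plain');
    $r->print("done\n");
    return OK;
}
1;
```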

-- 
Rich Bowen
Apache Administrators Handbook
ApacheAdmin.com




Re: loss of shared memory in parent httpd

2002-03-12 Thread Ask Bjoern Hansen

On Tue, 12 Mar 2002, Graham TerMarsch wrote:

[...] 
 We saw something similar here, running on Linux servers.  Turned out to be 
 that if the server swapped hard enough to swap an HTTPd out, then you 
 basically lost all the shared memory that you had.  I can't explain all of 
 the technical details and the kernel-ness of it all, but from watching our 
 own servers here this is what we saw on some machines that experienced 
 quite a high load.
 
 Our quick solution was first to reduce the number of mod_perls that 
 we had running, using the proxy-front-end/modperl-back-end technique,

You should always do that. :-)

 and then supplemented that by adding another Gig of RAM to the
 machine.

 And yes, once you've lost the shared memory, there isn't a way to get it 
 back as shared again.  And yes, I've also seen that when this happens 
 that it could full well take the whole server right down the toilet with 
 it (as then your ~800MB of shared memory becomes ~800MB of _physical_ 
 memory needed, and that could throw the box into swap city).

I forwarded this mail to one of the CitySearch sysadmins who had
told me about seeing this.  He is seeing the same thing (using kernel
2.4.17), except that if he disables swap then the processes will get
back to reporting more shared memory.  So maybe it's really just
GTop or the kernel reporting swapped stuff in an odd way.

No, I can't explain the nitty gritty either. :-)

Someone should write up a summary of this thread and ask in a
technical linux place, or maybe ask Dean Gaudet.


 - ask 

-- 
ask bjoern hansen, http://ask.netcetera.dk/ !try; do();




Re: performance testing - emulating real world use

2002-03-12 Thread Ask Bjoern Hansen

On Tue, 12 Mar 2002, Andrew Ho wrote:

[...]
 This is extremely effective if you have enough real user data because
 you're not inventing user load. You're using real user load.

Not really; you also have to emulate the connection speeds of the
users.  Or do the tools you mentioned do that?


 - ask
 
-- 
ask bjoern hansen, http://ask.netcetera.dk/ !try; do();
more than a billion impressions per week, http://valueclick.com




Re: loss of shared memory in parent httpd

2002-03-12 Thread Tom Brown

 No, I can't explain the nitty gritty either. :-)
 
 Someone should write up a summary of this thread and ask in a
 technical linux place, or maybe ask Dean Gaudet.

I believe this is a linux/perl issue... stand-alone daemons exhibit the
same behaviour... e.g. if you've got a parent Perl daemon that
fork()s ...  swapping in data from a child does _not_ have any
effect on other copies of that memory. I'm sure swapping in the
memory of the parent before fork()ing would be fine.

Admittedly, my experience is from old linux kernels (2.0), but I
would not be surprised if current ones are similar.

I'm sure it is the same on some other platforms, but I haven't used much
else for a long time.

--
[EMAIL PROTECTED]   | Put all your eggs in one basket and 
http://BareMetal.com/  |  WATCH THAT BASKET!
web hosting since '95  | - Mark Twain




Re: performance testing - emulating real world use

2002-03-12 Thread Andrew Ho

Hello,

ABH>Not really; you also have to emulate the connection speeds of the
ABH>users. Or do the tools you mentioned do that?

Both of the commercially produced tools I mentioned (SilkPerformer and the
free Microsoft Web Stress program) can throttle bandwidth. Rolling your
own is a bunch harder.

So you're correct. My point though is not so much that the load profile of
what pages get loaded in what order, and what data calls and dynamic
scripts are run in what order are genuine. If you simulate the timing
between requests, you'll even get spikes that are similar to the real
thing. It's definitely not reality! You also miss anomalies like users
closing browsers and (unless you capture full headers) which clients
support keep-alives for example. But, it's closer to reality than most
scripts that are invented (especially be developers ;)).

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
--




Re: performance testing - emulating real world use

2002-03-12 Thread Andrew Ho

Hello,

AH>So you're correct. My point though is not so much that the load profile of
AH>what pages get loaded in what order, and what data calls and dynamic
AH>scripts are run in what order are genuine. If you simulate the timing
AH>between requests, you'll even get spikes that are similar to the real
AH>thing. It's definitely not reality! You also miss anomalies like users
AH>closing browsers and (unless you capture full headers) which clients
AH>support keep-alives for example. But, it's closer to reality than most
AH>scripts that are invented (especially be developers ;)).

Man, I can't type worth anything today. The gist of what I meant to type
was this: the exact load of the production server will not be replicated
in your simulation; but the load from data calls and dynamically generated
content will be similar in nature, patterned after how your server is hit
in real life. This will likely be a better exercise of your server than a
developer-invented test script.

If you have a production environment (or proxy) set up that can capture
real user requests, this is also far less work for creating a convincing
simulation load than having to sit down and write a new script every time 
your application changes.

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
--




Re: loss of shared memory in parent httpd

2002-03-12 Thread Stas Bekman

Bill Marrs wrote:

 One more piece of advice: I find it easier to tune memory control with 
a single parameter.  Setting up a maximum size and a minimum shared 
 size is not as effective as setting up a maximum *UNSHARED* size.  
 After all, it's the amount of real memory being used by each child 
 that you care about, right?  Apache::SizeLimit has this now, and it 
 would be easy to add to GTopLimit (it's just $SIZE - $SHARED).  Doing 
 it this way helps avoid unnecessary process turnover.
 
 
 I agree.  For me, with my ever more bloated Perl code, I find this 
 unshared number to be easier to keep a lid on.  I keep my apache 
 children under 10MB each unshared as you say.  That number is more 
stable than the SIZE/SHARED numbers that GTopLimit offers.  But I have 
 the GTopLimit sources, so I plan to tweak them to allow for an unshared 
 setting.  I think I bugged Stas about this a year ago and he had a 
 reason why I was wrong to think this way, but I never understood it.

I don't remember why I was arguing against it :) But in any case,
I'll simply add this third option, so you can control it by either
SIZE/SHARED or UNSHARED.
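The unshared figure both limits are talking about is just size minus shared, which GTop can report per process. A minimal sketch (the 10MB cap is the illustrative number from the post above, and the child_terminate comment marks where a real limiter would act):

```perl
use strict;
use warnings;
use GTop ();

my $gtop     = GTop->new;
my $pm       = $gtop->proc_mem($$);          # memory stats for this process
my $unshared = $pm->size - $pm->share;       # bytes not shared with siblings

my $MAX_UNSHARED = 10 * 1024 * 1024;         # 10MB, per the post above
if ($unshared > $MAX_UNSHARED) {
    # In a real mod_perl cleanup handler you would call $r->child_terminate.
    warn "child $$ over unshared limit ($unshared bytes)\n";
}
```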
_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: performance testing - emulating real world use

2002-03-12 Thread Jauder Ho


Another application (commercial) is Mercury Interactive's LoadRunner. It
actually records events and plays it back on load generator machines.
It's fairly complex, has LOTs of knobs to turn and can load test quite a
bit more than just web apps, I use it to load test/benchmark Oracle 11i
for instance. The software is not cheap but definitely worth looking into
if you are serious about testing. (www.merc-int.com)

They also sell something called ActiveTest which may be more suited to
web applications. In this case, they will test your site for you using
their hardware at a colo site.

--Jauder

On Tue, 12 Mar 2002, Andrew Ho wrote:

 Heyas,

 BH>Anyone know of good guides or general info on
 BH>performance testing and emulating real use of
 BH>an application.

 As a general rule, it's easiest if you have a production system already
 running. Record all information that you need to reproduce the requests
 (typically, HTTP request headers and POST data if applicable), from a
 production server and you can replay any amount of data on a sandboxed QA
 environment. You can either eliminate or proportionally shorten the time
 period between requests to space out load arbitrarily.

 This is extremely effective if you have enough real user data because
 you're not inventing user load. You're using real user load.

 I don't know of any product that does this all at once, but it's not hard
 to hack together. If your site is entirely GET based, you can probably
 just make do with parsing access logs and turning those into requests. I
 believe Apache::DumpHeaders might get you most of the way on the capturing
 side if you need special headers, cookies, or POST information.

 Feeding scripts into commercial products like SilkPerformer will give you
 the best client side testing and reports. However, a homebrew Perl user
 agent will do okay, too. Unfortunately, ab doesn't support taking in a
 custom URL workload.

 For a simple record/replay load test tool that works remarkably well,
 check out the resource CD that ships with Windows 2000 and you will find
 the Microsoft Web Stress Tester. It's free and GUI based and can record IE
 sessions and replay them with an arbitrary number of threads. It uses
 Access databases to hold the tests and results so you can probably use
 Perl on Windows to populate it with your custom tests.

 Humbly,

 Andrew

 --
 Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
 Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
 Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
 --






Re: [OT?] What exactly is forwarding?

2002-03-12 Thread Martin Haase-Thomas

Thank you both. I thought so: it is one of those typical JSP features - 
but now it works in Perl, too ;)

Cheers
Martin


Perrin Harkins wrote:

 Paul Lindner wrote:

You'll find that $r->internal_redirect() is the mod_perl equivalent.
Also Apache::ASP contains the Transfer() method which accomplishes
the same thing.
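A minimal sketch of that internal_redirect() call as a mod_perl 1.x handler (the package name and target URI are hypothetical):

```perl
package My::Forward;   # hypothetical
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    # As with jsp:forward, the client never sees the new URI; the
    # subrequest runs server-side and its output becomes the response.
    $r->internal_redirect('/real-page.html');
    return OK;
}
1;
```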


 Personally, I always thought this was sort of a strange part of JSP.  
 It really shows the page-centric thinking behind it.  Doing a 
 forward is essentially a GOTO statement for JSP.  When I write 
 mod_perl apps, I never feel the need for that sort of thing, with so 
 many better ways to accomplish things (OO, method calls, dispatch 
 tables, template includes).

 - Perrin



-- 
   http://www.meome.de
---
Martin Haase-Thomas |   Tel.: 030 43730-558
meOme AG|   Fax.: 030 43730-555
Software Development|   [EMAIL PROTECTED]
---






Re: performance testing - emulating real world use

2002-03-12 Thread Matt Sergeant

On Tue, 12 Mar 2002, Jauder Ho wrote:


 Another application (commercial) is Mercury Interactive's LoadRunner. It
 actually records events and plays it back on load generator machines.
 It's fairly complex, has LOTs of knobs to turn and can load test quite a
 bit more than just web apps, I use it to load test/benchmark Oracle 11i
 for instance. The software is not cheap but definitely worth looking into
 if you are serious about testing. (www.merc-int.com)

  They also sell something called ActiveTest which may be more suited to
  web applications. In this case, they will test your site for you using
  their hardware at a colo site.

Before anyone even looks into this, be warned they quoted me £50,000 once
for LoadRunner. Needless to say I was flabbergasted (though their software
did look kinda cool).

-- 
<!-- Matt -->
<:->Get a smart net</:->




Re: performance testing - emulating real world use

2002-03-12 Thread Jauder Ho


Heh. Forgot to state that it does cost an arm and a leg but it's one of
the few software packages that is worth considering paying money for IMO.

However, with the economy being the way it is, it is possible to rent
the software for a period of time but this is done by special arrangement
on a case by case basis.

If you ask, they may be willing to give you a copy of the software. The
out-the-box install allows you to record and playback as well as load up
to 10 (iirc) users which nicely lets you test out the functionality. I
think I have a copy of it somewhere around here. If there is interest, I
can get you in touch with the right people.

--Jauder

On Wed, 13 Mar 2002, Matt Sergeant wrote:

 On Tue, 12 Mar 2002, Jauder Ho wrote:

 
  Another application (commercial) is Mercury Interactive's LoadRunner. It
  actually records events and plays it back on load generator machines.
  It's fairly complex, has LOTs of knobs to turn and can load test quite a
  bit more than just web apps, I use it to load test/benchmark Oracle 11i
  for instance. The software is not cheap but definitely worth looking into
  if you are serious about testing. (www.merc-int.com)
 
  They also sell something called ActiveTest which may be more suited to a
  web applications. In this case, they will test your site for you using
  their hardware at a colo site.

 Before anyone even looks into this, be warned they quoted me £50,000 once
 for LoadRunner. Needless to say I was flabbergasted (though their software
 did look kinda cool).

 --
 <!-- Matt -->
 <:->Get a smart net</:->