strange memory-problem
something is wrong here. apache 2.0.45, mod_perl 1.99_08 and perl 5.8.0. The following little script running under ModPerl::Registry consumes all the memory and does not release it back. It's not a classical memleak, cause the consumed mem does not increase. If running in single-server mode the consumed mem stays at approx. 200M, but if I run apache in standard prefork mode and call the script a few times, every apache child wants 200M; it eats up all my mem and in error_log I get: Out of memory! Callback called exit.

    #!/usr/bin/perl -w
    use strict;
    use CGI;

    run();

    sub run {
        my @x = ();
        $#x = 5000;    # this is the big one
        my $q = new CGI;
        print $q->header(), $q->start_html();
        print int(rand(1));
        print $q->end_html(), "\n";
        # my @x=();
        # undef @x;
    }

I tried to set @x=() at the end of the sub, which did not change anything. If I explicitly undef @x, the memory is released. I triple-checked my logs for any 'stay not shared' or 'nested subroutine' or whatever warning, and I'm stubbornly convinced that there is a problem. imho apache should release the memory back when the sub is exited. Please tell me where my brain-error is.

thnx, peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
http://www.goldfisch.at
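A minimal sketch of the distinction the poster ran into (a standalone illustration, not the original script): assigning an empty list empties an array but perl may keep the array's allocated storage around for reuse, while `undef` releases that storage back to perl's allocator as well. In a persistent environment like mod_perl, storage kept for reuse lives as long as the interpreter:

```perl
#!/usr/bin/perl -w
use strict;

sub run {
    my @x = ();
    $#x = 5000;    # pre-extend: perl allocates slots for 5001 elements
    # ... use @x ...
    @x = ();       # empties the array; the allocated slot buffer may be kept
    undef @x;      # additionally releases the array's storage
}
run();
```

Note that even after `undef`, whether the memory returns to the operating system depends on the malloc implementation in use; `undef` only hands it back to perl.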
Re: cgi.pm does not work in handlers (why responsehandlers at all ?)
insignificant if your programs are big enough. If you want to explore more, you will find this information and a lot more at http://perl.apache.org/docs/.

> I discovered replacements like Apache::Request but I'm not sure if this would work inside a handler.

Apache::Request works as a drop-in replacement for CGI.pm's request parsing for mp1. It's not yet available for mp2.

__
Stas Bekman     JAm_pH -- Just Another mod_perl Hacker
http://stason.org/      mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org http://ticketmaster.com

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
http://www.goldfisch.at
cgi.pm does not work in handlers (why responsehandlers at all ?)
While I've been programming mod_perl for quite a while now, I only recently discovered the wonders of writing my own apache handlers. I used CGI.pm in my mod_perl application to get 'path_info', 'param', print out headers and more. None of this works inside my own handlers any more. I discovered replacements like Apache::Request but I'm not sure if this would work inside a handler.

My questions:

- Which docs can you recommend that will answer questions like *why* does CGI.pm not work any more? Is there any way to make it work nevertheless, so I could easily reuse my old projects? What should I use instead? Is the O'Reilly book about apache modules what I'm looking for? I've the small O'Reilly about mod_perl but it raises more questions than it answers.

- What's the main effect of a simple ResponseHandler at all, compared with a perl program run under mod_perl? Is it much faster, cause it handles things more efficiently? By now I used to use an Alias directive in apache to direct certain requests to a single perl script and use path_info() to detect the additional arguments. For the user there is no difference at all.

thnx, peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
http://www.goldfisch.at
mod_perl2: apache.pm vs apache2.pm (CGI.pm)
I've mod_perl running on several machines (apache 1.x). Today I installed a new system with apache2 and ran into deep troubles and questions. I installed perl-5.8.0, apache 2.0.43 and mod_perl 1.99_07. I preload Apache2 and use ModPerl::Registry:

    LoadModule perl_module modules/mod_perl.so
    PerlModule Apache2

and for my perl files:

    PerlResponseHandler ModPerl::Registry

As soon as I try to run a script under mod_perl that uses CGI.pm I get the problem:

    [Fri Nov 01 23:27:43 2002] [error] 9558: ModPerl::Registry: Can't locate Apache.pm in @INC
    (@INC contains: /usr/local/lib/perl5/site_perl/5.8.0/i686-linux/Apache2
    /usr/local/lib/perl5/5.8.0/i686-linux /usr/local/lib/perl5/5.8.0
    /usr/local/lib/perl5/site_perl/5.8.0/i686-linux /usr/local/lib/perl5/site_perl/5.8.0
    /usr/local/lib/perl5/site_perl .) at /usr/local/lib/perl5/5.8.0/CGI.pm line 161.
    Compilation failed in require at /home/htdocs/perl/testgoldfisch.cgi line 4.
    BEGIN failed--compilation aborted at /home/htdocs/perl/testgoldfisch.cgi line 4.

Now I started to look around, and in fact I have Apache.pm and Apache2.pm on my system. Apache.pm is not in @INC (it's in /usr/local/lib/perl5/5.8.0/CGI/Apache.pm, where it came from the perl installation). In @INC I only have Apache2.pm, which comes from the mod_perl installation. Now I don't know what's going on. Maybe these two modules have nothing in common but a similar name. Shall I extend my @INC so that Apache.pm is in it (and where is the best place to change @INC?). If I use the compat mode the problem vanishes. Is the CGI module incompatible with mod_perl2? If yes, is there any alternative that can be used without the need to rewrite all our libraries that rely on CGI.pm?

thnx, peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
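The compat mode the poster mentions can be spelled out as a config sketch (directive names as documented for mod_perl 1.99; the exact module layout on this machine is an assumption): preloading the mod_perl 1.x compatibility layer gives CGI.pm the Apache.pm API it tries to require.

```apache
LoadModule perl_module modules/mod_perl.so
PerlModule Apache2
# Apache::compat emulates the mod_perl 1.x API (including Apache.pm
# semantics) on top of mod_perl 2, so modules written against mp1,
# such as the CGI.pm shipped with perl 5.8.0, can load:
PerlModule Apache::compat
PerlResponseHandler ModPerl::Registry
```

Later CGI.pm releases added native mod_perl 2 detection, which makes the compat layer unnecessary for CGI.pm itself; for the version at hand, the compat layer is the low-effort route.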
Re: modules and pragmas - part II
On Wed, Sep 25, 2002 at 12:29:12AM -0400, Perrin Harkins wrote:
> Are you setting $CGI::XHTML to 0 somewhere?

No. I posted the whole script:

    #!/usr/bin/perl -w
    use CGI qw(:standard -no_xhtml);

    my $q = new CGI;
    print $q->header, $q->start_html, "\n";
    print $$, "\n";

thnx, peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
modules and pragmas - part II
Thnx to all the people contributing in the previous thread, which gave me deeper insight and a very easy solution ($CGI::XHTML = 0;). But now I played around and found out that if a module is loaded with pragmas, the pragma is only valid for the first call and not for all further calls. I wrote a tiny script to show this:

    #!/usr/bin/perl -w
    use CGI qw(:standard -no_xhtml);

    my $q = new CGI;
    print $q->header, $q->start_html, "\n";
    print $$, "\n";

At the first request each instance prints out the no_xhtml header, but at the second call the no_xhtml pragma is forgotten and the xhtml header is printed out. Is this a problem in the CGI module, or is there a deeper reason for this in mod_perl?

btw and OT: in the previous thread there have been rumours about better things than the CGI module. What could you recommend instead? (compatible modules preferred, cause I really use the sticky and nosticky features :)

thnx, peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
same module with different pragmas
Today I finally resolved a strange error that was bugging me for days. In the set of applications that runs under mod_perl on our webserver we need the same module twice, but with different pragmas:

    app1: use module qw(standard pragma1);
    app2: use module qw(standard pragma2);

Now, of course, whichever application is needed first loads the module with the mentioned pragma. Later, when the second app wants to load the module, mod_perl uses the already-loaded module with the wrong pragma. Is there any trick around this problem, or do I need to copy the module and alter its name? (which would cost more administration effort, cause each update of the module means double work then)

The module where I have this problem is the standard CGI module. Some apps badly need the -no_xhtml pragma, especially with the new IE6 that seems to be quite strict about headers.

thnx, peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
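The pragma threads converge on one workaround, sketched here (the per-request assignment is the `$CGI::XHTML = 0;` fix mentioned in the thread; the surrounding script is assumed glue): under mod_perl the `use CGI qw(-no_xhtml)` line is compiled only once per interpreter, and CGI.pm resets its package globals at the start of each request, so import-time pragmas don't stick. Setting the global inside the request, per application, survives the reset and lets two apps share one loaded CGI.pm with different settings:

```perl
#!/usr/bin/perl -w
use strict;
use CGI;            # compiled once, shared by every app in this interpreter

my $q = CGI->new;   # under mod_perl, construction resets CGI.pm's globals
$CGI::XHTML = 0;    # app1: behave like -no_xhtml (app2 would leave it at 1)
print $q->header, $q->start_html, "\n";
print $q->end_html, "\n";
```

The assignment must come after the object is created and before start_html is called, since the reset happens at construction and start_html consults the global at call time.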
relative path not accepted in mod_perl2 ?
I just upgraded to apache2 and mod_perl 1.99, and the first big difference I noticed is that I can't include my libraries using relative paths any more. Until now I had my main program and its library/ies in the same path, like:

    # ls edit.*
    edit.lib.pl  edit.pl

and in edit.pl I had the line:

    require './edit.lib.pl';

This worked fine with mod_perl 1.27. Now I receive an error like this:

    [Tue Jul 16 15:56:33 2002] [error] 25772: ModPerl::Registry: `Can't locate ./edit.lib.pl at /data/apache/htdocs/goldfisch/edit.pl line 24'

and I need to specify the full path in the require command. While this is not a real problem, it's a surprising behaviour ..

peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
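One common workaround can be sketched (not from the thread; that $0 holds the script's filename under Registry is an assumption about the environment): build the library path from the script's own location instead of relying on the current working directory, which mod_perl 2's Registry no longer changes to the script's directory:

```perl
#!/usr/bin/perl -w
use strict;
use File::Basename ();

# mod_perl sets $0 to the filename of the script being run, so its
# directory gives an absolute path to the library sitting next to it.
my $dir = File::Basename::dirname($0);
require "$dir/edit.lib.pl";

main();
```

This keeps the script/library pair relocatable without hardcoding the full path into the require.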
Re: relative path not accepted in mod_perl2 ?
just found out the deeper reason - under mod_perl 2 the current working directory is now always /, which is unexpected behaviour. While I can filter out the actual path from the cgi environment, I don't understand this ..

thnx, peter

On Tue, Jul 16, 2002 at 04:12:02PM +0200, [EMAIL PROTECTED] wrote:
> I just upgraded to apache2 and mod_perl 1.99, and the first big difference I noticed is that I can't include my libraries using relative paths any more. Until now I had my main program and its library/ies in the same path, like:
>
>     # ls edit.*
>     edit.lib.pl  edit.pl
>
> and in edit.pl I had the line:
>
>     require './edit.lib.pl';
>
> This worked fine with mod_perl 1.27. Now I receive an error like this:
>
>     [Tue Jul 16 15:56:33 2002] [error] 25772: ModPerl::Registry: `Can't locate ./edit.lib.pl at /data/apache/htdocs/goldfisch/edit.pl line 24'
>
> and I need to specify the full path in the require command. While this is not a real problem, it's a surprising behaviour ..
>
> peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
Perl-directive unknown
just upgraded from 1.3.22/mod_perl 1.26 to 1.3.26/1.27, and while all the other stuff is working fine, my daemon does not recognize the <Perl> directive anymore !? Test in httpsd.conf:

    <Perl>
    my $test = "hans";
    </Perl>

    # /usr/local/apache/bin/httpsdctl start
    Syntax error on line 105 of /usr/local/apache/conf/httpd.conf:
    Invalid command 'Perl', perhaps mis-spelled or defined by a module not included in the server configuration
    /usr/local/apache/bin/httpsdctl start: httpsd could not be started

If I remove all <Perl> sections, all is working fine. The mod_perl environment variable is set and all other directives related to mod_perl are recognized, like PerlModule or PerlHandler Apache::Registry. What is going on?!

thnx, peter

ps: stuff is running on linux on a 2.4.x kernel.

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
[EMAIL PROTECTED]
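A likely cause, offered as an assumption about this particular build rather than a diagnosis: <Perl> configuration sections are a compile-time option of mod_perl 1.x, so a rebuild against apache 1.3.26 that was configured without that option loses the <Perl> directive while PerlModule, PerlHandler etc. keep working. The relevant build flags look like:

```shell
# rebuild mod_perl 1.x with <Perl> sections compiled in -- either the
# specific option:
perl Makefile.PL PERL_SECTIONS=1 ...
# or the catch-all that enables every optional feature:
perl Makefile.PL EVERYTHING=1 ...
```

If the earlier 1.26 build used EVERYTHING=1 and the new one didn't, this would reproduce exactly the symptom described.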
Re: disable mod_perl for certain virtual hosts/folders
On Tue, Jan 22, 2002 at 08:31:02AM -0500, Geoffrey Young wrote:
> [EMAIL PROTECTED] wrote:
> > On my Apache, mod_perl is generally enabled with the following statement:
> >
> >     <Directory /data/apache>
> >         <Files ~ "\.pl$">
> >             SetHandler perl-script
> >             PerlHandler Apache::Registry
> >             Options +ExecCGI
> >         </Files>
> >     </Directory>
>
> you might have better luck with something like
>
>     <Directory /data/apache>
>         AddHandler perl-script .pl
>         PerlHandler Apache::Registry
>         Options +ExecCGI
>     </Directory>

thnx, but: this part doesn't make the problem. mod_perl works like a charm. The problem is how to deactivate it for a certain location?

thnx, peter
Re: disable mod_perl for certain virtual hosts/folders
On Tue, Jan 22, 2002 at 08:53:39AM -0500, Geoffrey Young wrote:
> > >     <Directory /data/apache>
> > >         AddHandler perl-script .pl
> > >         PerlHandler Apache::Registry
> > >         Options +ExecCGI
> > >     </Directory>
> >
> > thnx, but: this part doesn't make the problem. mod_perl works like a charm. The problem is how to deactivate it for a certain location?
>
> well, only .pl files will be handled by Apache::Registry under this config - /data/apache/foo.html ought to be handled by core. unless you have a global SetHandler perl-script somewhere, mod_perl should not handle the content-generation phase without being further specified.

but this is what my config does:

    <Directory /data/apache>
        <Files ~ "\.pl$">
            SetHandler perl-script
            PerlHandler Apache::Registry
            Options +ExecCGI
        </Files>
    </Directory>

Only .pl files are affected. Unfortunately I have some .pl files that must not run under mod_perl (not even under PerlRun, cause they are really dirty), and I wonder if there is no way to set the original cgi handler (the one that does not use mod_perl) for a certain location/virtual host. I thought this default handler is cgi-script, but SetHandler cgi-script in a virtual host does not do the trick. The only way I've found by now is not to enable mod_perl globally, but to put the above config in each virtual-host section except the one that contains the mad .pl files. While this is what I do now, I really wonder why one can't disable mod_perl ..

thnx, peter
disable mod_perl for specific location/virtual host
I've a server that has .pl files for mod_perl and .cgi for standard CGI, and now got a bunch of .pl files that do not run under mod_perl and cannot be renamed to .cgi, cause they are linked in frames on distant hosts where I don't have access. So I need to disable mod_perl for these .pl files for this specific virtual host. I activate mod_perl in the main part of my httpd.conf:

    <Files ~ "\.pl$">
        SetHandler perl-script
        PerlHandler Apache::Registry
        Options +ExecCGI
    </Files>

for all virtual hosts, and now need to know which handler is responsible for non-mod_perl CGI. (The PerlRun handler also fails ...) I tried in the virtual host section:

    <Files ~ "\.pl$">
        SetHandler cgi-script
        Options +ExecCGI
    </Files>

and even SetHandler default-handler. And I tried:

    AddHandler cgi-script .pl

but this won't do it.

thnx, peter
disable mod_perl for certain virtual hosts/folders
On my Apache, mod_perl is generally enabled with the following statement:

    <Directory /data/apache>
        <Files ~ "\.pl$">
            SetHandler perl-script
            PerlHandler Apache::Registry
            Options +ExecCGI
        </Files>
    </Directory>

Now I've a couple of .pl files in a certain virtual host that don't run under mod_perl. How can I disable mod_perl for this virtual host or a certain folder? I tried (in the virtual-host config):

    <Files ~ "\.pl$">
        SetHandler cgi-script
        Options +ExecCGI
    </Files>

and even

    <Files ~ "\.pl$">
        SetHandler default-handler
        Options +ExecCGI
    </Files>

but it didn't work - I have the feeling that I also should specify PerlHandler, but don't know to which value (tried to set it to 'default-handler' but scripts are still running under mod_perl).

any help appreciated, thnx, peter

ps: scripts also don't run under PerlRun - they are really badly styled.
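One approach worth sketching (the paths and vhost layout are assumptions; the reasoning rests on Apache's section merge order, where a longer <Directory> match is applied after a shorter one): instead of a bare <Files> in the vhost, override inside a <Directory> container for that vhost's document root, so the override merges after the global <Directory>/<Files> pair and wins for those files:

```apache
<VirtualHost *>
    DocumentRoot /data/apache/dirtyhost       # assumed path
    <Directory /data/apache/dirtyhost>
        <Files ~ "\.pl$">
            SetHandler cgi-script             # hand these back to mod_cgi
            Options +ExecCGI
        </Files>
    </Directory>
</VirtualHost>
```

This is a sketch, not a tested fix for this exact server: the key idea is to fight the global <Directory>-scoped handler with a more specific <Directory>-scoped one, rather than with a differently-scoped section.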
Re: cgi-object not cacheable
On Tue, Nov 13, 2001 at 09:18:04PM -0500, Perrin Harkins wrote:
> > One run of my script takes about 2 seconds. This includes a lot of database queries, calculations and so on. About 0.3 seconds are used just for one command: $query = new CGI;
>
> That's really awfully slow. Are you positive it's running under mod_perl? Have you considered using Apache::Request instead of CGI.pm?

It's definitely running under mod_perl. But imho the time it takes to create a new cgi-object should not depend too much on whether it's running under mod_perl or not, cause the CGI module is loaded before. (In fact I load it in httpd.conf using the PerlModule directive.)

> > This is not a problem of my persistent variables, cause this works with many other objects like db-handles (can't use Apache::DBI cause this keeps too many handles opened, so I need to cache and pool on my own), filehandles etc.
>
> Whoa, you can't use Apache::DBI but you can cache database handles yourself? That doesn't make any sense. What are you doing in your caching that's different from what Apache::DBI does?

This makes very much sense. Apache::DBI does not limit the number of persistent connections. It just keeps all the connections open per apache process. This will sum up to about 20 open database connections, each having one postgres client running 'idle in transaction' - and my old small server system is going weak. So I can't cache all connections, but only a limited number, and so I cache on my own :) Besides, it is done with a few lines of code, so it wasn't much work either:

    if (exists($ptr->{global}->{dbhandles}->{_some_id_string})) {
        $dbh = $ptr->{global}->{dbhandles}->{_some_id_string};
        $dbh or err($ptr, 19);   # there must have been something wrong internally
        if (not $dbh->ping) { $connect = 1; $r = 'reconnect' }  # we just reconnect ..
        $dbh and $dbh->rollback; # this issues a new begin-transaction and avoids
                                 # several problems with 'current_timestamp', which
                                 # delivers the value of the time at the beginning
                                 # of the transaction, even if this is hours ago.
                                 # see TROUBLEREPORT1
        $r = 'stored' if $r eq '-';
    } else {
        $connect = 1;
    }
    if ($connect) {
        $dbh = DBI->connect(connectinformation);
    }

and on exit I just disconnect all handles but keep a specified amount. I would prefer to handle this in a special pooling module like Apache::DBI, but where one can specify a maximum number of open connections and a timeout per connection (the connection will be terminated after it has not been used for a specified amount of time). As soon as I get IPC::Shareable to work I'll consider writing such a thingy.

best, peter

--
mag. peter pilsl
phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]
pgp-key available
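The cache-with-a-cap-and-timeout the poster wishes for can be sketched as a small per-process module (all names and numbers are assumptions, not the poster's code; note this is per apache process, as Perrin points out, not a cross-process pool):

```perl
package My::DBPool;      # assumed module name
use strict;
use DBI;

our %pool;               # per process: dsn => { dbh => ..., last_used => ... }
our $MAX     = 2;        # max cached handles per process
our $TIMEOUT = 300;      # seconds a cached handle may sit unused

sub get_handle {
    my ($dsn, $user, $pass) = @_;
    my $e = $pool{$dsn};
    if ($e && $e->{dbh}->ping) {              # cached and still alive
        $e->{last_used} = time;
        return $e->{dbh};
    }
    delete $pool{$dsn};                       # dead or missing: reconnect
    my $dbh = DBI->connect($dsn, $user, $pass) or return;
    $pool{$dsn} = { dbh => $dbh, last_used => time }
        if keys(%pool) < $MAX;                # cache only up to the limit
    return $dbh;                              # uncached handles are temporary
}

sub sweep {                                   # call at the end of each request
    for my $dsn (keys %pool) {
        next if time - $pool{$dsn}{last_used} <= $TIMEOUT;
        $pool{$dsn}{dbh}->disconnect;
        delete $pool{$dsn};
    }
}

1;
```

With the worst-case count being $MAX times the number of apache children, the cap has to be chosen against MaxClients, which is essentially Perrin's point in the follow-up.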
Re: cgi-object not cacheable
On Wed, Nov 14, 2001 at 10:39:36AM -0500, Perrin Harkins wrote:
> > It's definitely running under mod_perl. But imho the time it takes to create a new cgi-object should not depend too much on whether it's running under mod_perl or not, cause the CGI module is loaded before. (In fact I load it in httpd.conf using the PerlModule directive.)
>
> If it was running under CGI, it would be compiling CGI.pm on each request, which I've seen take .3 seconds. Taking that long just to create the new CGI instance seems unusual. How did you time it? Are you using Apache::DProf?

Wouldn't it be compiled at the use statement? I timed it using a module-internal logging function which uses Time::HiRes.

> > This makes very much sense. Apache::DBI does not limit the number of persistent connections. It just keeps all the connections open per apache process.
>
> That should mean one connection per process if you're connecting with the same parameters every time.

In my case it means up to 4 connections per process, cause in fact it is not one module but 2 (input and output) and each needs to handle 2 different connections.

> > if (exists($ptr->{global}->{dbhandles}->{_some_id_string}))
>
> You know that this is only for one process, right? If you limit this cache to 20 connections, you may get hundreds of connections.

Yes, that's why I limit it to 1 or even 0.

> > I would prefer to handle this in a special pooling module like Apache::DBI, but where one can specify a maximum number of open connections and a timeout per connection (the connection will be terminated after it has not been used for a specified amount of time).
>
> You can just set a timeout in your database server. If a connection times out and then needs to be used, the ping will fail and Apache::DBI will re-connect.

That's an interesting idea. I experienced crashes on ping to dead connections under DBD::Pg, but this is worth checking.

> > As soon as I get IPC::Shareable to work I'll consider writing such a thingy.
>
> You can't share database handles over IPC::Shareable, but you could share a global number tracking how many total database handles exist. However, I think you'd be better off using Apache::DBI and limiting the number of Apache children to the number of connections your database can deal with.

I hope to share database handles via IPC. One has to make sure that only one process writes to a handle at the same time!! (hope I'm right here) This would offer possibilities to create a pool of handles with a limited max. number and client-side timeouts. If a process requests a handle and there is one cached in the pool, it gets this handle back. Otherwise the pool creates a new handle or - if the max. number is reached - returns 0. The calling application can then decide to print an excuse to the user ('cause we are so popular we can't serve you :)') or create and destroy a temporary handle to process the request. This would be something I would actually prefer to Apache::DBI, but I don't know if it's possible - I'll try. Such a thing would be very important, especially on slow servers with little ram, where Apache::DBI opens too many connections in peak times and leaves the system in a bad condition ('too many open filehandles').

peter

ps: just if one is interested: today I was very happy to wear a helmet when I crashed with my bike ;) At least I can write these lines after my head touched the road. (well: it hurts in the arms when writing fast ;)

--
mag. peter pilsl
phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]
pgp-key available
cgi-object not cacheable
One run of my script takes about 2 seconds. This includes a lot of database queries, calculations and so on. About 0.3 seconds are used just for one command:

    $query = new CGI;

I tried to cache the retrieved object between several requests by storing it in a persistent variable to avoid this long time, but it is not cacheable (in the sense that operations on a cached CGI object will just produce nothing). This is not a problem of my persistent variables, cause this works with many other objects like db-handles (can't use Apache::DBI cause this keeps too many handles opened, so I need to cache and pool on my own), filehandles etc.

any idea? thnx, peter

--
mag. peter pilsl
phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]
pgp-key available
dont understand mem-mapping
I just can't get the following into my brain. I have a module that is started with apache using the PerlModule directive in httpd.conf. This module defines a global pointer on startup that should be the same in all sub-instances of httpd, and really, in the current apache session all instances print out:

    $g_ptr : HASH(0x8458a30)

This hash pointer is later filled with different values (dbhandles, filehandles ...) that should be kept open over more calls. Now each session has the same pointer, but the content of the anonymous hash it's pointing to is different in each instance!!

    thread 1:
    $g_ptr : HASH(0x8458a30)
    $g_ptr->{counter} : SCALAR(0x85aa62c)

    thread 2:
    $g_ptr : HASH(0x8458a30)
    $g_ptr->{counter} : SCALAR(0x85f5e2c)

An even more strange example is an anonymous array that has the same address, but different content too. The only explanation is that there is some mem-mapping for each httpd instance, but I don't know much about this.

My problem now is that each httpd instance opens a lot of db-handles and connections, and I end up with system errors like 'too many open files'. Is there any way to share handles between all instances? (I guess not, and I'm sure this mem-mapping has a deeper meaning too: if more than one instance accessed the same address at the same time there would be lots of trouble. I'm even more sure that this has something to do with the copy-on-write feature of fork(), but I'm just not good at these things, so I'd like to have some comment to be sure that this is a principal problem and not a problem of my module.)

thnx, peter

--
mag. peter pilsl
phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]
pgp-key available
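The observation can be demonstrated outside mod_perl with a tiny fork() sketch (standalone, not the poster's module): every process has its own address space, so the virtual addresses printed by HASH(0x...) look identical after fork, yet copy-on-write gives each process its own private copy of the data the moment it writes to it:

```perl
#!/usr/bin/perl -w
use strict;

# Shared-looking pointer set up before the fork, as PerlModule does
# for apache children at server startup.
my $g_ptr = { counter => 0 };

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                      # child process
    $g_ptr->{counter} = 42;           # write triggers copy-on-write
    print "child : $g_ptr counter=$g_ptr->{counter}\n";
    exit 0;
}
waitpid($pid, 0);
# Same printed address, but the parent's data is untouched:
print "parent: $g_ptr counter=$g_ptr->{counter}\n";
```

So yes, this is a principal property of prefork apache, not a bug in the module: in-memory handles cannot be shared between children this way; cross-process sharing needs an external mechanism (sockets, a proxy daemon, or limiting connections per child).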
namespace-troubles
I ran into deep namespace troubles I understand only vaguely and can't work around. I have a script running under mod_perl that is called via two domains:

    www1.domain.at/
    www2.domain.at/sub/

Both of the above addresses lead to the very same script (it's the same file on the disk, not a copy). When I call the first address all is working fine, but as soon as I call the second address I get a server error. Restarting apache and trying the second first: running fine, but as soon as I call the first: server error. The log reveals:

    Undefined subroutine Apache::ROOTwww1_2domain_2eat::main called at /data/public/stage2/fetch.pl line 9.

or

    Undefined subroutine Apache::ROOTwww2_2edomain_2eat::editeinstieg::main called at /data/public/stage2/fetch.pl line 9.

My script is structured like that:

    fetch.pl:
        require "fetch.lib.pl";
        main();

    fetch.lib.pl:
        sub main {
            # do everything here
        }
        1;

As far as I can see, the second call does not load the lib anymore, cause it was already loaded on the first call. Unfortunately it was loaded into a different namespace, so the script doesn't find it. What can I do? I need these different domains, cause the script action depends on the calling domain. The reason why I split into script/lib is a document at apache.org that recommends this to avoid a nested-sub problem under mod_perl. I wonder if providing the lib file as a module (use instead of require) would be a solution, but I guess not. Can the above problem occur with modules too? If two scripts call the same module, is it only loaded on the first call, and does the second script fail??

thnx, peter

--
mag. peter pilsl
phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]
pgp-key available
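A proper module does in fact solve this (a sketch with assumed file and package names): Apache::Registry compiles each script into a per-URI namespace like Apache::ROOTwww1_2domain_2eat, and a require'd lib full of bare subs lands in whichever of those namespaces loaded it first. A module with its own package declaration, by contrast, is recorded once in %INC and its subs live at one well-known name reachable from every script:

```perl
# /data/public/stage2/Fetch/Lib.pm  (assumed location and name)
package Fetch::Lib;
use strict;

sub main {
    # do everything here; subs in a named package are shared between all
    # Registry scripts, regardless of the namespace each script gets
}

1;

# fetch.pl then becomes:
#     use lib '/data/public/stage2';   # make the module findable
#     use Fetch::Lib ();
#     Fetch::Lib::main();              # fully qualified, no namespace guessing
```

The fully qualified call also sidesteps the nested-sub problem that motivated the script/lib split in the first place, since main() is a real package sub rather than a sub nested inside the Registry wrapper.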