Can't open perl script -spi.bak
I am attempting to build mod_perl on NT. I successfully built Apache. I have ActiveState 5.6 (Perl v5.6.1) installed. I also have MS Visual Studio 6.0 installed. I downloaded mod_perl-1.26.tar.gz from http://perl.apache.org/dist After unzipping the files I get the following error:

C:\mod_perl-1.26>perl Makefile.PL
Can't open perl script " -spi.bak ": No such file or directory
C:\Perl\bin\perl.exe -spi.bak -e s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw() }/ lib/mod_perl_hooks.pm failed

Any suggestions? Thanks, Pete
Re: Off topic question a little worried
At 14:15 21.03.2002 -0600, you wrote: Any idea as to how it got on my server? It is owned by apache and in the apache group. That tells me that it was put on there by apache. It is in a directory that has the permissions 777, because the script that is normally in there keeps and writes traffic information, so I guess someone found a way to have apache write the file into that directory. But how did they get it to chmod 755? That is a DON'T. Apache should not have write access to anything under DocumentRoot. Sorry, I know this does not help now. Joachim -- "... a race of inventive dwarfs who can be hired for anything." - Bertolt Brecht - Leben des Galilei
mod_perl on Apache 2.0.32
Hi everybody there! I've been off the list for a while, so I don't have any idea how to use mod_perl with Apache 2 for testing. Can any of you tell me if mod_perl already supports Apache 2? Or let me know where to find documentation about it? Thanks Jose Albert
Re: mod_perl on Apache 2.0.32
[EMAIL PROTECTED] wrote: Hi everybody there! I'm being out of the list for a while so I don't have any idea about how to use mod_perl with Apache 2 for testing. Any of you can tell me if mod_perl is already supporting Apache 2? Or let me know where to go to find documentation about? You cannot use modperl-1.x with httpd-2.0; you need modperl-2.0, which is in the works now. If you want to play with it, the info to get you started is here: http://cvs.apache.org/viewcvs.cgi/modperl-docs/src/docs/2.0/user/install/install.pod -- Stas Bekman, JAm_pH -- Just Another mod_perl Hacker http://stason.org/ mod_perl Guide --- http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
Re: [OT] Off topic question a little worried
Assuming the content isn't updated too often, burning the site from a test area and mounting it as a CDROM makes it pretty hard for outsiders to update. -- Steven Lembark 2930 W. Palmer Workhorse Computing Chicago, IL 60647 +1 800 762 1582
Re: 0 being appended to non mod_perl scripts.
On Thu, 2002-03-21 at 08:37, Mike Wille wrote: Hello all, I apologize if this has already been answered elsewhere, I haven't been able to find it. I am encountering a weird problem where perl scripts running under a normal cgi-bin (i.e. no mod_perl) have a '0' appended to the output. This does not happen to scripts run under mod_perl. It also only happens when PerlSendHeader is set to on. I thought that PerlSendHeader was a mod_perl only directive, but just to check I added PerlSendHeader off to the cgi-bin directory. That had no effect. Has anyone else encountered this and how did you fix it? Are you sure this isn't an artifact of chunked encoding? Perhaps your browser or Apache are handling the encoding badly. I suggest you use a tool such as ethereal to examine the data on the wire. -jwb
'Pinning' the root apache process in memory with mlockall
Recently on this list the idea of 'pinning' or locking the root apache process in memory has been discussed with some interest. The reason was that some users have experienced the situation where a server becomes loaded, the root apache process gets swapped out, and in the process loses some of its shared memory. Future child processes that are forked also share in the loss of shared memory, so methods like using GTopLimit to 'recycle' child processes when their shared memory becomes too low cease to work, because when they come up, they are already too low on shared memory. In our systems we had attempted this but it always came down to the same problem -- the root process would lose its shared memory, to the point that any child process would come up, serve a request, find that it was beyond the threshold for shared memory, and die. The only help was to restart Apache altogether. So in scouring the list I found someone mentioning using the mlockall C function to lock the pages of the core apache process in memory. Some handy .xs code was provided, so I built a module, Sys::Mman, which wraps mlockall and makes it available to Perl. We installed this on our servers, and call mlockall right at the end of our preload stuff, i.e., the end of the 'startup.pl'-style script called from httpd.conf. The result has been very encouraging. The core apache process is then able to maintain all its shared memory, and child processes that are forked are able to start with high amounts of shared memory, all making for a much happier system. Now I also read that probably better than this would be to ensure that you never swap by tuning MaxClients, as well as examining our Perl code to make it less prone to lose shared memory. We're working on that sort of tuning, but in volatile environments like ours, where we serve a very large amount of data, and new code is coming out almost daily here and there, locking the core httpd in memory has been very helpful. 
I just thought I would let others know on the list that it is feasible, and works well in our environment. If there's enough interest I might put the module up on CPAN, but it's really very simple. h2xs did most of the work for me. And thanks to Doug MacEachern for posting the .xs code. It worked like a charm. FWIW, -- Dan Hanks Daniel Hanks - Systems/Database Administrator About Inc., Web Services Division
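The pattern Dan describes can be sketched roughly like this. Note that Sys::Mman is his own wrapper and (as he says) not on CPAN, so the function name and calling convention below are assumptions, not a documented API:

```perl
# startup.pl fragment -- rough sketch, loaded from httpd.conf via PerlRequire.
use strict;

# ... preload all heavyweight modules first, so the locked pages cover as
# much of the shared, preloaded code as possible ...

use Sys::Mman;    # hypothetical module wrapping mlockall(2)

# Lock the parent httpd's pages in RAM (the real syscall takes
# MCL_CURRENT and/or MCL_FUTURE flags).
Sys::Mman::mlockall() or warn "mlockall failed: $!";

1;
```

Since mlockall(2) normally requires root privileges, this has to run during server startup, while the parent httpd is still root.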
Re: Can't open perl script -spi.bak
On Fri, 22 Mar 2002, Pete Kelly wrote: I am attempting to build mod_perl on NT. I successfully built Apache. I have ActiveState 5.6 (Perl v5.6.1) installed. I also have MS Visual Studio 6.0 installed. I downloaded mod_perl-1.26.tar.gz from http://perl.apache.org/dist After unzipping the files I get the following error: C:\mod_perl-1.26>perl Makefile.PL Can't open perl script " -spi.bak ": No such file or directory C:\Perl\bin\perl.exe -spi.bak -e s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw() }/ lib/mod_perl_hooks.pm failed Any suggestions? After running 'perl Makefile.PL', and it dying at this point, is there a file lib/mod_perl_hooks.pm? If not, try copying lib/mod_perl_hooks.pm.PL to lib/mod_perl_hooks.pm and rerunning 'perl Makefile.PL'. best regards, randy kobes
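Randy's workaround can also be done from Perl, which sidesteps shell differences on NT; `seed_hooks_pm` is a hypothetical helper name for illustration, not part of the mod_perl build:

```perl
use strict;
use File::Copy qw(copy);

# Hypothetical helper mirroring the workaround above: if 'perl Makefile.PL'
# died before generating the .pm file, seed it from its .PL template.
sub seed_hooks_pm {
    my ($template, $target) = @_;
    return 0 if -e $target;          # already generated; nothing to do
    copy($template, $target)
        or die "copy $template -> $target failed: $!";
    return 1;
}
```

Called as `seed_hooks_pm('lib/mod_perl_hooks.pm.PL', 'lib/mod_perl_hooks.pm')` before rerunning `perl Makefile.PL`.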
Re: Can't open perl script -spi.bak
On Fri, 22 Mar 2002, Pete Kelly wrote: I am attempting to build mod_perl on NT. I successfully built Apache. I have ActiveState 5.6 (Perl v5.6.1) installed. I also have MS Visual Studio 6.0 installed. I downloaded mod_perl-1.26.tar.gz from http://perl.apache.org/dist After unzipping the files I get the following error: C:\mod_perl-1.26>perl Makefile.PL Can't open perl script " -spi.bak ": No such file or directory C:\Perl\bin\perl.exe -spi.bak -e s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw() }/ lib/mod_perl_hooks.pm failed Any suggestions? That's very weird, because this code doesn't seem to work:

perl -e 'system("perl", " -e1") == 0 or die "oops"'

while this does:

perl -e 'system("perl", "-e1") == 0 or die "oops"'

notice the leading space before -e1. This patch should solve the problem.

Index: Makefile.PL
===================================================================
RCS file: /home/cvs/modperl/Makefile.PL,v
retrieving revision 1.196
diff -u -r1.196 Makefile.PL
--- Makefile.PL 9 Sep 2001 21:56:46 -0000 1.196
+++ Makefile.PL 22 Mar 2002 18:59:34 -0000
@@ -1101,7 +1101,7 @@
     cp "lib/mod_perl_hooks.pm.PL", "lib/mod_perl_hooks.pm";
     if ($Is_Win32) {
-        my @args = ($^X, ' -spi.bak ', ' -e ', "s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw($hooks) }/", 'lib/mod_perl_hooks.pm');
+        my @args = ($^X, '-spi.bak ', ' -e ', "s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw($hooks) }/", 'lib/mod_perl_hooks.pm');
         system(@args) == 0 or die "@args failed\n";
     }
     edit "lib/mod_perl_hooks.pm",

-- Stas Bekman
RE: Re: mod_perl on Apache 2.0.32
Thanks very much for your feedback. I will take a look at the mod_perl 2 project. Thanks again Jose Albert [EMAIL PROTECTED] wrote: Hi everybody there! I'm being out of the list for a while so I don't have any idea about how to use mod_perl with Apache 2 for testing. Any of you can tell me if mod_perl is already supporting Apache 2? Or let me know where to go to find documentation about? you cannot use modperl-1.x with httpd-2.0, you need modperl-2.0 which is in the works now. If you want to play with it the info to get you started is here: http://cvs.apache.org/viewcvs.cgi/modperl-docs/src/docs/2.0/user/install/install.pod -- Stas Bekman
Re: Apache::DBI or What ?
We encountered just this situation when we started to move from a win32 application connecting to an RDBMS to a web based app. On the win32 app, the DB authenticated each user with a loginid/pw. Since some users still use the win32 app, we can't just abandon the DB authentication, so here's what we did: Since the web users generally login to the site once a day and then work for awhile (we keep the login associated to a session cookie in the DB), we run the login script as a CGI, test the loginid/passwd in a connect, store the session info, then issue a redirect. All of the other pages use Apache::DBI with a web-user DB login. This allows us to take advantage of the persistent connections for most of the requests. One trick here: if you're using the DB to enforce business rules based on the login, then you'll have to incorporate those rules into your mod_perl programs -- effectively giving the web-user the power to do whatever any of the users might need to. Fortunately, we were able to do this fairly easily. I'm open to opinions on this approach. Eric Frazier wrote: Hi, I was all happy and rolling along when I read this in the docs: "With this limitation in mind, there are scenarios where the usage of Apache::DBI is deprecated. Think about a heavily loaded Web site where every user connects to the database with a unique userid. Every server would create many database handles, each of which spawning a new backend process. In a short time this would kill the web server." I will have many different users, users as in database users. So am I just screwed and won't be able to keep connections open? Or maybe the idea would be to go ahead and let that happen, but timeout the connection in 5 minutes or so? That way I wouldn't have open connections from user bob from 5 hours ago still sitting around. Or am I totally not getting it at all? I am using Postgres, I am wondering how big DBs deal with this sort of thing. 
I am also wondering what the actual overhead is in starting the connection and if there is anything that I could do to limit that without validating a specific user. Last of all, I might not be posting this to the right place, but I hope so. It seems to me there is a grey area when it comes to Apache modules when you are using them with mod_perl. Or else I just don't know enough yet to see there is not a grey area :) Thanks, Eric http://www.kwinternet.com/eric (250) 655 - 9513 (PST Time Zone) -- Kevin Berggren 760-480-1828 System Maker, Inc 3913 Sierra Linda Dr. Escondido, CA 92025
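The login-once / shared-handle pattern described above can be sketched in plain Perl; all names here (the token format, the session store) are made up for illustration, not taken from the poster's code:

```perl
use strict;
use Digest::MD5 qw(md5_hex);

# Login hit (plain CGI): authenticate with the user's own DB credentials,
# then mint a token the login script would store in the DB and set as a
# session cookie before redirecting.
sub new_session_token {
    my ($login) = @_;
    return md5_hex(join ':', $login, time(), $$, rand());
}

# Every later mod_perl hit: validate only the token against the session
# store, and run all SQL over the one shared "webuser" Apache::DBI handle.
sub request_user {
    my ($sessions, $token) = @_;   # $sessions: token => login (e.g. a DB table)
    return $sessions->{$token};    # undef means: redirect back to the login CGI
}
```

Note that md5_hex over time/pid/rand is only a stand-in; a production token would need a proper source of randomness.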
Re: 'Pinning' the root apache process in memory with mlockall
Daniel Hanks wrote: Recently on this list the idea of 'pinning' or locking the root apache process in memory has been discussed with some interest. The reason was that some users have experienced the situation where a server becomes loaded, the root apache process gets swapped out, and in the process loses some of its shared memory. Future child processes that are forked also share in the loss of shared memory, so methods like using GTopLimit to 'recycle' child processes when their shared memory becomes too low cease to work, because when they come up, they are already too low on shared memory. In our systems we had attempted this but it always came down to the same problem -- the root process would lose its shared memory, to the point that any child process would come up, serve a request, find that it was beyond the threshold for shared memory, and die. The only help was to restart Apache altogether. So in scouring the list I found someone mentioning using the mlockall C function to lock the pages of the core apache process in memory. Some handy .xs code was provided, so I built a module, Sys::Mman, which wraps mlockall and makes it available to Perl. We installed this on our servers, and call mlockall right at the end of our preload stuff, i.e., the end of the 'startup.pl'-style script called from httpd.conf. The result has been very encouraging. The core apache process is then able to maintain all its shared memory, and child processes that are forked are able to start with high amounts of shared memory, all making for a much happier system. Now I also read that probably better than this would be to ensure that you never swap by tuning MaxClients, as well as examining our Perl code to make it less prone to lose shared memory. We're working on that sort of tuning, but in volatile environments like ours, where we serve a very large amount of data, and new code is coming out almost daily here and there, locking the core httpd in memory has been very helpful. 
I just thought I would let others know on the list that it is feasible, and works well in our environment. If there's enough interest I might put the module up on CPAN, but it's really very simple. h2xs did most of the work for me. And thanks to Doug MacEachern for posting the .xs code. It worked like a charm. See the discussion on the [EMAIL PROTECTED] list, http://marc.theaimsgroup.com/?t=10165973081&r=1&w=2 where it was said that it's a very bad idea to use mlock and variants. Moreover, the memory doesn't get unshared when the parent pages are paged out; it's the reporting tools that report the wrong information and of course mislead the size limiting modules, which start killing the processes. As a conclusion to this thread I've added the following section to the performance chapter of the guide:

=head3 Potential Drawbacks of Memory Sharing Restriction

It's very important that the system not be heavily engaged in swapping. Some systems do swap in and out every so often even if they have plenty of real memory available, and that's OK. The following applies to conditions when there is hardly any free memory available. So if the system uses almost all of its real memory (including the cache), there is a danger of the parent process's memory pages being swapped out (written to a swap device). If this happens, the memory usage reporting tools will report all those swapped out pages as non-shared, even though in reality these pages are still shared on most OSes. When these pages are swapped back in, the sharing will be reported back to normal after a certain amount of time. If a big chunk of the memory shared with child processes is swapped out, it's most likely that C<Apache::SizeLimit> or C<Apache::GTopLimit> will notice that the shared memory floor threshold was crossed and as a result kill those processes. 
If many of the parent process's pages are swapped out, and a newly created child process is already starting with shared memory below the limit, it'll be killed immediately after serving a single request (assuming that C<$CHECK_EVERY_N_REQUESTS> is set to one). This is a very bad situation which will eventually lead to a state where the system won't respond at all, as it'll be heavily engaged in swapping. This effect may be more or less severe depending on the memory manager's implementation, and it certainly varies from OS to OS and between kernel versions. Therefore you should be aware of this potential problem and simply try to avoid situations where the system needs to swap at all, by adding more memory, reducing the number of child servers, or spreading the load across more machines if reducing the number of child servers is not an option because of the request rate demands. -- Stas Bekman
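The kill decision described here (the one that swapped-out pages can trigger spuriously) boils down to a check like this; the function name and thresholds are illustrative, not the actual Apache::SizeLimit/Apache::GTopLimit code:

```perl
use strict;

# Illustrative version of the size-limit check: a child exits when its
# unshared portion grows past a ceiling, or its shared portion falls below
# a floor. If swapped-out shared pages are *reported* as unshared, both
# conditions can fire even though nothing is really wrong.
sub child_should_exit {
    my ($size_kb, $shared_kb, $max_unshared_kb, $min_shared_kb) = @_;
    return 1 if $size_kb - $shared_kb > $max_unshared_kb;  # too much unshared
    return 1 if $shared_kb < $min_shared_kb;               # lost too much sharing
    return 0;
}
```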
RE: 0 being appended to non mod_perl scripts.
Well, after much testing, I've found the problem does not lie with mod_perl. I'm not sure of the source now, I just know how to recreate it. Originally it seemed like mod_perl, but it was just a coincidence that the problem started after making some configuration changes. The only thing I need to do to recreate the problem is include a 'use Whatever::Module;' in a simple hello world Perl script under mod_cgi. Mod_perl does not ever append the 0. System and exec calls do not cause the 0 to be displayed either. I have no idea what is causing this as the setup is an out of the box Red Hat 7.2 installation. But that is off topic... But thanks to everyone who answered this post! - Mike -----Original Message----- From: Stas Bekman [mailto:[EMAIL PROTECTED]] Sent: Thursday, March 21, 2002 8:56 PM To: Randal L. Schwartz Cc: Mike Wille; [EMAIL PROTECTED] Subject: Re: 0 being appended to non mod_perl scripts. Randal L. Schwartz wrote: "Mike" == Mike Wille [EMAIL PROTECTED] writes: Mike> I am encountering a weird problem where perl scripts running under a normal Mike> cgi-bin (i.e. no mod_perl) have a '0' appended to the output. I've seen this happen when people mistakenly write: print system "foo"; instead of print `foo`; but of course they should have written: system "foo"; instead. As to why it's not happening in an Apache::Registry script, I cannot say. Because the output of system(), exec(), and open(PIPE, "|program") calls will not be sent to the browser unless your Perl was configured with sfio. http://perl.apache.org/guide/porting.html#Output_from_system_calls -- Stas Bekman
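The print system mistake Randal describes is easy to reproduce outside Apache; this tiny script shows where the stray 0 comes from:

```perl
use strict;

# system() returns the child's *exit status*, not its output; on success the
# status is 0, and printing that status is what appends the stray "0".
my $status = system($^X, '-e', 'print "hello\n"');  # child prints hello itself
print "status=$status\n";                           # "status=0" on success

# What was actually wanted: capture the child's output with backticks/qx().
my $out = qx($^X -e "print 42");
print "captured=$out\n";
```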
Re: 'Pinning' the root apache process in memory with mlockall
On Sat, 23 Mar 2002, Stas Bekman wrote: See the discussion on the [EMAIL PROTECTED] list, http://marc.theaimsgroup.com/?t=10165973081&r=1&w=2 where it was said that it's a very bad idea to use mlock and variants. Moreover the memory doesn't get unshared when the parent pages are paged out, it's the reporting tools that report the wrong information and of course mislead the size limiting modules which start killing the processes. As a conclusion to this thread I've added the following section to the performance chapter of the guide: Are we saying then that libgtop is erroneous in its reporting under these circumstances? And in the case of Linux, I'm assuming libgtop just reads its info straight from /proc. Is /proc erroneous then? -- Dan Daniel Hanks - Systems/Database Administrator About Inc., Web Services Division
Re: Off topic question a little worried
http://www.chkrootkit.org/ http://www.incident-response.org/LKM.htm -- Carsten Heinrigs Ocean-7 Development Tel: 212 533-7883
Re: 'Pinning' the root apache process in memory with mlockall
Stas, Thanks for tracking that down. So, the problem is our tools. For me, that's GTopLimit (but also SizeLimit). I would think it must be possible to cajole these two into realizing their error. top seems to know how much a process has swapped. If GTopLimit could know that, the number could be subtracted from the total used in calculating the amount of sharing (and the new unshared), and then this bug would be resolved, right? I looked, but didn't see anything in GTop.pm that gives swap per process, though. So, it's not going to be easy. I guess I'll turn off my deswapper... ...and GTopLimit as well, for now... hmm, maybe I could just avoid using the share-related trigger values in GTopLimit, and just use the SIZE one. That would be an acceptable compromise, though not the best. -bill
RE: 0 being appended to non mod_perl scripts.
At 14:48 22.03.2002 -0500, Mike Wille wrote: Well, after much testing, I've found the problem does not lie with mod_perl. I'm not sure of the source now, I just know how to recreate it. Originally it seemed like mod_perl but it was just a coincidence that it the problem started after making some configuration changes. The only thing I need to do to recreate the problem is include a 'use Whatever::Module;' in a simple hello world Perl script under mod_cgi. Mod_perl does not ever append the 0. System and exec calls do not cause the 0 to be displayed either. I have no idea what is causing this as the setup is an out of the box Red Hat 7.2 installation. But that is off topic... But thanks to everyone who answered this post! In that case it's most probably the point Jeffrey W. Baker noted about chunked encoding. Your User Agent might not be handling it. I suspect that you don't get the 0 from static files, or anything which sends a Content-Length header. Look more into the raw transmitted data, and you might find out something. -- Per Einar Ellefsen [EMAIL PROTECTED]
Re: 'Pinning' the root apache process in memory with mlockall
Stas Bekman wrote: Moreover the memory doesn't get unshared when the parent pages are paged out, it's the reporting tools that report the wrong information and of course mislead the size limiting modules which start killing the processes. Apache::SizeLimit just reads /proc on Linux. Is that going to report a shared page as an unshared page if it has been swapped out? Of course you can avoid these issues if you tune your machine not to swap. The trick is, you really have to tune it for the worst case, i.e. look at the memory usage while beating it to a pulp with httperf or http_load and tune for that. That will result in MaxClients and memory limit settings that underutilize the machine when things aren't so busy. At one point I was thinking of trying to dynamically adjust memory limits to allow processes to get much bigger when things are slow on the machine (giving better performance for the people who are on at that time), but I never thought of a good way to do it. - Perrin
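The worst-case tuning Perrin describes amounts to simple arithmetic; all the numbers below are assumptions for illustration, to be replaced by what httperf/http_load shows under peak load:

```perl
use strict;

# Size MaxClients so that even at peak load the box never swaps.
my $ram_mb            = 1024;  # total RAM (assumed)
my $reserved_mb       = 256;   # OS, buffer cache, other daemons (assumed)
my $child_unshared_mb = 12;    # per-child unshared memory at full load (assumed)

my $max_clients = int(($ram_mb - $reserved_mb) / $child_unshared_mb);
print "MaxClients ~ $max_clients\n";   # 64 with these numbers
```

As he notes, tuning for this worst case leaves the machine underutilized when traffic is light.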
Re: 0 being appended to non mod_perl scripts.
Per Einar Ellefsen [EMAIL PROTECTED] writes: I suspect that you don't get the 0 from static files, or anything which sends a Content-Length header. Look more into the raw transmitted data, and you might find out something. Might it be an HTTP/1.1 KeepAlive artefact? -- David Hodgkinson, Wizard for Hirehttp://www.davehodgkinson.com Editor-in-chief, The Highway Star http://www.deep-purple.com All the Purple Family Tree news http://www.slashrock.com Interim Technical Director, Web Architecture Consultant for hire
RE: 'Pinning' the root apache process in memory with mlockall
Stas Bekman wrote: Moreover the memory doesn't get unshared when the parent pages are paged out, it's the reporting tools that report the wrong information and of course mislead the size limiting modules which start killing the processes. Apache::SizeLimit just reads /proc on Linux. Is that going to report a shared page as an unshared page if it has been swapped out? Of course you can avoid these issues if you tune your machine not to swap. The trick is, you really have to tune it for the worst case, i.e. look at the memory usage while beating it to a pulp with httperf or http_load and tune for that. That will result in MaxClients and memory limit settings that underutilize the machine when things aren't so busy. At one point I was thinking of trying to dynamically adjust memory limits to allow processes to get much bigger when things are slow on the machine (giving better performance for the people who are on at that time), but I never thought of a good way to do it. Ooh... neat idea, but then that leads to a logical set of questions: Is MaxClients something that can be changed at runtime? If not, would it be possible to see about patches to allow that? :-) L8r Rob #!/usr/bin/perl -w use Disclaimer qw/:standard/;
Re: Can't open perl script -spi.bak
At 3:00 AM +0800 3/23/02, Stas Bekman wrote: On Fri, 22 Mar 2002, Pete Kelly wrote: I am attempting to build mod_perl on NT. I successfully built Apache. I have ActiveState 5.6 (Perl v5.6.1) installed. I also have MS Visual Studio 6.0 installed. I downloaded mod_perl-1.26.tar.gz from http://perl.apache.org/dist After unzipping the files I get the following error: C:\mod_perl-1.26>perl Makefile.PL Can't open perl script " -spi.bak ": No such file or directory C:\Perl\bin\perl.exe -spi.bak -e s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw() }/ lib/mod_perl_hooks.pm failed Any suggestions? That's very weird, because this code doesn't seem to work: perl -e 'system("perl", " -e1") == 0 or die "oops"' Actually, that's not all that weird. Most shells take care of stripping out garbage before setting the argument list. Since system(LIST) doesn't use the shell, it's passing perl the literal " -e1", which perl won't recognize as a command line option (and correctly so, in my opinion). Rob -- When I used a Mac, they laughed because I had no command prompt. When I used Linux, they laughed because I had no GUI.
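Rob's explanation is easy to verify from the command line: with system(LIST) no shell strips the whitespace, so the leading space survives into perl's argv:

```perl
use strict;

# "-e1" is parsed as the -e switch with the program "1" and succeeds;
# " -e1" doesn't start with '-', so perl treats it as a script *filename*
# and fails with "Can't open perl script".
my $ok  = system($^X, '-e1')  == 0;
my $bad = system($^X, ' -e1') == 0;   # prints perl's error to STDERR
printf "ok=%d bad=%d\n", $ok ? 1 : 0, $bad ? 1 : 0;   # ok=1 bad=0
```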
Re: 'Pinning' the root apache process in memory with mlockall
Danger: Rant ahead. Proceed with caution. On Sat, 23 Mar 2002, Stas Bekman wrote: See the discussion on the [EMAIL PROTECTED] list, http://marc.theaimsgroup.com/?t=10165973081&r=1&w=2 where it was said that it's a very bad idea to use mlock and variants. Moreover the memory doesn't get unshared when the parent pages are paged out, it's the reporting tools that report the wrong information and of course mislead the size limiting modules which start killing the processes. As a conclusion to this thread I've added the following section to the performance chapter of the guide: =head3 Potential Drawbacks of Memory Sharing Restriction It's very important that the system not be heavily engaged in swapping. Some systems do swap in and out every so often even if they have plenty of real memory available and it's OK. The following applies to conditions when there is hardly any free memory available. So if the system uses almost all of its real memory (including the cache), there is a danger of the parent process's memory pages being swapped out (written to a swap device). If this happens the memory usage reporting tools will report all those swapped out pages as non-shared, even though in reality these pages are still shared on most OSs. When these pages are getting swapped in, the sharing will be My Solaris 2.6 box, while in this situation, was swapping hard, as measured by my ears, by iostat, and by top (both iowait and the memory stats). Note that mlockall does not restrict memory sharing, it restricts swapping a certain portion of memory. This will prevent this memory from ever being needlessly unshared. In the discussion you referred to, all of the people saying this was a bad idea were using terms like "I think". None of them had the situation themselves, so they have a difficult time coming to terms with it. None of them had related former experience using this. 
Something like this really needs to be tested by someone who has the issue, and has the ability to do benchmarks with real data streams. If they find it seems to work well, then they should test it on production systems. Anyone else talking about it is simply that much hot air, myself included. (I *could* test it, but I don't have enough of a problem to put a priority on it. If we were waiting for me to get time, we'd be waiting a long time.) Yes, I agree, it's better to never swap. But if we take the attitude that we won't use tools to help us when times are tight, then get rid of swap entirely. Locking memory is all about being selective about what you will and won't swap. Yes, I agree, it'd be better to mlock those bits of memory that you really care about, but that's hard to do when that memory is allocated by software you didn't write. (In this case, I'd really like to mlock all the memory that perl allocated but did not free in BEGIN sections (including, of course, use blocks). I would also like to compact that first, but that could be even more difficult.) As far as the logic regarding 'let the OS decide' - the admin of the system has the ability to have a much better understanding of how the system resources are used. If I have one section of memory which is used 95% of the time by 75% of my active processes, I really don't want that memory to swap out just because another program that'll only run for a minute wants a bit more memory, if it can take that memory from anywhere else. When doing individual page-ins, memory managers tend to worry only about those processes that they are trying to make runnable now; they're not going to go and load that page back on to every other page map that shares it just because they also use it. So even though that memory is loaded back into memory, all those processes will still have to swap it back. 
For them to do otherwise would be irresponsible, unless the system administrator clearly doesn't know how to system administrate, or has chosen not to. The OS is supposed to handle the typical case; having one segment of memory used by dozens of processes actively is not the typical case. This does not happen on end-user machines; this only happens on servers. Theoretically speaking, servers are run by people who can analyze and tune; mlock and mlockall are tools available to them to do such tuning. reported back to normal after a certain amount of time. If a big chunk of the memory shared with child processes is swapped out, it's most likely that C<Apache::SizeLimit> or C<Apache::GTopLimit> will notice that the shared memory floor threshold was crossed and as a result kill those processes. If many of the parent process' pages are swapped out, and the newly created child process is already starting with shared memory below the limit, it'll be killed immediately after serving a single request (assuming that C<$CHECK_EVERY_N_REQUESTS> is set to one). This is a very bad situation which will eventually lead to a state where the system won't
mod_perl on windows
Hi. my $erver: Apache/1.3.22 (Win32) PHP/4.0.6 mod_perl/1.26_01-dev perl v5.6.1 Consider this sample script:

use Apache::Request;
use strict;
use warnings;

my $r = Apache->request;
my $apr = Apache::Request->new($r);
$r->send_http_header('text/html');

The first time I run the script the error is: "Can't locate loadable object for module Apache::Request in @INC". Any trials after that would give: "Can't locate object method "new" via package "Apache::Request"". I saw some other guy on the list had a similar problem, but in his case installing the libapreq package solved it. My problem is that the package seems already installed using ppm (!) but can't be installed using perl -MCPAN -e shell, because that needs me to build mod_perl first. However, I've installed the BINARY mod_perl version using the ppm method described in the mod_perl cookbook listing 1.2, so the second method fails. I'm a little confused right now. Should I try to find another way to build mod_perl using the sources, or is there maybe a trick or something I've missed that would help me bypass this problem and use the modules? (Apache::Cookie fails to load too.) Any help would be appreciated. Thanks in advance.
Re: Berkeley DB 4.0.14 not releasing lockers under mod_perl
Aaron Ross wrote:

my $db_key = tie( %{$Rhash}, 'BerkeleyDB::Btree',
    -Flags    => DB_CREATE,
    -Filename => $file,
    -Env      => $env );
die "Can't open $file: $! " . $BerkeleyDB::Error . "\n" if !$db_key;
return $db_key;

I was wondering if using tie() instead of just the OO interface was causing a problem. Maybe rewriting the code to just use new() would help? Of course that means losing the hash interface. Of course, that would avoid the problem, because I wouldn't be using DB's built-in locking anymore :-) Using tie() means losing the environment, which is needed for DB's internal locking. In a production environment with thousands of hits per hour coming in on multiple threads, I really don't trust flock() to do the job. Dan Wilga [EMAIL PROTECTED] Web Technology Specialist http://www.mtholyoke.edu Mount Holyoke College Tel: 413-538-3027 South Hadley, MA 01075 Who left the cake out in the rain?
Re: PerlModule hell - questions and comments
Kee Hinckley wrote: 1. *Why* are the apache config files executed twice (completely with loading and unloading all the modules)? This is a core apache thing. Apache does it to verify that a restart is safe. See http://thingy.kcilink.com/modperlguide/config/Apache_Restarts_Twice_On_Start.html I'm not saying I think it's the greatest idea, but that's the reason behind it. Modules loaded with PerlModule and PerlRequire are not supposed to be loaded again the second time. I seem to remember that they are loaded again when using DSO though, so if you're using DSO you may want to recompile as static. Also, if you have PerlFreshRestart on that will cause a reload. A couple of people reported a bug that they were seeing which caused these modules to be loaded twice anyway. That sounds like the issue you saw with Perl sections. I haven't tested this myself, and fixing it would probably require help from Doug. As a workaround, it is possible to do all of your module loading from a startup.pl called from PerlRequire, and avoid that problem. That's what I do. Of course my goal here sounds like exactly the opposite of yours: you actually *want* Embperl to get loaded both times so that your conf directives will work. I haven't run into that problem before because I don't use any modules that add conf directives. Maybe Gerald will have an explanation of what the expected behavior is on his end. It can't be this much trouble for most people or no one would be using Embperl or custom conf directives. - Perrin
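Perrin's startup.pl workaround, sketched concretely (the file path and the application module name are hypothetical; only the pattern is from the thread):

```
# httpd.conf -- load everything once, from a single entry point
PerlRequire /usr/local/apache/conf/startup.pl
```

```perl
# startup.pl -- preload modules here instead of via PerlModule lines
use Apache ();
use Apache::Registry ();
use My::Handler ();   # hypothetical application module
1;                    # a required file must return true
```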
Re: PerlModule hell - questions and comments
At 4:18 PM -0500 3/22/02, Perrin Harkins wrote: Modules loaded with PerlModule and PerlRequire are not supposed to be loaded again the second time. I seem to remember that they are loaded again when using DSO though, so if you're using DSO you may want to recompile as static. Also, if you have PerlFreshRestart on that will cause a reload. If all you were doing was loading a normal Perl module, the single load would be fine. The catch is that in this case we are loading a Perl module which in turn is registering an Apache module. The Apache module is being *unloaded* prior to the second pass through the config file. The only way that it will be reloaded is if the Perl module is reloaded on the second pass as well. A couple of people reported a bug that they were seeing which caused these modules to be loaded twice anyway. That sounds like the issue you saw with Perl sections. I haven't tested this myself, and fixing it would probably require help from Doug. As a workaround, it is possible to do all of your module loading from a startup.pl called from PerlRequire, and avoid that problem. That's what I do. That doesn't solve the problem because it won't load twice, and the Apache module won't get reloaded. because I don't use any modules that add conf directives. Maybe Gerald will have an explanation of what the expected behavior is on his end. It can't be this much trouble for most people or no one would be using Embperl or custom conf directives. At Embperl 2.0b6 Gerald switched to a new architecture. The previous version was just a plain Perl module loaded as a handler by mod_perl. This version is also an Apache module. Maybe we need an option for PerlModule that forces a load each time? -- Kee Hinckley - Somewhere.Com, LLC http://consulting.somewhere.com/ [EMAIL PROTECTED] I'm not sure which upsets me more: that people are so unwilling to accept responsibility for their own actions, or that they are so eager to regulate everyone else's.
Re: PerlModule hell - questions and comments
Kee Hinckley wrote: At Embperl 2.0b6 Gerald switched to a new architecture. The previous version was just a plain Perl module loaded as a handler by mod_perl. This version is also an Apache module. Okay, if it's only in the recent betas then it's possible that only a few people have encountered this. Can anyone else who has built a module with custom conf directives comment on this issue? Maybe we need an option for PerlModule that forces a load each time? It seems like something to keep the C and perl sides doing the same thing is what's needed, so that if the C stuff gets unloaded the perl stuff will too. In your case, PerlFreshRestart might help with what you're trying to do since it will clear %INC, but you may still have the problem with needing to call Init. - Perrin
Re: PerlModule hell - questions and comments
At 5:11 PM -0500 3/22/02, Perrin Harkins wrote: Kee Hinckley wrote: At Embperl 2.0b6 Gerald switched to a new architecture. The previous version was just a plain Perl module loaded as a handler by mod_perl. This version is also an Apache module. Okay, if it's only in the recent betas then it's possible that only a few people have encountered this. Can anyone else who has built a module with custom conf directives comment on this issue? I should note that it also appears to be at least partially either an architecture or configuration issue. The original code which worked for Gerald didn't call unload in its cleanup handler. However that did not work on my system (MacOS X), and on at least one other system (Linux). I found that I had to call unload for things to work, and then it wasn't getting it reloaded. So there may be another way to fix the problem. Suggestions are more than welcome. -- Kee Hinckley - Somewhere.Com, LLC http://consulting.somewhere.com/ [EMAIL PROTECTED] I'm not sure which upsets me more: that people are so unwilling to accept responsibility for their own actions, or that they are so eager to regulate everyone else's.
Re: mod_perl on windows
On Fri, 22 Mar 2002, [EMAIL PROTECTED] wrote: Hi. my $erver: Apache/1.3.22 (Win32) PHP/4.0.6 mod_perl/1.26_01-dev perl v5.6.1 Consider this sample script: use Apache::Request; use strict; use warnings; my $r = Apache->request; my $apr = Apache::Request->new($r); $r->send_http_header('text/html'); The first time I run the script the error is: Can't locate loadable object for module Apache::Request in @INC. Any trials after that would give: Can't locate object method "new" via package "Apache::Request". I saw some other guy in the list had a similar problem but in his case installing the libapreq pragma solved his problem. My problem is that the pragma seems already installed using ppm (!) but can't be installed using perl -MCPAN -e shell because it needs me to build mod_perl first.. However I've installed the BINARY mod_perl version using the ppm method described in the mod_perl cookbook listing 1.2 so the second method fails. Did you install the libapreq ppm package from ActiveState's repository, or from the repository at http://theoryx5.uwinnipeg.ca/ppmpackages/? If it was from Activestate's, maybe try the latter. best regards, randy kobes
Re: mod_perl on windows
Great! It worked now, thanks! I only had to install --force libapreq and it worked. - Original Message - From: Randy Kobes [EMAIL PROTECTED] To: [EMAIL PROTECTED] [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Saturday, March 23, 2002 12:35 AM Subject: Re: mod_perl on windows
Problems installing on Solaris 8
Hi all, I'm trying to build mod_perl 1.26 and Apache 1.3.24 on Solaris 8. I have Perl 5.6.1 on the machine. I am building mod_perl as follows:

perl Makefile.PL EVERYTHING=1 \
    APACHE_SRC=../apache_1.3.24/src USE_APACI=1 \
    PREP_HTTPD=1 DO_HTTPD=1
make
make install

I then change into the apache_1.3.24 directory and do

./configure --prefix=/usr/local/apache \
    --enable-module=all --enable-shared=max \
    --activate-module=src/modules/perl/libperl.a \
    --enable-module=perl --disable-shared=perl
make
make install

This all goes well, and I end up with an Apache installation in /usr/local/apache. However, I cannot start this. Doing a httpd -t gives me the following error:

[7:13pm]# bin/httpd -t
Syntax error on line 231 of /usr/local/apache/conf/httpd.conf: Cannot load /usr/local/apache/libexec/mod_auth_db.so into server: ld.so.1: bin/httpd: fatal: relocation error: file /usr/local/apache/libexec/mod_auth_db.so: symbol db_open: referenced symbol not found

If I comment out the mod_auth_db lines in my config and try again I get:

[7:14pm]# ../bin/httpd -t
Syntax error on line 233 of /usr/local/apache/conf/httpd.conf: Cannot load /usr/local/apache/libexec/libproxy.so into server: ld.so.1: ../bin/httpd: fatal: relocation error: file /usr/local/apache/libexec/libproxy.so: symbol __floatdisf: referenced symbol not found

When I comment this one out, the server starts. But I need mod_proxy for this site :( This only seems to happen on Solaris. I've tested on FreeBSD 4.3, FreeBSD 4.5 and Debian GNU/Linux (Woody) and not been able to replicate this error. However doing the above steps on another Solaris 8 box seems to have the same problems. Any advice on this would be much appreciated. Regards,
--
Wayne Pascoe               | The time for action is passed.
[EMAIL PROTECTED]          | Now is the time for senseless
http://www.molemanarmy.com | bickering.
Re: Problems installing on Solaris 8
Wayne Pascoe wrote: Hi all, I'm trying to build mod_perl 1.26 and Apache 1.3.24 on Solaris 8. [...] Cannot load /usr/local/apache/libexec/libproxy.so into server: ld.so.1: ../bin/httpd: fatal: relocation error: file /usr/local/apache/libexec/libproxy.so: symbol __floatdisf: referenced symbol not found [...] Any advice on this would be much appreciated. Regards, This one is easy. Include the ssl library with env LIBRARIES='-Ltherightdirectory -lssl -lcrypto' I hope you have better luck than I have. I got past this, and I have not been able to get GCC to link mod_perl in at all. Static or DSO. 
I'm going to load the Forte compiler and try again Monday. I've never failed on getting stuff like this to compile and run correctly, and I'm not going to start now.
Re: Problems installing on Solaris 8
The Wizkid wrote: This one is easy. 
Include the ssl library with Oops, I forgot some stuff. Use this on the make command:

LIBS="-L/opt/local/lib -lssl -lcrypto" \
INCLUDES="-I/opt/local/include" \

The make string I'm using is:

env SSL_BASE=/opt/local \
    CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" \
    LIBS="-L/opt/local/lib -lssl -lcrypto" \
    INCLUDES="-I/opt/local/include" \
    OPTIM="-O2" make

You need this if Perl and Apache are compiled with the bigfiles option. I hope you have better luck than I have. I got past this, and I have not been able to get GCC to link mod_perl in at all. Static or DSO. I'm going to load the Forte compiler and try again Monday. I've never failed on getting stuff like this to compile and run correctly, and I'm not going to start now.
Re: Problems installing on Solaris 8
The Wizkid [EMAIL PROTECTED] writes: This one is easy. Include the ssl library with env LIBRARIES=' -Ltherightdirectory -lssl -lcrypto Why does mod_perl need ssl and crypto ? Just curious... I'll try this now... I hope you have bettter luck then I have. I got past this, and I have not been able to get GCC to link mod_perl in at all. Static or DSO. I'm going to load the Forte compiler and try again Monday. I've never failed on getting stuff like this to compile and run correctly, and I'm not going to start now. Wish I could help you on that :( -- - Wayne Pascoe | You know, it's simply not true that [EMAIL PROTECTED] | wars never settle anything - James Burnham http://www.molemanarmy.com |
Re: Can't open perl script -spi.bak
On Fri, 2002-03-22 at 12:57, Robert Landrum wrote: That is very weird, because this code doesn't seem to work: perl -e 'system("perl", " -e1") == 0 or die "oops"' Actually, that's not all that weird. Most shells take care of stripping out garbage before setting the argument list. Since system(LIST) doesn't use the shell, it's passing perl the literal " -e1", which perl won't recognize as a command line option (and correctly so in my opinion). Actually this isn't standard behavior. I can't think of a situation where I would want system to concatenate a string for me rather than interpreting the string as an argument and acting accordingly. If you check 'perldoc -f system', this is exactly what system is supposed to do when given a program name and a list of arguments, so it looks like 'system' may be buggy in the win32 version of perl <g>
Re: Problems installing on Solaris 8
The Wizkid [EMAIL PROTECTED] writes: Oops, I forgot some stuff. Use this on the make command: LIBS="-L/opt/local/lib -lssl -lcrypto" \ INCLUDES="-I/opt/local/include" \ The make string I'm using is: A locate libssl shows it to be in /usr/local/openssl/lib; libcrypto looks to be in /usr/local/lib. What should I set the LIBS line to in this case ? I've just tried -L/usr/local/openssl/lib \ -L/usr/local/lib -l libssl -lcrypto with no luck. env SSL_BASE=/opt/local \ CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" \ LIBS="-L/opt/local/lib -lssl -lcrypto" \ INCLUDES="-I/opt/local/include" \ OPTIM="-O2" make You need this if Perl and Apache are compiled with the bigfiles option. How can I find out if perl was configured with the bigfiles option? Is there any way to see these if I don't have the original config files from the source directory ? I've tried the above without the LARGEFILE stuff, but still no luck :( I'll try some more in the morning. Thanks for that :)
--
Wayne Pascoe               | 'tis far easier to get forgiveness than
[EMAIL PROTECTED]          | it is to get permission - probably someone
http://www.molemanarmy.com | famous, but more often, my Dad.
Re: Problems installing on Solaris 8
Wayne Pascoe wrote: How can I find out if perl was configured with the bigfiles option? Is there any way to see these if I don't have the original config files from the source directory ? perl -V will tell you, I believe. I don't think mod_perl uses the ssl stuff. The error is actually coming from mod_auth_db.so, according to your original message. If you go into the conf file, and comment out this module, (line 231 of course) your http server might start, with your mod_perl module. ALSO -- I'm kinda rusty on getting the modules and stuff compiled. Someone else on this list might chime up and come up with lots of better answers. W.Kid I've tried the above without the LARGEFILE stuff, but still no luck :( I'll try some more in the morning. Thanks for that :)
Re: Can't open perl script -spi.bak
On 22 Mar 2002, Garth Winter Webb wrote: On Fri, 2002-03-22 at 12:57, Robert Landrum wrote: That is very weird, because this code doesn't seem to work: perl -e 'system("perl", " -e1") == 0 or die "oops"' Actually, that's not all that weird. Most shells take care of stripping out garbage before setting the argument list. Since system(LIST) doesn't use the shell, it's passing perl the literal " -e1", which perl won't recognize as a command line option (and correctly so in my opinion). Actually this isn't standard behavior. I can't think of a situation where I would want system to concatenate a string for me rather than interpreting the string as an argument and acting accordingly. If you check 'perldoc -f system', this is exactly what system is supposed to do when given a program name and a list of arguments, so it looks like 'system' may be buggy in the win32 version of perl This behaviour seems to be dependent on the Perl version and on the Win32 shell used - the leading whitespace in front of the 1st argument after the program name in the system() call didn't cause a problem on Windows 98 with ActivePerl 626 (which I used to develop that part of the Makefile.PL), but it does cause a problem on other Win32s with different ActivePerl versions. best regards, randy kobes
Re: Apache and Perl with Virtual Host
Okay, this is still giving me problems. Here is my config. I've tried several things and still nothing. For some reason I can't get cgi scripts to run under any virtual webs, but the default web. I'm running RH 7.2 with apache 1.3.20. I do have mod_perl installed. My other box with RH 6.0 with apache 1.3.14, the cgi-bins work fine under all the virtual webs. I've compared the two config files, but I haven't seen what would cause it. I'm sure it's got to be something so simple.

<VirtualHost 192.168.1.106>
    DocumentRoot /var/www/html
    ServerName www2.zeetec.net
    Options +ExecCGI
    Alias /host/ /webhome/host/
    Alias /cgi-bin/ /var/www/cgi-bin/
    ScriptAlias /cgi-bin/ /var/www/cgi-bin
    <Location /perl>
        SetHandler perl-script
        PerlHandler Apache::Registry
        PerlSendHeader On
        Options +ExecCGI
    </Location>
    <Directory /var/www/cgi-bin>
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Bill Marrs wrote: At 04:02 AM 3/14/2002, Matt Phelps wrote: Forgive me if I'm posting to the wrong group. I've got apache 1.3.22 running several virtual webs. I can get perl scripts to run under the default web but not in the others. All the webs point to the same script folder. If I try to run the script under a virtual web, all I get is text display. Any help would be great. Well, I use mod_perl with VirtualHosts... 
My config looks something like:

<VirtualHost gametz.com>
    ServerAdmin [EMAIL PROTECTED]
    DocumentRoot /home/tz/html
    ServerName gametz.com
    DirectoryIndex /perl/gametz.pl
    # The live area
    Alias /perl/ /home/tz/perl/
    <Location /perl>
        AllowOverride None
        SetHandler perl-script
        PerlHandler Apache::RegistryBB
        PerlSendHeader On
        Options +ExecCGI
    </Location>
</VirtualHost>

<VirtualHost surveycentral.org>
    ServerAdmin [EMAIL PROTECTED]
    DocumentRoot /projects/web/survey-central
    ServerName surveycentral.org
    DirectoryIndex /perl/survey.pl
    Alias /perl/ /projects/web/survey-central/perl/
    <Location /perl>
        SetHandler perl-script
        PerlHandler Apache::RegistryBB
        PerlSendHeader On
        Options +ExecCGI
    </Location>
</VirtualHost>
Re: PerlModule hell - questions and comments
At 5:11 PM -0500 3/22/02, Perrin Harkins wrote: In your case, PerlFreshRestart might help with what you're trying to do since it will clear %INC, but you may still have the problem with needing to call Init. PerlFreshRestart will reload the module and thus call Init, but PerlFreshRestart is only called when we fork a new process, it is not called between the first and second parsing of the config file at startup. That seems like a bug to me. If reading the config file twice is intended to ensure that subsequent re-reads on HUP will work, then *everything* should be the same, and that means if PerlFreshRestart is set, it ought to happen there as well. Oddly enough though, PerlFreshRestart is not required for this to work after a HUP, even though it would seem to make sense. Looking at my debug output, it appears that -HUP doesn't cause the cleanup handlers to be called, so the module is never unloaded. -- Kee Hinckley - Somewhere.Com, LLC http://consulting.somewhere.com/ [EMAIL PROTECTED] I'm not sure which upsets me more: that people are so unwilling to accept responsibility for their own actions, or that they are so eager to regulate everyone else's.
Re: 'Pinning' the root apache process in memory with mlockall
Daniel Hanks wrote: On Sat, 23 Mar 2002, Stas Bekman wrote: See the discussion on the [EMAIL PROTECTED] list, http://marc.theaimsgroup.com/?t=10165973081&r=1&w=2 where it was said that it's a very bad idea to use mlock and variants. Moreover the memory doesn't get unshared when the parent pages are paged out, it's the reporting tools that report the wrong information and of course mislead the size limiting modules which start killing the processes. As a conclusion to this thread I've added the following section to the performance chapter of the guide: Are we saying then that libgtop is erroneous in its reporting under these circumstances? And in the case of Linux, I'm assuming libgtop just reads its info straight from /proc. Is /proc erroneous then? As people have pointed out it's not libgtop, it's /proc. You have the same problem with top(1). It's not erroneous, it just doesn't reflect the immediate change; /proc will be updated when the pages in question are accessed, which for performance reasons doesn't happen immediately. But this could be too late for the processes that are going to be killed. I've posted the C code to test this earlier this week here: http://marc.theaimsgroup.com/?l=apache-modperl&m=101667859909389&w=2 You are welcome to run more tests and report back. __ Stas Bekman JAm_pH -- Just Another mod_perl Hacker http://stason.org/ mod_perl Guide --- http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
Re: 'Pinning' the root apache process in memory with mlockall
Perrin Harkins wrote: Stas Bekman wrote: Moreover the memory doesn't get unshared when the parent pages are paged out, it's the reporting tools that report the wrong information and of course mislead the size limiting modules which start killing the processes. Apache::SizeLimit just reads /proc on Linux. Is that going to report a shared page as an unshared page if it has been swapped out? That's what people report. Try the code here: http://marc.theaimsgroup.com/?l=apache-modperl&m=101667859909389&w=2 to reproduce the phenomenon in a few easy steps. Of course you can avoid these issues if you tune your machine not to swap. The trick is, you really have to tune it for the worst case, i.e. look at the memory usage while beating it to a pulp with httperf or http_load and tune for that. That will result in MaxClients and memory limit settings that underutilize the machine when things aren't so busy. At one point I was thinking of trying to dynamically adjust memory limits to allow processes to get much bigger when things are slow on the machine (giving better performance for the people who are on at that time), but I never thought of a good way to do it. This can be done in the following way: move the variable that controls the limit into shared memory. Now run a special monitor process that will adjust this variable, or let each child process do that in the cleanup stage. To dynamically change MaxClients one needs to re-HUP the server. __ Stas Bekman JAm_pH -- Just Another mod_perl Hacker http://stason.org/ mod_perl Guide --- http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
Re: Can't open perl script -spi.bak
On FreeBSD using Perl 5.6.1:

perl -e 'system("ls","-d","/");'    -- This works, showing just /
perl -e 'system("ls"," -d","/");'   -- This fails, showing ls: -d: No such file or directory

On FreeBSD using tcsh:

perldoc -f system      -- This works
perldoc " -f" system   -- The shell sees that it doesn't start with a - and interprets it as a module to look up documentation for.

On Win2K using cmd:

dir C:\        -- This works
dir " C:\"     -- Again, same issue, The filename, directory name, or volume label syntax is incorrect

I think that this is pretty standard behaviour, and will be seen in various examples on multiple systems. I also think that this is indeed desirable. -- Ryan - Original Message - From: Randy Kobes [EMAIL PROTECTED] To: Garth Winter Webb [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Friday, March 22, 2002 5:43 PM Subject: Re: Can't open perl script -spi.bak
[OT] Re: Can't open perl script -spi.bak
I also think that there may be some mis-interpretation here of the system docs:

snip src="cmd:perldoc -f system"
If there is more than one argument in LIST, or if LIST is an array with more than one value, starts the program given by the first element of the list with arguments given by the rest of the list. If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp(), which is more efficient.
/snip

Basically this says that:

system('ls','-d','/');   skips the shell
system('ls -d /');       is broken into words and skips the shell
system('ls -d /*');      is passed to the shell as one big string

system("ls -d /*"); of course is the same. After reading this I wasn't sure how it would handle: system('ls','-d','/*'); since it's more than one argument but with shell metacharacters... I assumed it would go to the shell... But I was wrong. It doesn't go to the shell for metacharacter interpretation and it reports: ls: /*: No such file or directory Therefore it's safe to say that no string concatenation is done, and it makes sense that ' -e1' is not a valid argument. -- Ryan - Original Message - From: Ryan Parr [EMAIL PROTECTED] To: Randy Kobes [EMAIL PROTECTED]; Garth Winter Webb [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Friday, March 22, 2002 9:55 PM Subject: Re: Can't open perl script -spi.bak
On Win2K using cmd:

dir C:\        -- This works
dir " C:\"     -- Again, same issue: The filename, directory name, or volume label syntax is incorrect.

I think that this is pretty standard behaviour, and will be seen in various examples on multiple systems. I also think that this is indeed desirable.

-- Ryan

- Original Message -
From: Randy Kobes [EMAIL PROTECTED]
To: Garth Winter Webb [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, March 22, 2002 5:43 PM
Subject: Re: Can't open perl script -spi.bak

On 22 Mar 2002, Garth Winter Webb wrote:

On Fri, 2002-03-22 at 12:57, Robert Landrum wrote:

That's very weird, because this code doesn't seem to work:

perl -e 'system("perl", " -e1") == 0 or die "oops"'

Actually, that's not all that weird. Most shells take care of stripping out garbage before setting the argument list. Since system(LIST) doesn't use the shell, it's passing perl the literal " -e1", which perl won't recognize as a command line option (and correctly so, in my opinion).

Actually this isn't standard behavior. I can't think of a situation where I would want to use system to concatenate a string for me rather than interpreting the string as an argument and acting accordingly. If you check 'perldoc -f system', this is exactly what system is supposed to do when given a program name and a list of arguments, so it looks like 'system' may be buggy in the win32 version of perl.

This behaviour seems to be dependent on the Perl version and on the Win32 shell used - the leading whitespace in front of the 1st argument after the program name in the system() call didn't cause a problem on Windows 98 with ActivePerl 626 (which I used to develop that part of the Makefile.PL), but it does cause a problem on other Win32s with different ActivePerl versions.

best regards,
randy kobes
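Robert's failing one-liner is easy to reproduce on any system with perl on the PATH. A sketch of the behaviour the thread describes: with no leading space the argument is parsed as switches; with a leading space, system(LIST) bypasses the shell, so perl receives the literal " -e1" and tries to open it as a script file:

```shell
# " -e1" without the space is parsed as the -e switch; exits 0.
perl -e 'system("perl", "-e1") == 0 or die "oops"' && echo "without leading space: ok"

# With the leading space, perl gets the literal ' -e1', treats it as a
# script filename, and fails with "Can't open perl script" (exit != 0).
perl -e 'system("perl", " -e1") == 0 or die "oops"' 2>/dev/null || echo "with leading space: fails"
```

This is the same failure mode as the reported `Can't open perl script -spi.bak` from mod_perl's Makefile.PL on Win32.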
Re: Apache and Perl with Virtual Host
Hi there,

On Fri, 22 Mar 2002, Matt Phelps wrote:
[snip,snip]
> Okay, this is still giving me problems. Here is my config. I've tried
> several things and still nothing. For some reason I can't get cgi scripts
> to run under any virtual webs, but the default web.

What's a 'web'? I think you mean 'host'. (It helps if we all speak the same language, especially if we are using a search engine... :)

> I do have mod_perl installed.

Oh, all right then... :)

> I'm sure it's got to be something so simple.

I think it's called 'reading the documentation'... :)

> <VirtualHost 192.168.1.106>
>   DocumentRoot /var/www/html
>   ServerName www2.zeetec.net
>   Options +ExecCGI
>   Alias /host/ /webhome/host/
>   Alias /cgi-bin/ /var/www/cgi-bin/
>   ScriptAlias /cgi-bin/ /var/www/cgi-bin
>   <Location /perl>
>     SetHandler perl-script
>     PerlHandler Apache::Registry
>     PerlSendHeader On
>     Options +ExecCGI
>   </Location>
>   <Directory /var/www/cgi-bin>
>     AllowOverride None
>     Options None        <--- This removes 'ExecCGI' from your Options
>     Order allow,deny
>     Allow from all
>   </Directory>
> </VirtualHost>

Check the Apache docs about the behaviour of the 'Options' directive.

73,
Ged.
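Ged's point, spelled out: an `Options` directive written without a `+` or `-` prefix replaces the entire inherited option set, so `Options None` inside the `<Directory>` block wipes out the `ExecCGI` set at the virtual-host level. A minimal corrected fragment, keeping Matt's paths (a sketch, not a complete config):

```apache
<Directory /var/www/cgi-bin>
    AllowOverride None
    # Relative '+' syntax adds ExecCGI to the options inherited from
    # the enclosing VirtualHost instead of replacing them wholesale.
    Options +ExecCGI
    Order allow,deny
    Allow from all
</Directory>
```

Note that mixing absolute (`Options None`) and relative (`Options +ExecCGI`) forms in one directive is itself a syntax error in Apache, which is another reason to pick one style per block.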
cvs commit: modperl Changes Makefile.PL
stas    02/03/22 11:58:13

  Modified:    .    Changes Makefile.PL
  Log:
  the first flag argument to perl cannot start with space, since perl tries
  to open the -spi.bak as a file. fix that in the win32 case.

  Revision  Changes    Path
  1.626     +4 -0      modperl/Changes

  Index: Changes
  ===================================================================
  RCS file: /home/cvs/modperl/Changes,v
  retrieving revision 1.625
  retrieving revision 1.626
  diff -u -r1.625 -r1.626
  --- Changes    19 Mar 2002 02:18:02 -    1.625
  +++ Changes    22 Mar 2002 19:58:13 -    1.626
  @@ -10,6 +10,10 @@

   =item 1.26_01-dev

  +the first flag argument to perl cannot start with space, since perl tries
  +to open the -spi.bak as a file. fix that in the win32 case.
  +[Stas Bekman [EMAIL PROTECTED]]
  +
   starting from perl 5.7.3 for tied filehandles, tiedscalar magic is
   applied to the IO slot of the GP rather than the GV itself. adjust the
   TIEHANDLE macro to work properly under 5.7.3+.
   [Charles Jardine [EMAIL PROTECTED],

  1.197     +1 -1      modperl/Makefile.PL

  Index: Makefile.PL
  ===================================================================
  RCS file: /home/cvs/modperl/Makefile.PL,v
  retrieving revision 1.196
  retrieving revision 1.197
  diff -u -r1.196 -r1.197
  --- Makefile.PL    9 Sep 2001 21:56:46 -    1.196
  +++ Makefile.PL    22 Mar 2002 19:58:13 -    1.197
  @@ -1101,7 +1101,7 @@
       cp "lib/mod_perl_hooks.pm.PL", "lib/mod_perl_hooks.pm";
       if ($Is_Win32) {
  -        my @args = ($^X, ' -spi.bak ', ' -e ', "s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw($hooks) }/", 'lib/mod_perl_hooks.pm');
  +        my @args = ($^X, '-spi.bak ', ' -e ', "s/sub mod_perl::hooks.*/sub mod_perl::hooks { qw($hooks) }/", 'lib/mod_perl_hooks.pm');
           system(@args) == 0 or die "@args failed\n";
       }
   iedit lib/mod_perl_hooks.pm,
cvs commit: modperl STATUS
geoff    02/03/22 12:03:34

  Modified:    .    STATUS
  Log:
  add reference to PERL5LIB patch

  Revision  Changes    Path
  1.2       +6 -1      modperl/STATUS

  Index: STATUS
  ===================================================================
  RCS file: /home/cvs/modperl/STATUS,v
  retrieving revision 1.1
  retrieving revision 1.2
  diff -u -r1.1 -r1.2
  --- STATUS    2 Mar 2002 18:09:54 -    1.1
  +++ STATUS    22 Mar 2002 20:03:34 -    1.2
  @@ -1,5 +1,5 @@
   mod_perl 1.3 STATUS:
  -  Last modified at [$Date: 2002/03/02 18:09:54 $]
  +  Last modified at [$Date: 2002/03/22 20:03:34 $]

   Release:
  @@ -78,6 +78,11 @@
   * Apache::test
       Report: http://marc.theaimsgroup.com/?l=apache-modperl-dev&m=98278446807561&w=2
       Status:
  +        patch available
  +
  +* PERL5LIB should unshift @INC instead of push
  +    Report: http://marc.theaimsgroup.com/?l=apache-modperl-dev&m=100434522809036&w=2
  +    Status: patch available