Re: Optimising cache performance
Thanks for your feedback - a couple more questions.

> First, I'm assuming this is for a distributed system running on multiple
> servers. If not, you should just download one of the cache modules from
> CPAN. They're good.

For now it's not a distributed system, and I have been using Cache::FileCache. But that still means freezing and thawing objects - which I'm trying to minimise.

> I suggest you use either Cache::Mmap or IPC::MM for your local cache. They
> are both very fast and will save you memory. Also, Cache::Mmap is only
> limited by the size of your disk, so you don't have to do any purging.

When you say that Cache::Mmap is only limited by the size of your disk, is that because the file in memory gets written to disk as part of VM? (I don't see any other mention of files in the docs.) Which presumably means resizing your VM to make space for the cache?

> You seem to be taking a lot of care to ensure that everything always has
> the latest version of the data. If you can handle slightly out-of-date

Call me anal ;) Most of the time it wouldn't really matter, but sometimes it could be extremely off-putting.

> If everything really does have to be 100% up-to-date, then what you're
> doing is reasonable. It would be nice to not do the step that checks for
> outdated objects before processing the request, but instead do it in a
> cleanup handler, although that could lead to stale data being used now
> and then.

Yes - had considered that.

> If you were using a shared cache like Cache::Mmap, you could have a cron
> job or a separate Perl daemon that simply purges outdated objects every
> minute or so, and leave that out of your mod_perl code completely.

I see the author of IPC::MM has an e-toys address - was this something you used at e-toys? I know very little about shared memory segments, but is MM used to share small data objects, rather than to keep large caches in shared memory?
Ralph Engelschall writes in the MM documentation:

"The maximum size of a continuous shared memory segment one can allocate depends on the underlaying platform. This cannot be changed, of course. But currently the high-level malloc(3)-style API just uses a single shared memory segment as the underlaying data structure for an MM object which means that the maximum amount of memory an MM object represents also depends on the platform."

What implications does this have on the size of the cache that can be created with IPC::MM?

thanks

Clinton Gormley
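For the cron/daemon purging idea mentioned above, a minimal sketch using Cache::FileCache's purge() method (which removes expired entries) might look like this. The namespace is invented; Cache::Mmap has a different API, so this only illustrates the shape of the approach:

```perl
#!/usr/bin/perl
# A tiny standalone purge daemon: runs outside mod_perl and drops
# expired entries from a shared file cache once a minute.
use strict;
use warnings;
use Cache::FileCache ();

# 'objects' is a hypothetical namespace; it must match the one the
# mod_perl processes write to.
my $cache = Cache::FileCache->new({ namespace => 'objects' });

while (1) {
    $cache->purge();   # remove entries whose TTL has expired
    sleep 60;
}
```

Run from cron instead, the loop would be dropped and purge() called once per invocation.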
Re: Server questions
Absolutely. In this case, the cluster actually acts like a load-balancing solution.

Michael Hyman wrote:
> I am not familiar with clustering. Would you run a mod_perl based web site
> on a cluster? Isn't the point of a cluster to make a group of machines
> appear to be one? If so, how is that beneficial for web services?
>
> Thanks...Michael
>
> [rest of quoted thread trimmed]
Re: Server questions
I am not familiar with clustering. Would you run a mod_perl based web site on a cluster? Isn't the point of a cluster to make a group of machines appear to be one? If so, how is that beneficial for web services?

Thanks...Michael

----- Original Message -----
From: "Dzuy Nguyen" <[EMAIL PROTECTED]>
To: "Modperl" <[EMAIL PROTECTED]>
Sent: Friday, March 07, 2003 6:19 PM
Subject: Re: Server questions

> I always say, buy the best you can afford. Then again, consider how many
> Linux PCs you can have for the price of the Sun. Run those PCs in a web
> farm or cluster and that Sun can't match the processing power and speed.
>
> [quoted original post trimmed]
PerlCleanupHandler firing too early?
My PerlCleanupHandler seems to be firing before the content phase has finished processing the page. The handler pretty much looks like:

    sub handler {
        my ($r) = @_;
        undef $Foo::bar;
        undef $Foo::baz;
        return OK;
    }

It's being invoked in a virtual host Apache conf segment with:

    PerlCleanupHandler Apache::CleanupFoo

If I don't comment out the PerlCleanupHandler line, pieces of the application that rely on any variable that I undef in the cleanup phase will crash. In the error log it doesn't _LOOK_ like the handler is being called early. The log yields exactly what I would expect it to:

    PID 1000 REWRITE CALLED initial: 1 main: 0
    PID 1000 REWRITE CALLED initial: 0 main: 0
    PID 1000 REWRITE CALLED initial: 0 main: 1
    PID 1000 AUTHENTICATION CALLED
    BUNCH OF PERL ERRORS GO HERE (can't call method foo on undefined value and the like)
    PID 1000 REWRITE CALLED initial: 0 main: 1 (rewriting /cgi-bin/error/error.pl)
    PID 1000 LOGGER CALLED (uri: error.pl)
    PID 1000 CLEANUP CALLED (uri: mod_perl app)

I'm running on Apache/1.3.27 (Unix) mod_perl/1.26 with Embperl 1.3.6. Does anyone have an idea of what is going on here (or what I'm doing wrong)? Am I right in thinking that the CleanupHandler isn't supposed to have any effect on the code _running_ in the current or subsequent requests?

In summary: if I leave the cleanup handler in, everything that I undef in the cleanup handler gets undef'ed in the middle of running the code; if I remove the CleanupHandler the app works as intended.

--
Richard "Trey" Hyde
Senior Software Engineer
CNET Channel
(949) 399 8722
[EMAIL PROTECTED]
http://www.cnetchannel.com
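For per-request state, the usual alternative to undef'ing package globals in a cleanup handler is to hang the data off the request object via $r->pnotes, which mod_perl discards automatically when the request finishes. A minimal sketch (key and variable names are invented, not from the original post):

```perl
# In the content handler (mod_perl 1.x): store per-request data on
# the request object instead of in package globals like $Foo::bar.
sub handler {
    my $r = shift;
    my $some_object = { user => 'example' };   # hypothetical payload
    $r->pnotes(bar => $some_object);           # lives only for this request

    # Later, in any phase of the *same* request:
    my $bar = $r->pnotes('bar');

    # No PerlCleanupHandler needed: pnotes are cleared automatically
    # after the logging phase, so nothing can be yanked out from under
    # code that is still running.
    return 0;   # Apache::Constants::OK
}
```

This sidesteps the whole class of bugs where a cleanup handler for one (sub)request clobbers globals another part of the application still depends on.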
Re: Server questions
I always say, buy the best you can afford. Then again, consider how many Linux PCs you can have for the price of the Sun. Run those PCs in a web farm or cluster and that Sun can't match the processing power and speed.

Michael Hyman wrote:
> If you were to buy machines to be used as dedicated web servers, which
> would you go with? Option 1. A Sun SunFire 280R with 2 Ultra 3 processors
> and 4GB RAM, running Solaris 9. Option 2. A PC server with 2 ~2.8GHz Xeon
> processors and 8GB RAM, running Linux.
>
> [rest of quoted original post trimmed]
Re: [mp2] $r->document_root("/my/hacked/path");
Rob Brown wrote:
> I need to be able to at least temporarily change the document_root on the
> fly. Something like the following:
>
>     $r->document_root("/my/hacked/path");
>
> But it crashes with a prototype mismatch. The docs say:
>
>     $r->document_root: cannot currently be modified. requires locking
>     since it is part of the per-server config structure which is shared
>     between threads

It's in todo/api.txt:

    $r->document_root: cannot currently be modified. requires locking
    since it is part of the per-server config structure which is shared
    between threads

> Well, I could care less about actually modifying the server record. Isn't
> there a way to point it to a temporary string just for the request? In
> Apache 1, it worked fine to copy the entire server record into a malloc'ed
> buffer, hack the document_root setting in the copy, and point the server
> record pointer there just for the request, then free this temporary server
> record. Or in mod_perl, just set the document_root to the real path just
> long enough for the translation phase and then fix it back to the original
> value in the cleanup phase. What do you recommend in order to accomplish
> this under mod_perl 1.99 for Apache 2.0?

It needs to be implemented. Patches are welcome.

__
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com
Re: [mp2] What happened to $r->connection->remote_addr?
Rob Brown wrote:
> I'm getting complaints about Apache::DNAT not working with Apache 2
> because mod_perl 1.99 suddenly can't handle the things it used to. I'm
> getting this spewage:
>
>     [Fri Mar 07 11:22:21 2003] [error] [client 166.70.29.72] Can't locate
>     object method "connection" via package "Apache::RequestRec" at
>     /usr/lib/perl5/site_perl/5.8.0/Apache/DNAT.pm line 8.
>
> [full report trimmed; see the post "[mp2] What happened to
> $r->connection->remote_addr?"]
>
> I'm using the mod_perl rpm that comes stock with RedHat 8.0 Linux:
> mod_perl-1.99_05-3

1.99_05 is now 7 months old. Please test it again with the released 1.99_08 or even better with the current cvs:
http://perl.apache.org/download/source.html#2_0_Development_Source_Distribution

As for missing-method reports, see:
http://perl.apache.org/docs/2.0/api/ModPerl/MethodLookup.html
e.g. you need to load 'Apache::Connection' to get remote_addr.

Also, in the future please add some newlines in your reports; it's extremely hard to parse when everything is piled into one paragraph. Thanks.

> I'm using the apache 2.0 rpm that comes stock with RedHat 8.0 Linux:
> httpd-2.0.40-11
>
> I'm using the perl 5.8.0 rpm that comes stock with RedHat 8.0 Linux:
> perl-5.8.0-55
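Following Stas's hint about loading Apache::Connection, a sketch of reading the peer address under mod_perl 1.99/2.x might look like the following. This assumes a build where APR::SockAddr exposes ip_get() and port() (method availability varied across 1.99 releases; check ModPerl::MethodLookup on your version), and uses the Apache:: namespace of this era:

```perl
# Hedged sketch: read the remote IP and port in mod_perl 2 style.
# remote_addr now returns an APR::SockAddr object, not a packed
# sockaddr_in string, so Socket::sockaddr_in no longer applies.
use Apache::Connection ();   # makes $r->connection resolvable
use APR::SockAddr ();        # makes ip_get()/port() resolvable

sub handler {
    my $r = shift;
    my $c = $r->connection;

    my $sockaddr = $c->remote_addr;   # APR::SockAddr object
    my $ip   = $sockaddr->ip_get;     # dotted-quad string, e.g. "166.70.29.72"
    my $port = $sockaddr->port;       # numeric peer port

    $r->log->debug("peer is $ip:$port");
    return 0;   # Apache::OK
}
```

Setting (spoofing) the address, as Apache::DNAT needs, is a separate question; at the time of this thread that part of the API was still in flux.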
Server questions
Hi guys,

I have a dilemma that I need input on. If you were to buy machines to be used as dedicated web servers, which would you go with?

Option 1. A Sun SunFire 280R with 2 Ultra 3 processors and 4GB RAM, running Solaris 9.

Option 2. A PC server with 2 ~2.8GHz Xeon processors and 8GB RAM, running Linux.

The prices are worlds apart and I think I will get more bang for the buck with the PC.

The systems will connect to an Oracle server via SQL*Net and serve both dynamic and static content, along with providing download files up to 1GB in size. The files will be stored locally.

What I want to understand is which machine will be faster, be able to handle more peak loading, and have a longer lifespan yet be upgradeable for a reasonable price.

In the benchmarking we have done, we run out of RAM before CPU using Apache 1.3.27 and mod_perl, so we will heavily load the machines with RAM.

I have years of experience with Solaris and SunOS, and little with Linux, but the learning curve seems small and easily handled. It seems to me that Linux is more customizable than Solaris, but then Solaris comes pretty well tuned and does not always need much tweaking. Apache and all of our software components support both Solaris and Linux, so we can go either way as far as that goes.

I think it comes down to a simple formula of which option gets us the most peak and sustained performance for the least amount of money. So, I am looking for some input as to which way you might go in my position.

Thanks in advance for the help!!

Regards...Michael
What does SetHandler do unexpectedly?
Hi,

Well, by now you must know that I am working on something... and I keep stumbling on things I seem not to understand and cannot find in the docs / books. See this example:

    # SetHandler perl-script
    PerlHeaderParserHandler MyClass->first
    PerlAuthenHandler MyAuthen
    PerlFixupHandler MyClass->init
    # PerlHandler MyClass->handler
    PerlCleanupHandler MyClass->last

I have stripped almost all functionality and just let the subroutines print. With this setup and a 'GET http://mysite/dir/file' I see:

    first:  got /dir/file
    authen: called for /dir/file
    init:   called for /dir/file
    [error] ... /dir/file not found
    last:   finished /dir/file

No strange things; this is what I would expect. But now I remove the comments and see what happens:

    first:   got /dir/file
    authen:  called for /dir/file
    init:    called for /dir/file
    authen:  called for /file
    init:    called for /file
    handler: called for /dir/file
    [error] ... /dir/file not found
    last:    finished /dir/file

What strikes me are the two lines for /file. Why is this happening? I did not ask for it, at least not deliberately. Is this something that is related to a handler (a check one level below the URI)? Hope you can help me here (and on the other subjects...)

--Frank
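The extra /file lines look like a subrequest being run through the same phase handlers. Whether or not that is the cause here, the standard way to keep mod_perl 1.x phase handlers from reacting to subrequests and internal redirects is to guard on $r->is_initial_req. A sketch (written as plain handlers rather than the MyClass->init method-handler form, for brevity):

```perl
package MyClass;
use strict;
use Apache::Constants qw(OK DECLINED);

sub init {
    my $r = shift;

    # Subrequests (e.g. lookups one level below the URI) and internal
    # redirects get their own handler runs. Bail out of those so the
    # phase only fires once per browser-initiated request.
    return DECLINED unless $r->is_initial_req;

    warn "init: called for ", $r->uri, "\n";
    return OK;
}

1;
```

$r->is_main is the looser variant: it is true for internal redirects but false for subrequests, so pick whichever matches what you want logged.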
Re: DECLINE, ERRORs in the middle of handlers stack
Ruslan U. Zakirov wrote:
> Hello All!
> "Stacked handlers" is a very useful technology, but, as I think,
> incomplete. I need some suggestions. My project splits the content
> handler into a few parts, and each handler sends part of the requested
> page to the user, but sometimes I must stop processing and return
> DECLINED, redirect the user, or return some ERROR. The error appears in
> the middle of the page. I want Apache to buffer content and send it after
> the last handler in the chain returns OK. Is it possible?

Sure, you can buffer the data using $r->notes or $r->pnotes and have the last handler send it out. But instead of stuffing the content there, you are probably better off (more efficient?) creating a buffering class, instantiating an instance, and storing it in $r->pnotes between invocations.
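A minimal sketch of the buffering-object approach Stas describes. The class and pnotes key names are invented for illustration:

```perl
# A tiny buffering class: handlers append chunks instead of printing,
# and only the last handler in the stack flushes to the client.
package My::Buffer;
sub new   { return bless { chunks => [] }, shift }
sub add   { my $self = shift; push @{ $self->{chunks} }, @_ }
sub flush {
    my ($self, $r) = @_;
    $r->print(@{ $self->{chunks} });   # output reaches the client here
    $self->{chunks} = [];
}

package My::Handlers;
use strict;
use Apache::Constants qw(OK);

# An early handler in the stack: buffer, don't print.
sub part_one {
    my $r = shift;
    my $buf = $r->pnotes('buffer') || My::Buffer->new;
    $r->pnotes(buffer => $buf);        # persists across stacked handlers
    $buf->add("<html><body>first part of the page");
    return OK;                          # nothing sent to the client yet
}

# The last handler: every earlier phase returned OK, so flush.
sub last_one {
    my $r = shift;
    $r->pnotes('buffer')->flush($r);
    return OK;
}

1;
```

If any intermediate handler needs to redirect or error out, it simply never calls flush(), so the half-built page is discarded along with the request's pnotes.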
Re: Apache::DBI on mp2
Ask Bjoern Hansen wrote:
> On Fri, 7 Mar 2003, Stas Bekman wrote:
>>> If the physical connection is still there, would the database server
>>> do a rollback?
>>
>> If earlier the rollback worked correctly with $dbh ||= connect();, this
>> will work the same, since it performs the same operation, but rips off
>> only parts of $dbh, because of the constraints of sharing Perl data
>> structures and underlying C structs.
>
> Apache::DBI explicitly calls $dbh->rollback (when things are configured
> so it makes sense). Or maybe I am completely misunderstanding you.

All I was saying is that whatever worked for Apache::DBI will work for internal pooling.

> ps. yes, your DBI::Pool work is great. Thank you. :-)

My pleasure. Thanks for the kind words.

>> It's quite challenging, though you stop getting excited about segfaults
>> and ddd (gdb frontend) is nowadays my best friend ;)
>
> :-) You are quite the masochist if you ever got excited about segfaults.
> I only recall segfaults making me slam my head into the wall to conceal
> the pain.

I meant it in the way that when something is repeated too many times it's not the same anymore ;) so s/excited/annoyed/ will do ;)
Re: how to take advantage of mod_perl and analyze effectiveness of efforts?
On Fri, 7 Mar 2003, Charlie Smith wrote: > Date: Fri, 07 Mar 2003 16:21:15 -0700 > From: Charlie Smith <[EMAIL PROTECTED]> > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: how to take advantage of mod_perl and analize effectiveness of > efforts? > > A couple questions: > In order to take advantage of mod_perl, do you need to restart the apache > server, after making changes to a perl routine? > > What is being cached by the mod_perl? Is is just the perl executable, or > compiled in modules, or modules you have > written in cgi directories? > > How to tell if code is being cached? > > Can user code be made to use mod_perl caching? > > How to optimize code to make more efficient under mod_perl? > > > So, basically, how to take advantage of mod_perl and analize effectiveness of > efforts? > > a little puzzled, > Charlie Smith > x2791 > 3/7/03 Charlie, I would say that most of your questions will be answered if you spend a little time with the very excellent documentation which is available for free here: http://perl.apache.org/guide (I'm not trying to be rude here, but I can't summarize better for you the answers to your questions than what has been already done. This is just a friendly RTFM. :-) ky
how to take advantage of mod_perl and analyze effectiveness of efforts?
A couple questions:

In order to take advantage of mod_perl, do you need to restart the Apache server after making changes to a Perl routine?

What is being cached by mod_perl? Is it just the Perl executable, or compiled-in modules, or modules you have written in cgi directories?

How can you tell if code is being cached?

Can user code be made to use mod_perl caching?

How can you optimize code to make it more efficient under mod_perl?

So, basically: how do you take advantage of mod_perl and analyze the effectiveness of your efforts?

a little puzzled,
Charlie Smith
x2791
3/7/03

--
This message may contain confidential information, and is intended only for the use of the individual(s) to whom it is addressed.
Re: [mp2] $r->document_root("/my/hacked/path");
I need to be able to at least temporarily change the document_root on the fly. Something like the following:

    $r->document_root("/my/hacked/path");

But it crashes with a prototype mismatch. The docs say:

    $r->document_root: cannot currently be modified. requires locking
    since it is part of the per-server config structure which is shared
    between threads

Well, I could care less about actually modifying the server record. Isn't there a way to point it to a temporary string just for the request? In Apache 1, it worked fine to copy the entire server record into a malloc'ed buffer, hack the document_root setting in the copy, and point the server record pointer there just for the request, then free this temporary server record. Or in mod_perl, just set the document_root to the real path just long enough for the translation phase and then fix it back to the original value in the cleanup phase.

What do you recommend in order to accomplish this under mod_perl 1.99 for Apache 2.0?
[mp2] What happened to $r->connection->remote_addr?
I'm getting complaints about Apache::DNAT not working with Apache 2 because mod_perl 1.99 suddenly can't handle the things it used to. I'm getting this spewage:

    [Fri Mar 07 11:22:21 2003] [error] [client 166.70.29.72] Can't locate
    object method "connection" via package "Apache::RequestRec" at
    /usr/lib/perl5/site_perl/5.8.0/Apache/DNAT.pm line 8.

Here is the offending code:

    my $c = $r->connection;   # line 8
    my $old_remote_addr = $c->remote_addr;
    my ($old_port, $old_addr) = sockaddr_in($old_remote_addr);
    $old_addr = inet_ntoa $old_addr;
    # munge and compute $new_port and $new_addr ...
    $c->remote_addr(scalar sockaddr_in($new_port, inet_aton($new_addr)));
    $c->remote_ip($new_addr);

This all used to work just fine under mod_perl 1.27 but now fails miserably. I tried using "Apache::compat" also. This seemed to pick up the $r->connection slightly better, but $c->remote_addr is really whacked. Now it crashes with this spewage:

    [Fri Mar 07 12:20:32 2003] [error] [client 166.70.29.72] Bad arg length
    for Socket::unpack_sockaddr_in, length is 31, should be 16 at
    /usr/lib/perl5/5.8.0/i386-linux-thread-multi/Socket.pm line 370.

I did a Data::Dumper on it and it's a ref??!!

    $old_remote_addr = bless( do{\(my $o = 135384736)}, 'APR::SockAddr' );

How am I supposed to pull the port and addr out of that nasty beast? $c->ip_get or $c->addr? NOPE!

    [Fri Mar 07 12:22:12 2003] [error] [client 166.70.29.72] Can't locate
    object method "ip_get" via package "APR::SockAddr" at
    /usr/lib/perl5/site_perl/5.8.0/Apache/DNAT.pm line 13.

    [Fri Mar 07 12:24:58 2003] [error] [client 166.70.29.72] Can't locate
    object method "addr" via package "APR::SockAddr" at
    /usr/lib/perl5/site_perl/5.8.0/Apache/DNAT.pm line 13.

I've tried searching all the documentation for clues, but I must be looking in the wrong place. I've even tried dereferencing the SCALAR ref and sending that through sockaddr_in or inet_ntoa, but that totally doesn't work either. I'm running out of possibilities to try.

Also, once I GET the connection information, I need to be able to SET it again, to spoof the request into thinking it is coming from the more correct peer instead of what the actual socket reports. This should be reflected in the logs and in the environment variables REMOTE_ADDR and REMOTE_PORT for mod_cgi running CGI scripts.

    $r->SET_remote_addr($new_remote_addr);   # ??

I just need to port my module from mp1 to mp2. Any ideas would be appreciated.

-- Rob

DETAILS:

Apache::DNAT is freely available from CPAN:
http://search.cpan.org/src/BBB/Net-DNAT-0.10/lib/Apache/DNAT.pm

    $ uname -a
    Linux box 2.4.18-14 #1 Wed Sep 4 12:13:11 EDT 2002 i686 athlon i386 GNU/Linux

I'm using the mod_perl rpm that comes stock with RedHat 8.0 Linux: mod_perl-1.99_05-3

I'm using the apache 2.0 rpm that comes stock with RedHat 8.0 Linux: httpd-2.0.40-11

I'm using the perl 5.8.0 rpm that comes stock with RedHat 8.0 Linux: perl-5.8.0-55
Re: [mp2] automatic Apache::compat preloading in CPAN modules is a no-no
Hi Stas, I'm not interested in modifying CGI.pm to use MP2 until I start using MP2 myself. This isn't likely in the near future, since I'm very happy indeed with MP1/Apache1. Lincoln On Friday 07 March 2003 03:58 am, Stas Bekman wrote: > Stas Bekman wrote: > > Apache::compat is useful during the mp1 code porting. Though remember > > that it's implemented in pure Perl. In certain cases it overrides mp2 > > methods, because their API is very different and doesn't map 1:1 to mp1. > > So if anything, not under my control, loads Apache::compat my code is > > forced to use the potentially slower method. Which is quite bad. > > > > Some users may choose to keep using Apache::compat in production and it > > may perform just fine. Other users will choose not to use that module. > > It should be users' choice. > > > > Therefore CPAN modules should *not* preload Apache::compat, but use the > > mp2 API or copy the relevant parts from Apache::compat. > > > > Of course one can add an ugly workaround in startup.pl: > > > > $INC{'Apache/compat.pm'} = __FILE__; > > > > but I'd rather not have to remember doing that. I'll update the manpage > > to have this warning. > > > > I haven't scanned the CPAN modules yet, but I've noticed that CGI.pm's > > latest version does: > > > > require mod_perl; > > if ($mod_perl::VERSION >= 1.99) { > > eval "require Apache::compat"; > > } else { > > eval "require Apache"; > > } > > > > Lincoln, any chance we can kill that preloading? If you need help with > > porting the API please let us know. 
> > Here is a hopefully useful discussion and examples on how to get rid of > Apache::compat: > http://perl.apache.org/docs/2.0/devel/porting/porting.html#Handling_Missing >_and_Modified_mod_perl_1_0_APIs > > __ > Stas BekmanJAm_pH --> Just Another mod_perl Hacker > http://stason.org/ mod_perl Guide ---> http://perl.apache.org > mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com > http://modperlbook.org http://apache.org http://ticketmaster.com -- Lincoln D. Stein Cold Spring Harbor Laboratory [EMAIL PROTECTED] Cold Spring Harbor, NY
RE: error using instance: can't locate ... pnotes .. via
Regarding my previous post:

> ... The routines work fine if used "standalone", but as soon as the
> routine gets included via the SSI method (subrequest?) apache/mod_perl
> complains. The call to instance results in an error 'can't locate method
> 'pnotes' via package "X::Y::Z"', where X::Y::Z is my own package.

It seems as if the handler is called as if it were a method handler...? But it's not defined as such (from the Cookbook: ...the trigger for mod_perl is the use of the prototype ($$)). Huh?

--Frank
Re: Optimising cache performance
On Friday, March 7, 2003, at 02:20 PM, Perrin Harkins wrote:
> Cory 'G' Watson wrote:
>> I'm not sure if my way would fit in with your objects Clinton, but I
>> have some code in the commit() method of all my objects which, when it
>> is called, removes any cached copies of the object. That's how I stay
>> up to date.
>
> Why wouldn't it simply update the version in the cache when you commit?
> Also, do you have a way of synchronizing changes across multiple
> machines?

I suppose it could, but I use it as a poor man's cache cleaning. I suppose it would boost performance to do what you suggest. I'll just implement a cache cleaner elsewhere.

I only run on one machine, so I don't do any synchronization. I hope to have that problem some day ;)

Cory 'G' Watson
http://gcdb.spleck.net
Re: Basic Auth logout
On Fri, Mar 07, 2003 at 08:48:41PM +0100, Frank Maas wrote:
> > And would this be possible with mod_perl2 ?
>
> What you could try (note the 'could', it's not tested) is return
> a redirect to the same realm with a different id/password that is
> not correct. If your site is www.mysite.com then redirect to
> http://logged:[EMAIL PROTECTED]/ This might help as browsers can
> interpret the popup this will trigger (as user logged with pass out
> is not known) as an implicit logout).

I tested this a while ago: the popular browsers cache their auth information and there is no general way to flush it, i.e. to make them "log out" or forget the auth information. The only alternative to a cookie-based session is to keep the session id in the URL, either like Amazon (PATH_INFO) or via a wildcard hostname. I would go for a cookie-based solution.

/magnus
--
http://x42.com
Re: Optimising cache performance
Cory 'G' Watson wrote:
> I'm not sure if my way would fit in with your objects Clinton, but I have
> some code in the commit() method of all my objects which, when it is
> called, removes any cached copies of the object. That's how I stay up to
> date.

Why wouldn't it simply update the version in the cache when you commit? Also, do you have a way of synchronizing changes across multiple machines?

- Perrin
Re: Optimising cache performance
On Friday, March 7, 2003, at 12:45 PM, Perrin Harkins wrote: You seem to be taking a lot of care to ensure that everything always has the latest version of the data. If you can handle slightly out-of-date data, I would suggest that you simply keep objects in the local cache with a time-to-live (which Cache::Mmap or Cache::FileCache can do for you) and just look at the local version until it expires. You would end up building the objects once per server, but that isn't so bad. I'm not sure if my way would fit in with your objects Clinton, but I have some code in the commit() method of all my objects which, when it is called, removes any cached copies of the object. That's how I stay up to date. Cory 'G' Watson http://gcdb.spleck.net
RE: Basic Auth logout
> this has been asked before, and I've found in the archives > there is no way I could have a logout page for the Basic Auth in > apache. > > Is there nothing I can do ? This is required only for the > development team, so we need to let mozilla or IE forget > about the username and password. > > And would this be possible with mod_perl2 ? What you could try (note the 'could', it's not tested) is to return a redirect to the same realm with a different id/password that is not correct. If your site is www.mysite.com then redirect to http://logged:[EMAIL PROTECTED]/ This might help, as browsers can interpret the popup this will trigger (as the user "logged" with pass "out" is not known) as an implicit logout. Good luck. --Frank
Re: Optimising cache performance
Clinton Gormley wrote: I'd appreciate some feedback on my logic to optimise my cache (under mod_perl 1) First, I'm assuming this is for a distributed system running on multiple servers. If not, you should just download one of the cache modules from CPAN. They're good. I'm planning a two level cache : 1) Live objects in each mod_perl process 2) Serialised objects in a database I suggest you use either Cache::Mmap or IPC::MM for your local cache. They are both very fast and will save you memory. Also, Cache::Mmap is only limited by the size of your disk, so you don't have to do any purging. You seem to be taking a lot of care to ensure that everything always has the latest version of the data. If you can handle slightly out-of-date data, I would suggest that you simply keep objects in the local cache with a time-to-live (which Cache::Mmap or Cache::FileCache can do for you) and just look at the local version until it expires. You would end up building the objects once per server, but that isn't so bad. If everything really does have to be 100% up-to-date, then what you're doing is reasonable. It would be nice to not do the step that checks for outdated objects before processing the request, but instead do it in a cleanup handler, although that could lead to stale data being used now and then. If you were using a shared cache like Cache::Mmap, you could have a cron job or a separate Perl daemon that simply purges outdated objects every minute or so, and leave that out of your mod_perl code completely. Yet another way to handle a distributed cache is to have each write to the cache send updates to the other caches using something like Spread::Queue. This is a bit more complex, but it means you don't need a second-tier in your cache to share updates. - Perrin
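The time-to-live idea Perrin suggests can be sketched in pure Perl. The TtlCache package below is a hypothetical stand-in written only to show the mechanics; the real Cache::FileCache and Cache::Mmap modules already handle expiry for you:

```perl
use strict;
use warnings;

package TtlCache;

sub new { return bless { store => {} }, shift }

# Store a value together with the time at which it goes stale.
sub set {
    my ($self, $key, $value, $ttl) = @_;
    $self->{store}{$key} = { value => $value, expires => time() + $ttl };
}

# Return the cached value, or undef (and purge the entry) once the
# time-to-live has passed.
sub get {
    my ($self, $key) = @_;
    my $entry = $self->{store}{$key} or return undef;
    if (time() >= $entry->{expires}) {
        delete $self->{store}{$key};   # stale: behave like a miss
        return undef;
    }
    return $entry->{value};
}

package main;

my $cache = TtlCache->new;
$cache->set('obj:12345', { name => 'foo' }, 60);   # keep for 60 seconds
my $obj = $cache->get('obj:12345');                # fresh: a hit
# Once 60s have passed, get() returns undef and the object is rebuilt —
# at most once per server per TTL window, which is the trade-off Perrin
# describes: slightly stale data in exchange for skipping the
# last_modified_time check on every request.
```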
Re: Basic Auth logout
On Fri, 7 Mar 2003, Francesc Guasch wrote: > this has been asked before, and I've found in the archives > there is no way I could have a logout page for the Basic > Auth in apache. > > Is there nothing I can do ? This is required only for the > development team, so we need to let mozilla or IE forget > about the username and password. It all depends on the browser and version. I have been able to "log out" some versions of IE by having a link to another protected resource of the same auth name but different username and password (in the link). You are just better off maintaining a session on the server. -- Bill Moseley [EMAIL PROTECTED]
RE: Basic Auth logout
The only way to expire a basic auth login is to close all instances of the browser. This is not a mod_perl limitation; it's just the way basic auth works. It's pretty easy to spin a mod_perl authentication handler to take the place of basic auth, though. There are some recipes in the cookbook. -Fran > -Original Message- > From: Francesc Guasch [mailto:[EMAIL PROTECTED] > Sent: Friday, March 07, 2003 12:35 PM > To: modperl > Subject: Basic Auth logout > > > this has been asked before, and I've found in the archives > there is no way I could have a logout page for the Basic > Auth in apache. > > Is there nothing I can do ? This is required only for the > development team, so we need to let mozilla or IE forget > about the username and password. > > This is a site built with HTML::Mason. So I tried > <%init> > $m->clear_buffer(); > $m->abort(401); > </%init> > > with no luck at all > > And would this be possible with mod_perl2 ? > > -- > frankie >
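The reason a handler of your own can offer logout is that it moves the state to the server: the browser only carries a session id, and the server decides whether that id is still valid. A rough sketch of the session bookkeeping such a handler would rely on (all names here are invented; in mod_perl the id would travel in a cookie and the check would live in a PerlAuthenHandler or PerlAccessHandler):

```perl
use strict;
use warnings;

my %SESSIONS;   # server-side session store: id => username

# Issue a session id at login time; in mod_perl this would be sent
# back to the browser in a Set-Cookie header.
sub login {
    my ($user) = @_;
    my $sid = sprintf '%08x%08x', time(), int rand 0xffffffff;
    $SESSIONS{$sid} = $user;
    return $sid;
}

# Look up the user for a request's session id; undef means "not logged
# in", which the handler would turn into a login page or a 401.
sub user_for {
    my ($sid) = @_;
    return defined $sid ? $SESSIONS{$sid} : undef;
}

# Logout is now trivial and browser-independent: the server simply
# forgets the session — something basic auth cannot express, because
# the credentials live in the browser.
sub logout {
    my ($sid) = @_;
    delete $SESSIONS{$sid};
}
```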
Basic Auth logout
this has been asked before, and I've found in the archives there is no way I could have a logout page for the Basic Auth in apache. Is there nothing I can do ? This is required only for the development team, so we need to let mozilla or IE forget about the username and password. This is a site built with HTML::Mason. So I tried <%init> $m->clear_buffer(); $m->abort(401); </%init> with no luck at all. And would this be possible with mod_perl2 ? -- frankie
error using instance: can't locate ... pnotes .. via
Hi, I am working on a site where all pages are handled via an Apache::SSI descendant. Some included parts are themselves mod_perl routines that use the instance method to recreate the request object. The routines work fine if used "standalone", but as soon as a routine gets included via the SSI method (subrequest?), apache/mod_perl complains. The call to instance results in an error 'can't locate method 'pnotes' via package "X::Y::Z"', where X::Y::Z is my own package. Most probably the error is mine; can you help me point it out... --Frank
Optimising cache performance
I'd appreciate some feedback on my logic to optimise my cache (under mod_perl 1). I'm building a site which will have a large number of fairly complicated objects (each of which would require 5-20 queries to build from scratch) which are read frequently and updated relatively seldom. I'm planning a two-level cache : 1) Live objects in each mod_perl process 2) Serialised objects in a database The logic goes as follows : NORMAL READ-ONLY REQUEST 1) REQUEST FROM BROWSER * Request comes from browser to view object 12345 (responding to this request may involve accessing 10 other objects) 2) PURGE OUTDATED LIVE OBJECTS * mod_perl process runs a query to look for the IDs of any objects that have been updated since the last time this query was run (last_modified_time). * Any object IDs returned by this query have their objects removed from the in-memory mod_perl process-specific cache 3) REQUEST IS PROCESSED * Any objects required by this request are retrieved first from the in-memory cache. * If they are not present, * the process looks in the serialised object cache in the database. * If not present there either, * the object is constructed from scratch from the relational DB and stored in the serialised object cache * the retrieved object is stored in the in-memory live object cache 4) TRIM LIVE OBJECT CACHE * Any live objects that are not in the 1000 most recently accessed objects are deleted from the in-memory cache UPDATE REQUEST Steps as above except : 3a) UPDATING OBJECT * Any objects that are modified * are deleted from the serialised object cache in the DB * and are deleted from the in-memory cache for this mod_perl process only This means that at the start of every request, each process has access to the most up-to-date version of each object, with a small (hopefully) penalty to pay in the form of the query checking last_modified_time. Does this sound reasonable, or is it overkill? many thanks Clinton Gormley
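The read path in step 3 and the invalidation in step 3a can be sketched with stub backends (%LIVE, %DB_CACHE and build_from_db are hypothetical stand-ins for the live-object cache, the serialised-object table, and the 5-20 build queries):

```perl
use strict;
use warnings;

my %LIVE;      # level 1: live objects in this mod_perl process
my %DB_CACHE;  # level 2: serialised objects in the database (stubbed)

sub build_from_db {   # stand-in for the 5-20 queries per object
    my ($id) = @_;
    return { id => $id, built => 1 };
}

sub fetch_object {
    my ($id) = @_;
    return $LIVE{$id} if exists $LIVE{$id};   # step 3: in-memory cache first
    my $obj = $DB_CACHE{$id};                 # then the serialised cache
    unless ($obj) {
        $obj = build_from_db($id);            # build from scratch...
        $DB_CACHE{$id} = $obj;                # ...and serialise it
    }
    return $LIVE{$id} = $obj;                 # promote into the live cache
}

# Step 3a: an update invalidates both levels. Other processes catch up
# via the last_modified_time query at the start of their next request.
sub invalidate {
    my ($id) = @_;
    delete $DB_CACHE{$id};
    delete $LIVE{$id};
}
```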
[ANNOUNCE] Apache::Clean-2.02b
The URL http://www.modperlcookbook.org/~geoff/modules/Apache-Clean-2.02b.tar.gz has entered CPAN as file: $CPAN/authors/id/G/GE/GEOFF/Apache-Clean-2.02b.tar.gz size: 6334 bytes md5: 55402e3e753599e56a74204b3e8649c6 this is a preliminary port of Apache::Clean over to mod_perl 2.0. in particular it uses the (current) implementation of Apache 2.0 output filters via the mod_perl 2.0 streaming filter API. so, if you're looking for an example of mod_perl 2.0 code without Apache::compat, or a working example of an output filter, this module has lots of good stuff in it - check out both the module code itself as well as the My::* modules in the test suite. it also uses Apache::Test to run live tests, so it's a good example of how to do that as well. This release also (hopefully) is intelligent enough to install relative to Apache or Apache2, depending on how you built mod_perl 2.0, so if you're trying to create a mod_perl 2.0 package, see the Makefile.PL. anyway, feel free to email me with questions or installation problems (after reading the INSTALL document, of course :) --Geoff from the README: NAME Apache::Clean - interface into HTML::Clean for mod_perl 2.0 SYNOPSIS httpd.conf: PerlModule Apache::Clean PerlOutputFilterHandler Apache::Clean PerlSetVar CleanLevel 3 PerlSetVar CleanOption shortertags PerlAddVar CleanOption whitespace DESCRIPTION Apache::Clean uses HTML::Clean to tidy up large, messy HTML, saving bandwidth. Only documents with a content type of "text/html" are affected - all others are passed through unaltered.
Re: shtml don't get parsed with mod_perl protecting the directory
Thank you, it works! My conf file has the following: Options +IncludesNOEXEC SetHandler perl-script PerlAccessHandler Apache::handlers::authentication PerlFixupHandler Apache::handlers::shtmlFixupHandler and I created the file shtmlFixupHandler.pm with the following content: #file begin package Apache::handlers::shtmlFixupHandler; use Apache::Constants qw(:common); sub handler { my $request = instance Apache::Request(shift); $request->handler('server-parsed') if $request->filename =~ m/\.shtml$/; return OK; } #file end And it works fine. When I try to access a page in the protected directory, for example http://mysite.com/auth/index.shtml, it asks me for my credentials and then parses the file correctly. Thank you. Wladimir From: simran <[EMAIL PROTECTED]> To: Wladimir Boton <[EMAIL PROTECTED]> CC: [EMAIL PROTECTED] Subject: Re: shtml don't get parsed with mod_perl protecting the directory Date: 07 Mar 2003 11:29:18 +1100 You have to install a fixup handler to tell the server to parse them again, i have the following in my config file: -- in httpd.conf -- PerlFixupHandler +NetChant::Component::Handlers::shtmlFixupHandler and my shtmlFixupHandler subroutine looks like: # # shtmlFixupHandler # =head2 shtmlFixupHandler =over =item Description Enables you to have shtml files in areas where you might have other PerlHandlers installed. ie. Assuming you have the Includes option turned on for your virtual host/server, putting a .shtml file in the 'auth' directory will not work as expected (aka, it will not get parsed). This is because there is a PerlHandler already installed in that location which has an associated "SetHandler perl-script" directive, and that directive takes precedence over all others. To enable .shtml files to work in the /auth location for a virtual host put the following configuration in your virtual host apache config file.
Options +IncludesNOEXEC SetHandler perl-script PerlFixupHandler +NetChant::Component::Handlers::shtmlFixupHandler =back =cut sub shtmlFixupHandler { my $request = instance Apache::Request(shift); # # # $request->handler('server-parsed') if $request->filename =~ m/\.shtml$/; # # # return OK; } On Fri, 2003-03-07 at 11:12, Wladimir Boton wrote: > Hi, > > I'm protecting a directory of my site with mod_perl, but all .shtml files > inside it don't get parsed by mod_include. > > Is there any way that shtml files get parsed? > > thanks > > > > _ > MSN Hotmail, o maior webmail do Brasil. http://www.hotmail.com
Re: [mp2] integration with apache threads
Thank you very much, gentlemen. I'm happy for now, I guess. I do understand the Perl threading issue; my problem was rather with how mod_perl and Apache threads work together. I'm quite satisfied with your explanations. Thanks a lot. Pavel P.S.: Stas, ... yes I saw many segfaults... :) Perrin Harkins wrote: Pavel Hlavnicka wrote: Is it possible, that some new interpreter is cloned later than the pool is created "PerlInterpMax If all running interpreters are in use, mod_perl will clone new interpreters to handle the request, up until this number of interpreters is reached. when PerlInterpMax is reached, mod_perl will block (via COND_WAIT()) until one becomes available (signaled via COND_SIGNAL())." From http://perl.apache.org/docs/2.0/user/config/config.html#Threads_Mode_Specific_Directives and is it possible, that ANY interpreter cloning will clone my global data? (I really mean e.g object stored in my handler in global variables, perhaps just lexical ones) All of your data is always cloned, just as it was when apache forked in mod_perl 1.x. Remember, Perl threads share nothing unless you tell them to. Is there something specific that you're worried about? - Perrin -- Pavel Hlavnicka Ginger Alliance www.gingerall.com
Re: Apache::DBI on mp2
On Fri, 7 Mar 2003, Stas Bekman wrote: > > If the physical connection is still there, would the database server > > do a rollback? > > If earlier the rollback worked correctly with > $dbh ||= connect();, this will work the same, since it performs the same > operation, but rips off only parts of $dbh, because of the constraints of > sharing Perl datastructures and underlying C structs. Apache::DBI explicitly calls $dbh->rollback (when things are configured so it makes sense). Or maybe I am completely misunderstanding you. > > ps. yes, your DBI::Pool work is great. Thank you. :-) > > My pleasure. Thanks for the kind words. It's quite challenging, though you > stop getting excited about segfaults and ddd (gdb frontend) is nowadays my > best friend ;) :-) You are quite the masochist if you ever got excited about segfaults. I only recall segfaults making me slam my head into the wall to conceal the pain. - ask -- ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();
Re: [mp2] automatic Apache::compat preloading in CPAN modules is a no-no
Stas Bekman wrote: Apache::compat is useful during the mp1 code porting. Though remember that it's implemented in pure Perl. In certain cases it overrides mp2 methods, because their API is very different and doesn't map 1:1 to mp1. So if anything, not under my control, loads Apache::compat my code is forced to use the potentially slower method. Which is quite bad. Some users may choose to keep using Apache::compat in production and it may perform just fine. Other users will choose not to use that module. It should be users' choice. Therefore CPAN modules should *not* preload Apache::compat, but use the mp2 API or copy the relevant parts from Apache::compat. Of course one can add an ugly workaround in startup.pl: $INC{'Apache/compat.pm'} = __FILE__; but I'd rather not have to remember doing that. I'll update the manpage to have this warning. I haven't scanned the CPAN modules yet, but I've noticed that CGI.pm's latest version does: require mod_perl; if ($mod_perl::VERSION >= 1.99) { eval "require Apache::compat"; } else { eval "require Apache"; } Lincoln, any chance we can kill that preloading? If you need help with porting the API please let us know. Here is a hopefully useful discussion and examples on how to get rid of Apache::compat: http://perl.apache.org/docs/2.0/devel/porting/porting.html#Handling_Missing_and_Modified_mod_perl_1_0_APIs __ Stas BekmanJAm_pH --> Just Another mod_perl Hacker http://stason.org/ mod_perl Guide ---> http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
Re: Apache::DBI on mp2
Ask Bjoern Hansen wrote: On Thu, 6 Mar 2003, Stas Bekman wrote: re: rollback, the DBD drivers will perform the normal disconnect(), but without doing the physical disconnect, and normal DESTROY, without destroying the datastructures which maintain the physical connection, so there shouldn't be much to change for this feature. If the physical connection is still there, would the database server do a rollback? If earlier the rollback worked correctly with $dbh ||= connect();, this will work the same, since it performs the same operation, but rips off only parts of $dbh, because of the constraints of sharing Perl datastructures and underlying C structs. ps. yes, your DBI::Pool work is great. Thank you. :-) My pleasure. Thanks for the kind words. It's quite challenging, though you stop getting excited about segfaults and ddd (gdb frontend) is nowadays my best friend ;) __ Stas BekmanJAm_pH --> Just Another mod_perl Hacker http://stason.org/ mod_perl Guide ---> http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
DECLINE, ERRORs in the middle of the handlers stack
Hello All! "Stacked handlers" is a very useful technology, but, I think, an incomplete one. I need some suggestions. My project splits the content handler into a few parts, and each handler sends part of the requested page to the user, but sometimes I must stop processing and return DECLINED, redirect the user, or return some ERROR. The error then appears in the middle of the page. I want Apache to buffer the content and send it only after the last handler in the chain returns OK. Is it possible? Thanks in advance, Ruslan.
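One way to get the effect Ruslan wants is to collect output in Perl rather than printing it directly from each handler. A minimal sketch — run_chain and the handler calling convention are invented for illustration, not a mod_perl API:

```perl
use strict;
use warnings;

# Same numeric values as Apache's OK / DECLINED / SERVER_ERROR constants.
use constant { OK => 0, DECLINED => -1, SERVER_ERROR => 500 };

# Run each handler, collecting its output into a buffer; nothing is
# sent until the whole chain has returned OK, so an error in the middle
# never leaves half a page on the wire.
sub run_chain {
    my ($handlers) = @_;
    my $buffer = '';
    for my $h (@$handlers) {
        my ($status, $chunk) = $h->();
        return ($status, '') unless $status == OK;   # discard the partial page
        $buffer .= $chunk;
    }
    return (OK, $buffer);   # safe to print to the client now
}

my ($status, $page) = run_chain([
    sub { (OK, "<html>") },
    sub { (OK, "</html>") },
]);
# $status is OK and $page is "<html></html>"; had any handler returned
# SERVER_ERROR instead, $page would be empty and an error page could be
# served in its place.
```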
porting modules to mp2 docs are updated
Since questions about porting to mp 2.0 are on the rise, and there is some confusion regarding the use of Apache::compat, I've done a massive porting docs update. Please review the following if you are involved in porting, and let me know if I've missed something or if something is still unclear: http://perl.apache.org/docs/2.0/devel/porting/porting.html http://perl.apache.org/docs/2.0/api/Apache/compat.html http://perl.apache.org/docs/2.0/api/ModPerl/MethodLookup.html p.s. the site is being updated right now, so please wait for 20 minutes or so as the pdfs are crunched. __ Stas BekmanJAm_pH --> Just Another mod_perl Hacker http://stason.org/ mod_perl Guide ---> http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
Re: problems installing mod_perl 2
Carlos Villegas wrote: Hi, First some basic info: -Apache 2.0.44 -Solaris 9 sparc -perl 5.8.0 -mod_perl-1.99_08 (from mod_perl-2.0-current.tar.gz) -complete newbie to mod_perl I had some problems compiling mod_perl: make test would fail. After reading the mailing list archives, I found a few things related to my problem, so I added a few lines to t/hooks/TestHooks/init.pm and t/hooks/TestHooks/trans.pm (I can post a diff if it's of interest) Certainly. But please follow the bug reporting guidelines: http://perl.apache.org/docs/2.0/user/help/help.html#Reporting_Problems This fixed some problems, but not all of them. I decided to ignore further failures of make test and did a make install; it failed (?), however I did get a mod_perl.so under $apacheroot/modules, so I added the LoadModule line to apache and restarted it. It seems to work, however I can't load my module (using PerlHandler mymodule), because it can't find Apache::Constants (which I use in my module). I have tried to install this (Apache::Constants) by using the CPAN shell, but it refuses to install for apache2; I saw some references to Apache2 somewhere in the archives, but couldn't find it either (using the CPAN shell). Does "make install" add some more stuff besides mod_perl.so? My perl is in a read-only path, so this might be the problem, but I'm not sure... Which are the specific dependencies for mod_perl 2? It's all very well documented. Please spend some time at: http://perl.apache.org/docs/2.0/index.html in particular: http://perl.apache.org/docs/2.0/user/compat/compat.html How stable is mod_perl 2? Getting better every day. There are still problems here and there, but they get resolved when reported. See the todo directory and the STATUS file in the mod_perl source for things that are still missing.
__ Stas BekmanJAm_pH --> Just Another mod_perl Hacker http://stason.org/ mod_perl Guide ---> http://perl.apache.org mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com http://modperlbook.org http://apache.org http://ticketmaster.com
Re: Apache::DBI on mp2
On Thu, 6 Mar 2003, Stas Bekman wrote: > re: rollback, the DBD drivers will perform the normal disconnect(), but > without doing the physical disconnect, and normal DESTROY, without destroying > the datastructures which maintain the physical connection, so there shouldn't > be much to change for this feature. If the physical connection is still there, would the database server do a rollback? - ask ps. yes, your DBI::Pool work is great. Thank you. :-) -- ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();
Re: Transparent front-end proxying for many VirtualHosts
On Wed, 5 Mar 2003, Andrew Ho wrote: > I want to simplify my configuration in two ways. I'd prefer not to > maintain two sets of VirtualHost configuration data, and I'd like it if > the block that proxies .pl files to the backend proxy not be replicated > per VirtualHost. With the details you provided the best advice, as others have given, is mod_macro or making the httpd.conf from a template. I usually do the latter. If you added more details, for example a sample httpd.conf for the proxy and the backend, it would be easier to help. Do you use 2.0 for the proxy? http://httpd.apache.org/docs-2.0/mod/mod_proxy.html#proxypreservehost is often helpful. "RewriteOptions inherit" might also help simplify your configuration. > The conceptual behavior I want is for .pl files to be proxied > by the backend server, and everything else by the frontend. I've tried > many combinations which don't work, which I can post if it's relevant... Please do. :-) [...] > Does anybody have a pointer to a setup that looks like this? Maybe I am completely misunderstanding the problem, but a guess would be something like the following in the proxy: ProxyPreserveHost yes RewriteRule ^/(.*\.pl) http://localhost:1234/$1 [P] ServerName foo.example.com RewriteEngine on RewriteOptions inherit ... - ask -- ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();