Re: Apache is exiting....
[Fri Feb 28 14:31:49 2003] [alert] Child 1216 returned a Fatal error... Apache is exiting!

That's bad. Sounds like an Apache bug to me. Can anyone else confirm whether this is intended behavior? You might want to check the httpd lists and newsgroups for info about this. In http_main.c:process_child_status() there's a bit of code:

    if ((WIFEXITED(status)) && WEXITSTATUS(status) == APEXIT_CHILDFATAL) {
        ap_log_error(APLOG_MARK, APLOG_ALERT|APLOG_NOERRNO, server_conf,
                     "Child %d returned a Fatal error... \nApache is exiting!",
                     pid);
        exit(APEXIT_CHILDFATAL);
    }

It looks like the parent server will exit if one of its children exits with APEXIT_CHILDFATAL. Unfortunately, if you grep for that in the Apache source, it comes up more than a few times. A stack trace would be useful. Try setting a breakpoint on clean_child_exit() or use a call-tracing utility like Solaris' truss.

Without a stack trace, you could try to work around the problem. It looks like many errors in the accept mutex code will trigger a fatal error -- try a different mutex (see http://httpd.apache.org/docs/mod/core.html#acceptmutex). Also, security errors such as errors from setuid(), setgid(), or getpwuid() may cause a fatal error. Finally, certain values of errno after the accept() call in child_main() will cause a fatal error (like ENETDOWN).

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com
Re: Apache::DBI and mod_perl 2
[EMAIL PROTECTED] wrote:

Any plans to make Apache::FakeRequest work well enough to make this possible?

[EMAIL PROTECTED] wrote:

As for mod_perl 1.0, I'm not sure, but if you can make it useful that would be cool.

A while back Andrew Ho posted his script, apr, that is similar to Apache::FakeRequest:

http://marc.theaimsgroup.com/?l=apache-modperl&m=98719810927157&w=2

I've attached a slightly updated version called runtsp. Although it's called runtsp, it should handle Apache::Registry scripts just fine. (Tellme::Apache::TSP is our templating module a la Apache::ASP.)

- Kyle

runtsp
Description: Binary data
Re: Double execution of PerlRequire statement
This behavior is documented in the guide...

http://perl.apache.org/docs/1.0/guide/config.html#Apache_Restarts_Twice_On_Start

- kyle

----- Original Message -----
From: Andreas Rieke [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 1:41 AM
Subject: Double execution of PerlRequire statement

Hi, working with apache 1.3.26, mod_perl/1.27 and perl 5.6.0 on a Redhat Linux 7.2 with kernel 2.4.7-10, I have serious problems with a PerlRequire statement in the httpd.conf apache configuration file, since the script is executed twice with the same pid. The PerlRequire statement is the first statement concerning mod_perl in the configuration file, and there is no other PerlRequire.

Any help is very much appreciated,
Andreas
Re: Per Vhost @INC
I think you are confusing @INC and %INC. You should probably read up on the difference between the two. The mod_perl guide provides a lot of background on this issue:

http://perl.apache.org/docs/general/perl_reference/perl_reference.html#use____requiredo_INC_and__INC_Explained

In any case, @INC and %INC are only used to load modules. You can't have multiple versions of the same package loaded at the same time in a single interpreter (even outside of mod_perl) since the symbols in the symbol table can only point to one thing at a time. Remember that there's only one Perl interpreter per child process (in Apache/mod_perl 1.x). If you try to load the same package again via `require' you'll overwrite the old symbols and, if you have warnings on, generate a bunch of redefine warnings.

As you discovered, Apache::PerlVINC is just a work-around that reloads every module (at least those that you say matter) on every request. That's the best you can do unless Perl starts supporting some method of side-by-side versioning of modules.

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com

----- Original Message -----
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 10, 2002 5:29 PM
Subject: Per Vhost @INC

This has come up a few times, but I still do not fully understand how it works. Here is my situation. I have a body of software that runs in a specific namespace with about 10 libraries (TourEngine::**). I have many versions of this software and will need the ability to, on the same server, run multiple instances of this software with different versions for a variety of reasons.

Using a PerlRequire in the vhost seems to add to @INC for the entire vhost population on the server, not just my vhost. This results in 'first match wins' behavior since I would now have an @INC entry for each TourEngine::** directory. I tried Apache::PerlVINC but it wants to reload each module on each request and it seems to want to 'own' a location space.
My modules actually get required as part of my HTML::Mason code, they do not handle the entire Location. I tried the PerlFixupHandler and a use libs declaration but that just gave me error messages like 'Undefined subroutine lib(use::handler called.' which makes me think I have the syntax wrong there and I can not find another syntax example. My preference is not to have to set my @INC in every component/lib I call but to have it set in the config globally for that Vhost. So, does anyone have any ideas on how I can load a per vhost @INC that doesn't appear to other Vhosts? I want my TourEngine::** namespace to be a unique @INC per vhost. Thanks John-
Re: win32 testing only?
The warning applies to mod_perl 1.x. The reason is that mod_perl 1.0 is single-threaded. In a UNIX environment, Apache (1.x) forks many child processes to handle requests, each of which has its own embedded Perl interpreter. In Win32, Apache has many threads to handle requests which all share a single Perl interpreter with a global lock around it -- effectively serializing all requests that depend on mod_perl.

Mod_perl 2.0 can run multiple Perl interpreters in a single process (using the new thread features introduced in Perl 5.6.0-5.8.0), so the global lock on Win32 systems is no longer necessary. Also, the Apache Portable Runtime -- new in Apache 2.0 -- has explicit Win32 support, unlike Apache 1.x which had it grafted on. There are no plans to update mod_perl 1.x to support threading.

Note that mod_perl 2.0 is still beta, so it might not be ready for your production environment yet either. I don't have any experience running mod_perl on Win32, but I think the mod_perl 2.0 code is starting to stabilize enough that you could probably work around its problems more easily than working around the performance bottleneck of 1.0. YMMV.

- Kyle

----- Original Message -----
From: Jon Reinsch [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, December 05, 2002 9:18 AM
Subject: win32 testing only?

At http://perl.apache.org/docs/1.0/guide/getwet.html#Installing_mod_perl_for_Window it says:

we recommend that mod_perl on Windows be used only for testing purposes, not in production

Does this apply to mod_perl 1.0 only, or to 2.0 as well? If both, is it likely to change anytime soon? (I reached the page above by going to http://perl.apache.org/start/index.html, clicking on Get Your Feet Wet, and then on Installing mod_perl for Windows.)
Re: URI escaping question
According to RFC 2396 (http://www.ietf.org/rfc/rfc2396.txt) the reserved characters within the query component of a URI are ";", "/", "?", ":", "@", "&", "=", "+", ",", and "$". Apache::Util->escape_uri() does not escape ":", "@", "&", "=", "+", ",", or "$". Something like the following should work:

    use URI::Escape qw(uri_escape);

    sub uri_escape_query_value {
        my $character_class = '^A-Za-z0-9\-_.!~*\'()';
        uri_escape($_[0], $character_class);
    }

- Kyle

----- Original Message -----
From: Ray Zimmerman [EMAIL PROTECTED]
To: modperl List [EMAIL PROTECTED]
Cc: Raj Chandran [EMAIL PROTECTED]
Sent: Thursday, November 14, 2002 12:25 PM
Subject: URI escaping question

Oops ... finger slipped before I was done typing ...

Suppose I have a hash of string values that I want to include in the query string of a redirect URL. What is the accepted way of escaping the values to be sure that they come through intact? Specifically, it seems that Apache::Util->escape_uri() is not escaping '=' and '&', so if one of the values in the hash is a URI with a query string it messes things up.

--
Ray Zimmerman / e-mail: [EMAIL PROTECTED] / 428-B Phillips Hall
Sr Research / phone: (607) 255-9645 / Cornell University
Associate / FAX: (815) 377-3932 / Ithaca, NY 14853
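For readers outside Perl, the same idea can be sketched with Python's urllib.parse.quote: leave only the RFC 2396 "unreserved" characters (A-Za-z0-9 and -_.!~*'()) unescaped, so reserved query characters such as '=', '&' and '+' get percent-encoded. This is a cross-language sketch of the approach, not the Apache::Util behavior:

```python
# Python analog of the Perl uri_escape_query_value() above: percent-encode
# everything except the RFC 2396 unreserved characters, so reserved query
# characters like '=', '&' and '+' are escaped in the value.
from urllib.parse import quote

def uri_escape_query_value(value: str) -> str:
    # quote() never escapes letters, digits, '_', '.', '-' or '~';
    # safe= adds the rest of the unreserved set (! * ' ( )) and nothing else.
    return quote(value, safe="!*'()")

print(uri_escape_query_value("a=b&c d+e"))  # a%3Db%26c%20d%2Be
```

Because '=' and '&' are always escaped inside a value, a value that is itself a URI with a query string survives the round trip intact.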
Re: URI escaping question
It looks like the default character class used by URI::Escape::uri_escape changed in version 1.16 to be exactly the same as the one I suggested. Older versions didn't escape the reserved characters.

- Kyle

----- Original Message -----
From: Kyle Oppenheim [EMAIL PROTECTED]
To: modperl List [EMAIL PROTECTED]
Sent: Thursday, November 14, 2002 4:10 PM
Subject: Re: URI escaping question

According to RFC 2396 (http://www.ietf.org/rfc/rfc2396.txt) the reserved characters within the query component of a URI are ";", "/", "?", ":", "@", "&", "=", "+", ",", and "$". Apache::Util->escape_uri() does not escape ":", "@", "&", "=", "+", ",", or "$". Something like the following should work:

    use URI::Escape qw(uri_escape);

    sub uri_escape_query_value {
        my $character_class = '^A-Za-z0-9\-_.!~*\'()';
        uri_escape($_[0], $character_class);
    }

- Kyle

----- Original Message -----
From: Ray Zimmerman [EMAIL PROTECTED]
To: modperl List [EMAIL PROTECTED]
Cc: Raj Chandran [EMAIL PROTECTED]
Sent: Thursday, November 14, 2002 12:25 PM
Subject: URI escaping question

Oops ... finger slipped before I was done typing ...

Suppose I have a hash of string values that I want to include in the query string of a redirect URL. What is the accepted way of escaping the values to be sure that they come through intact? Specifically, it seems that Apache::Util->escape_uri() is not escaping '=' and '&', so if one of the values in the hash is a URI with a query string it messes things up.

--
Ray Zimmerman / e-mail: [EMAIL PROTECTED] / 428-B Phillips Hall
Sr Research / phone: (607) 255-9645 / Cornell University
Associate / FAX: (815) 377-3932 / Ithaca, NY 14853
Re: Using Perl END{} with Apache::Registry
Although I can't reproduce the behavior you describe (I get a message in my error log plus the END block runs), I have seen something similar in the past when $@ gets reset by an intervening eval block before Apache::Registry gets a chance to log an error. In my case, I had a DESTROY handler on an object that went out of scope, and the DESTROY block had an eval {} block inside of it. The presence of the eval {} block reset $@ to ''. The result was that the original exception was raised but the value was lost, and Apache returned a server error w/o a log message. (I solved that specific case by localizing $@ in the DESTROY block.)

It doesn't sound like this applies to you since you said you observed the faulty behavior w/ an empty END block, but I thought I'd toss it out there anyway.

- Kyle

----- Original Message -----
From: Justin Luster [EMAIL PROTECTED]
To: 'Perrin Harkins' [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, November 12, 2002 3:34 PM
Subject: RE: Using Perl END{} with Apache::Registry

No. If there is an END block, empty or not, then the error logging does not happen. By the way, do you know of any way to capture what would have been logged and print it through Apache->something? Thanks.

-----Original Message-----
From: Perrin Harkins [mailto:perrin;elem.com]
Sent: Tuesday, November 12, 2002 3:34 PM
To: Justin Luster
Cc: [EMAIL PROTECTED]
Subject: Re: Using Perl END{} with Apache::Registry

Justin Luster wrote:

I have an included file that I'm requiring: require test.pl; Without the END { } block, if the script cannot find test.pl I get a Server error 500 and an appropriate error message in the log file. When I include the END{ } block I get no Server Error and no message in the log file. It is almost as if the END{ } is overwriting the ModPerlRegistry error system.

Does it make any difference if you change what's in the END block?

- Perrin
RE: conditional get
So, try the following change to your code:

    $R->content_type($data{mimetype});
    $R->set_content_length($data{size});
    $R->header_out('ETag', $data{md5});

don't do that. use the $r->set_etag method instead, which is probably a bit safer than trying to figure out ETag rules yourself. I'm pretty sure that you shouldn't use the ETag header with non-static entities anyway, but I could be wrong.

$r->set_etag ends up calling ap_make_etag to generate the ETag (from $r->mtime and $r->finfo) and setting the ETag header. That works great for static content, but for dynamic content you probably don't have valid finfo. So, generating an ETag yourself seems easier and safer, especially if you already have an MD5 hash of the content.

Apache's $r->set_etag also satisfies RFC 2295, Transparent Content Negotiation in HTTP, by merging the variant list validator with whatever ap_make_etag returns. However, ETags without VLVs are pretty much backward compatible.

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com
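The suggestion above, deriving the ETag from an MD5 digest of the body rather than from mtime/finfo, can be sketched in a few lines. This is a generic illustration in Python (make_etag is a hypothetical helper, not an Apache API):

```python
# Sketch of the approach discussed above: for dynamic content, build the
# ETag from an MD5 digest of the response body, since mtime/finfo-based
# ETags (ap_make_etag) only make sense for static files.
import hashlib

def make_etag(body: bytes) -> str:
    # A strong ETag is an opaque quoted string; an MD5 hex digest of the
    # body is a common choice because it changes whenever the body does.
    return '"%s"' % hashlib.md5(body).hexdigest()

print(make_etag(b"hello"))  # "5d41402abc4b2a76b9719d911017c592"
```

Any stable digest of the content works; MD5 is convenient here because, as the post notes, the original code already had an MD5 hash of the data on hand.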
RE: conditional get
I assume you are running this script under Apache::Registry (since your URLs have .pl extensions). Apache::Registry compiles your code into a subroutine and runs it using this code:

    my $old_status = $r->status;
    my $cv = \&{"$package\::handler"};
    eval { &{$cv}($r, @_) } if $r->seqno;
    ... snip ...
    return $r->status($old_status);

Notice that it throws away the result of the eval {}, so it doesn't really matter what you return. However, you can use $r->status from your script to set the status code. Apache::Registry will return the value you set via $r->status from its handler() method. (I'm not sure why it's trying to set $old_status at the same time. If you look at the XS code it's clear that $r->status returns the previously set value. Anybody know the reason?)

Also, Apache will call send_error_response for all 204, 3xx, 4xx, and 5xx status codes. So, you shouldn't call send_http_header in these cases. Finally, you probably also want to set the Last-Modified header for HTTP/1.0 clients that don't understand ETag headers. You can use $r->update_mtime and $r->set_last_modified.

So, try the following change to your code:

    $R->content_type($data{mimetype});
    $R->set_content_length($data{size});
    $R->header_out('ETag', $data{md5});
    $R->update_mtime($data{mtime});
    $R->set_last_modified;

    if ((my $rc = $R->meets_conditions) != OK) {
        $R->status($rc);
        return;
    }

    $R->send_http_header;

    unless ($R->header_only) {
        print $file->get('data');
    }

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com

-----Original Message-----
From: Cristóvão Dalla Costa [mailto:cbraga;bsi.com.br]
Sent: Friday, October 25, 2002 4:31 PM
To: [EMAIL PROTECTED]
Subject: conditional get

Hi, I'm trying to get my script to work with conditional get; however, when the browser should use the local copy it doesn't display anything, just telling me that the image's broken.
I get the image from a database; the snippet that sends it is this:

    $R->content_type($data{mimetype});
    $R->set_content_length($data{size});
    $R->header_out('ETag', $data{md5});
    $R->send_http_header;
    return OK() if ($R->header_only);

    if ((my $rc = $R->meets_conditions) != OK) {
        return $rc;
    }

    print $file->get('data');

If I comment out $R->send_http_header, the apache logs show full size for the request that doesn't meet conditions; otherwise it shows 0 bytes. In both cases, when supposedly serving from its cache, the browser hangs for a few seconds. The logs look like these; the first line is for a request that meets conditions, the second for one that does not.

192.168.255.2 - - [25/Oct/2002:21:21:43 -0200] GET /binfile.pl?id=32 HTTP/1.1 200 148087 - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.2b) Gecko/20021016
192.168.255.2 - - [25/Oct/2002:21:25:28 -0200] GET /binfile.pl?id=32 HTTP/1.1 200 - - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.2b) Gecko/20021016

If I comment out the last return, it works perfectly. I copied the code from mod_perl's docs, from the section about correct headers.

Thanks in advance.
Cristóvão
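The meets_conditions idea the thread revolves around can be sketched independently of mod_perl: compare the request's If-None-Match / If-Modified-Since against the response's ETag / Last-Modified and decide between 200 and 304. This is a simplified Python sketch (epoch ints stand in for HTTP dates; real validators are more involved):

```python
# Simplified sketch of conditional-GET logic, in the spirit of Apache's
# meets_conditions(): return 304 Not Modified when the client's cached
# validators still match, else 200. Not mod_perl code; times are plain
# epoch ints here instead of parsed HTTP-date headers.
def meets_conditions(headers, etag, mtime):
    """headers: dict of request headers; etag: current ETag string;
    mtime: current modification time as an int epoch."""
    inm = headers.get("If-None-Match")
    if inm is not None:
        # If-None-Match takes precedence over If-Modified-Since.
        return 304 if inm in ("*", etag) else 200
    ims = headers.get("If-Modified-Since")
    if ims is not None and mtime <= ims:
        return 304
    return 200

print(meets_conditions({"If-None-Match": '"abc"'}, '"abc"', 100))  # 304
print(meets_conditions({"If-Modified-Since": 50}, '"abc"', 100))   # 200
```

The key ordering point from the reply above carries over: the condition check must happen before the headers and body are sent, so a 304 response goes out with no entity body at all.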
RE: Documentation for Apache::exit()?
There are a few performance penalties when using Apache::Registry:

* Scripts are compiled at first request instead of server start unless you use something like Apache::RegistryLoader. So, the first request per child will be a little bit slower, and you don't get to share memory between httpd children. (Memory sharing can be a big problem.)

But it still shares all the modules you pre-load in the httpd.conf, right? So how much memory is wasted depends on the size of the script in question, or more accurately on how big its data structures are? (including imported variables)

Correct -- if you preload modules in httpd.conf, you will most likely use shared memory for those modules. Of course, the memory page will be copied to the child process if it is modified after forking. This could happen because the module allocates some memory, something else that shares the memory page is modified, etc. However, with Apache::Registry scripts, you are loading code in the child process directly (with the exception of using Apache::RegistryLoader) -- so there's no chance for any memory sharing.

- Kyle
RE: Documentation for Apache::exit()?
If you're writing new code then I would recommend writing handlers and avoiding Apache::Registry altogether.

I had been thinking about whether to do this. Why do you recommend avoiding Apache::Registry? Is there a performance penalty for using it? We sometimes use Apache::Registry scripts to implement a simple MVC model (look back in the archives for about a thousand different implementations of MVC using mod_perl!). Perl modules are the model, the Apache::Registry scripts are the controller, and a template provides the view. It keeps our configuration brain-dead simple since there are no PerlModule directives other than Apache::Registry.

There are a few performance penalties when using Apache::Registry:

* Scripts are compiled at first request instead of server start unless you use something like Apache::RegistryLoader. So, the first request per child will be a little bit slower and you don't get to share memory between httpd children. (Memory sharing can be a big problem.)

* Every request runs through Apache::Registry::handler before your script gets called, which has overhead including some setup code, an extra stat(), and a chdir(). (PerlRun and RegistryNG use $r->finfo, but Registry does an extra stat() -- not sure if there's a reason for that.)

- Kyle
Re: top for apache? [OT]
On Sat, 21 Sep 2002, Nigel Hamilton wrote:

to see the number of children and then make guestimates of average per child memory consumption.

I'm not sure what the equivalent for other operating systems is, but here's a Solaris tip for the archives... we use /usr/proc/bin/pmap to determine memory consumption:

    for p in `pgrep httpd`; do /usr/proc/bin/pmap -x $p | tail -1; done

pmap gives you the total memory usage, the amount actually resident, the amount shared, and the amount not shared (private).

- Kyle
RE: mod_perl error
I assume you are running your scripts under Apache::Registry. Apache::Registry checks the modification time of your script on every request and will recompile it (via eval) if it has changed. However, nothing removes the old symbols from the previous compilation, so Perl generates a warning. For scripts, this warning is usually harmless. You can turn it off with the warnings pragma (Perl >= 5.6.0):

    no warnings qw(redefine);

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com

-----Original Message-----
From: Boex,Matthew W. [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 10:43 AM
To: [EMAIL PROTECTED]
Subject: mod_perl error

can i ignore this error? the script seems to work fine...

Subroutine print_get_num redefined at /var/www/perl/cancel.cgi line 19.
Subroutine print_gonna_del redefined at /var/www/perl/cancel.cgi line 27.
Subroutine print_do_nothing redefined at /var/www/perl/cancel.cgi line 74.
Subroutine print_do_del redefined at /var/www/fosbow/cancel.cgi line 83.
Subroutine error_handler redefined at /var/www/fosbow/cancel.cgi line 156.

matt
RE: [OT] Server error log says Accept mutex: sysvsem
However, the sample configuration file supplied with Apache contains no AcceptMutex runtime directive, nor did I come across documentation suggesting how it should be used or where I would learn the options for my system (linux, i686, kernel 2.4.7-10).

There's a good description of what the accept mutex is used for on the Apache Performance Notes page at http://httpd.apache.org/docs/misc/perf-tuning.html (search down for "accept Serialization"). Essentially, the call to accept() needs to be serialized to support servers that listen on multiple ports. Also, serialization is a performance optimization on servers that only listen on one port.

I assume that apaci and/or configure made a good guess for your system when you compiled Apache, but you can experiment with other methods. Using a different mutex, if it works at all, should only help or hurt performance (perhaps fcntl is really fast compared to pthreads on your OS).

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com
RE: Files problem, pulling my dam hair out
Are you setting the content-type header correctly? You can add the correct content type a number of ways:

- Adding DefaultType text/html to your httpd.conf
- Using the AddType directive in httpd.conf to single out .pl files
- Adding .pl files to your mime types config file (pointed to by TypesConfig)
- Setting it dynamically from your scripts using $r->content_type('text/html')

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Chuck Carson
Sent: Thursday, November 29, 2001 2:38 PM
To: [EMAIL PROTECTED]
Subject: Files problem, pulling my dam hair out

I have the following config: apache 1.3.22 with mod_perl 1.26 built statically. I want to use perl to dynamically generate html pages, so I have .pl files under DOCUMENT_ROOT. I have this config:

    Alias /perl /usr/local/apache/cgi-bin
    <Directory /usr/local/apache/cgi-bin>
        SetHandler perl-script
        PerlHandler Apache::Registry
        Options +ExecCGI
    </Directory>

    <Files *.pl>
        SetHandler perl-script
        PerlHandler Apache::Registry
        Options ExecCGI
    </Files>

Whenever I try and get a perl script from a web browser, it pops up a dialog asking to save the file. I have tried Netscape 4.79 on NT and Unix as well as IE 5.5. I have configured a server in this manner probably 100 times; I cannot find what I am missing this particular time. Anyone have any ideas?

Thanks,
Chuck

Chuck Carson
Systems Administrator
[EMAIL PROTECTED]
858.202.4188 Office
858.442.0827 Mobile
858.623.0460 Fax
RE: Cookie authentication
Amazon seems to include your session id in the URL in addition to a cookie. I assume they do this to personalize when cookies are turned off and to prevent proxy caches from caching personalized pages and serving them to the wrong end-user. If you happen to type in a URL, they can revive your session from the cookie. Pretty nifty trick.

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of David Young
Sent: Thursday, November 15, 2001 4:30 PM
To: [EMAIL PROTECTED]
Subject: Re: Cookie authentication

I don't think that really solves Joe's proposed problem. Joe wants to ensure that the cookie is coming back from the client he sent it to. If you generate a unique ID, someone can sniff the network, grab the cookie, and send it as their own. The Eagle book does half-heartedly suggest the IP address as a way to ensure the cookie's source, but that's not reliable in these days of proxies and NAT. The only answer, I think, is to only send the cookie over an SSL connection, so that it cannot be sniffed. Remember that there is an attribute you can set on the cookie that tells the browser to only send the cookie over an SSL connection.

Spend some time playing with Amazon and see how they handle cookies. They appear to have cookies that get sent over every connection, which they use to personalize your web pages (not necessarily sensitive info). However, as soon as you try to purchase something or go to a sensitive area, you are asked to sign in and sent a cookie over https.

From: Perrin Harkins [EMAIL PROTECTED]
Date: Thu, 15 Nov 2001 18:40:03 -0500
To: Joe Breeden [EMAIL PROTECTED], mod_perl List [EMAIL PROTECTED]
Subject: Re: Cookie authentication

Excuse my question if it seems dumb; I'm not 100% on NAT and proxies, but the Eagle book says to 1 Choose a secret, 2 Select fields to be used for the MAC. It also suggests to use the remote IP address as one of those fields. 3 Compute the MAC via an MD5 hash and store it in the client's browser.
4 On subsequent visits recompute the MAC and verify it matches the original stored MAC. How is this reliable in a situation where many similarly configured computers are behind a NAT/Proxy and one of the users try to steal someone else's session by getting their cookie/session_id info? Don't use the IP address in the cookie, just generate a unique ID of your own. I suggest using mod_unique_id. - Perrin
RE: Apache-mod_perl
That error is simply saying that your subroutines, my_start and p, aren't defined in the current scope. The Apache::ROOT::vswap... prefix is the package name that Apache::Registry (or a similar module; sounds like maybe Embperl in your case) generated when it compiled your script. Check to make sure that you are exporting the symbols from your perl module (perldoc Exporter) and importing them in your script (perldoc -f use).

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Manjrekar Pratiksha
Sent: Friday, November 16, 2001 1:05 AM
To: '[EMAIL PROTECTED]'
Subject: Apache-mod_perl

Hello all,

We are facing a problem while configuring a perl module in the Apache webserver in the Solaris environment. The server configuration is as follows:

OS: Solaris 2.7 SunOS 5.7 Sparc machine.
Apache/1.3.20 (Unix) mod_fastcgi/2.2.10 mod_perl/1.26

We have installed VSWAP 1.1.6, which is a WAP test suite. The main file in this suite is index.eml.

1. When we execute the index.eml file, we get the following error: String found where operator expected, wherever a subroutine was used. As suggested by a member of the mod_perl group, we inserted parentheses for the parameters of the subroutines.

2. This solved the error, but there was another error: Undefined subroutine Apache::ROOT::vswap1_2e1_2e6::index_2eeml::my_start called at /apps/vswap1.1.6/index.eml line 12. This could be solved by declaring package main in the perl module (.pm) file and referring to all subroutines as (package name)::(subroutine) in the .eml file. Reference url: http://perl.apache.org/dist/mod_perl_traps.html (Section: Perl Modules and Extensions)

3. This gave another error: Undefined subroutine Apache::ROOT::vswap1_2e1_2e6::index_2eeml::p. However, this error could not be solved by the method in point (2).

Basically, we feel that there should not be any need to change the source code, the reason being we have installed the same test suite on a Linux machine
and it works fine there without any changes to the source code. Configuration of Linux: Apache/1.3.12 (Unix) (SuSE/Linux) ApacheJServ/1.1.2 mod_fastcgi/2.2.2 balanced_by_mod_backhand/1.0.8 DAV/1.0.0 mod_perl/1.24 PHP/3.0.16 Any quick help in this regard would be highly appreciated. regards, Pratiksha,
RE: no_cache()
$r->no_cache(1) adds the headers Pragma: no-cache and Cache-Control: no-cache. So, you need to call no_cache before calling $r->send_http_header. You can verify that it works by looking at the headers returned by the server when you request your document. If your browser is caching the page w/o regard to these headers, then it's your browser, not mod_perl, that's broken or misconfigured.

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 16, 2001 10:48 AM
To: Ask Bjoern Hansen
Cc: [EMAIL PROTECTED]
Subject: Re: no_cache()

Ask Bjoern Hansen wrote:

On Thu, 15 Nov 2001, Rasoul Hajikhani wrote:

I am using $request_object->no_cache(1) with no success. Isn't it supported any more? Can some one shed some light on this for me...

What do you mean with no success? What are you trying to do?

--
ask bjoern hansen, http://ask.netcetera.dk/ !try; do();
more than a billion impressions per week, http://valueclick.com

Well, the cached document is returned rather than the new one. I know this because I make cosmetic changes to the document, reload it, and voila, still the old document. I have cleared the cache, set the cache size to 0, and have even restarted the server at times just in case, but the result has been erratic. Sometimes the new document is shown on reloads, and at other times the old one is shown.

-r
RE: Cookie authentication
If you happen to type in a URL, they can revive your session from the cookie. Pretty nifty trick.

This would seem to be a security hole to me. URLs appear in the logs of the server as well as any proxy servers along the way. If the URL contains reusable auth info, anybody accessing any of the logs could gain access to customer accounts.

I disagree. The server logs are somewhat irrelevant because they should already be under access control, and they could contain anything, including HTTP headers and content from POST requests. As for proxies, they see the entire HTTP transaction anyway. If they aren't trusted, the data should be encrypted end-to-end with SSL. If the session id is in the URL, an end-user cannot accidentally get a personalized page intended for somebody else. As you mentioned, you could prevent an intermediate cache from caching the page with Cache-Control: private, but you then need to trust that the cache is HTTP/1.1 compliant.

If anybody is afraid of using Amazon now, I believe David mentioned in a previous post that Amazon switches to SSL (and a new session id) whenever you deal with data they feel should be kept private. :-)

- Kyle
RE: Problem with DBD::Oracle with mod_perl
We've seen this happen before. Unfortunately, I don't have a fix for you, but here's where we left off our chase...

1. ORA-03113: end-of-file on communication channel (for unknown reason, maybe a network blip?)

2. We have some code that will catch this error and call DBI->connect again.

3. Apache::DBI intercepts the connect(), looks in its hash and sees that it already has a connection. It pings the handle, fails, and deletes the entry from the hash. That's the last refcount on the dbh, so DESTROY gets called.

4. The DESTROY method yields a DBI handle cleared whilst still active warning:

    Issuing rollback() for database handle being DESTROY'd without explicit disconnect() at /usr/local/lib/perl5/site_perl/5.6.0/Apache/DBI.pm line 119.
    DBI handle cleared whilst still active at /usr/local/lib/perl5/site_perl/5.6.0/Apache/DBI.pm line 119.
    dbih_clearcom (h 0x82a3934, com 0x84ae49c): FLAGS 0x211: COMSET Warn AutoCommit TYPE 1 PARENT undef KIDS 0 (0 active) IMP_DATA undef in 'DBD::Oracle::dr'

5. Apache::DBI calls the real DBI->connect. This fails due to ORA-12154: TNS:could not resolve service name (DBD ERROR: OCIServerAttach).

If we run with DBI->trace(2) and $Apache::DBI::Debug = 2, we see ORA-01041: internal error. hostdef extension doesn't exist (DBD ERROR: OCITransRollback) appear in between the ORA-03113 and the ORA-12154 errors. We were running Perl 5.6.0, DBI 1.14, Apache::DBI 0.88, DBD::Oracle 1.06, and the Oracle 8.0.6 client against an 8.0.6.3.0 db.

Make sure you let me know if you figure it out ;-)

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Alex Povolotsky
Sent: Wednesday, August 22, 2001 4:59 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Problem with DBD::Oracle with mod_perl

Hello!

I'm getting constant troubles with DBD::Oracle and mod_perl.
DBD::Oracle 1.08, DBI 1.19, mod_perl 1.26 on Apache 1.3.20, SunOS netra 5.8 Generic_108528-09 sun4u sparc SUNW,UltraAX-i2 gcc 2.95.3 20010315 (release) This is perl, v5.6.1 built for sun4-solaris # perl -V:usemymalloc usemymalloc='n'; After some time of work (about hundred of requests), I get DBD::Oracle::db prepare failed: ORA-03113: end-of-file on communication channel (DBD: error possibly near * indicator at char 1 in '*select slogan_text from slogans') at /usr/local/www/lib/SQL.pm line 221. and all Oracle-using perl programs within Apache stops to work until I restart Apache. With two clients fetching a page both at one time, I'm 100% getting this in less than a minute. I have read all READMEs I've found around, and I couldn't reproduce this error by standalone perl scripts. Any help, anyone? Alex.
RE: Segfaults
> what happens to the newly spawned processes?

The -f tells truss to follow forks. For completeness... The -l (that's an el) includes the thread-id and the pid (the pid is what we want). The -t specifies the syscalls to trace, and the !all turns them all off. The -s specifies signals to trace, and the !SIGALRM turns off the numerous alarms Apache creates. The -S specifies signals that stop the process. Obviously, -p is used to specify the pid.

> what happens if the process segfaults immediately after it starts? You
> don't have enough time to get its PID.

I suppose you could be less lazy than I and edit apachectl to call truss instead of using the -p option. :-)

truss -f -l -t \!all -s \!SIGALRM -S SIGSEGV /usr/local/bin/httpd -f httpd.conf 2>&1

> Since I don't have access to a Solaris system, is it possible for you to
> take the example code I've supplied below and apply these steps to it? So
> we can get a fully working example? Thanks a lot!
>
> (for example, I'm not familiar with gcore... is it a Solaris-specific thing?)

gcore(1) will get a core image of a running process. That way, you can put the core where you want it and where you have permission to write it. I assume it's a Solaris goodie. I don't have the pointer to the Bad::Segv module, but here's an example run. To get the messages from truss, you need to keep your tty open. Otherwise, redirect stdout/stderr somewhere else.
$ apachectl start
$ for pid in `ps -ef -o pid,comm | fgrep httpd | cut -d'/' -f1`; do
      truss -f -l -t \!all -s \!SIGALRM -S SIGSEGV -p $pid 2>&1 &
  done
[1] 23353
[2] 23354          -- I'm only running one child for this example
$ kill -SEGV 662   -- faking Bad::Segv (662 is the child pid)
662/1:  Received signal #11, SIGSEGV, in accept() [caught]
662/1:    siginfo: SIGSEGV pid=23306 uid=0
$ gcore 662
gcore: core.662 dumped
$ kill -9 662      -- clean up the stopped process (at this point, Apache
                      forked a new child and truss is hooked on that one too)
$ pkill truss      -- clean up the other truss processes still running
$ gdb /usr/local/bin/httpd
(gdb) core-file core.662
...
#0  0xdfae4d2c in _so_accept () from /usr/lib/libc.so.1
(gdb)

Obviously, this isn't great to be doing on a production system, since truss stops the process after it dumps core and prevents Apache from reaping it. So, you could use up a bunch of scoreboard slots and perhaps force httpd to hit MaxClients if you segfault a lot.

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com

-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 06, 2001 8:33 PM
To: Kyle Oppenheim
Cc: mod_perl list
Subject: RE: Segfaults

[CC'ing back to the list for archival and possibly interesting followup discussion]

On Mon, 6 Aug 2001, Kyle Oppenheim wrote:

> Here's another method to generate a core on Solaris that you may want to
> add to the guide. (I hope I'm not repeating something already in the
> guide!)
>
> 1. Use truss(1) as root to stop a process on a segfault:
>
>    truss -f -l -t \!all -s \!SIGALRM -S SIGSEGV -p <pid>
>
>    or, to monitor all httpd processes (from bash):
>
>    for pid in `ps -eaf -o pid,comm | fgrep httpd | cut -d'/' -f1`; do
>        truss -f -l -t \!all -s \!SIGALRM -S SIGSEGV -p $pid 2>&1 &
>    done

what happens to the newly spawned processes?

what happens if the process segfaults immediately after it starts? You
don't have enough time to get its PID.

> 2. Watch the server error_log for reaped processes
> 3.
> Use gcore to get a core of the stopped process, or attach gdb.
> 4. kill -9 the stopped process.

Since I don't have access to a Solaris system, is it possible for you to take the example code I've supplied below and apply these steps to it? So we can get a fully working example? Thanks a lot!

(for example, I'm not familiar with gcore... is it a Solaris-specific thing?)
RE: One more small Apache::Reload question
Those warnings are normal, and you can use the warnings pragma to turn them off. (Although I believe the warnings pragma only exists in Perl 5.6.0+.)

use warnings;
no warnings qw(redefine);

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Bryan Coon
Sent: Wednesday, August 01, 2001 9:36 AM
To: '[EMAIL PROTECTED]'
Subject: One more small Apache::Reload question

First, thanks for all the great suggestions; it looks like it works fine. However, now my logs are loaded with a ton of "Subroutine redefined" warnings (which is normal, I suppose?). I can certainly live with this in a development environment, but thought I would check whether it is expected, and whether it can be turned off while still enabling Reload.

Thanks!
Bryan
RE: Apache::Reload???
Apache::Reload works by performing a stat on every file in %INC and calling require for all the files that changed. It's quite possible that some of the files in %INC are recorded with relative paths (often '.' is in @INC). So, Perl was able to load the file originally because the initial 'use' or 'require' happened after Apache changed to your directory. However, when Apache::Reload goes to look for the file, it can't find it because the current directory is different (most likely the ServerRoot). You can fix the problem by installing your modules in a directory that appears fully qualified in @INC.

- Kyle

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Bryan Coon
Sent: Tuesday, July 31, 2001 3:16 PM
To: '[EMAIL PROTECTED]'
Subject: Apache::Reload???

I must have missed something in setting up Apache::Reload. What I want is simple: when I make a change in my scripts, I shouldn't have to restart the Apache server.

I put

PerlInitHandler Apache::Reload

in my httpd.conf, and added 'use Apache::Reload' to the modules that I want reloaded on change. But I get the following warning message in my apache logs:

Apache::Reload: Can't locate MyModule.pm

for every module I have added Apache::Reload to. How do I do this so it works? The docs on Reload are a bit sparse...

Thanks!
Bryan
RE: no_cache pragma/cache-control headers : confusion
Apache (as in httpd) will set the 'Expires' header to the same value as the 'Date' header when no_cache is flagged in the request_rec. When your Perl handler sets $r->no_cache(1), mod_perl (in Apache.xs) sets the 'Pragma: no-cache' and 'Cache-control: no-cache' headers in addition to setting the no_cache flag in the request_rec. From the code in Apache.xs, it seems like setting $r->no_cache(0) will unset the flag, but not remove the headers.

--
Kyle Oppenheim
Tellme Networks, Inc.
http://www.tellme.com

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Patrick
Sent: Wednesday, April 04, 2001 2:47 AM
To: [EMAIL PROTECTED]
Subject: no_cache pragma/cache-control headers : confusion

Dear all,

There is some kind of confusion in my head, and the Eagle book seems to me even more confusing. Any help appreciated.

First, I always thought that no_cache() does everything regarding headers, and that you just have to turn it on or off. However, I discovered yesterday that, at least in my setup, even with no_cache(0) I still get

Pragma: no-cache
Cache-control: no-cache

which seems counter-intuitive to me. I've checked the Eagle: it says that no_cache() only adds an Expires field. Ok. But then where does the Pragma header come from?

About ->headers_out(), it is specifically said: "In addition, the Pragma: no-cache idiom, used to tell browsers not to cache the document, should be set indirectly using the no_cache() method."

So, that seems confusing to me, since the no_cache() method seems not to deal with Pragma headers. Who sets the Pragma/Cache-control headers, and why are they there by default? How do I override that (with headers_out?)?

TIA.

--
Patrick.
``C'est un monde qui n'a pas les moyens de ne plus avoir mal.''