PerlOptions +Parent

2010-02-01 Thread Carl Johnstone
Suggestion for a wishlist feature.

I've got around 20 different sites, each of which is configured as its own 
VirtualHost. Now from time to time I run a different newer version of our 
app on some of the VirtualHosts, whilst leaving others running the original 
code base.

Following some discussion here I tested and implemented the use of 
PerlOptions +Parent and this works just fine.

However my only problem with this is that what I'm effectively doing is 
running a separate perl instance for each of those VirtualHosts; sometimes 
I'll have half a dozen - even though I've only got two codebases.

So rather than +Parent which just gives a VirtualHost a separate instance - 
what would be nice would be some way of specifying *which* perl instance to 
use in this VirtualHost. Something like:

<VirtualHost *>
ServerName www.example1.com
PerlInstance myappv4
</VirtualHost>

<VirtualHost *>
ServerName www.example2.com
PerlInstance myappv4
</VirtualHost>


Where www.example1.com creates a new instance named myappv4, and 
www.example2.com uses the instance the first vhost created.

Carl



Re: mod-perl child process

2009-11-23 Thread Carl Johnstone
Kulasekaran, Raja wrote:
 The below method used to kill the child process after the successful
 execution of web request.

 $r->child_terminate();

 Can anyone suggest me where and how do I call this method in the
 httpd.conf file.

You don't, you put it in your application code.

However you should not be calling this under normal circumstances: all 
it'll do is cause the current apache child process to exit and the main 
apache process to fork another child, which will massively increase the load 
on your server for zero gain.

Specifically having read your previous posts it will not reduce the number 
of DB connections you're seeing, as the newly-forked replacement process 
will make a new DB connection. You'll also increase the load on your DB 
server as Oracle will be constantly closing down connections and opening new 
connections - which is relatively expensive in Oracle and the reason that 
modules such as Apache::DBI exist in the first place.

Assuming that your problem is the number of Oracle processes, then you may 
be better switching to multithreaded Oracle.

You may also be able to reduce the number of connections by checking your 
code base to ensure that the same options are used whenever you request a DB 
handle.

Finally you'll be able to limit the number of connections by limiting the 
number of Apache child processes (MaxClients in httpd.conf) - however all 
that you're likely to achieve is pushing the bottleneck closer to the 
client. As Perrin has already suggested, if you're not proxying or dealing 
with static content in another manner, you need to ensure that these requests 
aren't going through to your mod_perl server.
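For reference, the knobs in question - the values here are illustrative only, not recommendations:

```apache
# httpd.conf, prefork MPM
<IfModule prefork.c>
    MaxClients          40     # at most 40 children => at most ~40 DB connections
    MaxRequestsPerChild 1000   # recycle children periodically
</IfModule>
```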

Carl



Re: Profiling live mod_perl backends

2009-03-30 Thread Carl Johnstone

Cosimo Streppone wrote:

However, some bad problems start to be evident only with really
high req/s rate.


Another possibility is that once you hit full load you're just pushing 
the server over the edge for what it can manage load-wise. Use your 
normal OS monitoring tools and ensure that you've not started to swap 
excessively or have too many CPU-bound requests.


If it is the case, you'll probably find that you'll benefit from 
reducing the number of concurrent apache processes.


Carl



Re: decline and fall of modperl?

2009-03-25 Thread Carl Johnstone
Perrin Harkins wrote:
 TicketMaster is Perl.

Ticketmaster switched their UK operation from MS technologies to mod_perl a 
couple of years back too. (Brought it inline with the US side.)

There's a couple of biggies that haven't been mentioned...

BBC
YouPorn (although I don't think they use mod_perl)

You could also ask the question of why Twitter has a shelf of perl books:

http://twitter.com/jobs

;-)

Carl



Re: Deployment strategies...

2009-03-13 Thread Carl Johnstone

Perrin Harkins wrote:

On Fri, Feb 27, 2009 at 3:06 PM, Mark Hedges hed...@scriptdolphin.org wrote:
  

What about PerlOptions +Parent?
http://perl.apache.org/docs/2.0/user/config/config.html#C_Parent_



That's the thing I was referring to that I haven't tried.  Can anyone
confirm that this works?
  
I can now. After a few days of internal testing, I deployed it onto our 
live servers yesterday. I've currently got two branches of the code-base 
running side-by-side fine.


Carl




Re: Deployment strategies...

2009-02-27 Thread Carl Johnstone
Perrin Harkins wrote:
 You can't run two modules with the same name in the same perl
 interpreter, so you either need to change the namespaces for
 different versions or run separate servers.

Yeah it's a pity that the perchild mpm never moved out of experimental. In 
theory something like loading different versions of modules in each child 
after the fork could solve the main part of the problem.

The thing is that as I'm running the app under Catalyst, once I've started 
splitting off into different server instances, there's not as much of an 
advantage in using mod_perl - I can use FastCGI or HTTP::Prefork or even 
just run catalyst_server.pl.

 Two minutes?  That's a very long load time.  I've seen huge codebases
 load in 10-20 seconds.  I think something might be wrong there, like a
 slow database connection or DNS lookup.

I'll do some profiling.

Although we don't have a huge amount of code, we have a lot of Controllers 
inheriting from base classes - checking the 3 main culprit apps, they contain 
~500 Controllers between them. As it's built up gradually I've always just 
presumed the overhead was from that.


 Is the real problem that apache is accepting connections during
 startup?

Yes. The actual restarting to load updated code is reasonably automated so 
I'm not sitting waiting. I have had complaints that whilst it's happening 
the sites seem to freeze.

 You could solve that by making your front-end smarter.  Some
 systems, like perlbal, will wait until apache responds to a HEAD
 request before considering a server up.

Actually I think the hardware load-balancing we've got does that - so rather 
than getting apache to balance, I should probably feed the requests back 
through that.

As ever, thanks for your advice Perrin.

Carl



Re: Deployment strategies...

2009-02-27 Thread Carl Johnstone
Perrin Harkins wrote:
 different FastCGI backend because it would waste memory, so either way
 you'll want to group things together as much as possible.

Yeah, that confirms my thinking.

 I've heard people say that DBIx::Class has a slow startup,
 so if you use that you might look there first.

We do, I'll look.

 You mean go directly to mod_perl?  You'd still want a front-end proxy
 for buffering.  I'd suggest checking with the mod_proxy_balancer
 people to see if they have a solution.

I mean balancers -> front-end -> balancers -> back-end.

Using different IPs on the balancers for front/back.

It was supposed to work that way originally, but we never got round to 
implementing it once we discovered mod_proxy_balancer in Apache 2.2.

Thanks

Carl



Deployment strategies...

2009-02-26 Thread Carl Johnstone
We've got a fairly typical mod_perl setup.

We have two load-balanced servers acting as front-end. They run threaded 
apache configured for lots of connections and KeepAlive on. They serve up as 
much stuff statically as possible, everything else is sent through 
mod_proxy_balancer to a number of backend servers.

The three back-end servers run a bunch of Catalyst apps under mod_perl - 
each server is configured for all apps. We've got around 30 sites in total, 
and currently have 6 distinct apps. We're building more new apps for 
different sites.

We've now got a couple of problems:

1) We want to run different versions of the same app for different sites - 
this means I have a namespace problem for MyApp v1 vs MyApp v2 etc.

2) The load times for each app are reasonable, however the load time for the 
whole lot is now getting towards a couple of minutes. However whilst apache 
is waiting for the perl code to load it accepts connections.


What are my deployment options for working around these problems?

Carl



Re: Reducing memory usage using fewer cgi programs

2008-10-29 Thread Carl Johnstone

I won't try to speak for Michael, but technically that memory does not
become available for reuse by Perl, even if they are lexical variables
and they go out of scope, unless you explicitly undef the variables.
Perl keeps the memory for them allocated as a performance
optimization.


That was the bit I was forgetting.

As said elsewhere I generally use a MaxRequestsPerChild value of 1000 which 
means that any extra memory allocated in your child processes will be 
returned to the OS fairly regularly and apache will fork a nice clean child.


Carl



Re: Reducing memory usage using fewer cgi programs

2008-10-29 Thread Carl Johnstone

I am not that far, yet. I am doing all my development work under Windows
and only for production I will use Linux (or Solaris if it needs bigger
iron).


Something to bear in mind then is that apache on Windows is (usually) 
multi-threaded rather than multi-process. So a lot of the discussion in this 
topic wouldn't apply in your Windows environment, but will when you deploy 
to Linux/Solaris.


Carl



Re: Reducing memory usage using fewer cgi programs

2008-10-24 Thread Carl Johnstone


I was referring to script initialization (responding to that first request) 
and not the httpd daemon startup.  Really, the only startup that should be 
slower is when the whole httpd service is restarted (such as at server 
startup) since it would have to preload all modules for all standby daemons.



Sorry for the misunderstanding - startup to me only refers to server 
startup. In my mod_perl setup all code is loaded at server startup, there is 
no additional code initialisation once the server is running.


Additionally if you preload your modules and code, then it is initialised 
only once in the parent apache process. When the child processes are forked 
to handle the requests, they are an exact copy of the parent so inherit all 
the perl code that was pre-loaded, saving additional load time.





I would expect (or hope -- I don't really know) that any individual httpd 
daemons that get re-initialized later on automatically (when 
MaxRequestsPerChild is reached) would be done after the previous request so 
they are ready for the next request.



When MaxRequestsPerChild is reached, the child process shuts down at the end 
of a request cycle, returning any distinct resources to the OS. The parent 
then forks a new child process in exactly the same way as it did after 
startup, subject to your spare-server configuration settings.




Something else I will do for my low-usage but massive scripts (those that 
have large memory structures and take several seconds to execute) is to 
place these in a non-mod_perl directory so I can be assured their memory 
usage goes away at the end of the response.



There's no reason not to run these under mod_perl too - any memory allocated 
by perl will be re-used.


If you're really concerned and would rather the child process quits and 
frees additional memory to the OS, then call $r->child_terminate in any of 
your handlers, and the child process will automatically quit at the end of 
the request (same as if it had hit its MaxRequests limit)


Carl



Re: Reducing memory usage using fewer cgi programs

2008-10-20 Thread Carl Johnstone
I also use a PerlRequire startup.pl to preload most modules (CGI, DBI), 
but I thought that was only for quicker startup times and not for sharing 
memory.  Is that correct?


Preloading helps with speed (you don't get the initial loading hit for 
a module the first time it's used in a specific process) but it can also 
help with memory on certain OSs.


Pre-loading *will* give you a longer startup time, as you have to pre-load 
all the modules before apache can start.


However, as Michael says, the module would be loaded the first time a script 
used it anyway - so you'd have the same delay but in the middle of a request 
rather than at server start-up! Additionally you would have that delay in 
each child process that apache creates in the entire life of the server.
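A typical startup.pl for this is tiny - the module list here is illustrative, preload whatever your scripts actually use:

```perl
# startup.pl, pulled in once by the parent via:
#   PerlRequire /path/to/startup.pl
use strict;
use warnings;
use CGI ();   # empty import list: load the code, skip the exports
use DBI ();
1;            # PerlRequire needs the file to return true
```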


Carl



Re: secure media files without http_referer

2008-07-07 Thread Carl Johnstone

On Wed, Jul 2, 2008 at 3:18 PM, tmpusr889 [EMAIL PROTECTED] wrote:
A cookie would certainly work, but I was trying to find something simpler. I
don't know much about URL tokens. How would something like that work?


Redirect them to a URL with ?auth=x in it.  Check the token with an
access or authz handler.


How about mod_auth_tkt to protect the resources, then you don't need a 
mod_perl enabled server.


Use perlbal and redirect behind-the-scenes from a mod_perl auth-checker to 
the static resource.


Carl



RHEL5 mod_perl

2008-03-11 Thread Carl Johnstone

Hi,

Anybody got any experiences of mod_perl on RHEL5?

The rpms in RHN suggest versions of:

mod_perl 2.0.2
apache 2.2.3
perl 5.8.8

which are reasonable enough (and it's not the pre-release mp2 problem that 
RHEL4 had!)


However in the past there have been plenty of posts suggesting that Redhat's 
official rpms had relatively poor performance etc - is that still the case 
for RHEL5?


Thanks

Carl



Re: loading Apache2::ServerRec

2008-01-14 Thread Carl Johnstone
Using the debian-stable-provided version of mod_perl, I've got an app that's
working fine however the way it was configured meant we were causing an
early load of perl during the configuration phase of apache.


Just curious -- why are you trying to avoid loading perl during this 
phase?


Faster restarts in our development environment.

perl gets loaded twice, once to test the config then again to actually 
start. From my observations apache even does this if you run apachectl stop!


We don't need the additional test in config as any problems will appear in 
the error log and the devs can check there.


Carl



Re: 32 & 64 bit memory differences

2007-11-06 Thread Carl Johnstone

Gary Sewell wrote:

I am interested in the reverse proxy idea. We currently also run a static
image/js/css server and a static php server that runs the static pages we
are able to create, which is very few; 99% of our pages are dynamic and change
every second. Due to the bulk of our code (100Mb @ 32-bit & 200Mb @ 64-bit)
we are only able to set Max Clients to 40. After arriving at a static php
homepage we refer subsequent pages randomly to one of the modperl servers to
share the stress. 


Is our php server acting similar to a reverse-proxy or am I missing out on
something, would a reverse-proxy help us with our setup. We are unable to
cache content and hit live databases for every dynamic page we serve.
Images, js and css are all served from a slimmed down apache server so these
aren't a problem.
  


As Jim says elsewhere, it's not just about caching. As you say, on your 
mod_perl server you can only have 40 requests; if you get 40 users all 
on a dial-up connection, each one doing multiple things and therefore 
taking 60 seconds to download the HTML, then effectively your mod_perl 
is serving 40 requests per minute!! By putting a proxy in, mod_perl 
returns the data to the proxy, which sits there waiting for these slow 
modem users to download their pages.


We tried the static server/mod_perl server route, and whilst it does 
work there are still the odd things that end up coming from the 
mod_perl servers, plus some human error. So we found that using apache as 
a reverse proxy allows us to map the whole domain except the static 
folder, which is then served from the front-end proxy.


For reference here's our setup.

On the front-end we run RHEL5 to get apache 2.2. We use the 
multi-threaded version of apache and have configured it for maximum 
threads/connections. We've left KeepAlive on, and tuned it to send as 
much content down the same connection, and keep it for as long as 
possible. We've disabled CGI and other dynamic stuff. We also do our 
logging here. We use mod_proxy_balancer to balance across 3 mod_perl 
servers, although anything in /static is served direct rather than proxied:


   ProxyPass /static !
   ProxyPass / balancer://catalyst/


At the back-end we're still running RHEL4 with a custom build of 
apache2.2/mod_perl2 running the Catalyst framework. We have KeepAlive 
off, and don't log within apache.
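For anyone wanting to copy this, the balancer half of the front-end config looks roughly like the following - the backend hostnames and ports are made up:

```apache
# Apache 2.2 front-end (mod_proxy + mod_proxy_balancer)
<Proxy balancer://catalyst>
    BalancerMember http://backend1.internal:80
    BalancerMember http://backend2.internal:80
    BalancerMember http://backend3.internal:80
</Proxy>

ProxyPass /static !
ProxyPass / balancer://catalyst/
```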


Now if you have a site where you can cache pages, or it's possible to 
tweak your more popular pages so that they can be cached even for a very 
short amount of time, you can add mod_cache to the front end. If 
you have a very busy site, caching your front page even for 5 seconds 
could take a massive load off the servers.
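A sketch of that for the Apache 2.2 front-end - directive values are illustrative, and note mod_cache still honours any Cache-Control headers your app sends:

```apache
# front-end httpd.conf: in-memory cache for proxied content
LoadModule cache_module     modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

CacheEnable mem /
CacheDefaultExpire 5    # seconds to cache responses with no explicit expiry
```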


Carl





Re: mod_auth_tkt [was: 32 & 64 bit memory differences]

2007-11-06 Thread Carl Johnstone

Michael Peters wrote:

mod_auth_tkt. You can set the authorization ticket with mod_perl and then just
let mod_auth_tkt handle it on the non-mod_perl apache. It's extremely light
weight and really fast.
  

Got this on my "to implement soon" list - any tips/caveats?

Carl



Re: 32 & 64 bit memory differences

2007-11-05 Thread Carl Johnstone

I'd recommend using apache for windows with
mod_proxy because it's more mature.


Another vote for apache as a proxy, using mod_proxy_balancer in apache 2.2 
to proxy to multiple mod_perl backends.


I wouldn't recommend doing dev on windows for a linux environment. Dual boot
your machine with Ubuntu linux and use that instead. It'll save you a lot of
time and headaches.


We use a VM linux server, then setup file shares between the desktop and VM. 
Allows the developer to use their favourite Windows editor, but the site 
runs on the same platform as on live.


Carl



Re: dev environment

2007-11-05 Thread Carl Johnstone



On 11/5/07, Jeff Armstrong [EMAIL PROTECTED] wrote:

Or even make yourself a virtual PC using MS Virtual PC and install a
Linux / Apache / Modperl / Samba / MySQL / SVN etc into it (e.g. Debian
is easy, or whatever you need for your prod).


Sorry replied elsewhere - missed the change in subject.


Or:
2. Pay up for VMWare Workstation.


We use the free VMWare server, and just use smbmount. Obviously needs a 
little more linux knowledge to setup.


Carl



Re: mod_perl, worker mpm and memory use optimisation

2007-11-05 Thread Carl Johnstone
I now want to bring down the process size and/or increased shared memory, 
and I wondered if anyone knows if it's possible to write perl in a way 
that maximises shared memory - eg. can I somehow ensure my data structures 
are all together and not sharing memory pages with compiled code?


General advice for mod_perl is to simply load as much as you can before 
apache forks. Beyond that you're getting into the realm of poking into 
perl's guts which is probably a bad idea if you want to maintain your code 
base in the longer term.


Carl



Re: mod_perl MVC framework.

2007-10-30 Thread Carl Johnstone
TAL does have a downside - as do all the other templating languages - 
they've all allowed feature creep to turn them into micro-languages or 
processing languages.


I wouldn't necessarily say that's a bad thing - sometimes the view does have 
to be more complicated, and if your templating language isn't up to it then 
you end up inter-mingling code and HTML.


Another vote here for Catalyst; we're running web sites for around 18 
regional UK newspapers using it. Amongst the models we use DBIx::Class for 
DB access, and TT for the view.


Carl



Fw: mod_perl success stories on ZDNet (KMM6461378I96L0KM) :ppk1

2007-10-08 Thread Carl Johnstone


Looks like somebody has managed to subscribe one of the PayPal support email 
addresses to the list...


Carl



- Original Message - 
From: PayPal Customer Service 1 [EMAIL PROTECTED]

To: Carl Johnstone [EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 9:35 PM
Subject: Re: mod_perl success stories on ZDNet (KMM6461378I96L0KM) :ppk1


Dear Carl Johnstone,

Thank you for contacting PayPal. My name is Amberae and it is my
pleasure to assist you.

Carl, I didn't receive sufficient information to proceed with your
question. Please provide additional information such as:

  ·   What issue are you experiencing?
  ·   Are you receiving an error message? (If so, please include the
full error message.)
  ·   What steps are being taken when you are encountering the issue?
We appreciate your assistance in resolving your question.

If you have any further questions, please feel free to contact us again.

Sincerely,
Amberae
PayPal Consumer Support
PayPal, an eBay Company


Original Message Follows:



My Google alert sent this to me today:
http://whitepapers.zdnet.com/abstract.aspx?docid=##



Is it me, or does that just link through to here:


http://www.oreillynet.com/digitalmedia/blog//##/perl_success_story_t
ermiumplus.html

which doesn't require registration?

Carl



Re: mod_perl success stories on ZDNet

2007-10-01 Thread Carl Johnstone



My Google alert sent this to me today:
http://whitepapers.zdnet.com/abstract.aspx?docid=257555



Is it me, or does that just link through to here:

http://www.oreillynet.com/digitalmedia/blog/2002/05/perl_success_story_termiumplus.html

which doesn't require registration?

Carl



reverse proxy/logging problem

2007-08-02 Thread Carl Johnstone

Hi,

I've got a two-apache reverse proxy setup, split over two hosts.

The problem I've got is that I'd like to put the user_id in the access logs 
so that our log analysis software can make use of it.


Setting apache->user correctly logs the user at the back-end, however the IP 
addresses are wrong, being the internal address of the front-end proxy. Also 
requests dealt with purely at the front-end aren't logged.


Logging at the front-end means all requests are logged with the right IP 
address. Additional bonuses for me are that those servers are less loaded, 
and I'd be able to turn off logging at the back-end. However the user_id 
isn't available.


Is there any easy way to pass the user_id from the back-end in such a way 
the front-end could log it? Or is there another option?


Carl



Re: apache mailing list

2007-06-01 Thread Carl Johnstone

I am trying to enable the apache module mod_speling on apache 1.3.27

I enable it fine but it doesn't seem to support the

CheckCaseOnly

Apache directive.


Dunno about a list, but your problem is that directive was only added in 
Apache 2.2


The docs for apache 1.3 are here:

http://httpd.apache.org/docs/1.3/

Carl



Re: Global question

2007-05-07 Thread Carl Johnstone

You can use shared memory between apache processes. Check:

*Apache::SharedMem* 
http://search.cpan.org/author/RSOLIV/Apache-SharedMem-0.09/lib/Apache/SharedMem.pm 

*Tie::ShareLite* 
http://search.cpan.org/author/NSHAFER/Tie-ShareLite-0.03/ShareLite.pm
*Cache::SharedMemoryCache* 
http://search.cpan.org/author/DCLINTON/Cache-Cache-1.05/lib/Cache/SharedMemoryCache.pm 



all based on

*IPC::ShareLite* 
http://search.cpan.org/author/MAURICE/IPC-ShareLite-0.09/ShareLite.pm


The Cache::SharedMemoryCache docs say, though, that a decent OS will keep a 
frequently accessed *disk* cache in memory anyway through buffers etc. So a 
disk-based cache can frequently be as fast as shared memory.


So you'll probably find that Perrin's BDB suggestion is the quickest, 
easiest-to-implement solution.


Carl



Re: httpd.conf problem

2007-05-02 Thread Carl Johnstone

Tyler Bird wrote:

but I am also trying to allow .htaccess in sub directories

<Directory /b>
   AllowOverride All
</Directory>



This is the mod_perl rather than apache mailing list :-)

But are you getting confused between <Directory> and <Location> for a start?

It should be <Directory /path/to/htdocs/subfolder> - checking the 2.2 
docs you can only use the directive within <Directory> sections which 
means you'll have to list all your subdirs - or maybe see if you can 
dynamically configure them via a <Perl> section.
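In other words, something along these lines - the path is a placeholder for your real DocumentRoot:

```apache
# httpd.conf: <Directory> matches a filesystem path, not a URL path
<Directory /path/to/htdocs/b>
    AllowOverride All
</Directory>
```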


Carl



Re: Logging With Mod Perl 2.0

2007-04-26 Thread Carl Johnstone

Tyler Bird wrote:
But I was wondering isn't there anything I can do to mod_perl that 
will allow a plain warn to send it to my virtualhost's log and not my 
server's log, without using the $r->warn() syntax?



so that warn("hey") really goes to the virtual host's log and I don't have to 
put $r->warn()




warn() is just sending the message to perl's STDERR which is attached to 
the main apache error log.


To get apache to send output to different places you need to call the 
relevant functions in apache's API - mod_perl provides a perl version of 
that API.


So for mp1 that's Apache::Log, and Apache2::Log for mp2. I can't see 
there being any options outside what those modules already provide.
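For mp2 that means going through the request object, roughly like this (message strings are illustrative):

```perl
use Apache2::Log        ();   # adds the logging methods to $r
use Apache2::RequestRec ();

# inside any handler, with $r in scope:
$r->log->warn('something odd');    # this vhost's error log, "warn" level
$r->log_error('something bad');    # ditto, at "error" level
```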


Carl






Re: Growing Up

2007-04-25 Thread Carl Johnstone

There is a consideration, regarding using a proxy or a different server,
that has not been brought up: If there is mod_perl based access control
for the static files, then it's basically impossible not to go through a
mod_perl server to serve them.


If your access control is in mod_perl, you have to at least hit the 
mod_perl server to check whether access is allowed.


I've not used it myself, however Perlbal has a neat feature where it can 
internally redirect. So mod_perl can return a redirect to Perlbal, which 
will then go and retrieve the real file from your static server and send 
that to the client. Otherwise I'm not sure how complete a proxy solution 
Perlbal is, but LiveJournal is supposed to be using it.



In fact, I'm not sure what the effect would be in that scenario if a
proxy was used: would it serve the static file regardless of the access
control?, does it depend on the expiration data on the headers sent
through the proxy when the acess controled static file was sent?


Proxies should inspect the Vary: header to see under what conditions they can 
serve the same content. So if you're using cookies for authentication, you 
should have 'Cookie' in your Vary header. A proxy will then only re-serve the 
same cached content to requests carrying the same values for the headers 
listed in Vary.


Compared with setting the content to be no-cache or immediately expired this 
has the advantage that if the client re-requests the same resource it can be 
served from proxy cache rather than hitting the end servers again.
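Setting that up from mod_perl is one line - a sketch using the mp2 API:

```perl
use Apache2::RequestRec ();
use APR::Table          ();

# tell caches/proxies that the response depends on the Cookie header
$r->headers_out->set(Vary => 'Cookie');
```

(Using merge rather than set would preserve any Vary value something else has already added.)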


Carl



Re: version checking

2007-04-25 Thread Carl Johnstone

I tried to the package manager to install mod_perl and it offers me
mod_perl-1.99-16.4

I thought that Apache 2 required MP2. Am I mistaken? I am not sure
what the best route to take is, even my httpd server version is a bit
old. Do I throw out the httpd server and start from scratch, possibly
confusing my package manager? Any thoughts?


mod_perl 1.99 was the testing/pre-release version. So you've kind-of got 
mp2.


Unfortunately there were major changes fairly late on in the pre-release 
process so the version RedHat ship with RHEL4 isn't compatible with the 
final release of mp2.


RHEL5 is out and from memory it has a proper version of MP2 (somebody like 
to confirm whether you can just upgrade from 4 to 5?)


Alternatively you'll have to compile from source. If you are compiling from 
source, then many people on here will suggest that for best performance you 
really need to compile a separate version of perl itself, as the Red Hat 
build is multi-threaded by default and offers relatively poor performance.


Carl



Re: version checking

2007-04-25 Thread Carl Johnstone
RHEL5 is out and from memory it has a proper version of MP2 (somebody like 
to confirm whether you can just upgrade from 4 to 5?)


To answer my own question - yes you can upgrade:

http://www.redhat.com/rhel/moving/

although I don't know what the process is.

RHEL5 comes with mod_perl 2.0.2, perl 5.8.8 and apache 2.2.3 so it's 
*nearly* up to date!


Anybody tried RHEL5 and like to comment on whether the perl performance is 
better than with RHEL4?


Carl



Re: Are RHEL 3.0 mod_perl 2.0.x compatible?

2007-04-25 Thread Carl Johnstone

Jonathan Vanasco wrote:


second:
your issue seems to be that you're trying to run everything via 
rpms, which is causing your dependency issue.
try compiling from source.  if you use cpan to install modperl, 
it'll download all of the perl dependencies for you.


You *may* be able to find rpms that somebody else has built that work on 
RHEL3 - but unless you want guaranteed repeatability for multiple 
servers, I wouldn't bother with rpms. Like Jonathan says just build and 
install from source.


Even if you are building many servers, it may be easier to build your 
own rpms than to find suitable ones!


Carl



[job] mod_perl/Catalyst in Manchester, England

2007-04-05 Thread Carl Johnstone

Hi,

I work for the Guardian Media Group in their regional division based in
Manchester, England. We run the websites for the groups various regional
interests - mainly newspapers, but we also have a local TV station Channel
M. Our flagship title is the Manchester Evening News.

We're currently looking at expanding our web presence both for existing
titles/sites and into new areas, so are looking at expanding the development
team.

We're a Perl shop, and have been running a mod_perl/Apache::Registry based 
setup for several years. We've recently made the decision to adopt the 
Catalyst Framework running under mod_perl for future developments.


We're looking for a number of server-side developers to come and join the
existing team here:

http://www.gmgjobsnorth.co.uk/digital/server_side_developers.html


And as Catalyst allows a reasonable separation of presentation, we're also
looking for client-side/JavaScript developers.

http://www.gmgjobsnorth.co.uk/digital/client_side_developers.html


Finally there's a little more blurb here:

http://www.gmgjobsnorth.co.uk/digital/


Carl





Re: Database transaction across multiple web requests

2006-03-31 Thread Carl Johnstone
Perrin Harkins wrote: 
I typically use a combination, retrieving all of the IDs in order in 
one shot and saving that in a cache, and then grabbing the details of 
the 10 records needed for the current page as I go.  This is described 
somewhat here:
http://perl.apache.org/docs/tutorials/apps/scale_etoys/etoys.html#Search_servers 

I'm with Perrin here, grab PKs for the entire resultset, your DB is 
going to do most of that work anyway if you're doing a count(*) to get 
the size of the resultset.


Stick that list in a cache somewhere using your favourite caching 
module. If you have a bunch of popular queries, chances are they'll 
almost always be in the cache anyway.


You can then use Data::Page or similar to do your pagination. Having got 
your paginated list expand out the PKs into rows, or turn them into 
objects using your favourite ORM. Feed it all into your favourite 
templating system.


Whole lot - couple of dozen lines of code?
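As a sketch of those steps, assuming @pks holds the cached list of primary keys and you want 10 rows per page (variable names are illustrative):

```perl
use Data::Page ();

my $page     = Data::Page->new(scalar(@pks), 10, $current_page);
my @page_pks = $page->splice(\@pks);    # just the PKs for this page

# expand @page_pks into rows/objects with your ORM, then hand the rows
# plus $page (first_page/last_page/next_page etc.) to the template
```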

Carl





Re: syntax vs. style

2006-03-28 Thread Carl Johnstone
But having said that, I find Apache2::Reload very handy for easy and quick 
development.


Me too!

Although I find that occasionally I have to restart apache on the 
development box before it'll work correctly. Hence I wouldn't run it on a 
production server.


Carl 



Re: No image creation in mod_perl

2006-03-28 Thread Carl Johnstone

In (eg) the worker MPM, each process contains its own perl interpreter,
so if each process handles one image once in its lifetime, there is a
lot of memory that has been grabbed by perl which is not available to
create more perl processes.

... is what makes sense to me but may be utterly meaningless.


Nope that's right: so you load up one image, the perl process allocates 
itself 100MB of memory for it from the OS, then doesn't release it back to 
the OS once it's finished with it.


The perl process will re-use this memory, so if you process another image 
you don't grab another 100MB, it's just not available at the OS level or for 
other processes.


This isn't completely bad as long as your OS has good memory management. The 
unused memory in the perl process will just be swapped out to disk and left 
there until that process uses it again or exits.


Carl



Re: No image creation in mod_perl (was RE: Apache2/MP2 Segfaults ...)

2006-03-27 Thread Carl Johnstone

Can you fork off a separate process to do this (that will die once it is
completed)?


The only place where forking is useful is where you want something to
continue processing after sending the response back to the client.

You can achieve the same effective result by calling $r->child_terminate()
(assuming you're using pre-fork). The currently running child exits at the end
of the request freeing any memory it's allocated, and the main apache server
process will fork a new child to replace it if needed.
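A minimal mp1 sketch of that pattern - do_expensive_work() is a hypothetical stand-in for whatever post-response processing you need:

```perl
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    $r->send_http_header('text/plain');
    $r->print("Request accepted\n");

    # Runs after the response has been sent to the client
    $r->register_cleanup(sub {
        do_expensive_work();    # hypothetical post-response job
        $r->child_terminate;    # this child exits; Apache forks a fresh one
        return OK;
    });

    return OK;
}
```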

Carl



Re: syntax vs. style

2006-03-27 Thread Carl Johnstone



Instead of using MP2 syntax, etc, we're still using:

   print "Content-type: $type\n\n";
   print "blah";

and in sending redirects, we use stuff like:

   print "Location: $url\n\n";


Is this a problem or is it just bad style?

What reasons are there to continue our port to use  the correct response
handler syntax, etc?


It's slightly slower. Effectively those headers have to be re-parsed by 
Apache::Registry. Another example is that $r->print is faster than a plain 
print.


Although Apache::Registry gives a good speed boost by itself, you need to 
make use of the mod_perl API to actually get the best out of it.


The next step from there is not using Apache::Registry at all, but putting 
mod_perl handlers into their own modules/packages. However there's more of a 
change here to other ways of working - for example you need to restart 
server to introduce code changes.
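As a rough sketch of that last step (module and location names invented for illustration), a handler package under mod_perl 1 looks something like:

```perl
package My::App::Hello;

use strict;
use Apache::Constants qw(OK);

# httpd.conf:
#   <Location /hello>
#     SetHandler perl-script
#     PerlHandler My::App::Hello
#   </Location>
sub handler {
    my $r = shift;
    $r->content_type('text/html');
    $r->send_http_header;
    $r->print('<html><body>Hello from a handler module</body></html>');
    return OK;
}

1;
```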


Carl



Re: [mp1] redirect in ErrorDocument

2006-03-23 Thread Carl Johnstone
  Yet enabling PerlSendHeader On and doing:
 
print "Location: $newurl\n\n";
 
  works fine.

 What are the complete headers that sends?  I'm guessing it sends a 200
 status and it only works because your browser is broken and follows the
 redirect anyway.

No, it does convert the Location header into a 302 response, although it
doesn't add the body content like returning a 302 in the main request does.


HTTP/1.1 302 Found
Date: Mon, 20 Mar 2006 13:31:55 GMT
Server: Apache/1.3.33 (Debian GNU/Linux) mod_gzip/1.3.26.1a PHP/4.3.10-16
mod_perl/1.29
Set-Cookie: Apache=127.0.0.1.155421142861522485; path=/; expires=Sat,
16-Sep-06 13:31:55 GMT
Cache-Control: no-cache, must-revalidate
Pragma: no-cache
Location: http://www.manchesteronline.co.uk/
Connection: close
Transfer-Encoding: chunked
Content-Type: text/plain; charset=utf-8

0

Connection closed by foreign host.



Carl



[mp1] redirect in ErrorDocument

2006-03-17 Thread Carl Johnstone


In an Apache::Registry  ErrorDocument (e.g. ErrorDocument 404 /my404.pl )

This doesn't work:

 $r->header_out('Location', $newurl);
 $r->status(302);
 return OK;


Apache returns an error saying that my 404 handler returned a 302 error. 
Does the same if I use err_header_out


Yet enabling PerlSendHeader On and doing:

 print "Location: $newurl\n\n";

works fine.


I'd prefer to get the first way working - any ideas?

Carl



Re: is there a way to force UTF-8 encoding

2006-03-04 Thread Carl Johnstone
 http://www.verizonnoticias.com/


Sure it validates on w3c?

http://validator.w3.org/check?verbose=1&uri=http%3A%2F%2Fwww.verizonnoticias.com%2FHome%2Findex.ad2


In fact I'm not actually getting *any* headers back:

$ telnet www.verizonnoticias.com http
Trying 69.88.132.175...
Connected to www.verizonnoticias.com.
Escape character is '^]'.
GET /Home/index.ad2 HTTP/1.1
Host: www.verizonnoticias.com

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html lang=es>
...


Carl




Re: is there a way to force UTF-8 encoding

2006-03-04 Thread Carl Johnstone

 In fact I'm not actually getting *any* headers back:

Checking your server ident you're running modperl 1.

Sure you're calling

$r->send_http_header()

?

Carl



Re: Make your next million Stas!

2006-02-13 Thread Carl Johnstone


Speaking of which, when will the mp2 version of Practical Mod Perl be 
coming

out? I'm sure Stas will make a handsome lot (I would use 'bomb' but these
are sensitive times...) when he releases it, cause we're all depending so
much on the book...


Are there any good mp2 books out in the wild yet?

Is anybody working on anything?

In particular do O'Reilly have plans to update any of the books?

Carl



testing frameworks

2006-01-30 Thread Carl Johnstone


Hi,

Been reading through this page:

http://perl.apache.org/docs/general/testing/testing.html

at the bottom of the page the Puffin testing framework is suggested, with 
a link to http://puffin.sourceforge.net/ this redirects to 
www.puffinhome.org which is a domain for sale page.



There seem to have been releases more recent than the ones remaining on 
SourceForge - but I can't find any current references to the software.


By the way, anybody got any other suggestions? I know there is some 
commercial software, but I've not seen anything cost effective with a good 
review yet...


Carl






Re: Getting keep-alive sorted

2005-12-29 Thread Carl Johnstone

If so how do I do it?  I looked at the mod_perl docs and found:
Apache2::Response->set_keep_alive()

The example:
$ret = $r->set_keepalive();


That overrides the default KeepAlive setting for this particular request.


Doesn't really make sense to me.  I already have the following in my
code:

$r->no_cache(1);


That adds HTTP headers instructing the client (and any proxies) that they 
shouldn't cache the page.


There's no relation between those two - they're doing completely separate 
things.


All that KeepAlive does is allow a client to request multiple resources on 
the same connection. Instead of


connect
request resource
disconnect
connect
request resource
disconnect

the browser can do:

connect
request resource
request resource
disconnect


This generally works out bad for a mod_perl server, as you end up with a 
large number of apache processes, and therefore a large amount of server 
resources just sat around waiting to see if any further requests are going 
to come in on that connection.


For a basic apache install, that's mainly serving static content KeepAlive 
is an advantage as it reduces the amount of time spent opening and closing 
connections. Remember that in most situations you're going to serve a page, 
followed by some images and stylesheets.


So if you have a front-end/back-end setup the best config is:

KeepAlive Off

in the back-end mod_perl servers, and On in the front-end static/proxy 
servers.
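Sketched as httpd.conf fragments (the port and paths are illustrative assumptions):

```apache
# Front-end: lightweight Apache serving static files, proxying to mod_perl
KeepAlive On
KeepAliveTimeout 5
ProxyPass /app/ http://127.0.0.1:8080/app/

# Back-end: mod_perl Apache listening on 127.0.0.1:8080
KeepAlive Off
```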


Carl



Re: Need use modules in my code?

2005-12-21 Thread Carl Johnstone



#!/usr/bin/perl
use Compress::Zlib;
.
print Compress::Zlib::memGzip($htmlstr);



This one is best practice - and is a requirement when you want to import 
functions into your own script's namespace.


Carl



Re: Apache::DBI + DBD::Multiplex

2005-11-24 Thread Carl Johnstone

 Philippe M. Chiasson wrote:
  2 possible avenues to investigate:
   - Change your code to not use an array-ref as mx_dsns argument but some
string (comma delimited, etc)
   - Look at changing Apache::DBI to not use the ref as part of the dsn
cache key, but use the contents of the array
 something along the lines of :
  $Key = Dumper([EMAIL PROTECTED]);
 

 The arrayref is a requirment of multiplex. I'll do some hacking in both
 modules and see what works. I'm not certain if the first possible change
 would work as suggested as Apache::DBI would end up caching possibly
 just one, if any of the multiple potential DB connections. The target
 will be to cache a persistant connection for all DB servers that
 multiplex is setup to query against.

 I'm not completly up to scratch when or how Apache::DBI does its magic,
 either within the connect call of my DBI handle or within Multiplex's
 (as multiplex does a similar thing to Apache::DBI in redirecting the
 connect request).

Have you copied the example code from DBD::Multiplex?

%attr = (
'mx_dsns' => [$dsn1, $dsn2, $dsn3, $dsn4],
'mx_master_id' => 'dbaaa1',
'mx_connect_mode' => 'ignore_errors',
'mx_exit_mode' => 'first_success',
'mx_error_proc' => \&MyErrorProcedure,
 );
 $dbh = DBI->connect("dbi:Multiplex:", 'username', 'password', \%attr);
If so you're creating a NEW anonymous array every time you build %attr. Try:

my @dsns = ($dsn1, $dsn2, $dsn3, $dsn4);
%attr = (
'mx_dsns' => \@dsns,
'mx_master_id' => 'dbaaa1',
'mx_connect_mode' => 'ignore_errors',
'mx_exit_mode' => 'first_success',
'mx_error_proc' => \&MyErrorProcedure,
);
$dbh = DBI->connect("dbi:Multiplex:", 'username', 'password', \%attr);

Or even better find some way of defining this stuff once, then re-using it.
I'd suggest a custom DB settings module that when loaded initialises the
required variables, and can return a DB handle to any other code when you
need it.

Carl



Re: Apache::DBI + DBD::Multiplex

2005-11-24 Thread Carl Johnstone
 Thanks for the suggestion. It appears that DBD::Multiplex is mashing up
 the ref address each time. Interesting its not mashing up the sub ref
 address for mx_error_proc. I'll dig deeper into Multiplex and see why
 its changing it.

The subroutine will only be defined once at file load time so the reference
to it won't change.

Thinking about it, depending on where you define the @dsns variable I suggested
won't necessarily work that much better. You need to make sure it's declared
in a namespace where it won't get redefined - effectively you want to ensure
that you only ever define that array once, then the reference to it won't
ever change.

Or *if* you do need to change the DSN list without restarting mod_perl you
need to do something that's a bit more of a work-around.

Carl



Re: A handler just for the / location

2005-10-14 Thread Carl Johnstone

Yes it also handles that path (no-path)
I heard that some browsers use to add the / after the location if it
doesn't end in /, but I am not sure they do that because for example if 
I

have an URL like:

http://www.site.com/test

then Internet Explorer doesn't add a / after it if /test location is
handled by a mod_perl handler, but it adds a / if /test is a common
directory.
IE cannot know if /test is a directory or just a virtual path, so / 
might

be returned by Apache.

Anyway, I am not sure, so I have created a little client with LWP that 
gets

http://www.site.com/test (without a trailing /) and the page is displayed
correctly.


/test and /test/ are different URLs, and of course may serve different 
things.


If apache only finds a directory it issues a redirect to /test/ so that 
relative URLs in any index page work correctly.


The annoying thing that IE does here is that it stores the URL *without* the 
slash in its history - so if you want to use autocomplete or the drop-down 
you have to manually re-add the slash.


That said, IE5 suffered a much, much worse bug. If you submitted a URL with 
a query string "a=b" it was then impossible to submit "a=B" without clearing 
your browser history, as IE automatically "corrected" the case of your 
request based on the entry in its history...


Carl



Re: mod_perl, shared memory and solaris

2005-09-20 Thread Carl Johnstone



Any advice on memory use on Solaris is appreciated


Take a look at:

http://people.ee.ethz.ch/~oetiker/tobjour/2003-02-12-09-00-f-1.html

There's some useful information, including a chunk covering what you're 
after. Specifically try:


  pmap -x  PID

the private/anon column shows the amount of memory exclusive to the process.

Carl 



Re: Sth I don't understand about pnotes

2005-09-16 Thread Carl Johnstone

From: Anthony Gardner [EMAIL PROTECTED]

Can s.o. explain what is wrong with the following code


$r->pnotes('KEY' => push( @{ $ar }, $some_val ) );



It's a perl problem - not a mod_perl problem. push returns the new number of 
elements on the array. So in effect you're doing:


$r->pnotes('KEY' => number_of_elements_in_array );
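A quick plain-Perl sketch of the difference - do the push as its own statement first, then store the reference:

```perl
my @vals = ('a', 'b');
my $ret  = push @vals, 'c';   # push returns the new length: $ret is 3

# So push the value first, then hand pnotes the array reference:
push @{ $ar }, $some_val;
$r->pnotes('KEY' => $ar);
```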


Your other snippet looks OK to me.

Carl



Re: Masquerading requests as HTTPS

2005-09-16 Thread Carl Johnstone


Can add my voice to the BigIP should do this school of thought. If it's 
effectively converting HTTPS into HTTP requests for you, then I would expect 
it should be able to rewrite redirects automatically for you too. Same way 
that apache does it in mod_proxy.


However can I also point out that even if you catch redirects, you've still 
potentially got broken HTML etc etc to fix.


Carl



Re: *nix distro compatibility (was Re: survey)

2005-08-30 Thread Carl Johnstone
 I think a great first-place to start for advocacy is to work with the
 various linux/bsd/*nix distributions out there to make sure that they
 have a modern, compatible version of mod_perl 2.  As a user, I don't
 want to maintain my own perl/mod_perl build tree - I want my distro to
 do the right thing.  Perhaps a first-step in the advocacy movement is
 to maintain a distro compatibility list for mod_perl 2 on
 perl.apache.org, so that it's not such a black-art in determining
 whether mod_perl/Apache::* packages are up-to-date or whether there are
 timebombs waiting to ambush new users.

Sounds like a good idea, and if we point people in the right direction to
get updated versions/backports for their distro that might help with the
rest.

As a Debian user I'd like to move to mod_perl2 proper, however I don't want
to have to compile it for myself. So I've been taking the option of using
the version in Sarge, and figuring out where I differ from the docs.

I've been checking apt-get.org regularly to see if anybody had setup a
repository, and backports. I hadn't been checking the incoming folder on
backports, so didn't realise that somebody had done the package till it was
mentioned on here recently.

Carl



Re: [MP2] having trouble obtaining $r

2005-08-11 Thread Carl Johnstone

I'm having trouble getting the request object from mod_perl.  Here is the 
original attempt:




Have you simply tried:

my $r = shift;


at the beginning of your handler? If you're using Apache::Registry, then 
your whole file is a handler so you can do that at the top.


Carl






Docs error?

2005-04-26 Thread Carl Johnstone
On:
http://perl.apache.org/docs/tutorials/client/browserbugs/browserbugs.html
It talks about browsers misreading URLs like:
http://example.com/foo.pl?foo=bar&reg=foobar
Presuming this is within a HTML page e.g.:
<a href="http://example.com/foo.pl?foo=bar&reg=foobar"> ...
then the actual problem is that the & has not been changed into a HTML 
entity - so what you should do is:

<a href="http://example.com/foo.pl?foo=bar&amp;reg=foobar"> ...
Carl


Re: Docs error?

2005-04-26 Thread Carl Johnstone

OK, in which case it must be some relatively recent change, since an
unescaped & in the QUERY_STRING was a valid separator. A pointer to the 
relevant RFC would be nice so we can add that to the URL that started 
this thread.

Here?
http://www.w3.org/TR/1999/REC-html401-19991224/appendix/notes.html#h-B.2.2

To summarise: & as a separator in a QUERY_STRING is valid, however when you 
represent a URI within HTML, the & has to be escaped to &amp; same as for 
any other occurrences of &.

So you type (or the browser still generates)
 my.pl?foo=1&bar=2
in the address bar of your browser, which is what comes through the request. 
This is still perfectly valid.

In the source of a HTML doc you've got to put it as
 <a href="my.pl?foo=1&amp;bar=2">Click &amp; Go</a>
I've not tested, but I think putting my.pl?foo=1&amp;bar=2 in the browser 
actually generates foo=1, amp=undef, and bar=2...


So should we commit something like the following? Or should we just nuke 
the whole section altogether?

Index: src/docs/tutorials/client/browserbugs/browserbugs.pod
===
--- src/docs/tutorials/client/browserbugs/browserbugs.pod   (revision 
164401)
+++ src/docs/tutorials/client/browserbugs/browserbugs.pod   (working 
copy)
@@ -37,6 +37,12 @@

 =head1 Preventing QUERY_STRING from getting corrupted because of entity 
key names

+This entry is now irrelevant since you must not use C<&> to separate
+fields in the C<QUERY_STRING> as explained here:
+http://www.w3.org/TR/1999/REC-html401-19991224/appendix/notes.html#h-B.2.2
+If for some reason, you still want to do that, then make sure to read
+the rest of this section.
+
 In a URL which contains a query string, if the string has multiple
 parts separated by ampersands and it contains a key named reg, for
 example C<http://example.com/foo.pl?foo=bar&reg=foobar>, then some
I'd suggest rewording the answer to something like:

In a URL which contains a query string, if the string has multiple parts 
separated by ampersands and it contains a key named reg, for example 
http://example.com/foo.pl?foo=bar&reg=foobar, then some browsers will 
interpret reg as an SGML entity and encode it as &reg;. This will result in 
a corrupted QUERY_STRING.

The behaviour is actually correct, and the problem is that you have not 
correctly encoded your ampersands into entities in your HTML. What you 
should have in the source of your HTML is 
http://example.com/foo.pl?foo=bar&amp;reg=foobar.

A much better, and recommended, solution is to separate parameter pairs with 
; instead of &. CGI.pm, Apache::Request and $r->args() support a semicolon 
instead of an ampersand as a separator. So your URI should look like this: 
http://example.com/foo.pl?foo=bar;reg=foobar.
Note that this is only an issue within HTML documents when you are building 
your own URLs with query strings. It is not a problem when the URL is the 
result of submitting a form because the browsers have to get that right. It 
is also not a problem when typing URLs directly into the address bar of the 
browser.

Reference: 
http://www.w3.org/TR/1999/REC-html401-19991224/appendix/notes.html#h-B.2.2

---

Carl






Re: RC5 really broke some stuff

2005-04-22 Thread Carl Johnstone

People like Jonathan and myself just have to double up (or triple-up!)
because of the change. Try telling that to your bosses that the latest 
version of modperl is holding things back ('told you we should've used 
Java!').
Did your boss know that the latest version of mod_perl is actually a 
pre-release?

I'll agree that pre-release is a bad time to make API changes like this and 
it would be better if it'd be done during the beta. It's only when a piece 
of software is released that the developers are making the implicit promise 
to not break anything.

If you're going to run beta or pre-release code - you've got to accept the 
risks that go with that.

Carl


Re: how to take out the default charset append?

2005-04-22 Thread Carl Johnstone
Please help me with one more related question: if I am to simply use 
$r->content_type('text/html'), the final render to the client comes with 
';charset=8859-1' appended to the header. Is this the result of Apache, 
or mod_perl? Is there any way I can just render Content-Type: text/html 
without a character set in the header?
You've probably got
AddDefaultCharset 8859-1
in your apache config (On == iso-8859-1). Change it to:
AddDefaultCharset Off
See:
http://httpd.apache.org/docs-2.0/mod/core.html#adddefaultcharset
Carl


Re: Apache-DBI

2005-04-18 Thread Carl Johnstone
From what I understand, Apache::DBI provides certain performance benefits 
over DBI in much the same manner that mod_perl does over standalone Perl, 
but isn't required to use DBI in a mod_perl environment.
Essentially it makes your database connections persistent.
So when you establish a connection at the start of a particular mod_perl 
script it intercepts the request and returns a pre-connected DB handle. 
Similarly it intercepts the disconnect request at the end of the script and 
leaves the connection to the DB open ready for next time it's needed.

For DB's that are slow when establishing connections it can make a 
considerable speed difference.
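Enabling it is normally just a matter of loading Apache::DBI before DBI in the server config; application code keeps calling DBI->connect and DBI->disconnect as usual, and Apache::DBI intercepts both behind the scenes:

```apache
# httpd.conf - must be loaded before DBI / any module that uses DBI
PerlModule Apache::DBI
```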

Carl


Re: Question about Files directive

2005-03-02 Thread Carl Johnstone
Hi,
To explain exactly what is going on...
The Files directive is an *apache* not mod_perl directive. You can use it 
within apache to define specific rules for a file (or bunch of files).

The SetHandler directive is also an apache directive, that tells apache 
what hander should be used.

So for instance this:
   <Files *.cgi>
   SetHandler perl-script
   </Files>
tells apache that all files matching *.cgi should be handled by perl-script 
which is mod_perl.

The PerlHandler directive is mod_perl, it tells mod_perl which perl module 
should be used as the handler.

In the case of Apache::Registry this is a perl module that attempts to 
provide some of the advantages of mod_perl to regular cgi scripts. In 
particular you get use of the built-in perl interpreter, and your perl files 
are only interpreted when they are changed rather than every time they are 
requested. This is the cache that you referred to in your first post.

The Options directive sets permissions, in this case it allows files to be 
executed as CGI. This option is also required by Apache::Registry.

Now you'll have to be very careful as you're not really caching CGI scripts. 
The Apache::Registry module does a reasonable job of making things works, 
but there are issues to be wary of. The porting page that's already been 
suggested lists all the pitfalls.

You may also want to investigate Apache::PerlRun. This is similar to 
Apache::Registry in that it uses the built-in perl interpreter but it 
reloads the script on every request. This means that it doesn't give the 
same performance benefit that Apache::Registry does, but on the other hand 
it's immune to some of the problems.

Hope this answers your question.
Carl



Re: Question about Files directive

2005-03-02 Thread Carl Johnstone
CCing it back onto the list...


Now if I do not want all to specify
all cgi scripts (*.cgi) to be handled by the handler
precisely to avoid pitfalls and only specify
some files to be used, then how do I do
that via the Files directive.


There's a couple of different approaches.

The first you posted - list them individually. Alternatively the Files
directive will take a regular expression. The apache docs are here for v1.3
http://httpd.apache.org/docs/mod/core.html#files

Another way if all your cgi scripts are in a single cgi-bin or similar is to
setup different aliases. Then you can access the same scripts as straight
CGI, Apache::Registry, or Apache::PerlRun. I'll assume your cgi-bin is
already setup - here's the rest of the config.

  Alias /perl/ /path/to/cgi-bin/
  PerlModule Apache::Registry
  <Location /perl>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options ExecCGI
    allow from all
    PerlSendHeader On
  </Location>

  Alias /perl-cgi/ /path/to/cgi-bin/
  PerlModule Apache::PerlRun
  <Location /perl-cgi>
    SetHandler perl-script
    PerlHandler Apache::PerlRun
    Options ExecCGI
    allow from all
    PerlSendHeader On
  </Location>


Then you can try running your script through Apache::Registry with
http://www.mydomain.com/perl/myscript.cgi if that doesn't work then
http://www.mydomain.com/perl-cgi/myscript.cgi for Apache::PerlRun. Failing
any of those revert back to CGI with
http://www.mydomain.com/cgi-bin/myscript.cgi.


Carl



Re: Question about Files directive

2005-03-02 Thread Carl Johnstone
 Chuck, use Apache::Reload and you won't have to restart the server.
 http://modperlbook.org/html/ch06_08.html


Although on a production server that's dealing with many thousands of
requests, that could be an awful lot of checks to see if modules have been
updated. Personally I prefer a bit of shell scripting on a cron:

#!/bin/sh

if [ -n "`find /path/to/my/modules -prune -newer /path/to/apache.pid`" ];
then
  echo apache needs reloading
  echo Testing
  apachectl configtest
  if [ $? = 0 ] ; then
echo Restarting
apachectl restart
  else
echo ERROR in apache config
  fi
fi



-- 
Carl



Re: Protecting perl-status by IP on a backend server

2005-02-27 Thread Carl Johnstone
 <Location /perl-server-status>
  SetHandler server-status
  Order deny,allow
  Deny from all
  Allow from env=admin_ip
 </Location>

 I still get

 [client 127.0.0.1] client denied by server configuration

Put the access rules in the front-end (non-mod_perl) apache rather than the
back-end.

Carl



Re: [mp1] Apache::Cookie(?) Seg Fault

2005-02-25 Thread Carl Johnstone
I'm using gcc 3.3.2. Will leave a build of gcc 3.4.3 running overnight and 
try rebuilding apache with that sometime tomorrow.
Actually realised this morning that one of our other servers is doing the 
same. Same versions of apache etc, this time on Solaris 9 built with gcc 
2.95.3. That isolates the build tools from being part of the problem.

Carl


Re: [mp1] Apache::Cookie(?) Seg Fault

2005-02-25 Thread Carl Johnstone
 Out of curiosity, what does `perl -V` say?
Summary of my perl5 (revision 5.0 version 8 subversion 3) configuration:
 Platform:
   osname=solaris, osvers=2.8, archname=sun4-solaris
   uname='sunos g-web1 5.8 generic_108528-19 sun4u sparc sunw,ultra-4 
solaris '
   config_args='-ds -e -DDEBUGGING -Doptimize=-g -Dcc=gcc'
   hint=recommended, useposix=true, d_sigaction=define
   usethreads=undef use5005threads=undef useithreads=undef 
usemultiplicity=undef
   useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
   use64bitint=undef use64bitall=undef uselongdouble=undef
   usemymalloc=n, bincompat5005=undef
 Compiler:
   cc='gcc', ccflags 
='-DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE  
-D_FILE_OFFSET_BITS=64',
   optimize='-g',
   cppflags='-DDEBUGGING -fno-strict-aliasing -I/usr/local/include'
   ccversion='', gccversion='3.3.2', gccosandvers='solaris2.8'
   intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=4321
   d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
   ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', 
lseeksize=8
   alignbytes=8, prototype=define
 Linker and Libraries:
   ld='gcc', ldflags =' -L/usr/local/lib '
   libpth=/usr/local/lib /usr/lib /usr/ccs/lib
   libs=-lsocket -lnsl -ldl -lm -lc
   perllibs=-lsocket -lnsl -ldl -lm -lc
   libc=/lib/libc.so, so=so, useshrplib=false, libperl=libperl.a
   gnulibc_version=''
 Dynamic Linking:
   dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' -Wl,-E'
   cccdlflags='-fPIC', lddlflags=' -Wl,-E -G -L/usr/local/lib'

Characteristics of this binary (from libperl):
 Compile-time options: DEBUGGING USE_LARGE_FILES
 Built under solaris
 Compiled at Jan 27 2005 13:17:09
 @INC:
   /usr/local/lib/perl5/5.8.3/sun4-solaris
   /usr/local/lib/perl5/5.8.3
   /usr/local/lib/perl5/site_perl/5.8.3/sun4-solaris
   /usr/local/lib/perl5/site_perl/5.8.3
   /usr/local/lib/perl5/site_perl/5.8.0/sun4-solaris
   /usr/local/lib/perl5/site_perl/5.8.0
   /usr/local/lib/perl5/site_perl/5.6.1
   /usr/local/lib/perl5/site_perl
   .



Re: [mp1] Apache::Cookie(?) Seg Fault

2005-02-25 Thread Carl Johnstone
OK;  I'm just guessing at this point, so you may need to track
this bug down yourself.
I'll do what I can - long time since I wrote any C though.
#2  0xfe1d5a40 in ApacheCookie_new (r=0x1380960) at apache_cookie.c:79
#3  0xfe1d3550 in XS_Apache__Cookie_parse (cv=0xe7cce4) at Cookie.xs:208
#4  0x001c8bf0 in Perl_pp_entersub () at pp_hot.c:2840
#5  0x00196538 in Perl_runops_debug () at dump.c:1438
#6  0x0010fdec in S_call_body (myop=0xffbef630, is_eval=0) at perl.c:2221
#7  0x0010f7f4 in Perl_call_sv (sv=0x1088a4c, flags=4) at perl.c:2139
#8  0x0004bce4 in perl_call_handler (sv=0x1088a4c, r=0x12481f8, args=0x0) 
at
mod_perl.c:1668

Notice that r has changed between frames 2 and 8.  Since
XS_Apache__Cookie_parse retrieves r from modperl's perl_request_rec
(the XS version of Apache-request), perhaps the problem relates
to that function?
Having a quick look, there are SSIs involved which would explain the 
different request records, although the record in frame 2 doesn't seem 
right.

Shall continue next week...
Thanks
Carl


[mp1] Apache::Cookie(?) Seg Fault

2005-02-24 Thread Carl Johnstone
Getting an occasional Segfault which I've traced back to this line in a 
FixupHandler:

 my %cookies = Apache::Cookie->fetch;
I'm running Apache/1.3.31 with mod_perl/1.29 on Solaris 8.
Looking at the gdb trace, is the request pool becoming corrupt somewhere 
between ApacheCookie_new and ap_make_array?

Carl

Program received signal SIGSEGV, Segmentation fault.
0x000c56f0 in ap_palloc (a=0x1380938, reqsize=20) at alloc.c:700
700 char *first_avail = blok-h.first_avail;
(gdb) bt
#0  0x000c56f0 in ap_palloc (a=0x1380938, reqsize=20) at alloc.c:700
#1  0x000c5d84 in ap_make_array (p=0x1380938, nelts=1, elt_size=4) at 
alloc.c:992
#2  0xfe1d5a40 in ApacheCookie_new (r=0x1380960) at apache_cookie.c:79
#3  0xfe1d3550 in XS_Apache__Cookie_parse (cv=0xe7cce4) at Cookie.xs:208
#4  0x001c8bf0 in Perl_pp_entersub () at pp_hot.c:2840
#5  0x00196538 in Perl_runops_debug () at dump.c:1438
#6  0x0010fdec in S_call_body (myop=0xffbef630, is_eval=0) at perl.c:2221
#7  0x0010f7f4 in Perl_call_sv (sv=0x1088a4c, flags=4) at perl.c:2139
#8  0x0004bce4 in perl_call_handler (sv=0x1088a4c, r=0x12481f8, args=0x0) at 
mod_perl.c:1668
#9  0x0004af00 in perl_run_stacked_handlers (hook=0x2a5480 
PerlFixupHandler, r=0x12481f8, handlers=0x1088a7c) at mod_perl.c:1381
#10 0x000499d0 in perl_fixup (r=0x12481f8) at mod_perl.c:1071
#11 0x000cc8b4 in run_method (r=0x12481f8, offset=23, run_all=1) at 
http_config.c:327
#12 0x000cca10 in ap_run_fixups (r=0x12481f8) at http_config.c:354
#13 0x000ee578 in process_request_internal (r=0x12481f8) at 
http_request.c:1284
#14 0x000ee640 in ap_process_request (r=0x12481f8) at http_request.c:1305
#15 0x000e093c in child_main (child_num_arg=9) at http_main.c:4804
#16 0x000e0cb4 in make_child (s=0x2fbde8, slot=9, now=1109256953) at 
http_main.c:4974
#17 0x000e1218 in perform_idle_server_maintenance () at http_main.c:5159
#18 0x000e1bc0 in standalone_main (argc=1, argv=0xffbefe2c) at 
http_main.c:5412
#19 0x000e2524 in main (argc=1, argv=0xffbefe2c) at http_main.c:5665
(gdb) p *a
$2 = {
 first = 0x1380960,
 last = 0x0,
 cleanups = 0x0,
 subprocesses = 0x0,
 sub_pools = 0x0,
 sub_next = 0x0,
 sub_prev = 0x0,
 parent = 0x137e920,
 free_first_avail = 0x1380960 \0018\t8
}
(gdb) up
#1  0x000c5d84 in ap_make_array (p=0x1380938, nelts=1, elt_size=4) at 
alloc.c:992
992 array_header *res = (array_header *) ap_palloc(p, 
sizeof(array_header));
(gdb) p *p
$3 = {
 first = 0x1380960,
 last = 0x0,
 cleanups = 0x0,
 subprocesses = 0x0,
 sub_pools = 0x0,
 sub_next = 0x0,
 sub_prev = 0x0,
 parent = 0x137e920,
 free_first_avail = 0x1380960 \0018\t8
}
(gdb) up
#2  0xfe1d5a40 in ApacheCookie_new (r=0x1380960) at apache_cookie.c:79
79  c-values = ap_make_array(r-pool, 1, sizeof(char *));
(gdb) p r-pool
$4 = (ap_pool *) 0x1380938
(gdb) p *(r-pool)
$5 = {
 first = 0x2f436f6f,
 last = 0x6b69652f,
 cleanups = 0x14092a0,
 subprocesses = 0x0,
 sub_pools = 0x29,
 sub_next = 0x0,
 sub_prev = 0x0,
 parent = 0x0,
 free_first_avail = 0x0
}



Re: Apache::Clean worthwhile in addition to mod_gzip ?

2005-02-24 Thread Carl Johnstone
If you are already using a compression tool like mod_gzip, does it tend to 
be
worthwhile to add an Apache::Clean phase as well?

I'm curious to know if other Apache::Clean users have felt there was
significant benefit or a noticeably performance penalty.
It would seem the bandwidth is more of an issue than the processor time, 
so my
assumption is that a little extra processor time would be a reasonable 
trade-off.
It's one of those things that really depends on your circumstances.
If your HTML is generated by tools that inserts lots of whitespace and extra 
tags then you'll see more benefit than if you hand-craft or machine-generate 
HTML.

Looking at a typical hand-built homepage I'm seeing around a 9-10% saving 
from cleaning (at level 9) before and after compression. I reckon it's 
probably worthwhile _if_ you've got plenty of CPU to spare and would like to 
knock 10% off your bandwidth utilisation.

Realistically it's one of those things you'll need to test for yourself. 
Monitor your system for a period of time, noting CPU load and bandwidth 
usage. Install Apache::Clean and perform the same monitoring. You can then 
make the decision based on your real-world situation.

Carl


Re: [mp1] Apache::Cookie(?) Seg Fault

2005-02-24 Thread Carl Johnstone
This looks like a va_* related bug to me.
None of the va_* arguments would seem to be used before we hit the fault.
The arg list to
ApacheCookie_new() must be NULL-terminated, and between
the r and NULL there must be an even number of arguments.
The call to ApacheCookie_new() in XS_Apache__Cookie_parse() is OK:
 c = ApacheCookie_new(r, NULL);
I'm using gcc 3.3.2. Will leave a build of gcc 3.4.3 running overnight and 
try rebuilding apache with that sometime tomorrow.

Carl