Re: mod_perl slower than expected? - Test Code!

2003-06-18 Thread Trevor Phillips
Stas Bekman wrote:
Joe Schaefer wrote:

I doubt that this makes any difference. I think what makes the 
difference is
the fact that the mod_perl handler is set up via .htaccess. Have you 
tried setting it in httpd.conf? Otherwise it's parsed on each request, 
no surprises that it's slower.
Eh? How can that make a difference? Yes, I know that, between .htaccess 
vs httpd.conf vs FastCGI vs CGI there will be some small performance hit 
when accessing the URL. But that speed difference is irrelevant given 
the run time of the content generator. I'm not talking about a 
lightning-fast routine on a heavily hit server; I'm talking about a 
complex request whose time can be measured in whole seconds. In that 
scenario, .htaccess vs httpd.conf is irrelevant.

(And in my dev environment, both the FastCGI and mod_perl were being 
configured from .htaccess)
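For reference, the httpd.conf equivalent of the .htaccess setup looks something like this under mod_perl 1.x (a sketch; the Location path and handler name follow the Thrash test case elsewhere in this thread):

```apache
# Parsed once at server startup, rather than on every request:
<Location /thrash.html>
    SetHandler perl-script
    PerlHandler Thrash
</Location>
```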

And on the topic of DSO vs Static, yes, I tested that today. The static 
mod_perl was marginally faster than the DSO mod_perl - but still nowhere 
near as fast as the FastCGI, CGI and command line versions.

Interestingly, I tried Thrash on the aforementioned anomaly: My Sun Box 
running Debian. I was wrong in saying the performance was the same for 
both versions - turns out the mod_perl version on THAT platform is 
slightly FASTER (but nowhere near the performance gap shown on the Intel 
platforms).

Moreover, your code loads Apache::Request, but doesn't use it.
Oops. Kick-back from the original module I was cloning.
Again, though, that would only affect start-up time, which in this test, 
is marginal. Also, it would not affect the time shown by the internal 
time difference displayed.

Unrelated to the case, but related to performance in general:

eval "use Time::HiRes;";
if (!$@)
{
   $Thrash::UseTimeHiRes = 1;
}
is probably a bit faster written as:

eval { require Time::HiRes };
if (!$@)
{
   $Thrash::UseTimeHiRes = 1;
}
and neater as:

use constant UseTimeHiRes => eval { require Time::HiRes };
Thanks for the syntax tips. I'll change my other code to use them. ^_^
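As a standalone sketch of that last form (Time::HiRes has shipped with Perl since 5.8, so the eval normally succeeds):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Evaluated once at compile time: true if Time::HiRes loads cleanly.
use constant UseTimeHiRes => eval { require Time::HiRes } ? 1 : 0;

# Perl can constant-fold the dead branch instead of re-testing a global.
my $now = UseTimeHiRes ? Time::HiRes::time() : time();
print "high-res timer available: ", (UseTimeHiRes ? "yes" : "no"), "\n";
```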

--
. Trevor Phillips -   http://jurai.murdoch.edu.au/ .
: Web Technical Administrator -  [EMAIL PROTECTED] :
| IT Services-  Murdoch University |
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /


Re: mod_perl slower than expected?

2003-06-17 Thread Trevor Phillips
On Friday 13 June 2003 23:00, you wrote:
 [ please keep it on the list ]

Oops. Sorry. Used to mail lists auto-replying to the list. ^_^;;

 On Fri, 2003-06-13 at 03:23, Trevor Phillips wrote:

  I don't think so. Pretty standard Debian install, Perl 5.6.1...

 And you compiled both mod_perl and FastCGI yourself?

No, they're the Debian binary packages. Would it make a difference, if they're 
using the same compile of Perl itself?

The speed problem is not a connect time problem - it's actual run-time of the 
Perl code.

I'm trying to narrow down what the problem is; I have a simple EDO script 
which is basically a bunch of nested iterations incrementing a counter. It 
contains no DB activity (just to show this isn't a DBI issue).

I also ran it with the vanilla CGI version of the EDO parser, with the same 
result - everything is faster than running it under mod_perl.

I've tried it on several different systems. Interestingly, the slow-down has 
occurred on all systems, with the exception being a Sun Sparc Ultra-5. All 
systems are running Debian, with a variety of Apache configs. There's also a 
mix of dual and single CPU, and a mix of Intel & AMD CPUs.

The only common thing between all the systems with the problem is they're 
using the i686 Debian package for mod_perl.

I'll continue trying different configs, and will also try recompiling mod_perl 
myself...

  That's preloaded for some other modules. EDO uses Apache::Registry.
  (Which is another possible point of suspicion, although it's not used
  much... And Apache::Registry is supposed to be faster than CGI (which the
  FastCGI version uses) too...)

 Apache::Registry is faster than a CGI script.  The CGI.pm module does
 something totally different, i.e. parsing params.  CGI::Simple
 implements the same interface and is a drop-in replacement, so it might
 be worth a try.

Oops! Sorry! I meant Apache::Request. I've never used Apache::Registry before. 
^_^

  Personally, I'd love to see a blend: Where I can have a light-weight
  mod_perl style interface in all daemons, which can interface to a
  possibly more limited number of FastCGIs. Gain the power of mod_perl,
  with the resource control of FastCGIs.

 You can do that with mod_perl 2, by setting up the number of perl
 interpreters you want to have available for each script.  See
 http://perl.apache.org/docs/2.0/user/intro/overview.html#Threads_Support

Yes, I've heard many good things about Apache 2 & mod_perl 2, as well as many 
reasons not to shift everything to it yet. I look forward to the time it and 
I are ready to migrate. ^_^

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: mod_perl slower than expected? - Test Code!

2003-06-17 Thread Trevor Phillips
On Tuesday 17 June 2003 18:18, Ged Haywood wrote:
 
 Do you know for sure that the Perl was compiled for i686?  Maybe it was
 compiled for i386, so it would run on just about anything but it can't
 use a lot of the faster features found in later processors.

Whether it's i686 or i386 - both mod_perl and FastCGI are using the same 
compile of perl - so what difference should there be?

Right! I've put together a test case, which has nothing to do with my EDO 
project. It's just a simple iterative loop, with subroutine calls, updating a 
hash. I've included the main module Thrash.pm, which does the work, and 
doubles as the mod_perl Apache module. I've included a FastCGI version, and a 
plain CGI version (which can also be run from the command line), all using 
Thrash.pm. I've also attached a .htaccess file setting up thrash.html to be 
handled by Thrash.pm.

On my main dev box, ab gives an average of 8.8secs for the mod_perl run, and 
7.2secs for the FastCGI run. The internal timer and printed output reflects 
these results too. 

Interestingly, the time to clear the hash after building it is quite large. 
^_^

I commented out the hash adding code, to remove the hash processing, and leave 
it more as a subroutine call test, but mod_perl was still slower, with 
benchmarks of 3.7secs for mod_perl, and 2.7secs for FastCGI.

I've only tested this on one server so far. I'll test it on others now... I'd 
be interested to hear if others get this discrepancy on their servers...

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /

package Thrash;

$Count = 0;
%Hash = ();

use Apache::Constants qw(:common);
use Apache::Request ();
eval "use Time::HiRes;";
if (!$@)
{
   $Thrash::UseTimeHiRes = 1;
}

sub handler
{
   my $r = shift;  # The Raw handler...
   $r->send_http_header;
   $r->print(Thrash::DoIt());
   return OK;
}

sub DoIt
{
   my $TIME_start = ($UseTimeHiRes?Time::HiRes::time():time());
   my $out = "<H1>Thrash Test</H1>\n";

   $Count = 0;
   %Hash = ();
   foreach my $A (qw(0 1 2 3 4 5 6 7 8 9 A B C D E F))
   {
  SubThrash(0,$A);
  $out .= "$A / $Count<BR>\n";
   }
   $out.=sprintf('<B>Run Time: %.3f</B>',($UseTimeHiRes?Time::HiRes::time():time())-$TIME_start);
   %Hash = ();  # Clear Hash now we're done.
   return $out;
}

sub SubThrash
{
   my ($depth,$key) = @_;
   if ($depth==3)
   {
  $Count++;
#  $Hash{$key} = $Count;
  return;
   }
   foreach my $B (qw(a b c d e f g h i j k l m n o p q r s t u v w x y z))
   {
  SubThrash($depth+1,$key.$B);
   }
}
1;
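Stripped of the Apache glue, the same recursion can be timed standalone for a mod_perl-free baseline (a sketch reproducing Thrash's loop: 16 top-level keys, three levels of a-z, so 16 * 26**3 leaf increments):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $Count = 0;

# Same shape as Thrash::SubThrash: recurse three levels deep over a..z,
# bumping the counter at each leaf.
sub SubThrash {
    my ($depth, $key) = @_;
    if ($depth == 3) {
        $Count++;
        return;
    }
    foreach my $B ('a' .. 'z') {
        SubThrash($depth + 1, $key . $B);
    }
}

SubThrash(0, $_) for (0 .. 9, 'A' .. 'F');
print "$Count\n";   # 16 * 26**3 = 281216
```

Running it under `time` from the shell gives a baseline to compare against the ab figures.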


thrash.cgi
Description: Binary data


thrash.fcgi
Description: Binary data


.htaccess
Description: Binary data


Re: mod_perl slower than expected? - Test Code!

2003-06-17 Thread Trevor Phillips
On Wednesday 18 June 2003 11:30, Trevor Phillips wrote:

 On my main dev box, ab gives an average of 8.8secs for the mod_perl run,
 and 7.2secs for the FastCGI run. The internal timer and printed output
 reflects these results too.

Oops! The internal timer wasn't accurate: Swap lines 35 & 36 of Thrash.pm, so 
the timer happens AFTER the hash is reset:

   %Hash = ();  # Clear Hash now we're done.
   $out.=sprintf('<B>Run Time: %.3f</B>',($UseTimeHiRes?Time::HiRes::time():time())-$TIME_start);

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: mod_perl slower than expected?

2003-06-13 Thread Trevor Phillips
On Friday 13 June 2003 13:57, Stas Bekman wrote:

 Since your question is too broad to be able to easily pinpoint any problems
 without spending some time with it, I'd suggest reading:
 http://perl.apache.org/docs/1.0/guide/performance.html
 if you haven't done that yet.

I have. Although it was several years ago now. Has much changed? I'll take a 
look, but I haven't seen anything in there that correlates to the sort of 
results I'm seeing. This isn't a perl load-time issue, and it's not a small 
set slow-down - it's a percentage slow-down on pages of which can take up to 
10 seconds to generate and return. It's something run-time, which is 
different between mod_perl and FastCGI. It's bizarre...

 Have you heard of Apache::Reload?
 http://perl.apache.org/docs/1.0/guide/porting.html#Reloading_Modules_and_Required_Files

I have now. How does it handle syntax errors? Does it kill the whole server, 
keep the old module running, or just kill that module?

  As a FastCGI, all I have to do to restart it is touch the main CGI file.
  I also have tighter control on the number of FastCGI processes, which is
  more useful for development.

 Looks like you may need to do some docs reading. You can easily get a total
 control over the number of mod_perl processes.  It's all described here:
 http://perl.apache.org/docs/1.0/guide/performance.html

Uh, yes, I can limit the Apache processes, but again, that will impact other 
users on the system. Really, FastCGI does just what I want, and it means I 
can write a module that works either as a FastCGI or a module. For content 
modules, that's fine. I've written a myriad of access modules, loggers, trans 
handlers, etc... For those, yes, they're mod_perl exclusives. But for some 
purposes, I find FastCGI more efficient.

I'd almost be tempted to leave this app as a FastCGI - except for some places 
we use it we stack with other mod_perl content handlers. And I'm just baffled 
at this particular performance problem, since I have been through the 
standard things to check for.

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



mod_perl slower than expected?

2003-06-12 Thread Trevor Phillips
I am baffled!
I have a quite complex application. Actually, it's more like a language. (Yes, 
a language written in Perl... ^_^) I've written it as Perl Modules, and I 
have a number of front-ends: normal CGI, FastCGI, command line, and Apache 
Module.

When doing development, I predominantly work with the FastCGI front-end, since 
it's easy to restart, and still gives persistence between requests. My latest 
set of changes have resulted in optimisation and given a decent speed 
increase of up to 25% (depending on the exact usage) for complex pages.

However, when I used the revised modules with the Apache Module, I'm only 
getting a marginal performance increase!

Since the bulk of the work is being done by modules common to the Apache and 
FastCGI front-ends, I am at a loss as to explain why there is such a vast 
difference in performance.

Is there anything I may be missing about the general configuration or 
environment of mod_perl which may be causing this strange situation?

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: mod_perl slower than expected?

2003-06-12 Thread Trevor Phillips
On Friday 13 June 2003 12:14, Cees Hek wrote:

I'm far from new to mod_perl, so yes, I've checked all the obvious stuff.

 - Are you only checking the first time you load the page?  mod_perl still
 needs to load all the perl modules on the first request to a page, unless
 you have specifically pre-loaded them in the apache configuration. 
 Subsequent page loads should be faster.

Benchmarking was done using an internal before and after check using 
Time::HiRes (as well as various stages during processing) as well as using 
ab to do multiple hits in succession.

It is definitely cached code. Successive hits are using the persistent cached 
data as they should.

 - are you using mod_perl2?  I don't know enough about mod_perl2 to help
 here, but possibly you are having thread issues with the threaded MPM.

Nope. Definitely normal old mod_perl for Apache 1.3.x.

 mod_perl should be just as fast as FastCGI as they achieve similar goals by
 eliminating the perl startup and compile stages.  If this doesn't help you,
 then please provide more info on your setup.

Hmmm. Problem is the config is complex, and the module itself is also very 
complex, so I didn't want to spam the list with tons of config info. I'll 
include some more info below...

On Friday 13 June 2003 12:26, Perrin Harkins wrote:

 You're not giving us much to go on here.  What kind of changes did you
 make?  Can you verify that you are running the correct versions of the
 modules under mod_perl?  Are you seeing generally about the same
 performance on both platforms?  What does you httpd.conf look like?

Ok, EDO (the name of the app) parses HTML for additional custom markups, and 
then executes functionality based on those markups (putting it very simply). 
The general procedure for a request involves:
  - Creating an EDO::Parser object
  - Parsing HTML source into an internal structure (this can be cached to 
avoid re-parsing on every request)
  - Executing instructions based on the parsed code

The app does a lot of talking to databases - predominantly MySQL for testing.

Minor optimisations were achieved by replacing some internal hash structures 
with arrays, using constants to index them.

One major optimisation was replacing the conditional parsing module. I was 
using SQL::Statement, and fudging it to eval just the WHERE condition, to be 
able to do SQL-style conditionals. I replaced it with my own module, which 
again parses/compiles expressions into an internal structure once, rather 
than doing it every time.
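The compile-once approach can be sketched as turning the condition text into a closure a single time and reusing it per row (compile_condition and the expression syntax here are illustrative, not EDO's actual API):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Compile a condition (here: a Perl expression over $row) into a sub once,
# instead of re-parsing/eval-ing the text on every evaluation.
sub compile_condition {
    my ($expr) = @_;
    my $sub = eval "sub { my \$row = shift; $expr }";
    die "bad condition: $@" if $@;
    return $sub;
}

my $over_30 = compile_condition('$row->{age} > 30');   # compiled once...
my @rows    = ({ age => 25 }, { age => 42 }, { age => 31 });
my @matched = grep { $over_30->($_) } @rows;           # ...reused per row
print scalar(@matched), "\n";   # prints 2
```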

As mentioned, these (and a few other misc changes) optimisations gave about a 
25% speed boost for complex pages (such as long lists of data).

Predominantly, it does lots of references and manipulations of arrays and 
hashes, it talks quite a bit to databases via DBI, and reads the occasional 
file from disk. It stores a persistent cache of compiled code, and it 
builds page output entirely in memory before output.

All benchmarks I carried out on the same hardware platform, using the same 
HTML source files. The only thing I changed was whether it was parsed with 
the Apache Module version, or the FastCGI version.

My Apache conf. is long and complex, since the systems I develop & test this 
on do lots of other things. Config items that are related to mod_perl 
include:
  PerlFreshRestart On
  PerlModule Apache::Filter;
  PerlModule Apache::SSI;
  PerlModule CGI;
  PerlModule Apache::Resource;
  PerlSetEnv PERL_RLIMIT_AS 48:64
  PerlChildInitHandler Apache::Resource

I also use PerlRequire to pre-load several other modules used by the server.

Although the mod_perl version of EDO can be used as an Apache::Filter, I 
turned it off for the benchmarks.

 By the way, I don't understand your comment about how you developed with
 FastCGI because it's easy to restart.  Is there something about mod_perl
 that makes it hard to restart for you?  I always restart after every
 code change since it takes me less than a second to do.

On a server where there are other developers working on it, if I restart the 
server, it can interrupt others working. In addition, if I stuff something up 
badly (such as a minor syntax error), it doesn't kill the whole server.

As a FastCGI, all I have to do to restart it is touch the main CGI file. I 
also have tighter control on the number of FastCGI processes, which is more 
useful for development.

I also work on FastCGI as a low-ish denominator. There's less server impact to 
throw a FastCGI on, than to install mod_perl, if it's not already there. It's 
also closer to the normal CGI version of EDO, and the command-line version.

If you want to know more about EDO itself, I've had it on Sourceforge for a 
while now:
   http://sourceforge.net/projects/edo/
Still, documentation is a little sketchy there, and it's still under 
development, even though it's used in production-level sites where I work.

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator

Simple DAV Server?

2003-06-10 Thread Trevor Phillips
I'm quite surprised at the limited amount of custom DAV server uses. I mean, 
here's a protocol for editing content over HTTP, which to me screams as an 
ideal solution for, say, editing full HTML content within a DB/CMS.

I mean, I've been working as Technical Support at a uni for Web Services, and 
there seems to be these two sides; on one side are the advocates of a 
file-based system, using any range of HTML editing tools to edit your content 
(and preferably some server-side templating for maintaining common look and 
feel). On the other side is a content management system, which is heavily 
template structured, with content being chunks of text (or HTML) edited using 
web forms, or custom editors (eg; in Java).

One way of obtaining the advantages of both of these techniques would be to 
use a DB-driven CMS, but edit the content chunks using a DAV editor.

I'd like to write a simple DAV interface to a DB myself, but I'm looking for 
existing perl modules to make things easier, but I haven't found a heck of a 
lot along these lines. Most Perl DAV things seem to be more client focussed.

Of those that are for server stuff, they seem either overly complicated (eg; 
Apache::DAV), or fairly immature, and still emphasising filesys structures 
(eg; HTTP::DAVServer).

I suppose what I'm after is an implementation which is easily adaptable to 
editing data within a DB. I'm considering implementing enough of the HTTP 
methods to be functional myself, but would rather not bite off more than I 
have time to chew if there's a nicer solution.

Any suggestions?

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: Simple DAV Server?

2003-06-10 Thread Trevor Phillips
On Wednesday 11 June 2003 05:13, you wrote:
 Trevor Phillips wrote:
 I'm quite surprised at the limited amount of custom DAV server uses. I
  mean, here's a protocol for editing content over HTTP, which to me
  screams as an ideal solution for, say, editing full HTML content within a
  DB/CMS.

 I think the problem with webDAV, as a protocol through which to
 manipulate web pages, lies in the fact that it is difficult to
 manipulate dynamic content without sending the rendered content to the
 client, instead of the true source. (Phew!!  That was long winded... :P
 )  The only way that I have found to do it, is to either break the web
 server, (ie publish to a web server that doesn't have the dynamic
 language engine installed), or... (I don't know of another solution that
 works... :( )

I'm aware of the issue, but don't see it as a show-stopper. You could use an 
alternate URL to direct DAV handling to a different handler. 
ie; 
  To view: /path/to/content/
  To edit: /dav/path/to/content/
... where the module associated with /dav/ knows how to retrieve the raw 
content (be it files, or a map to DB-stored content) of the normal path.

When viewing the content, you could provide links to the edit version of the 
URL.
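In mod_perl 1.x terms, that split could be wired up roughly like this (My::DAVEdit is a hypothetical handler name):

```apache
# /path/to/content/ stays on the normal rendering pipeline; requests
# under /dav/ go to a handler that speaks DAV against the raw source.
<Location /dav>
    SetHandler perl-script
    PerlHandler My::DAVEdit
</Location>
```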

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: Web Technical Administrator -  [EMAIL PROTECTED] : 
| IT Services-  Murdoch University | 
 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /





Re: Advanced daemon allocation

2001-06-28 Thread Trevor Phillips

Matthew Byng-Maddick wrote:
 
 This is (in my mind) currently the most broken bit of modperl, because of
 the hacks you have to do to make it work. With a proper API for content
 filtering (apache2), it will be fantastically clean, but at the moment... :-(

The hacks are getting neater, but yes, proper content filtering support will be
wonderful. ^_^

 The fastcgi can run in a different apache again, potentially, it doesn't
 matter (unless I'm misunderstanding something you wrote)

I'm not sure you understand how the FastCGI works.
Apache has mod_fcgi (or was it mod_fastcgi?) which is a lightweight
dispatcher - it interfaces to FastCGI applications. The FastCGI applications
are separate processes, running as daemons. The handling of the FastCGI daemons
can be done statically (eg; run 5 instances of this app), or dynamically
(increase/decrease daemons based on load automatically).

So, if I have an Apache server, with a hefty Perl App that takes up 10Mb RAM,
then an Apache server with 50 daemons would take 500Mb. Having that app as a
FastCGI and limiting it to 5 daemons would mean only 50Mb of RAM is required,
but only 5 of those 50 daemons could access the application at a time - but
those other daemons can do other things, like dish up static content, access
other FastCGIs, etc...
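With mod_fastcgi, the static 5-daemon arrangement above is a single directive (the path is illustrative):

```apache
# Run exactly 5 persistent copies of the app, independent of how many
# Apache children exist to serve static content.
FastCgiServer /var/www/apps/edo.fcgi -processes 5
```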

What's more, you can host FastCGI apps on different hosts, and there's a common
protocol between webserver and CGI, so you can write the CGIs in any language,
and have them work with any webserver with FastCGI support.

IMHO, FastCGIs are a better way of doing applications, but don't have the
versatility mod_perl has of digging into Apache internals. Don't get me wrong,
most of what I do is in mod_perl, but part of that is because it's harder to
layer content from multiple FastCGIs.

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator -   [EMAIL PROTECTED] : 
| IT Services   -   Murdoch University | 
 --- Member of the #SAS# & #CFC# 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: Advanced daemon allocation

2001-06-22 Thread Trevor Phillips

Matthew Byng-Maddick wrote:
 
[useful description snipped]
 Obviously, if your modperl is URL dependent, then you can't determine what
 URL they are going to ask for at the time you have to call accept. The only
 alternative way of doing what you're asking for is to use file descriptor
 passing, which is still about *the* topmost unportable bit of UNIX. :-(
 It is also quite complicated to get right.

Aah! Ok.

 It isn't, because otherwise there'd be even more context-switching, (which is
 slow). The clean solution, in this case, would be to have the one apache that
 actually accepts, does a bit of work on the URL, and then delegates to
 children (probably by passing the fd), but then you still have to do rather
 too much work on the URL before you can do anything about it.

Is this how Apache 2 works, then?

 It isn't as unclean as you might think, though.
 
 Hope this helps

No, but it explains things a bit better. ^_^
Thanks!

I suppose another way to do it is to go the way of the application server,
where a light apache daemon then talks to a separate, dedicated, server for the
application. I do use FastCGI for some applications, which runs an app as a
separate process (and can support multiple processes, even on remote machines),
but I like mod_perl's ability to layer multiple content handlers.

I suppose there isn't a mod_perl implementation of FastCGI, is there? (To allow
mixing FastCGI application processing with other mod_perl content handlers) (Or
layer multiple FastCGIs?).

I haven't seen much support for FastCGI as I'd expect. Is there something
similar that's better that everyone's using and not telling me about? ^_^

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator -   [EMAIL PROTECTED] : 
| IT Services   -   Murdoch University | 
 --- Member of the #SAS# & #CFC# 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: Advanced daemon allocation

2001-06-18 Thread Trevor Phillips

Gunther Birznieks wrote:
 
 Yeah, just use the mod_proxy model and then proxy to different mod_perl
 backend servers based on the URL itself.

Isn't this pretty much what I said is *a* solution?

 I suppose I could do this now by having a front-end proxy, and mini-Apache
 configs for each group I want, but that seems to be going too far (at this
 stage), especially if the functionality already exists to do this within the
 one server.

To me, this isn't very ideal. Even sharing most of an apache configuration
file, what is the overhead of running a separate server? And can multiple
Apache servers share writing to the same log files?

It also doesn't help if I have dozens of possible groupings - running dozens of
slightly different Apache's doesn't seem a clean solution. Hence me asking if
it was possible within the one Apache server to prioritise the allocation to
specific daemons, based on some criteria, which would be a more efficient and
dynamic solution, if it's possible.

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator -   [EMAIL PROTECTED] : 
| IT Services   -   Murdoch University | 
 --- Member of the #SAS# & #CFC# 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Advanced daemon allocation

2001-06-17 Thread Trevor Phillips

Is there any way to control which daemon handles a certain request with apache
1.x?

eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
application to 10 specific daemons would improve the efficiency of data cached
in those processes.

If this is impossible in Apache 1.x, will it be possible in 2.x? I can really
see a more advanced model for allocation improving efficiency and performance.
Even if it isn't a hard-limit, but a preferential arrangement where, for
example, hits to a particular URL tend to go to the same daemon(s), this would
improve the efficiency of data cached within the daemon.

I suppose I could do this now by having a front-end proxy, and mini-Apache
configs for each group I want, but that seems to be going too far (at this
stage), especially if the functionality already exists to do this within the
one server.

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator -   [EMAIL PROTECTED] : 
| IT Services   -   Murdoch University | 
 --- Member of the #SAS# & #CFC# 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Apache::Filter upgrade issues...

2001-05-04 Thread Trevor Phillips

Hi! I recently upgraded a test server to a recent Apache::Filter, and hit
problems due to the new dependency on filter_register() being called. I
don't mind upgrading my filters to call this, but I have one, in which I
use Apache::Request (a sub-class of Apache), which I cannot seem to work
around.

The guts of the code goes something like this:

sub handler
{
   my $r = shift;
   my $IsFilter = ($r->dir_config('Filter') =~ /^on/i ? 1 : 0);
   $r = Apache::Request-new($r);
   if ($IsFilter)
   {
  $r = $r->filter_register();
  my ($fh, $status) = $r->filter_input();
  return $status unless $status == OK;  # The Apache::Constants OK
  my @file = <$fh>;
   }
etc...
}

The above code fails in that the extra methods provided by Apache::Request are
no longer there.

The above code worked fine previously (prior to the requirement of
filter_register)...

Any ideas? How can I use both Apache::Filter and Apache::Request together?

--
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator -   [EMAIL PROTECTED] : 
| IT Services   -   Murdoch University | 
 --- Member of the #SAS# & #CFC# 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Redirects issues...

2000-08-14 Thread Trevor Phillips

I'm revisiting a routine I have which in the ContentHandler phase, redirects to
another URI (status 302). While redirecting, I'd like to also set a cookie.

The example in the Eagle Book is fairly straightforward - you set the content
type, set a Location header, then return REDIRECT. Expanding on this, I coded
the following routine:

sub handler
{
    my $r = shift;
    $r->content_type('text/html');
    $r->header_out(URI => $uri);
    $r->header_out(Location => $uri);
    $r->header_out('Set-Cookie' =>
        CGI::Cookie->new(-name => $CookieName, -value => $CookieVal, -path => '/'));
    return REDIRECT;
}

I set the URI header as well, due to habit. Didn't there used to be a browser
which required URI instead of Location? Is it safe to only use Location??

However, in my output from the above, I don't see the URI heading, and I don't
see the cookie heading. So I came up with the following alternative:

sub handler
{
    my $r = shift;
    $r->status(302);
    $r->content_type('text/html');
    $r->header_out(URI => $uri);
    $r->header_out(Location => $uri);
    $r->header_out('Set-Cookie' =>
        CGI::Cookie->new(-name => $CookieName, -value => $CookieVal, -path => '/'));
    $r->send_http_header;
    $r->print("Redirecting to <A HREF=\"$uri\">$uri</A>...\n");
    return OK;
}

This works as I want; it gives all the headers, including the all-important
cookie.

My query is, why didn't the first one work, and is the way I ended up doing it
the best way, or is there a neater solution?
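
A plausible explanation (worth checking against your Apache/mod_perl
versions): on a non-2xx return such as REDIRECT, Apache sends the
err_headers_out table rather than headers_out, so extra headers set with
header_out are silently dropped. If so, the first version can be rescued by
putting the cookie into the error headers; a sketch:

```perl
sub handler
{
    my $r = shift;
    $r->content_type('text/html');
    $r->header_out(Location => $uri);
    # Headers in err_headers_out are sent even on error/redirect responses:
    $r->err_header_out('Set-Cookie' =>
        CGI::Cookie->new(-name => $CookieName, -value => $CookieVal, -path => '/'));
    return REDIRECT;
}
```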




Multiple PerlAccessHandlers problem.

2000-08-14 Thread Trevor Phillips

I'm having problems with a mix of PerlAccessHandlers. I have two handlers, and
it is required that one be defined in a Location block, and the other
currently in .htaccess files as required.

The problem is that the one in the .htaccess is being completely ignored when I
have the one in the Location block. i.e.:

In httpd.conf:

<Location />
PerlAccessHandler XXX
</Location>

In a .htaccess:

PerlAccessHandler YYY

So, XXX takes effect, but YYY is ignored.

If I take out the Location one and have both in the .htaccess:

PerlAccessHandler XXX YYY

Or even on separate lines:

PerlAccessHandler XXX
PerlAccessHandler YYY

Then it all works, but when XXX is in a Location, it overrides all .htaccess
references.

Is this supposed to be the correct behaviour? How do I get the behaviour I
want? (One in a Location, and one in .htaccess).
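
If the directive lists really do replace rather than merge across the
<Location> and .htaccess scopes, one workaround (an untested sketch;
My::AddAccess and YYY::handler are placeholder names) is to add the second
handler from code at request time instead of relying on the configuration
merge:

```perl
package My::AddAccess;
use Apache::Constants qw(DECLINED);

# Configured in the .htaccess area as a PerlHeaderParserHandler (a phase
# that runs after per-directory config is read but before access control),
# this pushes YYY so it runs in addition to the XXX set in <Location />.
sub handler
{
    my $r = shift;
    $r->push_handlers(PerlAccessHandler => \&YYY::handler);
    return DECLINED;
}
1;
```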

OS is Linux, mod_perl is still 1.21.x on these servers.

Thanks in advance...




Re: Problem with proxys Auth...

2000-08-10 Thread Trevor Phillips

Eric Cholet wrote:
>
> > This is a strange one for which I hope there's a simple answer & solution.
> >
> > I've put a Front-side Proxy on a webserver (as it was struggling under the load
> > from lots of hits over slow links - more RAM than CPU issue), and it's helped
> > performance wonderfully!
> >
> > However, my IP-based restrictions now seem to no longer work!
>
> Which version of mod_perl are you using? I fixed this in 1.22_01.

Aaah! I was running 1.21.x. Upgrading to 1.22 fixed the problem! Thanks...




Problem with proxys Auth...

2000-07-30 Thread Trevor Phillips


This is a strange one for which I hope there's a simple answer & solution.

I've put a Front-side Proxy on a webserver (as it was struggling under the load
from lots of hits over slow links - more RAM than CPU issue), and it's helped
performance wonderfully!

However, my IP-based restrictions now seem to no longer work! 

Before you ask, yes, I'm using mod_proxy_add_forward, with a
"PerlPostReadRequestHandler My::ProxyRemoteAddr" routine at the other end to
rewrite the IP back using $r->connection->remote_ip.

So, yes, as far as my CGIs, my modules, and the logging is concerned, people
are from their REAL IPs (rather than that of the FSP), but IP restrictions
(using the standard mod_access) are not taking effect at all!

Any ideas on why this is, and how to get around it? I've done some testing &
research, and AFAICS, it should work, as Post Read Requests happen before the
Access phase...
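
For reference, the usual shape of such a handler (adapted from the mod_perl
guide's example; treat as a sketch rather than the exact code in use here):

```perl
package My::ProxyRemoteAddr;
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    # mod_proxy_add_forward on the frontend sets X-Forwarded-For;
    # copy the real client address back into the connection record.
    if (my $header = $r->header_in('X-Forwarded-For')) {
        # If the header holds a chain of addresses, the last one is the
        # client nearest our proxy.
        if (my ($ip) = $header =~ /([^,\s]+)$/) {
            $r->connection->remote_ip($ip);
        }
    }
    return OK;
}
1;
```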




Apache Proxy Virtual Servers...

1999-12-09 Thread Trevor Phillips

I'm trying to set up a lightweight Apache Proxy as recommended for mod_perl
situations. I have the proxy set up on a different box, and for the proxy
config I have:

<VirtualHost *:80>
ProxyPass        / http://the.other.machine:80/
ProxyPassReverse / http://the.other.machine:80/
</VirtualHost>

This works fine, except "the.other.machine" gets the Host header as
"the.other.machine" and NOT whatever is passed to the proxy by the client. As a
result, virtual servers with the same IP but different name are NOT working!

Help! Any ideas on getting around this?
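
For readers on later Apache versions: Apache 2.x added ProxyPreserveHost,
which forwards the client's Host header to the backend unchanged and solves
exactly this problem (the directive did not exist in 1.3-era mod_proxy):

```
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass        / http://the.other.machine:80/
    ProxyPassReverse / http://the.other.machine:80/
</VirtualHost>
```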

-- 
. Trevor Phillips -   http://jurai.murdoch.edu.au/ . 
: CWIS Technical Officer -   [EMAIL PROTECTED] : 
| IT Services   -   Murdoch University | 
 --- Member of the #SAS#  #CFC# 
| On nights such as this, evil deeds are done. And good deeds, of /
| course. But mostly evil, on the whole. /
 \  -- (Terry Pratchett, Wyrd Sisters)  /



Re: Apache Proxy Virtual Servers...

1999-12-09 Thread Trevor Phillips

Trevor Phillips wrote:
>
> I'm trying to set up a lightweight Apache Proxy as recommended for mod_perl
> situations. I have the proxy set up on a different box, and for the proxy
> config I have:
>
> <VirtualHost *:80>
> ProxyPass        / http://the.other.machine:80/
> ProxyPassReverse / http://the.other.machine:80/
> </VirtualHost>
>
> This works fine, except "the.other.machine" gets the Host header as
> "the.other.machine" and NOT whatever is passed to the proxy by the client. As a
> result, virtual servers with the same IP but different name are NOT working!
>
> Help! Any ideas on getting around this?

Ok, I've managed to pass the client's Virtual Host name on to the REAL server,
by hacking mod_proxy_add_forward.c and adding in:

   ap_table_set(r->headers_in, "X-Virtual-Host", ap_get_server_name(r));

I can't seem to set it directly to "Host", though; I suspect it is overridden
later with "the.other.machine" as host, which is why I'm passing it as another
name.

So, can anyone tell me how I can adjust it to override the other Host header??

Alternatively, how can I get the end server to reinstate this as the real Host
header? It has mod_perl (of course), so I can do things there. I've tried
incorporating the fix back in the PostReadRequestHandler phase, but this phase
seems to happen AFTER the Virtual Servers are sorted out, so isn't much help.
Would this work better in some other phase??

Ideally, it'd be nice to have more control over Virtual servers; say, have a
perl handler determine which Virtual server to use, based on whatever info it
wants to use.
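
One mod_perl-side sketch along those lines (untested; the host names and
document roots below are hypothetical): on a backend with no virtual hosts
configured at all, a PerlTransHandler can map the forwarded X-Virtual-Host
header to a per-site document tree, since the translation phase runs late
enough to see the header:

```perl
package My::VHostMap;
use strict;
use Apache::Constants qw(OK DECLINED);

my %docroot = (
    'www.site-a.example' => '/web/site-a',   # hypothetical mappings
    'www.site-b.example' => '/web/site-b',
);

sub handler
{
    my $r = shift;
    my $host = lc($r->header_in('X-Virtual-Host') || '');
    if (my $dir = $docroot{$host}) {
        # Translate the URI ourselves instead of letting the core do it.
        $r->filename($dir . $r->uri);
        return OK;
    }
    return DECLINED;
}
1;
```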

Any help or ideas on this would be most appreciated!! I need SOME solution ASAP
(impending traffic flood this weekend), and I'd like a "nice" solution in the
long run...




Altering handler not working...

1999-11-24 Thread Trevor Phillips

I'm trying to write an access routine which requires altering the handler if
certain conditions are (not) met. There are a few interesting examples of this
in the "Apache Modules in Perl  C" book, chapter 7, which do something similar
within the TransHandler and some other phases, but I'm trying to do it in the
AccessHandler phase.

Here's a simple test module:

package Access;

use Apache::Constants qw(:common);

sub handler
{
   my $r = shift;
   if ( -- some condition test -- )
   {
      $r->handler("perl-script");
      $r->set_handlers(PerlHandler => [\&SomeRoutine]);
      return OK;
   }
   return DECLINED;
}

sub SomeRoutine
{
   my $r = shift;

   $r->content_type('text/html');
   $r->send_http_header;

   print "Some content...";
   return OK;
}

So, basically, if a condition is met, then I return Access as OK, but I also
override whatever handler is there with a custom one.

The problem is, I cannot get this to work!! If a URI's handler is already
perl-script, then SomeRoutine is called, but it is NOT overriding other
handlers.

If I use this module as a TransHandler, then it DOES work correctly, but I
really need this to come in at the Access phase (as that's what it relates to).

Any ideas?
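
One possible culprit (a guess worth testing): the type-checking phase
(mod_mime) runs after access control and sets r->handler from the file's
type, which would clobber a handler assigned during the access phase. If so,
deferring the assignment to the fixup phase, which runs after type checking,
should make it stick; a sketch:

```perl
sub handler
{
   my $r = shift;
   if ( -- some condition test -- )
   {
      # Defer the handler change until after mod_mime has had its say:
      $r->push_handlers(PerlFixupHandler => sub {
         my $r = shift;
         $r->handler("perl-script");
         $r->set_handlers(PerlHandler => [\&SomeRoutine]);
         return OK;
      });
      return OK;
   }
   return DECLINED;
}
```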
