Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Jeremy Howard

Perrin Harkins wrote:
 What I was saying is that it doesn't make sense for one to need fewer
 interpreters than the other to handle the same concurrency.  If you have
 10 requests at the same time, you need 10 interpreters.  There's no way
 speedycgi can do it with fewer, unless it actually makes some of them
 wait.  That could be happening, due to the fork-on-demand model, although
 your warmup round (priming the pump) should take care of that.

I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
if, for instance, a large POST request is being uploaded, it ties up a whole
Perl interpreter for the duration of the transfer. This is at least one
place where a Perl interpreter should not be needed.

Of course, this could be overcome by using an HTTP accelerator that takes
in the whole request before passing it to a local httpd, but I don't know of
any proxies that work this way (AFAIK they all pass the packets on as they
arrive).





Re: Proposals for ApacheCon 2001 are in; help us choose! (fwd)

2000-12-22 Thread Matt Sergeant

On Fri, 22 Dec 2000, Stas Bekman wrote:


 Well, this is new. You choose what sessions at ApacheCon you want.

 I don't think it's a fair approach by the ApacheCon committee, as by
 applying it they are going to make the big players even bigger and kill
 the emerging technologies, which will definitely not get enough votes and
 thus will not make it in :( I'm not on the committee, so I cannot really
 influence this, but I'll try anyway.

I agree, and I've written to Ken saying the same. The benefit of papers
refereed by a technical committee is that you tend to get a good balance of
technologies at the conference. This way, I believe, you will only get the
best-known technologies talked about. I hope Camelot change their mind on
this one.

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: Why double requests?

2000-12-22 Thread Doug MacEachern

On Wed, 11 Oct 2000, Bill Moseley wrote:

...
 Here's the request:
 ---
 GET /test/abc/123 http/1.0
 
 HTTP/1.1 200 OK
 Date: Wed, 11 Oct 2000 17:17:16 GMT
 Server: Apache/1.3.12 (Unix) mod_perl/1.24
 Connection: close
 Content-Type: text/plain
 
 hello
 
 Here's the error_log
 
 [Wed Oct 11 10:17:16 2000] [error] initial:/test/abc/123
 [Wed Oct 11 10:17:16 2000] [error] not initial:/abc/123
 [Wed Oct 11 10:17:16 2000] [error] [client 192.168.0.98] client denied by
 server configuration: /usr/local/apache/htdocs/abc
 
 Why the second request with the extra path?

because in this case you have path_info data: when %ENV is populated,
ap_add_cgi_vars() calls ap_sub_req_lookup_uri() to resolve
PATH_TRANSLATED.

notice you won't see it with:
PerlSetupEnv Off
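With PerlSetupEnv Off, %ENV is no longer populated with the CGI variables, but a handler can pull the same information straight from the request object. A minimal sketch of a hypothetical mod_perl 1.x handler (the package name and output are illustrative, not from the thread):

```perl
# Hypothetical handler that reads request data via the Apache API
# instead of %ENV, so it works fine with "PerlSetupEnv Off".
package My::PathInfoHandler;
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    my $path_info = $r->path_info;   # what PATH_INFO would have held
    my $uri       = $r->uri;
    $r->send_http_header('text/plain');
    $r->print("uri=$uri path_info=$path_info\n");
    return OK;
}
1;
```

Since no %ENV setup (and no subrequest) happens, the double lookup in the error log above should also disappear.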





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Matt Sergeant

On Thu, 21 Dec 2000, Sam Horrocks wrote:

   Folks, your discussion is not short of wrong statements that can be easily
   proved, but I don't find it useful.

  I don't follow.  Are you saying that my conclusions are wrong, but
  you don't want to bother explaining why?

  Would you agree with the following statement?

 Under apache-1, speedycgi scales better than mod_perl with
 scripts that contain un-shared memory

NO!

When you can write a trans handler or an auth handler with speedy, then I
might agree with you. Until then I must insist you add "mod_perl
Apache::Registry scripts" or something to that effect.

-- 
Matt/





ANNOUNCE: Embperl 2.0b1

2000-12-22 Thread Gerald Richter

After a long time of talking, designing, developing, coding and testing, I am
now happy to announce the first beta of Embperl 2.0. It has a totally
rewritten core and clears the way for a lot of new possibilities...

At the moment it mainly brings a speed improvement and introduces caching of
the (component) output. In the next releases I will successively make the
possibilities of this new architecture available (see the README below for
some ideas of what will happen).

Since it's not yet ready for production use, it's only available from my ftp
server at

ftp://ftp.dev.ecos.de/pub/perl/embperl/HTML-Embperl-2.0b1.tar.gz

Enjoy

Gerald

README.v2:
==

Embperl 2 has a totally rewritten core. It contains nearly 7500 lines of
new (mostly C) code. Although I have done a lot of testing, don't expect
it to work without errors!

Please report any weird behaviour to the Embperl mailing list, but
be sure to read this whole README first to understand what can't work so far.

The Embperl core now works in a totally different way. It is divided into
smaller steps:

1 reading the source
2 parsing
3 compiling
4 executing
5 outputting

Future versions will allow replacing every single step of this pipeline
with custom modules. It will also be possible to cascade multiple
processors. This allows, for example, having Embperl and SSI in one file
while parsing the file only once, feeding it first to the SSI processor and
afterwards to the Embperl processor. The parser will also be exchangeable
in a future version, to allow, for example, using an XML parser and an
XSLT stylesheet processor.

This new execution scheme is also faster, because HTML tags and
metacommands are parsed only once (Perl code was also, and still is, cached
in 1.x). My first benchmarks show 50%-100% faster execution under mod_perl
for pages longer than 20K without any external database access (for short
pages (< 5K output) you won't see such a great difference).

Another new feature is that the syntax of the Embperl parser is defined
within the module HTML::Embperl::Syntax and can be modified as necessary.
See the file Embperl/Syntax.pm for how it looks, and

perldoc HTML::Embperl::Syntax

for a short description. A future version will add an API to this syntax
module, so custom syntaxes can easily be added without modifying Syntax.pm
itself.

Also new is the possibility to cache (parts of) the output.


The following differences from Embperl 1.x apply:
--

- The following options can currently only be set from httpd.conf:
 optRawInput, optKeepSpaces

- The following options are currently not supported:
 optDisableHtmlScan, optDisableTableScan,
 optDisableInputScan, optDisableMetaScan

- Nesting must be proper. I.e. you cannot put a <table> tag (for a
  dynamic table) inside one if and the </table> inside another if.
  (That still works for static tables.)

- optUndefToEmptyValue is always set and cannot be disabled.

- [$ foreach $x (@x) $] now requires the parentheses around the
  array (as in Perl)

- [+ +] blocks must now contain a single valid Perl expression. Embperl 1.x
  allowed you to put multiple statements into such a block. For performance
  reasons this is not possible anymore. Also, the expression must _not_ be
  terminated with a semicolon. To make old code work, just wrap it in a do
  block, e.g. [+ do { my $a = $b + 5 ; $a } +]


The following things are not fully tested/working yet:
--

- [- exit -]

- [- print OUT "foo" -]

- safe namespaces


Embperl 1.x compatibility flag
--

If you don't have a separate computer for a test setup, you can
include

PerlSetEnv EMBPERL_EP1COMPAT 1

at the top level of your httpd.conf; Embperl will then behave just the same
as Embperl 1.3b7. In the directories where you run your tests, you
include a

PerlSetEnv EMBPERL_EP1COMPAT 0

to enable the new engine.

But _DON'T_ use this on a production machine. While this compatibility mode
is tested and shows no problems for me, it's not as thoroughly tested as
1.3b7 itself!


Additional Config directives
---

execute parameter / httpd.conf environment variable / name inside page (must
be set inside [! !])


cache_key / EMBPERL_CACHE_KEY / $CACHE_KEY

literal string that is appended to the cache key


cache_key_options / EMBPERL_CACHE_KEY_OPTIONS / $CACHE_KEY_OPTIONS

ckoptCarryOver = 1, use result from CacheKeyFunc of previous step
if any
ckoptPathInfo  = 2, include the PathInfo into CacheKey
ckoptQueryInfo = 4, include the QueryInfo into CacheKey
ckoptDontCachePost = 8, don't cache POST requests  (not yet implemented)

Default: all options set
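These option values are bit flags, so a configured value is the bitwise OR of the wanted options; "all options set" corresponds to 15. A small sketch using the constants named above (the constant definitions here are my own restatement of the list, not Embperl code):

```perl
use strict;

# The cache-key options are bit flags; combine them with bitwise OR.
use constant {
    ckoptCarryOver     => 1,  # reuse CacheKeyFunc result of previous step
    ckoptPathInfo      => 2,  # include the PathInfo in the cache key
    ckoptQueryInfo     => 4,  # include the QueryInfo in the cache key
    ckoptDontCachePost => 8,  # don't cache POST requests (not yet implemented)
};

my $default = ckoptCarryOver | ckoptPathInfo
            | ckoptQueryInfo | ckoptDontCachePost;
print "$default\n";   # 15 -- the documented default (all options set)
```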


expired_func / EMBPERL_EXPIRES_FUNC / EXPIRES

function that is called when building a cache key. The result is
appended to the cache key.


cache_key_func / EMBPERL_CACHE_KEY_FUNC / CACHE_KEY

function that is called every time before data 

Re: ANNOUNCE: Embperl 2.0b1

2000-12-22 Thread Jens-Uwe Mager

On Fri, Dec 22, 2000 at 07:51:36AM +0100, Gerald Richter wrote:

 Since it's not yet ready for production use, it's only available from my ftp
 server at

Hmm, with unstable software I prefer to use cvs upd more often, isn't
there a cvs repository anywhere? A stable version can also be fetched
from an FTP archive, but for development versions this is a bit painful.

-- 
Jens-Uwe Mager

HELIOS Software GmbH
Steinriede 3
30827 Garbsen
Germany

Phone:  +49 5131 709320
FAX:+49 5131 709325
Internet:   [EMAIL PROTECTED]



Re: ANNOUNCE: Embperl 2.0b1

2000-12-22 Thread Gerald Richter


  Since it's not yet ready for production use, it's only available from my
ftp
  server at

 Hmm, with unstable software I prefer to use cvs upd more often, isn't
 there a cvs repository anywhere? A stable version can also be fetched
 from an FTP archive, but for development versions this is a bit painful.


It is a branch in the normal Embperl cvs. You can check it out with

cvs -d :pserver:[EMAIL PROTECTED]:/home/cvspublic co -r Embperl2c
embperl

Gerald







[OT] Komodo

2000-12-22 Thread Robin Berjon

This is OT but there was talk here about Komodo and Perl IDEs. Well,
there's an alpha now available for Linux (and a beta for Windows) for those
that are interested:

http://www.ActiveState.com/Products/Komodo/

-- robin b.
By the time they had diminished from 50 to 8, the other dwarves began to
suspect Hungry.




RE: Transferring Headers... w/ multiple 'Set-Cookie'

2000-12-22 Thread Chris Strom

You might try:

$headers->scan(sub {
    $r->headers_out->add(@_);
    print STDERR join("=", @_), "\n";
});

The STDERR will output the seen headers to your logs.  

 the problem is that there are many Set-Cookie instructions 
 in $headers but mod_perl seems to use a tied hash to link 
 to the request table and so each set cookie replaces the 
 last as the hash key is seen to equal?  How do I get 
 around this... I tried:
 
 $r->headers_out->add('Set-Cookie' => $cookieString);
 
This should work.  The add() method is part of the Apache::Table class,
which allows multiple key/value pairs with the same key. See the discussion
of Apache::Table in the eagle book for more details, and an exact example of
what you're trying to do here.  

If HTTP::Headers::scan() doesn't work, you might look into the HTTP::Cookies
module:

  $cookie_jar = HTTP::Cookies->new;
  $cookie_jar->extract_cookies($response);

  $cookie_jar->scan( sub {
    $r->headers_out->add(@_[1,2]);
  });

Hope that helps.

Chris



RE: help with custom Error documents/redirection

2000-12-22 Thread Geoffrey Young



 -Original Message-
 From: Doug MacEachern [mailto:[EMAIL PROTECTED]]
 Sent: Friday, December 22, 2000 1:14 AM
 To: Geoffrey Young
 Cc: '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
 Subject: RE: help with custom Error documents/redirection
 
 
 On Wed, 13 Dec 2000, Geoffrey Young wrote:
  
  BTW, it's always good (at least I've found) to call
  my $prev_uri = $r->prev ? $r->prev->uri : $r->uri;
 
 or with one less method call :)
 my $prev_uri = ($r->prev || $r)->uri; 

now that's cool - I just love perl...

 



[JOB] [CV] Job wanted

2000-12-22 Thread Matthew Byng-Maddick

I'd like to leave my current job.

Is anyone here offering mod_perl or apache programming jobs in and around
London?

I'd be interested in any job involving:
+ apache (mod_perl and C module programming)
+ Perl
+ XS
+ C

My current CV is available at http://colondot.net/mbm/cv.shtml, and
details what experience I have. I would be on a 1 month notice period in
terms of availability. I would be happy to answer any questions you may
have about things I've done in email.

Please reply by private email (the email address here) rather than to the
list.

Thank you

Matthew Byng-Maddick

-- 
Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
What  passes  for  woman's  intuition  is often  nothing  more  than man's
transparency. -- George Nathan




Measure of performance !!!

2000-12-22 Thread Edmar Edilton da Silva


 Hi all,
 Some days ago I sent a question about the performance of Oracle
and MS SQL Server databases, but I didn't get any answer that helped me. I
have installed on my machine:
Linux Red Hat 6.2
Apache 1.3.14 (installed on machine 1)
mod_perl 1.24-1
DBD::Oracle
DBD::Sybase (to access the MS SQL Server database)
Oracle (installed on machine 2)
MS SQL Server (installed on machine 3)
I know that using the Apache::DBI module reduces the connection time,
but I need to run some performance tests without Apache::DBI. The Oracle
and SQL Server servers are running on different machines from the web server.
I understand that an Oracle connection consumes more resources than a SQL
Server one, but there is something I don't understand: when I ran the tests
for SQL Server, machine 1 worked properly, but when I ran the tests for
Oracle, machine 1 ran much more slowly. Even with the database servers
installed on different machines from the web server, does an Oracle
connection consume more resources on machine 1 than a SQL Server connection?
Why? Why does machine 1 run much more slowly for Oracle? Please, if someone
can help me I will be very thankful. Happy holidays to everyone.


 Edmar Edilton da Silva
 Bachelor of Computer Science - UFV
 Master's student in Computer Science - UNICAMP




Apache::DBI and altered packages

2000-12-22 Thread Geoffrey Young

hi all...

I was wondering if anyone has found a good way around persistent connections
and package recompiles.  With Apache::DBI, on occasion when someone
recompiles a package and doesn't tell me, I see

ORA-04061: existing state of package body "FOO.BAR" has been invalidated
ORA-04065: not executed, altered or dropped package body "FOO.BAR"
ORA-06508: PL/SQL: could not find program unit being called

my Oracle gurus here tell me that whenever a package changes, any open
connections will get this error.  Since the connection itself is OK (just not
the stuff I need to use), the only solution currently available seems to be
$r->child_terminate() so that at least that child doesn't barf every time.
However, this leaves the current request in the lurch...

I was thinking that maybe something like Apache::DBI::reconnect() was needed
- call DBI->disconnect and then DBI->connect again.

that is unless someone has dealt with this in another way - someone
mentioned that perhaps the proper ALTER OBJECT permissions would force a
recompile - I dunno...

--Geoff



Re: slight mod_perl problem

2000-12-22 Thread Vivek Khera

 "DM" == Doug MacEachern [EMAIL PROTECTED] writes:

DM On Thu, 21 Dec 2000, Vivek Khera wrote:
 I just tested it also, and the startup script is run exactly once.

DM could be he has PerlFreshRestart On, in which case it would be
DM called twice.

I have it on as well, and it was only called once.  I'm running
mod_perl as a DSO, though.  Perhaps that's the difference?



Re: Apache::DBI and altered packages

2000-12-22 Thread Perrin Harkins

 my Oracle gurus here tell me that whenever a package changes, any open
 connections will get this error.  Since the connection itself is OK (just not
 the stuff I need to use), the only solution currently available seems to be
 $r->child_terminate() so that at least that child doesn't barf every time.
 However, this leaves the current request in the lurch...

 I was thinking that maybe something like Apache::DBI::reconnect() was needed
 - call DBI->disconnect and then DBI->connect again.

If you change the ping() method to check whether or not your package still
works, Apache::DBI will automatically get a new connection when it fails.
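One way to act on this suggestion is to wrap the driver's ping() so it exercises the PL/SQL package you depend on. This is a sketch only: wrapping DBD::Oracle::db::ping and the FOO.BAR probe call are my assumptions, not an Apache::DBI API:

```perl
# Sketch: make ping() fail when the package state is invalid, so
# Apache::DBI discards the cached handle and transparently reconnects.
use strict;
use DBD::Oracle;

{
    no warnings 'redefine';
    my $real_ping = \&DBD::Oracle::db::ping;
    *DBD::Oracle::db::ping = sub {
        my $dbh = shift;
        return 0 unless $real_ping->($dbh);    # connection really is dead
        local $dbh->{RaiseError} = 0;
        local $dbh->{PrintError} = 0;
        # Hypothetical probe: touch the package; after a recompile this
        # raises ORA-04061/04065/06508 and do() returns undef.
        return $dbh->do(q{BEGIN FOO.BAR.noop; END;}) ? 1 : 0;
    };
}
```

The cost is one extra round trip per ping, and with many packages in play you would need a probe that touches each of them (or a representative one), which is the "lots of packages to check" caveat raised later in the thread.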

- Perrin




Re: [repost]garbled redirects

2000-12-22 Thread Doug MacEachern

On Tue, 7 Nov 2000, Paul wrote:

 Hi all.
 
 A while back I posted a similar problem.  My error logs have frequent
 entries showing erroneous redirect strings, like this:
 
 [Tue Nov  7 08:57:45 2000] [error] [client 90.14.50.41] Invalid error
 redirection directive: üØ@
 
 Sometimes *most* of the redirect is fine; I found one where nothing was
 garbled but the protocol -- instead of "https" it had several binary
 characters, but from the :// on the address was fine. Here's one:
 
 [Tue Nov  7 09:05:56 2000] [error] [client 96.80.9.46] Invalid error
 redirection »xs://buda.bst.bls.com/dres/dres.cgi
 
 What would cause that?

the problem is with $r->custom_response().  if there are no ErrorDocuments
configured, it would allocate the table from r->pool, but the table needs
to live longer than r->pool, eek!  patch below will fix.  seems apache.org
sshd is down, so i can't commit yet.

--- src/modules/perl/Apache.xs~ Thu Dec 21 22:44:52 2000
+++ src/modules/perl/Apache.xs  Thu Dec 21 22:45:30 2000
@@ -247,7 +247,7 @@
 
 if(conf->response_code_strings == NULL) {
 conf->response_code_strings = (char **)
- pcalloc(r->pool,
+ pcalloc(perl_get_startup_pool(),
  sizeof(*conf->response_code_strings) * 
  RESPONSE_CODES);
 }





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Keith G. Murphy

Perrin Harkins wrote:
 
 
 Keith Murphy pointed out that I was seeing the result of persistent HTTP
 connections from my browser.  Duh.
 
I must mention that, having seen your postings here over a long period,
anytime I can make you say "duh", my week is made.  Maybe the whole
month.

That issue can be confusing.  It was especially so for me when IE did
it, and Netscape did not...

Let's make everyone switch to IE, and mod_perl looks good again!  :-b



Re: Measure of performance !!!

2000-12-22 Thread Buddy Lee Haystack

Didn't someone already mention that it takes longer to connect to an Oracle database?

If you don't use the Apache::DBI module, the Oracle database will appear slower because it 
takes longer to connect to it for each request. By using the Apache::DBI module you 
effectively negate the time it takes to re-connect to the database for subsequent 
requests. You really need to use Apache::DBI to test for differences in database 
performance; otherwise the excessive time to connect to Oracle will always result 
in Oracle handling fewer requests than other databases that handle connections more 
efficiently.
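For reference, enabling Apache::DBI is just a matter of loading it before DBI in the server startup file; connection caching then happens transparently. A minimal startup.pl sketch (the DSN and credentials are placeholders):

```perl
# startup.pl -- load Apache::DBI *before* any module that uses DBI,
# so DBI->connect is overridden to cache one connection per child process.
use strict;
use Apache::DBI ();
use DBI ();

# Optional: open the connection when each child starts, rather than
# paying the connect cost on that child's first request.
Apache::DBI->connect_on_init(
    'dbi:Oracle:orcl',              # placeholder DSN
    'scott', 'tiger',               # placeholder credentials
    { RaiseError => 1, AutoCommit => 0 },
);
1;
```

Application code keeps calling DBI->connect() with the same arguments; Apache::DBI intercepts the call and hands back the cached handle when its ping() succeeds.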




-- 
BLH
www.RentZone.org



[OT] Anyone good with IPC?

2000-12-22 Thread Bill Moseley

Sorry for the way-OT post, but this list seems to have the smartest, most
experienced, friendliest Perl programmers around -- and this question on
other Perl lists failed to get any bites.

Would someone be willing to offer a bit of help off list?

I'm trying to get two programs talking in an HTTP-like protocol through a
unix pipe.  I'm first trying to get it to work between two perl programs
(below), but in the end, the "client" will be a C program (and that's a
different nut to crack).

The goal is to add a "filter" feature to the C program, where you register
some external program (called a server, in this example, since it will be
answering requests) and the C program starts the server, and then feeds
requests over and over leaving the server in memory.

A simple filter might be something that converts to lower case, or converts
text dates to a timestamp.  The C program (client) sends headers and some
content, and the filter (server) returns headers and some content.  But
it's a "Keep Alive" connection, so another request can be sent without
closing the pipe.

This approach seems simple -- at least for someone writing the filter
program.  Just read and print (unbuffered).  It's probably not very
portable -- I'd expect it to fail on Windows.  (Are there better methods?)

Anyway, this is the sample code I was trying, but I was not getting anywhere.
It seems like IO::Select::can_read() returns true and then I can read back the
first header, but then can_read() never returns true again.

I really need to be able to read and parse the headers, then read
Content-Length: bytes since the content can be of varying length.

 cat client.pl

#!/usr/local/bin/perl -w
use strict;

use IPC::Open2;
use IO::Select;
use IO::Handle;

my ( $rh, $wh );

my $pid = open2($rh, $wh, './server.pl');
$pid || die "Failed to open";

my $read = IO::Select->new( $rh );

$rh->autoflush;
$wh->autoflush;

for (1..2) {
print "\n$0: Sending Headers:$_\n";

print $wh "Header-number: $_\n",
  "Content-type: perl/test\n",
  "Header: test\n\n";


# Now read the response
while ( 1 ) {

my $fh;

if ( ($fh) = $read->can_read(0) ) {
print "Can read!\n";

my $buffer = <$rh>;
#$fh->read( $buffer, 1024 );

last unless $buffer;

print "$0: Read $buffer";
} else {
print "Can't read sleeping...\n";
sleep 1;
}
}
print "$0: All done!\n";
}
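One likely culprit in the client loop above is mixing select() (via IO::Select) with a buffered read: the first line read can pull several lines into perl's stdio buffer, after which the OS reports nothing readable even though unread data is sitting in the buffer. A sketch of the read loop using sysread(), which bypasses buffering (the same $rh/$read variables are assumed; the header handling is illustrative):

```perl
# Unbuffered variant: sysread() consumes exactly what select() reported,
# so can_read() stays in sync with the data actually pending on the pipe.
my $pending = '';
while (1) {
    if ( $read->can_read(0) ) {
        my $n = sysread( $rh, my $chunk, 4096 );
        last unless $n;                       # EOF (or read error)
        $pending .= $chunk;
        # Peel off complete lines; a blank line ends the header block.
        while ( $pending =~ s/\A(.*?)\r?\n// ) {
            my $line = $1;
            if ( $line eq '' ) {
                print "$0: End of headers\n";
            } else {
                print "$0: Read header '$line'\n";
            }
        }
    } else {
        print "Can't read, sleeping...\n";
        sleep 1;
    }
}
```

The same buffering applies to parsing Content-Length: accumulate into $pending and count bytes there, never with a separate buffered read.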



lii@mardy:~  cat server.pl
#!/usr/local/bin/perl -w
use strict;

$|=1;

warn "In $0 pid=$$\n";

while (1) {
my @headers = ();
while ( <STDIN> ) {
chomp;
if ( $_ ) {
warn "$0: Read '$_'\n";

push @headers, $_;
} else {
for ( @headers ) {
warn "$0: Sending $_\n";
print $_,"\n";
}
print "\n";
last;
}
}
}


Bill Moseley
mailto:[EMAIL PROTECTED]



[RFC] New Apache Module, comments and name suggestions requested

2000-12-22 Thread darren chamberlain

I am looking for some feedback, and possibly an idea for a name for this
module.

In my experience, one of the problems with template modules like
HTML::Template, TemplateToolkit, HTML::Mason, and the others, is that
they are *toolkits* -- there is no fast way to go from Perl module to
output. These modules require extensive setup, and usually are used as
pieces in the larger context of an application. This is great for many
things, but for sites that want to use one of these modules as their
primary document processor have to write something to do the actual
processing of the templates. 

I've been working on an Apache module for a while which utilizes
Template Toolkit, and provides a more or less generic interface to the
power of TT. Configuration is done through configuration directives,
which are the options that get passed to the Template object.

Pages that get processed by the module are handed hashrefs containing
form parameters, the environment, cookies, and pnotes, which can be
accessed directly in the page (e.g.: Hello, [% env.REMOTE_ADDR %].).
I have a variety of plugins and filters planned, as well.

How is this different from using Template Toolkit directly? It's not,
except that using the module is as simple as:

<FilesMatch "\.html$">
  SetHandler  perl-script
  PerlHandler Grover
</FilesMatch>

The module's working name is Grover, but it would probably end up living
somewhere in the Apache:: namespace, if I were to put it on CPAN. I am
open to suggestions for a new name.

There is a web site set up for it at
http://sevenroot.org/software/grover/, from which the source can be
downloaded.

(darren)

-- 
Once the realization is accepted that even between the closest human beings
infinite distances continue to exist, a wonderful living side by side can
grow up, if they succeed in loving the distance between them which makes it
possible for each to see each other whole against the sky.
-- Rainer Rilke



RE: Apache::DBI and altered packages

2000-12-22 Thread Geoffrey Young



 -Original Message-
 From: jared still [mailto:[EMAIL PROTECTED]]
 Sent: Friday, December 22, 2000 11:52 AM
 To: Geoffrey Young
 Cc: '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
 Subject: Re: Apache::DBI and altered packages
 
 
 
 
 Geoff,
 
 I'm going to make a non-technical suggestion:  You need to have  
 change control implemented so that packages in your database
 are not changing during business hours.
 
 If you are 24x7, then you need to schedule an outage, and notify
 your customers of scheduled downtime.
 
 This is more of a management and logistics problem than a Perl
 or Apache or Oracle problem
 

what, you mean not making changes to production on a regular basis?  what do
you think I'm trying to run, a stable shop or something?

you have to actually be here to believe what goes on, though :)




RE: Apache::DBI and altered packages

2000-12-22 Thread Geoffrey Young



 -Original Message-
 From: Perrin Harkins [mailto:[EMAIL PROTECTED]]
 Sent: Friday, December 22, 2000 11:55 AM
 To: Geoffrey Young; [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: Apache::DBI and altered packages
 
 
 
 If you change the ping() method to check whether or not your 
 package still
 works, Apache::DBI will automatically get a new connection 
 when it fails.

yeah, that's an idea - lots of packages to check, though.  but this might be
the best solution...

thanks

--Geoff

 
 - Perrin
 



Re: [OT] Anyone good with IPC?

2000-12-22 Thread Matt Sergeant

On Fri, 22 Dec 2000, Bill Moseley wrote:

 The goal is to add a "filter" feature to the C program, where you register
 some external program (called a server, in this example, since it will be
 answering requests) and the C program starts the server, and then feeds
 requests over and over leaving the server in memory.

[snip]

use POE.

-- 
Matt/





Re: [RFC] New Apache Module, comments and name suggestions requested

2000-12-22 Thread Matt Sergeant

On Fri, 22 Dec 2000, darren chamberlain wrote:

 I am looking for some feedback, and possibly an idea for a name for this
 module.

 In my experience, one of the problems with template modules like
 HTML::Template, TemplateToolkit, HTML::Mason, and the others, is that
 they are *toolkits* -- there is no fast way to go from Perl module to
 output. These modules require extensive setup, and usually are used as
 pieces in the larger context of an application. This is great for many
 things, but for sites that want to use one of these modules as their
 primary document processor have to write something to do the actual
 processing of the templates.

 I've been working on an Apache module for a while which utilizes
 Template Toolkit, and provides a more or less generic interface to the
 power of TT. Configuration is done through configuration directives,
 which are the options that get passed to the Template object.
 I have a variety of plugins and filters planned, as well.

This sounds like something that is screaming out to be added to
Apache::Dispatch...

-- 
Matt/





Re: Apache::DBI and altered packages

2000-12-22 Thread jared still


Geoff,

I'm going to make a non-technical suggestion:  You need to have  
change control implemented so that packages in your database
are not changing during business hours.

If you are 24x7, then you need to schedule an outage, and notify
your customers of scheduled downtime.

This is more of a management and logistics problem than a Perl
or Apache or Oracle problem

On Fri, 22 Dec 2000, Geoffrey Young wrote:

 I was wondering if anyone has found a good way around persistent connections
 and package recompiles.  With Apache::DBI, on occasion when someone
 recompiles a package and doesn't tell me, I see
 
 ORA-04061: existing state of package body "FOO.BAR" has been invalidated
 ORA-04065: not executed, altered or dropped package body "FOO.BAR"
 ORA-06508: PL/SQL: could not find program unit being called
 
 my Oracle gurus here tell me that whenever a package changes, any open
 connections will get this error.  Since the connection itself is OK (just not
 the stuff I need to use), the only solution currently available seems to be
 $r->child_terminate() so that at least that child doesn't barf every time.
 However, this leaves the current request in the lurch...
 
 I was thinking that maybe something like Apache::DBI::reconnect() was needed
 - call DBI->disconnect and then DBI->connect again.
 
 that is unless someone has dealt with this in another way - someone
 mentioned that perhaps the proper ALTER OBJECT permissions would force a
 recompile - I dunno...
 
 --Geoff

Jared Still
Certified Oracle DBA and Part Time Perl Evangelist ;)
[EMAIL PROTECTED]
[EMAIL PROTECTED]




Re: Microperl

2000-12-22 Thread Doug MacEachern

On Wed, 15 Nov 2000, Bill Moseley wrote:

 This is probably more of a Friday topic:
 
 Simon Cozens discusses "Microperl" in the current The Perl Journal.
 
 I don't build mod_rewrite into a mod_perl Apache as I like rewriting with
 mod_perl much better.  But it doesn't make much sense to go that route for
 a light-weight front-end to heavy mod_perl backend servers, of course.
 
 I don't have any experience embedding perl in things like Apache other than
 typing "perl Makefile.PL && make", but Simon's article did make me wonder.
 
 So I'm curious from you that understand this stuff better: Could a
 microperl/miniperl be embedded in Apache and end up with a reasonably
 light-weight perl enabled Apache?  I understand you would not have
 Dynaloader support, but it might be nice for simple rewriting.

it would not make much difference.  the major source of bloat is Perl's
bytecode/data structures, and microperl does not make these any smaller.
still might be worth looking into as an option, but somebody would need to
tweak Makefile.micro to build a libmicroperl.a to link against.  at the
moment it only builds the microperl program.




Localizing Perl sections to VHosts

2000-12-22 Thread Todd Finney

I'm trying to move my VHost-specific libraries into more 
logical directories (for me).  I have a number of them, and 
I'd rather not just use them all in my startup.pl - mostly 
because of concern over name collisions 
(Site1::connect_to_db(), Site2::connect_to_db()).

If I use this in a VirtualHost section in  my httpd.conf:

<Perl>
use lib('/www/mylibs');
</Perl>

libraries under /www/mylibs/ are available to all 
VirtualHosts, at least as far as I can tell.  Is there a 
way to make the directive apply to only a single 
VirtualHost, or another way to do this?


thanks,
Todd






Re: Localizing Perl sections to VHosts

2000-12-22 Thread Stas Bekman

On Fri, 22 Dec 2000, Todd Finney wrote:

 I'm trying to move my VHost-specific libraries into more 
 logical directories (for me).  I have a number of them, and 
 I'd rather not just use them all in my startup.pl - mostly 
 because of concern over name collisions 
 (Site1::connect_to_db(), Site2::connect_to_db()).
 
 If I use this in a VirtualHost section in  my httpd.conf:
 
 <Perl>
   use lib('/www/mylibs');
 </Perl>
 
 libraries under /www/mylibs/ are available to all 
 VirtualHosts, at least as far as I can tell.  Is there a 
 way to make the directive apply to only a single 
 VirtualHost, or another way to do this?

http://perl.apache.org/guide/config.html#Is_There_a_Way_to_Modify_INC_on
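The approach the guide describes boils down to confining the @INC change to the code that needs it, rather than letting a <Perl> section's `use lib` leak server-wide. A minimal sketch of that idea (paths, ServerName, and the Site1::Init module are hypothetical, not from Todd's setup):

```apache
# Inside the vhost that owns the libraries. The bare block restores
# @INC when it exits, so other vhosts never search /www/site1/lib.
<VirtualHost *>
    ServerName site1.example.com
    <Perl>
        {
            local @INC = ('/www/site1/lib', @INC);
            require Site1::Init;   # hypothetical per-vhost startup module
        }
    </Perl>
</VirtualHost>
```

One caveat: all vhosts share a single Perl interpreter under mod_perl 1, so once a module is required this way it is loaded (and visible) server-wide; the localization only limits where *new* files are searched, it does not sandbox the namespaces.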


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Dynamic content that is static

2000-12-22 Thread Philip Mak

Hi everyone,

I have been going over the modperl tuning guide and the suggestions that
people on this list sent me earlier. I've reduced MaxClients to 33 (each
httpd process takes up 3-4% of my memory, so that's how much I can fit
without swapping) so if the web server overloads again, at least it won't
take the machine down with it.

Running a non-modperl apache that proxies to a modperl apache doesn't seem
like it would help much because the vast majority of pages served require
modperl.

I realized something, though: Although the pages on my site are
dynamically generated, they are really static. Their content doesn't
change unless I change the files on the website. (For example,
http://www.animewallpapers.com/wallpapers/ccs.htm depends on header.asp,
footer.asp, series.dat and index.inc. If none of those files change, the
content of ccs.htm remains the same.)

So, it would probably be more efficient if I had a /src directory and a
/html directory. The /src directory could contain my modperl files and a
Makefile that knows the dependencies; when I type "make", it will evaluate
the modperl files and parse them into plain HTML files in the /html
directory.

Does anyone have any suggestions on how to implement this? Is there an
existing tool for doing this? How can I evaluate modperl/Apache::ASP files
from the command line?

Thanks,

-Philip Mak ([EMAIL PROTECTED])






[Solved]: Apache::Compress + mod_proxy problem

2000-12-22 Thread Edward Moon

Here's a patch for Apache::Compress that passes off proxied requests to
mod_proxy.

Without this patch Apache::Compress will return an internal server error
since it can't find the proxied URI on the local filesystem.

Much of the patch was lifted from chapter 7 of the Eagle book.

Right now the code requires you to write proxy settings twice (i.e. in the
FilesMatch block for Apache::Compress and in the mod_proxy section):

<FilesMatch "\.(htm|html)$">
SetHandler  perl-script
PerlSetVar  PerlPassThru '/proxy/ =>
http://private.company.com/,
/other/ => http://www.someother.co.uk'
PerlHandler Apache::Compress
</FilesMatch>

ProxyRequests On
ProxyPass   /proxy  http://private.company.com/
ProxyPassReverse/proxy  http://private.company.com/
ProxyPass   /other  http://www.someother.co.uk/
ProxyPassReverse/other  http://www.someother.co.uk/

34a35,49
> 
>   if ($r->proxyreq) {
>   use Apache::Proxy;
>   my $uri = $r->uri();
> 
>   my %mappings = split /\s*(?:,|=>)\s*/, $r->dir_config('PerlPassThru');
> 
>   for my $src (keys %mappings) {
>     next unless $uri =~ s/^$src/$mappings{$src}/;
>   }
> 
>   my $status = Apache::Proxy->pass($r, $uri);
>   return $status;
>   }
> 




Re: Dynamic content that is static

2000-12-22 Thread G.W. Haywood

Hi there,

On Fri, 22 Dec 2000, Philip Mak wrote:

 I realized something, though: Although the pages on my site are
 dynamically generated, they are really static.

You're not alone.

 Does anyone have any suggestions on how to implement this? Is there an
 existing tool for doing this? How can I evaluate modperl/Apache::ASP files
 from the command line?

You could use 'lynx -source URI > filename'.

73,
Ged.
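Ged's one-liner extends naturally into the make-style dependency check Philip asked for. A hedged sketch (the directory layout, file names, and local URL are assumptions, not from the thread):

```shell
#!/bin/sh
# needs_rebuild TARGET SRC... : succeed (exit 0) if TARGET is missing
# or is older than any of the listed source files.
needs_rebuild() {
    target=$1; shift
    [ -f "$target" ] || return 0
    for src in "$@"; do
        [ "$src" -nt "$target" ] && return 0
    done
    return 1
}

# Usage sketch (hypothetical paths/URL): re-render ccs.htm only when one
# of its inputs changed, fetching the served page with lynx as above.
# needs_rebuild html/ccs.htm src/ccs.htm src/header.asp src/footer.asp \
#     && lynx -source 'http://localhost/wallpapers/ccs.htm' > html/ccs.htm
```

Run from cron or by hand, this gives the "Makefile that knows the dependencies" behavior without teaching the proxy anything about file relationships.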




Re: Dynamic content that is static

2000-12-22 Thread Edward Moon

Not necessarily.

You can use mod_proxy to cache the dynamically generated pages on the
lightweight apache.

Check out http://perl.apache.org/guide/strategy.html#Apache_s_mod_proxy
for details on what headers you'll need to set for caching to work.

On Fri, 22 Dec 2000, Philip Mak wrote:

 Hi everyone,
 
 I have been going over the modperl tuning guide and the suggestions that
 people on this list sent me earlier. I've reduced MaxClients to 33 (each
 httpd process takes up 3-4% of my memory, so that's how much I can fit
 without swapping) so if the web server overloads again, at least it won't
 take the machine down with it.
 
 Running a non-modperl apache that proxies to a modperl apache doesn't seem
 like it would help much because the vast majority of pages served require
 modperl.
 




Re: Dynamic content that is static

2000-12-22 Thread Philip Mak

On Fri, 22 Dec 2000, Edward Moon wrote:

  Running a non-modperl apache that proxies to a modperl apache doesn't seem
  like it would help much because the vast majority of pages served require
  modperl.

 Not necessarily.
 
 You can use mod_proxy to cache the dynamically generated pages on the
 lightweight apache.

I thought about this... but I'm not sure how I would tell the lightweight
Apache to refresh its cache when a file gets changed. I suppose I could
graceful restart it, but the other webmasters of the site do not have root
access. (Or is there another way? Is it possible to teach Apache or Squid 
that ccs.htm depends on header.asp, footer.asp, series.dat and index.inc?)

Also, does this mess up the REMOTE_HOST variable, or is Apache smart
enough to replace that with X-Forwarded-For when the forwarded traffic is
being sent from a local privileged process?

-Philip Mak ([EMAIL PROTECTED])




mod_perl and chrooting question

2000-12-22 Thread Gunther Birznieks

OK, I think a few weeks ago we had agreed that the front-end proxy should 
be chrooted away from the back-end mod_perl server (each in its own chroot 
jail). So we are working on getting a sample setup (for our own site).

However, the resources that were posted strongly warn against doing any 
hard linking of resources (eg shared libraries and binaries) between and 
outside the chroot jails.

The authors do not state why though.

My thought is that /lib is going to all be owned by root and not writable. 
The only way to alter these files is to get root access within the chroot 
jail. And if you have root access within the chroot jail then you can 
escape chroot anyway.

So is there really a vulnerability?

I am concerned because I wonder if all the shared libraries that are copied 
to /lib in the chroot jail will cause me to have double the RAM taken up by 
duplicates of various shared libraries (for those running in the HTTP 
front-end server chroot jail and those running in the mod_perl backend 
chroot jail).

Thanks,
 Gunther

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: Dynamic content that is static

2000-12-22 Thread brian d foy

On Fri, 22 Dec 2000, Philip Mak wrote:

 So, it would probably be more efficient if I had a /src directory and a
 /html directory. The /src directory could contain my modperl files and a
 Makefile that knows the dependencies; when I type "make", it will evaluate
 the modperl files and parse them into plain HTML files in the /html
 directory.

i've had great success with squid in http accelerator mode.  we squeezed a
factor of 100 in speed with just that. :)

however, i have been talking to a few people about something like a
mod_makefile. :)

-- 
brian d foy [EMAIL PROTECTED]
Perl hacker for hire




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Joe Schaefer

"Jeremy Howard" [EMAIL PROTECTED] writes:

 Perrin Harkins wrote:
  What I was saying is that it doesn't make sense for one to need fewer
  interpreters than the other to handle the same concurrency.  If you have
  10 requests at the same time, you need 10 interpreters.  There's no way
  speedycgi can do it with fewer, unless it actually makes some of them
  wait.  That could be happening, due to the fork-on-demand model, although
  your warmup round (priming the pump) should take care of that.

A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the data has been delivered
to the client (at least that's my understanding of the lingering-close
issue that was recently discussed at length here). Hypothetically speaking,
if a "FastCGI-like"[1] backend can deliver its content faster than the
apache (front-end) server can "proxy" it to the client, you won't need as
many to handle the same (front-end) traffic load.

As an extreme hypothetical example, say that over a 5 second period you
are barraged with 100 modem requests that typically would take 5s each to 
service.  This means (sans lingerd :) that at the end of your 5 second 
period, you have 100 active apache children around.

But if new requests during that 5 second interval were only received at 
20/second, and your "FastCGI-like" server could deliver the content to
apache in one second, you might only have forked 50-60 "FastCGI-like" new 
processes to handle all 100 requests (forks take a little time :).

Moreover, an MRU design allows the transient effects of a short burst 
of abnormally heavy traffic to dissipate quickly, and IMHO that's its 
chief advantage over LRU.  To return to this hypothetical, suppose 
that immediately following this short burst, we maintain a sustained 
traffic of 20 new requests per second. Since it takes 5 seconds to 
deliver the content, that amounts to a sustained concurrency level 
of 100. The "Fast-CGI like" backend may have initially reacted by forking 
50-60 processes, but with MRU only 20-30 processes will actually be 
handling the load, and this reduction would happen almost immediately 
in this hypothetical.  This means that the remaining transient 20-30 
processes could be quickly killed off or _moved to swap_ without adversely 
affecting server performance.

Again, this is all purely hypothetical - I don't have benchmarks to
back it up ;)
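The arithmetic behind these figures is just Little's law: steady-state concurrency equals the arrival rate times the time each request occupies a process. A throwaway check using the thread's own hypothetical numbers:

```shell
# Little's law with the numbers from the example above.
rate=20        # sustained new requests per second
front_time=5   # seconds a frontend child is tied to a slow modem client
back_time=1    # seconds the backend needs to hand content to the proxy

echo "frontend concurrency: $((rate * front_time))"
echo "backend concurrency:  $((rate * back_time))"
```

which reproduces the sustained concurrency of 100 frontend children versus roughly 20 busy backend processes.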

 I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
 if, for instance, a large POST request is being uploaded, this takes a whole
 perl interpreter while the transaction is occurring. This is at least one
 place where a Perl interpreter should not be needed.
 
 Of course, this could be overcome if an HTTP Accelerator is used that takes
 the whole request before passing it to a local httpd, but I don't know of
 any proxies that work this way (AFAIK they all pass the packets as they
 arrive).

I posted a patch to mod_proxy a few months ago that specifically 
addresses this issue.  It has a ProxyPostMax directive that changes 
its behavior to a store-and-forward proxy for POST data (it also enabled 
keepalives on the browser-side connection if they were enabled on the 
frontend server.)

It does this by buffering the data to a temp file on the proxy before 
opening the backend socket.  It's straightforward to make it buffer to 
a portion of RAM instead- if you're interested I can post another patch 
that does this also, but it's pretty much untested.


[1] I've never used SpeedyCGI, so I've refrained from specifically discussing 
it. Also, a mod_perl backend server using Apache::Registry can be viewed as 
"FastCGI-like" for the purpose of my argument.

-- 
Joe Schaefer




Re: Dynamic content that is static

2000-12-22 Thread Dave Seidel

  I realized something, though: Although the pages on my site are
  dynamically generated, they are really static. Their content doesn't
  change unless I change the files on the website. (For example,
  http://www.animewallpapers.com/wallpapers/ccs.htm depends on header.asp,
  footer.asp, series.dat and index.inc. If none of those files change, the
  content of ccs.htm remains the same.)

You might want to consider using a tool other than mod_perl in this situation. 
There are preprocessor/compiler-type tools such as htmlpp or WML (both written in
Perl), or you can build the pages in PHP (e.g.) and compile them into static
pages with the command-line version.  I've used both htmlpp and PHP, and both
work well, and I drive them both with make.

I don't know if either Mason or Embperl offer static compilation, but Mason has
caching and I believe that Embperl is getting caching.  AxKit is also very
cool, and caches.

-- 
Dave Seidel
[EMAIL PROTECTED]




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Gunther Birznieks

At 10:17 PM 12/22/2000 -0500, Joe Schaefer wrote:
"Jeremy Howard" [EMAIL PROTECTED] writes:

[snipped]
I posted a patch to mod_proxy a few months ago that specifically
addresses this issue.  It has a ProxyPostMax directive that changes
its behavior to a store-and-forward proxy for POST data (it also enabled
keepalives on the browser-side connection if they were enabled on the
frontend server.)

It does this by buffering the data to a temp file on the proxy before
opening the backend socket.  It's straightforward to make it buffer to
a portion of RAM instead- if you're interested I can post another patch
that does this also, but it's pretty much untested.
Cool! Are these patches now incorporated in the core mod_proxy if we 
download it off the web? Or do we troll through the mailing list to find 
the patch?

(Similar question about the forwarding of remote user patch someone posted 
last year).

Thanks,
 Gunther




Re: Dynamic content that is static

2000-12-22 Thread Dave Rolsky

On 22 Dec 2000, Dave Seidel wrote:

 I don't know if either Mason or Embperl offer static compilation, but Mason has
 caching and I believe that Embperl is getting caching.  AxKit is also very
 cool, and caches.

Using Mason to generate a set of HTML pages would not be too terribly
difficult.

If someone is interested in doing this and needs some guidance I'd be
happy to help.  It would be nice to include an offline site builder with
Mason (or as a separate project).

-dave

/*==
www.urth.org
We await the New Sun
==*/




libapreq-0.31_03

2000-12-22 Thread Jim Winstead

The uploaded file
 
libapreq-0.31_03.tar.gz

has entered CPAN as

  file: $CPAN/authors/id/J/JI/JIMW/libapreq-0.31_03.tar.gz
  size: 151839 bytes
   md5: c23cb069e42643e505d4043f0eef4b9f

it is also available at:

  http://httpd.apache.org/dist/libapreq-0.31_03.tar.gz

more information is at:

  http://httpd.apache.org/apreq/

the plan is to release this code as 0.32 early in the new year
if there are no major problems spotted with this test release. if
someone sends patches to compile libapreq on win32 with the latest
mod_perl cvs, those would gladly be incorporated. :)

the changes since 0.31:

=item 0.31_03 - December 23, 2000

Apache::Request->instance [Matt Sergeant [EMAIL PROTECTED]]

=item 0.31_02 - December 17, 2000

autoconf support
[Tom Vaughan [EMAIL PROTECTED]]

also parse r->args if content-type is multipart
[Ville Skyttä [EMAIL PROTECTED]]

deal properly with Apache::Cookie->new(key => undef)
thanks to Matt Sergeant for the spot

fix parsing of Content-type header to deal with charsets
[Artem Chuprina [EMAIL PROTECTED]]

fix nasty bug when connection is broken during file upload
thanks to Jeff Baker for the spot

multipart_buffer.[ch] renamed to apache_multipart_buffer.[ch]

=item 0.31_01 - December 4, 2000

keep reusing same buffer when handling file uploads to prevent overzealous
memory allocation [Yeasah Pell, Jim Winstead [EMAIL PROTECTED]]

handle internet explorer for the macintosh bug that sends corrupt mime
boundaries when image submit buttons are used in multipart/form-data forms
[Yeasah Pell]

fix uninitialized variable in ApacheRequest_parse_urlencoded [Yeasah Pell]



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Jeremy Howard

Joe Schaefer wrote:
 "Jeremy Howard" [EMAIL PROTECTED] writes:
  I don't know if Speedy fixes this, but one problem with mod_perl v1 is
that
  if, for instance, a large POST request is being uploaded, this takes a
whole
  perl interpreter while the transaction is occurring. This is at least
one
  place where a Perl interpreter should not be needed.
 
  Of course, this could be overcome if an HTTP Accelerator is used that
takes
  the whole request before passing it to a local httpd, but I don't know
of
  any proxies that work this way (AFAIK they all pass the packets as they
  arrive).

 I posted a patch to mod_proxy a few months ago that specifically
 addresses this issue.  It has a ProxyPostMax directive that changes
 its behavior to a store-and-forward proxy for POST data (it also enabled
 keepalives on the browser-side connection if they were enabled on the
 frontend server.)

FYI, this patch is at:

  http://www.mail-archive.com/modperl@apache.org/msg11072.html





Re: Dynamic content that is static

2000-12-22 Thread John Michael

I may be out of my realm here.  I use mostly perl for everything and have
done similar things.  Create a directory tree with the source files.  In the
source files use something like %%INCLUDE_HEADER%% for each part of the page
that changes and have the script use flat text files for the build.  Have
the script traverse the tree of the source files writing the output to the
html directory.  Whenever you update the flat text files, just run the script
from the command line or write it to run from the web with a password.

Mason does something similar I believe.
You can even have the script write in the %%INCLUDES%% dynamically if you
take in the input and assign it like so.
$$Var = $value;  instead of  $input{'key'} = $value;
Then do the substitutions like so.

foreach (@variables){
 $template_txt =~ s/%%$_%%/$$_/gi;
}

Works great for me.  You can then make any change to the source.html page
and the flat text file without having to change the script.
John Michael
Not a mod perl solution, but it will work.


- Original Message -
From: "Dave Rolsky" [EMAIL PROTECTED]
To: "Dave Seidel" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, December 22, 2000 9:28 PM
Subject: Re: Dynamic content that is static


 On 22 Dec 2000, Dave Seidel wrote:

  I don't know if either Mason or Embperl offer static compilation, but
Mason has
  caching and I believe that Embperl is getting caching. AxKit is also
very
  cool, and caches.

 Using Mason to generate a set of HTML pages would not be too terribly
 difficult.

 If someone is interested in doing this and needs some guidance I'd be
 happy to help.  It would be nice to include an offline site builder with
 Mason (or as a separate project).

 -dave

 /*==
 www.urth.org
 We await the New Sun
 ==*/





Re: Dynamic content that is static

2000-12-22 Thread Edward Moon

You should check out the documentation on mod_proxy to see what it's
capable of: http://httpd.apache.org/docs/mod/mod_proxy.html

You can specify expiration values and be assured that cached files older
than expiry will be deleted.

So, for example, if you know that your content gets updated every 48 hours
you can specify 'CacheMaxExpire 48' and force the proxy server to
retrieve a new copy every 48 hours.

You can also set headers within a dynamic document that specifies an
expiration time. Check out the link in my previous e-mail for more info.
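Pulling those directives together, a front-end caching proxy stanza might look like the following (hostnames, port, and paths are hypothetical; the directives themselves are standard Apache 1.3 mod_proxy ones):

```apache
ProxyRequests    Off
ProxyPass        / http://backend.example.com:8080/
ProxyPassReverse / http://backend.example.com:8080/

# Disk cache: 10 MB, and nothing is served from cache more than
# 48 hours old without being refetched from the backend.
CacheRoot       "/var/cache/apache-proxy"
CacheSize       10240
CacheMaxExpire  48
```

Documents that send their own Expires headers can still expire sooner; CacheMaxExpire is only the upper bound.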

On Fri, 22 Dec 2000, Philip Mak wrote:

 I thought about this... but I'm not sure how I would tell the lightweight
 Apache to refresh its cache when a file gets changed. I suppose I could
 graceful restart it, but the other webmasters of the site do not have root
 access. (Or is there another way? Is it possible to teach Apache or Squid 
 that ccs.htm depends on header.asp, footer.asp, series.dat and index.inc?)
 
 Also, does this mess up the REMOTE_HOST variable, or is Apache smart
 enough to replace that with X-Forwarded-For when the forwarded traffic is
 being sent from a local privileged process?
 
 -Philip Mak ([EMAIL PROTECTED])
 




Re: Dynamic content that is static

2000-12-22 Thread Bill Moseley

At 09:08 PM 12/22/00 -0500, Philip Mak wrote:
I realized something, though: Although the pages on my site are
dynamically generated, they are really static. Their content doesn't
change unless I change the files on the website.

This doesn't really help with your ASP files, but have you looked at ttree
in the Template Toolkit distribution?

The problem, AFAIK, is that ttree looks only at the top level
documents and not included templates.  I started to look at
Template::Provider to see if there was an easy way to write out dependency
information to a file, and then stat all those files every five minutes
from a cron job and if anything changes, touch the top level files and then
run ttree again.

I'd like this because I'm generating cobranded pages with mod_perl, and
many of the pages are really static content.


Bill Moseley
mailto:[EMAIL PROTECTED]



Re: Dynamic content that is static

2000-12-22 Thread Ask Bjoern Hansen

On Fri, 22 Dec 2000, Philip Mak wrote:

   Running a non-modperl apache that proxies to a modperl apache doesn't seem
   like it would help much because the vast majority of pages served require
   modperl.
 
  Not necessarily.
  
  You can use mod_proxy to cache the dynamically generated pages on the
  lightweight apache.
 
 I thought about this... but I'm not sure how I would tell the lightweight
 Apache to refresh its cache when a file gets changed. I suppose I could
 graceful restart it, but the other webmasters of the site do not have root
 access. (Or is there another way? Is it possible to teach Apache or Squid 
 that ccs.htm depends on header.asp, footer.asp, series.dat and index.inc?)

I don't know about Apache::ASP, but it can probably be done. With
HTML::Embperl it would be pretty trivial.

You could probably get quite a gain with having a squid or a
mod_proxy process in front. Both because they would slurp up the
data and feed it to slow clients and because if you could cache the
documents for just a minute or so it might save quite some hits on
the backend.
 
 Also, does this mess up the REMOTE_HOST variable, or is Apache smart
 enough to replace that with X-Forwarded-For when the forwarded traffic is
 being sent from a local privileged process?

Have a look at ftp://ftp.netcetera.dk/pub/apache/mod_proxy_add_forward.c
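On the mod_perl side, the usual companion to mod_proxy_add_forward is a handler on the backend that copies the forwarded address back into the connection record. This is sketched from memory of the mod_perl guide's recipe, so treat the details as an assumption; it only runs inside a mod_perl 1 server:

```perl
package My::ProxyRemoteAddr;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    # Only the nearest (last) address in the X-Forwarded-For chain is
    # taken; the header is set by the trusted local frontend proxy.
    if (my $xff = $r->header_in('X-Forwarded-For')) {
        my ($ip) = $xff =~ /([^,\s]+)\s*$/;
        $r->connection->remote_ip($ip) if $ip;
    }
    return OK;
}
1;
```

Installed in the backend's httpd.conf with something like `PerlPostReadRequestHandler My::ProxyRemoteAddr`, so every later phase (and REMOTE_HOST/REMOTE_ADDR lookups) sees the real client address.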


 - ask

-- 
ask bjoern hansen - http://ask.netcetera.dk/
more than 70M impressions per day, http://valueclick.com




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-22 Thread Ask Bjoern Hansen

On Thu, 21 Dec 2000, Sam Horrocks wrote:

   Folks, your discussion is not short of wrong statements that can be easily
   proved, but I don't find it useful.
 
  I don't follow.  Are you saying that my conclusions are wrong, but
  you don't want to bother explaining why?
  
  Would you agree with the following statement?
 
 Under apache-1, speedycgi scales better than mod_perl with
 scripts that contain un-shared memory 

Maybe; but for one thing the feature set seems to be very different
as others have pointed out. Secondly then the test that was
originally quoted didn't have much to do with reality and showed
that whoever made it didn't have much experience with setting up
real-world high traffic systems with mod_perl.


  - ask

-- 
ask bjoern hansen - http://ask.netcetera.dk/
more than 70M impressions per day, http://valueclick.com




cvs commit: modperl MANIFEST

2000-12-22 Thread dougm

dougm   00/12/22 12:55:56

  Modified:.MANIFEST
  Log:
  missed part of another commit
  
  Revision  ChangesPath
  1.66  +0 -1  modperl/MANIFEST
  
  Index: MANIFEST
  ===
  RCS file: /home/cvs/modperl/MANIFEST,v
  retrieving revision 1.65
  retrieving revision 1.66
  diff -u -r1.65 -r1.66
  --- MANIFEST  2000/04/12 16:13:09 1.65
  +++ MANIFEST  2000/12/22 20:55:55 1.66
  @@ -21,7 +21,6 @@
   INSTALL.apaci
   SUPPORT
   INSTALL.win32
  -INSTALL.activeperl
   INSTALL.raven
   MANIFEST
   ToDo
  
  
  



cvs commit: modperl/t/net/perl/io perlio.pl

2000-12-22 Thread dougm

dougm   00/12/22 15:59:39

  Modified:t/net/perl/io perlio.pl
  Log:
  fix variable will not stay shared
  
  Revision  ChangesPath
  1.7   +5 -4  modperl/t/net/perl/io/perlio.pl
  
  Index: perlio.pl
  ===
  RCS file: /home/cvs/modperl/t/net/perl/io/perlio.pl,v
  retrieving revision 1.6
  retrieving revision 1.7
  diff -u -r1.6 -r1.7
  --- perlio.pl 2000/12/20 08:07:37 1.6
  +++ perlio.pl 2000/12/22 23:59:39 1.7
  @@ -15,7 +15,7 @@
   my $r = shift;
   my $sub = "test_$ENV{QUERY_STRING}";
    if (defined &{$sub}) {
   -&{$sub};
   +&{$sub}($r);
   }
   else {
   print "Status: 200 Bottles of beer on the wall\n",
  @@ -114,18 +114,19 @@
   }
   
   sub test_syswrite_1 {
  -test_syswrite();
  +test_syswrite(shift);
   }
   
   sub test_syswrite_2 {
  -test_syswrite(160);
  +test_syswrite(shift,160);
   }
   
   sub test_syswrite_3 {
  -test_syswrite(80, 2000);
  +test_syswrite(shift,80, 2000);
   }
   
   sub test_syswrite {
  +my $r = shift;
   my $len = shift;
   my $offset = shift;
   my $msg = "";
  
  
  



cvs commit: modperl/t/net/perl constants.pl

2000-12-22 Thread dougm

dougm   00/12/22 16:04:37

  Modified:.Changes
   t/modules cgi.t
   t/net/perl constants.pl
  Log:
  adjust test output (modules/{cgi,constants}) to make 5.7.0-dev
  Test::Harness happy
  
  Revision  ChangesPath
  1.563 +3 -0  modperl/Changes
  
  Index: Changes
  ===
  RCS file: /home/cvs/modperl/Changes,v
  retrieving revision 1.562
  retrieving revision 1.563
  diff -u -r1.562 -r1.563
  --- Changes   2000/12/22 20:56:24 1.562
  +++ Changes   2000/12/23 00:04:36 1.563
  @@ -10,6 +10,9 @@
   
   =item 1.24_02-dev
   
  +adjust test output (modules/{cgi,constants}) to make 5.7.0-dev
  +Test::Harness happy
  +
   fix $r-custom_response bug which was triggering core dumps if no
   ErrorDocuments were configured, thanks to Paul Hodges for the spot
   
  
  
  
  1.8   +1 -1  modperl/t/modules/cgi.t
  
  Index: cgi.t
  ===
  RCS file: /home/cvs/modperl/t/modules/cgi.t,v
  retrieving revision 1.7
  retrieving revision 1.8
  diff -u -r1.7 -r1.8
  --- cgi.t 1999/08/03 23:25:15 1.7
  +++ cgi.t 2000/12/23 00:04:36 1.8
  @@ -36,7 +36,7 @@
   print "1..$tests\nok 1\n";
   print fetch($ua, "http://$net::httpserver$net::perldir/cgi.pl?PARAM=2");
   print fetch($ua, "http://$net::httpserver$net::perldir/cgi.pl?PARAM=%33");
  -print upload($ua, "http://$net::httpserver$net::perldir/cgi.pl", "4 (fileupload)");
  +print upload($ua, "http://$net::httpserver$net::perldir/cgi.pl", "4 #(fileupload)");
   if($test_mod_cgi) { 
   print fetch($ua, "http://$net::httpserver/cgi-bin/cgi.pl?PARAM=5");
   }
  
  
  
  1.11  +1 -1  modperl/t/net/perl/constants.pl
  
  Index: constants.pl
  ===
  RCS file: /home/cvs/modperl/t/net/perl/constants.pl,v
  retrieving revision 1.10
  retrieving revision 1.11
  diff -u -r1.10 -r1.11
  --- constants.pl  1998/11/24 23:15:12 1.10
  +++ constants.pl  2000/12/23 00:04:37 1.11
  @@ -64,7 +64,7 @@
   eval {
  $name = Apache::Constants->name($val);
   };
  -print defined $val ? "" : "not ", "ok $ix ($name|$sym: $val)\n";
  +print defined $val ? "" : "not ", "ok $ix #($name|$sym: $val)\n";
   $ix++;
    last if $ix >= $tests;
   }
  
  
  



cvs commit: modperl/t/docs startup.pl

2000-12-22 Thread dougm

dougm   00/12/22 16:32:21

  Modified:t/docs   startup.pl
  Log:
  another fix for Test::Harness
  
  Revision  ChangesPath
  1.40  +1 -1  modperl/t/docs/startup.pl
  
  Index: startup.pl
  ===
  RCS file: /home/cvs/modperl/t/docs/startup.pl,v
  retrieving revision 1.39
  retrieving revision 1.40
  diff -u -r1.39 -r1.40
  --- startup.pl2000/09/27 16:26:01 1.39
  +++ startup.pl2000/12/23 00:32:21 1.40
  @@ -198,7 +198,7 @@
   print "1..$x\n";
   my $i = 1;
   for my $e (@entries) {
  - print "ok $i ($e)\n";
  + print "ok $i #($e)\n";
++$i;
   }
   1;
  
  
  



cvs commit: modperl/src/modules/perl Makefile

2000-12-22 Thread dougm

dougm   00/12/22 18:23:10

  Modified:.Changes Makefile.PL
   apacimod_perl.config.sh
   lib/Apache ExtUtils.pm
   src/modules/perl Makefile
  Log:
  if Perl is linked with -lpthread, then httpd needs to be linked with
  -lpthread, make sure that happens with USE_DSO=1, warn if USE_APXS=1
  
  disable uselargefile flags by default, enable with:
   Makefile.PL PERL_USELARGEFILES=1
  
  Revision  ChangesPath
  1.564 +6 -0  modperl/Changes
  
  Index: Changes
  ===
  RCS file: /home/cvs/modperl/Changes,v
  retrieving revision 1.563
  retrieving revision 1.564
  diff -u -r1.563 -r1.564
  --- Changes   2000/12/23 00:04:36 1.563
  +++ Changes   2000/12/23 02:23:07 1.564
  @@ -10,6 +10,12 @@
   
   =item 1.24_02-dev
   
  +if Perl is linked with -lpthread, then httpd needs to be linked with
  +-lpthread, make sure that happens with USE_DSO=1, warn if USE_APXS=1
  +
  +disable uselargefile flags by default, enable with:
  + Makefile.PL PERL_USELARGEFILES=1
  +
   adjust test output (modules/{cgi,constants}) to make 5.7.0-dev
   Test::Harness happy
   
  
  
  
  1.174 +35 -10modperl/Makefile.PL
  
  Index: Makefile.PL
  ===
  RCS file: /home/cvs/modperl/Makefile.PL,v
  retrieving revision 1.173
  retrieving revision 1.174
  diff -u -r1.173 -r1.174
  --- Makefile.PL   2000/12/21 20:00:09 1.173
  +++ Makefile.PL   2000/12/23 02:23:08 1.174
  @@ -41,6 +41,7 @@
   
    my $Is_dougm = (defined($ENV{USER}) && ($ENV{USER} eq "dougm"));
   my $USE_THREADS;
  +my $thrlib = join '|', qw(-lpthread);
   
    if ($] >= 5.005_60) {
    $USE_THREADS = (defined($Config{usethreads}) && 
  @@ -94,6 +95,7 @@
   $Apache::MyConfig::Setup{Apache_Src} ; 
   
   my $PWD = cwd;
  +$ENV{PERL5LIB} = "$PWD/lib";
   
   my %SSL = (
    "modules/ssl/apache_ssl.c" => "Ben-SSL",
  @@ -188,7 +190,9 @@
   $PERL_DEBUG = "";
   $PERL_DESTRUCT_LEVEL = "";
   $PERL_STATIC_EXTS = "";
   -$PERL_EXTRA_CFLAGS = $] >= 5.006 ? $Config{ccflags} : "";
  +$PERL_USELARGEFILES = 0;
  +$PERL_EXTRA_CFLAGS = "";
  +$PERL_EXTRA_LIBS = "";
   $SSLCacheServerPort = 8539;
   $SSL_BASE = ""; 
   $Port = $ENV{HTTP_PORT} || 8529;
  @@ -359,6 +363,10 @@
   $PERL_EXTRA_CFLAGS .= " -DPERL_SAFE_STARTUP=1";
   }
   
   +if ($PERL_USELARGEFILES and $] >= 5.006) {
  +$PERL_EXTRA_CFLAGS .= " $Config{ccflags}";
  +}
  +
   for (keys %PassEnv) {
   $ENV{$_} = $$_ if $$_;
   }
  @@ -975,10 +983,18 @@
$cmd .= qq(CFLAGS="$PERL_EXTRA_CFLAGS" );
}

  - if ($USE_DSO) { # override apache's notion of this flag
  -   $cmd .= qq(LDFLAGS_SHLIB_EXPORT="$Config{ccdlflags}" );
  + if ($USE_DSO) {
  +# override apache's notion of this flag
  +$cmd .= qq(LDFLAGS_SHLIB_EXPORT="$Config{ccdlflags}" );
  +
   +#if Perl is linked with -lpthread, httpd needs to be too
  +if ($Config{libs} =~ /($thrlib)/) {
  +$PERL_EXTRA_LIBS .= " $1";
  +}
}
  -
  +if ($PERL_EXTRA_LIBS) {
  +$cmd .= qq(LIBS="$PERL_EXTRA_LIBS" );
  +}
$cmd .= "./configure " .
  "--activate-module=src/modules/perl/libperl.a";
  
  @@ -1220,6 +1236,7 @@
  HTTPD => $TARGET,
  PORT => $PORT,
  PWD => $PWD,
   +PERL5LIB => "PERL5LIB=$ENV{PERL5LIB}",
  SHRPENV => $Config{shrpenv},
  CVSROOT => 'perl.apache.org:/home/cvs',
   },
  @@ -1338,16 +1355,16 @@
  (cd ./apaci && $(MAKE) distclean)
   
    apxs_libperl:
   - (cd ./apaci && $(MAKE))
   + (cd ./apaci && $(PERL5LIB) $(MAKE))
   
    apxs_install: apxs_libperl
  (cd ./apaci && $(MAKE) install;)
   
    apache_httpd: $(APACHE_SRC)/Makefile.tmpl
   - (cd $(APACHE_SRC) && $(SHRPENV) $(MAKE) CC="$(CC)";)
   + (cd $(APACHE_SRC) && $(PERL5LIB) $(SHRPENV) $(MAKE) CC="$(CC)";)
   
    apaci_httpd: 
   - (cd $(APACHE_ROOT) && $(MAKE))
   + (cd $(APACHE_ROOT) && $(PERL5LIB) $(MAKE))
   
    apaci_install: 
  (cd $(APACHE_ROOT) && $(MAKE) install)
  @@ -1961,6 +1978,7 @@
   'Apache_Src' => \'$APACHE_SRC\',
   'SSL_BASE' => \'$SSL_BASE\',
   'APXS' => \'$WITH_APXS\',
   +   'PERL_USELARGEFILES' => \'$PERL_USELARGEFILES\',
   EOT
   
 foreach my $key (sort @callback_hooks) {
  @@ -2125,8 +2143,7 @@
   
   sub ccopts {
   unless ($Embed::ccopts) {
  - $Embed::ccopts = `$^X -MExtUtils::Embed -e ccopts`;
  -
  +$Embed::ccopts = "$Config{ccflags} -I$Config{archlibexp}/CORE";
if($USE_THREADS) {
$Embed::ccopts .= " -DPERL_THREADS";
}
  @@ -2277,6 +2294,12 @@
   uselargefiles_check();
   dynaloader_check();
   
  +if ($USE_APXS and $Config{libs} =~ /($thrlib)/) {
  +my $lib = $1;
   +phat_warn(<<EOF);
  +Your Perl is linked with $lib, make sure that your httpd is