Re: Problem configuring and making mod_perl

2003-07-16 Thread C. Jon Larsen

I hit the same error trying to build on a rh9.0 workstation. This solved 
my problem:

CPPFLAGS=-I/usr/kerberos/include
export CPPFLAGS

Then unpack, configure, make, etc ...

On Wed, 16 Jul 2003, Richard Kurth wrote:

> I am trying to compile mod_perl-1.28 with apache_1.3.27, openssl-0.9.7b and 
> mod_ssl-2.8.12-1.3.27. When I run configure with the following and then do 
> a make I get all these parse errors; can anybody tell me why I get this.
> 
> THIS IS WHAT I AM RUNNING TO CONFIGURE
> perl Makefile.PL USE_APACI=1 EVERYTHING=1 \
>  DO_HTTPD=1 SSL_BASE=/usr/ \
>  APACHE_PREFIX=/usr/apache \
>  APACHE_SRC=../apache_1.3.27/src \
>  APACI_ARGS='--enable-module=rewrite --enable-shared=rewrite \
> --sysconfdir=/etc/httpd/conf --logfiledir=/home/log --manualdir=/home/manual \
> --server-uid=apache --server-gid=apache --enable-module=so 
> --htdocsdir=/home/sites \
> --cgidir=/home/cgi-bin --enable-module=proxy --enable-shared=proxy 
> --enable-module=ssl \
> --enable-shared=ssl --enable-module=access --enable-module=autoindex '
> 
> 
> THIS IS WHAT I GET WHEN I DO A MAKE. IT SEEMS TO HAVE SOMETHING TO DO WITH 
> OPENSSL
> gcc -c -I../.. -I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE 
> -I../../os/unix -I../../include   -DLINUX=22 -I/usr/include/gdbm 
> -DMOD_SSL=208112 -DMOD_PERL -DUSE_PERL_SSI 
> -D_REENTRANT  -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing 
> -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 
> -I/usr/include/gdbm -DUSE_HSREGEX -DEAPI -D_REENTRANT -DTHREADS_HAVE_PIDS 
> -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE 
> -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm `../../apaci` -fpic 
> -DSHARED_MODULE -DSSL_COMPAT -DSSL_USE_SDBM -DSSL_ENGINE -I/usr//include 
> -DMOD_SSL_VERSION=\"2.8.12\" mod_ssl.c && mv mod_ssl.o mod_ssl.lo
> In file included from /usr/include/openssl/ssl.h:179,
>   from mod_ssl.h:116,
>   from mod_ssl.c:65:
> /usr/include/openssl/kssl.h:72:18: krb5.h: No such file or directory
> In file included from /usr/include/openssl/ssl.h:179,
>   from mod_ssl.h:116,
>   from mod_ssl.c:65:
> /usr/include/openssl/kssl.h:132: parse error before "krb5_enctype"
> /usr/include/openssl/kssl.h:134: parse error before "FAR"
> /usr/include/openssl/kssl.h:135: parse error before '}' token
> /usr/include/openssl/kssl.h:147: parse error before "kssl_ctx_setstring"
> /usr/include/openssl/kssl.h:147: parse error before '*' token
> /usr/include/openssl/kssl.h:148: parse error before '*' token
> /usr/include/openssl/kssl.h:149: parse error before '*' token
> /usr/include/openssl/kssl.h:149: parse error before '*' token
> /usr/include/openssl/kssl.h:150: parse error before '*' token
> /usr/include/openssl/kssl.h:151: parse error before "kssl_ctx_setprinc"
> /usr/include/openssl/kssl.h:151: parse error before '*' token
> /usr/include/openssl/kssl.h:153: parse error before "kssl_cget_tkt"
> /usr/include/openssl/kssl.h:153: parse error before '*' token
> /usr/include/openssl/kssl.h:155: parse error before "kssl_sget_tkt"
> /usr/include/openssl/kssl.h:155: parse error before '*' token
> /usr/include/openssl/kssl.h:157: parse error before "kssl_ctx_setkey"
> /usr/include/openssl/kssl.h:157: parse error before '*' token
> /usr/include/openssl/kssl.h:159: parse error before "context"
> /usr/include/openssl/kssl.h:160: parse error before "kssl_build_principal_2"
> /usr/include/openssl/kssl.h:160: parse error before "context"
> /usr/include/openssl/kssl.h:163: parse error before "kssl_validate_times"
> /usr/include/openssl/kssl.h:163: parse error before "atime"
> /usr/include/openssl/kssl.h:165: parse error before "kssl_check_authent"
> /usr/include/openssl/kssl.h:165: parse error before '*' token
> /usr/include/openssl/kssl.h:167: parse error before "enctype"
> In file included from mod_ssl.h:116,
>   from mod_ssl.c:65:
> /usr/include/openssl/ssl.h:909: parse error before "KSSL_CTX"
> /usr/include/openssl/ssl.h:931: parse error before '}' token
> make[5]: *** [mod_ssl.lo] Error 1
> make[4]: *** [all] Error 1
> make[3]: *** [subdirs] Error 1
> make[3]: Leaving directory `/tmp/builldinstall/apache_1.3.27/src'
> make[2]: *** [build-std] Error 2
> make[2]: Leaving directory `/tmp/builldinstall/apache_1.3.27'
> make[1]: *** [build] Error 2
> make[1]: Leaving directory `/tmp/builldinstall/apache_1.3.27'
> make: *** [apaci_httpd] Error 2
> 

-- 
+ Jon Larsen: Chief Technology Officer, Richweb, Inc.
+ Richweb.com: Providing Internet-Based Business Solutions since 1995
+ GnuPG Public Key: http://richweb.com/jlarsen.gpg
+ Business: (804) 359.2220 x 101; Mobile: (804) 307.6939



RE: development techniques - specifically debug methods

2003-01-09 Thread C. Jon Larsen

There is a good technique in the mod_perl cookbook that uses a Debug
module with exported constants. If you program to the API, where all of
your code is compiled into bytecode at server startup in discrete
packages, this means that all of the debug if() sections sprinkled
throughout the code are not included as part of the runtime footprint.
This can be quite nice if you have largish chunks of code that should run
only in debug mode.

I guess this might work for registry scripts too, the first time they are
compiled.
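
The constant-folding trick reads roughly like this in plain Perl (a
minimal sketch: the Debug module is collapsed to a single use constant,
and handler_work() is an invented stand-in for real handler code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A compile-time constant lets perl prune dead branches entirely.
# Flip to 1 (e.g. from a dev server's startup.pl) to enable debugging.
use constant DEBUG => 0;

sub handler_work {
    my ($n) = @_;
    if (DEBUG) {
        # When DEBUG is 0 this whole block is optimized away at
        # compile time, so it costs nothing per request.
        warn "handler_work called with n=$n";
    }
    return $n * 2;
}

print handler_work(21), "\n";   # prints 42
```

Because the branch is removed at compile time rather than tested at run
time, it stays free no matter how many debug sections the code carries.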

On Thu, 9 Jan 2003 [EMAIL PROTECTED] wrote:

> 
> > Do you develop with an xterm tailing the logs, an emacs 
> > window (or other
> > editor) to edit the script and/or the packages (and on some occassions
> > httpd.conf), and a web browser (on an alternate virtual 
> > desktop)?  
> 
> Bingo. :-)
> 
> Do you
> > pepper code with :
> > 
> > print "option: " . $option{$foo} . "\n" if $debug;
> 
> If it's a longer-term debugging option that I might want again later then I
> might make a debug() method where I'll do 
> debug('this worked ok') and the debug() method might examine a flag to see
> whether it should do anything with that message.  
> Or a log() method that recognizes various levels of messages and obeys a
> debug_level setting or something.  I once used a Java package (the name
> escapes me but it was probably something simple like jLog) that worked sort
> of this way, though it also had some xml config files and such... anyways,
> I'm sure there are plenty of perl modules to do something similar, but the
> debug() is a fairly effective 2 minute alternative.  If it's just a quick
> one-time debug, I'll typically just use a warn or similar.  
> 
> > 
> > Fairly low tech, huh.
> > 
> > At apachecon, a speaker (who neither bragged nor rambled) 
> > mentioned lwp
> > use instead of (or to complement) the web browser portion.
> > 
> > Will the use of lwp instead of a browser improve my coding 
> > ability (either
> > in terms of speed or just improving my perl coding)?  Seems 
> > like I'd have
> > to spend too much time with the lwp script (tell it to first 
> > request the
> > page then choose option A and B then hit the "submit" button ... )
> 
> This sounds more like a testing suite than regular old
> debugging-while-you-go.  Probably a place for both.
> 
> > 
> > Is there some way to improve this cycle : edit code -> 
> > refresh browser ->
> > possibly look at the error log -> edit code -> ...
> 
> Honestly, this method has always been very efficient for us and most of the
> time we don't need anything more sophisticated for devel/debug.  Now for
> more formal testing, that gets trickier for us and we're currently looking
> for a good way to build some automated tests of our code and our web
> interface without it getting too unwieldy.  This will probably be where we
> spend a lot of time in the first part of the year.  Maybe LWP will be handy
> here.
> 
> -Fran 
> 

-- 
+ Jon Larsen; Chief Technology Officer, Richweb.com
+ GnuPG Public Key http://richweb.com/jlarsen.gpg
+ Richweb.com: Providing Internet-Based Business Solutions since 1995
+ Business Telephone: (804) 359.2220
+ Jon Larsen Cell Phone: (804) 307.6939




Re: mod_perl vs. C for high performance Apache modules

2001-12-14 Thread C. Jon Larsen


The original poster talked about C++ CGI programs. I have been using
mod_perl since the 0.7x days, and I can tell you that no fork+exec
CGI program, no matter what language it's written in, will come anywhere
close to a perl handler written against the mod_perl Apache API in
execution speed (when they are doing equivalent types of work). Using C++
to build web applications is something developers who grew up in the
heyday of client-server would think is a good idea. In the internet web
applications business, by the time you get a C++ program debugged and ready
to roll, the market has evolved and your software is out of date.  C++ is a
good language for systems programming, databases, etc., but web apps need
shorter life cycles.

I had an investor question similar to the one we are talking about 3 years
ago. I was questioned as to why we used Apache, mod_perl, and mysql
instead of C++ and Oracle's DB and Web Devel kit. Needless to say our
mod_perl systems have thrived while most of the investor's other
investments have had their expensive hardware auctioned off on eBay
recently.

The essence of mod_perl is that it allows you to take an idea and build a
working prototype very quickly. When you prove that the prototype works
you don't need to rewrite - mod_perl scales up better than any other web
application technology available - period.

-jon

On Fri, 14 Dec 2001 [EMAIL PROTECTED] wrote:

>
>
> -- Jeff Yoak <[EMAIL PROTECTED]> on 12/14/01 12:58:51 -0800
>
> > This is something different.  The investor is in a related business, and has
> > developed substantially similar software for years.  And it is really good.
> > What's worse is that my normal, biggest argument isn't compelling in this
> > case, that by the time this would be done in C, I'd be doing contract work on
> > Mars.  The investor claims to have evaluated Perl vs. C years ago, to have
> > witnessed that every single hit on the webserver under mod_perl causes a CPU
> > usage spike that isn't seen with C, and that under heavy load mod_perl
> > completely falls apart where C doesn't.  (This code is, of course, LONG gone
> > so I can't evaluate it for whether the C was good and the Perl was screwy.)
> > At any rate, because of this, he's spent years having good stuff written in
> > C.  Unbeknownst to either me or my client, both this software and its
> > developer were available to us, so in this case it would have been faster,
> > cheaper and honestly even better, by which I mean more fully-featured.
>
> Constructing the $r object in perl-space is an overhead
> that mod_perl causes. This overhead has been managed more
> effectively in recent versions of perl/mod_perl. A study
> done "a few years ago" probably involved machines with
> significantly less core and CPU horsepower than the average
> kiddie-games PC does today. Net result is that any overhead
> caused by mod_perl in the previous study may well have been
> either mitigated with better code or obviated by faster
> hardware [how's that for a sentence?].
>
> Net result is that the objection is probably based on once-
> valid but now out of date analysis.
>
> --
> Steven Lembark  2930 W. Palmer
> Workhorse Computing  Chicago, IL 60647
>+1 800 762 1582
>

-- 

C. Jon Larsen Chief Technology Officer, Richweb.com (804.307.6939)
SMTP: [EMAIL PROTECTED] (http://richweb.com/cjl_pgp_pub_key.txt)

Richweb.com:
Designing Open Source Internet Business Solutions since 1995
Building Safe, Secure, Reliable Cisco-Powered Networks since 1995




Re: can't flush buffers?

2000-12-23 Thread C. Jon Larsen


> 
>   I posted something like this a week ago, but typos in my message kept
> anyone from understanding the issue.
> 
>   I am trying to return each row to the client as it comes from the
> database, instead of waiting for all the rows to be returned before
> displaying them.  

Why would you want to do this ?

Writing your application this way will ensure that:

a. end users can crash your server/application.
b. your application will perform poorly on the network.

Buffer your output, and when all the output is collected, print it, and
let TCP deliver the data in network-friendly chunks. If your database is
so slow that you think you need an approach like this, investigate the
possibility of a caching server process that you can sit in front of the
actual db. 

You need to consider what happens when a user executes a query that can
return more rows than a browser can reasonably display. In other words,
having a query results pagination module or feature is probably a must.

If you were writing a stand-alone application that ran on a single CPU
(like MS Access on a local file) in this style (no pagination, no
buffering), I would consider it marginally bad style. Inside a
web-based application, this approach is horrendous.
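
The buffer-plus-pagination advice can be sketched in plain Perl. The
page-slicing arithmetic is the point here; a plain array stands in for
DBI rows (real code would push the paging into the SQL with
LIMIT/OFFSET), and paged_html() is an invented name:

```perl
use strict;
use warnings;

sub paged_html {
    my ($rows, $page, $per_page) = @_;
    my $start = ($page - 1) * $per_page;
    my $end   = $start + $per_page - 1;
    $end = $#$rows if $end > $#$rows;   # last page may be short

    # Collect everything into one buffer, then hand it to the client
    # in a single print so TCP can send network-friendly chunks.
    my $html = "<table>\n";
    $html .= "<tr><td>$_</td></tr>\n" for @{$rows}[$start .. $end];
    $html .= "</table>\n";
    return $html;
}

my @rows = map { "row $_" } 1 .. 95;
print paged_html(\@rows, 2, 10);   # emits rows 11..20 only
```

One print per page keeps the browser responsive and caps how much any
single query can dump on the client.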

Just my 2 cents . . .

On Sat, 23 Dec 2000, quagly wrote:

> 
>   I have set $|=1 and added $r->flush; after every print statement ( I
> realize this is redundant ) but to no avail.  
> 
> This is the relevant code:
> 
> while ($sth->fetch) {
>$r->print ("<tr>",
>map("<td>$_</td>",@cols),
>"</tr>");
>   $r->rflush;
> }
> 
> Here is the complete package:
> 
> package Sql::Client;
> 
> use Apache::Request;
> use strict;
> use warnings;
> use Apache::Constants qw(:common);
> 
> my $r;  #request
> my $apr;   #Apache::Request 
> my $host;  #hostname of remote user
> my $sql;#sql to execute
> 
> $|=1;
> 
> sub getarray ($) { 
> 
> my $dbh;  # Database handle
> my $sth;# Statement handle
> my $p_sql; # sql statement passed as parameter
> my @cols;  #column array to bind results
> my $titles;   # array ref to column headers
> 
> $p_sql = shift;
> 
> # Connect
> $dbh = DBI->connect (
> "DBI:mysql:links_db::localhost",
> "nobody",
> "somebody",
> {
> PrintError => 1,# warn() on errors
> RaiseError => 0,   # don't die on error
> AutoCommit => 1,# commit executes immediately
> }
> );
> 
> # prepare statement
> $sth = $dbh->prepare($p_sql);
> 
> $sth->execute;
> 
> $titles = $sth->{NAME_uc};
> #--
> # for minimal memory use, do it this way
> @cols[0..$#$titles] = ();
> $sth->bind_columns(\(@cols));
> $r->print( "<table>");
> $r->print ("<tr>",
> map("<th>$_</th>",@$titles),
> "</tr>");
> while ($sth->fetch) {
> $r->print ("<tr>",
> map("<td>$_</td>",@cols),
> "</tr>");
> $r->rflush;
> }
> $r->print ("</table>");
> return; 
> }
> 
> 
> sub handler {
> $r = shift;
> $apr =  Apache::Request->new($r);
> $sql = $apr->param('sql') || 'SELECT';
> $sql='SELECT' if  $apr->param('reset');
> 
> $r->content_type( 'text/html' );
> $r->send_http_header;
> return OK if $r->header_only;
> $host = $r->get_remote_host;
> $r->print(<<HTMLEND);
> <HTML>
> <HEAD>
> <LINK REL="stylesheet" HREF="/styles/lightstyle.css">
> <TITLE>Hello $host</TITLE>
> </HEAD>
> <BODY>
> <H1>Sql Client</H1>
> Enter your Select Statement:
> <FORM METHOD="POST">
> <TEXTAREA NAME="sql">$sql</TEXTAREA>
> <INPUT TYPE="submit">
> <INPUT TYPE="submit" NAME="reset" VALUE="Reset">
> </FORM>
> HTMLEND
> $r->rflush;
> getarray($sql) unless $sql =~ /^SELECT$/;
> 
> $r->print(<<HTMLEND);
> </BODY>
> </HTML>
> HTMLEND
> return OK;
> }
> 1;
> 




Re: Forking in mod_perl?

2000-10-04 Thread C. Jon Larsen


I use a database table for the queue. No file locking issues, atomic
transactions, you can sort and order the jobs, etc . . . you can wrap the
entire "queue" library in a module. Plus, the background script that
processes the queue can easily run with higher permissions, and you don't
have to worry as much about setuid issues when forking from a parent
process (like your apache) running as a user with fewer privileges than
what you (may) need. You can pass all the args you need via a column in
the db, and, if passing data back and forth is a must, serialize your data
using Storable and have the queue runner thaw it back out. Very simple,
very fast, very powerful.
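
A minimal sketch of that queue idea: the web process serializes job
arguments with Storable and stores a row; a privileged runner later
thaws it back out. An in-memory array stands in for the database table
here, and the names (enqueue, run_next_job) are invented; real code
would go through DBI:

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);
use MIME::Base64 qw(encode_base64 decode_base64);

my @job_queue;   # stands in for the jobs table in the database

sub enqueue {
    my (%args) = @_;
    # freeze() gives a binary blob; base64 keeps it safe in a TEXT column
    push @job_queue, encode_base64(freeze(\%args));
}

sub run_next_job {
    my $row = shift @job_queue or return;
    my $args = thaw(decode_base64($row));
    # the runner, with higher privileges, does the real work here
    return "mailed report to $args->{to}";
}

enqueue(type => 'report', to => 'ops@example.com');
print run_next_job(), "\n";   # prints "mailed report to ops@example.com"
```

With a real table you also get the sorting, atomic claiming, and
auditing that a directory of flat files cannot give you.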

On Wed, 4 Oct 2000, Neil Conway wrote:

> On Wed, Oct 04, 2000 at 02:42:50PM -0700, David E. Wheeler wrote:
> > Yeah, I was thinking something along these lines. Don't know if I need
> > something as complex as IPC. I was thinking of perhaps a second Apache
> > server set up just to handle long-term processing. Then the first server
> > could send a request to the second with the commands it needs to execute
> in a header. The second server processes those commands independently of
> > the first server, which then returns data to the browser.
> 
> In a pinch, I'd just use something like a 'queue' directory. In other
> words, when your mod_perl code gets some info to process, it writes
> this into a file in a certain directory (name it with a timestamp /
> cksum to ensure the filename is unique). Every X seconds, have a
> daemon poll the directory; if it finds a file, it processes it.
> If not, it goes back to sleep for X seconds. I guess it's poor
> man's IPC. But it runs over NFS nicely, it's *very* simple, it's
> portable, and I've never needed anything more complex. You also
> don't need to fork the daemon or startup a new script every
> processing request. But if you need to do the processing in realtime,
> waiting up to X seconds for the results might be unacceptable.
> 
> How does this sound?
> 
> HTH,
> 
> Neil
> 
> -- 
> Neil Conway <[EMAIL PROTECTED]>
> Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc
> Encrypted mail welcomed
> 
> It is dangerous to be right when the government is wrong.
> -- Voltaire
> 
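
The queue-directory scheme in the quoted message can be sketched like
this (a toy version: the producer writes a uniquely named file and a
single poll pass consumes it; a real daemon would sleep() between scans,
and the names enqueue/poll_once are invented):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);   # stands in for the shared queue dir
my $seq = 0;

sub enqueue {
    my ($data) = @_;
    # timestamp + pid + counter keeps the filename unique, as suggested
    my $name = sprintf "%s/%d.%d.%d.job", $dir, time, $$, $seq++;
    open my $fh, '>', $name or die "write $name: $!";
    print $fh $data;
    close $fh;
}

sub poll_once {
    my @done;
    for my $file (sort glob "$dir/*.job") {
        open my $fh, '<', $file or next;
        local $/;                      # slurp the whole job file
        push @done, scalar <$fh>;
        close $fh;
        unlink $file;                  # job consumed
    }
    return @done;
}

enqueue("resize image 42");
my @jobs = poll_once();
print "$jobs[0]\n";   # prints "resize image 42"
```

As the poster notes, this works fine over NFS and needs no locking as
long as writers use unique names, but results arrive only on the next
poll interval.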







Re: [RFC] Do Not Run Everything on One mod_perl Server

2000-04-19 Thread C. Jon Larsen


My apache processes are typically 18MB-20MB in size, with all but 500K to
1MB of that shared. We restart our servers in the middle of the night as
part of planned maintenance, of course, but even before we did that,
and even after weeks of uptime, the percentages did not change.

We do not use Apache::Registry at all; everything is a pure handler. We
cache all data structures (lots of Storable things) in the parent process
by thawing refs to the data structures into package variables. We use no
globals, only a few package variables (4) that we access by fully
qualified package name, and they get reset on each request. 

We use Apache::DBI and MySQL, and it works perfectly other than a few
segfaults that occur once in a while. Having all of the data structures
cached (and shared !) allows us to do some neat things without having to
rely solely on sql.

On Thu, 20 Apr 2000, Stas Bekman wrote:

> On Wed, 19 Apr 2000, Joshua Chamas wrote:
> 
> > Stas Bekman wrote:
> > > 
> > > Geez, I always forget something :(
> > > 
> > > You are right. I forgot to mention that this was a scenario for the 23 Mb
> > > of unshared memory. I just wanted to give an example. Still somehow I'm
> > > almost sure that there are servers where even with sharing in place, the
> > > hypothetical scenario I've presented is quite possible.
> > > 
> > > Anyway, it's just another patent for squeezing some more juice from your
> > > hardware without upgrading it.
> > > 
> > 
> > Your scenario would be more believable with 5M unshared, even
> > after doing ones best to share everything.  This is pretty typical
> > when connecting to databases, as the database connections cannot
> > be shared, and especially DB's like Oracle take lots of RAM
> > per connection.
> 
> Good idea. 5MB sounds closer to the real case than 10Mb. I'll make the
> correction. Thanks!!! 
> 
> > I'm not sure that your scenario is worthwhile if someone does
> > a good job preloading / sharing code across the forks, and 
> > the difference will really be how much of the code gets dirty
> > while you run things, which can be neatly tuned with MaxRequests.
> 
> Agree. But not everybody knows to do that well. So the presented idea
> might still find a good use at some web shops.
> 
> > Interesting & novel approach though.  I would bet that if people
> > went down this path, they would really end up on different machines
> > per web application, or even different web clusters per application ;)
> 
> :)
> 
> 
> __
> Stas Bekman | JAm_pH--Just Another mod_perl Hacker
> http://stason.org/  | mod_perl Guide  http://perl.apache.org/guide 
> mailto:[EMAIL PROTECTED]  | http://perl.orghttp://stason.org/TULARC/
> http://singlesheaven.com| http://perlmonth.com http://sourcegarden.org
> --
> 
> 




Re: performance mongers: since when is using CGI.pm or Apache::Registry dishonorable?

2000-03-31 Thread C. Jon Larsen


Set up a single appRequestHandler() that uses a hash table to map URIs to
functions. It becomes even more useful when you put your library code for
sessions, headers, footers, etc. in the wrapper routine. 
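
A minimal sketch of that dispatch-table idea. In mod_perl the handler
would pull the path from $r->uri and the wrapper would emit headers and
session setup first; here plain subs stand in, and all the names
(app_request_handler, do_login, do_report) are invented:

```perl
use strict;
use warnings;

# One table maps every URI the application serves to a code ref.
my %dispatch = (
    '/login'  => \&do_login,
    '/report' => \&do_report,
);

sub do_login  { return "login page" }
sub do_report { return "report page" }

sub app_request_handler {
    my ($uri) = @_;
    my $code = $dispatch{$uri} or return "404 not found";
    # common wrapper work (session, header, footer) would go here,
    # shared by every page without touching httpd.conf
    return $code->();
}

print app_request_handler('/report'), "\n";   # prints "report page"
```

Adding a page is then one line in the hash rather than another
Location block in httpd.conf.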


-

my $info = {

c jon larsen   =>  'richweb.com',
email: =>  '[EMAIL PROTECTED]',
wireless:  =>  '+804.307.6939',

};

On Fri, 31 Mar 2000, Shevek wrote:

> On Fri, 31 Mar 2000, Vivek Khera wrote:
> 
> > My question to all of you who use handlers directly, how do you manage
> > all your handler mappings?  I've seen it done where you add a
> >  mapping for each handler you use, which corresponds to each
> > "program" you need.  This, in my experience, tends to be error prone
> > and results in much cruft in your httpd.conf file.
> 
> I have a single handler which matches all files, if the file exists
> ($r->finfo and other tests), it returns DECLINED. Else it makes a decision
> in Perl, and dispatches to the relevant handler in Perl. This makes
> loading the Perl script in a server very intensive, a problem which I am
> looking to solve soon.
> 
> Ta.
> 
> S.
> 
> --
> Shevek
> GM/CS/MU -d+ H+>++ s+: !g p2 au0 !a w+++ v-(---) C$ UL$ UB+
> US+++$ UI+++$ P++> L$ 3+ E--- N K- !W(-) M(-) !V -po+ Y+
> t+ 5++ !j !R G' !tv b+++ D++ B--- e+ u+* h++ f? r++ n y?
> Recent UH+>++ UO+ UC++ U?+++ UV++ and collecting.
> 
> 




Re: performance mongers: since when is using CGI.pm or Apache::Registry dishonorable?

2000-03-29 Thread C. Jon Larsen


CGI.pm is a great piece of code, but it's very monolithic. Lincoln/Doug's
libapreq module is probably much faster (I have never run benchmarks) than
CGI.pm, so it makes sense for those who like the Q->param type interface
(I do) for working with CGI environment variables but don't need all the
handy HTML and cookie functions/methods that CGI.pm provides. 

A lot of mod_perl shops have already developed their own templating scheme,
or page object model, or are using one of Mason, ASP, embperl etc . . .
and don't have a lot of use for 85% of CGI.pm's library routines. 

Again, there are C-based modules now for HTTP utils and cookies as well,
which provide more speed. 

The reason many prefer native Apache methods over wrapped cgi scripts is
not just speed, but coding style and maturity. Writing modules and setting
up objects requires more discipline than writing quick scripts and relying
on magic to reset your environment for the next execution run.  I have
applications that now run as deep as 50,000 to 100,000 lines of code. I
don't want wrapped scripts. I want re-usable functions, objects, etc.
As developers learn to write native handlers they are starting down the
path that gets them ready for more serious action (namespace management,
using lexicals properly, etc . . . )

That being said, perl is all about getting your job done before you get
fired. So an elegant registry solution should never be looked down upon
just because it's a registry solution. 

On Wed, 29 Mar 2000, Matt Arnold wrote:

> Many messages on this list perpetuate the notion that usage of CGI.pm and
> Apache::Registry is for beginners.  Popular opinion is that you're not a
> leet mod_perl h@x0r until you wean yourself from CGI.pm and Apache::Registry
> and "graduate" to the Apache API.  With this in mind, there have been many
> messages over the years making blanket statements along the lines of "CGI.pm
> is evil" and/or "Apache::Registry sux".  I'm trying to identify the source
> of this dissatisfaction.  While it may seem that my intent is to start the
> ultimate, end-all CGI.pm/Apache::Registry flame war, please be assured that
> I am interested in ferreting out the real issues.  :-)
> 
> Anyway, in hope of generating some debate, I'll make some (potentially
> inflammatory) assertions:
> 
> 1. An Apache handler doesn't mean CGI.pm *ISN'T* in use
> 
> The "Apache::Registry sux" crowd claims I should forgo Apache::Registry and
> write handlers instead.  Okay, here's my handler:
> 
>   # in httpd.conf
>   
> SetHandler perl-script
> PerlHandler Apache::Foo
>   
> 
>   package Apache::Foo;
>   use strict;
>   use CGI ();
>   sub handler {
> my $q = CGI->new;
> print $q->header;
> print "Hello World\n";
> 200;
>   }
>   1;
> 
> Satisfied?  No Apache::Registry in use here.  Am I a l33t h@x0r now?  No?
> Why not?  Oh, so when the zealots say, "Apache::Registry sux, write handlers
> instead" they really mean I should be using the Apache API instead of
> CGI.pm.  I see.
> 
> I have another beef with the "CGI emulation sux, avoid Apache::Registry"
> crowd.  And that is:
> 
> 2. Just because you don't use Apache::Registry doesn't mean you're not doing
> CGI emulation (*gasp*)
> 
> What exactly is this "evil, bloated, slow CGI emulation" that everyone's
> trying to avoid?  Is it the overhead of setting up all the environment
> variables?  Well, gee whiz, regular handlers do this too unless you tell
> them not to.  Try the following exercise:
> 
>   
> #PerlSetupEnv Off  # try first without; then try with PerlSetupEnv Off
> SetHandler perl-script
> PerlHandler Apache::Env
>   
> 
>   package Apache::Env;
>   use strict;
>   use Data::Dumper qw(Dumper);
>   sub handler {
> my $r = shift;
> $r->content_type('text/plain');
> $r->send_http_header;
> $r->print(Dumper(\%ENV));
> 200;
>   }
>   1;
> 
> So let's not be so quick to curse Apache::Registry for its "slow" CGI
> emulation.  Your "fast" handlers are probably doing the same thing
> unbeknownst to you.
> 
> Another assertion:
> 
> 3. Using Apache::Registry doesn't necessarily mean CGI.pm is at use
> 
> It seems the "Apache::Registry sux" crowd dislikes Apache::Registry because
> it implies that CGI.pm is in use.  Perhaps their real gripe is one should
> use the Apache API instead of CGI.pm's methods.  So how would they feel
> about this:
> 
>   # in httpd.conf
>   
> PerlSetupEnv Off  # we don't need no stinking ENV
> SetHandler perl-script
> PerlHandler Apache::Registry  # or Apache::RegistryNG->handler
> Options +ExecCGI
>   
> 
>   #!/usr/local/bin/perl -w
>   use strict;
>   my $r = Apache->request;
>   $r->content_type("text/html");
>   $r->send_http_header;
>   $r->print("Hi There!");
> 
> Does this count?  Am I a l33t h@x0r because I used the Apache API?  Or am I
> still a lamer for using Apache::Registry?
> 
> I can hear the hordes out there crying, "Why use Apache::Registry if you're
> not using CGI.pm?"  Well, per

RE: mod_perl on Apache 2.0

1999-11-03 Thread C. Jon Larsen


One of the main reasons I use mod_perl is the pre-fork caching
I can do in the parent that the children can share cheaply. I take huge
data structures and assemble them in RAM as read-only databases (read:
hash tables) that are much faster and simpler to access than SQL (I use
SQL only where my data is read/write). 

With all this talk about threaded perl interpreters, what I'd like to be
sure of is that in Apache 2.0 my model does not break. Can each private
perl interpreter keep its cached (shared) data structures in memory?

Most of my mod_perl processes are big (10-12 MB), but easily 90+ percent
of that is shared. I like this setup, and want to be sure that Apache 2.0
does not break it in any way. Threads? Are they really appropriate for
everyone? Can the old behavior be maintained? If not, then we would seem
to be moving to a more PHP-like environment, which would be a major bust
for me. I can triple (sometimes more) the performance of my handlers by
packing my read-only data into memory, and using SQL only as
needed!
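
The pre-fork cache pattern being defended can be sketched like this. In
a real setup the load would run from startup.pl before Apache forks, so
every child shares the pages via copy-on-write; the package name, file
contents, and data shape here are all invented for illustration:

```perl
use strict;
use warnings;

package My::Cache;
our %PRODUCTS;   # package variable, populated once in the parent

sub load {
    # Real code might do: %PRODUCTS = %{ Storable::retrieve($file) };
    # populating from a frozen file built offline.
    %PRODUCTS = (
        1 => { name => 'widget', price => 9.99 },
        2 => { name => 'gadget', price => 4.50 },
    );
}

sub lookup {
    my ($id) = @_;
    # read-only hash access: much cheaper than an SQL round trip
    return $PRODUCTS{$id};
}

package main;
My::Cache::load();                        # parent process, pre-fork
print My::Cache::lookup(2)->{name}, "\n"; # prints "gadget"
```

As long as children never write to the hash, the operating system keeps
the underlying pages shared, which is exactly the behavior the post
hopes Apache 2.0's threading will preserve.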

Any comments ? Am I off base ? Worrying without cause ?

On Thu, 4 Nov 1999, Gerald Richter wrote:

> > I'm assuming that Perl itself is reentrant, since it has been embedded
> > in multithreaded environments before (IIS).  Hopefully someone can
> > comment on that.
> >
> Perl 5.005 has experimental thread support; Perl 5.006 might be stable
> enough to really use it.
> 
> What ActiveState has done for IIS is to pack the Perl interpreter in a
> single C++ object (compile Perl with PERL_OBJECT), and now every request can
> get its own private Perl interpreter. This model (maybe a pool of
> Perl interpreters) may also work for mod_perl. Jochen Wiedman already has
> done some work on making mod_perl compile with a Perl that is built with
> PERL_OBJECT, but a lot of work is left...
> 
> Gerald
> 
> ---
> Gerald Richter  ecos electronic communication services gmbh
> Internet - Infodatenbanken - Apache - Perl - mod_perl - Embperl
> 
> E-Mail: [EMAIL PROTECTED] Tel:+49-6133/925151
> WWW:http://www.ecos.de  Fax:+49-6133/925152
> ---
> 
>