Re: [RELEASE CANDIDATE] Apache::Test 1.03-dev

2003-06-24 Thread Rob Bloodgood
Wednesday, June 18, 2003, 2:13:46 AM, you wrote:

SB I've uploaded 1.03's release candidate. If nobody finds any faults, I'll 
SB upload it tomorrow on CPAN. (libapreq needs to rely on 1.03 fixes to release 
SB its 1.2's version).

SB Please try it out:
SB http://www.apache.org/~stas/Apache-Test-1.03-dev.tar.gz

SB Test it with mod_perl 1.0:

SB perl Makefile.PL -httpd /path/to/1.x/httpd && make test

Fails for me miserably.  For some reason, the test server config
returns 403 on / and /index.html:

t/logs/error_log:
[Tue Jun 24 13:02:22 2003] [info] created shared memory segment #589827

t/logs/access_log:
127.0.0.1 - - [24/Jun/2003:13:02:23 -0700] "GET /index.html HTTP/1.0" 403 208
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "GET / HTTP/1.0" 403 198
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "GET / HTTP/1.0" 403 198
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "GET / HTTP/1.0" 403 198
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "GET / HTTP/1.0" 403 198
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "HEAD / HTTP/1.0" 403 0
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "HEAD / HTTP/1.0" 403 0
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "HEAD / HTTP/1.0" 403 0
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "GET / HTTP/1.0" 403 198
127.0.0.1 - - [24/Jun/2003:13:02:24 -0700] "GET / HTTP/1.0" 403 198

This occurs on *multiple* Linux systems.  I'm using Apache-1.3.27 and
mod_perl 1.27 on all of them.  The above log files are absolutely
identical on all of them except for the timestamps.

I'm listing their system names and perl -V outputs below, for
reference.

I poked around in the Apache::Test tree for a minute, and found a
variable in TestRequest called $DebugLWP.  Setting that to true (see the
snippet after the transcript below) made the output only slightly more
informative, but it did confirm the 403 status on the test requests:

[EMAIL PROTECTED] Apache-Test-1.03]# make test

/usr/bin/perl -Iblib/arch -Iblib/lib \
t/TEST -clean
*** setting ulimit to allow core files
ulimit -c unlimited; t/TEST -clean
APACHE_USER= APXS= APACHE_PORT= APACHE_GROUP= APACHE= \
/usr/bin/perl -Iblib/arch -Iblib/lib \
t/TEST -verbose=0 
*** setting ulimit to allow core files
ulimit -c unlimited; t/TEST -verbose=0
*** root mode: changing the fs ownership to 'nobody' (99:99)
/usr/sbin/httpd -X -d /root/.cpan/build/Apache-Test-1.03/t -f
/root/.cpan/build/Apache-Test-1.03/t/conf/httpd.conf -DAPACHE1
using Apache/1.3.24 

waiting for server to start: .
waiting for server to start: ok (waited 0 secs)
server localhost.localdomain:8529 started
#lwp request:
#GET http://localhost.localdomain:8529/index.html HTTP/1.0
#User-Agent: libwww-perl/5.69
#
#server response:
#HTTP/1.1 403 Forbidden
#Connection: close
#Date: Tue, 24 Jun 2003 20:02:23 GMT
#Server: Apache/1.3.24 (Unix) mod_perl/1.26
#Content-Length: 208
#Content-Type: text/html; charset=iso-8859-1
#Client-Date: Tue, 24 Jun 2003 20:02:23 GMT
#Client-Peer: 127.0.0.1:8529
#Title: 403 Forbidden
#X-Content-Length-Note: added by Apache::TestRequest
#
ping...ok
request# Failed test 1 in request.t at line 11
# Failed test 5 in request.t at line 16
# Failed test 8 in request.t at line 20
# Failed test 9 in request.t at line 22
FAILED tests 1, 5, 8-9
Failed 4/9 tests, 55.56% okay
Failed Test Stat Wstat Total Fail  Failed  List of Failed
---------------------------------------------------------------------------
request.t                  9    4  44.44%  1 5 8-9
*** server localhost.localdomain:8529 shutdown
!!! error running tests (please examine t/logs/error_log)
make: *** [run_tests] Error 1
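(For reference, flipping $DebugLWP on is just a matter of setting the package
variable to true before the requests run, e.g.:

use Apache::TestRequest ();
$Apache::TestRequest::DebugLWP = 1;   # dump the LWP request/response pairs

-- the exact output may vary between Apache::Test versions.)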

So... What more can I check?  What more can I report?  I'm definitely
no newbie but I don't know my way around this code... I can send
anything you need, or try anything.

Thanks in advance!

L8r,
Rob

=
System listings and perl -V outputs.

1) Linux RedHat 6.2

Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
osname=linux, osvers=2.2.5-22smp, archname=i386-linux
uname='linux porky.devel.redhat.com 2.2.5-22smp #1 smp wed jun 2 09:11:51 edt 1999 
i686 unknown '
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
cc='cc', optimize='-O2', gccversion=egcs-2.91.66 19990314/Linux (egcs-1.1.2 
release)
cppflags='-Dbool=char -DHAS_BOOL -I/usr/local/include'
ccflags ='-Dbool=char -DHAS_BOOL -I/usr/local/include'
stdchar='char', d_stdstdio=undef, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -ldl -lm -lc -lposix -lcrypt
libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary 

RE: [OT] Query

2003-01-07 Thread Rob Bloodgood
   I would like to know any such standalone servers that could
 process the perl requests offline (taking requests from a file or
 queue end).

   I definitely would like to get fancier as my requirement is
 immediate.  Upon finding a server that could process the requests
 away from mod_perl, I most probably would modify mod_perl to
 communicate with the standalone servers via sockets (and maybe
 maintain persistence).

Well, I had a need like this, and I wrote a standalone server that my
mod_perl processes communicate with, using POE (http://poe.perl.org).  My
POE server has, among other features, a TCP line-based interface.  I can
test it with a simple telnet, or using netcat(1).  It meets my needs, but be
advised that this server itself had (has!) a pretty substantial development
time investment involved as well... and that is aside from simply learning
how to make things work in POE!
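Just to give a flavor, the heart of such a server in POE is only a handful of
lines (the port number and reply format below are invented, and my real one
does a lot more than this):

use POE qw(Component::Server::TCP);

POE::Component::Server::TCP->new(
    Port        => 8500,
    ClientInput => sub {
        my ($heap, $line) = @_[HEAP, ARG0];
        # ... do the real (offline) work here, then answer the client ...
        $heap->{client}->put("OK: $line");
    },
);

POE::Kernel->run();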

However, by no means should you consider POE your only option... there are
any number of ways to write a server daemon that can communicate with
another process via TCP, a pipe, or whatever.

That being said, this thread is now completely aside from mod_perl, and I
agree it should terminate.  But if you decide to pursue POE, then I'll see
you on the POE list!

L8r,
Rob




RE: [OT] Redirect POST to POST off-site?

2003-01-02 Thread Rob Bloodgood
(sorry about the blank reply a minute ago)

 I am looking into the more advanced paypal instant notification
 stuff for the next version of my sw, but version one is using a
 simpler approach to get it out the door. Even that paypal sw
 wouldn't solve my problem, which is to make sure that the POST to
 paypal actually matches the transaction that the user has built up.

I found IPN to be *very* simple to use: log the notifications to a DB and
then act as required.  I'd even be happy to send you my notification script,
which uses Apache::Registry but really just POSTs back to paypal and, when
the response is 'OK', takes the appropriate action (payment received, account
terminated, etc.) (but please reply privately if you want it).

The only nits I experienced were A) forgetting to send back the OK\r\n to
paypal that they expect to see from a successful notify.  They called me and
wondered if my script was broken... B) having to set up a unique index on my
logging table on the verify_sign field, because in spite of the correct
response chain, paypal has a tendency to notify repeatedly and redundantly.

HTH!

L8r,
Rob




RE: AuthCookieDBI help please.... (more info)

2002-10-16 Thread Rob Bloodgood

 -Original Message-
 From: George Valpak [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, October 16, 2002 3:26 PM
 To: Vegard Vesterheim
 Cc: [EMAIL PROTECTED]
 Subject: Re: AuthCookieDBI help please (more info)


 I am still having trouble with Apache::AuthCookieDBI.

 I tried moving the PerlSetVar line out of the virtual server to
 the main server but nothing in the behavior changed.

 Is it possible that the Apache->server->dir_config() code is
 somehow wrong?

Move ALL mention of the AuthCookieDBI directives OUT of any Directory,
Location, or VirtualHost blocks.  Define the secret-key PerlSetVar BEFORE
the PerlModule directive that loads the module.

The relevant section of my server config looks like this:
# These must be set
PerlSetVar AdminDBI_DSN dbi:Oracle:STATS
PerlSetVar AdminDBI_SecretKeyFile /etc/httpd/conf/sercret.key
PerlSetVar AdminDBI_SecretKey XXX

# moved BELOW AdminDBI_SecretKeyFile so the directive is available at
# BEGIN{} time
PerlModule Apache::AuthCookieDBI
PerlSetVar AdminPath /admin
PerlSetVar AdminLoginScript /scripts/adminlogin.pl
#PerlSetVar AdminLoginScript /error/adminlogin.html

## more directives here

Hope this helps!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




notes/pnotes broke between 1.25=>1.27

2002-06-26 Thread Rob Bloodgood

So I got the advisory about the Apache servers having a security hole, so I
decided to upgrade some servers.  I've been on v1.25 for awhile, so decided
to upgrade to 1.27 while I was at it... big mistake.

NONE of my notes/pnotes were getting thru, on the new version.

It took me 8 or 10 compilations, with 3 different apache versions and 4
different mod_perl versions, to establish that definitively, on my machine
(RedHat Linux 6.2, custom apache, custom perl 5.005_03), the upgrade breaks
notes AND pnotes.

PLEASE tell me I missed something??? RTFM would be ok but I haven't found it
yet.

L8r,
Rob




RE: Problem with DBM concurrent access

2002-04-04 Thread Rob Bloodgood

 So my question narrows down to :
 How to flush on disk the cache of a tied DBM (DB_File) structure
 in a way that any concurrent process accessing it in *read only* mode
 would automatically get the new values as soon as they
 are published (synchronisation)

Isn't that just as simple as

tied(%dbm_array)->sync();

?
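i.e., something along these lines on the writer side (the file name is made
up):

use DB_File;
use Fcntl;

my %dbm_array;
tie %dbm_array, 'DB_File', '/tmp/accounts.db', O_CREAT|O_RDWR, 0644, $DB_HASH
    or die "tie failed: $!";

$dbm_array{last_update} = time;
tied(%dbm_array)->sync;    # flush DB_File's in-memory cache out to disk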

HTH!

L8r,
Rob



RE: Host name lookups are Off but...

2002-04-03 Thread Rob Bloodgood

 We have a mod_perl server that's under constant heavy load.  In
 our Apache
 config we have switched HostnameLookups off using

 HostnameLookups off

 and for the most part, it seems to work.  However, any check of
 the logs or
 /server-status shows that the server is *still* doing
 reverse-lookup of some
 addresses.  Often, a number of apache processes show up as D in
 /server-status, and it's pretty clear that it's slowing things down.

 Does anyone have any idea what might be causing this?  Could it
 be something
 in the mod_perl config?  Nowhere in any of our code do we do hostname
 resolution and for the most part couldn't care less what host/ip
 people come
 from.

 Sorry if this is the wrong list but I have a sneaking suspicion there's
 something about our mod_perl config that's affecting it.

 RTFM's are welcome...  I already tried but maybe I missed something.

This one bit me a couple of years ago.  *IN MY CASE* it was incorrect usage
of Allow/Deny; I had specified
Allow from all
Deny from none

The problem was, the webserver doesn't recognize 'none' as a special value
like it does for 'all'... so 'none' became a hostname,

*** which enabled HostNameLookups for the whole webserver. ***

Look in every single place where you have access control by IP/hostname.
Make sure there are no hostnames, only IPs.  Once Apache turns on
HostnameLookups, it's global.
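In other words, if the intent is to allow everyone, something like this is
enough (Apache 1.3 syntax) -- no Deny line at all:

Order allow,deny
Allow from all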

HTH!

L8r,
Rob


#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Cookies and IE in mod_perl

2002-03-27 Thread Rob Bloodgood

 I've determined that it isn't the redirect causing the cookies not
 to be set.  If I take out the redirect, and just try to set a cookie
 w/o a redirect, it still doesn't set the cookies in IE.  Does M$
 have any docs on how IE6 handles cookies that I can look this up on?

YES, they do.
You have to set up a Privacy Policy,
which means you have to have a P3P header coming out of your webserver with
each request.

You'll want to look up the details and docs, and PLEASE customize for your
own website, but...

*I* fixed this by adding this to my httpd.conf (and I got it from this
mailing list anyway :-):

# P3P Policy (required for IE6 to accept our cookies)
Header add P3P "CP=\"NOI DSP COR CURa PSDa OUR NOR NAV STA\""

This requires mod_headers to be loaded or compiled into Apache.

Good luck!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: 'Pinning' the root apache process in memory with mlockall

2002-03-22 Thread Rob Bloodgood

 Stas Bekman wrote:

  Moreover the memory doesn't get unshared when the parent pages are
  paged out, it's the reporting tools that report the wrong
  information and of course mislead the size limiting modules
  which start killing the processes.
 
 Apache::SizeLimit just reads /proc on Linux.  Is that going to report a 
 shared page as an unshared page if it has been swapped out?
 
 Of course you can void these issues if you tune your machine not to 
 swap.  The trick is, you really have to tune it for the worst case, i.e. 
 look at the memory usage while beating it to a pulp with httperf or 
 http_load and tune for that.  That will result in MaxClients and memory 
 limit settings that underutilize the machine when things aren't so busy. 
   At one point I was thinking of trying to dynamically adjust memory 
 limits to allow processes to get much bigger when things are slow on the 
 machine (giving better performance for the people who are on at that 
 time), but I never thought of a good way to do it.

Ooh... neat idea, but then that leads to a logical set of questions:
Is MaxClients something that can be changed at runtime?
If not, would it be possible to see about patches to allow it?
:-)

L8r
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
 



[WOT] emacs and WEBDAV

2002-03-14 Thread Rob Bloodgood

I'm running a Mason based website, and I use Emacs when I write code.
My web designers use Dreamweaver.  I've designed the site so that my web
guys have to reserve me one table cell (or more than one depending on where
in the site, but you get the point) where I put a single dispatch component
that pulls in the dynamic content appropriately.

The problem is, concurrency.  Dreamweaver has versioning built in... but
emacs has no way to recognize it.  So when I make a fix to a file, if the
designers aren't explicitly instructed to refresh-from-the-website-via-ftp,
my changes get hosed.

DW also speaks WEBDAV natively, but emacs does not.  Emacs speaks CVS
natively, but DW does not.  DW also speaks SourceSafe <shudder>, but I never
took that seriously... :-)

I've been trying, in various attempts over the past two years, to come up
with a compromise between the two.  The closest I've come was when somebody
mentioned a CVS emulation layer over a DAV repository... but that never came
to fruition.  And even more frustrating, I haven't managed to pick up enough
eLisp to do it myself w/ vc.el <sigh>.

Does anybody have any ideas for my next direction to turn?

TIA!

L8r,
Rob




RE: [WOT] emacs and WEBDAV

2002-03-14 Thread Rob Bloodgood

 At 11:30 AM -0800 3/14/02, Rob Bloodgood wrote:
 The problem is, concurrency.  Dreamweaver has versioning built
 in... but emacs has no way to recognize it.  So when I make a fix
 to a file, if the designers aren't explicitly instructed to 
 refresh-from-the-website-via-ftp, my changes get hosed.

 Versioning, no.  Locking, yes, optionally.  (Well, I guess it can do
 versioning via SourceSafe, but not via anything else.)  I'm seriously
 hoping they'll address that in the next release.

<sigh> I meant locking.  Not versioning.  e-Foot in e-Mouth.

 Emacs over WebDAV should work fine if you run something that supports
 WebDAV as a filesystem (e.g. OSX), but that's not going to help you
 much.

If we're talking about LOCKING, is this statement still true?

 There are two options I can think of.

 1. If your designers aren't making use of checkin/checkout in
 DreamWeaver, then simply make it clear to them that before they can
 save a file to the server, they have to do a sync first.  Make the
 final repository sit on CVS, and do a checkin every night.  So if
 something does go wrong you can at least pick up the previous day's
 work.

That (the train-them-to-sync-first part) has been what I've been forced to
do so far.  I haven't gone so far as to set up a CVS for the website tho.
Thx for the tip, I'll look into it.

 2. DreamWeaver's locking mechanism is handled by placing lock files
 on the server.  Those files have the info about who has what.  It
 ought to be possible to write an emacs extension that would use those
 files.

Certainly.  But my original message mentioned the REAL source of my
frustration: I'm pretty limited at elisp, otherwise I might have already had
this worked out. :-)

L8r,
Rob




RE: mod_perl and perl RPMs and Oracle 9iAS

2002-03-06 Thread Rob Bloodgood

 Perrin Harkins wrote:
 Rafael Caceres wrote:
 I'm facing a dilemma here. We are testing an Oracle 9iAS installation
 (Apache 1.3.19, mod_ssl 2.8.1, mod_perl 1.25 as DSO, Perl 5.005_03) on
 Red Hat Linux 7.2, which itself came with Perl 5.6.0, and from your
 comments, that's bad..
 
 First of all, if it's working for you then don't worry about it.

 I have not started testing scripts that currently work on other boxes. I
 will install the required modules for the 5.005_03 perl used by Oracle
 9iAS, and see what happens.
 This road forces me to have the two perl versions coexisting, or,
 to search
 for all the perl modules installed for the 5.6 version by the rpm's on
 initial installation, install them for the 5.005_03 version and
 then remove
 the 5.6 one permanently.

OK, for starters:
Oracle includes their own version of perl/apache/mod_perl for the Web
interface they are bundling with the new 9i servers.  It's their own
version, built by their own people, for their own usage, on their own
product, in its own path, under the Oracle product installation tree.

Let 'em have it.  It's only a few megs of disk space, and if your 9i
installation works, GREAT.  Don't think of it as two versions co-existing.
Think of it as Oracle's insurance to themselves that their system will have
the exact parts it needs.  Besides, except for a few configuration files,
shouldn't everything under $ORACLE_HOME be considered hands-off anyway?

Now, on to the real world: 10 minutes ago I just saw a post by a RedHat
employee stating that there are new RPM's for Perl 5.6.1 and the latest
mod_perl.  Which means you can download and install them, and THEN begin
installing other modules, like Apache::DBI, Apache::Session, etc etc
according to your needs, into the real perl installation tree, where all
of YOUR system's perl modules live.

 Yes, there are at least two modules: mod_plsql and mod_oprocmgr for which
 there is no source, so rebuilding seems to be out of the question

Those modules are *only* for the Oracle administrative webservice, as I
mentioned above.  If you want to use Oracle from Perl/mod_perl, do it like
everybody else: DBI and DBD::Oracle (for the record, I built them for 9i
several months ago with 0 headaches).  This *does* include the ability to
execute PL/SQL.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: mod_perl and perl RPMs and Oracle 9iAS

2002-03-06 Thread Rob Bloodgood

 I've always used DBI along with DBD::Oracle for Database access, and I
 intend to use them along  Oracle 9iAS's other capabilities.

 So if I'm following you correctly, the steps involved are:
 -get the 5.6.1 RPM (which doesn't seem to be in Red Hat's site anyway)
 -get the Apache 1.3.19 sources (to be used in the next step), then
 'discarded' without installing Apache per se.
 -get the mod_perl 1.24_01-2.src.rpm and compile it as a DSO
 -reinstall all previously installed packages, so other programs
 using them
 keep working
 -install the modules the mod_perl apps require
 -change the apachectl and httpd.conf files to reflect the proper
 perl 'home'
 -change httpd.conf to load the mod_perl.so file from it's new location

 Is this list OK?

Hmm... if you like RPMs, then you should:
-download the updated perl-5.6.1 from the UPDATES/ERRATA section for RH7.2
-reinstall all required packages, USING CPAN for the stuff you needed
before.

The rest depends: are you comfortable with the RH rpm version of Apache?  If
you use that, plus the new, updated mod_perl-1.26 RPM (which is a DSO, and is
also on the Errata page), then reconfiguration and recompilation are no
longer necessary.  Otherwise, you have the right idea.

   Yes, there are at least two modules: mod_plsql and mod_oprocmgr
   for which which there is no source, so rebuilding seems to be
   out of the question
 
 Those modules are *only* for the Oracle administrative webservice, as I
 mentioned above.  If you want to use Oracle from Perl/mod_perl, do it
like
 everybody else: DBI and DBD::Oracle (for the record, I build them for 9i
 several months ago with 0 headaches).  This *does* include the ability to
 execute PL/SQL.

 The mod_plsql is called heavily from the Oracle 9iAS Portal
 applets, so it needs to be kept in place.

So are you using Oracle Portal applets, or mod_perl?  We seem to have
miscommunicated somewhere.

Yes, it needs to be kept in place... because you aren't touching that copy
of apache and perl, right? :-) I mean, if you want to use the supplied
Oracle stuff that badly, then put it on a different port number.  That way
you can reference the Oracle stuff without being trapped in a little box
where you're afraid to recompile, reconfigure, or otherwise make things more
useful for YOUR situation.

L8r,
Rob




RE: Multiple Location directives question

2002-03-05 Thread Rob Bloodgood

 Answering my own question, I stupidly forgot that I had a TransHandler up
 above mucking my URLs before the Location directives got a chance
 to try to match  So my /foo location block was never seeing a /foo URL

 Still, I'm glad to see that the old system of post to a public list and
 then immediately find your dumb error still works like a charm :)

Well, my response has just left the building, and the only thing I can think
of right now (about my hasty response) is something I once saw on FidoNet:

Open mouth, insert foot, echo internationally

sigh

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Apache::Session

2002-02-25 Thread Rob Bloodgood

 I am using Apache::Session with Postgresql. Unfortunately I had
 never worked with a huge amount of data before I started to program
 something like a (little) web application. I happily packed
 everything in the session(s-table) that might be of any use. It
 hit me hard that it takes a veeey long time to get all the stuff
 out of the session(s-table) each time the client sends another
 request.

Sorry if this is obvious, but
do you have an index on your sessions table, on the session-id column?
Without an index, PG has to do a full table scan for each request, which
means the more sessions you have, the slower each lookup gets.  Whereas if
you index SESSIONID (or SESSION_ID or whatever it is), it can go right to
the row in question and return it immediately.
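If it's missing, a one-off along these lines takes care of it (my table and
column names are just an example -- use whatever Apache::Session::Postgres
is actually configured with):

use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=myapp', 'user', 'password',
                       { RaiseError => 1 });
$dbh->do('CREATE UNIQUE INDEX sessions_id_idx ON sessions (id)');
$dbh->disconnect;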

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: [ANNOUNCE] libapreq 1.0 released [OT?]

2002-02-25 Thread Rob Bloodgood


 more information is at:

   http://httpd.apache.org/apreq/

Am I the only one that noticed that the web page thinks 1.0 was released 4
months before 0.33? :-)

News
February 21, 2001 - libapreq-1.0 was released.

June 19, 2001 - libapreq-0.33 was released.

December 23, 2000 - libapreq-0.31_03 was released for testing.

December 17, 2000 - libapreq-0.31_02 was released for testing

L8r,
Rob




RE: mod_perl, mod_gzip, incredible suckage

2002-02-14 Thread Rob Bloodgood

  Ditto here. Working quite well on fairly high volume servers.

 Hrmm how interesting.  My Apache is built with PHP (with DOM, MySQL, and
 Postgres) and mod_perl.  With mod_gzip enabled it simply segfaults on
 every single request.

have you looked at the work at http://www.apachetoolbox.com/ ?
This guy has an automated system for choosing what modules/packages to
install, then download/extract/patch/compile/install automatically.

He seems to be able to get ssl, php, gzip, and mod_perl all working at the
same time.

That might not be exactly what you need, but if you review how he handles
gzip/php/etc., you might spot the obscure flag or switch that is affecting
things poorly.

Good luck!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: [QUESTION]PerlHandler and PerlLogHandler Phase

2002-02-01 Thread Rob Bloodgood

 2.If the answer to the above question is YES? The
 Handler will add headers,footers for everything. What
 do I need to do to apply the handler logic just to the
 requested page and return the remaining files that are
 needed to complete the requested page as they are?

In the Eagle book (as well as a Perl Journal article) there is an example of
an Apache::Header/Apache::Footer module pair.  CPAN doesn't show them right
now, but you could implement them as filters using Apache::Filter to mark up
each document on its way out, based on URI.
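Very roughly, a footer filter would look like this (this is from memory, so
treat the details as approximate and check the Apache::Filter docs; the
package name and URI test are made up):

package My::Footer;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift->filter_register;
    my ($fh, $status) = $r->filter_input;   # output of the previous handler
    return $status unless $status == OK;

    print while <$fh>;                      # pass the document through
    print "<hr>standard footer here\n" if $r->uri =~ m{^/en/course/};
    return OK;
}
1;

# httpd.conf (roughly):
#   SetHandler perl-script
#   PerlHandler My::Content My::Footer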

 3. When I move these JS files outside the /en/course
 URI they seem to work? But now when I put them with
 in? It just displays the Javascript code like simple
 text on the browser.

<SCRIPT SRC="/en/course/one.js"></SCRIPT>
... or you could template them in directly, since you're playing w/ the
content already.

 4. In the Logging Phase, I need to store the last
 requested page as a bookmark. So if the user logs out,
 and logs back in it takes him to the same page. Since
 the html files are made up of some many requests to
 other files, it stores the last file it requested. It
 may be path to an image file,style sheet file etc...
 Is there any way I can circumvent this problem?

You could use a cookie, issued with each document, noting what URL they are
on right now.  Logging it (storing it) and then reading it back is bound
to be way too much work.
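If you do go the cookie route, something like this from a fixup handler is
about all it takes (the package name, cookie name, and the .html test are
just an illustration):

package My::Bookmark;
use Apache::Constants qw(DECLINED);
use CGI::Cookie ();

sub handler {
    my $r = shift;
    return DECLINED unless $r->uri =~ /\.html$/;   # skip images, JS, CSS...

    my $cookie = CGI::Cookie->new(-name  => 'last_page',
                                  -value => $r->uri,
                                  -path  => '/');
    $r->headers_out->add('Set-Cookie' => $cookie->as_string);
    return DECLINED;    # let the normal content handler run
}
1;

# httpd.conf:  PerlFixupHandler My::Bookmark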

HTH!

L8r,
Rob




[OT] RE: New mod_perl Logo

2002-01-30 Thread Rob Bloodgood

Uhh... the platypus, the wombat, the tazmanian devil, and the emu.
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, January 30, 2002 1:54 PM
  To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
  Subject: Re: New mod_perl Logo


  In a message dated 30-Jan-02 6:08:29 AM GMT Standard Time,
[EMAIL PROTECTED] writes:



All these American-style names are verging on the racist.

This is world-wide code, not f---ing American-wide code.




  Don't let the crappy AOL account fool you. Nessie is about 3 hours from
here. The Yeti I believe is indigenous to Asia, isn't it?

  And as for australian beasties ... I just couldn't think of any off the
top of my head ... .

  -Chris
  [EMAIL PROTECTED]




test, pls ignore

2001-12-31 Thread Rob Bloodgood

un/re subscribed to a different addy, THIS IS JUST A TEST!



RE: Comparison of different caching schemes

2001-12-14 Thread Rob Bloodgood

 Another powerful tool for tracking down performance problems is perl's
 profiler combined with Devel::DProf and Apache::DProf.  Devel::DProf
 is bundled with perl. Apache::DProf is hidden in the Apache-DB package
 on CPAN.

Ya know the place in my original comment where I was optimizing a different
subsystem?  I just discovered Devel::DProf last week (after 5 *years* of
perl... smacks forehead), and was using that.  *AND* had improved a sore
spot's performance by 10% without even working hard, because of profiling.
Point taken.

 At the same time I added some code to track the time it takes to
 process a request using Time::HiRes.  This value is set as a note via
 $r->notes('REQTIME').  A CustomLog directive takes care of dumping that
 value in the logs...

Hmm... I was already logging a status message via warn(), so I did the SAME
TRICK but stored it in a local variable because I didn't need to go as far
as a CustomLog...

Sounds like great minds think alike! :-)
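For reference, the skeleton of that notes + CustomLog trick looks something
like this (package name and log format are made up, and whether the
PerlLogHandler fires before mod_log_config depends on module order, so
double-check on your build):

package My::Timer;
use Time::HiRes qw(gettimeofday tv_interval);
use Apache::Constants qw(OK);

sub start {
    my $r = shift;
    $r->pnotes(START => [gettimeofday]);
    return OK;
}

sub finish {
    my $r = shift;
    my $elapsed = tv_interval($r->pnotes('START'));
    $r->notes(REQTIME => sprintf('%.4f', $elapsed));
    return OK;
}
1;

# httpd.conf:
#   PerlInitHandler My::Timer::start
#   PerlLogHandler  My::Timer::finish
#   CustomLog logs/timing_log "%h %r %{REQTIME}n"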

L8r,
Rob
#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Auth Handlers

2001-12-11 Thread Rob Bloodgood

 : )  No problem,  I guess I am unsure if this is the proper way
 to setup an
 Access, Authen, Authz handler.  When I use this configuration my
 'handler()'
 method does not get called and I get an error in the logs:

This is *not* the correct way to invoke it.

   <Directory /home/stathy/apache/html>
   AuthName Login

# This is incorrect
#  AuthType Base::Session::Handler

# *This* is what you need if you want the
# browser to prompt for a username/pass
AuthType Basic
   require valid-user

   PerlAuthenHandler Base::Session::Handler
   </Directory>


I just checked my answers from the Eagle (Writing Apache Modules with Perl
and C), and that's the correct way.  If I'm not mistaken, the chapter on
Authentication is one of the sample chapters that's online at
http://www.modperl.com.  Have a look over there, it'll straighten you right
out. :-)

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: anyone know a trick with displaying 401 error messages for top level protected sites?

2001-12-11 Thread Rob Bloodgood


 for example if the protected url was http://www.site.com/ the user
 would be redirected to http://www.site.com/error/401 for the error
 message.. and because it's protected it wouldn't display the custom error
 page, instead displaying the following error: "Additionally, a 401
 Authorization Required error was encountered while trying to use an
 ErrorDocument to handle the request."  Which I can understand.


How about this:
Put your 401 html page into a directory like /error.
Set the PerlAuthenHandler for /error to Apache::Constants::OK:

<Location /error>
AuthType Basic
PerlAuthenHandler Apache::Constants::OK

# This 'require' is actually required. :-)
require valid-user
</Location>

Do the same for the dir where any/all of its images are located 
-- or -- 
Put the images specific to the 401 handler in /error.

That should do it.
(but I haven't tested it, so YMMV :-).


L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
 



RE: ErrorDocument 401 problem..

2001-12-10 Thread Rob Bloodgood

 How I expected the ErrorDocument directive to behave was as
 follows: WHEN there was an error 401 (i.e. the user had logged in 3
 times and failed) there would be an error page shown (in this case
 it would be /error/401).  But instead what seems to be happening is that as
 soon as a user goes to an authorisation-required area they are just
 shown the ErrorDocument page right away without even being asked
 to log in. Forgive me if I'm wrong, but this surely means the
 ErrorDocument handler is behaving in a non-standard way with
 mod_perl as opposed to using static pages as error urls. I have
 included the module; it's only small and I've cut out some of the
 useless code to keep it short.

is your module calling
$r->note_basic_auth_failure
before it does a
return AUTH_REQUIRED;
?

$r->note_basic_auth_failure inserts the WWW-Authenticate header into the
header stream, which, when combined with a 401 error, signals the browser to
give a password prompt.  The BROWSER SOFTWARE (*NOT* your webserver) will
prompt the user for a password, often 3 times, but that's not required.
After that third time, the browser simply displays the 401 page that it's
been seeing *the whole time*, instead of prompting you for a password again.
Without that WWW-Authenticate header, the browser has no recourse
except to show the 401.
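The usual skeleton looks like this (it's the straight Eagle-book pattern;
check_password() stands in for whatever your own lookup is):

package My::Authen;
use Apache::Constants qw(OK AUTH_REQUIRED);

sub handler {
    my $r = shift;
    my ($status, $password) = $r->get_basic_auth_pw;
    return $status unless $status == OK;   # e.g. no credentials sent yet

    my $user = $r->connection->user;
    return OK if check_password($user, $password);   # your own check here

    $r->note_basic_auth_failure;   # adds the WWW-Authenticate header
    $r->log_reason("bad password for $user", $r->uri);
    return AUTH_REQUIRED;
}
1;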

HTH!

L8r,
Rob




RE: [OT] Re: How to create a browser popup window

2001-11-20 Thread Rob Bloodgood

 You must include code to deal with the fact that you may have already
 opened a popup window. Something like this:

That is simply not true.  window.open() with a named window ('popupwin', in
your example) ALWAYS reuses that window, on every browser I've ever been
able to test.  The second call to window.open, with a new URL, simply
refreshes the contents of the popup w/ the new URL.  Note, this is *only*
true for named windows.  Windows without a window name string as the second
parameter to window.open() will open a new window every time.

It can, however, be a good idea to explicitly call focus() on your child
window, because in the situation I've just mentioned, if the child window's
url is refreshed, it is NOT automatically brought to the foreground.

The original post was wondering how to put mod_perl output in a popup
window.  The answer is simply to call window.open() with the URL of the
mod_perl handler as its location.

If one is trying to be responsible about the window(s) being open, adding
a link like

<a href="javascript:window.close()">CLICK HERE TO CLOSE THIS WINDOW</a>

in the child window is usually reasonably simple for the user to understand.
Of course, the normal caveats about users understanding something still
apply...

A corrected version of your sample script follows.  It's much simpler now...
:-)

 <SCRIPT LANGUAGE="JavaScript">
   <!-- Hide
 var popupwin = null;
 function popup(loc,ww,hh) {
   var mywidth = (ww + 10);
   var myheight = (hh + 10);
   var myspecs =
 'menubar=1,status=1,resizable=1,location=1,titlebar=1,toolbar=1,' +
 'scrollbars=1,width=' + mywidth + ',height=' + myheight;

 popupwin = window.open (loc, 'popupwin', myspecs);
 popupwin.focus();
 }
 </SCRIPT>

  <A HREF='javascript:' onClick='popup("foo.gif",300,200)'>Look at foo</A>


L8r,
Rob
#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: no_cache()

2001-11-16 Thread Rob Bloodgood


#set the content type
 $big_r->content_type('text/html');
 $big_r->no_cache(1);
 
   # some more code
 
   return OK;

You *are* remembering to do

$r->send_http_header();

somewhere in (some more code), aren't you?
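For reference, the usual order in a mod_perl 1.x content handler (package
name invented):

package My::Page;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    $r->content_type('text/html');
    $r->no_cache(1);
    $r->send_http_header;    # without this, nothing reaches the client
    $r->print("<html><body>hello</body></html>\n");
    return OK;
}
1;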

L8r,
Rob
#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
 



RE: [JOB] Red Hat Network Web Engineer positions open

2001-11-07 Thread Rob Bloodgood

 We have a couple openings doing intense and interesting mod_perl work
 here at Red Hat.  Formal description is below.  Key skills are perl,
 mod_perl, apache, and DBI (especially Oracle).  Must relocate to
 Research Triangle Park, North Carolina.

If only Red Hat was in Oregon... sigh.

L8r
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
 



RE: Apache::AuthCookie

2001-09-28 Thread Rob Bloodgood


  Does anyone know where I can find documentation to install
  and configure
  Apache::AuthCookie? The docs that come with it are thin and
  do not provide
  much information.

 you're kidding, right?

 [geoff@jib Apache-AuthCookie-2.011]$ perldoc AuthCookie.pm  | wc -l
 462

Verbiage and ASCII art do not good documentation make.

I'm *not* a newbie, but it took me almost a day to install a RUDIMENTARY
AuthCookie setup... because the docs were so thin.  And unclear.

My $.02.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Callback called exit. x 100000

2001-09-11 Thread Rob Bloodgood

 Attempt to free unreferenced scalar during global destruction.
 Attempt to free unreferenced scalar during global destruction.
 Attempt to free unreferenced scalar during global destruction.
 Attempt to free unreferenced scalar during global destruction.
 Out of memory!
 Callback called exit.
 Callback called exit.
 Callback called exit.
 Callback called exit.
 Callback called exit.
 Callback called exit.
 Callback called exit.
 Callback called exit.
 etc.


 The last line "Callback called exit." is repeated a million times until
 the whole disc is full. It happens from time to time, I can't say when.
 Maybe it has something to do with heavy load, the webserver has about 10
 hits / second, in peak times more.

Your webserver has run out of memory.

As in: you have NOT limited the resources that apache/mod_perl is allowed
to use, and it has exceeded both RAM and swap.

Look into Apache::SizeLimit.
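The setup is only a couple of lines; the numbers below are just an example,
and the variable names differ a bit between Apache::SizeLimit versions, so
check its POD:

# startup.pl
use Apache::SizeLimit ();
$Apache::SizeLimit::MAX_PROCESS_SIZE       = 64 * 1024;   # in KB, i.e. ~64MB
$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;

# httpd.conf
#   PerlCleanupHandler Apache::SizeLimit   (or PerlFixupHandler -- see the POD)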

HTH!


L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Callback called exit. x 100000, additional info

2001-09-11 Thread Rob Bloodgood


Also, look into the MaxServers settings, and memory calculations in the
Guide:


http://perl.apache.org/guide/config.html#MinSpareServers_MaxSpareServers_

And especially
http://perl.apache.org/guide/performance.html#Choosing_MaxClients

GOOD LUCK!

L8r,
Rob




RE: Virtual Host?

2001-09-10 Thread Rob Bloodgood


 i think you may have to mount it
 mount -t smb -o username=user,password=pass //ntserver//disk7
 /mnt/smbshare

 then just add /mnt/smbshare to doc root!

Except that, to the best of my knowledge, Samba can only mount to regular
mount points on Linux.

Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





Shared memory caching revisited (was it's supposed to SHARE it, not make more!)

2001-09-04 Thread Rob Bloodgood

  One of the shiny golden nuggets I received from said slice was a
  shared memory cache.  It was simple, it was elegant, it was
  perfect.  It was also based on IPC::Shareable.  GREAT idea.  BAD
  juju.

 Just use Cache::Cache.  It's faster and easier.

Now, ya see...
Once upon a time, not many moons ago, the issue of Cache::Cache came up,
regarding the SharedMemory cache and the fact that it has NO locking
semantics.  When I found that thread while searching for ways to implement my
own locking scheme to make up for this lack, I came upon YOUR comments that
perhaps Apache::Session::Lock::Semaphore could be used, without any of the
rest of the Apache::Session package.

That was a good enough lead for me.

So I went into the manpage, and I went into the module, and then I
misunderstood how the semaphore key was determined and wasted a good hour
or two trying to patch it.  Then I reverted to my BASICS: Data::Dumper is
your FRIEND.  Print DEBUGGING messages.  Duh, of course, except for some
reason I didn't think to worry about that at first in somebody else's
module. <sigh>  So I saw what I did wrong, undid the patches, and:

A:S:L:S makes the ASSUMPTION that the argument passed to its locking methods
is an Apache::Session object.  Specifically, that it is a hashref of the
following (at least partial) structure:

{
  data => {
    _session_id => (something)
  }
}

The _session_id is used as the seed for the locking semaphore.  *IF* I
understood the requirements correctly, the _session_id has to be the same
FOR EVERY PROCESS in order for the locking to work as desired, for a given
shared data structure.
So my new caching code is at the end of this message.

***OH WOW!***  So, DURING the course of composing this message, I've
realized that the function expire_old_accounts() is now redundant!
Cache::Cache takes care of that, both with expires_in and max_size.  I'm
leaving it in for reference, just to show how it's improved. :-)

***OH WOW! v1.1*** :-) I've also just now realized that the call to
bind_accounts() could actually go right inside lookup_account(), if:
1) lookup_account() is the only function using the cache, or
2) lookup_account() is ALWAYS THE FIRST function to access the cache, or
3) every OTHER function accessing the cache has the same call,
   of the form bind() unless defined $to_bind;

I think for prudence I'll leave it outside for now.

L8r,
Rob

%= snip =%

use Apache::Session::Lock::Semaphore ();
use Cache::SizeAwareSharedMemoryCache ();

# this is used in %cache_options, as well as for locking
use constant SIGNATURE => 'EXIT';
use constant MAX_ACCOUNTS => 300;

# use vars qw/%ACCOUNTS/;
use vars qw/$ACCOUNTS $locker/;

my %cache_options = ( namespace => SIGNATURE,
  default_expires_in =>
  max_size => MAX_ACCOUNTS );

sub handler {

# ... init code here.  parse $account from the request, and then:

bind_accounts() unless defined($ACCOUNTS);

# verify (access the cache)
my $accountinfo = lookup_account($account)
  or $r->log_reason("no such account: $account"), return
HTTP_NO_CONTENT;

# ... content here

}


# Bind the account variables to shared memory
sub bind_accounts {
warn "bind_accounts: Binding shared memory" if $debug;

$ACCOUNTS =
  Cache::SizeAwareSharedMemoryCache->new( \%cache_options ) or
croak( "Couldn't instantiate SizeAwareSharedMemoryCache : $!" );

# Shut up Apache::Session::Lock::Semaphore
$ACCOUNTS->{data}->{_session_id} = join '', SIGNATURE, @INC;

$locker = Apache::Session::Lock::Semaphore->new();

# not quite ready to trust this yet. :-)
# We'll keep it separate for now.
#
#$ACCOUNTS->set('locker', $locker);

warn "bind_accounts: done" if $debug;
}

### DEPRECATED!  Cache::Cache does this FOR us!
# bring the current session to the front and
# get rid of any that haven't been used recently
sub expire_old_accounts {

### DEPRECATED!
return;

my $id = shift;
warn "expire_old_accounts: entered\n" if $debug;

$locker->acquire_write_lock($ACCOUNTS);
#tied(%ACCOUNTS)->shlock;
my @accounts = grep( $id ne $_, @{$ACCOUNTS->get('QUEUE') || []} );
unshift @accounts, $id;
if (@accounts > MAX_ACCOUNTS) {
my $to_delete = pop @accounts;
$ACCOUNTS->remove($to_delete);
}
$ACCOUNTS->set('QUEUE', \@accounts);
$locker->release_write_lock($ACCOUNTS);
#tied(%ACCOUNTS)->shunlock;

warn "expire_old_accounts: done\n" if $debug;
}

sub lookup_account {
my $id = shift;

warn "lookup_account: begin" if $debug;
expire_old_accounts($id);

warn "lookup_account: Accessing \$ACCOUNTS{$id}" if $debug;

my $s = $ACCOUNTS->get($id);

if (defined $s) {
# SUCCESSFUL CACHE HIT
warn "lookup_account: Retrieved accountinfo from Cache (bypassing SQL)"
  if $debug;
return $s;
}

## NOT IN CACHE... refreshing.

warn "lookup_account: preparing SQL" if $debug;

# ... do some SQL here.  Assign results to 

RE: ErrorDocument + Apache request tracing (OOPS!)

2001-09-04 Thread Rob Bloodgood

my sample code, from my last message, was incomplete... you should be sure
to

return OK;

when the authentication is successful... <sigh>

L8r,
Rob




RE: Shared memory caching revisited (was it's supposed to SHARE it, not make more!)

2001-09-04 Thread Rob Bloodgood

  The _session_id is used as the seed for the locking semaphore.
  *IF* I understood the requirements correctly, the _session_id has
  to be the same FOR EVERY PROCESS in order for the locking to work
  as desired, for a given shared data structure.

 Only if you want to lock the whole thing, rather than a single
 record.  Cache::Cache typically updates just one record at a time,
 not the whole data structure, so you should only need to lock that
 one record.

Uhh... good point, except that I don't trust the Cache code.  The AUTHOR
isn't ready to put his stamp of approval on the locking/updating.  I'm
running 10 hits/sec on this server, and "last write wins", which ELIMINATES
other writes, is not acceptable.

 I had a quick look at your code and it seems redundant with
 Cache::Cache.  You're using the locking just to ensure safe updates,
 which is already done for you.

Well, for a single, atomic lock, maybe.  My two points above are the reason
for my hesitancy.  Additionally, what if I decide to add to my handler?  What
if I update more than one thing at once?  Now I've got the skeleton based on
something that somebody trusts (A:S:L:S), vs what somebody thinks is
alpha/beta (C:SASMC).

In other words

TIMTOWTDI! :-)

L8r,
Rob




RE: Shared memory caching revisited (was it's supposed to SHARE it, not make more!)

2001-09-04 Thread Rob Bloodgood

  Uhh... good point, except that I don't trust the Cache code.  The AUTHOR
  isn't ready to put his stamp of approval on the locking/updating.

 That sort of hesitancy is typical of CPAN.  I wouldn't worry about it.  I
 think I remember Randal saying he helped a bit with that part.  In my
 opinion, there is no good reason to think that the Apache::Session locking
 code is in better shape than the Cache::Cache locking, unless you've
 personally reviewed the code in both modules.

Well, the fact is, I respect your opinion.  And YES, it seems like I'm doing
more work than is probably necessary.  I've been screwed over SO MANY TIMES
by MYSELF not thinking of some little detail, that I've developed a tendency
to design in redundant design redundancy :-) so that if one thing fails, the
other will catch it.  This reduces downtime...

  I'm running 10 hits/sec on this server, and last write wins,
  which ELIMINATES other writes, is not acceptable.

 As far as I can see, that's all that your code is doing.  You're
 simply locking when you write, in order to prevent corruption.  You
 aren't acquiring an exclusive lock when you read, so anyone could
 come in between your read and write and make an update which would
 get overwritten when you write, i.e. last write wins.

Again, good point... I'm coding as if the WHOLE cache structure will break
if any little thing gets out of line.  I was trying to think in terms of
data safety like one would with threading, because A) I was worried about
whether shared memory was as sensitive to locks/corruption as threading, and
B) I reviewed Apache::Session's lock code, but didn't review Cache::Cache's
(20/20 hindsight, ya know).

 You're more than welcome to roll your own solution based on your
 personal preferences, but I don't want people to get the wrong idea
 about Cache::Cache.  It handles the basic locking needed for safe
 updates.

Then my code just got waaay simpler, both in terms of data flow and
individual coding sections.  THANK YOU! :-)

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Compile problem w/ mod_perl-1.26 apache_1.3.20

2001-08-31 Thread Rob Bloodgood

  cp apaci/perl_config ../apache_1.3.20/src/modules/perl/perl_config
  ^^

 the file is copied all right

 [snip]
  Creating Makefile
  Creating Configuration.apaci in src
+ id: mod_perl/1.26
+ id: Perl/v5.6.0 (linux) [perl]
  modules/perl/mod_perl.config.sh: ./modules/perl/perl_config: No
 such file or directory

 You probably have a wrong path to perl in the first line of
 ../modules/perl/perl_config, that's what the error message says (I know
 it's not very clear from the error message, but it's a perl's error).

Actually...
is there any chance that the file was UNZIPPED or in some other way
dos-ized?  I've seen my shell barf when a script with DOS line endings
interferes with the way the shell interprets the shebang... "No such file or
directory" on
/usr/bin/perl<lf>
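If that's what happened, a quick way to check and fix it (using the path from
the error above):

perl -ne 'print if /\r/' ./modules/perl/perl_config   # any output => DOS line endings
perl -pi -e 's/\r$//'    ./modules/perl/perl_config   # strip the CRs in place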

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





IPC::Shareable, or, it's supposed to SHARE it, not make more!

2001-08-31 Thread Rob Bloodgood

So, once upon a time, I bought the Eagle and realized I had purchased a
small slice of heaven.

One of the shiny golden nuggets I received from said slice was a shared
memory cache.  It was simple, it was elegant, it was perfect.  It was also
based on IPC::Shareable.  GREAT idea.  BAD juju.

The code in expire_old_accounts is creating a new tied ARRAYREF instead of
replacing the value of the hash key on this line:

$ACCOUNTS{'QUEUE'} = [@accounts]; #also tried \@accounts;

This didn't happen w/ IPC::Shareable 0.52.  But 0.6 is apparently very
different, and I can't make the code look the way it wants, so that the new
reference is a replacement, not an autovivification.

HELP!

My code follows:

use vars qw/%ACCOUNTS/;

sub handler {

...
# bind accounts structure to shared memory
bind_accounts() unless defined(%ACCOUNTS) && tied(%ACCOUNTS);

my $accountinfo = lookup_account($account)
  or $r->log_reason("no such account: $account"), return
HTTP_NO_CONTENT;

}

# Bind the account variables to shared memory using IPC::Shareable
sub bind_accounts {
warn "bind_accounts: Binding shared memory" if $debug;

unless (tied(%ACCOUNTS)) {
tie (%ACCOUNTS,
 'IPC::Shareable',
 SIGNATURE,
 { create => 1,
   destroy => 0,
   mode => 0666,
 }
) or die "Couldn't bind shared memory: $!\n";
}
warn "bind_accounts: done" if $debug;
}

# bring the current session to the front and
# get rid of any that haven't been used recently
sub expire_old_accounts {
my $id = shift;
warn "expire_old_accounts: entered\n" if $debug;

tied(%ACCOUNTS)->shlock;
my @accounts = grep($id ne $_, @{$ACCOUNTS{'QUEUE'}});
unshift @accounts, $id;
if (@accounts > MAX_ACCOUNTS) {
my $to_delete = pop @accounts;
delete $ACCOUNTS{$to_delete};
}
$ACCOUNTS{'QUEUE'} = [@accounts]; #also tried \@accounts;
tied(%ACCOUNTS)->shunlock;

warn "expire_old_accounts: done\n" if $debug;
}


sub lookup_account {
    my $id = shift;

    warn "lookup_account: begin" if $debug;
    expire_old_accounts($id);

    warn "lookup_account: Accessing \$ACCOUNTS{$id}" if $debug;
    my $s = $ACCOUNTS{$id};

    if ($s and @{$s->{cat}}) {
    # SUCCESSFUL CACHE HIT
    warn "lookup_account: Retrieved accountinfo from Cache (bypassing SQL)" if $debug;
    warn Data::Dumper->Dump([$s],[qw/s/]) if $debug;
    return $s;
    }

    ## NOT IN CACHE... refreshing.

    warn "lookup_account: preparing SQL" if $debug;

# ... look up some data here.  store in $s

    warn "lookup_account: locking shared mem" if $debug;
    tied(%ACCOUNTS)->shlock;
    warn "lookup_account: assigning \$s to shared mem" if $debug;
    $ACCOUNTS{$id} = $s;
    warn "Just stored a value, ", Data::Dumper->Dump([$ACCOUNTS{$id}],[qw/s/]) if $debug;
    warn "lookup_account: unlocking shared mem" if $debug;
    tied(%ACCOUNTS)->shunlock;

    return $s;

}


TIA!

L8r,
Rob




RE: Problem with DBD::Oracle with mod_perl

2001-08-22 Thread Rob Bloodgood

 On Wed, Aug 22, 2001 at 09:42:59AM -0400, Perrin Harkins wrote:
Are you using Apache::DBI?  Are you opening a connection in
 the parent
process (in startup.pl or equivalent)?
   Yes, yes.
 
  Don't open a connection during startup.  If you do, it will be
 shared when
  Apache forks, and sharing a database handle is bad for the same reasons
  sharing a file handle is.  Open a connection in the child
 process instead.
  You can use connect_on_init() from Apache::DBI if you like.
 I misunderstood you. I was using connect_on_init. With or without
 Apache::DBI, it fails

I've SEEN this.  It SUCKED.  Then I figured it out.  And *IF* it's the same
thing, then:

ORACLE has to be reconfigured, to allow more connections.  By default, there
is a PROCESS max (200), because Oracle spawns a new process per connection.
And then there is a WHOLE different operating mode, called MTS.  You have to
modify init.ora and activate MTS.  The relevant section from MY Oracle
config (8.1.5 or 8.1.6, I forget):

# # This parameter turns on MTS
mts_servers = 1 # min value

mts_max_dispatchers = 5 # max value
mts_dispatchers = (PROTOCOL=TCP)(DISPATCHERS=2)
sessions = 1500

HTH

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Children dying

2001-08-16 Thread Rob Bloodgood

   No need for an apology :-) The trick is to build perl using the
   Solaris malloc (-Dusemymalloc as a flag to Configure), then apache,
   mod_perl and perl all agree on who manages memory.
 
  Might I suggest that this golden piece of information find its
  way into the guide?  It's so rare to see a DEFINITIVE answer to
  one of the many (YMMV! :-) exceptions to the vanilla mod_perl
  build process.

 The definitive answer is there for at least 2 years: If in doubt compile
 statically, which covers Solaris as well. Why having a special case?

Because the admonition to -Dusemymalloc is not the same as, nor easily
deducible from, advice to compile statically.  The guy who had the problem
that started this thread did everything right, i.e. used the same compiler,
started from fresh sources, compiled statically, no Expat, yet he still had
segfaults.

Maybe the Guide ISN'T the best place.  Maybe the best place is mod_perl's
INSTALL document.  But somehow I'd be willing to bet that this advice holds
true for earlier versions than, oh, the NEXT release of mod_perl... which is
where/when such a change to INSTALL would be for those of us who aren't yet
brave enough to use the CVS version daily.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




RE: Children dying

2001-08-16 Thread Rob Bloodgood

 -Original Message-
 From: Rob Bloodgood [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, August 16, 2001 11:20 AM
 To: Stas Bekman
 Cc: mod_perl
 Subject: RE: Children dying

sigh... I didn't see the other thread that spawned from my original post...
rendering this reply redundant.  Apologies.




RE: Children dying

2001-08-15 Thread Rob Bloodgood

  AB Untrue. We ship mod_perl in Solaris 8 as a DSO, and it works
fine.

  I apologize. Let me qualify my original statement. In general, you
  want to compile mod_perl statically on Solaris 2.6 or 2.7 because
  in many instances, it core dumps when built as a DSO. FWIW, my
  particular experiences were with Perl 5.005_03 and 5.6.0, mod_perl
  1.24 and 1.25, and Apache 1.3.12, 1.3.14, 1.3.17, and 1.3.19 under
  Solaris 2.6 (both Sparc and Intel) and 2.7 (Intel only).

 No need for an apology :-) The trick is to build perl using the
 Solaris malloc (-Dusemymalloc as a flag to Configure), then apache,
 mod_perl and perl all agree on who manages memory.

Might I suggest that this golden piece of information find its way into the
guide?  It's so rare to see a DEFINITIVE answer to one of the many (YMMV!
:-) exceptions to the vanilla mod_perl build process.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: 2 problems with mod_perl/Apache::DBI

2001-08-07 Thread Rob Bloodgood

 startup.pl cannot be run from the command line when it
 contains apache server specific modules.

But you can put those (Apache specific) modules in your httpd.conf instead
as

PerlModule Apache::DBI Apache::Status

and avoid compilation warnings in startup.pl.

But you should clearly note this, both in startup.pl and httpd.conf, as
explanatory comments.  Otherwise, you *will* forget that you did this... :-)

  However, when run under Apache
 
  PerlRequire /usr/local/etc/apache/startup.pl
 
  [Mon Aug  6 17:33:09 2001] [error] Can't load
 '/usr/local/lib/perl5/site_perl/5.6.1/i386-freebsd/auto/DBI/DBI.so
 ' for module DBI:
 /usr/local/lib/perl5/site_perl/5.6.1/i386-freebsd/auto/DBI/DBI.so:
  Undefined symbol PL_dowarn at
  Not sure what's up.

As far as the DBI error goes, is there a possibility that you are NOT using
the same build of perl as was compiled into Apache?  Try rebuilding
mod_perl/apache at the same time.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Apache::DBI in startup.pl generating error

2001-08-03 Thread Rob Bloodgood

Well, it should be documented somewhere in the guide, or
presumably in
Apache::DBI.pod, that one should *only*
   
PerlModule Apache::DBI
   
Since it's pointless in startup.pl (right?).
  
   I think you need to think that one through a bit more :)
 
  I disagree.  I *did* think it through.
 
  When involving Apache::DBI, one of two situations is true:
  either you are
  starting the webserver, or you are changing/testing startup.pl.
 
  When starting the webserver, Apache::DBI loads and **transparently**
  replaces the function of DBI-connect.  If one were to NOT load
  Apache::DBI, everything would work JUST THE SAME (codewise), but we
  don't see the benefits that Apache::DBI provide.

 true

 
  When messing with startup.pl, Apache::DBI is redundant.  Since it's
  transparent, and only works when RUNNING the httpd,

 nope - see below

  it is undesired to have it in startup.pl, since it will only
  cause errors.

 I disagree with that - I do it all the time.

 
  HOWEVER,
  loading Apache::DBI as
 
  httpd.conf:
  PerlModule Apache::DBI
 
  means that it only loads when it's worth loading, i.e. server
  startup.  If startup.pl then logically contains init code wrapped
  in a test for whether it's running under mod_perl, within that
  block becomes a good place for Apache::DBI specific
  initialization.
 
  So... your response indicates you think I missed something so
  obvious that I would pick it up on a re-think.  Well, this is my
  original-think... have I really missed something?

 ok, at least you're thinking :)

 the two things I had in mind were:

 a) generally, you ought to pre-load all your modules in startup.pl
 so that you get the maximum amount of code-sharing/memory-sharing,
 etc.

 b) you couldn't call

 Apache::DBI->connect_on_init;
 without first
 use Apache::DBI;

Except that (and I have to check this to be ABSOLUTELY sure but) PerlModule
Apache::DBI happens first, THEN startup.pl.

I just checked, and it works exactly that way when the httpd.conf directives
are in the following order:

PerlModule  Apache::DBI
PerlRequire /etc/httpd/perl/lib/startup.pl

Since Apache::DBI is now loaded into the (single) perl interpreter's symbol
table, and since the above call to connect_on_init() is a method call (vs an
exported symbol), calling methods on it is valid, and can be easily wrapped
as I suggested in an if($ENV{MOD_PERL}) {} block.  Explicitly, it is *NOT*
required to 'use Apache::DBI ();' in startup.pl to do this.
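i.e., in startup.pl, something like this (the DSN and credentials are
placeholders):

if ($ENV{MOD_PERL}) {
    Apache::DBI->connect_on_init(
        'dbi:Oracle:STATS', 'scott', 'tiger',
        { RaiseError => 1, AutoCommit => 1 },
    );
}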

 your *only* up top I thought was a bit strong, which was why I was trying to
 encourage deeper thought...  for the most part, though, you get
 Apache::DBI, which is more than can be said for most...

 I hope you found it constructive criticism and not condescending
 - you seem like a person who wants to understand :)

Absolutely... I'm on this list to edify and to be edified.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




RE: Apache::DBI in startup.pl generating error

2001-08-02 Thread Rob Bloodgood

 only if you code it the way you did below, which isn't terribly portable.
 see http://perl.apache.org/guide/perl.html#use_require_do_INC_and

Ahem, PerlModule is a wrapper around the perl builtin require().  One
presumes that perl knows where it lives if perl can successfully require()
it.  Especially since this module is installed into the standard perl
hierarchy as a system module, from CPAN.  It's not even XS.

 basically, it's a bad programming practice not to use() modules in the code
 that needs them.  it works if you call PerlModule before you use() the
 module, but again, it requires you to pay better attention to your
 httpd.conf than you ought to.

See my above point.  Apache::DBI is *made* to be transparent, or at least
semi-.  It exists at the server level, and without (much) interaction with
the programmer's dataspace at all.  What better place for it than
httpd.conf?  And as for having to pay too much attention... well... let's
just say that Code Red is spreading like it is because not enough IIS admins
paid better attention to the defaults THEY were given. :-) Seriously tho,
why WOULDN'T you know exactly what is in your server configuration?

 consider writing an Apache module for CPAN, relying on, say, Apache::Log
 calls, but failing to use Apache::Log in your module.  If you have a
 PerlModule Apache::Log everything works - until somebody else tries to run
 your code with a different configuration.  There's what works and then
 there's how you ought to do things.

Again, server-level and mostly transparent.  And as far as requiring a
module, 1) I would expect it to be clearly documented 2) and if I didn't
read the dox I deserve to have wasted the time, and 3) I'll leave 3 for
below...

 methods on it are valid, and can be easily wrapped as I suggested
 in an if($ENV{MOD_PERL}) {} block.

 this is just a particular gripe of mine, but I think we ought to be
 past the $ENV{MOD_PERL} thing by now...  startup.pl is a mod_perl
 idiom.  if you are designing web applications that depend on things
 like the mod_perl API, Apache::DBI, and using the hooks into the
 Apache request cycle then you are already way beyond making a
 startup.pl script portable between mod_perl and other web
 environments.

Portable, portable, portable... First of all, the understanding that the order
in which items appear in a config file is significant is so common that I'm
astonished you consider it bad manners to see code that depends on it.  And
unless I'm very badly mistaken, ordering is even significant in httpd.conf as
far as *Apache* is concerned.

And secondly, you're right, this is *mod_perl*.  Not IIS, NSAPI, PHP, or
Cold Fusion.  startup.pl is indubitably a mod_perl idiom.  I'm failing to
understand how this can be considered portable.  But if you mean portable
from system to system, well, last I heard, ActiveState hadn't quite gotten
signals or sockets mastered, but I'm pretty certain they have the %ENV
emulation worked out.

But thirdly, I consider it a convenience to be able to test a script for
syntax errors before attempting to -HUP my webserver to see if it works or
not.  perl -wc is done almost before I register the cursor moving.  Apache
restarts take me at least a minute.  It's not about the request cycle, it's
about the DEBUG cycle.  Since I didn't write Apache::DBI, but I have
reasonable confidence in the guy who did, it doesn't hurt my feelings to
defer initializing it until I've finished modifying startup.pl for other
reasons.  Which means I can make a change, switch windows, and perl -wc and
get a syntax check, instead of an Apache error on Apache::AuthDBI, in under
10 seconds.  Or even perl -w (even tho I run w/ -w in the shebang line and
PerlWarn On) and see debug output I'm working with.

For the record, I also consider it cheesy to have to check $ENV{MOD_PERL}
but to my knowledge, the Apache $s object isn't passed to startup.pl, and
setting an environment variable is significantly cheaper than creating a
perl object in terms of C code and devel/debug time.  Remember, the E in
perl is for Eclectic. :-)

Idioms are there for a reason: they capture a required task in shorthand,
even if there are other, longer ways to do the same thing.  TIMTOWTDI.  The
way I ought to program is the way that makes perfect sense once one
understands all of the pieces, and then document the hell out of it so that
the people behind me who don't understand can at least follow the requirements
list.  ESPECIALLY when the person behind me is me, 6 months later.

/rant

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: [OT] Inspired by closing comments from the UBB thread.

2001-08-01 Thread Rob Bloodgood

 Jay Jacobs wrote:
 
  I don't see any glue-sniffing symptoms from choosing
  embedded html in perl over embedded perl in html.
 

 Unless, of course, you're the graphic artist and you've been tasked
 with changing the look and feel of the application using embedded
 perl (which you, as the graphics person, probably don't know
 anything about), while the perl developer works on the perl portions
 of the code, then you might be sniffing some glue.  This the
 motivation for some (if not most) of the templating solutions Perrin
 mentioned.

Hmmm... Mason makes this *possible*, for me:
I tell my guys, make it look ANY way you like.  I don't care.  I don't WANT
to care.  Just leave me ONE <td></td>.  Since I have all of my components
called by a single dispatch component, all that <td> has to have is one line
of markup.

Then I tell them, here's the list of styles I'll be using in my markup.  You
have access to the stylesheet, make them look however you want but don't
add/remove/rename any of them.

Using this method, I've been able to extend the SAME CODE on two different
sites w/ radically different themes.

Of course, at this point, some would say XML / XSL!  Try AxKit!

But to be honest, I haven't gone there yet.  XML, no matter how pretty the
tools, is still a pain and a bother, IMHO.  Dropping a couple of lines of
perl in a (mostly) static HTML table/form/chart is FAR simpler than learning
a new language (for the stylesheets) to implement a new paradigm (XML) that
in spite of its buzzword compliance is still a hit-and-miss crapshoot
against current browsers.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




RE: [OT] Inspired by closing comments from the UBB thread.

2001-08-01 Thread Rob Bloodgood

 As for SQL, I just wish people would expand their horizons a little
 and start doing a bit of reading.  There are so many different ways
 to avoid embedding SQL in application code and I sincerely wish
 programmers would THINK before just coding... it's what
 differentiates scripters from engineers and I suggest everyone who
 embeds SQL in their perl for anything other than quick-and-dirty
 hacks start considering other options for the good of the
 programming community AND THE SANITY OF WHOMEVER HAS TO MAINTAIN OR
 ALTER YOUR CODE.

 I just implore readers of this list to start thinking more as
 engineers and less as script kiddies.  We all love mod_perl and its
 power and we want it to succeed.  We'll only get somewhere with it
 if we actually make the effort to write better code.  Mixing SQL and
 perl is not better code.

WHY?  WHY WHY WHY WHY  Tell me why it's this horrible, glue-sniffing,
script-kiddie badness to do something in a clear and simple fashion

Below is a pseudo-code handler.  It talks to the database:

use strict;

use Apache::Constants qw/OK/;

use vars qw/$dbh/;

sub handler {
    my $r = shift;

    lookup_info($r);

    # ... blah...

    return OK;
}

sub lookup_info {
    my $r = shift;

    # pseudo-code: $acctid, $pass and $pid come out of the request somehow
    my ($acctid, $pass, $pid) = parse_request($r);

    # ||= allows an already connected $dbh to skip the reconnect
    $dbh ||= DBI->connect(My::dbi_connect_string(), My::dbi_pwd_fetch())
      or die DBI->errstr;

    # WARNING! amateur code ahead!!!
    my $sql_lookup_password = $dbh->prepare_cached( <<'SQL' );
SELECT passwrd, pageid
  FROM siteinfo si, pages pg
 WHERE si.acctid = pg.acctid
   AND si.acctid = ?
   AND pageno = 0
SQL

    my ($c_pass, $c_pid) =
      $dbh->selectrow_array( $sql_lookup_password, undef, $acctid );

    return undef unless defined $c_pass and $pass eq $c_pass;

    # We've confirmed the password.
    return $c_pid if !$pid or $pid eq $c_pid;

    # some more logic, maybe even another query

    return $pid;
}

Now.  Tell me ONE thing that's wrong with this?  The statement handle is
clearly named ($sql_lookup_password), the query is either A) really simple
or B) commented w/ SQL comments, and C) if I change my schema, the query is
RIGHT THERE in the only place that actually USES it.

OO is an idea for cleaning up and packaging functionality.  Fine.  If I
need it that bad, I'll code my handler as an object.  But let's not forget
that the underlying mechanism, no matter how fancily layered, is still a
list of FUNCTION CALLS.  OO has its place.  ABSOLUTELY.  In perl I can
create an FTP connection _object_ and tell it what to do, and trust that it
knows how to handle it.  But in the REAL WORLD, my script is its own
object, with its own guts and implementation, and the interface is:
MyModule::handler.  Apache knows what function to call.  I can mess with the
guts and the interface doesn't change.

So what do I gain by adding 6 layers of indirection to something this
simple?  OO has its PLACE as a TOOL.  It should not be a jail with LOCKED
DOORS and ARMED ESCORT.  (and come to think of it, any objects I use aren't
cons :-)

My $.02.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;






RE: One more small Apache::Reload question

2001-08-01 Thread Rob Bloodgood

 However, now my logs are loaded with  a ton of subroutine redefined
warnings
 (which is normal I suppose?).  I can certainly live with this in a
 development environment, but thought I would check to see if it is
expected,
 and if it can be turned off while still enabling Reload.

Well, first of all, you will want to turn off Apache::Reload in
production.  All of those stat() calls will slow your server down
significantly, since the disk is kept busy on every request.
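One way to keep it a development-only convenience is to make the directives
easy to comment out (or confine them to a dev virtual host); something like
this in httpd.conf -- the ReloadModules pattern is only an example:

# development only -- comment out (or move to a dev vhost) for production
PerlInitHandler Apache::Reload
PerlSetVar ReloadAll Off
PerlSetVar ReloadModules "My::* Stat::*"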

Secondly, how is it you view your logs?  I have a window running tail -f
with a grep filter:

tail -f /var/log/httpd/error_log | \
    egrep -v 'redefined.at|Apache::Reload|AuthenCache'

This way, I get the best of both worlds, by ignoring the noise:

# use constant SIGNATURE => 'TSTAT';
Constant subroutine SIGNATURE redefined at
/usr/lib/perl5/5.00503/constant.pm line 175.

# One of my module's subroutines.. there are 15 of these
Subroutine test_handler redefined at /etc/httpd/lib/perl/Stat/Count.pm line
315

I have AuthenCache in my filter because at LogLevel debug,
Apache::AuthenCache is *noisy*!!

HTH!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Porting CGI scripts help needed

2001-07-26 Thread Rob Bloodgood

 Heres what I did:
 I had many scripts in one dir that shared many things; subroutines, global
 variables and modules.  I wanted to clean things up, so I created a module
 called global.pm structured like this:

snip

 The custom stuff scripts all end in 1;, and are loaded with my custom
 subroutines.  For example, I have one called cgi.pl, that
 contains all subs
 for cgi related tasks, i.e. checkUser(), which verifies a users cookie.

 Each cgi simply calls 'use global;' and then off we go.  However, after
 moving all this stuff into /perl, none of the subs in the custom .pl files
 are found, I get a complaint:
 Undefined Subroutine Apache::ROOT::compar_2ecgi::checkUser
 called at .

 compare.cgi calls 'use global;' and then 'checkUser()'.

global.pm isn't exporting the symbol names it's defining.
If you were to refer to global::checkUser() in one of your scripts, and it
worked, then mebbe an @EXPORT list (via Exporter) for global.pm is in order,

so that the function names get defined in the package
(Apache::ROOT::compar_2ecgi, made up by apache on the spot)
that is accessing them.  Something like the sketch below.
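A minimal sketch, assuming checkUser() lives in a package actually called
'global' (5.6-era style, with use vars rather than our):

package global;

use strict;
use Exporter ();
use vars qw(@ISA @EXPORT);

@ISA    = qw(Exporter);
@EXPORT = qw(checkUser);   # names pushed into the calling package on 'use global;'

sub checkUser {
    # ... verify the user's cookie here ...
}

1;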

HTH!

L8r,
Rob




[Probably OT] Make test fails in Net::Telnet

2001-07-24 Thread Rob Bloodgood

Hi, I'm building a new box intended to be a mod_perl/database machine, and
in the interests of making it as up-to-date as possible, I installed RedHat
7.1, then upgraded to perl 5.6.1.

Next step, of course, is to hit CPAN and install the basics, starting with
Bundle::CPAN.

But Net::Telnet barfs on make test.

I looked on usenet, and there was a post w/ a guy who had this same problem,
but no reply.  His post describes the problem clearly:

snip

From: [EMAIL PROTECTED] ([EMAIL PROTECTED])
Subject: select weirdness/Net::Telnet install barf
Newsgroups: comp.lang.perl.modules, comp.lang.perl.misc
Date: 2001-06-15 15:21:18 PST

On my RedHat 7.1 i586 (laptop) system perl 5.6.1
Net::Telnet won't install using CPAN since 'make test'
barfs.
The select call to read on a dummy handle returns 1!
briefly:

socket SOCK, AF_INET, SOCK_STREAM, 0;
$bitmask = '';
vec($bitmask, fileno(SOCK), 1) = 1;
$nfound = select ($bitmask, undef, undef, 0);

$nfound is 1 with SOCK allegedly ready to read!
Of course actually reading from it produces a SIGPIPE
since it's not connected to anything.
The 'socket' & 'select' calls themselves don't barf.
I'm afraid to force install if there's something deep down
wrong w/my select.

Any ideas greatly appreciated -thanks!
Mark

snip

[rob@dyn5 /home/rob]$ perl -V
Summary of my perl5 (revision 5.0 version 6 subversion 1) configuration:
  Platform:
osname=linux, osvers=2.4.2-2, archname=i386-redhat-linux
uname='linux dyn5.empire2.com 2.4.2-2 #1 sun apr 8 20:41:30 edt 2001
i686 unknown '




config_args='-des -Dcc=gcc -Darchname=i386-redhat-linux -Dcccdlflags=-fPIC -
Dccdlflags=-rdynamic -Dprefix=/usr -Dscriptdir=/usr/bin -Dsitelib=/usr/lib/p
erl5/site_perl -Dman1dir=/usr/share/man/man1 -Dman3dir=/usr/share/man/man3 -
Dman3ext=3pm -Doptimize=-O2 -march=i386 -mcpu=i686 -Uusethreads -Uuselargefi
les -Duseshrplib -Dd_dosuid -Ud_setresuid -Ud_setresgid'
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef
usemultiplicity=undef
useperlio=undef d_sfio=undef uselargefiles=undef usesocks=undef
use64bitint=undef use64bitall=undef uselongdouble=undef
  Compiler:
cc='gcc', ccflags ='-fno-strict-aliasing',
optimize='-O2 -march=i386 -mcpu=i686',
cppflags='-fno-strict-aliasing'
ccversion='', gccversion='2.96 2731 (Red Hat Linux 7.1 2.96-85)',
gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t',
lseeksize=4
alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
ld='gcc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -ldl -lm -lc -lcrypt -lutil
perllibs=-lnsl -ldl -lm -lc -lcrypt -lutil
libc=/lib/libc-2.2.2.so, so=so, useshrplib=true, libperl=libperl.so
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef,
ccdlflags='-rdynamic -Wl,-rpath,/usr/lib/perl5/5.6.1/i386-redhat-linux/CORE'
cccdlflags='-fPIC', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Compile-time options:
  Built under linux
  Compiled at Jul 23 2001 17:28:02
  %ENV:
PERL5LIB=/home/rob/lib/perl5:/home/rob/lib/perl5/site_perl/5.005
  @INC:
/home/rob/lib/perl5
/home/rob/lib/perl5
/home/rob/lib/perl5/site_perl/5.005
/usr/lib/perl5/5.6.1/i386-redhat-linux
/usr/lib/perl5/5.6.1
/usr/lib/perl5/site_perl/i386-redhat-linux/5.6.1
/usr/lib/perl5/site_perl
/usr/lib/perl5/site_perl/5.6.0
/usr/lib/perl5/site_perl

tips? ideas? suggestions?

TIA!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Requests using If-Modified-Since cause response Set-Cookie to be discarded

2001-06-27 Thread Rob Bloodgood

 me, on the other hand, i don't see the problem with

   on incoming request
 if has-cookie 'session'
 {
   update serverside 'accesstime' for session[this] to NOW

Oh yeah?  HOW???

   if not-modified-since
 report same
   else {
 send headers w/ cookie
 generate page
   }
 }
 else
   redirect to login page

 doesn't look unmanageable to me (until someone shows me the
 light, of course)...?

How many sessions/day are you running?  How big is your DB?  How much
processor do you have to throw at this? (these are the hurdles for storing
serverside info).

OTOH, what *benefit* is derived from storing all of this stuff serverside?

And how about: for the page in question, make an invisible frameset.  Use
the request for the FRAMESET (or the invisible frame) to update cookies,
etc, and when the main frame is requested, 304 has already been determined
but timestamps are properly updated.

yes? no?

L8r,
Rob




RE: Requests using If-Modified-Since cause response Set-Cookie to be discarded

2001-06-25 Thread Rob Bloodgood

  maybe storing 'last-access-time' on the server, instead of in
  the client-side, via cookie, would solve this snafu?

 But if you want to give out a new cookie on every request ?
 How would you prevent them from copying or tampering with the contents?
 a MD5-hash would stop them from changing values, but they could
 still copy the cookie,
 so the next idea is timeouts, and when you use timeouts it would
 be nice if the user
 doesn't have to log in every couple of minutes, but would get a new
 valid cookie automatically...

Aside from the fact that a server-side tracking system is bound to become
incredibly unmanageable, very quickly, in terms of server-side storage...

One of the methods I've used is to include a timestamp in the user's info
(incl the MD5 hash?  see the Eagle for Encryption of Cookies w/ MD5).

THEN, when deparsing the cookie, DELETE it if the timestamp is too old.

THEN, you either have a valid, non-timed out session, or no session at all
(which is what you were worrying about in the first place, no?).  If your
system is based on session LENGTH (ie this ticket is good for one hour from
last access), all you have to do is re-set the timestamp to the current
time.
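A minimal sketch of that timestamp-plus-MD5 idea (not my production code; the
secret, the separator and the one-hour window are all just examples):

use Digest::MD5 qw(md5_hex);

my $secret  = 'some server-side secret';
my $max_age = 60 * 60;                        # one hour from last access

sub issue_ticket {
    my ($user) = @_;
    my $time = time;
    my $mac  = md5_hex("$user:$time:$secret");
    return "$user:$time:$mac";                # this goes into the cookie value
}

sub check_ticket {
    my ($ticket) = @_;
    my ($user, $time, $mac) = split /:/, $ticket;
    return undef unless $mac and $mac eq md5_hex("$user:$time:$secret");
    return undef if time() - $time > $max_age;   # too old: treat as no session
    return $user;   # valid; the caller re-issues the cookie with a fresh timestamp
}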

HTH!

L8r,
Rob




RE: strange uninitialized value error.

2001-06-25 Thread Rob Bloodgood

   Changing:
  warn "r->uri is undef" unless defined $r->uri;  # debugging?!?!?
  my $subr = $r->lookup_uri($r->uri); # uri is relative to doc root
 
   To:
  $uri = $r->uri;
  warn "\$uri is undef" unless defined $uri;  # debugging?!?!?
  my $subr = $r->lookup_uri($uri); # uri is relative to doc root

Hmm... is $subr defined if $uri is EMPTY?

There's no check here for that.

(my $.02)

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
 



Adding parameters to a request

2001-06-18 Thread Rob Bloodgood

In my AuthenHandler, I run the following snippet:

# validation successful
$apr->subprocess_env(REMOTE_PASSWORD => $pass);
my $args = $apr->args || '';
$apr->args( $args . ( length $args ? '&' : '' ) . "pid=$pid" )
    unless $args =~ /pid=\d+/;
return OK;

The intent is to add the parameter 'pid=99' or whatever to the request if it
is not already present.

It feels clunky and forced to me... is there a better way to do this?  As
indicated by the variable, I'm using Apache::Request, for the sole purpose
of having easier access to the parameters.  Except that it turns out
Apache::Request's param() method does NOT support *setting* parameters, only
*getting* them. sigh

TIA!

L8r,
Rob




RE: Adding parameters to a request

2001-06-18 Thread Rob Bloodgood

 [snip]
  I'm using Apache::Request, for the sole
 purpose
 of having easier access to the parameters.  Except that it turns out
 Apache::Request's param() method does NOT support *setting* parameters,
 only
 *getting* them. sigh

 the
 $apr->param('foo' => [qw(one two three)]);
 example in the docs didn't work until recently in CVS and it hasn't been
 propagated out yet (although a 0.33 release of libapreq is pending)

 for the moment, try

 my $parms = $apr->parms;

 $parms is an Apache::Table object, for which you can call get(), set(),
 add(), etc.

 in the next release, $apr->param will return an Apache::Table object in
 a scalar context, removing the need for a separate parms() method.
 for now, that should help.

I tried this:

my $parms = $apr->parms;
$parms->add( pid => $pid ) unless defined $parms->get('pid');

But now my app complains that it's not getting a parameter for pid at all.
I looked at the source, and parms() returns

ST(0) = mod_perl_tie_table(req->parms);

But I don't know if the above call is complete (changes to the
Apache::Table object reflect in the request).  Am I supposed to re-insert
the table into the request?  None of the following worked:

$apr->args($parms);
$apr->parms($parms);

and I couldn't figure out how to convince $parms to stringify so that I
could just assign THAT to $apr->args.

Suggestions?

TIA!

L8r,
Rob




RE: Tracking down taint problems

2001-06-14 Thread Rob Bloodgood

 if you can reproduce at will, use gdb:
 % gdb httpd
 (gdb) source mod_perl-x.xx/.gdbinit
 (gdb) b Perl_croak
 (gdb) run -X
  run request that causes error ...
 (gdb) where
  stack printed here ...
 (gdb) curinfo
  perl filename:linenumber printed here ...

OOOHH

Seriously, tho, do you think you could come up with a short list of
definitions for those macros?  I was pretty excited to see them, once,
except that I couldn't make them work. sigh  Even a comment w/ a usage:

AvFILL(address)

just to see what to feed the macro from gdb space?

Not like you have anything ELSE to do... (JUST KIDDING I can tell you've
been writing email *all day* by the posts that keep trickling into the
list).

L8r,
Rob




Preventing duplicate signups

2001-05-17 Thread Rob Bloodgood

So, like many of you, I've got a signup system in place for bringing on new
customers.

My signup script is reasonably straightforward.  I use CGI::Validate to make
my parameters pass muster (along with a little judicious JavaScript on the
signup form), Apache::Session::Oracle to maintain state between the multiple
pages of the signup, CGI::FastTemplate to print a pretty success page, and
DBI to create the account records at successful creation.

At one time it was straight CGI but I've since updated it for mod_perl.

Anyway, my only problem is that I can't seem to prevent duplicate signups,
e.g. reloading the last page to create multiple accounts.

This is my dupe detection code:

if (my (%post) = cookie('Signup')) {
    local $^W = 0;
    my $match = 0;
    foreach (qw/ email url password /) {
        $match++ if param($_) and $post{$_} eq param($_);
    }
    if ($match == 3) {
        # I tried this first, but some browsers are stupid.
        # print header(-status => '204 No Content');
        print header(-status => '304 Not Modified');
        exit;
    }
}

Naturally, I set the corresponding cookie in the header of the "Thank you
for signing up" template output.

But it doesn't work.  I still get duplicate accounts, and I'm at a loss as
to how to attack this problem.  (this is the 3rd or 4th approach I've
tried).

Suggestions?

TIA!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Preventing duplicate signups

2001-05-17 Thread Rob Bloodgood

 A really simple trick would be rather than to use a cookie, if
 you are saving state to DB anyway.  Set a flag in the DB and test
 for its existence.

 sub handler{

     my $s = session->new();
     $s->continue();

     my $flag = $s->get('flag');
     if($flag){
         # do something else
     }
     else{
         # run insert for new signup
     }

 }

There should be a word for this...
2 minutes after I finished sending my post, I realized exactly this thing.
Since I'm already accessing the DB w/ Apache::Session, I get miscellaneous
flags thrown in for free... since the rest of the signup script only uses
the hash keys in %session that it knows about.

So, after a successful signup (read, creation of the new account record in
the database), I simply stash the new acctid into %session:
$session{id} = $id;

Now here's the fun part.
While writing this script originally, I put in a debug statement like so:

# bypass SQL activity
goto TEMPLATE
    if $ENV{REMOTE_ADDR} eq $my_ip and $session{email} ne $my_email;

So the next time a signup comes in, I can check for the existence of
$session{id}.  If it exists... well, read the updated code:

goto TEMPLATE
    if exists $session{id}
    or $ENV{REMOTE_ADDR} eq $my_ip and $session{email} ne $my_email;

Since the template gets the rest of its values straight from %session
anyway, the only other change I had to make was s/$id/$session{id}/.

Problem solved!

Now, of course, I have to provide for the previous behavior whereby a person
could validly sign up multiple accounts if they were pointing to different
urls, but hey!  I've nailed the PROBLEM.  Now I just have to adjust the
behaviors slightly.

L8r,
Rob




RE: Apache Oracle and Perl

2001-05-10 Thread Rob Bloodgood

 When I start getting this error, I can shutdown the httpd server, and the
 machine and it will still give this error. If I wait a while(sometimes
hours,
 sometimes days) it will come
 back. Sometimes it is a few hours. Sometimes it is days. I have installed
 Apache::DBI in hopes of a possible fix.

 The error I get is:
 Software error:
 Can't load
 '/usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/DBD/Oracle/Oracle.so' for
 module DBD::Oracle: libclntsh.so.8.0: cannot open shared object file: No
 such file or directory at /usr/lib/perl5/5.6.0/i386-linux/DynaLoader.pm
line

When I see this problem, I automatically think, "Oh, the Oracle libs aren't
being located by the system."  Edit /etc/ld.so.conf and add the value of
$ORACLE_HOME/lib (the directory that has libclntsh.so.8 in it),
e.g.
/usr/local/oracle/8.1.5/lib

then run /sbin/ldconfig to update Linux's idea of where things are, and stop
and start the server.

ALSO, ensure that ORACLE_HOME is explicitly provided to your perl stuff:

in httpd.conf
PerlPassEnv ORACLE_HOME

THIS CAN BITE YOU! If your httpd startup script doesn't have the oracle
environment loaded, you may have to fix that as well:

at the beginning of /etc/rc.d/init.d/httpd:

# Source function library.
. /etc/rc.d/init.d/functions            <!-- original code -->

# Source Oracle environment             <!-- you add these lines -->
ORAENV_ASK=NO
ORACLE_SID=stats
. /usr/local/bin/oraenv

# See how we were called.               <!-- original code -->
case "$1" in


HTH!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





FW: authorization and mod_perl

2001-05-09 Thread Rob Bloodgood

I had intended this to CC: to the list... sigh




 <Location /foo*>
   AuthName          "foo control"
   AuthType          Basic
   PerlAuthenHandler Apache::OK
   PerlAuthzHandler  WW_authz
   PerlSetVar        Mask Geek
   require           user maskgeeky
 </Location>

I have a similar setup, and my directory/authentication block is as follows:

<Location /reports>
    DirectoryIndex stats.html

    PerlAuthenHandler +Apache::AuthenCache +Stat::Auth Apache::AuthenCache::manage_cache

    # AuthenCache Directives
    PerlSetVar AuthenCache_Encrypted Off
    # AuthenCache Directives

    AuthName "Stats (Your username is your Account ID)"
    AuthType Basic
    require valid-user

    Options +Includes

    ErrorDocument 403 /error/loginfail.html

</Location>

As listed, every path under /reports is subject to authentication.
Naturally, auth in a SUBDIRECTORY can be overridden:

<Location /reports/special>
    # Makes this directory globally accessible
    # in spite of its parent and siblings still
    # falling under the above listed AUTH handling.
    PerlAuthenHandler Apache::OK
</Location>

HTH!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: mod_perl and 700k files...

2001-05-09 Thread Rob Bloodgood

 That, unfortunately doesn't tell me what causes a USR2 signal to
 be sent to
 Apache. Or when it's caused. I only want to reload the file when
 said file
 has changed. Am I supposed to do some checking against the file -M time
 myself, and then send a USR2 signal myself?

USR2 only fires when you do it yourself, e.g.
kill -USR2 `cat /var/run/httpd.pid`
or under linux
killall -USR2 httpd

However, if you want to change based on mod time, then one way to do it
would be as follows (THIS CODE IS UNTESTED!!!).

in the handler/CGI that USES the 700k doc:

my $big_doc = My::get_big_doc();

and in startup.pl you can say:

package My;

my $big_doc = undef;
my $mod_time = 0;

my $big_file = '/path/to/big/file';

sub get_big_doc {

    # file age still greater than when we recorded it => not modified since
    if (defined $big_doc and -M $big_file > $mod_time) {
        return $big_doc;
    } # implicit else

    ($big_doc, $mod_time) = some_complex_operation();

    return $big_doc;

}

sub some_complex_operation {

    # read in $big_doc and record its $mod_time (e.g. -M $big_file)

    return ($big_doc, $mod_time);
}

HTH!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: Reading the environment in perl block

2001-05-07 Thread Rob Bloodgood

 The way I've set up the whole thing is like this: a script named restart is
 called with some parameters telling it to reload one or all of the
 developers' environments, or the testing copy.  This script would have some
 environment variables called SITE_USER and SITE_USER_PORT that will give

The PerlPassEnv directive should work for this, shouldn't it?  In my
experience it puts the (explicitly named) environment variables into
perlspace pretty effectively.
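In this case that would just be something like the following in httpd.conf
(the variable names are taken from your message, not from a real config):

PerlPassEnv SITE_USER
PerlPassEnv SITE_USER_PORT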

HTH!

L8r,
Rob




RE: Apache Processes hanging

2001-05-03 Thread Rob Bloodgood

 I'm having similar problems, but we think it's directly related to
 Oracle.  Basically, a connection is made to the Oracle database, a
 transaction is started and finished, but the connection to the
 database doesn't go away and the statement (at least from the oracle
 side) never seems to finish.  The data is present in the database
 (these are insert statement, btw).  Over time, every process collects
 one of these hanging statements and it eventually overwhelms our
 oracle database.  The only solution is to restart apache every 5
 minutes to eliminate the built-up non-finished transactions.

Yeah... two things: CONCURRENCY and TRANSACTIONS.

Concurrency: Are there any other processes/reports/queries running at the
time of insert?  That will lock ALL of them, waiting for the insert to
complete so the lock is released.  Or, Another Interesting Way To Lock A
Really Buff Linux Server (tm).

Transactions: how's this one for fun?  I started experimenting with
Apache::Session::Oracle to see what I could see.  Usually I run w/
$dbh->{AutoCommit} = 1, which is the default, because most of the time I'm
just running SELECTs.  But ::Oracle wouldn't ever complete the transaction,
hanging that server process and eventually most of the httpd system, all
waiting for the commit() on the INSERT (from the new Session) that never
completes (sigh).  I ended up having to do a local block, with Commit => 1:

{
    local $dbh->{AutoCommit} = 0;
    tie %session, 'Apache::Session::Oracle', $session_id,
        { Handle => $dbh, Commit => 1 };
    $session_id = $session{_session_id}; # save a copy

    _set_cookie( $r, SESSION_COOKIE, $session{_session_id} );

    $session{referer} ||= $referer; # preserve prior entries

    untie %session;
}

HTH!

L8r,
Rob




RE: glimmer of hope -- cookies: www.host.tld vs host.tld

2001-05-02 Thread Rob Bloodgood

 Or at the very least, two segments thereof:

   domain=.org.tld

 Which would be sent to any of these hosts:

   www.org.tld
   some.obscure.server.org.tld
   even.here.org.tld

 BUT NOT TO

   ord.tlg

 Thank you very four-borking-days-lost-forever much.

 So, patient gurus laughing-up-your-sleeves, who've known this
 from the beginning and have only been waiting for grashopper to
 come to the epiphany on his own, would you mind sharing with us
 lesser folk... HOW to have cookies work for bare-domain hosts
 such as

   this.org
   something.net
   my.tld

 to operate as aliases for more specific-style sites such as

   www.this.org
   www.something.net
   a.very.deep.and.remote.server.my.tld

You have it right at the top.
Assuming you are operating in org.tld, so www.org.tld and modperl.org.tld
are valid boxes, then you send the domain string as ".$domain".  This one
cost me about a week, so don't feel too bad!

Until now, you've been dealing with not even seeing the cookie header (in
the raw req).  Once the raw req has the right info, (e.g. the Set-Cookie:
header), then it comes down to verifying the info IN the headers. sigh

DON'T EXPECT TO SET A COOKIE FOR MULTIPLE DOMAINS.  If you set a cookie for
.this.org, it's not a part of the technology to allow the same cookie to
work w/ .something.net as well.  ALTHO: There's nothing stopping you from
setting cookies from perl.this.org for the .something.org domain if you
expect to go back and forth.
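A minimal sketch of setting such a cookie with Apache::Cookie (libapreq),
assuming $r and $session_id are in scope and DOMAIN comes from a PerlSetEnv
as in my other posts -- the leading dot is the whole point:

use Apache::Cookie ();

my $cookie = Apache::Cookie->new($r,
    -name   => 'session',
    -value  => $session_id,
    -domain => ".$ENV{DOMAIN}",     # e.g. ".org.tld" -- note the leading dot
    -path   => '/',
);
$cookie->bake;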

HTH, and good luck!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;





RE: pulling arguments off rewrite rule

2001-04-18 Thread Rob Bloodgood

 Yes ... basically we want to track which company sent us the
 reference when customers subscribe.
 the ref=xxx where xxx will = some company id
 I want to tack this on to every click so that when the user
 finally submits an application we
 can credit the company that gave us the customer

 I will try your solution and thanks for your thoughts
 I appreciate it.   I could not find much documentation regarding
 this issue.

This is so close to what I'm doing right now, you could *ALMOST* drop in my
module and have it work the first time.

For that reason, I'm gonna post it. :-) COMMENTS ALWAYS WELCOME!

Notes:
1. this is an 0.1-0.2 version.
2. in _set_cookie, the line
  domain => ".$ENV{DOMAIN}",
needs clarification:  $ENV{DOMAIN} comes from my httpd.conf:

PerlSetEnv DOMAIN mydomain.com

(I use identical modules on multiple sites wherever possible)
*AND*, the '.' at the beginning is required.  When (if) you use this
yourself, DON'T FORGET IT!!!  Otherwise you WILL have problems (and for the
two months of wondering what the @#$ was going on: SIGH).

3. I'm using Apache::Session::Oracle, because I already have Oracle.  ::File
or ::MySQL or whatever might be better for you.

4. I have a private module called My, which is read-only to root and has my
DB auth stuff in it, that loads at server start.  This explains the odd
DBI-connect parameters.

5. I made path handling vaguely intelligent... if I get a request for
/affiliates/1122334, I read the /1122334 as the referer (see line 53, you'll
prolly wanna change this).  If I get a request for
/affiliates/somethingelse.html, then I DECLINE processing so that Apache can
either serve somethingelse.html or handle the 404 as usual.

The upshot is, when they finally sign up, I import the Apache::Session and
see if exists $session{referer}.  If yes, great, handle it.  If not, just
do a normal signup.

in order to activate this module, do something like this in httpd.conf:
<Location /affiliates>
    PerlFixupHandler +Stat::Affiliate
</Location>
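Since the attachment may not survive the archive, here is a rough sketch of
the shape of the thing described above -- NOT the actual Affiliate.pm; the
session handling and the redirect target are stand-ins:

package Stat::Affiliate;

use strict;
use Apache::Constants qw(OK DECLINED REDIRECT);
use Apache::Session::Oracle ();

sub handler {
    my $r = shift;

    # only /affiliates/<digits> is treated as a referer id (cf. note 5)
    my ($referer) = $r->uri =~ m!^/affiliates/(\d+)$!
        or return DECLINED;     # let Apache serve the file or 404 as usual

    # stash the referer in a new session (My::dbh() is a stand-in here)
    my %session;
    tie %session, 'Apache::Session::Oracle', undef,
        { Handle => My::dbh(), Commit => 1 };
    $session{referer} ||= $referer;
    # ... set the session cookie for ".$ENV{DOMAIN}" here ...
    untie %session;

    # send them on to the signup page (wherever that is for you)
    $r->header_out(Location => '/');
    return REDIRECT;
}

1;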

Hope this helps!

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;

 Affiliate.pm


RE: Apache::Cookie and encoding

2001-04-12 Thread Rob Bloodgood

 I'd like to use Apache::Cookie, but I'm doing some tricky things with
 cookie data, which requires that I do the encoding myself.  However,
 every time I 'bake' a cookie object, it tries to encode stuff for me.  I
 don't like this.

 For example, if I've got cookie data that looks like 'foo%21', it
 emerges from 'bake' looking like 'foo%2521'.  Is there any way to
 prevent this behavior?

First of all,
reading the cookie value in should reverse the weirdness (encoding)
that ->bake is doing.

Second of all,
if it's still a problem, you can either
A) design your cookie string to NOT use those characters (like, if % is a
separator, choose a : or something),
B) use Storable / MIME::Base64 / UUEncode ( which is as simple as pack('u',
$val) ! ) , or
C) encode it yourself.
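A tiny sketch of option B -- Base64 keeps the value free of the characters
bake() wants to escape (the variable names here are just examples):

use MIME::Base64 qw(encode_base64 decode_base64);

my $raw        = 'foo%21';                  # whatever tricky data you have
my $cookie_val = encode_base64($raw, '');   # '' => no trailing newline

# ... put $cookie_val in the cookie; on the way back in:
my $original = decode_base64($cookie_val);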

Hope this helps!

L8r,
Rob




RE: Long KeepAlives sensible?

2001-04-06 Thread Rob Bloodgood


 What if you want to explicitly zap the KeepAlives but not terminate 
 the child. Example -- http chat scripts. Basically it amounts to 
 having KeepAlives off for the particular script but on for everything 
 else. How does one accomplish this.

$r->header_out(Connection => 'close');

L8r,
Rob



RE: Apache::AuthCookieDBI forgets its config [SOLVED]

2001-04-04 Thread Rob Bloodgood

 OK, more examination reveals that:
 At the time this BEGIN block is running, this call:
   my @keyfile_vars = grep {
   $_ =~ /DBI_SecretKeyFile$/
   } keys %{ Apache->server->dir_config() };

 is returning EMPTY.

 Meaning it's evaling too early to see the dir_config???  Or what?

- PerlModule Apache::AuthCookieDBI
 PerlSetVar AdminPath /admin
 PerlSetVar AdminLoginScript /scripts/adminlogin.pl

 # These must be set
 PerlSetVar AdminDBI_DSN "dbi:Oracle:STATS"
 PerlSetVar AdminDBI_SecretKeyFile /etc/httpd/conf/admin.secret.key
+ PerlModule Apache::AuthCookieDBI

My ealier message reveals the solution: move the line
PerlModule Apache::AuthCookieDBI
to *AFTER* the line(s)
PerlSetVar BlahBlahDBI_SecretKeyFile /path/to/keyfile

It now works perfectly!

Thx for putting up w/ me bouncing my problem-solving off the list...
hopefully it will save somebody else the day & a half I just spent on this.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




RE: internal_redirect

2001-04-03 Thread Rob Bloodgood

 I'm trying to handle an exception using an internal_redirect.  I
 can get it to work by redirecting to a static page, but when I try to
 redirect to a modperl handler, I'm run into problems.

 Here are the two versions of code (BTW, the handler works fine when I
 access it directly via the browser).

 ## ver. 1
 print STDERR "$@";
 require Apache;
 my $r = Apache->request;
 $r->internal_redirect("/DBConnectError.cgi");

 ## ver. 2
 print STDERR "$@";
 require Apache;
 my $r = Apache->request;
 $r->internal_redirect("/errordocs/503.html");

 When I run the modperl handler, the browser prompts me and asks if I
 want to save the output of the cgi script that raised the error, but
 it never displays the content from the handler. The static file
 version works great and the browser displays it's content.

Hmm... First of all, the Guide (and experience :-) sez that IMMEDIATELY
after running

$r->internal_redirect(blah);

one should

return OK;

Secondly, I would suggest doing it differently:
tell the request object that the handler returned code 503, and in
httpd.conf:

ErrorDocument 503 /DBConnectError.cgi

I just wrote this handler in 2 minutes, that demonstrates:

=
package Stat::Testfail;

use strict;

sub handler {
my $r = shift;

$r->status(503);
return 503;
}

1;
=

with the following httpd.conf entry:
<Location /testonly>
    SetHandler perl-script
    PerlHandler +Stat::Testfail
</Location>

and it works like I've described:

# telnet localhost 80
GET /testonly HTTP/1.0

HTTP/1.1 503 Service Temporarily Unavailable
Date: Tue, 03 Apr 2001 18:01:28 GMT
Server: Apache/1.3.9 (Unix)  (Red Hat/Linux) mod_perl/1.25
Connection: close
Content-Type: text/html

<HTML>
<HEAD><TITLE>An Error Occurred</TITLE></HEAD>
<BODY>
<H1>An Error Occurred</H1>
503 Service Temporarily Unavailable
</BODY>
</HTML>

(Naturally this would have been different if I'd set an ErrorDocument 503.)

HTH!

L8r,
Rob




RE: internal_redirect

2001-04-03 Thread Rob Bloodgood

 Rob, thanks for pointing me in the right direction.  Your advise
 helped me find a solution that works for my situation.

You're welcome!

 I'm working on an API that sits between an Oracle DB and bunch of web
 application programmers.  Unfortunately, the programmers run their
 apps under a variety of perl-handlers (Apache::Registry,
 Apache::RegistryNG,
 Apache::RegistryFilter, etc).  None of the programmers follow any
 sort of standard method for handling exceptions, so I can't assume
 that 'return OK;' will ever be called (in fact I'm pretty sure, it
 will never be called).  What I've been trying to do is kind of 'take
 over' the request, whenever a programmer fails to connect to the DB and
 redirect the browser to a handler that can put up a custom 503 page
 for each application.

OK 1: none of the example environments you listed that your programmers are
in include straight mod_perl... in fact they are all CGI emulation layers of
varying degrees of protection/dirtiness.  Do I read you correctly?

 I finally settled on putting the following in conf file for the web
 sites:

 ErrorDocument 503 "HTMLHEADMETA http-equiv="refresh"
 content="0;URL=/DBConnectError.cgi"/HEAD/HTML

 <Files DBConnectError.cgi>
    SetHandler perl-script
    PerlHandler Tec::Api::DBConnectError
 </Files>

Well, this is all fine except for one important detail:  HOW (and I mean, if
you can't answer this you haven't solved the problem) HOW do you know
that your programmers' programs are going to fire a 503 if there is a
database error?

 It seems to work for just about every perl handler the programmers are
 using, as long as they doesn't use Carp::fatalsToBrowser, which
 raises a whole new set of problems.

(you could always chmod 000 `find /usr/lib/perl5 -name fatalsToBrowser.pm`
:-)

 If you see any issues with my solution, please chime in.

Well as far as I can see, you're trying to ensure that the programmers are
correctly connected to the database.  It *looks* like /DBConnectError.cgi is
a reconnect setup.  Presumably, this has an API that your programmers are
using to get DB handles (DBI?).

But the only way for this setup to work is if the PROGRAMMERS know that if a
database call fails, to throw 503:

my $dbh = DBI->connect(Local::get_connect_args)
    or do { print "Status: 503\n\n"; exit };

Otherwise, all of this fancy footwork you're doing will be pointless.

Is there something I'm missing?




RE: Apache::AuthCookieDBI forgets its config [UPDATE]

2001-04-03 Thread Rob Bloodgood

 HOWEVER, whenever the module is actually invoked, %SECRET_KEYS is empty!
 
 Here's the BEGIN{} block:
 BEGIN {
    my @keyfile_vars = grep {
        $_ =~ /DBI_SecretKeyFile$/
    } keys %{ Apache->server->dir_config() };
    foreach my $keyfile_var ( @keyfile_vars ) {
        my $keyfile = Apache->server->dir_config( $keyfile_var );
        my $auth_name = $keyfile_var;
        $auth_name =~ s/DBI_SecretKeyFile$//;
        unless ( open( KEY, "$keyfile" ) ) {
            Apache::log_error( "Could not open keyfile for $auth_name in file $keyfile" );
        } else {
            $SECRET_KEYS{ $auth_name } = <KEY>;
            close KEY;
        }
    }
 }

OK, more examination reveals that:
At the time this BEGIN block is running, this call:
  my @keyfile_vars = grep {
      $_ =~ /DBI_SecretKeyFile$/
  } keys %{ Apache->server->dir_config() };

is returning EMPTY.

Meaning it's evaling too early to see the dir_config???  Or what?

PerlModule Apache::AuthCookieDBI
PerlSetVar AdminPath /admin
PerlSetVar AdminLoginScript /scripts/adminlogin.pl
#PerlSetVar AdminLoginScript /error/adminlogin.html

# Optional, to share tickets between servers.
#PerlSetVar AdminDomain .domain.com


# These must be set
PerlSetVar AdminDBI_DSN "dbi:Oracle:STATS"
PerlSetVar AdminDBI_SecretKeyFile /etc/httpd/conf/admin.secret.key

# etc.



Ideas?

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




RE: Getting a Cache::SharedMemoryCache started

2001-03-28 Thread Rob Bloodgood

Thanks for the pointers, unfortunately I've got a problem with the Shared
 cache in that I need IPC::ShareLite, no problem, except it won't test ok,
 I get:

 PERL_DL_NONLAZY=1 /usr/bin/perl -Iblib/arch -Iblib/lib
 -I/usr/lib/perl5/i386-linux -I/usr/lib/perl5 test.pl
 1..8
 ok 1
 ok 2
 IPC::ShareLite store() error: Identifier removed at test.pl line 33

It took me forever, but I finally figured out HOW to deal w/ shared memory!
As in, what is the status, what can I change, how can I make garbage go
away?

On Linux (and Solaris, probably others) there is a program called ipcs(8).
This will give you a listing of all shared memory segments, message queues,
and semaphore arrays.

An example (Linux):
-- Shared Memory Segments 
keyshmid owner perms bytes nattchstatus
0x00280267 1 root  644   1048576   0

-- Semaphore Arrays 
key   semid owner perms nsems status
0x00280269 0 root  666   14

-- Message Queues 
key   msqid owner perms used-bytes  messages


NOW, if I wanted to make these go away (AFTER checking that nothing critical
is actually USING these segments!), I would use the companion program
ipcrm(8).

ipcrm shm 1
ipcrm sem 0

the shm 1 line corresponds to the lines above that show shmid 1 under shared
memory.  Likewise, the sem 0 is for the semaphore w/ semid 0.  See the
manpages!

Anyway, whenever I've seen that message, it means that something is barfing
in shared memory.  Usually, something got left behind instead of being
cleaned up.  A quick removal usually takes care of it.   BUT!!!  Don't do
something silly (and dangerous) like deleting ORACLE's shared memory
segments while it's running.  Fortunately, ipcs(8) shows the owners of
shared memory segments, so this should be reasonably simple to identify.

Hope this helps!

(is this OT? :-)

L8r,
Rob




RE: [BUG-REPORT] missing header (minor)

2001-03-28 Thread Rob Bloodgood

 Version: Apache/1.3.12 (Unix) mod_perl/1.24
 What: PerlAuthenHandler returns headers without WWW-Authenticate field
 Work-around: set with $r-err_header_out

It looks like you haven't fully read the book/docs/manpages/samples for auth
handling.
*All* of the code for Basic auth (i.e. browser based user/password from the
popup dialog) handlers have the following snippet:

$r-note_basic_auth_failure;
return AUTH_REQUIRED;

as in:

  # get username & password
  my ($res, $sent_pw) = $r->get_basic_auth_pw;
  return $res if $res != OK;
  $user = $r->connection->user;

  # need both username & password
  unless ( $user && $sent_pw ) {
    $r->note_basic_auth_failure;
    return AUTH_REQUIRED;
  }

From http_protocol.h:
 * note_basic_auth_failure arranges for the right stuff to be scribbled on
 * the HTTP return so that the client knows how to authenticate itself the
 * next time. As does note_digest_auth_failure for Digest auth.
 *
 * note_auth_failure does the same thing, but will call the correct one
 * based on the authentication type in use.

The C API works the same way.  From src/modules/standard/mod_auth.c:

ap_note_basic_auth_failure(r);
return AUTH_REQUIRED;

AND, the actual function ap_note_basic_auth_failure, from Apache's
http_protocol.c:
   API_EXPORT(void) ap_note_basic_auth_failure(request_rec *r)
   {
       /* sanity checks here */

       ap_table_setn(r->err_headers_out,
                     r->proxyreq ? "Proxy-Authenticate" : "WWW-Authenticate",
                     ap_pstrcat(r->pool, "Basic realm=\"", ap_auth_name(r), "\"",
                                NULL));
   }

which in mod_perl would be:
$r->err_header_out( $r->proxyreq ? "Proxy-Authenticate" : "WWW-Authenticate",
                    "Basic realm=" . $r->auth_name );

which looks alot like your workaround. :-)

L8r,
Rob




RE: /dev/null problems

2001-03-28 Thread Rob Bloodgood

 From the mod_perl guide:

   syntax error at /dev/null line 1, near "line arguments:"
   Execution of /dev/null aborted due to compilation errors.
   parse: Undefined error: 0
   There is a chance that your /dev/null device is broken. Try:
   % sudo echo > /dev/null

 This is exactly the problem I have been getting when starting Apache
 mod_perl, however the suggested fix does not work for me. We're on a
 HPUX 11 machine. Is there another way to solve this problem? As I
 understand it, if /dev/null is being used as the $0 argument to the
 handler, perhaps I could somehow explicitly set it to another (empty)
 file? How would I go about that?

I've never seen this be an actual /dev/null problem.  I've seen this in
HTML::EmbPerl and Apache::Registry, where "/dev/null" is what gets put
in $0 when mod_perl has lost track of the original filename (and line 1
because it forgot WHERE it was, too :-).

According to the sample:
   syntax error at /dev/null line 1, near "line arguments:"
   Execution of /dev/null aborted due to compilation errors.
   parse: Undefined error: 0

Which says to me that one of your (scripts/server-parsed pages/modules) has
the string "line arguments:" in it, and there is a syntax error near there.

So (for example, on my Linux system):
find /home/httpd /etc/httpd/lib/perl -type f \
    -exec grep -l 'line arguments:' {} \;

the result should show you which file to fix.

HTH!

L8r,
Rob




RE: cgi_to_mod_perl manpage suggestion

2001-03-15 Thread Rob Bloodgood

 On Wed, 14 Mar 2001, Perrin Harkins wrote:

  On Wed, 14 Mar 2001, Issac Goldstand wrote:
 I still think that the above line is confusing:  It is because mod_perl is
   not sending headers by itself, but rather your script must provide the
   headers (to be returned by mod_perl).  However, when you just say "mod_perl
   will send headers" it is misleading; it seems to indicate that mod_perl
   will send "Content-Type: text/html\r\n\r\n" all by itself, and that
   conversely, to disable that PerlSendHeader should be Off.
 
  Would it help if it said "PerlSendHeader On makes mod_perl act just like
  CGI with regard to headers"?

 A small correction: "PerlSendHeader On makes mod_perl act just like
 mod_cgi with regard to HTTP headers" :)

 CGI is a protocol...

Hmm.  What nobody seems to be mentioning explicitly (for the newbies who
would benefit from this discussion) are the things that
mod_cgi/PerlSendHeader *DO*, that otherwise would have to be done manually.

Or, to put it more succinctly, what is the *exact* difference in headers
between PerlSendHeader On and Off (which happens to be the same difference
as between a regular CGI script and an NPH script)?

It seems like almost all of the available documentation assumes that A) you
already know, or B) you don't need to know.

So at the risk of seeming bold, and understanding that this summary *is*
going to be incomplete:

There is a similarity of requirements between a CGI nph-script (Non Parsed
Headers) and mod_perl with PerlSendHeader Off.

In basic CGI, one can simply:
print "Content-Type: text/html\r\n\r\n";

When the CGI script goes back to the web server, it can see from this
output, destined for the client browser, that:
The request was successful
The content type is specified
There is nothing further special about this request.

On (one of) my machines this returns:
HTTP/1.1 200 OK
Connection: close
Date: Thu, 15 Mar 2001 19:09:23 GMT
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux) mod_perl/1.19
Content-Type: text/html
Client-Date: Thu, 15 Mar 2001 19:09:24 GMT
Client-Peer: xx.xx.xx.xx:80

This is actually pretty boring so far.  I could send a cookie, too:
print "Set-Cookie: mycookie=test\r\n";
print "Content-Type: text/html\r\n\r\n";

Or any other headers I want, and the remainder is filled in by the webserver
for me.

But some magic happens when I want to, say, redirect.  Instead of printing
my content-type header, all I have to do is print the following:
"Location: http://elsewhere.com\r\n\r\n";

Look what happens to the response!
HTTP/1.1 302 Found
Date: same
Server: same
Location: http://elsewhere.com
Connection: close
Content-Type: text/html

I have a different status line altogether (along with the Location: that I
printed)!  One can arbitrarily send custom status codes, too... I've done
this with CGI form re-submits:
print "Status: 204 No Content\r\n\r\n";

This returns:
HTTP/1.1 204 No Content
Date: Thu, 15 Mar 2001 19:22:21 GMT
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux) mod_perl/1.19
Connection: close
Content-Type: text/plain

Which is an expensive NO-OP to a browser. No change in window content
WHATSOEVER. (I love that trick.  Just love it! :-)

*NOW*
In mod_perl with PerlSendHeader Off, in order to perform a redirect one must
set up the headers manually:
Test.pm
===
package Test;

use Apache::Constants qw/:common REDIRECT/;
use strict;

sub handler() {
my $r = shift;
$r->content_type('text/html');
$r->headers_out->set(Location => "http://elsewhere.com");
return REDIRECT;
}

1;

REDIRECT here is a constant for the HTTP status code 302 (Found).


But with PerlSendHeader On, I can take the same shortcuts as with CGI:

sub handler() {
print "Location: http://elsewhere.com\r\n\r\n";
}

And the response:
HTTP/1.1 302 Found
Date: Thu, 15 Mar 2001 19:32:10 GMT
Server: Apache/1.3.9 (Unix)  (Red Hat/Linux) mod_perl/1.21
Location: http://elsewhere.com
Connection: close
Content-Type: text/plain

But THE *SAME* CODE with PerlSendHeader Off returns:
Location: http://elsewhere.com

And that's *IT*.  Which parses as HTTP/0.9 and text/plain, causing my
browser to show that single line of text as my content.



NOW... to any non-newbies reading this, what have I left out? :-)

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;




RE: Dynamic loading of development libraries

2001-03-06 Thread Rob Bloodgood

 I'm currently a developer for an on-line publication using Apache /
 mod_perl / Mason.  We currently have about six developers working on the
 project and I have been running into problems with concurrent work on the
 Perl libraries that power our site.

Just a few days ago, somebody suggested http://www.freevsd.org/ as a
solution for this kind of situation.  I've reviewed it briefly, and it
appears to be able to do a lot of what you seem to need.

Good luck!

L8r,
Rob




HTML Mason 1.0 setup

2001-03-01 Thread Rob Bloodgood

I've been using HTML::Mason under mod_perl on my site for awhile, using
0.89, and I like it lots. :-)  So when the new 1.0 came out, I went to go
upgrade, and broke EVERYTHING.

Not only that, but I haven't been able to make sense out of what Mason
wants for its dir hierarchy, anyway:
First, comp_root (apparently) needs to be the same as DocumentRoot, which
seems horribly insecure...  if I could find another way to do it, I would,
but for now, knowing the path my components run under makes them viewable
_AS SOURCE_ by anyone who knows the url.

and in the same vein, the *ONLY* way I could get it to run was to put its
data_dir under DocumentRoot as well.

Why can't I have
/home/httpd/html
/home/httpd/components  (instead of /home/httpd/html/components)
/home/httpd/mason   (instead of /home/httpd/html/mason)

? Or more correctly, how do I tell Mason to use that kind of structure?

And what (the docs don't say, the changelog isn't indicative) changed in the
required setup procedure at 1.0?  My friend called me wanting to do
HTML::Mason, which I told him was absolutely awesome for development, but he
couldn't get it running at all (he only had access to the 1.0 from CPAN)
(and we only had my working config to start with).

This is the relevant section of my startup.pl:
=
package HTML::Mason;

use strict;

use Apache::Constants qw(:common);
use Date::Format;

local $| = 1;

my $parser = new HTML::Mason::Parser;
my $interp = new HTML::Mason::Interp ( parser    => $parser,
                                       comp_root => '/home/httpd/html',
                                       data_dir  => '/home/httpd/html/mason', );

my $ah = new HTML::Mason::ApacheHandler ( interp               => $interp,
                                          output_mode          => 'batch',
                                        # output_mode          => 'stream',
                                          error_mode           => 'html', # fatal
                                          debug_mode           => 'all',
                                          debug_perl_binary    => '/usr/bin/perl',
                                          debug_handler_script => '/etc/httpd/lib/perl/startup.pl',
                                          debug_handler_proc   => 'HTML::Mason::handler', );

# {{{ setuid/taint shut UP!
if (0) {

my @test = ( qw/1 2 3/ );

my @files_written = map { /(.*)/; $1 } @test;   # $interp->files_written

warn "Trying to deal w/ tainting: ",
  Data::Dumper->Dump([ \@files_written ], [ qw/files_written/ ] ),
  "\n";

chown( [getpwnam('nobody')]->[2], [getpwnam('nobody')]->[2],
       @files_written );
}
# }}}

sub handler {
my ($r) = @_;
$ah-handle_request($r);
}

# {{{ globals

{
package HTML::Mason::Commands;

use vars qw($dbh %session);


  # my ($dsn, $user, $pass) = (My::dbi_connect_string(), My::dbi_pwd_fetch());
  # $dsn = 'dbi:Proxy:hostname=devel;port=;dsn=' . $dsn;

  {
  local $^W = 1;
  #  ( dsn, username, password )
  # $interp->set_global(dbh => DBI->connect(My::dbi_connect_string(), My::dbi_pwd_fetch()));
  # $dbh = DBI->connect(My::dbi_connect_string(), My::dbi_pwd_fetch()) or die DBI->errstr;
  # $dbh->{AutoCommit} = 0;
  }
}

# }}} globals
=

TIA

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;






RE: Doh; StatINC can't find files?

2001-02-06 Thread Rob Bloodgood

 wm looks like a home directory.  The default perms on the home
 directory are usually 700.  Try changing that to something like 755
 or even 744 (it may not need execute).

Actually, the x bit on directory perms means "accessible," meaning that if you
KNOW the name of the file, you can reach it at all... I ran into this when
trying to allow ~/public_html.

701 is the correct mask.

L8r,
Rob




FW: Doh; StatINC can't find files?

2001-02-06 Thread Rob Bloodgood






execute (or access for directories) (x)

 drwx-----x    3 rlandrum devel    4096 Jan 30 14:14 public_html
 (701, Forbidden)

that's not what I meant, I should have been more clear.

755 on public_html
701 on ~user

so ~user is still "hidden" from general eyes
but ~user/public_html is ACCESSIBLE (x) thru ~user, and public_html is
READABLE/ACCESSIBLE (r-x) to nobody.





FW: Doh; StatINC can't find files?

2001-02-06 Thread Rob Bloodgood






Thanks for the clarification.  It worked perfect.

drwx-----x   12 rlandrum rlandrum 4096 Feb  6 14:05 rlandrum
drwxr-xr-x    3 rlandrum devel    4096 Jan 30 14:14 rlandrum/public_html

Rob


execute (or access for directories) (x)

 drwx-----x    3 rlandrum devel    4096 Jan 30 14:14 public_html
 (701, Forbidden)

that's not what I meant, I should have been more clear.

755 on public_html
701 on ~user

so ~user is still "hidden" from general eyes
but ~user/public_html is ACCESSIBLE (x) thru ~user, and public_html is
READABLE/ACCESSIBLE (r-x) to nobody.


Robert L. Landrum
Senior Programmer
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"It's working correctly.  It's simply working in contrast to what you have
perceived to be correct."




Socket/PIPE/Stream to long running process

2001-02-02 Thread Rob Bloodgood

So, in my mod_perl app, I run thru each request, then blast a UDP packet to
a process on the local machine that collects statistics on my traffic:

sub send_packet {
    my $r      = shift;
    my $packet = shift;

    $r->warn("send_packet: No packet, not transmitting") if $debug && !$packet;
    return unless $packet;

    $r->warn("Contents:\n$packet<hr>") if $debug;

    $r->warn("send_packet: creating socket") if $debug;
    my $socket = new IO::Socket::INET (
                     PeerAddr => $r->dir_config('CountServer'),
                     Proto    => 'udp',
                 )
        or die "CountServer unable to create socket: $@\n";

    $r->warn("send_packet: Sending to ", $r->dir_config('CountServer')) if $debug;

    # OK we have the correct buffer, lets send it out...
    unless ($socket->send($packet) == length $packet) {
        $r->warn( "send error on " . $socket->peerhost . ": $!" );
        return SERVER_ERROR;
        #redir($r->dir_config('ErrorURL'));
    }

    $r->warn( "send_packet: send successful") if $debug;
}

My question is, should I be creating this socket for every request?  OR
would it be more "correct" to create it once on process startup and stash it
in $r->pnotes or something?

And if I did that, would it work w/ TCP?  Or unix pipes/sockets (which I
*don't* understand) (btw the box is linux)?  In testing, I'd prefer not to
use TCP because it blocks if the count server is hung or down, vs UDP, where
I just lose a couple of packets.
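
Something like this is what I have in mind -- a sketch only, untested, and
assuming the CountServer dir_config value carries whatever host/port info the
constructor needs:

use IO::Socket::INET;

my $count_socket;              # file-scoped lexical, one per httpd child

sub get_socket {
    my $r = shift;
    unless ($count_socket) {
        $count_socket = IO::Socket::INET->new(
            PeerAddr => $r->dir_config('CountServer'),
            Proto    => 'udp',
        ) or die "CountServer unable to create socket: $@\n";
    }
    return $count_socket;
}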

TIA!

L8r,
Rob




RE: applicationontent-Type?

2001-01-26 Thread Rob Bloodgood

  I've been getting these occassional errors from libapreq, 1 every couple
  days:
 
  [Thu Jan 25 15:54:33 2001] [error] [client 64.12.102.22] [libapreq]
  unknown content-type: `applicationontent-Type:
 application/x-www-form-urlencoded\'

Alright, I'm gonna toss my $.02 into this:
This *really* looks like bad string handling.  Specifically, the kind of
annoyance you see when using strncat(3) and forgetting which end of the
string was agreed on for the local project.

Now, I don't pretend to have that much insider info, but I recently had to
code an app that had to send different responses to AOL because AOL's
instantiation of the Explorer object is deliberately broken.  In doing so, I
found that all AOL requests are made using an HTTP Proxy.

   if (
       $r->header_in('Via') =~ m!Traffic-Server!
       and
       $r->header_in('User-Agent') =~ /AOL \d(\.\d)?/
      )

So presumably something that's handling /proxy responses/ is hiccuping.

L8r
Rob




RE: header_out/AUTH_REQUIRE

2001-01-23 Thread Rob Bloodgood

 In my PerlAuthenHandler I need to send back the WWW-Authenticate-line.
 I use $r->headers_out("WWW-Authenticate" => 'basic realm = "MyName"').
 But if i returned from the Handler with "return AUTH_REQUIRED" , Apache
 doesn't send this line in the header.

This is (one of) the relevant sections in *my* AuthenHandler:

unless (length $acctid) {
    # no auth information
    $r->note_basic_auth_failure;
    $r->log_reason("no user provided", $r->filename);
    return AUTH_REQUIRED;
}

It actually has several sections like that, for different criteria.  But
what is important here is the Apache method call
$r->note_basic_auth_failure().  This is the method that is responsible for
setting the right WWW-Authenticate header.  If your AuthHandler is for Basic
Auth (with the prompting from the browser), then the Realm should already be
configured in the httpd.conf, e.g.

<Location /stats>
  AuthName "Stats"
  AuthType Basic
  PerlAuthenHandler   +Stat::Auth
</Location>

Just to pick a tiny nit, the header you are providing is incorrect:

WWW-Authenticate: basic realm = "MyName"   # <-- yours
WWW-Authenticate: Basic realm="Stats"       # <-- just pulled this off of a server
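
FWIW, here's a stripped-down skeleton of how the whole thing hangs together
(mod_perl 1.x style, untested; the real credential check is left out):

package Stat::Auth;

use strict;
use Apache::Constants qw(:common);

sub handler {
    my $r = shift;

    # get_basic_auth_pw() also verifies that AuthType is Basic
    my ($res, $pass) = $r->get_basic_auth_pw;
    return $res if $res != OK;

    my $acctid = $r->connection->user;

    unless (length $acctid) {
        # no auth information
        $r->note_basic_auth_failure;
        $r->log_reason("no user provided", $r->filename);
        return AUTH_REQUIRED;
    }

    # ... real check of $acctid/$pass against your user store goes here ...

    return OK;
}

1;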

Hope this helps!

L8r,
Rob




RE: Specific limiting examples (was RE: Apache::SizeLimit for unsharedRAM ???)

2001-01-15 Thread Rob Bloodgood

 I think that the problem here is that we've asked for more info
 and he hasn't
 supplied it.  He's given us generics and as a result has gotten generic
 answers.

I haven't been fishing for a handout or for someone to do the work for me;
I've been trying to see what people have done.  The reason I've been waiting
is that I've looked at no fewer than 7 CPAN modules that do some kind of
resource/server monitoring, and I couldn't figure out which one(s) would go
together as a reasonable combination.

 1) What are the average hits per second that you are projecting
 this box to handle?
Up to, say, 3,000,000 hits a day.

 2) What is the peak hits per second that you are project this box
 to handle?

This is a guesstimate, but approx 50/s

 3) We know you have a gig of ram, but give us info on the rest of
 the platform?

No, I have 2GB of RAM, on a dual PIII/550 BX board, with an 8GB drive

 4) What's your OS like? Solaris, AIX, HP-UX, Linux, FreeBSD, etc.   which
 version and/or flavor

Running RedHat 6.1 updated w/ all kindsa current patches/updates.

 5) What other processes are you running?

Predominantly I'm running a Count Daemon on the same box.  My project is
http://www.exitexchange.com, and I take raw hits at the webserver and fire
them over to my count daemon which keeps load down on the Database... but
I'm ahead of myself.

 6) Do you have a Database?  Which one? A gig of ram is nothing to Oracle

Oracle 8.1.6 on a Sun 450 (4x400MHz UltraSparc ### w/ 2GB of RAM, 7x9GB SCSI
in a software RAID for the dataspace running Solaris 8)

 6a) Will be running queries constantly or will you be caching a lot?

Whenever possible, I try to cache a lot.  The largest part of the application
that is NOT cached is the part I'm working on right now... ever heard of
POE?

 7)  What other modules are you running?  PhP? SpeedyCGI? Axkit? Cocoon?

Well, the count server is only running mod_perl, with a couple of custom
server extensions, all pretty lean.  Per process is abt 12.5MB, shared is
5600KB.

 In short what is the server DOING at any given moment.  Until
 folks have a feel
 for this no one is going to be able to offer you any insight
 beyond what you
 already have.

Well I get a hit, I hit the database for the response, I send it back
interpolated into the response content.   Currently about 800K times/day.
:-)

However, I got what I was looking for out of this discussion a couple of messages
back, with Perrin's example.  YES the numbers are made up.  No problem... I
have a basic syntactic skeleton to work with, now I can fine tune.

L8r,
Rob




Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-11 Thread Rob Bloodgood

 RB Alright, then to you and the mod_perl community in general, since
 RB I never saw a worthwhile resolution to the thread "the edge of
 RB chaos,"

 The resolution is that the machine was powerful enough.  If you're
 running your mission critical service at "the edge of chaos" then
 you're not budgeting your resources properly.  You should have at
 least a 50% room for expansion.  That is, you should run your machines
 around 50% of their maximum load so you have room to absorb the spikes
 in traffic.

Well, yes and no... the HW is PLENTY powerful enuff, and I *know* I'm not
budgeting resources properly.

First of all, I'm a true geek... I can melt *any* machine. :-)

Second of all, with the literally thousands of pages of docs necessary to
understand in order to be really mod_perl proficient, I'm not at all
surprised or embarrassed that there are things about tuning a high-powered
server environment that I don't know.

Thirdly, I don't have significant load, for the most part.  I have designed
everything I've written to have as little impact on each specific phase of
the transaction process as possible.  On my most important server, a dual
PIII/600 w/ 2GB of RAM, part of my problem is that I put in the second GB
even though I'm *CONVINCED* that all I needed to do was find the fine line of
resource limitation that would prevent meltdown... I mean, 1GB is a lot of
ram.

And finally, I was hoping to prod somebody into posting snippets of
CODE
and
httpd.conf

that describe SPECIFIC steps/checks/modules/configs designed to put a
reasonable cap on resources so that we can serve millions of hits w/o
needing a restart.
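
Even something as dumb as this would do, just to show the kind of arithmetic
I mean (numbers invented):

# back-of-the-envelope MaxClients math
my $ram_for_apache = 425;   # MB I'm willing to hand to httpd
my $unshared_child = 10;    # MB of *unshared* memory per child, measured under load
my $max_clients    = int( $ram_for_apache / $unshared_child );
print "MaxClients $max_clients\n";   # paste the result into httpd.conf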

I know I'm not dumb... in fact, I know I'm exceptionally good.  But with the
ridiculous number of things I have to keep track of (being head geek is
always a busy job), I still haven't been able to wrap my mind around the
correct usage(s) of the various resource limiting modules.  A working
example (even for a completely different machine) would make my job 10 times
easier.

L8r,
Rob




RE: E-COMMERCE-SITE-DESIGN-HOWTO [was: Re: Specific limiting examples(was RE: Apache::SizeLimit for unshared RAM ???)]

2001-01-11 Thread Rob Bloodgood


 You simply cannot come forward and say, "look, I've got this big-assed
 linux box, why is my site sucking?" We don't know, and it's neither our

Granted.  Never my intention.
I described the box only to illustrate that I (should) have sufficient HW.

 The very, very best minds in production architecture out there will pine
 over how much accessing data from RAM sucks - they want all cache, all the
 time. There is not a linear falloff in performance from cache -> RAM ->
 disk, and you should not expect one from the applications which rely on
 those things.

F*** if I care about cache.  I just want to stay in RAM (where I can build
my own caches :-)

 The httpd.conf file is extremely well documented. There is a program in
 $APACHE/bin called 'ab,' and will give you a decent baseline from which to
 provision for your site, and etc.

<sigh>
I can see I gave the wrong impression by several miles.

I was never asking, "HOW DO I DO THIS?"  I have the same web and same
manpages and same apache/perl/mod_perl source as you do.

I'm a professional.  I have a big computing issue.  I was asking (other
professionals, who have similar big computing issues):

WHAT DID *YOU* DO?

Hopefully that's more clear.  I don't WANT to sound cocky or superior or too
lazy/cool to RTFM.  I just want to discuss EXAMPLES of solutions to this
(what seems to be rather common) type of issue.

L8r,
Rob




RE: Apache::SizeLimit for unshared RAM ???

2001-01-09 Thread Rob Bloodgood

  I like the idea of Apache::SizeLimit, to no longer worry about
  setting MaxRequestsPerChild.  That just seems smart, and might
  get maximum usage out of each Apache child.
 
  What I would like to see though is instead of killing the
  child based on VmRSS on Linux, which seems to be the apparent
  size of the process in virtual memory RAM, I would like to
  kill it based on the amount of unshared RAM, which is ultimately
  what we care about.

 It exists for a long time: Apache::GTopLimit. Of course if you have GTop.
 And it's in the guide including all the calculations of the real memory
 used (used by Apache::VMonitor)

So, forgive me for not "getting it," but is there a way to do this without
endless retries and experimentation?  It seems to me that blocking on a
per-child size usage is silly (even tho I'm sure it's what is available at
the programming level).

I mean,
I have a machine w/ 512MB of RAM.
Unload the webserver, and see that I have, say, 450MB free.
So I would like to tell Apache that it is allowed to use at most 425MB.

It's not out there as far as I can find.

So far all I've been able to find is:
Run your service for awhile.
Do some math and guesswork about size/totals/available.
Run it again.
Recheck your math.
Use (per-process limiting module).
Pray that your processes never grow because of rarely used
functionality/peak usage/larger than usual queries ...

because then all of your hard work before goes RIGHT out the window, and I'm
talking about a 10-15 MB difference between JUST FINE and DEATH SPIRAL,
because we've now just crossed that horrible, horrible threshold of (say it
quietly now) swapping! <shudder>

Have I jumped to the wrong conclusion?  Is there a module (or usage) I've
missed?  Somehow I doubt I'm the only one who sees the problem in these
terms... has anybody seen the SOLUTION in these terms??
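
For the record, the closest thing I've found so far is the per-child
approach, something like this in startup.pl (variable names as I remember
them from the Apache::SizeLimit docs -- double-check against your version):

use Apache::SizeLimit;
$Apache::SizeLimit::MAX_PROCESS_SIZE       = 12000;   # KB, per child
$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;
# plus, in httpd.conf:  PerlFixupHandler Apache::SizeLimit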

L8r,
Rob




RE: Apache::SizeLimit for unshared RAM ???

2001-01-09 Thread Rob Bloodgood

  because then all of your hard work before goes RIGHT out the window,
  and I'm talking about a 10-15 MB difference between JUST FINE and
  DEATH SPIRAL, because we've now just crossed that horrible, horrible
  threshold of (say it quietly now) swapping! shudder

 That won't happen if you use a size limit and MaxClients.  The worst that
 can happen is processes will be killed too quickly, which will drive
 the load up.  Yes, that would be bad, but probably not as bad as swapping.

OK, so my next question about per-process size limits is this:
Is it a hard limit???

As in,
what if I alloc 10MB per process and every now & then one of my processes spikes
to a (not unreasonable) 11MB?  Will it be nuked in mid process?  Or just
instructed to die at the end of the current request?




RE: Apache::SizeLimit for unshared RAM ???

2001-01-09 Thread Rob Bloodgood

 On Tue, 9 Jan 2001, Rob Bloodgood wrote:
  OK, so my next question about per-process size limits is this:
  Is it a hard limit???
 
  As in,
  what if I alloc 10MB per process and every now & then one of my
 processes spikes
  to a (not unreasonable) 11MB?  Will it be nuked in mid process?  Or just
  instructed to die at the end of the current request?

 It's not a hard limit, and I actually only have it check on every other
 request.  We do use hard limits with BSD::Resource to set maximums on CPU
 and RAM, in case something goes totally out of control.  That's just a
 safety though.

<chokes> JUST a safety, huh? :-)
Alright, then to you and the mod_perl community in general,
since I never saw a worthwhile resolution to the thread "the edge of chaos,"

In a VERY busy mod_perl environment (and I'm taking 12.1M hits/mo right
now), which has the potential to melt VERY badly if something hiccups (like,
the DB gets locked into a transaction that holds up all MaxClient httpd
processes, and YES it's happened more than once in the last couple of
weeks),

What specific modules/checks/balances would you install into your webserver
to prevent such a melt from killing a box?

Red Hat Linux release 6.1 (Cartman)
Kernel 2.2.16-3smp on an i686
login: Out of memory for httpd

Out of memory for httpd

Out of memory for httpd

Out of memory for httpd
root

Out of memory for mingetty

Out of memory for httpd

Out of memory for httpd
sigh
reset
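
The only belt-and-suspenders piece I've sketched out so far is a per-child
hard cap via BSD::Resource from a child-init handler -- untested, numbers
invented, module name made up:

package My::Limits;

use strict;
use BSD::Resource qw(setrlimit RLIMIT_CPU RLIMIT_DATA);
use Apache::Constants qw(:common);

# httpd.conf:  PerlChildInitHandler My::Limits
sub handler {
    # hard per-child caps, purely as a safety net
    setrlimit(RLIMIT_CPU,  120,          120)          or warn "RLIMIT_CPU: $!";
    setrlimit(RLIMIT_DATA, 64*1024*1024, 64*1024*1024) or warn "RLIMIT_DATA: $!";
    return OK;
}

1;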

...and before the comments about client/server/DBA/caching/proxy/loadbalance
design start flying, I *know*!  I'm working on it, but for right now I
have what I have and I'm trying to keep it alive for just a little
longer until the real fix is done. :-)

TIA!

L8r,
Rob




RE: Problem: Number after header

2000-11-07 Thread Rob Bloodgood

You are not setting your Content-Type correctly.
The response contains:
Content-Type: text/plain

This needs to be 
Content-Type: text/html

to be rendered as HTML.
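
In mod_perl terms that's just (a sketch; under Mason the ApacheHandler
normally takes care of the headers, so where you set this depends on who
owns the response):

$r->content_type('text/html');
$r->send_http_header;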

-Original Message-
From: Guido Moonen [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 07, 2000 3:06 AM
To: Modperl; Mason
Subject: FW: Problem: Number after header


Sorry if you receive this message twice!!!

 -Original Message-
 From: Guido Moonen [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, November 07, 2000 10:20 AM
 To: Mason
 Subject: Problem: Number after header
 
 
 Hi All, 
 
 I have a little problem with my Mason site:
 
 IExplore doesn't have a problem accessing the site, but when I use
 Netscape Communicator I have a problem accessing the page and I
 get only the html source of the web site.
 
 I found that the problem is that Mason sends a Number of some
 sort at the front of the html (after the response header) but I cannot
 find where the number gets printed to the output stream.
 
 Does anybody have any hints on how to find this?
 
 eg: 
 ** Request **
 GET /PHIKNPC1/EN/index.html HTTP/1.1
 Host: cb.clickly.com
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, 
 application/vnd.ms-excel, application/msword, 
 application/vnd.ms-powerpoint, */*
 Accept-Language: en-us
 Accept-Encoding: gzip, deflate
 User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)
 Connection: Keep-Alive
 
 ** Response **
 HTTP/1.1 200 OK
 Date: Tue, 07 Nov 2000 10:13:13 GMT
 Server: Apache/1.3.12 (Unix) mod_perl/1.24
 Keep-Alive: timeout=15, max=100
 Connection: Keep-Alive
 Transfer-Encoding: chunked
 Content-Type: text/plain
 
 2b89
 <html>
 <!-- HTML part -->

 <!-- HTML part -->

 <table width="100%" border="0" cellspacing="0" cellpadding="0">
   <tr>
 <td valign="middle">
 etc...
 
 The Problem is the "2b89"
 
 I have:
 - Mason 0.87 (same problem with 0.89)
 - Apache 1.3.12
 - Mod_Perl 1.24
 
 ==
 Guido Moonen 
 Software Engineer
 Clickly.com
 Van Diemenstraat 206
 1013 CP Amsterdam
 THE NETHERLANDS
 
 Tel:+31 20 6934083
 Fax:+31 20 6934866
 E-mail: [EMAIL PROTECTED]
 web:http://www.clickly.com
 
 
 Get Your Software Clickly!
 ==
 

--




RE: Can't locate object method No via package such

2000-09-26 Thread Rob Bloodgood

Shoulda thought about your answer first, Doug.  :-)

I see this type of message ("error at /dev/null") when my mod_perl scripts
give -w style warnings instead of using $r->warn.  For example, HTML::Embperl and
Apache::Registry both do this.

The nature of the error message sez to me there is a mishandled error
somewhere, like possibly an eval that is turning into a method call:

eval { # read file here
    # the file doesn't exist, so the error text is:
    No such file or directory
    # ...which is parsed by perl as something like:
    # such->No(file or directory)
};
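
You can reproduce the exact message from the command line, which is what
makes me suspect a stray error string being compiled as code (a one-liner,
not specific to your setup):

# perl treats the bareword soup as an indirect method call: such->No(...)
eval "No such file or directory";
print $@;   # Can't locate object method "No" via package "such" ...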

Which further tends to suggest that a necessary environment variable for SSL
is either not defined or pointing to the wrong place??

I would suggest *never* disregarding configtest errors... one poorly
indicative error message can be the final gasp of a long string of errors
caused by a simple typo or whatever several layers deep.

Good luck!

L8r,
L V

-Original Message-
From: Alan E. Derhaag [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 25, 2000 7:29 PM
To: Doug MacEachern
Cc: [EMAIL PROTECTED]
Subject: Re: Can't locate object method "No" via package "such"


Doug MacEachern [EMAIL PROTECTED] writes:

 On 4 Sep 2000, Alan E. Derhaag wrote:

  I upgraded to openssl-0.9.5a and recompiled apache w/mod_ssl and
  mod_perl defining the SSL_BASE to the apache src and now the thing
  won't start and complains about:
 
   Can't locate object method "No" via package "such" at /dev/null line 1.

 looks to me like /dev/null is broken.  if you run:
 % cat /dev/null


Good try, but /dev/null is not broken on my machine.

I finally gave up and eliminated the DSO version by compiling two
versions of httpd.  Both include mod_ssl but the Engine is only turned
on with the light server.

I did find a slight problem when running `configtest' but I doubt that
that could have been the problem.

--