Re: Redhat httpsd with mod_perl

1999-10-21 Thread Adi

 On Wed, 20 Oct 1999, Remi Fasol wrote:
 
  thanks for your suggestions...
 
  as a test, i set MaxRequestsPerChild to 500, but it
  didn't help.
 
  just out of curiosity, could it be that i'm using
  mod_perl as a DSO? i've seen a lot of warnings against
  that but that's how the redhat secure server is
  configured.

I'm using redhat secure server also, and I too get the decreasing amount of
shared memory as requests are served.  However, my SIZE and RSS don't
increase like yours do.  It looks like a memory leak.

Does anyone know why the shared memory would decrease so dramatically? 
Initially virtually all of the 10M size is shared, then it decreases to
about 3M.  I'm using Apache::ASP->Loader to pre-load all my .asp scripts,
and have all the modules I use pre-loaded into the parent server with
PerlModule's.

- Adi



Re: Redhat httpsd with mod_perl

1999-10-21 Thread Randy Harmon

On Wed, Oct 20, 1999 at 10:47:02PM -0700, Adi wrote:
  On Wed, 20 Oct 1999, Remi Fasol wrote:
 Does anyone know why the shared memory would decrease so dramatically? 

Perl code and data both live in the data segment.  As the child runs, any
time it writes into a chunk of that memory, the page is copied on write and
becomes unshared.

You may or may not benefit from exiting your Apache child when its shared
memory size shrinks too much.  A package written by Stas Bekman may help you
detect that condition: it wraps the gtop library in Perl, so you can monitor
'top'-style info from mod_perl.
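
For example, a cleanup handler along these lines might do it (just a sketch
on my part, assuming the GTop proc_mem interface; the 4MB threshold is
arbitrary):

use GTop ();

sub exit_if_unshared {
    my $r = shift;
    my $shared = GTop->new->proc_mem($$)->share;
    # have this child exit after the request if too little memory is still shared
    $r->child_terminate if $shared < 4 * 1024 * 1024;
    return 0;
}

# then, in your content handler once you have $r:
# $r->register_cleanup(sub { exit_if_unshared($r) });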

Your own tests will give you an indication of whether this saves memory and
should also indicate whether there is any performance advantage one way or
the other.

It'd be interesting to hear about your results.

Randy



can't download file with MSIE 5 using modperl

1999-10-21 Thread Dirk Lutzebaeck


Hi,

I have a script to allow dynamic downloading of files which works for
netscape and MSIE 4 but not for MSIE 5:

DOCDIR/PERL/file.pl:

use Apache::Constants;

$r = Apache->request;
open(FILE, "/tmp/o.pdf");
$r->status(OK);
$r->header_out("Content-Disposition", "attachment; filename=\"o.pdf\"");
$r->header_out("Content-Length", 6247);
$r->content_type("application/pdf");
$r->send_http_header;
$r->send_fd(FILE);
close(FILE);


in httpd.conf:

<Location /PERL>
Options ExecCGI
SetHandler perl-script
PerlHandler Apache::Registry
PerlAuthenHandler CS::Cookie->authen
PerlAuthzHandler CS::Cookie->authz
PerlSetVar CookieAuthPath /
PerlSetVar CookieAuthLoginScript /login/index.html
AuthType CS
AuthName CookieAuth
require valid-user
</Location>

Hitting the link on <a href="/PERL/file.pl">file</a> gives an error on 
MSIE5 saying the page couldn't be opened. This works perfectly on
netscape and MSIE 4...

Any clues?

Dirk



RE: server internal error

1999-10-21 Thread Young, Geoffrey S.

well, the new mod_perl guide at perl.apache.org/guide includes a section on
getting mod_perl RPMs to run.
try reading the following:

http://perl.apache.org/guide/install.html#Installing_separate_Apache_and_m

in particular, I would pre-load Apache::Registry at server startup and see
what happens.  That is, make sure your apache config files match the section
in the guide, specifically preloading Registry.pm using

PerlModule Apache::Registry
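
For reference, the guide pairs that PerlModule line with a mapping along these
lines (a sketch; the paths here are only examples, not from your setup):

Alias /perl/ /home/httpd/perl/
<Location /perl>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options ExecCGI
    PerlSendHeader On
</Location>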


then try one of the test scripts provided by the guide and let me know how
it goes...

--Geoff

BTW - I based most of the RPM section of the guide on my own installation of
the 5.2 RPMs, so if you find them lacking, please let me know - they were
made with people like yourself in mind.



 -Original Message-
 From: Fred CHAUSSUMIER [SMTP:[EMAIL PROTECTED]]
 Sent: Tuesday, October 19, 1999 11:21 AM
 To:   [EMAIL PROTECTED]
 Subject:  server internal error
 
 Hello,
 
 I installed apache and mod_perl with the rpm provided with the RedHat
 5.2.
 
 I configured my server and tried to execute a .wpl file with apache.
 
 
 I finally got this message:
 
 The server encountered an internal error or misconfiguration and was
 unable to complete your request.
 
 I then looked at the error_log file of my server and the corresponding
 error message is the following:
 
 [Tue Oct 19 17:15:42 1999] [error] Undefined subroutine main:: called
 at /usr/lib/perl5/site_perl/Apache/Registry.pm line 141.
 
 Has somebody an idea of what is wrong with my installation? Because I
 have no more ideas!
 
 Thank you very much for your help.
 Fred
 
 
 --
 -- Frederique CHAUSSUMIER, PhD student 
 -- LIP-LHPC, Ecole Normale Superieure de Lyon, Office 342 
 -- 46 allee d'Italie, F - 69364 Lyon Cedex 07 
 -- voice:(+33)04 72 72 84 70  fax:(+33)04 72 72 80 80



installation problems

1999-10-21 Thread Shay Mandel

Hi all,

I would like to install mod_perl within the apache (statically linked).

I would like to leave the machine's perl as is, and not to change it at
all.

So, as I run 'make install' I am not using the root user, and then I get
the error - cannot write to /usr/local/bin/perl5..
I do not want to write anything there.

I have created this makepl_args.mod_perl:
APACHE_SRC=/www/apache/v1.3.9/src \
 PREFIX=/www/apache/v1.3.9 \
 PREFIX=/www/apache/v1.3.9 \
 DO_HTTPD=1 \
 USE_APACI=1 \
 PREP_HTTPD=1 \
 EVERYTHING=1

Isn't that enough ? what else should I do ? Or must I use the root user,
and have the machine's perl changed a bit ?

Thanks in advance.
Shay.





Re: installation problems

1999-10-21 Thread Matt Sergeant

On Thu, 21 Oct 1999, Shay Mandel wrote:
 Hi all,
 
 I would like to install mod_perl within the apache (statically linked).
 
 I would like to leave the machine's perl as is, and not to change it at
 all.
 
 So, as I run 'make install' I am not using the root user, and then I get
  the error - cannot write to /usr/local/bin/perl5..
 I do not want to write anything there.
 
 I have created this makepl_args.mod_perl:
 APACHE_SRC=/www/apache/v1.3.9/src \
  PREFIX=/www/apache/v1.3.9 \
  PREFIX=/www/apache/v1.3.9 \
  DO_HTTPD=1 \
  USE_APACI=1 \
  PREP_HTTPD=1 \
  EVERYTHING=1
 
 Isn't that enough ? what else should I do ? Or must I use the root user,
 and have the machine's perl changed a bit ?

It's trying to install the Apache::* libraries and documentation there. I'm
guessing, but there should be a Makefile.PL somewhere that you can run with:

perl Makefile.PL LIB=/www/apache/perl/lib

to install in that directory instead. However, I don't know whether that
Makefile will get overwritten by the build process (if so, you can add that
option to the call to WriteMakefile() in Makefile.PL in your editor).

After all that you'll have to have some way to add that new perl libs
directory to @INC before mod_perl starts. I don't know how to do that but I
assume it's possible. Probably as simple as:

<Perl>
use lib '/www/apache/perl/lib';
</Perl>

in your httpd.conf - but I'm guessing.

--
Matt/

Details: FastNet Software Ltd - XML, Perl, Databases.
Tagline: High Performance Web Solutions
Web Sites: http://come.to/fastnet http://sergeant.org
Available for Consultancy, Contracts and Training.



Re: installation problems

1999-10-21 Thread James G Smith

Matt Sergeant [EMAIL PROTECTED] wrote:
After all that you'll have to have some way to add that new perl libs
directory to @INC before mod_perl starts. I don't know how to do that but I
assume it's possible. Probably as simple as:

<Perl>
   use lib '/www/apache/perl/lib';
</Perl>

in your httpd.conf - but I'm guessing.

If you look in the mod_perl book, you will see that @INC includes 
approximately $server_root/lib.  I put my extra libraries in 
$server_root/lib/perl and they work without any modifications to @INC.
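
For example, a quick way to check what actually ends up in @INC at server
startup is something like this in httpd.conf (just a sketch):

<Perl>
print STDERR "startup \@INC: @INC\n";
</Perl>
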
-- 
James Smith [EMAIL PROTECTED], 409-862-3725
Texas A&M CIS Operating Systems Group, Unix




Re: Porting to Apache::Registry

1999-10-21 Thread Bill Moseley

At 08:54 AM 10/20/99 +0200, Stas Bekman wrote:
 Besides all the information at perl.apache.org, can you recommend any good
 resources (book, web pages) that stand out in your memory as being very
 helpful when you were starting out?

I'm not sure why you have discarded the docs at perl.apache.org so fast;
did you read them at all? Did you take a look at the guide?
perl.apache.org/guide 

I said "besides", not that I discarded the docs you mention.

I did read the Guide.  It's helpful and a wonderful bit of work.  But I
still have questions and I haven't been at this long enough to grok it all
in.  For example, you say:

"Perl's exit() built-in function cannot be used in mod_perl scripts."

I started to edit my scripts per the examples in the Guide, but then
decided to try it first, and, it turns out, exit() works without my explicitly
overriding it in my script.  Reading perldoc Apache::Registry again, I see
that exit is overridden automatically.  Confusing to a newcomer, no?

My CGI scripts were, in a BEGIN block, opening STDERR to a file (and using
that file as a lock file).  But now I see that STDERR is reset next time my
script is called.  Where can I read about that and other similar behavior
that I should be aware of?

How do I find out why die() is causing a diagnostic message to be sent to
the client?  I'm not using Carp or any __WARN__ or __DIE__ handlers in my
script or in the startup.pl file.




Bill Moseley
mailto:[EMAIL PROTECTED]



RE: embperl session handling doc changes

1999-10-21 Thread Gerald Richter

Thanks, I'll update the docs

Gerald



---
Gerald Richter  ecos electronic communication services gmbh
Internet - Infodatenbanken - Apache - Perl - mod_perl - Embperl 

E-Mail: [EMAIL PROTECTED] Tel:+49-6133/925151
WWW:http://www.ecos.de  Fax:+49-6133/925152
---
 

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Cliff Rayman
 Sent: Thursday, October 21, 1999 7:40 PM
 To: [EMAIL PROTECTED]
 Subject: embperl session handling doc changes
 
 
 .. i think this needs to be:
 BEGIN {
  $ENV{EMBPERL_SESSION_CLASSES} = "DBIStore SysVSemaphoreLocker" ;
  $ENV{EMBPERL_SESSION_ARGS} = "DataSource=dbi:mysql:session UserName=test" ;
 }
  use Apache::Session::Embperl ;
  use HTML::Embperl ;
 
 .. otherwise the 'use' statements get evaluated before the environment
 variables have been set.
 
 .. also some mention of the fact that the new Apache::Session is set up
 in a TOTALLY different way from the old version. I never read the
 Apache::Session docs since I used it as part of Embperl and never on its
 own.  It took me a few minutes to realize that I had to use the keyword
 FileStore and pick a locking class, as well as use the keyword Directory
 in the ARGS.  Since I have never used any of the DBI stuff I ignored the
 example args and did not notice that they were any different than the
 ones used at 0.17.
 
 cliff rayman
 genwax.com
 



Re: Apache::Session and File Upload (Was: Apache::Session hangs script)

1999-10-21 Thread Jeffrey Baker

Kip Cranford wrote:
 
 Again, I'm using mod_perl 1.21, apache 1.3.9, Apache::Session 1.03, on a
 RedHat 6 linux system with perl 5.005_03, and am using Netscape Comm.
 4.51 as my browser.
 
 The problem now seems to be Apache::Session and file uploads.  My
 handler is providing a simple file upload interface, and I'm using
 Apache::Session to keep track of filenames, content types, sizes, etc.
 
 Using a very simple script, in which I store only a single scalar
 variable in my session, and using the "multipart/form-data" encoding
 type on my form, I can get the script to hang every time.  It _always_
 hangs in the same place in the "op" function:
 
   DB<1> IPC::Semaphore::op(/usr/lib/perl5/5.00503/IPC/Semaphore.pm:90):
 90:     croak 'Bad arg count' if @_ % 3;
   DB<1> IPC::Semaphore::op(/usr/lib/perl5/5.00503/IPC/Semaphore.pm:91):
 91:     my $data = pack("s*",@_);
   DB<1> IPC::Semaphore::op(/usr/lib/perl5/5.00503/IPC/Semaphore.pm:92):
 92:     semop($$self,$data);

The problem is that you are leaking session handles.  For
Apache::Session to work, there must be zero references to the session
hash at the end of the request.
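
For example (only a sketch, not your code), keeping the tie strictly lexical
and making sure nothing else holds a reference to it usually does it:

{
    tie my %session, 'Apache::Session::DBI', $session_id, $opts;
    $session{file} = $file;
    # don't copy \%session (or references to its values) into anything
    # that outlives this block
    untie %session;
}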

-jwb
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807



Re: PerlTransHandler

1999-10-21 Thread Dan Rench


On Tue, 19 Oct 1999, Mark Cogan wrote:

  On Tuesday, October 19, 1999 4:13 AM, William Deegan
 [SMTP:[EMAIL PROTECTED]] wrote:
   How can I change the environment variables that get passed to a perl
   script running under Apache::Registry from a PerlTransHandler?
  
   I'm using the PerlTransHandler to do a sort of dynamic mod_rewrite
   functionality.

[...]

 Use the %ENV hash in perl. The environment is shared between the whole
 request, so setting $ENV{whatever} in the PerlTransHandler will make it
 visible to the content handler down the line. 

I'd suggest using $r->subprocess_env() instead.

We have a somewhat similar situation where we have a PerlTransHandler
that sets certain environment variables that CGI scripts depend on
(yes, plain mod_cgi while we have mod_perl -- but that's another story).

I guess %ENV will work in many situations, but it might bite you later
when you can't figure out why a particular env variable isn't getting set
in certain situations (speaking from experience).

See the explanation on pages 454-455 in the Eagle book.



at /opt/perl5/lib/site_perl/5.005/i686-linux/Apache/SIG.pm line 31.

1999-10-21 Thread Oleg Bartunov

Hi,

today I found in httpd's error_log a bunch of messages like

at /opt/perl5/lib/site_perl/5.005/i686-linux/Apache/SIG.pm line 31.

I looked in error_log because the server doesn't respond.
In the system log files I found "unable to run interpreter".

What does it mean? I run apache 1.3.9 + mod_perl 1.21_01-dev

Regards,

Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: PerlTransHandler

1999-10-21 Thread Randal L. Schwartz

 "Dan" == Dan Rench [EMAIL PROTECTED] writes:


Dan> I'd suggest using $r->subprocess_env() instead.

I was going to suggest that too.  %ENV controls the environment
of the currently running Perl process, but child processes come from
the "subprocess env", which only the call above sets.

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: Server Stats

1999-10-21 Thread Pascal Eeftinck


 Hi. I realise this is getting off-topic, so I suppose replies should go
 direct to me unless they'll interest the list.

I think it's of general interest to the list really. :)

 I work on a site that makes use of mod_perl, Apache and MySQL. We are
 currently toying around with our server set-up, trying to spread the
 load across multiple machines. For web-serving, this is fairly simple,
 but we're concerned about our MySQL server. Currently, different apps
 sit on different boxes, each with its own MySQL. However, for ease of
 upgrading, we're thinking of moving all MySQL databases to dedicated
 machine(s).

Note that once you do find a stable version of MySQL, unless you need
new features there's hardly much of a reason to upgrade. I've found
3.22.25 and up to be pretty stable, while older versions would go down
every now and then (on a Solaris 2.5 Ultra 2).

Not that crashing has been much of a problem - the safe_mysqld script
checks the databases [with only one problem so far, due to lack of disk space
when creating the temporary tables] and restarts the server. I know the
server crashed once every 1-2 days, but I've never heard any complaint
about it having done that. With the newer versions it stays up a lot
longer, if it even crashes at all.

   MySQL is quick, it's by far the fastest you can get at most operations. On
   the other hand, you can't easily spread the load over multiple servers

 This is our current concern. Is the single machine a good way to go? If
 one app takes down MySQL (which unfortunately does happen once in a
 while) then all apps lose their database. But if the machine gets
 bogged down, we can throw more ram/disk space at it. Is it possible to
 run MySQL across multiple servers? Should we be looking at a solution
 with multiple database servers instead of one machine? At the
 hardware level, this would be more reliable, but at the script level,
 we'd have to keep track of multiple machines, and being a lazy perl
 monkey, I want all my scripts to talk to the database in an identical
 manner.

I'd best direct you to section 19.1 of the MySQL manual for this. But
basically it boils down to you having to do all the hard work - it isn't
too hard to maintain a mirror server of your database (through the update
log), but once you start updating your slave server as well, you will get
out of sync with the data on the master. So if you only need to read
from your database, it might be worth the performance gain to have a slave
server just for reading.
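
If you do go the master/slave route, the split can be as simple as something
like this (a sketch; the hostnames and credentials are made up):

use DBI ();

# writes always go to the master, reads can go to a slave
my $dbh_write = DBI->connect("dbi:mysql:mydb;host=db-master", "user", "pass");
my $dbh_read  = DBI->connect("dbi:mysql:mydb;host=db-slave",  "user", "pass");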

Another alternative is of course separate databases: if they are unrelated,
or only very slightly related, then run multiple servers with these
different databases. (Perhaps one might contain all your persistent
session data, and another your customer data - people with a persistent
session are not necessarily your customers too.)

What's probably also an important factor is whether you have one database
and a set of scripts accessing it, or lots of databases and more scripts
accessing them. The latter will give you problems with persistent
database connections and the database server will have to work a lot
harder to cache all the different sets of data.

And do try to keep your queries down. If you can cache data then by
all means go for it. I have a CGI server with a MySQL database on it
and dozens of CGI's accessing different databases on that server. It
has to do an average of some 5500 SQL queries per 5 minutes, peaking at
12200 queries per 5 minutes at peak load. It consumes 60% of the CGI
server's CPU capacity, and it handles a lot of simultaneous connections at
this peak load. As it takes so much of the machine's capacity, it also brings
down the number (and performance) of the mod_perl CGIs it can serve,
in a snowball effect. I'm sure once I'm done converting the heavily
used scripts to HTML::Mason and implementing a lot of caching there
the load on the MySQL server will be a lot less and leave room for the
useful stuff. :)
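
For instance, even a trivial per-child cache can cut a lot of repeat queries
(a sketch of mine; the table and column names are invented):

my %cache;    # lives as long as this mod_perl child does

sub cached_lookup {
    my ($dbh, $name) = @_;
    return $cache{$name} if exists $cache{$name};
    my ($value) = $dbh->selectrow_array(
        "select value from settings where name = ?", undef, $name);
    return $cache{$name} = $value;
}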

As someone else already mentioned, the MySQL people are working hard on
making a distributed database possible. I'm curious to see what will
come of that, although I don't think I'll use the feature myself.

Grtz,
Pascal

--
Pascal Eeftinck - arcade^xs4all.nl - Perl is not a language, it's a way of life



Re: Porting to Apache::Registry

1999-10-21 Thread Stas Bekman

 At 08:54 AM 10/20/99 +0200, Stas Bekman wrote:
  Besides all the information at perl.apache.org, can you recommend any good
  resources (book, web pages) that stand out in your memory as being very
  helpful when you were starting out?
 
 I'm not sure why you have discarded the docs at perl.apache.org so fast;
 did you read them at all? Did you take a look at the guide?
 perl.apache.org/guide 
 
 I said "besides", not that I discarded the docs you mention.

99% of the info you can find about mod_perl is located at perl.apache.org.
There are a few articles written here and there, but if you don't find the
info *about mod_perl* at the mod_perl site, I'm not sure you will find it
elsewhere...

See below for the info about the book...
 
 I did read the Guide.  It's helpful and a wonderful bit of work.  But I
 still have questions and I haven't been at this long enough to grok it all
 in.  For example, you say:
 
 "Perl's exit() built-in function cannot be used in mod_perl scripts."
 
 I started to edit my scripts per the examples in the Guide, but then
 decided to try it first, and, it turns out, exit() works without my explicitly
 overriding it in my script.  Reading perldoc Apache::Registry again, I see
 that exit is overridden automatically.  

I beg your pardon, did you read the whole section or only the first
sentence?  I'm not hiding it; the third paragraph of
http://perl.apache.org/guide/porting.html#Using_exit_ clearly states the
following: 

<QUOTE>
Note that if you run the script under Apache::Registry, The Apache
function exit() overrides the Perl core built-in function. While you see
the exit() listed in @EXPORT_OK of Apache package, Apache::Registry makes
something you don't see and imports this function for you. This means that
if your script is running under Apache::Registry handler (Apache::PerlRun
as well), you don't have to worry about exit().
</QUOTE>

 Confusing to a newcomer, no?

If you read the whole thing, definitely not confusing.

Sorry if I sound harsh, but it's a known excuse some folks use to get a
free ride: "I read the docs, and didn't find the answer. Can you please
tell me...". 

Again, I'm not trying to offend you, Bill. Just a request to truly RTFM
before asking questions. I think it's pretty fair to the folks who spend
their time to answer questions.

 My CGI scripts were, in a BEGIN block, opening STDERR to a file (and using
 that file as a lock file).  But now I see that STDERR is reset next time my
 script is called.  Where can I read about that and other similar behavior
 that I should be aware of?

In the wonderful book Doug and Lincoln wrote, see www.modperl.com

STDERR is tied by Apache to the error_log file. I'll add this note to
the guide.
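
For example (just a sketch of mine, not from the guide; the lock-file path is
made up), you can write to the error log through the API and keep the lock on
its own filehandle instead of reopening STDERR:

my $r = Apache->request;
$r->log_error("something worth noting");        # ends up in error_log
open(LOCKFH, ">/tmp/myapp.lock") or die $!;     # dedicated lock filehandle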

BEGIN blocks are being run only once per process life:
http://perl.apache.org/guide/porting.html#BEGIN_blocks

 How do I find out why die() is causing a diagnostic message to be sent to
 the client?  I'm not using Carp or any __WARN__ or __DIE__ handlers in my
 script or in the startup.pl file.

I believe it's because of your previous statement -- you break the tied
STDERR and all the output goes to STDOUT. Don't mess with STDERR unless
you know what you are doing. Though it could be for another reason...

Hope this helps...

___
Stas Bekman  mailto:[EMAIL PROTECTED]www.singlesheaven.com/stas  
Perl,CGI,Apache,Linux,Web,Java,PC at  www.singlesheaven.com/stas/TULARC
www.apache.org   www.perl.com  == www.modperl.com  ||  perl.apache.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: Server Stats

1999-10-21 Thread Ed Phillips

 this is like closing the gate after the horse has bolted without things
 like decent locking and transactions. Although perhaps I'm mistaken and

You can rest assured that they know what they are doing. :-)

It is also worth upgrading to newer versions. The newest versions, not yet
deemed stable, no longer use ISAM, are much faster, and will allow for a host
of new features. Stay tuned.

ed



Re: Perlhandler - AUTH: Solved

1999-10-21 Thread Jie Gao

On Thu, 21 Oct 1999, Darko Krizic wrote:

 I just hacked a little PerlHandler (content handler) module that uses Basic auth. I 
found out that things like 
 
 $r->note_basic_auth_failure;
 my($res, $sent_pw) = $r->get_basic_auth_pw;
 return AUTH_REQUIRED
 
 do not work correctly for a content handler. Therefore I wrote this code:
 
 
 #!/usr/bin/perl -w
 
 package My;
 
 use strict;
 use Apache ();
 use Apache::Log;
 use Apache::Constants (qw/:common/);
 use MIME::Base64;   # needed for decode of basic auth
 
 sub handler
 {
 my $r = shift;
 my $user;
 my $password;
 my $userpass = $r->header_in("Authorization") || undef;
 Apache->request($r);
 my $log = $r->log();
 
 # optionally decode authorization
 if( $userpass ) {   # got any authorization
 if( $userpass =~ m/^Basic / ) {
 # only basic
 $userpass =~ s/^Basic //;   # remove leading
 ($user,$password)   # decode user + pass
 = split(":", decode_base64 $userpass);
 $log->warn("user=$user, password=$password");
 }
 }
 
 unless( defined $user
 and $user eq "DeKay" and 
 defined $password
 and $password eq "got it" 
 ) {
 # no auth or auth not valid
 $r-header_out("WWW-Authenticate" = "Basic realm=\"Test\"");
 $r-content_type("text/html");
 $r-status(AUTH_REQUIRED);
 $r-send_http_header;
 $r-print("Auth required");
 return OK;
 }
 
 # auth valid
 $r-content_type("text/html");
 $r-send_http_header;
 $r-print("user: $userBR");
 $r-print("Password: $passwordBR");
 return OK;
 }
 
 1;
 
 Questions:
 
 - This does not look very mod_perlish, can this be done "better"?
 - How can I make Apache print its "Authentication required" message itself? In this 
module I have to do this by myself.

Get the Eagle book (see www.modperl.com) and you'll get what you want.
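
For reference, the usual authentication-phase shape looks roughly like this
(only a sketch, with the same hardcoded credentials for illustration;
configure it with PerlAuthenHandler My::Authen instead of doing the check in
the content handler):

package My::Authen;
use Apache::Constants qw(:common);

sub handler {
    my $r = shift;
    my ($res, $sent_pw) = $r->get_basic_auth_pw;
    return $res if $res != OK;          # e.g. no credentials sent yet
    my $user = $r->connection->user;
    unless ($user eq "DeKay" and $sent_pw eq "got it") {
        $r->note_basic_auth_failure;
        $r->log_reason("bad username or password", $r->uri);
        return AUTH_REQUIRED;           # Apache sends its own 401 page
    }
    return OK;
}
1;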


Jie



Re: Apache::Session and File Upload (Was: Apache::Session hangs script)

1999-10-21 Thread Kip Cranford


Thanks for the reply, Jeffrey.

Ok, I can understand how leaking session handles would cause a read or
write lock, or whatever.   However, I thought that untying the session
hash would release whatever locks the session held -- at the end of the
simple script I untie the hash.

In fact, the script is so simple I don't see where I could be leaking
the handles (but just because _I_ can't see where doesn't mean a whole
lot :)

And finally, to add to my confusion, I can take the test script, change
the form encoding from multipart to x-www-form-urlencoded, and have it
work fine.  Change it back to multipart, and it hangs.  Any idea there??

Thanks for your attention,

--kip 

p.s. I'm including the test script at the end of this message -- there's
probably something obviously wrong that I just can't see...



The problem is that you are leaking session handles.  For
Apache::Session to work, there must be zero references to the session
hash at the end of the request.

-jwb
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807




Test Script

 
use strict;
use Apache ();
use Apache::Constants qw( :common );
use Apache::Session::DBI;
use CGI();

sub handler {
my $r = shift;

$r->send_http_header("text/html");

my $session_id = CGI::param('session') || undef;

my %session;
my $opts = {
    DataSource => 'dbi:mysql:sessions',
    UserName   => 'nobody',
    Password   => '',
};

tie %session, 'Apache::Session::DBI', $session_id, $opts;

my $file = CGI::param('upload');
if ($file) {
$session{'file'} = $file;
}

print <<__EOS__;

Apache::Session Test Script<br>
Session ID number is: $session{_session_id}<br>
Storing file: $file<br>
<br>

<form action="http://xxx/secure" enctype="multipart/form-data" method="post">
<!--form action="http://xxx/secure"  method="post"-->
  Type in your name here:
  <input type="file" name="upload"><br>
  <input type="submit" value="Go!">
  <input type="hidden" name="session" value="$session{_session_id}">
</form>
__EOS__
print "untieing the session...br";
untie %session;
}

1;

===
End Test Script
===



Re: Runaway processes

1999-10-21 Thread Joshua Chamas

Mike Dameron wrote:
 
 We have several scripts using Apache::ASP and DBI, which return
 information from an Oracle database.  The problem is that if I run a
 report that taken a long time (not sure how long but over 30 minutes)
 and the connection either times out or I hit the stop button the oracle
 process which gets spawned does not die.  I can see in the process list
 that the oracle process is still tied to a httpd process and continues
 to eat up cpu cycles.
 

Why would you run a report from a web site that takes 30+ minutes to
finish?  Typical web connection timeouts are a few minutes or less.

I think the real solution is to make your reports fast enough to run
from a web site: for example, run the real report every night and cache
its results in the database, so that later you can get them from the
browser instantly.

A problem is that any report that long has the potential to read-lock
your tables, so that if any updates need to happen they are locked
out.  Your web site then seems to shut down.

One solution is to kill any queries that take too long, so that your
web site is less at risk from these kinds of untuned, database-killing
queries.  I do this with the Oracle-specific code below, run in a
cron job.

Note that I use a class that wraps around DBI, so you 
will have to change some things to make it work for you.

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NODEWORKS  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051


$self-Debug("cleaning up locks");
my $data = $self-DoSQL(SQL, 'Fetch');
-- (
select s.sid,s.serial#
from sys.v_\$lock l, sys.v_\$session s
where l.ctime > $self->{stale_time}
and l.block != 0
and (
l.lmode = 3 or
l.lmode = 5 or
l.lmode = 4 or
l.lmode = 6
)
and l.sid = s.sid
and s.status != 'KILLED'
order by l.ctime desc
-- )
SQL
;
return unless $data;

my($sid, $serial) = @{$data};
if($SessionKills{"$sid,$serial"}++ >= 3) {
    $self->Critical("killing session $sid,$serial, give up");
}

my $rv = $self->DoSQL("alter system kill session '$sid,$serial'") || 0;
$self->Log("killing stale session $sid: $rv\n");
$self->Commit();



Re: Runaway processes

1999-10-21 Thread Jeffrey Baker

Joshua Chamas wrote:
 
 Mike Dameron wrote:
 
  We have several scripts using Apache::ASP and DBI, which return
  information from an Oracle database.  The problem is that if I run a
  report that takes a long time (not sure how long but over 30 minutes)
  and the connection either times out or I hit the stop button the oracle
  process which gets spawned does not die.  I can see in the process list
  that the oracle process is still tied to a httpd process and continues
  to eat up cpu cycles.
 
 
 Why would you run a report from a web site that takes 30+ minutes to
 finish?  Typical web connection timeouts are a few minutes or less.
 
 I think the real solution is to make your reports fast enough to run
 from a web site: for example, run the real report every night and cache
 its results in the database, so that later you can get them from the
 browser instantly.

Another workable solution is to make the reports in real time, but
deliver them by email.  Use the web interface to initiate the reporting
process and send a confirmation.

Cheers,
Jeffrey
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807



Re: Runaway processes

1999-10-21 Thread Tobias Hoellrich

And yet another one (which I used for one project): detach the process from
the Apache CGI (no, don't use fork inside your script) and have the process
do its work in the background, writing progress information to a known
location. Have your web page reload every minute or less, read the progress
information, and display it to the user. When the script finally completes
the report, have the next reload present the results.
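
A bare-bones Apache::Registry sketch of such a progress page might look like
this (the file location and refresh interval are invented):

use Apache ();

my $r = Apache->request;
my $progress = "starting...";
if (open(PROGRESS, "/tmp/report-progress.txt")) {
    local $/;                  # slurp the whole progress file
    $progress = <PROGRESS>;
    close(PROGRESS);
}
$r->content_type("text/html");
$r->send_http_header;
$r->print(qq{<html><head><meta http-equiv="refresh" content="60"></head>\n});
$r->print(qq{<body>Report progress: $progress</body></html>\n});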

Tobias

At 06:18 PM 10/21/99 -0700, Jeffrey Baker wrote:
Another workable solution is to make the reports in real time, but
deliver them by email.  Use the web interface to initiate the reporting
process and send a confirmation.

Cheers,
Jeffrey
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807




Re: Spreading the load across multiple servers (was: Server Stats)

1999-10-21 Thread Ed Phillips



 I don't have any real answers - just a suggestion. What is wrong with the
 classic RDBMS architecture of RAID 1 on multiple drives with MySQL - surely
 it will be able to do that transparently?


Yes, RAID is very helpful with MySQL.  I spoke with Monty, the developer of MySQL at 
the open source conference in Monterey and he said that they are currently working on 
replication and mirroring features. It might be worth inquiring directly with them. 


Ed



RE: Spreading the load across multiple servers (was: Server Stats)

1999-10-21 Thread William R. Lorenz

Has anyone considered writing a proxy to allow the client
and/or server software to connect to a single data source,
or would this defeat the purpose by having the software use
a single server as a proxy? :)

In addition, what are the issues involved with mirroring
a MySQL database between database servers?

I apologize to the list if this is considered off-topic
and non-beneficial, but I thought I would try my luck, as
well. ;)  Thanks, in advance, for replies and comments.

  MySQL is quick, it's by far the fastest you can get at 
  most operations. On the other hand, you can't easily
  spread the load over multiple servers

  possible to run MySQL across multiple servers? Should
  we be looking at a solution with multiple database
  servers instead of one machine? At the hardware level,

 There's a DBD::Multiplex in development (by Thomas Kishel
 and myself) that's designed to allow you to spread the
 load across multiple database servers and/or add
 resilience in case one goes down.

%%  "The will to win is important, but the will to prepare
 is vital."  -- Joe Paterno