RE: Embperl and SSI

2000-01-18 Thread Gerald Richter

 Thanks but I can't get the configuration to work right. I am now using
 EmbPerlChain. What should my PerlHandler line in httpd.conf say?


I haven't used EmbperlChain myself so far, but if you search the mod_perl
mailing list archives (http://forum.swarthmore.edu/epigone/modperl),
you will surely find an answer.

Gerald



Embperl optEarlyHttpHeader

2000-01-18 Thread Andre Landwehr

Hi,
if I set optEarlyHttpHeader (64) within my EMBPERL_OPTIONS my
page is shown correctly in the browser. But if
optEarlyHttpHeader is not set, the contents of my page are mixed
up. I use several blocks of [- -] and [* *] and I execute two
shell scripts using Perl's backtick syntax (unfortunately I cannot
avoid this due to backward compatibility with other pages; those
scripts do some layout stuff). The different blocks of my page
get reordered and in between I see the HTTP headers.

Can I avoid this reordering and ensure a correct display of my
page without having to do a major rewrite?

Thanks,
Andre



RE: Embperl optEarlyHttpHeader

2000-01-18 Thread Gerald Richter

Hi,
 if I set optEarlyHttpHeader (64) within my EMBPERL_OPTIONS my
 page is shown correctly in the browser. But if
 optEarlyHttpHeader is not set, the contents of my page are mixed
 up. I use several blocks of [- -] and [* *] and I execute two
 shell scripts using Perl's backtick syntax (unfortunately I cannot
 avoid this due to backward compatibility with other pages; those
 scripts do some layout stuff). The different blocks of my page
 get reordered and in between I see the HTTP headers.

 Can I avoid this reordering and ensure a correct display of my
 page without having to do a major rewrite?


Without optEarlyHttpHeader, Embperl buffers your output and, if everything
is ok, sends the headers first and the content of the page afterwards. If you
print directly to STDOUT, that output is sent straight to the browser and is
not buffered by Embperl, so it shows up before the HTTP headers. You can use
optRedirectStdout to also buffer normal output to Perl's STDOUT, but this
will not affect shell scripts.

If you want to buffer the shell script output, you need to read the output
yourself and print it afterwards:

[+

local $/ = undef ;              # slurp the whole output at once
open FH, "foo.sh|" or die "cannot run foo.sh: $!" ;
$out = <FH> ;
close FH ;

$out ;
+]

Gerald

-
Gerald Richterecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



RE: squid performance

2000-01-18 Thread Ask Bjoern Hansen

On Tue, 18 Jan 2000, Stas Bekman wrote:

 I'm still confused... which is the right scenario:
 
 1) a mod_perl process generates a response of 64k, if the
 ProxyReceiveBufferSize is 64k, the process gets released immediately, as
 all 64k are buffered at the socket, then a proxy process comes in, picks
 8k of data every time and sends down the wire. 

Yes.

Or at least the mod_perl process gets released quickly, also for responses > 8KB.
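For reference, a sketch of how the front-end proxy's httpd.conf might set the
directive being discussed (Apache 1.3 mod_proxy; the 64 KB figure mirrors the
scenario above and is only an example value):

```
# httpd.conf of the front-end proxy (Apache 1.3 + mod_proxy)
# 64 KB socket receive buffer, so responses up to that size are
# absorbed at once and the mod_perl backend is freed immediately
ProxyReceiveBufferSize 65536
```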

 - ask 

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 60M impressions per day, http://valueclick.com



Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Alan Burlison

Can someone please explain why Apache does all the dlclosing and
dlopening of shared files on startup and a restart?  I can think of no
reason why this would ever be necessary - why on earth is it done?

Alan Burlison



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Daniel Jacobowitz

On Tue, Jan 18, 2000 at 12:59:13PM +, Alan Burlison wrote:
 Can someone please explain why Apache does all the dlclosing and
 dlopening of shared files on startup and a restart?  I can think of no
 reason why this would ever be necessary - why on earth is it done?
 
 Alan Burlison
 

Probably the biggest reason for dlopen/dlclose on a restart is that the
list of modules in the config file can change on a restart.  The reason
for the reload on startup has something to do with parsing the config
file in the parent and child; it was never adequately explained to me.

The trick would be getting it not to do this without busting up the
module API, which I can actually think of a few ways to do, and in a
way that the Apache Group didn't rigorously object to :)

Dan

/\  /\
|   Daniel Jacobowitz|__|SCS Class of 2002   |
|   Debian GNU/Linux Developer__Carnegie Mellon University   |
| [EMAIL PROTECTED] |  |   [EMAIL PROTECTED]  |
\/  \/



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter

 Can someone please explain why Apache does all the dlclosing and
 dlopening of shared files on startup and a restart?  I can think of no
 reason why this would ever be necessary - why on earth is it done?

I don't know, but I know for sure that it causes a lot of trouble with mod_perl
and Perl modules which use XS code...

Gerald


-
Gerald Richterecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter

 Probably the biggest reason for dlopen/dlclose on a restart is that the
 list of modules in the config file can change on a restart.  The reason
 for the reload on startup has something to do with parsing the config
 file in the parent and child; it was never adequately explained to me.

 The trick would be getting it not to do this without busting up the
 module API, which I can actually think of a few ways to do, and in a
 way that the Apache Group didn't rigorously object to :)


Do you know whether it would be possible to catch the dlclose inside mod_perl
and unload (dlclose) all external libraries that Perl has loaded so far? The
problem is that these libraries persist across such a dlclose/dlopen and
afterwards don't get properly initialised, so some of the code still points
to data from the old Perl interpreter, which causes strange behaviour.

Gerald




-
Gerald Richterecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



Apache::Session, Embperl...

2000-01-18 Thread Robert Locke

We are using Embperl 1.2 and Apache::Session 1.3 (using
DBIStore/SysVSemaphoreLocker) with Oracle as the backend.

We've been observing periodic browser hangs which can be sporadically
replicated by hitting the same page in quick succession using the same
session id.  After doing that, updating %udat seems to cause the hang;
perhaps the server process is waiting to acquire a write lock (?).

Please note that we update the contents of %udat by calling a routine which 
exists in a separate module, something like:
[-
updateSession(\%udat);
-]

Are there any special considerations when doing something like that?  Or is 
that plain silly?

Sorry for the lack of detail.  It's getting late in my part of the world.  I 
will provide a more complete post tomorrow once we've had a chance to 
experiment some more.  But in the meantime, any insights would be 
appreciated.

Thanks,

Rob

PS. This seems related to a very recent post:
"Apache::Session: hanging until alarm"
(http://forum.swarthmore.edu/epigone/modperl/ningsmyplar)



__
Get Your Private, Free Email at http://www.hotmail.com



Well, I'm dumb...

2000-01-18 Thread tarkhil


Okay. I'm a poor programmer. I'm dumb. OKAY, I'M AN IDIOT. I agree.

But now, PLEASE, point me to WHY Apache::Session NEVER gets destroyed
in the sample handler.

Have I told you that I'm a poor programmer, don't know Perl at all and
am overall dumb and cannot even type?

JUST POINT ME TO MY ERROR!

PLEASE NO clever ideas about rereading the mail archive; for over an hour
I've been trying to find my error.

=== cut handler.pl ===
#!/usr/bin/perl
# $Id: handler.pl,v 1.3 2000/01/14 19:42:16 tarkhil Exp $
#
$ENV{GATEWAY_INTERFACE} =~ /^CGI-Perl/
  or die "GATEWAY_INTERFACE not Perl!";
use Apache::Registry;
use Apache::Status;
use DBI;
use Socket;
use Carp qw(cluck confess);
use Apache::DBI ();
package HTML::Mason;
use HTML::Mason;
use strict;
{
  package HTML::Mason::Commands;
  use Apache::Registry;
  use Apache::Status;
  use DBI;
  use Apache::DBI ();
  use Apache::Session::DBI;
  use Net::SNPP;
  use Apache::AuthDBI;
  use MIME::Head;
  use Mail::Send;
  use Mail::Mailer;
  use Mail::Header;
  use Image::Size 'html_imgsize';
  use HTTP::Status;
  use Net::POP3;
  #use Apache::SizeLimit;
  #$Apache::SizeLimit::MAX_PROCESS_SIZE = 1; # 1kB = 10MB
  Apache::DBI->connect_on_init
    ("DBI:mysql:mail2pager",
     "tarkhil", "cypurcad",
     { RaiseError => 1, PrintError => 1, AutoCommit => 1, }
    );
}

my $parser = new HTML::Mason::Parser
  (
   allow_globals => [qw($dbh %session %pagers %mailserv @mssort)]
  );
my $interp = new HTML::Mason::Interp
  (parser => $parser,
   data_dir => "/usr/local/www/tmp",
   comp_root => "/usr/local/www");

$interp->set_global(dbh => DBI->connect
    ("DBI:mysql:mail2pager",
     "tarkhil", "cypurcad",
     { RaiseError => 1, PrintError => 1,
       AutoCommit => 1, }
    ));
$interp->set_global('%pagers' =>
    (
     0 => "÷ÙÂÅÒÉÔÅ ÏÐÅÒÁÔÏÒÁ",
     1 => "íÏÂÉÌ ôÅÌÅËÏÍ",
     2 => "áÓÔÒÁÐÅÊÄÖ",
     3 => "òÁÄÉÏÓËÁÎ",
     4 => "ëÏÎÔÉÎÅÎÔÁÌØ",
     5 => "÷ÅÓÓÏÌÉÎË",
     6 => "òÁÄÉÏÐÅÊÄÖ",
     7 => "÷ÅÓÓÏÔÅÌ",
     8 => "íÕÌØÔÉ-ðÅÊÄÖ",
     9 => "éÎÆÏÒÍ-üËÓËÏÍ",
     10 => "áÌØÆÁËÏÍ",
     1000 => "Mail2Phone Free (íôó)",
     1001 => "íôó/BeeLine",
     101 => "óôó-ðÅÊÄÖ",
     102 => "÷ÅÓÓÏÌÉÎË (ðÓËÏ×)",
     103 => "ëÁÌÕÖÓËÁÑ ÓÏÔÏ×ÁÑ",
     104 => "áÌËÏÍ üÌÅËÔÒÏÎÉËÓ (HÏ×ÏËÕÚÎÅÃË)",
     105 => "áÌËÏÍ üÌÅËÔÒÏÎÉËÓ (ëÅÍÅÒÏ×Ï)",
     106 => "üËÓËÏÍ ðÅÔÅÒÂÕÒÇ",
     107 => "íÔÅÌÅËÏÍ óÐÂ.",
     11 => "òÏÚÁ íÉÒÁ",
     12 => "÷ó ôÅÌÅËÏÍ",
     13 => 'üëóðòåóó (óÐÂ)',
     200 => 'óÅÒËÏÍ (èÁÂÁÒÏ×ÓË)',
    ));
$interp->set_global('%mailserv' =>
    (
     'pop.mail.ru' => 'MAIL.RU',
     'chat.ru' => 'CHAT.RU',
     'mail.express.ru' => 'EXPRESS.RU',
     'other' => '(ÄÒÕÇÏÍ)',
    ));
$interp->set_global('@mssort' =>
    (
     'other', 'chat.ru', 'mail.express.ru', 'pop.mail.ru'
    ));
my $ah = new HTML::Mason::ApacheHandler (interp => $interp,
                                         output_mode => 'batch');
chown ( 60001, 60001, $interp->files_written );
use Apache::Cookie;

sub handler {
  my ($r) = @_;
  # Handle only what we understand
  return -1 if $r->method !~ /^(GET|POST|HEAD)$/;
  return -1
    if defined($r->content_type) && $r->content_type !~ m|^text/|io;
  my ($port, $addr) = Socket::sockaddr_in($r->connection->local_addr);
  # WebDAV will do everything on itself
  return -1 if $port == 8000;
  local $SIG{ALRM} = sub {Carp::confess "Alarm ";};
  alarm 20;
  my $cook = Apache::Cookie->new($r);
  my $cookie = $cook->get('SESSION_ID');
  # Unless exists session_id, clean it!
  my $dbh = DBI->connect
    ("DBI:mysql:mail2pager",
     "tarkhil", "cypurcad",
     { RaiseError => 1, PrintError => 1,
       AutoCommit => 1, }
    );
  my $sth = $dbh->prepare_cached(q{
select id from sessions
  where id = ?
});
  $sth->execute($cookie);
  my $rses;
  $rses = $sth->fetchall_arrayref();
  if (scalar @$rses == 0) {
    $cookie = undef;
  }
  my %session;
  tie %session, 'Apache::Session::DBI', $cookie,
  { DataSource => 'dbi:mysql:mail2pager', UserName => 'tarkhil',
    Password => 'cypurcad'};
  # warn "\[$$\] Tied session $session{_session_id}\n";
  $cook->set(-name => "SESSION_ID", -value => $session{_session_id})
      if ( !$cookie );

  local *HTML::Mason::Command::session = \%session;
  my $res = $ah->handle_request($r);
  # warn "\[$$\] Going to untie session $session{_session_id}\n";
  alarm 2;
  untie %session;
  # warn "\[$$\] Session untied\n";
  alarm 0;
  return $res;
}

1;
=== cut handler.pl ===

=== from httpd.conf ===
PerlRequire 

Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Daniel Jacobowitz

On Tue, Jan 18, 2000 at 02:40:59PM +0100, Gerald Richter wrote:
  Probably the biggest reason for dlopen/dlclose on a restart is that the
  list of modules in the config file can change on a restart.  The reason
  for the reload on startup has something to do with parsing the config
  file in the parent and child; it was never adequately explained to me.
 
  The trick would be getting it not to do this without busting up the
  module API, which I can actually think of a few ways to do, and in a
  way that the Apache Group didn't rigorously object to :)
 
 
 Do you know whether it would be possible to catch the dlclose inside mod_perl
 and unload (dlclose) all external libraries that Perl has loaded so far? The
 problem is that these libraries persist across such a dlclose/dlopen and
 afterwards don't get properly initialised, so some of the code still points
 to data from the old Perl interpreter, which causes strange
 behaviour.
 

That is what my patch did.  And that was the explanation I posted of
the problem last week when we were debugging it.

Dan

/\  /\
|   Daniel Jacobowitz|__|SCS Class of 2002   |
|   Debian GNU/Linux Developer__Carnegie Mellon University   |
| [EMAIL PROTECTED] |  |   [EMAIL PROTECTED]  |
\/  \/



Re: Well, I'm dumb...

2000-01-18 Thread Jeffrey W. Baker

[EMAIL PROTECTED] wrote:
 
 Okay. I'm a poor programmer. I'm dumb. OKAY, I'M AN IDIOT. I agree.

 But now, PLEASE, point me to WHY Apache::Session NEVER gets destroyed
 in the sample handler.

 Have I told you that I'm a poor programmer, don't know Perl at all and
 am overall dumb and cannot even type?

 JUST POINT ME TO MY ERROR!

 PLEASE NO clever ideas about rereading the mail archive; for over an hour
 I've been trying to find my error.

Alexander,

You continue to post, now resorting to all capital letters, but you
still haven't answered the questions I asked of your original post.  At
least, if you have, I never saw them.

If you claim that you don't know much about Perl, why are you messing
around with changing the scope of a global glob (local
*HTML::Mason::blah)?  With all these globals, why would the session hash
ever get released?  Remember that globals never go away in mod_perl
unless you explicitly undefine them.
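A minimal sketch, outside mod_perl, of the behaviour being described: a
package global accumulates data across calls the way it would across requests
inside one long-lived mod_perl child, while a lexical is freed each time (the
names here are made up for illustration):

```perl
use strict;
use warnings;

our %session_cache;    # package global: survives between calls

sub handle_request {
    my ($id) = @_;
    my %local_session = ( id => $id );     # lexical: freed on return
    $session_cache{$id} = "data for $id";  # global: accumulates forever
    return scalar keys %session_cache;
}

print handle_request("a"), "\n";   # 1
print handle_request("b"), "\n";   # 2 - the first entry is still there

# the only way the global goes away is explicit cleanup:
%session_cache = ();
```

In a plain CGI the interpreter exits after each request, so this never shows
up; under mod_perl the second call sees everything the first call left behind.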

I honestly don't have the time to try to reproduce your problem, because
your code is doing some rather complex things.  Have you been able to
get something simpler working?

-jwb



mod_perl.pm and Apache.pm

2000-01-18 Thread Kreimendahl, Chad J


I'll start this off with the error message I've been getting:
Undefined subroutine Apache::perl_hook called at
/usr/local/lib/perl5/site_perl/5.005/sun4-solaris/mod_perl.pm line 28.

I have installed the Apache bundle (as well as several dozen other Apache
modules).  This error is occurring in the make test phase of
Apache::AuthCookie, running as such:
/usr/bin/perl -Iblib/arch -Iblib/lib
-I/usr/local/lib/perl5/5.00503/sun4-solaris -I/usr/local/lib/perl5/5.00503
test.pl

A quick look at the Apache module shows that the subroutine perl_hook is
non-existent, yet it is in the documentation.  Hopefully I'm just being stupid
and missing something extremely obvious... shouldn't there be a perl_hook
subroutine in Apache.pm?

-Chad K



RE: Apache::Session, Embperl...

2000-01-18 Thread Gerald Richter


 We are using Embperl 1.2, Apache Session 1.3 (using
 DBIStore/SysVSemaphoreLocker) with Oracle as the backend.

 We've been observing periodic browser hangs which can be sporadically
 replicated by hitting the same page in quick succession using the same
 session id.  After doing that, updating %udat seems to cause the hang,
 perhaps the server process is waiting to acquire a write lock (?).


I guess this will be the reason. I have never tried it, but maybe the ipcs
utility can give some information about your semaphore when this occurs.

Is anything special happening before the hang (errors etc.)?

 Please note that we update the contents of %udat by calling a
 routine which
 exists in a separate module, something like:
 [-
 updateSession(\%udat);
 -]

 Are there any special considerations when doing something like
 that?  Or is
 that plain silly?


That's no problem and shouldn't have anything to do with your problem.
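A quick sketch of why passing \%udat around is harmless: a reference to a
tied hash still routes all reads and writes through the tie, no matter which
module touches it. Tie::StdHash stands in here for Apache::Session's tied
hash, and the routine name is just the one from the post:

```perl
use strict;
use warnings;
use Tie::Hash;    # provides Tie::StdHash, a plain in-memory tie

# Stand-in for the Apache::Session tied hash.
tie my %udat, 'Tie::StdHash';

# "Separate module" routine: receives a reference to the tied hash.
sub updateSession {
    my ($session) = @_;
    $session->{last_seen} = 12345;   # writes still go through the tie
}

updateSession(\%udat);
print $udat{last_seen}, "\n";   # 12345
```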

Gerald



-
Gerald Richterecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



Re: mod_perl.pm and Apache.pm

2000-01-18 Thread Stas Bekman

 I'll start this off with the error message I've been getting:
 Undefined subroutine Apache::perl_hook called at
 /usr/local/lib/perl5/site_perl/5.005/sun4-solaris/mod_perl.pm line 28.
 
 I have installed the Apache bundle (as well as several dozen other Apache
 modules).  This error is occurring in the make test phase of
 Apache::AuthCookie, running as such:
 /usr/bin/perl -Iblib/arch -Iblib/lib
 -I/usr/local/lib/perl5/5.00503/sun4-solaris -I/usr/local/lib/perl5/5.00503
 test.pl
 
 A quick look at the Apache module shows that the subroutine perl_hook is
 non-existent, yet it is in the documentation.  Hopefully I'm just being stupid
 and missing something extremely obvious... shouldn't there be a perl_hook
 subroutine in Apache.pm?

mod_perl is written in Perl and C. XS is used to glue the two. If you
cannot find a function in the Perl code, it's probably coded in C and you
will find the interface in an XS file. A simple find reveals that:

  % cd /usr/src/mod_perl-1.21_dev
  % find . -type f -exec grep perl_hook {} \;

gives:

[snipped]
perl_hook(name);
[snipped]

Now looking for the XS file:

  % find . -type f -exec grep -l 'perl_hook(name)' {} \;

gives:
./src/modules/perl/Apache.xs

So it's in ./src/modules/perl/Apache.xs - well, it was pretty obvious to
look for it there by simply trying s/Apache.pm/Apache.xs/ - but as it's
not obvious for those who don't fiddle with XS, find(1) can be quite
helpful...

I know that it doesn't answer your real question. But at least it
reassures you that the code is there :)

___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter


 That is what my patch did.  And that was the explanation I posted of
 the problem last week when we were debugging it.


Sorry, I missed that thread. I have posted this problem more than once here,
because it has bitten me and others often when using Embperl. The problem
there is often more hidden, because it doesn't SIGSEGV; it still works, but
some functionality (where Perl variables are tied to C variables) doesn't
work, so it's often not easy to detect.

Unfortunately I never had the time to track this down enough to create a
really useful patch (just a workaround, (PERL_STARTUP_DONE_CHECK), which
causes the XS libraries to be loaded only after the second load of
libperl.so; this works for the startup, but not after a restart).

Although I haven't tried it yet, your patch makes much sense to me. I will
try it out as soon as I get a little free time. The next step is to port it
to NT, because there is no dlopen/dlclose there (of course there is one, but
with a different name) and on NT Apache only works with dynamic libraries.
If it works on Unix and NT it should go straight into the CVS and a mod_perl
1.22!

Thanks for solving that issue!

Gerald


-
Gerald Richterecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



Re: squid performance

2000-01-18 Thread jb

I looked at mod_proxy and found that the pass-through buffer size
is IOBUFSIZ; it reads that much from the remote server and then
writes it to the client, in a loop.
Squid uses 16K.
Neither is enough.
In an effort to get those mod_perl daemons to free up for long
requests, it is possible to patch mod_proxy to read as much as
you like in one gulp and then write it.
Having done that, I am now pretty happy - mod_rewrite + mod_proxy +
mod_forwarded_for in front of mod_perl works great.. just a handful
of mod_perls can drive scores of slow readers! I think that is better
than squid for those with this particular problem.
-Justin

On Mon, Jan 17, 2000 at 07:56:33AM -0800, Ask Bjoern Hansen wrote:
 On Sun, 16 Jan 2000, DeWitt Clinton wrote:
 
 [...]
  On that topic, is there an alternative to squid?  We are using it
  exclusively as an accelerator, and don't need 90% of it's admittedly
  impressive functionality.  Is there anything designed exclusively for this
  purpose?
 
 At ValueClick we can't use the caching for obvious reasons so we're using
 a bunch of apache/mod_proxy processes in front of the apache/mod_perl
 processes to save memory.
 
 Even with our average 1KB per request we can keep hundreds of mod_proxy
 childs busy with very few active mod_perl childs.
 
 
   - ask
 
 -- 
 ask bjoern hansen - http://www.netcetera.dk/~ask/
 more than 60M impressions per day, http://valueclick.com



Re: squid performance

2000-01-18 Thread Michael Alan Dorman

Vivek Khera [EMAIL PROTECTED] writes:
 This has infinite more flexibility than squid, and allows me to have
 multiple personalities to my sites.  See for example the sites
 http://www.morebuiness.com and http://govcon.morebusiness.com

If when you say "multiple personalities", you mean virtual hosts, then
squid---at least recent versions---can do this as well.

Mind you, the FAQ doesn't explain this at all, but the Users Guide at
http://www.squid-cache.org/Doc/Users-Guide/detail/accel.html gives
you all the details.

You have to have a redirector process, but there are several out there
(squirm, a couple of others), and the specs for building your own are
really trivial---like five lines of perl for a basic shell.
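For the record, a basic redirector shell really is only a few lines of Perl.
A sketch (the hostname and the rewrite rule are made-up examples): squid
writes one request per line on stdin, "URL client/fqdn ident method", and
expects one URL (rewritten or unchanged) per line on stdout, unbuffered:

```perl
#!/usr/bin/perl
use strict;
use warnings;

$| = 1;    # squid requires unbuffered, line-by-line answers

# The first whitespace-separated field of each input line is the URL.
sub rewrite {
    my ($line) = @_;
    my ($url) = split ' ', $line, 2;
    # hypothetical rule: send one virtual host to a backend accelerator
    $url =~ s{^http://govcon\.example\.com/}{http://127.0.0.1:8080/govcon/};
    return $url;
}

while (my $line = <STDIN>) {
    print rewrite($line), "\n";
}
```

Point squid at it with the usual redirect_program directive and it will be
consulted for every request.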

Now I've not done bench tests to see which of these options does the
best (quickest, lowest load, etc) job, and of course familiarity with
the configuration options counts for a lot---but I have to say (not
that it's entirely pertinent) that even though I spend a lot more time
with apache than squid, the times I've had to play with acls, I've
found squid easier than apache.

Now if you meant something else, well, I don't know what to say other
than I think there are drugs that can help...

Mike.



Re: squid performance

2000-01-18 Thread Ask Bjoern Hansen

On 17 Jan 2000, Michael Alan Dorman wrote:

 Vivek Khera [EMAIL PROTECTED] writes:
  This has infinite more flexibility than squid, and allows me to have
  multiple personalities to my sites.  See for example the sites
  http://www.morebuiness.com and http://govcon.morebusiness.com
 
 If when you say "multiple personalities", you mean virtual hosts, then
 squid---at least recent versions---can do this as well.

(plain) Apache can also serve static files (maybe after language/whatever
negotiations etc), run CGI's and much more - as Vivek mentioned.


 - ask 

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 60M impressions per day, http://valueclick.com



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Daniel Jacobowitz

On Tue, Jan 18, 2000 at 08:26:17PM +0100, Gerald Richter wrote:
 Although I haven't tried it yet, your patch makes much sense to me. I will
 try it out as soon as I get a little free time. The next step is to port it
 to NT, because there is no dlopen/dlclose there (of course there is one, but
 with a different name) and on NT Apache only works with dynamic libraries.
 If it works on Unix and NT it should go straight into the CVS and a mod_perl
 1.22!

Now if only we could get the memory leaks tracked down...

ltrace logs of apache are simply too huge to deal with, but I'll try
anyway to straighten them out and figure out what is not getting freed.

Dan

/\  /\
|   Daniel Jacobowitz|__|SCS Class of 2002   |
|   Debian GNU/Linux Developer__Carnegie Mellon University   |
| [EMAIL PROTECTED] |  |   [EMAIL PROTECTED]  |
\/  \/



Re: redhat apache and modperl oh my!

2000-01-18 Thread Clay

Todd Finney wrote:

 At 12:26 PM 1/17/00 , Gerd Kortemeyer wrote:
 Clay wrote:
 
  so i am just wanting to know what anyone
  has found out on mod perl not working properly
  under redhat 6.1?
 
 If you install everything (including modperl) from RedHat's RPMs, no
 problem (I
 did this on five very different boxes, some new, some upgraded). If you try to
 do it yourself by building from sources, etc, ... oh well - then RedHat is in
 the way and gets all confused..

 I just finished putting together a box based on RH6.1:

 apache 1.3.9
 mod_perl 1.21
 latest Perl 5( 5.005_03?)

 I compiled everything from source, no rpms.  It went together without a
 hitch.  Are people having problems with 6.1?

 Todd

yah, but i recompiled apache and mod_perl and it worked too



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Alan Burlison

Vivek Khera wrote:

 AB To summarise:  Apache dlclose's the mod_perl.so, which then results in
 AB the perl libperl.so being unloaded as well (there's a linker dependency
 
 Excellent summary... thanks!
 
 AB from mod_perl -> perl libperl.so).  Unfortunately the perl XS modules
 AB loaded in during startup via dlopen are *not* unloaded, nor do they
 AB succeed in locking the perl libperl.so into memory (you could construe
 AB this as a linker bug).  Then Apache reloads the mod_perl libperl.so,
 
 I think the linker is in error here for not adding the dependency on
 the library and that is what should be fixed...

Don't worry, I already have a bug report open and someone from the
linker group is having a look.  I haven't been able to replicate the
exact same problem in a minimal set of C files, but I'm working on it. 
I have a linker debug trace that shows the problem, though (on Solaris,
use "LD_DEBUG=files,detail httpd -X" and look for the addresses that the
perl libperl.so is mapped in at).

However, other folks have reported the exact same problem on other OSs,
eg Linux & BSD, so I think that in the short term we need to be
realistic and find a way of not tickling what seems to be a common
linker bug.

Alan Burlison



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Daniel Jacobowitz

On Tue, Jan 18, 2000 at 08:03:42PM +, Alan Burlison wrote:
 The current fix is to forcibly unload the perl XS modules during the
 unload.  However, on reflection I'm not at all sure this is the correct
 thing to do.  Although you can unload the .so component of a perl
 module, you can't unload the .pm component, so just removing the .so
 part as in the current workaround is suspect at least.

Remember - mod_perl is being unloaded, and Perl is going away.  At this point
perl_destruct/perl_free have already been called, and thus the .pm
components are effectively unloaded.

 I think the correct fix is for the Apache core to avoid dlclosing
 anything it has dlopened in the first place.  If new modules have been
 added to the config files, they should be dlopened, but any old ones
 should *not* be dlclosed, EVEN IF THEY ARE NO LONGER IN THE CONFIG
 FILE!!!
 
 I firmly believe this needs fixing in the Apache core, not by hacking
 around it in mod_perl.
 
 Alan Burlison
 


Dan

/\  /\
|   Daniel Jacobowitz|__|SCS Class of 2002   |
|   Debian GNU/Linux Developer__Carnegie Mellon University   |
| [EMAIL PROTECTED] |  |   [EMAIL PROTECTED]  |
\/  \/



splitting mod_perl and sql over machines

2000-01-18 Thread Stas Bekman

Well, I've got a performance question

We all know that mod_perl is quite hungry for memory, but when you have
lots of SQL requests, the SQL engine (MySQL in my case) and httpd are
competing for memory (also I/O and CPU, of course). The simplest solution
is to bump up to a stronger server until it gets "outgrown" as the load
grows and you need a more sophisticated solution.

My question is about the cost-effectiveness of adding another cheap PC vs.
replacing with a new expensive machine. What are the immediate implications
on performance (speed), since the two machines have to interact with each
other, e.g. when setting MySQL to run on one machine and leaving
mod_perl/apache/squid on the other? Has anyone done that?

Most of my requests are served within 0.05-0.2 secs, but I'm afraid that
adding a network (even a very fast one) to deliver MySQL results will
make the response time go much higher, so I'll need more httpd processes
and I'll get back to the original situation where I don't have enough
resources. Hints?

I know that when you have a really big load you need to build a cluster of
machines or the like, but when the requirement is in the middle - not too
big, but not small either - it's a hard decision to make... especially when
you don't have the funds :)

Thanks!
___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Vivek Khera

 "SB" == Stas Bekman [EMAIL PROTECTED] writes:

SB replacing with a new expensive machine. What are the immediate
SB implications on performance (speed), since the two machines have to
SB interact with each other, e.g. when setting MySQL to run on one machine
SB and leaving mod_perl/apache/squid on the other? Has anyone done that?

We do this, but the queries that run are large and produce big data
sets.  The network latency is not an issue for us.

But if your choice is to replace with an expensive PC, why not just
*add* the expensive PC to be your DB server?  Then leave the web
serving to a dedicated web-serving box and tune it appropriately, and
tune the DB server as well.  I see no real long-term value in adding a
cheap box to be your DB server.



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Jeffrey W. Baker

Stas Bekman wrote:
 
 Well, I've got a performance question
 
 We all know that mod_perl is quite hungry for memory, but when you have
 lots of SQL requests, the SQL engine (MySQL in my case) and httpd are
 competing for memory (also I/O and CPU, of course). The simplest solution
 is to bump up to a stronger server until it gets "outgrown" as the load
 grows and you need a more sophisticated solution.

That is the simplest solution but it doesn't scale very highly.

 My question is about the cost-effectiveness of adding another cheap PC vs.
 replacing with a new expensive machine. What are the immediate implications
 on performance (speed), since the two machines have to interact with each
 other, e.g. when setting MySQL to run on one machine and leaving
 mod_perl/apache/squid on the other? Has anyone done that?

It is commonplace to operate the database and the web server on
different machines.

 Most of my requests are served within 0.05-0.2 secs, but I'm afraid that
 adding a network (even a very fast one) to deliver MySQL results will
 make the response time go much higher, so I'll need more httpd processes
 and I'll get back to the original situation where I don't have enough
 resources. Hints?

Your latency concerns are unwarranted.  On my mostly empty switched 100
Mbps ethernet segment, packet latency between two fast machines is on
the order of 1 millisecond.

I like to put the database and the httpd on different machines because
that allows you to scale two requirements independently.  If your httpd
processes are heavyweights with respect to RAM consumption, you can
easily add another machine to accomodate more httpd processes, without
changing your database machine.  If your database is somewhat CPU
intensive, but your httpd doesn't need much CPU time, you can get low
end machines for the httpd and a nice CPU for the database.  Finally, in
the extreme case that your system is database-limited, you can use one
or more cheapo web servers (Cobalt Raqs for instance) and some giant
database machine like a UE10K.

With the database and the httpd on the same machine, you have
conflicting interests.  During your peak load, you will spawn more
processes and use RAM that the database might have been using, or that
the kernel was using on its behalf in the form of cache.  You will
starve your database of resources at the time when it needs those
resources the most.

If you've outgrown one box, go ahead and move the database to another
machine.  You won't regret the latency penalty, and you will gain a good
deal of flexibility.

Regards,
Jeffrey



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Stas Bekman

 SB replacing with new expensive machine. The question is what are the
 SB immediate implications for performance (speed), since the two machines have to
 SB interact with each other? e.g. when setting the mysql to run on one machine
 SB and leaving mod_perl/apache/squid on the other. Anyone did that? 
 
 We do this, but the queries that run are large and produce big data
 sets.  The network latency is not an issue for us.

My code is different: each request executes about 10-20 little queries. So
I'll probably see the latency influence the response times, right? 

 But if your choice is to replace with an expensive PC, why not just
 *add* the expensive PC to be your DB server?  Then leave the web
 serving to a dedicated web-serving box and tune it appropriately, and
 tune the DB server as well.  I see no real long-term value in adding a
 cheap box to be your DB server.

Throwing away the cheap box and putting in two expensive ones instead is even
better :) Of course you are right about long-term planning; I was talking
about the case when you don't have to buy the cheap box, since we have it
already...


___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl, CGI, Apache, Linux, Web, Java, PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o- + single o-+ = singlesheaven    http://www.singlesheaven.com



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter


To summarise:...

Thanks for the summary. I have known about this problem for a long time, and
I am very happy that somebody has taken the time to track it down and provide
a solution :-)


 However, other folks have reported the exact same problem on other OSs,
 eg Linux  BSD, so I think that in the short term we need to be
 realistic and find a way of not tickling what seems to be a common
 linker bug.

It's the same on NT. It seems to occur on all OSs, so it won't help
to blame the linker; there are too many linkers... and I
am not sure the linker can know under all circumstances which libraries
to unload.

I don't think we will convince the Apache people in the short term that they
shouldn't unload the libraries (though we can discuss it with them). The only
practical solution I see is to unload the libraries, as the patch
already does. The thing left to do is to port it to other OSs (like NT)
which do not have a function named dlclose (we simply need to use another
name). This solution will work, and if the Apache people decide at some point
not to unload the modules, it won't hurt.

Gerald

P.S. Did you get any feedback from your post to p5p about the unload
function in DynaLoader?



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Vivek Khera

 "SB" == Stas Bekman [EMAIL PROTECTED] writes:

SB Throwing away the cheap box and putting two expensive instead is even
SB better :) Of course you are right about long-term planning, I was talking
SB about the case when you don't have to buy the cheap box, since we have it
SB already...

Then use it!  As long as it has fast disks, that is.  You don't want
to be running a database server on IDE disks.  That's suicide!



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Alan Burlison

Daniel Jacobowitz wrote:
 
 On Tue, Jan 18, 2000 at 08:03:42PM +, Alan Burlison wrote:
  The current fix is to forcibly unload the perl XS modules during the
  unload.  However, on reflection I'm not at all sure this is the correct
  thing to do.  Although you can unload the .so component of a perl
  module, you can't unload the .pm component, so just removing the .so
  part as in the current workaround is suspect at least.
 
 Remember - mod_perl is being unloaded.  Perl going away.  At this point
 perl_destruct/perl_free have already been called, and thus the .pm
 components are effectively unloaded.

Ah yes, so they are.  I still think dlclosing for no good reason sucks
though.

Alan Burlison



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Alan Burlison

Gerald Richter wrote:

 It's the same on NT. It seems to occur on all OSs, so it won't help
 to blame the linker; there are too many linkers... and I
 am not sure the linker can know under all circumstances which libraries
 to unload.

Yes it can.  Its main job is to keep track of and control the dependencies
between libraries.  It's just that sometimes they don't do a particularly
good job of it ;-)

 I don't think we will convince the Apache people in the short term that they
 shouldn't unload the libraries (though we can discuss it with them). The only
 practical solution I see is to unload the libraries, as the patch
 already does. The thing left to do is to port it to other OSs (like NT)
 which do not have a function named dlclose (we simply need to use another
 name). This solution will work, and if the Apache people decide at some point
 not to unload the modules, it won't hurt.

I think they should be persuaded - this is a very insidious bug and
extremely hard to find.

 P.S. Did you get any feedback from your post to p5p about the unload
 function in DynaLoader?

No.  Nothing meaningful.

Alan Burlison



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Stas Bekman

Ok, thanks for the answers.

Seems like a great addition to the guide's performance chapter.

Just to ride on this thread and to make the section complete: what are
the suggested HW requirements for a machine running a general SQL server vs.
a machine doing pure I/O and CPU work (httpd/mod_perl)? Let me try:

I'm talking about middle-range requirements, when you don't yet need RAID
solutions and two machines are just fine. The sky is the limit for the HW
you can throw at your service, so this is an attempt to specify the best
solution at the least cost. 

SQL/DB machine:
* HD:  Ultra-Wide SCSI
* RAM: little if there are no big joins, otherwise according to needs
* CPU: medium-high (according to needs)

mod_perl machine:
* HD:  IDE (no real IO but logging)
* RAM: the more the better
* CPU: medium-high (according to needs)

Net: 
10M outside connection (no need for 100M because of the web bottleneck)
100M internal net between the 2 machines (the faster the better)

Anything else important I've missed? 

This is something I've attempted to cover at
perl.apache.org/guide/hardware.html but it was a general one, I'll have to
split it for httpd and 'other' parts :)


___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com




Re: splitting mod_perl and sql over machines

2000-01-18 Thread Leslie Mikesell

According to Stas Bekman:

 We all know that mod_perl is quite hungry for memory, but when you have
 lots of SQL requests, the sql engine (mysql in my case) and httpd are
 competing for memory (also I/O and CPU of course). The simplest solution
 is to bump in a stronger server until it gets "outgrown" as the loads
 grow and you need a more sophisticated solution.

In a single box you will have contention for disk i/o, RAM, and CPU.
You can avoid most of the disk contention (the biggest time issue)
by putting the database on its own drive.  I've been running dual
CPU machines, which seems to help with the perl execution although
I haven't really done timing tests against a matching single
CPU box.  RAM may be the real problem when trying to expand a
Linux pentium box.

 My question is about the cost-effectiveness of adding another cheap PC vs.
 replacing with a new expensive machine. The question is: what are the
 immediate implications for performance (speed), since the two machines have to
 interact with each other? e.g. when setting the mysql to run on one machine
 and leaving mod_perl/apache/squid on the other. Has anyone done that? 

Yes, and a big advantage is that you can then add more web servers
hitting the same database server.

 Most of my requests are served within 0.05-0.2 secs, but I'm afraid that
 adding a network (even a very fast one) to deliver mysql results will
 make the response time go much higher, so I'll need more httpd processes
 and I'll get back to the original situation where I don't have enough
 resources. Hints?

The network just has to match the load.  If you go to a switched 100M
net you won't add much delay.  You'll want to run persistent DBI
connections, of course, and do all you can with front-end proxies
to keep the number of working mod_perl's as low as possible.
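A minimal httpd.conf sketch of the setup Leslie describes (host names, ports, and paths below are hypothetical, not taken from this thread):

```apache
# Backend mod_perl server: load Apache::DBI before any DBI-using code so
# that DBI->connect calls are transparently cached per child process.
PerlModule Apache::DBI
# In startup.pl you can additionally pre-open the connection per child
# (DSN and credentials are placeholders):
#   Apache::DBI->connect_on_init('DBI:mysql:mydb:dbhost', 'user', 'pw');
PerlRequire /usr/local/apache/conf/startup.pl

# Front-end proxy server: serve images/static pages locally and pass
# only the dynamic URLs to the heavyweight mod_perl backend, keeping
# the number of working mod_perl children low.
ProxyPass        /app/ http://backend.example.com:8080/app/
ProxyPassReverse /app/ http://backend.example.com:8080/app/
```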

 I know that when you have a really big load you need to build a cluster of
 machines or alike, but when the requirement is in the middle - not too
 big, but not small either it's a hard decision to do... especially when
 you don't have the funds :)

The real killer time-wise is virtual memory paging to disk.  Try to 
estimate how much RAM you are going to need at once for the mod_perl
processes and the database and figure out whether it is cheaper to
put it all in one box or two.  If you are just borderline on needing
the 2nd box, you might try a different approach.  You can use a
fairly cheap box as a server for images and static pages, and perhaps
even your front-end proxy server as long as it is reliable.

  Les Mikesell
   [EMAIL PROTECTED]



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter



 Yes it can.

No it can't :-)

  Its main job is to keep track of and control the dependencies
 between libraries.  It's just that sometimes they don't do a particularly
 good job of it ;-)


This works only if the dependencies are known at link time, but look at the
source of DynaLoader. You can retrieve the address of any (public) symbol inside a
library dynamically at runtime. Now you have the entry address and can pass it
around. No linker will ever have a chance to know what you do in your
program. As soon as you use such things (and DynaLoader uses them), the
linker doesn't have a chance!

Gerald




Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Daniel Jacobowitz

On Tue, Jan 18, 2000 at 10:19:04PM +0100, Gerald Richter wrote:
 
 
  Yes it can.
 
 No it can't :-)
 
   Its main job is to keep track of and control the dependencies
  between libraries.  It's just that sometimes they don't do a particularly
  good job of it ;-)
 
 
 This works only if the dependencies are known at link time, but look at the
 source of DynaLoader. You can retrieve the address of any (public) symbol inside a
 library dynamically at runtime. Now you have the entry address and can pass it
 around. No linker will ever have a chance to know what you do in your
 program. As soon as you use such things (and DynaLoader uses them), the
 linker doesn't have a chance!

You're confusing the dynamic and static linkers.  The dynamic linker is
what he was referring to; it knows what libraries it resolves symbols
to.

Dan

/\  /\
|   Daniel Jacobowitz|__|SCS Class of 2002   |
|   Debian GNU/Linux Developer__Carnegie Mellon University   |
| [EMAIL PROTECTED] |  |   [EMAIL PROTECTED]  |
\/  \/



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter


 You're confusing the dynamic and static linkers.  The dynamic linker is
 what he was referring to; it knows what libraries it resolves symbols
 to.

Yes, I know the difference, and you will be right in most cases, but the
address that is returned could be passed around to other libraries and the
linker can't know this. (DynaLoader.so can retrieve an address of a symbol in
embperl.so and pass that address to mod_perl.so, which then passes the address
on to whatever. How should the dynamic linker know that "whatever" is calling
the symbol in embperl.so and storing the address?)

Gerald



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Daniel Jacobowitz

On Tue, Jan 18, 2000 at 10:43:09PM +0100, Gerald Richter wrote:
 
  You're confusing the dynamic and static linkers.  The dynamic linker is
  what he was referring to; it knows what libraries it resolves symbols
  to.
 
 Yes, I know the difference, and you will be right in most cases, but the
 address that is returned could be passed around to other libraries and the
 linker can't know this. (DynaLoader.so can retrieve an address of a symbol in
 embperl.so and pass that address to mod_perl.so, which then passes the address
 on to whatever. How should the dynamic linker know that "whatever" is calling
 the symbol in embperl.so and storing the address?)

Ah, I see what you mean.  That's boundedly undefined - the linker
certainly shouldn't be trying to catch that sort of case.  But it
should, IMO, catch bindings that it makes itself.

Dan




Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Alan Burlison

Gerald Richter wrote:

 This works only if the dependencies are known at link time, but look at the
 source of DynaLoader. You can retrieve the address of any (public) symbol inside a
 library dynamically at runtime. Now you have the entry address and can pass it
 around. No linker will ever have a chance to know what you do in your
 program. As soon as you use such things (and DynaLoader uses them), the
 linker doesn't have a chance!

Nope, that's not how it works.  Take a look at
http://docs.sun.com:80/ab2/coll.45.10/LLM/@Ab2PageView/5121

*All* symbols in a shared library are known by ld.so

Alan Burlison



Re: redhat apache and modperl oh my!

2000-01-18 Thread G.W. Haywood

Hi there,

 I compiled everything from source, no rpms.  It went together without a
 hitch.  Are people having problems with 6.1?

[EMAIL PROTECTED] was having problems earlier as well.

I didn't see anyone reply to him yet.

73,
Ged.




Re: how come httpd doesn't start even though startup.pl is fine?(fwd)

2000-01-18 Thread Ricardo Kleemann

Hmmm :-(

On 14 Jan 2000, Frank D. Cringle wrote:

 
 Without having checked your list, I'll wager that the "good" modules
 are all pure perl and the "bad" ones use machine-language XS
 extensions.

So typical modules like MD5 and MIME::Body are "bad" modules?

Ricardo



Re: redhat apache and modperl oh my!

2000-01-18 Thread Ask Bjoern Hansen

On Mon, 17 Jan 2000, Todd Finney wrote:

[...]
 I compiled everything from source, no rpms.  It went together without a
 hitch.  Are people having problems with 6.1?

I use RedHat 6.1 on my workstation and my notebook and a few production
boxes without any problems. No DSO though.


 - ask

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 60M impressions per day, http://valueclick.com



Re: how come httpd doesn't start even though startup.pl is fine?(fwd)

2000-01-18 Thread Clay

Cliff Rayman wrote:

 i don't think he is saying that the module is "bad",
 he is saying that modules that use XS, with apache
 mod_perl have caused  problems with startup and restarts.
 based on the running posts regarding
  dlopen and dlclose, i'd say he was correct.

 cliff rayman
 genwax.com

 Ricardo Kleemann wrote:

  Hmmm :-(
 
  On 14 Jan 2000, Frank D. Cringle wrote:
 
  
   Without having checked your list, I'll wager that the "good" modules
   are all pure perl and the "bad" ones use machine-language XS
   extensions.
 
  So typical modules like MD5 and MIME::Body are "bad" modules?
 
  Ricardo

ok, as far as i can tell apxs is not compiled right with redhat 6.1, so i
recompiled apache 1.3.9 (DSO style) and mod_perl 1.21,

and it hasn't been a prob for a day now




Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Alan Burlison

Alan Burlison wrote:

  AB from mod_perl - perl libperl.so).  Unfortunately the perl XS modules
  AB loaded in during startup via dlopen are *not* unloaded, nor do they
  AB succeed in locking the perl libperl.so into memory (you could construe
  AB this as a linker bug).  Then Apache reloads the mod_perl libperl.so,

Actually, I retract that statement.  It is *not* a linker bug.  By
explicitly adding a dependency between the XS .so modules and the perl
libperl.so, the problem can be made to disappear, as ld.so then knows
that there is a dependency between the XS module and the perl libperl.so.
  I think the linker is in error here for not adding the dependency on
  the library and that is what should be fixed...

Nope, the *programmer* (or in this case, MakeMaker) is in error for not
specifying the dependency when the XS .so module was built.

Here's what one of our linker developers has to say about the subject:

[You may be assuming that]

 ii) the action of dlopen()'ing an object (say B.so) from an object
 (say A.so) creates a binding between the two objects that ensures
 the latter cannot be deleted before the former is deleted.
 This isn't the case.  ld.so.1 maintains bindings/reference counts
 etc. between the objects it has control over - i.e. a relocation
 in one object that references another.  The dlopen() family are
 thought of as user code, and we do not try to interpret how
 objects are bound that use these facilities.  In fact we can
 get into all sorts of issues if we do - a handle, just like a
 function pointer, can be handed to other objects, thus who
 knows what object *expects* another object to be in existence.

i.e. if you decide to use dlopen/dlclose and you screw it up, then it is
*your* fault, not the linker's.  The fact that so many different systems
show the same behaviour suggests that this is a common linker design
decision.

I actually think that the real fault is in DynaLoader.  If you assume
that the exiting of the perl interpreter is the same thing as the exit
of the program, then it makes sense not to dlclose anything, as it will
all be reclaimed on exit anyway.  However, if you embed perl inside an
application, the exit of the perl interpreter is certainly *not* the
same thing as the program exiting, and DynaLoader should explicitly
dlclose all the files it has dlopened.

I'm going to submit this as a bug to p5p.

Alan Burlison



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Steve Reppucci


Stas:

One other thing you might want to mention in your thread: the use of
Apache::DBI to maintain persistent connections to the DB can cause a
problem if you have multiple modperl servers all talking to the same DB
server.

For instance, on our site, we have 2 hosts running modperl, each of which
is set to have a MaxClients of 128 (probably too much, but...) In
addition, there are various conventional CGIs talking to the same
host running a MySQL server.  If we try to run more modperl servers (or
even during heavy traffic times with only 2 modperl servers), we
frequently see MySQL errors from "maximum number of connections exceeded".
This makes sense, as all of those long-lived, persistent DB connections
are presumably tying up MySQL resources...

Granted, this is using MySQL pretty much out of the box, without much
attention spent on whether it is possible to configure a larger connection
limit, but I think it's something folks might want to be aware of.
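A back-of-envelope check makes the failure unsurprising: two mod_perl hosts with MaxClients 128 can hold up to 256 persistent connections, plus the CGIs on top, while MySQL's out-of-the-box connection cap of the time was around 100. A sketch of the server-side fix (the figure is an assumption; size it to your RAM):

```ini
# /etc/my.cnf -- raise the connection cap above the worst case of
# persistent Apache::DBI connections (2 hosts x 128 children = 256)
# plus headroom for the conventional CGIs.
[mysqld]
set-variable = max_connections=300
```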

Not sure if I've added anything to this thread, but...

Steve

On Tue, 18 Jan 2000, Leslie Mikesell wrote:

 According to Stas Bekman:
 
  We all know that mod_perl is quite hungry for memory, but when you have
  lots of SQL requests, the sql engine (mysql in my case) and httpd are
  competing for memory (also I/O and CPU of course). The simplest solution
  is to bump in a stronger server until it gets "outgrown" as the loads
  grow and you need a more sophisticated solution.
 
 In a single box you will have contention for disk i/o, RAM, and CPU.
 You can avoid most of the disk contention (the biggest time issue)
 by putting the database on it's own drive.  I've been running dual
 CPU machines, which seems to help with the perl execution although
 I haven't really done timing tests against a matching single
 CPU box.  RAM may be the real problem when trying to expand a
 Linux pentium box.
 
  My question is a cost-effectiveness of adding another cheap PC vs
  replacing with a new expensive machine. The question is: what are the
  immediate implications for performance (speed), since the two machines have to
  interact with each other? e.g. when setting the mysql to run on one machine
  and leaving mod_perl/apache/squid on the other. Has anyone done that? 
 
 Yes, and a big advantage is that you can then add more web servers
 hitting the same database server.
 
  Most of my requests are served within 0.05-0.2 secs, but I'm afraid that
  adding a network (even a very fast one) to deliver mysql results will
  make the response time go much higher, so I'll need more httpd processes
  and I'll get back to the original situation where I don't have enough
  resources. Hints?
 
 The network just has to match the load.  If you go to a switched 100M
 net you won't add much delay.  You'll want to run persistent DBI
 connections, of course, and do all you can with front-end proxies
 to keep the number of working mod_perl's as low as possible.
 
  I know that when you have a really big load you need to build a cluster of
  machines or alike, but when the requirement is in the middle - not too
  big, but not small either it's a hard decision to do... especially when
  you don't have the funds :)
 
 The real killer time-wise is virtual memory paging to disk.  Try to 
 estimate how much RAM you are going to need at once for the mod_perl
 processes and the database and figure out whether it is cheaper to
 put it all in one box or two.  If you are just borderline on needing
 the 2nd box, you might try a different approach.  You can use a
 fairly cheap box as a server for images and static pages, and perhaps
 even your front-end proxy server as long as it is reliable.
 
   Les Mikesell
[EMAIL PROTECTED]
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |
508/958-0183 Be Open |



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-18 Thread Gerald Richter


 Actually, I retract that statement.  It is *not* a linker bug.  By
 explicitly adding a dependency between the XS .so modules and the perl
 libperl.so, the problem can be made to disappear, as ld.so then knows
 that there is a dependency between the XS module and the perl libperl.so.

   I think the linker is in error here for not adding the dependency on
   the library and that is what should be fixed...

 Nope, the *programmer* (or in this case, MakeMaker) is in error for not
 specifying the dependency when the XS .so module was built.

 Here's what one of our linker developers has to say about the subject:

 [You may be assuming that]

  ii) the action of dlopen()'ing an object (say B.so) from an object
  (say A.so) creates a binding between the two objects that ensures
  the latter cannot be deleted before the former is deleted.
  This isn't the case.  ld.so.1 maintains bindings/reference counts
  etc. between the objects it has control over - i.e. a relocation
  in one object that references another.  The dlopen() family are
  thought of as user code, and we do not try to interpret how
  objects are bound that use these facilities.  In fact we can
  get into all sorts of issues if we do - a handle, just like a
  function pointer, can be handed to other objects, thus who
  knows what object *expects* another object to be in existence.

 i.e. if you decide to use dlopen/dlclose and you screw it up, then it is
 *your* fault, not the linker's.  The fact that so many different systems
 show the same behaviour suggests that this is a common linker design
 decision.


That's exactly what I tried to explain in my last few mails, though I am not a
linker guru :-)

 I actually think that the real fault is in DynaLoader.  If you assume
 that the exiting of the perl interpreter is the same thing as the exit
 of the program, then it makes sense not to dlclose anything, as it will
 all be reclaimed on exit anyway.  However, if you embed perl inside an
 application, the exit of the perl interpreter is certainly *not* the
 same thing as the program exiting, and DynaLoader should explicitly
 dlclose all the files it has dlopened.

 I'm going to submit this as a bug to p5p.

You are talking about two things. The first is the dependencies MakeMaker
could add at link time. This would let dlopen/dlclose know what to unload,
but, like the rest of Perl, DynaLoader is very "dynamic" and lets you
specify additional dependencies at load time (e.g. via the *.bs files).
These can't be caught by MakeMaker. So I would agree with your last sentences
that DynaLoader is responsible for unloading, because it's the only module
which knows what it has loaded. But I am not so sure that unloading all the
libraries can really be done successfully, because the Perl XS libraries
don't assume that they will ever be unloaded (since they are actually only
unloaded when the program exits). This may be the reason for the memory leaks
Daniel mentioned earlier: the XS libraries don't get a chance to
free their resources (or are not aware that they actually should do so). So
in the long term, the solution you preferred in a previous mail, not
to unload mod_perl at all, may be the better one, to stay compatible with the
existing software.

Gerald




-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5, D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED]       Voice: +49 6133 925151
WWW:    http://www.ecos.de      Fax:   +49 6133 925152
-



OS Independend Patch for XS Extentions unload

2000-01-18 Thread Gerald Richter

Hi,

I have tested the patch Daniel sent a few days ago, which unloads all XS
libraries when libperl is unloaded, on Unix and on NT, and it works!
Really great!!

Here is a slightly modified version that also works on NT (and on other
OSs). It uses the Apache function ap_os_dso_unload instead of dlclose.

The patch is against the actual CVS of mod_perl.

Gerald



 mod_perl.c.diff


Re: How to make EmbPerl stuff new content into a existing frame?

2000-01-18 Thread Scott Chapman

On Mon, 17 Jan 2000, you wrote:
 Why do you need this intermediate step i.e. 
 A
 HREF="/login/indxnav1.epl?session_id=[+$session_ID+]digest=[+$digest+]"
 TARGET="index"Update..
 ?
 
 Why not just get the server to populate frame index with required data
 up front ? Or have I missed something here ?

Here's the situation:
The user loads this page with the two frames.  The left frame is the 
navigation frame.  The user clicks on the left frame the link to log in.  
The right frame changes to the login screen and they login.  When 
they are done, I want the server to populate the frame "index" with 
the required data.  I don't know how to do this.

It seems a redirection in EmbPerl would do it but I don't know how to make a 
redirection with a TARGET.  Any clues?

TIA,





Re: How to make EmbPerl stuff new content into a existing frame?

2000-01-18 Thread David Emery

At 21:17 -0800 00.1.18, Scott Chapman wrote:
 On Mon, 17 Jan 2000, you wrote:
  Why do you need this intermediate step i.e. 
  A
  HREF="/login/indxnav1.epl?session_id=[+$session_ID+]digest=[+$digest+]"
  TARGET="index"Update..
  ?
  
  Why not just get the server to populate frame index with required data
  up front ? Or have I missed something here ?
 
 Here's the situation:
 The user loads this page with the two frames.  The left frame is the 
 navigation frame.  The user clicks on the left frame the link to log in.  
 The right frame changes to the login screen and they login.  When 
 they are done, I want the server to populate the frame "index" with 
 the required data.  I don't know how to do this.
 
 It seems a redirection in EmbPerl would do it but I don't know how to make a 
 redirection with a TARGET.  Any clues?
 
 TIA,

I missed part of this thread somewhere along the way, so sorry if I'm re-covering old 
ground here...

I assume that dumping the frames altogether is not an option for you.

Sounds like what you need to do is have the log-in form aimed at target _top, to
reload the whole frameset. The frameset page may have to be an Embperl page
(or be otherwise output dynamically, to decide what to load into each frame and
possibly send params to the in-frame pages), which would in turn load the correct pages
(Embperl or not) into the two frames.

Another option would be using Javascript to control what page gets put into what 
frame, as I believe someone else mentioned.

Hope that helps. Confuses the hell out of me...

- dave

"It's a thankless job, but I've got a lot of  Karma to burn off."



RE: How to make EmbPerl stuff new content into a existing frame?

2000-01-18 Thread Gerald Richter


 Here's the situation:
 The user loads this page with the two frames.  The left frame is the
 navigation frame.  The user clicks on the left frame the link to log in.
 The right frame changes to the login screen and they login.  When
 they are done, I want the server to populate the frame "index" with
 the required data.  I don't know how to do this.

 It seems a redirection in EmbPerl would do it but I don't know
 how to make a
 redirection with a TARGET.  Any clues?

A redirect with a TARGET other than the frame that sent the redirect isn't
possible as far as I know. You could either use some JavaScript to update
the other frame, or (what I prefer) reload the whole frameset when the
user hits the login button (form action="..." target="..."), with
different frame pages. The frameset parent page then also has to be an Embperl
page and passes the parameters along to the actual content pages.
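A sketch of that approach (file names and form fields are hypothetical, only the `[+ $session_ID +]` parameter comes from earlier in the thread): the login form targets the top-level window so the whole frameset reloads, and the frameset page forwards the parameters to each frame.

```html
<!-- Login page shown in the right frame -->
<FORM ACTION="/login/frameset.epl" METHOD="POST" TARGET="_top">
  Name: <INPUT TYPE="text" NAME="username">
  Password: <INPUT TYPE="password" NAME="password">
  <INPUT TYPE="submit" VALUE="Log in">
</FORM>

<!-- frameset.epl: an Embperl page that rebuilds both frames and passes
     the session parameters along to the per-frame pages -->
<FRAMESET COLS="20%,80%">
  <FRAME NAME="index"   SRC="indxnav1.epl?session_id=[+ $session_ID +]">
  <FRAME NAME="content" SRC="main.epl?session_id=[+ $session_ID +]">
</FRAMESET>
```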

Gerald





Re: Attempt to free unreferenced scalar during global destruction.

2000-01-18 Thread Doug MacEachern

On Thu, 4 Nov 1999, Andrei A. Voropaev wrote:

 Hi!
 
 For some reason I get lots of 
 
 'Attempt to free unreferenced scalar during global destruction.'
 
 in my error log. Any one can give me a pointer where to search for the
 problem?

it's most likely due to a buggy xs module.  that message will only show up
during child_exit, it probably isn't cause for alarm.  how often do you
see the message?  what xs modules do you have loaded?



Re: SegFaults caused by Apache::Cookie during ChildExit

2000-01-18 Thread Doug MacEachern

On Wed, 22 Dec 1999, Clinton Gormley wrote:

 I am using a home-baked session manager on my web site.  I clean up
 expired sessions by calling a child exit handler and this all worked
 rather well.
 
 However, we have recompiled Perl, Apache, mod_perl and Perl modules with
 pgcc and a different configuration (removed all modules we didn't need),
 and now I get a SegFault when Apache::Cookie->new is called during a
 ChildExit.
 
 I use Apache::Cookie in Authorization and PerlHandler phases without a
 problem.
 
 Not sure whether this problem is caused by the compiler or the different
 configuration at compile.
 
 Any ideas of starting points?

Apache::Cookie needs a request_rec, there is no request_rec during
ChildExit.  this dependency could probably be loosened, but it won't work
with the current version of libapreq.
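Since Apache::Cookie needs a request_rec and there is none during ChildExit, a cleanup handler can avoid the cookie API entirely and expire sessions by file age instead. The sketch below is one possible shape, assuming file-based sessions; the directory layout and lifetime are assumptions, not the poster's actual session manager.

```perl
# Sketch of a ChildExit-safe session sweep: no request object, no
# Apache::Cookie -- just remove session files older than $max_age.
# The one-file-per-session layout is an assumption for illustration.
use strict;
use warnings;
use File::Spec;

sub expire_sessions {
    my ($dir, $max_age) = @_;        # $max_age in seconds
    my $now = time;
    my @removed;
    opendir my $dh, $dir or die "cannot open $dir: $!";
    for my $file (readdir $dh) {
        next if $file =~ /^\.\.?$/;  # skip . and ..
        my $path = File::Spec->catfile($dir, $file);
        next unless -f $path;
        my $mtime = (stat $path)[9];
        if ($now - $mtime > $max_age) {
            unlink $path and push @removed, $file;
        }
    }
    closedir $dh;
    return @removed;
}
```

Installed as a `PerlChildExitHandler`, such a sweep needs nothing from the (nonexistent) request.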



Re: your mail

2000-01-18 Thread Doug MacEachern

On Thu, 23 Dec 1999, Li,Yuan N.(NXI) wrote:

 I have fought hard for quite a few days trying to add mod_perl 1.21 onto
 Apache 1.3.9 on HP-UX 11. I use the C compiler that comes with the machine,
 and installed Perl 5.00503 under my home directory /home/c015932/opt/perl
 (I do not have access to root), although the host has its own Perl 5.004
 installed under /opt/perl5. Everything compiles; the current issue is at
 the link stage. I got the following message from running make. I guess I
 either missed something in my library specification or forgot to load some
 Perl module. Can someone help by pointing out the obvious? Any help will be
 appreciated.
...
 -L/usr/local/lib
 /opt/perl5/lib/PA-RISC2.0/5.00404/auto/DynaLoader/DynaLoader.a
 -L/home/c015932/opt/perl/lib/5.00503/PA-RISC2.0/CORE
 -L/home/c015932/opt/perl5.005_03/libperl.a -lperl -lnet -lnsl -lnm -lndbm
 -ldld -lm -lc -lndir -lcrypt -lm -lpthread
 /usr/ccs/bin/ld: (Warning) At least one PA 2.0 object file (buildmark.o) was
 detected. The linked output may not run on a PA 1.x system.
 /usr/ccs/bin/ld: Unsatisfied symbols:
Perl_Sv (data)
...

do you have a libperl in /usr/local/lib ?  if so, delete it.



Re: a possible bug with Apache::Server::ReStarting

2000-01-18 Thread Doug MacEachern


 Hi,
 
 While documenting the 'restart twice on start' apache's behavior, I've
 tested $Apache::Server::ReStarting and $Apache::Server::Starting. 
 
 Perl section is executed twice -- OK.
 startup.pl is executed once  -- OK.
 $Apache::Server::ReStarting never gets set! - I suppose it's a bug.
 
 (I have tried Apache::Server::ReStarting and Apache::ServerReStarting
 which is the same... I've even tried Apache::Server::Restarting and
 Apache::ServerRestarting - I know they don't exist)
 
 Meanwhile the only workaround is: 
 
 ReStarting == true if Starting == false
 
 Here is the test that you can reproduce:
 
 I've added:
 httpd.conf:
 --
 <Perl>
 print STDERR "Perl Apache::Server::Starting   is true  \n" if
 $Apache::Server::Starting;
 print STDERR "Perl Apache::Server::Starting   is false \n" unless
 $Apache::Server::Starting;
 print STDERR "Perl Apache::Server::ReStarting is true  \n" if
 $Apache::Server::ReStarting;
 print STDERR "Perl Apache::Server::ReStarting is false \n" unless
 $Apache::Server::ReStarting;
 </Perl>
 
 startup.pl:
 ---
 print STDERR "startup.pl: Apache::Server::Starting   is true  \n" if
 $Apache::Server::Starting;
 print STDERR "startup.pl: Apache::Server::Starting   is false \n" unless
 $Apache::Server::Starting;
 print STDERR "startup.pl: Apache::Server::ReStarting is true  \n" if
 $Apache::Server::ReStarting;
 print STDERR "startup.pl: Apache::Server::ReStarting is false \n" unless
 $Apache::Server::ReStarting;
 
 when the server is started:
 
 startup.pl: Apache::Server::Starting   is true  
 startup.pl: Apache::Server::ReStarting is false 
 Perl Apache::Server::Starting   is true  
 Perl Apache::Server::ReStarting is false 
 
 and in the error_log:
 
 Perl Apache::Server::Starting   is false 
 Perl Apache::Server::ReStarting is false 
 
 I'm running the latest CVS (both mod_perl and apache) version on linux
 with perl5.00503 if it matters.
 
 BTW, Doug -- a wish list:
 
 I think we need four states:
 1. Starting
 2. Restarting
 3. Running
 4. Stopping
 
 I needed the 'Stopping' flag for the runaway-processes watchdog, if you
 remember. Probably other cleanup and alerting features can be added using
 the 'Stopping' flag.
 
 Regarding implementation -- it can be a single variable, with four states.
 
 Thanks!
 
 ___
 Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
 Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
 perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
 single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com
 



Re: send_fd and timeout problem

2000-01-18 Thread Doug MacEachern

On Fri, 7 Jan 2000, Martin Lichtin wrote:

 Hi,
 
 I'm using send_fd() to send relatively large files. Apache's Timeout is 
 currently set to 60s and indeed, mod_perl aborts as soon as the minute 
 elapses (error msg: mod_perl: Apache->print timed out).
 However, it shouldn't do that, right? 
 
 As ap_send_fd_length() does 8k chunking and uses a soft timeout, 
 reset for each chunk, I would have expected that it would only abort if
 it couldn't send 8k within a minute. This would also match Apache's Timeout
 documentation saying "The amount of time between ACKs on transmissions of 
 TCP packets in responses".

mod_perl doesn't set its own alarm when $r->send_fd is called.  did you
call $r->print or print beforehand?



Re: Perl modules in apache configuration

2000-01-18 Thread Doug MacEachern

On Sun, 9 Jan 2000 [EMAIL PROTECTED] wrote:

  "Eric" == Eric  writes:
 
 Eric On Sun, Jan 09, 2000 at 08:47:04PM +0300, [EMAIL PROTECTED] wrote:
  I'm trying to configure httpd.conf using Perl sections (mod_macro is
  not enough for me), but the result is weird. 
 
 Eric Do you have a specific example of your config, and what doesn't work,
 Eric that you could post maybe? It's hard to help without specifics.
 Okay, here it is. Note that the fragment marked #!!! is critical for some
 bugs: when these lines are commented out, the first <Perl> block
 executes with an error; if they are uncommented, it does NOT
 execute. The second <Perl> block never executes at all.

 #!!!
 #foreach (keys %$Location{"/$loc"}) {

that should be %Location, not %$Location

if you compile with PERL_TRACE=1 (option to Makefile.PL) and set the
environment variable MOD_PERL_TRACE to all before starting the server,
you'll get some diagnostics that can help debug these situations.
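The corrected idiom can be shown in plain Perl standing in for a `<Perl>` section: `%Location` is a package hash whose values are hash references, so iterating one entry's settings means dereferencing the stored ref, not writing `%$Location{...}`. The path and settings below are invented for illustration.

```perl
# %Location in a <Perl> section is a plain hash of hash references.
# To walk one entry's keys, dereference the ref stored under the path.
# "/private" and its settings are made-up example values.
use strict;
use warnings;

our %Location = (
    "/private" => {
        AuthType => "Basic",
        AuthName => "Private Area",
    },
);

my $loc = "private";
# Wrong:  keys %$Location{"/$loc"}      (wrong variable, bad deref)
# Right:
my @keys = sort keys %{ $Location{"/$loc"} };
print "@keys\n";    # AuthName AuthType
```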



Re: File upload bug in libapreq

2000-01-18 Thread Doug MacEachern

On Tue, 11 Jan 2000, Jim Winstead wrote:

 There appears to be a file upload bug in libapreq that causes httpd
 processes to spin out of control. There's a mention of this in the
 mailing list archives with a patch that seems to be a partial
 solution, but we're still seeing problems even with the patch I've
 attached. They appear to get stuck in the strstr() call.
 
 Anyone tracked this one down before? We haven't had any real luck
 figuring out what triggers the condition that sends things into a
 tail-spin, and I admittedly haven't crawled through the code too
 carefully to see what could be going wrong.

do you have a test case I can use to reproduce the strstr() hang bug?



Re: Patch to Apache::RedirectFixLog in mod_perl-1.21

2000-01-18 Thread Doug MacEachern

looks good, thanks David!

On Tue, 11 Jan 2000, David D. Kilzer wrote:

 Hi,
 
 The patch below fixes a problem in Apache::RedirectLogFix when the URI
 being logged required use of a filename listed in the DirectoryIndex
 directive.
 
 The solution is described in the following post by Doug MacEachern to
 the modperl list in 1998.
 
   http://www.geocrawler.com/mail/msg.php3?msg_id=1059028&list=182
 
 A similar problem is described in the following thread in 1999 (though 
 the answer is never realized):
 
   http://www.geocrawler.com/mail/msg.php3?msg_id=2466537&list=182
 
 I am not subscribed to modperl.  Please Cc: me on any responses.
 
 Thanks!
 
 Dave
 --
 David D. Kilzer   \  ``Follow your dreams, you can reach your goals.
 Senior Programmer /  I'm living proof. Beefcake. BEEFCAKE!''
 E-Markets, Inc.   \   Eric Cartman
 [EMAIL PROTECTED]/   _South_Park_
 
 
 diff -u mod_perl-1.21/lib/Apache/RedirectLogFix.pm.orig mod_perl-1.21/lib/Apache/RedirectLogFix.pm
 --- mod_perl-1.21/lib/Apache/RedirectLogFix.pm.orig   Wed Aug 12 20:46:26 1998
 +++ mod_perl-1.21/lib/Apache/RedirectLogFix.pm        Tue Jan 11 21:21:08 2000
 @@ -3,7 +3,7 @@
  use Apache::Constants qw(OK DECLINED REDIRECT);
  
  sub handler {
 -my $r = shift;
 +my $r = shift->last;
  return DECLINED unless $r->handler && ($r->handler eq "perl-script");
 
  if(my $loc = $r->header_out("Location")) {
 



Re: alarm() in Apache::Registry

2000-01-18 Thread Doug MacEachern

On Thu, 13 Jan 2000, Bill Moseley wrote:

 At 08:50 AM 1/13/00 +0200, you wrote:
  Does anyone have experience using an alarm() call under Apache::Registry?
 
 http://perl.apache.org/guide/debug.html#Handling_the_server_timeout_case
 
  Should I set alarm(0) as my script "exits" or is it ok to leave it set?
  I'm using it to cap runaway scripts.
 
 Just so I understand the Guide - You can safely localize $SIG{ALRM} in 1.21
 and Perl 5.005_03, but not other signal handlers?

are you having trouble with other signals?  mod_perl is only trying to
deal with those that cause conflict with Apache's.  if you've found one,
add it to the list in perl_config.c:

static char *sigsave[] = { "ALRM", NULL };

does it help?
 
 In:
 http://perl.apache.org/guide/debug.html#Debugging_Signal_Handlers_SIG_
 
 The Sys::Signal example is a bit confusing to me, as it uses $SIG{ALRM} in
 the example.  Yet that seems like the one signal where you don't need to
 use Sys::Signal.  Was the example of Sys::Signal written before the
 $SIG{ALRM} fix?

yes.



Re: Apache::Registry should allow script to _only_ set return code

2000-01-18 Thread Doug MacEachern

thanks Charles, I think your patch is the way to go for now, or something
close to it for 1.22

On Thu, 13 Jan 2000, Charles Levert wrote:

 Hi.
 
 [ I use Apache 1.3.9 and mod_perl 1.21. ]
 
 I believe that there is a difference between the following two
 behaviors for an Apache module handler:
 
   -- setting the request's status field to a new value and also
   returning that value;
 
   -- just returning a value without assigning it to the
   request's status field.
 
 It's useful for a mod_perl script (the code Apache::Registry::handler
 calls) to be able to decide that, e.g., HTTP_NOT_FOUND should be
 returned to the browser, and then simply return.
 
   # example mod_perl script
 
   use Apache::Constants ':response';
   my $r = Apache->request;
 
   if (...) {
       $r->status(NOT_FOUND);
       return;
   }
 
   $r->content_type('text/html');
   ...
 
 We then expect that the normal ErrorDocument, as configured, should be
 returned in that case.  However, this is not what happens, because
 Apache::Registry chooses the first behavior listed above, and so
 HTTP_NOT_FOUND then also seems to apply to the ErrorDocument (this
 is what Apache reports to the browser).  The useful behavior would be
 the second one listed above.
 
 Ideally, a method such as $r->return_value could be defined,
 independently of $r->status, and its argument could then be used when
 Apache::Registry::handler returns.  Another possibility, which can
 probably be ruled out for compatibility with CGI scripts and existing
 mod_perl scripts, is to use the return value from the mod_perl script
 itself.
 
 I have appended a patch to Apache::Registry below which I believe to
 be an acceptable solution for now.  (It really disallows a script to
 set the status field, but uses the possibly changed status field as
 the return value instead and resets the status field to its original
 value.)
 
 Your comments are welcome.  Do you think that such a change, or what I
 mention above ($r->return_value), should make it into the main
 distribution of mod_perl?
 
 
 Charles
 
 
 
 --- Registry.pm.orig-1.21 Tue Jan 12 21:56:34 1999
 +++ Registry.pm   Thu Jan 13 16:11:23 2000
 @@ -131,6 +131,8 @@
   $Apache::Registry->{$package}{'mtime'} = $mtime;
   }
  
 + my $old_status = $r->status;
 +
   my $cv = \&{"$package\::handler"};
   eval { &{$cv}($r, @_) } if $r->seqno;
   $r->chdir_file("$Apache::Server::CWD/");
 @@ -155,7 +157,7 @@
  #return REDIRECT;
  #}
  #}
 - return $r->status;
 + return $r->status($old_status);
  } else {
   return NOT_FOUND unless $Debug && $Debug & 2;
   return Apache::Debug::dump($r, NOT_FOUND);
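The save-and-restore in the patch can be illustrated with a minimal mock: the handler's change to the status field becomes the return value, while the field itself is put back. This assumes, as the patch does, that the mod_perl accessor returns the previous value when setting; the MockRequest class is an illustration, not the real Apache request object.

```perl
# Mock demonstrating the Registry patch semantics: the status the
# script set is returned, but the request's status field is restored.
# MockRequest is invented for illustration; its setter returns the
# previous value, mirroring mod_perl 1.x field accessors.
use strict;
use warnings;

package MockRequest;
sub new { bless { status => 200 }, shift }
sub status {
    my ($self, $new) = @_;
    my $old = $self->{status};
    $self->{status} = $new if defined $new;
    return $old;                 # previous value, as in mod_perl
}

package main;

sub run_handler {
    my ($r, $handler) = @_;
    my $old_status = $r->status;     # save, as in the patch
    $handler->($r);                  # script may call $r->status(...)
    return $r->status($old_status);  # restore field, return script's value
}

my $r  = MockRequest->new;
my $rc = run_handler($r, sub { $_[0]->status(404) });
print "rc=$rc status=", $r->status, "\n";   # rc=404 status=200
```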
 
 



Re: Proxy example in eagle book does not work

2000-01-18 Thread Doug MacEachern

On Fri, 14 Jan 2000, Jason Bodnar wrote:

 A line in the proxy example of the eagle book on page 380 does not seem to work
 (entirely):
 
 The line:
 
 $r->headers_in->do(sub {$request->header(@_);});

what if you change that to:

 $r->headers_in->do(sub {$request->header(@_); 1});

?
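The trailing `1` matters because the table's `do` method stops iterating as soon as the callback returns false, and `HTTP::Headers::header()` returns the *previous* value of a header (undef for one not yet set). The plain-Perl stand-in below sketches that interaction; `table_do` and `fake_header` are invented stand-ins, not the real Apache::Table or HTTP::Headers code.

```perl
# Stand-in for Apache::Table::do, which aborts the walk when the
# callback returns false. fake_header() mimics HTTP::Headers::header
# by returning the previous (undef for new) value, so without an
# explicit trailing 1 only the first header pair is visited.
use strict;
use warnings;

sub table_do {
    my ($table, $cb) = @_;           # $table: arrayref of [key, value]
    for my $pair (@$table) {
        last unless $cb->(@$pair);   # false return ends the walk
    }
}

my %req;
sub fake_header { my ($k, $v) = @_; my $old = $req{$k}; $req{$k} = $v; $old }

my @headers = ([ Host => 'example.com' ], [ Accept => '*/*' ]);
my @copied;

table_do(\@headers, sub { push @copied, $_[0]; fake_header(@_) });
print "without 1: @copied\n";        # only Host is visited

@copied = (); %req = ();
table_do(\@headers, sub { push @copied, $_[0]; fake_header(@_); 1 });
print "with 1:    @copied\n";        # Host and Accept are visited
```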



Re: wierd problem with DBI::trace(1) and Apache (mod_perl)

2000-01-18 Thread Henrik Tougaard

On Fri, 14 Jan 2000, Cere M. Davis wrote:

 I have found the weirdest problem with (I think) DBD::Ingres,
 DBI::trace() and Apache::DBI when the DBI::trace level is set to 1 or 0.
 
 I get an error in the Apache error_logs that says:
 
   uninitialized value at
 /uns/mind/usr/local/perl5/lib/site_perl/5.005/alpha-dec_osf/
DBD/Ingres.pm line 85.
 
 and then follows with text that implies that the connection was made and

In my code that line is smack bang in the middle of sub connect, the
line
  unless (-d "$ENV{'II_SYSTEM'}/ingres") {
to be exact.

Why on earth that fails when line 81 is
  unless ($ENV{'II_SYSTEM'}) {
I can't figure out.

Possibly your lines are not the same as mine?

I am not sure that you have connected to the database at all.

The 'normal' error here is to have forgotten to set $ENV{II_SYSTEM}
correctly - I assume that you have done that.

 the data was retrieved correctly.   I get an error regarding my program
 below:
 
   DBD::Ingres::bind_param: parameter not a number at
   /homes/clin_infr/cere/public_html/WebDBIDemo.pl line 46.
 
 Which implies that my variable didn't get passed into the prepare
 statement correctly...but I can't figure out why.

This error comes when the first argument to $sth->bind_param is not
numeric. How that can come about if the connection fails I just can't
imagine.
 
 I suspect that this error comes about during the connection phase of the
 DBI calls.  Does
 anyone have a suggestion for how to find out for sure?
 
 The DBI::Ingres stuff works fine at all trace levels on the command line
 but seems to have a problem with the mod_perl stuff.
 
 If anyone has experience to share regarding this I'd love to hear it...
 
 Anyway,  here's my system vitals:
 
 I'm running Ingres 1.2 - accessing a remote database via IngresNet - DBI
 version 1.13,
 DBD::Ingres version 0.24, mod_perl 1.21 and Apache::DBI version 0.87.

Brave man!
I haven't tested DBD::Ingres with Apache::DBI (we prefer not to let
the web server access the database directly), so you are trampling
on relatively virgin ground.

Some parts of Apache::DBI require cached statement handles - this is
*NOT* supported by the current version of DBD::Ingres. I have been
working on it, but my supply of tuits is low again.

---
Henrik Tougaard[EMAIL PROTECTED]  (a.k.a. [EMAIL PROTECTED])
Datani A/S, Software Consultants, Copenhagen, Denmark
#include disclaim.std




Re: a possible bug with Apache::Server::ReStarting

2000-01-18 Thread Doug MacEachern

On Wed, 19 Jan 2000, Stas Bekman wrote:
 
 Why? Some users need control over what gets reloaded and what doesn't on
 server restart (yes, I know that if you put it in the startup.pl file it
 loads only once). For example, parsing and loading some heavy XML files...
 
 Why do you want to take it away?

I think PerlRestartHandler is a better solution in most cases.  and inside
<Perl> sections you can always do it on your own:

<Perl>
do_something() unless $My::Init++;
</Perl>
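The guard works because the post-increment evaluates to 0 (false) the first time and to a true value on every later pass, so the body runs exactly once even though Apache evaluates the configuration twice at startup. A runnable sketch of the same pattern, with the two configuration passes simulated by a loop:

```perl
# Once-only init guard: the package global survives across passes,
# and $Init++ is false (0) only on the first evaluation.
use strict;
use warnings;

our $Init = 0;
my $runs = 0;

sub do_something { $runs++ }

# simulate the <Perl> section being evaluated twice (start + restart)
for (1 .. 2) {
    do_something() unless $Init++;
}

print "ran $runs time(s)\n";   # ran 1 time(s)
```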

I'm cringing at global variables in general, looking forward to threaded
2.0.  do you have a concrete example that requires
$Apache::Server::{Starting,ReStarting} ?