Re: horrible memory consumption

2000-01-20 Thread Vivek Khera

 "JT" == Jason Terry [EMAIL PROTECTED] writes:

JT Is there a way I can tell where my memory usage is going in an
JT Apache child?  I have a server that starts with acceptable
JT numbers, but after a while it turns into this

It would probably be best if you started by reading through the
performance tuning guide and Stas' mod_perl guide on how to reduce
memory consumption.  Basically, when you have complex mod_perl
operations going on, you want to offload non-mod_perl tasks
(images, static content) to other servers.



Re: squid performance

2000-01-20 Thread Leslie Mikesell

According to Greg Stark:

  I think if you can avoid hitting a mod_perl server for the images,
  you've won more than half the battle, especially on a graphically
  intensive site.
 
 I've learned the hard way that a proxy does not completely replace the need to
 put images and other static components on a separate server. There are
 two reasons that you really, really want to be serving images from another
 server (possibly running on the same machine, of course).

I agree that it is correct to serve images from a lightweight server
but I don't quite understand how these points relate.  A proxy should
avoid the need to hit the backend server for static content if the
cache copy is current unless the user hits the reload button and
the browser sends the request with 'pragma: no-cache'.

 1) Netscape/IE won't intermix slow dynamic requests with fast static requests
on the same keep-alive connection

I thought they just opened several connections in parallel without regard
for the type of content.

 2) static images won't be delayed when the proxy gets bogged down waiting on
the backend dynamic server.

Is this under NT where mod_perl is single threaded?  Serving a new request
should not have any relationship to delays handling other requests on
unix unless you have hit your child process limit.

 Eg, if the dynamic content generation becomes slow enough to cause a 2s
 backlog of connections for dynamic content, then a proxy will not protect the
 static images from that delay. Netscape or IE may queue those requests after
 another dynamic content request, and even if they don't the proxy server will
 eventually have every slot taken up waiting on the dynamic server. 

A proxy that already has the cached image should deliver it with no
delay, and a request back to the same server should be serviced
immediately anyway.

 So *every* image on the page will have another 2s latency, instead of just a
 2s latency for the entire page. This is worst in Netscape, of course,
 where the page can't draw until all the image sizes are known.

Putting the sizes in the IMG SRC tag is a good idea anyway.

 This doesn't mean having a proxy is a bad idea. But it doesn't replace putting
 your images on pics.mydomain.foo even if that resolves to the same address and
 run a separate apache instance for them.

This is a good idea because it is easy to move to a different machine
if the load makes it necessary.  However, a simple approach is to
use a non-mod_perl apache as a non-caching proxy front end for the
dynamic content and let it deliver the static pages directly.  A
short stack of RewriteRules can arrange this if you use the 
[L] or [PT] flags on the matches you want the front end to serve
and the [P] flag on the matches to proxy.
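
For concreteness, a front-end fragment along these lines might look like
this (the paths and the backend port are made up; mod_proxy must be
available for the [P] rule):

    RewriteEngine On
    # serve static files directly from the front end and stop rewriting:
    RewriteRule   ^/images/   -   [L]
    # pass through to other URI-to-filename translators (e.g. Alias):
    RewriteRule   ^/static/   -   [PT]
    # everything under /perl/ goes to the mod_perl backend:
    RewriteRule   ^/perl/(.*)$   http://127.0.0.1:8080/perl/$1   [P]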

  Les Mikesell
[EMAIL PROTECTED]



Re: Using mod_backhand for load balancing and failover.

2000-01-20 Thread Leslie Mikesell

According to Jeffrey W. Baker:
 
 Is anyone using mod_backhand (http://www.backhand.org/) for load
 balancing?  I've been trying to get it to work but it is really flaky. 
 For example, it doesn't seem to distribute requests for static content. 
 Bah.

I just started to look at it (and note that there was a recent update) but
haven't got it configured yet.  I thought it distributed whatever it
is configured to handle - it shouldn't be aware of the content type.
The parts I don't like just from looking at it are that the backend
servers all have to have the module included as well (I was hoping
to balance some non-apache servers too) and it looks like it may
be difficult or impossible to make it mesh with RewriteRules.

The mod_jserv load balancing looks much nicer at least at first
glance, but of course that doesn't help for mod_perl.   
 
Les Mikesell
 [EMAIL PROTECTED]



RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Jason Bodnar

There must be a bug somewhere because I had EMBPERL_DEBUG = 0 and was getting
errors about not being able to write to /tmp/embperl.log.

This is with v 1.2b4 I believe so if this has changed recently that may be why
I got the errors.

On 20-Jan-00 Gerald Richter wrote:

 That's what I thought. Setting 'EMBPERL_DEBUG 0' should really
 turn off any
 kind of logging including even trying to open the log file.

 
 Look at epio.c, function OpenLog, line 838:
 
 if (r->bDebug == 0)
     return ok ; /* never write to logfile if debugging is disabled */
 
 If the DEBUG flags are zero Embperl should never open the log, but if you do
 one request with EMBPERL_DEBUG != 0, the logfile is opened and will stay open.
 
  I consider this a bug and a security
  hazard
  (writing anything blindly to /tmp can have potentially lethal
 side effects,
  eg: user foo puts in a symlink from /tmp/embperl.log to
 anything owned by the
  user running the server and that file gets embperl logs
 appended to it!).
 
 
 If the logfile really gets opened before you have a chance to set
 EMBPERL_DEBUG to 0, then it's a bug and a security hole, but I can't see
 this for now; maybe I'm overlooking something...
 
  The log file is tied to in a few different spots within the code. None of
  these check the setting of EMBPERL_DEBUG before tying to the log. They
  should only tie to the log if the debug setting is not zero.
 
 
 The logfile is only opened at this one place in OpenLog I mentioned above,
 and this function checks the debug setting _before_ opening the log. So if
 EMBPERL_DEBUG is zero, the log file will never get opened, and all other
 functions will just discard anything you try to write to the logfile while
 it isn't open.
 
 Gerald
 
 -
 Gerald Richter    ecos electronic communication services gmbh
 Internetconnect * Webserver/-design/-datenbanken * Consulting
 
 Post:   Tulpenstrasse 5         D-55276 Dienheim b. Mainz
 E-Mail: [EMAIL PROTECTED]       Voice: +49 6133 925151
 WWW:    http://www.ecos.de      Fax:   +49 6133 925152
 -

---
Jason Bodnar + [EMAIL PROTECTED] + Tivoli Systems

I swear I'd forget my own head if it wasn't up my ass. -- Jason Bodnar



RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Gerald Richter


 There must be a bug somewhere because I had EMBPERL_DEBUG = 0 and
 was getting
 errors about not being able to write to /tmp/embperl.log.

 This is with v 1.2b4 I believe so if this has changed recently
 that may be why
 I got the errors.


This hasn't changed recently, but it is possible that in 1.2b4 there is
some debug output written before the first request, in which case the
log file will be opened. Are you able to upgrade to 1.2.1 (you should do
this anyway) and see if the problem disappears?

Anyway, I will put this on the TODO list, to make it more secure so the
logfile will not be accidentally opened

Gerald

-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5         D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED]       Voice: +49 6133 925151
WWW:    http://www.ecos.de      Fax:   +49 6133 925152
-




httpd not copied into APACHE_PREFIX

2000-01-20 Thread Wang, Pin-Chieh

Hi,
I am building mod_perl-1.21 into apache_1.3.9 using apaci. 
I run the following commands under mod_perl-1.21 directory
perl Makefile.PL EVERYTHING=1 APACHE_PREFIX=/usr/local/apache
make
make test
make install
Everything looks fine, httpd was created in apache_1.3.9/src, but it was not
copied into /usr/local/apache/bin. After I manually copied the httpd
file and tried to start it using apachectl, I got 
./apachectl start: httpd started
But it did not create httpd.pid in the logs directory, nor did httpd really
start.
Can anyone give a hint?
I am running Solaris 2.6
Thanks,
PC



Re: How do you handle simultaneous/duplicate client requests with modperl?

2000-01-20 Thread Jeffrey W. Baker

Keith Kwiatek wrote:
 
 Hello,
 
 I have a mod_perl application that takes a request from a client,  then does
 some transaction processing with a remote system, which then returns a
 success/fail result to the client. The transaction MUST happen only ONCE per
 client session.
 
 PROBLEM: the client clicks the submit button twice, thus sending two
 requests, spawning two different processes to do the same remote
 transaction. BUT, the client request MUST be processed only ONCE for a given
 session_id. The first request will start a process to initiate the remote
 transaction, and then the second request process starts, not knowing about
 the first process. The result is that the client has the transaction
 performed two times!
 
 How do you handle this? My first thought is to write a "processing status"
 value to the session hash (using apache::session) AS SOON as the first
 request is received, and then when the second duplicate request is received,
 check the "processing status" in the session hash. If the processing status
 is "in progress", then wait till the processing status in the session hash
 is updated by the first request process and return the result.
 
 Is my concept on target? Is my implementation right? (or should I write
 directly to the files system?)

Yes yes no.  Apache::Session effectively serializes all requests for the
same session_id, so using a flag in the session hash is race-safe.
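
A hedged sketch of the flag approach (the storage backend, directory,
field names, and do_remote_transaction() are all hypothetical; the point
is that the tied session holds its lock until untie):

    use Apache::Session::File ();

    tie my %session, 'Apache::Session::File', $session_id,
        { Directory => '/tmp/sessions' };

    if ( $session{txn_status} ) {             # duplicate submit: don't re-run
        return $session{txn_result};
    }
    $session{txn_status} = 'in progress';     # claim the transaction
    $session{txn_result} = do_remote_transaction();
    $session{txn_status} = 'done';
    untie %session;                           # gives up the session lock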

-jwb



Re: httpd not copied into APACHE_PREFIX

2000-01-20 Thread John M Vinopal

perl Makefile.PL USE_APACI=1 EVERYTHING=1 APACHE_PREFIX=/usr/local/apache

On Thu, Jan 20, 2000 at 03:36:44PM -0600, Wang, Pin-Chieh wrote:
 Hi,
 I am building mod_perl-1.21 into apache_1.3.9 using apaci. 
 I run the following commands under mod_perl-1.21 directory
 perl Makefile.PL EVERYTHING=1 APACHE_PREFIX=/usr/local/apache



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-20 Thread G.W. Haywood

Hi all,

On Wed, 19 Jan 2000, Gerald Richter wrote:

 in the long term, the solution that you have preferred in previous
 mail, not to unload modperl at all, may be the better one

As I understand it with Apache/mod_perl:

1.  The parent (contains the Perl interpreter) fires up, initialises
things and launches some children.  Any memory it leaks stays
leaked until restart.  That could be weeks away.  Apart from
making copies of it, most of the time it doesn't do much with the
interpreter.  More waste.

2.  The children occasionally get the coup de grace, so we recover
any memory they leaked.  They do lots with the interpreter.

3.  When the parent fork()s a new child it can fork some leaked memory
too, which gradually will become unshared, so the longer this goes
on the closer we get to bringing the whole system to its knees.

So in the longer term, is there a reason the parent has to contain the
interpreter at all?  Can't it just do a system call when it needs one?
It seems a bit excessive to put aside a couple of megabytes of system
memory just to run startup.pl.  If one could get around any process
communication difficulties, the children could be just the same as
they are now, but exec()ed instead of fork()ed by a (smaller) child
process which has never leaked any memory.  The exec() latency isn't
an issue because of the way that Apache preforks a pool of processes
and the overhead will be minimal if the children live long enough.

Please tell me if I have got this all around my neck.

73,
Ged.



Re: Can't exec programs ?

2000-01-20 Thread Pierre-Yves BONNETAIN


[EMAIL PROTECTED] said:
 you'll get a better idea of the problem running strace (or truss) 
 against the server.  in any case, you should avoid any code that's 
 forking a process, since it's throwing performance out the window. 
   Is there a 'nice way' (meaning, a patch or manual change I can do to those
modules) to prevent forking or, rather, replace it with something else that
gets me the same thing? I can spend (a lot of) time looking for system() and
backticks in the modules I use, but if I need the functionality, how can I 
'correct' the code of those modules?

 
  On Thu, 6 Jan 2000, Pierre-Yves BONNETAIN wrote:
  
   [Wed Jan  5 17:46:49 2000] null: Can't exec "pwd": Permission denied at
   /usr/lib/perl5/5.00503/Cwd.pm line 82.
 
 This is most likely due to a corruption of the PATH environment
 variable. In my case, Daniel Jacobowitz fixed this problem on Debian,
 I think by upgrading to the latest mod_perl snapshot.
 
   I thought I had the latest modperl, but...
   Still, your diagnosis seems to be right. I got rid of those errors by
changing the .pm files and including FULL PATH information ('/bin/pwd' instead
of 'pwd'). And one of my tests, printing the $PATH, displayed weird characters
at the beginning of this variable (@n:/usr/bin: instead of /bin:/usr/bin).

[EMAIL PROTECTED] said:
 There is a patch to correct the PATH environment variable corruption 
 problem, if you'd rather not go to the development mod_perl snapshot. 
  I applied the patch to mod_perl version 1.21 on Red Hat Linux 6.0 
 and it has been working fine for me.

 The patch was forwarded to me, originally authored by Doug 
 MacEachern. 
   And I will test it as soon as I get my dirty hands on the webserver.

   Thanks for everything !
-- Pierre-Yves BONNETAIN
   http://www.rouge-blanc.com



Re: squid performance

2000-01-20 Thread Greg Stark


Vivek Khera [EMAIL PROTECTED] writes:

 Squid does indeed cache and buffer the output like you describe.  I
 don't know if Apache does so, but in practice, it has not been an
 issue for my site, which is quite busy (about 700k pages per month).
 
 I think if you can avoid hitting a mod_perl server for the images,
 you've won more than half the battle, especially on a graphically
 intensive site.

I've learned the hard way that a proxy does not completely replace the need to
put images and other static components on a separate server. There are
two reasons that you really, really want to be serving images from another
server (possibly running on the same machine of course).

1) Netscape/IE won't intermix slow dynamic requests with fast static requests
   on the same keep-alive connection

2) static images won't be delayed when the proxy gets bogged down waiting on
   the backend dynamic server.

Both of these result in a very slow user experience if the dynamic content
server gets at all slow -- even out of proportion to the slowdown. 

Eg, if the dynamic content generation becomes slow enough to cause a 2s
backlog of connections for dynamic content, then a proxy will not protect the
static images from that delay. Netscape or IE may queue those requests after
another dynamic content request, and even if they don't the proxy server will
eventually have every slot taken up waiting on the dynamic server. 

So *every* image on the page will have another 2s latency, instead of just a
2s latency for the entire page. This is worst in Netscape, of course,
where the page can't draw until all the image sizes are known.

This doesn't mean having a proxy is a bad idea. But it doesn't replace putting
your images on pics.mydomain.foo even if that resolves to the same address and
run a separate apache instance for them.

-- 
greg



Re: horrible memory consumption

2000-01-20 Thread Stas Bekman

  Is there a way I can find out where all this RAM is being used.  Or does
  anyone have any suggestions (besides limiting the MaxRequestsPerChild)
 
If anyone knows how to figure out shared vs not shared memory in a 
 process on Linux, I'd be interested in that too...and it sounds like Jason
 could benefit from the info as well. I know that Apache::Gtop can be 
 used for mod_perl from reading The Guide (thanks Stas!) but I'm interested 
 in finding out these numbers for other non-mod_perl binaries too (such as
 an apache+mod_proxy binary). Also if anyone has any good pointers to info
 on dynamic linking and libraries (again, oriented somewhat towards Linux),
 I've yet to see anything that's explained things sufficiently to me yet. 
 Thanks.

GTop (not Apache::Gtop) is written by Doug, not me :) So the credits go to
Doug.
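
For the shared-vs-unshared question itself, GTop can report both numbers
for any pid on Linux; a minimal sketch:

    use GTop ();

    my $gtop = GTop->new;
    my $mem  = $gtop->proc_mem($$);    # $$ here, but any pid will do
    printf "size=%d share=%d rss=%d\n",
           $mem->size, $mem->share, $mem->rss;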

What you are talking about is Apache::VMonitor, which shows you almost
everything that top(1) does and much more. You can monitor the
apache/mod_perl processes and any other non-mod_perl processes as well.

When you click on the process id you get lots of information about it,
including memory maps and sizes of the loaded libs. 

This all of course uses GTop, which in turn uses libgtop from the GNOME
project, and which has lately been reported to run on new platforms as well.
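
For reference, hooking Apache::VMonitor up is just a handler section in
httpd.conf (a minimal sketch; see the module docs for its options):

    <Location /system/vmonitor>
        SetHandler  perl-script
        PerlHandler Apache::VMonitor
    </Location>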

Enjoy!

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o-> + single o->+ = singlesheaven    http://www.singlesheaven.com



Re: squid performance

2000-01-20 Thread Greg Stark


"G.W. Haywood" [EMAIL PROTECTED] writes:

 Would it be breaching any confidences to tell us how many
 kilobyte-requests per memory-megabyte, or some other equally daft
 dimensionless numbers?

I assume the number you're looking for is an ideal ratio between the proxy and
the backend server? No single number exists. You need to monitor your system
and tune. 

In theory you can calculate it by knowing the size of the average request and
the latency to generate an average request in the backend. If your pages take
200ms to generate, and they're 4k on average, then they'll take 1s to spool
out to a 56kbps link and you'll need a 5:1 ratio. In practice however that
doesn't work out so cleanly, because the OS is also doing buffering and
because it's really the worst case you're worried about, not the average.
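
Spelling that arithmetic out (the 1s spool figure is Greg's, which adds
protocol overhead to the ~0.6s raw transfer time of 4k over 56kbps):

    my $generate = 0.2;    # backend busy 200ms per page
    my $spool    = 1.0;    # front end busy ~1s pushing the page to the client
    # each backend slot can keep spool/generate front-end slots fed:
    printf "ratio %.0f:1\n", $spool / $generate;    # prints "ratio 5:1"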

If you have the memory you could just shoot for the most processes you can
handle, something like 256:32 for example is pretty aggressive. If your
backend scripts are written efficiently you'll probably find the backend
processes are nearly all idle.

I tried to use the minspareservers and maxspareservers and the other similar
parameters to let apache tune this automatically and found it didn't work out
well with mod_perl. What happened was that starting up perl processes was the
single most cpu intensive thing apache could do, so as soon as it decided it
needed a new process it slowed down the existing processes and put itself into
a feedback loop. I prefer to force apache to start a fixed number of processes
and just stick with that number.
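
Forcing a fixed pool amounts to pinning the spare-server knobs together,
e.g. (the numbers are illustrative only):

    StartServers     30
    MinSpareServers  30
    MaxSpareServers  30
    MaxClients       30

With MinSpareServers = MaxSpareServers = StartServers = MaxClients, Apache
neither spawns nor reaps children once the pool is up.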

-- 
greg



Re: Run away processes

2000-01-20 Thread Stas Bekman

On 20 Jan 2000, Greg Stark wrote:

 
 Stas Bekman [EMAIL PROTECTED] writes:
 
   Is there a recommendation on how to catch  stop run away mod_perl programs
   in a way that's _not_ part of the run away program.  Or is this even
   possible?  Some type of watchdog, just like httpd.conf Timeout?
  
  Try Apache::SafeHang
  http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz
 
 Runaway? you mean 100% CPU ? Set up Apache::Resource then.
 This isn't related to Oracle by any chance is it? 
 We had this problem inside the Oracle libs at one point.

The process can be "runaway" waiting for some event to happen, or failing to
complete for some other reason. It might use 0% CPU in this case and be
untrappable by Apache::Resource. I've shown a few examples in the debug
chapter of the guide. 
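
For the 100%-CPU case Greg mentions, a typical Apache::Resource setup in
httpd.conf looks like this (the limit values are illustrative):

    PerlSetEnv  PERL_RLIMIT_CPU   60       # CPU seconds per child
    PerlSetEnv  PERL_RLIMIT_DATA  48:64    # soft:hard data size, in MB
    PerlChildInitHandler  Apache::Resource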

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o-> + single o->+ = singlesheaven    http://www.singlesheaven.com



Re: How to make EmbPerl stuff new content into a existing frame?

2000-01-20 Thread Gerald Richter


 When the user hits the login button, I am calling a CGI script that
 validates the login against a database.  I can't make it have a action
 that loads a HTML page before the script is executed.  Therefore the
 script has to reload the frame with frame pages.  I also need to pass
 values to the frame, as in the example link above.

 Can you make a redirection have a "target=frame" and
 "?parameter=value" to do this?

No, you can't do this, but you can say in your form <form action=".."
target="frame">; the CGI script's output will then be displayed in that
target, and when the CGI does the redirect, it will request the frame page
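
A minimal sketch of that arrangement (all names hypothetical):

    <form action="/cgi-bin/login.cgi" method="post" target="content">
      <input type="text"     name="user">
      <input type="password" name="pass">
      <input type="submit"   value="Login">
    </form>

After validating, login.cgi sends an ordinary redirect such as
Location: /frame_page.html?user=joe, and the browser loads that page into
the same "content" frame the form targeted.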

Gerald




Re: oracle : The lowdown

2000-01-20 Thread Perrin Harkins

Perrin Harkins wrote:
 Greg Stark wrote:
  For example, it makes it very hard to mix any kind of long running query with
  OLTP transactions against the same data, since rollback data accumulates very
  quickly. I would give some appendage for a while to tell Oracle to just use
  the most recent data for a long running query without attempting to rollback
  to a consistent view.
 
 I believe setting the isolation level for dirty reads will allow you to
 do exactly that.

Oh, silly me.  Oracle doesn't appear to offer dirty reads.  The lowest
level of isolation is "read committed" which reads all data that was
committed at the time the query began, but doesn't preserve that state
for future queries.  So, if you have lots of uncommitted data or you
commit lots of data to the table being queried while the query is
running you could make your rollback segment pretty big.  But, if you
can afford Oracle, you can afford RAM.

- Perrin



Re: How download data/file in http [Embperl 1.2.0]

2000-01-20 Thread Gerald Richter



Hi

  I am using Embperl 1.2.0.
  
  This is the problem:
  I have a form with a submit button to download some data ( 
  or a file ). I want the user to be able to save this data.
  When I submit it, the header that I want to send to trigger 
  the download is printed in the page ( ...and the data too ), followed by
  the html part.
  
  What's wrong ? The header ? The Method ?
  
  Thanks
  
  
  -
  
  The result is : 
  
  Content-type: application/octet-stream
  Content-Transfer-Encoding: binary
  Content-Disposition: attachment; filename="logs.txt"
  one line one line one line one line one line one line ...
  
   etc...
  
  ...and the html part
  
  
  The file"this_file.epl" :
  
  [- if ( $fdat{'export'} ) 
  {print "Content-type: 
  application/octet-stream\n";print 
  "Content-Transfer-Encoding: binary\n";print 
  "Content-Disposition: attachment; 
  filename=\"logs.txt\"\r\n\r\n";
Don't use print inside an Embperl page (unless you set optRedirectStdout).
You can't print headers inside an Embperl page; that is done by Embperl.
Assign them to the %http_headers hash, or use the $req_rec->header_out
mod_perl function.

Take a look at the Embperl FAQ for examples
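
A hedged sketch of the header-hash route, reusing the field names from the
original post (the Embperl 1.x docs spell the hash %http_headers_out):

    [- if ( $fdat{'export'} ) {
         $http_headers_out{'Content-Type'} = 'application/octet-stream';
         $http_headers_out{'Content-Disposition'} =
             'attachment; filename="logs.txt"';
       }
    -]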

Gerald

  


Re: question?

2000-01-20 Thread Rod Butcher

If I understand you correctly, you don't want to use mod_perl, just
Perl.
If so, the easiest way to use Perl is to download ActivePerl (free) from
the ActiveState website http://www.activestate.com/ActivePerl/download.htm
It comes with great documentation; install it as per the documentation.

If you wish to use CGI, the first line of each script should start with 
#!x:/yyy/perl.exe
where x is the drive and yyy is the directory where perl.exe is.
This is known as the shebang line in the trade.

You should join the ActiveState Perl mailing list at 
http://www.activestate.com/lyris/lyris.pl?join=perl-win32-users

Best Rgds
Rod Butcher

Jingtao Yun wrote:
 
 Hi,
I installed Apache Server for NT on my machine. But
 I don't know how to get Perl to work without using mod_perl.
 Any message will be appreciated.
 

-- 
Rod Butcher | "... I gaze at the beauty of the world,
Hyena Holdings Internet | its wonders and its miracles and out of
  Programming   | sheer joy I laugh even as the day laughs.
("it's us or the vultures") | And then the people of the jungle say,
[EMAIL PROTECTED] | 'It is but the laughter of a hyena'".
|Kahlil Gibran..  The Wanderer



Re: Apache locking up on WinNT

2000-01-20 Thread Waldek Grudzien

  I added the warns to the scripts and it appears that access to the modules
  is serialised.  Each call to the handler has to run to completion before
  any other handlers can execute.
 

 Yes, on NT all accesses to the perl part are serialized. This will not
 change before mod_perl 2.0

Oh my ...
Indeed this happens. This is horrible ;o(
and makes mod_perl unusable for an NT web site with many visitors ;o(
What do you think - for an intranet web application, is it reasonable
to run a few Apaches (with mod_perl) on the same box
[and assign users to the different Apaches]? How many Apaches can
I start ?

BTW Does anyone know when mod_perl 2.0 is supposed to be released ?

Best regards

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796



Re: Apache locking up on WinNT

2000-01-20 Thread Gerald Richter

 Oh my ...
 Indeed this happens. This is horrible ;o(
 and makes mod_perl unusable for an NT web site with many visitors ;o(
 What do you think - for an intranet web application, is it reasonable
 to run a few Apaches (with mod_perl) on the same box ?

yes, but as far as I know that isn't possible as a service. You must start
them from the DOS prompt. (I haven't looked at this since 1.3.6, so there may
be a change in 1.3.9)

 [and assign users to the different Apaches]? How many Apaches can
 I start ?


Only a matter of your memory...

You should also consider moving long-running scripts to good old (external)
CGI scripts

 BTW Does anyone know when mod_perl 2.0 is supposed to be released ?


It should be coming when Apache 2.0 is coming; I don't know when this will
be, but I don't expect it in the near future

Gerald




EMBPERL_SESSION_ARGS problem

2000-01-20 Thread Jean-Philippe FAUVELLE

The following directives work fine with Embperl 1.2.0

PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=dev2-sparc UserName=www
Password=secret"

PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=localhost UserName=www
Password=secret"


But these one cause a permanent fatal error.

PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=dev2-sparc.dev.fth.net
UserName=www Password=secret"

PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=193.252.66.1 UserName=www
Password=secret"


[6431]ERR: 24: Line 13: Error in Perl code: DBI->connect failed: Access
denied for user: 'www@dev2-sparc' (Using password: YES) at
/usr/local/lib/perl5/site_perl/5.005/Apache/Session/DBIStore.pm line 117

Apache/1.3.9 (Unix) mod_perl/1.21 HTML::Embperl 1.2.0 [Thu Jan 20 12:17:42
2000]

HTTP/1.1 500 Internal Server Error Date: Thu, 20 Jan 2000 11:17:41 GMT
Server: Apache/1.3.9 (Unix) mod_perl/1.21 Connection: close

Note that the two parameters differ only by the target hostname...
and that the sql server is local.

Could this be a bug in the parameter parser ?

Regards.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Jean-Philippe FAUVELLE
[EMAIL PROTECTED] [EMAIL PROTECTED]
Responsable du Pole de Developpement Unix/Web
Departement Developpement Applicatif
France Telecom Hosting
40 rue Gabriel Crie, 92245 Malakoff Cedex, France
[http://www.fth.net/] [+33 (0) 1 46 12 67 89]
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=




Re: squid performance

2000-01-20 Thread Stas Bekman

On 20 Jan 2000, Greg Stark wrote:
 I tried to use the minspareservers and maxspareservers and the other similar
 parameters to let apache tune this automatically and found it didn't work out
 well with mod_perl. What happened was that starting up perl processes was the
 single most cpu intensive thing apache could do, so as soon as it decided it
 needed a new process it slowed down the existing processes and put itself into
 a feedback loop. I prefer to force apache to start a fixed number of processes
 and just stick with that number.

This shouldn't happen if you preload most or all of the code that you use.
fork() is very efficient on modern OSes, and since most use the
copy-on-write method, the spawning of a new process should be almost
unnoticeable.
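
Preloading is done in the parent via a startup file, e.g. (the module names
after the first two are placeholders for your own code):

    # startup.pl -- loaded from httpd.conf with: PerlRequire /path/to/startup.pl
    use strict;
    use Apache::DBI ();     # must come before DBI to get connection caching
    use DBI ();
    use CGI ();
    CGI->compile(':all');   # precompile CGI.pm's autoloaded methods
    use My::App ();         # hypothetical application modules
    1;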

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o-> + single o->+ = singlesheaven    http://www.singlesheaven.com



Re: Apache locking up on WinNT

2000-01-20 Thread Waldek Grudzien

  Oh my ...
  Indeed this happens. This is horrible ;o(
  and makes mod_perl unusable for an NT web site with many visitors ;o(
  What do you think - for an intranet web application, is it reasonable
  to run a few Apaches (with mod_perl) on the same box ?

 yes, but as far as I know that isn't possible as a service. You must start
 them from the DOS prompt. (I haven't looked at this since 1.3.6, so there
 may be a change in 1.3.9)

In 1.3.9 you can register as many services as you want... ;o)

  [and assign users to the different Apaches]? How many Apaches can
  I start ?
 

 Only a matter of your memory...

How many Apaches with mod_perl can a 128 MB NT box take without hurting?
(for an application with 91 scripts and modules totalling 376 KB
of Perl code)
(or how much RAM should I add ?)

 You should also consider moving long-running scripts to good old (external)
 CGI scripts

I know -  now... ;o)

  BTW Does anyone know when mod_perl 2.0 is supposed to be released ?
 

 It should be coming when Apache 2.0 is coming; I don't know when this will
 be, but I don't expect it in the near future

That is not good news ;-(. Many fellows may meanwhile choose PHP instead of
Perl solutions ;o( (moreover, speed charts say PHP is a little bit faster
than mod_perl:
http://www.chamas.com/hello_world.html)
I like Perl so much that I wouldn't like to be forced (by the boss ;o)) some
day to start the next web apps with PHP ;o(

Best regards,

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796



Performance ?'s regarding Apache::Request

2000-01-20 Thread Clifford Lang

mod_perl 1.21
Apache 1.3.9
Solaris 2.5.1, Linux 6.0

Is this a good or bad idea?

I want to create an inheritable module based on Apache::Request mainly for
uploading files, then create individual PerlHandler modules for individual
page content.

If I do this, will the uploaded files end up increasing the memory consumption
of the module, or is all memory freed after the file upload process?

I was going to use "register_cleanup(\&CGI::_reset_globals)" to clear the
CGI environment but don't know if that frees up the memory.  If used, should
the register_cleanup be attached to the original request object ($r =
shift;), or to the $apr = Apache::Request->new($r) object?

Should I (How can I) destroy the entire page after each request?  Doing so
would lose some of the reason for using mod_perl; I'd like to write these
handlers with all dynamic content (no "real" page source).  Some thoughts
for destruction would be to create the Apache::Request::Upload as a blessed
object, then destroy it on page completion - is this wise or even possible?
Any pointers on how to accomplish this?
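
For what it's worth, the Apache::Request upload API spools uploads to
temporary files rather than holding them in memory; a hedged sketch
(package name, form field, and the cleanup body are hypothetical):

    package My::UploadHandler;
    use Apache::Request ();
    use Apache::Constants qw(OK);

    sub handler {
        my $r   = shift;
        my $apr = Apache::Request->new($r);
        if ( my $upload = $apr->upload('file') ) {
            my $fh = $upload->fh;       # filehandle on the spooled temp file
            # ... read from $fh and store the data somewhere ...
        }
        $r->register_cleanup(sub { });  # per-request cleanup would go here
        return OK;
    }
    1;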


TIA,  Cliff



RE: mod_rewrite and Apache::Cookie

2000-01-20 Thread Geoffrey Young

for anyone interested...

I wrote a PerlTransHandler and removed mod_rewrite and am seeing the same
problem as outlined below...

can anyone verify this?

--Geoff

 -Original Message-
 From: Geoffrey Young 
 Sent: Wednesday, January 19, 2000 9:27 AM
 To: '[EMAIL PROTECTED]'
 Subject: mod_rewrite and Apache::Cookie
 
 
 hi all..
 
 I've noticed that using mod_rewrite with Apache::Cookie 
 exhibits odd behavior...
 
 scenario:
   foo.cgi uses Apache::Cookie to set a cookie
    mod_rewrite rewrites all requests for index.html to 
  /perl-bin/foo.cgi
 
 problem:
   access to /perl-bin/foo.cgi sets the cookie properly
   access to /  or index.html runs foo.cgi, and attempts 
 to set the cookie, but $cookie->bake issues the generic:
 Warning: something's wrong at 
 /usr/local/apache/perl-bin/foo.cgi line 34.
 
 While I know I can use a PerlTransHandler here (and probably 
 will now), does anyone have any ideas about this behavior?
 
 In the meanwhile, if I find out anything more while 
 investigating, I'll post it...
 
 --Geoff
 



Re: oracle : The lowdown

2000-01-20 Thread Ed Phillips

For those of you tired of this thread please excuse me, but
here is MySQL's current position statement on and discussion
about transactions:

Disclaimer: I just helped Monty write this partly in response to
some of the fruitful, to me, discussion on this list. I know
this is not crucial to mod_perl but I find the "wise men who 
are enquirers into many things" to be one of the great things
about this list, to paraphrase old Heraclitus. I learn quite
a bit about quite many things by following leads and hints here
as well as by seeing others problems.

I'd love to see your criticism of the below either here or
off the list.


Ed
-


The question is often asked, by the curious and the critical, "Why is
MySQL not a transactional database?" or "Why does MySQL not support 
transactions?"

MySQL has made a conscious decision to support another paradigm for 
data integrity, "atomic operations." It is our thinking and experience 
that atomic operations offer equal or even better integrity with much 
better performance. We nonetheless appreciate and understand the 
transactional database paradigm and plan, in the next few releases, 
to introduce transaction-safe tables on a per-table basis. We will 
be giving our users the possibility to decide if they need
the speed of atomic operations or if they need to use transactional 
features in their applications. 

How does one use the features of MySQL to maintain rigorous integrity, 
and how do these features compare with the transactional paradigm?

First, in the transactional paradigm, if your applications are written 
in a way that is dependent on the calling of "rollback" instead of "commit" 
in critical situations, then transactions are more convenient. Moreover, 
transactions ensure that unfinished updates or corrupting activities 
are not committed to the database; the server is given the opportunity 
to do an automatic rollback and your database is saved. 

MySQL, in almost all cases, allows you to solve potential 
problems by including simple checks before updates and by running 
simple scripts that check the databases for inconsistencies and 
automatically repair or warn if such occurs. Note that just by 
using the MySQL log or even adding one extra log, one can normally 
fix tables perfectly with no data integrity loss. 

Moreover, "fatal" transactional updates can be rewritten to
 be atomic. In fact,we will go so far as to say that all
 integrity problems that transactions solve can be done with 
LOCK TABLES or atomic updates, ensuring that 
you never will get an automatic abort from the database, which is a
common problem with transactional databases.
 
Not even transactions can prevent all loss if the server goes down.  
In such cases even a transactional system can lose data.  
The difference between different systems lies in just how small 
the time lag is where they could lose data. No system is 100% secure, 
only "secure enough". Even Oracle, reputed to be the safest 
of transactional databases, is reported to sometimes lose data
in such situations.

To be safe with MySQL you only need to have backups and have the update
logging turned on.  With this you can recover from any situation that you could
with any transactional database.  It is, of course, always good to have
backups, independent of which database you use.

The transactional paradigm has its benefits and its drawbacks. Many users
and application developers depend on the ease with which they can code around
problems where an "abort" appears or is necessary, and they may have to do
 a little more work with MySQL to either think differently or write more.
 If you are new to the atomic operations paradigm, or more familiar or more
comfortable with transactions, do not jump to the conclusion that MySQL 
has not addressed these issues. Reliability and integrity are foremost 
in our minds.

Recent estimates are that there are more than 1,000,000 mysqld servers 
currently running, many of which are in production environments.  We hear
 very, very seldom from our users that they have lost any data, and in
 almost all of those cases user error is involved. This is in our 
opinion the best proof of MySQL's stability and reliability.

Lastly, in situations where integrity is of highest importance, MySQL's
 current features allow for transaction-level or 
better  reliability and integrity. 

If you lock tables with LOCK TABLES, all updates will stall until any
integrity checks are made.  If you only take a read lock (as opposed to
a write lock), then reads and inserts are still allowed to happen.
The newly inserted records will not be seen by any of the clients
that have a READ lock until they release their read locks.
With INSERT DELAYED you can queue inserts into a local queue until
the locks are released, without having the client wait for
the insert to complete.
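
As a concrete illustration of the LOCK TABLES pattern from Perl (the table
and column names are invented; $dbh is a connected DBI handle):

    $dbh->do("LOCK TABLES accounts WRITE");
    $dbh->do("UPDATE accounts SET balance = balance - ? WHERE id = ?",
             undef, $amount, $from);
    $dbh->do("UPDATE accounts SET balance = balance + ? WHERE id = ?",
             undef, $amount, $to);
    $dbh->do("UNLOCK TABLES");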


Atomic in the sense that we mean it is nothing magical, it only means 
that you can be sure that while each specific