Re: must I use mod-perl

2003-07-14 Thread Les Mikesell
From: Oskar [EMAIL PROTECTED]

 Install it if you have a lot of time. It took me a week to configure it and
 a month for rewriting scripts.

A RedHat 7.3 install with current updates should run mod_perl nicely
with only the changes to httpd.conf to load the DSO and use it as
a handler. 
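
For reference, here is a minimal httpd.conf sketch of that setup (the module
path, the /perl alias, and the Apache::Registry handler are illustrative
assumptions, not necessarily the stock Red Hat layout):

LoadModule perl_module modules/libperl.so
AddModule  mod_perl.c

Alias /perl/ /var/www/perl/
<Location /perl>
    SetHandler  perl-script
    PerlHandler Apache::Registry
    PerlSendHeader On
    Options +ExecCGI
</Location>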

However, as to whether you need it or not, performance is the main
issue.   As a rule of thumb, I'd plan to install it for anything where you
expect 10 hits a second or more to a perl script or where the script
is slow to start because it is large or needs a database connection.

---
   Les Mikesell
 [EMAIL PROTECTED]




Re: HTML::Mason and mod_perl issues with DSO?

2003-06-08 Thread Les Mikesell
From: Forrest Aldrich [EMAIL PROTECTED]

 With regard to the RT (http://www.fsck.com/rt) program, there were notes in
 the README file that said mod_perl DSO has/had massive scalability
 problems when using HTML::Mason.

This is probably old info based on the DSO build shipped through RedHat 7.2.
Somewhere in the updates to the 7.2 release they got it right, and it is solid
at least through 7.3 and its updates.  The only quirk is that there is a memory
leak if you attempt graceful restarts, so it is always best to stop/start.
I'm not sure about RH8/9 with apache 2.x/mp2.

---
   Les Mikesell
   [EMAIL PROTECTED]




Re: Job tracking and publishing question.

2003-03-11 Thread Les Mikesell
From: Thomas Whitney [EMAIL PROTECTED]

 I want to implement a job tracking and database publishing system and hoping
 for some assistance.
 
 My company does short run 4 color digital printing.  Because it is short run
 we handle multiple jobs every day.  I developed an online bidding system; it
 uses Apache, mod_perl, and mysql.  Now I would like to move to tracking jobs
 online; first, for internal purposes -- it would make the workflow much
 easier to follow -- and later for customers to view the status of their jobs
 on the web.  Each bid has about 38 data fields associated with it and each
 job will have a few more fields along with an image file in the form of a
 pdf.

You would have to customize it quite a bit but you might look at
'Request Tracker' as a starting point:  http://www.bestpractical.com/rt/
It uses Mason with MySQL or PostgreSQL as the framework.   A new version
is on the way, so you would probably want to get the latest beta from
http://www.fsck.com/pub/rt/devel/  (2.1.86 now) to start.

---
  Les Mikesell
 [EMAIL PROTECTED]




Re: Shroud+ Perl obfuscator....

2002-12-21 Thread Les Mikesell
From: Randal L. Schwartz [EMAIL PROTECTED]
 
 Andrzej I extended Robert Jones' Perl Obfuscator, Shroud into what I
 Andrzej am calling Shroud+. I needed it to protect some rather
 Andrzej extensive scripts I have developed for Inventory and Image
 Andrzej Gallery management on client web sites.
 
 I just want to go on the record to say that I consider your action
 personally offensive and ethically questionable.

Yep, if we could just make all those damn consultants, book authors,
and training professionals give away all their work for free whether
they choose to or not...  But then we wouldn't need the Artistic license.

--
   Les Mikesell
   [EMAIL PROTECTED]





Re: Online Tonight (Bricolage)

2002-12-06 Thread Les Mikesell
From: David Wheeler [EMAIL PROTECTED]
 
 I certainly mentioned how important Perl, mod_perl, Apache, Mason, and 
 PostgreSQL are to the success of the project.

I got the workspace side running with only a little fiddling to
back out a CPAN-installed Mason 1.15, but still don't understand
how to publish pages to a viewing site.   Is a working example included
or available?   I was hoping for something like what http://www.plone.org
has as a starting point (but of course I want to use apache/mod_perl
instead of zope/python).  Or do I just have to learn a lot more about
Mason myself?

---
  Les Mikesell
[EMAIL PROTECTED]







Re: [OTish] Version Control?

2002-11-03 Thread Les Mikesell
From: Steven Lembark [EMAIL PROTECTED]

 One quick way to deal with testing things is to tag the releases
 with something meaningful and pass the tag to Q/A. They do an
 update -d -P -r yourtag and test it. If things work then the update
 gets run again in production and life goes on. Since nothing but
 tagged releases go into production they can also back off easily if
 a problem is found; ditto Q/A.

Yes, you always need to tag everything at least at the 'known-working'
points.  In addition to assigning unique tags that you have to remember,
you can 'float' existing tags to specific points which makes it easier
to script actions and keep track of the 'release-1' and 'release-2' versions.
For example, one script can pull several developers' work together from
the head of the repository into a test area.  Then when it all works, another
script can float the 'release-n' tags up one position, tag the testbed instance
as the current release, then update using the release tag to the production
machine or a staging area where you rsync to production.  This allows new
work to continue at the head of the repository without getting pulled into
a production update before testing and gives you a known tag to go back
to the previous tested/working system.  I've never actually backed out a
whole release that way, but it's one of those things that gives a warm-fuzzy
feeling knowing that you can, and the ability to see all the changes between
versions from cvsweb has made it much easier to find and fix problems
that pop up after an update into production.
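
The tag-floating step looks roughly like this (a sketch only; the module
name 'site' is hypothetical):

cvs rtag -F -r release-2 release-1 site   # float release-1 up to the old release-2
cvs tag -F release-2                      # run in the testbed checkout: re-point release-2
cvs update -d -P -r release-2             # run on the production or staging checkout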

---
  Les Mikesell
  [EMAIL PROTECTED]





Re: [OT]: In my Farthers House there are many mansions

2002-11-02 Thread Les Mikesell
From: Jeff [EMAIL PROTECTED]

 Another time, a contractor working for me complained bitterly about
 someone elses obtuse code and lack of comments - the other party 
 said 'Why don't you scroll up?', which he did - lo and behold, about 
 two pages of beautiful comment. Mmmm as he read the comments, from 
 the top, the complainant highlighted each line [easier to read] and 
 when he understood the point of it all, pressed Enter - not 
 realising that this replaced two screens of comment with an empty 
 line... ah ha! we finally figured out the answer to the song:
 'Where have all the comments gone?'

Good story - and a good tie-in to the other off-topic thread about
whether or not it is worth the trouble to use CVS.  If you make sure
that the only way to get between your workspaces and the test/production
servers is a commit/update step, cvsweb can always provide a color-coded
answer to questions like that.

---
  Les Mikesell
[EMAIL PROTECTED]





Re: Optional HTTP Authentication ?

2002-07-01 Thread Les Mikesell

From: Jean-Michel Hiver [EMAIL PROTECTED]

 Oh but I have that already. I know that I need to password protect
 
 /properties.html
 /content.html
 /move.html
 /foo/properties.html
 /foo/content.html
 /foo/move.html
 etc...
 
 Is it possible to password-protect a class of URIs using regexes? That
 would be another good option.

I thought you meant that you wanted the same location to be
accessed by different people with/without passwords.  You
should be able to put the authentication directives in a
<LocationMatch> container in this case.   Another approach
would be to use mod_rewrite to map the request to a directory
containing a symlink to the script and an appropriate .htaccess file.
This is kind of brute-force but it lets you do anything you want with
a request including proxying to an otherwise unreachable port or
server for certain content. Unfortunately I think the symlink approach
appears as a different script to mod_perl so it will cache a separate
copy in memory.
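
Something like this, for example (a sketch; the pattern, realm name, and
password file are placeholders):

<LocationMatch "/(properties|content|move)\.html$">
    AuthType Basic
    AuthName "Editors"
    AuthUserFile /etc/httpd/conf/htpasswd
    require valid-user
</LocationMatch>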

   Les Mikesell
[EMAIL PROTECTED]



Re: Optional HTTP Authentication ?

2002-06-30 Thread Les Mikesell

From: Jean-Michel Hiver [EMAIL PROTECTED]
 
 I *CANNOT* use cookies nor URIs for any kind of session tracking.
 Otherwise I don't think I would have posted this message to the list in
 the first place :-)
 
 I agree that HTTP Basic authentication is totally and uterly ugly, but I
 am going to have to stick with it no matter what... My problem is:
 
 How do I tell apache to set the $ENV{REMOTE_USER} variable if the
 browser sent the credentials, or leave $ENV{REMOTE_USER} undef
 otherwise, without sending a 401 back.

I didn't think a browser would send authentication unless the server
requested it for an authentication realm.  How are you going to
get some people to send the credentials and some not unless you
use different URLs so the server knows when to request them?
Note that you don't have to embed session info here, just add
some element to the URL that serves as the point where you
request credentials and omit it for people that don't log in.  Or
redirect to a different vhost that always requires authentication but
serves the same data.


   Les Mikesell
  [EMAIL PROTECTED]




Re: Is mod_perl the right solution for my GUI dev?

2002-06-26 Thread Les Mikesell

From: Fran Fabrizio [EMAIL PROTECTED]

 You oversimplify.  Cookies do work fine.  What creates, reads, modifies, 
 validates the cookies?  What ties the cookies to useful state 
 information that is stored server-side?  There's a lot of coding 
 potentially involved.  Yes, perl modules exist.  Yes, they'll most 
 likely need customization (in my case, I've customized AuthCookie, and
 tied it to Apache::Session).  It wasn't the end of the world, but it
 wasn't trivial.   A cookie by itself is of rather limited usefulness.

You can avoid most of the grunge work here by using one or more
of the high-level modules like Embperl, Apache::ASP, Mason, etc.
that handle sessions and a lot more for you.

One thing that I don't think anyone has mentioned so far that is
an inherent advantage of a web interface is that unless you really
go out of your way to break it, you automatically end up with
a well-defined client-server network interface.   Thus as you
or your users discover parts of the application that can be
completely automated later, someone can easily whip out some
scripts with LWP to get/post anything without human intervention,
whereas with a traditional GUI it takes a massive effort to ever go
beyond pointing and clicking.   Making an interface easy to
use is often at odds with making it easy to automate, and in
the long run anything you can make 'full-auto' is going to save
time and money (unless the object of the site is to make the
user view a lot of banner ads...).
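
For instance, a minimal sketch (the URL and form fields are hypothetical):

use LWP::UserAgent;
use HTTP::Request::Common qw(POST);

# a scriptable 'browser' that can drive the same interface a human uses
my $ua  = LWP::UserAgent->new;
my $res = $ua->request(POST 'http://app.example.com/report.pl',
                       [ start => '2002-06-01', end => '2002-06-26' ]);
print $res->content if $res->is_success;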

  Les Mikesell
[EMAIL PROTECTED]




Re: Logging under CGI

2002-06-10 Thread Les Mikesell

From: Sergey Rusakov [EMAIL PROTECTED]

 open(ERRORLOG, '>>/var/log/my_log');
 print ERRORLOG "some text\n";
 close ERRORLOG;

 This bit of code runs in every apache child.
 I worry about concurrent access to this log file under heavy apache load. Are
 there any problems on my way?

This is OS dependent.  Most unix type systems perform an
atomic seek-to-end-of-file along with the write when you
have opened in append mode, so as long as the writes are
small enough to complete in one operation concurrency
is not an issue.  It doesn't work over NFS when writing
from multiple machines and it probably doesn't work
the same way on Windows.
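
If a single write might be too large to stay atomic, the usual fix is an
exclusive lock; a sketch with the same hypothetical log path (note that
flock itself does not help across NFS either):

use Fcntl qw(:flock);
open(ERRORLOG, '>>/var/log/my_log') or die "open: $!";
flock(ERRORLOG, LOCK_EX);      # serialize writers sharing this file
seek(ERRORLOG, 0, 2);          # re-seek to end-of-file after getting the lock
print ERRORLOG "some text\n";
close ERRORLOG;                # closing releases the lock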

  Les Mikesell
  [EMAIL PROTECTED]




RE: File::Redundant (OT: AFS)

2002-04-25 Thread Les Mikesell

 From: D. Hageman [mailto:[EMAIL PROTECTED]]
 Subject: Re: File::Redundant
 
 Interesting ... not sure if implementing this in this fashion would be
 worth the overhead.  If such a need exists I would imagine that one
 would have chosen a more appropriate OS level solution.  Think OpenAFS.

This is off-topic of course, but you often don't get
unbiased opinions from the specific list.  Does anyone
have success or horror stories about AFS in a distributed
production site?  Oddly enough the idea of using it
just came up in my company a few days ago to publish
some large data sets that change once daily to several
locations.  I'm pushing a lot of stuff around now with
rsync which works and is very efficient, but the ability
to move the source volumes around transparently and keep
backup snapshots is attractive. 

  Les Mikesell
   [EMAIL PROTECTED]





RE: banner system

2002-04-12 Thread Les Mikesell

 From: Maarten Stolte [mailto:[EMAIL PROTECTED]]
 
 before i invent the wheel for the xth time, can someone tell me whether
 there is an (opensource) banner management system in modperl (MASON?).
 

I used this one for a busy site for several years:
http://www.sklar.com/dad/.  

   Les Mikesell
 [EMAIL PROTECTED]




Re: WYSIWYG Template Editor

2002-01-01 Thread Les Mikesell

From: Matt Sergeant [EMAIL PROTECTED]

  Does anybody know a template engine, whose templates can be edited with a
  WYSIWYG editor (favourably dreamweaver) as they will look when filled
  with example data?

 If you use XSLT, there's a number of options available to you. Try
 searching a site like http://www.cafeconlech.org/

I can't reach that site - is the spelling correct?   I'd like to find
something that would allow non-technical people to write their own templates
for pages that, as they are accessed, fill in variables pulled by a
server-side http request to an XML data source.   To make things even more
difficult, I'd like parts of the resulting page to appear in editable form
fields that could be modified before submitting to yet another location.
We have data servers with commodity exchange data, and reporters that need
to generate stories showing those values, sometimes including comments.
Some of the layouts never change, but it would really be best if the
reporters could generate and control their own templates without having
to understand all of the details involved.

Les Mikesell
[EMAIL PROTECTED]




Re: Any good WebMail programs?

2001-12-13 Thread Les Mikesell

This is very off-topic but I'll mention it because you can
install the whole server and be running faster than you
can do the research to find something else.  Look at
http://www.e-smith.org.  They have a modified Redhat
distribution that is an 'office-in-a-box' and includes
webmail access along with most of the other services
you might want, all managed through a web interface.
If you go to www.e-smith.com instead, you will find
the commercial, supported service, but the .org site
has a free iso image download and pretty good documentation
on how to customize their template-driven configuration.
The webmail program happens to be imp, using php and
it currently doesn't have a way to authenticate against
LDAP, although an LDAP directory is automatically
updated as you add users.  

   Les Mikesell
   [EMAIL PROTECTED]




Re: multiple rapid refreshes - how to handle them.

2001-10-17 Thread Les Mikesell

This doesn't solve the specific problem, but it is a good idea
to tune 'MaxClients' down in httpd.conf to a number that your
server can sustain.   The browsers may see a few errors during
the overload but the server will recover a lot faster when it stops.

  Les Mikesell
[EMAIL PROTECTED]


- Original Message - 
From: Mark Maunder [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 17, 2001 5:36 PM
Subject: multiple rapid refreshes - how to handle them.


 Is there a standard way of dealing with users who are on high bandwidth
 connections who hit refresh (hold down F5 in IE for example) many times
 on a page that generates a lot of database activity?
 
 On a 10 meg connection, holding down F5 in IE for a few seconds
 generates around 300 requests and grinds our server to a halt. The app
 is written as a single mod_perl handler that maintains state with
 Apache::Session and cookies. content is generated from a backend mysql
 database.
 
 tnx!
 
 
 




Re: Module to catch (and warn about) Code Red

2001-08-05 Thread Les Mikesell

The descriptions I've seen indicate that it has a flaw in
the attempt to pick random targets.  It always uses the
same seed, so every instance runs through the same addresses
in the same order.  That means you will get hit by the same
box if it has been rebooted and then re-infected (and it
is almost sure to be re-infected if the patch has not been applied).

  Les Mikesell
 [EMAIL PROTECTED]

- Original Message - 
From: Todd Finney [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, August 05, 2001 9:51 AM
Subject: Re: Module to catch (and warn about) Code Red


 
 I don't think this is an issue.  Someone more familiar with the virus 
 can chime in, but the information that's out there on it seems to 
 indicate that it's not going to pick the same IP twice, except by 
 chance.





RE: Ultimate Bulletin Board? Jezuz.

2001-07-28 Thread Les Mikesell

It has been a while since I did a comparison but I have
mwforum (http://www.mawic.de/mwforum) running under
mod_perl without any trouble.  I think it is distantly
related to wwwthreads but still under the GPL.

  Les Mikesell
[EMAIL PROTECTED]

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Saturday, July 28, 2001 10:42 AM
To: kyle dawkins
Cc: [EMAIL PROTECTED]
Subject: Re: Ultimate Bulletin Board? Jezuz.


Hi,

 you might want to look into vBulletin, it is used on a lot of different
 sites is written in php with a MySQL back end and looks very similar to
 UBB.


yes, but as an engineer i can't condone the use of PHP, sorry...

You might want to consider WWWThreads (http://www.wwwthreads.com/). Code
is simple to read/understand, and it works out of the box under 
Apache::Registry.

2. any problems with it under mod_perl?  I have it running fine under 
PerlRun but I am not so sure it'll behave under Registry.

Give up on this. If you have the resources to run mod_perl, no reason
you should be using UBB. Use something with an SQL backend. If you do
decide you want to stick with UBB, feel free to drop me an email as
I've written several conversion scripts and can give you some pointers
on how the member database is stored.

Cheers,

Alex


--
Gossamer Threads Inc. 
--
This email was brought to you by Gossamer Mail 2
http://www.gossamer-threads.com/scripts/webmail/





Re: cron for mod_perl?

2001-03-03 Thread Les Mikesell

 From: Matt Sergeant [EMAIL PROTECTED]
 On Thu, 15 Feb 2001, Perrin Harkins wrote:
 
  Maybe we should add process scheduling into Apache, and a file system, and
  a window manager, and...
 
 Perhaps its the difference between people who've had to write shrink-wrap
 apps? The question for me is dependencies. We add in Schedule::Cron or
 whatever and then you've got to add in LWP or HTTP::GHTTP or HTTP::Lite
 to do the request. Its just something that would be useful to a lot of
 people, IMHO.

I'm not sure why you would want a perl cron-alike under unix unless you
are doing something so frequently that you don't want to reload perl for
every run, but it would be handy to have a scheduling module that would
work on windows NT or Win2k to provide the missing cron functionality.

  Les Mikesell
   [EMAIL PROTECTED]





Re: Using rewrite...

2001-01-24 Thread Les Mikesell


- Original Message - 
From: "Tomas Edwardsson" [EMAIL PROTECTED]
To: "Les Mikesell" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, January 22, 2001 3:01 AM
Subject: Re: Using rewrite...


   The problem is that I can't find a way to send the request
   to a relevant port if the request calls for a URL which ends
   with a slash ("/"). Any hints ?
  
 
 I have no problem with matching the slash. The problem is if you ask
 for http://www.domain.com/ the proxy daemon has no idea if the client
 is asking for a perl, php or static document. Therefore I need a method
 to extract the relevant filename from DirectoryIndex.

I see what you mean now.  That could be a problem if you are supporting
a lot of sites or ones that someone else configured.  I avoided the issue
by never referencing directory names internally except for the server
root (/), and I handle that by letting mod_rewrite on the front end
send a client redirect to /index.shtml  (I have mod_perl handle .shtml
files because most have virtual includes of perl scripts).  When the
redirected client comes back, the rewrite rule for shtml will match.

  Les Mikesell
[EMAIL PROTECTED]





Re: Using rewrite...

2001-01-20 Thread Les Mikesell


- Original Message - 
From: "Tomas Edwardsson" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, January 19, 2001 4:56 AM
Subject: Using rewrite...


 Hi
 
 I'm using rewrite to send a request to a relevant server, for
 instance if a filename ends with .pl I rewrite it to the perl
 enabled apache:
 
 RewriteEngine On
 
 # Perl Enabled.
 RewriteRule ^/(.*\.ehtm)$ http://%{HTTP_HOST}:81/$1 [P]
 RewriteRule ^/(.*\.pl)$ http://%{HTTP_HOST}:81/$1 [P]
 # PHP Enabled
 RewriteRule ^(.*\.php)$ http://%{HTTP_HOST}:83$1 [P]
 # Everything else, images etc...
 RewriteRule ^/(.*)$ http://%{HTTP_HOST}:82/$1 [P]
 
 The problem is that I can't find a way to send the request
 to a relevant port if the request calls for a URL which ends
 with a slash ("/"). Any hints ?

Won't it work if you make the regexp match the slash too?  Something
like:
RewriteRule ^/(.*\.pl/*)$ http://%{HTTP_HOST}:81/$1 [P]
  or just duplicate the line with and without the slash in the pattern.


Les Mikesell
   [EMAIL PROTECTED]
 




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-18 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: "mod_perl list" [EMAIL PROTECTED]; "Stephen Anderson"
[EMAIL PROTECTED]
Sent: Thursday, January 18, 2001 10:38 PM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory


  There's only one run queue in the kernel.  The first task ready to run is
  put at the head of that queue, and anything arriving afterwards waits.  Only
  if that first task blocks on a resource or takes a very long time, or
  a higher priority process becomes able to run due to an interrupt, is that
  process taken out of the queue.

Note that any I/O request that isn't completely handled by buffers will
trigger the 'blocks on a resource' clause above, which means that
jobs doing any real work will complete in an order determined by
something other than the cpu and not strictly serialized.  Also, most
of my web servers are dual-cpu so even cpu bound processes may
complete out of order.

   Similarly, because of the non-deterministic nature of computer systems,
   Apache doesn't service requests on an LRU basis; you're comparing
   SpeedyCGI against a straw man. Apache's servicing algorithm approaches
   randomness, so you need to build a comparison between forced-MRU and
   random choice.

  Apache httpd's are scheduled on an LRU basis.  This was discussed early
  in this thread.  Apache uses a file-lock for its mutex around the accept
  call, and file-locking is implemented in the kernel using a round-robin
  (fair) selection in order to prevent starvation.  This results in
  incoming requests being assigned to httpd's in an LRU fashion.

But, if you are running a front/back end apache with a small number
of spare servers configured on the back end there really won't be
any idle perl processes during the busy times you care about.  That
is, the  backends will all be running or apache will shut them down
and there won't be any difference between MRU and LRU (the
difference would be which idle process waits longer - if none are
idle there is no difference).

  Once the httpd's get into the kernel's run queue, they finish in the
  same order they were put there, unless they block on a resource, get
  timesliced or are pre-empted by a higher priority process.

Which means they don't finish in the same order if (a) you have
more than one cpu, (b) they do any I/O (including delivering the
output back which they all do), or (c) some of them run long enough
to consume a timeslice.

  Try it and see.  I'm sure you'll run more processes with speedycgi, but
  you'll probably run a whole lot fewer perl interpreters and need less ram.

Do you have a benchmark that does some real work (at least a dbm
lookup) to compare against a front/back end mod_perl setup?

  Remember that the httpd's in the speedycgi case will have very little
  un-shared memory, because they don't have perl interpreters in them.
  So the processes are fairly indistinguishable, and the LRU isn't as
  big a penalty in that case.

  This is why the original designers of Apache thought it was safe to
  create so many httpd's.  If they all have the same (shared) memory,
  then creating a lot of them does not have much of a penalty.  mod_perl
  applications throw a big monkey wrench into this design when they add
  a lot of unshared memory to the httpd's.

This is part of the reason the front/back end  mod_perl configuration
works well, keeping the backend numbers low.  The real win when serving
over the internet, though, is that the perl memory is no longer tied
up while delivering the output back over frequently slow connections.

   Les Mikesell
   [EMAIL PROTECTED]





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: "mod_perl list" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, January 06, 2001 6:32 AM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts
that contain un-shared memory



  Right, but this also points out how difficult it is to get mod_perl
  tuning just right.  My opinion is that the MRU design adapts more
  dynamically to the load.

How would this compare to apache's process management when
using the front/back end approach?

  I'd agree that the size of one Speedy backend + one httpd would be the
  same or even greater than the size of one mod_perl/httpd when no memory
  is shared.  But because the speedycgi httpds are small (no perl in them)
  and the number of SpeedyCGI perl interpreters is small, the total memory
  required is significantly smaller for the same load.

Likewise, it would be helpful if you would always make the comparison
to the dual httpd setup that is often used for busy sites.   I think it must
really boil down to the efficiency of your IPC vs. access to the full
apache environment.

  Les Mikesell
 [EMAIL PROTECTED]




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; "mod_perl list" [EMAIL PROTECTED]
Sent: Saturday, January 06, 2001 4:37 PM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts
that contain un-shared memory


Right, but this also points out how difficult it is to get mod_perl
 tuning just right.  My opinion is that the MRU design adapts more
 dynamically to the load.
  
   How would this compare to apache's process management when
   using the front/back end approach?

  Same thing applies.  The front/back end approach does not change the
  fundamentals.

It changes them drastically in the world of slow internet connections,
but perhaps not much in artificial benchmarks or LAN use.   I think
you can reduce the problem to:

 How much time do you spend in non-perl apache code vs. how
 much time you spend in perl code.
and the solution to:
Only use the memory footprint of perl for the minimal time it is needed.

If your I/O is slow and your program complexity minimal, the bulk of
the wall-clock time is spent in i/o wait by non-perl apache code.  Using
a front-end proxy greatly reduces this time (and correspondingly the
ratio of time spent in non-perl code) for the backend where it matters
because you are tying up a copy of perl in memory. Likewise, increasing
the complexity of the perl code will reduce this ratio, reducing the
potential for saving memory regardless of what you do, so benchmarking
a trivial perl program will likely be misleading.

 I'd agree that the size of one Speedy backend + one httpd would be the
 same or even greater than the size of one mod_perl/httpd when no memory
 is shared.  But because the speedycgi httpds are small (no perl in them)
 and the number of SpeedyCGI perl interpreters is small, the total memory
 required is significantly smaller for the same load.
  
   Likewise, it would be helpful if you would always make the comparison
   to the dual httpd setup that is often used for busy sites.   I think it
   must really boil down to the efficiency of your IPC vs. access to the full
   apache environment.

  The reason I don't include that comparison is that it's not fundamental
  to the differences between mod_perl and speedycgi or LRU and MRU that
  I have been trying to point out.  Regardless of whether you add a
  frontend or not, the mod_perl process selection remains LRU and the
  speedycgi process selection remains MRU.

I don't think I understand what you mean by LRU.   When I view the
Apache server-status with ExtendedStatus On,  it appears that
the backend server processes recycle themselves as soon as they
are free instead of cycling sequentially through all the available
processes.   Did you mean to imply otherwise or are you talking
about something else?

   Les Mikesell
 [EMAIL PROTECTED]





Re: Javascript - just say no(t required)

2001-01-05 Thread Les Mikesell


- Original Message - 
From: "dreamwvr" [EMAIL PROTECTED]
To: "Randal L. Schwartz" [EMAIL PROTECTED]
Cc: "Gunther Birznieks" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, January 05, 2001 12:00 PM
Subject: Re: Javascript - just say no(t required)


 hi,
Seems to me the only reasonable usage for cookies that does not
 seem to be abuse.org is as a temporary ticket granting system.. so
 the next time you want to get a byte you need a ticket to goto the
 smorg..

I think it is also very reasonable to store user-selected preferences
in cookies, especially for things like sizes, colors, fonts for
certain pages.  Why should the server side have to store millions
of things like that?  Even if it does, the choices may be different
for the same user running a different browser.   Normally you
would have some default that would work for the cookie-challenged
folks anyway.
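
A sketch of setting such a preference cookie (the names are hypothetical;
CGI.pm can read it back later with $query->cookie('prefs')):

use CGI::Cookie;
my $prefs = CGI::Cookie->new(-name    => 'prefs',
                             -value   => { font => 'verdana', size => 3 },
                             -expires => '+1y');
print "Set-Cookie: $prefs\n";   # stringifies to a valid Set-Cookie value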

   Les Mikesell
  [EMAIL PROTECTED]





Re: [OT] Rewrite arguments?

2001-01-05 Thread Les Mikesell


- Original Message -
From: "G.W. Haywood" [EMAIL PROTECTED]
To: "Les Mikesell" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, January 05, 2001 1:44 PM
Subject: Re: [OT] Rewrite arguments?


 On Thu, 4 Jan 2001, Les Mikesell wrote:

  This may or may not be a mod_perl question:

 Probably not :)

I have a feeling it is going to end up being possible only
with LWP...

  I want to change the way an existing request is handled and it can be done
  by making a proxy request to a different host but the argument list must
  be slightly different.It is something that a regexp substitution can
  handle and I'd prefer for the front-end server to do it via mod_rewrite
  but I can't see any way to change the existing arguments via RewriteRules.

 I don't exactly understand your problem, but from what I can see you
 should be able to do what you want with mod_rewrite if you just use a
 regexp which contains a question mark.  Have I missed something?

One of us is missing something.  I hope it is me, but when I turn on
rewrite logging, the input side contains only the location portion.  The
argument string has already been stripped.  Apparently it is put back
in place after the substitution, since  ^(.*)$  http://otherserver$1  [P] will
send the same arguments on to the downstream host.

 Does this extract from the docs help?
 --
 One more note: You can even create URLs in the substitution string containing
 a query string part. Just use a question mark inside the substitution string
 to indicate that the following stuff should be re-injected into the
 QUERY_STRING.  When you want to erase an existing query string, end the
 substitution string with just the question mark.

This allows adding additional arguments, or deleting them all.  I want to
change an existing one and add some more.  Something like:
/cgi-bin/prog?arg1=22&arg2=24 should become:
   http://otherhost.domain/prog?newarg1=22&arg2=24&uname=me&pwd=password


 Note: There is a special feature: When you prefix a substitution field
 with http://thishost[:thisport] then mod_rewrite automatically strips
 it out.  This auto-reduction on implicit external redirect URLs is a
 useful and important feature when used in combination with a
 mapping-function which generates the hostname part.  Have a look at
 the first example in the example section below to understand this.

That won't affect this case.  The hostname will be fixed and always
require the proxy mode.

   Les Mikesell
 [EMAIL PROTECTED]




Re: [OT] Rewrite arguments?

2001-01-05 Thread Les Mikesell


- Original Message -
From: "Dave Kaufman" [EMAIL PROTECTED]
To: "Les Mikesell" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, January 05, 2001 11:09 PM
Subject: Re: [OT] Rewrite arguments?


  One of us is missing something.  I hope it is me, but when I turn on
  rewrite logging, the input side contains only the location portion.  The
  argument string has already been stripped.

 the query string is stripped from what the rewrite rule is matching, yes.
 but you can use a RewriteCond above the rule to test %{QUERY_STRING} against
 a regexp pattern, and store backreferences from it as %1, %2...etc.

That's it - thank you very much.  I had seen how to match and reuse chunks
in the RewriteCond, but somehow missed the ability to substitute them
in the RewriteRule.  I should have known it was too useful to have
been left out.

 # match and store the interesting arg values as backrefs
 RewriteCond %{QUERY_STRING} arg1=([0-9]+)&arg2=([0-9]+)
 # build a new QS for the proxy url
 RewriteRule ^/cgi-bin/prog
 http://otherhost/prog?newarg1=%1&arg2=%2&uname=me&pwd=password [R,L]

Since I only want to substitute one argument name without knowing much
else I think this will work:
RewriteCond  %{QUERY_STRING} (.*)(arg1=)(.*)
RewriteRule ^/cgi-bin/prog
http://otherhost/prog?%1newarg1=%3&uname=me&pwd=password [P,L]
(I want a proxy request to hide the password usage, not a client redirect
but either could work)

Les Mikesell
   [EMAIL PROTECTED]





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: "Perrin Harkins" [EMAIL PROTECTED]
Cc: "Gunther Birznieks" [EMAIL PROTECTED]; "mod_perl list"
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 6:56 AM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts
that contain un-shared memory


  
   Are the speedycgi+Apache processes smaller than the mod_perl
   processes?  If not, the maximum number of concurrent requests you can
   handle on a given box is going to be the same.

  The size of the httpds running mod_speedycgi, plus the size of speedycgi
  perl processes is significantly smaller than the total size of the httpd's
  running mod_perl.

That would be true if you only ran one mod_perl'd httpd, but can you
give a better comparison to the usual setup for a busy site where
you run a non-mod_perl lightweight front end and let mod_rewrite
decide what is proxied through to the larger mod_perl'd backend,
letting apache decide how many backends you need to have
running?

  The reason for this is that only a handful of perl processes are required by
  speedycgi to handle the same load, whereas mod_perl uses a perl interpreter
  in all of the httpds.

I always see at least a 10-1 ratio of front-to-back end httpd's when serving
over the internet.   One effect that is difficult to benchmark is that clients
connecting over the internet are often slow and will hold up the process
that is delivering the data even though the processing has been completed.
The proxy approach provides some buffering and allows the backend
to move on more quickly.  Does speedycgi do the same?

  Les Mikesell
[EMAIL PROTECTED]





Rewrite arguments?

2001-01-04 Thread Les Mikesell

This may or may not be a mod_perl question: 
I want to change the way an existing request is handled and it can be done
by making a proxy request to a different host but the argument list must
be slightly different.It is something that a regexp substitution can
handle and I'd prefer for the front-end server to do it via mod_rewrite
but I can't see any way to change the existing arguments via RewriteRules.
To make the new server accept the old request I'll have to modify the name
of one of the arguments and add some extra ones.  I see how to make
mod_rewrite add something, but not modify the existing part. Will I
have to let mod_perl proxy with LWP instead or have I missed something
about mod_rewrite?   (Modifying the location portion is easy, but the
argument list seems to be handled separately).

Les Mikesell
  [EMAIL PROTECTED]





Re: the edge of chaos

2001-01-04 Thread Les Mikesell


- Original Message - 
From: "Justin" [EMAIL PROTECTED]
To: "Geoffrey Young" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 4:55 PM
Subject: Re: the edge of chaos


 
 Practical experiments (ok - the live site :) convinced me that 
 the well recommended modperl setup of fe/be suffer from failure
 and much wasted page production when load rises just a little
 above *maximum sustainable throughput* ..

It doesn't take much math to realize that if you continue to try to
accept connections faster than you can service them, the machine
is going to die, and as soon as you load the machine to the point
that you are swapping/paging memory to disk the time to service
a request will skyrocket.   Tune down MaxClients on both the
front and back end httpd's to what the machine can actually
handle and bump up the listen queue if you want to try to let
the requests connect and wait for a process to handle them.  If
you aren't happy with the speed the machine can realistically
produce, get another one (or more) and let the front end proxy
to the other(s) running the backends.
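
Something along these lines in httpd.conf, with purely illustrative numbers:

# front-end httpd: what the box can actually sustain
MaxClients    150
# let bursts wait in the kernel queue instead of failing outright
ListenBacklog 1024

and in the back end's own httpd.conf:

# back-end mod_perl httpd: keep the number of perl processes small
MaxClients    20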

 Les Mikesell
 [EMAIL PROTECTED]






Re: getting rid of multiple identical http requests (bad users double-clicking)

2001-01-04 Thread Les Mikesell


- Original Message -
From: "Ed Park" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 6:52 PM
Subject: getting rid of multiple identical http requests (bad users
double-clicking)


 Does anyone out there have a clean, happy solution to the problem of users
 jamming on links  buttons? Analyzing our access logs, it is clear that it's
 relatively common for users to click 2,3,4+ times on a link if it doesn't
 come up right away. This not good for the system for obvious reasons.

The best solution is to make the page come up right away...  If that isn't
possible, try to make at least something show up.  If your page consists
of a big table the browser may be waiting for the closing tag to compute
the column widths before it can render anything.

 I can think of a few ways around this, but I was wondering if anyone else
 had come up with anything. Here are the avenues I'm exploring:
 1. Implementing JavaScript disabling on the client side so that links become
 'click-once' links.
 2. Implement an MD5 hash of the request and store it on the server (e.g. in
 a MySQL server). When a new request comes in, check the MySQL server to see
 whether it matches an existing request and disallow as necessary. There
 might be some sort of timeout mechanism here, e.g. don't allow identical
 requests within the span of the last 20 seconds.

This might be worthwhile to trap duplicate postings, but unless your page
requires a vast amount of server work you might as well deliver it as
go to this much trouble.

  Les Mikesell
[EMAIL PROTECTED]





Re: can't flush buffers?

2000-12-23 Thread Les Mikesell


- Original Message -
From: "Wesley Darlington" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, December 23, 2000 1:44 PM
Subject: Re: can't flush buffers?


 Hi All,

 On Sat, Dec 23, 2000 at 09:38:11AM -0800, quagly wrote:
  This is the relevant code:
 
  while ($sth->fetch) {
     $r->print ("<TR>",
                map("<TD>$_</TD>", @cols),
                "</TR>");
     $r->rflush;
  }

 A thought is knocking at the back of my head - browsers don't render
 tables until they've got the whole thing. I think. Try sending lots
 of single-row tables instead of one big table...?

Yes, this is most likely the real problem - if the browser has to compute
the column widths it can't do it until it has seen the end of the table.
You can avoid it by specifying the widths in the table tag or by closing
and restarting the table after some reasonable sized number of rows.
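
A sketch of the second approach, reusing the $sth and @cols from the quoted
code (the 50-row chunk size is arbitrary):

my $rows = 0;
$r->print("<TABLE>");
while ($sth->fetch) {
    $r->print("<TR>", map("<TD>$_</TD>", @cols), "</TR>");
    unless (++$rows % 50) {
        # close and reopen the table so the browser can render what it has
        $r->print("</TABLE><TABLE>");
        $r->rflush;
    }
}
$r->print("</TABLE>");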

Les Mikesell
[EMAIL PROTECTED]





Re: Question

2000-11-22 Thread Les Mikesell

If you run the 2-apache model described in the guide (as you generally
need on a busy site), you can use the locations set in ProxyPass
directives to determine which requests are passed to the backend
mod_perl apache and let the lightweight front end handle the
others directly.   Or you can use mod_rewrite to almost arbitrarily
select which requests are run immediately by the front end or
proxied through to the back end server.  You don't have to make
it visible to the outside by running the back end on a different
address - it can be another port accessed only by the front end
proxy.
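
For example (a sketch; the loopback port for the back end is arbitrary):

ProxyPass        /perl/ http://127.0.0.1:8042/perl/
ProxyPassReverse /perl/ http://127.0.0.1:8042/perl/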

  Les Mikesell
 [EMAIL PROTECTED]

- Original Message -
From: "Peiper,Richard" [EMAIL PROTECTED]
To: "'Jonathan Tweed'" [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Wednesday, November 22, 2000 8:20 AM
Subject: RE: Question



 How could they not? Since the files are executable by any process,
 then all processes must have the mod_perl code in it. You could if you
 really wanted to run 2 versions of Apache, one with mod_perl and one
 without. You could then call all CGI's through a different IP and then run
 mod_perl on that one only. This would reduce the sizes of your executables
 running in memory for Apache.

 Richard
 Web Engineer
 ProAct Technologies Corp.


  -Original Message-
  From: Jonathan Tweed [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, November 22, 2000 9:15 AM
  To: '[EMAIL PROTECTED]'
  Subject: Question
 
 
  Hi
 
  I would be grateful if someone could answer this question:
 
  Even if you tell Apache only to execute files in a certain
  directory under
  mod_perl do all processes still include the mod_perl code?
 
  Thanks
 
  Jonathan Tweed







Re: database access

2000-11-16 Thread Les Mikesell


- Original Message -
From: "Gunther Birznieks" [EMAIL PROTECTED]
To: "Les Mikesell" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: "Tim Bunce" [EMAIL PROTECTED]; "Aaron" [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Thursday, November 16, 2000 3:04 AM
Subject: Re: database access


 1. I don't see the scenario you are talking about (dynamic connection
 pooling) actually working too well in practice because with most web sites
 there is usually peak times and non-peak times. Your database still has to
 handle the peak times and keep the connection open, so why not just leave
 the connection open during non-peak. It doesn't seem like it would really
 matter. Do you have a real scenario that this is a concern in?

This is not so much a problem where you have one or two databases
on a production server that is used by most pages - you are right
that you just have to be able to handle those connections.  The problem
I see is on an internal intranet server where there are lots of little
special-purpose databases - separate calendar, forums, and assorted
applications, each with its own database and usage pattern.  If
these all get used at once, the backend database server accumulates
a lot of connections.   At the moment I don't have run-time user/password
based connections per user, but that would be an even bigger problem.

 2. I do see adding a regex on the connect string to explicitly stop
 Apache::DBI from caching the connection being valuable.

That would probably be enough - or the other way around to
specify the one(s) you know should be cached.

 3. As a plug, I had also suggested a couple of years ago that I would like
 the feature to be able to specifically not have the ping() method called by
 Apache::DBI if the database had already been pinged within a set period of
 time.

That would fall into place if it timestamped the usage per handle and
also did a real disconnect on any that hadn't been used recently.

 Les Mikesell
[EMAIL PROTECTED]





Re: Microperl

2000-11-15 Thread Les Mikesell


- Original Message -
From: "Bill Moseley" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, November 15, 2000 12:30 PM
Subject: Microperl

 I don't build mod_rewrite into a mod_perl Apache as I like rewriting with
 mod_perl much better.  But it doesn't make much sense to go that route for
 a light-weight front-end to heavy mod_perl backend servers, of course.

Just curious: what don't you like about mod_rewrite?

  Les Mikesell
   [EMAIL PROTECTED]





Re: Microperl

2000-11-15 Thread Les Mikesell


- Original Message -
From: "Bill Moseley" [EMAIL PROTECTED]
To: "Les Mikesell" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, November 15, 2000 8:04 PM
Subject: Re: Microperl
 
  I don't build mod_rewrite into a mod_perl Apache as I like rewriting with
  mod_perl much better.  But it doesn't make much sense to go that route for
  a light-weight front-end to heavy mod_perl backend servers, of course.
 
 Just curious: what don't you like about mod_rewrite?

 You ask that on the mod_perl list? ;)  It's not perl, of course.

Yes, if it weren't for a lightweight httpd with mod_rewrite able
to selectively proxy to several machines my mod_perl servers
would have melted long ago.

 I like those perl sections a lot.

 Oh, there were the weird segfaults that I had for months and months.
 http://www.geocrawler.com/archives/3/182/2000/10/0/4480696/

I usually force a proxy with [P], or immediate local
action with [L] instead of falling through.  I don't know
if that would have avoided your problem or not.


 Nothing against mod_rewrite -- I was just wondering if a small perl could
 be embedded with out bloating the server too much.

I don't think 'small' and 'perl' belong in the same sentence...

Les Mikesell
   [EMAIL PROTECTED]




Re: database access

2000-11-15 Thread Les Mikesell


- Original Message -
From: [EMAIL PROTECTED]
To: "Les Mikesell" [EMAIL PROTECTED]
Cc: "Tim Bunce" [EMAIL PROTECTED]; "Aaron" [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Wednesday, November 15, 2000 2:21 AM
Subject: Re: database access


 On Tue, 14 Nov 2000, Les Mikesell wrote:

  I wonder if Apache::DBI could figure out what connections are worth
  holding open by itself?  It could have some hints in the config file
  like regexps to match plus a setting that would tell it not to
  make a connection persist unless it had seen  x instances of that
  connect string in y seconds.

 That really, really sucks, but Apache is selecting on the HTTP socket, and
 nothing matters beyond that, except signals of course. What you are
 implying is that DBI will be aware of the connections being closed or
 SIGALRM coming thru to the apache and on its lap, but it won't happen.

No, I realize there is nothing to do after the fact - what I meant was that
Apache::DBI might allow disconnect to really happen the first few
times it sees a connect string after a child startup.   If it saved the
string with a timestamp and waited until it had seen the same string
several times within a short interval it would be fairly likely that it
would be worth staying connected.   You'd trade some slower hits
off against not accumulating a huge number of little-used database
connections.

 Les Mikesell
[EMAIL PROTECTED]





Re: database access

2000-11-14 Thread Les Mikesell


- Original Message -
From: "Tim Bunce" [EMAIL PROTECTED]

  
   Don't get me wrong here, "but", it would be nice if the undocumented
   somehow made it to the documented status.
  
 
   yeah... but Apache::DBI and DBI are in cahoots! it's a secret love that
   no documentation can break apart!
 
   no, really it would be nice if the DBI connection "hook" was documented.
   it might even turn out to be useful for other module authors.

 The next version will support a dbi_connect_method attribute to
 DBI->connect() to tell it which driver class::method to use.

I wonder if Apache::DBI could figure out what connections are worth
holding open by itself?  It could have some hints in the config file
like regexps to match plus a setting that would tell it not to
make a connection persist unless it had seen  x instances of that
connect string in y seconds.

   Les Mikesell
 [EMAIL PROTECTED]





Re: Fast DB access

2000-11-10 Thread Les Mikesell

I think it is at least to the point where commercial code would be
released - free software never has any pressure to make claims
of stability even when it could...   A lot of places are using it in
production just to avoid the possibility of a slow fsck after a crash,
but it is enormously faster at creating and deleting files too, because
everything is indexed, so it would be an ideal stash for fast-changing
session data.   If you don't trust it for the whole system you can just
use it on one partition for the session database.   Several Linux
distributions include it now.

- Original Message -
From: "Gunther Birznieks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, November 10, 2000 12:58 AM
Subject: Re: Fast DB access


 Isn't that a beta-level filesystem?

 At 12:47 AM 11/10/2000 -0600, Les Mikesell wrote:

 - Original Message -
 From: "Tim Bunce" [EMAIL PROTECTED]
 
 If you're always looking stuff up on simple ID numbers and
 "stuff" is a very simple data structure, then I doubt any DBMS can
 beat

  open D, "/data/1/12/123456" or ...

 from a fast local filesystem.
   
Note that Theo Schlossnagel was saying over lunch at ApacheCon that if
your filename has more than 8 characters on Linux (ext2fs) it skips from
a hashed algorithm to a linear algorithm (or something to that effect).
So go careful there. I don't have more details or a URL for any
information on this though.
  
   Similarly on Solaris (and perhaps most SysV derivatives) path component
   names longer than 16 chars (configurable) don't go into the inode
   lookup cache and so require a filesystem directory lookup.
 
 If you are building a new system with this scheme, try ReiserFS on
 a Linux box.   It does not suffer from the usual problems when
 you put a large number of files in one directory and is extremely
 fast at lookups.
 
Les Mikesell
[EMAIL PROTECTED]

 __
 Gunther Birznieks ([EMAIL PROTECTED])
 eXtropia - The Web Technology Company
 http://www.extropia.com/





Re: database access

2000-11-10 Thread Les Mikesell

Perrin Harkins wrote:
 
 On Fri, 10 Nov 2000, Tim Sweetman wrote:
   Would you be interested in adding support for resetting some of these to
   Apache::DBI?  It's pretty easy to do, using PerlCleanupHandler like the
   auto-rollback does.  It would be database-specific though, so you'd have
   to find a way for people to explicitly request cleanups.
 
  I suspect automating via DBI would be a major pain, because you'd have
  to be able to identify the "dangerous" changes in state. Probably
  requiring SQL to be parsed. :(
 
 The current rollback cleanup doesn't parse SQL.  It knows that a rollback
 won't do damage if there are no transactions to be cleaned up, so it's
 safe to do every time.  If there are other things that work this way, you
 could add them.  Probably wouldn't work for things like MySQL table locks
 though.  People will have to do that themselves.

If the backend supports rollback, couldn't you just try that instead
of the ping method to see if you are still connected and get the
cleanup as a side effect?
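
Roughly like this, as a sketch - %Apache::DBI::Connected here is a
stand-in I made up for the module's internal handle cache, not its
real variable name:

    use strict;

    sub rollback_cleanup {
        for my $dbh (values %Apache::DBI::Connected) {
            next if $dbh->{AutoCommit};    # nothing to undo
            # a failed rollback doubles as a liveness check,
            # much like the ping test at connect time
            eval { $dbh->rollback };
            warn "dropping stale handle: $@" if $@;
        }
        return 0;
    }

    # registered per request, e.g. from a handler:
    # $r->push_handlers( PerlCleanupHandler => \&rollback_cleanup );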

 
  In principle, you could probably have a ->do_not_reuse method which
  could be called before someone does something dangerous. As long as
  they remember.
 
 But that would also mean no more persistent connection.  Maybe that would
 work if you don't do it very often.

It would be very useful to be able to specify at connect time that
you don't want a particular connection to be persistent.  If you have
a lot of small databases, or some with different user/password
permissions, you accumulate too many backend servers - but if you
also have one or more connections used all the time you don't want
to give up Apache::DBI.  I'm not sure it is worth caching MySQL
connections on the same host either, since it is so fast to start up.
Has anyone timed the difference?
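
A quick way to time it (DSN and credentials are placeholders):

    use strict;
    use Benchmark qw(timethis);
    use DBI;

    # Substitute a real DSN, user, and password before running.
    my $dsn = 'dbi:mysql:database=test;host=localhost';

    # Time raw connect/disconnect cycles; compare with the near-zero
    # cost of reusing a handle cached by Apache::DBI.
    timethis( 500, sub {
        my $dbh = DBI->connect( $dsn, 'user', 'password',
                                { RaiseError => 1 } );
        $dbh->disconnect;
    } );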

   Les Mikesell
   [EMAIL PROTECTED]



Re: Fast DB access

2000-11-09 Thread Les Mikesell


- Original Message -
From: "Tim Bunce" [EMAIL PROTECTED]

   If you're always looking stuff up on simple ID numbers and
   "stuff" is a very simple data structure, then I doubt any DBMS can
   beat
  
open D, "/data/1/12/123456" or ...
  
   from a fast local filesystem.
 
  Note that Theo Schlossnagel was saying over lunch at ApacheCon that
  if your filename has more than 8 characters on Linux (ext2fs) it
  skips from a hashed algorithm to a linear algorithm (or something
  to that effect). So go careful there. I don't have more details or
  a URL for any information on this though.

 Similarly on Solaris (and perhaps most SysV derivatives) path component
 names longer than 16 chars (configurable) don't go into the inode
 lookup cache and so require a filesystem directory lookup.

If you are building a new system with this scheme, try ReiserFS on
a Linux box.   It does not suffer from the usual problems when
you put a large number of files in one directory and is extremely
fast at lookups.
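
For the path scheme quoted above, the lookup side only takes a few
lines (a sketch - the /data layout follows the example, the rest is
assumed):

    use strict;

    # Map a numeric id to a nested path like /data/1/12/123456 so
    # no single directory grows huge (less of a worry on ReiserFS).
    sub id_to_path {
        my ($id) = @_;
        return sprintf '/data/%s/%s/%s',
            substr( $id, 0, 1 ), substr( $id, 0, 2 ), $id;
    }

    my $path = id_to_path('123456');      # /data/1/12/123456
    open my $fh, '<', $path or die "can't open $path: $!";
    my $record = do { local $/; <$fh> };  # slurp the record
    close $fh;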

  Les Mikesell
  [EMAIL PROTECTED]






Re: ApacheCon report

2000-10-31 Thread Les Mikesell


- Original Message - 
From: "Perrin Harkins" [EMAIL PROTECTED]
To: "Ask Bjoern Hansen" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, October 31, 2000 8:47 PM
Subject: Re: ApacheCon report


  Mr. Llima must do something I don't, because with real world
  requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
  "my" site. And that is serving 500byte stuff.
 
 I'm not following.  Everyone agrees that we don't want to have big
 mod_perl processes waiting on slow clients.  The question is whether
 tuning your socket buffer can provide the same benefits as a proxy server
 and the conclusion so far is that it can't because of the lingering close
 problem.  Are you saying something different?
 

A TCP close is supposed to require an acknowledgement from the
other end or a fairly long timeout.  I don't see how a socket buffer
alone can change this.   Likewise for any of the load balancer
front ends that work at the TCP connection level (but I'd like to
be proven wrong about this).
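
To make that concrete: even a send buffer big enough to swallow the
whole response only changes when the write returns, not what close
has to do (a sketch - host and sizes are arbitrary):

    use strict;
    use Socket qw(SOL_SOCKET SO_SNDBUF);
    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new( PeerAddr => 'www.example.com',
                                      PeerPort => 80 )
        or die "connect: $!";

    # A large send buffer lets print() return as soon as the kernel
    # has queued the bytes...
    setsockopt( $sock, SOL_SOCKET, SO_SNDBUF,
                pack( 'l', 256 * 1024 ) )
        or die "setsockopt: $!";
    print $sock 'x' x 100_000;

    # ...but the FIN/ACK exchange on close still depends on the
    # peer, so a slow client holds the connection through the
    # lingering close no matter how big the buffer is.
    close $sock;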

  Les Mikesell
[EMAIL PROTECTED]





Re: ApacheCon report

2000-10-30 Thread Les Mikesell


- Original Message -
From: "Perrin Harkins" [EMAIL PROTECTED]


 Here's what I recall Theo saying (relative to mod_perl):

 - Don't use a proxy server for doling out bytes to slow clients; just set
 the buffer on your sockets high enough to allow the server to dump the
 page and move on.  This has been discussed here before, notably in this
 post:


http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/2811200559.B17
[EMAIL PROTECTED]

 The conclusion was that you could end up paying dearly for the lingering
 close on the socket.

In practice I see a fairly consistent ratio of 10 front-end proxies
running per one back end on a site where most hits end up being
proxied, so the lingering is a real problem.

 Ultimately, I don't see any way around the fact that proxying from one
 server to another ties up two processes for that time rather than one, so
 if your bottleneck is the number of processes you can run before running
 out of RAM, this is not a good approach.

The point is you only tie up the back end for the time it takes to deliver
to the proxy, then it moves on to another request while the proxy
dribbles the content back to the client.   Plus, of course, it doesn't
have to be on the same machine.

 If your bottleneck is CPU or
 disk access, then it might be useful.  I guess that means this is not so
 hot for the folks who are mostly bottlenecked by an RDBMS, but might be
 good for XML'ers running CPU hungry transformations.  (Yes, I saw Matt's
 talk on AxKit's cache...)

Spreading requests over multiple backends is the fix for this.  There is
some gain in efficiency if you dedicate certain backend servers to
certain tasks since you will then tend to have the right things in the
cache buffers.

  Les Mikesell
 [EMAIL PROTECTED]




Re: Connection Pooling / TP Monitor

2000-10-28 Thread Les Mikesell


- Original Message -
From: "Matt Sergeant" [EMAIL PROTECTED]
 
  To redirect incoming url's that require database work to mod_perl
'heavy'
  servers? Just like a smarter and more dynamic mod_rewrite? Yes?

 Yes basically, except it's not a redirect. mod_backhand can use
 keep-alives to ensure that it never has to create a new connection
 to the heavy
 backend servers, unlike mod_rewrite or mod_proxy. And it can do it in a
 smart way so that remote connections don't use keepalives (because they
 are evil for mod_perl servers - see the mod_perl guide), but backhand
 connections do. Very very cool technology.

Is there any way to tie proxy requests mapped by mod_rewrite to
a balanced set of servers through mod_backhand (or anything
similar)?   Also, can mod_backhand (or any alternative) work
with non-Apache back end servers?   I'm really looking for a way
to let mod_rewrite do the first cut at deciding where (or whether)
to send a request, but then be able to send to a load-balanced,
fail-over set, preferably without having to interpose another
physical proxy.
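
One way to get partway there with stock mod_rewrite is an external
program map, with a small Perl script picking the backend (a
sketch - the backend list and the httpd.conf lines in the comments
are assumptions):

    #!/usr/bin/perl
    # Sketch of a mod_rewrite external map doing crude round-robin.
    # Hooked in with something like (illustrative only):
    #   RewriteMap  backend  prg:/usr/local/apache/bin/pickhost.pl
    #   RewriteRule ^/app/(.*)  http://${backend:any}/app/$1  [P]
    use strict;
    $| = 1;    # unbuffered - mod_rewrite talks to us over a pipe

    my @backends = ( '10.0.0.1:8001', '10.0.0.2:8001' );  # assumed
    my $i = 0;

    while (<STDIN>) {   # one lookup key per line from mod_rewrite
        print $backends[ $i++ % @backends ], "\n";
        # real failover would need a health check before answering
    }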

Les Mikesell
  [EMAIL PROTECTED]





Re: sending Apache::ASP output to a variable?

2000-10-26 Thread Les Mikesell


- Original Message -
From: "Jeff Ng" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, October 26, 2000 12:07 PM
Subject: sending Apache::ASP output to a variable?


 I would like to use Apache::ASP to parse pages in an existing mod_perl
 environment.  Ideally, I could set headers with mod_perl, then use
 Apache::ASP to parse templates which I can arbitrarily combine.  It seems
 that using Apache::ASP forces me to do most of my coding in the perlscript
 whereas I would prefer to minimize this for the sake of not interspersing
 too much code within the HTML.

 As it stands, it appears that the output of Apache::ASP goes directly to
 stdout.  Is there a way to use Apache::ASP as part of a normal mod_perl
 module, then capture the output to a variable?

One thing that may not be obvious is that if you use mod_include in
Apache along with mod_perl and put something like:
   <!--#include virtual="/cgi-bin/perlprog.pl$PATH_INFO?$QUERY_STRING" -->
in the *.shtml file, Apache will run it efficiently as a subrequest in
the same process (assuming Apache is configured to run that URL under
mod_perl) and substitute the output in the page.  It isn't quite as
flexible as being able to reparse the output in a program, but it does
let people who are likely to break the perl programs use them in their
html pages.
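
The same subrequest can also be run from mod_perl code, though the
output still goes to the client rather than into a variable (a sketch
against the mod_perl 1.x API; the URI is a placeholder):

    use strict;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        $r->send_http_header('text/html');
        $r->print("<html><body>\n");
        my $subr = $r->lookup_uri('/cgi-bin/perlprog.pl');
        $subr->run;    # runs in-process, writes straight to client
        $r->print("</body></html>\n");
        return OK;
    }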

   Les Mikesell
[EMAIL PROTECTED]





Re: DBI/PostgreSQL/MySQL mod_perl intermittent problem

2000-10-16 Thread Les Mikesell


- Original Message -
From: "Rajit Singh" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, October 16, 2000 8:58 AM
Subject: DBI/PostgreSQL/MySQL  mod_perl intermittent problem


 I should probably note that I've used Apache::DBI on and off to see if
 it makes any difference.

Did it?  One possibility, especially with Apache::DBI, is that you are
exceeding the maximum number of connections the database is configured
to allow.  With MySQL you can use 'mysqladmin status' or 'mysqladmin
processlist' to see the number of backends and what they are doing.
With PostgreSQL you may have to use ps and count the postgres
processes.
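
From Perl the same check looks roughly like this (DSN and credentials
are placeholders):

    use strict;
    use DBI;

    # Substitute a real DSN, user, and password.
    my $dbh = DBI->connect( 'dbi:mysql:mysql', 'user', 'password',
                            { RaiseError => 1 } );

    # Compare live threads against the configured limit.
    my $threads = @{ $dbh->selectall_arrayref('SHOW PROCESSLIST') };
    my ( undef, $max ) = $dbh->selectrow_array(
        "SHOW VARIABLES LIKE 'max_connections'" );
    print "$threads of $max connections in use\n";
    $dbh->disconnect;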

  Les Mikesell
[EMAIL PROTECTED]





Re: Zope functionality under mod_perl

2000-09-30 Thread Les Mikesell


- Original Message - 
From: "Philip Molter" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, September 28, 2000 5:29 PM
Subject: Zope functionality under mod_perl

 I've looked at AxKit, and I'm not quite sure if it's exactly what
 I'm looking for, especially since the development team I'm working
 for does not have much XML/XSL experience and I'd like to keep it
 as perl/HTML oriented as possible.  I've also looked at several of
 the templating tools, but they don't look like they provide the
 object-oriented aspect I'm looking for (or do they; anyone have
 experience down that path)?

Have you looked at Embperl?  The latest version allows you to map
the filesystem/URL path to a hierarchy of objects so you can
create a top-level default and only override the parts you
want to change as you go down to different sections - without
changes in the page itself.

  Les Mikesell
   [EMAIL PROTECTED]




Re: One httpd.conf for both apache heavy and apache-light [IfModule]

2000-09-30 Thread Les Mikesell


- Original Message -
From: "Robin Berjon" [EMAIL PROTECTED]
To: "martin langhoff" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, September 30, 2000 2:53 PM
Subject: Re: One httpd.conf for both apache heavy and apache-light
[IfModule]


 It is indeed a bit of a configuration nightmare when vhosts add up (even
 though it's well worth the trouble). What I do is use both IfModule to
 merge both confs and mod_macro to factor out the repetitive bits. It works
 quite well and at any rate the win of splitting into two httpds outweighs
 the configuration overhead, but I'm still not completely happy. Half of me
 is thinking about using Include to try and break up the conf file into
 smaller bits and the other half wants to write something that would help
 automate the configuration completely. It's possible on the mod_perl side
 if you write your conf in Perl, but it's more troublesome on the plain
 Apache side. If anyone has a silver bullet for this (well, I'll settle for
 a good solution that makes my life easier ;) I'd gladly hear it.

The problem with conditionals and includes is that the light/heavy
httpds tend to have almost nothing in common except the DocumentRoot,
and they may not even be on the same machines.  Some sort of template
mechanism might work, with postprocessing to spit out the front and
back end config files, but then you have yet another syntax to learn.
I usually just open both files in different windows and cut and paste
to make any common changes.
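
A small generator script is one way to keep the common bits in one
place (a sketch - the settings and file names are all assumptions):

    #!/usr/bin/perl
    # Spit out front (light) and back (heavy) configs from one set
    # of shared values.  Names and paths invented for illustration.
    use strict;

    my %common = (
        ServerName   => 'www.example.com',
        DocumentRoot => '/var/www/htdocs',
    );

    my %variant = (
        'httpd-light.conf' => "Port 80\n",
        'httpd-heavy.conf' => "Port 8001\nPerlModule Apache::DBI\n",
    );

    for my $file ( keys %variant ) {
        open my $fh, '>', $file or die "can't write $file: $!";
        print {$fh} "$_ $common{$_}\n" for sort keys %common;
        print {$fh} $variant{$file};
        close $fh;
    }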

   Les Mikesell
 [EMAIL PROTECTED]