Re: Restarting named service

2002-04-19 Thread Bruno Connelly

  Abd How can I restart the named service via mod_perl.
  Abd The script will be activated via a web page.

  Abd My apache is configured to use User: apache, Group: apache

  Abd Is there any other way except using the User root directive in my
  Abd httpd.conf file?

Assuming you're running a somewhat modern version of BIND and you make
the ndc domain socket read/writable via the user/group Apache is
running as, you should be able to restart the daemon without root
privs.
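
For what it's worth, a minimal Apache::Registry sketch of that approach (the
ndc path and the output handling are assumptions; adjust for your BIND
install):

  #!/usr/bin/perl -w
  # restart-named.cgi -- run under Apache::Registry.  Assumes the apache
  # user/group has been given write access to ndc's control socket.
  use strict;

  print "Content-type: text/plain\n\n";

  # Ask the BIND 8 name daemon controller to restart named; no root needed
  # as long as the socket permissions allow it.
  my $out = `/usr/sbin/ndc restart 2>&1`;
  if ($? == 0) {
      print "named restarted:\n$out";
  }
  else {
      print "ndc failed (exit ", $? >> 8, "):\n$out";
  }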

That said, you should still be wary of doing something like that.

b.
--
/*  Bruno Connelly, [EMAIL PROTECTED]  */




RE: Throttling, once again

2002-04-19 Thread Jeremy Rusnak

Hi,

I *HIGHLY* recommend mod_throttle for Apache.  It is very
configurable.  You can get the software at
http://www.snert.com/Software/mod_throttle/index.shtml .

The best thing about it is the ability to throttle based
on bandwidth and client IP.  We had problems with robots
as well as malicious end users who would flood our
server with requests.

mod_throttle allows you to set up rules to prevent one
IP address from making more than x requests for the
same document in y time period.  Our mod_perl servers,
for example, track the last 50 client IPs.  If one of
those clients goes above 50 requests, it is blocked
out.  The last client that requests a document is put
at the top of the list, so even very active legit users
tend to fall off the bottom, but things like robots
stay blocked.
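
The same per-IP rule can be sketched in pure mod_perl as an access handler;
this is only an illustration of the idea described above (the window size and
hit limit are made-up numbers, and the counts are per child), not mod_throttle
itself:

  package My::IPThrottle;
  # Illustration only: remember the last 50 client IPs seen by this child and
  # refuse any IP that racks up too many requests while it stays in the window.
  use strict;
  use Apache::Constants qw(OK FORBIDDEN);

  use constant WINDOW_SIZE => 50;
  use constant MAX_HITS    => 30;   # must be below WINDOW_SIZE to ever trigger

  my @window;   # most recent client IPs, newest first
  my %count;    # requests per IP currently in the window

  sub handler {
      my $r  = shift;
      my $ip = $r->connection->remote_ip;

      unshift @window, $ip;
      $count{$ip}++;

      # Age out the oldest entry, so busy but legitimate users fall off the
      # bottom while a robot hammering the server stays at the top.
      if (@window > WINDOW_SIZE) {
          my $old = pop @window;
          delete $count{$old} if --$count{$old} <= 0;
      }

      return $count{$ip} > MAX_HITS ? FORBIDDEN : OK;
  }
  1;

Wired in with something like PerlAccessHandler My::IPThrottle for the paths
you care about.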

I highly recommend you look into mod_throttle.  We had written some
custom functions to block this kind of thing, but the Apache module
makes it so much nicer.

Jeremy

-Original Message-
From: Bill Moseley [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 18, 2002 10:56 PM
To: [EMAIL PROTECTED]
Subject: Throttling, once again


Hi,

Wasn't there just a thread on throttling a few weeks ago?

I had a machine hit hard yesterday with a spider that ignored robots.txt.  

Load average was over 90 on a dual CPU Enterprise 3500 running Solaris 2.6.
 It's a mod_perl server, but has a few CGI scripts that it handles, and the
spider was hitting one of the CGI scripts over and over.  They were valid
requests, but coming in faster than they were going out.

Under normal usage the CGI scripts are only accessed a few times a day, so
it's not much of a problem to have them served by mod_perl.  And under normal
peak loads RAM is not a problem.  

The machine also has bandwidth limitation (packet shaper is used to share
the bandwidth).  That combined with the spider didn't help things.  Luckily
there's 4GB so even at a load average of 90 it wasn't really swapping much.
 (Well not when I caught it, anyway).  This spider was using the same IP
for all requests.

Anyway, I remember Randal's Stonehenge::Throttle discussed not too long
ago.  That seems to address this kind of problem.  Is there anything else
to look into?  Since the front-end is mod_perl, it means I can use a mod_perl
throttling solution, too, which is cool.

I realize there are some fundamental hardware issues to solve, but if I can
just keep the spiders from flooding the machine then the machine is getting
by ok.

Also, does anyone have suggestions for testing once throttling is in place?
 I don't want to start cutting off the good customers, but I do want to get
an idea of how it acts under load.  ab to the rescue, I suppose.

Thanks much,


-- 
Bill Moseley
mailto:[EMAIL PROTECTED]




Re: Problem with Perl sections in httpd.conf, mod_perl 1.26

2002-04-19 Thread PinkFreud

Here's a bit more information:

Given two directives:
$VirtualHost{$host}->{Alias} = [ '/perl/', "$vhostdir/$dir/perl/" ];
$VirtualHost{$host}->{Alias} = $vhost{config}->{Alias};

The first works.  The second does not.  According to
Apache::PerlSections->dump, %VirtualHost is *exactly* the same when
using both directives - yet it seems the server ignores the Alias
directive when it's assigned from $vhost{config} (either that, or mod_perl
fails to pass it to the server).
Also, if I set a variable within httpd.conf to mimic $vhost{config} and
then assign that to $VirtualHost{$host}, it works without a problem.
The issue is definitely with the variable being read in from an
external file.

Strange, no?


On Fri, Apr 19, 2002 at 01:31:45AM -0400, PinkFreud babbled thus:
 Date: 19 Apr 2002 01:31:45 -0400
 Date: Fri, 19 Apr 2002 01:31:45 -0400
 From: PinkFreud [EMAIL PROTECTED]
 To: Salvador Ortiz Garcia [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: Problem with Perl sections in httpd.conf, mod_perl 1.26
 Mail-Followup-To: Salvador Ortiz Garcia [EMAIL PROTECTED],
   [EMAIL PROTECTED]
 User-Agent: Mutt/1.3.25i
 
 On Thu, Apr 18, 2002 at 11:15:15PM -0500, Salvador Ortiz Garcia babbled thus:
  Subject: Re: Problem with Perl sections in httpd.conf, mod_perl 1.26
  From: Salvador Ortiz Garcia [EMAIL PROTECTED]
  To: PinkFreud [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  X-Mailer: Ximian Evolution 1.0.3 
  Date: 18 Apr 2002 23:15:15 -0500
  
  On Mon, 2002-04-15 at 23:17, PinkFreud wrote:
   I have a rather odd problem, one which I can only assume is a bug
   somewhere, due to how bizarre it is.
   
   I am attempting to generate virtual host configs via mod_perl, using
   Perl sections in httpd.conf.  Not all hosts will be using a /perl
   Alias, though, so I'm reading in an external config, which looks like
   the following:
   
  [ Deleted ]
  
  Please try changing your Alias array ref to a simple scalar:
  
 ...
   'Alias' => '/perl/ /home/vhosts/linuxhelp.mirkwood.net/perl/',
 ...
  
  That should work.
 
 Nope.  Still can't find it.
 
 What gets me is the syntax I use for Alias works just fine when the
 code is in httpd.conf.  It only fails to work when I read it in via a
 require'd file.  This same behavior occurs when I use your syntax in
 the require'd file as well.
 
 /me scratches his head, perplexed.
 
  
  And yes, I think it's an old bug in perl_handle_command_av.
  
  Salvador Ortiz.
  

-- 

Mike Edwards

Brainbench certified Master Linux Administrator
http://www.brainbench.com/transcript.jsp?pid=158188
---
Unsolicited advertisments to this address are not welcome.



Re: Throttling, once again

2002-04-19 Thread Marc Slagle

When this happened to our clients' servers we ended up trying some of the
mod_perl based solutions.  We tried some of the modules that used shared
memory, but the traffic on our site quickly filled our shared memory and
made the module unusable.  After that we tried blocking the agents
altogether, and there is example code in the Eagle book (Apache::BlockAgent)
that worked pretty well.

You might be able to place some of that code in your CGI, denying the search
engines' agents/IPs access to it while allowing real users in.  That
way the search engines can still get static pages.
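
A rough sketch of that kind of in-script check (the agent patterns and
blocked IPs are placeholders, and this is far cruder than Apache::BlockAgent):

  # At the top of the CGI/Registry script: turn away known-bad agents and
  # IPs before doing any expensive work.  The patterns are just examples.
  use strict;

  my %blocked_ip = map { $_ => 1 } qw(10.0.0.1 192.168.1.50);
  my $bad_agents = qr/libwww-perl|EmailSiphon|wget/i;

  my $agent = $ENV{HTTP_USER_AGENT} || '';
  my $ip    = $ENV{REMOTE_ADDR}     || '';

  if ($blocked_ip{$ip} or $agent =~ $bad_agents) {
      print "Status: 403 Forbidden\nContent-type: text/plain\n\n";
      print "Access denied.\n";
      exit;
  }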

We never tried mod_throttle, it might be the best solution.  Also, one thing
to keep in mind is that some search engines will come from multiple IP
addresses/user-agents at once, making them more difficult to stop.






Re: Throttling, once again

2002-04-19 Thread kyle dawkins

Guys

We also have a problem with evil clients. It's not always spiders... in fact 
more often than not it's some smart-ass with a customised perl script 
designed to screen-scrape all our data (usually to get email addresses for 
spam purposes).

Our solution, which works pretty well, is to have a LogHandler that checks the 
IP address of an incoming request and stores some information in the DB about 
that client; when it was last seen, how many requests it's made in the past n 
seconds, etc.  It means a DB hit on every request but it's pretty light, all 
things considered.
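
A bare-bones sketch of that log-phase handler (the DSN, table and column
names are invented for the example; a real one would reuse the connection via
Apache::DBI):

  package My::HitLogger;
  # Runs in the log phase, after the response has gone out: record one row
  # per request so an external job can spot IPs that are hammering the site.
  use strict;
  use Apache::Constants qw(OK);
  use DBI;

  sub handler {
      my $r  = shift;
      my $ip = $r->connection->remote_ip;

      my $dbh = DBI->connect('dbi:mysql:hits', 'user', 'pass',
                             { RaiseError => 0, PrintError => 1 });
      $dbh->do('INSERT INTO hits (ip, uri, stamp) VALUES (?, ?, NOW())',
               undef, $ip, $r->uri) if $dbh;
      return OK;
  }
  1;

Hooked up with PerlLogHandler My::HitLogger; the external sweeper then just
has to GROUP BY ip over the last n seconds.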

We then have an external process that wakes up every minute or so and checks 
the DB for badly-behaved clients.  If it finds such clients, we get email and 
the IP is written into a file that is read by mod_rewrite, which sends bad 
clients to, well, wherever... http://www.microsoft.com is a good one :-)

It works great.  Of course, mod_throttle sounds pretty cool and maybe I'll 
test it out on our servers.  There are definitely more ways to do this...

Which reminds me, you HAVE to make sure that your apache children are 
size-limited and you have a MaxClients setting where MaxClients * SizeLimit <
Free Memory.  If you don't, and you get slammed by one of these wankers, your 
server will swap and then you'll lose all the benefits of shared memory that 
apache and mod_perl offer us.  Check out the thread that was all over the 
list about a month ago for more information.  Basically, avoid swapping at 
ALL costs.
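
One common way to enforce that per-child cap is Apache::SizeLimit; here is a
sketch of the mod_perl 1.x style of using it from startup.pl (the numbers are
placeholders -- size the limit so MaxClients * limit stays under your free
RAM):

  # startup.pl
  use Apache::SizeLimit;
  # Kill off children that grow beyond this size (value is in KB).
  $Apache::SizeLimit::MAX_PROCESS_SIZE       = 20000;
  # Only check every few requests to keep the overhead down.
  $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
  1;

with something like PerlFixupHandler Apache::SizeLimit in httpd.conf to hook
it in.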


Kyle Dawkins
Central Park Software

On Friday 19 April 2002 08:55, Marc Slagle wrote:
 We never tried mod_throttle, it might be the best solution.  Also, one
 thing to keep in mind is that some search engines will come from multiple
 IP addresses/user-agents at once, making them more difficult to stop.




Re: Apache::DProf seg faulting

2002-04-19 Thread Perrin Harkins

Paul Lindner wrote:
But while I have your attention, why are you using Apache::DB at all?  The
Apache::DProf docs just have:

  PerlModule Apache::DProf
 
 
 Legacy knowledge :)
 
 I think it may have been required in the past, or perhaps I had some
 problems with my INC paths long-long ago..  And well, it just kinda
 stuck.

I used to do it because if I didn't, DProf would not know anything about 
modules I pull in from startup.pl (since the debugger was not 
initialized until after they were loaded).  This may not be necessary 
anymore.

- Perrin




Re: Throttling, once again

2002-04-19 Thread Peter Bi

If merely the last access time and number of requests within a given time
interval are needed, I think the fastest way is to record them in a cookie,
and check them via an access control handler. Unfortunately, access control is
called before the content handler, so the idea can't be used for CPU or
bandwidth throttles. In the latter cases, one has to call DB/file/memory for
history.
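
A rough sketch of that cookie check as a PerlAccessHandler (the cookie name,
limits and plain-text encoding are all assumptions; and as noted later in the
thread, clients that simply ignore cookies defeat it):

  package My::CookieThrottle;
  # Access-phase handler: keep a first-hit time and hit count in a cookie and
  # refuse clients that exceed a per-window limit.
  use strict;
  use Apache::Constants qw(OK FORBIDDEN);

  use constant MAX_HITS => 60;   # requests allowed per window
  use constant WINDOW   => 60;   # window length in seconds

  sub handler {
      my $r = shift;

      my ($cookie) = ($r->header_in('Cookie') || '') =~ /throttle=([^;\s]+)/;
      my ($start, $hits) = $cookie ? split(/:/, $cookie) : (time, 0);

      ($start, $hits) = (time, 0) if time - $start > WINDOW;
      $hits++;

      # Send the updated cookie back with whichever response we return.
      my $over  = $hits > MAX_HITS;
      my $table = $over ? $r->err_headers_out : $r->headers_out;
      $table->add('Set-Cookie' => "throttle=$start:$hits; path=/");

      return $over ? FORBIDDEN : OK;
  }
  1;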

Peter Bi








RE: Throttling, once again

2002-04-19 Thread Christian Gilmore

Bill,

If you're looking to throttle access to a particular URI (or set of URIs),
give mod_throttle_access a look. It is available via the Apache Module
Registry and at http://www.fremen.org/apache/mod_throttle_access.html .

Regards,
Christian

-
Christian Gilmore
Technology Leader
GeT WW Global Applications Development
IBM Software Group






Re: framesets/AuthCookie question

2002-04-19 Thread Michael Schout

On Wed, 17 Apr 2002, Peter Bi wrote:

 Fran:

 You may need to 1) add a few lines of code in AuthCookie to make your error
 code aware to other methods,  and 2) have a dynamic login page that can
 interpret the code. Alternatively,  you may try AccessCookie I posted. :-)

The CVS version of AuthCookie has a custom_errors() hook in it that does this
sort of thing.

However, I don't think it will work for his problem because his javascript code
seems to launch a NEW REQUEST, thus losing anything that was stored away in
$r->subprocess_env().  So the only viable option is to pass the error codes in
the URL (as part of the query string), I think.
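
Hypothetically, something along these lines -- redirect back to the login page
with the reason in the query string, since nothing stashed in the current
request survives the new one (the reason code and login URL are made up):

  package My::LoginRedirect;
  use strict;
  use Apache::Constants qw(REDIRECT);

  sub bounce_to_login {
      my ($r, $code) = @_;   # $code: hypothetical reason the login page understands
      $r->headers_out->set(Location => "/login.html?error=$code");
      return REDIRECT;
  }
  1;

The dynamic login page can then pull the code back out of $r->args and show
the right message.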

Mike




@DB::args not working on 5.6.1 and 1.26

2002-04-19 Thread Rob Nagler

It seems that DB::args is empty on mod_perl 1.26 and perl 5.6.1.
This is stock Red Hat 7.2 (apache 1.3.22).  The code which references
DB::args works under plain perl 5.6.1.  It also appears that the failure only
occurs after the perl restarts.  The first time Apache loads mod_perl,
DB::args is being set correctly.

I assume that DB::args isn't empty running under PERLDB, but I
haven't tried this.  The use of DB::args is not for debugging, so I
can't use Apache::DB.
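
For anyone following along, this is the plain-Perl mechanism in question --
calling caller() from inside package DB makes perl fill @DB::args with the
arguments of the frame being examined (nothing mod_perl specific here):

  use strict;

  sub show_caller_args {
      my @info;
      {
          package DB;          # caller() inside package DB also fills @DB::args
          @info = caller(1);   # one level up: the call made to our caller
      }
      # @DB::args now holds the arguments our caller was invoked with, and
      # $info[3] is its fully qualified name.
      return "$info[3](" . join(', ', @DB::args) . ")";
  }

  sub greet { return show_caller_args() }

  print greet('hello', 'world'), "\n";   # prints: main::greet(hello, world)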

Anybody else seeing this?

Thanks,
Rob





Re: Solaris 8 lockups

2002-04-19 Thread Tom Servo

Thank you, and Marc as well, it looks like it was a combination of both
version options and having it compiled as a DSO.   We've upgraded
mod_perl and apache and no longer have mod_perl as a DSO, and except for
new error messages popping up (nothing serious, mostly just slightly
sloppy coding issues that will be easily fixed), it looks like it's
finally stable, at least for the last little while.

Thanks again for the help.


Brian Nilsen
[EMAIL PROTECTED]

On Thu, 18 Apr 2002, Jamie LeTual wrote:

 We had problems with mod_perl compiled as a DSO under Solaris 8. Try
 it with mod_perl compiled into apache.
 
 Peace,
 Jamie
 
 
 On Thu, Apr 18, 2002 at 10:39:31AM -0700, Tom Servo wrote:
  We've recently started trying to migrate a number of Solaris 7 machines to
  Solaris 8, and everything seemed fine for a while.
  
  We have each box running its own static, dynamic (mod_perl) and ssl
  servers, and everything runs fine for 3-7 hours after starting the server.
  Eventually, however, the mod_perl children just stop responding.   If you
   try to telnet into the port, the connection just hangs while trying to
  connect...we don't get a refused connection, and it doesn't let you
  connect to the point where you can issue an HTTP request.   It just seems
  to stop responding and get stuck.
  
  Has anyone else run into any of these problems?   So far we've had nothing
  like this happen with Solaris 7 and with Linux, but I can't even think
  where to start looking on Solaris 8.
  
  As a side note, this happened both on boxes that had been upgraded from 7
  to 8, as well as boxes that had fresh 8 installs.
  
  
  Brian Nilsen
  [EMAIL PROTECTED]
  
  
 
 -- 
 -[sent-from-the-office]
 |||
 | HBE Software, Inc. | Email: [EMAIL PROTECTED]   |
 | http://www.hbesoftware.com | AIM  : Reng8tak|
 | (514) 876-3542 x259| Web  : http://jamie.people.hbesoftware.com |
 ---
 perl -e 'for($c=0;$c=length unpack(B*,pack(H*,$ARGV[0]));$c+=7){$_=chr((ord 
packB*,0.substr(unpack(B*,pack(H*,$ARGV[0])),$c,7))+040);print;}' -- 
55569d4010674fa9222d200c22d29801441872e2d2
 
 
 
 
 
 
 
 
 
 




RE: Throttling, once again

2002-04-19 Thread Jeremy Rusnak

Hi,

I looked at the page you mentioned below.  It wasn't really
clear on the page, but what happens when the requests get above
the max allowed?  Are the remaining requests queued or are they
simply given some kind of error message?

There seem to be a number of different modules for this kind of
thing, but most of them seem to be fairly old.  We could use a
more current throttling module that combines what others have
come up with.

For example, the snert.com mod_throttle is nice because it does
it based on IP - but it does it site-wide in that mode.  This
mod_throttle_access seems nice because it can be set for an individual
URI...But that's a pain for sites like mine that have 50 or
more intensive scripts (by directory would be nice).  And still,
neither of these approaches uses cookies like some of the
others do, to make sure that legit proxies aren't blocked.

Jeremy

-Original Message-
From: Christian Gilmore [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 19, 2002 8:31 AM
To: 'Bill Moseley'; [EMAIL PROTECTED]
Subject: RE: Throttling, once again


Bill,

If you're looking to throttle access to a particular URI (or set of URIs),
give mod_throttle_access a look. It is available via the Apache Module
Registry and at http://www.fremen.org/apache/mod_throttle_access.html .

Regards,
Christian

-
Christian Gilmore
Technology Leader
GeT WW Global Applications Development
IBM Software Group






RE: Throttling, once again

2002-04-19 Thread Drew Wymore

I came across the very problem you're having.  I use mod_bandwidth; it's
actively maintained and allows you to monitor bandwidth usage by IP,
directory, or any number of other ways: http://www.cohprog.com/mod_bandwidth.html

Although it's not mod_perl related, I hope that this helps.
Drew




Re: PDF generation

2002-04-19 Thread David Wheeler

On Wed, 03 Apr 2002 16:01:24, Drew Taylor [EMAIL PROTECTED] wrote:

 I can highly recommend PDFLib. It's not quite free in that you have to buy
 a license if you make a product out of it, but it's still cheap. Matt
 Sergeant has recently added an OO interface over the PDFLib functions with
 PDFLib. http://search.cpan.org/search?dist=PDFLib

This looks pretty good to me. Can anyone suggest how I might programmatically
send a PDF to a printer once I've generated it in Perl/mod_perl?

Thanks,

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
http://david.wheeler.net/  Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]





Re: Problem with Perl sections in httpd.conf, mod_perl 1.26

2002-04-19 Thread Salvador Ortiz Garcia

On Fri, 2002-04-19 at 01:43, PinkFreud wrote:
 Here's a bit more information:
 
 Given two directives:
 $VirtualHost{$host}->{Alias} = [ '/perl/', "$vhostdir/$dir/perl/" ];
 $VirtualHost{$host}->{Alias} = $vhost{config}->{Alias};
 
 The first works.  The second does not.  According to
 Apache::PerlSections->dump, %VirtualHost is *exactly* the same when
 using both directives - yet it seems the server ignores the Alias
 directive when it's assigned from $vhost{config} (either that, or mod_perl
 fails to pass it to the server).
 Also, if I set a variable within httpd.conf to mimic $vhost{config} and
 then assign that to $VirtualHost{$host}, it works without a problem.
 The issue is definitely with the variable being read in from an
 external file.
 
 Strange, no?

Yes, weird.

I'm hunting any remaining bugs related to Perl Sections.

Can you please test the attached patch vs 1.26?  (Please forget about the
patch posted by Michel; it is mine and is in CVS now, but in this one I'm
trying a more radical approach, checking for the proper nesting of
directives.)

Then try to reproduce your problems under MOD_PERL_TRACE=ds (see the
DEBUGGING section in the mod_perl man page); that is, compile mod_perl
with PERL_TRACE=1 and run your Apache in single-process mode:

  # MOD_PERL_TRACE=ds path_to_your/httpd -X

And please post the generated log.

Regards.

Salvador Ortiz


diff -ru mod_perl-1.26.orig/src/modules/perl/perl_config.c mod_perl-1.26.msg/src/modules/perl/perl_config.c
--- mod_perl-1.26.orig/src/modules/perl/perl_config.c	Tue Jul 10 20:47:15 2001
+++ mod_perl-1.26.msg/src/modules/perl/perl_config.c	Thu Feb 21 01:43:10 2002
@@ -51,6 +51,7 @@
 #include "mod_perl.h"
 
 extern API_VAR_EXPORT module *top_module;
+IV mp_cmdparms = 0;
 
 #ifdef PERL_SECTIONS
 static int perl_sections_self_boot = 0;
@@ -1166,6 +1167,9 @@
 char *tmpkey; 
 I32 tmpklen; 
 SV *tmpval;
+const command_rec *orec = cmd->cmd;
+const char *old_end_token = cmd->end_token;
+cmd->end_token = (const char *)cmd->info;
 (void)hv_iterinit(hv); 
 while ((tmpval = hv_iternextsv(hv, &tmpkey, &tmpklen))) { 
 	char line[MAX_STRING_LEN]; 
@@ -1173,6 +1177,13 @@
 	if (SvMAGICAL(tmpval)) mg_get(tmpval); /* tied hash FETCH */
 	if(SvROK(tmpval)) {
 	if(SvTYPE(SvRV(tmpval)) == SVt_PVAV) {
+		module *tmod = top_module;
+		const command_rec *c; 
+		if(!(c = find_command_in_modules((const char *)tmpkey, &tmod))) {
+		fprintf(stderr, "command_rec for directive `%s' not found!\n", tmpkey);
+		continue;
+		}
+		cmd->cmd = c; /* for do_quote */
 		perl_handle_command_av((AV*)SvRV(tmpval), 
    0, tmpkey, cmd, cfg);
 		continue;
@@ -1195,8 +1206,12 @@
 	if(errmsg)
 	log_printf(cmd->server, "Perl: %s", errmsg);
 }
-/* Emulate the handling of end token for the section */ 
+cmd->cmd = orec;
+cmd->info = cmd->end_token;
+cmd->end_token = old_end_token;
+/* Emulate the handling of end token for the section  
 perl_set_config_vectors(cmd, cfg, &core_module);
+*/
 } 
 
 #ifdef WIN32
@@ -1225,13 +1240,21 @@
 pool *p = cmd->pool;
 char *arg; 
 const char *errmsg = NULL;
+const char *err = ap_check_cmd_context(cmd, GLOBAL_ONLY);
+if (err != NULL) {
+return err;
+}
+if (main_server->is_virtual) {
+	return "<VirtualHost> doesn't nest!";
+}
+
 dSECiter_start
 
 if(entries) {
 	SECiter_list(perl_virtualhost_section(cmd, dummy, tab));
 }
 
-arg = pstrdup(cmd->pool, getword_conf (cmd->pool, &key));
+arg = getword_conf (cmd->pool, &key);
 
 #if MODULE_MAGIC_NUMBER >= 19970912
 errmsg = init_virtual_host(p, arg, main_server, &s);
@@ -1256,9 +1279,9 @@
 perl_section_hash_walk(cmd, s->lookup_defaults, tab);
 
 cmd->server = main_server;
+TRACE_SECTION_END("VirtualHost");
 
 dSECiter_stop
-TRACE_SECTION_END("VirtualHost");
 return NULL;
 }
 
@@ -1281,6 +1304,11 @@
 #ifdef PERL_TRACE
 char *sname = SECTION_NAME("Location");
 #endif
+const char *err = ap_check_cmd_context(cmd,
+	   NOT_IN_DIR_LOC_FILE|NOT_IN_LIMIT);
+if (err != NULL) {
+return err;
+}
 
 dSECiter_start
 
@@ -1295,10 +1323,10 @@
 
 new_url_conf = create_per_dir_config (cmd->pool);
 
-cmd->path = pstrdup(cmd->pool, getword_conf (cmd->pool, &key));
+cmd->path = getword_conf (cmd->pool, &key);
 cmd->override = OR_ALL|ACCESS_CONF;
 
-if (cmd->info) { /* <LocationMatch> */
+if (cmd->cmd->cmd_data) { /* <LocationMatch> */
 	r = pregcomp(cmd->pool, cmd->path, REG_EXTENDED);
 }
 else if (!strcmp(cmd->path, "~")) {
@@ -1317,12 +1345,12 @@
 conf->r = r;
 
 add_per_url_conf (cmd->server, new_url_conf);
+TRACE_SECTION_END(sname);
 	
 dSECiter_stop
 
 cmd->path = old_path;
 cmd->override = old_overrides;
-TRACE_SECTION_END(sname);
 return NULL;
 }
 
@@ -1334,6 +1362,11 @@
 #ifdef PERL_TRACE
 char *sname = SECTION_NAME("Directory");
 #endif
+const char *err = ap_check_cmd_context(cmd,
+	   NOT_IN_DIR_LOC_FILE|NOT_IN_LIMIT);
+if (err != NULL) {
+ 

Re: Throttling, once again

2002-04-19 Thread Peter Bi

How about adding an MD5 watermark to the cookie?  Well, it is becoming
complicated ...
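
Something like this would do as the watermark (a sketch only; the shared
secret is an assumption, and anyone who learns it can forge the cookie, so it
merely keeps casual tampering out):

  use strict;
  use Digest::MD5 qw(md5_hex);

  my $secret = 'change-me';   # server-side secret, never sent to the client

  # Sign the throttle data before it goes out in the cookie ...
  sub sign {
      my $data = shift;
      return "$data:" . md5_hex("$data$secret");
  }

  # ... and verify it when the cookie comes back; returns the data or undef.
  sub verify {
      my ($data, $mac) = (shift) =~ /^(.*):([0-9a-f]{32})$/ or return;
      return md5_hex("$data$secret") eq $mac ? $data : undef;
  }

The signed value would simply replace the raw start-time/hit-count pair in
whatever Set-Cookie header the access handler sends.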

Peter Bi

- Original Message -
From: kyle dawkins [EMAIL PROTECTED]
To: Peter Bi [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, April 19, 2002 8:29 AM
Subject: Re: Throttling, once again


 Peter

 Storing the last access time, etc in a cookie won't work for a perl script
 that's abusing your site, or pretty much any spider, or even for anyone
 browsing without cookies, for that matter.

 The hit on the DB is so short and sweet, and happens after the response has
 been sent to the user, so they don't notice any delay; the apache child
 takes all of five hundredths of a second more to clean up.

 Kyle Dawkins
 Central Park Software







Re: PDF generation

2002-04-19 Thread Sam Tregar

On Fri, 19 Apr 2002, Andrew Ho wrote:

 DW> This looks pretty good to me. Can anyone suggest how I might
 DW> programmatically send a PDF to a printer once I've generated it in
 DW> Perl/mod_perl?

 Use either Ghostscript or Adobe Acrobat Reader to convert to Postscript,
 then print in your normal manner (if you usually use Ghostscript as a
 print filter anyway, you can just print directly using it). For Adobe
 Acrobat Reader, use the -toPostScript option.

Use Acrobat Reader if you can.  The font support is significantly better
in my experience, at least under Linux.
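
A quick sketch of that pipeline from Perl; the printer name and paths are
assumptions, and acroread is assumed to drop the .ps file next to the input
(check your version's behaviour):

  use strict;

  my $pdf = '/tmp/report.pdf';   # whatever the mod_perl code just generated

  # Convert to PostScript with Acrobat Reader's -toPostScript switch ...
  system('acroread', '-toPostScript', $pdf) == 0
      or die "acroread failed: $?";

  # ... then hand the .ps file to the spooler.
  (my $ps = $pdf) =~ s/\.pdf$/.ps/;
  system('lpr', '-P', 'office-laser', $ps) == 0
      or die "lpr failed: $?";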

-sam




Re: Problem with Perl sections in httpd.conf, mod_perl 1.26

2002-04-19 Thread PinkFreud

Log is attached.  I'm amused with this line:

handle_command (Alias /perl/ /home/vhosts/linuxhelp.mirkwood.net/perl/): OK

That looks right, but I *still* get a 404 error:

 404 Not Found

   Not Found

   The requested URL /perl/ was not found on this server.

   Apache/1.3.24 Server at linuxhelp.mirkwood.net Port 80

[Fri Apr 19 21:33:08 2002] [error] [client x.x.x.x] File does not exist: 
/home/vhosts/linuxhelp.mirkwood.net/htdocs/perl/
(note it's still trying to go to htdocs/perl/)

ls -d /home/vhosts/linuxhelp.mirkwood.net/perl/
drwxr-xr-x2 root root 4096 Apr 15 02:07
/home/vhosts/linuxhelp.mirkwood.net/perl//


Hope that helps.





-- 

Mike Edwards

Brainbench certified Master Linux Administrator
http://www.brainbench.com/transcript.jsp?pid=158188
---
Unsolicited advertisments to this address are not welcome.


loading perl module 'Apache'...loading perl module 'Apache::Constants::Exports'...ok
ok
init `PerlHandler' stack
perl_cmd_push_handlers: @PerlHandler, 'Apache::Status'
pushing `Apache::Status' into `PerlHandler' handlers
[Fri Apr 19 21:32:57 2002] [warn] module mod_php4.c is already added, skipping
[Fri Apr 19 21:32:57 2002] [warn] module mod_ssl.c is already added, skipping
loading perl module 'Apache'...ok
loading perl module 'Tie::IxHash'...not ok
Warn: Directive `vhost' not found in handle_command_av!
LocationMatch OK
perl_section: VirtualHost linuxhelp.mirkwood.net
perl_section: Location /perl/
init `PerlHandler' stack
perl_cmd_push_handlers: @PerlHandler, 'Apache::Registry'
pushing `Apache::Registry' into `PerlHandler' handlers
PerlHandler Apache::Registry (OK) Limit=no
Options ExecCGI (OK) Limit=no
SetHandler perl-script (OK) Limit=no
perl_section: /Location
Location OK
DocumentRoot /home/vhosts/linuxhelp.mirkwood.net/htdocs (OK) Limit=no
Group users (OK) Limit=no
ServerAdmin test@vhost (OK) Limit=no
handle_command (Alias /perl/ /home/vhosts/linuxhelp.mirkwood.net/perl/): OK
CustomLog /home/vhosts/linuxhelp.mirkwood.net/logs/linuxhelp.mirkwood.net.access_log 
combined (OK) Limit=no
User sauron (OK) Limit=no
ServerName linuxhelp.mirkwood.net (OK) Limit=no
ErrorLog /home/vhosts/linuxhelp.mirkwood.net/logs/linuxhelp.mirkwood.net.error_log 
(OK) Limit=no
handle_command (ScriptAlias /cgi-bin/ 
/home/vhosts/linuxhelp.mirkwood.net/cgi-bin/): OK
ServerAlias linuxhelp (OK) Limit=no
perl_section: /VirtualHost
perl_section: VirtualHost orodruin.rivendell.mirkwood.net
CustomLog /var/log/apache/orodruin.access_log combined (OK) Limit=no
ServerAlias orodruin (OK) Limit=no
ProxyPassReverse / http://orodruin.mirkwood.net:80/ (OK) Limit=no
ServerName 

Re: Apache2 HellowWorld question

2002-04-19 Thread Randy Kobes

On Fri, 19 Apr 2002, Lihn, Steve wrote:

 Hi,
 I just downloaded Randy's win32 build of Apache2.

 I do not understand in httpd.conf:

 PerlSwitches -Mblib=C:\Apache2
 PerlModule Apache2
 <Location /hello>
 PerlResponseHandler Apache::HelloWorld
 SetHandler modperl
 </Location>

 But in the directory, HelloWorld.pm is under
 C:/Apache2/blib/lib/Apache2/Apache

 I thought it should be under C:/Apache2/blib/lib/Apache???

 Thanks.

   Steve Lihn
   FIS Database Support, Merck & Co., Inc.
   Tel: (908) 423 - 4441

There are two things going on here - the
PerlSwitches -Mblib=C:\Apache2
is specifying to use the C:\Apache2\blib directory. The
PerlModule Apache2
is loading C:\Apache2\blib\lib\Apache2.pm, which
modifies @INC to include @INC directories with Apache2/
appended (eg, C:\Apache2\blib\lib\Apache2, in this case).
Thus, Apache::HelloWorld would be found in
   C:\Apache2\blib\lib\Apache2\Apache\HelloWorld.pm
Apache2.pm is useful if you want to install mod_perl-2
stuff in a Perl tree that already contains mod_perl-1
things under an Apache/ subdirectory.
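
In other words, Apache2.pm does something roughly like this to @INC (a
simplified sketch, not the actual source):

  package Apache2;
  # For every existing @INC entry that has a parallel Apache2/ subdirectory,
  # search that subdirectory too, so Apache2/Apache/HelloWorld.pm is found
  # when you say "use Apache::HelloWorld".
  use strict;

  my @mp2_dirs = grep { -d $_ } map { "$_/Apache2" } @INC;
  unshift @INC, @mp2_dirs;

  1;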

best regards,
randy kobes




Re: Problem using Perl Modules under mod_perl

2002-04-19 Thread Stas Bekman

Per Einar Ellefsen wrote:
 At 21:12 19.04.2002, Sören Stuckenbrock wrote:
 
 Hi there,

 mod_perl-newbie needs help!
 I have a nasty problem using Perl Modules under mod_perl.
 I've developed a CGI application that retrieves its configuration values
 from a module that gets included (with use) in every script of the
 application.
 So far no problem.

 But when I try to set up more than one instance of the application on the
 same (virtual) server under different paths, with different configuration
 values in the config-module (which I must do), mod_perl mixes up the
 different configuration-modules. Every httpd-child only loads the
 configuration module of the application instance that it first served a
 request for.
 If the same process serves a request for a different instance later, it
 always uses the config-module of the instance it first served a request for,
 because it thinks it has already loaded the right module.
 
 
 This is because once you use a module (say My::Config), it won't be 
 reloaded as that package name has already been loaded. If your 
 configuration files are different, and in different locations, you 
 should be giving them different names: say My::App1::Config, 
 My::App2::Config, etc. Then you won't have that problem.
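
A tiny sketch of that renaming (package names, paths and values are made up;
these would normally be two separate files, shown together here for brevity):

  # My/App1/Config.pm
  package My::App1::Config;
  our %CFG = (datadir => '/home/app1/data', title => 'Instance One');
  1;

  # My/App2/Config.pm
  package My::App2::Config;
  our %CFG = (datadir => '/home/app2/data', title => 'Instance Two');
  1;

  # Each instance's scripts then load only their own package:
  #   use My::App1::Config;
  #   print $My::App1::Config::CFG{title};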

For more explanations see this item:
http://perl.apache.org/preview/modperl-docs/dst_html/docs/1.0/guide/porting.html#Reloading_Modules_and_Required_Files

Follow Per Einar's suggestion too, but the problem you are having is 
actually explained in the above item. The one below talks about 
different problems, which you want to read as well.

 See 
 
http://perl.apache.org/preview/modperl-docs/dst_html/docs/1.0/guide/porting.html#Writing_Configuration_Files
 
 for more information.
 
 
 



-- 


__
Stas BekmanJAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com




Re: PDF generation

2002-04-19 Thread Stas Bekman

Andrew Ho wrote:
 Hello,
 
  DW> This looks pretty good to me. Can anyone suggest how I might
  DW> programmatically send a PDF to a printer once I've generated it in
  DW> Perl/mod_perl?
 
 Use either Ghostscript or Adobe Acrobat Reader to convert to Postscript,
 then print in your normal manner (if you usually use Ghostscript as a
 print filter anyway, you can just print directly using it). For Adobe
 Acrobat Reader, use the -toPostScript option.

If your end goal is PS, it's better to generate PS in the first place. From my 
experience, ps -> pdf -> ps makes the final PS a much bigger file (5-10 times 
bigger). I use html2ps for generating PS files (used for generating the 
mod_perl guide's PDF).

__
Stas BekmanJAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com