RE: templating system opinions (axkit?)

2003-07-23 Thread Sam Tregar
On Wed, 23 Jul 2003, Hauck, William B. wrote:

 What I've done is just use completely external HTML files with
 HTML-compliant comments indicating the data field (example: <!--
 APPNAME_USER_FIRST_NAME -->).  My application just reads in the HTML
 on startup and does a series of substitution statements over the file
 as necessary to replace the comments with the actual data.  Thus,
 each type of page has one base HTML file (or HTML file pieces) that are
 merged with each other and data as necessary, allowing all logic to
 be kept in the program.

Change that to:

  <!-- TMPL_VAR APPNAME_USER_FIRST_NAME -->

and you can use HTML::Template!  You'll also get loops, includes,
and simple conditionals should you ever need them.
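For anyone following along, here's a hedged sketch of what that buys you, in the same comment-style syntax (the ORDERS loop and its fields are invented for illustration):

```html
Dear <!-- TMPL_VAR APPNAME_USER_FIRST_NAME -->,

<!-- TMPL_IF ORDERS -->
<ul>
  <!-- TMPL_LOOP ORDERS -->
  <li>Order #<!-- TMPL_VAR ORDER_ID --></li>
  <!-- /TMPL_LOOP -->
</ul>
<!-- TMPL_ELSE -->
You have no orders yet.
<!-- /TMPL_IF -->
```

The script side fills ORDERS with an array of hashrefs via $template->param().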

-sam



Re: templating system opinions

2003-07-21 Thread Sam Tregar
On Sun, 20 Jul 2003, Dave Rolsky wrote:

 OTOH, if you were to try to replicate some of Mason's more powerful
 features with H::T, like autohandlers, inheritance, etc., then I'm
 sure that'd bring H::T's speed down to Mason's level ;)

I wouldn't be too sure.  I implemented a lot of that stuff to add
HTML::Template support to Bricolage and it's still much faster than
Mason.

 In other words, you generally get what you pay for.  The most powerful and
 flexible systems are generally slower and more RAM-hungry.  One exception
 to this might be Embperl, which has large chunks written in C.  In that
 case, the cost is paid for in development time.

HTML::Template::JIT also trades development time (mine) for run-time
speed.  Right now it doesn't support all of HTML::Template's
functionality, but it comes pretty close.  The upside is that it's
between four and eight times faster than HTML::Template, which makes
it the fastest templating system by a large margin.
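A minimal usage sketch, for the curious (the template file name and jit_path here are made up; check the module docs for the full option list):

```perl
use HTML::Template::JIT;

# The first call compiles concert.tmpl to native code via Inline::C;
# later calls reuse the compiled version stored under jit_path.
my $template = HTML::Template::JIT->new(
    filename => 'concert.tmpl',
    jit_path => '/tmp/jit_cache',
);
$template->param(long_date => 'Sunday, July 20th, 2003');
print $template->output;
```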

-sam




Re: templating system opinions

2003-07-21 Thread Sam Tregar
On Mon, 21 Jul 2003, Dave Rolsky wrote:

 On Mon, 21 Jul 2003, Sam Tregar wrote:
 
  I wouldn't be too sure.  I implemented a lot of that stuff to add
  HTML::Template support to Bricolage and it's still much faster than
  Mason.
 
 A lot as in _all_ of it, or a lot as in autohandlers and dhandlers?

A lot as in everything that was needed to get HTML::Template to fill
the role of Mason in Bricolage's publish process.  I'd certainly be a
fool to claim I'd implemented all of Mason!  I doubt I could even list
all the stuff that Mason does.

 In other words, I don't think one could do all of the same stuff, or
 even most, and achieve a huge speed increase.  There would have to
 be something sacrificed.

My impression is that Mason doesn't get much advantage from clients
that only use part of the Mason system.  I imagine that one of the
reasons that the Mason workalike I built for Bricolage is faster than
Mason is that it only implements the functionality actually needed by
Bricolage.  Following this line of thinking it might be possible to
modify Mason to only use/load the slower/bigger pieces when they are
actually needed.  Of course, I'm no authority on why Mason is slow or
how it could be fixed.

I have plans to go a similar route with HTML::Template in the future.
I'd like to build a system that dynamically assembles itself based on
the usage pattern of the program.  That way if the programmer sticks
to the basics they get a smaller, faster system.  If they need the big
guns then the more complete systems can be loaded at some moderate
penalty.

-sam



ANNOUNCEMENT: HTML::Template 2.6

2002-08-29 Thread Sam Tregar

CHANGES

   - New Feature: HTML::Template will combine the HTML_TEMPLATE_ROOT
  environment variable and the path option if both are
  available. (Jesse Erlbaum)

   - New Feature: __counter__ variable now available when
  loop_context_vars is set (Simran Gambhir)

   - New Feature: The default attribute allows you to specify
  defaults for tmpl_var tags.

   - Bug Fix: fixed parser to reject tmpl_vars with no names.
  (crazyinsomniac)

   - Doc Fix: fixed documentation to correctly describe the
  interaction of case_sensitive and loop_context_vars.
  (Peter Claus Lamprecht)

   - Doc Fix: updated mailing-list information to reflect move from
  vm.com to sourceforge.net

DESCRIPTION

This module attempts to make using HTML templates simple and natural. It
extends standard HTML with a few new HTML-esque tags - <TMPL_VAR>,
<TMPL_LOOP>, <TMPL_INCLUDE>, <TMPL_IF>, <TMPL_ELSE> and <TMPL_UNLESS>.
The file written with HTML and these new tags is called a template. It
is usually saved separate from your script - possibly even created by
someone else! Using this module you fill in the values for the
variables, loops and branches declared in the template. This allows you
to separate design - the HTML - from the data, which you generate in the
Perl script.
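A minimal script, assuming a template file test.tmpl that contains a <TMPL_VAR NAME=PARAM1> tag:

```perl
use HTML::Template;

# Load the template and fill in the declared variable.
my $template = HTML::Template->new(filename => 'test.tmpl');
$template->param(PARAM1 => 'some value');

# Print the standard CGI header followed by the merged output.
print "Content-Type: text/html\n\n";
print $template->output;
```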

This module is licensed under the GPL. See the LICENSE section below for
more details.

TUTORIAL

If you're new to HTML::Template, I suggest you start with the
introductory article available on the HTML::Template website:

   http://html-template.sourceforge.net

AVAILABILITY

This module is available on SourceForge.  Download it at:

   http://html-template.sourceforge.net

The module is also available on CPAN.  You can get it using
CPAN.pm or go to:

   http://www.cpan.org/authors/id/S/SA/SAMTREGAR/

CONTACT INFO

This module was written by Sam Tregar ([EMAIL PROTECTED]). You can
join the HTML::Template mailing-list by visiting:

  http://lists.sourceforge.net/lists/listinfo/html-template-users






ANNOUNCEMENT: HTML::Template::JIT 0.04

2002-08-29 Thread Sam Tregar

CHANGES

- Added support for HTML::Template 2.6's new DEFAULT attribute.

- Added support for HTML::Template 2.6's new __counter__ variable.

- Updated mailing-list information to reflect move from vm.com
  to sourceforge.net

- Fixed bug where tmpl_var's with the escape attribute would
  cause a crash if not set with a value.


DESCRIPTION

This module provides a just-in-time compiler for HTML::Template.
Templates are compiled into native machine code using Inline::C.  The
compiled code is then stored to disk and reused on subsequent calls.

HTML::Template::JIT is up to 8 times as fast as HTML::Template using
caching.


NOTE

This module is not feature-complete.  Be sure to read the CAVEATS
section in the documentation before using!


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://html-template.sourceforge.net

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

This module was written by Sam Tregar ([EMAIL PROTECTED]). You can
join the HTML::Template mailing-list by visiting:

  http://lists.sourceforge.net/lists/listinfo/html-template-users






ANNOUNCEMENT: HTML::Template::Expr 0.04

2002-08-29 Thread Sam Tregar

CHANGES

- Fixed parser to recognize negative numbers.  Thanks to Fran
  Fabrizio for the report.

- Fixed parser to allow for HTML-comment style tags.  Thanks to
  Stuhlpfarrer Gerhard for the spot.

- Updated mailing-list information to reflect move from vm.com to
  sourceforge.net

DESCRIPTION

This module provides an extension to HTML::Template which allows
expressions in the template syntax.  This is purely an addition - all
the normal HTML::Template options, syntax and behaviors will still
work.

Expression support includes comparisons, math operations, string
operations and a mechanism to allow you to add your own functions at
runtime.


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://html-template.sourceforge.net

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

This module was written by Sam Tregar ([EMAIL PROTECTED]). You can
join the HTML::Template mailing-list by visiting:

  http://lists.sourceforge.net/lists/listinfo/html-template-users





Re: HTML::Template

2002-08-19 Thread Sam Tregar

On Mon, 19 Aug 2002, Pierre Vaudrey wrote:

 with the following strange error (the Title is displayed but not the
 vignette.gif file)
 [Mon Aug 19 07:22:24 2002] [error] Missing right curly or square bracket
 at /Library/WebServer/Documents/perl/vignette.gif line 1, at end of line
 syntax error at /Library/WebServer/Documents/perl/vignette.gif line 1,
 at EOF

For some reason vignette.gif is being interpreted as a Perl script instead
of an image.  This is probably a case of a misconfigured web server,
although I don't know enough about your setup to be sure.  What happens if
you try to load this image separate from HTML::Template, just by typing
the URL into your browser?

-sam





Re: Static vs. DSO on Linux specifically

2002-07-24 Thread Sam Tregar

On Tue, 23 Jul 2002, WC -Sx- Jones wrote:

 Back in RH 6.2 I would hazard that the segfault was more related to Perl
 being set to uselargefiles and Apache NOT being set.  This only became
 visible when one tried to build mod_perl as a DSO.  Building as STATIC caused
 Apache to be rebuilt using the now current uselargefiles setting.

I don't think so.  Rebuilding Apache/mod_perl static with the exact same
Perl that shipped with Redhat 6.2 solved the segfaults.  Perhaps it is a
problem in Perl, I wouldn't know, but I guarantee it wasn't a result of
using a different Perl.

-sam





Re: Static vs. DSO on Linux specifically

2002-07-22 Thread Sam Tregar

On 22 Jul 2002, David Dyer-Bennet wrote:

 So, specifically for the Linux environment, what are the downsides of
 running mod_perl as a DSO?  (Pointers to the FM so I can R it would be
 fine.)

Segmentation faults, pure and simple.  The Apache/mod_perl that ships with
Redhat, and I assume other DSO Apache/mod_perl setups, is unstable.
Here's one place I've seen this mentioned:

  http://masonhq.com/docs/faq/#why_am_i_getting_segmentation_fa

-sam




Re: CGI::Application

2002-06-16 Thread Sam Tregar

On Sun, 16 Jun 2002, Eric Frazier wrote:

 I have been looking into HTML::Template which is a lot simpler than
 Embperl or the Template Toolkit. I am wondering if anyone has experience
 with using both of these with Registry.pm

I do!  Back when I worked for Jesse Erlbaum (the author of
CGI::Application) most of our development was in CGIs designed to be run
under Apache::Registry.  CGI::Application uses CGI.pm for all its CGI
services and CGI.pm works great under Apache::Registry.

 The big points I want to achieve right now, is to make everything I write
 OOP,  separate HTML from code as much as possible, and to not make it
 impossible to deal with for the people I work with who don't know as much
 perl as I do.

That sounds like an excellent goal.  Feel free to drop by the
CGI::Application (and HTML::Template) mailing-list if you run into
trouble.

-sam





[ANNOUNCE] HTML::Template::JIT 0.03

2002-06-15 Thread Sam Tregar

HTML::Template::JIT - a just-in-time compiler for HTML::Template

CHANGES

- Added support for case_sensitive option to new().

- Added new print_to_stdout option to new() to have output
  printed to STDOUT as it is generated.

- Added support for ESCAPE.  Template syntax support is now
  complete.

- Improved the quality of generated code - variables are now
  looked-up once and stored in lexical variables.  This has
  improved performance a small amount.

- Fixed bug in escaping of template text.  This caused templates
  containing any of the characters ($, @, %, \) to be rendered
  incorrectly.  Thanks to Smejkal Petr for the report.

- Fixed bug where parameters from previous runs were persisting
  across calls to new().  Thanks to Tugrul Galatali for the spot.

- Arguments to new() that modify the compiled object are now
  included in hashing to create package names.  This means that
  a single template can be used with different options and
  different compiled objects will be generated.  Thanks to
  Tugrul Galatali for the spot.


DESCRIPTION

This module provides a just-in-time compiler for HTML::Template.
Templates are compiled into native machine code using Inline::C.  The
compiled code is then stored to disk and reused on subsequent calls.

HTML::Template::JIT is up to 8 times as fast as HTML::Template using
caching.


NOTE

This module is not feature-complete.  Be sure to read the CAVEATS
section in the documentation before using!


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://prdownloads.sf.net/html-template/HTML-Template-JIT-0.03.tar.gz?download

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

You can join the HTML::Template mailing-list by sending a blank
message to [EMAIL PROTECTED]





Re: Logging under CGI

2002-06-11 Thread Sam Tregar

On Mon, 10 Jun 2002, Bill Moseley wrote:

 You are correct to worry.  You should use flock() to prevent your log file
 from becoming corrupted.  See perldoc -f flock() for more details.

 Maybe it's a matter of volume.  Or size of string written to the log.  But
 I don't flock, and I keep the log file open between requests and only
 reopen if stat() shows that the file was renamed.  So far been lucky.

Nope, just plain luck.  Keep it running long enough without locking and
you will eventually have a corrupted log file.
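A sketch of the locked append being recommended here (the sub name and the path argument are mine, not from the thread):

```perl
use Fcntl qw(:flock);

# Append one line under an exclusive lock.  The seek() re-positions to
# end-of-file in case another process appended between open and flock.
sub log_line {
    my ($path, $msg) = @_;
    open(my $log, '>>', $path) or die "open $path: $!";
    flock($log, LOCK_EX)       or die "flock: $!";
    seek($log, 0, 2)           or die "seek: $!";
    print $log "$msg\n";
    close($log);               # closing releases the lock
}
```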

-sam





Re: Logging under CGI

2002-06-11 Thread Sam Tregar

On Mon, 10 Jun 2002, Tom Brown wrote:

 ?? AFAIK, files opened in append mode, and written to without buffering,
 should _not_ get corrupted in any manner that flock would prevent.
 (Basically, small writes should be atomic.)

Right, and does Perl write with buffering when you call print()?  Yes, it
does!

 that should be pretty universal for most UNIXs

I've actually never heard this before.  I've been taught that if you have
multiple processes writing to one file you must use flock() or another
equivalent mechanism to prevent overwrites.  Do you have a source where I
could learn about guaranteed atomic file writes without locking under
UNIX?

-sam






Re: Logging under CGI

2002-06-11 Thread Sam Tregar

On Tue, 11 Jun 2002, Tom Brown wrote:

  Right, and does Perl write with buffering when you call print()?  Yes, it
  does!

 huh? That's what $| is all about, and $|++ is a pretty common line of
 code.

A pretty common line of code that wasn't in the example shown!  And that
only unbuffers the currently selected filehandle.  His example showed a
print to a named filehandle, so a simple $|++ isn't even enough.  Your
advice to skip the flock() without explaining any extra steps has a pretty
decent chance of resulting in a corrupt logfile given enough time.

 man(2) open.  see the O_APPEND option... the only footnote is that it
 doesn't work properly via NFS...

Interesting stuff.  But are you sure it works with Perl?  Does it work
with PerlIO, which is the new default IO scheme in 5.8.0?
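For reference, a sketch of the O_APPEND approach Tom describes, using sysopen/syswrite to bypass stdio buffering entirely (sub name mine; local filesystem assumed, since as the man page footnote says this is not reliable over NFS):

```perl
use Fcntl qw(O_WRONLY O_APPEND O_CREAT);

# Each syswrite() is a single write(2) on an O_APPEND descriptor, which
# the kernel positions atomically at end-of-file -- so small writes from
# many processes interleave without corruption and without flock().
sub append_line {
    my ($path, $line) = @_;
    sysopen(my $fh, $path, O_WRONLY | O_APPEND | O_CREAT, 0644)
        or die "sysopen $path: $!";
    syswrite($fh, $line) == length($line) or die "syswrite: $!";
    close($fh);
}
```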

 p.s. I'm not the only one who considers it impolite to have off-list
 messages taken back onto the list... I generally don't post AFAIK comments
 to lists, preferring to keep the signal-to-noise ratio higher.

My apologies.  I assumed you omitted the mod_perl address from the CC: by
accident.  I actually think this discussion is still mostly signal.  I
would like to make sure your advice is either correct for the situation
given or taken back publicly to avoid potential harm.  Either outcome
would be fine with me, actually.

-sam




Re: [OT+RFC] Template.pm-patch

2002-06-11 Thread Sam Tregar

On Tue, 11 Jun 2002, Nico Erfurth wrote:

 It changes the way arrays/loops are handled.
 1.) If you pass in an array-reference, it will not be dereferenced anymore.
 I did this so I can use a small wrapper class, which allows me to
 tie a database statement to an array and return the results row by
 row, so I don't need to waste memory inside of mod_perl (reading all
 results at once).

This is incorrect.  People like to do:

  my @loop = ( { row => 'foo' }, { row => 'bar' } );
  $template->param(LOOP_ONE => \@loop);
  @loop = ( { row => 'bif' }, { row => 'bop' } );
  $template->param(LOOP_TWO => \@loop);

If you don't copy out the contents of @loop in the first param() call then
you'll end up referencing the same array twice.  This was actually a bug
fixed in the early development of HTML::Template.
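The aliasing can be demonstrated without HTML::Template at all; this is what a copy-free param() would leave you with:

```perl
my @loop = ( { row => 'foo' }, { row => 'bar' } );
my %param;
$param{LOOP_ONE} = \@loop;    # no copy -- just a reference to @loop
@loop = ( { row => 'bif' }, { row => 'bop' } );
$param{LOOP_TWO} = \@loop;    # the very same array again

# Both keys now reference one array, and the first data set is gone:
print $param{LOOP_ONE}[0]{row}, "\n";   # prints "bif", not "foo"
```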

 2.) HTML::Template::Loop::output was changed so it appends to a given
 scalar-reference (the one from HTML::Template::output); this saves much
 memory if you have a big loop and combine it with the print_to option.

That sounds interesting, but have you done tests to confirm that it helps?
I suspect that you'd have to choose a truly pathological data-set to see
any improvement.

 I sent this patch to Sam Tregar weeks ago, and he never answered, but maybe
 someone here thinks that it's worth having a look at it, because AFAIK
 many ppl use mod_perl+HTML::Template (I do it myself) ;)

Sorry about that!  I must have let it fall through the cracks.  Did you
send it directly to me or to the HTML::Template mailing-list?  Things sent
to the mailing-list tend to stay on my radar slightly longer.

-sam





Re: [OT+RFC] Template.pm-patch

2002-06-11 Thread Sam Tregar

On Tue, 11 Jun 2002, Nico Erfurth wrote:

 I thought about this, and I'm wondering how many ppl really use it in
 this way. IMHO it should be a Don't try this, it will break instead of
 introducing this copy-workaround.  But I think I will use this patch
 only for my private version, because I don't use such constructs ;)

Well, someone used it that way - I got it as a bug-report in an early
version of HTML::Template.  Maybe we could add an option like
no_loop_copy that people could turn on to get better performance?

 I have to print out many lines in a big loop, and these two patches helped
 me to decrease the memory usage from 50MB per instance to 5MB, but I
 haven't checked both things separately.

Well, that does sound significant.  Please do determine which change
caused this improvement.

-sam





[ANNOUNCE] Bricolage 1.3.2

2002-06-10 Thread Sam Tregar

The Bricolage development team is proud to announce the release of
Bricolage version 1.3.2. This is a development release with new
features as well as numerous bug fixes.  Summary of major changes (see
the Changes file in the distribution for details):

* New installation system tested on Linux and FreeBSD (other
  systems should work too)

* Enhanced system for tracking published status of stories and media

* New Check-in & Publish feature enables one-click publishing
  from the story editor

* New caching system improves performance throughout the application

* Support for Apache::SizeLimit helps keep memory usage under control

* New search paging enables Bricolage to work with large data sets

* Code and database profilers for performance tuning

* Numerous bug fixes, major and minor

Here's a brief description of Bricolage:

Bricolage is a full-featured, open-source, enterprise-class content
management system. It offers a browser-based interface for ease of
use, a full-fledged templating system with complete programming
language support for flexibility, and many other features. It
operates in an Apache/mod_perl environment, and uses the PostgreSQL
RDBMS for its repository.

More information on Bricolage can be found on its home page.

http://bricolage.thepirtgroup.com/

And it can be downloaded from SourceForge.

http://sourceforge.net/project/showfiles.php?group_id=34789

--The Bricolage Team




Re: Logging under CGI

2002-06-10 Thread Sam Tregar

On Tue, 11 Jun 2002, Sergey Rusakov wrote:

 open(ERRORLOG, '>>/var/log/my_log');
 print ERRORLOG "some text\n";
 close ERRORLOG;

 This bit of code runs in every apache child.
 I worry about concurrent access to this log file under heavy apache load.
 Are there any problems with this approach?

You are correct to worry.  You should use flock() to prevent your log file
from becoming corrupted.  See perldoc -f flock() for more details.

-sam





Re: Logging under CGI

2002-06-10 Thread Sam Tregar

On Mon, 10 Jun 2002, Sam Tregar wrote:

 You are correct to worry.  You should use flock() to prevent your log file
 from becoming corrupted.  See perldoc -f flock() for more details.

Gah, these fingers...  That should be perldoc -f flock.

-sam





Re: Separating Aspects (Re: separating C from V in MVC)

2002-06-06 Thread Sam Tregar

On Thu, 6 Jun 2002, Perrin Harkins wrote:

 For posterity, and possible inclusion in the next rev of the templating
 tutorial, how would you recommend people handle this sort of situation
 without using HTML::Template::Expr?

 Suppose you have a model object for a concert which includes a date.  On
 one page, the designers want to display the date in a verbose way with
 the month spelled out, but on another they want it abbreviated and fixed
 length so that dates line up nicely.  Would you put that formatting in
 the controller?

In the script:

   $template->param(long_date  => $long_date,
                    short_date => $short_date);

In the template:

   The long date: <tmpl_var long_date> <br>
   The short date: <tmpl_var short_date>

 What if you had a model object that generates a list of these concerts,
 and on a certain page the designers want to show it in two columns.
 Would you split it into two arrays in the controller?

I'm not sure I understand what you mean.  You're asking about how to flow
a list between two columns?  With vanilla HTML::Template that would
require a small amount of work in the script.  Either there would need to
be a column_break variable thrown in at the appropriate place or two
separate loops.  I think I would prefer the former.  In the template that
would look like:

  <table><tr>
    <tmpl_loop concerts>
       <tmpl_if column_break> </tr><tr> </tmpl_if>
       <td> <tmpl_var long_date> </td>
    </tmpl_loop>
  </tr></table>

In the script you'd just set the column_break in the appropriate row (or
rows for a multi-column layout).
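The script side of that might be sketched like so (the helper sub and concert data are invented; only the column_break flag matters):

```perl
# Build the loop data, flagging the row where column two should start.
sub build_concert_loop {
    my @dates    = @_;
    my $break_at = int(@dates / 2);
    my @loop;
    for my $i (0 .. $#dates) {
        push @loop, {
            long_date    => $dates[$i],
            column_break => ($i == $break_at ? 1 : 0),
        };
    }
    return \@loop;
}

# Then in the script:
#   $template->param(concerts => build_concert_loop(@long_dates));
```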

Is that a point in favor of scripting in the templates?  Perhaps.  Of
course by limiting the power of template syntax I've made some things more
difficult.  If simple things should be simple and hard things should be
possible then not everything can be simple!

-sam





Re: Separating Aspects (Re: separating C from V in MVC)

2002-06-05 Thread Sam Tregar

On Wed, 5 Jun 2002, Andy Wardley wrote:

 In TT, you would usually pre-declare a particular format in a config
 file, pre-processed templates, or some other global style document.
 e.g.

   [% USE money = format('%.02f') %]

 In your main page templates you would do something like this:

   [% money(order.total) %]

 Then you can change the money format in one place and your designers
 don't have to worry about sprintf formats.

In HTML::Template::Expr:

  sub money { sprintf "%.02f", $_[0] }
  HTML::Template::Expr->register_function(money => \&money);

Then in the template:

  <tmpl_var expr="money(order_total)">

Now, I don't use HTML::Template::Expr.  I think it's generally not such a
good idea.  But it's there if you want it...

 See, the problem is that MVC is just one particular decomposition.  It's
 generally a good one because application, data and presentation are typically
 the three biggest aspects that you want to separate.  However, it doesn't
 take into account any of the other dozen or so aspects that you might want
 to model in your system.  Nowhere in MVC or any other Design Pattern does
 it tell you to define all your URLs in one place in case you ever need to
 change them en masse.  You have to think of that all by yourself.  MVC is
 no substitute for thinking.

Oh, absolutely.  MVC is just a framework, and it only addresses a subset
of the problems in any large system.  I think that's actually a strength.
I would be deeply suspicious of any paradigm that claimed to solve ALL my
problems.  I prefer small, simple (tools|paradigms) that do one thing and
do it well.

 I've seen far too many examples of people who didn't pass objects into their
 templates, didn't embed Perl code, or didn't do this or that because they
 thought that it might violate the MVC principle.

Hear, hear!

 The end result was that they jumped through hoops and made the system
 more complex than it needed to be for the sake of purity.

It was?  I don't think this is the only result.  It might be that these
people you've observed were just the hoop-jumping, complexifying types.
I've built quite a number of large systems without embedded Perl or object
variables, and without excessive hoop-jumping.

Here's my theory: the best usages of most templating systems are virtually
indistinguishable and all result in reasonably maintainable systems.
However, the bad usage of some templating systems is much worse than
others.  Also, the general usage of a templating system, even by otherwise
bright people, tends more towards the bad than the good.

Thus my solution: a templating system that simply will not allow you to
put anything significantly complicated in the template.  You can't.  If
you want complexity you'll just have to put it in the code, where it
belongs.  That's HTML::Template in a nutshell.

   [% silver.bullet %] isn't the answer by itself...

Hear, hear.  Neither is <tmpl_var silver_bullet> but I'd rather get shot
with mine than yours!

-sam





Re: separating C from V in MVC

2002-06-02 Thread Sam Tregar

On Sat, 1 Jun 2002, Barry Hoggard wrote:

 I don't think the standard HTML::Template has support for formatting
 numbers, dates, etc.

And thank the sweet lord it doesn't!  HTML::Template is a "do one thing
and do it well" module.  If you want routines for formatting numbers,
dates, etc. then CPAN has just what you need.
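For instance, keeping all formatting in the controller might look like this (field names invented; POSIX::strftime is core Perl):

```perl
use POSIX qw(strftime);

# Format once, in the controller; the template only drops values in.
my @when  = localtime(1055721600);           # an example epoch time
my %param = (
    long_date  => strftime('%A, %B %d, %Y', @when),
    short_date => strftime('%m/%d/%y', @when),
    total      => sprintf('%.2f', 1234.5),   # always two decimals
);
# $template->param(%param);
```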

 How do you make sure that it's done consistently in your applications?

Code reviews and testing.  I don't know of any other way, module support
or no module support.

 It seems problematic to me to require the programmers to do work when a
 designer wants to change the number of decimals in a page, for example.

HTML::Template::Expr may present a solution to this particular desire,
although it isn't one I've come across.  How often are HTML designers
fiddling with numeric formats?  Are they really HTML designers if they can
deal with, say, a printf-style format string?

-sam





Re: PDF generation

2002-04-19 Thread Sam Tregar

On Fri, 19 Apr 2002, Andrew Ho wrote:

 DWThis looks pretty good to me. Can anyone suggest how I might
 DWprogrammatically send a PDF to a printer once I've generated it in
 DWPerl/mod_perl?

 Use either Ghostscript or Adobe Acrobat Reader to convert to Postscript,
 then print in your normal manner (if you usually use Ghostscript as a
 print filter anyway, you can just print directly using it). For Adobe
 Acrobat Reader, use the -toPostScript option.

Use Acrobat Reader if you can.  The font support is significantly better
in my experience, at least under Linux.

-sam




Re: Apache::DProf seg faulting

2002-04-18 Thread Sam Tregar

On Wed, 17 Apr 2002, Paul Lindner wrote:

 I think that this may be a problem with the use of Perl sections.

 I believe your original post had something like this:

 <Perl>
   use Apache::DProf;
   use Apache::DB;
   Apache::DB->init();
 </Perl>

Nope.  That was Perrin Harkins, but I tried it too!

 Geoffrey and I tested our environments today and the recipe given in
 the book seems to work just fine:

   PerlModule Apache::DB
   PerlModule Apache::DProf

With those lines I get a seg-fault on the first hit to the server.
Reversing the lines I can get a few hits before seg-faulting.  I doubt
it's a problem in your example - something inside Devel::DProf is
seg-faulting as far as I can tell.  I'm planning to build a debugging
Perl and see if I can get more information.

But while I have your attention, why are you using Apache::DB at all?  The
Apache::DProf docs just have:

  PerlModule Apache::DProf

-sam




Re: Sharing Variable Across Apache Children

2002-04-17 Thread Sam Tregar

On Wed, 17 Apr 2002, Perrin Harkins wrote:

 Benjamin Elbirt wrote:
  Well, let's assume that I were to go with
  the shared memory option anyway... what would the pitfalls / concerns be?

 As mentioned before, you'd probably be better off with MLDBM::Sync or
 Cache::Cache.  You can try IPC::Shareable, but a lot of people seem to
 have trouble getting it to work.

I agree with you 100% - file-based caches are generally as fast and far
easier to manage.  Still, I can't resist the urge to plug my
IPC::SharedCache module.  It's much easier than using IPC::Shareable (or
even the better alternative, IPC::ShareLite).

-sam




Apache::DProf seg faulting

2002-04-16 Thread Sam Tregar

Hello all.  I'm trying to use Apache::DProf but all I get is seg faults.
I put these lines in my httpd.conf:

  PerlModule Apache::DB
  PerlModule Apache::DProf

Then I start the server, and it looks ok:

  [Tue Apr 16 17:22:12 2002] [notice] Apache/1.3.20 (Unix) mod_perl/1.25
  mod_ssl/2.8.4 OpenSSL/0.9.6a configured -- resuming normal operations
  [Tue Apr 16 17:22:12 2002] [info] Server built: Aug 17 2001 13:29:44
  [notice] Apache::DB initialized in child 2234
  [notice] Apache::DB initialized in child 2235

I hit the server and I get:

  [Tue Apr 16 17:22:17 2002] [notice] child pid 2235 exit signal
  Segmentation fault (11)
  [notice] Apache::DB initialized in child 2237

Looking in logs/dprof I see a bunch of numeric directories with tmon.out
files in them.  All around the same size (400 bytes).

Any suggestions on how to proceed?

-sam






Re: Apache::DProf seg faulting

2002-04-16 Thread Sam Tregar

On 16 Apr 2002, Garth Winter Webb wrote:

 Sam, try getting rid of the 'PerlModule Apache::DB' line.  I've used
 Apache::DProf w/o any problems by including only the one PerlModule
 line.  Since they both want to use perl debugging hooks, I'm guessing
 that Apache::DProf is getting crashed up when it tries to use hooks
 already grabbed by Apache::DB...

Same result.  Thanks though!

-sam




Re: Apache::DProf seg faulting

2002-04-16 Thread Sam Tregar

On Tue, 16 Apr 2002, Sam Tregar wrote:

 On 16 Apr 2002, Garth Winter Webb wrote:

  Sam, try getting rid of the 'PerlModule Apache::DB' line.  I've used
  Apache::DProf w/o any problems by including only the one PerlModule
  line.  Since they both want to use perl debugging hooks, I'm guessing
  that Apache::DProf is getting crashed up when it tries to use hooks
  already grabbed by Apache::DB...

 Same result.  Thanks though!

Aw nuts, that was the problem!  I thought I'd tried that already, but I
guess not.  I actually got those PerlModule lines from the mod_perl
Developer's Cookbook - guess this is an erratum!

Thanks!
-sam






Re: Apache::DProf seg faulting

2002-04-16 Thread Sam Tregar

On Tue, 16 Apr 2002, Perrin Harkins wrote:

 Strange, that works for me.  I do it like this:
 <Perl>
  use Apache::DProf;
  use Apache::DB;
  Apache::DB->init;
 </Perl>

That works, but this doesn't:

  <Perl>
   use Apache::DB;
   use Apache::DProf;
   Apache::DB->init;
  </Perl>

It looks like the poison pill is loading Apache::DB before Apache::DProf.
Odd, eh?

-sam




ANNOUNCE: Bricolage 1.3.1

2002-04-05 Thread Sam Tregar

The Bricolage development team is proud to announce the release of
Bricolage version 1.3.1. This is a development release with new
features as well as numerous bug fixes.  Summary of major changes (see
the Changes file in the distribution for details):

*   SOAP interface fully implemented

*   New FTP distribution move method

*   New preferences to change the way URIs are formatted

Here's a brief description of Bricolage:

Bricolage is a full-featured, open-source, enterprise-class content
management system. It offers a browser-based interface for ease of
use, a full-fledged templating system with complete programming
language support for flexibility, and many other features. It
operates in an Apache/mod_perl environment, and uses the PostgreSQL
RDBMS for its repository.

More information on Bricolage can be found on its home page.

http://bricolage.thepirtgroup.com/

And it can be downloaded from SourceForge.

http://sourceforge.net/project/showfiles.php?group_id=34789

--The Bricolage Team




Re: [?] Same Named Modules, Different Paths

2002-02-02 Thread Sam Tregar

On Sat, 2 Feb 2002, John Heitmann wrote:

 Here is what I had to do to force correct module loading (mostly stolen
 from the great mod_perl guide):

 %INC = (); # Possibly unnecessary
 do 'FindBin.pm';
 unshift @INC, $FindBin::Bin; # There are also modules in the same dir as the script
 unshift @INC, "$FindBin::Bin/../lib/";

 require MyModule;
 import MyModule;

This isn't going to work if your modules store anything in package
globals.  You should probably empty the package stash before you load the
new module.  That'll also save you "subroutine redefined" warnings, I
think.
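A minimal sketch of what I mean (the module name is just an example);
Symbol::delete_package wipes the stash so no globals or subs survive from
the previously loaded version:

```perl
# Sketch: empty the package stash before re-requiring a module.
# Symbol::delete_package ships with the Perl 5 core.
use Symbol qw(delete_package);

delete_package('MyModule');   # wipe %MyModule:: - globals, subs, the lot
delete $INC{'MyModule.pm'};   # make require load the file again

require MyModule;
MyModule->import if MyModule->can('import');
```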

 One obvious answer is to move the devel tree off of the same server as
 the release tree.

This is undoubtedly the best way to go.

 Any other ideas or places to RTFM?

I've sometimes been able to get away with running the live version through
Apache::Registry while developing small changes under mod_cgi.  Then when
the .cgi version is ready I just copy it into the .pl and restart.
However, a full staging server is definitely preferable.

-sam





Re: [?] Same Named Modules, Different Paths

2002-02-02 Thread Sam Tregar

On Sun, 3 Feb 2002, Stas Bekman wrote:

 I think the best solution is to run your staging server on a different
 port and use a front-end proxy to rewrite to the right server based on
 the Host: name. Alternatively put 2 NICs with 2 IPs, that will work if
 you don't hardcode the server name in your code/html.

Or 1 NIC with 2 IPs if your OS supports it (Linux does).
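For example, with Linux IP aliasing (the interface name and addresses here
are examples only):

```shell
# One NIC answering on two addresses via an interface alias
ifconfig eth0   192.168.1.10 netmask 255.255.255.0 up
ifconfig eth0:0 192.168.1.11 netmask 255.255.255.0 up

# then bind the staging server to the second address in its httpd.conf:
#   Listen 192.168.1.11:80
```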

 BTW, mod_perl 2.0 solves this problem.

How?  Is the "one global namespace per server" changed in 2.0 perhaps?

-sam





ANN: HTML::Template 2.5

2002-02-01 Thread Sam Tregar

HTML::Template - a Perl module to use HTML Templates

CHANGES

2.5

- Doc Fix: added reference to new HTML::Template website at
   http://html-template.sourceforge.net

- Bug Fix: global_vars fixed for loops within loops

- Bug Fix: include paths were broken under Windows (David Ferrance)

- Bug Fix: nested include path handling was wrong (Gyepi Sam)

- Bug Fix: MD5 signatures for file cache corrected (Martin Schroth)

- Bug Fix: print_to was broken for tied filehandles (Darren Chamberlain)

- Doc Fix: added mailing-list archive URL to FAQ, added link to
   tutorial, fixed typos and formatting


DESCRIPTION

This module attempts to make using HTML templates simple and natural. It
extends standard HTML with a few new HTML-esque tags - <TMPL_VAR>,
<TMPL_LOOP>, <TMPL_INCLUDE>, <TMPL_IF>, <TMPL_ELSE> and <TMPL_UNLESS>.
The file written with HTML and these new tags is called a template. It
is usually saved separate from your script - possibly even created by
someone else! Using this module you fill in the values for the
variables, loops and branches declared in the template. This allows
you to separate design - the HTML - from the data, which you generate
in the Perl script.
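For example, a minimal (hypothetical) template and the script that fills
it in:

```perl
# greeting.tmpl -- the design lives here, separate from the Perl:
#   <html><body>
#   Hello <TMPL_VAR NAME=NAME>!
#   <TMPL_IF NAME=ADMIN>You have admin rights.</TMPL_IF>
#   </body></html>

use HTML::Template;

my $template = HTML::Template->new(filename => 'greeting.tmpl');
$template->param(NAME  => 'Sam');
$template->param(ADMIN => 1);
print $template->output;
```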

This module is licensed under the GPL. See the LICENSE section below
for more details.

TUTORIAL

If you're new to HTML::Template, I suggest you start with the
introductory article available on the HTML::Template website:

   http://html-template.sourceforge.net

AVAILABILITY

This module is available on SourceForge.  Download it at:

   http://sourceforge.net/project/showfiles.php?group_id=1075

The module is also available on CPAN.  You can get it using
CPAN.pm or go to:

   http://www.cpan.org/authors/id/S/SA/SAMTREGAR/

CONTACT INFO

This module was written by Sam Tregar ([EMAIL PROTECTED]). You can join
the HTML::Template mailing-list by sending a blank message to
[EMAIL PROTECTED]






Re: IPC::ShareLite

2002-01-31 Thread Sam Tregar

On Thu, 31 Jan 2002, Rasoul Hajikhani wrote:

 I have created a data structure and used IPC::ShareLite to save it in
 the main memory. Can someone tell me how to look at it and destroy it?

Your system should have a program called ipcs that you can use to examine
IPC shared structures (shared memory, semaphores and message queues).
Look at the ipcs manpage for details.
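For example (the segment id below is made up - take real ones from the
ipcs listing):

```shell
# List all System V IPC objects: shared memory, semaphores, message queues
ipcs

# Shared memory segments only, with key, shmid, owner and size
ipcs -m

# Remove a segment by the shmid shown in the ipcs -m output:
# ipcrm shm 123456
```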

-sam





Re: performance coding project? (was: Re: When to cache)

2002-01-26 Thread Sam Tregar

On Sat, 26 Jan 2002, Perrin Harkins wrote:

  It all depends on what kind of application do you have. If you code is
  CPU-bound these seemingly insignificant optimizations can have a very
  significant influence on the overall service performance.

 Do such beasts really exist?  I mean, I guess they must, but I've never
 seen a mod_perl application that was CPU-bound.  They always seem to be
 constrained by database speed and memory.

Think search engines.  Once you've figured out how to get your search
database to fit in memory (or devised a caching strategy to get the
important parts there) you're essentially looking at a CPU-bound problem.
These days the best solution is probably some judicious use of Inline::C.
Back when I last tackled the problem I had to hike up mount XS to find my
grail...
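A sketch of the kind of thing I mean - the function name and scoring logic
here are invented purely for illustration:

```perl
# Move a hot inner loop into C with Inline::C, keep the rest in Perl.
use Inline C => <<'END_C';
#include <string.h>

/* Count occurrences of term in doc -- a stand-in for the hot part
   of a search scorer. */
int score(char* doc, char* term) {
    int hits = 0;
    char* p = doc;
    while ((p = strstr(p, term)) != NULL) {
        hits++;
        p++;
    }
    return hits;
}
END_C

print score("the quick brown fox jumps over the lazy dog", "the"), "\n";  # prints 2
```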

-sam






Re: slow regex [BENCHMARK]

2002-01-23 Thread Sam Tregar

On Wed, 23 Jan 2002, Paul Mineiro wrote:

 i've cleaned up the example to tighten the case:

 the mod perl code  snippet is:

Fascinating.  The only thing I don't see is where $seq gets assigned to in
the CGI case.  Where is the data coming from?  Is it perhaps a tied
variable or otherwise unlike the $seq in the command-line version?  If
that's not it then I think you might have to build a debugging version of
Apache and Perl and break out GDB to get to the bottom of things.

-sam




Re: kylix: rad!

2002-01-13 Thread Sam Tregar

On Sat, 12 Jan 2002, Perrin Harkins wrote:

 Well, does this product actually have any users to compete for?  GUI
 builders usually don't work for anything but the most trivial websites
 that could be written in anything and do fine.  People seem to come to
 mod_perl because they need more performance or more control than they
 can get from CGI.

Agree.

 I'm not sure I want to try and draw in users who can't program at all.

Tangential thought: we may not want to draw in individual non-programmers but
we undoubtedly do work with non-programmers - artists and HTML writers.
I think a GUI system that made it easier for these non-programmers to
interface with our creations would have some utility.  I've got a
half-baked module sitting in my workspace, HTML::Template::Explorer, that
was an attempt to do something along these lines for HTML::Template.  I
didn't get very far before I realized I didn't have a strong enough design
to be coding...

-sam





Re: kylix: rad!

2002-01-13 Thread Sam Tregar

On Sun, 13 Jan 2002, brian moseley wrote:

 altho kylix was discussed in the first post of the thread,
 my actual reply to you stood on its own as a condemnation of
 a general cliquish attitude.

Oh, consider me properly chastened then.  BTW - kylix is actually the
subject of this thread, supposedly.  I didn't think addressing it
directly was too out of bounds!  And Kylix *is* aimed at non-programmers,
or at least it was when it was Delphi.

 but microsoft visual studio blah blah .net blah blah is
 quite popular, isn't it?

Have you used MS Visual Studio?  There isn't much "visual" about it.
In my experience it's pretty much on par with the various C/C++ IDEs
around for Linux already.  All of which are pretty close to useless, IMO.

People use MS Visual Studio because they have to.  Same reason they'll use
.NET.  If there's anything Borland has proved it's that providing a better
development environment than Microsoft doesn't get you more developers.

Kylix is, as I understand it, something much closer to the original Delphi
aim of programming without coding.  I'm not saying it wouldn't be neat if you
could do Kylix for Perl.  I'm just saying I don't think it would be a
fantastic success.  So, yeah, I'm agreeing with Perrin, but I don't think
that makes me some kind of horrible elitist.

-sam





Re: ANNOUNCE: Bricolage 1.2.0

2002-01-11 Thread Sam Tregar

On Fri, 11 Jan 2002, Matt Sergeant wrote:

 Any chance of supporting more template systems in the future, like TT and
 XSLT?

Adding more Burners (brictalk for "templating system") is definitely
something we're interested in.  If you'd like to give it a try there's a
brief set of instructions in the Bric::Util::Burner docs:

   http://bricolage.thepirtgroup.com/docs/Bric/Util/Burner.html

Look for the ADDING A NEW BURNER section.  For general information
about the templating system, see:

   http://bricolage.thepirtgroup.com/documentation.html

-sam




ANNOUNCE: CGI::Application::MailPage 1.0

2002-01-05 Thread Sam Tregar

I've got a new module to introduce - CGI::Application::MailPage.  It's a
little CGI::Application module that allows users to send HTML documents to
their friends via email.  It's configurable in a number of useful
directions and handy if you need this sort of thing.  However, it's also
a proof of concept - that CGI::Application can enable the distribution of
small, configurable CGI apps via CPAN.

So, without further ado, I give you a module created almost two years ago
and aged to perfection in the moldy depths of my home directory:


CGI::Application::MailPage - module to allow users to send HTML pages by
email

This module is a CGI::Application module that allows users to send HTML
pages to their friends.  This module provides the functionality behind a
typical "Mail This Page To A Friend" link.


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://sourceforge.net/project/showfiles.php?group_id=12636

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


AUTHOR

Copyright 2000-2002, Sam Tregar ([EMAIL PROTECTED]).

Questions, bug reports and suggestions can be sent to the
CGI::Application mailing list.  You can subscribe by sending a blank
message to [EMAIL PROTECTED]  See you there!


LICENSE

This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.





ANNOUNCEMENT: HTML::Template::JIT 0.02

2001-11-26 Thread Sam Tregar

HTML::Template::JIT - a just-in-time compiler for HTML::Template

CHANGES

- Added support for loop_context_vars.

- Added support for global_vars.

- Fixed bug in loop param handling that made loop variables
  case-sensitive.


DESCRIPTION

This module provides a just-in-time compiler for HTML::Template.
Templates are compiled into native machine code using Inline::C.  The
compiled code is then stored to disk and reused on subsequent calls.

HTML::Template::JIT is up to 4 times as fast as HTML::Template using
caching.
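Usage is meant to be a near drop-in for HTML::Template; a sketch (the
file names are placeholders, and jit_path is where the compiled templates
get cached):

```perl
use HTML::Template::JIT;

my $template = HTML::Template::JIT->new(
    filename => 'page.tmpl',
    jit_path => '/tmp/jit',      # compiled templates cached here
);
$template->param(BANNER_TEXT => 'Hello World!');
print $template->output;
```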


NOTE

This module is not feature-complete.  Be sure to read the CAVEATS
section in the documentation before using!


AVAILABILITY

This module is available on SourceForge.  Download it at:


http://prdownloads.sourceforge.net/html-template/HTML-Template-JIT-0.02.tar.gz

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

You can join the HTML::Template mailing-list by sending a blank
message to [EMAIL PROTECTED]





ANNOUNCEMENT: HTML::Template::JIT 0.01

2001-11-17 Thread Sam Tregar

HTML::Template::JIT - a just-in-time compiler for HTML::Template

DESCRIPTION

This module provides a just-in-time compiler for HTML::Template.
Templates are compiled into native machine code using Inline::C.  The
compiled code is then stored to disk and reused on subsequent calls.

HTML::Template::JIT is up to 4 times as fast as HTML::Template using
caching.


NOTE

This module is not feature-complete.  Be sure to read the CAVEATS
section in the documentation before using!


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://prdownloads.sourceforge.net/html-template/HTML-Template-JIT-0.01.tar.g

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

You can join the HTML::Template mailing-list by sending a blank
message to [EMAIL PROTECTED]





ANNOUNCEMENT: HTML::Template::Expr 0.03

2001-11-13 Thread Sam Tregar

CHANGES

- Added register_function() class method to add functions globally.
  (Tatsuhiko Miyagawa)

- Fixed broken cache mode.


DESCRIPTION

This module provides an extension to HTML::Template which allows
expressions in the template syntax.  This is purely an addition - all
the normal HTML::Template options, syntax and behaviors will still
work.

Expression support includes comparisons, math operations, string
operations and a mechanism to allow you to add your own functions at
runtime.
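A quick sketch of both features - the template file and the commify
function are invented for illustration:

```perl
use HTML::Template::Expr;

# Register a function available to all templates (new in this release).
HTML::Template::Expr->register_function(
    commify => sub {
        my $n = reverse shift;
        $n =~ s/(\d{3})(?=\d)/$1,/g;
        return scalar reverse $n;
    },
);

# In report.tmpl:
#   <TMPL_IF EXPR="hits > 10">Popular: <TMPL_VAR EXPR="commify(hits)"></TMPL_IF>
my $template = HTML::Template::Expr->new(filename => 'report.tmpl');
$template->param(hits => 12345);
print $template->output;
```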


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://prdownloads.sourceforge.net/html-template/HTML-Template-Expr-0.03.tar.gz

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

You can join the HTML::Template mailing-list by sending a blank
message to [EMAIL PROTECTED]








ANNOUNCEMENT: HTML::Template::Expr 0.02

2001-11-05 Thread Sam Tregar

CHANGES

- Fixed bug where numeric functions all returned 1.
  (reported by Peter Leonard)

- Improved performance over 300% with a new grammar and expression
  evaluator.

- Enhanced grammar to support call(foo > 10) syntax.


DESCRIPTION

This module provides an extension to HTML::Template which allows
expressions in the template syntax.  This is purely an addition - all
the normal HTML::Template options, syntax and behaviors will still
work.

Expression support includes comparisons, math operations, string
operations and a mechanism to allow you to add your own functions at
runtime.


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://prdownloads.sourceforge.net/html-template/HTML-Template-Expr-0.02.tar.gz

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


CONTACT INFO

You can join the HTML::Template mailing-list by sending a blank
message to [EMAIL PROTECTED]




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-19 Thread Sam Horrocks

There's only one run queue in the kernel.  The first task ready to run is
put at the head of that queue, and anything arriving afterwards waits.  Only
if that first task blocks on a resource or takes a very long time, or
a higher priority process becomes able to run due to an interrupt, is that
process taken out of the queue.
  
  Note that any I/O request that isn't completely handled by buffers will
  trigger the 'blocks on a resource' clause above, which means that
  jobs doing any real work will complete in an order determined by
  something other than the cpu and not strictly serialized.  Also, most
  of my web servers are dual-cpu so even cpu bound processes may
  complete out of order.

 I think it's much easier to visualize how MRU helps when you look at one
 thing running at a time.  And MRU works best when every process runs
 to completion instead of blocking, etc.  But even if the process gets
 timesliced, blocked, etc, MRU still degrades gracefully.  You'll get
 more processes in use, but still the numbers will remain small.

 Similarly, because of the non-deterministic nature of computer systems,
 Apache doesn't service requests on an LRU basis; you're comparing SpeedyCGI
 against a straw man. Apache's servicing algorithm approaches randomness, so
 you need to build a comparison between forced-MRU and random choice.
  
Apache httpd's are scheduled on an LRU basis.  This was discussed early
in this thread.  Apache uses a file-lock for its mutex around the accept
call, and file-locking is implemented in the kernel using a round-robin
(fair) selection in order to prevent starvation.  This results in
incoming requests being assigned to httpd's in an LRU fashion.
  
  But, if you are running a front/back end apache with a small number
  of spare servers configured on the back end there really won't be
  any idle perl processes during the busy times you care about.  That
  is, the  backends will all be running or apache will shut them down
  and there won't be any difference between MRU and LRU (the
  difference would be which idle process waits longer - if none are
  idle there is no difference).

 If you can tune it just right so you never run out of ram, then I think
 you could get the same performance as MRU on something like hello-world.

Once the httpd's get into the kernel's run queue, they finish in the
same order they were put there, unless they block on a resource, get
timesliced or are pre-empted by a higher priority process.
  
  Which means they don't finish in the same order if (a) you have
  more than one cpu, (b) they do any I/O (including delivering the
  output back which they all do), or (c) some of them run long enough
  to consume a timeslice.
  
Try it and see.  I'm sure you'll run more processes with speedycgi, but
you'll probably run a whole lot fewer perl interpreters and need less ram.
  
  Do you have a benchmark that does some real work (at least a dbm
  lookup) to compare against a front/back end mod_perl setup?

 No, but if you send me one, I'll run it.



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-19 Thread Sam Horrocks

  You know, I had brief look through some of the SpeedyCGI code yesterday,
  and I think the MRU process selection might be a bit of a red herring. 
  I think the real reason Speedy won the memory test is the way it spawns
  processes.

 Please take a look at that code again.  There's no smoke and mirrors,
 no red-herrings.  Also, I don't look at the benchmarks as "winning" - I
 am not trying to start a mod_perl vs speedy battle here.  Gunther wanted
 to know if there were "real benchmarks", so I reluctantly put them up.

 Here's how SpeedyCGI works (this is from version 2.02 of the code):

When the frontend starts, it tries to quickly grab a backend from
the front of the be_wait queue, which is a LIFO.  This is in
speedy_frontend.c, get_a_backend() function.

If there aren't any idle be's, it puts itself onto the fe_wait queue.
Same file, get_a_backend_hard().

If this fe (frontend) is at the front of the fe_wait queue, it
"takes charge" and starts looking to see if a backend needs to be
spawned.  This is part of the "frontend_ping()" function.  It will
only spawn a be if no other backends are being spawned, so only
one backend gets spawned at a time.

Every frontend in the queue drops into a sigsuspend and waits for an
alarm signal.  The alarm is set for 1-second.  This is also in
get_a_backend_hard().

When a backend is ready to handle code, it goes and looks at the fe_wait
queue and if there are fe's there, it sends a SIGALRM to the one at
the front, and sets the sent_sig flag for that fe.  This is done in
speedy_group.c, speedy_group_sendsigs().

When a frontend wakes on an alarm (either due to a timeout, or due to
a be waking it up), it looks at its sent_sig flag to see if it can now
grab a be from the queue.  If so it does that.  If not, it runs various
checks then goes back to sleep.

 In most cases, you should get a be from the lifo right at the beginning
 in the get_a_backend() function.  Unless there aren't enough be's running,
 or something is killing them (bad perl code), or you've set the
 MaxBackends option to limit the number of be's.


  If I understand what's going on in Apache's source, once every second it
  has a look at the scoreboard and says "less than MinSpareServers are
  idle, so I'll start more" or "more than MaxSpareServers are idle, so
  I'll kill one".  It only kills one per second.  It starts by spawning
  one, but the number spawned goes up exponentially each time it sees
  there are still not enough idle servers, until it hits 32 per second. 
  It's easy to see how this could result in spawning too many in response
  to sudden load, and then taking a long time to clear out the unnecessary
  ones.
  
  In contrast, Speedy checks on every request to see if there are enough
  backends running.  If there aren't, it spawns more until there are as
  many backends as queued requests.
 
 Speedy does not check on every request to see if there are enough
 backends running.  In most cases, the only thing the frontend does is
 grab an idle backend from the lifo.  Only if there are none available
 does it start to worry about how many are running, etc.

  That means it never overshoots the mark.

 You're correct that speedy does try not to overshoot, but mainly
 because there's no point in overshooting - it just wastes swap space.
 But that's not the heart of the mechanism.  There truly is a LIFO
 involved.  Please read that code again, or run some tests.  Speedy
 could overshoot by far, and the worst that would happen is that you
 would get a lot of idle backends sitting in virtual memory, which the
 kernel would page out, and then at some point they'll time out and die.
 Unless of course the load increases to a point where they're needed,
 in which case they would get used.

 If you have speedy installed, you can manually start backends yourself
 and test.  Just run "speedy_backend script.pl &" to start a backend.
 If you start lots of those on a script that says 'print "$$\n"', then
 run the frontend on the same script, you will still see the same pid
 over and over.  This is the LIFO in action, reusing the same process
 over and over.

  Going back to your example up above, if Apache actually controlled the
  number of processes tightly enough to prevent building up idle servers,
  it wouldn't really matter much how processes were selected.  If after
  the 1st and 2nd interpreters finish their run they went to the end of
  the queue instead of the beginning of it, that simply means they will
  sit idle until called for instead of some other two processes sitting
  idle until called for.  If the systems were both efficient enough about
  spawning to only create as many interpreters as needed, none of them
  would be sitting idle and memory usage would always be as low as
  possible.
  
  I don't know if I'm explaining this very well, but the gist of my theory
  is that at any given time both 

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Sam Horrocks
, each customer
puts in their order immediately, then waits 50 minutes for it to arrive.
In the second scenario each customer waits 40 minutes in to put in
their order, then waits another 10 minutes for it to arrive.

What I'm trying to show with this analogy is that no matter how many
"simultaneous" requests you have, they all have to be serialized at
some point because you only have one CPU.  Either you can serialize them
before they get to the perl interpreter, or afterward.  Either way you
wait on the CPU, and you get the same throughput.

Does that help?

  I have just gotten around to reading this thread I've been saving for a 
  rainy day. Well, it's not rainy, but I'm finally getting to it. Apologies 
  to those who hate it when people don't snip their reply mails, but I am 
  including it so that the entire context is not lost.
  
  Sam (or others who may understand Sam's explanation),
  
  I am still confused by this explanation of MRU helping when there are 10 
  processes serving 10 requests at all times. I understand MRU helping when 
  the processes are not at max, but I don't see how it helps when they are at 
  max utilization.
  
  It seems to me that if the wait is the same for mod_perl backend processes 
  and speedyCGI processes, that it doesn't matter if some of the speedycgi 
  processes cycle earlier than the mod_perl ones because all 10 will always 
  be used.
  
  I did read and reread (once) the snippets about modeling concurrency and 
  the HTTP waiting for an accept.. But I still don't understand how MRU helps 
  when all the processes would be in use anyway. At that point they all have 
  an equal chance of being called.
  
  Could you clarify this with a simpler example? Maybe 4 processes and a 
  sample timeline of what happens to those when there are enough requests to 
  keep all 4 busy all the time for speedyCGI and a mod_perl backend?
  
  At 04:32 AM 1/6/01 -0800, Sam Horrocks wrote:
 Let me just try to explain my reasoning.  I'll define a couple of my
 base assumptions, in case you disagree with them.

 - Slices of CPU time doled out by the kernel are very small - so small
 that processes can be considered concurrent, even though technically
 they are handled serially.
  
Don't agree.  You're equating the model with the implementation.
Unix processes model concurrency, but when it comes down to it, if you
don't have more CPU's than processes, you can only simulate concurrency.
  
Each process runs until it either blocks on a resource (timer, network,
disk, pipe to another process, etc), or a higher priority process
pre-empts it, or it's taken so much time that the kernel wants to give
another process a chance to run.
  
 - A set of requests can be considered "simultaneous" if they all arrive
 and start being handled in a period of time shorter than the time it
 takes to service a request.
  
That sounds OK.
  
 Operating on these two assumptions, I say that 10 simultaneous requests
 will require 10 interpreters to service them.  There's no way to handle
 them with fewer, unless you queue up some of the requests and make them
 wait.
  
Right.  And that waiting takes place:
  
   - In the mutex around the accept call in the httpd
  
   - In the kernel's run queue when the process is ready to run, but is
 waiting for other processes ahead of it.
  
So, since there is only one CPU, then in both cases (mod_perl and
SpeedyCGI), processes spend time waiting.  But what happens in the
case of SpeedyCGI is that while some of the httpd's are waiting,
one of the earlier speedycgi perl interpreters has already finished
its run through the perl code and has put itself back at the front of
the speedycgi queue.  And by the time that Nth httpd gets around to
running, it can re-use that first perl interpreter instead of needing
yet another process.
  
This is why it's important that you don't assume that Unix is truly
concurrent.
  
 I also say that if you have a top limit of 10 interpreters on your
 machine because of memory constraints, and you're sending in 10
 simultaneous requests constantly, all interpreters will be used all the
 time.  In that case it makes no difference to the throughput whether you
 use MRU or LRU.
  
This is not true for SpeedyCGI, because of the reason I give above.
10 simultaneous requests will not necessarily require 10 interpreters.
  
   What you say would be true if you had 10 processors and could get
   true concurrency.  But on single-cpu systems you usually don't need
   10 unix processes to handle 10 requests concurrently, since they get
   serialized by the kernel anyways.

 I think the CPU slices are smaller than that.  I don't know much about
 process scheduling, so I could be wrong.  I would agree with you if we
 were ta

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Sam Horrocks
which/hello_world
ab -t 30 -c 300 http://localhost/$which/hello_world


Before running each test, I rebooted my system.  Here's the software
installed:

angel: {139}# rpm -q -a |egrep -i 'mod_perl|speedy|apache'
apache-1.3.9-4
speedycgi-2.02-1
apache-devel-1.3.9-4
speedycgi-apache-2.02-1
mod_perl-1.21-2

Here are some relevant parameters from my httpd.conf:

MinSpareServers 8
MaxSpareServers 20
StartServers 10
MaxClients 150
MaxRequestsPerChild 1
SpeedyMaxRuns 0









  At 03:19 AM 1/17/01 -0800, Sam Horrocks wrote:
  I think the major problem is that you're assuming that just because
  there are 10 constant concurrent requests, that there have to be 10
  perl processes serving those requests at all times in order to get
  maximum throughput.  The problem with that assumption is that there
  is only one CPU - ten processes cannot all run simultaneously anyways,
  so you don't really need ten perl interpreters.
  
  I've been trying to think of better ways to explain this.  I'll try to
  explain with an analogy - it's sort-of lame, but maybe it'll give you
  a mental picture of what's happening.  To eliminate some confusion,
  this analogy doesn't address LRU/MRU, nor waiting on other events like
  network or disk i/o.  It only tries to explain why you don't necessarily
  need 10 perl-interpreters to handle a stream of 10 concurrent requests
  on a single-CPU system.
  
  You own a fast-food restaurant.  The players involved are:
  
   Your customers.  These represent the http requests.
  
   Your cashiers.  These represent the perl interpreters.
  
   Your cook.  You only have one.  This represents your CPU.
  
  The normal flow of events is this:
  
   A cashier gets an order from a customer.  The cashier goes and
   waits until the cook is free, and then gives the order to the cook.
   The cook then cooks the meal, taking 5-minutes for each meal.
   The cashier waits for the meal to be ready, then takes the meal and
   gives it to the customer.  The cashier then serves another customer.
   The cashier/customer interaction takes a very small amount of time.
  
  The analogy is this:
  
   An http request (customer) arrives.  It is given to a perl
   interpreter (cashier).  A perl interpreter must wait for all other
   perl interpreters ahead of it to finish using the CPU (the cook).
   It can't serve any other requests until it finishes this one.
   When its turn arrives, the perl interpreter uses the CPU to process
   the perl code.  It then finishes and gives the results over to the
   http client (the customer).
  
  Now, say in this analogy you begin the day with 10 customers in the store.
  At each 5-minute interval thereafter another customer arrives.  So at time
  0, there is a pool of 10 customers.  At time +5, another customer arrives.
  At time +10, another customer arrives, ad infinitum.
  
  You could hire 10 cashiers in order to handle this load.  What would
  happen is that the 10 cashiers would fairly quickly get all the orders
  from the first 10 customers simultaneously, and then start waiting for
  the cook.  The 10 cashiers would queue up.  Cashier #1 would put in the
  first order.  Cashiers 2-10 would wait their turn.  After 5-minutes,
  cashier number 1 would receive the meal, deliver it to customer #1, and
  then serve the next customer (#11) that just arrived at the 5-minute mark.
  Cashier #1 would take customer #11's order, then queue up and wait in
  line for the cook - there will be 9 other cashiers already in line, so
  the wait will be long.  At the 10-minute mark, cashier #2 would receive
  a meal from the cook, deliver it to customer #2, then go on and serve
  the next customer (#12) that just arrived.  Cashier #2 would then go and
  wait in line for the cook.  This continues on through all the cashiers
  in order 1-10, then repeating, 1-10, ad infinitum.
  
  Now even though you have 10 cashiers, most of their time is spent
  waiting to put in an order to the cook.  Starting with customer #11,
  all customers will wait 50-minutes for their meal.  When customer #11
  comes in he/she will immediately get to place an order, but it will take
  the cashier 45-minutes to wait for the cook to become free, and another
  5-minutes for the meal to be cooked.  Same is true for customer #12,
  and all customers from then on.
  
  Now, the question is, could you get the same throughput with fewer
  cashiers?  Say you had 2 cashiers instead.  The 10 customers are
  there waiting.  The 2 cashiers take orders from customers #1 and #2.
  Cashier #1 then gives the order to the cook and waits.  Cashier #2 waits
  in line for the cook behind cashier #1.  At the 5-minute mark, the first
  meal is done.  Cashier #1 delivers the meal to customer #1, then serves
  customer #3.  Cashier #1 then goes and stands in line behind cashier #2.
  At the 10-minute mark, cashier #2's meal is ready - it's delivered to
  customer 

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Sam Horrocks

There is no coffee.  Only meals.  No substitutions. :-)

If we added coffee to the menu it would still have to be prepared by the cook.
Remember that you only have one CPU, and all the perl interpreters large and
small must gain access to that CPU in order to run.

Sam


  I have a wide assortment of queries on a site, some of which take several minutes to 
 execute, while others execute in less than one second. If I understand this analogy 
 correctly, I'd be better off with the current incarnation of mod_perl because there 
 would be more cashiers around to serve the "quick cups of coffee" that many customers 
 request at my diner.
  
  Is this correct?
  
  
  Sam Horrocks wrote:
   
   I think the major problem is that you're assuming that just because
   there are 10 constant concurrent requests, that there have to be 10
   perl processes serving those requests at all times in order to get
   maximum throughput.  The problem with that assumption is that there
   is only one CPU - ten processes cannot all run simultaneously anyways,
   so you don't really need ten perl interpreters.
   
   I've been trying to think of better ways to explain this.  I'll try to
   explain with an analogy - it's sort-of lame, but maybe it'll give you
   a mental picture of what's happening.  To eliminate some confusion,
   this analogy doesn't address LRU/MRU, nor waiting on other events like
   network or disk i/o.  It only tries to explain why you don't necessarily
   need 10 perl-interpreters to handle a stream of 10 concurrent requests
   on a single-CPU system.
   
   You own a fast-food restaurant.  The players involved are:
   
   Your customers.  These represent the http requests.
   
   Your cashiers.  These represent the perl interpreters.
   
    Your cook.  You only have one.  This represents your CPU.
   
   The normal flow of events is this:
   
   A cashier gets an order from a customer.  The cashier goes and
   waits until the cook is free, and then gives the order to the cook.
   The cook then cooks the meal, taking 5-minutes for each meal.
   The cashier waits for the meal to be ready, then takes the meal and
   gives it to the customer.  The cashier then serves another customer.
   The cashier/customer interaction takes a very small amount of time.
   
   The analogy is this:
   
   An http request (customer) arrives.  It is given to a perl
   interpreter (cashier).  A perl interpreter must wait for all other
   perl interpreters ahead of it to finish using the CPU (the cook).
   It can't serve any other requests until it finishes this one.
   When its turn arrives, the perl interpreter uses the CPU to process
   the perl code.  It then finishes and gives the results over to the
   http client (the customer).
   
   Now, say in this analogy you begin the day with 10 customers in the store.
   At each 5-minute interval thereafter another customer arrives.  So at time
   0, there is a pool of 10 customers.  At time +5, another customer arrives.
   At time +10, another customer arrives, ad infinitum.
   
   You could hire 10 cashiers in order to handle this load.  What would
   happen is that the 10 cashiers would fairly quickly get all the orders
   from the first 10 customers simultaneously, and then start waiting for
    the cook.  The 10 cashiers would queue up.  Cashier #1 would put in the
    first order.  Cashiers #2-#10 would wait their turn.  After 5-minutes,
   cashier number 1 would receive the meal, deliver it to customer #1, and
   then serve the next customer (#11) that just arrived at the 5-minute mark.
   Cashier #1 would take customer #11's order, then queue up and wait in
   line for the cook - there will be 9 other cashiers already in line, so
   the wait will be long.  At the 10-minute mark, cashier #2 would receive
   a meal from the cook, deliver it to customer #2, then go on and serve
   the next customer (#12) that just arrived.  Cashier #2 would then go and
   wait in line for the cook.  This continues on through all the cashiers
   in order 1-10, then repeating, 1-10, ad infinitum.
   
   Now even though you have 10 cashiers, most of their time is spent
   waiting to put in an order to the cook.  Starting with customer #11,
   all customers will wait 50-minutes for their meal.  When customer #11
   comes in he/she will immediately get to place an order, but it will take
   the cashier 45-minutes to wait for the cook to become free, and another
   5-minutes for the meal to be cooked.  Same is true for customer #12,
   and all customers from then on.
   
   Now, the question is, could you get the same throughput with fewer
   cashiers?  Say you had 2 cashiers instead.  The 10 customers are
   there waiting.  The 2 cashiers take orders from customers #1 and #2.
   Cashier #1 then gives the order to the cook and waits.  Cashier #2 waits
   in line for the cook behind cashier #1.  At the 5-minute mark, the first
   me

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Sam Horrocks
 in use, I think Speedy would handle requests
  more quickly, which would allow it to handle n requests in less time
  than mod_perl.  Saying it handles more clients implies that the requests
  are simultaneous.  I don't think it can handle more simultaneous
  requests.

 Don't agree.

 Are the speedycgi+Apache processes smaller than the mod_perl
 processes?  If not, the maximum number of concurrent requests you can
 handle on a given box is going to be the same.
   
The size of the httpds running mod_speedycgi, plus the size of speedycgi
perl processes is significantly smaller than the total size of the httpd's
running mod_perl.
   
The reason for this is that only a handful of perl processes are required by
speedycgi to handle the same load, whereas mod_perl uses a perl interpreter
in all of the httpds.
  
  I think this is true at lower levels, but not when the number of
  simultaneous requests gets up to the maximum that the box can handle. 
  At that point, it's a question of how many interpreters can fit in
  memory.  I would expect the size of one Speedy + one httpd to be about
  the same as one mod_perl/httpd when no memory is shared.  With sharing,
  you'd be able to run more processes.

 I'd agree that the size of one Speedy backend + one httpd would be the
 same or even greater than the size of one mod_perl/httpd when no memory
 is shared.  But because the speedycgi httpds are small (no perl in them)
 and the number of SpeedyCGI perl interpreters is small, the total memory
 required is significantly smaller for the same load.
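 To make the memory comparison concrete, here is a back-of-the-envelope
 sketch.  The sizes are invented for illustration (a 1 MB perl-free httpd
 and 9 MB of un-shared perl per interpreter are assumptions, not
 measurements from either system):

```python
# Back-of-the-envelope comparison of the two memory models described above.
# Both size constants are made-up illustrative numbers, not measurements.
PLAIN_HTTPD = 1    # MB: a small frontend httpd with no perl in it
PERL_SIZE   = 9    # MB: un-shared memory of one perl interpreter

def mod_perl_total(httpds):
    # Under mod_perl, every httpd carries a full perl interpreter.
    return httpds * (PLAIN_HTTPD + PERL_SIZE)

def speedycgi_total(httpds, backends):
    # Under speedycgi, frontends are perl-free; only the handful of
    # backend processes pay the perl interpreter cost.
    return httpds * PLAIN_HTTPD + backends * (PLAIN_HTTPD + PERL_SIZE)

print(mod_perl_total(10))      # 100 MB for 10 mod_perl httpd's
print(speedycgi_total(10, 2))  # 30 MB for 10 frontends plus 2 backends
```

 With these assumed numbers the speedycgi arrangement needs well under half
 the memory for the same 10 concurrent connections, which is the shape of
 the argument being made above.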

 Sam



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Sam Horrocks

Right, but this also points out how difficult it is to get mod_perl
tuning just right.  My opinion is that the MRU design adapts more
dynamically to the load.
  
  How would this compare to apache's process management when
  using the front/back end approach?

 Same thing applies.  The front/back end approach does not change the
 fundamentals.

I'd agree that the size of one Speedy backend + one httpd would be the
same or even greater than the size of one mod_perl/httpd when no memory
is shared.  But because the speedycgi httpds are small (no perl in them)
and the number of SpeedyCGI perl interpreters is small, the total memory
required is significantly smaller for the same load.
  
  Likewise, it would be helpful if you would always make the comparison
  to the dual httpd setup that is often used for busy sites.   I think it must
  really boil down to the efficiency of your IPC vs. access to the full
  apache environment.

 The reason I don't include that comparison is that it's not fundamental
 to the differences between mod_perl and speedycgi or LRU and MRU that
 I have been trying to point out.  Regardless of whether you add a
 frontend or not, the mod_perl process selection remains LRU and the
 speedycgi process selection remains MRU.



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Sam Horrocks

A few things:

- In your results, could you add the speedycgi version number (2.02),
  and the fact that this is using the mod_speedycgi frontend.
  The fork/exec frontend will be much slower on hello-world so I don't
  want people to get the wrong idea.  You may want to benchmark
  the fork/exec version as well.

- You may be able to eke out a little more performance by setting
  MaxRuns to 0 (infinite).  This is set for mod_speedycgi using the
  SpeedyMaxRuns directive, or on the command-line using "-r0".
  This setting is similar to the MaxRequestsPerChild setting in apache.

- My tests show mod_perl/speedy much closer than yours do, even with
  MaxRuns at its default value of 500.  Maybe you're running on
  a different OS than I am - I'm using Redhat 6.2.  I'm also running
  one rev lower of mod_perl in case that matters.


  Hey Sam, nice module.  I just installed your SpeedyCGI for a good 'ol
  HelloWorld benchmark  it was a snap, well done.  I'd like to add to the 
  numbers below that a fair benchmark would be between mod_proxy in front 
  of a mod_perl server and mod_speedycgi, as it would be a similar memory 
  saving model ( this is how we often scale mod_perl )... both models would
  end up forwarding back to a smaller set of persistent perl interpreters.
  
  However, I did not do such a benchmark, so SpeedyCGI loses out a
  bit for the extra layer it has to do :(   This is based on the 
  suite at http://www.chamas.com/bench/hello.tar.gz, but I have not
  included the speedy test in that yet.
  
   -- Josh
  
  Test Name                      Test File  Hits/sec  Total Hits  Total Time  sec/Hits  Bytes/Hit
  -----------------------------  ---------  --------  ----------  ----------  --------  ---------
  Apache::Registry v2.01 CGI.pm  hello.cgi  451.9     27128 hits  60.03 sec   0.002213  216 bytes
  Speedy CGI                     hello.cgi  375.2     22518 hits  60.02 sec   0.002665  216 bytes
  
  Apache Server Header Tokens
  ---
  (Unix)
  Apache/1.3.14
  OpenSSL/0.9.6
  PHP/4.0.3pl1
  mod_perl/1.24
  mod_ssl/2.7.1



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-05 Thread Sam Horrocks

 Are the speedycgi+Apache processes smaller than the mod_perl
 processes?  If not, the maximum number of concurrent requests you can
 handle on a given box is going to be the same.
  
The size of the httpds running mod_speedycgi, plus the size of speedycgi
perl processes is significantly smaller than the total size of the httpd's
running mod_perl.
  
  That would be true if you only ran one mod_perl'd httpd, but can you
  give a better comparison to the usual setup for a busy site where
  you run a non-mod_perl lightweight front end and let mod_rewrite
  decide what is proxied through to the larger mod_perl'd backend,
  letting apache decide how many backends you need to have
  running?

 The fundamental differences would remain the same - even in the mod_perl
 backend, the requests will be spread out over all the httpd's that are
 running, whereas speedycgi would tend to use fewer perl interpreters
 to handle the same load.

 But with this setup, the mod_perl backend could probably be set to run
 fewer httpds because it doesn't have to wait on slow clients.  And the
 fewer httpd's you run with mod_perl the smaller your total memory.

The reason for this is that only a handful of perl processes are required by
speedycgi to handle the same load, whereas mod_perl uses a perl interpreter
in all of the httpds.
  
  I always see at least a 10-1 ratio of front-to-back end httpd's when serving
  over the internet.   One effect that is difficult to benchmark is that clients
  connecting over the internet are often slow and will hold up the process
  that is delivering the data even though the processing has been completed.
  The proxy approach provides some buffering and allows the backend
  to move on more quickly.  Does speedycgi do the same?

 There are plans to make it so that SpeedyCGI does more buffering of
 the output in memory, perhaps eliminating the need for caching frontend
 webserver.  It works now only for the "speedy" binary (not mod_speedycgi)
 if you set the BufsizGet value high enough.

 Of course you could add a caching webserver in front of the SpeedyCGI server
 just like you do with mod_perl now.  So yes you can do the same with
 speedycgi now.



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Sam Horrocks

Sorry for the late reply - I've been out for the holidays.

  By the way, how are you doing it?  Do you use a mutex routine that works
  in LIFO fashion?

 Speedycgi uses separate backend processes that run the perl interpreters.
 The frontend processes (the httpd's that are running mod_speedycgi)
 communicate with the backends, sending over the request and getting the output.

 Speedycgi uses some shared memory (an mmap'ed file in /tmp) to keep track
 of the backends and frontends.  This shared memory contains the queue.
 When backends become free, they add themselves at the front of this queue.
 When the frontends need a backend they pull the first one from the front
 of this list.
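 The free-list behaviour just described (free backends push onto the front
 of the queue, and frontends also take from the front) is exactly what makes
 the selection MRU.  A minimal sketch, assuming a Python deque stands in for
 the mmap'ed shared-memory queue (this is an illustration, not SpeedyCGI's
 actual data structure):

```python
from collections import deque

# Stand-in for the shared-memory queue described above: free backends
# add themselves at the FRONT, and frontends pull from the FRONT, so
# the most-recently-used backend is always the one reused (MRU/LIFO).
free = deque()

def backend_done(backend):
    free.appendleft(backend)   # freed backend goes to the front

def get_backend():
    return free.popleft()      # frontend takes from the front -> MRU

backend_done("b1")
backend_done("b2")             # b2 finished most recently
b = get_backend()              # a frontend grabs b2, not b1
backend_done(b)                # b2 finishes and returns to the front
print(get_backend())           # b2 again: the same interpreter is reused
```

 If the backends instead appended themselves at the back (FIFO), the
 frontends would cycle through every interpreter in turn, which is the LRU
 behaviour being attributed to mod_perl.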

  
I am saying that since SpeedyCGI uses MRU to allocate requests to perl
interpreters, it winds up using a lot fewer interpreters to handle the
same number of requests.
  
  What I was saying is that it doesn't make sense for one to need fewer
  interpreters than the other to handle the same concurrency.  If you have
  10 requests at the same time, you need 10 interpreters.  There's no way
  speedycgi can do it with fewer, unless it actually makes some of them
  wait.  That could be happening, due to the fork-on-demand model, although
  your warmup round (priming the pump) should take care of that.

 What you say would be true if you had 10 processors and could get
 true concurrency.  But on single-cpu systems you usually don't need
 10 unix processes to handle 10 requests concurrently, since they get
 serialized by the kernel anyways.  I'll try to show how mod_perl handles
 10 concurrent requests, and compare that to mod_speedycgi so you can
 see the difference.

 For mod_perl, let's assume we have 10 httpd's, h1 through h10,
 when the 10 concurrent requests come in.  h1 has acquired the mutex,
 and h2-h10 are waiting (in order) on the mutex.  Here's how the cpu
 actually runs the processes:

h1 accepts
h1 releases the mutex, making h2 runnable
h1 runs the perl code and produces the results
h1 waits for the mutex

h2 accepts
h2 releases the mutex, making h3 runnable
h2 runs the perl code and produces the results
h2 waits for the mutex

h3 accepts
...

 This is pretty straightforward.  Each of h1-h10 run the perl code
 exactly once.  They may not run exactly in this order since a process
 could get pre-empted, or blocked waiting to send data to the client,
 etc.  But regardless, each of the 10 processes will run the perl code
 exactly once.

 Here's the mod_speedycgi example - it too uses httpd's h1-h10, and they
 all take turns running the mod_speedycgi frontend code.  But the backends,
 where the perl code is, don't have to all be run fairly - they use MRU
 instead.  I'll use b1 and b2 to represent 2 speedycgi backend processes,
 already queued up in that order.

 Here's a possible speedycgi scenario:

h1 accepts
h1 releases the mutex, making h2 runnable
h1 sends a request to b1, making b1 runnable

h2 accepts
h2 releases the mutex, making h3 runnable
h2 sends a request to b2, making b2 runnable

b1 runs the perl code and sends the results to h1, making h1 runnable
b1 adds itself to the front of the queue

h3 accepts
h3 releases the mutex, making h4 runnable
h3 sends a request to b1, making b1 runnable

b2 runs the perl code and sends the results to h2, making h2 runnable
b2 adds itself to the front of the queue

h1 produces the results it got from b1
h1 waits for the mutex

h4 accepts
h4 releases the mutex, making h5 runnable
h4 sends a request to b2, making b2 runnable

b1 runs the perl code and sends the results to h3, making h3 runnable
b1 adds itself to the front of the queue

h2 produces the results it got from b2
h2 waits for the mutex

h5 accepts
h5 releases the mutex, making h6 runnable
h5 sends a request to b1, making b1 runnable

b2 runs the perl code and sends the results to h4, making h4 runnable
b2 adds itself to the front of the queue

 This may be hard to follow, but hopefully you can see that the 10 httpd's
 just take turns using b1 and b2 over and over.  So, the 10 concurrent
 requests end up being handled by just two perl backend processes.  Again,
 this is simplified.  If the perl processes get blocked, or pre-empted,
 you'll end up using more of them.  But generally, the LIFO will cause
 SpeedyCGI to sort-of settle into the smallest number of processes needed for
 the task.

 The difference between the two approaches is that the mod_perl
 implementation forces unix to use 10 separate perl processes, while the
 mod_speedycgi implementation sort-of decides on the fly how many
 different processes are needed.
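 The two scenarios above can be condensed into a toy simulation.  The model
 is my own simplification (10 pre-existing processes, a new request starting
 each step and its process freeing up two steps later, freed processes
 returning to the front of the pool under MRU and to the back under LRU),
 but it shows the trend being argued:

```python
from collections import deque

def distinct_workers(mru, n_requests=10, overlap=2, pool_size=10):
    """Count how many distinct processes actually run perl code.

    Toy model of the traces above: 10 processes already exist, request i
    starts at step i, and its process frees up at step i + overlap, so at
    most `overlap` requests are ever in flight at once on the single CPU.
    """
    pool = deque(range(1, pool_size + 1))
    running = {}   # finish-step -> worker
    used = set()
    for step in range(n_requests + overlap):
        if step in running:
            worker = running.pop(step)
            if mru:
                pool.appendleft(worker)  # MRU: freed worker reused first
            else:
                pool.append(worker)      # LRU: freed worker goes to the back
        if step < n_requests:
            worker = pool.popleft()
            used.add(worker)
            running[step + overlap] = worker
    return len(used)

print(distinct_workers(mru=False))  # 10 -- LRU spreads work over every process
print(distinct_workers(mru=True))   # 2  -- MRU settles on just two processes
```

 With only two requests ever runnable at once, the MRU pool keeps handing
 back the same two workers, while the LRU pool touches all ten - which is
 the h1-h10 versus b1/b2 difference traced out above.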

Please let me know what you think I should change.  So far my
benchmarks only show one trend, but if you can tell me specifically
what I'm doing wrong (and it's something reasonable), I'll try it.
  
  Try setting MinSpareServers 

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Sam Horrocks

This is planned for a future release of speedycgi, though there will
probably be an option to set a maximum number of bytes that can be
bufferred before the frontend contacts a perl interpreter and starts
passing over the bytes.

Currently you can do this sort of acceleration with script output if you
use the "speedy" binary (not mod_speedycgi), and you set the BufsizGet option
high enough so that it's able to buffer all the output from your script.
The perl interpreter will then be able to detach and go handle other
requests while the frontend process waits for the output to drain.

  Perrin Harkins wrote:
   What I was saying is that it doesn't make sense for one to need fewer
   interpreters than the other to handle the same concurrency.  If you have
   10 requests at the same time, you need 10 interpreters.  There's no way
   speedycgi can do it with fewer, unless it actually makes some of them
   wait.  That could be happening, due to the fork-on-demand model, although
   your warmup round (priming the pump) should take care of that.
  
  I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
  if, for instance, a large POST request is being uploaded, this takes a whole
  perl interpreter while the transaction is occurring. This is at least one
  place where a Perl interpreter should not be needed.
  
  Of course, this could be overcome if an HTTP Accelerator is used that takes
  the whole request before passing it to a local httpd, but I don't know of
  any proxies that work this way (AFAIK they all pass the packets as they
  arrive).



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

  Gunther Birznieks wrote:
   Sam just posted this to the speedycgi list just now.
  [...]
   The underlying problem in mod_perl is that apache likes to spread out
   web requests to as many httpd's, and therefore as many mod_perl interpreters,
   as possible using an LRU selection processes for picking httpd's.
  
  Hmmm... this doesn't sound right.  I've never looked at the code in
  Apache that does this selection, but I was under the impression that the
  choice of which process would handle each request was an OS dependent
  thing, based on some sort of mutex.
  
  Take a look at this: http://httpd.apache.org/docs/misc/perf-tuning.html
  
  Doesn't that appear to be saying that whichever process gets into the
  mutex first will get the new request?

 I would agree that whichever process gets into the mutex first will get
 the new request.  That's exactly the problem I'm describing.  What you
 are describing here is first-in, first-out behaviour which implies LRU
 behaviour.

 Processes 1, 2, 3 are running.  1 finishes and requests the mutex, then
 2 finishes and requests the mutex, then 3 finishes and requests the mutex.
 So when the next three requests come in, they are handled in the same order:
 1, then 2, then 3 - this is FIFO or LRU.  This is bad for performance.

  In my experience running
  development servers on Linux it always seemed as if the requests
  would continue going to the same process until a request came in when
  that process was already busy.

 No, they don't.  They go round-robin (or LRU as I say it).

 Try this simple test script:

 use CGI;
 my $cgi = CGI->new;
 print $cgi->header();
 print "mypid=$$\n";

 With mod_perl you constantly get different pids.  With mod_speedycgi you
 usually get the same pid.  This is a really good way to see the LRU/MRU
 difference that I'm talking about.

 Here's the problem - the mutex in apache is implemented using a lock
 on a file.  It's left up to the kernel to decide which process to give
 that lock to.

 Now, if you're writing a unix kernel and implementing this file locking code,
 what implementation would you use?  Well, this is a general purpose thing -
 you have 100 or so processes all trying to acquire this file lock.  You could
 give out the lock randomly or in some ordered fashion.  If I were writing
 the kernel I would give it out in a round-robin fashion (or the
 least-recently-used process as I referred to it before).  Why?  Because
 otherwise one of those processes may starve waiting for this lock - it may
 never get the lock unless you do it in a fair (round-robin) manner.

 The kernel doesn't know that all these httpd's are exactly the same.
 The kernel is implementing a general-purpose file-locking scheme and
 it doesn't know whether one process is more important than another.  If
 it's not fair about giving out the lock a very important process might
 starve.

 Take a look at fs/locks.c (I'm looking at linux 2.3.46).  In there is the
 comment:

 /* Insert waiter into blocker's block list.
  * We use a circular list so that processes can be easily woken up in
  * the order they blocked. The documentation doesn't require this but
  * it seems like the reasonable thing to do.
  */
 static void locks_insert_block(struct file_lock *blocker, struct file_lock *waiter)

  As I understand it, the implementation of "wake-one" scheduling in the
  2.4 Linux kernel may affect this as well.  It may then be possible to
  skip the mutex and use unserialized accept for single socket servers,
  which will definitely hand process selection over to the kernel.

 If the kernel implemented the queueing for multiple accepts using a LIFO
 instead of a FIFO and apache used this method instead of file locks,
 then that would probably solve it.

 Just found this on the net on this subject:
http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.0/0455.html
http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.0/0453.html

   The problem is that at a high concurrency level, mod_perl is using lots
   and lots of different perl-interpreters to handle the requests, each
   with its own un-shared memory.  It's doing this due to its LRU design.
   But with SpeedyCGI's MRU design, only a few speedy_backends are being used
   because as much as possible it tries to use the same interpreter over and
   over and not spread out the requests to lots of different interpreters.
   Mod_perl is using lots of perl-interpreters, while speedycgi is only using
   a few.  mod_perl is requiring that lots of interpreters be in memory in
   order to handle the requests, whereas speedy only requires a small number
   of interpreters to be in memory.
  
  This test - building up unshared memory in each process - is somewhat
  suspect since in most setups I've seen, there is a very significant
  amount of memory being shared between mod_perl processes.

 My message and testing concerns un-shared memory only.  If all of your memory
 is shared, 

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

  Folks, your discussion is not short of wrong statements that can be easily
  proved, but I don't find it useful.

 I don't follow.  Are you saying that my conclusions are wrong, but
 you don't want to bother explaining why?
 
 Would you agree with the following statement?

Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory 



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

I've put your suggestion on the todo list.  It certainly wouldn't hurt to
have that feature, though I think memory sharing becomes a much much smaller
issue once you switch to MRU scheduling.

At the moment I think SpeedyCGI has more pressing needs though - for
example multiple scripts in a single interpreter, and an NT port.


  I think you could actually make speedycgi even better for shared memory 
  usage by creating a special directive which would indicate to speedycgi to 
  preload a series of modules. And then to tell speedy cgi to do forking of 
  that "master" backend preloaded module process and hand control over to 
  that forked process whenever you need to launch a new process.
  
  Then speedy would potentially have the best of both worlds.
  
  Sorry I cross posted your thing. But I do think it is a problem of mod_perl 
  also, and I am happily using speedycgi in production on at least one 
  commercial site where mod_perl could not be installed so easily because of 
  infrastructure issues.
  
  I believe your mechanism of round robining among MRU perl interpreters is 
  actually also accomplished by ActiveState's PerlEx (based on 
  Apache::Registry but using multithreaded IIS and pool of Interpreters). A 
  method similar to this will be used in Apache 2.0 when Apache is 
  multithreaded and therefore can control within program logic which Perl 
  interpeter gets called from a pool of Perl interpreters.
  
  It just isn't so feasible right now in Apache 1.0 to do this. And sometimes 
  people forget that mod_perl came about primarily for writing handlers in 
  Perl, not as an application environment, although it is very good for the 
  latter as well.
  
  I think SpeedyCGI needs more advocacy from the mod_perl group because put 
  simply speedycgi is way easier to set up and use than mod_perl and will 
  likely get more PHP people using Perl again. If more people rely on Perl 
  for their fast websites, then you will get more people looking for more 
  power, and by extension more people using mod_perl.
  
  Whoops... here we go with the advocacy thing again.
  
  Later,
  Gunther
  
  At 02:50 AM 12/21/2000 -0800, Sam Horrocks wrote:
 Gunther Birznieks wrote:
  Sam just posted this to the speedycgi list just now.
 [...]
  The underlying problem in mod_perl is that apache likes to spread out
  web requests to as many httpd's, and therefore as many mod_perl 
   interpreters,
  as possible using an LRU selection processes for picking httpd's.

 Hmmm... this doesn't sound right.  I've never looked at the code in
 Apache that does this selection, but I was under the impression that the
 choice of which process would handle each request was an OS dependent
 thing, based on some sort of mutex.

 Take a look at this: http://httpd.apache.org/docs/misc/perf-tuning.html

 Doesn't that appear to be saying that whichever process gets into the
 mutex first will get the new request?
  
I would agree that whichever process gets into the mutex first will get
the new request.  That's exactly the problem I'm describing.  What you
are describing here is first-in, first-out behaviour which implies LRU
behaviour.
  
Processes 1, 2, 3 are running.  1 finishes and requests the mutex, then
2 finishes and requests the mutex, then 3 finishes and requests the mutex.
So when the next three requests come in, they are handled in the same order:
1, then 2, then 3 - this is FIFO or LRU.  This is bad for performance.
  
 In my experience running
 development servers on Linux it always seemed as if the requests
 would continue going to the same process until a request came in when
 that process was already busy.
  
No, they don't.  They go round-robin (or LRU as I say it).
  
Try this simple test script:
  
use CGI;
my $cgi = CGI->new;
print $cgi->header();
print "mypid=$$\n";
  
With mod_perl you constantly get different pids.  With mod_speedycgi you
usually get the same pid.  This is a really good way to see the LRU/MRU
difference that I'm talking about.
  
Here's the problem - the mutex in apache is implemented using a lock
on a file.  It's left up to the kernel to decide which process to give
that lock to.
  
Now, if you're writing a unix kernel and implementing this file locking 
   code,
what implementation would you use?  Well, this is a general purpose thing -
you have 100 or so processes all trying to acquire this file lock.  You 
   could
give out the lock randomly or in some ordered fashion.  If I were writing
the kernel I would give it out in a round-robin fashion (or the
least-recently-used process as I referred to it before).  Why?  Because
otherwise one of those processes may starve waiting for this lock - it may
never get the lock unless you do it in a fair (round-robin) manner.
  
The

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

I really wasn't trying to work backwards from a benchmark.  It was
more of an analysis of the design, and the benchmarks bore it out.
It's sort of like coming up with a theory in science - if you can't get
any experimental data to back up the theory, you're in big trouble.
But if you can at least point out the existence of some experiments
that are consistent with your theory, it means your theory may be true.

The best would be to have other people do the same tests and see if they
see the same trend.  If no-one else sees this trend, then I'd really
have to re-think my analysis.

Another way to look at it - as you say below MRU is going to be in
mod_perl-2.0.  ANd what is the reason for that?  If there's no performance
difference between LRU and MRU why would the author bother to switch
to MRU.  So, I'm saying there must be some benchmarks somewhere that
point out this difference - if there weren't any real-world difference,
why bother even implementing MRU.

I claim that my benchmarks point out this difference between MRU over
LRU, and that's why my benchmarks show better performance on speedycgi
than on mod_perl.

Sam

- SpeedyCGI uses MRU, mod_perl-2 will eventually use MRU.  
  On Thu, 21 Dec 2000, Sam Horrocks wrote:
  
 Folks, your discussion is not short of wrong statements that can be easily
 proved, but I don't find it useful.
   
I don't follow.  Are you saying that my conclusions are wrong, but
you don't want to bother explaining why?

Would you agree with the following statement?
   
   Under apache-1, speedycgi scales better than mod_perl with
   scripts that contain un-shared memory 
  
  I don't know. It's easy to give a simple example and claim being better.
  So far whoever tried to show by benchmarks that he is better, most often
  was proved wrong, since the technologies in question have so many
  features, that I believe no benchmark will prove any of them absolutely
  superior or inferior. Therefore I said that trying to tell that your grass
  is greener is doomed to fail if someone has time on his hands to prove you
  wrong. Well, we don't have this time.
  
  Therefore I'm not trying to prove you wrong or right. Gunther's point of
  the original forward was to show things that mod_perl may need to adopt to
  make it better. Doug already explained in his paper that the MRU approach
  has been already implemented in mod_perl-2.0. You could read it in the
  link that I've attached and the quote that I've quoted.
  
  So your conclusions about MRU are correct and we have it implemented
  already (well very soon now :). I apologize if my original reply was
  misleading.
  
  I'm not telling that benchmarks are bad. What I'm telling is that it's
  very hard to benchmark things which are different. You benefit the most
  from the benchmarking when you take the initial code/product, benchmark
  it, then you try to improve the code and benchmark again to see whether it
  gave you any improvement. That's the area where the benchmarks rule and
  they are fair, because you test the same thing. Well, you could read more
  of my rambling about benchmarks in the guide.
  
  So if you find some cool features in other technologies that mod_perl
  might adopt and benefit from, don't hesitate to tell the rest of the gang.
  
  
  
  Something that I'd like to comment on:
  
  I find it a bad practice to quote one sentence from a person's post and
  follow up on it. Someone from the list has sent me this email (SB == me):
  
  SB I don't find it useful
  
  and follow up. Why not use a single letter:
  
  SB I
  
  and follow up? It's so much easier to flame on things taken out of their
  context.
  
  It has happened more than once that people have done this to each other
  here on the list; I think I did too. So please be more careful when taking
  things out of context. Thanks a lot, folks!
  
  Cheers...
  
  _
  Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
  http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
  mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
  http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  
  



Re: ANNOUNCEMENT: NEW VERSION: HTML::Template 2.1

2000-12-18 Thread Sam Tregar

On Mon, 18 Dec 2000, Eric Cholet wrote:

  ANNOUNCEMENT: NEW VERSION: HTML::Template 2.1

 Does it support ELSIF yet?

Nope, but you can build your own now with the new filter option.  I expect
someone to post up an "ELSIF" => "ELSE IF" filter to the HTML::Template
mailinglist any time now.
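A sketch of the kind of ELSIF filter Sam is suggesting, using the new 'filter' option (per the 2.1 changes, a filter is a code ref handed a scalar ref to the raw template text before parsing). The rewrite rule here is illustrative and deliberately incomplete - a real filter would also have to insert the extra closing /TMPL_IF tags that the expansion creates.

```perl
use strict;

# Hypothetical ELSIF -> ELSE + nested IF rewrite; tag names follow
# HTML::Template conventions, the rule itself is an illustration only.
my $elsif_filter = sub {
    my $text_ref = shift;                 # filter receives a scalar ref
    $$text_ref =~ s{<TMPL_ELSIF\s+NAME=(\w+)>}
                   {<TMPL_ELSE><TMPL_IF NAME=$1>}g;
};

# It would be passed to the constructor like this (filename invented):
#   my $t = HTML::Template->new(filename => 'page.tmpl',
#                               filter   => $elsif_filter);

my $text = '<TMPL_IF NAME=A>a<TMPL_ELSIF NAME=B>b</TMPL_IF>';
$elsif_filter->(\$text);
print $text, "\n";    # the ELSIF tag is now ELSE plus a nested IF
```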

-sam




error messages..

2000-09-28 Thread Sam Park

Anybody knows why I'm getting this messages...???
5744 Pinging 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Apache::DBI already connected to 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Pinging 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Apache::DBI already connected to 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Pinging 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Apache::DBI already connected to 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Pinging 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Apache::DBI already connected to 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Pinging 'prodcrank.excite.com~crank~crank~RaiseError=1'
5744 Apache::DBI already connected to 'prodcrank.excite.com~crank~crank~RaiseError=1'

John Saylor wrote:

 Hi

 ( 00.09.28 17:29 -0500 ) Philip Molter:
  Recently, one of my co-employees has been messing around with Zope
  (http://www.zope.org) and I was wondering if there's a package that
  provides similar functionality using mod_perl and Apache rather
  than its own web server.

 That would be mason
 http://www.masonhq.com/

 --
 \js

 A steak a day keeps the cows dead.




ANNOUNCEMENT: NEW VERSION: HTML::Template 2.0

2000-09-17 Thread Sam Tregar

ANNOUNCEMENT: NEW VERSION: HTML::Template 2.0

HTML::Template - a Perl module to use HTML Templates

CHANGES

2.0

- New Feature: new 'search_path_on_include' option (Jody Biggs)

- New Feature: much requested variable __ODD__ added to set of
  loop_context_vars.

- New Feature: new 'no_includes' option (Scott Guelich)

- Doc Addition: Added link to Japanese translation (Kawai Takanori)

- Bug Fix: loop_context_vars broken again (T.J. Mather, Martin Schroth
  and Dave Wolfe)

- Bug Fix: vanguard_compatibility_mode was broken on first line of
  included files. (uchum)


DESCRIPTION

This module attempts to make using HTML templates simple and natural.  It
extends standard HTML with a few new HTML-esque tags - TMPL_VAR,
TMPL_LOOP, TMPL_INCLUDE, TMPL_IF, TMPL_UNLESS and TMPL_ELSE.
The file written with HTML and these new tags is called a template.
It is usually saved separate from your script - possibly even created
by someone else!  Using this module you fill in the values for the
variables, loops and branches declared in the template.  This allows
you to separate design - the HTML - from the data, which you generate
in the Perl script.
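The workflow described above can be sketched in a few lines. The template text and parameter name here are invented, and the scalarref option is used only so the example is self-contained; normally the designer's file would be loaded with the filename option.

```perl
use strict;
use HTML::Template;   # assumes the module is installed

# Minimal fill-in-the-template sketch (template and names are invented).
my $text     = 'Hello, <TMPL_VAR NAME=USER>!';
my $template = HTML::Template->new(scalarref => \$text);
$template->param(USER => 'Alice');
print $template->output, "\n";
```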

A Japanese translation of the documentation is available at:

   http://member.nifty.ne.jp/hippo2000/perltips/html/template.htm

This module is licenced under the GPL.  See the LICENCE section of the
README.


AVAILABILITY

This module is available on SourceForge.  Download it at:

  http://download.sourceforge.net/HTML-Template/HTML-Template-2.0.tar.gz

The module is also available on CPAN.  You can get it using CPAN.pm or
go to:

  http://www.cpan.org/authors/id/S/SA/SAMTREGAR/


MOTIVATION

It is true that there are a number of packages out there to do HTML
templates.  On the one hand you have things like HTML::Embperl which
allows you to freely mix Perl with HTML.  On the other hand lie
home-grown variable substitution solutions.  Hopefully the module can
find a place between the two.

One advantage of this module over a full HTML::Embperl-esque solution
is that it enforces an important divide - design and programming.  By
limiting the programmer to just using simple variables and loops in
the HTML, the template remains accessible to designers and other
non-perl people.  The use of HTML-esque syntax goes further to make
the format understandable to others.  In the future this similarity
could be used to extend existing HTML editors/analyzers to support
this syntax.

An advantage of this module over home-grown tag-replacement schemes is
the support for loops.  In my work I am often called on to produce
tables of data in html.  Producing them using simplistic HTML
templates results in CGIs containing lots of HTML since the HTML
itself could not represent loops.  The introduction of loop statements
in the HTML simplifies this situation considerably.  The designer can
layout a single row and the programmer can fill it in as many times as
necessary - all they must agree on is the parameter names.

For all that, I think the best thing about this module is that it does
just one thing and it does it quickly and carefully.  It doesn't try
to replace Perl and HTML, it just augments them to interact a little
better.  And it's pretty fast.


DOCUMENTATION

The documentation is in Template.pm in the form of POD format
perldocs.  Even the above text might be out of date, so be sure to
check the perldocs for the straight truth.


CONTACT INFO

This module was written by Sam Tregar ([EMAIL PROTECTED]) for Vanguard
Media (http://www.vm.com).  You can join the HTML::Template
mailing-list by sending a blank message to
[EMAIL PROTECTED]





Re: HTML Template Comparison Sheet ETA

2000-09-04 Thread Sam Tregar

On Mon, 4 Sep 2000, Nelson Correa de Toledo Ferraz wrote:

 I still think that this:
 
 <? foreach $name (@names) { ?>
   Name: <?= $name ?> <P>
   Job: <?= $job{$name} ?> <P>
 <? } ?>
 
 Is cleaner (well, as much as perl can be :-)) than this:
 
 <TMPL_LOOP NAME=EMPLOYEE_INFO>
   Name: <TMPL_VAR NAME=NAME> <P>
   Job: <TMPL_VAR NAME=JOB> <P>
 </TMPL_LOOP>

That's because you're a Perl programmer.  The template syntax wasn't
designed for your tastes.  It was designed for the HTML designers you will
eventually have to work with - whether while you're actually on the project
or when it moves into maintenance and needs design changes.

 And the first one has two major advantages: 1) requires less code in the
 Perl modules and 2) allows designers to know how Perl looks like.

1) The more code you put in your modules the better.  This promotes code
reuse and better documentation.

2) Say what?  Are you running a school or trying to get things done?  Putting
raw Perl in your HTML isn't helping your designers in any way I understand.

-sam





Re: HTML Template Comparison Sheet ETA

2000-09-04 Thread Sam Tregar

On Mon, 4 Sep 2000, Perrin Harkins wrote:

 Embedded perl is absolutely the best answer sometimes, but don't
 underestimate the value of turning your example into this:
 
 [% FOREACH thing = list %]
   <a href="[% thing.url %]"><b>[% thing.name %]</b></a>
 [% END %]

That isn't really much better, in my opinion.  It's still too much of a
departure from the HTML around it.  Contrast the above to HTML::Template's
looping:

  <TMPL_LOOP list>
     <a href="<TMPL_VAR url>"><b><TMPL_VAR name></b></a>
  </TMPL_LOOP>

With a little education an HTML designer can learn to manipulate the
template syntax.  You'll have to teach them to program before they can
deal with a full "foreach" no matter how you dress it up.

-sam





Re: HTML Template Comparison Sheet ETA

2000-09-03 Thread Sam Tregar

On Mon, 28 Aug 2000, Nelson Correa de Toledo Ferraz wrote:

  "This approach has two problems: First, their little language is
   crippled. If you need to do something the author hasn't thought of, you
   lose. Second: Who wants to learn another language? You already know
   Perl, so why not use it?"

To which HTML::Template responds: "Sure you know Perl, but does the HTML
designer you're working with?"  HTML::Template has a simple, HTML-esque
syntax for its template files that is aimed at HTML designers.  Keep the
Perl in your modules and keep the HTML in your template files.  Go the
other direction and soon enough you've got your programmers changing 
font colors.

You can put that in your sheet and, er... smoke it?
-sam





Re: howto config apache to allow perl to filter php pages

2000-07-22 Thread Sam Carleton

Rob Tanner wrote:
 
 --On 07/16/00 16:11:07 -0400 Sam Carleton [EMAIL PROTECTED] wrote:
 
  I would like perl to process a php page before or after the php
   interpreter gets its hands on the file.  I am trying to add a navbar to
  the PHP code.  How would I go about doing that?
 
  Sam
 
 The simple answer is wait for Apache 2.x, but since that's just barely
 alpha now, that's a looong [sic] while away.
 
 The issue in Apache 1.x is that you can use only one handler in any
 particular phase to process your request.  Thus, php or mod_perl (or cgi,
 depending on how you meant to invoke perl).
 
 But the real question is why?  I have never done a navbar on a page (most
 of my web work is server app development, not pages), si I may be making
 some wrong assumptions here.  If you are creating the page with a cgi or a
 mod-perl app, I would think you would be able to do the whole thing without
 ever using PHP.
 
 But, if what you are really doing is displaying a page with server-side
 components, PHP is a much better choice by far than cgi or mod-perl.  What
 are you trying to do that php won't do for you?

I was reading the O'Reilly book "Writing Apache Modules with Perl and C"
and discovered the navbar example.  I really like how Stein/MacEachern
designed the navbar code.  Once the code was written, it read in a
configuration file so that it would know the url/name of all items on
the navbar.  This config file was passed to the navbar code via a variable
in the apache directive.  I put the navbar code into the essi example
(enhanced server-side include) so that one would only need to add the
correct comment <!--#NAVBAR--> in the html file.

I think this is a great idea.  If I could get the navbar/essi code to
parse the php page either before or after php processed the page, the
navbar would always be there.

The bottom line:  I would like to have some navigation code that is
totally separate from everything so that I don't have to worry about
broken links and stuff like that.  I also want the ability to change the
look/feel of the navigation without affecting everything. 

sam



my first attempt at a perl script (syntax error)

2000-07-22 Thread Sam Carleton

I have begun writing my first mod_perl module.  I figured that I would
get the logic working in plain Perl first, considering I am new at the
language in general.  Well, lo and behold, I have a few syntax errors which I
don't know how to resolve.  One issue is how to declare a local
variable, and the other is a problem with my open statement.  Can
someone take a look and help me out?  Here is the code:


#! /usr/bin/perl

use strict;

sub process {
local $str = @_;
return $str;
}


my $filename="testinput.txt";
my $fh;

unless( open $fh, $filename) {
print STDERR "Cannot open file [$filename]\n";
}

while(<$fh>) {
chomp;
print process($_);
}

1;
__END__

This is the error I am getting:

Global symbol "$str" requires explicit package name at ./test.pl line 6.
Bareword found where operator expected at ./test.pl line 18, near "$fh"
(Missing operator before fh?)
Bareword "fh" not allowed while "strict subs" in use at ./test.pl line
18.
syntax error at ./test.pl line 18, near "$fh"
syntax error at ./test.pl line 21, near "}"
Execution of ./test.pl aborted due to compilation errors.
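For reference, a corrected sketch of the script above. Under `use strict` the variable needs `my` (with a list assignment to grab the first argument), and the read loop wants the `<$fh>` readline operator, whose angle brackets the list archive has likely stripped from the original post.

```perl
#!/usr/bin/perl
use strict;

# Corrected version of Sam's script; filename is the one from his post.
sub process {
    my ($str) = @_;        # list assignment: take the first argument
    return $str;
}

my $filename = "testinput.txt";
open my $fh, $filename
    or die "Cannot open file [$filename]: $!\n";

while (my $line = <$fh>) { # readline operator, one line per iteration
    chomp $line;
    print process($line), "\n";
}
```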



Re: my first attempt at a perl script (syntax error)

2000-07-22 Thread Sam Carleton

Alex Farber wrote:
 
 Hi Sam,
 
 Sam Carleton wrote:
  I have a few syntax error which I
  don't know how to resolve.  One issue is how to declare a local
  variable, and the other is a problem with my open statement.
 
 maybe you should read some introductory Perl
 books, like http://www.effectiveperl.com/toc.html or
 http://www.ebb.org/PickingUpPerl/pickingUpPerl.html
 
 A good place to ask is news:comp.lang.perl.misc (after you've
 read http://www.perl.com/pub/doc/manual/html/pod/perlfaq.html )

Maybe I have read things like "Programming Perl" from O'Reilly and
"Writing Apache Modules with Perl and C", am tired of reading page after
page and want to do some real coding.  Maybe I thought that folks in the
mod_perl mailing list would be understanding of someone who has spent
many years in another language and needs a little help overcoming some
syntax issues.

One thing is for sure, I did not expect to get a response such as yours,
one that says: "Go  yourself, if you don't know the language we sure
as  aren't going to help your ___!!!"  

Live and learn, I guess...

Sam



Re: my first attempt at a perl script (syntax error)

2000-07-22 Thread Sam Carleton

Sam Carleton wrote:
 
 Alex Farber wrote:
 
  A good place to ask is news:comp.lang.perl.misc (after you've
  read http://www.perl.com/pub/doc/manual/html/pod/perlfaq.html )
 
 Maybe I have read things like "Programming Perl" from O'Reilly and
  "Writing Apache Modules with Perl and C", am tired of reading page after
 page and want to do some real coding.  Maybe I thought that folks in the
 mod_perl mailing list would be understanding of someone who has spent
 many years in another language and needs a little help overcoming some
 syntax issues.
 
  One thing is for sure, I did not expect to get a response such as yours,
 one that says: "Go  yourself, if you don't know the language we sure
 as  aren't going to help your ___!!!"
 
 Live and learn, I guess...

There were a number of people who did reply privately; I checked my
mod_perl folder first.  I would like to thank everyone who replied
kindly to my question.  My questions were answered.  Thank you; I
knew that most participants of the mailing list were willing to help
even when the subject was a bit off topic.  Again, thank you all for the
help!

Sam



Re: problems with code in Writing Apache Modules with Perl and C

2000-07-16 Thread Sam Carleton

m m wrote:
 
 folks allow me, I'm the other newbie who was grappling
 with Apache::NavBar the other day :-)
 Ged will be proud, I persevered as he advised ;-)
 
 Sam, new to perl, welcome.
 This may not be the canonically right answer, but for a
 simple task like you're asking, you can just "warn"
 stuff to your error logs.
 so for example (if I understood your initial request
 correctly), this piece of code will show you the line
 you are reading from your configuration file and then
 the url,location match if any.
 
  while (<$fh>) {
     chomp;
     s/^\s+//; s/\s+$//;   # fold leading and trailing whitespace
     next if /^#/ || /^$/; # skip comments and empty lines
     next unless my($url, $label) = /^(\S+)\s+(.+)/;
     warn "here is the line $_\n";
     if ( my($url,$label) = /^(\S+)\s+(.+)/ ) {
         warn "here are the matches $url, $label\n";
     } else {
         next;
     }
     push @c, $url; # keep the url in an ordered array
     $c{$url} = $label; # keep its label in a hash
  }

Well, I took your if statement and cut/pasted it into my code; things
still did not work.  So I cut the NavBar object out of that file and put
it into a normal perl file.  It works and here it is:

sub new {
my ($class,$conf_file) = @_;
my (@c,%c);
my $url;
my $label;
print "filename = [$conf_file]\n";
open fh, $conf_file or return;
while (<fh>) {
chomp;
s/^\s+//; s/\s+$//;   #fold leading and trailing whitespace
next if /^#/ || /^$/; # skip comments and empty lines

#   next unless my($url, $label) = /^(\S+)\s+(.+)/;

print "here is the line $_\n";
if ( ($url,$label) = /^(\S+)\s+(.+)/ ) {
print "here are the matches [$url], [$label]\n";
} else {
next;
}

print "url = [$url], label = [$label]\n";

push @c, $url; # keep the url in an ordered array
$c{$url} = $label; # keep its label in a hash
}
return bless {'urls' => \@c,
  'labels' => \%c,
  'modified' => (stat $conf_file)[9]}, $class;
}

Well, when I put a debug line right after the chomp in the mod_perl
code, using Apache::File to open the conf_file, it displays the whole
conf_file, not just one line.  Any thoughts on how to read through the
conf_file one line at a time?

Sam
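One hedged guess at the symptom described above: if anything in the persistent mod_perl interpreter has set the input record separator (`$/`) to undef ("slurp mode"), the readline operator returns the whole file in one read. Localizing `$/` around the loop is a cheap guard; the filename below is a placeholder, and this is a guess about the cause, not a confirmed diagnosis.

```perl
use strict;

# 'navbar.conf' is a placeholder for the real configuration file.
open my $fh, 'navbar.conf' or die "can't open: $!";
{
    local $/ = "\n";            # guard against a clobbered separator
    while (my $line = <$fh>) {
        chomp $line;
        print "one line: $line\n";
    }
}
close $fh;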



howto config apache to allow perl to filter php pages

2000-07-16 Thread Sam Carleton

I would like perl to process a php page before or after the php
interpreter gets its hands on the file.  I am trying to add a navbar to
the PHP code.  How would I go about doing that?

Sam



getting mod_perl configured on FreeBSD

2000-07-14 Thread Sam Carleton

I have successfully gotten Apache/mod_perl to compile under Linux many a
time.  This is my first attempt at compiling it on FreeBSD and I am
having problems.  The problem is that when I do the "make test", apache
never starts up.  I had once run into this on Linux and that was because
the  .makepl_args.mod_perl was pointing to a non-existing layout file
and I did not catch the error from the "perl Makefile.PL".  But I have
looked and looked at the output of the "perl Makefile.PL" and see nothing
wrong.  I am going to post the output of "perl Makefile.PL", along with
my .makepl_args.mod_perl and my layout file in hopes that one of you can
find my error.  Thanks

output from "perl Makefile.PL"
Will run tests as User: 'nobody' Group: 'wheel'
Configuring for Apache, Version 1.3.12
 + using installation path layout: maineville
(/usr/src/apache.config.layout)
 + activated perl module (modules/perl/libperl.a)
Creating Makefile
Creating Configuration.apaci in src
 + enabling mod_so for DSO support
  + id: mod_perl/1.24
  + id: Perl/5.00503 (freebsd) [perl]
Creating Makefile in src
 + configured for FreeBSD 4.0 platform
 + setting C pre-processor to cc -E
 + checking for system header files
 + adding selected modules
o rewrite_module uses ConfigStart/End
  enabling DBM support for mod_rewrite
o dbm_auth_module uses ConfigStart/End
o perl_module uses ConfigStart/End
  + mod_perl build type: DSO
  + setting up mod_perl build environment
  + adjusting Apache build environment

** Error: Cannot build mod_include with Perl support (USE_PERL_SSI) **
** when mod_perl is compiled as DSO because of cross-module calls.  **
** Ignoring PERL_SSI flag now.  **

 + checking sizeof various data types
 + doing sanity check on compiler and options
Creating Makefile in src/support
Creating Makefile in src/os/unix
Creating Makefile in src/ap
Creating Makefile in src/main
Creating Makefile in src/modules/standard
Creating Makefile in src/modules/proxy
Creating Makefile in src/modules/perl
Reading Makefile.PL args from ../.makepl_args.mod_perl
Will configure via APACI
cp apaci/Makefile.libdir
/usr/src/apache/src/modules/perl/Makefile.libdir
cp apaci/Makefile.tmpl /usr/src/apache/src/modules/perl/Makefile.tmpl
cp apaci/README /usr/src/apache/src/modules/perl/README
cp apaci/configure /usr/src/apache/src/modules/perl/configure
cp apaci/libperl.module /usr/src/apache/src/modules/perl/libperl.module
cp apaci/mod_perl.config.sh
/usr/src/apache/src/modules/perl/mod_perl.config.sh
cp apaci/load_modules.pl.PL
/usr/src/apache/src/modules/perl/load_modules.pl.PL
cp apaci/find_source.PL /usr/src/apache/src/modules/perl/find_source.PL
cp apaci/apxs_cflags.PL /usr/src/apache/src/modules/perl/apxs_cflags.PL
cp apaci/mod_perl.exp /usr/src/apache/src/modules/perl/mod_perl.exp
PerlDispatchHandler.enabled
PerlChildInitHandlerenabled
PerlChildExitHandlerenabled
PerlPostReadRequestHandler..enabled
PerlTransHandlerenabled
PerlHeaderParserHandler.enabled
PerlAccessHandler...enabled
PerlAuthenHandler...enabled
PerlAuthzHandlerenabled
PerlTypeHandler.enabled
PerlFixupHandlerenabled
PerlHandler.enabled
PerlLogHandler..enabled
PerlInitHandler.enabled
PerlCleanupHandler..enabled
PerlRestartHandler..enabled
PerlStackedHandlers.enabled
PerlMethodHandlers..enabled
PerlDirectiveHandlers...enabled
PerlTableApienabled
PerlLogApi..enabled
PerlUriApi..enabled
PerlUtilApi.enabled
PerlFileApi.enabled
PerlConnectionApi...enabled
PerlServerApi...enabled
PerlSectionsenabled
PerlSSI.enabled
(cd /usr/src/apache && CC="cc" ./configure
--activate-module=src/modules/perl/libperl.a --disable-rule=EXPAT
--with-layout=/usr/src/apache.config.layout:maineville
--server-uid=wwwrun --server-gid=daemon --enable-module=most
--enable-shared=max --prefix=/data01/maineville)
Checking CGI.pm VERSION..ok
Checking for LWP::UserAgent..ok
Checking for HTML::HeadParserok
'-ADD_MODULE' is not a known MakeMaker parameter name.
Writing Makefile for Apache
Writing Makefile for Apache::Connection
Writing Makefile for Apache::Constants
Writing Makefile for Apache::File
Writing Makefile for Apache::Leak
Writing Makefile for Apache::Log
Writing Makefile for Apache::ModuleConfig
Writing Makefile for Apache::PerlRunXS
Writing Makefile for Apache::Server
Writing Makefile for Apache::Symbol
Writing Makefile for Apache::Table
Writing Makefile for Apache::URI
Writing Makefile for Apache::Util
Writing Makefile for mod_perl

.makepl_args.mod_perl
# File: .makepl_args.mod_perl
# enable all phase callbacks, API modules and misc features
EVERYTHING=1

# tell runtime diagnostics to 

redirecting a domain

2000-07-14 Thread Sam Carleton

I have an apache question and I have NO idea where to post it.  Is there
a newsgroup or mailing list simply for apache?

I have multiple domain names: domain.net and domain.org.  I would like to
configure apache such that when someone goes to www.domain.org, they are
redirected to www.domain.net.  They are both the exact same web site; I
simply want the domain name to show up as www.domain.net.  Any thoughts
on how to do that?
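One common way to do this is a name-based virtual host that issues a redirect. A hedged sketch of the Apache 1.3-style configuration; the domain names are the placeholders from the question:

```apache
# Catch requests for the .org name and bounce them to the .net name.
<VirtualHost *>
    ServerName www.domain.org
    Redirect permanent / http://www.domain.net/
</VirtualHost>
```

The `Redirect` directive comes from mod_alias; the browser's address bar then shows www.domain.net, which is the behavior asked for.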

Sam



access control by using a name list File in AFS?

2000-07-12 Thread Sam Xie

Hi! There,
I am trying to set up web access control by using a name list file in AFS.
I don't know how to handle this issue.  I would be grateful if somebody
could help.
Many Thanks!
Sam



Re: What is *.xs file?

2000-07-06 Thread Sam Xie

 Umm this list is for perl as a module in apache
 not modules for perl...
 
Yes! It is for a perl module in apache.  I am going to write a perl module
which is capable of reading a name list file in AFS for authentication and
authorization.  That's why I am studying this issue.
Thanks for your help!
Sam




What is *.xs file?

2000-07-05 Thread Sam Xie

Hi! There,
I am learning to write a perl module.  I saw someone's AFS.pm module,
in which he wrote an AFS.xs file in C.  I don't know what the .xs extension
means or how to write one.  If someone knows, I would be grateful for help
in understanding it!
Thanks!
Sam
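For readers with the same question: an .xs file is glue code in the XS language, which the xsubpp tool translates into the C code that registers a C function with the Perl interpreter. A minimal illustrative sketch (the module name "Demo" and the function are invented):

```
/* Demo.xs - minimal XS glue; built via h2xs/MakeMaker, which runs
 * xsubpp to turn this into plain C. */
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

MODULE = Demo   PACKAGE = Demo

int
add(a, b)
        int a
        int b
    CODE:
        RETVAL = a + b;
    OUTPUT:
        RETVAL
```

From Perl this would be called as `Demo::add(2, 3)` once the module is built; perlxs and perlxstut are the standard references.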



Getting DB2U support

2000-07-04 Thread Sam Carleton

Folks, I have installed mod_perl and I would like to access a DB2
server.  What perl modules do I need to install?  Are there any
tutorials out there to give me the basics of access DB2 from perl?

Sam



modperl1.24 with apache1.3.12

2000-06-16 Thread Sam Park


Do you know how I can install mod_perl 1.24 with apache 1.3.12?
I followed the instructions but it's giving me this error.
If I run the make, then I get this error:
(cd ../apache_1.3.12/src && make CC="cc";)
=== os/unix
cc -c  -I../../os/unix -I../../include   -DSOLARIS2=260 -DMOD_PERL
-DUSE_EXPAT -
I../../lib/expat-lite -DNO_DL_NEEDED -DMOD_PERL os.c
/usr/ucb/cc:  language optional software package not installed
*** Error code 1
make: Fatal error: Command failed for target `os.o'
Current working directory /excite/adm/sam/apache_1.3.12/src/os/unix
*** Error code 1
make: Fatal error: Command failed for target `subdirs'
Current working directory /excite/adm/sam/apache_1.3.12/src
*** Error code 1
make: Fatal error: Command failed for target `apache_httpd'






why a mod_perl module,Footer.pm stop cgi-bin?

2000-05-17 Thread Sam Xie

Hello! All,
   I am a new user of mod_perl, and am studying it from the book "Writing Apache Modules
with Perl and C".  I installed a handler, Footer.pm, in apache by embedding the following
lines in the file apache.conf:
   Alias / /usr/local/share/apache/htdocs/
   <Location />
   SetHandler    perl-script
   PerlHandler   Apache::Footer
   </Location>
It works, but the scripts in /cgi-bin/ do not function at all!  If I comment this
handler out, the cgi-bin works again.  I don't know why.  Can somebody tell me the
reason and how to overcome this side effect?  The code and the system information
are appended to this email as follows.
   Many Thanks!
Sam Xie

Operating System:  FreeBSD-4.0 Current
Perl Version:  Perl 5.005_03
Apache Version:Apache13-php4
Mod_perl version:  mod_perl-1.22

Perl Handler:   Footer.pm 
-Code -
package Apache::Footer;
use strict;
use Apache::Constants qw(:common);
use Apache::File ();

sub handler {
    my $r = shift;
    return DECLINED unless $r->content_type() eq 'text/html';
    my $file = $r->filename;
    unless (-e $r->finfo) {
        $r->log_error("File does not exist: $file");
        return NOT_FOUND;
    }
    unless (-r _) {
        $r->log_error("File permissions deny access: $file");
        return FORBIDDEN;
    }
    my $modtime = localtime((stat _)[9]);
    my $fh;
    unless ($fh = Apache::File->new($file)) {
        $r->log_error("Couldn't open $file for reading: $!");
        return SERVER_ERROR;
    }
    my $footer = <<END;
<hr>
&copy; 2000 <a href="http://samxie.cl.msu.edu">Sam Xie's Footer</a><br>
<em>Last Modified: $modtime</em>
END
    $r->send_http_header;
    while (<$fh>) {
        s!(</BODY>)!$footer$1!oi;
    } continue {
        $r->print($_);
    }
    return OK;
}

1;
__END__



Re: Most nonesense I've ever read about mod_perl

2000-05-06 Thread Sam Carleton

"Jason C. Leach" wrote:
 
 hi,
 
 There be truth to the reply.  You can write all the C or ASM you like, but
 your algorithm is where it will count.  Anyone who knows how to do BIG-O
 will know this.
 
 A good perl programmer will code a bad C programmer under the table with
 speed and efficiency.
 
Jason,

Your posting was the first one I saw.  Your statement is 100% correct. 
On the other hand, if you put an outstanding Perl programmer up against
an equally outstanding C programmer, the C programmer's code will run
loops around the Perl programmer's.  The only question is:  How much more
time will it take the C programmer to write the code?  Does the time
justify the speed?

I am a C/C++ programmer and love it for the power and the speed, but... 
I am working on putting together a web site.  I have decided to learn
Perl because I have decided that the time to do it right in C/C++ just
isn't worth it.  

Every language has its use; the truly knowledgeable understand when to
use each language:)

Sam



Re: perl code to handle a multipart/form-data POST

2000-04-30 Thread Sam Carleton

"Jeffrey W. Baker" wrote:
 
 On Sat, 29 Apr 2000, Sam Carleton wrote:
 
  #! /usr/bin/perl
 
  #cgi-lib.pl was optained from http://cgi-lib.berkeley.edu/
  require "cgi-lib.pl";
 
 If we can erradicate polio and small pox, surely there must be a way to
 rid the world of cgi-lib.pl.  Apparently it can still strike the young,
 elderly, and infirm.  What a senseless waste.
 
 -jwb
 
 PS.  If Mr. Carleton is not in too much of a hurry, I would advise him to
 look into the modern software available on CPAN, such as Apache.pm, which
 comes with mod_perl, and Apache::Request, which is available at
 http://www.perl.com/CPAN/modules/by-module/Apache/.

Ok,  So cgi-lib.pl isn't the greatest in the world, but it did help me
get a bit farther in my project :)  Now I need something that works.  I
have "Writing Apache Modules with Perl and C", but have not read too
deep into it and time is very short.  Again, all I am trying to do is
print out ALL the name/values that was on the form.  The code I have so
far does not display any of the name/values, Below is my code, could
someone please show me what I am doing wrong and how do I fix it?

package Apache::POSTDisplay;

use strict;
use Apache::Constants qw(:common);

sub handler {
  my $r = shift;

  $r->content_type('text/html');
  $r->send_http_header;
  $r->print(<<HTTP_HEADER);
<HTML>
<TITLE>POSTDisplay</TITLE>
<BODY>
<H1>POSTDisplay</H1>
<UL>
HTTP_HEADER

  my @args = ($r->args, $r->content);
  while(my($name,$value) = splice @args,0,2) {
    $r->print("<li>[$name]=[$value]</li>\n");
  }

  $r->print(<<HTTP_FOOTER);
</UL>
</BODY></HTML>
HTTP_FOOTER

  return OK;
}

1;
__END__



where to find info on the Apache request object

2000-04-30 Thread Sam Carleton

I am learning perl/mod_perl right now and have some questions.  I would
like to see all the functions that I can call on the Apache request
object.  Can anyone point me to some documentation?  I didn't see a
listing in "Writing Apache Modules with Perl and C".

Sam



Re: perl code to handle a multipart/form-data POST

2000-04-30 Thread Sam Carleton

Tobias Hoellrich wrote:
 
 Almost :-) Apache cannot be used for multipart/form-data, gotta use
 Apache::Request instead. Change the start of the handler to :
 
 sub handler {
   my $r = shift;
   my $apr = Apache::Request->new($r);
 
 and then get the params with @params = $apr->param;

Tobias,

I am looking into it right now, but you might be able to save me a lot
of time.  I want to display the name/values from the HTML form.  How
would I go about enumerating through the @params to do this?

Sam
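A hedged sketch of the enumeration Sam is asking about: per the libapreq documentation, `param()` with no arguments returns the field names in list context, and with a name it returns that field's value(s). `$apr` and `$r` are assumed to exist as in Tobias's snippet above; this is illustration, not a drop-in handler.

```perl
# Walk every form field and print its name/value pairs.
for my $name ($apr->param) {
    my @values = $apr->param($name);   # a field may appear more than once
    $r->print("<li>[$name]=[@values]</li>\n");
}
```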



Apache::Request-new($r) does NOT work, why?

2000-04-30 Thread Sam Carleton

Tobias Hoellrich wrote:
 
 Almost :-) Apache cannot be used for multipart/form-data, gotta use
 Apache::Request instead. Change the start of the handler to :
 
 sub handler {
   my $r = shift;
   my $apr = Apache::Request->new($r);

Tobias,

The new is blowing up on me.  This is the error message:

null: Can't locate object method "new" via package "Apache::Request"



Re: where to find info on the Apache request object

2000-04-30 Thread Sam Carleton

Jeff Beard wrote:
 
 Or read chapter 9 in the Eagle book.
 
 --Jeff
 
 At 10:43 AM 4/30/00, Tobias Hoellrich wrote:
 At 01:34 PM 4/30/00 -0400, Sam Carleton wrote:
  I am learning perl/mod_perl right now and have some questions.  I would
  like to see all the functions that I can call on the Apache request
  object.  Can anyone point me to some documentation?  I didn't see a
  listing in "Writing Apache Modules in Perl and C".
  
  Sam
 
 try 'perldoc Apache'
 

Tobias and Jeff,

Thanks for the pointer, but now I am looking for info on the
Apache::Request object, I did not see it in Chapter 9 of the Eagle
book.  I tried a number of different ways of trying to get to it from 
perldoc, but failed.  How do I go about bringing up the docs on this in
perldoc?

Sam



perl code to handle a multipart/form-data POST

2000-04-29 Thread Sam Carleton

I am in a very tight spot right now.  I need to have some C++ code
posting data to a web server via 'multipart/form-data' by Monday.  I
would REALLY like to have either a normal perl CGI script or mod_perl
script that will simply display all the information POSTed to it from my
code.  Is anyone up to the task, or able to give me some pointers on how
I can do this myself?

Sam Carleton



Re: perl code to handle a multipart/form-data POST

2000-04-29 Thread Sam Carleton

Sam Carleton wrote:
 
 I am in a very tight spot right now.  I need to have some C++ code
 posting data to a web server via 'multipart/form-data' by Monday.  I
 would REALLY like to have either a normal perl CGI script or mod_perl
 script that will simply display all the information POSTed to it from my
 code.  Is anyone up to the task, or able to give me some pointers on how
 I can do this myself?

I LOVE answering my own questions:

#! /usr/bin/perl

#cgi-lib.pl was obtained from http://cgi-lib.berkeley.edu/
require "cgi-lib.pl"; 

ReadParse();

print PrintHeader();
print HtmlTop("POST/GET Display");
print PrintVariables();
print HtmlBot();

exit 0;



Re: Error compiling mod_perl

2000-04-12 Thread Sam Carleton

Doug MacEachern wrote:

 On Tue, 11 Apr 2000, Sam Carleton wrote:

  This is the error message I got when I compiled mod_perl:
 
  Perl lib version (5.00503) doesn't match executable version (5.006) at
  /usr/lib/perl5/5.00503/i586-linux/Config.pm line 7.

 you either installed a new Perl after running mod_perl's Makefile.PL or
 have a broken Perl installation.  try building mod_perl from a fresh
 source tree.

OK, I messed things up with CPAN, I believe that it installed 5.006 where my
distribution came with 5.003.  I decided to resolve the issue by installing
5.6.  I am able to run perl Makefile.PL without error and compile without
errors.  When I run make test, I get this error:

---make test error---
Syntax error on line 30 of /usr/src/mod_perl-1.21_03/t/conf/httpd.conf:
Invalid command '=pod', perhaps mis-spelled or defined by a module not
included in the server configuration
done
/usr/local/bin/perl t/TEST 0
still waiting for server to warm up...not ok
server failed to start! (please examine t/logs/error_log) at t/TEST line 95.

make: *** [run_tests] Error 111
---make test error---

I looked in t/logs for an error_log, but t/logs is empty.  I think I might
have an issue with the way I am configuring apache and mod_perl.  This is my
.makepl_args.mod_perl:

---.makepl_args.mod_perl---
# File: .makepl_args.mod_perl
# enable all phase callbacks, API modules and misc features
EVERYTHING=1

# tell runtime diagnostics to activate if the MOD_PERL_TRACE environment
# variable is set at runtime
PERL_TRACE=1

# tell Makefile.pl where the Apache source tree is
APACHE_SRC=/usr/src/apache/src

# tell Makefile.PL where Apache is to be installed
APACHE_PREFIX=/data01/apache

# disable Makefile.pl from compiling Apache
#PREP_APACHED=1

# tell Makefile.PL to use the first source found, which will be the
# path specified above by APACHE_SRC
DO_HTTPD=1

# tell Makefile.PL to configure Apache using the apaci interface
USE_APACI=1

# tell makefile.PL to configure ssl support, too
# SSL_BASE=/usr/local/ssl

# add mod_info, mod_status, mod_usertrack, and mod_unique_id
ADD_MODULE=info,status,usertrack,unique_id

# additional arguments to give Apache's configure script
# arguments can be delimited by commas and/or specified with multiple
# APACI_ARGS lines
#APACI_ARGS=--includedir=/usr/src/php
#APACI_ARGS=--activate-module=src/modules/php3/libphp3.a
APACI_ARGS=--with-layout=apache.config.layout:Sam-Layout
APACI_ARGS=--server-uid=wwwrun
APACI_ARGS=--server-gid=dosemu
APACI_ARGS=--enable-module=most
APACI_ARGS=--enable-shared=max
---.makepl_args.mod_perl---

And the options I am using to make apache:

---apache options---
configure \
--with-layout=/root/apache.config.layout:Sam-Layout \
--with-perl=src/modules/perl \
--enable-module=most \
--server-uid=wwwrun \
--server-gid==dosemu \
--enable-shared=max
---apache options---

Any thoughts on what I have wrong?

Sam

P.S.  Thanks a million for having the .makepl_args.mod_perl idea!!!  It is an
outstanding one!





PLEASE HELP!!!!! I cannot get mod_perl/apache compiled

2000-04-12 Thread Sam Carleton

I simply cannot get mod_perl/apache to compile.  My understanding is
that I configure .makepl_args.mod_perl to compile both mod_perl and Apache.
Then I do the following:
do the following:

perl Makefile.PL
make
make test
make install

Assuming there were no problems, all should be installed and ready to
go.  But all is not well.  First some version info.  I just downloaded
mod_perl-1.22 and apache_1.3.12 and am working with fresh trees.  I run
the perl Makefile.PL and that seems to work well, I don't see any
errors.  When I try to run make, I get this error:

# make
(cd /usr/src/apache_1.3.12  make)
make[1]: Entering directory `/usr/src/apache_1.3.12'
make[1]: *** No targets.  Stop.
make[1]: Leaving directory `/usr/src/apache_1.3.12'
make: *** [apaci_httpd] Error 2

My understanding is that the `perl Makefile.PL` WILL also run configure
for apache.  Just to make sure I was not mistaken, I have tried to first
go into the apache tree and run configure with the same options that are
in my .makepl_args.mod_perl.  Then run `perl Makefile.PL`, run `make`
(which works this time), and then run `make test`.  It is `make test`
that bombs out with this error:

letting apache warm up...\c
Syntax error on line 30 of /usr/src/mod_perl-1.22/t/conf/httpd.conf:
Invalid command '=pod', perhaps mis-spelled or defined by a module not
included in the server configuration
done
/usr/local/bin/perl t/TEST 0
still waiting for server to warm up...not ok
server failed to start! (please examine t/logs/error_log) at t/TEST line
95.
make: *** [run_tests] Error 111


Now, there is no 't/logs/error_log' file to examine.  But I did notice
an error in reading in the httpd.conf file.  I looked into line 30
of '/usr/src/mod_perl-1.22/t/conf/httpd.conf' and this is what I found:

=pod

=head1 NAME

mod_perl test configuration file

=head1 DESCRIPTION

umm, we use this to test mod_perl

=over to apache


I am under the impression that the httpd.conf file is the conf file that
httpd is reading in for the test.  My understanding is that an equals sign
is not a valid way to begin an Apache conf file.

I have NO CLUE as to what is going on here.  I would truly appreciate it
if you know anything about this please let me know what is going on.  At
the bottom you will find my '.makepl_args.mod_perl' and the command line
options I am using for apache.

Sam

--.makepl_args.mod_perl--
# File: .makepl_args.mod_perl
# enable all phase callbacks, API modules and misc features
EVERYTHING=1

# tell runtime diagnostics to activate if the MOD_PERL_TRACE environment
# variable is set at runtime
PERL_TRACE=1

# tell Makefile.pl where the Apache source tree is
APACHE_SRC=/usr/src/apache_1.3.12/src

# tell Makefile.PL where Apache is to be installed
APACHE_PREFIX=/data01/apache

# disable Makefile.pl from compiling Apache
#PREP_HTTPD=1

# tell Makefile.PL to use the first source found, which will be the
# path specified above by APACHE_SRC
DO_HTTPD=1

# tell Makefile.PL to configure Apache using the apaci interface
USE_APACI=1

# tell makefile.PL to configure ssl support, too
#-SSL_BASE=/usr/local/ssl

# add mod_info, mod_status, mod_usertrack, and mod_unique_id
#-ADD_MODULE=info,status,usertrack,unique_id 

# additional arguments to give Apache's configure script
# arguments can be delimited by commas and/or specified with multiple
# APACI_ARGS lines
#APACI_ARGS=--includedir=/usr/src/php
#APACI_ARGS=--activate-module=src/modules/php3/libphp3.a
APACI_ARGS=--with-layout=apache.config.layout:Sam-Layout
APACI_ARGS=--server-uid=wwwrun
APACI_ARGS=--server-gid=dosemu
APACI_ARGS=--enable-module=most
APACI_ARGS=--enable-shared=max
--.makepl_args.mod_perl--



--apache config script--
#! /bin/sh
ROOT_DIR=/usr/src

$ROOT_DIR/apache_1.3.12/configure \
--with-layout=/root/apache.config.layout:Sam-Layout \
--with-perl=src/modules/perl \
--enable-module=most \
--server-uid=wwwrun \
--server-gid==dosemu \
--enable-shared=max
--apache config script--



Error compiling mod_perl

2000-04-11 Thread Sam Carleton

This is the error message I got when I compiled mod_perl:

Perl lib version (5.00503) doesn't match executable version (5.006) at
/usr/lib/perl5/5.00503/i586-linux/Config.pm line 7.
Compilation failed in require at
/usr/lib/perl5/5.00503/ExtUtils/MakeMaker.pm line 13.
BEGIN failed--compilation aborted at
/usr/lib/perl5/5.00503/ExtUtils/MakeMaker.pm line 13.
Compilation failed in require.
BEGIN failed--compilation aborted.
make: *** [Version_check] Error 255

How do I fix this?

Sam




best way to call traceroute

2000-04-07 Thread Sam Carleton

I want to call traceroute to the remote_host from within a mod_perl
script; being a C/C++ programmer, I don't know the best way to do that.  Is
there a traceroute object I could use?  If so, how?  Otherwise how do I
run traceroute from within a perl script?

Sam
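(One common idiom, offered as a sketch rather than the only way: open a pipe to the external program with the list form of open, which bypasses the shell and its quoting pitfalls. The traceroute path below is an assumption; the demonstration call at the bottom uses perl itself so the sketch runs anywhere.)

```perl
use strict;
use warnings;

# Run an external command and return its output lines.
sub run_command {
    my @cmd = @_;
    open my $pipe, '-|', @cmd
        or die "can't run $cmd[0]: $!";
    my @lines = <$pipe>;
    close $pipe;
    return @lines;
}

# Real use would look like (path is an assumption):
#   my @hops = run_command('/usr/sbin/traceroute', $host);
# Portable demonstration:
my @out = run_command($^X, '-e', 'print "hop 1\n"');
print @out;
```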




Re: best way to call traceroute

2000-04-07 Thread Sam Carleton

Steven Champeon wrote:

 On Fri, 7 Apr 2000, Sam Carleton wrote:
  I want to call traceroute to the remote_host from within a mod_perl
  script; being a C/C++ programmer, I don't know the best way to do that.  Is
  there a traceroute object I could use?  If so, how?  Otherwise how do I
  run traceroute from within a perl script?

 I'm getting ready to port an old and somewhat clunky traceroute CGI script
 to mod_perl, mostly to avoid the horrid 'nph-' construction. If you'd like
 I can make the source available.

That would be great!  Any idea when it will be ready?

Sam




getting server side includes to work server wide

2000-03-31 Thread Sam Carleton

I have followed the example in "Writing Apache Modules in Perl and C".
The module Apache::ESSI is working fine for a virtual site (development
site), but it does not work for the main (non-virtual) site.  Here is a bit
of my httpd.conf:

-
##
## httpd.conf -- Apache HTTP server configuration file
##

[...snip...]

ResourceConfig conf/perl.conf

[...snip...]

ServerAdmin  [EMAIL PROTECTED]
ServerName miltonstreet.tzo.com

DocumentRoot "/data01/www/miltonstreet"

<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>

<Directory "/data01/www/miltonstreet">
Options FollowSymLinks MultiViews Includes ExecCGI
AllowOverride None
Order allow,deny
Allow from all
</Directory>

[...snip...]

<VirtualHost 192.168.0.5:80>
DocumentRoot /data01/www/dev-collect-lure

<Directory "/data01/www/scripts/cgi-bin-dev">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>

<Location /images>
  SetHandler perl-script
  PerlHandler Apache::Magick
</Location>

</VirtualHost>
-

Apache::ESSI works fine on the 192.168.0.5 web site, but does not work
on the miltonstreet web site.  Here is the perl.conf:

-
# perl.conf

PerlRequire  /data01/www/scripts/startup.pl
PerlFreshRestart On

<Location /hello/world>
  SetHandler  perl-script
  PerlHandler Apache::Hello
</Location>

<Files ~ "\.ehtml$">
  SetHandler perl-script
  PerlHandler Apache::ESSI
  PerlSetVar ESSIDefs conf/essi.defs
</Files>

AddType text/html .ehtml

<Location /image>
  SetHandler perl-script
  PerlHandler Apache::Magick
</Location>
-----

Any thoughts on what I am doing wrong?

Sam

P.S.  the miltonstreet site is: http://miltonstreet.tzo.com/index.ehtml






Can't locate object method OPEN via package Apache

2000-03-30 Thread Sam Carleton

I am trying to get the Apache::Magick module from the O'Reilly book
"Writing Apache Modules with Perl and C" to work.  The error I am
running into is:

Can't locate object method "OPEN" via package "Apache" (line 80)

The line looks really simple:

open(STDOUT, ">&=" . fileno($fh));

Any thoughts on what is going wrong?

Sam

P.S.  The whole Apache::Magick is attached, in case you want to look at
it.
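(A guess at the cause, hedged: under mod_perl, STDOUT is tied to the Apache class, so Perl dispatches open(STDOUT, ...) on the tied handle to that class's OPEN method; if the installed mod_perl predates TIEHANDLE OPEN support, the call fails exactly as shown, and untying STDOUT first or upgrading mod_perl is the usual workaround. The dup itself is plain Perl, demonstrated here outside Apache with a temp file:)

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($fh, $tmpnam) = tempfile();

# Save STDOUT, then point it at the temp file's descriptor with ">&="
# (reuse the same fd, no dup) -- the operation the Apache::Magick
# line performs.
open my $saved, '>&', \*STDOUT or die "can't save STDOUT: $!";
open STDOUT, '>&=', fileno($fh) or die "can't redirect: $!";
print "redirected\n";                       # lands in the temp file
open STDOUT, '>&', $saved or die "can't restore STDOUT: $!";
close $fh;

open my $in, '<', $tmpnam or die "can't read $tmpnam: $!";
my $captured = <$in>;
close $in;
unlink $tmpnam;
print "captured: $captured";
```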




package Apache::Magick;

use strict;
use Apache::Constants qw(:common);
use Image::Magick ();
use Apache::File ();
use File::Basename qw(fileparse);
use DirHandle ();

my %LegalArguments = map { $_ => 1 } 
qw (adjoin background bordercolor colormap colorspace
colors compress density dispose delay dither
display font format iterations interlace
loop magick mattecolor monochrome page pointsize
preview_type quality scene subimage subrange
size tile texture treedepth undercolor);

my %LegalFilters = map { $_ => 1 } 
qw(AddNoise Blur Border Charcoal Chop
   Contrast Crop Colorize Comment CycleColormap
   Despeckle Draw Edge Emboss Enhance Equalize Flip Flop
   Frame Gamma Implode Label Layer Magnify Map Minify
   Modulate Negate Normalize OilPaint Opaque Quantize
   Raise ReduceNoise Rotate Sample Scale Segment Shade
   Sharpen Shear Solarize Spread Swirl Texture Transparent
   Threshold Trim Wave Zoom);

sub handler {
my $r = shift;

# get the name of the requested file
    my $file = $r->filename;

# If the file exists and there are no transformation arguments
# just decline the transaction.  It will be handled as usual.
    return DECLINED unless $r->args || $r->path_info || !-r $r->finfo;

my $source;
my ($base, $directory, $extension) = fileparse($file, '\.\w+');
    if (-r $r->finfo) { # file exists, so it becomes the source
        $source = $file;
    }
    else {  # file doesn't exist, so we search for it
        return DECLINED unless -r $directory;
        $source = find_image($r, $directory, $base);
    }

    unless ($source) {
        $r->log_error("Couldn't find a replacement for $file");
        return NOT_FOUND;
    }

    $r->send_http_header;
    return OK if $r->header_only;

# Read the image
    my $q = Image::Magick->new;
    my $err = $q->Read($source);

# Conversion arguments are kept in the query string, and the
# image filter operations are kept in the path info
    my(%arguments) = $r->args;

# Run the filters
    foreach (split '/', $r->path_info) {
        my $filter = ucfirst $_;
        next unless $LegalFilters{$filter};
        $err ||= $q->$filter(%arguments);
    }

# Remove invalid arguments before the conversion
    foreach (keys %arguments) {
        delete $arguments{$_} unless $LegalArguments{$_};
    }

# Create a temporary file name to use for conversion
    my($tmpnam, $fh) = Apache::File->tmpfile;

# Write out the modified image
    open(STDOUT, ">&=" . fileno($fh));
    $extension =~ s/^\.//;
    $err ||= $q->Write('filename' => "\U$extension\L:-", %arguments);
    if ($err) {
        unlink $tmpnam;
        $r->log_error($err);
        return SERVER_ERROR;
    }
    close $fh;

# At this point the conversion is all done!
# reopen for reading
    $fh = Apache::File->new($tmpnam);
    unless ($fh) {
        $r->log_error("Couldn't open $tmpnam: $!");
        return SERVER_ERROR;
    }

# send the file
    $r->send_fd($fh);

# clean up and go
unlink $tmpnam;  
return OK;
}

sub find_image {
my ($r, $directory, $base) = @_;
    my $dh = DirHandle->new($directory) or return;

my $source;
    for my $entry ($dh->read) {
        my $candidate = fileparse($entry, '\.\w+');
        if ($base eq $candidate) {
            # determine whether this is an image file
            $source = join '', $directory, $entry;
            my $subr = $r->lookup_file($source);
            last if $subr->content_type =~ m:^image/:;
            $source = "";
        }
    }
    $dh->close;
return $source;
}

1;
__END__



Re: Can't locate object method OPEN via package Apache

2000-03-30 Thread Sam Carleton

darren chamberlain wrote:

 Try using CORE::open to be sure that the default open is being called.

tried it, I am getting the same error, any more ideas?

Sam




Re: [admin] NO HTML posts please!

2000-03-30 Thread Sam Carleton

Stas Bekman wrote:

 Folks, please refrain from posting in HTML.

 Some of us use email clients that post and read email in the good old text
 mode. When I don't have enough time on my hands I delete such emails since
 I cannot read them right away. Probably others too.

 Please don't tell me to get more _sophisticated_ email client, my pine
 does everything for me. HTML should NOT be used for posting emails.

And then there are those of us that do have sophisticated email clients that
simply don't care for HTML posting.  I agree 100%, keep it simple, keep it
TEXT!

Sam




adding Server-Side Includes to default files

2000-03-30 Thread Sam Carleton

I would like to have server-side includes to be parsed on DirectoryIndex
files.  I have followed the example in "Writing Apache Modules in Perl
and C" and have my Apache::ESSI and this is what is in my perl.conf:

<Files ~ "\.ehtml$">
SetHandler perl-script
PerlHandler Apache::ESSI
PerlSetVar ESSIDefs conf/essi.defs
</Files>
AddType text/html .ehtml

What type of directive do I need to put into perl.conf so that the code
gets called on a directory index file?

Sam
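(One hedged possibility, an untested sketch: give the index file the .ehtml extension and name it in DirectoryIndex, so that a request for the bare directory resolves to a file the existing <Files ~ "\.ehtml$"> block already matches:)

```apache
# Assumes an index.ehtml exists in the directory; the <Files> handler
# then fires for directory requests too.
DirectoryIndex index.ehtml index.html
```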




getting Image::Magick

2000-03-29 Thread Sam Carleton

I am trying to get Image::Magick compiled and installed.  I am using
CPAN and am getting this error:
---
AutoSplitting blib/lib/Image/Magick.pm (blib/lib/auto/Image/Magick)
/usr/local/bin/perl -I/usr/local/lib/perl5/5.6.0/i686-linux
-I/usr/local/lib/perl5/5.6.0 /usr/local/lib/perl5/5.6.0/ExtUtils/xsubpp
-typemap /usr/local/lib/perl5/5.6.0/ExtUtils/typemap Magick.xs 
Magick.xsc  mv Magick.xsc Magick.c
cc -c -I.. -I/usr/local/include -I/usr/openwin/include
-I/usr/openwin/include/X11 -fno-strict-aliasing -I/usr/local/include
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -DVERSION=\"4.28\"
-DXS_VERSION=\"4.28\" -fpic
-I/usr/local/lib/perl5/5.6.0/i686-linux/CORE  Magick.c
In file included from /usr/local/include/magick/magick.h:45,
 from Magick.xs:78:
/usr/include/assert.h:79: warning: `assert' redefined
/usr/local/lib/perl5/5.6.0/i686-linux/CORE/perl.h:2054: warning: this is
the location of the previous definition
Magick.xs:79: magick/defines.h: No such file or directory
make: *** [Magick.o] Error 1
  /usr/bin/make  -- NOT OK
Running make test
  Oops, make had returned bad status
Running make install
  Oops, make had returned bad status
---
Has anyone seen this error before?  Any thoughts on how to fix it?

Sam Carleton



