Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Issac Goldstand

Ah yes, but don't forget that to get this speed, you are sacrificing 
memory.  You now have another locally scoped variable for perl to keep 
track of, which increases memory usage and general overhead (allocation 
and garbage collection).  Now, those, too, are insignificant with one 
use, but the significance will probably rise with the speed gain as you 
use these techniques more often...

  Issac


Stas Bekman wrote:

 Rob Nagler wrote:

 Perrin Harkins writes:


 Here's a fun example of a design flaw.  It is a performance test sent
 to another list.  The author happened to work for one of our
 competitors.  :-)


   That may well be the problem. Building giant strings using .= can be
   incredibly slow; Perl has to reallocate and copy the string for each
   append operation. Performance would likely improve in most
   situations if an array were used as a buffer, instead. Push new
   strings onto the array instead of appending them to a string.

 #!/usr/bin/perl -w
 ### Append.bench ###

 use Benchmark;

 sub R () { 50 }
 sub Q () { 100 }
 @array = ('a' x R) x Q;

 sub Append {
 my $str = '';
 map { $str .= $_ } @array;
 }

 sub Push {
 my @temp;
 map { push @temp, $_ } @array;
 my $str = join '', @temp;
 }

 timethese($ARGV[0],
 { append => \&Append,
   push   => \&Push });
 

 Such a simple piece of code, yet the conclusion is incorrect.  The
 problem is in the use of map instead of foreach for the performance
 test iterations.  The result of Append is an array whose length is
 Q and whose elements grow from R to R * Q.  Change the map to a
 foreach and you'll see that push/join is much slower than .=.
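
 For instance, a corrected harness might look like this (a sketch reusing
 the constants above; the foreach statement modifier collects nothing):

 sub Append {
 my $str = '';
 $str .= $_ foreach @array;
 return $str;
 }

 sub Push {
 my @temp;
 push @temp, $_ foreach @array;
 return join '', @temp;
 }

 timethese($ARGV[0],
 { append => \&Append,
   push   => \&Push });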

 Return a string reference from Append.  It saves a copy.
 If this is the page you're building, you'll see a significant improvement in
 performance.
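
 A sketch of that variant:

 sub Append {
 my $str = '';
 $str .= $_ foreach @array;
 return \$str;   # caller dereferences with ${ Append() }; the big string is not copied
 }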

 Interestingly, this couldn't be the problem, because the hypothesis
 is incorrect.  The incorrect test just validated something that was
 faulty to begin with.  This brings up "you can't talk about it unless
 you can measure it".  Use a profiler on the actual code.  Add
 performance stats in your code.  For example, we encapsulate all DBI
 accesses and accumulate the time spent in DBI on any request.  We also
 track the time we spend processing the entire request.


 While we are on this topic, I want to suggest a new project. I was 
 planning to start working on it a long time ago, but other things always 
 took over.

 The perl.apache.org/guide/performance.html and a whole bunch of 
 performance chapters in the upcoming modperl book have a lot of 
 benchmarks, comparing various coding techniques. Such as the example 
 you've provided. The benchmarks cover both pure Perl and mod_perl 
 specific code (the latter requires running Apache, a perfect job for 
 the new Apache::Test framework.)

 Now throw in the various techniques from the 'Effective Perl' book and 
 voila, you have a great project to learn from.

 Also remember that on various platforms and various Perl versions the 
 benchmark results will differ, sometimes very significantly.

 I even have a name for the project: Speedy Code Habits  :)

 The point is that I want to develop a coding style which tries hard to 
 do early premature optimizations. Let me give you an example of what I 
 mean. Tell me what's faster:

 if (ref $b eq 'ARRAY'){
$a = 1;
 }
 elsif (ref $b eq 'HASH'){
$a = 1;
 }

 or:

 my $ref = ref $b;
 if ($ref eq 'ARRAY'){
$a = 1;
 }
 elsif ($ref eq 'HASH'){
$a = 1;
 }

 Sure, the win can be very little, but it adds up as your code base's 
 size grows.
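
 A Benchmark harness along these lines can put numbers on it (a sketch; 
 the count is arbitrary, and $b holds an array ref so the first branch 
 is always taken):

 use Benchmark qw(timethese);

 my $b = [];
 my $a;

 timethese(1_000_000, {
     no_cache => sub {
         if    (ref $b eq 'ARRAY'){ $a = 1 }
         elsif (ref $b eq 'HASH') { $a = 1 }
     },
     cache => sub {
         my $ref = ref $b;
         if    ($ref eq 'ARRAY'){ $a = 1 }
         elsif ($ref eq 'HASH') { $a = 1 }
     },
 });

 The same harness fits the method-call example below.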

 To give you a similar example:

 if ($a->lookup eq 'ARRAY'){
$a = 1;
 }
 elsif ($a->lookup eq 'HASH'){
$a = 1;
 }

 or

 my $lookup = $a->lookup;
 if ($lookup eq 'ARRAY'){
$a = 1;
 }
 elsif ($lookup eq 'HASH'){
$a = 1;
 }

 Now throw in sub attributes and re-run the test.

 Add examples of map vs. for.
 Add examples of method lookup vs. procedures.
 Add examples of concat vs. list vs. other stuff from the guide.

 mod_perl specific examples from the guide/book ($r->args vs. 
 Apache::Request::param, etc.)
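
 For anyone who hasn't met the two APIs, this is roughly what's being 
 compared (mod_perl 1.x, inside a handler; a sketch):

 my %args = $r->args;                  # core mod_perl, parses QUERY_STRING

 use Apache::Request ();
 my $apr   = Apache::Request->new($r); # libapreq
 my $value = $apr->param('name');      # handles GET and POST data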

 If you understand where I'm trying to take you, help me to pull this 
 project off and I think in the long run we can all benefit a lot.

 This goes along with the Apache::Benchmark project I think (which is 
 yet another thing I want to start...); we could probably put these two 
 ideas together.

 _







Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Stas Bekman

Issac Goldstand wrote:

 Ah yes, but don't forget that to get this speed, you are sacrificing 
 memory.  You now have another locally scoped variable for perl to keep 
 track of, which increases memory usage and general overhead (allocation 
 and garbage collection).  Now, those, too, are insignificant with one 
 use, but the significance will probably rise with the speed gain as you 
 use these techniques more often...

Yes, I know. But from the benchmark you can probably get an idea 
whether the 'caching' is worth it (given that the benchmark is 
similar to your case). For example it depends on how many times you need 
to use the cache, and how big the value is. E.g. maybe caching 
$foo->bar isn't worth it, but what about $foo->bar->baz? Or if you 
have a deeply nested hash and you need to work with only a part of the 
subtree, do you grab a reference to that sub-tree node and work with it, 
or do you keep on dereferencing all the way from the root on every call?
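
Here's a sketch of the deep-hash case (the structure is made up):

my $tree = { app => { db => { opts => { timeout => 5, retries => 3 } } } };

# dereferencing from the root on every access:
my $t = $tree->{app}{db}{opts}{timeout};

# vs. grabbing a reference to the sub-tree node once:
my $opts = $tree->{app}{db}{opts};
my $t2   = $opts->{timeout};   # and reuse $opts for retries, etc.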

But personally I still haven't decided which one is better, and every time 
I'm in a similar situation I'm never sure which way to take: to cache 
or not to cache. But that's the cool thing about Perl, it keeps you on 
your toes all the time (if you want to :).

BTW, if somebody has interesting reasonings for using one technique 
versus the other performance-wise (speed+memory), please share them.

This project's idea is to give straight numbers for some definitely bad 
coding practices (e.g. map() in void context), and for things which vary 
a lot depending on the context but are interesting to think about (e.g. 
the last example of caching the result of ref() or a method call).

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: Documentation

2002-01-25 Thread Axel Gerstmair

 I know there is some good reference material for mod_perl out there, just
 can't remember where. Anybody?

http://perl.apache.org/guide/

Best regards,
Axel




Tracing script with problem

2002-01-25 Thread Jon Molin

Hi list,

I had problems with a script that went nuts and took 65MB of memory and
a lot of CPU. To track this script down I thought Apache::VMonitor would
be perfect; unfortunately I ran into some weird problems (it said there
was an error in mod_perl.h) and I know gcc might be broken on this
machine, so I started scratching my head and came to the conclusion that
this 'oneliner' ought to help me track the error down:

find /www/docs -name '*.cgi' -type f -exec perl -pi -e
's:(#!/usr/bin/perl[ \w-]*\s*):$1\nprint STDERR "\\nPID=\$\$
SCRIPT=\$ENV{REQUEST_URI} \\n";\n:s;' {} \;

ie, every script now prints its httpd pid and its REQUEST_URI. So I
just started watching top with excitement, and when the 65 MB httpd
process showed up I grepped for its pid in the error_log and got the
script name and its arguments.

Then I reproduced the error on a server with VMonitor to see what I
had missed. OK, I could see the name of the script, but the real problem
was with the query_string, chopped after a couple of chars. Now, if I
understand things right (I tried some tweaking on the module) it's not
possible to get more than 64 chars. Why is this, and is it really so?

I know I'm no Einstein and I presume thousands of people have tried tracing
similar problems; how did you do it? There must be a more effective way
to find it. I knew I would get the script name, but since I never
thought it would get the input it got, chances are it would have taken me
a long time to find the problem if I only knew the name.

/Jon



RE: DBI/MySQL causing SIGPIPE

2002-01-25 Thread Narins, Josh

2 quick notes.

Have you seen the Epigone archives? I'm sure I've seen mention
of SIGPIPE in this scenario some time before.

Upgrade! You are using old versions of Apache, Perl and mod_perl.

-Original Message-
From: Balazs Rauznitz [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 25, 2002 1:11 AM
To: [EMAIL PROTECTED]
Subject: DBI/MySQL causing SIGPIPE



My setup is apache/modperl+Apache::DBI with the MySQL driver. On server
startup, in every httpd child a few queries that are executed very often are
prepared. When the Apache::Registry scripts run, values are bound to the
cursors and they are executed. The server runs OK for 6-10 hours and then
I see these messages in the error_log when trying to execute the
cursors:

[modperl] caught SIGPIPE in process 12620
hint: may be a client (browser) hit STOP?

My initial guess was that the mysql daemon connection behind the cursor had
exited, so I had the library recompiled by having all apache children
execute a do 'db.pl' using a custom USR2 handler. Take a look at the file
below: I think that the db connection and the cursor should have been
reinitialized, but the SIGPIPE remained. Now my guess is that Apache::DBI
gets confused somehow... To stop the problem I added a $SIG{PIPE} = sub {}
into the code, which works well, but isn't this going to cause other
problems with mod_perl?
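
A logging variant of that handler (a sketch) would at least leave a trace:

$SIG{PIPE} = sub {
    warn "caught SIGPIPE in process $$, cursor may be stale\n";
    # one could also undef the cached $DBH/$CURSOR here so the next
    # request reconnects instead of reusing a dead handle
};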

apache  1.3.12
mod_perl 1.23
perl 5.005_03
Apache::DBI 0.87
DBI 1.14

Any help would be greatly appreciated.

Thanks,

-Balazs

ps: The library looks something like this:

#db.pl
$DBH = connect();
$CURSOR = $DBH->prepare("some sql");

sub routine {
$CURSOR->bind_param(1, $_[0]);
$CURSOR->execute();
...
}







Re: Tracing script with problem

2002-01-25 Thread Stas Bekman

Jon Molin wrote:

 Hi list,
 
 I had problems with a script that went nuts and took 65MB of memory and
 a lot of CPU. To track this script down I thought Apache::VMonitor would
 be perfect; unfortunately I ran into some weird problems (it said there
 was an error in mod_perl.h) and I know gcc might be broken on this
 machine, so I started scratching my head and came to the conclusion that
 this 'oneliner' ought to help me track the error down:
 
 find /www/docs -name '*.cgi' -type f -exec perl -pi -e
 's:(#!/usr/bin/perl[ \w-]*\s*):$1\nprint STDERR "\\nPID=\$\$
 SCRIPT=\$ENV{REQUEST_URI} \\n";\n:s;' {} \;
 
 ie, every script now prints its httpd pid and its REQUEST_URI. So I
 just started watching top with excitement, and when the 65 MB httpd
 process showed up I grepped for its pid in the error_log and got the
 script name and its arguments.
 
 Then I reproduced the error on a server with VMonitor to see what I
 had missed. OK, I could see the name of the script, but the real problem
 was with the query_string, chopped after a couple of chars. Now, if I
 understand things right (I tried some tweaking on the module) it's not
 possible to get more than 64 chars. Why is this, and is it really so?


It's a limitation of the Apache scoreboard, which gives us only 64 chars. 
I don't think this is going to change, since the scoreboard must be very 
light so as not to add overhead to requests.


 I know I'm no Einstein and I presume thousands of people have tried tracing
 similar problems; how did you do it? There must be a more effective way
 to find it. I knew I would get the script name, but since I never
 thought it would get the input it got, chances are it would have taken me
 a long time to find the problem if I only knew the name.

It's actually easy: take a look at Apache::SizeLimit or 
Apache::GTopLimit and look at the cleanup handler that they register. Now 
take this handler and dump whatever you need to a file or the error_log 
when you find that the process is taking too much memory.

Take a look at this code and you will see that it's very simple.
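
A minimal sketch of such a cleanup handler, assuming GTop is installed 
(Apache::GTopLimit uses the same call; the 50MB threshold is arbitrary):

use GTop ();

sub My::MemWatch::handler {
    my $r = shift;
    my $size = GTop->new->proc_mem($$)->size;
    if ($size > 50 * 1024 * 1024) {
        warn sprintf "PID=%d SIZE=%d URI=%s ARGS=%s\n",
            $$, $size, $r->uri, scalar $r->args;
    }
    return 0;
}

# httpd.conf:
#   PerlCleanupHandler My::MemWatch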

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Rob Nagler

 This project's idea is to give stright numbers for some definitely bad 
 coding practices (e.g. map() in the void context), and things which vary 
 a lot depending on the context, but are interesting to think about (e.g. 
 the last example of caching the result of ref() or a method call)

I think this would be handy.  I spend a fair bit of time
wondering/testing myself.  Would be nice to have a repository of the
tradeoffs.

OTOH, I spend too much time mulling over unimportant performance
optimizations.  The foreach/map comparison is a good example of this.
It only starts to matter (read: milliseconds) in the 100KB-and-up
range, I find.  If a site is returning 100KB pages for typical
responses, it has a problem at a completely different level than map
vs foreach.

Rob

"Premature optimization is the root of all evil" -- C.A.R. Hoare



Re: Tracing script with problem

2002-01-25 Thread Jon Molin

Stas Bekman wrote:
 
 It's actually easy: take a look at Apache::SizeLimit or
 Apache::GTopLimit and look at the cleanup handler that they register. Now
 take this handler and dump whatever you need to a file or the error_log
 when you find that the process is taking too much memory.
 
 Take a look at this code and you will see that it's very simple.
 

Thanks a bunch, I'll look into that.

Another question: do you (or anyone else for that matter) know how the
access log works? (And also why it works the way it does.) It seems it
writes its entry after the request is done; otherwise it could easily be
used for checking the parameters, and not only for logging.


/Jon

 _
 Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
 http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
 mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
 http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: Tracing script with problem

2002-01-25 Thread Stas Bekman

Jon Molin wrote:

 Stas Bekman wrote:
 
It's actually easy: take a look at Apache::SizeLimit or
Apache::GTopLimit and look at the cleanup handler that they register. Now
take this handler and dump whatever you need to a file or the error_log
when you find that the process is taking too much memory.

Take a look at this code and you will see that it's very simple.


 
 Thanks a bunch, I'll look into that.
 
 Another question: do you (or anyone else for that matter) know how the
 access log works? (And also why it works the way it does.) It seems it
 writes its entry after the request is done; otherwise it could easily be
 used for checking the parameters, and not only for logging.

You probably need to read some docs, which explain how you can specify your 
own access log format or supply your own log handler.

For using the standard Apache formats see the docs at apache.org (you 
also have them installed together with Apache under the 'manual' dir on 
your machine). For mod_perl examples you should probably see the eagle 
book; check www.modperl.com (which seems to be offline now) to see if it 
has the relevant chapters online. I think it's chapters 7 and 9 that you want.

The guide has some info here:
http://thingy.kcilink.com/cgi-bin/modperlguide.cgi?q=PerlLogHandler
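
A bare-bones log handler along those lines (a sketch) that records the 
query string after the response has been sent:

package My::ParamLog;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    warn sprintf "[paramlog] pid=%d uri=%s args=%s\n",
        $$, $r->uri, scalar $r->args;
    return OK;
}
1;

# httpd.conf:
#   PerlLogHandler My::ParamLog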

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: Tracing script with problem

2002-01-25 Thread Geoffrey Young

[snip]

 
  Another question: do you (or anyone else for that matter) know how the
  access log works? (And also why it works the way it does.) It seems it
  writes its entry after the request is done; otherwise it could easily be
  used for checking the parameters, and not only for logging.
 
 You probably need to read some docs, which explain how you can specify your
 own access log format or supply your own log handler.
 
 For using the standard Apache formats see the docs at apache.org (you
 also have them installed together with Apache under the 'manual' dir on
 your machine). For mod_perl examples you should probably see the eagle
 book; check www.modperl.com (which seems to be offline now) to see if it
 has the relevant chapters online. I think it's chapters 7 and 9 that you want.
 
 The guide has some info here:
 http://thingy.kcilink.com/cgi-bin/modperlguide.cgi?q=PerlLogHandler

our chapter on logging and the PerlLogHandler also happens to be online :)

http://www.modperlcookbook.org/chapters/ch16.pdf

--Geoff



PerlAddVar alternative in v1.21

2002-01-25 Thread Vladislav Shchogolev

Hello,

I'm using mod_perl 1.21 on a host where I don't have the option of upgrading
mod_perl. Is there an alternative way to use PerlSetVar to simulate the
effect of PerlAddVar? I want to create a variable, namely MasonCompRoot,
that has two entries in it.

Thanks,
Vlad




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Perrin Harkins

 The point is that I want to develop a coding style which tries hard to
 do early premature optimizations.

We've talked about this kind of thing before.  My opinion is still the same
as it was: low-level speed optimization before you have a working system is
a waste of your time.

It's much better to build your system, profile it, and fix the bottlenecks.
The most effective changes are almost never simple coding changes like the
one you showed, but rather large things like using qmail-inject instead of
SMTP, caching a slow database query or method call, or changing your
architecture to reduce the number of network accesses or inter-process
communications.

The exception to this rule is that I do advocate thinking about memory usage
from the beginning.  There are no good tools for profiling memory used by
Perl, so you can't easily find the offenders later on.  Being careful about
passing references, slurping files, etc. pays off in better scalability
later.

- Perrin




Re: PerlAddVar alternative in v1.21

2002-01-25 Thread Geoffrey Young

Vladislav Shchogolev wrote:
 
 Hello,
 
 I'm using mod_perl 1.21 on a host where i don't have the option of upgrading
 mod_perl. Is there an alternative way to use PerlSetVar to simulate the
 effect of PerlAddVar. I want to create a variable, namely MasonCompRoot,
 that has two entries in it.

I think I just read something in the eagle book the other day that suggested something like:

PerlSetVar MasonCompRoot foo:bar

my @roots = split ':', $r->dir_config('MasonCompRoot');

or whatever...

HTH

--Geoff



Re: PerlAddVar alternative in v1.21

2002-01-25 Thread Dave Rolsky

On Fri, 25 Jan 2002, Geoffrey Young wrote:

 I think I just read something in the eagle book the other day that suggested something like:

 PerlSetVar MasonCompRoot foo:bar

 my @roots = split ':', $r->dir_config('MasonCompRoot');

 or whatever...

Except that the code that reads the dir_config is part of the Mason core.
Of course, changing it is entirely possible, but it doesn't fix the problem
that Mason, by default, has certain features which are not available on
older mod_perl versions.  But we can live with that.


-dave

/*==
www.urth.org
we await the New Sun
==*/




UI Regression Testing

2002-01-25 Thread David Wheeler

Hi All,

A big debate is raging on the Bricolage development list WRT CVS
configuration and application testing.

http://www.geocrawler.com/mail/thread.php3?subject=%5BBricolage-Devel%5D+More+on+Releases&list=15308

It leads me to a question about testing. Bricolage is a monster
application, and its UI is built entirely in HTML::Mason running on
Apache. Now, while we can and will do a lot more to improve the testing
of our Perl modules, we can't really figure out a way to automate the
testing of the UI. I'm aware of the performance testing utilities
mentioned in the mod_perl guide -- 

  http://perl.apache.org/guide/performance.html

-- but they don't seem to be suited to testing applications.

Is anyone familiar with how to go about setting up a test suite for a
web UI -- without spending an arm and a leg? (Remember, Bricolage is an
OSS effort!).

Thanks!

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
   Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]




Re: UI Regression Testing

2002-01-25 Thread Rob Nagler

 Is anyone familiar with how to go about setting up a test suite for a
 web UI -- without spending an arm and a leg? (Remember, Bricolage is an
 OSS effort!).

Yes, it's very easy.  We did this using student labor, because it is
an excellent project for students and it's probably cheaper.  It's
very important.  We run our test suite nightly.

I'm an extreme programming (XP) advocate.  Testing is one of the most
important practices in XP.

I'm working on packaging what we did so it is fit for public
consumption.  Expect something in a month or so.  It'll come with a
rudimentary test suite for our demo petshop app.

There are many web testers out there.  To put it bluntly, they don't
let you write maintainable test suites.  The key to maintainability is
being able to define your own domain specific language.  Just like
writing maintainable code, you have to encapsulate commonality and
behavior.  The scripts should be short and only contain the details
pertinent to the particular test.  Perl is ideal for this, because you
can easily create domain specific languages.

Rob



Re: UI Regression Testing

2002-01-25 Thread Perrin Harkins

 There are many web testers out there.  To put it bluntly, they don't
 let you write maintainable test suites.  The key to maintainability is
 being able to define your own domain specific language.

Have you tried webchat?  You can find webchatpp on CPAN.




Re: UI Regression Testing

2002-01-25 Thread David Wheeler

On Fri, 2002-01-25 at 10:12, Perrin Harkins wrote:

 Have you tried webchat?  You can find webchatpp on CPAN.

Looks interesting, although the documentation is rather sparse. Anyone
know of more examples than come with it?

Thanks,

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
   Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread David Wheeler

On Fri, 2002-01-25 at 09:08, Perrin Harkins wrote:

snip /

 It's much better to build your system, profile it, and fix the bottlenecks.
 The most effective changes are almost never simple coding changes like the
 one you showed, but rather large things like using qmail-inject instead of
 SMTP, caching a slow database query or method call, or changing your
 architecture to reduce the number of network accesses or inter-process
 communications.

qmail-inject? I've just been using sendmail or, preferentially,
Net::SMTP. Isn't using a system call more expensive? If not, how does
qmail-inject work?

Thanks,

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
   Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]




Re: UI Regression Testing

2002-01-25 Thread Rob Nagler

 Have you tried webchat?  You can find webchatpp on CPAN.

Just had a look.  It appears to be a rehash of chat (expect) for the
web.  Great stuff, which is really needed and demonstrates the power
of Perl for test scripting.

But...

This is a bit hard to explain.  There are two types of XP testing:
unit and acceptance.  Unit testing is pretty clear in Perl circles
(ok, I have a thing or two to say about it, but not now :-).

Acceptance testing (aka functional testing) is traditionally handled
by a third party testing organization.  The test group writes scripts.
If they are testing GUIs, they click in scripts via a session
recorder.  They don't program anything.  There's almost no reuse,
and very little abstraction.

XP flips testing on its head.  It says that the programmers are
responsible for testing, not some 3rd party org.  The problem I have
found is that instead of programming the test suite, XPers script it,
using the same technology that a testing organization would use.  With
the advent of the web, this is a real shame.

HTTP and HTML are middleware.  You have full programmatic control to
test your application.  You can't control the web browser, so you
still need to do some ad hoc "how does it look" testing, but this
isn't the hard part.

The acceptance test suite is testing the system from the user's point
of view.  In XP, the user is the customer, and the customer writes
tests.  In my opinion, this means the customer writes tests in a pair
with a programmer.  The programmer's job is to create a language which
the user understands.

Here's an example from our test suite:

Accounting->setup_investment('AAPL');

The user knows what an investment is.  She also knows that AAPL is a
stock ticker.  This statement sets up the environment (using LWP to
the app) to execute tests such as entering dividends, buys, sells,
etc.

The test infrastructure must support the ability to create new
language elements with the ability to build elements using the other
elements.  This requires modularization, and today this means classes
and instances.  There's also a need for state management, just like
the request object in your web application.
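
To make that concrete, one such language element might be built like 
this (a sketch with made-up helper names; the real infrastructure 
differs):

package Accounting;
use strict;

# login(), create_club() and fill_form() stand in for lower-level
# building blocks that drive the app over HTTP (LWP underneath)
sub setup_investment {
    my ($class, $ticker) = @_;
    My::TestLang::login('test_user');
    My::TestLang::create_club('test_club');
    My::TestLang::fill_form('/investments/new', ticker => $ticker);
}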

Part of the packaging process we're going through is making it even
easier to create domain specific languages.  You actually want to
create lots of dialects, e.g. in our case this means investments, cash
accounts, member accounts, and message boards.  These dialects use
building blocks such as logging in, creating a club, and so on.  At
the bottom you use LWP or webchat.  However, the user doesn't care if
the interface is HTTP or Windows.  You're job as a test suite
programmer is meeting her domain knowledge, and abstracting away
details like webchat's CLICK and EXPECT OK.

In the end, your test suite is a domain knowledge repository.  It
contains hundreds of concise scenarios comprised of statements, or
facts, in knowledge base parlance.  The execution of the test suite
asserts that all the facts are true about your application.  The more
concise the test language, the more easily the user-tester can verify
that she has encoded her expertise correctly.

Rob



Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Matt Sergeant

On 25 Jan 2002, David Wheeler wrote:

 On Fri, 2002-01-25 at 09:08, Perrin Harkins wrote:

 snip /

  It's much better to build your system, profile it, and fix the bottlenecks.
  The most effective changes are almost never simple coding changes like the
  one you showed, but rather large things like using qmail-inject instead of
  SMTP, caching a slow database query or method call, or changing your
  architecture to reduce the number of network accesses or inter-process
  communications.

 qmail-inject? I've just been using sendmail or, preferentially,
 Net::SMTP. Isn't using a system call more expensive? If not, how does
 qmail-inject work?

With qmail, SMTP generally uses inetd, which is slow, or daemontools,
which is faster, but still slow, and more importantly, it anyway goes:

  perl -> SMTP -> inetd -> qmail-smtpd -> qmail-inject.

So with going direct to qmail-inject, your email skips out a boat load of
processing and goes direct into the queue.
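
A sketch of the direct injection (/var/qmail/bin is the standard qmail 
location; adjust for your install):

open INJ, '| /var/qmail/bin/qmail-inject'
    or die "can't fork qmail-inject: $!";
print INJ <<'EOM';
From: app@example.com
To: user@example.com
Subject: direct injection test

message body
EOM
close INJ or die "qmail-inject exited with $?";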

Of course none of this is relevant if you're not using qmail ;-)

-- 
!-- Matt --
:-Get a smart net/:-




Loading documents from a database

2002-01-25 Thread Michael A Nachbaur

This may sound strange, but bear with me.  I want to create an ApacheHandler
that will pull all the files in a virtualhost, not from the filesystem, but
from an RDBMS (built on PostgreSQL).  This includes .htaccess files, binary
files (e.g. pdf and images) and text files (e.g. html and xml).  I'm
building a content management system, and this is for the editing interface.
I realize this is going to be dog slow, but I don't care.

I wrote a handler that passes back regular files already, but my site is
built using XML (with AxKit), and it relies on my .htaccess files associating
XML files with their corresponding stylesheets.  Since Apache isn't loading
the .htaccess files using my handler, the stylesheets aren't being called.

Any clues on how I can get Apache to load the .htaccess files using my
handler?

Thanks

-man
Michael A Nachbaur




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Tatsuhiko Miyagawa

On Fri, 25 Jan 2002 21:15:54 + (GMT)
Matt Sergeant [EMAIL PROTECTED] wrote:

 
 With qmail, SMTP generally uses inetd, which is slow, or daemontools,
 which is faster, but still slow, and more importantly, it anyway goes:
 
   perl -> SMTP -> inetd -> qmail-smtpd -> qmail-inject.
 
 So with going direct to qmail-inject, your email skips out a boat load of
 processing and goes direct into the queue.
 
 Of course none of this is relevant if you're not using qmail ;-)

Yet another solution:

Use Mail::QmailQueue directly:
http://search.cpan.org/search?dist=Mail-QmailQueue


--
Tatsuhiko Miyagawa [EMAIL PROTECTED]




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread David Wheeler

On Fri, 2002-01-25 at 13:15, Matt Sergeant wrote:

 With qmail, SMTP generally uses inetd, which is slow, or daemontools,
 which is faster, but still slow, and more importantly, it anyway goes:
 
   perl -> SMTP -> inetd -> qmail-smtpd -> qmail-inject.
 
 So with going direct to qmail-inject, your email skips out a boat load of
 processing and goes direct into the queue.

Okay, that makes sense. In my activitymail CVS script I just used
sendmail.

 http://www.cpan.org/authors/id/D/DW/DWHEELER/activitymail-0.987

But it looks like this might be more efficient, if qmail happens to be
installed (not sure on SourceForge's servers).
 
 Of course none of this is relevant if you're not using qmail ;-)

Yes, and in Bricolage, I used Net::SMTP to keep it as
platform-independent as possible. It should work on Windows, even!
Besides, all mail gets sent during the Apache cleanup phase, so there
should be no noticeable delay for users.

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
   Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]




Re: slow regex [BENCHMARK]

2002-01-25 Thread Paul Mineiro

Paul Mineiro wrote:


 right.  I probably should've mentioned earlier that 'CGAT' x 5 is 
 really fast in both mod_perl and on the command line.

 if anybody wants my actual $seq data, please let me know.


I neglected to mention something big: the production version is 
identical but uses perl 5.005, and it doesn't have this problem.

perl -V of the production perl follows.

thanks,

-- p

Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
osname=linux, osvers=2.4.4-xfs, archname=i686-linux
uname='linux rock.codegrok.lab 2.4.4-xfs #8 smp wed may 30 17:37:44 
pdt 2001 i686 unknown '
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
cc='cc', optimize='-O2', gccversion=2.95.4 20010319 (Debian prerelease)
cppflags='-Dbool=char -DHAS_BOOL -I/usr/local/include'
ccflags ='-Dbool=char -DHAS_BOOL -I/usr/local/include'
stdchar='char', d_stdstdio=undef, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -lgdbm -ldbm -ldb -ldl -lm -lc -lcrypt
libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Built under linux
  Compiled at Oct 30 2001 10:33:04
  %ENV:

PERL5LIB=/home/codegrok/genegrokker-interface/lib/perl5:/home/codegrok/genegrokker-interface/ext/lib/perl5
  @INC:
/home/codegrok/genegrokker-interface/lib/perl5
/home/codegrok/genegrokker-interface/ext/lib/perl5
/home/codegrok/genegrokker-interface/ext/lib/perl5/5.00503/i686-linux
/home/codegrok/genegrokker-interface/ext/lib/perl5/5.00503

/home/codegrok/genegrokker-interface/ext/lib/perl5/site_perl/5.005/i686-linux
/home/codegrok/genegrokker-interface/ext/lib/perl5/site_perl/5.005
.





Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

 I even have a name for the project: Speedy Code Habits  :)
 
 The point is that I want to develop a coding style which tries hard to  
 do early premature optimizations.

I disagree with the POV you seem to be taking wrt write-time 
optimizations.  IMO, there are precious few situations where
writing Perl in some prescribed style will lead to the fastest code.
What's best for one code segment is often a mediocre (or even stupid)
choice for another.  And there's often no a priori way to predict this
without being intimate with many dirty aspects of perl's innards.

I'm not at all against divining some abstract _principles_ for
accelerating a given solution to a problem, but trying to develop a 
Speedy Style is IMO folly.  My best and most universal advice would 
be to learn XS (or better Inline) and use a language that was _designed_
for writing finely-tuned sections of code.  But that's in the
post-working-prototype stage, *not* before.

[...]

 mod_perl specific examples from the guide/book ($r->args vs. 
 Apache::Request::param, etc.)

Well, I've complained about that one before, and since the 
guide's text hasn't changed yet I'll try saying it again:  

  Apache::Request::param() is FASTER THAN Apache::args(),
  and unless someone wants to rewrite args() IN C, it is 
  likely to remain that way. PERIOD.

Of course, if you are satisfied using Apache::args, then it would
be silly to change styles.
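
If you want to check it on your own setup, a harness along these lines 
will do (a sketch to run inside a registry script; the count is 
arbitrary):

use Benchmark qw(timethese);
use Apache::Request ();

my $r   = Apache->request;
my $apr = Apache::Request->new($r);

timethese(10_000, {
    args  => sub { my %p = $r->args },
    param => sub { my %p = map { $_ => $apr->param($_) } $apr->param },
});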

YMMV
-- 
Joe Schaefer




Re: slow regex [BENCHMARK]

2002-01-25 Thread Paul Mineiro

Rob Mueller (fastmail) wrote:

I recently had a similar problem. A regex that worked fine in sample code
was a dog in the web-server code. It only happened with really long strings.
I tracked down the problem to this from the 'perlre' manpage.

   WARNING: Once Perl sees that you need one of $&, $`, or $'
anywhere in the program, it

snip


What I did in the end was something like this:

In the code somewhere add this so it's run when a request hits.

open(F, '>/tmp/modulelist');
print F join("\n", values %INC), "\n";
close(F);

This creates a file which lists all the loaded modules. Then after sticking
a request through the browser, do something like:

grep "\$'" `cat /tmp/modulelist`
grep "\$&" `cat /tmp/modulelist`
grep "\$\`" `cat /tmp/modulelist`

to try and track down the offending module. 
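
The same hunt can be done in pure Perl (a sketch; run it once everything 
is loaded):

for my $file (sort values %INC) {
    open my $fh, $file or next;
    while (<$fh>) {
        print "$file:$.: $_" if /\$&|\$`|\$'/;
    }
}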

Well, the good (bad?) news is that none of the modules in the module list 
have the expensive regex variables in them.

i've attached the module list, in case it is of interest.

thanks,

-- p



/home/aerives/genegrokker-interface/lib/perl/Keys.pm
/usr/share/perl/5.6.1/Carp.pm
/usr/share/perl/5.6.1/unicode/To/Upper.pl
/home/aerives/genegrokker-interface/lib/perl/PrimerFunctions.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/auto/Storable/autosplit.ix
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/Common.pm
/usr/share/perl/5.6.1/IO/Socket/UNIX.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/Annotation.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Constants.pm
/usr/share/perl/5.6.1/IO/Socket/INET.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/XML/Parser.pm
/usr/share/perl/5.6.1/strict.pm
/usr/share/perl/5.6.1/base.pm
/usr/share/perl/5.6.1/vars.pm
/usr/share/perl/5.6.1/utf8.pm
/usr/lib/perl/5.6.1/Config.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/Feature.pm
/home/aerives/genegrokker-interface/mod_perl/tools.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Crypt/CBC.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Digest/SHA1.pm
/usr/lib/perl/5.6.1/Data/Dumper.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Session.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Server.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/URI/Escape.pm
/home/aerives/genegrokker-interface/lib/perl/Validate.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/GskXmlProtocol.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Connection.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/User.pm
/usr/share/perl/5.6.1/Symbol.pm
/usr/share/perl/5.6.1/Exporter/Heavy.pm
/home/aerives/genegrokker-interface/mod_perl/genomic_img.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Storable.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Session/Lock/File.pm
/home/aerives/genegrokker-interface/mod_perl/genomicbrowser.pm
/home/aerives/genegrokker-interface/lib/perl/Authenticate.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/mod_perl.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/AnnotationSequence.pm
/home/aerives/genegrokker-interface/ext/var/tmp/genegrokker-interface-aerives/ssl_mod_perl_apache-startup.pl
/usr/share/perl/5.6.1/Benchmark.pm
/usr/lib/perl/5.6.1/IO/Handle.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/MD5.pm
/usr/lib/perl/5.6.1/Fcntl.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/XML/Parser/Expat.pm
/usr/lib/perl/5.6.1/IO/Seekable.pm
/usr/share/perl/5.6.1/Exporter.pm
/usr/lib/perl/5.6.1/IO/Socket.pm
/usr/share/perl/5.6.1/utf8_heavy.pl
/usr/lib/perl/5.6.1/Errno.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/SequenceRegion.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/Object.pm
/usr/lib/perl/5.6.1/DynaLoader.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Request.pm
/usr/share/perl/5.6.1/FileHandle.pm
/usr/share/perl/5.6.1/File/Spec/Unix.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Session/Serialize/Storable.pm
/usr/share/perl/5.6.1/SelectSaver.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/HTML/Template.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/DNAregex.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Apache/Session/Generate/MD5.pm
/usr/lib/perl/5.6.1/IO.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/auto/Storable/nfreeze.al
/usr/lib/perl/5.6.1/Socket.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/GD.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Genegrokker/Sequence.pm
/home/aerives/genegrokker-interface/lib/perl/GenegrokkerUtil.pm
/usr/lib/perl/5.6.1/IO/File.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/auto/Storable/_freeze.al
/usr/share/perl/5.6.1/integer.pm
/home/aerives/genegrokker-interface/lib/perl/SequenceNavigator.pm
/usr/lib/perl/5.6.1/XSLoader.pm
/home/aerives/genegrokker-interface/ext/lib/perl5/Digest/MD5.pm
/usr/share/perl/5.6.1/File/Spec.pm

Re: UI Regression Testing

2002-01-25 Thread Gunther Birznieks

I suppose it depends on what you want out of testing.

Frequently, unit testing is OK in simple applications. But in an 
application whose job it is to communicate with a mainframe or back-end 
databases, frequently the tests you might perform are based on some 
previous persistent state of the database.

It takes a lot of effort, in other words. For example, 2-3 programmers could 
easily get away (through normal testing) with sharing a database. But 
generally, if you require each programmer to have their own database and 
their own complete system, where they are constantly wiping the state of 
the database to perform a test suite, this can get time consuming and 
entails a lot of infrastructural overhead.

I've seen some articles that demonstrate doing something with test XML 
backends to emulate database retrieval results, but these seem quite hard 
to set up also.

I agree that testing is great, but I think it is quite hard in practice. 
Also, I don't think programmers are good to be the main people to write 
their own tests. It is OK for programmers to write their own tests but 
frequently it is the user or a non-technical person who is best at doing 
the unexpected things that are really where the bug lies.

The other annoying thing about programmers writing tests is deciding where 
to stop. I believe the HTTP level for tests is really good. But I see much 
unit testing being done on the basis of writing a test class for every 
class you write. Ugh! That means that any time you refactor you throw away 
2x the coding you did.

To some degree, there should be intelligent rules of thumb as to which 
interfaces tests should be written to because the extreme of writing tests 
for everything is quite bad.

Finally, unit tests do not guarantee an understanding of the specs because 
the business people generally do not read test code. So after all the time 
spent writing the test AND then writing the program AND ONLY THEN showing 
it to the users, you discover it wasn't what the user actually wanted. So 
2x the coding time has been invalidated, when if the user had been shown a 
prototype BEFORE the test coding commenced, the user could have confirmed 
or denied the basic logic.

The same frequently and especially is true for UIs.

Later,
Gunther




Re: UI Regression Testing

2002-01-25 Thread Rob Nagler

Gunther Birznieks writes:
 the database to perform a test suite, this can get time consuming and 
 entails a lot of infrastructural overhead.

We haven't found this to be the case.  All our database operations are
programmed.  We install the database software with an RPM, run a
program to build the database, and program all schema upgrades.  We've
had 194 schema upgrades in about two years.

 unit testing being done on the basis of writing a test class for every 
 class you write. Ugh! That means that any time you refactor you throw away 
 2x the coding you did.

By definition, refactoring doesn't change observable behavior.  You
validate refactorings with unit tests.  See http://www.refactoring.com

 To some degree, there should be intelligent rules of thumb as to which 
 interfaces tests should be written to because the extreme of writing tests 
 for everything is quite bad.

Again, we haven't seen this.  Every time I don't have unit tests, I
get nervous.  How do I know if I broke something with my change?
 
 Finally, unit tests do not guarantee an understanding of the specs because 
 the business people generally do not read test code. So all the time spent 
 writing the test AND then writing the program AND ONLY THEN showing it to 
 the users, then you discover it wasn't what the user actually wanted. So 2x 
 the coding time has been invalidated when if the user was shown a prototype 
 BEFORE the testing coding commenced, then the user could have confirmed or 
 denied the basic logic.

Unit tests aren't about specs.  They are about APIs.  Acceptance tests
need to be written by the user or written so the user can understand
them.  You need both kinds of testing.
See http://www.xprogramming.com/xpmag/Reliability.htm

Rob



Re: UI Regression Testing

2002-01-25 Thread Ed Grimm

On Sat, 26 Jan 2002, Gunther Birznieks wrote:

 I agree that testing is great, but I think it is quite hard in practice. 
 Also, I don't think programmers are good to be the main people to write 
 their own tests. It is OK for programmers to write their own tests but 
 frequently it is the user or a non-technical person who is best at doing 
 the unexpected things that are really where the bug lies.

My experience is that the best testers come from technical support,
although this is not to suggest that all technical support individuals
are good at this; even among this group, it's rare.  Users or other
non-technical people may find a few more bugs, but frequently, the
non-technical people don't have the ability to correctly convey how to
reproduce the problems, or even what the problem was: "I clicked on the
thingy, and it didn't work."

This being said, users and tech support can't create unit tests; they're
not in a position to.

 Finally, unit tests do not guarantee an understanding of the specs because 
 the business people generally do not read test code. So after all the time 
 spent writing the test AND then writing the program AND ONLY THEN showing 
 it to the users, you discover it wasn't what the user actually wanted. So 
 2x the coding time has been invalidated, when if the user had been shown a 
 prototype BEFORE the test coding commenced, the user could have confirmed 
 or denied the basic logic.

For your understanding of the spec, you use functional tests.  If your
functional test suite uses test rules which the users can understand,
you can get the users to double-check them.

For example, at work, we use a suite which uses a rendered web page as
its test output, and the input can be sent to a web page to populate a
form; this can be read by most people who can use the application.

Unit software is a means of satisfying a spec, but it doesn't satisfy
the spec itself - if it did, you'd be talking about the entire package,
and therefore referring to functional testing.  (At least, this is the
way I distinguish between them.)

Admittedly, we are a bit lacking in our rules, last I checked.

Ed




Apache::args vs Apache::Request speed

2002-01-25 Thread Stas Bekman

Joe Schaefer wrote:


mod_perl specific examples from the guide/book ($r->args vs. 
Apache::Request::param, etc.)

 
 Well, I've complained about that one before, and since the 
 guide's text hasn't changed yet I'll try saying it again:  
 
   Apache::Request::param() is FASTER THAN Apache::args(),
   and unless someone wants to rewrite args() IN C, it is 
   likely to remain that way. PERIOD.
 
 Of course, if you are satisfied using Apache::args, then it would
 be silly to change styles.

Well, I've run the benchmark and it wasn't the case. Did it change 
recently? Or do you think that the benchmark is not fair?

we are talking about this item
http://perl.apache.org/guide/performance.html#Apache_args_vs_Apache_Request

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: UI Regression Testing

2002-01-25 Thread Perrin Harkins

 Gunther Birznieks writes:
  the database to perform a test suite, this can get time consuming and
  entails a lot of infrastructural overhead.

 We haven't found this to be the case.  All our database operations are
 programmed.  We install the database software with an RPM, run a
 program to build the database, and program all schema upgrades.  We've
 had 194 schema upgrades in about two years.

But what about the actual data?  In order to test my $product->name()
method, I need to know what the product name is in the database.  That's
the hard part: writing the big test data script to run every time you
want to run a test (and probably losing whatever data you had in that
database at the time).

This has been by far the biggest obstacle for me in testing, and from
Gunther's post it sounds like I'm not alone.  If you have any ideas
about how to make this less painful, I'd be eager to hear them.

- Perrin




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Stas Bekman

Perrin Harkins wrote:

The point is that I want to develop a coding style which tries hard to
do early premature optimizations.

 
 We've talked about this kind of thing before.  My opinion is still the same
 as it was: low-level speed optimization before you have a working system is
 a waste of your time.
 
 It's much better to build your system, profile it, and fix the bottlenecks.
 The most effective changes are almost never simple coding changes like the
 one you showed, but rather large things like using qmail-inject instead of
 SMTP, caching a slow database query or method call, or changing your
 architecture to reduce the number of network accesses or inter-process
 communications.

It all depends on what kind of application you have. If your code is 
CPU-bound, these seemingly insignificant optimizations can have a very 
significant influence on the overall service performance. Of course, if 
your app is IO-bound or depends on some external component, then your 
argumentation applies.

On the other hand, how often do you get a chance to profile your code and 
see how to improve its speed in the real world? Managers never plan for a 
debugging period, let alone optimization periods. And while premature 
optimizations are usually evil, as they will bite you later, knowing the 
differences between coding styles does help in the long run, and I don't 
consider these as premature optimizations.

Definitely this discussion has no end. Everybody is right in their 
particular project, since no two projects are the same.

All I want to say is that there is no one-size-fits-all solution in Perl, 
because of TIMTOWTDI, so you can learn a lot from running benchmarks and 
picking your favorite coding style, and change it as the language 
evolves. But you shouldn't blindly apply the outcomes of these benchmarks 
without running your own.

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: UI Regression Testing

2002-01-25 Thread Stas Bekman

David Wheeler wrote:

 Hi All,
 
 A big debate is raging on the Bricolage development list WRT CVS
 configuration and application testing.
 
 
http://www.geocrawler.com/mail/thread.php3?subject=%5BBricolage-Devel%5D+More+on+Releases&list=15308
 
 It leads me to a question about testing. Bricolage is a monster
 application, and its UI is built entirely in HTML::Mason running on
 Apache. Now, while we can and will do a lot more to improve the testing
 of our Perl modules, we can't really figure out a way to automate the
 testing of the UI. I'm aware of the the performance testing utilities
 mentioned in the mod_perl guide -- 
 
   http://perl.apache.org/guide/performance.html
 
 -- but they don't seem to be suited to testing applications.
 
 Is anyone familiar with how to go about setting up a test suite for a
 web UI -- without spending an arm and a leg? (Remember, Bricolage is an
 OSS effort!).

You probably also need some good back-end engine for testing since you 
most likely need to test against a live Apache/mod_perl. If that's the
case you should try to use the new Apache::Test framework that will be 
released with mod_perl 2.0. You can get it from here:
http://cvs.apache.org/snapshots/modperl-2.0/ (or use cvs)
Some docs are here:
http://cvs.apache.org/viewcvs.cgi/modperl-docs/src/docs/2.0/devel/testing/testing.pod

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: UI Regression Testing

2002-01-25 Thread Tatsuhiko Miyagawa

On Sat, 26 Jan 2002 00:23:40 -0500
Perrin Harkins [EMAIL PROTECTED] wrote:

 But what about the actual data?  In order to test my $product->name()
 method, I need to know what the product name is in the database.  That's
 the hard part: writing the big test data script to run every time you
 want to run a test (and probably losing whatever data you had in that
 database at the time).
 
 This has been by far the biggest obstacle for me in testing, and from
 Gunther's post it sounds like I'm not alone.  If you have any ideas
 about how to make this less painful, I'd be eager to hear them.

You're not alone ;) Here is my solution:

* All datasources are maintained in a separate config file
* Generate a config file for testing
* Create a database and tables for testing (called test_foo)
* Insert dummy data into test_foo
* Test 'em
* Drop the dummy data

Then my test script has both client side testing and server side
testing, like this.

  use Test::More 'no_plan';
  use LWP::UserAgent;
  use HTTP::Request::Common qw(GET);
  use HTML::Form;

  BEGIN { do 'db_setup.pl'; }
  END   { do 'db_teardown.pl'; }

  # server-side
  my $product = Product->create({ name => 'foo' });

  # client-side
  my $ua  = LWP::UserAgent->new;
  my $res = $ua->request(GET '/foo/bar');
  like $res->content, qr/foo/;

  my $form = HTML::Form->parse($res->content, $res->base);
  my $req2 = $form->click;
  my $res2 = $ua->request($req2);
  like $res2->content, qr/blah/;

  # server-side
  my @p = Product->retrieve_all;
  is @p, 2;




--
Tatsuhiko Miyagawa [EMAIL PROTECTED]




cvs commit: modperl-2.0/t/apache .cvsignore

2002-01-25 Thread stas

stas02/01/25 00:17:58

  Modified:t/apache .cvsignore
  Log:
  - ignore file adjst.
  
  Revision  ChangesPath
  1.4   +2 -0  modperl-2.0/t/apache/.cvsignore
  
  Index: .cvsignore
   ===================================================================
  RCS file: /home/cvs/modperl-2.0/t/apache/.cvsignore,v
  retrieving revision 1.3
  retrieving revision 1.4
  diff -u -r1.3 -r1.4
  --- .cvsignore18 Dec 2001 01:56:47 -  1.3
  +++ .cvsignore25 Jan 2002 08:17:58 -  1.4
  @@ -1,5 +1,6 @@
   cgihandler.t
   compat.t
  +compat2.t
   conftree.t
   constants.t
   post.t
  @@ -7,3 +8,4 @@
   scanhdrs.t
   write.t
   subprocess.t
  +