Re: The spewing problem.

2008-01-15 Thread Sam Vilain
Aristotle Pagaltzis wrote:
> * Geoffrey Young [EMAIL PROTECTED] [2008-01-14 17:05]:
>> it's useful to me because I say it is. personally I don't feel
>> the need to defend something many people would like to see, like
>> we're being forced to here.
>
> Yeah, agreed. Why is everyone so dogmatic and prescriptive? What
> happened to giving people enough rope to hang themselves if they
> really want to?

Why, feature creep of course.  I like the way Grant writes it in this talk:

http://wellington.pm.org/archive/200702/grant_mclean__xml_simple_sucks/index.html

I think that the proposed feature is possible to do without being too
complex or hard to maintain, and perhaps even a useful addition to
Test::Builder for making scripts based on it more functional.  However,
I have yet to see anything other than a monkeypatch on Ovid's journal
and an incomplete patch linked earlier in the thread.

If you think it's so useful and simple, POST A PATCH.  And don't attach
the damned thing; put it inline so it can be reviewed and commented on
more easily, as patches were intended to be.  Don't expect people to do
the work for you.

Sam.


Re: The spewing problem.

2008-01-14 Thread Sam Vilain
Matisse Enzer wrote:
>>>> Ok, why do you want to stop it as fast as possible when a failure
>>>> occurs?
>>> So I can more quickly focus on fixing that first test failure.
>> I use
>>   make test 2>&1 | less
>> Works for individual tests too
>>   make && perl -Mblib t/testname.t 2>&1 | less
> I don't see how this stops running the test suite upon the first
> failure, can you explain please?

Sure.  What happens is that less doesn't read all of its input.  As
soon as the script emits more than 8k of output (or whatever the pipe
buffer happens to be), it blocks on the next write() call that it makes
and doesn't run any more tests.  If you quit less, the test script gets
a SIGPIPE and probably quits.

Ok, that doesn't stop at the first failure, but you'll always see the
first failure on your screen with that.
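The effect is easy to see in isolation with a pair of standard tools (a
minimal sketch; 'yes' stands in for the chatty test script and 'head'
for a pager that has stopped reading):

```shell
# 'yes' writes "y" lines forever; 'head' reads one line and exits.
# When head exits, the next write() from yes hits a closed pipe and
# yes is killed by SIGPIPE - the same mechanism described above.
yes | head -n 1
```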

Or, back to the question Schwern posed,

> Ok, it's nice to want things, but why do you want it?

Still curious - perhaps you can explain more about why you think this
is a useful thing.

Sam.


Re: The spewing problem.

2008-01-14 Thread Sam Vilain
Geoffrey Young wrote:
> schwern has a valid point in not wanting to lose
> diagnostics upon implementing this feature, but outside of that it
> wastes too many cycles going back and forth like this over what is a
> pretty minimal feature.

Stop wasting cycles arguing, and start posting patches then.  If it's
that minimal to implement, show us the code, and it can be reviewed.

Sam.


Re: The spewing problem.

2008-01-13 Thread Sam Vilain
Matisse Enzer wrote:
>> Ok, why do you want to stop it as fast as possible when a failure
>> occurs?
> So I can more quickly focus on fixing that first test failure.

I use

  make test 2>&1 | less

Works for individual tests too

  make && perl -Mblib t/testname.t 2>&1 | less

Sam.


Re: The spewing problem.

2008-01-12 Thread Sam Vilain
Michael G Schwern wrote:
> Paul Johnson wrote:
>> This is something that I too have asked for in the past.  I've even
>> hacked up my own stuff to do it, though obviously not as elegantly as
>> you or Geoff.  Here's my use case.
>>
>> I have a bunch of tests that generally pass.  I hack something
>> fundamental and run my tests.  Loads of them fail.  Diagnostics spew
>> over my screen.  Urgh, I say.  Now I could scroll back through them.
>
> When faced with a tough problem it's often useful to go back and check
> that it's actually the problem and not a solution posing as a problem.
>
> "Make Test::Builder die on failure" is a solution, and it's not a
> particularly good one.  It's hard to implement in Test::Builder and
> there's all the loss of information issues I've been yelping about.
>
> The problem I'm hearing over and over again is "Test::Builder is
> spewing crap all over my screen and obscuring the first, real failure."
> So now that the problem is clearly stated, how do we solve it without
> making all that spew (which can be useful) totally unavailable?

This is one of the reasons I wrote Test::Depends - it means, for
instance, that if you have a module load failure, the later tests all
fail with a smaller message.

I personally have very little sympathy for people who can't write 'or
die' at the end of their assertions (or, better, 'or skip ...').
ESPECIALLY if a side effect of catering for them impacts on the way the
harness works.

Sam.


Re: Test::Aggregate - Speed up your test suites

2008-01-01 Thread Sam Vilain
Ovid wrote:
>> Why not just load Perl once and fork for the execution of each test
>> script.  You can pre-load modules before you fork.
>
> Forking is also more likely to be used for parallelization.  Often code
> requires sweeping changes before it can be run in parallel.  So this
> means we're reduced to running the code sequentially and forking
> doesn't offer a huge advantage and can mask hidden state assumptions
> like when naughty code is munging globals such as filehandles or
> built-in globals.
>
> Also, since forking is only emulated on Windows, it's not reliable
> (I've had it crash and burn more than once).  I prefer to avoid writing
> modules that are limited to specific platforms.
>
> (I'm not saying forking is a bad solution, just a different one).
>
> Finally, Test::Aggregate is designed to have tests run with minimal
> changes.  For many tests, just move them to the aggregate directory.
> No worries about which modules to preload or anything like that.
>
> Finally, if you think my code is such a bad idea, I'm sure folks would
> welcome alternatives.

No no, I was just wondering why that approach; it just seemed quite
odd.  You've now explained it quite nicely.  Actually a large part of
my initial reaction was due to the use of the word "concatenation".
Looking at the module documentation I see that it's not anywhere near
as simplistic as that.

Aggregating tests is something that I do a lot of; it's just that
normally I'm writing data-driven tests - and on larger code bases the
module load time can end up taking a non-trivial amount of time.  I
only care about loading the modules as a part of the test for the first
couple of tests; for the others I just use Test::Depends or something
similar to skip them if the module fails to load.  So, in the general
case I can probably pre-load the lib/ modules for all but a few
specially marked tests.  However, the usual problematic boundary
between the harness and the test is still there.  How do you solve this
for Test::Aggregate - is it by emitting one test at the TAP level for
each aggregated test?

Sam


Re: Test::Aggregate - Speed up your test suites

2007-12-30 Thread Sam Vilain
Ovid wrote:
> If you have slow test suites, you might want to give it a spin and see
> if it helps.  Essentially, it concatenates tests together and runs them
> in one process.  Thus, you load Perl only once and load all modules
> only once.

Yuck.

Why not just load Perl once and fork for the execution of each test
script?  You can pre-load modules before you fork.

Sam.


Re: TAP historical versions

2007-03-13 Thread Sam Vilain
Sam Vilain wrote:
> I just gave the cg- commands initially because I didn't want to write
> this git-core equivalent in public:
>
>   mkdir perl
>   cd perl
>   git-init
>   git-remote add catalyst git://git.catalyst.net.nz/perl.git
>   git-config remote.catalyst.fetch \
>      '+refs/heads/restorical:refs/remotes/restorical'
>   git-fetch catalyst
>   git-checkout -b master restorical

Shawn Pearce has pointed out this much more straightforward sequence:

  mkdir perl
  cd perl
  git init
  git remote add -t restorical -f catalyst \
   git://git.catalyst.net.nz/perl.git
  git checkout -b master catalyst/restorical

Sam.


Re: TAP historical versions

2007-03-12 Thread Sam Vilain
Sam Vilain wrote:
> You can add them all as branches with that cg-branch-add command then
> suck them all down with a big cg-fetch command. Another option is to
> just grab the lot with git-clone.

Forgot to say, that's almost a 200MB download at the moment.

> Actually if you've got the lot, then this will crank up the graphical
> history browser showing just commits that changed that file:
>
>   gitk --all t/TEST

And here's the teaser for that ;-)

  http://utsl.gen.nz/git/gitk-on-tTEST.png

> Which is probably going to be more fun than wading through pages like
> this:
>
>   http://git.catalyst.net.nz/gitweb2?p=perl.git;a=history;f=t/TEST;h=p4-perl;hb=p4-perl
>
> Sam.



Re: TAP historical versions

2007-03-12 Thread Sam Vilain
Michael G Schwern wrote:
> cg-branch-add p4-perl git://git.catalyst.net.nz/perl.git#p4-perl
> cg-fetch p4-perl
> cg-switch p4-perl
>
> cg-switch: refusing to switch to a remote branch - see README for
> lengthy explanation; use cg-seek to just quickly inspect it

Oops, yeah, my mistake.  cg-seek is what you need there; cogito won't
let you switch to that because it considers it a remote branch (i.e. a
tracking branch - a 'mirror path' in svk terms).  'cg-status' shows
these with an 'R'.

This is the cogito way to make a local branch based on a remote branch:

  cg-switch -r p4-perl somelocalname

git-log also accepts a revision to start from:

  git-log p4-perl t/TEST

To confound matters, the remote tracking support has been through
several revisions.

First, git-core just had a remotes file that specified which refs (i.e.
branches) on the upstream side get converted to refs locally, and all
the branches were in the same namespace.  Files in .git/remotes/*

Then, cogito allowed branches to be remote branches, that it would
refuse to commit to, and display specially, with the cg-branch-*
commands to map to remote places.  Files in .git/branches/*

Recently (git 1.5+) git-core re-invented them in a more flexible and
different way (see git-remote, git-config --global color.branch auto
and git-branch -a -v).  Sections in .git/config
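For illustration, the git 1.5+ form of the remote described earlier
would look roughly like this in .git/config (a sketch of the syntax,
using the same refspec as above):

```ini
[remote "catalyst"]
	url = git://git.catalyst.net.nz/perl.git
	fetch = +refs/heads/restorical:refs/remotes/restorical
```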

However, the only real side effect of this mess is ending up with junk
refs.  In general just ignore them, you'll see when it's safe to delete
them later when you get more familiar with the concept of the commit DAG.

I just gave the cg- commands initially because I didn't want to write
this git-core equivalent in public:

  mkdir perl
  cd perl
  git-init
  git-remote add catalyst git://git.catalyst.net.nz/perl.git
  git-config remote.catalyst.fetch \
 '+refs/heads/restorical:refs/remotes/restorical'
  git-fetch catalyst
  git-checkout -b master restorical

In terms of a tutorial... well, yeah, not sure.  I'm writing one that's
more of a 'working with projects still using SVN repositories with
git-svn' tutorial, which doesn't really cover this case very well.
There's a guy doing lots of work on the git user manual, which by now
is getting quite complete.  The nice thing about that is that it ships
with git and shouldn't get stale like on-line tutorials do.

I should probably confess that my git training has included two long
talks from Martin Langhoff, and a 1½ hour internals demo from Linus at
LCA last year.  And of course, rigorous experimentation...

Sam