Dropping 5.5 support from my modules.

2007-11-18 Thread Michael G Schwern
This is an announcement that my modules will no longer try to be backwards
compatible with 5.5.x.  This includes ExtUtils::MakeMaker and Test::More.
Toolchain modules will now target 5.6.0.  Modules not part of the build
toolchain will be moving up to 5.8.0.

This doesn't mean I'm going to go right now and bust compatibility; I just
won't be checking for it anymore or doing any more free work to support it (if
someone wants to pay for it, that's fine).  Patches will still be accepted as
long as they don't cause too much disruption.

The consequence of this change is that 5.5 is effectively end-of-lifed.
Without MakeMaker or Test::More support, most modules cannot be installed.
Eventually modules will start to use new features of Test::More and MakeMaker,
and 5.5 users will be unable to upgrade.  The upside is that this de facto
releases everyone else from having to support 5.5.  If one part of the chain
won't work, there's no point in putting effort into the rest.

Why make this change now?  I've always been frustrated at being hamstrung from
using "new" features of perl.  The Perl Survey results are what pushed me over
the edge. [1]  Only 6% of respondents said they used 5.5.x as their *minimum*
version of Perl in the last year, and 0.5% said 5.5.x was their max.  As we
didn't get specific about what they used those versions for, I suspect a lot
of those are CPAN testers who reported it as their min.  I rarely get bug
reports from actual 5.5 users; they're almost always from CPAN testers.  Thus,
it's not a large enough population for me to spend my unpaid time and effort
on, or to delay new features for.

Also, I think SEVEN YEARS (that's how long ago 5.6.0 came out; five for 5.8.0)
is long enough for folks to get around to upgrading.

Finally, I'm coming around to chromatic's philosophy: why are we worrying
about the effect of upgrades on users who don't upgrade?  Alan Burlison's
comments about Solaris vs Linux are telling: if you're more worried about
supporting your existing users than finding new ones, you're dead.


[1]  Yes, I realize we have no clear idea of what portion of the actual Perl
population the survey represents, but some information is better than no
information and frankly I'm sick of 5.5 anyway.


-- 
If at first you don't succeed--you fail.
-- "Portal" demo


Re: Test quality

2007-11-18 Thread Matisse Enzer


On Nov 18, 2007, at 3:50 AM, nadim khemir wrote:

> What are your thoughts and ways of working that avoid the problems to
> start with?


I organize my test files using an approach similar to NUnit: I create a
bunch of subroutines that each do a few assertions, and that call set_up()
and tear_down().


In my foo.t file I might have:

#---
use strict;
use warnings;
use Readonly;
use Test::More tests => 10;

# The class name must be quoted; a bareword here won't compile under strict.
Readonly my $CLASS_UNDER_TEST => 'Foo::Bar';
use_ok($CLASS_UNDER_TEST);

test_foo();
test_bar();
test_baz();

exit;

# Build a fresh fixture object for each test subroutine.
sub set_up {
    my $object = Foo::Bar->new();
    return $object;
}

# Release whatever fixtures a test created.
sub tear_down {
    my @things_to_destroy = @_;
    foreach my $thing (@things_to_destroy) {   # 'my' is required under strict
        undef $thing;
    }
    return 1;
}

sub test_foo {
    my $foo = set_up();
    # do some assertions using $foo
    return tear_down($foo);
}

sub test_bar {
    my $foo = set_up();
    # do some assertions using $foo
    return tear_down($foo);
}

sub test_baz {
    my $foo = set_up();
    # do some assertions using $foo
    return tear_down($foo);
}

#---





---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/  - http://www.eigenstate.net/





Re: My list of small quirks

2007-11-18 Thread Matisse Enzer


On Nov 18, 2007, at 3:50 AM, Michael G Schwern wrote:

> I start at the top, read the first few failures, fix them and rerun.  I
> ignore the bulk of a really large failure as they're probably just cascades
> of the one mistake.


This reminds me - I was wondering what it would take to implement a
"BAIL_ON_FAIL" approach to running a test suite: a setting that determines
how many failures are allowed before the whole test run stops.  The default
would be to keep running no matter how many failures, but you could set it
to 1 and then, bam, the whole test run stops on the first failure.
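
Just to sketch what I mean, here's a toy loop on top of the TAP::Parser that
ships with the new Test::Harness 3 - this is my own made-up runner, not an
existing option, and the test file name is invented:

use strict;
use warnings;
use TAP::Parser;

my $max_failures = 1;    # the "stop on the first failure" setting
my $failures     = 0;

# Run the test in a child perl and parse its TAP as it streams in.
my $parser = TAP::Parser->new( { exec => [ $^X, 't/foo.t' ] } );
while ( my $result = $parser->next ) {
    print $result->as_string, "\n";
    if ( $result->is_test && !$result->is_ok ) {
        last if ++$failures >= $max_failures;    # bail out early
    }
}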


-M

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/  - http://www.eigenstate.net/





Re: New proposed CPANTS metric: prereq_matches_use

2007-11-18 Thread Matisse Enzer


On Nov 18, 2007, at 7:25 AM, Andreas J. Koenig wrote:

> Even if it's in the perl core, the developer may have compiled with
>
>    -Dnoextensions=Encode
>
> In such a case Encode is not present. I have skipped Encode many times
> because it takes up so much time; others may do likewise.

So I think the bottom line here is: list them ALL in Makefile.PL / Build.PL.


Hmm, sounds like I should create a PPI-based utility that walks a code tree,
finds all 'use' and 'require' statements, and makes a list for potential use
in Makefile.PL / Build.PL.
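
Something like this, leaning on PPI's documented PPI::Statement::Include
interface - the file-name filter and the default directory are just
placeholders:

#!/usr/bin/perl
# Walk a source tree and list every module pulled in via 'use' or
# 'require'.  A sketch, not a finished tool.
use strict;
use warnings;
use File::Find;
use PPI;

my %modules;
find(
    sub {
        return unless /\.(?:pm|pl|t)\z/;    # placeholder file filter
        my $doc = PPI::Document->new($File::Find::name) or return;
        my $includes = $doc->find('PPI::Statement::Include') or return;
        for my $inc (@$includes) {
            next if $inc->pragma;               # skip strict, warnings, etc.
            my $module = $inc->module or next;  # skip 'use 5.006' lines
            $modules{$module} = 1;
        }
    },
    @ARGV ? @ARGV : '.',
);

print "$_\n" for sort keys %modules;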


---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/  - http://www.eigenstate.net/





Re: New proposed CPANTS metric: prereq_matches_use

2007-11-18 Thread Andreas J. Koenig
> On Sat, 17 Nov 2007 21:47:57 -0800, Matisse Enzer <[EMAIL PROTECTED]> said:

  > On Nov 15, 2007, at 8:04 PM, A. Pagaltzis wrote:

  >> So in order to make everything work robustly, distros should
  >> explicitly list every single module they explicitly use – no
  >> shortcuts, no implications.

  > Basically, I agree completely, with the possible exception of modules
  > that are in the Perl core - the standard libraries. On the other
  > hand, if a specific version of a standard library is required then it
  > most certainly should be listed, for example:

  >   # In Something.pm
  >   use File::HomeDir 0.66;

  > and

  >   # In Makefile.PL
  >   PREREQ_PM => { 'File::HomeDir' => '0.66' },

Even if it's in the perl core, the developer may have compiled with 

-Dnoextensions=Encode

In such a case Encode is not present. I have skipped Encode many times
because it takes up so much time; others may do likewise.

-- 
andreas


Test quality

2007-11-18 Thread nadim khemir
Hi.  This mail is not about discussing what quality, or test quality, is.  It
is about what quality our 'test files' have.

I run Test::Fixme, Kwalitee, Perl::Critic, etc. on my modules, but none of
them is run on my tests.  Tests have a tendency to become a mess, be
undocumented, etc.

What are your thoughts and ways of working that avoid the problems to start
with?  And is there a way to run those test modules on the tests themselves;
a kind of Test::Kwalitee::Tests?
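
The closest I can imagine is pointing the existing tools at t/ myself,
something like the following - assuming Test::Perl::Critic's all_critic_ok()
accepts a directory (the file name and severity are made up), and this only
covers the Perl::Critic part:

# critic-tests.t (hypothetical author test, run by hand)
use strict;
use warnings;
use Test::Perl::Critic ( -severity => 4 );

all_critic_ok('t');    # criticize the tests themselves, not lib/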

Cheers, Nadim.


Re: My list of small quirks

2007-11-18 Thread Michael G Schwern
nadim khemir wrote:
> I spend a rather large amount of time writing and running tests. There are
> a few things that could be better. I either don't know how, or it may not
> be possible. I thought we could share some of the questions and ideas that
> can make working with tests more pleasant. This should go into a Q&A, I
> guess.

Much of what you bring up here has been or can finally be addressed with
Test::Harness 3, which was just released, and its underlying TAP::Parser.


> - 'list of failed' is not very useful
>
> Failed Test               Stat Wstat Total Fail  List of Failed
> ---------------------------------------------------------------
> t/010_shared_variables.t     1   256    12    1  11
>
> Scroll up, look at the errors, scan the text for a line number, fix the
> error.  The error index given in the summary is almost never used.

Test::Harness 3 has a revamped summary.  It's open to more work and the code
is finally sane.


> I think it would be better to get the test line. Is there a way to do
> that in the current harness, or in the future harness?

Yes, the information can finally be made available to the harness in a
parsable form and Test::Harness can finally do something with it.  See
http://search.cpan.org/dist/Test-More-Diagnostic


> - Colors in test outputs:
> TAP::Harness::Color is nice, but Module::Build doesn't use it. Does anyone
> have plans to put it in?

As TAP::Harness::Color is experimental, no.  However, the way formatters work
in TAP::Harness is still being worked on.


> - Too much output:
> My desktop is my IDE (sometimes my terminal is my IDE) and I like it that
> way; IDEs are too often in the way (or eating 1 GB of memory and CPU
> cycles (Eclipse)). But I must admit that when there are lots of test
> failures I would like to see the results organized instead of getting a
> thousand-line dump. How do you guys cope?

I start at the top, read the first few failures, fix them and rerun.  I ignore
the bulk of a really large failure as they're probably just cascades of the
one mistake.

With TAP::Parser you finally have the ability to write your own displayer.
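
For example, a bare-bones "failures only" displayer could be as little as
this (a sketch using TAP::Parser's documented result methods; I'm assuming
it will take a test file name as its source, and the file name is from your
example):

use strict;
use warnings;
use TAP::Parser;

my $parser = TAP::Parser->new( { source => 't/010_shared_variables.t' } );
while ( my $result = $parser->next ) {
    # Skip everything except failing test lines.
    next unless $result->is_test && !$result->is_ok;
    printf "FAILED test %s: %s\n", $result->number, $result->description;
}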


> - Coverage per test:
> Is there a way to get that?

Dunno.


> - Idea: I sometimes write my code in the test files.
> 
> -
> file A.pm:
> 
> package A;
> sub s1 { s2() }
> 
> 
> file t/00X_test.t:
> 
> package A;
> 
> use Test::More;
> sub s2 { .., diag, .}
> 
> 
> package main;
> 
> # all the usual testing
> -
> 
> That's very handy, but it also shows a pattern: debugging code that has
> tests versus debugging code that you would normally run, or run in the
> debugger. I don't want yet another framework; everything is set up in the
> test, but the test steals my output so I have to 'diag' things out. Any
> other way you know of?

I don't get it.


-- 
Don't try the paranormal until you know what's normal.
-- "Lords and Ladies" by Terry Prachett


My list of small quirks

2007-11-18 Thread nadim khemir
Hi,

I spend a rather large amount of time writing and running tests.  There are
a few things that could be better.  I either don't know how, or it may not be
possible.  I thought we could share some of the questions and ideas that can
make working with tests more pleasant.  This should go into a Q&A, I guess.

- 'list of failed' is not very useful

Failed Test               Stat Wstat Total Fail  List of Failed
---------------------------------------------------------------
t/010_shared_variables.t     1   256    12    1  11

Scroll up, look at the errors, scan the text for a line number, fix the
error.  The error index given in the summary is almost never used.  I think
it would be better to get the test line.  Is there a way to do that in the
current harness, or in the future harness?  Names would be best, of course.
Hmm, maybe nothing would be best.

- Colors in test outputs:
TAP::Harness::Color is nice, but Module::Build doesn't use it.  Does anyone
have plans to put it in?

- Too much output:
My desktop is my IDE (sometimes my terminal is my IDE) and I like it that
way; IDEs are too often in the way (or eating 1 GB of memory and CPU cycles
(Eclipse)).  But I must admit that when there are lots of test failures I
would like to see the results organized instead of getting a thousand-line
dump.  How do you guys cope?

- Coverage per test:
Is there a way to get that?

- Idea: I sometimes write my code in the test files.

-
file A.pm:

package A;
sub s1 { s2() }


file t/00X_test.t:

package A;

use Test::More;
sub s2 { .., diag, .}


package main;

# all the usual testing
-

That's very handy, but it also shows a pattern: debugging code that has tests
versus debugging code that you would normally run, or run in the debugger.  I
don't want yet another framework; everything is set up in the test, but the
test steals my output so I have to 'diag' things out.  Any other way you
know of?


Cheers, Nadim.