Re: Having memory leak issues with perl-c

2022-08-04 Thread demerphq
On Thu, 4 Aug 2022 at 17:04, Mark Murawski 
wrote:

> On 8/4/22 02:50, demerphq wrote:
>
> On Thu, 4 Aug 2022 at 01:58, Mark Murawski 
> wrote:
>
>> I'm still not getting something... if I want to fix the code-as-is and do
>> this:
>>
>> FNsv = get_sv("main::_FN", GV_ADD);
>> if (!FNsv)
>> ereport(ERROR,
>> (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
>>  errmsg("couldn't fetch $_FN")));
>>
>> save_item(FNsv);/* local $_FN */
>>
>
> I don't get the sequence here. You take the old value of $main::_FN and
> then you localize it after you fetch it? That seems weird.
>
>
You did not respond to this comment ^^


>
>
>>
>> hv = newHV(); // create new hash
>> hv_store_string(hv, "name", cstr2sv(desc->proname));
>>
>
> Really you shouldn't do this until you have safely managed the refcounts of
> all your newly created objects, so that if this dies nothing leaks.
>
>
> I take this to mean setting up FNsv first, and then allocating hv?  But
> in this case we seem to have a chicken/egg problem?  How can you set up
> FNsv to point to hv without first setting up hv?
>

No: after creating the hash, you shouldn't do anything that might die until
you have arranged for hv to be freed. Storing into the hash might die.


>
> WARNING:  Attempt to free unreferenced scalar: SV 0x55d5b1cf6480, Perl
>> interpreter: 0x55d5b17226c0
>>
>
> Why are you decrementing hv? You don't own hv anymore; it's owned by svFN
> and, after the sv_setsv() call, also by FNsv. You shouldn't mess with its
> refcount anymore.
>
>
> The ownership aspect is making more sense now, thanks for clarifying.
>

YW.


>
> Obviously in perl we can write:
>
> my %hash;
> $main::_FN= \%hash;
>
> And in XS we can do the same thing. Unfortunately there isn't a utility
> sub to do this currently; it has been on my TODO list to add one for some
> time, but lack of round tuits and all that.
>
> You want code something like this:
>
> sv_clear(FNsv); /* undef the sv */
> sv_upgrade(FNsv,SVt_RV);
> SvRV_set(FNsv, (SV*)hv);
> SvROK_on(FNsv);
>
> Again, make liberal use of sv_dump(); it is the XS version of Data::Dumper,
> more or less.
>
>
> I have been playing with sv_dump()... At the end of this flow, the
> refcount to FNsv is 1 and should get automatically cleaned up by Perl,
> right?
>

Well, I don't know for sure, as I don't know the latest state of your code.
But my thinking is that you asked for $main::_FN, which is in the package
table, so perl should clean it up. But I don't understand why you are using
save_item(FNsv).


> I still have a leak here, using the above code.
>

I could probably help you better if you posted the latest state of your
code, with sv_dump() calls liberally sprinkled through it, along with the
output.


>
> Also... I get a crash when I use sv_clear(FNsv) right away, like this.
>

And what does FNsv look like immediately before you call sv_clear()?


> If I take it out, the code seems to all run correctly, but I have a leak
> and the hash or the hash reference is not being cleaned up.
>

Can you please show a reduced version of the code? And explain why you are
doing save_item(FNsv)? And provide some of the output of sv_dump()?

cheers,
Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Having memory leak issues with perl-c

2022-08-03 Thread demerphq
On Thu, 4 Aug 2022 at 01:58, Mark Murawski 
wrote:

> I'm still not getting something... if I want to fix the code-as-is and do
> this:
>
> FNsv = get_sv("main::_FN", GV_ADD);
> if (!FNsv)
> ereport(ERROR,
> (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
>  errmsg("couldn't fetch $_FN")));
>
> save_item(FNsv);/* local $_FN */
>

I don't get the sequence here. You take the old value of $main::_FN and then
you localize it after you fetch it? That seems weird.


>
> hv = newHV(); // create new hash
> hv_store_string(hv, "name", cstr2sv(desc->proname));
>

Really you shouldn't do this until you have safely managed the refcounts of
all your newly created objects, so that if this dies nothing leaks.


>
> svFN = newRV_noinc((SV *) hv); // reference to the new hash
> sv_setsv(FNsv, svFN);
>
> // dostuff
>
> SvREFCNT_dec_current(svFN);
> SvREFCNT_dec_current((SV *) hv);
>
>
> You're saying that the   sv_setsv(FNsv, svFN); creates a second ref... so
> in theory I can unref it and then all else would be equal
> but I get this:
>
> WARNING:  Attempt to free unreferenced scalar: SV 0x55d5b1cf6480, Perl
> interpreter: 0x55d5b17226c0
>

Why are you decrementing hv? You don't own hv anymore; it's owned by svFN
and, after the sv_setsv() call, also by FNsv. You shouldn't mess with its
refcount anymore.

HV *hv= newHV(); /* until this is attached to something that will get
cleaned up you need to deal with its refcnt */
svFN = newRV_noinc((SV *) hv); /* now the hv is owned by svFN, we no longer
have to worry about hv, just svFN */
sv_setsv(FNsv, svFN);  /* now FNsv is a copy of the ref that is in svFN */

So at this point we have two SVs which reference hv: svFN and FNsv. hv
will have a refcount of 2, and svFN should have a refcount of 1.

SvREFCNT_dec(svFN);

This will decrement svFN to 0, which will cause it to be freed, but not
before Perl walks the reference tree decrementing the things the reference
contains, so this will trigger an automatic SvREFCNT_dec() on hv. So after
this line the refcount of hv would be 1, and it would be owned by FNsv only.

Btw, if you liberally added sv_dump(svFN), sv_dump((SV*)hv), and
sv_dump(FNsv) to your code you would see what is happening here. It will
show you the refcounts of each object.


> Also.. something I didn't follow was this:
> "or even better, simply don't use it. You don't need it; you can simply turn
> FNsv into an RV and then set its RV field appropriately. The leak will go
> away and the code will be more efficient. "
>
> How do you turn an SV into an RV without creating this extra reference..
> Aren't I doing this already, with:
>svFN = newRV_noinc((SV *) hv);
>

No. That creates a new SV which is a reference to the hv.

Your code does this basically:

my %hash;
my $ref= \%hash;
$main::_FN= $ref;

Obviously in perl we can write:

my %hash;
$main::_FN= \%hash;

And in XS we can do the same thing. Unfortunately there isn't a utility sub
to do this currently; it has been on my TODO list to add one for some time,
but lack of round tuits and all that.

You want code something like this:

sv_clear(FNsv); /* undef the sv */
sv_upgrade(FNsv,SVt_RV);
SvRV_set(FNsv, (SV*)hv);
SvROK_on(FNsv);

and you are done. It's possible that the sv_upgrade() is unnecessary, as I
believe every SV is big enough to handle SVt_RV; try with and without.
Keeping it will make your code more backward compatible with really old
versions of perl. But in anything modern an RV is a bodyless SV, and thus
every SV can turn into one without allocating a new body, so the sv_upgrade
should be superfluous.

Again, make liberal use of sv_dump(); it is the XS version of Data::Dumper,
more or less.

Yves





-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Having memory leak issues with perl-c

2022-07-28 Thread demerphq
On Thu, 28 Jul 2022 at 10:35, demerphq  wrote:

> On Wed, 27 Jul 2022 at 22:41, Mark Murawski 
> wrote:
>
>>
>>
>> On 7/27/22 13:20, demerphq wrote:
>>
>> Spectacular.  I'll be reviewing and making changes that you're suggesting
>> and this helps my understanding for sure.
>>
>
> I was a bit off my game yesterday. I should have told you to make liberal
> use of sv_dump() when you are debugging. That will help you visualize what
> is going on. It's the XS equivalent to the Dump function from Devel::Peek
> (or more accurately it is what the Dump function uses under the hood).
>
> You can pass any SV-like structure (e.g., HV, AV, etc.) into it with a cast
> and it will dump out all the useful details, including refcounts.
>
> Good luck!
>

Last comment. You might find the code in Sereal::Encoder and
Sereal::Decoder helpful. Sereal is a serialization package I maintain, and
as such it contains all the code for serializing perl data structures and
then reconstituting them. It was written in a style that avoids mortal
values as much as possible: it mortalizes the root of the data structure
being deserialized and then does not mortalize anything else (barring state
data), and passes down the variables being "filled in" (usually called the
SV *into). So, for instance, if it is deserializing an array ref, it first
creates an SV with refcount 1; it then passes that into the code that
parses array refs, which turns the into into an SvRV pointing at a newAV;
then for each element in the array it creates a newSV, pushes it into the
array, and calls the parse logic with that new element as the into
argument, etc. Thus it bypasses all of the refcount twiddling. If the code
dies in the middle, the root is mortal so Perl will free it later.

A mortal pattern might look like this:

SV *sv = newSV(0);
sv_2mortal(sv); /* if we die, this will get freed when perl cleans up later */

/* do stuff that could die */
...

/* Yay, we didn't die! Increment sv's refcount to de-mortalize it and then
return it */
SvREFCNT_inc(sv);
return sv;

Sorry, just trying to arm you with as much useful info as I can. I happened
to be working on very similar code the last day or two for Sereal. :-)

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Having memory leak issues with perl-c

2022-07-28 Thread demerphq
On Wed, 27 Jul 2022 at 22:41, Mark Murawski 
wrote:

>
>
> On 7/27/22 13:20, demerphq wrote:
>
> Spectacular.  I'll be reviewing and making changes that you're suggesting
> and this helps my understanding for sure.
>

I was a bit off my game yesterday. I should have told you to make liberal
use of sv_dump() when you are debugging. That will help you visualize what
is going on. It's the XS equivalent to the Dump function from Devel::Peek
(or more accurately it is what the Dump function uses under the hood).

You can pass any SV-like structure (e.g., HV, AV, etc.) into it with a cast
and it will dump out all the useful details, including refcounts.

Good luck!

cheers,
Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Having memory leak issues with perl-c

2022-07-27 Thread demerphq
On Wed, 27 Jul 2022 at 17:46, Mark Murawski 
wrote:

> Hi All!
>
> I'm working on a new feature for plperl in postgresql to populate some
> metadata in main::_FN for running plperl functions from postgres sql.
>
> I've snipped out some extra code that's unrelated to focus on the issue
> at hand.   If the section labeled 'Leaking Section' is entirely
> commented out (and of course the related SvREFCNT_dec_current is
> commented as well), then there's no memory issue at all.  If I do use
> this section of code, something is not freed and I'm not sure what that
> is, since I'm very new to perl-c
>
>
>
> /*
>   * Decrement the refcount of the given SV within the active Perl
> interpreter
>   *
>   * This is handy because it reloads the active-interpreter pointer, saving
>   * some notation in callers that switch the active interpreter.
>   */
> static inline void
> SvREFCNT_dec_current(SV *sv)
> {
>  dTHX;
>
>  SvREFCNT_dec(sv);
> }
>
> static SV  *
> plperl_call_perl_func(plperl_proc_desc *desc, FunctionCallInfo fcinfo)
> {
>  dTHX;
>  dSP;
>
>  HV *hv;  // hash
>  SV *FNsv;// scalar reference to the
> hash
>  SV *svFN;// local reference to the
> hash?
>
>  ENTER;
>  SAVETMPS;
>
>  /* Give functions some metadata about what's going on in $_FN
> (Similar to $_TD for triggers) */
>
>  // Leaking Section {
>  FNsv = get_sv("main::_FN", GV_ADD);
>  save_item(FNsv);/* local $_FN */
>
>  hv = newHV(); // create new hash
>  hv_ksplit(hv, 12);  /* pre-grow the hash */
>  hv_store_string(hv, "name", cstr2sv(desc->proname));
>
>  svFN = newRV_noinc((SV *) hv); // reference to the new hash
>  sv_setsv(FNsv, svFN);
>

So at this point FNsv and svFN are both references to the same HV. But you
don't free svFN.

Likely a simple SvREFCNT_dec(svFN) would do it.

Basically you are nearly doing this:

for my $FNsv ($main::FN) {
my %hash;
my $svFN= \%hash; push @LEAK, \$svFN;
$FNsv= $svFN;
}

If you put an sv_2mortal(svFN) or an explicit refcount decrement on svFN
then I expect the leak would go away. Note the reason I use a for loop here
is that I am trying to emphasize that $FNsv IS $main::FN, as the topic of a
for loop is an alias, which is as close as you can get in Perl to the
C-level operation of copying a pointer to an SV structure. (The difference
being that in Perl aliases are refcounted copies, while in C they are not
refcounted.) Compare this to $svFN, which is clearly a new variable.

What you should be doing is this:

my %hash;
$main::FN= \%hash;

Which would be something like this:


sv_upgrade(FNsv, SVt_RV);
SvRV_set(FNsv,(SV*)newHV());
SvROK_on(FNsv);


 // Leaking Section }
>
>  // dostuff
>
>  SvREFCNT_dec_current(hv);
>
>  PUTBACK;
>  FREETMPS;
>  LEAVE;
>
>  ...snip...
> }
>
>
> If anyone would like to see the full context, I've attached the entire
> file.  My additions are between the 'New..' sections
>
> My question is... the perl-c api docs do not make it clear for which
> allocations or accesses that you need to decrement the ref count.
>

I would say the docs actually do explain this. Most functions will specify
that you need to do the refcount incrementing yourself if it is relevant.

A general rule of thumb, however, is that any item you create yourself
which is not "owned by perl" (by being attached to some data structure perl
exposes) is your problem to deal with. So for instance this:

 FNsv = get_sv("main::_FN", GV_ADD);

is getting you a local pointer to the structure representing $main::_FN,
which is a global var. Thus it is stored in the global stash, and thus Perl
knows about it and will clean it up; you don't need to worry about its
refcount unless you store it in something else that is refcount managed.

On the other hand

 hv = newHV(); // create new hash

is a pointer to a new HV structure which is not stored in any place that
perl knows about. So if you don't arrange for it to be refcount decremented,
it will leak. However, you then did this:

 svFN = newRV_noinc((SV *) hv); // reference to the new hash

this is creating a new reference to the hash. The combination of the two
is basically equivalent to creating an anonymous hashref. The "noinc" is
there because the hv starts off with a refcount of 1, and the new reference
is the thing that will "own" that refcount. So at this point you no longer
need to manage 'hv', provided you correctly manage svFN.

You then do this:

sv_setsv(FNsv, svFN);

This results in the refcount of 'hv' being incremented, as there are now
two RVs pointing at it. You need to free up the temporary, or even better,
simply don't use it. You don't need it; you can simply turn FNsv into an RV
and then set its RV field appropriately. The leak will go away and the code
will be more efficient.

Another comment is regarding the hv_ksp

Re: Test::Smoke failing to test most recent commit to Perl 5 blead

2017-01-29 Thread demerphq
On 29 January 2017 at 15:07, James E Keenan  wrote:
> On 01/27/2017 10:58 PM, demerphq wrote:
>>
>>
>> On 28 Jan 2017 6:32 a.m., "James E Keenan"  wrote:
>>
>> On 01/14/2017 10:10 AM, James E Keenan wrote:
>> Here is the first part of the smokecurrent.log for this run -- with
>> some comments:
>>
>> #
>> [2017-01-27 08:11:22-0500] Read configuration from:
>> /usr/home/jkeenan/p5smoke/install/smokecurrent_config
>> [2017-01-27 08:11:22-0500] Commitlevel before sync:
>> d6115793d6cc41755a3ed4baaa38d30653656f41
>>
>> # d611579 was HEAD in the previous branch being smoked:
>> smoke-me/jkeenan/130635-storable
>>
>> [2017-01-27 08:11:22-0500] ==> Starting synctree
>> [2017-01-27 08:11:22-0500] Reading branch to smoke from:
>> '/usr/home/jkeenan/p5smoke/install/smokecurrent.gitbranch'
>> [2017-01-27 08:11:22-0500] In
>> pwd(/usr/home/jkeenan/p5smoke/git-perl) running:
>> [2017-01-27 08:11:22-0500] qx[/usr/local/bin/git pull --all]
>> From git://perl5.git.perl.org/perl
>>
>>1f74a12..9a7b7fb  blead  -> origin/blead
>> [2017-01-27 08:11:37-0500] Fetching origin
>> [2017-01-27 08:11:37-0500] Already up-to-date.
>> [2017-01-27 08:11:37-0500] In
>> pwd(/usr/home/jkeenan/p5smoke/git-perl) running:
>> [2017-01-27 08:11:37-0500] qx[/usr/local/bin/git remote prune origin]
>> [2017-01-27 08:11:38-0500] In
>> pwd(/usr/home/jkeenan/p5smoke/git-perl) running:
>> [2017-01-27 08:11:38-0500] qx[/usr/local/bin/git checkout blead
>> [2017-01-27 08:11:38-0500]  2>&1]
>> Switched to branch 'blead'
>> [2017-01-27 08:11:41-0500] Your branch is behind 'origin/blead' by 7
>> commits, and can be fast-forwarded.
>> [2017-01-27 08:11:41-0500]   (use "git pull" to update your local
>> branch)
>>
>> # Note: No indication that 'git pull' was actually run!  Why not?
>>
>> [2017-01-27 08:11:41-0500] In
>> pwd(/usr/home/jkeenan/p5smoke/perl-current) running:
>> [2017-01-27 08:11:41-0500] qx[/usr/local/bin/git reset --hard]
>>
>> # Note: Although 'man git-reset' is not explicit about this, we can
>> probably assume
>> # that 'git reset --hard' with no commit argument resets to HEAD -- i.e.,
>> no update to checkout.
>>
>>
>> Yes. It throws away any changes to the current branch. That should say
>>
>> git reset --hard origin/blead
>>
>>
>> [2017-01-27 08:11:43-0500] HEAD is now at d611579 Fix stack buffer
>> overflow in deserialization of hooks.
>> #
>>
>> So why does Test-Smoke not update the branch being tested in cases
>> like this?
>>
>>
>> This is an educated guess: whoever wrote that code did not understand
>> git well and got confused about what git pull does and what git pull
>> --all does.
>>
>> Seeing git pull in a script like this is a red flag; seeing --all is a
>> bigger red flag for me. (It does not "pull all"; it does a fetch
>> against all remotes.)
>>
>> IMO git pull is not really suitable for scripting as it can trigger a
>> rebase or merge, and trigger an editor.
>>
>> I would expect to see a sequence like this:
>>
>> git remote update -p
>> git checkout $branch
>> git reset --hard origin/$branch
>>
>> or like this:
>>
>> git remote prune --all
>> git fetch --all
>> git checkout $branch
>> git reset --hard origin/$branch
>>
>> or even like this:
>>
>> git fetch origin
>> git checkout $branch
>> git reset --hard origin/$branch
>>
>> The difference between the variants is whether the code should
>> be managing multiple upstream repos or not. The --all introduces
>> ambiguity about this, as it means "fetch code from --all remotes", so
>> one might guess someone wanted to support multiple upstreams. On the
>> other hand, my guess is that the person who coded it to do a 'git pull
>> --all' thought that the --all would update all branches, which it does
>> not. Bolstering this view is that it seems to make little sense to smoke
>> branches from multiple upstreams, especially for Perl. I would have
>> expected only the canonical branches in the master upstream repo should
>> be smoked, so the --all probably was a bug

Re: Test::Smoke failing to test most recent commit to Perl 5 blead

2017-01-27 Thread demerphq
On 28 Jan 2017 6:32 a.m., "James E Keenan"  wrote:

> On 01/14/2017 10:10 AM, James E Keenan wrote:
> Here is the first part of the smokecurrent.log for this run -- with some
> comments:
>
> #
> [2017-01-27 08:11:22-0500] Read configuration from:
> /usr/home/jkeenan/p5smoke/install/smokecurrent_config
> [2017-01-27 08:11:22-0500] Commitlevel before sync:
> d6115793d6cc41755a3ed4baaa38d30653656f41
>
> # d611579 was HEAD in the previous branch being smoked:
> smoke-me/jkeenan/130635-storable
>
> [2017-01-27 08:11:22-0500] ==> Starting synctree
> [2017-01-27 08:11:22-0500] Reading branch to smoke from:
> '/usr/home/jkeenan/p5smoke/install/smokecurrent.gitbranch'
> [2017-01-27 08:11:22-0500] In pwd(/usr/home/jkeenan/p5smoke/git-perl)
> running:
> [2017-01-27 08:11:22-0500] qx[/usr/local/bin/git pull --all]
> From git://perl5.git.perl.org/perl
>1f74a12..9a7b7fb  blead  -> origin/blead
> [2017-01-27 08:11:37-0500] Fetching origin
> [2017-01-27 08:11:37-0500] Already up-to-date.
> [2017-01-27 08:11:37-0500] In pwd(/usr/home/jkeenan/p5smoke/git-perl)
> running:
> [2017-01-27 08:11:37-0500] qx[/usr/local/bin/git remote prune origin]
> [2017-01-27 08:11:38-0500] In pwd(/usr/home/jkeenan/p5smoke/git-perl)
> running:
> [2017-01-27 08:11:38-0500] qx[/usr/local/bin/git checkout blead
> [2017-01-27 08:11:38-0500]  2>&1]
> Switched to branch 'blead'
> [2017-01-27 08:11:41-0500] Your branch is behind 'origin/blead' by 7
> commits, and can be fast-forwarded.
> [2017-01-27 08:11:41-0500]   (use "git pull" to update your local branch)
>
> # Note: No indication that 'git pull' was actually run!  Why not?
>
> [2017-01-27 08:11:41-0500] In pwd(/usr/home/jkeenan/p5smoke/perl-current)
> running:
> [2017-01-27 08:11:41-0500] qx[/usr/local/bin/git reset --hard]
>
> # Note: Although 'man git-reset' is not explicit about this, we can
> probably assume
> # that 'git reset --hard' with no commit argument resets to HEAD -- i.e., no
> update to checkout.
>
>
Yes. It throws away any changes to the current branch. That should say

git reset --hard origin/blead


> [2017-01-27 08:11:43-0500] HEAD is now at d611579 Fix stack buffer
> overflow in deserialization of hooks.
> #
>
> So why does Test-Smoke not update the branch being tested in cases like
> this?
>
>
This is an educated guess: whoever wrote that code did not understand git
well and got confused about what git pull does and what git pull --all does.

Seeing git pull in a script like this is a red flag; seeing --all is a
bigger red flag for me. (It does not "pull all"; it does a fetch against
all remotes.)

IMO git pull is not really suitable for scripting as it can trigger a
rebase or merge, and trigger an editor.

I would expect to see a sequence like this:

git remote update -p
git checkout $branch
git reset --hard origin/$branch

or like this:

git remote prune --all
git fetch --all
git checkout $branch
git reset --hard origin/$branch

or even like this:

git fetch origin
git checkout $branch
git reset --hard origin/$branch

The difference between the variants is whether the code should be
managing multiple upstream repos or not. The --all introduces ambiguity
about this, as it means "fetch code from --all remotes", so one might guess
someone wanted to support multiple upstreams. On the other hand, my guess
is that the person who coded it to do a 'git pull --all' thought that the
--all would update all branches, which it does not. Bolstering this view
is that it seems to make little sense to smoke branches from multiple
upstreams, especially for Perl. I would have expected only the canonical
branches in the master upstream repo should be smoked, so the --all
probably was a bug.

What appears to be happening is that when the pull is executed it is on a
different branch, so it fetches and then updates *THAT* branch only (via a
merge!). When it then checks out the new branch it is on an old commit,
and requires a reset to the latest code.

I would recommend that the smoke scripts not use pull *ever*. They
should never modify code, so doing a pull makes no sense. git pull is
basically a smart wrapper around git fetch + git merge. A smoker should
*NEVER* be doing git-merge, so it should *never* be using git-pull.

Note, I am fully aware that under perfect circumstances you /can/ script
this using git pull. I would recommend not to. First, it is confusing.
Second, if circumstances are less than perfect, it could lead to
unwanted behavior. For instance, if someone tinkers in a branch used by the
smoker, one could imagine the script breaking because of uncommitted
changes, or breaking because pull has triggered a merge which requires an
edited commit message.

FWIW, I have in the past recommended that new users of git NOT use "git
pull" until they have mastered using "git fetch" and "git rebase" or "git
merge" as independent commands. Only after gaining experience with the three
basic commands should people use git pull. Since it is a wrapper around
three dis

Re: Pending Test::More fixage - DateTime and string overload users take note

2010-05-22 Thread demerphq
On 22 May 2010 01:35, Michael G Schwern  wrote:
> On 2010.5.21 1:04 PM, Slaven Rezic wrote:
>> Michael G Schwern  writes:
>>
>>> CPAN Testers, please load your smokers with Test::More 0.95_02, compare the
>>> results with Test::More 0.94 and report any differences in test results to
>>> their respective authors.  I would like to see a summary of the differences 
>>> as
>>> well so I know the scope of the fixage.
>>
>> Yesterday I had setup a parallel smoker and by now ~2700 distributions
>> are tested. A preliminary summary of regressions may be seen here:
>>
>>     http://bbbike.dyndns.org/parsmoker?smoke=testsimple0.9502;notes=1
>>
>> (note, no update guarantees on this machine!)
>
> Thank you, Slaven, you are awesome!  I'll start sending out bug reports now.

One thought...

There has been turbulence in the Regexp space over the last versions of perl.

Is it possible these changes might intersect with those changes to
make it harder to compare regexes?

cheers,
Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: OT: lists.cpan.org

2009-03-24 Thread demerphq
2009/3/24 Gabor Szabo :
> I know there are people who will complain about the fact I did it
> or the way I did it, I am getting really used to it.

I'm really upset you didn't do this before.

;-)

Just so you have some variety in the complaints department.

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Fwd: [rt.cpan.org #40976] New files installed without user-write permission

2008-11-16 Thread demerphq
Can anyone help me out with this? Is this EUI's fault or something
else? Is it anything to do with the permission issue that we have been
discussing?

Cheers,
yves


-- Forwarded message --
From: Guillaume Rousse via RT <[EMAIL PROTECTED]>
Date: 2008/11/16
Subject: [rt.cpan.org #40976] New files installed without user-write permission
To: undisclosed-recipients


Sun Nov 16 13:41:21 2008: Request 40976 was acted upon.
Transaction: Ticket created by GROUSSE
  Queue: ExtUtils-Install
Subject: New files installed without user-write permission
  Broken in: 1.52
   Severity: Normal
  Owner: Nobody
 Requestors: [EMAIL PROTECTED]
 Status: new
 Ticket: http://rt.cpan.org/Ticket/Display.html?id=40976


All new files installed through ExtUtils::MakeMaker-generated makefiles
are installed without the user-writable permission bit set. As a
consequence, all attempts to modify them during installation by a
non-privileged user (typically, package building) fail. PDL
installation is a good example:

couldn't open
/home/guillomovitch/cooker/perl-PDL/BUILDROOT/perl-PDL-2.4.4/usr/lib/perl5/vendor_perl/5.10.0/i386-linux-thread-multi/PDL/Index.pod
at Doc/scantree.pl line 44.

[EMAIL PROTECTED] perl-PDL]$ ll
/home/guillomovitch/cooker/perl-PDL/BUILDROOT/perl-PDL-2.4.4/usr/lib/perl5/vendor_perl/5.10.0/i386-linux-thread-multi/PDL/Index.pod

-r--r--r-- 1 guillomovitch users 10763 2008-11-16 19:23
/home/guillomovitch/cooker/perl-PDL/BUILDROOT/perl-PDL-2.4.4/usr/lib/perl5/vendor_perl/5.10.0/i386-linux-thread-multi/PDL/Index.pod


Removing the 1.52 version, so as to use the 1.44 version provided with
perl-5.10, fixes the issue.



-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: [RFC] Dealing with World-writable Files in the Archive of CPAN Distributions

2008-11-13 Thread demerphq
2008/9/30 Andreas J. Koenig <[EMAIL PROTECTED]>:
>> On Tue, 23 Sep 2008 11:40:09 +0200, "Jos I. Boumans" <[EMAIL PROTECTED]> 
>> said:
>
>  >> And so I have implemented it now. If it breaks too much in too short
>  >> time, we could probably revert it, but first I'd like to see how bad
>  >> we really do.
>
>  > I agree to this (first) solution; this will give us a good idea about
>  > the
>  > scope of the problem.
>
> I have watched the indexer for a week now. The scope is more than two
> uploads per day. These uploads got an email about world writable files
> or directories. I looked up their CPAN directories right now and based
> on the findings I have added the third column.
>
> 23-Sep  SEMUELF/Data-ParseBinary-0.07.tar.gz          fixed
> 26-Sep  GFUJI/warnings-unused-0.02.tar.gz             not fixed
> 26-Sep  STEFFENW/DBD-PO-0.10.tar.gz                   not fixed
> 26-Sep  STEFFENW/Bundle-DBD-PO-0.10.tar.gz            not fixed
> 26-Sep  AJDIXON/daemonise-1.0.tar.gz                  not fixed
> 26-Sep  RPHANEY/openStatisticalServices-0.015.tar.gz  fixed
> 26-Sep  RPHANEY/openStatisticalServices-0.018.tar.gz  fixed
> 27-Sep  COSIMO/Imager-SkinDetector-0.01.tar.gz        fixed
> 27-Sep  FAYLAND/Pod-From-GoogleWiki-0.06.tar.gz       fixed
> 28-Sep  DANNY/Rose-DBx-Object-Renderer-0.34.tar.gz    not fixed
> 28-Sep  MTHURN/WWW-Search-Ebay-2.244.tar.gz           fixed
> 28-Sep  JSTROM/Tk-TextVi-0.014.tar.gz                 not fixed
> 28-Sep  JSTROM/Tk-TextVi-0.0141.tar.gz                not fixed
> 29-Sep  MATTN/Net-Kotonoha-0.07.tar.gz                fixed
> 29-Sep  MTHURN/WWW-Search-Ebay-Europe-2.002.tar.gz    fixed
> 29-Sep  ANGERSTEI/Net-Ping-Network-1.57.tar.gz        not fixed
> 29-Sep  RPHANEY/openStatisticalServices-0.019.tar.gz  fixed
>
> Congratulations to all authors who managed to fix their distros.
> If *you* are among them, please spread the word how you did it.

I fixed the issue for ExtUtils::Install by changing my windows
permissions to be me-only instead of Everyone. I also unclicked the
"inherit permissions from parent object" box and used the advanced tab to
propagate the permissions to all children. No doubt I could have done
it with a command line tool, but I couldn't remember what it was
called.

Switching to CREATOR OWNER didn't work, nor did CREATOR GROUP.

Yves.


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists

2008-11-12 Thread demerphq
2008/11/13 chromatic <[EMAIL PROTECTED]>:
> On Wednesday 12 November 2008 22:36:31 demerphq wrote:
>
>> > I really, really, really don't want PAUSE modifying my stuff after it's
>> > uploaded.  Oh god the mysterious bugs.  And then there's the fact that
>> > the code I've put my name and signature on is not the same code as is
>> > being distributed!  That's a trust violation as well as maybe a license
>> > violation.
>
>> Oh please, save me the drama. We aren't talking about modifying "your
>> stuff"; we are talking about twiddling some bits in a tar file.
>
> I can only think of several ways that could possibly go wrong.

Pray tell, what are they?

> I understand why PAUSE enforces the policy that it won't index anything it
> can't index, but I don't understand what permission bits that may or may not
> be set have to do with indexing.
>
> I realize the longstanding Perl cultural view of encapsulation is, to put it
> mildly, highly voluntary -- but the first time I catch a naked, drunk
> neighbor rifling through my closet is the last time any naked, drunk neighbor
> rifles through my closet, regardless of sincerity of intent.

So you equate PAUSE unpacking the tar file, chmod'ing it to not be world
writable, and then re-tarring it to a naked drunk neighbor rifling
through your closet? I don't really get it, and I'm wondering what
kind of neighborhood you live in! And presumably this would never
happen to you, right? Being a switched-on unix guy, you wouldn't roll a
world-writable CPAN package anyway, would you?

If there is any comparison, it's like the library putting a durable
binding and a security strip on a book before it hits the shelves.

Cheers,
yves
-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists

2008-11-12 Thread demerphq
2008/11/13 Michael G Schwern <[EMAIL PROTECTED]>:
> Jonathan Rockway wrote:
>> * On Wed, Nov 12 2008, David Golden wrote:
>>> On Wed, Nov 12, 2008 at 3:17 PM, demerphq <[EMAIL PROTECTED]> wrote:
>>>> IMO if the toolchain is to work this should happen at PAUSE (if it can
>>>> detect this problem IMO it should just damn well fix it itself) or at
>>>> extraction.
>>> It *is* being fixed at extraction.  But it requires people to upgrade
>>> CPAN and CPANPLUS (maybe Archive::Extract as well).  It was a faster
>>> fix to close the PAUSE indexing door than to get those fixes released.
>>
>> I agree with demerphq here, why can't PAUSE just fix this?  It won't
>> break signatures (since they sign file content, not file metadata),
>
> Maybe they should start signing the metadata then!
>
>
>> and
>> it won't break the CHECKSUMS file (since that could be generated after
>> the tarball is fixed).
>>
>> It could be weird if what you upload to CPAN isn't what you
>> download... but it fixes a security problem, and it doesn't require
>> authors to know that this problem exists.  Abstraction++
>
> -100_000_000
>
> I really, really, really don't want PAUSE modifying my stuff after it's
> uploaded.  Oh god the mysterious bugs.  And then there's the fact that the
> code I've put my name and signature on is not the same code as is being
> distributed!  That's a trust violation as well as maybe a license violation.

Oh please, save me the drama. We aren't talking about modifying "your
stuff" we are talking about twiddling some bits in a tar file.

Bits in a tar file, mind you, that mean nothing on the system the tar
file was created on.

And if you really do want to be picky about this, then it could be
voluntary as was already suggested.

Then when PAUSE bounces my package it can say "We've rejected your
package for blah blah blah, but we can fix it for you if you visit
this [link], or if you reupload a new package with SPECIALFLAG set in
the FNORBLE file."

> This security check has sent CPAN on the slippery slope of security.  Until
> now CPAN has been a common carrier.  Pretty much anything was allowed, stuff
> was only rejected for extreme reasons and always on a case-by-case basis and
> always by human judgment.  Now we've put in an automatic filter to reject some
> vaguely insecure code.  CPAN is no longer a common carrier.  Once that line
> has been crossed, all sorts of attempts will be made to add more filtering,
> such as the suggestion above.
>
> They will be well intentioned and they will add complications and generate
> false negatives and get in people's way and continue to erode CPAN's policy of
> being a common carrier.
>
> Now that the CPAN shells and archiving modules are handling it at their end, I
> think the PAUSE filter should be removed.  It's not PAUSE's job to be the code
> police.

I agree with this. However, we are where we are, and PAUSE fixing the
package in a way that doesn't annoy Windows users is a good solution.

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists

2008-11-12 Thread demerphq
2008/11/12 David Golden <[EMAIL PROTECTED]>:
> On Wed, Nov 12, 2008 at 3:17 PM, demerphq <[EMAIL PROTECTED]> wrote:
>> I rather strongly object to this change.
>
> I totally understand -- but keep in mind that this was in response to
> someone flagging this as a potential (if highly unlikely) security
> hole, forwarding it to some security-watchdog site, etc.  So the rapid
> response was "close the hole so no one can say CPAN creates a security
> risk".  (Other than the usual, obvious one of running arbitrary
> code...)
>
> So it causes some pain, but in my view, it's in the interest of the
> Perl community to be seen as vigilant.

Ah well fair enough. Writing my rant was cathartic. :-)

>> this silly test. What really gets me going tho is I WASNT TOLD THIS
>> ABOUT 1.51_01 or 1.51_02 or 1.51_03 or (do you detect a pattern here?)
>> 1.51_04 or 1.51_05, all of which i uploaded in the last few days in
>> the exact same way!!!
>
> That's kind of a loophole, since development versions aren't indexed.
> I think any upload that fails a security test should probably be
> rejected, whether development or full release.

Or at least the author should be notified about it.

>> IMO if the toolchain is to work this should happen at PAUSE (if it can
>> detect this problem IMO it should just damn well fix it itself) or at
>> extraction.
>
> It *is* being fixed at extraction.  But it requires people to upgrade
> CPAN and CPANPLUS (maybe Archive::Extract as well).  It was a faster
> fix to close the PAUSE indexing door than to get those fixes released.

Just curious, what's wrong with PAUSE repacking the file with the required perms?

>> Whats going to happen next, stuff rejected because they don't have
>> *nix line endings? Or *nix style shebangs? Or use perl-qa's preferred
>> indentation style or something? H?!
>
> Maybe instead, at a minimum, every distribution should be run against
> Perl::Critic at severity level 4 and anything that doesn't pass should
> be rejected as well.  ;-)
>
> (THAT'S A JOKE, PEOPLE!)
>
>> /g
>
> Right there with you, except my "/grrr" was back when the "security
> alert" got sent off to the watchdogs while the discussion was still
> going on as to whether this was a significant risk in the first place.

Ah, yes I can imagine that being worth a /grrr or two.

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists

2008-11-12 Thread demerphq
2008/10/1 Andreas J. Koenig <[EMAIL PROTECTED]>:
>> On Tue, 30 Sep 2008 17:11:00 -0500, Jonathan Rockway <[EMAIL PROTECTED]> 
>> said:
>
>  >> Anyway, I think the average CPAN author doesn't
>  >> really know or care about that, sadly.
>  >> See also
>
>  > FWIW, this is true.  I have never thought about it.
>
>  > Personally, I am confused as to why users have programs that do whatever
>  > an input file from the Internet tells them to do.  If you don't want
>  > your tar command to create world-writable files, you should probably
>  > tell your tar command to not create world-writable files... right?  That
>  > is much easier than convincing every person on the Internet to do what
>  > you want.  It is also easier than convincing every CPAN author to
>  > upgrade MakeMaker.
>
> Not true. The tools we are advocating must simply work. If they would
> leave all decisions to the end user we would have no CPAN. MakeMaker
> has always done the right thing, no need to upgrade. There was a bug
> to squash and not to paint over it or bathe in ignorance. Did you even
> notice the bug in the noise?



I rather strongly object to this change.

I just uploaded ExtUtils-Install 1.51, and was rudely told it would
not be indexed. I then spent 15 minutes trying to figure out what
win32 permissions would allow me to create a tarball that would pass
this silly test. What really gets me going though is I WASN'T TOLD
THIS ABOUT 1.51_01 or 1.51_02 or 1.51_03 or (do you detect a pattern
here?) 1.51_04 or 1.51_05, all of which I uploaded in the last few
days in the exact same way!!!

IMO if the toolchain is to work this should happen at PAUSE (if it
can detect this problem it should just damn well fix it itself) or at
extraction. It shouldn't be my problem how you nixy people (and I'm
one these days myself) want your files permissioned when they are
installed, especially if those permissions don't make sense on my box
(of the moment) anyway. And at the very least I should find out about
this when I upload a dev release and not be surprised when I make a
production release.

What's going to happen next, stuff rejected because it doesn't have
*nix line endings? Or *nix-style shebangs? Or doesn't use perl-qa's
preferred indentation style or something? H?!

/g



ps: Andreas, don't take this personally I'm just letting off steam and
I still think you are a really nice guy and do a fantastic job with
PAUSE.  :-)
-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTesters considered harmful

2008-03-03 Thread demerphq
On 03/03/2008, demerphq <[EMAIL PROTECTED]> wrote:
> On 03/03/2008, Andy Armstrong <[EMAIL PROTECTED]> wrote:
>  > On 3 Mar 2008, at 12:24, demerphq wrote:
>  >  > Chris has since explained on IRC that this is a CPANPLUS bug that is
>  >  > tickled by the fact that I put "Config" in the EU::Install
>  >  > prerequisite list so ill be uploading a new release shortly.
>  >
>  >
>  >
>  > The CPANPLUS bug is the reason you got an NA rather than a FAIL but
>  >  the original test failure was in one of your tests AFAIK.
>
>
> Yes, it turns out that Chris was running the test as root. We are
>  currently investigating how to fix it so the test works as expected
>  when root as well.

New version of ExtUtils::Install is released.

It turned out the problem is that when the tests run as root it seems
to be impossible to create a directory that is not writable by root.
Our test verifies some private logic that checks whether a directory
is writable, and under root that check fails.

I'm not really sure how to tackle this better than simply skipping
the tests as root, which is what the most recent release does.

Ideas as to how to improve the testing in
ExtUtils-Install/t/can_write_dir.t would be graciously welcome.

The relevant code looks like the following. Any ideas how it could be
changed so it doesn't have to skip these tests as root?

SKIP: {
    my $exists = FS->catdir(qw(exists));
    my $subdir = FS->catdir(qw(exists subdir));

    skip "Tests will not work as expected when run under root", 9
      unless $>; # effective UID must not be 0
    ok mkdir $exists;
    END { rmdir $exists }

    # note the parens: chmod is a list operator and would otherwise
    # swallow the test name as a filename
    ok chmod(0555, $exists), 'make read only';
    ok !-w $exists;
    is_deeply [can_write_dir($exists)], [0, $exists];
    is_deeply [can_write_dir($subdir)], [0, $exists, $subdir];

    ok chmod(0777, $exists), 'make writable';
    ok -w $exists;
    is_deeply [can_write_dir($exists)], [1, $exists];
    is_deeply [can_write_dir($subdir)], [1, $exists, $subdir];
}

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTesters considered harmful

2008-03-03 Thread demerphq
On 03/03/2008, Andy Armstrong <[EMAIL PROTECTED]> wrote:
> On 3 Mar 2008, at 12:24, demerphq wrote:
>  > Chris has since explained on IRC that this is a CPANPLUS bug that is
>  > tickled by the fact that I put "Config" in the EU::Install
>  > prerequisite list so ill be uploading a new release shortly.
>
>
>
> The CPANPLUS bug is the reason you got an NA rather than a FAIL but
>  the original test failure was in one of your tests AFAIK.

Yes, it turns out that Chris was running the test as root. We are
currently investigating how to fix it so the test works as expected
when root as well.

yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTesters considered harmful

2008-03-03 Thread demerphq
On 03/03/2008, David Golden <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 3, 2008 at 6:57 AM, demerphq <[EMAIL PROTECTED]> wrote:
>  >  IMO if an NA result comes in without email contact details and without
>  >  an explanation for the NA then the result should not be aggregated
>  >  against the module.
>
>
> The email contact details are there, just suppressed by the NNTP web
>  gateway to avoid email harvesting by spambots.  If you have a real
>  NNTP client, you'll see the email.  Also, see Google Groups (though
>  you have to solve a captcha to reveal the email):
>
>  
> http://groups.google.com/group/perl.cpan.testers/browse_thread/thread/f67ccb5a66aed2e/ffa37628e76a42e5?lnk=gst&q=NA+ExtUtils-Install#ffa37628e76a42e5

This information would be useful to display on CpanTesters itself. The
point is I saw NA's that were inexplicable to me, and found no further
useful information.

Chris has since explained on IRC that this is a CPANPLUS bug that is
tickled by the fact that I put "Config" in the EU::Install
prerequisite list, so I'll be uploading a new release shortly.

Thanks Chris!

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


CPANTesters considered harmful

2008-03-03 Thread demerphq
Ok, now that my subject line has got your attention

WTF is the deal with NA results with NO information, NO way to contact
the tester, and from what I can tell NO validity.

http://www.nntp.perl.org/group/perl.cpan.testers/2008/03/msg1098294.html
http://www.nntp.perl.org/group/perl.cpan.testers/2008/03/msg1097911.html

IMO if an NA result comes in without email contact details and without
an explanation for the NA then the result should not be aggregated
against the module.

At this point I'm wondering if this tester is just sending NA for
everything, but I can't find a way to find all of their reports.

/grrr

Yves
-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Dude, where's my diagnostics? (Re: Halting on first test failure)

2008-01-11 Thread demerphq
On 12/01/2008, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> --
> 184. When operating a military vehicle I may *not* attempt something
>  "I saw in a cartoon".
> -- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
>http://skippyslist.com/?page_id=3

That was one of the funniest things I have read in quite a long time.

yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Dev version numbers, warnings, XS and MakeMaker dont play nicely together.

2008-01-07 Thread demerphq
On 07/01/2008, Yitzchak Scott-Thoennes <[EMAIL PROTECTED]> wrote:
> On Sun, January 6, 2008 4:54 pm, demerphq wrote:
> > So we are told the way to mark a module as development is to use an
> > underbar in the version number:
> >
> > $VERSION= "1.23_01";
> >
> >
> > but this will produce warnings if you assert a required version number, as
> > the version isn't numeric.
> >
> > So the standard response is to do
> >
> > $VERSION= eval $VERSION;
> >
> > on the next line. This means that MakeMaker sees "1.23_01" but Perl
> > internal code for doing version checks sees "1.2301". This is all fine and
> > dandy with pure perl modules.
> >
> > BUT, if the module is "bog standard" and uses XS this is a recipe for
> > a fatal error. XS_BOOT_VERSIONCHECK will compare "1.23_01" to "1.2301" and
> > decide that they are different versions and die.
>
> See perlmodstyle:
> > If you want to release a 'beta' or 'alpha' version of a module but
> > don't want CPAN.pm to list it as most recent use an '_' after the
> > regular version number followed by at least 2 digits, eg. 1.20_01. If
> > you do this, the following idiom is recommended:
> >
> >   $VERSION = "1.12_01";
> >   $XS_VERSION = $VERSION; # only needed if you have XS code
> >   $VERSION = eval $VERSION;

I'm not convinced this actually does anything. ISTR I had to roll my
own "parse out the $XS_VERSION" code for DDS.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Fixed Test::Builder's "regexp" detection code.

2008-01-06 Thread demerphq
Just a heads up that I patched the core version of Test::Builder to
use more reliable and robust methods for detecting regexps in test
cases. This makes them robust to changes in the internals and also
prevents Test::Builder from getting confused if someone uses blessed
qr//'s.

Cheers,
yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Dev version numbers, warnings, XS and MakeMaker dont play nicely together.

2008-01-06 Thread demerphq
So we are told the way to mark a module as development is to use an
underbar in the version number:

$VERSION= "1.23_01";

but this will produce warnings if you assert a required version
number, as the version isn't numeric.

So the standard response is to do

$VERSION= eval $VERSION;

on the next line. This means that MakeMaker sees "1.23_01" but Perl
internal code for doing version checks sees "1.2301". This is all fine
and dandy with pure perl modules.

BUT, if the module is "bog standard" and uses XS this is a recipe for
a fatal error. XS_BOOT_VERSIONCHECK will compare "1.23_01" to "1.2301"
and decide that they are different versions and die.

The solution I came up with was to add

  XS_VERSION=> eval MM->parse_version("Filename"),

to the WriteMakefile() call in the Makefile.PL.

But to me this is unsatisfactory. It means either duplicating code
because there is normally something like:

  VERSION_FROM => "DB_File.pm"

or something like it in the WriteMakefile() arguments. Even if you use
a variable so that you end up with

$file= "DB_File.pm";
WriteMakefile(
  XS_VERSION=> eval MM->parse_version($file),
  VERSION_FROM => $file,
...
);

you are still duplicating effort, essentially parsing the version
twice, no biggie really in terms of CPU cycles, but somewhat
distasteful nevertheless.

I thought of fixing XS_BOOT_VERSIONCHECK so that when there is a
mismatch it checks for an underbar and then evals the string first,
but this didn't seem either fun (it's C code after all) or really all
that great a solution, since on older perls it wouldn't help at all.

About the best thing I can think of is making it so that the version
checked in XS_BOOT_VERSIONCHECK is ALWAYS numeric. IOW, MakeMaker
should do the eval trick itself. I think that this would work out,
but I'm not sure of the ramifications (it's late).

Anyway, I welcome any thoughts anyone might have on this.

I've cross-posted this because I think it is as much a Perl core dev
issue as it is a Quality Assurance issue and a MakeMaker issue. I
apologise if this irritates anyone.

Cheers,
Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate and UNIVERSAL::can

2008-01-03 Thread demerphq
On 03/01/2008, Aristotle Pagaltzis <[EMAIL PROTECTED]> wrote:
> * Andy Armstrong <[EMAIL PROTECTED]> [2008-01-03 18:25]:
> > On 3 Jan 2008, at 16:55, Ovid wrote:
> >>  my $super = __PACKAGE__->can("SUPER::$sub") or die;
> >>
> >> This is OO code and that should actually read:
> >>
> >>  my $super = __PACKAGE__->can($sub) or die;
> >
> > Should that be __PACKAGE__->SUPER::can($sub) ?
>
> No. That calls `can` from the superclass, but passes
> `__PACKAGE__` as the invocant. Assuming that the subclass and the
> superclass use the same inherited `can` method, the result is
> therefore exactly the same.
>
> The correct incantation is
>
> my ( $super ) = grep { $_->can( $sub ) } @ISA;

I think the correct thing is what kicked off this conversation.

my $sub= __PACKAGE__->can("SUPER::$sub");

Consider that with 5.10 it's possible to use method resolution orders
other than the one your snippet mimics.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate and UNIVERSAL::can

2008-01-03 Thread demerphq
On 03/01/2008, chromatic <[EMAIL PROTECTED]> wrote:
> On Thursday 03 January 2008 09:58:28 demerphq wrote:
>
> > And thinking about it more i think that was the whole point of the
> > weird call, Im guessing here, but probably this code isnt in a method
> > which means that he doesnt have access to SUPER so, he passes it into
> > can() which does.
> >
> > The following one liner demonstrates what the author of
> > Template::Timer was (correctly) doing. Note that it doesnt matter if
> > you define B::foo or not.
> >
> > $ perl -le'@B::ISA=qw(A); sub A::foo {print "in A::foo"} package B;
> > sub foo {print "in B::foo"} __PACKAGE__->can("SUPER::foo")->();'
> > in A::foo
>
> Is that documented anywhere to work?  I couldn't find it.  In fact, it
> contradicts the documentation of can():
>
> "can" checks if the object or class has a method called "METHOD".
> If it does then a reference to the sub is returned. If it does not
> then undef is returned.  This includes methods inherited or
> imported by $obj, "CLASS", or "VAL".

Even if it isn't explicitly documented, I take it as implicitly
documented by the fact that SUPER:: is documented (in perltoot at
least) to have no meaning except inside a method call. So if you
aren't inside a method call, how are you to get access to an object's
overridden method? And I don't think "you don't" is a good answer, as
it's clearly useful to do in dynamically constructed classes.

As for the documentation, I've probably been in front of this machine
for too long today, so I'm not seeing the contradiction. Are you
basing this on the bit where it says 'called "METHOD"'? Personally I
don't think that paragraph goes a long way towards settling things
either way.

OTOH, looking through the sources a bit, it looks to me like this is
pretty deliberate behaviour. UNIVERSAL::can() is just a wrapper around
gv_fetchmethod_autoload(), which is internally documented to respect
SUPER::.

Also consider the case of $obj->$method(). Unless we want

$method='SUPER::foo';
$obj->can($method)->()

to do something different than

$method='SUPER::foo';
$obj->$method()

I think it has to respect SUPER.
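A small sketch of that equivalence (package and method names invented
for the demo; assumes both spellings resolve SUPER:: against the
package the call is compiled in):

```shell
perl -le '
    @B::ISA = qw(A);
    sub A::foo { print "in A::foo" }
    package B;
    sub foo    { print "in B::foo" }
    # Both spellings should end up in the superclass method:
    my $method = "SUPER::foo";
    B->can($method)->("B");   # via can()
    B->$method();             # via a dynamic method call
'
```

Both calls print "in A::foo".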

I think when you add up incidental evidence like the above, the only
rational conclusion is that the native ->can() *is* doing the right
thing.

Yves



-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate and UNIVERSAL::can

2008-01-03 Thread demerphq
On 03/01/2008, Andy Armstrong <[EMAIL PROTECTED]> wrote:
> On 3 Jan 2008, at 17:20, Andy Armstrong wrote:
> >> my $super = __PACKAGE__->can($sub) or die;
> >
> > Should that be __PACKAGE__->SUPER::can($sub) ?
>
>
> Hmm. Does that do what I think it does? Maybe not.

Without looking at the code we don't know whether the call is made
from within a method.

And thinking about it more, I think that was the whole point of the
weird call. I'm guessing here, but probably this code isn't in a
method, which means it doesn't have access to SUPER::, so he passes
it into can(), which does.

The following one liner demonstrates what the author of
Template::Timer was (correctly) doing. Note that it doesnt matter if
you define B::foo or not.

$ perl -le'@B::ISA=qw(A); sub A::foo {print "in A::foo"} package B;
sub foo {print "in B::foo"} __PACKAGE__->can("SUPER::foo")->();'
in A::foo

So assuming this code was NOT inside a method (most likely as he said
it was part of code that installs methods) it looks like the code that
Ovid bumped into was in fact correct, and chromatic's UNIVERSAL::can
is broken with regard to SUPER::.

> And talking to yourself? What's all that about?

Sometimes it's hard to find decent conversation. :-)

cheers,
Yves



-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate and UNIVERSAL::can

2008-01-03 Thread demerphq
On 03/01/2008, Ovid <[EMAIL PROTECTED]> wrote:
> --- demerphq <[EMAIL PROTECTED]> wrote:
>
> > > The problem is this line in Template::Timer:
> > >
> > >   my $super = __PACKAGE__->can("SUPER::$sub") or die;
> > >
> > > This is OO code and that should actually read:
> > >
> > >   my $super = __PACKAGE__->can($sub) or die;
> >
> > Er, i dont see how it could. Then $super would have a reference to
> > its own method and not its parents.
>
> I should have posted more context.  What's happening is that
> Template::Timer inherits from Template::Context and that line merely
> checks that two methods which it inherits actually are available.  If
> they are, it wraps them in timing code.  It does *not* implement those
> methods directly so it can't have a reference to its own method.
>
> An alternative would be to do search @ISA or refer to the base class
> directly:
>
>   my $super = Template::Context->can($sub) or die;
>
> That's a bit ugly and not really in the OO spirit, but it also works
> around the fact that it doesn't play well with UNIVERSAL::can.

This is all strange, but now I understand what you mean. Thanx.

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate and UNIVERSAL::can

2008-01-03 Thread demerphq
On 03/01/2008, Ovid <[EMAIL PROTECTED]> wrote:
> I mentioned that we removed UNIVERSAL::can because of bugs introduced
> by global behavior changes, but to be fair to chromatic, I should
> explain that this is because of code in Template::Timer:
>
>   perl -MUNIVERSAL::can -MTemplate::Timer -e 1
>   Died at lib/perl5/Template/Timer.pm line 63.
>   Compilation failed in require.
>   BEGIN failed--compilation aborted.
>
> The problem is this line in Template::Timer:
>
>   my $super = __PACKAGE__->can("SUPER::$sub") or die;
>
> This is OO code and that should actually read:
>
>   my $super = __PACKAGE__->can($sub) or die;

Er, I don't see how it could. Then $super would have a reference to
its own method and not its parent's.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate - Speed up your test suites

2008-01-02 Thread demerphq
On 02/01/2008, Ovid <[EMAIL PROTECTED]> wrote:
> --- demerphq <[EMAIL PROTECTED]> wrote:
>
> > Ah this reminds me. One of these days someone needs to write a robust
> > DD output validator. I tried to convince MJD it would be a great
> > example for HOP parser technology and i think I almost succeeded
>
> I assume this would be so that you could read in DD output and assign
> it to a scalar without eval?

No. That's called eval STRING. :-)

This would simply parse the output and validate that it is safe to
feed to eval STRING.

DD output is highly regular, with none of the edge cases that make
parsing perl hard. Thus it should be fairly straightforward to write
code that validates that an arbitrary piece of code conforms to the
constraints of DD output.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate - Speed up your test suites

2008-01-01 Thread demerphq
On 01/01/2008, Ovid <[EMAIL PROTECTED]> wrote:
> --- Eric Wilhelm <[EMAIL PROTECTED]> wrote:
>
> > Do you happen to have another example?  This one looks to me like
> > poorly
> > written code in the test (or are you citing this as code in the
> > product?)
>
> What???  That's the point!
>
> > Either way, it is glaringly bad code.
> >
> >   a.  any call to slurp() doesn't pass a filename -- screams of evil
> >   b.  2-arg form of open -- banned
> >   c.  non-lexical filehandles -- banned
>
> This is the sort of stuff that tests are designed to catch, but stuff
> this bad *might* get missed with tight process boundaries.  When you're
> working with teams of programmers (and you do, virtually, if you use
> CPAN modules), it's not uncommon to find code which makes global state
> assumptions (package variables, Perl's built-ins, etc.)
>
> If you want, I could come up with far more subtle examples of code
> which demonstrates this, but I suspect we'll have to agree to disagree.
>  This is a real-world problem I've encountered before and will
> encounter again (such as the time someone was parsing Data::Dumper
> output without considering that I may have set $Data::Dumper::Indent to
> a different value than the default).

Ah, this reminds me. One of these days someone needs to write a
robust DD output validator. I tried to convince MJD it would be a
great example for HOP parser technology, and I think I almost
succeeded

yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Aggregate - Speed up your test suites

2007-12-31 Thread demerphq
On 31/12/2007, Sam Vilain <[EMAIL PROTECTED]> wrote:
> Ovid wrote:
> > If you have slow test suites, you might want to give it a spin and see
> > if it helps.  Essentially, it concatenates tests together and runs them
> > in one process.  Thus, you load Perl only once and load all modules
> > only once.
>
> Yuck.
>
> Why not just load Perl once and fork for the execution of each test
> script.  You can pre-load modules before you fork.

Fork not being all that portable, or implemented equivalently on all
platforms, might be an issue.

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Auto: Your message 'FAIL IO-AIO-2.51 i386-freebsd-thread-multi 6.2-release' has NOT been received

2007-12-22 Thread demerphq
On 22/12/2007, David Golden <[EMAIL PROTECTED]> wrote:
> On Dec 22, 2007 3:52 PM, chromatic <[EMAIL PROTECTED]> wrote:
> > Let me rephrase then.
> >
> > I feel dirty writing tests just to trip up testers who can't set up working
> > testing environments.
>
> Is this really a problem?  Let me flip this around -- I get very few
> problems of this sort for CPAN::Reporter, which is probably the thing
> submitting these reports from Windows anyway, since CPANPLUS doesn't
> (didn't?) run there.
>
> So what exactly is it that you're doing that has trouble with spaces
> in path names?  There may be things that would allow you to easily
> avoid the failures instead of complaining about testers with "broken"
> setups.
>
> Send me some test report URLs and I'd be happy to take a look.

I recall that a while back an effort was made to eliminate the main
causes of problems from whitespace in filenames and paths, so it's
possible this is an old issue that those with newer builds don't see.

Yves
-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Ignoring parts of compiled-in @INC during CPAN builds

2007-11-24 Thread demerphq
On Nov 24, 2007 8:02 PM, Matisse Enzer <[EMAIL PROTECTED]> wrote:
>
> On Nov 24, 2007, at 5:00 AM, demerphq wrote:
>
> > On Nov 23, 2007 11:36 PM, Matisse Enzer <[EMAIL PROTECTED]> wrote:
> >>
> >> I think it is actually
> >> $CPAN::Perl
> >> and, if the value you use contains any whitespace the entire command
> >> will get quoted, which could break things.
> >
> > I think this is because the assumption is that the spaces will be due
> > to spaces in the path (such as on Win32), not spaces due to
> > command/argument separator.
> ...
> >
> > On win32 you would want to quote $^X, as it could very likely (and
> > annoyingly) resolve to
> >
> > "C:\program files\perl\bin\perl.exe"
> >
> > And given that you can create directories with spaces in them on Linux
>
> You are right (and Mac OS X, really any Unix-like system allows spaces
> and other oddities in file names.)
>
> > I think maybe what you really want to do is use the environment
> > variable PERL5OPT for this instead of messing with $CPAN::Perl.
> >
> > $ENV{PERL5OPT}='-MINC::Surgery';
>
>
> Ahh, excellent. thank you. I did not know about PERL5OPT and I think
> that would be the right choice, except I tried that and it mostly
> works, but I ran into a problem while my script was building
> HTML::Tagset.
>
> I did:
>
> local $ENV{PERL5OPT} = qq{ -I$Bin -MStripNonCorePathsFromINC};
>
> and the  HTMLL::Tagset t/pod.t script fails with:
>
> t/pod...Can't open perl script " -I/path/to/my/dir -
> MStripNonCorePathsFromINC": No such file or directory

Hmm, that strikes me as likely being that something used by t/pod.t
(Test::Pod maybe) is broken somehow. There's a big debate about
whether t/pod.t belongs in the actual module distribution anyway. It
would be nice if you could configure the test harness framework to
ignore t/pod.t :-)

yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Ignoring parts of compiled-in @INC during CPAN builds

2007-11-24 Thread demerphq
On Nov 23, 2007 11:36 PM, Matisse Enzer <[EMAIL PROTECTED]> wrote:
>
> On Nov 21, 2007, at 2:44 PM, Michael G Schwern wrote:
> >
> > While it is not documented, you can override what perl CPAN.pm uses
> > with
> > $CPAN::Shell.  So you can write a little @INC modification module
> > and set
> >
> >   $CPAN::Shell = "$^X -MINC::Surgery";
>
>
> I think it is actually
>  $CPAN::Perl
> and, if the value you use contains any whitespace the entire command
> will get quoted, which could break things.

I think this is because the assumption is that the spaces will be due
to spaces in the path (such as on Win32), not spaces due to
command/argument separator.

> For example, if you set:
>
>$CPAN::Perl = "$^X -MINC::Surgery";
>
> then CPAN will convert that to:
>
>'perl -MINC::Surgery'
> and if you have any makepl_args then:
>
>   'perl -MINC::Surgery' INSTALL_BASE=/some/path
>
> which will break when CPAN calls:
>system('perl -MINC::Surgery' INSTALL_BASE=/some/path);
> with "file not found".
>
> If you include some quotes when setting  $CPAN::Perl then the auto-
> quoting won't happen:
>
> $CPAN::Perl = "$^X -M'INC::Surgery'";  # Will not get "safe_quote"

On win32 you would want to quote $^X, as it could very likely (and
annoyingly) resolve to

"C:\program files\perl\bin\perl.exe"

And given that you can create directories with spaces in them on
Linux as well, I guess the same applies there, although the
likelihood it's needed is much lower.

I think maybe what you really want to do is use the environment
variable PERL5OPT for this instead of messing with $CPAN::Perl.

$ENV{PERL5OPT}='-MINC::Surgery';

before spawning a new perl process should do the trick.
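The thread never shows what INC::Surgery actually contains. Purely as an
illustration (the module name comes from the thread; the path to strip is
my own placeholder), a minimal @INC-trimming module loaded early via
-MINC::Surgery or PERL5OPT might look like:

```perl
package INC::Surgery;
# Hypothetical sketch -- drops unwanted compiled-in paths from @INC
# when loaded via "perl -MINC::Surgery" or $ENV{PERL5OPT}.
use strict;
use warnings;

# Paths to remove; placeholder value, adjust to your installation.
my %strip = map { $_ => 1 } qw(/usr/local/lib/perl5/site_perl);

@INC = grep { !$strip{$_} } @INC;

1;
```

Because -M modules are loaded before the target script is compiled, the
trimmed @INC is in effect for all of its subsequent use/require statements.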

Yves



-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Devel::CheckLib: Please try to break our code!

2007-10-21 Thread demerphq
On 10/20/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
> * demerphq <[EMAIL PROTECTED]> [2007-10-19 23:10]:
> > On 10/19/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
> > > * demerphq <[EMAIL PROTECTED]> [2007-10-19 18:50]:
> > > > How does one use this then? Where is it documented?
> > >
> > > http://module-build.sourceforge.net/META-spec-blead.html#configure_requires
> >
> > So how do i use this with MakeMaker?
>
> Doesn't seem like you can do that from within MakeMaker so far.
>
> However you can certainly hand-edit your META.yml to add it. Once
> created the meta file is not regenerated if you don't ask for it,
> I think.

The point of my reply to Eric was dialectical, in the sense that the
answer to my question revealed the flaw in his response.

A) The contents of META.yml are not well or widely documented. The fact
that META-spec-blead in the Module::Build source code repository
mentions it does not make it well publicized or documented. About the
only people who would know about it are META.yml wonks and active
developers on the MB project.
B) Absent a documented way to set this in MakeMaker, suggesting that
it is the appropriate solution to the problem intended to be solved by
Devel::CheckLib seems out of place at best, and presumptuous at worst.

As an aside, it seems to me that both Devel::CheckLib and
configure_requires suffer from a fatal flaw in that they do not solve
the problem for existing modules. The solution to this problem lies in
better logic in CPAN Testers. What that logic should be is left as an
exercise for the reader, but I suspect it has something to do with
being better at detecting errors generated by the build process and
discriminating between them. If XS code fails to build because it
can't link to a required non-bundled library, then this should be
distinguished from something failing to build because the
author used unportable GCC-isms like declarations after code.
Naturally this is a Hard Problem, hence why I leave it as an
exercise :-)

cheers,
Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Devel::CheckLib: Please try to break our code!

2007-10-19 Thread demerphq
On 10/19/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
> * demerphq <[EMAIL PROTECTED]> [2007-10-19 18:50]:
> > How does one use this then? Where is it documented?
>
> http://module-build.sourceforge.net/META-spec-blead.html#configure_requires

So how do i use this with MakeMaker?

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Devel::CheckLib: Please try to break our code!

2007-10-19 Thread demerphq
On 10/19/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote:
> # from David Cantrell
> # on Friday 19 October 2007 04:00:
>
> >The more alert of you will have noticed that there is a bootstrapping
> >problem in using this from within a Makefile.PL - relax, it will come
> >with a script to bundle itself in an inc/ directory.
>
> Or use configure_requires.

How does one use this then? Where is it documented?

cheers,
Yves


Re: Q: Build.PL/Makefile.PL, and CPAN testers...

2007-10-02 Thread demerphq
On 10/2/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
> * David Golden <[EMAIL PROTECTED]> [2007-10-01 18:30]:
> > So if CPAN.pm 1.92XX goes into 5.10, then there *will* be
> > a CPAN client in core that knows when to upgrade itself. We
> > don't need CPANPLUS for that.
>
> Yeah, OK. Still, it would be nice if both of them had feature
> parity here, because people will tend to use one or the other,
> not really both.

And actually the whole point of putting CPANPLUS in core was that it
was feature-positive with respect to CPAN. If it is feature-negative,
then it defeats that point and makes it arguable that it shouldn't be
in core. At the very least they should be feature-equivalent.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: So bewilderingly. about closure

2007-09-20 Thread demerphq
On 9/20/07, flw <[EMAIL PROTECTED]> wrote:
> C:\>cat ttt.pl
> use strict;
> use warnings;
>
> {
>   my $x = 'A';
>   sub f { sub { $x++ }   }
>   sub g { sub { $x++ } if $x }
> }
>
> my $F=f();
> my $G=g();
>
> print $F->(),$G->(),"," for 1..4;
> print "\n";
>
> C:\>ttt.pl
> 0A,1B,2C,3D,
>
> C:\>

Known bug in the closure implementation. Since sub f doesn't mention $x,
the sub that it returns doesn't enclose the same $x as the sub returned
by g().

Changing the code to read

{
my $x = 'A';
sub f { sub { print \$x; $x++ }   }
sub g { sub { print \$x; $x++ } if $x }
}

Produces:

SCALAR(0x225f18)
SCALAR(0x226d4c)
0A,
SCALAR(0x225f18)
SCALAR(0x226d4c)
1B,
SCALAR(0x225f18)
SCALAR(0x226d4c)
2C,
SCALAR(0x225f18)
SCALAR(0x226d4c)
3D,

Which shows that the two are operating on different scalars. I had
guessed that the f() sub would be operating on $::x but it turns out
that it isn't. Somehow *two* $x'es are being created, which I find
surprising even though I know about this bug.

The solution is to add a dummy line to f() to make sure that it mentions $x.

{
my $x = 'A';
sub f { my $y=$x; sub { print \$x; $x++ }   }
sub g {   sub { print \$x; $x++ } if $x }
}


This will cause both subs to bind to the same scalar.


Cheers,
Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Prior art for testing against many local perls?

2007-09-14 Thread demerphq
On 9/14/07, Paul Johnson <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 05:10:36PM -0500, brian d foy wrote:
>
> > Without getting into a bikeshed discussion, I'm looking for prior art
> > to test a tarball against a list of local perls before I write my own
> > thing. This sounds like a fun and mostly easy project, but I don't want
> > to reinvent the wheel.
> >
> > I want to take a distro tarball and test it against every perl I have
> > installed. This is development testing, not end user / installation
> > testing:
> >
> >% test_with_every_perl foo-1.1.tgz
> >Testing with perl5.6.2.
> >Testing with perl5.8.0.
> >
> >Testing with perl5.9.5.
> >
> > At the end I get a nice report saying what went wrong with each version.
> >
> > This is something that I want to run right in my sandbox. The process
> > is really easy: unpack the distro, use the appropriate perl with
> > Makefile.Pl, and capture the results to make the report. So, who's
> > already done this? :)
>
> http://pjcj.sytes.net/svnweb/Devel::Cover/view/Devel-Cover/trunk/all_versions
> is what I use to do this.
>
> $ perl all_versions make test
>
> You'd need to change the commands at the end to do exactly what you
> wanted - unpacking the tarball rather than deleting the cover_db
> directory, I expect.  And I have all the perls on my path, which might
> not be your case.
>
> Not sure whether piping the output into a file counts as a nice report
> though :-)

I wonder how many of us have hand-rolled such scripts... I know I have
one, but alas it is on a machine that is offline right now.

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Confusing output from Test::Harness 2.99_02

2007-09-10 Thread demerphq
On 9/10/07, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> Michael Kernaghan wrote:
> > While you are at it would you care to try to install modules from CPAN
> > onto Active Perl  running on Vista? It just seems a world of grief;
> > although identical  installs are great under XP.  I just flat gave up
> > using Vista for Perl. I feel sad, but what can one do?
>
> If it's a problem with Perl in general one can talk about it with the
> perl5-porters and with ActiveState.  I don't know about ActiveState, but p5p
> gets so little response from Windows users.
>
> You can also figure out a way to provide p5p with a Vista box they can log
> into and do development work on.  That's a major issue, most Perl developers
> don't have a Windows machine and Windows is so difficult to make multi-user.
>
> Send off some details to [EMAIL PROTECTED], thanks.

There is no question that perl5-porters needs more Win32 talent.
Currently there are only a few of us using Win32 at all on p5p, and
frankly the only reason I have a Win32 box at all is a combination of
laziness and the fact that Nicholas Clark keeps reminding me that we
need Win32 testing.

So if you have a Win32 box and decent compilers (particularly Visual
Studio style compilers) and feel like helping out, then there is lots
to do.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: prove -j 9

2007-09-10 Thread demerphq
On 9/10/07, Nicholas Clark <[EMAIL PROTECTED]> wrote:
> On Mon, Sep 10, 2007 at 07:39:21PM +0200, demerphq wrote:
> > On 9/10/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote:
>
> > > Note that your test suite may or may not play nicely with that.  Have
> > > you ever run them in parallel before? ;-)  Assuming common tempfiles
> > > and such really rains on the parade.  Other than that, we're talking
> > > 40% less waiting.
> >
> > Will this make it easier to parallelize tests in the perl core?
> >
> > There the issue isn't so much running the tests in a single module
> > directory simultaneously but rather running the tests in different
> > test directories simultaneously.
>
> I had a hacked proof of concept for this, but it needed moderately large
> changes to Test::Harness to make it work. This was around the time that the
> TAPx::Parser work commenced, so it wasn't going to happen with Test::Harness
> 2.
>
> I still *have* the proof of concept code. The key thing I needed changed was
> that once you're running 2 or more tests you can't print any progress about
> *the* test currently running, because your assumption that there is just one
> is no longer valid.
>
> As to the core tests, they trip up like crazy, I think because they are not
> original in their choice of names for temporary files.

When you say core tests, which do you mean? The ones that test modules
or the ones that test the core? I guess you mean the latter, in which
case I personally wouldn't be too fussed. On my machine those tests
take about a minute. It's the module tests which really take a while,
especially on a threaded build.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: prove -j 9

2007-09-10 Thread demerphq
On 9/10/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote:
> As of TAP::Harness r465, the --jobs switch to prove enables
> parallelization of your test suite.
>
>   http://svn.hexten.net/tapx/trunk
>   http://scratchcomputing.com/svn/TAP-Harness-Parallel/trunk
>
> Note that your test suite may or may not play nicely with that.  Have
> you ever run them in parallel before? ;-)  Assuming common tempfiles
> and such really rains on the parade.  Other than that, we're talking
> 40% less waiting.

Will this make it easier to parallelize tests in the perl core?

There the issue isn't so much running the tests in a single module
directory simultaneously but rather running the tests in different
test directories simultaneously.

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Confusing output from Test::Harness 2.99_02

2007-09-10 Thread demerphq
On 9/10/07, Michael Kernaghan <[EMAIL PROTECTED]> wrote:
> While you are at it would you care to try to install modules from CPAN
> onto Active Perl  running on Vista? It just seems a world of grief;
> although identical  installs are great under XP.  I just flat gave up
> using Vista for Perl. I feel sad, but what can one do?

Please report issues of this nature to perl5-porters.

And what you can do is complain to your corporate MS rep about getting
us some support from MS...

cheers,
Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Deep without the Test part?

2007-09-05 Thread demerphq
On 9/5/07, Gabor Szabo <[EMAIL PROTECTED]> wrote:
> I would like to compare data structure in some non-test code.
> Test::Deep seems to give all the features I need, except that it is
> integrated with
> the testing framework.
>
> How could I use that or what else should I use to compare two deep
> data structures?

Diff the output of Data::Dump::Streamer on the two objects. Or maybe
reuse the code contained in Test::Struct or Test::Deep.
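As a sketch of the diff-the-dumps idea (using core Data::Dumper here so
the example is self-contained; Data::Dump::Streamer works the same way
and copes better with shared and cyclic structures):

```perl
use strict;
use warnings;
use Data::Dumper;

# Compare two deep structures by comparing their canonical dumps.
# Caveats: this treats 3 and "3" alike, and won't handle code refs.
sub deep_eq {
    my ($x, $y) = @_;
    local $Data::Dumper::Sortkeys = 1;  # stable hash-key order
    local $Data::Dumper::Indent   = 1;
    return Dumper($x) eq Dumper($y);
}

print deep_eq({ a => [1, 2] }, { a => [1, 2] }) ? "same\n" : "differ\n";
```

When the structures differ, diffing the two dump strings also shows you
*where* they differ, which a plain boolean comparison can't.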

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TAP YAML diagnostics

2007-09-04 Thread demerphq
On 9/4/07, Ovid <[EMAIL PROTECTED]> wrote:
> --- demerphq <[EMAIL PROTECTED]> wrote:
>
> > > If string data was *always* quoted and numeric data never was
> > (assuming
> > > it's really numeric), then most of the issues raised in this thread
> > > would go away.
> >
> > Define string data and numeric data. :-)
> >
> > IOW, is 0E0 string or numeric?
>
> This can be tricky to determine in a dynamic language.  This is easy to
> determine in Java.  I would argue that for dynamic language
> practitioners, we can ignore the distinction but the YAML output should
> *not* ignore that distinction.  We should have a single TAP YAML
> diagnostic standard so that we have less concern about interoperability
> issues.  Admittedly, not too many here seem worried about that so maybe
> it's a minor issue but when I work on exchanging data between static
> and dynamic languages, I keep getting bit by related issues.
>
> And I have no idea if 0E0 should be string or numeric :)

I think the solution is to leave it up to a given language as to
whether they discriminate or not, document how they can, and leave it
alone. It's not going to be easy to do in some languages, so they
probably don't have to or want to care, and in the others it will be
trivial so it probably isn't an issue anyway.

IOW, if your language is strongly typed you will never get a char*
return from a routine that is supposed to return an INT. So it
probably doesn't matter.

I really think this is an area where the more you think about it the
more problems you discover and the less you think about it the less
likely you will encounter any problems at all.

And just to make life nice and difficult: if you ARE going to
discriminate between strings and true numeric data formats, then are
you going to discriminate between different types of native format?
IOW, are you going to want to facilitate checking that a routine
returned an 8-bit integer and not a 16-bit integer? Etc. When somebody
decides to use TAP for Pascal code, are you going to distinguish a
string of 10 chars from a string of 11 (as Pascal itself is documented
to distinguish)?

So to summarize:

In dynamic languages it's unlikely that there is going to be a
difference between a "string" value and a "numeric" value. In
statically typed languages it won't matter because a routine can't
return the wrong type anyway. And if the language allows returning a
pointer to a given type, there will be issues when dereferencing that
type in the wrong way (or maybe not, such as in C, but then it's highly
unlikely that the values will compare correctly if a routine returns an
int *v instead of a char *v).

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TAP YAML diagnostics

2007-09-04 Thread demerphq
On 9/3/07, Ovid <[EMAIL PROTECTED]> wrote:
> --- "A. Pagaltzis" <[EMAIL PROTECTED]> wrote:
>
> > Is it possible to force this in tandem? Ie. when one of the keys
> > has to be quoted, the other is always quoted also? Because I'd
> > hate to see this:
> >
> >   wanted: elbow
> >   found: 'elbow '
> >
> > For simple cases like this one it's livable, but if the data gets
> > more complex, then comparing a raw string with a quoted one will
> > unnecessarily sprain people's brains.
>
> If string data was *always* quoted and numeric data never was (assuming
> it's really numeric), then most of the issues raised in this thread
> would go away.

Define string data and numeric data. :-)

IOW, is 0E0 string or numeric?

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Current state of TAP::Diagnostics

2007-09-02 Thread demerphq
On 9/2/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:
> On 1 Sep 2007, at 19:14, Eric Wilhelm wrote:
> >>   my $data = 3;
> >>   my $data = "3";
> >
> > YAML::Tiny?
>
> I don't believe that makes the distinction either.

Data::Dump::Streamer specifically does not make a distinction, as doing
so just caused trouble in testing: the strangest things can cause an
upgrade to SvPVIV. I can't even see XS code caring that much whether
you pass an SvIV or an SvPV, as if you use either through the normal
interfaces they will both be auto-upgraded to SvPVIV if they are of the
wrong type for a given operation.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Current state of TAP::Diagnostics

2007-09-01 Thread demerphq
On 9/2/07, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> The first is a single ISO 8601 datetime.  The latter is an ISO 8601 date and
> an ISO 8601 time separated by a space.  Two data fields instead of one.  So
> it's all kosher, we just have to specify that's what we're doing.
>
> tapdate = isodate " " isotime

It's been a while since I looked at the spec closely, but I seem to
recall that this is actually one of the valid formulations for an ISO
8601 date/timestamp. The spec doesn't specify only a single format, but
rather a number of them. ISTR the 'T' is recommended but not
mandatory.

Yves




-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Current state of TAP::Diagnostics

2007-09-01 Thread demerphq
On 9/1/07, Ovid <[EMAIL PROTECTED]> wrote:
> --- Eric Wilhelm <[EMAIL PROTECTED]> wrote:
>
> > I'm not sure the YAML spec distinguishes between string and number
> > when
> > the string is a number.
> >
> >   $ perl -e 'use YAML; warn YAML::Dump([3,"3"]);'
> >   ---
> >   - 3
> >   - 3
> >   $ perl -e 'use YAML::Syck; warn YAML::Syck::Dump([3,"3"]);'
> >   ---
> >   - 3
> >   - 3
>
> Ah, crud. Is this because YAML doesn't quote things without whitespace?
>  That really seems like a serious limitation to me.  Can I really keep
> a straight face and tell a C programmer that the "Test Anything
> Protocol" deliberately chose a serialization language that ignores data
> types?
>
> This is mentioned in Wikipedia:
>
>   http://en.wikipedia.org/wiki/YAML#Pitfalls_and_implementation_defects
>
> Workarounds for this issue are listed, but that still puts us in the
> position of needing a pure-perl, core method of disambiguating
> integers, floats, and strings.

No. I thought I already said this, but apparently not.

There Is No Difference.

The core doesn't track a difference except in the most trivial way.
That you can tell a difference AT ALL is purely an implementation
detail that hypothetically could change in some release of perl.

Consider the following code:

use Devel::Peek;
my $x=3;
my $y="3";
# $x and $y are different
Dump $x;
Dump $y;
push @x, "".$x, 0+$y;
# $x and $y are indistinguishable
Dump $x;
Dump $y;
__END__
SV = IV(0x1a4beb0) at 0x1a458e0
  REFCNT = 1
  FLAGS = (PADBUSY,PADMY,IOK,pIOK)
  IV = 3
SV = PV(0x15d59dc) at 0x1a458f8
  REFCNT = 1
  FLAGS = (PADBUSY,PADMY,POK,pPOK)
  PV = 0x15d96e4 "3"\0
  CUR = 1
  LEN = 2
SV = PVIV(0x15d5e0c) at 0x1a458e0
  REFCNT = 1
  FLAGS = (PADBUSY,PADMY,IOK,POK,pIOK,pPOK)
  IV = 3
  PV = 0x1a744bc "3"\0
  CUR = 1
  LEN = 2
SV = PVIV(0x15d5e1c) at 0x1a458f8
  REFCNT = 1
  FLAGS = (PADBUSY,PADMY,IOK,POK,pIOK,pPOK)
  IV = 3
  PV = 0x15d96e4 "3"\0
  CUR = 1
  LEN = 2

Notice that simply using a scalar in a numeric or string context will
force it to upgrade to a form that holds both.

And that is ignoring the issue of dual vars entirely.

I'd say forget this problem. If a tester wants this kind of detail he
is going to output diagnostics that will deal with it, and it's not
your problem to worry about what they are.

Yves




-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TAP YAML diagnostics

2007-09-01 Thread demerphq
On 9/1/07, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> demerphq wrote:
> > On 9/1/07, Fergal Daly <[EMAIL PROTECTED]> wrote:
> >> On a tangent, I think using quotes is important otherwise you end up with
> >> output like
> >>
> >>   wanted: elbow
> >>   found: elbow
> >>
> >> when what you really needed was
> >>
> >>   wanted: 'elbow'
> >>   found: 'elbow '
>
> No need to quote everything, just when there's leading or trailing whitespace
> to clarify.  YAML.pm does that automatically.  In fact, I don't think you can
> express leading or trailing whitespace without quotes in YAML.
>
>
> >> I'd even suggest sticking \t and \n in there when required and giving
> >> the option of outputting \{x} for unicode characters. The whitespace
> >> issue is one I had to deal with when writing Test::Tester as it allows
> >> you to check diag strings and they involve plenty of tricky
> >> whitespace,
>
> An option to display whitespace escaped is a good idea.  Also all non-ASCII.
> However, Unicode definitely has to default to normal display or else all those
> who don't speaky the ASCII will be very unhappy.
>
>
> > Could this be a reason NOT to emit a YAML stream?
>
> Not particularly, why do you ask?

Because I'd hate to see all this work done and then find out that it's
less than it could be because YAML in the end doesn't meet the
requirements. If it does then it's fine.

Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TAP YAML diagnostics

2007-09-01 Thread demerphq
On 9/1/07, Fergal Daly <[EMAIL PROTECTED]> wrote:
> On 30/08/07, Ovid <[EMAIL PROTECTED]> wrote:
> > After doing a bit of thinking about this and chatting with Andy
> > Armstrong about this, I've realized that much of the current thought
> > about the TAP diagnostics is wrong.  We already have much of what we
> > want in the TAP line above the diagnostics so there's no need to be
> > redundant.  The "description:" key is gone.
> >
> > In fact, there will be no mandatory keys.  So for a test like this:
> >
> >   ok $foo, '... my toe hurts';
> >
> > We might see the following TAP:
> >
> >   not ok 42 ... my toe hurts
> >   ---
> >   line: 53
> >   file: t/23-body-parts.t
> >   ...
> >
> > But for this:
> >
> >   is $elbow, $hole_in_the_ground;
> >
> > You might get this TAP:
> >
> >   not ok 12
> >   ---
> >   line: 15
> >   file: t/23-body-parts.t
> >   wanted: elbow
> >   found: moron
> >   ...
> >
> > Because the current behavior of Test::Harness is to discard "unknown"
> > lines (except when verbose), you won't even see the YAML.
>
> On a tangent, I think using quotes is important otherwise you end up with
> output like
>
>   wanted: elbow
>   found: elbow
>
> when what you really needed was
>
>   wanted: 'elbow'
>   found: 'elbow '
>
> I'd even suggest sticking \t and \n in there when required and giving
> the option of outputting \{x} for unicode characters. The whitespace
> issue is one I had to deal with when writing Test::Tester as it allows
> you to check diag strings and they involve plenty of tricky
> whitespace,

Could this be a reason NOT to emit a YAML stream?

yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: add points for registered namespaces

2007-08-21 Thread demerphq
On 8/21/07, brian d foy <[EMAIL PROTECTED]> wrote:
> In article
> <[EMAIL PROTECTED]>, demerphq
> <[EMAIL PROTECTED]> wrote:
>
> > On 8/21/07, brian d foy <[EMAIL PROTECTED]> wrote:
>
> > > The effect of this kwalitee metric would be that fewer modules are
> > > registered as I or Adam just stop paying attention because it's too
> > > much work now.
> >
> > Maybe you need more assistants to help out?
>
> We don't really need more assistants. People are added as PAUSE admins
> from time to time on the recommedation of the current PAUSE admins.
> Anyone interested in helping can start by reading the [EMAIL PROTECTED]
> list for a while to see how things work, then slowly starting to
> participate, and eventually gaining the trust of everyone before
> finally being trusted with PAUSE powers.

OK. That's a natural process. :-)

>
> > Personally i always found the module registration process to be more
> > opaque than it should be, and IMO "opening it up" a bit might be a
> > good call.
>
> What's opaque about it? I'm happy to answer any questions, but it's
> really not that complicated or secretive. *Everything* happens in
> public on the [EMAIL PROTECTED] list.

Probably things have changed since I used to monitor the list.
Sorry for the off-the-cuff comment.

Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: add points for registered namespaces

2007-08-21 Thread demerphq
On 8/21/07, brian d foy <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>,
> Cyberiade . it Anonymous Remailer <[EMAIL PROTECTED]>
> wrote:
>
> > there's a lot of questionable modules being uploaded to CPAN
> > which create top-level namespaces, very often not even being
> > self-explanatory. it would be nice to add points for
> > modules which have registered namespaces. this should encourage
> > more appropriate module naming.
>
> Please don't do that. It has nothing to do with quality in any sense,
> and it would create a lot more work for us PAUSE admins. There are
> three of us who handle the registrations now although it's mostly me. I
> don't want to have to see tens of messages every day which I have to
> respond to and suggest more meaningful names.
>
> The effect of this kwalitee metric would be that fewer modules are
> registered as I or Adam just stop paying attention because it's too
> much work now.

Maybe you need more assistants to help out?

Personally I always found the module registration process to be more
opaque than it should be, and IMO "opening it up" a bit might be a
good call.

yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Tests involving regexes

2007-08-03 Thread demerphq
Hi,

I thought I'd bring up a tricky subject that I've encountered in Perl
core development so you testing mavens can have a look at the problem
and maybe come up with some suggestions for me.

The problem is this: one of the main obstacles for adding new regex
flags to the perl regex engine are the oodles of tests that expect
stringified regular expressions to look like:

/\A\(\?[msix-]*:.*\)\z/s

There are also lots of tests out there that do things like:

is("$qr","(?ms-ix:...)",'Got back the regex we expected');

All of these break if I add a new modifier or change the flag layout
in any way (it was surprising how many test failures were caused by
reordering the flags). With Perl 5.9.4 or so there is a new modifier
'p' (for 'preserve'), but various special properties of what it
represents meant I was able to bypass the problem, as it only shows up
in the list if it's on; there is no 'off' form of it. However, for a
true boolean flag this won't work.

So assuming we want to add new modifiers (we kinda do) how should this
be addressed? Both going forward and dealing with legacy code.

BTW I recall that a notable area in the core which does this type of
naughty testing is actually the Test:: modules themselves. :-)

I'm open to suggestions that don't involve hunting down all of these
naughty tests and fixing them, including stuff that doesn't strictly
involve test modules.
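To illustrate the shape of a fix (my own sketch, not an existing
convention in core): tests can avoid hard-coding the flag layout by
checking behaviour, or by matching only the parts of the stringification
they actually care about:

```perl
use strict;
use warnings;
use Test::More tests => 2;

my $qr = qr/foo/msi;

# Instead of is("$qr", "(?msi-xp:foo)"), which breaks whenever a flag
# is added or the ordering changes:

# 1. Behavioural check: the stringified form must recompile to a
#    pattern that still matches what the original matched.
like("FOO", qr/$qr/, 'round-tripped pattern still matches');

# 2. Structural check: assert only the general "(?flags:...)" shape,
#    not the exact flag list (the prefix syntax varies across perls).
like("$qr", qr/^\(\?\^?[a-z]*(?:-[a-z]+)?:/i, 'has a flags prefix');
```

Both checks pass unchanged whether the engine emits "(?msi-x:foo)",
"(?msi-xp:foo)", or a newer-style "(?^msi:foo)".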

Cheers,
Yves


-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Fixing the damage caused by has_test_pod

2007-07-30 Thread demerphq
On 7/30/07, Ovid <[EMAIL PROTECTED]> wrote:
> Tests should *only* fail when there is a clear, unequivocal reason to
> believe that the code will not function appropriately on someone's
> machine.  Having '=head0' or a '=back' without '=over' should not be
> such a failure.  It's taken a lot of grief while working on TAP::Parser
> to realize how terribly wrong I was about this in the past and it's a
> mistake I would like to better understand and rectify in the future.

That's a very interesting comment that I'd love to hear more about. Can
you expand on your experiences with TAP::Parser in this respect?

cheers,
Yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Code coverage awesomeness

2007-06-18 Thread demerphq

On 6/18/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:

* Joshua ben Jore <[EMAIL PROTECTED]> [2007-06-18 02:10]:
> Probably but I'd ask Avar or Yves about that and I'm sure the
> method would be entirely different. The 5.10 engine is
> pluggable so I'm sure it's wrappable and therefore traceable.

Cool. Because that's a subject that the existing coverage tools
don't deal very well with – just because I used a regex once
doesn't mean I actually have any significant coverage of the
"code paths" in that regex. Yet all the tools operate on this
assumption so far.


It's come up in discussion on p5p in the past. The basic idea would be
to count regops and then note which regops were not touched by
executing the pattern. It becomes a lot trickier when you consider the
TRIE regop, so I guess the thing to do would be to disable that
optimisation under coverage. But the issue applies to any sort of
"DFA"-isation of the engine, which potentially makes the problem very
hard indeed.

cheers,
Yves





--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTS: suggestion for a new metric

2007-05-30 Thread demerphq

On 5/29/07, David Cantrell <[EMAIL PROTECTED]> wrote:

demerphq wrote:
> On 5/26/07, Andreas J. Koenig <[EMAIL PROTECTED]>
> wrote:
>> AFAIK it is not Archive::Tar either. I have not found out which
>> compression software packages do it right and which do it wrong. I
>> have communicated with several authors about it but being Windows
>> users, they do not know it either.
> It would be nice to know tho. If only so as to know what to avoid.

GNU tar on Windows, I think.  At least, pointing out the GNUish --mode
switch has helped people to fix it whenever I've muttered at them about it.


I've not experienced this problem with Cygwin tar, but I'll keep an eye
out for it.


>>   But everybody should know that PAUSE
>> cannot index these beasts anyway and sends mail to the authors that it
>> cannot read the contents of the distro ...
> Which makes me wonder why David complained about this issue at all...

Because I test *everything* that the PAUSE tells cpan-testers about,
which includes unindexable distributions.


Er, so you want a metric to tell people how their rejected upload to
PAUSE isn't going to work right?

That doesn't sound like a very useful metric. If PAUSE doesn't index it
then you shouldn't test it, as it has already failed the most important
metric there is.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: testing on other platforms

2007-05-26 Thread demerphq

On 5/26/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 26 May 2007, at 19:21, demerphq wrote:
> On 5/26/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:
>> On 26 May 2007, at 19:08, Gabor Szabo wrote:
>> > Is there some publicly available Solaris/HP-UX/etc
>> > (see full list on this page http://www.cpan.org/ports/index.html )
>> > set of servers where one could test his modules?
>>
>> There's the HP Testdrive thing - and there may be similar things for
>> other platforms.
>>
>> I can give you an account on a Linux box if it's useful?
>
> I get the feeling the real problematic one is Windows.

I seem to remember that OpenSSH works on Cygwin. A small network of
volunteers offering access to Windows boxes would be useful I think.


AFAIU there is a terminal services (remote desktop) client for Linux,
which I would *guess* would be a better approach as it would not force
the user to go through Cygwin.

yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTS: suggestion for a new metric

2007-05-26 Thread demerphq

On 5/26/07, Andreas J. Koenig <[EMAIL PROTECTED]> wrote:

>>>>> On Sat, 26 May 2007 20:06:18 +0200, demerphq <[EMAIL PROTECTED]> said:

  > On 5/26/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:
 >> On 26 May 2007, at 18:45, demerphq wrote:
 >> > Maybe ill just upload my files in zip format from now on only, then
 >> > its not my problem anymore right? Would that be better?
 >>
 >> That would be fine.

  > Fine then.

You do not have to change anything, Yves. Your tarballs were all fine
as far as I know. Do not switch to zip format, please, without a
reason. While ZIP has several significant advantages over TAR.GZ, it
is inferior in compression metrics. CPAN is not fond of seeing zip
files because they usually are 10-30 percent bigger.


Is 7z supported by CPAN?  :-)

http://sevenzip.sourceforge.net/download.html


  > The fact that ExtUtils make dist automatically produces a
  > .tar.gz and the fact that Archive::Tar does not do the right thing is
  > not exactly my fault however.

AFAIK it is not Archive::Tar either. I have not found out which
compression software packages do it right and which do it wrong. I
have communicated with several authors about it but being Windows
users, they do not know it either.


It would be nice to know tho. If only so as to know what to avoid.


On new metrics: I agree with the OP that software packaged with absurd
permission bits offends kwality. But everybody should know that PAUSE
cannot index these beasts anyway and sends mail to the authors that it
cannot read the contents of the distro and that they need to make a new
upload when they want to be indexed.


Which makes me wonder why David complained about this issue at...

Could it be that it actually *is* his decompression software?

Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: testing on other platforms

2007-05-26 Thread demerphq

On 5/26/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 26 May 2007, at 19:08, Gabor Szabo wrote:
> Is there some publicly available Solaris/HP-UX/etc
> (see full list on this page http://www.cpan.org/ports/index.html )
> set of servers where one could test his modules?

There's the HP Testdrive thing - and there may be similar things for
other platforms.

I can give you an account on a Linux box if it's useful?


I get the feeling the real problematic one is Windows.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTS: suggestion for a new metric

2007-05-26 Thread demerphq

On 5/26/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 26 May 2007, at 18:45, demerphq wrote:
> Maybe ill just upload my files in zip format from now on only, then
> its not my problem anymore right? Would that be better?

That would be fine.


Fine then. The fact that ExtUtils make dist automatically produces a
.tar.gz and the fact that Archive::Tar does not do the right thing is
not exactly my fault however.


You know - you've kind of tickled a raw nerve here.


As did David in his original post for me.


One of the very few reasons I maintain a Windows box here and endure
the pain (for me - subjective I know) that goes with it is so I can
test my modules against Win32. And the only reason I bang my head off
Win32 related problems is because I have a deeply held conviction
that my stuff should - if possible - work on any platform Perl supports.


Yes i agree with this as well. I go to similar lengths to ensure my
code works on *nix. And yes i appreciate your efforts, although to the
best of my knowledge ive never directly benefitted from them.


I honestly don't think - given the hassle that supporting Win32 is
for so many people who otherwise wouldn't touch it - that you have
much room to bitch about a Unix specific problem.


I was out of line in how i put things. I apologise.

yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTS: suggestion for a new metric

2007-05-26 Thread demerphq

On 5/26/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:

* demerphq <[EMAIL PROTECTED]> [2007-05-26 19:20]:
> BTW, id say that if this is an issue for Unix users then they
> should file a bug with the people that wrote their
> decompression software and/or installer software.

It's the decompression software's fault that it correctly
preserves the data in the archive when decompressing?! Are you
serious?

Sorry, but it is *the _compression_ software's* bug.


Fine, then what do i do about it? File a bug with Archive::Tar
(maintained by a non windows programmer)?


> I dont see it as being my problem as a Win32 developer at all.
> Im sympathetic to the annoyance it causes but to me its like
> opening a book written in a language you dont read and
> complaining that it isnt written in one you do. I mean if Win32
> doesnt even support this concept how is it my problem what your
> software does when unpacking?

Tarballs are a Unix concept and use Unix permission semantics.
Win32 developers creating tarballs are writing books in a
language they don't speak. Then they get annoyed that native
(or fluid) speakers of the language complain when their books
are full of mistakes.

(What a silly metaphor… but anyway.)


Ok fair enough. I sit corrected.



Sorry, you're wrong. I'm sorry if the silly Unix people are
grating on your nerves, but they're right.


Ok im wrong, and yes comments like "suffering from windows" do grate
on my nerves.

Maybe ill just upload my files in zip format from now on only, then
its not my problem anymore right? Would that be better?

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTS: suggestion for a new metric

2007-05-26 Thread demerphq

On 5/26/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:

* demerphq <[EMAIL PROTECTED]> [2007-05-26 17:35]:
> Can you explain this please? Why would the lack of a set x bit
> on a directory prevent you from doing
>
>  perl Makefile.PL
>  make
>  make test

Yes.


Im assuming this means "Yes it prevents this"

(Excessive terseness)--


> Is this simply so you dont have to type 'perl'?

No.

The x bit on a directory determines whether you can resolve paths
that cross it.

~ $ cd Proc-Fork
~/Proc-Fork $ chmod -x lib
~/Proc-Fork $ perl Build.PL
Can't find file lib/Proc/Fork.pm to determine version at \
/usr/lib/perl5/site_perl/5.8.8/Module/Build/Base.pm line 937.


Ok.

Thanks.

BTW, id say that if this is an issue for Unix users then they should
file a bug with the people that wrote their decompression software
and/or installer software.

I dont see it as being my problem as a Win32 developer at all. Im
sympathetic to the annoyance it causes but to me its like opening a
book written in a language you dont read and complaining that it isnt
written in one you do. I mean if Win32 doesnt even support this
concept how is it my problem what your software does when unpacking?
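That said, an author who wants to guard against this on any platform can rewrite the permission bits inside the archive itself before uploading. A sketch using Archive::Tar, under the assumption (which its documentation suggests) that Archive::Tar::File's mode() accessor is writable; the dist name is made up:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Archive::Tar;

# Build a demo tarball with broken permission bits, as a Win32-built
# dist might have (no x bit on the directory). type 5 is a directory
# entry in the tar header.
my $tar = Archive::Tar->new;
$tar->add_data( 'My-Dist/', '', { type => 5, mode => 0666 } );
$tar->add_data( 'My-Dist/Makefile.PL',
    "use ExtUtils::MakeMaker;\n", { mode => 0666 } );
$tar->write( 'My-Dist.tar.gz', 1 );    # 1 => gzip compression

# The repair pass: force 0755 on directories and 0644 on plain files
# by rewriting each entry's mode before writing the archive back out.
my $fix = Archive::Tar->new('My-Dist.tar.gz');
$_->mode( $_->is_dir ? 0755 : 0644 ) for $fix->get_files;
$fix->write( 'My-Dist-fixed.tar.gz', 1 );
```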

Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: CPANTS: suggestion for a new metric

2007-05-26 Thread demerphq

On 5/26/07, David Cantrell <[EMAIL PROTECTED]> wrote:

Some modules' tarballs don't set the x bit on directories, which makes
it impossible for a non-root user to run Makefile.PL or the module's
tests.  The usual cause is that the author suffers from Windows, and the
fix is to use '--mode 755' when creating the tarball.


Can you explain this please? Why would the lack of a set x bit on a
directory prevent you from doing

 perl Makefile.PL
 make
 make test

Is this simply so you dont have to type 'perl'?

cheers
Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Should TAP capture exit codes

2007-03-07 Thread demerphq

On 3/7/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 7 Mar 2007, at 18:59, demerphq wrote:
> Neither seems to me a very convincing reason to redesign something as
> well thought out as the HTTP response code schema. With it you have a
> well documented, well designed language agnostic response structure.
> It seems to me youd have to work hard to come up with something
> better.

We already have an interface that's even closer to home: emit an
error message and return non-zero status.

As I say I'm not thoroughly opposed to using HTTP like responses;
just not convinced that even that level of complexity is necessary.


I guess it comes down to whether you can anticipate the possibility
that you will need new codes, and whether you have a framework to put
them into.
The nice thing about the HTTP scheme is it gives you a way to add new
codes that can be interpreted more or less correctly by something that
wasnt designed with those codes in mind.  (For instance while a
particular client might not know what 569 is exactly, they should know
its a server error.)

If you do go so far as to design a new protocol for this, please take a
minute to read the RFCs for the HTTP and SMTP response code schemas.
The SMTP schema goes a bit further than the HTTP one, in that it also
assigns special meaning to the second digit as well as to the first.
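The graceful-degradation property described above is easy to sketch. This is a hypothetical helper, not part of any proposed TAP protocol: a client that only understands the first digit can still react sensibly to a code it has never seen.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map the first digit of a response code to its broad category,
# HTTP-style. A code like 569 is unassigned, but any client that
# knows the scheme can still tell it is a server error.
my %class = (
    1 => 'informational',
    2 => 'success',
    3 => 'redirect/partial',
    4 => 'client error',
    5 => 'server error',
);

sub classify {
    my ($code) = @_;
    return $class{ substr $code, 0, 1 } // 'unknown';
}

print classify(200), "\n";    # success
print classify(569), "\n";    # server error, even though 569 is unassigned
```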

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Should TAP capture exit codes

2007-03-07 Thread demerphq

On 3/7/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 7 Mar 2007, at 18:18, demerphq wrote:
>> If you want to say "Temporary Redirect" don't say "307" say
>> "Temporary
>> Redirect"!  If you want to put lots of information into one value,
>> like
>> categorization, use a hash!  { type => "Redirect", permanent => 0 }
>
> Numeric response codes have the advantage that they are language
> agnostic.

Indeed. That doesn't mean we have to coerce our status information to
map onto HTTP response codes though or even that we have to use a
numeric scheme.


Personally I see this as a wheel-reinvention issue. Reusing the
HTTP response code seems to me to be a logical and natural step. As a
framework it strikes me that you will be unlikely to come up with
something truly better, so why not just reuse it and not worry about
it?

So far the objections we have are:

1) It uses numeric error codes and not English ones.
2) It wasnt custom designed for test responses.

Neither seems to me a very convincing reason to redesign something as
well thought out as the HTTP response code schema. With it you have a
well documented, well designed language agnostic response structure.
It seems to me youd have to work hard to come up with something
better.

Anyway, just my $0.02

cheers,
Yves




--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Should TAP capture exit codes

2007-03-07 Thread demerphq

On 3/7/07, Michael G Schwern <[EMAIL PROTECTED]> wrote:

Andy Armstrong wrote:
> On 7 Mar 2007, at 16:26, Eric Hacker wrote:
> [snip]
>> The first digit can be a grouping by success/failure.
>
> Yes, I see where you're going with this :)
>
>> So then if I'm not too far off base with the above, then to use
>> something different than HTTP::Status type codes would be reinventing.
>>
>> 1xx Info
>> 2xx Success
>> 3xx Redirect, probably not applicable to testing
>> 4xx Failure
>> 5xx Server/System Error
>
> As I say I'm broadly in favour of something /like/ this - but we have a
> clean slate here and it seems kind of arbitrary to commit to using
> HTTP-like status codes when we don't have to.

Any time you start writing a system that involves representing states as
numbers and doing bitmasks and math to add extra meaning, step back and
remind yourself that its 2007 and this is not C and you're not writing a
network protocol.  You shouldn't have to memorize a table or do math in your
head to figure out the basics of what a message means.

And god forbid we had more than 100 failure types!

If you want to say "Temporary Redirect" don't say "307" say "Temporary
Redirect"!  If you want to put lots of information into one value, like
categorization, use a hash!  { type => "Redirect", permanent => 0 }


Numeric response codes have the advantage that they are language agnostic.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Object Identification Cold War and the return of autobox.pm (wasRe: UNIVERSAL::ref might make ref( $mocked_obj ) sane)

2007-03-02 Thread demerphq

On 3/2/07, Adam Kennedy <[EMAIL PROTECTED]> wrote:

> 99.999% of the time you do not want to really know how something is
> implemented, you want to remain ignorant.

I concur, which is what really pisses me off: that
IO::String->new(\$string)->isa('IO::Handle') returns false, because the
author believes in duck typing over actually having isa describe the
interfaces.


This comment led me to do some thinking about perls typing and OO
structure and why people seem to have a lot of trouble agreeing on
things like this.

A "type" in a language like C is a specification of data
representation in memory.

In an OO language like C++ this gets extended to be a data
representation in memory plus a set of methods that represents an
interface.

But...

In Perl the memory organization is orthogonal to the interface.

So we have immutable types like SVs, AVs, HVs, CVs, FORMATs, IO, etc.

And then through blessing we can associate the same set of methods to
any or all of those types.

So when you try to fit the Perl model into the C++ model something has to give.

The only way to properly map the perl level concepts to C++ is to say
that a C++ type is a perl type+a perl class.

But because of the history of ref() and isa(), and the way 'type' is
misused in the perl literature the distinction is blurred. Some people
want to say "are you something I can deref as a particular type AND
has an interface I know how to use" and some people JUST want to say
"are you something that has an interface I know how to use". Both
questions are reasonable and mutually exclusive. One routine cant do
both.

So for instance to me it seems reasonable for IO::String to say, no,
I am not an IO::Handle. Its data representation doesnt use an IO object
so its not an IO::Handle, even if it does provide the same method
interface.

isa() is to some a method that asks the first question, but to others
it asks the second question.

Now if there was an ->implements() utility function as well as the
->isa() function, then I think IO::String would contrive to ensure
that IO::String->new(\$string)->implements('IO::Handle') would return
true even though IO::String->new(\$string)->isa('IO::Handle') would
not. (Im not sure if this is the same as chromatics proposed 'does'
function).
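The hypothetical implements() described above can be sketched in a few lines (the helper name and the Duck example class are illustrative, not an existing API): unlike isa(), it asks only whether an object provides the methods of an interface, regardless of what perl type underlies it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A sketch of the hypothetical implements() check: true if the object
# can() every method in the interface, no matter how it is implemented.
sub implements {
    my ( $obj, @methods ) = @_;
    $obj->can($_) or return 0 for @methods;
    return 1;
}

package Duck;    # illustrative class providing a quack() "interface"
sub new   { bless {}, shift }
sub quack { "quack" }

package main;
my $d = Duck->new;
print implements( $d, 'quack' ) ? "implements quack\n" : "does not\n";
print $d->isa('IO::Handle')     ? "isa IO::Handle\n"   : "not an IO::Handle\n";
```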

Anyway, thanks. The comment on "duck typing" sorta made all this click
for me in a way it hasnt before.

Cheers,
Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: UNIVERSAL::ref might make ref( $mocked_obj ) sane

2007-02-26 Thread demerphq

On 2/26/07, Yuval Kogman <[EMAIL PROTECTED]> wrote:

On Mon, Feb 26, 2007 at 17:47:23 +0100, demerphq wrote:
> On 2/26/07, Joshua ben Jore <[EMAIL PROTECTED]> wrote:
> >I'm of the opinion that it is clear and blatantly an error to ever use
> >ref as a boolean.
>
> One liners and minor snippets where you control the input data would
> be an exception IMO.


Are you guys serious?


I am. Actually my view is that using ref at all unless you control the
input is an error.

Too many times ive bumped into dumbass code that does

if (ref($ob) eq 'ARRAY') {...}

so that i cant pass an object into the routine, for no good reason at all.

Likewise with ref in boolean context, I almost never want the object
to be able to lie to me.


When has this ever been an issue in practice? The only time it has
been an issue the intended behavior was in fact that the object
really not be treated as an object (Scalar::Defer). Someone trying
to bless into these classes is obviously doing it for that intended
behavior.


Well i am biased in that ive spent a ton of time on data serialization
modules where i never want to be lied to.

If i dont care what the object is or whether they passed in an object
then i wont test for it at all.

Turning this around for a moment, when does it make sense to use ref
and yet at the same time not care that it might lie?

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: UNIVERSAL::ref might make ref( $mocked_obj ) sane

2007-02-26 Thread demerphq

On 2/26/07, Joshua ben Jore <[EMAIL PROTECTED]> wrote:

I'm of the opinion that it is clear and blatantly an error to ever use
ref as a boolean.


One liners and minor snippets where you control the input data would
be an exception IMO.


The correct boolean is defined( blessed( obj ) ) because this returns
a true value for all objects regardless of goofy class name and a
false value for everything not an object. This is purely because
blessed() avoided ref()'s problem and used undef as its false value
and not ''. It is impossible to bless into undef and this makes
blessed() a better test.


I agree, blessed has the right behaviour, but only because '', "\0" and
0 are legal class names. reftype() however has no excuse for returning
undef as all the possible legal values are determined by core and will
never evaluate to false. But alas its a long time too late to change.
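The goofy-class-name problem is easy to demonstrate. This is a minimal sketch of why blessed() is the better boolean, using a class literally named '0':

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Scalar::Util qw(blessed reftype);

# '0' is a legal class name, so ref() can return a false value for a
# genuine object. blessed() returns undef only for non-objects, which
# makes defined(blessed($obj)) the reliable test.
my $sneaky = bless {}, '0';

print ref($sneaky)             ? "ref says object\n"     : "ref says not an object\n";
print defined blessed($sneaky) ? "blessed says object\n" : "blessed says not an object\n";
print reftype($sneaky), "\n";  # the underlying perl type: HASH
```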


Although...

I also think there may be a difference between code that wants the
"common" truth and other code that wants the "real" truth. Dumping and
serialization code probably wants to avoid being lied to. User code
might be willing to accept lies and in fact, might want them. I think
the real answer is that there is an oddly shaped boundary between lies
being preferable or truth being preferable and that each piece of code
may have a different idea of how to distinguish that line.


Exactly. A dumper like DDS never wants to be lied to, and hence
avoids using $obj->can, $obj->isa, ref() or anything that allows the
object to lie about itself. Whereas most code should probably let
objects lie to it. There is no black or white about this.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: UNIVERSAL::ref might make ref( $mocked_obj ) sane

2007-02-26 Thread demerphq

On 2/26/07, Michael G Schwern <[EMAIL PROTECTED]> wrote:

Joshua ben Jore wrote:
> On 2/25/07, Yuval Kogman <[EMAIL PROTECTED]> wrote:
>> Is there a function that is to this as overload::StrVal is to
>> stringification?

Wouldn't that just be CORE::ref $obj ?



I think Josh is doing something sorta evil with the pp dispatch table to
make this work, so im guessing that CORE::ref wont change anything.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: ExtUtils::MakeMaker, and t/ sub-directories

2007-02-09 Thread demerphq

On 2/8/07, Christopher H. Laco <[EMAIL PROTECTED]> wrote:

Nik Clayton wrote:
> Paul Johnson wrote:
>> On Thu, Feb 08, 2007 at 09:26:01AM +, Nik Clayton wrote:
>>
>>> [ I vaguely recall a discussion about this, but my search-fu is weak,
>>> and I can't find it ]
>>>
>>> Is there a standard way/idiom to get ExtUtils::MakeMaker to support
>>> tests in subdirectories of t/?
>>>
>>> I've got a bunch of tests, and rather than client-ls.t, client-add.t,
>>> client-commit.t, etc, I'd like t/client/ls.t, t/client/add.t,
>>> t/client/commit.t, and so on.
>>
>> I have this in one of my Makefile.PLs, which seems to be just about
>> what you
>> are looking for:
>>
>> WriteMakefile
>> (
>> ...
>> test => { TESTS => "t/*/*.t" },
>> ...
>> );
>
> Ah.  My mistake for not being clear enough.  I want to run t/*.t and
> t/*/*.t.
>
> Of course, I tried
>
>   test => { TESTS => [ "t/*.t", "t/*/*.t" ] },
>
> and it doesn't work.  It's just occurred to me that I'm trying to be too
> clever.
>
>   test => { TESTS => "t/*.t t/*/*.t" },
>
> works perfectly.
>
> N
>
>

I offer this word of warning. If you have too many tests, or many
tests with long names, win32 will in all likelihood barf with a command
line too long error.


I think this is a legitimate issue to consider. What particular
sequence of events leads to this happening, and how can we address it?

The other thing that comes to mind is making the tree too deep, such
that on win32 you exceed the maximum path length.

I guess other platforms have similar but different zaps and traps to
consider as well.
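For reference, the working stanza from the quoted thread spelled out as a complete Makefile.PL (the module name and version here are hypothetical):

```perl
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME    => 'My::Module',    # hypothetical module name
    VERSION => '0.01',
    # Run both top-level tests and tests one directory down;
    # TESTS takes a single space-separated string of globs.
    test    => { TESTS => 't/*.t t/*/*.t' },
);
```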

cheers,
Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Devel::Cover newbie - what does 'yourprog' have to be?

2007-02-03 Thread demerphq

On 2/2/07, Paul Johnson <[EMAIL PROTECTED]> wrote:

On Fri, Feb 02, 2007 at 01:32:37PM -0600, Mike Malony wrote:

> I'm into testing, got some nice .t files, and prove tells me things I'd
> rather not hear.  So, my next step on the straight and narrow path of
> testing, is to gauge my testing coverage.
>
> IN the doc, the synopsis suggests
>   "perl -MDevel::Cover yourprog args
>   cover"
>
> But what can you use in 'yourprog'?
>
>   .t and .pl files run, do the tests and have some extra messages implying
> that cover was running, but there are no stats printed.
>
>  prove also runs my tests, and produces stats, but only for the installed
> modules.
>
> Clearly I'm missing something.  As a self taught perler on a windows system,
> who knows what craziness I'm doing.  Any guidance appreciated (mild cheerful
> abuse expected)

I think you might just need to read a couple of lines further in the docs.

From the sound of things, you have created a module in the standard format,
and so you want the next section, "To test an uninstalled module:"

  cover -delete
  HARNESS_PERL_SWITCHES=-MDevel::Cover make test
  cover

or perhaps you might prefer the newer, underdocumented alternative:

  cover -test

which does about the same thing, but also includes a little light magic to
try running gcov on your XS files, if you have both gcov and XS files.

But perhaps a part of the problem here is that you don't have "make"
available?  In that case, you should []


Download the free nmake from Microsoft at:

 http://download.microsoft.com/download/vc15/patch/1.52/w95/en-us/nmake15.exe

and put it in a directory in your path (system32 or your perl\bin
directory come to mind).

:-)

Cheers,
Yves
ps: The free nmake has been available from that url for ages.

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Harness 3.0

2007-01-21 Thread demerphq

On 1/21/07, David Golden <[EMAIL PROTECTED]> wrote:

On 1/21/07, demerphq <[EMAIL PROTECTED]> wrote:
> Why cant something that wants to monitor the test process do something
> other than make test?
>
> They can do a make, and or make test-prep or whatever, and then call
> into an alternative test harness framework to monitor the tests.
>
> Can you explain why this is a no-go in more detail?

I can understand your thinking, but I think this is a no-go because of
the legacy of Makefile.PL, Build.PL and test.pl and the ability for
them to change what "make test" or "Build test" actually does.

Your suggestion about "alternative test harness frameworks" will
probably work 99% of the time.  But it's not entirely backwards
compatible.  And as long as people can continue to customize
make/Build, it introduces yet another variation in how modules might
be run.   I think we want to avoid the case where an alternative
framework might fail tests, but the author's "make test" works fine,
or the case where the alternative test framework passes, but CPAN.pm's
"make test" fails.

The only thing that distributions can be safely assumed to do is
execute "make test" or "Build test" because that's exactly how CPAN.pm
runs tests and interprets output.  So in my opinion, that's *exactly*
what the test monitors should be checking, not the results of calling
an alternative framework on the "t/" directory.


I think having distros that want to push the envelope that far should
have to somehow inform the harness of such. I see no reason to make
everybody pay for a few people bending the rules.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Harness 3.0

2007-01-21 Thread demerphq

On 1/21/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 21 Jan 2007, at 13:36, demerphq wrote:
> I dont get this logic.
>
> Why cant something that wants to monitor the test process do something
> other than make test?
>
> They can do a make, and or make test-prep or whatever, and then call
> into an alternative test harness framework to monitor the tests.
>
> Can you explain why this is a no-go in more detail?

I'm sure that's possible but I like the simplicity of Adam's proposal
to just dump raw TAP into a file and then feed it back into
TAPx::Parser for subsequent analysis. Is there anything wrong with
that picture?


Scale:

All tests successful (1 subtest UNEXPECTEDLY SUCCEEDED), 56 tests and
579 subtests skipped.
Passed TODO   Stat Wstat TODOs Pass  List of Passed
---------------------------------------------------------------------------
../ext/B/t/optree_constants.t11  28
Files=1351, Tests=177305, 1037 wallclock secs ( 0.00 cusr +  0.00 csys
=  0.00 CPU)
   cd ..\win32

Can you imagine the logfile of 177305 tests?

Also how will this stream represent things like "test file
segfaulted", "test file hung and was killed", timing out the test
process, etc. All legitimate cases when dealing with core testing.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Harness 3.0

2007-01-21 Thread demerphq

On 1/21/07, David Golden <[EMAIL PROTECTED]> wrote:

On 1/21/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:
> >   Of course, it also depends upon what output information is being
> > scraped.  If it's only something simple like 'All tests successful',
> > then this is an easier task.
>
> I presume anyone who wanted to know that currently would just be calling
> T::H::execute_tests which returns three hashes that summarise the test
> results.

This is a false assumption.  You've got to consider that the
"canonical" way to run tests for a distribution is still "make test"
and thus Test::Harness is getting called from a command-line embedded
in a Makefile.  The program calling "make test" (e.g. CPAN.pm) has no
way of getting structured data back from the function call -- it has
to have some sort of IPC.  Thus my suggestion for dumping structured
output to text files.


I dont get this logic.

Why cant something that wants to monitor the test process do something
other than make test?

They can do a make, and or make test-prep or whatever, and then call
into an alternative test harness framework to monitor the tests.

Can you explain why this is a no-go in more detail?

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Test::Harness 3.0

2007-01-21 Thread demerphq

On 1/21/07, Ovid <[EMAIL PROTECTED]> wrote:

--- Steve Peters <[EMAIL PROTECTED]> wrote:

> The primary feature that we've seen missing in the Perl core is the
> ability to run tests in parallel.  This would greatly reduce our
> timelines in the fix-make-make test cycle that we go through in making
> changes in the core.

Yves was also pointing out stuff about the smoke tests.  This is
another area I've no experience in.  Is this a matter of looking at
Test::Smoke and figuring out how to integrate our work with it and
seeing if it doesn't fall down?  After that, I'm not sure what would be
involved in running tests in parallel with this.


I think you should look at the core test suite and Test::Smoke as
being your primary target. If you can make Smoke tests easier to do
and more robust then you have a winner on your hands.

My thinking is like this: core tests whacks of stuff, loads of modules
plus the internals, plus core tests are routinely run on many
platforms and architectures both in smoke form and not. Blead
routinely has to deal with threading errors that hang, segfaults and
the like, so using it there will expose every wart you could imagine.

Thus if you can prove that your parser is suitable for the core then I
think you will have made a pretty big step towards proving that it is
suitable for release in the wild.


> Overall, I believe that we need to be a bit cautious in taking a new
> rewrite of Test::Harness into the core.

No question about it.  This is one of the most critical pieces of
software out there.  If it fails, everything else falls down, too.


Absolutely, and proving it in core will be a massive confidence boost
for the rest of perldom.


> Before commiting it to the core, I'd
> like to use it in the smoke tests first to make sure that any
> performance issues are discovered in advance,

TAPx::Parser collects a lot more information than Test::Harness and
the cost is that it currently runs a bit slower, despite my working
very hard to profile and optimize it.  I still have more tricks up my
sleeve for that, but I suspect that running tests in parallel would be
the key for this.


And theres the point, smoke tests need access to a lot of that data.
If your harness is designed right it should make Test::Smoke much
easier to implement, and potentially faster even if it is slower than
harness itself.

You really should look into the smoke reports and speak with
Abe Timmerman about what the harness should be doing to make smoking
easier and more reliable yadayada. He should be able to give you lots
of useful comments on the perl test process and harness features that
would be advantageous to have.


> and that we are able to tap into the
> wide variety of architectures and operating systems that the core
> smoke tests allow.

Though the latest test results and comments for TAPx::Parser look
promising, we don't have access to that 'wide variety of architecture'.
 How can we test something so key without throwing the switch?  This
concerns me, but I don't know enough about this area to really comment.


As i said earlier, if we are using your parser for testing blead then
given time it will be quite thoroughly tested on a range of
architectures.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TAPx::Parser 0.50_06 -- Now on Windows!

2007-01-19 Thread demerphq

On 1/18/07, Ovid <[EMAIL PROTECTED]> wrote:

Hi all,

I've just released 0.50_06 to the CPAN. This should be called the 'Andy
Armstrong' release because he's tracked down several irritating bugs
and made it work on Windows!


Just a thought but recently there has been a bit of work on p5p about
getting smoke testing of perl working properly on Win32.

One of the critical problems is of hanging tests causing the smoke
process to stall.

Jan Dubois and Steve Hay have been hashing out the details of how to
manage a win32 test process that can autotime out test files without
leaving zombies or stalling the test process. I think you should take
advantage of this effort, especially if you wish to see your parser
used by core. Specifically using Win32::Job to manage the test
sessions to ensure that hung process trees can be killed as a group.
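The Win32::Job approach sketched above looks roughly like this. A hedged sketch only: it is Win32-only, the test file name is hypothetical, and it assumes Win32::Job's documented new/spawn/run interface.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Win32::Job;    # Win32-only module

# Run one test file inside a Win32 job object so that a hung test can
# be killed along with every child process it has spawned.
my $job = Win32::Job->new;
$job->spawn( $^X, qq{perl t/some_test.t} );    # program, full command line

# run() waits up to the timeout (in seconds); on expiry the whole
# process tree inside the job is terminated, leaving no zombies.
my $ok = $job->run(60);
print $ok ? "test finished in time\n" : "test timed out and was killed\n";
```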

I also think you should take some time to review the various test
harnesses used to build perl, assuming you havent already of course.
They arent the same necessarily on *nix as on Win32 and being familiar
with the features and quirks of the two systems will definitely
improve your odds of a core integration long term.

Likewise you should review the work of the smoke suite for handling
oddball cases like VMS, you certainly shouldnt think that windows will
be the oddest platform your code will run on if it is core integrated.

Just some thoughts that ive been saving up and meaning to send, and
this mail happened to be on a more or less relevant topic.

Cheers,
Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TAPx::Parser: 'wait' and 'exit' status on Windows

2007-01-15 Thread demerphq

On 1/15/07, Ovid <[EMAIL PROTECTED]> wrote:

It looks like TAPx::Parser is now working on Windows (thanks Corion!),
even though some tests fail.  TAPx-Parser-0.50_05 may fix the
whitespace issue with the Windows tests, but the wait status still
appears to be broken (set in TAPx::Parser::Iterator).  After the handle
used with open3 is finished, $? appears to have the wait status on OS X
and other *nix operating systems, but not on Windows.  Any thoughts?

One person has told me that they don't think it's applicable under
Windows, but I don't think that's correct.


I wasnt aware that open3() was even reliable on win32.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Thoughts about test harness summary

2007-01-06 Thread demerphq

On 1/6/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:

* Ovid <[EMAIL PROTECTED]> [2007-01-05 20:15]:
> How about this instead?
>
> --
>
> Failed Test | Total | Fail | List of Failed | TODO Passed
> ------------+-------+------+----------------+------------
> t/bar.t     |  13   |  4   |  2, 6-8        |  3-4
> ------------+-------+------+----------------+------------
> t/foo.t     |  10   |  1   |  5             |
>
> Time:  0 wallclock secs ( 0.10 cusr +  0.01 csys =  0.11 CPU)
> Files=3.  Failed 2/3 test programs. 5/33 subtests failed.

Better, but it has flaws.

• The horizontal lines are too noisy.

• The column labels are too wide and conversely the lack of cell
  padding makes it unnecessarily uneven in blackness.

• The "Files=3" bit, while a minor issue, is just ugly.

• The failed tests and passing TODO tests columns can be
  empty, in which case they're a huge waste of space.

I think the following format reconciles the various points best
(note: this is the same data as in your example, except that it
assumes 4 test files in total):

Test file |  Failed | Bad tests
----------+---------+---------------------------------
t/bar.t   :  4 / 13 : Fail: 2, 6-8; Pass TODO: 3-4
t/foo.t   :  1 / 10 : Fail: 5
... (2)   :    / 10
Total (4) :  5 / 33 : Non-passing files: 2

Time: 0 wallclock secs (0.10 cusr + 0.01 csys = 0.11 CPU)

Note that the "Failed" header spans two columns.

The "... (#)" ellipsis line for passed test files is there to
supply the count of passed tests so the totals line is more
logical. It has the side benefit of calling out explicitly how
many files are omitted from the output; I like that. For polish,
if the count of ellipsised test files is exactly 1, you could
just show the one filename instead of ellipsising it.

This format is much denser but also more evenly spaced than
yours. There's more noise than in Andy's proposition simply
because it's a table but the non-variable bits are also more
easily scannable. (In a longer example this would *really* show.)
And though I picked meaningful characters as column separators to
visually lighten the "chart junk," it now occurs to me that this
also means the format is minimally legible when rendered in a
proportional font. It also seems easier to implement to me
than your format because there's only a single column with really
variable content width, and it's the trailing right column.


FWIW, of the variants I've seen posted so far, I'd vote for this one.

Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: testing module loading output and testing under the debugger

2006-12-19 Thread demerphq

On 12/19/06, Nadim Khemir <[EMAIL PROTECTED]> wrote:

>Personally I wouldn't get /too/ hung up about 100% test coverage - it
>can be taken too seriously. See Brian Marick's "How to Misuse Code
>Coverage"  for example.

Thanks for the article link. I've seen bad test code with 100% coverage but
I've never seen good test code with bad coverage. Also, I'd rather not have
98.7% coverage. It's nagging me and I'd rather spend five extra minutes to
get 100%.


Hmm, well, if you are like me then occasionally you will have branches
to handle "can't happen" cases in your code. Eliminating them makes
your code less robust, as at some future time the can't-happen just
might; but at the same time, since they are can't-happen cases, you
can't really test them or get coverage for them. Some people go to
inordinate lengths to trigger these, and I have to say I'm not
convinced that it's time well spent.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Frequency of test.pl

2006-11-01 Thread demerphq

On 11/1/06, Michael G Schwern <[EMAIL PROTECTED]> wrote:

This distribution uses 'test.pl' for its tests.  The output of 'test.pl' is
not parsed.  When "make test" is run by automated installers (for example,
the CPAN shell) your tests will always appear to have passed no matter what
the output of 'test.pl'.


Is that true when test.pl dies? Maybe it should say that if you must
use test.pl it should die on test failure.
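A test.pl along those lines might look like the sketch below (the two inline checks are placeholders for real tests; since nothing parses the output, the exit status is the only signal "make test" sees):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Emit TAP-ish output for humans, but communicate failure to
# "make test" the only way an unparsed test.pl can: by dying,
# which produces a non-zero exit status.
my @results = ( 1 + 1 == 2, lc('FOO') eq 'foo' );

my ( $n, $failed ) = ( 0, 0 );
for my $ok (@results) {
    ++$n;
    print $ok ? "ok $n\n" : "not ok $n\n";
    ++$failed unless $ok;
}
die "$failed of $n tests failed\n" if $failed;
```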

Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Redirecting STDERR/STDOUT on Win32 (was Re: Terrible diagnostic failure)

2006-09-23 Thread demerphq

On 9/23/06, Adam Kennedy <[EMAIL PROTECTED]> wrote:

> Yeah, but that's a can of worms in and of itself. Using backticks is
> simple, and requires no special stuff. If you don't mind blocking until
> the other process completes, I see no reason to use another, more
> complex approach.

I seem to recall Randal talking about exploding buffers or something,


Forgive me, but I get this ridiculous mental image of bits flying everywhere.

:-)

Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Merging STDOUT and STDERR

2006-09-21 Thread demerphq

On 9/21/06, Ovid <[EMAIL PROTECTED]> wrote:

- Original Message 
From: demerphq <[EMAIL PROTECTED]>

> There is no problem if you do the backtick method I mentioned.
>
> my $in_sync=`someprocess 2>&1`;
>
> and it should be in sync. I seem to recall it has to be, but I can't
> find the source of that claim. But I know that I've never seen any
> synchronization problems with this approach.

There is one serious problem with that:  my process blocks until that's done.  
Infinite streams won't work even though we've documented that they should.  
Even long-running tests appear to hang.  We also lose the ability to process 
test results as they come in.  We have to wait until all of the results come in 
and that may or may not happen.


Well then forget about synchronized two-channel communications. You
can't have everything in this world. :-)


Also, is that syntax portable across all operating systems which Perl runs on?  
I can't tell from the docs.


Hmm, all operating systems? I don't know. I'd guess that a conformant
Perl implementation for a given OS will do some magic with that type
of construct, but I'm not sure. I'm pretty sure it's used in the blead
test suite, though.


> So I guess it comes down to what's more important: a test counter being
> shown, or handling STDOUT/STDERR in a synchronized fashion?

More than just a test counter :)


Ok. Whatever. :-)

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Merging STDOUT and STDERR

2006-09-21 Thread demerphq

On 9/21/06, Ovid <[EMAIL PROTECTED]> wrote:

OK, I'm stuck.  I've read through IPC::Open3 and friends, I've looked through 
CPANPLUS, I've tried to solve this problem but no matter how hard I try, while 
I can fetch both STDOUT and STDERR, I cannot guarantee that they're in synch.  
That's my big problem and

So I'm going to head over to Perlmonks and ask there.  It seems like, unless 
the source of the data ensures that everything is going to the same filehandle, 
I can't reliably solve this problem.  Lots of folks have offered suggestions 
and for that I am very grateful, but what about the 'must be in synch' problem? 
 That's the one I'm struggling with.


There is no problem if you do the backtick method I mentioned.

my $in_sync=`someprocess 2>&1`;

and it should be in sync. I seem to recall it has to be, but I can't
find the source of that claim. But I know that I've never seen any
synchronization problems with this approach.
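A self-contained illustration of the approach (the child here is just an inline perl one-liner standing in for a real test script; note that the relative order of stdout vs. stderr lines in the merged stream still depends on the child's own buffering):

```perl
use strict;
use warnings;

# Run a child process and capture STDOUT and STDERR as one stream.
# The shell-level 2>&1 points the child's STDERR at the same pipe
# backticks read from, so both streams arrive on a single handle.
my $cmd    = qq{$^X -e "print qq(to stdout\\n); print STDERR qq(to stderr\\n)"};
my $merged = `$cmd 2>&1`;
my $status = $? >> 8;    # the child's exit status, as usual with backticks
```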

So I guess it comes down to what's more important: a test counter being
shown, or handling STDOUT/STDERR in a synchronized fashion?

cheers,
Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: A Suitable Iterator for TAPx::Parser

2006-09-20 Thread demerphq

On 9/20/06, Smylers <[EMAIL PROTECTED]> wrote:

Ovid writes:

> From: Shlomi Fish <[EMAIL PROTECTED]>
>
> > H you may wish to differentiate between #2 and #3 by saying
> > that a filename is passed as a plain string, while a string is
> > passed by taking a reference to it. That's what Template Toolkit and
> > other modules are doing.
>
> Good call.  That's a common enough idiom that I think it will work
> fine.

Yes, but make sure you do it right.

I've been caught out by passing something like a Path::Class::File
object, which stringifies just fine as a file path, to modules like this
if only they'd just treat it as a string -- but instead overjealous
checking spots that it's a reference and declines to stringify it.

If you get a reference to a blessed object and that object has
overloaded stringification then please just treat it as a string, not a
reference.


You of course are aware of what a pain it is to apply this logic?
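For the record, a sketch of what that logic ends up looking like (`source_kind` is a hypothetical helper for illustration, not actual TAPx::Parser code):

```perl
use strict;
use warnings;
use Scalar::Util qw(blessed);
use overload ();

# Classify a "source" argument: a plain string is a filename, a scalar
# ref is raw TAP text, and a blessed object with overloaded "" (such as
# a Path::Class::File) should be treated as a filename string too.
sub source_kind {
    my ($src) = @_;
    return 'filename'   if blessed($src) && overload::Method( $src, '""' );
    return 'raw_string' if ref $src eq 'SCALAR';
    return 'filename'   if !ref $src;
    return 'unknown';
}
```

The pain is exactly the blessed-plus-overload::Method dance up front: forget it, and stringifiable objects fall through to the reference branches.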

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Regexp::Common's tests (was Re: Bad TAP in Perl core?)

2006-09-20 Thread demerphq

On 9/20/06, A. Pagaltzis <[EMAIL PROTECTED]> wrote:

* Michael G Schwern <[EMAIL PROTECTED]> [2006-09-18 13:15]:
> In Regexp::Common's case the tests are almost entirely
> generated so fixing them would be easy.

No way. Abigail lovingly hand-crafted every last one of those
240,000 tests (or whatever insane number it was last time I
looked). :-)


I recently had to poke around in Abigail's tests and I have to say
there was some cool stuff in there. It's cryptic, but there are a few
neat ideas. One, which I borrowed for t/op/pat.t, is to output
the plan in a BEGIN block that sits at the bottom of the test file.
Presto: only one place to update when the number of tests in the file
changes. Kind of obvious in retrospect, but pretty new to me, and useful
for test files where you can't calculate your plan up front.
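The trick looks roughly like this in Test::More terms (a sketch, not Abigail's actual code):

```perl
use strict;
use warnings;
use Test::More;    # note: no plan given here at the top

ok( 1 + 1 == 2,       'addition works' );
is( lc('FOO'), 'foo', 'lc lowercases' );

# BEGIN runs at compile time, before any of the ok()s above execute,
# so the plan still comes out first even though it is written last -
# right next to the tests it has to count.
BEGIN { plan tests => 2 }
```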

Yves



--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: A short rant on the purpose of the CPAN install chain.

2006-09-19 Thread demerphq

On 9/19/06, David Cantrell <[EMAIL PROTECTED]> wrote:

Adrian Howard wrote:

> Yeah - it's something I've noticed over the last year or so. I'm
> talking to people less about "you should write tests", and much more
> about "you should write /good/ tests".

What do people think are *good* tests?

My modules mostly have *comprehensive* tests, but that doesn't make them
good.  In particular, my tests are largely uncommented, depend on
previous tests working, and are generally not laid out particularly
clearly.  So my attempt to make my tests good will mostly consist of
applying the same coding standards to the test suites as I do to the
rest of the code.

Any tips on what - other than comprehensiveness, clarity and
maintainability - I should aim for specifically in test suites would be
greatly appreciated.


I think that an important tip for producing good tests is to check
carefully that you haven't established implicit dependencies on a
specific operating environment in your tests.

So for instance, if you have a routine that you expect to return a
specific path, consider carefully how you can insulate your test from
platform-specific representations of the path. Hardcoding it as
"/foo/bar/baz" is going to cause problems on many platforms, Win32,
Mac and VMS being the most obvious.
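For the path case specifically, the usual insulation is to build the expected value with File::Spec instead of hardcoding a Unix-style string (the `config_dir` call in the comment is imaginary; the point is how the expectation is built):

```perl
use strict;
use warnings;
use File::Spec::Functions qw(catdir);

# Rather than:  is( config_dir(), '/foo/bar/baz' );   # *nix only
# build the expectation portably, so it holds on Win32, VMS, etc.
my $expected = catdir( 'foo', 'bar', 'baz' );
# e.g. "foo/bar/baz" on *nix, "foo\bar\baz" on Win32
```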

Examples of this type of stuff abound, as David Golden said earlier.

cheers,
Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Redirecting STDERR/STDOUT on Win32 (was Re: Terrible diagnostic failure)

2006-09-18 Thread demerphq

On 9/18/06, David Golden <[EMAIL PROTECTED]> wrote:

demerphq wrote:
> On 9/18/06, Ovid <[EMAIL PROTECTED]> wrote:
>> I've gotten a report that the open command fails on Windows.  Not a
>> surprise, now that I think about it.  However, I don't know of any
>> portable way of forcing STDERR to STDOUT (and I don't have a Windows
>> box handy).  This means that my 2000+ TAPx::Parser tests are in
>> trouble.  If Test::Builder accepted an environment variable which
>> allowed me to override this, I might have a way out.  So far removing
>> the 2>&1 seems to make my tests pass on a Linux box, but that strikes
>> me as bizarre as I thought STDERR wouldn't get read that way.  What
>> the heck am I misunderstanding?
>
> The easiest way I know of to execute a process in win32 and get both
> the stderr and stdout back is to use backticks.
>
> my $res=`$cmd 2>&1`;

I found that the suggested code for saving and restoring STDOUT and
STDERR given in "perldoc -f open" seems to work OK.  This is essentially
what IPC::Run3 is doing -- capturing to an external file and then
reading it back in and making it available.


Yeah, but that's a can of worms in and of itself. Using backticks is
simple, and requires no special stuff. If you don't mind blocking until
the other process completes, I see no reason to use another, more
complex approach.

yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Terrible diagnostic failure

2006-09-18 Thread demerphq

On 9/18/06, Ovid <[EMAIL PROTECTED]> wrote:

- Original Message 
From: Michael G Schwern <[EMAIL PROTECTED]>

> > What about an optional environment variable
> > which forcess *all* output to STDOUT  or STDERR
> > but, if not present, leaves things as is?
>
> Did anyone think to try it?
>
> $ cat ~/tmp/stdout.t
> #!/usr/bin/perl -w
>
> use Test::More tests => 1;
>
> my $tb = Test::More->builder;
>
> $tb->failure_output( $tb->output );
>
> is 23, 42;
>
>
> $ perl -MTest::Harness -wle 'runtests @ARGV' ~/tmp/stdout.t
> /Users/schwern/tmp/stdout...dubious
> Test returned status 1 (wstat 256, 0x100)
> DIED. FAILED test 1
> Failed 1/1 tests, 0.00% okay
> Failed Test                 Stat Wstat Total Fail  Failed  List of Failed
> --------------------------------------------------------------------------
> /Users/schwern/tmp/stdout.t    1   256     1    1 100.00%  1
> Failed 1/1 test scripts, 0.00% okay. 1/1 subtests failed, 0.00% okay.
>
> Test::Harness throws out all non-TAP stuff going to STDOUT.
> This includes comments.  So if Test::Builder started sending
> its diagnostics to STDOUT they'd disappear into the ether.

I have a bit of a problem, I think.  It could simply be a matter of 
misunderstanding how things work, but I have the following bit of code in 
TAPx::Parser::Source::Perl:

my $sym = gensym;
if ( open $sym, "$command 2>&1 |" ) {
return TAPx::Parser::Iterator->new($sym);
}
else {
$self->exit($? >> 8);
$self->error("Could not execute ($command): $!");
warn $self->error;
return;
}

I've gotten a report that the open command fails on Windows.  Not a surprise, now that 
I think about it.  However, I don't know of any portable way of forcing STDERR to 
STDOUT (and I don't have a Windows box handy).  This means that my 2000+ TAPx::Parser 
tests are in trouble.  If Test::Builder accepted an environment variable which allowed 
me to override this, I might have a way out.  So far removing the 2>&1 seems to 
make my tests pass on a Linux box, but that strikes me as bizarre as I thought STDERR 
wouldn't get read that way.  What the heck am I misunderstanding?


The easiest way I know of to execute a process in win32 and get both
the stderr and stdout back is to use backticks.

my $res=`$cmd 2>&1`;

I guess you would have to wrap the result in something so that you get
an iterator over it, but it does work as you can see below.

D:\dev\cpan\re-0.0601>perl -e"my $r=`cl`; print qq(\n); print $r"
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.00.9466 for 80x86
Copyright (C) Microsoft Corporation 1984-2001. All rights reserved.


usage: cl [ option... ] filename... [ /link linkoption... ]

D:\dev\cpan\re-0.0601>perl -e"my $r=`cl 2>&1`; print qq(\n); print $r"

Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.00.9466 for 80x86
Copyright (C) Microsoft Corporation 1984-2001. All rights reserved.

usage: cl [ option... ] filename... [ /link linkoption... ]

D:\dev\cpan\re-0.0601>

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Breaking compatability with Test::Harness and friends?

2006-09-16 Thread demerphq

On 9/16/06, Ovid <[EMAIL PROTECTED]> wrote:

The following line is giving me pause:

  ok 9 Elegy 9B   # TOdO

That's an 'unexpectedly succeeded' test ('bonus', in the Test::Harness world).

Right now, if that read 'not ok # TODO', TAPx::Parser would have this:

  passed         true
  actual_passed  false
  todo_failed    false

But since it reads 'ok', I've reversed the sense:

  passed         false
  actual_passed  true
  todo_failed    true

In this case, Test::Harness and friends report that 'ok 9 # todo' is passing, 
not failing, but I'm reporting the opposite result.  I think my behavior is 
more correct because I'm trying to write things so that someone who
writes a bad harness will still see what's going on.  For example, let's say
someone only wants to see failing tests:

  # the 'source' key is new.  You no longer have to manually create a stream
  my $parser = TAPx::Parser->new( { source => $test_file } );
  while ( my $result = $parser->next ) {
print $result->as_string if ! $parser->passed;
  }

With that, they'll never notice that 'ok # TODO' has unexpectedly succeeded if I 
adopt the current behavior of Test::Harness unless they explicitly remember to 
check the $parser->todo_failed method.

I propose that 'ok # TODO' tests be reported as failures.  I think it would be 
good if Test::Harness also adopted this strategy because right now, if you see 
that tests unexpectedly succeeded, you don't know which tests they are and you 
have to try and grep through the TAP output manually.

Thoughts?  Is this going to break a lot of stuff (I suspect it might).


I guess you missed the huge flame-ish thread where I brought this up
before. I basically said exactly what you said above. While the
consensus seemed to be that I was wrong, there was one modest positive
result: a patch was applied to Test::Harness in blead that makes
passing TODO tests show up listed, similar to how failures are listed,
which at least resolves the problem of finding the test later on.

It's actually very convenient, as there are 6 TODO tests that have been
passing for about a year in blead...

Yves
PS: I derive a certain satisfaction from hearing you suggest exactly the
same thing I was castigated for suggesting before. At least somebody
agrees with me. :-)


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Too many tests

2006-09-14 Thread demerphq

On 9/14/06, Ovid <[EMAIL PROTECTED]> wrote:

Here we have a test where the plan is 1..3 but we've run 7 tests.

  TAPx-Parser $ prove -v t/sample-tests/too_many
  t/sample-tests/too_many...1..3
  ok 1
  ok 2
  ok 3
  ok 4
  ok 5
  ok 6
  ok 7
  dubious
  Test returned status 4 (wstat 1024, 0x400)
  DIED. FAILED tests 4-7
  Failed 4/3 tests, -33.33% okay
  Failed Test              Stat Wstat Total Fail  List of Failed
  --------------------------------------------------------------
  t/sample-tests/too_many     4  1024     3    4  4-7
  Failed 1/1 test scripts. -4/3 subtests failed.
  Files=1, Tests=3,  0 wallclock secs ( 0.01 cusr +  0.01 csys =  0.02 CPU)
  Failed 1/1 test programs. -4/3 subtests failed.

The last three tests have passed, but Test::Harness says they've failed.  My 
TAPx::Parser reports that they've passed and the only real way to know if there's 
a problem is to test the $parser->good_plan method.  I've added this as a parse 
error, but why are the passing tests listed as failing?


Well, I don't know that I can say it authoritatively, but treating a
passing test you have specifically said wasn't going to occur as a fail
seems like reasonable behaviour to me.

How else are you going to deal with "you've run more tests than you
said you were going to"? If you accept the results and assume the count
is wrong, how do you know the program didn't silently die part way
through, and that you are in fact dealing with a catastrophic failure
in the middle of an even larger run?

But a nice message like "you ran 128 tests, but you said you were
going to run 123 tests, probably you should change your test count"
would be useful. (And if I recall, Test::Harness produces one?)

Cheers,
Yves



--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: TidyView - preview your perltidy options

2006-09-14 Thread demerphq

On 9/15/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Well, I'm hoping for feedback from this maillist, and when that settles down, 
I'm debating whether to send it to CPAN then announce on perlmonks et al, or to
announce on perlmonks whilst still on sourceforge, and after feedback from
perlmonks, post it to CPAN - I already have a PAUSE id etc.

So I'm going for a little more feedback and stability before posting a release 
to CPAN - maybe I'm being too precious, I don't know.

Leif

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, 14 September 2006 10:45 PM
To: Leif Eriksen
Cc: perl-qa@perl.org
Subject: Re: TidyView - preview your perltidy options

And the second obvious question, while I'm thinking about it...

When will we be able to install this from CPAN? :)

Adam K

[EMAIL PROTECTED] wrote:
> Hi all,
> I have release a pet project on Sourceforge called TidyView, at
> https://sourceforge.net/projects/tidyview/
> 
>
> Basically it is a Tk GUI to help preview the effect of the
> plethora of options provided by Perltidy.
>
> If you don't know what Perltidy is, it's a pretty-printer for
> Perl code. You can use it to have all your code consistently indented
> and spaced, automagically.
>
> Perltidy is recommended by TheDamian at page 34 of PBP, and he
> provides a sample perltidy config file. But if you wish to tweak this,
> it is easy to get lost in the hundreds of option choices available.
>
> You can preview the effect of your selected options on your code,
> and if you like them, have a nicely formatted perltidyrc file saved
> for you. It can also parse and present your existing perltidyrc
> files, and allow you to make incremental adjustments to tighten up
> your desired automatic code formatting.
>
> Obviously it requires Tk and Perltidy, both available from CPAN.
> It supports some pretty old Tk versions, but requires a very recent
> Perltidy. Additionally, whilst this is in the early release phase,
> version and Log::Log4perl are required.
>
> I have been working with the author of Perltidy over the past few
> months, and he has been using TidyView to debug and improve Perltidy
> itself, which is just super.
>
> So if you wish to have a consistent code style for all your (and
> your dev team's) Perl code, you can keep tweaking till you get it
> looking just right.
>
> It's licensed under the same terms as Perl itself, and I am very,
> VERY eager to receive feedback, complaints, abuse, suggestions and
> patches. There is a list of things I'd like to add in the TODO file;
> colourised diffs between what your code originally looked like and how
> Perltidy formatted it would be a great addition, but I haven't a clue
> how to do it.
>
> Note, there are some people who have expressed the concern that
> Perltidy can inadvertently change the parse tree of the code it
> reformats - that is, change the meaning of your code.
> However, the developer of Perltidy says no one has ever reported that
> to him in the many years he's been developing Perltidy, though he's
> sure someone (TheDamian would have to be at the top of that
> list) could write something sufficiently freaky to do that - but they
> haven't yet. But if there is enough demand for it, I can add in support
> for PPI::Signature to make sure that doesn't ever happen without
> TidyView noticing. I haven't done it yet as at the moment it solves a
> problem that doesn't exist, and it introduces another dependency.
> Patches to flexibly support PPI::Signature are welcome.
>
> The purpose of announcing this on PerlQA is that coding standards
> are often lumped into the 'QA'-bucket, so the QA mail-list seems most
> appropriate. I hope to announce this more widely (perl monks, CPAN
> maybe) in a few weeks.


Uploading to CPAN and voting in a pseudo democratic state have much in
common. Release early, release often.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: post-YAPC::Europe CPANTS news

2006-09-08 Thread demerphq

On 9/8/06, Adam Kennedy <[EMAIL PROTECTED]> wrote:


On another subject that came up today on one of my modules
(specifically the new Test::Object dependency of PPI), it seems like it
could be a bad idea to have explicit dependencies on the latest version
of a dual-life module.

One of the linux distro guys pinged me about Test::Object needing the
very latest (CPAN-only) version of Test::Builder, because it means they
can't package it properly for the distros without upgrading the main
Perl package. Some packaging systems can't handle having the same file
in more than one distro it seems.


I think that this is a general problem that probably requires some
pushback towards the packaging system maintainers.

The concept that a package can contain numerous parts, some of which
may be superseded by later-released standalone parts, shouldn't be a
surprise to anyone. It happens all the time in every aspect of life.

This came up recently for me with regard to EU::MakeMaker and
EU::Install. The module EU::Install is contained in both the EUMM and
EUI packages. In principle the EUMM package shouldn't install its EU::I
unless the existing EU::I is older than the one it contains. OTOH, the
EUI package should always be used if one wants to upgrade it alone.

I personally don't think that this is a bizarre use case. EUMM needs
EUI around to do its thing, but they can be updated independently.
Such a chicken-and-egg kind of relationship probably isn't that
unusual. The only solution can't be that you have to bundle them
together or bundle them all independently.

On win32 this is an old problem that is mostly dealt with
transparently to the user (except when it's done badly, in which case
things can go horribly wrong). It's common for a package to bundle
various items, particularly .dlls, but not install them when it finds
that later versions are available from some other source.

BTW, I'm not saying that this is an easy problem to resolve, or that
I'm interested in rushing out and solving it myself. But it seems to me
to be one that needs solving. :-)

I mean, we are talking about tools here; can you imagine going into a
hardware store and finding out that you can't replace a tool from a
toolset without replacing the entire toolset? Or conversely, that nobody
offers a complete toolbox just because then they would be forbidden
from supplying the individual tools alone?

Anyway, sorry, minor rant. I'll shut up now.

:-)

Cheers,
Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Terrible diagnostic failure

2006-09-04 Thread demerphq

On 9/4/06, Ovid <[EMAIL PROTECTED]> wrote:

Once again we have an example of how using different output streams for regular 
and diagnostic information causes serious problems.  This is the worst I've 
seen.  In one method I'm testing I embedded the following:

use Data::Dumper;
::diag(Data::Dumper->Dump(
[$stripped, \@records],
[qw<$stripped *records> ]
));

And here's the output in the test:

  ok 14 - ... and splitting the sql into individual statements should succeed
  # $stripped = 'alter table test_colors add column foo varchar(200)';
  ok 15 - ... and splitting the sql should succeed
  # @records = (
  ok 16 - ... but the original SQL should remain unchanged
  #  'alter table test_colors add column foo varchar(200)'
  #);
  not ok 17 - We should be able to "split" a single statement

Ouch.  This is what I 'normally' see when I run this:

  ok 14 - ... and splitting the sql into individual statements should succeed
  ok 15 - ... and splitting the sql should succeed
  ok 16 - ... but the original SQL should remain unchanged
  # $stripped = 'alter table test_colors add column foo varchar(200)';
  # @records = (
  #  'alter table test_colors add column foo varchar(200)'
  #);


This looks like buffering issues to me. I see stuff like this all the
time when I run code through an editor. Perl tests to see if stdout
(and maybe stderr) is a terminal and automatically turns on
auto-flush when it is. But when I launch the script via my editor,
perl apparently decides it's not dealing with a terminal and so
block-buffers the filehandles, resulting in output much like you have
above. So I wonder if manually turning on autoflushing of all the
output handles would help.
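Turning autoflush on explicitly looks like this, so the ordering no longer depends on whether perl thinks it is talking to a terminal:

```perl
use strict;
use warnings;
use IO::Handle;    # gives filehandles an autoflush() method

# Flush after every print so test output and diagnostics interleave
# in the order they were produced, terminal or not.
STDOUT->autoflush(1);
STDERR->autoflush(1);    # usually unbuffered already, but explicit is safe
```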

Yves


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: use Tests; # ?

2006-07-19 Thread demerphq

On 7/20/06, chromatic <[EMAIL PROTECTED]> wrote:

On Wednesday 19 July 2006 15:17, demerphq wrote:

> On 7/19/06, chromatic <[EMAIL PROTECTED]> wrote:
> No. I didnt imply anything. I spelled it out quite clearly.

Fine, then you said "quite clearly" that it was broken.


The whole reason this thread started was that I suggested that test
descriptions be mandatory, as they were in my opinion the best way to
resolve this problem.

When the subject of wrong line numbers came up I was writing in the
context of replying to someone who felt that the line number reporting
of Test::Builder was sufficient to not require test descriptions. I
responded by pointing out that such information is not so useful, as it
is often wrong.

The fact that we have digressed into discussing the myriad ways that
one can get wrong line number reports and whether they are bugs in the
test file or in the builder is a totally different discussion.

Now I consider "wrong" to be different from "broken" or "buggy". To me
broken and buggy mean that the reporting doesn't do what it's supposed
to do. But since I know that what it's supposed to do is report where a
given Test::Builder-based test routine was called, I don't consider it
broken when it does exactly that. I call it wrong because often
finding where the routine is called doesn't help you find the code
being tested. And that's what started this whole thing off.


> I said that the code to find the line number uses a heuristic that
> gets things wrong, and that I didn't see any way to improve the
> heuristic.

But there is one!  Give it the explicit knowledge that you, the writer of the
test suite, has.


Right. Give it a damn name so I can find it in an editor with its
built-in search function.


> And therefore that the line number information is an
> unreliable way to find what code actually failed the test, which in my
> experience is a problem properly solved by using test names.

If you don't give it a test name, you have the exact same problem, only worse,
because there's absolutely *no* information.   Test::Builder has a heuristic
for giving the location of failures and it works *most* of the time.
Test::Builder has no heuristic for giving test names in the absence of test
names, and thus it works *none* of the time.


Thus make test names mandatory, and the problem is solved.


> > > use Test::More tests => 3;
> > >
> > > sub my_ok {
> > > ok($_[0],$_[1]);
> > > }
> >
> > I don't know why you'd expect this to report the right line numbers; this
> > code really *is* broken.
>
> No, it's not broken; using subroutines is not broken.

No, using buggy, incomplete subroutines is.  At least complaining that an
incomplete, buggy subroutine does the wrong thing is silly.  Of course it
does the wrong thing.  You wrote it to do the wrong thing.


Show me where this is documented in Test::More and we can discuss why
I didn't follow that documentation. Until it's there you have no right
to call my usage buggy.


Would you complain that my favorite chocolate chip cookie recipe is awful if
you deliberately left out the sugar?


If the sugar wasn't in the recipe, but was rather documented somewhere
else, perhaps under "cookie builder", then yes I would.


> And, even it were
> broken, its a common mistake. The code is basically a stripped down
> example of stuff i see in test files A LOT.

Yes, it's common.  That doesn't mean it's not broken.  I'm not in the habit of
voting whether to redefine wrong behavior as right because it's common.


If it's so easy to use wrongly then it's probably deficient.


> And even if i were to concede that my_ok() is broken (which I don't)
> there is still Fergals example of a data driven tests in a loop:
>
> my @tests=();
>
> foreach my $test (@tests) {
>   is($test->[0],$test->[1]);
> }
>
> how do you propose to get a useful line number out of that? Are you
> going to say that its broken as well?

Not at all.  The line number is correct: there are no additional call frames
between the call to the underlying library and the call point of the test.
The line number reported there is the correct line number for the important
point of calling the test.

I didn't say it was useful.  I said it was correct.


Correct by some restricted definition. It correctly tells you where
the test procedure was called. It often tells you the wrong thing
about what failed in the test file.


> Now if  it was written
>
> foreach my $test (@tests) {
>   is($test->[0],$test->[1],$test->[2]);
> }
>
Then I can find the test _easily_.

For a value of "easily" defined as "by looking up the definition of the data
structure in @tests and then reading further in the code until I find the

Re: use Tests; # ?

2006-07-19 Thread demerphq

On 7/19/06, Fergal Daly <[EMAIL PROTECTED]> wrote:

On 19/07/06, chromatic <[EMAIL PROTECTED]> wrote:
> On Wednesday 19 July 2006 06:03, demerphq wrote:
>
> > Excuse me? Where did I say the code was "broken"?
>
> Wasn't that the implication when you said you've seen misleading line numbers
> many times?
>
> > use Test::More tests => 3;
> >
> > sub my_ok {
> > ok($_[0],$_[1]);
> > }
>
> I don't know why you'd expect this to report the right line numbers; this code
> really *is* broken.

What's wrong with that code? It doesn't do anything useful right now
but you can't argue that a system that stops being useful when you use
subroutines is good.


Hear, hear!


If Test::Builder gave a stack trace rather than a single line number
then this wouldn't be broken,


Yes, that would improve things for cases like my_ok(). And I like the
idea you posted elsewhere about showing a stack trace of only the
frames above where it currently reports.
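Such a trace could be sketched with Perl's built-in caller(); this is only an illustration, and trace_above() is a hypothetical helper, not anything that exists in Test::Builder:

```perl
use strict;
use warnings;

# Hypothetical helper: collect every call frame above the current sub,
# so a failure report could show the whole call chain instead of the
# single line number Test::Builder guesses at today.
sub trace_above {
    my @frames;
    my $depth = 1;    # depth 0 would be the call to trace_above itself
    while ( my ( $pkg, $file, $line, $sub ) = caller($depth++) ) {
        push @frames, "$sub called at $file line $line";
    }
    return @frames;
}

sub my_ok {           # stand-in for a test helper like the one above
    return trace_above();
}

print "$_\n" for my_ok();
```

Run from a test helper, each frame names the enclosing subroutine and the file and line where it was invoked, which is exactly the information a misleading single line number throws away.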

However, I don't see how it would help in the case of data-driven
tests in a loop. For those the best policy IMO is still to provide a
description, which is why I wanted it mandatory, or at the least
harder to avoid than it currently is.
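For illustration, a data-driven loop where every case carries its own description might look like this (the test cases themselves are made up):

```perl
use strict;
use warnings;
use Test::More;

# Each case is [ got, expected, description ]. The third field makes a
# failure identifiable even though the reported line number can only
# ever point at the loop body.
my @tests = (
    [ 1 + 1,     2,     'basic addition'    ],
    [ lc('FOO'), 'foo', 'lc() lowercases'   ],
    [ 'a' x 3,   'aaa', 'string repetition' ],
);

is( $_->[0], $_->[1], $_->[2] ) for @tests;

done_testing();
```

A failing case then prints its description in the TAP output, so no line number needs to be chased at all.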

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: use Tests; # ?

2006-07-19 Thread demerphq

On 7/19/06, chromatic <[EMAIL PROTECTED]> wrote:

On Wednesday 19 July 2006 06:03, demerphq wrote:

> Excuse me? Where did I say the code was "broken"?

Wasn't that the implication when you said you've seen misleading line numbers
many times?


No. I didn't imply anything. I spelled it out quite clearly.

I said that the code to find the line number uses a heuristic that
gets things wrong, and that I didn't see any way to improve the
heuristic. Therefore the line number is an unreliable way to find
what code actually failed the test, which in my experience is a
problem properly solved by using test names.


> use Test::More tests => 3;
>
> sub my_ok {
> ok($_[0],$_[1]);
> }

I don't know why you'd expect this to report the right line numbers; this code
really *is* broken.


No, it's not broken; using subroutines is not broken. And even if it
were broken, it's a common mistake. The code is basically a
stripped-down example of stuff I see in test files A LOT.

And even if I were to concede that my_ok() is broken (which I don't)
there is still Fergal's example of data-driven tests in a loop:

my @tests=();

foreach my $test (@tests) {
 is($test->[0],$test->[1]);
}

how do you propose to get a useful line number out of that? Are you
going to say that it's broken as well?

Now if it were written

foreach my $test (@tests) {
 is($test->[0],$test->[1],$test->[2]);
}

Then I can find the test _easily_. No heuristics, no BS with poorly
documented vars in Test::Builder. And speaking of
$Test::Builder::Level, let me ask a question: how many people are
going to read Test::Builder to get the line numbers from tests in
Test::More right? Experience shows not very many[1]. Heck, the
variable isn't even mentioned in Test::More. And Test::Builder isn't
mentioned in Test::Simple at all (presumably because it doesn't use
it, in which case $Test::Builder::Level isn't going to help.)

[1] I might wager that a lot of test authors don't even notice the
problem. I say this because in my own experience of writing my own
test suites the line number is something I don't even look at. I know
what test failed because I wrote the test file and the code it's
testing, and I can find the relevant stuff in an instant. Finding the
code responsible for a failing test is something that IMO is done more
often by module consumers who for one reason or another are seeing
things go wrong. The author, on the other hand, is unlikely to have
seen the tests fail, and therefore might not even know the line
numbers are wrong.


--
perl -Mre=debug -e "/just|another|perl|hacker/"


Re: Real Kwalitee, or please stop spending time thinking about CPANTS

2006-07-19 Thread demerphq

On 7/19/06, David Golden <[EMAIL PROTECTED]> wrote:

Andy Lester wrote:
>
> On Jul 19, 2006, at 4:13 AM, David Golden wrote:
>
>> * Laugh at code that gets its slashes wrong
>>   (*cough* Test::Pod *cough*)
>
> I thought I'd fixed your slashie problems long ago.  No?  Please lean on
> me if not.

http://rt.cpan.org/Public/Bug/Display.html?id=17892

It's here, with a one-line patch from Yves included.  It looks like the
switch away from File::Find did in one of the assumptions in the test file.


Heh. I completely forgot that one. :-)

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"

