Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Michael G Schwern
On 1/11/16 4:53 PM, Chad Granum wrote:
> Test::More/Test::Builder work VERY hard to ensure nothing inside them alters 
> $! or $@. This is for
> things like this:
> 
> ok(do_something_scary());
> is($!, 0, "expected $! val");
> is($@, undef, '$@ not changed');
> 
> Without Test::More/Builder being careful to support this, the second 2 
> assertions could fail because
> something inside ok() modifies $! or $@.

If your test functions modify the really common parts of your global 
environment then it becomes
very difficult to test them... and testing error reporting is a very common 
thing you'd want to do!

Kent already pointed out why changing this makes testing $! and $@ very, very 
awkward.  Let's see
that again.

my ( $error, $exception );
ok(do {
  local $@;
  local $!;
  my $ret = do_something_scary();
  ( $error, $exception ) = ($!, $@);
  $ret
});
is($error, 0, "expected $! val");
is($exception, undef, '$@ not changed');

Gross.  I don't know how many times I've written code like this.  I hate it.  I 
always encapsulate
it somehow.  And when a library doesn't have good global discipline it makes it 
even harder for me
to have good global discipline.
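
For illustration, here is a minimal sketch of that kind of encapsulation; the
helper name capture_errors() and the trivial stub are invented for this example,
not taken from any existing module.

use Test::More;

# Stand-in for the real do_something_scary() from the example above.
sub do_something_scary { return 1 }

# Run a block with $! and $@ localized, then hand back the return value
# plus whatever $! and $@ held immediately after the call.
sub capture_errors {
    my ($code) = @_;
    local $@;
    local $!;
    my $ret = $code->();
    return ($ret, $! + 0, $@);
}

my ($ret, $errno, $exception) = capture_errors(sub { do_something_scary() });
ok($ret);
is($errno, 0, 'expected $! value');
is($exception, undef, '$@ not changed');
done_testing;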

We tell the users that $! and $@ are only safe per function call.  Then we 
encourage them all over
the docs and interface to pass function return values directly into test 
functions.  Then we tell
them they should be testing their error cases... but to do so safely requires 
gross scaffolding.
That's not fair to the user.  The result will be that people won't test their 
error conditions,
library quality will drop, and you'll waste a lot of time on a bug that should 
have been tested.

The argument that $! is only reliable per function call is lowest common 
denominator
thinking.  One of the fundamental design principles of Test::Builder was that 
it had to be the
GREATEST common denominator!

I don't write libraries to the lowest common denominator.  I write libraries 
that raise the bar and
encourage others to do so as well.  Perl 5's error handling is bad, but that 
doesn't mean my
library's error handling also has to be bad.  As library authors we've been 
spending decades working
around Perl 5's bad parts.  We know how to do it better.  This is 
extraordinarily important for a
language's testing library: if you can't test it (or it's gross and annoying) 
then it won't happen.
If you want to see better global discipline in Perl libraries, then the 
testing library has to
start first, because everything will use it.

Test libraries are first and foremost about the user.  They encourage the user 
to do more and better
testing!  Implementation effort is a much, much lesser concern.  And all the 
positive reasons below
not to do it are implementation reasons.  There are no positives for the Test2 
user.

> *Reasons for dropping this promise from Test2:*
> 
>   * Simplifies code
>   * $! and $@ are altered by many, many things; adding { local ... } around 
> all of them is a pain
>   * Sometimes internals you don't expect to set $! will
>   * Perl itself documents that you cannot depend on $! and $@ being unchanged 
> past the immediate
> line after you set it.
>   * Test::Builder will continue to protect $! and $@, so nothing will break
>   * Test2 is new, nothing depends on it preserving these

The one below is the only reason that talks about making Test2 better for the 
user.

> *Reasons not to drop it:*
> 
>   * It is helpful to people who might not realize $! and $@ are often altered 
> unexpectedly.
> 
> I am asking for input in case I missed any reasons/arguments for either side. 
> In either case
> backwards compatibility will be preserved.
> 
> -Chad


Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Sawyer X
[Top-posting]

Chad, I think I understand what you mean now. You were referring to
whether the underpinnings (Test2) should take care of it or
whether the chrome (testing functions) around that should do so. Yes?

If so, I think you should probably clarify what Test2 *does* do. It
doesn't provide the functions - alright. What *does* it provide then?

Also, since the discussion was "Should I - the testing functions' user
- write code around my testing functions to accommodate the
testing framework not preserving `$!` and `$@`?" instead of "Should
the testing functions take care of it rather than the gory details
underneath them?", it might be useful (beyond explaining the
difference between the testing functions and the gory details, as
recommended in the previous paragraph) to explain what such an
implementation would look like, or how it would differ, in the
testing functions vs. in the gory details.

I think then it would be simple to understand your intent and to help
you with some useful commentary.

If I didn't understand what you meant, then... this might need
additional clarification.

S. :)


On Tue, Jan 12, 2016 at 8:56 PM, Chad Granum  wrote:
> Thanks for the input.
>
> My question was not very well formed.
>
> What I should have asked is this:
>
> Preserving $! and $@ is a valuable behavior, and one I won't get rid of.
> However, I am questioning the decision to make the test library jump through
> hoops to preserve them, as opposed to having the test functions be
> responsible for doing it.
>
> Now then, having the test library do it means the tools usually do not have
> to worry about it (though it would be easy for them to still damage $! or
> $@). On the other hand, the protection in-library gets very complicated, and
> hard to maintain, and will not solve all cases.
>
> It was not a question of if we should do it, but how we should do it. Test2
> is just the back-end, but asking if Test2 should do it sounded like I was
> asking if we should do it at all, which was not the intent; the front end
> was always gonna do it.
>
> That said, the discussion served as rubber ducking and I was able to find a
> solution that lets the library do the heavy lifting, but in a single spot
> that avoids the complexity and unmaintainability of the way it has done it
> so far. So the discussion is no longer necessary: the library will continue
> to do the protection, and it will do it in a better way. New tools will not
> need to start concerning themselves with this. Old tools never had to worry
> as I was not going to break backcompat.
>
> -Chad

Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Kent Fredric
On 13 January 2016 at 10:48, Sawyer X  wrote:
>
> If so, I think you should probably clarify what Test2 *does* do. It
> doesn't provide the functions - alright. What *does* it provide then?


Oh, and a thought: It may help to consider what testing /testing tools/
looks like here, and whether the tools themselves need to trap $! and
$@ and test for their changes.

It's probably immaterial and no different from the "handle it at the
chrome layer" approach, but it may have implications in internals that make
things more difficult if they're suppressed/not-suppressed.
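
For concreteness, a hypothetical sketch of the kind of check such a
tool-level test might make (it assumes the test functions preserve the
variables, as discussed above; nothing here is from an existing test suite):

use Test::More;

# Pretend a prior syscall set errno and a prior eval left something in $@.
local $! = 7;
local $@ = "boom\n";
my $errno_before = $! + 0;
my $err_before   = $@;

ok(1, "some assertion");    # the tool whose behaviour is under test

is($! + 0, $errno_before, '$! preserved across ok()');
is($@,     $err_before,   '$@ preserved across ok()');
done_testing;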

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Sawyer X
On Tue, Jan 12, 2016 at 10:55 PM, Kent Fredric  wrote:
> On 13 January 2016 at 10:48, Sawyer X  wrote:
>>
>> If so, I think you should probably clarify what Test2 *does* do. It
>> doesn't provide the functions - alright. What *does* it provide then?
>
>
> Oh, and a thought: It may help to consider what testing /testing tools/
> looks like here, and whether the tools themselves need to trap $! and
> $@ and test for their changes.
>
> It's probably immaterial and no different from the "handle it at the
> chrome layer" approach, but it may have implications in internals that make
> things more difficult if they're suppressed/not-suppressed.

Good point!


Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Sawyer X
[Top-posted]

Chad, thank you for the detailed response. I think I now understand
the scope of the problem and your solutions.

I think it makes sense to put this in the guts inside the construction
of a new context (or retrieval of current context) and in the release
of that context. Kent, am I right to believe this also addresses the
concerns you raised regarding testing of testing modules?

I'm sorry to say I'm quite ignorant [also] when it comes to the
testing underpinnings, so I'm not sure if there are additional
concerns this does not address, but otherwise it sounds very
reasonable to me.

Thanks again for taking the time to clarify in detail. :)
(It might be useful to start working on a document for the internals
for anyone who wants to hack on it. This should at least be in the
commit messages so it could be tracked down.)

S.



On Tue, Jan 12, 2016 at 11:26 PM, Chad Granum  wrote:
> Yes, your understanding appears correct. And I can make it more clear.
>
> This is a very simple test tool in Test::Builder (the chrome):
>
>> my $TB = Test::Builder->new;  # Call to internals/guts (singleton)
>>
>> sub ok($;$) {
>>     my ($bool, $name) = @_;
>>     $TB->ok($bool, $name);    # Call to internals/guts
>>     return $bool;
>> }
>
>
> Here it is again using Test2 instead of Test::Builder (the chrome):
>
>> sub ok($;$) {
>>     my ($bool, $name) = @_;
>>     my $ctx = context();       # Call to internals/guts (A)
>>     $ctx->ok($bool, $name);    # another one (B)
>>     $ctx->release;             # another one (C)
>>     return $bool;
>> }
>
>
> The lines marked with 'Call to internals/guts' kick off a series of things
> that read/write from filehandles, possibly open them, eval code/catch
> exceptions, and any number of other things that can squash $! and $@. It
> should be noted that 'ok' is not the only method ever called on either
> Test::Builder or $ctx; this is just a very simple illustration.
>
> Now for starters, Test::Builder uses a singleton, so it can do all its
> initialization at load time, which allows it to leave several things
> unprotected. The singleton was bad, so Test2 does not use one, which means
> it has to be more protective of $! and $@ in more places to accomplish the
> same end goal.
>
> History, what Test::Builder does: It localizes $! and $@ in an eval wrapper
> called _try() that it uses to wrap things it expects could squash $! and $@.
> It also localizes $SIG{__DIE__} for various reasons. In some places where
> $SIG{__DIE__} should not be localized it will instead use local
> independently of _try(). There is also extra logic for subtests to ensure $?
> from the end of the subtest is carried-over into the regular testing outside
> of the subtest. Some places also need to be careful of $? because they run
> in an end block where squashing $? unintentionally is bad. (Yeah, $? is
> involved in all this as well, but much less so)
>
> This results in a lot of places where things are localized, and several
> places that run through an eval. People simply looking at the code may
> overlook these things, and not know that the protection is happening. The
> first time a new dev will notice it is when tests start breaking because
> they added an open/close/eval/etc in the wrong place. Thankfully there are
> some tests for this, but not enough, as I have found downstream things
> (PerlIO::via::Timeout as an example) that break when $! is squashed in a way
> Test::Builder never tests for.
>
> Test::Builder does not localize $! and $@ in all its public methods.
> Realistically it cannot, for 2 reasons:
>
>   * Performance hit
>   * Can mask real exceptions being thrown that are not caught by Test::Builder
> itself.
>
> In short, this is a significant maintenance burden, with insufficient
> testing, and no catch-all solution.
>
> --
>
> Up until this email thread, Test2 was doing the same thing as Test::Builder.
> The problem is that Test2 does lots of other things differently for good
> reasons; unfortunately, that provides a lot more opportunity to squash $! and
> $@. As with Test::Builder, it is not reasonable to localize $! and $@ at every
> entry point.
>
> I decided to start this thread after a few downstream breakages were detected
> due to $! being squashed. One because perl changes $! when you clone a
> filehandle, even when no errors happen. Another because of a shared memory
> read that was added in an optional concurrency driver used by
> Test::SharedFork. I realized this would be an eternal whack-a-mole for all
> future maintainers of both projects, and one that is hard to write
> sufficient testing for.
>
> The solution:
> Go back to my second example. Notice there are 3 calls to the guts, marked
> (A), (B), and (C). (A) and (C) are universal to all Test2 based tools, and
> are also universally used in the Test::Builder dev releases when it calls
> out to Test2. No tool will function properly if it does not use both of
> those when it uses Test2.
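
As a purely illustrative sketch of that idea (invented package and field
names, not Test2's actual code): capture the error variables once at (A)
and restore them once at (C), so the guts in between no longer need their
own local() wrappers.

package My::Context;

# (A) Snapshot the error variables the moment a tool asks for a context.
sub context {
    my %snapshot = (
        errno     => $! + 0,    # numeric errno as the caller saw it
        eval_err  => $@,
        child_err => $?,        # $? is involved as well, as noted above
    );
    return bless \%snapshot, __PACKAGE__;
}

# (C) Put them back when the tool releases the context, no matter what
# the guts did in between (filehandles, evals, and so on).
sub release {
    my ($self) = @_;
    $! = $self->{errno};
    $@ = $self->{eval_err};
    $? = $self->{child_err};
    return;
}

package main;

sub ok ($;$) {
    my ($bool, $name) = @_;
    my $ctx = My::Context::context();    # (A)
    # ... guts that may squash $!, $@, or $? ...
    $ctx->release;                       # (C)
    return $bool;
}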