Again, thanks for the feedback.  Responses inline again.

-eric

On Fri, Feb 13, 2015 at 11:36 AM, Gustavo Niemeyer <gust...@niemeyer.net> wrote:
> On Fri, Feb 13, 2015 at 3:25 PM, Eric Snow <eric.s...@canonical.com> wrote:
>>> This is a "mock object" under some well known people's terminology [1].
>>
>> With all due respect to Fowler, the terminology in this space is
>> fairly muddled still. :)
>
> Sure, I'm happy to use any terminology, but I'd prefer to not make one
> up just now.

I've decided to just follow the definitions to which Fowler refers
(citing Meszaros).  That will be my small effort to help standardize
the terminology. :)  The name, then, for what I'm talking about is
"stub".

>
>>> The most problematic aspect of this approach is that tests are pretty
>>> much always very closely tied to the implementation, in a way that you
>>> suddenly cannot touch the implementation anymore without also fixing a
>>> vast number of tests to comply.
>>
>> Let's look at this from the context of "unit" (i.e. function
>> signature) testing.  By "implementation" do you mean the function
>> you are testing, or the low-level API the function is using, or
>> both?  If the low-level API, then it seems like the "real fake
>> object" you describe further on would help by moving at least part
>> of the test setup out of the test and down into the fake.  However,
>> aren't you then just as susceptible to changes in the fake, with
>> the same maintenance consequences?
>
> No, because the fake should behave as a normal type would, instead of
> expecting a very precisely constrained orchestration of calls into its
> interface. If we hand the implementation a fake value, it should be
> able to call that value as many times as it wants, with whatever
> parameters it wants, in whatever order it wants, and its behavior
> should be consistent with a realistic implementation. Again, see the
> dummy provider for a convenient example of that in practice.

Right, that became clear to me after I sent my message.  So it seems
to me that fakes should typically be written only by the maintainer
of the thing they fake, and they should leverage as much of the real
thing as possible.  Otherwise you have to write your own fake, try to
duplicate all the business logic of the real thing, and run the risk
of getting it wrong.  Either way, if you don't leverage the
production code, you run the risk of the fake drifting out of sync
with it.

So with a fake you still have to manage its state for at least some
tests (e.g. stick data into its DB) to satisfy each test's
preconditions.  That implies you need to know how to manage the
fake's state, which isn't free, particularly for a large or complex
faked system.  So I guess fake vs. stub is still a trade-off.
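
To make that concrete, here's a rough Go sketch (all names made up
for illustration, not Juju's actual API) of satisfying a test
precondition through a fake's state:

    package fake_test

    import "testing"

    // Machine stands in for whatever the real low-level API returns.
    type Machine struct {
        ID     string
        Status string
    }

    // fakeEnviron fakes a machine API, backed by in-memory state.
    type fakeEnviron struct {
        machines map[string]Machine
    }

    // AddMachine is the hook tests use to manage the fake's state.
    func (f *fakeEnviron) AddMachine(m Machine) {
        f.machines[m.ID] = m
    }

    // Machines behaves consistently with whatever state the fake
    // holds, however many times and in whatever order it's called.
    func (f *fakeEnviron) Machines() []Machine {
        var ms []Machine
        for _, m := range f.machines {
            ms = append(ms, m)
        }
        return ms
    }

    func TestListMachines(t *testing.T) {
        env := &fakeEnviron{machines: make(map[string]Machine)}
        // The precondition: one machine already exists.  Satisfying
        // it requires knowing how the fake's state is managed.
        env.AddMachine(Machine{ID: "0", Status: "started"})

        if n := len(env.Machines()); n != 1 {
            t.Fatalf("got %d machines, want 1", n)
        }
    }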

>
>> Ultimately I just don't see how you can avoid depending on low-level
>> details ("closely tied to the implementation") in your tests and still
>> have confidence that you are testing things rigorously.  I think the
>
> I could perceive that on your original email, and it's precisely why
> I'm worried and responding to this thread.

I'm realizing that we're talking about the same thing from two
different points of view.  Your objection is to testing a function's
code in contrast to validating its outputs in response to inputs.  If
that's the case then I think we're on the same page.  I've just been
describing a function's low-level dependencies in terms of its
implementation.  I agree that tests focused on the implementation
rather than the "contract" are fragile and misguided, though I admit
that I do it from time to time. :)

My point is just that the low-level API used by a function (however
the function gets access to it) is an input to the function, even if
often only an implicit one.  Furthermore, the state backing that
low-level API is necessarily part of that input.  Code should be
written with that in mind, and so should tests.
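
A minimal sketch of what I mean (InstanceAPI and Deploy are made-up
names, not real Juju code):

    package deploy

    // InstanceAPI stands in for the low-level API a function uses.
    type InstanceAPI interface {
        StartInstance(name string) error
    }

    // Deploy's behavior depends not just on name but on the api
    // value and on the state backing it.  Both are inputs, even when
    // the dependency is reached through a package global rather than
    // a parameter like this one.
    func Deploy(api InstanceAPI, name string) error {
        return api.StartInstance(name)
    }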

Using a fake for that input means you don't have to encode the
low-level business logic in each test (just any setup of the fake's
state).  You can be confident that the low-level behavior in tests
matches production operation (as long as the fake's implementation is
correct and bug-free).  The potential downsides are any performance
cost of using the fake, the burden of maintaining it (if applicable),
and the need to know how to manage its state.  Consequently, there
should be a mechanism to ensure that the fake's behavior matches the
real thing.
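
One way to get that mechanism (again just a sketch, reusing the
hypothetical InstanceAPI above) is a shared conformance suite that
runs against both the fake and the real implementation:

    package deploy

    import (
        "fmt"
        "testing"
    )

    // fakeInstanceAPI implements InstanceAPI with realistic,
    // stateful behavior rather than per-test canned answers.
    type fakeInstanceAPI struct {
        started map[string]bool
    }

    func (f *fakeInstanceAPI) StartInstance(name string) error {
        if f.started[name] {
            return fmt.Errorf("instance %q already started", name)
        }
        f.started[name] = true
        return nil
    }

    // RunInstanceAPITests keeps the fake honest: the same assertions
    // run against the fake in unit tests and against the real
    // implementation in an integration pass.
    func RunInstanceAPITests(t *testing.T, api InstanceAPI) {
        if err := api.StartInstance("m0"); err != nil {
            t.Fatalf("StartInstance: %v", err)
        }
        if err := api.StartInstance("m0"); err == nil {
            t.Fatal("expected an error starting a duplicate instance")
        }
    }

    func TestFakeConformance(t *testing.T) {
        RunInstanceAPITests(t,
            &fakeInstanceAPI{started: make(map[string]bool)})
    }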

Alternatively, you can use a "stub" (what I was calling a fake) for
that input.  On the upside, stubs are lightweight, both
performance-wise and in terms of engineer time.  They also help limit
the scope of what executes to just the code in the function under
test.  The downside is that each test must encode the relevant
business logic (mapping low-level inputs to low-level outputs) into
its setup.  Not only is that fragile, but the low-level return values
will carry little context where they appear in the setup code
(without comments).
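
Sketched with the same hypothetical API, the stub version looks
something like this:

    package deploy

    import (
        "errors"
        "testing"
    )

    // stubInstanceAPI returns exactly the canned result each test
    // wires in; no business logic lives behind it.
    type stubInstanceAPI struct {
        startErr error
    }

    func (s *stubInstanceAPI) StartInstance(name string) error {
        return s.startErr
    }

    func TestDeployFailure(t *testing.T) {
        // The test encodes the low-level behavior itself: "starting
        // an instance fails".  Nothing keeps this canned value in
        // sync with what the real API actually does.
        stub := &stubInstanceAPI{startErr: errors.New("quota exceeded")}
        if err := Deploy(stub, "m0"); err == nil {
            t.Fatal("expected Deploy to fail when StartInstance fails")
        }
    }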

Still, keeping the per-test stubbed return values in sync with the
actual low-level behavior is the real challenge when using stubs.
Perhaps a happy medium (without an ideal fake) would be to drive a
simple fake with tables of known outputs.  Then those tables could be
vetted against the real thing.  At my last job (manufacturing testing
for SSDs) we actually did something like this for our code base.  We
would periodically run drives through the system and record the
low-level inputs and outputs.  Then for testing we would use a fake
to play them back and make sure we got the same behavior.
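
A rough sketch of that record-and-replay idea (hypothetical names
again):

    package deploy

    import "fmt"

    // recordedCall pairs a low-level input with the output captured
    // from a run against the real system.
    type recordedCall struct {
        output string
        err    error
    }

    // replayAPI is a simple table-driven fake: it only answers with
    // previously recorded outputs, and the same table can be
    // replayed against the real system periodically to confirm it
    // still holds.
    type replayAPI struct {
        table map[string]recordedCall
    }

    func (r *replayAPI) Call(input string) (string, error) {
        rec, ok := r.table[input]
        if !ok {
            return "", fmt.Errorf("no recorded output for input %q", input)
        }
        return rec.output, rec.err
    }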


>> 1. a mix of high branch coverage through isolated unit tests,
>
> I'd be very careful to not overdo this. Covering a line just for the
> fun of seeing the CPU passing there is irrelevant. If you fake every
> single thing around it with no care, you'll have a CPU pointer jumping
> in and out of it, without any relevant achievement.

I was talking about branches in the source code, not actual CPU branch
operations. :)

> I have seen over
> and over "isolated unit tests" which blew up when put in context,
> after some monumental work wasted. The parameter for faking and
> isolating things out should be timing and feasibility, not the
> pretentious "unity purity" perfectionist metric.

Agreed, particularly in a statically typed language like Go.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev