Re: Unit Tests Integration Tests

2014-09-15 Thread roger peppe
On 12 September 2014 14:46, Katherine Cox-Buday
katherine.cox-bu...@canonical.com wrote:
 I have been trying to digest the following series of talks between Martin
 Fowler, Kent Beck, and David Heinemeier Hansson, called "Is TDD Dead?". The topic
 is a bit inflammatory, but there's some good stuff here.

Thank you very much for linking to those videos. I wasn't previously
aware of them, and they (or at least the first one, which is the
only one I've got through so far) seem very salient to this thread.

I like this quote a lot (at around 21:14 in the first part, from Kent
Beck); it fits very well with my own feelings on some of this:

My experience is, if I use TDD, I can refactor stuff.
And then I heard stories people say, well 'I use TDD and
now I can't refactor anything', and I couldn't understand it,
and I started looking at their tests. Well, if you have
mocks returning mocks returning mocks, your test is
completely coupled to the implementation, not the interface,
but the exact implementation of some object, y'know,
three streets away... of course you can't change anything
without breaking the tests.

I look forward to watching the rest.

  cheers,
rog.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests Integration Tests

2014-09-12 Thread Michael Foord


On 12/09/14 06:05, Ian Booth wrote:


On 12/09/14 01:59, roger peppe wrote:

On 11 September 2014 16:29, Matthew Williams
matthew.willi...@canonical.com wrote:

Hi Folks,

There seems to be a general push in the direction of having more mocking in
unit tests. Obviously this is generally a good thing but there is still
value in having integration tests that test a number of packages together.
That's the subject of this mail - I'd like to start discussing how we want
to do this. Some ideas to get the ball rolling:

Personally, I don't believe this is obviously a good thing.
The less mocking, the better, in my view, because it gives
better assurance that the code will actually work in practice.

Mocking also implies that you know exactly what the
code is doing internally - this means that tests written
using mocking are less useful as regression tests, as
they will often need to be changed when the implementation
changes.


Let's assume that the term stub was meant to be used instead of mocking. Well
written unit tests do not involve dependencies outside of the code being tested,
and to achieve this, stubs are typically used. As others have stated already in
this thread, unit tests are meant to be fast. Our Juju unit tests are in many
cases not unit tests at all - they involve bringing up the whole stack,
including mongo in replicaset mode for goodness sake, all to test a single
component. This approach is flawed and goes against what would be considered
best practice by most software engineers. I hope we can all agree on that point.


I agree. I tend to see the need for stubs (I dislike Martin Fowler's 
terminology and prefer the term mock - as it really is by common 
parlance just a mock object) as a failure of the code. Just sometimes a 
necessary failure.


Code, as you say, should be written as much as possible in decoupled 
units that can be tested in isolation. This is why test-first is 
helpful: it makes you think about "how am I going to test this 
unit" before you write it, and you're less likely to code in 
hard-to-test dependencies.


Where dependencies are impossible to avoid, typically at the boundaries 
of layers, stubs can be useful to isolate units - but the need for them 
often indicates excessive coupling.




To bring up but one of many concrete examples - we have a set of Juju CLI
commands which use a Juju client API layer to talk to an API service running on
the state server. We unit test Juju commands by starting a full state server
and ensuring the whole system behaves as expected, end to end. This is
expensive, slow, and unnecessary. What we should be doing here is stubbing out
the client API layer and validating that:
1. the command passes the correct parameters to the correct API call
2. the command responds the correct way when results are returned

Anything more than that is unnecessary and wasteful. Yes, we do need end-to-end
integration tests as well, but these are in addition to, not in place of, unit
tests. And integration tests tend to be fewer in number, and run less frequently
than, unit tests; the unit tests have already covered all the detailed
functionality and edge cases; the integration tests confirm the moving pieces
mesh together as expected.
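As a sketch of that stubbing approach (all names here are hypothetical, not Juju's actual API), a command can be tested against a stub of the client API layer that simply records what it was called with:

```go
package main

import "fmt"

// APIClient is a hypothetical interface standing in for the Juju
// client API layer that commands talk through.
type APIClient interface {
	Deploy(service string, numUnits int) error
}

// stubClient records the parameters it was called with, so a test can
// assert on them without starting a state server or mongo.
type stubClient struct {
	service  string
	numUnits int
	err      error // error to return, if the test wants one
}

func (s *stubClient) Deploy(service string, numUnits int) error {
	s.service = service
	s.numUnits = numUnits
	return s.err
}

// runDeployCommand stands in for a CLI command that talks to the API.
func runDeployCommand(c APIClient, service string, n int) error {
	return c.Deploy(service, n)
}

func main() {
	stub := &stubClient{}
	if err := runDeployCommand(stub, "wordpress", 2); err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	// The test's two checks: right API call, right parameters.
	fmt.Printf("deployed %s with %d units\n", stub.service, stub.numUnits)
}
```

A failure path is tested the same way, by setting `err` on the stub and checking the command surfaces it.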

As per other recent threads to juju-dev, we have already started to introduce
infrastructure to allow us to start unit testing various Juju components the
correct way, starting with the commands, the API client layer, and the API
server layer. Hopefully we will also get to the point where we can unit test
core business logic like adding and placing machines, deploying units etc,
without having to have a state server and mongo. But that's a way off given we
first need to unpick the persistence logic from our business logic and address
cross pollination between our architectural layers.



+1

Being able to test business logic without having to start a state server 
and mongo will make our tests so much faster and more reliable. The 
more we can do this *without* stubs the better, but I'm sure that's not 
entirely possible.


All the best,

Michael



Re: Unit Tests Integration Tests

2014-09-12 Thread Mark Ramm-Christensen (Canonical.com)
On Thu, Sep 11, 2014 at 3:41 PM, Gustavo Niemeyer gust...@niemeyer.net
wrote:

 Performance is the second reason Roger described, and I disagree that
 mocking code is cleaner.. these are two orthogonal properties, and
 it's actually pretty easy to have mocked code being extremely
 confusing and tightly bound to the implementation. It doesn't _have_
 to be like that, but this is not a reason to use it.


It is easy to do that, though often it is a sign of not having clean
separation of concerns. Messy mocking can (though does not always)
reflect messiness in the code itself. Messy, poorly isolated code is bad,
and messy mocks often mean you have not one but two messes to clean up.

 Like any tools, developers can over-use, or mis-use them.   But, if you
  don't use them at all,



 That's not what Roger suggested either. A good conversation requires
 properly reflecting the position held by participants.


You are right, I wasn't precise about the details of his suggestion to not
use them, but he did suggest not using mocks unless there is *no other
choice.* And it is that rule against them that I was trying to make a case
against.

With that said, I definitely agree with the experience that both of you are
trying to highlight about the dangers of over-reliance on mocks.  I think
everybody who has written a significant amount of test code knows that
passing a test against a mock is not the same thing as actually working
against the mocked out library/function/interface.
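A tiny illustration of that false-confidence failure mode (names invented for the sketch): a stub that always succeeds lets a caller pass checks that the real dependency would fail:

```go
package main

import (
	"errors"
	"fmt"
)

// Storer is a hypothetical dependency.
type Storer interface {
	Save(name string) error
}

// stubStore always succeeds, so any caller "passes" against it --
// even callers that would fail against the real thing.
type stubStore struct{}

func (stubStore) Save(string) error { return nil }

// realStore rejects empty names; the stub never exercises this rule.
type realStore struct{}

func (realStore) Save(name string) error {
	if name == "" {
		return errors.New("empty name")
	}
	return nil
}

func register(s Storer, name string) error { return s.Save(name) }

func main() {
	fmt.Println("stub:", register(stubStore{}, "")) // passes against the stub...
	fmt.Println("real:", register(realStore{}, "")) // ...but fails for real
}
```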


  you often end up with what I call the binary test suite in which one
  coding error somewhere creates massive test failures.

 A coding error that creates massive test failures is not a problem, in
 my experience using both heavily mocking and heavily non-mocking code
 bases.


It's not a problem for new code, but it makes refactoring and cleanup
harder: you change a method, and rather than the test suite telling
you which things depend on it (and therefore need to be updated, and how
far you need to go), you get 100% test failures and you're not quite sure
how many changes are needed, or where -- until suddenly you
fix the last thing and *everything* passes again.

 My belief is that you need both small, fast, targeted tests (call them
 unit
  tests) and large, realistic, full-stack tests (call them integration
 tests)
  and that we should have infrastructure support for both.

 Yep, but that's besides the point being made. You can do unit tests
 which are small, fast, and targeted, both with or without mocking, and
 without mocking they can be realistic, which is a good thing. If you
 haven't had a chance to see tests falsely passing with mocking, that's
 a good thing too.. you haven't abused mocking too much yet.


Sorry, I was transitioning back to the main point of the thread, raised by
Matty at the beginning.  And I was agreeing that there are two very
different *kinds of tests* and we should have a place for large tests to
go.

I think the two issues ARE related because a bias against mocks, and a
failure to separate out functional tests, in a large project leads to a
test suite that has lots of large slow tests, and which developers can't
easily run many, many, many times a day.

By allowing explicit ways to write larger functional tests as well as small
(unitish) tests, you let the two kinds of tests be what they need to
be, without trying to have one test suite serve both purposes.  And the
creation of a place for those larger tests was just as much a part of the
point of this thread as Roger's comments on mocking.

--Mark Ramm

PS, if you want to fit this into the Martin Fowler terminology I'm just
using mocks as a shorthand for all of the kinds of doubles he describes.


Re: Unit Tests Integration Tests

2014-09-12 Thread Nate Finch
I like Gustavo's division - slow tests and fast tests, not unit tests and
integration tests.  Certainly, integration tests are often also slow tests,
but that's not the division that really matters.

*I want* go test github.com/juju/juju/... *to finish in 5 seconds or less.
I want the landing bot to reject commits that cause this to no longer be true.*

This is totally doable even on a codebase the size of juju's.  Most tests
that don't bring up a server or start mongo finish in milliseconds.

There are many strategies we can use to deal with slower tests.  One of those
may be "don't run slow tests unless you ask for them".  Another is
refactoring code and tests so they don't have to bring up a server/mongo.
Both are good and valid.

This would make developers more productive.  You can run the fast tests
trivially whenever you make a change.  When you're ready to commit, run the
long tests to pick up anything the short tests don't cover.

Right now, I cringe before starting to run the tests because they take so
long.

I don't personally care if it's a test flag or an environment variable,
hell, why not both? It's trivial either way.  Let's just do it.

-Nate


Re: Unit Tests Integration Tests

2014-09-12 Thread Mark Ramm-Christensen (Canonical.com)
On Fri, Sep 12, 2014 at 12:25 PM, Gustavo Niemeyer gust...@niemeyer.net
wrote:

 On Fri, Sep 12, 2014 at 12:00 PM, Mark Ramm-Christensen
 (Canonical.com) mark.ramm-christen...@canonical.com wrote:
  I think the two issues ARE related because a bias against mocks, and a
  failure to separate out functional tests, in a large project leads to a
 test
  suite that has lots of large slow tests, and which developers can't
 easily
  run many, many, many times a day.

 There are test doubles in the code base of juju since pretty much the
 start (dummy provider?). If you have large slow tests, this should be
 fixed, but that's orthogonal to having these or not.

 Then, having a bias against test doubles everywhere is a good thing.
 Ideally the implementation itself should be properly factored out so
 that you don't need the doubles in the first place. Michael Foord
 already described this in a better way.


Hmm, there seems to be some nuance missing here.  I see the argument as
originally made as saying "don't have doubles anywhere unless you
absolutely have to for performance reasons, or because a non-double is the
only possible way to do a test".

I disagree with that.

I know there are good uses of doubles in the code, and bad ones.


 If you want to have a rule Tests are slow, you should X, the best X
 is think about what you are doing, rather than use test doubles.


Agreed. I did not and would never argue otherwise.

 By allowing explicit ways to write larger functional tests as well as
 small
  (unitish) tests you get to let the two kinds of tests be what they need
 to
  be, without trying to have one test suite serve both purposes.  And the
  creation of a place for those larger tests was just as much a part of
 the
  point of this thread, as Roger's comments on mocking.

 If by functional test you mean test that is necessarily slow,
 there should not be _a_ place for them, because you may want those in
 multiple places in the code base, to test local logic that is
 necessarily expensive. Roger covered that by suggesting a flag that is
 run when you want to skip those. This is a common technique in other
 projects, and tends to work well.


I agree with tagging.  "A place" wasn't necessarily intended to be
prescriptive.  My point, which I feel has already been made well enough, is
that there needs to be a way to separate out long-running tests.

--Mark Ramm


Re: Unit Tests Integration Tests

2014-09-11 Thread Nate Finch
definitely not all in the same package.

On Thu, Sep 11, 2014 at 11:29 AM, Matthew Williams 
matthew.willi...@canonical.com wrote:

 Hi Folks,

 There seems to be a general push in the direction of having more mocking
 in unit tests. Obviously this is generally a good thing but there is still
 value in having integration tests that test a number of packages together.
 That's the subject of this mail - I'd like to start discussing how we want
 to do this. Some ideas to get the ball rolling:

 Having integration tests spread about the package and having environment
 variables that switch them on/off:

 $ JUJU_INTEGRATION=1 go test ./...

 We could make use of build tags:

 $ go test -tags integration ./...

 We could put all the integration tests in a single package:

 $ go test github.com/juju/juju/integrationtests/...


 Thoughts?

 Matty





Re: Unit Tests Integration Tests

2014-09-11 Thread Gustavo Niemeyer
On Thu, Sep 11, 2014 at 4:06 PM, Mark Ramm-Christensen (Canonical.com)
mark.ramm-christen...@canonical.com wrote:
 But they are not the ONLY reasons why they are valuable.
 There are plenty of others -- performance, test-code cleanliness/re-use,
 result granularity, etc.

Performance is the second reason Roger described, and I disagree that
mocking code is cleaner.. these are two orthogonal properties, and
it's actually pretty easy to have mocked code being extremely
confusing and tightly bound to the implementation. It doesn't _have_
to be like that, but this is not a reason to use it.

 Like any tools, developers can over-use, or mis-use them.   But, if you
 don't use them at all,

That's not what Roger suggested either. A good conversation requires
properly reflecting the position held by participants.

 you often end up with what I call the binary test suite in which one
 coding error somewhere creates massive test failures.

A coding error that creates massive test failures is not a problem, in
my experience using both heavily mocking and heavily non-mocking code
bases. It rarely goes into the repository in the first place, because
it's a massive breakage, and when it does go in due to differences in
environment, it's easy to spot the root of the failure because proper
code is layered.

(...)
 My belief is that you need both small, fast, targeted tests (call them unit
 tests) and large, realistic, full-stack tests (call them integration tests)
 and that we should have infrastructure support for both.

Yep, but that's besides the point being made. You can do unit tests
which are small, fast, and targeted, both with or without mocking, and
without mocking they can be realistic, which is a good thing. If you
haven't had a chance to see tests falsely passing with mocking, that's
a good thing too.. you haven't abused mocking too much yet.


gustavo @ http://niemeyer.net



Re: Unit Tests Integration Tests

2014-09-11 Thread Andrew Wilkins
On Thu, Sep 11, 2014 at 11:29 PM, Matthew Williams 
matthew.willi...@canonical.com wrote:

 Hi Folks,

 There seems to be a general push in the direction of having more mocking
 in unit tests. Obviously this is generally a good thing but there is still
 value in having integration tests that test a number of packages together.
 That's the subject of this mail - I'd like to start discussing how we want
 to do this. Some ideas to get the ball rolling:

 Having integration tests spread about the package and having environment
 variables that switch them on/off:

 $ JUJU_INTEGRATION=1 go test ./...

 We could make use of build tags:

 $ go test -tags integration ./...

 We could put all the integration tests in a single package:

 $ go test github.com/juju/juju/integrationtests/...



Cut and paste from my reply to Ian's "Call to action" email:

-
I'd like to make a few suggestions regarding moving tests to CI.

- For now, I think we should create a new package tree under
github.com/juju/juju/ci, and move functional tests there. The package name
is unimportant, alternative suggestions welcome, but the idea is that they
will continue running as unit tests until CI is ready to start running
them. At that point we'd create build constraints on those tests and they'd
only be run if you explicitly enable them.
- The functional tests may test on package boundaries (e.g. for configuring
replica sets), but may not use package internals.
- The functional tests must be runnable on developers' machines, with
minimal prior setup required (like the unit tests now).
-

So I think we're thinking the same thing. The tests should not be in one
package, but I think having a common root would be good (I think you're
suggesting that, given the ...).

I'd be fine with Roger's suggestion of using -short to disable the
longer-running integration tests (e.g. replicaset). If the tests are short,
then I don't see a compelling reason to disable them in regular runs. They
need to be stable either way.
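For reference, the -short convention hinges on testing.Short(): a slow test guards itself and skips when `go test -short` is used. In a real suite the guard lives in a *_test.go file; in the sketch below, testing.Init and flag.Parse only stand in for the setup the test binary normally does, so it runs as a plain program:

```go
package main

import (
	"flag"
	"fmt"
	"testing"
)

// slowTestLabel mimics the guard at the top of a long-running test:
// in short mode, the test skips itself (via t.Skip in a real test).
func slowTestLabel(short bool) string {
	if short {
		return "SKIP: replicaset test (short mode)"
	}
	return "RUN: replicaset test"
}

func main() {
	testing.Init()                 // registers the test.* flags
	flag.Set("test.short", "true") // what `go test -short` does
	flag.Parse()
	fmt.Println(slowTestLabel(testing.Short()))
}
```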

Cheers,
Andrew


Re: Unit Tests Integration Tests

2014-09-11 Thread Gustavo Niemeyer
On Thu, Sep 11, 2014 at 10:42 PM, Andrew Wilkins
andrew.wilk...@canonical.com wrote:
 I basically agree with everything below, but strongly disagree that mocking
 implies you know exactly what the code is doing internally. A good interface

I'm also in agreement about your points. But just so you understand
where Roger is coming from, the term mocking is often [1] associated
with a test style that does bind very closely to what the code does.
But you're probably using the term more loosely for test doubles in
general, and I'm all for not being pedantic, so yes, +1 to the
intention of what you've said.

[1] http://martinfowler.com/articles/mocksArentStubs.html


gustavo @ http://niemeyer.net



Re: Unit Tests Integration Tests

2014-09-11 Thread Jonathan Aquilina
 

With GitHub I know continuous integration is possible. On another
project I work on we use it. The perk with Travis is that it works with
a YAML file, plus when a PR is filed it sends the patch to be built and
lets you know in the PR if the build was successful or not. I am not
sure though how that would fit into the workflow for you guys.


---
Regards,
Jonathan Aquilina
Founder Eagle Eye T

On 2014-09-11 17:29, Matthew Williams wrote:

 Hi Folks,

 There seems to be a general push in the direction of having more mocking
 in unit tests. Obviously this is generally a good thing but there is still
 value in having integration tests that test a number of packages together.
 That's the subject of this mail - I'd like to start discussing how we want
 to do this. Some ideas to get the ball rolling:

 Having integration tests spread about the package and having environment
 variables that switch them on/off:

 $ JUJU_INTEGRATION=1 go test ./...

 We could make use of build tags:

 $ go test -tags integration ./...

 We could put all the integration tests in a single package:

 $ go test github.com/juju/juju/integrationtests/...

 Thoughts?

 Matty