Re: Tags and object IDs

2016-01-25 Thread Nate Finch
I was really trying not to give too much information about this exact case,
so we could avoid talking about a specific implementation, and focus on the
more general question of how we identify objects.  Yes, we get the bytes
using an HTTP request, but that is irrelevant to my question :)
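
To make the shape of the question concrete, a minimal sketch -- names.UnitTag
and ParseUnitTag are real, from github.com/juju/names; everything else here is
illustrative, not Juju's actual code:

    package resources // illustrative

    import "github.com/juju/names"

    // Stringly-typed: the signature claims to accept any string, but
    // callers must somehow know it really means "a serialized UnitTag".
    func SetUnitResourceByID(unitID, resourceName string) error {
        if _, err := names.ParseUnitTag(unitID); err != nil {
            return err // bad input caught only at runtime
        }
        // ... record the (unit, resource) pair ...
        return nil
    }

    // Tag-typed: the contract is explicit and compile-checked, but a
    // non-API layer now depends on the tag types.
    func SetUnitResourceByTag(unit names.UnitTag, resourceName string) error {
        // ... record the (unit, resource) pair ...
        return nil
    }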

On Mon, Jan 25, 2016 at 2:00 AM John Meinel  wrote:

> On Sat, Jan 23, 2016 at 1:28 AM, William Reade <
> william.re...@canonical.com> wrote:
>
>> On Fri, Jan 22, 2016 at 9:53 PM, Nate Finch 
>> wrote:
>>
>>> Working in the model layer on the server between the API and the DB.
>>> Specifically in my instance, an API call comes in from a unit, requesting
>>> the bytes for a resource.  We want to record that this unit is now using
>>> the bytes from that specific revision of the resource.  I have a pointer to
>>> a state.Unit, and a function that takes a Resource metadata object and some
>>> reference to the unit, and does the actual transaction to the DB to store
>>> the unit's ID and the resource information.
>>>
>>
>> I'm a bit surprised that we'd be transferring those bytes over an API
>> call in the first place (is json-over-websocket really a great way to send
>> potential gigabytes? shouldn't we be getting URL+SHA256 from the apiserver
>> as we do for charms, and downloading separately? and do we really want to
>> enforce charmstore == apiserver?); and I'd point out that merely having
>> agreed to deliver some bytes to a client is no indication that the client
>> will actually be using those bytes for anything; but we should probably
>> chat about those elsewhere, I'm evidently missing some context.
>>
>
> So I would have expected that we'd rather use a similar raw
> HTTP-to-get-content instead of a JSON request (given the intent of
> resources is that they may be GB in size), but regardless it is the intent
> that you download the bytes from the charm rather than from the store
> directly.
> Similar to how we currently fetch the charm archive content itself.
> As for "will you be using it", the specific request from the charm is when
> it calls "resource-get" which is very specifically the time when the charm
> wants to go do something with those bytes.
>
> John
> =:->
>
>
>> But whenever we do record the unit-X-uses-resource-Y info I assume we'll
>> have much the same stuff available in the apiserver, in which case I think
>> you just want to pass the *Unit back into state; without it, you just need
>> to read the doc from the DB all over again to make appropriate
>> liveness/existence checks [0], and why bother unless you've already hit an
>> assertion failure in your first txn attempt?
>>
>> Cheers
>> William
>>
>> [0] I imagine you're not just dumping (unit, resource) pairs into the DB
>> without checking that they're sane? that's really not safe
>>
>>
>>> On Fri, Jan 22, 2016 at 3:34 PM William Reade <
>>> william.re...@canonical.com> wrote:
>>>
 Need a bit more context here. What layer are you working in?

 In general terms, entity references in the API *must* use tags; entity
 references that leak out to users *must not* use tags; otherwise it's a
 matter of judgment and convenience. In state code, it's annoying to use
 tags because we've already got the globalKey convention; in worker code
 it's often justifiable if not exactly awesome. See
 https://github.com/juju/juju/wiki/Managing-complexity#workers

 Cheers
 William

 On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch 
 wrote:

> I have a function that is recording which unit is using a specific
> resource.  I wrote the function to take a UnitTag, because that's the
> closest thing we have to an ID type. However, I and others seem to 
> remember
> hearing that Tags are really only supposed to be used for the API. That
> leaves me with a problem - what can I pass to this function to indicate
> which unit I'm talking about?  I'd be fine passing a pointer to the unit
> object itself, but we're trying to avoid direct dependencies on state.
> People have suggested just passing a string (presumably
> unit.Tag().String()), but then my API is too lenient - it appears to say
> "give me any string you want for an id", but what it really means is "give
> me a serialized UnitTag".
>
> I think most places in the code just use a string for an ID, but this
> opens up the code to abuses and developer errors.
>
> Can someone explain why tags should only be used in the API? It seems
> like the perfect type to pass around to indicate the ID of a specific
> object.
>
> -Nate
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/juju-dev

Re: Defaulting tests internal to the package

2016-01-25 Thread William Reade
On Mon, Jan 25, 2016 at 4:29 AM, Nate Finch 
wrote:

> I think the idea that tests of an internal function are worthless because
> you might just not be using that function is a fairly specious argument.
> That could be applied to unit tests at any level.  What if you're not using
> the whole package?  By that logic, the only reliable tests are full stack
> tests.
>
Not quite: the most important questions you should be asking are "will a
behavioural change cause this test to fail?" and "will this test's failure
indicate a behaviour change?". When you stick to external package tests,
you are much more likely to be able to answer "yes" to those questions (and
you're much more likely to be able to *keep on* answering yes as your
package is inevitably modified by people who don't know it as well as you).

I am saying that the only reliable tests are those that exercise *the same
code paths* as can be invoked by a client; and that internal tests, which
*by definition* fail to enforce this restriction, come with subtle but
serious long-term costs, such that it's very rare for their net value to be
positive in the long run. Hence, shorthand, "worthless" :).
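
To sketch the difference with hypothetical names: the external test below
stays green across any refactoring that preserves behaviour, whereas an
internal test of some unexported helper is pinned to one particular
implementation:

    // In cache_test.go, package cache_test: only exported behaviour is
    // exercised, so a green bar means the external contract still holds,
    // however the internals are rearranged.
    package cache_test

    import (
        "testing"

        "example.com/cache" // hypothetical package under test
    )

    func TestPutThenGet(t *testing.T) {
        c := cache.New()
        c.Put("k", "v")
        if got, ok := c.Get("k"); !ok || got != "v" {
            t.Fatalf(`Get("k") = %q, %v; want "v", true`, got, ok)
        }
    }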

> I think that William's position is correct in an ideal world.  The public
> interface is the only contract you really need to worry about.  In theory,
> all your packages would be simple enough that this would not be difficult,
> and the tests would be straightforward.  However, in practice, some
> packages are fairly complex beasts, and it can literally be an order of
> magnitude easier to test a bit of logic internally than to test it
> externally.  And I don't just mean easier in terms of time it takes to
> write (although that shouldn't be ignored - we do have deadlines), but also
> in simplicity of the code in the tests... the more complicated the code in
> the tests, the harder they are to get right, the more difficult it is for
> other people to understand them, and the more difficult it is to maintain
> them.
>
Right. The point is not that you cannot ever write internal tests, I think
I've always (regretfully) agreed that sometimes, indeed, they are necessary
for the reasons you describe -- and Tim's point about model migrations is
well taken, the state package is absolutely a case where the forces in play
push you hard in that direction. But they are a *much much* weaker tool
than external unit tests, and they're ultimately trying to do the same job.
So you should be very aware of what you're giving up when you use them; to
the point, IMO, where you should explicitly consider them a tool of last
resort.

And I think it's worth reiterating my point re code quality: when it's hard
to write external tests for some piece of functionality, that is *strong*
evidence that the concept exposed is only half-baked. When this happens,
you should strongly prefer to evolve the SUT towards better coherence; and
when circumstances prevent you from so doing, you should be thinking
clearly about the characteristics of the particular technical debt you're
taking on in service of those deadlines. A complicated fixture has costs;
an internal test has costs; and I think you're insufficiently wary of the
latter.

(And I'm not saying we should default to taking on complicated fixtures,
either -- I'm saying that the need for *either* is a problem with the
*code*, and that we need to consider the situation from that perspective
before deciding that we need to take on either form of technical debt for
the tests. And so, when we can, we should gently move the code towards the
ideal, because then we get better and simpler client code everywhere, not
just in the local tests.)

> Even if you have perfect code coverage, using inversion of control and
> dependency injection to be able to do every test externally, you can still
> have bugs that slip through, either due to weaknesses in your tests, or
> even just poorly coded mocks that don't actually behave like the production
> things they're mocking.  Isolating a piece of code down to its bare bones
> can make its intended behavior a lot easier to define and understand, and
> therefore easier to test.
>
Sure, you can screw up external tests. It's much easier to screw up
internal tests, and much harder to see when you've done it.

> I think small scope internal unit tests deliver incredible bang for the
> buck.  They're generally super easy to write, super easy to understand, and
> give you confidence that you haven't screwed up a piece of logic. They're
> certainly not ironclad proof that your application as a whole doesn't have
> bugs in it... but they are often very strong proof that this bit of logic
> does not have bugs.
>
I strongly agree that unit tests should be tightly scoped, easy to write,
easy to understand. But a unit test for a unit of *logic* is nowhere near
as valuable as a unit test for a unit of *functionality*, and is much more
vulnerable to churn over time. (I kinda feel like I'm just restating "test
behaviour, not implementation".)

Re: Tags and object IDs

2016-01-25 Thread William Reade
On Mon, Jan 25, 2016 at 12:07 PM, Nate Finch 
wrote:

> I was really trying not to give too much information about this exact
> case, so we could avoid talking about a specific implementation, and focus
> on the more general question of how we identify objects.  Yes, we get the
> bytes using an HTTP request, but that is irrelevant to my question :)
>

I thought I did answer the question:

But whenever we do record the unit-X-uses-resource-Y info I assume we'll
>>> have much the same stuff available in the apiserver, in which case I think
>>> you just want to pass the *Unit back into state; without it, you just need
>>> to read the doc from the DB all over again to make appropriate
>>> liveness/existence checks [0], and why bother unless you've already hit an
>>> assertion failure in your first txn attempt?
>>>
>>
...but perhaps I misunderstood what you were looking for?

Cheers
William
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Tags and object IDs

2016-01-25 Thread John Meinel
I think to William's point, we should have already authenticated the unit
as part of the API request, thus we should have a Unit object hanging
around somewhere close to where that request is being made, and can just
pass it into state.
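
In code, something like this hypothetical shape (SetUnitResource is
illustrative, not an existing state method):

    // Hypothetical apiserver-side sketch: authentication already produced
    // the *state.Unit, so the state layer receives the live object rather
    // than a string that must be re-parsed and re-fetched from the DB.
    type ResourcesFacade struct {
        authUnit *state.Unit
        st       *state.State
    }

    func (f *ResourcesFacade) RecordUsage(resourceName string) error {
        return f.st.SetUnitResource(f.authUnit, resourceName)
    }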

John
=:->

On Mon, Jan 25, 2016 at 3:07 PM, Nate Finch 
wrote:

> I was really trying not to give too much information about this exact
> case, so we could avoid talking about a specific implementation, and focus
> on the more general question of how we identify objects.  Yes, we get the
> bytes using an HTTP request, but that is irrelevant to my question :)
>
> On Mon, Jan 25, 2016 at 2:00 AM John Meinel 
> wrote:
>
>> On Sat, Jan 23, 2016 at 1:28 AM, William Reade <
>> william.re...@canonical.com> wrote:
>>
>>> On Fri, Jan 22, 2016 at 9:53 PM, Nate Finch 
>>> wrote:
>>>
 Working in the model layer on the server between the API and the DB.
 Specifically in my instance, an API call comes in from a unit, requesting
 the bytes for a resource.  We want to record that this unit is now using
 the bytes from that specific revision of the resource.  I have a pointer to
 a state.Unit, and a function that takes a Resource metadata object and some
 reference to the unit, and does the actual transaction to the DB to store
 the unit's ID and the resource information.

>>>
>>> I'm a bit surprised that we'd be transferring those bytes over an API
>>> call in the first place (is json-over-websocket really a great way to send
>>> potential gigabytes? shouldn't we be getting URL+SHA256 from the apiserver
>>> as we do for charms, and downloading separately? and do we really want to
>>> enforce charmstore == apiserver?); and I'd point out that merely having
>>> agreed to deliver some bytes to a client is no indication that the client
>>> will actually be using those bytes for anything; but we should probably
>>> chat about those elsewhere, I'm evidently missing some context.
>>>
>>
>> So I would have expected that we'd rather use a similar raw
>> HTTP-to-get-content instead of a JSON request (given the intent of
>> resources is that they may be GB in size), but regardless it is the intent
>> that you download the bytes from the charm rather than from the store
>> directly.
>> Similar to how we currently fetch the charm archive content itself.
>> As for "will you be using it", the specific request from the charm is
>> when it calls "resource-get" which is very specifically the time when the
>> charm wants to go do something with those bytes.
>>
>> John
>> =:->
>>
>>
>>> But whenever we do record the unit-X-uses-resource-Y info I assume we'll
>>> have much the same stuff available in the apiserver, in which case I think
>>> you just want to pass the *Unit back into state; without it, you just need
>>> to read the doc from the DB all over again to make appropriate
>>> liveness/existence checks [0], and why bother unless you've already hit an
>>> assertion failure in your first txn attempt?
>>>
>>> Cheers
>>> William
>>>
>>> [0] I imagine you're not just dumping (unit, resource) pairs into the DB
>>> without checking that they're sane? that's really not safe
>>>
>>>
 On Fri, Jan 22, 2016 at 3:34 PM William Reade <
 william.re...@canonical.com> wrote:

> Need a bit more context here. What layer are you working in?
>
> In general terms, entity references in the API *must* use tags; entity
> references that leak out to users *must not* use tags; otherwise it's a
> matter of judgment and convenience. In state code, it's annoying to use
> tags because we've already got the globalKey convention; in worker code
> it's often justifiable if not exactly awesome. See
> https://github.com/juju/juju/wiki/Managing-complexity#workers
>
> Cheers
> William
>
> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch 
> wrote:
>
>> I have a function that is recording which unit is using a specific
>> resource.  I wrote the function to take a UnitTag, because that's the
>> closest thing we have to an ID type. However, I and others seem to 
>> remember
>> hearing that Tags are really only supposed to be used for the API. That
>> leaves me with a problem - what can I pass to this function to indicate
>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>> object itself, but we're trying to avoid direct dependencies on state.
>> People have suggested just passing a string (presumably
>> unit.Tag().String()), but then my API is too lenient - it appears to say
>> "give me any string you want for an id", but what it really means is 
>> "give
>> me a serialized UnitTag".
>>
>> I think most places in the code just use a string for an ID, but this
>> opens up the code to abuses and developer errors.
>>
>> Can someone explain why tags should only be used in the API? It seems
>> like the perfect type to pass around to indicate the ID of a specific
>> object.

Re: Defaulting tests internal to the package

2016-01-25 Thread William Reade
On Mon, Jan 25, 2016 at 2:19 AM, Rick Harding 
wrote:

> I've got to toss another +1 for tests at both levels. One set of tests is
> tests against your contract to outsiders. Another is confidence that your
> internals are resilient. There are a ton of cases I can think of such as
> internal code that validates changes in state, validates various forms of
> input, deals with internal changes to document structure over time.
> Ideally, when an external contract test fails, one of the internal ones
> just blew up to point directly at the culprit within all your internal
> code.
>

I really don't think that "both" is any better a default than "internal".
It's a fallback; a patch for missing coverage when you can't effectively
write external tests.

Certainly, changes to doc structure over time are a prime case where it's
probably reasonable to bend principle. But, for example, all the
(critically important!) state-change-validation tests in the state package
*do* make use of an explicitly-injected transaction runner component [0],
and are much the better for it; and I'm pretty sure that input validation
is generally an important part of a component's external contract, and must
be tested as such (or, if complex, delegated to another injected component
-- and the delegation itself tested).
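
For illustration, explicit injection can look something like this --
hypothetical names; txn.Op and DocExists are from gopkg.in/mgo.v2/txn,
though the runner interface here is simplified:

    import (
        "gopkg.in/mgo.v2/bson"
        "gopkg.in/mgo.v2/txn"
    )

    // txnRunner is the injected seam (simplified from the real runner's
    // interface): production wires in an mgo/txn-backed runner, tests
    // wire in a fake that records or perturbs the operations.
    type txnRunner interface {
        Run(ops []txn.Op) error
    }

    type resourceState struct {
        runner txnRunner
    }

    func (rs *resourceState) setUnitResource(unitID, resName string) error {
        ops := []txn.Op{{
            C:      "resources",
            Id:     resName,
            Assert: txn.DocExists, // validation: the resource must still exist
            Update: bson.D{{"$addToSet", bson.D{{"units", unitID}}}},
        }}
        return rs.runner.Run(ops)
    }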

Yes, it can mean that internal tests need to be kept up to date more as the
> internals change, but even then tests provide another layer of "did you
> cover all these cases in your refactoring".
>

A casual double-check for definitions of "refactoring" reveals variants
of "altering its internal structure without changing its external
behavior". At least,
that's what I generally take it to mean; and I imagine we agree that some
quantity of refactoring is necessary to maintain a healthy codebase. And
sadly, IME, the refactoring that is hard to do is the refactoring that does
not get done; and so the package that is not maintained tastefully [1] is
the package that becomes a hideous overgrown nightmare that slows down all
development that touches it.

And internal tests slow down responsible refactoring efforts by an order of
magnitude, because you can no longer safely change the package and have
automatic confidence that a green bar reflects identical
externally-observable behaviour. The existence of *any* internal test is a
small but real leak in package encapsulation; and any time you make an
internal change without auditing the internal tests for impact, you take a
small but real risk that those tests will have subtly broken. You *can* do
it safely, with care and discipline; but you can't do it *quickly*; and if
you want to stay responsive, you need both.

I feel I should reiterate that I'm not trying to *forbid* internal tests;
but I am trying to show that their cost is much higher than is widely
appreciated -- and so that any heuristic that pushes us to use them broadly
is IMO highly suspect. Maybe a rudimentary design-pattern-style description
is the right way to go: i.e. problem, forces, solution, consequences?

Cheers
William

[0] although I couldn't swear that wasn't export_tested somewhere along the
line... I think the point still stands, at least it's explicit injection
from above
[1] for whatever set of expedient and defensible reasons may have applied
at various times
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Defaulting tests internal to the package

2016-01-25 Thread Katherine Cox-Buday

My axioms of argument are:

1. Just like good code, each unit test addresses a different concern
   in isolation.
2. There are 3 concerns to be tested:
   1. Correctness of a small unit of logic.
   2. How units of logic are composed.
   3. How things are instantiated.
3. Testing 2.1 usually requires internal tests.
4. The correctness of the system is an emergent property of the
   correctness of all your tests.

If you accept those axioms, then I agree with Dave: making the test 
package external just produces boilerplate.
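
A minimal illustration of concern 2.1, with hypothetical code -- the
function is unexported, so the direct test has to live in the same package:

    // In parse.go, package foo: a small unexported unit of logic.
    // (Imports fmt, strconv, strings.)
    func splitRevision(id string) (string, int, error) {
        i := strings.LastIndex(id, "-")
        if i < 0 {
            return "", 0, fmt.Errorf("no revision in %q", id)
        }
        rev, err := strconv.Atoi(id[i+1:])
        if err != nil {
            return "", 0, fmt.Errorf("bad revision in %q", id)
        }
        return id[:i], rev, nil
    }

    // In parse_test.go, package foo -- an internal test aimed squarely
    // at concern 2.1:
    func TestSplitRevision(t *testing.T) {
        name, rev, err := splitRevision("blob-7")
        if err != nil || name != "blob" || rev != 7 {
            t.Fatalf("splitRevision(%q) = %q, %d, %v", "blob-7", name, rev, err)
        }
    }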


-
Katherine

On 01/21/2016 02:55 PM, Nate Finch wrote:

[reposting this to the wider juju-dev mailing list]

So, I hit an interesting problem a while back.  I have some unit tests 
that need to be internal tests, thus they are in 'package foo'.  
However, my code needs to use some testhelper functions that someone 
else wrote... which are in 'package foo_test'.  It's impossible to 
reference those, so I had to move those helpers to package foo, which 
then either requires that we make them exported (which is exactly like 
using export_test.go, which, in the juju-core Oakland sprint, we all 
agreed was bad), or all tests that use the helpers need to be in 
'package foo'... which means I had to go change a bunch of files to be 
in package foo, and change the calls in those files from 
foo.SomeFunc() to just SomeFunc().


Given the assumption that some tests at some point will make sense to 
be internal tests, and given it's likely that helper functions/types 
will want to be shared across suites - should we not just always make 
our tests in package foo, and avoid this whole mess in the first place?


(A note, this happened again today - I wanted to add a unit test of a 
non-exported function to an existing test suite, and can't because the 
unit tests are in the foo_test package)


There seems to only be two concrete benefits to putting tests in 
package foo_test:
1. It avoids circular dependencies if your test needs to import 
something that imports the package you're testing.
2. It makes your tests read like normal usages of your package, i.e. 
calling foo.SomeFunc().
The first is obviously non-negotiable when it comes up... but I think 
it's actually quite rare (and might indicate a mixture of concerns 
that warrants investigation).  The second is nice to have, but not 
really essential (if we want tests that are good examples, we can 
write example functions).
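
For reference, a testable example of the kind meant here looks like this
(SomeFunc and its output are hypothetical; the Output comment is checked
by go test):

    // In example_test.go, package foo_test: compiled and verified by
    // go test via the Output comment, and rendered by godoc as usage.
    func ExampleSomeFunc() {
        fmt.Println(foo.SomeFunc("unit-mysql-0"))
        // Output: unit-mysql-0
    }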


So, I propose we put all tests in package foo by default.  For those 
devs that want to only test the exported API of a package, you can 
still do that.  But this prevents problems where helper code can't be 
shared between the two packages without ugliness and/or dumb code 
churn, and it means anyone can add unit tests for non-exported code 
without having to create a whole new file and testsuite.
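
For contrast, the export_test.go forwarding mentioned above looks
something like this (illustrative identifiers throughout):

    // foo/limits.go
    package foo

    // clamp is unexported logic we want covered.
    func clamp(v, lo, hi int) int {
        if v < lo {
            return lo
        }
        if v > hi {
            return hi
        }
        return v
    }

    // foo/export_test.go -- compiled only alongside foo's tests; it
    // forwards the private identifier so external tests can reach it.
    package foo

    var Clamp = clamp

    // foo/limits_test.go
    package foo_test

    import (
        "testing"

        "example.com/foo" // hypothetical import path
    )

    func TestClamp(t *testing.T) {
        if got := foo.Clamp(5, 0, 3); got != 3 {
            t.Fatalf("Clamp(5, 0, 3) = %d; want 3", got)
        }
    }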


-Nate




--
-
Katherine

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Juju stable 1.25.3 is now released

2016-01-25 Thread Curtis Hovey-Canonical
# juju-core 1.25.3

A new stable release of Juju, juju-core 1.25.3, is now available.
This release replaces version 1.25.0.


## Getting Juju

juju-core 1.25.3 is available for Xenial and backported to earlier
series in the following PPA:

https://launchpad.net/~juju/+archive/stable

Windows, Centos, and OS X users will find installers at:

https://launchpad.net/juju-core/+milestone/1.25.3


## Notable Changes

This release addresses stability and performance issues.


## Resolved issues

  * Unit loses network connectivity during bootstrap: juju 1.25.2 +
maas 1.9
Lp 1534795

 * "cannot allocate memory" when running "juju run"
Lp 1382556

  * Bootstrap with the vsphere provider fails to log into the virtual
machine
Lp 1511138

  * Add-machine with vsphere triggers machine-0: panic: juju home
hasn't been initialized
Lp 1513492

  * Using maas 1.9 as provider using dhcp nic will prevent juju
bootstrap
Lp 1512371

  * Worker/storageprovisioner: machine agents attempting to attach
environ-scoped volumes
Lp 1483492

  * Restore: agent old password not found in configuration
Lp 1452082

  * "ignore-machine-addresses" broken for containers
Lp 1509292

  * Deploying a service to a space which has no subnets causes the
agent to panic
Lp 1499426

  * /var/lib/juju gone after 1.18->1.20 upgrade and manual edit of
agent.conf
Lp 1444912

  * Juju bootstrap fails to successfully configure the bridge juju-br0
when deploying with wily 4.2 kernel
Lp 1496972

  * Incompatible cookie format change
Lp 1511717

  * Error environment destruction failed: destroying storage: listing
volumes: get https://x.x.x.x:8776/v2//volumes/detail: local
error: record overflow
Lp 1512399

  * Replica set emptyconfig maas bootstrap
Lp 1412621

  * Juju can't find daily image streams from
cloud-images.ubuntu.com/daily
Lp 1513982

  * Rsyslog certificate fails when using ipv6/4 dual stack with
prefer-ipv6: true
Lp 1478943

  * Improper address:port joining
Lp 1518128

  * Juju status  broken
Lp 1516989

  * 1.25.1 with maas 1.8: devices dns allocation uses non-unique
hostname
Lp 1525280

  * Increment minimum juju version for 2.0 upgrade to 1.25.3
Lp 1533751

  * Make assignment of units to machines use a worker
Lp 1497312

  * `juju environments` fails due to missing
~/.juju/current-environment
Lp 1506680

  * Juju 1.25 misconfigures juju-br0 when using maas 1.9 bonded
interface
Lp 1516891

  * Destroy-environment on an unbootstrapped maas environment can
release all my nodes
Lp 1490865

  * On juju upgrade the security group lost ports for the exposed
services
Lp 1506649

  * Support centos and windows image metadata
Lp 1523693

  * Upgrade-juju shows available tools and best version but did not
output what it decided to do
Lp 1403655

  * Invalid binary version, version "1.23.3--amd64" or "1.23.3--armhf"
Lp 1459033

  * Add xenial to supported series
Lp 1533262


## Finally

We encourage everyone to subscribe to the mailing list at
juju-...@lists.canonical.com, or join us on #juju-dev on freenode.


-- 
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Defaulting tests internal to the package

2016-01-25 Thread David Cheney
Thanks Katherine.

It's displaying a bias, but doing TDD with external tests is
overcomplicated and against the spirit of the thing. It's probably not
impossible, so I won't say that, but I will say the extra pomp of
wiring up external tests, forwarding private functions, constants and
variables, is antithetical to the TDD idea of writing the _smallest_
test that fails, followed by the smallest fix that passes.

Of course there is a place for both internal and external tests,
but if you're doing TDD, you're going to be writing internal unit
tests -- I don't see the value in the dogma (ha, from someone pitching
TDD!) of the external test straitjacket over what is already
required to write the existing set of test-driven tests.

Thanks

Dave

On Tue, Jan 26, 2016 at 4:16 AM, Katherine Cox-Buday
 wrote:
> My axioms of argument are:
>
> 1. Just like good code, each unit test addresses a different concern in
>    isolation.
> 2. There are 3 concerns to be tested:
>
>    1. Correctness of a small unit of logic.
>    2. How units of logic are composed.
>    3. How things are instantiated.
>
> 3. Testing 2.1 usually requires internal tests.
> 4. The correctness of the system is an emergent property of the correctness
>    of all your tests.
>
> If you accept those axioms, then I agree with Dave: making the test package
> external just produces boilerplate.
>
> -
> Katherine
>
>
> On 01/21/2016 02:55 PM, Nate Finch wrote:
>
> [reposting this to the wider juju-dev mailing list]
>
> So, I hit an interesting problem a while back.  I have some unit tests that
> need to be internal tests, thus they are in 'package foo'.  However, my code
> needs to use some testhelper functions that someone else wrote... which are
> in 'package foo_test'.  It's impossible to reference those, so I had to move
> those helpers to package foo, which then either requires that we make them
> exported (which is exactly like using export_test.go, which, in the
> juju-core Oakland sprint, we all agreed was bad), or all tests that use the
> helpers need to be in 'package foo'... which means I had to go change a
> bunch of files to be in package foo, and change the calls in those files
> from foo.SomeFunc() to just SomeFunc().
>
> Given the assumption that some tests at some point will make sense to be
> internal tests, and given it's likely that helper functions/types will want
> to be shared across suites - should we not just always make our tests in
> package foo, and avoid this whole mess in the first place?
>
> (A note, this happened again today - I wanted to add a unit test of a
> non-exported function to an existing test suite, and can't because the unit
> tests are in the foo_test package)
>
> There seems to only be two concrete benefits to putting tests in package
> foo_test:
> 1. It avoids circular dependencies if your test needs to import something
> that imports the package you're testing.
> 2. It makes your tests read like normal usages of your package, i.e. calling
> foo.SomeFunc().
> The first is obviously non-negotiable when it comes up... but I think it's
> actually quite rare (and might indicate a mixture of concerns that warrants
> investigation).  The second is nice to have, but not really essential (if we
> want tests that are good examples, we can write example functions).
>
> So, I propose we put all tests in package foo by default.  For those devs
> that want to only test the exported API of a package, you can still do that.
> But this prevents problems where helper code can't be shared between the two
> packages without ugliness and/or dumb code churn, and it means anyone can
> add unit tests for non-exported code without having to create a whole new
> file and testsuite.
>
> -Nate
>
>
>
> --
> -
> Katherine
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev