Re: lxd and constraints

2017-01-13 Thread Nate Finch
I just feel like we're entering a minefield that our application and CLI
aren't really built to handle.  I think we *should* handle it, but it needs
to be well planned out, instead of just doing a tiny piece at a time and
only figuring out later if we did the right thing.

There are a few problems I can see:

1.) You can have 10 lxd containers, each with a memory limit of 2GB, on a
machine with 4GB of RAM.  Deploying 10 applications to those containers,
each with a constraint of mem=2GB, will not work as you expect.  We could
add extra bookkeeping for this and warn you that you appear to be
oversubscribing the host, but that's more work.

2.) What happens if you try to deploy a container without a memory limit on
a host that already has a container on it?

For example:
4GB host
2GB lxd container
try to deploy a new service in a container on this machine.
Do we warn?  We have no clue how much RAM the service will use.  Maybe
it'll be fine, maybe it won't.

3.) Our CLI doesn't really work well with constraints on containers:

juju deploy mysql --constraints mem=2G --to lxd

Does this deploy a machine with 2GB of ram and a container with a 2GB ram
limit on it?  Or does it deploy a machine with 2GB of ram and a container
with no limit on it?  It has to be one or the other, and currently we have
no way of indicating which we want to do, and no way to do the other one
without using multiple commands.

This is a more likely use case, creating a bigger machine that can hold
multiple containers:
juju add-machine --constraints mem=4GB
// adds machine, let's say 5
// create a container on machine 5 with 2GB memory limit
juju deploy mysql --constraints mem=2GB --to lxd:5

At least in this case, the deploy command is clear; there's only one thing
it can possibly mean.  Usually, the placement directive would override
the constraint, but in this case, it does what you would want... though it
is a little weird that --to lxd:5 uses the constraint, but --to 5 ignores it.

Note that you can't just write a simple script to do the above, because the
machine number is variable, so you have to parse our output and then use
that for the next command.  It's still scriptable, obviously, but it's a
more complicated script than just two lines of bash.

Also note that using this second method, you can't deploy more than one
unit at a time, unless you want multiple units on containers on the same
machine (which I think would be pretty odd).



On Fri, Jan 13, 2017 at 3:48 AM Rick Harding 
wrote:

In the end, you say you want an instance with 2GB of RAM, and if the cloud
has an instance type with that exact amount, it is in fact an exact limit.
The key thing here is that clouds don't have infinitely malleable instance
types the way containers do (this applies to both kvm and lxd). So I'm not
sure the mismatch is as wide as it seems. root-disk means "give me a disk
this big"; if you ask for 2 cores, then as long as you can match an instance
type with 2 cores, that's exactly the max you get.

It seems part of this could be addressed by adjusting the language from
"minimum" to something closer to "requested X", where a request means "I
want X" without the min/max boundaries.



On Fri, Jan 13, 2017 at 3:14 AM John Meinel  wrote:

So we could make it so that constraints are actually 'exactly' for LXD,
which would then conform to both minimum and maximum, and would still be
actually useful for people deploying to containers. We could certainly
probe the host machine and say "you asked for 48 cores, and the host
machine doesn't have it".

However, note that explicit placement also takes precedence over
constraints anyway. If you do:
  juju deploy mysql --constraints mem=4G
today, and then do:
 juju add-unit --to 2
We don't apply the constraint limitations to that specific unit. Arguably
we should at *least* be warning that the constraints for the overall
application don't appear to be valid for this instance.

I guess I'd rather see constraints still set limits for containers, because
people really want that functionality, and that we warn any time you do a
direct placement and the constraints aren't satisfied (though the warning
wouldn't fail the attempt).

John
=:->

On Fri, Jan 13, 2017 at 10:09 AM, Stuart Bishop  wrote:

On 13 January 2017 at 02:20, Nate Finch  wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.



I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.


I don't think Juju can h

Re: lxd and constraints

2017-01-12 Thread Nate Finch
Merlijn:
I definitely agree that having the same term mean different things on
different platforms is a really bad idea.  I don't think we can change the
concept of constraints as minimums at this point, but maybe a new concept
of limits (to match lxd terminology) could be added.  Limits really only
make sense for containers, so we would either have to error out, or warn
the user and ignore it if you specified a limit when deploying a base
machine.

*Mike:*
The problem with trying to figure out how much "unused" RAM a host has is
that it gets thrown out the window if you ever deploy any unit to the host
machine, or if you deploy a unit in a container without a RAM constraint.
Those units may then use as much RAM as they want.


On Thu, Jan 12, 2017 at 3:22 PM Mike Pontillo 
wrote:

On Thu, Jan 12, 2017 at 11:20 AM, Nate Finch 
wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.


Is the second part (how much leftover you can expect the host to have for
other containers) captured somewhere? Because it seems to me that the
important question Juju needs to be asking is, "how over-provisioned is the
host I'm about to deploy on?", so that containers can be intelligently
load-balanced across the infrastructure.

Assuming Juju has full control over the hosts it is deploying containers
onto[2], I think one thing to do might be to allow the admin to specify
ratios (maybe separate ones for CPU, RAM, and disk) to indicate how
over-provisioned hosts and containers are allowed to be.

Let's take as an example a host with 16 GB of RAM, where you want to deploy
16 containers with a constraint of "at least 1G of RAM". There could be two
relevant over-provisioning ratios: one to specify how over-provisioned a
container hypervisor can be, and the other to specify how much more RAM
than the constraint specifies the container can be allowed to use. This
idea is perhaps a little naive; I'm not sure where one would specify these
values.

If that sounds confusing, maybe it's easier to look at an example[1]:

Host RAM (GB) | Min RAM constraint (GB) | Host over-prov. ratio | Container over-prov. ratio | Allowed containers per host | RAM limit per container (GB)
16 | 2 |  1.00 | 1.00 |   8 | 2
16 | 1 |  1.00 | 1.00 |  16 | 1
16 | 1 |  2.00 | 1.00 |  32 | 1
16 | 1 |  1.00 | 2.00 |  16 | 2
16 | 1 |  2.00 | 2.00 |  32 | 2
16 | 1 |  2.00 | 4.00 |  32 | 4
16 | 1 | 16.00 | 4.00 | 256 | 4
16 | 1 | 16.00 | 1.00 | 256 | 1


Regards,
Mike

[1]:
https://docs.google.com/spreadsheets/d/1j6-98nB5AA_viHK9nF42MPpbZf35wv2N2gDi8DrhepU/view


[2]: (so that the numbers aren't thrown off by other juju deployments, or
non-juju deployments to the same hypervisor)
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


lxd and constraints

2017-01-12 Thread Nate Finch
I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.

However, in Juju's case, we expect constraints to be minimums, so that we
know the unit on that machine will have enough RAM to function (e.g. "give
me a machine with at least 2GB of RAM, since I know my application won't
work well with less").

This impedance mismatch is tricky to untangle.  With a naive implementation
of Juju constraints for lxd as a straight up setting of lxd limits, then
you can add a lxd container and specify a memory constraint that is higher
than the host machine's memory, and lxd will happily let you do that
because it knows that container won't exceed that amount of memory (by
definition it cannot).  But it means that juju will then let you deploy a
unit that needs more memory than the container has access to.

Note that this is also the case for setting cores.  On my local lxd
environment I can juju add-machine --constraints cores=48 and the container
will be created just fine.

I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.

-Nate


Representative tests failing

2016-12-14 Thread Nate Finch
Seems like there's likely an out-of-disk-space problem with the LXD and
Windows machines that run representative tests.  For example:

http://juju-ci.vapour.ws/job/github-check-merge-juju/441/
http://juju-ci.vapour.ws/job/github-check-merge-juju/442/

Sounds like this happens often enough to be a problem.

Is this something we can automate by resetting to a snapshot or something
along those lines?


Re: One instance manual provider

2016-12-05 Thread Nate Finch
Except you can't expose anything deployed to lxd to the network, right?

On Mon, Dec 5, 2016, 5:49 PM Marco Ceppi  wrote:

> A one machine manual provider? Might as well just `ssh ; juju
> bootstrap lxd`.
>
> On Mon, Dec 5, 2016 at 10:09 AM Nate Finch 
> wrote:
>
> There should be no reason you can't deploy to the controller machine using
> the manual provider, just like any other cloud.
>
> juju bootstrap manual/x.x.x.x  mycloud
> juju switch controller
> juju deploy  --to 0
>
> Switching to the controller model is probably what you were missing, since
> the default model comes with no machines.
>
> On Mon, Dec 5, 2016 at 9:27 AM Rick Harding 
> wrote:
>
> I'll have to test it out but I would think that you could
>
> 1) bring up a machine, create a container on it, bootstrap to that
> container as the controller, create another container, and then add-machine
> it as a second machine and things should work ok.
>
> 2) I wonder if you can bootstrap to a machine, manually add container on
> that machine, and then add that container with add-machine.
>
> I'm guessing there's some bits about making sure the added containers have
> the ssh key you want to use for the ssh connection for add-machine.
>
> On Mon, Dec 5, 2016 at 3:18 PM Matthew Williams <
> matthew.willi...@canonical.com> wrote:
>
> Hey Folks,
>
> I notice the docs state that at least two instances are needed for the
> manual provider: https://jujucharms.com/docs/stable/clouds-manual. Some
> quick playing around suggests that this is indeed the case.
>
> Is there a technical reason why? I'd love to spin up a charm on [insert
> vps provider here] and only spend money for one instance
>
> Matty


Re: One instance manual provider

2016-12-05 Thread Nate Finch
There should be no reason you can't deploy to the controller machine using
the manual provider, just like any other cloud.

juju bootstrap manual/x.x.x.x  mycloud
juju switch controller
juju deploy  --to 0

Switching to the controller model is probably what you were missing, since
the default model comes with no machines.

On Mon, Dec 5, 2016 at 9:27 AM Rick Harding 
wrote:

> I'll have to test it out but I would think that you could
>
> 1) bring up a machine, create a container on it, bootstrap to that
> container as the controller, create another container, and then add-machine
> it as a second machine and things should work ok.
>
> 2) I wonder if you can bootstrap to a machine, manually add container on
> that machine, and then add that container with add-machine.
>
> I'm guessing there's some bits about making sure the added containers have
> the ssh key you want to use for the ssh connection for add-machine.
>
> On Mon, Dec 5, 2016 at 3:18 PM Matthew Williams <
> matthew.willi...@canonical.com> wrote:
>
> Hey Folks,
>
> I notice the docs state that at least two instances are needed for the
> manual provider: https://jujucharms.com/docs/stable/clouds-manual. Some
> quick playing around suggests that this is indeed the case.
>
> Is there a technical reason why? I'd love to spin up a charm on [insert
> vps provider here] and only spend money for one instance
>
> Matty


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
I generally assume that "hard to type" doesn't apply when you're talking
about someone's native language.  Yes, you or I would have trouble
typing 數據庫 (database), but to someone in China, that's probably a word they
type all the time.  Forcing people to use an English translation for the
name of their software is not very welcoming in this day and age.

On Fri, Dec 2, 2016 at 10:46 AM Marco Ceppi 
wrote:

> On Fri, Dec 2, 2016 at 10:31 AM Nate Finch 
> wrote:
>
> One thing we *could* do to support non-english names that would not
> entirely open the door to emoji etc is to simply constrain the names to
> unicode letters and numbers.  Thus you could name something 數據庫 but
> not .
>
>
> I bothered Rick about this a while ago (half jokingly) since I own
> http://💩☁.ws <http://xn--l3h.ws> (poo cloud!) and was going to make a charm
> accompanying that name. Localized unicode characters - emoji or otherwise -
> are still a difficult UX compared to alphanumerics. It takes me 10 mins to
> find the emojis to type the damn domain in if I'm not on a phone.
>
> The only path for unicode names I could see happening, and it's a stretch,
> is if the application name can be set to a larger range of characters.
> Where you may want to name horizon deployed in your environment to
> something localized (or emoji) but the charm name should be flat and simple.
>
> On Fri, Dec 2, 2016 at 9:29 AM Mark Shuttleworth  wrote:
>
> On 02/12/16 09:23, Adam Collard wrote:
>
> True, but we could do normalisation in the charm store to prevent
> malicious names. I think it's an important aspect of software in the modern
> world that it can support the wide array of languages that we humans use.
>
>
> This just transfers the definition of 'OK' to a different codebase.
>
> It's much better to have a simple rule that can be well documented,
> enforced the same way in store and client and snapd, and typed on any
> laptop without having to refer to the internet for assistance.
>
>
> Mark
>
>


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
One thing we *could* do to support non-english names that would not
entirely open the door to emoji etc is to simply constrain the names to
unicode letters and numbers.  Thus you could name something 數據庫 but
not .

On Fri, Dec 2, 2016 at 9:29 AM Mark Shuttleworth  wrote:

> On 02/12/16 09:23, Adam Collard wrote:
>
> True, but we could do normalisation in the charm store to prevent
> malicious names. I think it's an important aspect of software in the modern
> world that it can support the wide array of languages that we humans use.
>
>
> This just transfers the definition of 'OK' to a different codebase.
>
> It's much better to have a simple rule that can be well documented,
> enforced the same way in store and client and snapd, and typed on any
> laptop without having to refer to the internet for assistance.
>
>
> Mark
>


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
There's no technical reason for the restriction, AFAIK.  I believe it's
just aesthetic.

On Fri, Dec 2, 2016, 5:50 AM James Page  wrote:

> Hi All
>
> Is there a specific rationale for application names being limited to not
> starting with a digit?  I get why they can't end with one but I don't see
> why juju should place the same restriction on the start of an app name?
> Example (where we tripped over this):
>
>juju deploy ubuntu 6wind-virtual-accelerator
>
> (we deployed ubuntu just to test the name, taken from a proposed charm
> where we had testing problems due to this limitation)
>
> Thoughts?
>
> James


Re: A (Very) Minimal Charm

2016-12-01 Thread Nate Finch
So, the whole point of my original email was to say - if you want a minimal
charm, just make one, it's easy.
I just published my above-mentioned minimal charm as cs:~natefinch/nop.

It's not showing up on jujucharms.com, maybe because charm proof is failing
because it's missing everything except the metadata?  I don't know, but it
still works with juju deploy.

On Thu, Dec 1, 2016 at 10:07 PM José Antonio Rey  wrote:

> Wouldn't it be possible for us to implement a configuration flag, and
> have it as default? Going back to the general point, the idea behind the
> ubuntu charm is to have a vanilla Ubuntu where you can work on
> anything. I understand we're mostly using it for testing, and reactive
> is now a big part of the ecosystem. I find two simple approaches for this:
>
>   * Create a ubuntu-vanilla charm which doesn't install any of the
> required packages OR
>   * Implement a 'vanilla' boolean configuration flag, where you can
> specify True for the vanilla Ubuntu install, False to install all of
> the reactive/other dependencies.
>
> If we get to work around the pyyaml issue and implement a better
> solution in the long term, I think that would be amazing. However, we
> can't let one dependency drag us down, and in the meanwhile, we have to
> implement a workaround.
>
> On 12/01/2016 01:56 PM, Cory Johns wrote:
> > Marco,
> >
> > What is the issue you mentioned with using snaps where you mentioned
> > needing an "unconfined classic snap"?
> >
> > On Thu, Dec 1, 2016 at 1:13 PM, Marco Ceppi  > <mailto:marco.ce...@canonical.com>> wrote:
> >
> > On Thu, Dec 1, 2016 at 12:56 PM Casey Marshall
> > mailto:casey.marsh...@canonical.com>>
> > wrote:
> >
> > On Thu, Dec 1, 2016 at 6:53 AM, Marco Ceppi
> > mailto:marco.ce...@canonical.com>>
> > wrote:
> >
> > On Thu, Dec 1, 2016 at 5:00 AM Adam Collard
> >  > <mailto:adam.coll...@canonical.com>> wrote:
> >
> > On Thu, 1 Dec 2016 at 04:02 Nate Finch
> >  > <mailto:nate.fi...@canonical.com>> wrote:
> >
> > On IRC, someone was lamenting the fact that the
> > Ubuntu charm takes longer to deploy now, because it
> > has been updated to exercise more of Juju's
> > features.  My response was - just make a minimal
> > charm, it's easy.  And then of course, I had to
> > figure out how minimal you can get.  Here it is:
> >
> > It's just a directory with a metadata.yaml in it
> > with these contents:
> >
> > name: min
> > summary: nope
> > description: nope
> > series:
> >   - xenial
> >
> > (obviously you can set the series to whatever you
> want)
> > No other files or directories are needed.
> >
> >
> > This is neat, but doesn't detract from the bloat in the
> > ubuntu charm.
> >
> >
> > I'm happy to work though changes to the Ubuntu charm to
> > decrease "bloat".
> >
> >
> > IMHO the bloat in the ubuntu charm isn't from support
> > for Juju features, but the switch to reactive plus
> > conflicts in layer-base wanting to a) support lots of
> > toolchains to allow layers above it to be slimmer and b)
> > be a suitable base for "just deploy me" ubuntu.
> >
> >
> > But it is to support the reactive framework, where we
> > utilize newer Juju features, like status and
> > application-version to make the charm rich despite its
> > minimal goal set. Honestly, a handful of cached wheelhouses
> > and some apt packages don't strike me as bloat, but I do
> > want to make sure the Ubuntu charm works for those using it.
> So,
> >
> >
> > I think a minimal wheelhouse to provide a consistent charm hook
> > runtime is very reasonable and definitely not the problem here.
> >
> > There are too many packages that get installed by default with
> > the reactive framework that most charms don't ne

A (Very) Minimal Charm

2016-11-30 Thread Nate Finch
On IRC, someone was lamenting the fact that the Ubuntu charm takes longer
to deploy now, because it has been updated to exercise more of Juju's
features.  My response was - just make a minimal charm, it's easy.  And
then of course, I had to figure out how minimal you can get.  Here it is:

It's just a directory with a metadata.yaml in it with these contents:

name: min
summary: nope
description: nope
series:
  - xenial

(obviously you can set the series to whatever you want)
No other files or directories are needed.

Figured this might be useful for others.

-Nate


Re: "unfairness" of juju/mutex

2016-11-16 Thread Nate Finch
Just for historical reference: the original implementation of the new OS
mutex used flock, until Dave mentioned that it presented problems with file
management (files getting renamed, deleted, etc.).

In general, I'm definitely on the side of using flock, though I don't think
it necessarily solves our starvation problem; that depends on how flock is
implemented and on the specific behavior of our units.


Re: Dependencies added for new jsonschema support

2016-11-01 Thread Nate Finch
Thanks, Katherine, I had intended to mention in the email that this was a
request for review by the tech board :)

On Tue, Nov 1, 2016, 5:14 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Nate Finch  writes:
>
> > In support of my PR that adds an interactive mode for add-cloud, we
> needed to add
> > jsonschema for the cloud configurations that can be inspected at runtime
> to generate the
> > interactive queries.
>
> Thanks for the great write-up, Nate. I've added it to the tech board
> agenda[1] and we should review tonight.
>
> --
> Katherine
>
> [1] -
> https://docs.google.com/document/d/13nmOm6ojX5UUNtwfrkqr1cR6eC5XDPtnhN5H6pFLfxo/edit
>


Dependencies added for new jsonschema support

2016-11-01 Thread Nate Finch
In support of my PR  that adds an
interactive mode for add-cloud, we needed to add jsonschema for the cloud
configurations that can be inspected at runtime to generate the interactive
queries.

Unfortunately, the jsonschema package we're currently using (gojsonschema)
does not expose the structure of the schema, so it's not able to be
inspected by internal code (
https://godoc.org/github.com/xeipuuv/gojsonschema#Schema).

In looking for a new jsonschema package, I was pointed at
github.com/lestrrat/go-jsschema, which is actively maintained and does
expose the schema structure.

I have made a wrapper package for go-jsschema at github.com/juju/jsonschema
which adds some extra fields to the schema to support our particular use
case, but delegates most of the heavy lifting to the underlying package.

Currently, importing go-jsschema imports 5 other repos from the same author
in support of jsonschema, and one package by a third party (dave's
github.com/pkg/errors).  I am asking about making a PR to remove one of
those dependencies, but it still leaves us with 6 new dependencies (5 of
which are from the same author).

Here's the additions:
github.com/lestrrat/go-jsschema
github.com/lestrrat/go-jspointer
github.com/lestrrat/go-jsref
github.com/lestrrat/go-jsval
github.com/lestrrat/go-pdebug
github.com/lestrrat/go-structinfo
github.com/pkg/errors

The pdebug one is the one I can easily remove with a PR.  I'd like to
remove the structinfo one, but that might be less likely (would require
copying a few dozen lines of code, and I doubt the guy would be amenable...
but I can ask).

However, this should work as a more or less drop-in
replacement for the existing forks of the gojsonschema package which we're
maintaining, which would remove 3 dependencies (albeit ones more or less
maintained by us).  In theory, we could also remove environschema at some
point and replace it with jsonschema, but that's a bigger job.

-Nate


Long tests

2016-10-27 Thread Nate Finch
I ran the tests serially today to try to get a picture of what tests
actually take a long time without worrying about contention.  The full run
took almost 45 minutes (normally it takes about 15 minutes when testing
packages in parallel).

Here's the full output (sorry for the google drive link, pastebin got mad
at the length):
https://drive.google.com/file/d/0B-r4AW1RoHJNR3plVjhwSVM1Z0E/view

Unfortunately, the state tests panicked after a 10 minute timeout, but
there's still some good info in there.

Here's some especially bad low hanging fruit (note these are the times for
single tests, not full suites):

PASS: showoutput_test.go:59: ShowOutputSuite.TestRun 18.007s
PASS: upgradejuju_test.go:307: UpgradeJujuSuite.TestUpgradeJuju 10.722s
PASS: status_test.go:3292: StatusSuite.TestStatusAllFormats 12.255s
PASS: unit_test.go:217: UnitSuite.TestUpgradeFailsWithoutTools 10.200s
PASS: syslog_test.go:220: syslogSuite.TestConfigChange 31.715s
PASS: syslog_test.go:193: syslogSuite.TestLogRecordForwarded 31.731s
PASS: bakerystorage_test.go:72: BakeryStorageSuite.TestExpiryTime 58.037s
PASS: local_test.go:547:
localServerSuite.TestStopInstanceSecurityGroupNotDeleted 29.046s
PASS: kvm-broker_test.go:329:
kvmProvisionerSuite.TestContainerStartedAndStopped 10.098s
PASS: worker_test.go:69:
statusHistoryPrunerSuite.TestWorkerWontCallPruneBeforeFiringTimer 10.000s
PASS: uniter_test.go:1190: UniterSuite.TestActionEvents 39.242s
PASS: uniter_test.go:977: UniterSuite.TestUniterRelations 14.132s

There's about 112 tests that are faster than these, but still take more
than a second to run, shown here: http://pastebin.ubuntu.com/23391234/

-Nate


Re: Big memory usage improvements

2016-10-17 Thread Nate Finch
FYI, this tool looks promising.  Probably worth checking out to see if
it'll help us track down those memory leaks.
https://stackimpact.com/blog/memory-leak-detection-in-production-go-applications/

On Thu, Oct 13, 2016 at 8:02 AM Christian Muirhead <
christian.muirh...@canonical.com> wrote:

> Oops, meant to reply-all.
>
> -- Forwarded message -
> From: Christian Muirhead 
> Date: Thu, 13 Oct 2016, 09:26
> Subject: Re: Big memory usage improvements
> To: Katherine Cox-Buday 
>
>
>
>
> On Wed, 12 Oct 2016, 22:36 Katherine Cox-Buday, <
> katherine.cox-bu...@canonical.com> wrote:
>
> Awesome, good work, Christian!
>
>
> :)
> Thanks for your help with the StatePool!
>
>
> Not to detract from this fantastic news, but it's still worrisome that an
> idle Juju seems to have a memory which is growing linearly (before picture
> looked like the beginning of an exponential curve?). Do we feel this is
> memory which will at some point be GCed?
>
>
> No, I think there's still a leak there. The other ones I fixed were
> goroutine leaks which were a bit easier to track down. There aren't any
> goroutines escaping now so I'll need to switch back to looking at heap
> dumps to find more.
>
> Cheers,
> Christian
>


Re: Github Reviews vs Reviewboard

2016-10-14 Thread Nate Finch
+1

Keeping the PR and reviews together really makes it easier for me to keep
track of what's going on with a PR.  It's also really nice not having to
context switch out of github for every single PR.

Reviewboard and its related infrastructure break about once every couple of
weeks, and I'm not convinced it'll get better, since we've been using it
for quite some time now.

I have missed exactly zero of the features of reviewboard since using
github, and haven't really cared about the drawbacks of github.

One point - you *can* minimize comments in the files view - there's a
checkbox per file that will hide the comments in that file.

On Fri, Oct 14, 2016 at 8:22 AM roger peppe  wrote:

> On 14 October 2016 at 12:45, Adam Collard 
> wrote:
> > Not sure I get a vote, but -1
> >
> > You're running an old version of ReviewBoard (2.0.12 released in January
> > 2015) and many of the issues I think you've been hitting are fixed in
> later
> > revisions. Latest stable is 2.5.6.1, 3.0.x is under active development
> and
> > brings a chunk of new UI improvements.
> >
> > Release notes for 2.5
> >
> > 3.0 demo site
>
> I'm still not convinced.
>
> Even 3.0 still deletes draft comments without so much as a by-your-leave
> when you double-click somewhere else in the text. And because it doesn't
> use
> real text entry boxes, the Lazarus plugin, my usual saviour in such cases,
> doesn't work. I've lost far too much time to this in the past.
>
> Replying to a comment still involves a page reload and associated lost
> context.
>
> I can't see anything in the 2.5 release notes about fixing behaviour on
> file
> move/rename, though I may well have missed it.
>
> And not being able to deal with really large PRs is a definite issue too
> (not
> that github is better there).
>
>   cheers,
> rog.
>
>


Re: List plugins installed?

2016-09-29 Thread Nate Finch
Seems like the easiest thing to do is have a designated plugin directory
and have a juju install command copy the binary/script there.  Then
we're only running plugins the user has specifically asked to install.

On Thu, Sep 29, 2016, 4:33 AM Stuart Bishop 
wrote:

> On 28 September 2016 at 22:45, roger peppe 
> wrote:
>
>> On 28 September 2016 at 14:55, Rick Harding 
>> wrote:
>> > This is just a miss. The original ability to see the plugins was a
>> subset of
>> > the help command and didn't make our CLI spreadsheet for things to
>> rework. I
>> > agree that list-plugins is the right idea here and that means that
>> plugins
>> > becomes a noun in our language.
>> >
>> > What's interesting is that add/remove fall out because that
>> > installing/uninstalling. I think that show-plugin might be interesting
>> to
>> > auto run the --description flag to bring it into CLI alignment with the
>> new
>> > world order.
>>
>> I've voiced discomfort with this before - I don't think that we should
>> arbitrarily run all executables that happen to have a "juju-" prefix.
>> It's potentially dangerous (for example, note that although git relies
>> heavily
>> on plugins, it doesn't execute a plugin until you explicitly name it).
>>
>> Perhaps there could be a standard way for a plugin to provide
>> metadata about itself as a data file.
>>
>>
> It also might be time to work out how a Juju snap is going to call or
> install plugins. I don't think the existing design is going to work, and
> there is still time to flag it as deprecated in the changelogs for 2.0 and
> work out the way forward for 2.1.
>
>
> --
> Stuart Bishop 
>


Re: Timeout added to kill-controller

2016-09-26 Thread Nate Finch
This is great. I have been wanting a way to just tell juju to shoot the VMs
in the head during development.

On Mon, Sep 26, 2016, 7:54 PM Tim Penhey  wrote:

> Hi all,
>
> NOTE: we do very much consider it a bug if the models don't die properly.
>
> I have just landed a fix for a kill-controller issue where it would just
> sit there for a long time with nothing apparent happening.
>
> Now kill-controller has a default timeout of 5 minutes. If there is no
> change in the timeout period, the command switches to a direct
> destruction mode where it will contact the cloud provider on behalf of
> each remaining model and destroy it that way.
>
> The following examples all used LXD, and were a single controller
> machine with ubuntu deployed in the default model.
>
> $ time juju kill-controller kill-test-fine -y
> Destroying controller "kill-test-fine"
> Waiting for resources to be reclaimed
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model
> All hosted models reclaimed, cleaning up controller machines
>
> real    0m27.443s
>
>
> Nothing much changes here. Everything died nicely.
> You can specify a timeout with --timeout (or -t). Valid formats are
> things like "2m" for two minutes, "30s" for thirty seconds.
> Zero also works:
>
> $ time juju kill-controller kill-test-no-delay -t 0 -y
> Destroying controller "kill-test-no-delay"
> Waiting for resources to be reclaimed
> Killing admin@local/default directly
>done
> All hosted models destroyed, cleaning up controller machines
>
> real    0m2.492s
>
>
> I had to throw a wrench in the works to get the provisioner to not kill
> the machine (wrench is a test facility we have). This allows me to show
> you a model that doesn't die like it should. I just specify a one minute
> timeout. The polling time was moved from two seconds to five seconds.
> Now you will see a countdown starting after 30 seconds of no change.
>
> $ juju kill-controller kill-test -t 1m -y
> Destroying controller "kill-test"
> Waiting for resources to be reclaimed
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine, will kill machines directly in 29s
> Waiting on 1 model, 1 machine, will kill machines directly in 24s
> Waiting on 1 model, 1 machine, will kill machines directly in 19s
> Waiting on 1 model, 1 machine, will kill machines directly in 14s
> Waiting on 1 model, 1 machine, will kill machines directly in 9s
> Waiting on 1 model, 1 machine, will kill machines directly in 4s
> Killing admin@local/default directly
>done
> All hosted models destroyed, cleaning up controller machines
>
>
> Tim
>
>


Re: Reviews on Github

2016-09-20 Thread Nate Finch
Personally, I really enjoy having everything in the same place, more than I
expected. It's also one less barrier to entry for newcomers and outsiders.

On Tue, Sep 20, 2016, 5:54 PM Menno Smits  wrote:

> (gah, hit send too early)
>
> ... If we decide to stay with RB, that will need to be fixed.
>
> On 21 September 2016 at 09:53, Menno Smits 
> wrote:
>
>> Some of us probably got a little excited (me included). There should be
>> discussion and a clear announcement before we make a significant change to
>> our process. The tech board meeting is today/tonight so we'll discuss it
>> there as per Rick's email. Please contribute to this thread if you haven't
>> already and have strong opinions either way on the topic.
>>
>> Interestingly our Github/RB integration seems to have broken a little
>> since Github made these changes. The links to Reviewboard on pull requests
>> aren't getting inserted any more. If we decide to stay with RB
>>
>> On 21 September 2016 at 05:54, Rick Harding 
>> wrote:
>>
>>> I spoke with Alexis today about this and it's on her list to check with
>>> her folks on this. The tech board has been tasked with he decision, so
>>> please feel free to shoot a copy of your opinions their way. As you say, on
>>> the one hand it's a big impact on the team, but it's also a standard
>>> developer practice that not everyone will agree with so I'm sure the tech
>>> board is a good solution to limiting the amount of bike-shedding and to
>>> have some multi-mind consensus.
>>>
>>> On Tue, Sep 20, 2016 at 1:52 PM Katherine Cox-Buday <
>>> katherine.cox-bu...@canonical.com> wrote:
>>>
>>>> Seems like a good thing to do would be to ensure the tech board doesn't
>>>> have any objections and then put it to a vote since it's more a property of
>>>> the team and not the codebase.
>>>>
>>>> I just want some consistency until a decision is made. E.g. "we will be
>>>> trying out GitHub reviews for the next two weeks; all reviews should be
>>>> done on there".
>>>>
>>>> --
>>>> Katherine
>>>>
>>>> Nate Finch  writes:
>>>>
>>>> > Can we try reviews on github for a couple weeks? Seems like we'll
>>>> > never know if it's sufficient if we don't try it. And there's no setup
>>>> > cost, which is nice.
>>>> >
>>>> > On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday
>>>> >  wrote:
>>>> >
>>>> > I see quite a few PRs that are being reviewed in GitHub and not
>>>> > ReviewBoard. I really don't care where we do them, but can we
>>>> > please pick a direction and move forward? And until then, can we
>>>> > stick to our previous decision and use RB? With people using both
>>>> > it's much more difficult to tell what's been reviewed and what
>>>> > hasn't.
>>>> >
>>>> > --
>>>> > Katherine
>>>> >
>>>> > Nate Finch  writes:
>>>> >
>>>> > > In case you missed it, Github rolled out a new review process.
>>>> > It
>>>> > > basically works just like reviewboard does, where you start a
>>>> > review,
>>>> > > batch up comments, then post the review as a whole, so you don't
>>>> > just
>>>> > > write a bunch of disconnected comments (and get one email per
>>>> > review,
>>>> > > not per comment). The only features reviewboard has is the edge
>>>> > case
>>>> > > stuff that we rarely use: like using rbt to post a review from a
>>>> > > random diff that is not connected directly to a github PR. I
>>>> > think
>>>> > > that is easy enough to give up in order to get the benefit of
>>>> > not
>>>> > > needing an entirely separate system to handle reviews.
>>>> > >
>>>> > > I made a little test review on one PR here, and the UX was
>>>> > almost
>>>> > > exactly like working in reviewboard:
>>>> > > https://github.com/juju/juju/pull/6234
>>>> > >
>>>> > > There may be important edge cases I'm missing, but I think it's
>>>> > worth
>>>> > > looking into.
>>>> > >
>>>> > > -Nate
>>>>
>>>>
>>>
>>>
>>>
>>
>


Re: Reviews on Github

2016-09-20 Thread Nate Finch
Can we try reviews on github for a couple weeks?  Seems like we'll never
know if it's sufficient if we don't try it.  And there's no setup cost,
which is nice.

On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> I see quite a few PRs that are being reviewed in GitHub and not
> ReviewBoard. I really don't care where we do them, but can we please pick a
> direction and move forward? And until then, can we stick to our previous
> decision and use RB? With people using both it's much more difficult to
> tell what's been reviewed and what hasn't.
>
> --
> Katherine
>
> Nate Finch  writes:
>
> > In case you missed it, Github rolled out a new review process. It
> > basically works just like reviewboard does, where you start a review,
> > batch up comments, then post the review as a whole, so you don't just
> > write a bunch of disconnected comments (and get one email per review,
> > not per comment). The only features reviewboard has is the edge case
> > stuff that we rarely use: like using rbt to post a review from a
> > random diff that is not connected directly to a github PR. I think
> > that is easy enough to give up in order to get the benefit of not
> > needing an entirely separate system to handle reviews.
> >
> > I made a little test review on one PR here, and the UX was almost
> > exactly like working in reviewboard:
> > https://github.com/juju/juju/pull/6234
> >
> > There may be important edge cases I'm missing, but I think it's worth
> > looking into.
> >
> > -Nate
>


Re: Reviews on Github

2016-09-15 Thread Nate Finch
Reviewboard goes down a couple times a month, usually from lack of disk
space or some other BS.  According to a source knowledgeable with these
matters, the charm was rushed out, and the agent for that machine is down
anyway, so we're kinda just waiting for the other shoe to drop.

As for the process things that Ian mentioned, most of those can be
addressed with a sprinkling of convention.  Marking things as issues could
just be adding :x: to the first line (github even pops up suggestions and
auto-completes), thusly:

:x: This will cause a race condition

And if you want to indicate you're dropping a suggestion, you can use :-1:,
which gives you a thumbs down:

:-1: I ran the race detector and it's fine.

It won't give you the cumulative "what's left to fix" at the top of the
page, like reviewboard... but for me, I never directly read that, anyway,
just used it to see if there were zero or non-zero comments left.

As for the inline comments in the code - there's a checkbox to hide them
all.  It's not quite as convenient as the gutter indicators per-comment,
but it's sufficient, I think.

On Wed, Sep 14, 2016 at 6:43 PM Ian Booth  wrote:

>
>
> On 15/09/16 08:22, Rick Harding wrote:
> > I think that the issue is that someone has to maintain the RB and the
> > cost/time spent on that does not seem commensurate with the bonus
> features
> > in my experience.
> >
>
> The maintenance is not that great. We have SSO using github credentials so
> there's really no day-to-day work, IIANM. As a team, we do many, many
> reviews
> per day, and the features I outlined are significant and things I (and I
> assume
> others) rely on. Don't under estimate the value in knowing why a comment
> was
> rejected and being able to properly track that. Or having review comments
> collapsed as a gutter indicator so you can browse the code and still know
> that
> there's a comment there. With github, you can hide the comments but
> there's no
> gutter indicator. All these things add up to a lot.
>
>
> > On Wed, Sep 14, 2016 at 6:13 PM Ian Booth 
> wrote:
> >
> >> One thing review board does better is use gutter indicators so as not to
> >> interrupt the flow of reading the code with huge comment blocks. It also
> >> seems
> >> much better at allowing previous commits with comments to be viewed in
> >> their
> >> entirety. And it allows the reviewer to differentiate between issues and
> >> comments (ie fix this vs take note of this), plus it allows the notion
> of
> >> marking stuff as fixed vs dropped, with a reason for dropping if needed.
> >> So the
> >> github improvements are nice but there's still a large and significant
> gap
> >> that
> >> is yet to be filled. I for one would miss all the features reviewboard
> >> offers.
> >> Unless there's a way of doing the same thing in github that I'm not
> aware
> >> of.
> >>
> >> On 15/09/16 07:22, Tim Penhey wrote:
> >>> I'm +1 if we can remove the extra tools and we don't get email per
> >> comment.
> >>>
> >>> On 15/09/16 08:03, Nate Finch wrote:
> >>>> In case you missed it, Github rolled out a new review process.  It
> >>>> basically works just like reviewboard does, where you start a review,
> >>>> batch up comments, then post the review as a whole, so you don't just
> >>>> write a bunch of disconnected comments (and get one email per review,
> >>>> not per comment).  The only features reviewboard has is the edge case
> >>>> stuff that we rarely use:  like using rbt to post a review from a
> random
> >>>> diff that is not connected directly to a github PR. I think that is
> easy
> >>>> enough to give up in order to get the benefit of not needing an
> entirely
> >>>> separate system to handle reviews.
> >>>>
> >>>> I made a little test review on one PR here, and the UX was almost
> >>>> exactly like working in reviewboard:
> >> https://github.com/juju/juju/pull/6234
> >>>>
> >>>> There may be important edge cases I'm missing, but I think it's worth
> >>>> looking into.
> >>>>
> >>>> -Nate
> >>>>
> >>>>
> >>>
> >>
> >>
> >
>
>


Reviews on Github

2016-09-14 Thread Nate Finch
In case you missed it, Github rolled out a new review process.  It
basically works just like reviewboard does, where you start a review, batch
up comments, then post the review as a whole, so you don't just write a
bunch of disconnected comments (and get one email per review, not per
comment).  The only features reviewboard has is the edge case stuff that we
rarely use:  like using rbt to post a review from a random diff that is not
connected directly to a github PR. I think that is easy enough to give up
in order to get the benefit of not needing an entirely separate system to
handle reviews.

I made a little test review on one PR here, and the UX was almost exactly
like working in reviewboard: https://github.com/juju/juju/pull/6234

There may be important edge cases I'm missing, but I think it's worth
looking into.

-Nate


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-09-07 Thread Nate Finch
Just a note, because it wasn't clear to me - there are a couple cases where
the automatic upload tools won't do what you want, if you use a version of
juju you built locally.

If you're not a developer or someone who builds juju from source, it's safe
to ignore this email.

*1. If the version of juju you're building is available in streams and you
*want* to upload, you have to use --build-agent, because by default we
won't upload.  *
This happens if you're purposely building an old release, or if QA just
released a new build and you haven't pulled yet and/or master hasn't been
updated with a new version number.  --build-agent works like the old
upload-tools, except it always rebuilds jujud, even if there's an existing
jujud binary.

*2. If you're building a version of the code that is not available in
streams, and you *don't* want to upload, you must use
--agent-version=.*
This can happen if you want to deploy a known-good server version, but only
have a dev client around.  I use this to make sure I can reproduce bugs
before testing my fixes.  --agent-version works basically like the old
default (non-upload) behavior, except you have to explicitly specify a juju
version that exists in streams to deploy (e.g. --agent-version=2.0-beta17)


Note that if you want to be *sure* that juju bootstrap always does what you
expect it to, IMO you should always use either --build-agent (to upload) or
--agent-version (to not upload), depending on your intent.  The behavior of
the bare juju bootstrap can change without warning (from uploading to not
uploading) if a new version of jujud is pushed to streams that matches what
you're building locally (which happens every time a new build is pushed to
streams, until master is updated and you git pull and rebuild), and that
can be really confusing if you are expecting your changes to be uploaded,
and they're not.  It also changes behavior if you switch from master (which
always uploads) to a release tag (which never uploads), which while
predictable, can be easy to forget.
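To make the intent explicit regardless of what's currently in streams, the two forms look like this (everything after the flags — cloud, controller name, and so on — is elided here; the pinned version is just the example from above):

```shell
# Always rebuild and upload the local jujud, even if the version is in streams:
juju bootstrap --build-agent ...

# Never upload; pin to an agent version that already exists in streams:
juju bootstrap --agent-version=2.0-beta17 ...
```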

Related note, beta18 (which is in current master) and later versions of the
client can't bootstrap with --agent-version 2.0-beta17 or earlier, due to a
breaking change (you'll get an error about mismatching UUIDs).  This type
of breakage is rare, and generally should only happen during betas (or
pre-beta), but it impacts us right now, so... yeah.

-Nate


Re: kill-controller, unregister, destroy-controller

2016-09-02 Thread Nate Finch
This is one of the reasons we always make you type out the controller name
for destroy controller.  Because

juju destroy-controller production-website

should ring a bell.  Simply adding a long flag doesn't really help you
remember if you're killing something important or not, because you always
have to type the same flag for every controller.

This is also why I just always use kill-controller... it's like
destroy-controller --destroy-all-models except you don't need the flag and
it's easier to type even disregarding the flag.  I'm sure I'm not the only
one that will figure this out and just avoid destroy-controller, which
defeats the purpose of trying to make it safer.

I agree with Roger we should be encouraging people to put blocks on
important controllers, and not rely on typing out long things that aren't
actually helpful.



On Fri, Sep 2, 2016 at 7:16 AM roger peppe 
wrote:

> It seems to me that this kind of thing is exactly what "blocks" are
> designed for. An explicit unblock command seems better to me than either an
> explicit flag or an extra prompt, both of which are vulnerable to typing
> without thinking. Particularly if "throwaway" controllers created for
> testing purposes are not blocked by default, so you don't get used to to
> typing "unblock" all the time.
>
> On 1 Sep 2016 16:14, "Mark Ramm-Christensen (Canonical.com)" <
> mark.ramm-christen...@canonical.com> wrote:
>
> I get the desire to remove friction everywhere we can, but unless
> destroying controllers is a regular activity, I actually think SOME
> friction is valuable here.
>
> If controllers are constantly getting into a wedged state where they must
> be killed, that's likely a product of a fast moving set of betas, and
> should be addressed *directly* rather than as they say "applying lipstick
> to a pig"
>
> On Thu, Sep 1, 2016 at 4:04 PM, Marco Ceppi 
> wrote:
>
>> On Thu, Sep 1, 2016 at 9:59 AM Mark Ramm-Christensen (Canonical.com) <
>> mark.ramm-christen...@canonical.com> wrote:
>>
>>> I believe keeping the --destroy-all-models flag is helpful in keeping
>>> you from accidentally destroying a controller that is hosting important
>>> models for someone without thinking.
>>>
>>
>> What happens if I destroy-controller without that flag? Do I have to go
>> into my cloud portal to kill those instances? Is there any way to recover
>> from that to get juju reconnected? If not, it's just a slower death.
>>
>>
>>> On Thu, Sep 1, 2016 at 3:40 PM, Marco Ceppi 
>>> wrote:
>>>
 Hey everyone,

 I know we've had discussions about this over the past few months, but
 it seems we have three commands that overlap pretty aggressively.

 Using Juju beta16, and trying to 'destroy' a controller it looks like
 this now:

 ```
 root@ubuntu:~# juju help destroy-controller
 Usage: juju destroy-controller [options] 

 ...

 Details:
 All models (initial model plus all workload/hosted) associated with the
 controller will first need to be destroyed, either in advance, or by
 specifying `--destroy-all-models`.

 Examples:
 juju destroy-controller --destroy-all-models mycontroller

 See also:
 kill-controller
 unregister
 ```

 When would you ever want to destroy-controller and not
 destroy-all-models? I have to specify that flag everytime, it seems it
 should just be the default behavior. Kill-controller seems to do what
 destroy-controller --destroy-all-models does but more aggressively?

 Finally, unregister and destroy-controller (without
 --destroy-all-models) does the same thing. Can we consider dropping the -
 very long winded almost always required - flag for destroy-controller?

 Finally, there used to be a pretty good amount of feedback during
 destroy-controller, while it was rolling text, I at least knew what was
 happening. Now it's virtually silent. Given it runs for quite a long time,
 can we get some form of feedback to the user back into the command?

 ```
 root@ubuntu:~# juju destroy-controller --destroy-all-models cabs
 WARNING! This command will destroy the "cabs" controller.
 This includes all machines, applications, data and other resources.

 Continue? (y/N):y
 ERROR failed to destroy controller "cabs"

 If the controller is unusable, then you may run

 juju kill-controller

 to forcibly destroy the controller. Upon doing so, review
 your cloud provider console for any resources that need
 to be cleaned up.

 ERROR cannot connect to API: unable to connect to API: websocket.Dial
 wss://10.0.0.4:17070/api: dial tcp 10.0.0.4:17070: getsockopt: no
 route to host
 root@ubuntu:~# juju kill-controller cabs
 WARNING! This command will destroy the "cabs" controller.
 This includes all machines, applications, data and other resources.

 Continue? (y/N):y
 Un

Re: 1.25.6-xenial-amd64 and lxc containers

2016-08-30 Thread Nate Finch
You don't need anything in the environment.yaml.  You specify that you want
to deploy something to a lxc container in the deploy command.  e.g.

juju deploy mysql --to lxc   // deploys mysql to a new lxc container on a
new machine
juju deploy wordpress --to lxc:25  // deploys wordpress to a new lxc
container on the already-existing machine #25
juju deploy haproxy --to 1/lxc/3  // deploys haproxy to lxc container #3 on
machine #1

you can also use add-machine to create empty containers:

juju add-machine lxc // adds a new machine with an empty lxc container on it
juju add-machine lxc:5 // adds a new empty lxc container on machine 5

Hope that helps.
-Nate

On Tue, Aug 30, 2016 at 4:16 PM Daniel Bidwell  wrote:

> What do I have to put in my .juju/environment.yaml to get juju-1.25.6-
> xenial-amd64 to use lxc containers?
> --
> Daniel Bidwell 
>
>
>


Juju should be built with go 1.6

2016-08-22 Thread Nate Finch
This shouldn't be news, but Juju must be built with go 1.6+ (we only
officially support go 1.6.0, but higher versions should work as well).

However, there was a bug in the 1.25 makefile, where it hadn't been updated
to install go 1.6, instead it was installing go 1.2.1 still.  Until now,
1.25 built just fine with go 1.2... but as of my next PR it will no longer
build with go 1.2 (due to values new to the go 1.6 stdlib that we now
depend on).

Juju 2.0 already had the update to the makefile, but I am today landing a
couple files that will hopefully make it more clear what's wrong if someone
tries to build with an earlier version of go.  In cmd/juju and cmd/jujud
there lives a version_canary.go file (one file for each directory, but
they're identical).  The content simply looks like this:

// Copyright 2016 Canonical Ltd.
// Licensed under the AGPLv3, see LICENCE file for details.

// +build !go1.6

package main

// This line intentionally does not compile.  This file will only be compiled if
// you are compiling with a version of Go that is lower than the one we support.
var requiredGoVersion = This_project_requires_Go_1_6


If this file is compiled on a version of go that is 1.6 or higher, it'll
get excluded from the build, and all will be well. If you compile it on a
version of go that is lower than 1.6, you'll get an error like this:

./version_canary.go:10: undefined: This_project_requires_Go_1_6

This is (hopefully) a much clearer error message than something like this:

../utils/tls.go:24: undefined: tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

(which is the error message we used to get if we compiled with an earlier
version of go).
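For contrast, the same guard could be done at run time by parsing runtime.Version() — a hypothetical alternative shown only for illustration; the compile-time canary above is what juju actually does, and it fails earlier and more reliably:

```go
package main

import (
	"fmt"
	"runtime"
	"strconv"
	"strings"
)

// atLeastGo reports whether a version string like "go1.6.2" is at least
// major.minor. Development builds ("devel ...") are assumed new enough.
// This is an illustrative alternative, not juju's approach.
func atLeastGo(version string, major, minor int) bool {
	if !strings.HasPrefix(version, "go") {
		return true // devel builds: assume recent
	}
	parts := strings.Split(strings.TrimPrefix(version, "go"), ".")
	if len(parts) < 2 {
		return false
	}
	maj, err1 := strconv.Atoi(parts[0])
	mnr, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return maj > major || (maj == major && mnr >= minor)
}

func main() {
	if !atLeastGo(runtime.Version(), 1, 6) {
		fmt.Println("This project requires Go 1.6+")
		return
	}
	fmt.Println("Go version OK")
}
```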

Let me know if you have any questions.  And if someone was still building
1.25 with go 1.2 for the faster compile times... my apologies (but it
wouldn't have worked after today anyway).

-Nate


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Nate Finch
Ian, can you describe how Juju decides if it's running for a developer or
an end user?  I'm worried this could trip people up who are both end users
and happen to have a juju development environment.

On Mon, Aug 15, 2016 at 11:29 AM Mark Shuttleworth  wrote:

> On 15/08/16 08:27, Ian Booth wrote:
> > So if you pull master you'll no longer need to use upload-tools.
> > Juju will Do the Right Thing*, when you type:
>
> Thanks Ian, this is a lovely simplification.
>
> Mark
>
>
>


Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
The main problem with putting code into a different directory is that if
there are files in that repo that reference the package itself, they'd need
to be updated.

So like for example, if flag_test.go imports "launchpad.net/gnuflag" to do
external tests, we'd need to edit that code to import "gnuflag" instead

Also, if there are any third party packages that use one of the same
dependencies we do, we'd need to edit those files, too...

On Fri, Aug 12, 2016 at 2:31 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On Fri, Aug 12, 2016 at 9:11 AM, Nate Finch 
> wrote:
>
>> The problem really stems from two go-isms -
>>
>> 1.) The standard for Go projects is to have the path on disk
>> representative of the URL to the project.
>> 2.) Every file that uses a package has an import statement that
>> references the path on disk.
>>
>> If either one of these were not true, we could have avoided your problem.
>>
>> There's nothing we can do about #2.  Building with the go tool requires
>> that if you use a package in a file, you must have an import statement for
>> it, and if you import "foo/bar/gnuflag", that code must live in
>> $GOPATH/src/foo/bar/gnuflag".
>>
>> For #1, *in theory* we could change things.  Only "go get" knows that
>> $GOPATH/src/launchpad.net/gnuflag corresponds to
>> http://launchpad.net/gnuflag.  When you compile and build, all the go
>> tool cares about is the path on disk and what code is in that folder.  We
>> *could* detach the idea of an import path from the URL it corresponds
>> to.  We could git clone github.com/juju/gnuflag into
>> $GOPATH/src/gnuflag, and the go tool would build it just fine.  the import
>> statement would then be import "gnuflag" and this would work just fine.
>> We'd need to maintain a mapping file of repo to folder on disk, similar to
>> what we do now with dependencies.tsv, but it wouldn't be the end of the
>> world.
>>
>> I'm not entirely convinced this is a good idea, but it would make
>> updating dependencies a lot easier.
>>
> This is interesting, and more or less what I was after in my mail. I
> wasn't sure if there were any out of the box ideas that would make life
> easier, but I hoped to surface some for discussion. Thanks!
>
> I suppose my follow-up question is, why or why not is this a good idea?
> Having to change all of the imports is painful, but as you say, it is an
> intentional design decision by the Go Community. I don't have any real
> feeling or inkling on what other Go projects do, but I suspect at least
> some of them follow this idea of importing all dependencies and updating
> them only if needed. However, by nature of them simply cloning the source
> into the repo, I would also guess they are not able to easily update a
> dependency.
>
>>
>> The source code hack to mgo is due to this problem as well. In theory we
>> can just fork mgo and make our fix, but that would again require us to
>> change a lot of import statements.
>>
>> Part of the problem was that you were changing two very common
>> dependencies that were imported 4 levels deep (i.e. a imports b imports c
>> imports d, and they all import the dependency).  This wouldn't change no
>> matter what we do - you'd still need to start the changes at d and roll
>> them up the stack.  That would be true in any language.
>>
> Yes, I told of my pain, but as painful as it was, I got lots of help from
> many of you. Thank you again for helping land this.
>
>>
>> The "circular" dependency (in quotes because you can't have a true
>> circular dependency in go, but you can have two repos that both depend on
>> each other) shouldn't have landed that way.  Two repos should definitely
>> not depend on each other.  No other repo should ever depend on
>> github.com/juju/juju.  We explicitly make no claims for backwards
>> compatibility, so the thing you're depending on could just disappear one
>> day.  If you need something in there, factor it out into its own repo with
>> a backwards compatibility guarantee first.
>>
>> It *is* a problem that dependencies under juju can be imported in broken
>> states because of our use of dependencies.tsv.  There's nothing that says
>> we couldn't run all the unit tests for all packages juju uses in the
>> landing job for github.com/juju/juju (and any other repo we desire)...
>> just run go test ./... from $GOPATH/src and it'll test everything we have
>> downloaded to build juju with.  That's an easy win, and something we
>> should start doing ASAP in my opinion.

Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
This is why we want to make a juju library, so we have a package that we
know we need to keep backwards compatibility with.  Sure, you can vendor or
pin the revision, but that doesn't help you when a new feature or fix lands
and you update, and everything breaks.  If we have a library we try to keep
backwards compatible, then these problems would be minimized.

On Fri, Aug 12, 2016 at 1:56 PM roger peppe 
wrote:

> On 12 August 2016 at 14:11, Nate Finch  wrote:
> > No other repo should ever depend on github.com/juju/juju.
>
> I have to call this out. There is no decent way to use Juju from Go without
> depending on github.com/juju/juju.
>
> And backward compatibility isn't insurmountable - dependencies can be
> locked or
> vendored. Juju itself uses many non-backwardly compatible
> dependencies, after all.
>
> I think we should be aiming to provide a nice Go API to Juju, whether that
> ends
> up in github.com/juju/juju or not.
>
>   cheers,
> rog.
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
The problem really stems from two go-isms -

1.) The standard for Go projects is to have the path on disk representative
of the URL to the project.
2.) Every file that uses a package has an import statement that references
the path on disk.

If either one of these were not true, we could have avoided your problem.

There's nothing we can do about #2.  Building with the go tool requires
that if you use a package in a file, you must have an import statement for
it, and if you import "foo/bar/gnuflag", that code must live in
$GOPATH/src/foo/bar/gnuflag.

For #1, *in theory* we could change things.  Only "go get" knows that
$GOPATH/src/launchpad.net/gnuflag corresponds to
http://launchpad.net/gnuflag.  When you compile and build, all the go tool
cares about is the path on disk and what code is in that folder.  We
*could* detach
the idea of an import path from the URL it corresponds to.  We could git
clone github.com/juju/gnuflag into $GOPATH/src/gnuflag, and the go tool
would build it just fine.  The import statement would then be import
"gnuflag" and this would work just fine.  We'd need to maintain a mapping
file of repo to folder on disk, similar to what we do now with
dependencies.tsv, but it wouldn't be the end of the world.

I'm not entirely convinced this is a good idea, but it would make updating
dependencies a lot easier.
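
A sketch of how such a mapping file might drive the clones — the two-column format and the entries below are invented for illustration, loosely modeled on dependencies.tsv:

```go
package main

import (
	"fmt"
	"strings"
)

// repoMap models the hypothetical mapping file: one dependency per line,
// "<short path under $GOPATH/src>\t<repo URL>". Entries are made up.
const repoMap = "gnuflag\thttps://github.com/juju/gnuflag\n" +
	"mgo\thttps://github.com/go-mgo/mgo"

// cloneCommands turns each mapping line into the git command that would
// put the repo at a short path, so code imports just "gnuflag".
func cloneCommands(mapping string) []string {
	var cmds []string
	for _, line := range strings.Split(mapping, "\n") {
		parts := strings.SplitN(line, "\t", 2)
		cmds = append(cmds, fmt.Sprintf("git clone %s $GOPATH/src/%s", parts[1], parts[0]))
	}
	return cmds
}

func main() {
	for _, c := range cloneCommands(repoMap) {
		fmt.Println(c)
	}
}
```

A revision column (as dependencies.tsv has today) would still be needed for pinning, and short names invite collisions between forks — part of why this may not be a good idea.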

The source code hack to mgo is due to this problem as well. In theory we
can just fork mgo and make our fix, but that would again require us to
change a lot of import statements.

Part of the problem was that you were changing two very common dependencies
that were imported 4 levels deep (i.e. a imports b imports c imports d, and
they all import the dependency).  This wouldn't change no matter what we do
- you'd still need to start the changes at d and roll them up the stack.
That would be true in any language.

The "circular" dependency (in quotes because you can't have a true circular
dependency in go, but you can have two repos that both depend on each
other) shouldn't have landed that way.  Two repos should definitely not
depend on each other.  No other repo should ever depend on
github.com/juju/juju.  We explicitly make no claims for backwards
compatibility, so the thing you're depending on could just disappear one
day.  If you need something in there, factor it out into its own repo with
a backwards compatibility guarantee first.

It *is* a problem that dependencies under juju can be imported in broken
states because of our use of dependencies.tsv.  There's nothing that says
we couldn't run all the unit tests for all packages juju uses in the
landing job for github.com/juju/juju (and any other repo we desire)... just
run go test ./... from $GOPATH/src and it'll test everything we have
downloaded to build juju with.  That's an easy win, and something we should
start doing ASAP in my opinion.

Finally... a lot of the reasons these things hurt so much is because juju
is a huge project - over 500,000 lines of code.  What is a minor annoyance
in a small project becomes much worse in a large project.  Obviously, part
of that is inescapable - we can't make juju smaller.  I do think it's worth
considering ways to make our lives easier.

-Nate

On Thu, Aug 11, 2016 at 7:08 PM Casey Marshall 
wrote:

> On Thu, Aug 11, 2016 at 5:44 PM, Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> This is a simple story of a man and a simple mission. Eliminate the final
>> 2 dependencies that are in bazaar and launchpad. It makes juju and its
>> dependencies live completely in git. A notable goal, and one that I desired
>> for getting snaps to build with launchpad.
>>
>> I don't feel I need to explain the pain that making a no-source change
>> update to a dependency (point juju and friends to its new location) has
>> been. I've had to carefully craft a set of PRs, and land them in a certain
>> order. I've encountered contention (because I have to change hundreds of
>> imports), unit test issues (because juju's dependencies aren't tested when
>> merged, so they can be incompatible with the latest juju without knowing
>> it), and circular dependencies that require some magic and wishful thinking
>> to work around.
>>
>> I'm still not finished landing the change, but I hope to do so *soon*. It
>> must be close now!
>>
>> All of this to say, I think it would be useful to have a discussion on
>> how we manage dependencies within the project. From a release perspective,
>> it can be quite cumbersome as dependencies are picked up and dropped. It's
>> also recently made a release (and critical bugfix) really difficult at
>> times. From my newly experienced contributor perspective, I would really
>> think twice before attempting to make an update :-) I suspect I'm not alone.
>>
>> I've heard ideas in the past about cleaning this up, and some things like
>> circular dependencies between romulus and juju are probably best described
>> as tech debt. But there also is some pain in the larger scheme

Re: Let's talk retries

2016-08-09 Thread Nate Finch
So, in my opinion, juju/retry is pretty good.  I think with some tweaking
and some intelligent use, it can feel every bit as lightweight as we'd like.

If we have common use cases, we just need to write up an API to encapsulate
them.  If we want a "retry N times or until this channel is closed" then
write an api for that.  Having preconstructed call args with prepopulated
best practices for delays, backoffs, etc. can make it a lot easier.

There's no reason running a complex retry can't look like this:

// closure to encapsulate local variables
f := func() error { return foo.bar(a, b, c) }
err := retry.WithCancel(f, juju.StdRetry, doneCh)

Where StdRetry is the juju standard retry scheme (standard timeout, backoff
strategy, etc), and doneCh is the channel you expect to get closed to
cancel this action.

That's doable with the current API if we add cancellation (though it would
be nice if we extracted the function-to-run from the args struct, since it
makes it more obvious that the retry args can be reused for different
functions).

I think goroutines should be controlled by the caller.  If the caller wants
to put retry.Foo() in a goroutine, it can... but I don't think it's useful
to have that be retry's responsibility, it complicates the package too much.

On Tue, Aug 9, 2016 at 11:26 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Andrew's queue is certainly nice, but I agree with the points Roger is
> raising:
>
> 1. Go's idiom for coordinating concurrent operations are goroutines and
> channels.
> 2. Sticking to these idioms makes it much easier to compose parts into a
> whole (if only because the language strongly bends code that way).
>
> As a matter of opinion, I also dislike having to instantiate structs to
> encapsulate logic; I would rather write the logic inline, or in a closure
> -- possibly in a goroutine.
>
> William Reade  writes:
>
> > On Tue, Aug 9, 2016 at 10:17 AM, roger peppe
> >  wrote:
> >
> > BTW I'm not saying that a timer queue is never the correct answer.
> > In some
> > circumstances, it can be the exactly the right thing to use.
> >
> >
> > Yeah -- the context here is that katco has been looking at the
> > provisioner, and I think a timer queue is sensible there. Firing off
> > goroutines for every machine is not necessarily bad by any means, but
> > it *is* somewhat harder to get right in every circumstance (e.g. what
> > happens when a machine is removed while that other goroutine is
> > working on provisioning it?).
> >
> > There are certainly many places where a single retryable op *is* the
> > only important thing and there's no call for a control loop. I'm not
> > suggesting we should always use a queue; but I'd tend to encourage it
> > whenever we have a component dealing with many similar retryable
> > tasks. (I definitely should be using it in dependency.Engine, for
> > example.)
> >
> > Cheers
> > William
> >
>
> --
> Katherine
>


hub github helper

2016-08-02 Thread Nate Finch
I've mentioned this before, but with some of our new code review
guidelines, I figured it's good to reiterate.  Github has a CLI tool that
helps with doing git-related things with github.  It's called hub. It's
written in Go, so installing it is as easy as go get github.com/github/hub

Github recommends making an alias to have hub replace git, since it
forwards everything to git that it doesn't understand.  Honestly, I don't
really see any benefit to that.  I prefer to understand what git is doing
versus what hub is doing.

It can do a whole bunch of stuff, but there are two things I use it for the
most - checking out PRs and making PRs.

Since we're supposed to be doing manual testing on people's PRs when we
review them, we need a way to do that.  With hub it's one command:

hub checkout <pr-url>

so, for example:

hub checkout https://github.com/juju/juju/pull/5915

Bam, your local branch is set to a copy of the PR (don't forget to run
godeps).

To make a PR from the CLI using hub, make sure the repo you want to PR
against is the git remote called origin, then you can make a PR with your
current branch by just doing

hub pull-request

This will open an editor to write the PR message, or you can use -m just
like with git commit.

-Nate


Re: Interactive bootstrap has landed

2016-07-27 Thread Nate Finch
Yes, sorry.  Those are the only two command lines that will initiate it.
Any other arguments or flags will cause bootstrap to do the exact same
thing it did before with that argument list.

On Wed, Jul 27, 2016 at 1:24 PM Andreas Hasenack 
wrote:

> On Wed, Jul 27, 2016 at 2:19 PM, Nate Finch 
> wrote:
>
>> If you run "juju bootstrap" or "juju bootstrap --upload-tools", you will
>> now be placed into an interactive mode that will ask you for details about
>> the cloud you'd like to bootstrap on.   Currently this is limited to clouds
>> juju already knows about and only with credentials juju already knows
>> about.  We intend to expand this to let you add a cloud during this
>> bootstrap process, and add credentials, so the entire process is one
>> interactive session.
>>
>
> Does this kick in only when run exactly like that, without further
> parameters?
>
>


Interactive bootstrap has landed

2016-07-27 Thread Nate Finch
If you run "juju bootstrap" or "juju bootstrap --upload-tools", you will
now be placed into an interactive mode that will ask you for details about
the cloud you'd like to bootstrap on.   Currently this is limited to clouds
juju already knows about and only with credentials juju already knows
about.  We intend to expand this to let you add a cloud during this
bootstrap process, and add credentials, so the entire process is one
interactive session.

I'm currently working on making "juju add-cloud" interactive, after which
we'll tie that experience into the bootstrap experience, and then do the
same for "juju add-credential" (which is already interactive, but just
needs to be tied into bootstrap).

-Nate


No help Topics in 2.0

2016-07-26 Thread Nate Finch
Juju 1.x had a ton of help in the CLI, right at your fingertips:

$ juju1 help topics
azure-provider      How to configure a Windows Azure provider
basics              Basic commands
commands            Basic help for all commands
constraints         How to use commands with constraints
ec2-provider        How to configure an Amazon EC2 provider
global-options      Options common to all commands
glossary            Glossary of terms
hpcloud-provider    How to configure an HP Cloud provider
juju                What is Juju?
juju-systems        About Juju Environment Systems (JES)
local-provider      How to configure a local (LXC) provider
logging             How Juju handles logging
maas-provider       How to configure a MAAS provider
openstack-provider  How to configure an OpenStack provider
placement           How to use placement directives
plugins             Show Juju plugins
spaces              How to configure more complex networks using spaces
topics              Topic list
users               About users in Juju

Almost all of this has been removed in 2.0:

$ juju help topics
basics          Basic Help Summary
commands        Basic help for all commands
global-options  Options common to all commands
topics          Topic list

There are a lot of parts of the CLI that *need* that extra level of
documentation.  I'm all for dropping a link to the website at the bottom of
some help docs for further info, but dropping the help entirely seems like
a mistake.  Some things like constraints and placement are used in multiple
commands, but there's no way to know from the CLI how to enter valid
values for them.  Things like the providers are not explained anywhere in
the CLI now, as far as I can tell.

The plaintext docs are the easiest thing for us to update and maintain...
they live right in our own repo and let us update them whenever the code
updates, and they're the easiest thing for users to get to... let's not
drop that on the floor.

-Nate


Re: Cleansing Mongo data

2016-06-24 Thread Nate Finch
It seems as though we should be cleansing all the keys since we never
know what queries we might want to make in the future.
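
For context, the cleansing in question swaps Mongo's special key characters for full-width lookalikes. A self-contained sketch of the idea (the real code lives in mongo/utils/data_cleansing.go; this is not it):

```go
package main

import (
	"fmt"
	"strings"
)

// Mongo treats '$' (operator prefix) and '.' (subdocument separator)
// specially in keys, so they must be escaped before storage. This sketch
// swaps them for their full-width Unicode lookalikes (U+FF04 and U+FF0E),
// the same trick the juju code uses.
var (
	escaper   = strings.NewReplacer("$", "＄", ".", "．")
	unescaper = strings.NewReplacer("＄", "$", "．", ".")
)

func escapeKey(k string) string   { return escaper.Replace(k) }
func unescapeKey(k string) string { return unescaper.Replace(k) }

func main() {
	k := "charm.url$v2"
	fmt.Println(escapeKey(k))              // charm．url＄v2
	fmt.Println(unescapeKey(escapeKey(k))) // charm.url$v2
}
```

As roger notes later in the thread, a key that already contains the full-width characters will not round-trip correctly, since the escaping is not injective.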

On Fri, Jun 24, 2016 at 12:04 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

>
> As I have only just discovered the need to cleanse mongo data, I can't say
> for sure, but it looks like we may have been cleansing things in the parts
> of Juju that need it. William may know more.
>
> If not, I imagine a small upgrade step would make short work of any
> problems.
>
> roger peppe  writes:
>
> > This is useful, thanks.
> >
> > Note that's it's not necessary to cleanse *all* keys that go into Mongo,
> > just the ones that might be used in queries.
> >
> > But one thought... what about keys that already contain full-width
> > dollar and dot?
> >
> >   cheers,
> > rog.
> >
> > On 23 June 2016 at 21:09, Katherine Cox-Buday
> >  wrote:
> >> Hey all,
> >>
> >> William gave me a good review and it came up that I wasn't cleansing
> >> some of
> >> the data being placed in Mongo. I wasn't aware this had to be done,
> >> and
> >> after talking to a few other folks it became apparent that maybe not
> >> many
> >> people know we should be doing this.
> >>
> >> At any rate, William also pointed me to some existing code which did
> >> this.
> >> I've pulled it out into the mongo/utils package for general
> >> consumption. The
> >> comments do a pretty good job of elucidating why this is necessary.
> >>
> >> https://github.com/juju/juju/blob/master/mongo/utils/data_cleansing.go
> >>
> >> -
> >> Katherine
> >>
>
> --
> Katherine
>


Re: fslock is dead, long live mutex

2016-06-21 Thread Nate Finch
Thanks for this, Tim!  Really happy to see that old fslock code go away,
and using much more reliable OS-level functionality.

On Tue, Jun 21, 2016 at 11:53 AM Alexis Bruemmer <
alexis.bruem...@canonical.com> wrote:

> Tim this is really awesome and will address a few long standing and
> painful field bugs.  Thank you for taking the extra effort to do this work!
>
> Also thank you to Nate Finch and David Cheney for their contributions on
> this effort.
>
> On Tue, Jun 21, 2016 at 2:24 AM, Tim Penhey 
> wrote:
>
>> Hi folks,
>>
>> We have finally managed to exorcise the fslock from the codebase. Both
>> 1.25 and master no longer refer to it at all. We need to remove it from the
>> juju/utils package to make sure that people don't accidentally try and use
>> it again.
>>
>> There is a new replacement, the juju/mutex package. To acquire a mutex,
>> you create a mutex.Spec structure. You must at least provide a name, delay
>> and clock. The delay is how long the code waits between tries to acquire
>> the mutex. Named mutexes are shared between processes on the same machine.
>> If a process holds the mutex and the process dies, the mutex is
>> automatically released.  A spec can also have a timeout value, and/or an
>> abort channel.
>>
>> On linux this is implemented with abstract domain sockets, on windows it
>> uses a named semaphore, and on other platforms a flock is used on a named
>> temp file.
>>
>> https://godoc.org/github.com/juju/mutex
>>
>> Cheers,
>> Tim
>>
>
>
>
> --
> Alexis Bruemmer
> Juju Core Manager, Canonical Ltd.
> (503) 686-5018
> alexis.bruem...@canonical.com


Tests do not pass with go1.7

2016-06-08 Thread Nate Finch
Just FYI, in case anyone was like me and decided they wanted to jump on the
faster compile times in 1.7... some of our tests do not pass in go 1.7:

https://bugs.launchpad.net/juju-core/+bug/1589350


version parsing

2016-06-03 Thread Nate Finch
A bug was discovered in our version parsing:
https://bugs.launchpad.net/juju-core/+bug/1588911

Basically, 2.0.alpha10 was being parsed as the tag being alpha1 and the
patch being 0, instead of alpha and 10.  While correcting this problem, I
noticed that we were matching [a-zA-Z0-9_]+ for the tag... which I'm pretty
sure was a mistake.  After talking to Curtis and Martin, I changed it so
the tag is only [a-z]+.  I'm pretty sure this will not affect anyone else,
but posting it to the list just in case.
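
The greedy-match bug is easy to reproduce with a toy pattern (illustrative only; this is not juju's actual version regex, and the hard-coded 2.0. prefix is just to keep the example short):

```go
package main

import (
	"fmt"
	"regexp"
)

// With a permissive character class, the greedy tag group swallows
// "alpha1" so the trailing digits group can still match the final "0".
var (
	buggy = regexp.MustCompile(`^2\.0\.([a-zA-Z0-9_]+)(\d+)$`)
	fixed = regexp.MustCompile(`^2\.0\.([a-z]+)(\d+)$`) // tag is letters only
)

func main() {
	fmt.Println(buggy.FindStringSubmatch("2.0.alpha10")[1:]) // [alpha1 0]
	fmt.Println(fixed.FindStringSubmatch("2.0.alpha10")[1:]) // [alpha 10]
}
```

Restricting the tag class to [a-z]+ removes the ambiguity, because the tag group can no longer absorb digits that belong to the patch number.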

-Nate


Re: Fixing "juju help commands"

2016-05-25 Thread Nate Finch
+1 ... one and only one way to do something is a lot easier to understand.
Either we like juju list-foos or we like juju foos... pick one and move
on.  This feels like two camps agreed to disagree and just kept both.

On Wed, May 25, 2016 at 10:12 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

>
> I think this has come up before on this list, but: isn't having 2 sets of
> commands in the first place a design smell? If we need aliases because
> users aren't using the originals, then  shouldn't we fix the original
> commands?
>
> Tim Penhey  writes:
>
> > On 25/05/16 00:12, Marco Ceppi wrote:
> >> Even if you don't expect people to run them, hiding them seems
> >> awkward.
> >> Better to simply educate with good help output about what the
> >> command
> >> does and when/why use the command.
> >
> > Was thinking, perhaps it would be better to have a feature flag to use
> > the "hidden" commands instead of the ability to hide commands.
> >
> > If you set the feature flag, you get the additional commands, and they
> > show up in help etc.  That way a way to get users to run them could be
> > something like:
> >
> >   JUJU_FEATURE_FLAGS=dev-debug juju dump-model
> >
> > or something like that.
> >
> > Tim
>
> --
> Katherine
>


Re: admin model now called "controller" model

2016-05-25 Thread Nate Finch
I believe it was discussed at the Vancouver sprint.

On Wed, May 25, 2016 at 12:09 PM Adam Stokes 
wrote:

> Where was this discussed?
>
> On Wed, May 25, 2016, 10:56 AM Nate Finch 
> wrote:
>
>> The change is merging as we speak, so if you have scripts etc that need
>> to be updated, do so before pulling master.
>>
>


admin model now called "controller" model

2016-05-25 Thread Nate Finch
The change is merging as we speak, so if you have scripts etc that need to
be updated, do so before pulling master.


Re: list, show and get (was Re: Fixing "juju help commands")

2016-05-24 Thread Nate Finch
I can understand list-clouds vs show-cloud, for plural vs singular (even
though I don't think we actually have commands like that, with a plural and
a singular).  But show-config vs get-config seems really pedantic. You're
still showing one thing. One is a foo, one is a foo-config. They have very
easily differentiated names, unlike plural vs singular, which actually
might be confusing if they only differed by an s.

On Tue, May 24, 2016, 6:45 PM Tim Penhey  wrote:

> On 25/05/16 01:49, Nate Finch wrote:
> > While we're on the subject... we have list-foos, show-foos, and
> > get-foo... and I honestly cannot remember when to use which. Can we just
> > standardize on one, so no one has to remember if it's list-models or
> > show-models or show-model-config or get-model-config?  They all mean the
> > same thing, let's just pick one.
>
> 'list-foos' which is to be an alias for 'foos' returns a list of foos.
> 'show-foo' shows information about a single 'foo'
> 'get-foo' gets configuration about a particular 'foo'
>
> This is consistent across all our foos now.
>
> Tim
>


Re: Fixing "juju help commands"

2016-05-24 Thread Nate Finch
If there are things we really don't want users to run, they should probably
just not exist in production builds. If they're just less-used commands,
then I don't think it's a big deal to have them in juju help commands.

If we want to remove them from production builds, they could either be
compiled out by default, or just made into plugins.  But I don't think
hiding is really a good idea.

I am +1 on removing aliases from the default output of juju help commands,
though.  It's really long and spammy right now, and half of the length is
just due to aliases.

I do like the idea of every juju list-foos having an alias of juju foos...
but then why even have list-foos at all?

While we're on the subject... we have list-foos, show-foos, and get-foo...
and I honestly cannot remember when to use which. Can we just standardize
on one, so no one has to remember if it's list-models or show-models or
show-model-config or get-model-config?  They all mean the same thing, let's
just pick one.
On Tue, May 24, 2016, 8:12 AM Marco Ceppi  wrote:

> On Tue, May 24, 2016, 12:19 AM Tim Penhey 
> wrote:
>
>> Hi folks,
>>
>> More from the "let's fix the output" school of thought.
>>
>> One thing that has bugged me about 'juju help commands' was the repeated
>> mentions of "alias for ".
>>
>> I propose that we don't show aliases by default, and allow the user to
>> task for them.
>>
>> Also, the supercommand describe commands method was written before I
>> knew about the tabular output, so some code could be cleaned up there.
>>
>> Proposal:
>>
>> juju help commands
>>- shows the commands that are neither aliases, nor hidden
>> juju help commands --alias
>>- shows either just the aliases, or everything including aliases
>> juju help commands --all
>>- shows basic commands, aliases and hidden commands.
>>
>> I'd like to add the ability to say a command is hidden so it doesn't
>> show up in the commands list. The purpose for these could be debugging
>> assisting type functions, like "dump-model" or "debug-controller" (two
>> commands that don't yet exist).
>>
>
> I don't see a need for this, I and other users often use juju help
> commands to find or remind me of these commands. If you expect users to run
> these debug/dump commands ever, show them.
>
> Even if you don't expect people to run them, hiding them seems awkward.
> Better to simply educate with good help output about what the command does
> and when/why use the command.
>
>
>> Thoughts?
>>
>> Tim
>>


Re: natefinch/npipe/v2 does not support the POSIX read/close semantics juju needs

2016-05-23 Thread Nate Finch
Ugh... I'll look at this tomorrow (12:30 is a bad time for me to be diving
into windows API BS)

On Mon, May 23, 2016 at 11:40 PM David Cheney 
wrote:

> TL;DR read issue https://bugs.launchpad.net/bugs/1581337
>
> Thumper asked me to forward this to juju-dev for discussion
>
>
> http://reports.vapour.ws/releases/3964/job/run-unit-tests-win2012-amd64/attempt/2384
>
> Fails because, npipe.Accept is blocking on accept
>
> goroutine 41 [syscall, locked to thread]:
> syscall.Syscall(0x7f8139f102c, 0x2, 0x1a8, 0x, 0x0,
> 0xc082183de8, 0x3e5, 0x1c0050)
>  C:/Go/src/runtime/syscall_windows.go:163 +0x5c
> syscall.WaitForSingleObject(0x1a8, 0xc0, 0xc082183e10, 0x0, 0x0)
>  C:/Go/src/syscall/zsyscall_windows.go:693 +0x6f
> gopkg.in/natefinch/npipe%2ev2.waitForCompletion(0x15c, 0xc0822a42a0,
> 0xc08202b880, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:181
> +0x45 
> gopkg.in/natefinch/npipe%2ev2.(*PipeListener).AcceptPipe(0xc08202b880, 0x0, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/gopkg.in/natefinch/npipe.v2/npipe_windows.go:307 +0x397
> gopkg.in/natefinch/npipe%2ev2.(*PipeListener).Accept(0xc08202b880, 0x0, 0x0, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/gopkg.in/natefinch/npipe.v2/npipe_windows.go:261 +0x44
> github.com/juju/juju/worker/metrics/spool.(*socketListener).loop(0xc08227e780, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/spool/listener.go:71 +0xaa
> github.com/juju/juju/worker/metrics/spool.NewSocketListener.func1(0xc08227e780)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/spool/listener.go:43 +0x65
> created by github.com/juju/juju/worker/metrics/spool.NewSocketListener
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/spool/listener.go:44 +0x140
>
> and
>
> socketListener is waiting on the tomb to hit done.
>
> goroutine 43 [chan receive]:
> launchpad.net/tomb.(*Tomb).Wait(0xc08227e790, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/launchpad.net/tomb/tomb.go:108 +0x5f
> github.com/juju/juju/worker/metrics/spool.(*socketListener).Stop(0xc08227e780)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/spool/listener.go:56 +0x18c
> github.com/juju/juju/worker/metrics/sender.(*sender).stop(0xc082269770)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/sender/sender.go:104 +0x45
> github.com/juju/juju/worker/metrics/sender.(*sender).(github.com/juju/juju/worker/metrics/sender.stop)-fm()
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/sender/manifold.go:77 +0x27
> github.com/juju/juju/worker/metrics/spool.(*periodicWorker).Kill(0xc0822a40e0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/spool/listener.go:103 +0x28
> github.com/juju/juju/worker/metrics/sender_test.(*ManifoldSuite).setupWorkerTest.func1(0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/juju/worker/metrics/sender/manifold_test.go:101 +0x3d
> github.com/juju/testing.(*CleanupSuite).callStack(0xc0821a9a28, 0xc0822791d0, 0xc08226bdd0, 0x2, 0x2)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/testing/cleanup.go:52 +0x51
> github.com/juju/testing.(*CleanupSuite).TearDownTest(0xc0821a9a28, 0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/testing/cleanup.go:46 +0x4d
> github.com/juju/testing.(*IsolationSuite).TearDownTest(0xc0821a9a20, 0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/github.com/juju/testing/isolation.go:38 +0x44
> reflect.Value.call(0xcf2800, 0xc0821a9a2

Re: Specifying bootstrap machine agent details using ip address

2016-05-23 Thread Nate Finch
Hostname or IP address will both work.

On Mon, May 23, 2016 at 1:11 AM phani shankar 
wrote:

> Hi,
>
>  I am new to juju. I was following
> https://jujucharms.com/docs/stable/config-manual document to do manual
> provisioning. While configuring environment.yaml, instead of specifying
> hostname for bootstrap-host, is there a mechanism to specify IP address of
> the machine ?. Please let me know your thoughts.
>
>
> Regards,
> PHANI SHANKAR
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Three CI builds in the last 48 hours have failed because launchpad had a lie down

2016-05-18 Thread Nate Finch
I agree.  We should snapshot a known-good starting point and start from
there.  It's a huge waste of time to do apt update / upgrade / install git,
bzr, mercurial, etc every single time we run CI.  That's like 1/3rd the
time it takes to run the unit test CI job.

On Wed, May 18, 2016 at 10:35 PM David Cheney 
wrote:

> This is a huge waste of wall time and bandwidth, and increases the
> chance of failure significantly. This is something I would like to see
> changed.
>
> On Thu, May 19, 2016 at 12:06 PM, John Meinel 
> wrote:
> > CI does a build from scratch in a pristine VM. There is some discussion
> > about changing that in a variety of ways (in a container in a long
> running
> > VM, etc), but it doesn't have a local copy to work from so it has to
> pull it
> > from upstream each time.
> >
> > John
> > =:->
> >
> >
> > On Wed, May 18, 2016 at 8:05 PM, David Cheney <
> david.che...@canonical.com>
> > wrote:
> >>
> >> We already have godeps which can take a set of vcs repos and flick
> >> them to the right revisions.
> >>
> >> Why does CI check out every single dependency from upstream every
> >> single time we do a build ? That introduces wc -l dependencies.tsv
> >> points of failure to every single CI run -- not to mention the several
> >> minutes it spends laboriously checking out the code.
> >>
> >> Thanks
> >>
> >> Dave
> >>
> >> --
> >> Juju-dev mailing list
> >> Juju-dev@lists.ubuntu.com
> >> Modify settings or unsubscribe at:
> >> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >
> >
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reminder: write tests fail first

2016-05-04 Thread Nate Finch
Sorry, this was my fault.  I have a PR
 that fixes it; I just evidently
forgot to actually land it.

On Wed, May 4, 2016 at 10:43 PM Casey Marshall 
wrote:

> An excellent demonstration of writing tests that fail first that I've seen
> recently: https://www.youtube.com/watch?v=PEbnzuMZceA
>
> On Wed, May 4, 2016 at 9:24 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> See: https://bugs.launchpad.net/juju-core/+bug/1578456
>>
>> Cheers,
>> Andrew
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Static Analysis tests

2016-05-02 Thread Nate Finch
I think it's a good point that we may only want to run some tests once, not
once per OS/Arch.  However, I believe these particular tests are still
limited in scope by the OS/arch of the host machine.  The go/build package
respects the build tags in files, so in theory, one run of this on Ubuntu
could miss code that is +build windows.  (we could give go/build different
os/arch constraints, but then we're back to running this N times for N
os/arch variants)

I'm certainly happy to have these tests run in verify.sh, but it sounds
like they may need to be run per-OS testrun as well.

On Sun, May 1, 2016 at 10:27 PM Andrew Wilkins 
wrote:

> On Thu, Apr 28, 2016 at 11:48 AM Nate Finch 
> wrote:
>
>> Maybe we're not as far apart as I thought at first.
>>
>
>
>> My thought was that they'd live under github.com/juju/juju/devrules (or
>> some other name) and therefore only get run during a full test run or if
>> you run them there specifically.  What is a full test run if not a test of
>> all our code?  These tests just happen to test all the code at once, rather
>> than piece by piece.  Combining with the other thread, if we also marked
>> them as skipped under -short, you could easily still run go test ./...
>> -short from the root of the juju repo and not incur the extra 16.5 seconds
>> (gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
>> it skips all the tests in the suite, which is particularly appropriate to
>> these tests, since it's the SetUpSuite that takes all the time).
>>
>
> I'm not opposed to using the Go testing framework in this instance,
> because it makes most sense to write the analysis code in Go. That may not
> always be the case, though, and I don't want to have a rule of "everything
> as Go tests" that means we end up shoe-horning things. This is just
> academic until we need something that doesn't live in the Go ecosystem,
> though.
>
> Most importantly, I don't want to lose the ability to distinguish the
> types of tests. As an example: where we run static analysis should never
> matter, so we can cut a merge job short by performing all of the static
> analysis checks up front. That doesn't matter much if we only gate merges
> on running the tests on one Ubuntu series/arch; but what if we want to
> start gating on Windows, CentOS, or additional architectures? It would not
> make sense to run them all in parallel if they're all going to fail on the
> static analysis tests. And then if we've run them up front, it would be
> ideal to not have to run them on the individual test machines.
>
> So I think it would be OK to have a separate "devrules" package, or
> whatever we want to call it. I would still like these tests to be run by
> verify.sh, so we have one place to go to check that the source code is
> healthy, without also running the unit tests or feature tests. If we have a
> separate package like this, test tags are not really necessary in the short
> term -- the distinction is made by separating the tests into their own
> package. We could still mark them as short/long, but that's orthogonal to
> separation-by-purpose.
>
> Cheers,
> Andrew
>
> Mostly, I just didn't want them to live off in a separate repo or run with
>> a separate tool.
>>
>> On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch 
>>> wrote:
>>>
>>>> From the other thread:
>>>>
>>>> I wrote a test that parses the entire codebase under
>>>> github.com/juju/juju to look for places where we're creating a new
>>>> value of crypto/tls.Config instead of using the new helper function that I
>>>> wrote that creates one with more secure defaults.  It takes 16.5 seconds to
>>>> run on my machine.  There's not really any getting around the fact that
>>>> parsing the whole tree takes a long time.
>>>>
>>>> What I *don't* want is to put these tests somewhere else which requires
>>>> more thought/setup to run.  So, no separate long-tests directory or
>>>> anything.  Keep the tests close to the code and run in the same way we run
>>>> unit tests.
>>>>
>>>
>>> The general answer to this belongs back in the other thread, but I agree
>>> that long-running *unit* tests (if there should ever be such a thing)
>>> should not be shunted off to another location. Keep the unit tests with the
>>> unit. Integration tests are a different matter, because they cross multiple
>

Re: adding unit tests that take a long time

2016-05-02 Thread Nate Finch
On Thu, Apr 28, 2016 at 11:44 PM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> I understand your desire for a quick turn around.
> But I question the value that you would get from running "fast" (short)
> tests - would this set include some fast running unit tests, integration
> tests and functional tests?
>

Yes.


> Simply because they have been identified as running quickly on some
> machines?
>

Yes.  While absolute speed obviously varies based on hardware/load,
relative speed should be pretty constant across environments.


> How would you know if that "fast" run is comprehensive enough?
>

I know for sure that it would *not* be comprehensive.  That's why we
wouldn't use it to gate landing code.  It is only for developers as a quick
smoke test.

The alternative to this is running NO tests while I'm developing.  Which is
what I do now (and probably what most of us do).  I run tests in the one
package I'm currently editing, and usually even just a subset of those if
they're all full-stack tests.  It's just not practical to do a full 15+
minute test run more often than once every few hours at most, unless
absolutely necessary.


> It sounds to me like you might as well say "let's run a couple of tests
> randomly" and rely on these results until you commit...
>

If they're fast, sure.  It's better than no tests.  I run go vet
constantly.  It doesn't find even 1% of the bugs in my code, but it finds
more than zero, and those ones I can fix before I even compile.  go test
-short is not a test that will tell me if everything is definitely
correct.  It's a test that'll tell me if something is definitely *incorrect*.



> I do not know what you will end up doing with your current dilemma. I
> second Andrew's suggestion as well \o/
> Developing short/long test distinctions and special processing for the
> tests that we maintain seems like a waste of our effort.
>

It's 3 very simple lines of code per slow test (or per suite if the suite
itself is slow).  If that's too much, I could make it a single line of code
by writing a helper function.  You'd just see something like this:

func (s *MySuite) TestFullStackSomething(c *gc.C) {
    testing.SkipIfShort(c)
    // ... rest of test
}

The functionality for the -short flag and for skipping tests is already
built into the test infrastructure we use (both go test and gocheck).
There's no other effort required.  It can be done incrementally as we're
doing other things.  And if someone doesn't like it, doesn't trust it,
thinks it's a bad idea... it won't impact them at all.  They can just
ignore this thread and everything will be exactly as it was (except for one
line of code they can ignore in some tests).

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: adding unit tests that take a long time

2016-04-28 Thread Nate Finch
I don't really understand what you mean by stages of development.  At the
end of the day, they all test the exact same thing - is our code correct?
The form of the test seems like it should be unrelated to when they are
run.  Can you explain why you think running tests of different sorts at the
same time would be a bad thing?

Note that I only want to "divide up tests" temporally... not necessarily
spatially.  If we want to put all our static analysis tests in one
directory, our integration tests in another directory, unit tests in the
directory of the unit... that's totally fine.  I just want an easy way to
run all the fast tests (regardless of what or how they test) to get a
general idea of how badly I've broken juju during development.

On Thu, Apr 28, 2016 at 5:24 PM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> For what it's worth, to distinguish between tests based on the times they
> take to run is borderline naive. Meaningful distinction is what the test
> tests :D
> A unit test checks that the unit of work under testing is doing what is
> expected;
> integration tests test that we play well together;
> functional tests test behaviour;
> static analysis analyses the codebase to ensure conformity to agreed policies.
>
> They all have meaning at different stages of development and to bundle
> them based on the running time is to compromise these stages in long-term.
>
>
> On 29/04/16 05:03, Nate Finch wrote:
>
Our full set of tests in github.com/juju/juju takes 10-15 minutes to run,
> depending on the speed of your computer.  It's no coincidence that our test
> pyramid looks more like this ▽ than this △.   Also, we have a lot of tests:
>
> /home/nate/src/github.com/juju/juju/$ grep -r ") Test" .
> --include="*_test.go" | wc -l
> 9464
>
> About small, medium, and large tests... I think that's a good
> designation.  Certainly 17 seconds is not a small test.  But I *think* it
> qualifies as medium (hopefully most would be faster).   Here's my
> suggestion, tying this back into what I was talking about originally:
>
> Small tests would be those that run with go test -short.  That gives you
> something you can run frequently during development to give you an idea of
> whether or not you really screwed up.  Ideally each one should be less than
> 100ms to run.  (Note that even if all our tests ran this fast, it would
> still take 15 minutes to run them, not including compilation time).
>
> Medium tests would also be run if you don't use -short.  Medium tests
> would still be something that an average developer could run locally, and
> while she may want to get up to grab a drink while they're running, she
> probably wouldn't have time to run to the coffee shop to get said drink.
> Medium tests would be anything more than 100ms, but probably less than
> 15-20 seconds (and hopefully not many of the latter).  Medium tests would
> be run before making a PR, and as a gating job.
>
> Long tests should be relegated to CI, such as bringing up instances in
> real clouds.
>
> I don't think it's terribly useful to divide tests up by type of test. Who
> cares if it's a bug found with static analysis or by executing the code?
> Either way, it's a bug.  The only thing that really matters is how long the
> tests take, so we can avoid running slow tests over and over.  I run go
> vet, go lint, and go fmt on save in my editor.  That's static analysis, but
> they run far more often than I actually run tests and that's because
> they're always super fast.
>
> I think we all agree that all of these tests (except for CI tests) should
> be used to gate landings.  The question then is, how do you run the tests,
> and how do you divide up the tests?  To me, the only useful metric for
> dividing them up is how long they take to run.  I'll run any kind of test
> you give me so long as it's fast enough.
>
> On Thu, Apr 28, 2016 at 12:39 PM Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> On 04/28/2016 10:12 AM, Katherine Cox-Buday wrote:
>> > On 04/27/2016 09:51 PM, Nate Finch wrote:
>> >> So, this is exactly why I didn't want to mention the nature of the
>> >> test, because we'd get sidetracked. I'll make another thread to talk
>> >> about that specific test.
>> Sorry I forced you into it, but it was important to this discussion. I
>> was wanting to understand your feelings towards a test you should be
>> running regularly as you develop, aka a unit test, that took more than a
>> trivial amount of time to actually execute.
>> >>
>> >> I do still want to talk about what we c

Re: adding unit tests that take a long time

2016-04-28 Thread Nate Finch
Our full set of tests in github.com/juju/juju takes 10-15 minutes to run,
depending on the speed of your computer.  It's no coincidence that our test
pyramid looks more like this ▽ than this △.   Also, we have a lot of tests:

/home/nate/src/github.com/juju/juju/$ grep -r ") Test" .
--include="*_test.go" | wc -l
9464

About small, medium, and large tests... I think that's a good designation.
Certainly 17 seconds is not a small test.  But I *think* it qualifies as
medium (hopefully most would be faster).   Here's my suggestion, tying this
back into what I was talking about originally:

Small tests would be those that run with go test -short.  That gives you
something you can run frequently during development to give you an idea of
whether or not you really screwed up.  Ideally each one should be less than
100ms to run.  (Note that even if all our tests ran this fast, it would
still take 15 minutes to run them, not including compilation time).

Medium tests would also be run if you don't use -short.  Medium tests would
still be something that an average developer could run locally, and while
she may want to get up to grab a drink while they're running, she probably
wouldn't have time to run to the coffee shop to get said drink.  Medium
tests would be anything more than 100ms, but probably less than 15-20
seconds (and hopefully not many of the latter).  Medium tests would be run
before making a PR, and as a gating job.

Long tests should be relegated to CI, such as bringing up instances in real
clouds.

I don't think it's terribly useful to divide tests up by type of test. Who
cares if it's a bug found with static analysis or by executing the code?
Either way, it's a bug.  The only thing that really matters is how long the
tests take, so we can avoid running slow tests over and over.  I run go
vet, go lint, and go fmt on save in my editor.  That's static analysis, but
they run far more often than I actually run tests and that's because
they're always super fast.

I think we all agree that all of these tests (except for CI tests) should
be used to gate landings.  The question then is, how do you run the tests,
and how do you divide up the tests?  To me, the only useful metric for
dividing them up is how long they take to run.  I'll run any kind of test
you give me so long as it's fast enough.

On Thu, Apr 28, 2016 at 12:39 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On 04/28/2016 10:12 AM, Katherine Cox-Buday wrote:
> > On 04/27/2016 09:51 PM, Nate Finch wrote:
> >> So, this is exactly why I didn't want to mention the nature of the
> >> test, because we'd get sidetracked. I'll make another thread to talk
> >> about that specific test.
> Sorry I forced you into it, but it was important to this discussion. I
> was wanting to understand your feelings towards a test you should be
> running regularly as you develop, aka a unit test, that took more than a
> trivial amount of time to actually execute.
> >>
> >> I do still want to talk about what we can do for unit tests that take
> >> a long time.  I think giving developers the option to skip long tests
> >> is handy - getting a reasonable amount of coverage when you're in the
> >> middle of the develop/test/fix cycle.  It would be really useful for
> >> when you're making changes that affect a lot of packages and so you
> >> end up having to run full tests over and over.  Of course, running
> >> just the short tests would not give you 100% confidence, but once
> >> you've fixed everything so the short tests pass, *then* you could do
> >> a long run for thorough coverage.
> >
> > I believe Cheryl has something like this in the works and will be
> > sending a note out on it soon.
> >
> Yes. It is imperative that developers can quickly (and I mean quickly or
> it won't happen!) run unit tests. We absolutely want testruns to be a
> part of the code, build, run iteration loop.
> >> This is a very low friction way to increase developer productivity,
> >> and something we can implement incrementally.  It can also lead to
> >> better test coverage over all.  If you write 10 unit tests that
> >> complete in milliseconds, but were thinking about writing a couple
> >> longer-running unit tests that make sure things are working
> >> end-to-end, you don't have the disincentive of "well, this will make
> >> everyone's full test runs 30 seconds longer", since you can always
> >> skip them with -short.
> >>
> >> The only real negative I see is that it makes it less painful to
> >> write long tests for no reason, which would still affect lan

Re: Static Analysis tests

2016-04-27 Thread Nate Finch
Maybe we're not as far apart as I thought at first.

My thought was that they'd live under github.com/juju/juju/devrules (or
some other name) and therefore only get run during a full test run or if
you run them there specifically.  What is a full test run if not a test of
all our code?  These tests just happen to test all the code at once, rather
than piece by piece.  Combining with the other thread, if we also marked
them as skipped under -short, you could easily still run go test ./...
-short from the root of the juju repo and not incur the extra 16.5 seconds
(gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
it skips all the tests in the suite, which is particularly appropriate to
these tests, since it's the SetUpSuite that takes all the time).

Mostly, I just didn't want them to live off in a separate repo or run with
a separate tool.

On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch 
> wrote:
>
>> From the other thread:
>>
>> I wrote a test that parses the entire codebase under github.com/juju/juju to
>> look for places where we're creating a new value of crypto/tls.Config
>> instead of using the new helper function that I wrote that creates one with
>> more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
>> not really any getting around the fact that parsing the whole tree takes a
>> long time.
>>
>> What I *don't* want is to put these tests somewhere else which requires
>> more thought/setup to run.  So, no separate long-tests directory or
>> anything.  Keep the tests close to the code and run in the same way we run
>> unit tests.
>>
>
> The general answer to this belongs back in the other thread, but I agree
> that long-running *unit* tests (if there should ever be such a thing)
> should not be shunted off to another location. Keep the unit tests with the
> unit. Integration tests are a different matter, because they cross multiple
> units. Likewise, tests for project policies.
>
> Andrew's response:
>>
>>
>> *The nature of the test is important here: it's not a test of Juju
>> functionality, but a test to ensure that we don't accidentally use a TLS
>> configuration that doesn't match our project-wide constraints. It's static
>> analysis, using the test framework; and FWIW, the sort of thing that Lingo
>> would be a good fit for.*
>>
>> *I'd suggest that we do organise things like this separately, and run
>> them as part of the "scripts/verify.sh" script. This is the sort of test
>> that you shouldn't need to run often, but I'd like us to gate merges on.*
>>
>> So, I don't really think the method of testing should determine where a
>> test lives or how it is run.  I could test the exact same things with a
>> more common unit test - check the tls config we use when dialing the API is
>> using tls 1.2, that it only uses these specific ciphersuites, etc.  In
>> fact, we have some unit tests that do just that, to verify that SSL is
>> disabled.  However, then we'd need to remember to write those same tests
>> for every place we make a tls.Config.
>>
>
> The method of testing is not particularly relevant; it's the *purpose*
> that matters. You could probably use static analysis for a lot of our
> units; it would be inappropriate, but they'd still be testing units, and so
> should live with them.
>
> The point I was trying to make is that this is not a test of one unit, but
> a policy that covers the entire codebase. You say that you don't want to it
> put them "somewhere else", but it's not at all clear to me where you think
> we *should* have them.
>
>> The thing I like about having this as part of the unit tests is that it's
> zero friction.  They already gate landings.  We can write and run them
> just like we write and run go tests 1000 times a day.  They're not
>> special.  There's no other commands I need to remember to run, scripts I
>> need to remember to set up.  It's go test, end of story.
>>
>
> Using the Go testing framework is fine. I only want to make sure we're not
> slowing down the edit/test cycle by frequently testing things that are
> infrequently going to change. It's the same deal as with integration tests;
> there's a trade-off between the time spent and confidence level.
>
>> The comment about Lingo is valid, though I think we have room for both in
>> our processes.  Lingo, in my mind, is more appropriate at review-time,
>> which allows 

Static Analysis tests

2016-04-27 Thread Nate Finch
From the other thread:

I wrote a test that parses the entire codebase under github.com/juju/juju to
look for places where we're creating a new value of crypto/tls.Config
instead of using the new helper function that I wrote that creates one with
more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
not really any getting around the fact that parsing the whole tree takes a
long time.

What I *don't* want is to put these tests somewhere else which requires
more thought/setup to run.  So, no separate long-tests directory or
anything.  Keep the tests close to the code and run in the same way we run
unit tests.

Andrew's response:


*The nature of the test is important here: it's not a test of Juju
functionality, but a test to ensure that we don't accidentally use a TLS
configuration that doesn't match our project-wide constraints. It's static
analysis, using the test framework; and FWIW, the sort of thing that Lingo
would be a good fit for.*

*I'd suggest that we do organise things like this separately, and run them
as part of the "scripts/verify.sh" script. This is the sort of test that
you shouldn't need to run often, but I'd like us to gate merges on.*

So, I don't really think the method of testing should determine where a
test lives or how it is run.  I could test the exact same things with a
more common unit test - check the tls config we use when dialing the API is
using tls 1.2, that it only uses these specific ciphersuites, etc.  In
fact, we have some unit tests that do just that, to verify that SSL is
disabled.  However, then we'd need to remember to write those same tests
for every place we make a tls.Config.

The thing I like about having this as part of the unit tests is that it's
zero friction.  They already gate landings.  We can write and run them
just like we write and run go tests 1000 times a day.  They're not
special.  There's no other commands I need to remember to run, scripts I
need to remember to set up.  It's go test, end of story.

The comment about Lingo is valid, though I think we have room for both in
our processes.  Lingo, in my mind, is more appropriate at review-time,
which allows us to write lingo rules that may not have 100% confidence.
They can be strong suggestions rather than gating rules.  The type of test
I wrote should be a gating rule - there are no false positives.

To give a little more context, I wrote the test as a suite, where you can
add tests to hook into the code parsing, so we can trivially add more tests
that use the full parsed code, while only incurring the 16.5 second parsing
hit once for the entire suite.  That doesn't really affect this discussion
at all, but I figured people might appreciate that this could be extended
for more than my one specific test.  I certainly wouldn't advocate people
writing new 17-second tests all over the place.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: adding unit tests that take a long time

2016-04-27 Thread Nate Finch
So, this is exactly why I didn't want to mention the nature of the test,
because we'd get sidetracked. I'll make another thread to talk about that
specific test.

I do still want to talk about what we can do for unit tests that take a
long time.  I think giving developers the option to skip long tests is
handy - getting a reasonable amount of coverage when you're in the middle
of the develop/test/fix cycle.  It would be really useful for when you're
making changes that affect a lot of packages and so you end up having to
run full tests over and over.  Of course, running just the short tests
would not give you 100% confidence, but once you've fixed everything so the
short tests pass, *then* you could do a long run for thorough coverage.

This is a very low friction way to increase developer productivity, and
something we can implement incrementally.  It can also lead to better test
coverage over all.  If you write 10 unit tests that complete in
milliseconds, but were thinking about writing a couple longer-running unit
tests that make sure things are working end-to-end, you don't have the
disincentive of "well, this will make everyone's full test runs 30 seconds
longer", since you can always skip them with -short.

The only real negative I see is that it makes it less painful to write long
tests for no reason, which would still affect landing times but
hopefully everyone is still aware of the impact of long-running tests, and
will avoid them whenever possible.

On Wed, Apr 27, 2016 at 9:12 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> That is an awesome idea! +1
>
>
> On 04/27/2016 05:51 PM, Andrew Wilkins wrote:
>
> On Thu, Apr 28, 2016 at 1:44 AM Nate Finch 
> wrote:
>
>> I was actually trying to avoid talking about the test itself to keep
>> things shorter ;)
>>
>> The test is parsing the entire codebase under github.com/juju/juju to
>> look for places where we're creating a new value of crypto/tls.Config
>> instead of using the new helper function that I wrote that creates one with
>> more secure defaults.  There's not really any getting around the fact that
>> parsing the whole tree takes a long time.
>>
>
> The nature of the test is important here: it's not a test of Juju
> functionality, but a test to ensure that we don't accidentally use a TLS
> configuration that doesn't match our project-wide constraints. It's static
> analysis, using the test framework; and FWIW, the sort of thing that Lingo
> would be a good fit for.
>
> I'd suggest that we *do* organise things like this separately, and run
> them as part of the "scripts/verify.sh" script. This is the sort of test
> that you shouldn't need to run often, but I'd like us to gate merges on.
>
> Cheers,
> Andrew
>
>
>> On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs <
>> nicholas.ska...@canonical.com> wrote:
>>
>>> This is a timely discussion Nate. I'll avoid saying too much off the
>>> top, but I do have a question.
>>>
>>> On 04/27/2016 12:24 PM, Nate Finch wrote:
>>> > I just wrote a test that takes ~16.5 seconds on my machine.
>>> Why does the test take so long? Are you intending it to be a short /
>>> small scoped test?
>>>
>>> Nicholas
>>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
>
>
> --
> -
> Katherine
>
>


Re: adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I was actually trying to avoid talking about the test itself to keep things
shorter ;)

The test is parsing the entire codebase under github.com/juju/juju to look
for places where we're creating a new value of crypto/tls.Config instead of
using the new helper function that I wrote that creates one with more
secure defaults.  There's not really any getting around the fact that
parsing the whole tree takes a long time.
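A toy sketch of how that kind of static check can be built with the standard library (an illustration only, not the actual test, which walks the whole github.com/juju/juju tree): parse a source file and flag `tls.Config` composite literals.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findTLSConfigLiterals reports the positions of tls.Config composite
// literals in the given Go source string.
func findTLSConfigLiterals(src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		panic(err)
	}
	var hits []string
	ast.Inspect(f, func(n ast.Node) bool {
		cl, ok := n.(*ast.CompositeLit)
		if !ok {
			return true
		}
		// Match the selector form pkg.Type where pkg == "tls" and
		// Type == "Config" (i.e. a tls.Config{...} literal).
		if sel, ok := cl.Type.(*ast.SelectorExpr); ok {
			if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "tls" && sel.Sel.Name == "Config" {
				hits = append(hits, fset.Position(cl.Pos()).String())
			}
		}
		return true
	})
	return hits
}

func main() {
	src := `package x
import "crypto/tls"
var c = &tls.Config{}`
	fmt.Println(len(findTLSConfigLiterals(src))) // 1
}
```

Parsing a single string is fast; the cost in the real test comes from doing this over every file in the tree.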

On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> This is a timely discussion Nate. I'll avoid saying too much off the
> top, but I do have a question.
>
> On 04/27/2016 12:24 PM, Nate Finch wrote:
> > I just wrote a test that takes ~16.5 seconds on my machine.
> Why does the test take so long? Are you intending it to be a short /
> small scoped test?
>
> Nicholas
>


adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I just wrote a test that takes ~16.5 seconds on my machine. Since we are
trying to *reduce* the time taken by tests, I think it's prudent to give
developers a way to avoid running longer tests (obviously things like the
landing bot etc should always run all the tests).

What I *don't* want is to put long tests somewhere else which requires more
thought/setup to run.  So, no separate long-tests directory or anything.
Keep the tests close to the code and run in the same way we run unit
tests.  This also lowers the barrier to changing older tests that also take
a long time.

I suggest we allow developers to opt out of longer running tests.  This is
already supported by passing the -short flag to go test.

Then, long tests could use

if testing.Short() {
    c.Skip("skipping long running test.")
}

go test will run this test, but go test -short will skip it.  Sweet, easy,
devs can run go test -short over and over, and when they need to run for
landing, they can drop the -short.  CI and the landing bot don't need to
change at all.
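Under the hood, -short just sets a flag that testing.Short() reads. A minimal standalone sketch (assuming Go 1.13+, where testing.Init registers the test.* flags outside of a `go test` run):

```go
package main

import (
	"flag"
	"fmt"
	"testing"
)

// enableShort simulates `go test -short` outside a test binary:
// testing.Init registers the -test.* flags (it is a no-op if already
// called), then we set test.short directly and parse.
func enableShort() {
	testing.Init()
	flag.Set("test.short", "true")
	flag.Parse()
}

func main() {
	enableShort()
	// A test guarded by `if testing.Short() { c.Skip(...) }` would be
	// skipped in this configuration.
	fmt.Println(testing.Short())
}
```

Inside a real test file none of this plumbing is needed; `go test` handles the flag, and the guard shown above is all a long test has to add.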

We could also do some sort of opt-in scheme with build tags, but then CI
and the landing bot would have to change to opt in, and you'd need separate
files for long tests, which is a pain, etc.

Thoughts?
-Nate


two new helpers in github.com/juju/testing

2016-04-22 Thread Nate Finch
I added a couple new helpers to the testing repo that I think will help us
avoid some of the contortions we have had to do in the past when testing
executables and output on stderr/stdout.

The first is a fairly simple function, CaptureOutput, which takes in a
func() and returns anything written to stderr or stdout while running that
function.  It temporarily replaces os.Stderr and os.Stdout with files on
disk while running the function (necessary because those two values are
*os.File values, so, rather hard to mock out with anything in-memory).

The second is slightly more complicated - PatchExecHelper.
We had PatchExecutable, which creates gnarly bash/batch scripts on disk to
replace expected executables, but it has a few weaknesses - it doesn't
allow you to test the arguments passed to it *and* test stderr/stdout at
the same time, it assumed the thing you were patching out was on the PATH,
and I'm pretty sure it wouldn't work to patch out .exe files on Windows.

PatchExecHelper fixes all of those problems by approaching the problem in a
different way: it creates a function that is intended to mock out a call to
exec.Command.  It's very easy to use, just embed PatchExecHelper in your
test suite, and then use its GetExecCommand() method to get a function that
can be used to mock out exec.Command.

The exact method of how it works is detailed in the godoc, and it's an
example implementation of something I blogged about a while back:
https://npf.io/2015/06/testing-exec-command/
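The core trick from that post, sketched standalone (an illustration of the approach, not PatchExecHelper's internals): the fake exec.Command re-runs the current binary in a "helper" mode selected by an environment variable, so the fake process's stdout, stderr, and exit code are fully under the test's control.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// fakeExecCommand returns a *exec.Cmd that re-runs this same binary
// with GO_WANT_HELPER_PROCESS=1 in its environment, instead of running
// the named command.
func fakeExecCommand(command string, args ...string) *exec.Cmd {
	cmd := exec.Command(os.Args[0], append([]string{command}, args...)...)
	cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")
	return cmd
}

func main() {
	if os.Getenv("GO_WANT_HELPER_PROCESS") == "1" {
		// Helper mode: stand in for the mocked executable.
		fmt.Print("this is stdout!")
		os.Exit(0)
	}
	out, err := fakeExecCommand("echo", "hello").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Because the "executable" is the test binary itself, no scripts are written to disk, the PATH is irrelevant, and it works the same on Windows.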

Here's an example of how it can be used:

// production code
var execCommand = exec.Command

func Run(command string, args ...string) ([]byte, error) {
    return execCommand(command, args...).CombinedOutput()
}

// test code
type RunSuite struct {
    testing.CleanupSuite

    // you *must* embed PatchExecHelper in your test suite.
    testing.PatchExecHelper
}

func (s *RunSuite) TestRun(c *gc.C) {
    outArgs := make(chan []string, 1)
    f := s.GetExecCommand(testing.PatchExecConfig{
        Stderr:   "this is stderr!",
        Stdout:   "this is stdout!",
        ExitCode: 5,
        Args:     outArgs,
    })
    s.PatchValue(&execCommand, f)

    output, err := Run("echo", "hello world!")
    args := <-outArgs
    // test output, args, etc.
}

Let me know if you have any questions or comments.
-Nate


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
Thanks for explaining, John, that makes sense and really helps me
understand the reasoning behind these changes.

On Tue, Apr 19, 2016 at 11:30 PM John Meinel  wrote:

> On Tue, Apr 19, 2016 at 6:56 PM, Nate Finch 
> wrote:
>
>> Then I guess I don't understand why it worked fine up until last week.
>>
>
> So up until last week LXD depended on the 'lxc1' package which was the old
> tools for creating containers. That did always set up an 'lxcbr0' bridge,
> which LXD then always used for its own containers.
>
> That was "ok" because "lxc1" was never installed by default so it didn't
> break people who just "install ubuntu". With LXD being
> always-available-always-installed-by-default with Ubuntu, they had to be
> more conservative in their defaults. Otherwise when you just "ec2
> start-instance ubuntu" it might not work as you expect because it has
> additional bridges installed-by-default on your cloud instance that didn't
> exist in Trusty.
>
> John
> =:->
>
>
>
>> > LXD is our code.  Juju is our code.  Surely we can code/configure the
>>> two
>>> > to work together such that it just works for the 95% case?
>>>
>>> I assure you we're aware of this. Unfortunately there's not a good
>>> answer.
>>>
>>> Tycho
>>>
>>


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
Then I guess I don't understand why it worked fine up until last week.

On Tue, Apr 19, 2016, 10:39 AM Tycho Andersen 
wrote:

> On Tue, Apr 19, 2016 at 01:42:08PM +0000, Nate Finch wrote:
> > On Tue, Apr 19, 2016 at 7:49 AM John Meinel 
> wrote:
> >
> > > ...
> > >>>
> > >>
> > >
> > >>
> > >>> That's probably the cause of the other confusion in the updated docs
> -
> > >>> now we *do* want the bridge named lxdbr0 not lxcbr0. If the user
> > >>> already has lxc setup in some fashion there's a different error from
> > >>> juju telling them to run the dpkg-reconfigure command. That should
> > >>> presumably always appear whenever there's no usable bridge.
> > >>>
> > >> The flip flopping back and forth on the bridge name is going to cause
> > >> havoc even after release. Do we have clear direction now from LXD on
> how
> > >> they plan to handle the competing network bridge names? I understand
> we're
> > >> now saying a dpkg-reconfigure is required, but I'm not clear if /
> when we
> > >> plan to make this easier.
> > >>
> > >
> > > So, LXD is going to be installed-by-default on Xenial. Which means that
> > > everyone will always have it installed. However, that leads to a
> problem
> > > with having a "standard" subnet dedicated to LXD. While it works for
> lots
> > > of people, there are plenty of cases where that subnet would already
> be in
> > > use elsewhere (in say a company's internal LAN). Which means LXD chose
> to
> > > go with link-local addresses by default. AIUI that means that the
> > > containers can talk to the host (and probably the host can route them
> to
> > > the public internet?) but they aren't able to talk to other containers
> or
> > > have inbound traffic.
> > > My personal hope is that the dpkg-reconfigure steps can provide a
> 70%-case
> > > safe default set of addresses (possibly using autodetection), and give
> a
> > > nice wizard to help people, while still allowing them to override it
> in case
> > >
> >
> > I really fear for what this will do to adoption.  I feel like we're
> > throwing the common "it just works" case out the window to enable a very
> > small edge case to work slightly better.  I'd much rather have defaults
> > where juju/lxd just works with most common setups.  For the one little
> > exception - if the network on your corporate LAN conflicts - we give a
> nice
> > error message and give the user tools to reconfigure.
>
> The problem here is that there's no way to detect this case (e.g. what
> if the network in question is routed behind your default gateway
> somehow?). Things just break, and it can be difficult for people to
> figure out why.
>
> LXD's bridge setup defaults to ipv6 link local with an HTTP proxy that
> allows apt-get to work. The current dpkg-reconfigure prompts will
> suggest reasonable values if you decide you want to enable ipv4.
>
> > Making everyone
> > configure their network (not the easiest or friendliest task even for
> > people on our own team), is surely going to cause us to lose a *lot* of
> > users with a bad first impression.
> >
> > LXD is our code.  Juju is our code.  Surely we can code/configure the two
> > to work together such that it just works for the 95% case?
>
> I assure you we're aware of this. Unfortunately there's not a good
> answer.
>
> Tycho
>


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
On Tue, Apr 19, 2016 at 7:49 AM John Meinel  wrote:

> ...
>>>
>>
>
>>
>>> That's probably the cause of the other confusion in the updated docs -
>>> now we *do* want the bridge named lxdbr0 not lxcbr0. If the user
>>> already has lxc setup in some fashion there's a different error from
>>> juju telling them to run the dpkg-reconfigure command. That should
>>> presumably always appear whenever there's no usable bridge.
>>>
>> The flip flopping back and forth on the bridge name is going to cause
>> havoc even after release. Do we have clear direction now from LXD on how
>> they plan to handle the competing network bridge names? I understand we're
>> now saying a dpkg-reconfigure is required, but I'm not clear if / when we
>> plan to make this easier.
>>
>
> So, LXD is going to be installed-by-default on Xenial. Which means that
> everyone will always have it installed. However, that leads to a problem
> with having a "standard" subnet dedicated to LXD. While it works for lots
> of people, there are plenty of cases where that subnet would already be in
> use elsewhere (in say a company's internal LAN). Which means LXD chose to
> go with link-local addresses by default. AIUI that means that the
> containers can talk to the host (and probably the host can route them to
> the public internet?) but they aren't able to talk to other containers or
> have inbound traffic.
> My personal hope is that the dpkg-reconfigure steps can provide a 70%-case
> safe default set of addresses (possibly using autodetection), and give a
> nice wizard to help people, while still allowing them to override it in case
>

I really fear for what this will do to adoption.  I feel like we're
throwing the common "it just works" case out the window to enable a very
small edge case to work slightly better.  I'd much rather have defaults
where juju/lxd just works with most common setups.  For the one little
exception - if the network on your corporate LAN conflicts - we give a nice
error message and give the user tools to reconfigure.  Making everyone
configure their network (not the easiest or friendliest task even for
people on our own team), is surely going to cause us to lose a *lot* of
users with a bad first impression.
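For reference, the reconfiguration tool being discussed is LXD's debconf hook. Presumably something like this (exact prompts vary by LXD version, and the priority flag is an assumption):

```shell
# Re-run LXD's bridge configuration questions (ipv4/ipv6 subnets, NAT, etc.)
sudo dpkg-reconfigure -p medium lxd
```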

LXD is our code.  Juju is our code.  Surely we can code/configure the two
to work together such that it just works for the 95% case?


hacking upload tools for development

2016-04-14 Thread Nate Finch
I wrote a wiki page about how to hack bootstrap --upload-tools so that it
doesn't stop the controller from going through the normal processes of
requesting tools from streams etc.  This can be handy when that's the part
of the code you want to debug, but you need to upload a custom binary with
extra debugging information and/or changes to that code.  It turns out that
the changes are very minimal; they're just hard to find, spread out across a
few different files.

Feel free to make corrections if there's anything I've missed or could do
better: https://github.com/juju/juju/wiki/Hacking-upload-tools


Re: Thunderbirds are Go 1.6

2016-04-10 Thread Nate Finch
Glorious. Thanks so much for all the work, Michael and Curtis and team!

On Sun, Apr 10, 2016, 1:16 PM Cheryl Jennings 
wrote:

> Thank you for all your work on this, Curtis!
>
> On Sun, Apr 10, 2016 at 11:55 AM, Curtis Hovey-Canonical <
> cur...@canonical.com> wrote:
>
>> Ladies and Gentlemen, We build with Go 1.6 everywhere.
>>
>> Take a moment to consider the implications, because that means we do
>> not use Go 1.2, 1.3, or 1.5. Juju 1.x and 2.x for Windows, OS X, and
>> Linux (precise, trusty, wily, xenial, and centos7) are built with Go 1.6.
>>
>> There was concern about how to support precise if the code grew 1.2
>> incompatibilities. Michael Hudson-Doyle's fabulous package was trivial
>> to backport to precise. We don't need to create a new process to support
>> precise. The golang-1.6 package was also easy to copy to wily. Since
>> the Go 1.5 unit tests are less reliable than the Go 1.6 unit tests, the
>> hour spent setting up wily to build with golang-1.6 will pay for itself
>> in a day.
>>
>> Some unit test combinations are still using Go 1.2 because the tests
>> assume OS features based on the Go version instead of checking
>> the OS:
>>
>> 1. The Makefile rules for installing deps are very stale. Precise unit
>> tests will fail when forced to use Go 1.6 because they cannot
>> start mongo:
>> error command line: unknown option sslOnNormalPorts
>> But we see that actual deployments of precise built with Go 1.6 work.
>>
>> 2. Centos and Windows unit tests are still Go 1.2 because many lxd
>> related tests fail when run with Go 1.6.
>>
>> The git-merge-juju job is using the xenial daily image to gate merges.
>> The released image is too stale to do an unattended dist-upgrade.
>>
>> The unit test runs for xenial-amd64, xenial-arm64, xenial-s390x,
>> xenial-ppc64el, wily-amd64, trusty-amd64, trusty-i386, and
>> trusty-ppc64el will use the golang (1.6) or golang-1.6 (1.6) packages.
>> The go1.2 trusty unit tests amd64, arm64, i386, and ppc64el are gone.
>>
>> Ubuntu source packages prefer the golang-1.6 package, and fallback to
>> golang (>= 1.2). We needed a new builder because Go 1.6 needs more
>> memory. We saw errors like this with 2G of memory
>> golang-1.6/pkg/tool/linux_amd64/link: running gcc failed:
>> fork/exec /usr/bin/gcc: cannot allocate memory
>> Centos is created using the same Go package as trusty. Windows is
>> cross compiled as 64 bit for agent and 32 bit for clients. OS X is
>> natively compiled using Go 1.6 on CI's OS X host.
>>
>> We also got word it was safe to upgrade our arm64 host to xenial. This
>> is done. The ppc64el hosts were also upgraded this week. We are
>> expanding tests for arm64 and ppc64el
>>
>> --
>> Curtis Hovey
>> Canonical Cloud Development and Operations
>> http://launchpad.net/~sinzui
>>
>>
>
>


Re: I think master is leaking

2016-04-08 Thread Nate Finch
I have a ton of leaked mongod processes, and we also appear to be leaking
virtual ethernet adapters:

$ ifconfig | grep veth
veth5699cea6 Link encap:Ethernet  HWaddr ca:70:40:dc:9d:32
veth63846fbb Link encap:Ethernet  HWaddr f2:8b:bb:3b:08:9a
veth9111be0e Link encap:Ethernet  HWaddr be:a3:41:be:fd:0c
veth992938f8 Link encap:Ethernet  HWaddr 52:78:6b:1b:ac:97
vethea0be4ac Link encap:Ethernet  HWaddr ce:72:b0:7f:81:e4

Are we creating virtual adapters during tests?  That seems pretty egregious.

On Fri, Apr 8, 2016 at 6:13 PM Horacio Duran 
wrote:

> Hey, this is more of an open question/sharing.
> I spent a good half hour trying to find out why the reboot test was timing
> out, and found that I had a couple of lxcs that I don't recall creating (at
> least one of them was a juju api server) and a good 5-8 mongos running.
> I killed the test, the mongos, and the lxcs and, magic, the reboot tests ran
> well.
> Did anyone else detect this kind of behavior? I believe either destroy (I
> deployed a couple of times earlier and destroyed) or the tests are leaking
> lxcs, at least. I'll try to repro later, but so far I would like someone
> else's input.
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I obviously wasn't clear. I was suggesting a --yes-i-really-mean-it flag on
kill-controller, but if you passed just -y we show the prompt anyway
(instead of erroring out on an unknown flag).

My point with the big prompt was in response to Rick saying it should never
be needed and should only be used in extreme circumstances... If that's how
we feel, we should make sure the user knows it.

Many people have said on this thread that the difference between
kill-controller and destroy-controller is not immediately obvious, and
we're the devs on the project. I'm just trying to make sure we make it
clear for users.

On Wed, Apr 6, 2016, 4:54 PM Tim Penhey  wrote:

> On 06/04/16 23:13, Nick Veitch wrote:
> > Sure, I am just concerned about a proliferation of commands to do the
> > same (ultimately) task
> >
> > destroy-controller
>
> The most correct way to take down a controller.
>
> > kill-controller
>
> The OMG it is broken, please do as much as you can and I know I'm going
> to have to manually check any resources left around that it couldn't
> clean up.
>
> > forget/purge-controller
>
> Remove local references to the controller.
>
>
> Not really the same things at all.
>
> Tim
>
>
> >
> >
> >
> > On 6 April 2016 at 11:59, Horacio Duran  > <mailto:horacio.du...@canonical.com>> wrote:
> >
> > The issue I see with that approach is that in that case
> > kill-controller might be doing less than you expect instead of more:
> > suppose the controller is having transient issues and kill-controller
> > cannot reach the cloud for deletion; this would forget the controller
> > and leave it running in the cloud. forget-controller, instead, tells us
> > very clearly what is going to happen: the change is going to be local
> > and will not affect the controller.
> > My 2c
> >
> >
> > On Wednesday, 6 April 2016, Nick Veitch  > <mailto:nick.vei...@canonical.com>> wrote:
> >
> > just my tuppence
> >
> > instead of having another command, can't we just add this as an
> > option to kill-controller?
> >
> > juju kill-controller --cleanup 
> >
> >
> >
> > On 6 April 2016 at 11:05, Horacio Duran
> >  wrote:
> >
> >
> > I might be biased by years of apt-get, but purge makes me
> > think that you are going to do what kill is supposed to do;
> > forget sounds more aligned with what you are really aiming for.
> >
> > On Wednesday, 6 April 2016, Andrew Wilkins
> >  wrote:
> >
> > On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings
> >  wrote:
> >
> > Relevant bug:
> >  https://bugs.launchpad.net/juju-core/+bug/1553059
> >
> > We should provide a way to clean up controllers
> > without making the user manually edit juju's files.
> >
> >
> > Unless anyone objects, or has a better spelling, I will
> > be adding a command to do this:
> >
> > juju purge-controller 
> >
> > The command will require a "-y" or prompt for
> > confirmation, like kill-controller. It will not attempt
> > to destroy the controller, it will just remove the
> > details of it from the client.
> >
> > (Alternative suggestion for spelling: "juju
> > forget-controller". Purge-controller may suggest that
> > we're purging a controller of its contents, rather than
> > purging the controller from the client?)
> >
> > Cheers,
> > Andrew
> >
> > On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch
> >  wrote:
> >
> > This just happened to me, too.  Kill-controller
> > needs to work if at all possible.  That's the
> > whole point.  And yes, users may not hit
> > specific problems, but devs do, and that wastes
> > our time trying to figure out how to manually
> > clean up the garbage.
> >
> > On Mon, Apr 4, 2016 at 8:33 AM Rick Harding
> >  wrote:
> >
> >

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
So, I agree, we should never need to use kill-controller... but sometimes
you might need it, like if someone SSH's into their controller and messes
it up, etc.

Right now, kill-controller is in all the help docs, i.e. juju help
commands... are we planning on changing that?  I'm not sure I'm on board
with having hidden commands, because they will invariably leak out, and
then we just have commands that everyone knows about, but that are
undocumented.

I'd much prefer giant warning text:

DANGER!!! USE THIS COMMAND ONLY AS A LAST RESORT!!
IT MAY LEAVE RESOURCES RUNNING IN YOUR CLOUD!  ALWAYS
TRY destroy-controller FIRST!

And make the flag to disable the "are you sure?" prompt something long and
annoying to type, not just -y, because that's too easy to just bang out
with muscle memory.

In fact, if I were feeling sneaky, I might accept -y and still make them
confirm, since they're obviously just banging out the command without
thinking.

-Nate



On Wed, Apr 6, 2016 at 11:21 AM Rick Harding 
wrote:

> I strongly feel that we're avoiding and dancing around the issue.
>
> The user should NEVER EVER have to use kill-controller. If someone types
> that, we've failed to build Juju to the standards I feel we should all
> hold ourselves to. There is only ONE way for a user to take
> down a controller: destroy-controller.
>
> If a user has models running on that controller we've given them a flag to
> safely say "I know I have stuff running, but I don't care" with the current
> stuff running on there.
>
> I completely understand that destroy-controller is not the reverse of
> bootstrap. We've filed a bug for that and should correct the bug rather
> than moving to other commands and processes to deal with that.
>
> https://bugs.launchpad.net/juju-core/+bug/1563615
>
> kill-controller is a top secret tool we keep in our back pockets for
> emergency debugging when crap that's beyond our planning/control has
> occurred and we've not yet updated destroy-controller to be able to handle
> that. It should not be in the main docs, release notes, etc. It's
> dangerous, folks should be discouraged from ever using it. Even when we do
> use it, we should be saying things like "Well, destroy-controller seems to
> be broken, we've got this potentially messy/risky way of trying to work
> around it that we can try...but..."
>
> As for the user case where a user registers on someone else's controller
> and wants to no longer have that controller appear, then we should update
> destroy-controller to handle that. There's only ONE way to remove a
> controller from your list, destroy-controller. We should be able to note
> that the user requesting destroy-controller does not have sufficient access
> to remove it and tell them "You've got these models running on this
> controller". And if they want to destroy with the flag to remove all their
> models, then we should do that and remove it from their system. If they
> have no running models, we should note that we're removing that
> controller's information from their system. If there's user confusion we
> can look at a message "You are not the controller admin, removing the
> controller information from your cache" or the like.
>
> I don't feel that adding another command for another way to remove a
> controller from list-controllers is something we want to do and we
> especially don't want to do it this week as we try to freeze 2.0 for
> release.
>
> If folks think I'm missing a point please reach out to me and lets move
> this to a high-bandwidth discussion, but I wanted to share the original
> design/thought on the destroy vs kill controller commands. I want everyone
> to share the feeling that if our users ever type kill-controller into their
> terminals we've failed them and realize that.
>
> Thanks
>
> Rick
>
> On Wed, Apr 6, 2016 at 11:02 AM Cheryl Jennings <
> cheryl.jenni...@canonical.com> wrote:
>
>> Another relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1566426
>>
>> Maybe kill-controller tries to destroy through the API, but has a timeout
>> if things get "stuck", after which it will fall back to the provider.  I was
>> joking when I suggested yesterday in IRC that we should bring back --force
>> for kill-controller, but maybe we need it (or at least a timeout).
>>
>> On Wed, Apr 6, 2016 at 7:53 AM, Nate Finch 
>> wrote:
>>
>>> Oh I see.  Yes, I agree that we should always try the right way first
>>> and only use the provider if necessary (especially if using the provider
>>> leaves garbage around).
>>>

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Oh I see.  Yes, I agree that we should always try the right way first and
only use the provider if necessary (especially if using the provider leaves
garbage around).

It seems like there's no reason why we couldn't make a --force flag do it
that way in 2.0 (aside from time constraints).

On Wed, Apr 6, 2016 at 10:48 AM Aaron Bentley 
wrote:

> On 2016-04-06 10:45 AM, Nate Finch wrote:
> > Wait, didn't destroy-environment --force fall back to the provider?
> > I thought that was the whole point of --force
>
> No, it didn't fall back.  It uses the provider unconditionally,
> without trying a normal destroy-controller first.
>
> Aaron
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Wait, didn't destroy-environment --force fall back to the provider?  I
thought that was the whole point of --force

On Wed, Apr 6, 2016 at 10:24 AM Aaron Bentley 
wrote:

> On 2016-04-06 08:22 AM, Andrew Wilkins wrote:
> > What I would like to see is: * destroy-controller to take on a
> > --force flag, causing it to do what kill-controller does now, and
> > what destroy-environment --force used to do
>
> What kill-controller does now, where it attempts to use Juju, but
> falls back to the provider, is much more valuable to us in QA than
> what destroy-environment --force used to do.  We don't want to leak
> resources unnecessarily, but we do want things to die when we try to
> kill them.
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I agree with your assessment of the vocabulary, for what it's worth.

On Wed, Apr 6, 2016 at 9:41 AM Horacio Duran 
wrote:

> As a non-English native myself, I find destroy to be much more final than
> kill, to me destroy sounds like kill and then burn just in case, but I
> don't know if this extends to other non English speakers too.
>
> On Wednesday, 6 April 2016, Nate Finch  wrote:
>
>> Also +1 to Andrew's proposal.  In particular, the difference between kill
>> and destroy is pretty arbitrary from a vocabulary standpoint, so it's not
>> clear which one is the default and which one is the extreme measure.  A
>> flag on destroy is a lot more clear in that regard (and reduces the number
>> of commands needed).
>>
>> On Wed, Apr 6, 2016 at 8:41 AM Horacio Duran 
>> wrote:
>>
>>> Strong +1 to what Andrew just proposed
>>>
>>> On Wednesday, 6 April 2016, Andrew Wilkins 
>>> wrote:
>>>
>>>> On Wed, Apr 6, 2016 at 7:35 PM Rick Harding 
>>>> wrote:
>>>>
>>>>> +1 to the -1 of a new command for this. I'd like to raise the
>>>>> discussion with the technical board, as I'd like to understand why the
>>>>> change that the team had to make made it impossible for kill-controller
>>>>> kill-controller
>>>>> to function and what we could do to allow the team to remove legacy code,
>>>>> but still be able to kill off things.
>>>>>
>>>>
>>>> Sorry, I probably should have started a new thread; this is orthogonal.
>>>> Still worth talking about with the board, but orthogonal to removing
>>>> details of a controller. You might "juju register" someone else's
>>>> controller, and then want to remove it from your client without destroying
>>>> it.
>>>>
>>>> I think the UX could do with rethinking. There's a few issues:
>>>>  (1) It's too easy to lose resources via kill-controller. See:
>>>> https://bugs.launchpad.net/juju-core/+bug/1559701 and
>>>> https://bugs.launchpad.net/juju-core/+bug/1566011
>>>>  (2) It's not clear from the name what kill-controller vs.
>>>> destroy-controller is, and so it's easy to assume they either do the same
>>>> thing, or just randomly choose one and run it. That leads to (1) happening
>>>> more than we'd like or expect.
>>>>  (3) destroy-controller is harder to use than kill-controller for the
>>>> standard case of destroying the controller and all of its hosted models.
>>>> You have to pass "--destroy-all-models" to destroy-controller, which you
>>>> don't have to pass to kill-controller. I don't understand the point of
>>>> that, given that you're already asked whether you want to destroy the
>>>> controller or not.
>>>>
>>>> What I would like to see is:
>>>>  * kill-controller to be dropped
>>>>  * destroy-controller's --destroy-all-models flag to be dropped, and
>>>> implied by the accepted prompt (or -y)
>>>>  * destroy-controller to take on a --force flag, causing it to do what
>>>> kill-controller does now, and what destroy-environment --force used to do
>>>>  * a new command to remove a controller from the client
>>>>
>>>> Why a new command? Because removing/forgetting is orthogonal to
>>>> destroying. It's plain weird to say "kill-controller --forget" (or
>>>> --cleanup, or whatever) if you're removing details of a live controller
>>>> that you just don't want to use any more.
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Tue, Apr 5, 2016 at 11:55 PM Andrew Wilkins <
>>>>> andrew.wilk...@canonical.com> wrote:
>>>>>
>>>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>>>> cheryl.jenni...@canonical.com> wrote:
>>>>>>
>>>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>>>
>>>>>>> We should provide a way to clean up controllers without making the
>>>>>>> user manually edit juju's files.
>>>>>>>
>>>>>>
>>>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>>>> command to do this:
>>>>>>
>>>>>> juju purge-controller 
>

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Also +1 to Andrew's proposal.  In particular, the difference between kill
and destroy is pretty arbitrary from a vocabulary standpoint, so it's not
clear which one is the default and which one is the extreme measure.  A
flag on destroy is a lot more clear in that regard (and reduces the number
of commands needed).

On Wed, Apr 6, 2016 at 8:41 AM Horacio Duran 
wrote:

> Strong +1 to what Andrew just proposed
>
> On Wednesday, 6 April 2016, Andrew Wilkins 
> wrote:
>
>> On Wed, Apr 6, 2016 at 7:35 PM Rick Harding 
>> wrote:
>>
>>> +1 to the -1 of a new command for this. I'd like to raise the discussion
>>> with the technical board, as I'd like to understand why the change that
>>> the team had to make made it impossible for kill-controller to function,
>>> and what we could do to allow the team to remove legacy code but still be
>>> able to kill off things.
>>>
>>
>> Sorry, I probably should have started a new thread; this is orthogonal.
>> Still worth talking about with the board, but orthogonal to removing
>> details of a controller. You might "juju register" someone else's
>> controller, and then want to remove it from your client without destroying
>> it.
>>
>> I think the UX could do with rethinking. There's a few issues:
>>  (1) It's too easy to lose resources via kill-controller. See:
>> https://bugs.launchpad.net/juju-core/+bug/1559701 and
>> https://bugs.launchpad.net/juju-core/+bug/1566011
>>  (2) It's not clear from the name what kill-controller vs.
>> destroy-controller is, and so it's easy to assume they either do the same
>> thing, or just randomly choose one and run it. That leads to (1) happening
>> more than we'd like or expect.
>>  (3) destroy-controller is harder to use than kill-controller for the
>> standard case of destroying the controller and all of its hosted models.
>> You have to pass "--destroy-all-models" to destroy-controller, which you
>> don't have to pass to kill-controller. I don't understand the point of
>> that, given that you're already asked whether you want to destroy the
>> controller or not.
>>
>> What I would like to see is:
>>  * kill-controller to be dropped
>>  * destroy-controller's --destroy-all-models flag to be dropped, and
>> implied by the accepted prompt (or -y)
>>  * destroy-controller to take on a --force flag, causing it to do what
>> kill-controller does now, and what destroy-environment --force used to do
>>  * a new command to remove a controller from the client
>>
>> Why a new command? Because removing/forgetting is orthogonal to
>> destroying. It's plain weird to say "kill-controller --forget" (or
>> --cleanup, or whatever) if you're removing details of a live controller
>> that you just don't want to use any more.
>>
>> Cheers,
>> Andrew
>>
>> On Tue, Apr 5, 2016 at 11:55 PM Andrew Wilkins <
>>> andrew.wilk...@canonical.com> wrote:
>>>
>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>> cheryl.jenni...@canonical.com> wrote:
>>>>
>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>
>>>>> We should provide a way to clean up controllers without making the
>>>>> user manually edit juju's files.
>>>>>
>>>>
>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>> command to do this:
>>>>
>>>> juju purge-controller 
>>>>
>>>> The command will require a "-y" or prompt for confirmation, like
>>>> kill-controller. It will not attempt to destroy the controller, it will
>>>> just remove the details of it from the client.
>>>>
>>>> (Alternative suggestion for spelling: "juju forget-controller".
>>>> Purge-controller may suggest that we're purging a controller of its
>>>> contents, rather than purging the controller from the client?)
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch 
>>>>> wrote:
>>>>>
>>>>>> This just happened to me, too.  Kill-controller needs to work if at
>>>>>> all possible.  That's the whole point.  And yes, users may not hit 
>>>>>> specific
>>>>>> problems, but devs do, and that wastes our time trying to figure out how 
>>

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I agree with Horacio: forget-controller is good because it only does what
you expect, whereas failing to kill a controller while making it look like
a success is bad.

Honestly, I think the problem is kill-controller; it should just go back
to being a flag on destroy-controller.  Then we'd just have destroy and
forget.

On Wed, Apr 6, 2016, 6:59 AM Horacio Duran 
wrote:

> The issue I see with that approach is that kill-controller might then be
> doing less than you expect instead of more: suppose the controller is
> having transient issues and kill-controller cannot reach the cloud for
> deletion; this would forget the controller and leave it in the cloud.
> forget-controller instead tells us very clearly what is going to happen:
> the change is going to be local and not affect the controller.
> My 2c
>
>
> On Wednesday, 6 April 2016, Nick Veitch  wrote:
>
>> just my tuppence
>>
>> instead of having another command, can't we just add this as an option to
>> kill-controller?
>>
>> juju kill-controller --cleanup 
>>
>>
>>
>> On 6 April 2016 at 11:05, Horacio Duran 
>> wrote:
>>
>>>
>>> I might be biased by years of apt-get, but purge makes me think that you
>>> are going to do what kill is supposed to do; forget sounds more aligned
>>> with what you are really aiming for.
>>>
>>> On Wednesday, 6 April 2016, Andrew Wilkins 
>>> wrote:
>>>
>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>> cheryl.jenni...@canonical.com> wrote:
>>>>
>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>
>>>>> We should provide a way to clean up controllers without making the
>>>>> user manually edit juju's files.
>>>>>
>>>>
>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>> command to do this:
>>>>
>>>> juju purge-controller 
>>>>
>>>> The command will require a "-y" or prompt for confirmation, like
>>>> kill-controller. It will not attempt to destroy the controller, it will
>>>> just remove the details of it from the client.
>>>>
>>>> (Alternative suggestion for spelling: "juju forget-controller".
>>>> Purge-controller may suggest that we're purging a controller of its
>>>> contents, rather than purging the controller from the client?)
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch 
>>>>> wrote:
>>>>>
>>>>>> This just happened to me, too.  Kill-controller needs to work if at
>>>>>> all possible.  That's the whole point.  And yes, users may not hit 
>>>>>> specific
>>>>>> problems, but devs do, and that wastes our time trying to figure out how 
>>>>>> to
>>>>>> manually clean up the garbage.
>>>>>>
>>>>>> On Mon, Apr 4, 2016 at 8:33 AM Rick Harding <
>>>>>> rick.hard...@canonical.com> wrote:
>>>>>>
>>>>>>> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
>>>>>>> andrew.wilk...@canonical.com> wrote:
>>>>>>>
>>>>>>>> In a non-beta release we would make sure that the config changes
>>>>>>>> aren't backwards incompatible.
>>>>>>>>
>>>>>>>
>>>>>>> I think this is the key thing. I think that kill-controller is an
>>>>>>> exception to this rule. I think we should always at least give the user 
>>>>>>> the
>>>>>>> ability to remove their stuff and start over with the new alpha/beta/rc
>>>>>>> release. I'd like to ask us to explore making kill-controller an 
>>>>>>> exception
>>>>>>> to this policy and that if tests prove we can't bootstrap on one beta 
>>>>>>> and
>>>>>>> kill with trunk that it's a blocking bug for us.
>>>>>>> --
>>>>>>> Juju-dev mailing list
>>>>>>> Juju-dev@lists.ubuntu.com
>>>>>>> Modify settings or unsubscribe at:
>>>>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Juju-dev mailing list
>>>>>> Juju-dev@lists.ubuntu.com
>>>>>> Modify settings or unsubscribe at:
>>>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>>>
>>>>>>
>>>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>>
>>
>>
>> --
>> Nick Veitch,
>> CDO Documentation
>> Canonical
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unable to kill-controller

2016-04-04 Thread Nate Finch
This just happened to me, too.  Kill-controller needs to work if at all
possible.  That's the whole point.  And yes, users may not hit specific
problems, but devs do, and that wastes our time trying to figure out how to
manually clean up the garbage.

On Mon, Apr 4, 2016 at 8:33 AM Rick Harding 
wrote:

> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> In a non-beta release we would make sure that the config changes aren't
>> backwards incompatible.
>>
>
> I think this is the key thing. I think that kill-controller is an
> exception to this rule. I think we should always at least give the user the
> ability to remove their stuff and start over with the new alpha/beta/rc
> release. I'd like to ask us to explore making kill-controller an exception
> to this policy and that if tests prove we can't bootstrap on one beta and
> kill with trunk that it's a blocking bug for us.
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Nate Finch
I'll just note that we've had flaky tests for as long as I've been working
on Juju, and there's never a "good" time to fix them. :)

On Mon, Mar 28, 2016 at 11:48 AM Aaron Bentley 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 2016-03-28 09:03 AM, Katherine Cox-Buday wrote:
> > Generally +1 on this, but I'm also intrigued by Martin's
> > statistic... do we currently weight test failures by how likely
> > they are to fail (i.e. how likely they are flaky)? That seems like
> > it would be a great metric to use to decide which to fix first.
>
> We don't do it on the likelihood of failure, but we do it on the
> frequency of failure.
>
> http://reports.vapour.ws/releases/top-issues
>
> I report on these on the cross-team call, and once the 2.0 settles
> down, I'll be reporting them on the release call again.
>
> Aaron
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJW+VJcAAoJEK84cMOcf+9hWrwH/0JradfscIE0wnt+yCW9nNCR
> 9hTHI2U19v1VuP6pWI4UiC7srfojPI8EXXEXrrAhF9rT8tpVK4EcJRJK9RvWvvz5
> BEquHMS0+eROFOqDJFavEB8hU7BKHErzkSwSG8uKq7JuwHs9gNtQO9z9fIhVKjnr
> aP4z2IliCqbYfXbupfSTD8TmqhI0AipQymTg3QB4C3sJdXzc5GjzIIckUo/X7aJj
> zH1tEtlwOdP0c9F+8ZVs1j6AAkb+uDGc/1Qr4MT1kInqGkli2UNF4TOX/AihNPyH
> iwYgq6O7uOkijFTrL9obRfbXxIFw1WCc9cYzxbRYnGfQff47Dyj7/BUStPPH0i0=
> =8FQ6
> -END PGP SIGNATURE-
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Nate Finch
+1, don't retry... devs need to feel the pain in order to get proper
motivation to fix this stuff...

On Mon, Mar 28, 2016 at 9:03 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Just wanted to say thank you 100x to all involved!
>
> On 03/24/2016 01:03 AM, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > As of a few minutes ago, there is now a golang-1.6 package in
> > trusty-proposed:
> > https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> > review and copy, Steve).
> >
> > One difference between this and the package I prepared earlier is that
> > it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> > Makefiles and such will need to be adjusted to invoke that directly or
> > put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> > be installed alongside the golang packages that are already in
> > trusty).
> >
> > Cheers,
> > mwh
> > (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> >
> > On 17 February 2016 at 07:58, Michael Hudson-Doyle
> >  wrote:
> >> I have approval for the idea but also decided to wait for 1.6 and upload
> >> that instead. I'm also on leave currently so hopefully this can all
> happen
> >> in early March.
> >>
> >> Cheers,
> >> mwh
> >>
> >> On 17/02/2016 1:17 am, "John Meinel"  wrote:
> >>> To start with, thanks for working on this. However, doesn't this also
> >>> require changing the CI builds to use your ppa?
> >>>
> >>> What is the current state of this? I was just looking around and
> noticed
> >>> golang1.5-go isn't in anything specific for Trusty that I can see. I
> realize
> >>> if its going into an SRU it requires a fair amount of negotiation with
> other
> >>> teams, so I'm not  surprised to see it take a while. I just wanted to
> check
> >>> how it was going.
> >>>
> >>> Thanks,
> >>>
> >>> John
> >>> =:->
> >>>
> >>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
> >>>  wrote:
>  Hi all,
> 
>  As part of the plan for getting Go 1.5 into trusty (see here
>  https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
>  packages (called golang1.5-go rather than golang-go) for trusty in my
>  ppa:
> 
>  https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages
> 
>  (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
>  "make stupid packaging mistakes" day)
> 
>  I'll write up a SRU bug to start the process of getting this into
>  trusty tomorrow but before it does end up in trusty it would seem like
>  a good idea to run the CI tests using juju-core packages built with
>  this version of the go compiler. Is that something that's feasible to
>  arrange?
> 
>  The only packaging requirement should be to change the build-depends
>  to be on golang1.5-go rather than golang-go or gccgo.
> 
>  Cheers,
>  mwh
> 
>  --
>  Juju-dev mailing list
>  Juju-dev@lists.ubuntu.com
>  Modify settings or unsubscribe at:
>  https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>>
>
> --
> -
> Katherine
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Nate Finch
Does this mean we can assume 1.6 for everything from now on, or is there
some other step we're waiting on?  I have some code that only needs to
exist while we support 1.2, and I'd be happy to just delete it.

On Thu, Mar 24, 2016, 4:07 AM Tim Penhey  wrote:

> Awesome news Michael.
>
> Thank you for all your work on this.
>
> Tim
>
> On 24/03/16 19:03, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > As of a few minutes ago, there is now a golang-1.6 package in
> > trusty-proposed:
> > https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> > review and copy, Steve).
> >
> > One difference between this and the package I prepared earlier is that
> > it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> > Makefiles and such will need to be adjusted to invoke that directly or
> > put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> > be installed alongside the golang packages that are already in
> > trusty).
> >
> > Cheers,
> > mwh
> > (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> >
> > On 17 February 2016 at 07:58, Michael Hudson-Doyle
> >  wrote:
> >> I have approval for the idea but also decided to wait for 1.6 and upload
> >> that instead. I'm also on leave currently so hopefully this can all
> happen
> >> in early March.
> >>
> >> Cheers,
> >> mwh
> >>
> >> On 17/02/2016 1:17 am, "John Meinel"  wrote:
> >>>
> >>> To start with, thanks for working on this. However, doesn't this also
> >>> require changing the CI builds to use your ppa?
> >>>
> >>> What is the current state of this? I was just looking around and
> noticed
> >>> golang1.5-go isn't in anything specific for Trusty that I can see. I
> realize
> >>> if its going into an SRU it requires a fair amount of negotiation with
> other
> >>> teams, so I'm not  surprised to see it take a while. I just wanted to
> check
> >>> how it was going.
> >>>
> >>> Thanks,
> >>>
> >>> John
> >>> =:->
> >>>
> >>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
> >>>  wrote:
> 
>  Hi all,
> 
>  As part of the plan for getting Go 1.5 into trusty (see here
>  https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
>  packages (called golang1.5-go rather than golang-go) for trusty in my
>  ppa:
> 
>  https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages
> 
>  (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
>  "make stupid packaging mistakes" day)
> 
>  I'll write up a SRU bug to start the process of getting this into
>  trusty tomorrow but before it does end up in trusty it would seem like
>  a good idea to run the CI tests using juju-core packages built with
>  this version of the go compiler. Is that something that's feasible to
>  arrange?
> 
>  The only packaging requirement should be to change the build-depends
>  to be on golang1.5-go rather than golang-go or gccgo.
> 
>  Cheers,
>  mwh
> 
>  --
>  Juju-dev mailing list
>  Juju-dev@lists.ubuntu.com
>  Modify settings or unsubscribe at:
>  https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>>
> >>>
> >>
> >
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Master is blocked in preparation for 2.0-beta3

2016-03-22 Thread Nate Finch
May I ask why we're not making a branch for beta3, so work may continue on
master?

On Tue, Mar 22, 2016 at 1:07 PM Cheryl Jennings <
cheryl.jenni...@canonical.com> wrote:

> Hi Everyone,
>
> Master is currently blocked to allow us to get to a stable 2.0-beta3.
> While we are working towards that goal, please do not JFDI changes into
> master.
>
> Please let me know if you have questions or concerns.
>
> Thanks!
> -Cheryl
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Usability issues with status-history

2016-03-19 Thread Nate Finch
It seems like we have two very specific problems, and are trying to solve
them with very general solutions.  In my mind, especially this close to
release, very specific solutions for these very specific problems are more
prudent, and have a smaller net effect on the product as time goes on.

For the first bug, where update-status spams the logs... how about we just
don't log update status (and the return from update status to idle)?  The
reason logging a hook is useful is because it indicates something has
happened in the system - config changed, the charm changed, etc.  However,
update status doesn't indicate anything except that N minutes went by.
This is not useful for the user to see.  If the charm has an update status
hook and logs something, *that* is useful to see, and of course, we would
see that.  If they really need to know if the update status hook is
running, they can examine the agent logs.
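For illustration, here is a minimal sketch of the filtering rule described above: drop the bare update-status transitions (they only mean N minutes went by), but keep anything the charm itself reported during that hook. The `StatusEvent` type and `worthRecording` helper are made up for this sketch; Juju's real status-history types differ.

```go
package main

import "fmt"

// StatusEvent is a hypothetical, simplified stand-in for one
// status-history entry; Juju's real types are richer.
type StatusEvent struct {
	Hook    string // which hook ran
	Message string // whatever the charm reported, if anything
}

// worthRecording encodes the rule suggested above: suppress the
// bare "update-status" transitions, but keep anything the charm
// itself said during that hook.
func worthRecording(e StatusEvent) bool {
	return e.Hook != "update-status" || e.Message != ""
}

func main() {
	events := []StatusEvent{
		{Hook: "install"},
		{Hook: "update-status"},                      // pure timer noise: dropped
		{Hook: "update-status", Message: "disk 90%"}, // charm output: kept
		{Hook: "config-changed"},
	}
	for _, e := range events {
		if worthRecording(e) {
			fmt.Printf("%s %s\n", e.Hook, e.Message)
		}
	}
}
```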

For the second bug, where a downloading % spams the history, it seems like
the easy answer is "don't do that".  For resources, we specifically avoided
putting download progress in status history for that very reason.  In the
future, it seems like it could be useful to have some mechanism to show
transient messages like downloading % etc, but status history does not seem
like the appropriate place to do that, especially given how it is currently
configured... and it seems way too late to be adding a new feature for 2.0

Just my 2 cents.

-Nate



On Sat, Mar 19, 2016, 10:28 AM William Reade 
wrote:

> On Sat, Mar 19, 2016 at 1:40 PM, Horacio Duran <
> horacio.du...@canonical.com> wrote:
>
>>
>>
>>
>> I think you are attributing too much importance to some data that can
>> hardly be considered information. Let me try to mention some points that I
>> think are valid here.
>> 1) Not every message is valuable. You state that every message we throw
>> away makes it harder to debug, but certainly a message like "downloading
>> N%" is useless; you can record the start of the download and the
>> failure/end, but the intermediate steps are quite useless. We can argue
>> later which messages satisfy this criterion, but I am completely sure that
>> some do.
>>
>
> We only know which messages are useless once we know who's looking at the
> data and what they're looking for. In MVC terms, the more we delete from
> the model data, the more we constrain/distort what views we can implement,
> and hence hamper our own ability to evolve better solutions as we need them.
>
> I think this follows from the fact that we *can* argue which messages are
> useless -- by doing that we're implicitly arguing about what perspectives
> we want to support. And that's a fine discussion to have, but it's a choice
> we should make at presentation time, when we have the best possible
> understanding of what information we're trying to show; and preserving the
> raw data until that point leaves us, and others, free to implement new
> views for new use cases without having to write code inside the controller.
>
>
>> 2) Not filling the history buffer with superfluous messages will help
>> here, although I do agree we should find a more elegant deletion criterion
>> (time sounds right) while not losing sight of size (at this point no one
>> has a record of the actual cost of storing these things in terms of space,
>> therefore we cannot make decisions based on the scale we want to support).
>>
>> Regarding "making the charm decide" I agree its something we might not
>> want to do, I would actually not export this to the charm, I would just use
>> it internally, since we are opinionated we can very well decide what goes
>> there.
>>
>> Adding more status information will help add observability; not having
>> a flag for internal ephemeral statuses strikes me a bit as deleting with
>> the left hand what you just wrote with the right one (the saying might not
>> translate well from Spanish).
>>
>
> Not having a special delete-this-status flag seems entirely in line with
> not deleting statuses. Collecting all the data and then deleting bits of it
> feels much more self-contradictory. Am I missing your point?
>
>> Finally, can this be a potential shoot-ourselves-in-the-foot tool? Yes,
>> but so can almost any other part of our code base. This is something juju
>> will use to report information to the user, therefore we control it; if we
>> are not careful we will shoot ourselves, but then again, if we are not
>> careful with almost any part of the code we will do so too.
>>
>
> To continue the gunplay theme: don't saw the barrel off your shotgun just
> in case you need to rob a bank one day :).
>
> Cheers
> William
>
>
>>
>> Cheers
>>
>> On Thu, Mar 17, 2016 at 6:51 AM, William Reade <
>> william.re...@canonical.com> wrote:
>>
>> I see this as a combination of two problems:
>>>
>>> 1) We're spamming the end user with "whatever's in the status-history
>>> collection" rather than presenting a digest tuned for their needs.
>>>
>> 2) Important messages get thrown away way too 

juju versions are now in github.com/juju/version

2016-03-19 Thread Nate Finch
For a recent change in 2.0, we needed to be able to reference the
version.Number struct from both juju-core and charm.v6.  To support this,
we moved most of the contents of github.com/juju/juju/version to a
top-level repo at github.com/juju/version.  Note that certain
juju-core-specific data still exists in the original folder, such as the
current Juju binary version.

Just wanted everyone to be aware of the change, so that if you're writing
code that interacts with version numbers, you know where that code now
lives.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Diffing string checker

2016-02-25 Thread Nate Finch
I agree with Horacio, you're my hero, Andrew.

I was actually just looking at sergi's diffmatchpatch implementation for
exactly this purpose a couple days ago.  Thanks for doing that!
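As a minimal sketch of the "highlight where the mismatch starts" idea discussed below, reduced to finding the first diverging byte offset: a real checker (like fancycheck) would print everything up to that offset normally and colourise from there. `firstMismatch` is a made-up helper, not part of fancycheck or gocheck.

```go
package main

import "fmt"

// firstMismatch returns the byte offset at which a and b first
// differ, or -1 if they are equal. A checker could render the
// common prefix normally and highlight everything after it.
func firstMismatch(a, b string) int {
	n := len(a)
	if len(b) < n {
		n = len(b)
	}
	for i := 0; i < n; i++ {
		if a[i] != b[i] {
			return i
		}
	}
	if len(a) != len(b) {
		return n // one string is a prefix of the other
	}
	return -1
}

func main() {
	got := "machine-0: started"
	want := "machine-0: stopped"
	i := firstMismatch(got, want)
	fmt.Printf("diverge at offset %d: %q vs %q\n", i, got[i:], want[i:])
	// prints: diverge at offset 13: "arted" vs "opped"
}
```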

On Thu, Feb 25, 2016 at 6:09 AM Andrew Wilkins 
wrote:

> On Thu, Feb 25, 2016 at 5:23 PM roger peppe  wrote:
>
>> Nice. FWIW I have a related tool that tries to help find where a regexp
>> mismatch has happened. It parses the output of gocheck, so it's
>> probably not that useful to non-acme users but you might want
>> to consider including the approach (http://paste.ubuntu.com/15195795/)
>> for fancycheck.Matches and ErrorMatches, perhaps, to highlight where
>> the mismatch starts.
>>
>
> Thanks, that is a better idea. I'll look into doing that next time I'm
> staring at a wall of text :)
>
>
>> On 25 February 2016 at 06:26, Andrew Wilkins
>>  wrote:
>> > Howdy,
>> >
>> > Occasionally I'll change a test, and some string equality test will fail
>> > with a wall of text. Sometimes we shouldn't be checking the whole
>> string,
>> > but sometimes it's legitimate to do so, and it can be difficult/tedious
>> to
>> > spot the differences.
>> >
>> > I've just written a checker which diffs the two string args, and
>> colourises
>> > the output. You may find it useful. I'm using red/green background, but
>> I
>> > also added bold for insertions, strike-through for deletions, in case
>> you're
>> > red/green colour blind. My terminal doesn't do strike-through, and
>> yours
>> > probably doesn't either. Anyway, the important thing is you can see the
>> > difference between bits that are the same vs. insertions/deletions.
>> >
>> > Code is at github.com/axw/fancycheck. Just replace
>> > c.Assert("x", gc.Equals, "y")
>> > with
>> > c.Assert("x", fancycheck.StringEquals, "y")
>> >
>> > Cheers,
>> > Andrew
>> >
>> > --
>> > Juju-dev mailing list
>> > Juju-dev@lists.ubuntu.com
>> > Modify settings or unsubscribe at:
>> > https://lists.ubuntu.com/mailman/listinfo/juju-dev
>> >
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Please merge master into your feature branches

2016-02-18 Thread Nate Finch
So uh... how are we going to deliver patches to 1.25 if CI can't run it?

On Thu, Feb 18, 2016 at 9:48 PM Andrew Wilkins 
wrote:

> On Fri, Feb 19, 2016 at 10:03 AM Ian Booth 
> wrote:
>
>> FYI for folks developing feature branches for juju-core.
>>
>> juju-core master has been updated to include the first round of
>> functionality to
>> improve the bootstrap experience. The consequence of this is that CI
>> scripts
>> needed to be updated to match. This means that any feature branch which
>> has not
>> had master commit 294388 or later merged in will not work with CI and so
>> will
>> not be blessed for release.
>>
>
> Further to this, please see the draft 2.0 release notes for instructions:
> http://paste.ubuntu.com/15127859/
>
> If there's anything you can't figure out from that and/or command help,
> please let us know so we can fix the docs.
>
> Cheers,
> Andrew
>
>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Introducing, juju resources!

2016-02-17 Thread Nate Finch
tl;dr: We'll fire the upgrade-charm hook; that code just wasn't done at
demo time (I'm actually debugging it as we speak).

On Wed, Feb 17, 2016 at 9:24 AM Adam Collard 
wrote:

> Hi Moonstone!
>
> On Fri, 12 Feb 2016 at 21:27 Katherine Cox-Buday <
> katherine.cox-bu...@canonical.com> wrote:
>
>> Moonstone have been working hard on a new feature coming up in Juju 2.0
>> called "Juju Resources", and we're now at a point where we can share the
>> goodness and call for bugs/feedback! As noted in the video linked below,
>> the feature isn't quite complete, so expect some rough edges, and for some
>> of the CLI details to shift just a bit.
>>
>
> After watching the demo (thanks for sharing!) it struck me that use of the
> update-status hook to demonstrate this was pointing to a missing feature
> (IMHO).
>
> Is there a hook that fires when a resource is made available? If I have an
> install hook that requires a resource, I could make it transition to
> blocked when that resource is missing. But when that resource is provided,
> how do I know to try again?
>
> Thanks,
>
> Adam
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: testing "go 1.5 for trusty"

2016-02-16 Thread Nate Finch
We need to make the go 1.5 jobs voting, under the same restrictions we put
on our 1.2 tests.  The reason they haven't been fixed is that they haven't
been voting, so we can ignore them.

Go 1.5 is not less reliable; go 1.2 just hides our mistakes better.
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Introducing, juju resources!

2016-02-13 Thread Nate Finch
One idea is to have a resource type of "directory" and upload everything in
that directory.  The current code has only one resource type - file.  But
it's copied to a directory on the unit named by the resource, so it would
be pretty easy to instead copy a number of files to that directory.

Of course, this would be a resources v2 feature. I'm sure there's rework
that would be needed on the back end, but it certainly seems doable.  We
also had ideas for making a github repo as a resource, which might be an
even better UX, depending on your use case.

On Sat, Feb 13, 2016, 9:10 AM Rick Harding 
wrote:

> On Fri, Feb 12, 2016 at 11:06 PM Aaron Bentley <
> aaron.bent...@canonical.com> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> On 2016-02-12 04:26 PM, Katherine Cox-Buday wrote:
>> > Moonstone have been working hard on a new feature coming up in Juju
>> > 2.0 called "Juju Resources", and we're now at a point where we can
>> > share the goodness and call for bugs/feedback!
>>
>> I'm glad to see you folks are working on this. On the QA side, we've
>> been wanting Juju to support some kind of native blob storage,
>> basically since we started.
>>
>> Is it necessary to enumerate the names in the charm? I'd rather we
>> didn't. One of our use cases is plugins for Jenkins. The Jenkins charm
>> has a setting that tells it which plugins to install, but there's
>> no predetermined list. It would be nice if the plugins could be
>> retrieved in a Juju-native way, but it would be painful to have to
>> update the charm every time we added a new plugin.  And it would be
>> bad if people ended up forking the charm just to adjust the list of
>> plugins.
>>
>> We could bundle the plugins in a single tarfile, but then updating the
>> tarfile would be annoying. For plugins, it's best if each plugin is
>> its own file and there are no restrictions on the filenames.
>
>
> Your read is correct. You must declare the resources. It's helpful to
> users to know what to stick in there and for the charm to be able to handle
> different items. In your case, a single tar file of the plugins you'd like
> to enable could be sent up and the charm would be able to untar, loop
> through them, etc. Ideally the charm would be uploaded to the store with a
> standard set of plugins and folks could then customize which ones they
> wanted by creating their own tar of plugins.
>
> I'll make sure we keep this in mind though. We've started with one file
> per resource, and maybe there's an idea here of "one or more" where you
> might have a charm that takes a data set file. You can deploy with a data
> set, or multiple files of data that all need to be processed, rather than
> building into a single file. We'll have to think it through for a future
> iteration of resources.
>
> Rick
>


Re: Tags and object IDs

2016-02-02 Thread Nate Finch
I always liked you, Tim :)

On Tue, Feb 2, 2016, 10:25 PM Tim Penhey  wrote:

> Here is my short answer:
>
>   use names.UnitTag
>
> That is all.
>
> Tim
>
> On 23/01/16 09:53, Nate Finch wrote:
> > Working in the model layer on the server between the API and the DB.
> > Specifically in my instance, an API call comes in from a unit,
> > requesting the bytes for a resource.  We want to record that this unit
> > is now using the bytes from that specific revision of the resource.  I
> > have a pointer to a state.Unit, and a function that takes a Resource
> > metadata object and some reference to the unit, and does the actual
> > transaction to the DB to store the unit's ID and the resource
> information.
> >
> > On Fri, Jan 22, 2016 at 3:34 PM William Reade
> > mailto:william.re...@canonical.com>>
> wrote:
> >
> > Need a bit more context here. What layer are you working in?
> >
> > In general terms, entity references in the API *must* use tags;
> > entity references that leak out to users *must not* use tags;
> > otherwise it's a matter of judgment and convenience. In state code,
> > it's annoying to use tags because we've already got the globalKey
> > convention; in worker code it's often justifiable if not exactly
> > awesome.
> > See https://github.com/juju/juju/wiki/Managing-complexity#workers
> >
> > Cheers
> > William
> >
> > On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch
> > mailto:nate.fi...@canonical.com>> wrote:
> >
> > I have a function that is recording which unit is using a
> > specific resource.  I wrote the function to take a UnitTag,
> > because that's the closest thing we have to an ID type. However,
> > I and others seem to remember hearing that Tags are really only
> > supposed to be used for the API. That leaves me with a problem -
> > what can I pass to this function to indicate which unit I'm
> > talking about?  I'd be fine passing a pointer to the unit object
> > itself, but we're trying to avoid direct dependencies on state.
> > People have suggested just passing a string (presumably
> > unit.Tag().String()), but then my API is too lenient - it
> > appears to say "give me any string you want for an id", but what
> > it really means is "give me a serialized UnitTag".
> >
> > I think most places in the code just use a string for an ID, but
> > this opens up the code to abuses and developer errors.
> >
> > Can someone explain why tags should only be used in the API? It
> > seems like the perfect type to pass around to indicate the ID of
> > a specific object.
> >
> > -Nate
> >
> >
> >
> >
>
>


Re: Juju terminology change: controllers and models

2016-02-02 Thread Nate Finch
FYI, I noticed ServiceDeployWithNetworks still exists as a client and
facade method, but it's only called by tests. Maybe it should be removed?

On Tue, Feb 2, 2016, 8:34 PM Ian Booth  wrote:

> Hey all
>
> As has been mentioned previously in this list, for the Juju 2.0 release we
> have
> been working on fundamental terminology changes. In particular, we now talk
> about controllers and models instead of state servers and environments.
>
> To this end, a rather large change has landed in master and the upcoming
> 2.0-alpha2 release of Juju will reflect these changes. There are several
> things
> to be aware of. We have also taken the opportunity to remove a lot of code
> which
> existed to support older Juju clients. Needless to say, this Juju 2.0
> release
> will not support upgrading from 1.x - it works only as a clean install.
>
> Note: some of the changes will initially break the GUI and users of the
> Python
> Juju Client - work is underway to update these products for the next alpha3
> release scheduled for next week. For those wishing to continue to test
> Juju 2.0
> without the breaking changes, the alpha1 release is still available via
> ppa:juju/experimental. Separate communications to affected stakeholders
> has/will
> be made as part of the 2.0-alpha2 release.
>
> So, the changes are roughly broken down as follows:
>
> - CLI command name changes
> - facade name changes
> - api method and parameter name changes
> - facade method restructure
> - internal api name changes
> - external artifact/data changes (including on the wire changes)
> - deprecated and older version facades are removed
>
> 1. CLI command name changes
>
> As an obvious example, create-environment becomes create-model. We also
> have
> destroy-controller etc. This alpha2 release will also contain many of the
> other
> CLI changes targetted for 2.0 eg juju backup create becomes juju
> create-backup.
> Not all 2.0 CLI syntax is supported yet, but all the environment -> model
> changes are done.
>
> You will also use -m <model> instead of -e <environment>.
>
> The release notes will go into more detail.
>
> All user facing text now refers to model instead of environment.
>
> 2. Facade name changes
>
> If you are curious, see https://goo.gl/l4JqGd for a representative
> listing of
> all facade and method names and which ones have been changed.
>
> The main one is EnvironmentManager becomes ModelManager. These changes
> affect
> external API clients like the GUI and Python Juju client.
>
> 3. api method and parameter name changes
>
> By way of example:
> EnvironInfo() on the undertaker facade becomes ModelInfo().
> The param struct ModifyEnvironUsers becomes ModifyModelUsers etc.
> EnvironTag attributes become ModelTag.
>
> 4. Service facade method restructure
>
> As part of making our facades more manageable and maintainable when API
> changes
> are required, a whole bunch of service related methods are moved off the
> Client
> facade and onto the Service facade. This had already been started months
> ago,
> and there were shims in place to keep existing clients working, but now
> the job
> is finished.
> eg Client.AddRelation() becomes Service.AddRelation() etc.
>
> This change will break the GUI and Python Juju client.
>
> 5. Internal API name changes
>
> Things like state.AllEnvironments() becomes state.AllModels(), we now use
> names.ModelTag instead of names.EnvironTag, and many, many more.
>
> Note: the names package has not been forked into a .V2 yet (with EnvironTag
> removed) as there are dependencies to sort out. Please do not use
> EnvironTag
> anymore.
>
> 6. External artifact/data changes (including on the wire changes)
>
> There are several main examples here.
> On the wire, we transmit model-uuid tags rather than environment-uuid tags.
> In mongo, we store model-uuid doc fields rather than env-uuid.
> In agent.conf files we store Model info rather than Environment tags.
> In the controller blob store, we store and manage blobs for buckets rather
> than
> environments.
> The controller HTTP endpoints are /model/...
> In backups we store model tags and details, not environment.
>
> With the blobstore, we've created a .V2 branch which core uses. The
> charmstore
> will continue to use V1 for now.
>
> 7. Deprecated and older version facades are removed
>
> All facade versions have been bumped up. Older facades are removed, and we
> don't
> support fallback to older servers. The main example for facade removal is
> uniter
> v0 and v1 are gone. With deprecated API removal, service deployment now
> expects
> placement parameters rather than catering for older clients that did not
> support
> placement, so there's only one ServiceDeployMethod() instead of 3. All in
> all,
> the code base has been simplified by removing all support for deprecated
> facades
> and API calls.
>
> There are still a couple of grey areas internally to be sorted out. But
> the user
> facing changes are done (pending more CLI work between now and release).
>
>
> If you have a

Re: Build errors

2016-02-01 Thread Nate Finch
Juju is not structured to work with go get, per se.  Please see the
contributing document for details on building juju:

https://github.com/juju/juju/blob/master/CONTRIBUTING.md


On Mon, Feb 1, 2016 at 1:33 PM Neale Ferguson  wrote:

> I built juju a few weeks ago and attempted to do it today using the same
> procedure. During the stage:
>
> go get -d -v github.com/juju/juju/...
>
> I am now getting:
>
> github.com/Azure/azure-sdk-for-go (download)
> package github.com/juju/juju/cmd/juju
> imports
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go
> -autorest/autorest
> :
> cannot find package
> "
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/g
> o-autorest/autorest
> "
> in any of:
>
> /home/neale/go/src/
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/
> github.com/Azure/go-autorest/autorest (from $GOROOT)
>
> /home/neale/rpmbuild/BUILD/juju/src/
> github.com/Azure/azure-sdk-for-go/Godep
> s/_workspace/src/github.com/Azure/go-autorest/autorest
> 
> (from $GOPATH)
>
>
> And the build fails.
>
> Is this some transient condition related to the Azure git repo or
> something else?
>
> Neale
>
>
>


Re: Tags and object IDs

2016-01-25 Thread Nate Finch
I was really trying not to give too much information about this exact case,
so we could avoid talking about a specific implementation, and focus on the
more general question of how we identify objects.  Yes, we get the bytes
using an HTTP request, but that is irrelevant to my question :)

On Mon, Jan 25, 2016 at 2:00 AM John Meinel  wrote:

> On Sat, Jan 23, 2016 at 1:28 AM, William Reade <
> william.re...@canonical.com> wrote:
>
>> On Fri, Jan 22, 2016 at 9:53 PM, Nate Finch 
>> wrote:
>>
>>> Working in the model layer on the server between the API and the DB.
>>> Specifically in my instance, an API call comes in from a unit, requesting
>>> the bytes for a resource.  We want to record that this unit is now using
>>> the bytes from that specific revision of the resource.  I have a pointer to
>>> a state.Unit, and a function that takes a Resource metadata object and some
>>> reference to the unit, and does the actual transaction to the DB to store
>>> the unit's ID and the resource information.
>>>
>>
>> I'm a bit surprised that we'd be transferring those bytes over an API
>> call in the first place (is json-over-websocket really a great way to send
>> potential gigabytes? shouldn't we be getting URL+SHA256 from the apiserver
>> as we do for charms, and downloading separately? and do we really want to
>> enforce charmstore == apiserver?); and I'd point out that merely having
>> agreed to deliver some bytes to a client is no indication that the client
>> will actually be using those bytes for anything; but we should probably
>> chat about those elsewhere, I'm evidently missing some context.
>>
>
> So I would have expected that we'd rather use a similar raw
> HTTP-to-get-content instead of a JSON request (given the intent of
> resources is that they may be GB in size), but regardless it is the intent
> that you download the bytes from the charm rather from the store directly.
> Similar to how we currently fetch the charm archive content itself.
> As for "will you be using it", the specific request from the charm is when
> it calls "resource-get" which is very specifically the time when the charm
> wants to go do something with those bytes.
>
> John
> =:->
>
>
>> But whenever we do record the unit-X-uses-resource-Y info I assume we'll
>> have much the same stuff available in the apiserver, in which case I think
>> you just want to pass the *Unit back into state; without it, you just need
>> to read the doc from the DB all over again to make appropriate
>> liveness/existence checks [0], and why bother unless you've already hit an
>> assertion failure in your first txn attempt?
>>
>> Cheers
>> William
>>
>> [0] I imagine you're not just dumping (unit, resource) pairs into the DB
>> without checking that they're sane? that's really not safe
>>
>>
>>> On Fri, Jan 22, 2016 at 3:34 PM William Reade <
>>> william.re...@canonical.com> wrote:
>>>
>>>> Need a bit more context here. What layer are you working in?
>>>>
>>>> In general terms, entity references in the API *must* use tags; entity
>>>> references that leak out to users *must not* use tags; otherwise it's a
>>>> matter of judgment and convenience. In state code, it's annoying to use
>>>> tags because we've already got the globalKey convention; in worker code
>>>> it's often justifiable if not exactly awesome. See
>>>> https://github.com/juju/juju/wiki/Managing-complexity#workers
>>>>
>>>> Cheers
>>>> William
>>>>
>>>> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch 
>>>> wrote:
>>>>
>>>>> I have a function that is recording which unit is using a specific
>>>>> resource.  I wrote the function to take a UnitTag, because that's the
>>>>> closest thing we have to an ID type. However, I and others seem to 
>>>>> remember
>>>>> hearing that Tags are really only supposed to be used for the API. That
>>>>> leaves me with a problem - what can I pass to this function to indicate
>>>>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>>>>> object itself, but we're trying to avoid direct dependencies on state.
>>>>> People have suggested just passing a string (presumably
>>>>> unit.Tag().String()), but then my API is too lenient - it appears to say
>>>>> "give me any s

Re: Defaulting tests internal to the package

2016-01-24 Thread Nate Finch
I think the idea that tests of an internal function are worthless because
you might just not be using that function is a fairly specious argument.
That could be applied to unit tests at any level.  What if you're not using
the whole package?  By that logic, the only reliable tests are full stack
tests.

I think that William's position is correct in an ideal world.  The public
interface is the only contract you really need to worry about.  In theory,
all your packages would be simple enough that this would not be difficult,
and the tests would be straightforward.  However, in practice, some
packages are fairly complex beasts, and it can literally be an order of
magnitude easier to test a bit of logic internally than to test it
externally.  And I don't just mean easier in terms of time it takes to
write (although that shouldn't be ignored - we do have deadlines), but also
in simplicity of the code in the tests... the more complicated the code in
the tests, the harder they are to get right, the more difficult it is for
other people to understand them, and the more difficult it is to maintain
them.

Even if you have perfect code coverage, using inversion of control and
dependency injection to be able to do every test externally, you can still
have bugs that slip through, either due to weaknesses in your tests, or
even just poorly coded mocks that don't actually behave like the production
things they're mocking.  Isolating a piece of code down to its bare bones
can make its intended behavior a lot easier to define and understand, and
therefore easier to test.

I think small scope internal unit tests deliver incredible bang for the
buck.  They're generally super easy to write, super easy to understand, and
give you confidence that you haven't screwed up a piece of logic. They're
certainly not ironclad proof that your application as a whole doesn't have
bugs in it... but they are often very strong proof that this bit of logic
does not have bugs.

While I have seen plenty of bugs that were caused by a developer not using
a piece of code he/she was supposed to use, it has almost always been
exported code that wasn't used - a utility function in another package, for
example.  Certainly it's possible you could accidentally stop using an
internal function and not have anyone notice... but I think that's fairly
unlikely compared to the concrete benefits you get from simpler,
faster-to-write internal tests.

-Nate


Re: Tags and object IDs

2016-01-22 Thread Nate Finch
Working in the model layer on the server between the API and the DB.
Specifically in my instance, an API call comes in from a unit, requesting
the bytes for a resource.  We want to record that this unit is now using
the bytes from that specific revision of the resource.  I have a pointer to
a state.Unit, and a function that takes a Resource metadata object and some
reference to the unit, and does the actual transaction to the DB to store
the unit's ID and the resource information.

On Fri, Jan 22, 2016 at 3:34 PM William Reade 
wrote:

> Need a bit more context here. What layer are you working in?
>
> In general terms, entity references in the API *must* use tags; entity
> references that leak out to users *must not* use tags; otherwise it's a
> matter of judgment and convenience. In state code, it's annoying to use
> tags because we've already got the globalKey convention; in worker code
> it's often justifiable if not exactly awesome. See
> https://github.com/juju/juju/wiki/Managing-complexity#workers
>
> Cheers
> William
>
> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch 
> wrote:
>
>> I have a function that is recording which unit is using a specific
>> resource.  I wrote the function to take a UnitTag, because that's the
>> closest thing we have to an ID type. However, I and others seem to remember
>> hearing that Tags are really only supposed to be used for the API. That
>> leaves me with a problem - what can I pass to this function to indicate
>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>> object itself, but we're trying to avoid direct dependencies on state.
>> People have suggested just passing a string (presumably
>> unit.Tag().String()), but then my API is too lenient - it appears to say
>> "give me any string you want for an id", but what it really means is "give
>> me a serialized UnitTag".
>>
>> I think most places in the code just use a string for an ID, but this
>> opens up the code to abuses and developer errors.
>>
>> Can someone explain why tags should only be used in the API? It seems
>> like the perfect type to pass around to indicate the ID of a specific
>> object.
>>
>> -Nate
>>
>>
>>


Tags and object IDs

2016-01-22 Thread Nate Finch
I have a function that is recording which unit is using a specific
resource.  I wrote the function to take a UnitTag, because that's the
closest thing we have to an ID type. However, I and others seem to remember
hearing that Tags are really only supposed to be used for the API. That
leaves me with a problem - what can I pass to this function to indicate
which unit I'm talking about?  I'd be fine passing a pointer to the unit
object itself, but we're trying to avoid direct dependencies on state.
People have suggested just passing a string (presumably
unit.Tag().String()), but then my API is too lenient - it appears to say
"give me any string you want for an id", but what it really means is "give
me a serialized UnitTag".

I think most places in the code just use a string for an ID, but this opens
up the code to abuses and developer errors.

Can someone explain why tags should only be used in the API? It seems like
the perfect type to pass around to indicate the ID of a specific object.

-Nate


Re: Defaulting tests internal to the package

2016-01-22 Thread Nate Finch
I'm glad to hear Roger's opinion about testing internal code... that's
exactly how I feel.  True unit tests of small bits of code are easy to
write, easy to read and understand, and give you confidence that your code
is doing what you think it'll do in a wide variety of cases.  If the unit
test fails, it's generally incredibly clear where the fault in the code
lies, and it's easy to fix either the code or the test.  In 6 months, you
or someone else can go back to the tests, easily understand what they are
testing, and modify them trivially when making a bug fix or improvement.

While you certainly *can* write code that tests all the corner cases of
some small utility function through the external API... those tests will
almost always be incredibly opaque and hard to understand, and generally
much more complicated than they would be if you could just test the utility
function by itself.  This is often a problem I have with our current
tests... it's hard to see what is actually being tested, and even harder to
verify that the test itself is correct and complete.  When a test fails,
you often have no clue where the actual problem lies, because the test
traverses so much code.

Of course, I definitely think you *also* need tests of the exported API of
a package... but small unit tests are still incredibly valuable.


On Fri, Jan 22, 2016 at 7:06 AM roger peppe 
wrote:

> On 22 January 2016 at 09:40, William Reade 
> wrote:
> > I do not want anyone to add unit tests for non-exported code, because
> those
> > tests are almost completely worthless.
>
> I'd beg to disagree with this. I think it very much depends what you are
> testing. For me tests are about giving confidence that the code works.
> Sometimes as part of the implementation of a package I'll write a
> function with a well defined contract that doesn't justify being made
> public because it's intimately related to the way that the package is
> implemented. The function might be reasonably tricky, and it may be hard
> to reach all its corner cases at arm's length through the public API.
> It might not even be possible, but that doesn't mean that it's not
> useful to test the function, as it may provide a foundation for more
> varied future functionality.
>
> In this kind of scenario, I think it makes good sense to write a unit test
> for the unexported function. If you change the implementation, you might
> throw away the function and its tests, and that's fine, but this approach,
> in my experience, can give good confidence in some tricky situations with
> significantly less effort than doing everything through the public API.
>
> As always, there's a trade-off here - we want to maximise confidence
> while minimising time and effort spent writing and maintaining the
> code. It's always going to be a judgement call and flat assertions like
> "those tests are almost completely worthless" are not that helpful IMHO.
>
> I do agree that writing external tests *when reasonable* is the way,
> and I think that export_test.go is a reasonable escape hatch for the
> times when it's useful to write internal tests.
>
> Nate, you seem to be arguing that because we can't have total separation
> between external and internal APIs, then we should not have any separation
> at all. I don't agree - I think that keeping as much as possible to
> the external API, but having a few well-defined exported objects (in
> export_test.go) works well, and reflects the nuanced situation that we
> actually live in.
>


Defaulting tests internal to the package

2016-01-21 Thread Nate Finch
[reposting this to the wider juju-dev mailing list]

So, I hit an interesting problem a while back.  I have some unit tests that
need to be internal tests, thus they are in 'package foo'.  However, my
code needs to use some testhelper functions that someone else wrote...
which are in 'package foo_test'.  It's impossible to reference those, so I
had to move those helpers to package foo, which then either requires that
we make them exported (which is exactly like using export_test.go, which,
in the juju-core Oakland sprint, we all agreed was bad), or all tests that
use the helpers need to be in 'package foo'... which means I had to go
change a bunch of files to be in package foo, and change the calls in those
files from foo.SomeFunc() to just SomeFunc().

Given the assumption that some tests at some point will make sense to be
internal tests, and given it's likely that helper functions/types will want
to be shared across suites - should we not just always make our tests in
package foo, and avoid this whole mess in the first place?

(A note, this happened again today - I wanted to add a unit test of a
non-exported function to an existing test suite, and can't because the unit
tests are in the foo_test package)

There seems to only be two concrete benefits to putting tests in package
foo_test:
1. It avoids circular dependencies if your test needs to import something
that imports the package you're testing.
2. It makes your tests read like normal usages of your package, i.e.
calling foo.SomeFunc().
The first is obviously non-negotiable when it comes up... but I think it's
actually quite rare (and might indicate a mixture of concerns that warrants
investigation).  The second is nice to have, but not really essential (if
we want tests that are good examples, we can write example functions).

So, I propose we put all tests in package foo by default.  For those devs
that want to only test the exported API of a package, you can still do
that.  But this prevents problems where helper code can't be shared between
the two packages without ugliness and/or dumb code churn, and it means
anyone can add unit tests for non-exported code without having to create a
whole new file and testsuite.

-Nate


Re: Automatic retries of hooks

2016-01-20 Thread Nate Finch
To anyone who is not the author/maintainer/deeply knowledgeable about a
charm, a hook error is doom.  There is only one hammer an average user of
Juju can use in that case, and it's juju resolved --retry.  If we can do
that automatically for our users, then it can be a big help to cover up for
an imperfect charm (and no charm is perfect).  The range of transient
things that can go wrong is infinite, and it's unrealistic to assume every
single charm will handle every single problem.  Not to mention that some
problems are simply not possible to handle (OOM, reboot, etc). This can
only make Juju more reliable and more user-friendly.

The knowledge level of people like Dean and James is *far *beyond that of
an average user of Juju.  When they see a hook error, they can go analyze
the logs and status output and figure out whether an error is likely to get
resolved by a retry, and know if the charm should be updated to account for
this case.  But we shouldn't expect average users to have that depth of
knowledge.

Yes, we should absolutely be able to turn off hook retries in the
environment config, specifically for dev, testing, and expert users like
those in our organization.  But by default, they should retry.  To do
otherwise would be a disservice to our users.

On Wed, Jan 20, 2016 at 3:31 PM William Reade 
wrote:

> On Wed, Jan 20, 2016 at 8:01 PM, Dean Henrichsmeyer 
> wrote:
>>
>> You realize James was complaining and not celebrating the "success" ? The
>> fact that we can have a discussion trying to determine whether something is
>> a bug or a feature indicates a problem.
>>
>
> Sorry, I didn't intend to disparage his experience; I took it as
> legitimate and reasonable surprise at a change we evidently didn't
> communicate adequately. But I don't think it's a misfeature; I think it's a
> necessary approach, in service of global reliability in challenging
> environments.
>
> But: if there are times it's inconvenient and not just surprising, we
> should surely be able to disable it. Gabriel/Bogdan, would you be able to
> address this?
>
> Cheers
> William
>


Rolling your own TLS transport

2016-01-14 Thread Nate Finch
tl;dr: don't.  Use NewHttpTLSTransport from github.com/juju/utils.

There was a bug with downloading tools where we weren't respecting the
proxy set in the environment. I thought this pretty unlikely to be our
fault, since go by default obeys the proxy in the environment. The only way
it could happen is if we were rolling our own transport value... which of
course we were (albeit for good reason - to avoid a bug).

So, I fixed NewHttpTLSTransport to obey the environment's proxy, and while
I was doing that, I grepped around to make sure no one else was doing the
same thing, and I found an instance where someone had rolled their own TLS
transport value, without even using NewHttpTLSTransport (no doubt because
the programmer didn't know that function existed).

So now I'm emailing everyone so you'll know that function exists. If you
need to make an HTTPS connection somewhere, please use NewHttpTLSTransport
when you create your http.Client, instead of rolling your own.

-Nate


Re: Copyright headers for auto-generated files?

2016-01-07 Thread Nate Finch
So, two things:
1.) We don't need to worry about this anymore, since I've removed the need
for the generated code.  But to clarify, I was using stringer (
https://github.com/golang/tools/blob/master/cmd/stringer/stringer.go), which
generates files with String() methods for enums.
2.) There are something like 1000 go generate tools to embed files as
resources in Go code... we should probably just use one of those rather than
rolling our own.  (For example, many of them gzip the contents to make them
smaller.)

On Thu, Jan 7, 2016 at 3:12 AM Ian Booth  wrote:

> We're already using go generate in the cloud-credentials feature branch.
> And the
> code we generate has the correct header.
>
> We have a top level generate package into which it is intended various
> generators will live.
>
> Here's a generator which reads the contents of a file and assigns that to
> a Go
> constant.
>
> https://github.com/juju/juju/blob/cloud-credentials/generate/filetoconst.go
>
>
>
> On 07/01/16 17:47, Andrew McDermott wrote:
> > Can the "thing" that is generating the content not add the header?
> >
> > On 6 January 2016 at 22:18, Katherine Cox-Buday <
> > katherine.cox-bu...@canonical.com> wrote:
> >
> >> Nate ran into an interesting problem yesterday. If we begin to use go
> >> generate directives to auto-generate code, do we care that this code
> >> doesn't have a Copyright header? We check the generated file into source
> >> control, but as the file is meant to be regenerated in the event of a
> >> change, we don't want to edit it by hand to add the header.
> >>
> >> --
> >> -
> >> Katherine
> >>
> >>
> >
> >
> >
> >
> >
>
>


Re: Starting a new CI test

2015-12-17 Thread Nate Finch
That is amazing and awesome. Thanks, Martin and everyone else who helped
with this!  Lowering the barrier of entry to writing CI tests can only be a
good thing.

On Thu, Dec 17, 2015 at 1:00 PM Martin Packman 
wrote:

> I just added a cheaty way of getting started on a new CI test.
>
> $ make new-assess name=new_feature
>
> With 'new_feature' as the name you want for your test. This generates
> a new test script, and some unit tests to go along with it, which you
> can run as normal with:
>
> $ make test p=*new_feature*
>
> The QA team has done a fair bit of work of late, both simplifying code
> and updating older scripts. As we continue to refactor things, however,
> these templates should prove to be a better example than looking at
> other random tests. Input from our (many) python experts is
> welcome.
>
> Next steps, a test runner that handles both environment setup and
> annoying problems like ssh config neatly.
>
> Martin
>
>


Re: 1st time builder

2015-12-17 Thread Nate Finch
Usually if there are compile-time issues, it's because you haven't run
godeps yet.
Make sure you've done the steps under this section of the docs:
https://github.com/juju/juju/blob/master/CONTRIBUTING.md#godeps
go get by default pulls from the head of every git repo, but to keep our
builds repeatable, we use godeps to pin the revision of each repo. This
can occasionally mean that HEAD of master won't compile if someone has
broken something in a dependency.

Also, we don't officially support Go 1.5 yet. I believe the build works
and almost all the tests pass (IIRC there's a small handful of tests
that fail on Go 1.5), but that wouldn't cause the compile errors
you're getting here.

On Thu, Dec 17, 2015 at 2:10 PM Neale Ferguson  wrote:

> I am attempting to build juju for the 1st time using the instructions at
> https://github.com/juju/juju and appear to have all the dependencies
> installed. I am using golang 1.5. I have searched this month's mailing
> list archives (juju and juju-dev) and googled the symptoms and got
> nothing. I am getting the following when building:
>
> src/github.com/juju/juju/provider/gce/google/instance.go:212: cannot use
> value (type string) as type *string in field value
> src/github.com/juju/juju/provider/gce/google/instance.go:228: cannot use
> item.Value (type *string) as type string in assignment
>
> src/github.com/juju/juju/provider/lxd/lxdclient/client_raw.go:23: cannot
> use (*lxd.Client)(nil) (type *lxd.Client) as type rawClientWrapperFull in
> assignment:
> *lxd.Client does not implement rawClientWrapperFull (wrong type for
> MigrateFrom method)
> have MigrateFrom(string, string, map[string]string, int,
> map[string]string, shared.Devices, []string, string, bool) (*lxd.Response,
> error)
> want MigrateFrom(string, string, map[string]string,
> map[string]string,
> []string, string, bool) (*lxd.Response, error)
>
> src/github.com/juju/juju/provider/azure/environ.go:126: undefined:
> resources.Group
> src/github.com/juju/juju/provider/azure/instance.go:224: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/instance.go:226: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolUDP
> src/github.com/juju/juju/provider/azure/instance.go:336: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/instance.go:338: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolUDP
> src/github.com/juju/juju/provider/azure/networking.go:65: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/networking.go:284: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
>
> src/github.com/juju/juju/provider/azure/utils.go:15: cannot use
> &stringPtrMap (type **map[string]*string) as type *map[string]*string in
> return argument
>
> src/github.com/juju/juju/provider/azure/vmextension.go:68: cannot use
> &extensionSettings (type *map[string]*string) as type
> *map[string]interface {} in field value
>
>
> Before I go too far down the rabbit hole trying to determine the problem,
> I was wondering if either this is a known problem or something I am doing
> incorrectly for the build?
>
> I am building this on a CentOS 7.1 system.
>
> Neale
>
>
>


Re: Comprehensive reviews & when to do them?

2015-12-14 Thread Nate Finch
I think that as long as we make it clear what kind of review we're asking
for, and make sure that we really do follow up on review comments, we can
have the best of both worlds.  I do think that reviewing once all the
code is done is kind of fruitless: there's a lot more impetus to just
leave the code as it is, since so much time has been invested in it, and
there's often just so much code that really reviewing it would take all
day, if not multiple days.

Reviewing in chunks is a lot more reasonable; it just takes some awareness
that you can't expect everything to be perfect and finished (if you want
perfect and finished, wait until it's a 16000-line code drop from the
feature branch).

But it definitely takes discipline on the developer's part to ensure that
they actually go back and address whatever review comments were left, even
if they can't get to everything before they merge to the feature branch.

On Mon, Dec 14, 2015 at 8:22 AM roger peppe 
wrote:

> On 14 December 2015 at 15:39, Katherine Cox-Buday
>  wrote:
> > Hey All,
> >
> > I think we have a mis-alignment in how we currently do reviews and
> feature
> > branches.
> >
> > We've switched over to feature-branches which is great and has allowed
> > Moonstone to land "good enough" code into our feature branch to support a
> > bi-weekly demo and solicit feedback. At the same time, I feel like we're
> > wasting people's time asking for a +1 on code that is not intended to be
> > landed into master. Often this code will take shortcuts or stub out some
> > code, and as the lead I'll make a judgment call to circle-back later.
> > Reviewers don't necessarily know this.
> >
> > Conversely, when we go to land the feature branch into master, these PRs
> are
> > generally rubber-stamped.
> >
> > I feel like maybe we have this backwards?
>
> I feel your pain.
>
> But the other side of this coin is that feature branches can be very big,
> so it's very difficult to give them a comprehensive review when merging
> them into master.
>
> For example, this feature branch (https://github.com/juju/juju/pull/3590)
> involved more than 16000 lines of changes ("diff" prints 30616 lines).
> I wouldn't have any trust in the results of a code review of all that
> code at once.
>
> I think that if you're going to implement features that you don't want
> comprehensively reviewed, it's best to treat them as "spikes" and keep
> them out of the juju repo entirely, then try to land them in manageable
> chunks on the feature branch (or master).
>
> In my view, an important attribute of the feature branches is that they
> are solidly reviewed and thus OK to land without full review of the entire
> diff.
>
> That said, I believe that it is *also* important that people do try
> to review feature branches when they merge, if at all reasonable. It's
> often the first time that people not directly involved with the feature
> branch actually look at the implications for master.
>
>   cheers,
> rog.
>
>


Re: Be very careful in assumptions in tests

2015-12-03 Thread Nate Finch
I'll definitely +1 the need for gc.HasLen ... I've seen a ton of panics in
tests when the code starts erroneously returning nil slices.  Obviously this
is less bad, since the tests still fail, but they're really ugly, annoying
failures.

And +1000 to making tests that fail before fixing the code (or at least,
purposely re-breaking the code if you forget and fix the bug first)... I've
caught so many invalid tests that way.  It is the only way to prove to
yourself that your tests actually work correctly, both in proving correct
behavior and in signalling bad behavior.

On Thu, Dec 3, 2015 at 5:24 PM Tim Penhey  wrote:

> Hi folks,
>
> I came across an interesting bug yesterday and during investigation
> found that there was a very comprehensive test that covered the situation.
>
> The problem is that the asserts were not actually running.
>
> The asserts were inside a loop where the expectation was that the loop
> would run exactly once as a previous call returned an array of items
> with one value in it.
>
> Except it didn't.
>
> It returned an empty list.
>
> So, if you see something like this, a simple line like:
>
>   c.Assert(someList, gc.HasLen, 1)
>
> Is sufficient to have caught this particular issue.
>
>
> Remember a key step when writing tests is to have the test fail first,
> then make it pass.
>
> One trick I use is to assert checks against data I know is wrong. If the
> test passes then I know I have other problems.
>
> Tim
>
>


  1   2   3   4   >