Juju 2.4-rc3 has been released

2018-06-25 Thread Ian Booth
A new development release of Juju is here, 2.4-rc3.

This release candidate addresses an issue upgrading from earlier Juju versions
as described below.

## Fixes

An upgrade step has been added to initialise the Raft configuration. This would
normally be done at bootstrap time but needs to be done during upgrade for
controllers that were bootstrapped with an earlier version.
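For controllers bootstrapped with an earlier 2.x release, picking up the fix might look roughly like this; the exact version string and model name are assumptions, so adjust to your setup:

```shell
# Hypothetical upgrade sketch: move an existing controller to the 2.4 release
# candidate so the new Raft initialisation upgrade step runs.
juju upgrade-juju -m controller --agent-version 2.4-rc3
```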

## How can I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package (see https://snapcraft.io/ for more info on snaps).

 sudo snap install juju --classic --candidate

Other packages are available for a variety of platforms. Please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be automatically upgraded. If you're using
the PPA or Homebrew, you should see an upgrade available.

## Feedback Appreciated!

We encourage everyone to let us know how you're using Juju. Send us a
message on Twitter using #jujucharms, join us at #juju on freenode, and
subscribe to the mailing list at j...@lists.ubuntu.com.

## More information

To learn more about Juju please visit https://jujucharms.com.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: redhat, centos, oracle linux images for vmware deployment

2018-02-11 Thread Ian Booth
Hey Dan

The Ubuntu images used to bring up a vSphere VM are downloaded from
cloud-images.ubuntu.com. We use images in OVA archive format. For example,
here's where the xenial images are sourced from:
http://cloud-images.ubuntu.com/xenial/current/

Juju uses simplestreams metadata to select the relevant image, based (among
other things) on the series defined in the charm. For Ubuntu images, the
simplestreams metadata is here:
http://cloud-images.ubuntu.com/releases/streams/v1/

For images other than Ubuntu (e.g. centos), we publish the metadata and image
files elsewhere, as they are not officially supported on cloud-images. We
currently only support centos7 images on AWS and Azure, as can be seen here:
http://streams.canonical.com/juju/images/releases/streams/v1/

It's possible to "roll your own" image metadata and point it at a centos OVA
image archive. In theory this should work, but it's not something we have
tested, as to date there's been no call for it to my knowledge. This metadata
is provided to Juju using the --metadata-source argument to bootstrap. There's
tooling to generate the image metadata (juju metadata generate-image), but bear
in mind that to date it's been used more as an advanced tool for internal use
and has some rough edges. There's some documentation here which explains the
basics of setting up a private Openstack cloud:
https://jujucharms.com/docs/stable/howto-privatecloud

The above would need to be adapted to accommodate vSphere and centos images.
Things like where the image binaries would be hosted and made available to
vSphere would need to be sorted out. As I said, this is not something we have
invested time in testing, as so far there's been no call for it. Very
hand-wavy, the steps would be:
- generate centos ova image archives
- host them somewhere accessible to vSphere and the bootstrap client
- generate image metadata cataloging the above images
- bootstrap juju using --metadata-source to point to the image metadata
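Equally hand-wavy, those steps might translate to something like the following; every value here is a placeholder, and this flow is untested for vSphere + centos:

```shell
# 1. Host the centos OVA image somewhere reachable by vSphere and the client.
# 2. Generate simplestreams metadata describing it (see
#    `juju metadata generate-image --help`; all values are placeholders):
juju metadata generate-image -d ~/simplestreams -i <image-id> \
    -s centos7 -r <region> -u <cloud-endpoint>
# 3. Bootstrap, pointing Juju at the generated metadata:
juju bootstrap <vsphere-cloud> --metadata-source ~/simplestreams
```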

i.e. it should or could be made to work with Juju as released, but we've not
tested it. We can help you get things set up if that helps, and document the
steps for the next person as we go along.


On 10/02/18 04:41, Daniel Bidwell wrote:
> Where do I find the images that are used by juju to deploy to a vsphere
> controller?  My ubuntu systems come up great, but unfortunately I need
> to deploy some red hat/centos/oracle linux vms also, but juju deploy
> doesn't seem to be able to find them.
> 
> Is this an area that needs someone to get involved with?
> 



Re: juju deploys with a vsphere controller using hardware vm version 10

2018-02-06 Thread Ian Booth
Hi Daniel

The Juju vSphere provider currently only supports hardware version 10, but
version 14 is now the most recent according to the VMware website. If we were
simply to track and support the most recent hardware version, would that work
for you?

On 05/02/18 12:38, Daniel Bidwell wrote:
> Is there anyway to make the vsphere controller to deploy vms with
> hardware vm version 13 instead of version 10?
> 



Re: Juju 2.3 beta2 is here!

2017-11-02 Thread Ian Booth

> * Parallelization of the Machine Provisioner
>>
>> Provisioning of machines is now faster!  Groups of machines will now be
>> provisioned in parallel reducing deployment time, especially on large
>> bundles.  Please give it a try and let us know what you think.
>>
>> Benchmarks for time to deploy 16 machines on different clouds:
>>
>>   Cloud       juju 2.2.5   juju 2.3-beta2
>>   AWS         4m36s        3m17s
>>   LXD         3m57s        2m57s
>>   Google      5m21s        2m10s
>>   OpenStack   12m40s       4m52s
>>
>>
>>
> Oh heck yes this is a great improvement! I don't see MAAS numbers here, but
> I imagine parallelization has been implemented there too? Bare metal can be
> so slow to boot sometimes ;)
>

Works for all clouds. The provisioning code is generic and has been extracted
from each provider and moved up a layer. It got complicated because of the need
to still ensure even spread of distribution groups across availability zones in
the parallel case. There just wasn't time to get any MAAS numbers prior to
cutting the beta, but empirically, there's improvement across the board.
Positive deployment stories to share would be welcome :-)






Re: Juju 2.3 beta2 is here!

2017-11-02 Thread Ian Booth


>>
>> * Parallelization of the Machine Provisioner
>>
>>
>> Provisioning of machines is now faster!  Groups of machines will now
>> be provisioned in parallel reducing deployment time, especially on
>> large bundles.  Please give it a try and let us know what you think.
>>
> 
> This is great. Did we also add support for automatic provisioning
> retries to handle sporadic cloud failures?
>

Some providers do have such retries built in; e.g. Azure, OpenStack, and
Rackspace handle rate-limit-exceeded errors and Do The Right Thing. We're still
progressively addressing robustness concerns elsewhere.





Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
And just to ask the obvious: deploying without the --storage constraint results
in a successful deploy, albeit to a machine with maybe the wrong disk?


On 01/11/17 10:51, James Beedy wrote:
> Ian,
> 
> So, I think I'm close here.
> 
> The filesytem/device layout on my node(s): https://imgur.com/a/Nzn2H
> 
> I have tagged the md0 device with the tag "raid0", then I have created the
> storage pool as you have specified.
> 
> `juju create-storage-pool ssd-disks maas tags=raid0`
> 
> Then ran the following command to deploy my charm [0], attaching storage as
> part of the command:
> 
> `juju deploy cs:~jamesbeedy/elasticsearch-27 --bind "cluster=vlan20
> public=mgmt-net" --storage data=ssd-disks,3T --constraints "tags=data"`
> 
> 
> The result is here: http://paste.ubuntu.com/25862190/
> 
> 
> Here machines 1 and 2 are deployed without the `--constraints`,
> http://paste.ubuntu.com/25862219/
> 
> 
> Am I missing something? Possibly like one more input to the `--storage` arg?
> 
> 
> Thanks
> 
> [0] https://jujucharms.com/u/jamesbeedy/elasticsearch/27
> 
> On Tue, Oct 31, 2017 at 3:14 PM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> Thanks for raising the issue - we'll get the docs updated!
>>
>> On 01/11/17 07:44, James Beedy wrote:
>>> I knew it would be something simple and sensible :)
>>>
>>> Thank you!
>>>
>>> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth <ian.bo...@canonical.com>
>> wrote:
>>>
>>>> Off the top of my head, you want to do something like:
>>>>
>>>> $ juju create-storage-pool ssd-disks maas tags=ssd
>>>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>>>
>>>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>>>> tag. You
>>>> can select whatever criteria you want and whatever tags you want to use.
>>>>
>>>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>>>> which is
>>>> at least 32GB in size.
>>>>
>>>>
>>>> On 01/11/17 07:04, James Beedy wrote:
>>>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>>>> can't quite wrap my head around what the syntax might be to make it
>> work,
>>>>> and what the extent of the capability of the Juju storage features are
>>>> when
>>>>> used with MAAS.
>>>>>
>>>>> Re-reading [0], and looking for anything else I can find on Juju
>> storage
>>>>> every day for a week now thinking it may click or I might find the
>> right
>>>>> doc,  but it hasn't, and I haven't.
>>>>>
>>>>> I filed a bug with juju/docs here [1] .
>>>>>
>>>>> Does anyone have an example of how to consume Juju storage using the
>> MAAS
>>>>> provider?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>>>> [1] https://github.com/juju/docs/issues/2251
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
> 



Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Thanks for raising the issue - we'll get the docs updated!

On 01/11/17 07:44, James Beedy wrote:
> I knew it would be something simple and sensible :)
> 
> Thank you!
> 
> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> Off the top of my head, you want to do something like:
>>
>> $ juju create-storage-pool ssd-disks maas tags=ssd
>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>
>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>> tag. You
>> can select whatever criteria you want and whatever tags you want to use.
>>
>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>> which is
>> at least 32GB in size.
>>
>>
>> On 01/11/17 07:04, James Beedy wrote:
>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>> can't quite wrap my head around what the syntax might be to make it work,
>>> and what the extent of the capability of the Juju storage features are
>> when
>>> used with MAAS.
>>>
>>> Re-reading [0], and looking for anything else I can find on Juju storage
>>> every day for a week now thinking it may click or I might find the right
>>> doc,  but it hasn't, and I haven't.
>>>
>>> I filed a bug with juju/docs here [1] .
>>>
>>> Does anyone have an example of how to consume Juju storage using the MAAS
>>> provider?
>>>
>>> Thanks!
>>>
>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>> [1] https://github.com/juju/docs/issues/2251
>>>
>>>
>>>
>>
> 



Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Off the top of my head, you want to do something like:

$ juju create-storage-pool ssd-disks maas tags=ssd
$ juju deploy postgresql --storage pgdata=ssd-disks,32G

The above assumes you have tagged in MAAS any SSD disks with the "ssd" tag. You
can select whatever criteria you want and whatever tags you want to use.

The deploy command above selects a MAAS node with a disk tagged "ssd" which is
at least 32GB in size.
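After deploying as above, a couple of commands should confirm what Juju actually provisioned (output shape varies by Juju 2.x version):

```shell
# Confirm the pool exists and inspect the storage instances and attachments.
juju storage-pools
juju storage --format yaml
```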


On 01/11/17 07:04, James Beedy wrote:
> Trying to check out Juju storage capabilities on MAAS I found [0], but
> can't quite wrap my head around what the syntax might be to make it work,
> and what the extent of the capability of the Juju storage features are when
> used with MAAS.
> 
> Re-reading [0], and looking for anything else I can find on Juju storage
> every day for a week now thinking it may click or I might find the right
> doc,  but it hasn't, and I haven't.
> 
> I filed a bug with juju/docs here [1] .
> 
> Does anyone have an example of how to consume Juju storage using the MAAS
> provider?
> 
> Thanks!
> 
> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
> [1] https://github.com/juju/docs/issues/2251
> 
> 
> 



Juju development summary

2017-10-27 Thread Ian Booth
Hi folks

Here's a quick wrap up of what the Juju team has been doing lately.

A chunk of time has been spent planning what we want to work on next cycle
leading up to the 18.04 LTS. Issues/features required by the field take a high
priority, including (but not limited to):
- audit logging
- enhancements to bundle deployment
- support for Openstack with Cisco ACI
- containers inheriting properties from hosts
- space selection for controller and agent traffic

There's also other feature work planned, such as providing goal state to charms
and other mechanisms to reduce message chatter and improve scalability;
post-deploy management of spaces and bindings (CRUD for spaces and subnets,
etc); and cloud native functionality exposed to charms.

The main engineering focus has been polishing things for the imminent (late this
week/early next week) 2.3 beta 2 release. There are a number of great
improvements over beta 1 to look forward to, including:
- lease/leadership tracking immune to clock skew and bad ntp
- much better machine provisioning performance across the board (up to 40%
reduction in time when deploying a bundle with 16 machines on OpenStack)
- resolution of annoying issues like "model not found" errors on controller
destruction
- cross model support for prometheus and nagios deployments
- lots of polish for various usability paper cuts

For those keen to try beta 2, we guarantee upgradeability from this release to
2.3.0 final. So give it a run and help provide feedback so we can make 2.3 as
awesome as possible.

We also pushed out a couple of 2.2.x point releases since 2.2.4 to fix a few
small but significant issues. We encourage everyone to upgrade to 2.2.6.
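If you installed via the snap, something like the following should pick up the point release; the channel name here is an assumption, so check `snap info juju` for the channels actually published:

```shell
sudo snap refresh juju --classic --channel=2.2/stable
```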


Quick links:
  Work pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop
  Recent 2.2 commits: https://github.com/juju/juju/commits/2.2




Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 16:33, Ian Booth wrote:
> 
> 
> On 19/10/17 15:22, John Meinel wrote:
>> So at the moment, I don't think Juju supports what you're looking for,
>> which is cross model relations without public addresses. We've certainly
>> discussed supporting all private for cross model. The main issue is that we
>> often drive parts of the firewalls (security groups) but without
>> understanding all the routing, it is hard to be sure whether things will
>> actually work.
>>
> 
> The space to which an endpoint is bound affects the behaviour here. Having
> said that, there may be a bug in Juju's cross model relations code.
> 

Actually, there may be an issue with current behaviour, but not what I first
thought.

In network-get, the resulting ingress address uses the public address (if one
exists) only when an endpoint is not bound to a space. If bound to a space, the
ingress addresses are set to the machine-local addresses. This is wrong,
because there's absolutely no guarantee an arbitrary external workload will be
able to connect to such an address; defaulting to the public address is the
best choice for most deployments.

I think network-get needs to change such that in the absence of information to
the contrary, regardless of whether an endpoint is bound to a space, the public
address should be advertised for ingress in a cross model relation.

The above implies we would need a way for the user to specify at relation time a
different ingress address for the consuming end. But that's not necessarily easy
to determine as it requires knowledge of how both sides (incl offering side)
have been deployed, and may change per relation. We don't intend to provide a
solution for this bit of the problem in Juju 2.3.


> So in the context of this doc
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> For relation data set up by Juju when a unit enters scope of a cross model 
> relation:
> 
> Juju will use the public address for advertising ingress. We have (future) 
> plans
> to support cross model relations where, in the absence of spaces, Juju can
> determine that traffic between endpoints is able to go via cloud local
> addresses, but as stated, with all the potential routing complexity involved, 
> we
> would limit this to quite restricted scenarios where it's guaranteed to work. 
> eg
> on AWS that might be same vpc/tenant/credentials or something. But we're not
> there yet and won't be for the cross model relations release in Juju 2.3.
> 
> The relation data is of course what is available to the remote unit(s) to 
> query.
> The data set up by Juju is the default, and can be overridden by a charm in a
> relation-changed hook for example.
> 
> For network-get output:
> 
> Where there is no space binding...
> 
> ... Juju will use the public address or cloud local address as above.
> 
> Where the endpoint is bound to a space...
> 
> ... Juju will populate the ingress address info in network-get to be the local
> machine addresses in that space.
> 
> So charm could call network-get and do a relation-set to put the correct
> ingress-address value in the relation data bag.
> 
> But I think the bug here is that when a unit enters scope, the default values
> Juju puts in relation data should be calculated the same as for network-get.
> Right now, the ingress address used is not space aware - if it's a cross model
> relation, Juju always uses the public address regardless of whether the 
> endpoint
> is bound to a space. If this behaviour were to be changed to match what
> network-get does, the relation data would be set up correctly(?) and there'd 
> be
> no need for the charm to override anything.
> 
>> I do believe the intended resolution is to use juju relate --via X, and
>> then X can be a space that isn't public. I'm pretty sure we don't have
>> everything wired up for that yet, and we want to make sure we can get the
>> current steps working well.
>>
> 
> juju relate --via X works at the moment by setting the egress-subnets value in
> the relation data bucket. This supports the case where the person deploying
> knows traffic from a model will egress via specific subnets, eg for a NATed
> firewall scenario. Juju itself uses this value to set firewall rules on the
> other model. There's currently no plans to support explicitly specifying what
> ingress addresses to use for either end of a cross model relation.
> 
>> The very first thing I noticed in your first email was that charms should
>> *not* be aware of spaces. The abstractions for charms are around their
>> bindings (explicit or via binding their endpoints). The goal of spaces is
>> to provide human operators a way to tell charms about their environm

Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 15:22, John Meinel wrote:
> So at the moment, I don't think Juju supports what you're looking for,
> which is cross model relations without public addresses. We've certainly
> discussed supporting all private for cross model. The main issue is that we
> often drive parts of the firewalls (security groups) but without
> understanding all the routing, it is hard to be sure whether things will
> actually work.
> 

The space to which an endpoint is bound affects the behaviour here. Having said
that, there may be a bug in Juju's cross model relations code.

So in the context of this doc
https://jujucharms.com/docs/master/developer-network-primitives

For relation data set up by Juju when a unit enters scope of a cross model 
relation:

Juju will use the public address for advertising ingress. We have (future) plans
to support cross model relations where, in the absence of spaces, Juju can
determine that traffic between endpoints is able to go via cloud local
addresses, but as stated, with all the potential routing complexity involved, we
would limit this to quite restricted scenarios where it's guaranteed to work. eg
on AWS that might be same vpc/tenant/credentials or something. But we're not
there yet and won't be for the cross model relations release in Juju 2.3.

The relation data is of course what is available to the remote unit(s) to query.
The data set up by Juju is the default, and can be overridden by a charm in a
relation-changed hook for example.

For network-get output:

Where there is no space binding...

... Juju will use the public address or cloud local address as above.

Where the endpoint is bound to a space...

... Juju will populate the ingress address info in network-get to be the local
machine addresses in that space.

So a charm could call network-get and do a relation-set to put the correct
ingress-address value in the relation data bag.
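Very roughly, that charm-side workaround might look like this in a relation hook. This is an untested sketch: the binding name "db" is a placeholder, and the --ingress-address flag may not exist on all versions of the network-get hook tool:

```shell
# Untested sketch: advertise a space-aware ingress address on the relation.
addr=$(network-get db -r "$JUJU_RELATION_ID" --ingress-address)
relation-set -r "$JUJU_RELATION_ID" ingress-address="$addr"
```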

But I think the bug here is that when a unit enters scope, the default values
Juju puts in relation data should be calculated the same as for network-get.
Right now, the ingress address used is not space aware - if it's a cross model
relation, Juju always uses the public address regardless of whether the endpoint
is bound to a space. If this behaviour were to be changed to match what
network-get does, the relation data would be set up correctly(?) and there'd be
no need for the charm to override anything.

> I do believe the intended resolution is to use juju relate --via X, and
> then X can be a space that isn't public. I'm pretty sure we don't have
> everything wired up for that yet, and we want to make sure we can get the
> current steps working well.
> 

juju relate --via X works at the moment by setting the egress-subnets value in
the relation data bucket. This supports the case where the person deploying
knows traffic from a model will egress via specific subnets, eg for a NATed
firewall scenario. Juju itself uses this value to set firewall rules on the
other model. There's currently no plans to support explicitly specifying what
ingress addresses to use for either end of a cross model relation.
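For reference, the shape of that command is roughly as follows; the application names, offer reference, and CIDR are all placeholders:

```shell
# Sketch: declare which subnets traffic from the consuming model will egress
# via (e.g. the public address of a NAT gateway), so the other model can set
# its firewall rules accordingly.
juju relate wordpress othermodel.mysql --via 10.0.0.0/16
```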

> The very first thing I noticed in your first email was that charms should
> *not* be aware of spaces. The abstractions for charms are around their
> bindings (explicit or via binding their endpoints). The goal of spaces is
> to provide human operators a way to tell charms about their environment.
> But you shouldn't ever have to change the name of your space to match the
> name a charm expects.
> 
> So if you do 'network-get BINDING -r relation' that should give you the
> context you need to coordinate your network settings with the other
> application. The intent is that we give you the right data so that it works
> whether you are in a cross model relation or just related to a local app.
> 
> John
> =:->
> 
> 
> On Oct 13, 2017 19:59, "James Beedy"  wrote:
> 
> I can give a high level of what I feel is a reasonably common use case.
> 
> I have infrastructure in two primary locations; AWS, and MAAS (at the local
> datacenter). The nodes at the datacenter have a direct fiber route via
> virtual private gateway in us-west-2, and the instances in AWS/us-west-2
> have a direct route  via the VPG to the private MAAS networks at the
> datacenter. There is no charge for data transfer from the datacenter in and
> out of us-west-2 via the fiber VPG hot route, so it behooves me to use this
> and have the AWS instances and MAAS instances talk to each other via
> private address.
> 
> At the application level, the component/config goes something like this:
> 
> The MAAS nodes at the data center have mgmt-net, cluster-net, and
> access-net, interfaces defined, all of which get ips from their respective
> address spaces from the datacenter MAAS.
> 
> I need my elasticsearch charm to configure elasticsearch such that
> elasticsearch <-> elasticsearch talk on cluster-net, web server (AWS
> instance) -> elasticsearch to talk across the 

Re: default network space

2017-10-12 Thread Ian Booth
Copying in the Juju list also

On 12/10/17 22:18, Ian Booth wrote:
> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
> 
> There's some doc here to explain things in more detail
> 
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> The TL;DR: is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
> 
> Depending on how the charm has been deployed, and more specifically whether it
> is in a cross model relation, the ingress address might be either the public 
> or
> private address. Juju will decide based on a number of factors (whether models
> are deployed to same region, vpc, other provider specific aspects) and 
> populate
> the network-get data accordingly. NOTE: for now Juju will always pick the 
> public
> address (if there is one) for the ingress value for cross model relations - 
> the
> algorithm to short circuit to a cloud local address is not yet finished.
> 
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is associated. 
> The
> network-get output though should not include any space information explicitly 
> -
> this is a concern which a charm should not care about.
> 
> 
> On 12/10/17 13:35, James Beedy wrote:
>> Hello all,
>>
>> In case you haven't noticed, we now have a network_get() function available
>> in charmhelpers.core.hookenv (in master, not stable).
>>
>> Just wanted to have a little discussion about how we are going to be
>> parsing network_get().
>>
>> I first want to address the output of network_get() for an instance
>> deployed to the default vpc, no spaces constraint, and related to another
>> instance in another model also default vpc, no spaces constraint.
>>
>> {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses':
>> [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}], 'interfacename':
>> 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '
>> 252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252',
>> 'macaddress': '1e:a2:1e:96:ec:a2'}]}
>>
>>
>> The use case I have in mind here is such that I want to provide the private
>> network interface address via relation data in the provides.py of my
>> interface to the relating application.
>>
>> This will be able to happen by calling
>> hookenv.network_get('') in the layer that provides the
>> interface in my charm, and passing the output to get the private interface
>> ip data, to then set that in the provides side of the relation.
>>
>> Tracking?
>>
>> The problem:
>>
>> The problem is such that it's not so straightforward to just get the
>> private address from the output of network_get().
>>
>> As you can see above, I could filter for network interface name, but that's
>> about the worst way one could go about this.
>>
>> Initially, I assumed the network_get() output would look different if you
>> had specified a spaces constraint when deploying your application, but the
>> output was similar to no spaces, e.g. spaces aren't listed in the output of
>> network_get().
>>
>>
>> All in all, what I'm after is a consistent way to grep either the space an
>> interface is bound to, or to get the public vs private address from the
>> output of network_get(), I think this is true for every provider just about
>> (ones that use spaces at least).
>>
>> Instead of the dict above, I was thinking we might namespace the interfaces
>> inside of what type of interface they are to make it easier to decipher
>> when parsing the network_get().
>>
>> My idea is a schema like the following:
>>
>> {
>>     'private-networks': {
>>         'my-admin-space': {
>>             'addresses': [
>>                 {
>>                     'cidr': '172.31.48.0/20',
>>                     'address': '172.31.51.59'
>>                 }
>>             ],
>>             'interfacename': 'eth0',
>>             'macaddress': '12:ba:53:58:9c:52'
>>         }
>>     },
>>     'public-networks': {
>>         'default': {
>>             'addresses': [
>>                 {
>>                     'cidr': 'publicipaddress/32',
>>                     'address': 'publicipaddress'
>>                 }
>>             ]
>>         }
>>     },
>>     'fan-networks': {
>>         'fan-252': {
>>             ...
>>         }
>>     }
>> }
>>
>> Where all 

Re: default network space

2017-10-12 Thread Ian Booth
I'd like to understand the use case you have in mind a little better. The
premise of the network-get output is that charms should not think about public
vs private addresses in terms of what to put into relation data - the other
remote unit should not be exposed to things in those terms.

There's some doc here to explain things in more detail

https://jujucharms.com/docs/master/developer-network-primitives

The TL;DR: is that charms need to care about:
- what address do I bind to (listen on)
- what address do external actors use to connect to me (ingress)

Depending on how the charm has been deployed, and more specifically whether it
is in a cross model relation, the ingress address might be either the public or
private address. Juju will decide based on a number of factors (whether models
are deployed to same region, vpc, other provider specific aspects) and populate
the network-get data accordingly. NOTE: for now Juju will always pick the public
address (if there is one) for the ingress value for cross model relations - the
algorithm to short circuit to a cloud local address is not yet finished.

The content of the bind-addresses block is space aware in that these are
filtered based on the space with which the specified endpoint is associated. The
network-get output though should not include any space information explicitly -
this is a concern which a charm should not care about.
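To make the bind vs ingress distinction concrete, here is a small, hedged Python sketch that parses the payload James quotes below (the same shape charmhelpers' network_get() returns as of this writing). The helper functions are mine for illustration, not part of any charmhelpers API:

```python
import json

# Example network-get payload, taken verbatim from this thread.
payload = json.loads("""
{"ingress-addresses": ["107.22.129.65"],
 "bind-addresses": [
   {"addresses": [{"cidr": "172.31.48.0/20", "address": "172.31.51.59"}],
    "interfacename": "eth0", "macaddress": "12:ba:53:58:9c:52"},
   {"addresses": [{"cidr": "252.48.0.0/12", "address": "252.51.59.1"}],
    "interfacename": "fan-252", "macaddress": "1e:a2:1e:96:ec:a2"}]}
""")

def ingress_address(info):
    """Address external actors should use to connect (first ingress entry)."""
    return info["ingress-addresses"][0]

def bind_addresses(info, skip_fan=True):
    """Addresses the workload can listen on, optionally skipping fan overlays."""
    result = []
    for nic in info["bind-addresses"]:
        if skip_fan and nic["interfacename"].startswith("fan-"):
            continue
        result.extend(a["address"] for a in nic["addresses"])
    return result

print(ingress_address(payload))   # 107.22.129.65
print(bind_addresses(payload))    # ['172.31.51.59']
```

Note the fan interface is filtered by name here only because nothing in the payload labels it as an overlay, which is precisely the parsing awkwardness being discussed.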


On 12/10/17 13:35, James Beedy wrote:
> Hello all,
> 
> In case you haven't noticed, we now have a network_get() function available
> in charmhelpers.core.hookenv (in master, not stable).
> 
> Just wanted to have a little discussion about how we are going to be
> parsing network_get().
> 
> I first want to address the output of network_get() for an instance
> deployed to the default vpc, no spaces constraint, and related to another
> instance in another model also default vpc, no spaces constraint.
> 
> {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses':
> [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}], 'interfacename':
> 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '
> 252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252',
> 'macaddress': '1e:a2:1e:96:ec:a2'}]}
> 
> 
> The use case I have in mind here is such that I want to provide the private
> network interface address via relation data in the provides.py of my
> interface to the relating application.
> 
> This will be able to happen by calling
> hookenv.network_get('') in the layer that provides the
> interface in my charm, and passing the output to get the private interface
> ip data, to then set that in the provides side of the relation.
> 
> Tracking?
> 
> The problem:
> 
> The problem is such that it's not so straightforward to just get the
> private address from the output of network_get().
> 
> As you can see above, I could filter for the network interface name, but
> that's about the least effective way one could go about this.
> 
> Initially, I assumed the network_get() output would look different if you
> had specified a spaces constraint when deploying your application, but the
> output was similar to no spaces, i.e. spaces aren't listed in the output of
> network_get().
> 
> 
> All in all, what I'm after is a consistent way to grep either the space an
> interface is bound to, or the public vs private address from the output of
> network_get(). I think this is true for just about every provider (ones
> that use spaces at least).
> 
> Instead of the dict above, I was thinking we might namespace the interfaces
> inside of what type of interface they are to make it easier to decipher
> when parsing the network_get().
> 
> My idea is a schema like the following:
> 
> {
>     'private-networks': {
>         'my-admin-space': {
>             'addresses': [
>                 {
>                     'cidr': '172.31.48.0/20',
>                     'address': '172.31.51.59'
>                 }
>             ],
>             'interfacename': 'eth0',
>             'macaddress': '12:ba:53:58:9c:52'
>         }
>     },
>     'public-networks': {
>         'default': {
>             'addresses': [
>                 {
>                     'cidr': 'publicipaddress/32',
>                     'address': 'publicipaddress'
>                 }
>             ]
>         }
>     },
>     'fan-networks': {
>         'fan-252': {
>             ...
>         }
>     }
> }
> 
> Where all interfaces bound to spaces are considered private addresses, and
> with the assumption that if you don't specify a space constraint, your
> private network interface is bound to the "default" space.
> 
> The key thing here is the schema structure grouping the interfaces bound to
> spaces inside a private-networks level in the dict, and the introduction of
> the fact that if you don't specify a space, you get an address bound to an
> artificial "default" space.
> 
> I feel this would make things easier to consume and interface with from a
> developer standpoint.
> 
> Is this making sense? How do others feel?
> 
> 
> 
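The grouping proposed above can be sketched as a small transformation over the
raw network_get() output. This is purely illustrative - neither Juju nor
charmhelpers provides such a function, and the interface-to-space mapping is
an assumed input:

```python
# Illustrative only: partition raw network_get() output into the proposed
# private/public/fan groups. space_for_interface is an assumed mapping;
# interfaces with no known space land in an artificial "default" space.

def group_networks(info, space_for_interface=None):
    space_for_interface = space_for_interface or {}
    grouped = {'private-networks': {}, 'public-networks': {},
               'fan-networks': {}}
    for iface in info.get('bind-addresses', []):
        name = iface.get('interfacename', '')
        entry = {'addresses': iface.get('addresses', [])}
        if name.startswith('fan-'):
            grouped['fan-networks'][name] = entry
        else:
            space = space_for_interface.get(name, 'default')
            grouped['private-networks'][space] = entry
    for addr in info.get('ingress-addresses', []):
        bucket = grouped['public-networks'].setdefault(
            'default', {'addresses': []})
        bucket['addresses'].append({'cidr': addr + '/32', 'address': addr})
    return grouped

raw = {
    'ingress-addresses': ['107.22.129.65'],
    'bind-addresses': [
        {'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
         'interfacename': 'eth0', 'macaddress': '12:ba:53:58:9c:52'},
        {'addresses': [{'cidr': '252.48.0.0/12', 'address': '252.51.59.1'}],
         'interfacename': 'fan-252', 'macaddress': '1e:a2:1e:96:ec:a2'}],
}

grouped = group_networks(raw, {'eth0': 'my-admin-space'})
print(sorted(grouped['private-networks']))  # ['my-admin-space']
print(sorted(grouped['fan-networks']))      # ['fan-252']
```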

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Juju 2.3 beta1 is here!

2017-10-05 Thread Ian Booth
After many months of effort, we're pleased to announce the release of the first
beta for the upcoming Juju 2.3 release. This release has many long requested new
features, some of which are highlighted below.

Please note that because this is a beta release (the first one at that), there
may well be bugs or rough edges that will be polished over the next betas
prior to release. We encourage everyone to provide feedback so that we may
address any issues.

Also note that some of the documentation for the new features is also in beta
and undergoing revision and completion over the next few weeks. In particular
the cross model relations documentation is still in development.

## New and Improved

### FAN networking in containers (initial support)

A new "container-networking-method" model config attribute is introduced with 3
possible values: "local", "fan", "provider".
* local = use local bridge lxdbr0
* provider = containers get their IP address from the cloud via DHCP
* fan = use FAN

The default is "provider" if the cloud supports it; otherwise "fan" if FAN is
configured, else "local".
On AWS, FAN works out of the box. For other clouds, a new fan-config model
option needs to be used, eg

juju model-config fan-config="<underlay CIDR>=<overlay CIDR>"
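The fallback described above reads as a simple preference order. A sketch of
that reading follows - this mirrors the release notes as I understand them,
not Juju's actual implementation:

```python
# Sketch of the documented preference order: "provider" when the cloud
# supports container addressing, then "fan" when fan-config is set,
# falling back to "local" (the lxdbr0 bridge). Illustrative only.

def default_networking_method(provider_supported, fan_configured):
    if provider_supported:
        return 'provider'
    if fan_configured:
        return 'fan'
    return 'local'

print(default_networking_method(True, True))    # provider
print(default_networking_method(False, True))   # fan
print(default_networking_method(False, False))  # local
```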

### Update application series

It's now possible to update the underlying OS series associated with an already
deployed application.

juju update-series <application> <series>

will ensure that any new units deployed will now use the requested series.

juju update-series <machine> <series>

will inform the charms already deployed to the machine that the OS series has
been changed and they should re-configure accordingly. This requires charm
support and for the underlying OS to be upgraded manually beforehand.

For more detail, see the documentation
https://jujucharms.com/docs/devel/howto-updateseries

### Cross model relations

This feature allows workloads to be deployed and related across models, and even
across controllers. Note that some charms such as postgresql, prometheus (and
others) need to be updated to be cross model compatible - this work is underway.

For more detail, see the beta documentation
https://jujucharms.com/docs/devel/models-cmr/

*Note: this cross model relations documentation is also still in beta and is
incomplete.*

### LXD storage provider

Juju storage is now supported by the LXD local cloud. The available storage
options include:
- lxd (default, directory based)
- btrfs
- zfs

For more detail, see the documentation
https://jujucharms.com/docs/devel/charms-storage#lxd-(lxd)

### Persistent storage management

Storage can be detached and reattached from/to units without losing the data on
that storage. The supported scenarios include:
- explicit detach / attach while the units are still active
- retain storage when a unit or application is destroyed
- retain storage when a model is destroyed
- deploy a charm using previously detached storage

The default behaviour now is to retain storage, unless destroy has explicitly
been requested when running the command.

Storage which is retained can then be reattached to a different unit. Filesystem
storage can be imported into a different model, from where it can be attached to
units in that model, or used when deploying a new charm.

For more detail, see the documentation
https://jujucharms.com/docs/devel/charms-storage


## Fixes

For a list of all bugs fixed in this release, see
https://launchpad.net/juju/+milestone/2.3-beta1

Some important fixes include:

* can't bootstrap openstack if nova and neutron AZs differ
https://bugs.launchpad.net/juju/+bug/1689683
* cache vSphere images in datastore to avoid repeated downloads
https://bugs.launchpad.net/juju/+bug/1711019
* juju run-action can be run on multiple units
https://bugs.launchpad.net/juju/+bug/1667213


## How can I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package (see https://snapcraft.io/ for more info on snaps).

 snap install juju --beta --classic

Other packages are available for a variety of platforms. Please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be automatically upgraded. If you’re using
the ppa/homebrew, you should see an upgrade available.


## Feedback Appreciated!

We encourage everyone to let us know how you're using Juju. Send us a
message on Twitter using #jujucharms, join us at #juju on freenode, and
subscribe to the mailing list at j...@lists.ubuntu.com.


## More information

To learn more about Juju please visit https://jujucharms.com.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Weekly Development Summary

2017-08-18 Thread Ian Booth
Hi folks

A summary of what we've been doing this week in Juju.

Two new Azure regions have been added - koreasouth and koreacentral.
To use these on an existing Juju 2.x install, simply run
$ juju update-clouds

We continue to prepare for the 2.2.3 release. We were hoping to pull the trigger
this week but a few new issues from stakeholders were added to the milestone.
https://launchpad.net/juju/+milestone/2.2.3

Some issues fixed or being finalised:
- a particularly nasty Mongo replica set issue affecting some HA deployments
- some model destruction issues
- pending resources when an application is deployed again after failing once
- cloud names with underscores
- better able to handle duplicate instance ids in MAAS when a node fails to
deploy and is reused later

A new command "update-series" has been added which allows the series for an
application to be updated. Any new units deployed for that application
will use the specified series. We're working on a variation of the command to
allow the series for existing units/machines to be updated also.

On the cross model relations front, juju status and juju list-offers commands
have been tweaked to improve their output. The "list-offers" (offers) command by
default shows connection details to each offer, including user and relation id.
The "remove-relation" command now accepts a relation id and so it's possible to,
on the offering side, remove a cross model relation. Work is still being done to
support temporarily revoking a relation rather than removing it outright.

We continue to expand CI test coverage of Juju features. This week the
persistent storage feature gained test coverage.

Quick links:
  Work pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop
  Recent 2.2 commits: https://github.com/juju/juju/commits/2.2

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Weekly Development Summary

2017-08-11 Thread Ian Booth
Hi folks

Here's a quick wrap up of what the Juju team has been doing this week.

We're almost ready for a new 2.2.3 release. Issues addressed are found on the
milestone:
https://launchpad.net/juju/+milestone/2.2.3

Some highlights include bundles supporting local resources, migration and
upgrade fixes, and machine placement directives ignoring constraints.

The work to allow upgrades from 1.25 continues and we're close to a working
proof of concept. Challenges have included lxc to lxd upgrades and dealing with
the significant difference between the 1.25 and 2.x data models.

On the cross model relations front, support for multi-controller relations now
includes a complete macaroon based authentication mechanism.

More usability improvements have landed, including clean up of the juju
resources commands and other papercuts.

The relations section of Juju status in tabular format has been cleaned up based
on feedback from the field. Display of bogus subordinate relations is fixed, and
the content has been enhanced to display both endpoints, ordered by the provider
application. Check it out and any additional feedback welcome.

The Jenkins infrastructure used for landing and CI continues to improve at a
rapid pace. There's been awesome work done to make everything robust and
maintainable and remove all the special case scripts and slave machines. This
has all been behind the scenes. But over the past couple of weeks the work to
integrate the Open Blue Ocean plugin means that developers gain a fantastic view
into the progress of their landing job and can easily drill down to see the
cause of any test failures.

Quick links:
  Work pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop
  Recent 2.2 commits: https://github.com/juju/juju/commits/2.2

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Ian Booth


On 23/05/17 06:39, Stuart Bishop wrote:
> On 22 May 2017 at 20:02, roger peppe  wrote:
> 
>> not to show in the status history.  Given that the motivation behind
>> the proposal is to reduce load on the database and on controllers, I
> 
> One of the motivations was to reduce load. Another motivation, that
> I'm more interested in, was to make the status log history readable.
> Currently it is page after page of noise about update-status running
> with occasional bits of information.
> 
> (I'll leave it to others to argue if it is better to fix this when
> generating the report or by not logging the noise in the first place)
> 

Since Juju 2.1.1, the juju show-status-log command no longer shows
status-history entries by default. There's a new --include-status-updates flag
which can be used if those entries are required in the output.

There's also squashing of repeated log entries. These enhancements were meant
to address the "I don't want to see it" problem.

The idea to not record it was meant to address the load issue (both retrieval
and recording). As part of the ongoing performance tuning and scaling efforts,
some hard numbers are being gathered to measure the impact of keeping
update-status in the database.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Ian Booth


On 22/05/17 18:23, roger peppe wrote:
> I think it's slightly unfortunate that update-status exists at all -
> it doesn't really need to,
> AFAICS, as a charm can always do the polling itself if needed; for example:
> 
> while :; do
>  sleep 30
>  juju-run $UNIT 'status-set current-status "this is what is 
> happening"'
> done &
> 
> Or (better) use juju-run to set the status when the workload
> executable starts and exits, avoiding the need for polling at all.
>

It's not sufficient to just set the status when the workload starts and exits.
One example is a database which periodically goes offline for a short time for
maintenance. The workload executable itself should not have to know how to
communicate this to Juju. Because the agent runs the update-status hook
periodically, the charm itself can establish whether the database status
should be marked as "maintenance", for example. Using a hook gives all charms
a standard, consistent way to communicate workload status.
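As an illustration of that pattern, here is a hypothetical update-status
handler for the database example. probe_db and status_set are stand-ins
supplied for the sketch, not real charmhelpers APIs:

```python
# Hypothetical update-status handler: the charm, not the workload, maps the
# database's current state onto a Juju workload status each time the agent
# runs the hook. probe_db and status_set are assumed stand-ins.

def update_status(probe_db, status_set):
    state = probe_db()  # e.g. 'up', 'maintenance-window', 'down'
    if state == 'maintenance-window':
        status_set('maintenance', 'scheduled maintenance in progress')
    elif state == 'up':
        status_set('active', 'database is serving requests')
    else:
        status_set('blocked', 'database is unreachable')

# Demo with fake probes recording what status would be set.
calls = []
update_status(lambda: 'maintenance-window',
              lambda status, msg: calls.append(status))
print(calls)  # ['maintenance']
```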


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Weekly Development Summary

2017-05-19 Thread Ian Booth
Hi folks

That time of the week again - almost beer o'clock for those of us in Aus/NZ
timezones - and also time to recap on the happenings in the land of Juju
development over the past 7 days.

We're working hard to get a Juju 2.2 out the door. The week saw a release of 2.2
beta4 which included usability improvements to actions, openstack and oracle
providers. Focus this week has been on squashing a number of stakeholder bugs
and CI test failures. We aim to release an RC in a week or so all going well.

A couple of key development highlights apart from the usual fare of bug fixes
include:

- close to finishing improvements to how Juju storage operates - expect a snap
early next week to try out the feature which is targeted for Juju 2.3. You will
gain the ability to destroy a unit but leave its storage behind; this storage
can then be attached to a different unit or re-used when deploying a new
application instance.

- all of the CI and QA tools and scripts and test frameworks have been moved
across from Launchpad to live under the Juju repo on github. This is part of the
ongoing process to revamp and improve our test infrastructure to make everything
more robust and maintainable, and make writing CI tests as easy as possible.

- model migration improvements so that things play nicely together in a JAAS
world in addition to individual controllers

Quick links:
  Work Pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju/retry take 2 - looping

2016-10-20 Thread Ian Booth
I really like where the enhancements are headed. I feel they offer the syntax
that some folks wanted, with the safety and validation of the initial
implementation. Best of both worlds.

On 20/10/16 13:09, Tim Penhey wrote:
> Hi folks,
> 
> https://github.com/juju/retry/pull/5/files
> 
> As often is the case, the pure solution is not always the best. What seemed
> initially like the best approach didn't end up that way.
> 
> Both Katherine and Roger had other retry proposals that got me thinking about
> changes to the juju/retry package. The stale-mate from the tech board made me
> want to try another approach that I thought about while walking the dog today.
> 
> I wanted the security and fall-back of validation of the various looping
> attributes, while making the call site much more obvious.
> The pull request has the result of this attempt.
> 
> It is by no means perfect, but an improvement I think. I was able to trivially
> reimplement retry.Call with the retry.Loop concept with no test changes.
> 
> The tests are probably the best way to look at the usage.
> 
> Tim
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Github Reviews vs Reviewboard

2016-10-13 Thread Ian Booth
-1000 :-)

On 14/10/16 08:44, Menno Smits wrote:
> We've been trialling Github Reviews for some time now and it's time to
> decide whether we stick with it or go back to Reviewboard.
> 
> We're going to have a vote. If you have an opinion on the issue please
> reply to this email with a +1, 0 or -1, optionally followed by any further
> thoughts.
> 
>- +1 means you prefer Github Reviews
>- -1 means you prefer Reviewboard
>- 0 means you don't mind.
> 
> If you don't mind which review system we use there's no need to reply
> unless you want to voice some opinions.
> 
> The voting period starts *now* and ends by *EOD next Friday (October 21)*.
> 
> As a refresher, here are the concerns raised for each option.
> 
> *Github Reviews*
> 
>- Comments disrupt the flow of the code and can't be minimised,
>hindering readability.
>- Comments can't be marked as done making it hard to see what's still to
>be taken care of.
>- There's no way to distinguish between a problem and a comment.
>- There's no summary of issues raised. You need to scroll through the
>often busy discussion page.
>- There's no indication of which PRs have been reviewed from the pull
>request index page nor is it possible to see which PRs have been approved
>or otherwise.
>- It's hard to see when a review has been updated.
> 
> *Reviewboard*
> 
>- Another piece of infrastructure for us to maintain
>- Higher barrier to entry for newcomers and outside contributors
>- Occasionally misses Github pull requests (likely a problem with our
>integration so is fixable)
>- Poor handling of deleted and renamed files
>- Falls over with very large diffs
>- 1990's looks :)
>- May make future integration of tools which work with Github into our
>process more difficult (e.g. static analysis or automated review tools)
> 
> There has been talk of evaluating other review tools such as Gerrit and
> that may still happen. For now, let's decide between the two options we
> have recent experience with.
> 
> - Menno
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


upcoming change in Juju 2.0 to bootstrap arguments

2016-10-12 Thread Ian Booth
See https://bugs.launchpad.net/juju/+bug/1632919

The order of the cloud/region and controller name arguments will be swapped.

Old:

$ juju bootstrap mycontroller aws/us-east-1

New:

$ juju bootstrap aws/us-east-1 mycontroller
or now
$ juju bootstrap aws/us-east-1

Notice how controller name is optional. It will default to cloud-region.
eg

$ juju bootstrap aws
Creating Juju controller "aws-us-east-1" on aws/us-east-1
...

The only fallout I expect is for folks like OIL who use scripts: they will
have to tweak their scripts to swap the arguments. The bootstrap API itself is
unaffected, so the Python client and other API users will see no difference.
It's just a CLI change.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Ian Booth

On 16/09/16 03:50, Nate Finch wrote:
> Reviewboard goes down a couple times a month, usually from lack of disk
> space or some other BS.  According to a source knowledgeable with these
> matters, the charm was rushed out, and the agent for that machine is down
> anyway, so we're kinda just waiting for the other shoe to drop.
> 
> As for the process things that Ian mentioned, most of those can be
> addressed with a sprinkling of convention.  Marking things as issues could
> just be adding :x: to the first line (github even pops up suggestions and
> auto-completes), thusly:
> 
> [image: :x:]This will cause a race condition
> 
> And if you want to indicate you're dropping a suggestion, you can use :-1:
>  which gives you a thumbs down:
> 
> [image: :-1:] I ran the race detector and it's fine.
> 
> It won't give you the cumulative "what's left to fix" at the top of the
> page, like reviewboard... but for me, I never directly read that, anyway,
> just used it to see if there were zero or non-zero comments left.
>

If we want to do a trial, and we acknowledge that there are functional gaps, and
we are prepared to work around those using convention, then we should document
what those conventions are so that everyone takes a consistent approach.


> As for the inline comments in the code - there's a checkbox to hide them
> all.  It's not quite as convenient as the gutter indicators per-comment,
> but it's sufficient, I think.
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Ian Booth


On 16/09/16 08:54, Anastasia Macmood wrote:
> On 16/09/16 08:02, Ian Booth wrote:
>> Another data point - in the past, when we have had PRs which touch a lot of
>> files (eg change the import path for a dependency), review board paginates
>> the diff so it's much easier to manage, whereas I've seen github actually
>> truncate what it displays because the diff is "too large". Hopefully this
>> will no longer be an issue, or else we won't be able to review such changes
>> in the future.
> This is perfect to reduce the size of our proposals to manageable :)
>>

The point is that that's not always possible. The example given was where we
need to update import paths due to a dependency change. That has to be done all
in one go. There are other occasions as well where sometimes a mechanical change
needs to touch a lot of files in the one PR. We just need to be sure that any RB
replacement caters for those scenarios.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Ian Booth
Another data point - in the past, when we have had PRs which touch a lot of
files (eg change the import path for a dependency), review board paginates the
diff so it's much easier to manage, whereas I've seen github actually truncate
what it displays because the diff is "too large". Hopefully this will no longer
be an issue, or else we won't be able to review such changes in the future.

On 16/09/16 07:53, Menno Smits wrote:
> Although I share some of Ian's concerns, I think the reduced moving parts,
> improved reliability, reduced maintenance, easier experience for outside
> contributors and better handling of file moves are pretty big wins. The
> rendering of diffs on Github is a whole lot nicer as well.
> 
> I'm +1 for trialling the new review system on Github for a couple of weeks
> as per Andrew's suggestion.
> 
> On 16 September 2016 at 05:50, Nate Finch <nate.fi...@canonical.com> wrote:
> 
>> Reviewboard goes down a couple times a month, usually from lack of disk
>> space or some other BS.  According to a source knowledgeable with these
>> matters, the charm was rushed out, and the agent for that machine is down
>> anyway, so we're kinda just waiting for the other shoe to drop.
>>
>> As for the process things that Ian mentioned, most of those can be
>> addressed with a sprinkling of convention.  Marking things as issues could
>> just be adding :x: to the first line (github even pops up suggestions and
>> auto-completes), thusly:
>>
>> [image: :x:]This will cause a race condition
>>
>> And if you want to indicate you're dropping a suggestion, you can use :-1:
>>  which gives you a thumbs down:
>>
>> [image: :-1:] I ran the race detector and it's fine.
>>
>> It won't give you the cumulative "what's left to fix" at the top of the
>> page, like reviewboard... but for me, I never directly read that, anyway,
>> just used it to see if there were zero or non-zero comments left.
>>
>> As for the inline comments in the code - there's a checkbox to hide them
>> all.  It's not quite as convenient as the gutter indicators per-comment,
>> but it's sufficient, I think.
>>
>> On Wed, Sep 14, 2016 at 6:43 PM Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>>
>>>
>>> On 15/09/16 08:22, Rick Harding wrote:
>>>> I think that the issue is that someone has to maintain the RB and the
>>>> cost/time spent on that does not seem commensurate with the bonus
>>> features
>>>> in my experience.
>>>>
>>>
>>> The maintenance is not that great. We have SSO using github credentials so
>>> there's really no day to day work IIANM. As a team, we do many, many
>>> reviews
>>> per day, and the features I outlined are significant and things I (and I
>>> assume
>>> others) rely on. Don't under estimate the value in knowing why a comment
>>> was
>>> rejected and being able to properly track that. Or having review comments
>>> collapsed as a gutter indicated so you can browse the code and still know
>>> that
>>> there's a comment there. With github, you can hide the comments but
>>> there's no
>>> gutter indicator. All these things add up to a lot.
>>>
>>>
>>>> On Wed, Sep 14, 2016 at 6:13 PM Ian Booth <ian.bo...@canonical.com>
>>> wrote:
>>>>
>>>>> One thing review board does better is use gutter indicators so as not
>>> to
>>>>> interrupt the flow of reading the code with huge comment blocks. It
>>> also
>>>>> seems
>>>>> much better at allowing previous commits with comments to be viewed in
>>>>> their
>>>>> entirety. And it allows the reviewer to differentiate between issues
>>> and
>>>>> comments (ie fix this vs take note of this), plus it allows the notion
>>> of
>>>>> marking stuff as fixed vs dropped, with a reason for dropping if
>>> needed.
>>>>> So the
>>>>> github improvements are nice but there's still a large and significant
>>> gap
>>>>> that
>>>>> is yet to be filled. I for one would miss all the features reviewboard
>>>>> offers.
>>>>> Unless there's a way of doing the same thing in github that I'm not
>>> aware
>>>>> of.
>>>>>
>>>>> On 15/09/16 07:22, Tim Penhey wrote:
>>>>>> I'm +1 if we can remove the extra tools and we don't get email per
>>>>> comment.
>>>>>>
>>

Re: Reviews on Github

2016-09-14 Thread Ian Booth
One thing review board does better is use gutter indicators so as not to
interrupt the flow of reading the code with huge comment blocks. It also seems
much better at allowing previous commits with comments to be viewed in their
entirety. And it allows the reviewer to differentiate between issues and
comments (ie fix this vs take note of this), plus it allows the notion of
marking stuff as fixed vs dropped, with a reason for dropping if needed. So the
github improvements are nice but there's still a large and significant gap that
is yet to be filled. I for one would miss all the features reviewboard offers.
Unless there's a way of doing the same thing in github that I'm not aware of.

On 15/09/16 07:22, Tim Penhey wrote:
> I'm +1 if we can remove the extra tools and we don't get email per comment.
> 
> On 15/09/16 08:03, Nate Finch wrote:
>> In case you missed it, Github rolled out a new review process.  It
>> basically works just like reviewboard does, where you start a review,
>> batch up comments, then post the review as a whole, so you don't just
>> write a bunch of disconnected comments (and get one email per review,
>> not per comment).  The only features reviewboard has is the edge case
>> stuff that we rarely use:  like using rbt to post a review from a random
>> diff that is not connected directly to a github PR. I think that is easy
>> enough to give up in order to get the benefit of not needing an entirely
>> separate system to handle reviews.
>>
>> I made a little test review on one PR here, and the UX was almost
>> exactly like working in reviewboard: https://github.com/juju/juju/pull/6234
>>
>> There may be important edge cases I'm missing, but I think it's worth
>> looking into.
>>
>> -Nate
>>
>>
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


A couple of API changes coming in Juju beta18 this week

2016-09-08 Thread Ian Booth
Just a heads up: 3 APIs are moving to a different facade. There are no
semantic changes other than the move. The only externally visible end-user
difference is that the juju model-defaults command operates only on a
controller and no longer supports specifying a model using -m.

The APIs are to do with setting inherited default model values, so if you don't
care about those, don't bother reading on. These APIs are quite new so hopefully
any downstream impact will be zero or negligible.

The APIs are

ModelDefaults() (config.ModelDefaultAttributes, error)
SetModelDefaults(cloud, region string, config map[string]interface{})
UnsetModelDefaults(cloud, region string, keys ...string) error

These were on the ModelConfig facade but are now on the ModelManager facade. The
latter is a facade which is accessed via a controller endpoint rather than a
model endpoint.





-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Ian Booth


On 16/08/16 12:58, Tim Penhey wrote:
> 
> 
> On 16/08/16 10:50, Ian Booth wrote:
>>
>> On 16/08/16 03:09, Nate Finch wrote:
>>> Ian, can you describe how Juju decides if it's running for a developer or
>>> an end user?  I'm worried this could trip people up who are both end users
>>> and happen to have a juju development environment.
>>>
>>
>> It's not so much Juju deciding - the use cases given were from the point
>> of view of a developer or end user.
>>
>> Juju will decide that it can automatically fall back to try to find and
>> use a local jujud (so long as the version of the jujud found matches that
>> of the Juju client being used to bootstrap or upgrade) if:
>>
>> - the Juju client version is newer than the agents running
>> - the client or agents have a build number > 0
>>
>> (the build number is 0 for released Juju agents but non-zero when jujud is
>> used or built locally from source).
> 
> But this isn't entirely true is it? The build number is a horrible hack
> involving a version override file.
> 
> When I build jujud locally from source there is no version override and it is
> just the version as defined in the code I'm building.
> 

My wording was sadly suboptimal.
The agent reports a version containing a non-zero build number if uploaded or
built from source. So I was trying to refer to the version that the client had
reported to it.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Ian Booth

On 16/08/16 03:09, Nate Finch wrote:
> Ian, can you describe how Juju decides if it's running for a developer or
> an end user?  I'm worried this could trip people up who are both end users
> and happen to have a juju development environment.
>

It's not so much Juju deciding - the use cases given were from the point of view
of a developer or end user.

Juju will decide that it can automatically fall back to finding and using a
local jujud (so long as the version of the jujud found matches that of the
Juju client being used to bootstrap or upgrade) if:

- the Juju client version is newer than the agents running
- the client or agents have a build number > 0

(the build number is 0 for released Juju agents but non-zero when jujud is
used or built locally from source).

The above behaviour covers the use cases previously described:

- users always deploys / upgrades released versions
- users deploy a released version and want to upgrade to a version built from
source for testing
- users deploy from source and want to hack some more and upgrade for testing
- users have a deployed from source system and then a newer released agent comes
out and they want to upgrade to that *

*generally we don't support upgrades between non-released versions, so if
there's db schema changes or whatever, you're on your own

In all the above cases, juju bootstrap or juju upgrade-juju will work without
special arguments.




Latest news about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Ian Booth
So if you pull master you'll no longer need to use upload-tools.
Juju will Do the Right Thing* when you type:

$ juju bootstrap mycontroller aws|lxd|whatever
or
$ juju upgrade-juju

*so long as your $GOPATH/bin is in your path (as a developer).

1. As a user, you bootstrap a controller using a released client (incl betas)

Juju will look for and find a packaged agent binary via simplestreams and use
that.

2. As a user, you want to upgrade a Juju system using a newer release (incl 
betas)

Juju will look for and find a packaged agent binary via simplestreams and use
that.

3. As a developer, you want to deploy with a Juju built locally from source

You'll first build your work, go install github.com/juju/juju/..., then

$ juju bootstrap mycontroller lxd

4. As a developer, you want to upgrade a running system using some local changes
you've been hacking on. This works for either a system running a released
version, or a system running a development version, ie either case 1, 2 or 3 
above

You'll first build your work, go install github.com/juju/juju/..., then

$ juju upgrade-juju

It should be apparent that there's now no difference in Juju commands when
running a production system or a development one.

As a developer, you also have the "advanced" option of not building the Juju
source before bootstrapping or upgrading, and instead asking Juju to do it for
you. This is similar to the current behaviour, can be error prone, and is
generally unnecessary. But if needed:

$ juju bootstrap mycontroller lxd --build-agent
$ juju upgrade-juju --build-agent

But as I said, there's really no need for this, so long as, as a developer, you
have your $GOPATH/bin directory early in your path so that the locally built
juju gets used in preference to /usr/bin/juju.

These changes also support running Juju for a single architecture using a snap.

Please let me know if there are any questions. --upload-tools is still supported
but will be removed soon. You can use --show-logs to see what Juju is doing.
I must admit, not having to type --upload-tools all the time is, to quote Borat,
"Nice".


TODO

We still need to add a --agent-binary option to upgrade-juju so you can point it
at the new jujud you want it to use. This is to allow developers to upgrade a
system running from a snap: go install, then use the resulting binary after
copying it somewhere the snap can see. There's a bit of usability to work out
there. It also allows you to send someone a jujud binary which they can just use
directly, rather than messing around with tarballs and simplestreams as they
need to do today.







Re: Some Juju CLI usability thoughts before we close 2.0

2016-08-11 Thread Ian Booth


On 11/08/16 17:46, Ian Booth wrote:
> 
> On 11/08/16 17:03, John Meinel wrote:
>> On Thu, Aug 11, 2016 at 9:30 AM, Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>> A few things have been irking me with some aspects of Juju's CLI. Here's a
>>> few
>>> thoughts from a user perspective (well, me as user, YMMV).
>>>
>>> The following pain points mainly revolve around commands that operate on a
>>> controller rather than a model.
>>>
>>> eg
>>>
>>> $ juju login [-c controllername] fred
>>> $ juju logout [-c controllername]
>>>
>>
>> I would agree with 'juju login' as it really seems you have to supply the
>> controller, especially since we explicitly disallow logging into the same
>> controller twice. The only case is 'juju logout && juju login'. Or the 'I
>> have to refresh my macaroon' but in that case couldn't we just do "juju
>> login" with no args at all?
>>
>>
>>
>>>
>>> I really think the -c arg is not that natural here.
>>>
>>> $ juju login controllername fred
>>> $ juju logout controllername
>>>
>>> seem a lot more natural and also explicit, because
>>> I know without args, the "current" controller will be used...
>>> but it's not in your face what that is without running list-controllers
>>> first,
>>> and so it's too easy to logout of the wrong controller accidentally. Having
>>> positional args solves that.
>>>
>>
>> I'm fine with an optional positional arg and "juju logout" removes you from
>> the current controller. As it isn't destructive (you can always login
>> again), as long as the command lets you know *what* you just logged out of,
>> you can undo your mistake. Vs "destroy-controller" which is significantly
>> more permanent when it finishes.
>>
>>
>>
>>>
>>> The same would then apply to other controller commands, like eg add-model
>>>
>>> $ juju add-model mycontroller mymodel
>>>
>>> One thing that might be an issue for people is if they only have one
>>> controller,
>>> then
>>>
>>> $ juju logout
>>> or
>>> $ juju add-model
>>>
>>> would just work and requiring a controller name is more typing.
>>>
>>
>> I disagree on 'juju add-model', as I think we have a very good concept of
>> "current context" and adding another model to your current controller is a
>> natural event.
>>
> 
> Fair point.
> 
>>
>>>
>>> But 2 points there:
>>> 1. as we move forward, people reasonably have more than one controller on
>>> the go
>>> at any time, and being explicit about what controller you are wanting to
>>> use is
>>> a good thing
>>> 2. in the one controller case, we could simply make the controller name
>>> optional
>>> so juju logout just works
>>>
>>> We already use a positional arg for destroy-controller - it just seems
>>> natural
>>> to do it everywhere for all controller commands.
>>>
>>
>> destroy-controller was always a special case because it is an unrecoverable
>> operation. Almost everything else you can use current context and if it was
>> a mistake you can easily recover.
>>
>>
>>>
>>> Anyways, I'd like to see what others think, mine is just the perspective
>>> of one
>>> user. I'd be happy to do a snap and put it out there to get feedback.
>>>
>>> --
>>>
>>> Another issue - I would really, really like a "juju whoami" command. We
>>> used to
>>> use juju switch-user without args for that, but now it's gone.
>>>
>>> When you are staring at a command prompt and you know you have several
>>> controllers and different logins active, I really want to just go:
>>>
>>> $ juju whoami
>>> Currently active as fred@controller2
>>
>>
>> I'd say you'd want what your current user, controller and model is, as that
>> is the current 'context' for commands.
>>
> 
> Agreed, adding model would be necessary.
> 

And as Rick pointed out, this mirrors charm whoami nicely too. So we get that
level of consistency across our Juju commands.

>>
>>>
>>
>>
>>> Just to get a quick reminder of what controller I am operating on and who
>>> I am
>>> logged in as on the controller.  I know we have a way of d

Re: Some Juju CLI usability thoughts before we close 2.0

2016-08-11 Thread Ian Booth

On 11/08/16 17:03, John Meinel wrote:
> On Thu, Aug 11, 2016 at 9:30 AM, Ian Booth <ian.bo...@canonical.com> wrote:
>
> > A few things have been irking me with some aspects of Juju's CLI. Here's a
> > few
> > thoughts from a user perspective (well, me as user, YMMV).
> >
> > The following pain points mainly revolve around commands that operate on a
> > controller rather than a model.
> >
> > eg
> >
> > $ juju login [-c controllername] fred
> > $ juju logout [-c controllername]
> >
>
> I would agree with 'juju login' as it really seems you have to supply the
> controller, especially since we explicitly disallow logging into the same
> controller twice. The only case is 'juju logout && juju login'. Or the 'I
> have to refresh my macaroon' but in that case couldn't we just do "juju
> login" with no args at all?
>
>
>
> >
> > I really think the -c arg is not that natural here.
> >
> > $ juju login controllername fred
> > $ juju logout controllername
> >
> > seem a lot more natural and also explicit, because
> > I know without args, the "current" controller will be used...
> > but it's not in your face what that is without running list-controllers
> > first,
> > and so it's too easy to logout of the wrong controller accidentally. Having
> > positional args solves that.
> >
>
> I'm fine with an optional positional arg and "juju logout" removes you from
> the current controller. As it isn't destructive (you can always login
> again), as long as the command lets you know *what* you just logged out of,
> you can undo your mistake. Vs "destroy-controller" which is significantly
> more permanent when it finishes.
>
>
>
> >
> > The same would then apply to other controller commands, like eg add-model
> >
> > $ juju add-model mycontroller mymodel
> >
> > One thing that might be an issue for people is if they only have one
> > controller,
> > then
> >
> > $ juju logout
> > or
> > $ juju add-model
> >
> > would just work and requiring a controller name is more typing.
> >
>
> I disagree on 'juju add-model', as I think we have a very good concept of
> "current context" and adding another model to your current controller is a
> natural event.
>

Fair point.

>
> >
> > But 2 points there:
> > 1. as we move forward, people reasonably have more than one controller on
> > the go
> > at any time, and being explicit about what controller you are wanting to
> > use is
> > a good thing
> > 2. in the one controller case, we could simply make the controller name
> > optional
> > so juju logout just works
> >
> > We already use a positional arg for destroy-controller - it just seems
> > natural
> > to do it everywhere for all controller commands.
> >
>
> destroy-controller was always a special case because it is an unrecoverable
> operation. Almost everything else you can use current context and if it was
> a mistake you can easily recover.
>
>
> >
> > Anyways, I'd like to see what others think, mine is just the perspective
> > of one
> > user. I'd be happy to do a snap and put it out there to get feedback.
> >
> > --
> >
> > Another issue - I would really, really like a "juju whoami" command. We
> > used to
> > use juju switch-user without args for that, but now it's gone.
> >
> > When you are staring at a command prompt and you know you have several
> > controllers and different logins active, I really want to just go:
> >
> > $ juju whoami
> > Currently active as fred@controller2
>
>
> I'd say you'd want what your current user, controller and model is, as that
> is the current 'context' for commands.
>

Agreed, adding model would be necessary.

>
> >
>
>
> > Just to get a quick reminder of what controller I am operating on and who
> > I am
> > logged in as on the controller.  I know we have a way of doing that via
> > list
> > controllers, but if there's a few, or even if not, you still need to scan
> > your
> > eyes down a table and look for the one with the * to see the current one
> > and then
> > scan across to see the user etc. It's all a lot harder than just a
> > whoami
> > command IMO.
> >
> > --
> >
> > We will need a juju shares command to show who has access to a controller,
> > now
> > that we have controller permissions login, addmodel, superuser.
> >
> > For models, we suppo

Some Juju CLI usability thoughts before we close 2.0

2016-08-10 Thread Ian Booth
A few things have been irking me with some aspects of Juju's CLI. Here's a few
thoughts from a user perspective (well, me as user, YMMV).

The following pain points mainly revolve around commands that operate on a
controller rather than a model.

eg

$ juju login [-c controllername] fred
$ juju logout [-c controllername]

I really think the -c arg is not that natural here.

$ juju login controllername fred
$ juju logout controllername

seem a lot more natural and also explicit, because
I know without args, the "current" controller will be used...
but it's not in your face what that is without running list-controllers first,
and so it's too easy to logout of the wrong controller accidentally. Having
positional args solves that.

The same would then apply to other controller commands, like eg add-model

$ juju add-model mycontroller mymodel

One thing that might be an issue for people is if they only have one controller,
then

$ juju logout
or
$ juju add-model

would just work and requiring a controller name is more typing.

But 2 points there:
1. as we move forward, people reasonably have more than one controller on the go
at any time, and being explicit about what controller you are wanting to use is
a good thing
2. in the one controller case, we could simply make the controller name optional
so juju logout just works

We already use a positional arg for destroy-controller - it just seems natural
to do it everywhere for all controller commands.

Anyways, I'd like to see what others think, mine is just the perspective of one
user. I'd be happy to do a snap and put it out there to get feedback.

--

Another issue - I would really, really like a "juju whoami" command. We used to
use juju switch-user without args for that, but now it's gone.

When you are staring at a command prompt and you know you have several
controllers and different logins active, I really want to just go:

$ juju whoami
Currently active as fred@controller2

Just to get a quick reminder of what controller I am operating on and who I am
logged in as on the controller.  I know we have a way of doing that via list
controllers, but if there's a few, or even if not, you still need to scan your
eyes down a table and look for the one with the * to see the current one and then
scan across to see the user etc. It's all a lot harder than just a whoami
command IMO.

-- 

We will need a juju shares command to show who has access to a controller, now
that we have controller permissions login, addmodel, superuser.

For models, we support:

$ juju shares -m model
$ juju shares (for the default model)

What do we want for controller shares?

$ juju shares-controller  ?

which would support positional arg

$ juju shares-controller mycontroller   ?

--

On the subject of shares, the shares command shows all users with access to a
model (or soon a controller as per above). That's great for admins to see who
they are sharing their stuff with. What I'd like as a user is a command to tell
me what level of access I have to various controllers and models. I'd like this
in list-controllers and list-models.

$ juju list-controllers

CONTROLLER       MODEL    USER         CLOUD/REGION   ACCESS
fredcontroller*  foo      fred@local                  addmodel
ian              default  admin@local  lxd/localhost  superuser

$ juju list-models

MODEL  OWNER       STATUS     ACCESS  LAST CONNECTION
foo*   fred@local  available  write   5 minutes ago

The above would make it much easier to see if I could add a model or deploy an
application etc. And I don't get to see who else has access like with juju
shares, just my own access levels. Thoughts?













Re: Juju and snappy implementation spike - feedback please

2016-08-09 Thread Ian Booth
I personally like the idea that the snap could use a juju-home interface to
allow access to the standard ~/.local/share/juju directory; thus allowing a snap
and regular Juju to be used interchangeably (at least initially). This will
allow the use case "hey, try my juju snap and you can use your existing
settings". But isn't it verboten for snaps to access dot directories in a user's
home in any way, regardless of what any interface says? We could provide an
import tool to copy from ~/.local/share/juju to ~/snap/blah...

But in the other case, using a personal snap and sharing settings with the
official Juju snap - do we know what the official snappy story is around this
scenario? I can't imagine this is the first time it's come up?


On 09/08/16 17:27, John Meinel wrote:
> On Aug 9, 2016 1:06 AM, "Nicholas Skaggs" 
> wrote:
>>
>>
>>
>> On Mon, Aug 8, 2016 at 11:49 AM, John Meinel 
> wrote:
>>>
>>> If we are installing in '--devmode' don't we have access to the
> unfiltered $HOME directory if we look for it? And we could ask for an
> interface which is to JUJU_HOME that would give us access to just
> $HOME/.local/share/juju
>>>
>>>
>>> We could then copy that information into the normal 'home' directory of
> the snap. That might give a better experience than having to import your
> credentials anytime you want to just evaluate a dev build of juju.
>>
>> I agree this gets more difficult with the idea of sharing builds -- as
> you say, you have to re-add your credentials for each new developer. In
> regards to your thoughts on --devmode, it does give you greater access, but
> some things are still constrained. The HOME interface doesn't allow access
> to dot files or folders by default. So it's not useful in this instance. If
> juju were to change where it stores it's configuration files (aka, not in a
> dotfolder) this technically becomes not a problem as the current home
> interface would allow this.
> 
> Sure. That's why I mention "We're in dev mode now, and can ask for a
> JUJU_HOME interface vs the existing HOME one." We can also just ask for a
> "give me the root filesystem" interface that doesn't get connected by
> default.
> 
> I think not being able to publish your version of a snap and have it work
> with the "standard" version of a snap is going to be a general issue for
> anyone using snaps for development. So maybe it's a general snap property
> that can give you access to a "named" common directory.
> 
> John
> =:->
> 
>>>
>>>
>>> AIUI, the 'common' folder for a snap is only for different versions of
> the exact same snap, which means that if I publish 'juju-jameinel' it won't
> be able to share anything with 'juju-wallyworld' nor the official 'juju',
> so there isn't any reason to use it.
>>>
>>> I don't know exactly how snap content interfaces work, but it might be
> interesting to be able to share the JUJU_HOME between snaps (like they
> intend to be able to share a "pictures" or "music" directory).
>>>
>>> If we *don't* share them, then we won't easily be able to try a new Juju
> on an existing controller. (If I just want you to see how the new 'juju
> status' is formatted, you'd rather run it on a controller that has a bunch
> of stuff running.)
>>
>> It's worth mentioning / filing a bug about our needs with the snapcraft
> folks to see what options might exist. I've started conversations a few
> weeks ago and solved or got good bugs in on other juju issues. I think they
> understand the application config limitations / issues, so we can push for
> a resolution.
> 



Juju and snappy implementation spike - feedback please

2016-08-08 Thread Ian Booth
Hi folks

The below refers to work in a branch, not committed to master.

TL:DR; I've made some changes to Juju so that a custom build can be easily
snapped and shared. The snap is still required to be run in devmode until new
interfaces are done.

TL;DR2; The way upload-tools works has been changed and this will affect our QA
scripts (but I've left the old upload-tools in place for backwards 
compatibility).

This is an experiment - I have a branch which I plan to propose for merging into
master. The main area of feedback needed is:
- the replacement of upload-tools
- how to do agent upgrades in a snappy world (see end of email).

https://github.com/wallyworld/juju/tree/snappy-support

To try it out (on amd64)

$ snap install juju-wallyworld --edge --devmode
$ /snap/bin/juju-wallyworld.juju bootstrap mycontroller lxd
or
$ /snap/bin/juju-wallyworld.juju bootstrap mycontroller aws
or ...

I'm just using a super simple snapcraft.yaml file (thanks for the godeps plugin
by the way, awesome). The interesting bits are the changes in my Juju branch.

Limitation: multi-arch. Using a non-released Juju from the snap does not support
bootstrapping a controller on an arch different to that on which the snap was
compiled. This is the same as is currently the case with upload-tools.

Note: I have made a change so that the first time juju runs, update-clouds is
called. This ensures that when Juju is run from a snap, the latest information
is available for bootstrap.

$ juju bootstrap ...
Since Juju 2 is being run for the first time, downloading latest cloud 
information.
Fetching latest public cloud list...
Your list of public clouds is up to date, see `juju clouds`.
Creating Juju controller
...

The aims of this work
-
1. Make it easy to share a complete custom Juju build (client and agent) with
others (demo/try new features etc).

2. Allow Juju to be snapped so that an agent is included in the snap -
simplestreams is supported but not *required*.
(only a single arch right now)

3. Change the semantics and syntax of upload-tools to IMO "do the right thing".

4. Improved developer experience

Changes to upload-tools
---
- "upload-tools" is replaced by "build-agent"
- messages referring to "tools" now refer to "agent binary"
- "build-agent" is only *required* if you need to actually build the jujud agent
binary from source; the default behaviour is to use a jujud co-located with the
juju binary so long as the versions match *exactly*. This is normally what you
have as a developer anyway.

The practical implications are shown below.

Main Use Cases
--
1. As a developer, I want to share a custom Juju build with others to get 
feedback.

Developer:
hack, hack, hack on Juju
$ snapcraft
$ snapcraft push .snap --release edge

End user:
$ snap install  --edge --devmode
$ /snap/bin/.juju autoload-credentials (or add-credential, if needed)
$ /snap/bin/.juju add-cloud (if needed)
$ /snap/bin/.juju bootstrap .

If the intent is just to try stuff on LXD, then the add-credential and add-cloud
steps above can be skipped:

$ snap install  --edge --devmode
$ /snap/bin/.juju bootstrap mycontroller lxd

2. As a developer, I want to hack on Juju and try out my changes.

hack, hack, hack
$ go install github.com/juju/juju/...
$ juju bootstrap mycontroller lxd

Note: no build-agent (upload-tools) is needed.

3. Packaging released version of Juju
This needs some work and consultation. It may not be feasible. How to handle
agent binaries for different os/arch etc.
Maybe we just want to officially package a juju client snap that behaves just
like bootstrap today - no jujud agent binary included in snap, the juju client
creates the controller and pulls agent binaries from simplestreams.

About upload tools
--
So, the need to specify --upload-tools is now almost eliminated. And the name
has been changed to --build-agent because that's what it does (and because the
"tools" terminology is something we need to move away from).

When Juju bootstraps, it does the following:
- look in simplestreams for a compatible agent binary
- look in the same directory as the juju client for an agent binary with the
exact same version
- build an agent binary from source

It stops looking when it finds a suitable binary.
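That search order is a simple first-hit loop. A sketch of the idea; the finder functions below are stand-ins, not real juju APIs:

```go
package main

import "fmt"

// agentFinder returns the location of a usable agent binary, or ""
// if that source has none. These are stand-ins, not real juju APIs.
type agentFinder func() string

// findAgent tries each source in order and stops at the first hit,
// mirroring the bootstrap search order described above.
func findAgent(finders ...agentFinder) string {
	for _, find := range finders {
		if loc := find(); loc != "" {
			return loc
		}
	}
	return "" // nothing suitable anywhere
}

func main() {
	simplestreams := func() string { return "" } // no published match
	coLocated := func() string { return "/usr/lib/juju/jujud" }
	buildFromSource := func() string { return "jujud (built from source)" }

	// The co-located agent wins because simplestreams had nothing,
	// and building from source is never reached.
	fmt.Println(findAgent(simplestreams, coLocated, buildFromSource))
}
```

Passing --build-agent effectively skips straight to the last finder.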

As a developer, you would normally hack on Juju and then run go install. And
then run the resulting juju client. So everything would be in place to Just Work
without additional bootstrap args. But if for some reason you needed the agent
actually go built, you can still do so with --build-agent.

Developers: upgrading the agent binary in a snappy world

So, as a developer, you're testing your snap and want to make a change and see
what happens. Now, one way would be to:
- hack hack hack
- make a new snap
- publish to edge
- install new snap
- jujusnap.juju upgrade-juju

which would pick up the latest 

Re: Windows and --race tests are now gating

2016-07-10 Thread Ian Booth
Turning on gating for Windows tests before all tests were passing is premature
and is now blocking us from landing critical fixes for beta12 that we need to
release this week for inclusion into xenial. With the race tests, we got all of
those passing before turning on gating. We need to do the same for the Windows
tests. We need to deactivate gating on Windows at this stage. Of course we need
to fix the tests, but turning on gating before that is done is counterproductive
given what we need to get done this week.

On 08/07/16 03:26, Aaron Bentley wrote:
> Hi Juju developers,
> 
> As requested by juju-core, we have added --race and Windows unit tests
> to gated landings.  These tests are run concurrently, not sequentially,
> but all must complete before code can be landed.
> 
> As a practical matter, this means that landings are now impossible until
> the Windows and --race tests can pass.
> 
> The output from this initial version is a bit crude-- it will tell you
> which tasks failed (e.g. "windows"), and you then need to look at the
> corresponding logs under the Jenkins artifacts.  We aim to improve this
> soon.
> 
> Aaron
> 
> 
> 



Re: Beta9

2016-06-17 Thread Ian Booth
As well as the user visible things like the great new status output and rename
of service to application etc, beta 9 contains a lot of below the waterline
changes geared towards our future feature work. We should start to see the
benefit of that work in the next beta and upcoming release candidates.

There's also more to come on the usability front. We'll soon have a nice
interactive bootstrap experience which will guide users through the steps of
getting a controller up and running, and we're continuing the efforts to improve
error messages, CLI help, and other user facing text.

On 17/06/16 21:36, Mark Shuttleworth wrote:
> Hi all
> 
> Just to say, initial impressions of beta9 are great, the status output
> cleanup is super, thank you!
> 
> Mark
> 
> 
> 



Re: Automatic commit squashing

2016-06-16 Thread Ian Booth


On 16/06/16 19:04, David Cheney wrote:
> Counter suggestion: the bot refuses to accept PR's that contain more
> than one commit, then it's up to the submitter to prepare it in any
> way that they feel appropriate.
>

Please no. I do not want to be forced to alter my local history.

I was hopeful that having the landing bot / github squash commits would satisfy
those people who did not want to use git log --first-parent to present a
sanitised view of commits, but allow me to retain the history in my branches
locally so I could look back on what I did and when and how (if needed).

If the default github behaviour is not sufficient, perhaps we can add some
tooling in the merge bot to do the squashing prior to merging.


> On Thu, Jun 16, 2016 at 6:44 PM, roger peppe  
> wrote:
>> Squashed commits are nice, but there's something worth watching
>> out for: currently the merge commit is committed with the text
>> that's in the github PR, but when a squashed commit is made, this
>> text is ignored and only the text in the actual proposed commit ends up
>> in the history. This surprised me (I often edit the PR description
>> as the review continues) so worth being aware of, I think.
>>
>>   cheers,
>> rog.
>>
>> On 16 June 2016 at 02:12, Menno Smits  wrote:
>>> Hi everyone,
>>>
>>> Following on from the recent thread about commit squashing and commit
>>> message quality, the idea of automatically squashing commit at merge time
>>> has been raised. The idea is that the merge bot would automatically squash
>>> commits for a pull request into a single commit, using the PR description as
>>> the commit message.
>>>
>>> With this in place, developers can commit locally using any approach they
>>> prefer. The smaller commits they make as they work won't be part of the
>>> history the team interacts with in master.
>>>
>>> When using autosquashing the quality of pull request descriptions should get
>>> even more scrutiny during reviews. The quality of PR descriptions is already
>>> important as they are used for merge commits but with autosquashing in place
>>> they will be the *only* commit message.
>>>
>>> Autosquashing can be achieved technically by either having the merge bot do
>>> the squashing itself, or by taking advantage of Github's feature to do this
>>> (currently in preview mode):
>>>
>>> https://developer.github.com/changes/2016-04-01-squash-api-preview/
>>>
>>> We need to ensure that the squashed commits are attributed to the correct
>>> author (i.e. not jujubot). I'm not sure what we do with pull requests which
>>> contain work from multiple authors. There doesn't seem to be an established
>>> approach for this.
>>>
>>> Thoughts?
>>>
>>> - Menno
>>>
>>>
>>>
>>>
>>>
>>
> 



Re: Juju 2.0-beta9 ETA

2016-06-13 Thread Ian Booth


On 13/06/16 22:58, Rick Harding wrote:
> On Sat, Jun 11, 2016 at 6:32 PM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> We are also storing any config specified in clouds.yaml separately. These
>> items,
>> such as apt-mirror, are shared between models and are used by default if
>> not
>> specified in a hosted model. But you can override any such items as well
>> simply
>> by setting them on the model. For now, the semantics of this change are
>> transparent - get-model-config will show the accumulation of shared and
>> model
>> specific settings. But we are looking to add a command to show/set shared
>> config. Thus you will be able to say update a http-proxy setting across all
>> hosted models within a controller with one command:
>>
>> juju set-shared-config http-proxy=foo
>>
>> NB command name to be decided.
>>
> 
> Ian, can we setup some time to chat on this. I'm curious if, rather than a
> command to explicitly "set everywhere" we follow the model that the config
> is inherited unless overridden for a specific model. Then by setting it on

What you say above is how it will work. You bootstrap a controller and any
config specified in clouds.yaml for that cloud becomes the default inherited
config for all hosted models added to the controller. But you can then choose to
set a config value on your hosted model, and that will override anything that
was being used as the default.
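The inheritance described here amounts to a two-layer merge: shared defaults underneath, model settings on top. A minimal sketch with hypothetical keys and values; this is not juju's actual config machinery:

```go
package main

import "fmt"

// effectiveConfig overlays model-specific settings on top of the
// shared defaults inherited from the controller; a value set on the
// model wins. Illustrative only, not juju's real config code.
func effectiveConfig(shared, model map[string]string) map[string]string {
	out := make(map[string]string, len(shared)+len(model))
	for k, v := range shared {
		out[k] = v // inherited default from clouds.yaml
	}
	for k, v := range model {
		out[k] = v // model override
	}
	return out
}

func main() {
	shared := map[string]string{
		"apt-mirror": "http://archive.example.com/ubuntu",
		"http-proxy": "http://squid.internal:3128",
	}
	model := map[string]string{
		"http-proxy": "http://other.internal:8080",
	}
	cfg := effectiveConfig(shared, model)
	fmt.Println(cfg["apt-mirror"]) // inherited, unchanged
	fmt.Println(cfg["http-proxy"]) // overridden by the model
}
```

Updating a shared value (e.g. via the proposed set-shared-config) changes the bottom layer, so every model that has not overridden that key picks up the new value.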

> the controller all models would get it. If you want it set on a specific
> model, you'd set it on the model. In that way there'd not be a third/new
> command for setting config.
> 

"Setting it on the controller" - that's what we are proposing. Once you have
bootstrapped the controller and the shared default config for hosted models has
been set up (by virtue of the settings in clouds.yaml), you then need a way to
alter that shared config. Is that what you mean? What command would you like for
that? We have

$ juju set-model-config foo=bar

That sets foo on the current model.  Or

$ juju -m mymodel set-model-config foo=bar

operates on model mymodel.

The above are model commands. So we need a way to set foo=bar on the controller
itself (ie update the shared controller wide config). What are you proposing?
Did you intend that setting foo on the controller model would satisfy the
requirement? That seems to be wrong for 2 reasons:

1. It's a model not a controller
2. The controller model can be used to host applications (eg nagios), and as
such the controller model settings may well be required to be set in and of
themselves; conflating those with the default controller-wide config seems
wrong.

Maybe I'm thinking wrongly, but I make a very clear distinction in my mind
between the controller and its models. There should be separate commands for
managing controller artifacts, including ACLs, vs model artifacts.

Speaking of ACLs, the same distinction applies. You want to manage access to the
controller - who can create models, who can share models, who can delete models
not their own, who can register users etc - vs model level operations - who can
deploy applications etc. And so again, the controller model permissions are
different semantically to the controller permissions. You can manage who can
create applications in the controller model, which is different to an operation
on the controller itself like registering a user. You may grant fred access to
the controller model, but not the controller itself.

Or maybe I'm misunderstanding what you mean?








Re: Juju 2.0-beta9 ETA

2016-06-11 Thread Ian Booth


On 12/06/16 02:30, Dean Henrichsmeyer wrote:
> On Fri, Jun 10, 2016 at 1:20 PM, Cheryl Jennings <
> cheryl.jenni...@canonical.com> wrote:
> 
> 
>> Some of the great things coming in beta9 include:
>> - Separation of controller config vs. model config
>>
> 
> Will this one have user-facing changes or is it internal?
> 

The separation of controller config is internal. Controller config includes:
- ca cert
- api port
- mongo port

These items are not used by Juju models at all but currently show up when you do
a juju get-model-config. In beta 10, this will not be the case. So from that
aspect, it's user facing but it means get-model-config will be a lot more user
friendly since you won't have a wall of text for a cert you don't care about.
There will be a separate get-controller-config command to see those items. They
are typically immutable.

We are also storing any config specified in clouds.yaml separately. These items,
such as apt-mirror, are shared between models and are used by default if not
specified in a hosted model. But you can override any such items as well simply
by setting them on the model. For now, the semantics of this change are
transparent - get-model-config will show the accumulation of shared and model
specific settings. But we are looking to add a command to show/set shared
config. Thus you will be able to, say, update an http-proxy setting across all
hosted models within a controller with one command:

juju set-shared-config http-proxy=foo

NB command name to be decided.

The other change for beta 10 will be to no longer store transient settings,
like bootstrap timeout, in model config, since they are not relevant once a
controller is running. This will also remove clutter from model settings.






Advance notice - removal of local repo URLs committed to master

2016-04-18 Thread Ian Booth
Hi folks

We communicated back in early March that Juju 2.0 would no longer support local
charms deployed using a local charm repository and local charm URLs like
local:trusty/mysql. The final piece has landed in master, which is support for
local bundles to declare their contained charms as disk paths.

You can now do something like this for bundles:

series: xenial
services:
  wordpress:
    charm: ./wordpress
    num_units: 1
    series: trusty
  mysql:
    charm: ./mysql
    num_units: 1
relations:
- ["wordpress:db", "mysql:server"]


Note the series attributes. These are required if the charm does not yet define
a default series in metadata, or if you want to use a series different from the
default. Either the bundle default series will be used ("xenial" for the mysql
service above) or the service-specific one will be ("trusty" for the wordpress
service above).
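
The selection described above can be sketched as a precedence chain. This is my
reading of the rules, not Juju's actual resolver; the function name is made up:

```python
def resolve_series(charm_metadata_default, bundle_default, service_series):
    """Pick the series for a bundle service: an explicit per-service
    series wins, then the bundle-level default, then the charm's own
    default series from its metadata."""
    for candidate in (service_series, bundle_default, charm_metadata_default):
        if candidate:
            return candidate
    raise ValueError("no series specified in bundle or charm metadata")

# From the bundle above: wordpress declares trusty, mysql inherits xenial.
wordpress_series = resolve_series(None, "xenial", "trusty")
mysql_series = resolve_series(None, "xenial", None)
```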

With the above changes, the JUJU_REPOSITORY env var is no longer supported, nor
is the --repository deploy argument. Just specify everything using a local
path, and construct your local bundles using local paths for the charms (or
charm store URLs).

This work is now in master if you want to try it out early, otherwise it will be
available soon with the 2.0 release candidate following beta4.






Re: Unable to kill-controller

2016-04-04 Thread Ian Booth


On 05/04/16 11:12, Andrew Wilkins wrote:
> On Mon, Apr 4, 2016 at 8:32 PM Rick Harding 
> wrote:
> 
>> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> In a non-beta release we would make sure that the config changes aren't
>>> backwards incompatible.
>>>
>>
>> I think this is the key thing. I think that kill-controller is an
>> exception to this rule. I think we should always at least give the user the
>> ability to remove their stuff and start over with the new alpha/beta/rc
>> release. I'd like to ask us to explore making kill-controller an exception
>> to this policy and that if tests prove we can't bootstrap on one beta and
>> kill with trunk that it's a blocking bug for us.
>>
> 
> Generally agreed, but in this case I made the choice of improving the
> quality of the code base overall at the cost of breaking kill-controller in
> between betas. I think it's fair to have a temporary annoyance for
> developers and early adopters (of a beta only!) to improve the quality in
> the long term. Major, breaking versions don't come around very often, so
> we're trying to wipe the slate as clean as possible. The alternative is we
> continue building up cruft forever so we could support that one edge case
> that existed for 5 minutes.
>

To back up what Andrew said, we had the choice of an annoyance between betas
for early adopters/testers, vs a much larger effort and cost to develop extra
code and tests to support a very temporary edge case. We'd rather put our
at-capacity development effort toward finishing features for the release.
Having said that, we should have included in the release notes an item to
inform people that any beta2 environments could only be killed with beta2
clients. We'll do better communicating those beta limitations next time.




Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-29 Thread Ian Booth
Older URL format is what is needed until the change lands (targeted for beta4).
The URL based format for bundle charms is all that is supported by the original
local bundles work. The upcoming feature drop fixes that, as well as removing
the support for local charm URLs - all local charms, whether inside bundles or
deployed using the CLI, will be required to be specified using a file path.

On 29/03/16 15:57, Rick Harding wrote:
> So this means the older format should work? Antonio, have you poked through
> that thread at the original working setup for local charms?
> 
> On Mon, Mar 28, 2016 at 9:45 PM Antonio Rosales <
> antonio.rosa...@canonical.com> wrote:
> 
>>
>>
>> On Monday, March 28, 2016, Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>> Hey Antonio
>>>
>>> I must apologise - the changes didn't make beta3 due to all the work
>>> needed to
>>> migrate the CI scripts to test the new hosted model functionality; we ran
>>> out of
>>> time to be able to QA the local bundle changes.
>>>
>>> I expect this work will be done for beta4.
>>
>>
>> Completely understood. I'll retest with Beta 4. Thanks for the update.
>>
>> -Antonio
>>
>>
>>>
>>>
>>>
>> On 29/03/16 11:04, Antonio Rosales wrote:
>>>> + Juju list for others awareness
>>>>
>>>>
>>>> On Thu, Mar 10, 2016 at 1:53 PM, Ian Booth <ian.bo...@canonical.com>
>>> wrote:
>>>>> Thanks Rick. Trivial change to make. This work should be in beta3 due
>>> next week.
>>>>> The work includes dropping support for local repositories in favour of
>>> path
>>>>> based local charm and bundle deployment.
>>>>
>>>> Ian,
>>>> First thanks for working on this feature. Second, I tried this for a
>>>> local ppc64el deploy which is behind a firewall, and thus local charms
>>>> are good way forward. I may have got the syntax incorrect and thus
>>>> wanted to confirm here. What I did was is at:
>>>> http://paste.ubuntu.com/15547725/
>>>> Specifically, I set the the charm path to something like:
>>>> charm: /home/ubuntu/charms/trusty/apache-hadoop-compute-slave
>>>> However, I got the following error:
>>>> ERROR cannot deploy bundle: cannot resolve URL
>>>> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave": charm or
>>>> bundle URL has invalid form:
>>>> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave"
>>>>
>>>> This is on the latest beta3:
>>>> 2.0-beta3-xenial-ppc64el
>>>>
>>>> Any suggestions?
>>>>
>>>> -thanks,
>>>> Antonio
>>>>
>>>>
>>>>>
>>>>> On 10/03/16 23:37, Rick Harding wrote:
>>>>>> Thanks Ian, after thinking about it I think what we want to do is
>>> really
>>>>>> #2. The reasoning I think is:
>>>>>>
>>>>>> 1) we want to make things consistent. The CLI experience is present a
>>> charm
>>>>>> and override series with --series=
>>>>>> 2) more consistent, if you do it with local charms you can always do
>>> it
>>>>>> 3) we want to encourage folks to drop series from the charmstore urls
>>> and
>>>>>> worry less about series over time. Just deploy X and let the charm
>>> author
>>>>>> pick the default best series. I think we should encourage this in the
>>> error
>>>>>> message for #2. "Please remove the series section of the charm url"
>>> or the
>>>>>> like when we error on the conflict, pushing users to use the series
>>>>>> override.
>>>>>>
>>>>>> Uros, Francesco, this brings up a point that I think for multi-series
>>>>>> charms we want the deploy cli snippets to start to drop the series
>>> part of
>>>>>> the url as often as we can. If the url doesn't have the series
>>> specified,
>>>>>> e.g. jujucharms.com/mysql then the cli command should not either.
>>> Right now
>>>>>> I know we add the series/revision info and such. Over time we want to
>>> try
>>>>>> to get to as simple a command as possible.
>>>>>>
>>>>>> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth <ian.bo...@canonical.com>
>>> wrote

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-28 Thread Ian Booth
Hey Antonio

I must apologise - the changes didn't make beta3 due to all the work needed to
migrate the CI scripts to test the new hosted model functionality; we ran out of
time to be able to QA the local bundle changes.

I expect this work will be done for beta4.

On 29/03/16 11:04, Antonio Rosales wrote:
> + Juju list for others awareness
> 
> 
> On Thu, Mar 10, 2016 at 1:53 PM, Ian Booth <ian.bo...@canonical.com> wrote:
>> Thanks Rick. Trivial change to make. This work should be in beta3 due next 
>> week.
>> The work includes dropping support for local repositories in favour of path
>> based local charm and bundle deployment.
> 
> Ian,
> First thanks for working on this feature. Second, I tried this for a
> local ppc64el deploy which is behind a firewall, and thus local charms
> are good way forward. I may have got the syntax incorrect and thus
> wanted to confirm here. What I did was is at:
> http://paste.ubuntu.com/15547725/
> Specifically, I set the the charm path to something like:
> charm: /home/ubuntu/charms/trusty/apache-hadoop-compute-slave
> However, I got the following error:
> ERROR cannot deploy bundle: cannot resolve URL
> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave": charm or
> bundle URL has invalid form:
> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave"
> 
> This is on the latest beta3:
> 2.0-beta3-xenial-ppc64el
> 
> Any suggestions?
> 
> -thanks,
> Antonio
> 
> 
>>
>> On 10/03/16 23:37, Rick Harding wrote:
>>> Thanks Ian, after thinking about it I think what we want to do is really
>>> #2. The reasoning I think is:
>>>
>>> 1) we want to make things consistent. The CLI experience is present a charm
>>> and override series with --series=
>>> 2) more consistent, if you do it with local charms you can always do it
>>> 3) we want to encourage folks to drop series from the charmstore urls and
>>> worry less about series over time. Just deploy X and let the charm author
>>> pick the default best series. I think we should encourage this in the error
>>> message for #2. "Please remove the series section of the charm url" or the
>>> like when we error on the conflict, pushing users to use the series
>>> override.
>>>
>>> Uros, Francesco, this brings up a point that I think for multi-series
>>> charms we want the deploy cli snippets to start to drop the series part of
>>> the url as often as we can. If the url doesn't have the series specified,
>>> e.g. jujucharms.com/mysql then the cli command should not either. Right now
>>> I know we add the series/revision info and such. Over time we want to try
>>> to get to as simple a command as possible.
>>>
>>> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth <ian.bo...@canonical.com> wrote:
>>>
>>>> I've implemented option 1:
>>>>
>>>>  error if Series attribute is used at all with a store charm URL
>>>>
>>>> Trivial to change if needed.
>>>>
>>>> On 10/03/16 12:58, Ian Booth wrote:
>>>>> Yeah, agreed having 2 ways to specify store series can be suboptimal.
>>>>> So we have 2 choices:
>>>>>
>>>>> 1. error if Series attribute is used at all with a store charm URL
>>>>> 2. error if the Series attribute is used and conflicts
>>>>>
>>>>> Case 1
>>>>> --
>>>>>
>>>>> Errors:
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>> Ok:
>>>>>
>>>>> Series: trusty
>>>>> Charm: ./mysql
>>>>>
>>>>>
>>>>> Case 2
>>>>> --
>>>>>
>>>>> Ok:
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: ./mysql
>>>>>
>>>>> Errors:
>>>>>
>>>>> Series: xenial
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>>
>>>>> On 10/03/16 12:51, Rick Harding wrote:
>>>>>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either
>>>> way.
>>>>>>
>>>>>> On Wed, Mar

Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Ian Booth

On 24/03/16 22:01, Nate Finch wrote:
> Does this mean we can assume 1.6 for everything from now on, or is there
> some other step we're waiting on?  I have some code that only needs to
> exist while we support 1.2, and I'd be happy to just delete it.
>

Not yet. The builders and test infrastructure all need to be updated, and the
package needs a week to transition out of proposed.

We're also waiting on this to commit an Azure provider fix.




Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Ian Booth
OMFG that is the best news. We can finally get the Juju LXD provider working
properly on trusty :-D
And first class support for all architectures etc :-D
And no more chasing gccgo issues :-D

Thanks Michael and whoever else helped make this possible.

On 24/03/16 16:03, Michael Hudson-Doyle wrote:
> Hi,
> 
> As of a few minutes ago, there is now a golang-1.6 package in
> trusty-proposed:
> https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> review and copy, Steve).
> 
> One difference between this and the package I prepared earlier is that
> it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> Makefiles and such will need to be adjusted to invoke that directly or
> put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> be installed alongside the golang packages that are already in
> trusty).
> 
> Cheers,
> mwh
> (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> 
> On 17 February 2016 at 07:58, Michael Hudson-Doyle
>  wrote:
>> I have approval for the idea but also decided to wait for 1.6 and upload
>> that instead. I'm also on leave currently so hopefully this can all happen
>> in early March.
>>
>> Cheers,
>> mwh
>>
>> On 17/02/2016 1:17 am, "John Meinel"  wrote:
>>>
>>> To start with, thanks for working on this. However, doesn't this also
>>> require changing the CI builds to use your ppa?
>>>
>>> What is the current state of this? I was just looking around and noticed
>>> golang1.5-go isn't in anything specific for Trusty that I can see. I realize
>>> if its going into an SRU it requires a fair amount of negotiation with other
>>> teams, so I'm not  surprised to see it take a while. I just wanted to check
>>> how it was going.
>>>
>>> Thanks,
>>>
>>> John
>>> =:->
>>>
>>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
>>>  wrote:

 Hi all,

 As part of the plan for getting Go 1.5 into trusty (see here
 https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
 packages (called golang1.5-go rather than golang-go) for trusty in my
 ppa:

 https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages

 (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
 "make stupid packaging mistakes" day)

 I'll write up a SRU bug to start the process of getting this into
 trusty tomorrow but before it does end up in trusty it would seem like
 a good idea to run the CI tests using juju-core packages built with
 this version of the go compiler. Is that something that's feasible to
 arrange

 The only packaging requirement should be to change the build-depends
 to be on golang1.5-go rather than golang-go or gccgo.

 Cheers,
 mwh

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>>
>>
> 



Re: Usability issues with status-history

2016-03-20 Thread Ian Booth


On 20/03/16 22:44, John Meinel wrote:
>>
>> ...
>>
>> For the second bug, where a downloading % spams the history, it seems like
>> the easy answer is "don't do that".  For resources, we specifically avoided
>> putting download progress in status history for that very reason.  In the
>> future, it seems like it could be useful to have some mechanism to show
>> transient messages like downloading % etc, but status history does not seem
>> like the appropriate place to do that, especially given how it is currently
>> configured... and it seems way too late to be adding a new feature for 2.0
>>
>> Just my 2 cents.
>>
>> -Nate
>>
> 
> The one aspect here is that it has been a consistent problem, especially
> with the local provider, of people wanting to know why things haven't
> started yet. Being able to give them concrete progress is a huge boon here.

+1 But do we need to persist these transient progress messages? We can still
report progress to the user each time they run juju status, but why save such
data when it is of limited value once the download of an image has finished?

> I really think we want to be putting more status for machine pending
> progress. Now, as for 100 progress messages, it turns out the rate limiting
> on status updates means we can drop some of them. (we always get 100 events
> from LXD, but if we only update Juju with one every 1s, then we generally
> get a lot fewer if your download speed is fast.)
> 
> But regardless, having genuine feedback as to what is going on outweighs a
> minor thing about having too much information in the backlog.
> 

Why not have both? Report progress but not persist such data.



Usability issues with status-history

2016-03-19 Thread Ian Booth

Machines, services and units all now support recording status history. Two
issues have come up:

1. https://bugs.launchpad.net/juju-core/+bug/1530840

For units, especially in steady state, status history is spammed with
update-status hook invocations which can obscure the hooks we really care about

2. https://bugs.launchpad.net/juju-core/+bug/1557918

We now have the concept of recording a machine provisioning status. This is
great because it gives observability to what is happening as a node is being
allocated in the cloud. With LXD, this feature has been used to give visibility
to progress of the image downloads (finally, yay). But what happens is that the
machine status history gets filled with lots of "Downloading x%" type messages.

We have a pruner which caps the history to 100 entries per entity. But we need a
way to deal with the spam, and what is displayed when the user asks for juju
status-history.

Options to solve bug 1

A.
Filter out duplicate status entries when presenting to the user. eg say
"update-status (x43)". This still allows the circular buffer for that entity to
fill with "spam" though. We could make the circular buffer size much larger. But
there's still the issue of UX where a user asks for the X most recent entries.
What do we give them? The X most recent de-duped entries?

B.
If, when we go to record history, the most recent entry is the same as what we
are about to record, just update its timestamp. For update-status, my view is
we don't really care how many times the hook was run, but rather when it last
ran.
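
Option B can be sketched in a few lines; a hypothetical illustration of the
idea, not Juju's implementation:

```python
def record_status(history, status, message, now):
    """Option B: if the entry we are about to record matches the most
    recent one, just refresh its timestamp instead of appending a
    duplicate to the history."""
    if history and (history[-1]["status"], history[-1]["message"]) == (status, message):
        history[-1]["time"] = now   # duplicate: only the timestamp moves
        return
    history.append({"time": now, "status": status, "message": message})

history = []
record_status(history, "executing", "running update-status hook", now=1)
record_status(history, "idle", "", now=2)
record_status(history, "idle", "", now=3)  # collapsed into the previous entry
```

The history ends up with two entries, and the idle entry records when the hook
last ran rather than how many times.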

Options to solve bug 2

A.
Allow a flag when setting status to say "this status value is transient" and so
it is recorded in status but not logged in history.
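
A minimal sketch of what such a flag could look like (hypothetical class and
method names, not Juju's API):

```python
class StatusRecorder:
    """Option A: a 'transient' flag updates the visible status without
    writing an entry to the persisted history."""
    def __init__(self):
        self.current = None   # what `juju status` would show
        self.history = []     # what status-history would persist

    def set_status(self, message, transient=False):
        self.current = message
        if not transient:
            self.history.append(message)

recorder = StatusRecorder()
for pct in (25, 50, 75):
    recorder.set_status("Downloading %d%%" % pct, transient=True)
recorder.set_status("Download complete")
```

The user always sees the latest progress, but only the final, durable message
lands in history.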

B.
Do not record machine provisioning status in history. It could be argued this
info is more or less transient and once the machine comes up, we don't care so
much about it anymore. It was introduced to give observability to machine
allocation.

Any other options?
Opinions on preferred solutions?

I really want to get this fixed before Juju 2.0








Re: Usability issues with status-history

2016-03-18 Thread Ian Booth


On 17/03/16 19:51, William Reade wrote:
> I see this as a combination of two problems:
> 
> 1) We're spamming the end user with "whatever's in the status-history
> collection" rather than presenting a digest tuned for their needs.
> 2) Important messages get thrown away way too early, because we don't know
> which messages are important.
> 
> I think the pocket/transient/expiry solutions boil down to "let's make the
> charmer decide what's important", and I don't think that will help. The
> charmer is only sending those messages *because she believes they're
> important*; even if we had "perfect" trimming heuristics for the end user,
> we do the *charmer* a disservice by leaving them no record of what their
> charm actually did.
> 
> And, more generally: *every* message we throw away makes it hard to
> correctly analyse any older message. This applies within a given entity's
> domain, but also across entities: if you're trying to understand the
> interactions between 2 units, but one of those units is generating many
> more messages, you'll have 200 messages to inspect; but the 100 for the
> faster unit will only cover (say) the last 30 for the slower one, leaving
> 70 slow-unit messages that can't be correlated with the other unit's
> actions. At best, those messages are redundant; at worst, they're actively
> misleading.
> 
> So: I do not believe that any approach that can be summed up as "let's
> throw away *more* messages" is going to help either. We need to fix (2) so
> that we have raw status data that extends reasonably far back in time; and
> then we need to fix (1) so that we usefully precis that data for the user
> (...and! leave a path that makes the raw data observable, for the cases
> where our heuristics are unhelpful).
> 

I mostly agree but still believe there's a case for transient messages. The case
where Juju is downloading an image and emits progress updates which go into
status history is to me clearly a case where we needn't persist every single one
(or any). In that case, it's not a charmer deciding but Juju. And with status
updates like X% complete, as soon as a new message arrives, the old one is
superseded anyway. The user is surely just interested in the current status,
and once the download completes they don't care anymore. And the Juju agent can
still decide to, say, make every 10% download-progress message non-transient so
it goes to history for future reference.

> Cheers
> William
> 
> PS re: UX of asking for N entries... I can see end-user stories for
> timespans, and for "the last N *significant* changes". What's the scenario
> where a user wants to see exactly 50 message atoms?
> 

No one would say they want to see exactly 50 - it's an estimate. It's like when
you git log and you ask for the last 20 commits. If that's not enough to see
what you want, you just run again with an increased number.

I do think allowing for a timespan to be specified may be useful.

John's suggestion of adding a lifetime does sound more complicated than we
want right now.

Would this work as an initial improvement for 2.0:

1. Increase the limit of stored messages per entity to, say, 500 (from 100)
2. Allow messages emitted from Juju to be marked as transient
eg for download progress
3. Do smarter filtering of what is displayed with status-history
eg if we see the same tuple of messages over and over, consolidate

TIME                    TYPE    STATUS      MESSAGE
26 Dec 2015 13:51:59Z   agent   executing   running config-changed hook
26 Dec 2015 13:51:59Z   agent   idle
26 Dec 2015 13:56:57Z   agent   executing   running update-status hook
26 Dec 2015 13:56:59Z   agent   idle
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle

becomes

TIME                    TYPE    STATUS      MESSAGE
26 Dec 2015 13:51:59Z   agent   executing   running config-changed hook
26 Dec 2015 13:51:59Z   agent   idle
>> Repeated 3 times, last occurrence:
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle
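
The consolidation step could be sketched as collapsing consecutive repeats of
a fixed-length cycle of entries. The fixed cycle length is a simplification of
whatever heuristic Juju would actually use:

```python
def consolidate(entries, cycle=2):
    """Collapse consecutive repeats of a fixed-length cycle of history
    entries into one annotated occurrence (display-side only; the raw
    history is left untouched)."""
    out, i = [], 0
    while i < len(entries):
        block = entries[i:i + cycle]
        count = 1
        # Count how many times this block repeats back to back.
        while entries[i + count * cycle:i + (count + 1) * cycle] == block:
            count += 1
        if count > 1:
            out.append(">> Repeated %d times, last occurrence:" % count)
        out.extend(block)
        i += count * len(block)
    return out

raw = ["executing running config-changed hook", "idle",
       "executing running update-status hook", "idle",
       "executing running update-status hook", "idle",
       "executing running update-status hook", "idle"]
digest = consolidate(raw)
```

For the example above, the three update-status/idle pairs collapse into one
annotated pair, matching the proposed display.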





> On Thu, Mar 17, 2016 at 6:30 AM, John Meinel <j...@arbash-meinel.com> wrote:
> 
>>
>>
>> On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth <ian.bo...@canonical.com>
>> wrote:
>>
>>>
>>> Machines, services and units all now support recording status history. Two
>>> issues have come up:
>>>
>>> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
>>>
>>> For units, especially in steady state, status history is spammed with
>>> update-status hook invocations which can obscure the hooks we really care
>>> about
>>>
>>&

Re: Do we still need juju upgrade-charm --switch ... ?

2016-03-11 Thread Ian Booth
> 
> We use switch a lot, and customers use this as well. The primary use case
> is "I have a bug in production charm that is not available upstream yet". I
> expect future 2.0 uses to look like this:
> 
> charm pull 
> 
> juju upgrade-charm --switch ./ 
> 
> Another example, esp because of how the charmstore is structured now
> 
> juju deploy trusty/wordpress
> # hackity hack
> juju deploy --switch cs:~marcoceppi/trusty/wordpress wordpress
> 
> 
>>
>> What would folks lose if --switch were to be dropped for 2.0? Any
>> objections to
>> doing this?
> 
> 
> I object. Switch should be updated to support ./local/directory/charm
> instead of local:
> 

Thanks Marco, we'll have to ensure 2.0 is updated to allow --switch with
upgrade-charm --path, which it currently does not allow.



Do we still need juju upgrade-charm --switch ... ?

2016-03-10 Thread Ian Booth
So we have a feature of upgrade-charm which allows you to crossgrade to a
different charm than the one originally deployed.

From the upgrade-charm help docs:

The new charm's URL and revision are inferred as they would be when running a
deploy command.
Please note that --switch is dangerous, because juju only has limited
information with which to determine compatibility; the operation will succeed,
regardless of potential havoc.

What is the use case for this functionality? I was under the impression it was
used mainly with local repos. But given local repos are going away in 2.0,
do we still need it? And given the potential for users getting things wrong, do
we even want to keep it regardless? Note also --switch is not allowed with
--path which is how local charms are upgraded.

What would folks lose if --switch were to be dropped for 2.0? Any objections to
doing this?






Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-10 Thread Ian Booth
Thanks Rick. Trivial change to make. This work should be in beta3 due next week.
The work includes dropping support for local repositories in favour of path
based local charm and bundle deployment.
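
Rick's preferred option #2 (quoted below) - error only when the series
attribute actually conflicts with a series embedded in the store URL - could
be sketched roughly as follows. The helper name and the URL parsing are
simplifications, not Juju's actual validation code:

```python
def check_bundle_series(charm, series):
    """Option #2 sketched: a bundle 'series' attribute is allowed
    alongside a charm store URL, but it is an error if it contradicts a
    series embedded in that URL (e.g. cs:trusty/mysql vs series: xenial)."""
    if not series or not charm.startswith("cs:"):
        return  # local paths and series-less entries never conflict
    parts = charm[len("cs:"):].split("/")
    if parts and parts[0].startswith("~"):
        parts = parts[1:]  # skip a user segment like cs:~user/trusty/foo
    url_series = parts[0] if len(parts) > 1 else None
    if url_series and url_series != series:
        raise ValueError("series %r conflicts with charm URL %r; "
                         "please remove the series from the charm url"
                         % (series, charm))

check_bundle_series("cs:mysql", "trusty")         # ok: URL has no series
check_bundle_series("cs:trusty/mysql", "trusty")  # ok: they agree
check_bundle_series("./mysql", "xenial")          # ok: local path
```

The error message follows Rick's suggestion of nudging users to drop the
series from the charm URL.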

On 10/03/16 23:37, Rick Harding wrote:
> Thanks Ian, after thinking about it I think what we want to do is really
> #2. The reasoning I think is:
> 
> 1) we want to make things consistent. The CLI experience is present a charm
> and override series with --series=
> 2) more consistent, if you do it with local charms you can always do it
> 3) we want to encourage folks to drop series from the charmstore urls and
> worry less about series over time. Just deploy X and let the charm author
> pick the default best series. I think we should encourage this in the error
> message for #2. "Please remove the series section of the charm url" or the
> like when we error on the conflict, pushing users to use the series
> override.
> 
> Uros, Francesco, this brings up a point that I think for multi-series
> charms we want the deploy cli snippets to start to drop the series part of
> the url as often as we can. If the url doesn't have the series specified,
> e.g. jujucharms.com/mysql then the cli command should not either. Right now
> I know we add the series/revision info and such. Over time we want to try
> to get to as simple a command as possible.
> 
> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> I've implemented option 1:
>>
>>  error if Series attribute is used at all with a store charm URL
>>
>> Trivial to change if needed.
>>
>> On 10/03/16 12:58, Ian Booth wrote:
>>> Yeah, agreed having 2 ways to specify store series can be suboptimal.
>>> So we have 2 choices:
>>>
>>> 1. error if Series attribute is used at all with a store charm URL
>>> 2. error if the Series attribute is used and conflicts
>>>
>>> Case 1
>>> --
>>>
>>> Errors:
>>>
>>> Series: trusty
>>> Charm: cs:mysql
>>>
>>> Series: trusty
>>> Charm: cs:trusty/mysql
>>>
>>> Ok:
>>>
>>> Series: trusty
>>> Charm: ./mysql
>>>
>>>
>>> Case 2
>>> --
>>>
>>> Ok:
>>>
>>> Series: trusty
>>> Charm: cs:mysql
>>>
>>> Series: trusty
>>> Charm: cs:trusty/mysql
>>>
>>> Series: trusty
>>> Charm: ./mysql
>>>
>>> Errors:
>>>
>>> Series: xenial
>>> Charm: cs:trusty/mysql
>>>
>>>
>>> On 10/03/16 12:51, Rick Harding wrote:
>>>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either
>> way.
>>>>
>>>> On Wed, Mar 9, 2016, 9:50 PM Rick Harding <rick.hard...@canonical.com>
>>>> wrote:
>>>>
>>>>> I think there's already rules for charmstore charms. it uses the
>> default
>>>>> if not specified. I totally agree that for local charms we have to have
>>>>> this. For remote charms though this is providing the user two ways to
>> do
>>>>> the same thing
>>>>>
>>>>> On Wed, Mar 9, 2016, 9:46 PM Ian Booth <ian.bo...@canonical.com>
>> wrote:
>>>>>
>>>>>> If the charm store charm defines a series in the URL, then we will
>>>>>> consider it
>>>>>> an error to specify a different series using the attribute. But charm
>>>>>> store URLs
>>>>>> are not required to have a series, so we can use the attribute in that
>>>>>> case. It
>>>>>> also allows users to easily switch between store and local charms
>> during
>>>>>> development just by replacing "./" with "cs:"
>>>>>>
>>>>>>  nova-compute:
>>>>>>series: xenial
>>>>>>charm: ./nova-compute
>>>>>>
>>>>>>  nova-compute:
>>>>>>series: xenial
>>>>>>charm: cs:nova-compute
>>>>>>
>>>>>>
>>>>>> On 10/03/16 12:21, Rick Harding wrote:
>>>>>>> I'm not sure we want to make this attribute apply to charmstore
>> charms.
>>>>>>> We've an established practice of the charmstore url being the series
>>>>>>> information. It gives the user a chance to have conflicting
>> information
>>>>>> if

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-10 Thread Ian Booth
I've implemented option 1:

 error if Series attribute is used at all with a store charm URL

Trivial to change if needed.

On 10/03/16 12:58, Ian Booth wrote:
> Yeah, agreed having 2 ways to specify store series can be suboptimal.
> So we have 2 choices:
> 
> 1. error if Series attribute is used at all with a store charm URL
> 2. error if the Series attribute is used and conflicts
> 
> Case 1
> --
> 
> Errors:
> 
> Series: trusty
> Charm: cs:mysql
> 
> Series: trusty
> Charm: cs:trusty/mysql
> 
> Ok:
> 
> Series: trusty
> Charm: ./mysql
> 
> 
> Case 2
> --
> 
> Ok:
> 
> Series: trusty
> Charm: cs:mysql
> 
> Series: trusty
> Charm: cs:trusty/mysql
> 
> Series: trusty
> Charm: ./mysql
> 
> Errors:
> 
> Series: xenial
> Charm: cs:trusty/mysql
> 
> 
> On 10/03/16 12:51, Rick Harding wrote:
>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either way.
>>
>> On Wed, Mar 9, 2016, 9:50 PM Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>
>>> I think there's already rules for charmstore charms. it uses the default
>>> if not specified. I totally agree that for local charms we have to have
>>> this. For remote charms though this is providing the user two ways to do
>>> the same thing
>>>
>>> On Wed, Mar 9, 2016, 9:46 PM Ian Booth <ian.bo...@canonical.com> wrote:
>>>
>>>> If the charm store charm defines a series in the URL, then we will
>>>> consider it
>>>> an error to specify a different series using the attribute. But charm
>>>> store URLs
>>>> are not required to have a series, so we can use the attribute in that
>>>> case. It
>>>> also allows users to easily switch between store and local charms during
>>>> development just by replacing "./" with "cs:"
>>>>
>>>>  nova-compute:
>>>>series: xenial
>>>>charm: ./nova-compute
>>>>
>>>>  nova-compute:
>>>>series: xenial
>>>>charm: cs:nova-compute
>>>>
>>>>
>>>> On 10/03/16 12:21, Rick Harding wrote:
>>>>> I'm not sure we want to make this attribute apply to charmstore charms.
>>>>> We've an established practice of the charmstore url being the series
>>>>> information. It gives the user a chance to have conflicting information
>>>> if
>>>>> the charmstore url is cs:trusty/nova-compute and the series attribute is
>>>>> set to xenial. I think we should toss an error to a bundle that has
>>>> series:
>>>>> specified for a charmstore based charm value (or non-local value
>>>> whichever
>>>>> way you want to think about it)
>>>>>
>>>>> On Wed, Mar 9, 2016 at 6:29 PM Ian Booth <ian.bo...@canonical.com>
>>>> wrote:
>>>>>
>>>>>> One additional enhancement we need for bundles concerns specifying
>>>> series
>>>>>> for
>>>>>> multi-series charms, in particular local charms now that the local repo
>>>>>> will be
>>>>>> going away.
>>>>>>
>>>>>> Consider:
>>>>>>
>>>>>> A new multi-series charm may have a URL which does not specify the
>>>> series.
>>>>>> In
>>>>>> that case, the series used will be the default specified in the charm
>>>>>> metadata
>>>>>> or the latest LTS. But we want to allow people to choose their own
>>>> series
>>>>>> also.
>>>>>>
>>>>>> So we need a new (optional) Series attribute in the bundle metadata.
>>>>>>
>>>>>> bundle.yaml
>>>>>>   series: trusty
>>>>>>   services:
>>>>>> nova-compute:
>>>>>>   series: xenial <-- new
>>>>>>   charm: ./nova-compute
>>>>>>   num_units: 2
>>>>>>
>>>>>> or with a charm store charm
>>>>>>
>>>>>> bundle.yaml
>>>>>>   series: trusty
>>>>>>   services:
>>>>>> nova-compute:
>>>>>>   series: xenial<-- new
>>>>>>   charm: cs:nova-compute
>>>>>>   num_units: 2
>>>>

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-09 Thread Ian Booth
Yeah, agreed having 2 ways to specify store series can be suboptimal.
So we have 2 choices:

1. error if Series attribute is used at all with a store charm URL
2. error if the Series attribute is used and conflicts

Case 1
--

Errors:

Series: trusty
Charm: cs:mysql

Series: trusty
Charm: cs:trusty/mysql

Ok:

Series: trusty
Charm: ./mysql


Case 2
--

Ok:

Series: trusty
Charm: cs:mysql

Series: trusty
Charm: cs:trusty/mysql

Series: trusty
Charm: ./mysql

Errors:

Series: xenial
Charm: cs:trusty/mysql
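
The option-1 rule above can be sketched in Python. This is a hypothetical
check_series helper illustrating the proposed behaviour, not Juju's actual
validation code:

```python
def check_series(charm_ref, series):
    """Reject a series attribute for any charm store URL (option 1).

    charm_ref is either a store URL ("cs:mysql", "cs:trusty/mysql") or a
    local path ("./mysql"); series is the bundle's Series attribute or
    None. Illustrative sketch only, not Juju's real validation code.
    """
    if charm_ref.startswith("cs:") and series is not None:
        raise ValueError(
            "series may not be specified with a charm store URL; "
            "please remove the series attribute")

# Ok: a local charm may carry an explicit series
check_series("./mysql", "trusty")
```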


On 10/03/16 12:51, Rick Harding wrote:
> Bah maybe you're right. I want to sleep on it. It's kind of ugh either way.
> 
> On Wed, Mar 9, 2016, 9:50 PM Rick Harding <rick.hard...@canonical.com>
> wrote:
> 
>> I think there's already rules for charmstore charms. it uses the default
>> if not specified. I totally agree that for local charms we have to have
>> this. For remote charms though this is providing the user two ways to do
>> the same thing
>>
>> On Wed, Mar 9, 2016, 9:46 PM Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>> If the charm store charm defines a series in the URL, then we will
>>> consider it
>>> an error to specify a different series using the attribute. But charm
>>> store URLs
>>> are not required to have a series, so we can use the attribute in that
>>> case. It
>>> also allows users to easily switch between store and local charms during
>>> development just by replacing "./" with "cs:"
>>>
>>>  nova-compute:
>>>series: xenial
>>>charm: ./nova-compute
>>>
>>>  nova-compute:
>>>series: xenial
>>>charm: cs:nova-compute
>>>
>>>
>>> On 10/03/16 12:21, Rick Harding wrote:
>>>> I'm not sure we want to make this attribute apply to charmstore charms.
>>>> We've an established practice of the charmstore url being the series
>>>> information. It gives the user a chance to have conflicting information
>>> if
>>>> the charmstore url is cs:trusty/nova-compute and the series attribute is
>>>> set to xenial. I think we should toss an error to a bundle that has
>>> series:
>>>> specified for a charmstore based charm value (or non-local value
>>> whichever
>>>> way you want to think about it)
>>>>
>>>> On Wed, Mar 9, 2016 at 6:29 PM Ian Booth <ian.bo...@canonical.com>
>>> wrote:
>>>>
>>>>> One additional enhancement we need for bundles concerns specifying
>>> series
>>>>> for
>>>>> multi-series charms, in particular local charms now that the local repo
>>>>> will be
>>>>> going away.
>>>>>
>>>>> Consider:
>>>>>
>>>>> A new multi-series charm may have a URL which does not specify the
>>> series.
>>>>> In
>>>>> that case, the series used will be the default specified in the charm
>>>>> metadata
>>>>> or the latest LTS. But we want to allow people to choose their own
>>> series
>>>>> also.
>>>>>
>>>>> So we need a new (optional) Series attribute in the bundle metadata.
>>>>>
>>>>> bundle.yaml
>>>>>   series: trusty
>>>>>   services:
>>>>> nova-compute:
>>>>>   series: xenial <-- new
>>>>>   charm: ./nova-compute
>>>>>   num_units: 2
>>>>>
>>>>> or with a charm store charm
>>>>>
>>>>> bundle.yaml
>>>>>   series: trusty
>>>>>   services:
>>>>> nova-compute:
>>>>>   series: xenial<-- new
>>>>>   charm: cs:nova-compute
>>>>>   num_units: 2
>>>>>
>>>>>
>>>>> Note: the global series in the bundle still applies if series is not
>>>>> otherwise
>>>>> known.
>>>>> The new series attribute is per charm.
>>>>>
>>>>> So in the case above, cs:nova-compute may ordinarily be deployed on
>>> trusty
>>>>> (the
>>>>> default series in that charm's metadata). But the bundle requires the
>>>>> xenial
>>>>> version. With the charm store URL, we can currently use
>>>>> cs:xenial/nova-compute
>>>>> but that's not the case for local charms deployed out of a directory.
>

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-09 Thread Ian Booth
If the charm store charm defines a series in the URL, then we will consider it
an error to specify a different series using the attribute. But charm store URLs
are not required to have a series, so we can use the attribute in that case. It
also allows users to easily switch between store and local charms during
development just by replacing "./" with "cs:"

 nova-compute:
   series: xenial
   charm: ./nova-compute

 nova-compute:
   series: xenial
   charm: cs:nova-compute


On 10/03/16 12:21, Rick Harding wrote:
> I'm not sure we want to make this attribute apply to charmstore charms.
> We've an established practice of the charmstore url being the series
> information. It gives the user a chance to have conflicting information if
> the charmstore url is cs:trusty/nova-compute and the series attribute is
> set to xenial. I think we should toss an error to a bundle that has series:
> specified for a charmstore based charm value (or non-local value whichever
> way you want to think about it)
> 
> On Wed, Mar 9, 2016 at 6:29 PM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> One additional enhancement we need for bundles concerns specifying series
>> for
>> multi-series charms, in particular local charms now that the local repo
>> will be
>> going away.
>>
>> Consider:
>>
>> A new multi-series charm may have a URL which does not specify the series.
>> In
>> that case, the series used will be the default specified in the charm
>> metadata
>> or the latest LTS. But we want to allow people to choose their own series
>> also.
>>
>> So we need a new (optional) Series attribute in the bundle metadata.
>>
>> bundle.yaml
>>   series: trusty
>>   services:
>> nova-compute:
>>   series: xenial <-- new
>>   charm: ./nova-compute
>>   num_units: 2
>>
>> or with a charm store charm
>>
>> bundle.yaml
>>   series: trusty
>>   services:
>> nova-compute:
>>   series: xenial<-- new
>>   charm: cs:nova-compute
>>   num_units: 2
>>
>>
>> Note: the global series in the bundle still applies if series is not
>> otherwise
>> known.
>> The new series attribute is per charm.
>>
>> So in the case above, cs:nova-compute may ordinarily be deployed on trusty
>> (the
>> default series in that charm's metadata). But the bundle requires the
>> xenial
>> version. With the charm store URL, we can currently use
>> cs:xenial/nova-compute
>> but that's not the case for local charms deployed out of a directory. We
>> need a
>> way to allow the series to be specified in that latter case.
>>
>> We'll look to make the changes in core initially and can followup later
>> with the
>> GUI etc. The attribute is optional and only really affects bundles with
>> local
>> charms.
>>
>>
>>
>> On 09/03/16 09:53, Ian Booth wrote:
>>> So to clarify what we'll do. We'll support the same syntax in bundle
>> files as we
>>> do for deploy.
>>>
>>> Deploys charm store charms:
>>>
>>> $ juju deploy cs:wordpress
>>> $ juju deploy wordpress
>>>
>>> Deploys a local charm from a directory:
>>>
>>> $ juju deploy ./charms/wordpress
>>> $ juju deploy ./wordpress
>>>
>>> So below deploys a local nova-compute charm in a directory co-located
>> with the
>>> bundle.yaml file.
>>>
>>>  series: trusty
>>>  services:
>>>nova-compute:
>>>  charm: ./nova-compute
>>>  num_units: 2
>>>
>>> This one deploys a charm store charm:
>>>
>>>  series: trusty
>>>  services:
>>>nova-compute:
>>>charm: nova-compute
>>>num_units: 2
>>>
>>>
>>>
>>> On 09/03/16 03:59, Rick Harding wrote:
>>>> Long term we want to have a pattern when the bundle is a directory with
>>>> local charms in a directory next to the bundles.yaml file. We could not
>> do
>>>> this cleanly before the multi-series charms that are just getting out
>> the
>>>> door. I think that bundles with local charms will be suboptimal until we
>>>> can get those bits to line up.
>>>>
>>>> I don't think we want to be doing the file based urls, but to build a
>>>> pattern that's reusable and makes sense across systems. Creating a
>> standard
>>>> pattern I think is the best path forward.
>>>>

Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-09 Thread Ian Booth
One additional enhancement we need for bundles concerns specifying series for
multi-series charms, in particular local charms now that the local repo will be
going away.

Consider:

A new multi-series charm may have a URL which does not specify the series. In
that case, the series used will be the default specified in the charm metadata
or the latest LTS. But we want to allow people to choose their own series also.

So we need a new (optional) Series attribute in the bundle metadata.

bundle.yaml
  series: trusty
  services:
nova-compute:
  series: xenial <-- new
  charm: ./nova-compute
  num_units: 2

or with a charm store charm

bundle.yaml
  series: trusty
  services:
nova-compute:
  series: xenial<-- new
  charm: cs:nova-compute
  num_units: 2


Note: the global series in the bundle still applies if series is not otherwise
known.
The new series attribute is per charm.

So in the case above, cs:nova-compute may ordinarily be deployed on trusty (the
default series in that charm's metadata). But the bundle requires the xenial
version. With the charm store URL, we can currently use cs:xenial/nova-compute
but that's not the case for local charms deployed out of a directory. We need a
way to allow the series to be specified in that latter case.

We'll look to make the changes in core initially and can followup later with the
GUI etc. The attribute is optional and only really affects bundles with local
charms.
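
The precedence implied above (series from the store URL, then the new
per-charm attribute, then the bundle-level series, then the charm's default
or the latest LTS) could be sketched as follows. Parameter names are invented
for this example, and conflict detection (e.g. cs:trusty/mysql vs series:
xenial) is deliberately omitted:

```python
def resolve_series(url_series, charm_series, bundle_series, charm_default,
                   latest_lts="xenial"):
    """Pick the series for one bundle entry, most specific source first.

    Sketch of the precedence described in this thread; not Juju's
    actual resolution code.
    """
    for candidate in (url_series, charm_series, bundle_series, charm_default):
        if candidate:
            return candidate
    return latest_lts

# cs:nova-compute with the per-charm override from the bundle above:
assert resolve_series(None, "xenial", "trusty", "trusty") == "xenial"
```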



On 09/03/16 09:53, Ian Booth wrote:
> So to clarify what we'll do. We'll support the same syntax in bundle files as 
> we
> do for deploy.
> 
> Deploys charm store charms:
> 
> $ juju deploy cs:wordpress
> $ juju deploy wordpress
> 
> Deploys a local charm from a directory:
> 
> $ juju deploy ./charms/wordpress
> $ juju deploy ./wordpress
> 
> So below deploys a local nova-compute charm in a directory co-located with the
> bundle.yaml file.
> 
>  series: trusty
>  services:
>nova-compute:
>  charm: ./nova-compute
>  num_units: 2
> 
> This one deploys a charm store charm:
> 
>  series: trusty
>  services:
>nova-compute:
>charm: nova-compute
>num_units: 2
> 
> 
> 
> On 09/03/16 03:59, Rick Harding wrote:
>> Long term we want to have a pattern when the bundle is a directory with
>> local charms in a directory next to the bundles.yaml file. We could not do
>> this cleanly before the multi-series charms that are just getting out the
>> door. I think that bundles with local charms will be suboptimal until we
>> can get those bits to line up.
>>
>> I don't think we want to be doing the file based urls, but to build a
>> pattern that's reusable and makes sense across systems. Creating a standard
>> pattern I think is the best path forward.
>>
>> On Tue, Mar 8, 2016 at 12:26 PM Martin Packman <martin.pack...@canonical.com>
>> wrote:
>>
>>> On 05/03/2016, Ian Booth <ian.bo...@canonical.com> wrote:
>>>>>
>>>>> How will bundles work which reference local charms? Will this work as
>>>>> expected where nova-compute is a directory at the same level as a bundle
>>>>> file?
>>>>>
>>>>> ```
>>>>> series: trusty
>>>>> services:
>>>>>   nova-compute:
>>>>> charm: ./nova-compute
>>>>> num_units: 2
>>>>> ```
>>>>>
>>>>
>>>> The above will work but not until a tweak is made to bundle deployment to
>>>> interpret a path on disk rather than a url. It's a small change. This
>>> would
>>>> be done as part of the work to remove the local repo support.
>>>
>>> Can we keep interpreting the reference in the bundle as a url, but
>>> start supporting file urls? That seems neater than treating the cs:
>>> prefix as magic not-a-filename.
>>>
>>> The catch is that there's no sane way of referencing locations outside
>>> a base url.
>>>
>>> charm: file:nova-compute
>>>
>>> Works as a reference to a dir inside the base location, but:
>>>
>>> charm: file:../nova-compute
>>>
>>> Will not work as a reference to a sibling directory. And absolute file
>>> paths are pretty useless across machines.
>>>
>>> Martin
>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>
>>
>>



Re: Juju 2.0 and local charm deployment

2016-03-08 Thread Ian Booth
So to clarify what we'll do. We'll support the same syntax in bundle files as we
do for deploy.

Deploys charm store charms:

$ juju deploy cs:wordpress
$ juju deploy wordpress

Deploys a local charm from a directory:

$ juju deploy ./charms/wordpress
$ juju deploy ./wordpress

So below deploys a local nova-compute charm in a directory co-located with the
bundle.yaml file.

 series: trusty
 services:
   nova-compute:
 charm: ./nova-compute
 num_units: 2

This one deploys a charm store charm:

 series: trusty
 services:
   nova-compute:
   charm: nova-compute
   num_units: 2
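
The path-vs-store distinction above can be sketched like this. The helper is
hypothetical, not Juju's actual parser; it just encodes the convention that a
reference beginning with a path separator is a local directory:

```python
def is_local_charm(ref):
    """True when a deploy/bundle charm reference is a path on disk.

    Sketch of the convention described above: references beginning with
    "./", "../" or "/" are local directories; anything else (with or
    without the "cs:" prefix) resolves against the charm store.
    """
    return ref.startswith(("./", "../", "/"))

assert is_local_charm("./charms/wordpress")
assert not is_local_charm("cs:wordpress")
assert not is_local_charm("wordpress")
```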



On 09/03/16 03:59, Rick Harding wrote:
> Long term we want to have a pattern when the bundle is a directory with
> local charms in a directory next to the bundles.yaml file. We could not do
> this cleanly before the multi-series charms that are just getting out the
> door. I think that bundles with local charms will be suboptimal until we
> can get those bits to line up.
> 
> I don't think we want to be doing the file based urls, but to build a
> pattern that's reusable and makes sense across systems. Creating a standard
> pattern I think is the best path forward.
> 
> On Tue, Mar 8, 2016 at 12:26 PM Martin Packman <martin.pack...@canonical.com>
> wrote:
> 
>> On 05/03/2016, Ian Booth <ian.bo...@canonical.com> wrote:
>>>>
>>>> How will bundles work which reference local charms? Will this work as
>>>> expected where nova-compute is a directory at the same level as a bundle
>>>> file?
>>>>
>>>> ```
>>>> series: trusty
>>>> services:
>>>>   nova-compute:
>>>> charm: ./nova-compute
>>>> num_units: 2
>>>> ```
>>>>
>>>
>>> The above will work but not until a tweak is made to bundle deployment to
>>> interpret a path on disk rather than a url. It's a small change. This
>> would
>>> be done as part of the work to remove the local repo support.
>>
>> Can we keep interpreting the reference in the bundle as a url, but
>> start supporting file urls? That seems neater than treating the cs:
>> prefix as magic not-a-filename.
>>
>> The catch is that there's no sane way of referencing locations outside
>> a base url.
>>
>> charm: file:nova-compute
>>
>> Works as a reference to a dir inside the base location, but:
>>
>> charm: file:../nova-compute
>>
>> Will not work as a reference to a sibling directory. And absolute file
>> paths are pretty useless across machines.
>>
>> Martin
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> 
> 
> 



Re: Juju 2.0 and local charm deployment

2016-03-05 Thread Ian Booth
> 
> Does this mean it won't be possible to deploy old single-series
> charms with Juju without modifying metadata.yaml to add the supported
> series?
> 

You can use the --series argument

$ juju deploy ./trusty/mysql --series trusty

We could look at pulling the series out of the path if it's an old single-series
charm without series defined in metadata. Would that be an approach we'd be
willing to adopt?
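
The idea floated above could look something like this. It is a hypothetical
helper, not shipped Juju behaviour, and KNOWN_SERIES is an illustrative
subset:

```python
import os

# Illustrative subset; a real list would cover all supported series.
KNOWN_SERIES = {"precise", "trusty", "xenial"}

def series_from_path(charm_path):
    """Infer the series from an old-style path such as ./trusty/mysql.

    Returns the parent directory name when it matches a known series,
    else None (the caller would then still require --series).
    """
    parent = os.path.basename(os.path.dirname(os.path.abspath(charm_path)))
    return parent if parent in KNOWN_SERIES else None
```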




Re: Juju 2.0 and local charm deployment

2016-03-05 Thread Ian Booth
Hey Marco

> 
> I'm a +1
> 
> How will bundles work which reference local charms? Will this work as
> expected where nova-compute is a directory at the same level as a bundle
> file?
> 
> ```
> series: trusty
> services:
>   nova-compute:
> charm: ./nova-compute
> num_units: 2
> ```
> 

The above will work but not until a tweak is made to bundle deployment to
interpret a path on disk rather than a url. It's a small change. This would be
done as part of the work to remove the local repo support.



Re: admin is dead, long live $USER

2016-03-03 Thread Ian Booth
Hey Tim

The new bootstrap UX has not removed any --admin-user flag.
I can see that the server jujud bootstrap command has an --admin-user argument
but it appears this is never set anywhere in the cloud init scripts. Or not that
I can see. I've checked older version of the relevant files and can't see where
we've ever used this.

So maybe we have a capability to bootstrap the controller agent with a specified
admin-user but have not hooked it up yet?

On 04/03/16 08:11, Tim Penhey wrote:
> Ah... it used to be there :-) At least it is on my feature branch, but I
> don't think I have merged the most recent master updates that has the
> work to re-work bootstrap for the new cloud credentials stuff.
> 
> Tim
> 
> On 04/03/16 10:09, Rick Harding wrote:
>> If we do that we need to also make it configurable on bootstrap as an
>> option.
>>
>> +1 overall
>>
>>
>> On Thu, Mar 3, 2016, 4:07 PM Tim Penhey wrote:
>>
>> Hi folks,
>>
>> I was thinking that with the upcoming big changes with 2.0, we should
>> tackle a long held issue where we have the initial user called "admin".
>>
>> There was a request some time back that we should use the current user's
>> name. The reason it wasn't implemented at that time was due to logging
>> into the GUI issues. These have been resolved some time back with the
>> multiple user support that was added.
>>
>> All the server side code handles the ability to define the initial user
>> for the controller model, and we do this in all the tests, so the
>> default test user is actually called "test-admin".
>>
>> I *think* that all we need to do is change the default value we use in
>> the bootstrap command for the AdminUserName (--admin-user flag) from
>> "admin" to something we derive from the current user.
>>
>> Probably worth doing now.
>>
>> Thoughts?
>>
>> Tim
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com 
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> 
> 



Juju 2.0 and local charm deployment

2016-03-03 Thread Ian Booth
Hi folks

TL;DR we want to remove support for old style local charm repositories in Juju 
2.0

Hopefully everyone is aware that Juju 2.0 and the charm store will support
multi-series charms. To recap, a multi-series charm is one which can declare
that it supports more than just the one series; you no longer need to have a
separate copy of the charm for precise vs trusty vs xenial. Note that all series
must be for the same OS so you'll still need separate charm sources for Windows
vs Ubuntu vs Centos.

Here's a link to the release notes
https://jujucharms.com/docs/devel/temp-release-notes#multi-series-charms

Juju 2.0 will also support deploying bundles natively
https://jujucharms.com/docs/devel/temp-release-notes#native-support-for-charm-bundles

So, with multi-series charm support, local charm deployment is now also a lot
easier. Back in Juju 1.x, to deploy local charms you needed to set up a
so-called charm repository, with a prescribed directory layout. The directory
layout has one directory per series.

_ mycharms
 |_precise
  |_mysql
 |_trusty
  |_mysql
 |_bundle
  |_openstack

You deployed using a local URL syntax:

$ juju deploy --repository ~/mycharms local:trusty/mysql

$ juju deploy --repository ~/mycharms local:bundle/openstack

The above structure was fine for when charms were duplicated for each series.
But one of the limitations is that you can't easily git checkout mycharm and
deploy straight from the vcs source on disk.

Juju 2.0 supports deploying charms and bundles straight from any directory,
including where you've checked out your launchpad/github charm source.

$ juju deploy ~/mygithubstuff/mysql

$ juju deploy ~/mygithubstuff/openstack/bundle.yaml

So the above combined with the consolidation of charms for many series into the
one source tree means that the old local repo support is not needed.

Will anyone complain if we drop local repos in Juju 2.0? Is there a use case
where it's absolutely required to retain this?








Re: LXD support (maybe)

2016-02-25 Thread Ian Booth
> 
>> I personally encourage us to use heterogeneous versions of go as much as
>> we can. Because we should be compatible as much as possible. But it does
>> look like our dependencies are going to force our hand.
>>
> 
> Agreed. I think it's healthy for Juju's devs to be using a range of Go
> versions (within reason). It helps to ensure we not relying on version
> specific behaviour.
> 

+1. And I like life on the edge, so 1.6 for me :-D




Re: LXD support (maybe)

2016-02-25 Thread Ian Booth
FWIW, go 1.6 works just fine with Juju on my system

On 26/02/16 08:34, Menno Smits wrote:
> On 26 February 2016 at 04:59, Horacio Duran 
> wrote:
> 
>> be aware though, iirc that ppa replaces your go version with 1.6 (or used
>> to) which can mess your env if you are using go from ubuntu.
>>
> 
> With a bit of apt configuration you can use the lxd stable PPA without
> pulling in its Go 1.6 packages.
> 
> Here's what I did:
> 
> $ cat /etc/apt/preferences.d/lxd-stable-pin
> Package:  *
> Pin: release o=LP-PPA-ubuntu-lxc-lxd-stable
> Pin-Priority: 200
> 
> Package: lxd lxd-tools lxd-client lxcfs lxc-templates lxc cgmanager
> libcgmanager0 libseccomp2
> Pin: release o=LP-PPA-ubuntu-lxc-lxd-stable
> Pin-Priority: 500
> 
> The main problem with this approach is that you have to explicitly specify
> the package names you do want to use, which will be a problem if package
> names change or extra packages are added. Maybe someone with more apt foo
> than me knows a better way.
> 
> - Menno
> 
> 
> 



Please merge master into your feature branches

2016-02-18 Thread Ian Booth
FYI for folks developing feature branches for juju-core.

juju-core master has been updated to include the first round of functionality to
improve the bootstrap experience. The consequence of this is that CI scripts
needed to be updated to match. This means that any feature branch which has not
had master commit 294388 or later merged in will not work with CI and so will
not be blessed for release.





Re: Breaking news - New Juju 2.0 home^H^H^H^H data location

2016-02-08 Thread Ian Booth
Yes

>> Very, very soon, the need for an environments.yaml file will be no more,

Hopefully in time for the 2.0 beta due sometime next week.

On 09/02/16 01:33, Adam Stokes wrote:
> Does this mean the environments.yaml file is going away at some point?
> 
> On Mon, Feb 8, 2016 at 2:16 AM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> As advance notice, the next alpha release of Juju 2.0 (due this week) will
>> use a
>> new default home location. Juju will now adhere to the XDG desktop
>> standard
>> and use this directory (by default):
>>
>> ~/.local/share/juju
>>
>> to store its working files (%APPDATA%/Juju on Windows). This is partly to
>> allow
>> Juju 2.0 to be installed alongside 1.x.
>>
>> Very, very soon, the need for an environments.yaml file will be no more,
>> meaning
>> there will be no need for the user to edit any files in that directory. As
>> a
>> sneak peek of what is coming, you will be able to, out of the box:
>>
>> $ juju bootstrap mycontroller aws/us-west-2
>>
>> Note that there's no need to "$ juju init" or edit any environment.yaml to
>> use
>> the public clouds and regions supported by Juju. Adding support for new
>> regions
>> or cloud information is a simple matter of running "$ juju update-clouds".
>> There's more to come, but you get the idea.
>>
>> Anyway, the point of the above is to say the location of the home/data
>> directory
>> doesn't really matter as there will be no need to poke around inside it.
>>
>> As an interim measure, if you run off master, just:
>>
>> mkdir ~/.local/share/juju
>> cp -r ~/.juju/* ~/.local/share/juju
>>
>> if you want to use existing models with the latest build from source.
>>
>>
>>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> 



Breaking news - New Juju 2.0 home^H^H^H^H data location

2016-02-07 Thread Ian Booth
As advance notice, the next alpha release of Juju 2.0 (due this week) will use a
new default home location. Juju will now adhere to the XDG desktop standard
and use this directory (by default):

~/.local/share/juju

to store its working files (%APPDATA%/Juju on Windows). This is partly to allow
Juju 2.0 to be installed alongside 1.x.

Very, very soon, the need for an environments.yaml file will be no more, meaning
there will be no need for the user to edit any files in that directory. As a
sneak peek of what is coming, you will be able to, out of the box:

$ juju bootstrap mycontroller aws/us-west-2

Note that there's no need to "$ juju init" or edit any environment.yaml to use
the public clouds and regions supported by Juju. Adding support for new regions
or cloud information is a simple matter of running "$ juju update-clouds".
There's more to come, but you get the idea.

Anyway, the point of the above is to say the location of the home/data directory
doesn't really matter as there will be no need to poke around inside it.

As an interim measure, if you run off master, just:

mkdir ~/.local/share/juju
cp -r ~/.juju/* ~/.local/share/juju

if you want to use existing models with the latest build from source.
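The lookup described above can be mimicked in a script; a minimal sketch of XDG-style resolution, assuming only the standard $XDG_DATA_HOME variable (the helper name is made up, not part of Juju):

```python
import os
from pathlib import Path

def juju_data_dir(environ=None):
    """Resolve the Juju 2.0 data directory following the XDG base
    directory convention: $XDG_DATA_HOME/juju if set, otherwise
    ~/.local/share/juju. Illustrative helper only."""
    env = os.environ if environ is None else environ
    base = env.get("XDG_DATA_HOME")
    if base:
        return Path(base) / "juju"
    return Path.home() / ".local" / "share" / "juju"

# With XDG_DATA_HOME set, that location wins; otherwise the default is used.
print(juju_data_dir({"XDG_DATA_HOME": "/tmp/xdg"}))
print(juju_data_dir({}))
```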






Juju terminology change: controllers and models

2016-02-02 Thread Ian Booth
Hey all

As has been mentioned previously in this list, for the Juju 2.0 release we have
been working on fundamental terminology changes. In particular, we now talk
about controllers and models instead of state servers and environments.

To this end, a rather large change has landed in master and the upcoming
2.0-alpha2 release of Juju will reflect these changes. There are several things
to be aware of. We have also taken the opportunity to remove a lot of code which
existed to support older Juju clients. Needless to say, this Juju 2.0 release
will not support upgrading from 1.x - it works only as a clean install.

Note: some of the changes will initially break the GUI and users of the Python
Juju Client - work is underway to update these products for the next alpha3
release scheduled for next week. For those wishing to continue to test Juju 2.0
without the breaking changes, the alpha1 release is still available via
ppa:juju/experimental. Separate communications to affected stakeholders have been
or will be made as part of the 2.0-alpha2 release.

So, the changes are roughly broken down as follows:

- CLI command name changes
- facade name changes
- api method and parameter name changes
- facade method restructure
- internal api name changes
- external artifact/data changes (including on the wire changes)
- deprecated and older version facades are removed

1. CLI command name changes

As an obvious example, create-environment becomes create-model. We also have
destroy-controller etc. This alpha2 release will also contain many of the other
CLI changes targeted for 2.0, eg juju backup create becomes juju create-backup.
Not all 2.0 CLI syntax is supported yet, but all the environment -> model
changes are done.

You will also use -m <model> instead of -e <environment>.

The release notes will go into more detail.

All user facing text now refers to model instead of environment.

2. Facade name changes

If you are curious, see https://goo.gl/l4JqGd for a representative listing of
all facade and method names and which ones have been changed.

The main one is EnvironmentManager becomes ModelManager. These changes affect
external API clients like the GUI and Python Juju client.

3. api method and parameter name changes

By way of example:
EnvironInfo() on the undertaker facade becomes ModelInfo().
The param struct ModifyEnvironUsers becomes ModifyModelUsers etc.
EnvironTag attributes become ModelTag.

4. Service facade method restructure

As part of making our facades more manageable and maintainable when API changes
are required, a whole bunch of service related methods are moved off the Client
facade and onto the Service facade. This had already been started months ago,
and there were shims in place to keep existing clients working, but now the job
is finished.
eg Client.AddRelation() becomes Service.AddRelation() etc.

This change will break the GUI and Python Juju client.

5. Internal API name changes

Things like state.AllEnvironments() becomes state.AllModels(), we now use
names.ModelTag instead of names.EnvironTag, and many, many more.

Note: the names package has not been forked into a .V2 yet (with EnvironTag
removed) as there are dependencies to sort out. Please do not use EnvironTag
anymore.

6. External artifact/data changes (including on the wire changes)

There are several main examples here.
On the wire, we transmit model-uuid tags rather than environment-uuid tags.
In mongo, we store model-uuid doc fields rather than env-uuid.
In agent.conf files we store Model info rather than Environment tags.
In the controller blob store, we store and manage blobs for buckets rather than
environments.
The controller HTTP endpoints are /model/...


Re: Juju terminology change: controllers and models

2016-02-02 Thread Ian Booth
Yeah, there's a couple of places that need a bit of cleanup. With that one, I
needed to double check existing call points before deleting, and ran out of time
before needing to do the merge. But the intent is to delete it.

On 03/02/16 12:53, Nate Finch wrote:
> FYI, I noticed ServiceDeployWithNetworks still exists as a client and
> facade method, but it's only called by tests. Maybe it should be removed?
> 
> On Tue, Feb 2, 2016, 8:34 PM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> Hey all
>>
>> As has been mentioned previously in this list, for the Juju 2.0 release we
>> have
>> been working on fundamental terminology changes. In particular, we now talk
>> about controllers and models instead of state servers and environments.
>>
>> To this end, a rather large change has landed in master and the upcoming
>> 2.0-alpha2 release of Juju will reflect these changes. There are several
>> things
>> to be aware of. We have also taken the opportunity to remove a lot of code
>> which
>> existed to support older Juju clients. Needless to say, this Juju 2.0
>> release
>> will not support upgrading from 1.x - it works only as a clean install.
>>
>> Note: some of the changes will initially break the GUI and users of the
>> Python
>> Juju Client - work is underway to update these products for the next alpha3
>> release scheduled for next week. For those wishing to continue to test
>> Juju 2.0
>> without the breaking changes, the alpha1 release is still available via
>> ppa:juju/experimental. Separate communications to affected stakeholders
>> has/will
>> be made as part of the 2.0-alpha2 release.
>>
>> So, the changes are roughly broken down as follows:
>>
>> - CLI command name changes
>> - facade name changes
>> - api method and parameter name changes
>> - facade method restructure
>> - internal api name changes
>> - external artifact/data changes (including on the wire changes)
>> - deprecated and older version facades are removed
>>
>> 1. CLI command name changes
>>
>> As an obvious example, create-environment becomes create-model. We also
>> have
>> destroy-controller etc. This alpha2 release will also contain many of the
>> other
>> CLI changes targeted for 2.0 eg juju backup create becomes juju
>> create-backup.
>> Not all 2.0 CLI syntax is supported yet, but all the environment -> model
>> changes are done.
>>
>> You will also use -m <model> instead of -e <environment>.
>>
>> The release notes will go into more detail.
>>
>> All user facing text now refers to model instead of environment.
>>
>> 2. Facade name changes
>>
>> If you are curious, see https://goo.gl/l4JqGd for a representative
>> listing of
>> all facade and method names and which ones have been changed.
>>
>> The main one is EnvironmentManager becomes ModelManager. These changes
>> affect
>> external API clients like the GUI and Python Juju client.
>>
>> 3. api method and parameter name changes
>>
>> By way of example:
>> EnvironInfo() on the undertaker facade becomes ModelInfo().
>> The param struct ModifyEnvironUsers becomes ModifyModelUsers etc.
>> EnvironTag attributes become ModelTag.
>>
>> 4. Service facade method restructure
>>
>> As part of making our facades more manageable and maintainable when API
>> changes
>> are required, a whole bunch of service related methods are moved off the
>> Client
>> facade and onto the Service facade. This had already been started months
>> ago,
>> and there were shims in place to keep existing clients working, but now
>> the job
>> is finished.
>> eg Client.AddRelation() becomes Service.AddRelation() etc.
>>
>> This change will break the GUI and Python Juju client.
>>
>> 5. Internal API name changes
>>
>> Things like state.AllEnvironments() becomes state.AllModels(), we now use
>> names.ModelTag instead of names.EnvironTag, and many, many more.
>>
>> Note: the names package has not been forked into a .V2 yet (with EnvironTag
>> removed) as there are dependencies to sort out. Please do not use
>> EnvironTag
>> anymore.
>>
>> 6. External artifact/data changes (including on the wire changes)
>>
>> There are several main examples here.
>> On the wire, we transmit model-uuid tags rather than environment-uuid tags.
>> In mongo, we store model-uuid doc fields rather than env-uuid.
>> In agent.conf files we store Model info rather than Environment tags.
>> In the controller blob store, we store and manage 

Re: "environment" vs "model" in the code

2016-01-19 Thread Ian Booth
I'm a firm -1 to using old terminology for new work.

Doing anything other than using the new terminology for new work is simply
kicking the can down the road. We don't have time for re-work. We are currently
undertaking the rename of CLI, associated text, api parameters etc - the
outwardly facing artifacts. I'm sure we can all deal with a little inconsistency
for a short time. There will be inconsistency anyway with the current in
progress work.

On 18/01/16 10:35, Menno Smits wrote:
> +1 to what Roger said. New features always require changes to existing code
> so inconsistency is unavoidable if we take a piecemeal approach.
> 
> Given that a big rename is planned at some point, and that renaming can be
> largely automated, continuing to use "environment" internally until the big
> rename happens may make more sense in terms of maintainability.
> 
> Thoughts?
> 
> 
> 
> On 15 January 2016 at 21:05, roger peppe <roger.pe...@canonical.com> wrote:
> 
>> On 15 January 2016 at 06:03, Ian Booth <ian.bo...@canonical.com> wrote:
>>>
>>>
>>> On 15/01/16 10:16, Menno Smits wrote:
>>>> Hi all,
>>>>
>>>> We've committed to renaming "environment" to "model" in Juju's CLI and API
>>>> but what do we want to do in Juju's internals? I'm currently adding
>>>> significant new model/environment related functionality to the state
>>>> package which includes adding new database collections, structs and
>>>> functions which could include either "env/environment" or "model" in their
>>>> names.
>>>>
>>>> One approach could be that we only use the word "model" at the edges - the
>>>> CLI, API and GUI - and continue to use "environment" internally. That way
>>>> the naming of environment related things in most of Juju's code and
>>>> database stays consistent.
>>>>
>>>> Another approach is to use "model" for new work[1] with a hope that it'll
>>>> eventually become the dominant name for the concept. This will however
>>>> result in a long period of widespread inconsistency, and it's unlikely that
>>>> we'll ever completely get rid of all uses of "environment".
>>>>
>>>> I think we need to arrive at some sort of consensus on the way to tackle this.
>>>> FWIW, I prefer the former approach. Having good, consistent names for
>>>> things is important[2].
>>>>
>>>
>>> Using "model" for new work is the correct approach - new chunks of work will be
>>> internally consistent with the use of their terminology. And we will be looking
>>> to migrate existing internal code once we tackle the external facing stuff for
>>> 2.0. We don't want to add to our tech debt and make our future selves sad by
>>> introducing obsoleted terminology for new work.
>>
>> The other side of this coin is that, as Menno says, now the code base
>> will be harder to read because it will be inconsistent throughout (and
>> not consistently inconsistent either, because the new work is bound to
>> cross domain boundaries).
>>
>> Given that it's not hard to make automated source code changes in Go
>> (given gofmt, gorename, gofix etc), I wonder if doing it this way might
>> just be making things harder for people maintaining the code without
>> actually making things significantly easier in the long run.
>>
>>   cheers,
>> rog.
>>
>>
> 



Re: "environment" vs "model" in the code

2016-01-14 Thread Ian Booth


On 15/01/16 10:16, Menno Smits wrote:
> Hi all,
> 
> We've committed to renaming "environment" to "model" in Juju's CLI and API
> but what do we want to do in Juju's internals? I'm currently adding
> significant new model/environment related functionality to the state
> package which includes adding new database collections, structs and
> functions which could include either "env/environment" or "model" in their
> names.
> 
> One approach could be that we only use the word "model" at the edges - the
> CLI, API and GUI - and continue to use "environment" internally. That way
> the naming of environment related things in most of Juju's code and
> database stays consistent.
> 
> Another approach is to use "model" for new work[1] with a hope that it'll
> eventually become the dominant name for the concept. This will however
> result in a long period of widespread inconsistency, and it's unlikely that
> we'll ever completely get rid of all uses of "environment".
> 
> I think we need to arrive at some sort of consensus on the way to tackle this.
> FWIW, I prefer the former approach. Having good, consistent names for
> things is important[2].
>

Using "model" for new work is the correct approach - new chunks of work will be
internally consistent with the use of their terminology. And we will be looking
to migrate existing internal code once we tackle the external facing stuff for
2.0. We don't want to add to our tech debt and make our future selves sad by
introducing obsoleted terminology for new work.




Re: Running upgrade steps for units

2015-09-15 Thread Ian Booth


On 16/09/15 12:06, Menno Smits wrote:
> On 16 September 2015 at 08:41, Tim Penhey wrote:
> 
>> On 15/09/15 19:38, William Reade wrote:
>>> Having the machine agent run unit agent upgrade steps would be a Bad
>>> Thing -- the unit agents are still actively running the old code at that
>>> point. Stopping the unit agents and managing the upgrade purely from the
>>> machine would be ok; but it feels like a lot of effort for very little
>>> payoff, so I'm most inclined to WONTFIX it and spend the energy on agent
>>> consolidation instead.
>>
>> This still leaves us with the problem of the two upgrade steps that were
>> written to update the uniter state file, and how to handle this.
>>
> 
> If the work that these upgrade steps did is fairly trivial we could have
> the unit agents run a function which does the upgrade work as it comes up,
> before workers are started. This might be an acceptable solution if we're
> going to merge machine and unit agents soon[1] anyway.
> 
> I had thought it might be reasonably easy to get the upgrade machinery
> working within the unit agent but now that I've looked at the code I can
> see that it's a fairly major undertaking (to do it Right at least).
>

That would work, since the upgrade steps are trivial. They read the local state
file (yaml) and update a setting and write the file back out. The uniter could
just call the upgrade methods directly.
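A trivial upgrade step of that shape can be sketched as follows: read a local state file, add a missing setting, write the file back out. This is illustrative only — the real uniter state file is YAML, simple "key: value" lines stand in for it here, and the setting name is hypothetical.

```python
import os
import tempfile
from pathlib import Path

def upgrade_uniter_state(path, key, value):
    """Sketch of an in-place upgrade step: parse the state file,
    add the setting if it is missing, and rewrite the file."""
    lines = Path(path).read_text().splitlines()
    state = dict(line.split(": ", 1) for line in lines if line.strip())
    state.setdefault(key, value)  # only add the setting when absent
    Path(path).write_text("".join(f"{k}: {v}\n" for k, v in state.items()))

# Demonstrate against a throwaway file.
fd, state_file = tempfile.mkstemp()
os.close(fd)
Path(state_file).write_text("op: continue\n")
upgrade_uniter_state(state_file, "started", "true")  # hypothetical setting
print(Path(state_file).read_text())
```

Running the step twice is harmless, since an existing value is never overwritten.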



Re: workers using *state.State

2015-09-08 Thread Ian Booth
The workers below aren't the only ones. There are also the minunits and peergrouper
workers.

No-one does these things on purpose. Just last week I caught and rejected a pull
request to introduce a new worker depending on state directly. People make
mistakes. Perhaps we should introduce a test which fails if state is imported
into any worker code. We have similar tests already for other forbidden imports.
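A guard test along those lines can be sketched by scanning worker sources for the forbidden import. The import path reflects the repo layout of the time and is an assumption; a real test would parse Go import blocks rather than grep.

```python
import re
import tempfile
from pathlib import Path

# Assumed import path for the state package; adjust to the actual repo layout.
FORBIDDEN = re.compile(r'"github\.com/juju/juju/state"')

def workers_importing_state(root):
    """Return worker source files that import the state package directly.
    A regex stands in for proper Go import-block parsing."""
    return sorted(str(f) for f in Path(root).rglob("*.go")
                  if FORBIDDEN.search(f.read_text()))

# Demonstrate with two fabricated worker files in a temp directory.
d = tempfile.mkdtemp()
Path(d, "bad_worker.go").write_text(
    'package worker\n\nimport "github.com/juju/juju/state"\n')
Path(d, "ok_worker.go").write_text(
    'package worker\n\nimport "github.com/juju/juju/api"\n')
offenders = workers_importing_state(d)
print(offenders)
```

A CI test would simply fail when the offenders list is non-empty.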

On 08/09/15 17:12, William Reade wrote:
> People keep writing them, in defiance of sane layering and explicit
> instructions, for the most embarrassingly trivial tasks
> (statushistorypruner? dblogpruner? txnpruner? *all* of those can and should
> pass through a simple api facade, not just dance off to play with the
> direct-db-access fairies.)
> 
> There is no justification for *any* of those things to see a *state.State,
> and I'm going to start treating new workers that violate layering this way
> as deliberate sabotage attempts. Leads who have overseen the introduction
> of those workers, sort it out.
> 
> 
> 



Re: Dreamhost progress

2015-07-16 Thread Ian Booth


On 17/07/15 09:42, Ian Booth wrote:

 Next step is generate image metadata for the trusty image given by
 `nova image-list` which is the same as before:

 $ juju metadata generate-image -i c55094e9-699c-4da9-95b4-2e2e75f4c66e -s 
 trusty

 Then bootstrap with --upload-tools and --metadata-source pointing at
 the directory with the image streams.

 
 This won't work yet if cloud storage is removed or not available. We are
 currently implementing the code needed to import custom image metadata as
 structured records in the state database. Currently, the image metadata is
 copied to the environment's cloud storage (ie swift for openstack).
 
 

Oops. Sorry, I was looking at old code; image metadata is now imported into
state as a blob. We're still working to fix that to make the data properly
structured, so it's a work in progress that we're hoping to finish next week.



Re: Recurring fails on windows tests and the recent CentOS failure

2015-07-15 Thread Ian Booth
+100

And if the only reason for embedding a suite is to use a helper type function
that does not require that suite's setup/teardown, then it really should be
factored out as a standalone function to be shared.

On 16/07/15 10:46, Tim Penhey wrote:
 I'm in agreement with Bogdan, Roger and William on this one.
 
 If your test suite is composed of other suites, and you override the
 default setup or teardown of either the suite or the test, you MUST call
 the respective methods of the embedded suites.
 
 Roger, if it is easy to write some code to assert this, I would LOVE to
 have that as a test. It is not something I have the ability to write
 quickly (if at all).
 
 As a rule, you should call the setups in the order you define them in
 the struct, and call teardown in the reverse order.
 
 Thanks,
 Tim
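
Tim's ordering rule (set-ups in declaration order, tear-downs in reverse) carries over from gocheck's embedded suites; a small illustrative sketch of the same discipline:

```python
# Record the order of set-up and tear-down calls when composing suites.
log = []

class BaseSuite:
    def set_up(self):
        log.append("base set_up")
    def tear_down(self):
        log.append("base tear_down")

class LoggingSuite:
    def set_up(self):
        log.append("logging set_up")
    def tear_down(self):
        log.append("logging tear_down")

class ComposedSuite(BaseSuite, LoggingSuite):
    def set_up(self):
        BaseSuite.set_up(self)        # embedded suites first, declaration order
        LoggingSuite.set_up(self)
        log.append("composed set_up")
    def tear_down(self):
        log.append("composed tear_down")
        LoggingSuite.tear_down(self)  # unwind in reverse order
        BaseSuite.tear_down(self)

s = ComposedSuite()
s.set_up()
s.tear_down()
print(log)
```

Forgetting either delegation call leaves an embedded suite half-initialised, which is exactly the class of intermittent failure the thread describes.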
 



Re: Blocking bugs process

2015-07-14 Thread Ian Booth


On 14/07/15 23:26, Aaron Bentley wrote:
 On 2015-07-13 07:43 PM, Ian Booth wrote:
 By the definition given
 
 If a bug must be fixed for the next minor release, it is
 considered a ‘blocker’ and will prevent all landing on that
 branch.
 
 that bug and any other that we say we must include in a release
 would block landings. That's the bit I'm having an issue with. I
 think landings need to be blocked when appropriate, but not by that
 definition.
 
 Here's my rationale:
 1. We have held the principle that our trunk and stable branches
 should always be releaseable.
 2. We have said we should stop-the-line when a branch becomes
 unreleasable.
 3. Therefore, I have concluded that we should stop-the-line when a bug
 is present that makes the branch unreleasable.
 
 Do you agree with 1 and 2?  I think 3 simply follows from 1 and 2, but
 am I wrong?
 

Agree with 1 and 2 (depending on the definition of unreleasable - one definition
of releasable is CI passing).
3 does not follow from the definition though.

A milestone may have many bugs assigned to it that we agree must be fixed before
we release that milestone, simply because we think those bugs are of high
importance and fit our schedule in terms of resources etc. Holding up a 20+
people development team because we have a bunch of bugs assigned to a milestone
is not practical nor productive. Software has bugs. Bugs are assigned to
milestones so we can plan releases. We generally agree that we want all bugs on
a milestone to be fixed prior to releasing (or else why add them to that
milestone). This does not (or should not IMO) make them blockers.

I am happy with the process we have now. CI passing means a branch is
releasable. That's our current definition (we wait for a bless before
releasing). When CI breaks we stop the line to fix CI (and rollback of the
revision that just landed to break stuff is a viable option there). Some bugs
that have been around for a while which finally get assigned to a milestone
should not block landings. They may be complex and hard to diagnose and a few
people fixing is enough. It doesn't help anyone to hold up the entire dev team
over such bugs. Whereas with a CI breakage you have clear choices - fix quickly or
roll back to unblock.


 Depends on the changes. I think we should be pragmatic and make
 considered decisions. I guess that's why we have the jfdi flag.
 
 It's true that the particulars of the bug may matter in deciding
 whether it should block, and that's why there's a process for
 overriding the blocking tag: Exceptions are raised to the release team.
 
 I think JFDI should be considered a nuclear option.  If you need it,
 it's good that it exists, but you shouldn't ever need it.  If you
 think you need it, there may be a problem with our process.
 

There have been many times we have legitimately needed jfdi. Dev teams exist in
a world where pragmatism is usually the best policy, rather than a strict
adherence to a policy which has the potential to kill velocity for unequal
corresponding benefit.




Re: Fwd: juju 1.24.alpha1

2015-05-06 Thread Ian Booth
Copying to juju-dev as this info is generally useful.

Hey

Thanks for the feedback.

The status YAML output will *always* be available, but when Juju 2.0 ships it will
no longer be the default. The default will be tabular, which is much more human
readable.

So as of Juju 2.0, whenever that ships, instead of

juju status

you type

juju status --format yaml

To aid people transitioning their scripts beforehand, you can enable the 2.0
CLI behaviour early:

export JUJU_CLI_VERSION=2
juju status

The 2.0 compatibility feature as of now just applies to status but we may adapt
other commands to use it also as needed. The whole idea here is to retain
backwards compatibility but allow improved features to be exposed to those folks
who want to use them early.

Note: even in the current Juju, with status you can start transitioning scripts.
The --format yaml option works the same now as it will in 2.0

As well as the default format change, status in Juju 2.0 will omit the legacy
status in favour of just printing the new improved workload and agent status. So
I'd encourage using the 2.0 status output now to get the best benefit, assuming
you have tweaked any scripts accordingly.
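The switch-over can be pictured as follows — a sketch of the behaviour described above, not Juju's actual implementation, keyed off the JUJU_CLI_VERSION variable:

```python
import os

def default_status_format(environ=None):
    """Sketch of the compatibility switch: YAML stays the default until
    JUJU_CLI_VERSION=2 opts in to 2.0 behaviour, where tabular becomes
    the default and YAML requires --format yaml."""
    env = os.environ if environ is None else environ
    if env.get("JUJU_CLI_VERSION") == "2":
        return "tabular"
    return "yaml"

print(default_status_format({}))                         # yaml
print(default_status_format({"JUJU_CLI_VERSION": "2"}))  # tabular
```

Scripts that always pass `--format yaml` explicitly are unaffected by either default.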


On 07/05/15 02:00, Alexis Bruemmer wrote:
 Hey Ian, Horacio,
 
 Wanted to make sure you guys saw Adam's comment on the 1.24 release (with
 the service status feature).
 
 
 -- Forwarded message --

 
 Looks awesome! Though one thing that may cause a lot of problems:
 
 The 'status' command will use a table layout in the future
 
 There could possibly be a lot of people affected by this who are pulling in the
 yaml output in their applications rather than relying on the api.
 
 On Wed, May 6, 2015 at 5:31 PM, Curtis Hovey-Canonical cur...@canonical.com
 wrote:
 
 Thank you for testing Juju.

 We have a new devel release that introduces new features that you
 might be interested in and fixes several bugs reported by
 stakeholders. Notable changes include:

   * Service status
   * storage (experimental)

 You can see the full release notes at.
 https://launchpad.net/juju-core/+milestone/1.24-alpha1

 We are striving to release 2 betas every week this month to get fixes
 to stakeholders quickly.



 --
 Curtis Hovey
 Canonical Cloud Development and Operations
 http://launchpad.net/~sinzui

 
 
 
 



Re: bugs, fixes and targeting Juju versions

2015-05-04 Thread Ian Booth
Yes, cherry-pick is something I use all the time, as it fills out the PR in the
later branches with a nice commit message based on the original and also
includes the original PR from which the commit was first done.

On 05/05/15 11:45, Jesse Meek wrote:
 Ah, even better. Now I can update my workflow :)
 
 On 05/05/15 13:43, Menno Smits wrote:
 cherry-pick will even grab the top commit of a branch if you give the branch
 name (presuming the fix is a single commit). For example:

 git checkout -b bug-fix-1.24 upstream/1.24  # create a branch for the fix in 1.24
 git cherry-pick bug-fix-master-branch   # pull the fix across

 There are various ways of grabbing multiple revisions too.

 And of course, as per Ian's recent email you should be targeting fixes to the
 lowest affected version and working forwards. So really in your example the
 fix should be made for 1.24 and the cherry picked onto a branch made from 
 master.




 On 5 May 2015 at 13:15, Tim Penhey tim.pen...@canonical.com wrote:

 git cherry-pick does this as a git command.

 Tim


 On 05/05/15 13:03, Jesse Meek wrote:
  Hi All,
 
  tl;dr `git diff --no-prefix master > diff.patch; patch -p0 < diff.patch`
  is useful for landing bug fixes in different versions of juju.
 
  As a lot of us are currently bug hunting and needing to land
 fixes in
  multiple versions of Juju, I thought I'd share my process of
 doing that
  (maybe it's helpful?):
 
  So say you've branched master, let's call it
 bug-fix-master-branch,
  it's got your fix but you need to land it in 1.24. So branch
 1.24, let's
  call it bug-fix-124, and do the following:
 
  # generate a diff of your changes that can be used with patch
  (--no-prefix master is the magic flag that generates the right
 format)
  (bug-fix-master-branch) $ git diff --no-prefix master > diff.patch
 
  # don't add or commit, checkout the other branch
  (bug-fix-master-branch) $ git checkout bug-fix-124
 
  # diff.patch is still there, unstaged. So use it to add the patch
  (bug-fix-124) $ patch -p0 < diff.patch
 
  # do a sanity check
  (bug-fix-124) $ git diff
 
  # remove the patch file
  (bug-fix-124) $ rm diff.patch
 
  You've now got a bug-fix branch eligible for automatic merging
 targeting
  1.24.
 
  Cheers,
  Jess
 




 
 
 
 



Re: Do not land code on blocked branches

2015-05-03 Thread Ian Booth
My email was poorly worded, sorry. Its main purpose was to reply to the email
from QA to let the QA folks know that an incompatibility was discovered which
explains the CI test failures holding up the 1.24 release, and that a solution
was in progress.

John's analysis is correct. I am almost done restoring the Juju 1.16 behaviour
to reinstate compatibility with quickstart as it stands today. Any quickstart
changes are not urgent therefore. The root cause is that Juju 1.16 behaviour was
removed and even though Juju clients are fine with this, existing external
clients may inadvertently be relying on such deprecated behaviour.

The difficulty is that right up until 1.18, the first point from which we were
required to retain backwards compatibility forever, quite a lot of
functionality was deprecated. It's hard to know which external (non Juju)
clients depend on such behaviour. That's why we have CI tests for the important
clients like quickstart and deployer. So in this case, the CI tests have done
their job :-)

On 03/05/15 22:11, John Meinel wrote:
 Just going off the bits that Ian pointed to, the section of code was if you
 called ServiceDeploy with a CharmStore URL (eg cs:mysql) but you had not
 already called AddCharm.
 
 The juju cli client already knows to call Client.AddCharm with the given
 URL, whereas the internal api/client/client.go does a double check if it
 gets called with a charm URL that isn't already in state.
 
 Now, I don't know how Quickstart would be triggering
 apiserver/client/client.go.
 The error in the traceback looks like:
 connecting to
 wss://52.6.157.186:17070/environment/47724da5-9b38-4141-8f92-03d8f4225de9/api
 environment type: ec2
 bootstrap node series: trusty
 charm URL: cs:trusty/juju-gui-27
 requesting juju-gui deployment
 juju-quickstart: error: bad API response: charm cs:trusty/juju-gui-27 not
 found
 2015-05-01 18:28:59 ERROR Command '('juju', '--show-log', 'quickstart',
 '-e', 'aws-quickstart-bundle', '--constraints', 'mem=2G', '--no-browser',
 '/var/lib/jenkins/repository/landscape-scalable.yaml')' returned non-zero
 exit status 1
 Traceback (most recent call last):
   File "/var/lib/jenkins/juju-ci-tools/quickstart_deploy.py", line 51, in run
     for step in self.iter_steps():
   File "/var/lib/jenkins/juju-ci-tools/quickstart_deploy.py", line 70, in iter_steps
     self.client.quickstart(self.bundle_path)
   File "/mnt/jenkinshome/juju-ci-tools/jujupy.py", line 335, in quickstart
     self.juju('quickstart', args, self.env.needs_sudo())
   File "/mnt/jenkinshome/juju-ci-tools/jujupy.py", line 294, in juju
     return subprocess.check_call(args, env=env)
   File "/usr/lib/python2.7/subprocess.py", line 511, in check_call
     raise CalledProcessError(retcode, cmd)
 
 
 So it would appear that we had code to allow users to call
 Client.ServiceDeploy(cs:mysql) and the server would look up the charm and
 deploy it for the user, but the Juju CLI itself stopped doing that as of 1.16.
 
 However, I think this is *our* bad because this is a very important client
 (quickstart and probably others) that has been relying on this behavior in
 all our recent releases.
 
 Compat with juju-cli != compatibility with Juju API users.
 
 AFAIK we don't have a great way to respond to clients that behavior is
 deprecated, but we can bump the Version of the API and change the behavior.
 We definitely should have done that here rather than just remove the
 behavior.
 
 John
 =:-
 
 On Sun, May 3, 2015 at 3:59 PM, Richard Harding rick.hard...@canonical.com
 wrote:
 
 On Sun, 03 May 2015, Ian Booth wrote:


 Curtis has filed three new bugs for these so far, and there appears to
 be one or two more to come:

 https://bugs.launchpad.net/juju-core/+bug/1450912

 The issue here is that quickstart is relying on Juju 1.16 behaviour.
 There was a
 block of code with a comment:

 // Remove this whole if block when 1.16 compatibility is dropped.

 The code block was removed because 1.18 was our minimum compatibility
 version.
 But it seems we have to restore the 1.16 behaviour. Note that this is
 not an
 upgraded environment where we need to retain compatibility with older
 deployments. It is a fresh 1.24 install which should be able to rely on
 1.18 and
 later behaviour only.

 Ian, can you be more specific on the chunk of code that was removed or
 branch I can look at for this? I'll happily file a bug and update
 quickstart, we just need to know what's changed there. Having a branch in
 hand or a bug will assist us in getting that updated as fast as possible.

 In searching through the quickstart code there's no hard requirement or
 notes on 1.16.

 Thanks

 --

 Rick Harding

 Juju UI Engineering
 https://launchpad.net/~rharding
 @mitechie

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev

 


Re: Do not land code on blocked branches

2015-05-03 Thread Ian Booth
 
 Curtis has filed three new bugs for these so far, and there appears to
 be one or two more to come:
 
 https://bugs.launchpad.net/juju-core/+bug/1450912

The issue here is that quickstart is relying on Juju 1.16 behaviour. There was a
block of code with a comment:

// Remove this whole if block when 1.16 compatibility is dropped.

The code block was removed because 1.18 was our minimum compatibility version.
But it seems we have to restore the 1.16 behaviour. Note that this is not an
upgraded environment where we need to retain compatibility with older
deployments. It is a fresh 1.24 install which should be able to rely on 1.18 and
later behaviour only.





Re: Do not land code on blocked branches

2015-05-02 Thread Ian Booth

 Curtis has filed three new bugs for these so far, and there appears to
 be one or two more to come:
 
 https://bugs.launchpad.net/juju-core/+bug/1450912
 https://bugs.launchpad.net/juju-core/+bug/1450917


The quickstart bugs have two root causes. A fix has already landed for one
issue. A fix for the other one is close to landing.



Re: previously valid amazon environment now invalid?

2015-04-30 Thread Ian Booth
Right now, the default tabular output is behind a feature flag because it's
experimental. We still need to decide how to allow users to have that output by
default without the feature flag, but also without breaking 1.18 script
compatibility. The best option IMO for this case is an env variable on the
user's client machine since the change is a client only one and I don't want to
pollute the CLI with --v2 type cruft and introduce yet another thing to support
in the future.

On 30/04/15 20:46, roger peppe wrote:
 At the Nuremberg sprint, I saw a demo of juju status that produced
 tabular format
 by default. I'm guessing that this issue means that can never actually be
 enabled.
 
 Although... it could be done backwardly compatibly (with a required 
 environment
 variable or configuration file setting) and perhaps that shows the way forward
 here. We could allow a user to change a setting that enables backwardly
 incompatible features (such as removing environment fallback or
 producing tabular format status). That doesn't help with the code cruft issue
 though.
 
 
 On 30 April 2015 at 11:29, Michael Hudson-Doyle
 michael.hud...@canonical.com wrote:
 I don't want to bore on and on about this, but one thing.

 On 30 April 2015 at 22:06, Nate Finch nate.fi...@canonical.com wrote:
 If someone needs 1.18 CLI compatibility, they can use 1.18.  It's that
 simple.

 It's not that simple to do that though, as long as new versions of
 juju go into trusty-updates.  You'd have to pin the version or do
 apt/preferences junk to prefer trusty over trusty-updates for
 juju-core or something.  I'm not even sure, and I'm much more familiar
 with this sort of thing than it makes sense to assume our users are.

 Cheers,
 mwh

 



Re: previously valid amazon environment now invalid?

2015-04-30 Thread Ian Booth


On 30/04/15 21:04, roger peppe wrote:
 On 30 April 2015 at 11:55, Ian Booth ian.bo...@canonical.com wrote:
 Right now, the default tabular output is behind a feature flag because it's
 experimental. We still need to decide how to allow users to have that output 
 by
 default without the feature flag, but also without breaking 1.18 script
 compatibility. The best option IMO for this case is an env variable on the
 user's client machine since the change is a client only one and I don't want 
 to
 pollute the CLI with --v2 type cruft and introduce yet another thing to 
 support
 in the future.
 
 The danger here is that we end up with 100 environment variables, each
 tweaking some aspect of the Juju client's behaviour, and that
 debugging becomes hard because every user has some
 uniquely different combination of settings.
 
 Perhaps a single environment variable, say JUJU_COMPAT,
 defining the oldest version required for backward compatibility,
 might work here? The default value would be whatever
 is the oldest version that we currently support.
 

Agreed. That's what I was alluding to with the --v2 arg example.
When Juju 2.0 ships we can turn on the new behaviour by default. Until then,
people should have an easy way to choose the new behaviour if they want to use
it.
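
A minimal sketch of how a client could honour such a compatibility pin. The JUJU_COMPAT variable is the proposal under discussion, not a shipped feature, and the helper names here are invented for illustration:

```go
package main

import (
	"os"
	"strconv"
	"strings"
)

// parseVersion turns a version string like "1.18" into comparable
// major/minor integers. Malformed components default to zero.
func parseVersion(s string) (major, minor int) {
	parts := strings.SplitN(s, ".", 2)
	major, _ = strconv.Atoi(parts[0])
	if len(parts) > 1 {
		minor, _ = strconv.Atoi(parts[1])
	}
	return major, minor
}

// pinnedBefore reports whether the user has pinned compatibility to a
// version older than the one that introduced a given behaviour.
func pinnedBefore(pinned, introduced string) bool {
	pMaj, pMin := parseVersion(pinned)
	iMaj, iMin := parseVersion(introduced)
	return pMaj < iMaj || (pMaj == iMaj && pMin < iMin)
}

// useTabularByDefault decides whether the new tabular status output should
// be on by default, given the hypothetical JUJU_COMPAT setting.
func useTabularByDefault() bool {
	pinned := os.Getenv("JUJU_COMPAT")
	if pinned == "" {
		return true // no pin: current behaviour applies
	}
	// Assume tabular-by-default arrives with 2.0; pinning older keeps YAML.
	return !pinnedBefore(pinned, "2.0")
}
```

With this shape, a user wanting 1.18 script compatibility would export JUJU_COMPAT=1.18 once rather than accumulating per-feature flags.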



Re: previously valid amazon environment now invalid?

2015-04-28 Thread Ian Booth


On 29/04/15 04:41, Nate Finch wrote:
 No one seems to be answering my actual question. That error message seems
 new. Is it?  Either way, the error message is incorrect - control bucket is
 not required - and whatever is emitting that message needs to be fixed.
 On Apr 28, 2015 2:32 PM, Aaron Bentley aaron.bent...@canonical.com
 wrote:


control bucket used to be used to store tools and charms, and also write out a
little yaml file containing state server addresses. We now store tools and
charms in state, but still require the yaml file. So control bucket is still
needed for Juju to operate correctly.



Re: Juju storage - early access

2015-04-02 Thread Ian Booth
 
 We have implemented support for creating volumes in the ec2 provider, via
 the ebs storage provider. By default, the ebs provider will create cheap
 and nasty magnetic volumes. There is also an ebs-ssd storage pool
 provided OOTB that will create SSD (gp2) volumes. Finally, you can create
 your own pools if you like; the parameters for ebs are:
   - volume-type: may be magnetic, ssd, or provisioned-iops
   - iops: number of provisioned IOPS (requires volume-type=provisioned-iops)
 

We haven't tested yet, but there should also be support for encrypted ebs
volumes eg

juju storage pool create encrypted-ebs ebs encrypted=true

If you do try it, let us know of any issues encountered.







Re: Bleeding edge testers - do you want to try some new Juju Health/Status functionality?

2015-04-02 Thread Ian Booth
Oh, one more thing.

Hook tools like status-set are usually run inside a hook. If run from the
outside, a little extra work is required. This would be the case where a charm
wants to dynamically update its workload status without waiting to be polled by
Juju.

From the spec:

Neither polling nor Juju hook execution are the complete answer for status
updates. A piece of software might have reason to think that its service status
has changed at any time. Ideally, such changes would be communicated to other
related software immediately rather than waiting for a hook execution or poll.
Consider the case where a database charm has successfully been installed and is
now running. A periodic database backup may cause a unit to become unable to
accept requests for the duration of the backup. In such cases, the unit needs a
mechanism to report status changes back to the agent, without waiting for a hook
to fire.

juju-run allows external processes to set status:

  juju-run unitname ‘status-set maintenance “backing up database”’

In the example above, the process that triggers the backup might set the status
of the unit before taking the database offline.

For juju-run to know what unit status to set, a specific unit name needs to be
provided. The unit name is available within a hook as $JUJU_UNIT_NAME. So the
charm will need to remember its unit name from the install hook, and then external
software can be configured to call juju-run as needed to update the unit status
on an ad hoc basis.
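
The flow above can be sketched from the external process's side. This Go snippet only builds the juju-run invocation; the unit name and backup scenario are illustrative, and this is not code from Juju itself:

```go
package main

import (
	"fmt"
	"os/exec"
)

// statusSetCmd builds the juju-run command that sets a unit's workload
// status from outside a hook. The caller supplies the unit name, which a
// charm would typically have recorded from $JUJU_UNIT_NAME at install time.
func statusSetCmd(unit, status, message string) *exec.Cmd {
	script := fmt.Sprintf("status-set %s %q", status, message)
	return exec.Command("juju-run", unit, script)
}

func main() {
	// Before taking the database offline for backup:
	cmd := statusSetCmd("mysql/0", "maintenance", "backing up database")
	fmt.Println(cmd.Args) // the argv that juju-run would receive
}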

On 02/04/15 19:18, Ian Booth wrote:
 Hi rocket scientists,
 
 If you don't mind living on the bleeding edge, and are comfortable pulling 
 Juju
 source from Github and compiling, we'd love to get some feedback on a very 
 early
 version of the new Juju Unit and Service Status work.
 
 What's landed in master are the tools needed to start testing and playing with
 the new functionality slated for inclusion in Juju 1.24. More will be landing
 over the coming weeks (even as early as next week); what's there is hopefully
 enough to get started.
 
 We're still looking at the best way to present and display some aspects of the
 data so feedback welcome. And bugs also if you find issues.
 
 Included
 
 
 Hook tools
 --
 
 juju-set, used to inform Juju of a charm's workload status
 eg
   juju-set maintenance | waiting | blocked | active [message]
 
 juju-get, used to return the current workload status
 
 juju status
 ---
 
 If you set the feature flag "new-status", the tabular format is the default
 and the legacy status information is not shown.
 
 Status data (yaml and json formats) now includes separate status information 
 for
 the agent and the workload. For each, the last updated time is included to
 show when the status was last set.
 
 status behaviour
 
 
 The charm is expected to set its own workload status, however Juju will do the
 following:
 
 - when the unit's machine is being allocated, the workload status will be
 "unknown" with a message "Waiting for agent initialization to finish"
 
 - when the install hook starts, the workload status will change to
 "maintenance" with a message "installing charm software"
 
 - when the start hook finishes, if the charm has not set its own status by
 then, Juju will set the workload status to "unknown"
 
 - the agent status is more informative eg when a hook runs, or an action, the
 agent status is set to "executing" with a message like "running start hook".
 
 - if the agent has been idle for 2 seconds, without new events queued etc, its
 status is set to "idle"
 
 So, with the above, you can deploy a unit and watch juju status to see a 
 nicer
 indication of what's happening as a charm is installed, watch the hooks run 
 etc.
 
 
 Excluded
 
 
 - status history
 - service status
 - status-set hook
 - status log
 - watcher used by GUI still shows legacy status
 
 
 Known Issues
 
 
 - status-get hook tool doesn't show error state for workload
 
 
 Example Output
 --
 (tabular gets messed up in email, so just including yaml)
 
 environment: local
 machines:
   0:
     agent-state: started
     agent-version: 1.24-alpha1.1
     dns-name: localhost
     instance-id: localhost
     series: utopic
     state-server-member-status: has-vote
   1:
     agent-state: started
     agent-version: 1.24-alpha1.1
     dns-name: 10.0.1.194
     instance-id: ian-local-machine-1
     series: trusty
     hardware: arch=amd64
 services:
   mysql:
     charm: cs:trusty/mysql-24
     exposed: false
     relations:
       cluster:
       - mysql
     units:
       mysql/0:
         workload-status:
           current: error
           message: 'hook failed: start'
           since: 01 Apr 15 22:51 AEST
         agent-status:
           current: idle
           since: 01 Apr 15 22:55 AEST
           version: 1.24-alpha1.1
         machine: 1
         public-address: 10.0.1.194
 
 
 
 
 


Re: Is simplestreams spam worth having in the Log

2015-04-01 Thread Ian Booth
TL;DR:

A lot of the spam is necessary to diagnose when simplestreams lookup fails, or
you get the wrong tools. In such cases, it's extremely useful to see where the
search path has looked. This was especially the case in the early days when
published tools and associated metadata sometimes were wrong, or signed json
metadata wasn't always there etc. That was also a time before we had the various
validate utilities which could be used to show from where tools would be 
selected.

Now that simplestreams issues are the exception rather than the norm, and we
have configurable logging (without which we could not get suitable debug info),
it is a fine time to reduce the logging level to trace.

On 01/04/15 20:47, John Meinel wrote:
 I've been noticing lately that everytime a test fails it ends up having a
 *lot* of lines about failing to find simplestreams headers. (this last test
 failure had about 200 long lines of that, and only 6 lines of actual
 failure message that was useful).
 
 Now I think there are a few things to look at here:
 
 1) The lines about looking for any double up and occur 9 times. Why are
 we repeating the search for tools 9 times in TestUpgradeCharmDir? maybe
 its genuine, but it sure feels like we're doing work over and over again
 that could be done once.
 
 2) We still default to reporting every failed index.json lookup, and *not*
 reporting the one that succeeded. Now these are at DEBUG level, but I have
 the feeling their utility is low enough that we should actually switch them
 to TRACE and *start* logging the one we successfully found at DEBUG level.
 
 Thoughts?
 
 John
 =:-
 
 
 



Re: Please, no more types called State

2015-03-12 Thread Ian Booth


On 12/03/15 16:53, Tim Penhey wrote:
 On 12/03/15 18:13, Ian Booth wrote:
 I see the point. But it could be considered analogous to having lots of 
 methods
 called New() etc. So long as the types are relevant for the package in which
 they're declared then isn't that ok? If we have lots of packages where state
 needs to be persisted, how is that different to having lots of packages 
 where a
 struct needs to be created, hence there will be several different New() 
 methods.

 Many of the current usages are client facades in the various API packages, 
 which
 is indeed unfortunate and I wish were different. But let's not universally
 reject State types without considering the intended semantics.
 
 *cough* *bullshit* *cough*
 
 State is a terrible name for a structure.
 
 I've also heard you say as much before too.

I've complained about the examples I gave in my response (State types in the API
facades) plus the big ball of mud which is the state package itself. But bespoke
usages of State types in the correct context need to be considered individually
and not universally rejected because we misuse State elsewhere.



Re: Please, no more types called State

2015-03-11 Thread Ian Booth
I see the point. But it could be considered analogous to having lots of methods
called New() etc. So long as the types are relevant for the package in which
they're declared then isn't that ok? If we have lots of packages where state
needs to be persisted, how is that different to having lots of packages where a
struct needs to be created, hence there will be several different New() methods.

Many of the current usages are client facades in the various API packages, which
is indeed unfortunate and I wish were different. But let's not universally
reject State types without considering the intended semantics.



On 12/03/15 15:01, David Cheney wrote:
 lucky(~/src/github.com/juju/juju) % pt -i type\ State\ | wc -l
 
 23
 
 Thank you.
 
 Dave
 



Re: juju status --format=tabular

2015-02-18 Thread Ian Booth
+1 from me too - we should be able to do this next week before the freeze on the
27th

On 19/02/15 15:09, Tim Penhey wrote:
 On 19/02/15 17:07, John Meinel wrote:
 I was wondering if we could change the sorting to be numeric aware
 instead of just alphabetical.

 Specifically if you have more than 10 units you get:
 [Units]
 ID STATE   VERSION MACHINE PORTS PUBLIC-ADDRESS
 
 ubuntu/0   started 1.18.4  1 ec2-54-220-86-118.eu-we...
 ubuntu/1   started 1.18.4  2 ec2-54-155-143-100.eu-w...
 ubuntu/10  started 1.18.4  1 ec2-54-220-86-118.eu-we...
 ubuntu/100 started 1.18.4  1 ec2-54-220-86-118.eu-we...
 ubuntu/101 started 1.18.4  2 ec2-54-155-143-100.eu-w...
 ubuntu/102 started 1.18.4  3 ec2-54-74-157-229.eu-we...
 ubuntu/103 started 1.18.4  4 ec2-54-220-194-242.eu-w...
 ubuntu/104 started 1.18.4  5 ec2-54-246-53-159.eu-we...
 ubuntu/11  started 1.18.4  2 ec2-54-155-143-100.eu-w...
 ...

 Which is not ideal IMO. Since we *know* that units are in the form
 SERVICE/NUMBER can we do numerical sorting on the second part of the field?
 
 +1 FWIW
 
 Tim
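
Numeric-aware ordering of unit names is straightforward to sketch, assuming names always take the SERVICE/NUMBER form mentioned above (this is an illustrative implementation, not the code that landed in Juju):

```go
package main

import (
	"sort"
	"strconv"
	"strings"
)

// splitUnit breaks "ubuntu/10" into its service name and unit number.
func splitUnit(unit string) (service string, number int) {
	parts := strings.SplitN(unit, "/", 2)
	service = parts[0]
	if len(parts) == 2 {
		number, _ = strconv.Atoi(parts[1])
	}
	return service, number
}

// sortUnits orders units alphabetically by service name, then numerically
// by unit number, so ubuntu/2 sorts before ubuntu/10 and ubuntu/100.
func sortUnits(units []string) {
	sort.Slice(units, func(i, j int) bool {
		si, ni := splitUnit(units[i])
		sj, nj := splitUnit(units[j])
		if si != sj {
			return si < sj
		}
		return ni < nj
	})
}
```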
 



Re: New Package for Feature Tests

2015-01-08 Thread Ian Booth
Thank you Katherine, it's great to see this important work come to fruition.

One area of the code in particular which will benefit from this is the CLI,
implemented in cmd/juju. Historically, cmd/juju unit tests were written on top
of a full stack (as an aside, any test suite which embeds JujuConnSuite is not a
unit test). Recently implemented commands have tests which stub out the api
client and are written as true unit tests. However, these are missing end-end
integration tests.

Obviously we should look to split, as time allows, existing command tests into
unit tests and feature tests. But could authors of recently added command tests
which are missing feature tests please go ahead and add those.

To reinforce what Katherine says, the feature tests should only really cover the
happy path, to ensure that everything is wired together and working properly.

Moving forward, reviewers should look to push back on branches which 1) do not
use proper unit tests, 2) do not have feature tests.

On 09/01/15 12:03, Katherine Cox-Buday wrote:
 Hey everyone,
 
 I just landed a PR which introduces a new package for Juju which is
 intended to host long-running end-to-end feature tests. You can have a look
 here: github.com/juju/juju/blob/master/featuretests/doc.go
 
 A little context as I understand it:
 
1. This is the direct result of the team's discussion about segregating
long-running tests from short-running tests.
2. It is the intention of the team that tests now be written thus:
   - Light-weight unit tests alongside Juju core packages.
   - End-to-end feature tests in this new package.
 
 Hopefully this allows us to be more agile as we modify code, but still
 maintain the safety-net of end-to-end tests. The main difference for me is
 that the bulk of our tests -- where we test edge-cases, all permutations of
 calls, etc. -- will now be in lightweight unit tests. The heavier-weight,
 end-to-end tests will now be used in a 1:1 ratio with user-facing features,
 and the number of these that we have to maintain should drop off a bit.
 
 This has been a great team effort to steer a very large change; kudos to
 you all!
 
 -
 Katherine
 
 
 



Re: Dear reviewers,

2014-12-02 Thread Ian Booth
On 03/12/14 13:34, Tim Penhey wrote:
 Hello there reviewers,
 
 I have a number of concerns around reviews that I need to say.
 
 Firstly, as an on call reviewer, you only need to look at the reviews
 that have not yet been looked at.  If you ask for changes on a branch as
 a reviewer, you have a responsibility to respond to the developer when
 they make said changes or they ask for clarification.


+1
I know sometimes I forget to go back the next day and look at subsequent
changes, hence


 As a developer it is your responsibility to get your branch landed.
 Don't just throw it over the review fence and think you are done.
 

... sometimes the developer needs to poke the reviewer and remind them to finish
the review :-)

 Please be pragmatic when using the errors library. It doesn't make sense
 to have it on absolutely every error return. It can be helpful, but it
 isn't a requirement to be everywhere. As a developer, consider
 annotating errors when it makes sense and tracing where appropriate. As
 a reviewer, please don't expect it everywhere.


+1
Let's be pragmatic and consider the situation. eg I would not expect trace to be
required when returning an error at a layer boundary, but deep down inside some
complex function, yes. With annotate, it may not be needed if an immediate
caller annotates the error for example. So please use good judgement rather than
a blanket insistence that trace and annotate be used everywhere.


 With respect to string literals: if it is used once, inline is fine;
 twice is borderline; more than twice and there should be a (hopefully
 documented) defined constant.  The constant should have a meaningful
 name, not obscure.
 

+100 for so many reasons
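
As a tiny illustration of the literal-vs-constant guideline (the names here are invented for the example, not from the Juju codebase):

```go
package main

// retryAnnotation is used in more than two places, so it gets a named,
// documented constant; a single-use literal would simply stay inline.
const retryAnnotation = "retrying after transient failure"

// logMessage and auditReason both reuse the constant, so a typo in one
// call site is impossible and the intent is documented in one place.
func logMessage() string  { return retryAnnotation }
func auditReason() string { return retryAnnotation }
```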



Re: Proposal: feature flag implementation for Juju

2014-11-25 Thread Ian Booth
I like feature flags so am +1 to the overall proposal. I also agree with the
approach to keep them immutable, given the stated goals and complexity
associated with making them not so.

I think the env variable implementation is ok too - this keeps everything very
loosely coupled and avoids polluting a juju environment with an extra config
attribute.

On 26/11/14 08:47, Tim Penhey wrote:
 Hi all,
 
 There are often times when we want to hook up features for testing that
 we don't want exposed to the general user community.
 
 In the past we have hooked things up in master, then when the release
 branch is made, we have had to go and change things there.  This is a
 terrible way to do it.
 
 Here is my proposal:
 
 http://reviews.vapour.ws/r/531/diff/#
 
 We have an environment variable called JUJU_FEATURE_FLAGS. It contains
 comma delimited strings that are used as flags.
 
 The value is read when the program initializes and is not mutable.
 
 Simple checks can be used in the code:
 
 if featureflag.Enabled("foo") {
   // do foo like things
 }
 
 Thoughts and suggestions appreciated, but I don't want to have the
 bike-shedding go on too long.
 
 Tim
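
 A minimal sketch of the proposed mechanism (function and variable names are
 illustrative, not necessarily those in the review):

```go
package main

import (
	"os"
	"strings"
)

// parseFlags splits the comma-delimited JUJU_FEATURE_FLAGS value into a
// lookup set. Whitespace is trimmed and empty entries are ignored.
func parseFlags(raw string) map[string]bool {
	set := make(map[string]bool)
	for _, f := range strings.Split(raw, ",") {
		if f = strings.TrimSpace(f); f != "" {
			set[f] = true
		}
	}
	return set
}

// flags is read once when the program initializes; per the proposal it is
// immutable afterwards.
var flags = parseFlags(os.Getenv("JUJU_FEATURE_FLAGS"))

// Enabled reports whether the named feature flag was set.
func Enabled(name string) bool {
	return flags[name]
}
```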
 



Re: Feature Request: show running relations in 'juju status'

2014-11-17 Thread Ian Booth


On 17/11/14 15:47, Stuart Bishop wrote:
 On 17 November 2014 07:13, Ian Booth ian.bo...@canonical.com wrote:
 
 The new Juju Status work planned for this cycle will hopefully address the 
 main
 concern about knowing when a deployed charm is fully ready to do the work for
 which it was installed. ie the current situation whereby a unit is marked as
 Started but is not ready. Charms are able to mark themselves as Busy and also
 set a status message to indicate they are churning and not ready to run. 
 Charms
 can also indicate that they are Blocked and require manual intervention (eg a
 service needs a database and no relation has been established yet to provide 
 the
 database), or Waiting (the database on which the service relies is busy but 
 will
 resolve automatically when the database is available again).
 
 As long as the 'ready' state is managed by juju and not the unit, I'll
 stand happily corrected :-) The focus I'd seen had been on the unit
 declaring its own status, and there is no way for a unit to know that
 is ready because it has no way of knowing that, for example, there are
 another 10 peer units being provisioned that will need to be related.
 

You are correct that the initial scope of work is more about the unit, and less
about the deployment as a whole. There are plans though to address the issue.
We're throwing around the concept of a "goal state", which is conceptually akin
to looking forward in time to be able to inform units what relations they will
expect to participate in and what units will be deployed. They'd likely be
something like a relation-goals hook tool (to complement relation-list and
relation-ids), as well as hook(s) for when the goal state changes. There's
ongoing work in the uniter by William to get the architecture right so this work
can be considered. There's still a lot of value in the current Juju Status work,
but as you point out, it's not the full story.

 
 So although there are not currently plans to show the number of running 
 hooks in
 the first phase of this work, mechanisms are being provided to allow charm
 authors to better communicate the state of their charms to give much clearer 
 and
 more accurate feedback as to 1) when a charm is fully ready to do work, 2) 
 if a
 charm is not ready to do work, why not.
 
 A charm declaring itself ready is part of the picture. What is more
 important is when the system is ready. You don't want to start pumping
 requests through your 'ready' webserver, only to have it torn away as
 a new block device is mounted on your database when its storage-joined
 hook is invoked and returned to 'ready' state again once the
 storage-changed hook has completed successfully.
 

Also being thrown around is the concept of a new agent-state called "Idle",
which would be used when there are no pending hooks to run. There are plans as
well for the next phase of the Juju status work to allow collaborating services
to notify when they are busy, and mark relationships as down. So if the database
had its storage-attached hook invoked, it would mark itself as Busy, mark its
relation to the webserver as Down, thus allowing the webserver to put itself
into Waiting. Or, if we are talking about the initial install phase, the
database would not initially mark itself as Running until its declared storage
requirements were met, so the webserver would go from Installing to Waiting and
then to Running once the database became Running.










Re: Feature Request: show running relations in 'juju status'

2014-11-16 Thread Ian Booth


On 15/11/14 15:44, Stuart Bishop wrote:
 On 14 November 2014 22:31, Mario Splivalo mario.spliv...@canonical.com 
 wrote:
 Hello, good people!

 How hard would it be to implement 'showing running relations in juju
 status'?

 Currently there is no easy (if any) way of knowing the state of the
 deployment. When one does 'juju add-relation' the relation hooks are
 run, but there is no feedback on whether the hooks are still running or
 everything is done. Only in case there is a hook error you would see
 that one in 'juju status'. One can have logs tailed and assume that when
 there is no action for some amount of time - everything deployed as it
 should.

 Having juju status display number of running hooks would greatly help in
 troubleshooting deployments.
 
 This has been my most wanted feature for well over a year, and at the
 moment is covered by
 https://bugs.launchpad.net/juju-core/+bug/1254766. Unfortunately, I
 don't think the work has been scheduled and I don't think the latest
 round of updates to 'juju status' cover it.
 

The new Juju Status work planned for this cycle will hopefully address the main
concern about knowing when a deployed charm is fully ready to do the work for
which it was installed. ie the current situation whereby a unit is marked as
Started but is not ready. Charms are able to mark themselves as Busy and also
set a status message to indicate they are churning and not ready to run. Charms
can also indicate that they are Blocked and require manual intervention (eg a
service needs a database and no relation has been established yet to provide the
database), or Waiting (the database on which the service relies is busy but will
resolve automatically when the database is available again).

A status-set hook tool will be provided and can be run inside any hook. The
exact syntax is still being finalised, but will be along the lines of:

status-set installing | running | busy | blocked | waiting “message”
[--waiting-on some blocker]
[--blocked-on some blocker]

This mechanism allows charms to give much more meaningful and precise feedback
as to their state and if not Running, why not. There will also be a facility to
allow charms to set their status outside of a hook, so that arbitrary service
interruptions or issues can be communicated at any time (eg a database goes down
for scheduled maintenance at a set time, can mark itself as Busy while that
happens).

So although there are not currently plans to show the number of running hooks in
the first phase of this work, mechanisms are being provided to allow charm
authors to better communicate the state of their charms to give much clearer and
more accurate feedback as to 1) when a charm is fully ready to do work, 2) if a
charm is not ready to do work, why not.






Re: Can we removed all devel agents from released streams.

2014-11-13 Thread Ian Booth


On 14/11/14 06:28, Curtis Hovey-Canonical wrote:
 We have another cases where an env using --upload-tools tried to
 upgrade from 1.18.4 to 1.20.x and got 1.19.x. I want to remove all the
 devel agents from the released streams.
 
 We have already created separate streams for devel and proposed agents
 to ensure environments cannot upgrade to them without explicitly set
 the environment to use them.
 
 I want to ensure we don't have old devel agents in our released
 streams. This will prevent anyone from getting these version from our
 official locations. This may also prevent environments that are idling
 on obsolete version from deploying more units.
 
 Are there other issues that will happen if I remove the devel agents?
 Is this a bad and dangerous idea?
 

I think this is a good idea and can only see benefits.
So +1 from me.

Having said that, if they used upload-tools then the public metadata is not used
anyway. Juju will generate metadata for the jujud it finds in the user's path
(or compiles from source if no jujud is found). The metadata is written to their
environ storage (for Juju < 1.21). Do we have any more information about their
setup? It would be interesting to understand what happened.



Re: Can we removed all devel agents from released streams.

2014-11-13 Thread Ian Booth


On 14/11/14 13:38, John Meinel wrote:
 I don't think we care about older development releases, but if we care at
 all, they won't be able to look anywhere but the released stream. (Only
 1.21 knows how to handle agent-stream/tools-stream, right?)

Correct. But I don't think we should support people still running 1.19.x or
older devel releases.

 I think we have a deeper bug if upgrade juju --version upgrades to anything
 that isn't exactly what you asked for.
 

Yep agreed. Although using upload-tools complicates things a little so we need
to understand their set up to be able to figure out what happened.




Re: reviewboard-github integration

2014-10-20 Thread Ian Booth
Hi Eric

I just created a pull request for a 1.20 branch and got the same symptoms as
seen previously, i.e. an incomplete Reviewboard review without a diff and with a
reviewer.

On 21/10/14 07:38, Eric Snow wrote:
 This should be resolved now.  I've verified it works for me.  If it
 still impacts anyone, just let me know.
 
 -eric
 
 On Mon, Oct 20, 2014 at 7:34 PM, Eric Snow eric.s...@canonical.com wrote:
 Yeah, this is the same issue that Ian brought up.  I'm looking into
 it.  Sorry for the pain.

 -eric

 On Mon, Oct 20, 2014 at 5:31 PM, Dimiter Naydenov
 dimiter.nayde...@canonical.com wrote:
 Hey Eric,
 
 Today I tried proposing a PR and the RB issue (#202) was created, but
 it didn't have Reviewers field set (as described below), it wasn't
 published (due to the former), but MOST importantly didn't have a diff
 uploaded. After fiddling around with rbt I managed to do:
 $ rbt diff > ~/patch
 (while on the proposed feature branch)
 
 And then went to the RB issue page and manually uploaded the generated
 diff and published it.
 
 So most definitely the hook generating RB issues has to upload the
 diff as well :)
 
 It's coming together, keep up the good work!
 
 Cheers,
 Dimiter
 
 On 20.10.2014 16:53, Eric Snow wrote:
 On Mon, Oct 20, 2014 at 6:06 AM, Ian Booth
 ian.bo...@canonical.com wrote:
 Hey Eric

 This is awesome, thank you.

 I did run into a gotcha - I created a PR and then looked at the
 Incoming review queue and there was nothing new there. I then
 clicked on All in the Outgoing review queue and saw that the
 review was unpublished. I then went to publish it and it
 complained at least one reviewer was needed. So I had to fill in
 juju-team and all was good.

 1. Can we make it so that the review is published automatically?
 2. Can we pre-fill juju-team as the reviewer?

 Good catch.  The two are actually related.  The review is
 published, but that fails because no reviewer got set.  I'll get
 that fixed.

 -eric

 
 



Re: reviewboard-github integration

2014-10-19 Thread Ian Booth
Hey Eric

This is awesome, thank you.

I did run into a gotcha - I created a PR and then looked at the Incoming review
queue and there was nothing new there. I then clicked on All in the Outgoing
review queue and saw that the review was unpublished. I then went to publish it
and it complained at least one reviewer was needed. So I had to fill in
juju-team and all was good.

1. Can we make it so that the review is published automatically?
2. Can we pre-fill juju-team as the reviewer?


On 18/10/14 15:38, Eric Snow wrote:
 With the switch to Reviewboard we introduced extra steps to our
 workflow (mostly involving rbt).  This in turn made things more
 difficult for new/existing contributors.  I've been able to take some
 time in the last couple weeks to improve the situation by adding some
 integration between github and reviewboard.
 
 As of tonight that integration has reached an initial milestone.  The
 barriers to contribution introduced by Reviewboard are essentially
 gone.  Furthermore, the automation means the review requests should
 stay in sync with the pull requests.  So I'm happy to report that,
 unless you are chaining branches (which github PRs don't support
 anyway), you shouldn't need to use rbt anymore.
 
 Currently:
 
 * a new PR automatically triggers the creation of a new review request
 * the review request has a link back to the pull request
 * updates to the PR (i.e. pushes to your branch) automatically trigger
 an update to the review request
 * closing (discard/merge) a PR automatically triggers closing the
 corresponding review request
 * re-opening a PR automatically triggers re-opening the corresponding
 review request
 * a reviewboard user gets created if there wasn't one already
 
 Nearly working:
 
 * after the review request is created, a link to it is added to a pull
 request comment
 
 Future work:
 
 * support patch queues/chained branches/etc. (via trigger in PR summary)
 * add reviewboard support to the merge bot (check for ship-it before
 doing anything)
 
 Will not happen:
 
 * automatically merge PR when given ship-it
 * PR comments (including review comments) will not be pushed to the
 corresponding review request
 * likewise reviewboard comments won't be pushed up to the corresponding PR
 
 I can't promise that the future work will happen in the short term,
 but I'll post any updates as they come.  Enjoy!
 
 -eric
 



Re: State should not depend on the API server

2014-09-16 Thread Ian Booth


On 16/09/14 19:19, roger peppe wrote:
 On 16 September 2014 02:12, Tim Penhey tim.pen...@canonical.com wrote:
 On 12/09/14 01:35, Nate Finch wrote:
 Separation of logic is absolutely a good thing.  Separation of data is
 not nearly so useful.

 What I see as the real benefit of this work is based behind the
 interface segregation principle.

 Effectively this boils down to don't depend on things you don't need.
 
 I agree with this sentiment. Dependency hygiene is very important
 (though it seems to me that dependencies
 are perhaps more about implementation than interface).
 
 The state package should never depend on packages from the API.

 This work is not just busy work, but clear separation, and generally
 what is considered good software development principles.
 
 I have difficulty with this though. The state package and the API package
 are clearly linked - their concerns are not entirely separate. In my view,
 the state package exists only to serve the purposes of the API, which is
 *the* externally visible part of Juju.


There are a few issues here. The state package as it exists today is a mix of Juju
domain business logic and a persistence layer, unfortunately intertwined. What
should be the externally visible part of Juju is a service-oriented abstraction
with coarse-grained business methods; we have this with the facades. What comes
into those facades over the wire as parameters should be transformed into domain
objects for use by the service business logic called by the facade endpoints.
On-the-wire data crosses a system boundary to enter/exit the business services
layer, and so the data needs to be transformed to avoid unnecessary coupling. As
an example, a machine reference comes in over the wire as a machine tag, which
is transformed to a machine id for use by the services layer, which in turn is
transformed to a global key when passed to the persistence layer. Thus the api
params structs contain attributes that are syntactically distinct from what's
required by the business services. It's a terrible abstraction leak to do
otherwise. Dave has started the process of correcting the implementation issue
we have, and in my view it is necessary and desirable, based on sound design
principles. IMO :-)

 As such, many of the concerns and data structures dealt with by the state
 package will be the same as those dealt with by the API implementation.


This I disagree with - see above. The data structures should not be the same.
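The machine example above involves one reference taking three layer-specific forms: wire tag, domain id, persistence key. A minimal sketch of those conversions, assuming hypothetical helper names and the "m#" key prefix (Juju's real parsing lives in its names package and state internals; this is an illustration, not the actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMachineTag converts a wire-format machine tag (e.g. "machine-0")
// into the domain-level machine id ("0"), rejecting anything else.
func parseMachineTag(tag string) (string, error) {
	const prefix = "machine-"
	if !strings.HasPrefix(tag, prefix) {
		return "", fmt.Errorf("%q is not a valid machine tag", tag)
	}
	return strings.TrimPrefix(tag, prefix), nil
}

// machineGlobalKey maps a domain-level machine id to the key used by
// the persistence layer.
func machineGlobalKey(id string) string {
	return "m#" + id
}

func main() {
	// A facade receives the wire form and translates it at each boundary.
	id, err := parseMachineTag("machine-42")
	if err != nil {
		panic(err)
	}
	fmt.Println(id, machineGlobalKey(id))
	// prints 42 m#42
}
```

Keeping each representation behind its own layer is what stops the api params structs from leaking into state, and vice versa.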



Juju landing tests - good news

2014-09-11 Thread Ian Booth
Hi folks

It's been a fantastic effort so far improving the quality of our tests; so much
so that this time yesterday I switched off the retry flag. This means that our
landing tests run at full speed, and fail first time if there's an error.

Since the change, I've seen a few failures due to glitches in AWS which hosts
the instance used to run the tests. I've seen a few failures which appear
legitimate due to env vars not being cleaned up (need to dig a bit more). And I
think I've seen one intermittent failure in everyone's favourite package jujud.

So, all in all, things are a *lot* better thanks to everyone's efforts. Landings
will now be much faster. We're still not quite there, but a lot closer than we 
were.



