Juju 2.4-rc3 has been released

2018-06-25 Thread Ian Booth
A new development release of Juju is here, 2.4-rc3.

This release candidate addresses an issue upgrading from earlier Juju versions
as described below.

## Fixes

An upgrade step has been added to initialise the Raft configuration. This would
normally be done at bootstrap time but needs to be done during upgrade for
controllers that were bootstrapped with an earlier version.
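
For those upgrading an existing controller, a minimal sketch of picking up the
candidate and upgrading (the controller name is assumed):

 sudo snap refresh juju --candidate
 juju upgrade-juju -m mycontroller:controller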

## How can I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package (see https://snapcraft.io/ for more info on snaps).

 sudo snap install juju --classic --candidate

Other packages are available for a variety of platforms. Please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be automatically upgraded. If you’re using
the ppa/homebrew, you should see an upgrade available.

## Feedback Appreciated!

We encourage everyone to let us know how you're using Juju. Send us a
message on Twitter using #jujucharms, join us at #juju on freenode, and
subscribe to the mailing list at juju@lists.ubuntu.com.

## More information

To learn more about Juju please visit https://jujucharms.com.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju deploys with a vsphere controller using hardware vm version 10

2018-02-06 Thread Ian Booth
Hi Daniel

The Juju vSphere provider currently only supports hardware version 10, but 14 is
now the most recent according to the VMware website. If we were simply to track
and support the most recent hardware version, would that work for you?

On 05/02/18 12:38, Daniel Bidwell wrote:
> Is there any way to make the vSphere controller deploy VMs with
> hardware VM version 13 instead of version 10?
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.3.0 is here!

2017-12-07 Thread Ian Booth


On 08/12/17 09:39, Micheal B wrote:
> Looks great here other than the LXD on VMware issue, which is what I need, or at
> least part of it is. Wanting to run containerized OpenStack in Kubernetes on
> VMware. Unless someone has a better idea, I could try.
>

Sorry about that issue. The LXD on VMware issue will be fixed ASAP next week,
and we'll be doing a 2.3.1 point release (by the end of the year, all going well)
with this and some other small fixes which missed the cut for 2.3.0.

In the meantime, you can conjure-up (or juju deploy) Kubernetes on other clouds
(or even localhost with LXD if you have a machine with lots of RAM). Or you
could use the openstack-lxd bundle if your main goal is OpenStack.
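
For example (the spell and bundle names here are assumptions based on what was
current at the time):

$ conjure-up canonical-kubernetes
$ juju deploy canonical-kubernetes   # or: juju deploy openstack-lxd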


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.3 beta2 is here!

2017-11-02 Thread Ian Booth

> * Parallelization of the Machine Provisioner
>>
>> Provisioning of machines is now faster!  Groups of machines will now be
>> provisioned in parallel reducing deployment time, especially on large
>> bundles.  Please give it a try and let us know what you think.
>>
>> Benchmarks for time to deploy 16 machines on different clouds:
>>
>>   AWS:        juju 2.2.5  4m36s    juju 2.3-beta2  3m17s
>>   LXD:        juju 2.2.5  3m57s    juju 2.3-beta2  2m57s
>>   Google:     juju 2.2.5  5m21s    juju 2.3-beta2  2m10s
>>   OpenStack:  juju 2.2.5 12m40s    juju 2.3-beta2  4m52s
>>
> Oh heck yes this is a great improvement! I don't see MAAS numbers here, but
> I imagine parallelization has been implemented there too? Bare metal can be
> so slow to boot sometimes ;)
>

Works for all clouds. The provisioning code is generic and has been extracted
from each provider and moved up a layer. It got complicated because of the need
to still ensure even spread of distribution groups across availability zones in
the parallel case. There just wasn't time to get any MAAS numbers prior to
cutting the beta, but empirically, there's improvement across the board.
Positive deployment stories to share would be welcome :-)

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Thanks James, we'll get to it. We'll work with the MAAS folks, as on the surface
it looks like Juju is passing things correctly via the MAAS APIs. The fact that
the deployment works minus the storage constraint is interesting. Initially I
theorised it could have been a TB vs TiB mismatch, but the disk size is large
enough to count that out. We'll update the bug from here on.


On 01/11/17 13:10, James Beedy wrote:
> I’ve created this bug for further tracking 
> https://bugs.launchpad.net/juju/+bug/1729127
> 
>> On Oct 31, 2017, at 7:59 PM, James Beedy  wrote:
>>
>> Yes, deploying without —storage results in a successful deploy. 
>>
>>> On Oct 31, 2017, at 7:52 PM, Ian Booth  wrote:
>>>
>>> And just to ask the obvious: deploying without the --storage constraint 
>>> results
>>> in a successful deploy, albeit to a machine with maybe the wrong disk?
>>>
>>>
>>>> On 01/11/17 10:51, James Beedy wrote:
>>>> Ian,
>>>>
>>>> So, I think I'm close here.
>>>>
>>>> The filesystem/device layout on my node(s): https://imgur.com/a/Nzn2H
>>>>
>>>> I have tagged the md0 device with the tag "raid0", then I have created the
>>>> storage pool as you have specified.
>>>>
>>>> `juju create-storage-pool ssd-disks maas tags=raid0`
>>>>
>>>> Then ran the following command to deploy my charm [0], attaching storage as
>>>> part of the command:
>>>>
>>>> `juju deploy cs:~jamesbeedy/elasticsearch-27 --bind "cluster=vlan20
>>>> public=mgmt-net" --storage data=ssd-disks,3T --constraints "tags=data"`
>>>>
>>>>
>>>> The result is here: http://paste.ubuntu.com/25862190/
>>>>
>>>>
>>>> Here machines 1 and 2 are deployed without the `--constraints`,
>>>> http://paste.ubuntu.com/25862219/
>>>>
>>>>
>>>> Am I missing something? Possibly like one more input to the `--storage` 
>>>> arg?
>>>>
>>>>
>>>> Thanks
>>>>
>>>> [0] https://jujucharms.com/u/jamesbeedy/elasticsearch/27
>>>>
>>>>> On Tue, Oct 31, 2017 at 3:14 PM, Ian Booth  
>>>>> wrote:
>>>>>
>>>>> Thanks for raising the issue - we'll get the docs updated!
>>>>>
>>>>>> On 01/11/17 07:44, James Beedy wrote:
>>>>>> I knew it would be something simple and sensible :)
>>>>>>
>>>>>> Thank you!
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth 
>>>>> wrote:
>>>>>>
>>>>>>> Off the top of my head, you want to do something like:
>>>>>>>
>>>>>>> $ juju create-storage-pool ssd-disks maas tags=ssd
>>>>>>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>>>>>>
>>>>>>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>>>>>>> tag. You
>>>>>>> can select whatever criteria you want and whatever tags you want to use.
>>>>>>>
>>>>>>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>>>>>>> which is
>>>>>>> at least 32GB in size.
>>>>>>>
>>>>>>>
>>>>>>>> On 01/11/17 07:04, James Beedy wrote:
>>>>>>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>>>>>>> can't quite wrap my head around what the syntax might be to make it
>>>>> work,
>>>>>>>> and what the extent of the capability of the Juju storage features are
>>>>>>> when
>>>>>>>> used with MAAS.
>>>>>>>>
>>>>>>>> Re-reading [0], and looking for anything else I can find on Juju
>>>>> storage
>>>>>>>> every day for a week now thinking it may click or I might find the
>>>>> right
>>>>>>>> doc,  but it hasn't, and I haven't.
>>>>>>>>
>>>>>>>> I filed a bug with juju/docs here [1] .
>>>>>>>>
>>>>>>>> Does anyone have an example of how to consume Juju storage using the
>>>>> MAAS
>>>>>>>> provider?
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>>
>>>>>>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>>>>>>> [1] https://github.com/juju/docs/issues/2251
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
And just to ask the obvious: deploying without the --storage constraint results
in a successful deploy, albeit to a machine with maybe the wrong disk?


On 01/11/17 10:51, James Beedy wrote:
> Ian,
> 
> So, I think I'm close here.
> 
> The filesystem/device layout on my node(s): https://imgur.com/a/Nzn2H
> 
> I have tagged the md0 device with the tag "raid0", then I have created the
> storage pool as you have specified.
> 
> `juju create-storage-pool ssd-disks maas tags=raid0`
> 
> Then ran the following command to deploy my charm [0], attaching storage as
> part of the command:
> 
> `juju deploy cs:~jamesbeedy/elasticsearch-27 --bind "cluster=vlan20
> public=mgmt-net" --storage data=ssd-disks,3T --constraints "tags=data"`
> 
> 
> The result is here: http://paste.ubuntu.com/25862190/
> 
> 
> Here machines 1 and 2 are deployed without the `--constraints`,
> http://paste.ubuntu.com/25862219/
> 
> 
> Am I missing something? Possibly like one more input to the `--storage` arg?
> 
> 
> Thanks
> 
> [0] https://jujucharms.com/u/jamesbeedy/elasticsearch/27
> 
> On Tue, Oct 31, 2017 at 3:14 PM, Ian Booth  wrote:
> 
>> Thanks for raising the issue - we'll get the docs updated!
>>
>> On 01/11/17 07:44, James Beedy wrote:
>>> I knew it would be something simple and sensible :)
>>>
>>> Thank you!
>>>
>>> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth 
>> wrote:
>>>
>>>> Off the top of my head, you want to do something like:
>>>>
>>>> $ juju create-storage-pool ssd-disks maas tags=ssd
>>>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>>>
>>>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>>>> tag. You
>>>> can select whatever criteria you want and whatever tags you want to use.
>>>>
>>>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>>>> which is
>>>> at least 32GB in size.
>>>>
>>>>
>>>> On 01/11/17 07:04, James Beedy wrote:
>>>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>>>> can't quite wrap my head around what the syntax might be to make it
>> work,
>>>>> and what the extent of the capability of the Juju storage features are
>>>> when
>>>>> used with MAAS.
>>>>>
>>>>> Re-reading [0], and looking for anything else I can find on Juju
>> storage
>>>>> every day for a week now thinking it may click or I might find the
>> right
>>>>> doc,  but it hasn't, and I haven't.
>>>>>
>>>>> I filed a bug with juju/docs here [1] .
>>>>>
>>>>> Does anyone have an example of how to consume Juju storage using the
>> MAAS
>>>>> provider?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>>>> [1] https://github.com/juju/docs/issues/2251
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Thanks for raising the issue - we'll get the docs updated!

On 01/11/17 07:44, James Beedy wrote:
> I knew it would be something simple and sensible :)
> 
> Thank you!
> 
> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth  wrote:
> 
>> Off the top of my head, you want to do something like:
>>
>> $ juju create-storage-pool ssd-disks maas tags=ssd
>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>
>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>> tag. You
>> can select whatever criteria you want and whatever tags you want to use.
>>
>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>> which is
>> at least 32GB in size.
>>
>>
>> On 01/11/17 07:04, James Beedy wrote:
>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>> can't quite wrap my head around what the syntax might be to make it work,
>>> and what the extent of the capability of the Juju storage features are
>> when
>>> used with MAAS.
>>>
>>> Re-reading [0], and looking for anything else I can find on Juju storage
>>> every day for a week now thinking it may click or I might find the right
>>> doc,  but it hasn't, and I haven't.
>>>
>>> I filed a bug with juju/docs here [1] .
>>>
>>> Does anyone have an example of how to consume Juju storage using the MAAS
>>> provider?
>>>
>>> Thanks!
>>>
>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>> [1] https://github.com/juju/docs/issues/2251
>>>
>>>
>>>
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Off the top of my head, you want to do something like:

$ juju create-storage-pool ssd-disks maas tags=ssd
$ juju deploy postgresql --storage pgdata=ssd-disks,32G

The above assumes you have tagged in MAAS any SSD disks with the "ssd" tag. You
can select whatever criteria you want and whatever tags you want to use.

The deploy command above selects a MAAS node with a disk tagged "ssd" which is
at least 32GB in size.


On 01/11/17 07:04, James Beedy wrote:
> Trying to check out Juju storage capabilities on MAAS I found [0], but
> can't quite wrap my head around what the syntax might be to make it work,
> and what the extent of the capability of the Juju storage features are when
> used with MAAS.
> 
> Re-reading [0], and looking for anything else I can find on Juju storage
> every day for a week now thinking it may click or I might find the right
> doc,  but it hasn't, and I haven't.
> 
> I filed a bug with juju/docs here [1] .
> 
> Does anyone have an example of how to consume Juju storage using the MAAS
> provider?
> 
> Thanks!
> 
> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
> [1] https://github.com/juju/docs/issues/2251
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 16:33, Ian Booth wrote:
> 
> 
> On 19/10/17 15:22, John Meinel wrote:
>> So at the moment, I don't think Juju supports what you're looking for,
>> which is cross model relations without public addresses. We've certainly
>> discussed supporting all private for cross model. The main issue is that we
>> often drive parts of the firewalls (security groups) but without
>> understanding all the routing, it is hard to be sure whether things will
>> actually work.
>>
> 
> The space to which an endpoint is bound affects the behaviour here. Having 
> said
> that, there may be a bug in Juju's cross model relations code.
> 

Actually, there may be an issue with current behaviour, but not what I first
thought.

In network-get, only if an endpoint is not bound to a space does the resulting
ingress address use the public address (if one exists). If bound to a space, the
ingress addresses are set to the machine local addresses. This is wrong because
there's absolutely no guarantee an arbitrary external workload will be able to
connect to such an address - defaulting to the public address is the best choice
for most deployments.

I think network-get needs to change such that in the absence of information to
the contrary, regardless of whether an endpoint is bound to a space, the public
address should be advertised for ingress in a cross model relation.

The above implies we would need a way for the user to specify at relation time a
different ingress address for the consuming end. But that's not necessarily easy
to determine as it requires knowledge of how both sides (incl offering side)
have been deployed, and may change per relation. We don't intend to provide a
solution for this bit of the problem in Juju 2.3.


> So in the context of this doc
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> For relation data set up by Juju when a unit enters scope of a cross model 
> relation:
> 
> Juju will use the public address for advertising ingress. We have (future) 
> plans
> to support cross model relations where, in the absence of spaces, Juju can
> determine that traffic between endpoints is able to go via cloud local
> addresses, but as stated, with all the potential routing complexity involved, 
> we
> would limit this to quite restricted scenarios where it's guaranteed to work. 
> eg
> on AWS that might be same vpc/tenant/credentials or something. But we're not
> there yet and won't be for the cross model relations release in Juju 2.3.
> 
> The relation data is of course what is available to the remote unit(s) to 
> query.
> The data set up by Juju is the default, and can be overridden by a charm in a
> relation-changed hook for example.
> 
> For network-get output:
> 
> Where there is no space binding...
> 
> ... Juju will use the public address or cloud local address as above.
> 
> Where the endpoint is bound to a space...
> 
> ... Juju will populate the ingress address info in network-get to be the local
> machine addresses in that space.
> 
> So a charm could call network-get and do a relation-set to put the correct
> ingress-address value in the relation data bag.
> 
> But I think the bug here is that when a unit enters scope, the default values
> Juju puts in relation data should be calculated the same as for network-get.
> Right now, the ingress address used is not space aware - if it's a cross model
> relation, Juju always uses the public address regardless of whether the 
> endpoint
> is bound to a space. If this behaviour were to be changed to match what
> network-get does, the relation data would be set up correctly(?) and there'd 
> be
> no need for the charm to override anything.
> 
>> I do believe the intended resolution is to use juju relate --via X, and
>> then X can be a space that isn't public. I'm pretty sure we don't have
>> everything wired up for that yet, and we want to make sure we can get the
>> current steps working well.
>>
> 
> juju relate --via X works at the moment by setting the egress-subnets value in
> the relation data bucket. This supports the case where the person deploying
> knows traffic from a model will egress via specific subnets, eg for a NATed
> firewall scenario. Juju itself uses this value to set firewall rules on the
> other model. There's currently no plans to support explicitly specifying what
> ingress addresses to use for either end of a cross model relation.
> 
>> The very first thing I noticed in your first email was that charms should
>> *not* be aware of spaces. The abstractions for charms are around their
>> bindings (explicit or via binding their endpoints). The goal of spaces is

Re: default network space

2017-10-18 Thread Ian Booth


On 19/10/17 15:22, John Meinel wrote:
> So at the moment, I don't think Juju supports what you're looking for,
> which is cross model relations without public addresses. We've certainly
> discussed supporting all private for cross model. The main issue is that we
> often drive parts of the firewalls (security groups) but without
> understanding all the routing, it is hard to be sure whether things will
> actually work.
> 

The space to which an endpoint is bound affects the behaviour here. Having said
that, there may be a bug in Juju's cross model relations code.

So in the context of this doc
https://jujucharms.com/docs/master/developer-network-primitives

For relation data set up by Juju when a unit enters scope of a cross model 
relation:

Juju will use the public address for advertising ingress. We have (future) plans
to support cross model relations where, in the absence of spaces, Juju can
determine that traffic between endpoints is able to go via cloud local
addresses, but as stated, with all the potential routing complexity involved, we
would limit this to quite restricted scenarios where it's guaranteed to work. eg
on AWS that might be same vpc/tenant/credentials or something. But we're not
there yet and won't be for the cross model relations release in Juju 2.3.

The relation data is of course what is available to the remote unit(s) to query.
The data set up by Juju is the default, and can be overridden by a charm in a
relation-changed hook for example.

For network-get output:

Where there is no space binding...

... Juju will use the public address or cloud local address as above.

Where the endpoint is bound to a space...

... Juju will populate the ingress address info in network-get to be the local
machine addresses in that space.

So a charm could call network-get and do a relation-set to put the correct
ingress-address value in the relation data bag.
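
A hedged sketch of what that could look like in a relation hook, assuming a
binding named "db" and the 2.3-era --ingress-address flag described elsewhere
in this thread:

$ ADDR=$(network-get db --ingress-address -r "$JUJU_RELATION_ID")
$ relation-set -r "$JUJU_RELATION_ID" ingress-address="$ADDR"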

But I think the bug here is that when a unit enters scope, the default values
Juju puts in relation data should be calculated the same as for network-get.
Right now, the ingress address used is not space aware - if it's a cross model
relation, Juju always uses the public address regardless of whether the endpoint
is bound to a space. If this behaviour were to be changed to match what
network-get does, the relation data would be set up correctly(?) and there'd be
no need for the charm to override anything.

> I do believe the intended resolution is to use juju relate --via X, and
> then X can be a space that isn't public. I'm pretty sure we don't have
> everything wired up for that yet, and we want to make sure we can get the
> current steps working well.
> 

juju relate --via X works at the moment by setting the egress-subnets value in
the relation data bucket. This supports the case where the person deploying
knows traffic from a model will egress via specific subnets, eg for a NATed
firewall scenario. Juju itself uses this value to set firewall rules on the
other model. There's currently no plans to support explicitly specifying what
ingress addresses to use for either end of a cross model relation.
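
For illustration, a hedged sketch of --via (the offer URL and egress subnet are
assumed):

$ juju relate mediawiki:db mycontroller:admin/prod.mysql --via 172.16.0.0/16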

> The very first thing I noticed in your first email was that charms should
> *not* be aware of spaces. The abstractions for charms are around their
> bindings (explicit or via binding their endpoints). The goal of spaces is
> to provide human operators a way to tell charms about their environment.
> But you shouldn't ever have to change the name of your space to match the
> name a charm expects.
> 
> So if you do 'network-get BINDING -r relation' that should give you the
> context you need to coordinate your network settings with the other
> application. The intent is that we give you the right data so that it works
> whether you are in a cross model relation or just related to a local app.
> 
> John
> =:->
> 
> 
> On Oct 13, 2017 19:59, "James Beedy"  wrote:
> 
> I can give a high level of what I feel is a reasonably common use case.
> 
> I have infrastructure in two primary locations; AWS, and MAAS (at the local
> datacenter). The nodes at the datacenter have a direct fiber route via
> virtual private gateway in us-west-2, and the instances in AWS/us-west-2
> have a direct route  via the VPG to the private MAAS networks at the
> datacenter. There is no charge for data transfer from the datacenter in and
> out of us-west-2 via the fiber VPG hot route, so it behooves me to use this
> and have the AWS instances and MAAS instances talk to each other via
> private address.
> 
> At the application level, the component/config goes something like this:
> 
> The MAAS nodes at the data center have mgmt-net, cluster-net, and
> access-net, interfaces defined, all of which get ips from their respective
> address spaces from the datacenter MAAS.
> 
> I need my elasticsearch charm to configure elasticsearch such that
> elasticsearch <-> elasticsearch talk on cluster-net, web server (AWS
> instance) -> elasticsearch to talk across the correct space for the 

Re: default network space

2017-10-12 Thread Ian Booth
Copying in the Juju list also

On 12/10/17 22:18, Ian Booth wrote:
> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
> 
> There's some doc here to explain things in more detail
> 
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> The TL;DR: is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
> 
> Depending on how the charm has been deployed, and more specifically whether it
> is in a cross model relation, the ingress address might be either the public 
> or
> private address. Juju will decide based on a number of factors (whether models
> are deployed to same region, vpc, other provider specific aspects) and 
> populate
> the network-get data accordingly. NOTE: for now Juju will always pick the 
> public
> address (if there is one) for the ingress value for cross model relations - 
> the
> algorithm to short circuit to a cloud local address is not yet finished.
> 
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is associated. 
> The
> network-get output though should not include any space information explicitly 
> -
> this is a concern which a charm should not care about.
> 
> 
> On 12/10/17 13:35, James Beedy wrote:
>> Hello all,
>>
>> In case you haven't noticed, we now have a network_get() function available
>> in charmhelpers.core.hookenv (in master, not stable).
>>
>> Just wanted to have a little discussion about how we are going to be
>> parsing network_get().
>>
>> I first want to address the output of network_get() for an instance
>> deployed to the default vpc, no spaces constraint, and related to another
>> instance in another model also default vpc, no spaces constraint.
>>
>> {'ingress-addresses': ['107.22.129.65'],
>>  'bind-addresses': [
>>      {'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
>>       'interfacename': 'eth0',
>>       'macaddress': '12:ba:53:58:9c:52'},
>>      {'addresses': [{'cidr': '252.48.0.0/12', 'address': '252.51.59.1'}],
>>       'interfacename': 'fan-252',
>>       'macaddress': '1e:a2:1e:96:ec:a2'}]}
>>
>>
>> The use case I have in mind here is such that I want to provide the private
>> network interface address via relation data in the provides.py of my
>> interface to the relating application.
>>
>> This will be able to happen by calling
>> hookenv.network_get('<binding>') in the layer that provides the
>> interface in my charm, and passing the output to get the private interface
>> ip data, to then set that in the provides side of the relation.
>>
>> Tracking?
>>
>> The problem:
>>
>> The problem is such that its not so straight forward to just get the
>> private address from the output of network_get().
>>
>> As you can see above, I could filter for network interface name, but thats
>> about the least best way one could go about this.
>>
>> Initially, I assumed the network_get() output would look different if you
>> had specified a spaces constraint when deploying your application, but the
>> output was similar to no spaces, e.g. spaces aren't listed in the output of
>> network_get().
>>
>>
>> All in all, what I'm after is a consistent way to grep either the space an
>> interface is bound to, or to get the public vs private address from the
>> output of network_get(), I think this is true for every provider just about
>> (ones that use spaces at least).
>>
>> Instead of the dict above, I was thinking we might namespace the interfaces
>> inside of what type of interface they are to make it easier to decipher
>> when parsing the network_get().
>>
>> My idea is a schema like the following:
>>
>> {
>> 'private-networks': {
>> 'my-admin-space': {
>> 'addresses': [
>> {
>> 'cidr': '172.31.48.0/20',
>> 'address': '172.31.51.59'
>> }
>> ],
>> 'interfacename': 'eth0',
>> 'macaddress': '12:ba:53:58:9c:52

Juju 2.3 beta1 is here!

2017-10-05 Thread Ian Booth
After many months of effort, we're pleased to announce the release of the first
beta for the upcoming Juju 2.3 release. This release has many long requested new
features, some of which are highlighted below.

Please note that because this is a beta release (the first one at that), there
may well be bugs or functionality that will be polished over the next betas
prior to release. But we encourage everyone to provide feedback so that we may
address any issues.

Also note that some of the documentation for the new features is also in beta
and undergoing revision and completion over the next few weeks. In particular
the cross model relations documentation is still in development.

## New and Improved

### FAN networking in containers (initial support)

A new "container-networking-method" model config attribute is introduced with 3
possible values: "local", "fan", "provider".
* local = use local bridge lxdbr0
* provider = containers get their IP address from the cloud via DHCP
* fan = use FAN

The default is to use "provider" if supported; otherwise, if FAN is configured,
use that; else "local".

On AWS, FAN works out of the box. For other clouds, a new fan-config model
option needs to be used, eg

juju model-config fan-config="<underlay CIDR>=<overlay CIDR>"
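
For example, on AWS (the CIDRs are assumptions; the overlay mirrors the fan-252
output seen elsewhere on this list):

juju model-config fan-config="172.31.0.0/16=252.0.0.0/8"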

### Update application series

It's now possible to update the underlying OS series associated with an already
deployed application.

juju update-series <application> <series>

will ensure that any new units deployed will now use the requested series.

juju update-series <machine> <series>

will inform the charms already deployed to the machine that the OS series has
been changed and they should re-configure accordingly. This requires charm
support and for the underlying OS to be upgraded manually beforehand.
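
For example (the application name, machine id and target series are assumed):

juju update-series myapp xenial
juju update-series 0 xenial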

For more detail, see the documentation
https://jujucharms.com/docs/devel/howto-updateseries

### Cross model relations

This feature allows workloads to be deployed and related across models, and even
across controllers. Note that some charms such as postgresql, prometheus (and
others) need to be updated to be cross model compatible - this work is underway.

For more detail, see the beta documentation
https://jujucharms.com/docs/devel/models-cmr/

*Note: this cross model relations documentation is also still in beta and is
incomplete.*

### LXD storage provider

Juju storage is now supported by the LXD local cloud. The available storage
options include:
- lxd (default, directory based)
- btrfs
- zfs
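
A minimal sketch (the storage size is assumed; the lxd-zfs pool name matches
the example from the storage blog post discussed on this list):

juju deploy postgresql --storage pgdata=1G,lxd-zfs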

For more detail, see the documentation
https://jujucharms.com/docs/devel/charms-storage#lxd-(lxd)

### Persistent storage management

Storage can be detached and reattached from/to units without losing the data on
that storage. The supported scenarios include:
- explicit detach / attach while the units are still active
- retain storage when a unit or application is destroyed
- retain storage when a model is destroyed
- deploy a charm using previously detached storage

The default behaviour now is to retain storage, unless destroy has explicitly
been requested when running the command.

Storage which is retained can then be reattached to a different unit. Filesystem
storage can be imported into a different model, from where it can be attached to
units in that model, or used when deploying a new charm.
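
A sketch of the detach/reattach flow (the unit and storage IDs are assumed):

juju detach-storage pgdata/0
juju attach-storage postgresql/1 pgdata/0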

For more detail, see the documentation
https://jujucharms.com/docs/devel/charms-storage


## Fixes

For a list of all bugs fixed in this release, see
https://launchpad.net/juju/+milestone/2.3-beta1

Some important fixes include:

* can't bootstrap openstack if nova and neutron AZs differ
https://bugs.launchpad.net/juju/+bug/1689683
* cache vSphere images in datastore to avoid repeated downloads
https://bugs.launchpad.net/juju/+bug/1711019
* juju run-action can be run on multiple units
https://bugs.launchpad.net/juju/+bug/1667213


## How can I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package (see https://snapcraft.io/ for more info on snaps).

 snap install juju --beta --classic

Other packages are available for a variety of platforms. Please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be automatically upgraded. If you’re using
the ppa/homebrew, you should see an upgrade available.


## Feedback Appreciated!

We encourage everyone to let us know how you're using Juju. Send us a
message on Twitter using #jujucharms, join us at #juju on freenode, and
subscribe to the mailing list at juju@lists.ubuntu.com.


## More information

To learn more about Juju please visit https://jujucharms.com.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: What is the best way to work with multiple models in a controller using the cli?

2017-10-05 Thread Ian Booth
Hey

The -m argument is what you want. It accepts either just a model name or a
controller:model for when you have multiple controllers, eg

$ juju status -m prod
$ juju status -m ctrl:prod

The first command above works on the prod model on the current controller. The
second selects a specific controller regardless of the current one. The second
way is the safest unless you really are only using one controller.

Also, you can set the JUJU_MODEL env var. That's useful for when you open
different terminal windows and want to work on a different model in each window
without using -m each time.
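
For example:

$ export JUJU_MODEL=ctrl:prod
$ juju status   # now operates on ctrl:prod without needing -m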

On 05/10/17 17:43, Akshat Jiwan Sharma wrote:
> Hi,
> 
> I have deployed a few models using the local juju controller and I want
> to execute a bunch of commands on a particular model using the juju-cli.
> 
> Lets say I have these three models defined on my controller
> 
> - model1
> - model2
> - model3
> 
> This is the sequence of commands I want to run
> 
> 1. List all the machines in model1
> 2. Add storage unit to model2
> 3. Add a relation between applications in model3
> 
> These operations may be run in any order. That is first I might run op 2
> then op 3 and then op1.
> The only constraint is that an operation must be run on a particular model.
> Right now I go about this task like so:-
> 
> juju switch model1 && juju machines
> 
>  This works fine. I get all my machines listed for model1. The problem with
> this  approach is that I'm not sure if
> another command is executing a juju switch somewhere and suddenly the model
> I'm operating changes from model1 to model2.
> 
> For instance suppose that these two commands are run one after the other
> 
> juju switch model1 && juju list machines
> juju switch model3 && juju add-relation app1 app2
> 
> Now how can I be certain that for second command I'm operating on model 3?
> As far as I understand juju switches are global.
> Meaning a `switch` makes a change "permanent" to all the other commands
> that follow.
> 
> My question is how do I "lock" the execution of a certain command to a
> particular model?
> 
> Thanks,
> Akshat
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


network-get hook tool - fixing inconsistent output plus new stuff

2017-09-14 Thread Ian Booth
Hi folks

TL;DR: I want to rename a yaml/json attribute in network-get output. I want to
see if any charmers would find this to be an issue. IIANM, we don't (yet) have a
tool to easily scrape the charm store to see what charms use network-get
directly. Charm helpers calls network-get with the --primary-address flag and
this will continue to work as before [1].

[1] --primary-address will be deprecated in Juju 2.3; --bind-address should be
used instead.

* If you see a reason not to rename the network-get yaml/json "info" attribute,
now is the time to speak up *

There's already been internal discussion with some key charmers about the new
content for network-get. But given we're looking to change output, it's time to
circulate more widely.

The network-get hook tool has been evolving over the past few cycles as a
replacement for the unit-get hook tool. The tool is not yet quite finished; as
of now in Juju 2.x releases, the following is supported:

$ network-get "binding" --primary-address
$ network-get "binding"

Without the --primary-address flag, the output is a yaml printout of all the
link layer devices and their addresses on the machine, relevant to the specified
binding according to the space to which it is associated. It's also possible to
ask for json.

Here's an example output:

$ network-get "binding"
info:
- macaddress: "00:11:22:33:44:00"
  interfacename: eth0
  addresses:
  - address: 10.10.0.23
cidr: 10.10.0.0/24
  - address: 192.168.1.111
cidr: 192.168.1.0/24
- macaddress: "00:11:22:33:44:11"
  interfacename: eth1
  addresses:
  - address: 10.10.1.23
cidr: 10.10.1.0/24
  - address: 192.168.2.111
cidr: 192.168.2.0/24

$ network-get "binding" --primary-address
10.10.0.23

Problem 1.

The json output is not consistent with the yaml. json uses "network-info"
instead of "info" as the attribute tag name.

Problem 2.

The attribute tag name itself.

Instead of "info" or "network-info", I want to rename to "bind-addresses". Or
maybe even "local-addresses"? Here's why.

There's 3 key pieces of address information a charm needs to know, for either
the local unit and/or the remote unit:
1. what address to bind to (to listen on)
2. what address to advertise for incoming connections (ingress)
3. what subnets outbound traffic will originate from (egress)

Note: the following applies to the develop branch only. 2.x is missing all of
this new stuff.

For the remote unit, this information is in relation data as these attributes:
- ingress-address
- egress-subnets

For the local unit, network-get is the tool to use to find out. I want to rename
the "info" attribute to better reflect the semantics of what the data represents
as well as fix the yaml/json mismatch.

Here's an example

bind-addresses:
- macaddress: "00:11:22:33:44:00"
  interfacename: eth0
  addresses:
  - address: 10.10.0.23
cidr: 10.10.0.0/24
  - address: 192.168.1.111
cidr: 192.168.1.0/24
- macaddress: "00:11:22:33:44:11"
  interfacename: eth1
  addresses:
  - address: 10.10.1.23
cidr: 10.10.1.0/24
  - address: 192.168.2.111
cidr: 192.168.2.0/24
egress-subnets:
- 192.168.1.0/8
- 10.0.0.0/8
ingress-addresses:
- 100.1.2.3

You can also ask for individual values

$ network-get "binding" --bind-address
10.10.0.23

$ network-get "binding" --ingress-address
100.1.2.3

Cross Model Relations

A key driver for this work is cross model relations. When called in a relation
context, or with the -r arg to specify a relation id, the ingress and egress
information provided by network-get is adjusted so that it is correct for the
relation. The charm itself remains agnostic to whether it is a cross model
relation or not; Juju does all the work. But suffice to say, charms should
evolve to use the new semantics of network-get so that they are cross model
relations compatible. As is the case now with charm helpers and how it falls
back to unit-get for older versions of Juju, this new work will only be
available in 2.3 onwards, so charm helpers will need to deal with that.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: call for testing: relations across Juju models

2017-07-26 Thread Ian Booth


On 25/07/17 00:54, Dmitrii Shcherbakov wrote:
> Hi Patrizio,
> 
> As far as I understand it now, if you configure it right in terms of
> networking, it will be possible for both single and multi-cloud cases.
>

Correct. You can have one application deployed to a model in a Google cloud, and
another deployed to a model in AWS for example. Juju correctly determines that
workload traffic needs to flow between the workloads' respective public
addresses, and also takes care of opening the required firewall ports to allow
workload traffic to flow from the requires side of the relation to the provides
side.

Future work will see Juju optimise the network aspects so that the relation will
be set up to use cloud local addresses if the models and relation endpoints
are deployed in a way that supports this (eg for AWS, same region, tenant,
VPC).

I also plan to add cross model support to bundles, to make the k8s federation
story described below easier. This is not started yet, just an idea on the
fairly large cross model relations todo list.

> Having only workers on the second cloud is fairly straightforward.
> 
> However, I think the real use-case is to implement k8s federation without
> having to replicate etcd across multiple data centers and using
> latency-based load-balancing:
> 
> https://kubernetes.io/docs/concepts/cluster-administration/federation/
> https://kubernetes.io/docs/tasks/federation/set-up-cluster-federation-kubefed/
> 
> This will require charming of the federation controller manager to
> have federation control plane for multiple clouds.
> 
> This is similar to an orchestrator use-case in the ETSI NFV architecture.
> 
> Quite an interesting problem to solve with cross-controller relations.
> 
> 
> 
> Best Regards,
> Dmitrii Shcherbakov
> 
> Field Software Engineer
> IRC (freenode): Dmitrii-Sh
> 
> On Mon, Jul 24, 2017 at 4:48 PM, Patrizio Bassi 
> wrote:
> 
>> Hi All
>>
>> this is very very interesting.
>>
>> Is possibile to scale out some units using cross models?
>>
>> For instance: in a onpestack tenant i deploy a kubernates cluster. Than in
>> another tenant i add k8-workers, the add-unit command will refer to the
>> parent deployment to get needed params (i.e. master IP address.. juju
>> config)
>>
>> This will be even better in a hybrid cloud environment
>> Regards
>>
>> Patrizio
>>
>>
>>
>> 2017-07-24 15:26 GMT+02:00 Ian Booth :
>>
>>>
>>>
>>> On 24/07/17 23:12, Ian Booth wrote:
>>>>
>>>>
>>>> On 24/07/17 20:02, Paul Gear wrote:
>>>>> On 08/07/17 03:36, Rick Harding wrote:
>>>>>> As I noted in The Juju Show [1] this week I've put together a blog
>>>>>> post around the cross model relations feature that folks can test out
>>>>>> in Juju 2.2. Please test it out and provide your feedback.
>>>>>>
>>>>>> http://mitechie.com/blog/2017/7/7/call-for-testing-shared-se
>>> rvices-with-juju
>>>>>>
>>>>>> Current known limitations:
>>>>>> Only works in the same model
>>>>>> You need to bootstrap with the feature flag to test it out
>>>>>> Does not currently work with relations to subordinates. Work is in
>>>>>> progress
>>>>>
>>>>> Hi Rick,
>>>>>
>>>>> I gave this a run this afternoon.  In my case, I just set up an haproxy
>>>>> unit in one model and a Nagios server in another, and connected the
>>>>> haproxy:reverseproxy to the nagios:website.  Everything worked exactly
>>>>> as expected.
>>>>>
>>>>> One comment about the user interface: the "juju relate" for the client
>>>>> side seems a bit redundant, since "juju add-relation" could easily work
>>>>> out which type of relation it was by looking at the form of the
>>> provided
>>>>> identifier.  If we pass a URI to an offered relation in another model,
>>>>> it could use a cross-model relation, and if we just use normal
>>>>> service:relation-id format, it could use a normal relation.
>>>>>
>>>>> Anyway, just wanted to say it's great to see some progress on this,
>>>>> because it solves some real operational problems for us.  I can't wait
>>>>> for the cross-controller, reverse-direction, highly-scalable version
>>>>> which will allow us to obsolete the glue scripts needed to connect our
>>>>> Nagios se

Re: call for testing: relations across Juju models

2017-07-24 Thread Ian Booth


On 24/07/17 23:12, Ian Booth wrote:
> 
> 
> On 24/07/17 20:02, Paul Gear wrote:
>> On 08/07/17 03:36, Rick Harding wrote:
>>> As I noted in The Juju Show [1] this week I've put together a blog
>>> post around the cross model relations feature that folks can test out
>>> in Juju 2.2. Please test it out and provide your feedback. 
>>>
>>> http://mitechie.com/blog/2017/7/7/call-for-testing-shared-services-with-juju
>>>
>>> Current known limitations:
>>> Only works in the same model
>>> You need to bootstrap with the feature flag to test it out
>>> Does not currently work with relations to subordinates. Work is in
>>> progress
>>
>> Hi Rick,
>>
>> I gave this a run this afternoon.  In my case, I just set up an haproxy
>> unit in one model and a Nagios server in another, and connected the
>> haproxy:reverseproxy to the nagios:website.  Everything worked exactly
>> as expected.
>>
>> One comment about the user interface: the "juju relate" for the client
>> side seems a bit redundant, since "juju add-relation" could easily work
>> out which type of relation it was by looking at the form of the provided
>> identifier.  If we pass a URI to an offered relation in another model,
>> it could use a cross-model relation, and if we just use normal
>> service:relation-id format, it could use a normal relation.
>>
>> Anyway, just wanted to say it's great to see some progress on this,
>> because it solves some real operational problems for us.  I can't wait
>> for the cross-controller, reverse-direction, highly-scalable version
>> which will allow us to obsolete the glue scripts needed to connect our
>> Nagios server to all our deployed NRPE units!  :-)
>>
>>
>>
> 
> Glad it's working.
> 
> Multi-controller CMR is already available in the edge snap, but we need to 
> get a
> new blog post out to describe how to use it. There's also a couple of 
> branches I
> want to land first to fix a firewalling issue. So expect something in the next
> few days.
> 
> If you can live with the firewall issue (which will be imminently fixed), give
> it a go. The only difference from what's mentioned in the blog post above is
> that you prefix the offer URL with the host controller name.
> 
> eg, the hello world case...
> 
> $ juju bootstrap aws foo
> $ juju deploy mysql
> $ juju offer mysql:db
> 
> $ juju bootstrap aws bar
> $ juju deploy mediawiki
> $ juju expose mediawiki
> $ juju relate mediawiki:db foo:admin/default.mysql
> 
> Don't forget that you can also use the "consume" permission to restrict offers
> to certain users, so long as the user consuming the offer has login access to
> the hosting controller.
> 
> You can also do things like find offers available on a given controller by
> 
> $ juju find-endpoints foo:
> 
> firewall bug: if the offer is a requires endpoint, and the consumer is a
> provides endpoint, the firewall is not set up properly. This affects the
> telegraf<->prometheus case or nrpe<->nagios case for example. A fix will land 
> in
> the next day or so and be available in the edge snap shortly. Until then it 
> can
> be run in MAAS or LXD no problem as there are no pesky firewalls to worry 
> about.
> 
> There's also an initial POC to allow the consuming application to be behind a
> NAT. So in the above example, if the mediawiki application were in a model
> running in a local LXD cloud behind CGNAT or something, simply use "what's my
> ip" to discover the source address and set the model config attribute
> "egress-cidrs" to /32 (or any other cidr that includes the source
> addresses). The user experience here is under development but works.
> 
> A key implementation artifact is that controller-to-controller traffic flows
> from the consuming model to the offering model. In the case where offer 
> endpoint
> is provides, and consumer endpoint is requires, workload traffic will 
> generally
> flow the same way - eg consumer app opens a connection to an IP address in the
> offering model. So control traffic and workload traffic is unidirectional.
> 
> In the case where the offer has the requires endpoint, this typically means
> that the offer application will initiate the connection to the consumer app, eg
> prometheus will poll the source of the metrics is the consuming model. This

prometheus will poll the source of the metrics *in* the consuming model.

> means that the workload traffic is offer model -> consumer model, while the
> control traffic is 

Re: call for testing: relations across Juju models

2017-07-24 Thread Ian Booth


On 24/07/17 20:02, Paul Gear wrote:
> On 08/07/17 03:36, Rick Harding wrote:
>> As I noted in The Juju Show [1] this week I've put together a blog
>> post around the cross model relations feature that folks can test out
>> in Juju 2.2. Please test it out and provide your feedback. 
>>
>> http://mitechie.com/blog/2017/7/7/call-for-testing-shared-services-with-juju
>>
>> Current known limitations:
>> Only works in the same model
>> You need to bootstrap with the feature flag to test it out
>> Does not currently work with relations to subordinates. Work is in
>> progress
> 
> Hi Rick,
> 
> I gave this a run this afternoon.  In my case, I just set up an haproxy
> unit in one model and a Nagios server in another, and connected the
> haproxy:reverseproxy to the nagios:website.  Everything worked exactly
> as expected.
> 
> One comment about the user interface: the "juju relate" for the client
> side seems a bit redundant, since "juju add-relation" could easily work
> out which type of relation it was by looking at the form of the provided
> identifier.  If we pass a URI to an offered relation in another model,
> it could use a cross-model relation, and if we just use normal
> service:relation-id format, it could use a normal relation.
> 
> Anyway, just wanted to say it's great to see some progress on this,
> because it solves some real operational problems for us.  I can't wait
> for the cross-controller, reverse-direction, highly-scalable version
> which will allow us to obsolete the glue scripts needed to connect our
> Nagios server to all our deployed NRPE units!  :-)
> 
> 
> 

Glad it's working.

Multi-controller CMR is already available in the edge snap, but we need to get a
new blog post out to describe how to use it. There's also a couple of branches I
want to land first to fix a firewalling issue. So expect something in the next
few days.

If you can live with the firewall issue (which will be imminently fixed), give
it a go. The only difference from what's mentioned in the blog post above is that
you prefix the offer URL with the host controller name.

eg, the hello world case...

$ juju bootstrap aws foo
$ juju deploy mysql
$ juju offer mysql:db

$ juju bootstrap aws bar
$ juju deploy mediawiki
$ juju expose mediawiki
$ juju relate mediawiki:db foo:admin/default.mysql

Don't forget that you can also use the "consume" permission to restrict offers
to certain users, so long as the user consuming the offer has login access to
the hosting controller.

You can also do things like find offers available on a given controller by

$ juju find-endpoints foo:

firewall bug: if the offer is a requires endpoint, and the consumer is a
provides endpoint, the firewall is not set up properly. This affects the
telegraf<->prometheus case or nrpe<->nagios case for example. A fix will land in
the next day or so and be available in the edge snap shortly. Until then it can
be run in MAAS or LXD no problem as there are no pesky firewalls to worry about.

There's also an initial POC to allow the consuming application to be behind a
NAT. So in the above example, if the mediawiki application were in a model
running in a local LXD cloud behind CGNAT or something, simply use "what's my
ip" to discover the source address and set the model config attribute
"egress-cidrs" to /32 (or any other cidr that includes the source
addresses). The user experience here is under development but works.
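
A hedged sketch of that POC workflow (the address is an assumption):

$ juju model-config egress-cidrs=203.0.113.5/32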

A key implementation artifact is that controller-to-controller traffic flows
from the consuming model to the offering model. In the case where offer endpoint
is provides, and consumer endpoint is requires, workload traffic will generally
flow the same way - eg consumer app opens a connection to an IP address in the
offering model. So control traffic and workload traffic is unidirectional.

In the case where the offer has the requires endpoint, this typically means
that the offer application will initiate the connection to the consumer app, eg
prometheus will poll the source of the metrics in the consuming model. This
means that the workload traffic is offer model -> consumer model, while the
control traffic is consumer model -> offer model. Hence we need bi-directional
routability between offer and consuming model in this case.

Having the controller-to-controller traffic flow from the consuming model to the
offering model is better for scalability and reduces complexity significantly. If
the routing issue above is not a problem in practice, then we'll stick with the
implementation as is. If not, we'll need to discuss things further.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Coming in 2.3: storage improvements

2017-07-13 Thread Ian Booth
Indeed. And just landing today is support for btrfs as well. So there'll be a
choice of:
- lxd (the default, directory based)
- lxd-zfs
- lxd-btrfs
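
For example (assuming the btrfs pool name follows the zfs pattern):

$ juju deploy postgresql --storage pgdata=1G,lxd-btrfs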

On 14/07/17 13:46, Menno Smits wrote:
> Nice work Andrew! These changes make Juju's storage support much more
> powerful.
> 
> 
> 
> On 13 July 2017 at 20:56, Andrew Wilkins 
> wrote:
> 
>> Hi folks,
>>
>> I've just published https://awilkins.id.au/post/juju-2.3-storage/, which
>> highlights some of the new bits added around storage that's coming to Juju
>> 2.3. I particularly wanted to highlight that a new LXD storage provider has
>> just landed on develop today. It should be available in the edge snap soon.
>>
>> The LXD storage provider will enable you to attach LXD storage volumes to
>> your containers, and use that for a charm's storage requirements. e.g.
>>
>> $ juju deploy postgresql --storage pgdata=1G,lxd-zfs
>>
>> This will create a LXD storage pool backed by a ZFS pool, create a 1GiB
>> ZFS volume and attach that to the container.
>>
>> I'd appreciate feedback on the new provider, and the attach/detach changes
>> described in the blog post, preferably before 2.3 comes around. In
>> particular, UX warts or functionality that you're missing or anything you
>> find broken-by-design -- stuff that can't easily be fixed after we release.
>>
>> Thanks!
>>
>> Cheers,
>> Andrew
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju
>>
>>
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: JUJU_UNIT_NAME no longer set in env

2017-05-22 Thread Ian Booth
FWIW, Juju itself still sets JUJU_UNIT_NAME

https://github.com/juju/juju/blob/develop/worker/uniter/runner/context/context.go#L582
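
For example, a quick way to check from the CLI (the unit name is assumed):

$ juju run --unit myapp/0 'echo $JUJU_UNIT_NAME'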

On 23/05/17 05:59, James Beedy wrote:
> Juju 2.1.2
> 
> I'm getting this "JUJU_UNIT_NAME not in env" error on legacy-non-reactive
> xenial charm using service_name() from hookenv.
> 
> http://paste.ubuntu.com/24626263/
> 
> Did we remove this?
> 
> ~James
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: How to add openstack cloud to juju 2.1.2-xenial

2017-04-10 Thread Ian Booth

On 11/04/17 07:11, Daniel Bidwell wrote:
> I need to add openstack as a cloud to juju 2.1.2-xenial.  I don't seem
> to find the right howto.  What authentication method do I use?  And
> where do I get the authentication string?  User name and password for
> dashboard user?
> 

The authentication method to use is typically userpass. This will be one of the
choices if running juju add-cloud interactively. The authentication string can
typically be found by looking at your novarc file - it is the AUTH_URL value,
usually something like "https://keystone.mydomain.com:443/v2.0/".

Once the cloud itself has been added, you then need to add credential
information which can be done using juju add-credential. It will pick up that
userpass authentication has been previously specified and will prompt for things
like tenant name, domain name etc - these values depend on how the Openstack
instance itself has been set up, and whether keystone v3 authentication is being
used etc. Juju can attempt to guess the right credential values by running juju
autoload-credentials, assuming you have a ~/.novarc file or have the Openstack
client env vars set up. The novarc file usually contains the required values for
the various credential attributes.
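
A sketch of the whole flow (the cloud name "mystack" is an assumption):

$ juju add-cloud mystack          # interactive: cloud type openstack, userpass,
                                  # endpoint = the AUTH_URL from your novarc
$ juju add-credential mystack     # prompts for username, password, tenant etc
$ juju autoload-credentials       # or let Juju guess from ~/.novarc / env vars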




-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Default Model SG Rules

2017-01-31 Thread Ian Booth
As part of the cross model relations work, the provider interface is being
reworked such that Open/Close Port() API calls can now take ingress rules as
parameters, i.e. a collection of port ranges and allowed source CIDRs.

With the above work, it will be possible to use that new provider
capability to implement something like ssh-allow as an optional model parameter.

Bigger picture though - we want to move to a model where Juju controllers
are simply applications, by default with a single deployed unit, and with HA we
effectively add-unit -n 3 for example. So in that sense, bug 1420996 which asks
for juju expose to gain the ability to limit the subnets to which an application
is exposed seems like something useful to look at too.

But, we also have the concept of spaces - a set of subnets with the same
ingress/egress rules. Talking to John who has been doing much of the work in
this area, we could consider the fact that there should be a way to provide ssh
access to all machines in an environment; maybe we have Juju model this to allow
an ssh endpoint for machines to have a binding into a specific space.

Having said that, the above work to improve how Juju controllers are
modeled is not scheduled for the Juju 2.2 cycle. Maybe "ssh-allow" is a tasteful
enough compromise for a quick win for Juju 2.2? It would be easy enough to
upgrade that later to support a better modelled solution.
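
To make the idea concrete, the quick win might look something like this from
the CLI - purely hypothetical, no such option exists today:

$ juju model-config ssh-allow=10.0.0.0/8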

On 30/01/17 08:11, Michael Nelson wrote:
> On Sat, Jan 28, 2017 at 4:34 AM James Beedy  wrote:
> 
>> A default SG rule generated for every model allows 22 from 0.0.0.0/0, I'm
>> guessing this is because we are trying to facilitate the use case for juju
>> deployed on a public cloud, and instances being ssh accessed from the
>> internet and not from behind VPN in the same address space.
>>
>> A functionality which would allow users who don't want ssh open to the
>> world to close it, either completely, or limit to a private address space,
>> would be very helpful (especially because Juju reverts any changes made to
>> the SG,
>>
> 
> I created a bug about that a while back:
> 
> https://bugs.launchpad.net/juju-core/+bug/1420996
> 
> As per the last change there, it was targeted for 2.1.0 until just recently.
> 
> 
> 
>> so I couldn't even lock down port 22 if I wanted to).
>>
>> Is it possible to introduce a model config param that we could use to tell
>> juju where to allow ssh traffic from?
>>
> 
> Again, an older bug, but I'd be keen to see that not just for 22/ssh, but
> in general when exposing services:
> 
> https://bugs.launchpad.net/bugs/1401358
> 
> but that may not fit the new juju2 models since the bug was written.
> 
> 
>>
>> Quick fix: Introduce an 'ssh-allow' param that could be used to open and
>> close port 22 on the SG generated for the model?
>>
>> Better fix: Introduce a config param 'ssh-access', where default value is
>> 0.0.0.0/0, which could then be modified to an address space that fits the
>> users security needs.
>>
>> How do others feel about this?
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Issue deploying a juju controller on openstack private cloud

2016-10-14 Thread Ian Booth
Unfortunately, at the moment there is no good documentation on all of the
possible configuration options. It's something on the radar to improve.

If you did have a running 2.0 system (2.0 was released today), you could type
$ juju model-defaults

to see the available options.

On 14/10/16 00:26, sergio gonzalez wrote:
> Hello Ian
> 
> Thanks for your support. Where can I find all the configuration
> options to be passed during bootstrap?
> 
> Regards
> 
> Sergio
> 
> 2016-10-13 15:58 GMT+02:00 Ian Booth :
>> Hi Sergio
>>
>> Courtesy of Rick Harding, here's the information you need.
>>
>> The openstack provider has a network configuration attribute which needs to 
>> be
>> set to specify the network label or UUID to bring machines up on when 
>> multiple
>> networks exist.
>>
>> You pass it as an argument to bootstrap. eg
>>
>> $ juju bootstrap openstack-controller openstack-mitaka
>> --config image-metadata-url=http://10.2.1.109/simplestream/images/
>> --config network=<network uuid or label>
>>
>> On 13/10/16 06:29, sergio gonzalez wrote:
>>> Hello
>>>
>>> I am trying to deploy a juju controller, but I get the following error:
>>>
>>> juju --debug bootstrap openstack-controller openstack-mitaka --config
>>> image-metadata-url=http://10.2.1.109/simplestream/images/
>>>
>>> 2016-10-12 20:19:00 INFO juju.cmd supercommand.go:63 running juju
>>> [2.0-beta15 gc go1.6.2]
>>>
>>> 2016-10-12 20:19:00 INFO cmd cmd.go:141 Adding contents of
>>> "/home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub" to
>>> authorized-keys
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.cmd.juju.commands bootstrap.go:499
>>> preparing controller with config: map[name:controller
>>> uuid:71e55928-2c38-407b-897f-94e83c60890b
>>> image-metadata-url:http://10.2.1.109/simplestream/images/
>>> authorized-keys:ssh-rsa
>>> B3NzaC1yc2EDAQABAAABAQDBoDbcBms7z/ChSG5hQyqZQYhkH6V5uA7HcINuFJH2AC9ygej6TdJ6eCdsPU77x+CgdRVLINE1PhtWsXdYFEZ11e7OV2Y4Jlt/SkMqGJK4enHNXcofIBUntbuVh90hww/yiApLxxi4/cMgHTigu4YZbkZz+QVBqCn5zZMgqPbSR55uHGsT9cbF1S+XRj/OqMpuwOkbgZ/vR880wz6lq1rUwdBOIAIblhuwXHLTT7A5y6Vck69xuqkeyjI67SUdHhxXeCDbjUkOkCqKHY9dU3LNHIH0xYsWGTB7z+FpCn8f7URfMviLQ2QX30Uda/h0KQ91/raGjYE5SHU3E/P/VWtj
>>> juju-client-key
>>>
>>>  type:openstack]
>>>
>>> 2016-10-12 20:19:00 INFO juju.provider.openstack provider.go:75
>>> opening model "controller"
>>>
>>> 2016-10-12 20:19:00 INFO cmd cmd.go:129 Creating Juju controller
>>> "openstack-controller" on openstack-mitaka/RegionOne
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
>>> image datasource "image-metadata-url"
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
>>> image datasource "default cloud images"
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
>>> image datasource "default ubuntu cloud images"
>>>
>>> 2016-10-12 20:19:01 INFO juju.cmd.juju.commands bootstrap.go:641
>>> combined bootstrap constraints:
>>>
>>> 2016-10-12 20:19:01 INFO cmd cmd.go:129 Bootstrapping model "controller"
>>>
>>> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:214
>>> model "controller" supports service/machine networks: false
>>>
>>> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:216
>>> network management by juju enabled: true
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.bootstrap tools.go:95 looking
>>> for bootstrap tools: version=2.0-beta15
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:106 finding
>>> tools in stream "devel"
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:108 reading
>>> tools with major.minor version 2.0
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:116 filtering
>>> tools by version: 2.0-beta15
>>>
>>> 2016-10-12 20:19:01 DEBUG juju.environs.tools urls.go:109 trying
>>> datasource "keystone catalog"
>>>
>>> 2016-10-12 20:19:02 DEBUG juju.environs.simplestreams
>>> simplestreams.go:680 using default candidate for content id
>>> "com.ubuntu.juju:devel:tools" are {20161007 mirrors:1.0
>>> content-download streams/v1/cpc-mirrors.sjson []}
>>>
>>> 2016-10-12 20:19:03 DEBUG juju.environs

Re: Issue deploying a juju controller on openstack private cloud

2016-10-13 Thread Ian Booth
Hi Sergio

Courtesy of Rick Harding, here's the information you need.

The openstack provider has a network configuration attribute which needs to be
set to specify the network label or UUID to bring machines up on when multiple
networks exist.

You pass it as an argument to bootstrap. eg

$ juju bootstrap openstack-controller openstack-mitaka
--config image-metadata-url=http://10.2.1.109/simplestream/images/
--config network=<network uuid or label>

On 13/10/16 06:29, sergio gonzalez wrote:
> Hello
> 
> I am trying to deploy a juju controller, but I get the following error:
> 
> juju --debug bootstrap openstack-controller openstack-mitaka --config
> image-metadata-url=http://10.2.1.109/simplestream/images/
> 
> 2016-10-12 20:19:00 INFO juju.cmd supercommand.go:63 running juju
> [2.0-beta15 gc go1.6.2]
> 
> 2016-10-12 20:19:00 INFO cmd cmd.go:141 Adding contents of
> "/home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub" to
> authorized-keys
> 
> 2016-10-12 20:19:00 DEBUG juju.cmd.juju.commands bootstrap.go:499
> preparing controller with config: map[name:controller
> uuid:71e55928-2c38-407b-897f-94e83c60890b
> image-metadata-url:http://10.2.1.109/simplestream/images/
> authorized-keys:ssh-rsa
> B3NzaC1yc2EDAQABAAABAQDBoDbcBms7z/ChSG5hQyqZQYhkH6V5uA7HcINuFJH2AC9ygej6TdJ6eCdsPU77x+CgdRVLINE1PhtWsXdYFEZ11e7OV2Y4Jlt/SkMqGJK4enHNXcofIBUntbuVh90hww/yiApLxxi4/cMgHTigu4YZbkZz+QVBqCn5zZMgqPbSR55uHGsT9cbF1S+XRj/OqMpuwOkbgZ/vR880wz6lq1rUwdBOIAIblhuwXHLTT7A5y6Vck69xuqkeyjI67SUdHhxXeCDbjUkOkCqKHY9dU3LNHIH0xYsWGTB7z+FpCn8f7URfMviLQ2QX30Uda/h0KQ91/raGjYE5SHU3E/P/VWtj
> juju-client-key
> 
>  type:openstack]
> 
> 2016-10-12 20:19:00 INFO juju.provider.openstack provider.go:75
> opening model "controller"
> 
> 2016-10-12 20:19:00 INFO cmd cmd.go:129 Creating Juju controller
> "openstack-controller" on openstack-mitaka/RegionOne
> 
> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "image-metadata-url"
> 
> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default cloud images"
> 
> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default ubuntu cloud images"
> 
> 2016-10-12 20:19:01 INFO juju.cmd.juju.commands bootstrap.go:641
> combined bootstrap constraints:
> 
> 2016-10-12 20:19:01 INFO cmd cmd.go:129 Bootstrapping model "controller"
> 
> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:214
> model "controller" supports service/machine networks: false
> 
> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:216
> network management by juju enabled: true
> 
> 2016-10-12 20:19:01 INFO juju.environs.bootstrap tools.go:95 looking
> for bootstrap tools: version=2.0-beta15
> 
> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:106 finding
> tools in stream "devel"
> 
> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:108 reading
> tools with major.minor version 2.0
> 
> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:116 filtering
> tools by version: 2.0-beta15
> 
> 2016-10-12 20:19:01 DEBUG juju.environs.tools urls.go:109 trying
> datasource "keystone catalog"
> 
> 2016-10-12 20:19:02 DEBUG juju.environs.simplestreams
> simplestreams.go:680 using default candidate for content id
> "com.ubuntu.juju:devel:tools" are {20161007 mirrors:1.0
> content-download streams/v1/cpc-mirrors.sjson []}
> 
> 2016-10-12 20:19:03 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "image-metadata-url"
> 
> 2016-10-12 20:19:03 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default cloud images"
> 
> 2016-10-12 20:19:03 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default ubuntu cloud images"
> 
> 2016-10-12 20:19:03 DEBUG juju.environs.bootstrap bootstrap.go:489
> constraints for image metadata lookup &{{{RegionOne
> http://10.2.1.111:35357/v3} [centos7 precise trusty win10 win2012
> win2012hv win2012hvr2 win2012r2 win2016 win2016nano win7 win8 win81
> xenial yakkety] [amd64 arm64 ppc64el s390x] released}}
> 
> 2016-10-12 20:19:03 DEBUG juju.environs.bootstrap bootstrap.go:501
> found 1 image metadata in image-metadata-url
> 
> 2016-10-12 20:19:04 DEBUG juju.environs.simplestreams
> simplestreams.go:458 index has no matching records
> 
> 2016-10-12 20:19:04 DEBUG juju.environs.bootstrap bootstrap.go:501
> found 0 image metadata in default cloud images
> 
> 2016-10-12 20:19:05 DEBUG juju.environs.simplestreams
> simplestreams.go:454 skipping index
> "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson";
> because of missing information: index file has no data for cloud
> {RegionOne http://10.2.1.111:35357/v3} not found
> 
> 2016-10-12 20:19:05 DEBUG juju.environs.bootstrap bootstrap.go:497
> ignoring image metadata in default ubuntu cloud images: index file has
> no data for cloud {RegionOne http://10.2.1.111:35357/v3} not found
> 
> 2016-10-12 20:19:05 DEBUG juju.environs.bootstrap bootstrap.go:505
> found 1 image metada

Re: delayed juju beta16 until next week

2016-08-18 Thread Ian Booth
Just to provide a little more clarity on the Azure issue.

The recent Azure SDK update changed the Azure behaviour as exposed to Juju. We
were previously not waiting for machines to be marked as fully provisioned; the
SDK now does this for us. MS says this is what you must do. The effect on Juju
is that deployments take twice as long since everything is now serialised.

Andrew has an idea that we may be able to work around it but there are no
guarantees at this point. But we'll try and find a suitable workaround.


On 18/08/16 23:25, Rick Harding wrote:
> We need to delay the release of beta16 until next week as we've been busy
> breaking things and currently don't have a working Azure in our trunk.
> 
> We've updated the Azure code we use to talk to their APIs and in the
> process uncovered changes in our code that need to happen to help bring
> things back to fully functional. We've also uncovered a race in our use of
> the Azure APIs that is currently in progress. Due to this flux we've not
> had a passing build on Azure since beta15.
> 
> The team's been hard at work chasing down the updates and we are confident
> we'll have everything set for next week. We're excited because the new
> Azure tooling will allow us to improve the Azure user experience for Juju
> 2.0.
> 
> If you have a specific blocking bug that was addressed (marked
> fix-committed) this week and a daily build that does not work on Azure is
> ok for your needs please let us know and we'll force a daily build update
> for this week.
> 
> Thanks for your patience.
> 
> Rick
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Model config

2016-06-08 Thread Ian Booth


On 08/06/16 23:59, roger peppe wrote:
> On 8 June 2016 at 10:41, Andrew Wilkins  wrote:
>> Hi folks,
>>
>> We're in the midst of making some changes to model configuration in Juju
>> 2.0, separating out things that are not model specific from those that are.
>> For many things this is very clear-cut, and for other things not so much.
>>
>> For example, api-port and state-port are controller-specific, so we'll be
>> moving them from model config to a new controller config collection. The end
>> goal is that you'll no longer see those when you type "juju
>> get-model-config" (there will be a separate command to get controller
>> attributes such as these), though we're not quite there yet.
> 
> Interesting - seems like a good change.
> 
> Will this change the internal and API representations too, so there
> are two groups
> of mutually-exclusive attributes? Does this also mean that the

Internally there will be three groups of mutually exclusive attributes:
- controller
- cloud
- model

Initially, we'll maintain internal API compatibility by combining all these to
produce the result of state.ModelConfig()

We'll then be able to consider things like config inheritance / overrides etc.
eg if cloud config (specified in the clouds.yaml file) defines an apt-mirror,
should we allow a model to also have that value, which will take precedence over
the model-level value taking precedence over the cloud-level one.
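
To illustrate the intended split from the CLI - command names are indicative
only at this stage:

$ juju controller-config api-port     # controller-level attribute
$ juju model-defaults apt-mirror      # cloud/model-level attribute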

> really-not-very-nice
> ConfigSkeleton API method will go too?
> 

I hope so. But we're rushing to get everything done for beta9 and are focusing
first on the data model since it will be harder to upgrade if that's not right
first up.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju2: format of clouds.yaml for juju add-cloud

2016-05-03 Thread Ian Booth


On 03/05/16 23:55, Andreas Hasenack wrote:
> On Tue, May 3, 2016 at 10:28 AM, Ian Booth  wrote:
> 
>>
>> The syaml referred to above is for cloud definitions. The command being
>> run is
>> for adding a credential. The credential data model is different to clouds.
>> The
>> type of cloud though (eg openstack vs aws vs google) determine what
>> credential
>> attributes are valid. "domain-name" is used for keystone v3
>> authentication. It
>> is optional and not needed for keystone v2. Whether to enter it or not
>> depends
>> entirely on your openstack setup.
>>
>>
> Thanks.
> 
> Suggestion: since I'm adding a credential for a cloud that is already
> defined, maybe juju shouldn't ask me for "domain-name" if the endpoint url
> is for keystone 2.
> 

That would be a nice improvement. It's tricky because the credentials schema is
currently defined only by the type of cloud, not by any attributes assigned to
the cloud. It will potentially be a non-trivial amount of work to change that.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju2: format of clouds.yaml for juju add-cloud

2016-05-03 Thread Ian Booth


On 03/05/16 23:16, Andreas Hasenack wrote:
> On Tue, May 3, 2016 at 10:09 AM, Andreas Hasenack 
> wrote:
> 
>> On Wed, Apr 20, 2016 at 7:07 PM, Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> On Thu, Apr 21, 2016 at 2:44 AM Andreas Hasenack 
>>> wrote:
>>>
 Hi,

 I was trying to add another "cloud" so that I could have multiple MAAS
 servers available to bootstrap on, without having to type the MAAS IP
 everytime in the bootstrap command line, and pass --credential.

 Some reading lead me to juju add-cloud, but the documentation only has
 examples for openstack clouds, like:

 clouds:
   <cloud name>:
     type: <cloud type>
     regions:
       <region name>:
         endpoint: <endpoint url>
     auth-types: <[access-key, oauth, userpass]>


 That does not translate immediately to a MAAS configuration. I asked for
 help on IRC and mgz provided me with this syntax:

 clouds:
   some-name:
     type: maas
     auth-types: [oauth1]
     endpoint: 'http://<maas host or ip>/MAAS/'


 Are there other options that could be used here, specific to the "maas"
 type? What about other cloud types, what changes in this template?

>>>
>>> Everything that you can use is used here:
>>> http://streams.canonical.com/juju/public-clouds.syaml. So the things in
>>> there of note are "storage-endpoint" and "regions".
>>>
>>>
>>
>> What's "domain-name"?
>> andreas@nsn7:~$ juju add-credential cistack
>>   credential name: cistack
>>   auth-type: userpass
>>   username: andreas
>>   password:
>>   tenant-name: andreas
>>   domain-name: ?
>> credentials added for cloud cistack
>>
>> It's not used in http://streams.canonical.com/juju/public-clouds.syaml,
>> nor is it documented in
>> https://jujucharms.com/docs/devel/clouds#specifying-additional-clouds. I
>> can take a guess (DNS domain name), but I don't know where and how it's
>> used. juju1 didn't have that, and nor does the novarc file given to me by
>> horizon.
>>
>>
> 
> Looks like juju 2b6 also doesn't know what it is:
> $ juju-2.0 bootstrap cistack-controller cistack
> WARNING unknown config field "domain-name"
> ERROR authentication failed.
> (...)
> 

That WARNING above is unfortunately misleading in this case. The provider's
config parsing needs to be updated to understand that domain-name is a new
optional field (the field is still processed despite the warning).

But it doesn't indicate the cause of the failure. Perhaps only keystone 2 is
supported by the openstack cloud in use. The error message would be better if it
indicated the cause of failure.


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju2: format of clouds.yaml for juju add-cloud

2016-05-03 Thread Ian Booth


On 03/05/16 23:09, Andreas Hasenack wrote:
> On Wed, Apr 20, 2016 at 7:07 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
> 
>> On Thu, Apr 21, 2016 at 2:44 AM Andreas Hasenack 
>> wrote:
>>
>>> Hi,
>>>
>>> I was trying to add another "cloud" so that I could have multiple MAAS
>>> servers available to bootstrap on, without having to type the MAAS IP
>>> everytime in the bootstrap command line, and pass --credential.
>>>
>>> Some reading lead me to juju add-cloud, but the documentation only has
>>> examples for openstack clouds, like:
>>>
>>> clouds:
>>>   <cloud name>:
>>>     type: <cloud type>
>>>     regions:
>>>       <region name>:
>>>         endpoint: <endpoint url>
>>>     auth-types: <[access-key, oauth, userpass]>
>>>
>>>
>>> That does not translate immediately to a MAAS configuration. I asked for
>>> help on IRC and mgz provided me with this syntax:
>>>
>>> clouds:
>>>   some-name:
>>>     type: maas
>>>     auth-types: [oauth1]
>>>     endpoint: 'http://<maas host or ip>/MAAS/'
>>>
>>>
>>> Are there other options that could be used here, specific to the "maas"
>>> type? What about other cloud types, what changes in this template?
>>>
>>
>> Everything that you can use is used here:
>> http://streams.canonical.com/juju/public-clouds.syaml. So the things in
>> there of note are "storage-endpoint" and "regions".
>>
>>
> 
> What's "domain-name"?
> andreas@nsn7:~$ juju add-credential cistack
>   credential name: cistack
>   auth-type: userpass
>   username: andreas
>   password:
>   tenant-name: andreas
>   domain-name: ?
> credentials added for cloud cistack
> 
> It's not used in http://streams.canonical.com/juju/public-clouds.syaml, nor
> is it documented in
> https://jujucharms.com/docs/devel/clouds#specifying-additional-clouds. I

The syaml referred to above is for cloud definitions. The command being run is
for adding a credential. The credential data model is different to clouds. The
type of cloud though (eg openstack vs aws vs google) determines what credential
attributes are valid. "domain-name" is used for keystone v3 authentication. It
is optional and not needed for keystone v2. Whether to enter it or not depends
entirely on your openstack setup.

> can take a guess (DNS domain name), but I don't know where and how it's
> used. juju1 didn't have that, and nor does the novarc file given to me by
> horizon.
> 

Yes, only Juju v2 supports keystone 3.

As a reminder, the release notes sent out with each beta explain it:

https://jujucharms.com/docs/devel/temp-release-notes#keystone-3-support-in-openstack

### Keystone 3 support in Openstack.

Juju now supports Openstack with Keystone Identity provider V3. Keystone
3 brings a new attribute to our credentials, "domain-name"
(OS_DOMAIN_NAME) which is optional. If "domain-name" is present (and
user/password too) juju will use V3 authentication by default. In other
cases where only user and password is present, it will query Openstack
as to what identity providers are supported, and their endpoints. V3
will be tried and, if it works, set as the identity provider or else it
will settle for V2, the previous standard.
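
A minimal sketch of the resulting credentials.yaml entry, with hypothetical
values:

credentials:
  mystack:
    andreas:
      auth-type: userpass
      username: andreas
      password: <password>
      tenant-name: andreas
      domain-name: <your keystone v3 domain>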

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-28 Thread Ian Booth
The older URL format is what is needed until the change lands (targeted for
beta4). The URL-based format for bundle charms is all that is supported by the
original local bundles work. The upcoming feature drop fixes that, as well as
removing the support for local charm URLs - all local charms, whether inside
bundles or deployed using the CLI, will need to be specified using a file path.
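
Once that lands, a bundle fragment referencing a local charm would look
something like this - a sketch with hypothetical paths:

series: trusty
services:
  mysql:
    charm: ./charms/mysql
    num_units: 1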

On 29/03/16 15:57, Rick Harding wrote:
> So this means the older format should work? Antonio, have you poked through
> that thread at the original working setup for local charms?
> 
> On Mon, Mar 28, 2016 at 9:45 PM Antonio Rosales <
> antonio.rosa...@canonical.com> wrote:
> 
>>
>>
>> On Monday, March 28, 2016, Ian Booth  wrote:
>>
>>> Hey Antonio
>>>
>>> I must apologise - the changes didn't make beta3 due to all the work
>>> needed to
>>> migrate the CI scripts to test the new hosted model functionality; we ran
>>> out of
>>> time to be able to QA the local bundle changes.
>>>
>>> I expect this work will be done for beta4.
>>
>>
>> Completely understood. I'll retest with Beta 4. Thanks for the update.
>>
>> -Antonio
>>
>>
>>>
>>>
>>>
>> On 29/03/16 11:04, Antonio Rosales wrote:
>>>> + Juju list for others awareness
>>>>
>>>>
>>>> On Thu, Mar 10, 2016 at 1:53 PM, Ian Booth 
>>> wrote:
>>>>> Thanks Rick. Trivial change to make. This work should be in beta3 due
>>> next week.
>>>>> The work includes dropping support for local repositories in favour of
>>> path
>>>>> based local charm and bundle deployment.
>>>>
>>>> Ian,
>>>> First thanks for working on this feature. Second, I tried this for a
>>>> local ppc64el deploy which is behind a firewall, and thus local charms
>>>> are good way forward. I may have got the syntax incorrect and thus
>>>> wanted to confirm here. What I did was is at:
>>>> http://paste.ubuntu.com/15547725/
>>>> Specifically, I set the the charm path to something like:
>>>> charm: /home/ubuntu/charms/trusty/apache-hadoop-compute-slave
>>>> However, I got the following error:
>>>> ERROR cannot deploy bundle: cannot resolve URL
>>>> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave": charm or
>>>> bundle URL has invalid form:
>>>> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave"
>>>>
>>>> This is on the latest beta3:
>>>> 2.0-beta3-xenial-ppc64el
>>>>
>>>> Any suggestions?
>>>>
>>>> -thanks,
>>>> Antonio
>>>>
>>>>
>>>>>
>>>>> On 10/03/16 23:37, Rick Harding wrote:
>>>>>> Thanks Ian, after thinking about it I think what we want to do is
>>> really
>>>>>> #2. The reasoning I think is:
>>>>>>
>>>>>> 1) we want to make things consistent. The CLI experience is present a
>>> charm
>>>>>> and override series with --series=
>>>>>> 2) more consistent, if you do it with local charms you can always do
>>> it
>>>>>> 3) we want to encourage folks to drop series from the charmstore urls
>>> and
>>>>>> worry less about series over time. Just deploy X and let the charm
>>> author
>>>>>> pick the default best series. I think we should encourage this in the
>>> error
>>>>>> message for #2. "Please remove the series section of the charm url"
>>> or the
>>>>>> like when we error on the conflict, pushing users to use the series
>>>>>> override.
>>>>>>
>>>>>> Uros, Francesco, this brings up a point that I think for multi-series
>>>>>> charms we want the deploy cli snippets to start to drop the series
>>> part of
>>>>>> the url as often as we can. If the url doesn't have the series
>>> specified,
>>>>>> e.g. jujucharms.com/mysql then the cli command should not either.
>>> Right now
>>>>>> I know we add the series/revision info and such. Over time we want to
>>> try
>>>>>> to get to as simple a command as possible.
>>>>>>
>>>>>> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth 
>>> wrote:
>>>>>>
>>>>>>> I've implemen

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-28 Thread Ian Booth
Hey Antonio

I must apologise - the changes didn't make beta3 due to all the work needed to
migrate the CI scripts to test the new hosted model functionality; we ran out of
time to be able to QA the local bundle changes.

I expect this work will be done for beta4.

On 29/03/16 11:04, Antonio Rosales wrote:
> + Juju list for others awareness
> 
> 
> On Thu, Mar 10, 2016 at 1:53 PM, Ian Booth  wrote:
>> Thanks Rick. Trivial change to make. This work should be in beta3 due next 
>> week.
>> The work includes dropping support for local repositories in favour of path
>> based local charm and bundle deployment.
> 
> Ian,
> First thanks for working on this feature. Second, I tried this for a
> local ppc64el deploy which is behind a firewall, and thus local charms
> are good way forward. I may have got the syntax incorrect and thus
> wanted to confirm here. What I did was is at:
> http://paste.ubuntu.com/15547725/
> Specifically, I set the the charm path to something like:
> charm: /home/ubuntu/charms/trusty/apache-hadoop-compute-slave
> However, I got the following error:
> ERROR cannot deploy bundle: cannot resolve URL
> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave": charm or
> bundle URL has invalid form:
> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave"
> 
> This is on the latest beta3:
> 2.0-beta3-xenial-ppc64el
> 
> Any suggestions?
> 
> -thanks,
> Antonio
> 
> 
>>
>> On 10/03/16 23:37, Rick Harding wrote:
>>> Thanks Ian, after thinking about it I think what we want to do is really
>>> #2. The reasoning I think is:
>>>
>>> 1) we want to make things consistent. The CLI experience is present a charm
>>> and override series with --series=
>>> 2) more consistent, if you do it with local charms you can always do it
>>> 3) we want to encourage folks to drop series from the charmstore urls and
>>> worry less about series over time. Just deploy X and let the charm author
>>> pick the default best series. I think we should encourage this in the error
>>> message for #2. "Please remove the series section of the charm url" or the
>>> like when we error on the conflict, pushing users to use the series
>>> override.
>>>
>>> Uros, Francesco, this brings up a point that I think for multi-series
>>> charms we want the deploy cli snippets to start to drop the series part of
>>> the url as often as we can. If the url doesn't have the series specified,
>>> e.g. jujucharms.com/mysql then the cli command should not either. Right now
>>> I know we add the series/revision info and such. Over time we want to try
>>> to get to as simple a command as possible.
>>>
>>> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth  wrote:
>>>
>>>> I've implemented option 1:
>>>>
>>>>  error if Series attribute is used at all with a store charm URL
>>>>
>>>> Trivial to change if needed.
>>>>
>>>> On 10/03/16 12:58, Ian Booth wrote:
>>>>> Yeah, agreed having 2 ways to specify store series can be suboptimal.
>>>>> So we have 2 choices:
>>>>>
>>>>> 1. error if Series attribute is used at all with a store charm URL
>>>>> 2. error if the Series attribute is used and conflicts
>>>>>
>>>>> Case 1
>>>>> --
>>>>>
>>>>> Errors:
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>> Ok:
>>>>>
>>>>> Series: trusty
>>>>> Charm: ./mysql
>>>>>
>>>>>
>>>>> Case 2
>>>>> --
>>>>>
>>>>> Ok:
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: ./mysql
>>>>>
>>>>> Errors:
>>>>>
>>>>> Series: xenial
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>>
>>>>> On 10/03/16 12:51, Rick Harding wrote:
>>>>>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either
>>>> way.
>>>>>>
>>>>>> On Wed, Mar 9, 2016, 9:50 PM Rick Harding 

Re: juju 2.0 beta3 push this week

2016-03-20 Thread Ian Booth
Another feature which will be in the next beta is support for keystone 3 in
Openstack.

On 18/03/16 04:51, Rick Harding wrote:
> tl;dr
> Juju 2.0 beta3 will not be out this week.
> 
> The team is fighting a backlog of getting work landed. Rather than get the
> partial release out this week with the handful of current features and
> adding to the backlog while getting that beta release out, the decision was
> made to focus on getting the current work that’s ready landed. This will
> help us get our features in before the freeze exception deadline of the
> 23rd.
> 
> We have several new things currently in trunk (such as enhanced support for
> MAAS spaces, machine provisioning status monitoring, Juju GUI embedded CLI
> commands into Juju Core), but we have important things to get landed. These
> include:
> 
> - Updating controller model to be called “admin” and a “default” initial
> working model on bootstrap that’s safely removable
> - Minimum Juju version support for charms
> - juju read-only mode
> - additional resources work with version numbers and bundles support
> - additional work in the clouds and credentials management work
> - juju add-user and juju register to sign in the new user
> 
> The teams will work together and focus on landing these and we’ll get a
> beta with the full set of updates for everyone to try out next week. If you
> have any questions or concerns, please let me know.
> 
> Thanks
> 
> Rick
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju 2.0 beta3 push this week

2016-03-18 Thread Ian Booth


On 18/03/16 05:12, Adam Stokes wrote:
> Hi!
> 
> Could I get this bug added to the list too?
> 
> https://bugs.launchpad.net/juju-core/+bug/1554721
>

That bug is on the list for sure. We're aiming for beta3 but it could well
slip. It will be fixed before 2.0. The priority is the feature backlog. One of
the other features we're aiming for, not included in Rick's list, is mongo3
support on Xenial.


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju GUI 2.1.0 released – Now with Juju 2.0 support

2016-03-10 Thread Ian Booth
This is awesome news. I just wanted to acknowledge the tonne of extra work done
by the GUI folks to make the GUI support all of the API changes introduced by
Juju 2.0. Can't wait to try out the new GUI with the next Juju beta2 due out
this week (next day or so).

On 11/03/16 07:06, Jeff Pihach wrote:
> Hi All,
> 
> We are excited to announce a new major release of the Juju GUI with support
> for Juju 2.0 (currently in beta). Juju 2.0 brings with it a ton of
> improvements, but one we’d like to highlight is the ability to create new
> models without needing to bootstrap them one by one. I run over all of the
> features in this new version of the GUI in this video:
> 
> 
> 
> https://www.youtube.com/watch?v=RsA2vNbKU5o
> 
> 
> 
> Along with Juju 2.0 support comes these fine additions:
> 
>   * A new user profile page which shows your models, bundles and charms
> after logging into the Charmstore.
> 
>   * Added support for syntax highlighting in the charm details pages in the
> charmbrowser when the charm author provides a GitHub Flavored Markdown
> README file. You can find more information on this here:
> https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#code
> 
>   * Added the ability to drag uncommitted units between machines in the
> machine view.
> 
>   * Unit statuses are now also shown in the machine view.
> 
>   * Fixed – when subordinates are deployed extra empty machines are no
> longer created.
> 
>   * Fixed – websockets are now closed properly when switching models.
> 
>   * Fixed – On logging out all cookies are now deleted.
> 
> 
> 
> 
> To upgrade an existing deployment:
> 
>   juju upgrade-charm juju-gui
> 
> To deploy this release in your model:
> 
>   juju deploy juju-gui
> 
> 
> 
> We hope you will enjoy this release and welcome any feedback you may have.
> Please let us know here or in our github repository
> https://github.com/juju/juju-gui/issues and we’ll be sure to get back to
> you.
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Logging into the API on Juju 2.0

2016-02-29 Thread Ian Booth
No, you are right.

$ juju list-controllers --format yaml

is better.

On 01/03/16 14:49, John Meinel wrote:
> Is there a reason to tell people to look at "controllers.yaml" rather than
> having the official mechanism be something like "juju list-controllers
> --format=yaml" ? I'd really like to avoid tying 3rd party scripts to our
> on-disk configuration. We can keep CLI compatibility, but on-disk
> structures aren't something we really want to commit to forever.
> 
> John
> =:->
> 
> On Tue, Mar 1, 2016 at 8:22 AM, Ian Booth  wrote:
> 
>> Just to be clear, the remote API for listing models for a given controller
>> exists. But you do need to look at controllers.yaml to see what
>> controllers you
>> have bootstrapped or have access to in order to make the remote list
>> models api
>> call.
>>
>> On 01/03/16 13:14, Adam Stokes wrote:
>>> Got it squared away, being able to replicate `juju list-controllers`
>> didn't
>>> have a remote api. So I will continue to read from
>>> ~/.local/share/juju/controllers.yaml. My intention was to basically see
>>> what controllers were already bootstrapped and gather the models for
>> those
>>> controllers using the remote juju api. But that doesn't exist so I will
>>> mimic what `juju list-controllers` does and read from the yaml file for
>>> controllers that are local to my admin and users.
>>>
>>> On Mon, Feb 29, 2016 at 9:40 PM, Tim Penhey 
>>> wrote:
>>>
>>>> It is the controller that you have logged into for the API.
>>>>
>>>> What are you wanting?
>>>>
>>>> You need a different API connection for each controller.
>>>>
>>>> Tim
>>>>
>>>> On 01/03/16 15:05, Adam Stokes wrote:
>>>>> Right, but how do you specify which controller you want to list the
>>>>> models for? The only way I can see is to manually `juju switch
>>>>> ` then re-login to the API and run the AllModels method. Is
>>>>> there a way (as an administrator) to specify which controller you want
>>>>> to list the models for?
>>>>>
>>>>> On Mon, Feb 29, 2016 at 8:46 PM, Ian Booth >>>> <mailto:ian.bo...@canonical.com>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 01/03/16 11:25, Adam Stokes wrote:
>>>>> > On Mon, Feb 29, 2016 at 7:24 PM, Tim Penhey
>>>>> mailto:tim.pen...@canonical.com>>
>>>>> > wrote:
>>>>> >
>>>>> >> On 01/03/16 03:48, Adam Stokes wrote:
>>>>> >>> Is there a way to list all models for a specific controller?
>>>>> >>
>>>>> >> Yes.
>>>>> >
>>>>> >
>>>>> > Mind pointing me to the api docs that has that capability?
>>>>> >
>>>>>
>>>>>
>>>> https://godoc.org/github.com/juju/juju/api/controller#Client.AllModels
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Logging into the API on Juju 2.0

2016-02-29 Thread Ian Booth
Just to be clear, the remote API for listing models for a given controller
exists. But you do need to look at controllers.yaml to see what controllers you
have bootstrapped or have access to in order to make the remote list models API
call.

On 01/03/16 13:14, Adam Stokes wrote:
> Got it squared away, being able to replicate `juju list-controllers` didn't
> have a remote api. So I will continue to read from
> ~/.local/share/juju/controllers.yaml. My intention was to basically see
> what controllers were already bootstrapped and gather the models for those
> controllers using the remote juju api. But that doesn't exist so I will
> mimic what `juju list-controllers` does and read from the yaml file for
> controllers that are local to my admin and users.
> 
> On Mon, Feb 29, 2016 at 9:40 PM, Tim Penhey 
> wrote:
> 
>> It is the controller that you have logged into for the API.
>>
>> What are you wanting?
>>
>> You need a different API connection for each controller.
>>
>> Tim
>>
>> On 01/03/16 15:05, Adam Stokes wrote:
>>> Right, but how do you specify which controller you want to list the
>>> models for? The only way I can see is to manually `juju switch
>>> ` then re-login to the API and run the AllModels method. Is
>>> there a way (as an administrator) to specify which controller you want
>>> to list the models for?
>>>
>>> On Mon, Feb 29, 2016 at 8:46 PM, Ian Booth >> <mailto:ian.bo...@canonical.com>> wrote:
>>>
>>>
>>>
>>> On 01/03/16 11:25, Adam Stokes wrote:
>>> > On Mon, Feb 29, 2016 at 7:24 PM, Tim Penhey
>>> mailto:tim.pen...@canonical.com>>
>>> > wrote:
>>> >
>>> >> On 01/03/16 03:48, Adam Stokes wrote:
>>> >>> Is there a way to list all models for a specific controller?
>>> >>
>>> >> Yes.
>>> >
>>> >
>>> > Mind pointing me to the api docs that has that capability?
>>> >
>>>
>>>
>> https://godoc.org/github.com/juju/juju/api/controller#Client.AllModels
>>>
>>>
>>
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Logging into the API on Juju 2.0

2016-02-29 Thread Ian Booth


On 01/03/16 11:25, Adam Stokes wrote:
> On Mon, Feb 29, 2016 at 7:24 PM, Tim Penhey 
> wrote:
> 
>> On 01/03/16 03:48, Adam Stokes wrote:
>>> Is there a way to list all models for a specific controller?
>>
>> Yes.
> 
> 
> Mind pointing me to the api docs that has that capability?
> 

https://godoc.org/github.com/juju/juju/api/controller#Client.AllModels

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Logging into the API on Juju 2.0

2016-02-26 Thread Ian Booth
The admin user tag for aws is the same as described below. The @local suffix
pertains to the controller, not the cloud - think of it this way: users for a
controller you bootstrap yourself are local to that controller.
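
For example, with a hypothetical user bob added to a bootstrapped controller:

$ juju add-user bob
# canonical user name: bob@local
# API login auth-tag:  user-bob@local (or just user-bob)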

On 27/02/16 11:29, Adam Stokes wrote:
> Thanks that makes sense now. I don't have aws or anything but what would
> the admin user tag for those clouds look like?
> 
> On Fri, Feb 26, 2016 at 7:07 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
> 
>> On Sat, Feb 27, 2016 at 1:10 AM Adam Stokes 
>> wrote:
>>
>>> Also, will the API support non admin users to login and query the various
>>> modelmanager methods they have access to? If so, will this be available by
>>> GA release?
>>>
>>> On Fri, Feb 26, 2016 at 11:45 AM, Adam Stokes 
>>> wrote:
>>>
 Currently, the only way to login to the Juju 2.0 api is to use the Tag
 of 'user-admin'.

>>>
>> You can log in with additional users. With the CLI, you can do:
>>   - juju add-user bob
>>   - juju change-user-password bob
>>   - juju switch-user bob
>> (or you could use the "register" command to add another controller entry;
>> you'll still end up with the "bob" user)
>>
>> However, all the files created by juju during bootstrap (accounts.yaml,
 models.yaml, controllers.yaml) only mention the admin user as 'admin@local'
 for the controller.

>>>
>> "admin" is equivalent to "admin@local"; the latter form is canonical.
>> What you're passing over the API is a different form altogether: it is a
>> "tag". The tag form of a user is: user-[@domain].
>>
>> So for the "admin@local" user, the tag form is "user-admin@local". You
>> can also supply just "user-admin", and the "local" is implied.
>>
>> When will the API login support logging in as the admin user for the
 specified controller?

 An example of the request being passed to the api server:

 {'Type': 'Admin',
  'Version': 3,
  'Request': 'Login',
  'RequestId': 1,
  'Params': {'auth-tag': user,
             'credentials': password}}

 user = 'user-admin' and not 'admin@local' as seen in the yaml configs.

>>>
>> That should be working. Please file a bug if it's not, with steps to
>> reproduce.
>>
>> Cheers,
>> Andrew
>>
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju devel 2.0-beta1 is available for testing

2016-02-21 Thread Ian Booth
The error appears on bootstrap like this:

$ juju bootstrap mycontroller lxd
Creating Juju controller "mycontroller" on lxd/localhost
Bootstrapping model "mycontroller"
Starting new instance for initial controller
Launching instance
ERROR failed to bootstrap model: cannot start bootstrap instance: can't get info
for image 'ubuntu-trusty': json: cannot unmarshal string into Go value of type 
int64

https://bugs.launchpad.net/juju-core/+bug/1547268

Also affects Juju 2.0 alpha2

On 21/02/16 16:46, Mark Shuttleworth wrote:
> On 21/02/16 00:18, Ian Booth wrote:
>> It seems the confusion comes from not seeing lxd in the output of juju
>> list-clouds. list-clouds ostensibly shows available public clouds (aws, azure
>> etc) and any private clouds (maas, openstack etc) added by the user. The lxd
>> cloud is just built-in to Juju. But from a usability perspective, it's seems 
>> we
>> should include lxd in the output of list-clouds.
> 
> Yes please :)
> 
>> NOTE: the latest lxd 2.0.0 beta3 release recently added to the archives has 
>> an
>> api change that is not compatible with Juju. You will need to ensure that 
>> you're
>> still using the lxd 2.0.0 beta2 release to test with Juju.
> 
> Is that the source of the error:
> 
> ERROR invalid config: Can not change ZFS config. Images or containers
> are still using the ZFS pool:
> 
> ?
> 
> Mark
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju devel 2.0-beta1 is available for testing

2016-02-20 Thread Ian Booth
To specify a different LXD host:

$ juju bootstrap mycontroller lxd/<remote host>

For now, just localhost (the default) has been fully tested and is guaranteed to
work with this beta1.

There's no need to edit any clouds.yaml file for the LXD cloud. It's meant to be
really easy to use!


On 21/02/16 09:21, Marco Ceppi wrote:
> Won't the user be able to create different LXD clouds by specifying a
> remote LXD host though?
> 
> On Sun, Feb 21, 2016, 12:19 AM Ian Booth  wrote:
> 
>> The lxd cloud works on Juju 2.0 beta1 out of the box.
>>
>> $ juju bootstrap mycontroller lxd
>>
>> There is no need to edit any clouds.yaml. It Just Works.
>>
>> It seems the confusion comes from not seeing lxd in the output of juju
>> list-clouds. list-clouds ostensibly shows available public clouds (aws,
>> azure
>> etc) and any private clouds (maas, openstack etc) added by the user. The
>> lxd
>> cloud is just built-in to Juju. But from a usability perspective, it
>> seems we
>> should include lxd in the output of list-clouds.
>>
>> NOTE: the latest lxd 2.0.0 beta3 release recently added to the archives
>> has an
>> api change that is not compatible with Juju. You will need to ensure that
>> you're
>> still using the lxd 2.0.0 beta2 release to test with Juju.
>>
>>
>> On 21/02/16 08:26, Jorge O. Castro wrote:
>>> Awesome, a nice weekend present!
>>>
>>> I updated and LXD is not listed when I `juju list-clouds`. Rick and I
>>> were guessing that maybe because the machine I am testing on is on
>>> trusty that we exclude that cloud on purpose. If I was on a xenial
>>> machine I would assume lxd would be available?
>>>
>>> What's an example clouds.yaml look like for a lxd local provider? I
>>> tried manually adding a lxd cloud via `add-cloud` but I'm unsure of
>>> what the formatting would look like for a local provider.
>>>
>>>> Development releases use the "devel" simple-streams. You must configure
>>>> the `agent-stream` option in your environments.yaml to use the matching
>>>> juju agents.
>>>
>>> I am confused, I no longer have an environments.yaml so is this
>>> leftover from a previous release?
>>>
>>> Thanks!
>>>
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju devel 2.0-beta1 is available for testing

2016-02-20 Thread Ian Booth
The lxd cloud works on Juju 2.0 beta1 out of the box.

$ juju bootstrap mycontroller lxd

There is no need to edit any clouds.yaml. It Just Works.

It seems the confusion comes from not seeing lxd in the output of juju
list-clouds. list-clouds ostensibly shows available public clouds (aws, azure
etc) and any private clouds (maas, openstack etc) added by the user. The lxd
cloud is just built-in to Juju. But from a usability perspective, it seems we
should include lxd in the output of list-clouds.

NOTE: the latest lxd 2.0.0 beta3 release recently added to the archives has an
api change that is not compatible with Juju. You will need to ensure that you're
still using the lxd 2.0.0 beta2 release to test with Juju.


On 21/02/16 08:26, Jorge O. Castro wrote:
> Awesome, a nice weekend present!
> 
> I updated and LXD is not listed when I `juju list-clouds`. Rick and I
> were guessing that maybe because the machine I am testing on is on
> trusty that we exclude that cloud on purpose. If I was on a xenial
> machine I would assume lxd would be available?
> 
> What's an example clouds.yaml look like for a lxd local provider? I
> tried manually adding a lxd cloud via `add-cloud` but I'm unsure of
> what the formatting would look like for a local provider.
> 
>> Development releases use the "devel" simple-streams. You must configure
>> the `agent-stream` option in your environments.yaml to use the matching
>> juju agents.
> 
> I am confused, I no longer have an environments.yaml so is this
> leftover from a previous release?
> 
> Thanks!
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: EC2 VPC firewall rules

2016-02-18 Thread Ian Booth
Login was bumped to v3 to prevent accidental logins from older Juju clients
which may appear to connect successfully but then fail later depending on what
operations are performed.

It also allows the "this version is incompatible" message. This was done for 1.x
clients logging into Juju 2.0 servers, but the other way around was missed out.
We'll fix that for beta2.
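
For reference, a v3 login request has the same shape as the v2 one, just with
the bumped version number - a sketch mirroring the request format shown
elsewhere in this archive:

{'Type': 'Admin',
 'Version': 3,
 'Request': 'Login',
 'RequestId': 1,
 'Params': {'auth-tag': user,
            'credentials': password}}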

On 18/02/16 20:51, John Meinel wrote:
> Shouldn't we at least be giving a "juju 2.0 cannot operate with a juju 1.X
> API server, please install juju-1.25 if you want to use this system", or
> something along tohse lines. Admin(3).Login is not implemented sounds like
> a poor way for them to discover that.
> 
> John
> =:->
> 
> 
> On Thu, Feb 18, 2016 at 2:49 PM, John Meinel  wrote:
> 
>> Looks like the changes to Login broke compatibility. We are adding a Login
>> v3, but it looks like the new code will refuse to try to Login to v2. I'm a
>> bit surprised, but it means you'll need to bootstrap again if you want to
>> test it out with current trunk.
>>
>> John
>> =:->
>>
>>
>> On Thu, Feb 18, 2016 at 2:47 PM, Tom Barber 
>> wrote:
>>
>>> Hey Dimiter,
>>>
>>> Thanks for that. As am running trunk I wanted to make sure I was fully up
>>> to date before progressing further. I pulled trunk locally and ran juju
>>> upgrade-juju --upload-tools
>>>
>>> That gives me:
>>>
>>> WARNING no addresses found in space "default"
>>> WARNING using all API addresses (cannot pick by space "default"):
>>> [public:52.30.224.20 local-cloud:172.31.2.38]
>>> WARNING discarding API open error: no such request - method
>>> Admin(3).Login is not implemented (not implemented)
>>> ERROR no such request - method Admin(3).Login is not implemented (not
>>> implemented)
>>>
>>>
>>> I assume the ERROR portion is pretty critical. So here's a slightly off
>>> topic question, which I suspect has a very simple yes/no answer. Can I
>>> either a) force a bootstrapped environment upgrade b) manually upgrade an
>>> environment by passing the error but making the bootstrap node up to date
>>> c) export the existing nodes it manages and import them back into a new
>>> bootstrap node without having to recreate them as well?
>>>
>>> Thanks
>>>
>>> Tom
>>>
>>> --
>>>
>>> Director Meteorite.bi - Saiku Analytics Founder
>>> Tel: +44(0)5603641316
>>>
>>> (Thanks to the Saiku community we reached our Kickstart
>>> 
>>> goal, but you can always help by sponsoring the project
>>> )
>>>
>>> On 18 February 2016 at 10:42, Dimiter Naydenov <
>>> dimiter.nayde...@canonical.com> wrote:
>>>
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 18.02.2016 12:01, Tom Barber wrote:
> Hello folks
>
> I'm not sure if my tinkering has broken something, the fact I'm
> running trunk has broken something or I just don't understand
> something.
>
> Until last week we've been running EC2 classic, but we have now
> switched to EC2-VPC and have launched a few machines.
>
> juju ssh to these machines works fine and I've been configuring
> them to suit our needs.
>
> Then I came to look at external access, `juju expose mysqldb` for
> example, I would then expect to be able to access it from the
> outside world, but can't unless go into my VPC settings and open
> the port in one of the juju security groups, at which point
> external access works fine.
>
> Am I missing something?
>
> Thanks
>
> Tom
>
>
 Hey Tom,

 What you're describing sounds like a bug, as "juju expose "
 should trigger the firewaller worker to open the ports the service has
 declared (with open-ports within the charm) using the security group
 assigned to the host machine for all units of that service.

 Have you changed the "firewall-mode" setting by any chance?
 Can you provide some logs from /var/log/juju/*.log on the bootstrap
 instance (machine 0)?

 Cheers,
 - --
 Dimiter Naydenov 
 Juju Core Sapphire team 
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJWxaAXAAoJENzxV2TbLzHwGgEIAIuj0sPzh7S/4jvTQ6aA/dwP
 i7WkSZ586JkNbEFeCBjDavO6oZFOwIAEW+EpGuy1C0O8BJr5Y2YJBMR96pdf3Rj/
 Y6xS4Byt0HrwCWixt7ut6zu7BsT+nv6YFO7fNQvNYLyroufzpqUKaALJp5xwedkJ
 JIx1iyLnAZ4ZC1/0VkoBM/UjbZN7xQIteNvChBCZSSk8RvbqXCKhbXZKuUKMAw5g
 R+D3wIwLEyZHb5SATcSSdE6nidv4A0F2waac1/3lOvFebeOsnapnRKkIDp3Y9v19
 /zDiDLWSJJvMDau8iIzSQ4STK/sLEmA78iRNkfDRWRifv0z1KkY6ppnhaS+jrj4=
 =kPA7
 -END PGP SIGNATURE-

 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju

>>>
>>>
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.u

Re: Call for Feedback: LXD local provider

2016-01-08 Thread Ian Booth
+1 to what Stuart says below about container creation.

Plus, I have found just today in testing that about 1 in 4 times, the
provisioning simply fails silently. That is, a new container is supposed to be
started to deploy a charm into, but nothing happens: lxc list shows nothing and
the machine remains stuck allocating in Juju. Both Andrew and I have observed
this issue.

https://bugs.launchpad.net/juju-core/+bug/1532186



On 08/01/16 20:08, Stuart Bishop wrote:
> On 8 January 2016 at 00:48, Jorge O. Castro  wrote:
>> Hi everyone,
>>
>> Katherine walked me through using the new LXD provider in the Juju
>> Alpha: https://linuxcontainers.org/lxd/
>>
>> The one caveat right now is that you need to be on wily or xenial as your 
>> host.
>>
>> We are collecting feedback here along with the current working
>> instructions: 
>> https://docs.google.com/document/d/1lbh3ZkkSdBOGRadF_6FWrijbOhH4Vf2f7alrZFr8pz0/edit?usp=sharing
> 
> I don't seem to be able to add feedback there.
> 
> Initial feedback:
>   - lxd hasn't made it into the release notes yet, at least in the
> 1.26alpha3 copy I have.
> 
>   - container creation is as slow or slower than lxc. I think there
> are still some 'apt get updates', upgrades and package installs being
> run between container creation and kicking off the charm. It is well
> over an order of magnitude slower to do 'juju deploy ubuntu' than it
> is to 'lxc launch ubuntu first'. We might need richer templates, with
> agents and dependencies preinstalled. Yes, it is fast but seems only
> as fast as the lxc provider with the btrfs hack has been for some time
> (I'm using the btrfs hack with lxd too, per the lxd getting started
> guide).
> 
>   - bootstrap spits out a well known and understood error. The images
> team needs to fix this or juju team work around it, as it breaks
> charms too (cassandra, rabbit, others have fallen victim): "sudo:
> unable to resolve host
> juju-f2339d90-dd3c-4a1f-8cd2-13e7c795df3f-machine-0". The fix is to
> add the relevant entry for $hostname to /etc/hosts.
> 
>   - The namespace option in environments.yaml doesn't seem to have any
> visible effect. I'm still getting container names like
> juju-f2339d90-dd3c-4a1f-8cd2-13e7c795df3f-machine-0, whereas I'd like
> something friendlier. This is likely just me not understanding what
> this option does.
> 
>   - alas, I tripped over a show stopper for me elsewhere in 1.26alpha3
> so haven't proceeded much further. Anecdotally it seems more reliable
> than the old lxc provider, but I'll need to be able to do more runs to
> confirm that.
> 
>   - I very much look forward to using a remote lxd server. Its always
> surprising how many Cassandra nodes this little laptop can support,
> but offloading it to a cloud vm while keeping the fast container
> spinup times will be nice ;)
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju on MAAS agent tools upgrade mechanism

2015-09-13 Thread Ian Booth


On 12/09/15 23:49, Peter Grandi wrote:
> Apologies for the late reply, I spent most the time in between
> reverse engineering some issues with other ("hipsterish")
> clusterized services.
> 
> In the meantime I have written and just now uploaded my own
> draft overview of how Juju is structured, at a very high level:
> 
>   https://wiki.ubuntu.com/ServerTeam/JujuConcepts
> 
> Needs to be update it a bit with some of the information below.

The wiki article looks great in general, especially since I think you've done it
by observing how Juju runs. There are a few conceptual missteps in some of the
information, though. To help clarify, may I also recommend this as a good
overview of Juju: http://blog.labix.org/2013/06/25/the-heart-of-juju

> 
>> Each machine in a Juju environment runs a jujud binary. The
>> binary is packaged in the so-called tools tarball.
> 
> I seem to have noticed 'jujud' is per-unit rather than
> per-machine, but then I think that there is a very noticeable
> bias of Juju development towards one-unit-per-node on dynamic
> public "cloud" providers... :-)
> 

Correct. Right now, a machine has several jujud services - one for the machine
and one for each unit deployed to that machine. We're hoping to get the time to
consolidate this so that each node has a single jujud agent to manage all of the
workloads on that machine.

The recommended deployment model is indeed one unit per node, but bear in mind a
node may be a container. So to achieve density, a host machine may run multiple
units, each hosted inside an LXC container for example.

>> The bootstrap process needs to download the tools from
>> somewhere to the initial Juju Server.
> 

Either that or the tools can be provided to the bootstrap command from a local
directory; this will upload the tools to the Juju Server. The --metadata-source
argument to bootstrap is the thing to use.

> That would be I guess the Juju "controller" machine, which is
> not necessarily any of the MongoDB repset.
> 

The Juju Servers (what you call the controller above) do correspond to the
MongoDB replicaset machines. A Juju deployment may use only one Juju Server
(also hosting MongoDB) but this is not HA. In an HA scenario, extra Juju Server
machines are added, each running a MongoDB replicaset instance. Any Juju Server
may receive API requests from a Juju node; the MongoDB primary runs on one of
the Servers.

>> For deployments with internet access, the tools come from
>> https://streams.canonical.com/juju/tools/. This is the
>> simplest case and doesn't require any agent-metadata-url or
>> sync-tools usage. I may have missed it in your emails, but I'm
>> assuming your environment does have internet access?
> 
> The Juju controller, the Juju state machines and the Juju nodes
> I am dealing with all have Internet access. They are on various
> private subnets, but the Juju controller also has a public
> address, and the others are NAT'ed.
> 

In that case, no sync-tools or any other setup is needed. A simple juju
bootstrap will pull down the tools and cache them in the Juju Server's
blobstore. When new nodes are added, the tools come from the Juju Server. The
only time tools are fetched again from the internet is when an upgrade is done.

>> [ ... ]  we now store charms and tools in the environment
>> blobstore.
> 
> Thanks for the details!
> 
>> So the above is for bootstrap.
> 
> So far so good, and it looks like bootstrap worked around May
> this year.
> 
>> For upgrades, if the machines in your environment have
>> internet access, then juju upgrade-juju --version=1.24.5
>> should just work.
> 
> That's a bit vague. though. I would run 'juju upgrade-juju' on
> the control node, which has got 1.24.5 and then "somehow" the
> ~70 units with 'jujud' 1.23.3 deployed on the local 12 nodes
> would then download the '.tgz' for their architecture of version
> 1.24.5, but that does not happen and I got instead the error
> message "ERROR no matching tools available" which seems to be
> coming from the 'juju' command running on the control node
> itself.
> 

You typically run juju upgrade-juju on a client machine. I recommend always
using the --version argument to avoid surprises. The algorithm is essentially:
- upgrade command figures out what version to upgrade to [1]
- upgrade command writes an environment setting with the requested version
- Juju machine agents notice the new version request and download the tools to
their nodes
- each agent on the nodes restarts in order to run the jujud binary shipped in
the new tools

[1] the algorithm used to figure out the version of tools to upgrade to is
essentially X+1, but it depends on the client version and the version currently
running in the environment. You can sometimes see a "no tools available"
message, but there needs to be much better UX in this area to explain why the
tools version could not be automatically determined etc. There have been bugs
raised and fixed, e.g. http://pad.lv/1459093, but it's an ongoing area of
improvement.
It's best just to ask explicitly for the version you want.
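
As a concrete sketch of that explicit flow (the version number is only an
example):

$ juju upgrade-juju --version 1.24.5
$ juju status   # agents restart and report the new version as they come back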

Re: Juju on MAAS agent tools upgrade mechanism

2015-09-01 Thread Ian Booth
Hi Peter

There's a lot to respond to in your email - I'll summarise the key points up 
front.

Firstly, let me briefly explain a little about how the tools tarballs are
handled in Juju. I'll cover bootstrap as well as upgrades, just for
completeness, even though you are just upgrading.

Each machine in a Juju environment runs a jujud binary. The binary is packaged
in the so-called tools tarball. The bootstrap process needs to download the
tools from somewhere to the initial Juju Server. For deployments with internet
access, the tools come from https://streams.canonical.com/juju/tools/. This is
the simplest case and doesn't require any agent-metadata-url or sync-tools 
usage.

I may have missed it in your emails, but I'm assuming your environment does have
internet access?

In any case, during the bootstrap process, the tool tarballs are retrieved and
cached in a blobstore maintained by the Juju Server. This blobstore is a Mongo
database called "blobstore". This is separate to the "juju" database where the
collections representing the environment model and tools metadata etc are 
stored.

Note - this is where some of the older material you have found online might be a
little out of date. We have dropped the requirement that cloud providers *must*
provide a blob storage mechanism for Juju to use. In its place, we now store
charms and tools in the environment blobstore. The MAAS provider still has
access to MAAS storage, but doesn't use it any more for tools, charms etc.

If your environment does not have internet access at the time of bootstrap, then
we need a way to provide the initial Juju Server with the tools. There are two ways:
1. Host the tools at a location pointed to by the agent-metadata-url config setting
2. Have the tools available in a local directory and pass that to bootstrap

For the latter case, the sync-tools utility can be used to pull tools tarballs
from streams.canonical.com to a local directory.

If you have no internet access, and tools are available in a local directory,
you bootstrap Juju with the --metadata-source argument. This uploads the tools to
the newly deployed environment from that directory rather than fetching them
from an online source.

eg
$ mkdir test
$ juju sync-tools --local-dir ~/test --version 1.24
$ juju bootstrap --metadata-source ~/test

The sync-tools above is actually unnecessary if the machine used has internet
access. But if it didn't, you'd run sync-tools on a machine which did, and copy
the entire ~/test directory across to the machine used to bootstrap.

So the above is for bootstrap.

For upgrades, if the machines in your environment have internet access, then

$ juju upgrade-juju --version=1.24.5

should just work.

If your Juju Server machine does not have internet access, then what you need to
do is copy the new tools to which we want to upgrade into the Juju Server's
blobstore. The sync-tools command does this.

$ juju sync-tools --version 1.24

Once the above is run, the upgrade command should be able to find the latest
1.24 tools in the Juju Server blobstore.
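
So the offline upgrade, end to end, is just (a sketch; the exact point release
is whatever sync-tools fetched):

$ juju sync-tools --version 1.24
$ juju upgrade-juju --version 1.24.5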

There should be no special processing needed for MAAS. That may have been the
case some time ago, before sync-tools and the local caching of the tools in the
Juju Server blobstore, but not now. The only time bootstrap or upgrade requires
extra steps is for deployments without internet access.

I will start the ball rolling to get that outdated documentation you came across
fixed up.

On your question about whether Juju on MAAS or private clouds is abandonware -
definitely NOT. We have a whole lab set up which tests MAAS and Juju working
together on a daily basis. The fact you've had issues upgrading is most
unfortunate and you've clearly found issues with the documentation which hasn't
helped.

There's lots of folks on the #juju-dev IRC channel on Freenode who would be able
to give you assistance in "real time" if you wanted to hop on there and ask for
help. We'd love to get your issues solved.



On 01/09/15 21:44, Peter Grandi wrote:
> 
>> https://bugs.launchpad.net/juju-core/+bug/1447899 The bug is
>> fixed in the recently released 1.25-alpha1.  What you can try on
>> your system is to explicitly specify the version you want to
>> upgrade to:
> 
> Had a look and forgot to mention that I tried that and got "ERROR
> no matching tools available", that's why I tried to build a local
> cache with known metadata content.
> 
> As it is so typical ("cannot read file" style...), the error
> message does not say which component it came from and which
> version it was trying to match against, and in which list it was
> trying to match it.
> 
>> You appear to have set up a valid tools collection with the
>> correct metadata. The tools tarballs themselves which the
>> metadata refers to reside in a blobstore managed by Juju.
> 
> That's interesting, because I can't find that blobstore, and the
> local setup was upgraded at least from 1.23.2 to 1.23.3 without
> any special work (or at least those who did it don't remember it)

Re: Juju on MAAS agent tools upgrade mechanism

2015-08-31 Thread Ian Booth
Hi Peter

I seem to recall that at one time, the upgrade-juju command could fail to
determine implicitly what tools version it could upgrade to. A quick bug search
revealed this bug

https://bugs.launchpad.net/juju-core/+bug/1447899

The bug is fixed in the recently released 1.25-alpha1.

What you can try on your system is to explicitly specify the version you want to
upgrade to:

juju upgrade-juju --version 1.24.5

You appear to have set up a valid tools collection with the correct metadata.
The tools tarballs themselves which the metadata refers to reside in a blobstore
managed by Juju. Which bug report are you referring to with regard to building a
local tools cache?

Regardless of any local cache, or use of sync-tools etc, if your environment has
internet access then an upgrade request like the one above will go to
https://streams.canonical.com/juju/tools and retrieve the requested tools
(1.24.5) from there.

If there is no outgoing internet access, then sync-tools is used to populate the
environment tools cache with tools sourced from a local directory or elsewhere.

But having said all that, if the upgrade-juju command cannot determine what
tools it should use, it will fail even if there are tools available cached or
otherwise. That's why it's best to explicitly ask for the tools you want. This
also guarantees a repeatable upgrade.

If you wanted to try what I suggest above, please come back with any progress so
we can help if the suggestion doesn't work.


On 29/08/15 02:49, Peter Grandi wrote:
>>> [ ... 1.23.3 has some excessive lease/txns traffic fixed in
>>> 1.24.5 ... ]
>> [ ... ] You may find you have a problem upgrading away from
>> 1.23 (again, due to problems with the new lease feature). I
>> created a Juju plugin to help work around this. [ ... ]
> 
> I have acquired the plugin and found in the "cheatsheet" a hint
> on how to get it and have it recognized, but I haven't been able
> yet to use it because of a different problem.
> 
> The Juju infrastucture that 1.23.3 is running on is managed by
> MAAS, and perhaps because of that I haven't been able to get the
> 1.24.5 tools to get installed, while the '.deb' for 1.24.5 was
> installed without issue on the "control" node.
> 
> The first issue was that 'juju upgrade-juju' reported no newer
> version and that version 1.24.5 was unknown. The colleague who
> upgraded from 1.23.2 to 1.23.3 some months ago thinks it all
> "just worked", but after much searching I figured out that there
> have been a few changes and MAAS is a special case as 'juju help
> upgrade-juju' states that:
> 
>   «Both of these depend on tools availability, which some
>   situations (no outgoing internet access) and provider types
>   (such as maas) require that you manage yourself; see the
>   documentation for "sync-tools".»
> 
> and indeed some tutorials for Juju on MAAS throw in a
> 'sync-tools' line. However that did not work for me, and I got
> an error message with '1.24.5--amd64', and then during my
> attempts to work around by building a local 'tools' cache as
> suggested by a bug report, with SHA256 mismatches (the report of
> such mismatches was wrong).
> 
> Eventually I managed to build a local 'tools' cache that seems
> to work containing just:
> 
>   local-tools/tools/releases/juju-1.24.5-trusty-amd64.tgz
>   local-tools/tools/streams/v1/index.json
>   local-tools/tools/streams/v1/index2.json
>   local-tools/tools/streams/v1/com.ubuntu.juju-released-tools.json
> 
> and I also fixed the 'toolsmetadata' collection as indicated in
> a bug report and now it looks like:
> 
>   { "_id" : "1.23.2-trusty-amd64", "version" : "1.23.2-trusty-amd64", "size" :
> NumberLong(11555177), "sha256" :
> 
> "acabf7b8f9d9a9d718a083f80355dfbdce228bb2f8c4e9cfab7899c730f7290b",
> "path" :
> 
> "tools/1.23.2-trusty-amd64-acabf7b8f9d9a9d718a083f80355dfbdce228bb2f8c4e9cfab7899c730f7290b",
> "txn-revno" : NumberLong(2), "txn-queue" : [
> "55534945705cc83c1638_2232d0f1" ] }
>   { "_id" : "1.23.3-trusty-amd64", "path" :
> 
> "tools/1.23.3-trusty-amd64-007c62a742c974c3f082964f37b04c28d46345e4816a926c31f8bdef53000552",
> "sha256" :
> 
> "007c62a742c974c3f082964f37b04c28d46345e4816a926c31f8bdef53000552",
> "size" : NumberLong(11566458), "txn-queue" : [
> "558c54bc705cc881670003fd_9cb7835d",
> "558c54bc705cc881670003fe_d9e05d7b" ], "txn-revno" :
> NumberLong(2), "version" : "1.23.3-trusty-amd64" }
>   { "_id" : "1.24.5-trusty-amd64", "version" : "1.24.5-trusty-amd64", "size" :
> NumberLong(16649545), "sha256" :
> 
> "e080a20aed15abb1e131dec2bafa227ac395cfb5710b10c05d82f9c50243a497",
> "path" :
> 
> "tools/1.24.5-trusty-amd64-e080a20aed15abb1e131dec2bafa227ac395cfb5710b10c05d82f9c50243a497",
> "txn-revno" : Numb

Re: Getting back to t1.micros

2015-05-14 Thread Ian Booth
> 
> For the past year I have been charming in AWS t1.micro instances. These
> have been great resources, and allowed me and others to get started with
> charming and knowing the ecosystem. However, I have recently found out that
> the support for this instance type has been basically deprecated, and being
> replaced by t2.micros. However, I would like to propose getting back to
> t1.* as a standard for the following reasons:
> 
> * t2.* instances only allow you to have 10% of the CPU power on the machine
> as a basis. They use a credits system, where you can only get 100% CPU use
> for a limited period of time, compared to the t1.micro instances, where you
> would get 100% CPU usage straightforward, with no credit system. This also
> allows you to get more out of the machine, and is ideal for developing
> charms.
> * t1.micro instances are part of the free tier, and way cheaper than
> m1.smalls (which are the Juju default), allowing *anyone* to either get the
> free tier and start charming away, or do it for a super low cost.
> * When I set the same constraints at bootstrap, it just gives me an error
> saying it uses a different HDD type. Could we at least get that fixed?
> 

Can you please provide the text of the error?

> I would like to hear your opinions on getting t1.* back as a default. It
> was a great resource, and now I can't even get it by forcing it because it
> will throw errors.
> 

We can't use t1.* as the default because, depending on one's account and the
region used, that instance type is simply not available any more. The default
needs to be something that always works out of the box for everyone. We can,
though, look into why you are having trouble using t1 and fix that issue so that
at least it remains an option for you.
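
In the meantime, explicitly requesting the instance type is worth trying once
that fix lands; something along these lines (a sketch, and whether t1.micro is
offered still depends on your account and region):

$ juju bootstrap --constraints "instance-type=t1.micro"
$ juju deploy ubuntu --constraints "instance-type=t1.micro"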

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: bind mount support for local provider

2015-01-16 Thread Ian Booth
Hi Cory

This is part of what we are planning to deliver this cycle as part of the
storage work. We also plan on being able to provide the container with access to
block devices, e.g. loopback, either in the container's filesystem or on the host
machine.


On 17/01/15 02:11, Corey Bryant wrote:
> Hi all,
>
> Do there happen to be any plans for juju bind mount support for the local
> provider?
>
> For example:  juju deploy mysql --bind "/shared/mysql /shared"
>
> which would bind mount the host /shared/mysql directory to /shared in the
> deployed container.
>
>
>


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Unit-get

2014-11-09 Thread Ian Booth


On 10/11/14 11:43, José Antonio Rey wrote:
> Correct me if I'm wrong, but I'm seeing this targeted for 1.22?
>

You are correct, but it's also been targeted to the 1.21-beta1 milestone.

>>
>> Note that for the EC2 case, using the DNS name for the unit's public address 
>> has
>> been reported as a bug, since in the case of split horizon DNS, the DNS name
>> resolves to a private IP address internally.
>>
>> https://bugs.launchpad.net/bugs/1308374
>>
>> This bug is being fixed for the 1.21 Juju release, which we hope to get into
>> beta within the next day or so.
>>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Unit-get

2014-11-09 Thread Ian Booth


On 10/11/14 11:32, Michael Nelson wrote:
> On Fri, Nov 7, 2014 at 3:19 PM, Andrew Wilkins
>  wrote:
>> Hi Sameer,
>>
>> The behaviour changed a few months ago to address a bug:
>> https://bugs.launchpad.net/juju-core/+bug/1353442
>>
>> Is this causing you problems, or were you just surprised/curious?
> 
> 
> Hi Andrew. This did cause a bug in the elasticsearch charm recently
> [1] - I'd not realised it was related to a juju change, but thought it
> was just a difference on ec2, that the private-address was not an IP
> address (I had only tested with local, canonistack and HP).
>

Note that for the EC2 case, using the DNS name for the unit's public address has
been reported as a bug, since in the case of split horizon DNS, the DNS name
resolves to a private IP address internally.

https://bugs.launchpad.net/bugs/1308374

This bug is being fixed for the 1.21 Juju release, which we hope to get into
beta within the next day or so.


> The reason it caused an issue was because we were using the
> private-address as part of a firewall rule which required an IP
> address. We've pushed a fix now, but is there a way to foresee this
> kind of change in the future? Maybe for these changes which might
> affect charms, we could trigger retests for some set of approved
> charms across HP, ec2 etc.?
> 
> [1] https://bugs.launchpad.net/charms/+source/elasticsearch/+bug/1386664
> 
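
As a stop-gap for charms that need an IP address rather than a hostname, one
defensive pattern is to resolve whatever unit-get returns before using it. A
minimal sketch, assuming the returned name resolves on the unit (getent also
passes an address straight through if one is returned):

PRIV_ADDR=$(unit-get private-address)
PRIV_IP=$(getent hosts "$PRIV_ADDR" | awk '{print $1}')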

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju + MAAS + Image Downloads

2014-09-04 Thread Ian Booth


On 05/09/14 09:02, David Britton wrote:
> Hi juju folks --
> 
> I'm using MAAS + Juju to do some testing behind a firewall with LXCs.  I
> want to accelerate the download of the large images that I am
> downloading from cloud-images.ubuntu.com.
> 
> I see that MAAS has cloud images.  Ideally, I'd like to instruct Juju to
> download them from there:
> 
> https://bugs.launchpad.net/juju-core/+bug/1357045
> 
> But I'm not sure that is possible.  So, I'll leave it to someone else to
> pick up that bug if they think it's worthwhile.
> 
> I then tried to setup squid and proxy them transparently and found that
> the image-metadata-url that I give juju is only for the .json files that
> are referenced.  The images are still downloaded via https from
> cloud-images.ubuntu.com.  I'm not even sure if this is a bug.  I mean, I
> understand why you want https, but if I want to mirror it, it's a new
> level of commitment to make it https only especially in a private
> environment.
> 
> Is the only option for me to mirror cloud-images and set up an https
> endpoint (or a transparent https m-i-t-m proxy) in order to avoid
> downloading these large images over and over?
> 


I'm not an LXC (nor MAAS) expert, but I see there being two options:

1. Use the ubuntu-cloud template as we do now, but pass in the -T argument to
instruct the script to obtain the specified LXC images from an http endpoint
using wget. This would then replace/override the use of ubuntu-cloudimg-query,
which is what currently reaches out to cloud-images.ubuntu.com to obtain the
images.

2. Use a different template script and specify it using the -t argument to
lxc-create. This script would have to replicate what's in
/usr/share/lxc/templates/lxc-ubuntu-cloud but obtain the lxc images from 
elsewhere.

For MAAS, if there were an endpoint which could serve the LXC tarballs, as
opposed to the root images themselves at
http://cluster-name/MAAS/static/images/ubuntu/amd64/generic/trusty/release/root-image,
then option 1 would be easiest. We could provide a new config option to specify
the correct URL to pass to -T.
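
To illustrate option 1, done by hand it would look roughly like this (the
mirror URL is hypothetical):

$ lxc-create -t ubuntu-cloud -n test -- -r trusty \
    -T http://my-mirror.example.com/trusty-server-cloudimg-amd64-lxc.tar.gz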

Thoughts?



-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: --constraints "root-disk=16384M" fails in EC2

2014-05-29 Thread Ian Booth
If a root disk constraint is specified, Juju will translate that into a block
device mapping request when the instance is started. Hence we do start an
instance with the required root disk size, but the subsequent constraint
matching fails. That's my understanding anyway.
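
For the curious, the translation amounts to asking EC2 for a larger root volume
at launch time; roughly equivalent to this AWS CLI invocation (an illustration
only, not what Juju literally runs; /dev/sda1 is the typical root device for
Ubuntu AMIs and VolumeSize is in GiB):

$ aws ec2 run-instances ... \
    --block-device-mappings '[{"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 16}}]'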

On 30/05/14 11:42, Kapil Thangavelu wrote:
> fwiw. all the ubuntu cloud images root disks in ec2 have 8gb of disk size
> by default, juju doesn't reallocate the root volume size when creating an
> instance (if it did cloudinit will auto resize the root fs if its created
> with a larger root vol).
> 
> 
> On Thu, May 29, 2014 at 8:26 PM, Ian Booth  wrote:
> 
>> Hi Stein
>>
>> This does appear to be a bug in Juju's constraints handling for EC2.
>> I'd have to do an experiment to confirm, but certainly reading the code
>> appears to show a problem.
>>
>> Given how EC2 works, in that Juju asks for the specified root disk size
>> when starting an instance, I don't have a workaround that I can think
>> of to share with you.
>>
>> The fix for this would be relatively simple to implement and so can be
>> done in time for the next stable release (1.20) which is due in a few
>> weeks. Alternatively, we hope to have a new development release out
>> next week (1.19.3).  I'll try to get any fix done in time for that also.
>>
>> I've raised bug 1324729 for this issue.
>>
>> On Fri 30 May 2014 09:29:15 EST, GMail wrote:
>>> Trying to deploy a charm with some extra root disk space. When using the
>> root-disk constraint defined above I get the following error:
>>>
>>> '(error: no instance types in us-east-1 matching constraints
>> "cpu-power=100 root-disk=16384M")'
>>>
>>> I’m deploying a bundle with the following constraints: constraints:
>> "mem=4G arch=amd64”, but need more disk-space then the default provided.
>>>
>>> Any suggestions ?
>>>
>>>
>>> Stein Myrseth
>>> Bjørkesvingen 6J
>>> 3408 Tranby
>>> mob: +47 909 62 763
>>> mailto:stein.myrs...@gmail.com
>>>
>>>
>>>
>>>
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: --constraints "root-disk=16384M" fails in EC2

2014-05-29 Thread Ian Booth
Hi Stein

This does appear to be a bug in Juju's constraints handling for EC2. 
I'd have to do an experiment to confirm, but certainly reading the code 
appears to show a problem.

Given how EC2 works, in that Juju asks for the specified root disk size 
when starting an instance, I don't have a workaround that I can think 
of to share with you.

The fix for this would be relatively simple to implement and so can be 
done in time for the next stable release (1.20) which is due in a few 
weeks. Alternatively, we hope to have a new development release out 
next week (1.19.3).  I'll try to get any fix done in time for that also.

I've raised bug 1324729 for this issue.

On Fri 30 May 2014 09:29:15 EST, GMail wrote:
> Trying to deploy a charm with some extra root disk space. When using the 
> root-disk constraint defined above I get the following error:
>
> '(error: no instance types in us-east-1 matching constraints "cpu-power=100 
> root-disk=16384M")'
>
> I’m deploying a bundle with the following constraints: constraints: "mem=4G 
> arch=amd64”, but need more disk-space then the default provided.
>
> Any suggestions ?
>
>
> Stein Myrseth
> Bjørkesvingen 6J
> 3408 Tranby
> mob: +47 909 62 763
> mailto:stein.myrs...@gmail.com
>
>
>
>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Detecting cowboy'd changes in a Juju Env

2014-05-12 Thread Ian Booth
Hi Joey

> 
> I'm curious to know if there is any reliable mechanism to detect a
> cowboyed change inside a juju environment and then report them.
> 
> A non-juju synonym of what I'm trying to accomplish would be with puppet
> managing a system's /etc directory. If that directory is under some RCS
> you can diff it and tell what changes have been made. I'd like to do
> something similar within a juju environment.
> 

I assume you are talking about someone using the juju set-env command to change
an environment value, and knowing that that has happened. Right now, AFAIK,
there's no tooling in Juju that provides a packaged solution for what you want.

Currently, Juju's initial environment state comes from the environments.yaml
file at bootstrap, which is transformed into a yaml .jenv file inside
the $JUJU_HOME/environments directory. Each set-env invocation also leaves
information in the server-side log files. So theoretically you could determine
if changes have been made and who made them, by combining information from
get-env with the sources just mentioned. Clearly, this is not ideal.
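
One crude approach in the meantime is to snapshot the settings periodically and
diff them (a sketch; the file names are illustrative):

$ juju get-env > env-before.yaml
  ... later, after a suspected cowboyed change ...
$ juju get-env > env-after.yaml
$ diff env-before.yaml env-after.yaml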

A topic of discussion at the recent Juju sprint was to add audit logging to
Juju. I *think* that topic has slipped off the todo list for the next cycle. So
I don't personally  have a good answer for you right now. Perhaps someone else
can chime in with a better answer?

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Attempting to use Canonistack w/ Juju-core are the instructions up to date?

2014-01-29 Thread Ian Booth
Those instructions are indeed out of date. The correct attribute name is now
tools-metadata-url.

Having said that, Canonistack, along with AWS, HP Cloud and Azure, supports
running Juju releases from the PPA out-of-the-box, without the need to upload
any tools or set the URL used to locate the tools. This is because the Juju
release process uploads the tools for those clouds so that Juju can find them.
If you are running from source, then yes, --upload-tools is necessary.
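
For anyone who does need to point at a non-default tools location, the relevant
environments.yaml stanza now looks something like this (a sketch; the swift URL
is the one from the wiki page quoted below):

canonistack:
  type: openstack
  tools-metadata-url: https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60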

On 30/01/14 09:26, Sean Feole wrote:
> Hey Everyone,
> 
> I've been looking at the following wiki page in attempt to use Canonistack
> with juju. Are these wiki instructions up-to-date ??
> 
> https://wiki.canonical.com/InformationInfrastructure/IS/CanonicalOpenstack/CanonistackWithJujuCore
> 
> These instructions were written for juju-core 1.13 (June/2013) and I just
> pulled 1.17.1 from the -devel ppa.
> 
>  I noticed there is some small print on the wiki regarding adding
> tools-url: https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60
> to the env.yaml. Is this still valid??   I'm asking because I saw a
> tools-metadata-url: field in the environment.yaml file.
> 
> I didn't want to spam the juju-dev list with the same email so if that's a
> better place to ask, let me know :)
> 
> Thanks,
> -Sean
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju