juju stable 1.20.0 is released

2014-07-03 Thread Curtis Hovey-Canonical
juju-core 1.20.0

A new stable release of Juju, juju-core 1.20.0, is now available.


Getting Juju

juju-core 1.20.0 is available for utopic and backported to earlier
series in the following PPA:

https://launchpad.net/~juju/+archive/stable


New and Notable

* High Availability

* Availability Zone Placement

* Azure Availability Sets

* Juju debug-log Command Supports Filtering and Works with LXC

* Constraints Support instance-type

* The lxc-use-clone Option Makes LXC Faster for Non-Local Providers

* Support for Multiple NICs with the Same MAC

* MAAS Network Constraints and Deploy Argument

* MAAS Provider Supports Placement and add-machine

* Server-Side API Versioning


Resolved issues

* Juju client is not using the floating ip to connect to the
  state server
  Lp 1308767

* Juju help promotes the 'wrong' versions of commands
  Lp 1299120

* Juju backup command fails against trusty bootstrap node
  Lp 1305780

* The root-disk constraint is broken on ec2
  Lp 1324729

* Bootstrapping juju from within a juju deployed unit fails
  Lp 1285256

* Images are not found if endpoint and region are inherited from
  the top level in simplestreams metadata
  Lp 1329805

* Missing @ syntax for reading config setting from file content
  Lp 1216967

* Deploy --config assumes relative path if it starts with a tilde (~)
  Lp 1271819

* local charm deployment fails with symlinks
  Lp 1330919

* Git usage can trigger disk space/memory issues for charms with blobs
  Lp 1232304

* Juju upgrade-charm fails because of git
  Lp 1297559

* .pyc files caused upgrade-charm to fail with merge conflicts
  Lp 1191028

* Juju debug-hooks should display a start message
  Lp 1270856

* Can't determine which relation is in error from status
  Lp 1194481

* Dying units departure is reported late
  Lp 1192433

* Juju upgrade-juju needs a dry run mode
  Lp 1272544

* Restarting API server with lots of agents gets hung
  Lp 1290843

* Juju destroy-environment >=256 nodes fails
  Lp 1316272

* Azure destroy-environment does not complete
  Lp 1324910

* Azure bootstrap dies with xml schema validation error
  Lp 1259947

* Azure provider stat output does not show machine hardware info
  Lp 1215177

* Bootstrapping azure causes memory to fill
  Lp 1250007

* Juju destroy-environment on openstack should remove sec groups
  Lp 1227574

* Floating IPs are not recycled in OpenStack Havana
  Lp 1247500

* LXC cloning should be default behaviour
  Lp 1318485

* Nasty worrying output using local provider
  Lp 1304132

* Intermittent error destroying local provider environments
  Lp 1307290

* Default bootstrap timeout is too low for MAAS environments
  Lp 1314665

* MAAS provider cannot provision named instance
  Lp 1237709

* Manual provisioning requires target hostname to be directly resolvable
  Lp 1300264

* Manual provider specify bash as shell for ubuntu user
  Lp 1303195


High Availability

The juju state-server (bootstrap node) can be placed into high
availability mode. Juju will automatically recover when one or more of
the state-servers fail. You can use the 'ensure-availability' command
to create the additional state-servers:

juju ensure-availability

The 'ensure-availability' command creates 3 state servers by default,
but you may use the '-n' option to specify a larger number. The number
of state servers must be odd. The command supports the 'series' and
'constraints' options like the 'bootstrap' command. You can learn more
by running 'juju ensure-availability --help'.
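
For example, to create 5 state servers with a memory constraint (the
values here are illustrative):

juju ensure-availability -n 5 --constraints mem=4G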


Availability Zone Placement

Juju supports explicit placement of machines in availability zones
(AZs), and implicitly spreads units across the available zones.

When bootstrapping or adding a machine, you can specify the availability
zone explicitly as a placement directive. e.g.

juju bootstrap --to zone=us-east-1b
juju add-machine zone=us-east-1c

If you don't specify a zone explicitly, Juju will automatically and
uniformly distribute units across the available zones within the region.
Assuming the charm and the charm's service are well written, you can
rest assured that IaaS downtime will not affect your application.
Commands you already use will ensure your services are always available.
e.g.

juju deploy -n 10 service

When adding machines without an explicitly specified AZ, or when adding
units to a service, the ec2 and openstack providers will now
automatically spread instances across all available AZs in the region.
The spread is based on the density of instance distribution groups.

State servers compose a distribution group: when running 'juju
ensure-availability', state servers will be spread across AZs. Each
deployed service (e.g. mysql, redis, whatever) composes a separate
distribution group; the AZ spread of one service does not affect the AZ
spread of another service.
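
For example, each of the following deployments is spread across the
region's zones independently of the other (service names illustrative):

juju deploy -n 3 mysql
juju deploy -n 3 redis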

Amazon EC2 and OpenStack Havana-based clouds (and newer) are supported,
including HP Cloud. Older versions of OpenStack are not supported.


Azure Availability Sets

Re: juju stable 1.20.0 is released

2014-07-03 Thread Alexis Bruemmer
Good stuff, well done team!



Re: Proposal: making apt-get upgrade optional

2014-07-03 Thread Tim Penhey
I do just want to make the point that we are not an Ubuntu-only system
any more, nor even Linux-only.

I'd prefer if we kept away from terms like apt-get, as they don't make
sense for Windows or CentOS. While we could certainly treat those
values differently on the other platforms, it definitely gives the
feeling that we are *mainly* Ubuntu and (hand-wavey) some others.

Any ideas for better names?

Tim


On 04/07/14 02:56, Matt Bruzek wrote:
 +1 to making these options configurable and having sane defaults.
 
 Thanks!
 
- Matt Bruzek matthew.bru...@canonical.com
 
 
 On Thu, Jul 3, 2014 at 9:50 AM, Antonio Rosales
 antonio.rosa...@canonical.com wrote:
 
 On Tue, Jul 1, 2014 at 7:19 PM, Andrew Wilkins
 andrew.wilk...@canonical.com wrote:
  On Wed, Jul 2, 2014 at 3:38 AM, Antonio Rosales
  antonio.rosa...@canonical.com wrote:
 
 Suggest we make an environments.yaml key value of, say, apt-get-update,
 set to a boolean with the default being true. Existing charms are
 timing out[0] when apt-get update is turned off due to stale apt-get
 metadata. Users can then make the choice, and we can make suggestions
 in the docs as to what this key value means and how it can improve
 performance, especially in the developer scenario when they care more
 about fast iterative deploys.
 
  Thoughts?
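
 A sketch of what that might look like in environments.yaml (key name
 as suggested here; the rest of the stanza is illustrative):

 environments:
   my-env:
     type: ec2
     apt-get-update: true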
 
 
 I'm not suggesting we turn off update, just upgrade. We add repos
 (cloud-tools, ppa), so we need to update for juju's dependencies
 anyway. I don't think my proposal will affect charms.
 
 Ah yes, sorry.  However, I would still suggest upgrade and update be
 config parameters with the default being past behavior. On that note
 it would also be nice to have a utility for Juju to pass on additional
 user-defined cloud-init config options.
 
 -thanks,
 Antonio
 
 
 
  [0] https://bugs.launchpad.net/juju-core/+bug/1336353
 
  -thanks,
  Antonio
 
  On Tue, Jul 1, 2014 at 4:43 AM, Andrew Wilkins
  andrew.wilk...@canonical.com wrote:
   On Tue, Jul 1, 2014 at 5:45 PM, John Meinel
   j...@arbash-meinel.com wrote:
  
   I would just caution that we'd really prefer behavior to be
   consistent across platforms and clouds, and if we can work with
   Microsoft to make 'apt-get update' faster in their cloud, everyone
   wins who uses Ubuntu there, not just us.
  
  
   I was meaning to disable it across all providers. It would be ideal
   to improve upgrades for all Ubuntu users, but from what I can tell
   it's a case of Azure's OS disks being a tad slow. If you start going
   up the instance-type scale, then you do get more IOPS. I haven't
   measured how much of a difference it makes.
  
  
   Have we looked into why upgrade is taking 3m+? Is it the time to
   download things, or the time to install things? I've certainly heard
   things like disk ops being a bit poor on Azure (vs CPU, which is
   actually better than average). Given the variance of 6m+ to 3m20s
   with eatmydata, it would seem disk sync performance is at least a
   factor here.
  
  
   I just looked, and it is mostly not network related (I assume mostly
   I/O bound). On ec2 an upgrade fetches all the bits in 0s; on Azure
   it's taking 5s.
  
   Given I believe apt-get update is also disabled for local (it is run
   on the initial template, and then not run for the other instances
   copied from that), there is certainly precedent. I think a big
   concern is that we would probably still want to do apt-get update
   for security related updates. Though perhaps that is all of the
   updates we are applying anyway...
  
   If I read the aws.json file correctly, I see only 8 releases of the
   'precise' image, 6 of 'trusty', and 32 total dates of released
   items. And some of the trusty releases are 2014-01-22.1, which means
   they are likely to be beta releases.
  
   Anyway, that means that they are actually averaging an update only
   once a month, which is a fairly big window of updates to apply by
   the end of the month (I would imagine). And while that does mean it
   takes longer to boot, it also means you would be open to more
   security holes without it.
  
  
   My contention is that if we don't *keep* it updated, we may as well
   just leave it to the user. When you create an instance in ec2 or
   Azure

RFC: mongo _id fields in the multi-environment juju server world

2014-07-03 Thread Tim Penhey
Hi folks,

Very shortly we are going to start on the work to be able to store
multiple environments within a single mongo database.

Most of our current entities are stored in the database with their name
or id fields serialized to bson as the _id field.

As far as I know (and I may be wrong), if you are adding a document to
the mongo collection, and you do not specify an _id field, mongo will
create a unique value for you.

In our new world, things that used to be unique, like machines,
services, units etc, are now only unique when paired with the
environment id.

It seems we have a number of options here.

1. change the _id field to be a composed field where it is the
concatenation of the environment id and the existing id or name field.
If we do take this approach, I strongly recommend having the fields that
make up the key be available by themselves elsewhere in the document
structure.
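
For example, a machine document might then start like this
(illustrative; the separator choice is arbitrary):
  { _id: blah:foo, env_uuid: blah, name: foo, ...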

2. let mongo create the _id field, and we ensure uniqueness over the
pair of values with a unique index. One thing I am unsure about with
this approach is how we currently do our insertion checks, where we do
a "document does not exist" check.  We wouldn't be able to do this as a
transaction assertion as it can only check for _id values.  How fast are
the indices updated?  Can having a unique index for a document work for
us?  I'm hoping it can if this is the way to go.
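
For illustration, a minimal mgo sketch of such an index (assuming
gopkg.in/mgo.v2; the package, collection, and field names are
illustrative):

  package state // hypothetical

  import "gopkg.in/mgo.v2"

  // Enforce uniqueness of the (env_uuid, name) pair even though mongo
  // auto-generates the _id values.
  func ensureUniquePerEnv(machines *mgo.Collection) error {
      return machines.EnsureIndex(mgo.Index{
          Key:    []string{"env_uuid", "name"},
          Unique: true,
      })
  }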

3. use a composite _id field such that the document may start like this:
  { _id: { env_uuid: blah, name: foo}, ...
This gives the benefit of existence checks, and real names for the _id
parts.
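
For illustration, a minimal mgo/txn sketch of option 3 (assuming
gopkg.in/mgo.v2/txn; the package, type, field, and collection names are
illustrative):

  package state // hypothetical

  import "gopkg.in/mgo.v2/txn"

  // docID is a composite _id, unique per (environment, name) pair.
  type docID struct {
      EnvUUID string `bson:"env_uuid"`
      Name    string `bson:"name"`
  }

  type machineDoc struct {
      ID     docID  `bson:"_id"`
      Series string `bson:"series"`
  }

  // The "document does not exist" check is still a plain _id assertion,
  // which is what keeps option 3 compatible with our insertion checks.
  func insertMachine(r *txn.Runner, envUUID, name, series string) error {
      id := docID{EnvUUID: envUUID, Name: name}
      return r.Run([]txn.Op{{
          C:      "machines",
          Id:     id,
          Assert: txn.DocMissing,
          Insert: machineDoc{ID: id, Series: series},
      }}, "", nil)
  }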

Thoughts? Opinions? Recommendations?

BTW, I think that if we can make 3 work, then it is the best approach.

Tim



Re: RFC: mongo _id fields in the multi-environment juju server world

2014-07-03 Thread John Meinel
According to the mongo docs
(http://docs.mongodb.org/manual/core/document/#record-documents):
"The field name _id is reserved for use as a primary key; its value must
be unique in the collection, is immutable, and may be of any type other
than an array."

That makes it sound like we *could* use an object for the _id field and do
_id = {env_uuid:, name:}

Though I thought the purpose of doing something like that is to allow
efficient sharding in a multi-environment world.

Looking here: http://docs.mongodb.org/manual/core/sharding-shard-key/
The shard key must be indexed (which is just fine for us w/ the primary
_id field or with any other field on the documents), and the index on
the shard key *cannot* be a multikey index
(http://docs.mongodb.org/manual/core/index-multikey/#index-type-multikey).
I don't really know what that means in the case of wanting to shard
based on an object instead of a simple string, but it does sound like it
might be a problem.
Anyway, for purposes of being *unique* we may need to put environ uuid in
there, but for the purposes of sharding we could just put it on another
field and index that field.
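
For what it's worth, a minimal sketch of that (assuming gopkg.in/mgo.v2;
the package and field names are illustrative):

  package state // hypothetical

  import "gopkg.in/mgo.v2"

  // Index env_uuid on its own so it can serve as a shard key,
  // independently of whatever makes _id unique.
  func indexEnvUUID(machines *mgo.Collection) error {
      return machines.EnsureIndex(mgo.Index{Key: []string{"env_uuid"}})
  }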

John
=:-


