# juju-core 1.26-alpha2

A new development release of Juju, juju-core 1.26-alpha2, is now
available.
This release replaces version 1.26-alpha1.


## Getting Juju

juju-core 1.26-alpha2 is available for Wily and backported to earlier
series in the following PPA:

    https://launchpad.net/~juju/+archive/devel
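
On Ubuntu, installing from that PPA typically looks like this (a sketch;
the package name juju-core is assumed):

    sudo add-apt-repository ppa:juju/devel
    sudo apt-get update
    sudo apt-get install juju-core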

Windows and OS X users will find installers at:

    https://launchpad.net/juju-core/+milestone/1.26-alpha2

Development releases use the "devel" simple-streams. You must configure
the `agent-stream` option in your environments.yaml to use the matching
juju agents.
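
For example, under the environment stanza you plan to test with (the
environment name and type here are illustrative):

    my-test-env:
        type: lxd
        agent-stream: devel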

Upgrading from stable releases to development releases is not
supported. You can upgrade test environments to development releases
to test new features and fixes, but it is not advised to upgrade
production environments to 1.26-alpha2.
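
For example, to move an existing test environment onto this release
(assuming agent-stream is already set to devel):

    juju upgrade-juju --version 1.26-alpha2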


## Notable Changes

* Native support for charm bundles
* Unit agent improvements
* API login with macaroons
* LXD Provider
* Microsoft Azure Resource Manager provider

### Native support for charm bundles

The Juju 'deploy' command can now deploy a bundle. The Juju Quickstart
or Deployer plugins are not needed to deploy a bundle of charms. You can
deploy the mediawiki-single bundle like so:

    juju deploy cs:bundle/mediawiki-single

Local bundles can be deployed by passing the path to the bundle. For
example:

    juju deploy ./openstack/bundle.yaml

Local bundles can also be deployed from a local repository. Bundles
reside in the "bundle" subdirectory. For example, your local juju
repository might look like this:

    juju-repo/
     |
     - trusty/
     - bundle/
       |
       - openstack/
         |
         - bundle.yaml

and you can deploy the bundle like so:

    export JUJU_REPOSITORY="$HOME/juju-repo"
    juju deploy local:bundle/openstack

Bundles deployed from the command line like this now support storage
constraints. To specify how to allocate storage for a service, add a
"storage" key underneath the service, and under "storage" add a key for
each store you want to allocate, along with its constraints. For example,
say you're deploying ceph-osd and you want each unit to have a 50GiB
disk:

    ceph-osd:
        ...
        storage:
            osd-devices: 50G
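
For context, a minimal bundle carrying that storage key might look like
this (a sketch; the charm URL and unit count are illustrative):

    services:
        ceph-osd:
            charm: cs:trusty/ceph-osd
            num_units: 3
            storage:
                osd-devices: 50G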

Because a bundle should work across cloud providers, the constraints in
the bundle should not specify a pool/storage provider; they should just
use the default for the cloud. To customize how storage is allocated, you
can use the "--storage" flag with a new bundle-specific format,
"--storage service:store=constraints". For example, say you're deploying
OpenStack and you want each unit of ceph-osd to have 3x50GiB disks:

    juju deploy ./openstack/bundle.yaml --storage ceph-osd:osd-devices=3,50G


### Unit agent improvements

We've made improvements to worker lifecycle management in the unit agent
in this release. The resource dependencies (API connections, locks,
etc.) shared among concurrent workers that comprise the agent are now
well-defined, modeled and coordinated by an engine, in a design inspired
by Erlang supervisor trees.

This improves the long-term testability of the unit agent, and should
improve the agent's resilience to failure. This work also allows hook
contexts to execute concurrently, which supports features in development
targeting 2.0.


### API login with macaroons

Added an alternative API login method based on macaroons in support of a
new charm publishing workflow targeting 16.04.


### LXD Provider

The new LXD provider is the best way to use Juju locally.

The state-server is no longer your host machine; it is now an LXC
container. This keeps your host machine clean and lets you use your local
environment much more like a traditional Juju environment. Because of
this, you can test things like Juju high availability without needing a
cloud provider.

The previous local provider remains functional for backwards
compatibility.

#### Requirements

- Running Wily (LXD is installed by default)

- Import the LXD cloud-images that you intend to deploy and register
  an alias:

      lxd-images import ubuntu trusty --alias ubuntu-trusty
      lxd-images import ubuntu wily --alias ubuntu-wily

  or register an alias for your existing cloud-images:

      lxc image alias create ubuntu-trusty <fingerprint>
      lxc image alias create ubuntu-wily <fingerprint>

- For alpha2, you must specify the "--upload-tools" flag when
  bootstrapping the environment that will use trusty cloud-images.
  This is because most of Juju's charms are for Trusty, and the
  agent-tools for Trusty don't yet have LXD support compiled in.

      juju bootstrap --upload-tools

"--upload-tools" is not required for deploying a wily state-server and
wily services.


#### Specifying an LXD Environment

In your environments.yaml, you'll now find a block for the LXD provider:

    lxd:
        type: lxd
        # namespace identifies the namespace to associate with containers
        # created by the provider. It is prepended to the container names.
        # By default the environment's name is used as the namespace.
        #
        # namespace: lxd
        # remote-url is the URL to the LXD API server to use for managing
        # containers, if any. If not specified then the locally running LXD
        # server is used.
        #
        # Note: Juju does not set up remotes for you. Run the following
        # commands on an LXD remote's host to install LXD:
        #
        #   add-apt-repository ppa:ubuntu-lxc/lxd-stable
        #   apt-get update
        #   apt-get install lxd
        #
        # Before using a locally running LXD (the default for this provider)
        # after installing it, either through Juju or the LXD CLI ("lxc"),
        # you must either log out and back in or run this command:
        #
        #   newgrp lxd
        #
        # You will also need to prepare the cloud images that Juju uses:
        #
        #   lxc remote add images images.linuxcontainers.org
        #   lxd-images import ubuntu trusty --alias ubuntu-trusty
        #   lxd-images import ubuntu wily --alias ubuntu-wily
        #
        # See: https://linuxcontainers.org/lxd/getting-started-cli/
        #
        # remote-url:
        # The cert and key the client should use to connect to the remote
        # may also be provided. If not then they are auto-generated.
        #
        # client-cert:
        # client-key:
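
With a stanza like the one above in place, a minimal local workflow might
look like this (a sketch, assuming the environment is named "lxd"; see the
Requirements section for the --upload-tools caveat with trusty images):

    juju switch lxd
    juju bootstrap --upload-tools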

### Microsoft Azure Resource Manager provider

Juju now supports Microsoft Azure's new Resource Manager API. The Azure
provider has effectively been rewritten, but old environments are still
supported. To use the new provider support, you must bootstrap a new
environment with new configuration. There is no automated method for
migrating.

The new provider supports everything the old provider did, and adds
several new features, including unit placement (i.e. you can specify
existing machines to which units are deployed). As before, units of a
service are allocated to machines in a service-specific Availability Set
if no machine is specified.
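
For example, unit placement works as it does on other providers (the
machine number here is illustrative):

    juju deploy mysql
    juju add-unit mysql --to 2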

In the initial release of this provider, each machine will be allocated a
public IP address. In a future release, we will only allocate public IP
addresses to machines that have exposed services, to enable allocating
more machines than there are public IP addresses.

Each environment is represented as a "resource group" in Azure, with the
VMs, subnets, disks, etc. contained within that resource group. This
guarantees that resources are not leaked when an environment is
destroyed, which in turn lets us support persistent volumes in the Azure
storage provider.

Finally, the new Azure provider natively supports Microsoft Windows
Server 2012 (series "win2012") and 2012 R2 (series "win2012r2").

To use the new Azure support, you need the following configuration in
environments.yaml:

    type:            azure
    application-id:  <Azure-AD-application-ID>
    application-key: <Azure-AD-application-password>
    subscription-id: <Azure-account-subscription-ID>
    tenant-id:       <Azure-AD-tenant-ID>
    location:        westus # or any other Azure location

To obtain these values, it is recommended that you use the Azure CLI:
https://azure.microsoft.com/en-us/documentation/articles/xplat-cli/.

To log in using "ARM" mode, you must have a Microsoft Work or School
account. To create one, follow the instructions at:
https://azure.microsoft.com/en-us/documentation/articles/resource-group-create-work-id-from-personal/

You will need to create an "application" in Azure Active Directory for
Juju to use, per the following documentation:
https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/#authenticate-service-principal-with-password---azure-cli
(NOTE: you should assign the role "Owner", not "Reader", to the
application.)

Take a note of the "Application Id" output when issuing "azure ad app
create". This is the value that you must use in the "application-id"
configuration for Juju. The password you specify is the value to use in
"application-key".

To obtain your subscription ID, you can use "azure account list" to list
your account subscriptions and their IDs. To obtain your tenant ID, you
should use "azure account show", passing in the ID of the account
subscription you will use.
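
As a rough sketch, the sequence with the Azure CLI might look like this
(exact flags vary by CLI version; the name, URL, and password below are
placeholders):

    azure config mode arm
    azure login
    # Create the AD application; note the "Application Id" in the output.
    azure ad app create --name "juju" \
        --home-page "http://juju.example.com" \
        --identifier-uris "http://juju.example.com" \
        --password <application-key>
    # Then create the service principal and assign it the "Owner" role,
    # per the documentation linked above.
    # Look up the subscription and tenant IDs.
    azure account list
    azure account show <subscription-id>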


## Known Issues

ARM-HF clients and agents are not available for Precise, Trusty, and
Vivid. Clients and agents are available for Wily.


## Resolved Issues

  * Cannot build trusty armhf with go1.2 from master
    Lp 1513236

  * State: initially assigned units don't get storage attachments
    Lp 1517344

  * Relation settings watcher exposes txn-revno to uniter
    Lp 1496652

  * Storage: add bundle support
    Lp 1511135

  * Juju storage filesystem list panics and dumps stack trace
    Lp 1515736


## Finally

We encourage everyone to subscribe to the mailing list at
juju-...@lists.canonical.com, or join us on #juju-dev on freenode.
