Thanks Vinodhini,
I just installed juju from snap and wanted to test a couple of deployments.
Should the controller version be 2.4.0 as well or are we keeping 2.3.7?
kowalczykb@hp1:~$ snap list
Name  Version    Rev   Tracking  Developer  Notes
core  16-2.33.1  4917  stable    canonical  core
juju  2.4.0      4587  stable    canonical  classic
kowalczykb@hp1:~$ juju status
Model    Controller     Cloud/Region         Version  SLA
default  super-bonkers  localhost/localhost  2.3.7    unsupported

App  Version  Status  Scale  Charm  Store  Rev  OS  Notes

Unit  Workload  Agent  Machine  Public address  Ports  Message

Machine  State  DNS  Inst id  Series  AZ  Message
Thanks
--
Kind Regards
Bogdan Kowalczyk
Technical Account Manager @ Canonical
Email: bogdan.kowalc...@canonical.com
On 03/07/18 10:20, Vinodhini Balusamy wrote:
The Juju team is proud to release version 2.4. This release greatly
improves running and operating production infrastructure at scale.
Improvements to `juju status` output, easier maintenance of proper HA, and
guiding Juju to the correct management network all aid in keeping your
infrastructure running smoothly.

* Bionic support
Juju 2.4 fully supports running controllers and workloads on Ubuntu 18.04
LTS (Bionic), including leveraging netplan for network management.

* LXD enhancements
LXD functionality has been updated to support the latest LXD 3.0. Juju
supports LXD installed as a snap and defaults to the snap-installed LXD
when it is present. A basic model of LXD clustering is now supported, with
the following conditions:
- The juju bootstrap of the localhost cloud must be performed on a cluster
member.
- Bridge networking on clustered machines must be set up to allow egress
traffic to the controller container(s).
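Given those conditions, a minimal bootstrap against an existing LXD 3.0
cluster might look like the following sketch (the controller name is
hypothetical; clustering and bridge networking are assumed to be already
configured):

```shell
# Run on a machine that is itself a member of the LXD cluster.
lxc cluster list                      # confirm this host appears as a member
juju bootstrap localhost lxd-cluster-ctl
```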
* Status UX cleanup
The 'Relations' section in the status output has been cleaned up:
- When filtering by application name, only direct relations are shown.
- In tabular format, the 'relations' section is no longer visible by
default (bug #1633972 <https://bugs.launchpad.net/juju/+bug/1633972>). Use
the '--relations' option to see the section.
- Empty status output has been clarified: it now indicates whether the
model is empty or a provided filter did not match anything on the model
(bugs 1255786 <https://bugs.launchpad.net/juju-core/+bug/1255786>, 1696245
<https://bugs.launchpad.net/juju/+bug/1696245> and 1594883
<https://bugs.launchpad.net/juju/+bug/1594883>).
- A timestamp has been added to the status output (bug 1765404
<https://bugs.launchpad.net/juju/+bug/1765404>).
- The status model table has been reordered to improve consistency between
model updates.
- Status now shows application endpoint binding information (in YAML and
JSON formats).
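With a 2.4 client, those output changes can be explored like this (run
against whatever model you have on hand):

```shell
juju status --relations        # bring the 'relations' section back in tabular format
juju status --format=yaml      # structured output, including endpoint bindings
```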
For each endpoint, the space to which it is bound is listed.

* Controller configuration options for network spaces
Two new controller configuration settings have been introduced (see
https://docs.jujucharms.com/2.4/en/controllers-config). These are:
- juju-mgmt-space
- juju-ha-space
juju-mgmt-space is the name of the network space used by agents to
communicate with controllers. Setting a value for this item limits the IP
addresses of controller API endpoints in agent config to those in the
space. If the value is misconfigured so as to expose no addresses to
agents, then a fallback to all available addresses results. Juju client
communication with controllers is unaffected by this value.
juju-ha-space is the name of the network space used for MongoDB
replica-set communication in high availability (HA) setups. This replaces
the previously auto-detected space used for such communication. When
enabling HA, this value must be set if member machines in the HA set have
more than one IP address available for MongoDB use; otherwise an error
will be reported. Existing HA replica sets with multiple available
addresses will report a warning instead of an error, provided the members
and addresses remain unchanged.
Using either of these options during bootstrap or enable-ha effectively
adds constraints to machine provisioning. The commands will fail with an
error if such constraints cannot be satisfied.

* Rework of 'juju enable-ha'
In Juju 2.4 you can no
longer use 'juju enable-ha' to demote controllers. Instead you can now use
the usual 'juju remove-machine X' command, targeting a controller machine.
This will gracefully remove the machine as a controller and from the
database replica set. This method does allow you to end up with an even
number of controllers, which is not a recommended configuration. After
removing a controller it is recommended to run 'juju enable-ha' to bring
back proper redundancy. 'juju remove-machine --force' is also available,
for when the machine is gone and not available to run its own teardown and
cleanup. See https://docs.jujucharms.com/2.4/en/controllers-ha.

* New charming tool: Charm Goal State
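Going back to the network-space options and the reworked HA flow described
above, they might be exercised roughly as follows (a sketch; the cloud
name, space names and machine ID are hypothetical):

```shell
# Constrain controller traffic to named spaces at bootstrap time:
juju bootstrap mymaas --config juju-mgmt-space=mgmt-space \
                      --config juju-ha-space=ha-space

# Demote a controller machine, then restore proper redundancy:
juju remove-machine 2
juju enable-ha
```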