Re: Re: Can't Access To Instances Use SSH

2016-01-14 Thread yyoung...@gmail.com
Hello James

I know it is hard to access an instance with only one external IP address,
but I think there is a way to route the traffic to get there, right?

Here is my configuration:
The host has one interface that can reach the external network.
Neutron runs in a KVM guest deployed on the host and has two virtual network
interfaces: one for the tunnel network (10.0.0.x) and one for the external
network (I want to forward the traffic from it to the outside network).
I am using the Legacy setup with Open vSwitch (I think it's the Juju default config).
My Neutron bridge table:
092c9e99-25bb-4bec-8cfc-8c0af7f9aa79
    Bridge br-data
        Port phy-br-data
            Interface phy-br-data
        Port br-data
            Interface br-data
                type: internal
    Bridge br-tun
        Port "gre-0a2e"
            Interface "gre-0a2e"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.46"}
        Port "gre-0a20"
            Interface "gre-0a20"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.32"}
        Port "gre-0a16"
            Interface "gre-0a16"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.22"}
        Port "gre-0a27"
            Interface "gre-0a27"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.39"}
        Port "gre-0a2b"
            Interface "gre-0a2b"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.43"}
        Port "gre-0a18"
            Interface "gre-0a18"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.24"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a2d"
            Interface "gre-0a2d"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.45"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a23"
            Interface "gre-0a23"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.35"}
        Port "gre-0a2a"
            Interface "gre-0a2a"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.44", out_key=flow, remote_ip="10.0.0.42"}
    Bridge br-int
        fail_mode: secure
        Port "tap5881c2ce-1a"
            tag: 1
            Interface "tap5881c2ce-1a"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap156e2b2a-aa"
            tag: 2
            Interface "tap156e2b2a-aa"
        Port "tap2228fe49-74"
            tag: 1
            Interface "tap2228fe49-74"
        Port int-br-data
            Interface int-br-data
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "em2"
            Interface "em2"
        Port "tapaa55b086-57"
            Interface "tapaa55b086-57"
        Port "eth1"
            Interface "eth1"
    ovs_version: "2.0.2"

em2 is the host's external network interface and eth1 is the Neutron KVM
guest's interface where I want to do the surgery.
The problem is br-ex: instances go through it to reach the outside network.
So how can I route the traffic from br-ex to the external network?

Can the KVM guest and the host share the same IP, or can the host's IP act as
a router for the KVM guest?
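For reference, here is the kind of host-side NAT setup I have in mind; the 192.168.122.0/24 subnet and the virbr0 bridge are placeholders (libvirt defaults), not necessarily my actual values:

```shell
# On the physical host: forward and NAT traffic from the Neutron guest
# out through em2. Subnet and bridge names below are placeholders.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o em2 -j MASQUERADE
iptables -A FORWARD -i virbr0 -o em2 -j ACCEPT
iptables -A FORWARD -i em2 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Would something along these lines let the host's single IP act as the router?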

Thank you for your help!



Yanyang Tao
Student, Integrated Computing PhD Program
Dept of Computer Science, College of EIT, UALR
Tel: +1 501 909‐2599
E-mail:yyoung...@gmail.com

 
From: James Page
Date: 2016-01-13 23:40
To: yyoung...@gmail.com
CC: Juju email list
Subject: Re: Can't Access To Instances Use SSH
Hello

On Wed, Jan 13, 2016 at 10:59 PM, yyoung...@gmail.com wrote:
I deployed OpenStack manually with Juju and MAAS.
Here is my distribution:
Keystone, Neutron, MySQL, RabbitMQ, Dashboard, nova-cloud-controller, Glance,
and Cinder are each deployed in one VM (KVM) on a single physical server.
nova-compute has 9 units, each deployed on its own physical node.

My external network is one fixed IP like x.x.x.x. I can only get one available 
IP address from our community.

This will make accessing instances very hard; at least two IP addresses would
be needed: one for the virtual router that is created to provide north/south
traffic routing to the internal network, and one for a floating IP address for
the instance you want to access.
 
My internal network uses 10.0.0.0/24

Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Merlijn Sebrechts
Hi all, sorry to barge in like this, but this is very important for my
use-case.


Binaries that are downloaded always need to be checked using a
checksum *included
in the charm*.
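As a minimal sketch of what such a shipped-checksum check could look like (the payload content, file path, and checksum here are made up purely for illustration):

```shell
#!/bin/sh
# Sketch: verify a downloaded payload against a SHA-1 sum shipped in the charm.
# The "payload" and its expected sum below are illustrative stand-ins.
expected="2ef7bde608ce5404e97d5f042f95f89f1c232871"   # sum shipped in the charm
printf '%s' 'Hello World!' > /tmp/payload.bin          # stand-in for the download
actual=$(sha1sum /tmp/payload.bin | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "checksum ok"
else
    echo "checksum mismatch" >&2
    exit 1
fi
```

The point is that the expected sum lives in the charm itself, so a changed file on the server fails the deploy instead of silently changing behaviour.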

One Charm version should always deploy the *exact same version of that
service*. If you want Juju to be used in production, *Charm deployment has
to be reproducible*. Please do not force people who use Juju in production
to fork all the Charms they use. Store Charms should be good enough so they
can be used in production.

Consider the use-case of a platform that automatically scales a service up
and down depending on usage. This will break if we allow Charms to be
changed between versions. This has bitten me once already. The Hadoop
Charms download the jujubigdata pip dependency and use code in this
dependency for communication between units. Because of an oversight, two
versions of this pip dependency were not compatible with each other. This
meant that running `juju add-unit` on an existing Hadoop installation was
successful one day and failed the next.

I understand why the Hadoop Charms were built this way; it is a lot easier
to maintain. However, layers fix the maintenance issue.

We do not know who uses our Charms and for what, so *there is no way we can
reliably determine if a change would break a use case*. Yes, this change
could fix a bug but there could be some service relying on this bug to be
present. Because of this, one version of a Charm must always deploy the
exact same thing. Let the users handle the upgrade process themselves.

There are enough examples of cases where even minor version changes of
binaries break relationships, so one version of a charm must always deploy
the same binary.
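The Hadoop incident above is essentially the difference between an unpinned and a pinned dependency (the version number below is illustrative, not the actual one involved):

```shell
# Unpinned: each `juju add-unit` may fetch whatever version is current
# on PyPI at that moment -- this is what bit the Hadoop charms.
pip install jujubigdata

# Pinned (illustrative version): every deploy of this charm revision
# installs exactly the same code.
pip install 'jujubigdata==7.0.3'
```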



Kind regards
Merlijn Sebrechts


Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Merlijn Sebrechts
Please note that this doesn't have to be a burden on the vetting process.
The vetting process for an updated binary can be more or less automated,
especially with the upcoming juju resources. The important part is that
every time the binary changes, the charm version has to be bumped.


Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Cory Johns
My preference over hard-coding a checksum into the charm would be hosting a
GPG signature alongside the file and including the public key in the
charm.  This allows the charm author to update a file if necessary without
having to also update the charm, but also allows the charm to confirm that
it got the file as intended by the author.

Hopefully, though, this will become moot with the advent of resources
support in Juju 2.0.
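A sketch of that approach (the file names and key path are hypothetical; only the gpg invocations themselves are real):

```shell
# The charm ships the author's public key; the payload is fetched together
# with a detached signature made by the author. File names are hypothetical.
gpg --import "$CHARM_DIR/files/author-pubkey.asc"
gpg --verify payload.tar.gz.asc payload.tar.gz || exit 1
```

This way the author can republish a fixed file without a charm update, while the charm still refuses anything the author did not sign.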

On Thu, Jan 14, 2016 at 1:48 AM, Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Thu, Jan 14, 2016 at 3:23 AM José Antonio Rey  wrote:
>
>> I think this is more of a discussion on if you got 'what' you wanted or
>> if you got it from 'where' you wanted. Even if you used SFTP, the file
>> could've changed, and if it doesn't have a SHA1SUM it could result in
>> unexpected charm breakage.
>>
>> If it were me, I would always implement SHA1SUMS, just to make sure that
>> the file is, in fact, what I wanted. It would make it easier to debug
>> and fix later down the road.
>>
>
> +1
>
> SFTP/SSH will (can?) ensure the integrity during transit, but can't tell
> you that the data wasn't tampered with outside of the SFTP transfer
> process. Someone could have replaced the file on the file server. The
> client needs to know ahead of time what to expect.
>
> On 01/13/2016 02:18 PM, Adam Israel wrote:
>> > Matt,
>> >
>> > For the charm in question, I would think adding the sha1sum check to the
>> > process would be sufficient, especially in the scenario that the binary
>> > is being self-hosted for the purposes of installing it via the charm.
>> >
>> > Adam Israel - Software Engineer
>> > Canonical Ltd.
>> > http://juju.ubuntu.com/ - Automate your Cloud Infrastructure
>> >
>> >> On Jan 13, 2016, at 2:14 PM, Tom Barber > >> > wrote:
>> >>
>> >> Yeah but as pointed out earlier,  it verifies where you got it from,
>> >> but not what you got.  :)
>> >>
>> >> On 13 Jan 2016 19:11, "Jay Wren" > >> > wrote:
>> >>
>> >> StrictHostKeyChecking and shipping the public key of the ssh host
>> with
>> >> the charm does seem to meet the criteria of verifying the intended
>> >> source.
>> >>
>> >>
>> >> On Wed, Jan 13, 2016 at 1:46 PM, Matt Bruzek
>> >> > >> > wrote:
>> >> > I recently reviewed a charm that is using sftp to download the
>> >> binary files
>> >> > with a username and password.  The charm does not check the
>> >> sha1sum of these
>> >> > files.
>> >> >
>> >> > The Charm Store Policy states:  Must verify that any software
>> >> installed or
>> >> > utilized is verified as coming from the intended source
>> >> >
>> >> > https://jujucharms.com/docs/stable/authors-charm-policy
>> >> >
>> >> > Does using sftp eliminate the need to check the sha1sum of the
>> files
>> >> > downloaded?
>> >> >
>> >> > What does the Juju community say to this question?
>> >> >
>> >> >- Matt Bruzek > >> >
>> >> >
>> >> > --
>> >> > Juju mailing list
>> >> > Juju@lists.ubuntu.com 
>> >> > Modify settings or unsubscribe at:
>> >> > https://lists.ubuntu.com/mailman/listinfo/juju
>> >> >
>> >>
>> >>
>> >
>> >
>> >
>>
>>
>> --
>> José Antonio Rey
>>
>>
>>
>
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Charles Butler
Just as a word of calm against this: our recommended pattern is to make the
payload you're fetching configurable, e.g. fetch this particular tarball from
server X, shipping a SHA1/SHA256 sum of the payload to ensure a) it's the same
payload and b) it wasn't corrupted between the point of writing the charm and
upgrading.

Shipping these upgrades via charm config leaves existing deployments at
their version; only new deployments will pick up the new components, and users
are still left to "manually upgrade" by changing the charm config. This
should keep you from seeing major breakage unless the shipped components
contain the defect, as outlined above.

The charm should handle upgrades, rather than the user 'creating' the change.

The GPG approach is interesting, as it replicates some of the "trusted
registry" features I've been tracking on the app container side, where we
have warm fuzzies knowing that our image passes a validity hash match, and
further is verified against the developer's signature with the hub, so we know
they signed the release.

I like these ideas, keep up the good work @cory


Charles Butler  - Juju Charmer
Come see the future of datacenter orchestration: http://jujucharms.com


juju stable 1.25.2 is proposed for release.

2016-01-14 Thread Curtis Hovey-Canonical
# juju-core 1.25.2

A new proposed stable release of Juju, juju-core 1.25.2, is now available.
This release may replace version 1.25.0 on Thursday January 21.


## Getting Juju

juju-core 1.25.2 is available for Xenial and backported to earlier
series in the following PPA:

https://launchpad.net/~juju/+archive/proposed

Windows, Centos, and OS X users will find installers at:

https://launchpad.net/juju-core/+milestone/1.25.2

Proposed releases use the "proposed" simple-streams. You must configure
the `agent-stream` option in your environments.yaml to use the matching
juju agents.
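A minimal illustration of that configuration (the environment name and provider type are placeholders; only `agent-stream` is the option being announced):

```shell
# Write an example environments.yaml fragment showing the agent-stream
# option set to the proposed simple-streams. Names are placeholders.
cat > /tmp/environments.yaml.example <<'EOF'
environments:
  my-maas:
    type: maas
    agent-stream: proposed
EOF
cat /tmp/environments.yaml.example
```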


## Notable Changes

This release addresses stability and performance issues.


## Resolved issues

  * "cannot allocate memory" when running "juju run"
Lp 1382556

  * Bootstrap with the vsphere provider fails to log into the virtual
machine
Lp 1511138

  * Add-machine with vsphere triggers machine-0: panic: juju home
hasn't been initialized
Lp 1513492

  * Using maas 1.9 as provider using dhcp nic will prevent juju
bootstrap
Lp 1512371

  * Worker/storageprovisioner: machine agents attempting to attach
environ-scoped volumes
Lp 1483492

  * Restore: agent old password not found in configuration
Lp 1452082

  * "ignore-machine-addresses" broken for containers
Lp 1509292

  * Deploying a service to a space which has no subnets causes the
agent to panic
Lp 1499426

  * /var/lib/juju gone after 1.18->1.20 upgrade and manual edit of
agent.conf
Lp 1444912

  * Juju bootstrap fails to successfully configure the bridge juju-br0
when deploying with wily 4.2 kernel
Lp 1496972

  * Incompatible cookie format change
Lp 1511717

  * Error environment destruction failed: destroying storage: listing
volumes: get https://x.x.x.x:8776/v2//volumes/detail: local
error: record overflow
Lp 1512399

  * Replica set emptyconfig maas bootstrap
Lp 1412621

  * Juju can't find daily image streams from
cloud-images.ubuntu.com/daily
Lp 1513982

  * Rsyslog certificate fails when using ipv6/4 dual stack with
prefer-ipv6: true
Lp 1478943

  * Improper address:port joining
Lp 1518128

  * Juju status  broken
Lp 1516989

  * 1.25.1 with maas 1.8: devices dns allocation uses non-unique
hostname
Lp 1525280

  * Increment minimum juju version for 2.0 upgrade to 1.25.3
Lp 1533751

  * Make assignment of units to machines use a worker
Lp 1497312

  * `juju environments` fails due to missing
~/.juju/current-environment
Lp 1506680

  * Juju 1.25 misconfigures juju-br0 when using maas 1.9 bonded
interface
Lp 1516891

  * Destroy-environment on an unbootstrapped maas environment can
release all my nodes
Lp 1490865

  * On juju upgrade the security group lost ports for the exposed
services
Lp 1506649

  * Support centos and windows image metadata
Lp 1523693

  * Upgrade-juju shows available tools and best version but did not
output what it decided to do
Lp 1403655

  * Invalid binary version, version "1.23.3--amd64" or "1.23.3--armhf"
Lp 1459033

  * Add xenial to supported series
Lp 1533262


## Finally

We encourage everyone to subscribe to the mailing list at
juju-...@lists.canonical.com, or join us in #juju-dev on freenode.


-- 
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Deprecating charm config options: postgresql case

2016-01-14 Thread Stuart Bishop
On 15 January 2016 at 02:00, Andreas Hasenack  wrote:

>   max_connections:
> default: 100
> type: int
> description: >
> DEPRECATED. Use extra_pg_conf.
> Maximum number of connections to allow to the PG database
>
> The option still exists and can be set, but does nothing. The service will
> get whatever is set in the new extra_pg_conf option, which happens to be
> 100.

That would be a bug. It is supposed to still work, with warnings in
your logs that you are using deprecated options.


> Other deprecated options have a more explicit warning:
>
>   performance_tuning:
> default: "Mixed"
> type: string
> description: >
> DEPRECATED AND IGNORED. The pgtune project has been abandoned
> and the packages dropped from Debian and Ubuntu. The charm
> still performs some basic tuning, which users can tweak using
> extra_pg_config.
>
> In this specific postgresql case, looks like all (I just tested two, btw)
> deprecated options should have been marked with the extra "... AND IGNORED"
> text. But then again, is it worth it to silently accept them and do nothing,
> thereby introducing subtle run-time failures?

Just dropping options risks breaking lots of mojo specs, all at the
same time. The plan is to log warnings, escalate to irritating
workload status messages later, and eventually drop them. All the
options that matter are supposed to still be functional, and the few
that are ignored were dropped only after careful consideration of the impact.

I think there are also issues to deal with for upgrade-charm, although
Juju might have changed its behaviour since I last looked (are service
settings that no longer exist now silently dropped, or do they block
the upgrade?)

-- 
Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


[Review Queue] nfs, zulu8, saiku, nuage, quobyte, apache2

2016-01-14 Thread Kevin Monroe
Happy 2016 folks!

The Big Data team has been rocking the queue this year, and I had the
pleasure of announcing our RQ time last week.  Unfortunately, I didn't get
the note sent out, so I got the chance to cover this week's RQ time as
well!  Here's what we found:

Jan 14, 2016:

- nfs
  - https://code.launchpad.net/~freyes/charms/trusty/nfs/lp1433036/+merge/280365
  - known not to work in lxc on trusty; bundle-testing on AWS
  - the test attempts to deploy the 'precise/owncloud' charm from the store,
    which appears to be broken:
    0.shared-fs-relation-changed logger.go:40
    /var/lib/juju/agents/unit-owncloud-0/charm/hooks/shared-fs-relation-changed:
    line 55: [: missing `]
  - asked the author if an owncloud change is needed to support this update
- zulu8
  - https://bugs.launchpad.net/charms/+bug/1519858
  - missing tests, so we requested Azul dupe the recent openjdk tests:
    https://code.launchpad.net/~kwmonroe/charms/trusty/zulu8/add-tests/+merge/282681

Jan 8, 2016:

- saiku analytics - enterprise
  - https://bugs.launchpad.net/charms/+bug/1524715
  - found two issues causing test failures; suggested fixes
- nuage-vrs
  - https://bugs.launchpad.net/charms/+bug/1420995
  - all charm proof issues resolved
  - conditional restart logic added and seems good
  - tests (unit) added, look good, and pass
  - provided some feedback and suggested improvements, with the main
    suggestions dealing with how to avoid a charm error state when 'blocked'
    was intended, and how to avoid an immutable config option
  - unfortunately, we can't fully test the charm without access to the repos
- quobyte new charms
  - quobyte-webconsole https://bugs.launchpad.net/charms/+bug/1527679
  - quobyte-api https://bugs.launchpad.net/charms/+bug/1527676
  - quobyte-metadata https://bugs.launchpad.net/charms/+bug/1527674
  - quobyte-registry https://bugs.launchpad.net/charms/+bug/1527672
  - quobyte-data https://bugs.launchpad.net/charms/+bug/1527673
  - the charms above deploy the Quobyte Storage System available at
    https://code.launchpad.net/~3-bruno/charms/trusty/quobyte-metadata/trunk
  - at this point we cannot proceed with the above charms, primarily because
    there is no adequate testing
  - we suggested two options to the authors: add tests and/or create a bundle
    that deploys all services and tests the storage system as a whole
- apache2 - add-logs-interface
  - https://code.launchpad.net/~evarlast/charms/trusty/apache2/add-logs-interface/+merge/278222
  - the test concern was addressed, but a possible corner case was spotted
    and the README should be updated to document this new interface


Questions/concerns?  We're in #juju on freenode.  Thanks!
-Kevin
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Deprecating charm config options: postgresql case

2016-01-14 Thread Andreas Hasenack
TL;DR
Should charms a) remove deprecated options; b) accept them but do nothing
(the case below); c) accept them for a while, log a warning, then remove;
d) ?

Hi,

Recently the postgresql charm deprecated several config options. For
example:

  max_connections:
default: 100
type: int
description: >
DEPRECATED. Use extra_pg_conf.
Maximum number of connections to allow to the PG database

The option still exists and can be set, but does nothing. The service will
get whatever is set in the new extra_pg_conf option, which happens to be
100.

I believe the intent of this behaviour was to not break the deployment of
the charm using existing configuration files. But instead it introduces a
subtle breakage: my DB can now only handle 100 connections, whereas before
it was (in my case) 500. The deployment works, but the system doesn't
behave as before and eventually breaks under use. That led to some
debugging until this was found:

psycopg2.OperationalError: FATAL:  remaining connection slots are reserved
for non-replication superuser connections
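The fix, once found, is presumably to move the setting into the new option, something like (value shown is my case):

```shell
# Apply the old max_connections value through the new extra_pg_conf
# option (Juju 1.x `juju set` syntax).
juju set postgresql extra_pg_conf="max_connections = 500"
```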

Other deprecated options have a more explicit warning:

  performance_tuning:
    default: "Mixed"
    type: string
    description: >
      DEPRECATED AND IGNORED. The pgtune project has been abandoned
      and the packages dropped from Debian and Ubuntu. The charm
      still performs some basic tuning, which users can tweak using
      extra_pg_config.

In this specific postgresql case, it looks like all the deprecated options
(I only tested two, mind) should have been marked with the extra "... AND
IGNORED" text. But then again, is it worth silently accepting them and
doing nothing, thereby introducing subtle run-time failures?
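
Option (c) could look something like the following sketch. This is
hypothetical charm-side code, not the actual postgresql charm; the names
are illustrative. The deprecated option keeps working, a warning is
logged, and an explicit extra_pg_conf setting still wins:

```python
# Hypothetical sketch of option (c): honour a deprecated option but log a
# warning, letting the replacement option take precedence when it is set.
# Illustrative only -- not the actual postgresql charm code.
import logging

log = logging.getLogger("charm")

# deprecated top-level option -> the key it maps to inside extra_pg_conf
DEPRECATED = {"max_connections": "max_connections"}

def effective_pg_conf(config):
    """Merge deprecated options into extra_pg_conf, warning when used."""
    conf = dict(config.get("extra_pg_conf") or {})
    for old, new in DEPRECATED.items():
        if config.get(old) is not None and new not in conf:
            log.warning("option %r is deprecated; use extra_pg_conf", old)
            conf[new] = config[old]
    return conf
```

With this approach an operator who had set max_connections to 500 keeps
500 connections after upgrading, and the log tells them to migrate the
setting before it is removed for good.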


Re: Deprecating charm config options: postgresql case

2016-01-14 Thread Patrik Karisch
Hi Andreas,

deprecating an option but having it do nothing is IMHO a BC break. A
deprecation is only useful if the option keeps its behaviour (though
overridden by the new configuration) until it is removed.

Sadly, revisions are not a useful versioning scheme. It would be cool if
Juju could adopt semantic versioning, so a charm could deprecate an option,
honour it in 1.x versions, and remove it in 2.x versions. Juju would not
upgrade a charm across a major version until told to do so.
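
The rule Patrik describes could be as simple as the sketch below. It is
hypothetical: charm revisions today are plain integers and carry no
semantic-version structure, so the version strings are illustrative.

```python
# Hypothetical: permit automatic charm upgrades only within a major
# version, per semantic versioning. Charm revisions today are plain
# integers; the version strings here are illustrative.
def parse(version):
    return tuple(int(part) for part in version.split("."))

def auto_upgrade_ok(installed, candidate):
    """Allow an unattended upgrade only if the major version is unchanged."""
    return parse(candidate)[0] == parse(installed)[0]
```

Under this scheme an upgrade from 1.4.x to 1.5.x could happen
automatically, while 1.x to 2.x would require an explicit request.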



Re: "environment" vs "model" in the code

2016-01-14 Thread Ian Booth


On 15/01/16 10:16, Menno Smits wrote:
> Hi all,
> 
> We've committed to renaming "environment" to "model" in Juju's CLI and API
> but what do we want to do in Juju's internals? I'm currently adding
> significant new model/environment related functionality to the state
> package which includes adding new database collections, structs and
> functions which could include either "env/environment" or "model" in their
> names.
> 
> One approach could be that we only use the word "model" at the edges - the
> CLI, API and GUI - and continue to use "environment" internally. That way
> the naming of environment related things in most of Juju's code and
> database stays consistent.
> 
> Another approach is to use "model" for new work[1] with a hope that it'll
> eventually become the dominant name for the concept. This will however
> result in a long period of widespread inconsistency, and it's unlikely
> we'll ever completely get rid of all uses of "environment".
> 
> I think we need to arrive at some sort of consensus on the way to tackle this.
> FWIW, I prefer the former approach. Having good, consistent names for
> things is important[2].
>

Using "model" for new work is the correct approach - new chunks of work will be
internally consistent in their terminology. And we will be looking to migrate
existing internal code once we tackle the external-facing stuff for 2.0. We
don't want to add to our tech debt and make our future selves sad by
introducing obsolete terminology in new work.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


"environment" vs "model" in the code

2016-01-14 Thread Menno Smits
Hi all,

We've committed to renaming "environment" to "model" in Juju's CLI and API
but what do we want to do in Juju's internals? I'm currently adding
significant new model/environment related functionality to the state
package which includes adding new database collections, structs and
functions which could include either "env/environment" or "model" in their
names.

One approach could be that we only use the word "model" at the edges - the
CLI, API and GUI - and continue to use "environment" internally. That way
the naming of environment related things in most of Juju's code and
database stays consistent.

Another approach is to use "model" for new work[1] with a hope that it'll
eventually become the dominant name for the concept. This will however
result in a long period of widespread inconsistency, and it's unlikely we'll
ever completely get rid of all uses of "environment".

I think we need to arrive at some sort of consensus on the way to tackle this.
FWIW, I prefer the former approach. Having good, consistent names for
things is important[2].

Thoughts?

- Menno

[1] - but what defines "new" and what do we do when making significant
changes to existing code?
[2] - http://martinfowler.com/bliki/TwoHardThings.html


Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Merlijn Sebrechts
Thanks for the words of calming :)


In retrospect, my email ended up sounding a lot more heated than originally
intended, sorry for that.



Kind regards
Merlijn

2016-01-14 15:26 GMT+01:00 Charles Butler :

> Just as a word of calming against this our recommended patterns are to
> make the payload you're fetching configurable:
>
> such as: fetch this particular tarball from server X, with a shipped
> SHA1/SHA256 sum of the payload to ensure a) it's the same payload and b) it
> wasn't corrupted from the point of writing the charm to upgrading.
>
> Shipping these upgrades via charm config leaves existing deployments at
> their version, only new deployments will upgrade the components, and users
> are still left to "manually upgrade" by changing the charm config, which
> should keep you from seeing major breakage unless shipping components
> contain the defect as outlined above.
>
> The charm should handle upgrades, vs the user 'creating' the change.
>
> The GPG approach is interesting, as it replicates some of the "trusted
> registry" features I've been tracking on the app container side, where we
> have warm fuzzies knowing that our image passes a validity hash match, and
> further is verified against the developer's signature w/ the hub, so we know
> they signed the release.
>
> I like these ideas, keep up the good work @cory
>
>
> Charles Butler  - Juju Charmer
> Come see the future of datacenter orchestration: http://jujucharms.com
>
> On Thu, Jan 14, 2016 at 7:36 AM, Merlijn Sebrechts <
> merlijn.sebrec...@gmail.com> wrote:
>
>> Please note that this doesn't have to be a burden on the vetting process.
>> The vetting process for an updated binary can be more or less automated,
>> especially with the upcoming juju resources. The important part is that
>> every time the binary changes, the charm version has to be bumped.
>>
>> 2016-01-14 13:29 GMT+01:00 Merlijn Sebrechts > >:
>>
>>> hi all, Sorry to barge in like this, but this is very important for my
>>> use-case.
>>>
>>>
>>> Binaries that are downloaded always need to be checked using a checksum 
>>> *included
>>> in the charm*.
>>>
>>> One Charm version should always deploy the *exact same version of that
>>> service*. If you want Juju to be used in production, *Charm deployment
>>> has to be reproducible*. Please do not force people who use Juju in
>>> production to fork all the Charms they use. Store Charms should be good
>>> enough so they can be used in production.
>>>
>>> Consider the use-case of a platform that automatically scales a service
>>> up and down depending on usage. This will break if we allow Charms to be
>>> changed between versions. This has bitten me once already. The Hadoop
>>> charms download the jujubigdata pip dependency and use code in this
>>> dependency for communication between units. Because of an oversight, two
>>> versions of this pip dependency were not compatible with each other. This
>>> meant that running `juju add-unit` on an existing Hadoop installation was
>>> successful one day and failed the next.
>>>
>>> I understand why the Hadoop charms were built this way: it is a lot
>>> easier to maintain. However, layers fix the maintenance issue.
>>>
>>> We do not know who uses our Charms and for what, so *there is no way we
>>> can reliably determine if a change would break a use case*. Yes, this
>>> change could fix a bug but there could be some service relying on this bug
>>> to be present. Because of this, one version of a Charm must always deploy
>>> the exact same thing. Let the users handle the upgrade process themselves.
>>>
>>> There are plenty of examples where even minor version changes of
>>> binaries break relationships, so one version of a charm must always
>>> deploy the same binary.
>>>
>>>
>>>
>>> Kind regards
>>> Merlijn Sebrechts
>>>
>>> 2016-01-14 12:42 GMT+01:00 Cory Johns :
>>>
 My preference over hard-coding a checksum into the charm would be
 hosting a GPG signature alongside the file and including the public key in
 the charm.  This allows the charm author to update a file if necessary
 without having to also update the charm, but also allows the charm to
 confirm that it got the file as intended by the author.

 Hopefully, though, this will become moot with the advent of resources
 support in Juju 2.0.

 On Thu, Jan 14, 2016 at 1:48 AM, Andrew Wilkins <
 andrew.wilk...@canonical.com> wrote:

> On Thu, Jan 14, 2016 at 3:23 AM José Antonio Rey 
> wrote:
>
>> I think this is more of a discussion of whether you got 'what' you wanted
>> or whether you got it from 'where' you wanted. Even if you used SFTP, the file
>> could've changed, and if it doesn't have a SHA1SUM it could result in
>> unexpected charm breakage.
>>
>> If it were me, I would always implement SHA1SUMS, just to make sure
>> 

Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Marco Ceppi
Hey everyone!

Wow, great discussion so far. I think it's clear that repeatability and
reliability are very important; they're something Juju aims for, so we
should make sure charms follow suit. As Merlijn and Cory alluded to, this
is a fairly temporary problem, as in Juju 2.0 we will get resources. In
brief, resources will allow you to declare, in the metadata, a set of
things the charm needs and where they map to. Here's a brief example of the
user experience that's being planned around this, *do note that it's
subject to change (and feedback welcome!)*:

# metadata.yaml
name: my-app
...
resources:
  jdk:
    type: file
  application:
    type: file
  arbitrary-resource:
    type: file

From there, with the new juju charm command, which has the publish
workflow, you'll be able to push your charm (and its resources) directly
to the store. So, let's say I have /tmp/jdk.tar.gz, /tmp/my-app.zip and
/tmp/package.deb:

$ charm push ./my-app --resources "jdk=/tmp/jdk.tar.gz
application=/tmp/my-app.zip arbitrary-resource=/tmp/package.deb"

This will upload the charm and files to your personal namespace and map
each file blob to that resource id. When a user deploys the charm they will
get the payload delivered by juju just like the charm and you will be able
to just issue `resource-get ` to retrieve the filepath on disk
that juju stored the resource. This will make software delivery very
reliable and make offline deployments of charms very easy.
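
In a hook, fetching that filepath might look like the sketch below. It is
hypothetical, since the hook tool is still being designed as Marco notes;
the `runner` parameter is there only so the call can be exercised without
a live Juju agent.

```python
# Hypothetical hook-side sketch of the planned `resource-get` tool, which
# prints the on-disk path where Juju stored the resource. Subject to
# change, per the caveats in this thread.
import subprocess

def resource_path(name, runner=subprocess.run):
    """Return the filesystem path of a named resource."""
    result = runner(["resource-get", name],
                    capture_output=True, text=True, check=True)
    return result.stdout.strip()
```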

Furthermore, at deploy time, you'll be able to modify resources inflight.
So, given the following example, where I have a different JDK
version/vendor that I want to deploy with:

$ juju deploy cs:~marcoceppi/trusty/my-app --resources
"jdk=/home/marco/other-jdk.tar.gz"

This gives me the charm I requested and the resources uploaded previously,
except that the jdk resource is taken from my local machine instead of
what is in the store. Finally, much like upgrade-charm, there will likely
be an upgrade-resource type command to manage resource versions
independently of the charm code.

I do want to again stress that this is still in design, so while the
feature will be there, the command and formats might change. We'll be
talking more concretely about this at the summit and I'll follow up next
week with a roadmap and description of each new feature in Juju 2.0!

Marco

Re: Does sftp eliminate the need to check sha1sum?

2016-01-14 Thread Cory Johns
Merlijn,

I completely agree with you that charm deps should be tied to the charm
version. For the big data charms, we're working on refactoring them to use
layers (and hope to have that available in ~bigdata-dev very, very soon),
which will create a wheelhouse of deps in the charm at build time; that
should address the issue you ran into.
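
The wheelhouse mechanism mentioned here works roughly like this in
charm-tools layered builds: each layer can ship a `wheelhouse.txt` of pip
requirements, and `charm build` vendors those exact versions into the
built charm, so deploys don't reach out to PyPI. The pinned version below
is illustrative, not a recommendation:

```
# wheelhouse.txt in a charm layer (pin is illustrative)
jujubigdata==2.0.2
```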

I should clarify that when I suggested GPG signing, I actually had in mind
charm payloads, and was thinking of making new payload versions available
(selected via a config option, as Chuck mentioned) in a secure way, without
needing a new version of the charm.  But again, with the advent of
resources, especially combined with the wheelhouse generation during charm
build, the problem will go away entirely.
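
For comparison, the ship-a-checksum-in-the-charm approach argued for
earlier in the thread is small to implement. A minimal sketch follows,
with a placeholder digest and no real URL (and using SHA-256 rather than
SHA-1, which is the stronger choice for this purpose):

```python
# Minimal sketch: reject a fetched payload unless it matches a checksum
# shipped inside the charm. EXPECTED_SHA256 is a placeholder value.
import hashlib
import urllib.request

EXPECTED_SHA256 = "0" * 64  # hard-coded in the charm alongside the code

def verify(data, expected):
    """Return data unchanged, or raise if its SHA-256 digest mismatches."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected:
        raise ValueError("checksum mismatch: got %s" % digest)
    return data

def fetch_verified(url, expected=EXPECTED_SHA256):
    """Download a payload and refuse to use it unless the checksum matches."""
    return verify(urllib.request.urlopen(url).read(), expected)
```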

>> On Thu, Jan 14, 2016 at 1:48 AM, Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> On Thu, Jan 14, 2016 at 3:23 AM José Antonio Rey 
>>> wrote:
>>>
 I think this is more of a discusion on if you got 'what' you wanted or
 if you got it from 'where' you wanted. Even if you used SFTP, the file
 could've changed, and if it doesn't have a SHA1SUM it could result in
 unexpected charm breakage.

 If it were me, I would always implement SHA1SUMS, just to make sure that
 the file is, in fact, what I wanted. It would make it easier to debug
 and fix later down the road.

>>>
>>> +1
>>>
>>> SFTP/SSH will (can?) ensure the integrity during transit, but can't tell
>>> you that the data wasn't tampered with outside of the SFTP transfer
>>> process. Someone could have replaced the file on the file server. The
>>> client needs to know ahead of time what to expect.
>>>
>>> On 01/13/2016 02:18 PM, Adam Israel wrote:
 > Matt,
 >
 > For the charm in question, I would think adding the sha1sum check to
 the
 > process would be sufficient, especially in the scenario that the
 binary
 > is being self-hosted for the purposes of installing it via the charm.
 >
 > Adam Israel - Software Engineer
 > Canonical Ltd.
 > http://juju.ubuntu.com/ - Automate your Cloud Infrastructure
 >
 >> On Jan 13, 2016, at 2:14 PM, Tom Barber > > wrote:
 >>
 >> Yeah but as pointed out earlier,  it verifies where you got it from,
 >> but not what you got.  :)
 >>
 >> On 13 Jan 2016 19:11, "Jay Wren" > > wrote:
 >>
 >> StrictHostKeyChecking and