Re: Best practices for "fat" charms

2014-04-02 Thread Simon Davy
Resending this to the list, rather than just Jorge, sorry.

On 1 April 2014 20:07, Jorge O. Castro  wrote:
> Hi everyone,
>
> Matt Bruzek and I have been doing some charm testing on a machine that
> does not have general access to the internet. So charms that pull from
> PPAs, github, etc. do not work.
>
> We've been able to "fatten" the charms by doing things like creating a
> /files directory in the charm itself and putting the
> package/tarball/jar file in there, and given the networking issues
> that we might face in production environments, we should start
> thinking about best practices for having charms with payloads instead
> of pulling from a network source.

Great question :)

So we have been deploying "fat" charms in a restricted environment
like this since we started using juju. Our build machines have no
internet access, but they *can* access the ubuntu archives, a private
archive, launchpad, and other internal services.

For us there are two kinds of charms that need different work to
support these restrictions.

One type is the regular charmstore charms that make up your
infrastructure glue (like apache2, haproxy, squid, postgresql, etc).
These charms don't change their core software payload often, usually
only on stable releases.

For charms like these that install things that are not in the ubuntu
archives, we fork and modify them as necessary to support installing
from a custom archive, for which we build a package ourselves. A good
example of a charm that works well this way is elasticsearch, which
uses the vendor packages by default, but allows you to specify a PPA
or archive in its config to install from instead. Many other charms
also do this, but not all, so it's worth noting I think.

The other type is "application" charms, which are typically private
charms where the payload is your application, and you can change the
core software payload multiple times a day.

For these charms (our core workload) we do a "fat" charm as you suggest above.

1) check out pinned versions of the charms into a local repository

2) run a build step over the local repository that looks for a
Makefile with a charm-payload target in each local charm's dir
(sketched below).

3) if found, the target is run, which pulls deps and builds. Source
deps have to be mirrored in launchpad; package deps not in main need
to be added to the private archive. A build fails if it tries to
reach something it can't.

4) this produces some artifact(s) in the charm's files/ directory
(tarballs, executables, scripts, etc).

5) a subsequent deploy/upgrade then ships these out to the charm dir
on the unit, and the hooks unpack them as appropriate for that project.
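
The build step itself is essentially just a loop like this (a minimal
sketch: the repository layout and everything except the charm-payload
target name are illustrative):

    # run over a local charm repository, e.g. repo/trusty/<charm>
    for charm in repo/trusty/*; do
        if grep -q '^charm-payload:' "$charm/Makefile" 2>/dev/null; then
            # artifacts land in $charm/files/, ready to deploy
            make -C "$charm" charm-payload
        fi
    done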


This works reasonably well. It has a few downsides:

a) conflates charm upgrade with code deployment. This required some
hacks with the charm revision (now fixed I think).

b) some build deps on the build machine need manual management.

c) the build step is repeated at each stage: dev, CI, staging,
production, which is wasteful and error prone.


One of the things we are looking at at the moment is deploying the
build artifacts from an internal trusted URL. For us, this would be an
openstack swift bucket. The goal is that if the CI build/tests
succeed, CI deposits the build artifacts at a versioned swift URL. A
charm's config can then be updated to point at the new URL. This
potentially allows us to:

a) deploy code separately from charm upgrades
b) reuse the same build artifact for CI/staging/dev
c) roll back easily with a config change
d) manually do rolling upgrades via juju run if needed (sketched below)
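
To sketch what day-to-day use might look like (the config option name
and URLs here are hypothetical, and whether juju run can invoke hooks
like this from the charm directory would need checking):

    # point the service at a new build artifact
    juju set myapp build_url=https://swift.internal/myapp/builds/1234.tgz

    # roll back: point it at the previous artifact again
    juju set myapp build_url=https://swift.internal/myapp/builds/1233.tgz

    # manual rolling upgrade, one unit at a time
    juju run --unit myapp/0 'hooks/config-changed'
    juju run --unit myapp/1 'hooks/config-changed'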

Michael Nelson's been working on this, I expect he'll have more to add.

So in essence we are thinking of trying to move away from "fat"
charms, and use build artifacts at trusted urls to get the payload on
the units, for the reasons above.

Some final thoughts:

We're still looking to simultaneously support the fat charm approach
of bundling payload with upgrade-charm as well, as it's really nice
for upgrading code and charm in a single juju "action", which the url
approach doesn't do.

Our build artifacts are usually tarballs for python projects, and
binaries for go projects, plus an assortment of scripts. I am planning
to look at PEX files for python, as well as maybe docker images, to
see if this can be simplified further and made more robust.

HTH


Re: Hook firing

2014-09-03 Thread Simon Davy
On 3 September 2014 13:12, Michael Nelson  wrote:
> On Wed, Sep 3, 2014 at 9:57 PM, Darryl Weaver
>  wrote:
>> Hi All,
>>
>> I am still a little confused as to when Juju decides to run hooks.
>> Obviously hooks run when something changes.
>> However, I see juju running config-changed hooks at times when the
>> configuration has not changed recently.
>
> Hi Darryl, we've been seeing this too...
>
>>
>> So, I'd like to know how Juju decides to run the hooks and when?
>
> Simon was asking the same question on #juju the other day, apparently
> juju runs config-changed after a juju agent restart, but I don't know
> what's triggering that in our case. More at:
>
> http://irclogs.ubuntu.com/2014/09/01/%23juju.html#t15:20

Also, at machine restart (which I suspect is due primarily to the
agent restart case, of course).

I don't know if there is a docs page that lists all this explicitly,
but it would be nice if there was, for reference.


-- 
Simon



Re: Hook firing

2014-09-03 Thread Simon Davy
On 3 September 2014 13:42, Darryl Weaver  wrote:
> Thanks the IRC logs are pretty helpful.
> We are seeing the same issue here at Sky,
> config-changed hook runs "At random times".
> This is when there are no user changes to the Juju config and no reboots
> happening.
> It is possible the juju agent restarted, but I don't think so, but will have
> to collect some evidence first.

Right, this is what I expect is happening to us, but tracking it is
tricky. We only noticed because our charm has an implementation issue
in that it tries to reuse an authtoken that has a 24hr expiry.

It might be helpful if someone on juju-core could detail whether there
are specific circumstances where the juju agent may restart?

Thanks

-- 
Simon



Re: Hook firing

2014-09-03 Thread Simon Davy
Arg, resending reply to list

On 3 September 2014 14:27, José Antonio Rey  wrote:
> Even if the config-changed hook was run when it was not supposed to, it
> shouldn't have caused any change if values were not moved to anything
> different. If it did, then I believe we're having an idempotency problem
> there.

I suspect this has been happening for a while, but unnoticed, as in
most cases the hook will run fine.

In our case, tokens that expire after a fixed 24hr period were being
reused (to fetch build assets from swift), so we noticed the run
because it tripped an error state, which we have nagios alerts for. We
are fixing our charms to not have this problem.

But we raised the questions for 2 reasons:

1) We want to understand why it's happening. It's probably ok that it
does happen, but it's the apparent randomness that is confusing.

2) If a charm has relied on juju's idempotency provisions exclusively,
then a config-changed hook's implementation may *always* restart
whatever server process the charm is managing, even though nothing has
actually changed (we specifically avoid this in our charms where
possible). This could lead to unnecessary service-wide restarts, which
is undesirable in many circumstances, especially with no current way
to control them.

So yeah, we'd like to get to the bottom of the cause for 1), for our
own understanding of the system, but perhaps 2) needs some
consideration from juju-core.

Thanks

-- 
Simon



Re: Revision file on charms

2014-09-08 Thread Simon Davy
On 8 September 2014 15:40, Matt Bruzek  wrote:
> José
>
> I just had a conversation about that with the Ecosystems team.  You are
> correct, the revision file in the charm directory is no longer used and we
> can delete those files on future updates.

So this is interesting, as we are looking to start using the revision
file more explicitly, as part of our automation.

We deploy from local repositories, with charms in bzr, and one thing
we would like is to know explicitly which *code* revision is currently
deployed in an environment.

So we were thinking of having a pre-deploy step that did "bzr revno >
revision" in each charm, just before deploy/upgrade (a sketch below).
That way, the revision reported by juju status gives us the exact code
revision of the current charm. This would pave the way for us to diff
a current environment with a desired one and see which charms need
changing as part of a deploy.
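
A minimal sketch of that pre-deploy step, assuming a local repository
laid out as repo/<series>/<charm> with each charm in a bzr branch:

    for charm in repo/trusty/*; do
        (cd "$charm" && bzr revno > revision)
    done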

Does this make sense? Is there some other method of knowing the code
branch/revision a charm is currently running?

Thanks


-- 
Simon



Re: Call for Charm School topics

2014-09-08 Thread Simon Davy
Hi Sebastian

On 5 September 2014 18:54, Sebastian  wrote:
> - Subordinated charms, when and how to use it. Conflicts with Ansible
> Charms.

Can you give more details on "Conflicts with Ansible Charms"?

We use the ansible support in charm-helpers a lot, for both principal
charms and subordinates, and have had no problems mixing those with
non-ansible charms, so I'd be keen to hear about your issues.

Thanks

-- 
Simon



Re: Revision file on charms

2014-09-08 Thread Simon Davy
On 8 September 2014 16:10, Kapil Thangavelu wrote:
>
> you need to keep in mind the revision file will never match a local charm
> version in the state server (it will at min be at least one higher than
> that). This goes back to removing the need for users to manage the revision
> file contents while in development or pass in the upgrade flag during dev,
> the need was obviated by having the state server control the revision of the
> charm.

Right, so even if I have an explicit number in a file, the revision
reported by juju will/may be different.

> i filed https://bugs.launchpad.net/juju-core/+bug/1313016 to cover this case
> for deployer, so it could annotate vcs charms with their repo and rev since
> local charm revisions are useless for repeatability as their independent of
> content and determined soley by the state server (with the revision file
> serving as a hint for min sequence value) and the available charms in the
> state server are not introspectable.

Right, +1ed.

I wonder if a convention and juju annotations could supply this
currently. Plus, a way to write annotations from the cli.

Thanks


-- 
Simon



Re: Revision file on charms

2014-09-09 Thread Simon Davy
On 8 Sep 2014 16:51, "Stuart Bishop"  wrote:
>
> I think you are better off using a service configuration item to hold
> the version for now, at least until the dust settles. Juju is good
> about incrementing the number in the revision file when it needs to,
> which is probably not when you want it bumped.

We do this for the payload revision already, as we use that as the
mechanism to deploy new code. We could do it for the charm revision as
well I guess, but it offers little guarantee, as it still needs a
manual juju set post-deploy. Plus, you need to add that to every charm
you want to know the version of (and we want to know *all* of them).

Using annotations would have the same update problem, but at least
it's charm independent.

Thanks

-- 
Simon



Re: Call for Charm School topics

2014-09-09 Thread Simon Davy
On 9 September 2014 18:45, Sebastian  wrote:
> Hey Simon!
>
> For example, imagine you have an Ansible charm which inside are some php
> settings applied like php memory limit, and then you relate a subordinated
> charm to it, that creates a new php.ini with custom configurations that the
> project needs.
>
> If the playbook of the subordinated charm makes a move that doesn't count
> with files or configurations generated by the other playbook in the main
> service charm, probably will result in a conflict, and errors will be shown.

This seems to me more like a problem of conflicting on a single file,
which is a problem for any subordinate, not specifically an ansible
one.

As I'm sure you know, it is generally better to use a directory of
config files that are included from the main config file, so different
things can "own" different files, and not conflict. Many tools support
this OOTB.

However, if you can't have includes in the main php.ini file, then
ansible can actually help you with this somewhat, although in a
limited fashion.

Check out the lineinfile module:

http://docs.ansible.com/lineinfile_module.html

> I was thinking in how to avoid this, leaving less customized configurations
> in the main charm. But!, it's difficult to know where's the limit.

Yeah, so getting the right split of responsibilities between
subordinates/principals can be quite nuanced.

A good example IMO is the gunicorn subordinate for running python
applications. All configuration is done via the relation to the
subordinate, and the subordinate owns the gunicorn config completely
(going so far as to disable the debian sysv init packaging as it's
quite limited). So the responsibilities are clearly divided. That may
not be possible in your scenario, of course[1].

But it's easy for a principal charm to touch a file it really
shouldn't. We had some gunicorn-using principal charms that
invoked "/etc/init.d/gunicorn restart" directly, which was a real pain
when we wanted to change how gunicorn was run.

One further idea: pass your extra config to the subordinate (or vice
versa) as a string in the relation data, and have the subordinate
include that config in the generated php.ini. Not the cleanest
solution, but it could work. Or, even better, expose all the knobs you
want to configure in the relation data, and allow them to be set by
the principal (with sane defaults). This is what the gunicorn
subordinate does, fyi (a rough sketch below).
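
For illustration, the principal side of such a relation is just a
relation-set in the relevant -joined/-changed hook; the key names here
are made up, not the actual gunicorn interface:

    # e.g. hooks/wsgi-relation-joined in the principal charm
    relation-set working_dir=/srv/myapp/code \
        wsgi_module=myapp.wsgi \
        env_extra='FOO=bar'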

HTH

[1] I know very little about modern[2] php deployment
[2] I can probably help with PHP 3, though ;P

-- 
Simon



Re: SSH host key maintenance, local provider

2014-10-03 Thread Simon Davy
This is what I have:

Host 10.0.3.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ForwardAgent yes
LogLevel ERROR
ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r
ControlPersist 8h


LogLevel ERROR is nice; it means you don't get any key warnings.

HTH

-- 
Simon


On 3 October 2014 12:13, Stuart Bishop  wrote:
> Hi. Has anyone got a simple mechanism for keeping their
> ~/.ssh/known_hosts and ~root/.ssh/known_hosts files clear of ephemeral
> juju machines?
>
> I did have a script that cleared it out on bootstrap, but it has
> stopped working and I thought I'd ask here for current best practice
> before debugging it.
>
> --
> Stuart Bishop 
>



-- 
Simon



Re: SSH host key maintenance, local provider

2014-10-03 Thread Simon Davy
On 3 October 2014 13:21, Simon Davy  wrote:
> This is what I have:
>
> Host 10.0.3.*
> StrictHostKeyChecking no
> UserKnownHostsFile /dev/null
> ForwardAgent yes
> LogLevel ERROR
> ControlMaster auto
> ControlPath /tmp/ssh_mux_%h_%p_%r
> ControlPersist 8h
>

Sorry, for clarity, this is in my .ssh/config file.

I also have a similar rule for the canonistack IP ranges.

-- 
Simon



Re: Send password from one charm instance to another on the same cloud.

2014-10-17 Thread Simon Davy
On 17 October 2014 10:13, saurabh  wrote:
> Hi All,
>
> I need to communicate password from one charm instance to another instance
> in order to authenticate services.
> Please suggest me a way to do that.

Hi Saurabh

I assume by "one charm instance to another" you mean two units in the
same service?

In which case, having the charm implement a peer relation would
allow that communication. E.g. if one unit does a "relation-set
password=<value>", all other units (instances) in the service will be
able to do "relation-get password" in the peer relation-changed hook.

More info at: 
https://juju.ubuntu.com/docs/authors-charm-metadata.html#peer-relations
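
A minimal sketch of both sides, assuming a peer relation named
"cluster" declared in metadata.yaml (names and paths are illustrative):

    # on the unit that generates the password, in whichever hook does so:
    relation-ids cluster | while read rid; do
        relation-set -r "$rid" password="$generated_password"
    done

    # hooks/cluster-relation-changed on the other units:
    password=$(relation-get password)
    [ -n "$password" ] && echo "$password" > /etc/myapp/password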

If you mean between two instances of different charms, then a normal
relation will do, of course.

HTH

-- 
Simon



Re: Migrating the python-django charm to Ansible

2014-11-07 Thread Simon Davy
On 6 November 2014 01:03, Michael Nelson  wrote:
> Hi Patrick,
>
> On Thu, Nov 6, 2014 at 4:22 AM, Patrick Hetu  wrote:
[snip]
>> The shim would do things like:
>>
>> * Translating Juju variable to role variable
>
> That can be done in the playbook where needed as above.
>
>> * Sanitize variable like: unit_name, relation_name, etc
>
> That *should* be done by the existing charmhelpers (it writes out
> /etc/ansible/host_vars/localhost on each hook execution with config,
> relations and some other things like unit name, public and private
> addresses)

It does, but they're not sanitized currently. Would be a good thing to
add a sanitized version of local/remote unit names to the ansible
host_vars.

[snip]
>> Then using tags to dispatch based on the current relation
>> to the corresponding roles.

I'm confused - this is what the ansible charmhelpers does right now?

>> Build role for re-usability
>> ---
>>
>
> Yes...
>
>> This have the potential to solve the problem of repetition of some
>> high level tasks in other charm.
>> Like adding ppa,
>
> You can add PPAs with the existing apt_repository module (see the
> example there):
>
> http://docs.ansible.com/apt_repository_module.html
>

Yeah, we find that ansible provides OOTB 95% or so of the things we need to do.

For example, and relevant to django:

http://docs.ansible.com/pip_module.html
http://docs.ansible.com/django_manage_module.html

We can also include roles from ansible galaxy as needed in a charm,
gaining quite a bit of reuse.

>> fetching source code, adding ssl certificate,
>> configuring backup, etc
>>
>> A little bit like what charmhelper is doing right now.
>
>
> Yes! I think making reusable roles for common tasks in juju charms is
> really worthwhile [1]. We do have a bunch of public shareable roles
> that Simon Davy and I (mainly) are reusing and improving for some
> internal services [2]. Please take a look and see if there's things
> that could be useful - but our use-case is a bit different (for
> example, the payload role is because we are never pulling sourcecode
> branches from external repositories during deployments, instead
> pulling a built code asset from an internal swift container - although
> there would be no reason why it couldn't be extended to support both).
> Other roles may be useful (directories-and-permissions for listing
> readonly/readwrite dirs, or wsgi-app for interfacing with the gunicorn
> charm.
>
> A typical charm which I write for a service these days only does two
> things in addition to the re-used roles: installs any specific package
> dependencies and writes out the specific settings/config file and
> handle any other relations (elasticsearch/postgresql). I posted an
> example a while back (uses an old version of the shared roles) [3].
>
> Also, I know that Simon has a django shared role that he's about to
> submit to charm-ansible-roles - it may be worth a look (Simon?), but
> again, our use-case is a bit different as we're not attempting to
> write one charm to deploy any django project right now (we did try in
> the past, but found it was adding unreasonable complexity for us,
> unless the projects being deployed were very similar).

So, the django role is fairly simple: it includes tasks for syncdb,
migrate, grantuser, and collectstatic.

But it is based on using a migration "action" via juju run, as we
wanted more explicit control of when migrations are run on an update,
and it depends on details of our specific use cases (we always have a
separate db user with elevated privileges for doing migrations; the
runtime db user is very limited).

So it needs a bit of work before it's generally consumable outside of
our specific uses.

Regarding the framework charm idea, we found that each of our django
services, while mostly similar, had different config options and
relations we wanted to expose.

We'd previously had experience with a mega-charm (the u1 charm
deployed 10 or so different services from the same codebase, it had 42
relations!), and it wasn't that fun. Having to add all possible config
options, and all possible relations was messy. Plus, then we'd have
some charms that would need to relate to themselves, so you would need
to implement both sides of the relation hook in the same charm, which
was not my idea of a good time.

So we have individual charms for each service, even if running from
the same codebase. Each charm supports only the relations and config
that it needs[1]. We utilise shared ansible roles as Michael said to
reduce the charm to pretty much the bare minimum needed for this
specific service: charm config, relations, writing the service config
to disk. Most other things are taken care of by th

Re: python-django charm questions

2014-12-02 Thread Simon Davy
On 2 December 2014 at 17:04, sheila miguez  wrote:
> Hi all,
>
>
> Pip wishes
>
> * pip_extra_args support so that I can use --no-index
> --find-links=/path/to/wheels (this is in my fork)
> * remove --upgrade --use-mirrors, leave it up to the user (in my fork)

First class support for wheels in the charm would be good.

> Django wishes
>
> * call collectstatic (in my fork). I see a pending MR with something like
> this, but it enforces dj-static, which I would not do.

Right, I think this is my branch, which was to get a demo working.
Although, we do use dj-static in prod (with squid in front) and it
works great, same solution in dev/prod, and fast (assuming squid or
similar). AIUI, the alternatives are to a) deploy to an alt service (cdn,
apache, whatever), which means deploying to two targets, which I
prefer not to do, or b) running apache/nginx on the django units to
serve. But, in our deployments, static assets are always accelerated
by squid or similar, so there's not much benefit to b.

> * allow installing from tgz (in my fork)

So, the django charm allows more extensive customisation via a
subordinate charm. This means you can write your own subordinate
charm, bundle/fetch your tgz as appropriate, control your own
settings.py, etc. You relate it to the django charm, and it supplies
the needed info to the django charm, which will do the rest.

I think this is generally a good route to go, as a single django charm
cannot reasonably support every deployment option under the sun.

That being said, I haven't personally used this feature, nor am I the
maintainer, so Patrick may have more to add, or correct me.

 > * fail on install/config changed if django-admin.py is not found. discovered
> this doesn't happen when it isn't installed, while working on the pip
> changes. otherwise it fails (in my fork)

Yeah, failing hard with good logs is always good.

> * allow users to inject custom templates. e.g. conditionally add
> django-debug-toolbar based on DEBUG.

You mean settings.py templates? You can do that with the custom
subordinate above.

> Newbie Questions
>
> * My fork adds a couple of lines to website_relation_joined_changed because
> the unit_name and port were not available to populate my apache2 template
> otherwise, but this could be due to user error. How do other people do this?

Is your fork available to view?

Which apache2 template are you referring to? One in your fork, or the
apache charm?

This sounds odd, as the http relation type should expose 'port' and
'hostname' variables, with hostname usually defaulting to the unit
name. When I've used the django charm, this worked as expected.

> * Why is django_extra_settings used in config_changed but not during
> install?

I expect that's a bug.

-- 
Simon



Re: Thoughts about Juju local as Dev

2014-12-05 Thread Simon Davy
On 5 December 2014 at 01:54, Sebastian  wrote:
> Hey guys!

Hi Sebastian

I've been looking at using a juju charm as the default dev env for one
of our services, login.ubuntu.com.

It's mostly working, but we haven't switched to it yet, as it needs
some more polish; we'd like to switch at some point.

> Behold the Vagrant workflow
> For some of the developers, this flow was terrible mysterious, only the list
> of things you have to install or know what does each and every software,
> here's the list:
> Vagrant and the download a box, Virtualbox, Juju, Juju-local, Juju-gui (yes,
> it's important to separate, client <> server <> gui), LXC and the containers
> paradigm, SSHFS (which is quite difficult in Mac OS X) to access and edit
> the files on the container, and finally the slow Sshuttle to access the
> containers via ssh from the host.
> Thats a lot to understand, so many things and we don't even start to explain
> how to use the charms, relations and stuff like the charm's hooks.

Right. We're not using vagrant much, as all except our front end guy
are on ubuntu, so we're using lxc. Plus, we have been using a
per-project lxc for development for a long time, with the user's home
dir bind-mounted, so our devs are familiar with this already.

> Accessing your app services:
> Sshuttle is not the best solution, so let's use Virtualbox networking
> features. I created a private network interface (containerseth0), and then
> setting the networking configurations of the lxc container to it
> (containerseth0). That was one of the best solution I came up till now, but
> I know this is not the right way to do that, but I don't know that much
> about network bridges.

Right. Again, this is not an issue when using lxc directly from your
host, as the 10.0.3.* addresses are already locally routeable. But for
vagrant, that sounds like the way to go.

Although, having said that, I do all my juju work on a remote box, and I
use sshuttle on my local machine to access the 10.0.3.* addresses on
my remote box without any problems.


> Download, install and configure = Waste
> When you want to be more efficient, the first thing you have to identify is
> the waste and try to decrease it the best you can. For example, if I'm a
> Drupal developer that wants to start developing in a new or existent project
> he has to wait for download (apt-proxy this is being done right now I
> think), install all the dependencies and then configuring process, again,
> for the same service (charm).
> So, some ideia is to try to clone service unit (container) before the
> "started" status of the charm, so in that way whenever I want a new project
> I don't have to wait all of that, just the config-changed and start process.
> Today the only (not 100% sure) thing is being using cloning is with units
> scaling.

As others have mentioned, using btrfs and squid-deb-proxy or similar
really takes the sting out of this.

> Why my machine is so slow!! /O\
> Every developer have more than 2 projects cloned in their workspace, and
> that result in a lot of deployed running charms, with all their services
> like Nginx, Php-fpm, Varnish and MySQL.

Do you need varnish in a dev environment? Perhaps you do, but I'm
curious as to your use case.

For development, we strip the env down to the base service and its
dependencies (like postgres and memcached).

> So,
> It's naturally that the machine and consequently the applications appear to
> be very slow, there's too many containers running at same time. The solution
> to this where not defined yet, but we are trying to:
> - Use one Vagrant VM for each project, but thats painfully when you must see
> other projects running.

I think the vagrant usage may be key to this slowness. I regularly
have upwards of 20 lxc containers running on a 4 core, 8G box without
any noticeable performance issues or memory exhaustion.

The other issue of course is disk thrashing, which btrfs clone and the
upcoming os-upgrade options will help with.

> - Manually turning off all the containers using lxc-stop, which is other
> painfully process.

Yeah, you can script it though.
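
Something along these lines does it (a sketch, assuming the lxc 1.x
tools the local provider uses, and that the containers are root-owned):

    for c in $(sudo lxc-ls --running); do
        sudo lxc-stop -n "$c"
    done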

> - Parallel local type environments, so it's an env for each project, but
> that needs tweeks to avoid ports conflicts and still we had to manually
> stop/start all the containers.
> So we didn't figure it out yet.

Yeah, I've done this, but it is hacky. I'd love a tool to set up a
different local env for me.

> juju set mysql dataset-size="20%"
> F why MySQL isn't starting? Telling to the developers and making
> predefined bundles and config files, was not enough, they forgot about to
> set the MySQL dataset-size when are working in a local environment. The
> charms could react better to the environments types.

Urg, sounds fun :(

> Charm's development is a slow and a complex process
> You are developing a complex charm and guess what? error in the logs, the
> charm's deploy failed, then you modify the charm code and repeat the hall
> process all over again.

Are 

Re: Makefile target names

2015-01-22 Thread Simon Davy
On 22 January 2015 at 15:13, David Britton  wrote:
>
> lint:
>  - make lint
>

Could we also make[1] the charm linter lint the makefile for the
presence of targets agreed in the outcome of this thread?


[1] Pun fully intended :)

-- 
Simon



Re: Makefile target names

2015-01-22 Thread Simon Davy
 On 22 January 2015 at 16:29, David Britton  wrote:
> On Thu, Jan 22, 2015 at 04:17:26PM +0000, Simon Davy wrote:
>> On 22 January 2015 at 15:13, David Britton  
>> wrote:
>> >
>> > lint:
>> >  - make lint
>> >
>>
>> Could we also make[1] the charm linter lint the makefile for the
>> presence of targets agreed in the outcome of this thread?
>
> "charm proof"
>
> I like it.  (bundle tester already runs this)

Which is interesting, as my lint target generally runs charm proof too,
so it'd be run twice in that case?

Not a big issue, but if the charm store/review queue is automatically
charm-proofing too, perhaps the make lint target should not be?

-- 
Simon



Re: Call for help for a juju plugin

2015-03-04 Thread Simon Davy
On 4 March 2015 at 13:45, Jorge O. Castro  wrote:
> Hi everyone,
>
> Sometimes people deploy things like a complex bundle on a cloud and it
> doesn't work. Having to wrangle the person to find out which units
> broke, juju sshing in and manually digging around logs and refiring
> off hooks, etc. is time consuming.

This is a great idea, and should be doable for shipping just juju
logs/config, and possibly relation data.

But if you want to include logs/config for the service, I think
you'd need a way for charms to expose their logs (path[1], type) and
configs somehow. This could maybe be part of the charm metadata, but
paths are not necessarily fixed locations, as config can affect
them[1]. So maybe a charm could publish this information in the juju
status output, as part of service metadata? I think there was some
discussion of custom status output at some point.

This would be useful for the above scenario[2], but also in
development of full stacks, and debugging in production. Some useful
tools/plugins that this would enable:

juju-tail-logs <unit> could use multitail to tail all logs on that
unit simultaneously, as I send a test request to the unit and watch
what happens. Bonus for a --filter <pattern> argument :)

juju-log-trace could multitail all access|error logs on all
units that publish them, so I can trace http requests coming through
multiple units of apache->haproxy->squid->haproxy->app servers (I do
this manually sometimes now, but it's a pain, and I must know the
location of every log file for each unit). Ditto for the --filter
argument.

juju-view-configs could open all the config files in a stack for
viewing in $EDITOR, vastly simplifying checking whether the desired
relations/config have produced the result you were looking for. I
regularly do this manually, one by one, with stuff like "juju
ssh <unit> 'cat /etc/haproxy/haproxy.cfg'", for example.
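
As a rough sketch of how cheap such a plugin is to build today (juju
picks up any executable named juju-<something> on $PATH as a plugin;
the unit/path pairs here are hard-coded only because there is
currently no way to discover them, which is exactly the gap above):

    #!/bin/sh
    # juju-view-configs: dump the interesting config files from a stack
    for target in 'haproxy/0:/etc/haproxy/haproxy.cfg' \
                  'squid/0:/etc/squid3/squid.conf'; do
        unit=${target%%:*}
        path=${target#*:}
        echo "==== $unit $path ===="
        juju ssh "$unit" "sudo cat $path"
    done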

Another use case would be to surface the logs and config in the GUI,
which IMO would be a killer feature for GUI adoption in production
environments.

In fact, this simple change could expose a whole raft of system
diagnosis tooling that would be real value-add to the base juju
proposition, IMO.

HTH

[1] As recommended in our current best practice docs:
https://jujucharms.com/docs/authors-charm-best-practice (Disclaimer: I
am not convinced by all of those guidelines)

[2] There's an issue here with sensitive information, like PII and
secrets. I think the pastebinit plugin would need to be able to dump
locally, to allow redaction prior to submitting, if it was ever going
to be used in production systems. Some data (IPs, emails, etc) could
be auto-redacted in the tool, perhaps.


-- 
Simon



Re: State of the "Best Practices Guide"

2015-03-04 Thread Simon Davy
On 2 March 2015 at 17:46, Marco Ceppi  wrote:
> # Charm Authors
>
> - Avoid symlinks in charm

/me blinks

I assume you don't mean symlinks in the hooks/ dir?

Anything documenting why this is a bad idea?

-- 
Simon



Re: Coordinating actions in a service

2015-05-08 Thread Simon Davy
On 8 May 2015 at 14:28, Gustavo Niemeyer  wrote:
>
> - The requirement of running an action across all units but without
> executing in all of them at once also sounds very common. In fact, it's so
> common that it should probably be the default when somebody dispatches an
> action to all units of a service.

+1 to rolling actions (and hooks?) being the default!

I'm becoming more convinced that actions are the way to handle
controlled rollouts of new code to units, which is my most common
day-to-day orchestration task (i.e. it must be automated).

We are currently working on a way to orchestrate this externally, but
having it as the default would be great (coupled with a health check
after each action completes).

Thanks

-- 
Simon



Re: How should we change default config values in charms?

2015-08-17 Thread Simon Davy
On 16 August 2015 at 19:41, Mark Shuttleworth  wrote:
> On 14/08/15 04:56, Stuart Bishop wrote:
>> I've discussed similar things with regard to the PostgreSQL charm and
>> autotuning, with the decision being that charm defaults should be set
>> so things 'just work' in a development environment.
>
> Yes, charm defaults should be efficient for rapid iteration and development.
>
> Iteration and development scenarios are, as Stub points out, orders of
> magnitude more common than production scenarios.

Interesting. We have been defaulting to production settings, as we
don't want to deploy an incorrect/unsafe config in production by
omission, especially security related config.

I see the motivation of making it easier for people to get going
though, but IMO, from an operational standpoint, development defaults
are concerning.

I think maybe there is a distinction between performance defaults and
security defaults: defaults could be dev-oriented for the former, and
production-oriented for the latter?

> In future, we might designate whole environments as dev, test or
> production, so that charms could auto-tune differently.

Sounds good, we do something similar manually.

Thanks

-- 
Simon



Re: Latest on the LXD Provider!

2015-11-10 Thread Simon Davy
On 9 November 2015 at 18:19, Rick Harding  wrote:
> Thanks Katherine. That's looking great. One request, next demo I'd be
> curious to see how easy it is to run multiple lxd environments locally. I
> know it's been possible with lxc before with a bunch of config.

Just an FYI, I have a tool to manage multiple local provider environments.

https://github.com/bloodearnest/plugins/blob/juju-add-local-env/juju-add-local-env

I have ~12 local environments that I switch between for my
day-to-day work, usually 3-4 bootstrapped at once. I couldn't work
effectively without multiple environments.

Hopefully the above utility will be made obsolete by the lxd provider,
but it might be useful in the mean time.

> Ideally we'd
> just be able to create a new named section and say it's lxd and boom, I can
> bootstrap the new one and have it distinct from the first.

Yes!  I hope we'll be able to have one lxd container running a
multi-environment state server that can manage multiple lxd
environments on your host (or remotely?)! That would be a great dev
experience.

> Keep up the great work!

And it is indeed great work :)

We've been using lxd with the manual provider, and have been really
impressed with what lxd brings to the table.

Couple of questions

 - do you have plans to support applying user-defined lxd profiles to
the lxd containers that juju creates? This would be great in dev, and
in special cases (e.g. giving your charm access to the gpu, or
bind-mounting a host dir). A sketch of the sort of thing I mean is
below.

 - likewise, will users be able to specify the base lxd image to use?
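
For the profile idea, I'm imagining something along these lines, using
the current lxc client syntax (device names, paths and the image alias
are just examples):

    # a profile that exposes a host device and bind-mounts a host dir
    lxc profile create gpu-dev
    lxc profile device add gpu-dev nvidia0 unix-char path=/dev/nvidia0
    lxc profile device add gpu-dev code disk source=/home/ubuntu/src path=/srv/code

    # ideally juju could launch its containers with extra profiles, e.g.
    lxc launch ubuntu:trusty test-unit -p default -p gpu-dev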

Many thanks for this work, it has the potential to really benefit our
daily usage of juju.

Thanks



-- 
Simon



Re: Unit number is increasing in latest juju version.

2015-11-13 Thread Simon Davy
On 13 November 2015 at 13:50, Mark Shuttleworth  wrote:
>
> Thanks Sunitha. Matty, deeper question is - was this an intended change
> in behaviour, and what's the rationale?

One possibility is that the juju environment is no longer being torn
down between tests? That would result in this behaviour, for sure.

So this could have been induced by an amulet change, or a local
change, perhaps, rather than a juju core change?

If the environment is being recreated each time, then perhaps juju
destroy-environment is somehow not cleaning up some state, and this is
getting picked up in the new bootstrap somehow? I don't see any other
way that a fresh juju env would not start at /0.

Either way, Matty's fix is a more robust way to write amulet tests.
You don't really want to destroy/bootstrap your env between every
test; it's very slow.

HTH

Thanks

-- 
Simon



Re: Unit number is increasing in latest juju version.

2015-11-13 Thread Simon Davy
On 13 November 2015 at 15:02, Matthew Williams wrote:
> Hi Mark, Sunitha,
>
> My apologies, I should have included the explanation in the original email.
>
> This was a change to address a long standing bug:
> https://bugs.launchpad.net/juju-core/+bug/1174610

Ah, I must have been remembering the older behaviour. I remember
being surprised that destroy-service didn't reset unit numbering
previously; I hadn't realised it had been changed in the meantime.

> There's a discussion in the bug report, but the summary is that in most
> cases it's desirable to have the unit id be unique across the life of an
> environment. Otherwise you lose the identity of a unit across relations.

This makes sense to me.

Thanks

-- 
Simon



Re: Latest on the LXD Provider!

2015-11-16 Thread Simon Davy
On 10 November 2015 at 20:46, Mark Shuttleworth  wrote:
> On 10/11/15 11:06, Simon Davy wrote:
>> We've been using lxd with the manual provider, really been impressed
>> with what lxd brings to the table.
>
> Yes, it's a really dramatic leap forward in distributed systems dev and
> test, glad you like it :)
>
>> Couple of questions
>>
>>  - do you have plans to support applying user-defined lxd profiles to
>> the lxd containers that juju creates? This would be great in dev, and
>> in special cases (e.g. give your charm access to the gpu, or bind
>> mount a host dir)
>
> This would map nicely to the generic ideas of machine-type or machine
> constraints, so should be achievable. Why not write up a proposal? We'd
> want it to feel just like choosing a particular machine type on your
> cloud, only in this case you'd be choose a "container machine type".

Yes, it does seem to mesh nicely with the machine type idea. The
provider could maintain a set of profiles available for users to
choose from.

Some use cases OTTOMH:

Production:

 - exposing special devices like a GPU

 - exposing an encrypted block device to a container that has the keys
to decrypt and mount (although I understand there are security issues
atm with the kernel superblock parser)

 - selecting the networking type.  We've had to manually add maas machines
in with regular openstack kvm for high-connection frontends before,
due to kvm networking limitations. It would be great if we could deploy
with lxd and tell it to use the host network (assuming this is
possible in lxd in the future). I guess there'd be some security
compromises here.

 - A more off-the-wall idea: local volume sharing/publishing, a la
docker, would be very interesting.  It could allow faster/more secure
communication between containers on the same host by using unix
sockets, for example.


Development use cases

- mounting specific code branches into the lxd for development

- mount the user's $HOME dir for convenience (ssh keys, bash/editor/bzr config, etc)

- controlling the subuid/gid map for the above

- sharing the X socket with the container (useful if you have selenium
tests, for example)

- controlling the network bridge, a la Jay's recent post.

- adding additional veths/bridges, in order to test your charm's
handling of public/private ip addresses (currently only possible by
deploying to an actual cloud provider, AFAIK)

- likewise for volumes - if adding an lxd disk device could link into
the new storage hooks, then we can test our storage hooks locally.

Hmm, maybe some of these are not solved by custom lxd profiles, but
are just lxd provider feature requests :)

I would happily write up a proposal - is this list the correct venue?

>>  - likewise, will users be able to specify base lxd image to use?
>
> Actually, we'd like to be able to do that on any cloud, not just on LXD.
> Essentially saying something like:
>
>   juju deploy somecharm --image f67d90bad257
>
> I'm paraphrasing, but the idea is to tell Juju not to lookup the image
> ("trusty", "precise") the way it normally would, but just to trust you
> and wing it with that base image. This wants to be done in a way which
> works for LXD and on any cloud that can provide a named snapshot or
> image for launch.

\o/ - hurrah!  This would be great. We could publish these images out
of our CI process, for our application charms. As well as maybe
consume an IS-provided base image for other services, rather than the
cumbersome basenode scripts we currently use.

Is there a spec document for this?

> For LXD development purposes, this would let you have a base image with
> all the main dependencies you're going to need pre-defined, so the
> actual charm install is super-fast.

Yep, this is kinda what we are doing with the manual provider and lxd
currently, for application development with juju. We create an lxd
container ahead of time and install dependencies. We then bind-mount
the developer's code/logs/config directories into the container at the
places the charm expects, and then bootstrap and deploy all charms onto
that one container. This gives us a self-contained development
environment, using the same tooling as production, but convenient for
devs to use. We've only just got this set up, so early days yet, but it
looks promising.

The more I use LXD, the more excited I get :) I'm trying to retain a
cool, professional, level-headed, right-tool-for-the-job, and
objective persona, but I'm think I'm in danger of becoming a total
fanboy, as I suspect my colleagues will attest to :D

Thanks

-- 
Simon



Re: Latest on the LXD Provider!

2015-11-16 Thread Simon Davy
On 16 November 2015 at 17:07, Rick Harding  wrote:
> On Mon, 16 Nov 2015, Simon Davy wrote:
>
>> Some uses cases OTTOMH:
>>
>> - mounting specific code branches into the lxd for development
>
> This is a feature we're looking at Juju adding to support sharing the local
> filesystem to the unit deployed with lxd in order to speed up development
> cycles. There's a current spec on this that's under iteration and requires
> the LXD work to land before it can begin. It'll build on top of the storage
> work so that it's meant to be modeled as sharing generic filesystems in all
> of the supported providers.

Right. So this could work nicely for our dev scenario I think, if
we can expose each directory we're interested in as a juju-aware
storage config, which would probably make sense to do anyway, even if
we don't (yet) use it in production.

The tricky part of this for us was matching uids/gids on the
filesystem. We ended up setting security.privileged=true as the
easiest path; it would be great if we could massage the subuids/gids
and the id map to map only the running user, rather than throw away
uid mapping altogether.
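
For reference, the workaround is a single setting on the container we
created by hand (the container name here is illustrative):

    lxc config set mydev security.privileged true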

>> - mount users $HOME dir for convenience (ssh keys, bash/editor/bzr config, 
>> etc)
>
> Kind of the same as above I'd think. Maybe there's some magic bits to this
> example.

Yeah, there are some tricks to mapping the user through to the
container, if you want the full convenience of being able to log in as
the host $USER. You need to create the user with the right uid, and
also install that user's shell if it's not the default, etc. Basically,
all the stuff that the old lxc ubuntu template did to support the -b
option.

It might very well be easier for the user to do this customisation in
the base image before deploying, perhaps.

>> - controlling the network bridge, a la Jay's recent post.
>>
>> - adding additional veths/bridges, in order to test your charm's
>> handling of public/private ip addresses (currently only possible by
>> deploying to an actual cloud provider, AFAIK)
>>
>> - likewise for volumes - if adding an lxd disk device could link into
>> the new storage hooks, then we can test our storage hooks locally.
>>
>> Hmm, maybe some of these are not solved by custom lxd profiles, but
>> just lxd provider feature requests :)
>
> Yes, as the provider lands I think there'll be room to make sure it gets
> first class support for Juju modeling of things such as storage and
> networking.
>
>
>> I would happily write up a proposal - is this list the correct venue?
>
> Preferably a google doc that folks can comment, question, and refer back
> to.
>
>
>> > I'm paraphrasing, but the idea is to tell Juju not to lookup the image
>> > ("trusty", "precise") the way it normally would, but just to trust you
>> > and wing it with that base image. This wants to be done in a way which
>> > works for LXD and on any cloud that can provide a named snapshot or
>> > image for launch.
>>
>> \o/ - hurrah!  This would be great. We could publish these images out
>> of our CI process, for our application charms. As well as maybe
>> consume an IS-provided base image for other services, rather than the
>> cumbersome basenode scripts we currently use.
>>
>> Is there a spec document for this?
>
> It's a more recent add to the roadmap and we're investigating what it would
> take to support this. I'll make sure the team adds you in as a feature
> buddy and gets you a copy of the doc as it comes together.

Yes please!

> Thanks for the feedback! It's great to see folks excited to use things and
> to find guinea pigs as things land and become available.

/me puts on guinea pig costume

Thanks!

-- 
Simon



Re: Latest on the LXD Provider!

2015-11-19 Thread Simon Davy
On 18 November 2015 at 17:15, Katherine Cox-Buday wrote:
> Simon, I've gone ahead and added you as a subscriber to the blueprint for
> this feature. That was as things develop you can stay in the loop.
>
> Would you be willing to give feedback on how this feature is shaping up when
> we begin developing?

Absolutely.

I spend 80% of my working day using the current local provider, so am
keen to help improve things.

We are also trying to use juju in application development, i.e. have
the deployed charm be the development environment. Currently we are
manually creating/configuring an lxd container, and deploying the
charm to it with the manual provider. I would be keen to see what
options might be possible to do this with the lxd provider, like
providing a base image, and configuring the container with profiles,
as we could then use the lxd provider to create the units rather than
creating them manually.

Thanks

-- 
Simon



Re: [ANN] charm-tools 1.9.3

2015-11-25 Thread Simon Davy
On 25 November 2015 at 16:02, Marco Ceppi  wrote:
> ## Wheel House for layer dependencies
>
> Going forward we recommend all dependencies for layers and charms be
> packaged in a wheelhouse.txt file. This perform the installation of pypi
> packages on the unit instead of first on the local machine meaning Python
> libraries that require architecture specific builds will do it on the units
> architecture.

If I'm understanding the above correctly, this approach is a blocker for us.

We would not want to install direct from pypi on a production service

 1) pypi packages are not signed (or when they are, pip doesn't verify
the signature)
 2) pypi is an external dependency and thus unreliable (although not
as bad these days)
 3) old versions can disappear from pypi at an author's whim.
 4) installing C packages involves installing a C toolchain on your prod machine

Additionally, our policy (Canonical's, that is), does not allow access
to the internet on production machines, for very good reasons. This is
the default policy in many (probably most) production environments.

Any layer or charm that consumes a layer that uses this new approach
for dependencies would thus be unusable to us :(

It also harms repeatability, and I would not want to use it even if
our access policy allowed access to pypi.

For python charm dependencies, we use system python packages as much
as possible, or if we need any wheels, we ship those wheels in the
charm and pip install them directly from there. No external network,
completely repeatable.
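
Concretely, the install/upgrade hooks just do something like this (a
sketch; the wheels/ directory and requirements file are our own
convention, not anything standard):

    # no index, no network: only the wheels shipped inside the charm
    pip install --no-index \
        --find-links="$CHARM_DIR/wheels" \
        -r "$CHARM_DIR/requirements.txt"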

Another option is to require/configure a local pypi to pull the
packages from, but again, that's an external dependency and a SPOF.

I much prefer what the current tool seems to do: bundle deps as wheels
into a wheels/ dir as part of the charm build process.  Then that
charm is self-contained, requires no external access, and is more
reliable/repeatable.

> This also provides the added bonus of making `charm layers` a
> much cleaner experience.
>
> Here's an example of side-by-side output of a charm build of the basic layer
> before and after converting to Wheelhouse.
>
> Previous: http://paste.ubuntu.com/13502779/ (53 directories, 402 files)
> Wheelhouse:  http://paste.ubuntu.com/13502779// (3 directories, 21 files)

These are the same link?

But looking at the link, I much prefer that version - everything is
bundled with the charm, as I suggest above.

So, to me, this new approach would be a regression :(

If it was my own charms, fair enough, I could just not use this
approach. But, if the base layers and interface layers I'm trying to
use do this, or the other charms from the store that we use do, then
we cannot use them, which would mean forking charms, which we've done
before and helps no one.

Maybe I've misunderstood, but if the new recommended approach involves
pulling from pypi on the unit at deploy time, then this is a big
problem for us, and I think for many others.

I don't know where we are at with the resources work, but maybe that
could have a part to play here?

Thanks

-- 
Simon



[ANN] charm-tools 1.9.3

2015-11-26 Thread Simon Davy
On Thursday, 26 November 2015, Marco Ceppi wrote:
> On Wed, Nov 25, 2015 at 4:08 PM Simon Davy  wrote:
>>
>> On 25 November 2015 at 16:02, Marco Ceppi wrote:
>> > ## Wheel House for layer dependencies
>> >
>> > Going forward we recommend all dependencies for layers and charms be
>> > packaged in a wheelhouse.txt file. This perform the installation of pypi
>> > packages on the unit instead of first on the local machine meaning Python
>> > libraries that require architecture specific builds will do it on the units
>> > architecture.
>>
>> If I'm understanding the above correctly, this approach is a blocker for us.
>>
>> We would not want to install direct from pypi on a production service
>>
>>  1) pypi packages are not signed (or when they are, pip doesn't verify
>> the signature)
>>  2) pypi is an external dependency and thus unreliable (although not
>> as bad these days)
>>  3) old versions can disappear from pypi at an authors whim.
>>  4) installing c packages involves installing a c toolchain on your prod machine
>>
>> Additionally, our policy (Canonical's, that is), does not allow access
>> to the internet on production machines, for very good reasons. This is
>> the default policy in many (probably most) production environments.
>>
>> Any layer or charm that consumes a layer that uses this new approach
>> for dependencies would thus be unusable to us :(
>>
>> It also harms repeatability, and I would not want to use it even if
>> our access policy allowed access to pypi.
>>
>> For python charm dependencies, we use system python packages as much
>> as possible, or if we need any wheels, we ship that wheel in the
>> charm, and pip install it directly from the there. No external
>> network, completely repeatable.
>
> So, allow me to clarify. If you review the pastebin outputs from the
> original announcement email, what this shift does is previously `charm
> build` would create and embed installed dependencies into the charm under
> lib/ much like charm-helper-sync did for instead for any arbitrary Pypi
> dependency. Issues there are for PyYAML it will build a yaml.so file which
> would be built based on the architecture of your machine and not the cloud.

Right. This was the bit which confused me, I think.

Can we not just use python-yaml, as it's installed by default on cloud
images anyway?

We use virtualenv with --system-site-packages, and use system packages for
python libs with C extensions where possible, leaving wheels for things
which aren't packaged or where we want newer versions.
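
Roughly, our pattern looks like this (a simplified sketch; the paths and
file names here are invented):

# at build time: collect the wheels we need into the charm
$ pip wheel --wheel-dir files/wheels -r wheel-requirements.txt

# in the install hook: a venv that can see system python-yaml etc,
# installing only from the wheels shipped in the charm
$ virtualenv --system-site-packages /srv/app/venv
$ /srv/app/venv/bin/pip install --no-index \
    --find-links files/wheels -r wheel-requirements.txt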

> This new method builds source wheels and embeds the wheel in the charm.
> There's a bootstrap process on deploy that will unpackage and install the
> dependencies on the system when deployed. The deps are still bundled in the
> charm just the output of the charm is much more sane and easier to read
>
>>
>> Another option is to require/configure a local pypi to pull the
>> packages from, but  again, an external dependency and spof.
>>
>> I much prefer what the current tool seems to do, bundle deps as wheels
>> into a wheels/ dir as part of the charm build process.  Then that
>> charm is self-contained, and requires no external access, and is more
>> reliable/repeatable.
>>
>> > This also provides the added bonus of making `charm layers` a
>> > much cleaner experience.
>> >
>> > Here's an example of side-by-side output of a charm build of the
>> > basic layer before and after converting to Wheelhouse.
>> >
>> > Previous: http://paste.ubuntu.com/13502779/ (53 directories, 402 files)
>> > Wheelhouse:  http://paste.ubuntu.com/13502779// (3 directories, 21 files)
>>
>> These are the same link?
>>
>> But looking at the link, I much prefer that version - everything is
>> bundled with the charm, as I suggest above.
>
> Sorry, meant to send two links.
> The first: http://paste.ubuntu.com/13502779/
> The Second: http://paste.ubuntu.com/13511384/
> Now which one would you prefer :)

Great :) Problem solved, sorry for the noise.



-- 
Simon
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [ANN] charm-tools 1.9.3

2015-11-27 Thread Simon Davy
On Friday, 27 November 2015, Marco Ceppi  wrote:
> On Thu, Nov 26, 2015 at 3:05 AM Simon Davy  wrote:
>>
>> On Thursday, 26 November 2015, Marco Ceppi wrote:
>> > On Wed, Nov 25, 2015 at 4:08 PM Simon Davy wrote:
>> >> On 25 November 2015 at 16:02, Marco Ceppi wrote:
>> >> > ## Wheel House for layer dependencies
>> >> >
>> >> > Going forward we recommend all dependencies for layers and charms be
>> >> > packaged in a wheelhouse.txt file. This perform the installation of pypi
>> >> > packages on the unit instead of first on the local machine meaning Python
>> >> > libraries that require architecture specific builds will do it on the
>> >> > units architecture.
>> >>
>> >> If I'm understanding the above correctly, this approach is a blocker for us.
>> >>
>> >> We would not want to install direct from pypi on a production service
>> >>
>> >>  1) pypi packages are not signed (or when they are, pip doesn't verify
>> >> the signature)
>> >>  2) pypi is an external dependency and thus unreliable (although not
>> >> as bad these days)
>> >>  3) old versions can disappear from pypi at an authors whim.
>> >>  4) installing c packages involves installing a c toolchain on your prod machine
>> >>
>> >> Additionally, our policy (Canonical's, that is), does not allow access
>> >> to the internet on production machines, for very good reasons. This is
>> >> the default policy in many (probably most) production environments.
>> >>
>> >> Any layer or charm that consumes a layer that uses this new approach
>> >> for dependencies would thus be unusable to us :(
>> >>
>> >> It also harms repeatability, and I would not want to use it even if
>> >> our access policy allowed access to pypi.
>> >>
>> >> For python charm dependencies, we use system python packages as much
>> >> as possible, or if we need any wheels, we ship that wheel in the
>> >> charm, and pip install it directly from the there. No external
>> >> network, completely repeatable.
>> >
>> > So, allow me to clarify. If you review the pastebin outputs from the
>> > original announcement email, what this shift does is previously `charm
>> > build` would create and embed installed dependencies into the charm under
>> > lib/ much like charm-helper-sync did for instead for any arbitrary Pypi
>> > dependency. Issues there are for PyYAML it will build a yaml.so file which
>> > would be built based on the architecture of your machine and not the cloud.
>>
>> Right. This was the bit which confused me, I think.
>>
>> Can we not just use python-yaml, as its installed by default on cloud
>> images anyway?
>>
>> We use virtualenv with --system-site-packages, and use system packages
>> for python libs with c packages where possible, leaving wheels for things
>> which aren't packaged or we want newer versions of.
>>
>
> Again, this is for hook dependencies, not exactly for dependencies of the
> workload.

Right. I understand that :)

I detailed how we solve this for our python app payloads as a possible
solution for python charm deps too, though of course those deps would be
completely separate things, not even installed in the same virtualenv.


> The charm could apt intall python-yaml, but using --system-site-packages
> when building is something I'd discourage since not everyone has the same
> apt pacakges installed.

Except that they do specifically have python-yaml installed, I believe. It's
installed by default in Ubuntu cloud images, due to cloud-init I think.

But yes, other system python packages could be exposed. I wish, once again,
that there was a way to include a specific list of system packages in a
virtualenv rather than all of them.

And it should be easy enough to add a way to declare which system packages
are required by a layer?
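
Something like this in layer.yaml would cover our case (the option name
here is entirely hypothetical, just to illustrate the shape):

$ cat layer.yaml
includes: ['layer:basic']
options:
  basic:
    system_packages:   # hypothetical option: apt packages the layer expects
      - python-yaml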

> Unless that user is building on a fresh cloud-image there's a chance they
> won't catch some packages that don't get declared.
> We'd be interested in making this a better story. The wheelhousing for
> dependencies not yet available in the archive instead of embedding them in
> the charm was a first step but certainly not the last. I'm not sure how
> this would work when we generate a wheelhouse since the wheelhouse
> generation grabs dependencies of the install. That's why PyYAML is showing
> up in the generated charm artifact. We're not explicitly saying "included
> PyYAML" we're simply saying we need charmhelpers and charms.reactive from
> PyPI as a minimum dep

Reactive roadmap

2016-03-08 Thread Simon Davy
Hi all

My team (Online Services at Canonical) maintains >20 private charms
for our services, plus a few charmstore charms.

Most of these charms are written with charmhelpers ansible support, or
with the Services framework. We would like to move towards
consolidating these approaches (as both have issues), and so have been
experimenting with reactive.

We like the ideas in reactive, especially the composition part, as
sharing common code between our charms has been very painful. Also, the
higher-level user-defined events that reactive provides are a definite
improvement over having to implement the lower-level relation dance
semantics every time.

However, it's a fast-moving target, and we have encountered some issues.
So we have a couple of questions that we haven't been able to find
answers to in the reactive docs (we may have missed something?).

1) Is there a roadmap for reactive? A target for a stable 1.0 release,
or similar? We'd ideally like a stable base to build from before
committing to use a new framework, having been (re)writing/maintaining
charms for 4+ years now :)

2) Layer pinning. Right now, layers are evolving fast, and the lack of
pinning to layer versions has caused charm builds to break from day to
day. Is this a planned feature?

3) Downloading from the internet. This issue has been common in
charmstore charms, and is discouraged, AIUI. But the same issue applies
to layers, and possibly with more effect, due to a layer's composability.
We simply cannot use any layer that downloads things from github or
similar, and I'm sure others are in a similar situation. We're aware of
resources, but not convinced they are a scalable solution for layers, as
they make using a charm whose layers require resources much more complex.
So, some clarity in this area would be helpful.

Thanks for all the work on making charming better.

-- 
Simon

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Reactive roadmap

2016-03-14 Thread Simon Davy
On Mon, Mar 14, 2016 at 1:05 PM, John Meinel  wrote:
> ...
>
>>
>>
>> > 3) Downloading from the internet. This issue has been common in
>> > charmstore charms, and is discouraged, AIUI. But the same issue
>> > applies for layers, and possibly with more effect, due to a layer's
>> > composibility.  We simply can not utilise any layer that downloads
>> > things from github or similar, and I'm sure others are in a similar
>> > situation.  We're aware of resources, but not convinced this is a
>> > scalable solution for layers, as it makes using a charm that has
>> > layers that require resources much more complex. So, some clarity in
>> > this area would be helpful.
>>
>> Yes, layers that do not work in network restricted environments are
>> not useful to many people. I think you will find layers will improve
>> things here. Layers only need to be written correctly once. And if
>> they are broken, only fixed once. A big improvement over cargo culted
>> code, where you could end up fixing essentially the same bug or adding
>> the same feature several times.
>>
>
> Layers themselves are a compile time thing, not a runtime thing, right? So
> while the code in the layer might say "download some resource from github",
> you the layer itself is only downloaded from github before it is published
> into the charm store. Am I understanding this wrong?
>
> John
> =:->

Right, downloading at build time is a different problem.

The issue is that the layer might do something on the install hook,
for example, which downloads from the internet at run time, on the
units.

Such things work fine in dev or for demos, but will fail in many
production environments.

-- 
Simon

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Reactive roadmap

2016-03-15 Thread Simon Davy
On Mon, Mar 14, 2016 at 2:28 PM, Cory Johns  wrote:
> On Tue, Mar 8, 2016 at 9:19 AM, Simon Davy  wrote:
>>
>> 1) Is there a roadmap for reactive? A target for a stable 1.0 release,
>> or similar? We'd ideally like a stable base to build from before
>> committing to use a new framework, having been (re)writing/maintaining
>> charms for 4+ years now :)
>
>
> The layered & reactive approach saw adoption more quickly than I expected,
> which is both great and unfortunate for working out the kinks.  That said,
> the core concepts of both aspects of this approach have been fairly stable,
> with additions rather than breakages.  There is a 2.0 release of charm-tools
> coming, to coincide with Juju 2.0, but again, the build process should be
> backwards compatible.  There are some issues with charms.reactive that may
> require incompatible changes to fix, but that would be for 2.0 and the
> semantic version range that the base layer uses gives us flexibility there,
> though that leads in to your next point.

That's helpful to know, thanks.

>> 2) Layer pinning. Right now, layers are evolving fast, and the lack of
>> pinning to layer versions has caused charm builds to break from day to
>> day. Is this a planned feature?
>
>
> There has been quite a bit of discussion about this and I don't think either
> side has swayed the other.  On the one hand, as you note, there is a valid
> argument for needing to avoid breakage and in these early days, layers are
> evolving quickly.
>
> On the other hand, we want to strongly encourage charm authors to always
> sync the layers and interfaces they use to take advantage of the
> improvements and fixes from upstream, lest charms end up stagnating.  And
> that means encouraging backward-compatibility in the layers.  To that end,
> it has been suggested that layers be handled like interface protocols in
> that, if you need to make an incompatible change, you can fork the layer
> with a different name and both can coexist until one supplants the other.

That is a reasonable policy; it feels like a similar discussion to the
one in the other thread about upgrading elasticsearch/kibana charms. But
as a policy alone, it's not enough to trust for backwards compatibility.
We have that policy in Ubuntu, particularly for SRUs and security
updates, but it's backed up with significant testing, and thus we trust
it not to break stuff (or rather, not to break stuff frequently enough
to matter). Layers/interfaces don't yet have that assurance.

So, regarding the trade-off between the two approaches, we would always
come down on the side of the former. Our experience across the board,
with every dependency management tool we use, is that pinning to versions
is essential for reliability. We pin our dependent code branches, we pin
our python packages, we pin our npm packages, we pin our charms, we
pin/hold key apt packages. Because if we don't, an upstream change can
break the CD pipeline, and then we can't deploy.
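
To be concrete (the versions and package names below are just
illustrative):

$ cat requirements.txt
PyYAML==3.11
charmhelpers==0.6.1

$ sudo apt-mark hold haproxy
haproxy set on hold.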

This 'vendor/pin all the things your app depends on' approach is part of
the use case for snappy and docker. A comparison to golang tooling also
seems apt: Go only needs build-time dependencies too, and yet the
standard approach there is to vendor everything.

Yes, there's a downside: it's painful to keep things in sync. If you
want to 'strongly encourage' folks to keep up to date, some good tooling
around vendoring/syncing would be my recommendation.

I suspect the reality is that if you went with the second argument, then
people would just work around it anyway (like we are doing right now).

A thought that's been brewing for a while: what if layers/interfaces
were available as python wheels, and interfaces.juju.solutions
provided an additional PyPI index? Then you could leverage the
existing tooling of pip and requirements.txt to provide your
dependency management?
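
For example (the package names and index URL below are entirely
hypothetical, just to illustrate the workflow):

$ cat layer-requirements.txt
charms.layer.basic==0.3.0
charms.interface.pgsql==1.1.0

$ pip download --no-deps -d layers/ \
    --extra-index-url https://interfaces.juju.solutions/simple \
    -r layer-requirements.txt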


> Additionally, as Stuart pointed out with tongue-in-cheek, the local repo can
> serve that purpose, and with charm-tools 2.0 that will become easier with
> the pull-source command (https://github.com/juju/charm-tools/pull/125).

OK, so a standard tool as part of charm-tools to vendor/sync layers
would work for us, I think.
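
i.e. something along these lines (syntax guessed from the PR linked
above, so it may not be exact):

$ charm pull-source layer:basic layers/basic
$ charm pull-source interface:pgsql interfaces/pgsql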

>> 3) Downloading from the internet. This issue has been common in
>> charmstore charms, and is discouraged, AIUI. But the same issue
>> applies for layers, and possibly with more effect, due to a layer's
>> composibility.  We simply can not utilise any layer that downloads
>> things from github or similar, and I'm sure others are in a similar
>> situation.
>
>
> Again, as Stuart mentioned, this is actually *better* with layers than it
> has been in the past, because layers encourage charm dependenci

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-05 Thread Simon Davy
Lots of interesting ideas here.

Some other ideas (apologies if these have already been proposed, but I
don't *think* they have)

a) A small one - please can we have 'juju get <service> <key>'? See
https://bugs.launchpad.net/juju-core/+bug/1423548

Maybe this is already in the config schema work, but it would *really*
help in a lot
of situations, and it seems simple?

e.g.

$ juju get my-service foo
bar

This would make me very happy :)


b) A bigger ask: DNS for units.

Provider-level dns (when present) only gives machine-name dns, which is
not useful when working at the model level. As an operator, I've
generally no idea which machine unit X is on, and have to go hunting in
juju status. It'd be great to be able to address individual units, both
manually when troubleshooting and in scripts.

One way to do this might be if juju could provide a local dns resolver
as part of the controller.

e.g. if you have a model called 'bar', with a service called 'foo' that
has 2 units, the following domains[1] could be resolved by the controller
dns resolver:

foo-0.bar
foo-1.bar

and/or

unit-foo-0.bar
unit-foo-1.bar

or even

0.foo.bar
1.foo.bar


Then tools can be configured to use this dns resolver. For example, we
have deployment servers where we manage our models from. We could add the
controller's dns there, making it easy for deployment/maintenance scripts
to target units.

Right now, we have to parse json output in bash from juju status to
scrape ip addresses, which is horrible[2].
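
(The kind of thing I mean - a rough sketch, and the exact json paths
differ between juju versions:)

$ juju status --format=json \
    | jq -r '.applications["foo"].units["foo/0"]["public-address"]'
1.2.3.4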

Other possibilities (warning: not necessarily a good idea)

 * add this resolver into the provisioned machine configuration, so config
on the units could use these domain names.

 * the controller dns resolver can forward to a specified upstream
resolver (falling back to host's resolv.conf info)
- single point of control for dns for all models in that controller
- repeatability/reliability - if upsteam dns is down, controller
  dns still provides local resolution, and also could cache upstream,
  perhaps.

 * if you ask for a service level address, rather than unit, it could
maybe return a dns round robin record. This would be useful for
internal haproxy services, for example, and could give some default
load-balancing OOTB

 * would provide dns on providers that don't have native support
(like, erm, ps4.5 :)

Now, there are caveats aplenty here. We'd need an HA dns cluster, and
there's a whole bunch of security issues that would need addressing, to
be sure. And I would personally opt for an implementation that uses
proven dns technology rather than implementing a new dns
resolver/forwarder in go with a mongodb backend. But maybe that's just
me ;P


Thanks.


[1] in hindsight, I do think having a / in the unit name was not the
best option, due to its path/url issues. AIUI, internally juju uses
unit-<service>-N as identifiers? Could these be exposed as alternate unit
names, i.e. cli/api commands could accept either?

[2] At the very least, it would be great to have a cli to get the ip(s)
of a unit, which would simplify a lot of scripts, e.g.

$ juju get-ip foo/0 --private
10.0.3.24
$ juju get-ip foo/0 --public
1.2.3.4
$ juju get-ip foo --private
10.0.3.24
10.0.3.134


-- 
Simon

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Promulgated charms (production readiness)

2016-05-17 Thread Simon Davy
On Mon, May 16, 2016 at 2:49 PM, Tim Van Steenburgh wrote:
> Right, but NRPE can be related to any charm too. My point was just that the
> charm doesn't need to explicitly support monitoring.

It totally does, IMO.

Process count, disk and memory usage are all important, and should be
available out of the box.

But alerts (driven by monitoring) are all about specific context.

When I'm alerted, I want info that is as specific as possible about what
is wrong, and hints as to why. Generic machine monitoring provides little
context, and if that's all you had, it would increase your MTTR as you go
fishing.

I want detailed, application specific, early alerts that can only be
written by those with application knowledge. These belong in the
charm, and need to be written/maintained by the charm experts.

I've been banging on about this idea for a while, but in my head, it
makes sense to promote the idea of app-specific health checks (a la
snappy) into juju proper, rather than a userspace solution with layers.
Then you *don't* need specific relation support in your charm - you just
need to write a generic set of health checks/scripts.

Then these checks are available to run as an action (we do this pre/post
each deploy), or to show via juju status, or via the GUI[1]. A monitoring
service can just relate to the charm with the default relation[2], and
get a rich, app-specific set of checks that it can convert to its own
format and process. No need for a relation for each specific monitoring
tool you wish to support. It makes monitoring a first-class juju citizen.
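
Purely for illustration (the names and output here are invented), the
kind of thing I have in mind is a charm shipping a directory of
nagios-style executable checks that juju, or a monitoring charm, could
discover and run:

$ ls checks/
check_api_health  check_queue_depth
$ checks/check_queue_depth
OK: queue depth 3 (warn at 100, crit at 500)
$ echo $?
0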

Juju could totally own this space, and it's a compelling one.
Monitoring is a mess, and needs integrating with everything all the
time. If we do 80% of that integration for our users, I think that
would play very well with operations folks. And I don't think the
tools in the DISCO[3] orchestration space can do this as effectively -
they by design do not have a central place to consolidate this kind of
integration.


[1] Want a demo that will wow a devops crowd, IMO? Deploy a full demo
system, with monitoring exposed in the GUI out of the box. I've said it
before (and been laughed at :), but the GUI could be an amazing
monitoring tool. We might even use it in production ;P

[2] or even more magically, just deploy a monitoring service like
nagios unrelated in the environment, and have it speak with the
model's controller to fetch checks from all machines. Implicit
relations to all, which for monitoring is maybe what you want?

[3] Docker Inspired Slice of COmputer,
https://plus.google.com/+MarkShuttleworthCanonical/posts/W6LScydwS89


-- 
Simon

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Proposal: DNS for Juju Charms

2016-09-26 Thread Simon Davy
On Sun, Sep 25, 2016 at 2:25 AM, Casey Marshall
<casey.marsh...@canonical.com> wrote:

> Awesome idea! Probably more of a wishlist thing at this point.. but can we
> also add SSHFP records for all the units?
>

Great idea!


> -Casey
>
> On Sat, Sep 24, 2016 at 11:47 AM, Marco Ceppi wrote:
>
>> Hey everyone,
>>
>> I'm currently working on a charm for NS1, which is a DNS service
>> accessible via API. There are quite a few of these types of services
>> available and I'd like to develop a best practice about how the Juju model
>> is presented as DNS. My hope is this would eventually be something that
>> Juju includes in it's model, but for now charms seem to be a clean way to
>> present this.
>>
>> My proposal for how public DNS would be configured for mapping a juju
>> deployment to resolvable DNS records is as follows:
>>
>> Given a root TLD: example.tld which represents the root of a model, the
>> following bundle would be represented as such:
>>
>> haproxy/0    104.196.197.94
>> mariadb/0    104.196.50.123
>> redis/0      104.196.105.166
>> silph-web/0  104.196.42.224
>> silph-web/1  104.196.117.185
>> silph-web/2  104.196.117.134
>>
>> I'd expect the following for DNS values
>>
>> haproxy.example.tld - 104.196.197.94
>> 0.haproxy.example.tld - 104.196.197.94
>> mariadb.example.tld - 104.196.50.123
>> 0.mariadb.example.tld - 104.196.50.123
>> redis.example.tld - 104.196.105.166
>> 0.redis.example.tld - 104.196.105.166
>> silph-web.example.tld - 104.196.42.224, 104.196.117.185, 104.196.117.134
>> 0.silph-web.example.tld - 104.196.42.224
>> 1.silph-web.example.tld - 104.196.117.185
>> 2.silph-web.example.tld - 104.196.117.134
>>
>>
+1 to the scheme, and +100 to the idea of the controller being a DNS
resolver for units.

I have exactly the same scheme in my local juju dns tool (which uses a
dnsmasq zone file), minus the RR entries. My root domain is just
.juju.
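
(For reference, the zone entries are nothing clever - roughly this sort
of thing, with made-up names and addresses, regenerated from juju
status:)

# dnsmasq config, one entry per unit
address=/0.foo.bar.juju/10.0.3.24
address=/1.foo.bar.juju/10.0.3.134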

Having a charm I can add to just give me that in dev would be awesome,
and even more awesome if I had the option to use the same charm in
CI/prod too. It would make configuration, scripting and debugging much
easier, as well as provide basic cross-protocol load balancing OOTB
(well, charms would need updating to use the DNS, I guess).

A few questions:

1) Any thoughts on TTL for units? I guess it's possibly not too much of a
problem, as charms will have explicit info as to other units, but there may
be some fun corner cases.

2) Could we add a charmhelpers function that turns a unit name string
into its canonical dns name, like we do already for url/path-safe unit
names, IIRC? (A rough sketch of the mapping is below.) Even better, down
the line, if juju could provide them in the hook env, that would be
sweet.

3) In the client interface layer for the charm, do you think it might be
possible to enable short-term dns caching for the unit? We just found yet
another performance issue where this would have really helped. I realise
this might be the wrong place to do it, but I thought I'd check.
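
On 2), the mapping itself is trivial - e.g. in shell, using Marco's
scheme and example domain (a charmhelpers function would just need to
agree on the scheme):

$ unit="silph-web/1"
$ echo "${unit#*/}.${unit%%/*}.example.tld"
1.silph-web.example.tld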


Thanks

--
Simon
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-23 Thread Simon Davy
On Thu, Feb 23, 2017 at 2:48 AM, Curtis Hovey-Canonical
<cur...@canonical.com> wrote:

> A new release of Juju, 2.1.0, and Conjure-up, are here!
>
>
> ## What's new in 2.1.0
>
> - Model migration
> - Interactive `add-cloud`
> - Networking changes
> - Conjure-up
> - LXD credential changes
> - Changes to the GUI
> - Instrumentation of Juju via Prometheus endpoints
> - Improved OpenStack keystone v3 authentication
> - New cloud-regions supported
> - Additional improvements
>

One thing that seems to have landed in 2.1, which is worth noting IMO, is
the local juju lxd image aliases.

tl;dr: juju 2.1 now looks for the lxd image alias juju/$series/$arch in the
local lxd server, and uses that if it finds it.

This is amazing. I can now build a local nightly image[1] that pre-installs
and pre-downloads a whole set of packages[2], and my local lxd units don't
have to install them when they spin up. Between layer-basic and Canonical
IS' basenode, that's about 111 packages that I don't need to install on
every machine in my 10 node bundle. It took my install hook times from
5min+ each to <1min, and probably halves my initial deploy time, on average.
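
For anyone wanting to try it, a minimal by-hand version looks something
like this (assuming xenial/amd64; the package list is illustrative - the
real thing is the nightly cron in [2]):

$ lxc launch ubuntu:xenial nightly
$ lxc exec nightly -- apt-get update
$ lxc exec nightly -- apt-get install -y python-yaml byobu
$ lxc stop nightly
$ lxc image delete juju/xenial/amd64 2>/dev/null || true
$ lxc publish nightly --alias juju/xenial/amd64
$ lxc delete nightly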

Oddly, I only found out about this indirectly via Andrew Wilkins' blog
post[3] on CentOS images, which suggested this was possible. I had to
actually look at the code[4] to figure it out.

For me, this is the single biggest feature in 2.1, and will save me 30mins+
a day, per person who works with juju on my team. But more than raw time,
it reduces the iteration interval, and the number of context switches I'm
doing as I wait for things to deploy. ++win.

I couldn't find any mention of this in the 2.1 lxd provider docs, but I
think it'd be worth calling out, as it's a big speed up in local
development.

My thanks to the folks who did this work. Very much appreciated.

[1] well, you could do this with juju 1.x, but it was messier.
[2] my current nightly cron:
https://gist.github.com/bloodearnest/3474741411c4fdd6c2bb64d08dc75040
[3] https://awilkins.id.au/post/juju-2.1-lxd-centos/
[4]
https://github.com/juju/juju/blob/staging/tools/lxdclient/client_image.go#L117

-- 
Simon
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-24 Thread Simon Davy
On Fri, Feb 24, 2017 at 11:14 AM, Andrew Wilkins
<andrew.wilk...@canonical.com> wrote:

> On Fri, Feb 24, 2017 at 6:51 PM Mark Shuttleworth  wrote:
>
>> On 24/02/17 11:30, Andrew Wilkins wrote:
>>
>> On Fri, Feb 24, 2017 at 6:15 PM Adam Collard 
>> wrote:
>>
>> On Fri, 24 Feb 2017 at 10:07 Adam Israel 
>> wrote:
>>
>> Thanks for calling this out, Simon! We should be shouting this from the
>> rooftops and celebrating in the streets.
>>
>>
>> Only if you also wave a big WARNING banner!
>>
>> I can definitely see value in pre-installing a bunch of things in your
>> LXD images as a way of speeding up the development/testing cycle, but doing
>> so might give you false confidence in your charm. It will become much
>> easier to forget to list a package that you need installing,  or to ensure
>> that you have the correct access (PPA credentials, or proxy details etc.)
>> and having your charm gracefully handle when those are missing.
>>
>> Juju promises charms encoding operations that can work across multiple
>> cloud providers, bare metal and containers please keep that in mind :)
>>
>>
>> Indeed, and this is the reason why it wasn't called out. We probably
>> should document it for power-users/charmers, but in general I wouldn't go
>> encouraging its use. Optimising for LXD is great for repeat deploys, but it
>> wouldn't be great if that leads to less attention to quality on the rest of
>> the providers.
>>
>> Anyway, I'm glad it's helping make charmers' lives easier!
>>
>>
>> We should call this out loudly because it helps people making charms.
>>
>> Those people are plenty smart enough to debug a failure if they forget a
>> dependency which was preinstalled in their dev images.
>>
>
> I was thinking about deployment times more than anything else. If you
> don't feel your user's pain, you're less likely to make it go away. But
> anyway, that can be fixed with automation as well (CI, as you say below).
>

I agree there is a risk here. In my specific case, I judge the benefits to
outweigh the costs, by quite some margin.

But that judgment is specific to my use case, where layer-basic and IS'
basenode add significant package churn on every node[1] (increasing the
benefit), and we have a full mojo-based CI pipeline for bundle changes
(lowering the cost).

On a different tack altogether, I think that reducing iteration time for
charm development is a *massive* win for users. Faster iterations mean
faster feature development and bug fixes, and more comprehensive testing
(as it costs less). I would estimate that the iteration improvement would
outweigh the increased risk from a missing pre-installed package, but YMMV.


[1] ok, so not every charm we deploy is layer based, but they are heading
that way...


>> Don't HIDE something that helps developers for fear of those developers
>> making mistakes, TEACH them to put CI or other out-of-band tests in place
>> anyway that will catch that every time.
>>
>
> FWIW, it wasn't intentionally hidden to start with, it was just missed. I
> made the changes primarily to support an external user who wanted to demo
> CentOS charms on LXD; the change also enabled custom images in general, and
> also slightly improved container startup time. Three birds, one stone; only
> one bird-hitting was reported ;)
>


This is hugely appreciated. I reckon 95% of my deployments in the average
week are to lxd, so improvements to the lxd provider affect my velocity
considerably.

Thanks

--
Simon
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju