Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Merlijn Sebrechts
As an aside: is there a good write-up somewhere about charm unit testing?
I'd like to do this but I'm not sure how. I am completely new to unit
testing, so I'm having a hard time seeing what a good unit test for a
charm would look like and what exactly should be tested.
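
For anyone in the same position, one common approach is to unit-test the pure logic a hook calls, with no deployment involved. A minimal sketch (the hook helper below is hypothetical, not taken from any real charm):

```python
import unittest

# Hypothetical helper that a config-changed hook might call:
# turn charm config into a server configuration line. Logic like
# this can be exercised without standing up a unit.
def render_listen_line(config):
    port = int(config.get("port", 8080))
    if not 0 < port < 65536:
        raise ValueError("port out of range: %d" % port)
    return "listen 0.0.0.0:%d" % port

class TestRenderListenLine(unittest.TestCase):
    def test_default_port(self):
        self.assertEqual(render_listen_line({}), "listen 0.0.0.0:8080")

    def test_configured_port(self):
        self.assertEqual(render_listen_line({"port": 80}),
                         "listen 0.0.0.0:80")

    def test_rejects_out_of_range_port(self):
        with self.assertRaises(ValueError):
            render_listen_line({"port": 99999})
```

Run with `python -m unittest`. The point is to factor hook logic into plain functions so the interesting behaviour is testable without a deployed environment.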

2016-03-17 1:52 GMT+01:00 Marco Ceppi :

> Hello everyone!
>
> This is an email I've been meaning to write for a while, and have
> rewritten a few times now. With 2.0 on the horizon and the charm ecosystem
> rapidly growing, I couldn't keep the idea to myself any longer.
>
> # tl;dr:
>
> We should stop writing Amulet tests in charms and instead only write them
> in bundles, force charms to do unit testing (when possible), and promote
> that all charms be included in bundles in the store.
>
> # Problem
>
> Without making this a novel, charm-testing and amulet started before
> bundles were even a construct in Juju with a spec written before Juju 1.0.
> Since then, many newcomers to the ecosystem have remarked how odd it is to
> be writing deployment validations at the charm level. Indeed, as years have
> gone by and new tools have sprung up, it's become clear that having an
> author try to model all the permutations of a charm's deployment and do the
> physical deploys at the charm level is tedious and incomplete at best.
>
> With the explosion of layers and improvements to unit testing in charms at
> that component level, I feel that continuing to create these bespoke
> "bundles" via amulet in a single charm will not be a robust solution going
> forward. As we sprint closer to Juju 2.0 we're seeing a higher demand for
> assurance of working scenarios, and a sharp focus on quality at every
> level. As such I'd like to propose the following policy changes:
>
> - All bundles must have tests before promulgation to the store
> - All charms need to have comprehensive tests (unit or amulet)
> - All charms should be included in a bundle
>
> I'll break down my reasoning and examples in the following sections:
>
> # All bundles must have tests before promulgation to the store
>
> Writing bundle tests with Amulet is actually a more compelling story today
> than writing an Amulet test case for a charm. As an example, there's a new
> ELK stack bundle being produced, here's what the test for that bundle looks
> like:
> https://github.com/juju-solutions/bundle-elk-stack/blob/master/tests/10-test-bundle
>
> This makes a lot of sense because it's asserting that the bundle is
> working as expected by the Author who put the bundle together. It's also
> loading the bundle.yaml as the deployment spec, meaning that as the bundle
> evolves, the tests will make sure it continues to work as expected. Also,
> this could potentially be used in future smoke tests for charms being
> updated if a CI process swaps out, say, elasticsearch for a newer version
> of a charm being reviewed. We can assert that both the unittests in
> elasticsearch work and it operates properly in an existing real world
> solution a la the bundle.
>
> Additional examples:
> -
> https://github.com/juju-solutions/bundle-realtime-syslog-analytics/blob/master/tests/01-bundle.py
> -
> https://github.com/juju-solutions/bundle-apache-core-batch-processing/blob/master/tests/01-bundle.py
>
> # All charms need to have comprehensive tests (unit or amulet)
>
> This is just a clarification and a more strongly typed policy change:
> require that charms have (preferably) unit tests or, if those are not
> applicable, an Amulet test. Bash doesn't really allow for unit testing, so
> in those scenarios Amulet tests would function as a valid testing case.
>
> There are also some charms which will not make sense as a bundle. One
> example is the recently promulgated Fiche charm:
> http://bazaar.launchpad.net/~charmers/charms/trusty/fiche/trunk/view/head:/tests/10-deploy
>  It's
> a standalone pastebin, but it's an awesome service that provides deployment
> validation with an Amulet test. The test stands up the charm, exercises
> configuration, and validates the service responds in an expected way. For
> scenarios where a charm does not have a bundle an Amulet test would be
> required.
>
> Any charm that currently includes an Amulet test is welcome to continue
> keeping such a test.
>
> # All charms should be included in a bundle
>
> This last one is to underscore that charms need to serve a purpose. This
> policy is written not as an absolute, but as a strongly worded
> suggestion, as there are always charms that are exceptions to the rule. One
> such example is the aforementioned Fiche charm which as a bundle would not
> make as much sense, but is still a purposeful charm.
>
> That being said, most users coming to consume Juju are looking to solve a
> problem. Bundles showcase solutions to problems that people can consume
> and get started with more quickly.
>
> As such, when new applications are charmed a test of "is this application
> something that serves a clear purpose" having a bundle 
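
An editorial aside on the bundle-test pattern above: beyond the Amulet specifics, the core of such a test is loading the bundle definition as the deployment spec and validating it. A toy sanity check in that spirit (the bundle dict is illustrative, pre-parsed as `yaml.safe_load` would produce):

```python
# Illustrative, pre-parsed bundle.yaml content (names are made up):
bundle = {
    "services": {
        "elasticsearch": {"charm": "cs:trusty/elasticsearch", "num_units": 1},
        "kibana": {"charm": "cs:trusty/kibana", "num_units": 1},
    },
    "relations": [["kibana:rest", "elasticsearch:client"]],
}

def check_bundle(bundle):
    """Verify every relation endpoint names a declared service, so the
    test keeps tracking the bundle as it evolves."""
    services = set(bundle["services"])
    for relation in bundle["relations"]:
        for endpoint in relation:
            service = endpoint.split(":", 1)[0]
            if service not in services:
                raise AssertionError("unknown service in relation: " + endpoint)
    return True

check_bundle(bundle)
```

A real bundle test would go on to deploy and exercise the services, but checks like this are what let the test evolve with the bundle rather than with any single charm.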

Re: Usability issues with status-history

2016-03-19 Thread Andrew Wilkins
On Sat, Mar 19, 2016 at 8:36 PM William Reade 
wrote:

> On Sat, Mar 19, 2016 at 4:39 AM, Ian Booth 
> wrote:
>
>>
>> I mostly agree but still believe there's a case for transient messages.
>> The case
>> where Juju is downloading an image and emits progress updates which go
>> into
>> status history is to me clearly a case where we needn't persist every
>> single one
>> (or any). In that case, it's not a charmer deciding but Juju. And with
>> status
>> updates like X% complete, as soon as a new message arrives, the old one is
>> superseded anyway. The user is surely just interested to know the current
>> status
>> and when it completes they don't care anymore. And Juju agent can still
>> decide
>> to, say, make every 10% of download progress messages non-transient so they
>> go to
>> history for future reference.
>>
>
> There are two distinct problems: collecting the data, and presenting
> information gleaned from that data. Adding complexity to the first task in
> the hope of simplifying the second mixes the concerns at a very deep level,
> and makes the whole stack harder to understand for everybody.
>
> Would this work as an initial improvement for 2.0:
>>
>> 1. Increase the limit of stored messages per entity to, say, 500 (from 100)
>>
>
> Messages-per-entity seems like a strange starting point, compared with
> either max age or max data size (or both). Storage concerns don't seem like
> a major risk: we're keeping a max 3 days/4 gigabytes of normal log messages
> in the database already, and I rather doubt that SetStatus calls generate
> anything like that magnitude of data. Shouldn't we just be following the
> same sort of trimming strategy there and leaving the dataset otherwise
> uncontaminated, and hence as useful as possible?
>
> 2. Allow messages emitted from Juju to be marked as transient
>> eg for download progress
>>
>
> -1, it's just extra complexity to special-case a particular kind of status
> in exchange for very questionable storage gains, and muddies the dataset to
> boot.
>
>
>> 3. Do smarter filtering of what is displayed with status-history
>> eg if we see the same tuple of messages over and over, consolidate
>>
>> TIME                    TYPE    STATUS      MESSAGE
>> 26 Dec 2015 13:51:59Z   agent   executing   running config-changed
>> hook
>> 26 Dec 2015 13:51:59Z   agent   idle
>> 26 Dec 2015 13:56:57Z   agent   executing   running update-status hook
>> 26 Dec 2015 13:56:59Z   agent   idle
>> 26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
>> 26 Dec 2015 14:01:59Z   agent   idle
>> 26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
>> 26 Dec 2015 14:01:59Z   agent   idle
>>
>> becomes
>>
>> TIME TYPE STATUS MESSAGE
>> 26 Dec 2015 13:51:59Z agent executing running config-changed hook
>> 26 Dec 2015 13:51:59Z agent idle
>> >> Repeated 3 times, last occurrence:
>> 26 Dec 2015 14:01:57Z agent executing running update-status hook
>> 26 Dec 2015 14:01:59Z agent idle
>>
>
> +100 to this sort of thing. It won't be perfect, but where it's imperfect
> we'll be able to see how to improve. And if we're always calculating it
> from the source data, we can improve the presentation/analytics and fix
> those bugs in isolation; if we mangle the data at collection time we
> sharply limit our options in that arena. (And surely sensible filtering
> will render the transient-download-message problem moot *anyway*, leaving
> us less reason to worry about (2)?)
>

+1 on collapsing repeats. I'd also prefer to add more data to status so
that we can collapse entries, rather than dropping data so that we
don't/can't see it in history.

What do we base "sensible filtering" on? Exact match on message isn't
enough for download messages, obviously, and I'd be very hesitant to bake
in any knowledge of specific message formats.

IIANM, our status entries can carry additional data which we don't render.
If we add the concept of overarching operations to status entries (e.g.
each "image download progress" entry is part of the "image download"
operation), then we could collapse all adjacent entries within that
operation. This could be a simple string in the status data; or we could
extend the status schema. Either way.
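
To make the collapse-by-operation idea concrete: adjacent entries sharing a key can be folded at presentation time in a few lines (the entry shape and the "op" field below are illustrative, not Juju's actual status schema):

```python
from itertools import groupby

def collapse(entries):
    """Fold adjacent history entries sharing (type, status, op-or-message),
    keeping the last occurrence and a repeat count."""
    def key(e):
        return (e["type"], e["status"], e.get("op") or e["message"])
    out = []
    for _, group in groupby(entries, key=key):
        group = list(group)
        last = dict(group[-1])       # keep the most recent entry
        last["repeats"] = len(group)
        out.append(last)
    return out

history = [
    {"time": "13:56:57Z", "type": "agent", "status": "executing",
     "message": "running update-status hook", "op": "update-status"},
    {"time": "14:01:57Z", "type": "agent", "status": "executing",
     "message": "running update-status hook", "op": "update-status"},
    {"time": "14:01:59Z", "type": "agent", "status": "idle", "message": ""},
]
collapsed = collapse(history)
# The two update-status entries fold into one carrying repeats == 2.
```

Because the collapse happens on read, the underlying dataset stays complete and the grouping heuristic can be improved independently.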

Cheers
> William
>
>
>
>>
>>
>>
>>
>> > On Thu, Mar 17, 2016 at 6:30 AM, John Meinel 
>> wrote:
>> >
>> >>
>> >>
>> >> On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth 
>> >> wrote:
>> >>
>> >>>
>> >>> Machines, services and units all now support recording status
>> history. Two
>> >>> issues have come up:
>> >>>
>> >>> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
>> >>>
>> >>> For units, especially in steady state, status history is spammed with
>> >>> update-status hook invocations which can obscure the hooks we really
>> care
>> >>> about
>> >>>
>> >>> 2. https://bugs.launchpad.net/juju-core/+bug/1557918
>> >>>
>> >>> We now have the 

Re: PatchValue (AddCleanup) unsafe with non pointer receiver test

2016-03-19 Thread roger peppe
On 17 March 2016 at 04:52, John Meinel  wrote:
> I came across this in the LXD test suite today, which was hard to track
> down, so I figured I'd let everyone know about it.
>
> We have a nice helper in testing.IsolationSuite with "PatchValue()" that
> will change a global for you during the test, and then during TearDown()
> will cleanup the patch it made.
> It turns out that if your test doesn't have a pointer receiver this fails,

This is your problem. You should *always* write tests on the pointer
to the suite, and it's a bug if you don't. It's perfectly OK for helper
suites (aside: I wish we called them "fixtures") to have mutable state,
and that can't work if you define methods on the value type.

We actually already have a tool that can catch this bug - the -copylocks
flag to go vet can check that sync.Mutex is not passed by value,
so if we put a Mutex inside CleanupSuite, then go vet will complain about
programs like this:

package main

import (
    "github.com/juju/testing"
    gc "gopkg.in/check.v1"
)

type X struct {
    testing.IsolationSuite
}

func main() {
}

func (x X) TestFoo(c *gc.C) {
}

with a message like this:

/home/rog/src/tst.go:15: TestFoo passes lock by value: main.X
contains github.com/juju/testing.IsolationSuite contains
github.com/juju/testing.CleanupSuite contains sync.Mutex

> because the "suite" object is a copy, so when PatchValue calls AddCleanup to
> do s.testStack = append(...) the suite object goes away before TearDown is
> called.
>
> You can see this with the attached test suite.
>
> Example:
>
> func (s mySuite) TestFoo(c *gc.C) {
>  // This is unsafe because s.PatchValue ends up modifying s.testStack but
> that attribute only exists
>  // for the life of the TestFoo function
>   s.PatchValue(, "newvalue")
> }
>
> I tried adding the attached patch so that we catch places we are using
> AddCleanup unsafely, but it fails a few tests I wasn't expecting, so I'm not
> sure if I'm actually doing the right thing.
>
> John
> =:->
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>



Re: Charm Store policy updates and refinement for 2.0

2016-03-19 Thread Jorge O. Castro
On Fri, Mar 18, 2016 at 12:11 PM, Tom Barber  wrote:
> I assume this applies only to bundles that get promoted to recommended,
> otherwise how would you enforce it?

Yes, to be clear these policies only apply to things that are in the
recommended/promulgated space. So jujucharms.com/haproxy, not
jujucharms.com/u/jorge/haproxy

As always, everyone is free to do what they like in their own namespaces.

-- 
Jorge Castro
Canonical Ltd.
http://jujucharms.com/ - The fastest way to model your service



Usability issues with status-history

2016-03-19 Thread Ian Booth

Machines, services and units all now support recording status history. Two
issues have come up:

1. https://bugs.launchpad.net/juju-core/+bug/1530840

For units, especially in steady state, status history is spammed with
update-status hook invocations which can obscure the hooks we really care about

2. https://bugs.launchpad.net/juju-core/+bug/1557918

We now have the concept of recording a machine provisioning status. This is
great because it gives observability to what is happening as a node is being
allocated in the cloud. With LXD, this feature has been used to give visibility
to progress of the image downloads (finally, yay). But what happens is that the
machine status history gets filled with lots of "Downloading x%" type messages.

We have a pruner which caps the history to 100 entries per entity. But we need a
way to deal with the spam, and what is displayed when the user asks for juju
status-history.

Options to solve bug 1

A.
Filter out duplicate status entries when presenting to the user, e.g. show
"update-status (x43)". This still allows the circular buffer for that entity to
fill with "spam", though. We could make the circular buffer size much larger, but
there's still the UX issue of a user asking for the X most recent entries:
what do we give them? The X most recent de-duped entries?

B.
If, when we go to record history, the previous entry is the same as what we
are about to record, just update its timestamp. For update-status, my view
is that we don't really care how many times the hook was run, but rather
when it last ran.
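
A sketch of what Option B implies for the recorder (illustrative Python, not the juju-core implementation): compare the incoming entry against the newest stored one and, on a match, refresh its timestamp instead of appending:

```python
def record(history, entry):
    """Append a status entry, or just refresh the timestamp (and bump a
    count) when it matches the newest entry apart from its time."""
    if history:
        last = history[-1]
        if all(last.get(k) == entry.get(k)
               for k in ("type", "status", "message")):
            last["time"] = entry["time"]
            last["count"] = last.get("count", 1) + 1
            return
    fresh = dict(entry)
    fresh.setdefault("count", 1)
    history.append(fresh)

history = []
record(history, {"time": "t1", "type": "agent", "status": "executing",
                 "message": "running update-status hook"})
record(history, {"time": "t2", "type": "agent", "status": "executing",
                 "message": "running update-status hook"})
record(history, {"time": "t3", "type": "agent", "status": "idle",
                 "message": ""})
# history now holds two entries; the first has time "t2" and count 2.
```

The trade-off versus Option A is that the per-occurrence timestamps are lost at collection time, which is exactly the concern raised elsewhere in this thread.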

Options to solve bug 2

A.
Allow a flag when setting status to say "this status value is transient", so
that it is recorded in status but not logged in history.

B.
Do not record machine provisioning status in history. It could be argued this
info is more or less transient and once the machine comes up, we don't care so
much about it anymore. It was introduced to give observability to machine
allocation.

Any other options?
Opinions on preferred solutions?

I really want to get this fixed before Juju 2.0








Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Tom Barber
Here's another one, which I can't find in the docs, but apologies if it
exists.

It would be good to be able to specify allowed origin IPs for juju expose
for cloud types that support it.

For example, in EC2, instead of allowing 0.0.0.0, allow a specific address or
range. But also expand that further, so each service could be exposed to
different addresses: say, different services in the same model exposed to
different clients, or similar.
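
For illustration only (this is not a Juju feature today), the check such an expose policy implies is straightforward with CIDR ranges; Python's stdlib `ipaddress` module handles it:

```python
import ipaddress

def origin_allowed(origin_ip, allowed_cidrs):
    """Return True if origin_ip falls inside any allowed CIDR range."""
    ip = ipaddress.ip_address(origin_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

# e.g. expose a service only to one office range plus one extra host:
allowed = ["203.0.113.0/24", "198.51.100.7/32"]
assert origin_allowed("203.0.113.42", allowed)
assert not origin_allowed("192.0.2.1", allowed)
```

On EC2 this would naturally map to security-group rules scoped per exposed service rather than a blanket 0.0.0.0/0 rule.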

Tom

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstart goal, but you can
always help by sponsoring the project)

On 19 March 2016 at 03:20, Andrew Wilkins 
wrote:

> On Sat, Mar 19, 2016 at 12:53 AM Jacek Nykis 
> wrote:
>
>> On 08/03/16 23:51, Mark Shuttleworth wrote:
>> > *Storage*
>> >
>> >  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>> >  * object storage abstraction (probably just mapping to S3-compatible
>> APIS)
>> >
>> > I'm interested in feedback on the operations aspects of storage. For
>> > example, whether it would be helpful to provide lifecycle management for
>> > storage being re-assigned (e.g. launch a new database application but
>> > reuse block devices previously bound to an old database  instance).
>> > Also, I think the intersection of storage modelling and MAAS hasn't
>> > really been explored, and since we see a lot of interest in the use of
>> > charms to deploy software-defined storage solutions, this probably will
>> > need thinking and work.
>>
>> Hi Mark,
>>
>> I took juju storage for a spin a few weeks ago. It is a great idea and
>> I'm sure it will simplify our models (no more need for
>> block-storage-broker and storage charms). It will also improve security
>> because block-storage-broker needs nova credentials to work
>>
>> I only played with storage briefly but I hope my feedback and ideas will
>> be useful
>>
>> * IMO it would be incredibly useful to have storage lifecycle
>> management. Deploying a new database using pre-existing block device you
>> mentioned would certainly be nice. Another scenario could be users who
>> deploy to local disk and decide to migrate to block storage later
>> without redeploying and manual data migration
>>
>>
>
>> One day we may even be able to connect storage with actions. I'm
>> thinking "storage snapshot" action followed by juju deploy to create up
>> to date database clone for testing/staging/dev
>>
>> * I found documentation confusing. It's difficult for me to say exactly
>> what is wrong but I had to read it a few times before things became
>> clear. I raised some specific points on github:
>> https://github.com/juju/docs/issues/889
>>
>> * The CLI for storage is not as nice as other juju commands. For example
>> we have this in the docs:
>>
>> juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
>>
>> I suspect most charms will use single storage device so it may be
>> possible to optimize for that use case. For example we could have:
>>
>> juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd
>> --storage-size=10G
>>
>
> It seems like the issues you've noted below are all documentation issues,
> rather than limitations in the implementation. Please correct me if I'm
> wrong.
>
>
>> If we come up with sensible defaults for different providers we could
>> make end users' experience even better by making --storage-type optional
>>
>
> Storage type is already optional. If you omit it, you'll get the provider
> default. e.g. for AWS, that's EBS magnetic disks.
>
>
>> * it would be good to have ability to use single storage stanza in
>> metadata.yaml that supports all types of storage. They way it is done
>> now [0] means I can't test block storage hooks in my local dev
>> environment. It also forces end users to look for storage labels that
>> are supported
>>
>> [0] http://paste.ubuntu.com./15414289/
>
>
> Not quite sure what you mean here. If you have a "filesystem" type, you
> can use any storage provider that supports natively creating filesystems
> (e.g. "tmpfs") or block devices (e.g. "ebs"). If you specify the latter,
> Juju will manage the filesystem on the block device.
>
> * The way things are now, hooks are responsible for creating the filesystem
>> on block devices. I feel that as a charmer I shouldn't need to know that
>> much about storage internals. I would like to ask juju and get
>> preconfigured path back. Whether it's formatted and mounted block
>> device, GlusterFS or local filesystem it should not matter
>
>
> That is exactly what it does, so again, I think this is an issue of
> documentation clarity. If you're using the "filesystem" type, Juju will
> create the filesystem; if you use "block", it won't.
>
> If you could provide more details on what you're doing (off list, I 
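
For reference, the filesystem/block distinction discussed above shows up in a charm's metadata.yaml storage stanza roughly like this (a sketch; the names and optional fields shown are illustrative):

```yaml
storage:
  data:
    type: filesystem      # Juju creates and mounts the filesystem;
    location: /srv/data   # the charm just gets a ready path
  fast-cache:
    type: block           # the charm receives a raw block device
    multiple:
      range: 0-1          # an optional storage instance is allowed
```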

Charm Store policy updates and refinement for 2.0

2016-03-19 Thread Jorge O. Castro
Hello everyone,

With 2.0 around the corner we decided to spend some time cleaning up
the page everyone loves to hate, the Juju Charm Store policy:

https://jujucharms.com/docs/1.25/authors-charm-policy

and here is what I would like to propose:

https://github.com/castrojo/docs/blob/master/src/en/authors-charm-policy.md

I've done a few things here:

- I've separated it from one huge paragraph to sections, General,
Testing and Quality requirements, Metadata requirements, and Security
requirements.
- I've split out things that a charm/bundle MUST do and what it SHOULD
do in each section to make it clearer on what is a hard requirement
and what is a recommendation.
- I've removed most of the Ubuntu-specific jargon and generalized it
to include other OSes such as CentOS.
- Made documenting interfaces and external dependencies a requirement.

There are also some new policies that we need ack from ~charmers in
order to implement. Specifically we've made the testing and quality
requirements explicit. I've also added a requirement of using Juju
Resources (which appears to be undocumented?) for payloads.

Recommendations from everyone on what we should include here would be
most welcome; in particular, our recommendations around Windows charms
are non-existent.


-- 
Jorge Castro
Canonical Ltd.
http://jujucharms.com/ - The fastest way to model your service



Preparing for the next beta - CI runs on feature branches

2016-03-19 Thread Cheryl Jennings
Hey Everyone!

The cutoff for the next beta is just around the corner (Monday, March 21)!
In order to meet this cutoff, please pay attention to the CI reports [0] on
your branches and address failures on your branch when they arise.

To avoid wasting CI test time on branches with known errors, we will be
introducing a way to skip CI runs for those feature branches.  If your
branch has a unique failure, or has not pulled in an updated master
containing fixes for known CI bugs, a bug will be opened against your
branch with the "block-ci-testing" tag[1].  Branches with
"block-ci-testing" bugs in a non-Fix Committed state will not be run
through CI.

For example, master is currently blocked because of bug #1558158[2].  Once
that bug is fixed, your branch will not be tested until you pull in an
updated version of master which contains that fix.  Feature branches
without pre-existing "block-ci-testing" bugs are being tested until that
bug is marked as Fix Released.

Once you have brought in the latest master, and / or addressed the
unique-to-your-branch bug, set the "block-ci-testing" bug against your
branch to Fix Committed to signal that your branch is ready for CI testing.

Please let me know if you have any questions.
Thanks!
-Cheryl

[0] http://reports.vapour.ws/releases
[1] https://bugs.launchpad.net/juju-core/+bugs/?field.tag=block-ci-testing
[2] https://bugs.launchpad.net/juju-core/+bug/1558158


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Ryan Beisner
Good evening,

I really like the notion of a bundle possessing functional tests as an
enhancement to test coverage. I agree with almost all of those ideas. :-)
tl;dr: I would suggest that we consider bundle tests 'in addition to' and
not 'as a replacement of' individual charm tests, because:


*# Coverage and relevance*
Any given charm may have many different modes of operation -- features
which are enabled in some bundles but not in others.  A bundle test will
likely only exercise that charm in the context of its configuration as it
pertains to that bundle.  However, those who propose changes to the
individual charm should know (via tests) if they've functionally broken the
wider set of its knobs, bells and levers, which may be irrelevant to, or
not testable in the bundle's amulet test due to its differing perspective.
This opens potential functional test coverage gaps if we lean solely on the
bundle for the test.

There are numerous cases where a charm can shift personalities and use
cases, but not always on-the-fly in an already-deployed model.  In those
cases, it may take a completely different and new deployment topology and
configuration (bundle) to be able to exercise the relevant functional
tests.  Without integrated amulet tests within the charm, one would have to
publish multiple bundles, each containing separate amulet tests.  For
low-dev-velocity charms, for simple charms, or for charms that aren't
likely to be involved in complex workloads, this may be manageable.  But I
don't think we should discourage or stop looking for individual charm
amulet tests even there.

A charm's integrated amulet test can be both more focused and more
expansive in what it exercises, as it can contain multiple deployment
topologies and configurations (equivalent to cycling multiple unique
bundles).  For example:  charm-xyz with and without SSL;  or in HA and
without HA;  or IPv4 vs. IPv6; or IPv4 HA vs. IPv6 HA, multicast vs.
unicast;  [IPv6 + HA + SSL] vs [IPv4 + HA + SSL]; or mysql deploying mysql
proper vs. mysql deploying a variant;   and you can see the gist of the
coverage explosion which translates to having a whole load of bundles to
produce and maintain.


*# Dev and test: cost, scale and velocity*
Individual charm amulet tests are an important piece in testing large or
complex models.  I'll share some bits of what we do for OpenStack charms as
an example.  No bias.  :-)

Each of the OpenStack charms contain amulet test definitions.  We lean
heavily on those tests to deploy fractions of a full OpenStack bundle as
the core of our CI development gate.  With [27 charms] x [stable + dev] x
[8 Ubuntu/OpenStack Release Combos], there are currently *~432* possible
variations of amulet tests (derived bundles of fractional OpenStacks).  A
subset of those are executed in gate, depending on relevance to the
developer's proposed change.  This allows us to endure a high velocity of
focused testing on development in these very active charms.  Because the
derived models are much smaller than the reference bundle, we can give
developers rapid and automated feedback, plus they can iterate on
development outside of our CI without having to be able to deploy a full
OpenStack.

That is not to say that we don't have acceptance and integration tests for
full OpenStack bundles.  We do that in the form of mojo specs which
dynamically deploy any number of full OpenStack bundle topologies and
configurations against multiple Ubuntu+OpenStack release combos, using
either the dev or the stable set of OpenStack charms.  It basically takes
what I've described above for amulet and allows us to pivot entire bundles
into different models automatically.  There are currently *84* such
OpenStack mojo specs with tests (bundle equivalents).

Fear not, this is mostly accomplished with bundle inheritance, yaml foo,
and shared test libraries. We're not actually maintaining *~516 bundles*.
But if we were to achieve the current level of coverage with bundles,
that's approximately how many there would need to be.  This includes the
upcoming Xenial and Mitaka releases.  Reduce by ~12% when Juno EOLs.  Add
12% when we hit Newton B1, and so on.


*# How I'd like to use the proposed ideas*
There are some OpenStack reference bundles in the charm store.  My
suggested approach would be to continue to leverage individual charm amulet
tests while adding functional tests to the existing charm store bundles.
That would increase test coverage, and provide a mechanism to validate
proposed changes to those specific bundles, such as to re-validate the
bundles when charm versions are revved within them.


To summarize, I am:

-1 to stopping or discouraging individual charm amulet tests

+1 for every charm containing amulet tests

+1 for every charm containing unit tests

+1 for every charm having amulet coverage in at least 1 bundle

+1 for every bundle possessing amulet tests


Also open to feedback, discussion, suggestions, kicks in the shin.

Thanks for all 

Re: Charm Store policy updates and refinement for 2.0

2016-03-19 Thread Charles Butler
Big +1 to the categories

What i'd like to see is the policy document move to strong language where
we can build tooling around the automated checking of policy.

Refactoring the MUSTs and SHOULDs gives us a strong lead on that language:
MUST == has to be satisfied
SHOULD == area for improvement (proper use of warn, which won't fail charm
proof for a change)

e.g. we require OSI-approved licenses; we can scan the copyright file with a
license bot for the lingo of approved licenses, and otherwise flag it for
human review, since it may be an acceptable license we're not aware of.

The more points in policy we can convert to strongly typed lingo, the more
targeted and automated we can make portions of policy review; it also scopes
the mission of ~charmers, a limited resource responsible for enforcing this
document.
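
As a sketch of the license-bot idea (a toy, not an actual `charm proof` check): scan the copyright file for well-known OSI license lingo and flag anything unrecognized for human review:

```python
# Toy license scan: match well-known OSI license phrases; anything
# unmatched gets flagged for ~charmers review, since it may be an
# acceptable license the bot doesn't know about.
KNOWN_LICENSE_PHRASES = {
    "apache license, version 2.0": "Apache-2.0",
    "gnu general public license": "GPL",
    "mit license": "MIT",
    "bsd license": "BSD",
}

def check_copyright(text):
    """Return (license_id, needs_human_review)."""
    lowered = text.lower()
    for phrase, license_id in KNOWN_LICENSE_PHRASES.items():
        if phrase in lowered:
            return license_id, False
    return None, True

result = check_copyright(
    "This charm is distributed under the Apache License, Version 2.0.")
# → ("Apache-2.0", False); unrecognized wording returns (None, True)
```

A real bot would want SPDX identifiers and fuzzier matching, but even this crude shape shows how a MUST ("OSI-approved license") becomes automatable with a human-review escape hatch.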



On Fri, Mar 18, 2016 at 11:59 AM Jorge O. Castro  wrote:

> Hello everyone,
>
> With 2.0 around the corner we decided to spend some time cleaning up
> the page everyone loves to hate, the Juju Charm Store policy:
>
> https://jujucharms.com/docs/1.25/authors-charm-policy
>
> and here is what I would like to propose:
>
> https://github.com/castrojo/docs/blob/master/src/en/authors-charm-policy.md
>
> I've done a few things here:
>
> - I've separated it from one huge paragraph to sections, General,
> Testing and Quality requirements, Metadata requirements, and Security
> requirements.
> - I've split out things that a charm/bundle MUST do and what it SHOULD
> do in each section to make it clearer on what is a hard requirement
> and what is a recommendation.
> - I've removed most of the Ubuntu-specific jargon and generalized it
> to include other OSes such as CentOS.
> - Made documenting interfaces and external dependencies a requirement.
>
> There are also some new policies that we need ack from ~charmers in
> order to implement. Specifically we've made the testing and quality
> requirements explicit. I've also added a requirement of using Juju
> Resources (which appears to be undocumented?) for payloads.
>
> Recommendations from everyone on what we should include here would be
> most welcome, specifically our recommendations around Windows charms
> is non-existent.
>
>
> --
> Jorge Castro
> Canonical Ltd.
> http://jujucharms.com/ - The fastest way to model your service
>


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Marco Ceppi
On Thu, Mar 17, 2016 at 9:08 AM Tom Barber  wrote:

> It's taken me about 2 weeks of on-and-off testing to get 4 unit tests
> working; getting everything to play ball is hard, so it would be good!
> Maybe I'll write a blog post about it once I'm done.
>

Fantastic! Do you have a link to these? Would love to see how these


>
> --
>
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
>
> (Thanks to the Saiku community we reached our Kickstart goal, but you can
> always help by sponsoring the project)
>
> On 17 March 2016 at 12:24, Merlijn Sebrechts 
> wrote:
>
>> As an aside: is there a good write-up somewhere about charm unit testing?
>> I'd like to do this but I'm not sure how. I am completely new to unit
>> testing, so I'm having a hard time seeing what a good unit test for a
>> charm would look like and what exactly should be tested.
>>
>> 2016-03-17 1:52 GMT+01:00 Marco Ceppi :
>>
>>> Hello everyone!
>>>
>>> This is an email I've been meaning to write for a while, and have
>>> rewritten a few times now. With 2.0 on the horizon and the charm ecosystem
>>> rapidly growing, I couldn't keep the idea to myself any longer.
>>>
>>> # tl;dr:
>>>
>>> We should stop writing Amulet tests in charms and instead write them
>>> only in bundles, require charms to do unit testing (when possible), and
>>> promote that all charms be included in bundles in the store.
>>>
>>> # Problem
>>>
>>> Without making this a novel: charm testing and Amulet started before
>>> bundles were even a construct in Juju, with a spec written before Juju 1.0.
>>> Since then, many newcomers to the ecosystem have remarked how odd it is to
>>> be writing deployment validations at the charm level. Indeed, as years have
>>> gone by and new tools have sprung up, it's become clear that having an
>>> author model all the permutations of a charm's deployment, and do the
>>> physical deploys at the charm level, is tedious and incomplete at best.
>>>
>>> With the explosion of layers and improvements to unit testing in charms
>>> at the component level, I feel that continuing to create these bespoke
>>> "bundles" via Amulet in a single charm will not be a robust solution going
>>> forward. As we sprint closer to Juju 2.0 we're seeing a higher demand for
>>> assurance of working scenarios, and a sharp focus on quality at every
>>> level. As such I'd like to propose the following policy changes:
>>>
>>> - All bundles must have tests before promulgation to the store
>>> - All charms need to have comprehensive tests (unit or amulet)
>>> - All charms should be included in a bundle
>>>
>>> I'll break down my reasoning and examples in the following sections:
>>>
>>> # All bundles must have tests before promulgation to the store
>>>
>>> Writing bundle tests with Amulet is actually a more compelling story
>>> today than writing an Amulet test case for a charm. As an example, there's
>>> a new ELK stack bundle being produced, here's what the test for that bundle
>>> looks like:
>>> https://github.com/juju-solutions/bundle-elk-stack/blob/master/tests/10-test-bundle
>>>
>>> This makes a lot of sense because it asserts that the bundle is
>>> working as expected by the author who put it together. It also
>>> loads the bundle.yaml as the deployment spec, meaning that as the bundle
>>> evolves the tests will continue to exercise it as expected. This could
>>> also potentially be used in future smoke tests for charms being
>>> updated: if a CI process swaps out, say, elasticsearch for a newer version
>>> of a charm being reviewed, we can assert both that the unit tests in
>>> elasticsearch pass and that it operates properly in an existing real-world
>>> solution, i.e. the bundle.
>>>
>>> Additional examples:
>>> -
>>> https://github.com/juju-solutions/bundle-realtime-syslog-analytics/blob/master/tests/01-bundle.py
>>> -
>>> https://github.com/juju-solutions/bundle-apache-core-batch-processing/blob/master/tests/01-bundle.py
>>>
>>> # All charms need to have comprehensive tests (unit or amulet)
>>>
>>> This is a clarification and a more strongly worded policy change
>>> requiring charms to have (preferably) unit tests or, where those are not
>>> applicable, an Amulet test. Bash doesn't really allow for unit testing, so
>>> in those scenarios an Amulet test would function as a valid testing case.
>>>
>>> There are also some charms which will not make sense as a bundle. One
>>> example is the recently promulgated Fiche charm:
>>> http://bazaar.launchpad.net/~charmers/charms/trusty/fiche/trunk/view/head:/tests/10-deploy
>>>  It's
>>> a standalone pastebin, but it's an awesome service that provides deployment
>>> validation with an Amulet test. The test stands up the charm, exercises
>>> configuration, and 

Re: Charmers application - David Ames

2016-03-19 Thread James Beedy
Team -

David played a monumental role in resolving a handful of issues I was
hitting my head on while trying to solidify my HA OpenStack deployment,
along with DVR issues I was experiencing prior to that. The issues were
rather in-depth and complex. David went a great deal out of his way to
identify where the bugs were in the charms at the root of my issues, and
ensured they were exposed and resolved in the respective charms. By doing
this, I feel David has subsequently played a large role in solidifying the
core functionality of the charms. It is evident that David cares a great
deal about the Juju ecosystem, about producing quality artifacts, and about
the community in general. It has been a pleasure working with David, and I
look forward to working with him in the future!

Thanks David!

~James
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: New feature for charmers - min-juju-version

2016-03-19 Thread Rick Harding
Thanks Nate, great stuff. I know a lot of folks are looking forward to
this helping our charming community as we fill out the model and
charms adapt and move forward.

On Thu, Mar 17, 2016 at 6:35 PM Nate Finch  wrote:

> Yes, it'll be ignored, and the charm will be deployed normally.
>
> On Thu, Mar 17, 2016 at 3:29 PM Ryan Beisner 
> wrote:
>
>> This is awesome.  What will happen if a charm possesses the flag in
>> metadata.yaml and is deployed with 1.25.x?  Will it gracefully ignore it?
>>
>> On Thu, Mar 17, 2016 at 1:57 PM, Nate Finch 
>> wrote:
>>
>>> There is a new (optional) top level field in the metadata.yaml file
>>> called min-juju-version. If supplied, this value specifies the minimum
>>> version of a Juju server with which the charm is compatible. When a user
>>> attempts to deploy a charm (whether from the charmstore or from local) that
>>> has min-juju-version specified, if the targeted model's Juju version is
>>> lower than that specified, then the user will be shown an error noting that
>>> the charm requires a newer version of Juju (and told what version they
>>> need). The format for min-juju-version is a string that follows the same
>>> scheme as our release versions, so you can be as specific as you like. For
>>> example, min-juju-version: "2.0.1-beta3" will deploy on 2.0.1 (release),
>>> but will not deploy on 2.0.1-alpha1 (since alpha1 is older than beta3).
>>>
>>> Note that, at this time, Juju 1.25.x does *not* recognize this flag, so
>>> it will, unfortunately, not be respected by 1.25 environments.
>>>
>>> This code just landed in master, so feel free to give it a spin.
>>>
>>> -Nate
>>>
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
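As a reference for readers, the field described above sits at the top level of a charm's metadata.yaml. A minimal hypothetical fragment (the charm name, summary, and description are placeholder boilerplate; only min-juju-version is the feature under discussion):

```yaml
# Hypothetical metadata.yaml fragment; only min-juju-version is the
# new field, the rest is placeholder boilerplate.
name: my-charm
summary: Example service
description: Uses model features introduced in Juju 2.0.
min-juju-version: "2.0.1-beta3"  # same scheme as Juju release versions
```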
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: New feature for charmers - min-juju-version

2016-03-19 Thread Nate Finch
Yes, it'll be ignored, and the charm will be deployed normally.

On Thu, Mar 17, 2016 at 3:29 PM Ryan Beisner 
wrote:

> This is awesome.  What will happen if a charm possesses the flag in
> metadata.yaml and is deployed with 1.25.x?  Will it gracefully ignore it?
>
> On Thu, Mar 17, 2016 at 1:57 PM, Nate Finch 
> wrote:
>
>> There is a new (optional) top level field in the metadata.yaml file
>> called min-juju-version. If supplied, this value specifies the minimum
>> version of a Juju server with which the charm is compatible. When a user
>> attempts to deploy a charm (whether from the charmstore or from local) that
>> has min-juju-version specified, if the targeted model's Juju version is
>> lower than that specified, then the user will be shown an error noting that
>> the charm requires a newer version of Juju (and told what version they
>> need). The format for min-juju-version is a string that follows the same
>> scheme as our release versions, so you can be as specific as you like. For
>> example, min-juju-version: "2.0.1-beta3" will deploy on 2.0.1 (release),
>> but will not deploy on 2.0.1-alpha1 (since alpha1 is older than beta3).
>>
>> Note that, at this time, Juju 1.25.x does *not* recognize this flag, so
>> it will, unfortunately, not be respected by 1.25 environments.
>>
>> This code just landed in master, so feel free to give it a spin.
>>
>> -Nate
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>>
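The alpha/beta/release ordering Nate describes can be sketched as follows. This is an illustrative approximation of the comparison semantics, not Juju's actual version parser:

```python
import re

# Illustrative approximation of the ordering described above
# (alpha < beta < release); not Juju's actual parser.
def parse(version):
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:-(alpha|beta)(\d+))?$", version)
    major, minor, patch, tag, n = m.groups()
    # An untagged release sorts after any alpha/beta of the same number.
    rank = {"alpha": 0, "beta": 1, None: 2}[tag]
    return (int(major), int(minor), int(patch), rank, int(n or 0))

def satisfies(model_version, min_juju_version):
    """Would a model at model_version accept this charm?"""
    return parse(model_version) >= parse(min_juju_version)

print(satisfies("2.0.1", "2.0.1-beta3"))         # True: release is newer than beta3
print(satisfies("2.0.1-alpha1", "2.0.1-beta3"))  # False: alpha1 is older than beta3
```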
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread roger peppe
On 16 March 2016 at 15:04, Kapil Thangavelu  wrote:
> Relations have associated config schemas that can be set by the user
> creating the relation. I.e. I could run one autoscaling service and
> associate with relation config for autoscale options to the relation with a
> given consumer service.

Great, I hoped that's what you meant.
I'm also +1 on this feature - it would enable all kinds of useful flexibility.

One recent example I've come across that could use this feature
is that we've got a service that can hand out credentials to services
that are related to it. At the moment the only way to state that
certain services should be handed certain classes of credential
is to have a config value that holds a map of service name to
credential info, which doesn't seem great - it's awkward, easy
to get wrong, and when a service goes away, its associated info
hangs around.

Having the credential info associated with the relation itself would be perfect.
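A purely hypothetical sketch of the contrast (neither snippet is implemented Juju syntax): today the mapping has to live in the credential service's own config, keyed by consumer service name, whereas relation-level config would let it live and die with each relation:

```yaml
# Hypothetical illustration only; not implemented Juju syntax.
# Today: a service-level config option holding a service-name -> class map,
# which goes stale when consumer services come and go.
options:
  credential-classes:
    type: string
    default: "{webapp: read-only, reporting: admin}"
# Proposed: the class is set per relation when it is created, e.g.
# (invented CLI): juju add-relation webapp creds --config class=read-only
```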

>
> On Wed, Mar 16, 2016 at 9:17 AM roger peppe 
> wrote:
>>
>> On 16 March 2016 at 12:31, Kapil Thangavelu  wrote:
>> >
>> >
>> > On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth 
>> > wrote:
>> >>
>> >> Hi folks
>> >>
>> >> We're starting to think about the next development cycle, and gathering
>> >> priorities and requests from users of Juju. I'm writing to outline some
>> >> current topics and also to invite requests or thoughts on relative
>> >> priorities - feel free to reply on-list or to me privately.
>> >>
>> >> An early cut of topics of interest is below.
>> >>
>> >> Operational concerns
>> >>
>> >> * LDAP integration for Juju controllers now we have multi-user
>> >> controllers
>> >> * Support for read-only config
>> >> * Support for things like passwords being disclosed to a subset of
>> >> user/operators
>> >> * LXD  container migration
>> >> * Shared uncommitted state - enable people to collaborate around
>> >> changes
>> >> they want to make in a model
>> >>
>> >> There has also been quite a lot of interest in log control - debug
>> >> settings for logging, verbosity control, and log redirection as a
>> >> systemic
>> >> property. This might be a good area for someone new to the project to
>> >> lead
>> >> design and implementation. Another similar area is the idea of
>> >> modelling
>> >> machine properties - things like apt / yum repositories, cache settings
>> >> etc,
>> >> and having the machine agent setup the machine / vm / container
>> >> according to
>> >> those properties.
>> >>
>> >
>> > ldap++. as brought up in the user list better support for aws best
>> > practice
>> > credential management, ie. bootstrapping with transient credentials (sts
>> > role assume, needs AWS_SECURITY_TOKEN support), and instance role for
>> > state
>> > servers.
>> >
>> >
>> >>
>> >> Core Model
>> >>
>> >>  * modelling individual services (i.e. each database exported by the db
>> >> application)
>> >>  * rich status (properties of those services and the application
>> >> itself)
>> >>  * config schemas and validation
>> >>  * relation config
>> >>
>> >> There is also interest in being able to invoke actions across a
>> >> relation
>> >> when the relation interface declares them. This would allow, for
>> >> example, a
>> >> benchmark operator charm to trigger benchmarks through a relation
>> >> rather
>> >> than having the operator do it manually.
>> >>
>> >
>> > in priority order, relation config
>>
>> What do you understand by the term "relation config"?

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Casey Marshall
On Wed, Mar 16, 2016 at 7:52 PM, Marco Ceppi 
wrote:

> Hello everyone!
>
> This is an email I've been meaning to write for a while, and have
> rewritten a few times now. With 2.0 on the horizon and the charm ecosystem
> rapidly growing, I couldn't keep the idea to myself any longer.
>
> # tl;dr:
>
> We should stop writing Amulet tests in charms and instead only write them
> Bundles and force charms to do unit-testing (when possible) and promote
> that all charms be included in bundles in the store.
>

I'm unfamiliar with unit testing best practices in charms specifically. Are
there packages that make it easier to test a reactive charm?

Failing that, can you recommend a good example of unit testing in a modern
charm that I could follow by example?

Thanks,
Casey
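For what it's worth, the usual shape of a charm unit test is ordinary Python: factor the hook body into a plain function and inject its side effects, then assert with mocks. The function below is invented for illustration and is not from any real charm or library:

```python
# Invented example, not from any real charm: the hook body is a plain
# function whose side effects (opening/closing ports) are injected, so a
# test can assert on behaviour with stand-ins instead of a deployed unit.
from unittest import mock


def configure_port(config, open_port, close_port, current=None):
    """Open the configured port, closing the previously opened one first."""
    port = config["port"]
    if current is not None and current != port:
        close_port(current)
    open_port(port)
    return port


# The "unit test" is then ordinary Python: no Juju, no cloud, milliseconds to run.
open_port, close_port = mock.Mock(), mock.Mock()
assert configure_port({"port": 8080}, open_port, close_port, current=80) == 8080
close_port.assert_called_once_with(80)
open_port.assert_called_once_with(8080)
```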
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Unfairly weighted charmstore results

2016-03-19 Thread Tom Barber
Cross posted from IRC:

Hello folks,

I have a gripe about the charm store search, mostly because it's really
badly weighted towards recommended charms, and finding what you (an end
user) want is really hard unless you know what you are doing.

Take this example:

https://jujucharms.com/q/pentaho

Now I'm writing a charm called Pentaho Data Integration, so why do I have
to scroll past 55 recommended charms that have nothing to do with what I
have looked for?

But

https://jujucharms.com/q/etl

Shows me exactly what I need at the top, with no recommended charms
blocking the view.

So I guess it's weighted towards tags, then names, sorta.

I'm not against recommended charms being placed at the top (they are
recommended, after all), but it appears the ranking could be vastly improved.

Off the top of my head, a ranking combining something like keyword
relevance, recommended vs. non-recommended, times deployed, age, tags, and
last updated would give a half-decent weighting for the charms and would
hopefully stop 55 unrelated charms appearing at the top of the list.
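That kind of composite ranking might look something like the sketch below; the fields and weights are purely illustrative, not the charm store's actual logic:

```python
from datetime import datetime, timezone

# Illustrative weights only; tuning these is the hard part.
WEIGHTS = {"relevance": 3.0, "recommended": 1.5, "deploys": 1.0, "freshness": 0.5}

def score(charm, now=None):
    """Composite score: keyword relevance, recommended flag, deploys, recency."""
    now = now or datetime.now(timezone.utc)
    days_stale = (now - charm["last_updated"]).days
    return (
        WEIGHTS["relevance"] * charm["keyword_relevance"]          # query match, 0..1
        + WEIGHTS["recommended"] * charm["recommended"]            # 0 or 1
        + WEIGHTS["deploys"] * min(charm["deploy_count"], 1000) / 1000
        + WEIGHTS["freshness"] / (1 + days_stale / 30)             # decays monthly
    )
```

With weights like these, a community charm that exactly matches the query outranks a recommended charm that does not match it at all, which is the behaviour being asked for.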

Now I guess I could dump pentaho in as a tag to get me to the top of the
SEO rankings, but it seems like the method could generally be improved as
the number of charms increases; quite plausibly, using something like Apache
Nutch to crawl the available charms and build a proper search facility
would improve things.

Cheers

Tom


--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter goal, but you
can always help by sponsoring the project)
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Openstack HA Portland Meetup Present

2016-03-19 Thread Rick Harding
Thanks for the update James, glad things went so well! From our end, we
appreciate the awesome first hand user feedback you're always willing to
reach out and provide. Our stuff just gets better with folks like you
putting it to the test day in and day out. I can't wait to get you some of
the new stuff coming that I think will greatly improve your next
presentation!

Rick

On Fri, Mar 18, 2016 at 1:42 AM James Beedy  wrote:

> I just gave this presentation --> http://54.172.233.114/ at an Openstack
> meetup at puppet headquarters in Portland, geared around HA Openstack
> production deployments. I wanted to update the team of the great news! It
> was by far the best presentation I have ever given, which isn't saying
> much, but people's heads were turning in disbelief at what they were
> seeing the entire time :-) Alongside my slides, I gave a live demo of the
> load balancing of my juju deployed presentation in real time as the group
> was accessing it. Following that, I scaled out my presentation by adding a
> unit of `present` live. Also, I went out on a limb and live demoed a fully
> ha test stack, adding a lxc unit of glance to one machine and removing it
> from another whilst keeping quorum, and lightly touched on how the
> hacluster charm works, and a bit on the concept of interfaces and deploying
> from the juju-gui. There was a surprising amount of interest in Juju
> following my presentation, a good amount of people had never heard of juju,
> most of them seemed to be blown away by what they had just witnessed :-)
>
> On that note, I want to thank everyone for the work you have all done to
> get the Juju ecosystem/framework to where it is today. As nice as it was to
> see my test stack performing so well at the demo, it's much more fulfilling
> to know that my production stack is purring like a kitten too... no
> downtime for 6+ months (since her production inception)!!!  Over the past 6
> months, I have had some major issues that I have resolved, and with no
> service downtime! To that extent, I may have ripped my stack's guts out and
> then put them back in again... quite a few times, with services running
> atop her. It's nice to see I can do all of that and she still stands and
> is able to recover and regain a healthy state.
>
> Here are the ip addresses and repos for my presentation. For anyone
> interested, you can login to the haproxy stats and see the traffic
> generated! As a side note - I was able to spin this all up and present
> using my charm dev amazon account --> HUGE +1 for the charm developer
> program!!
>
>
> presentation: http://54.172.233.114/
> haproxy stats: http://54.172.233.114:1/  --> un: admin, pw: password
> presentation markdown: https://github.com/jamesbeedy/os-ha-meetup-present
> layer present: https://github.com/jamesbeedy/layer-present
> https://github.com/jamesbeedy/os_ha_test_stack
>
>
> Thanks all!
>
> ~James
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Tom Barber
Couple of new things cropped up today that would be very useful.

a) Actions within the GUI. Currently it's a bit weird to drag stuff around
in the GUI and then drop to a shell to run actions; it doesn't make much
sense to a user.
b) Actions within bundles. For example, I'd like a few "standard" bundles,
but also a demo bundle seeded with sample data; to do this I'd need to run
some actions behind the scenes to get the stuff in place, which I can't do.
c) Upload files with actions. Currently for some things I need to pass in
some files and then trigger an action on the unit against each file. It
would be good to say path=/tmp/myfile.xyz and have the action upload that
to a place you define.

Tom

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter goal, but you
can always help by sponsoring the project)

On 16 March 2016 at 16:03, roger peppe  wrote:

> On 16 March 2016 at 15:04, Kapil Thangavelu  wrote:
> > Relations have associated config schemas that can be set by the user
> > creating the relation. I.e. I could run one autoscaling service and
> > associate with relation config for autoscale options to the relation
> with a
> > given consumer service.
>
> Great, I hoped that's what you meant.
> I'm also +1 on this feature - it would enable all kinds of useful
> flexibility.
>
> One recent example I've come across that could use this feature
> is that we've got a service that can hand out credentials to services
> that are related to it. At the moment the only way to state that
> certain services should be handed certain classes of credential
> is to have a config value that holds a map of service name to
> credential info, which doesn't seem great - it's awkward, easy
> to get wrong, and when a service goes away, its associated info
> hangs around.
>
> Having the credential info associated with the relation itself would be
> perfect.
>
> >
> > On Wed, Mar 16, 2016 at 9:17 AM roger peppe 
> > wrote:
> >>
> >> On 16 March 2016 at 12:31, Kapil Thangavelu  wrote:
> >> >
> >> >
> >> > On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth 
> >> > wrote:
> >> >>
> >> >> Hi folks
> >> >>
> >> >> We're starting to think about the next development cycle, and
> gathering
> >> >> priorities and requests from users of Juju. I'm writing to outline
> some
> >> >> current topics and also to invite requests or thoughts on relative
> >> >> priorities - feel free to reply on-list or to me privately.
> >> >>
> >> >> An early cut of topics of interest is below.
> >> >>
> >> >> Operational concerns
> >> >>
> >> >> * LDAP integration for Juju controllers now we have multi-user
> >> >> controllers
> >> >> * Support for read-only config
> >> >> * Support for things like passwords being disclosed to a subset of
> >> >> user/operators
> >> >> * LXD  container migration
> >> >> * Shared uncommitted state - enable people to collaborate around
> >> >> changes
> >> >> they want to make in a model
> >> >>
> >> >> There has also been quite a lot of interest in log control - debug
> >> >> settings for logging, verbosity control, and log redirection as a
> >> >> systemic
> >> >> property. This might be a good area for someone new to the project to
> >> >> lead
> >> >> design and implementation. Another similar area is the idea of
> >> >> modelling
> >> >> machine properties - things like apt / yum repositories, cache
> settings
> >> >> etc,
> >> >> and having the machine agent setup the machine / vm / container
> >> >> according to
> >> >> those properties.
> >> >>
> >> >
> >> > ldap++. as brought up in the user list better support for aws best
> >> > practice
> >> > credential management, ie. bootstrapping with transient credentials
> (sts
> >> > role assume, needs AWS_SECURITY_TOKEN support), and instance role for
> >> > state
> >> > servers.
> >> >
> >> >
> >> >>
> >> >> Core Model
> >> >>
> >> >>  * modelling individual services (i.e. each database exported by the
> db
> >> >> application)
> >> >>  * rich status (properties of those services and the application
> >> >> itself)
> >> >>  * config schemas and validation
> >> >>  * relation config
> >> >>
> >> >> There is also interest in being able to invoke actions across a
> >> >> relation
> >> >> when the relation interface declares them. This would allow, for
> >> >> example, a
> >> >> benchmark operator charm to trigger benchmarks through a relation
> >> >> rather
> >> >> than having the operator do it manually.
> >> >>
> >> >
> >> > in priority order, relation config
> >>
> >> What do you understand by the term "relation config"?
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 

Re: New feature for charmers - min-juju-version

2016-03-19 Thread Mark Shuttleworth
On 17/03/16 22:34, Nate Finch wrote:
> Yes, it'll be ignored, and the charm will be deployed normally.
>
> On Thu, Mar 17, 2016 at 3:29 PM Ryan Beisner 
> wrote:
>
>> This is awesome.  What will happen if a charm possesses the flag in
>> metadata.yaml and is deployed with 1.25.x?  Will it gracefully ignore it?
>>

I wonder if there is a clean way for us to have Juju 1.x reject the
charm very early in the process, giving an error that would essentially
amount to "not understood"? Or could we have the charm store
refuse to serve the charm to a 1.x Juju client / server?

Mark

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Charmers application

2016-03-19 Thread David Ames

Former ~charmer(s) formally requesting to come back into the fold.

In my IS days I was in ~charmers and did work around the charms IS uses 
the most: apache2, haproxy, nrpe-external-master, cassandra etc.


Now in the OpenStack Charmers team I get my hands dirty every day 
working with the OpenStack charms.


Here are some of the highlights of my work with OpenStack Charms:

 * Getting rabbitmq-server stabilized
   and dealing with simultaneous restarts
 * Added haproxy time out settings to the OpenStack API Charms
 * Implemented workload status in the core OpenStack Charms

Currently working on

 * Initial implementation of network spaces
 * DNS based HA for hacluster

Community engagement:

 * Worked with James Beedy on pain points in the charms
 * Worked with Nuage to land their charms in the charm store
 * Assist with OIL in on-boarding charm partners

Please consider my request to re-join ~charmers.

--
David Ames

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Tom Barber
I tend to agree with Ryan. I think the ideas are reasonably sound, although
I'm not sure about the "every charm should be part of a bundle" policy. I
certainly don't think you should discourage testing at the charm level: the
encapsulation can be useful, and you can never have too many tests!

I'm
+0.5 for charms part of a bundle expectation
-1 for discouraging charm tests

My 2 cents.

Tom

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter goal, but you
can always help by sponsoring the project)

On 17 March 2016 at 04:38, Ryan Beisner  wrote:

> Good evening,
>
> I really like the notion of a bundle possessing functional tests as an
> enhancement to test coverage.  I agree with almost all of those ideas.  :-)
>   tldr;  I would suggest that we consider bundle tests 'in addition to' and
> not 'as a replacement of' individual charm tests, because:
>
>
> *# Coverage and relevance*
> Any given charm may have many different modes of operation -- features
> which are enabled in some bundles but not in others.  A bundle test will
> likely only exercise that charm in the context of its configuration as it
> pertains to that bundle.  However, those who propose changes to the
> individual charm should know (via tests) if they've functionally broken the
> wider set of its knobs, bells and levers, which may be irrelevant to, or
> not testable in the bundle's amulet test due to its differing perspective.
> This opens potential functional test coverage gaps if we lean solely on the
> bundle for the test.
>
> There are numerous cases where a charm can shift personalities and use
> cases, but not always on-the-fly in an already-deployed model.  In those
> cases, it may take a completely different and new deployment topology and
> configuration (bundle) to be able to exercise the relevant functional
> tests.  Without integrated amulet tests within the charm, one would have to
> publish multiple bundles, each containing separate amulet tests.  For
> low-dev-velocity charms, for simple charms, or for charms that aren't
> likely to be involved in complex workloads, this may be manageable.  But I
> don't think we should discourage or stop looking for individual charm
> amulet tests even there.
>
> A charm's integrated amulet test can be both more focused and more
> expansive in what it exercises, as it can contain multiple deployment
> topologies and configurations (equivalent to cycling multiple unique
> bundles).  For example:  charm-xyz with and without SSL;  or in HA and
> without HA;  or IPv4 vs. IPv6; or IPv4 HA vs. IPv6 HA, multicast vs.
> unicast;  [IPv6 + HA + SSL] vs [IPv4 + HA + SSL]; or mysql deploying mysql
> proper vs. mysql deploying a variant;   and you can see the gist of the
> coverage explosion which translates to having a whole load of bundles to
> produce and maintain.
>
>
> *# Dev and test: cost, scale and velocity*
> Individual charm amulet tests are an important piece in testing large or
> complex models.  I'll share some bits of what we do for OpenStack charms as
> an example.  No bias.  :-)
>
> Each of the OpenStack charms contain amulet test definitions.  We lean
> heavily on those tests to deploy fractions of a full OpenStack bundle as
> the core of our CI development gate.  With [27 charms] x [stable + dev] x
> [8 Ubuntu/OpenStack Release Combos], there are currently* ~432 *possible
> variations of amulet tests (derived bundles of fractional OpenStacks).  A
> subset of those are executed in gate, depending on relevance to the
> developer's proposed change.  This allows us to endure a high velocity of
> focused testing on development in these very active charms.  Because the
> derived models are much smaller than the reference bundle, we can give
> developers rapid and automated feedback, plus they can iterate on
> development outside of our CI without having to be able to deploy a full
> OpenStack.
>
> That is not to say that we don't have acceptance and integration tests for
> full OpenStack bundles.  We do that in the form of mojo specs which
> dynamically deploy any number of full OpenStack bundle topologies and
> configurations against multiple Ubuntu+OpenStack release combos, using
> either the dev or the stable set of OpenStack charms.  It basically takes
> what I've described above for amulet and allows us to pivot entire bundles
> into different models automatically.  There are currently 84 such
> OpenStack mojo specs with tests (bundle equivalents).
>
> Fear not, this is mostly accomplished with bundle inheritance, yaml foo,
> and shared test libraries.  We're not actually maintaining ~516 bundles.
> But if we were to achieve the current level of coverage with bundles,
> that's approximately how many there would need to be.  

Re: Unfairly weighted charmstore results

2016-03-19 Thread Nate Finch
BTW, I reported a very similar problem in this bug:
https://github.com/CanonicalLtd/jujucharms.com/issues/192

On Thu, Mar 17, 2016 at 10:18 AM Uros Jovanovic <
uros.jovano...@canonical.com> wrote:

> Hi Tom,
>
> We currently bump the recommended charms over the community ones. The
> reason others show up is due to using N-grams (3-N) in search, and the
> ranking logic built on that puts recommended charms over the
> non-recommended ones. And we're not only searching over names of charms
> but over a bunch of content that a charm has.
>
> The system works relatively well for recommended charms if you know the
> name (or close to what the name is), but not in cases where a name is long
> and the charm is only in the community space. That's why you get better
> results with a short query vs. a longer one.
>
> We're working on providing better search results in the following weeks.
>
>
>
>
> On Thu, Mar 17, 2016 at 2:18 PM, Tom Barber 
> wrote:
>
>> Cross posted from IRC:
>>
>> Hello folks,
>>
>> I have a gripe about the charm store search, mostly because it's really
>> badly weighted towards recommended charms, and finding what you (an end
>> user) want is really hard unless you know what you are doing.
>>
>> Take this example:
>>
>> https://jujucharms.com/q/pentaho
>>
>> Now I'm writing a charm called Pentaho Data Integration, so why do I have
>> to scroll past 55 recommended charms that have nothing to do with what I
>> have looked for?
>>
>> But
>>
>> https://jujucharms.com/q/etl
>>
>> Shows me exactly what I need at the top, with no recommended charms
>> blocking the view.
>>
>> So I guess its weighted towards tags, then names, sorta.
>>
>> I'm not against recommended charms being dumped at the top, they are
>> recommended after all, but it appears the ranking could be vastly improved.
>>
>> Off the top of my head, a ranking combo of something like keyword
>> relevance, recommended vs non-recommended, times deployed, age, tags and
>> last updated would give a half-decent weighting for the charms and would
>> hopefully stop 55 unrelated charms appearing at the top of the list.
>>
>> Now I guess I could dump pentaho in as a tag to get to the top of the SEO
>> rankings, but it seems like the method generally could be improved as the
>> number of charms increases; quite plausibly, using something like Apache
>> Nutch to crawl the available charms and build a proper search facility
>> would improve things.
>>
>> Cheers
>>
>> Tom
>>
>>
>> --
>>
>> Director Meteorite.bi - Saiku Analytics Founder
>> Tel: +44(0)5603641316
>>
>> (Thanks to the Saiku community we reached our Kickstarter goal, but you
>> can always help by sponsoring the project)
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>>
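
Tom's suggested combination of ranking signals can be sketched as a simple weighted score. The weights and the `charm` field names below are entirely illustrative, not the charm store's actual logic:

```python
import math
import time

def score(charm, relevance, now=None):
    """Toy ranking sketch of the factors Tom lists. `charm` is a dict
    with `recommended`, `deploys`, and `last_updated` (epoch seconds);
    `relevance` is a 0..1 keyword-match score."""
    now = time.time() if now is None else now
    s = 8.0 * relevance                        # keyword relevance dominates
    s += 2.0 if charm["recommended"] else 0.0  # recommended is a bump, not a veto
    s += math.log1p(charm["deploys"])          # popularity, diminishing returns
    s -= 0.01 * (now - charm["last_updated"]) / 86400.0  # staleness penalty
    return s
```

With weights like these, a community charm that actually matches the query (relevance 1.0) outranks an unrelated recommended charm (relevance 0.0), so 55 unrelated results can no longer crowd out the match.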


Re: Usability issues with status-history

2016-03-19 Thread Horacio Duran
I think you are attributing too much importance to some data that can
hardly be considered information. Let me try to mention some points that I
think are valid here.
1) Not every message is valuable. You state that every message we throw
away makes it harder to debug, but certainly a message like "downloading
N%" is useless: you can record the start of the download and its
failure/end, but the intermediate steps are quite useless. We can argue
later which messages satisfy this criterion, but I am completely sure that
some do.
2) Not filling the history buffer with superfluous messages will help
here, although I do agree we should find a more elegant deletion criterion
(time sounds right) while not losing sight of size (at this point no one
has a record of the actual cost of storing these things in terms of space,
therefore we cannot make decisions based on the scale we want to support).

Regarding "making the charm decide", I agree it's something we might not
want to do. I would actually not expose this to the charm; I would just use
it internally. Since we are opinionated, we can very well decide what goes
there.

Adding more status information will help observability; not having a flag
for internal ephemeral statuses strikes me a bit as deleting with the left
hand what you just wrote with the right one (the saying might not translate
well from Spanish).

Finally, can this be a potential shoot-yourself-in-the-foot tool? Yes, but
so can almost any other part of our code base. This is something Juju will
use to report information to the user, therefore we control it; and if we
are not careful we will shoot ourselves, but then again, if we are not
careful with almost any part of the code, we will do so too.

Cheers

On Thu, Mar 17, 2016 at 6:51 AM, William Reade 
 wrote:

I see this as a combination of two problems:
>
> 1) We're spamming the end user with "whatever's in the status-history
> collection" rather than presenting a digest tuned for their needs.
>
> 2) Important messages get thrown away way too early, because we don't know
> which messages are important.
>
> I think the pocket/transient/expiry solutions boil down to "let's make the
> charmer decide what's important", and I don't think that will help. The
> charmer is only sending those messages *because she believes they're
> important*; even if we had "perfect" trimming heuristics for the end user,
> we do the *charmer* a disservice by leaving them no record of what their
> charm actually did.
>
> And, more generally: *every* message we throw away makes it hard to
> correctly analyse any older message. This applies within a given entity's
> domain, but also across entities: if you're trying to understand the
> interactions between 2 units, but one of those units is generating many
> more messages, you'll have 200 messages to inspect; but the 100 for the
> faster unit will only cover (say) the last 30 for the slower one, leaving
> 70 slow-unit messages that can't be correlated with the other unit's
> actions. At best, those messages are redundant; at worst, they're actively
> misleading.
>
> So: I do not believe that any approach that can be summed up as "let's
> throw away *more* messages" is going to help either. We need to fix (2) so
> that we have raw status data that extends reasonably far back in time; and
> then we need to fix (1) so that we usefully precis that data for the user
> (...and! leave a path that makes the raw data observable, for the cases
> where our heuristics are unhelpful).
>
> Cheers
> William
>
> PS re: UX of asking for N entries... I can see end-user stories for
> timespans, and for "the last N *significant* changes". What's the scenario
> where a user wants to see exactly 50 message atoms?
>
> On Thu, Mar 17, 2016 at 6:30 AM, John Meinel 
>  wrote:
>
>>
>>
>> On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth 
>>  wrote:
>>
>>>
>>> Machines, services and units all now support recording status history.
>>> Two
>>> issues have come up:
>>>
>>> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
>>>
>>> For units, especially in steady state, status history is spammed with
>>> update-status hook invocations which can obscure the hooks we really
>>> care about
>>>
>>> 2. https://bugs.launchpad.net/juju-core/+bug/1557918
>>>
>>> We now have the concept of recording a machine provisioning status. This
>>> is
>>> great because it gives observability to what is happening as a node is
>>> being
>>> allocated in the cloud. With LXD, this feature has been used to give
>>> visibility
>>> to progress of the image downloads (finally, yay). But what happens is
>>> that the
>>> machine status history gets filled with lots of "Downloading x%" type
>>> messages.
>>>
>>> We have a pruner which caps the history to 100 entries per entity. But
>>> we need a
>>> way to deal with the spam, and what is displayed when the user asks for
>>> juju
>>> status-history.
>>>
>>> Options to solve bug 1

Re: openstack base/autopilot

2016-03-19 Thread Daniel Westervelt
Also, the Openstack base package can be used to bootstrap Autopilot. If you 
install it and run "$ sudo openstack-install" the Autopilot will be one of the 
options presented to you.

- Daniel

Sent from my iPhone

> On Mar 19, 2016, at 4:58 AM, Mark Shuttleworth  wrote:
> 
>> On 19/03/16 03:58, Frank Ritchie wrote:
>> Does Openstack Autopilot use the Openstack Base bundle?
> 
> It uses the same charms, but it has to construct a custom model based on:
> 
> * choice of hypervisor (KVM, LXD)
> * choice of storage (object, block, SWIFT, Ceph, ScaleIO etc)
> * choice of SDN (NeutronOVS, ODL, Plumgrid etc)
> * choice of hardware
> * mapping of services to hardware
> 
> A bundle is a pre-canned model, the autopilot dynamically updates the
> model to deal with things like failures or additional hardware.
> 
> Mark
> 


Re: Usability issues with status-history

2016-03-19 Thread William Reade
On Sat, Mar 19, 2016 at 4:39 AM, Ian Booth  wrote:

>
> I mostly agree but still believe there's a case for transient messages.
> The case
> where Juju is downloading an image and emits progress updates which go into
> status history is to me clearly a case where we needn't persist every
> single one
> (or any). In that case, it's not a charmer deciding but Juju. And with
> status
> updates like X% complete, as soon as a new message arrives, the old one is
> superseded anyway. The user is surely just interested to know the current
> status
> and when it completes they don't care anymore. And Juju agent can still
> decide
> to say make every 10% of download progress messages non-transient to they
> go to
> history for future reference.
>

There are two distinct problems: collecting the data, and presenting
information gleaned from that data. Adding complexity to the first task in
the hope of simplifying the second mixes the concerns at a very deep level,
and makes the whole stack harder to understand for everybody.

> Would this work as an initial improvement for 2.0:
>
> 1. Increase limit of stored messages per entity so say 500 (from 100)
>

Messages-per-entity seems like a strange starting point, compared with
either max age or max data size (or both). Storage concerns don't seem like
a major risk: we're keeping a max 3 days/4 gigabytes of normal log messages
in the database already, and I rather doubt that SetStatus calls generate
anything like that magnitude of data. Shouldn't we just be following the
same sort of trimming strategy there and leaving the dataset otherwise
uncontaminated, and hence as useful as possible?
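
The age-or-size trimming strategy William references could be sketched like this. It is illustrative only, not Juju's actual pruner; the 3-day/4GB defaults simply echo the log-retention figures mentioned above:

```python
import time

def prune_history(entries, max_age_s=3 * 86400, max_bytes=4 << 30):
    """Trim a time-ordered list of {"time": epoch_s, "message": str}
    entries by age first, then by total size, dropping from the oldest
    end so recent history always survives."""
    cutoff = time.time() - max_age_s
    kept = [e for e in entries if e["time"] >= cutoff]
    # If still over the size budget, drop from the oldest end.
    total = sum(len(e["message"]) for e in kept)
    while kept and total > max_bytes:
        total -= len(kept[0]["message"])
        kept.pop(0)
    return kept
```

The point of trimming only by age/size, rather than by entry count or transience flags, is that the surviving window stays an uncontaminated record of everything that happened in it.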

> 2. Allow messages emitted from Juju to be marked as transient
> eg for download progress
>

-1, it's just extra complexity to special-case a particular kind of status
in exchange for very questionable storage gains, and muddies the dataset to
boot.


> 3. Do smarter filtering of what is displayed with status-history
> eg if we see the same tuple of messages over and over, consolidate
>
> TIMETYPESTATUS  MESSAGE
> 26 Dec 2015 13:51:59Z   agent   executing   running config-changed hook
> 26 Dec 2015 13:51:59Z   agent   idle
> 26 Dec 2015 13:56:57Z   agent   executing   running update-status hook
> 26 Dec 2015 13:56:59Z   agent   idle
> 26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
> 26 Dec 2015 14:01:59Z   agent   idle
> 26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
> 26 Dec 2015 14:01:59Z   agent   idle
>
> becomes
>
> TIME TYPE STATUS MESSAGE
> 26 Dec 2015 13:51:59Z agent executing running config-changed hook
> 26 Dec 2015 13:51:59Z agent idle
> >> Repeated 3 times, last occurrence:
> 26 Dec 2015 14:01:57Z agent executing running update-status hook
> 26 Dec 2015 14:01:59Z agent idle
>

+100 to this sort of thing. It won't be perfect, but where it's imperfect
we'll be able to see how to improve. And if we're always calculating it
from the source data, we can improve the presentation/analytics and fix
those bugs in isolation; if we mangle the data at collection time we
sharply limit our options in that arena. (And surely sensible filtering
will render the transient-download-message problem moot *anyway*, leaving
us less reason to worry about (2)?)
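
A presentation-time consolidation along the lines Ian shows could be sketched as follows. This is illustrative, not Juju's code; crucially it runs at display time over the raw data, so the stored history is left untouched:

```python
def consolidate(entries, window=2):
    """Collapse consecutive repeats of the same `window`-sized group of
    (type, status, message) tuples into one group with a repeat count,
    keeping the timestamps of the last occurrence.
    `entries` is a list of (time, type, status, message) tuples."""
    chunks = [entries[i:i + window] for i in range(0, len(entries), window)]
    groups = []
    for chunk in chunks:
        key = [(typ, status, msg) for (_, typ, status, msg) in chunk]
        if groups and groups[-1]["key"] == key:
            groups[-1]["count"] += 1
            groups[-1]["last"] = chunk  # timestamps of the last occurrence
        else:
            groups.append({"key": key, "count": 1, "last": chunk})
    return groups
```

Run over the eight entries in Ian's example, this yields two groups: the config-changed pair once, and the update-status pair repeated three times.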

Cheers
William



>
>
>
>
> > On Thu, Mar 17, 2016 at 6:30 AM, John Meinel 
> wrote:
> >
> >>
> >>
> >> On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth 
> >> wrote:
> >>
> >>>
> >>> Machines, services and units all now support recording status history.
> Two
> >>> issues have come up:
> >>>
> >>> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
> >>>
> >>> For units, especially in steady state, status history is spammed with
> >>> update-status hook invocations which can obscure the hooks we really
> care
> >>> about
> >>>
> >>> 2. https://bugs.launchpad.net/juju-core/+bug/1557918
> >>>
> >>> We now have the concept of recording a machine provisioning status.
> This
> >>> is
> >>> great because it gives observability to what is happening as a node is
> >>> being
> >>> allocated in the cloud. With LXD, this feature has been used to give
> >>> visibility
> >>> to progress of the image downloads (finally, yay). But what happens is
> >>> that the
> >>> machine status history gets filled with lots of "Downloading x%" type
> >>> messages.
> >>>
> >>> We have a pruner which caps the history to 100 entries per entity. But
> we
> >>> need a
> >>> way to deal with the spam, and what is displayed when the user asks for
> >>> juju
> >>> status-history.
> >>>
> >>> Options to solve bug 1
> >>>
> >>> A.
> >>> Filter out duplicate status entries when presenting to the user. eg say
> >>> "update-status (x43)". This still allows the circular buffer for that
> >>> entity to
> >>> fill with "spam" though. We could make the circular buffer size much
> >>> larger. But
> >>> there's still the issue of 

Re: Usability issues with status-history

2016-03-19 Thread William Reade
I see this as a combination of two problems:

1) We're spamming the end user with "whatever's in the status-history
collection" rather than presenting a digest tuned for their needs.
2) Important messages get thrown away way too early, because we don't know
which messages are important.

I think the pocket/transient/expiry solutions boil down to "let's make the
charmer decide what's important", and I don't think that will help. The
charmer is only sending those messages *because she believes they're
important*; even if we had "perfect" trimming heuristics for the end user,
we do the *charmer* a disservice by leaving them no record of what their
charm actually did.

And, more generally: *every* message we throw away makes it hard to
correctly analyse any older message. This applies within a given entity's
domain, but also across entities: if you're trying to understand the
interactions between 2 units, but one of those units is generating many
more messages, you'll have 200 messages to inspect; but the 100 for the
faster unit will only cover (say) the last 30 for the slower one, leaving
70 slow-unit messages that can't be correlated with the other unit's
actions. At best, those messages are redundant; at worst, they're actively
misleading.

So: I do not believe that any approach that can be summed up as "let's
throw away *more* messages" is going to help either. We need to fix (2) so
that we have raw status data that extends reasonably far back in time; and
then we need to fix (1) so that we usefully precis that data for the user
(...and! leave a path that makes the raw data observable, for the cases
where our heuristics are unhelpful).

Cheers
William

PS re: UX of asking for N entries... I can see end-user stories for
timespans, and for "the last N *significant* changes". What's the scenario
where a user wants to see exactly 50 message atoms?

On Thu, Mar 17, 2016 at 6:30 AM, John Meinel  wrote:

>
>
> On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth 
> wrote:
>
>>
>> Machines, services and units all now support recording status history. Two
>> issues have come up:
>>
>> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
>>
>> For units, especially in steady state, status history is spammed with
>> update-status hook invocations which can obscure the hooks we really care
>> about
>>
>> 2. https://bugs.launchpad.net/juju-core/+bug/1557918
>>
>> We now have the concept of recording a machine provisioning status. This
>> is
>> great because it gives observability to what is happening as a node is
>> being
>> allocated in the cloud. With LXD, this feature has been used to give
>> visibility
>> to progress of the image downloads (finally, yay). But what happens is
>> that the
>> machine status history gets filled with lots of "Downloading x%" type
>> messages.
>>
>> We have a pruner which caps the history to 100 entries per entity. But we
>> need a
>> way to deal with the spam, and what is displayed when the user asks for
>> juju
>> status-history.
>>
>> Options to solve bug 1
>>
>> A.
>> Filter out duplicate status entries when presenting to the user. eg say
>> "update-status (x43)". This still allows the circular buffer for that
>> entity to
>> fill with "spam" though. We could make the circular buffer size much
>> larger. But
>> there's still the issue of UX where a user asks for the X most recent
>> entries.
>> What do we give them? The X most recent de-duped entries?
>>
>> B.
>> If we go to record history and the most recent entry is the same
>> as
>> what we are about to record, just update the timestamp. For update
>> status, my
>> view is we don't really care how many times the hook was run, but rather
>> when
>> was the last time it ran.
>>
>
> The problem is that it isn't the same as the "last" message. Going to the
> original paste:
>
> TIMETYPESTATUS  MESSAGE
> 26 Dec 2015 13:51:59Z   agent   idle
> 26 Dec 2015 13:56:57Z   agent   executing   running update-status hook
> 26 Dec 2015 13:56:59Z   agent   idle
> 26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
> 26 Dec 2015 14:01:59Z   agent   idle
>
> Which means there is a "running update-status" *and* an "idle" message.
> So we can't just say "is the last message == this message". It would have
> to look deeper in history, and how deep should we be looking? what happens
> if a given charm does one more "status-set" during its update-status hook
> to set the status of the unit to "still happy". Then we would have 3.
> (agent executing, unit happy, agent idle)
>
>
>> Options to solve bug 2
>>
>> A.
>> Allow a flag when setting status to say "this status value is transient"
>> and so
>> it is recorded in status but not logged in history.
>>
>> B.
>> Do not record machine provisioning status in history. It could be argued
>> this
>> info is more or less transient and once the machine comes up, we don't
>> care so
>> much about it anymore. 

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Jacek Nykis
On 08/03/16 23:51, Mark Shuttleworth wrote:
> *Storage*
> 
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIS)
> 
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but
> reuse block devices previously bound to an old database  instance).
> Also, I think the intersection of storage modelling and MAAS hasn't
> really been explored, and since we see a lot of interest in the use of
> charms to deploy software-defined storage solutions, this probably will
> need thinking and work.

Hi Mark,

I took juju storage for a spin a few weeks ago. It is a great idea and
I'm sure it will simplify our models (no more need for
block-storage-broker and storage charms). It will also improve security
because block-storage-broker needs nova credentials to work

I only played with storage briefly but I hope my feedback and ideas will
be useful

* IMO it would be incredibly useful to have storage lifecycle
management. Deploying a new database using pre-existing block device you
mentioned would certainly be nice. Another scenario could be users who
deploy to local disk and decide to migrate to block storage later
without redeploying or manual data migration

One day we may even be able to connect storage with actions. I'm
thinking "storage snapshot" action followed by juju deploy to create up
to date database clone for testing/staging/dev

* I found the documentation confusing. It's difficult for me to say exactly
what is wrong but I had to read it a few times before things became
clear. I raised some specific points on github:
https://github.com/juju/docs/issues/889

* The CLI for storage is not as nice as other juju commands. For example, we
have this in the docs:
juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd

I suspect most charms will use a single storage device, so it may be
possible to optimize for that use case. For example we could have:

juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G

If we come up with sensible defaults for different providers we could
make end users' experience even better by making --storage-type optional

* it would be good to have the ability to use a single storage stanza in
metadata.yaml that supports all types of storage. The way it is done
now [0] means I can't test block storage hooks in my local dev
environment. It also forces end users to look for storage labels that
are supported

[0] http://paste.ubuntu.com/15414289/
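
For context, this is roughly the shape of the per-type stanza being discussed, declared per storage label in metadata.yaml (values illustrative):

```yaml
# Illustrative storage stanza; the charm must declare "filesystem" or
# "block" explicitly, which is the per-type split described above.
storage:
  data:
    type: filesystem    # or "block"; hooks see very different devices
    location: /srv/data # mount point requested by the charm
    minimum-size: 10G
```

Because the type is baked into the charm's metadata, a block-storage charm can't be exercised against plain local filesystems in a dev environment, which is the limitation being raised.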

* the way things are now hooks are responsible for creating filesystem
on block devices. I feel that as a charmer I shouldn't need to know that
much about storage internals. I would like to ask juju and get
preconfigured path back. Whether it's formatted and mounted block
device, GlusterFS or local filesystem it should not matter

* finally I hit 2 small bugs:

https://bugs.launchpad.net/juju-core/+bug/1539684
https://bugs.launchpad.net/juju-core/+bug/1546492


If anybody is interested in more details just ask, I'm happy to discuss
or try things out just note that I will be off next week so will most
likely reply on 29th


Regards,
Jacek





Re: Openstack HA Portland Meetup Present

2016-03-19 Thread Rick Harding
Thanks for the update James, glad things went so well! From our end, we
appreciate the awesome first hand user feedback you're always willing to
reach out and provide. Our stuff just gets better with folks like you
putting it to the test day in and day out. I can't wait to get you some of
the new stuff coming that I think will greatly improve your next
presentation!

Rick

On Fri, Mar 18, 2016 at 1:42 AM James Beedy  wrote:

> I just gave this presentation --> http://54.172.233.114/ at an Openstack
> meetup at puppet headquarters in Portland, geared around HA Openstack
> production deployments. I wanted to update the team of the great news! It
> was by far the best presentation I have ever given, which isn't saying
> much, but people's heads were turning in disbelief at what they were
> seeing the entire time :-) Alongside my slides, I gave a live demo of the
> load balancing of my juju deployed presentation in real time as the group
> was accessing it. Following that, I scaled out my presentation by adding a
> unit of `present` live. Also, I went out on a limb and live demoed a fully
> ha test stack, adding a lxc unit of glance to one machine and removing it
> from another whilst keeping quorum, and lightly touched on how the
> hacluster charm works, and a bit on the concept of interfaces and deploying
> from the juju-gui. There was a surprising amount of interest in Juju
> following my presentation, a good amount of people had never heard of juju,
> most of them seemed to be blown away by what they had just witnessed :-)
>
> On that note, I want to thank everyone for the work you have all done to
> get the Juju ecosystem/framework to where it is today. As nice as it was to
> see my test stack performing so well at the demo, it's much more fulfilling
> to know that my production stack is purring like a kitten too... no
> downtime for 6+ months (since her production inception)!!!  Over the past 6
> months, I have had some major issues that I have resolved, and with no
> service downtime! To that extent, I may have ripped my stack's guts out and
> then put them back in again... quite a few times with services running
> atop her -- it's nice to see I can do all of that and she still stands and
> is able to recover and regain a healthy state.
>
> Here are the ip addresses and repos for my presentation. For anyone
> interested, you can login to the haproxy stats and see the traffic
> generated! As a side note - I was able to spin this all up and present
> using my charm dev amazon account --> HUGE +1 for the charm developer
> program!!
>
>
> presentation: http://54.172.233.114/
> haproxy stats: http://54.172.233.114:1/  --> un: admin, pw: password
> presentation markdown: https://github.com/jamesbeedy/os-ha-meetup-present
> layer present: https://github.com/jamesbeedy/layer-present
> https://github.com/jamesbeedy/os_ha_test_stack
>
>
> Thanks all!
>
> ~James
>
>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


New feature for charmers - min-juju-version

2016-03-19 Thread Nate Finch
There is a new (optional) top level field in the metadata.yaml file called
min-juju-version. If supplied, this value specifies the minimum version of
a Juju server with which the charm is compatible. When a user attempts to
deploy a charm (whether from the charmstore or from local) that has
min-juju-version specified, if the targeted model's Juju version is lower
than that specified, then the user will be shown an error noting that the
charm requires a newer version of Juju (and told what version they need).
The format for min-juju-version is a string that follows the same scheme as
our release versions, so you can be as specific as you like. For example,
min-juju-version: "2.0.1-beta3" will deploy on 2.0.1 (release), but will
not deploy on 2.0.1-alpha1 (since alpha1 is older than beta3).
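
The pre-release ordering Nate describes (an alpha/beta build sorts before the final release of the same number, and alpha before beta) can be modelled like this. It is an illustrative sketch, not Juju's actual version-parsing code:

```python
TAGS = {"alpha": 0, "beta": 1}
FINAL = 2  # a bare "2.0.1" outranks any "2.0.1-<tag>N"

def parse(v):
    # "2.0.1-beta3" -> (2, 0, 1, 1, 3); "2.0.1" -> (2, 0, 1, 2, 0)
    num, _, pre = v.partition("-")
    major, minor, patch = (int(x) for x in num.split("."))
    for tag, rank in TAGS.items():
        if pre.startswith(tag):
            return (major, minor, patch, rank, int(pre[len(tag):]))
    return (major, minor, patch, FINAL, 0)

def satisfies(server_version, min_juju_version):
    """True when the targeted model's version meets min-juju-version;
    plain tuple comparison gives the ordering described above."""
    return parse(server_version) >= parse(min_juju_version)
```

So with `min-juju-version: "2.0.1-beta3"`, a 2.0.1 release server satisfies the check while 2.0.1-alpha1 does not, exactly as in Nate's example.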

Note that, at this time, Juju 1.25.x does *not* recognize this flag, so it
will, unfortunately, not be respected by 1.25 environments.

This code just landed in master, so feel free to give it a spin.

-Nate


juju versions are now in github.com/juju/version

2016-03-19 Thread Nate Finch
For a recent change in 2.0, we needed to be able to reference the
version.Number struct from both juju-core and charm.v6.  In order to
support this, we moved most of the contents of github.com/juju/juju/version
to a top-level repo at github.com/juju/version.  Note that certain
juju-core-specific data still exists in the original folder, such as the
current Juju binary version.

Just wanted everyone to be aware of the change, so that if you're writing
code that interacts with version numbers, you know where that code now
lives.

-Nate


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Ryan Beisner
On Thu, Mar 17, 2016 at 8:38 AM, Marco Ceppi 
wrote:

> On Thu, Mar 17, 2016 at 12:39 AM Ryan Beisner 
> wrote:
>
>> Good evening,
>>
>> I really like the notion of a bundle possessing functional tests as an
>> enhancement to test coverage.  I agree with almost all of those ideas.  :-)
>>   tldr;  I would suggest that we consider bundle tests 'in addition to' and
>> not 'as a replacement of' individual charm tests, because:
>>
>>
>> *# Coverage and relevance*
>> Any given charm may have many different modes of operation -- features
>> which are enabled in some bundles but not in others.  A bundle test will
>> likely only exercise that charm in the context of its configuration as it
>> pertains to that bundle.  However, those who propose changes to the
>> individual charm should know (via tests) if they've functionally broken the
>> wider set of its knobs, bells and levers, which may be irrelevant to, or
>> not testable in the bundle's amulet test due to its differing perspective.
>> This opens potential functional test coverage gaps if we lean solely on the
>> bundle for the test.
>>
>
> In a world with layered charms, do we still need functional tests at the
> charm level?
>

I believe the answer is not only yes, but that it becomes even more
important with layered charms because of the multiple variables involved.
Let's say you've rebuilt an existing layered charm that is composed of one
of your own updated layers and 4 other published layers which may or may
not have changed since the last charm build.  After rebuilding the charm,
you can now re-run the charm Amulet test to verify that the rebuilt charm
hasn't regressed in functionality.
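
As a concrete illustration of the charm-level unit testing discussed in this thread: hook logic that is factored into plain functions can be tested in milliseconds, with no deployment at all. The helper name below is hypothetical, not from a real charm:

```python
import unittest

def render_config(port, workers):
    # Hypothetical charm helper: pure logic like this is what layered
    # charms can cover with unit tests instead of full deploys.
    if not 0 < port < 65536:
        raise ValueError("invalid port: %r" % port)
    return "port=%d\nworkers=%d\n" % (port, workers)

class RenderConfigTest(unittest.TestCase):
    def test_renders_options(self):
        self.assertEqual(render_config(8080, 4), "port=8080\nworkers=4\n")

    def test_rejects_bad_port(self):
        with self.assertRaises(ValueError):
            render_config(0, 4)
```

Run with `python -m unittest`. Nothing here needs a cloud, which is what makes per-layer unit suites cheap to gate on; the functional (Amulet/bundle) tests then cover what unit tests cannot, namely the deployed interactions.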

If there's no charm amulet test, you modify the bundle (because the bundle
is not likely pointing to your newly-built charm), deploy the bundle and
execute its tests.  This may only utilize a portion of the charm's
capability, so the remaining capability of the re-built charm is at risk
for breakage if that is the gate.



>
> I'd also like to clarify that this would not deprecate functional tests in
> charms but makes it so the policy reads EITHER unit tests or functional
> tests, with a preference on a layered approach using unit tests.
> Development is quicker, and since every level of the charm is now segmented
> into repositories which (should) have their own testing, we don't need to
> validate that X interface works in Y permutation: the interface is a
> library with both halves of the communication channel, and with tests.
>
>
>> There are numerous cases where a charm can shift personalities and use
>> cases, but not always on-the-fly in an already-deployed model.  In those
>> cases, it may take a completely different and new deployment topology and
>> configuration (bundle) to be able to exercise the relevant functional
>> tests.  Without integrated amulet tests within the charm, one would have to
>> publish multiple bundles, each containing separate amulet tests.  For
>> low-dev-velocity charms, for simple charms, or for charms that aren't
>> likely to be involved in complex workloads, this may be manageable.  But I
>> don't think we should discourage or stop looking for individual charm
>> amulet tests even there.
>>
>
> We will always support charms with Amulet tests, ALWAYS, I think it'd even
> be the hallmark of an exceptionally well written charm if it had the means
> to do extensive functional testing.
>

*>>> # tl;dr:*

*>>> We should stop writing Amulet tests in charms and instead only write
them in bundles, force charms to do unit-testing (when possible), and
promote that all charms be included in bundles in the store.*

My main alarm here was the tl;dr.  I think we should not stop writing
Amulet tests in charms.  Rather, encourage all fronts:  charm level Amulet
tests, bundle Amulet tests, layer unit tests and charm unit tests.



> I also have a separate email I'm authoring where we should be leveraging
> health checks /inside/ a charm rather than externally poking it, so that
> any deployment at any time, in any mutation, can be asked "hey, you ok?" and
> get back a detailed report that assertions could be run against. I have a
> follow up (and one of the reasons this email took me so long)
>

+1 to the "self-aware charm" as discussed (in Malta I think), as an added
enhancement to quality and observability efforts.



>
>
>> A charm's integrated amulet test can be both more focused and more
>> expansive in what it exercises, as it can contain multiple deployment
>> topologies and configurations (equivalent to cycling multiple unique
>> bundles).  For example:  charm-xyz with and without SSL;  or in HA and
>> without HA;  or IPv4 vs. IPv6; or IPv4 HA vs. IPv6 HA, multicast vs.
>> unicast;  [IPv6 + HA + SSL] vs [IPv4 + HA + SSL]; or mysql deploying mysql
>> proper vs. mysql deploying a variant;   and you can see the gist of the
>> coverage explosion which translates to having a whole load of bundles 

Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Benjamin Saller
The observation which might be too basic here is that for an Amulet test to
do something useful it needs to exercise the relations. This implies
(almost always) another charm. When your testing depends on more than one
charm (in what might be a synthetic situation) you are talking about
bundles. By depending on the tests living in a bundle we are saying this is
a known good, tested configuration intended for real world usage. Those are
important properties and not ones made by an Amulet test alone.

Even if it is the intention of an Amulet test to provide a guarantee of a
known-good real-world deployment, that guarantee isn't surfaced to the
consumer of the charm. When consuming a bundle with tests, the implications
to the user are both more transparent and practical.

-Ben

On Thu, Mar 17, 2016 at 9:12 AM Ryan Beisner 
wrote:

> On Thu, Mar 17, 2016 at 8:38 AM, Marco Ceppi 
> wrote:
>
>> On Thu, Mar 17, 2016 at 12:39 AM Ryan Beisner 
>> wrote:
>>
>>> Good evening,
>>>
>>> I really like the notion of a bundle possessing functional tests as an
>>> enhancement to test coverage.  I agree with almost all of those ideas.  :-)
>>>   tldr;  I would suggest that we consider bundle tests 'in addition to' and
>>> not 'as a replacement of' individual charm tests, because:
>>>
>>>
>>> *# Coverage and relevance*
>>> Any given charm may have many different modes of operation -- features
>>> which are enabled in some bundles but not in others.  A bundle test will
>>> likely only exercise that charm in the context of its configuration as it
>>> pertains to that bundle.  However, those who propose changes to the
>>> individual charm should know (via tests) if they've functionally broken the
>>> wider set of its knobs, bells and levers, which may be irrelevant to, or
>>> not testable in the bundle's amulet test due to its differing perspective.
>>> This opens potential functional test coverage gaps if we lean solely on the
>>> bundle for the test.
>>>
>>
>> In a world with layered charms, do we still need functional tests at the
>> charm level?
>>
>
> I believe the answer is not only yes, but that it becomes even more
> important with layered charms because of the multiple variables involved.
> Let's say you've rebuilt an existing layered charm that is composed of one
> of your own updated layers and 4 other published layers which may or may
> not have changed since the last charm build.  After rebuilding the charm,
> you can now re-run the charm Amulet test to verify that the rebuilt charm
> hasn't regressed in functionality.
>
> If there's no charm amulet test, you modify the bundle (because the
> bundle is not likely pointing to your newly-built charm), deploy the bundle
> and execute its tests.  This may only utilize a portion of the charm's
> capability, so the remaining capability of the re-built charm is at risk
> for breakage if that is the gate.
>
>
>
>>
>> I'd also like to clarify that this would not deprecate functional tests
>> in charms, but makes it so the policy reads EITHER unit tests or functional
>> tests, with a preference on a layered approach using unit tests. Development
>> is quicker, and since every level of the charm is now segmented into
>> repositories which (should) have their own testing, we don't need to
>> validate that X interface works in Y permutation, as the interface is a
>> library with both halves of the communication channel and its own tests.
>>
>>
>>> There are numerous cases where a charm can shift personalities and use
>>> cases, but not always on-the-fly in an already-deployed model.  In those
>>> cases, it may take a completely different and new deployment topology and
>>> configuration (bundle) to be able to exercise the relevant functional
>>> tests.  Without integrated amulet tests within the charm, one would have to
>>> publish multiple bundles, each containing separate amulet tests.  For
>>> low-dev-velocity charms, for simple charms, or for charms that aren't
>>> likely to be involved in complex workloads, this may be manageable.  But I
>>> don't think we should discourage or stop looking for individual charm
>>> amulet tests even there.
>>>
>>
>> We will always support charms with Amulet tests, ALWAYS. I think it'd
>> even be the hallmark of an exceptionally well written charm if it had the
>> means to do extensive functional testing.
>>
>
> *>>> # tl;dr:*
>
> *>>> We should stop writing Amulet tests in charms and instead only write
> them Bundles and force charms to do unit-testing (when possible) and
> promote that all charms be included in bundles in the store.*
>
> My main alarm here was the tl;dr.  I think we should not stop writing
> Amulet tests in charms.  Rather, encourage all fronts:  charm level Amulet
> tests, bundle Amulet tests, layer unit tests and charm unit tests.
>
>
>
>> I also have a separate email I'm authoring where we should be leveraging
>> health checks /inside/ a charm rather than 

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Kapil Thangavelu
On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth  wrote:

> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.
>
> An early cut of topics of interest is below.
>
>
>
> *Operational concerns*
> * LDAP integration for Juju controllers now we have multi-user controllers
> * Support for read-only config
> * Support for things like passwords being disclosed to a subset of
> user/operators
> * LXD container migration
> * Shared uncommitted state - enable people to collaborate around changes
> they want to make in a model
>
> There has also been quite a lot of interest in log control - debug
> settings for logging, verbosity control, and log redirection as a systemic
> property. This might be a good area for someone new to the project to lead
> design and implementation. Another similar area is the idea of modelling
> machine properties - things like apt / yum repositories, cache settings
> etc, and having the machine agent setup the machine / vm / container
> according to those properties.
>
>
ldap++. as brought up in the user list better support for aws best practice
credential management, ie. bootstrapping with transient credentials (sts
role assume, needs AWS_SECURITY_TOKEN support), and instance role for state
servers.



>
>
> *Core Model*
>  * modelling individual services (i.e. each database exported by the db
> application)
>  * rich status (properties of those services and the application itself)
>  * config schemas and validation
>  * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.
>
>
in priority order: relation config, config schemas/validation, rich status.
relation config is a huge boon to services that are multi-tenant to other
services, as the workaround is to create either copies per tenant or
intermediaries.


> *Storage*
>
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIs)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database  instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.
>
>
it may be out of band, but with storage comes backups/snapshots. also of
interest is encryption on block and object storage using cloud native
mechanisms where available.


>
>
> *Clouds and providers *
>  * System Z and LinuxONE
>  * Oracle Cloud
>
> There is also a general desire to revisit and refactor the provider
> interface. Now we have seen many cloud providers get done, we are in a
> better position to design the best provider interface. This would be a
> welcome area of contribution for someone new to the project who wants to
> make it easier for folks creating new cloud providers. We also see constant
> requests for a Linode provider that would be a good target for a refactored
> interface.
>
>
>
>
> *Usability*
>  * expanding the set of known clouds and regions
>  * improving the handling of credentials across clouds
>


Autoscaling, either tighter integration with cloud native features or juju
provided abstraction.
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Marco Ceppi
On Thu, Mar 17, 2016 at 5:39 AM Tom Barber  wrote:

> I tend to agree with Ryan. I think the ideas are reasonably sound,
> although I'm not sure about the "every charm should be part of a bundle"
> policy, but I certainly don't think you should discourage testing at the
> charm level; the encapsulation can be useful, and you can never have too
> many tests!
>

Thanks for the feedback, I appreciate it. I want to clarify, since I did a
terrible job explaining: I don't wish to stop Amulet tests in charms, but
instead to really focus on driving unit tests in charms - and fewer charms,
more layers. This will allow for quicker iterations and faster dev/test
cycles.
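For anyone (like Merlijn upthread) wondering what a charm unit test can look
like: here is a minimal sketch, using only the standard library, of testing
hook logic with the Juju hook tools mocked out. The handler and all names in
it are hypothetical, not from any real charm - the point is only that pure
logic with injected hook tools needs no deployed environment to test.

```python
import unittest
from unittest import mock

# A hypothetical hook handler, written so the Juju hook tools
# (open-port/close-port wrappers) are passed in and can be mocked.
def configure_port(config, open_port, close_port, current=None):
    """Open the configured port, closing the previous one if it changed."""
    port = config.get('port', 80)
    if current is not None and current != port:
        close_port(current)
    open_port(port)
    return port

class ConfigurePortTest(unittest.TestCase):
    def test_port_change_closes_old_port(self):
        open_port, close_port = mock.Mock(), mock.Mock()
        new = configure_port({'port': 8080}, open_port, close_port, current=80)
        self.assertEqual(new, 8080)
        close_port.assert_called_once_with(80)
        open_port.assert_called_once_with(8080)

    def test_defaults_when_port_unset(self):
        open_port = mock.Mock()
        configure_port({}, open_port, mock.Mock())
        open_port.assert_called_once_with(80)
```

Run with `python -m unittest`; no Juju controller or cloud is needed, which
is exactly what makes the dev/test cycle fast.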


> I'm
> +0.5 for charms part of a bundle expectation
> -1 for discouraging charm tests
>

Want to caveat again: charm tests would be one of the two (or both) of
[unittest, amulet].


> My 2 cents.
>

Thanks!


>
> Tom
>
> --
>
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
>
> (Thanks to the Saiku community we reached our Kickstarter goal, but you
> can always help by sponsoring the project)
>
> On 17 March 2016 at 04:38, Ryan Beisner 
> wrote:
>
>> Good evening,
>>
>> I really like the notion of a bundle possessing functional tests as an
>> enhancement to test coverage.  I agree with almost all of those ideas.  :-)
>>   tldr;  I would suggest that we consider bundle tests 'in addition to' and
>> not 'as a replacement of' individual charm tests, because:
>>
>>
>> *# Coverage and relevance*
>> Any given charm may have many different modes of operation -- features
>> which are enabled in some bundles but not in others.  A bundle test will
>> likely only exercise that charm in the context of its configuration as it
>> pertains to that bundle.  However, those who propose changes to the
>> individual charm should know (via tests) if they've functionally broken the
>> wider set of its knobs, bells and levers, which may be irrelevant to, or
>> not testable in the bundle's amulet test due to its differing perspective.
>> This opens potential functional test coverage gaps if we lean solely on the
>> bundle for the test.
>>
>> There are numerous cases where a charm can shift personalities and use
>> cases, but not always on-the-fly in an already-deployed model.  In those
>> cases, it may take a completely different and new deployment topology and
>> configuration (bundle) to be able to exercise the relevant functional
>> tests.  Without integrated amulet tests within the charm, one would have to
>> publish multiple bundles, each containing separate amulet tests.  For
>> low-dev-velocity charms, for simple charms, or for charms that aren't
>> likely to be involved in complex workloads, this may be manageable.  But I
>> don't think we should discourage or stop looking for individual charm
>> amulet tests even there.
>>
>> A charm's integrated amulet test can be both more focused and more
>> expansive in what it exercises, as it can contain multiple deployment
>> topologies and configurations (equivalent to cycling multiple unique
>> bundles).  For example:  charm-xyz with and without SSL;  or in HA and
>> without HA;  or IPv4 vs. IPv6; or IPv4 HA vs. IPv6 HA, multicast vs.
>> unicast;  [IPv6 + HA + SSL] vs [IPv4 + HA + SSL]; or mysql deploying mysql
>> proper vs. mysql deploying a variant;   and you can see the gist of the
>> coverage explosion which translates to having a whole load of bundles to
>> produce and maintain.
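To make the coverage explosion Ryan describes concrete, a quick
back-of-envelope sketch (the axis names and values are illustrative, drawn
from the examples above):

```python
from itertools import product

# Configuration axes for a single hypothetical charm, per the examples
# above: SSL on/off, HA on/off, IP family, and transport.
axes = {
    'ssl': [True, False],
    'ha': [True, False],
    'ip': ['ipv4', 'ipv6'],
    'transport': ['multicast', 'unicast'],
}

# If tests lived only in bundles, each combination would need its own
# bundle to produce and maintain.
scenarios = list(product(*axes.values()))
print(len(scenarios))  # 2 * 2 * 2 * 2 = 16
```

Four binary axes already means 16 bundles; every new option multiplies the
count again.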
>>
>>
>> *# Dev and test: cost, scale and velocity*
>> Individual charm amulet tests are an important piece in testing large or
>> complex models.  I'll share some bits of what we do for OpenStack charms as
>> an example.  No bias.  :-)
>>
>> Each of the OpenStack charms contain amulet test definitions.  We lean
>> heavily on those tests to deploy fractions of a full OpenStack bundle as
>> the core of our CI development gate.  With [27 charms] x [stable + dev] x
>> [8 Ubuntu/OpenStack Release Combos], there are currently *~432* possible
>> variations of amulet tests (derived bundles of fractional OpenStacks).  A
>> subset of those are executed in gate, depending on relevance to the
>> developer's proposed change.  This allows us to endure a high velocity of
>> focused testing on development in these very active charms.  Because the
>> derived models are much smaller than the reference bundle, we can give
>> developers rapid and automated feedback, plus they can iterate on
>> development outside of our CI without having to be able to deploy a full
>> OpenStack.
>>
>> That is not to say that we don't have acceptance and integration tests
>> for full OpenStack bundles.  We do that in the form of mojo specs which
>> dynamically deploy any number of full OpenStack bundle topologies and
>> configurations against multiple 

Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread roger peppe
On 16 March 2016 at 12:31, Kapil Thangavelu  wrote:
>
>
> On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth  wrote:
>>
>> Hi folks
>>
>> We're starting to think about the next development cycle, and gathering
>> priorities and requests from users of Juju. I'm writing to outline some
>> current topics and also to invite requests or thoughts on relative
>> priorities - feel free to reply on-list or to me privately.
>>
>> An early cut of topics of interest is below.
>>
>> Operational concerns
>>
>> * LDAP integration for Juju controllers now we have multi-user controllers
>> * Support for read-only config
>> * Support for things like passwords being disclosed to a subset of
>> user/operators
>> * LXD container migration
>> * Shared uncommitted state - enable people to collaborate around changes
>> they want to make in a model
>>
>> There has also been quite a lot of interest in log control - debug
>> settings for logging, verbosity control, and log redirection as a systemic
>> property. This might be a good area for someone new to the project to lead
>> design and implementation. Another similar area is the idea of modelling
>> machine properties - things like apt / yum repositories, cache settings etc,
>> and having the machine agent setup the machine / vm / container according to
>> those properties.
>>
>
> ldap++. as brought up in the user list better support for aws best practice
> credential management, ie. bootstrapping with transient credentials (sts
> role assume, needs AWS_SECURITY_TOKEN support), and instance role for state
> servers.
>
>
>>
>> Core Model
>>
>>  * modelling individual services (i.e. each database exported by the db
>> application)
>>  * rich status (properties of those services and the application itself)
>>  * config schemas and validation
>>  * relation config
>>
>> There is also interest in being able to invoke actions across a relation
>> when the relation interface declares them. This would allow, for example, a
>> benchmark operator charm to trigger benchmarks through a relation rather
>> than having the operator do it manually.
>>
>
> in priority order, relation config

What do you understand by the term "relation config"?



Re: Charm Store policy updates and refinement for 2.0

2016-03-19 Thread Tom Barber
Cool!

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter goal, but you can
always help by sponsoring the project)

On 18 March 2016 at 16:15, Jorge O. Castro  wrote:

> On Fri, Mar 18, 2016 at 12:11 PM, Tom Barber 
> wrote:
> > I assume this applies to only bundles that get promoted to recommended;
> > otherwise, how would you enforce it?
>
> Yes, to be clear these policies only apply to things that are in the
> recommended/promulgated space. So jujucharms.com/haproxy, not
> jujucharms.com/u/jorge/haproxy
>
> As always, everyone is free to do what they like in namespaces.
>
> --
> Jorge Castro
> Canonical Ltd.
> http://jujucharms.com/ - The fastest way to model your service
>


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Billy Olsen
On Thu, Mar 17, 2016 at 10:40 AM, Benjamin Saller <
benjamin.sal...@canonical.com> wrote:

> The observation which might be too basic here is that for an Amulet test
> to do something useful it needs to exercise the relations. This implies
> (almost always) another charm. When your testing depends on more than one
> charm (in what might be a synthetic situation) you are talking about
> bundles.
>

Not sure I agree here. As I see it, the purpose of the amulet tests should
be to validate that the service provided by the charm is functional, and
this may or may not involve other charms. If bundles are used to actually
communicate that these are known good real world deployments, then bundles
would not apply everywhere that an amulet test exercising multiple charms
fits, so a distinction between bundle tests and amulet tests is still
needed. For example, a charm that provides a service which is part of a
larger solution may not need the entire solution stood up in order to
validate the functionality of the services it provides. However, since it's
not the full solution or bundle, it wouldn't make sense to write a bundle
test which exercises this one piece of the stack.


> By depending on the tests living in a bundle we are saying this is a known
> good, tested configuration intended for real world usage. Those are
> important properties and not ones made by an Amulet test alone.
>
> Even if it is the intention of an Amulet test to provide a guarantee of a
> known good real world deployment, that guarantee isn't surfaced to the
> consumer of the charm. When consuming a bundle with tests, the implications
> to the user are both more transparent and more practical.
>

Yes, and as such, all bundles should have tests that show the validity of
the bundle.

Essentially, I think the tests boil down to the following:

1. Unit Tests - test the code of the charm itself
2. Amulet Tests - provide a function-level test of the charm itself
3. Bundle Tests - provide a system-level test of the solution provided by
the bundle


>
> -Ben
>
> On Thu, Mar 17, 2016 at 9:12 AM Ryan Beisner 
> wrote:
>
>> On Thu, Mar 17, 2016 at 8:38 AM, Marco Ceppi 
>> wrote:
>>
>>> On Thu, Mar 17, 2016 at 12:39 AM Ryan Beisner <
>>> ryan.beis...@canonical.com> wrote:
>>>
 Good evening,

 I really like the notion of a bundle possessing functional tests as an
 enhancement to test coverage.  I agree with almost all of those ideas.  :-)
   tldr;  I would suggest that we consider bundle tests 'in addition to' and
 not 'as a replacement of' individual charm tests, because:


 *# Coverage and relevance*
 Any given charm may have many different modes of operation -- features
 which are enabled in some bundles but not in others.  A bundle test will
 likely only exercise that charm in the context of its configuration as it
 pertains to that bundle.  However, those who propose changes to the
 individual charm should know (via tests) if they've functionally broken the
 wider set of its knobs, bells and levers, which may be irrelevant to, or
 not testable in the bundle's amulet test due to its differing perspective.
 This opens potential functional test coverage gaps if we lean solely on the
 bundle for the test.

>>>
>>> In a world with layered charms, do we still need functional tests at the
>>> charm level?
>>>
>>
>> I believe the answer is not only yes, but that it becomes even more
>> important with layered charms because of the multiple variables involved.
>> Let's say you've rebuilt an existing layered charm that is composed of one
>> of your own updated layers and 4 other published layers which may or may
>> not have changed since the last charm build.  After rebuilding the charm,
>> you can now re-run the charm Amulet test to verify that the rebuilt charm
>> hasn't regressed in functionality.
>>
>> If there's no charm amulet test, you modify the bundle (because the
>> bundle is not likely pointing to your newly-built charm), deploy the bundle
>> and execute its tests.  This may only utilize a portion of the charm's
>> capability, so the remaining capability of the re-built charm is at risk
>> for breakage if that is the gate.
>>
>>
>>
>>>
>>> I'd also like to clarify that this would not deprecate functional tests
>>> in charms, but makes it so the policy reads EITHER unit tests or functional
>>> tests, with a preference on a layered approach using unit tests. Development
>>> is quicker, and since every level of the charm is now segmented into
>>> repositories which (should) have their own testing, we don't need to
>>> validate that X interface works in Y permutation, as the interface is a
>>> library with both halves of the communication channel and its own tests.
>>>
>>>
 There are numerous cases where a charm can shift personalities and use
 cases, but not always on-the-fly in an already-deployed model.  In those
 cases, it may 

Re: openstack base/autopilot

2016-03-19 Thread Mark Shuttleworth
On 19/03/16 03:58, Frank Ritchie wrote:
> Does Openstack Autopilot use the Openstack Base bundle?

It uses the same charms, but it has to construct a custom model based on:

 * choice of hypervisor (KVM, LXD)
 * choice of storage (object, block, SWIFT, Ceph, ScaleIO etc)
 * choice of SDN (NeutronOVS, ODL, Plumgrid etc)
 * choice of hardware
 * mapping of services to hardware

A bundle is a pre-canned model; the autopilot dynamically updates the
model to deal with things like failures or additional hardware.

Mark



Re: New feature for charmers - min-juju-version

2016-03-19 Thread Mark Shuttleworth
On 17/03/16 18:57, Nate Finch wrote:
> There is a new (optional) top level field in the metadata.yaml file called
> min-juju-version. If supplied, this value specifies the minimum version of
> a Juju server with which the charm is compatible.

Thank you! This is an oft-requested feature to enable charmers to focus
on newer Juju capabilities.

Mark
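For reference, the field is a one-line addition to a charm's metadata.yaml;
everything in this sketch other than the field name itself is illustrative,
including the version value:

```yaml
name: my-charm
summary: Example charm that relies on newer Juju capabilities.
description: |
  Declares that this charm requires at least Juju 2.0 on the controller.
min-juju-version: 2.0.0
```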



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Kapil Thangavelu
On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth  wrote:

> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.
>
> An early cut of topics of interest is below.
>
>
>
> *Operational concerns*
> * LDAP integration for Juju controllers now we have multi-user controllers
> * Support for read-only config
> * Support for things like passwords being disclosed to a subset of
> user/operators
> * LXD container migration
> * Shared uncommitted state - enable people to collaborate around changes
> they want to make in a model
>
> There has also been quite a lot of interest in log control - debug
> settings for logging, verbosity control, and log redirection as a systemic
> property. This might be a good area for someone new to the project to lead
> design and implementation. Another similar area is the idea of modelling
> machine properties - things like apt / yum repositories, cache settings
> etc, and having the machine agent setup the machine / vm / container
> according to those properties.
>
>
ldap++. as brought up in the user list better support for aws best practice
credential management, ie. bootstrapping with transient credentials (sts
role assume, needs AWS_SECURITY_TOKEN support), and instance role for state
servers.



>
>
> *Core Model*
>  * modelling individual services (i.e. each database exported by the db
> application)
>  * rich status (properties of those services and the application itself)
>  * config schemas and validation
>  * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.
>
>
in priority order: relation config, config schemas/validation, rich status.
relation config is a huge boon to services that are multi-tenant to other
services, as the workaround is to create either copies per tenant or
intermediaries.


> *Storage*
>
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIs)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database  instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.
>
>
it may be out of band, but with storage comes backups/snapshots. also of
interest is encryption on block and object storage using cloud native
mechanisms where available.


>
>
> *Clouds and providers *
>  * System Z and LinuxONE
>  * Oracle Cloud
>
> There is also a general desire to revisit and refactor the provider
> interface. Now we have seen many cloud providers get done, we are in a
> better position to design the best provider interface. This would be a
> welcome area of contribution for someone new to the project who wants to
> make it easier for folks creating new cloud providers. We also see constant
> requests for a Linode provider that would be a good target for a refactored
> interface.
>
>
>
>
> *Usability*
>  * expanding the set of known clouds and regions
>  * improving the handling of credentials across clouds
>


Autoscaling, either tighter integration with cloud native features or juju
provided abstraction.
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unfairly weighted charmstore results

2016-03-19 Thread Uros Jovanovic
Hi Tom,

We currently bump the recommended charms over the community ones. The
reason the others show up is that search uses N-grams (3-N), and the
ranking logic built on that puts recommended charms over the
non-recommended ones. And we're not only searching over the names of charms
but over a bunch of the content that a charm has.

The system works relatively well for recommended charms if you know the
name (or something close to the name), but not in cases where a name is
long and the charm is only in the community space. That's why you get
better results with a short query vs a longer one.

We're working on providing better search results in the following weeks.
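As a toy illustration of how trigram (3-N) matching behaves - a pure sketch,
not the store's actual ranking code:

```python
def ngrams(s, n=3):
    """Return the set of character n-grams of a string."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

# Every trigram of the short query "pentaho" also appears in the long
# community charm name, so the name itself matches well...
query = ngrams('pentaho')
name = ngrams('pentaho data integration')
overlap = len(query & name) / len(query)
print(overlap)  # 1.0

# ...meaning the poor experience Tom describes comes from the ranking
# applied on top (recommended-first), not from the n-gram match itself.
```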




On Thu, Mar 17, 2016 at 2:18 PM, Tom Barber  wrote:

> Cross posted from IRC:
>
> Hello folks,
>
> I have a gripe about the charm store search, mostly because it's really
> badly weighted towards recommended charms, and finding what you (an end
> user) want is really hard unless you know what you are doing.
>
> Take this example:
>
> https://jujucharms.com/q/pentaho
>
> Now I'm writing a charm called Pentaho Data Integration, so why do I have
> to scroll past 55 recommended charms that have nothing to do with what I
> have looked for?
>
> But
>
> https://jujucharms.com/q/etl
>
> Shows me exactly what I need at the top, with no recommended charms
> blocking the view.
>
> So I guess it's weighted towards tags, then names, sorta.
>
> I'm not against recommended charms being dumped at the top; they are
> recommended, after all. But it appears the ranking could be vastly improved.
>
> Off the top of my head, a ranking combo of something like keyword
> relevance, recommended vs non-recommended, times deployed, age, tags and
> last updated would give a half-decent weighting for the charms and would
> hopefully stop 55 unrelated charms appearing at the top of the list.
>
> Now I guess I could dump pentaho in as a tag to get to the top of the SEO
> rankings, but it seems like the method could generally be improved as the
> number of charms increases; quite plausibly, using something like Apache
> Nutch to crawl the available charms and build a proper search facility
> would improve things.
>
> Cheers
>
> Tom
>
>
> --
>
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
>
> (Thanks to the Saiku community we reached our Kickstarter goal, but you
> can always help by sponsoring the project)
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Tom Barber
Yeah, I guess that would be a good solution for sample data and stuff.
Doesn't work for user-defined bits and pieces though. For actions we
currently "cat" the content into a parameter, but of course that doesn't work
for everything, and really really sucks when you try and cat unescaped JSON
into it. But for users who want to deploy their own content to services,
personally I think it would just be cleaner to allow a file type in an
action for people to pass in.

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter goal, but you can
always help by sponsoring the project)

On 18 March 2016 at 16:44, Eric Snow  wrote:

> On Fri, Mar 18, 2016 at 8:57 AM, Tom Barber 
> wrote:
> > c) upload files with actions. Currently for some things I need to pass in
> > some files then trigger an action on the unit upon that file. It would be
> > good to say path=/tmp/myfile.xyz and have the action upload that to a
> place
> > you define.
>
> Have you taken a look at resources in the upcoming 2.0?  You define
> resources in your charm metadata and use "juju attach" to upload them
> to the controller (e.g. "juju attach my-service/0
> my-resource=/tmp/myfile.xyz"). *  Then charms can use the
> "resource-get" hook command to download the resource file from the
> controller.  "resource-get" returns the path where the downloaded file
> was saved.
>
> -eric
>
>
> * You will also upload the resources to the charm store for charm store
> charms.
>
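A rough sketch of what consuming that could look like from charm code. The
helper name and path are hypothetical, and the runner is injectable because
the `resource-get` hook tool exists only inside a running hook context:

```python
import subprocess

def fetch_resource(name, runner=subprocess.check_output):
    """Return the local path of a charm resource.

    Wraps the `resource-get` hook tool, which prints the path where the
    resource file was downloaded. `runner` is injectable so the sketch
    can be exercised outside a real Juju agent.
    """
    return runner(['resource-get', name]).decode().strip()

# Outside a hook there is no real resource-get binary, so a stub
# stands in for the hook tool here (path is made up).
def fake_resource_get(cmd):
    assert cmd == ['resource-get', 'my-resource']
    return b'/var/lib/juju/resources/my-resource/myfile.xyz\n'

print(fetch_resource('my-resource', runner=fake_resource_get))
```

In a real hook you would call `fetch_resource('my-resource')` with the
default runner and then act on the returned path.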


Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Marco Ceppi
On Thu, Mar 17, 2016 at 12:39 AM Ryan Beisner 
wrote:

> Good evening,
>
> I really like the notion of a bundle possessing functional tests as an
> enhancement to test coverage.  I agree with almost all of those ideas.  :-)
>   tldr;  I would suggest that we consider bundle tests 'in addition to' and
> not 'as a replacement of' individual charm tests, because:
>
>
> *# Coverage and relevance*
> Any given charm may have many different modes of operation -- features
> which are enabled in some bundles but not in others.  A bundle test will
> likely only exercise that charm in the context of its configuration as it
> pertains to that bundle.  However, those who propose changes to the
> individual charm should know (via tests) if they've functionally broken the
> wider set of its knobs, bells and levers, which may be irrelevant to, or
> not testable in the bundle's amulet test due to its differing perspective.
> This opens potential functional test coverage gaps if we lean solely on the
> bundle for the test.
>

In a world with layered charms, do we still need functional tests at the
charm level?

I'd also like to clarify that this would not deprecate functional tests in
charms; it would make the policy read EITHER unit tests or functional
tests, with a preference for a layered approach using unit tests.
Development is quicker, and since every level of the charm is now segmented
into repositories which (should) have their own testing, we don't need to
validate that X interface works in Y permutation: the interface is a
library containing both halves of the communication channel, along with its
own tests.
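As an illustration of the kind of component-level unit test a layer can carry (the helper `render_backend_line` is hypothetical, not part of any real layer), a pure function can be tested with no deployment at all:

```python
import unittest

def render_backend_line(host, port):
    """Hypothetical layer helper: build one haproxy-style backend
    line from relation data."""
    if not host:
        raise ValueError("host is required")
    return "server {0} {0}:{1} check".format(host, port)

class RenderBackendLineTest(unittest.TestCase):
    def test_renders_host_and_port(self):
        self.assertEqual(
            render_backend_line("10.0.0.2", 8080),
            "server 10.0.0.2 10.0.0.2:8080 check")

    def test_rejects_empty_host(self):
        with self.assertRaises(ValueError):
            render_backend_line("", 8080)

# run with: python -m unittest <module>
```

Because nothing here touches a cloud, the whole suite runs in milliseconds on every push, which is exactly the quick feedback a layered repository wants.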


> There are numerous cases where a charm can shift personalities and use
> cases, but not always on-the-fly in an already-deployed model.  In those
> cases, it may take a completely different and new deployment topology and
> configuration (bundle) to be able to exercise the relevant functional
> tests.  Without integrated amulet tests within the charm, one would have to
> publish multiple bundles, each containing separate amulet tests.  For
> low-dev-velocity charms, for simple charms, or for charms that aren't
> likely to be involved in complex workloads, this may be manageable.  But I
> don't think we should discourage or stop looking for individual charm
> amulet tests even there.
>

We will always support charms with Amulet tests, ALWAYS. I'd even consider
it the hallmark of an exceptionally well written charm if it has the means
to do extensive functional testing.

I also have a separate email I'm authoring about leveraging health checks
/inside/ a charm rather than externally poking it, so that any deployment,
at any time, in any mutation, can be asked "hey, you ok?" and get back a
detailed report that assertions could be run against. I have a follow up
(and one of the reasons this email took me so long)
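A rough sketch of that idea, with entirely hypothetical check names (there is no such hook today): the charm aggregates its own checks into one machine-readable report that external tooling, or a bundle test, can assert against:

```python
import json

def check_disk():
    # Hypothetical individual check; a real charm would inspect the unit.
    return {"name": "disk", "ok": True, "detail": "87% free"}

def check_service():
    # Hypothetical check that the workload process is running.
    return {"name": "service", "ok": True, "detail": "nginx running"}

def health_report():
    """Aggregate individual checks into one report: overall 'ok' plus
    the per-check detail that assertions could be run against."""
    checks = [check_disk(), check_service()]
    return {"ok": all(c["ok"] for c in checks), "checks": checks}

if __name__ == "__main__":
    print(json.dumps(health_report(), indent=2))
```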


> A charm's integrated amulet test can be both more focused and more
> expansive in what it exercises, as it can contain multiple deployment
> topologies and configurations (equivalent to cycling multiple unique
> bundles).  For example:  charm-xyz with and without SSL;  or in HA and
> without HA;  or IPv4 vs. IPv6; or IPv4 HA vs. IPv6 HA, multicast vs.
> unicast;  [IPv6 + HA + SSL] vs [IPv4 + HA + SSL]; or mysql deploying mysql
> proper vs. mysql deploying a variant;   and you can see the gist of the
> coverage explosion which translates to having a whole load of bundles to
> produce and maintain.
>

Interesting, will consider this more.


> That is not to say that we don't have acceptance and integration tests for
> full OpenStack bundles.  We do that in the form of mojo specs which
> dynamically deploy any number of full OpenStack bundle topologies and
> configurations against multiple Ubuntu+OpenStack release combos, using
> either the dev or the stable set of OpenStack charms.  It basically takes
> what I've described above for amulet and allows us to pivot entire bundles
> into different models automatically.  There are currently *84* such
> OpenStack mojo specs with tests (bundle equivalents)
>
> Fear not, this is mostly accomplished with bundle inheritance, yaml foo,
> and shared test libraries.  We're not actually maintaining ~*516 bundles*.
> But if we were to achieve the current level of coverage with bundles,
> that's approximately how many there would need to be.  This includes the
> upcoming Xenial and Mitaka releases.  Reduce by ~12% when Juno EOLs.  Add
> 12% when we hit Newton B1, and so on.
>

This is most compelling, but OpenStack is one of those scenarios that is
the exception to the rule. First and foremost, the openstack-charmers team
gates these charms, meaning they more or less circumvent the charmers
review process because of the unique nature of the solution space. As a
result these policies wouldn't directly apply to your existing processes,
which are both excellent already and help continue to produce high quality
artifacts.


> *# How I'd like to use the 

Openstack HA Portland Meetup Present

2016-03-19 Thread James Beedy
I just gave this presentation --> http://54.172.233.114/ at an Openstack
meetup at puppet headquarters in Portland, geared around HA Openstack
production deployments. I wanted to update the team on the great news! It
was by far the best presentation I have ever given, which isn't saying
much, but people's heads were turning in disbelief at what they were
seeing the entire time :-) Alongside my slides, I gave a live demo of the
load balancing of my juju-deployed presentation in real time as the group
was accessing it. Following that, I scaled out my presentation by adding a
unit of `present` live. Also, I went out on a limb and live-demoed a fully
HA test stack, adding an lxc unit of glance to one machine and removing it
from another whilst keeping quorum, and lightly touched on how the
hacluster charm works, and a bit on the concept of interfaces and deploying
from the juju-gui. There was a surprising amount of interest in Juju
following my presentation; a good number of people had never heard of Juju,
and most of them seemed to be blown away by what they had just witnessed :-)

On that note, I want to thank everyone for the work you have all done to
get the Juju ecosystem/framework to where it is today. As nice as it was to
see my test stack performing so well at the demo, it's much more fulfilling
to know that my production stack is purring like a kitten too... no
downtime for 6+ months (since her production inception)!!!  Over the past 6
months, I have had some major issues that I have resolved, and with no
service downtime! To that extent, I may have ripped my stack's guts out and
then put them back in again... quite a few times with services running
atop her -- it's nice to see I can do all of that and she still stands and
is able to recover and regain a healthy state.

Here are the ip addresses and repos for my presentation. For anyone
interested, you can login to the haproxy stats and see the traffic
generated! As a side note - I was able to spin this all up and present
using my charm dev amazon account --> HUGE +1 for the charm developer
program!!


presentation: http://54.172.233.114/
haproxy stats: http://54.172.233.114:1/  --> un: admin, pw: password
presentation markdown: https://github.com/jamesbeedy/os-ha-meetup-present
layer present: https://github.com/jamesbeedy/layer-present
https://github.com/jamesbeedy/os_ha_test_stack


Thanks all!

~James


Re: Usability issues with status-history

2016-03-19 Thread John Meinel
On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth  wrote:

>
> Machines, services and units all now support recording status history. Two
> issues have come up:
>
> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
>
> For units, especially in steady state, status history is spammed with
> update-status hook invocations which can obscure the hooks we really care
> about
>
> 2. https://bugs.launchpad.net/juju-core/+bug/1557918
>
> We now have the concept of recording a machine provisioning status. This is
> great because it gives observability to what is happening as a node is
> being
> allocated in the cloud. With LXD, this feature has been used to give
> visibility
> to progress of the image downloads (finally, yay). But what happens is
> that the
> machine status history gets filled with lots of "Downloading x%" type
> messages.
>
> We have a pruner which caps the history to 100 entries per entity. But we
> need a
> way to deal with the spam, and what is displayed when the user asks for
> juju
> status-history.
>
> Options to solve bug 1
>
> A.
> Filter out duplicate status entries when presenting to the user. eg say
> "update-status (x43)". This still allows the circular buffer for that
> entity to
> fill with "spam" though. We could make the circular buffer size much
> larger. But
> there's still the issue of UX where a user ask for the X most recent
> entries.
> What do we give them? The X most recent de-duped entries?
>
> B.
> If the we go to record history and the current previous entry is the same
> as
> what we are about to record, just update the timestamp. For update status,
> my
> view is we don't really care how many times the hook was run, but rather
> when
> was the last time it ran.
>

The problem is that it isn't the same as the "last" message. Going to the
original paste:

TIME                    TYPE    STATUS      MESSAGE
26 Dec 2015 13:51:59Z   agent   idle
26 Dec 2015 13:56:57Z   agent   executing   running update-status hook
26 Dec 2015 13:56:59Z   agent   idle
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle

Which means there is a "running update-status" *and* an "idle" message. So
we can't just say "is the last message == this message". It would have to
look deeper into history, and how deep should we be looking? What happens if
a given charm does one more "status-set" during its update-status hook to
set the status of the unit to "still happy"? Then we would have 3 (agent
executing, unit happy, agent idle).
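A toy illustration of why display-side de-duplication (option A) can't just compare neighbours: collapsing only *consecutive* duplicates leaves the alternating executing/idle pattern from the paste above completely untouched:

```python
from itertools import groupby

# Simplified version of the status-history paste: entries alternate.
history = [
    "idle",
    "executing: running update-status hook", "idle",
    "executing: running update-status hook", "idle",
]

def collapse(entries):
    """Collapse consecutive duplicates into 'msg (xN)' for display
    (the presentation-side filtering of option A)."""
    out = []
    for msg, run in groupby(entries):
        n = len(list(run))
        out.append(msg if n == 1 else "{} (x{})".format(msg, n))
    return out

# Every run has length 1, so nothing collapses: the five entries
# come back unchanged. A real de-dup would have to match a repeating
# *window* of entries, not a single repeated message.
print(collapse(history))
```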


> Options to solve bug 2
>
> A.
> Allow a flag when setting status to say "this status value is transient"
> and so
> it is recorded in status but not logged in history.
>
> B.
> Do not record machine provisioning status in history. It could be argued
> this
> info is more or less transient and once the machine comes up, we don't
> care so
> much about it anymore. It was introduced to give observability to machine
> allocation.
>

Isn't this the same as (A)? We need a way to say that *this* message should
be shown but not saved forever. Or are you saying that until a machine
comes up as "running" we shouldn't save any of the messages? I don't think
we want that, because when provisioning fails you want to know what steps
were achieved.


>
> Any other options?
> Opinions on preferred solutions?
>
> I really want to get this fixed before Juju 2.0
>

We could do a "log level" rather than just "transient or not", and that
would decide what gets displayed by default (so you could ask for
'update-status' messages but they wouldn't be shown by default). The
problem is that we want to keep status messages pruned at a sane level, and
with 2 entries for every 'update-status' call, a history of 100 is only
100/2*5/60 ~ 4 hours of history. If something interesting happened
yesterday, you're SOL.

What if we added an "interesting lifetime" to status messages, so the
status-set could indicate how long the message would be preserved?
"update-status" and "idle" could be flagged as preserved for only 1 hour,
and "downloading x%" could be flagged at, say, 5 minutes. Too complicated?
It certainly complicates the pruner (not terribly: when we record a message
we just record an expiry time that is indexed, and the pruner removes
everything that is past its expiry time).
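A minimal sketch of that expiry-based pruner (the class, the lifetimes, and the timestamps are illustrative, not Juju code):

```python
import time

class StatusHistory:
    """Each entry is stored with an expiry time; pruning simply drops
    everything past its expiry, plus a global most-recent size cap."""

    def __init__(self, cap=100):
        self.cap = cap
        self.entries = []  # list of (expires_at, message)

    def record(self, message, lifetime, now=None):
        now = time.time() if now is None else now
        self.entries.append((now + lifetime, message))
        self.prune(now)

    def prune(self, now):
        # Drop expired entries, then keep at most `cap` recent ones.
        self.entries = [e for e in self.entries if e[0] > now][-self.cap:]

h = StatusHistory(cap=100)
h.record("running update-status hook", lifetime=3600, now=0)  # keep 1h
h.record("Downloading 40%", lifetime=300, now=10)             # keep 5m
h.record("Downloading 80%", lifetime=300, now=20)
h.prune(now=400)  # 400s later: the download spam has expired
print([m for _, m in h.entries])  # only the hook entry survives
```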

Alternatively we could have some sort of UUID for messages to indicate that
"this message is actually similar to other messages with this UUID" and we
prune them based on that. (UUIDs get flagged with a different number of
messages to keep than the global 100 for otherwise untagged messages.)

"Transient" is the easiest to understand, but doesn't really solve bug #1.

If we think of the "UUID" version as something like a named "status pocket",
maybe it's actually tasteful. You'd have the "global" pocket that has our
default 100 most-recent messages, and then you can create any new pocket
that has a default of, say, 10 messages. So you would be 

Re: juju 2.0 beta3 push this week

2016-03-19 Thread Adam Stokes
Hi!

Could I get this bug added to the list too?

https://bugs.launchpad.net/juju-core/+bug/1554721

On Thu, Mar 17, 2016 at 2:51 PM, Rick Harding 
wrote:

>
>
> tl;dr
> Juju 2.0 beta3 will not be out this week.
>
> The team is fighting a backlog of getting work landed. Rather than get the
> partial release out this week with the handful of current features and
> adding to the backlog while getting that beta release out, the decision was
> made to focus on getting the current work that’s ready landed. This will
> help us get our features in before the freeze exception deadline of the
> 23rd.
>
> We have several new things currently in trunk (such as enhanced support
> for MAAS spaces, machine provisioning status monitoring, Juju GUI embedded
> CLI commands into Juju Core), but we have important things to get landed.
> These include:
>
> - Updating controller model to be called “admin” and a “default” initial
> working model on bootstrap that’s safely removable
> - Minimum Juju version support for charms
> - juju read-only mode
> - additional resources work with version numbers and bundles support
> - additional work in the clouds and credentials management work
> - juju add-user and juju register to sign in the new user
>
> The teams will work together and focus on landing these and we’ll get a
> beta with the full set of updates for everyone to try out next week. If you
> have any questions or concerns, please let me know.
>
> Thanks
>
> Rick
>


Re: Openstack HA Portland Meetup Present

2016-03-19 Thread Mark Shuttleworth
On 18/03/16 05:41, James Beedy wrote:
> I just gave this presentation --> http://54.172.233.114/ at an Openstack
> meetup at puppet headquarters in Portland, geared around HA Openstack
> production deployments. I wanted to update the team on the great news! It
> was by far the best presentation I have ever given, which isn't saying
> much, but people's heads were turning in disbelief at what they were
> seeing the entire time :-) 

That's great, James!

Mark

