Re: Critical regression on vsphere due to cloud-init
Hi Merlijn --

We are working on that bug actively. At its core, this is a mismatch between how a cloud functions (dhcp/dns is set up in advance and is not mutable by the instance) and how vsphere operates.

If you could put the reproduction steps you have taken into the bug, that would be very much appreciated. Especially needed are:

1) vsphere version
2) What dhcp server are you running and how is it configured?
3) How do you deploy CDK and where does the problem trigger?

On Tue, Feb 20, 2018 at 2:53 AM, Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
> Hi all
>
> I want to bring a critical regression on vsphere to your attention. DNS on
> vsphere recently got broken, probably due to a regression in cloud-init.
> Link to the bug report below. Because of this, we're unable to deploy many
> bundles such as the big data bundles and the Kubernetes bundles.
>
> Is somebody working on a fix? The CDK issue mentions a workaround; does
> anybody know what that workaround is?
>
> - Bug in cloud-init: https://bugs.launchpad.net/cloud-init/+bug/1746455
> - Bug in CDK: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/480
> - Someone on askubuntu having the same issue: https://askubuntu.com/questions/994629/dhcp-lease-always-registered-with-default-ubuntu-instead-of-actual-hostname-at
>
> Regards
> Merlijn

--
David Britton <david.brit...@canonical.com>
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: PROPOSAL: stop recording 'executing update-status hook'
+1 from me: https://bugs.launchpad.net/juju/+bug/1530840 :) On Thu, May 18, 2017 at 8:13 PM, Tim Penhey <tim.pen...@canonical.com> wrote: > Hi folks, > > Currently juju will update the status of any hook execution for any unit > to show that it is busy doing things. This was all well and good until we > do things based on time. > > Every five minutes (or so) each unit will have the update-status hook > executed to allow the unit to set or update the workload status based on > what is currently going on with that unit. > > Since all hook executions are stored, this means that the show-status-log > will show the unit jumping from executing update-status to ready and back > every five minutes. > > The proposal is to special case the update-status hook and show in status > (or the status-log) that the hook is being executed. debug-log will > continue to show the hook executing if you are looking. > > This will reduce noise in the status-log, simplify some of our code around > dealing with status-log, and reduce load on controllers looking after > hundreds or thousands of units. > > Is anyone opposed to this change? > > Tim > > -- > Juju-dev mailing list > Juju-dev@lists.ubuntu.com > Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm > an/listinfo/juju-dev > -- David Britton <david.brit...@canonical.com> -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: A (Very) Minimal Charm
On Fri, Dec 16, 2016 at 09:33:18AM -0600, Katherine Cox-Buday wrote: > Tim Penhey <tim.pen...@canonical.com> writes: > > > Make sure you also run on LXD with a decent delay to the APT > > archive. > > Open question: is there any reason we shouldn't expect charm authors > to take a hard-right towards charms with snaps embedded as resources? > I know one of our long-standing conceptual problems is consistency > across units which snaps solves nicely. For new projects we are working this way. We have not used resources yet, but instead are using "fat" charms and sideloading the snap. But, resources are the next logical progression. -- David Britton <david.brit...@canonical.com> -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Jenkins plugin to upload charm to store?
Looks cool Merlijn -- I may use this in the near future, as I have some similar scripts I have cobbled together as well. :)

On Thu, Nov 3, 2016 at 11:46 AM, Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
> Hi all
>
> I wrote my own python script to test and publish bundles and their charms.
> It's very specialized for our use-case so I doubt it's useful for you guys,
> although I like the idea of a reference script. I'd like to follow the
> development of that script to make sure it also fits our use-case.
>
> For reference, our script does the following:
>
> 1. The script receives a number of bundles as command-line arguments.
> 2. It checks which of our charms are in the bundles.
> 3. It pushes those charms to the unpublished channel.
> 4. It rewrites the bundles so they point to the unpublished charms.
> 5. It spins up a Juju client in an lxc container and runs cwr inside
>    that container to test the bundles.
> 6. If tests succeed, it publishes both the bundles and our charms.
>
> Script source (agplv3):
> https://github.com/IBCNServices/tengu-charms/blob/master/cihelpers.py
>
> Kind regards
> Merlijn
>
> 2016-11-02 13:42 GMT+01:00 Stuart Bishop <stuart.bis...@canonical.com>:
>> On 2 November 2016 at 18:24, Konstantinos Tsakalozos <kos.tsakalo...@canonical.com> wrote:
>>> Hi Tom,
>>>
>>> Yes, I have my own script right now. It is not elegant.
>>>
>>> Instead of each one of us maintaining their own scripts, we could have a
>>> single point of reference. In the Jenkins world I thought that would be a
>>> plugin, but a script would also work. Is there anyone open sourcing his CI
>>> <--> juju integration scripts?
>>
>> It could be much, much more elegant. I've got open issues on getting
>> 'charm push' to report the revision better (so you can publish or tag), or
>> even having 'charm push --channel' do what you want. I personally would
>> rather see this improved so it helps everyone, to the point you don't need
>> a Jenkins plugin.
>>
>> An automated system needs to deal with the auth problem, which is
>> unfortunate (someone typing 'charm login' and entering their SSO password
>> and a token on a possibly untrusted system, or manufacturing an auth token
>> and installing it somehow). Snappy has this sorted better, with Launchpad
>> able to build snaps from a branch and upload them to the snap store on your
>> behalf.
>>
>> --
>> Stuart Bishop <stuart.bis...@canonical.com>

--
David Britton <david.brit...@canonical.com>
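The test-then-publish sequence Merlijn's script automates can be sketched with the charm-tools CLI. This is only a sketch: the charm name, user, bundle file, and revision number 42 are all hypothetical, and each command is echoed rather than executed, since a real run needs charm store authentication (the very problem Stuart raises above).

```shell
#!/bin/sh
# Sketch of a bundle CI flow (hypothetical names throughout).
# Commands are echoed, not run, so the sequence is readable without store auth.
run() { echo "+ $*"; }

# 1. Push the charm; `charm push` prints the new revision, e.g. cs:~myuser/mycharm-42.
run charm push ./mycharm cs:~myuser/mycharm

# 2. Rewrite the bundle to point at that exact unpublished revision, then test it.
run sed -i 's|cs:mycharm|cs:~myuser/mycharm-42|' bundle.yaml
run juju deploy ./bundle.yaml

# 3. Only if the tests pass, release the pinned revision to a channel.
run charm release cs:~myuser/mycharm-42 --channel stable
```

Dropping the `run` wrapper turns this into the real flow, modulo the auth token problem discussed in the thread.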
Re: Question for cosmetic and understandability of breadcrumbs in github
We are planning a tag for every push for the landscape-server and client charms, and bundles. +1 on it being mentioned as a best practice (the same type of thing as when you release a version of any other software!). Though, I would recommend using the full charm store identifier, e.g. 'cs:~user/charm-3' -- basically, the full standard output of the charm push operation.

I also like the repo-info for traceability the other way around. They solve a similar problem, but depending on where you are looking, both are useful.

On Thu, Jun 16, 2016 at 2:54 PM, Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
> Yep, seems like something useful!
>
> 2016-06-16 22:48 GMT+02:00 Charles Butler <charles.but...@canonical.com>:
>> I was actually talking to beisner about this in IRC and the openstackers
>> are putting a report in their artifacts with the repository information.
>>
>> https://api.jujucharms.com/charmstore/v5/~openstack-charmers-next/xenial/neutron-gateway/archive/repo-info
>>
>> I think I like this better.
>>
>> We are generating a manifest of the other layers but I'm not certain we
>> are storing any commit hash info in that manifest. I don't think we are.
>> But it would give me a nice trail to follow.
>>
>> On Thu, Jun 16, 2016 at 4:42 PM Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
>>> Well, Charles, I must admit that I'm a bit lost. There's some lingo in
>>> this email I don't quite understand, and it's quite late on my side of
>>> the globe ;)
>>>
>>> What I understand you want: You have a GitHub repo that contains the top
>>> layer of a charm. Each tag in that repo corresponds to the revision of
>>> the charm built from that layer. Is this correct?
>>>
>>> This would allow you to see what charm corresponds to what layer version.
>>>
>>> I don't quite understand how this would solve your kubernetes problem.
>>> Don't you want this information about every layer instead of just the
>>> top one? Is this something 'charm build' would be able to do
>>> automatically? It gets the layers from a repo so it might be able to put
>>> that info (repo/commit) in a log somewhere?
>>>
>>> 2016-06-16 19:51 GMT+02:00 Charles Butler <charles.but...@canonical.com>:
>>>> Greetings,
>>>>
>>>> I deposit many of my layers in GitHub, and one of the things I've been
>>>> striving to do is keep tag releases at the revisions I cut a charm
>>>> release for a given channel. As we know, the default channel is seen by
>>>> no-one, and runs in increments of n+1.
>>>>
>>>> My prior projects I've been following semver for releases, but that has
>>>> *nothing* in terms of a breadcrumb trail back to the store.
>>>>
>>>> Would it be seen as good practice to tag releases - on the top most
>>>> layer of a charm - with what charm release it's coordinated with?
>>>>
>>>> Given the scenario that I'm ready to release swarm, and let's assume
>>>> that to date I haven't tagged any releases in the layer repository:
>>>>
>>>> charm show cs:~containers/trusty/swarm revision-info
>>>> revision-info:
>>>>   Revisions:
>>>>   - cs:~containers/trusty/swarm-2
>>>>   - cs:~containers/trusty/swarm-1
>>>>   - cs:~containers/trusty/swarm-0
>>>>
>>>> I see that I'm ready to push swarm-3 to the store:
>>>>
>>>> git tag 3
>>>> git push origin --tags
>>>>
>>>> I can now correlate the source revision to what I've put in my account
>>>> on the store, but this does not account for promulgation (which has an
>>>> orthogonal revision history), and mis-match of those ids.
>>>>
>>>> I think this can simply be documented that tags track <>/<> pushes,
>>>> and to correlate source with release, to use the method shown above to
>>>> fetch release info.
>>>>
>>>> Does this sound useful/helpful or am I being pedantic? (I say this
>>>> because Kubernetes touches ~7 layers, and it gets confusing keeping
>>>> everything up to date locally while testing, and then again re-testing
>>>> with --no-local-layers to ensure our repositories are caught up with
>>>> current development work. Can't count the number of open pull requests
>>>> hanging waiting for review because we've moved to the next hot-ticket
>>>> item.)
Re: Promulgated charms (production readiness)
On Mon, May 16, 2016 at 11:06:32AM +, Charles Butler wrote: > > If we can enable this to be a short-story for every charm author, we should > run this down and tackle it, throw money at it, and make it the best > experience for "instant monitoring" ever. I'd like to provide a bit of motivation and a charm consumer anecdote: When the Openstack Autopilot incorporated nagios and related it to all openstack charms (16.03 release), I personally saw the magic of juju at play. You went from having a lot of unrealized monitoring potential in your juju deployed cloud to having a fully functioning monitoring solution deployed, and ready to go, just an add-relation away. That add-relation wasn't free, it took work coding, testing, integrating, distilling operational knowledge, getting multiple contributors to test and submit bug reports... But, after that work, it's to the point now where nagios is highlighting bugs in the deployment that were overlooked before (especially longer-running cloud bugs that don't show up at deployment time). So, not just valuable for production, and something that operators have grown to expect and trust, but something that helps keep bugs from escaping downstream as well! I just wanted to pass on a bit of encouragement that the work to incorporate those checks was valuable and having that same experience across other bits of big software makes charms alive and moves them ever closer to invaluable operational tools. -- David Britton <david.brit...@canonical.com> -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: incorrect private address when using manual provider
FYI -- I have something similar filed: https://bugs.launchpad.net/juju-core/+bug/1574844 (randomly, an IPv6 address is chosen as the "public address" on the local provider -- even though all the machines have IPv6 assigned by LXD, sometimes juju considers it the primary).

On Wed, May 4, 2016 at 6:45 PM, Matt Rae <matt@canonical.com> wrote:
> Hi, we're seeing an issue where the juju private-address is chosen
> incorrectly when using the manual provider on a host with multiple
> interfaces.
>
> Are there any details regarding how the private-address is chosen when
> using the manual provider?
>
> Matt

--
David Britton <david.brit...@canonical.com>
Re: juju2: how to edit a maas "cloud"?
Hi John -- thanks for the explanation. I wasn't suggesting another file the user would maintain, but instead a default set of configs I could attach to a cloud when I'm calling 'add-cloud'. It probably makes more sense in the case of MAAS where at least the bootstrap timeout often needs to be altered, as well as proxy settings (at a typical customer site). Thinking a bit further -- having a shared controller with users helps. But, in the case of MAAS, a PoC user (first time experience) would still struggle with it, I think. Especially dedicating a machine to a controller in their rack, and not seeing a way around it easily (abuse the admin model is the current answer). On Tue, Apr 26, 2016 at 12:11 PM, John Meinel <j...@arbash-meinel.com> wrote: > I believe --config can take a file rather than just a 'key=value' pairing. > So you can save all your config to a file and pass it in with '--config > myconf.yaml' > > There was discussion of having a default search path for some of the > config, but I'm not sure if that got implemented, nor if it is actually > better since it is another magic place that you have to discover. > > John > =:-> > > > On Tue, Apr 26, 2016 at 9:14 PM, David Britton < > david.brit...@canonical.com> wrote: > >> On Tue, Apr 26, 2016 at 11:58:38AM -0500, Cheryl Jennings wrote: >> > > >> > > On Apr 25, 2016 12:55, "Andreas Hasenack" <andr...@canonical.com> >> wrote: >> > > >> > >> Uh, so in essence there are now three "files"? Cloud definition, >> config >> > >> for that cloud and credentials? And the config has to be passed each >> time, >> > >> whereas the other two are " imported"? >> > >> >> > > Yes, that's correct. 1 - The cloud definition (built in for public >> > clouds), 2 - credentials, and 3 - config that can be specified upon each >> > bootstrap. >> >> Are there any plans to allow this to be stored between controller >> bootstraps? 
>> >> Background -- We have some substrates where specifically the default >> timeout is too low for the maas provider. There are also considerations >> like proxies, apt proxies, etc that all become quite cumbersome to type >> and remember on the command line. >> >> -- >> David Britton <david.brit...@canonical.com> >> >> -- >> Juju mailing list >> Juju@lists.ubuntu.com >> Modify settings or unsubscribe at: >> https://lists.ubuntu.com/mailman/listinfo/juju >> > > -- David Britton <david.brit...@canonical.com> -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: juju2: how to edit a maas "cloud"?
On Tue, Apr 26, 2016 at 11:58:38AM -0500, Cheryl Jennings wrote:
> > On Apr 25, 2016 12:55, "Andreas Hasenack" <andr...@canonical.com> wrote:
> >> Uh, so in essence there are now three "files"? Cloud definition, config
> >> for that cloud and credentials? And the config has to be passed each
> >> time, whereas the other two are "imported"?
>
> Yes, that's correct. 1 - The cloud definition (built in for public
> clouds), 2 - credentials, and 3 - config that can be specified upon each
> bootstrap.

Are there any plans to allow this to be stored between controller bootstraps?

Background -- We have some substrates where specifically the default timeout is too low for the maas provider. There are also considerations like proxies, apt proxies, etc. that all become quite cumbersome to type and remember on the command line.

--
David Britton <david.brit...@canonical.com>
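As John notes in the other branch of this thread, `--config` also accepts a file, which covers this use case today. A sketch of such a file follows; the values are illustrative for a MAAS site behind a proxy, and the key names should be verified against your Juju version (e.g. via `juju help bootstrap`).

```yaml
# myconf.yaml -- passed as: juju bootstrap mymaas --config myconf.yaml
# Illustrative values only; check key names against your Juju release.
bootstrap-timeout: 1800          # seconds; MAAS nodes can be slow to PXE-boot
http-proxy: http://squid.internal:3128
https-proxy: http://squid.internal:3128
apt-http-proxy: http://squid.internal:3128
```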
Re: Bitnami wordpress charm development
On Mon, Feb 29, 2016 at 07:02:08PM -0300, Ney Moura wrote:
> But I keep having errors with the install hook. It says file not found.

Maybe I missed it in the tarball -- could you attach a juju debug-log capture while you deploy it? Or /var/log/all-machines.log from the bootstrap node, which is the same thing.

Thanks!

--
David Britton <david.brit...@canonical.com>
Re: juju.worker.dependency engine.go:304 failed to start "uniter" manifold worker: dependency not available
On Sat, Feb 27, 2016 at 12:12:24PM +0100, Stian Aurdal wrote:
> Hello,
>
> I'm having some errors while trying to deploy landscape on a KVM using the
> openstack-install autopilot. I have an issue going at
> https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/870
> but we found some juju warnings we did not understand:
>
> unit-haproxy-0[891]: 2016-02-26 19:58:58 WARNING juju.worker.dependency
> engine.go:304 failed to start "uniter" manifold worker: dependency not
> available
>
> Any help much appreciated.

Hi Stian -- We've encountered this error before too. Could you add your findings to this bug? There are a couple of workarounds you can try there:

https://bugs.launchpad.net/juju-core/+bug/1513667

Thanks!

--
David Britton <david.brit...@canonical.com>
[Review Queue] haproxy
Merged: https://code.launchpad.net/~danilo/charms/trusty/haproxy/merge-services-fix/+merge/259233 - Fix bug in backend service merge logic [lp:1455079]

--
David Britton <david.brit...@canonical.com>
Re: Makefile target names
+1, but I would propose using hyphens for word separators, not underscores -- at least for the recommendation. I would also recommend *not* having multiple default names. As mentioned, the yaml control file can be used to override all this, so it still leaves room for individual preferences on the exact namings.

unit tests:
- make test

unit test dependencies:
- make test-depends

functional tests:
- make functional-test

lint:
- make lint

charm-helpers upstream sync:
- make sync

--
David Britton <david.brit...@canonical.com>
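As a concrete sketch of the proposed hyphenated names, the script below writes a skeleton Makefile and runs one of its targets. The recipe bodies are placeholder echoes, not a recommendation of specific lint or test tools.

```shell
#!/bin/sh
set -e
# Build a skeleton Makefile using the proposed target names.
# printf's \t emits the literal tab that make recipes require.
dir=$(mktemp -d)
cd "$dir"
{
  printf 'lint:\n\t@echo "running lint checks (placeholder)"\n'
  printf 'test:\n\t@echo "running unit tests (placeholder)"\n'
  printf 'test-depends:\n\t@echo "installing test dependencies (placeholder)"\n'
  printf 'functional-test:\n\t@echo "running functional tests (placeholder)"\n'
  printf 'sync:\n\t@echo "syncing charm-helpers (placeholder)"\n'
} > Makefile

# A charm consumer (or bundletester) would then just run, e.g.:
make test
```

The point of the convention is exactly this: any charm's tests become runnable with the same `make test` / `make lint` invocation, no per-charm knowledge needed.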
Re: Makefile target names
On Thu, Jan 22, 2015 at 04:57:36PM, Marco Ceppi wrote:
> test: lint unit-test functional-test

-1, I'd rather 'test' be unit testing only. Many charms have this already and it seems like unnecessary busy work to change it.

> ```
> makefile:
>   - code-lint
>   - unit-test
> ```

-1, vote for 'lint' and 'test' (unit test only) at this level, and agree with Tim that it's redundant to print these out; we should make the recommended defaults what bundletester supports.

All else I agree with.

--
David Britton <david.brit...@canonical.com>
Re: An Open Question: Charm Dependency Management
On Tue, Jan 20, 2015 at 05:58:24PM, Marco Ceppi wrote:
> I don't see how a Makefile in a charm doesn't resolve this issue.

+1 on some standard published Makefile targets. We already have some that are highly recommended:

- test
- lint

Maybe:

- test-depends or depends  # to install/update dependencies needed for testing

Are there others that are needed/missing, or that I forgot we already have as standard?

--
David Britton <david.brit...@canonical.com>
Charm Reviews + Activity
Hi -- This week, I reviewed a test charm addition from Matt and a version bump charm from Charles (which needed information).

In reviewing the new apache2 test, I found one annoyance: `juju test` bootstraps and tears down between each step, and with all these new tests added en masse, there seems to be a practice emerging of using a '00_setup' script to install dependencies. This in effect inserts an extra bootstrap unnecessarily into the test process. @Marco or others -- thoughts about putting an exception or machinery in to get rid of this? I was going to go through a lot of these other basic test additions, but wanted to get the question out there before I did.

In other charm-related activity, I also worked on getting real bundles created for the Landscape charm (submitted into the ~landscape namespace as of now); I will get them further cleaned up next week and submitted for review.

Thanks!

--
David Britton <david.brit...@canonical.com>
Re: debug hooks
On Thu, Oct 23, 2014 at 04:09:31PM +0400, Vasiliy Tolstov wrote:
> Hi =)! I have successfully deployed wordpress on precreated lxc containers.
> After deploy to vps i have all services in error state with message about
> install hook failed.

First, look at the unit logs in /var/log/juju:

juju ssh unit ls -l /var/log/juju/unit-*

The unit-* logs are the interesting ones if you have gotten as far as running hooks. If that is not revealing, you can step through hook execution; see the following link (though this is typically not necessary just to see how something failed -- it's more for when you are developing a charm):

https://juju.ubuntu.com/docs/authors-hook-debug.html

HTH!

--
David Britton <david.brit...@canonical.com>
[Review Queue] storage
https://code.launchpad.net/~tribaal/charms/precise/storage/refactor-mount-volume/+merge/232236

Needs Fixing: Failing some deployment scenarios, but the idea and refactor seem sound at this point. Spoke with Chris in the review and on IRC.

--
David Britton <david.brit...@canonical.com>
Re: [Review Queue] storage and zabbix-agent (personal namespace) followup
On Fri, Sep 26, 2014 at 09:46:44PM +0300, Christopher Glass wrote: Back to the hammer and anvil for this one! Looking forward to seeing it again. Thanks Chris for your willingness to take this one on! :) -- David Britton david.brit...@canonical.com -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Juju + MAAS + Image Downloads
On Fri, Sep 05, 2014 at 10:34:22AM +1000, Ian Booth wrote:
> [...]
> For MAAS, if there were an endpoint which could serve the LXC tarballs, as
> opposed to the root images themselves at
> http://cluster-name/MAAS/static/images/ubuntu/amd64/generic/trusty/release/root-image,
> then option 1 would be easiest. We could provide a new config option to
> specify the correct URL to pass to -T

With either of the mechanical changes in how juju creates the template, this last statement feels like the right one (to me). MAAS is the only provider right now that supports LXCs as full first-class citizens; it already downloads and maintains multiple images, and according to Andres it also exports simple streams data. Should be a simple matter to add another image to this mix?

Someone more knowledgeable in MAAS, please correct my errors!

--
David Britton <david.brit...@canonical.com>
Juju + MAAS + Image Downloads
Hi juju folks -- I'm using MAAS + Juju to do some testing behind a firewall with LXCs. I want to accelerate the download of the large images that I am downloading from cloud-images.ubuntu.com. I see that MAAS has cloud images. Ideally, I'd like to instruct Juju to download them from there: https://bugs.launchpad.net/juju-core/+bug/1357045 But I'm not sure that is possible. So, I'll leave it to someone else to pick up that bug if they think it's worthwhile. I then tried to setup squid and proxy them transparently and found that the image-metadata-url that I give juju is only for the .json files that are referenced. The images are still downloaded via https from cloud-images.ubuntu.com. I'm not even sure if this is a bug. I mean, I understand why you want https, but if I want to mirror it, it's a new level of commitment to make it https only especially in a private environment. Is the only option for me to mirror cloud-images and set up an https endpoint (or a transparent https m-i-t-m proxy) in order to avoid downloading these large images over and over? -- David Britton davidpbrit...@gmail.com -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: charmhelpers -- github or launchpad?
On Wed, Aug 20, 2014 at 11:36:51AM -0400, Marco Ceppi wrote: Where are you seeing charmhelpers on Github? I didn't. It was a discussion about charm helpers that confused me and another collegague. Not to worry, totally my mistake. -- David Britton david.brit...@canonical.com -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
~charmers Application - David Britton
Hi Charmers -- Here you will find my application for inclusion into the charmers group. I have been using and developing charms for juju since the pyjuju days, while it was being renamed to juju from ensemble. I have authored a number of charms (Some public, some just for personal use), and made significant contributions to many more. At my day job, I work for Canonical on the Landscape team. This has afforded me the opportunity to work on those charms we use most to faciliate our products (apache2, postgresql, haproxy). I have made a number of visible contributions to these from small bug fixes to large features. Our own charms (landscape-server, landscape-client) are maintained under the ~landcape-charmers team, of which I'm a member. We even have a separate project (landscape-charm) in launchpad for tighter control of our development process on our landscape-server charm -- these charms are both fully open source (GPLv2). We have a fairly extensive internal testing infrastructure for our landscape charms where we spin them up in different combinations daily (trusty, precise, multiple versions of Landscape, etc). We do this all with juju test at an integration level. We also have a full and comprehensive unit test suites for each of our charms. Recently, I desinged, implemented and now maintain (with much help from my fellow team members) the storage charm, and the block-storage-broker charm. These charms allow other services to request, assiociate and mount cloud storage in a juju-friendly way. I'm hoping to see wider adoption of these. I have contributed to the openstack charm collection in a number of ways, testing, debugging, contributing patches, etc. Past these charm specific contributions, I also test, file bugs and contribute patches back to other juju products (juju-deployer, charm-tools, charm-helpers, juju-core, juju-gui, ...) on a regular basis. 
Lastly, I am a heavy user of Juju, maintaining many of our team's internal services with it -- so I understand the need for charm quality and robustness. I am also very aware of making sure full solutions work, not *just* individual charms.

Here are some of the charms where I've made significant contributions (authorship-level):
https://jujucharms.com/precise/landscape-server
https://jujucharms.com/precise/landscape-client
https://jujucharms.com/precise/storage
https://jujucharms.com/precise/block-storage-broker

Charms I've contributed major changes to:
https://jujucharms.com/precise/haproxy
https://jujucharms.com/precise/apache2

A couple of larger MPs that I have authored:
https://code.launchpad.net/~davidpbritton/charms/precise/haproxy/fix-service-entries/+merge/202387
https://code.launchpad.net/~davidpbritton/charms/precise/apache2/vhost-config-relation/+merge/220295
https://code.launchpad.net/~davidpbritton/charms/trusty/apache2/avoid-regen-cert/+merge/223990

Feel free to ask me any questions, and thanks for your consideration. :-) -- David Britton david.brit...@canonical.com
Re: getting rid of all-machines.log
On Fri, Aug 08, 2014 at 12:03:21PM -0400, Nate Finch wrote:
> [...] remote syslog and to the local file log, we wouldn't need to worry about log rotation of the local log screwing up what gets sent to the remote
Do the standard rsyslog log rotation mechanisms not function well? On Windows, what about the event log (which has remote viewing/aggregation capabilities built in)? -- David Britton david.brit...@canonical.com -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Proposal: making apt-get upgrade optional
On Tue, Jul 1, 2014 at 1:53 PM, Matt Bruzek matthew.bru...@canonical.com wrote:
> Hello Andrew, I ran into a problem when Juju was no longer calling apt-get update. I filed bug: https://bugs.launchpad.net/juju-core/+bug/1336353
Agreed -- I've fixed this problem multiple times in charms by making the first step apt-get upgrade, which always seemed a bit wasteful to me. :) It happens more on the local provider, since those images are copied from templates which are not rebuilt until you remove them (do lxc-ls --fancy to see them). So the template's package cache goes out of date, and your cloned machine also goes out of date. -- David Britton david.brit...@canonical.com
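The charm-side workaround described above reads roughly like this, as a sketch of a charm install hook (the package name is a placeholder; refreshing the index with apt-get update is the step the bug report is about):

```shell
#!/bin/sh
# Sketch of a charm install hook that defends against a stale package
# cache, e.g. on the local provider where machines are cloned from an
# aging LXC template.
set -e
apt-get update -q                    # re-sync the package indexes first
apt-get install -y some-package      # placeholder package name
```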
Re: $ juju switch --format
> -name, status, username] - all fields can be shown by using --all
> 6) juju info --list will output the list of environments in either yaml or json (or smart)
> Or perhaps just leave that to the juju switch command. cheers, rog.
-- David Britton david.brit...@canonical.com
Re: $ juju switch --format
Hi William, On Fri, Jun 06, 2014 at 07:40:47PM +0200, William Reade wrote:
> To restate your point, I think: you want to be able to keep seeing and reading simple names for the contexts you have available to work in.
Yes. Agreed.
> Do the following use cases express your needs (even if you weren't hitherto aware that you were specifically manipulating environment *connections*)?
Ah, ok, things are changing, got it. Making sure patterns like the following are still easily grok-able (even if it's a different command) is nice:

  juju destroy-environment $(juju env)

juju env used to print a lot of decoration around the environment name; IIRC it was like:

  $ juju env
  Current Environment: local (from JUJU_ENV)
  $

That was a lot to extract the name from. :)

> As a user, I want to be able to refer to particular environment connections by short simple names
> As a user, I want to be able to see what environment connections I have available
> As a user, I want to be able to see what environment connection is currently active
I'd slightly modify to: As a user, I want to print, without annotation, the currently active environment connection. If that even makes sense to do in the new world order; I'll leave that for you to judge.
> As a user, I want to be able to quickly activate a given environment connection
> As a user, I want to be able to see the details (env uuid, env name, state-servers, etc) of my environment connections
Agreed with the rest. Thanks for writing those out. I see that there are changes coming, and I'm looking forward to what you have in store. Thanks to all for taking the time to consider the issues involved. Good to see you are taking seriously existing scripts, the burden of maintaining old ways of doing things going forward, progress, etc. -- David Britton david.brit...@canonical.com
Re: config-get: error: settings not found
On Thu, Apr 03, 2014 at 10:13:50AM -0300, Andreas Hasenack wrote:
> Is there a normal scenario where this could happen, or is it a bug in juju?
We talked briefly today with the juju folks -- Please file a bug against this. -- David Britton david.brit...@canonical.com
Re: Odd ec2 behavior - volumes attached to bootstrap node.
On Fri, Mar 21, 2014 at 08:21:11AM +0400, John Meinel wrote:
> Notice that volume which is attached is a *new* volume.
You are right... confused by euca/ec2 cmdline output again! :( Thanks John. -- David Britton david.brit...@canonical.com
Odd ec2 behavior - volumes attached to bootstrap node.
Is this expected? Notice one of my volumes is attached after the bootstrap node comes up. The ec2-tools and aws dashboard show the same thing. I've never seen this behavior before (I haven't looked that much), but I'm having trouble finding anything other than the -b option to ec2-run-instances that would expose anything like this.

  dpb@helo:~$ euca-describe-volumes
  VOLUME vol-d59280d7 1 us-west-2a available 2014-03-11T19:22:23.660Z standard
  TAG volume vol-d59280d7 volume_name nfs/0 volume
  VOLUME vol-0a373504 9 us-west-2b available 2014-03-20T17:02:22.356Z standard
  TAG volume vol-0a373504 volume_name postgresql/0 volume

  dpb@helo:~$ juju bootstrap; juju status -v
  verbose is deprecated with the current meaning, use show-log
  2014-03-20 21:06:32 INFO juju.provider.ec2 ec2.go:193 opening environment dpb-aws-us-west-2
  2014-03-20 21:06:54 INFO juju.state open.go:68 opening state; mongo addresses: [ec2-54-245-152-4.us-west-2.compute.amazonaws.com:37017]; entity
  2014-03-20 21:10:04 INFO juju.state open.go:106 connection established
  2014-03-20 21:10:04 INFO juju conn.go:66 juju: authorization error while connecting to state server; retrying
  2014-03-20 21:10:04 INFO juju.state open.go:68 opening state; mongo addresses: [ec2-54-245-152-4.us-west-2.compute.amazonaws.com:37017]; entity
  2014-03-20 21:10:05 INFO juju.state open.go:106 connection established
  environment: dpb-aws-us-west-2
  machines:
    "0":
      agent-state: pending
      dns-name: ec2-54-245-152-4.us-west-2.compute.amazonaws.com
      instance-id: i-7bae4d73
      instance-state: running
      series: precise
      hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
  services: {}
  2014-03-20 21:10:07 INFO juju supercommand.go:286 command finished

  dpb@helo:~$ euca-describe-volumes
  VOLUME vol-d59280d7 1 us-west-2a available 2014-03-11T19:22:23.660Z standard
  TAG volume vol-d59280d7 volume_name nfs/0 volume
  VOLUME vol-2a1cf92f 8 snap-00d68df1 us-west-2a in-use 2014-03-20T21:06:35.358Z standard
  ATTACHMENT vol-2a1cf92f i-7bae4d73 /dev/sda1 attached 2014-03-20T21:06:35.000Z
  VOLUME vol-0a373504 9 us-west-2b available 2014-03-20T17:02:22.356Z standard
  TAG volume vol-0a373504 volume_name postgresql/0 volume

-- David Britton david.brit...@canonical.com
Re: psa for charm authors
On Wed, Mar 12, 2014 at 10:44:39AM -0400, Kapil Thangavelu wrote:
> As of 1.17 (on both client and environment), for local charms, juju will always upload a new version on deploy and upgrade, so no need to worry about cached charms in the state server being used even when you have new changes in the charm.
Awesome news! -- David Britton david.brit...@canonical.com
Re: AWS Request Rate
On Thu, Feb 20, 2014 at 10:12:13AM -0500, Kapil Thangavelu wrote:
> 2. afaicr for a given failed provisioning request, there is no retry in core.
Thanks Kapil and Marco for the replies. Would this be worth filing a bug report about? Or is it working as desired? I seem to remember pyjuju being more resilient in this area. -- David Britton david.brit...@canonical.com
Re: Vagrant boxes with Juju now available
On Fri, Dec 6, 2013 at 10:16 AM, John Arbash Meinel j...@arbash-meinel.com wrote:
> You probably want to look at juju upgrade-charm. When you deploy, we take the contents of the local repository and zip it up to store in the environment, and deploy from there. You need to upgrade-charm to copy in a new version and tell the system that you want to switch to it.
Also, if you have destroyed a service, you can't do upgrade-charm; at that point, you want to make sure the revision file in your charm directory contains a higher number, as juju remembers the charms it deploys. The --upgrade flag on juju deploy can automate that for you, in that case. See the discussion in this bug, as this behavior surprised me too: https://bugs.launchpad.net/juju-core/+bug/1205466 -- David Britton david.brit...@canonical.com
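The two paths described above can be sketched as a short CLI session (the charm name and repository path are placeholders):

```shell
# While the service is still deployed: push the edited local charm
# into the environment and switch the running units over to it.
juju upgrade-charm --repository ~/charms mycharm

# After destroy-service: redeploy with --upgrade so the local charm
# revision is bumped and the cached copy in the state server is not
# silently reused.
juju deploy --upgrade --repository ~/charms local:precise/mycharm
```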
Use non-released images with the openstack provider
Hi -- How do I use non-released images with the openstack provider? I see that azure has an override variable called image-stream, but I don't see anything similar in openstack. Is there some easy way to do this that I'm missing? I'm basically wanting to spin up trusty instances, but they are in the daily stream. Thanks! -- David Britton david.brit...@canonical.com
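For reference, this is roughly what the azure-style override looks like in an environments.yaml stanza; whether the openstack provider honors the same key is exactly the open question here, so treat the image-stream line as an assumption, not an answer. The auth-url is a placeholder:

```shell
# Print a hypothetical environments.yaml stanza for illustration only.
cat <<'EOF'
my-openstack:
  type: openstack
  auth-url: https://keystone.example.com:5000/v2.0/   # placeholder
  image-stream: daily
EOF
```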
juju deployed service, machine goes away
Hi -- If I terminate a machine out from underneath juju, how do I correctly inform juju that machine is no longer there? Is there a way to gracefully terminate from the service unit/machine perspective (equivalent of shutdown to AWS or Nova, it will destroy the instance)? Thanks! -- David Britton david.brit...@canonical.com
Re: juju deployed service, machine goes away
Thanks David -- I think it's currently targeted at 1.16.5: https://bugs.launchpad.net/juju-core/+bug/1089291 I'll be following it there. :)

On Mon, Dec 2, 2013 at 3:06 PM, David Cheney david.che...@canonical.com wrote:
> On Tue, Dec 3, 2013 at 8:47 AM, David Britton david.brit...@canonical.com wrote:
>> Hi -- If I terminate a machine out from underneath juju, how do I correctly inform juju that machine is no longer there?
> The current solution we have for this is 'please don't do that, juju needs to own machines', but we understand that this can easily happen outside of your control. At the moment that will probably leave your juju instance with a phantom reference to a machine. Worse, if this machine was created without a service unit assigned, i.e. via juju add-machine, it may attract a unit which will never be deployed (because the machine has been removed). I say at the moment because this is being worked on as we speak and may already be fixed in 1.16.4 or later. You should at least upgrade to this release. If it is fixed in 1.16.4, when the release notes are available they will mention a new option on destroy-machine to forcefully remove it from the database. Like all --force style options, this should be used with care and not enshrined into regular use.
>> Is there a way to gracefully terminate from the service unit/machine perspective (equivalent of shutdown to AWS or Nova, it will destroy the instance)? Thanks!
-- David Britton david.brit...@canonical.com
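Once the option described in the reply landed, the cleanup would look roughly like this (the machine id is a placeholder, and as noted, --force is a last resort, not for regular use):

```shell
# Tell juju to drop its record of a machine whose instance is already
# gone. Plain destroy-machine waits on the (dead) agent; the forceful
# variant removes the database entry anyway.
juju destroy-machine --force 3   # "3" is a placeholder machine id
```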
Volume Management - NFS Charm
Hi -- I've been looking to build a persistence story into the NFS charm, and I'm wondering if anyone else has thought of this problem before. I opted to use NFS since it exposes the mount interface, which allows a pretty easy use case from relating charms. I would expect that other file storage charms would use something like this, even if the ideas are not quite fleshed out or uniform between charms yet. My ideas are as follows.

1) Expose config settings for cloud volumes in the nfs charm.

   cloud_credentials: base64-encoded shell script to source, containing cloud credentials
   cloud_storage: pseudo-URL syntax for cloud storage device. E.g., swift://block-storage-id

The hook would be smart enough to install the right package, and try to attach and mount the volume after sourcing the credentials file. The disadvantages are that the hook is provider-specific, and would need to be expanded for other clouds to be generally useful.

2) A storage subordinate charm. E.g., swift-storage

You would add this subordinate to the NFS service, and it would have the smarts to mount the block device at an agreed-upon location. The disadvantages involve waiting on the subordinate before the NFS charm is generally useful. I'm not sure how the mechanics of this would work.

I really don't like either of the things I'm proposing, and was wondering what others thought about these ideas. Have you had any brainstorms about it? Have you come up with something better to try out? Please let me know. This seems to be a big missing puzzle piece in practical juju usage right now, and I would love to hear others' thoughts (since I may be missing something). Thanks! -- David Britton david.brit...@canonical.com
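From the client side, option 1 would be driven roughly like this. Everything here is a sketch of the proposal, not a shipped charm interface: the config key names (cloud_credentials, cloud_storage) come from the idea above, and the credentials file contents are placeholders.

```shell
#!/bin/sh
# Build the base64 blob that the proposed cloud_credentials setting
# would carry (placeholder credentials, not a real rc file).
printf 'export OS_USERNAME=demo\nexport OS_PASSWORD=secret\n' > novarc
CREDS_B64=$(base64 < novarc | tr -d '\n')

# Sanity check: the blob decodes back to the original credentials script.
echo "$CREDS_B64" | base64 -d

# Hypothetical usage against the proposed config keys:
# juju set nfs cloud_credentials="$CREDS_B64" \
#              cloud_storage="nova-volume://vol-12345678"
```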
Re: Volume Management - NFS Charm
Sorry -- s/swift/nova-volume/ I have swift on the brain from another issue I'm working on!

On Sun, Nov 17, 2013 at 11:17 AM, David Britton david.brit...@canonical.com wrote:
> [full text of the original proposal quoted]
-- David Britton david.brit...@canonical.com
Re: I want to change the http interface
novella. -- David Britton david.brit...@canonical.com