Re: juju deploy failure on nova

2015-11-22 Thread Billy Olsen
Hi Cathy,

These messages indicate why the service is not yet a fully functional
service. As you point out, the software packages for the base charm
service have been installed at this point, but the service needs
additional relations in order to be fully functioning. These messages are
normally expected when the service has just been deployed.

Once you add the missing relations, you'll see these messages disappear
from the status output.
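For example, with commonly deployed service names (rabbitmq-server for messaging, glance for image, keystone for identity, mysql for database; these names are assumptions, substitute whatever your deployment uses), the missing relations could be added like this:

```shell
# Service names here are illustrative; use the ones from your own environment.
juju add-relation nova-cloud-controller rabbitmq-server   # messaging
juju add-relation nova-cloud-controller glance            # image
juju add-relation nova-cloud-controller keystone          # identity
juju add-relation nova-cloud-controller mysql             # database
juju add-relation nova-cloud-controller nova-compute      # compute
juju add-relation nova-compute rabbitmq-server            # messaging
juju add-relation nova-compute glance                     # image
```

After the relation hooks run, juju status should show the units leaving the blocked state.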

Thanks,

Billy


On Sat, Nov 21, 2015 at 1:59 AM, wuwenbin  wrote:

> Hi Adam:
>
> I downloaded the trusty code and used juju to deploy OpenStack, but
> there are problems with nova-cloud-controller and nova-compute.
>
> The error info is as follows. I think that installing a charm is an
> independent operation, because we add the relationships between the
> charms later. I have no idea what's going on.
>
> Looking forward to your reply.
>
> Thanks.
>
> Best regards
>
> Cathy
>
>
>
> Error info:
>
> nova-cloud-controller:
>
> charm: local:trusty/nova-cloud-controller-501
>
> exposed: false
>
> service-status:
>
>   current: blocked
>
>   message: 'Missing relations: messaging, image, compute, identity,
> database'
>
>   since: 21 Nov 2015 12:44:35+08:00
>
> relations:
>
>   cluster:
>
>   - nova-cloud-controller
>
> units:
>
>   nova-cloud-controller/0:
>
> workload-status:
>
>   current: blocked
>
>   message: 'Missing relations: messaging, image, compute,
> identity, database'
>
>   since: 21 Nov 2015 12:44:35+08:00
>
> agent-status:
>
>   current: idle
>
>   since: 21 Nov 2015 16:39:38+08:00
>
>   version: 1.25.0.1
>
> agent-state: started
>
> agent-version: 1.25.0.1
>
> machine: "5"
>
> open-ports:
>
> - /tcp
>
> - 8773/tcp
>
> - 8774/tcp
>
> - 9696/tcp
>
> public-address: 192.168.122.242
>
>   nova-compute:
>
> charm: local:trusty/nova-compute-133
>
> exposed: false
>
> service-status:
>
>   current: blocked
>
>   message: 'Missing relations: messaging, image'
>
>   since: 21 Nov 2015 16:40:46+08:00
>
> relations:
>
>   compute-peer:
>
>   - nova-compute
>
> units:
>
>   nova-compute/0:
>
> workload-status:
>
>   current: blocked
>
>   message: 'Missing relations: messaging, image'
>
>   since: 21 Nov 2015 16:40:46+08:00
>
> agent-status:
>
>   current: idle
>
>   since: 21 Nov 2015 16:40:48+08:00
>
>   version: 1.25.0.1
>
> agent-state: started
>
> agent-version: 1.25.0.1
>
> machine: "1"
>
> public-address: 192.168.122.56
>
>
>
> log info:
>
> unit-nova-compute-0[3088]: 2015-11-21 08:50:56 WARNING
> unit.nova-compute/0.juju-log server.go:268 messaging relation is missing
> and must be related for functionality.
>
> unit-nova-compute-0[3088]: 2015-11-21 08:50:56 WARNING
> unit.nova-compute/0.juju-log server.go:268 image relation is missing and
> must be related for functionality.
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>


-- 
Billy Olsen

billy.ol...@canonical.com
Software Engineer
Canonical USA
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [Ecosystem-engineering] The future of Charm Helpers

2015-11-22 Thread Billy Olsen
I'm very much in favor of breaking up charm helpers as well. I think it
ultimately acknowledges how charm developers have chosen to use the library
(even if not quite correct), but also allows for a theoretically more
organized core library. A few thoughts come up as I read this...

First, hookenv and the other parts of charmhelpers are so essential to
writing a Python-based charm these days that I would personally push for
the cornerstone Python charm library to use the charms.core namespace,
though I get why charms.helper would be used as a close replacement.
Honestly, I think we should just reboot and put it in the charms.core
namespace: if you want to write a charm in Python, you need this one
library and that's it. Others provide more value, but this one is the
core piece.

Secondly, I'm mildly concerned with the choice of namespace (using the
shared charms. as the parent namespace). There may be a magical Python
3-ism that resolves the mixed development + packaged use of common code
(think pip, virtualenvs, etc.), but the oslo components within OpenStack
ran into some issues with a shared common namespace (some are described in
a blog here
<http://blog.nemebean.com/content/whys-and-hows-oslo-namespace-change>, and
the spec to remove the namespaces within the oslo packages is here
<http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html>).
As the libraries are broken up, as I believe they should be, we need to
make sure we've carefully considered how we expect some of these flows to
work, and make sure they work (and preferably well). Maybe it's not really
an issue, but I'd love to be convinced of that.

Thanks,
Billy

--
Billy Olsen
Software Engineer
Canonical Ltd

On Sun, Nov 22, 2015 at 4:37 PM, Adam Israel 
wrote:

> I am very much in favor of this move forward. I’ve recently worked on
> converting the charm-benchmark package to charms.benchmark; I see where
> having cleaner namespaces will make every charm author’s life easier.
>
> That said, I know that transitioning to this new model is an epic
> undertaking, especially with the changes coming in the next LTS, i.e., no
> python 2 by default. To that end, I’d propose some kind of compatibility
> report be generated (as part of the upgraded CI, perhaps) that notifies
> charm authors of upcoming changes and how their charm fares against the new
> requirements. The last thing I want to see as a ~charmer is Xenial come to
> pass and having to engage in firefighter mode to fix incompatibilities.
>
>
> Adam Israel - Software Engineer
> Canonical Ltd.
> http://juju.ubuntu.com/ - Automate your Cloud Infrastructure
>
> > On Nov 22, 2015, at 2:23 PM, Marco Ceppi 
> wrote:
> >
> > Hello everyone,
> >
> > I'm rebooting this conversation because it never fully came to a
> resolution, and a lot has happened in the Charm Ecosystem since this
> topic was discussed. I still hold firm, and a lot of charmers I've spoken
> with agree, that charm helpers has become the opposite of what it
> originally set out to solve - a concise and tasteful way to bind Python
> to Juju. I've been considering ways to resolve this, providing a clear
> way for users to author Python charms while not diminishing the large
> breadth of helpers and methods already created.
> >
> > A new approach I'd like to present is a clean break from the
> "charm-helpers" name and a transition to a new library, `charms.helper`.
> This name follows the same scheme that the reactive framework does,
> `charms.reactive` and is a way we can continue the practice of producing
> helpers around the charm ecosystem without the need of a monolithic code
> base.
> >
> > Under this proposal, `charmhelpers.core.hookenv` would more or less
> become `charms.helper` and items like `charmhelpers.contrib.openstack`
> would be moved to their own upstream project and be available as
> `charms.openstack`. They will instead depend on `charms.helper` for
> previous `hookenv` methods. This is a cleaner namespace that still
> provides the discoverability (search the pypi index for `charms` and
> you'll see the ecosystem basically owns that space) desired from the
> current source tree.
> >
> > This clean break will allow us to correct a few naming mismatches and
> incubate a transition period where charm authors can use and try the new
> libraries, while the current charm-helpers library takes charms.helper as
> a dependency and maps the current `charmhelpers.core.hookenv` to the new
> `charms.helper`. I've already started work on a strawman to demonstrate
> how charms.helper could look and will share that later this week.
> >
> > With the new charm build pattern and reactive framework this would fit
>

Re: [Ecosystem-engineering] The future of Charm Helpers

2015-11-23 Thread Billy Olsen
Cory,

Yeah, my understanding is that the namespace support in Python 3 is far
improved, and there was some support for it in Python 2.7 which still had
some unique issues from time to time. I'll play around with it a bit and if
I find anything worth mentioning I'll report back. At the very least, it
can go into a known issues/limitations list.

Part of the reason I offered this as only a mild concern is that I'm not
sure whether oslo made particular choices in how namespaces were handled
that directly impacted compatibility between Python 2 and Python 3 when
using namespaced projects.
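To sanity-check the pkgutil.extend_path approach mentioned in this thread on a given Python, a quick local experiment is enough. This is a minimal sketch (the /tmp paths, package names, and module contents are made up) showing two independent source trees sharing the charms namespace:

```shell
# Two separate source trees, each contributing a package under "charms".
mkdir -p /tmp/nsdemo/lib_a/charms/helper /tmp/nsdemo/lib_b/charms/openstack

# Each tree's charms/__init__.py uses the pkgutil namespace-package idiom.
for pkg in /tmp/nsdemo/lib_a/charms /tmp/nsdemo/lib_b/charms; do
  printf 'from pkgutil import extend_path\n__path__ = extend_path(__path__, __name__)\n' \
    > "$pkg/__init__.py"
done

echo "WHO = 'helper'"    > /tmp/nsdemo/lib_a/charms/helper/__init__.py
echo "WHO = 'openstack'" > /tmp/nsdemo/lib_b/charms/openstack/__init__.py

# Both subpackages import even though they live in different trees.
PYTHONPATH=/tmp/nsdemo/lib_a:/tmp/nsdemo/lib_b python3 -c \
  'import charms.helper, charms.openstack; print(charms.helper.WHO, charms.openstack.WHO)'
# prints: helper openstack
```

The same layout can be retried under a virtualenv or with one tree pip-installed to exercise the mixed development + packaged case.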

Thanks!

- Billy



On Mon, Nov 23, 2015 at 12:39 PM, Cory Johns 
wrote:

> Billy,
>
> I also notice that oslo used the following to define the namespace
> packages:
>
> __import__('pkg_resources').declare_namespace(__name__)
>
> My reading indicates that the preferred way to handle namespace packages
> in Python 2 (which is future-compatible with Python 3) is:
>
> from pkgutil import extend_path
> __path__ = extend_path(__path__, __name__)
>
> I tested this (https://github.com/johnsca/nspkg) and it seems to address
> the issues you had with oslo, even in Python 2.  (Note that I did also
> manually test the --system-site-packages + virtualenv case, though I didn't
> commit that code to test.sh in that repo.)
>
> This is the approach we've been using with the charms.X namespace, so I'm
> optimistic that we won't have the same issues you did with oslo.  And, as
> noted in my previous email, we'll probably be switching to Python 3  very
> soon anyway.  However, further testing is always welcome.
>
>
> On Mon, Nov 23, 2015 at 2:01 PM, Cory Johns 
> wrote:
>
>>
>> On Mon, Nov 23, 2015 at 1:37 AM, Billy Olsen 
>> wrote:
>>
>>> Secondly, I'm mildly concerned with the namespace of choice (using the
>>> shared charms. as the parent namespace). There may be a magical python 3ism
>>> that resolves the mixed development + packaged use of common code (think
>>> pip, virtualenvs, etc), but there were some issues that the oslo components
>>> within OpenStack ran into with a shared common namespace ((some are in a
>>> blog here
>>> <http://blog.nemebean.com/content/whys-and-hows-oslo-namespace-change>,
>>> and the spec to remove the namespaces within the oslo packages is here
>>> <http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html>).
>>> As the libraries are broken up, as I believe they should be, we need to
>>> make sure we've carefully considered how we expect some of these flows to
>>> work and make sure they work (and preferably well). Maybe its not really an
>>> issue, but I'd love to be convinced of that.
>>>
>>
>> I do think that namespace package support has been much improved in
>> Python 3 (in particular, 3.3 and above), but I must admit that I had not
>> run into nor been aware of the issues with namespace packages under earlier
>> versions of Python.  However, there has already been discussion of making
>> all layered / reactive charms Python 3 anyway, so maybe we can do some
>> quick tests to determine if those issues have been resolved with the new
>> namespace package support?
>>
>
>




Re: usr and pwd of openstack------>Re: juju deploy failure on nova

2015-11-24 Thread Billy Olsen
Hi Cathy,

The easiest way is to set the admin-password option on the keystone
service: juju set keystone admin-password=mypassword.

For the database, are you using the mysql charm or percona?

- Billy

On Mon, Nov 23, 2015 at 5:51 AM, wuwenbin  wrote:

> Hi Billy:
>
> You're right about the relationships. Now another problem has come up:
> the username and password to log in to openstack-dashboard can't be
> found. I checked the keystone config.yaml and found the related config
> below, but that doesn't work. I also tried to use mysql, but that
> password is not known either. So if you know what they are or how to
> find them, please help me.
>
>Thanks.
>
> Best regards
>
> Cathy
>
>
>
> admin-user:
>
> default: admin
>
> type: string
>
> description: Default admin user to create and manage.
>
>   admin-password:
>
> default: None
>
> type: string
>
> description: |
>
>   Admin password. To be used *for testing only*. Randomly generated by
>
>   default.
>
>
>
> *From:* Billy Olsen [mailto:billy.ol...@canonical.com]
> *Sent:* November 23, 2015 1:57
> *To:* wuwenbin
> *Cc:* ad...@canonical.com; jiangrui (D); juju@lists.ubuntu.com;
> Weidong.Shao; Qinchuan; Ashlee Young; Zhaokexue
> *Subject:* Re: juju deploy failure on nova
>
>





Re: Tuning ceph

2015-11-25 Thread Billy Olsen
Pshem,

If you have specific use cases around the setting of config options, please
do share. The charms tend to be opinionated about configuration and make it
simple to deploy the majority of installations. However, there will
undoubtedly be config tweaks here and there. Your use cases can help ensure
we are attempting to cover your needs.
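In the meantime, one-off tuning that the charm doesn't expose can often be applied with juju run against the ceph units. A sketch (the option name and value are purely illustrative, and such runtime changes are not managed or preserved by the charm):

```shell
# Inject a runtime setting on every OSD via ceph's admin interface.
# The option/value here are illustrative examples, not a recommendation.
juju run --service ceph 'ceph tell osd.* injectargs "--osd-recovery-max-active 1"'

# Inspect what the charm currently exposes before proposing new options upstream.
juju get ceph
```

Anything injected this way should also be captured somewhere persistent, since the charm may regenerate ceph.conf without it.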


Thanks,

Billy

On Wednesday, November 25, 2015, Pshem Kowalczyk  wrote:

> Hi,
>
> Yes, I came to the same conclusion: either write my own charms or try to
> get the features implemented upstream.
>
> In a way I think that having some sort of local 'overlay' for the hooks
> (one that gets applied automatically but doesn't modify the original
> charms) would make things easier.
>
> At this stage the juju ecosystem, despite being quite flexible, is not
> really conducive to external changes. It's pretty much an all-or-nothing
> approach. It works well for most deployments, but not when you want to
> fine-tune the relatively complex stuff. I do wonder how much need there
> really is for this fine-tuning of the charms; perhaps it's just me ;-)
>
> kind regards
> Pshem
>
>
> On Thu, 26 Nov 2015 at 09:34 Peter Sabaini wrote:
>
>> On 25.11.15 21:29, Pshem Kowalczyk wrote:
>> > Right,
>> >
>> > How do you make sure that juju doesn't override my changes? If I had to
>> > add another mon node (and remove one of the existing ones) the new
>> > config would be overwritten by the default one.
>> >
>> > I think the general issue is that I can't tell when particular config
>> > files will be re-generated.
>>
>> Indeed, that only lends itself to configuration outside of the charms'
>> control. If, however, you're getting an overlap with a juju-managed
>> config file, your only options are a) get the needed parameter included
>> upstream or b) fork the charm
>>
>> cheers,
>> peter.
>>
>>
>>
>> >
>> > kind regards
>> > Pshem
>> >
>> >
>> > On Wed, 25 Nov 2015 at 21:58 Peter Sabaini wrote:
>> >
>> > On 24.11.15 23:25, Pshem Kowalczyk wrote:
>> > > Hi,
>> > >
>> > > I'm relatively new to the juju ecosystem. I've built a test/POC
>> > > openstack setup using juju charms. Ceph is used as the
>> > backend-storage
>> > > system for the deployment. Since the production deployment of this
>> > > system has to meet some external requirements (particular CRUSH
>> > > settings, recovery times etc) I'll have to tune ceph settings a
>> bit.
>> > >
>> > > The charm itself doesn't seem to have a section to add that
>> > information
>> > > (some other charms do have that ability). What's the best way of
>> > doing it?
>> > >
>> > > In general case, I've realised that sometimes it would be useful
>> to
>> > > have ability to run some actions after juju has finished its
>> > > configuration to fine-tune it to particular requirements (without
>> > > losing the advantages of using juju for all the dependencies). Is
>> it
>> > > possible to do something like that without building my own charms?
>> >
>> > We're generally just using "juju ssh", "juju run" and occasionally
>> > "juju scp"
>> >
>> > Caveat: juju ssh doesn't really handle stdin
>> >
>> > cheers,
>> > peter.
>> >
>> > > kind regards
>> > > Pshem
>> > >
>> > >
>> > >
>> >
>> >
>> > --
>> > Juju mailing list
>> > Juju@lists.ubuntu.com
>> >
>> > Modify settings or unsubscribe at:
>> > https://lists.ubuntu.com/mailman/listinfo/juju
>> >
>>
>>
>>



Re: Using maas+juju to deploy openstack, I can't run 02-maasdeploy.sh successfully.

2015-12-04 Thread Billy Olsen
So what this seems to indicate is that either a) the ssh key isn't
properly set up as part of the MAAS cloud unit script, b) the private key
doesn't match the public key installed, or c) something else I can't think
of at the moment.

Can we get the full maas-deployer logs? That may shed some light on what's
going on.

Artur, perhaps something else comes to mind? It's almost as if it's not
fully cleaned up...

Billy

The maas-deployer should be creating/using the ssh key available in
/root/.ssh/id_maas, according to the logs, because it's running as root. I
don't currently have access to the script (02-maasdeploy.sh), so I'm not
100% sure of the context in which you are running this.

If you don't have another MAAS environment set up on the same host,
something you can try is to remove the /root/.ssh/id_maas* keys.
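A quick way to check hypothesis (b) above, and to clear a possibly stale key pair for hypothesis (a), might be something like (paths taken from the logs; adapt as needed):

```shell
# Derive the public key from the private half and compare it with the
# installed .pub file; a mismatch supports the key-mismatch hypothesis.
ssh-keygen -y -f /root/.ssh/id_maas | awk '{print $2}' > /tmp/id_maas.derived
awk '{print $2}' /root/.ssh/id_maas.pub | diff - /tmp/id_maas.derived \
  && echo "private/public halves match"

# If no other MAAS environment on this host uses this key, remove it so
# maas-deployer generates a fresh pair on the next run.
rm -f /root/.ssh/id_maas /root/.ssh/id_maas.pub
```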

On Friday, December 4, 2015, zhangyuanyou  wrote:

> Hi  Artur Tyloch,
> I'm working on maas+juju to deploy openstack. I edited
> 02-maasdeploy.sh and executed it, but the line "maas-deployer -c
> deployment.yaml -d --force" doesn't pass; it always asks me to input
> the password, like this:
>
> *2015-12-04 17:45:54,032 DEBUG Executing: 'virt-install --connect
> qemu:///system --name opnfv-maas-intel --ram 4096 --vcpus 4 --disk
> vol=default/opnfv-maas-intel-root.img,format=qcow2,bus=virtio,io=native
> --disk
> vol=default/opnfv-maas-intel-seed.img,format=raw,bus=virtio,io=native
> --network bridge=brAdm,model=virtio --network bridge=brData,model=virtio
> --network bridge=brPublic,model=virtio --noautoconsole --vnc --import'
> stdin=''*
> *2015-12-04 17:45:55,145 DEBUG Executing: 'virsh -c qemu:///system
> autostart opnfv-maas-intel' stdin=''*
> *2015-12-04 17:45:55,162 DEBUG Waiting for MAAS vm to come up for ssh..*
> *2015-12-04 17:45:55,163 DEBUG Using ip address specified: 192.168.212.140*
> *2015-12-04 17:45:55,163 DEBUG Executing: 'ssh -i /root/.ssh/id_maas -o
> UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> root@192.168.212.140 
> true' stdin=''*
>
>
> *root@192.168.122.140's password:*
>
> I edited the file deployment.yaml like this:
> *demo-maas:*
> *maas:*
> *# Defines the general setup for the MAAS environment, including
> the*
> *# username and password for the host as well as the MAAS server.*
> *user: root*
> *password: root*
>
> *# Contains the virtual machine parameters for creating the MAAS
> virtual*
> *# server. Here you can configure the name of the virsh domain,
> the*
> *# parameters for how the network is attached.*
> *name: opnfv-maas-intel*
> *interfaces:
> ['bridge=brAdm,model=virtio','bridge=brData,model=virtio','bridge=brPublic,model=virtio']*
> *memory: 4096*
> *vcpus: 4*
> *arch: amd64*
> *pool: default*
> *disk_size: 160G*
>
> *# Apt http proxy setting(s)*
> *apt_http_proxy:*
>
> *apt_sources:*
> *  - ppa:maas/stable*
> *  - ppa:juju/stable*
>
> *# Virsh power settings*
> *# Specifies the uri and keys to use for virsh power control of
> the *
> *# juju virtual machine. If the uri is omitted, the value for the*
> *# --remote is used. If no power settings are desired, then do not*
> *# supply the virsh block.*
> *virsh:*
> *rsa_priv_key: /home/ubuntu/.ssh/id_rsa*
> *rsa_pub_key: /home/ubuntu/.ssh/id_rsa.pub*
> *#uri: qemu+ssh://ubuntu@10.4.1.1/system
> <http://ubuntu@10.4.1.1/system>*
>
> *# Defines the IP Address that the configuration script will use
> to*
> *# to access the MAAS controller via SSH.*
> ip_address: 192.168.212.140
>
> Could you help me resolve this issue? Any assistance is greatly
> appreciated.
>
> Thanks.
> Yuanyou
>
>
>
>
>




Re: 02-maasdeploy.sh can't pass, StopIteration

2015-12-09 Thread Billy Olsen
-i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2 true' stdin=''
> 2015-12-09 18:53:39,701 DEBUG MAAS vm started.
> 2015-12-09 18:53:39,701 DEBUG Logging into maas host '192.168.122.2'
> 2015-12-09 18:53:39,702 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2 grep "MAAS controller is now configured"
> /var/log/cloud-init-output.log' stdin=''
> 2015-12-09 18:53:40,143 INFO Waiting for cloud-init to complete - this
> usually takes several minutes
> 2015-12-09 18:53:40,143 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2 grep -m 1 "MAAS controller is now configured" <(sudo
> tail -n 1 -F /var/log/cloud-init-output.log)' stdin=''
> 2015-12-09 19:44:08,952 INFO done.
> 2015-12-09 19:44:08,952 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2 grep "+ apikey=" /var/log/cloud-init-output.log|
> tail -n 1| sed -r "s/.+=(.+)/\1/"' stdin=''
> 2015-12-09 19:44:09,459 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2 mkdir virsh-keys' stdin=''
> 2015-12-09 19:44:09,939 DEBUG Executing: 'scp -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> /home/ubuntu/.ssh/id_rsa ubuntu@192.168.122.2:virsh-keys/id_rsa' stdin=''
> 2015-12-09 19:44:10,421 DEBUG Executing: 'scp -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> /home/ubuntu/.ssh/id_rsa.pub ubuntu@192.168.122.2:virsh-keys/id_rsa.pub'
> stdin=''
> 2015-12-09 19:44:10,901 DEBUG Executing script on remote host
> '192.168.122.2'
> 2015-12-09 19:44:10,901 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2' stdin='maas_home=...'
> 2015-12-09 19:44:11,435 DEBUG Fetching MAAS api key
> 2015-12-09 19:44:11,435 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
> ubuntu@192.168.122.2 sudo maas-region-admin apikey --username ubuntu'
> stdin=''
> 2015-12-09 19:44:12,714 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o
> LogLevel=quiet ubuntu@192.168.122.2 maas login maas
> http://192.168.122.2/MAAS/api/1.0
> 3SpVLmd46fELPetYq7:2T2QZvtZpazeTEwLKZ:Dmczh6W43yxxHexGAjkbXKPPR3G4cPJe'
> stdin='LC_ALL=C'
> 2015-12-09 19:44:13,434 DEBUG Configuring MAAS settings...
> 2015-12-09 19:44:13,435 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o
> LogLevel=quiet ubuntu@192.168.122.2 maas maas maas set-config
> name='main_archive' value='http://us.archive.ubuntu.com/ubuntu''
> stdin='LC_ALL=C'
> 2015-12-09 19:44:14,225 DEBUG Command executed successfully: stdout='OK'
> 2015-12-09 19:44:14,226 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o
> LogLevel=quiet ubuntu@192.168.122.2 maas maas maas set-config
> name='maas_name' value='automaas'' stdin='LC_ALL=C'
> 2015-12-09 19:44:15,013 DEBUG Command executed successfully: stdout='OK'
> 2015-12-09 19:44:15,013 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o
> LogLevel=quiet ubuntu@192.168.122.2 maas maas maas set-config
> name='ntp_server' value='202.120.2.101'' stdin='LC_ALL=C'
> 2015-12-09 19:44:15,929 DEBUG Command executed successfully: stdout='OK'
> 2015-12-09 19:44:15,930 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o
> LogLevel=quiet ubuntu@192.168.122.2 maas maas maas set-config
> name='upstream_dns' value='114.114.114.114'' stdin='LC_ALL=C'
> 2015-12-09 19:44:16,748 DEBUG Command executed successfully: stdout='OK'
> 2015-12-09 19:44:16,748 DEBUG Starting the import of boot resources
> 2015-12-09 19:44:16,748 DEBUG Executing: 'ssh -i /home/ubuntu/.ssh/id_maas
> -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o
> LogLevel=quiet ubuntu@192.168.122.2 maas maas boot-resources import'
> stdin='LC_ALL=C'
> 2015-12-09 19:44:17,536 DEBUG Command executed successfully:
> stdout='Import of boot resources started'
> 2015-12-09 19:44:17,536 DEBUG Logging into 192.168.122.2
>  Importing images ... Queued for download
>  Importing images ... Downloading   5% ad
>  Importing images ... Downloading   5%
>  Importing images ... Downloading   5%
> Traceback (most recent call last): 97%
>   File "/usr/bin/maas-deployer", line 9, in 
> load_entry_point('maas-deployer==0.0.1', 'console_scripts',
> 'maas-deployer')()
>   File "/usr/lib/python2.7/dist-packages/maas_deployer/cli.py", line 88,
> in main
> engine.deploy(target)
>   File "/usr/lib/python2.7/dist-packages/maas_deployer/vmaas/engine.py",
> line 71, in deploy
> self.wait_for_import_boot_images(client, maas_config)
>   File "/usr/lib/python2.7/dist-packages/maas_deployer/vmaas/engine.py",
> line 330, in wait_for_import_boot_images
> complete, status = checker.are_images_complete()
>   File
> "/usr/lib/python2.7/dist-packages/maas_deployer/vmaas/maasclient/bootimages.py",
> line 93, in are_images_complete
> status = self.get_status()
>   File
> "/usr/lib/python2.7/dist-packages/maas_deployer/vmaas/maasclient/bootimages.py",
> line 60, in get_status
> {'host': self.host, 'sequence': self.sequence.next()})
>
> StopIteration
>
>
>
>
>





Re: Upgrading charms to 16.01

2016-01-28 Thread Billy Olsen
Pshem,

I'm curious if switching would work for you...

e.g. juju upgrade-charm --switch cs:trusty/glance-29
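For completeness, the full invocation also takes the target service name, and a failed hook can then be retried (the revision and unit name here are illustrative, not a recommendation):

```shell
# Cross-grade the service to a specific store charm revision, then
# retry whichever hook failed (unit name is an example).
juju upgrade-charm --switch cs:trusty/glance-30 glance
juju resolved --retry glance/0
```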

Thanks,

Billy

On Thu, Jan 28, 2016 at 3:43 PM, Pshem Kowalczyk  wrote:

> Ok,
>
> force-downgrade doesn't seem to work:
> ubuntu@maascontroller:~$ juju upgrade-charm keystone
>
>
>
> ERROR already running latest charm "cs:trusty/keystone-33"
> ubuntu@maascontroller:~$ juju upgrade-charm --force keystone
> ERROR already running latest charm "cs:trusty/keystone-33"
>
> I ended up removing each unit and re-adding it. This has resolved the
> keystone issue.
>
> I have tried that method with another charm (glance):
>
> ubuntu@maascontroller:~$ juju set glance
> openstack-origin=cloud:trusty-mitaka
> ubuntu@maascontroller:~$ juju upgrade-charm --force glance
> Added charm "cs:trusty/glance-30" to
> the environment.
>
> that has resulted in exactly the same error:
> 2016-01-28 22:40:39 ERROR juju-log FATAL ERROR: Could not derive OpenStack
> version for codename: mitaka
> 2016-01-28 22:40:39 ERROR juju.worker.uniter.operation runhook.go:107 hook
> "config-changed" failed: exit status 1
>
> So I think the only way is to blow away a service unit and deploy a new
> one.
>
> kind regards
> Pshem
>
>
> On Fri, 29 Jan 2016 at 10:49 James Page  wrote:
>
>>
>> Hi Pshem
>>
>> On Thu, 28 Jan 2016 at 22:39 Pshem Kowalczyk  wrote:
>>
>>> I've tried to upgrade keystone to the new charm version (from liberty).
>>> I've updated the source:
>>>
>>> juju set keystone openstack-origin=cloud:trusty-mitaka
>>>
>>> and scheduled an upgrade:
>>>
>>>  juju upgrade charm keystone
>>>
>>> but the charm upgrade fails:
>>>
>>> 2016-01-28 21:36:13 ERROR juju-log FATAL ERROR: Could not derive
>>> OpenStack version for codename: mitaka
>>>
>>> What am I doing wrong?
>>>
>>
>> You need to upgrade the charm first, and then set the configuration
>> option as the old version of the charm does not know about mitaka.
>>
>> You can resolve this by doing:
>>
>> juju upgrade-charm --force keystone
>> juju resolved --retry keystone/0 (or whatever the unit name is that
>> failed)
>>
>> Hopefully that should fix you up.
>>
>> Cheers
>>
>> James
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>
>
>


-- 
Billy Olsen

billy.ol...@canonical.com
Software Engineer
Canonical USA


Re: charmers + openstack-charmers application

2016-02-19 Thread Billy Olsen
board/next
>>   -
>>
>> https://code.launchpad.net/~openstack-charmers/charms/trusty/percona-cluster/next
>>   -
>>
>> https://code.launchpad.net/~openstack-charmers/charms/trusty/rabbitmq-server/next
>>   -
>>
>> https://code.launchpad.net/~openstack-charmers/charms/trusty/swift-proxy/next
>>   -
>>
>> https://code.launchpad.net/~openstack-charmers/charms/trusty/swift-storage/next
>>
>>
>> Thanks for all the great tools, and thank you for your consideration.
>>
>> Cheers & happy charming!
>>
>> Ryan Beisner
>>
>>
>>
>>
>
> --
> José Antonio Rey
>
>





Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Billy Olsen
re.
>>>>>
>>>>> # Problem
>>>>>
>>>>> Without making this a novel, charm-testing and amulet started before
>>>>> bundles were even a construct in Juju, with a spec written before Juju 1.0.
>>>>> Since then, many newcomers to the ecosystem have remarked how odd it is to
>>>>> be writing deployment validations at the charm level. Indeed, as years have
>>>>> gone by and new tools have sprung up, it's become clear that having an
>>>>> author try to model all the permutations of a charm's deployment and do the
>>>>> physical deploys at that charm level is tedious and incomplete at best.
>>>>>
>>>>> With the explosion of layers and improvements to unit testing in
>>>>> charms at that component level, I feel that continuing to create these
>>>>> bespoke "bundles" via amulet in a single charm will not be a robust
>>>>> solution going forward. As we sprint closer to Juju 2.0 we're seeing a
>>>>> higher demand for assurance of working scenarios, and a sharp focus on
>>>>> quality at every level. As such I'd like to propose the following policy
>>>>> changes:
>>>>>
>>>>> - All bundles must have tests before promulgation to the store
>>>>> - All charms need to have comprehensive tests (unit or amulet)
>>>>> - All charms should be included in a bundle
>>>>>
>>>>> I'll break down my reasoning and examples in the following sections:
>>>>>
>>>>> # All bundles must have tests before promulgation to the store
>>>>>
>>>>> Writing bundle tests with Amulet is actually a more compelling story
>>>>> today than writing an Amulet test case for a charm. As an example, there's
>>>>> a new ELK stack bundle being produced, here's what the test for that 
>>>>> bundle
>>>>> looks like:
>>>>> https://github.com/juju-solutions/bundle-elk-stack/blob/master/tests/10-test-bundle
>>>>>
>>>>> This makes a lot of sense because it's asserting that the bundle is
>>>>> working as expected by the author who put the bundle together. It also
>>>>> loads the bundle.yaml as the deployment spec, meaning that as the bundle
>>>>> evolves the tests will make sure it continues to run as expected. Also,
>>>>> this could potentially be used in future smoke tests for charms being
>>>>> updated if a CI process swaps out, say elasticsearch, for a newer version
>>>>> of a charm being reviewed. We can assert that both the unittests in
>>>>> elasticsearch work and it operates properly in an existing real world
>>>>> solution a la the bundle.
>>>>>
>>>>> Additional examples:
>>>>> -
>>>>> https://github.com/juju-solutions/bundle-realtime-syslog-analytics/blob/master/tests/01-bundle.py
>>>>> -
>>>>> https://github.com/juju-solutions/bundle-apache-core-batch-processing/blob/master/tests/01-bundle.py
>>>>>
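As a loose illustration of what treating the bundle spec as the source of truth buys you (independent of Amulet's API, which isn't reproduced here), a bundle test can sanity-check the spec itself, e.g. verifying that every relation endpoint references a declared service. The bundle dict below mirrors the shape of a bundle.yaml; the service names are illustrative:

```python
# Sketch: using the bundle spec itself as the source of truth for a
# sanity test. Service and interface names are illustrative.
bundle = {
    'services': {
        'elasticsearch': {'charm': 'cs:trusty/elasticsearch', 'num_units': 1},
        'kibana': {'charm': 'cs:trusty/kibana', 'num_units': 1},
    },
    'relations': [
        ['kibana:rest', 'elasticsearch:client'],
    ],
}

def undeclared_services(bundle):
    """Return relation endpoints that name a service not in the bundle."""
    declared = set(bundle['services'])
    missing = []
    for rel in bundle['relations']:
        for endpoint in rel:
            service = endpoint.split(':')[0]
            if service not in declared:
                missing.append(endpoint)
    return missing

# A well-formed bundle has no dangling relation endpoints.
assert undeclared_services(bundle) == []
```

A real bundle test would go on to deploy the spec and probe the running services, but even this static check catches a bundle that drifted out of sync with its relations.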
>>>>> # All charms need to have comprehensive tests (unit or amulet)
>>>>>
>>>>> This is just a clarification and a more strongly worded policy change
>>>>> requiring charms to have unit tests (preferred) or, if not applicable,
>>>>> an Amulet test. Bash doesn't really allow for unit testing, so in those
>>>>> scenarios an Amulet test would function as a valid testing case.
>>>>>
>>>>> There are also some charms which will not make sense as a bundle. One
>>>>> example is the recently promulgated Fiche charm:
>>>>> http://bazaar.launchpad.net/~charmers/charms/trusty/fiche/trunk/view/head:/tests/10-deploy
>>>>>  It's
>>>>> a standalone pastebin, but it's an awesome service that provides 
>>>>> deployment
>>>>> validation with an Amulet test. The test stands up the charm, exercises
>>>>> configuration, and validates the service responds in an expected way. For
>>>>> scenarios where a charm does not have a bundle an Amulet test would be
>>>>> required.
>>>>>
>>>>> Any charm that currently includes an Amulet test is welcome to
>>>>> continue keeping such a test.
>>>>>
>>>>> # All charms should be included in a bundle
>>>>>
>>>>> This last one is to underscore that charms need to serve a purpose.
>>>>> This policy is written as not an absolute, but instead a strongly worded
>>>>> suggestion as there are always charms that are exceptions to the rules. 
>>>>> One
>>>>> such example is the aforementioned Fiche charm which as a bundle would not
>>>>> make as much sense, but is still a purposeful charm.
>>>>>
>>>>> That being said, most users coming to consume Juju are looking to
>>>>> solve a problem. Bundles underscore solutions to problems that people can
>>>>> consume, and get started quicker.
>>>>>
>>>>> As such, when new applications are charmed, the test is "is this
>>>>> application something that serves a clear purpose?" Having a bundle
>>>>> submitted alongside the charm validates that claim and provides users a way
>>>>> to immediately get started with a solution.
>>>>>
>>>>> # Conclusion
>>>>>
>>>>> These policy changes, once accepted, will be targeted at all charms
>>>>> and bundles in Xenial as well as any new charm submitted after the policy
>>>>> acceptance date for trusty; finally, any charm currently under review
>>>>> will be encouraged to adhere to the new policy but won't be required to.
>>>>>
>>>>> # Action items
>>>>>
>>>>> I'm seeking feedback on this concept and welcome suggestions for
>>>>> improvements, questions, dissenting opinions, and any other remarks as 
>>>>> well
>>>>> as votes from ~charmers and feedback from the community at large.
>>>>>
>>>>> Thanks,
>>>>> Marco Ceppi
>>>>>
>>>>>
>>
>
>
>




Re: Ceph-Radosgw Account Quotas

2016-03-30 Thread Billy Olsen
James,

You can manage ceph radosgw quotas via the radosgw-admin command -
http://docs.ceph.com/docs/hammer/radosgw/admin/

I'm not sure if the general object storage APIs from OpenStack apply, but
in general Ceph is S3-compatible rather than Swift-compatible. If there's an
API difference, Ceph will lean the way of S3. If there are Swift-specific
APIs, it's possible that Ceph does not yet honor them.
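The radosgw-admin tool manages quotas per user or per bucket. As a sketch of how those invocations could be composed from a management script (command construction only, nothing is executed here; the uid and limit values are illustrative):

```python
# Sketch: composing radosgw-admin user-quota commands. Flag names follow
# the radosgw admin guide; the uid and limits below are illustrative.
def quota_set_cmd(uid, max_size_bytes=None, max_objects=None):
    """Build the argv for setting a user-scope quota."""
    cmd = ['radosgw-admin', 'quota', 'set', '--quota-scope=user',
           '--uid=%s' % uid]
    if max_size_bytes is not None:
        cmd.append('--max-size=%d' % max_size_bytes)
    if max_objects is not None:
        cmd.append('--max-objects=%d' % max_objects)
    return cmd

def quota_enable_cmd(uid):
    """Build the argv for enabling the user-scope quota."""
    return ['radosgw-admin', 'quota', 'enable', '--quota-scope=user',
            '--uid=%s' % uid]

# In a real deployment these would be run on the radosgw unit (e.g. via
# `juju ssh` and subprocess.check_call); here we only print the command.
print(' '.join(quota_set_cmd('johndoe', max_size_bytes=1024 ** 3)))
```

Quotas set this way apply at the gateway level, independent of whatever the keystone-registered object-store endpoint supports.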


On Tue, Mar 29, 2016 at 10:19 AM, James Beedy  wrote:

> Liam,
>
> The capability to modify account quotas for object storage is a must have.
> Can you aid me in finding out how this might be accomplished using
> ceph-radosgw?
>
> Thanks,
>
> ~James
>
> On Tue, Mar 29, 2016 at 3:48 AM, Liam Young 
> wrote:
>
>> Hi James,
>>
>> The ceph-radosgw charm does register endpoints with keystone. The
>> catalog query below was against the deployment done by
>> the 018-basic-trusty-liberty ceph-radosgw amulet test:
>>
>> $ keystone catalog --service object-store
>>
>> Service: object-store
>> +-------------+----------------------------------+
>> |   Property  |              Value               |
>> +-------------+----------------------------------+
>> |   adminURL  |    http://10.5.5.41:80/swift     |
>> |      id     | 5369a3e7cdc846af8c6a1cda90a6bd7a |
>> | internalURL |   http://10.5.5.41:80/swift/v1   |
>> |  publicURL  |   http://10.5.5.41:80/swift/v1   |
>> |    region   |            RegionOne             |
>> +-------------+----------------------------------+
>>
>> Having said that, I don't know whether ceph-radosgw supports managing
>> quotas via that API; I suspect not.
>>
>> Liam
>>
>> On Fri, Mar 25, 2016 at 9:56 PM, James Beedy 
>> wrote:
>>
>>> Team,
>>>
>>> I have a need to increase the account quotas of my ceph-radosgw object
> storage. To the best of my knowledge, I need to perform API calls similar
>>> to those found here:
>>> http://docs.openstack.org/liberty/config-reference/content/object-storage-account-quotas.html
>>>
>>> Is this functionality currently supported by the object-store api?
>>>
> I feel like ceph-radosgw may not be passing the relation data on the
> identity joined hook to facilitate the creation of my needed endpoint.
>>>
>>> Has anyone else hit this yet? I feel like this is a legitimate bug with
>>> either ceph-radosgw or keystone, although I could just be missing something.
>>>
>>> I feel like I need the public and internal endpoints created here:
>>> http://docs.openstack.org/liberty/install-guide-ubuntu/swift-controller-install.html
>>>
>>> Any insight would be greatly appreciated!
>>>
>>> Thanks!
>>>
>>> James
>>>
>>>
>>>
>>
>
>
>




Application of Membership for Charmers

2017-07-21 Thread Billy Olsen
Hello Charmers,

My name is Billy Olsen and I've been a long-time contributor to the
OpenStack Charms, going back 3 years now. I'm currently a core member of
the OpenStack charming community and have additionally made contributions
of code and review effort to the charm-helpers library.

I believe that I can provide a positive contribution in the overall
charming ecosystem.

Thanks,

Billy


Re: Application of Membership for Charmers

2017-07-27 Thread Billy Olsen
Thanks Tim, Alex, David, Jorge, James and the rest of the charmers!


On Thu, Jul 27, 2017 at 10:07 AM Tim Van Steenburgh <
tim.van.steenbu...@canonical.com> wrote:

> +1 and approved, thanks Billy.
>
> On Fri, Jul 21, 2017 at 8:41 AM, Billy Olsen 
> wrote:
>
>> Hello Charmers,
>>
>> My name is Billy Olsen and I've been a long-time contributor to the
>> OpenStack Charms, going back 3 years now. I'm currently a core member of
>> the OpenStack charming community and have additionally made contributions
>> of code and review effort to the charm-helpers library.
>>
>> I believe that I can provide a positive contribution in the overall
>> charming ecosystem.
>>
>> Thanks,
>>
>> Billy
>>
>>
>>


Re: Juju Charmer application

2017-08-30 Thread Billy Olsen
It's a bit late, but I'd like to give an enthusiastic +1 as well

On Wed, Aug 30, 2017 at 1:50 PM Tim Van Steenburgh <
tim.van.steenbu...@canonical.com> wrote:

> Thanks Frode, welcome to ~charmers!
>
> On Wed, Aug 30, 2017 at 4:10 PM, David Ames 
> wrote:
>
>> Huge +1
>>
>> On Wed, Aug 30, 2017 at 3:14 AM, Frode Nordahl 
>> wrote:
>> > Dear Juju community,
>> >
>> > I would like to officially apply for membership of the Juju ~charmers
>> team.
>> >
>> > Over the course of the past year I have made contributions to the
>> > OpenStack Charms and other Charm projects. I have also had the
>> privilege of
>> > meeting many of you in person, and have shared fruitful exchanges. I
>> have
>> > signed the Ubuntu Code of Conduct.
>> >
>> > Before playing around with Juju I have had a long career in tech, from
>> > which I have experience with both operations and development of
>> > system-level code.
>> > I particularly like having my code well tested to make sure it keeps on
>> > running in the future.
>> >
>> > Some examples of my charm-related work:
>> >
>> https://github.com/openstack/charm-neutron-openvswitch/commit/4ffbc2fe25400abf55719a370f3a2cd37f90c99d
>> >
>> https://github.com/openstack/charm-rabbitmq-server/commit/08b10513c5725fb740382668c47fc769a6f2936c
>> >
>> https://github.com/marcoceppi/charm-mysql/commit/cefb77fafcd1ee36d4dc30c14d07aa857d5273a2
>> >
>> https://github.com/marcoceppi/charm-mysql/commit/1a8277855be4020b26121cb9d573cd150b6aa882
>> >
>> https://github.com/openstack/charm-ceph-radosgw/commit/7fa6639ab3fde7dc89131fb204f018fd4339e82f
>> >
>> https://github.com/openstack/charm-keystone/commit/5de1770931e886732870da1909f08279a0b804b4
>> >
>> https://github.com/openstack/charm-nova-cloud-controller/commit/2eef644a5c1acb2675e94908c88182658fec4ac5
>> >
>> https://github.com/openstack/charm-openstack-dashboard/commit/8f3a93ac4e7102736da492a189144220312f93df
>> >
>> https://github.com/openstack/charm-swift-proxy/commit/7c24ae81283710c830ab03f240ec9cc10dccd975
>> >
>> > Launchpad ID: fnordahl
>> >
>> > --
>> > Frode Nordahl
>> >
>> >
>>
>>
>
>


Re: patch to ceph-osd

2018-02-05 Thread Billy Olsen
Hi Giuseppe,

First and foremost, thanks for the patch in an effort to improve the
overall experience with the ceph-osd charm.

The ceph-osd charm is in the OpenStack project arena and as such is
governed by the rules which apply to OpenStack projects. For more
information on contributing, please refer to the documentation on how to
contribute to the OpenStack charms at
https://docs.openstack.org/charm-guide/latest/how-to-contribute.html. In
order to accept patches to the project, you'll need to make sure you
have signed the OpenStack Contributor License Agreement (documented in
https://wiki.openstack.org/wiki/How_To_Contribute).

With regard to the patch provided, the upstream packages have chosen to
alter the rules in lvm, which feels like a better fit for this. The charm
doesn't manage or own the 60-persistent-storage.rules file, and the change
may break if a package update modifies that file. Can you file a bug
against the ceph/lvm2 package in Ubuntu to address this fix instead?

Thanks,

Billy

On 02/02/2018 08:23 AM, Giuseppe Attardi wrote:
> According to this page:
>
>   https://patchwork.kernel.org/patch/9826353/
>
> there is a bug in ceph that affects the use of FibreChannel disks.
>
> The following patch to hooks/ceph_hooks.py fixes it in the ceph-osd
> charm:
>
> *** hooks/ceph_hooks.py   2018-02-02 16:15:40.304388602 +0100
> --- hooks/ceph_hooks.py~  2018-02-02 16:18:47.304401004 +0100
> ***************
> *** 369,378 ****
>
>  @hooks.hook('storage.real')
>  def prepare_disks_and_activate():
> -     # patch for missing dm devices
> -     # see: https://patchwork.kernel.org/patch/9826353/
> -     patch_persistent_storage_rules()
> - 
>  osd_journal = get_journal_devices()
>  check_overlap(osd_journal, set(get_devices()))
>  log("got journal devs: {}".format(osd_journal), level=DEBUG)
> --- 369,374 ----
> ***************
> *** 558,579 ****
>  log('Updating status.')
>
>
> - # patch for missing dm devices
> - from subprocess import check_call
> - 
> - 
> - CEPH_PERSISTENT_STORAGE_RULES = '/lib/udev/rules.d/60-persistent-storage.rules'
> - 
> - 
> - def patch_persistent_storage_rules():
> -     if os.path.isfile(CEPH_PERSISTENT_STORAGE_RULES):
> -         check_call(['sed', '-i', 's/KERNEL!="loop/KERNEL!="dm*|loop/',
> -                     CEPH_PERSISTENT_STORAGE_RULES])
> -         log('Patched %s' % CEPH_PERSISTENT_STORAGE_RULES)
> -     else:
> -         log('Missing %s' % CEPH_PERSISTENT_STORAGE_RULES)
> - 
> - 
>  if __name__ == '__main__':
>      try:
>          hooks.execute(sys.argv)
> --- 554,559 ----
>
> Regards
>
> — Beppe
>
>
>
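For readers following along, the substance of the patch is a one-line sed over the udev rules file: it widens the KERNEL!= exclusion so dm-* (device-mapper) devices are no longer skipped by the persistent-storage rules. A pure-string sketch of the same transform (the sample rule line is illustrative, not quoted from the real file):

```python
# The patch runs: sed -i 's/KERNEL!="loop/KERNEL!="dm*|loop/' on
# /lib/udev/rules.d/60-persistent-storage.rules. The equivalent string
# transform, shown on an illustrative rule line:
def widen_kernel_match(rules_text):
    # Add dm* to the KERNEL!= match list so device-mapper devices fall
    # through to the persistent-storage rules instead of being skipped.
    return rules_text.replace('KERNEL!="loop', 'KERNEL!="dm*|loop')

sample = 'KERNEL!="loop*|mmcblk*", GOTO="persistent_storage_end"'
print(widen_kernel_match(sample))
```

This is why Billy's suggestion to fix the rule in the owning package makes sense: the transform targets a file the charm does not own, so a package update would silently undo or conflict with it.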

