Re: master/CiaB: Wait for juju services to have open ports - Timeout when waiting for services

2017-01-05 Thread Mac Lin
Some updates: I tried to remove all the relations and re-add them, but the
status remains the same.

Then I tried destroy-machine, but when adding the machine back it reports:
ERROR machine is already provisioned.
I tried to recover it but failed, so I'm trying to get another install.
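
For anyone hitting the same error, the cleanup I attempted looked roughly
like this (a sketch for the 1.25 manual provider; the machine number,
service name and paths are from memory and may differ):

  # force-remove the machine record from the environment
  juju destroy-machine --force 1
  # then, on the target host, stop the stale agent and clear its state
  # so the machine can be enlisted again
  sudo service jujud-machine-1 stop
  sudo rm -rf /etc/init/jujud-machine-1.conf /var/lib/juju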

Revisiting the bad juju status, most of the services are missing relations.
I'm wondering if it's because the related service is not ready itself? E.g.
most of the services are waiting for the database (mongodb), but mongodb is
waiting for agent init to finish. And what does "the agent" mean? The charm?
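
For reference, the relations I removed and re-added for ceilometer were
along these lines (service names as in the status below):

  juju add-relation ceilometer rabbitmq-server   # messaging (amqp)
  juju add-relation ceilometer keystone          # identity
  juju add-relation ceilometer mongodb           # database (shared-db)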

On Wed, Jan 4, 2017 at 8:48 PM, Mac Lin  wrote:

> This is the content of juju-bad.log, the result of "juju status" on my server.
> Or please let me know what command I should give.
>
> It's difficult for me to try them out in ansible playbook. *sigh*
>
> environment: manual
> machines:
>   "0":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: juju.cord.lab
>     instance-id: 'manual:'
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>     state-server-member-status: has-vote
>   "1":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: keystone.cord.lab
>     instance-id: manual:keystone.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "2":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: percona-cluster.cord.lab
>     instance-id: manual:percona-cluster.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "3":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: nagios.cord.lab
>     instance-id: manual:nagios.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "4":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: neutron-api.cord.lab
>     instance-id: manual:neutron-api.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "5":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: nova-cloud-controller.cord.lab
>     instance-id: manual:nova-cloud-controller.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "6":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: openstack-dashboard.cord.lab
>     instance-id: manual:openstack-dashboard.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "7":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: rabbitmq-server.cord.lab
>     instance-id: manual:rabbitmq-server.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "8":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: mongodb.cord.lab
>     instance-id: manual:mongodb.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "9":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: ceilometer.cord.lab
>     instance-id: manual:ceilometer.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
>   "10":
>     agent-state: started
>     agent-state-info: (started)
>     agent-version: 1.25.9
>     dns-name: glance.cord.lab
>     instance-id: manual:glance.cord.lab
>     series: trusty
>     hardware: arch=amd64 cpu-cores=8 mem=24112M
> services:
>
>   ceilometer:
>     charm: cs:trusty/ceilometer-17
>     exposed: false
>     service-status:
>       current: blocked
>       message: 'Missing relations: messaging, identity, database'
>       since: 03 Jan 2017 19:39:55Z
>     relations:
>       amqp:
>       - rabbitmq-server
>       ceilometer-service:
>       - ceilometer-agent
>       cluster:
>       - ceilometer
>       identity-service:
>       - keystone
>       juju-info:
>       - nagios
>       nrpe-external-master:
>       - nrpe
>       shared-db:
>       - mongodb
>     units:
>       ceilometer/0:
>         workload-status:
>           current: blocked
>           message: 'Missing relations: messaging, identity, database'
>           since: 03 Jan 2017 19:39:55Z
>         agent-status:
>           current: executing
>           message: running install hook
>           since: 03 Jan 2017 14:53:31Z
>           version: 1.25.9
>         agent-state: started
>         agent-version: 1.25.9
>         machine: "9"
>         open-ports:
>         - 8777/tcp
>         public-address: ceilometer.cord.lab
>   ceilometer-agent:
>     charm: cs:trusty/ceilometer-agent-13
>     exposed: false
>     service-status: {}
>     relations:
>       ceilometer-service:
>       - ceilometer
>   glance:
>     charm: cs:trusty/glance-28
> 

Re: Opaque automatic hook retries from API

2017-01-05 Thread Casey Marshall
^^ s/immutability/idempotency

On Thu, Jan 5, 2017 at 12:39 PM, Casey Marshall <
casey.marsh...@canonical.com> wrote:

> I think the retry strategy is great -- it leverages the immutability we
> expect hooks to provide, to deliver a robust result over unreliable
> substrates -- and all substrates are unreliable where there's
> internetworking involved!
>
> [...]
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Opaque automatic hook retries from API

2017-01-05 Thread Casey Marshall
On Thu, Jan 5, 2017 at 3:33 AM, Adam Collard 
wrote:

> Hi,
>
> The automatic hook retries[0] that landed as part of 2.0 (are documented
> as) run indefinitely[1] - this causes problems for an API user:
>
> Imagine you are driving Juju using the API, and when you perform an
> operation (e.g. set the configuration of a service, or reboot the unit, or
> add a relation..) - you want to show the status of that operation.
>
> Prior to the automatic retries, you simply perform your operation, and
> watch the delta streams for the corresponding change to the unit - the
> success or otherwise of the operation is reflected in the unit
> agent-status/workload-status pair.
>
> Now, with retries, if you see a unit in the error state, you can't
> accurately reflect the status of the operation, since the unit will
> undoubtedly retry the hook again. Maybe it succeeds, maybe it fails again.
> How can one say after receiving the first delta of a unit error if the
> operation succeeded or failed?
>
> With no visibility up front on the retry strategy that Juju will perform
> (e.g. something representing the exponential backoff and a fixed number of
> retries before Juju admits defeat) it is impossible to say at any point in
> the delta stream what the result of a failed-at-least-once operation is.
>

I think the retry strategy is great -- it leverages the immutability we
expect hooks to provide, to deliver a robust result over unreliable
substrates -- and all substrates are unreliable where there's
internetworking involved!

However, I see your point about the retry strategy muddling status. I've
noticed this sometimes when watching openstack or k8s bundles "shake out"
the errors as they come up. I don't think this is always a charm quality
issue; maybe it's because we're trying to show two different things with
status?


What if Juju made a clearer distinction between result-state ("what I'm
doing most recently or last attempted to do") vs. goal-state ("what I'm
trying to get done") in the status? Would that help?


> Can retries be limited to a small number, with a backoff algorithm
> explicitly documented and stuck to by Juju, with the retry attempt number
> included in the delta stream?
>
> Thanks,
>
> Adam
>
> [0] https://jujucharms.com/docs/2.0/reference-release-notes
> [1] https://jujucharms.com/docs/2.0/models-config#retrying-failed-hooks
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Charm release fails with "cannot update base entity for..."

2017-01-05 Thread Merlijn Sebrechts
Filed a bug here: https://github.com/juju/charm-tools/issues/297
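
For anyone else hitting this: the merged metadata.yaml ends up listing the
series twice, roughly like this (an illustrative sketch):

  series:
  - trusty
  - trusty   # contributed again by a second layer; triggers the store error

De-duplicating to a single "trusty" entry lets `charm release` go through.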

2017-01-04 19:59 GMT+01:00 Merlijn Sebrechts :

> Ok, thanks!
>
>
> This might be a bug in `charm build`. It merges the "series" array of
> multiple layers' metadata.yaml.
>
> 2017-01-04 18:18 GMT+01:00 roger peppe :
>
>> OK, I've found the issue. Your metadata.yaml specifies "trusty" twice
>> in the "series" attribute.
>> Specifying it only once should allow you to release.
>>
>> We should produce a better error message for this case (or just
>> de-duplicate series).
>>
>>   cheers,
>> rog.
>>
>> On 3 January 2017 at 12:17, Merlijn Sebrechts
>>  wrote:
>> > Hi all
>> >
>> >
>> > When releasing my charm I get the following error:
>> >
>> > charm release cs:~tengu-team/jupiter-notebook-spark-0
>> > ERROR cannot release charm or bundle: cannot publish charm or bundle:
>> > cannot update base entity for "cs:~tengu-team/jupiter-notebook-spark-0":
>> > Field name duplication not allowed with modifiers
>> >
>> > A quick Google search shows that this is a MongoDB error. Is something
>> > wrong with the cs backend?
>> >
>> >
>> >
>> > Kind regards
>> > Merlijn
>> >
>>
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: problem when building a charm for giraph

2017-01-05 Thread Merlijn Sebrechts
Since this is a bigtop charm, the best idea might be to merge the code
into the bigtop repository, and have it promulgated together with the
other bigtop charms.

@Kevin, what do you advise?

2017-01-05 14:55 GMT+01:00 Rick Harding :

> On Thu, Jan 5, 2017 at 8:17 AM Panagiotis Liakos 
> wrote:
>
>>
>> Moreover, I have just released the latest version of my charm [2].
>
>
> Congrats!
>
>
>> 1) Should I follow some procedure to find someone to review the charm?
>>
>
> Definitely, you can submit it to the review queue [1] and there's a
> document that goes through some things to look out for going through the
> process [2].
>
>
>> 2) When I release my charm I get the following warning: bugs-url is
>> not set.  See set command. Am I supposed to set such a bugs-url? Can I
>> use https://github.com/panagiotisl/bigtop/issues ?
>>
>
> Exactly, wherever you want to track the bugs for the charm is up to you.
> Use the charm command to set the bugs-url:
>
> charm set cs:~panagiotisl/giraph bugs-url
> https://github.com/panagiotisl/bigtop/issues
>
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: problem when building a charm for giraph

2017-01-05 Thread Rick Harding
On Thu, Jan 5, 2017 at 8:17 AM Panagiotis Liakos  wrote:

>
> Moreover, I have just released the latest version of my charm [2].


Congrats!


> 1) Should I follow some procedure to find someone to review the charm?
>

Definitely, you can submit it to the review queue [1] and there's a
document that goes through some things to look out for going through the
process [2].


> 2) When I release my charm I get the following warning: bugs-url is
> not set.  See set command. Am I supposed to set such a bugs-url? Can I
> use https://github.com/panagiotisl/bigtop/issues ?
>

Exactly, wherever you want to track the bugs for the charm is up to you.
Use the charm command to set the bugs-url:

charm set cs:~panagiotisl/giraph bugs-url
https://github.com/panagiotisl/bigtop/issues
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Layers and duplicated config options

2017-01-05 Thread Marco Ceppi
Are these configuration options meant to hold different values? If not,
charm-build will do the de-duping. I'm dubious about adding things like
nagios configuration options to things like the kubernetes charm. This
opens the door to config options for zabbix or any other monitoring
solution, crowding the charm configuration.
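
For example, if two layers ship an identical option definition in their
config.yaml, the built charm keeps a single copy (a rough sketch; the
defaults and descriptions here are illustrative):

  options:
    nagios_context:
      type: string
      default: "juju"
      description: Prefix used for Nagios check names.
    nagios_servicegroups:
      type: string
      default: ""
      description: Comma-separated list of Nagios servicegroups.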

Surely, nagios_context and nagios_servicegroups are actually configuration
options on nrpe-external-master and not on each workload?

Marco

On Thu, Jan 5, 2017 at 8:42 AM Junien Fridrick <
junien.fridr...@canonical.com> wrote:

> Hi,
>
> I need to add the same config options to 2 different layers that are
> used in the same charm. How does that work? Should I add them to the 2
> layers and let "charm build" deal with it? Or is there another way?
>
> My use case is to add the "nagios_context" and "nagios_servicegroups"
> config options to both layer-docker and kubernetes-master /
> kubernetes-worker. These options are used by the nrpe-external-master
> relation, which I'm trying to add to the kubernetes-master and
> kubernetes-worker charms.
>
> Thanks!
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: problem when building a charm for giraph

2017-01-05 Thread Mark Shuttleworth
On 05/01/17 15:16, Panagiotis Liakos wrote:

> Can I use https://github.com/panagiotisl/bigtop/issues ?

Your charm, your choice :)

Mark

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Layers and duplicated config options

2017-01-05 Thread Junien Fridrick
Hi,

I need to add the same config options to 2 different layers that are
used in the same charm. How does that work? Should I add them to the 2
layers and let "charm build" deal with it? Or is there another way?

My use case is to add the "nagios_context" and "nagios_servicegroups"
config options to both layer-docker and kubernetes-master /
kubernetes-worker. These options are used by the nrpe-external-master
relation, which I'm trying to add to the kubernetes-master and
kubernetes-worker charms.

Thanks!

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: problem when building a charm for giraph

2017-01-05 Thread Panagiotis Liakos
Thanks a lot once more, Konstantinos!

I followed your advice and put my environment variables in a way
similar to what the mahout charm does. I do not think these changes
should be made to the hadoop charm, as what I am doing in this charm
is putting the necessary giraph .jar files on the hadoop classpath.
However, I also set two hadoop properties when I run my smoke-test,
which might be of interest to the developers of the hadoop charm. See
the end of this line [1].
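
Concretely, the change boils down to appending something like this to
/etc/environment (a simplified sketch; the paths are illustrative, not
necessarily what the bigtop packages install):

  GIRAPH_HOME="/usr/lib/giraph"
  HADOOP_CLASSPATH="/usr/lib/giraph/*:/usr/lib/giraph/lib/*"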

Moreover, I have just released the latest version of my charm [2]. And
I have two more questions :)

1) Should I follow some procedure to find someone to review the charm?
2) When I release my charm I get the following warning: bugs-url is
not set.  See set command. Am I supposed to set such a bugs-url? Can I
use https://github.com/panagiotisl/bigtop/issues ?

Thanks,
Panagiotis


[1]
https://github.com/panagiotisl/bigtop/blob/master/bigtop-packages/src/charm/giraph/layer-giraph/actions/smoke-test#L34
[2]
https://jujucharms.com/u/panagiotisl/giraph/

2016-11-25 12:04 GMT+02:00 Konstantinos Tsakalozos
:
> Hi Panagiotis,
>
> Nice to see you are making progress.
>
> To your questions:
> 1. We usually put environment variables inside /etc/environment; giraph can
> update that file. Have a look at how we do that for Mahout [0]. Do you think
> the two variables you mention should have been there in the first place
> (possibly set by the hadoop charm)?
> 2. The set of charms you are working with deploy all files using the .deb
> packages built by the apache bigtop project. To see exactly what is going to
> be deployed you would need to break open the deb packages. This might help
> you [1].
> 3. You are right about the charm proof command. I just opened an issue
> https://github.com/juju/docs/issues/1545. Thank you for reporting that.
>
> Thanks,
> Konstantinos
>
>
> [0]
> https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/mahout/layer-mahout/reactive/mahout.py#L33
> [1]
> http://askubuntu.com/questions/30482/is-there-an-apt-command-to-download-a-deb-file-from-the-repositories-to-the-curr

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Opaque automatic hook retries from API

2017-01-05 Thread Adam Collard
Hi,

The automatic hook retries[0] that landed as part of 2.0 (are documented
as) run indefinitely[1] - this causes problems for an API user:

Imagine you are driving Juju using the API, and when you perform an
operation (e.g. set the configuration of a service, or reboot the unit, or
add a relation..) - you want to show the status of that operation.

Prior to the automatic retries, you simply perform your operation, and
watch the delta streams for the corresponding change to the unit - the
success or otherwise of the operation is reflected in the unit
agent-status/workload-status pair.

Now, with retries, if you see a unit in the error state, you can't
accurately reflect the status of the operation, since the unit will
undoubtedly retry the hook again. Maybe it succeeds, maybe it fails again.
How can one say after receiving the first delta of a unit error if the
operation succeeded or failed?

With no visibility up front on the retry strategy that Juju will perform
(e.g. something representing the exponential backoff and a fixed number of
retries before Juju admits defeat) it is impossible to say at any point in
the delta stream what the result of a failed-at-least-once operation is.

Can retries be limited to a small number, with a backoff algorithm
explicitly documented and stuck to by Juju, with the retry attempt number
included in the delta stream?
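
In the meantime, the only control I can find is disabling retries wholesale
via the model config key documented in [1], which gives up the robustness
benefits entirely:

  juju model-config automatically-retry-hooks=false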

Thanks,

Adam

[0] https://jujucharms.com/docs/2.0/reference-release-notes
[1] https://jujucharms.com/docs/2.0/models-config#retrying-failed-hooks
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev