Re: Feature Request: -about-to-depart hook

2015-02-03 Thread Stuart Bishop
On 3 February 2015 at 21:23, Stuart Bishop stuart.bis...@canonical.com wrote:
 On 28 January 2015 at 21:03, Mario Splivalo
 mario.spliv...@canonical.com wrote:

 I'm not sure if this is possible... Once the unit has left the relation,
 juju is no longer aware of it, so there is no way of knowing whether
 -broken completed successfully. Or am I wrong here?

 Hooks have no way of telling, but juju could tell, in the same way that
 you can by running 'juju status'. If the unit is still running, it might
 still run the -broken hook. Once the unit is destroyed, we know it will
 never run the -broken hook.

While typing up https://bugs.launchpad.net/juju-core/+bug/1417874 I
realized that your proposed solution of a pre-departure hook is the
only one that can work. Once -departed hooks start firing, both the
doomed unit and the leader have already lost the access needed to
decommission the departing node.

I'm going to need to tear out the decommissioning code from my charm
(it started failing my tests once I tightened security) and document
the manual decommissioning process, unless someone can come up with a
better way forward with current juju.
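
For reference, the manual process is roughly the following (a sketch with
illustrative unit names; the exact steps depend on the charm's auth and
firewall setup):

   # Sketch only: decommission the node by hand, then remove the unit.
   juju ssh cassandra/1 'nodetool decommission'   # streams its data to the remaining nodes
   juju remove-unit cassandra/1                   # then let juju tear the unit down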

-- 
Stuart Bishop stuart.bis...@canonical.com

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Deploying Spark charm to data-analytics-with-sql-like loses HDFS configuration

2015-02-03 Thread Ken Williams
Hi,

  I'm able to deploy the 'data-analytics-with-sql-like' bundle successfully.
  When it's deployed I can run the hdfs and hive smoke tests described
  here (
https://api.jujucharms.com/v4/bundle/data-analytics-with-sql-like-5/archive/README.md)
successfully.

  I then need to deploy the Spark charm and add relations connecting
  spark-master to yarn-hdfs-master. I do this by deploying spark-master
  to the same machine where yarn-hdfs-master is located (see my
  previous email).

  However, once I have deployed the spark charm and added relations
  from yarn-hdfs-master to spark-master, I can no longer connect
  to hdfs.

root@adminuser-VirtualBox:~# juju ssh yarn-hdfs-master/0
ubuntu@ip-172-31-27-83:~$ sudo su $HDFS_USER
hdfs@ip-172-31-27-83:/home/ubuntu$
hdfs@ip-172-31-27-83:/home/ubuntu$ hdfs dfs -ls /
ls: Incomplete HDFS URI, no host: hdfs://TODO-NAMENODE-HOSTNAME:PORT
hdfs@ip-172-31-27-83:/home/ubuntu$

It seems that adding the relations between yarn-hdfs-master and
spark overwrites the HDFS configuration, so it can no longer connect.
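
One way to confirm this (a sketch; the config path assumes a stock
Hadoop/HDP layout and may differ for the hdp-hadoop charm):

   juju ssh yarn-hdfs-master/0 'grep -A1 fs.default /etc/hadoop/conf/core-site.xml'
   # If this still shows hdfs://TODO-NAMENODE-HOSTNAME:PORT, the charm
   # re-rendered core-site.xml without the namenode address when the
   # spark relations were added.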

This is how I deploy the spark charm and the relations I add:
root@adminuser-VirtualBox:~# cd ~
root@adminuser-VirtualBox:~# mkdir charms
root@adminuser-VirtualBox:~# mkdir charms/trusty
root@adminuser-VirtualBox:~# cd charms/trusty
root@adminuser-VirtualBox:~# git clone
https://github.com/Archethought/spark-charm spark

# Deploy spark to specific node(s) in the cluster;
# in particular, deploy spark-master to the same node as yarn-hdfs-master.
   juju status   # which machine is yarn-hdfs-master on? e.g. say machine: 4
   # deploy spark-master to the same node as yarn-hdfs-master
   juju deploy --repository=charms local:trusty/spark --to 4 spark-master
   juju deploy --repository=charms local:trusty/spark spark-slave
   juju add-relation spark-master:master spark-slave:slave

   # add relations - connect YARN to Spark
   juju add-relation yarn-hdfs-master:resourcemanager spark-master
   juju add-relation yarn-hdfs-master:namenode spark-master
   juju status


Attached is my 'juju status'.

Any help is very appreciated,

Ken
root@adminuser-VirtualBox:~# juju status
environment: amazon
machines:
  0:
    agent-state: started
    agent-version: 1.21.1
    dns-name: 54.152.135.200
    instance-id: i-a0b24a5a
    instance-state: running
    series: trusty
    hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
    state-server-member-status: has-vote
  1:
    agent-state: started
    agent-version: 1.21.1
    dns-name: 54.164.136.34
    instance-id: i-4026e7af
    instance-state: running
    series: trusty
    hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
  2:
    agent-state: started
    agent-version: 1.21.1
    dns-name: 54.152.183.122
    instance-id: i-d81b7429
    instance-state: running
    series: trusty
    hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
  3:
    agent-state: started
    agent-version: 1.21.1
    dns-name: 54.152.37.83
    instance-id: i-68b74f92
    instance-state: running
    series: trusty
    hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
  4:
    agent-state: started
    agent-version: 1.21.1
    dns-name: 54.152.140.119
    instance-id: i-532aebbc
    instance-state: running
    series: trusty
    hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
  5:
    agent-state: started
    agent-version: 1.21.1
    dns-name: 54.152.233.201
    instance-id: i-ec3c521d
    instance-state: running
    series: trusty
    hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M
services:
  compute-node:
    charm: cs:trusty/hdp-hadoop-4
    exposed: false
    relations:
      datanode:
      - yarn-hdfs-master
      nodemanager:
      - yarn-hdfs-master
    units:
      compute-node/0:
        agent-state: started
        agent-version: 1.21.1
        machine: 1
        open-ports:
        - 8010/tcp
        - 8025/tcp
        - 8030/tcp
        - 8050/tcp
        - 8088/tcp
        - 8141/tcp
        - 8480/tcp
        - 10020/tcp
        - 19888/tcp
        - 50010/tcp
        - 50075/tcp
        public-address: 54.164.136.34
  hdphive:
    charm: cs:trusty/hdp-hive-2
    exposed: false
    relations:
      db:
      - mysql
      namenode:
      - yarn-hdfs-master
      resourcemanager:
      - yarn-hdfs-master
    units:
      hdphive/0:
        agent-state: started
        agent-version: 1.21.1
        machine: 2
        open-ports:
        - 1/tcp
        public-address: 54.152.183.122
  juju-gui:
    charm: cs:trusty/juju-gui-17
    exposed: true
    units:
      juju-gui/0:
        agent-state: started
        agent-version: 1.21.1
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 54.152.135.200
  mysql:
    charm: cs:trusty/mysql-4
    exposed: false
    relations:
      cluster:
      - mysql
      db:
      - hdphive
    units:
   

Re: Feature Request: -about-to-depart hook

2015-02-03 Thread Stuart Bishop
On 28 January 2015 at 21:03, Mario Splivalo
mario.spliv...@canonical.com wrote:
 On 01/27/2015 09:52 AM, Stuart Bishop wrote:
 Ignoring the, most likely, wrong nomenclature of the proposed hook, what
 are your opinions on the matter?

 I've been working on similar issues.

 When the peer relation-departed hook is fired, the unit running it
 knows that $REMOTE_UNIT is leaving the cluster. $REMOTE_UNIT may not
 be alive - we may be removing a failed unit from the service.
 $REMOTE_UNIT may be alive but uncontactable - some form of network
 partition has occurred.

 $REMOTE_UNIT doesn't have to be the one leaving the cluster. If I have a
 3-unit cluster (mongodb/0, mongodb/1, mongodb/2) and I 'juju remove-unit
 mongodb/1', the relation-departed hook will fire on all three units.
 Moreover, it will fire twice on mongodb/1. So, from mongodb/2's
 perspective, $REMOTE_UNIT is indeed pointing to mongodb/1, which is, in
 this case, leaving the relation. But if we observe the same scenario on
 mongodb/1, $REMOTE_UNIT there will point to mongodb/0. But that unit is
 NOT leaving the cluster. There is no way to know whether the hook is
 running on the unit that's leaving or on a unit that's staying.

I see, and have also struck the same problem with the Cassandra charm.
It is impossible to have juju decommission a node.

My relation-departed hook must reset the firewall rules, since the
replication connection is unauthenticated and we cannot leave it open.
This means I cannot decommission the departing unit in the
relation-broken hook, as the remaining nodes refuse to talk to it and
it has no way of redistributing its data.
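
The rule reset is roughly this kind of thing (a sketch only, with ufw and
Cassandra's inter-node port 7000 assumed for illustration, not the charm's
actual hook code):

   #!/bin/bash
   # peer relation-departed (sketch): stop talking to the departing peer.
   departed_ip=$(relation-get private-address "$JUJU_REMOTE_UNIT")
   ufw delete allow proto tcp from "$departed_ip" to any port 7000 || true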

And I can't decommission the departing node in the relation-departed
hook, because as you correctly say, it is impossible to know which
unit is actually leaving the cluster and which are remaining.


 But if that takes place in relation-departed, there is no way of knowing
 whether you need to do a stepdown, because you don't know if you're the
 unit being removed or the remote unit is. Therefore the logic for removing
 nodes had to go into relation-broken. But, as you explained, if the unit
 goes down catastrophically, relation-broken will never be executed and I
 have a cluster that needs manual intervention to clean up.

Leadership might provide a workaround, as the service is guaranteed
to have exactly one leader. If a unit is running the relation-departed
hook and it is the leader, it knows it is not the one leaving the
cluster (or it would no longer be leader) and it can perform the
decommissioning.
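
In hook terms that workaround looks something like this (a sketch; it
relies on the is-leader tool from the leadership work, which is not in a
released juju yet, and the decommission command is Cassandra-specific and
illustrative):

   #!/bin/bash
   # peer relation-departed (sketch): only the leader acts, because the
   # leader cannot be the unit that is departing.
   if [ "$(is-leader)" = "True" ]; then
       departed_ip=$(relation-get private-address "$JUJU_REMOTE_UNIT")
       nodetool -h "$departed_ip" decommission
   fi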

But that is a messy workaround. Given that we have both struck nearly
exactly the same problem, I'd surmise the same issue will occur in
pretty much all similar systems (Swift, Redis, MySQL, ...) and we need
a better solution.

I've also heard rumours of a goal state, which may provide units
enough context to know what is happening. I don't know the details of
this though.


 I'm not sure if this is possible... Once the unit has left the relation,
 juju is no longer aware of it, so there is no way of knowing whether
 -broken completed successfully. Or am I wrong here?

Hooks have no way of telling, but juju could tell, in the same way that you
can by running 'juju status'. If the unit is still running, it might still
run the -broken hook. Once the unit is destroyed, we know it will never run
the -broken hook.
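
From the command line that check is roughly (a sketch; unit names reuse the
mongodb example above):

   # If the departing unit still shows up here it may yet run its -broken
   # hook; once it disappears from status entirely, it never will.
   juju status --format=yaml mongodb | grep -A2 'mongodb/1:'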


-- 
Stuart Bishop stuart.bis...@canonical.com

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: commands

2015-02-03 Thread Nate Finch
I'm so glad these are getting auto-generated so they stay in sync.

They should be consistent in style; we should choose capitalized or not and
stick with it. I don't really care which.
On Feb 3, 2015 7:08 AM, Nick Veitch nick.vei...@canonical.com wrote:

 Hello,

 I have just finished updating the all-new exciting and more comprehensible
 commands reference page in the docs for 1.21.1:

 https://juju.ubuntu.com/docs/commands.html

 The more observant of you will notice that the content has almost all been
 autogenerated directly from juju help. This is exciting (for me
 anyway) because I have long yearned for these to sync better. Some
 commands are better explained than others, and some could do with some
 examples, but I can contribute those.
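
 Roughly, the generation amounts to something like this (a sketch, not
 the actual tooling used for the docs):

     for cmd in $(juju help commands | awk '{print $1}'); do
         echo "== juju $cmd =="
         juju help "$cmd"
         echo
     done > commands.txt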

 Before I do that, I did have a question - there are some consistent
 inconsistencies in the style of the output; for example, there is
 'purpose:' and 'usage:', but 'Example:' is capitalised. Is there some
 compelling reason for this wrongness which people have passionate
 opinions about, or can I fix it (i.e. Usage:, etc.)?


 --
 Nick Veitch
 nick.vei...@canonical.com

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: simple stream for runabove

2015-02-03 Thread Patrick Hetu
 Hello Patrick,

Hi Thomas,



 Can you give us the result of the following command:
 juju metadata validate-tools


Tools are not a problem, images are:

u-ph:~$ juju metadata validate-images -e run
ERROR index file has no data for cloud {BHS-1 https://auth.runabove.io/v2.0}
not found
Resolve Metadata:
  source: default cloud images
  signed: false
  indexURL: http://cloud-images.ubuntu.com/releases/streams/v1/index.json
ERROR subprocess encountered error code 1

 u-ph:~$ juju metadata validate-tools -e run
Matching Tools Versions:
- 1.21.1-trusty-amd64
- 1.21.1-trusty-arm64
- 1.21.1-trusty-armhf
- 1.21.1-trusty-i386
- 1.21.1-trusty-ppc64el
Resolve Metadata:
  source: default simplestreams
  signed: true
  indexURL: https://streams.canonical.com/juju/tools/streams/v1/index2.sjson

In my environments.yaml, in the run section, I have:

    image-metadata-url: https://storage.bhs-1.runabove.io/v1/AUTH_abcde.../ubuntu_stream/

but it looks like juju metadata validate-images ignores it.
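
A quick way to see what it should be reading (a sketch; AUTH_abcde... is the
same placeholder as above, so substitute the real container URL):

    curl -sf https://storage.bhs-1.runabove.io/v1/AUTH_abcde.../ubuntu_stream/streams/v1/index.json
    curl -sf https://storage.bhs-1.runabove.io/v1/AUTH_abcde.../ubuntu_stream/streams/v1/index2.sjson
    # The log quoted below shows juju looking for index2.sjson under this
    # URL, so it is worth checking both the signed and unsigned index names.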


 Did you check/follow this :
 https://juju.ubuntu.com/docs/howto-privatecloud.html


Yep, I've followed this and copied the JSON structure from
http://cloud-images.ubuntu.com/releases/streams/v1/



 Best regards,

 On 03/02/2015 04:50, Patrick Hetu wrote:
  Hi there,
 
  I'm trying to get Runabove's OpenStack cloud to work with Juju, but
  I can't get the simple stream to work. simplestreams.go seems to
  just skip to index2.sjson because of an error, but it didn't report
  any:
 
  2015-02-03 02:38:34 INFO juju.cmd supercommand.go:37 running juju [1.21.1-utopic-amd64 gc]
  2015-02-03 02:38:34 INFO juju.provider.openstack provider.go:248 opening environment run
  2015-02-03 02:38:35 DEBUG juju.environs.configstore disk.go:336 writing jenv file to /home/avoine/.juju/environments/run.jenv
  2015-02-03 02:38:35 INFO juju.network network.go:106 setting prefer-ipv6 to false
  2015-02-03 02:38:35 DEBUG juju.environs imagemetadata.go:105 trying datasource keystone catalog
  2015-02-03 02:38:35 DEBUG juju.environs.simplestreams simplestreams.go:374 searching for metadata in datasource image-metadata-url
  2015-02-03 02:38:35 INFO juju.utils http.go:59 hostname SSL verification enabled
  2015-02-03 02:38:35 DEBUG juju.environs.simplestreams simplestreams.go:465 fetchData failed for https://storage.bhs-1.runabove.io/v1/AUTH_XXX/ubuntu_stream/streams/v1/index2.sjson: cannot find URL https://storage.bhs-1.runabove.io/v1/AUTH_XXX/ubuntu_stream/streams/v1/index2.sjson not found
  [...]
 
  This is what I have in my index.json
 
  {
    "index": {
      "com.ubuntu.cloud:released:runabove": {
        "updated": "Mon, 02 Feb 2015 14:14:09 +0000",
        "clouds": [
          { "region": "BHS-1", "endpoint": "https://auth.runabove.io/v2.0" }
        ],
        "format": "products:1.0",
        "datatype": "image-ids",
        "cloudname": "runabove",
        "products": [ "com.ubuntu.cloud:server:12.04:amd64" ],
        "path": "streams/v1/com.ubuntu.cloud:released:runabove.json"
      }
    }
  }
 
  and in com.ubuntu.cloud:released:runabove.json:
 
  {
    "updated": "Thu, 06 Nov 2014 13:28:28 +0000",
    "datatype": "image-ids",
    "content_id": "com.ubuntu.cloud:released:runabove",
    "products": {
      "com.ubuntu.cloud:server:12.04:amd64": {
        "release": "precise",
        "version": "12.04",
        "arch": "amd64",
        "versions": {
          "20141015.1": {
            "items": {
              "BHS-1": {
                "virt": "kvm",
                "crsn": "BHS-1",
                "root_size": "8GB",
                "id": "23b30a81-1b0f-45d3-8dc1-eed72091d020"
              }
            },
            "pubname": "Ubuntu Server 12.04 (amd64 20141015.1) - Image",
            "label": "release"
          }
        }
      }
    },
    "format": "products:1.0",
    "_aliases": {
      "crsn": {
        "BHS-1": {
          "region": "BHS-1",
          "endpoint": "https://auth.runabove.io/v2.0"
        }
      }
    }
  }
 
  Tell me if you need more info. Also, should I file a bug about the fact
  that runabove is not in the official stream?
 
  Thanks,
 
  Patrick
 
 

 - --
 Best Regards,
 Nicolas Thomas  -  Solution Architect   -   Canonical
 http://insights.ubuntu.com/?p=889
 GPG FPR: D592 4185 F099 9031 6590 6292 492F C740 F03A 7EB9
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU0HntAAoJEEkvx0DwOn65JBAH/2hwa14nFsZPj19EU7LLaGXa
 3lqn/YKFYnLw6+JnS+oDsyOtcK8xz124g/Hg1yIzhothY1raudivcBphicqrE3+t
 dDnrQz/VKzZBtdlkOSUU9Q318sCkzV4jKak3VMNFjGoKj1d97dLhjcxLVR2+Moqs
 BnSkrmb9i/e2mulLzk3L/dnMPsCkV1P/mDMNgsW8qH/1YA0DqPiS0ShB8cvhzTuE
 H8Y++Pj6q9CBCBwky/ktS65pHWU+Pn53UDHKvfudNZz9VAAuHiY5UzAVhViTqXX8
 HXvgaIfePHXcCF8PItl03N9mOyABanUf6nYZC6jOtJ8A0Gnc/qoKMTQRIVKM18w=
 =zMel
 -END PGP SIGNATURE-

 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: django-python upgrade questions

2015-02-03 Thread Patrick Hetu
Hi Sheila,

Thank you for all your great ideas. For some background, check out a
discussion about the future of the charm in this thread:

  http://comments.gmane.org/gmane.linux.ubuntu.juju.user/1649

I'm working on rewriting the charm using Ansible, so I don't think I'll add
new features to the current pure-Python charm.
But all your comments are really welcome to help build the new charm.

2015-01-09 10:29 GMT-05:00 sheila miguez she...@pobox.com:

 I need to have upgrade functionality for this charm. I'm not experienced
 with writing hooks, so I want to check my assumptions and ask questions.

 Here begins the wall of text.

 During install, configure_and_install[1] is called in the install hook to
 install Django and, optionally, South.

 configure_and_install is handling concerns that I think would be better to
 split out.

 Here is the current logic:
   * if the user specifies a ppa, it will add the ppa
   * if the user specifies a key, it will add the key
   * if the method is somehow called with an empty string, it bails
   * otherwise it defaults to pip

 I think it would be better to break this up.

 Adding apt repositories and keys should be done separately. Config items
 could be added for a ppa list and a key list. Install could check for those
 and add them regardless of any django or south concerns.


I agree with you. The idea is to create an aptkit Ansible role that can be
reused as many times as you need in the charm's playbook; see:


http://bazaar.launchpad.net/~patrick-hetu/charms/precise/python-django/ansible_reboot/view/head:/playbooks/roles/aptkit/defaults/main.yml

or you could use plain Ansible modules in the pre_tasks section:

  http://docs.ansible.com/apt_repository_module.html



 Django and South versions/distro/whatever would proceed separately.


The idea here was to create a django-app Ansible role that handles its
dependencies independently of the other apps:


http://bazaar.launchpad.net/~patrick-hetu/charms/precise/python-django/ansible_reboot/view/head:/playbooks/roles/django-app/tasks/main.yml


 This makes upgrade simpler, I think. Upgrade would have similar steps to
 install, but I have a question about idempotence when adding apt
 repositories and keys. Are those idempotent? If not, then refactoring them
 out of that method makes it easier to idempotently handle Django and South
 dependencies.


They should be idempotent, because apt-add-repository, pip and apt-get are
idempotent, and in the pure-Python charm they are re-run every time the
configuration changes.
So changing the PPA should upgrade the packages.
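
For example, the pattern is essentially this (a sketch; the PPA name and
packages are illustrative, not the charm's defaults):

   PPA="ppa:some-team/some-archive"          # illustrative config value
   if ! grep -Rq "${PPA#ppa:}" /etc/apt/sources.list.d/; then
       add-apt-repository --yes "$PPA"
       apt-get update -qq
   fi
   # installs or upgrades as needed; a no-op if everything is already current
   apt-get install --yes python-django python-south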



 Now on to general plans for upgrade hook. It needs these to be added, but
 I want to know if my assumptions are mistaken:
   * ability to pull new src
   * ability to upgrade django and south
   * ability re-inject settings


I think those actions could be done via configuration changes rather than
necessarily via an explicit upgrade, but drawing the line is tricky.
I'm trying to build the new charm in a way that makes it easier to integrate
into a continuous integration workflow, but I'm really not sure yet how
things would work.
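
One simple way to keep that line manageable (a sketch, not this charm's
actual hooks) is to make upgrade-charm delegate to the config-changed logic:

   #!/bin/bash
   # hooks/upgrade-charm (sketch): reuse the config-changed logic so upgrades
   # and configuration changes go through the same code path.
   exec "$CHARM_DIR/hooks/config-changed"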


 And one general observation I have -- if I could remove some of the
 options this charm allows, it would be easier to deal with. e.g. not give
 many options for how packages are installed. Force people to do it only one
 way. Pick a recommended practice and stick with it. I recommend taking a
 look at what audryr and pydanny suggest, but ultimately, whatever devops
 people decide to want, I will roll with it. but pick one way.


 I don't think there is a clear recommended practice that I can stick with.


 To sum up, if you reached this far, I'd really like for my assumptions to
 be checked and corrected. Help!



 Ps. as a side note, you can see some of the tiny incremental changes I'm
 thinking about in my branch[2]. I need to have the ability to install a
 project from a tarball due to lack of access. I'm not sure I will
 ultimately request any merge for tarball functionality as it is really
 janky. but it's what I have to work with based on what we've been doing at
 work.


Yeah, I can really see how this feature would be useful. Also, Michael is
working on something similar:

  https://github.com/absoludity/charm-ansible-roles/tree/master/wsgi-app

Thanks again, and sorry to have taken so long to answer.

Patrick
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju