Re: [ansible-project] Apt behaviour

2022-03-06 Thread Igor Cicimov
Of course it has everything to do with the module, what else? The 2.4.4 
does the correct thing and takes the list as it is: a list. The module 
documentation also says it needs to be a list.

However, 2.5+ takes that list as a string for some reason and wraps it in a 
list, ending up with a list inside a list. That can't be right.
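
For anyone hitting the same thing, a possible workaround (just a sketch, 
assuming the variable really does arrive as a Python/JSON-style string) is to 
coerce it back into a list before handing it to apt:

- name: install additional fonts
  apt:
    # if the variable arrives as a string representation of a list, parse it
    # back into a real list; if it is already a list, pass it through as-is
    name: "{{ fonts_packages | from_yaml if fonts_packages is string else fonts_packages }}"
    state: present

but the real fix is to find out where the variable gets turned into a string 
in the first place.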



On Monday, March 7, 2022 at 5:08:08 PM UTC+11 Felix Fontein wrote:

> hi,
>
> > Hi all,
> > 
> > Anyone knows why is apt behaving differently here? I have this var
>
> I don't think this is related to the apt module, but to how the
> variable ends up being loaded. It seems to end up as a string, but not
> as a string in JSON format (which would be converted to a list), but in
> Python format.
>
> Without knowing how exactly you end up with the variable in this format, 
> it's hard to say more.
>
> (Also please note that both Ansible 2.4.x and 2.5.x are completely
> outdated and End of Life.)
>
> Cheers,
> Felix
>
>
>
> > 
> > fonts_packages:
> > - ttf-wqy-zenhei
> > - fonts-takao-mincho
> > - fonts-indic
> > - ttf-wqy-microhei
> > 
> > and simple task:
> > 
> > - name: install additional fonts
> > apt:
> > name: "{{ fonts_packages }}"
> > state: present
> > 
> > passing the list to apt.
>
>


[ansible-project] Apt behaviour

2022-03-06 Thread Igor Cicimov
Hi all,

Does anyone know why apt is behaving differently here? I have this var:

fonts_packages:
  - ttf-wqy-zenhei
  - fonts-takao-mincho
  - fonts-indic
  - ttf-wqy-microhei

and simple task:

- name: install additional fonts
  apt:
    name: "{{ fonts_packages }}"
    state: present

passing the list to apt.

That works in Ansible 2.4.4 as expected:

"invocation": {
"module_args": {
 
"name": [
"ttf-wqy-zenhei", 
"fonts-takao-mincho", 
"fonts-indic", 
"ttf-wqy-microhei"
], 
"only_upgrade": false, 
"package": [
"ttf-wqy-zenhei", 
"fonts-takao-mincho", 
"fonts-indic", 
"ttf-wqy-microhei"
], 
...
 
But in 2.5+ it fails because:

fatal: [hostname]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {

"name": "['ttf-wqy-zenhei', 'fonts-takao-mincho', 
'fonts-indic', 'ttf-wqy-microhei']",
"only_upgrade": false,
"package": [
"['ttf-wqy-zenhei'",
" 'fonts-takao-mincho'",
" 'fonts-indic'",
" 'ttf-wqy-microhei']"
],
...
},
"msg": "No package(s) matching '['ttf-wqy-zenhei'' available"
}

I'm passing a list to the apt "name" parameter as per the documentation.

Thanks




Re: [ansible-project] Random Ansible failures due to tmp files not found

2019-03-25 Thread Igor Cicimov
FWIW found the reference: 

https://github.com/dw/mitogen/issues/301#issuecomment-404620821
https://github.com/ansible/ansible/issues/31617

On Friday, March 8, 2019 at 9:46:14 AM UTC+11, Igor Cicimov wrote:
>
>
>
> On Friday, March 8, 2019 at 4:12:37 AM UTC+11, Kai Stian Olstad wrote:
>>
>> On 07.03.2019 01:34, Igor Cicimov wrote: 
>> > Anyone else seeing random playbook execution failuers like this: 
>> > 
>> > Source 
>> /home/user/.ansible/tmp/ansible-tmp-1551917585.84-139567381415844/source 
>> > not found"} 
>>
>> No, you must have some issues with your setup. 
>>
>
> Maybe but highly unlikely. This happened during provisioning of 20 EC2 
> instances with same ansible config and same repository and 19 were 
> successful but 1 failed with the error. All instances autoprovision them 
> self locally and the one that failed has hundreds of tasks successfully 
> executed before the failed one.
>
> I remember seeing an issue where the OP found out that the:
>
> remote_tmp = $HOME/.ansible/tmp
>
> settings works unreliably and ansible looses the tmp path being unable to 
> resolve the $HOME env var. I tried googling it up again but can not find it 
> now :-/
>
>
>>
>> > Any idea what is causing them and how to fix it? 
>>
>> Running with - might give you some more information. 
>>
>
> That's hard since replaying the whole playbook several times after the 
> failure shows no issues at all.
>
>
>>
>> -- 
>> Kai Stian Olstad 
>>
>



Re: [ansible-project] Random Ansible failures due to tmp files not found

2019-03-07 Thread Igor Cicimov


On Friday, March 8, 2019 at 4:12:37 AM UTC+11, Kai Stian Olstad wrote:
>
> On 07.03.2019 01:34, Igor Cicimov wrote: 
> > Anyone else seeing random playbook execution failuers like this: 
> > 
> > Source 
> /home/user/.ansible/tmp/ansible-tmp-1551917585.84-139567381415844/source 
> > not found"} 
>
> No, you must have some issues with your setup. 
>

Maybe, but highly unlikely. This happened during provisioning of 20 EC2 
instances with the same Ansible config and the same repository: 19 were 
successful but 1 failed with this error. All instances autoprovision 
themselves locally, and the one that failed had hundreds of tasks 
successfully executed before the failing one.

I remember seeing an issue where the OP found out that the:

remote_tmp = $HOME/.ansible/tmp

setting works unreliably and Ansible loses the tmp path when it is unable to 
resolve the $HOME env var. I tried googling it again but cannot find it 
now :-/
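
If anyone wants to rule that out, a minimal sketch (assuming $HOME expansion 
really is the culprit) is to point remote_tmp at an absolute path in 
ansible.cfg instead of relying on the env var:

[defaults]
# absolute path, so nothing depends on $HOME being resolvable at runtime
remote_tmp = /tmp/.ansible/tmp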


>
> > Any idea what is causing them and how to fix it? 
>
> Running with - might give you some more information. 
>

That's hard since replaying the whole playbook several times after the 
failure shows no issues at all.


>
> -- 
> Kai Stian Olstad 
>



[ansible-project] Random Ansible failures due to tmp files not found

2019-03-06 Thread Igor Cicimov
Anyone else seeing random playbook execution failures like this:

Source /home/user/.ansible/tmp/ansible-tmp-1551917585.84-139567381415844/source 
not found"}

Any idea what is causing them and how to fix it?

Thanks



Re: [ansible-project] --skip-tags equivalent inside playbooks

2019-02-05 Thread Igor Cicimov


On Wednesday, February 6, 2019 at 2:14:57 AM UTC+11, Kai Stian Olstad wrote:
>
> On 04.02.2019 04:10, Igor Cicimov wrote: 
> > On Monday, February 4, 2019 at 12:10:51 AM UTC+11, Kai Stian Olstad 
> > wrote: 
>
> Probably pretty useless to answer but anyway. 
>

You are right it was useless.


>
> >> 
> >> On 03.02.2019 01:53, Igor Cicimov wrote: 
> >> > Kai, why would I use vars when I already have tags on my tasks which 
> >> > purpose, and only purpose, is filtering during execution? 
> >> 
> >> Filtering is done on the command line with --tags not inside a 
> >> playbook 
> >> or task file. 
> >> 
> > 
> > That's correct ... and have you tried doing so? In the below example: 
> > 
> > - roles: 
> >- role1 
> >- role2 
> >- role3 
> > 
> > where all 3 roles have the same tags, lets say "instal" and 
> > "configure", 
> > how are you going to filter the "install" tag for the role1 only? 
>
> You can't without having unique tag.
>

You will run out of unique names for every single tag in a couple of years :-)

> I understand what you are looking for, I just say that is not possible 
> with Ansible at the moment and if you need that kind of functionality 
> use variables instead. 
>
>
> > If you need any other functionality, variables is the way to go. 
> >> 
> > 
> > Yeah like in the example you gave above: 
> > 
> > --- 
> > - include_tasks: install.yml 
> >when: test_install | default(true) == true 
> > 
> > - include_tasks: configure.yml 
> >when: test_configure | default(true) == true 
> > 
> > so you end up with separate file for each tag you have, good luck with 
> > that. 
>
> I don't know why I need good luck with that, been doing it for years and 
> it just work. 
>
>
So for tasks that have multiple tags you just repeat them in every file? 
Great!


> >> > Also as I said back in 2015 
> >> > 
> https://groups.google.com/d/msg/ansible-project/WimzDEJLHJc/9U10Yjb4CQAJ 
> >> > it 
> >> > is hard to retrofit variables into hundreds of playbooks you have 
> >> > written 
> >> > with tags expecting they will serve the purpose they exist for, 
> *which 
> >> > is 
> >> > filtering*. 
> >> 
> >> They do, the filtering is done on the command line. 
> >> Tags on a role in a playbook is adding the tags to all the task in the 
> >> role. 
> >> 
> > 
> > Which is wrong and useless. 
>
> Strange view on life. 
>
>
> >> So it pretty uniform, tags in in task files and playbooks is adding 
> >> that 
> >> tag to the task. 
> >> Filtering is done at run time on the command line. 
> >> 
> >> 
> >> > From where I stand, the "tags" option that we can pass to the role 
> like 
> >> > this: 
> >> > 
> >> > - roles: 
> >> > - { name: role1 tags: ["tag1","tag2"] } <== this *IS/SHOULD 
> BE* 
> >> > equivalent to a command line 
> >> 
> >> Why should it, in my opinion this will make it pretty confusing for 
> >> when 
> >> tags add a tag and when it's filtering on tags. 
> >> 
> > 
> > Simple, there should had been *tags*, *skip-tags* and *add-tags*, 
> > genius 
> > isn't it :-) 
>
> Yes it is, but I have tried to come up with alternative way to do it in 
> the scope of how Ansible work at the moment. 
>
>
> >> > is pretty much useless since instead filtering the role's tasks based 
> >> > on 
> >> > that "tags" list it adds those tags to each of them. Really not sure 
> >> > how is 
> >> > this helping me in any way and what would be the use case or 
> advantage 
> >> > I 
> >> > get from doing this? I mean if I wanted those tags in a role I would 
> >> > have 
> >> > included them in its tasks already ... or am I missing something? 
> >> 
> >> The functionality is that if you want to run a few of the role(s) in a 
> >> playbook, add a tag to the role and filter the tag on the command 
> >> line. 
> >> I use this feature a lot, a playbook have have tens of roles and I 
> >> just 
> >> want to run one or two of them, so changing that will destroy my and 
> >> everyone else's use of tags. 
> >> 
> > 
> > Why would you include a role in a p

Re: [ansible-project] --skip-tags equivalent inside playbooks

2019-02-03 Thread Igor Cicimov
On Monday, February 4, 2019 at 12:10:51 AM UTC+11, Kai Stian Olstad wrote:
>
> On 03.02.2019 01:53, Igor Cicimov wrote: 
> > Kai, why would I use vars when I already have tags on my tasks which 
> > purpose, and only purpose, is filtering during execution? 
>
> Filtering is done on the command line with --tags not inside a playbook 
> or task file. 
>

That's correct ... and have you tried doing so? In the below example:

- roles:
   - role1
   - role2
   - role3

where all 3 roles have the same tags, let's say "install" and "configure", 
how are you going to filter the "install" tag for role1 only?

If you need any other functionality, variables is the way to go. 
>

Yeah like in the example you gave above:

--- 
- include_tasks: install.yml 
   when: test_install | default(true) == true 

- include_tasks: configure.yml 
   when: test_configure | default(true) == true 

so you end up with a separate file for each tag you have; good luck with that.


>
> > Also as I said back in 2015 
> > https://groups.google.com/d/msg/ansible-project/WimzDEJLHJc/9U10Yjb4CQAJ 
> > it 
> > is hard to retrofit variables into hundreds of playbooks you have 
> > written 
> > with tags expecting they will serve the purpose they exist for, *which 
> > is 
> > filtering*. 
>
> They do, the filtering is done on the command line. 
> Tags on a role in a playbook is adding the tags to all the task in the 
> role. 
>

Which is wrong and useless.


> So it pretty uniform, tags in in task files and playbooks is adding that 
> tag to the task. 
> Filtering is done at run time on the command line.
>
>
> > From where I stand, the "tags" option that we can pass to the role like 
> > this: 
> > 
> > - roles: 
> > - { name: role1 tags: ["tag1","tag2"] } <== this *IS/SHOULD BE* 
> > equivalent to a command line 
>
> Why should it, in my opinion this will make it pretty confusing for when 
> tags add a tag and when it's filtering on tags. 
>

Simple, there should have been *tags*, *skip-tags* and *add-tags*; genius, 
isn't it :-)


>
> > is pretty much useless since instead filtering the role's tasks based 
> > on 
> > that "tags" list it adds those tags to each of them. Really not sure 
> > how is 
> > this helping me in any way and what would be the use case or advantage 
> > I 
> > get from doing this? I mean if I wanted those tags in a role I would 
> > have 
> > included them in its tasks already ... or am I missing something? 
>
> The functionality is that if you want to run a few of the role(s) in a 
> playbook, add a tag to the role and filter the tag on the command line. 
> I use this feature a lot, a playbook have have tens of roles and I just 
> want to run one or two of them, so changing that will destroy my and 
> everyone else's use of tags. 
>

Why would you include a role in a playbook that you don't need executed, I 
wonder?


> If you download a role from Galaxy you don't want to change the tags in 
> the role because that makes it very hard to download newer version of 
> that role. 
> But you can at least add your own tags on the role so you can filter to 
> run or not run the role when the playbook is running.
>

I have never seen any Galaxy role that I could use verbatim without applying 
custom changes, so this argument hardly counts.


>
> > So to conclude, when I call a role with *tags* I expect those and only 
> > those tags to be in effect during role's execution. 
>
> But I don't, and it's not feature I need since I use variables for that. 
>

I do too, to include whatever I need executed. And then I want to 
use the tags I've been applying religiously to all tasks I write (as I do 
with everything I create in AWS) for further filtering. And that is the 
whole point of the discussion: that option does not exist for playbooks 
that include roles.


>
> > Similarly I would 
> > expect to use *skip-tags* for tags I do not want executed during run 
> > time. 
> > Instead of that you are telling me to use vars when I already have tags 
> > that should serve the purpose. 
>
> The problem here is if you have 20 roles where all roles have uniq tag 
> and you only want to run one of them, adding 19 skiped tags instead of 1 
> include tag is not very practical. 
>
>
As said above, don't include a role in a playbook if you don't need it. It 
can also simply be solved via a variable, as you say, right? But how about if 
I have 90 tasks in a role and want to exclude 45? Much more difficult, isn't 
it?


> > Not sure why such a resistance towards a feature that is very logical 
> > to 
> > h

Re: [ansible-project] --skip-tags equivalent inside playbooks

2019-02-02 Thread Igor Cicimov
Kai, why would I use vars when I already have tags on my tasks whose 
purpose, and only purpose, is filtering during execution?

Also as I said back in 2015 
https://groups.google.com/d/msg/ansible-project/WimzDEJLHJc/9U10Yjb4CQAJ it 
is hard to retrofit variables into hundreds of playbooks you have written 
with tags expecting they will serve the purpose they exist for, *which is 
filtering*.

From where I stand, the "tags" option that we can pass to the role like 
this:

- roles:
- { name: role1 tags: ["tag1","tag2"] } <== this *IS/SHOULD BE* 
equivalent to a command line 

is pretty much useless since, instead of filtering the role's tasks based on 
that "tags" list, it adds those tags to each of them. I'm really not sure how 
this is helping me in any way, or what use case or advantage I 
get from doing this. I mean, if I wanted those tags in a role I would have 
included them in its tasks already ... or am I missing something?

So to conclude, when I call a role with *tags* I expect those and only 
those tags to be in effect during role's execution. Similarly I would 
expect to use *skip-tags* for tags I do not want executed during run time. 
Instead of that you are telling me to use vars when I already have tags 
that should serve the purpose.

Not sure why there is such resistance towards a feature that is very logical 
to have and makes much more sense than what exists atm.
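
For the record, the closest approximation I can see with what exists today 
(just a sketch, and not what I'm asking for) is to add a role-level tag and 
combine it with the command-line switches:

- hosts: all
  roles:
    # "role1" gets added to every task in the role, so on the command line
    #   ansible-playbook site.yml --tags role1 --skip-tags configure
    # runs only role1's tasks and skips the ones already tagged "configure"
    - { role: role1, tags: ["role1"] }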

On Saturday, February 2, 2019 at 11:05:40 PM UTC+11, Kai Stian Olstad wrote:
>
> On 02.02.2019 05:39, Igor Cicimov wrote: 
> > Brian, I find the current usage of "tags" when calling a role via 
> > "roles:" 
> > or "include_role/import_role" is counter intuitive. The reason we tag 
> > tasks 
> > in our playbooks is for the purpose of filtering which we would expect 
> > to 
> > be the case in the above mentioned scenarios as well. But it is not, 
> > and 
> > that is major draw back in making reusable (DRY) code. 
> > 
> > I constantly find my self in need to execute just a part of some role 
> > tasks, lets say the ones tagged with "install" but skip the ones tagged 
> > with "configure" lets say. This is exactly what we get by passing 
> > "--tags" 
> > or "--skip-tags" on the command line so why not make this consistent 
> > everywhere? 
>
> I would argue that it's very consistent at the moment. 
> All tags in a yaml file sets/add that tag(s), and which tags you want to 
> run is specified on the command line. 
>
> Use variables if you want to run part of your code. 
> An example: 
>
> roles/test/install.yml 
> roles/test/configure.yml 
>
>
> roles/test/main.yml 
> --- 
> - include_tasks: install.yml 
>when: test_install | default(true) == true 
>
> - include_tasks: configure.yml 
>when: test_configure | default(true) == true 
>
> Then to only run install just do this 
> playbook.yml 
> - hosts: localhost 
>roles: 
>  - role: test 
>test_configure: false 
>
>
> You can also overwrite the variables on the command line too if needed. 
>
> -- 
> Kai Stian Olstad 
>



Re: [ansible-project] --skip-tags equivalent inside playbooks

2019-02-01 Thread Igor Cicimov
Brian, I find the current usage of "tags" when calling a role via "roles:" 
or "include_role/import_role" counter-intuitive. The reason we tag tasks 
in our playbooks is for the purpose of filtering, which we would expect to 
be the case in the above-mentioned scenarios as well. But it is not, and 
that is a major drawback in making reusable (DRY) code.

I constantly find myself needing to execute just a part of some role's 
tasks, let's say the ones tagged with "install", but skip the ones tagged 
with "configure". This is exactly what we get by passing "--tags" 
or "--skip-tags" on the command line, so why not make this consistent 
everywhere?

On Saturday, February 2, 2019 at 6:07:44 AM UTC+11, Brian Coca wrote:
>
> No, it is not possible 
>
> -- 
> Brian Coca 
>



Re: [ansible-project] --skip-tags equivalent inside playbooks

2019-01-31 Thread Igor Cicimov
Does anyone know if this is possible in any Ansible release?

On Thursday, December 17, 2015 at 8:08:14 AM UTC+11, Igor Cicimov wrote:
>
> This sounds like very reasonable request, option like:
>
> - { role: A, skip-tags: [t1, t2] }
>
> would be very useful in the case of a playbook with many roles having the same 
> tags, where --skip-tags is not an option if one wants to skip tags in a couple 
> of roles only.
>
> Any plans to support this?
>
>



Re: [ansible-project] Re: Error while evaluating conditional

2017-06-12 Thread Igor Cicimov
Yes, I tried that, it didn't work.



[ansible-project] wait_for using bastion host

2017-05-24 Thread Igor Cicimov
Try to connect manually with the same ssh command with the -vvv switch to find out 
what went wrong. Also use register on the Ansible task and debug the 
returned value. My guess is a missing/mismatched key or a security group issue, 
i.e. the instance has TCP port 22 blocked.
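
As a rough sketch of what I mean by registering and debugging (the "bastion" 
host name is just an example):

- name: wait for ssh on the target via the bastion
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    timeout: 300
  delegate_to: bastion   # example name for the jump host in the inventory
  register: ssh_check

- debug:
    var: ssh_check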





[ansible-project] Re: Error while evaluating conditional

2017-05-17 Thread Igor Cicimov
OK, it was simply the order; this works:

  when: ansible_eth0.ipv4.address == server_list.0 and 'does not exist' in 
volume_info.stderr

where this will not:

  when: 'does not exist' in volume_info.stderr and 
ansible_eth0.ipv4.address == server_list.0

no matter what kind of quotes you wrap it in, how you escape it, etc.
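
For anyone else hitting this, the list form of when (all items are ANDed 
together) might also sidestep the leading-quote parsing problem, at least on 
newer Ansible versions; untested here:

  when:
    - "'does not exist' in volume_info.stderr"
    - ansible_eth0.ipv4.address == server_list.0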

On Wednesday, May 17, 2017 at 4:32:56 PM UTC+10, Igor Cicimov wrote:
>
> Any idea why would Ansible error on this?
>
> fatal: [localhost] => error while evaluating conditional: 'does not exist' 
> in volume_info.stderr and "10.99.4.236" == "10.99.3.195"
>
> FATAL: all hosts have already failed -- aborting
>



[ansible-project] Error while evaluating conditional

2017-05-17 Thread Igor Cicimov
Any idea why would Ansible error on this?

fatal: [localhost] => error while evaluating conditional: 'does not exist' 
in volume_info.stderr and "10.99.4.236" == "10.99.3.195"

FATAL: all hosts have already failed -- aborting



[ansible-project] Ansible 2.0.2 sts_asume_role error - Region does not seem to be available for aws module boto.sts

2017-03-15 Thread Igor Cicimov
Hi,

When I run the following playbook on an EC2 instance with IAM instance role:

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
- name: Assume the instance profile role
  sts_assume_role:
region: "eu-west-1"
role_arn: "arn:aws:iam::xx:instance-profile/profile-name"
role_session_name: "someRoleSession"
  register: assumed_role

I get error:

TASK [Assume the instance profile role] 

fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": 
"Region eu-west-1 does not seem to be available for aws module boto.sts. If 
the region definitely exists, you may need to upgrade boto or extend with 
endpoints_path"}

I get the same result after trying a couple of other regions without success 
(although the instance itself is in the eu-west-1 region). Which is the 
correct region then?

The role ARN is obtained via the instance metadata:

# curl -s http://169.254.169.254/latest/meta-data/iam/info | jq -c -M -r 
'.InstanceProfileArn'

Some details of the setup:

# lsb_release -a
No LSB modules are available.
Distributor ID:Ubuntu
Description:Ubuntu 14.04.5 LTS
Release:14.04
Codename:trusty

# ansible --version
ansible 2.0.2.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

# python --version
Python 2.7.6

# dpkg -l python-boto | grep ^ii
ii  python-boto  2.20.1-2ubuntu2   
all  Python interface to Amazon's Web Services

What could be the problem? Nothing obvious comes to my attention when looking 
at the module documentation 
http://docs.ansible.com/ansible/sts_assume_role_module.html.
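
(My guess, untested: the stock Trusty python-boto 2.20.1 simply predates the 
eu-west-1 STS endpoint, so upgrading boto, for example via pip, might be 
enough:)

# replace the distro boto with a newer release that knows the endpoint
pip install --upgrade boto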

Thanks



[ansible-project] Counting hosts in a group form an ec2 dynamic inventory

2016-12-17 Thread Igor Cicimov
The usual way would be to tag the instances and pick them up at the start of 
your play, put them in a group and count them. However, the dynamic inventory 
contains the running instances only, otherwise your plays would fail when running 
against stopped instances. Not sure how to get around this.
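
For the running instances at least, you can count what the dynamic inventory 
returns, something like this (the tag-based group name is just an example of 
what ec2.py generates):

- name: count the running instances in the group
  debug: msg="{{ groups['tag_Role_web'] | default([]) | length }} running instances"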



[ansible-project] Re: Unable to use ec2_vpc_route_table to remove route tables.... anyone else?

2016-12-13 Thread Igor Cicimov
Just to clarify, was the routing table in use by any subnet(s) at that 
moment? I think AWS will not allow you to remove a resource that is 
referenced by another one.

On Thursday, December 8, 2016 at 7:43:57 AM UTC+11, rc@gmail.com wrote:
>
> Using version:
>
> ansible 2.2.1.0 (stable-2.2 acad2ba246) last updated 2016/12/07 11:28:43 
> (GMT -400)
>   lib/ansible/modules/core: (detached HEAD 8139278530) last updated 
> 2016/12/07 11:28:55 (GMT -400)
>   lib/ansible/modules/extras: (detached HEAD f5f1fc934a) last updated 
> 2016/12/07 11:28:56 (GMT -400)
>   config file = /etc/ansible/ansible.cfg
>   configured module search path = Default w/o overrides
>
> I'm able to build route tables just fine, but trying to utilize the 
> "state:absent" attribute in the module does not result in removing route 
> tables. 
>
> I ended up doing a shell task and calling awscli to handle the removal. I 
> don't see any specific issues on the extras Github site about this 
> behavior, so I thought I'd inquire here... 
>
> Thanks,
>
> --rc
>



[ansible-project] Re: Ansible 2.2.0 Final has been released!

2016-11-01 Thread Igor Cicimov
Hi James,

Although the performance improvements are welcome, I wonder how this 
stacks up compared to the 1.9.x version?

There have been discussions here, more specifically 
https://groups.google.com/forum/#!starred/ansible-devel/UU0Tpw-qlhY, and a 
couple of issues raised and outlined in your post here 
https://groups.google.com/forum/#!starred/ansible-devel/TLEc1NoQ7lA, that 
reveal a serious problem in the very core of the 2.x release. Has this been 
addressed in 2.2.0, and if yes, how close is the 2.2 performance to the old 
1.9.x branch?

Thanks,
Igor

On Wednesday, November 2, 2016 at 12:55:28 AM UTC+11, James Cammarata wrote:
>
> Hi all, we're very happy to announce that Ansible 2.2.0 has been released!
>
> This release includes many new features and improvements (from the RC1 
> release announcement):
>
> * Almost 200 new modules!
> * Major performance improvements. In many cases, you should see a 2-3x 
> improvement over 2.1/2.0.
> * `include_role` now allows for roles to be executed inline with your 
> other tasks, instead of listing them only in the `roles:` section of your 
> plays (http://docs.ansible.com/ansible/include_role_module.html)
> * The `listen` feature for handlers allows for much easier notifications 
> of multiple handlers via a pub/sub mechanism (
> http://docs.ansible.com/ansible/playbooks_intro.html#handlers-running-operations-on-change
> ).
> * Serial batches can now be specified as a list rather than a single 
> integer value, meaning you can do something like this to scale up serial 
> batches:
>   `serial: [1, 5, 10]` # the first batch will be 1 host, 2nd=5 hosts, all 
> other batches will be 10 hosts)
> * Windows tasks can now use "async", and can also now use the 
> "environment" option to set environment variables.
> * Support for binary modules.
> * New become method: `ksu` (Kerberos su).
> * New meta option: `end_play`, which allows for early termination of a 
> play without failing.
> * Meta tasks now support conditional statements.
>
> Here is the official announcement on our website:
>
>
> https://www.ansible.com/press/ansible-22-delivers-new-automation-capabilities-for-containers-networks-and-cloud
>
> As always, this update is available via PyPi and releases.ansible.com 
> now, and packages for distros will be available as soon as possible.
>
> Thanks and enjoy!
>
> James Cammarata
>
> Ansible Lead/Sr. Principal Software Engineer
> Ansible by Red Hat
> twitter: @thejimic, github: jimi-c
>



[ansible-project] Re: MongoDB Ansible Deployment

2016-08-18 Thread Igor Cicimov


On Friday, August 19, 2016 at 12:50:07 PM UTC+10, Igor Cicimov wrote:
>
>
>
> On Friday, August 19, 2016 at 2:41:27 AM UTC+10, adam.a...@gmail.com 
> wrote:
>>
>>  Hello,
>>
>> So I am trying to convert a bunch of Ansible Playbooks that were used to 
>> deploy MongoDB into a more generalized group of roles used to for the 
>> deployment. I have somewhat based it off of the Ansible Example MongoDB 
>> Deployment Repository 
>> <https://github.com/ansible/ansible-examples/tree/master/mongodb>. The 
>> thing I am having trouble with now is the mongod role and the updating of 
>> the startup and configuration files on the arbiter/replica servers. In the 
>> original deployment, I have a play that looks like the one below.
>>
>> - hosts:
>>   - rs1
>>   remote_user: root
>>
>>   tasks:
>>   - name: Create the data directory for the replica sets
>> file: path=/data/db state=directory
>>   - name: Copy the daemon configuration file over to the replica sets
>> copy: src=/ansible/MongoDB/roles/mongod/templates/mongod_rs1.conf 
>> dest=/etc/mongod.conf
>>   - name: Copy the rc.local file over to the replica servers to start the 
>> mongod services at boot
>> copy: src=/ansible/MongoDB/roles/mongod/templates/rc_rep.local 
>> dest=/etc/rc.d/rc.local mode=0755
>>
>> The problem I am having stems from the hosts - there are 20 replica set 
>> hosts with each set having 2 replication servers. Now, it is easy to create 
>> the data directories and the startup file on each replication server as 
>> they all get the same path and startup file. So, I can just simply do the 
>> same task for every server in those groups. The place where I struggle, 
>> though, is the configuration file. The 2 servers in each replication set 
>> get their own configuration files, which are inherently different from all 
>> the other configuration files. My thoughts on how to get this to work - 
>> without doing a different play for each host - involved using the 
>> inventory_hostname variable as I ran through the Playbook. The way I had it 
>> set-up was like this,
>>
>> - name: Create the mongodb configuration file for each set of replica 
>> servers
>>   template: 
>> src=ansible/MongoDB/roles/mongod/templates/mongod_{{inventory_hostname}}.conf
>>  
>> dest=/etc/mongod.conf
>>
>> I thought, then, that this would drop the mongod.conf file onto the 
>> correct servers because it would read in the group, like "rs1", to that 
>> variable and it would work easily. 
>>
>
> If you need to use the group name, if I understand correctly, then make it 
> a variable like:
>
> - hosts: '{{ hosts }}'
>
> and then set the external variable at run time like -e "hosts=rs1" and use 
> that for the conf file name as well mongod_{{ hosts }}.conf
>  
>
>> This made me realize, though, that it would use "ghmrep1" and "ghmrep21" 
>> for example instead of "rs1" as the hostname. Then I thought about just 
>> changing the name of the mongod configuration files in the template folder 
>> to something like "mongod_ghmrep1" so that it would work that way. This 
>> seems unnecessary, though, and I would also have to double the amount of 
>> files I had to make this work. Is there any simpler way to go about this 
>> play where I could just create/copy all the files over in one fell swoop?
>>
>  
 Alternatively, if your groups are named like rs1, rs2, etc., then you 
can use the integer sequence loop to dynamically generate the group names 
at play time: 
http://docs.ansible.com/ansible/playbooks_loops.html#looping-over-integer-sequences
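
A rough sketch of the first idea (the variable name "hosts" is just an 
example):

# run with: ansible-playbook site.yml -e "hosts=rs1"
- hosts: "{{ hosts }}"
  remote_user: root
  tasks:
    - name: Copy the per-replica-set daemon configuration
      template: src=mongod_{{ hosts }}.conf dest=/etc/mongod.conf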



[ansible-project] Re: MongoDB Ansible Deployment

2016-08-18 Thread Igor Cicimov


On Friday, August 19, 2016 at 2:41:27 AM UTC+10, adam.a...@gmail.com wrote:
>
>  Hello,
>
> So I am trying to convert a bunch of Ansible Playbooks that were used to 
> deploy MongoDB into a more generalized group of roles used to for the 
> deployment. I have somewhat based it off of the Ansible Example MongoDB 
> Deployment Repository 
> . The 
> thing I am having trouble with now is the mongod role and the updating of 
> the startup and configuration files on the arbiter/replica servers. In the 
> original deployment, I have a play that looks like the one below.
>
> - hosts:
>   - rs1
>   remote_user: root
>
>   tasks:
>   - name: Create the data directory for the replica sets
> file: path=/data/db state=directory
>   - name: Copy the daemon configuration file over to the replica sets
> copy: src=/ansible/MongoDB/roles/mongod/templates/mongod_rs1.conf 
> dest=/etc/mongod.conf
>   - name: Copy the rc.local file over to the replica servers to start the 
> mongod services at boot
> copy: src=/ansible/MongoDB/roles/mongod/templates/rc_rep.local 
> dest=/etc/rc.d/rc.local mode=0755
>
> The problem I am having stems from the hosts - there are 20 replica set 
> hosts with each set having 2 replication servers. Now, it is easy to create 
> the data directories and the startup file on each replication server as 
> they all get the same path and startup file. So, I can just simply do the 
> same task for every server in those groups. The place where I struggle, 
> though, is the configuration file. The 2 servers in each replication set 
> get their own configuration files, which are inherently different from all 
> the other configuration files. My thoughts on how to get this to work - 
> without doing a different play for each host - involved using the 
> inventory_hostname variable as I ran through the Playbook. The way I had it 
> set-up was like this,
>
> - name: Create the mongodb configuration file for each set of replica 
> servers
>   template: 
> src=ansible/MongoDB/roles/mongod/templates/mongod_{{inventory_hostname}}.conf 
> dest=/etc/mongod.conf
>
> I thought, then, that this would drop the mongod.conf file onto the 
> correct servers because it would read in the group, like "rs1", to that 
> variable and it would work easily. 
>

If you need to use the group name, if I understand correctly, then make it 
a variable like:

- hosts: '{{ hosts }}'

and then set the external variable at run time like -e "hosts=rs1" and use 
that for the conf file name as well mongod_{{ hosts }}.conf
 

> This made me realize, though, that it would use "ghmrep1" and "ghmrep21" 
> for example instead of "rs1" as the hostname. Then I thought about just 
> changing the name of the mongod configuration files in the template folder 
> to something like "mongod_ghmrep1" so that it would work that way. This 
> seems unnecessary, though, and I would also have to double the amount of 
> files I had to make this work. Is there any simpler way to go about this 
> play where I could just create/copy all the files over in one fell swoop?
>



Re: [ansible-project] Execution order of handlers in notify

2016-07-13 Thread Igor Cicimov


On Wednesday, July 13, 2016 at 7:21:08 PM UTC+10, Kai Stian Olstad wrote:
>
> On 13.07.2016 10:25, Igor Cicimov wrote: 
> > - name: template configuration file 
> >   template: src=template.j2 dest=/etc/foo.conf 
> >   notify: 
> >  - restart memcached 
> >  - restart apache 
> > 
> > 
> > does this guarantee that memcached is always going to be restarted 
> > before 
> > apache when the handles get flushed? 
>
> No. 
> All notify will run at the end an in the order they are in handlers 
> file. 
>
> -- 
> Kai Stian Olstad 
>

I guess the same goes in the case of different tasks notifying different 
handlers? Like:

- task1
  notify:
 - restart memcached

- tasks2
  notify:
 - restart apache
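
So if the order matters, the only thing that controls it (as I read Kai's 
answer) is the order inside the handlers file itself; a sketch, with the 
service names assumed:

# handlers/main.yml -- handlers run in the order listed here,
# regardless of the order in which they were notified
- name: restart memcached
  service: name=memcached state=restarted

- name: restart apache
  service: name=apache2 state=restarted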



[ansible-project] Re: Need help with nested variables to use in a "when:" directive inside of a role playbook

2016-02-03 Thread Igor Cicimov
Try this:

- name: Ensure user directory exists
  file:
path=/opt/{{ item.item }}
state=directory
owner={{ item.item }}
group={{ item.item }}
mode=0755
  with_items: passinfo.results

not tested though.
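
Based on the passinfo output you pasted (the missing user comes back as 
null), you could probably also skip the absent users directly; a sketch, 
untested as well:

- name: Ensure user directory exists
  file:
    path: "/opt/{{ item.item }}"
    state: directory
    owner: "{{ item.item }}"
    group: "{{ item.item }}"
    mode: "0755"
  with_items: "{{ passinfo.results }}"
  # getent_passwd[<user>] is null when the user does not exist on the host
  when: item.ansible_facts.getent_passwd[item.item] is not none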

On Wednesday, February 3, 2016 at 7:48:21 AM UTC+11, Dayton Jones wrote:
>
> I have a requirement to create directories only if the specified user(s) 
> exist on the remote host... given my role defintion below, what is the 
> proper syntax to use with "when" (or other method) to only create the 
> directory only if that user exists on the host and skip the task if not 
> present?  
>
> ../vars/main.yml:
>
>> ---
>> my_user_list:
>>   - user1
>>   - user2
>>
>
>
> ../tasks/main.yml:
>
>> - name: Check for existence of users
>>   getent: database=passwd key={{item}} fail_key=False
>>   with_items: my_user_list
>>   register: passinfo
>> - name: Ensure user directory exists (user1)
>>   file:
>> path=/opt/user1
>> state=directory
>> owner=user1
>> group=user1
>> mode=0755
>>   when: ??? filter to only run if user1 exists ???
>>   ignore_errors: yes
>> - name: Ensure user directory exists (user2)
>>   file:
>> path=/opt/user2
>> state=directory
>> owner=user2
>> group=user2
>> mode=0755
>>   when: ??? filter to only run if user2 exists ???
>>   ignore_errors: yes
>>
>
>
> here is the "output" of the passinfo variable:
>
> "passinfo": {
> "changed": false, 
> "msg": "All items completed",
> "results": [
> {
> "_ansible_no_log": false,
> "ansible_facts": {
> "getent_passwd": {
> "user2": [
> "x",
> "1002",
> "1002",
> "",
> "/home/user2",
> "/bin/bash"
> ]
> }
> },
> "changed": false,
> "invocation": {
> "module_args": {
> "_ansible_check_mode": false,
> "_ansible_debug": false,
> "_ansible_diff": false,
> "_ansible_no_log": false,
> "_ansible_verbosity": 0,
> "database": "passwd",
> "fail_key": false,
> "key": "user2",
> "split": null
> },
> "module_name": "getent"
> },
> "item": "user2"
> },
> {
> "_ansible_no_log": false,
> "ansible_facts": {
> "getent_passwd": {
> "user1": null
> }
> },
> "changed": false,
> "invocation": {
> "module_args": {
> "_ansible_check_mode": false,
> "_ansible_debug": false,
> "_ansible_diff": false,
> "_ansible_no_log": false,
> "_ansible_verbosity": 0,
> "database": "passwd",
> "fail_key": false,
> "key": "user1",
> "split": null
> },
> "module_name": "getent"
> },
> "item": "user1",
> "msg": "One or more supplied key could not be found in the 
> database."
> }
> ]
> }
> }
>
>
>



Re: [ansible-project] Ansible 1.9.4-1ppa~trusty removed from ansible ppa?

2016-01-13 Thread Igor Cicimov
I noticed the same and that sucks. In the future, is it possible to make 
people aware of this before the PPA just disappears, so we can prepare 
for the impact and not get our automated envs broken?
I apologize if this has already been done and I have somehow missed it.
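
Until it's sorted, one way to protect automated environments from surprise 
upgrades (just a sketch) is to hold the package on the hosts that install 
from the PPA:

# keep the currently installed version until explicitly un-held
apt-mark hold ansible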

Thanks,
Igor

On Thursday, January 14, 2016 at 1:33:28 AM UTC+11, Brian Coca wrote:
>
> Sadly we don't seem able to host multiple versions of a package 
> through the PPA, so 2.0 will removes 1.9. 
>
> -- 
> Brian Coca 
>



Re: [ansible-project] variables messed up when using including tasks

2015-12-12 Thread Igor Cicimov
Well, of course they get overwritten in the *same* play, since all variables 
are *global* during the play execution; otherwise, if you call group1 after 
group0, group1 would get the variables from group0, which 
is not desirable, right?

What you need to do is rethink your playbook and inventory structure: 
think of using multiple roles instead of a single one with many tasks, and 
keep the global variable names unique if you don't want them overwritten.
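
For example, something along these lines (based on the inventory you posted) 
keeps the values apart even when both groups contain the same host:

[group0]
234.123.41.44
[group0:vars]
group0_var0=00
group0_var1=01

[group1]
234.123.41.44
[group1:vars]
group1_var0=10
group1_var1=11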

On Sunday, December 13, 2015 at 3:31:17 AM UTC+11, silverdr wrote:
>
> I /think/ I know where the problem comes from: I encountered it with group 
> vars defined in the inventory file. I eventually moved both the role 
> variables and group_vars into inventory file, trying to establish one place 
> where the variables are defined. Now:  

My inventory file looks like this (IPs are dummy): 
>
> [group0] 
> 234.123.41.44 
> [group0:vars] 
> var0=00 
> var1=01 
>
> [group1] 
> 234.123.41.45 
> [group1:vars] 
> var0=10 
> var1=11 
>
> Now, in this configuration, variables have their values intact. But when I 
> refer with both groups to the same host, like: 
>
> [group0] 
> 234.123.41.44 
> [group0:vars] 
> var0=00 
> var1=01 
>
> [group1] 
> 234.123.41.44 
> [group1:vars] 
> var0=10 
> var1=11 
>
> which I guess is rather typical when deploying to development environment, 
> then the group0 variables get overwritten! 
>
> Since my configuration with "defaults" has been applied to similar 
> inventory, my best guess is that this caused the problem there too. Now - 
> what to do in order to have the variables keep their values for different 
> groups, even if the host happens to be the same? And shouldn't it be the 
> case anyway?



Re: [ansible-project] variables messed up when using including tasks

2015-12-10 Thread Igor Cicimov
Strange, because I'm sure every time I call a tasks file from another role I 
have to explicitly include that role's defaults|vars file via vars_files to be 
able to use its variables. So you are saying you are seeing this happen without 
that linking? Do you mind showing us the whole main playbook?



Re: [ansible-project] variables messed up when using including tasks

2015-12-10 Thread Igor Cicimov
Just out of curiosity, what happens when you supply destination via 
extra_vars and leave everything as it was in the defaults file?


On Friday, December 11, 2015 at 11:09:30 AM UTC+11, silverdr wrote:
>
>
> > On 2015-12-10, at 23:13, Igor Cicimov <ig...@encompasscorporation.com 
> > wrote: 
> > 
> > Strange, because I'm sure every time I call a tasks file from another 
> role I have to explicitly include that role's defaults|vars file via 
> vars_files to be able to use its variables. So you are saying you are 
> seeing this happen without this linking. Do you mind showing us the whole 
> main playbook? 
>
> FYI: I - kind of - worked the problem around by moving the variables from 
> "defaults" to "vars" but this causes other inconveniences when I actually 
> want to overwrite them. 
>
> Example playbook ({{destination}} value changes after the first -include: 
>
> --- 
> - hosts: api 
>   remote_user: teamcity 
>   sudo: yes 
>   gather_facts: no 
>
>   pre_tasks: 
>
>   roles: 
> - { role: "api" } 
>
>   tasks: 
>
>   post_tasks: 
>
>  
> roles/api/tasks/main.yml: 
>
> # Prerequisities / system dependencies 
> - name: install dependencies 
>   apt: name={{item}} state=present 
>   with_items: 
> - python-httplib2 
> - supervisor 
>
> # Configuring PHP 
> - include: php.yml 
>
> # Getting and preparing / installing the code 
>
> - name: if exists - remove previously cloned repository 
>   file: 
> path: "{{destination}}" 
> state: absent 
>   when: clear_destination is defined 
>   tags: 
> - api 
>
> - name: clone repository 
>   git: 
> repo: "{{repository}}" 
> dest: "{{destination}}" 
> accept_hostkey: yes 
>   tags: 
> - api_debug 
>
> - name: copy default env file # TODO - process the env file out of 
> env.dist 
>   template: src=env.ini.j2 dest={{destination}}/api/.env 
>   tags: env_file 
>
> - name: install using composer 
>   composer: command=install working_dir={{destination}}/api/ no_dev=no 
>
> - name: change ownership of the cloned repository 
>   file: dest={{destination}} owner={{owneruser}} group={{ownergroup}} 
> state=directory recurse=yes 
>
> # Configuring database 
> - name: stop supervisor service 
>   supervisorctl: name='api:' state=stopped 
>   tags: api_database 
>
> - include: database.yml 
>
> - name: run migrations 
>   sudo_user: "{{owneruser}}" 
>   command: php artisan migrate chdir={{destination}}/api/ 
>
> - name: seed the database 
>   sudo_user: "{{owneruser}}" 
>   command: php artisan db:seed chdir={{destination}}/api/ 
>
> # Cleaning elastic index 
> - name: cleanup elastic index 
>   uri: url=http://{{elastic_hostname}}:{{elastic_port}}/{{listing_index}} 
> method=DELETE status_code=200,404 
>   when: clean_elastic_index is defined 
>   tags: 
> - clean_elastic_index 
>
> # Run artisan queue daemon with supervisord 
> - name: copy queue daemon configuration 
>   template: src=supervisord.conf.j2 dest=/etc/supervisor/conf.d/api.conf 
>   tags: queue 
>
> - name: add new service 
>   supervisorctl: name='api:' state=started 
>   tags: queue 
>
> - name: start added service 
>   supervisorctl: name='api:' state=restarted 
>   tags: queue 
>
> # Configuring network 
> - include: network.yml 
>
> # Configuring httpd 
> - include: httpd.yml 
>
> # Generating apidocs 
> - include: apidocs.yml 
>
> ** 
> roles/api/tasks/php.yml: 
>
> --- 
> - name: install php5 extensions 
>   apt: name={{item}} state=present 
>   with_items: 
> - php5-mcrypt 
> - php5-json 
> - php5-pgsql 
> - php5-xsl 
> - php5-gmp 
>
> - name: enable php5-mcrypt extension 
>   command: php5enmod mcrypt 
>   args: 
> creates: /etc/php5/apache2/conf.d/20-mcrypt.ini 
>
> # when installing xdebug make sure that xdebug.max_nesting_level = 250 
> # is also set in the php's php.ini configuration file 
> - name: install php5 xdebug extension 
>   apt: name={{item}} state=present 
>   with_items: 
> - php5-xdebug 
>   when: xdebug is defined 
>
> - name: set php5 xdebug configuration param 
>   lineinfile: 
> dest: "{{php_ini_path}}" 
> backup: true 
> backrefs: true 
> state: present 
> regexp: "{{item.regexp}}" 
> line: "{{item.line}}" 
>   with_items: 
> - {regexp: '^xdebug.max_nesting_level', line: 
> 'xdebug.max_nesting_level = 250'} 
>   notify: restart httpd 
>   when: xdebug is defined 
>
>



Re: [ansible-project] variables messed up when using including tasks

2015-12-10 Thread Igor Cicimov
Also, if you could put

- debug: var=destination

before and after every include call, that would be very helpful to see where 
the var actually gets changed.

Just realized that my previous message might not be clear enough, what I 
mean is add:

--extra-vars '{"destination":"some_value_here"}'

to the ansible-playbook command and move back all your vars in their 
respective defaults files as they were before.
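
To illustrate, a minimal sketch of the instrumentation meant above, using the 
php.yml include from the playbook quoted below (the debug tasks are of course 
only for troubleshooting):

- debug: var=destination

# Configuring PHP
- include: php.yml

- debug: var=destination

Running the play with --extra-vars '{"destination":"some_value_here"}' should 
then show at which include, if any, the value changes.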

On Friday, December 11, 2015 at 3:37:37 PM UTC+11, Igor Cicimov wrote:
>
> Just out of curiosity, what happens when you supply destination via 
> extra_vars and leave everything as it was in defaults file?
>
>
> On Friday, December 11, 2015 at 11:09:30 AM UTC+11, silverdr wrote:
>>
>>
>> > On 2015-12-10, at 23:13, Igor Cicimov <ig...@encompasscorporation.com> 
>> wrote: 
>> > 
>> > Strange, because I'm sure every time I call a tasks file from another 
>> role I have to explicitly include that role's defaults|vars file via 
>> vars_files to be able to use its variables. So you are saying you are 
>> seeing this happen without this linking. Do you mind showing us the whole 
>> main playbook? 
>>
>> FYI: I kind of worked around the problem by moving the variables from 
>> "defaults" to "vars", but this causes other inconveniences when I actually 
>> want to overwrite them. 
>>
>> Example playbook ({{destination}} value changes after the first include): 
>>
>> --- 
>> - hosts: api 
>>   remote_user: teamcity 
>>   sudo: yes 
>>   gather_facts: no 
>>
>>   pre_tasks: 
>>
>>   roles: 
>> - { role: "api" } 
>>
>>   tasks: 
>>
>>   post_tasks: 
>>
>>  
>> roles/api/tasks/main.yml: 
>>
>> # Prerequisities / system dependencies 
>> - name: install dependencies 
>>   apt: name={{item}} state=present 
>>   with_items: 
>> - python-httplib2 
>> - supervisor 
>>
>> # Configuring PHP 
>> - include: php.yml 
>>
>> # Getting and preparing / installing the code 
>>
>> - name: if exists - remove previously cloned repository 
>>   file: 
>> path: "{{destination}}" 
>> state: absent 
>>   when: clear_destination is defined 
>>   tags: 
>> - api 
>>
>> - name: clone repository 
>>   git: 
>> repo: "{{repository}}" 
>> dest: "{{destination}}" 
>> accept_hostkey: yes 
>>   tags: 
>> - api_debug 
>>
>> - name: copy default env file # TODO - process the env file out of 
>> env.dist 
>>   template: src=env.ini.j2 dest={{destination}}/api/.env 
>>   tags: env_file 
>>
>> - name: install using composer 
>>   composer: command=install working_dir={{destination}}/api/ no_dev=no 
>>
>> - name: change ownership of the cloned repository 
>>   file: dest={{destination}} owner={{owneruser}} group={{ownergroup}} 
>> state=directory recurse=yes 
>>
>> # Configuring database 
>> - name: stop supervisor service 
>>   supervisorctl: name='api:' state=stopped 
>>   tags: api_database 
>>
>> - include: database.yml 
>>
>> - name: run migrations 
>>   sudo_user: "{{owneruser}}" 
>>   command: php artisan migrate chdir={{destination}}/api/ 
>>
>> - name: seed the database 
>>   sudo_user: "{{owneruser}}" 
>>   command: php artisan db:seed chdir={{destination}}/api/ 
>>
>> # Cleaning elastic index 
>> - name: cleanup elastic index 
>>   uri: url=http://{{elastic_hostname}}:{{elastic_port}}/{{listing_index}} 
>> method=DELETE status_code=200,404 
>>   when: clean_elastic_index is defined 
>>   tags: 
>> - clean_elastic_index 
>>
>> # Run artisan queue daemon with supervisord 
>> - name: copy queue daemon configuration 
>>   template: src=supervisord.conf.j2 dest=/etc/supervisor/conf.d/api.conf 
>>   tags: queue 
>>
>> - name: add new service 
>>   supervisorctl: name='api:' state=started 
>>   tags: queue 
>>
>> - name: start added service 
>>   supervisorctl: name='api:' state=restarted 
>>   tags: queue 
>>
>> # Configuring network 
>> - include: network.yml 
>>
>> # Configuring httpd 
>> - include: httpd.yml 
>>
>> # Generating apidocs 
>> - include: apidocs.yml 
>>
>> ** 
>> roles/api/tasks/php.yml: 
>>
>> --- 
>> - name: install php5 ex

Re: [ansible-project] variables messed up when using including tasks

2015-12-09 Thread Igor Cicimov
Do you by any chance have some of those vars also defined in the group_vars/all 
file? In that case they override the ones in the role's defaults file, causing 
unexpected results for people unaware of this fact.



[ansible-project] Re: gluster_volume module

2015-12-06 Thread Igor Cicimov
Just for the record, removing quotes from your example like this:

options={ group: virt, storage.owner-uid: '36', storage.owner-gid: '36' }

works for me on 1.9.4.
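
In a full task that would look something like this (a sketch based on the 
example quoted below, keeping the same variable names, just with the quotes 
around the options value dropped):

- name: Gluster volume exists - {{ gvol_name }}
  gluster_volume: >
    state=present
    name={{ gvol_name }}
    brick={{ gvol_brick }}
    options={ group: virt, storage.owner-uid: '36', storage.owner-gid: '36' }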

On Wednesday, August 12, 2015 at 11:21:23 PM UTC+10, Chris Weeks wrote:
>
> Thank you for sharing this information, I had come across the exact same 
> problem - use case: gluster storage backends for oVirt clusters.
>
> My role includes the following:
>
> - name: Gluster volume exists - {{gvol_name}}
>   gluster_volume: >
> state=present
> name={{gvol_name}}
> brick={{gvol_brick}}
> options="{ group: virt, storage.owner-uid: '36', storage.owner-gid: 
> '36' }"
>
> Unfortunately, that syntax still fails with msg: unable to evaluate 
> dictionary for options.
>
> Do you have any tips on the fix or are you able to send a working 
> role/playbook that I can use as an example?
>
> Either way, I appreciate you posting your fault and some success online!   
> :)
>
>
>
> On Saturday, July 4, 2015 at 3:52:09 PM UTC+12, Michael Goodness wrote:
>>
>> Well, apparently I hadn't tried everything yet. I was able to get a 
>> successful run by including the options parameter in the creation task, 
>> rather than keeping them separate. I also had to double-quote the expansion 
>> in my playbook, and single-quote the integers in my variable dictionary:
>>
>> # file: group_vars/gluster_hosts
>>> ...
>>> options: { group: virt, storage.owner-uid: '36', storage.owner-gid: '36' 
>>> }
>>> ...
>>
>>
>> (Note the corrections to owner-uid and owner-gid.)
>>
>> # file: roles/gluster/tasks/main.yml
>>> ...
>>> - name: Create gluster volumes
>>>   sudo: true
>>>   gluster_volume:
>>> name={{ item.name }}
>>> brick="{% for brick in item.bricks %}{{ brick.mount }}/{{ item.name 
>>> }}{% if not loop.last %},{% endif %}{% endfor %}"
>>> cluster="{{ groups.gluster_hosts|join(',') }}"
>>> options="{{ item.options }}"
>>> state=present
>>>   with_items: gluster_volumes
>>>   tags: gluster
>>> ...
>>
>>
>> Seems like there's a bug in the module, and the documentation is 
>> definitely incorrect. I'll experiment more and see if I can offer a fix.
>>
>>
>> On Friday, July 3, 2015 at 10:33:24 PM UTC-5, Michael Goodness wrote:
>>>
>>> Greetings, Ansiblites!
>>>
>>> I'm attempting to write a role that uses the gluster_volume module. All 
>>> is well right up until I try to specify an 'options' parameter. According 
>>> to the documentation, I'm to use "a dictionary/hash with 
>>> options/settings for the volume". The example given is as follows:
>>>
>>> gluster_volume: state=present name=test1 
 options='{performance.cache-size: 256MB}'
>>>
>>>
>>> I've matched that syntax in my playbook, and all I get is a complaint 
>>> from Ansible about quoting. I've tried just about every variation of 
>>> quoting I can think of: single, double, double-double, escaped, etc.. The 
>>> closest I (think) have gotten is by specifying the options as a variable, 
>>> then single-quoting the expansion:
>>>
>>> # file: group_vars/gluster_hosts
 ...
 options: { group: virt, storage.owner_uid: 36, storage.owner_gui: 36 }
 ...

>>>  
>>>
 # file: roles/gluster/tasks/main.yml
 ...
  gluster_volume: name={{ name }} options='{{ options }}' state=present
 ...
>>>
>>>
>>> Even then, the result is:
>>>
>>> msg: unable to evaluate dictionary for options
>>>
>>>
>>> Can anyone shed some light on this? I'm at my wit's end.
>>>
>>



[ansible-project] How can append values into host variable

2015-11-19 Thread Igor Cicimov
Try

- set_fact: locked_domains="{{ locked_domains }}{{ item.ID }}"

Also, using with_dict might be a better option than with_items in this case.
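
If locked_domains is meant to stay a proper list rather than a string, the 
append can also be written as a list concatenation (a sketch; the domains 
variable being looped over here is hypothetical):

- set_fact:
    locked_domains: "{{ locked_domains + [ item.ID ] }}"
  with_items: domains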



[ansible-project] Re: The file module failing when directory exists

2015-11-16 Thread Igor Cicimov
Absolutely sure. This should be easily reproducible if you are willing to try 
it.



Re: [ansible-project] The file module failing when directory exists

2015-11-16 Thread Igor Cicimov
Directory.

On Tuesday, November 17, 2015 at 9:21:51 AM UTC+11, Tim Fletcher wrote:
>
> On 16/11/15 07:06, Igor Cicimov wrote: 
> > Hi all, 
> > 
> > Currently a task like this: 
> > 
> > file: path=/data state=directory owner=root group=root mode=0755 
> > 
> > fails with the following error: 
> > 
> > failed: [localhost] => {"failed": true, "parsed": false} 
> > BECOME-SUCCESS-pquapqcakqrmffxonpxknqbulycqfmls 
> > Traceback (most recent call last): 
> >   File 
> > "/root/.ansible/tmp/ansible-tmp-1447656411.96-95869428580440/file", line 
> > 2012, in  
> > main() 
> >   File 
> > "/root/.ansible/tmp/ansible-tmp-1447656411.96-95869428580440/file", line 
> > 279, in main 
> > os.mkdir(curpath) 
> > OSError: [Errno 17] File exists: '/data' 
> > 
> > on Ansible 1.9.4. Doesn't it make more sense for the task to continue 
> > and just set the permissions instead of failing when the directory 
> > already exists? 
>
> Is it a directory or a file? 
>
> I have come across this error when a file existed which I requested be a 
> directory. 
>
>
>



[ansible-project] The file module failing when directory exists

2015-11-15 Thread Igor Cicimov
Hi all,

Currently a task like this:

file: path=/data state=directory owner=root group=root mode=0755

fails with the following error:

failed: [localhost] => {"failed": true, "parsed": false}
BECOME-SUCCESS-pquapqcakqrmffxonpxknqbulycqfmls
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1447656411.96-95869428580440/file", 
line 2012, in 
main()
  File "/root/.ansible/tmp/ansible-tmp-1447656411.96-95869428580440/file", 
line 279, in main
os.mkdir(curpath)
OSError: [Errno 17] File exists: '/data'

on Ansible 1.9.4. Doesn't it make more sense for the task to continue and 
just set the permissions instead of failing when the directory already 
exists?

Thanks,
Igor



[ansible-project] new AWS EC2 instance created using ansible does not use SSD

2015-10-28 Thread Igor Cicimov
Probably it is not supported; I can't see any examples in the docs 
referring to /dev/sda1 in the volumes section.

What you can do, though, is go to your EC2 console and set the gp2 volume type 
as the default when launching new instances.



[ansible-project] Re: Dynamic deployment to AWS w/ Groups

2015-10-26 Thread Igor Cicimov
Hey Rob,

Well, you definitely need a dynamic inventory, which in the case of AWS is 
provided by the *ec2.py* script. I guess you have already read up on how to 
set this up; some helpful links below:

http://docs.ansible.com/ansible/intro_dynamic_inventory.html#example-aws-ec2-external-inventory-script
https://aws.amazon.com/blogs/apn/getting-started-with-ansible-and-dynamic-amazon-ec2-inventory-management/

So basically you download the script from 
https://raw.github.com/ansible/ansible/devel/contrib/inventory/ec2.py, make 
it executable and set it as */etc/ansible/hosts* (please take backup first 
of your existing file). Then you just drop 
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
 
to */etc/ansible/ec2.ini* and you are good to go.

This is the way I have it set up. In this way I don't even have to provide an 
inventory file when I'm running some default stuff, since Ansible by default 
reads the /etc/ansible/hosts file. So instead of:

$ ansible-playbook -i /some/inventory/path some-playbook.yml

I just run:

$ ansible-playbook some-playbook.yml

and Ansible understands by default that I want to run against the dynamic 
inventory.

So, this dynamic inventory will be the one that provides you with all 
the groups you want to work with later. After running your creation 
playbook for the first time, the next time you run a playbook it will have the 
new instances included in the inventory for you, nicely sorted under the 
appropriate groups. There will be groups available for zones, subnets, 
security groups, tags etc. If you run the */etc/ansible/hosts* file 
(which is actually the ec2.py script renamed in our case) manually, it will 
give you a screen output of the whole inventory, including the groups you 
have at your disposal.

To conclude, you do not need an inventory for your instance creation playbook, 
but you DO need it for any other playbook you want to run against already 
created instances, like the config playbook for example. 

Now about the variables. True, you can set the variables in group_vars, 
but the problem is you need them to be dynamic, so you need to modify 
them at runtime. You can have some default values, let's say, in your 
group_vars file like:

var1: 1

and then change that accordingly depending on the env you are running 
against:

  - set_fact:
      var1: "{%- if my_env|lower == 'prod' -%}5{%- endif -%}"

OR you can have a different set of variables per environment, which might 
become more messy since you will have to maintain different files and 
possibly directories. That's why my recommendation was to keep it simple to 
start with and do it all in a single playbook using the above logic. The 
bottom line is you will have to set those variables somewhere and somehow; 
it is up to you how you want to do it. Then later, when you feel more 
comfortable with Ansible, you might start breaking it down into roles, 
dependencies, different inventory dirs per environment, upload it to 
GitHub, etc.

Just my 2 cents.

Cheers,
Igor

On Tuesday, October 27, 2015 at 12:26:17 AM UTC+11, Rob Wilkerson wrote:
>
> Thanks, Igor. This has been great. It sounds like you're saying I 
> can't/shouldn't use an inventory file at all, but explicitly pass an extra 
> var  for the environment. Rather than dropping conditionals everywhere I 
> need env-specific values, is there any way to get Ansible to use that env 
> value to automatically read variables from a file? Originally, it seemed 
> like group_vars made the most sense, but if an inventory file doesn't make 
> sense, I'm not sure group_vars make sense either.
>
> Thanks again.
>
> On Sunday, October 25, 2015 at 9:18:09 PM UTC-4, Igor Cicimov wrote:
>>
>> Well as I expected this might be little bit confusing to a beginner. So 
>> basically the answer to your question was: tag the instances you create 
>> based on the environment you are creating them in and then use that tag for 
>> the configuration task(s).
>>
>> Which means you need to base the exact_count on the parameter you want to 
>> group on, in this case the environment tag:
>>
>> exact_count: "5"
>> count_tag:
>>   Env: "prod"
>>
>> In this way when ever you run the creation playbook it will check if 
>> there are exactly 5 instances with tag named Env and value of prod and if 
>> no will create them and if yes will skip the creation task.
>>
>> There aren't any "tons" of input vars in my example, it is just the 
>> environment tag name you want to run the playbook for. Very simple.
>>
>> For the configuration playbook as I said you just need to use:
>>
>> - hosts: tag_Env_<prod|staging|dev>
>>
>> to configure the ec2 instances you have created for the environment.
>>
>> Sorry I really don't know how to better 

[ansible-project] Re: Dynamic deployment to AWS w/ Groups

2015-10-25 Thread Igor Cicimov
Arbab is right, you will have to add them to a new group and use that in 
the next play in the same playbook. I went overboard trying to simplify so 
it does not confuse the beginner folks that might read this post.

Anyway, the point is: in AWS the tags are very powerful in terms of 
automation so tag everything and tag as much as possible and use that later 
to your own advantage.
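
For reference, a minimal sketch of that add_host approach (assuming the 
creation task registered its result as ec2, as in the example further down):

- name: add the new instances to an in-memory group
  add_host:
    name: "{{ item.public_ip }}"
    groups: just_created
  with_items: ec2.instances

- hosts: just_created
  tasks:
    - debug: msg="configure the new instance here"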

On Sunday, October 25, 2015 at 12:31:29 PM UTC+11, Arbab Nazar wrote:
>
>  Excellent method but you cannot use dynamic inventory group Ansible 
> creates based on Tags:
>
> - hosts: tag_Env_prod
> ...
>
> - hosts: tag_Env_dev
> ...
>
> - hosts: tag_Env_stage
> ...
>
> in the same playbook because it will give an error that the group doesn't 
> exist or has no host inside the group; instead you can use the add_host module. 
> Thanks 
>
> On Saturday, October 24, 2015 at 8:43:17 PM UTC-4, Igor Cicimov wrote:
>>
>> You don't need any groups at all I would say, just use the --extra-vars 
>> parameter when launching the playbook to tell it what kind of instances you 
>> want to launch. For example:
>>
>> --extra-vars '{"my_env":"prod"}'
>>
>> then in the playbook you evaluate this variable and set some facts 
>> accordingly:
>>
>>   - set_fact:
>>   var1: |
>>{%- if my_env|lower == 'prod' -%}
>>5
>>{%- elif my_env|lower == 'dev' -%}
>>1
>>{%- endif -%}
>>var2: |
>>...
>>
>> and use those vars as you wish in the tasks section.
>>
>> Make sure you have jinja2 extensions enabled in your ansible.cfg file 
>> though:
>>
>> jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
>>
>> Now, when you create your ec2 instances you tag them with your 
>> environment variable given as input, per your example:
>>
>>  instance_tags:
>> Name: Demo
>> Env: "{{ my_env }}"
>>
>> so you can later use dynamic inventory group Ansible creates for you 
>> based on Tags:
>>
>> - hosts: tag_Env_prod
>> ...
>>
>> - hosts: tag_Env_dev
>> ...
>>
>> - hosts: tag_Env_stage
>> ...
>>
>> in the same or a new playbook depending on your requirements.
>>
>> Hope I'm getting your question right 
>>
>> Cheers,
>> Igor
>>
>> On Saturday, October 24, 2015 at 2:27:03 AM UTC+11, Rob Wilkerson wrote:
>>>
>>> I'd like to be able to create an Ansible script that launches (or 
>>> updates) 1 or more EC2 instances depending on the group (dev, staging, 
>>> prod) that I'm deploying. In looking through the examples and the AWS 
>>> guide, the EC2 module and this dynamic inventory stuff, I feel like I'm 
>>> missing something: How do I execute this script against a particular 
>>> inventory group if/when I have no inventory file because of the dynamic 
>>> inventory recommendation?
>>>
>>> Hopefully that makes some sense. As an example, I have a playbook with 2 
>>> roles: launch and provision. The launch role begins with this code from one 
>>> of the documented examples:
>>>
>>> - name: Provision a set of instances
>>>   ec2:
>>>  key_name: "{{ aws_keypair }}"
>>>  group: test
>>>  instance_type: t2.micro
>>>  image: "{{ ami_id }}"
>>>  wait: true
>>>  exact_count: 5
>>>  count_tag:
>>> Name: Demo
>>>  instance_tags:
>>> Name: Demo
>>> register: ec2
>>>
>>> Using this example, based on the inventory group, the exact_count value 
>>> will be different, as will the key_name and perhaps other values. How do I 
>>> read those from the right place since they would be group_vars? In 
>>> development, exact_count would be 1 while in production it would be at 
>>> least 2.
>>>
>>> Thanks.
>>>
>>



[ansible-project] Re: Dynamic deployment to AWS w/ Groups

2015-10-25 Thread Igor Cicimov
Well, as I expected, this might be a little bit confusing to a beginner. So 
basically the answer to your question was: tag the instances you create 
based on the environment you are creating them in, and then use that tag for 
the configuration task(s).

Which means you need to base the exact_count on the parameter you want to 
group on, in this case the environment tag:

exact_count: "5"
count_tag:
  Env: "prod"

In this way, whenever you run the creation playbook it will check if there 
are exactly 5 instances with a tag named Env and a value of prod; if not, it 
will create them, and if so, it will skip the creation task.

There aren't any "tons" of input vars in my example, it is just the 
environment tag name you want to run the playbook for. Very simple.

For the configuration playbook as I said you just need to use:

- hosts: tag_Env_<prod|staging|dev>

to configure the ec2 instances you have created for the environment.

Sorry, I really don't know how to explain this better to a beginner. Don't 
get discouraged though; keep working, and when you come back to this post in a 
couple of weeks, once you have mastered Ansible, it will all be crystal clear :-)
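
Putting the pieces together, the creation task for one environment could look 
roughly like this (a sketch built from the example in the original post plus 
the count_tag above; the actual values are illustrative):

- name: Provision the prod instances
  ec2:
    key_name: "{{ aws_keypair }}"
    group: test
    instance_type: t2.micro
    image: "{{ ami_id }}"
    wait: true
    exact_count: 5
    count_tag:
      Env: "prod"
    instance_tags:
      Name: Demo
      Env: "prod"
  register: ec2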

On Monday, October 26, 2015 at 5:51:09 AM UTC+11, Rob Wilkerson wrote:
>
> I've been running this over in my head and I think I'm more confused than 
> ever. I don't really care whether I specifically use dynamic inventory or 
> any other specific technique. I care that my script is idempotent, 
> reasonably simple to execute for any developer and somewhat consistent with 
> Ansible recommendations. \
>
> If I run the staging deployment, it should create any instances (to meet 
> the exact_count) that don't exist, update any configuration deltas in those 
> that do exist and then deploy any projects to those servers.
>
> I guess the bottom line question is simply, what's the appropriate 
> strategy to make this work? Within that context, I do still need variables 
> specific to each group/environment so the question I asked Arbab still 
> stands. I really don't want to force developers to drop a ton of "extra 
> vars" when the deploy. I'd like to keep that command line execution as dead 
> simple as possible:
>
> # Run the deploy playbook against environment X
> ansible-playbook -i staging deploy.yml
>
> I really do appreciate the help with this. Learning Ansible has been 
> fairly simple, but figuring out how to organize for a non-trivial 
> infrastructure has definitely been a challenge.
>
> On Sunday, October 25, 2015 at 2:50:21 AM UTC-4, Igor Cicimov wrote:
>>
>> Arbab is right, you will have to add them to a new group and use that in 
>> the next play in the same playbook. I went overboard trying to simplify so 
>> it does not confuse the beginner folk that might read this post.
>>
>> Anyway, the point is: in AWS the tags are very powerful in terms of 
>> automation so tag everything and tag as much as possible and use that later 
>> to your own advantage.
>>
>> On Sunday, October 25, 2015 at 12:31:29 PM UTC+11, Arbab Nazar wrote:
>>>
>>>  Excellent method but you cannot use dynamic inventory group Ansible 
>>> creates based on Tags:
>>>
>>> - hosts: tag_Env_prod
>>> ...
>>>
>>> - hosts: tag_Env_dev
>>> ...
>>>
>>> - hosts: tag_Env_stage
>>> ...
>>>
>>> in the same playbook because it will give an error that the group doesn't 
>>> exist or has no host inside the group; instead you can use the add_host module. 
>>> Thanks 
>>>
>>



[ansible-project] Re: apt update failing on ubuntu-14.04 in ec2

2015-10-25 Thread Igor Cicimov
Thanks Mark for confirming this. Now off to update all of my playbooks :-(

On Saturday, October 24, 2015 at 7:36:45 AM UTC+11, Mark McWilliams wrote:
>
> I am noticing exactly the same thing. Like you, it just started happening 
> in the last few days. I believe its related to Canonical's Ubuntu AMI that 
> was updated on October, 19th. This issue only started for me after 
> switching to this updated AMI. I've been able to mostly workaround it by 
> having an explicit apt cache update task with a retry:
>
> - name: Update apt cache
>   apt: update_cache=yes
>   register: result
>   until: result|success
>   retries: 10
>
> The cache update often fails on the first attempt, and then succeeds on 
> the next. This happens frequently, but inconsistently any time I launch a 
> new instance with this AMI.
>
> On Thursday, October 22, 2015 at 6:28:54 PM UTC-7, Igor Cicimov wrote:
>>
>> Hi all,
>>
>> The following task:
>>
>> - name: Update apt
>>   apt: update_cache=yes cache_valid_time=3600
>>   when: ansible_os_family == "Debian"
>>
>> suddenly failed today on the newly created ec2 host after running 
>> successfully for a long long time.
>>
>> The trace is given below:
>>
>> Traceback (most recent call last):
>>   File "", line 2258, in 
>>   File "", line 554, in main
>>   File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 107, in 
>> __init__
>> self.open(progress)
>>   File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 153, in open
>> self._records = apt_pkg.PackageRecords(self._cache)
>> SystemError: E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_multiverse_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_main_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_multiverse_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_universe_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_restricted_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_multiverse_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_main_i18n_Translation-en
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_multiverse_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_restricted_binary-amd64_Packages
>>  
>> - open (2: No such file or directory), E:Could not open file 
>> /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages
>>  
>> - open (2: No such file or directory)
>>
>>
>> FATAL: all hosts have already failed -- aborting
>>
>> - name: Update apt
>>   apt: update_cache=yes cache_valid_time=3600
>>   when: ansible_os_family == "Debian"
>>
>> Of course 

[ansible-project] Running parallely a playbook on the same host but with different values of the same extra var

2015-10-24 Thread Igor Cicimov
You mean like:

for i in value1 value2; do ansible-playbook -i inventory/myhost playbook/test.yml --extra-vars "var=$i" & done



[ansible-project] apt update failing on ubuntu-14.04 in ec2

2015-10-22 Thread Igor Cicimov
Hi all,

The following task:

- name: Update apt
  apt: update_cache=yes cache_valid_time=3600
  when: ansible_os_family == "Debian"

suddenly failed today on the newly created ec2 host after running 
successfully for a long long time.

The trace is given below:

Traceback (most recent call last):
  File "", line 2258, in 
  File "", line 554, in main
  File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 107, in 
__init__
self.open(progress)
  File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 153, in open
self._records = apt_pkg.PackageRecords(self._cache)
SystemError: E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_multiverse_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_main_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_multiverse_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_universe_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_restricted_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_multiverse_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_main_i18n_Translation-en
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_multiverse_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_restricted_binary-amd64_Packages
 
- open (2: No such file or directory), E:Could not open file 
/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages
 
- open (2: No such file or directory)


FATAL: all hosts have already failed -- aborting

- name: Update apt
  apt: update_cache=yes cache_valid_time=3600
  when: ansible_os_family == "Debian"

Of course the named archive files do not exist since this is a Amazon 
mirror:

$ ls -l /var/lib/apt/lists/
total 111780
-rw-r--r-- 1 root root  8234934 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages
-rw-r--r-- 1 root root  4149211 Apr 15  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_main_i18n_Translation-en
-rw-r--r-- 1 root root  595 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_main_source_Sources
-rw-r--r-- 1 root root58512 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_Release
-rw-r--r-- 1 root root  933 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_Release.gpg
-rw-r--r-- 1 root root 31726252 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-amd64_Packages
-rw-r--r-- 1 root root 18635427 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_universe_i18n_Translation-en
-rw-r--r-- 1 root root 27857155 May  8  2014 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty_universe_source_Sources
-rw-r--r-- 1 root root64439 Oct 22 23:50 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty-updates_InRelease
-rw-r--r-- 1 root root  4195440 Oct 22 23:50 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages
-rw-r--r-- 1 root root  3256427 Oct 22 18:40 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty-updates_main_i18n_Translation-en
-rw-r--r-- 1 root root  1243126 Oct 22 23:50 
ap-southeast-2.ec2.archive.ubuntu.com_ubuntu_dists_trusty-updates_main_source_Sources
-rw-r--r-- 1 root root  1913553 Oct 22 23:50 

[ansible-project] Send payload using uri module

2015-10-20 Thread Igor Cicimov
Well first of all if you check the module docs 
http://docs.ansible.com/ansible/uri_module.html you can see that body_format 
was introduced in ansible 2.0 so you can't use it in 1.9.1

It also shows an example of supplying the json file to the body parameter that 
looks much cleaner than what you are trying to do:

body: "{{ lookup('file','issue.json') }}"

Have you tried that?
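
For 1.9.x that would look roughly like this (a sketch; the URL and expected 
status code are placeholders, and issue.json is the file from the module 
example):

- name: send payload from a file
  uri:
    url: https://api.example.com/issues
    method: POST
    HEADER_Content-Type: "application/json"
    body: "{{ lookup('file','issue.json') }}"
    status_code: 201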



Re: [ansible-project] How to create multiple folders with loop in ansible

2015-10-03 Thread Igor Cicimov
You have a typo here:

register: dies

It should be:

register: dirs



[ansible-project] - Include Help. Include at start of playbook

2015-09-29 Thread Igor Cicimov
- hosts: all
  sudo:  yes
  tasks:
   - include: ../register_katello/register-katello.yml
   - name: TEST
     shell: echo "TEST"



[ansible-project] Launching multiple ec2 instances in different AZs playbook issue.

2015-09-18 Thread Igor Cicimov
Shouldn't that be ec2.results.instances instead of ec2.result.instances ?
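
When the ec2 task itself runs in a loop (for example once per AZ), the 
registered variable holds a results list with one entry per iteration, so a 
sketch of inspecting the launched instances would be:

- debug: var=item.instances
  with_items: ec2.results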



[ansible-project] Re: register and looping

2015-08-05 Thread Igor Cicimov
OK, I wrote this simple playbook just to test it:

---
- hosts: localhost
  gather_facts: false
  connection: local
  sudo: false
  tasks:
   - set_fact:
       vard: Chassis Power is on

   - shell: echo {{ vard }} | grep 'Chassis Power is on'
     ignore_errors: yes
     register: vard_result

   - debug: msg=Found it!
     when: vard_result.rc == 0
     with_items: vard_result.stdout_lines

   - debug: msg=Found it again!
     when: vard_result.stdout.find('Chassis Power is on') != -1

   - debug: msg=Found it again and again!
     when: "'Chassis Power is on' in vard"

the run output:

$ ansible-playbook -i local tt4.yml

PLAY [localhost] 
** 

TASK: [set_fact ] 
* 
ok: [localhost]

TASK: [shell echo {{ motd }} | grep 'Chassis Power is on'] 
 
changed: [localhost]

TASK: [debug msg=Found it!] 
* 
ok: [localhost] = (item=Chassis Power is on) = {
item: Chassis Power is on,
msg: Found it!
}

TASK: [debug msg=Found it again!] 
*** 
ok: [localhost] = {
msg: Found it again!
}

TASK: [debug msg=Found it again and again!] 
* 
ok: [localhost] = {
msg: Found it again and again!

So no surprises here; I can match the string in 3 different ways. This is 
based on the result of running ipmitool on one of my servers:

root@virtual:~# ipmitool power status
Chassis Power is on
root@virtual:~#

which shows a single line being returned upon execution.

Now the only thing I can think of is that this might be different in your 
case. Do you mind showing us the output of your playbook run? I'm 
especially interested to see the output of

debug: var=power.stdout


Cheers,
Igor

On Wednesday, August 5, 2015 at 6:20:47 PM UTC+10, kevin parker wrote:

 *contents of input.yml*

 ---

 computeserver1:
   - name: compute4
 ipaddress: 192.168.211.251
 console: 192.168.211.10
 consoleuser: administrator
 consolepassword: 1


 computeserver2:
   - name: compute5
 ipaddress: 192.168.211.253
 console: 192.168.211.11
 consoleuser: administrator
 consolepassword: 1

 with out register: everything works but i want to take action based on the 
 result returned by ipmi.So i am trying register: to save result of ipmi and 
 then based on the result ,sending ipmi reset/ON for a set of servers.

 On Wednesday, August 5, 2015 at 9:03:36 AM UTC+5:30, Igor Cicimov wrote:

 Can you share the structure of the computeserver1 or even better the 
 content of the input.yml file?

 I guess you have confirmed that executing the same ipmi command from the 
 Ansible station manually works properly?

 On Tuesday, August 4, 2015 at 3:06:49 AM UTC+10, kevin parker wrote:

 i am trying to use ipmitools to check power status and based on the 
 result i will start/reset the server.But i am not able to continue as 
 register: and looping are not working .Is there any alternate approach for 
 achieving below? 



 ---
 - hosts: compute
   gather_facts: no
   vars_files:
 - input.yml

   tasks:

- name: check Power status of target
  local_action:  command ipmitool -I lanplus -H {{item.console}} -U 
 {{item.consoleuser}} -P {{item.consolepassword}} power status
  with_items:
- {{ computeserver1 }}
  when: item.console and item.consoleuser and item.consolepassword is 
 defined
  register: power

- name: check Debug
  debug: var=power.stdout

- name: Power Reset
  local_action: command ipmitool -I lanplus -H {{item.console}} -U 
 {{item.consoleuser}} -P {{item.consolepassword}} power reset
  with_items:
- {{ computeserver1 }}
  when: item.console and item.consoleuser and item.consolepassword is 
 defined and power.stdout.find('Chassis Power is on') != -1

- name: Power On
  local_action: command ipmitool -I lanplus -H {{item.console}} -U 
 {{item.consoleuser}} -P {{item.consolepassword}} power on
  with_items:
- {{ computeserver1 }}
  when: item.console and item.consoleuser and item.consolepassword is 
 defined and power.stdout.find('Chassis Power is off') != -1

  Thanks for any help





[ansible-project] Re: register and looping

2015-08-04 Thread Igor Cicimov
Can you share the structure of the computeserver1 or even better the 
content of the input.yml file?

I guess you have confirmed that executing the same ipmi command from the 
Ansible station manually works properly?

On Tuesday, August 4, 2015 at 3:06:49 AM UTC+10, kevin parker wrote:

 i am trying to use ipmitools to check power status and based on the 
 result i will start/reset the server.But i am not able to continue as 
 register: and looping are not working .Is there any alternate approach for 
 achieving below? 



 ---
 - hosts: compute
   gather_facts: no
   vars_files:
 - input.yml

   tasks:

- name: check Power status of target
  local_action:  command ipmitool -I lanplus -H {{item.console}} -U 
 {{item.consoleuser}} -P {{item.consolepassword}} power status
  with_items:
- {{ computeserver1 }}
  when: item.console and item.consoleuser and item.consolepassword is 
 defined
  register: power

- name: check Debug
  debug: var=power.stdout

- name: Power Reset
  local_action: command ipmitool -I lanplus -H {{item.console}} -U 
 {{item.consoleuser}} -P {{item.consolepassword}} power reset
  with_items:
- {{ computeserver1 }}
  when: item.console and item.consoleuser and item.consolepassword is 
 defined and power.stdout.find('Chassis Power is on') != -1

- name: Power On
  local_action: command ipmitool -I lanplus -H {{item.console}} -U 
 {{item.consoleuser}} -P {{item.consolepassword}} power on
  with_items:
- {{ computeserver1 }}
  when: item.console and item.consoleuser and item.consolepassword is 
 defined and power.stdout.find('Chassis Power is off') != -1

  Thanks for any help




[ansible-project] Re: Problem with parameterizing role

2015-08-01 Thread Igor Cicimov
Roger, the documentation you are quoting is perfectly fine; it shows you 
how to invoke the same role with different parameters. What you are then 
doing inside the role is wrong. Those are two different things, right?

You need to understand how playbooks and roles work in Ansible, and Paul's 
comment does a good job of helping you with that. Read it again carefully.

On Sunday, August 2, 2015 at 2:11:48 AM UTC+10, Roger Sherman wrote:

 Ok, so what you’re telling me is the documentation is wrong? Because 
 that’s literally exactly what it’s saying to do - if you look at my 
 original post, you’ll see I quoted what the documentation says to do.

 I’m not saying it’s right and you’re wrong. If it’s wrong, that’s what I 
 want to hear.

 Thank you,

 Roger Sherman
 public key - A3068658

 On Jul 31, 2015, at 7:47 PM, Paul Markham pa...@netrefinery.com 
 javascript: wrote:

 You're executing the role twice, each time with different parameters. If 
 the role writes a single file each time, you end up with two files. The way 
 you've got it now, the role will write both files each time it's executed; 
 the result is what you're seeing: both files have the same content, which 
 comes from the second call to the role.






[ansible-project] Re: Problem with parameterizing role

2015-07-30 Thread Igor Cicimov
You forgot to include a condition in the tasks, for example: 

- name: Template to 1st server.xml
  template: src=server.xml.j2 dest=/var/opt/tomcat_1/conf/server.xml 
owner=tomcat7 group=nogroup mode=0600
  when: port == 5000
 
- name: Template to 2nd server.xml
  template: src=server.xml.j2 dest=/var/opt/tomcat_2/conf/server.xml 
owner=tomcat7 group=nogroup mode=0600
  when: port == 5001

otherwise each will get executed on each run, of course.
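
Alternatively, since the documentation example quoted below already passes a 
dir parameter to the role, the role can use that instead of hard-coding the 
two paths (a sketch, assuming the role is invoked with dir and port set as in 
that example):

- name: Template server.xml for this instance
  template: src=server.xml.j2 dest={{ dir }}/conf/server.xml owner=tomcat7 group=nogroup mode=0600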

On Thursday, July 30, 2015 at 4:23:41 AM UTC+10, Roger Sherman wrote:

 I have a group of hosts that have two tomcat instances on them, and I need 
 the server.xml to have different values in each instance. Initially, I 
 tried to write a jinja2 loop into a template, which I bailed on when 
 someone in the IRC channel pointed out parameterizing the role (from 
 http://docs.ansible.com/ansible/playbooks_roles.html - there's not a 
 direct anchor, so I'll quote):

 Also, should you wish to parameterize roles, by adding variables, you can 
 do so, like this:

 - hosts: webservers
 roles:
 - common
 - { role: foo_app_instance, dir: '/opt/a', port: 5000 } 
 - { role: foo_app_instance, dir: '/opt/b', port: 5001 } 

  But when I try the above, both server.xml's end up with the same value. 
 It looks like each subsequent pass through the role puts it's value in both 
 directories, so after the second time it runs, both server.xml's have the 
 second value. I pastebinned my playbook, role, and the line from the 
 template: http://pastebin.com/eDMQkY3H

 If anybody could help me with this, or tell me what I'm doing wrong, I'd 
 greatly appreciate it.




[ansible-project] Re: ansible ec2_facts returns false data (if there is NAT on the system level; This is ok if You use AWS router interface gateway)

2015-07-14 Thread Igor Cicimov
I'm using Ansible with AWS VPCs, most of which have public and 
private subnets, and I have never had the problem you are seeing. This is 
definitely a misconfiguration on your side and nothing to do with Ansible. 
The ec2_facts module is doing the right thing; there is no other way of collecting 
data except querying the meta-data repository, which is what the AWS CLI 
tools do anyway. Meaning you will get the same wrong data using the AWS CLI as well. 
Don't forget you are in the cloud and your networking is configured at the 
hypervisor/SDN level and NOT at the instance level. You can create as 
many network interfaces as you want at the instance level and set IPs on those, 
but none of them will work since you have bypassed the SDN and there is no 
record of them in the meta-data repository. Which finally means that 
collecting facts locally on the instance really means nothing if those 
values don't match what is in the meta-data repository.

Now that we have that cleared up, let's move on to your problem, which looks to me 
like AWS routing tables, or more specifically the lack of them. For an instance 
to be in a private subnet it needs a routing table separate from the VPC's 
default one (which gets an IGW created for you when the VPC is created), one 
that uses the NAT instance as its internet gateway. And that is all you 
need; you don't have to set any routing tables at the system level, the SDN 
will route the traffic for you.
 
Hope this makes sense. Since you haven't provided any info about your 
subnets, routing tables, ACLs etc., this is more of a guess at what's going on, 
so please correct my assumptions if needed.

Thanks,
Igor

On Tuesday, July 14, 2015 at 10:16:49 PM UTC+10, sirkubax wrote:

 *THE PROBLEM:*
 I've just realised why sometimes my playbook fills the template with false 
 data

 This happens, when the instance is in my VPC subnet (with internet 
 gateway), while in configuration there is *NAT route table on the system 
 level*, then *reguest to the internet goes through NAT instance *and the 
 AWS response is *covered.*
 Then the* NAT_instance facts *are *returned*, NOT the current_instance 
 facts about.


 *THE DEBUGGING:*

 If You look into the code, the ec2_facts fetch a bunch of requests to

 'http://169.254.169.254/latest/meta-data'


 in Example:

 curl http://169.254.169.254/latest/meta-data/local-ipv4
 *172.16.0.200*


 while* real data* is

 eth0: ***
 inet *172.16.0.110*/24 brd 172.16.0.255 scope global eth0


 THE INSTANCE CONFIGURATION:

 $ ip r
 default via 172.16.0.200 dev eth0 
 172.16.0.0/24 dev eth0  proto kernel  scope link  src 172.16.0.110 
 172.16.0.0/16 via 172.16.0.1 dev eth0 

$ ip a 

 eth0: ***
 inet *172.16.0.110*/24 brd 172.16.0.255 scope global eth0



 If You keep remote files, You can check it Yourself

 export ANSIBLE_KEEP_REMOTE_FILES=1

 and then 

 python 
 /home/ubuntu/.ansible/tmp/ansible-tmp-1436872330.49-72199016469620/ec2_facts

 will return as one of the facts:
 ansible_ec2_local_ipv4: 172.16.0.200,
 (or run a curl)

 curl http://169.254.169.254/latest/meta-data/local-ipv4


 *THE CURRENT WORKAROUND:*

1. do NOT use (in *roles* nor *tasks*)
   1. - action: ec2_facts
   2. DRAWBACKS:
      1. You will not have some variables available (the *ansible_ec2_** 
         facts will be unavailable)
      2. You will have only the *ec2_** facts from your *LOCAL* inventory 
         cache (ec2.py, if I'm correct)
      3. If You add (gather_facts: True) in the playbook then You can 
         also use the *ansible_** facts gathered by the *setup* module
         1. so instead of *ansible_ec2_local_ipv4* You can use 
            *ansible_eth0['ipv4']['address']*
      4. *BUT* this can bring some problems when You have a role that 
         expects some variable (example: ansible_hostname) while in the 
         playbook You have disabled system fact gathering (gather_facts: 
         False) - You will have to be careful
      5. *OR* You would like to access some AWS variable, independent 
         from Your LOCAL cache
2. configure your VPC routing tables so they point to the 
   NAT-instance interface, rather than an IP address
   1. 0.0.0.0/0  eni-xxx / i-xxx
      1. instead of:
         1. 0.0.0.0/0  igw-z  + system routing tables
   2. Then You do not have to override the routing table on the system 
      level
   3. You rely on the AWS Router
   4. DRAWBACKS
      1. You will have to change the routing table in the VPC, 
         pointing it to the other physical interface, when Your NAT 
         instance shuts down
         1. vs
      2. If kept with the system routing table, You will launch a new 
         NAT-instance with the old IP address attached
   
 *QUESTIONS / CONCLUSION:*

1. Be aware of the ec2_facts limitation
2. If possible - rely on the Amazon Routing Table
   1. How do You prevent SPOF in Your VPC subnets?
   2. What is Your best practice to configure VPC subnets (private and 
 

[ansible-project] Run Docker container as ordinary user

2015-07-14 Thread Igor Cicimov
sudo: false
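
For context, a minimal sketch of where that setting goes (play-level privilege 
escalation turned off); the docker invocation itself is just an illustrative 
placeholder, and the remote user would still need rights to talk to the docker 
daemon, e.g. membership of the docker group:

- hosts: dockerhosts
  sudo: false                 # run tasks as the connecting user, no root escalation
  tasks:
    - name: start a container as the ordinary user
      command: docker run -d busybox sleep 3600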



[ansible-project] Re: ansible ec2_facts returns false data (if there is NAT on the system level; This is ok if You use AWS router interface gateway)

2015-07-14 Thread Igor Cicimov
Have to correct myself, you do provide the subnet information. So in answer 
to your questions/conclusions, the way I do it is:

- Use a private routing table for the private subnets, pointing to the NAT as 
the IGW
- Use 2 x NAT instances and a NAT takeover script that modifies the 
private subnets' routing table and points the IGW to itself in case the 
other NAT instance has failed

On Wednesday, July 15, 2015 at 10:21:38 AM UTC+10, Igor Cicimov wrote:

 I'm using Ansible with AWS VPC's, where most of them have public and 
 private subnets, and have never had the problem you are seeing. This is 
 definitely a misconfiguration on your side and nothing to do with Ansible. 
 The ec2_facts is doing the right thing, there is no other way of collecting 
 data except querying the meta-data repository which is what the AWS CLI 
 tools do anyway. Meaning you will get wrong data using AWS CLI as well. 
 Don't forget you are in the cloud and your networking is configured in the 
 hypervisor/SDN level and NOT on instance level. Meaning you can create as 
 many network interfaces as you want on instance level and set IP's on those 
 but none of them will work since you have bypassed the SDN and there is no 
 record of those in the meta-data repository. Which finally means that 
 collecting facts on the instance locally really means nothing if those 
 values don't match what is in the meta-data repository.

 Now that we have that cleared up, let's move on to your problem, which looks to 
 me like AWS routing tables, or more specifically the lack of them. For an 
 instance to be in a private subnet it needs a separate routing table from the 
 VPC's default one (which has the IGW, internet gateway, set as its route when 
 the VPC was created), one that has the NAT instance as its gateway instead. And 
 that is all you need, you don't have to set any routing tables on the system 
 level, the SDN will route the traffic for you.
  
 Hope this makes sense. Since you haven't provided any info about your 
 subnets, routing tables, ACLs etc. this is more of a guess at what's going on, 
 so please correct my assumptions if needed.

 Thanks,
 Igor

 On Tuesday, July 14, 2015 at 10:16:49 PM UTC+10, sirkubax wrote:

 *THE PROBLEM:*
 I've just realised why sometimes my playbook fills the template with 
 false data

 This happens when the instance is in my VPC subnet (with an internet 
 gateway), while in the configuration there is a *NAT route table on the system 
 level*; then the *request to the internet goes through the NAT instance* and the 
 AWS response is *covered*.
 Then the *NAT_instance facts* are *returned*, NOT the facts about the 
 current_instance.


 *THE DEBUGGING:*

 If You look into the code, the ec2_facts fetch a bunch of requests to

 'http://169.254.169.254/latest/meta-data'


 in Example:

 curl http://169.254.169.254/latest/meta-data/local-ipv4
 *172.16.0.200*


 while* real data* is

 eth0: ***
 inet *172.16.0.110*/24 brd 172.16.0.255 scope global eth0


 THE INSTANCE CONFIGURATION:

 $ ip r
 default via 172.16.0.200 dev eth0 
 172.16.0.0/24 dev eth0  proto kernel  scope link  src 172.16.0.110 
 172.16.0.0/16 via 172.16.0.1 dev eth0 

$ ip a 

 eth0: ***
 inet *172.16.0.110*/24 brd 172.16.0.255 scope global eth0



 If You keep remote files, You can check it Yourself

 export ANSIBLE_KEEP_REMOTE_FILES=1

 and then 

 python 
 /home/ubuntu/.ansible/tmp/ansible-tmp-1436872330.49-72199016469620/ec2_facts

 will return as one of the facts:
 ansible_ec2_local_ipv4: 172.16.0.200,
 (or run a curl)

 curl http://169.254.169.254/latest/meta-data/local-ipv4


 *THE CURRENT WORKAROUND:*

1. do NOT use (in *roles *nor *tasks*)
   1. - action: ec2_facts
   2. DRAWBACKS:
  1. You will not have some variables available (*ansible_ec2_* 
  will be unavailable)*
  2. You will have only *ec2_* facts *from you LOCAL* inventory 
  cache (ec2.py* if I'm correct now)
  3. If You add in playbook (gather_facts: True) then You can 
  also use *ansible_* facts *gathered by *setup.py* module
 1. so instead of *ansible_ec2_local_ipv4* You can use 
 *ansible_eth0['ipv4]['address']*
  4. *BUT* this can bring some problems when You have a role 
  that expects some variable (example: ansible_hostname), but in the 
 playbook 
  You have disabled system fact gathering (gather_facts: 
  False) - You will have to be careful
  5. *OR* You would like to access some AWS variable, independent 
  from Your LOCAL cache
   2. configure you VPC routing tables so it will point to 
NAT-instance-interface, rather than IP address
   1. 0.0.0.0/0  eni-xxx / i-xxx
   1. instead of:
  1. 0.0.0.0/0  igw-z  + system routing tables
   2. Then You do not have to override the routing table on the 
   system level
   3. You rely on AWS Router
   4. DRAWBACKS
  1. You will have to change the routing table in the VPC

[ansible-project] Reload inventory variables inside playbook execution

2015-07-11 Thread Igor Cicimov
May I ask why you would not go with the usual way of adding the newly created 
instance to a new host group and then configuring it? As in the example from the 
ec2 module page:

- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    key_name: my_keypair
    instance_type: m1.small
    security_group: my_securitygroup
    image: my_ami_id
    region: us-east-1
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ key_name }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: subnet-29e63245
        assign_public_ip: yes
      register: ec2
    - name: Add new instance to host group
      add_host: hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- name: Configure instance(s)
  hosts: launched
  sudo: True
  gather_facts: True
  tasks:
    - include: config.yml
    - include: config2.yml




[ansible-project] selectattr with ansible

2015-07-05 Thread Igor Cicimov
I think the 'equalto' test is only available in the 2.8+ version of jinja2



[ansible-project] Re: selectattr with ansible

2015-07-05 Thread Igor Cicimov
This works for me:

---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    interfaces:
      - name: lo0
        unit0:
          ip_primary: 1.1.1.1
          ip_secondary: 2.2.2.2
        unit1:
          ip_primary: 3.3.3.3
          ip_secondary: 4.4.4.4
      - name: xyz
        unit0:
          ip_primary: 9.9.9.9
  tasks:
    - set_fact:
        myip: |
          {%- set ips = [] -%}
          {% for interface in interfaces if interface.name == 'lo0' %}
            {%- do ips.append(interface.unit0.ip_primary) -%}
          {%- endfor -%}
          {{ ips }}
    - debug: var=myip

$ ansible-playbook -i local test101.yml 

PLAY [localhost] 
** 

TASK: [set_fact ] 
* 
ok: [localhost]

TASK: [debug var=myip] 
 
ok: [localhost] => {
    "var": {
        "myip": [
            "1.1.1.1"
        ]
    }
}

PLAY RECAP 
 
set_fact  --- 
0.04s
debug var=myip -- 
0.00s
localhost                  : ok=2    changed=0    unreachable=0    failed=0
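
As a side note, on older Jinja2 the same result can be had without the do 
extension (the {% do %} tag above typically needs jinja2_extensions = 
jinja2.ext.do enabled in ansible.cfg) and without the equalto test, using a 
plain with_items loop. A sketch only, reusing the vars above; it leaves myip 
as a plain string rather than a list:

    - set_fact:
        myip: "{{ item.unit0.ip_primary }}"
      with_items: interfaces
      when: item.name == 'lo0'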



On Saturday, July 4, 2015 at 11:18:50 PM UTC+10, Vishal Chainani wrote:

 Hi,

 I have a list like below, and I need to select the sequence which has the 
 name attribute equal to lo0. I use this selectattr statement: *{% set 
 lo0 = interfaces|selectattr("name", "equalto", "lo0") | first %}*

 But while running, the playbook throws me an error: {'msg': 
 "TemplateRuntimeError: no test named 'equalto'", 'failed': True}

 interfaces:
 - name: lo0
   unit0:
 ip_primary: 1.1.1.1
 ip_secondary: 2.2.2.2
   unit1:
 ip_primary: 3.3.3.3
 ip_secondary: 4.4.4.4
 - name: xyz
   unit0:
 ip_primary: 9.9.9.9

 Jinja2 version 2.7.3.

 Any pointer what am I  missing?


 Vishal




Re: [ansible-project] ec2_asg Missing required arguments for autoscaling group create/update: launch_config_name

2015-06-08 Thread Igor Cicimov
Done.

https://github.com/ansible/ansible/issues/11209

On Tuesday, June 9, 2015 at 12:13:17 PM UTC+10, benno joy wrote:

 Hi,

 It seems like it is a documentation bug, the min_size, max_size and 
 launch_config_name are required parameters; there is a feature request to add 
 default values to min_size and max_size, could you please raise a bug 
 report for this in GitHub.

 - Benno

  

 On Tue, Jun 9, 2015 at 6:31 AM, Igor Cicimov 
 ig...@encompasscorporation.com wrote:

 When using ec2_asg I get:

 TASK: [ec2_asg ] 
 ** 
 failed: [localhost] = {failed: true}
 msg: Missing required arguments for autoscaling group create/update: 
 launch_config_name

 whereas in the module man page:

 http://docs.ansible.com/ec2_asg_module.html

 this attribute is listed as not required.

 This is on Ansible 1.9.1
  


[ansible-project] ec2_asg Missing required arguments for autoscaling group create/update: launch_config_name

2015-06-08 Thread Igor Cicimov
When using ec2_asg I get:

TASK: [ec2_asg ] 
** 
failed: [localhost] = {failed: true}
msg: Missing required arguments for autoscaling group create/update: 
launch_config_name

whereas in the module man page:

http://docs.ansible.com/ec2_asg_module.html

this attribute is listed as not required.

This is on Ansible 1.9.1



Re: [ansible-project] Weird issue with ec2_eip module

2015-06-08 Thread Igor Cicimov
I expected it to be some syntax error on my side but nothing that obvious and 
embarrassing :-(

I better go and check my eyes.

Thanks, Benno.
Igor
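
For the record, the fix was simply to keep the register name and the 
with_items reference consistent, e.g. (a sketch, with the surrounding ASG 
creation task assumed to be registered as ec_asg_info, as the debug above shows):

- name: associate new elastic IPs with each of the instances
  ec2_eip: instance_id={{ item }} region={{ vpc_region }} in_vpc=yes
  with_items: ec_asg_info.instances
  register: eip_info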

On Tuesday, June 9, 2015 at 2:27:35 PM UTC+10, benno joy wrote:

 you have *var={{ ec_asg_info.instances }}*

 *and in with_items: ec2_asg_info -- shouldn't it be: ec_asg_info?*


 On Tue, Jun 9, 2015 at 9:08 AM, Igor Cicimov 
 ig...@encompasscorporation.com wrote:

 Hi all,

 I'm creating an ASG in my playbook from which I launch the following 
 instances:

 TASK: [create auto scaling group] 
 * 
 changed: [localhost]

 TASK: [debug *var={{ ec_asg_info.instances }}]* 
 *** 
 ok: [localhost] = {
 var: {
 ['i-9abfff30', 'i-ee4ea517', 'i-08ec48a2']: ['i-9abfff30', 
 'i-ee4ea517', 'i-08ec48a2']
 }
 }

 TASK: [debug var={{ item }}] 
  
 ok: [localhost] = (item=i-9abfff30) = {
 item: i-9abfff30,
 var: {
 i-9abfff30: i-9abfff30
 }
 }
 ok: [localhost] = (item=i-ee4ea517) = {
 item: i-ee4ea517,
 var: {
 i-ee4ea517: i-ee4ea517
 }
 }
 ok: [localhost] = (item=i-08ec48a2) = {
 item: i-08ec48a2,
 var: {
 i-08ec48a2: i-08ec48a2
 }
 }

 and then I pass the list of the create instances to the ec2_eip task:

 - name: associate new elastic IPs with each of the instances
   ec2_eip:
   instance_id={{ item }} 
   region={{ vpc_region }} 
   in_vpc=yes
   with_items: *ec2_asg_info.instances*
   register: eip_info

 which fails with the following error:

 TASK: [associate new elastic IPs with each of the instances] 
 ** 
 failed: [localhost] = (item=ec2_asg_info.instances) = {failed: true, 
 *item: 
 ec2_asg_info.instances*}
 msg: EC2ResponseError: 400 Bad Request
 ?xml version=1.0 encoding=UTF-8?
 ResponseErrorsErrorCodeInvalidInstanceID.Malformed/CodeMessage*Invalid
  
 id: ec2_asg_info.instances*
 /Message/Error/ErrorsRequestID9c204182-d5b6-41e0-badb-c6e85a9bc8e5/RequestID/Response

 FATAL: all hosts have already failed -- aborting

 Note the invalid ID message coming back which has the list name instead 
 of the item set.

 Which is strange since if I run another playbook with the ec2_eip module 
 only in which I set the list manually (after I created the instances with 
 the previous asg playbook of course):

 - name: associate new elastic IPs with each of the instances:
   ec2_eip:
   instance_id={{ item }}
   region={{ vpc_region }}
   in_vpc=yes
   *with_items: ['i-9abfff30', 'i-ee4ea517', 'i-08ec48a2']*
   register: eip_info

 the tasks executes fine:

 TASK: [associate new elastic IPs with each of the instances] 
 ** 
 changed: [localhost] = (item=i-9abfff30)
 changed: [localhost] = (item=i-ee4ea517)
 changed: [localhost] = (item=i-08ec48a2)

 What is going on here?

 Thanks,
 Igor



[ansible-project] Weird issue with ec2_eip module

2015-06-08 Thread Igor Cicimov
Hi all,

I'm creating an ASG in my playbook from which I launch the following 
instances:

TASK: [create auto scaling group] 
* 
changed: [localhost]

TASK: [debug *var={{ ec_asg_info.instances }}]* 
*** 
ok: [localhost] = {
var: {
['i-9abfff30', 'i-ee4ea517', 'i-08ec48a2']: ['i-9abfff30', 
'i-ee4ea517', 'i-08ec48a2']
}
}

TASK: [debug var={{ item }}] 
 
ok: [localhost] = (item=i-9abfff30) = {
item: i-9abfff30,
var: {
i-9abfff30: i-9abfff30
}
}
ok: [localhost] = (item=i-ee4ea517) = {
item: i-ee4ea517,
var: {
i-ee4ea517: i-ee4ea517
}
}
ok: [localhost] = (item=i-08ec48a2) = {
item: i-08ec48a2,
var: {
i-08ec48a2: i-08ec48a2
}
}

and then I pass the list of the create instances to the ec2_eip task:

- name: associate new elastic IPs with each of the instances
  ec2_eip:
  instance_id={{ item }} 
  region={{ vpc_region }} 
  in_vpc=yes
  with_items: *ec2_asg_info.instances*
  register: eip_info

which fails with the following error:

TASK: [associate new elastic IPs with each of the instances] 
** 
failed: [localhost] = (item=ec2_asg_info.instances) = {failed: true, 
*item: 
ec2_asg_info.instances*}
msg: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidInstanceID.Malformed</Code><Message>*Invalid 
id: ec2_asg_info.instances*</Message></Error></Errors><RequestID>9c204182-d5b6-41e0-badb-c6e85a9bc8e5</RequestID></Response>

FATAL: all hosts have already failed -- aborting

Note the invalid ID message coming back which has the list name instead of 
the item set.

Which is strange since if I run another playbook with the ec2_eip module 
only in which I set the list manually (after I created the instances with 
the previous asg playbook of course):

- name: associate new elastic IPs with each of the instances:
  ec2_eip:
  instance_id={{ item }}
  region={{ vpc_region }}
  in_vpc=yes
  *with_items: ['i-9abfff30', 'i-ee4ea517', 'i-08ec48a2']*
  register: eip_info

the tasks executes fine:

TASK: [associate new elastic IPs with each of the instances] 
** 
changed: [localhost] = (item=i-9abfff30)
changed: [localhost] = (item=i-ee4ea517)
changed: [localhost] = (item=i-08ec48a2)

What is going on here?

Thanks,
Igor



Re: [ansible-project] Launch ec2 instances in multiple availability zones in single play

2015-06-05 Thread Igor Cicimov
Thanks Benno, that's a good point. I usually associate an ASG with a launch 
config, but that is certainly another way to do it.
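
For completeness, a rough sketch of that route (all names, AMI and subnet ids 
below are placeholders, not from this thread); the ASG spreads the instances 
across the subnets, i.e. the availability zones, for you:

- name: create launch configuration
  ec2_lc:
    name: app-lc
    image_id: ami-xxxxxxxx
    key_name: my_keypair
    instance_type: t2.micro
    region: eu-west-1

- name: create auto scaling group across the AZ subnets
  ec2_asg:
    name: app-asg
    launch_config_name: app-lc
    min_size: 3
    max_size: 3
    desired_capacity: 3
    vpc_zone_identifier: [ subnet-aaaaaaaa, subnet-bbbbbbbb, subnet-cccccccc ]
    region: eu-west-1
  register: asg_info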

On Friday, June 5, 2015 at 3:57:14 PM UTC+10, benno joy wrote:

 wouldn't ec2_asg do that automatically for you ?



 On Fri, Jun 5, 2015 at 10:35 AM, Igor Cicimov 
 ig...@encompasscorporation.com wrote:

 Hi all,

 I've been looking for the best way to achieve what is mentioned in the 
 title of the message but haven't found any. At least not one that will 
 produce a satisfactory result which can be used further down in the same 
 playbook. 

 For example I've been testing the following loop:

 - name: create instance
   ec2: image={{ images[item.0.region] }}
keypair={{ keypair }}
instance_type={{ instance_type }}
instance_tags={{ tags }}
vpc_subnet_id={{ item.1.subnet }}
region={{ item.0.region }}
group_id={{ group_id }}
assign_public_ip=yes
wait=true
wait_timeout={{ wait_timeout }}
   with_subelements:
 - vpc 
 - subnets
   when: item.0.name == ec2_env
   register: ec2_info

 which does what I want, launches one instance per AZ in the chosen 
 region, but the output registered is way too complicated to be of any use:

 TASK: [debug var=ec2_info] 
  
 ok: [localhost] = {
 var: {
 ec2_info: {
 changed: true,
 msg: All items completed,
 results: [
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: true,
 instance_ids: [
 i-
 ],
 instances: [
 {
 ami_launch_index: 0,
 architecture: x86_64,
 dns_name: 
 .eu-west-1.compute.amazonaws.com,
 ebs_optimized: false,
 groups: {
 sg-: sg-
 },
 hypervisor: xen,
 id: i-,
 image_id: ami-47a23a30,
 instance_type: t2.micro,
 kernel: null,
 key_name:,
 launch_time: 2015-06-05T02:48:14.000Z,
 placement: eu-west-1a,
 private_dns_name: 
 .eu-west-1.compute.internal,
 private_ip: ,
 public_dns_name: 
 .eu-west-1.compute.amazonaws.com,
 public_ip: ,
 ramdisk: null,
 region: eu-west-1,
 root_device_name: /dev/sda1,
 root_device_type: ebs,
 state: running,
 state_code: 16,
 tags: {},
 tenancy: default,
 virtualization_type: hvm
 }
 ],
 invocation: {
 module_args: image=\ami-47a23a30\ 
 keypair= instance_type=t2.micro instance_tags=\{'Environment': 
 u'', 'Role': u'server', 'Type': 'type', 'Name': 
 u'ec2-type-', 'Datacenter': u''}\ 
 vpc_subnet_id=subnet- region=eu-west-1 group_id=sg- 
 assign_public_ip=yes wait=true wait_timeout=300,
 module_name: ec2
 },
 item: [
 {
 cidr: ,
 name: ,
 region: eu-west-1,
 subnets_app: [
 {
 subnet: subnet-,
 zone: eu-west-1a

[ansible-project] Launch ec2 instances in multiple availability zones in single play

2015-06-04 Thread Igor Cicimov
Hi all,

I've been looking for the best way to achieve what is mentioned in the 
title of the message but haven't found one. At least not one that produces 
a satisfactory result which can be used further down in the same 
playbook. 

For example I've been testing the following loop:

- name: create instance
  ec2: image={{ images[item.0.region] }}
   keypair={{ keypair }}
   instance_type={{ instance_type }}
   instance_tags={{ tags }}
   vpc_subnet_id={{ item.1.subnet }}
   region={{ item.0.region }}
   group_id={{ group_id }}
   assign_public_ip=yes
   wait=true
   wait_timeout={{ wait_timeout }}
  with_subelements:
- vpc 
- subnets
  when: item.0.name == ec2_env
  register: ec2_info

which does what I want, launches one instance per AZ in the chosen region, 
but the output registered is way too complicated to be of any use:

TASK: [debug var=ec2_info] 
 
ok: [localhost] = {
var: {
ec2_info: {
changed: true,
msg: All items completed,
results: [
{
changed: false,
skipped: true
},
{
changed: false,
skipped: true
},
{
changed: false,
skipped: true
},
{
changed: false,
skipped: true
},
{
changed: false,
skipped: true
},
{
changed: false,
skipped: true
},
{
changed: false,
skipped: true
},
{
changed: true,
instance_ids: [
i-
],
instances: [
{
ami_launch_index: 0,
architecture: x86_64,
dns_name: 
.eu-west-1.compute.amazonaws.com,
ebs_optimized: false,
groups: {
sg-: sg-
},
hypervisor: xen,
id: i-,
image_id: ami-47a23a30,
instance_type: t2.micro,
kernel: null,
key_name:,
launch_time: 2015-06-05T02:48:14.000Z,
placement: eu-west-1a,
private_dns_name: 
.eu-west-1.compute.internal,
private_ip: ,
public_dns_name: 
.eu-west-1.compute.amazonaws.com,
public_ip: ,
ramdisk: null,
region: eu-west-1,
root_device_name: /dev/sda1,
root_device_type: ebs,
state: running,
state_code: 16,
tags: {},
tenancy: default,
virtualization_type: hvm
}
],
invocation: {
module_args: image=\ami-47a23a30\ 
keypair= instance_type=t2.micro instance_tags=\{'Environment': 
u'', 'Role': u'server', 'Type': 'type', 'Name': 
u'ec2-type-', 'Datacenter': u''}\ 
vpc_subnet_id=subnet- region=eu-west-1 group_id=sg- 
assign_public_ip=yes wait=true wait_timeout=300,
module_name: ec2
},
item: [
{
cidr: ,
name: ,
region: eu-west-1,
subnets_app: [
{
subnet: subnet-,
zone: eu-west-1a
},
{
subnet: subnet-,
zone: eu-west-1b
},
{
subnet: subnet-,
zone: eu-west-1c
}
],
subnets_db: [
{

Re: [ansible-project] Re: Can't get the group_id value from register in ec2_group on creation

2015-05-28 Thread Igor Cicimov
Any idea how to dig the sg id out of this? Maybe changing the module to 
be less verbose and print only the needed info would be easier?
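
For illustration, one way to dig the id out of that registered structure is to 
loop over the loop results and keep the entry that actually ran (a sketch only, 
relying on the group_sg register from the quoted play below):

- set_fact:
    my_group_id: "{{ item.group_id }}"
  with_items: group_sg.results
  when: item.group_id is defined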

On Thursday, May 28, 2015 at 3:59:14 PM UTC+10, Igor Cicimov wrote:

 Thanks for replying Benno. I did exactly that with debugging and can see 
 where the problem is.

 First let me say I haven't been completely honest about the way I've been 
 invoking the ec2_group module. I have simplified the call for readability, but 
 from the debug output I can see I shouldn't have, since that hides the 
 problem. If I do:

  - ec2_group:
  name: group-{{ ec2_env }}
  description: firewall
  vpc_id: vpc-
  region: eu-west-1
  ...
register: group_sg

 then all is fine. The debug message is simple:

 ok: [localhost] = {
 msg: group_id -- {'invocation': {'module_name': u'ec2_group', 
 'module_args': ''}, 'changed': True, 'group_id': 'sg-'}
 }

 However, in my case I'm invoking ec2_group via a with_dict loop as given below:

 - hosts: localhost
   connection: local
   gather_facts: false
   vars_files:
     - group_vars/app_servers
     - group_vars/vpcs
   tasks:
     - name: Some group
       ec2_group:
         name: "group-{{ ec2_env }}"
         description: group firewall
         vpc_id: "{{ item.key }}"
         region: "{{ item.value.region }}"
         purge_rules: false
         purge_rules_egress: false
         rules:
           - proto: tcp
             from_port: 22
             to_port: 22
             cidr_ip: 0.0.0.0/0
           - proto: tcp
             from_port: x
             to_port: x
             cidr_ip: "{{ item.value.cidr }}"
           .
           .
           .
           - proto: all
             group_name: "group-{{ ec2_env }}"
         rules_egress:
           - proto: all
             type: all
             cidr_ip: 0.0.0.0/0
       with_dict: vpc
       when: item.value.name == ec2_env
       register: group_sg


 where the dictionary is a VPC mappings as follows:

 vpc:
  vpc-:
   name: nameX
   region: ap-southeast-2
   cidr: /16
   subnets:
- { zone: ap-southeast-2a, subnet: subnet- }
- { zone: ap-southeast-2b, subnet: subnet- }
   subnets_app:
- { zone: ap-southeast-2a, subnet: subnet- }
- { zone: ap-southeast-2b, subnet: subnet- }
   subnets_db:
- { zone: ap-southeast-2a, subnet: subnet- }
- { zone: ap-southeast-2b, subnet: subnet- }
 .
 .
 .
  vpc-:
   name: nameY
   region: eu-west-1
   cidr: /16
   subnets:
- { zone: eu-west-1a, subnet: subnet- }
- { zone: eu-west-1b, subnet: subnet- }
- { zone: eu-west-1c, subnet: subnet- }
   subnets_app:
- { zone: eu-west-1a, subnet: subnet- }
- { zone: eu-west-1b, subnet: subnet- }
- { zone: eu-west-1c, subnet: subnet- }
   subnets_db:
- { zone: eu-west-1a, subnet: subnet- }
- { zone: eu-west-1b, subnet: subnet- }
- { zone: eu-west-1c, subnet: subnet- }


 in which case I get the following complex structure as outout:

 TASK: [debug var=group_sg] 
 ***
 ok: [localhost] = {
 var: {
 group_sg: {
 changed: true,
 msg: All items completed,
 results: [
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: false,
 skipped: true
 },
 {
 changed: true,
 group_id: sg-,
 invocation: {
 module_args: ,
 module_name: ec2_group
 },
 item: {
 key: vpc-,
 value: {
 cidr: /16,
 name: ,
 region: eu-west-1,
 subnets: [
 {
 subnet: subnet-,
 zone: eu-west-1a
 },
 {
 subnet: subnet-,
 zone: eu-west-1b
 },
 {
 subnet: subnet-,
 zone: eu-west-1c
 }
 ],
 subnets_app: [
 {
 subnet: subnet-,
 zone: eu-west-1a

[ansible-project] Can't get the group_id value from register in ec2_group on creation

2015-05-27 Thread Igor Cicimov
I have the following as part of a play:

  - ec2_group:
      name: group-name
      description: firewall
      vpc_id: "{{ vpc_id }}"
      region: "{{ region }}"
      purge_rules: false
      purge_rules_egress: false
      rules:
        - proto: tcp
          from_port: 22
          to_port: 22
          cidr_ip: 0.0.0.0/0
        ...
        #- proto: all
        #  group_name: group-name
    register: group_sg

  - debug: msg="group_id -- {{ group_sg.group_id }}"

which fails with the error:

TASK: [debug msg="group_id -- {{ group_sg.group_id }}"] 
********************** 
fatal: [localhost] => One or more undefined variables: 'dict object' has no 
attribute 'group_id'

Isn't this the right way of getting this attribute? Or is this not an 
option for an SG created inside a VPC? The SG is being created fine though for 
the specified VPC and region.

Another thing is that I'm unable to use:

  - proto: all
group_name: group-name

as in the official Ansible page example in the rules since I'm getting the 
following error:

File /usr/local/lib/python2.7/dist-packages/boto/connection.py, line 
1226, in get_status
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidGroup.NotFound</Code><Message>You 
have specified two resources that belong to different 
networks.</Message></Error></Errors><RequestID>dee577be-...</RequestID></Response>

Any ideas?

$ ansible --version
ansible 1.9.1


Thanks,
Igor



Re: [ansible-project] Re: Can't get the group_id value from register in ec2_group on creation

2015-05-27 Thread Igor Cicimov
}
],
subnets_db: [
{
subnet: subnet-,
zone: eu-west-1a
},
{
subnet: subnet-,
zone: eu-west-1b
},
{
subnet: subnet-,
zone: eu-west-1c
}
]
}
}
},
{
changed: false,
skipped: true
}
]
}
}
}


Yeah, I'm trying to make the play generic so it applies to any VPC/subnets in 
any region.

Thanks again for your help.

Igor

On Thursday, May 28, 2015 at 3:35:04 PM UTC+10, benno joy wrote:

 Hi Igor,

 - debug: msg="group_id -- {{ group_sg.group_id }}" should work, can you 
 please try

 - debug: var=group_sg and see what keys you are getting.

 Also for question 2. there were a few fixes added to filter groups in the 
 same vpc, can you please try the latest devel branch,


 - Benno





 On Thu, May 28, 2015 at 10:26 AM, Igor Cicimov 
 ig...@encompasscorporation.com wrote:

 Replying to my self about the second part of my question re:

   - proto: all
 group_name: group-name

 It came up that the group name has to be unique in the region, otherwise the 
 call will fail. The AWS console on the other hand allows creation of security 
 groups with the same name in the same region in case they belong to different 
 VPCs.



 On Thursday, May 28, 2015 at 2:33:51 PM UTC+10, Igor Cicimov wrote:

 I have the following as part of a play:

   - ec2_group:
  name: group-name
  description: firewall
  vpc_id: {{ vpc_id }}
  region: {{ region }}
  purge_rules: false
  purge_rules_egress: false
  rules:
   - proto: tcp
 from_port: 22
 to_port: 22
 cidr_ip: 0.0.0.0/0
   ...
   #- proto: all
   #  group_name: group-name
 register: group_sg

   - debug: msg=group_id -- {{ group_sg.group_id }}

 which fails with the error:

 TASK: [debug msg=group_id -- {{ group_sg.group_id }}] 
 ** 
 fatal: [localhost] = One or more undefined variables: 'dict object' has 
 no attribute 'group_id'

 Isn't this the right way of getting this attribute? Or this is not an 
 option for a SG created inside VPC? The SG is being created fine though for 
 the specified VPC and region.

 Another thing is that I'm anable to use:

   - proto: all
 group_name: group-name

 as in the official Ansible page example in the rules since I'm getting 
 the following error:

 File /usr/local/lib/python2.7/dist-packages/boto/connection.py, line 
 1226, in get_status
 raise self.ResponseError(response.status, response.reason, body)
 boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
 ?xml version=1.0 encoding=UTF-8?
 ResponseErrorsErrorCodeInvalidGroup.NotFound/CodeMessageYou 
 have specified two resources that belong to different 
 networks./Message/Error/ErrorsRequestIDdee577be-.../RequestID/Response

 Any ideas?

 $ ansible --version
 ansible 1.9.1


 Thanks,
 Igor



[ansible-project] Re: Can't get the group_id value from register in ec2_group on creation

2015-05-27 Thread Igor Cicimov
Replying to my self about the second part of my question re:

  - proto: all
group_name: group-name

It came up that the group name has to be unique in the region, otherwise the 
call will fail. The AWS console on the other hand allows creation of security 
groups with the same name in the same region in case they belong to different 
VPCs.


On Thursday, May 28, 2015 at 2:33:51 PM UTC+10, Igor Cicimov wrote:

 I have the following as part of a play:

   - ec2_group:
  name: group-name
  description: firewall
  vpc_id: {{ vpc_id }}
  region: {{ region }}
  purge_rules: false
  purge_rules_egress: false
  rules:
   - proto: tcp
 from_port: 22
 to_port: 22
 cidr_ip: 0.0.0.0/0
   ...
   #- proto: all
   #  group_name: group-name
 register: group_sg

   - debug: msg=group_id -- {{ group_sg.group_id }}

 which fails with the error:

 TASK: [debug msg=group_id -- {{ group_sg.group_id }}] 
 ** 
 fatal: [localhost] = One or more undefined variables: 'dict object' has 
 no attribute 'group_id'

 Isn't this the right way of getting this attribute? Or this is not an 
 option for a SG created inside VPC? The SG is being created fine though for 
 the specified VPC and region.

 Another thing is that I'm anable to use:

   - proto: all
 group_name: group-name

 as in the official Ansible page example in the rules since I'm getting the 
 following error:

 File /usr/local/lib/python2.7/dist-packages/boto/connection.py, line 
 1226, in get_status
 raise self.ResponseError(response.status, response.reason, body)
 boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
 ?xml version=1.0 encoding=UTF-8?
 ResponseErrorsErrorCodeInvalidGroup.NotFound/CodeMessageYou 
 have specified two resources that belong to different 
 networks./Message/Error/ErrorsRequestIDdee577be-.../RequestID/Response

 Any ideas?

 $ ansible --version
 ansible 1.9.1


 Thanks,
 Igor




[ansible-project] Re: create network interface (ENI) on EC2 instance? possible to use raw boto commands?

2015-01-05 Thread Igor Cicimov
+1 for this request. Have you seen this module though: 
https://github.com/cybosol/ansible/blob/master/library/cloud/ec2_eni
I haven't tried it myself but maybe it can help.

On Saturday, January 3, 2015 11:29:13 AM UTC+11, Jeff wrote:

 Someone asked for this https://github.com/ansible/ansible/issues/7895 a 
 while back before the modules were reorganized, but I've seen no mention 
 since, so I'm guess it's still not possible to create (or manipulate) an 
 Elastic 
 Network Interface (eni) 
 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html on an 
 EC2 instance yet.   That being the case, since this is supported by boto (
 create_network_interface 
 http://boto.readthedocs.org/en/latest/ref/ec2.html#boto.ec2.connection.EC2Connection.create_network_interface),
  
 is it possible for me to make my own calls via boto somehow?  I'm not very 
 deep into Ansible so perhaps this is a nonsensical question.

 Thanks




Re: [ansible-project] Configuring Ansible to run play books through a bastion host on aws/ec2

2014-12-25 Thread Igor Cicimov
Matt, doesn't this prevent you from using ec2 dynamic inventory? For me 
being unable to dynamically discover instances as they come and go in the 
VPC is a huge limitation.

On Thursday, February 6, 2014 7:31:20 AM UTC+11, Matt Martz wrote:

 I use bastions for nearly all of my communication with servers.  It is all 
 done via my ~/.ssh/config file.  Something like:

 Host bastion
 User   myuser
 HostName   bastion.example.org
 ProxyCommand   none
 IdentityFile   ~/.ssh/id_rsa
 BatchMode  yes
 PasswordAuthentication no

 Host *
 ServerAliveInterval60
 TCPKeepAlive   yes
 ProxyCommand   ssh -qaY bastion 'nc -w 14400 %h %p'
 ControlMaster  auto
 ControlPath~/.ssh/mux-%r@%h:%p
 ControlPersist 8h

 In ~/.ansible.cfg I then have

 [ssh_connection]
 ssh_args = -o ControlPersist=15m -F ~/.ssh/config
 scp_if_ssh = True
 control_path = ~/.ssh/mux-%%r@%%h:%%p

 Nothing else required.  I execute ansible and all my connections go 
 through the bastion.  Your Host * might benefit from being more targeted. 
  In any case, I also have to use these same configs for normal SSH access, 
 so for me it makes sense to just have them in my ssh config.

 I really don't see a need to modify anything within Ansible to do this.
 -- 
 Matt Martz
 ma...@sivel.net

 On February 5, 2014 at 2:09:24 PM, Adam Heath (ad...@brainfood.com) wrote:

 I just looked over ssh.py and ssh_old.py; if I were to actually want to 
 sit down and do this, I would factor those 2 classes, into a common base 
 class, then introduce a third version that supported ProxyCommand. 

 ps: I notice something odd in the two files above: 

 == 
 - def exec_command(self, cmd, tmp_path, sudo_user=None, 
 sudoable=False, executable='/bin/sh', in_data=None, su=False, 
 su_user=None): 
 + def exec_command(self, cmd, tmp_path, sudo_user=None, 
 sudoable=False, executable='/bin/sh', in_data=None, su_user=None, 
 su=False): 
 == 

 Why is the order of the last 2 args reversed for those two files? Seems 
 like it might cause some confusion. 

 On 02/05/2014 01:51 PM, Adam Heath wrote: 
  I've had musings on that too. Currently, I think you'd have to manually 
  configure $HOME/.ssh/config, with ProxyCommand. 
  
  However, I just had a thought. What if there was an 
  ansible_ssh_proxy=$other_inventory_host feature? When set, ansible 
  would auto-add the -o ProxyCommand=$something. 
  
  This is just some random brainstorm ramblings. 
  
  On 02/05/2014 12:59 PM, Jeff Lord wrote: 
  Hello, 
  
  I am building out an env in AWS using ansible and would like to 
  configure all of my hosts by running through a single bastion host 
 which 
  has port 22 open. 
  Laptop - AWS Bastion - AWS private network instances 
  
  Is there a good example of how to configure the proxy around? 
  
  Thank You in advance, 
  



Re: [ansible-project] Boolean variables and when condition

2014-07-01 Thread Igor Cicimov
Thanks guys for looking into it. With some debugging it turned out that one of 
the variables was never set and was always false, so another case of stupid 
user error. Otherwise I can confirm that the condition works as expected with 
or without |bool, which I guess matters more when a variable is used in the 
{{ var | bool }} format.
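
For reference, a tiny self-contained sketch (the values are made up) showing the 
difference: a real boolean works bare in when, while a string value only behaves 
once passed through | bool:

- hosts: localhost
  gather_facts: false
  vars:
    var1: true      # real boolean
    var2: "no"      # string -- truthy in Jinja2 unless filtered through | bool
    var3: false
  tasks:
    - debug: msg="condition is true"
      when: (var1 and var2 | bool) or not var3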

On Wednesday, July 2, 2014 4:37:20 AM UTC+10, James Cammarata wrote:

 If they are actual booleans already (and not a string value like 'yes' 
 'no' or 'true') then you don't need to use the bool filter at all and it 
 can be simplified to:

 when: var1 and var2 or not var3

 Regarding the logic, what does debug say the variable values are before 
 the task you're using them in with the when statement above?



 On Mon, Jun 30, 2014 at 8:50 PM, Igor Cicimov 
 ig...@encompasscorporation.com wrote:

 Hi guys,

 Simple question ... given that the vars below are boolean type with 
 values of true or false, does this condition make sense?

 when: (var1 | bool and var2 | bool) or not var3 | bool

 Although I would expect to work (based on what I can see in jinja2 
 reference) I'm not seeing the desired result so hence the question.

 Thanks
  


[ansible-project] Boolean variables and when condition

2014-06-30 Thread Igor Cicimov
Hi guys,

Simple question ... given that the vars below are boolean type with values 
of true or false, does this condition make sense?

when: (var1 | bool and var2 | bool) or not var3 | bool

Although I would expect it to work (based on what I can see in the Jinja2 
reference) I'm not seeing the desired result, hence the question.

Thanks
