Re: [ansible-project] {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_default_ipv4'"}

2018-11-18 Thread Tom K.
So I've made two more empty hosts, called them *mysql05* and *mysql06*, 
and tested on all three (this way I don't blow away my working cluster).  Now I 
removed the limit flag and ran it like this:

# ansible-playbook -i infra main.yml --tags "mysql" -v

Everything worked well and the */etc/my.cnf* was populated as expected:

[root@mysql04 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
bind-address="192.168.0.109"
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
wsrep_provider=/usr/lib64/galera-3/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M"
wsrep_cluster_name="galera_cluster1"
wsrep_cluster_address='gcomm://192.168.0.109,192.168.0.102,192.168.0.111'
wsrep_sst_method=rsync
server_id=1
wsrep_node_address="192.168.0.109"
wsrep_node_name="mysql04"
[mysql_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@mysql04 ~]#
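
(For context: the values above come from the mysql role's my.cnf template, which is not shown in the thread. A hypothetical sketch of the kind of Jinja2 lines that would produce them looks roughly like this; note that the cluster-address line has to read the default IPv4 fact of every host in the mysql group, which matters for the error below.)

# Hypothetical my.cnf.j2 fragment -- illustrative only, not the role's actual template.
# Every hostvars[...] lookup requires facts for that peer to have been gathered.
bind-address="{{ ansible_default_ipv4.address }}"
wsrep_cluster_address='gcomm://{% for host in groups['mysql'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %}'
wsrep_node_address="{{ ansible_default_ipv4.address }}"
wsrep_node_name="{{ inventory_hostname }}"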


Still, running with --limit on any one of them, as Stephen suggested, fails 
with the following for whichever host is specified in the limit, as you mentioned:


ansible-playbook -i infra --limit mysql05 main.yml --tags "mysql" -v --check

fatal: [mysql05]: FAILED! => {"changed": false, "msg": 
"AnsibleUndefinedVariable: 
'ansible.vars.hostvars.HostVarsVars object' has no attribute 
'ansible_default_ipv4'"}


ansible-playbook -i infra --limit mysql06 main.yml --tags "mysql" -v --check

fatal: [mysql06]: FAILED! => {"changed": false, "msg": 
"AnsibleUndefinedVariable: 
'ansible.vars.hostvars.HostVarsVars object' has no attribute 
'ansible_default_ipv4'"}

But it seems to work the opposite of the way described above (apologies if I'm 
misreading).  The error is thrown for ANY host that I use the *--limit* 
flag on, not for the ones I leave out of the limit.

I would have expected it to gather facts on the host I'm limiting the run 
to, not on the ones I'm excluding.
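
A quick way to see this in action (an illustrative sketch, not from the thread): with --limit, only the limited host runs the play and gathers facts, so a fact lookup for any other member of the group comes back undefined.

# Illustrative sketch only: report which mysql group members actually have
# facts in the current run. Peers excluded by --limit show 'no facts gathered'.
- hosts: mysql
  gather_facts: true
  tasks:
    - name: Show the default IPv4 fact (or its absence) for every peer
      debug:
        msg: >-
          {{ item }}:
          {{ hostvars[item]['ansible_default_ipv4']['address']
             if 'ansible_default_ipv4' in hostvars[item]
             else 'no facts gathered' }}
      loop: "{{ groups['mysql'] }}"
      run_once: true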

Cheers,
Tom

On Friday, November 16, 2018 at 5:53:32 AM UTC-5, Stephen C. wrote:
>
> Hi Tom, 
>
> Can you try a couple of options and post the results to this thread, please? 
> With the same inventory file: 
>
> [mysql] 
>   mysql01 
>   mysql02 
>   mysql03 
>   mysql04
>
>
>
> ansible-playbook -i infra --limit mysql02 main.yml --tags "mysql" -v
>
> ansible-playbook -i infra --limit mysql03 main.yml --tags "mysql" -v
>
>
> Thnx, 
> Stephen
>
> On Wednesday, November 14, 2018 at 9:15:38 AM UTC-5, Kai Stian Olstad 
> wrote:
>>
>> On 14.11.2018 14:49, Tom K. wrote: 
>> > On Tuesday, November 13, 2018 at 3:10:18 AM UTC-5, Tom K. wrote: 
>> > 
>> > Ok.  So I removed a couple of tags from the mysql task "mysql : Copy 
>> > my.cnf global MySQL configuration." and adjusted the play as follows: 
>>
>> The playbook is fine; the problem is the --limit option you are using 
>> with ansible-playbook. 
>>
>>
>> > [root@awx01 ansible]# vi main.yml 
>> > --- 
>> > - name: Gather all facts prior to execution 
>> >   hosts: mysql 
>> >   gather_facts: true 
>> >   tasks: 
>> >     - debug: msg='{{ inventory_hostname }} has default IP {{ ansible_default_ipv4["address"] }}' 
>> >     - template: 
>> >         src: test.j2 
>> >         dest: /tmp/test.out 
>> >   tags: mysql 
>> > 
>> > 
>> > - name: Install and configure MySQL 
>> >   hosts: mysql 
>> >   become: true 
>> >   roles: 
>> >     - mysql 
>> >   tags: mysql 
>> > 
>> > 
>> > But that didn't work; I still got the original error with mysql04, 
>> > until I removed mysql01-03 from the infra file, leaving only mysql04: 
>> > 
>> > [mysql] 
>> > mysql04 
>> > 
>> > 
>> > And reran using: 
>> > 
>> > ansible-playbook -i infra --limit mysql04 main.yml --tags "mysql" -v 
>>
>>
>> You still have the same problem I commented on earlier. 
>>
>> When you run with --limit, the tasks and *gather_facts* are only run on 
>> the hosts specified in the limit. 
>> So when your template tries to use facts for mysql01-03, they don't 
>> exist, because you haven't gathered them, and you get this error message. 
>>
>> So remove your --limit and it will work; the template you have will 
>> never work if you specify a limit. 
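
One commonly used way to keep a limited run working with such a template (a sketch only, assuming the group from this thread is named 'mysql'; this is not the approach taken in the thread) is to gather facts for every group member explicitly via delegation:

# Sketch: populate hostvars facts for all mysql peers even under --limit.
- name: Gather facts for all Galera peers despite --limit
  hosts: mysql
  gather_facts: false
  tasks:
    - name: Run setup against each mysql host and store the facts under that host
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['mysql'] }}"
      run_once: true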
>>
>>
>> -- 
>> Kai Stian Olstad 
>>
>



[ansible-project] How to force Ansible to update PATH?

2018-11-18 Thread Dmitriy Panteleyev
I am installing snapd via Ansible on *buntu.

After installation, snap binaries are linked from `/snap/bin`.  When snapd 
is installed, it adds a script to `/etc/profile.d/` that basically adds 
/snap/bin to the bash PATH.

My problem is that I cannot find a way to force Ansible to refresh the 
PATH, so subsequent roles/tasks fail with 
"… command not found in PATH".

I have tried:

1. Ansible `meta: reset_connection` task -- fails with message "unable to 
reset this type of connection" (local)
2. In playbook: `environment: PATH: '/snap/bin:{{ ansible_env.PATH }}'` -- 
no effect
3. In role: `shell: 'export PATH=$PATH:/snap/bin'` -- no effect

Short of forcing a reboot mid-way through a playbook, any other ideas?
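
For what it's worth, `environment` expects a mapping, and a `shell: export ...` only persists for that one task's shell. A minimal sketch of how the mapping form is usually written (paths reused from this post; the command name is hypothetical, and whether this covers the snapd case here is untested):

# Minimal sketch: prepend /snap/bin for every task in the play via 'environment'.
# This affects the environment Ansible gives each task; it does not change the
# login PATH on the target, which still comes from /etc/profile.d/ at next login.
- hosts: all
  environment:
    PATH: "/snap/bin:{{ ansible_env.PATH }}"
  tasks:
    - name: Run a binary that lives in /snap/bin ('some_snap_tool' is hypothetical)
      command: some_snap_tool --version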



Re: [ansible-project] Parallel execution of tasks in playbook

2018-11-18 Thread Saravanan
Thanks a bunch Jon. It helps a lot.

On Friday, 16 November 2018 03:35:26 UTC-5, J Hawkesworth wrote:
>
> Hello,
>
> Ansible will run tasks in parallel against groups of hosts, so I suggest 
> you convert your 
>
> apache_sever_list.yaml
>
> file into Ansible inventory format and put all the hosts in it into a 
> group called 'apache'. Then you can run tasks against them like this:
>
> - hosts: apache
>   tasks:
>     - name: any tasks here run against all the hosts in the apache group simultaneously
>
>
> What's nice is that you can still use 'delegate_to: localhost' for your uri 
> task: it will still run the task from the Ansible controller, but in 
> parallel for each host in your 'apache' group.
>
> Also, have a look at the 'template' module; I think you will find it a lot 
> easier for creating the HTML file than the combination of shell commands 
> and lineinfile.  
>
> I don't have time to test this today but try experimenting with organising 
> things as follows:
>
> Create an inventory file for your apache hosts:
>
> # file: apache_hosts 
> # this is an ansible inventory file (using 'ini' format but can use yaml)
> # see 
> https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html 
> for more about inventory 
> [apache]
> host1 ap_port=8081
> host2 ap_port=80
> host3 ap_port=8080
> host4 ap_port=8008
> host5 ap_port=8123
>
> Create a template file like this:
>
> # ansible template file: apache.html.j2
> # see 
> https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#magic-variables-and-how-to-access-information-about-other-hosts
> # for how to use variables in templates.
> <html>
> <body>
> <h1>List of failed Apache servers</h1>
> <table>
> <tr><th>Hostname</th><th>Port</th></tr>
> {% for host_result in apache_check_result %}
> <tr><td>{{ examine apache_check_result to find hostname from results }}</td><td>{{ again use apache_check_result to get at the port used | default('Failed') }}</td></tr>
> {% endfor %}
> </table>
> </body>
> </html>
>
> Create a playbook like this:
>
> # playbook: apache_check.yml
>
> - name: report on apache status 
>   gather_facts: yes
>   hosts: apache
>   
>   tasks:
>     - name: Check the apache server status
>       uri:
>         url: "{{ ansible_hostname }}:{{ ap_port }}"
>         method: GET
>         status_code: 200
>         body_format: raw
>         follow_redirects: all
>         return_content: yes
>         validate_certs: no
>         force: yes
>       register: apache_check_result
>       ignore_errors: yes
>       delegate_to: localhost
>   
>     - name: show results for debugging purposes
>       debug:
>         var: apache_check_result
>   
>     - name: template out the results
>       template:
>         src: apache.html.j2
>         dest: /var/apache.html
>       delegate_to: localhost
>  
>
>   and run the playbook like this
>
> ansible-playbook -i apache_hosts apache_check.yml
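
As a slightly more concrete (still untested) version of the placeholder template above: each host's registered apache_check_result can be read back through hostvars, for example:

{# Hedged sketch of apache.html.j2 -- assumes the group, variable and register
   names used in the playbook above; only failed hosts are listed. #}
<html>
<body>
<h1>List of failed Apache servers</h1>
<table>
<tr><th>Hostname</th><th>Port</th></tr>
{% for host in groups['apache'] %}
{% if hostvars[host]['apache_check_result'] is defined and hostvars[host]['apache_check_result'] is failed %}
<tr><td>{{ host }}</td><td>{{ hostvars[host]['ap_port'] | default('Failed') }}</td></tr>
{% endif %}
{% endfor %}
</table>
</body>
</html>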
>
>
> Sorry I haven't got time to debug this, but I hope the above illustrates 
> that by using Ansible's inventory you can get your tasks to run in 
> parallel (and also that the 'template' module is a great way to create 
> files from Ansible).
>
> Hope this helps get you on the right track.
>
> All the best,
>
> Jon
>
>
>
>
>
>
> On Thursday, November 15, 2018 at 3:40:34 PM UTC, Build Admin wrote:
>>
>> Thank you for your reply.
>> I tried using strategy: free, and I am unable to use async in the task 
>> because the task has a dependent task. Could you please suggest a code 
>> change or an alternate method for running the tasks in parallel?
>>
>> ---
>> - name: Main playbook
>>   gather_facts: no
>>   hosts: 127.0.0.1
>>   strategy: free
>>
>>   tasks:
>>     - name: Create csv file and html file
>>       file:
>>         path: "{{ item }}"
>>         state: touch
>>       delegate_to: localhost
>>       become_user: awx
>>       become: no
>>       with_items:
>>         - /tmp/apache.csv
>>         - /tmp/apache.html
>>
>>     - include_vars: apache_sever_list.yaml
>>
>>     - include_tasks: apache_task.yaml
>>       with_items: '{{ apacheSevers }}'
>>
>>     - name: Run the csv2html script
>>       shell: |
>>         echo "List of failed Apache servers"
>>         echo "" ;
>>         echo "HostnamePort"
>>         while read INPUT; do
>>         echo "${INPUT//,/}";
>>         done < /tmp/apache.csv
>>         echo ""
>>       register: output
>>       delegate_to: localhost
>>       become_user: awx
>>       become: no
>>
>>     - name: append
>>       lineinfile:
>>         dest: /tmp/apache.html
>>         line: "{{ output.stdout }}"
>>         insertafter: EOF
>>       delegate_to: localhost
>>       become_user: awx
>>       become: no
>>
>>
>>
>> *apache_task.yaml*
>>
>> - name: Check the apache server status
>>   uri:
>>     url: "{{ item.hostname }}:{{ item.port }}"
>>     method: GET
>>     status_code: 200
>>     body_format: raw
>>     follow_redirects: all
>>     return_content: yes
>>     validate_certs: no
>>     force: yes
>>   delegate_to: localhost
>>   become_user: awx
>>   become: no
>>
>>
>> - 

Re: [ansible-project] ssh to remote node and run CLI

2018-11-18 Thread vinoth kumar
Hi Abdul,

You could remove the hosts from inventory.yml, or else mention them directly in
the default hosts inventory file.

Br
Vinoth

On Wed, 14 Nov 2018 at 8:26 AM, Abdul Rahim  wrote:

> Thanks Brian,
>
> It fails with following
>
> root@ansibile-launch:~/ansible/tasks/add-compute# ansible-playbook -i
> inventory.yml add-compute.yml -
> ansible-playbook 2.7.1
>   config file = /etc/ansible/ansible.cfg
>   configured module search path = [u'/root/.ansible/plugins/modules',
> u'/usr/share/ansible/plugins/modules']
>   ansible python module location = /usr/lib/python2.7/dist-packages/ansible
>   executable location = /usr/bin/ansible-playbook
>   python version = 2.7.12 (default, Dec  4 2017, 14:50:18) [GCC 5.4.0
> 20160609]
> Using /etc/ansible/ansible.cfg as config file
> setting up inventory plugins
> /root/ansible/tasks/add-compute/inventory.yml did not meet host_list
> requirements, check plugin documentation if this is unexpected
> /root/ansible/tasks/add-compute/inventory.yml did not meet script
> requirements, check plugin documentation if this is unexpected
> Parsed /root/ansible/tasks/add-compute/inventory.yml inventory source with
> yaml plugin
> ERROR! Syntax Error while loading YAML.
>   mapping values are not allowed in this context
>
> The error appears to have been in
> '/root/ansible/tasks/add-compute/add-compute.yml': line 11, column 23, but
> may
> be elsewhere in the file depending on the exact syntax problem.
>
> The offending line appears to be:
>
>   - name: Copy Test
>   ansible_user: "{{build_username}}"
>   ^ here
> We could be wrong, but this one looks like it might be an issue with
> missing quotes.  Always quote template expression brackets when they
> start a value. For instance:
>
> with_items:
>   - {{ foo }}
>
> Should be written as:
>
> with_items:
>   - "{{ foo }}"
>
> root@ansibile-launch:~/ansible/tasks/add-compute#
>
> root@ansibile-launch:~/ansible/tasks/add-compute# cat add-compute.yml
> ---
> # Demo Adding Compute Node
> - name: Adding Compute Node
>   hosts: build_node
>   connection: local
>   gather_facts: no
>
>
>   tasks:
>   - name: Copy Test
>   ansible_user: "{{build_username}}"
>   ansible_ssh_pass: "{{build_password}}"
>   ansible_connection: ssh
>   command: cp /root/arahim/ansible/tasks/add-compute.yml
> /root/arahim/ansible/tasks/add-compute.yml.bak
> root@ansibile-launch:~/ansible/tasks/add-compute# cat inventory.yml
> fabric01:
>   hosts:
> build_node:
>   build_host: 192.168.115.101
>   build_username: root
>   build_password: 123Abdul123
>
>
> It does work with below
>
>
> root@ansibile-launch:~/ansible/tasks/add-compute# ansible-playbook -i
> new_inventory test.yml
> [DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user
> and make sure become_method is 'sudo' (default). This feature will be
> removed in version 2.9. Deprecation warnings can be disabled by
> setting deprecation_warnings=False in ansible.cfg.
>
> PLAY [all]
> **
>
> TASK [Gathering Facts]
> **
> ok: [192.168.115.101]
>
> TASK [Copy file]
> 
> changed: [192.168.115.101]
>
> PLAY RECAP
> **
> 192.168.115.101: ok=2    changed=1    unreachable=0    failed=0
>
> root@ansibile-launch:~/ansible/tasks/add-compute# cat new_inventory
> [hosts]
> 192.168.115.101 ansible_connection=ssh ansible_ssh_user=root
> ansible_ssh_pass=123Abdul123
>
> root@ansibile-launch:~/ansible/tasks/add-compute# cat test.yml
> ---
> - hosts: all
>   user: root
>   vars:
> createuser: 'ansible'
> createpassword: '123Abdul123'
>   tasks:
>   - name: Copy file
> command: cp /root/arahim/ansible/tasks/add-compute.yml
> /root/arahim/ansible/tasks/add-compute.yml.bak
> sudo: true
>
>
> Not sure what is wrong with the YAML version of the inventory and how the
> variables are called, but I am now able to make progress. Thanks for
> getting back to me on this.
>
> Regards,
> AR
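
For reference, the "mapping values are not allowed in this context" error points at the connection variables placed alongside the task. A hedged sketch of the usual layout, reusing the host and paths from the messages above, with Ansible's built-in connection variables instead of the custom build_* names and without connection: local (since the intent is to run over ssh):

# inventory.yml -- sketch only: connection settings belong to the host.
fabric01:
  hosts:
    build_node:
      ansible_host: 192.168.115.101
      ansible_user: root
      ansible_ssh_pass: 123Abdul123

# add-compute.yml -- sketch only: the task carries no connection settings.
- name: Adding Compute Node
  hosts: build_node
  gather_facts: no
  tasks:
    - name: Copy Test
      command: cp /root/arahim/ansible/tasks/add-compute.yml /root/arahim/ansible/tasks/add-compute.yml.bak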
>
> On Wed, Nov 14, 2018 at 1:26 PM Brian Coca  wrote:
>
>> Without an error I can only guess; one thing I've noticed is that you
>> are incorrectly formatting the task
>>
>> - name: return motd to registered var
>>   command: 'cp /root/setup_data.yaml 

Re: [ansible-project] Nested loop with second loop depending on first item

2018-11-18 Thread 'J Hawkesworth' via Ansible Project
Hmm, would the cartesian lookup help you?

https://docs.ansible.com/ansible/latest/plugins/lookup/cartesian.html
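
For reference, a minimal sketch of what the cartesian lookup does: it pairs every element of one list with every element of another, so it fits when the disk list is the same for all hosts; the per-host 'disks' case discussed below still needs hostvars. Group name follows the thread, the disk list is illustrative.

# Sketch only: cartesian product of the OSD group and a fixed disk list.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Show every (host, disk) combination
      debug:
        msg: "{{ item.0 }}-{{ item.1 }}"
      with_cartesian:
        - "{{ groups['ceph-OSDs'] }}"
        - [sdb, sdc]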

On Wednesday, November 14, 2018 at 8:05:32 AM UTC, saisum...@gmail.com 
wrote:
>
> Hi, I have seen your resolutions on this page.
>
> I am also having the same issue with nested loops.
> My scenario is essentially *the matrix multiplication of n*n*.
> Can someone please suggest how to implement it using Ansible loops, along 
> with adding a condition of count++ and count--?
>
> On Monday, June 12, 2017 at 6:39:28 PM UTC+5:30, Q wrote:
>>
>> Guillaume,
>>
>> I have exactly the same problem with setting up OSDs via the ADM host, and I 
>> was pulling my hair out for 2 days.
>> I don't know how to thank you :)
>>
>>
>> On Wednesday, December 11, 2013 at 3:24:37 PM UTC+1, Guillaume Subiron 
>> wrote:
>>>
>>> After reading nested.py and realizing it would never do what I wanted, 
>>> I found a workaround. 
>>>
>>> On each of the ceph-OSDs: 
>>>
>>> - delegate_to: "{{ ceph-admin }}" 
>>>   shell: echo {{ inventory_hostname }}-{{ item }} 
>>>   with_items: disks 
>>>
>>>
>>>
>>> Anyway, thank you very much for your help :) 
>>>
>>>
>>> On 13/12/11 14:53, Guillaume Subiron wrote: 
>>> > Hum, this is not what I'm looking for, because my action is not 
>>> > executed on the ceph-OSDs, but on another host (a centralized admin 
>>> > node). 
>>> > In this playbook, I'm not doing anything on the ceph-OSDs. 
>>> > 
>>> > What I need to do (only on my admin node) is : 
>>> > 
>>> > - shell: echo {{ item.0 }}-{{ item.1 }} 
>>> >   with_nested: 
>>> > - groups['ceph-OSDs'] 
>>> > - the disks of the current item in the "groups['ceph-OSDs']" loop 
>>> > 
>>> > I want it to print (on the admin host) : 
>>> > 
>>> > osd0-sdb 
>>> > osd1-sdb 
>>> > osd1-sdc 
>>> > 
>>> > Do you understand the problem ? I don't see any workaround. This is a 
>>> > matter of syntax. 
>>> > 
>>> > 
>>> > On 13/12/11 08:24, Michael DeHaan wrote: 
>>> > > I think you probably want this: 
>>> > > 
>>> > > - shell: echo {{ item.0 }}-{{item.1 }} 
>>> > >   with_together: 
>>> > >   - groups['ceph-OSDs'] 
>>> > >   - disks 
>>> > > 
>>> > > this will print for the first host 
>>> > > 
>>> > > osd0-sdb 
>>> > > osd1-sdb 
>>> > > 
>>> > > and for the second host 
>>> > > 
>>> > > osd0-sdb 
>>> > > osd0-sdc 
>>> > > osd1-sdb 
>>> > > osd1-sdc 
>>> > > 
>>> > > Let me know if that works for you and if I'm missing something we'll 
>>> figure 
>>> > > it out. 
>>> > > 
>>> > > Thanks! 
>>> > > 
>>> > > 
>>> > > 
>>> > > 
>>> > > 
>>> > > On Wed, Dec 11, 2013 at 8:21 AM, Guillaume Subiron <mae...@subiron.org> wrote: 
>>> > > 
>>> > > > On 13/12/11 08:05, Michael DeHaan wrote: 
>>> > > > > Before we dive into a technical solution let me understand your 
>>> use case 
>>> > > > > and what you are modelling a bit better. 
>>> > > > > 
>>> > > > > So groups['ceph-ODSs'] would be all machines in the ceph-ODSs 
>>> group. 
>>> > > > 
>>> > > > That's right. 
>>> > > > 
>>> > > > > 
>>> > > > > I'd probably just define a variable like "disks" on the group, 
>>> but I'm 
>>> > > > > unclear why that wouldn't work in your case. 
>>> > > > > 
>>> > > > > I could probably understand more if I could see how "disks" 
>>> differs 
>>> > > > between 
>>> > > > > hosts. 
>>> > > > 
>>> > > > It's simple: my Ceph OSDs (storage nodes) are all different. Some 
>>> > > > contain 2 hard drives (sdb, sdc), some contain 10 (sdb, sdc, sdd…). 
>>> > > > "disks" is a list of hard drives, which differs from one host to 
>>> > > > another. For example: 
>>> > > > 
>>> > > >   inventory 
>>> > > > 
>>> > > > osd0 
>>> > > > osd1 
>>> > > > 
>>> > > > [ceph-OSDs] 
>>> > > > osd0 
>>> > > > osd1 
>>> > > > 
>>> > > >   host_vars/osd0 
>>> > > > 
>>> > > > disks: 
>>> > > >   - sdb 
>>> > > > 
>>> > > >   host_vars/osd1 
>>> > > > 
>>> > > > disks: 
>>> > > >   - sdb 
>>> > > >   - sdc 
>>> > > > 
>>> > > > In my nested loop, I need to loop over the Ceph Storage nodes and 
>>> > > > their hard drive. The hard drive list is an host variable 
>>> (accessible 
>>> > > > by hostvars[osd0]['disks'], for instance). 
>>> > > > 
>>> > > > With the example above, I want my playbook to do : 
>>> > > > 
>>> > > > ceph-deploy osd prepare osd0:sdb 
>>> > > > ceph-deploy osd prepare osd1:sdb 
>>> > > > ceph-deploy osd prepare osd1:sdc 
>>> > > > 
>>> > > > > > > On 11 December 2013 09:53, Guillaume Subiron <mae...@subiron.org> wrote: 
>>> > > > > > > 
>>> > > > > > > > Hi, 
>>> > > > > > > > 
>>> > > > > > > > I'm trying to do a special kind of nesting loop, using the 
>>> item of 
>>> > > > the 
>>> > > > > > > > first loop in the second loop: 
>>> > > > > > > > 
>>> > > > > > > > - name: Prepare OSDs 
>>> > > > > > > >   shell: ceph-deploy osd prepare {{ item[0] }}:{{ item[1] 
>>> }} 
>>> > > > > > > >   with_nested: 
>>> > > > > > > > - groups['ceph-OSDs'] 
>>> > > > > > > > - hostvars[item[0]]['disks'] 
>>> > > > > > > > 
>>> > > > > >