I just did some more testing with this. The behavior is:

Ansible sometimes writes two entries to the file, sometimes three.

To me, this indicates that file access is not exclusive: the parallel 
Ansible processes all open and close the file, and the last one in wins. 
I also tried both

delegate_to: localhost

and

connection: local

Neither of these fixed the file consistency problem outlined above.
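
In case it helps clarify what I'm after: the writes would be safe if a 
single task on a single host did all of them. An untested sketch of that 
idea (assumes run_once, new in Ansible 1.7, and that the single writer can 
reach each host's registered quantum_info and scanned_key through 
hostvars):

- name: Set SSH known_hosts entries (single writer)
  lineinfile:
    dest: ~/.ssh/known_hosts
    line: "{{ hostvars[item].scanned_key.stdout }}"
    regexp: "^{{ hostvars[item].quantum_info.public_ip }} "
    state: present
  run_once: true          # only the first host in the play runs this task
  delegate_to: localhost  # one local writer, so no concurrent file access
  with_items: groups['os_api']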


Other approaches:

   - I can't use "add_host", because it does not work in parallel (see 
   GitHub issue <https://github.com/ansible/ansible/issues/2963>).
   - I could keep the "register" results in variables and reuse them in a 
   later role, but I can't see a way to do it (the group thread 
   <https://groups.google.com/forum/#!topic/ansible-project/SyH-bL8rJIA> 
   about this is unanswered at the moment).
   - I guess I'll have to write each host's variables to a local YAML 
   file, then read those files back as variables in a later role (rough 
   sketch below). Seems clunky, but I see no other way.
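
Something like this, roughly (untested; "/tmp/os_facts" is just a 
placeholder path). Each host writes only its own file, so there is no 
shared-file race:

- name: Make sure the local facts directory exists
  file:
    path: /tmp/os_facts
    state: directory
  delegate_to: localhost

- name: Write this host's facts to a local YAML file
  copy:
    dest: "/tmp/os_facts/{{ oshost }}.yml"
    # caveat: ssh-keyscan can return several lines, so host_key may
    # need more careful quoting than this
    content: |
      public_ip: {{ quantum_info.public_ip }}
      host_key: {{ scanned_key.stdout }}
  delegate_to: localhost

Then in the later role:

- name: Read the saved facts back in
  include_vars: "/tmp/os_facts/{{ oshost }}.yml"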


Any other suggestions?


On Tuesday, September 9, 2014 3:36:55 PM UTC-4, Kurt Yoder wrote:
>
> Hi list,
>
> I posted a while back about a way to parallelize Openstack node creation. 
> To recap, I have a role with the following task:
>
> - name: Set up API connections for all Openstack nodes
>   add_host:
>     name: "os_api_{{ item }}"
>     ansible_ssh_host: 127.0.0.1
>     groups: os_api
>     ansible_connection: local
>     oshost: "{{ item }}"
>   with_items: cluster
>
>
> This gives me a bunch of API connections which I run in parallel in 
> another role and task:
>
> - name: Launch cluster VM on Openstack
>   nova_compute:
>     name: "{{ os_username }}_{{ oshost }}"
>     state: present
>     login_username: "{{ os_username }}"
>     login_tenant_name: "{{ os_tenant }}"
>     login_password: "{{ os_password }}"
>     image_id: "{{ os_image_id }}"
>     key_name: "{{ os_username }}_controller_key"
>     wait_for: 200
>     flavor_id: "{{ os_flavor_id }}"
>     auth_url: "{{ os_url }}"
>     user_data: "{{ lookup('template', '../templates/cloud-config.j2') }}"
>
> - name: Assign IP address to cluster VM
>   quantum_floating_ip:
>     state: present
>     login_username: "{{ os_username }}"
>     login_password: "{{ os_password }}"
>     login_tenant_name: "{{ os_tenant }}"
>     network_name: "{{ os_network_name }}"
>     instance_name: "{{ os_username }}_{{ oshost }}"
>     internal_network_name: "{{ os_internal_network_name }}"
>     auth_url: "{{ os_url }}"
>   register: quantum_info
>
> - name: Wait for cluster SSH to become available
>   wait_for:
>     port: 22
>     host: "{{ quantum_info.public_ip }}"
>     timeout: 180
>     state: started
>
> - name: Retrieve cluster public SSH host key
>   shell: "ssh-keyscan {{ quantum_info.public_ip }}"
>   register: scanned_key
>
>
> Now I have a list of IPs for the configured hosts. I want to record their 
> SSH host keys, as captured by "scanned_key". If I add the following, will 
> it safely serialize access to the local "known_hosts" file:
>
> - name: Set SSH known_hosts entry
>   lineinfile:
>     dest: ~/.ssh/known_hosts
>     line: "{{ scanned_key.stdout }}"
>     state: present
>     regexp: "^{{ quantum_info.public_ip }} "
>   delegate_to: localhost
>
> Is this the recommended way to do it?
>
>
> Thanks,
>
> -Kurt
>
