Can I do something like this instead, using a when condition?
- name: Run Esxcli command
  vmware_host_esxcli:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ admin }}'
    password: '{{ pass }}'
    esxi_hostname: "{{ inventory_hostname }}"
    validate_cert
You probably need a handler with a changed_when condition.
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html#handlers
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_error_handling.html#defining-changed
See this discussion:
https://unix.stackexchange
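The handler pattern from the docs above might be sketched like this (the task names, the check command, and the expected output string are assumptions, not from the thread):

```yaml
- hosts: all
  tasks:
    - name: Check whether a reboot is required
      command: /usr/local/bin/check-reboot   # hypothetical check command
      register: check_result
      # report "changed" only when the output says a reboot is needed,
      # which is what triggers the handler
      changed_when: "'reboot required: True' in check_result.stdout"
      notify: Reboot host

  handlers:
    - name: Reboot host
      reboot:
```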
I've got a playbook that, based on the output, will say:
reboot required: True
or reboot required: False
If it's False, how do I skip the reboot in the following task?
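One hedged sketch of doing that with register and a when condition (the check command and the exact output format are assumptions):

```yaml
- name: Run the check that reports whether a reboot is needed
  command: /usr/local/bin/esxcli-check   # hypothetical command
  register: esxcli_out

- name: Reboot only when the output says it is required
  reboot:
  # the task is skipped entirely when the string is absent
  when: "'reboot required: True' in esxcli_out.stdout"
```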
Hi,
I have 2 VPN servers and remote-random in the VPN config, so clients connect
to server1 or server2.
I need clients to reboot after major upgrades (new kernel), and the problem
is that if the client connects to a different VPN server, the reboot module fails
because it cannot connect any more to the
I am testing a playbook to deploy updates and then force a reboot on Oracle
Linux ver 8.3. The DNF updates run fine, but when the system reboots it hangs.
The server is a VM running in a VMware 6.7 environment. In order to bring the
system back up I have to reset the VM from vCenter. Not sure h
Hi,
In our setup we reboot our servers group-wise.
With serial: 1 it is ensured that only one server is rebooted at a time.
The reboot is only performed if /var/run/reboot-required exists. This
might be Debian specific. This 'when' condition could give you an idea
how to handle server specific facts.
--8<---
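The approach described above could look like this sketch (the flag file is Debian-specific; task names are illustrative):

```yaml
- hosts: servers
  serial: 1            # only one server is rebooted at a time
  tasks:
    - name: Check for the reboot-required flag file
      stat:
        path: /var/run/reboot-required
      register: reboot_flag

    - name: Reboot only if the flag file exists
      reboot:
      when: reboot_flag.stat.exists
```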
Hmm... I have seen where someone put a delay in the task that does the
reboot. So you could probably do something like that, to make sure that
the node is back up before moving on, or you could do some sort of check
and register a variable, so that a node would only reboot if that node is
set.
O
Hi,
I get all servers from Azure and create an in-memory inventory with Ansible.
After getting the servers and creating the inventory, I connect to all servers,
collect facts, and run the updates. At this point I need to create a
process to reboot my servers based on custom facts.
On Tue, Jun 30, 2020 at 4:53 P
Are you using ansible to apply updates and reboot now? If so, what does
your current process look like?
--john
On Mon, Jun 29, 2020 at 3:26 PM Rafael Tomelin
wrote:
>
> Hi Guys,
> I have periodic updates on all RedHat servers at the same time, I need to
> create a strategy to restart my cluste
Hi Guys,
I have periodic updates on all RedHat servers at the same time, and I need to
create a strategy to restart my clusters because I can't stop my
applications and services.
How can I create a server-by-server reboot strategy within the clusters?
--
Best regards,
Rafael Tomelin
Tel.: 51-984104084
Sk
On 23.10.2019 15:52, 'Chris Bidwell - NOAA Federal' via Ansible Project
wrote:
> Doesn't this use the ansible server running the playbook to check if the
> port is open?
No, since host is server2 the wait_for module will run on that host.
You would need connection: local, delegate_to: localhost or
Doesn't this use the ansible server running the playbook to check if the
port is open?
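To illustrate the difference Kai points out (the host and port values are assumptions):

```yaml
# Without delegation, wait_for executes on the managed host,
# so the connectivity check is made from that host:
- name: Wait for port 443 on server2, checked from the managed host
  wait_for:
    host: server2
    port: 443

# With delegation, the same check runs on the control node instead:
- name: Wait for port 443 on server2, checked from the Ansible control node
  wait_for:
    host: server2
    port: 443
  delegate_to: localhost
```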
On Tue, Oct 22, 2019 at 5:07 PM Kai Stian Olstad <
ansible-project+l...@olstad.com> wrote:
> On 23.10.2019 00:42, 'Chris Bidwell - NOAA Federal' via Ansible Project
> wrote:
> > Hi all,
> >
> > So I know how to
On 23.10.2019 00:42, 'Chris Bidwell - NOAA Federal' via Ansible Project wrote:
> Hi all,
>
> So I know how to do this for the most part, but I've got two servers that
> when I have to reboot them, one needs to be done before the other and
> cannot before a specific port comes available. That port
Hi all,
So I know how to do this for the most part, but I've got two servers where,
when I have to reboot them, one needs to be done before the other and
cannot be done before a specific port becomes available. That port is only
accessible from that second server, not the ansible server itself.
Does that mak
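A hedged sketch of that scenario (host names, the port, and the timeout are assumptions): reboot the first server, then check its port from the second server, since only the second server can reach it, before rebooting the second one.

```yaml
- hosts: server1
  tasks:
    - name: Reboot the first server and wait for it to return
      reboot:

- hosts: server2
  tasks:
    - name: Wait until server1's service port is reachable from server2
      wait_for:
        host: server1
        port: 8443        # assumed port
        timeout: 600

    - name: Reboot the second server only after the port is up
      reboot:
```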
Hello
you wait 10 seconds before calling shutdown but check after 5 seconds
whether the port is accessible again.
I use async as well, but first check via a local_action that the SSH port
is drained/no longer reachable.
Best regards
Mirko
--
Sent from my mobile
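The pattern Mirko describes might be sketched as follows (the timings are illustrative assumptions):

```yaml
# Fire the reboot in the background so the task returns before SSH drops
- name: Reboot asynchronously
  shell: sleep 2 && /sbin/reboot
  async: 1
  poll: 0

# From the control node, first confirm SSH has actually gone away...
- name: Wait until the SSH port is drained
  local_action: wait_for host={{ inventory_hostname }} port=22 state=stopped

# ...then wait for it to come back before continuing
- name: Wait until SSH is reachable again
  local_action: wait_for host={{ inventory_hostname }} port=22 state=started delay=30
```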
On 15.03.2018 15:21, wrote:
On 15/03/2018 at 17:06, rraka1...@gmail.com wrote:
> Thanks JYL, by the way there is no log when i run this play , even
> system gets rebooted and up but it doesn't give any clue and breaks
> the connection .
> Anyway, i will try to debug the logs..
You can get more verbose output with -v and -vv,
Thanks JYL. By the way, there is no log when I run this play; even though the
system gets rebooted and comes back up, it doesn't give any clue and it breaks
the connection.
Anyway, I will try to debug the logs.
On Thursday, March 15, 2018 at 9:03:45 PM UTC+5:30, Jean-Yves LENHOF wrote:
>
> Hi,
>
>
> Le 15/03/20
Hi,
On 15/03/2018 at 15:21, rraka1...@gmail.com wrote:
> Hello Friends,
>
> Does anyone have experience using the reboot playbook on RHEL systems,
> i'm using below method and it reboots the systems but in between while
> system is rebooting it breaks the connection and does not wait for
> pos
Hello Friends,
Does anyone have experience using a reboot playbook on RHEL systems? I'm
using the method below, and it reboots the systems, but while the system
is rebooting it breaks the connection and does not wait for post-reboot
status like uptime. If someone has already overcome this, plea
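One way to handle this on Ansible 2.7 and later is the built-in reboot module, which tolerates the connection drop and waits for the host to return; the timeout and the post-reboot uptime check below are illustrative:

```yaml
- name: Reboot and wait for the host to come back
  reboot:
    reboot_timeout: 600   # assumed value; seconds to wait for the host

- name: Verify the host is up post-reboot
  command: uptime
  register: uptime_out

- name: Show the post-reboot uptime
  debug:
    var: uptime_out.stdout
```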
ignore_errors will not avoid connection errors, only task errors, so
'UNREACHABLE' will not be captured by it.
On Sun, Feb 19, 2017 at 2:30 AM, Pshem Kowalczyk wrote:
> Hi,
>
> Perhaps not directly answering your question - but a workaround I used in a
> number of playbooks (added to the tasks tha
Hi,
Perhaps not directly answering your question - but a workaround I used in a
number of playbooks (added to the task that times out, in your case the
one that unsets the noout):
- name: Unset the noout flag
  command: ceph osd unset noout
  register: result
  until: result.failed is undefined
  re
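Completed with illustrative retry values (the counts and delay are assumptions), the workaround might look like:

```yaml
- name: Unset the noout flag, retrying until the cluster is reachable again
  command: ceph osd unset noout
  register: result
  # keep retrying while the previous attempt failed
  until: result.failed is undefined
  retries: 10   # assumed
  delay: 30     # assumed, seconds between attempts
```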
Trying to reboot with ignore_errors: true still errors out. The control
machine and all ceph nodes run Ubuntu 16.04, with ansible 2.2.1.0 installed
via pip.
---
- hosts: osds
  serial: 1
  tasks:
    - name: Set the noout flag
      command: ceph osd set noout
    - name: Reboot the server
      command: s
Thank you!
But unfortunately, the same problem occurred on devel.
% git log HEAD -1
commit 94db7365b911ac740902142e807ab5f65a970f94
Author: Michael DeHaan
Date: Fri Oct 3 17:08:52 2014 -0400
__getattr__ to hide some of the attribute magic.
% git submodule update
% ansible-playbook -
Hi Shirou,
I believe this error was fixed in devel, if you'd like to test it there.
Thanks!
On Fri, Oct 3, 2014 at 3:03 AM, shirou wrote:
> Hi all,
>
> I used ansible-1.7.2, and I created this playbook to reboot and wait
> until wakeup.
>
> - name: reboot
> shell: /sbin/reboot
>
Hi all,
I used ansible-1.7.2, and I created this playbook to reboot and wait
until the server wakes up.
- name: reboot
  shell: /sbin/reboot
- name: wait for the server to go down
  local_action: wait_for host={{ inventory_hostname }} port=22 state=stopped
It works on RHEL6. However, on RHEL
Ah good point, didn't realize this was possible. Thanks!
On Monday, 2 December 2013 19:44:03 UTC, Brian Coca wrote:
>
>
> or just add sudo:false to the local_action, you should really not need
> superuser for the wait_for
>
>
> --
> Brian Coca
> Stultorum infinitus est numerus
>
> 0111011100
or just add sudo: false to the local_action; you should really not need
superuser for wait_for
--
Brian Coca
Stultorum infinitus est numerus
0111011100100110010101101110001001110111011000010110011101010010011100110110110101110111001001110111
Pedo mellon a m
FOUND IT: if you run local_action wait_for with sudo: true, make sure your
*local* user doesn't require a sudo password either by editing /etc/sudoers
locally (e.g. mathias ALL=(ALL) NOPASSWD: ALL)!
Mathias
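For reference, on modern Ansible the sudo: keyword has been replaced by become:, so the equivalent fix, avoiding any local NOPASSWD sudoers entry, would be a sketch like:

```yaml
# Run the check unprivileged on the control node; wait_for needs no root
- name: Wait for the server to go down
  local_action: wait_for host={{ inventory_hostname }} port=22 state=stopped
  become: false
```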
On Monday, 2 December 2013 19:23:29 UTC, Mathias Bogaert wrote:
>
> Hi Michael,
>
> The sy
Hi Michael,
The system has SSH keys installed, and never requires any passwords.
The command line options are ansible-playbook -i hosts --extra-vars
"accelerate=true" site.yml.
I'll debug it a bit further, but this definitely worked using Ansible 1.3.
Cheers,
Mathias
On Sunday, 1 December 2013
To me it just looks like the system didn't require a sudo password before,
you rebooted it, and now it needs one.
You didn't show any of the command line options you used to execute Ansible
with though, so it's hard to say with incomplete information.
Don't think that's a 1.3/1.4 thing b
Ping?
On Tuesday, 26 November 2013 21:50:17 UTC, Mathias Bogaert wrote:
>
> Hi James,
>
> The playbook is available here:
>
> https://github.com/analytically/hadoop-ansible
>
> The roles that has the reboot here:
>
>
> https://github.com/analytically/hadoop-ansible/blob/master/roles/2_aggregated_l
Hi James,
The playbook is available here:
https://github.com/analytically/hadoop-ansible
The role that has the reboot is here:
https://github.com/analytically/hadoop-ansible/blob/master/roles/2_aggregated_links/tasks/main.yml
Thanks!
On Tuesday, 26 November 2013 21:46:43 UTC, James Tanner wrote
Show us your full ansible-playbook command+args and the playbook please.
On 11/26/2013 04:33 PM, Mathias Bogaert wrote:
Here's my debug output for local_action: wait_for host={{
inventory_hostname }} port=22 state=stopped :
TASK: [2_aggregated_links | wait for the server to go down (reboot)]
Here's my debug output for local_action: wait_for host={{
inventory_hostname }} port=22 state=stopped :
TASK: [2_aggregated_links | wait for the server to go down (reboot)]
**
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p
$HOME/.ansible/tmp/ansible-1385501543.62-188179043733979 && chmod
It's not immediately obvious to me what error you are pointing out. Run
ansible-playbook with - and show us that output.
On 11/25/2013 04:50 PM, Mathias Bogaert wrote:
Hi,
Using Ansible 1.3, the following worked:
- name: reboot after bonding the interfaces
shell: sleep 2s && /sbin/reboo
Hi,
Using Ansible 1.3, the following worked:
- name: reboot after bonding the interfaces
shell: sleep 2s && /sbin/reboot &
- name: wait for the server to go down (reboot)
local_action: wait_for host={{ inventory_hostname }} port=22 state=stopped
- name: wait for the server to come up
loca