On 18.01.2015 09:25, Alan McKinnon wrote:

> My advice:
> 
> Start with groups. If you find you need to have lots of "when"
> clauses to make the plays work across more than one distro, and the
> whens follow the same format, then you might want to split them into
> groups. Make for example a "gentoo-www" group and a "debian-www"
> group, and create a super-group "www" that includes both.
> 
> It's one of those questions you can only really answer once you've
> built it for yourself and can see what works better in your
> environment

Yes, thanks!
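For the archives, here is roughly how I'd sketch that advice as an inventory (all group and host names below are made up for illustration):

```ini
# distro-specific groups plus a "www" super-group
[gentoo-www]
hiro.local

[debian-www]
deb1.local

[www:children]
gentoo-www
debian-www
```

A play can then target `www`, while distro-specific tasks key off `gentoo-www` / `debian-www` instead of repeating the same `when:` clauses everywhere.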

>> IMO ansible should correctly detect the running PID1 .. and it
>> tries to as far as I understand the code of the service-module.
>> 
>> For example I tried to write a task to ensure that ntpd is
>> down/disabled and chrony is installed/enabled/started ... no real
>> success so far.
> 
> If ansible confuses installed init systems with running init system, 
> then that will be a bug in ansible and should be reported

When I read

/usr/lib64/python2.7/site-packages/ansible/modules/core/system/service.py

I understand that it should detect the running systemd around line 403ff,
but maybe it picks the wrong tool/binary for starting/stopping services
when both openrc and systemd are installed (442ff).
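The check I would expect is something like the following Python sketch. This is my guess at the right logic, not what service.py actually does, and the path parameter only exists so it can be exercised in isolation:

```python
import os

def running_init(run_systemd_dir="/run/systemd/system"):
    """Guess the *running* init system, not merely an installed one.

    On Gentoo, openrc and systemd can be installed side by side, so
    probing for binaries like systemctl or rc-service is not enough.
    systemd creates /run/systemd/system at boot, so the presence of
    that directory is a reasonable signal that systemd is PID 1.
    """
    if os.path.isdir(run_systemd_dir):
        return "systemd"
    return "openrc"
```

As far as I know, systemd's own sd_booted() helper does essentially the same directory test.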

See this:

# ansible -i inventories/oops_nodes.yml -l hiro.local -m service \
    -a "name=chronyd state=started" all
hiro.local | FAILED >> {
    "failed": true,
    "msg": " * WARNING: chronyd is already starting\n"
}

That is ~ the same msg as in:

# /etc/init.d/chronyd start
 * WARNING: chronyd is already starting

(the openrc-script answering)

# systemctl status chronyd
● chronyd.service - Chrony Network Time Service
   Loaded: loaded (/usr/lib64/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mo 2015-01-19 10:57:39 CET; 9min ago
  Process: 761 ExecStart=/usr/sbin/chronyd (code=exited, status=0/SUCCESS)
 Main PID: 764 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─764 /usr/sbin/chronyd

But for another daemon:

# ansible -i inventories/oops_nodes.yml -l hiro.local -m service \
    -a "name=systemd-networkd state=started" all
hiro.local | success >> {
    "changed": false,
    "name": "systemd-networkd",
    "state": "started"
}
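For reference, what I was actually trying to express (ntpd off, chrony on) looks roughly like this sketch; it can only behave correctly once the init-system detection is right:

```yaml
# intended tasks; currently fails because the service module
# talks to the openrc script instead of systemd
- hosts: all
  tasks:
    - service: name=ntpd state=stopped enabled=no
    - service: name=chronyd state=started enabled=yes
```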


I might file the bug at b.g.o. .. going upstream seems a bit early ;-)

Stefan
