Nvm, saw the add_host in the comment.
On Sat, Apr 26, 2014 at 5:32 PM, Gustavo Hexsel wrote:
Then I can consider this a bug report. Without retries, wait_for fails for
every EC2 AMI I tried (admittedly, they're all variations of CentOS).
Things I've seen:
- it reports the port open, then refuses to connect
- it times out even though I was able to manually log in prior to the timeout
I'm always a bit wary when so many keywords come together. It's usually
the sign something can be simplified and is not "Ansible-like" enough.
- name: Wait for SSH to come up after the reboot
  wait_for: host={{ item }} port=22 delay=60 timeout=90 state=started
  with_items: groups.tag_
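For what it's worth, one workaround for the "port reported open but sshd not actually ready" race is to have wait_for watch for the SSH banner with search_regex (available in wait_for since Ansible 1.4) rather than just checking the TCP port. A sketch only; the group name is a placeholder:

```yaml
# Sketch: wait for the SSH banner, not just an open TCP port.
# "tag_myenv" is a placeholder group name.
- name: Wait for SSH to come up after the reboot
  local_action:
    module: wait_for
    host: "{{ item }}"
    port: 22
    delay: 60
    timeout: 300
    search_regex: OpenSSH
  with_items: groups['tag_myenv']
```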
Just as a side-note, I was able to get the wait_for mode to work for ssh
with a bit of fiddling (so you don't have to wait with 2 tasks):
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars_files:
    - env.yaml
  tasks:
    - name: Wait for SSH to come up after the reboot
      wa
Using local ./library content is fine, but please don't run a fork with
extra packages added if you are going to ask questions about them -- or at
least identify that you are when you do.
It can make Q&A very confusing when people ask about things that aren't
merged.
On Apr 25, 2014, at 2:31 PM, ghex...@gmail.com wrote:
So, I'm curious, for the case where you want to start "stopped" EC2
instances, what's the current recommended approach?
I've kind of ignored this task for now, managing that by hand (it's just
our dev env, but it's still a couple of dozen instances at least). I'm
almost about to pull Sco
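If memory serves, the ec2 module itself can change the state of existing instances when given instance_ids, so starting the stopped dev environment could look roughly like this (region and instance IDs are placeholders, and treat the state/instance_ids support as an assumption for your Ansible version):

```yaml
# Sketch: start previously stopped instances by ID with the ec2 module.
# Region and instance IDs below are placeholders.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Start the stopped dev instances
      ec2:
        region: us-east-1
        instance_ids:
          - i-00000001
          - i-00000002
        state: running
        wait: yes
```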
"so maybe a pull request against ec2_facts with the filters would get
accepted. Long run it does seem like hosts and modules need to have some
idea of state ... "
Anything applying to more than one host definitely shouldn't be done by the
facts module.
On Mon, Apr 21, 2014 at 1:54 PM, 'C. S. '
Thanks for the clarification; right, the use case and implementation are a bit
different. It seems like they could be combined, however.
On Apr 21, 2014, at 10:45 , Scott Anderson wrote:
Actually, it's not the same as ec2_facts other than it returns facts about an
instance.
ec2_facts only works when run on an actual AWS instance (it calls the Amazon
ec2 metadata servers) and it only retrieves the facts for that instance alone.
ec2_instance_facts, on the other hand, can retrieve
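Since ec2_instance_facts was never merged as-is, any invocation here is hypothetical; the parameter names below mirror the EC2 API and are assumptions:

```yaml
# Hypothetical sketch of the proposed ec2_instance_facts module;
# parameter names are assumptions, since the module was not merged.
- name: Collect facts for stopped instances in the dev environment
  local_action:
    module: ec2_instance_facts
    region: us-east-1
    filters:
      instance-state-name: stopped
      "tag:env": dev
```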
Thanks!
That's interesting, your module is the same as ec2_facts just with filtering.
And the ec2_facts module says it may add filtering in the notes. I think I'd
agree with Michael's pov, but it looks like we've already gone down facts being
outside the inventory module, so maybe a pull reques
Hi folks,
We're trying to implement a system where we can power environments on and off
AWS when they're not in use. However the ec2 inventory module excludes
instances that are not in a running state. It seems like adding an option to
the ec2 module to include stopped instances would work, bu
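For reference, the stock ec2.py inventory script is driven by ec2.ini, which (at least in later versions) has an all_instances toggle for exactly this case; treat the option name as an assumption if you're on an older checkout:

```ini
# ec2.ini fragment: include instances regardless of state.
# all_instances is an assumption for older ec2.py versions.
[ec2]
regions = us-east-1
all_instances = True
```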