invalid output was: SUDO-SUCCESS-siogahtpvfcksvvxfxqslqrszpkxisfp
bzip2: (stdin) is not a bzip2 file.
bzip2: (stdin) is not a bzip2 file.
bzip2: (stdin) is not a bzip2 file.
bzip2: (stdin) is not a bzip2 file.
Traceback (most recent call last):
File
Hi,
I have a little problem with conditionally setting a proxy.
Situation:
- many systems in different domains (since this is a hosting environment)
- two domains are internal and only get an outside connection via a proxy
So I need no proxy set for all systems that aren't in those domains.
For those two domains,
It seems this should work according to the docs. It does work if I use
with_items: app1
$ cat jj.yml
---
- hosts: all
  vars:
    app1:
      - base: {{ file | basename }}
    app2:
      - base: {{ file }}
  tasks:
    - name: debug
      debug: msg={{ item.base }}
      with_items:
It's not silly so much as --limit doing exactly what it says it's supposed to do.
However, see my proposal about --limit setting a default, and being able to
also set limit: all on some tasks, such that it's possible to control the
limit per play and only have --limit pass it in for some.
I think
So this is a traceback that looks like it needs to be caught to raise a
proper error.
Please let us know what version of Ansible you are using, though in this
case it seems like we need to catch an exception in the apt module.
Once you have pinned down the Ansible version, please file a ticket
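The kind of fix meant here can be sketched in a few lines: wrap the failing call and return a structured module failure instead of letting the raw traceback reach the user. The names below (install_packages, run_module, the error message) are illustrative assumptions, not the actual apt module code.

```python
# Sketch: convert a backend exception into a clean, structured
# failure result instead of an uncaught traceback.

def install_packages(names):
    # Stand-in for the call that raises in the reported traceback.
    raise RuntimeError("could not open cache")

def run_module(names):
    try:
        install_packages(names)
    except RuntimeError as exc:
        # Return a fail_json-style dict rather than letting the
        # traceback propagate to the user.
        return {"failed": True, "msg": "apt error: %s" % exc}
    return {"changed": True}

print(run_module(["nginx"]))
```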
> situation:
> - many systems in different domains (since hosting env)
> - two domains are internal and only get outside connection via a proxy
This is EXACTLY why the environment support was written, actually.
I was automating an OpenStack config at a very, hmm, structured, company
and I had to
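For reference, the environment support mentioned above can be applied conditionally per host. A minimal sketch, assuming inventory group names internal_a/internal_b and a proxy at proxy.example.com:3128 (all of these are made-up placeholders):

```yaml
- hosts: all
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:3128
      https_proxy: http://proxy.example.com:3128
    empty_env: {}
  tasks:
    - name: fetch a file, using the proxy only on internal hosts
      get_url: url=http://example.com/pkg.tar.gz dest=/tmp/pkg.tar.gz
      # Pick the proxy vars only when the host is in one of the
      # two internal groups; everyone else gets an empty environment.
      environment: "{{ proxy_env if ('internal_a' in group_names or 'internal_b' in group_names) else empty_env }}"
```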
In the above, app1 and app2 are not hashes, but lists of strings.
I think you would want to define them like:
app2: { base: foo }
etc
But really, probably:
apps:
  - { name: app1, base: foo }
  - { name: app2, base: bar }
tasks:
  - blarg: ...
    with_items: apps
Etc.
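Put together, a runnable version of that suggestion might look like this (the vars placement and the foo/bar values are assumptions for illustration):

```yaml
- hosts: all
  vars:
    apps:
      - { name: app1, base: foo }
      - { name: app2, base: bar }
  tasks:
    - name: debug each app
      # item is one hash per loop iteration, so item.name and
      # item.base resolve cleanly.
      debug: msg="{{ item.name }} -> {{ item.base }}"
      with_items: apps
```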
Let me know if
Just wanted to clarify some behavior: this does not fail the entire
play in any abnormal way.
It takes the host out of rotation for the rest of the playbook; the host
has failed.
If there are other hosts under configuration, those hosts will still be
configured, unless they have
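One way to opt a single task out of that behavior is ignore_errors, which lets the play continue on a host even when that task fails (the command path here is a placeholder):

```yaml
tasks:
  - name: best-effort step; a failure here won't remove the host
    command: /usr/local/bin/optional-step
    ignore_errors: yes
```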
Ergh, meant to submit this to ansible-devel, sorry about that. Anyway, I
definitely agree that some potential backends (eg: SQL database) might not
be suited for the type of workload here, but that redis should perform well
in any conceivable use-case. I've submitted a pull request #8203
Thanks, this should be pretty easy to test out and benchmark.
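For anyone wanting to test it, redis fact caching is switched on in ansible.cfg roughly like this (assuming a redis server on localhost with default settings; the timeout value is an arbitrary example):

```ini
[defaults]
; Only gather facts when the cache has no fresh entry for the host.
gathering = smart
fact_caching = redis
; Cache lifetime in seconds (24h here).
fact_caching_timeout = 86400
```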
I'm classifying this as P2 so we can get it some attention earlier.
On Sat, Jul 19, 2014 at 12:59 PM, Josh Drake m0n...@gmail.com wrote:
> Ergh, meant to submit this to ansible-devel, sorry about that. Anyway, I
> definitely agree