and 2 years later ...
: )
On Friday, 30 September 2016, Marc Tamsky wrote:
> On Friday, December 19, 2014 at 7:50:09 AM UTC-8, Dario Bertini wrote:
>>
>> I was thinking of simply using
>>
>> ansible -m debug -a "msg={{the_jinja_expression_I_want_to_test}}"
>>
>> but I
I usually attach a debugger,
http://michaeldehaan.net/post/35403909347/tips-on-using-debuggers-with-ansible
or set a small template action to dump the contents of the dictionary/variable (whatever it is) to a local file on my box. It's easier to inspect it that way.
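The dump-to-a-local-file idea can be done as a one-off task; a minimal sketch, where the variable name and destination path are assumptions:

```yaml
- name: dump a variable to a local file for inspection
  local_action: copy content="{{ some_dict | to_nice_json }}" dest=/tmp/some_dict.json
```

Open /tmp/some_dict.json afterwards to see exactly what Ansible resolved the variable to.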
On Friday, December 19, 2014
I use a different approach,
root
├── inventory
│   ├── dev
│   ├── vagrant
│   ├── preprod
│   └── production
└── group_vars
    ├── all.yaml
    ├── dev/
    │   └── secrets.yaml
    ├── vagrant/
    │   └── secrets.yaml
    └── preprod/
        └── secrets.yaml
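With a layout like that, the environment is picked at run time by pointing -i at the matching inventory file; the group_vars for that environment's groups are then loaded automatically (the playbook name here is an assumption):

```
ansible-playbook -i inventory/dev site.yml
```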
I use jenkins heavily for orchestration, in a setup where I have multiple
jenkins CD pipelines in different environments all deploying code using
Ansible.
See the setup below; it's an example of deploying Jenkins and setting up the jobs and dependencies.
It also shows how to provision, and it's holding up. (Unfortunately some development familiarity is required.)
Perhaps it's not a thing on 1.7.2 regardless, so I'd try that first.
Thanks!
On Thu, Oct 30, 2014 at 9:26 AM, Azul Inho ma...@azulinho.com wrote:
Have an odd one here,
I'm on ansible 1.6.2, my
This works on my machine,
- name: wait for server to come back
  local_action: shell while true; do echo Waiting ...; ssh -o ConnectTimeout=5 -o BatchMode=yes {{ inventory_hostname }} pwd; [ $? -eq 0 ] && break || sleep 5; done
  sudo: false
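For what it's worth, the same wait can be expressed with the wait_for module instead of a shell loop; a sketch (the delay and timeout values are assumptions):

```yaml
- name: wait for ssh to come back
  local_action: wait_for host={{ inventory_hostname }} port=22 delay=5 timeout=300
  sudo: false
```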
On Monday, November 10, 2014 10:14:50 AM UTC,
just a heads up,
I run RH6.5, not able to upgrade at the moment to 6.6 (and it looks like it
wouldn't help either), I have worked around the ControlPersist issue by
installing an OpenSSH 6 client on my control host box (/opt/openssh6),
I then have a wrapper script that calls ansible-playbook and
I do this:
# first batch in parallel
- hosts:
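A wrapper script along the lines described might look like this (the /opt/openssh6 path is from the post above; the wrapper's own location and name are assumptions):

```shell
# write a small wrapper that puts the OpenSSH 6 client first on PATH,
# then hands everything over to ansible-playbook unchanged
cat > /tmp/ap-openssh6 <<'EOF'
#!/bin/sh
# prefer the locally-installed OpenSSH 6 client over the system ssh
export PATH=/opt/openssh6/bin:$PATH
exec ansible-playbook "$@"
EOF
chmod +x /tmp/ap-openssh6
```

Ansible shells out to whatever `ssh` it finds first on PATH, so prepending the newer client's bin directory is enough to get ControlPersist support without touching the system OpenSSH.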
Have an odd one here,
I'm on ansible 1.6.2, my plays run slow, I'm stuck with RH6 so
controlmaster/persist is not an option.
I have this weird behaviour where gathering facts from a box takes almost a
minute, and then it just sits there for another 47 seconds or so before it
starts the first
command: touch /var/tmp/last-upstream-fetch
On 12 June 2014 18:34, Azul Inho m...@azulinho.com wrote:
Hi there,
I saw an example some time ago, either in a thread, a blog post or on IRC, about executing a single task in a playbook only once every 24 hours.
Can't find that example anywhere
Does anyone here have an example of how to achieve this?
I have a task that syncs a mirror with a bunch of upstream repos,
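One shell-level way to get that behaviour is a marker file checked with find; a sketch, where the marker path matches the touch command seen earlier in the thread but the echo is only a stand-in for the real sync command (1440 minutes = 24 hours):

```shell
# run the sync only when the marker file is missing or older than 24 hours
marker=/var/tmp/last-upstream-fetch
if [ -z "$(find "$marker" -mmin -1440 2>/dev/null)" ]; then
    echo "syncing mirror"   # stand-in for the real reposync/rsync command
    touch "$marker"         # record this run so the next 24 hours are skipped
fi
```

In a playbook the same check could gate the sync task with a stat of the marker file and a when: condition on its age.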
I am getting exactly the same error message, but in my case I am not using
virtualenv.
Did you get this sorted?
On Sunday, March 16, 2014 3:47:18 PM UTC, Yapeng Wu wrote:
Hello, I am new to Ansible.
I have installed ansible in the virtualenv. But when I load a playbook, it
failed in one
the application,
then you would of course put that in a separate role, and notify after
changing the config so the app will get restarted.
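That separate-role-plus-notify pattern looks roughly like this; all the names and paths here are assumptions:

```yaml
# roles/myapp/tasks/main.yml
- name: deploy app config
  template: src=app.conf.j2 dest=/etc/myapp/app.conf
  notify: restart myapp

# roles/myapp/handlers/main.yml
- name: restart myapp
  service: name=myapp state=restarted
```

The handler fires only when the template task actually changes the file, so the app is restarted exactly when its config changes and not on every run.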
Den måndagen den 3:e mars 2014 kl. 17:02:42 UTC+1 skrev Azul Inho:
In my role common I have:
- name: yum install OS updates
  shell: yum update -y --disablerepo=custom_repo_here
On Thu, Mar 6, 2014 at 12:16 PM, Azul Inho m...@azulinho.com wrote:
yes, that's what I am looking into now.
I deploy the configuration for the app through a different task, which
doesn't change very often.
Hacking the RPM initscripts to restart the app looks
I had similar doubts last week; got my answers on the IRC channel.
Quick question for Michael, do you use git for your documentation pages or is
it wiki based? I would be happy to submit a pull request with enhancements to
the docs.
It would also help me with my bad memory
In my role common I have:
- name: yum install OS updates
  shell: yum update -y
That's an approach; however, that '-x list' is likely to grow really quickly, as it will need to list every service which I restart after an update.
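For reference, the '-x list' style being discussed looks something like this in a task; the package globs are examples only, not from the original post:

```yaml
- name: yum update everything except packages whose services we restart by hand
  shell: yum update -y -x 'jenkins*' -x 'httpd*'
```

Each -x adds a package (or glob) to exclude from the update, which is exactly the list that keeps growing as more services need protecting.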
On 3 March 2014 16:09, Petros Moisiadis ernes...@yahoo.gr wrote:
On 03/03/2014 06:02 PM, Azul Inho wrote:
In my role common I have
which is quite
good in my view.
I used vcloud-rest to automate deployment of new VAPPs into a well-known
vcloud org, have a look, it may help a bit
https://github.com/dvla/vcloud-management-tools
azul
On Thursday, January 23, 2014 9:38:34 AM UTC, Luca Cancelliere wrote:
Vcloud director