Sure will do. Thanks James
On Tue, 10 Feb 2015 at 04:30 James Cammarata wrote:
> Hi James,
>
> Could you open a github issue for this so we can keep track of it? In the
> mean time, you can use the `no_log: yes` option on a per-task basis to
> ensure sensitive information is not logged.
>
> Thanks!
Aha! That's perfect - thanks James
--
You received this message because you are subscribed to the Google Groups
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to ansible-project+unsubscr...@googlegroups.com.
To post to this group, send e
Hi Le Van, this is typically what tags are used for. The other alternative
is to simply create separate playbooks for them, though there is not really
any best practice guideline for that. It's mostly a matter of personal
preference.
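A minimal sketch of the tags approach (playbook, hosts, and tag names here are made up for illustration):

```yaml
# site.yml (hypothetical) - tag tasks so subsets can be run on demand
- hosts: webservers
  tasks:
    - name: install nginx
      apt: name=nginx state=present
      tags:
        - setup
    - name: deploy app config
      template: src=app.conf.j2 dest=/etc/nginx/conf.d/app.conf
      tags:
        - deploy
```

Running `ansible-playbook site.yml --tags deploy` would then execute only the tasks tagged `deploy`.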
Thanks!
On Wed, Feb 4, 2015 at 9:59 PM, Le Van wrote:
> Hi,
>
Hi Gary, I believe this is expected behavior. You can set rds=False in the
ec2.ini to have RDS instance information removed from the dynamic inventory
results.
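The ec2.ini change is just a one-line toggle (the option lives in the `[ec2]` section of the stock inventory script's config):

```ini
# ec2.ini - skip RDS instances in the dynamic inventory
[ec2]
rds = False
```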
Hope that helps!
On Mon, Feb 9, 2015 at 5:09 PM, Gary Malouf wrote:
> I'm attempting to separate my staging and production server via t
Hi Daniel, you can use the environment variable
ANSIBLE_ROLES_PATH=/path/to/roles in front of ansible-playbook to modify
the path on the fly, without having to modify your ansible.cfg (or deploy a
local cfg in the working directory). In either case, the roles path can be
a colon-separated list (jus
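For example (paths and playbook name are illustrative):

```shell
# one-off roles path override; colon-separated like PATH
ANSIBLE_ROLES_PATH=./vendor_roles:/etc/ansible/roles ansible-playbook site.yml
```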
Hi Craig, not quite sure here but it might be that your remote repos are
not setup correctly on that system?
On Mon, Feb 9, 2015 at 2:53 PM, Craig White wrote:
> been able to do simple ad-hoc commands but now trying to run a relatively
> simple playbook
>
> # ansible-playbook -C /etc/ansible/pla
Hi Rob,
I don't think there's anything overly verbose about that. You could use the
defaults/main.yml file to put defaults in, rather than using the default()
method you're using, but that depends on what you're trying to do to a
degree and might not work since you're using with_subelements.
Hope
We are experimenting with a staging and production environment each in
their own vpc. It has been a struggle to use the EC2 module with this
setup because, despite applying instance filters in ec2.ini, the 'count'
tag used by the ec2 module for provisioning counts instances across vpcs if
they
Hi James,
Could you open a github issue for this so we can keep track of it? In the
mean time, you can use the `no_log: yes` option on a per-task basis to
ensure sensitive information is not logged.
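A per-task sketch (task and file names are hypothetical):

```yaml
- name: write secret to config file
  template: src=secret.conf.j2 dest=/etc/app/secret.conf
  no_log: yes
```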
Thanks!
On Mon, Feb 9, 2015 at 12:20 PM, James Morgan
wrote:
> Hi,
>
> I have some sensitive da
Hi Chris, you can disable this warning in your ansible.cfg file
(system_warnings=no).
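In ansible.cfg that would look like the following (the option lives in the [defaults] section; the config parser accepts boolean spellings such as no/False):

```ini
# ansible.cfg - suppress system warnings like the gmp/pycrypto notice
[defaults]
system_warnings = False
```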
On Mon, Feb 9, 2015 at 9:34 AM, Chris Short wrote:
> I just built a clean CentOS 6.6 server and have ansible installed. All
> packages are up to date and I'm still seeing this error:
>
> [WARNING]: The version
Hi Chris, are you specifying the remote_user value, or does the current
user running Ansible on the controller match the remote user on the target
system?
On Mon, Feb 9, 2015 at 9:24 AM, Chris Adams wrote:
> Hi there,
>
> Can someone give me some pointers as to why an ansible role using
> sudo_u
Actually, looking through the code I believe you're correct. Setting the
option for -i again *might* work (assuming the rsync command will override
the setting with the second value, instead of throwing an error), however I
think using the rsync command directly is what you want anyway, as you said
I looked at the module code and it seemed like it would automatically add
the key if it was being used. I ended up running rsync directly using the
command module and it seems to be working now.
Looking at the rsync_opts argument, it seems like it will only append to
the existing arguments, no
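For the record, a direct rsync via the command module for that scenario might look something like this (paths are placeholders); since the command runs on the target host itself, no extra SSH key handling is involved:

```yaml
- name: copy file between two paths on the remote host
  command: rsync -a /src/path/file /dst/path/file
```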
Thank you. I added the following and now everything is working great!
ansible.cfg:
[ssh_connection]
ssh_args = -F ssh.config
pipelining = True
ssh.config:
Host *
ControlMaster auto
ControlPersist 60s
ControlPath ~/.ssh/ansible-%r@%h:%p
On Monday, February 9, 2015 at 6:58:06 PM UTC-5, M
Hi Michael, have you tried setting the key option via the rsync_opts and/or
rsync_path parameters to synchronize module?
On Sat, Feb 7, 2015 at 10:05 AM, Michael Spiegle
wrote:
> I have a simple task to copy a file from one path to another on a remote
> host. I need a private key to SSH into th
Hi,
I believe if you want to keep this in a common role, the best way would be
to simply list each task to install the key as follows:
- name: install rabbitmq key
  apt_key: file=apt_keys/some_rabbitmq_specific_key.asc
  when: "'rabbitmq' in group_names"
...
If you wanted to make it more gener
When specifying ssh_args, what you lose is the ControlPath configuration,
not pipelining. Pipelining is where it reduces round trips and ControlPath
re-uses a single socket connection to the server instead of new connections
for every communication.
If you want to use ssh_args and maintain Control
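One way to keep the control-socket settings alongside custom ssh_args might be the following (a sketch, not verified on every version; note the %% escaping required by the config parser):

```ini
# ansible.cfg - re-add ControlMaster/ControlPersist lost by overriding ssh_args
[ssh_connection]
ssh_args = -F ssh.config -o ControlMaster=auto -o ControlPersist=60s
control_path = ~/.ssh/ansible-%%r@%%h:%%p
pipelining = True
```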
I'm attempting to separate my staging and production servers via two
different vpcs. I'm using a filter as follows in each of ec2.ini files to
separate instances running in the staging vpc from those in the production
one.
instance_filters = vpc-id=vpc-someid
This works well for filtering out
Is it possible to pass the roles_path parameter (from ansible.cfg) or an
equivalent as an argument to ansible-playbook?
My use case is: I'm using the ansible-galaxy (tool not web service) to pull
my shared ansible roles from private GitHub repositories into a local
directory called "vendor_role
My ansible.cfg contains the following right now:
[ssh_connection]
ssh_args = -F ssh.config
pipelining = True
If I run Ansible, my "base" role takes 02:06 (mm:ss) to run on a single
host and makes many SSH connections. If I simply comment out ssh_args,
pipelining works and Ansible runs in 00:
Many ways to do it; another example is having a variable defined in the
production/staging groups:
[production:vars]
subdomain: example.com
[staging:vars]
subdomain: -staging.example.com
...
in template:
server_name {{ inventory_hostname + subdomain|default('-dev.example.com') }}
short answer is yes... easily.
An example of your template scenario might look something like this:
{% if ansible_local.oscar.tags.environment == "production" %}
server_name foo.example.com;
{% elif ansible_local.oscar.tags.environment == "staging" %}
server_nam
I'm trying to wrap my brain around something. Can Ansible deploy different
application configuration files based upon a hostname, IP address, network
address, etc? To take that a step further, could Ansible look at a template
and then fill in the blanks based upon hostname, IP, and/or network
a
been able to do simple ad-hoc commands but now trying to run a relatively
simple playbook
# ansible-playbook -C /etc/ansible/playbooks/apache/site.yml
[WARNING]: The version of gmp you have installed has a known issue
regarding
timing vulnerabilities when used with pycrypto. If possible, you sh
Hi Adam
comments inline.
( TL;DR:
* good work on the troubleshooting!
* I think you're right, this is likely an environment/path thing.
* Ansible 1.4 is ancient, that probably isn't helping.
* there's an executable parameter you can use to hardcode the gem
command to run.
)
On 9 February 2015 a
Hi,
I'm trying to design a relatively generic role for managing AWS Elastic
Beanstalk applications. Because some of the configuration options need to
be the same across a few environments I'd like to have a place towards the
top level of the vars file that contains these. I have something worki
Hi,
I have some sensitive data (keys and pass files etc) stored in yaml var
files and encrypted with the vault.
Just noticed that if I have -v set it prints out the contents when I import
the var files.
I would have expected the facts to know that the file it's loading was from
the vault and t
Hi,
Even after trying your suggestion, I am getting the following error:
failed: [localhost -> 127.0.0.1] => {"failed": true}
msg: unsupported parameter for module: resource_tags
FATAL: all hosts have already failed -- aborting
This is happening for
tasks:
- name: Create a VPC
registe
The site is in the same git repo under docsite (_themes/srtd/ for the
formatting), you can submit a PR to select something more palatable
fontwise.
--
Brian Coca
Any chance you could change your Lato font to something else? While I love
experimenting with fonts Lato shows up very badly on the Chrome browser. I
realized it was a Chrome specific problem when I looked at it on IE. Lato
looks quite nice there. Anyway, sorry for being a pest but Chrome happ
On Mon, Feb 9, 2015 at 12:18 PM, ProfHase wrote:
> Do you have any idea what to do about the machine-specific ssl-certificates?
Again, I can't think of a valid "machine-specific" ssl certificate
case. It should be based on roles, right? But I liked Brian Coca's
earlier comment about putting the n
Thanks a lot, great idea for my poor design :) (did not have the role
dependencies in mind).
Do you have any idea what to do about the machine-specific ssl-certificates?
Thanks, Ilya
Am Montag, 9. Februar 2015 17:25:33 UTC+1 schrieb Michael Peters:
>
> On Mon, Feb 9, 2015 at 10:55 AM, ProfHase
Hi,
I can see this behaviour with the following gems:
github: haste
rubygems: ops_build
opsci:~ $ sudo gem list
*** LOCAL GEMS ***
TASK: [common-server | Install Haste client binary]
***
ESTABLISH CONNECTION FOR USER: rsd
REMOTE_MODULE gem name=haste state=latest
EXEC s
ansible itself (the command line tools) supports Solaris, mainly as a
target but you should also be able to run it from a solaris box with a
new enough version of python. Tower is a proprietary web frontend to
ansible, it does not need to run on Solaris to manage solaris hosts.
For tower you can s
On Mon, Feb 9, 2015 at 10:55 AM, ProfHase wrote:
> @Michael Peters:
> I am using monit for monitoring. And depending on machine there are
> completely different services to monitor. I could also do multiple roles
> like 'monit_webservice' , 'monit_db', 'monit_application_a',
> 'monit_application_
you can put the list of certificates per host/app in a variable and
then just reference that variable to copy the certs and reference them
from the configs.
I really don't understand the comment about needing a conf.d when you
have multiple functions to apply to a file, that is easily handled
with
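An illustrative shape for that variable and the copy task (all names here are hypothetical, 1.x-style syntax):

```yaml
# group_vars or host_vars (per host/app)
ssl_certificates:
  - src: certs/app1.example.com.crt
    dest: /etc/ssl/certs/app1.example.com.crt
  - src: certs/app1.example.com.key
    dest: /etc/ssl/private/app1.example.com.key

# task referencing that variable
- name: deploy host-specific certificates
  copy: src={{ item.src }} dest={{ item.dest }}
  with_items: ssl_certificates
```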
The most concrete case with my configuration is: I have multiple machines
with the same apache configuration (application) except for ssl
certificates. Would one put the whole certificate into a variable (looks
strange to me)?
@Michael Peters:
I am using monit for monitoring. And depending on mac
Ansible determines host uniqueness based on the hostname, in your case this
is currently `1.2.3.4`. What you need to do is use host aliases if you
intend on having 2 hosts with the same IP but different ports. Such as:
[main_node]
main ansible_ssh_host=1.2.3.4
[child_node]
child ansible_ssh_hos
I just built a clean CentOS 6.6 server and have ansible installed. All
packages are up to date and I'm still seeing this error:
[WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should
update
it (i.e. yum
The same task you use to install a package also verifies whether it is
already installed; for example:
yum: name=vim state=present
will install the package if not present or return changed=false if it
is already installed.
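The same idempotence can be observed from an ad-hoc run; with --check nothing is changed, only reported (the group name here is made up):

```shell
ansible webservers -m yum -a "name=vim state=present" --check
```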
--
Brian Coca
Hello,
I am new here and I would like to deploy open source configuration
management software in my company and I need information on verification of
installations.
How does Ansible verify an installation of a package or other software?
Is there a possibility to verify already installed packag
Hi there,
Can someone give me some pointers as to why an ansible role using sudo_user
below would complain about missing sudo passwords, when I'm able to:
1. ssh in as one non-root user (in this case, chris)
2. use `sudo su deploy_user` to switch to that deploy user
3. call `s
Hi all,
I have the following problem.
Given:
SVN repo with some complex YAML configuration files
Aim:
fetch a file from SVN and register file's content as Ansible fact (dict
object)
Problem:
fetched file is not converted into a dict object, but is considered by
ansible to be a unicode string
Exam
This is my hosts file:
[main_node]
1.2.3.4
[child_node]
1.2.3.4:
child_node is virtual container inside the main_node
why
ansible-playbook --diff --limit=child_node lc.yml --list-hosts
playbook: lc.yml
play #3 (main node): host count=1
1.2.3.4
play #4 (child node): host count=1
I'm new to Ansible, and trying to wrap my head around the variety of ways
to organize playbooks and achieve certain tasks in a DRY way, without
duplicating too much logic in several places.
Right now I'm trying to find a way in which I could have with_fileglob
consolidate
files of a certain pa
Hi,
I am new to ansible. While going through the documentation, it is written
that Ansible Tower is not supported on SunOS. Could anyone please let me
know if there is any other way to install it on a Solaris instance, and if
Ansible Tower is different from Ansible?
Thanks in advance.
I have a simple task to copy a file from one path to another on a remote
host. I need a private key to SSH into the remote host, but the
synchronize module automatically uses my private key in the rsync command
too which seems unnecessary:
ansible-playbook --private-key=keys/mykey.pem playbook
Hello Branko
thank you for your reply.
I am not exactly looking at an Anaconda file. I am looking into a simple
YAML script (playbook) which works as a PXE boot script.
That means I have a boot server which has Ansible installed in it. I want
to place the script in that boot server & remotely
On Mon, 9 Feb 2015 03:27:34 -0800 (PST)
aditya patnaik wrote:
> Hi Folks,
>
> I am new to Ansible. I need help to create a boot script, something like
> kickstart file using Ansible
>
> I already have a kickstart file (shown below) to install centos & KVM on
> a physical host. The script w
Hi Folks,
I am new to Ansible. I need help to create a boot script, something like a
kickstart file, using Ansible.
I already have a kickstart file (shown below) to install centos & KVM on a
physical host. The script will run on the network (PXE). I want it in yaml
format but don't know how to go ahe
Try this:
- name: Create a VPC
  local_action:
    module: ec2_vpc
    state: present
    cidr_block: 10.0.0.0/16
    resource_tags: { "Environment": "Development" }
    subnets:
      - cidr: 10.0.0.0/24
        az: us-east-1a
        resource_tags: { "Environment": "Development", "Name": "Public