On Apr 25, 2014, at 2:31 PM, ghex...@gmail.com wrote:

>   So, I'm curious, for the case where you want to start "stopped" EC2 
> instances, what's the current recommended approach?  
> 
>   I've kind of ignored this task for now, managing that by hand (it's just 
> our dev env, but it's still a couple of dozen instances at least).  I'm 
> almost about to pull Scott's branch in locally since it looks so much better 
> than manual management.


Here's an example in case you do use ec2_instance_facts. This example creates 
maintenance instances for updating AMIs.

Notes:
    * This is part of a set of scripts that will create an entire load balanced 
application environment (including DNS, VPC, centralized logging, and RDS) in a 
bare AWS account in about 20-30 minutes.
    * app_environment is dev, test, stage, or prod. The scripts will create the 
same setup in each environment with some differences such as RDS size, domain 
name, and so forth. 
    * I use a naming convention for AWS resources of 
'<product>-<environment>-<AWS type>-<purpose>', e.g. foo-stage-ec2-logging or 
foo-prod-ami-web.
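
That convention is easy to centralize in group_vars so every task composes 
names the same way. A sketch of what that might look like (the `product` 
variable and the exact suffixes here are illustrative, not taken from the 
playbook below):

```yaml
# group_vars/all -- illustrative sketch of the naming convention
product: foo
app_environment: stage

# '<product>-<environment>-<AWS type>-<purpose>'
ami_image_name: "{{ product }}-{{ app_environment }}-ami-web"
ami_maint_instance_name: "{{ product }}-{{ app_environment }}-ec2-ami-maint"
```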

# The base image is created from a standard Ubuntu LTS instance. Then, packages
# common to all of the images (e.g. security, ansible, boto, etc.) are
# installed and configured.

# There's a separate pull request (also rejected, hi Michael... ;-) for the 
ec2_ami_facts module.
- name: Obtain list of existing AMIs
  local_action:
    module: ec2_ami_facts
    description: "{{ ami_image_name }}"
    tags:
      environment: "{{ app_environment }}"
    region: "{{ vpc_region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: ami_facts
  ignore_errors: yes

# If a version of the AMI exists, record this. Otherwise, use the base Ubuntu
# image.
- set_fact:
    environment_base_image_id: "{{ ami_facts.images[0].id }}"
  when: ami_facts.images|count > 0
- set_fact:
    environment_base_image_id: "{{ ami_base_image_id }}"
  when: ami_facts.images|count == 0
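
(The two set_fact tasks above could also be collapsed into one with Jinja's 
inline if, an untested sketch:

```yaml
- set_fact:
    environment_base_image_id: "{{ ami_facts.images[0].id if ami_facts.images|count > 0 else ami_base_image_id }}"
```

Same effect, one fewer task; I find the two-task form a bit easier to read.)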
    
# See if the maintenance instance for this image type in this environment is
# running.
- name: Obtain list of existing instances
  local_action:
    module: ec2_instance_facts
    name: "{{ ami_maint_instance_name }}"
    # Everything but terminated
    states:
      - pending
      - running
      - shutting-down
      - stopped
      - stopping
    tags:
      environment: "{{ app_environment }}"
    region: "{{ vpc_region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: instance_facts
  ignore_errors: yes

- set_fact:
    environment_maint_instance: "{{ instance_facts.instances_by_name.get(ami_maint_instance_name) }}"
  when: instance_facts.instances|count > 0

# If there is no such instance, create one.
- name: Create an instance for managing the AMI creation
  local_action:
    module: ec2
    state: present
    image: "{{ environment_base_image_id }}"
    instance_type: t1.micro
    group: "{{ environment_public_ssh_security_group }}"
    instance_tags:
      Name: "{{ ami_maint_instance_name }}"
      environment: "{{ app_environment }}"
    key_name: "{{ environment_public_ssh_key_name }}"
    vpc_subnet_id: "{{ environment_vpc_public_subnet_az1_id }}"
    assign_public_ip: yes
    wait: yes
    wait_timeout: 600
    region: "{{ vpc_region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: maint_instance
  when: environment_maint_instance is not defined

- set_fact:
    environment_maint_instance: "{{ maint_instance.instances[0] }}"
  when: maint_instance is defined and maint_instance.instances|count > 0

- name: Ensure instance is running
  local_action:
    module: ec2
    state: running
    instance_ids: "{{ environment_maint_instance.id }}"
    wait: yes
    wait_timeout: 600
    region: "{{ vpc_region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: maint_instance
  when: environment_maint_instance is defined

# If we had to start the instance, then the public IP will not have been
# defined when we gathered facts above, so get it again.
- name: Obtain public IP of newly running instance
  local_action:
    module: ec2_instance_facts
    name: "{{ ami_maint_instance_name }}"
    states:
      - running
    tags:
      environment: "{{ app_environment }}"
    region: "{{ vpc_region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: instance_facts
  when: maint_instance|changed

- set_fact:
    environment_maint_instance: "{{ instance_facts.instances_by_name.get(ami_maint_instance_name) }}"
  when: maint_instance|changed

# Pass the collected facts on to the new maintenance host for configuration
# by role.
- name: Add new maintenance instance to host group
  local_action:
    module: add_host
    hostname: "{{ environment_maint_instance.public_ip }}"
    groupname: maint_instance
    app_environment: "{{ app_environment }}"
    # This passes the new/existing private key file to ansible for use in
    # contacting the hosts. Better way to do this?
    ansible_ssh_private_key_file: "{{ environment_public_ssh_private_key_file }}"
    environment_maint_instance: "{{ environment_maint_instance }}"

- name: Wait for SSH on maintenance host
  local_action:
    module: wait_for
    host: "{{ environment_maint_instance.public_ip }}"
    port: 22
    # This is annoying as Hades. Sometimes the delay works; sometimes it's not
    # enough. The check fails if the port is open but the ssh daemon isn't yet
    # ready to accept actual traffic, right after the maintenance instance is
    # started.
    #delay: 10
    timeout: 320
    state: started
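
# If your wait_for supports search_regex (added around 1.4, if I recall
# correctly), you can wait for the SSH banner itself rather than just the open
# port, which may avoid the race above. A sketch I haven't verified in this
# playbook:
#
# - name: Wait for the SSH banner, not just the open port
#   local_action:
#     module: wait_for
#     host: "{{ environment_maint_instance.public_ip }}"
#     port: 22
#     search_regex: OpenSSH
#     timeout: 320
#     state: started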

# TODO fix the hardcoded user too
- name: Really wait for SSH on maintenance host
  local_action: command ssh -o StrictHostKeyChecking=no -i {{ environment_public_ssh_private_key_file }} ubuntu@{{ environment_maint_instance.public_ip }} echo Rhubarb
  register: result
  until: result.rc == 0
  retries: 20
  delay: 10

Regards,
-scott

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/0D6BADC5-829C-4EE9-A6FA-7B672330B5E7%40gmail.com.