On Sep 11, 2014, at 2:26 PM, James Martin <jmar...@ansible.com> wrote:

> Scott,
> 
> Neat to see someone else's approach.  The "fast method" you have there 
> probably could be worked into what's been merged.  Another approach (maybe 
> simpler) would just be to stand up a parallel ASG with the new AMI.
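
For concreteness, that parallel-ASG approach would look something like the
following. This is only a sketch, assuming boto3 and a classic ELB; the
names, AMI, and sizes are all invented:

    import boto3

    asg = boto3.client("autoscaling")

    # New launch configuration pointing at the new AMI (hypothetical names).
    asg.create_launch_configuration(
        LaunchConfigurationName="myapp-lc-v2",
        ImageId="ami-0123456789abcdef0",
        InstanceType="m3.large",
    )

    # Parallel ASG behind the same ELB as the old group.
    asg.create_auto_scaling_group(
        AutoScalingGroupName="myapp-asg-v2",
        LaunchConfigurationName="myapp-lc-v2",
        MinSize=3, MaxSize=3, DesiredCapacity=3,
        LoadBalancerNames=["myapp-elb"],
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # Once the new instances pass health checks, drain the old group.
    asg.update_auto_scaling_group(
        AutoScalingGroupName="myapp-asg-v1",
        MinSize=0, MaxSize=0, DesiredCapacity=0,
    )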

The general problem with this approach is that it doesn't work well for 
blue-green deployments, or when the new code can't coexist with the code 
that's currently running. We make that decision before deploy time and put 
the site in maintenance mode if we determine the two versions are 
incompatible.

I think we're probably going to move to a system that uses a tier of proxies 
and two ELBs. That way we can update the idle ELB, change out the AMIs, and 
bring the updated ELB up behind an alternate domain for blue-green testing. 
When everything checks out, we switch the proxies to the updated ELB and take 
down the remaining, now-idle ELB.
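
The AWS half of that swap might look roughly like the sketch below (boto3
for illustration; the ELB name, instance IDs, and timeout are all made up,
and the proxy flip itself happens outside AWS):

    import time

    import boto3

    elb = boto3.client("elb")

    IDLE_ELB = "myapp-elb-green"                  # the currently idle ELB
    NEW_INSTANCES = ["i-0aaa1111", "i-0bbb2222"]  # built from the new AMI

    elb.register_instances_with_load_balancer(
        LoadBalancerName=IDLE_ELB,
        Instances=[{"InstanceId": i} for i in NEW_INSTANCES],
    )

    # Wait until every instance is InService before blue-green testing
    # against the alternate domain.
    deadline = time.time() + 600
    while time.time() < deadline:
        health = elb.describe_instance_health(LoadBalancerName=IDLE_ELB)
        if all(s["State"] == "InService" for s in health["InstanceStates"]):
            break
        time.sleep(10)
    else:
        raise RuntimeError("idle ELB never went healthy; aborting the swap")

    # ...run the blue-green checks, then repoint the proxy tier at
    # IDLE_ELB and quiesce the other ELB.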

Amazon would suggest using Route53 to point at the new ELB, but there's too 
great a chance of faulty DNS caching breaking the switch to a new ELB. Plus 
the ELB's DNS carries a 60s TTL to start with, even when every resolver 
behaves.
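
For contrast, the Route53 cutover being avoided here would be a single alias
change, roughly as below (hypothetical zone IDs and hostnames). The record
update itself is atomic on Amazon's side, but misbehaving resolvers can pin
clients to the old ELB long after it:

    import boto3

    r53 = boto3.client("route53")

    r53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # your zone (hypothetical)
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "AliasTarget": {
                        # Hosted zone ID of the ELB itself (hypothetical).
                        "HostedZoneId": "Z2ELBEXAMPLE",
                        "DNSName": "myapp-green-123.us-east-1.elb.amazonaws.com.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )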

> 
> I like making the AutoScale Group do the instance provisioning, versus your 
> approach of provisioning the instance and then moving it to an ASG.  From 
> what I can tell, your module doesn't seem to be idempotent -- so if it's run, 
> it's always going to act.  The feature I added only updates instances if they 
> have a launch config that is different from what's currently assigned to the 
> ASG.  So it's safe to run again (or continue a run that failed for some 
> reason), without having to cycle through all the instances again.  

You may have missed the "cycle_all" parameter. If it's False, only instances 
that don't match the new AMI are cycled, so a re-run (or a resumed failed 
run) skips the instances that are already current.
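
In effect, the cycle_all=False path boils down to something like this sketch
(boto3 for illustration; the group name and AMI are placeholders, and the
real module linked below does this inside Ansible):

    import boto3

    asg = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")

    NEW_AMI = "ami-0123456789abcdef0"   # the AMI being rolled out

    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=["myapp-asg"]
    )["AutoScalingGroups"][0]
    ids = [i["InstanceId"] for i in group["Instances"]]

    # Cycle only the instances whose AMI differs from the target.
    stale = [
        inst["InstanceId"]
        for r in ec2.describe_instances(InstanceIds=ids)["Reservations"]
        for inst in r["Instances"]
        if inst["ImageId"] != NEW_AMI
    ]
    print("instances to cycle:", stale)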

Using the ASG to do the provisioning might be preferable if it's reliable. I 
went that route at first, but the ASG's provisioning turned out to be 
non-deterministic. Manually creating the instances ensures that things happen 
in a particular order and at a predictable speed. As mentioned, the manual 
method definitely works every time, although I need to add some more timeout 
and error checking (like what happens if I ask for 3 new instances and only 
get 2).
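
That shortfall check would be along these lines (a sketch, assuming boto3;
the counts, AMI, and timeout are illustrative):

    import time

    import boto3

    ec2 = boto3.client("ec2")
    WANT = 3

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m3.large",
        MinCount=WANT, MaxCount=WANT,
    )
    ids = [i["InstanceId"] for i in resp["Instances"]]

    # Fail loudly if fewer than WANT instances reach "running" in time.
    deadline = time.time() + 300
    running = set()
    while time.time() < deadline and len(running) < WANT:
        for r in ec2.describe_instances(InstanceIds=ids)["Reservations"]:
            for inst in r["Instances"]:
                if inst["State"]["Name"] == "running":
                    running.add(inst["InstanceId"])
        time.sleep(10)

    if len(running) < WANT:
        raise RuntimeError(
            "asked for %d instances, only %d running" % (WANT, len(running))
        )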

I have a separate task that cleans up the old AMIs and LCs, incidentally. I 
keep the most recent around as a backup for quick rollbacks.
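
A hypothetical version of that cleanup task might look like this (boto3; the
"myapp-" naming convention and the keep-count of two are assumptions):

    import boto3

    ec2 = boto3.client("ec2")
    asg = boto3.client("autoscaling")

    # Newest first; keep the live AMI plus one backup for quick rollback.
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "name", "Values": ["myapp-*"]}],
    )["Images"]
    for image in sorted(images, key=lambda i: i["CreationDate"],
                        reverse=True)[2:]:
        ec2.deregister_image(ImageId=image["ImageId"])

    # Same policy for launch configurations.
    lcs = [
        lc for lc in
        asg.describe_launch_configurations()["LaunchConfigurations"]
        if lc["LaunchConfigurationName"].startswith("myapp-")
    ]
    for lc in sorted(lcs, key=lambda c: c["CreatedTime"], reverse=True)[2:]:
        asg.delete_launch_configuration(
            LaunchConfigurationName=lc["LaunchConfigurationName"]
        )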

> 
> We will be publishing an article on some different approaches that we've 
> worked through for doing this "immutablish" deploy stuff sometime next week.  

I'm looking forward to reading it for sure.

Regards,
-scott

> 
> On Thu, Sep 11, 2014 at 2:04 PM, Scott Anderson <scottanderso...@gmail.com> 
> wrote:
> For comparison:
> 
> https://github.com/scottanderson42/ansible/blob/ec2_vol/library/cloud/ec2_asg_cycle
> 
> Still a work in progress (as you should be able to tell from the logging 
> statements :-), but we've been using it in production for several months and 
> it's (now) battle-tested. The "Slow" method is unimplemented but is intended 
> to be your Option 2.
> 
> -scott
> 
> On Sep 11, 2014, at 1:50 PM, Scott Anderson <scottanderso...@gmail.com> wrote:
> 
> > Wow, I wish I'd seen this conversation earlier.
> >
> > I have a module that does this, using something similar to option 1.
> >
> > My module respects multi-AZ load balancers and results in a completely 
> > transparent deploy, *so long as* the code in the new AMI can run alongside 
> > the old code. There's a start of two different methods: one replaces a 
> > single instance at a time; the other fires up all the new instances in the 
> > proper VPCs, waits for them to initialize, adds them to the ELB and ASG, 
> > then terminates the old instances once the new ones are all stable.
> >
> > You also have to set up session pinning and connection draining on the 
> > ELB for it to function correctly. Otherwise someone can end up getting 
> > assets from two different code bases.
> >
> > There's actually a more reliable way to do it that involves using 
> > intermediary instances, but we haven't gotten that far yet.
> >
> > -scott
> 
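
For reference, the ELB session pinning and draining mentioned in the quoted
message maps to roughly the following on a classic ELB (a sketch; the names,
port, and timeouts are hypothetical):

    import boto3

    elb = boto3.client("elb")
    LB = "myapp-elb"

    # Connection draining lets in-flight requests finish before an
    # instance is pulled out of the ELB.
    elb.modify_load_balancer_attributes(
        LoadBalancerName=LB,
        LoadBalancerAttributes={
            "ConnectionDraining": {"Enabled": True, "Timeout": 300},
        },
    )

    # Duration-based cookie stickiness keeps a client on one instance
    # (and therefore one code version) for the length of the deploy.
    elb.create_lb_cookie_stickiness_policy(
        LoadBalancerName=LB,
        PolicyName="myapp-sticky",
        CookieExpirationPeriod=600,
    )
    elb.set_load_balancer_policies_of_listener(
        LoadBalancerName=LB,
        LoadBalancerPort=80,
        PolicyNames=["myapp-sticky"],
    )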
> 
> 
> 
> -- 
> James Martin
> Solutions Architect
> Ansible, Inc.
> 

