Hi Michael,

Thanks for the response. Some excellent points. I'll respond to a couple.

> As such, I strongly favor integration tests run against stage environments to 
> make sure
> things work, and coupling that in a rolling update against a production 
> environment as
> a condition to decide whether to add something back to a load balanced 
> environment
> there as well - ideally using the same tests, but that's not always possible.

Agreed! I didn't address the downstream deployment stages, but what
you describe is consistent with my approach.


> While more of a unit test thing, I personally find Cucumber to be wasted 
> effort because
> most product-management types (I guess I qualify) are not going to be the 
> ones writing
> test stubs, so it's usually fine to just go straight to tests.

Understood, though my experience differs: I have found the tests I
write with a natural-language approach to be a great mechanism for
communicating requirements within a team. Also, I wasn't sure, but it
read like you are thinking of Cucumber as a unit testing framework. I
never think of Cucumber as a "unit" testing framework.

In any case, my point isn't about Cucumber in general, but rather
about the good practice of creating an acceptance test for a body of
work before embarking on it. I am glad for teams to use whatever tool
they think makes the most sense for their combination of team members
and customers. By acceptance test, I simply mean an automated
verification, one a customer and an engineer can collaborate on, that
the features we want actually exist in the system.
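
To make that concrete, here is a minimal sketch of the shape such a
test might take, using Cucumber's Ruby step definitions. The scenario
wording, the /health endpoint, the port, and the TARGET_HOST variable
are all hypothetical, just for illustration:

    # features/step_definitions/deploy_steps.rb
    # Sketch only: endpoint, port, and host variable are hypothetical.
    #
    #   Scenario: the service is reachable after deployment
    #     When I request the health endpoint
    #     Then I receive an HTTP 200
    require 'net/http'
    require 'rspec/expectations'

    World(RSpec::Matchers)   # make expect(...) available in steps

    When(/^I request the health endpoint$/) do
      host = ENV.fetch('TARGET_HOST', 'localhost')
      @response = Net::HTTP.get_response(URI("http://#{host}:8080/health"))
    end

    Then(/^I receive an HTTP (\d+)$/) do |code|
      expect(@response.code).to eq(code)
    end

The customer reads and helps write the scenario text; the engineer
fills in the steps. That is the collaboration I am after.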


> Good integration tests for ops are more important -- is this web service 
> responsive?

Agreed.

> Not things like "is the file permission of this file the same thing it is 
> listed in the configuration
> system - as that just duplicates configuration in two places.

Duplication is precisely the point, sort of. I like Uncle Bob's
observation that unit testing is the double-entry accounting of
software engineering: we express the functionality we want in two
ways, code and test, and then automatically check that our column sum
equals our row sum.

Over the long haul, I find the double-entry approach catches problems
fast when they are introduced. In the short term, it alerts me
quickly when I make my own mistakes.

> I'm strongly not a fan of ServerSpec, because I think it fails to understand 
> the basis of
> configuration management in most places - that you do not have to check for 
> what the
> config tool already asserts.

I understand your point. Usually, I try to make sure that my
"functional tests" focus on testing my implementation. For example, I
test that starting a service has the expected results. CM tools can't
do that generically, beyond asserting that they have in fact done as
instructed. If I have a "service: name=foo state=started" entry, all I
can expect from Ansible natively is that it verifies there is a
service named "foo", that asking for its status reports "running",
and that if it is not running, calling "start" puts it into that
state. But I'm still missing a test that foo listens on a specific
port and writes to a specific file, for example. That feels like a
good use case for Serverspec.
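
To illustrate, a minimal Serverspec sketch for that kind of check
might look like this (the service name "foo", port 8080, and log path
are made-up examples, not anything from a real project):

    # spec/foo/foo_spec.rb -- hypothetical service name, port, and paths
    require 'serverspec'

    set :backend, :exec   # run on the target host; :ssh also works

    describe service('foo') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(8080) do
      it { should be_listening }
    end

    describe file('/var/log/foo/foo.log') do
      it { should be_file }
      its(:content) { should match(/started/) }   # assumed log line
    end

That covers exactly the gap I described: the behavior of the running
service, rather than re-asserting what the playbook already states.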

But, again, I don't care so much about the specific tool. I am happy
for teams to choose whatever works best for them. My workflow focuses
more on applying a TDD mentality to evolving my infrastructure code
alongside the rest of my application.

> We've written about this here:
>
> http://docs.ansible.com/test_strategies.html

Thanks for the reference, and thanks for writing that up! I like the
ideas a lot, and they seem pretty consistent with how I like to
approach things. One thing I value is the option to run the tests in
isolation from the provisioning itself. That doesn't preclude me from
putting more of them in the Ansible files, of course. I find the
physical separation of test and implementation helps me wear the two
different hats I need to write good tests, but I'll gladly accept that
as a personal preference.
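
As a sketch of what "in isolation" means to me in practice, a small
Rakefile can drive a Serverspec suite on its own, without re-running
any provisioning (the spec/**/*_spec.rb layout is just an assumed
convention):

    # Rakefile -- run the tests without touching the playbooks
    require 'rspec/core/rake_task'

    RSpec::Core::RakeTask.new(:spec) do |t|
      t.pattern = 'spec/**/*_spec.rb'   # assumed test layout
    end

    task default: :spec

Then "rake spec" runs against an already-provisioned machine, and the
CI server can call the same task right after the playbooks have run.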

> While some of his posts are useful...

I'll skip the fun discussion about patterns for now. Interesting
points, and thank you! I assume that we can agree that "looking at
what we have implemented and finding opportunities to improve it,
non-functionally" is a good thing.

> Here is the outline of my slides from my talk to the NYC Continuous 
> Deployment group...

Looks like a great talk. Sorry I missed it!

> Using Vagrant to push to AWS here seems weird to me, I'd probably just use 
> the AWS
> modules in Ansible directly from from Jenkins to trigger my tests towards 
> AWS, rather
> than kicking them off from a laptop.

I like keeping Ansible for configuration and Vagrant/Packer for
machine instantiation. It makes for a good separation of concerns.
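As a rough sketch of that split (the box, forwarded port, and
playbook name below are placeholders, not recommendations):

    # Vagrantfile -- Vagrant owns the machine, Ansible owns its config.
    # Box, port, and playbook name are hypothetical placeholders.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.network "forwarded_port", guest: 8080, host: 8080

      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "site.yml"
      end
    end

Swapping the Vagrant provider or the Packer builder doesn't touch the
playbook, and changing the playbook doesn't touch the machine
definition, which is the separation I am after.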

As for the "from a laptop" part, I like to test things from my dev
environment before I commit, so at that point it might well be from
my laptop. But when the CI/CD server is calling Vagrant, that is of
course not on a laptop.

And for your final points:

> (A)  try to keep it simple
> (B)  unit tests don't usually make sense in prod - integration tests DO 
> matter, and are supremely important, but spend time writing tests for your 
> application, not tests for the config tool
> (C)  monitoring is supremely important
> (D)  build -> qa/stage -> prod


We agree on every point here! Thanks for summarizing your position and
again for the thoughtful response!

- PJ




On Wed, Sep 24, 2014 at 10:14 AM, Michael DeHaan <[email protected]> wrote:
>
>
>
> On Wed, Sep 24, 2014 at 7:34 AM, Paul Julius <[email protected]> wrote:
>>
>> Hi Ansible folks!
>>
>> Cross posting here from the Vagrant and Packer mailing lists, because I 
>> thought that people on the Ansible mailing list would probably have really 
>> great ideas to share with me.
>>
>> I am really enjoying my current workflow with Ansible. I love it because it 
>> models my Dev workflow, almost precisely, thereby getting me close to the 
>> "infrastructure as code" nirvana.
>>
>> We just wrapped up CITCON Zagreb [footnote:x] where I was talking to other 
>> DevOps folks about how they use Ansible. There were some interesting ideas! 
>> I wanted to ask on this mailing list what other people are doing. I would 
>> love any feedback.
>>
>> Pickup story to automate deployment of something
>> Write broken acceptance test (in something like Cucumber)
>>
>> Put acceptance test in "In Progress" bucket [footnote:xx]
>
>
> If you're talking about your own unit tests, this is up to you here.   What 
> follows may be percieved as a bit of a rant, and I don't want it to be 
> percieved as much, but I think most people who come from this uber-testing 
> culture have made things incredibly too hard, create more work for 
> themselves, and as a result move slower - not really breaking less - but 
> doing extra work.
>
> Work that ansible (and declarative CM systems in general) are designed to not 
> make to be a thing.
>
> As such, I strongly favor integration tests run against stage environments to 
> make sure things work, and coupling that in a rolling update against a 
> production environment as a condition to decide whether to add something back 
> to a load balanced environment there as well - ideally using the same tests, 
> but that's not always possible.
>
> While more of a unit test thing, I personally find Cucumber to be wasted 
> effort because most product-management types (I guess I qualify) are not 
> going to be the ones writing test stubs, so it's usually fine to just go 
> straight to tests.
>
> That being said, I think there's a lot of niceness to come out of the Ruby 
> testing community - I just never felt Cucumber was one of those things.
>
> Good integration tests for ops are more important -- is this web service 
> responsive?   Not things like "is the file permission of this file the same 
> thing it is listed in the configuration system - as that just duplicates 
> configuration in two places.
>
>
>>
>> Write broken functional test (in something like Serverspec)
>
>
> I'm strongly not a fan of ServerSpec, because I think it fails to understand 
> the basis of configuration management in most places - that you do not have 
> to check for what the config tool already asserts.
>
> I'm much more of a fan of checking to make sure a deployed application works.
>
> We've written about this here:
>
> http://docs.ansible.com/test_strategies.html
>>
>> Write just enough code to make it pass - Vagrant + Ansible + Virtualbox
>> Refactor - Good sense and Fowler's patterns
>
>
> While some of his posts are useful, Fowler's refactoring suggests some rather 
> silly things for code - change one thing, recompile, re-run tests, that would 
> utterly sabotage development efficiency in most cases.
>
> He tries to make code design a bit too mechanical, IMHO.
>
> Unrelated, but somewhat on the Fowler-worship front:
>
> See somewhat related - http://perl.plover.com/yak/design/samples/slide001.html
>
> I'm also not really sure how Design Patterns apply so much for a 
> configuration system :)
>
>>
>> Run my pre-commit build - Packer + Ansible + AWS (or whatever target 
>> platform)
>> Commit/push - Git (or VCS of choice)
>> Go to step 3, until acceptance test passes
>> Review with customer, maybe go back to step 2
>> Move acceptance test into the "Complete" bucket
>> Story complete
>>
>>
>> At step 7, of course, my CI server picks up the change and sends it through 
>> the following stages of my pipeline:
>
>
>
>
> Here is the outline of my slides from my talk to the NYC Continuous 
> Deployment group that suggests a good dev->stage->test workflow and how to 
> incorporate tests into a CD pipeline:
>
> https://gist.githubusercontent.com/brokenthumbs/7fd7992fc1af0cfcc63d/raw/e0c750e00aeb6e62da04fd680346516cb88f8ae5/gistfile1.txt
>
>
>
>
>>
>>
>> Checkout from Git
>> Runs Vagrant+Ansible+AWS
>>
>> Executes functional tests - Serverspec - 0% tolerance for broken tests
>> Executes "Complete" Acceptance tests against the Vagrant instance - 0% 
>> tolerance for breakages
>> Executes "In Progress" Acceptance tests against the Vagrant instance - 
>> reporting on results and fail if a test passes [footnote:xxx]
>
>
> Using Vagrant to push to AWS here seems weird to me, I'd probably just use 
> the AWS modules in Ansible directly from from Jenkins to trigger my tests 
> towards AWS, rather than kicking them off from a laptop.
>
> I guess TLDR is:
>
> (A)  try to keep it simple
> (B)  unit tests don't usually make sense in prod - integration tests DO 
> matter, and are supremely important, but spend time writing tests for your 
> application, not tests for the config tool
> (C)  monitoring is supremely important
> (D)  build -> qa/stage -> prod