Justin, what are your thoughts on leveraging this approach along with
long-lived Docker containers? I think the lifecycle would look like:

   1. I need components A, B, C
   2. If not started, start A, B, C
   3. If already started, clean/reset them
   4. Set up pre-test state
   5. Run test(s)
   6. Clean/reset component state after test method and/or test class
   completion (we may consider nuking this step - I helped design an
   acceptance testing framework in the past where we handled cleanup on the
   front end. That meant remediation for failed tests was simpler, because I
   still had the state from the failed test run)

Stopping/removing the containers could be a manual process or just a simple
script in the project root. We could also add a special module that runs
last and handles shutting down containers, if desired.
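
To make that concrete, here's a minimal sketch of steps 2 and 3, shelling
out to the docker CLI (the container name, image, and reset script are all
placeholders, and none of this is wired into Metron today; a JUnit 5
extension could call ensureStartedAndClean() from its beforeAll hook):

import java.io.IOException;
import java.util.Arrays;

// Hypothetical helper: reuse the container if it's already running,
// otherwise start it; reset its state when reusing.
public class LongLivedContainer {

  private final String name;   // e.g. "metron-it-kafka" (placeholder)
  private final String image;  // e.g. "some-kafka-image:tag" (placeholder)

  public LongLivedContainer(String name, String image) {
    this.name = name;
    this.image = image;
  }

  public void ensureStartedAndClean() throws IOException, InterruptedException {
    if (!isRunning()) {
      // Step 2: not started, so start it detached with a stable name.
      run("docker", "run", "-d", "--name", name, image);
    } else {
      // Step 3: already started, so clean/reset it. The reset command is
      // component-specific; a script baked into the image is one option.
      run("docker", "exec", name, "/opt/reset-state.sh");
    }
  }

  private boolean isRunning() throws IOException, InterruptedException {
    // `docker ps -q -f name=<name>` prints an id only if the container is running.
    Process p = new ProcessBuilder("docker", "ps", "-q", "-f", "name=" + name).start();
    boolean running = p.getInputStream().read() != -1;
    p.waitFor();
    return running;
  }

  private void run(String... cmd) throws IOException, InterruptedException {
    Process p = new ProcessBuilder(cmd).inheritIO().start();
    if (p.waitFor() != 0) {
      throw new IllegalStateException("Command failed: " + Arrays.toString(cmd));
    }
  }
}

Steps 4-6 would stay in the tests themselves, and the shutdown script (or
last-running module) would just be a `docker rm -f <name>` per container.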

I know this has been a perennial thread for us. I can dig up the original
DISCUSS threads on this as well if folks think they're still pertinent, but
I think we're pretty far removed from that original point in time now and
should just move forward with fresh perspectives.

On Wed, May 1, 2019 at 9:21 AM Justin Leet <justinjl...@gmail.com> wrote:

> Re: the integration testing point above, a strategy I used recently to
> alleviate a similar problem was to exploit JUnit 5's extensions. I haven't
> played with this at all in Metron's code, so 1) take this with several
> grains of salt and 2) feel encouraged to point out anything
> wrong/problematic/ludicrous. My use case was pretty tame and easy to
> implement.
>
> Essentially, you can set up an extension backed by a singleton cluster (in
> my case, I had two extensions, one for a Kafka cluster and one for
> MiniDfs). The backing clusters expose some methods (e.g. create topic, get
> brokers, get file system, etc.), which lets the test classes set up
> whatever they need. Classes that need them just use an annotation and can
> be wired under the hood to initialize only when tests in the suite being
> run actually use them, and to spin down after all are done. This saved ~1
> min on a 4-minute build. It ends up being a pretty clean way to use them,
> although we might need to be a bit more sophisticated, since my case was
> less complicated. The main messiness is that this necessarily invites
> tests to interfere with each other (e.g. if multiple tests use the Kafka
> topic "test", there will be problems). We might be able to improve on this.
>
> We could likely do something similar with our InMemoryComponents. Right now
> these are spun up on a per-test basis, rather than for the overall suite.
> This involves some setup in any class that wants them, just to be able to
> get a cluster. There's also some definition of spin-up order and so on,
> which is already taken care of with the extensions (declaration order). The
> other catch in our case is that we have way more code covered by JUnit 4,
> which doesn't have the same capabilities. That migration doesn't
> necessarily have to be done all at once, but it is a potentially
> substantial pain point.
>
> Basically, I'd hope to push management of our abstractions further back
> into actual JUnit so they get more reuse across runs and it's easily
> understood what a test needs and uses right up front. In our case, the
> InMemoryComponents likely map to extensions.
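>
> For instance, an adapter along these lines could reuse an existing
> component as a run-scoped extension (assuming the current InMemoryComponent
> start()/stop() interface; the adapter class itself is hypothetical):
>
> import org.apache.metron.integration.InMemoryComponent;
> import org.junit.jupiter.api.extension.BeforeAllCallback;
> import org.junit.jupiter.api.extension.ExtensionContext;
> import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
>
> // Adapts an InMemoryComponent into a JUnit 5 extension that is started
> // once per run instead of once per test class.
> public abstract class InMemoryComponentExtension implements BeforeAllCallback {
>
>   // Subclasses supply the concrete component (Kafka, ZooKeeper, etc.).
>   protected abstract InMemoryComponent createComponent() throws Exception;
>
>   @Override
>   public void beforeAll(ExtensionContext context) {
>     context.getRoot()
>            .getStore(Namespace.create(getClass()))
>            .getOrComputeIfAbsent(getClass(), key -> {
>              try {
>                InMemoryComponent component = createComponent();
>                component.start();
>                return new ClosableComponent(component);
>              } catch (Exception e) {
>                throw new RuntimeException("Failed to start component", e);
>              }
>            });
>   }
>
>   // Stops the component once the entire run is complete.
>   private static class ClosableComponent
>       implements ExtensionContext.Store.CloseableResource {
>     private final InMemoryComponent component;
>     ClosableComponent(InMemoryComponent component) { this.component = component; }
>     @Override
>     public void close() { component.stop(); }
>   }
> }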
>
> If we did something like this, it potentially solves a few problems:
> * Makes it easy to build an integration test that says "give me these
> components".
> * Keeps components alive across any tests that need them.
> * Only spins up components when running tests actually need them, and
> only the necessary ones.
> * Lowers our build times.
> * Interaction with components remains primarily through the same methods
> you'd normally use (you can build producers/consumers, or whatever).
> * IMO, easier to explain and have people use. I've had a couple of people
> pick it up pretty easily.
>
> I can't share the code, but essentially it looks like this (and I'm sure
> email will butcher it, so I'm sorry in advance):
> @ExtendWith({ExtensionOne.class, ExtensionTwo.class, ...})
> ...
> @BeforeAll
> public static void beforeAll() {
>   // Test-specific setup, similar to before, but without the cluster
>   // creation work.
> }
>
> Everything else gets handled under the covers, with the caveat that what I
> needed to do was less involved than some of our tests, so some
> experimenting may be needed.
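>
> One detail the snippet above glosses over is how a test gets a handle on
> the shared cluster. JUnit 5's ParameterResolver is one option; a sketch,
> reusing the hypothetical SharedKafkaCluster from the earlier example and
> registered alongside it via @ExtendWith:
>
> import org.junit.jupiter.api.extension.ExtensionContext;
> import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
> import org.junit.jupiter.api.extension.ParameterContext;
> import org.junit.jupiter.api.extension.ParameterResolver;
>
> // Lets @BeforeAll and test methods declare a SharedKafkaCluster
> // parameter and receive the run-wide instance.
> public class KafkaClusterResolver implements ParameterResolver {
>
>   @Override
>   public boolean supportsParameter(ParameterContext pc, ExtensionContext ec) {
>     return pc.getParameter().getType()
>         == EmbeddedKafkaExtension.SharedKafkaCluster.class;
>   }
>
>   @Override
>   public Object resolveParameter(ParameterContext pc, ExtensionContext ec) {
>     return ec.getRoot()
>              .getStore(Namespace.create(EmbeddedKafkaExtension.class))
>              .get("kafka", EmbeddedKafkaExtension.SharedKafkaCluster.class);
>   }
> }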
>
> On Wed, May 1, 2019 at 10:59 AM Justin Leet <justinjl...@gmail.com> wrote:
>
> > Hi all,
> >
> > I wanted to start a discussion on something near and dear to all of our
> > hearts: The role of full-dev in our testing cycle.
> >
> > Right now, we require all PRs to have spun up the full-dev environment
> > and make sure that things flow through properly. In some cases, this is
> > a necessity, and in other cases, it's a fairly large burden on current
> > and potential contributors.
> >
> > So what do we need to do to reduce our dependence on full-dev and
> > increase our confidence in our CI process?
> >
> > My initial thoughts on this encompass a few things:
> > * Increasing our ability to easily write automated tests. In particular,
> > I think our integration testing is fairly hard and has some structural
> > issues (e.g. our tests spin up components per test class, not for the
> > overall test suite being run, which also increases build times, etc.).
> > How do we make this easier and have less boilerplate? I have one
> > potential idea I'll reply to this thread with.
> > * Unit tests in general. I'd argue that any area we think needs full-dev
> > spun up to be confident in needs more testing. Does anyone have any
> > opinions on which areas we have the least confidence in? Which areas of
> > code are people afraid to touch?
> > * Our general procedures. Should we stop requiring full-dev on all PRs,
> > and instead just ask for a justification why not (e.g. "I don't need
> > full-dev for this, because I added unit tests here and here, and
> > integration tests for these components")? And then a reviewer can
> > request a full-dev spin-up if needed? Could we stage this rollout (e.g.
> > metron-platform modules require it, but others don't)? This'll add more
> > pressure onto the release phase, so maybe that involves fleshing out a
> > checklist of critical-path items to check.
> > * Do we need to improve our docs? Publish Javadocs? After all, if docs
> > are better, hopefully we'd see fewer issues introduced and be more
> > confident in changes coming in.
> > * What environment do we even want testing to occur in?
> > https://github.com/apache/metron/pull/1261 has seen a lot of work, and
> > getting it across the finish line may help make the dev environment less
> > onerous, even if still needed.
> >
>
