>
> My impression is that this is already the status quo. But, if we think we
> need to be more clear on this, let's put up a vote to change the coding
> guidelines and PR checklist. I've done this many times in the past; the
> most obvious instances are when I've made doc changes or unit test
>
Justin, what are your thoughts on leveraging this approach along with
long-lived Docker containers? I think the lifecycle would look like:
1. I need components A, B, C
2. If not started, start A, B, C
3. If already started, clean/reset them
4. Set up pre-test state
5. Run test(s)
6.
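The lifecycle above (minus the truncated step 6) could be sketched roughly as follows. Everything here is hypothetical: the class and method names are made up for illustration, and the `start`/`reset` methods just record events where a real implementation would shell out to `docker start` or exec a cleanup script inside the container.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical manager for long-lived test containers: a component is
// started only if it is not already running; otherwise it is reset.
public class ComponentLifecycle {
    private final Set<String> running = new LinkedHashSet<>();
    private final List<String> log = new ArrayList<>();

    // Step 2: in a real setup, shell out to `docker start <name>` here.
    private void start(String component) {
        running.add(component);
        log.add("start:" + component);
    }

    // Step 3: in a real setup, exec a cleanup script in the container.
    private void reset(String component) {
        log.add("reset:" + component);
    }

    // Steps 1-3: declare the components a test needs; start or reset each.
    public void require(String... components) {
        for (String c : components) {
            if (running.contains(c)) {
                reset(c);
            } else {
                start(c);
            }
        }
    }

    public List<String> log() {
        return log;
    }

    public static void main(String[] args) {
        ComponentLifecycle lifecycle = new ComponentLifecycle();
        lifecycle.require("A", "B", "C"); // first test run: everything starts
        lifecycle.require("A", "B", "C"); // second run: everything resets
        System.out.println(lifecycle.log());
        // [start:A, start:B, start:C, reset:A, reset:B, reset:C]
    }
}
```

The point of the sketch is the branch in `require`: the expensive start happens at most once per component, and subsequent tests pay only for the (presumably cheap) reset.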
Picking up where things left off in the VOTE thread on the subject,
I'm presenting a revision to my original proposal below. I'd like to get
this signed off on before submitting it for another vote. Otto, let me
know if this looks good to you.
I'd like to propose a
Re: the integration testing point above, a strategy I used recently to
alleviate a similar problem was to exploit JUnit 5's extensions. I haven't
played with this at all in Metron's code, so 1) take this with several
grains of salt and 2) feel encouraged to point out anything
Hi all,
I wanted to start a discussion on something near and dear to all of our
hearts: The role of full-dev in our testing cycle.
Right now, we require that all PRs spin up the full-dev environment and
verify that things flow through properly. In some cases, this is a
necessity, and in
Short version: I'm in favor of #2 for 0.7.1 and #1 as a blocker for 0.8.0.
#3 seems like a total waste of time and effort.
The wall of text version:
I agree this isn't "just the wrong thing shown", but for completely
different reasons.
To be extremely clear about what the problem is: Our "dev"
I think it would help if the full consequences of having the UI show the
wrong status were listed.
Someone trying Metron will, by default, see the wrong thing in the UI for
the ONLY sensors they have that are running and processing data.
What happens when they try to start those sensors to make them work?