Nick - To be clear, when you say you can never get them all to pass, do you mean you can never get all the tests to pass without protractor-flake re-running the failing tests (i.e., eventually all the tests pass in the end), or do you mean you still have failing tests even after protractor-flake does its work? I want to look into this, but before I do I want to make sure I understand you correctly.
On Fri, Oct 5, 2018 at 12:40 PM Casey Stella <[email protected]> wrote:

This is really good feedback, Nick. I agree, we need them to be reliable enough to not be a source of constant false positives prior to putting them into the checklist.

On Thu, Oct 4, 2018 at 15:34 Nick Allen <[email protected]> wrote:

I think we still have an issue of reliability. I can never reliably get them all to pass. I have no idea which failures are real. Am I the only one that experiences this?

We need a reliable pass/fail on these before we talk about adding them to the checklist. For example, I just tried to run them on METRON-1771. I don't think we have a problem with these changes, but I have not been able to get one run to fully pass. See the attached output of those runs.

On Wed, Oct 3, 2018 at 7:36 AM Shane Ardell <[email protected]> wrote:

I ran them locally a handful of times just now, and on average they took approximately 15 minutes to complete.

On Tue, Oct 2, 2018, 18:22 Michael Miklavcic <[email protected]> wrote:

@Shane Just how much time are we talking about, on average? I don't think many in the community have had much exposure to running the e2e tests in their current form. It might still be worth it in the short term.

On Tue, Oct 2, 2018 at 10:20 AM Shane Ardell <[email protected]> wrote:

The protractor-flake package should catch and re-run false failures, so people shouldn't get failing tests when they are done running. I just meant that we often re-run flaky tests with protractor-flake, so it can take a while to run and could increase the build time considerably.
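[For readers unfamiliar with protractor-flake's behavior being discussed here: its retry strategy amounts to the sketch below. This is illustrative only; the real package parses Protractor's console output to find the failed specs, and the function and field names here are invented for the example.]

```javascript
// Minimal sketch of a protractor-flake style retry loop (illustrative, not
// the actual package). runSuite is a function that runs the given specs and
// returns the names of the specs that failed. Only failed specs are re-run,
// and failures that survive every attempt are reported as real failures.
function runWithRetries(runSuite, allSpecs, maxAttempts) {
  let remaining = allSpecs;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    remaining = runSuite(remaining); // re-run only what failed last time
    if (remaining.length === 0) {
      return { passed: true, attempts: attempt };
    }
  }
  // Still failing after maxAttempts: treat these as genuine failures.
  return { passed: false, attempts: maxAttempts, failed: remaining };
}
```

Under this model, Nick's question distinguishes two outcomes: `passed: true` after more than one attempt (flaky but eventually green) versus `passed: false` with a non-empty `failed` list (persistent failures).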
On Tue, Oct 2, 2018, 18:00 Casey Stella <[email protected]> wrote:

Are the tests so brittle that, even with flake retries, people will run into false failures as part of contributing a PR? If so, do we have a list of the brittle ones (and the things that would disambiguate a true failure from a false failure) that we can add to the documentation?

On Tue, Oct 2, 2018 at 11:58 AM Shane Ardell <[email protected]> wrote:

I also would like to eventually have these tests automated. There are a couple of hurdles to setting up our e2e tests to run with our build. I think the biggest hurdle is setting up a dedicated server with data for the e2e tests to use. I would assume this requires funding, engineering support, obfuscated data, etc. I also think we should migrate our e2e tests to Cypress first, because Protractor lacks debugging tools that would make our life much easier if, for example, we had a failure in our CI build but could not reproduce it locally. In addition, our current Protractor tests are brittle and extremely slow.

All that said, it seems we agree that we could add another PR checklist item in the meantime. Clarifying those e2e test instructions should be part of that task.

On Mon, Oct 1, 2018 at 2:36 PM Casey Stella <[email protected]> wrote:

I'd also like to make sure that clear instructions are provided (or linked to) about how to run them.
Also, we need to make sure the instructions are rock-solid for running them. Looking at https://github.com/apache/metron/tree/master/metron-interface/metron-alerts#e2e-tests, would someone who doesn't have much or any knowledge of the UI be able to run that without assistance?

For instance, we use full-dev; do we need to stop data from being played into full-dev for the tests to work?

Casey

On Mon, Oct 1, 2018 at 8:29 AM Casey Stella <[email protected]> wrote:

I'm not super keen on expanding the steps to contribute, especially in an avenue that should be automated. That being said, I think that until we get to the point of automating the e2e tests, it's sensible to add them to the checklist. So, I would support it, but I would also urge us to move forward the efforts of running these tests as part of the CI build.

What is the current gap there?

Casey

On Mon, Oct 1, 2018 at 7:41 AM Shane Ardell <[email protected]> wrote:

Hello everyone,

In another discussion thread from July, I briefly mentioned the idea of adding a step to the pull request checklist asking contributors to run the UI end-to-end tests.
Since we aren't running e2e tests as part of the CI build, it's easy for contributors to unintentionally break these tests. Reminding contributors to run these tests will hopefully help catch situations like this before opening a pull request.

Does this make sense to everyone?

Regards,
Shane
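[As a rough illustration of the CI gap discussed above, a Travis-style job for the e2e suite might look like the sketch below. Every script name here (`start:mock`, the port, the config file path) is an assumption for illustration, not an existing Metron target, and this sketch does not solve the real hurdle Shane describes: a dedicated server with realistic data for the tests to run against.]

```yaml
# Hypothetical CI job -- script names and paths are illustrative only.
jobs:
  include:
    - name: "metron-alerts e2e"
      language: node_js
      node_js: "8"
      addons:
        chrome: stable            # headless Chrome for Protractor
      before_script:
        - cd metron-interface/metron-alerts
        - npm ci
        # Assumed script: serve the UI against a stubbed REST backend
        - npm run start:mock &
        - npx wait-on http://localhost:4200
      script:
        # protractor-flake re-runs failed specs, here up to 3 attempts
        - npx protractor-flake --max-attempts=3 -- protractor.conf.js
```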
