Which two cases are acting up?

Ceej

On Dec 5, 2015 8:51 PM, "abdullah alamoudi" <[email protected]> wrote:
> Just to add some icing on the cake, two of the test cases that are expected
> to fail only fail sporadically (i.e., they sometimes succeed)!
>
> How? Why?
> I have no clue at the moment, but what should we do with them?
> I have disabled them in the code review I submitted.
>
> I urge all of you to look at the change to see if you can fix any of the
> failing test cases or investigate the ones with strange behavior.
>
> ~Abdullah.
>
> Amoudi, Abdullah.
>
> On Thu, Dec 3, 2015 at 4:36 PM, Mike Carey <[email protected]> wrote:
>
> > +1 indeed!
> > On Dec 3, 2015 3:45 PM, "Ian Maxon" <[email protected]> wrote:
> >
> > > Definite +1.
> > >
> > > We should also (separately) start checking the output of the CC/NC
> > > logs, or otherwise find a way to detect exceptions that go uncaught
> > > there. Right now, if an exception doesn't come back to the user as an
> > > error in issuing a query, we'd have no way to detect it.
> > >
> > > On Thu, Dec 3, 2015 at 2:43 PM, Till Westmann <[email protected]> wrote:
> > > > +1 !
> > > >
> > > > On 3 Dec 2015, at 14:38, Chris Hillery wrote:
> > > >
> > > >> Yes, please propose the change. I've been looking at overhauling the
> > > >> test framework as well, so I will review.
> > > >>
> > > >> For Zorba, I implemented a "known failing" mechanism that allowed you
> > > >> to mark a test that was currently broken (associated with a ticket
> > > >> ID) without disabling it. The framework would continue to execute it
> > > >> and expect it to fail. It would also cause the test run to fail if
> > > >> the test started to succeed (i.e., the bug was fixed), which ensured
> > > >> that the "known failing" mark would get removed in a timely fashion.
> > > >> To be clear, this is completely distinct from a negative test case -
> > > >> it was a way to not worry about forgetting tests that had been
> > > >> disabled due to known bugs, and to ensure that all such known bugs
> > > >> had an associated tracking ticket. It was quite useful there, and I
> > > >> was planning to re-introduce it here.
> > > >>
> > > >> Ceej
> > > >> aka Chris Hillery
> > > >>
> > > >> On Thu, Dec 3, 2015 at 2:29 PM, abdullah alamoudi <[email protected]>
> > > >> wrote:
> > > >>
> > > >>> Hi All,
> > > >>> Today I implemented a fix for a critical issue that we have, and I
> > > >>> wanted to add a new kind of test case that has 3 files:
> > > >>>
> > > >>> 1. Create the dataset.
> > > >>> 2. Fill it with data that has duplicate keys. This is expected to
> > > >>> throw a duplicate key exception.
> > > >>> 3. Delete the dataset. This is expected to pass (the bug was here:
> > > >>> the dataset was not being deleted).
> > > >>>
> > > >>> With the current way we use the test framework, we are unable to
> > > >>> test such a case, so I started to improve the test framework,
> > > >>> beginning with actually checking the type of exception thrown and
> > > >>> making sure that it matches the expected error.
> > > >>>
> > > >>> ... and boom. I found that many test cases fail but nobody notices,
> > > >>> because no one checks the type of exception thrown. Moreover, if a
> > > >>> test is expected to fail and it doesn't, the framework doesn't check
> > > >>> for that. In addition, sometimes the returned exception is
> > > >>> meaningless, and that is something we absolutely must avoid.
> > > >>>
> > > >>> What I propose is that I push the improved test framework to
> > > >>> master, disable the failing test cases, create JIRA issues for
> > > >>> them, and assign each one to someone to look at.
> > > >>>
> > > >>> Thoughts?
> > > >>>
> > > >>> Amoudi, Abdullah.
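
A minimal sketch of the kind of "known failing" mechanism Chris describes,
assuming a JUnit 4-style runner; the @KnownFailing annotation, its ticket
field, and the rule class are hypothetical names, not existing AsterixDB API:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    // Hypothetical annotation: marks a test known to fail, tied to a ticket ID.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface KnownFailing {
        String ticket();
    }

    // Rule that inverts the result of @KnownFailing tests: a failure is
    // expected, and an unexpected pass fails the run, which forces the
    // "known failing" mark to be removed in a timely fashion.
    class KnownFailingRule implements TestRule {
        @Override
        public Statement apply(final Statement base, final Description description) {
            final KnownFailing kf = description.getAnnotation(KnownFailing.class);
            if (kf == null) {
                return base; // not marked: run normally
            }
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    try {
                        base.evaluate();
                    } catch (Throwable t) {
                        return; // still failing, as tracked by the ticket: OK
                    }
                    throw new AssertionError("Test marked @KnownFailing(" + kf.ticket()
                            + ") now passes; remove the mark and close the ticket.");
                }
            };
        }
    }

A test class would then declare
`@Rule public KnownFailingRule knownFailing = new KnownFailingRule();` and tag
broken tests with `@KnownFailing(ticket = "ASTERIXDB-NNN")`, so every known
failure keeps executing and stays linked to a tracking ticket.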
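And a sketch of the expected-error check Abdullah describes, i.e., verifying
both that a statement fails and that it fails with the expected error; the
class, the method name, and the use of Callable to stand in for executing one
test file are illustrative assumptions, not the actual framework API:

    import java.util.concurrent.Callable;

    final class ExpectedErrors {
        // Runs one step of a test case that is expected to fail. The test
        // passes only if the step throws and the error matches the expected
        // one; a wrong error or an unexpected success both fail the test.
        static void runExpectingError(Callable<?> step, String expectedError) {
            try {
                step.call();
            } catch (Exception e) {
                String message = String.valueOf(e.getMessage());
                if (message.contains(expectedError)) {
                    return; // failed with the expected error: test passes
                }
                throw new AssertionError(
                        "Expected error \"" + expectedError + "\" but got: " + e, e);
            }
            // Succeeding is itself a failure for this kind of test case.
            throw new AssertionError("Step was expected to fail with \""
                    + expectedError + "\" but succeeded");
        }
    }

With a check like this, the duplicate-key step of the 3-file test case above
would pass only when the duplicate key exception is actually thrown, and the
delete step would still be required to succeed afterwards.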
