Hi Joan,

My answers are inline.

On 2019/05/22 20:16:18, Joan Touzet <woh...@apache.org> wrote: 
> Hi Ilya, thanks for starting this thread. Comments inline.
> 
> On 2019-05-22 14:42, Ilya Khlopotov wrote:
> > The eunit testing framework is very hard to maintain. In particular, it has 
> > the following problems:
> > - the process structure is designed in such a way that failure in setup or 
> > teardown of one test affects the execution environment of subsequent tests. 
> > Which makes it really hard to locate the place where the problem is coming 
> > from.
> 
> I've personally experienced this a lot when reviewing failed logfiles,
> trying to find the *first* failure where things go wrong. It's a huge
> problem.
> 
> > - inline test in the same module as the functions it tests might be skipped
> > - incorrect usage of ?assert vs ?_assert is not detectable since it makes 
> > tests pass 
> > - there is a weird (and hard to debug) interaction when used in combination 
> > with meck 
> >    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
> >    - https://github.com/eproxus/meck/issues/61
> >    - meck:unload() must be used instead of meck:unload(Module)
> 
> Eep! I wasn't aware of this one. That's ugly.
> 
> > - teardown is not always run, which affects all subsequent tests
> 
> Have first-hand experienced this one too.
> 
> > - grouping of tests is tricky
> > - it is hard to group tests so individual tests have meaningful descriptions
> > 
> > We believe that with ExUnit we wouldn't have these problems:
> 
> Who's "we"?
Wrong pronoun; please read it as "I".

> 
> > - on_exit function is reliable in ExUnit
> > - it is easy to group tests using `describe` directive
> > - code-generation is trivial, which makes it is possible to generate tests 
> > from formal spec (if/when we have one)
> 
> Can you address the timeout question w.r.t. EUnit that I raised
> elsewhere for cross-platform compatibility testing? I know that
> Peng ran into the same issues I did here and was looking into extending
> timeouts.
> 
> Many of our tests suffer from failures where CI resources are slow and
> simply fail due to taking longer than expected. Does ExUnit have any
> additional support here?
> 
> A suggestion was made (by Jay Doane, I believe, on IRC) that perhaps we
> simply remove all timeout==failure logic (somehow?) and consider a
> timeout a hung test run, which would eventually fail the entire suite.
> This would ultimately lead to better deterministic testing, but we'd
> probably uncover quite a few bugs in the process (esp. against CouchDB
> <= 4.0).

There is one easy workaround. We could set trace: true in the config,
since one of its side effects is an infinite timeout (see here:
https://github.com/elixir-lang/elixir/blob/master/lib/ex_unit/lib/ex_unit/runner.ex#L410).
However, this approach has an important caveat:
- all tests would run sequentially, which means we wouldn't be able to 
parallelize them later. 
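For reference, a minimal sketch of that workaround, assuming the usual test_helper.exs setup (the file name and surrounding project layout are assumptions, not something from our tree yet):

```elixir
# test/test_helper.exs
#
# Enabling trace mode prints each test as it runs and, as a side
# effect, sets the per-test timeout to :infinity, so slow CI runs
# don't fail on timeouts.
# Caveat: trace mode also forces the whole suite to run sequentially.
ExUnit.start(trace: true)
```

The same option can also be toggled per run with `mix test --trace`, which might be preferable so we only pay the sequential-execution cost on slow CI workers.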

> > 
> > Here are a few examples:
> > 
> > # Test adapters to test different interfaces using same test suite
> 
> This is neat. I'd like someone else to comment whether this the approach
> you define will handle the polymorphic interfaces gracefully, or if the
> effort to parametrise/DRY out the tests will be more difficulty than
> simply maintaining 4 sets of tests.
> 
> 
> > # Using same test suite to compare new implementation of the same interface 
> > with the old one
> > 
> > Imagine that we are doing a major rewrite of a module which would implement 
> > the same interface.
> 
> *tries to imagine such a 'hypothetical' rewrite* :)
> > How do we compare both implementations return the same results for the same 
> > input?
> > It is easy in Elixir, here is a sketch:
> 
> Sounds interesting. I'd again like an analysis (from someone else) as to
> how straightforward this would be to implement.
> 
> -Joan
> 
> 
