Hi Ankit,

I have a few concerns about testing examples. Before writing tests for
examples,

   - you will need to first decide what constitutes a test for an example,
   because examples are not API calls, which have return values so that a
   test can simply call the API and assert on the result. Just testing
   that an example is a compilable Python script will not add much value,
   in my opinion.
   - And testing for example outputs and results will require a rewrite of
   many of the examples, because many of them currently just have print
   statements as outputs and do not return any value as such. I am not sure
   it is worth the dev effort.
   - the current set of examples in the mxnet repo is very diverse - some
   are written as Python notebooks, some are Python scripts with paper
   implementations, and some are illustrations of certain MXNet features.
   I am curious to know how you will write tests for these.
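For what it's worth, the simplest version of such a test might just launch
each example in a subprocess and assert a clean exit, capping long training
examples at a reduced epoch count. A rough sketch follows (the script path
and the --num-epochs flag listed below are hypothetical; the real examples
in the repo vary in how they are invoked):

```python
import subprocess
import sys

def run_example(script, args=(), timeout=600):
    """Run an example script in a subprocess and return its exit code.

    This only checks that the example runs to completion without
    crashing; it makes no claims about numerical correctness.
    """
    result = subprocess.run(
        [sys.executable, script, *args],
        capture_output=True,
        text=True,
        timeout=timeout,  # guard against examples that hang
    )
    if result.returncode != 0:
        # Surface the example's stderr so the nightly log shows the failure.
        print(result.stderr, file=sys.stderr)
    return result.returncode

# Hypothetical entries; the actual scripts and flags differ per example.
# Long-running training examples get a reduced-epoch flag so the nightly
# job stays within its time budget.
EXAMPLES = [
    ("example/image-classification/train_mnist.py", ("--num-epochs", "1")),
]

if __name__ == "__main__":
    failures = [s for s, a in EXAMPLES if run_example(s, a) != 0]
    assert not failures, f"failing examples: {failures}"
```

This sidesteps the "no return value" problem above: the harness asserts
only on the process exit code, so examples that merely print do not need a
rewrite.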


Looking forward to seeing the design of this test bed/framework.


Thanks
Anirudh Acharya

On Fri, Nov 9, 2018 at 2:39 PM Marco de Abreu
<marco.g.ab...@googlemail.com.invalid> wrote:

> Hello Ankit,
>
> that's a great idea! Using the tutorial tests as reference is a great
> starting point. If you are interested, please don't hesitate to attend the
> Berlin user group in case you would like to discuss your first thoughts
> in-person before drafting a design.
>
> -Marco
>
>
> Am Fr., 9. Nov. 2018, 23:23 hat khedia.an...@gmail.com <
> khedia.an...@gmail.com> geschrieben:
>
> > Hi MXNet community,
> >
> > Recently, a few other contributors and I focused on fixing examples in
> > our repository that were not working out of the box as expected.
> > https://github.com/apache/incubator-mxnet/issues/12800
> > https://github.com/apache/incubator-mxnet/issues/11895
> > https://github.com/apache/incubator-mxnet/pull/13196
> >
> > Some of the examples failed after API changes, and the breakage went
> > unnoticed until a user reported the issue. While the community is
> > actively working on fixing them, such regressions will recur within a
> > few days if we don't have a proper mechanism to catch them.
> >
> > So, I would like to propose enabling nightly/weekly tests for the
> > examples, similar to what we have for tutorials, to catch any such
> > regressions. The tests could check only the basic functionality of the
> > examples: small examples can be run to completion, whereas long
> > training examples can be run for only a few epochs.
> >
> > Any thoughts from the community? Any other suggestions for addressing
> > this?
> >
> > Regards,
> > Ankit Khedia
> >
>
