We can start small by consolidating a few examples that exist in both the
tutorials and examples folders.
One such example is the GradCAM implementation:
In tutorials -
https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/vision/cnn_visualization.md
In examples -
https://github.com/apache/incubator-mxnet/blob/master/example/cnn_visualization
The only difference is that, for this particular example, the examples folder
has a Python script in place of the notebook.
Any thoughts/suggestions?

On the other hand, I agree with Sandeep that there should be some basic testing
of examples, and we can begin with a small set of Python examples.
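
To make the "basic testing" idea concrete, here is a rough sketch of what a
minimal smoke test could look like. The script path and CLI flags below are
hypothetical placeholders, not real examples from the repo; the actual options
would differ per example:

```python
# Hypothetical smoke test for example scripts; paths and CLI flags are
# placeholders and would need to match the real examples being tested.
import subprocess
import sys

import pytest

# Small set of (script, extra args) pairs to start with.
EXAMPLES = [
    ("example/some_example/train.py", ["--epochs", "1", "--batch-size", "8"]),
]

@pytest.mark.parametrize("script,args", EXAMPLES)
def test_example_runs(script, args):
    """Run the example end to end with a tiny workload and check it exits cleanly."""
    result = subprocess.run(
        [sys.executable, script, *args],
        capture_output=True,
        text=True,
        timeout=1800,  # fail instead of hanging the pipeline
    )
    assert result.returncode == 0, result.stderr
```

This only answers "does the script still run", which matches the minimum bar of
examples being at least runnable.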

—Ankit


On Nov 13, 2018, at 10:31 AM, Naveen Swamy <mnnav...@gmail.com> wrote:

Aaron, IMO tutorials have a specific purpose: to introduce concepts and
APIs to users. I think converting examples to tutorials wholesale would
overwhelm users; we should carefully choose which examples we want to
turn into tutorials.

I agree that today the examples are a graveyard of untested code. My suggestion
is to add some testing to an example whenever you touch it, at the least to
check its functionality. These tests can be run once a week.
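
One lightweight way to keep such checks out of the per-PR pipeline would be a
dedicated pytest marker that only a weekly job selects. The marker name below
is just an illustration, not an existing convention in the repo, and it would
need to be registered in conftest.py or pytest.ini:

```python
# Illustrative only: gate long example checks behind a custom pytest marker
# so a weekly CI job can select them with `pytest -m weekly`, while the
# regular pipeline runs `pytest -m "not weekly"`.
import pytest

@pytest.mark.weekly  # marker name is an assumption, not an agreed convention
def test_cnn_visualization_example():
    # Run the example with a reduced workload and assert it completes,
    # e.g. via a subprocess-based check like the one sketched earlier.
    ...
```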


On Tue, Nov 13, 2018 at 6:52 AM Aaron Markham <aaron.s.mark...@gmail.com>
wrote:

I've been actively promoting moving examples to tutorials during reviews.
That way they fall under the testing umbrella and get added to the website.

Many times there isn't really a clear reason why something is in the examples
folder, other than it being a kind of graveyard of untested sample code.

As a starting strategy, I would suggest that when updating an example, you ask
yourself whether, with just a little more effort, it could be converted into a
tutorial.

The last thing CI needs is more flaky tutorial tests, so whatever is done
here should use the more robust approaches that are being discussed.

Cheers,
Aaron

On Mon, Nov 12, 2018, 16:24 sandeep krishnamurthy <sandeep.krishn...@gmail.com> wrote:

Thanks, Ankit, for bringing this up. @Anirudh - all the concerns you raised
are very valid. Here are my thoughts:
1. There were several examples that were crashing or had compiler errors.
This is a very bad user experience. All example scripts should at least be
runnable!
2. While I agree the examples are too diverse (Python scripts, notebooks,
epochs, print statements, etc.), we can always start small, say with 5
examples. We can use these to streamline all examples into Python scripts
with a main-function invoker that can take params like epoch, dataset, etc.
(see the sketch after this list).
3. We can start by running weekly tests to avoid making the nightly test
pipeline too long.
4. One possible issue is the few examples that depend on a large or
controlled dataset. I am not sure yet how to solve this, but we can think
about it.
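
As a rough sketch of the standardized entry point mentioned in point 2 (the
flag names and defaults are purely illustrative, not a proposal for any
specific example):

```python
# Illustrative skeleton for a standardized example entry point.
# Flag names and defaults are assumptions, not an agreed convention.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="MXNet example")
    parser.add_argument("--epochs", type=int, default=10,
                        help="number of training epochs")
    parser.add_argument("--batch-size", type=int, default=64,
                        help="mini-batch size")
    parser.add_argument("--dataset", type=str, default="mnist",
                        help="dataset to train on")
    return parser.parse_args()

def main():
    args = parse_args()
    # ... build the model, load args.dataset, train for args.epochs ...
    print(f"ran example for {args.epochs} epoch(s) on {args.dataset}")

if __name__ == "__main__":
    main()
```

A weekly test could then invoke any such script with something like
`--epochs 1` and a small batch size to keep the run short.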

Any suggestions?
Best,
Sandeep



On Mon, Nov 12, 2018 at 10:38 AM Anirudh Acharya <anirudhk...@gmail.com>
wrote:

Hi Ankit,

I have a few concerns about testing examples. Before writing tests for
examples,

   - You will need to first decide what constitutes a test for an example,
   because examples are not API calls with return values that a test can
   simply call and assert on. Just testing whether an example is a compilable
   Python script will not add much value, in my opinion.
   - Testing for example outputs and results will require a re-write of many
   of the examples, because many of them currently just have print statements
   as outputs and do not return any values as such. I am not sure if it is
   worth the dev effort.
   - The current set of examples in the mxnet repo is very diverse - some are
   written as Python notebooks, some are just Python scripts with paper
   implementations, and some are just illustrations of certain mxnet features.
   I am curious to know how you will write tests for these things (one
   illustrative possibility is sketched after this list).
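
Purely for illustration, one possible shape for an output-based check of a
print-only example; the script path, flags, printed format, and threshold are
all hypothetical:

```python
# Illustration only: check a print-based example by scraping its stdout.
# The script path, flags, and the "Validation accuracy: 0.97" output format
# are hypothetical and would need to match the real example.
import re
import subprocess
import sys

def run_and_check_accuracy(script, min_accuracy=0.9):
    result = subprocess.run(
        [sys.executable, script, "--epochs", "1"],
        capture_output=True, text=True, check=True, timeout=1800,
    )
    match = re.search(r"Validation accuracy:\s*([0-9.]+)", result.stdout)
    assert match is not None, "example did not print a validation accuracy"
    assert float(match.group(1)) >= min_accuracy
```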


Looking forward to seeing the design of this test bed/framework.


Thanks
Anirudh Acharya

On Fri, Nov 9, 2018 at 2:39 PM Marco de Abreu
<marco.g.ab...@googlemail.com.invalid> wrote:

Hello Ankit,

that's a great idea! Using the tutorial tests as a reference is a great
starting point. If you are interested, please don't hesitate to attend the
Berlin user group in case you would like to discuss your first thoughts
in person before drafting a design.

-Marco


On Fri, Nov 9, 2018, 23:23 khedia.an...@gmail.com <khedia.an...@gmail.com> wrote:

Hi MXNet community,

Recently, I and a few other contributors focused on fixing examples in our
repository that were not working out of the box as expected.
https://github.com/apache/incubator-mxnet/issues/12800
https://github.com/apache/incubator-mxnet/issues/11895
https://github.com/apache/incubator-mxnet/pull/13196

Some of the examples failed after API changes, and the failures remained
uncaught until a user reported the issue. While the community is actively
working on fixing them, such breakages may recur after a few days if we don't
have a proper mechanism to catch regressions.

So, I would like to propose enabling nightly/weekly tests for the examples,
similar to what we have for tutorials, to catch any such regressions. The
tests could check only the basic functionality of the examples: small examples
can be run completely, whereas long training examples can be run for only a
few epochs.

Any thoughts from the community? Any other suggestions for fixing the same?

Regards,
Ankit Khedia





--
Sandeep Krishnamurthy


