Hello MXNet community,

Reproducibility of ML experiments carried out by data scientists, analysts,
and researchers is the talk of the town.

In TWiML's latest podcast - Managing Deep Learning Experiments with Lukas
Biewald [1] - Biewald discusses his company, Weights & Biases (W&B) [2][3].

Brief
- There is a reproducibility crisis in ML: let alone the latest research
papers, often even your own experiments from a month ago cannot be
reproduced.
- Their solution has three parts:
1. Versioning
Takes snapshots to store versions of everything that defines an
experiment: code, data, parameters, and hyperparameters. Versioning or
snapshotting falls in the realm of data management; notable players in
this space are DVC and Pachyderm.

2. Visualization
Builds on the ideas of Tensorboard (TBoard) but solves its shortcomings:
- targeted at distributed training (unlike TBoard)
- visualizes metrics across several experiments, not just a single run
(A minimal sketch of the versioning and logging flow follows this list.)

3. Collaboration
Because the service is cloud-based, it enables cross-team collaboration.
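
To make points 1 and 2 concrete, here is a minimal sketch in Python of
what logging to W&B looks like. wandb.init, the config snapshot, and
wandb.log are W&B's public Python API; the project name, hyperparameter
values, and the stand-in loss are made up for illustration.

    import wandb

    # Snapshot the hyperparameters (versioning); values are placeholders.
    run = wandb.init(project="mxnet-experiments",
                     config={"lr": 0.01, "batch_size": 64, "epochs": 3})

    for epoch in range(run.config["epochs"]):
        # Stand-in for a real training step.
        train_loss = 1.0 / (epoch + 1)
        # Each logged metric appears in W&B's dashboards and is
        # comparable across runs (visualization across experiments).
        wandb.log({"epoch": epoch, "train_loss": train_loss})

    run.finish()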

*MXNet*
From MXNet's point of view, we can discuss whether it's worthwhile to have
this (many positives point towards a yes) and, if so, explore the
following options -
a. Work with W&B to build support for using it with MXNet (currently they
support TensorFlow (TF) and PyTorch (PT)); a rough sketch of what this
could look like follows below.
b. Build something similar in-house, which would involve significant
engineering effort and discussion.
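
To give a feel for option (a), here is a rough sketch of what a Gluon
training loop logging to W&B could look like. The toy model, data, and
metric names are hypothetical, and an official integration would
presumably wrap this more cleanly (e.g. as a callback); only the
wandb.init/wandb.log calls are W&B's actual API.

    import mxnet as mx
    from mxnet import gluon, autograd
    import wandb

    # Placeholder project name and hyperparameters.
    run = wandb.init(project="mxnet-wandb-poc",
                     config={"lr": 0.1, "epochs": 2, "batch_size": 64})

    # Toy model and data purely for illustration.
    net = gluon.nn.Dense(1)
    net.initialize()
    loss_fn = gluon.loss.L2Loss()
    trainer = gluon.Trainer(net.collect_params(), "sgd",
                            {"learning_rate": run.config["lr"]})

    x = mx.nd.random.uniform(shape=(run.config["batch_size"], 10))
    y = mx.nd.random.uniform(shape=(run.config["batch_size"], 1))

    for epoch in range(run.config["epochs"]):
        with autograd.record():
            loss = loss_fn(net(x), y)
        loss.backward()
        trainer.step(batch_size=run.config["batch_size"])
        # The logging call is the only W&B-specific line in the loop.
        wandb.log({"epoch": epoch, "loss": loss.mean().asscalar()})

    run.finish()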

So I wanted to know: what does the community think about this?

Thanks,
Chai

[1] https://twimlai.com/twiml-talk-295-managing-deep-learning-experiments-with-lukas-biewald
[2] https://www.wandb.com
[3] https://github.com/wandb

-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

