Hello Andy,
regarding your question, this will depend a lot on the specific task:
- for tasks that are "easy" to distribute, such as inference (scoring),
hyper-parameter tuning, or cross-validation, you will take full advantage
of the cluster and performance should improve more or less linearly with
the number of machines.
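To illustrate why scoring parallelizes so well, here is a toy sketch using only Python's standard library (not spark-deep-learning itself, and the "model" is a made-up stand-in for a real Keras network): each partition of the input is scored independently, with no communication between workers, which is the same property Spark exploits across machines.

```python
# Toy illustration of embarrassingly parallel scoring. Each chunk of the
# input is scored independently, so adding workers (machines, in Spark's
# case) divides the wall-clock time roughly linearly.
from concurrent.futures import ProcessPoolExecutor


def score_partition(rows):
    # Stand-in for model.predict() applied to one partition of the data.
    return [x * 2.0 for x in rows]


def parallel_score(rows, n_workers=4):
    # Split the data into one chunk per worker, score every chunk
    # independently, then concatenate the results. No worker needs to
    # talk to any other worker, which is why this scales well.
    chunks = [rows[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(score_partition, chunks)
    scored = []
    for part in results:
        scored.extend(part)
    return scored


if __name__ == "__main__":
    data = list(range(8))
    print(sorted(parallel_score(data)))
```

By contrast, tasks that need tight coordination between workers (e.g. synchronous distributed training of a single model) pay communication costs on every step, so their speed-up is usually well below linear.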
For that package specifically, it's best to check whether it has a mailing
list, and if not, perhaps ask on its GitHub issues.
Having said that, perhaps the folks involved in that package will reply here
too.
On Wed, 22 Nov 2017 at 20:03, Andy Davidson wrote:
> I am starting a new deep learning project. Currently we do all of our
> work on a single machine, using a combination of Keras and TensorFlow.
> https://databricks.github.io/spark-deep-learning/site/index.html looks very
> promising. Any idea how performance is likely to improve as I add machines
> to my