Hi all,

I've implemented most of a content recommendation system for a client.
However, whenever I attempt to save a MatrixFactorizationModel I've
trained, I see one of four outcomes:

1. Despite "save" being wrapped in a "try" block, I see a massive stack
trace quoting some java.io classes. The Model isn't written.
2. Same as the above, but the Model *is* written. However, it's unusable,
as it's missing many of the files it should contain, particularly in the
"product" folder.
3. Same as the above, but sbt crashes completely.
4. No massive stack trace, and the Model appears to be written. But when
it's loaded in another Spark context and queried with a user ID, it claims
the user isn't present in the Model.

Case 4 is pretty rare. I see these failures both locally and when I test on
a Google Cloud instance with much better resources.

Note that `ALS.trainImplicit` and `model.save` are called from within a
Future. Could it be that Play's thread pool is shutting down before Spark
can finish, interrupting the write?
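
For context, the structure is roughly as follows. This is a minimal
sketch, not our actual code: the method name, rank, and iteration count
are placeholders.

```scala
import scala.concurrent.Future
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.rdd.RDD

// Train and save inside a Future on Play's default execution context.
def trainAndSave(sc: SparkContext, ratings: RDD[Rating], path: String): Future[Unit] =
  Future {
    // Placeholder rank (10) and iteration count (10).
    val model = ALS.trainImplicit(ratings, 10, 10)
    try {
      model.save(sc, path) // the java.io stack traces surface here
    } catch {
      case e: Exception => e.printStackTrace()
    }
  }
```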

We are running Spark 1.6.1 inside Play 2.4 with Scala 2.11. All of these
failures have occurred in Play's dev mode under sbt.
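
If the thread-pool theory holds, blocking the calling thread until Spark
finishes should make the failures disappear. This is what I plan to try
next, again only as a sketch reusing the hypothetical trainAndSave above;
sc, ratings, and modelPath stand in for our real values:

```scala
import scala.concurrent.Await
import scala.concurrent.duration._

// Block until training and saving complete, so the pool can't be torn
// down mid-write (e.g. by a dev-mode reload). The timeout is arbitrary.
Await.result(trainAndSave(sc, ratings, modelPath), 30.minutes)
```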

Thanks for any insight you can give.
