Could you post some stack trace info?

Generally, it can be problematic to run Spark within a web server framework,
as there are often dependency conflicts and threading issues. Your suspicion
about the Future is plausible: in Play's dev mode a reload can tear down the
application classloader while background threads are still running, which
could interrupt a save mid-write and leave a partial model on disk. You might
prefer to run the model-building as a standalone app (sketches below), or
check out https://github.com/spark-jobserver/spark-jobserver (either for
triggering Spark jobs remotely from a web app via HTTP, or for ideas on how
to handle a SparkContext within a web framework / Akka).
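
As a quick diagnostic, you could block until the training and save have
finished instead of letting the Future run detached. A minimal sketch,
where `ratings` (an RDD[Rating]), `sc`, and `modelPath` are placeholders
for values already in your code:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration.Duration
    import scala.concurrent.ExecutionContext.Implicits.global

    import org.apache.spark.mllib.recommendation.ALS

    // Diagnostic only: block until training and save finish, so a dev-mode
    // reload can't tear the application down mid-write. `ratings`, `sc`,
    // and `modelPath` stand in for your own values.
    val done: Future[Unit] = Future {
      val model = ALS.trainImplicit(ratings, 10, 10) // rank/iters illustrative
      model.save(sc, modelPath)
    }
    Await.result(done, Duration.Inf)

If the partial writes stop once the save is guaranteed to complete before
the reload cycle ends, that points squarely at the lifecycle problem.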
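
If that confirms it, the cleaner fix is to move training out of the Play
process entirely. A minimal sketch of a standalone trainer to run with
spark-submit (the input path, output path, and ALS parameters are all
placeholders):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    object TrainModel {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("model-trainer"))
        try {
          // Expect "user,product,strength" lines of implicit feedback.
          val ratings = sc.textFile("hdfs:///path/to/events.csv").map { line =>
            val Array(user, product, strength) = line.split(',')
            Rating(user.toInt, product.toInt, strength.toDouble)
          }
          val model = ALS.trainImplicit(ratings, 10, 10)
          // save() runs Spark jobs of its own; the context must stay alive
          // until it returns, or you get exactly the partial output you saw.
          model.save(sc, "hdfs:///path/to/model")
        } finally {
          sc.stop() // shut down only after the save has completed
        }
      }
    }

The web app then only needs `MatrixFactorizationModel.load(sc, path)` on a
directory that is already complete.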

On Fri, 8 Apr 2016 at 00:56 Colin Woodbury <coli...@gmail.com> wrote:

> Hi all,
>
> I've implemented most of a content recommendation system for a client.
> However, whenever I attempt to save a MatrixFactorizationModel I've
> trained, I see one of four outcomes:
>
> 1. Despite "save" being wrapped in a "try" block, I see a massive stack
> trace quoting some java.io classes. The Model isn't written.
> 2. Same as the above, but the Model *is* written. It's unusable, however,
> as it's missing many of the files it should have, particularly in the
> "product" folder.
> 3. Same as the above, but sbt crashes completely.
> 4. No massive stack trace, and the Model seems to be written. Upon being
> loaded by another Spark context and fed a user ID, it claims the user isn't
> present in the Model.
>
> Case 4 is pretty rare. I see these failures both locally and when I test
> on a Google Cloud instance with much better resources.
>
> Note that `ALS.trainImplicit` and `model.save` are being called from
> within a Future. Could it be possible that Play threads are closing before
> Spark can finish, thus interrupting it somehow?
>
> We are running Spark 1.6.1 within Play 2.4 on Scala 2.11. All of these
> failures have occurred while running in Play's dev mode under sbt.
>
> Thanks for any insight you can give.
>