[ https://issues.apache.org/jira/browse/SPARK-19247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824820#comment-15824820 ]

Asher Krim commented on SPARK-19247:
------------------------------------

Good question. I've seen it come up before 
(http://stackoverflow.com/questions/40842736/spark-word2vecmodel-exceeds-max-rpc-size-for-saving).
 Additionally, the issue from SPARK-11994 is unpatched in ml, so loading large 
models currently requires setting a large `spark.kryoserializer.buffer.max`.
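As a stopgap, raising that limit might look like the following (a hedged sketch, not from this thread: the `2g` value is illustrative, and the SparkSession API shown is for Spark 2.x; on 1.6 the same key would go on a SparkConf):

```python
# Illustrative workaround: raise the Kryo serializer buffer ceiling so a
# large single-datum model can be deserialized on load. The "2g" value is
# an assumption; size it to the model being loaded.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("load-large-word2vec")
    .config("spark.kryoserializer.buffer.max", "2g")
    .getOrCreate()
)
```

This only papers over the symptom; the underlying fix is to avoid producing a single oversized datum in the first place.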

(Personally, I've been on a wild goose chase fighting OOMs while saving large 
ml.word2vec models (Spark 1.6.3). This seemed like a good place to start 
digging into it. However, in further testing, it looks like my issue may stem 
from CatalystTypeConverters)

I'm happy to follow any backwards compatibility guidelines.
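For what the SPARK-11994-style fix amounts to, here is a minimal sketch in plain Python (no Spark; the helper name, the 4-byte float size, and the record-size limit are all illustrative): instead of writing every vector as one huge record, split the flat float array into bounded-size chunks, each persisted as its own row.

```python
# Hedged illustration of the chunked-save technique: split a flat array of
# word-vector floats into chunks of whole vectors, each chunk small enough
# to stay under a per-record byte budget (assuming 4-byte floats).

def chunk_vectors(flat_vectors, vector_size, max_bytes_per_record=1 << 20):
    """Return a list of chunks, each holding whole vectors and at most
    max_bytes_per_record bytes worth of 4-byte floats."""
    bytes_per_vector = vector_size * 4
    vectors_per_chunk = max(1, max_bytes_per_record // bytes_per_vector)
    floats_per_chunk = vectors_per_chunk * vector_size
    return [
        flat_vectors[i:i + floats_per_chunk]
        for i in range(0, len(flat_vectors), floats_per_chunk)
    ]

# e.g. 10 vectors of size 3, with a budget that fits 2 vectors per record
chunks = chunk_vectors(list(range(30)), vector_size=3, max_bytes_per_record=24)
# -> 5 chunks of 6 floats each; concatenating them recovers the original array
```

No single record then exceeds the RPC/serializer limits, and the loader reassembles the chunks in order.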

> improve ml word2vec save/load
> -----------------------------
>
>                 Key: SPARK-19247
>                 URL: https://issues.apache.org/jira/browse/SPARK-19247
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Asher Krim
>
> ml word2vec models can be somewhat large (~4 GB is not uncommon). The current 
> save implementation saves the model as a single large datum, which can cause 
> RPC issues and fail to save the model.
> On the loading side, there are issues with loading this large datum as well. 
> This was already solved for mllib word2vec in 
> https://issues.apache.org/jira/browse/SPARK-11994, but the change was never 
> ported to the ml word2vec implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
