I am trying to save a large text file of approx. 5 GB:
sc.parallelize(cfile.toString()
              .split("\n"), 1)
  .saveAsTextFile(new Path(path + ".cs", "data").toUri.toString)
but I keep getting
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
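One thing that might help (a sketch, not tested at this scale): avoid materializing the whole 5 GB as one driver-side string and pushing it through a single partition, which is a lot of data for one task to write. If the content already exists as a file on disk, letting Spark read it directly spreads the work over one partition per block. The path names and the `saveLargeText` helper below are placeholders of mine, not anything from your code:

```scala
import org.apache.hadoop.fs.Path
import org.apache.spark.SparkContext

// Sketch: read the file directly instead of cfile.toString().split("\n"),
// so the 5 GB never has to travel through the driver in one piece and the
// write is spread across many tasks instead of one.
def saveLargeText(sc: SparkContext, inputPath: String, path: String): Unit = {
  sc.textFile(inputPath)  // one partition per HDFS block by default
    .saveAsTextFile(new Path(path + ".cs", "data").toUri.toString)
}
```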
Hello!
I am experiencing an issue [1] with Word2VecModel#save. It appears to
exceed spark.akka.frameSize (see stack trace [3]).
Setting the frameSize is not really an option because that would just
limit me to 2 GB, so I wonder if there is anything I can do to make this
work even if the model
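In case it is useful: since Word2VecModel#save serializes the model through the driver, one possible workaround is to persist the vectors yourself as an RDD and rebuild the model on load. A sketch, assuming Spark 1.x MLlib where `getVectors` returns the word-to-vector map and `Word2VecModel` has a constructor taking such a map; the helper names and partition count are my own:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.feature.Word2VecModel

// Sketch: write word -> vector pairs as plain text, bypassing the single
// serialized blob that trips over spark.akka.frameSize.
def saveVectors(sc: SparkContext, model: Word2VecModel, path: String): Unit = {
  val pairs = model.getVectors.toSeq  // Map[String, Array[Float]] as pairs
  sc.parallelize(pairs, numSlices = 64)
    .map { case (word, vec) => word + "\t" + vec.mkString(",") }
    .saveAsTextFile(path)
}

// Note: this still collects the full map to the driver, so it needs enough
// driver memory -- it only avoids the frameSize limit, not the data volume.
def loadVectors(sc: SparkContext, path: String): Word2VecModel = {
  val vectors = sc.textFile(path).map { line =>
    val Array(word, vec) = line.split("\t", 2)
    (word, vec.split(",").map(_.toFloat))
  }.collectAsMap().toMap
  new Word2VecModel(vectors)
}
```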
Nobody here who can help me on this? :/
On 19/04/16 13:15, Stefan Falk wrote:
Hello Sparklings!
I am trying to train a word vector model, but when I call
Word2VecModel#save() I am getting an org.apache.spark.SparkException
saying that this would exceed the frameSize limit (stackoverflow
question [1]).
Increasing the frameSize would only help me in this particular case I