Hi Arun,
Have you found a solution? It seems I have the same problem.
thanks,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Any-issues-with-repartition-tp13462p15654.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
.set("spark.task.maxFailures", "16")
.set("spark.worker.timeout", "150")
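For context, here is how those two settings fit into a complete SparkConf on Spark 1.x. This is a sketch only: the app name and the `sc` variable are illustrative, and the values are the ones quoted above, not recommendations. Note that the correct option name is `spark.task.maxFailures` (the original post had a typo, "tast"), and `SparkConf.set` takes string values.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: the retry/timeout settings from this thread in a full conf.
val conf = new SparkConf()
  .setAppName("repartition-retry-test")   // app name is illustrative
  .set("spark.task.maxFailures", "16")    // retry each task up to 16 times before failing the job
  .set("spark.worker.timeout", "150")     // seconds before the master considers a worker lost
val sc = new SparkContext(conf)
```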
thanks a lot,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Any-issues-with-repartition-tp13462p15674.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I am still puzzled by this. In my case the data is allowed to spill to disk,
and I usually get different errors when a job actually runs out of memory.
My guess is that Akka is killing the executors for some reason.
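If the Akka layer is indeed dropping executors, a common first step on Spark 1.x was to raise the Akka timeout and frame size. A hedged config sketch: the option names are real 1.x settings, but the values are illustrative, not tuned recommendations.

```scala
// Spark 1.x Akka-related settings sometimes raised when executors are
// lost during long shuffles. Values below are illustrative only.
val conf = new org.apache.spark.SparkConf()
  .set("spark.akka.timeout", "300")    // seconds; communication timeout between nodes
  .set("spark.akka.frameSize", "128")  // MB; max message size between driver and executors
```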
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Any-issues-with-repartition-tp13462p15929.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I would second the suggestion that one of the Spark committers weigh in.
The repartition() command often fails for me, no matter how many times I run
it.
This feels more like 0.x behavior than 1.0.2 behavior.
Anyone?
Dale.
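For readers hitting the same repartition() failures on 0.x/1.0.x: the shuffle-file settings below were sometimes suggested in that era for shuffle-heavy stages. This is a sketch only; the option names are real Spark 1.x configs, but the values are illustrative and whether they help depends on the actual failure mode.

```scala
// Spark 0.x/1.0.x-era shuffle settings sometimes suggested when
// repartition() fails during the shuffle. Values are illustrative.
val conf = new org.apache.spark.SparkConf()
  .set("spark.shuffle.consolidateFiles", "true") // merge map-side shuffle outputs into fewer files
  .set("spark.shuffle.spill", "true")            // allow spilling shuffle data to disk under memory pressure
```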
On 10/8/14, 1:06 AM, "Paul Wais" wrote:
> Looks like an OOM issue? Hav