That may be the cause of your issue. Take a look at the tuning guide[1] and
maybe also profile your application. See if you can reuse your objects.
1. http://spark.apache.org/docs/latest/tuning.html
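For reference, the first things that guide suggests are switching to Kryo serialization and registering your classes; a minimal sketch of the relevant settings (values are illustrative, not tuned for your job):

```properties
# spark-defaults.conf — illustrative, from the tuning guide's serialization section
spark.serializer                 org.apache.spark.serializer.KryoSerializer
# Optional: fail fast when a class is serialized with Kryo but not registered
spark.kryo.registrationRequired  false
```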
Best Regards,
Sonal
Founder, Nube Technologies http://www.nubetech.co
Thanks, Sonal.
But it seems to be an error that happened while “cleaning broadcast”?
BTW, what is the “[30 seconds]” timeout? Can I increase it?
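(My guess: that 30-second value matches the default of spark.akka.askTimeout in Spark 1.x. Assuming that is indeed where it comes from, it might be raised like this — an untested sketch, not a confirmed fix:)

```properties
# spark-defaults.conf — assumes the 30s comes from the Akka ask timeout
spark.akka.askTimeout   120
```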
Best,
Yifan LI
On 02 Feb 2015, at 11:12, Sonal Goyal sonalgoy...@gmail.com wrote:
I think this broadcast cleaning (memory block removal?) timeout exception was
caused by:
15/02/02 11:48:49 ERROR TaskSchedulerImpl: Lost executor 13 on
small18-tap1.common.lip6.fr: remote Akka client disassociated
15/02/02 11:48:49 ERROR SparkDeploySchedulerBackend: Asked to remove
non-existent
Is your code hitting frequent garbage collection?
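You can check by turning on GC logging for the executors — these are the JVM flags the tuning guide itself recommends:

```properties
# spark-defaults.conf (or pass via --conf to spark-submit)
spark.executor.extraJavaOptions  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
```

Long full-GC pauses on an executor can also explain the “remote Akka client disassociated” message, since the executor stops responding to the driver during the pause.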
Best Regards,
Sonal
Founder, Nube Technologies http://www.nubetech.co
http://in.linkedin.com/in/sonalgoyal
On Fri, Jan 30, 2015 at 7:52 PM, Yifan LI iamyifa...@gmail.com wrote:
Yes, I think so, especially for a Pregel application… do you have any suggestions?
Best,
Yifan LI
On 30 Jan 2015, at 22:25, Sonal Goyal sonalgoy...@gmail.com wrote:
Is your code hitting frequent garbage collection?
Best Regards,
Sonal
Founder, Nube Technologies http://www.nubetech.co/
Hi,
I am running my graphx application on Spark 1.2.0 (an 11-node cluster), and have
requested 30GB memory per node and 100 cores for an input dataset of around 1GB
(a graph with 5 million vertices).
But the error below always happens…
Could anyone give me some pointers?
(BTW, the overall edge/vertex RDDs