Hello Spark aficionados,

We upgraded from Spark 1.0.0 to 1.0.1 when the new release came out and
started noticing some odd behavior: even a simple operation like
"reduceByKey" or "count" on an RDD gets stuck when the job runs in cluster
mode. The issue does not occur with Spark 1.0.0 (cluster or local mode),
Spark 1.0.2 (cluster or local mode), or Spark 1.0.1 in local mode. I looked
through the Spark release notes, and the "gigantic task size" issue does
not seem to apply, since we have plenty of resources on this cluster.

Has anyone else encountered this issue before?

Thanks in advance for your help,
Shivani


-- 
Software Engineer
Analytics Engineering Team @ Box
Mountain View, CA
