Yes, I ran the minimum spanning tree job and it failed again. I also
increased the ZooKeeper counter, but it failed again. The log files state
that an "org.apache.zookeeper.KeeperException$ConnectionLossException"
occurred before the job was killed. If it's a memory problem, can I
increase the memory limit per e
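A ConnectionLossException is a transient error: the ZooKeeper client lost its connection but may be able to reconnect, so callers often wrap operations in a retry loop rather than failing outright. Below is a minimal, self-contained sketch of that retry-with-backoff pattern; the names here (`TransientLossException`, `callWithRetry`) are hypothetical stand-ins, not part of ZooKeeper's or Giraph's actual API.

```java
// Sketch of retrying an operation that can throw a transient
// connection-loss error, the kind ZooKeeper's
// KeeperException$ConnectionLossException signals.
public class RetrySketch {
    // Hypothetical stand-in for a transient connection-loss error.
    static class TransientLossException extends Exception {}

    interface Call<T> { T run() throws TransientLossException; }

    // Retry the operation up to maxAttempts times, sleeping a little
    // longer after each failure; rethrow once attempts are exhausted.
    static <T> T callWithRetry(Call<T> op, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.run();
            } catch (TransientLossException e) {
                if (attempt >= maxAttempts) throw e;
                Thread.sleep(50L * attempt); // simple linear backoff
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] failuresLeft = {2}; // simulate two transient failures
        String result = callWithRetry(() -> {
            if (failuresLeft[0]-- > 0) throw new TransientLossException();
            return "connected";
        }, 5);
        System.out.println(result);
    }
}
```

If the losses persist past the retry budget, that usually points at the real cause (GC pauses or memory pressure making the worker miss ZooKeeper heartbeats), which is why the memory question above is worth pursuing.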
As I said, failures on specific supersteps *might* happen, but they're not
guaranteed to.
Did you run the minimum spanning tree job again? Did it finish
successfully?
On a different note, what do you mean by "submitted a job of 90
supersteps"? I don't think you can specify the number of supersteps-- that
Thank you Vishal.
But I submitted a PageRank job with 90 supersteps, 20 workers, 4,000,000
vertices, and 30 edges per vertex, and the job completed successfully. I'm
really confused.
On Wed, Aug 22, 2012 at 7:33 PM, Vishal Patel wrote:
> After several supersteps, sometimes a worker thread dies (say it