I'm computing connected components with Spark GraphX on AWS EC2. I believe
the computation itself succeeded, since the type information of the final
result was printed. After that, Spark appeared to be doing some cleanup: the
BlockManager removed a number of blocks and then got stuck after

15/07/04 21:53:06 INFO storage.BlockManager: Removing block rdd_334_4
15/07/04 21:53:06 INFO storage.MemoryStore: Block rdd_334_4 of size 25986936 dropped from memory (free 15648106262)

There was no error message and no further output for about an hour. When I
pressed the Enter key, I got disconnected from the cluster. Does anyone
happen to know what's going on here?

The cluster has 8 r3.4xlarge instances, and the graph has 7 million edges
and 200 million vertices.
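
For reference, the job is essentially the standard GraphX connectedComponents
call. Below is a minimal sketch of what I'm running; the input path and
storage levels are placeholders, not my exact settings:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.storage.StorageLevel

object ConnectedComponentsJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ConnectedComponents")
    val sc = new SparkContext(conf)

    // Load the edge list; the path here is a placeholder.
    val graph = GraphLoader.edgeListFile(
      sc,
      "s3n://my-bucket/edges.txt",
      edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
      vertexStorageLevel = StorageLevel.MEMORY_AND_DISK)

    // Run the built-in GraphX connected-components algorithm.
    val cc = graph.connectedComponents().vertices

    // Count distinct component IDs to force the computation.
    println("components: " + cc.map(_._2).distinct().count())

    sc.stop()
  }
}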

Thank you!


