Re: The running time of spark

2014-08-23 Thread Denis RP
In fact I think it's highly improbable, but I just want some confirmation from you; please leave your opinion, thanks :) -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/The-running-time-of-spark-tp12624p12691.html Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: The running time of spark

2014-08-23 Thread Sean Owen
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/The-running-time-of-spark-tp12624.html

Re: The running time of spark

2014-08-23 Thread Denis RP
, or suggestions to make the process fast enough. Thanks! -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/The-running-time-of-spark-tp12624p12696.html

Re: The running time of spark

2014-08-23 Thread Ankur Dave
At 2014-08-23 08:33:48 -0700, Denis RP qq378789...@gmail.com wrote:
> The bottleneck seems to be I/O; CPU usage stays in the 10%~15% range most of the time per VM. The caching is maintained by Pregel and should be reliable. The storage level is MEMORY_AND_DISK_SER.

I'd suggest trying the DISK_ONLY storage level and
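The storage-level change suggested above can be sketched roughly as follows. This is a minimal sketch, not code from the thread: the input path is a hypothetical placeholder, and it assumes a GraphX version (1.1+) where `GraphLoader.edgeListFile` accepts `edgeStorageLevel` and `vertexStorageLevel` parameters, so that Pregel-based algorithms reuse those levels for the intermediate graphs they cache each iteration.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.storage.StorageLevel

object DiskOnlyGraphSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[*]", "disk-only-sketch")

    // Keep vertices and edges on disk (DISK_ONLY) instead of the
    // default, trading repeated disk reads for a smaller heap.
    val graph = GraphLoader.edgeListFile(
      sc,
      "hdfs://.../edges.txt", // hypothetical input path
      edgeStorageLevel = StorageLevel.DISK_ONLY,
      vertexStorageLevel = StorageLevel.DISK_ONLY)

    // A Pregel-based algorithm such as PageRank then inherits these
    // storage levels for the graphs it materializes per iteration.
    val ranks = graph.pageRank(0.0001).vertices
    ranks.take(5).foreach(println)

    sc.stop()
  }
}
```

Whether DISK_ONLY actually helps depends on where the I/O pressure comes from; if the job was already spilling serialized partitions to disk under MEMORY_AND_DISK_SER, the change mainly removes the cost of keeping the in-memory serialized copies.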

Re: The running time of spark

2014-08-23 Thread Denis RP
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/The-running-time-of-spark-tp12624p12707.html

The running time of spark

2014-08-21 Thread Denis RP
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/The-running-time-of-spark-tp12624.html