Re: HDFS performance + unexpected death of executors

2015-07-14 Thread Max Demoulin
I will try a fresh setup very soon. Actually, I tried to compile Spark myself against Hadoop 2.5.2, but I hit the issue I mentioned in this thread: http://apache-spark-user-list.1001560.n3.nabble.com/Master-doesn-t-start-no-logs-td23651.html I was wondering if maybe…
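For reference, the Spark 1.4 build documentation describes compiling against a specific Hadoop version with Maven profiles. A minimal sketch of that invocation (the profile/version pairing follows the docs for a Hadoop 2.5.x target; the command is only echoed here, not run):

```shell
# Sketch, not from the thread: documented Spark 1.4 build flags for
# targeting Hadoop 2.5.2 (hadoop-2.4 is the profile covering 2.4+).
BUILD_CMD="mvn -Phadoop-2.4 -Dhadoop.version=2.5.2 -DskipTests clean package"
echo "$BUILD_CMD"
```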

Re: Issues when combining Spark and a third party java library

2015-07-12 Thread Max Demoulin
Yes, thank you. -- Henri Maxime Demoulin 2015-07-12 2:53 GMT-04:00 Akhil Das ak...@sigmoidanalytics.com: Did you try setting HADOOP_CONF_DIR? Thanks Best Regards On Sat, Jul 11, 2015 at 3:17 AM, maxdml maxdemou...@gmail.com wrote: Also, it's worth noting that I'm using the prebuilt…
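A minimal sketch of the suggestion above: export `HADOOP_CONF_DIR` in the shell that launches Spark so it picks up the Hadoop client configuration. The `/etc/hadoop/conf` path is a common convention, not something stated in the thread:

```shell
# Assumed location of the Hadoop client configs; adjust to your cluster.
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```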

Re: Master doesn't start, no logs

2015-07-07 Thread Max Demoulin
…GMT-04:00 Akhil Das ak...@sigmoidanalytics.com: Can you try renaming the ~/.ivy2 file to ~/.ivy2_backup, building Spark 1.4.0 again, and running it? Thanks Best Regards On Tue, Jul 7, 2015 at 6:27 PM, Max Demoulin maxdemou...@gmail.com wrote: Yes, I do set $SPARK_MASTER_IP. I suspect a more…
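The renaming step suggested above can be sketched as follows; this version operates on a temporary directory so it is safe to run as-is (replace `$TMP_HOME` with `$HOME` to do it for real):

```shell
# Demonstrate moving the Ivy cache aside before rebuilding Spark.
# TMP_HOME stands in for $HOME so the sketch doesn't touch a real cache.
TMP_HOME=$(mktemp -d)
mkdir -p "$TMP_HOME/.ivy2/cache"
mv "$TMP_HOME/.ivy2" "$TMP_HOME/.ivy2_backup"
ls -d "$TMP_HOME/.ivy2_backup"
```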

Re: Master doesn't start, no logs

2015-07-07 Thread Max Demoulin
Yes, I do set $SPARK_MASTER_IP. I suspect a more internal issue, maybe due to multiple Spark/HDFS instances having run successively on the same machine? -- Henri Maxime Demoulin 2015-07-07 4:10 GMT-04:00 Akhil Das ak...@sigmoidanalytics.com: Strange. What do you have in $SPARK_MASTER_IP? It…
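When the standalone master dies without output, one sketch of a debugging step is to bind it explicitly and read the master log under `$SPARK_HOME/logs`. The install path and bind address below are assumptions, not values from the thread:

```shell
# Hypothetical install path; sbin/start-master.sh logs to $SPARK_HOME/logs/.
SPARK_HOME="${SPARK_HOME:-/opt/spark-1.4.0}"
export SPARK_MASTER_IP=127.0.0.1
echo "run: $SPARK_HOME/sbin/start-master.sh (binding to $SPARK_MASTER_IP)"
echo "then: tail $SPARK_HOME/logs/*Master*.out"
```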

Re: Directory creation failed leads to job fail (should it?)

2015-06-29 Thread Max Demoulin
The underlying issue is filesystem corruption on the workers. In the case where I use HDFS with a sufficient number of replicas, would Spark try to launch the task on another node where a block replica is present? Thanks :-) -- Henri Maxime Demoulin 2015-06-29 9:10 GMT-04:00 ayan guha…
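For context on the replication question: HDFS replicates each block across DataNodes according to `dfs.replication`, and Spark's scheduler prefers nodes that hold a replica of the input block, so a task can generally be retried where another replica lives. A minimal `hdfs-site.xml` fragment setting the replication factor (3 is the HDFS default):

```xml
<!-- hdfs-site.xml: number of replicas kept for each block -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```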

Re: Directory creation failed leads to job fail (should it?)

2015-06-29 Thread Max Demoulin
…not by replication. On 30 Jun 2015 01:50, Max Demoulin maxdemou...@gmail.com wrote: The underlying issue is filesystem corruption on the workers. In the case where I use HDFS with a sufficient number of replicas, would Spark try to launch the task on another node where the block replica is present…

Re: Exception in thread main java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()J

2015-06-25 Thread Max Demoulin
I see, thank you! -- Henri Maxime Demoulin 2015-06-25 5:54 GMT-04:00 Steve Loughran ste...@hortonworks.com: You are using a Guava version on the classpath which your version of Hadoop can't handle. Try Guava 15, or build Spark against Hadoop 2.7.0. On 24 Jun 2015, at 19:03, maxdml…

Re: Exception in thread main java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()J

2015-06-25 Thread Max Demoulin
Can I actually include another version of Guava in the classpath when launching the example through spark-submit? -- Henri Maxime Demoulin 2015-06-25 10:57 GMT-04:00 Max Demoulin max...@cs.duke.edu: I see, thank you! -- Henri Maxime Demoulin 2015-06-25 5:54 GMT-04:00 Steve Loughran ste…
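One way this is commonly done in Spark 1.x is to ship the jar with `--jars` and set the `userClassPathFirst` options so the user-supplied Guava shadows Spark's. A sketch only (the jar path and main class are hypothetical, and these options are marked experimental in the 1.x docs; the command is echoed, not executed):

```shell
# Hypothetical jar location and application entry point.
GUAVA_JAR="$HOME/jars/guava-15.0.jar"
SUBMIT="spark-submit --jars $GUAVA_JAR \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class org.example.Main app.jar"
echo "$SUBMIT"
```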