Re: Maven out of memory error

2015-01-18 Thread Sean Owen
Oh: are you running the tests with a different profile setting than
what the last assembly was built with? This particular test depends on
those matching. Not 100% sure that's the problem, but it's a good guess.
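
A rough sketch of that check, assuming the hadoop-2.4/yarn/hive profiles from Ted's command quoted below: rebuild the assembly with the same profiles, then re-run the suite against it.

mvn -Phadoop-2.4 -Pyarn -Phive -DskipTests clean package
mvn -Phadoop-2.4 -Pyarn -Phive -Dtest=JavaAPISuite test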

On Sat, Jan 17, 2015 at 4:54 PM, Ted Yu yuzhih...@gmail.com wrote:
 The test passed here:

 https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/1215/consoleFull

 It passed locally with the following command:

 mvn -DHADOOP_PROFILE=hadoop-2.4 -Phadoop-2.4 -Pyarn -Phive test
 -Dtest=JavaAPISuite

 FYI

 On Sat, Jan 17, 2015 at 8:23 AM, Andrew Musselman
 andrew.mussel...@gmail.com wrote:

 Failing for me and another team member on the command line, for what it's
 worth.

  On Jan 17, 2015, at 2:39 AM, Sean Owen so...@cloudera.com wrote:
 
  Hm, this test hangs for me in IntelliJ. It could be a real problem,
  and a combination of a) just recently actually enabling Java tests, b)
  recent updates to the complicated Guava shading situation.
 
  The manifestation of the error usually suggests that something totally
  failed to start (because of, say, class incompatibility errors, etc.)
  Thus things hang and time out waiting for the dead component. It's
  sometimes hard to get answers from the embedded component that dies
  though.
 
  That said, it seems to pass on the command line. For example my recent
  Jenkins job shows it passes:
 
  https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25682/consoleFull
 
  I'll try to uncover more later this weekend. Thoughts welcome though.
 
  On Fri, Jan 16, 2015 at 8:26 PM, Andrew Musselman
  andrew.mussel...@gmail.com wrote:
  Thanks Ted, got farther along but now have a failing test; is this a
  known
  issue?
 
  ---
  T E S T S
  ---
  Running org.apache.spark.JavaAPISuite
   Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec  FAILURE! - in org.apache.spark.JavaAPISuite
   testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec  ERROR!
   org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
  at scala.Option.foreach(Option.scala:236)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
  at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
  at akka.actor.ActorCell.invoke(ActorCell.scala:487)
  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
  at akka.dispatch.Mailbox.run(Mailbox.scala:220)
  at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
  at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
  at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
  at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
  at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 
  Running org.apache.spark.JavaJdbcRDDSuite
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846
  sec -
  in org.apache.spark.JavaJdbcRDDSuite
 
  Results :
 
 
  Tests in error:
   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage
  failure:
  Maste...
 
  On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:
 
  Can you try doing this before running mvn ?
 
   export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
 
  What OS are you using ?
 
  Cheers
 
  On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman
  andrew.mussel...@gmail.com wrote:
 
  Just got the latest from Github and tried running `mvn test`; is this
  error common and do you have any advice on fixing it?
 
  

Re: Maven out of memory error

2015-01-18 Thread Ted Yu
Yes.
That could be the cause.

On Sun, Jan 18, 2015 at 11:47 AM, Sean Owen so...@cloudera.com wrote:

 Oh: are you running the tests with a different profile setting than
 what the last assembly was built with? This particular test depends on
 those matching. Not 100% sure that's the problem, but it's a good guess.

 On Sat, Jan 17, 2015 at 4:54 PM, Ted Yu yuzhih...@gmail.com wrote:
  The test passed here:
 
 
 https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/1215/consoleFull
 
  It passed locally with the following command:
 
  mvn -DHADOOP_PROFILE=hadoop-2.4 -Phadoop-2.4 -Pyarn -Phive test
  -Dtest=JavaAPISuite
 
  FYI
 
  On Sat, Jan 17, 2015 at 8:23 AM, Andrew Musselman
  andrew.mussel...@gmail.com wrote:
 
  Failing for me and another team member on the command line, for what
 it's
  worth.
 
   On Jan 17, 2015, at 2:39 AM, Sean Owen so...@cloudera.com wrote:
  
   Hm, this test hangs for me in IntelliJ. It could be a real problem,
   and a combination of a) just recently actually enabling Java tests, b)
   recent updates to the complicated Guava shading situation.
  
   The manifestation of the error usually suggests that something totally
   failed to start (because of, say, class incompatibility errors, etc.)
   Thus things hang and time out waiting for the dead component. It's
   sometimes hard to get answers from the embedded component that dies
   though.
  
   That said, it seems to pass on the command line. For example my recent
   Jenkins job shows it passes:
  
  
 https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25682/consoleFull
  
   I'll try to uncover more later this weekend. Thoughts welcome though.
  
   On Fri, Jan 16, 2015 at 8:26 PM, Andrew Musselman
   andrew.mussel...@gmail.com wrote:
   Thanks Ted, got farther along but now have a failing test; is this a
   known
   issue?
  
   ---
   T E S T S
   ---
   Running org.apache.spark.JavaAPISuite
    Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec  FAILURE! - in org.apache.spark.JavaAPISuite
    testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec  ERROR!
    org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
   at scala.Option.foreach(Option.scala:236)
   at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
   at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
   at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
   at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
   at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
   at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
   at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
  
   Running org.apache.spark.JavaJdbcRDDSuite
   Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846
   sec -
   in org.apache.spark.JavaJdbcRDDSuite
  
   Results :
  
  
   Tests in error:
JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage
   failure:
   Maste...
  
   On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com
 wrote:
  
   Can you try doing this before running mvn ?
  
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
  
   What OS 

Re: Maven out of memory error

2015-01-17 Thread Sean Owen
Hm, this test hangs for me in IntelliJ. It could be a real problem,
and a combination of a) just recently actually enabling Java tests, b)
recent updates to the complicated Guava shading situation.

The manifestation of the error usually suggests that something totally
failed to start (because of, say, class incompatibility errors, etc.).
Thus things hang and time out waiting for the dead component. It's
sometimes hard to get answers from the embedded component that dies
though.

That said, it seems to pass on the command line. For example my recent
Jenkins job shows it passes:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25682/consoleFull

I'll try to uncover more later this weekend. Thoughts welcome though.

On Fri, Jan 16, 2015 at 8:26 PM, Andrew Musselman
andrew.mussel...@gmail.com wrote:
 Thanks Ted, got farther along but now have a failing test; is this a known
 issue?

 ---
  T E S T S
 ---
 Running org.apache.spark.JavaAPISuite
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec
  FAILURE! - in org.apache.spark.JavaAPISuite
 testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec
  ERROR!
 org.apache.spark.SparkException: Job aborted due to stage failure: Master
 removed our application: FAILED
 at
 org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
 at
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at
 org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at scala.Option.foreach(Option.scala:236)
 at
 org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
 at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
 at akka.dispatch.Mailbox.run(Mailbox.scala:220)
 at
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 Running org.apache.spark.JavaJdbcRDDSuite
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec -
 in org.apache.spark.JavaJdbcRDDSuite

 Results :


 Tests in error:
   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage failure:
 Maste...

 On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:

 Can you try doing this before running mvn ?

  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

 What OS are you using ?

 Cheers

 On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman
 andrew.mussel...@gmail.com wrote:

 Just got the latest from Github and tried running `mvn test`; is this
 error common and do you have any advice on fixing it?

 Thanks!

 [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
 spark-core_2.10 ---
 [WARNING] Zinc server is not available at port 3030 - reverting to normal
 incremental compile
 [INFO] Using incremental compilation
 [INFO] compiler plugin:
 BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
 [INFO] Compiling 400 Scala sources and 34 Java sources to
 /home/akm/spark/core/target/scala-2.10/classes...
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
 imported `DataReadMethod' is permanently hidden by definition of object
 DataReadMethod in package executor
 [WARNING] import org.apache.spark.executor.DataReadMethod
 [WARNING]  ^
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
 match may not be exhaustive.
 It would fail on the following input: TASK_ERROR
 [WARNING]   def fromMesos(mesosState: 

Re: Maven out of memory error

2015-01-17 Thread Andrew Musselman
Failing for me and another team member on the command line, for what it's worth.

 On Jan 17, 2015, at 2:39 AM, Sean Owen so...@cloudera.com wrote:
 
 Hm, this test hangs for me in IntelliJ. It could be a real problem,
 and a combination of a) just recently actually enabling Java tests, b)
 recent updates to the complicated Guava shading situation.
 
 The manifestation of the error usually suggests that something totally
 failed to start (because of, say, class incompatibility errors, etc.)
 Thus things hang and time out waiting for the dead component. It's
 sometimes hard to get answers from the embedded component that dies
 though.
 
 That said, it seems to pass on the command line. For example my recent
 Jenkins job shows it passes:
 https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25682/consoleFull
 
 I'll try to uncover more later this weekend. Thoughts welcome though.
 
 On Fri, Jan 16, 2015 at 8:26 PM, Andrew Musselman
 andrew.mussel...@gmail.com wrote:
 Thanks Ted, got farther along but now have a failing test; is this a known
 issue?
 
 ---
 T E S T S
 ---
 Running org.apache.spark.JavaAPISuite
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec
  FAILURE! - in org.apache.spark.JavaAPISuite
 testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec
  ERROR!
 org.apache.spark.SparkException: Job aborted due to stage failure: Master
 removed our application: FAILED
at
 org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
at
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at
 org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at
 org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 
 Running org.apache.spark.JavaJdbcRDDSuite
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec -
 in org.apache.spark.JavaJdbcRDDSuite
 
 Results :
 
 
 Tests in error:
  JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage failure:
 Maste...
 
 On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:
 
 Can you try doing this before running mvn ?
 
  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
 
 What OS are you using ?
 
 Cheers
 
 On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman
 andrew.mussel...@gmail.com wrote:
 
 Just got the latest from Github and tried running `mvn test`; is this
 error common and do you have any advice on fixing it?
 
 Thanks!
 
 [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
 spark-core_2.10 ---
 [WARNING] Zinc server is not available at port 3030 - reverting to normal
 incremental compile
 [INFO] Using incremental compilation
 [INFO] compiler plugin:
 BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
 [INFO] Compiling 400 Scala sources and 34 Java sources to
 /home/akm/spark/core/target/scala-2.10/classes...
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
 imported `DataReadMethod' is permanently hidden by definition of object
 DataReadMethod in package executor
 [WARNING] import org.apache.spark.executor.DataReadMethod
 [WARNING]  ^
 [WARNING]
 

Re: Maven out of memory error

2015-01-17 Thread Ted Yu
The test passed here:

https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/1215/consoleFull

It passed locally with the following command:

mvn -DHADOOP_PROFILE=hadoop-2.4 -Phadoop-2.4 -Pyarn -Phive test
-Dtest=JavaAPISuite

FYI

On Sat, Jan 17, 2015 at 8:23 AM, Andrew Musselman 
andrew.mussel...@gmail.com wrote:

 Failing for me and another team member on the command line, for what it's
 worth.

  On Jan 17, 2015, at 2:39 AM, Sean Owen so...@cloudera.com wrote:
 
  Hm, this test hangs for me in IntelliJ. It could be a real problem,
  and a combination of a) just recently actually enabling Java tests, b)
  recent updates to the complicated Guava shading situation.
 
  The manifestation of the error usually suggests that something totally
  failed to start (because of, say, class incompatibility errors, etc.)
  Thus things hang and time out waiting for the dead component. It's
  sometimes hard to get answers from the embedded component that dies
  though.
 
  That said, it seems to pass on the command line. For example my recent
  Jenkins job shows it passes:
 
 https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25682/consoleFull
 
  I'll try to uncover more later this weekend. Thoughts welcome though.
 
  On Fri, Jan 16, 2015 at 8:26 PM, Andrew Musselman
  andrew.mussel...@gmail.com wrote:
  Thanks Ted, got farther along but now have a failing test; is this a
 known
  issue?
 
  ---
  T E S T S
  ---
  Running org.apache.spark.JavaAPISuite
  Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec  FAILURE! - in org.apache.spark.JavaAPISuite
  testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec  ERROR!
  org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
 at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at scala.Option.foreach(Option.scala:236)
 at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
 at akka.dispatch.Mailbox.run(Mailbox.scala:220)
 at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 
  Running org.apache.spark.JavaJdbcRDDSuite
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846
 sec -
  in org.apache.spark.JavaJdbcRDDSuite
 
  Results :
 
 
  Tests in error:
   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage
 failure:
  Maste...
 
  On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:
 
  Can you try doing this before running mvn ?
 
   export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
 
  What OS are you using ?
 
  Cheers
 
  On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman
  andrew.mussel...@gmail.com wrote:
 
  Just got the latest from Github and tried running `mvn test`; is this
  error common and do you have any advice on fixing it?
 
  Thanks!
 
  [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
  spark-core_2.10 ---
  [WARNING] Zinc server is not available at port 3030 - reverting to
 normal
  incremental compile
  [INFO] Using incremental compilation
  [INFO] compiler plugin:
  

Maven out of memory error

2015-01-16 Thread Andrew Musselman
Just got the latest from Github and tried running `mvn test`; is this error
common and do you have any advice on fixing it?

Thanks!

[INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
spark-core_2.10 ---
[WARNING] Zinc server is not available at port 3030 - reverting to normal
incremental compile
[INFO] Using incremental compilation
[INFO] compiler plugin:
BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
[INFO] Compiling 400 Scala sources and 34 Java sources to
/home/akm/spark/core/target/scala-2.10/classes...
[WARNING]
/home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
imported `DataReadMethod' is permanently hidden by definition of object
DataReadMethod in package executor
[WARNING] import org.apache.spark.executor.DataReadMethod
[WARNING]  ^
[WARNING]
/home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
match may not be exhaustive.
It would fail on the following input: TASK_ERROR
[WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState =
mesosState match {
[WARNING]  ^
[WARNING]
/home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89:
method isDirectory in class FileSystem is deprecated: see corresponding
Javadoc for more information.
[WARNING] if (!fileSystem.isDirectory(new Path(logBaseDir))) {
[WARNING] ^
[ERROR] PermGen space - [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError


Re: Maven out of memory error

2015-01-16 Thread Ted Yu
Can you try doing this before running mvn ?

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

What OS are you using ?

Cheers
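
For context, a hedged note: -Xmx2g raises the Maven JVM heap, -XX:MaxPermSize=512M enlarges the PermGen space that overflows in the error quoted below, and -XX:ReservedCodeCacheSize=512m gives the JIT a larger code cache. A minimal session sketch, with quoting assumed so the whole string lands in MAVEN_OPTS:

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn test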

On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman 
andrew.mussel...@gmail.com wrote:

 Just got the latest from Github and tried running `mvn test`; is this
 error common and do you have any advice on fixing it?

 Thanks!

 [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
 spark-core_2.10 ---
 [WARNING] Zinc server is not available at port 3030 - reverting to normal
 incremental compile
 [INFO] Using incremental compilation
 [INFO] compiler plugin:
 BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
 [INFO] Compiling 400 Scala sources and 34 Java sources to
 /home/akm/spark/core/target/scala-2.10/classes...
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
 imported `DataReadMethod' is permanently hidden by definition of object
 DataReadMethod in package executor
 [WARNING] import org.apache.spark.executor.DataReadMethod
 [WARNING]  ^
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
 match may not be exhaustive.
 It would fail on the following input: TASK_ERROR
 [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState =
 mesosState match {
 [WARNING]  ^
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89:
 method isDirectory in class FileSystem is deprecated: see corresponding
 Javadoc for more information.
 [WARNING] if (!fileSystem.isDirectory(new Path(logBaseDir))) {
 [WARNING] ^
 [ERROR] PermGen space - [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the
 -e switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions,
 please read the following articles:
 [ERROR] [Help 1]
 http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError




Re: Maven out of memory error

2015-01-16 Thread Andrew Musselman
Thanks Ted, got farther along but now have a failing test; is this a known
issue?

---
 T E S T S
---
Running org.apache.spark.JavaAPISuite
Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462
sec  FAILURE! - in org.apache.spark.JavaAPISuite
testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec
 ERROR!
org.apache.spark.SparkException: Job aborted due to stage failure: Master
removed our application: FAILED
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Running org.apache.spark.JavaJdbcRDDSuite
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec -
in org.apache.spark.JavaJdbcRDDSuite

Results :


Tests in error:
  JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage failure:
Maste...

On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:

 Can you try doing this before running mvn ?

  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

 What OS are you using ?

 Cheers

 On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman 
 andrew.mussel...@gmail.com wrote:

 Just got the latest from Github and tried running `mvn test`; is this
 error common and do you have any advice on fixing it?

 Thanks!

 [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
 spark-core_2.10 ---
 [WARNING] Zinc server is not available at port 3030 - reverting to normal
 incremental compile
 [INFO] Using incremental compilation
 [INFO] compiler plugin:
 BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
 [INFO] Compiling 400 Scala sources and 34 Java sources to
 /home/akm/spark/core/target/scala-2.10/classes...
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
 imported `DataReadMethod' is permanently hidden by definition of object
 DataReadMethod in package executor
 [WARNING] import org.apache.spark.executor.DataReadMethod
 [WARNING]  ^
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
 match may not be exhaustive.
 It would fail on the following input: TASK_ERROR
 [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState =
 mesosState match {
 [WARNING]  ^
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89:
 method isDirectory in class FileSystem is deprecated: see corresponding
 Javadoc for more information.
 [WARNING] if (!fileSystem.isDirectory(new Path(logBaseDir))) {
 [WARNING] ^
 [ERROR] PermGen space - [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the
 -e switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions,
 please read the following articles:
 [ERROR] [Help 1]
 http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError





Re: Maven out of memory error

2015-01-16 Thread Andrew Musselman
Thanks Sean

On Fri, Jan 16, 2015 at 12:06 PM, Sean Owen so...@cloudera.com wrote:

 Hey Andrew, you'll want to have a look at the Spark docs on building:
 http://spark.apache.org/docs/latest/building-spark.html

 It's the first thing covered there.

 The warnings are normal as you are probably building with newer Hadoop
 profiles and so old-Hadoop support code shows deprecation warnings on
 its use of old APIs.

 On Fri, Jan 16, 2015 at 8:03 PM, Andrew Musselman
 andrew.mussel...@gmail.com wrote:
  Just got the latest from Github and tried running `mvn test`; is this
 error
  common and do you have any advice on fixing it?
 
  Thanks!
 
  [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
  spark-core_2.10 ---
  [WARNING] Zinc server is not available at port 3030 - reverting to normal
  incremental compile
  [INFO] Using incremental compilation
  [INFO] compiler plugin:
  BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
  [INFO] Compiling 400 Scala sources and 34 Java sources to
  /home/akm/spark/core/target/scala-2.10/classes...
  [WARNING]
 
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
  imported `DataReadMethod' is permanently hidden by definition of object
  DataReadMethod in package executor
  [WARNING] import org.apache.spark.executor.DataReadMethod
  [WARNING]  ^
  [WARNING]
  /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
  match may not be exhaustive.
  It would fail on the following input: TASK_ERROR
  [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState =
  mesosState match {
  [WARNING]  ^
  [WARNING]
 
 /home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89:
  method isDirectory in class FileSystem is deprecated: see corresponding
  Javadoc for more information.
  [WARNING] if (!fileSystem.isDirectory(new Path(logBaseDir))) {
  [WARNING] ^
  [ERROR] PermGen space - [Help 1]
  [ERROR]
  [ERROR] To see the full stack trace of the errors, re-run Maven with the
 -e
  switch.
  [ERROR] Re-run Maven using the -X switch to enable full debug logging.
  [ERROR]
  [ERROR] For more information about the errors and possible solutions,
 please
  read the following articles:
  [ERROR] [Help 1]
  http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
 



Re: Maven out of memory error

2015-01-16 Thread Ted Yu
I got the same error:

testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 261.111 sec
  ERROR!
org.apache.spark.SparkException: Job aborted due to stage failure: Master
removed our application: FAILED
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)

Looking under core/target/surefire-reports/, I don't see test output.
Trying to figure out how test output can be generated.

Cheers

On Fri, Jan 16, 2015 at 12:26 PM, Andrew Musselman 
andrew.mussel...@gmail.com wrote:

 Thanks Ted, got farther along but now have a failing test; is this a known
 issue?

 ---
  T E S T S
 ---
 Running org.apache.spark.JavaAPISuite
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462
 sec  FAILURE! - in org.apache.spark.JavaAPISuite
 testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec
  ERROR!
 org.apache.spark.SparkException: Job aborted due to stage failure: Master
 removed our application: FAILED
 at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
 at
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at
 org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at scala.Option.foreach(Option.scala:236)
 at
 org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
 at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
 at akka.dispatch.Mailbox.run(Mailbox.scala:220)
 at
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 Running org.apache.spark.JavaJdbcRDDSuite
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec
 - in org.apache.spark.JavaJdbcRDDSuite

 Results :


 Tests in error:
   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage failure:
 Maste...

 On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:

 Can you try doing this before running mvn ?

  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

 What OS are you using ?

 Cheers

 On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman 
 andrew.mussel...@gmail.com wrote:

 Just got the latest from Github and tried running `mvn test`; is this
 error common and do you have any advice on fixing it?

 Thanks!

 [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
 spark-core_2.10 ---
 [WARNING] Zinc server is not available at port 3030 - reverting to
 normal incremental compile
 [INFO] Using incremental compilation
 [INFO] compiler plugin:
 BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
 [INFO] Compiling 400 Scala sources and 34 Java sources to
 /home/akm/spark/core/target/scala-2.10/classes...
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
 imported `DataReadMethod' is permanently hidden by definition of object
 DataReadMethod in package executor
 [WARNING] import org.apache.spark.executor.DataReadMethod
 [WARNING]  ^
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
 match may not be exhaustive.
 It would fail on the following input: TASK_ERROR
 [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState =
 mesosState match {
 [WARNING]  ^
 [WARNING]
 

Re: Maven out of memory error

2015-01-16 Thread Ted Yu
I tried the following but still didn't see test output :-(

diff --git a/pom.xml b/pom.xml
index f4466e5..dae2ae8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1131,6 +1131,7 @@

 <spark.driver.allowMultipleContexts>true</spark.driver.allowMultipleContexts>
 </systemProperties>
 <failIfNoTests>false</failIfNoTests>
+<redirectTestOutputToFile>true</redirectTestOutputToFile>
   </configuration>
 </plugin>
 <!-- Scalatest runs all Scala tests -->
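
An untested alternative to editing the pom: redirectTestOutputToFile can also be switched on per run through surefire's user property, after which each suite's stdout/stderr should land under core/target/surefire-reports/ as <TestClass>-output.txt.

mvn -Dtest=JavaAPISuite -Dmaven.test.redirectTestOutputToFile=true test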

On Fri, Jan 16, 2015 at 12:41 PM, Ted Yu yuzhih...@gmail.com wrote:

 I got the same error:

 testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 261.111
 sec   ERROR!
 org.apache.spark.SparkException: Job aborted due to stage failure: Master
 removed our application: FAILED
 at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)

 Looking under core/target/surefire-reports/, I don't see test output.
 Trying to figure out how test output can be generated.

 Cheers

 On Fri, Jan 16, 2015 at 12:26 PM, Andrew Musselman 
 andrew.mussel...@gmail.com wrote:

 Thanks Ted, got farther along but now have a failing test; is this a
 known issue?

 ---
  T E S T S
 ---
 Running org.apache.spark.JavaAPISuite
 Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462
 sec  FAILURE! - in org.apache.spark.JavaAPISuite
 testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5
 sec   ERROR!
 org.apache.spark.SparkException: Job aborted due to stage failure: Master
 removed our application: FAILED
 at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
 at
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at
 org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
 at scala.Option.foreach(Option.scala:236)
 at
 org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
 at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at
 org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
 at akka.dispatch.Mailbox.run(Mailbox.scala:220)
 at
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at
 scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 Running org.apache.spark.JavaJdbcRDDSuite
 Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec
 - in org.apache.spark.JavaJdbcRDDSuite

 Results :


 Tests in error:
   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage
 failure: Maste...

 On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:

 Can you try doing this before running mvn ?

  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

 What OS are you using ?

 Cheers

 On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman 
 andrew.mussel...@gmail.com wrote:

 Just got the latest from Github and tried running `mvn test`; is this
 error common and do you have any advice on fixing it?

 Thanks!

 [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
 spark-core_2.10 ---
 [WARNING] Zinc server is not available at port 3030 - reverting to
 normal incremental compile
 [INFO] Using incremental compilation
 [INFO] compiler plugin:
 BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
 [INFO] Compiling 400 Scala sources and 34 Java sources to
 /home/akm/spark/core/target/scala-2.10/classes...
 [WARNING]
 /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
 imported