I got the same error:

testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 261.111 sec  <<< ERROR!
org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)

Looking under core/target/surefire-reports/, I don't see any test output there.
I'm trying to figure out how the test output can be generated.
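
In case it helps, this is roughly what I'm trying next to get per-suite output. It's only a sketch: it assumes the default Surefire report layout, and the unit-tests.log path is my guess at where Spark's test log4j config sends logging.

# Re-run only the failing suite in the core module; -Dtest selects a single
# Surefire test class, and -DfailIfNoTests=false keeps modules with no
# matching tests from failing the build.
mvn -pl core -am -Dtest=JavaAPISuite -DfailIfNoTests=false test

# With the default Surefire configuration, per-class reports should show up
# under core/target/surefire-reports/ as a plain-text summary plus an XML
# file that captures the suite's stdout/stderr:
ls core/target/surefire-reports/
#   org.apache.spark.JavaAPISuite.txt
#   TEST-org.apache.spark.JavaAPISuite.xml

# If the test log4j.properties redirects logging the way I think it does,
# there may be more detail here as well (this path is an assumption):
less core/target/unit-tests.log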

Cheers

On Fri, Jan 16, 2015 at 12:26 PM, Andrew Musselman <
andrew.mussel...@gmail.com> wrote:

> Thanks Ted, got farther along but now have a failing test; is this a known
> issue?
>
> -------------------------------------------------------
>  T E S T S
> -------------------------------------------------------
> Running org.apache.spark.JavaAPISuite
> Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec <<< FAILURE! - in org.apache.spark.JavaAPISuite
> testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec  <<< ERROR!
> org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
>     at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
> Running org.apache.spark.JavaJdbcRDDSuite
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec - in org.apache.spark.JavaJdbcRDDSuite
>
> Results :
>
>
> Tests in error:
>   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage failure: Maste...
>
> On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> Can you try doing this before running mvn ?
>>
>> export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
>>
>> What OS are you using ?
>>
>> Cheers
>>
>> On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman <
>> andrew.mussel...@gmail.com> wrote:
>>
>>> Just got the latest from Github and tried running `mvn test`; is this
>>> error common and do you have any advice on fixing it?
>>>
>>> Thanks!
>>>
>>> [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ spark-core_2.10 ---
>>> [WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile
>>> [INFO] Using incremental compilation
>>> [INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
>>> [INFO] Compiling 400 Scala sources and 34 Java sources to /home/akm/spark/core/target/scala-2.10/classes...
>>> [WARNING] /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22: imported `DataReadMethod' is permanently hidden by definition of object DataReadMethod in package executor
>>> [WARNING] import org.apache.spark.executor.DataReadMethod
>>> [WARNING]                                  ^
>>> [WARNING] /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41: match may not be exhaustive.
>>> It would fail on the following input: TASK_ERROR
>>> [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState = mesosState match {
>>> [WARNING]                                                          ^
>>> [WARNING] /home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89: method isDirectory in class FileSystem is deprecated: see corresponding Javadoc for more information.
>>> [WARNING]     if (!fileSystem.isDirectory(new Path(logBaseDir))) {
>>> [WARNING]                     ^
>>> [ERROR] PermGen space -> [Help 1]
>>> [ERROR]
>>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>> [ERROR]
>>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
>>>
>>>
>>
>
