The test passed here:

https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/1215/consoleFull

It passed locally with the following command:

mvn -DHADOOP_PROFILE=hadoop-2.4 -Phadoop-2.4 -Pyarn -Phive test -Dtest=JavaAPISuite
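
For anyone hitting the PermGen failure quoted below, here is a sketch of
the full invocation, assuming a bash-like shell: the command above
combined with the MAVEN_OPTS suggested earlier in this thread.

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -DHADOOP_PROFILE=hadoop-2.4 -Phadoop-2.4 -Pyarn -Phive test -Dtest=JavaAPISuite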

FYI

On Sat, Jan 17, 2015 at 8:23 AM, Andrew Musselman <andrew.mussel...@gmail.com> wrote:

> Failing for me and another team member on the command line, for what it's worth.
>
> > On Jan 17, 2015, at 2:39 AM, Sean Owen <so...@cloudera.com> wrote:
> >
> > Hm, this test hangs for me in IntelliJ. It could be a real problem,
> > perhaps a combination of a) having only just enabled the Java tests,
> > and b) the recent updates to the complicated Guava shading situation.
> >
> > The way this fails usually suggests that some component failed to
> > start at all (because of, say, class incompatibility errors), so
> > everything else hangs and times out waiting for the dead component.
> > It's often hard to get diagnostics out of the embedded component
> > that died, though.
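> >
> > For reference, here is a minimal sketch (not the actual suite code;
> > the class name and master string are illustrative) of the kind of
> > Guava-Optional round trip the test exercises, where a shading
> > mismatch would surface once values cross the serialization boundary
> > between JVMs:
> >
> > import com.google.common.base.Optional;
> > import org.apache.spark.api.java.JavaRDD;
> > import org.apache.spark.api.java.JavaSparkContext;
> > import org.apache.spark.api.java.function.Function;
> > import java.util.Arrays;
> >
> > public class GuavaOptionalSketch {
> >   public static void main(String[] args) {
> >     // local-cluster[...] runs executors in separate JVMs, so the
> >     // Optional values below are actually serialized and shipped.
> >     JavaSparkContext sc =
> >         new JavaSparkContext("local-cluster[1,1,512]", "guava-optional-sketch");
> >     JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, null, 3));
> >     JavaRDD<Optional<Integer>> opts =
> >         rdd.map(new Function<Integer, Optional<Integer>>() {
> >           public Optional<Integer> call(Integer i) {
> >             return Optional.fromNullable(i);
> >           }
> >         });
> >     // Expect [Optional.of(1), Optional.absent(), Optional.of(3)];
> >     // a hang or class-incompatibility error here points at shading.
> >     System.out.println(opts.collect());
> >     sc.stop();
> >   }
> > }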
> >
> > That said, it seems to pass on the command line; for example, my
> > recent Jenkins job shows it passing:
> >
> > https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25682/consoleFull
> >
> > I'll try to uncover more later this weekend. Thoughts welcome though.
> >
> > On Fri, Jan 16, 2015 at 8:26 PM, Andrew Musselman <andrew.mussel...@gmail.com> wrote:
> >> Thanks Ted, got farther along but now have a failing test; is this a known issue?
> >>
> >> -------------------------------------------------------
> >> T E S T S
> >> -------------------------------------------------------
> >> Running org.apache.spark.JavaAPISuite
> >> Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462 sec <<< FAILURE! - in org.apache.spark.JavaAPISuite
> >> testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5 sec  <<< ERROR!
> >> org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
> >>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
> >>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
> >>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
> >>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
> >>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
> >>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
> >>     at scala.Option.foreach(Option.scala:236)
> >>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
> >>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
> >>     at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> >>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
> >>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> >>     at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> >>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
> >>     at akka.dispatch.Mailbox.run(Mailbox.scala:220)
> >>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
> >>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> >>
> >> Running org.apache.spark.JavaJdbcRDDSuite
> >> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec - in org.apache.spark.JavaJdbcRDDSuite
> >>
> >> Results :
> >>
> >> Tests in error:
> >>   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage failure: Maste...
> >>
> >>> On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >>>
> >>> Can you try doing this before running mvn?
> >>>
> >>> export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
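> >>>
> >>> (Assuming a Linux/OS X shell: export this in the same shell that runs
> >>> mvn so it is inherited. The MaxPermSize setting is the one that
> >>> addresses the "PermGen space" error below; PermGen only exists on
> >>> pre-Java-8 JVMs.)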
> >>>
> >>> What OS are you using?
> >>>
> >>> Cheers
> >>>
> >>> On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman <andrew.mussel...@gmail.com> wrote:
> >>>>
> >>>> Just got the latest from GitHub and tried running `mvn test`; is this
> >>>> error common and do you have any advice on fixing it?
> >>>>
> >>>> Thanks!
> >>>>
> >>>> [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ spark-core_2.10 ---
> >>>> [WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile
> >>>> [INFO] Using incremental compilation
> >>>> [INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
> >>>> [INFO] Compiling 400 Scala sources and 34 Java sources to /home/akm/spark/core/target/scala-2.10/classes...
> >>>> [WARNING] /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22: imported `DataReadMethod' is permanently hidden by definition of object DataReadMethod in package executor
> >>>> [WARNING] import org.apache.spark.executor.DataReadMethod
> >>>> [WARNING]                                  ^
> >>>> [WARNING] /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41: match may not be exhaustive.
> >>>> It would fail on the following input: TASK_ERROR
> >>>> [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState = mesosState match {
> >>>> [WARNING]                                                          ^
> >>>> [WARNING] /home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89: method isDirectory in class FileSystem is deprecated: see corresponding Javadoc for more information.
> >>>> [WARNING]     if (!fileSystem.isDirectory(new Path(logBaseDir))) {
> >>>> [WARNING]                     ^
> >>>> [ERROR] PermGen space -> [Help 1]
> >>>> [ERROR]
> >>>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> >>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> >>>> [ERROR]
> >>>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
> >>>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
> >>
>
