When I ran the test suite using the following command:

build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.0 package

I got a failure in Spark Project Docker Integration Tests:

16/03/02 17:36:46 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
*** RUN ABORTED ***
  com.spotify.docker.client.DockerException: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: No such file or directory
  at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1141)
  at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1082)
  at com.spotify.docker.client.DefaultDockerClient.ping(DefaultDockerClient.java:281)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:76)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:58)
  at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.run(DockerJDBCIntegrationSuite.scala:58)
  at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
  at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
  ...
  Cause: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: No such file or directory
  at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
  at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1080)
  at com.spotify.docker.client.DefaultDockerClient.ping(DefaultDockerClient.java:281)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:76)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:58)
  at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.run(DockerJDBCIntegrationSuite.scala:58)
  ...
  Cause: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: No such file or directory
  at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:481)
  at org.glassfish.jersey.apache.connector.ApacheConnector$1.run(ApacheConnector.java:491)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
  at jersey.repackaged.com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299)
  at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:50)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:37)
  at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:487)
  at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:177)
  ...
  Cause: java.io.IOException: No such file or directory
  at jnr.unixsocket.UnixSocketChannel.doConnect(UnixSocketChannel.java:94)

Has anyone seen the above?
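
The final Cause (jnr.unixsocket.UnixSocketChannel.doConnect) suggests the suite couldn't open the Docker daemon's unix socket at all. A minimal sanity check before re-running, assuming the default Linux socket path (your setup may differ):

```shell
# Check whether the Docker daemon's unix socket exists before running the
# docker integration tests. /var/run/docker.sock is the default on Linux;
# adjust the path if your daemon listens elsewhere.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  echo "docker socket present: $SOCK"
else
  echo "docker socket missing: $SOCK (is the Docker daemon running?)"
fi
```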

On Wed, Mar 2, 2016 at 2:45 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> Please vote on releasing the following candidate as Apache Spark version
> 1.6.1!
>
> The vote is open until Saturday, March 5, 2016 at 20:00 UTC and passes if
> a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.6.1
> [ ] -1 Do not release this package because ...
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is *v1.6.1-rc1
> (15de51c238a7340fa81cb0b80d029a05d97bfc5c)
> <https://github.com/apache/spark/tree/v1.6.1-rc1>*
>
> The release files, including signatures, digests, etc. can be found at:
> https://home.apache.org/~pwendell/spark-releases/spark-1.6.1-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
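>
> As a hedged sketch, one way to check a downloaded artifact against that key
> (the hadoop2.6 filename below is only an example, not a specific pointer;
> substitute whichever binary you fetch from the directory above):

```shell
# Sketch: verify a release artifact's detached signature with the
# release manager's key. The artifact name is an illustrative example.
BASE="https://home.apache.org/~pwendell/spark-releases/spark-1.6.1-rc1-bin"
ART="spark-1.6.1-bin-hadoop2.6.tgz"
# curl -O "$BASE/$ART"
# curl -O "$BASE/$ART.asc"
# curl https://people.apache.org/keys/committer/pwendell.asc | gpg --import
# gpg --verify "$ART.asc" "$ART"
echo "to verify: gpg --verify $ART.asc $ART"
```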
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1180/
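>
> One way to compile an existing application against the staged artifacts is
> to add that repository to your build. A sketch for a Maven pom.xml (the
> repository id here is arbitrary, not an official name):

```xml
<!-- Hypothetical pom.xml fragment: resolve Spark 1.6.1-rc1 from staging -->
<repositories>
  <repository>
    <id>spark-1.6.1-rc1-staging</id>
    <url>https://repository.apache.org/content/repositories/orgapachespark-1180/</url>
  </repository>
</repositories>
```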
>
> The test repository (versioned as v1.6.1-rc1) for this release can be
> found at:
> https://repository.apache.org/content/repositories/orgapachespark-1179/
>
> The documentation corresponding to this release can be found at:
> https://home.apache.org/~pwendell/spark-releases/spark-1.6.1-rc1-docs/
>
>
> =======================================
> == How can I help test this release? ==
> =======================================
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions from 1.6.0.
>
> ================================================
> == What justifies a -1 vote for this release? ==
> ================================================
> This is a maintenance release in the 1.6.x series.  Bugs already present
> in 1.6.0, missing features, or bugs related to new features will not
> necessarily block this release.
>
> ===============================================================
> == What should happen to JIRA tickets still targeting 1.6.0? ==
> ===============================================================
> 1. It is OK for documentation patches to target 1.6.1 and still go into
> branch-1.6, since documentation will be published separately from the
> release.
> 2. New features for non-alpha-modules should target 1.7+.
> 3. Non-blocker bug fixes should target 1.6.2 or 2.0.0, or drop the target
> version.
>
