[RESULT] [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-09 Thread Michael Armbrust
This vote passes with nine +1s (five binding) and one binding +0!  Thanks
to everyone who tested/voted.  I'll start work on publishing the release
today.

+1:
Mark Hamstra*
Moshe Eshel
Egor Pahomov
Reynold Xin*
Yin Huai*
Andrew Or*
Burak Yavuz
Kousuke Saruta
Michael Armbrust*

+0:
Sean Owen*


-1: (none)

*Binding
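For readers unfamiliar with how such a tally is decided: a minimal sketch of the count above. The pass rule (at least three binding +1s and more binding +1s than binding -1s) is the standard ASF release-vote convention, stated here as an assumption rather than quoted from this thread; names and binding status are taken from the tally.

```python
# Sketch of tallying the release vote above. Only binding votes decide the
# outcome; non-binding votes are advisory. The pass rule is an assumption
# (standard ASF convention), not quoted from this thread.

def tally(votes):
    """votes: list of (name, value, binding) with value in {1, 0, -1}."""
    binding_plus = sum(1 for _, v, b in votes if v == 1 and b)
    binding_minus = sum(1 for _, v, b in votes if v == -1 and b)
    total_plus = sum(1 for _, v, _ in votes if v == 1)
    passed = binding_plus >= 3 and binding_plus > binding_minus
    return total_plus, binding_plus, passed

votes = [
    ("Mark Hamstra", 1, True), ("Moshe Eshel", 1, False),
    ("Egor Pahomov", 1, False), ("Reynold Xin", 1, True),
    ("Yin Huai", 1, True), ("Andrew Or", 1, True),
    ("Burak Yavuz", 1, False), ("Kousuke Saruta", 1, False),
    ("Michael Armbrust", 1, True), ("Sean Owen", 0, True),
]
total, binding, passed = tally(votes)
# Matches the result line: nine +1s, five of them binding, vote passes.
```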

On Wed, Mar 9, 2016 at 3:29 PM, Michael Armbrust 
wrote:

> +1 - Ported all our internal jobs to run on 1.6.1 with no regressions.
>
> On Wed, Mar 9, 2016 at 7:04 AM, Kousuke Saruta 
> wrote:
>
>> +1 (non-binding)
>>
>>
>> On 2016/03/09 4:28, Burak Yavuz wrote:
>>
>> +1
>>
>> On Tue, Mar 8, 2016 at 10:59 AM, Andrew Or  wrote:
>>
>>> +1
>>>
>>> 2016-03-08 10:59 GMT-08:00 Yin Huai < 
>>> yh...@databricks.com>:
>>>
 +1

 On Mon, Mar 7, 2016 at 12:39 PM, Reynold Xin < 
 r...@databricks.com> wrote:

> +1 (binding)
>
>
> On Sun, Mar 6, 2016 at 12:08 PM, Egor Pahomov <
> pahomov.e...@gmail.com> wrote:
>
>> +1
>>
>> Spark ODBC server is fine, SQL is fine.
>>
>> 2016-03-03 12:09 GMT-08:00 Yin Yang < 
>> yy201...@gmail.com>:
>>
>>> Skipping docker tests, the rest are green:
>>>
>>> [INFO] Spark Project External Kafka ... SUCCESS [01:28 min]
>>> [INFO] Spark Project Examples ... SUCCESS [02:59 min]
>>> [INFO] Spark Project External Kafka Assembly ... SUCCESS [ 11.680 s]
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] BUILD SUCCESS
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] Total time: 02:16 h
>>> [INFO] Finished at: 2016-03-03T11:17:07-08:00
>>> [INFO] Final Memory: 152M/4062M
>>>
>>> On Thu, Mar 3, 2016 at 8:55 AM, Yin Yang < 
>>> yy201...@gmail.com> wrote:
>>>
 When I ran the test suite with the following command:

 build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.0 package

 I got a failure in Spark Project Docker Integration Tests:

 16/03/02 17:36:46 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
 *** RUN ABORTED ***
   com.spotify.docker.client.DockerException: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: No such file or directory
   at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1141)
   at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1082)
   at com.spotify.docker.client.DefaultDockerClient.ping(DefaultDockerClient.java:281)
   at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:76)
   at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
   at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:58)
   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
   at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.run(DockerJDBCIntegrationSuite.scala:58)
   at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
   at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
   ...
   Cause: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: No such file or directory
   at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
   at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
   at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
   at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1080)
   at com.spotify.docker.client.DefaultDockerClient.ping(DefaultDockerClient.java:281)
   at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:76)
   ...

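An editorial note on the trace above: the `java.io.IOException: No such file or directory` raised from `DefaultDockerClient.ping()` usually means the client could not open the Docker daemon's unix socket at all, i.e. no Docker daemon was reachable on the build machine. A minimal pre-flight sketch (the socket path `/var/run/docker.sock` is an assumption, the common default) can separate "no Docker daemon" from a genuine test failure before launching the suite:

```python
# Hedged sketch, not from the thread: check whether the Docker daemon's
# unix socket exists before running docker-backed integration tests.
# The path is an assumption (the common default location).
import os
import stat

def docker_socket_available(path="/var/run/docker.sock"):
    """Return True if `path` exists and is a unix domain socket."""
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return False
    return stat.S_ISSOCK(mode)

if not docker_socket_available():
    print("Docker socket not found; skip the docker integration tests, "
          "as Yin Yang did, rather than treating this as a build failure.")
```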
Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-09 Thread Michael Armbrust
+1 - Ported all our internal jobs to run on 1.6.1 with no regressions.


Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-09 Thread Kousuke Saruta

+1 (non-binding)


Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-08 Thread Burak Yavuz
+1


Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-08 Thread Andrew Or
+1


Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-08 Thread Yin Huai
+1


Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-07 Thread Reynold Xin
+1 (binding)



Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-06 Thread Egor Pahomov
+1

Spark ODBC server is fine, SQL is fine.
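The kind of check Egor describes can be reproduced as a quick JDBC smoke test against the Spark Thrift Server. A minimal sketch, assuming a server listening on the default port 10000 and `beeline` on the PATH (both assumptions; any running server and query would do):

```python
# Hedged sketch: build (and optionally run) a beeline smoke test against a
# Spark Thrift Server. Host, port, and query are assumptions (the defaults).
import subprocess

def beeline_cmd(host="localhost", port=10000, query="SELECT 1"):
    # beeline's -u (JDBC URL) and -e (execute a query) flags are standard.
    return ["beeline", "-u", f"jdbc:hive2://{host}:{port}", "-e", query]

if __name__ == "__main__":
    # Fails fast with a non-zero exit code if the server or SQL is broken.
    subprocess.run(beeline_cmd(), check=True)
```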


Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-03 Thread Sean Owen
FWIW I was running this with OpenJDK 1.8.0_66

On Thu, Mar 3, 2016 at 7:43 PM, Tim Preece  wrote:
> Regarding the failure in
> org.apache.spark.streaming.kafka.DirectKafkaStreamSuite","offset recovery
>
> We have been seeing the very same problem with the IBM JDK for quite a long
> time ( since at least July 2015 ).
> It is intermittent and we had dismissed it as a testcase problem.

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-03 Thread Tim Preece
Regarding the failure in 
org.apache.spark.streaming.kafka.DirectKafkaStreamSuite","offset recovery

We have been seeing the very same problem with the IBM JDK for quite a long
time (since at least July 2015).
It is intermittent and we had dismissed it as a test case problem.




--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-6-1-RC1-tp16532p16542.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-03 Thread Sean Owen
@Yin Yang see https://issues.apache.org/jira/browse/SPARK-12426 Docker
has to be running locally for these tests to pass. I think it's a
little surprising. However I still get a docker error, below.

For me, +0 I guess. The signatures and hashes are all fine, but as
usual I'm getting test failures. I suspect they may just be
environment related but would like others to confirm they're *not*
seeing the same.

The Docker bits are still giving me trouble even with Docker running.
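For anyone else hitting this: since the suite needs a reachable Docker daemon (SPARK-12426), one option is to guard the build with a probe and exclude the module when the daemon doesn't answer. A sketch only; the module path external/docker-integration-tests and the Maven `-pl !…` exclusion (Maven 3.2.1+) are assumptions to check against your checkout:

```shell
# Decide whether to exclude the Docker integration tests module.
# Takes the exit status of a daemon probe (0 = reachable) so the decision
# is explicit and testable. Module path is an assumption -- adjust it.
docker_skip_args() {
  if [ "$1" -eq 0 ]; then
    echo ""
  else
    echo '-pl !external/docker-integration-tests'
  fi
}

# Usage:
#   docker info >/dev/null 2>&1
#   build/mvn -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver \
#     $(docker_skip_args $?) package
```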


On Ubuntu 15.10, with -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver:

Core:

- spilling with compression *** FAILED ***
  java.lang.Exception: Test failed with compression using codec
org.apache.spark.io.LZ4CompressionCodec:

assertion failed: expected groupByKey to spill, but did not


Docker Integration Tests:

*** RUN ABORTED ***
  com.spotify.docker.client.DockerException:
java.util.concurrent.ExecutionException:
com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException:
java.io.IOException: Permission denied

  at 
com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1141)



Streaming Kafka

- offset recovery *** FAILED ***
  The code passed to eventually never returned normally. Attempted 188
times over 10.036713564 seconds. Last failure message:
strings.forall({
((elem: Any) => DirectKafkaStreamSuite.collectedData.contains(elem))
  }) was false. (DirectKafkaStreamSuite.scala:249)
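(The "code passed to eventually never returned normally" wording comes from ScalaTest's Eventually trait, which retries an assertion on an interval until a patience timeout and then reports the attempt count, as in the 188 attempts above. A dependency-free sketch of that retry loop, for anyone unfamiliar with where the message comes from; the names here are mine, not the ScalaTest API:)

```scala
object EventuallySketch {
  // Retry `cond` until it holds or `timeoutMs` elapses, returning the number
  // of attempts it took -- the same counter the failure message reports.
  def eventually(timeoutMs: Long, intervalMs: Long)(cond: => Boolean): Int = {
    val deadline = System.currentTimeMillis() + timeoutMs
    var attempts = 0
    var done = false
    while (!done) {
      attempts += 1
      if (cond) done = true
      else if (System.currentTimeMillis() >= deadline)
        throw new AssertionError(
          s"The code passed to eventually never returned normally. " +
            s"Attempted $attempts times over ${timeoutMs / 1000.0} seconds.")
      else Thread.sleep(intervalMs)
    }
    attempts
  }
}
```

(An intermittent failure of this shape usually means the condition is racing the data collection rather than being outright wrong, which would fit it passing on re-runs.)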

On Thu, Mar 3, 2016 at 4:55 PM, Yin Yang  wrote:
> When I ran test suite using the following command:
>
> build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6
> -Dhadoop.version=2.7.0 package
>
> I got failure in Spark Project Docker Integration Tests :
>
> 16/03/02 17:36:46 INFO RemoteActorRefProvider$RemotingTerminator: Remote
> daemon shut down; proceeding with flushing remote transports.
> [stack trace snipped]

Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-03 Thread Yin Yang
When I ran test suite using the following command:

build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6
-Dhadoop.version=2.7.0 package

I got failure in Spark Project Docker Integration Tests :

16/03/02 17:36:46 INFO RemoteActorRefProvider$RemotingTerminator: Remote
daemon shut down; proceeding with flushing remote transports.
*** RUN ABORTED ***
  com.spotify.docker.client.DockerException:
java.util.concurrent.ExecutionException:
com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException:
java.io.IOException: No such file or directory
  at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1141)
  at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1082)
  at com.spotify.docker.client.DefaultDockerClient.ping(DefaultDockerClient.java:281)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:76)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:58)
  at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.run(DockerJDBCIntegrationSuite.scala:58)
  at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
  at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
  ...
  Cause: java.util.concurrent.ExecutionException:
com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException:
java.io.IOException: No such file or directory
  at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
  at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1080)
  at com.spotify.docker.client.DefaultDockerClient.ping(DefaultDockerClient.java:281)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:76)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.beforeAll(DockerJDBCIntegrationSuite.scala:58)
  at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
  at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.run(DockerJDBCIntegrationSuite.scala:58)
  ...
  Cause: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException:
java.io.IOException: No such file or directory
  at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:481)
  at org.glassfish.jersey.apache.connector.ApacheConnector$1.run(ApacheConnector.java:491)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
  at jersey.repackaged.com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299)
  at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:50)
  at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:37)
  at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:487)
  at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:177)
  ...
  Cause: java.io.IOException: No such file or directory
  at jnr.unixsocket.UnixSocketChannel.doConnect(UnixSocketChannel.java:94)

Has anyone seen the above?

On Wed, Mar 2, 2016 at 2:45 PM, Michael Armbrust 
wrote:

> Please vote on releasing the following candidate as Apache Spark version
> 1.6.1!
>
> The vote is open until Saturday, March 5, 2016 at 20:00 UTC and passes if
> a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.6.1
> [ ] -1 Do not release this package because ...
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v1.6.1-rc1
> (15de51c238a7340fa81cb0b80d029a05d97bfc5c)
>
> The release files, including signatures, digests, etc. can be found at:
> 

Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-03 Thread Tim Preece
I just created the following pull request (against master, but I would like
it on 1.6.1) for the isolated classloader fix (SPARK-13648):

https://github.com/apache/spark/pull/11495



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-6-1-RC1-tp16532p16538.html

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-03 Thread Tim Preece
I have been testing 1.6.1 RC1 using the IBM Java SDK.

I noticed a problem (with the org.apache.spark.sql.hive.client.VersionsSuite
tests) after a recent Spark 1.6.1 change.
Pull request:
https://github.com/apache/spark/commit/f7898f9e2df131fa78200f6034508e74a78c2a44

The change introduced a dependency on
org.apache.hadoop.hive.cli.CliSessionState in
org.apache.spark.sql.hive.client.ClientWrapper.

In particular, the following test was added:
if (originalState.isInstanceOf[CliSessionState]) {

The problem is that the VersionsSuite test uses an isolated classloader in
order to test various versions of Hive. However, the classpath of the
isolated classloader does not contain CliSessionState.

The behaviour of isInstanceOf[CliSessionState] is JVM-vendor specific, in
particular whether this code causes the CliSessionState class to be loaded
(it does not for OpenJDK, but does for the IBM JDK). Hence this call can
throw a ClassNotFoundException.

I will have a pull request to fix the test case very shortly.

I opened JIRA SPARK-13648 (I wasn't too sure if I should have reopened one
of SPARK-11624 or SPARK-11972 instead?)
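To make the hazard concrete: merely naming a type in isInstanceOf[T] can force T to be loaded on some JVMs, so one defensive pattern is to compare runtime class names as strings, which never references the type at all. A sketch with stand-in classes; SessionState and CliSessionState here are placeholders, not the Hive classes, and this illustrates one possible workaround, not necessarily the exact fix in the pull request:

```scala
// Stand-ins for the Hive types. In the real suite, CliSessionState may be
// absent from the isolated classloader, which is exactly the hazard.
class SessionState
class CliSessionState extends SessionState

object CliCheckSketch {
  // Comparing the runtime class name never mentions CliSessionState as a
  // type, so no JVM is forced to load that class just to evaluate the check.
  def isCliState(state: SessionState): Boolean =
    state.getClass.getName.endsWith("CliSessionState")
}
```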

 





--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-6-1-RC1-tp16532p16537.html

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.6.1 (RC1)

2016-03-02 Thread Mark Hamstra
+1

On Wed, Mar 2, 2016 at 2:45 PM, Michael Armbrust 
wrote:

> Please vote on releasing the following candidate as Apache Spark version
> 1.6.1!
>
> The vote is open until Saturday, March 5, 2016 at 20:00 UTC and passes if
> a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.6.1
> [ ] -1 Do not release this package because ...
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v1.6.1-rc1
> (15de51c238a7340fa81cb0b80d029a05d97bfc5c)
>
> The release files, including signatures, digests, etc. can be found at:
> https://home.apache.org/~pwendell/spark-releases/spark-1.6.1-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1180/
>
> The test repository (versioned as v1.6.1-rc1) for this release can be
> found at:
> https://repository.apache.org/content/repositories/orgapachespark-1179/
>
> The documentation corresponding to this release can be found at:
> https://home.apache.org/~pwendell/spark-releases/spark-1.6.1-rc1-docs/
>
>
> ===
> == How can I help test this release? ==
> ===
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions from 1.6.0.
>
> 
> == What justifies a -1 vote for this release? ==
> 
> This is a maintenance release in the 1.6.x series.  Bugs already present
> in 1.6.0, missing features, or bugs related to new features will not
> necessarily block this release.
>
> ===
> == What should happen to JIRA tickets still targeting 1.6.0? ==
> ===
> 1. It is OK for documentation patches to target 1.6.1 and still go into
> branch-1.6, since documentations will be published separately from the
> release.
> 2. New features for non-alpha-modules should target 1.7+.
> 3. Non-blocker bug fixes should target 1.6.2 or 2.0.0, or drop the target
> version.
>