[jira] [Commented] (HIVE-13830) Hive on spark driver crash with Spark 1.6.1

2017-01-16 Thread KaiXu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825512#comment-15825512
 ] 

KaiXu commented on HIVE-13830:
--

How did you build your Spark? Did you add the -Phive profile?
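
For reference, the Hive on Spark setup guide calls for a Spark build that does not bundle the Hive jars, i.e. without the -Phive profile. A minimal sketch of such a build for Spark 1.6.x; the distribution name, profile list and Hadoop version flag are illustrative and depend on the cluster:

{noformat}
# Sketch: build a Spark 1.6.x distribution for Hive on Spark, without -Phive,
# so the assembly does not carry its own (older) copies of the Hive classes.
# Profiles are illustrative; match the Hadoop version to your cluster.
./make-distribution.sh --name "hadoop2-without-hive" --tgz \
  "-Pyarn,hadoop-provided,hadoop-2.6,parquet-provided"
{noformat}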

> Hive on spark driver crash with Spark 1.6.1
> ---
>
> Key: HIVE-13830
> URL: https://issues.apache.org/jira/browse/HIVE-13830
> Project: Hive
>  Issue Type: Bug
>  Components: Spark, spark-branch
>Affects Versions: 2.0.0, 2.1.0
> Environment: Hadoop 2.7.2, Hive 2.1.0, Spark 1.6.1, Kerberos
>Reporter: Alexandre Linte
>
> With Hive 1.2.1 I was able to use Hive on Spark successfully with the use of the 
> spark-assembly "spark-assembly-1.4.1-hadoop2.7.1.jar". 
> Today with Hive 2.0.0, I'm unable to use Hive on Spark, whether it be with the 
> spark-assembly "spark-assembly-1.4.1-hadoop2.7.1.jar" or the spark-assembly 
> "spark-assembly-1.6.1-hadoop2.7.2.jar".
> My configuration is the following:
>   * spark-defaults.conf available in HIVE_DIR/conf
>   * spark assembly available in HIVE_DIR/lib
> I gathered several logs below:
> - HQL commands
> {noformat}
> $ hive -v --database shfs3453
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/application/Hive/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/application/Hive/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/application/Spark/spark-1.6.1/assembly/target/scala-2.10/spark-assembly-1.6.1-hadoop2.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/application/Hadoop/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Logging initialized using configuration in 
> file:/opt/application/Hive/apache-hive-2.0.0-bin/conf/hive-log4j2.properties
> use shfs3453
> OK
> Time taken: 1.425 seconds
> Hive-on-MR is deprecated in Hive 2 and may not be available in the future 
> versions. Consider using a different execution engine (i.e. tez, spark) or using 
> Hive 1.X releases.
> hive (shfs3453)> set hive.execution.engine=spark;
> set hive.execution.engine=spark
> hive (shfs3453)> set spark.master=yarn-client;
> set spark.master=yarn-client
> hive (shfs3453)> CREATE TABLE chicagoCrimes2 (ID BIGINT, CaseNumber STRING, 
> Day STRING, Block STRING, IUCR INT, PrimaryType STRING, Description STRING, 
> LocationDescription STRING, Arrest BOOLEAN, Domestic BOOLEAN, Beat INT, 
> District INT, Ward INT, CommunityArea INT, FBICode INT, XCoordinate BIGINT, 
> YCoordinate BIGINT, Year INT, UpdatedOn STRING, Latitude FLOAT, Longitude 
> FLOAT, Location STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED 
> AS TEXTFILE;
> CREATE TABLE chicagoCrimes2 (ID BIGINT, CaseNumber STRING, Day STRING, Block 
> STRING, IUCR INT, PrimaryType STRING, Description STRING, LocationDescription 
> STRING, Arrest BOOLEAN, Domestic BOOLEAN, Beat INT, District INT, Ward INT, 
> CommunityArea INT, FBICode INT, XCoordinate BIGINT, YCoordinate BIGINT, Year 
> INT, UpdatedOn STRING, Latitude FLOAT, Longitude FLOAT, Location STRING) ROW 
> FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE
> OK
> Time taken: 0.408 seconds
> hive (shfs3453)> INSERT OVERWRITE TABLE chicagocrimes2 SELECT * FROM 
> chicagocrimes WHERE Description = 'FIRST DEGREE MURDER';
> INSERT OVERWRITE TABLE chicagocrimes2 SELECT * FROM chicagocrimes WHERE 
> Description = 'FIRST DEGREE MURDER'
> Query ID = shfs3453_20160524092714_41c89aec-2c6f-49e9-98c7-d227ca144f73
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Starting Spark Job = 79484279-8e75-4b13-8e71-7de463f4d51e
> Status: SENT
> Failed to execute spark task, with exception 'java.lang.IllegalStateException(RPC 
> channel is closed.)'
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask
> {noformat}
> - Client logs
> {noformat}
> May 24 09:32:19 hive-cli ERROR - org.apache.hive.spark.client.rpc.RpcDispatcher - 
> Received error message:io.netty.handler.codec.DecoderException: 
> java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job
> at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:358)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
> at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
> {noformat}
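
Two notes on the report above. First, the configuration it lists boils down to a spark-defaults.conf that Hive picks up from its conf directory; a minimal sketch for yarn-client mode, with illustrative values (the memory figures mirror the ones echoed in the later client logs):

{noformat}
# Sketch: HIVE_DIR/conf/spark-defaults.conf for Hive on Spark (values illustrative)
spark.master            yarn-client
spark.executor.memory   4g
spark.driver.memory     2g
spark.serializer        org.apache.spark.serializer.KryoSerializer
{noformat}

Second, the {{java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job}} in the client log means the driver side never loaded Hive's spark-client classes. A quick check of which local jars actually provide that class (paths are illustrative):

{noformat}
# Sketch: find the jar(s) on the Hive classpath that contain the missing class
for j in $HIVE_HOME/lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'org/apache/hive/spark/client/Job' && echo "$j"
done
{noformat}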

[jira] [Commented] (HIVE-13830) Hive on spark driver crash with Spark 1.6.1

2016-11-02 Thread Alexandre Linte (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628158#comment-15628158
 ] 

Alexandre Linte commented on HIVE-13830:


I'm still using Spark 1.6.1, Hive 2.1.0 and Hadoop 2.7.2, and the error is still 
present.

[jira] [Commented] (HIVE-13830) Hive on spark driver crash with Spark 1.6.1

2016-11-01 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625642#comment-15625642
 ] 

Aihua Xu commented on HIVE-13830:
-

Did you get HoS to work? {{java.lang.NoSuchFieldError: 
SPARK_RPC_SERVER_ADDRESS}} looks like mismatched jars in the Hive installation.
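
A common source of that kind of skew is a Spark assembly built with -Phive: it can bundle an older copy of HiveConf that predates the SPARK_RPC_SERVER_ADDRESS field and shadows the Hive 2.x jars. A sketch for spotting it (jar paths are illustrative):

{noformat}
# Sketch: does the Spark assembly carry its own copy of HiveConf?
# If so, it can shadow Hive 2.x's HiveConf and trigger the NoSuchFieldError.
unzip -l $SPARK_HOME/lib/spark-assembly-*.jar 2>/dev/null \
  | grep 'org/apache/hadoop/hive/conf/HiveConf'
{noformat}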

[jira] [Commented] (HIVE-13830) Hive on spark driver crash with Spark 1.6.1

2016-10-24 Thread KaiXu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15601515#comment-15601515
 ] 

KaiXu commented on HIVE-13830:
--

I used the Spark 1.6.2 release version, Spark 1.6.4, and Hive 1.2.1; all show the 
same error.

[jira] [Commented] (HIVE-13830) Hive on spark driver crash with Spark 1.6.1

2016-10-05 Thread Alexandre Linte (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15547919#comment-15547919
 ] 

Alexandre Linte commented on HIVE-13830:


Nothing new here?

[jira] [Commented] (HIVE-13830) Hive on spark driver crash with Spark 1.6.1

2016-06-28 Thread Alexandre Linte (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352653#comment-15352653
 ] 

Alexandre Linte commented on HIVE-13830:


Hi,
I upgraded to Hive 2.1.0 and now get the following errors:
- HQL commands
{noformat}
hive (shfs3453)> SELECT COUNT(year) FROM chicagocrimes GROUP BY year;
SELECT COUNT(year) FROM chicagocrimes GROUP BY year
FAILED: SemanticException Failed to get a spark session: 
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
{noformat}
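
A rerun with console logging turned up usually surfaces the underlying spark-client failure; a sketch using standard Hive CLI options:

{noformat}
# Sketch: rerun the failing query with verbose console logging
hive --hiveconf hive.root.logger=DEBUG,console --database shfs3453 \
  -e 'SELECT COUNT(year) FROM chicagocrimes GROUP BY year'
{noformat}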
- Client logs
{noformat}
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.conf.HiveConf - Using the 
default value passed in for log id: c10f51a3-a72d-40c7-9ff6-26e5fb3732da
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.session.SessionState - 
Updating thread name to c10f51a3-a72d-40c7-9ff6-26e5fb3732da main
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.Driver - Compiling 
command(queryId=shfs3453_20160628110208_f0b51237-d391-472d-abe8-f2dd2457a9ed): 
SELECT COUNT(year) FROM chicagocrimes GROUP BY year
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Starting Semantic Analysis
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Completed phase 1 of Semantic Analysis
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Get metadata for source tables
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Get metadata for subqueries
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Get metadata for destination tables
Jun 28 11:02:08 hive-cli INFO - hive.ql.Context - New scratch dir is 
hdfs://sandbox/tmp/hive/shfs3453/c10f51a3-a72d-40c7-9ff6-26e5fb3732da/hive_2016-06-28_11-02-08_399_7245611464735028300-1
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Completed getting MetaData in Semantic Analysis
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Get metadata for source tables
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Get metadata for subqueries
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
Get metadata for destination tables
Jun 28 11:02:08 hive-cli INFO - hive.ql.Context - New scratch dir is 
hdfs://sandbox/tmp/hive/shfs3453/c10f51a3-a72d-40c7-9ff6-26e5fb3732da/hive_2016-06-28_11-02-08_399_7245611464735028300-1
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.common.FileUtils - Creating 
directory if it doesn't exist: 
hdfs://sandbox/tmp/hive/shfs3453/c10f51a3-a72d-40c7-9ff6-26e5fb3732da/hive_2016-06-28_11-02-08_399_7245611464735028300-1/-mr-10001/.hive-staging_hive_2016-06-28_11-02-08_399_7245611464735028300-1
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.parse.CalcitePlanner - 
CBO Succeeded; optimized logical plan.
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for FS(6)
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for SEL(5)
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for GBY(4)
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for RS(3)
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for GBY(2)
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for SEL(1)
Jun 28 11:02:08 hive-cli INFO - org.apache.hadoop.hive.ql.ppd.OpProcFactory - 
Processing for TS(0)
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory - RS 3 oldColExprMap: 
{KEY._col0=Column[_col0], VALUE._col0=Column[_col1]}
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory - RS 3 newColExprMap: 
{KEY._col0=Column[_col0], VALUE._col0=Column[_col1]}
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory - loading spark 
properties from:spark-defaults.conf
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory - load spark property 
from spark-defaults.conf (spark.default.parallelism -> 10).
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory - load spark property 
from spark-defaults.conf (spark.kryoserializer.buffer -> 100m).
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory - load spark property 
from spark-defaults.conf (spark.executor.memory -> 4g).
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory - load spark property 
from spark-defaults.conf (spark.driver.memory -> 2g).
Jun 28 11:02:08 hive-cli INFO - 
org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory - load spark property 
from spark-defaults.conf (spark.kryo.classesToRegister ->