[jira] [Issue Comment Deleted] (HIVE-9970) Hive on spark
[ https://issues.apache.org/jira/browse/HIVE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang updated HIVE-9970:
------------------------------
    Comment: was deleted

(was: My working environment is CentOS 6.4, Hadoop 2.6.0, Hive 1.1.0, Spark 1.3.0 (built with the sql and yarn profiles). Could you brief me about the error? Is it because I am using newer versions, or a mistake I made while building Spark?)


> Hive on spark
> -------------
>
>                 Key: HIVE-9970
>                 URL: https://issues.apache.org/jira/browse/HIVE-9970
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Amithsha
>            Assignee: Tarush Grover
>
> Hi all,
> Recently I have configured Spark 1.2.0, and my environment is Hadoop 2.6.0 and Hive 1.1.0. While executing an INSERT INTO query with Hive on Spark, I am getting the following error:
>
> Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
>
> I have added the spark-assembly jar to the hive lib directory, and also in the hive console using the add jar command, followed by these steps:
>
> set spark.home=/opt/spark-1.2.1/;
> add jar /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar;
> set hive.execution.engine=spark;
> set spark.master=spark://xxx:7077;
> set spark.eventLog.enabled=true;
> set spark.executor.memory=512m;
> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
>
> Can anyone suggest a fix?
>
> Thanks & Regards
> Amithsha


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
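The session-level `set` commands in the report can also be made permanent so every Hive session picks them up. A minimal hive-site.xml sketch (the values shown are the reporter's own; the `spark://xxx:7077` master URL is a placeholder from the report, not a real host):

```xml
<!-- hive-site.xml: sketch of the same Hive-on-Spark settings applied permanently.
     Values mirror the reporter's session commands; adjust per cluster. -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>spark://xxx:7077</value>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>512m</value>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
</property>
```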
[jira] [Issue Comment Deleted] (HIVE-9970) Hive on spark
[ https://issues.apache.org/jira/browse/HIVE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang updated HIVE-9970:
------------------------------
    Comment: was deleted

(was: My hive version is 1.2.0, and I built Spark 1.3.1 on Hadoop with "./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.5". Unfortunately, Hive on Spark still fails: "Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask".

java.lang.NoSuchFieldError: SPARK_RPC_CLIENT_CONNECT_TIMEOUT
        at org.apache.hive.spark.client.rpc.RpcConfiguration.(RpcConfiguration.java:46)
        at org.apache.hive.spark.client.RemoteDriver.(RemoteDriver.java:146)
        at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:480)

So, have these questions been answered?)


> Hive on spark
> -------------
>
>                 Key: HIVE-9970
>                 URL: https://issues.apache.org/jira/browse/HIVE-9970
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Amithsha
>            Assignee: Tarush Grover
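A `NoSuchFieldError` like the one in the stack trace typically means the JVM linked against a different version of a class than the one it was compiled with, i.e. two copies of the same class sit on the classpath and the older one wins (here, Hive client classes bundled inside a spark-assembly jar shadowing the newer Hive 1.2.0 jars). A hedged diagnostic sketch of the kind of duplicate-class scan that can confirm this; the jar paths you would pass in are your own Spark and Hive lib directories, not values from this report:

```python
import zipfile
from collections import defaultdict

def find_duplicate_classes(jar_paths):
    """Map each .class entry to the jars that contain it, and return the
    entries that appear in more than one jar: these are the candidates for
    version conflicts such as the NoSuchFieldError above."""
    owners = defaultdict(list)
    for path in jar_paths:
        with zipfile.ZipFile(path) as jar:
            for name in jar.namelist():
                if name.endswith(".class"):
                    owners[name].append(path)
    return {cls: jars for cls, jars in owners.items() if len(jars) > 1}
```

Running it over, say, every jar under the spark-assembly target directory and the Hive lib directory would show whether `org/apache/hive/spark/client/rpc/RpcConfiguration.class` is shipped twice; the "hadoop2-without-hive" build flag exists precisely to keep Hive's classes out of the Spark assembly.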