[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-28 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563156#comment-14563156
 ] 

Apache Spark commented on SPARK-7853:
-

User 'yhuai' has created a pull request for this issue:
https://github.com/apache/spark/pull/6459

 ClassNotFoundException for SparkSQL
 ---

 Key: SPARK-7853
 URL: https://issues.apache.org/jira/browse/SPARK-7853
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.4.0
Reporter: Cheng Hao
Assignee: Yin Huai
Priority: Blocker

 Reproduce steps:
 {code}
 bin/spark-sql --jars 
 ./sql/hive/src/test/resources/hive-hcatalog-core-0.13.1.jar
 CREATE TABLE t1(a string, b string) ROW FORMAT SERDE 
 'org.apache.hive.hcatalog.data.JsonSerDe';
 {code}
 Throws Exception like:
 {noformat}
 15/05/26 00:16:33 ERROR SparkSQLDriver: Failed in [CREATE TABLE t1(a string, 
 b string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe']
 org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution 
 Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot 
 validate serde: org.apache.hive.hcatalog.data.JsonSerDe
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:333)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)
   at 
 org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:457)
   at 
 org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:922)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:922)
  at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:147)
  at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
   at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:727)
   at 
 org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
   at 
 org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
   at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 {noformat}
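The trace above fails inside Hive's DDLTask while validating the SerDe, which points at classloader visibility rather than a missing jar: the jar passed via --jars is on the application classpath, but the Hive client resolves classes through its own loader. A minimal, self-contained JVM sketch (an illustration only, not Spark's actual code) of how a loader with no route to the application classpath produces this kind of failure:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // A loader with no URLs of its own and no parent beyond the
        // bootstrap loader: it cannot see application classes at all.
        ClassLoader isolated = new URLClassLoader(new URL[0], null);
        try {
            // IsolationDemo itself is on the application classpath, but the
            // isolated loader has no route to it.
            Class.forName("IsolationDemo", true, isolated);
            System.out.println("loaded");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException: " + e.getMessage());
        }
    }
}
```

The analogous situation in this report is the Hive metastore client failing to resolve a SerDe class that the driver's own loader can see.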



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-28 Thread Yin Huai (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563132#comment-14563132
 ] 

Yin Huai commented on SPARK-7853:
-

Seems my change in https://github.com/apache/spark/pull/6435 makes HiveContext 
fail to be created in the spark shell. Will submit a PR soon.



[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-27 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560772#comment-14560772
 ] 

Apache Spark commented on SPARK-7853:
-

User 'liancheng' has created a pull request for this issue:
https://github.com/apache/spark/pull/6435



[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-26 Thread Cheng Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560429#comment-14560429
 ] 

Cheng Lian commented on SPARK-7853:
---

OT: [~chenghao] Just edited the JIRA description. When pasting an exception 
stack trace, {{noformat}} is preferable to {{panel}}, since it uses a 
monospace font :)



[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-25 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558368#comment-14558368
 ] 

Apache Spark commented on SPARK-7853:
-

User 'chenghao-intel' has created a pull request for this issue:
https://github.com/apache/spark/pull/6396



[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-25 Thread Cheng Hao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558321#comment-14558321
 ] 

Cheng Hao commented on SPARK-7853:
--

The ClassNotFoundException is actually what I got after investigation: the 
class `org.apache.hadoop.hive.serde2.TestSerDe` cannot be found.
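A generic JVM diagnostic for this kind of report (a sketch, nothing Spark-specific is assumed) is to ask the current classloader for the class directly; the class name below is the one from the investigation:

```java
public class SerdeCheck {
    public static void main(String[] args) {
        // Default is the class from this report; pass another as argv[0].
        String cls = args.length > 0 ? args[0]
                : "org.apache.hadoop.hive.serde2.TestSerDe";
        try {
            Class<?> c = Class.forName(cls);
            System.out.println("found via " + c.getClassLoader());
        } catch (ClassNotFoundException e) {
            System.out.println("not found: " + cls);
        }
    }
}
```

If this reports "found" in the driver while the DDL still fails, the class is invisible only to the Hive client's loader, which matches the investigation here.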

 ClassNotFoundException for SparkSQL
 ---

 Key: SPARK-7853
 URL: https://issues.apache.org/jira/browse/SPARK-7853
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.4.0
Reporter: Cheng Hao
Priority: Blocker

 Reproduce steps:
 {code}
 bin/spark-sql --jars ./sql/data/files/TestSerDe.jar
 spark-sql CREATE TABLE alter1(a INT, b INT) ROW FORMAT SERDE 
 'org.apache.hadoop.hive.serde2.TestSerDe';
 {code}
 Throws Exception like:
 {panel}
 15/05/25 01:33:35 ERROR thriftserver.SparkSQLDriver: Failed in [CREATE TABLE 
 alter1(a INT, b INT) ROW FORMAT SERDE 
 'org.apache.hadoop.hive.serde2.TestSerDe']
 org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution 
 Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot 
 validate serde: org.apache.hadoop.hive.serde2.TestSerDe
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:333)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)
   at 
 org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:457)
   at 
 org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:922)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:922)
  at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:147)
  at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
   at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:727)
   at 
 org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
   at 
 org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
   at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 {panel}






[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558316#comment-14558316
 ] 

Ted Yu commented on SPARK-7853:
---

Subject says ClassNotFoundException.
Which class couldn't be found?



[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-25 Thread Cheng Hao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558332#comment-14558332
 ] 

Cheng Hao commented on SPARK-7853:
--

And it seems the bug was introduced by the `IsolatedClientLoader` of Spark 
SQL; I am working on a workaround fix.
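For readers unfamiliar with the component named in this comment: IsolatedClientLoader loads the Hive client behind a child-first classloader that shares only selected classes with its parent. A heavily simplified sketch (the sharing filter below is hypothetical, far stricter than Spark's real policy) of how such a loader can make classes from user jars invisible:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ChildFirstLoader extends URLClassLoader {
    public ChildFirstLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // Share only JDK classes with the parent; an overly strict filter
        // like this is exactly how user jars become invisible.
        if (name.startsWith("java.")) {
            return super.loadClass(name, resolve);
        }
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                c = findClass(name); // searches own URLs only; throws if absent
            }
            if (resolve) resolveClass(c);
            return c;
        }
    }

    public static void main(String[] args) {
        // No URLs of its own, so everything outside java.* is unloadable,
        // even classes its parent could have supplied.
        ChildFirstLoader loader = new ChildFirstLoader(
                new URL[0], ChildFirstLoader.class.getClassLoader());
        try {
            loader.loadClass("ChildFirstLoader");
            System.out.println("loaded");
        } catch (ClassNotFoundException e) {
            System.out.println("invisible: " + e.getMessage());
        }
    }
}
```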

 ClassNotFoundException for SparkSQL
 ---

 Key: SPARK-7853
 URL: https://issues.apache.org/jira/browse/SPARK-7853
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.4.0
Reporter: Cheng Hao
Priority: Blocker

 Reproduce steps:
 {code}
 bin/spark-sql --jars ./sql/data/files/TestSerDe.jar
 spark-sql CREATE TABLE alter1(a INT, b INT) ROW FORMAT SERDE 
 'org.apache.hadoop.hive.serde2.TestSerDe';
 {code}
 Throws Exception like:
 {panel}
 15/05/25 01:33:35 ERROR thriftserver.SparkSQLDriver: Failed in [CREATE TABLE 
 alter1(a INT, b INT) ROW FORMAT SERDE 
 'org.apache.hadoop.hive.serde2.TestSerDe']
 org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution 
 Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot 
 validate serde: org.apache.hadoop.hive.serde2.TestSerDe
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:333)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)
   at 
 org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:457)
   at 
 org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:922)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:922)
  at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:147)
   at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
   at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:727)
   at 
 org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
   at 
 org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
   at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 {panel}





[jira] [Commented] (SPARK-7853) ClassNotFoundException for SparkSQL

2015-05-25 Thread Cheng Hao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558363#comment-14558363
 ] 

Cheng Hao commented on SPARK-7853:
--

Updated the description; it seems `TestSerDe` is not a good example, as it has been 
removed since Hive 0.13.

 ClassNotFoundException for SparkSQL
 ---

 Key: SPARK-7853
 URL: https://issues.apache.org/jira/browse/SPARK-7853
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.4.0
Reporter: Cheng Hao
Priority: Blocker

 Reproduce steps:
 {code}
 bin/spark-sql --jars 
 ./sql/hive/src/test/resources/hive-hcatalog-core-0.13.1.jar
 CREATE TABLE t1(a string, b string) ROW FORMAT SERDE 
 'org.apache.hive.hcatalog.data.JsonSerDe';
 {code}
 Throws Exception like:
 {panel}
 15/05/26 00:16:33 ERROR SparkSQLDriver: Failed in [CREATE TABLE t1(a string, 
 b string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe']
 org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution 
 Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot 
 validate serde: org.apache.hive.hcatalog.data.JsonSerDe
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:333)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)
   at 
 org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)
   at 
 org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:457)
   at 
 org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
   at 
 org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
   at 
 org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:922)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:922)
  at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:147)
   at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
   at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:727)
   at 
 org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
   at 
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
   at 
 org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
   at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 {panel}


