[jira] [Assigned] (SPARK-15417) Failed to enable HiveSupport in PySpark

2016-05-19 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-15417:


Assignee: Apache Spark  (was: Andrew Or)

> Failed to enable HiveSupport in PySpark
> ---
>
> Key: SPARK-15417
> URL: https://issues.apache.org/jira/browse/SPARK-15417
> Project: Spark
> Issue Type: Bug
> Components: PySpark, SQL
> Affects Versions: 2.0.0
> Reporter: Xiao Li
> Assignee: Apache Spark
> Priority: Blocker
>
> Unable to use the Hive metastore from the PySpark shell. Tried both HiveContext and SparkSession; both fail, and the session always falls back to the in-memory catalog.
>
> Method 1: Using SparkSession
> {noformat}
> >>> from pyspark.sql import SparkSession
> >>> spark = SparkSession.builder.enableHiveSupport().getOrCreate()
> >>> spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
> DataFrame[]
> >>> spark.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Users/xiaoli/IdeaProjects/sparkDelivery/python/pyspark/sql/session.py", line 494, in sql
>     return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
>   File "/Users/xiaoli/IdeaProjects/sparkDelivery/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
>   File "/Users/xiaoli/IdeaProjects/sparkDelivery/python/pyspark/sql/utils.py", line 57, in deco
>     return f(*a, **kw)
>   File "/Users/xiaoli/IdeaProjects/sparkDelivery/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling o21.sql.
> : java.lang.UnsupportedOperationException: loadTable is not implemented
> at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.loadTable(InMemoryCatalog.scala:297)
> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadTable(SessionCatalog.scala:280)
> at org.apache.spark.sql.execution.command.LoadData.run(tables.scala:263)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:57)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:55)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:69)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:85)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:85)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:187)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:168)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:541)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
> at py4j.Gateway.invoke(Gateway.java:280)
> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
> at py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.GatewayConnection.run(GatewayConnection.java:211)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
>
> Method 2: Using HiveContext:
> {noformat}
> >>> from pyspark.sql import HiveContext
> >>> sqlContext = HiveContext(sc)
> >>> sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
> DataFrame[]
> >>> sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Users/xiaoli/IdeaProjects/sparkDelivery/python/pyspark/sql/context.py", line 346, in sql
>     return self.sparkSession.sql(sqlQuery)
>   File "/Users/xiaoli/IdeaProjects/sparkDelivery/python/pyspark/sql/session.py", line 494, in sql
>     return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
>   File 
> 
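
A quick way to see whether enableHiveSupport() actually took effect is to check which catalog implementation the session ended up with before issuing any Hive-only statement such as LOAD DATA. The snippet below is a minimal sketch, assuming a Spark 2.x PySpark shell; spark.sql.catalogImplementation is an internal setting, and the "in-memory" fallback used when the key is absent is an assumption rather than documented behaviour.

{noformat}
# Sketch: confirm which catalog implementation the SparkSession is using.
# Assumes a PySpark 2.x shell; the config key below is internal to Spark and
# the "in-memory" default is an assumption for the case where it was never set.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read the setting from the underlying SparkConf rather than spark.conf, since
# static configs may not be visible through the runtime conf in all versions.
conf = spark.sparkContext.getConf()
catalog_impl = conf.get("spark.sql.catalogImplementation", "in-memory")
print("catalog implementation:", catalog_impl)

if catalog_impl != "hive":
    # This is the symptom reported above: the session silently falls back to the
    # InMemoryCatalog, so LOAD DATA fails with "loadTable is not implemented".
    print("Hive support did not take effect; Hive-only commands will fail")
{noformat}

Running this check right after starting the shell makes the failure mode visible up front, instead of surfacing it only as the UnsupportedOperationException shown in the tracebacks above.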

[jira] [Assigned] (SPARK-15417) Failed to enable HiveSupport in PySpark

2016-05-19 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-15417:


Assignee: Andrew Or  (was: Apache Spark)


[jira] [Assigned] (SPARK-15417) Failed to enable HiveSupport in PySpark

2016-05-19 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or reassigned SPARK-15417:
-

Assignee: Andrew Or
