It looks like it should be possible to write DataFrames to HBase using the new
hbase-spark module, if you follow this pattern:
https://hbase.apache.org/book.html#_sparksql_dataframes
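
For reference, here is roughly what I am running. It is a minimal PySpark
sketch adapted from the Scala example in that section of the book, so the
catalog layout and the option keys ("catalog" and "newtable") are taken from
that example and may not be exactly right:

import json
from pyspark import SparkContext
from pyspark.sql import Row, SQLContext

sc = SparkContext(appName="hbase-spark-write")
sqlContext = SQLContext(sc)

# Catalog mapping the DataFrame schema onto an HBase table, following the
# SparkSQL/DataFrames example in the HBase book (table and column names
# here are just placeholders).
catalog = json.dumps({
    "table": {"namespace": "default", "name": "table1"},
    "rowkey": "key",
    "columns": {
        "col0": {"cf": "rowkey", "col": "key", "type": "string"},
        "col1": {"cf": "cf1", "col": "col1", "type": "string"}
    }
})

df = sqlContext.createDataFrame([Row(col0="row1", col1="value1")])

# This save() is the o120.save call in the traceback below.
df.write \
    .options(catalog=catalog, newtable="5") \
    .format("org.apache.hadoop.hbase.spark") \
    .save()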

Unfortunately, when I run my example from PySpark, I get the following
exception:


> py4j.protocol.Py4JJavaError: An error occurred while calling o120.save.
> : java.lang.RuntimeException: org.apache.hadoop.hbase.spark.DefaultSource
> does not allow create table as select.
>       at scala.sys.package$.error(package.scala:27)
>       at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:259)
>       at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
>       at py4j.Gateway.invoke(Gateway.java:259)
>       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>       at py4j.commands.CallCommand.execute(CallCommand.java:79)
>       at py4j.GatewayConnection.run(GatewayConnection.java:209)
>       at java.lang.Thread.run(Thread.java:745)

Even when I created the table in HBase first, it still failed.


