Thank you!! I can do this using saveAsTable with the schemaRDD, right?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Table-not-found-using-jdbc-console-to-query-sparksql-hive-thriftserver-tp13840p13979.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
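A minimal sketch of what "saveAsTable with the schemaRDD" could look like, assuming the Spark 1.1-era API discussed in this thread (the case class, data, and table name are illustrative, not from the thread):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

object SaveAsTableExample {
  case class Record(id: Int, value: String)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "saveAsTable-example")
    val hiveContext = new HiveContext(sc)
    // Implicit conversion from RDD[Product] to SchemaRDD (Spark 1.1)
    import hiveContext.createSchemaRDD

    val rdd = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))
    // Unlike registerTempTable, saveAsTable persists the table through the
    // Hive metastore, so a separately running Thrift server can see it.
    rdd.saveAsTable("my_table")
  }
}
```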
It sort of depends on the definition of efficiently. From a workflow
perspective I would agree, but from an I/O perspective, wouldn’t there be the
same multi-pass from the standpoint of the Hive context needing to push the
data into HDFS? Saying this, if you’re pushing the data into HDFS and …
Subject: Table not found - using jdbc console to query sparksql hive thriftserver
I used the hiveContext to register the tables and the tables are still not
being found by the thrift server. Do I have to pass the hiveContext to JDBC
somehow?
Actually, when registering the table, it is only available within the
SparkContext you are running it in. For Spark 1.1, the method name was
changed to registerTempTable to better reflect that.

The Thrift server runs as a separate process, meaning that it cannot see any
of the temp tables registered in your application’s context.
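A minimal sketch of the distinction described above, assuming the Spark 1.1-era API (the file path and table names are illustrative):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

object TempVsMetastore {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "temp-vs-metastore")
    val hiveContext = new HiveContext(sc)

    val people = hiveContext.jsonFile("people.json")  // illustrative path

    // Visible only inside this process's context; the Thrift server,
    // running as a separate process with its own HiveContext, cannot see it.
    people.registerTempTable("people_tmp")

    // Persisted through the Hive metastore, so the Thrift server (pointed
    // at the same metastore) can serve it to JDBC clients.
    people.saveAsTable("people_persistent")
  }
}
```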
Hi Denny,
There is a related question by the way.
I have a program that reads in a stream of RDDs, each of which is to be
loaded into a hive table as one partition. Currently I do this by first
writing the RDDs to HDFS and then loading them into hive, which requires
multiple passes of HDFS I/O.
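One way to avoid the intermediate write-to-HDFS-then-LOAD step is to insert each batch through the HiveContext directly. A sketch under the Spark 1.1-era API; the table schema, partition column, and names are all assumptions for illustration, not from this thread:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.hive.HiveContext

object StreamToHive {
  case class Event(id: Int, payload: String)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "stream-to-hive")
    val hiveContext = new HiveContext(sc)
    import hiveContext.createSchemaRDD  // implicit RDD -> SchemaRDD

    // Partitioned target table; created once up front.
    hiveContext.sql(
      "CREATE TABLE IF NOT EXISTS events (id INT, payload STRING) " +
      "PARTITIONED BY (batch STRING)")

    // For each incoming RDD, register it and insert it as one partition,
    // in a single pass instead of writing to HDFS first and loading after.
    def loadBatch(batch: RDD[Event], batchId: String): Unit = {
      batch.registerTempTable("events_batch")
      hiveContext.sql(
        s"INSERT INTO TABLE events PARTITION (batch = '$batchId') " +
        "SELECT id, payload FROM events_batch")
    }

    loadBatch(sc.parallelize(Seq(Event(1, "a"))), "2014-09-09")
  }
}
```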
Your tables were registered in the SQLContext, whereas the thrift server
works with HiveContext. They seem to be in two different worlds today.
On 9/9/14, 5:16 PM, alexandria1101 alexandria.shea...@gmail.com wrote:
Hi,
I want to use the Spark SQL thrift server in my application and make sure
the tables I register can be queried over JDBC.