Hello everyone,
I'm brand new to Spark and was wondering if there is a JDBC driver to access
Spark SQL directly. I'm running Spark in standalone mode and don't have
Hadoop in this environment.
--
*Best Regards/أطيب المنى,*
*Anas Mosaad*
Judy Nash <...@exchange.microsoft.com> wrote:
You can use the Thrift server for this purpose, then test it with Beeline.
See the doc:
https://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server
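Per that doc, a minimal session might look like the following sketch (paths are relative to the Spark distribution; the host and port are the defaults and may differ in your setup):

```shell
# Start the Thrift JDBC/ODBC server (HiveServer2-compatible)
./sbin/start-thriftserver.sh

# Connect with the bundled Beeline client using the HiveServer2 JDBC URL
./bin/beeline -u jdbc:hive2://localhost:10000

# At the beeline prompt, run queries as usual, e.g.:
#   SHOW TABLES;
```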
*From:* Anas Mosaad [mailto:anas.mos...@incorta.com]
*Sent:* Monday, December 8 […]
[…] a working Metastore.
On 12/9/14 3:59 PM, Anas Mosaad wrote:
Thanks Judy, this is exactly what I'm looking for. However, and please
forgive me if it's a dumb question: it seems to me that the Thrift server is
the same as the HiveServer2 JDBC driver. Does this mean that starting the
Thrift server will start Hive as well?
On 12/9/14 5:27 PM, Anas Mosaad wrote:
Thanks Cheng,
I thought spark-sql uses the exact same metastore, right? However,
it didn't work as expected. Here's what I did:
In spark-shell, I loaded a CSV file and registered the table, say
countries.
Started the Thrift server.
In that case, what should be the behavior of saveAsTable?
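The spark-shell steps above might look like the following sketch (assuming a Spark 1.2-era API; the file name `countries.csv` and the `Country` case class are illustrative):

```scala
// In spark-shell (Spark 1.2-era API); `sc` and `sqlContext` are predefined.
// NOTE: registerTempTable only registers the table in *this* SQLContext.
// A separately started Thrift server runs its own context, so it will not
// see this temp table unless both share the same context/metastore.
case class Country(code: String, name: String)

val countries = sc.textFile("countries.csv")
  .map(_.split(","))
  .map(c => Country(c(0), c(1)))

import sqlContext._  // implicit conversion from RDD to SchemaRDD
countries.registerTempTable("countries")
sqlContext.sql("SELECT name FROM countries").collect()
```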
On Dec 10, 2014 4:03 AM, Michael Armbrust <mich...@databricks.com> wrote:
That is correct. The HiveContext will create an embedded metastore in
the current directory if you have not configured Hive.
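One way to avoid the per-directory embedded metastore and have spark-shell and the Thrift server see the same tables is to point both at a shared Hive metastore in `conf/hive-site.xml` (a sketch; the metastore host and port below are illustrative):

```xml
<!-- conf/hive-site.xml: point Spark SQL at a shared Hive metastore
     service instead of the default embedded, per-directory one.
     The URI value here is an illustrative placeholder. -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```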
On Tue, Dec 9, 2014 at 5:51 PM, […]
[…] on the forum can confirm this, though.
*From:* Cheng Lian [mailto:lian.cs@gmail.com]
*Sent:* Tuesday, December 9, 2014 6:42 AM
*To:* Anas Mosaad
*Cc:* Judy Nash; user@spark.apache.org
*Subject:* Re: Spark-SQL JDBC driver
According to the stack trace, you were still using SQLContext […]
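The fix implied here (a sketch, assuming a Spark 1.2-era API) is to create a HiveContext, which reads the Hive configuration and uses the shared metastore, instead of a plain SQLContext:

```scala
// spark-shell sketch: use HiveContext so tables land in the Hive
// metastore and become visible to the Thrift server (Spark 1.2-era API).
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
hiveContext.sql(
  "CREATE TABLE IF NOT EXISTS countries (code STRING, name STRING)")
hiveContext.sql("SHOW TABLES").collect().foreach(println)
```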