Re: Spark-SQL JDBC driver

2014-12-14 Thread Michael Armbrust
Quoting Cheng Lian (Dec 9): "According to the stacktrace, you were still using SQLContext rather than HiveContext."

RE: Spark-SQL JDBC driver

2014-12-11 Thread Anas Mosaad
Quoting Cheng Lian (Dec 9): "According to the stacktrace, you were still using SQLContext rather than HiveContext."

Re: Spark-SQL JDBC driver

2014-12-11 Thread Denny Lee
Quoting Cheng Lian (Dec 9): "According to the stacktrace, you were still using SQLContext rather than HiveContext. To interact with Hive, HiveContext *must* be used. Please refer to this page: http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables"

RE: Spark-SQL JDBC driver

2014-12-10 Thread Judy Nash
… SQL experts on the forum can confirm on this though. Quoting Cheng Lian (Dec 9): "According to the stacktrace, you were still using SQLContext rather than HiveContext."

Re: Spark-SQL JDBC driver

2014-12-09 Thread Anas Mosaad
Thanks Judy, this is exactly what I'm looking for. However, and please forgive me if it's a dumb question: it seems to me that Thrift is the same as the hive2 JDBC driver. Does this mean that starting the Thrift server will start Hive on the server as well?

Re: Spark-SQL JDBC driver

2014-12-09 Thread Cheng Lian
Essentially, the Spark SQL JDBC Thrift server is just a Spark port of HiveServer2. You don't need to run Hive, but you do need a working Metastore.
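In practice, "a working Metastore" can simply be the embedded Derby one that Spark creates on demand as metastore_db in the process's working directory. A hedged sketch (the install path is an assumption; embedded Derby admits only one JVM at a time, so run the two steps in sequence):

    cd /opt/spark
    ./bin/spark-shell                # create and save tables, then quit the shell
    ./sbin/start-thriftserver.sh     # started from the same directory, it picks
                                     # up the same ./metastore_db

For anything concurrent or multi-user, point both processes at a shared metastore via conf/hive-site.xml instead.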

Re: Spark-SQL JDBC driver

2014-12-09 Thread Anas Mosaad
Thanks Cheng, I thought spark-sql uses the exact same metastore, right? However, it didn't work as expected. Here's what I did: in spark-shell, I loaded a CSV file and registered the table, say countries. Started the Thrift server. Connected using beeline. When I run show tables or !tables, the countries table doesn't show up.
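For reference, the failure is reproducible with the default context. A reconstruction of the likely session (the CSV layout and case class are assumptions, not from the thread):

    // spark-shell, Spark 1.2-era API -- a plain SQLContext, no Hive involved
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.createSchemaRDD            // implicit RDD -> SchemaRDD

    case class Country(code: String, name: String)
    val countries = sc.textFile("countries.csv")
      .map(_.split(","))
      .map(c => Country(c(0), c(1)))

    // temp table: registered only in this JVM's catalog
    countries.registerTempTable("countries")

The Thrift server is a separate process with its own context and catalog, so a temp table registered inside spark-shell is never visible from beeline.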

Re: Spark-SQL JDBC driver

2014-12-09 Thread Cheng Lian
How did you register the table under spark-shell? Two things to notice:
1. To interact with Hive, HiveContext instead of SQLContext must be used.
2. `registerTempTable` doesn't persist the table into the Hive metastore, and the table is lost after quitting spark-shell. Instead, you must use `saveAsTable`.
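Put together, a minimal sketch of the working variant (Spark 1.2-era API; the case class and file path are assumptions carried over from above):

    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)
    import hiveContext.createSchemaRDD           // implicit RDD -> SchemaRDD

    case class Country(code: String, name: String)
    val countries = sc.textFile("countries.csv")
      .map(_.split(","))
      .map(c => Country(c(0), c(1)))

    // saveAsTable goes through the Hive metastore, so the table survives the
    // shell session and is visible to the Thrift server
    countries.saveAsTable("countries")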

Re: Spark-SQL JDBC driver

2014-12-09 Thread Anas Mosaad
Back to the first question: does this mandate that Hive is up and running? When I try it, I get the following exception. The documentation says that this method works only on a SchemaRDD. I thought that was why countries.saveAsTable did not work, so I created a tmp table that contains the results.
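Presumably the workaround looked something like this (a reconstruction; the table name is hypothetical). sql() does return a SchemaRDD, so saveAsTable is available on it, but under a plain SQLContext the call still fails, which is consistent with the stacktrace addressed in the next message:

    // sql() returns a SchemaRDD, so this compiles...
    val tmp = sqlContext.sql("SELECT * FROM countries")
    // ...but persisting goes through the Hive metastore, which a plain
    // SQLContext cannot do, so this throws at runtime
    tmp.saveAsTable("countries_persisted")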

Re: Spark-SQL JDBC driver

2014-12-09 Thread Cheng Lian
According to the stacktrace, you were still using SQLContext rather than HiveContext. To interact with Hive, HiveContext *must* be used. Please refer to this page: http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
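The fix amounts to one substitution in spark-shell. A minimal sketch following the linked guide:

    // a HiveContext is backed by the Hive metastore; SHOW TABLES now lists
    // the same tables that beeline sees through the Thrift server
    val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
    hiveContext.sql("SHOW TABLES").collect().foreach(println)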

RE: Spark-SQL JDBC driver

2014-12-08 Thread Judy Nash
You can use the Thrift server for this purpose and then test it with beeline. See doc: https://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server
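The steps from that page boil down to the following (a sketch; localhost and port 10000 are the defaults the guide assumes):

    # start the HiveServer2-compatible JDBC/ODBC server
    ./sbin/start-thriftserver.sh

    # smoke-test it with the bundled beeline client
    ./bin/beeline
    beeline> !connect jdbc:hive2://localhost:10000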