On 10/16/14 12:44 PM, neeraj wrote:
I would like to reiterate that I don't have Hive installed on the Hadoop
cluster.
I have some queries on the following comment from Cheng Lian-2:
"The Thrift server is used to interact with existing Hive data, and thus
needs Hive Metastore to access Hive catalog. In your case, you need to build
Spark with sbt/sbt -Phive,hadoop-2.4 clean package. But since you’ve already
started Thrift server successfully, this step should already have been done
properly."

1. Even though I don't have Hive installed, how can I connect my
application (Microsoft Excel, etc.) to Spark SQL? Must I have Hive
installed?
Are you trying to use Excel as a data source for Spark SQL, or use Spark SQL as a data source for Excel? You can use Spark SQL in your own Spark applications without involving Hive, but the Thrift server is designed to interact with existing Hive data. Essentially, it's a HiveServer2 port for Spark SQL.
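To illustrate the first point, here is a minimal sketch of using Spark SQL inside your own application without Hive, written against the Spark 1.1 API (SQLContext and SchemaRDD). The file name `people.txt`, its comma-separated layout, and the `Person` schema are all assumptions for the example:

```scala
// Sketch: Spark SQL without Hive, using a plain SQLContext (Spark 1.1 API).
// Assumes people.txt contains lines like "Ann,30" -- name and schema are made up.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Int)

object SparkSqlNoHive {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("no-hive-example"))
    val sqlContext = new SQLContext(sc)
    // Implicitly converts an RDD of case classes into a SchemaRDD.
    import sqlContext.createSchemaRDD

    val people = sc.textFile("people.txt")
      .map(_.split(","))
      .map(fields => Person(fields(0), fields(1).trim.toInt))
    // Register the RDD as a temporary table, queryable with SQL.
    people.registerTempTable("people")

    sqlContext.sql("SELECT name FROM people WHERE age >= 21")
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```

None of this touches the Hive Metastore; temporary tables live only for the lifetime of the SQLContext.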
2. Where can I download Spark SQL JDBC/ODBC drivers? I could not find
them on the Databricks site.
3. Could somebody point me to the steps to connect Excel with Spark SQL and
retrieve some data. Is this possible at all?
I think this article by Denny Lee may be helpful, although it's about Tableau rather than Excel: https://www.concur.com/blog/en-us/connect-tableau-to-sparksql
4. Which applications can be used to connect to Spark SQL?
In theory, any application that supports ODBC/JDBC can connect to Spark SQL.
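Since the Thrift server speaks the HiveServer2 wire protocol, any HiveServer2 JDBC client can talk to it. Below is a hypothetical sketch; the host, port, and table name are assumptions, and the Hive JDBC driver (`org.apache.hive.jdbc.HiveDriver`) must be on the classpath at run time:

```scala
// Sketch: querying the Spark SQL Thrift server over JDBC.
// Host "localhost", port 10000 (HiveServer2's default), and table
// "my_table" are assumptions for illustration.
import java.sql.DriverManager

object ThriftServerJdbc {
  // Build a HiveServer2-style JDBC URL for the Thrift server.
  def jdbcUrl(host: String, port: Int): String =
    s"jdbc:hive2://$host:$port/default"

  def main(args: Array[String]): Unit = {
    // Requires the Hive JDBC driver on the classpath at run time.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection(jdbcUrl("localhost", 10000), "", "")
    try {
      val stmt = conn.createStatement()
      val rs = stmt.executeQuery("SELECT * FROM my_table LIMIT 10")
      while (rs.next()) println(rs.getString(1))
      rs.close(); stmt.close()
    } finally {
      conn.close()
    }
  }
}
```

GUI tools like Excel or Tableau would instead go through an ODBC driver (e.g. Simba's HiveServer2 ODBC driver, as in the Tableau article above), but the connection endpoint is the same.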

Regards,
Neeraj

--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/YARN-deployment-of-Spark-and-Thrift-JDBC-server-tp16374p16537.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
