Hi Ashu,

Per the documents:

Configuration of Hive is done by placing your hive-site.xml file in conf/.


For example, you can place something like this in your
$SPARK_HOME/conf/hive-site.xml file:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Ensure that the following statement points to the Hive
         Metastore URI in your cluster -->
    <value>thrift://HostNameHere:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>
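
With that in place, pointing a HiveContext at the existing metastore is enough
to query your Hive tables from Spark. As a minimal sketch in the Spark shell
(the table name my_table below is just a placeholder; substitute one of your
own Hive tables):

import org.apache.spark.sql.hive.HiveContext

// sc is the SparkContext that spark-shell creates for you
val hiveContext = new HiveContext(sc)

// Runs against your existing Hive metastore once hive-site.xml is picked up;
// "my_table" is a placeholder -- use a real table from your deployment
hiveContext.sql("SELECT * FROM my_table LIMIT 10").collect().foreach(println)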

HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu <ashutosh.triv...@iiitb.org> wrote:

> Hi,
> I have Hive in development, and I want to use it in Spark. The Spark SQL
> documentation says the following:
>
> "Users who do not have an existing Hive deployment can still create a
> HiveContext. When not configured by the hive-site.xml, the context
> automatically creates metastore_db and warehouse in the current directory."
>
> So I have an existing Hive setup, already configured; how would I be able
> to use the same in Spark?
