Hi Denny

Thanks for the reply. I have tried the same and it seems to work.

I had a quick question though. I have configured Spark to use the Hive Metastore (MySQL).

When I connect to the Thrift Server using the hive CLI, it seems to schedule a 
MapReduce job when I query the table. When I run the same query using beeline, 
it seems to use the Spark context to execute.

Is this correct or something wrong with my setup?


My understanding was that the Thrift Server was just a HiveQL frontend and the 
underlying query execution would be done by Spark.

Regards
Santosh

From: Denny Lee [mailto:denny.g....@gmail.com]
Sent: Wednesday, September 17, 2014 10:14 PM
To: user@spark.apache.org; Addanki, Santosh Kumar
Subject: Re: SchemaRDD and RegisterAsTable

The registered table is stored within the Spark context itself.  To make the 
table available to the Thrift server, you can save the table into the Hive 
context so that the Thrift server process can see it.  If you are using Derby 
as your metastore, then the Thrift server should be accessing it, since you 
would want both processes to use the same Hive configuration (i.e. 
hive-site.xml).  You may want to migrate your metastore to MySQL or Postgres, 
as either will handle concurrency better than Derby.
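To illustrate the distinction, here is a minimal sketch for the Spark 1.1 shell built with -Phive (the table names and JSON path are just examples, not from this thread):

```scala
// Run in spark-shell; `sc` is the SparkContext provided by the shell.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// Load some data into a SchemaRDD (example path from the Spark distribution).
val people = hiveContext.jsonFile("examples/src/main/resources/people.json")

// Temporary registration: the table's metadata lives only inside this
// SparkContext, so a separately running Thrift server cannot see it.
people.registerTempTable("people_temp")

// Saving through the HiveContext records the table in the configured Hive
// metastore (e.g. MySQL via hive-site.xml), so a Thrift server pointed at
// the same metastore can query it.
people.saveAsTable("people_persisted")
```

The key design point is that registerTempTable is scoped to one context/process, while saveAsTable goes through the shared metastore.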

HTH!
Denny



On September 17, 2014 at 21:47:50, Addanki, Santosh Kumar 
(santosh.kumar.adda...@sap.com<mailto:santosh.kumar.adda...@sap.com>) wrote:
Hi,

We built Spark 1.1.0 with Maven using
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive clean package


And the Thrift Server has been configured to use the Hive Meta Store.

When a SchemaRDD is registered as a table, where does the metadata of this 
table get stored? Can it be stored in the configured Hive metastore?

Also, if the Thrift Server is not configured to use the Hive metastore, it uses 
its own default (probably Derby) metastore. Would the table metadata then be 
stored in that metastore?

Best Regards
Santosh





