Each time you run the jar, a new JVM is started, so a new metastore connection has to be established. Maintaining a connection across different JVMs is not the right way to think about it; instead, keep a single long-running JVM (one SparkContext/HiveContext) alive and reuse it for all queries.
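As a minimal sketch (Spark 1.x API, as used at the time of this thread), a Java driver can build the HiveContext once and then serve repeated queries from the same process, so the metastore connection is only made on startup. The class name and the table `my_table` are placeholders, not from the original question:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;

public class HiveQueryService {
    public static void main(String[] args) {
        // Context creation is the expensive step: this is where the
        // metastore connection is established.
        SparkConf conf = new SparkConf().setAppName("HiveQueryService");
        JavaSparkContext sc = new JavaSparkContext(conf);
        HiveContext hive = new HiveContext(sc.sc());

        // Reuse the same context for every query instead of launching
        // a new jar (and hence a new JVM + metastore connection) per query.
        hive.sql("SELECT count(*) FROM my_table").show();
        hive.sql("SELECT * FROM my_table LIMIT 10").show();

        sc.stop();
    }
}
```

The same idea is what spark-shell gives you for free: the shell holds one long-lived context, so only the first query pays the connection cost. Another option along these lines is the Spark Thrift server, which keeps a shared context running and accepts queries over JDBC.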

> each time when I run that jar it tries to make connection with hive metastore

At 2015-07-07 17:07:06, "wazza" <rajeshkumarit8...@gmail.com> wrote:
>Hi, I am new to Apache Spark and I have tried to query Hive tables using
>Apache Spark SQL. First I tried it in spark-shell, where I can query 100,000
>(1 lakh) records from a Hive table within a second. Then I tried the same in
>Java code, which always takes more than 10 seconds, and I have noticed that
>each time I run that jar it tries to make a connection to the Hive metastore.
>Can anyone tell me how to maintain the connection between Apache Spark and the
>Hive metastore, or else how to achieve the same in Java?
>
>
>
>--
>View this message in context: 
>http://apache-spark-user-list.1001560.n3.nabble.com/Maintain-Persistent-Connection-with-Hive-meta-store-tp23664.html
>Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>For additional commands, e-mail: user-h...@spark.apache.org
>
