Have you looked at Spark SQL <http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables>? It supports HiveQL, can read from the Hive metastore, and does not require Hadoop.
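For reference, the Hive-tables example from that guide looks roughly like this in spark-shell (this assumes a Spark build compiled with Hive support, i.e. `-Phive`; `sc` is the SparkContext the shell provides):

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// With no hive-site.xml on the classpath, HiveContext creates a local
// embedded (Derby) metastore and warehouse directory automatically,
// so no Hadoop cluster is required.
hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
hiveContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

// Queries are expressed in HiveQL and return RDDs of Rows.
hiveContext.sql("SELECT key, value FROM src LIMIT 10").collect().foreach(println)
```

If you later want a shared metastore instead of the per-directory Derby one, you can drop a hive-site.xml pointing at an external metastore database onto Spark's classpath.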
On Wed, Jan 7, 2015 at 8:27 AM, jamborta <jambo...@gmail.com> wrote:
> Hi all,
>
> We have been building a system where we heavily rely on Hive queries
> executed through Spark to load and manipulate data, running on CDH and
> YARN. I have been trying to explore lighter setups where we would not
> have to maintain a Hadoop cluster, and instead run the system on Spark
> only.
>
> Is it possible to run Spark standalone and set up Hive alongside it,
> without the Hadoop cluster? If not, any suggestions on how we can
> replicate the convenience of Hive tables (and Hive SQL) without Hive?
>
> thanks,