I've heard that Spark SQL will deprecate, or has already started deprecating, HQL. We have Spark SQL + Python jobs that currently read from the Hive metastore to get information such as table locations and partition values.
Will we have to re-code these functions in future releases of Spark (perhaps by connecting to Hive directly), or will fetching Hive metastore data still be supported via regular SQL?

Jon

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-deprecating-Hive-How-will-I-access-Hive-metadata-in-the-future-tp24874.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
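For context, here is a minimal PySpark sketch of the kind of lookup described above — fetching a table's location and partition values through plain SQL statements rather than the Hive client directly. This assumes a Spark build with Hive support (the `SparkSession.enableHiveSupport()` path introduced in Spark 2.x) and a hypothetical table name `my_db.my_table`; it is a sketch, not a statement about what future releases will support.

```python
# Sketch: reading Hive metastore information via regular Spark SQL.
# Assumes Spark was built with Hive support and a metastore is configured;
# "my_db.my_table" is a hypothetical table name.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("metastore-lookup")
    .enableHiveSupport()  # route catalog calls through the Hive metastore
    .getOrCreate()
)

# The table's storage location appears in the extended description output.
for row in spark.sql("DESCRIBE FORMATTED my_db.my_table").collect():
    if row.col_name.strip() == "Location":
        print(row.data_type)  # the table's storage path

# Partition values come back one row per partition spec.
for row in spark.sql("SHOW PARTITIONS my_db.my_table").collect():
    print(row.partition)  # e.g. "dt=2015-09-30"
```

Because both lookups go through `spark.sql(...)`, jobs written this way depend only on the SQL surface, not on HQL or a direct Hive connection.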