Repository: spark
Updated Branches:
  refs/heads/branch-1.1 8f6e2e9df -> 8cb4e5b47
[SPARK-2844][SQL] Correctly set JVM HiveContext if it is passed into Python HiveContext constructor

https://issues.apache.org/jira/browse/SPARK-2844

Author: Ahir Reddy <[email protected]>

Closes #1768 from ahirreddy/python-hive-context-fix and squashes the following commits:

7972d3b [Ahir Reddy] Correctly set JVM HiveContext if it is passed into Python HiveContext constructor

(cherry picked from commit 490ecfa20327a636289321ea447722aa32b81657)
Signed-off-by: Michael Armbrust <[email protected]>

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8cb4e5b4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8cb4e5b4
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8cb4e5b4

Branch: refs/heads/branch-1.1
Commit: 8cb4e5b47b9b871bf4c0d93d0a747e55f66ca0ec
Parents: 8f6e2e9
Author: Ahir Reddy <[email protected]>
Authored: Mon Aug 11 20:06:06 2014 -0700
Committer: Michael Armbrust <[email protected]>
Committed: Mon Aug 11 20:06:25 2014 -0700

----------------------------------------------------------------------
 python/pyspark/sql.py | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/8cb4e5b4/python/pyspark/sql.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql.py b/python/pyspark/sql.py
index 950e275..3604046 100644
--- a/python/pyspark/sql.py
+++ b/python/pyspark/sql.py
@@ -912,6 +912,8 @@ class SQLContext:
         """Create a new SQLContext.
 
         @param sparkContext: The SparkContext to wrap.
+        @param sqlContext: An optional JVM Scala SQLContext. If set, we do not instantiate a new
+            SQLContext in the JVM; instead we make all calls to this object.
        >>> srdd = sqlCtx.inferSchema(rdd)
        >>> sqlCtx.inferSchema(srdd) # doctest: +IGNORE_EXCEPTION_DETAIL
@@ -1315,6 +1317,18 @@ class HiveContext(SQLContext):
 
     It supports running both SQL and HiveQL commands.
     """
 
+    def __init__(self, sparkContext, hiveContext=None):
+        """Create a new HiveContext.
+
+        @param sparkContext: The SparkContext to wrap.
+        @param hiveContext: An optional JVM Scala HiveContext. If set, we do not instantiate a new
+            HiveContext in the JVM; instead we make all calls to this object.
+        """
+        SQLContext.__init__(self, sparkContext)
+
+        if hiveContext:
+            self._scala_HiveContext = hiveContext
+
     @property
     def _ssql_ctx(self):
         try:
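For readers skimming the diff, the pattern this patch introduces can be boiled down to a standalone sketch: a Python wrapper whose constructor optionally accepts an already-constructed JVM-side object and reuses it instead of creating a fresh one on first use. All names below are hypothetical stand-ins for illustration, not pyspark APIs, and the stub class merely plays the role of the Scala HiveContext:

```python
class JVMHiveContextStub:
    """Hypothetical stand-in for the JVM-side Scala HiveContext."""
    def __init__(self, label):
        self.label = label


class PyHiveContext:
    """Illustrative wrapper mirroring the shape of the patched constructor."""

    def __init__(self, spark_context, hive_context=None):
        self._sc = spark_context
        # If a backend was injected, remember it; otherwise leave it unset so
        # it is constructed lazily on first access (mirroring _ssql_ctx).
        if hive_context is not None:
            self._scala_hive_context = hive_context

    @property
    def ssql_ctx(self):
        # Lazily create a backend only when none was injected up front.
        if not hasattr(self, "_scala_hive_context"):
            self._scala_hive_context = JVMHiveContextStub("fresh")
        return self._scala_hive_context


# Injecting an existing backend: no new one is created.
existing = JVMHiveContextStub("existing")
ctx = PyHiveContext(spark_context=None, hive_context=existing)
assert ctx.ssql_ctx is existing

# Without injection, the wrapper builds its own backend on first access.
ctx2 = PyHiveContext(spark_context=None)
assert ctx2.ssql_ctx.label == "fresh"
```

Before this fix, the Python HiveContext had no such constructor of its own, so a JVM HiveContext passed in by a caller was silently ignored and a new one was created anyway; the added `__init__` makes the injected object the one all subsequent calls go through.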
