[
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14078962#comment-14078962
]
Lefty Leverenz commented on HIVE-7436:
--------------------------------------
bq. Is HADOOP_CLASSPATH documented anywhere for Hive?
Grepping the Hive wiki reveals three docs that mention HADOOP_CLASSPATH, but
none documents it for Hive itself:
* [HCatalog InputOutput -- Running MapReduce with HCatalog (see first example)
|
https://cwiki.apache.org/confluence/display/Hive/HCatalog+InputOutput#HCatalogInputOutput-RunningMapReducewithHCatalog]
* [Install WebHCat -- Hadoop Distributed Cache (see templeton.override.jars,
which is the last config in the section) |
https://cwiki.apache.org/confluence/display/Hive/WebHCat+InstallWebHCat#WebHCatInstallWebHCat-HadoopDistributedCache]
* [WebHCat Configuration -- Configuration Variables (see
templeton.override.jars, which is 5th in the table) |
https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure#WebHCatConfigure-ConfigurationVariables]
> Load Spark configuration into Hive driver
> -----------------------------------------
>
> Key: HIVE-7436
> URL: https://issues.apache.org/jira/browse/HIVE-7436
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Chengxiang Li
> Assignee: Chengxiang Li
> Fix For: spark-branch
>
> Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch,
> HIVE-7436-Spark.3.patch
>
>
> Load Spark configuration into the Hive driver. There are three ways to set up
> Spark configuration:
> # Java system properties.
> # The Spark configuration file (spark-defaults.conf).
> # The Hive configuration file (hive-site.xml).
> A method lower in this list has higher priority and overrides any property of
> the same name set by a method above it.
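> The first method sets properties directly on the driver JVM. A minimal sketch,
> assuming the hive launcher forwards HADOOP_OPTS to the driver JVM (the
> standard behavior of the Hadoop wrapper scripts); the property name comes from
> the Spark docs, the rest is illustrative:
> {code}
> # Sketch: pass a Spark property to the Hive driver as a Java system property.
> # Assumes the hive/hadoop launcher forwards HADOOP_OPTS to the JVM.
> export HADOOP_OPTS="$HADOOP_OPTS -Dspark.master=local"
> hive
> {code}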
> Please refer to [http://spark.apache.org/docs/latest/configuration.html] for
> all configurable Spark properties. You can set Spark configuration in Hive in
> the following ways:
> # Configure through the Spark configuration file (see the spark-defaults.conf
> sketch after this list).
> #* Create spark-defaults.conf and place it in the /etc/spark/conf
> configuration directory. Configure properties in spark-defaults.conf in Java
> properties format.
> #* Set the $SPARK_CONF_DIR environment variable to the directory containing
> spark-defaults.conf:
> export SPARK_CONF_DIR=/etc/spark/conf
> #* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable:
> export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
> # Configure through the Hive configuration file.
> #* Edit hive-site.xml in the Hive conf directory and set the Spark properties
> there in XML format (see the hive-site.xml sketch after this list).
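> A minimal spark-defaults.conf sketch in Java properties format; the two
> property names come from the defaults table below, and the values shown are
> illustrative:
> {code}
> # /etc/spark/conf/spark-defaults.conf
> spark.master     local
> spark.app.name   Hive on Spark
> {code}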
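> The equivalent settings in hive-site.xml use the standard Hadoop XML property
> format; a sketch assuming the Spark property names are used as-is:
> {code:xml}
> <!-- hive-site.xml: the same Spark properties in XML format -->
> <property>
>   <name>spark.master</name>
>   <value>local</value>
> </property>
> <property>
>   <name>spark.app.name</name>
>   <value>Hive on Spark</value>
> </property>
> {code}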
> Hive driver default Spark properties:
> ||Name||Default Value||Description||
> |spark.master|local|Spark master URL.|
> |spark.app.name|Hive on Spark|Default Spark application name.|
> NO PRECOMMIT TESTS. This is for spark-branch only.