Yes, you are right. I made the change and also linked hive-site.xml into the
Spark conf directory, roughly as sketched below.
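(For reference, something along these lines; the install paths are inferred
from the log below and the conf locations are assumptions about a standard
layout:

# symlink Hive's config into Spark's conf dir so spark-submit picks it up
ln -s /u01/app/apache-hive-1.2.1-bin/conf/hive-site.xml \
      /u01/app/spark-1.4.1-bin-hadoop2.6/conf/hive-site.xml
)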
Rerunning the SQL, I now get the following error in hive.log:

2015-09-25 13:31:14,750 INFO  [HiveServer2-Handler-Pool: Thread-125]: 
client.SparkClientImpl (SparkClientImpl.java:startDriver(375)) - Attempting 
impersonation of HIVEAPP
2015-09-25 13:31:14,750 INFO  [HiveServer2-Handler-Pool: Thread-125]: 
client.SparkClientImpl (SparkClientImpl.java:startDriver(409)) - Running client 
driver with argv: /u01/app/spark-1.4.1-bin-hadoop2.6/bin/spark-submit 
--executor-memory 512m --proxy-user HIVEAPP --properties-file 
/tmp/spark-submit.4348738410387344124.properties --class 
org.apache.hive.spark.client.RemoteDriver 
/u01/app/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar --remote-host 
ip-10-92-82-229.ec2.internal --remote-port 48481 --conf 
hive.spark.client.connect.timeout=1000 --conf 
hive.spark.client.server.connect.timeout=90000 --conf 
hive.spark.client.channel.log.level=null --conf 
hive.spark.client.rpc.max.size=52428800 --conf hive.spark.client.rpc.threads=8 
--conf hive.spark.client.secret.bits=256
2015-09-25 13:31:15,473 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - Warning: Ignoring non-spark config property: 
hive.spark.client.server.connect.timeout=90000
2015-09-25 13:31:15,473 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - Warning: Ignoring non-spark config property: 
hive.spark.client.rpc.threads=8
2015-09-25 13:31:15,474 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - Warning: Ignoring non-spark config property: 
hive.spark.client.connect.timeout=1000
2015-09-25 13:31:15,474 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - Warning: Ignoring non-spark config property: 
hive.spark.client.secret.bits=256
2015-09-25 13:31:15,474 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - Warning: Ignoring non-spark config property: 
hive.spark.client.rpc.max.size=52428800
2015-09-25 13:31:15,718 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/09/25 13:31:15 WARN util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2015-09-25 13:31:16,063 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/09/25 13:31:16 INFO client.RMProxy: 
Connecting to ResourceManager at /0.0.0.0:8032
2015-09-25 13:31:16,245 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - ERROR: 
org.apache.hadoop.security.authorize.AuthorizationException: User: hadoop is 
not allowed to impersonate HIVEAPP
2015-09-25 13:31:16,248 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/09/25 13:31:16 INFO util.Utils: Shutdown 
hook called
2015-09-25 13:31:16,265 WARN  [Driver]: client.SparkClientImpl 
(SparkClientImpl.java:run(427)) - Child process exited with code 1.
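If I read the AuthorizationException right, the remaining problem is on the
Hadoop side rather than the Spark side: the user running HiveServer2 (hadoop
here) is not whitelisted to impersonate HIVEAPP. My understanding is that
core-site.xml needs proxy-user entries along these lines (a sketch only; the
wildcard values are placeholders and should be narrowed to match your
security policy):

<!-- allow the "hadoop" superuser to impersonate other users -->
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>  <!-- placeholder: restrict to the HiveServer2 host(s) -->
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>  <!-- placeholder: restrict to the groups you allow -->
</property>

(I believe the NameNode and ResourceManager have to re-read this, e.g. via a
restart, before it takes effect.)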

-----Original Message-----
From: Marcelo Vanzin [mailto:van...@cloudera.com] 
Sent: Friday, September 25, 2015 1:12 PM
To: Garry Chen <g...@cornell.edu>
Cc: Jimmy Xiang <jxi...@cloudera.com>; user@spark.apache.org
Subject: Re: hive on spark query error

On Fri, Sep 25, 2015 at 10:05 AM, Garry Chen <g...@cornell.edu> wrote:
> In spark-defaults.conf the spark.master is spark://hostname:7077.
> From hive-site.xml:
>   <property>
>     <name>spark.master</name>
>     <value>hostname</value>
>   </property>

That's not a valid value for spark.master (as the error indicates).
You should set it to "spark://hostname:7077", as you have it in
spark-defaults.conf (or perhaps remove the setting from hive-site.xml;
I think Hive will honor your spark-defaults.conf).
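In hive-site.xml that would look something like this (a sketch; substitute
your actual master hostname):

  <property>
    <name>spark.master</name>
    <value>spark://hostname:7077</value>
  </property>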

--
Marcelo
