Hi,

I was trying to set up Spark SQL on a private cluster. I configured a 
hive-site.xml under spark/conf that uses a local metastore, with the warehouse 
and default FS name set to HDFS on one of my corporate clusters. Then I started 
the Spark master, worker, and Thrift server. However, when creating a database 
from Beeline, I got the following error:

org.apache.hive.service.cli.HiveSQLException: 
org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution 
Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
MetaException(message:Got exception: java.io.IOException Failed on local 
exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]; Host Details : local host is: "<spark-master-host>"; 
destination host is: "<HDFS-namenode:port>"; )

It occurred when Spark tried to create an HDFS directory under the warehouse 
in order to create the database. All processes (Spark master, worker, Thrift 
server, Beeline) were run as a user with the right access permissions. My 
Spark classpaths have /home/y/conf/hadoop at the front. I was able to read and 
write files under the same directory via the hadoop fs command line, and also 
from spark-shell, without any issue.
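
For reference, the relevant part of my hive-site.xml is roughly of the 
following shape (host and port are the same placeholders as in the error 
message above, and the warehouse path is illustrative, not the exact one I 
use):

```xml
<configuration>
  <!-- Warehouse directory on the corporate HDFS cluster -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://<HDFS-namenode:port>/user/hive/warehouse</value>
  </property>
  <!-- Default filesystem (fs.default.name on older Hadoop versions) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<HDFS-namenode:port></value>
  </property>
</configuration>
```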

Any hints regarding the right way of configuration would be appreciated.

Thanks,
Du
