[ 
https://issues.apache.org/jira/browse/KYLIN-2817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shao Feng Shi closed KYLIN-2817.
--------------------------------
    Resolution: Incomplete

> Kerberos problem when building a Cube in a Kylin read and write separation 
> environment
> -----------------------------------------------------------------------------------------------
>
>                 Key: KYLIN-2817
>                 URL: https://issues.apache.org/jira/browse/KYLIN-2817
>             Project: Kylin
>          Issue Type: Bug
>          Components: Environment 
>    Affects Versions: v2.0.0
>         Environment: CDH:5.7.1
> Kylin:apache-kylin-2.0.0-bin
>            Reporter: jiangshouzhuang
>            Assignee: Hongbin Ma
>            Priority: Major
>             Fix For: Future
>
>
> While building a Kylin read and write separation environment, I encountered 
> some Kerberos permission problems.
> The Kylin environment is as follows:
> CDH 5.7.1 Cluster A: Yarn, HDFS, Hive. HDFS is HA; the nameservice is nn-idc.
> CDH 5.7.1 Cluster B: Yarn, HDFS, HBase, Kylin. HDFS is HA; the nameservice is kylin-ns.
> Cluster A and Cluster B share a common KDC.
> When I use Kylin to build the cube, the error is as follows:
> #19 Step Name: Convert Cuboid Data to HFile
> java.io.IOException: Failed to run job : Failed to renew token: Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:kylin-ns, Ident: 
> (HDFS_DELEGATION_TOKEN token 12 for kylin_manager_user)
>       at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>       at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:244)
>       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
>       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
>       at 
> org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:149)
>       at 
> org.apache.kylin.storage.hbase.steps.CubeHFileJob.run(CubeHFileJob.java:106)
>       at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:102)
>       at 
> org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)
>       at 
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>       at 
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
>       at 
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>       at 
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> The problem is similar to the one reported in this Yarn issue:
> https://issues.apache.org/jira/browse/YARN-3021
> Since Apache Kylin supports a read and write separation environment, how can 
> I solve this problem, and how should it be configured?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
