Please check find-hive-dependency.sh and make sure the hive_dependency it
resolves works as expected.
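
One thing worth checking: if the entries in hive_dependency have no URI scheme (no "file://" prefix), Hadoop's distributed cache resolves them against fs.defaultFS, which would explain the jar being looked up under hdfs://localhost:9000. A minimal sketch for inspecting a classpath value; the sample hive_dependency below is an assumption for illustration (on a real install you would source it from $KYLIN_HOME/bin/find-hive-dependency.sh):

```shell
# Sample value for illustration only -- on a real install, obtain it with:
#   source $KYLIN_HOME/bin/find-hive-dependency.sh; echo $hive_dependency
hive_dependency="/opt/bigdata/hive-2.1.1/lib/hive-exec-2.1.1.jar:/opt/bigdata/hive-2.1.1/conf"

# Split the colon-separated classpath and flag entries without a scheme,
# since scheme-less paths are resolved against fs.defaultFS by Hadoop.
IFS=':' read -ra entries <<< "$hive_dependency"
for entry in "${entries[@]}"; do
  case "$entry" in
    file://*|hdfs://*) echo "OK (explicit scheme): $entry" ;;
    *) echo "WARN (no scheme, resolved against fs.defaultFS): $entry" ;;
  esac
done
```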

2017-05-31 21:58 GMT+08:00 柴诗雨 <shaonian0...@163.com>:

> Hello, here is my question:
> I tried to use Kylin; here is my install environment:
>
>
> apache hadoop 2.7.3
> apache hbase 1.3.0
> apache hive 2.1.1
> apache kylin 2.0.0
>
>
> I followed the quick start sample, but when I built a cube, an error
> occurred in Step 3:
> java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/opt/bigdata/hive-2.1.1/lib/hive-exec-2.1.1.jar
>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
>         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
>         at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
>         at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
>         at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>         at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:149)
>         at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:130)
>         at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:102)
>         at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:123)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
>         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> I don't know why Kylin would go to HDFS to find this jar file.
>
>
>
>
> best wishes
> Chai shiyu
