Re: Issues in building cubes of Apache Kylin

2017-02-12 Thread 排骨瘦肉丁
Did you solve it? How did you solve it? Please tell me!

--
View this message in context: 
http://apache-kylin.74782.x6.nabble.com/Issues-in-building-cubes-of-Apache-Kylin-tp6530p7142.html
Sent from the Apache Kylin mailing list archive at Nabble.com.


Re: Issues in building cubes of Apache Kylin

2016-12-08 Thread Billy Liu
Glad to hear it.

2016-12-09 14:14 GMT+08:00 Tarun Vashisth <vashi...@adobe.com>:

> Thanks Billy for your help.
>
> Indeed, it was a configuration issue with Hadoop. We were able to resolve
> it, and Apache Kylin is now working fine.
>
>
> Regards,
>
> Tarun
>
> 
> From: Billy Liu <billy...@apache.org>
> Sent: Thursday, December 8, 2016 7:48:40 PM
> To: Samirul Haque
> Cc: Tarun Vashisth; d...@kylin.incubator.apache.org; Abhishek Shishodia
> Subject: Re: Issues in building cubes of Apache Kylin
>
> This doesn't seem to be a Kylin issue; the stack trace is in Hadoop code.
> In my limited experience, most Hadoop issues come from misconfiguration or
> incompatible components. Could you try a Hadoop distribution? Your Hadoop
> administrator may also be able to help.
>
> 2016-12-08 20:44 GMT+08:00 Samirul Haque <saha...@adobe.com<mailto:saha
> q...@adobe.com>>:
> Somehow we got past the issue from the last mail, but are now stuck at the
> point below:
>
> #3 Step Name: Extract Fact Table Distinct Columns
> Log error: no counters for job job_1481188794276_0003
>
> The Hadoop logs for this job show:
>
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /usr/local/hadoop/logs/userlogs/application_1481188794276_0003/container_1481188794276_0003_01_01 (Is a directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at org.apache.hadoop.yarn.ContainerLogAppender.activateOptions(ContainerLogAppender.java:55)
> at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
> at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
> at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
> at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
> at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
> at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
> at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
> at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
> at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
> at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:285)
> at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
> at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
> at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
> at org.apache.hadoop.service.AbstractService.<clinit>(AbstractService.java:43)
>
>
>
> From: Tarun Vashisth
> Sent: Thursday, December 08, 2016 5:54 PM
> To: Billy Liu <billy...@apache.org<mailto:billy...@apache.org>>; Samirul
> Haque <saha...@adobe.com<mailto:saha...@adobe.com>>
> Cc: d...@kylin.incubator.apache.org<mailto:d...@kylin.incubator.apache.org>;
> Abhishek Shishodia <shish...@adobe.com<mailto:shish...@adobe.com>>
> Subject: Re: Issues in building cubes of Apache Kylin
>
>
> Hi Billy,
>
>
>
> echo $HIVE_CONF gives output: /usr/local/hive/conf
>
> We are still getting the same error:
>
>
>
> java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser1254415952/.staging/job_local1254415952_0001/libjars/hive-metastore-1.2.1.jar
> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
> at org.apache.hadoop.mapreduce

Re: Issues in building cubes of Apache Kylin

2016-12-08 Thread Tarun Vashisth
Thanks Billy for your help.

Indeed, it was a configuration issue with Hadoop. We were able to resolve it,
and Apache Kylin is now working fine.


Regards,

Tarun


From: Billy Liu <billy...@apache.org>
Sent: Thursday, December 8, 2016 7:48:40 PM
To: Samirul Haque
Cc: Tarun Vashisth; d...@kylin.incubator.apache.org; Abhishek Shishodia
Subject: Re: Issues in building cubes of Apache Kylin

This doesn't seem to be a Kylin issue; the stack trace is in Hadoop code. In my
limited experience, most Hadoop issues come from misconfiguration or
incompatible components. Could you try a Hadoop distribution? Your Hadoop
administrator may also be able to help.

2016-12-08 20:44 GMT+08:00 Samirul Haque 
<saha...@adobe.com<mailto:saha...@adobe.com>>:
Somehow we got past the issue from the last mail, but are now stuck at the point below:

#3 Step Name: Extract Fact Table Distinct Columns
Log error: no counters for job job_1481188794276_0003

The Hadoop logs for this job show:

log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /usr/local/hadoop/logs/userlogs/application_1481188794276_0003/container_1481188794276_0003_01_01 (Is a directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.hadoop.yarn.ContainerLogAppender.activateOptions(ContainerLogAppender.java:55)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:285)
at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.service.AbstractService.<clinit>(AbstractService.java:43)



From: Tarun Vashisth
Sent: Thursday, December 08, 2016 5:54 PM
To: Billy Liu <billy...@apache.org<mailto:billy...@apache.org>>; Samirul Haque 
<saha...@adobe.com<mailto:saha...@adobe.com>>
Cc: d...@kylin.incubator.apache.org<mailto:d...@kylin.incubator.apache.org>; 
Abhishek Shishodia <shish...@adobe.com<mailto:shish...@adobe.com>>
Subject: Re: Issues in building cubes of Apache Kylin


Hi Billy,



echo $HIVE_CONF gives output: /usr/local/hive/conf

We are still getting the same error:



java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser1254415952/.staging/job_local1254415952_0001/libjars/hive-metastore-1.2.1.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapred

Re: Issues in building cubes of Apache Kylin

2016-12-08 Thread Billy Liu
This doesn't seem to be a Kylin issue; the stack trace is in Hadoop code. In my
limited experience, most Hadoop issues come from misconfiguration or
incompatible components. Could you try a Hadoop distribution? Your Hadoop
administrator may also be able to help.

2016-12-08 20:44 GMT+08:00 Samirul Haque <saha...@adobe.com>:

> Somehow we got past the issue from the last mail, but are now stuck at the
> point below:
>
> #3 Step Name: Extract Fact Table Distinct Columns
>
> Log error: no counters for job job_1481188794276_0003
>
> The Hadoop logs for this job show:
>
>
>
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: /usr/local/hadoop/logs/userlogs/application_1481188794276_0003/container_1481188794276_0003_01_01 (Is a directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at org.apache.hadoop.yarn.ContainerLogAppender.activateOptions(ContainerLogAppender.java:55)
> at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
> at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
> at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
> at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
> at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
> at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
> at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
> at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
> at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
> at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:285)
> at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
> at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
> at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
> at org.apache.hadoop.service.AbstractService.<clinit>(AbstractService.java:43)
>
>
>
>
>
>
>
> *From:* Tarun Vashisth
> *Sent:* Thursday, December 08, 2016 5:54 PM
> *To:* Billy Liu <billy...@apache.org>; Samirul Haque <saha...@adobe.com>
> *Cc:* d...@kylin.incubator.apache.org; Abhishek Shishodia <
> shish...@adobe.com>
> *Subject:* Re: Issues in building cubes of Apache Kylin
>
>
>
> Hi Billy,
>
>
>
> echo $HIVE_CONF gives output: /usr/local/hive/conf
>
> We are still getting the same error:
>
>
>
> java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser1254415952/.staging/job_local1254415952_0001/libjars/hive-metastore-1.2.1.jar
> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:5

Re: Issues in building cubes of Apache Kylin

2016-12-08 Thread Tarun Vashisth
Hi Billy,


echo $HIVE_CONF gives output: /usr/local/hive/conf

We are still getting the same error:


java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser1254415952/.staging/job_local1254415952_0001/libjars/hive-metastore-1.2.1.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:149)
at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:108)
at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:92)
at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

result code:2
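One hedged reading of the trace above, not stated in the thread: the `job_local…` prefix in the job ID means Hadoop ran the job through its LocalJobRunner, while the staging path it looked up lives on HDFS. That mismatch commonly appears when the client's `mapreduce.framework.name` is left at its local default. A sketch of the mapred-site.xml fragment that selects YARN (an assumption; verify against your cluster):

```xml
<!-- mapred-site.xml fragment (standard Hadoop 2.x property name);
     restart the Kylin client after changing it -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```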


Any pointers?


Regards,

Tarun


From: Billy Liu <billy...@apache.org>
Sent: Thursday, December 8, 2016 12:14:57 PM
To: Samirul Haque
Cc: dev; d...@kylin.incubator.apache.org; Abhishek Shishodia; Tarun Vashisth
Subject: Re: Issues in building cubes of Apache Kylin

It seems you were not using a Hadoop distribution such as HDP or CDH, but plain
Apache Hadoop, Apache Hive, and Apache HBase.

Could you try exporting HIVE_CONF to your Hive config path first?
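The suggestion above can be sketched as a quick shell snippet (the /usr/local/hive/conf path is the one reported later in this thread; adjust it for your install):

```shell
# Point HIVE_CONF at the Hive configuration directory before starting Kylin.
HIVE_CONF=/usr/local/hive/conf   # assumed path, taken from this thread
export HIVE_CONF

# Sanity check: a conf dir is a single path; a colon-separated value means
# HIVE_CONF accidentally picked up something PATH-like.
case "$HIVE_CONF" in
  *:*) echo "HIVE_CONF looks like a PATH, not a conf dir" ;;
  *)   echo "HIVE_CONF=$HIVE_CONF" ;;
esac
```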


2016-12-08 13:53 GMT+08:00 Samirul Haque 
<saha...@adobe.com<mailto:saha...@adobe.com>>:
Hi Billy,

Thanks for your response.

Output of bin/find-hive-dependency.sh is:

KYLIN_HOME is set to /usr/local/kylin
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in 
jar:file:/usr/local/hive/lib/hive-common-2.1.0.jar!/hive-log4j2.properties 
Async: true
HIVE_CONF is set to: 
/home/hduser/bin:/home/hduser/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/hadoop/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/hive/bin:/usr/local/kylin/bin:/usr/local/Hbase/bin:/usr/local/hadoop/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/hive/bin:/usr/local/kylin/bin:/usr/local/Hbase/bin:/usr/local/hadoop/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/hive/bin:/usr/local/kylin/bin:/usr/local/Hbase/bin:/usr/local/hadoop/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/hive/bin:/usr/local/kylin/bin:/usr/local/Hbase/bin:/usr/local/h
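A hedged observation on the output above: HIVE_CONF appears to contain the shell's PATH (a long colon-separated list of bin directories) rather than a single Hive configuration directory, which would explain why the Hive config is not found. A small sketch of a check that tells the two apart (the sample values below are hypothetical, abbreviated from the output above):

```shell
# A plausible conf dir versus a PATH-like value, as seen in the log above.
good="/usr/local/hive/conf"
bad="/usr/local/sbin:/usr/local/bin:/usr/local/hive/bin"   # hypothetical, abbreviated

# A conf dir is one path with no colon separators.
looks_like_conf_dir() {
  case "$1" in
    *:*) return 1 ;;
    *)   return 0 ;;
  esac
}

looks_like_conf_dir "$good" && echo "single directory: ok"
looks_like_conf_dir "$bad" || echo "colon-separated: looks like PATH"
```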

Re: Issues in building cubes of Apache Kylin

2016-12-07 Thread Billy Liu
at-1.1.1.jar:/usr/local/hive/lib/javax.servlet-3.0.0.v201112011016.jar:/usr/local/hive/lib/hbase-common-1.1.1-tests.jar:/usr/local/hive/lib/hamcrest-core-1.3.jar:/usr/local/hive/lib/asm-tree-3.1.jar:/usr/local/hive/lib/ST4-4.0.4.jar:/usr/local/hive/lib/hive-shims-2.1.0.jar:/usr/local/hive/lib/jcodings-1.0.8.jar:/usr/local/hive/lib/commons-collections-3.2.2.jar:/usr/local/hive/lib/tephra-hbase-compat-1.0-0.6.0.jar:/usr/local/hive/lib/curator-client-2.6.0.jar:/usr/local/hive/lib/geronimo-annotation_1.0_spec-1.1.1.jar:/usr/local/hive/lib/hbase-procedure-1.1.1.jar:/usr/local/hive/lib/netty-3.7.0.Final.jar:/usr/local/hive/lib/commons-dbcp-1.4.jar:/usr/local/hive/lib/hive-orc-2.1.0.jar:/usr/local/hive/lib/tephra-core-0.6.0.jar:/usr/local/hive/lib/maven-scm-provider-svnexe-1.4.jar:/usr/local/hive/lib/accumulo-trace-1.6.0.jar:/usr/local/hive/lib/org.abego.treelayout.core-1.0.1.jar:/usr/local/hive/lib/velocity-1.5.jar:/usr/local/hive/lib/log4j-core-2.4.1.jar:/usr/local/hive/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/usr/local/hive/lib/jasper-runtime-5.5.23.jar:/usr/local/hive/lib/protobuf-java-2.5.0.jar:/usr/local/hive/lib/commons-io-2.4.jar:/usr/local/hive/lib/curator-recipes-2.6.0.jar:/usr/local/hive/lib/super-csv-2.2.0.jar:/usr/local/hive/lib/hive-shims-common-2.1.0.jar:/usr/local/hive/lib/janino-2.7.6.jar:/usr/local/hive/lib/libthrift-0.9.3.jar:/usr/local/hive/lib/twill-discovery-api-0.6.0-incubating.jar:/usr/local/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar:/usr/local/hive/lib/commons-el-1.0.jar:/usr/local/hive/lib/hbase-protocol-1.1.1.jar:/usr/local/hive/lib/asm-3.1.jar:/usr/local/hive/lib/hive-hwi-2.1.0.jar:/usr/local/hive/lib/hive-shims-scheduler-2.1.0.jar:/usr/local/hive/lib/jpam-1.1.jar:/usr/local/hive/lib/commons-compiler-2.7.6.jar:/usr/local/hive/lib/mail-1.4.1.jar:/usr/local/hive/hcatalog/share/hcatalog/hive-hcatalog-core-2.1.0.jar
>
>
>
>
>
> Hadoop distribution which was used is :
>
> http://redrockdigimark.com/apachemirror/hadoop/core/stable/hadoop-2.7.3.tar.gz
>
>
>
> Please let me know if you need any other information.
>
>
>
> *From:* Billy Liu [mailto:billy...@apache.org]
> *Sent:* Thursday, December 08, 2016 6:19 AM
> *To:* dev <dev@kylin.apache.org>
> *Cc:* d...@kylin.incubator.apache.org; Samirul Haque <saha...@adobe.com>;
> Abhishek Shishodia <shish...@adobe.com>
> *Subject:* Re: Issues in building cubes of Apache Kylin
>
>
>
> Hi Tarun,
>
>
>
> What's the output of bin/find-hive-dependency.sh? It seems something is
> wrong with the environment variables.
>
> Which Hadoop distribution were you using?
>
>
>
> 2016-12-07 23:04 GMT+08:00 Tarun Vashisth <vashi...@adobe.com>:
>
> Hi,
>
>
>
> We are trying to build cubes for our data, and we are getting the
> following error during step 3 of cube building:
>
> File does not exist: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser341814501/.staging/job_local341814501_0007/libjars/hive-exec-2.1.0.jar
>
>
> Earlier, it was trying to find the jars at hdfs://localhost:54310/usr/local/hive/lib.
> We manually put all the Hive jars there, and after that it started
> failing at this step.
>
>   *   Is Kylin trying to create this directory hduser../ and so on, and
> unable to do so? Is it failing at that step?
>
> Hadoop Version: - 2.7.3
> Hive Version: - 2.1.0
> Hbase Version: - 1.2.4
> Kylin Version: - 1.6
>
> Any help would be much appreciated in this regard.
>
>
> Regards,
>
> Tarun
>
>
>


Re: Issues in building cubes of Apache Kylin

2016-12-07 Thread Billy Liu
Hi Tarun,

What's the output of bin/find-hive-dependency.sh? It seems something is
wrong with the environment variables.
Which Hadoop distribution were you using?

2016-12-07 23:04 GMT+08:00 Tarun Vashisth :

> Hi,
>
>
>
> We are trying to build cubes for our data, and we are getting the
> following error during step 3 of cube building:
>
> File does not exist: hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser341814501/.staging/job_local341814501_0007/libjars/hive-exec-2.1.0.jar
>
>
> Earlier, it was trying to find the jars at hdfs://localhost:54310/usr/local/hive/lib.
> We manually put all the Hive jars there, and after that it started
> failing at this step.
>
>   *   Is Kylin trying to create this directory hduser../ and so on, and
> unable to do so? Is it failing at that step?
>
> Hadoop Version: - 2.7.3
> Hive Version: - 2.1.0
> Hbase Version: - 1.2.4
> Kylin Version: - 1.6
>
> Any help would be much appreciated in this regard.
>
>
> Regards,
>
> Tarun
>