[
https://issues.apache.org/jira/browse/KYLIN-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709264#comment-14709264
]
ZhouQianhao commented on KYLIN-953:
-----------------------------------
Thanks [~jerryshao2015] for the feedback.
Surely the right way is to load the Hadoop-related conf automatically; however,
what Kylin chose is the fast way (using HBase's RunJar) to let HBase find its
own dependencies, and it seems some of the conf is missed. We will try to fix
it.
Btw, I am wondering why the code changed
from
Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
to
Path partitionsPath = new Path(conf.get("hbase.fs.tmp.dir"), "partitions_" +
UUID.randomUUID());
in your environment?
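For illustration, the null path could be avoided by falling back to /tmp when hbase.fs.tmp.dir is unset, which is what the older code effectively did. This is only a minimal sketch of that idea; it uses a plain Map as a stand-in for Hadoop's Configuration so it runs without HBase/Hadoop on the classpath, and the method name is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class PartitionsPathFallback {

    // Sketch: Map stands in for org.apache.hadoop.conf.Configuration here.
    // Fall back to /tmp when hbase.fs.tmp.dir is not set, instead of letting
    // Path's constructor throw "Can not create a Path from a null string".
    static String partitionsPath(Map<String, String> conf) {
        String tmpDir = conf.getOrDefault("hbase.fs.tmp.dir", "/tmp");
        return tmpDir + "/partitions_" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        // No hbase.fs.tmp.dir set: falls back to /tmp.
        Map<String, String> conf = new HashMap<>();
        System.out.println(partitionsPath(conf));

        // With the property set: the configured directory is used.
        conf.put("hbase.fs.tmp.dir", "/user/hbase/tmp");
        System.out.println(partitionsPath(conf));
    }
}
```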
> when running the cube job at "Convert Cuboid Data to HFile" step, an error is
> thrown
> -----------------------------------------------------------------------------------
>
> Key: KYLIN-953
> URL: https://issues.apache.org/jira/browse/KYLIN-953
> Project: Kylin
> Issue Type: Bug
> Components: Job Engine
> Affects Versions: v0.7.2
> Reporter: JerryShao
> Assignee: ZhouQianhao
>
> When the cube job runs at the "Convert Cuboid Data to HFile" step, it throws
> an error like below:
> [pool-5-thread-8]:[2015-08-18 09:43:15,854][ERROR][org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:98)] - error in CubeHFileJob
> java.lang.IllegalArgumentException: Can not create a Path from a null string
> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:123)
> at org.apache.hadoop.fs.Path.&lt;init&gt;(Path.java:135)
> at org.apache.hadoop.fs.Path.&lt;init&gt;(Path.java:89)
> at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:545)
> at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:394)
> at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureIncrementalLoad(HFileOutputFormat.java:88)
> at org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:89)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:112)
> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
> at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
> at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:133)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)