Shaofeng SHI created KYLIN-2679:
---
Summary: Report an error when a dimension uses "dict" encoding and a
Global dictionary is also configured for a "distinct_count" measure
Key: KYLIN-2679
URL: https://issues.apache.org/jira/b
Hi Zhifeng,
Thanks for sharing. A few questions: Was this change made in
Apache Calcite (and if so, which version)? Will it bring any concurrency issues?
If it is safe, it should be merged into Calcite. Thanks!
2017-06-22 10:48 GMT+08:00 苏 志锋 :
> Environment
>
Apache Kylin 1.6.0
>
>
The root cause should be "java.lang.NoClassDefFoundError:
org/cloudera/htrace/Trace". Please locate "htrace-core.jar" on the local
disk first, and then re-run the spark-submit command with this jar added
to the "--jars" parameter. If it works this time, you can then configure this
jar path in kylin.p
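A minimal sketch of that workaround. The search roots and the fallback jar path below are assumptions for a typical CDH layout; the `--class` value is the one used elsewhere in this thread, and the rest of the spark-submit flags are placeholders for the original command.

```shell
# Search for htrace-core.jar on the local disk (roots are assumptions; adjust).
HTRACE_JAR=$(find /usr/lib /opt/cloudera -name 'htrace-core*.jar' 2>/dev/null | head -n 1)
# Fall back to a typical CDH parcel path if nothing is found (also an assumption).
HTRACE_JAR=${HTRACE_JAR:-/opt/cloudera/parcels/CDH/jars/htrace-core.jar}
echo "using jar: ${HTRACE_JAR}"

# Then re-run the original spark-submit command with the jar added via --jars:
# ./spark-submit \
#     --class org.apache.kylin.common.util.SparkEntry \
#     --jars "${HTRACE_JAR}" \
#     ...   # keep the rest of the original command unchanged
```

If the job succeeds with the jar on the classpath, the same path can be made permanent in the Kylin configuration as suggested above.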
Hi ShaoFeng, there is no other error msg before or after this one in kylin.log, but I
tried to execute the command via spark-submit directly, like this:
./spark-submit --class org.apache.kylin.common.util.SparkEntry --conf
spark.executor.instance
Hi Jianhui,
We see there are many "READY" jobs; do you know what caused that?
Usually the job engine will automatically execute jobs whose state is
"READY". If those jobs are not needed, please discard them. After that,
running StorageCleanupJob will remove the HDFS folders for them.
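For reference, the cleanup invocation is typically along these lines. The class name below matches Kylin 1.x (it later moved to org.apache.kylin.tool.StorageCleanupJob), and the install path is an assumption, so verify both against your deployment before running.

```shell
# Assumed install location; point KYLIN_HOME at your actual deployment.
KYLIN_HOME=${KYLIN_HOME:-/usr/local/kylin}

# Dry run: list what would be removed without deleting anything.
LIST_CMD="${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false"
# Real run: actually delete the garbage after reviewing the dry-run output.
CLEAN_CMD="${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true"

echo "${LIST_CMD}"
echo "${CLEAN_CMD}"
```

Running the dry-run form first makes it easy to confirm that only the discarded jobs' folders and intermediate tables would be removed.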
2017-06-21 2
Hi Sky, glad to see it is moving forward. The "Failed to find metadata store by
url: kylin_metadata_2_0_0@hbase" error is not the root cause. Could you check
the log files further: is there any other error before or after this one?
2017-06-21 20:43 GMT+08:00 skyyws :
> Thank you for your suggestion, Shaofeng Sh
Hi Billy,
I'm sure that all jobs are DISCARD or SUCCEED on kylin's web UI
-----Original Message-----
From: Billy Liu [mailto:billy...@apache.org]
Sent: June 21, 2017 21:40
To: dev
Subject: Re: Reply: Can't cleanup expired data
Hi Jianhui,
The log says some jobs' status is READY. Kylin only drops the DISCARD or
SUCCEED jobs
My friend told me that putting the hdfs-site.xml into HADOOP_CONF_DIR will resolve
the HBase Kerberos issue. Have a try.
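A hedged sketch of that fix. Both directory paths below are assumptions (common defaults on many distributions); use wherever your cluster's client configs actually live, and verify before copying.

```shell
# Assumed locations; verify them on your nodes first.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
HDFS_SITE_SRC=${HDFS_SITE_SRC:-/etc/hadoop/conf.cluster/hdfs-site.xml}  # assumed source

echo "copy ${HDFS_SITE_SRC} -> ${HADOOP_CONF_DIR}/"
# cp "${HDFS_SITE_SRC}" "${HADOOP_CONF_DIR}/"   # run once the paths are verified
```

The point is simply that the Kerberos-aware HDFS client settings become visible to the process reading HADOOP_CONF_DIR.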
2017-06-20 0:23 GMT+08:00 ShaoFeng Shi :
> I think the root cause error is "Caused by: java.lang.IllegalAccessError:
> tried to access class
> org.apache.hadoop.hbase.client.AsyncProce
Hi Jianhui,
The log says some jobs' status is READY. Kylin only drops the DISCARD or
SUCCEED jobs and related intermediate tables. Could you discard those jobs
first and try the StorageCleanup tool again?
Hi Shaofeng,
This is the second time I have seen the storage cleanup issue from the community. It's
worth d
Thank you for your suggestion, Shaofeng Shi. I tried to use Hadoop client 2.7.3
and it worked. But I met another problem:
17/06/21 20:20:39 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID
Hi Shaofeng,
These are the logs. Also I found that when I purged a cube, its HIVE table could
not be cleaned up, for example for cube "c_all".
2017-06-21 19:34:52,430 INFO [main StorageCleanupJob:283]: Checking table
SLF4J: Class path contains multiple SLF4J bindings.
Hi Jianhui,
It is NOT safe to delete those folders manually. The HDFS folders not only
hold Cuboid files needed for further merges, but may also contain other files. So
please don't delete them unless you know the impact.
You mentioned that running StorageCleanupJob doesn't clean them up; did
you check the l
wangxianbin created KYLIN-2678:
--
Summary: found error in test case KylinConfigCLITest
Key: KYLIN-2678
URL: https://issues.apache.org/jira/browse/KYLIN-2678
Project: Kylin
Issue Type: Bug
peng.jianhua created KYLIN-2677:
---
Summary: There is no place to view the project configuration; it can
only be viewed on the edit project page.
Key: KYLIN-2677
URL: https://issues.apache.org/jira/browse/KYLIN-2677