Hi Chen,
Kylin provides a way to back up
metadata [http://kylin.apache.org/cn/docs/howto/howto_backup_metadata.html].
You can recover data from the backed-up metadata.
If you want to migrate a cube from one Kylin environment to another, you can use the
cube migration tool[http://kylin.apache.org/cn/docs/how
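For the backup/restore part, a minimal sketch of the documented commands (exact backup folder names vary by Kylin version, and $KYLIN_HOME must point at your installation):

```shell
# Dump all Kylin metadata to a local folder;
# by default it lands under $KYLIN_HOME/meta_backups/
$KYLIN_HOME/bin/metastore.sh backup

# Restore from a previously created backup folder
# (replace the path with the folder the backup step printed)
$KYLIN_HOME/bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_xxxx_xx_xx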
Using hour as the partition column should be fine. From the data, it seems
the declared column sequence does not match the persisted data.
Lifan, I see you posted the cube JSON; could you please also provide the
model's JSON? That would help us analyze the problem. Thank you!
Best regards,
Hi Chao Long
SnowLake
Hi, lifei,
After checking your model.json, I found that you use "HOUR_START" as your
partition_date_column, which is not correct.
I think you should change it to "timestamp" and try again.
Source code at
https://github.com/apache/kylin/blob/master/source-kafka/src/main/java/org/apache/kylin/source
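For reference, the relevant fragment of a model JSON would look roughly like this (the table and column names below are illustrative, not taken from lifei's actual model):

```json
{
  "partition_desc": {
    "partition_date_column": "DEFAULT.MY_STREAMING_TABLE.TIMESTAMP",
    "partition_time_column": null,
    "partition_date_start": 0,
    "partition_type": "APPEND"
  }
}
```

The point is that partition_date_column should name the raw timestamp column of the source table, not a derived time column such as HOUR_START.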
Hello, I am evaluating Kylin and tried to join a streaming table with a Hive
table, but got unexpected behavior.
All the scripts can be found at
https://gist.github.com/OstCollector/a4ac396e3169aa42a416d96db3021195
(you may need to modify some scripts to match your environment)
Environment:
CentOS 7
H
Hi, yang,
I cannot see your picture; you could try adding it as an attachment. If
you want to find out why the Spark task failed, check the YARN resource
manager or the Spark history server. And if you have resolved this problem,
you are welcome to share the solution with the community.
Best wishes!
Sent from Win
Hi, 廉立伟.
What's your Kylin version? There is a known issue with Chinese characters:
https://issues.apache.org/jira/browse/KYLIN-3705. If your Kylin version is
lower than 2.5.2, I advise you to upgrade to the latest Kylin version.
Here are some tips for getting more useful info: check the “Here a
PENG Zhengshuai created KYLIN-3814:
--
Summary: Add pause interval for job retry
Key: KYLIN-3814
URL: https://issues.apache.org/jira/browse/KYLIN-3814
Project: Kylin
Issue Type: Improvement
Hi Chen
--
Best Regards,
Chao Long
Dear All:
Because of data backup concerns, I have been looking into Kylin's backend storage, and I have a question.
In my testing I found that the amount of data under the directory Hdfs://${HAname}/kylin/kylin_metadata/kylin-${jobid}/${cubename}/cuboid matches the size of the corresponding segment in Kylin's HBase, and queries are not affected after I delete it.
My question is: is the data here kept intentionally?
The last few steps of the cube build process:
...
>> write out cuboid data
>> convert cuboid data to HFile
>> will, under Hdfs://${HAname}/kylin/kyli
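Rather than deleting those intermediate cuboid files by hand, Kylin ships a storage cleanup tool that identifies and removes unreferenced intermediate data. A minimal sketch (class name and --delete flag per the Kylin storage cleanup how-to; always do a dry run first):

```shell
# Dry run: only list the intermediate HDFS files/HBase tables that would be removed
${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete false

# Actually delete the garbage after reviewing the dry-run output
${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.StorageCleanupJob --delete true
```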