Shaofeng SHI created KYLIN-2248:
---
Summary: TopN merge further optimization after KYLIN-1917
Key: KYLIN-2248
URL: https://issues.apache.org/jira/browse/KYLIN-2248
Project: Kylin
Issue Type:
Shaofeng SHI created KYLIN-2247:
---
Summary: Automatically flush cache after executing "sample.sh" or
"metadata.sh restore"
Key: KYLIN-2247
URL: https://issues.apache.org/jira/browse/KYLIN-2247
Project: Kylin
hi,
Each dimension has a cardinality of about 5.9 million. When dict encoding was selected in the rowkey, the build produced an error:
"Too high cardinality is not suitable for dictionary -- cardinality: 5978388"
So I modified the model: I did not define the rowkey encoding and instead defined a global dictionary for all dimensions. The build succeeded, but queries fail with:
"AppendTrieDictionary can't retrive value from id"
I'm sorry to trouble you. I have found the reason why the build job
failed: I wrote the wrong Kafka host when I created the streaming table. When I
added a broker, I wrote the host as "localhost".
On 12/5/2016 11:00, 汪胜 wrote:
Hello, I installed kylin1.6.0 and kafka0.10.1, and
It's not recommended to "select *" without a limit. The query server cannot
handle too many records fetched from storage.
On 05/12/2016, 10:25 AM, "alaleiwang" wrote:
thanks a lot, I will check KYLIN-1936 and try v1.5.4 to see whether it will
solve
our problem when using "select *
According to hongbin's comments in JIRA KYLIN-1936, it fixed the limit push down
issue when dealing with multiple segments.
On 04/12/2016, 7:01 PM, "Alberto Ramón" wrote:
about KYLIN-1936: what does this change?
(does it keep GROUP BY in mind? does it keep in mind
Hello, I installed Kylin 1.6.0 and Kafka 0.10.1, and I followed the blog
"Scalable Cubing from Kafka (beta)" step by step, but the build job failed at
the first step "Save data from Kafka". The Hadoop YARN log shows the following
content:
"WARN [main] org.apache.hadoop.mapred.YarnChild: Exception
sorry, it's my mistake. When I click the "load hive table from tree" button, it's
always empty,
but when I click the "load hive table" button and input the table name, it's OK,
and when I click the "load hive table from tree" button again, it's OK too.
-- Original
thanks a lot, I will check KYLIN-1936 and try v1.5.4 to see whether it will solve
our problem when using "select * from table limit N".
I am still afraid of the case where a user issues a "select *" clause even without
"limit"; how can we deal with that one?
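One possible safeguard for the unbounded "select *" case is for the query layer to append a server-side default limit before the statement reaches storage. The sketch below is only an illustration of that idea; the function name, the regex-based detection, and the 50000 default are all assumptions for this example, not Kylin's actual API or configuration.

```python
import re

# Illustrative default cap; not a real Kylin setting.
DEFAULT_LIMIT = 50000

def enforce_limit(sql: str, default_limit: int = DEFAULT_LIMIT) -> str:
    """Append a LIMIT clause when the statement does not already have one,
    so an unbounded "select *" cannot flood the query server."""
    if re.search(r"\blimit\s+\d+\b", sql, re.IGNORECASE):
        return sql  # the user already bounded the result set
    return f"{sql.rstrip().rstrip(';')} LIMIT {default_limit}"

print(enforce_limit("select * from kylin_sales"))
# select * from kylin_sales LIMIT 50000
```

A queries that already carries its own "limit N" passes through unchanged, so the guard only affects the risky unbounded case.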
The limit push down issue is completely resolved by JIRA KYLIN-1936, which is
included in 1.5.4. So please try Kylin 1.5.4.
On 02/12/2016, 5:51 PM, "ShaoFeng Shi" wrote:
I remember hongbin made further optimizations on the limit push down after
1.5.3; @hongbin, can