Re: Redistribute intermediate table default not by rand()
Please move the high-cardinality dimensions to the leading position of the rowkey; that will make the data distribution more even. Chao Long wrote on Friday, Nov 2, 2018 at 1:38 PM: > Hi zhixin, > Data may become incorrect if "distribute by rand()" is used. > https://issues.apache.org/jira/browse/KYLIN-3388 > > > > > -- Original Message -- > From: "liuzhixin"; > Sent: Friday, Nov 2, 2018, 12:53 PM > To: "dev"; > Cc: "ShaoFeng Shi"; > Subject: Re: Redistribute intermediate table default not by rand() > > > > Hi kylin team: > > Step: Redistribute intermediate table > # > By default the first three dimension fields are chosen as the DISTRIBUTE BY keys, instead of DISTRIBUTE BY RAND(). > If there are no suitable dimension fields, this default strategy will make the data distribution even more skewed. > > Best Regards! > > > On Nov 2, 2018, at 12:03 PM, liuzhixin wrote: > > > > Hi kylin team: > > > > Version: Kylin2.5-hadoop3.1 for hdp3.0 > > # > > Step: Redistribute intermediate table > > # > > DISTRIBUTE BY is that: > > INSERT OVERWRITE TABLE table_intermediate SELECT * FROM > table_intermediate DISTRIBUTE BY Field1, Field2, Field3; > > # > > Not DISTRIBUTE BY RAND() > > # > > Is this default DISTRIBUTE BY Field1, Field2, Field3? how to DISTRIBUTE > BY RAND()? > > > > Best wishes. > > -- Best regards, Shaofeng Shi 史少锋
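Shaofeng's advice can be illustrated with a short sketch (plain Python, not Kylin or Hive code): Hive's DISTRIBUTE BY routes each row to reducer `hash(key) % N`, so a low-cardinality leading key can only ever reach as many reducers as it has distinct values, while a high-cardinality key spreads rows across all of them.

```python
# Sketch: why a high-cardinality leading column in DISTRIBUTE BY spreads rows
# across reducers more evenly than a low-cardinality one. The reducer count
# and key values are illustrative only.
from collections import Counter

NUM_REDUCERS = 10

def bucket_counts(keys, num_reducers=NUM_REDUCERS):
    """Simulate Hive's DISTRIBUTE BY: each row goes to reducer hash(key) % N."""
    return Counter(hash(k) % num_reducers for k in keys)

# Low-cardinality key (e.g. a 3-value "os" dimension): at most 3 reducers get data.
low_card = [("ios",), ("android",), ("other",)] * 10000
# High-cardinality key (e.g. a user id): rows spread over all reducers.
high_card = [(f"user_{i}",) for i in range(30000)]

low = bucket_counts(low_card)
high = bucket_counts(high_card)
assert len(low) <= 3              # most reducers sit idle -> data skew
assert len(high) == NUM_REDUCERS  # every reducer gets a share of the rows
```

DISTRIBUTE BY RAND() would also spread rows evenly, but as the KYLIN-3388 link in this thread notes, it can produce incorrect data, which is why a high-cardinality dimension key is the safer fix.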
Re: Redistribute intermediate table default not by rand()
Hi zhixin, Data may become incorrect if "distribute by rand()" is used. https://issues.apache.org/jira/browse/KYLIN-3388 -- Original Message -- From: "liuzhixin"; Sent: Friday, Nov 2, 2018, 12:53 PM To: "dev"; Cc: "ShaoFeng Shi"; Subject: Re: Redistribute intermediate table default not by rand() Hi kylin team: Step: Redistribute intermediate table # By default the first three dimension fields are chosen as the DISTRIBUTE BY keys, instead of DISTRIBUTE BY RAND(). If there are no suitable dimension fields, this default strategy will make the data distribution even more skewed. Best Regards! > On Nov 2, 2018, at 12:03 PM, liuzhixin wrote: > > Hi kylin team: > > Version: Kylin2.5-hadoop3.1 for hdp3.0 > # > Step: Redistribute intermediate table > # > DISTRIBUTE BY is that: > INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate > DISTRIBUTE BY Field1, Field2, Field3; > # > Not DISTRIBUTE BY RAND() > # > Is this default DISTRIBUTE BY Field1, Field2, Field3? how to DISTRIBUTE BY > RAND()? > > Best wishes. >
Re: Redistribute intermediate table default not by rand()
Hi kylin team: Step: Redistribute intermediate table # By default the first three dimension fields are chosen as the DISTRIBUTE BY keys, instead of DISTRIBUTE BY RAND(). If there are no suitable dimension fields, this default strategy will make the data distribution even more skewed. Best Regards! > On Nov 2, 2018, at 12:03 PM, liuzhixin wrote: > > Hi kylin team: > > Version: Kylin2.5-hadoop3.1 for hdp3.0 > # > Step: Redistribute intermediate table > # > DISTRIBUTE BY is that: > INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate > DISTRIBUTE BY Field1, Field2, Field3; > # > Not DISTRIBUTE BY RAND() > # > Is this default DISTRIBUTE BY Field1, Field2, Field3? how to DISTRIBUTE BY > RAND()? > > Best wishes. >
Redistribute intermediate table default not by rand()
Hi kylin team: Version: Kylin2.5-hadoop3.1 for hdp3.0 # Step: Redistribute intermediate table # The DISTRIBUTE BY statement is: INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate DISTRIBUTE BY Field1, Field2, Field3; # It is not DISTRIBUTE BY RAND() # Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use DISTRIBUTE BY RAND()? Best wishes.
[jira] [Created] (KYLIN-3662) exception message "Cannot find project '%s'." should be formatted
Lingang Deng created KYLIN-3662: --- Summary: exception message "Cannot find project '%s'." should be formatted Key: KYLIN-3662 URL: https://issues.apache.org/jira/browse/KYLIN-3662 Project: Kylin Issue Type: Bug Affects Versions: v2.5.0 Reporter: Lingang Deng Assignee: Lingang Deng When using the Kylin dashboard without the system cube, an exception is thrown as follows, {code:java} org.apache.kylin.rest.exception.BadRequestException: Cannot find project '%s'. at org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:378) at org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:359) at org.apache.kylin.rest.controller.DashboardController.getQueryMetrics(DashboardController.java:74) {code} The message is unfriendly to users. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
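The bug pattern behind KYLIN-3662 can be sketched in a few lines (a Python analogue of Java's `String.format`; the function and class names here are hypothetical, not the actual Kylin code): the message template is thrown as-is, so the user sees the literal `%s` instead of the project name.

```python
# Sketch of the unformatted-message bug: the exception is raised with the raw
# template, so "Cannot find project '%s'." reaches the user verbatim.
class BadRequestException(Exception):
    pass

MSG_TEMPLATE = "Cannot find project '%s'."

def lookup_project_buggy(name, projects):
    if name not in projects:
        # Bug: template passed through without substitution.
        raise BadRequestException(MSG_TEMPLATE)

def lookup_project_fixed(name, projects):
    if name not in projects:
        # Fix: substitute the actual project name into the template.
        raise BadRequestException(MSG_TEMPLATE % name)

try:
    lookup_project_fixed("demo", projects={})
except BadRequestException as e:
    assert str(e) == "Cannot find project 'demo'."
```

In the Java code the equivalent fix is to pass the message through `String.format` with the project name before constructing the exception.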
Re:Re: Re: [DISCUSS] New Kylin Streaming Solution From eBay
Hi ShaoFeng, For streaming ingest/query performance, there is a doc: https://drive.google.com/file/d/1GSBMpRuVQRmr8Ev2BWvssfMd-Rck9vsH/view?ths=true ; it is also in the design doc's 'performance' section attached to the JIRA: https://issues.apache.org/jira/browse/KYLIN-3654 Regarding stability, it is very stable in our environment, but it is not yet widely used within eBay, so it is hard to say. I will start merging the code to the master branch. It may take some time because our current version is based on Kylin 2.1.0; I hope it can be done before Nov. 30, but I cannot guarantee it, as there is a lot of other work to do. At 2018-11-01 15:08:12, "ShaoFeng Shi" wrote: >Hi Gang, > >Thank you for the information, that is helpful for understanding the >overall design and implementation. > >Do you have some statistical information, like performance, throughput, >stability, etc.? Besides, what's the plan of contributing it to the >community? Thanks! > > >Ma Gang wrote on Thursday, Nov 1, 2018 at 2:45 PM: > >> Thanks Xiaoxiang, >> Very good questions! Please see my comments started with [Gang]: >> >> >> 1. Is it possible to use Yarn as cluster manager for index task. >> Coordinator process will set up them at specificed period. >> [Gang] I think it is possible, but in current design, the indexing task >> is designed as long running task, it also can provide query service, this >> makes the whole system very simple and efficiency, I don't think we need to >> stop/start indexing task time by time. But use yarn to manage the resource >> is possible, we need to redesign the existing coordinator, to make it easy >> to deploy to Yarn, Kubernetes, etc. Hope this can be done after >> contribution to community. >> >> 2. As I know, ebay’s New Kylin Streaming Solution use replica Set to >> ensure that income messages wouldn’t lost if some processes lost. I think >> replica set is a set of kafka cosumer processes which is responsible for >> ingest message and build base cuboid in memory. 
Could you please show me >> some detail about how replica Set provide HA guarantee? How to configure >> it? A link / paper is OK. I found one but I don’t know if it same meaning >> for your replica Set. >> >> >> [Gang] Yes, it is similar as the MongoDB replication, but currently we >> don't replicate data from Primary node, just assign the same Kafka >> topic/partitions to the receivers in a ReplicaSet, all receivers in a >> ReplicaSet will consume data from Kafka, so if one receiver is down, other >> receivers in the ReplicaSet are still consuming the same Kafka data, so the >> consume/query will not be impact. And We don't guarantee that the receivers >> in a ReplicaSet have the same consuming rate, but we can guarantee that the >> user can view data consistently by stick to the query to one receiver for >> one cube. >> The HA implementation is a little bit naive, but simple and worked. Maybe >> in the future, we can do HA by replication to support other streaming >> sources that don't support multiple consumers and don't have persistent >> store. >> >> 3. How to add or remove node of replica Set in production env? How to >> monitor the health/pressure of replica Set cluster ? >> [Gang] Currently we have UI/restful api to let admin to add/remove node >> to/from a ReplicaSet, and have a simple ui to let admin monitor the health, >> consuming rate for each receiver/cube. Also all metrics are collected using >> yammer metrics framework, it is easy to exposed to other monitor system. >> >> 4. Does all measure are supported in ebay’s New Kylin Streaming >> Solution? What about count distinct(bitmap)? >> [Gang] Most measures are supported, but precise count distinct(bitmap) is >> not support in case that the distinct dimension is not int type. As you >> know, to support precise count distinct for not-int type dimension, it >> needs to build global dictionary, it is not possible in the streaming env. >> >> >> 5. 
It seems ebay’s New Kylin Streaming Solution use a custom columnar >> storage, why not use a open source mature columnar storage solution ? Have >> your ever compare the performance of your custom columnar storage to open >> source columnar storage solution ? >> >> [Gang] Most open source columnar format like Parquet, ORC are designed to >> use in Hadoop env, the streaming data are in local disk, so I didn't >> consider them at the beginning. It is not very hard to define columnar >> format to store Kylin specific data, use a customize columnar storage, you >> can use mmap file to scan data, add row-level invert index for all >> dimensions, so I think the performance will be better compared to using >> common columnar format. I didn't compare the performance, but the storage >> engine is pluggable, you may contribute a parquet storage if you are >> interesting. >> >> >> >> >> >> >> At 2018-11-01 12:42:25, "Xiaoxiang Yu" wrote: >> >Hi gang, I am so glad to know that eBay has a solution for realtime olap >> on kylin. I have some small question:
Re: [DISCUSS] New Kylin Streaming Solution From eBay
Thank you for your reply. Maybe I can help to improve your Kylin Streaming Solution in the future. Best wishes, Xiaoxiang Yu On [DATE], "[NAME]" <[ADDRESS]> wrote: Thanks Xiaoxiang, Very good questions! Please see my comments started with [Gang]: 1. Is it possible to use Yarn as cluster manager for index task. Coordinator process will set up them at specificed period. [Gang] I think it is possible, but in current design, the indexing task is designed as long running task, it also can provide query service, this makes the whole system very simple and efficiency, I don't think we need to stop/start indexing task time by time. But use yarn to manage the resource is possible, we need to redesign the existing coordinator, to make it easy to deploy to Yarn, Kubernetes, etc. Hope this can be done after contribution to community. 2. As I know, ebay’s New Kylin Streaming Solution use replica Set to ensure that income messages wouldn’t lost if some processes lost. I think replica set is a set of kafka cosumer processes which is responsible for ingest message and build base cuboid in memory. Could you please show me some detail about how replica Set provide HA guarantee? How to configure it? A link / paper is OK. I found one but I don’t know if it same meaning for your replica Set. [Gang] Yes, it is similar as the MongoDB replication, but currently we don't replicate data from Primary node, just assign the same Kafka topic/partitions to the receivers in a ReplicaSet, all receivers in a ReplicaSet will consume data from Kafka, so if one receiver is down, other receivers in the ReplicaSet are still consuming the same Kafka data, so the consume/query will not be impact. And We don't guarantee that the receivers in a ReplicaSet have the same consuming rate, but we can guarantee that the user can view data consistently by stick to the query to one receiver for one cube. The HA implementation is a little bit naive, but simple and worked. 
Maybe in the future, we can do HA by replication to support other streaming sources that don't support multiple consumers and don't have persistent store. 3. How to add or remove node of replica Set in production env? How to monitor the health/pressure of replica Set cluster ? [Gang] Currently we have UI/restful api to let admin to add/remove node to/from a ReplicaSet, and have a simple ui to let admin monitor the health, consuming rate for each receiver/cube. Also all metrics are collected using yammer metrics framework, it is easy to exposed to other monitor system. 4. Does all measure are supported in ebay’s New Kylin Streaming Solution? What about count distinct(bitmap)? [Gang] Most measures are supported, but precise count distinct(bitmap) is not support in case that the distinct dimension is not int type. As you know, to support precise count distinct for not-int type dimension, it needs to build global dictionary, it is not possible in the streaming env. 5. It seems ebay’s New Kylin Streaming Solution use a custom columnar storage, why not use a open source mature columnar storage solution ? Have your ever compare the performance of your custom columnar storage to open source columnar storage solution ? [Gang] Most open source columnar format like Parquet, ORC are designed to use in Hadoop env, the streaming data are in local disk, so I didn't consider them at the beginning. It is not very hard to define columnar format to store Kylin specific data, use a customize columnar storage, you can use mmap file to scan data, add row-level invert index for all dimensions, so I think the performance will be better compared to using common columnar format. I didn't compare the performance, but the storage engine is pluggable, you may contribute a parquet storage if you are interesting. At 2018-11-01 12:42:25, "Xiaoxiang Yu" wrote: >Hi gang, I am so glad to know that eBay has a solution for realtime olap on kylin. I have some small question: > > >1. 
Is it possible to use Yarn as cluster manager for index task. Coordinator process will set up them at specificed period. Yarn will manage : > >a) retry these task if some failed > >b) resource allocation > >c) log collection > >2. As I know, ebay’s New Kylin Streaming Solution use replica Set to ensure that income messages wouldn’t lost if some processes lost. I think replica set is a set of kafka cosumer processes which is responsible for ingest message and build base cuboid in memory. Could you please show me some detail about how replica Set provide HA guarantee? How to configure it? A link / paper is OK. I found one but I don’t know if it same meaning for your replica Set. > >a) [Mongodb
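The ReplicaSet behaviour discussed in this thread can be sketched as follows (class and receiver names are hypothetical, not the actual eBay implementation): every receiver in a ReplicaSet consumes the SAME Kafka partitions, so losing one receiver loses no data, and a query sticks to a single healthy receiver so the user sees consistent results.

```python
# Sketch of the ReplicaSet HA idea: all receivers consume the same partitions,
# so failover is just picking another healthy receiver.
class ReplicaSet:
    def __init__(self, receivers, partitions):
        self.receivers = set(receivers)   # every receiver consumes all of these
        self.partitions = partitions      # Kafka partitions assigned to the set

    def fail(self, receiver):
        """Remove a dead receiver; the others keep consuming the same data."""
        self.receivers.discard(receiver)

    def pick_receiver_for_query(self):
        # Stick queries to one healthy receiver (here: lexicographically first)
        # so a user's view of the data stays consistent.
        if not self.receivers:
            raise RuntimeError("ReplicaSet unavailable")
        return min(self.receivers)

rs = ReplicaSet(["receiver-a", "receiver-b"], partitions=[0, 1, 2])
assert rs.pick_receiver_for_query() == "receiver-a"
rs.fail("receiver-a")
# The surviving receiver already holds the same partitions -> no data loss.
assert rs.pick_receiver_for_query() == "receiver-b"
```

This differs from MongoDB-style replication (the link mentioned in the question): no data is copied between receivers; redundancy comes from each receiver independently consuming the same Kafka topic/partitions.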
Re: Re: [DISCUSS] New Kylin Streaming Solution From eBay
Hi Gang, Thank you for the information, that is helpful for understanding the overall design and implementation. Do you have some statistical information, like performance, throughput, stability, etc.? Besides, what's the plan of contributing it to the community? Thanks! Ma Gang wrote on Thursday, Nov 1, 2018 at 2:45 PM: > Thanks Xiaoxiang, > Very good questions! Please see my comments started with [Gang]: > > > 1. Is it possible to use Yarn as cluster manager for index task. > Coordinator process will set up them at specificed period. > [Gang] I think it is possible, but in current design, the indexing task > is designed as long running task, it also can provide query service, this > makes the whole system very simple and efficiency, I don't think we need to > stop/start indexing task time by time. But use yarn to manage the resource > is possible, we need to redesign the existing coordinator, to make it easy > to deploy to Yarn, Kubernetes, etc. Hope this can be done after > contribution to community. > > 2. As I know, ebay’s New Kylin Streaming Solution use replica Set to > ensure that income messages wouldn’t lost if some processes lost. I think > replica set is a set of kafka cosumer processes which is responsible for > ingest message and build base cuboid in memory. Could you please show me > some detail about how replica Set provide HA guarantee? How to configure > it? A link / paper is OK. I found one but I don’t know if it same meaning > for your replica Set. > > > [Gang] Yes, it is similar as the MongoDB replication, but currently we > don't replicate data from Primary node, just assign the same Kafka > topic/partitions to the receivers in a ReplicaSet, all receivers in a > ReplicaSet will consume data from Kafka, so if one receiver is down, other > receivers in the ReplicaSet are still consuming the same Kafka data, so the > consume/query will not be impact. 
And We don't guarantee that the receivers > in a ReplicaSet have the same consuming rate, but we can guarantee that the > user can view data consistently by stick to the query to one receiver for > one cube. > The HA implementation is a little bit naive, but simple and worked. Maybe > in the future, we can do HA by replication to support other streaming > sources that don't support multiple consumers and don't have persistent > store. > > 3. How to add or remove node of replica Set in production env? How to > monitor the health/pressure of replica Set cluster ? > [Gang] Currently we have UI/restful api to let admin to add/remove node > to/from a ReplicaSet, and have a simple ui to let admin monitor the health, > consuming rate for each receiver/cube. Also all metrics are collected using > yammer metrics framework, it is easy to exposed to other monitor system. > > 4. Does all measure are supported in ebay’s New Kylin Streaming > Solution? What about count distinct(bitmap)? > [Gang] Most measures are supported, but precise count distinct(bitmap) is > not support in case that the distinct dimension is not int type. As you > know, to support precise count distinct for not-int type dimension, it > needs to build global dictionary, it is not possible in the streaming env. > > > 5. It seems ebay’s New Kylin Streaming Solution use a custom columnar > storage, why not use a open source mature columnar storage solution ? Have > your ever compare the performance of your custom columnar storage to open > source columnar storage solution ? > > [Gang] Most open source columnar format like Parquet, ORC are designed to > use in Hadoop env, the streaming data are in local disk, so I didn't > consider them at the beginning. 
It is not very hard to define columnar > format to store Kylin specific data, use a customize columnar storage, you > can use mmap file to scan data, add row-level invert index for all > dimensions, so I think the performance will be better compared to using > common columnar format. I didn't compare the performance, but the storage > engine is pluggable, you may contribute a parquet storage if you are > interesting. > > > > > > > At 2018-11-01 12:42:25, "Xiaoxiang Yu" wrote: > >Hi gang, I am so glad to know that eBay has a solution for realtime olap > on kylin. I have some small question: > > > > > >1. Is it possible to use Yarn as cluster manager for index task. > Coordinator process will set up them at specificed period. Yarn will manage > : > > > >a) retry these task if some failed > > > >b) resource allocation > > > >c) log collection > > > >2. As I know, ebay’s New Kylin Streaming Solution use replica Set to > ensure that income messages wouldn’t lost if some processes lost. I think > replica set is a set of kafka cosumer processes which is responsible for > ingest message and build base cuboid in memory. Could you please show me > some detail about how replica Set provide HA guarantee? How to configure > it? A link / paper is OK. I found one but I don’t know if it same meaning > for
Re:Re: [DISCUSS] New Kylin Streaming Solution From eBay
Thanks Xiaoxiang, Very good questions! Please see my comments starting with [Gang]: 1. Is it possible to use Yarn as the cluster manager for index tasks? The Coordinator process would set them up at a specified period. [Gang] I think it is possible, but in the current design the indexing task is designed as a long-running task; it also provides query service, which makes the whole system very simple and efficient, so I don't think we need to stop and start indexing tasks from time to time. But using Yarn to manage the resources is possible; we would need to redesign the existing coordinator to make it easy to deploy to Yarn, Kubernetes, etc. Hopefully this can be done after the contribution to the community. 2. As I understand, eBay's New Kylin Streaming Solution uses a replica set to ensure that incoming messages are not lost if some processes are lost. I think a replica set is a set of Kafka consumer processes responsible for ingesting messages and building the base cuboid in memory. Could you please show me some detail about how the replica set provides an HA guarantee? How to configure it? A link / paper is OK. I found one but I don't know if it has the same meaning as your replica set. [Gang] Yes, it is similar to MongoDB replication, but currently we don't replicate data from a primary node; we just assign the same Kafka topic/partitions to the receivers in a ReplicaSet. All receivers in a ReplicaSet consume data from Kafka, so if one receiver is down, the other receivers in the ReplicaSet are still consuming the same Kafka data, and consumption/queries will not be impacted. We don't guarantee that the receivers in a ReplicaSet have the same consuming rate, but we can guarantee that the user views data consistently, by sticking the queries for one cube to one receiver. The HA implementation is a little bit naive, but it is simple and it works. Maybe in the future we can do HA by replication, to support other streaming sources that don't support multiple consumers and don't have a persistent store. 3. How to add or remove a node of the replica set in a production env? How to monitor the health/pressure of the replica set cluster? [Gang] Currently we have a UI/RESTful API to let the admin add/remove nodes to/from a ReplicaSet, and a simple UI to let the admin monitor the health and consuming rate for each receiver/cube. Also, all metrics are collected using the Yammer metrics framework, so they are easy to expose to other monitoring systems. 4. Are all measures supported in eBay's New Kylin Streaming Solution? What about count distinct (bitmap)? [Gang] Most measures are supported, but precise count distinct (bitmap) is not supported when the distinct-count dimension is not an int type. As you know, to support precise count distinct for a non-int dimension, a global dictionary has to be built, which is not possible in the streaming env. 5. It seems eBay's New Kylin Streaming Solution uses a custom columnar storage; why not use an open source, mature columnar storage solution? Have you ever compared the performance of your custom columnar storage to an open source columnar storage solution? [Gang] Most open source columnar formats like Parquet and ORC are designed for use in a Hadoop env, while the streaming data are on local disk, so I didn't consider them at the beginning. It is not very hard to define a columnar format to store Kylin-specific data, and with a customized columnar storage you can use mmap files to scan data and add a row-level inverted index for all dimensions, so I think the performance will be better compared to using a common columnar format. I didn't compare the performance, but the storage engine is pluggable; you may contribute a Parquet storage if you are interested. At 2018-11-01 12:42:25, "Xiaoxiang Yu" wrote: >Hi gang, I am so glad to know that eBay has a solution for realtime olap on >kylin. I have some small question: > > >1. Is it possible to use Yarn as cluster manager for index task. >Coordinator process will set up them at specificed period. Yarn will manage : > >a) retry these task if some failed > >b) resource allocation > >c) log collection > >2. As I know, ebay’s New Kylin Streaming Solution use replica Set to >ensure that income messages wouldn’t lost if some processes lost. I think >replica set is a set of kafka cosumer processes which is responsible for >ingest message and build base cuboid in memory. Could you please show me some >detail about how replica Set provide HA guarantee? How to configure it? A link >/ paper is OK. I found one but I don’t know if it same meaning for your >replica Set. > >a) [Mongodb replication](https://docs.mongodb.com/manual/replication/). > >3. How to add or remove node of replica Set in production env? How to >monitor the health/pressure of replica Set cluster ? > >4. Does all measure are supported in ebay’s New Kylin Streaming Solution? >What about count distinct(bitmap)? > >5. It seems ebay’s New Kylin Streaming Solution use a custom
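The row-level inverted index mentioned for the custom columnar store can be sketched like this (a toy sketch only; the real storage also uses mmap-ed files and per-column encoding, which are omitted here): each dimension value maps to the list of row ids that contain it, so a filter on a dimension becomes a direct lookup instead of a full column scan.

```python
# Toy sketch of a row-level inverted index over one dimension column.
from collections import defaultdict

def build_inverted_index(column_values):
    """Map each dimension value -> sorted list of row ids containing it."""
    index = defaultdict(list)
    for row_id, value in enumerate(column_values):
        index[value].append(row_id)   # row ids arrive in increasing order
    return index

os_column = ["ios", "android", "ios", "other", "android", "ios"]
idx = build_inverted_index(os_column)
# A filter like "os = 'ios'" becomes a row-id lookup instead of a scan.
assert idx["ios"] == [0, 2, 5]
assert idx["other"] == [3]
```

Real columnar stores typically keep such postings lists compressed (e.g. as bitmaps), but the lookup-instead-of-scan idea is the same.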
[jira] [Created] (KYLIN-3661) query returns inconsistent result
wangxianbin created KYLIN-3661: -- Summary: query returns inconsistent result Key: KYLIN-3661 URL: https://issues.apache.org/jira/browse/KYLIN-3661 Project: Kylin Issue Type: Bug Affects Versions: v2.4.0 Reporter: wangxianbin Attachments: one.png, three.png, two.png Three queries on the same cube. Obviously, the only difference between query one and query two is "group by os"; however, the second query returns smaller "uv" and "pv" values, which is wrong, while query three returns the correct result. h1. query one SELECT a.os, count(DISTINCT a.DUID) AS "uv", CASE WHEN sum(a.pv) IS NULL THEN 0 ELSE sum(a.pv) END AS "pv" FROM dw_netflow.visit_all a JOIN DW_NETFLOW.DW_DIM_DATE b ON a.dt = b.day_name WHERE a.dt = '2018-10-13' AND a.LABEL_TYPE = 'EVENT' group by a.os h1. result |OS|uv|pv| |other|4657|869656| |android|1713172|198955150| |ios|118205|8438544| h1. query two SELECT count(DISTINCT a.DUID) AS "uv", CASE WHEN sum(a.pv) IS NULL THEN 0 ELSE sum(a.pv) END AS "pv" FROM dw_netflow.visit_all a JOIN DW_NETFLOW.DW_DIM_DATE b ON a.dt = b.day_name WHERE a.dt = '2018-10-13' AND a.LABEL_TYPE = 'EVENT' h1. result |uv|pv| |699022|30428195| h1. query three SELECT count(DISTINCT a.DUID) AS "uv", CASE WHEN sum(a.pv) IS NULL THEN 0 ELSE sum(a.pv) END AS "pv" FROM dw_netflow.visit_all a JOIN DW_NETFLOW.DW_DIM_DATE b ON a.dt = b.day_name WHERE a.dt = '2018-10-13' AND a.LABEL_TYPE = 'EVENT' AND a.os in ('ios','android','other') h1. result |uv|pv| |1830387|208263350| -- This message was sent by Atlassian JIRA (v7.6.3#76005)
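The invariants that KYLIN-3661's second result violates can be sketched on toy data (the rows below are made up for illustration): for the same un-filtered query, the overall distinct count can never be smaller than any single group's distinct count, and the overall pv must equal the sum of the per-group pv values. Note that distinct counts are NOT additive across groups (a DUID can appear under several os values), which is why uv differs from the per-group sum even when everything is correct.

```python
# Toy check of the invariants: rows are (os, duid, pv).
rows = [
    ("ios", "u1", 3), ("ios", "u2", 1),
    ("android", "u2", 5), ("android", "u3", 2),
    ("other", "u4", 4),
]

# Per-group uv (distinct duids) and pv, like "query one".
groups = {}
for os_name, duid, pv in rows:
    uv_set, pv_sum = groups.setdefault(os_name, (set(), 0))
    groups[os_name] = (uv_set | {duid}, pv_sum + pv)

# Overall uv and pv, like "query two".
total_uv = len({duid for _, duid, _ in rows})
total_pv = sum(pv for _, _, pv in rows)

# Invariants the buggy result breaks:
assert total_uv >= max(len(uv) for uv, _ in groups.values())
assert total_pv == sum(pv for _, pv in groups.values())
# Non-additivity of count distinct ("u2" appears in two groups):
assert total_uv < sum(len(uv) for uv, _ in groups.values())
```

In the reported bug, the un-grouped query returned uv=699022 and pv=30428195, far below the per-group figures, so both invariants fail, pointing at a wrong cuboid or segment being chosen for the un-grouped query.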