Re: Question about cube size estimation in Kylin 1.5

2016-04-26 Thread ShaoFeng Shi
The issue is very likely related to
https://issues.apache.org/jira/browse/KYLIN-1624; you can wait for v1.5.2,
or cherry-pick the HLL-related commits (on the master branch) that Yang made
yesterday.


2016-04-26 17:49 GMT+08:00 ShaoFeng Shi :

> Hi Dayue,
>
> could you please open a JIRA for this, and make it configurable? As far as I
> know, Kylin now allows cube-level configurations to override kylin.properties,
> so with this you can customize the magic number at the cube level.
>
> Thanks;
>
> 2016-04-25 15:01 GMT+08:00 Li Yang :
>
>> The magic coefficient accounts for HBase compression on keys and values: the
>> final cube size is much smaller than the sum of all keys and values, which is
>> why we multiply by the coefficient. It's purely empirical at the moment, and
>> it will vary depending on the key encoding and compression applied to the
>> HTable.
>>
>> At a minimum, we should make it configurable, I think.
>>
>> On Mon, Apr 18, 2016 at 4:38 PM, Dayue Gao  wrote:
>>
>> > Hi everyone,
>> >
>> >
>> > I made several cubing tests on 1.5 and found most of the time was spent
>> > on the "Convert Cuboid Data to HFile" step due to a lack of reducer
>> > parallelism. It seems that the estimated cube size is too small compared
>> > to the actual size, which leads to a small number of regions (and hence
>> > reducers) being created. The setup and results of the tests are:
>> >
>> >
>> > Cube#1: source_record=11998051, estimated_size=8805MB, coefficient=0.25,
>> > region_cut=5GB, #regions=2, actual_size=49GB
>> > Cube#2: source_record=123908390, estimated_size=4653MB, coefficient=0.05,
>> > region_cut=10GB, #regions=2, actual_size=144GB
>> >
>> >
>> > The "coefficient" is from CubeStatsReader#estimateCuboidStorageSize,
>> which
>> > looks mysterious to me. Currently the formula for cuboid size
>> estimation is
>> >
>> >
>> >   size(cuboid) = rows(cuboid) x row_size(cuboid) x coefficient
>> >   where coefficient = has_memory_hungry_measures(cube) ? 0.05 : 0.25
>> >
>> >
>> > Why do we multiply by the coefficient? And why is it five times smaller in
>> > the memory-hungry case? Could someone explain the rationale behind it?
>> >
>> >
>> > Thanks, Dayue
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>
>
>
> --
> Best regards,
>
> Shaofeng Shi
>
>


-- 
Best regards,

Shaofeng Shi
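
For reference, a minimal sketch of the estimate quoted above, assuming only the
formula stated in the thread for CubeStatsReader#estimateCuboidStorageSize; the
class, method and variable names below are illustrative, not the actual Kylin
source, and the example row size is made up:

// Sketch only: size(cuboid) = rows(cuboid) x row_size(cuboid) x coefficient,
// with coefficient = 0.05 for memory-hungry measures and 0.25 otherwise.
public class CuboidSizeEstimate {

    static double estimateCuboidSizeMB(long rows, double rowSizeBytes, boolean memoryHungry) {
        double coefficient = memoryHungry ? 0.05 : 0.25;
        return rows * rowSizeBytes * coefficient / (1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        // Row size here is a made-up value; the thread does not give it.
        System.out.println(estimateCuboidSizeMB(123908390L, 160.0, true) + " MB");
    }
}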


Re: Empty result return in the Insight query kylin1.5.1

2016-04-26 Thread hongbin ma
Please check the content of the first step's output hive table. Usually this
happens because that table has 0 rows.

On Wed, Apr 27, 2016 at 11:58 AM, Tao Li(Internship) 
wrote:

> Hi,
>
> I have successfully built a cube; however, the cube size is zero and
> the query result is empty in the Insight query.
>
>the environment of my cluster:
>hadoop 2.5.2 + hive 1.2.1 + hbase 1.1.3 + kylin 1.5.1
>
> Best regards,
>
> Tao Li
>
>
>
>
>
> This email message (including any attachments) is confidential and may be
> legally privileged. If you have received it by mistake, please notify the
> sender by return email and delete this message from your system. Any
> unauthorized use or dissemination of this message in whole or in part is
> strictly prohibited. Envision Energy Limited and all its subsidiaries shall
> not be liable for the improper or incomplete transmission of the
> information contained in this email nor for any delay in its receipt or
> damage to your system. Envision Energy Limited does not guarantee the
> integrity of this email message, nor that this email message is free of
> viruses, interceptions, or interference.
>



-- 
Regards,

*Bin Mahone | 马洪宾*
Apache Kylin: http://kylin.io
Github: https://github.com/binmahone


[jira] [Created] (KYLIN-1628) BitMapFilterEvaluatorTest Failed

2016-04-26 Thread Dong Li (JIRA)
Dong Li created KYLIN-1628:
--

 Summary: BitMapFilterEvaluatorTest Failed
 Key: KYLIN-1628
 URL: https://issues.apache.org/jira/browse/KYLIN-1628
 Project: Kylin
  Issue Type: Bug
  Components: Tools, Build and Test
Affects Versions: v1.3.0
Reporter: Dong Li
Assignee: Dong Li


The roaringbitmap version is specified as a range in pom.xml, and
RoaringBitmap.flip(start, end) is deprecated in the latest version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: multi hadoop cluster

2016-04-26 Thread ShaoFeng Shi
If the cube has no data, you'd better first check the hive source table.

Kylin runs the "hive -e" shell command to generate the intermediate flat table
in the first step of the cube job. If your "hive" command can read the right
table, Kylin should be able to as well.

The intermediate hive table is dropped at the end of the cube build, but you
can rebuild the cube and check the intermediate table's content before the
build finishes. Or you can directly run the hive "create intermediate table"
SQL (it can be found in the "parameter" of the first step).

With the intermediate table, you can check whether it has data. If it has no
data, you can further check the filter conditions; sometimes a mismatched date
format causes this.

If the intermediate table has data while the cube has no data, then that is a
real problem, but I have almost never seen it before.
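
A minimal sketch of that kind of row-count check over HiveServer2 JDBC; the
connection string and credentials are assumptions, and the intermediate table
name is the one quoted below in this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Counts the rows of the intermediate flat table through Hive's JDBC driver.
public class CheckIntermediateTable {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                "SELECT COUNT(*) FROM kylin_intermediate_cube8_2016030100_2016041300")) {
            while (rs.next()) {
                System.out.println("intermediate table rows: " + rs.getLong(1));
            }
        }
    }
}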

2016-04-27 10:08 GMT+08:00 bitbean :

> Not quite a match.
>
>
> My case: the local hive and the remote hive share the metadata database, so my
> local hive can see the fact table on the remote hdfs.
>
>
> I use the fact table on the remote hdfs. Now my trouble is that kylin builds
> the cube in 5 minutes, but the resulting htable has no data.
>
>
> I guess kylin doesn't fetch data from the remote hdfs, and kylin doesn't tell
> me it can't fetch the data.
>
>
>
>
> So can you help me resolve it?
>
>
>
>
> -- Original Message --
> From: "ShaoFeng Shi";;
> Sent: Tuesday, April 26, 2016, 5:42 PM
> To: "dev";
>
> Subject: Re: multi hadoop cluster
>
>
>
> will this match your case?
> https://issues.apache.org/jira/browse/KYLIN-1172
>
> 2016-04-26 16:55 GMT+08:00 bitbean :
>
> > Hi all,
> >
> >  I am encountering a problem with multiple hadoop clusters.
> >
> >
> >  Kylin submits its job to YARN on one hdfs, but my fact table is on another
> > hdfs. The two hadoop clusters use the same mysql to store metadata.
> >
> >
> > So when I build the cube, the first step is to create the intermediate
> > table and insert data from the fact table.
> >
> >
> > But I can't access the fact table in kylin's hive.
> >
> >
> >  For example, the first step is as below:
> >
> >
> > "kylin_intermediate_cube8_2016030100_2016041300 SELECT
> > PARTNER_USR_DOC_BASIC_INFO_FT0_S.PHONE_PROVINCE_IND
> > FROM WLT_PARTNER.PARTNER_USR_DOC_BASIC_INFO_FT0_S as
> > PARTNER_USR_DOC_BASIC_INFO_FT0_S
> > WHERE (PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D >= '2016-03-01' AND
> > PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D < '2016-04-13')"
> >
> >
> >
> >  Table "PARTNER_USR_DOC_BASIC_INFO_FT0_S" locate
> > "hdfs://hadoop2NameNode/wlt_partner/PARTNER_USR_DOC_BASIC_INFO_FT0_S"
> >
> >
> > but "kylin_intermediate_cube8_2016030100_2016041300"  locate
> > "hdfs://bihbasemaster/"
> >
> >
> > they are different clusters.
> >
> >
> > The current situation is there will not be any error in WEBUI at step
> > 1,
> >
> >
> >  When cube done, there is nothing in  Htable, so What can i do?
>
>
>
>
> --
> Best regards,
>
> Shaofeng Shi
>



-- 
Best regards,

Shaofeng Shi


Re: Re: Re:Re: No detail message when Error.

2016-04-26 Thread ShaoFeng Shi
Thanks for reporting; this is a cache issue in v1.5.1, and the fix will be
released in the next version.

2016-04-26 13:43 GMT+08:00 Roy :

> Of course. The cube built successfully, but choosing the Insight tab displays
> no result, and refreshing again did not help. Finally I clicked "reload
> metadata". Below are the detailed logs:
>
> 2016-04-26 13:32:48,174 DEBUG [http-bio-7070-exec-9]
> service.QueryService:289 : getting table metas
> 2016-04-26 13:32:48,175 DEBUG [http-bio-7070-exec-9]
> service.QueryService:307 : getting column metas
> 2016-04-26 13:32:48,182 DEBUG [http-bio-7070-exec-9]
> service.QueryService:321 : done column metas
> 2016-04-26 13:32:56,245 DEBUG [http-bio-7070-exec-4]
> service.QueryService:289 : getting table metas
> 2016-04-26 13:32:56,247 DEBUG [http-bio-7070-exec-4]
> service.QueryService:307 : getting column metas
> 2016-04-26 13:32:56,260 DEBUG [http-bio-7070-exec-4]
> service.QueryService:321 : done column metas
> 2016-04-26 13:33:03,083 DEBUG [http-bio-7070-exec-10]
> service.QueryService:289 : getting table metas
> 2016-04-26 13:33:03,085 DEBUG [http-bio-7070-exec-10]
> service.QueryService:307 : getting column metas
> 2016-04-26 13:33:03,095 DEBUG [http-bio-7070-exec-10]
> service.QueryService:321 : done column metas
> 2016-04-26 13:33:07,162 DEBUG [http-bio-7070-exec-8]
> service.QueryService:289 : getting table metas
> 2016-04-26 13:33:07,163 DEBUG [http-bio-7070-exec-8]
> service.QueryService:307 : getting column metas
> 2016-04-26 13:33:07,170 DEBUG [http-bio-7070-exec-8]
> service.QueryService:321 : done column metas
> 2016-04-26 13:33:07,966 DEBUG [http-bio-7070-exec-5]
> controller.UserController:64 : authentication.getPrincipal() is
> org.springframework.security.core.userdetails.User@3b40b2f: Username:
> ADMIN; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true;
> credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities:
> ROLE_ADMIN,ROLE_ANALYST,ROLE_MODELER
> 2016-04-26 13:33:12,125 INFO  [pool-4-thread-1]
> threadpool.DefaultScheduler:106 : Job Fetcher: 0 running, 0 actual running,
> 0 ready, 43 others
> 2016-04-26 13:33:14,516 DEBUG [http-bio-7070-exec-2]
> controller.UserController:64 : authentication.getPrincipal() is
> org.springframework.security.core.userdetails.User@3b40b2f: Username:
> ADMIN; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true;
> credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities:
> ROLE_ADMIN,ROLE_ANALYST,ROLE_MODELER
> 2016-04-26 13:33:19,512 DEBUG [http-bio-7070-exec-1]
> controller.UserController:64 : authentication.getPrincipal() is
> org.springframework.security.core.userdetails.User@3b40b2f: Username:
> ADMIN; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true;
> credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities:
> ROLE_ADMIN,ROLE_ANALYST,ROLE_MODELER
> 2016-04-26 13:33:19,539 DEBUG [http-bio-7070-exec-7]
> service.QueryService:289 : getting table metas
> 2016-04-26 13:33:19,540 DEBUG [http-bio-7070-exec-7]
> service.QueryService:307 : getting column metas
> 2016-04-26 13:33:19,549 DEBUG [http-bio-7070-exec-7]
> service.QueryService:321 : done column metas
> 2016-04-26 13:33:37,470 DEBUG [http-bio-7070-exec-10]
> controller.UserController:64 : authentication.getPrincipal() is
> org.springframework.security.core.userdetails.User@3b40b2f: Username:
> ADMIN; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true;
> credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities:
> ROLE_ADMIN,ROLE_ANALYST,ROLE_MODELER
> 2016-04-26 13:33:37,485 DEBUG [http-bio-7070-exec-8]
> service.AdminService:90 : Get Kylin Runtime Config
> 2016-04-26 13:33:37,489 DEBUG [http-bio-7070-exec-2]
> service.AdminService:52 : Get Kylin Runtime environment
> 2016-04-26 13:33:39,408 INFO  [http-bio-7070-exec-1]
> controller.CacheController:64 : wipe cache type: ALL event:UPDATE name:all
> 2016-04-26 13:33:39,409 INFO  [http-bio-7070-exec-1]
> service.CacheService:171 : rebuild cache type: ALL name:all
> 2016-04-26 13:33:39,409 WARN  [http-bio-7070-exec-1]
> service.CacheService:116 : cleaning all storage cache
> 2016-04-26 13:33:39,410 INFO  [http-bio-7070-exec-1]
> service.CacheService:134 : removeAllOLAPDataSources is called.
> 2016-04-26 13:33:44,510 DEBUG [http-bio-7070-exec-9]
> controller.UserController:64 : authentication.getPrincipal() is
> org.springframework.security.core.userdetails.User@3b40b2f: Username:
> ADMIN; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true;
> credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities:
> ROLE_ADMIN,ROLE_ANALYST,ROLE_MODELER
> 2016-04-26 13:33:44,510 INFO  [http-bio-7070-exec-7] cube.CubeManager:124
> : Initializing CubeManager with config kylin_metadata@hbase
> 2016-04-26 13:33:44,511 INFO  [http-bio-7070-exec-7]
> hbase.HBaseConnection:139 : connection is null or closed, creating a new one
> 2016-04-26 13:33:44,630 DEBUG [http-bio-7070-exec-7] cube.CubeManager:811
> : Loading Cube 

Re: how to get the rate value

2016-04-26 Thread ShaoFeng Shi
Hi Dong, could you please open a Kylin JIRA to track this issue?
https://issues.apache.org/jira/secure/Dashboard.jspa

Thanks!

2016-04-26 20:56 GMT+08:00 耳东 <775620...@qq.com>:

> Hi all:
>
>
>   I want to get a value which is defined as sum(a)/sum(b); how can I
> do this kind of analysis?
>
>   I have built a cube which has sum(a) and sum(b). When I execute
> "select sum(a)/sum(b) from table1 group by c", the result is wrong:
> sum(a)/sum(b) is all 0 and sum(b)/sum(a) is all 1.
>
>
>  MMENE_NAME   SUCC      ATT       SUCC/ATT
>  CSMME15BZX   336981    368366    1
>  CSMME32BZX   338754    366842    1
>  CSMME07BZX   687965    747694    1
>  CSMME03BHW   703269    747623    1
>  CSMME12BZX   705856    764656    1
>  CSMME16BHW   1962293142173       1
>
>
>  MMENE_NAME   SUCC      ATT       ATT/SUCC
>  CSMME15BZX   336981    368366    0
>  CSMME32BZX   338754    366842    0
>  CSMME07BZX   687965    747694    0
>  CSMME03BHW   703269    747623    0
>  CSMME12BZX   705856    764656    0
>  CSMME16BHW   1962293142173       0




-- 
Best regards,

Shaofeng Shi


Re: cardinality number limit about raw expression

2016-04-26 Thread ShaoFeng Shi
The raw measure needs to encode the column values with a dictionary, and a
dictionary is not good for ultra-high cardinality. That's why it complains.
You can try the following as a workaround:

1) cut a big segment into several segments, if you were trying to build a
large data set at once;
2) set "kylin.dictionary.max.cardinality" in conf/kylin.properties to a
bigger value (default is 500); see the sketch below.
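
For the second option, a kylin.properties sketch; the value below is only an
assumption, chosen to exceed the cardinality reported in this thread:

  # conf/kylin.properties -- raise the dictionary cardinality limit (example value)
  kylin.dictionary.max.cardinality=12000000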

2016-04-27 10:55 GMT+08:00 yubo-...@yolo24.com :

> hi all,
>
> I am using 1.5.1 for testing.
> when I add the raw expression on one column of module, get the following
> error message in log file.
>
> Too high cardinality is not suitable for dictionary -- cardinality:
> 10886118
>
> my question is
>
> 1. does this means that the raw expression only allows limited number of
> cardinality ?
> 2. how to modify configuration for this limited number for raw
> expression(measure).
>
> --
> View this message in context:
> http://apache-kylin.74782.x6.nabble.com/cardinality-number-limit-about-raw-expression-tp4286.html
> Sent from the Apache Kylin mailing list archive at Nabble.com.
>



-- 
Best regards,

Shaofeng Shi


Empty result return in the Insight query kylin1.5.1

2016-04-26 Thread Tao Li(Internship)
Hi,

   I have successfully built a cube; however, the cube size is zero and the
query result is empty in the Insight query.

   the environment of my cluster:
   hadoop 2.5.2 + hive 1.2.1 + hbase 1.1.3 + kylin 1.5.1

Best regards,

Tao Li




This email message (including any attachments) is confidential and may be 
legally privileged. If you have received it by mistake, please notify the 
sender by return email and delete this message from your system. Any 
unauthorized use or dissemination of this message in whole or in part is 
strictly prohibited. Envision Energy Limited and all its subsidiaries shall not 
be liable for the improper or incomplete transmission of the information 
contained in this email nor for any delay in its receipt or damage to your 
system. Envision Energy Limited does not guarantee the integrity of this email 
message, nor that this email message is free of viruses, interceptions, or 
interference.


roll back to kylin1.2

2016-04-26 Thread bitbean
Hi all


I rolled back to kylin 1.2, but the web UI stays in the 1.5.1 style and I can't
do anything. I wonder where the HTML is stored?

[jira] [Created] (KYLIN-1627) add backdoor toggle to dump binary cube storage response for further analysis

2016-04-26 Thread hongbin ma (JIRA)
hongbin ma created KYLIN-1627:
-

 Summary: add backdoor toggle to dump binary cube storage response 
for further analysis
 Key: KYLIN-1627
 URL: https://issues.apache.org/jira/browse/KYLIN-1627
 Project: Kylin
  Issue Type: Improvement
Reporter: hongbin ma
Assignee: hongbin ma






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: multi hadoop cluster

2016-04-26 Thread bitbean
Not quite a match.


My case: the local hive and the remote hive share the metadata database, so my
local hive can see the fact table on the remote hdfs.


I use the fact table on the remote hdfs. Now my trouble is that kylin builds the
cube in 5 minutes, but the resulting htable has no data.


I guess kylin doesn't fetch data from the remote hdfs, and kylin doesn't tell me
it can't fetch the data.




So can you help me resolve it?




-- Original Message --
From: "ShaoFeng Shi";;
Sent: Tuesday, April 26, 2016, 5:42 PM
To: "dev";

Subject: Re: multi hadoop cluster



will this match your case? https://issues.apache.org/jira/browse/KYLIN-1172

2016-04-26 16:55 GMT+08:00 bitbean :

> Hi all,
>
>  I am encountering a problem with multiple hadoop clusters.
>
>
>  Kylin submits its job to YARN on one hdfs, but my fact table is on another
> hdfs. The two hadoop clusters use the same mysql to store metadata.
>
>
> So when I build the cube, the first step is to create the intermediate table
> and insert data from the fact table.
>
>
> But I can't access the fact table in kylin's hive.
>
>
>  For example, the first step is as below:
>
>
> "kylin_intermediate_cube8_2016030100_2016041300 SELECT
> PARTNER_USR_DOC_BASIC_INFO_FT0_S.PHONE_PROVINCE_IND
> FROM WLT_PARTNER.PARTNER_USR_DOC_BASIC_INFO_FT0_S as
> PARTNER_USR_DOC_BASIC_INFO_FT0_S
> WHERE (PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D >= '2016-03-01' AND
> PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D < '2016-04-13')"
>
>
>
>  Table "PARTNER_USR_DOC_BASIC_INFO_FT0_S" locate
> "hdfs://hadoop2NameNode/wlt_partner/PARTNER_USR_DOC_BASIC_INFO_FT0_S"
>
>
> but "kylin_intermediate_cube8_2016030100_2016041300"  locate
> "hdfs://bihbasemaster/"
>
>
> they are different clusters.
>
>
> The current situation is there will not be any error in WEBUI at step
> 1,
>
>
>  When cube done, there is nothing in  Htable, so What can i do?




-- 
Best regards,

Shaofeng Shi

Re: sql execute failed

2016-04-26 Thread 耳东
Thank you, I didn't know this before. It is solved now.




-- Original Message --
From: "lidong";;
Sent: Tuesday, April 26, 2016, 9:22 PM
To: "dev";

Subject: Re: sql execute failed



year is a reserved keyword; try “YEAR”, i.e. with double-quotes added. With the
double-quotes, the column name is case-sensitive.


More reserved words see: http://calcite.apache.org/docs/reference.html#keywords


Thanks,
Dong


Original Message
Sender:耳东775620...@qq.com
Recipient:dev...@kylin.apache.org
Date:Tuesday, Apr 26, 2016 21:12
Subject:sql execute failed


Hi all: My cube has a column named year, whose data type is integer. When I
execute "select year from tablename", it fails and I don't know why. Encountered 
"year" at line 1, column 8. Was expecting one of: "UNION" ... "INTERSECT" ... 
"EXCEPT" ... "ORDER" ... "LIMIT" ... "OFFSET" ... "FETCH" ... "STREAM" ... 
"DISTINCT" ... "ALL" ... "*" ... "+" ... "-" ... UNSIGNED_INTEGER_LITERAL ... 
DECIMAL_NUMERIC_LITERAL ... APPROX_NUMERIC_LITERAL ... BINARY_STRING_LITERAL 
... PREFIXED_STRING_LITERAL ... QUOTED_STRING ... UNICODE_STRING_LITERAL ... 
"TRUE" ... "FALSE" ... "UNKNOWN" ... "NULL" ... LBRACE_D ... LBRACE_T ... 
LBRACE_TS ... "DATE" ... "TIME" ... "TIMESTAMP" ... "INTERVAL" ... "?" ... 
"CAST" ... "EXTRACT" ... "POSITION" ... "CONVERT" ... "TRANSLATE" ... "OVERLAY" 
... "FLOOR" ... "CEIL" ... "CEILING" ... "SUBSTRING" ... "TRIM" ... LBRACE_FN 
... "MULTISET" ... "ARRAY" ... "SPECIFIC" ... IDENTIFIER ... QUOTED_IDENTIFIER 
... BACK_QUOTED_IDENTIFIER ... BRACKET_QUOTED_IDENTIFIER ... 
UNICODE_QUOTED_IDENTIFIER ... "ABS" ... "AVG" ... "CARDINALITY" ... 
"CHAR_LENGTH" ... "CHARACTER_LENGTH" ... "COALESCE" ... "COLLECT" ... 
"COVAR_POP" ... "COVAR_SAMP" ... "CUME_DIST" ... "COUNT" ... "CURRENT_DATE" ... 
"CURRENT_TIME" ... "CURRENT_TIMESTAMP" ... "DENSE_RANK" ... "ELEMENT" ... "EXP" 
... "FIRST_VALUE" ... "FUSION" ... "GROUPING" ... "LAST_VALUE" ... "LN" ... 
"LOCALTIME" ... "LOCALTIMESTAMP" ... "LOWER" ... "MAX" ... "MIN" ... "MOD" ... 
"NULLIF" ... "OCTET_LENGTH" ... "PERCENT_RANK" ... "POWER" ... "RANK" ... 
"REGR_SXX" ... "REGR_SYY" ... "ROW_NUMBER" ... "SQRT" ... "STDDEV_POP" ... 
"STDDEV_SAMP" ... "SUM" ... "UPPER" ... "VAR_POP" ... "VAR_SAMP" ... 
"CURRENT_CATALOG" ... "CURRENT_DEFAULT_TRANSFORM_GROUP" ... "CURRENT_PATH" ... 
"CURRENT_ROLE" ... "CURRENT_SCHEMA" ... "CURRENT_USER" ... "SESSION_USER" ... 
"SYSTEM_USER" ... "USER" ... "NEW" ... "CASE" ... "NEXT" ... "CURRENT" ... 
"CURSOR" ... "ROW" ... "NOT" ... "EXISTS" ... "(" ... at 
org.apache.kylin.rest.controller.QueryController.doQueryWithCache(QueryController.java:224)
 at 
org.apache.kylin.rest.controller.QueryController.query(QueryController.java:94) 
at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606) at 
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:213)
 at 
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:126)
 at 
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:96)
 at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:617)
 at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:578)
 at 
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80)
 at 
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
 at 
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
 at 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
 at 
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:646) at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
 at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
 at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
 at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
 at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
 at 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
 at 

[jira] [Created] (KYLIN-1626) pogo games

2016-04-26 Thread Amit Kumar (JIRA)
Amit Kumar created KYLIN-1626:
-

 Summary: pogo games
 Key: KYLIN-1626
 URL: https://issues.apache.org/jira/browse/KYLIN-1626
 Project: Kylin
  Issue Type: Bug
  Components: Web 
Reporter: Amit Kumar
Assignee: Zhong,Jason






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: source code introduction

2016-04-26 Thread Luke Han
The source code is the best document of the source code :-)

Regards!
Luke Han




On Tue, Apr 26, 2016 at 1:16 AM -0700, "耳东" <775620...@qq.com> wrote:










Hi all:   Is there any document introducing the kylin source code?






Re: sql execute failed

2016-04-26 Thread lidong
year is a reserved keyword; try “YEAR”, i.e. with double-quotes added. With the
double-quotes, the column name is case-sensitive.


More reserved words see: http://calcite.apache.org/docs/reference.html#keywords


Thanks,
Dong


Original Message
Sender:耳东775620...@qq.com
Recipient:dev...@kylin.apache.org
Date:Tuesday, Apr 26, 2016 21:12
Subject:sql execute failed


Hi all: My cube has a column named year, whose data type is integer. When I
execute "select year from tablename", it fails and I don't know why. Encountered 
"year" at line 1, column 8. Was expecting one of: "UNION" ... "INTERSECT" ... 
"EXCEPT" ... "ORDER" ... "LIMIT" ... "OFFSET" ... "FETCH" ... "STREAM" ... 
"DISTINCT" ... "ALL" ... "*" ... "+" ... "-" ... UNSIGNED_INTEGER_LITERAL ... 
DECIMAL_NUMERIC_LITERAL ... APPROX_NUMERIC_LITERAL ... BINARY_STRING_LITERAL 
... PREFIXED_STRING_LITERAL ... QUOTED_STRING ... UNICODE_STRING_LITERAL ... 
"TRUE" ... "FALSE" ... "UNKNOWN" ... "NULL" ... LBRACE_D ... LBRACE_T ... 
LBRACE_TS ... "DATE" ... "TIME" ... "TIMESTAMP" ... "INTERVAL" ... "?" ... 
"CAST" ... "EXTRACT" ... "POSITION" ... "CONVERT" ... "TRANSLATE" ... "OVERLAY" 
... "FLOOR" ... "CEIL" ... "CEILING" ... "SUBSTRING" ... "TRIM" ... LBRACE_FN 
... "MULTISET" ... "ARRAY" ... "SPECIFIC" ... IDENTIFIER ... QUOTED_IDENTIFIER 
... BACK_QUOTED_IDENTIFIER ... BRACKET_QUOTED_IDENTIFIER ... 
UNICODE_QUOTED_IDENTIFIER ... "ABS" ... "AVG" ... "CARDINALITY" ... 
"CHAR_LENGTH" ... "CHARACTER_LENGTH" ... "COALESCE" ... "COLLECT" ... 
"COVAR_POP" ... "COVAR_SAMP" ... "CUME_DIST" ... "COUNT" ... "CURRENT_DATE" ... 
"CURRENT_TIME" ... "CURRENT_TIMESTAMP" ... "DENSE_RANK" ... "ELEMENT" ... "EXP" 
... "FIRST_VALUE" ... "FUSION" ... "GROUPING" ... "LAST_VALUE" ... "LN" ... 
"LOCALTIME" ... "LOCALTIMESTAMP" ... "LOWER" ... "MAX" ... "MIN" ... "MOD" ... 
"NULLIF" ... "OCTET_LENGTH" ... "PERCENT_RANK" ... "POWER" ... "RANK" ... 
"REGR_SXX" ... "REGR_SYY" ... "ROW_NUMBER" ... "SQRT" ... "STDDEV_POP" ... 
"STDDEV_SAMP" ... "SUM" ... "UPPER" ... "VAR_POP" ... "VAR_SAMP" ... 
"CURRENT_CATALOG" ... "CURRENT_DEFAULT_TRANSFORM_GROUP" ... "CURRENT_PATH" ... 
"CURRENT_ROLE" ... "CURRENT_SCHEMA" ... "CURRENT_USER" ... "SESSION_USER" ... 
"SYSTEM_USER" ... "USER" ... "NEW" ... "CASE" ... "NEXT" ... "CURRENT" ... 
"CURSOR" ... "ROW" ... "NOT" ... "EXISTS" ... "(" ... at 
org.apache.kylin.rest.controller.QueryController.doQueryWithCache(QueryController.java:224)
 at 
org.apache.kylin.rest.controller.QueryController.query(QueryController.java:94) 
at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606) at 
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:213)
 at 
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:126)
 at 
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:96)
 at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:617)
 at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:578)
 at 
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80)
 at 
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
 at 
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
 at 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
 at 
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:646) at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
 at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
 at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
 at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
 at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
 at 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
 at 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
 at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
 at 
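
A minimal sketch of the quoting described above, through Kylin's JDBC driver;
the connection URL, project name and credentials are assumptions for
illustration, and the table name is the one from the original question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Demonstrates the double-quoted, case-sensitive form of the reserved keyword.
public class QuotedKeywordQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.kylin.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:kylin://localhost:7070/my_project", "ADMIN", "KYLIN");
             Statement stmt = conn.createStatement();
             // Escaped double quotes make "YEAR" a quoted identifier.
             ResultSet rs = stmt.executeQuery("select \"YEAR\" from tablename")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}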

sql execute failed

2016-04-26 Thread 耳东
Hi all:


 My cube has a column named year, whose data type is integer. When I
execute "select year from tablename", it fails and I don't know why.


Encountered "year" at line 1, column 8. Was expecting one of: "UNION" ... 
"INTERSECT" ... "EXCEPT" ... "ORDER" ... "LIMIT" ... "OFFSET" ... "FETCH" ... 
"STREAM" ... "DISTINCT" ... "ALL" ... "*" ... "+" ... "-" ... 
UNSIGNED_INTEGER_LITERAL ... DECIMAL_NUMERIC_LITERAL ... 
APPROX_NUMERIC_LITERAL ... BINARY_STRING_LITERAL ... 
PREFIXED_STRING_LITERAL ... QUOTED_STRING ... UNICODE_STRING_LITERAL ... 
"TRUE" ... "FALSE" ... "UNKNOWN" ... "NULL" ... LBRACE_D ... LBRACE_T ... 
LBRACE_TS ... "DATE" ... "TIME" ... "TIMESTAMP" ... "INTERVAL" ... "?" ... 
"CAST" ... "EXTRACT" ... "POSITION" ... "CONVERT" ... "TRANSLATE" ... "OVERLAY" 
... "FLOOR" ... "CEIL" ... "CEILING" ... "SUBSTRING" ... "TRIM" ... LBRACE_FN 
... "MULTISET" ... "ARRAY" ... "SPECIFIC" ... IDENTIFIER ... QUOTED_IDENTIFIER 
... BACK_QUOTED_IDENTIFIER ... BRACKET_QUOTED_IDENTIFIER ... 
UNICODE_QUOTED_IDENTIFIER ... "ABS" ... "AVG" 
... "CARDINALITY" ... "CHAR_LENGTH" ... "CHARACTER_LENGTH" ... "COALESCE" ... 
"COLLECT" ... "COVAR_POP" ... "COVAR_SAMP" ... "CUME_DIST" ... "COUNT" ... 
"CURRENT_DATE" ... "CURRENT_TIME" ... "CURRENT_TIMESTAMP" ... "DENSE_RANK" ... 
"ELEMENT" ... "EXP" ... "FIRST_VALUE" ... "FUSION" ... "GROUPING" ... 
"LAST_VALUE" ... "LN" ... "LOCALTIME" ... "LOCALTIMESTAMP" ... "LOWER" ... 
"MAX" ... "MIN" ... "MOD" ... "NULLIF" ... "OCTET_LENGTH" ... "PERCENT_RANK" 
... "POWER" ... "RANK" ... "REGR_SXX" ... "REGR_SYY" ... "ROW_NUMBER" ... 
"SQRT" ... "STDDEV_POP" ... "STDDEV_SAMP" ... "SUM" ... "UPPER" ... "VAR_POP" 
... "VAR_SAMP" ... "CURRENT_CATALOG" ... "CURRENT_DEFAULT_TRANSFORM_GROUP" ... 
"CURRENT_PATH" ... "CURRENT_ROLE" ... "CURRENT_SCHEMA" ... "CURRENT_USER" ... 
"SESSION_USER" ... "SYSTEM_USER" ... "USER" ... "NEW" ... "CASE" ... "NEXT" ... 
"CURRENT" ... "CURSOR" ... "ROW" ... "NOT" ... "EXISTS" ... "(" ...
at 
org.apache.kylin.rest.controller.QueryController.doQueryWithCache(QueryController.java:224)
at 
org.apache.kylin.rest.controller.QueryController.query(QueryController.java:94)
at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:213)
at 
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:126)
at 
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:96)
at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:617)
at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:578)
at 
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80)
at 
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
at 
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
at 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
at 
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at 
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at 
org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at 
org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at 

how to get the rate value

2016-04-26 Thread 耳东
Hi all:


  I want to get a value which is defined as sum(a)/sum(b); how can I do
this kind of analysis?

  I have built a cube which has sum(a) and sum(b). When I execute "select
sum(a)/sum(b) from table1 group by c", the result is wrong: sum(a)/sum(b) is
all 0 and sum(b)/sum(a) is all 1.


 MMENE_NAME   SUCC      ATT       SUCC/ATT
 CSMME15BZX   336981    368366    1
 CSMME32BZX   338754    366842    1
 CSMME07BZX   687965    747694    1
 CSMME03BHW   703269    747623    1
 CSMME12BZX   705856    764656    1
 CSMME16BHW   1962293142173       1


 MMENE_NAME   SUCC      ATT       ATT/SUCC
 CSMME15BZX   336981    368366    0
 CSMME32BZX   338754    366842    0
 CSMME07BZX   687965    747694    0
 CSMME03BHW   703269    747623    0
 CSMME12BZX   705856    764656    0
 CSMME16BHW   1962293142173       0
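
This is not something covered in the thread, but the all-0 / all-1 pattern
above is what integer division produces when both sums are integers; a hedged
sketch of one way to check, casting one operand before dividing (column and
table names are the ones from the question, connection details are
assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Casting one operand to DOUBLE avoids integer division, which would otherwise
// truncate sum(a)/sum(b) to 0 or 1 as in the tables above.
public class RateQuerySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.kylin.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:kylin://localhost:7070/my_project", "ADMIN", "KYLIN");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                "select c, cast(sum(a) as double) / sum(b) from table1 group by c")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
            }
        }
    }
}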

[jira] [Created] (KYLIN-1625) GC log overwrites old one after restart Kylin service

2016-04-26 Thread Dong Li (JIRA)
Dong Li created KYLIN-1625:
--

 Summary: GC log overwrites old one after restart Kylin service
 Key: KYLIN-1625
 URL: https://issues.apache.org/jira/browse/KYLIN-1625
 Project: Kylin
  Issue Type: Improvement
  Components: Client - CLI
Affects Versions: v1.5.1
Reporter: Dong Li
Assignee: Dong Li
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KYLIN-1623) Make the hll precision for data sampling configurable

2016-04-26 Thread Shaofeng SHI (JIRA)
Shaofeng SHI created KYLIN-1623:
---

 Summary: Make the hll precision for data sampling configurable
 Key: KYLIN-1623
 URL: https://issues.apache.org/jira/browse/KYLIN-1623
 Project: Kylin
  Issue Type: New Feature
  Components: Job Engine
Reporter: Shaofeng SHI
Assignee: Shaofeng SHI


Now Kylin uses HLL(14) for sampling; it should be configurable at the cube level
for small or large cubes.
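
As background (the standard HyperLogLog bounds, not something stated in this
JIRA), precision p uses 2^p registers and gives a relative error of roughly
1.04 / sqrt(2^p); a small sketch of the trade-off:

// Back-of-envelope numbers for the standard HLL precision/error trade-off.
public class HllPrecisionTradeoff {
    public static void main(String[] args) {
        for (int p = 10; p <= 16; p += 2) {
            long registers = 1L << p;
            double error = 1.04 / Math.sqrt(registers);
            // For p = 14: 16384 registers, roughly 0.8% relative error.
            System.out.printf("p=%d registers=%d error=%.2f%%%n", p, registers, error * 100);
        }
    }
}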



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


how to add Custom aggregation types

2016-04-26 Thread ????
Hi all:


  How can I add a custom aggregation function (UDF)? I could not find any
instructions on the http://kylin.apache.org/ page.

Re: Question about cube size estimation in Kylin 1.5

2016-04-26 Thread ShaoFeng Shi
Hi Dayue,

could you please open a JIRA for this, and make it configurable? As far as I
know, Kylin now allows cube-level configurations to override kylin.properties,
so with this you can customize the magic number at the cube level.

Thanks;

2016-04-25 15:01 GMT+08:00 Li Yang :

> The magic coefficient accounts for HBase compression on keys and values: the
> final cube size is much smaller than the sum of all keys and values, which is
> why we multiply by the coefficient. It's purely empirical at the moment, and
> it will vary depending on the key encoding and compression applied to the
> HTable.
>
> At a minimum, we should make it configurable, I think.
>
> On Mon, Apr 18, 2016 at 4:38 PM, Dayue Gao  wrote:
>
> > Hi everyone,
> >
> >
> > I made several cubing tests on 1.5 and found most of the time was spent
> > on the "Convert Cuboid Data to HFile" step due to a lack of reducer
> > parallelism. It seems that the estimated cube size is too small compared
> > to the actual size, which leads to a small number of regions (and hence
> > reducers) being created. The setup and results of the tests are:
> >
> >
> > Cube#1: source_record=11998051, estimated_size=8805MB, coefficient=0.25,
> > region_cut=5GB, #regions=2, actual_size=49GB
> > Cube#2: source_record=123908390, estimated_size=4653MB, coefficient=0.05,
> > region_cut=10GB, #regions=2, actual_size=144GB
> >
> >
> > The "coefficient" is from CubeStatsReader#estimateCuboidStorageSize,
> which
> > looks mysterious to me. Currently the formula for cuboid size estimation
> is
> >
> >
> >   size(cuboid) = rows(cuboid) x row_size(cuboid) x coefficient
> >   where coefficient = has_memory_hungry_measures(cube) ? 0.05 : 0.25
> >
> >
> > Why do we multiply by the coefficient? And why is it five times smaller in
> > the memory-hungry case? Could someone explain the rationale behind it?
> >
> >
> > Thanks, Dayue
> >
> >
> >
> >
> >
> >
> >
> >
>



-- 
Best regards,

Shaofeng Shi


Re: multi hadoop cluster

2016-04-26 Thread ShaoFeng Shi
will this match your case? https://issues.apache.org/jira/browse/KYLIN-1172

2016-04-26 16:55 GMT+08:00 bitbean :

> Hi all,
>
>  I am encountering a problem with multiple hadoop clusters.
>
>
>  Kylin submits its job to YARN on one hdfs, but my fact table is on another
> hdfs. The two hadoop clusters use the same mysql to store metadata.
>
>
> So when I build the cube, the first step is to create the intermediate table
> and insert data from the fact table.
>
>
> But I can't access the fact table in kylin's hive.
>
>
>  For example, the first step is as below:
>
>
> "kylin_intermediate_cube8_2016030100_2016041300 SELECT
> PARTNER_USR_DOC_BASIC_INFO_FT0_S.PHONE_PROVINCE_IND
> FROM WLT_PARTNER.PARTNER_USR_DOC_BASIC_INFO_FT0_S as
> PARTNER_USR_DOC_BASIC_INFO_FT0_S
> WHERE (PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D >= '2016-03-01' AND
> PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D < '2016-04-13')"
>
>
>
>  Table "PARTNER_USR_DOC_BASIC_INFO_FT0_S" locate
> "hdfs://hadoop2NameNode/wlt_partner/PARTNER_USR_DOC_BASIC_INFO_FT0_S"
>
>
> but "kylin_intermediate_cube8_2016030100_2016041300"  locate
> "hdfs://bihbasemaster/"
>
>
> they are different clusters.
>
>
> The current situation is there will not be any error in WEBUI at step
> 1,
>
>
>  When cube done, there is nothing in  Htable, so What can i do?




-- 
Best regards,

Shaofeng Shi


multi hadoop cluster

2016-04-26 Thread bitbean
Hi all,

 I am encountering a problem with multiple hadoop clusters.


 Kylin submits its job to YARN on one hdfs, but my fact table is on another
hdfs. The two hadoop clusters use the same mysql to store metadata.


So when I build the cube, the first step is to create the intermediate table and
insert data from the fact table.


But I can't access the fact table in kylin's hive.


 For example, the first step is as below:


"kylin_intermediate_cube8_2016030100_2016041300 SELECT
PARTNER_USR_DOC_BASIC_INFO_FT0_S.PHONE_PROVINCE_IND
FROM WLT_PARTNER.PARTNER_USR_DOC_BASIC_INFO_FT0_S as 
PARTNER_USR_DOC_BASIC_INFO_FT0_S 
WHERE (PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D >= '2016-03-01' AND 
PARTNER_USR_DOC_BASIC_INFO_FT0_S.PT_LOG_D < '2016-04-13')"



 Table "PARTNER_USR_DOC_BASIC_INFO_FT0_S" locate  
"hdfs://hadoop2NameNode/wlt_partner/PARTNER_USR_DOC_BASIC_INFO_FT0_S"


but "kylin_intermediate_cube8_2016030100_2016041300"  locate 
"hdfs://bihbasemaster/"


they are different clusters.


The current situation is there will not be any error in WEBUI at step 1,


 When cube done, there is nothing in  Htable, so What can i do?

[jira] [Created] (KYLIN-1622) sql not executed and report topN error

2016-04-26 Thread Zhong,Jason (JIRA)
Zhong,Jason created KYLIN-1622:
--

 Summary: sql not executed and report topN error
 Key: KYLIN-1622
 URL: https://issues.apache.org/jira/browse/KYLIN-1622
 Project: Kylin
  Issue Type: Bug
  Components: Query Engine
Reporter: Zhong,Jason
Assignee: Shaofeng SHI


Pull the latest code from the master branch (commit id
d899a67963af14f80fc047a9253ac7736c1983f3), deploy it in the sandbox, and run
bin/sample.sh to get the sample cube "kylin_sales_cube"; I also created a cube
called "kylin_sales_cube_desc_TOPN". I built "kylin_sales_cube_desc_TOPN" first
and then built "kylin_sales_cube".

topN sql is 
"
SELECT 
 SUM (t0.PRICE),  t1.WEEK_BEG_DT 
 FROM  "DEFAULT".KYLIN_SALES AS t0 
  INNER JOIN "DEFAULT".KYLIN_CAL_DT AS t1 
 ON  t0.PART_DT = t1.CAL_DT 
 GROUP BY 
 t1.WEEK_BEG_DT 
 ORDER BY SUM (t0.PRICE)
"
But when I run this SQL
"
SELECT 
  t1.WEEK_BEG_DT 
 FROM  "DEFAULT".KYLIN_SALES AS t0 
  INNER JOIN "DEFAULT".KYLIN_CAL_DT AS t1 
 ON  t0.PART_DT = t1.CAL_DT 
 GROUP BY 
 t1.WEEK_BEG_DT
"
it reports 
Error while executing SQL "SELECT t1.WEEK_BEG_DT FROM "DEFAULT".KYLIN_SALES AS 
t0 INNER JOIN "DEFAULT".KYLIN_CAL_DT AS t1 ON t0.PART_DT = t1.CAL_DT GROUP BY 
t1.WEEK_BEG_DT LIMIT 5": When query with topN, only one metrics is allowed.


I'll attach kylin_sales_cube_desc_TOPN.json;
for "kylin_sales_cube" you can run bin/sample.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


source code introduction

2016-04-26 Thread 耳东
Hi all:   Is there any document introducing the kylin source code?