[jira] [Created] (KYLIN-4156) Add JoinedFormatter for dynamic variables in the WHERE clause

2019-09-03 Thread liuzhixin (Jira)
liuzhixin created KYLIN-4156:


 Summary: Add JoinedFormatter for dynamic variables in the WHERE clause
 Key: KYLIN-4156
 URL: https://issues.apache.org/jira/browse/KYLIN-4156
 Project: Kylin
  Issue Type: Improvement
Reporter: liuzhixin


*Description*
Users can use ${START_DATE} and ${END_DATE} in the WHERE filter defined in the
WebUI.
The dynamic variables ${START_DATE} and ${END_DATE} are taken from the build
date range of the Cube.

*Improvement*

Sometimes the filter condition on a lookup table can also be improved with
${START_DATE} and ${END_DATE}, as in the sketch below.
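A minimal sketch of the intended usage; the table and column names are
hypothetical, only the ${START_DATE}/${END_DATE} variables come from this issue:

-- Hypothetical filter condition on a lookup table; LOOKUP_CAL.DATE_STR is an
-- assumed column. At build time the JoinedFormatter would substitute the
-- segment's build date range:
LOOKUP_CAL.DATE_STR >= '${START_DATE}' AND LOOKUP_CAL.DATE_STR < '${END_DATE}'
-- e.g. for a segment covering [2019-09-01, 2019-09-02) this would expand to:
-- LOOKUP_CAL.DATE_STR >= '2019-09-01' AND LOOKUP_CAL.DATE_STR < '2019-09-02'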



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KYLIN-4058) HBase tableName is case sensitive

2019-06-26 Thread liuzhixin (JIRA)
liuzhixin created KYLIN-4058:


 Summary: HBase tableName is case sensitive
 Key: KYLIN-4058
 URL: https://issues.apache.org/jira/browse/KYLIN-4058
 Project: Kylin
  Issue Type: Bug
  Components: Storage - HBase
Reporter: liuzhixin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Compile in GitHub Error

2019-06-17 Thread liuzhixin


OK, thank you very much!


> On 2019-06-17 at 18:06, nichunen wrote:
> 
> Hi, please rebase to the latest code of master branch. It should be fixed now.
> 
> 
> 
> Best regards,
> 
> 
> 
> Ni Chunen / George
> 
> 
> 
> On 06/17/2019 18:03, liuzhixin wrote:
> Hello kylin team:
> 
> Something is wrong with the compile on GitHub.
> Refer: https://travis-ci.org/apache/kylin/jobs/546641440
> 
> 
> Thanks very much for any help.
> 



Compile in GitHub Error

2019-06-17 Thread liuzhixin
Hello kylin team:

Something is wrong with the compile on GitHub.
Refer: https://travis-ci.org/apache/kylin/jobs/546641440


Thanks very much for any help.



Re: Redistribute intermediate table default not by rand()

2018-11-02 Thread liuzhixin
Hi ShaoFeng Shi:

High-cardinality dimensions in the table (such as request_id or timestamp)
cause dimension explosion and trigger OOM;
other dimensions with somewhat lower (but still high) cardinality are skewed to
begin with, so the data distribution is also uneven.
#
Much of the data is inherently skewed; without rand(), how should Kylin handle
it?

Best Wishes

> On 2018-11-02 at 13:42, ShaoFeng Shi wrote:
> 
> Please move the high cardinality dimensions to the leading position of
> rowkey, that will make the data distribution more even;
> 
> On Friday, 2018-11-02 at 13:38, Chao Long wrote:
> 
>> Hi zhixin,
>> Data may become incorrect if "distribute by rand()" is used.
>> https://issues.apache.org/jira/browse/KYLIN-3388
>> 
>> 
>> 
>> 
>> -- Original Message --
>> From: "liuzhixin";
>> Date: 2018-11-02 (Friday) 12:53
>> To: "dev";
>> Cc: "ShaoFeng Shi";
>> Subject: Re: Redistribute intermediate table default not by rand()
>> 
>> 
>> 
>> Hi kylin team:
>> 
>> Step: Redistribute intermediate table
>> #
>> By default, the first three dimension fields are chosen as the DISTRIBUTE BY
>> key, instead of DISTRIBUTE BY RAND().
>> If there is no suitable dimension field, this default strategy makes the data
>> distribution even more skewed.
>> 
>> Best Regards!
>> 
>>> On 2018-11-02 at 12:03, liuzhixin wrote:
>>> 
>>> Hi kylin team:
>>> 
>>> Version: Kylin2.5-hadoop3.1 for hdp3.0
>>> #
>>> Step: Redistribute intermediate table
>>> #
>>> DISTRIBUTE BY is that:
>>> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM
>> table_intermediate DISTRIBUTE BY Field1, Field2, Field3;
>>> #
>>> Not DISTRIBUTE BY RAND()
>>> #
>>> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
>>> DISTRIBUTE BY RAND()?
>>> 
>>> Best wishes.
>>> 
> 
> 
> 
> -- 
> Best regards,
> 
> Shaofeng Shi 史少锋




Re: Redistribute intermediate table default not by rand()

2018-11-02 Thread liuzhixin
Hi Chao Long,

Yes!
#
So I said "has provided" below:
> At the same time, Kylin should support a custom shard column. (has
> provided)

#
But Kylin can insert a rand column in the intermediate hive table for the next
shard (as the default), as sketched below.
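A minimal sketch of that idea, assuming a hypothetical staging table and shard
column name (this is only the proposal from this thread, not Kylin's actual
implementation):

-- Hypothetical: materialize rand() as an explicit shard column, then
-- redistribute on it; the value is fixed once written to the table.
INSERT OVERWRITE TABLE table_intermediate
SELECT t.*, rand() AS shard_key
FROM table_intermediate_src t
DISTRIBUTE BY shard_key;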

Best Wishes!

> On 2018-11-02 at 16:03, Chao Long wrote:
> 
> Hi zhixin,
> As I remember, if you set a "shard by" column on the cube design page, Kylin
> will use that column as the "distribute by" condition, rather than the first
> three fields of the rowkey.
> 
> 
> 
> 
> -- Original Message --
> From: "liuzhixin";
> Date: 2018-11-02 (Friday) 15:11
> To: "dev";
> Cc: "Chao Long"; 
> Subject: Re: Redistribute intermediate table default not by rand()
> 
> 
> 
> Hi Chao Long,
> 
> Thank you for the answer.
> #
> Step1: Create Intermediate Flat Hive Table
> Step2: Redistribute intermediate table
> #
> Perhaps Kylin can insert a rand column in the intermediate hive table for
> the next shard (as the default).
> At the same time, Kylin should support a custom shard column. (has
> provided)
> 
> Best Wishes.
> 
>> On 2018-11-02 at 13:38, Chao Long wrote:
>> 
>> Hi zhixin,
>> Data may become incorrect if "distribute by rand()" is used.
>> https://issues.apache.org/jira/browse/KYLIN-3388
>> 
>> 
>> 
>> 
>> -- Original Message --
>> From: "liuzhixin";
>> Date: 2018-11-02 (Friday) 12:53
>> To: "dev";
>> Cc: "ShaoFeng Shi"; 
>> Subject: Re: Redistribute intermediate table default not by rand()
>> 
>> 
>> 
>> Hi kylin team:
>> 
>> Step: Redistribute intermediate table
>> #
>> By default, the first three dimension fields are chosen as the DISTRIBUTE BY
>> key, instead of DISTRIBUTE BY RAND().
>> If there is no suitable dimension field, this default strategy makes the data
>> distribution even more skewed.
>> 
>> Best Regards!
>> 
>>> On 2018-11-02 at 12:03, liuzhixin wrote:
>>> 
>>> Hi kylin team:
>>> 
>>> Version: Kylin2.5-hadoop3.1 for hdp3.0
>>> #
>>> Step: Redistribute intermediate table
>>> #
>>> DISTRIBUTE BY is that:
>>> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate 
>>> DISTRIBUTE BY Field1, Field2, Field3;
>>> #
>>> Not DISTRIBUTE BY RAND()
>>> #
>>> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
>>> DISTRIBUTE BY RAND()?
>>> 
>>> Best wishes.




Re: Redistribute intermediate table default not by rand()

2018-11-02 Thread liuzhixin
Hi Chao Long,

Thank you for the answer.
#
Step1: Create Intermediate Flat Hive Table
Step2: Redistribute intermediate table
#
Perhaps Kylin can insert a rand column in the intermediate hive table for the
next shard (as the default).
At the same time, Kylin should support a custom shard column. (has
provided)

Best Wishes.

> On 2018-11-02 at 13:38, Chao Long wrote:
> 
> Hi zhixin,
> Data may become incorrect if "distribute by rand()" is used.
> https://issues.apache.org/jira/browse/KYLIN-3388
> 
> 
> 
> 
> -- Original Message --
> From: "liuzhixin";
> Date: 2018-11-02 (Friday) 12:53
> To: "dev";
> Cc: "ShaoFeng Shi"; 
> Subject: Re: Redistribute intermediate table default not by rand()
> 
> 
> 
> Hi kylin team:
> 
> Step: Redistribute intermediate table
> #
> By default, the first three dimension fields are chosen as the DISTRIBUTE BY
> key, instead of DISTRIBUTE BY RAND().
> If there is no suitable dimension field, this default strategy makes the data
> distribution even more skewed.
> 
> Best Regards!
> 
>> On 2018-11-02 at 12:03, liuzhixin wrote:
>> 
>> Hi kylin team:
>> 
>> Version: Kylin2.5-hadoop3.1 for hdp3.0
>> #
>> Step: Redistribute intermediate table
>> #
>> DISTRIBUTE BY is that:
>> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate 
>> DISTRIBUTE BY Field1, Field2, Field3;
>> #
>> Not DISTRIBUTE BY RAND()
>> #
>> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
>> DISTRIBUTE BY RAND()?
>> 
>> Best wishes.





Re: Redistribute intermediate table default not by rand()

2018-11-02 Thread liuzhixin
Hi ShaoFeng Shi,

Thank you for the answer.
#
Step1: Create Intermediate Flat Hive Table
Step2: Redistribute intermediate table
#
Perhaps Kylin can insert a rand column for the next shard (as the default).
At the same time, Kylin should support a custom shard column.

Best Wishes.

> On 2018-11-02 at 14:06, ShaoFeng Shi wrote:
> 
> Hi zhixin,
> 
> Kylin 2.5.1 will add some tips in the advanced step, hope that can help.
> 
> On Friday, 2018-11-02 at 14:05, liuzhixin wrote:
> 
>> Hi Chao Long:
>> 
>> Thank you for the answer.
>> #
>> Maybe kylin should provide config for every build step
>> 
>> Best wishes.
>> 
>>> On 2018-11-02 at 13:38, Chao Long wrote:
>>> 
>>> Hi zhixin,
>>> Data may become incorrect if "distribute by rand()" is used.
>>> https://issues.apache.org/jira/browse/KYLIN-3388
>>> 
>>> 
>>> 
>>> 
>>> -- Original Message --
>>> From: "liuzhixin";
>>> Date: 2018-11-02 (Friday) 12:53
>>> To: "dev";
>>> Cc: "ShaoFeng Shi";
>>> Subject: Re: Redistribute intermediate table default not by rand()
>>> 
>>> 
>>> 
>>> Hi kylin team:
>>> 
>>> Step: Redistribute intermediate table
>>> #
>>> By default, the first three dimension fields are chosen as the DISTRIBUTE BY
>>> key, instead of DISTRIBUTE BY RAND().
>>> If there is no suitable dimension field, this default strategy makes the data
>>> distribution even more skewed.
>>> 
>>> Best Regards!
>>> 
>>>> On 2018-11-02 at 12:03, liuzhixin wrote:
>>>> 
>>>> Hi kylin team:
>>>> 
>>>> Version: Kylin2.5-hadoop3.1 for hdp3.0
>>>> #
>>>> Step: Redistribute intermediate table
>>>> #
>>>> DISTRIBUTE BY is that:
>>>> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM
>> table_intermediate DISTRIBUTE BY Field1, Field2, Field3;
>>>> #
>>>> Not DISTRIBUTE BY RAND()
>>>> #
>>>> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
>>>> DISTRIBUTE BY RAND()?
>>>> 
>>>> Best wishes.
>> 
>> 
>> 
> 
> -- 
> Best regards,
> 
> Shaofeng Shi 史少锋




Re: Redistribute intermediate table default not by rand()

2018-11-02 Thread liuzhixin
Hi Chao Long:

Thank you for the answer.
#
Maybe Kylin should provide a config for every build step.

Best wishes.

> On 2018-11-02 at 13:38, Chao Long wrote:
> 
> Hi zhixin,
> Data may become incorrect if "distribute by rand()" is used.
> https://issues.apache.org/jira/browse/KYLIN-3388
> 
> 
> 
> 
> -- Original Message --
> From: "liuzhixin";
> Date: 2018-11-02 (Friday) 12:53
> To: "dev";
> Cc: "ShaoFeng Shi"; 
> Subject: Re: Redistribute intermediate table default not by rand()
> 
> 
> 
> Hi kylin team:
> 
> Step: Redistribute intermediate table
> #
> By default, the first three dimension fields are chosen as the DISTRIBUTE BY
> key, instead of DISTRIBUTE BY RAND().
> If there is no suitable dimension field, this default strategy makes the data
> distribution even more skewed.
> 
> Best Regards!
> 
>> On 2018-11-02 at 12:03, liuzhixin wrote:
>> 
>> Hi kylin team:
>> 
>> Version: Kylin2.5-hadoop3.1 for hdp3.0
>> #
>> Step: Redistribute intermediate table
>> #
>> DISTRIBUTE BY is that:
>> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate 
>> DISTRIBUTE BY Field1, Field2, Field3;
>> #
>> Not DISTRIBUTE BY RAND()
>> #
>> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
>> DISTRIBUTE BY RAND()?
>> 
>> Best wishes.




Re: Redistribute intermediate table default not by rand()

2018-11-02 Thread liuzhixin
Hi ShaoFeng Shi

OK, thank you for the answer.
#
Perhaps Kylin should provide tips or notes about the default shard behavior.

Best Wishes.

> On 2018-11-02 at 13:42, ShaoFeng Shi wrote:
> 
> Please move the high cardinality dimensions to the leading position of
> rowkey, that will make the data distribution more even;
> 
> On Friday, 2018-11-02 at 13:38, Chao Long wrote:
> 
>> Hi zhixin,
>> Data may become incorrect if "distribute by rand()" is used.
>> https://issues.apache.org/jira/browse/KYLIN-3388
>> 
>> 
>> 
>> 
>> -- Original Message --
>> From: "liuzhixin";
>> Date: 2018-11-02 (Friday) 12:53
>> To: "dev";
>> Cc: "ShaoFeng Shi";
>> Subject: Re: Redistribute intermediate table default not by rand()
>> 
>> 
>> 
>> Hi kylin team:
>> 
>> Step: Redistribute intermediate table
>> #
>> By default, the first three dimension fields are chosen as the DISTRIBUTE BY
>> key, instead of DISTRIBUTE BY RAND().
>> If there is no suitable dimension field, this default strategy makes the data
>> distribution even more skewed.
>> 
>> Best Regards!
>> 
>>> On 2018-11-02 at 12:03, liuzhixin wrote:
>>> 
>>> Hi kylin team:
>>> 
>>> Version: Kylin2.5-hadoop3.1 for hdp3.0
>>> #
>>> Step: Redistribute intermediate table
>>> #
>>> DISTRIBUTE BY is that:
>>> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM
>> table_intermediate DISTRIBUTE BY Field1, Field2, Field3;
>>> #
>>> Not DISTRIBUTE BY RAND()
>>> #
>>> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
>>> DISTRIBUTE BY RAND()?
>>> 
>>> Best wishes.
>>> 
> 
> 
> 
> -- 
> Best regards,
> 
> Shaofeng Shi 史少锋




Re: Redistribute intermediate table default not by rand()

2018-11-01 Thread liuzhixin
Hi kylin team:

Step: Redistribute intermediate table
#
By default, the first three dimension fields are chosen as the DISTRIBUTE BY
key, instead of DISTRIBUTE BY RAND().
If there is no suitable dimension field, this default strategy makes the data
distribution even more skewed.

Best Regards!

> On 2018-11-02 at 12:03, liuzhixin wrote:
> 
> Hi kylin team:
> 
> Version: Kylin2.5-hadoop3.1 for hdp3.0
> #
> Step: Redistribute intermediate table
> #
> DISTRIBUTE BY is that:
> INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate 
> DISTRIBUTE BY Field1, Field2, Field3;
> #
> Not DISTRIBUTE BY RAND()
> #
> Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use
> DISTRIBUTE BY RAND()?
> 
> Best wishes.
> 



Redistribute intermediate table default not by rand()

2018-11-01 Thread liuzhixin
Hi kylin team:

Version: Kylin2.5-hadoop3.1 for hdp3.0
#
Step: Redistribute intermediate table
#
DISTRIBUTE BY is that:
INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate 
DISTRIBUTE BY Field1, Field2, Field3;
#
Not DISTRIBUTE BY RAND()
#
Is DISTRIBUTE BY Field1, Field2, Field3 the default? How can I use DISTRIBUTE
BY RAND()? See the sketch below.
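For reference, a minimal sketch of the two variants, reusing the flat-table
name from above (only the asked-about form; not necessarily the exact statement
Kylin generates):

-- Current default: redistribute on the first three rowkey fields.
INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate
DISTRIBUTE BY Field1, Field2, Field3;

-- Random redistribution, as asked about (note KYLIN-3388: rows may be lost
-- or duplicated on task retry, because rand() is re-evaluated).
INSERT OVERWRITE TABLE table_intermediate SELECT * FROM table_intermediate
DISTRIBUTE BY RAND();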

Best wishes.



Re: Kylin 2.5 does not recognize Hive 2.x SQL keywords

2018-10-31 Thread liuzhixin
Hi ShaoFeng Shi:

I created a JIRA for this case.
#
Refer: https://issues.apache.org/jira/browse/KYLIN-3658
#
Thank you for your help.

Best Wishes.

> On 2018-10-31 at 21:31, ShaoFeng Shi wrote:
> 
> Hi zhixin,
> 
> Thanks for letting us know this; Would you like to report a JIRA to Kylin?
> Thank you!
> 
> On Wednesday, 2018-10-31 at 18:48, liuzhixin wrote:
> 
>> Hi kylin team:
>> 
>> Hive 2.3 strictly restricts the use of keywords in SQL; they must be wrapped
>> in backticks, e.g. `date`.
>> 
>> But when Kylin accesses Hive, the generated SQL statements do not wrap SQL
>> keywords in backticks, which causes many problems.
>> 
>> 
>> Best wishes!
> 
> 
> 
> -- 
> Best regards,
> 
> Shaofeng Shi 史少锋




[jira] [Created] (KYLIN-3658) The keywords of Hive are not supported By Kylin

2018-10-31 Thread liuzhixin (JIRA)
liuzhixin created KYLIN-3658:


 Summary: The keywords of Hive are not supported By Kylin
 Key: KYLIN-3658
 URL: https://issues.apache.org/jira/browse/KYLIN-3658
 Project: Kylin
  Issue Type: Bug
Affects Versions: v2.5.0
Reporter: liuzhixin


Hive 2.x strictly restricts SQL keywords, which must be wrapped in backticks,
e.g. `date`, `timestamp` ...

When Kylin accesses Hive, the generated SQL statements do not wrap SQL keywords
in backticks, which causes problems. A minimal illustration follows.
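A sketch of the failure mode, using a hypothetical table name (the exact
statements Kylin generates are not shown in this report):

-- Fails on Hive 2.x: date and timestamp are reserved keywords.
SELECT date, timestamp FROM t_fact;
-- Works: keywords quoted with backticks.
SELECT `date`, `timestamp` FROM t_fact;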

#



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Partition Column for Kylin

2018-10-31 Thread liuzhixin
Hi George Ni, 

Thank you for your answer!

Best Wishes!

> On 2018-10-31 at 21:45, George Ni wrote:
> 
> As far as I know, a partition split across separate columns is not supported yet.
> 
> There is an issue: https://issues.apache.org/jira/browse/KYLIN-3650
> 
> I think dipesh is working on it.
> 
> Best regards,
> 
> Chun’en Ni(George)
> 
> - Original Message -
> From: liuzhixin 
> To: dev@kylin.apache.org
> Cc: 335960...@qq.com
> Sent: Wed, 31 Oct 2018 20:11:34 +0800 (CST)
> Subject: Partition Column for Kylin
> 
> Hi kylin team:
> 
> If my hive table partitions are below:
> #
> PARTITIONED BY (
>  `` string,
>  `mm` string,
>  `dd` string,
>  `re` string,
>  `hh` string)
> #
> How can I set the Partition Date Column and Partition Time Column?
> 
> 
> Best wishes.




Kylin 2.5 does not recognize Hive SQL keywords

2018-10-31 Thread liuzhixin
Hi kylin team:

Hive 2.3 strictly restricts the use of keywords in SQL; they must be wrapped in
backticks, e.g. `date`.

But when Kylin accesses Hive, the generated SQL statements do not wrap SQL
keywords in backticks, which causes many problems.


Best wishes!

Re: MapReduceException: Counter 0

2018-10-29 Thread liuzhixin
Hello kylin team:
#
Maybe the Kylin conf needs hdp.version set to run!
#
Best wishes.

> On 2018-10-22 at 18:44, liuzhixin wrote:
> 
> Hello kylin team: what’s wrong with the Counter 0? Thank you.
> 
> 
> #
> Best wishes!




ArrayIndexOutOfBoundsException for NDCuboidBuilder

2018-10-29 Thread liuzhixin
Hello kylin team:
#
When I test kylin2.5.0-hadoop3.1-hbase2 for hdp3.0
and select Spark as the engine type,
sometimes there are ArrayIndexOutOfBoundsExceptions,
and the error info is below:
#
#
[2018-10-29 11:48:26][INFO][Driver][DAGScheduler:54] Job 1 failed: runJob at 
SparkHadoopWriter.scala:78, took 0.425416 s
[2018-10-29 11:48:26][ERROR][Driver][SparkHadoopWriter:91] Aborting job 
job_20181029114826_0008.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 
5, ip-172-31-40-100.ec2.internal, executor 1): 
java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at 
org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKeyInternal(NDCuboidBuilder.java:106)
at 
org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKey2(NDCuboidBuilder.java:87)
at 
org.apache.kylin.engine.spark.SparkCubingByLayer$CuboidFlatMap.call(SparkCubingByLayer.java:432)
at 
org.apache.kylin.engine.spark.SparkCubingByLayer$CuboidFlatMap.call(SparkCubingByLayer.java:376)
at 
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
at 
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at 
org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
at 
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
at 
org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at 
org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)
at 
org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
...skipping...
[2018-10-29 11:48:26][ERROR][Driver][ApplicationMaster:91] User class threw 
exception: java.lang.RuntimeException: error execute 

Re: MapReduceException: Counter 0

2018-10-25 Thread liuzhixin
Hello,Na Zhai

Step: Kylin: Extract Fact Table Distinct Columns

Kylin can't submit the MR task and throws Counters: 0

#
Best wishes!

> On 2018-10-24 at 21:06, liuzhixin wrote:
> 
> Hello Na Zhai:
> #
> Beeline connects to Hive successfully, and there are no more messages!
> #
> 
> 
> Best wishes!
> 
>> On 2018-10-23 at 11:35, Na Zhai <na.z...@kyligence.io> wrote:
>> 
>> Hi, liuzhixin,
>> By "Beeline connect hive", do you mean the first step of the cube build, the
>> place circled in the picture?
>> If so, can you check whether other cubes build successfully? You should make
>> sure Hive is healthy and that Beeline connects to Hive successfully.
>>  
>> Best wishes!
>>  
>> Sent from Mail for Windows 10
>>  
>> From: liuzhixin <liuz...@163.com>
>> Sent: Monday, October 22, 2018 8:05:09 PM
>> To: Na Zhai
>> Cc: 335960...@qq.com
>> Subject: Re: MapReduceException: Counter 0
>> 主题: Re: MapReduceException: Counter 0
>>  
>> Hello, Beeline connects to Hive
>> 
>> #
>> #
>> 
>> 2018-10-22 20:03:32,945 DEBUG [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] common.HadoopCmdOutput:98 : 
>> Counters: 0
>> 2018-10-22 20:03:32,987 DEBUG [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] common.HadoopCmdOutput:104 : 
>> outputFolder is
>> hdfs://hdfscluster/mnt/kylin/kylin_metadata/kylin-23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f/kylin_sales_mode1/fact_distinct_columns
>> 
>> 2018-10-22 20:03:32,999 DEBUG [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] common.HadoopCmdOutput:109 : Seems 
>> no counter found for hdfs
>> 2018-10-22 20:03:33,014 INFO  [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] execution.ExecutableManager:434 : 
>> job id:23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-02 from RUNNING to ERROR
>> 2018-10-22 20:03:33,015 ERROR [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] execution.AbstractExecutable:165 : 
>> error running Executable: CubingJob{id=23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f, 
>> name=BUILD CUBE - kylin_sales_mode1 - 2012010100_2016010100 - 
>> GMT+08:00 2018-10-22 19:49:39, state=RUNNING}
>> 2018-10-22 20:03:33,020 DEBUG [pool-7-thread-1] cachesync.Broadcaster:113 : 
>> Servers in the cluster: [localhost:7070]
>> 2018-10-22 20:03:33,021 DEBUG [pool-7-thread-1] cachesync.Broadcaster:123 : 
>> Announcing new broadcast to all: BroadcastEvent{entity=execute_output, 
>> event=update, cacheKey=23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f}
>> 2018-10-22 20:03:33,024 INFO  [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] execution.ExecutableManager:434 : 
>> job id:23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f from RUNNING to ERROR
>> 2018-10-22 20:03:33,024 DEBUG [pool-7-thread-1] cachesync.Broadcaster:113 : 
>> Servers in the cluster: [localhost:7070]
>> 2018-10-22 20:03:33,024 DEBUG [Scheduler 1236831559 Job 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f-142] execution.AbstractExecutable:316 : 
>> no need to send email, user list is empty
>> 2018-10-22 20:03:33,024 DEBUG [pool-7-thread-1] cachesync.Broadcaster:123 : 
>> Announcing new broadcast to all: BroadcastEvent{entity=execute_output, 
>> event=update, cacheKey=23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f}
>> 2018-10-22 20:03:33,028 DEBUG [http-nio-7070-exec-10] 
>> cachesync.Broadcaster:247 : Broadcasting UPDATE, execute_output, 
>> 23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f
>> 2018-10-22 20:03:33,029 ERROR [pool-11-thread-1] 
>> threadpool.DefaultScheduler:115 : ExecuteException 
>> job:23b694f2-c7cc-9d0e-4e3d-0b91ce16e21f
>> org.apache.kylin.job.exception.ExecuteException: 
>> org.apache.kylin.job.exception.ExecuteException: 
>> org.apache.kylin.engine.mr.exception.MapReduceException: Counters: 0
>> 
>> at 
>> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:178)
>> at 
>> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:113)
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:748)
>> Caused by: org.apache.kylin.job.exception.ExecuteException: 
>> org.apache.kylin.engine.mr.exception.MapReduceException: Counters: 0
>> 
>> at 
>> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:

blocked because of many connection errors from mysql

2018-10-24 Thread liuzhixin
Hi, kylin team:

There is something wrong with the connection to MySQL.
#
Kylin version: 2.5.0-hadoop3.1 for hbase 2.0
#
NestedThrowables:
java.sql.SQLException: Unable to open a test connection to the given database. 
JDBC url = 
jdbc:mysql://metadb-forhivereplica.c5yzcdreb1xr.us-east-1.rds.amazonaws.com:3306/hive_emr_test?characterEncoding=UTF-8=true,
 username = root. Terminating connection pool (set lazyInit to true if you 
expect to start your database after your app). Original Exception: --
java.sql.SQLException: null,  message from server: "Host '172.31.26.19' is 
blocked because of many connection errors; unblock with 'mysqladmin 
flush-hosts'"
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1095)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2031)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:718)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46)
at sun.reflect.GeneratedConstructorAccessor91.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:302)
at 
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at 
com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at 
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
at 
org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:297)
at sun.reflect.GeneratedConstructorAccessor125.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
at 
org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at 
org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133)
at 
org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:422)
at 
org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:817)
at 
org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:334)
at 
org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:213)
at sun.reflect.GeneratedMethodAccessor147.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1975)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1970)
at 
javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1177)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:814)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:702)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:519)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:548)
at 
org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:403)
at 
org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:340)
at 
org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:301)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
at 
org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at 
org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:624)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:590)
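The server-side error above already points at the fix; a minimal sketch on the
MySQL side, assuming admin privileges (raising max_connect_errors is an extra
assumption, not something taken from this log):

-- Unblock the host that MySQL has locked out:
FLUSH HOSTS;
-- Optionally raise the threshold so transient connection failures do not
-- block the Kylin/Hive host again (the value is only an example):
SET GLOBAL max_connect_errors = 10000;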
  

Re: Something wrong with kylin2.5-hbase2.* for protobuf-java-3.1.0

2018-10-15 Thread liuzhixin
INFO [FetcherRunner 1193270701-99]
threadpool.DefaultFetcherRunner:96 : Job Fetcher: 0 should running, 0 actual
running, 0 stopped, 0 ready, 0 already succeed, 8 error, 1 discarded, 0 others

> On 2018-10-16 at 09:56, ShaoFeng Shi wrote:
> 
> Hi Zx,
> 
> The source code of 1.13.0-kylin-r4 is in Kyligence's fork:
> https://github.com/Kyligence/calcite
> Please provide the full kylin.log; maybe we can find a new clue there.
> 
> On Monday, 2018-10-15 at 15:06, liuzhixin <liuz...@163.com> wrote:
> Hi ShaoFeng Shi:
> 
> This is the original error.
> 
> When I build the cube, it gives the error: java.lang.NoClassDefFoundError:
> com/google/protobuf/GeneratedMessageV3.
> 
> GeneratedMessageV3 comes from protobuf-java 3.1.0, and it is really not
> present in Hive.
> 
> Maybe I should assemble it into atopcalcite.
> 
> But I can run the hive -e command successfully.
> 
> #
> Time taken: 0.218 seconds
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/google/protobuf/GeneratedMessageV3
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
>   at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
>   at 
> org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
>   at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
>   at 
> org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
>   at 
> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
>   at 
> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
>   at 
> org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
>   at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686

Re: Something wrong with kylin2.5-hbase2.* for protobuf-java-3.1.0

2018-10-15 Thread liuzhixin
Hi ShaoFeng Shi:

This is the original error.

When I build the cube, it gives the error: java.lang.NoClassDefFoundError:
com/google/protobuf/GeneratedMessageV3.

GeneratedMessageV3 comes from protobuf-java 3.1.0, and it is really not present
in Hive.

Maybe I should assemble it into atopcalcite.

But I can run the hive -e command successfully.

#
Time taken: 0.218 seconds
Exception in thread "main" java.lang.NoClassDefFoundError: 
com/google/protobuf/GeneratedMessageV3
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
at 
org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
at 
org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
at 
org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.ClassNotFoundException: 
com.google.protobuf.GeneratedMessageV3
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 51 more
The command is:
hive -e "USE default;


> On 2018-10-15 at

Re: Something wrong with kylin2.5-hbase2.* for protobuf-java-3.1.0

2018-10-15 Thread liuzhixin
Hi Shaofeng:

Yes, I can run the command well in hive shell. 

I can’t find calcite-core version 1.13.0-kylin-r4. 

Best wishes.

> On 2018-10-15 at 14:20, ShaoFeng Shi wrote:
> 
> Can Hive 2.3.3 work well with HDP 3? Can you try the HiveQL that
> Kylin executed, outside of Kylin? If it works, then something is
> wrong in Kylin.
> 
> On Monday, 2018-10-15 at 13:47, liuzhixin <liuz...@163.com> wrote:
> 
>> Thank you for the answer!
>> 
>> I can’t decide the hive version.
>> 
>> And the hive version 2.3.3 can work well with HDP 3.
>> 
>> Perhaps you can test Kylin with hive version 2.3.3.
>> 
>> Maybe it's some other error. Thanks!
>> 
>> Best wishes!
>> 
>> 
>> On 2018-10-15 at 13:24, ShaoFeng Shi wrote:
>> 
>> Hi zhixin,
>> 
>> I think the problem is how to run Hive 2 with HDP 3, no relation with
>> Kylin.
>> 
>> Usually, we don't encourage users to customize the component version in a
>> release, because that may bring dependency conflicts.
>> 
>> I suggest you use the original Hive version in HDP 3.
>> 
>> On Monday, 2018-10-15 at 11:25, liuzhixin <liuz...@163.com> wrote:
>> 
>>> Hi ShaoFeng Shi
>>> 
>>> Yes, the error from hive version 2.3.3,
>>> 
>>> And Kylin needs hive version 3.1.0.
>>> 
>>> So how to solve the question?
>>> 
>>> Best wishes!
>>> 
>>>> On 2018-10-15 at 11:10, ShaoFeng Shi <shaofeng...@apache.org> wrote:
>>>> 
>>>> Hi Zhixin,
>>>> 
>>>> The error log is thrown from Hive, not from Kylin I think. Please verify
>>>> your hive is properly installed; You can manually run that hive command
>>> :
>>>> 
>>>> hive -e "use default; xxx"
>>>> 
>>>> On Monday, 2018-10-15 at 11:01, Lijun Cao <641507...@qq.com> wrote:
>>>> 
>>>>> Hi liuzhixin:
>>>>> 
>>>>> As I remember, the Hive version in HDP 3 is 3.1.0 .
>>>>> 
>>>>> You can update Hive to 3.1.0 and then have another try.
>>>>> 
>>>>> And according to my previous test, the binary package
>>>>> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You
>>> can
>>>>> get it from the official site.
>>>>> 
>>>>> Best Regards
>>>>> 
>>>>> Lijun Cao
>>>>> 
>>>>>> On 2018-10-15 at 10:22, liuzhixin <liuz...@163.com> wrote:
>>>>>> 
>>>>>> Hi Cao Lijun,
>>>>>> #
>>>>>> the platform is ambari hdp3.0, hive is 2.3.3, and the hbase version is 2.0
>>>>>> 
>>>>>> I have compiled the source code with hive 2.3.3,
>>>>>> 
>>>>>> but the module atopcalcite depends on protobuf 3.1.0,
>>>>>> 
>>>>>> while other modules depend on protobuf 2.5.0.
>>>>>> 
>>>>>> 
>>>>>>> On 2018-10-15 at 08:40, Lijun Cao <641507...@qq.com> wrote:
>>>>>>> 
>>>>>>> Hi liuzhixin:
>>>>>>> 
>>>>>>> Which platform did you use?
>>>>>>> 
>>>>>>> The CDH 6.0.x or HDP 3.0 ?
>>>>>>> 
>>>>>>> Best Regards
>>>>>>> 
>>>>>>> Lijun Cao
>>>>>>> 
>>>>>>>> On 2018-10-12 at 21:14, liuzhixin <liuz...@163.com> wrote:
>>>>>>>> 
>>>>>>>> Logging initialized using configuration in
>>>>> 
>>> file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties
>>>>> Async: true
>>>>>>>> OK
>>>>>>>> Time taken: 4.512 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 1.511 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.272 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.185 seconds
>>>>>>>> Exception in thread "main" java.lang.NoSuchMethodError:
>>>>> com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
>>>>>>&

Re: Something wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Thank you for the answer!

I can’t decide the hive version.

And the hive version 2.3.3 can work well with HDP 3.

Perhaps you can test Kylin with hive version 2.3.3.

Maybe it's some other error. Thanks!

Best wishes!


> On 2018-10-15 at 13:24, ShaoFeng Shi wrote:
> 
> Hi zhixin,
> 
> I think the problem is how to run Hive 2 with HDP 3; it has no relation to Kylin.
> 
> Usually, we don't encourage users to customize the component version in a 
> release, because that may bring dependency conflicts.
> 
> I suggest you use the original Hive version in HDP 3.
> 
> On Monday, 2018-10-15 at 11:25, liuzhixin <liuz...@163.com> wrote:
> Hi ShaoFeng Shi
> 
> Yes, the error is from hive version 2.3.3,
> 
> and Kylin needs hive version 3.1.0.
> 
> So how to solve the question?
> 
> Best wishes!
> 
> > On 2018-10-15 at 11:10, ShaoFeng Shi <shaofeng...@apache.org> wrote:
> > 
> > Hi Zhixin,
> > 
> > The error log is thrown from Hive, not from Kylin I think. Please verify
> > your hive is properly installed; You can manually run that hive command :
> > 
> > hive -e "use default; xxx"
> > 
> > On Monday, 2018-10-15 at 11:01, Lijun Cao <641507...@qq.com> wrote:
> > 
> >> Hi liuzhixin:
> >> 
> >> As I remember, the Hive version in HDP 3 is 3.1.0 .
> >> 
> >> You can update Hive to 3.1.0 and then have another try.
> >> 
> >> And according to my previous test, the binary package
> >> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can
> >> get it from the official site.
> >> 
> >> Best Regards
> >> 
> >> Lijun Cao
> >> 
> >>> On 2018-10-15 at 10:22, liuzhixin <liuz...@163.com> wrote:
> >>> 
> >>> Hi Cao Lijun,
> >>> #
> >>> the platform is ambari hdp3.0, hive is 2.3.3, and the hbase version is 2.0
> >>> 
> >>> I have compiled the source code with hive 2.3.3,
> >>> 
> >>> but the module atopcalcite depends on protobuf 3.1.0,
> >>> 
> >>> while other modules depend on protobuf 2.5.0.
> >>> 
> >>> 
> >>>> On 2018-10-15 at 08:40, Lijun Cao <641507...@qq.com> wrote:
> >>>> 
> >>>> Hi liuzhixin:
> >>>> 
> >>>> Which platform did you use?
> >>>> 
> >>>> The CDH 6.0.x or HDP 3.0 ?
> >>>> 
> >>>> Best Regards
> >>>> 
> >>>> Lijun Cao
> >>>> 
> >>>>> On 2018-10-12 at 21:14, liuzhixin <liuz...@163.com> wrote:
> >>>>> 
> >>>>> Logging initialized using configuration in
> >> file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties
> >> Async: true
> >>>>> OK
> >>>>> Time taken: 4.512 seconds
> >>>>> OK
> >>>>> Time taken: 1.511 seconds
> >>>>> OK
> >>>>> Time taken: 0.272 seconds
> >>>>> OK
> >>>>> Time taken: 0.185 seconds
> >>>>> Exception in thread "main" java.lang.NoSuchMethodError:
> >> com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
> >>>>>at
> >> com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.<init>(GeneratedMessageV3.java:1704)
> >>>>>at
> >> org.apache.calcite.avatica.proto.Common.<clinit>(Common.java:18927)
> >>>>>at
> >> org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
> >>>>>at
> >> org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
> >>>>>at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
> >>>>>at
> >> org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
> >>>>>at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
> >>>>>at
> >> org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
> >>>>>at
> >> org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
> >>>>>at
> >> org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
> >>>>>at
> >> org.apache.

Re: Something wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Hi ShaoFeng Shi

Yes, the error is from hive version 2.3.3,

and Kylin needs hive version 3.1.0.

So how to solve the question?

Best wishes!

> On 2018-10-15 at 11:10, ShaoFeng Shi wrote:
> 
> Hi Zhixin,
> 
> The error log is thrown from Hive, not from Kylin I think. Please verify
> your hive is properly installed; You can manually run that hive command :
> 
> hive -e "use default; xxx"
> 
> On Monday, 2018-10-15 at 11:01, Lijun Cao <641507...@qq.com> wrote:
> 
>> Hi liuzhixin:
>> 
>> As I remember, the Hive version in HDP 3 is 3.1.0 .
>> 
>> You can update Hive to 3.1.0 and then have another try.
>> 
>> And according to my previous test, the binary package
>> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can
>> get it from the official site.
>> 
>> Best Regards
>> 
>> Lijun Cao
>> 
>>> On 2018-10-15 at 10:22, liuzhixin wrote:
>>> 
>>> Hi Cao Lijun,
>>> #
>>> the platform is ambari hdp3.0, hive is 2.3.3, and the hbase version is 2.0
>>> 
>>> I have compiled the source code with hive 2.3.3,
>>> 
>>> but the module atopcalcite depends on protobuf 3.1.0,
>>> 
>>> while other modules depend on protobuf 2.5.0.
>>> 
>>> 
>>>> On 2018-10-15 at 08:40, Lijun Cao <641507...@qq.com> wrote:
>>>> 
>>>> Hi liuzhixin:
>>>> 
>>>> Which platform did you use?
>>>> 
>>>> The CDH 6.0.x or HDP 3.0 ?
>>>> 
>>>> Best Regards
>>>> 
>>>> Lijun Cao
>>>> 
>>>>> On 2018-10-12 at 21:14, liuzhixin wrote:
>>>>> 
>>>>> Logging initialized using configuration in
>> file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties
>> Async: true
>>>>> OK
>>>>> Time taken: 4.512 seconds
>>>>> OK
>>>>> Time taken: 1.511 seconds
>>>>> OK
>>>>> Time taken: 0.272 seconds
>>>>> OK
>>>>> Time taken: 0.185 seconds
>>>>> Exception in thread "main" java.lang.NoSuchMethodError:
>> com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
>>>>>at
>> com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.<init>(GeneratedMessageV3.java:1704)
>>>>>at
>> org.apache.calcite.avatica.proto.Common.<clinit>(Common.java:18927)
>>>>>at
>> org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
>>>>>at
>> org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
>>>>>at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
>>>>>at
>> org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
>>>>>at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
>>>>>at
>> org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
>>>>>at
>> org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
>>>>>at
>> org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
>>>>>at
>> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
>>>>>at
>> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
>>>>>at
>> org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
>>>>>at
>> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>>>>>at java.sql.DriverManager.getConnection(DriverManager.java:664)
>>>>>at java.sql.DriverManager.getConnection(DriverManager.java:208)
>>>>>at
>> org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
>>>>>at
>> org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>>>>>at
>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>>>>>at
>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>>>>>at
>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>>>>>at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>>>>>at
>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInter

Re: Something wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Hi Cao Lijun

Yeah! You are right.

Our platform uses ambari-hdp3.0, but with standalone hive 2.3.3.

So I need to compile Kylin for hive version 2.3.3.

And now it's not compatible with protobuf-java version 3.1.0, which comes from
atopcalcite.

Best wishes to you.

> On 2018-10-15 at 11:00, Lijun Cao <641507...@qq.com> wrote:
> 
> Hi liuzhixin:
> 
> As I remember, the Hive version in HDP 3 is 3.1.0 . 
> 
> You can update Hive to 3.1.0 and then have another try.
> 
> And according to my previous test, the binary package 
> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can get 
> it from the official site.
> 
> Best Regards
> 
> Lijun Cao
> 
>> On 2018-10-15 at 10:22, liuzhixin wrote:
>> 
>> Hi Cao Lijun,
>> #
>> the platform is ambari hdp3.0, hive is 2.3.3, and the hbase version is 2.0
>> 
>> I have compiled the source code with hive 2.3.3, 
>> 
>> but the module atopcalcite depends on protobuf 3.1.0,
>> 
>> while other modules depend on protobuf 2.5.0. 
>> 
>> 
>>> On 2018-10-15 at 08:40, Lijun Cao <641507...@qq.com> wrote:
>>> 
>>> Hi liuzhixin:
>>> 
>>> Which platform did you use?
>>> 
>>> The CDH 6.0.x or HDP 3.0 ? 
>>> 
>>> Best Regards
>>> 
>>> Lijun Cao
>>> 
>>>> On 2018-10-12 at 21:14, liuzhixin wrote:
>>>> 
>>>> Logging initialized using configuration in 
>>>> file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties 
>>>> Async: true
>>>> OK
>>>> Time taken: 4.512 seconds
>>>> OK
>>>> Time taken: 1.511 seconds
>>>> OK
>>>> Time taken: 0.272 seconds
>>>> OK
>>>> Time taken: 0.185 seconds
>>>> Exception in thread "main" java.lang.NoSuchMethodError: 
>>>> com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
>>>>at 
>>>> com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.<init>(GeneratedMessageV3.java:1704)
>>>>at org.apache.calcite.avatica.proto.Common.<clinit>(Common.java:18927)
>>>>at 
>>>> org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
>>>>at 
>>>> org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
>>>>at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
>>>>at 
>>>> org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
>>>>at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
>>>>at 
>>>> org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
>>>>at 
>>>> org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
>>>>at 
>>>> org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
>>>>at 
>>>> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
>>>>at 
>>>> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
>>>>at 
>>>> org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
>>>>at 
>>>> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>>>>at java.sql.DriverManager.getConnection(DriverManager.java:664)
>>>>at java.sql.DriverManager.getConnection(DriverManager.java:208)
>>>>at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
>>>>at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>>>>at 
>>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>>>>at 
>>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>>>>at 
>>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>>>>at 
>>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>>>>at 
>>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>>>>at 
>>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>>>>at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>>>>at org.apache.had

Something wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-12 Thread liuzhixin
Logging initialized using configuration in 
file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties 
Async: true
OK
Time taken: 4.512 seconds
OK
Time taken: 1.511 seconds
OK
Time taken: 0.272 seconds
OK
Time taken: 0.185 seconds
Exception in thread "main" java.lang.NoSuchMethodError: 
com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
at 
com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.<init>(GeneratedMessageV3.java:1704)
at org.apache.calcite.avatica.proto.Common.<clinit>(Common.java:18927)
at 
org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
at 
org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
at 
org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
at 
org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
at 
org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
The command is:
hive -e "USE default;


[jira] [Created] (KYLIN-3627) calcite-core-1.13.0-kylin-r4

2018-10-11 Thread liuzhixin (JIRA)
liuzhixin created KYLIN-3627:


 Summary: calcite-core-1.13.0-kylin-r4
 Key: KYLIN-3627
 URL: https://issues.apache.org/jira/browse/KYLIN-3627
 Project: Kylin
  Issue Type: Bug
Reporter: liuzhixin


Where can I find calcite-core version 1.13.0-kylin-r4?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)