Kylin auto refresh is not working

2016-07-22 Thread Karthigeyan K
Hi,
   I am trying to use incremental cube building.
   I have a Hive table partitioned by date. I created and built a Kylin cube
for it.
   I attached an image of my cube refresh settings. I am trying to auto-merge
new data records every 30 minutes.
I added a new partition (date=2016-07-21) to the Hive table with one more
record. When I do a count(*) in Hive, the added rows are reflected in the
result, but the Kylin cube is not updated even after many hours.
I don't know what I am missing here; your kind help is appreciated.


[image: Inline image 2]
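A note for readers hitting the same behavior: Kylin's auto-merge setting only
consolidates segments that have already been built; it does not detect new
Hive partitions on its own. A new segment has to be built explicitly, either
from the web UI or on a schedule through the REST API. A minimal sketch of
such a trigger follows; the host, cube name, and ADMIN:KYLIN credentials are
placeholders, not values from this thread.

```shell
# Sketch of a scheduled incremental-build trigger via Kylin's REST API.
# KYLIN_HOST, CUBE_NAME, and the ADMIN:KYLIN credentials are placeholders.
KYLIN_HOST="http://localhost:7070"
CUBE_NAME="transactions_demo_cube"

# Kylin expects the segment range as epoch milliseconds (GMT).
START_MS=$(( $(date -u -d "2016-07-21 00:00:00" +%s) * 1000 ))
END_MS=$((   $(date -u -d "2016-07-22 00:00:00" +%s) * 1000 ))

# Assemble the rebuild request and print it for review instead of sending it.
REBUILD_CMD="curl -X PUT -u ADMIN:KYLIN -H 'Content-Type: application/json' \
-d '{\"startTime\": ${START_MS}, \"endTime\": ${END_MS}, \"buildType\": \"BUILD\"}' \
${KYLIN_HOST}/kylin/api/cubes/${CUBE_NAME}/rebuild"
echo "${REBUILD_CMD}"
```

Running a command like this from cron every 30 minutes, with a sliding time
range, is one way to approximate the "auto refresh" the settings page seems
to promise.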

Thanks,
Karthigeyan.


Re: Kylin job failed

2016-07-11 Thread Karthigeyan K
Thanks, ShaoFeng Shi. It's working after I renamed the Hive table.
Yes, TRANSACTIONS is a non-reserved keyword in Hive.
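For anyone landing here with the same parse error, the rename workaround
amounts to something like the following; "transactions_fact" is a hypothetical
name, and after renaming you would reload the table in Kylin and remap the
model before rebuilding the cube.

```shell
# Rename the fact table so its name is no longer a HiveQL keyword; Kylin
# generates "... as <table name>" aliases, which fail for keyword names.
# "transactions_fact" is a placeholder; any non-keyword identifier works.
RENAME_SQL="USE default; ALTER TABLE transactions RENAME TO transactions_fact;"

# Print the hive invocation for review before running it for real.
echo hive -e "\"${RENAME_SQL}\""
```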

On Mon, Jul 11, 2016 at 8:28 PM, ShaoFeng Shi <shaofeng...@gmail.com> wrote:

> (Continued. ) did you try to rename the fact table?
>
> Regards,
>
> Shaofeng Shi
>
> shaofeng...@gmail.com
>
> From Outlook Mobile
>
>
>
>
> On Mon, Jul 11, 2016 at 10:55 PM +0800, "ShaoFeng Shi" <
> shaofeng...@gmail.com> wrote:
>
> FAILED: ParseException line 10:29 mismatched input
> 'TRANSACTIONS' expecting Identifier near 'as' in table source
> Interesting, is "transactions" a keyword in Hive? We used to use
> "fact_table", "lookup_1", "lookup_2" as the aliases, but changed to using
> the table name for better readability; could you please open a JIRA for
> tracking?
> To bypass it,
> Regards,
>
> Shaofeng Shi
>
> shaofeng...@gmail.com
>
> From Outlook Mobile
>
>
>
>
> On Mon, Jul 11, 2016 at 2:30 PM +0800, "Karthigeyan K" <
> karthigeyan.t...@gmail.com> wrote:
>
> Hi,
> I was able to build the cube using only the fact table, without lookup
> tables.
>
> But it's failing when I add the lookup tables.
> I have one fact table and 2 lookup tables.
>
> The problem is the AS keyword used with the JOIN conditions, because the
> same query ran successfully when I ran it manually in Hive after removing
> those table aliases.
> How can I fix this in Kylin?
> I pasted the entire log below. Kind help is appreciated.
>
> Thanks,
> Karthigeyan.
>
> [log snipped; the full log appears in the "Kylin job failed" message below]

Kylin job failed

2016-07-11 Thread Karthigeyan K
Hi,
I was able to build the cube using only the fact table, without lookup
tables.

But it's failing when I add the lookup tables.
I have one fact table and 2 lookup tables.

The problem is the AS keyword used with the JOIN conditions, because the
same query ran successfully when I ran it manually in Hive after removing
those table aliases.
How can I fix this in Kylin?
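The manual workaround described above can be reproduced by hand: keep an
alias, but pick one that is not a HiveQL keyword. The query below is a
trimmed sketch of the generated SQL from the log, with a hypothetical alias
`txn` in place of `TRANSACTIONS`; it is not the statement Kylin itself emits.

```shell
# Trimmed version of Kylin's generated query using a non-keyword alias
# ("txn" is hypothetical); the original "as TRANSACTIONS" alias is what
# trips Hive's parser in the log below.
FIXED_QUERY="SELECT txn.CUSTOMERID, txn.PRODUCTID, txn.SALE
FROM DEFAULT.TRANSACTIONS as txn
LEFT JOIN DEFAULT.PRODUCT as PRODUCT
  ON txn.PRODUCTID = PRODUCT.PRODUCTID"

# Print the hive invocation for review rather than executing it here.
echo hive -e "\"${FIXED_QUERY}\""
```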

I pasted the entire log below. Kind help is appreciated.

Thanks,
Karthigeyan.


OS command error exit with 64 -- hive -e "USE default;
DROP TABLE IF EXISTS
kylin_intermediate_transactions_demo_cube_1970010100_2922789940817071255;

CREATE EXTERNAL TABLE IF NOT EXISTS
kylin_intermediate_transactions_demo_cube_1970010100_2922789940817071255
(
DEFAULT_TRANSACTIONS_CUSTOMERID string
,DEFAULT_TRANSACTIONS_PRODUCTID string
,DEFAULT_TRANSACTIONS_PURCHASEDATE date
,DEFAULT_TRANSACTIONS_QUANTITY int
,DEFAULT_TRANSACTIONS_PRICE double
,DEFAULT_TRANSACTIONS_SALE double
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\177'
STORED AS SEQUENCEFILE
LOCATION 
'/kylin/kylin_metadata/kylin-e6854c92-1e73-41e1-b0da-0e33f18dbfec/kylin_intermediate_transactions_demo_cube_1970010100_2922789940817071255';

SET dfs.replication=2;
SET dfs.block.size=33554432;
SET hive.exec.compress.output=true;
SET hive.auto.convert.join.noconditionaltask=true;
SET hive.auto.convert.join.noconditionaltask.size=3;
SET mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET mapred.output.compression.type=BLOCK;
SET hive.merge.size.per.task=25600;
SET hive.support.concurrency=false;
SET mapreduce.job.split.metainfo.maxsize=-1;

INSERT OVERWRITE TABLE
kylin_intermediate_transactions_demo_cube_1970010100_2922789940817071255
SELECT
TRANSACTIONS.CUSTOMERID
,TRANSACTIONS.PRODUCTID
,TRANSACTIONS.PURCHASEDATE
,TRANSACTIONS.QUANTITY
,TRANSACTIONS.PRICE
,TRANSACTIONS.SALE
FROM DEFAULT.TRANSACTIONS as TRANSACTIONS
LEFT JOIN DEFAULT.PRODUCT as PRODUCT
ON TRANSACTIONS.PRODUCTID = PRODUCT.PRODUCTID
LEFT JOIN DEFAULT.CUSTOMER as CUSTOMER
ON TRANSACTIONS.CUSTOMERID = CUSTOMER.CUSTOMERID
;

"
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/hdp/2.3.2.1-12/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/hdp/2.3.2.1-12/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/09 17:09:08 WARN conf.HiveConf: HiveConf of name
hive.optimize.mapjoin.mapreduce does not exist
16/07/09 17:09:08 WARN conf.HiveConf: HiveConf of name hive.heapsize
does not exist
16/07/09 17:09:08 WARN conf.HiveConf: HiveConf of name
hive.metastore.local does not exist
16/07/09 17:09:08 WARN conf.HiveConf: HiveConf of name
hive.auto.convert.sortmerge.join.noconditionaltask does not exist
ivysettings.xml file not found in HIVE_HOME or
HIVE_CONF_DIR,file:/usr/hdp/2.3.2.0-2950/hadoop/lib/hadoop-lzo-0.6.0.2.3.2.0-2950-sources.jar!/ivysettings.xml
will be used

Logging initialized using configuration in
jar:file:/usr/hdp/2.3.2.0-2950/hive/lib/hive-common-1.2.1.2.3.2.0-2950.jar!/hive-log4j.properties
OK
Time taken: 2.28 seconds
OK
Time taken: 0.497 seconds
OK
Time taken: 0.527 seconds
MismatchedTokenException(262!=26)
at 
org.antlr.runtime.BaseRecognizer.recoverFromMismatchedToken(BaseRecognizer.java:617)
at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.tableSource(HiveParser_FromClauseParser.java:4608)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource(HiveParser_FromClauseParser.java:3729)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.joinSource(HiveParser_FromClauseParser.java:1873)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromClause(HiveParser_FromClauseParser.java:1518)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:45857)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:41519)
at