Hi, all:

I'm using the Spark engine to build a cube. I found that the build-time bottleneck lies in step #3, "Extract Fact Table Distinct Columns". When I look into the Spark application, I see only two splits, regardless of how large the input sequence file is. I wonder how to increase the split number.
Hi:

I'm sorry the picture is broken again; I am uploading it as an attachment this time.

--
Best regards,
Xi Chen

From: 陈熹(chenxi07)-技术产品中心
Sent: Monday, November 5, 2018 3:04 PM
To: dev@kylin.apache.org
Subject: How to increase split number for Fact distinct columns when using spark engine? (picture added)
Hi, Shaofeng:

Thank you for your suggestion! I'll give it a try.

--
Best regards,
Xi Chen

-----Original Message-----
From: ShaoFeng Shi
Sent: Monday, November 5, 2018 4:06 PM
To: dev
Subject: Re: How to increase split number for Fact distinct columns when using spark engine? (picture added)
From: Support DrakosData
Sent: Monday, November 5, 2018 4:01 PM
To: dev@kylin.apache.org; 陈熹(chenxi07)-技术产品中心
Subject: Re: How to increase split number for Fact distinct columns when using spark engine? (picture added)

Hi Xi Chen,

I think you are referring to 'kylin.engine.spark.rdd-partition-c
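For readers finding this thread later: the truncated setting above is presumably Kylin's `kylin.engine.spark.rdd-partition-cut-mb`. As a rough, unofficial reading of the Kylin Spark engine's behavior, the partition count for this step appears to be derived from the estimated input size divided by that cut value, clamped by `kylin.engine.spark.min-partition` and `kylin.engine.spark.max-partition`. Here is a minimal sketch of that assumed heuristic; the function name, default values, and formula are my own illustration, not the authoritative Kylin implementation:

```python
import math

def estimate_partitions(input_size_mb: float,
                        cut_mb: float = 10.0,       # kylin.engine.spark.rdd-partition-cut-mb (assumed default)
                        min_partition: int = 1,     # kylin.engine.spark.min-partition (assumed)
                        max_partition: int = 5000): # kylin.engine.spark.max-partition (assumed)
    """Sketch of the assumed heuristic:
    partitions = ceil(input size / cut size), clamped to [min, max]."""
    raw = math.ceil(input_size_mb / cut_mb)
    return max(min_partition, min(max_partition, raw))

# With the assumed 10 MB default, a 20 MB estimated input yields only
# 2 partitions -- consistent with the "only two splits" symptom.
# Lowering cut_mb would raise the partition count and parallelism.
print(estimate_partitions(20))            # 2 with the assumed 10 MB cut
print(estimate_partitions(20, cut_mb=2))  # 10
```

If this reading is right, decreasing `kylin.engine.spark.rdd-partition-cut-mb` in kylin.properties (or verifying the estimated input size for the step) is the lever to try first.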