Spark SQL supports a very basic join reordering optimization based on the raw table data size; this was added a couple of major releases back.
The “EXPLAIN EXTENDED query” command is a very informative tool to verify whether the optimization is taking effect.
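To illustrate the idea only (this is not Spark's actual planner code), here is a minimal Python sketch of size-based join reordering: given the raw sizes of the tables involved, join the smaller tables first. The table names and byte sizes below are made up for the example.

```python
# Toy sketch of size-based join reordering (NOT Spark's implementation):
# order a multi-way join by ascending raw table size, so the smallest
# tables are joined first. Table names and sizes are hypothetical.

def reorder_joins(tables):
    """Return the join order sorted by ascending raw size in bytes.

    `tables` is a list of (table_name, size_in_bytes) pairs.
    """
    return sorted(tables, key=lambda t: t[1])

if __name__ == "__main__":
    tables = [
        ("fact_sales", 10_000_000_000),  # hypothetical 10 GB fact table
        ("dim_store", 2_000_000),        # hypothetical 2 MB dimension
        ("dim_date", 50_000),            # hypothetical 50 KB dimension
    ]
    for name, size in reorder_joins(tables):
        print(name, size)
```

In Spark itself you would not call anything like this directly; instead you run `EXPLAIN EXTENDED <query>` on your SQL and inspect the optimized logical plan to see which join order the optimizer actually picked.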
From: Raajay
Sent: October 12, 2015 10:17 AM
To: Cheng, Hao
Cc: user@spark.apache.org
Subject: Re: Join Order Optimization

Hi Cheng,

Could you point me to the JIRA that introduced this change?

Also, is this SPARK-2211 the right issue to follow for cost-based optimization?
Probably you have to read the source code; I am not sure if there are any .ppt files or slides.
Hao
From: VJ Anand [mailto:vjan...@sankia.com]
Sent: Monday, October 12, 2015 11:43 AM
To: Cheng, Hao
Cc: Raajay; user@spark.apache.org
Subject: Re: Join Order Optimization
Hi - Is there a design document?
Hi Cheng,
Could you point me to the JIRA that introduced this change?
Also, is this SPARK-2211 the right issue to follow for cost-based
optimization?
Thanks
Raajay
On Sun, Oct 11, 2015 at 7:57 PM, Cheng, Hao wrote:
> Spark SQL supports very basic join reordering