> The relevant configuration options are documented here:
>
>
> http://spark.apache.org/docs/latest/sql-programming-guide.html#other-configuration-options
>
> Thanks,
>
> Jagat Singh
>
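For readers following this thread: the option discussed on that page can be set when building the session or changed at runtime. A minimal PySpark sketch (the values are illustrative; `spark.sql.autoBroadcastJoinThreshold` is the documented option, and `-1` disables broadcasting entirely):

```python
# Sketch: tuning the broadcast threshold described on the linked page.
# Assumes PySpark is installed; values are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("join-config-demo")
    # Broadcast any table smaller than 50 MB (the default is 10 MB).
    .config("spark.sql.autoBroadcastJoinThreshold", 50 * 1024 * 1024)
    .getOrCreate()
)

# Setting the threshold to -1 disables broadcast joins altogether,
# which pushes the planner toward shuffle-based joins instead.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
```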
> On Sat, Jul 9, 2016 at 9:50 AM, Lalitha MV <lalitham...@gmail.com> wrote:
>
>> Hi,
>>
>> 1. What implementation is used for the hash join -- is it classic hash join
>> or hybrid grace hash join?
>> 2. If the hash table does not fit in memory, does it spill or does it fail?
>> Are there parameters to control this (for example, to set the percentage of
>> the hash table that can spill)?
>> 3. Is
>
> the following precedence:
> * - BroadcastNestedLoopJoin: if one side of the join could be broadcasted
> * - CartesianProduct: for Inner join
> * - BroadcastNestedLoopJoin
> */
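On question 1 above: a classic hash join builds one in-memory table from the smaller side and probes it, while a grace-style join first hash-partitions both sides so each partition can be joined independently (and, in a real engine, spilled to disk). A toy, illustrative Python sketch of the partitioning idea -- not Spark's actual implementation:

```python
from collections import defaultdict

def grace_hash_join(build, probe, key_build, key_probe, num_partitions=4):
    """Toy grace-style hash join: hash-partition both inputs, then do a
    classic in-memory hash join per partition. In a real engine the
    partitions would be written to disk; here they stay in memory."""
    build_parts = defaultdict(list)
    probe_parts = defaultdict(list)
    for row in build:
        build_parts[hash(key_build(row)) % num_partitions].append(row)
    for row in probe:
        probe_parts[hash(key_probe(row)) % num_partitions].append(row)

    out = []
    for p in range(num_partitions):
        # Classic hash join within one partition: build, then probe.
        table = defaultdict(list)
        for row in build_parts[p]:
            table[key_build(row)].append(row)
        for row in probe_parts[p]:
            for match in table.get(key_probe(row), []):
                out.append((match, row))
    return out

users = [(1, "ann"), (2, "bob")]
orders = [(10, 1), (11, 1), (12, 2)]
joined = grace_hash_join(users, orders, key_build=lambda r: r[0],
                         key_probe=lambda r: r[1])
# Each order row is paired with its matching user row.
```

Because matching keys always hash to the same partition, the per-partition joins together produce exactly the full join result.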
>
>
>
> On Jul 5, 2016, at 13:28, Lalitha MV <lalitham...@gmail.com> wrote:
>
>
joins are used in queries.
>
> // maropu
>
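To check which join was picked, as suggested above, the physical plan names the chosen operator (e.g. BroadcastHashJoin, SortMergeJoin, ShuffledHashJoin). A hedged PySpark sketch, assuming a running session named `spark`:

```python
# Assumes a SparkSession named `spark` already exists.
left = spark.range(1000).withColumnRenamed("id", "k")
right = spark.range(100).withColumnRenamed("id", "k")

joined = left.join(right, "k")
# explain() prints the physical plan; the join node in the output
# shows which join implementation the planner selected.
joined.explain()
```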
> On Tue, Jul 5, 2016 at 4:23 AM, Lalitha MV <lalitham...@gmail.com> wrote:
>
>> Hi maropu,
>>
>> Thanks for your reply.
>>
>> Would it be possible to write a rule for this, to make it always pick
>> shuffle hash join?
On Sat, Jul 2, 2016 at 12:58 AM, Takeshi Yamamuro <linguin@gmail.com>
wrote:
> Hi,
>
> No, spark has no hint for the hash join.
>
> // maropu
>
> On Fri, Jul 1, 2016 at 4:56 PM, Lalitha MV <lalitham...@gmail.com> wrote:
>
>> Hi,
>>
>> In order to force a broadcast hash join, we can set
>> the spark.sql.autoBroadcastJoinThreshold config. Is there a way to enforce
>> a shuffle hash join in Spark SQL?
>>
>> Thanks,
>> Lalitha
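On the original question: besides the global threshold, a broadcast can be requested per join with the `broadcast()` hint, and disabling broadcasting is one way to steer the planner toward shuffle-based joins. A sketch, assuming a running session named `spark` and DataFrames `large_df` and `small_df`; note that `spark.sql.join.preferSortMergeJoin` is an internal Spark 2.0 option whose effect may vary by version:

```python
from pyspark.sql.functions import broadcast

# Force a broadcast hash join for one specific join, regardless of size.
joined = large_df.join(broadcast(small_df), "k")

# Conversely, steer the planner away from broadcast joins entirely:
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
# In Spark 2.0, the internal option below, when false, lets the planner
# consider ShuffledHashJoin instead of always preferring SortMergeJoin.
spark.conf.set("spark.sql.join.preferSortMergeJoin", False)
```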