> On Nov. 5, 2014, 10:41 p.m., Szehon Ho wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 201
> > <https://reviews.apache.org/r/27640/diff/1/?file=750693#file750693line201>
> >
> >     Hi Suhas, I was taking a look through the code, and I don't think it's 
> > easy right now to identify which is the big-table parent vs. the small-table 
> > parent.  There is a HashTableDummyOperator representing the small table, but 
> > it only carries some basic information.
> >     
> >     Maybe you know more about it, but I was wondering: do we need to save 
> > the info to a context when we cut the small-table RS from the MapJoin in 
> > ReduceSinkMapJoinProc?  Thanks.
> 
> Suhas Satish wrote:
>     Hi Szehon,
>     GenSparkProcContext has this - 
>       // we need to keep the original list of operators in the map join to 
> know
>       // what position in the mapjoin the different parent work items will 
> have.
>       public final Map<MapJoinOperator, List<Operator<?>>> mapJoinParentMap;
>       
>     There is also another data structure in GenSparkProcContext to keep track 
> of which MapJoinWork is connected to which ReduceSinks. 
>       // a map to keep track of what reduce sinks have to be hooked up to
>       // map join work
>       public final Map<BaseWork, List<ReduceSinkOperator>> 
> linkWorkWithReduceSinkMap;
>       
>     Maybe we need to introduce a similar one for HashTableSinkOperator, like:
>       public final Map<BaseWork, List<HashTableSinkOperator>> 
> linkWorkWithHashTableSinkMap;
>      
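>     To make that concrete, a rough sketch of how the new map could be 
> populated, following the same get-or-create pattern as 
> linkWorkWithReduceSinkMap (the helper name is hypothetical, and it would be 
> called from wherever the HashTableSinkOperator actually gets created, e.g. 
> the physical resolver):
>     
>       // link a newly created HashTableSinkOperator back to its small-table work
>       public void linkWorkWithHashTableSink(BaseWork work, HashTableSinkOperator hts) {
>         List<HashTableSinkOperator> list = linkWorkWithHashTableSinkMap.get(work);
>         if (list == null) {
>           list = new ArrayList<HashTableSinkOperator>();
>           linkWorkWithHashTableSinkMap.put(work, list);
>         }
>         list.add(hts);
>       }
>     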
>     In any case, we should pass this GenSparkProcContext along to the 
> physicalContext in the physical resolvers. Let me know your thoughts.

Hi Suhas, could we even re-use that map?  It seems that only the small-table RS 
are connected to the MJ at this point, so a big-table RS should never end up in 
it.  If we can't re-use it, we will have to create a new data structure.  Either 
way, the idea is to identify which RS to replace with a HashTableSink.  Hope 
that makes sense, thanks.
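
For example, something roughly like this in SparkMapJoinResolver could do the 
swap (purely a sketch, not tested; it assumes we can pair each small-table RS 
with its MapJoinOperator from the maps above, and that a HashTableSinkDesc can 
be built from the MapJoinDesc the way the MR-side LocalMapJoinProcFactory 
builds it):

    // sketch: replace one small-table RS with a HashTableSink for the map-join
    private void replaceWithHashTableSink(ReduceSinkOperator rs,
        MapJoinOperator mapJoinOp) {
      HashTableSinkDesc htsDesc = new HashTableSinkDesc(mapJoinOp.getConf());
      HashTableSinkOperator hts =
          (HashTableSinkOperator) OperatorFactory.get(htsDesc);

      // the operators that used to feed the small-table RS now feed the HTS
      List<Operator<? extends OperatorDesc>> parents = rs.getParentOperators();
      hts.setParentOperators(parents);
      for (Operator<? extends OperatorDesc> parent : parents) {
        parent.replaceChild(rs, hts);
      }

      // detach the RS so it no longer shows up in the small-table plan
      rs.setParentOperators(new ArrayList<Operator<? extends OperatorDesc>>());
    }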


- Szehon


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27640/#review60052
-----------------------------------------------------------


On Nov. 5, 2014, 8:29 p.m., Suhas Satish wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27640/
> -----------------------------------------------------------
> 
> (Updated Nov. 5, 2014, 8:29 p.m.)
> 
> 
> Review request for hive, Chao Sun, Jimmy Xiang, Szehon Ho, and Xuefu Zhang.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This replaces the ReduceSinks on the small-table side of a map-join with 
> HashTableSinks. However, the condition-check field used to detect a map-join 
> is set in CommonJoinResolver, which doesn't exist yet. We need to decide on 
> the right place to populate this field.
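> 
> For reference, detecting a map-join straight from the operator tree of a work 
> would look something like this (just a sketch; the helper is hypothetical, and 
> the roots would come from something like MapWork.getAliasToWork().values()):
> 
>   private boolean containsMapJoin(Collection<Operator<? extends OperatorDesc>> roots) {
>     // breadth-first walk over the operator tree, looking for a MapJoinOperator
>     Deque<Operator<? extends OperatorDesc>> toVisit =
>         new ArrayDeque<Operator<? extends OperatorDesc>>(roots);
>     while (!toVisit.isEmpty()) {
>       Operator<? extends OperatorDesc> op = toVisit.poll();
>       if (op instanceof MapJoinOperator) {
>         return true;
>       }
>       if (op.getChildOperators() != null) {
>         toVisit.addAll(op.getChildOperators());
>       }
>     }
>     return false;
>   }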
> 
> 
> Diffs
> -----
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
> 795a5d7 
> 
> Diff: https://reviews.apache.org/r/27640/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Suhas Satish
> 
>
