[ https://issues.apache.org/jira/browse/DRILL-6385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569608#comment-16569608 ]

ASF GitHub Bot commented on DRILL-6385:
---------------------------------------

sohami commented on a change in pull request #1334: DRILL-6385: Support JPPD 
feature
URL: https://github.com/apache/drill/pull/1334#discussion_r207750658
 
 

 ##########
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
 ##########
 @@ -190,11 +213,21 @@ public IterOutcome next() {
         if (isNewSchema) {
          // Even when recordCount = 0, we should return OK_NEW_SCHEMA if current reader presents a new schema.
          // This could happen when data sources have a non-trivial schema with 0 rows.
-          container.buildSchema(SelectionVectorMode.NONE);
+          if (firstRuntimeFiltered) {
+            container.buildSchema(SelectionVectorMode.TWO_BYTE);
+            runtimeFiltered = true;
+          } else {
+            container.buildSchema(SelectionVectorMode.NONE);
+          }
 
 Review comment:
   In general I am concerned about the different types of output container being generated in ScanBatch at runtime. None of the operators do that post buildSchema phase, and it increases the chances of introducing bugs in the code. When a RecordBatch returns an SV vector along with it, the general convention is that the record count is dictated by the SV vector, but here we are relying on another variable, `recordCount`. We also need to be extra careful to set the SV2 correctly, both under schema change and when the runtimeFiltered flag is applied.
   
   I think the reason for doing it this way is to avoid the extra copy by RemovingRecordBatch in cases where no records are filtered out by the bloom filter condition. But that copy will still happen in this case when, say, some records in one batch were filtered (which moved ScanBatch from SV mode None to Two) and later batches were such that none of the records were filtered out.
   
   My recommendation would be to use a global query-level option to determine when the BloomFilter can be applied, and use that information to add a Filter operator on top of Scan, since Filter already does exactly the same thing (i.e. applies an SV2) based on the condition obtained from the RuntimeFilter. Until the Filter operator gets the runtime filter information, it would just pass the batches through from Scan as is. This way Scan doesn't have to duplicate Filter's SV2 logic. @amansinha100 - do you have any recommendation for this?
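The SV2 convention referred to above (downstream operators read only the rows the selection vector lists, so the surviving record count is dictated by the SV2 itself) can be sketched roughly as follows. The class and method names here are illustrative, not Drill's actual API:

```java
import java.util.function.IntPredicate;

// Illustrative sketch (not Drill's actual classes): an SV2 is an index
// vector over a batch; downstream operators read only the rows it lists,
// so the surviving record count is the SV2's length, not a separate field.
public class Sv2Sketch {

    /** Build a selection vector: the indices of rows that pass the filter. */
    static int[] buildSv2(int rowCount, IntPredicate keep) {
        int[] sv = new int[rowCount];
        int n = 0;
        for (int i = 0; i < rowCount; i++) {
            if (keep.test(i)) {
                sv[n++] = i;
            }
        }
        int[] trimmed = new int[n];
        System.arraycopy(sv, 0, trimmed, 0, n);
        return trimmed;
    }

    public static void main(String[] args) {
        int[] values = {3, 8, 15, 4, 23};
        // Keep only rows whose value is even; the SV2 holds their indices.
        int[] sv2 = buildSv2(values.length, i -> values[i] % 2 == 0);
        for (int idx : sv2) {
            System.out.println(values[idx]); // prints 8 then 4
        }
    }
}
```

With this convention there is no separate `recordCount` to keep in sync: a consumer iterates `sv2` and is done, which is exactly why carrying a second count variable alongside an SV2 invites bugs.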

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support JPPD (Join Predicate Push Down)
> ---------------------------------------
>
>                 Key: DRILL-6385
>                 URL: https://issues.apache.org/jira/browse/DRILL-6385
>             Project: Apache Drill
>          Issue Type: New Feature
>          Components:  Server, Execution - Flow
>    Affects Versions: 1.14.0
>            Reporter: weijie.tong
>            Assignee: weijie.tong
>            Priority: Major
>
> This feature is to support JPPD (Join Predicate Push Down). It will benefit 
> HashJoin and Broadcast HashJoin performance by reducing the number of rows 
> sent across the network and the memory consumed. This feature is already 
> supported by Impala, which calls it runtime filtering 
> ([https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_runtime_filtering.html]).
>  The first PR will try to push down a bloom filter from the HashJoin node to 
> Parquet's scan node. The proposed basic procedure is described as follows:
>  # The HashJoin build side accumulates the equal-join-condition rows to 
> construct a bloom filter. It then sends the bloom filter to the foreman 
> node.
>  # The foreman node passively accepts the bloom filters from all the 
> fragments that have the HashJoin operator. It then aggregates the bloom 
> filters to form a global bloom filter.
>  # The foreman node broadcasts the global bloom filter to all the probe-side 
> scan nodes, which may already have sent out partial data to the hash join 
> nodes (currently the hash join node prefetches one batch from both sides).
>  # The scan node accepts the global bloom filter from the foreman node and 
> uses it to filter the remaining rows.
>  
> To implement the above execution flow, the main new notions are described 
> below:
>  1. RuntimeFilter
> A filter container which may contain a BloomFilter or a MinMaxFilter.
>  2. RuntimeFilterReporter
> It wraps the logic to send the hash join's bloom filter to the foreman. The 
> serialized bloom filter will be sent out through the data tunnel. This object 
> will be instantiated by the FragmentExecutor and passed to the 
> FragmentContext, so the HashJoin operator can obtain it through the 
> FragmentContext.
>  3. RuntimeFilterRequestHandler
> It is responsible for accepting a SendRuntimeFilterRequest RPC and stripping 
> the actual BloomFilter from the network. It then passes this filter to the 
> WorkerBee's new registerRuntimeFilter interface.
> Another RPC type is BroadcastRuntimeFilterRequest. It registers the accepted 
> global bloom filter with the WorkerBee via the registerRuntimeFilter method 
> and then propagates it to the FragmentContext, through which the probe-side 
> scan node can fetch the aggregated bloom filter.
>  4. RuntimeFilterManager
> The foreman will instantiate a RuntimeFilterManager. It will indirectly 
> obtain every RuntimeFilter via the WorkerBee. Once all the BloomFilters have 
> been accepted and aggregated, it will broadcast the aggregated bloom filter 
> to all the probe-side scan nodes through the data tunnel via a 
> BroadcastRuntimeFilterRequest RPC.
>  5. RuntimeFilterEnableOption
> A global option will be added to decide whether to enable this new feature.
>  
> Suggestions and advice are welcome. The related PR will be presented as soon 
> as possible.
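The build-and-aggregate flow described in the issue (each fragment builds a bloom filter over its build-side join keys; the foreman merges them into one global filter) boils down to OR-ing bit sets. A minimal sketch using java.util.BitSet, with illustrative hashing and class names rather than Drill's actual BloomFilter implementation:

```java
import java.util.BitSet;

// Minimal bloom-filter sketch: each fragment builds a filter over its
// build-side join keys; the foreman merges them by OR-ing the bit sets.
// Names and hashing here are illustrative, not Drill's implementation.
public class BloomSketch {
    static final int BITS = 1024;

    /** Fragment side: insert every build-side key with two hash probes. */
    static BitSet build(long[] keys) {
        BitSet bits = new BitSet(BITS);
        for (long k : keys) {
            bits.set((int) (Long.hashCode(k * 0x9E3779B97F4A7C15L) & 0x7fffffff) % BITS);
            bits.set((int) (Long.hashCode(k) & 0x7fffffff) % BITS);
        }
        return bits;
    }

    /** Probe side: false means the key definitely has no join match. */
    static boolean mightContain(BitSet bits, long k) {
        return bits.get((int) (Long.hashCode(k * 0x9E3779B97F4A7C15L) & 0x7fffffff) % BITS)
            && bits.get((int) (Long.hashCode(k) & 0x7fffffff) % BITS);
    }

    /** Foreman side: the global filter is the union of all fragment filters. */
    static BitSet aggregate(BitSet... perFragment) {
        BitSet global = new BitSet(BITS);
        for (BitSet f : perFragment) {
            global.or(f);
        }
        return global;
    }

    public static void main(String[] args) {
        BitSet f1 = build(new long[] {1, 2});
        BitSet f2 = build(new long[] {3});
        BitSet global = aggregate(f1, f2);
        // A bloom filter has no false negatives, so every inserted key passes.
        System.out.println(mightContain(global, 2)); // true
        System.out.println(mightContain(global, 3)); // true
    }
}
```

Because the union of bloom filters over the same bit layout is itself a valid bloom filter for the union of the key sets, the foreman can aggregate fragments' filters in any order before broadcasting the result to the probe-side scans.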



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
