[ 
https://issues.apache.org/jira/browse/IMPALA-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17897833#comment-17897833
 ] 

ASF subversion and git services commented on IMPALA-13509:
----------------------------------------------------------

Commit a541670856c08d6809646863c305643f60a7e70d in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=a54167085 ]

IMPALA-13509: (Addendum) Fix build issue

Fixes build issue introduced by merging IMPALA-13509 and IMPALA-13502
changes without testing them together.

Change-Id: Ie465af1c15052d29596ea86aa7d1661e81df5c81
Reviewed-on: http://gerrit.cloudera.org:8080/22059
Reviewed-by: Quanlong Huang <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>
Reviewed-by: Csaba Ringhofer <[email protected]>


> Avoid duplicate deepcopy during hash partitioning in KrpcDataStreamSender
> -------------------------------------------------------------------------
>
>                 Key: IMPALA-13509
>                 URL: https://issues.apache.org/jira/browse/IMPALA-13509
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Backend
>            Reporter: Csaba Ringhofer
>            Assignee: Csaba Ringhofer
>            Priority: Critical
>              Labels: performance
>
> Currently all rows are deep copied twice:
> 1. to the RowBatch of the given channel
> 2. to an OutboundRowBatch when the collector RowBatch is at capacity
> Copying directly to an OutboundRowBatch could avoid some CPU work.
> This would also allow easier implementation of the following improvements:
> - deduplicate tuples, similarly to the broadcast/unpartitioned exchange 
> (IMPALA-13225)
> - keep the outbound row batch size below data_stream_sender_buffer_size even 
> for var-len data



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]