[ https://issues.apache.org/jira/browse/HIVE-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014829#comment-14014829 ]

Gopal V commented on HIVE-7158:
-------------------------------

The reducer count dependency problem was addressed in HIVE-7121, since you have 
to use the same hash algorithm to partition keys in both paths. If the reducer 
count is set by the logical planner, then this optimization is turned off for 
the reduce vertex.
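
As a rough illustration of why that is safe (this is not the actual Tez/Hive
code, just the shape of the argument): as long as one consistent hash decides
the original partition of every row, consecutive partitions can be folded into
fewer reduce tasks without splitting a key across reducers.

// Illustration only - not the actual Tez ShuffleVertexManager logic.
public class AutoReduceIllustration {

    // Stand-in for the hash partitioner that both paths must agree on.
    static int originalPartition(String key, int maxPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % maxPartitions;
    }

    // Fold consecutive original partitions into a smaller number of reducers.
    static int foldedReducer(int partition, int maxPartitions, int actualReducers) {
        return (int) ((long) partition * actualReducers / maxPartitions);
    }

    public static void main(String[] args) {
        int maxPartitions = 128;  // what the plan partitioned by (the "max")
        int actualReducers = 17;  // what Tez decided to run after sampling

        String key = "user_42";
        int p = originalPartition(key, maxPartitions);
        int r = foldedReducer(p, maxPartitions, actualReducers);
        // Every occurrence of the key goes through the same two functions,
        // so all of its rows still meet in a single reduce task.
        System.out.println(key + " -> partition " + p + " -> reducer " + r);
    }
}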

So this feature is marked per-vertex, with separate min/max estimates (capped 
at the max), instead of being global for the DAG.

To answer the first question: even if Hive's estimate of the reducer count is 
correct, this will still speed up the query.

Over-partitioning is actually faster for sorts (we do not take advantage of it 
in reducer merges - we should, now that we're sending more than one partition 
to the same reducer), because the comparator today is (partition, key) - the 
fast path out of the sort comparator exits without the expensive key compare 
(e.g. for string keys) if the partitions are distinct.

https://github.com/apache/incubator-tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/sort/impl/dflt/DefaultSorter.java#L388
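
The shape of that fast path is roughly the following (an illustrative sketch,
not the DefaultSorter code itself):

import java.util.Arrays;

// Orders records by (partition, key); distinct partitions never touch the key.
final class PartitionKeySort {

    static int compare(int partA, byte[] keyA, int partB, byte[] keyB) {
        int byPartition = Integer.compare(partA, partB);
        if (byPartition != 0) {
            return byPartition;            // fast path: cheap int compare only
        }
        return Arrays.compare(keyA, keyB); // slow path: byte-wise key compare
    }

    public static void main(String[] args) {
        byte[] longKey = "some-fairly-long-string-key".getBytes();
        // Different partitions: decided without reading a single key byte.
        System.out.println(compare(3, longKey, 7, longKey));
        // Same partition: falls through to the expensive key compare.
        System.out.println(compare(3, "apple".getBytes(), 3, "banana".getBytes()));
    }
}

The more you over-partition, the more often two records being compared fall 
into distinct partitions, so the cheap branch fires more often.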

In the Tez code, you can find logic that tells the system to effectively double 
the number of bits used for that comparison. Partitions usually occupy only ~16 
of the 32 bits (since you know maxReducers up-front, you know exactly how many 
bits are needed) - the other 16 bits are used to store 2 bytes from the key 
prefix.

https://github.com/apache/incubator-tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/sort/impl/PipelinedSorter.java#L241
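
The packing idea looks roughly like this (an illustrative layout - the exact
bit arrangement in PipelinedSorter may differ):

// Illustrative only: pack the partition into the high 16 bits and the first
// two key bytes into the low 16 bits, so a single int comparison orders
// records by (partition, key prefix) and most comparisons never deserialize
// the full key. Assumes the partition number fits in 16 bits.
final class SortPrefix {

    static int pack(int partition, byte[] key) {
        int prefix = 0;
        if (key.length > 0) prefix |= (key[0] & 0xFF) << 8;
        if (key.length > 1) prefix |= (key[1] & 0xFF);
        return (partition << 16) | prefix;
    }

    public static void main(String[] args) {
        int a = pack(3, "apple".getBytes());
        int b = pack(3, "banana".getBytes());
        int c = pack(7, "apple".getBytes());
        // Unsigned ordering of the packed ints agrees with (partition, prefix).
        System.out.println(Integer.compareUnsigned(a, b) < 0); // true: 'ap' < 'ba'
        System.out.println(Integer.compareUnsigned(a, c) < 0); // true: 3 < 7
    }
}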

That code fragment has a fairly long history attached to it. That feature was 
part of a different world of ETL map-reduce when I started working on it.

https://issues.apache.org/jira/browse/MAPREDUCE-3235?focusedCommentId=13492822&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13492822

You would take a hit on the size of your IndexRecord (a 24-byte overhead per 
partition) and some inefficiency when dealing with partitions smaller than 
32 KB (compression is per partition, in 32 KB blocks for zlib).
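
Back-of-the-envelope, with made-up job sizes (an IndexRecord is three longs -
start offset, raw length, compressed length - which is where the 24 bytes per
partition comes from):

public class IndexOverhead {
    public static void main(String[] args) {
        long bytesPerIndexRecord = 24;
        long partitions = 1009;  // partitioned by the "max" reducer count
        long mapTasks = 1000;    // hypothetical job

        long perMap = bytesPerIndexRecord * partitions;  // ~24 KB per map output
        long total = perMap * mapTasks;                  // ~24 MB across the job
        System.out.println("per map: " + perMap + " bytes, total: " + total + " bytes");
    }
}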

But that overhead is trivial compared to the fact that, for a job with more 
reducers than mappers, this would be a nice way to spin up fewer containers and 
allow for multi-tenancy wins as well as latency wins.

> Use Tez auto-parallelism in Hive
> --------------------------------
>
>                 Key: HIVE-7158
>                 URL: https://issues.apache.org/jira/browse/HIVE-7158
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Gunther Hagleitner
>            Assignee: Gunther Hagleitner
>         Attachments: HIVE-7158.1.patch, HIVE-7158.2.patch
>
>
> Tez can optionally sample data from a fraction of the tasks of a vertex and 
> use that information to choose the number of downstream tasks for any given 
> scatter-gather edge.
> Hive estimates the count of reducers by looking at stats and estimates for 
> each operator in the operator pipeline leading up to the reducer. However, if 
> this estimate turns out to be too large, Tez can rein in the resources used 
> to compute the reducer.
> It does so by combining partitions of the upstream vertex. It cannot, 
> however, add reducers at this stage.
> I'm proposing to let users specify whether they want to use auto-parallelism 
> or not. If they do, there will be scaling factors to determine the max and min 
> reducers Tez can choose from. We will then partition by the max reducers, 
> letting Tez sample and rein in the count down to the specified min.



