[ https://issues.apache.org/jira/browse/HIVE-3640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548343#comment-13548343 ]

Hudson commented on HIVE-3640:
------------------------------

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
    HIVE-3640. Reducer allocation is incorrect if enforce bucketing and 
mapred.reduce.tasks are both set. (Vighnesh Avadhani via kevinwilfong) 
(Revision 1405240)

     Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1405240
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/VerifyNumReducersForBucketsHook.java
* /hive/trunk/ql/src/test/queries/clientpositive/bucket_num_reducers.q
* /hive/trunk/ql/src/test/results/clientpositive/bucket_num_reducers.q.out

                
> Reducer allocation is incorrect if enforce bucketing and mapred.reduce.tasks 
> are both set
> -----------------------------------------------------------------------------------------
>
>                 Key: HIVE-3640
>                 URL: https://issues.apache.org/jira/browse/HIVE-3640
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.10.0
>            Reporter: Vighnesh Avadhani
>            Assignee: Vighnesh Avadhani
>            Priority: Minor
>             Fix For: 0.10.0
>
>         Attachments: HIVE-3640.1.patch.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When I enforce bucketing and fix the number of reducers via 
> mapred.reduce.tasks, Hive ignores my setting and instead takes the largest 
> value <= hive.exec.reducers.max that also evenly divides num_buckets 
> (sketched in the code below). In other words, if I set 1024 buckets and set 
> mapred.reduce.tasks=1024 I'll get... 256 reducers. If I set 1997 buckets and 
> set mapred.reduce.tasks=1997 I'll get... 1 reducer. 
> This is totally crazy, and it's far, far crazier when the data inputs get 
> large. In the latter case the bucketing job will almost certainly fail 
> because we'll most likely try to stuff several TB of input through a single 
> reducer. We'll also drastically reduce the effectiveness of bucketing, since 
> the buckets themselves will be larger.
> If the user sets mapred.reduce.tasks in a query that inserts into a bucketed 
> table, we should either accept that value or raise an exception if it is 
> invalid relative to the number of buckets (see the second sketch below). We 
> should absolutely NOT override the user's direction and fall back on 
> automatically allocating reducers based on some obscure logic dictated by a 
> completely different setting. 
> I have yet to encounter a single person who expected this behaviour the 
> first time, so it's clearly a bug.
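
A minimal sketch of the allocation behaviour described above, assuming the
logic is exactly as reported (pick the largest exact divisor of the bucket
count that does not exceed hive.exec.reducers.max). This is an illustration
only, not the actual SemanticAnalyzer code; the class name, method name, and
the cap values passed in main are hypothetical.

    // Illustration of the reported behaviour: mapred.reduce.tasks is ignored
    // and the reducer count is derived only from the bucket count and
    // hive.exec.reducers.max.
    public class BucketReducerSketch {
        static int allocateReducers(int numBuckets, int maxReducers) {
            // Walk down from the cap until we hit an exact divisor of numBuckets.
            for (int r = Math.min(numBuckets, maxReducers); r >= 1; r--) {
                if (numBuckets % r == 0) {
                    return r;
                }
            }
            return 1;
        }

        public static void main(String[] args) {
            // 1997 is prime, so any cap below 1997 leaves only one divisor: 1.
            System.out.println(allocateReducers(1997, 999));  // prints 1
            // 1024 buckets collapse to whatever power of two fits under the
            // cap, e.g. a hypothetical cap of 300 yields 256 reducers.
            System.out.println(allocateReducers(1024, 300));  // prints 256
        }
    }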
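The fix proposed above could take roughly the following shape. This is a
hypothetical sketch, not the committed patch; it assumes that a valid explicit
setting is one that divides the bucket count evenly, and the method and class
names are invented for illustration.

    // Hypothetical resolution that honours an explicit mapred.reduce.tasks.
    public class RequestedReducerSketch {
        static int resolveReducers(int numBuckets, int requestedReducers,
                                   int maxReducers) {
            if (requestedReducers > 0) {
                if (numBuckets % requestedReducers != 0) {
                    // Fail fast instead of silently overriding the user.
                    throw new IllegalArgumentException(
                        "mapred.reduce.tasks=" + requestedReducers
                        + " does not divide " + numBuckets + " buckets evenly");
                }
                return requestedReducers;  // respect the user's setting
            }
            // Fall back to automatic allocation only when nothing was requested.
            return Math.min(numBuckets, maxReducers);
        }

        public static void main(String[] args) {
            System.out.println(resolveReducers(1024, 1024, 999)); // prints 1024
            System.out.println(resolveReducers(1997, 1997, 999)); // prints 1997
        }
    }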

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
