[ https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Carl Steinbach updated HIVE-2918:
---------------------------------
    Affects Version/s: 0.8.0
                       0.8.1

Verified that this problem is present on 0.7.1 and trunk. I also discovered that we don't have any functioning test coverage for hive.exec.max.dynamic.partitions. This property is set in two clientnegative tests (dyn_part1.q and dyn_part3.q), but both tests actually fail for other reasons.

> Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-2918
>                 URL: https://issues.apache.org/jira/browse/HIVE-2918
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.7.1, 0.8.0, 0.8.1
>         Environment: CentOS 64-bit
>            Reporter: Bejoy KS
>            Assignee: Carl Steinbach
>
> Dynamic partition insert fails with an error about the number of partitions created, even after 'hive.exec.max.dynamic.partitions' is raised from its default to 2000.
> Error message:
> "Failed with exception Number of dynamic partitions created is 1413, which is more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to at least 1413."
> These are the properties set in the Hive CLI:
> hive> set hive.exec.dynamic.partition=true;
> hive> set hive.exec.dynamic.partition.mode=nonstrict;
> hive> set hive.exec.max.dynamic.partitions=2000;
> hive> set hive.exec.max.dynamic.partitions.pernode=2000;
>
> This is the query, with the console error log:
> hive> INSERT OVERWRITE TABLE partn_dyn Partition (pobox)
>     > SELECT country, state, pobox FROM non_partn_dyn;
> Total MapReduce jobs = 2
> Launching Job 1 out of 2
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201204021529_0002, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201204021529_0002
> Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=0.0.0.0:8021 -kill job_201204021529_0002
> 2012-04-02 16:05:28,619 Stage-1 map = 0%, reduce = 0%
> 2012-04-02 16:05:39,701 Stage-1 map = 100%, reduce = 0%
> 2012-04-02 16:05:50,800 Stage-1 map = 100%, reduce = 100%
> Ended Job = job_201204021529_0002
> Ended Job = 248865587, job is filtered out (removed at runtime).
> Moving data to: hdfs://0.0.0.0/tmp/hive-cloudera/hive_2012-04-02_16-05-24_919_5976014408587784412/-ext-10000
> Loading data to table default.partn_dyn partition (pobox=null)
> Failed with exception Number of dynamic partitions created is 1413, which is more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to at least 1413.
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
>
> I checked the job.xml of the first map-only job; the value hive.exec.max.dynamic.partitions=2000 is reflected there, but the move task takes the default value from hive-site.xml. If I change the value in hive-site.xml, the job completes successfully. Bottom line: the property 'hive.exec.max.dynamic.partitions' set on the CLI is not being considered by the move task.

--
This message is automatically generated by JIRA.
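Until the move task honors the session value, the workaround the reporter describes amounts to raising the limit in hive-site.xml itself. A minimal sketch of that configuration (the value 2000 is taken from the report; this is the workaround, not the fix):

```xml
<!-- hive-site.xml workaround sketch: MoveTask currently reads the limit
     from the site config rather than the CLI session, so raise it here. -->
<property>
  <name>hive.exec.max.dynamic.partitions</name>
  <value>2000</value>
</property>
```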
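Given the missing test coverage noted in the update, a working clientnegative test for this property might look like the sketch below. This is a hypothetical qfile: the table names, column layout, and the step that populates the source table are illustrative assumptions, not an existing test in the repository.

```sql
-- Hypothetical clientnegative test for hive.exec.max.dynamic.partitions.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
-- Deliberately tiny limit so the insert below exceeds it.
set hive.exec.max.dynamic.partitions=2;
set hive.exec.max.dynamic.partitions.pernode=2;

create table non_partn_dyn (country string, state string, pobox string);
create table partn_dyn (country string, state string) partitioned by (pobox string);

-- (Population of non_partn_dyn with more than 2 distinct pobox values omitted.)
-- This insert should then fail with:
-- "Number of dynamic partitions created is N, which is more than 2"
insert overwrite table partn_dyn partition (pobox)
select country, state, pobox from non_partn_dyn;
```

A test of this shape would also have caught the bug reported here, since the limit is set in the session rather than in hive-site.xml.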