[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548049#comment-13548049
 ] 

Hudson commented on HIVE-2918:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2918. Hive Dynamic Partition Insert - move task not considering 
'hive.exec.max.dynamic.partitions' from CLI. (cwsteinbach via kevinwilfong) 
(Revision 1330417)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1330417
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/test/queries/clientnegative/dyn_part_max.q
* /hive/trunk/ql/src/test/queries/clientnegative/dyn_part_max_per_node.q
* /hive/trunk/ql/src/test/results/clientnegative/dyn_part_max.q.out
* /hive/trunk/ql/src/test/results/clientnegative/dyn_part_max_per_node.q.out


 Hive Dynamic Partition Insert - move task not considering 
 'hive.exec.max.dynamic.partitions' from CLI
 -

 Key: HIVE-2918
 URL: https://issues.apache.org/jira/browse/HIVE-2918
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.7.1, 0.8.0, 0.8.1, 0.9.0
 Environment: CentOS 64-bit
Reporter: Bejoy KS
Assignee: Carl Steinbach
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2918.D2703.1.patch


 Dynamic partition insert fails with an error about the number of partitions 
 created, even after 'hive.exec.max.dynamic.partitions' is raised from its 
 default to 2000.
 Error Message:
 Failed with exception Number of dynamic partitions created is 1413, which is 
 more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to 
 at least 1413.
 These are the properties set in the Hive CLI:
 hive> set hive.exec.dynamic.partition=true;
 hive> set hive.exec.dynamic.partition.mode=nonstrict;
 hive> set hive.exec.max.dynamic.partitions=2000;
 hive> set hive.exec.max.dynamic.partitions.pernode=2000;
 This is the query, with the console error log:
 hive> INSERT OVERWRITE TABLE partn_dyn Partition (pobox)
       SELECT country,state,pobox FROM non_partn_dyn;
 Total MapReduce jobs = 2
 Launching Job 1 out of 2
 Number of reduce tasks is set to 0 since there's no reduce operator
 Starting Job = job_201204021529_0002, Tracking URL = 
 http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201204021529_0002
 Kill Command = /usr/lib/hadoop/bin/hadoop job  
 -Dmapred.job.tracker=0.0.0.0:8021 -kill job_201204021529_0002
 2012-04-02 16:05:28,619 Stage-1 map = 0%,  reduce = 0%
 2012-04-02 16:05:39,701 Stage-1 map = 100%,  reduce = 0%
 2012-04-02 16:05:50,800 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201204021529_0002
 Ended Job = 248865587, job is filtered out (removed at runtime).
 Moving data to: 
 hdfs://0.0.0.0/tmp/hive-cloudera/hive_2012-04-02_16-05-24_919_5976014408587784412/-ext-1
 Loading data to table default.partn_dyn partition (pobox=null)
 Failed with exception Number of dynamic partitions created is 1413, which is 
 more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to 
 at least 1413.
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.MoveTask
 I checked the job.xml of the first map-only job; the value 
 hive.exec.max.dynamic.partitions=2000 is reflected there, but the move task is 
 taking the default value from hive-site.xml. If I change the value in 
 hive-site.xml, then the job completes successfully. Bottom line: the property 
 'hive.exec.max.dynamic.partitions' set on the CLI is not being considered by 
 the move task.
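
To make the reported behavior concrete, here is a minimal, self-contained Java sketch of the failure mode. The names (MoveTaskCheck, checkLimit, and the use of plain java.util.Properties) are hypothetical stand-ins, not Hive's actual internals; the point is only that the limit check throws once it consults a configuration object that still carries the hive-site.xml default instead of the value set in the CLI session.

  import java.util.Properties;

  // Hypothetical illustration, not Hive code: the "move task" side of the check
  // compares the number of dynamic partitions created against the configured limit.
  public final class MoveTaskCheck {

    static final String MAX_DP = "hive.exec.max.dynamic.partitions";

    static void checkLimit(Properties conf, int partitionsCreated) {
      int max = Integer.parseInt(conf.getProperty(MAX_DP, "1000"));
      if (partitionsCreated > max) {
        throw new RuntimeException("Number of dynamic partitions created is "
            + partitionsCreated + ", which is more than " + max
            + ". To solve this try to set " + MAX_DP + " to at least "
            + partitionsCreated + ".");
      }
    }

    public static void main(String[] args) {
      Properties siteDefaults = new Properties();        // stands in for hive-site.xml
      Properties sessionConf = new Properties(siteDefaults);
      sessionConf.setProperty(MAX_DP, "2000");           // "set hive.exec.max.dynamic.partitions=2000;"

      checkLimit(sessionConf, 1413);                     // session value is seen: no error
      try {
        checkLimit(siteDefaults, 1413);                  // stale conf: reproduces the reported failure
      } catch (RuntimeException e) {
        System.out.println("Failed with exception " + e.getMessage());
      }
    }
  }

Read this way, the reporter's job.xml observation fits: the map stage saw 2000, while the move step consulted the stale copy.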

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-12-07 Thread hordaway (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526233#comment-13526233
 ] 

hordaway commented on HIVE-2918:


The value exceeded the default of 1000, and setting it again did not take effect because the conf was not applied; you can locate the hive.java file and modify it there.



[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-30 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265554#comment-13265554
 ] 

Kevin Wilfong commented on HIVE-2918:
-

@Ashutosh, I can't seem to reproduce the problem in any of my environments.

@Carl, do you have any ideas about why this test might be failing?





[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-30 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265586#comment-13265586
 ] 

Carl Steinbach commented on HIVE-2918:
--

Let's continue this discussion in HIVE-2984.





[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-26 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263271#comment-13263271
 ] 

Ashutosh Chauhan commented on HIVE-2918:


Looks like this has broken the trunk. See 
https://builds.apache.org/job/Hive-trunk-h0.21/1397/





[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-25 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13261807#comment-13261807
 ] 

Phabricator commented on HIVE-2918:
---

kevinwilfong has accepted the revision HIVE-2918 [jira] Hive Dynamic Partition 
Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI.

  +1. The escape2.q test issue was fixed by a separate revision; the tests pass.

REVISION DETAIL
  https://reviews.facebook.net/D2703

BRANCH
  HIVE-2918-max-dynamic-parts






[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13262327#comment-13262327
 ] 

Hudson commented on HIVE-2918:
--

Integrated in Hive-trunk-h0.21 #1397 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1397/])
HIVE-2918. Hive Dynamic Partition Insert - move task not considering 
'hive.exec.max.dynamic.partitions' from CLI. (cwsteinbach via kevinwilfong) 
(Revision 1330417)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1330417
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/test/queries/clientnegative/dyn_part_max.q
* /hive/trunk/ql/src/test/queries/clientnegative/dyn_part_max_per_node.q
* /hive/trunk/ql/src/test/results/clientnegative/dyn_part_max.q.out
* /hive/trunk/ql/src/test/results/clientnegative/dyn_part_max_per_node.q.out






[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-13 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253521#comment-13253521
 ] 

Phabricator commented on HIVE-2918:
---

kevinwilfong has accepted the revision HIVE-2918 [jira] Hive Dynamic Partition 
Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI.

  +1 Will commit after running tests.

REVISION DETAIL
  https://reviews.facebook.net/D2703

BRANCH
  HIVE-2918-max-dynamic-parts






[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-13 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253601#comment-13253601
 ] 

Phabricator commented on HIVE-2918:
---

kevinwilfong has requested changes to the revision HIVE-2918 [jira] Hive 
Dynamic Partition Insert - move task not considering 
'hive.exec.max.dynamic.partitions' from CLI.

  escape2.q seems to be broken

REVISION DETAIL
  https://reviews.facebook.net/D2703

BRANCH
  HIVE-2918-max-dynamic-parts






[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-13 Thread Carl Steinbach (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253785#comment-13253785
 ] 

Carl Steinbach commented on HIVE-2918:
--

@Kevin: I tried running escape1.q and escape2.q on trunk and got diffs in both 
tests. Are you able to run either of these tests?





[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-13 Thread Kevin Wilfong (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253800#comment-13253800
 ] 

Kevin Wilfong commented on HIVE-2918:
-

I created a task to resolve the test failures here: 
https://issues.apache.org/jira/browse/HIVE-2952





[jira] [Commented] (HIVE-2918) Hive Dynamic Partition Insert - move task not considering 'hive.exec.max.dynamic.partitions' from CLI

2012-04-13 Thread Kevin Wilfong (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253953#comment-13253953
 ] 

Kevin Wilfong commented on HIVE-2918:
-

@Carl

The issue with escape2.q and this patch is that now, when the BlockMergeTask 
runs, the conf it gets from the Hive object has the value of hive.query.string 
populated, since the conf in the Hive object is being updated more frequently 
than before this patch. The query strings for many of the concatenate commands 
in that test include characters that are illegal in XML 1.0, which is what 
Hadoop appears to try to produce from the conf when a job is submitted. This is 
an open issue in Hadoop: https://issues.apache.org/jira/browse/HADOOP-7542

There are a couple of ways I can think of to deal with this issue:
1) Sanitize the query string wherever we set it (Driver's execute method and 
SessionState's setCmd method); a sketch of this option follows below. This may 
have the added benefit of allowing users to execute queries (not just DDL 
commands) involving such characters. It could also end up escaping characters 
that were not escaped before and do not need to be, depending on how we handle 
the sanitization (this would happen, for example, if we used the Apache Commons 
Java escape method).
2) Sanitize it, or remove it from the job conf, in the BlockMergeTask. The only 
two places we could run into this issue are the BlockMergeTask and the 
MapRedTask. We are already running into it in the MapRedTask, and were only 
avoiding it in the BlockMergeTask, it appears, by luck, or because somebody 
intentionally used the conf from the Hive object there rather than the one in 
the BlockMergeTask.
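
As an illustration of option 1, here is a short standalone Java sketch that drops code points the XML 1.0 Char production forbids before the query string would be stored. The character ranges come from the XML 1.0 specification; the class name QueryStringSanitizer and its standalone form are assumptions for illustration only, not the change that was ultimately made in Hive.

  // Hypothetical sketch of option 1: strip characters that are illegal in XML 1.0
  // so that writing the query string into job.xml cannot fail.
  public final class QueryStringSanitizer {

    // XML 1.0 Char production: #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    private static boolean isLegalXml10(int cp) {
      return cp == 0x9 || cp == 0xA || cp == 0xD
          || (cp >= 0x20 && cp <= 0xD7FF)
          || (cp >= 0xE000 && cp <= 0xFFFD)
          || (cp >= 0x10000 && cp <= 0x10FFFF);
    }

    // Returns a copy of the query with any XML-1.0-illegal code points removed.
    public static String sanitize(String query) {
      StringBuilder out = new StringBuilder(query.length());
      for (int i = 0; i < query.length(); ) {
        int cp = query.codePointAt(i);
        if (isLegalXml10(cp)) {
          out.appendCodePoint(cp);
        }
        i += Character.charCount(cp);
      }
      return out.toString();
    }

    public static void main(String[] args) {
      // \u0001 (Ctrl-A) is a common Hive field delimiter: legal in a query, illegal in XML 1.0.
      String cmd = "ALTER TABLE t SET SERDEPROPERTIES ('field.delim'='\u0001')";
      System.out.println(sanitize(cmd));
    }
  }

Option 2 would instead apply this kind of filtering to, or simply drop, hive.query.string on the job conf inside the BlockMergeTask just before submission.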


