[jira] [Assigned] (HIVE-3451) map-reduce jobs do not work for a partition containing sub-directories
[ https://issues.apache.org/jira/browse/HIVE-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain reassigned HIVE-3451:

    Assignee: Gang Tim Liu

map-reduce jobs do not work for a partition containing sub-directories

    Key: HIVE-3451
    URL: https://issues.apache.org/jira/browse/HIVE-3451
    Project: Hive
    Issue Type: Bug
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Gang Tim Liu

Consider the following test:

    -- The test verifies that sub-directories are supported for versions of hadoop
    -- where MAPREDUCE-1501 is fixed. So, enable this test only for hadoop 23.
    -- INCLUDE_HADOOP_MAJOR_VERSIONS(0.23)
    CREATE TABLE fact_daily(x int) PARTITIONED BY (ds STRING);
    CREATE TABLE fact_tz(x int) PARTITIONED BY (ds STRING, hr STRING)
    LOCATION 'pfile:${system:test.tmp.dir}/fact_tz';
    INSERT OVERWRITE TABLE fact_tz PARTITION (ds='1', hr='1')
    SELECT key+11 FROM src WHERE key=484;
    ALTER TABLE fact_daily SET TBLPROPERTIES('EXTERNAL'='TRUE');
    ALTER TABLE fact_daily ADD PARTITION (ds='1')
    LOCATION 'pfile:${system:test.tmp.dir}/fact_tz/ds=1';
    set mapred.input.dir.recursive=true;
    SELECT * FROM fact_daily WHERE ds='1';
    SELECT count(1) FROM fact_daily WHERE ds='1';

Say the above file was named recursive_dir.q and we ran the test for hadoop 23 by executing:

    ant test -Dhadoop.mr.rev=23 -Dtest.print.classpath=true -Dhadoop.version=2.0.0-alpha -Dhadoop.security.version=2.0.0-alpha -Dtestcase=TestCliDriver -Dqfile=recursive_dir.q

The SELECT * from the table works fine, but the last command does not, since it requires a map-reduce job. This will prevent other features that create sub-directories from adding any tests that require a map-reduce job. The work-around is to issue queries which do not require map-reduce jobs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
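The failure mode can be sketched outside Hadoop: a plain input listing only sees a partition directory's direct children, so data files nested under hr=1/ are invisible until recursive listing (the behaviour toggled by mapred.input.dir.recursive) is enabled. A minimal Python sketch — the helper `list_input_files` is hypothetical, not Hive or Hadoop code:

```python
import os
import tempfile

def list_input_files(path, recursive=False):
    """Collect data files under a partition directory.

    Loosely mimics input-file enumeration: by default only direct children
    are considered, so files one level down inside hr=1/ are missed unless
    recursive listing is enabled.
    """
    if not recursive:
        return sorted(
            os.path.join(path, f) for f in os.listdir(path)
            if os.path.isfile(os.path.join(path, f)))
    found = []
    for root, _dirs, files in os.walk(path):
        found.extend(os.path.join(root, f) for f in files)
    return sorted(found)

# Build fact_tz/ds=1/hr=1/000000_0 -- the layout from the failing test.
base = tempfile.mkdtemp()
part = os.path.join(base, "fact_tz", "ds=1")
os.makedirs(os.path.join(part, "hr=1"))
with open(os.path.join(part, "hr=1", "000000_0"), "w") as f:
    f.write("495\n")

flat = list_input_files(part)                  # non-recursive: finds nothing
deep = list_input_files(part, recursive=True)  # recursive: finds the data file
```

The non-recursive listing returns no files at all, which is why a map-reduce job over the ds='1' partition produces nothing without the recursive setting.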
[jira] [Assigned] (HIVE-3152) Disallow certain character patterns in partition names
[ https://issues.apache.org/jira/browse/HIVE-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sambavi Muthukrishnan reassigned HIVE-3152:

    Assignee: Ivan Gorbachev (was: Andrew Poland)

Disallow certain character patterns in partition names

    Key: HIVE-3152
    URL: https://issues.apache.org/jira/browse/HIVE-3152
    Project: Hive
    Issue Type: New Feature
    Components: Metastore
    Reporter: Andrew Poland
    Assignee: Ivan Gorbachev
    Priority: Minor
    Labels: api-addition, configuration-addition
    Attachments: unicode.patch

A new event listener to allow the metastore to reject a partition name if it contains undesired character patterns such as Unicode characters and commas. The match pattern is implemented as a regular expression. Modifies append_partition to call a new MetaStorePreEventListener implementation, PreAppendPartitionEvent.
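A rough sketch of what such a regex-based pre-event check could look like; the pattern below is only an example (the real pattern would come from configuration), and the function name is hypothetical, not the patch's code:

```python
import re

# Example "undesired characters" pattern: reject commas and anything
# outside printable ASCII (which catches most Unicode characters).
UNDESIRED = re.compile(r"[,]|[^\x20-\x7E]")

def validate_partition_name(name):
    """Return True if the partition name is acceptable, False if it should
    be rejected by the pre-event listener."""
    return UNDESIRED.search(name) is None

ok = validate_partition_name("ds=2012-09-13")
bad_comma = validate_partition_name("ds=a,b")
bad_unicode = validate_partition_name("ds=caf\u00e9")
```

Running the check against an append_partition request before it reaches the metastore database is what lets the listener veto the partition cleanly.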
[jira] [Commented] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true
[ https://issues.apache.org/jira/browse/HIVE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454711#comment-13454711 ]

Namit Jain commented on HIVE-3459:

+1

Dynamic partition queries producing no partitions fail with hive.stats.reliable=true

    Key: HIVE-3459
    URL: https://issues.apache.org/jira/browse/HIVE-3459
    Project: Hive
    Issue Type: Bug
    Components: Statistics
    Affects Versions: 0.10.0
    Reporter: Kevin Wilfong
    Assignee: Kevin Wilfong
    Attachments: HIVE-3459.1.patch.txt

Dynamic partition inserts which result in no partitions (either because the input is empty or all input rows are filtered out) will fail because stats cannot be collected if hive.stats.reliable=true.
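The bug boils down to an edge case in how per-partition stats are aggregated: zero produced partitions should be a trivially complete (empty) result, not a reliability failure. A hypothetical Python sketch of that distinction — not Hive's actual StatsTask code:

```python
def aggregate_stats(partition_stats, reliable=True):
    """Sum row counts across the partitions an insert produced.

    With reliable=True, a produced partition whose stats are missing
    (None) is an error. The bug described here is treating *zero*
    produced partitions as a failure too, instead of returning 0.
    """
    if reliable and any(v is None for v in partition_stats.values()):
        raise RuntimeError("stats could not be collected reliably")
    return sum(v or 0 for v in partition_stats.values())

# Zero produced partitions (all input rows filtered out): succeeds with 0.
empty_total = aggregate_stats({}, reliable=True)

# A produced partition with missing stats must still fail in reliable mode.
try:
    aggregate_stats({"ds=1": None}, reliable=True)
    missing_raises = False
except RuntimeError:
    missing_raises = True
```

The fix keeps strict behaviour for genuinely missing stats while letting the empty case through.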
[jira] [Assigned] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain reassigned HIVE-3432:

    Assignee: Namit Jain (was: Sambavi Muthukrishnan)

perform a map-only group by if grouping key matches the sorting properties of the table

    Key: HIVE-3432
    URL: https://issues.apache.org/jira/browse/HIVE-3432
    Project: Hive
    Issue Type: Improvement
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Namit Jain
    Attachments: hive.3432.1.patch

There should be an option to use BucketizedHiveInputFormat and a map-only group by. There would be no need to perform a hash-based map-side aggregation.
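The reason a map-only group-by is possible when the grouping key matches the table's sort order: each key's rows are contiguous, so one streaming pass emits a finished aggregate per key with no hash table and no reduce phase. A small sketch of that property (illustrative only, not Hive's GroupByOperator):

```python
from itertools import groupby

def map_only_group_by(sorted_rows):
    """Streaming group-by over (key, value) rows already sorted by key.

    Because rows for a key are adjacent, an aggregate can be finalized the
    moment the key changes -- which is what makes a map-only plan correct
    when the table's sort columns match the grouping key.
    """
    return [(key, sum(v for _, v in grp))
            for key, grp in groupby(sorted_rows, key=lambda r: r[0])]

rows = [("a", 1), ("a", 2), ("b", 5), ("c", 3), ("c", 4)]
result = map_only_group_by(rows)
```

On unsorted input this would be wrong (the same key could appear in several runs), which is exactly why the optimization must check the table's sorting properties first.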
[jira] [Updated] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-3432:

    Status: Patch Available (was: Open)

perform a map-only group by if grouping key matches the sorting properties of the table

    Key: HIVE-3432
    URL: https://issues.apache.org/jira/browse/HIVE-3432
    Project: Hive
    Issue Type: Improvement
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Sambavi Muthukrishnan
    Attachments: hive.3432.1.patch

There should be an option to use BucketizedHiveInputFormat and a map-only group by. There would be no need to perform a hash-based map-side aggregation.
[jira] [Updated] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-3432:

    Attachment: hive.3432.1.patch

perform a map-only group by if grouping key matches the sorting properties of the table

    Key: HIVE-3432
    URL: https://issues.apache.org/jira/browse/HIVE-3432
    Project: Hive
    Issue Type: Improvement
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Sambavi Muthukrishnan
    Attachments: hive.3432.1.patch

There should be an option to use BucketizedHiveInputFormat and a map-only group by. There would be no need to perform a hash-based map-side aggregation.
[jira] [Updated] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true
[ https://issues.apache.org/jira/browse/HIVE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-3459:

    Resolution: Fixed
    Fix Version/s: 0.10.0
    Hadoop Flags: Reviewed
    Status: Resolved (was: Patch Available)

Committed. Thanks, Kevin.

Dynamic partition queries producing no partitions fail with hive.stats.reliable=true

    Key: HIVE-3459
    URL: https://issues.apache.org/jira/browse/HIVE-3459
    Project: Hive
    Issue Type: Bug
    Components: Statistics
    Affects Versions: 0.10.0
    Reporter: Kevin Wilfong
    Assignee: Kevin Wilfong
    Fix For: 0.10.0
    Attachments: HIVE-3459.1.patch.txt

Dynamic partition inserts which result in no partitions (either because the input is empty or all input rows are filtered out) will fail because stats cannot be collected if hive.stats.reliable=true.
[jira] [Updated] (HIVE-3391) Keep the original query in HiveDriverRunHookContextImpl
[ https://issues.apache.org/jira/browse/HIVE-3391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Wilfong updated HIVE-3391:

    Resolution: Fixed
    Fix Version/s: 0.10.0
    Status: Resolved (was: Patch Available)

Committed, thanks Dawid.

Keep the original query in HiveDriverRunHookContextImpl

    Key: HIVE-3391
    URL: https://issues.apache.org/jira/browse/HIVE-3391
    Project: Hive
    Issue Type: Improvement
    Components: Logging
    Reporter: Dawid Dabrowski
    Assignee: Dawid Dabrowski
    Priority: Minor
    Fix For: 0.10.0
    Attachments: HIVE-3391.patch.txt
    Original Estimate: 72h
    Time Spent: 96h
    Remaining Estimate: 0h

It'd be useful to have access to the original query in hooks. The hook that's executed first is HiveDriverRunHook, so let's add it there.
[jira] [Updated] (HIVE-3391) Keep the original query in HiveDriverRunHookContextImpl
[ https://issues.apache.org/jira/browse/HIVE-3391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Wilfong updated HIVE-3391:

    Component/s: Logging

Keep the original query in HiveDriverRunHookContextImpl

    Key: HIVE-3391
    URL: https://issues.apache.org/jira/browse/HIVE-3391
    Project: Hive
    Issue Type: Improvement
    Components: Logging
    Reporter: Dawid Dabrowski
    Assignee: Dawid Dabrowski
    Priority: Minor
    Fix For: 0.10.0
    Attachments: HIVE-3391.patch.txt
    Original Estimate: 72h
    Time Spent: 96h
    Remaining Estimate: 0h

It'd be useful to have access to the original query in hooks. The hook that's executed first is HiveDriverRunHook, so let's add it there.
[jira] [Commented] (HIVE-3421) Column Level Top K Values Statistics
[ https://issues.apache.org/jira/browse/HIVE-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454785#comment-13454785 ]

Namit Jain commented on HIVE-3421:

comments on phabricator

Column Level Top K Values Statistics

    Key: HIVE-3421
    URL: https://issues.apache.org/jira/browse/HIVE-3421
    Project: Hive
    Issue Type: New Feature
    Reporter: Feng Lu
    Assignee: Feng Lu
    Attachments: HIVE-3421.patch.1.txt, HIVE-3421.patch.2.txt, HIVE-3421.patch.3.txt, HIVE-3421.patch.4.txt, HIVE-3421.patch.5.txt, HIVE-3421.patch.6.txt, HIVE-3421.patch.7.txt, HIVE-3421.patch.8.txt, HIVE-3421.patch.txt

Compute (estimate) top-k value statistics for each column, and put the most skewed column into the skewed info, if the user hasn't specified skew. This feature depends on ListBucketing (create table skewed on): https://cwiki.apache.org/Hive/listbucketing.html. All columns' top-k values could be added to the skewed info if, in the future, skewed info supports multiple independent columns. The top-k algorithm is based on this paper: http://www.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf
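The cited tech report describes counter-based top-k estimation in bounded space. A toy Python version of that family of algorithms (keep at most k counters; a new item evicts the current minimum and inherits its count, giving an overestimate) — a sketch of the general technique, not the patch's actual implementation:

```python
def space_saving_topk(stream, k):
    """Approximate top-k frequency counting using O(k) space.

    Counter-based sketch: a heavy hitter is very likely to hold a counter
    at the end, and its counter value is an upper bound on its true count.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Evict the smallest counter; the newcomer inherits its count + 1.
            victim = min(counters, key=counters.get)
            count = counters.pop(victim)
            counters[item] = count + 1
    return counters

stream = ["x"] * 6 + ["y"] * 4 + ["z"] + ["x"]
top = space_saving_topk(stream, k=2)
```

The bounded memory footprint is what makes this practical to run per column while scanning a table once.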
[jira] [Updated] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-3432:

    Status: Open (was: Patch Available)

Found some issues when thinking more about it - fixing them right now.

perform a map-only group by if grouping key matches the sorting properties of the table

    Key: HIVE-3432
    URL: https://issues.apache.org/jira/browse/HIVE-3432
    Project: Hive
    Issue Type: Improvement
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Namit Jain
    Attachments: hive.3432.1.patch

There should be an option to use BucketizedHiveInputFormat and a map-only group by. There would be no need to perform a hash-based map-side aggregation.
[jira] [Commented] (HIVE-3339) Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names
[ https://issues.apache.org/jira/browse/HIVE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454813#comment-13454813 ]

Hudson commented on HIVE-3339:

Integrated in Hive-trunk-h0.21 #1666 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1666/])

Checked in recursive_dir.q by mistake while committing HIVE-3339 (Revision 1384204)
HIVE-3339 Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names (Zhenxiao Luo via namit) (Revision 1384200)

Result = SUCCESS

namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1384204
Files :
* /hive/trunk/ql/src/test/queries/clientpositive/recursive_dir.q

namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1384200
Files :
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableDummyOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/LateralViewForwardOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/LateralViewJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/LimitOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SelectOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/UDTFOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/UnionOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapJoinOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkDeDuplication.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SamplePruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedMergeBucketMapJoinOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/Generator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PartitionConditionRemover.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/LocalMapJoinProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MapJoinResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SkewJoinResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereTaskDispatcher.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/unionproc/UnionProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/PredicatePushDown.java
* /hive/trunk/ql/src/test/queries/clientpositive/recursive_dir.q

Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names

    Key: HIVE-3339
    URL: https://issues.apache.org/jira/browse/HIVE-3339
    Project: Hive
    Issue Type: Bug
    Reporter: Namit Jain
    Assignee: Zhenxiao Luo
    Priority: Minor
    Attachments: HIVE-3339.1.patch.txt, HIVE-3339.2.patch.txt

This should be done for code cleanup. Instead of the rule being SEL%, it should say SelectOperator.getName()%. It would make the rules more readable.
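The idea of the cleanup can be sketched as follows: derive rule patterns from an operator's own name instead of repeating string literals like "SEL%" at every rule site. The Python classes below are only illustrative stand-ins for Hive's Java operators:

```python
# Illustrative stand-ins for Hive operator classes, each exposing its
# rule name through a method rather than forcing callers to hardcode it.
class SelectOperator:
    @staticmethod
    def get_name():
        return "SEL"

class FilterOperator:
    @staticmethod
    def get_name():
        return "FIL"

def rule(*ops):
    """Build a walker rule pattern such as "SEL%FIL%" from operator classes,
    so renaming an operator touches one place instead of every rule."""
    return "".join(op.get_name() + "%" for op in ops)

pattern = rule(SelectOperator, FilterOperator)
```

Centralizing the name means the pattern strings can never drift out of sync with the operators they match.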
RE: newbie in hive dev - process help
Anybody???

-----Original Message-----
From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Wednesday, September 12, 2012 9:30 AM
To: dev@hive.apache.org; hive-...@hadoop.apache.org
Subject: newbie in hive dev - process help

Hi hive dev Gurus,

I am a newbie to hive dev, but have been using hive for about 2 years. I created a UDF to convert a map to a string, since I wanted to make only the key part lower- or upper-case in order to convert old data in a map field, because the keys could be in different cases. I created this about a year ago. I would like to add it to the hive code, so I do not have to customize the code or add the UDF temporarily. Also, it could be a useful UDF to have.

I got the steps to contribute from https://cwiki.apache.org/Hive/howtocontribute.html

Anything else needed? Is this email enough, or do I have to open a JIRA?

Thanks,
Chalcy

-----Original Message-----
From: Namit Jain (JIRA) [mailto:j...@apache.org]
Sent: Wednesday, September 12, 2012 1:53 AM
To: hive-...@hadoop.apache.org
Subject: [jira] [Assigned] (HIVE-3141) Bug in SELECT query

[ https://issues.apache.org/jira/browse/HIVE-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain reassigned HIVE-3141:

    Assignee: Ajesh Kumar

Bug in SELECT query

    Key: HIVE-3141
    URL: https://issues.apache.org/jira/browse/HIVE-3141
    Project: Hive
    Issue Type: Bug
    Components: CLI
    Affects Versions: 0.9.0
    Environment: OS: Ubuntu, Hive version: hive-0.7.1-cdh3u2, Hadoop: hadoop-0.20.2
    Reporter: ASK
    Assignee: Ajesh Kumar
    Priority: Minor
    Labels: patch
    Attachments: HIVE-3141.2.patch.txt, Hive_bug_3141_resolution.pdf, select_syntax.q, select_syntax.q.out

When I try to execute select * (followed by any alphanumeric character) from a table, the query throws some issues. It displays the result for select *. This does not happen when only numbers follow the *.
Re: newbie in hive dev - process help
If you want code committed to Hive, then you need to open a JIRA and attach a patch.

Cheers,
Phil.

On 13 September 2012 13:16, Chalcy Raja chalcy.r...@careerbuilder.com wrote:
> Anybody???
RE: newbie in hive dev - process help
Appreciate your reply, Phil. I'll create a JIRA and start from there.

-----Original Message-----
From: Philip Tromans [mailto:philip.j.trom...@gmail.com]
Sent: Thursday, September 13, 2012 8:39 AM
To: dev@hive.apache.org
Subject: Re: newbie in hive dev - process help

If you want code committed to Hive, then you need to open a JIRA and attach a patch.

Cheers,
Phil.
Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #136
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/

--
[...truncated 10125 lines...]
     [echo] Project: odbc
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/odbc/src/conf does not exist.

ivy-resolve-test:
     [echo] Project: odbc

ivy-retrieve-test:
     [echo] Project: odbc

compile-test:
     [echo] Project: odbc

create-dirs:
     [echo] Project: serde
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/serde/src/test/resources does not exist.

init:
     [echo] Project: serde

ivy-init-settings:
     [echo] Project: serde

ivy-resolve:
     [echo] Project: serde
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-serde-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/ivy/report/org.apache.hive-hive-serde-default.html

ivy-retrieve:
     [echo] Project: serde

dynamic-serde:

compile:
     [echo] Project: serde

ivy-resolve-test:
     [echo] Project: serde

ivy-retrieve-test:
     [echo] Project: serde

compile-test:
     [echo] Project: serde
    [javac] Compiling 26 source files to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/serde/test/classes
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.

create-dirs:
     [echo] Project: service
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/service/src/test/resources does not exist.

init:
     [echo] Project: service

ivy-init-settings:
     [echo] Project: service

ivy-resolve:
     [echo] Project: service
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-service-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/ivy/report/org.apache.hive-hive-service-default.html

ivy-retrieve:
     [echo] Project: service

compile:
     [echo] Project: service

ivy-resolve-test:
     [echo] Project: service

ivy-retrieve-test:
     [echo] Project: service

compile-test:
     [echo] Project: service
    [javac] Compiling 2 source files to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/service/test/classes

test:
     [echo] Project: hive

test-shims:
     [echo] Project: hive

test-conditions:
     [echo] Project: shims

gen-test:
     [echo] Project: shims

create-dirs:
     [echo] Project: shims
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/test/resources does not exist.

init:
     [echo] Project: shims

ivy-init-settings:
     [echo] Project: shims

ivy-resolve:
     [echo] Project: shims
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-shims-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/ivy/report/org.apache.hive-hive-shims-default.html

ivy-retrieve:
     [echo] Project: shims

compile:
     [echo] Project: shims
     [echo] Building shims 0.20

build_shims:
     [echo] Project: shims
     [echo] Compiling https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20/java against hadoop 0.20.2 (https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/136/artifact/hive/build/hadoopcore/hadoop-0.20.2)

ivy-init-settings:
     [echo] Project: shims

ivy-resolve-hadoop-shim:
     [echo] Project: shims
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml

ivy-retrieve-hadoop-shim:
     [echo] Project: shims
     [echo] Building shims 0.20S

build_shims:
     [echo] Project: shims
     [echo] Compiling
[jira] [Updated] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-3432:

    Status: Patch Available (was: Open)

Fixed some more bugs - ready for review now.

perform a map-only group by if grouping key matches the sorting properties of the table

    Key: HIVE-3432
    URL: https://issues.apache.org/jira/browse/HIVE-3432
    Project: Hive
    Issue Type: Improvement
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Namit Jain
    Attachments: hive.3432.1.patch, hive.3432.2.patch

There should be an option to use BucketizedHiveInputFormat and a map-only group by. There would be no need to perform a hash-based map-side aggregation.
[jira] [Updated] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-3432:

    Attachment: hive.3432.2.patch

perform a map-only group by if grouping key matches the sorting properties of the table

    Key: HIVE-3432
    URL: https://issues.apache.org/jira/browse/HIVE-3432
    Project: Hive
    Issue Type: Improvement
    Components: Query Processor
    Reporter: Namit Jain
    Assignee: Namit Jain
    Attachments: hive.3432.1.patch, hive.3432.2.patch

There should be an option to use BucketizedHiveInputFormat and a map-only group by. There would be no need to perform a hash-based map-side aggregation.
Re: Review Request: HIVE-3443: Add serdeParamKey option to Hive Metatool
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6962/
---

(Updated Sept. 13, 2012, 4:50 p.m.)

Review request for hive and Carl Steinbach.

Changes
---

Changes based on review comments on the previous revision.

Description
---

This patch adds an additional option to Hive Metatool that allows Metatool to take in a serdeParamKey from the user. Avro Serde's schema URL key used to be called schema.url in the past, whereas it's called avro.schema.url now. The purpose of the patch is to make Metatool more generic than it is today, so that it's in a position to handle variations such as the one described above. The new option looks as below:

    -serdeParamKey serde_param_key=value

Note that the new option -serdeParamKey is valid only with the -updateLocation option. When the user attempts to use the -serdeParamKey option with other options, an error is raised and the usage is printed. If the user doesn't pass -serdeParamKey as part of the -updateLocation option, Hive Metatool searches for records with both avro.schema.url and schema.url keys and updates them.

This addresses bug HIVE-3343.
https://issues.apache.org/jira/browse/HIVE-3343

Diffs (updated)
---

metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 251d4ba
metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java a76594a
metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaTool.java 5790ae9

Diff: https://reviews.apache.org/r/6962/diff/

Testing
---

HiveMetaTool has been tested to verify that it handles both avro.schema.url and schema.url correctly. The existing test case in TestHiveMetaTool.java has been modified to use AvroSerDe instead of LazySimpleSerDe.

Thanks,
Shreepadma Venugopalan
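A rough model of the described -updateLocation behaviour: when no explicit -serdeParamKey is given, both the old key name (schema.url) and the new one (avro.schema.url) are considered when rewriting location prefixes. Function and parameter names below are hypothetical, not the HiveMetaTool source:

```python
def update_serde_param(params, new_root, old_root,
                       keys=("avro.schema.url", "schema.url")):
    """Rewrite location prefixes stored under serde parameter keys.

    For each candidate key present in the serde parameters, replace a
    leading old_root with new_root; keys whose values do not start with
    old_root (or are absent) are left untouched.
    """
    updated = dict(params)
    for key in keys:
        value = updated.get(key)
        if value is not None and value.startswith(old_root):
            updated[key] = new_root + value[len(old_root):]
    return updated

params = {"schema.url": "hdfs://old-nn:8020/schemas/t.avsc", "other": "x"}
migrated = update_serde_param(params, "hdfs://new-nn:8020", "hdfs://old-nn:8020")
```

Passing a single-element `keys` tuple models the case where the user does supply an explicit -serdeParamKey.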
Re: Review Request: HIVE-3443: Add serdeParamKey option to Hive Metatool
On Sept. 11, 2012, 12:38 a.m., Carl Steinbach wrote: metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaTool.java, line 114 https://reviews.apache.org/r/6962/diff/3/?file=152381#file152381line114 Please add bad URI records for the different fields and verify that the metatool takes a best-effort approach to completing the update operation. I've added records with a bad URI for avro.schema.url. It's not feasible to add bad records for the other fields, since the metastore validates the URL and raises an exception. However, I manually inserted bad records into the metastore through SQL and verified that the metatool completes the update. - Shreepadma --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/6962/#review11291 --- On Sept. 13, 2012, 4:50 p.m., Shreepadma Venugopalan wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/6962/ --- (Updated Sept. 13, 2012, 4:50 p.m.) Review request for hive and Carl Steinbach. Description --- This patch adds an additional option to Hive Metatool that allows Metatool to take in a serdeParamKey from the user. Avro Serde's schema URL key used to be called schema.url in the past, whereas it's called avro.schema.url now. The purpose of the patch is to make Metatool more generic than it is today, so that it's in a position to handle variations such as the one described above. The new option looks as below: -serdeParamKey serde_param_key=value Note that the new option -serdeParamKey is valid only with the -updateLocation option. When the user attempts to use the -serdeParamKey option with other options, an error is raised and the usage is printed. If the user doesn't pass -serdeParamKey as part of the -updateLocation option, Hive Metatool searches for records with both avro.schema.url and schema.url keys and updates them. This addresses bug HIVE-3443. 
https://issues.apache.org/jira/browse/HIVE-3443 Diffs - metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 251d4ba metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java a76594a metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaTool.java 5790ae9 Diff: https://reviews.apache.org/r/6962/diff/ Testing --- HiveMetaTool has been tested to verify that it handles both avro.schema.url and schema.url correctly. The existing test case in TestHiveMetaTool.java has been modified to use AvroSerDe instead of LazySimpleSerDe. Thanks, Shreepadma Venugopalan
[jira] [Updated] (HIVE-3443) Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key
[ https://issues.apache.org/jira/browse/HIVE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shreepadma Venugopalan updated HIVE-3443: - Attachment: HIVE-3443.3.patch.txt Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key --- Key: HIVE-3443 URL: https://issues.apache.org/jira/browse/HIVE-3443 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.10.0 Reporter: Shreepadma Venugopalan Assignee: Shreepadma Venugopalan Priority: Critical Attachments: HIVE-3443.1.patch.txt, HIVE-3443.2.patch.txt, HIVE-3443.3.patch.txt Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key. In the past, the avro.schema.url key used to be called schema.url. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3443) Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key
[ https://issues.apache.org/jira/browse/HIVE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shreepadma Venugopalan updated HIVE-3443: - Status: Patch Available (was: Open) Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key --- Key: HIVE-3443 URL: https://issues.apache.org/jira/browse/HIVE-3443 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.10.0 Reporter: Shreepadma Venugopalan Assignee: Shreepadma Venugopalan Priority: Critical Attachments: HIVE-3443.1.patch.txt, HIVE-3443.2.patch.txt, HIVE-3443.3.patch.txt Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key. In the past, the avro.schema.url key used to be called schema.url. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-3443) Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key
[ https://issues.apache.org/jira/browse/HIVE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455004#comment-13455004 ] Shreepadma Venugopalan commented on HIVE-3443: -- Updated patch available for review at: https://reviews.apache.org/r/6962/ Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key --- Key: HIVE-3443 URL: https://issues.apache.org/jira/browse/HIVE-3443 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.10.0 Reporter: Shreepadma Venugopalan Assignee: Shreepadma Venugopalan Priority: Critical Attachments: HIVE-3443.1.patch.txt, HIVE-3443.2.patch.txt, HIVE-3443.3.patch.txt Hive Metatool should take serde_param_key from the user to allow for changes to avro serde's schema url key. In the past, the avro.schema.url key used to be called schema.url. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3452) Missing column causes null pointer exception
[ https://issues.apache.org/jira/browse/HIVE-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean Xu updated HIVE-3452: -- Release Note: When a table name is used as a column in a select, a null pointer exception is thrown. TypeCheckProcFactory.java has been changed to catch this case and throw a SemanticException listing the possible column names. Status: Patch Available (was: Open) The code change is in https://reviews.facebook.net/D5361 Missing column causes null pointer exception Key: HIVE-3452 URL: https://issues.apache.org/jira/browse/HIVE-3452 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Jean Xu Assignee: Jean Xu Priority: Minor select * from src where src = 'alkdfaj'; FAILED: SemanticException null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true
[ https://issues.apache.org/jira/browse/HIVE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455050#comment-13455050 ] Hudson commented on HIVE-3459: -- Integrated in Hive-trunk-h0.21 #1667 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1667/]) HIVE-3459 Dynamic partition queries producing no partitions fail with hive.stats.reliable=true (Kevin Wilfong via namit) (Revision 1384244) Result = FAILURE namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1384244 Files : * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/StatsTask.java * /hive/trunk/ql/src/test/queries/clientpositive/stats_empty_dyn_part.q * /hive/trunk/ql/src/test/results/clientpositive/stats_empty_dyn_part.q.out Dynamic partition queries producing no partitions fail with hive.stats.reliable=true Key: HIVE-3459 URL: https://issues.apache.org/jira/browse/HIVE-3459 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.10.0 Reporter: Kevin Wilfong Assignee: Kevin Wilfong Fix For: 0.10.0 Attachments: HIVE-3459.1.patch.txt Dynamic partition inserts which result in no partitions (either because the input is empty or all input rows are filtered out) will fail because stats cannot be collected if hive.stats.reliable=true. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-3391) Keep the original query in HiveDriverRunHookContextImpl
[ https://issues.apache.org/jira/browse/HIVE-3391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455051#comment-13455051 ] Hudson commented on HIVE-3391: -- Integrated in Hive-trunk-h0.21 #1667 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1667/]) HIVE-3391. Keep the original query in HiveDriverRunHookContextImpl. (Dawid Dabrowski via kevinwilfong) (Revision 1384247) Result = FAILURE kevinwilfong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1384247 Files : * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/HiveDriverRunHookContext.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/HiveDriverRunHookContextImpl.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/hooks/DriverTestHook.java * /hive/trunk/ql/src/test/queries/clientpositive/driverhook.q * /hive/trunk/ql/src/test/results/clientpositive/driverhook.q.out Keep the original query in HiveDriverRunHookContextImpl --- Key: HIVE-3391 URL: https://issues.apache.org/jira/browse/HIVE-3391 Project: Hive Issue Type: Improvement Components: Logging Reporter: Dawid Dabrowski Assignee: Dawid Dabrowski Priority: Minor Fix For: 0.10.0 Attachments: HIVE-3391.patch.txt Original Estimate: 72h Time Spent: 96h Remaining Estimate: 0h It'd be useful to have access to the original query in hooks. The hook that's executed first is HiveDriverRunHook, so let's add it there. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Hive-trunk-h0.21 - Build # 1667 - Failure
Changes for Build #1667 [kevinwilfong] HIVE-3391. Keep the original query in HiveDriverRunHookContextImpl. (Dawid Dabrowski via kevinwilfong) [namit] HIVE-3459 Dynamic partition queries producing no partitions fail with hive.stats.reliable=true (Kevin Wilfong via namit) 1 tests failed. REGRESSION: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:47) at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:11278) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1667) Status: Failure Check console output at 
https://builds.apache.org/job/Hive-trunk-h0.21/1667/ to view the results.
[jira] [Updated] (HIVE-3438) Add tests for 'm' big tables sort-merge join with 'n' small tables, where both m,n > 1
[ https://issues.apache.org/jira/browse/HIVE-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Wilfong updated HIVE-3438: Resolution: Fixed Fix Version/s: 0.10.0 Status: Resolved (was: Patch Available) Committed, thanks Namit. Add tests for 'm' big tables sort-merge join with 'n' small tables, where both m,n > 1 --- Key: HIVE-3438 URL: https://issues.apache.org/jira/browse/HIVE-3438 Project: Hive Issue Type: Test Components: Tests Reporter: Namit Jain Assignee: Namit Jain Fix For: 0.10.0 Attachments: hive.3438.1.patch Once https://issues.apache.org/jira/browse/HIVE-3171 is in, it would be good to add more tests that test the above condition. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-967) Implement show create table
[ https://issues.apache.org/jira/browse/HIVE-967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455087#comment-13455087 ] Kevin Wilfong commented on HIVE-967: I don't know what you mean by "frozen at the time the view is created". A view is just an alias for a subquery, and in both cases the query text is stored in the metastore, and hence frozen. Implement show create table - Key: HIVE-967 URL: https://issues.apache.org/jira/browse/HIVE-967 Project: Hive Issue Type: New Feature Components: Metastore, Query Processor Reporter: Adam Kramer Assignee: Feng Lu Attachments: HIVE-967.2.patch.txt, HIVE-967.3.patch.txt, HIVE-967.4.patch.txt, HIVE-967.5.patch.txt, HIVE-967.patch.txt, HiveShowCreateTable.jar, show_create.txt SHOW CREATE TABLE would be very useful in cases where you are trying to figure out the partitioning and/or bucketing scheme for a table. Perhaps this could be implemented by having new tables automatically SET PROPERTIES (create_command='raw text of the create statement')? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3455) ANSI CORR(X,Y) is incorrect
[ https://issues.apache.org/jira/browse/HIVE-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maxim Bolotin updated HIVE-3455: Affects Version/s: 0.8.1 ANSI CORR(X,Y) is incorrect --- Key: HIVE-3455 URL: https://issues.apache.org/jira/browse/HIVE-3455 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.1, 0.8.0, 0.8.1, 0.9.0, 0.10.0 Reporter: Maxim Bolotin A simple test with 2 collinear vectors returns a wrong result. The problem is the merge of variances, file: http://svn.apache.org/viewvc/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCorrelation.java?revision=1157222&view=markup lines: 347: myagg.xvar += xvarB + (xavgA-xavgB) * (xavgA-xavgB) * myagg.count; 348: myagg.yvar += yvarB + (yavgA-yavgB) * (yavgA-yavgB) * myagg.count; the correct merge should be like this: 347: myagg.xvar += xvarB+(xavgA - xavgB)*(xavgA-xavgB)/myagg.count*nA*nB; 348: myagg.yvar += yvarB+(yavgA - yavgB)*(yavgA-yavgB)/myagg.count*nA*nB; -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3455) ANSI CORR(X,Y) is incorrect
[ https://issues.apache.org/jira/browse/HIVE-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maxim Bolotin updated HIVE-3455: Affects Version/s: 0.10.0 ANSI CORR(X,Y) is incorrect --- Key: HIVE-3455 URL: https://issues.apache.org/jira/browse/HIVE-3455 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.1, 0.8.0, 0.8.1, 0.9.0, 0.10.0 Reporter: Maxim Bolotin A simple test with 2 collinear vectors returns a wrong result. The problem is the merge of variances, file: http://svn.apache.org/viewvc/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCorrelation.java?revision=1157222&view=markup lines: 347: myagg.xvar += xvarB + (xavgA-xavgB) * (xavgA-xavgB) * myagg.count; 348: myagg.yvar += yvarB + (yavgA-yavgB) * (yavgA-yavgB) * myagg.count; the correct merge should be like this: 347: myagg.xvar += xvarB+(xavgA - xavgB)*(xavgA-xavgB)/myagg.count*nA*nB; 348: myagg.yvar += yvarB+(yavgA - yavgB)*(yavgA-yavgB)/myagg.count*nA*nB; -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
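The merge proposed in the HIVE-3455 report is the standard pairwise combination of sums of squared deviations (the parallel variance formula): ssd(A ∪ B) = ssd(A) + ssd(B) + (avgA − avgB)² · nA·nB / (nA + nB). The following is a minimal, self-contained sketch — hypothetical names, not the actual Hive UDAF code — that checks the proposed merge against a direct computation over the combined data:

```java
// Sketch verifying the corrected variance merge from the HIVE-3455 report:
// ssd(A ∪ B) = ssd(A) + ssd(B) + (avgA-avgB)^2 * nA*nB / (nA+nB)
public class VarianceMerge {

    // Sum of squared deviations from the mean over xs[from, to)
    static double ssd(double[] xs, int from, int to) {
        double mean = mean(xs, from, to), s = 0;
        for (int i = from; i < to; i++) s += (xs[i] - mean) * (xs[i] - mean);
        return s;
    }

    static double mean(double[] xs, int from, int to) {
        double m = 0;
        for (int i = from; i < to; i++) m += xs[i];
        return m / (to - from);
    }

    // The corrected merge: divide by the combined count and scale by nA*nB
    // (the pre-fix code multiplied by the combined count instead).
    public static double merge(double ssdA, double ssdB,
                               double avgA, double avgB, long nA, long nB) {
        double d = avgA - avgB;
        return ssdA + ssdB + d * d / (nA + nB) * nA * nB;
    }

    public static void main(String[] args) {
        double[] xs = {1, 2, 3, 10, 20, 30};
        double merged = merge(ssd(xs, 0, 3), ssd(xs, 3, 6),
                              mean(xs, 0, 3), mean(xs, 3, 6), 3, 3);
        double direct = ssd(xs, 0, 6);
        // The merged value must equal the single-pass value over all six points.
        if (Math.abs(merged - direct) > 1e-9) throw new AssertionError();
        System.out.println(merged + " == " + direct);
    }
}
```

With ssd(A) = 2, ssd(B) = 200, avgA = 2, avgB = 20, and nA = nB = 3, both paths yield 688, whereas the pre-fix formula (multiplying by the combined count) would not.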
[jira] [Updated] (HIVE-3422) Support partial partition specifications when enabling/disabling protections in Hive
[ https://issues.apache.org/jira/browse/HIVE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean Xu updated HIVE-3422: -- Release Note: For alter table with a partial partition spec, DDLSemanticAnalyzer will call table.getPartitions to get all the partitions and insert them into outputs. DDLTask::alterTable will get all the partitions for ALTERPROTECTMODE, and alter the mode for all the partitions. Status: Patch Available (was: Open) https://reviews.facebook.net/D5241 Support partial partition specifications when enabling/disabling protections in Hive --- Key: HIVE-3422 URL: https://issues.apache.org/jira/browse/HIVE-3422 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Jean Xu Assignee: Jean Xu Priority: Minor Currently if you have a table t with partition columns c1 and c2 the following command works: ALTER TABLE t PARTITION (c1 = 'x', c2 = 'y') ENABLE NO_DROP; The following does not: ALTER TABLE t PARTITION (c1 = 'x') ENABLE NO_DROP; We would like all existing partitions for which c1 = 'x' to have NO_DROP enabled when a user runs the above command. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-967) Implement show create table
[ https://issues.apache.org/jira/browse/HIVE-967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455158#comment-13455158 ] Steven Wong commented on HIVE-967: -- @Kevin, https://cwiki.apache.org/confluence/display/Hive/ViewDev#ViewDev-StoredViewDefinition explains "frozen at the time the view is created" better than I can. Here is the scenario I have in mind: # Create a view v as select * from t, where t is a table with columns a and b. Hence, select * from v will return a and b. # Add a new column c to t. Per the view dev wiki page above, select * from v will still return just a and b. # Do show create table on v and save the output. # Drop v. (Maybe this is done intentionally, accidentally, or maliciously by someone else.) # Run the saved output to re-create v. # Now, should select * from v return just a and b, or should it return a, b, and c? This is basically the issue I'm trying to point out. I think a and b look like the better answer, which would mean using getViewExpandedText in show create table. Implement show create table - Key: HIVE-967 URL: https://issues.apache.org/jira/browse/HIVE-967 Project: Hive Issue Type: New Feature Components: Metastore, Query Processor Reporter: Adam Kramer Assignee: Feng Lu Attachments: HIVE-967.2.patch.txt, HIVE-967.3.patch.txt, HIVE-967.4.patch.txt, HIVE-967.5.patch.txt, HIVE-967.patch.txt, HiveShowCreateTable.jar, show_create.txt SHOW CREATE TABLE would be very useful in cases where you are trying to figure out the partitioning and/or bucketing scheme for a table. Perhaps this could be implemented by having new tables automatically SET PROPERTIES (create_command='raw text of the create statement')? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-967) Implement show create table
[ https://issues.apache.org/jira/browse/HIVE-967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455168#comment-13455168 ] Kevin Wilfong commented on HIVE-967: Thanks Steven, I understand now. You're right. Feng, if you're still working on this, can you update it as Steven suggested? Implement show create table - Key: HIVE-967 URL: https://issues.apache.org/jira/browse/HIVE-967 Project: Hive Issue Type: New Feature Components: Metastore, Query Processor Reporter: Adam Kramer Assignee: Feng Lu Attachments: HIVE-967.2.patch.txt, HIVE-967.3.patch.txt, HIVE-967.4.patch.txt, HIVE-967.5.patch.txt, HIVE-967.patch.txt, HiveShowCreateTable.jar, show_create.txt SHOW CREATE TABLE would be very useful in cases where you are trying to figure out the partitioning and/or bucketing scheme for a table. Perhaps this could be implemented by having new tables automatically SET PROPERTIES (create_command='raw text of the create statement')? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3450) Hive maven-publish ant task should be configurable
[ https://issues.apache.org/jira/browse/HIVE-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-3450: - Component/s: Build Infrastructure Hive maven-publish ant task should be configurable -- Key: HIVE-3450 URL: https://issues.apache.org/jira/browse/HIVE-3450 Project: Hive Issue Type: Improvement Components: Build Infrastructure Reporter: Travis Crawford Assignee: Travis Crawford Attachments: HIVE-3450_mvn_deploy_configure.1.patch, HIVE-3450_mvn_deploy_configure.2.patch Hive has support in its build process for publishing artifacts to Maven repositories. However, the Maven repository to publish to is not configurable. Hive's build should be updated so users can specify what Maven repository to publish artifacts in. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3231) msck repair should find partitions already containing data files
[ https://issues.apache.org/jira/browse/HIVE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-3231: - Component/s: Metastore msck repair should find partitions already containing data files Key: HIVE-3231 URL: https://issues.apache.org/jira/browse/HIVE-3231 Project: Hive Issue Type: Improvement Components: Metastore Affects Versions: 0.10.0, 0.9.1 Reporter: Keegan Mosley Labels: msck Fix For: 0.10.0 Attachments: HIVE-3231.1.patch.txt msck repair currently will only discover partition directories if they are empty. It seems a more apt use case to copy data files into a table, creating the partition directories as you go, rather than creating a bunch of empty partition directories, then running msck repair to dynamically add them, then inserting your actual data files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3231) msck repair should find partitions already containing data files
[ https://issues.apache.org/jira/browse/HIVE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-3231: - Status: Open (was: Patch Available) @Keegan: This patch needs to be rebased on trunk. Also, repair.q has been split into repair.q and repair_hadoop23.q, so both files probably need to be updated. Finally, would you mind submitting a review request for this on either phabricator or reviewboard? Thanks. msck repair should find partitions already containing data files Key: HIVE-3231 URL: https://issues.apache.org/jira/browse/HIVE-3231 Project: Hive Issue Type: Improvement Components: Metastore Affects Versions: 0.10.0, 0.9.1 Reporter: Keegan Mosley Labels: msck Fix For: 0.10.0 Attachments: HIVE-3231.1.patch.txt msck repair currently will only discover partition directories if they are empty. It seems a more apt use case to copy data files into a table, creating the partition directories as you go, rather than creating a bunch of empty partition directories, then running msck repair to dynamically add them, then inserting your actual data files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3231) msck repair should find partitions already containing data files
[ https://issues.apache.org/jira/browse/HIVE-3231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-3231: - Assignee: Keegan Mosley msck repair should find partitions already containing data files Key: HIVE-3231 URL: https://issues.apache.org/jira/browse/HIVE-3231 Project: Hive Issue Type: Improvement Components: Metastore Affects Versions: 0.10.0, 0.9.1 Reporter: Keegan Mosley Assignee: Keegan Mosley Labels: msck Fix For: 0.10.0 Attachments: HIVE-3231.1.patch.txt msck repair currently will only discover partition directories if they are empty. It seems a more apt use case to copy data files into a table, creating the partition directories as you go, rather than creating a bunch of empty partition directories, then running msck repair to dynamically add them, then inserting your actual data files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Jenkins build is back to normal : Hive-0.9.1-SNAPSHOT-h0.21 #136
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/136/
[jira] [Updated] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE
[ https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-3098: --- Fix Version/s: 0.9.1 Committed to 0.9 branch. Memory leak from large number of FileSystem instances in FileSystem.CACHE - Key: HIVE-3098 URL: https://issues.apache.org/jira/browse/HIVE-3098 Project: Hive Issue Type: Bug Components: Shims Affects Versions: 0.9.0 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security turned on. Reporter: Mithun Radhakrishnan Assignee: Mithun Radhakrishnan Fix For: 0.10.0, 0.9.1 Attachments: Hive-3098_(FS_closeAllForUGI()).patch, hive-3098.patch, Hive_3098.patch The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing the Oracle backend). The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60 threads, in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 100 instances of FileSystem, whose combined retained memory consumed the entire heap. It boiled down to hadoop::UserGroupInformation::equals() being implemented such that the Subject member is compared for identity (==), not equivalence (.equals()). This causes equivalent UGI instances to compare as unequal, which causes a new FileSystem instance to be created and cached. UGI.equals() is so implemented, incidentally, as a fix for yet another problem (HADOOP-6670); so it is unlikely that that implementation can be modified. The solution for this is to check for UGI equivalence in HCatalog (i.e. in the Hive metastore), using a cache for UGI instances in the shims. I have a patch to fix this. I'll upload it shortly. I just ran an overnight test to confirm that the memory-leak has been arrested. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
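The HIVE-3098 leak pattern — a cache whose key type compares by identity rather than equivalence — can be illustrated without any Hadoop dependencies. The sketch below is hypothetical (it is not the UGI or FileSystem code): a key class whose equals() uses `==`, like UGI comparing its Subject with `==`, makes every logically equivalent lookup miss the cache and insert a fresh entry:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the HIVE-3098 pattern: a cache keyed by a
// class with identity-based equals() grows by one entry per lookup, the way
// FileSystem.CACHE grows when equivalent UGI instances compare as unequal.
public class IdentityKeyedCache {

    static final class Key {
        final String user;
        Key(String user) { this.user = user; }
        // Reference-equality semantics, analogous to UGI comparing its
        // Subject member with '==' instead of .equals().
        @Override public boolean equals(Object o) { return this == o; }
        @Override public int hashCode() { return System.identityHashCode(this); }
    }

    // Performs `lookups` cache lookups with logically equivalent keys and
    // returns how many entries the cache ends up holding.
    public static int cachedInstances(int lookups) {
        Map<Key, Object> cache = new HashMap<>();
        for (int i = 0; i < lookups; i++) {
            Key k = new Key("hcat");              // equivalent every time...
            cache.computeIfAbsent(k, key -> new Object()); // ...but never a hit
        }
        return cache.size();
    }

    public static void main(String[] args) {
        // 100 equivalent lookups leave 100 cached entries instead of 1 --
        // the shape of the retained-memory growth described in the report.
        System.out.println(cachedInstances(100));
    }
}
```

A value-based equals() (comparing `user`) would make cachedInstances return 1 for any number of lookups, which is the effect the patch achieves by interposing an equivalence-aware UGI cache in the shims.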
[jira] [Commented] (HIVE-3428) Fix log4j configuration errors when running hive on hadoop23
[ https://issues.apache.org/jira/browse/HIVE-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455342#comment-13455342 ] Zhenxiao Luo commented on HIVE-3428: Updated patch; comments addressed. Review request submitted at: https://reviews.facebook.net/D5133 Fix log4j configuration errors when running hive on hadoop23 Key: HIVE-3428 URL: https://issues.apache.org/jira/browse/HIVE-3428 Project: Hive Issue Type: Bug Affects Versions: 0.10.0 Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-3428.1.patch.txt, HIVE-3428.2.patch.txt, HIVE-3428.3.patch.txt, HIVE-3428.4.patch.txt There are log4j configuration errors when running hive on hadoop23; some of them may fail testcases, since the following log4j error messages could be printed to the console or to the output file, which then diffs from the expected output: [junit] log4j:ERROR Could not find value for key log4j.appender.NullAppender [junit] log4j:ERROR Could not instantiate appender named NullAppender. [junit] 12/09/04 11:34:42 WARN conf.HiveConf: hive-site.xml not found on CLASSPATH -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3428) Fix log4j configuration errors when running hive on hadoop23
[ https://issues.apache.org/jira/browse/HIVE-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenxiao Luo updated HIVE-3428: --- Status: Open (was: Patch Available) Fix log4j configuration errors when running hive on hadoop23 Key: HIVE-3428 URL: https://issues.apache.org/jira/browse/HIVE-3428 Project: Hive Issue Type: Bug Affects Versions: 0.10.0 Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-3428.1.patch.txt, HIVE-3428.2.patch.txt, HIVE-3428.3.patch.txt, HIVE-3428.4.patch.txt There are log4j configuration errors when running hive on hadoop23; some of them may fail testcases, since the following log4j error messages could be printed to the console or to the output file, which then diffs from the expected output: [junit] log4j:ERROR Could not find value for key log4j.appender.NullAppender [junit] log4j:ERROR Could not instantiate appender named NullAppender. [junit] 12/09/04 11:34:42 WARN conf.HiveConf: hive-site.xml not found on CLASSPATH -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3428) Fix log4j configuration errors when running hive on hadoop23
[ https://issues.apache.org/jira/browse/HIVE-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenxiao Luo updated HIVE-3428: --- Attachment: HIVE-3428.4.patch.txt Fix log4j configuration errors when running hive on hadoop23 Key: HIVE-3428 URL: https://issues.apache.org/jira/browse/HIVE-3428 Project: Hive Issue Type: Bug Affects Versions: 0.10.0 Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-3428.1.patch.txt, HIVE-3428.2.patch.txt, HIVE-3428.3.patch.txt, HIVE-3428.4.patch.txt There are log4j configuration errors when running hive on hadoop23; some of them may fail testcases, since the following log4j error messages could be printed to the console or to the output file, which then diffs from the expected output: [junit] log4j:ERROR Could not find value for key log4j.appender.NullAppender [junit] log4j:ERROR Could not instantiate appender named NullAppender. [junit] 12/09/04 11:34:42 WARN conf.HiveConf: hive-site.xml not found on CLASSPATH -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-3428) Fix log4j configuration errors when running hive on hadoop23
[ https://issues.apache.org/jira/browse/HIVE-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenxiao Luo updated HIVE-3428: --- Status: Patch Available (was: Open) Fix log4j configuration errors when running hive on hadoop23 Key: HIVE-3428 URL: https://issues.apache.org/jira/browse/HIVE-3428 Project: Hive Issue Type: Bug Affects Versions: 0.10.0 Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-3428.1.patch.txt, HIVE-3428.2.patch.txt, HIVE-3428.3.patch.txt, HIVE-3428.4.patch.txt There are log4j configuration errors when running hive on hadoop23, some of which may fail test cases, since the following log4j error messages could be printed to the console or to the output file, which differs from the expected output: [junit] log4j:ERROR Could not find value for key log4j.appender.NullAppender [junit] log4j:ERROR Could not instantiate appender named NullAppender. [junit] 12/09/04 11:34:42 WARN conf.HiveConf: hive-site.xml not found on CLASSPATH
[jira] [Created] (HIVE-3460) Simultaneous attempts to initialize the Hive Metastore can fail due to error Table 'metastore_DELETEME1347565995856' doesn't exist
Lenni Kuff created HIVE-3460: Summary: Simultaneous attempts to initialize the Hive Metastore can fail due to error Table 'metastore_DELETEME1347565995856' doesn't exist Key: HIVE-3460 URL: https://issues.apache.org/jira/browse/HIVE-3460 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.8.1 Reporter: Lenni Kuff If multiple clients attempt to access/initialize the Hive Metastore at the same time, they can fail due to the error Table 'metastore_DELETEME1347565995856' doesn't exist. A common scenario where this could happen is if there is a central MySQL metastore and clients from multiple machines attempt to read from the metastore at the same time. This is outside of a standalone Hive Server install scenario. I believe this is not actually a Hive bug, but instead a DataNucleus issue. {code} Exception in thread main javax.jdo.JDODataStoreException: Exception thrown obtaining schema column information from datastore at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313) at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4154) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411) at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175) at org.datanucleus.store.query.Query.executeQuery(Query.java:1628) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243) at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:389) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:408) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:485) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$300(HiveMetaStore.java:141) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:507) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:504) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:504) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:266) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:228) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:114) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:98) NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'metastore_DELETEME1347565995856' doesn't exist at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:406) at com.mysql.jdbc.Util.getInstance(Util.java:381) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1030) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3558) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3490) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2109) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2637) at 
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2566) at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1464) at com.mysql.jdbc.DatabaseMetaData$2.forEach(DatabaseMetaData.java:2472) at com.mysql.jdbc.IterateBlock.doForAll(IterateBlock.java:50) at com.mysql.jdbc.DatabaseMetaData.getColumns(DatabaseMetaData.java:2346) at org.apache.commons.dbcp.DelegatingDatabaseMetaData.getColumns(DelegatingDatabaseMetaData.java:218) at org.datanucleus.store.rdbms.adapter.DatabaseAdapter.getColumns(DatabaseAdapter.java:1460)
[jira] [Commented] (HIVE-3460) Simultaneous attempts to initialize the Hive Metastore can fail due to error Table 'metastore_DELETEME1347565995856' doesn't exist
[ https://issues.apache.org/jira/browse/HIVE-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13455398#comment-13455398 ] Carl Steinbach commented on HIVE-3460: -- Ed reported encountering this problem in an email to the hive-user list: http://mail-archives.apache.org/mod_mbox/hive-user/201107.mbox/%3c4f6b25afffcafe44b6259a412d5f9b1033183...@exchmbx104.netflix.com%3E He noted that in some cases it's possible to prevent this error by setting datanucleus.fixedDataStore=true. Simultaneous attempts to initialize the Hive Metastore can fail due to error Table 'metastore_DELETEME1347565995856' doesn't exist Key: HIVE-3460 URL: https://issues.apache.org/jira/browse/HIVE-3460 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.8.1 Reporter: Lenni Kuff If multiple clients attempt to access/initialize the Hive Metastore at the same time, they can fail due to the error Table 'metastore_DELETEME1347565995856' doesn't exist. A common scenario where this could happen is if there is a central MySQL metastore and clients from multiple machines attempt to read from the metastore at the same time. This is outside of a standalone Hive Server install scenario. I believe this is not actually a Hive bug, but instead a DataNucleus issue. 
{code} Exception in thread main javax.jdo.JDODataStoreException: Exception thrown obtaining schema column information from datastore at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313) at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4154) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411) at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175) at org.datanucleus.store.query.Query.executeQuery(Query.java:1628) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243) at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:389) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:408) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:485) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$300(HiveMetaStore.java:141) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:507) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:504) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:504) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:266) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:228) at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:114) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:98) NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'metastore_DELETEME1347565995856' doesn't exist at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:406) at com.mysql.jdbc.Util.getInstance(Util.java:381) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1030) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3558) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3490) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959) at
Hive-trunk-h0.21 - Build # 1668 - Still Failing
Changes for Build #1667 [kevinwilfong] HIVE-3391. Keep the original query in HiveDriverRunHookContextImpl. (Dawid Dabrowski via kevinwilfong) [namit] HIVE-3459 Dynamic partition queries producing no partitions fail with hive.stats.reliable=true (Kevin Wilfong via namit) Changes for Build #1668 1 tests failed. FAILED: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:47) at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:11278) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1668) Status: Still Failing Check console 
output at https://builds.apache.org/job/Hive-trunk-h0.21/1668/ to view the results.
[jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE
[ https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13455465#comment-13455465 ] Hudson commented on HIVE-3098: -- Integrated in Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #137 (See [https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/]) HIVE-3098 : Memory leak from large number of FileSystem instances in FileSystem.CACHE (Mithun Radhakrishnan via Ashutosh Chauhan) (Revision 1384541) Result = FAILURE hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1384541 Files : * /hive/branches/branch-0.9/metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java * /hive/branches/branch-0.9/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java * /hive/branches/branch-0.9/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java * /hive/branches/branch-0.9/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java * /hive/branches/branch-0.9/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java Memory leak from large number of FileSystem instances in FileSystem.CACHE - Key: HIVE-3098 URL: https://issues.apache.org/jira/browse/HIVE-3098 Project: Hive Issue Type: Bug Components: Shims Affects Versions: 0.9.0 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security turned on. Reporter: Mithun Radhakrishnan Assignee: Mithun Radhakrishnan Fix For: 0.10.0, 0.9.1 Attachments: Hive-3098_(FS_closeAllForUGI()).patch, hive-3098.patch, Hive_3098.patch The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing the Oracle backend). The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60-threads, in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 100 instances of FileSystem, whose combined retained-mem consumed the entire heap. 
It boiled down to hadoop::UserGroupInformation::equals() being implemented such that the Subject member is compared for equality (==), and not equivalence (.equals()). This causes equivalent UGI instances to compare as unequal, and causes a new FileSystem instance to be created and cached. The UGI.equals() is so implemented, incidentally, as a fix for yet another problem (HADOOP-6670); so it is unlikely that that implementation can be modified. The solution for this is to check for UGI equivalence in HCatalog (i.e. in the Hive metastore), using a cache for UGI instances in the shims. I have a patch to fix this. I'll upload it shortly. I just ran an overnight test to confirm that the memory-leak has been arrested.
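The identity-based equals() described above can be illustrated with a small, self-contained sketch. FakeUgi is a hypothetical stand-in for UserGroupInformation (none of this is Hive's actual shim code): because its equals() compares the inner subject by ==, every freshly built but equivalent key misses the cache, so entries pile up exactly as in the heap dump:

```java
import java.util.HashMap;
import java.util.Map;

public class UgiCacheSketch {
    // Hypothetical stand-in for UserGroupInformation: equal only if the
    // very same Subject object ('==', not .equals()).
    static final class FakeUgi {
        final Object subject; // stand-in for javax.security.auth.Subject
        final String user;
        FakeUgi(String user, Object subject) { this.user = user; this.subject = subject; }
        @Override public boolean equals(Object o) {
            return o instanceof FakeUgi && ((FakeUgi) o).subject == this.subject;
        }
        @Override public int hashCode() { return System.identityHashCode(subject); }
    }

    public static void main(String[] args) {
        // Stand-in for FileSystem.CACHE, keyed by the UGI.
        Map<FakeUgi, String> fsCache = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            // Each request builds a fresh but equivalent UGI; '==' on the
            // subject never matches, so every lookup misses and a new
            // "FileSystem" entry is cached instead of being reused.
            FakeUgi ugi = new FakeUgi("hcat", new Object());
            fsCache.computeIfAbsent(ugi, u -> "fs-instance-for-" + u.user);
        }
        System.out.println(fsCache.size()); // 100 entries leak instead of 1
    }
}
```

Caching UGIs by equivalence (user name plus credentials) in the shim layer, as the patch proposes, makes the repeated lookups hit a single entry.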
Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #137
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/changes Changes: [hashutosh] HIVE-3098 : Memory leak from large number of FileSystem instances in FileSystem.CACHE (Mithun Radhakrishnan via Ashutosh Chauhan) -- [...truncated 10125 lines...] [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/odbc/src/conf does not exist. ivy-resolve-test: [echo] Project: odbc ivy-retrieve-test: [echo] Project: odbc compile-test: [echo] Project: odbc create-dirs: [echo] Project: serde [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/serde/src/test/resources does not exist. init: [echo] Project: serde ivy-init-settings: [echo] Project: serde ivy-resolve: [echo] Project: serde [ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml [ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-serde-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/ivy/report/org.apache.hive-hive-serde-default.html ivy-retrieve: [echo] Project: serde dynamic-serde: compile: [echo] Project: serde ivy-resolve-test: [echo] Project: serde ivy-retrieve-test: [echo] Project: serde compile-test: [echo] Project: serde [javac] Compiling 26 source files to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/serde/test/classes [javac] Note: Some input files use or override a deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. [javac] Note: Some input files use unchecked or unsafe operations. [javac] Note: Recompile with -Xlint:unchecked for details. 
create-dirs: [echo] Project: service [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/service/src/test/resources does not exist. init: [echo] Project: service ivy-init-settings: [echo] Project: service ivy-resolve: [echo] Project: service [ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml [ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-service-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/ivy/report/org.apache.hive-hive-service-default.html ivy-retrieve: [echo] Project: service compile: [echo] Project: service ivy-resolve-test: [echo] Project: service ivy-retrieve-test: [echo] Project: service compile-test: [echo] Project: service [javac] Compiling 2 source files to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/service/test/classes test: [echo] Project: hive test-shims: [echo] Project: hive test-conditions: [echo] Project: shims gen-test: [echo] Project: shims create-dirs: [echo] Project: shims [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/test/resources does not exist. 
init: [echo] Project: shims ivy-init-settings: [echo] Project: shims ivy-resolve: [echo] Project: shims [ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml [ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-shims-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/ivy/report/org.apache.hive-hive-shims-default.html ivy-retrieve: [echo] Project: shims compile: [echo] Project: shims [echo] Building shims 0.20 build_shims: [echo] Project: shims [echo] Compiling https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/hudson/hudson-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20/java against hadoop 0.20.2 (https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/137/artifact/hive/build/hadoopcore/hadoop-0.20.2) ivy-init-settings: [echo] Project: shims ivy-resolve-hadoop-shim: [echo] Project: shims [ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml ivy-retrieve-hadoop-shim: [echo] Project: shims [echo] Building shims
[jira] [Updated] (HIVE-3461) hive unit tests fail to get lock using zookeeper on windows
[ https://issues.apache.org/jira/browse/HIVE-3461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-3461: Description: Following exception is seen from test cases when lock is attempted - {code} 2012-08-26 10:33:33,597 ERROR ZooKeeperHiveLockManager (ZooKeeperHiveLockManager.java:lock(317)) - Serious Zookeeper exception: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hive_zookeeper_namespace/default at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.createChild(ZooKeeperHiveLockManager.java:285) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lockPrimitive(ZooKeeperHiveLockManager.java:353) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock(ZooKeeperHiveLockManager.java:303) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock(ZooKeeperHiveLockManager.java:220) at org.apache.hadoop.hive.ql.Driver.acquireReadWriteLocks(Driver.java:828) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:944) at org.apache.hadoop.hive.ql.QTestUtil.runLoadCmd(QTestUtil.java:458) at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:505) at org.apache.hadoop.hive.cli.TestCliDriver.clinit(TestCliDriver.java:55) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:169) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:374) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911) {code} hive unit tests fail to get lock using zookeeper on windows --- Key: HIVE-3461 URL: 
https://issues.apache.org/jira/browse/HIVE-3461 Project: Hive Issue Type: Bug Affects Versions: 0.9.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Following exception is seen from test cases when lock is attempted - {code} 2012-08-26 10:33:33,597 ERROR ZooKeeperHiveLockManager (ZooKeeperHiveLockManager.java:lock(317)) - Serious Zookeeper exception: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hive_zookeeper_namespace/default at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.createChild(ZooKeeperHiveLockManager.java:285) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lockPrimitive(ZooKeeperHiveLockManager.java:353) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock(ZooKeeperHiveLockManager.java:303) at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock(ZooKeeperHiveLockManager.java:220) at org.apache.hadoop.hive.ql.Driver.acquireReadWriteLocks(Driver.java:828) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:944) at org.apache.hadoop.hive.ql.QTestUtil.runLoadCmd(QTestUtil.java:458) at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:505) at org.apache.hadoop.hive.cli.TestCliDriver.clinit(TestCliDriver.java:55) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:169) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:374) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911) {code}
[jira] [Commented] (HIVE-3432) perform a map-only group by if grouping key matches the sorting properties of the table
[ https://issues.apache.org/jira/browse/HIVE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13455561#comment-13455561 ] Namit Jain commented on HIVE-3432: -- All the tests passed. perform a map-only group by if grouping key matches the sorting properties of the table --- Key: HIVE-3432 URL: https://issues.apache.org/jira/browse/HIVE-3432 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Namit Jain Attachments: hive.3432.1.patch, hive.3432.2.patch There should be an option to use bucketizedinputformat and use map-only group by. There would be no need to perform a map-side aggregation.
[jira] [Created] (HIVE-3462) bucketcontext_5.q.out missing from results
Namit Jain created HIVE-3462: Summary: bucketcontext_5.q.out missing from results Key: HIVE-3462 URL: https://issues.apache.org/jira/browse/HIVE-3462 Project: Hive Issue Type: Bug Components: Tests Reporter: Namit Jain This should have been checked in as part of HIVE-3171, but was somehow missed. The tests are failing due to this.
[jira] [Resolved] (HIVE-3462) bucketcontext_5.q.out missing from results
[ https://issues.apache.org/jira/browse/HIVE-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Jain resolved HIVE-3462. -- Resolution: Won't Fix Something was wrong in my env. - the file shows up now bucketcontext_5.q.out missing from results -- Key: HIVE-3462 URL: https://issues.apache.org/jira/browse/HIVE-3462 Project: Hive Issue Type: Bug Components: Tests Reporter: Namit Jain This should have been checked in as part of HIVE-3171, but was somehow missed. The tests are failing due to this.
[jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE
[ https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13455572#comment-13455572 ] Hudson commented on HIVE-3098: -- Integrated in Hive-0.9.1-SNAPSHOT-h0.21 #137 (See [https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/137/]) HIVE-3098 : Memory leak from large number of FileSystem instances in FileSystem.CACHE (Mithun Radhakrishnan via Ashutosh Chauhan) (Revision 1384541) Result = SUCCESS hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1384541 Files : * /hive/branches/branch-0.9/metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java * /hive/branches/branch-0.9/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java * /hive/branches/branch-0.9/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java * /hive/branches/branch-0.9/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java * /hive/branches/branch-0.9/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java Memory leak from large number of FileSystem instances in FileSystem.CACHE - Key: HIVE-3098 URL: https://issues.apache.org/jira/browse/HIVE-3098 Project: Hive Issue Type: Bug Components: Shims Affects Versions: 0.9.0 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security turned on. Reporter: Mithun Radhakrishnan Assignee: Mithun Radhakrishnan Fix For: 0.10.0, 0.9.1 Attachments: Hive-3098_(FS_closeAllForUGI()).patch, hive-3098.patch, Hive_3098.patch The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing the Oracle backend). The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60-threads, in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 100 instances of FileSystem, whose combined retained-mem consumed the entire heap. 
It boiled down to hadoop::UserGroupInformation::equals() being implemented such that the Subject member is compared for equality (==), and not equivalence (.equals()). This causes equivalent UGI instances to compare as unequal, and causes a new FileSystem instance to be created and cached. The UGI.equals() is so implemented, incidentally, as a fix for yet another problem (HADOOP-6670); so it is unlikely that that implementation can be modified. The solution for this is to check for UGI equivalence in HCatalog (i.e. in the Hive metastore), using a cache for UGI instances in the shims. I have a patch to fix this. I'll upload it shortly. I just ran an overnight test to confirm that the memory-leak has been arrested.
Hive JIRA issue HIVE-3299 : Patch submitted
Dear All, The patch for the issue HIVE-3299 has been submitted. Please review. Thanks and Regards, Namitha Babychan 533296 NextGen Solutions - Kochi Ph:+914846618356,9446483467 =-=-= Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you
[jira] [Updated] (HIVE-3452) Missing column causes null pointer exception
[ https://issues.apache.org/jira/browse/HIVE-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Jain updated HIVE-3452: - Status: Open (was: Patch Available) minor comments on phabricator Missing column causes null pointer exception Key: HIVE-3452 URL: https://issues.apache.org/jira/browse/HIVE-3452 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Jean Xu Assignee: Jean Xu Priority: Minor select * from src where src = 'alkdfaj'; FAILED: SemanticException null
[jira] [Created] (HIVE-3463) Add CASCADING to MySQL's InnoDB schema
Alexander Alten-Lorenz created HIVE-3463: Summary: Add CASCADING to MySQL's InnoDB schema Key: HIVE-3463 URL: https://issues.apache.org/jira/browse/HIVE-3463 Project: Hive Issue Type: Improvement Components: Metastore Affects Versions: 0.9.0 Reporter: Alexander Alten-Lorenz Priority: Minor Cascading could help to clean up the tables when a FK is deleted. http://dev.mysql.com/doc/refman/5.5/en/innodb-foreign-key-constraints.html
Re: Hive JIRA issue HIVE-3299 : Patch submitted
Hi Namitha, I left review comments on JIRA. Thanks. Carl On Thu, Sep 13, 2012 at 10:04 PM, Namitha Babychan/TVM/TCS namitha.babyc...@tcs.com wrote: Dear All, The patch for the issue HIVE-3299 has been submitted. Please review.
[jira] [Created] (HIVE-3464) Merging join tree may reorder joins which could be invalid
Navis created HIVE-3464: --- Summary: Merging join tree may reorder joins which could be invalid Key: HIVE-3464 URL: https://issues.apache.org/jira/browse/HIVE-3464 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.10.0 Reporter: Navis Assignee: Navis Currently, Hive merges the join tree from right to left regardless of join types, which may introduce join reordering. For example, select * from a join a b on a.key=b.key join a c on b.key=c.key join a d on a.key=d.key; Hive tries to merge the join tree in a-d=b-d, a-d=a-b, b-c=a-b order, and a-d=a-b and b-c=a-b will be merged. The final join tree is a-(bdc). With this, the ab-d join will be executed prior to ab-c. But if the join types of -c and -d are different, this is not valid.
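To see why reordering across different join types is invalid, here is a toy simulation (hypothetical helper code, not Hive's planner) comparing (A LEFT OUTER JOIN B) INNER JOIN C on B's key against the reordered A LEFT OUTER JOIN (B INNER JOIN C). With B empty, the first plan is empty (the NULL-padded B key never matches C) while the reordered plan keeps A's row:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JoinOrderSketch {
    // A "row" is a list of nullable keys; a "table" is a list of rows.
    static List<List<Integer>> inner(List<List<Integer>> l, List<List<Integer>> r,
                                     int li, int ri) {
        List<List<Integer>> out = new ArrayList<>();
        for (List<Integer> a : l)
            for (List<Integer> b : r)
                if (a.get(li) != null && a.get(li).equals(b.get(ri))) {
                    List<Integer> row = new ArrayList<>(a); row.addAll(b); out.add(row);
                }
        return out;
    }
    static List<List<Integer>> leftOuter(List<List<Integer>> l, List<List<Integer>> r,
                                         int li, int ri, int rWidth) {
        List<List<Integer>> out = new ArrayList<>();
        for (List<Integer> a : l) {
            boolean matched = false;
            for (List<Integer> b : r)
                if (a.get(li) != null && a.get(li).equals(b.get(ri))) {
                    List<Integer> row = new ArrayList<>(a); row.addAll(b); out.add(row);
                    matched = true;
                }
            if (!matched) { // NULL-pad the right side, as LEFT OUTER JOIN does
                List<Integer> row = new ArrayList<>(a);
                for (int i = 0; i < rWidth; i++) row.add(null);
                out.add(row);
            }
        }
        return out;
    }
    public static void main(String[] args) {
        List<List<Integer>> A = List.of(Arrays.asList(1));
        List<List<Integer>> B = new ArrayList<>();          // empty: no key ever matches
        List<List<Integer>> C = List.of(Arrays.asList(1));
        // Plan 1: (A LEFT OUTER JOIN B on a=b) INNER JOIN C on b=c
        List<List<Integer>> plan1 = inner(leftOuter(A, B, 0, 0, 1), C, 1, 0);
        // Plan 2 (reordered): A LEFT OUTER JOIN (B INNER JOIN C on b=c) on a=b
        List<List<Integer>> plan2 = leftOuter(A, inner(B, C, 0, 0), 0, 0, 2);
        System.out.println(plan1.size() + " " + plan2.size()); // row counts differ
    }
}
```

The two plans produce different row counts, so a merge that silently swaps the order in which -c and -d style joins execute can change query results.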