[jira] Updated: (HIVE-1896) HBase and Contrib JAR names are missing version numbers
[ https://issues.apache.org/jira/browse/HIVE-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Sichi updated HIVE-1896:
-----------------------------
    Resolution: Fixed
    Fix Version/s: 0.7.0
    Release Note: Applications which depend on these jars will need to reference the new jar names.
    Hadoop Flags: [Reviewed]
    Status: Resolved  (was: Patch Available)

Committed. Thanks Carl!

HBase and Contrib JAR names are missing version numbers
-------------------------------------------------------

Key: HIVE-1896
URL: https://issues.apache.org/jira/browse/HIVE-1896
Project: Hive
Issue Type: Bug
Components: Build Infrastructure
Affects Versions: 0.7.0
Reporter: Carl Steinbach
Assignee: Carl Steinbach
Priority: Blocker
Fix For: 0.7.0
Attachments: HIVE-1896.1.patch.txt, HIVE-1896.2.patch.txt

Also, does anyone know why the hbase and contrib JARs use underscores instead of dashes in their names? Can I change this or will it break something?

{code}
./build/dist/lib/hive-anttasks-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-cli-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-common-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-exec-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-hwi-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-jdbc-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-metastore-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-serde-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-service-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive-shims-0.7.0-SNAPSHOT.jar
./build/dist/lib/hive_contrib.jar          --
./build/dist/lib/hive_hbase-handler.jar    --
{code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
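For context, the rename this issue tracks brings the two flagged jars in line with the dash-plus-version pattern used by every other module in the listing above (e.g. hive_contrib.jar becomes hive-contrib-0.7.0-SNAPSHOT.jar). The following sketch merely illustrates that mapping; the class and method names are invented for illustration, and the actual renaming is done in Hive's Ant build files, not in Java:

```java
// Illustrative only: derive the versioned, dash-separated jar name
// (the convention used by the other Hive modules) from the old
// underscore-style name. Not part of Hive's actual build code.
public class JarNames {
    public static String versionedName(String oldName, String version) {
        // Strip the ".jar" suffix, swap underscores for dashes, append the version.
        String base = oldName.substring(0, oldName.length() - ".jar".length());
        return base.replace('_', '-') + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        System.out.println(versionedName("hive_contrib.jar", "0.7.0-SNAPSHOT"));
        System.out.println(versionedName("hive_hbase-handler.jar", "0.7.0-SNAPSHOT"));
    }
}
```

As the release note says, applications that reference the old underscore names will need to update to the new versioned names.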
Build failed in Hudson: Hive-trunk-h0.20 #555
See https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/555/

[...truncated 25293 lines...]
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] OK
    [junit] PREHOOK: query: create table testhivedrivertable (num int)
    [junit] PREHOOK: type: CREATETABLE
    [junit] POSTHOOK: query: create table testhivedrivertable (num int)
    [junit] POSTHOOK: type: CREATETABLE
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: default@testhivedrivertable
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] Hive history file=https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/service/tmp/hive_job_log_hudson_201102131107_1168142543.txt
    [junit] Hive history file=https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/service/tmp/hive_job_log_hudson_201102131107_1070614485.txt
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] OK
    [junit] PREHOOK: query: create table testhivedrivertable (key int, value string)
    [junit] PREHOOK: type: CREATETABLE
    [junit] POSTHOOK: query: create table testhivedrivertable (key int, value string)
    [junit] POSTHOOK: type: CREATETABLE
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] PREHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
    [junit] PREHOOK: type: LOAD
    [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
    [junit] Loading data to table testhivedrivertable
    [junit] POSTHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
    [junit] POSTHOOK: type: LOAD
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] PREHOOK: query: select key, value from testhivedrivertable
    [junit] PREHOOK: type: QUERY
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-13_11-07-25_137_8017462829348998262/-mr-1
    [junit] Total MapReduce jobs = 1
    [junit] Launching Job 1 out of 1
    [junit] Number of reduce tasks is set to 0 since there's no reduce operator
    [junit] Job running in-process (local Hadoop)
    [junit] 2011-02-13 11:07:27,689 null map = 100%, reduce = 0%
    [junit] Ended Job = job_local_0001
    [junit] POSTHOOK: query: select key, value from testhivedrivertable
    [junit] POSTHOOK: type: QUERY
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: file:/tmp/hudson/hive_2011-02-13_11-07-25_137_8017462829348998262/-mr-1
    [junit] OK
    [junit] PREHOOK: query: select key, value from testhivedrivertable
    [junit] PREHOOK: type: QUERY
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-13_11-07-27_850_3285606576066926987/-mr-1
    [junit] Total MapReduce jobs = 1
    [junit] Launching Job 1 out of 1
    [junit] Number of reduce tasks is set to 0 since there's no reduce operator
    [junit] Job running in-process (local Hadoop)
    [junit] 2011-02-13 11:07:30,363 null map = 100%, reduce = 0%
    [junit] Ended Job = job_local_0001
    [junit] POSTHOOK: query: select key, value from testhivedrivertable
    [junit] POSTHOOK: type: QUERY
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: file:/tmp/hudson/hive_2011-02-13_11-07-27_850_3285606576066926987/-mr-1
    [junit] OK
    [junit] PREHOOK: query: select key, value from testhivedrivertable
    [junit] PREHOOK: type: QUERY
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-13_11-07-30_505_3522795126917262640/-mr-1
    [junit] Total MapReduce jobs = 1
    [junit] Launching Job 1 out of 1
    [junit] Number of reduce tasks is set to 0 since there's no reduce operator
    [junit] Job running in-process (local Hadoop)
    [junit] 2011-02-13 11:07:33,037 null map = 100%, reduce = 0%
    [junit] Ended Job = job_local_0001
    [junit] POSTHOOK: query: select key, value from testhivedrivertable
    [junit] POSTHOOK: type: QUERY
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output:
[jira] Created: (HIVE-1986) partition pruner does not take effect for non-deterministic UDFs
partition pruner does not take effect for non-deterministic UDFs
----------------------------------------------------------------

Key: HIVE-1986
URL: https://issues.apache.org/jira/browse/HIVE-1986
Project: Hive
Issue Type: Bug
Components: Query Processor
Affects Versions: 0.6.0, 0.5.0, 0.4.1, 0.7.0
Environment: trunk src, Hive default configuration
Reporter: zhaowei
Fix For: 0.7.0

Hive UDFs can be deterministic or non-deterministic, but for non-deterministic UDFs such as rand and unix_timestamp, partition pruning (PPR) does not take effect. And for unix_timestamp with a parameter, for example unix_timestamp('2010-01-01'), I think it is deterministic.

Case:

hive -hiveconf hive.root.logger=DEBUG,console
create table kv_part(key int, value string) partitioned by (ds string);
alter table kv_part add partition (ds=2010) partition (ds=2011) partition (ds=2012);
create table kv2(key int, value string) partitioned by (ds string);
alter table kv2 add partition (ds=2013) partition (ds=2014) partition (ds=2015);
explain select * from kv_part join kv2 on (kv_part.key=kv2.key) where kv_part.ds=2011 and rand() 0.5;

rand() is non-deterministic, so kv_part.ds=2011 does not filter out the partitions ds=2010 and ds=2012:

11/02/14 12:22:32 DEBUG lazy.LazySimpleSerDe: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with: columnNames=[key, value] columnTypes=[int, string] separator=[[B@1ac9683] nullstring=\N lastColumnTakesRest=false
11/02/14 12:22:32 INFO hive.log: DDL: struct kv_part { i32 key, string value}
11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2010
11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2011
11/02/14 12:22:32 DEBUG optimizer.GenMapRedUtils: Information added for path hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2012
11/02/14 12:22:32 INFO parse.SemanticAnalyzer: Completed plan generation.

explain select * from kv_part join kv2 on (kv_part.key=kv2.key) where kv_part.ds=2011 and sin(kv2.key) 0.5;

sin() is deterministic, so PPR works OK:

11/02/14 12:25:22 DEBUG optimizer.GenMapRedUtils: Information added for path hdfs://172.25.38.253:54310/user/hive/warehouse/kv_part/ds=2011

Also, users should be able to get the determinism info for a UDF from the wiki, or we should add this info to 'describe function'.
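The behavior described above can be summed up as a single rule: a predicate may be pushed down to prune partitions only if every function it references is deterministic, so a WHERE clause containing rand() disables pruning while one containing sin() does not. The sketch below is purely conceptual, not Hive's actual PartitionPruner; the registry map is a stand-in for the determinism flag that real Hive UDFs declare via the @UDFType annotation:

```java
import java.util.Map;
import java.util.Set;

// Conceptual sketch (not Hive's PartitionPruner): pruning with a
// predicate is allowed only if every function it references is
// deterministic. Unknown functions are conservatively treated as
// non-deterministic.
public class PrunerSketch {
    // Hypothetical determinism registry for the UDFs mentioned in the bug
    // report; in Hive this comes from the UDF class's @UDFType annotation.
    static final Map<String, Boolean> DETERMINISTIC = Map.of(
            "rand", false,
            "unix_timestamp", false,
            "sin", true);

    public static boolean canPruneWith(Set<String> functionsInPredicate) {
        return functionsInPredicate.stream()
                .allMatch(f -> DETERMINISTIC.getOrDefault(f, false));
    }

    public static void main(String[] args) {
        System.out.println(canPruneWith(Set.of("sin")));   // pruning allowed
        System.out.println(canPruneWith(Set.of("rand")));  // pruning disabled
    }
}
```

Under this rule, marking unix_timestamp('2010-01-01') as deterministic (as the reporter suggests) would re-enable pruning for that particular call.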
[jira] Updated: (HIVE-1922) semantic analysis error, when using group by and order by together
[ https://issues.apache.org/jira/browse/HIVE-1922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Sichi updated HIVE-1922:
-----------------------------
    Status: Open  (was: Patch Available)

Running ant test, I got 41 failures in TestCliDriver. Also, this patch will need some new unit tests to provide coverage for the fix.

semantic analysis error, when using group by and order by together
------------------------------------------------------------------

Key: HIVE-1922
URL: https://issues.apache.org/jira/browse/HIVE-1922
Project: Hive
Issue Type: Bug
Components: Query Processor
Affects Versions: 0.7.0
Environment: Ubuntu Karmic, Hadoop 0.20.0, Hive 0.7.0
Reporter: Hongwei
Priority: Critical
Attachments: HIVE-1922.1.patch.txt
Original Estimate: 168h
Remaining Estimate: 168h

When I tried queries like 'select t.c from t group by t.c sort by t.c;', Hive reported the error 'FAILED: Error in semantic analysis: line 1:40 Invalid Table Alias or Column Reference t'. But 'select t.c from t group by t.c' and 'select t.c from t sort by t.c;' are OK. 'select t.c from t group by t.c sort by c;' is OK too.

The Hive server gives a stack trace like:

11/01/20 03:07:34 INFO parse.SemanticAnalyzer: Get metadata for subqueries
11/01/20 03:07:34 INFO parse.SemanticAnalyzer: Get metadata for destination tables
11/01/20 03:07:34 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
FAILED: Error in semantic analysis: line 1:40 Invalid Table Alias or Column Reference t
11/01/20 03:07:34 ERROR ql.Driver: FAILED: Error in semantic analysis: line 1:40 Invalid Table Alias or Column Reference t
org.apache.hadoop.hive.ql.parse.SemanticException: line 1:40 Invalid Table Alias or Column Reference t
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:6743)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genReduceSinkPlan(SemanticAnalyzer.java:4288)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:5446)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6007)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6583)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:343)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:731)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:116)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive.java:699)
    at org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:677)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
[jira] Updated: (HIVE-1817) Remove Hive dependency on unrelease commons-cli 2.0 Snapshot
[ https://issues.apache.org/jira/browse/HIVE-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-1817:
---------------------------------
    Attachment: HIVE-1817.2.patch.txt

Remove Hive dependency on unrelease commons-cli 2.0 Snapshot
------------------------------------------------------------

Key: HIVE-1817
URL: https://issues.apache.org/jira/browse/HIVE-1817
Project: Hive
Issue Type: Task
Components: Build Infrastructure, CLI
Reporter: Carl Steinbach
Assignee: Carl Steinbach
Priority: Blocker
Fix For: 0.7.0
Attachments: HIVE-1817.2.patch.txt, HIVE-1817.wip.1.patch.txt

The Hive CLI depends on commons-cli-2.0-SNAPSHOT. This branch of the commons-cli project is dead. Hive needs to use commons-cli-1.2 instead. See MAPREDUCE-767 for more information.
[jira] Updated: (HIVE-1817) Remove Hive dependency on unreleased commons-cli 2.0 Snapshot
[ https://issues.apache.org/jira/browse/HIVE-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-1817:
---------------------------------
    Summary: Remove Hive dependency on unreleased commons-cli 2.0 Snapshot  (was: Remove Hive dependency on unrelease commons-cli 2.0 Snapshot)

Remove Hive dependency on unreleased commons-cli 2.0 Snapshot
-------------------------------------------------------------

Key: HIVE-1817
URL: https://issues.apache.org/jira/browse/HIVE-1817
Project: Hive
Issue Type: Task
Components: Build Infrastructure, CLI
Reporter: Carl Steinbach
Assignee: Carl Steinbach
Priority: Blocker
Fix For: 0.7.0
Attachments: HIVE-1817.2.patch.txt, HIVE-1817.wip.1.patch.txt

The Hive CLI depends on commons-cli-2.0-SNAPSHOT. This branch of the commons-cli project is dead. Hive needs to use commons-cli-1.2 instead. See MAPREDUCE-767 for more information.
Review Request: HIVE-1817: Remove Hive dependency on unrelease commons-cli 2.0 Snapshot
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/421/
---

Review request for hive.

Summary
-------

Review for HIVE-1817.

This addresses bug HIVE-1817.
    https://issues.apache.org/jira/browse/HIVE-1817

Diffs
-----

  bin/ext/cli.sh 054d1e8
  bin/ext/help.sh 6d1e484
  bin/ext/hiveserver.sh d8f6657
  bin/ext/hwi.sh 74de9ad
  bin/ext/jar.sh d16aa4c
  bin/ext/metastore.sh a894f7f
  bin/ext/rcfilecat.sh 21d515c
  bin/ext/util/execHiveCmd.sh 61b8635
  bin/hive c0eba07
  bin/hive-config.sh f474189
  build-common.xml dac20d4
  build.properties f589196
  cli/ivy.xml 86ad1ee
  cli/src/java/org/apache/hadoop/hive/cli/OptionsProcessor.java 0346060
  eclipse-templates/.classpath e40a07a
  ivy/libraries.properties 0ede62a
  lib/commons-cli-2.0-SNAPSHOT.jar 0b1d510
  shims/build.xml 2021cfb
  shims/ivy.xml 82b6688

Diff: https://reviews.apache.org/r/421/diff

Testing
-------

Verified that Hive CLI options work. Verified that the version check logic passes against 0.20.1, but not against 0.20.0, 0.19.2, or 0.21.0. Also verified that it's possible to start hiveserver, metastore, HWI, etc.

Thanks,

Carl
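The version-check behavior exercised in the testing notes (0.20.1 accepted; 0.20.0, 0.19.2, and 0.21.0 rejected) amounts to requiring Hadoop 0.20.x with x at least 1. The actual check lives in the bin/hive shell scripts, so the Java sketch below is purely illustrative of that rule, with invented names:

```java
// Illustrative sketch of the Hadoop version constraint verified above:
// accept 0.20.x with x >= 1, reject everything else. The real check is
// implemented in the bin/hive shell scripts, not in Java.
public class HadoopVersionCheck {
    public static boolean isSupported(String version) {
        String[] parts = version.split("\\.");
        if (parts.length < 3) {
            return false; // expect major.minor.patch
        }
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        int patch = Integer.parseInt(parts[2]);
        return major == 0 && minor == 20 && patch >= 1;
    }

    public static void main(String[] args) {
        for (String v : new String[] {"0.20.1", "0.20.0", "0.19.2", "0.21.0"}) {
            System.out.println(v + " -> " + isSupported(v));
        }
    }
}
```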
[jira] Updated: (HIVE-1817) Remove Hive dependency on unreleased commons-cli 2.0 Snapshot
[ https://issues.apache.org/jira/browse/HIVE-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-1817:
---------------------------------
    Status: Patch Available  (was: Open)

Review request: https://reviews.apache.org/r/421

Summary of changes:
* Modified OptionsProcessor to use commons-cli 1.2 instead of 2.0-SNAPSHOT
* Changed default hadoop.version from 0.20.0 to 0.20.1 (note that 0.20.2 is not available on the FB CDN)
* Updated bin/hive scripts to enforce dependency on Hadoop 0.20.x (x >= 1)
* Updated ivy configuration to pull down commons-cli 1.2
* Added a bunch of missing ASF headers

Note to committer: Please run 'svn rm lib/commons-cli-2.0-SNAPSHOT.jar' before committing.

Remove Hive dependency on unreleased commons-cli 2.0 Snapshot
-------------------------------------------------------------

Key: HIVE-1817
URL: https://issues.apache.org/jira/browse/HIVE-1817
Project: Hive
Issue Type: Task
Components: Build Infrastructure, CLI
Reporter: Carl Steinbach
Assignee: Carl Steinbach
Priority: Blocker
Fix For: 0.7.0
Attachments: HIVE-1817.2.patch.txt, HIVE-1817.wip.1.patch.txt

The Hive CLI depends on commons-cli-2.0-SNAPSHOT. This branch of the commons-cli project is dead. Hive needs to use commons-cli-1.2 instead. See MAPREDUCE-767 for more information.