Doubt in INSERT query in Hive?
Hello,
Whenever we want to insert into a table we use: INSERT OVERWRITE TABLE TBL_NAME (SELECT ). Because of this, the table gets overwritten every time. I don't want to overwrite the table; I want to append to it every time. I thought about LOAD DATA, but writing the file out first may take more time and I don't think it will be efficient. Does Hive support INSERT INTO TABLE TAB_NAME? (I am using hive-0.7.1.) Is there any patch for it? (I don't know how to apply a patch, though.) Please suggest something as soon as possible.
Thanks.
--
Regards,
Bhavesh Shah
Re: Doubt in INSERT query in Hive?
Hi Bhavesh,
INSERT INTO is supported in Hive 0.8; an upgrade would get things rolling. Is LOAD DATA inefficient? What performance overhead were you facing there?
Regards,
Bejoy K S
From handheld, please excuse typos.
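For readers hitting the same question, a minimal HiveQL sketch of the two statements being contrasted (table names are placeholders, not from the thread): INSERT INTO, available from Hive 0.8 as noted above, appends; INSERT OVERWRITE replaces.

```sql
-- Appends new rows to the table's existing contents (Hive 0.8 and later):
INSERT INTO TABLE target_tbl
SELECT * FROM staging_tbl;

-- Replaces the table's existing contents on every run:
INSERT OVERWRITE TABLE target_tbl
SELECT * FROM staging_tbl;
```

On Hive 0.7.x, where INSERT INTO is unavailable, the usual workarounds were LOAD DATA (without OVERWRITE, it adds files alongside existing ones) or inserting into a new partition per batch.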
Hive-trunk-h0.21 - Build # 1259 - Still Failing
Changes for Build #1218
Changes for Build #1219
[hashutosh] HIVE-2665 : Support for metastore service specific HADOOP_OPTS environment setting (thw via hashutosh)
Changes for Build #1220
[namit] HIVE-2727 add a testcase for partitioned view on union and base tables have index (He Yongqiang via namit)
Changes for Build #1221
[hashutosh] HIVE-2746 : Metastore client doesn't log properly in case of connection failure to server (hashutosh)
[cws] HIVE-2698 [jira] Enable Hadoop-1.0.0 in Hive (Enis Söztutar via Carl Steinbach) Summary: third version of the patch. Hadoop-1.0.0 was recently released and is, AFAIK, API compatible with the 0.20S release. Test Plan: EMPTY Reviewers: JIRA, cwsteinbach Reviewed By: cwsteinbach CC: cwsteinbach, enis Differential Revision: https://reviews.facebook.net/D1389
Changes for Build #1222
[namit] HIVE-2750 Hive multi group by single reducer optimization causes invalid column reference error (Kevin Wilfong via namit)
Changes for Build #1223
Changes for Build #1224
[cws] HIVE-2734 [jira] Fix some nondeterministic test output (Zhenxiao Luo via Carl Steinbach) Summary: HIVE-2734: Fix some nondeterministic test output. Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit input11_limit input1_limit input_lazyserde join18_multi_distinct join_1to1 join_casesensitive join_filters join_nulls merge3 rcfile_columnar rcfile_lazydecompress rcfile_union sample10 udf_sentences union24 columnarserde_create_shortcut combine1 global_limit Test Plan: EMPTY Reviewers: JIRA, cwsteinbach Reviewed By: cwsteinbach CC: zhenxiao, cwsteinbach Differential Revision: https://reviews.facebook.net/D1449
[namit] HIVE-2754 NPE in union with lateral view (Yongqiang He via namit)
Changes for Build #1225
Changes for Build #1226
Changes for Build #1227
[namit] HIVE-2755 union followed by union_subq does not work if the subquery union has reducers (He Yongqiang via namit)
Changes for Build #1228
Changes for Build #1229
[hashutosh] HIVE-2735: PlanUtils.configureTableJobPropertiesForStorageHandler() is not called for partitioned table (sushanth via ashutosh)
Changes for Build #1230
[cws] HIVE-2760 [jira] TestCliDriver should log elapsed time Summary: HIVE-2760. TestCliDriver should log elapsed time Test Plan: EMPTY Reviewers: JIRA, ashutoshc Reviewed By: ashutoshc CC: ashutoshc, cwsteinbach Differential Revision: https://reviews.facebook.net/D1503
[cws] HIVE-2662 [jira] Add Ant configuration property for dumping classpath of tests Summary: HIVE-2662. Add Ant configuration property for dumping classpath of tests Test Plan: EMPTY Reviewers: JIRA, jsichi, ashutoshc Reviewed By: ashutoshc CC: ashutoshc Differential Revision: https://reviews.facebook.net/D903
Changes for Build #1231
[hashutosh] HIVE-2645: Hive Web Server startup message logs incorrect path it is searching for WAR (Edward Capriolo via Ashutosh Chauhan)
Changes for Build #1232
Changes for Build #1233
[sdong] HIVE-2249 When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double (Zhiqiu Kong via Siying Dong)
Changes for Build #1234
Changes for Build #1235
[heyongqiang] HIVE-2765 hbase handler uses ZooKeeperConnectionException which is not compatible with HBase versions other than 0.89 (Pei Yue via He Yongqiang)
Changes for Build #1236
Changes for Build #1237
Changes for Build #1238
[heyongqiang] HIVE-2772 [jira] make union31.q deterministic (Namit Jain via Yongqiang He) Summary: https://issues.apache.org/jira/browse/HIVE-2772 HIVE-2772 Test Plan: EMPTY Reviewers: JIRA, ashutoshc Reviewed By: ashutoshc CC: ashutoshc Differential Revision: https://reviews.facebook.net/D1557
[kevinwilfong] HIVE-2758 Metastore is caching too aggressively (Kevin Wilfong reviewed by Carl Steinbach)
Changes for Build #1239
Changes for Build #1240
[namit] HIVE-2762 Alter Table Partition Concatenate Fails On Certain Characters (Kevin Wilfong via namit)
Changes for Build #1241
[namit] HIVE-2756 Views should be added to the inputs of queries. (Yongqiang He via namit)
Changes for Build #1242
Changes for Build #1243
Changes for Build #1244
Changes for Build #1245
Changes for Build #1246
Changes for Build #1247
[namit] HIVE-2779 Improve Hooks run in Driver (Kevin Wilfong via namit)
Changes for Build #1248
Changes for Build #1249
[namit] HIVE-2759 Change global_limit.q into linux format file (Zhenxiao Luo via namit)
Changes for Build #1250
[namit] HIVE-2749 CONV returns incorrect results sometimes (Jonathan Chang via namit)
Changes for Build #1251
Changes for Build #1252
[namit] HIVE-2795 View partitions do not have a storage descriptor (Kevin Wilfong via namit)
Changes for Build #1253
Changes for Build #1254
[namit] HIVE-2612 support hive table/partitions exists in more than one region (Kevin
[jira] [Commented] (HIVE-2793) Disable loadpart_err.q on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208494#comment-13208494 ]

Hudson commented on HIVE-2793:
--

Integrated in Hive-trunk-h0.21 #1259 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1259/])
HIVE-2793 [jira] Disable loadpart_err.q on 0.23 Summary: HIVE-2793. Add 0.23 to list of excluded Hadoop versions for loadpart_err.q Test Plan: EMPTY Reviewers: JIRA, jsichi, ashutoshc Reviewed By: ashutoshc CC: ashutoshc Differential Revision: https://reviews.facebook.net/D1665 (Revision 1244311)

Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1244311
Files :
* /hive/trunk/ql/src/test/queries/clientpositive/loadpart_err.q

Disable loadpart_err.q on 0.23
--
Key: HIVE-2793
URL: https://issues.apache.org/jira/browse/HIVE-2793
Project: Hive
Issue Type: Bug
Components: Testing Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
Fix For: 0.9.0
Attachments: HIVE-2793.D1665.1.patch, HIVE-2793.D1665.1.patch

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2782) New BINARY type produces unexpected results with supported UDFS when using MapReduce2
[ https://issues.apache.org/jira/browse/HIVE-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208493#comment-13208493 ]

Hudson commented on HIVE-2782:
--

Integrated in Hive-trunk-h0.21 #1259 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1259/])
HIVE-2782 [jira] New BINARY type produces unexpected results with supported UDFS when using MapReduce2 Summary: HIVE-2782. Make ba_table_udfs.q deterministic Test Plan: EMPTY Reviewers: JIRA, ashutoshc Reviewed By: ashutoshc CC: ashutoshc Differential Revision: https://reviews.facebook.net/D1653 (Revision 1244314)

Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1244314
Files :
* /hive/trunk/ql/src/test/queries/clientpositive/ba_table_udfs.q
* /hive/trunk/ql/src/test/results/clientpositive/ba_table_udfs.q.out

New BINARY type produces unexpected results with supported UDFS when using MapReduce2
--
Key: HIVE-2782
URL: https://issues.apache.org/jira/browse/HIVE-2782
Project: Hive
Issue Type: Bug
Reporter: Zhenxiao Luo
Assignee: Carl Steinbach
Fix For: 0.9.0
Attachments: HIVE-2782.D1653.1.patch, HIVE-2782.D1653.1.patch

When using MapReduce2 for Hive, ba_table_udfs is failing with unexpected output:

[junit] Begin query: ba_table_udfs.q
[junit] 12/01/23 13:32:28 WARN conf.Configuration: mapred.system.dir is deprecated. Instead, use mapreduce.jobtracker.system.dir
[junit] 12/01/23 13:32:28 WARN conf.Configuration: mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I LOCATION ' -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I LOCK_TIME: -I grantTime -I [.][.][.] [0-9]* more -I job_[0-9]*_[0-9]* -I USING 'java -cp /home/cloudera/Code/hive/build/ql/test/logs/clientpositive/ba_table_udfs.q.out /home/cloudera/Code/hive/ql/src/test/results/clientpositive/ba_table_udfs.q.out
[junit] 20,26c20,26
[junit] 2 10val_101
[junit] 3 164val_164 1
[junit] 3 150val_150 1
[junit] 2 18val_181
[junit] 3 177val_177 1
[junit] 2 12val_121
[junit] 2 11val_111
[junit] —
[junit] 3 120val_120 1
[junit] 3 192val_192 1
[junit] 3 119val_119 1
[junit] 3 187val_187 1
[junit] 3 176val_176 1
[junit] 3 199val_199 1
[junit] 3 118val_118 1
[junit] Exception: Client execution results failed with error code = 1
[junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.
[junit] junit.framework.AssertionFailedError: Client execution results failed with error code = 1
[junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.
[junit] at junit.framework.Assert.fail(Assert.java:50)
[junit] at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_udfs(TestCliDriver.java:129)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit] at java.lang.reflect.Method.invoke(Method.java:616)
[junit] at junit.framework.TestCase.runTest(TestCase.java:168)
[junit] at junit.framework.TestCase.runBare(TestCase.java:134)
[junit] at junit.framework.TestResult$1.protect(TestResult.java:110)
[junit] at junit.framework.TestResult.runProtected(TestResult.java:128)
[junit] at junit.framework.TestResult.run(TestResult.java:113)
[junit] at junit.framework.TestCase.run(TestCase.java:124)
[junit] at junit.framework.TestSuite.runTest(TestSuite.java:243)
[junit] at junit.framework.TestSuite.run(TestSuite.java:238)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
[junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.
[junit] Cleaning up TestCliDriver
[junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 10.751 sec
[junit] Test org.apache.hadoop.hive.cli.TestCliDriver FAILED
[for] /home/cloudera/Code/hive/ql/build.xml: The following error occurred while executing this line:
[for] /home/cloudera/Code/hive/build.xml:328: The
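The underlying flakiness here (also the subject of HIVE-2734 above) is that a query without ORDER BY has no guaranteed row order, so golden-file diffs can fail depending on the execution engine. A minimal HiveQL illustration, using the standard `src` test table from the Hive test suite:

```sql
-- Row order depends on how splits and reducers happen to run, so two
-- engines (e.g. MapReduce1 vs MapReduce2) may emit differently ordered rows:
SELECT key, value FROM src LIMIT 10;

-- Adding ORDER BY makes the recorded .q.out golden file deterministic:
SELECT key, value FROM src ORDER BY key, value LIMIT 10;
```

This is why the fix for this failure is simply "Make ba_table_udfs.q deterministic" rather than a change to the BINARY type itself.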
Hive-trunk-h0.21 - Build # 1260 - Still Failing
Re: Running hive in eclipse
I have the same problem. Can anyone help us?

On February 15, 2012 at 13:52, Aaron Sun aaron.su...@gmail.com wrote:

Hi Team, I am trying to run and debug Hive in Eclipse. I checked out release-0.8.0 (revision 1215012) from the SVN repository and built the project with the thrift and fb303 libraries installed correctly. The build process reported "Build Successful". Then I tried to launch the CLI by running CliDriver.java as a Java Application, and it returned the following errors:

Exception in thread main java.lang.RuntimeException: Failed to load Hive builtin functions
at org.apache.hadoop.hive.ql.session.SessionState.init(SessionState.java:190)
at org.apache.hadoop.hive.cli.CliSessionState.init(CliSessionState.java:81)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:576)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.init(ZipFile.java:131)
at java.util.jar.JarFile.init(JarFile.java:150)
at java.util.jar.JarFile.init(JarFile.java:87)
at sun.net.www.protocol.jar.URLJarFile.init(URLJarFile.java:90)
at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:66)
at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:71)
at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
at java.net.URL.openStream(URL.java:1029)
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerFunctionsFromPluginJar(FunctionRegistry.java:1194)
at org.apache.hadoop.hive.ql.session.SessionState.init(SessionState.java:187)
... 3 more

I looked over the build.xml under the ./builtins directory and noticed that the compile and jar targets are both stubbed out, so no jar is generated for builtins:

<target name="compile" depends="init, setup">
  <echo message="Project: ${ant.project.name}"/>
  <!-- defer compilation until package phase -->
</target>
<target name="jar" depends="init">
  <echo message="Project: ${ant.project.name}"/>
  <!-- defer compilation until package phase -->
</target>

I then manually changed the compile target in build.xml as follows and rebuilt the project:

<target name="compile" depends="init, setup">
  <echo message="Project: ${ant.project.name}"/>
  <javac encoding="${build.encoding}" srcdir="${src.dir}" includes="**/*.java" destdir="${build.classes}" debug="${javac.debug}" deprecation="${javac.deprecation}" includeantruntime="false">
    <compilerarg line="${javac.args} ${javac.args.warnings}"/>
    <classpath refid="classpath"/>
  </javac>
</target>

Now hive-builtins-0.8.0-SNAPSHOT.jar is under the build/builtins directory. However, I am still getting the same "Failed to load Hive builtin functions" error. Could someone kindly let me know what the problem is and how I should run the CLI correctly in Eclipse?

Thanks
Aaron

--
Ing. Alexis de la Cruz Toledo.
Av. Instituto Politécnico Nacional No. 2508, Col. San Pedro Zacatenco. México, D.F, 07360
CINVESTAV, DF.
Hive-0.8.1-SNAPSHOT-h0.21 - Build # 195 - Fixed
Changes for Build #194 Changes for Build #195 All tests passed The Apache Jenkins build system has built Hive-0.8.1-SNAPSHOT-h0.21 (build #195) Status: Fixed Check console output at https://builds.apache.org/job/Hive-0.8.1-SNAPSHOT-h0.21/195/ to view the results.
[jira] [Commented] (HIVE-1054) CHANGE COLUMN does not support changing partition column types.
[ https://issues.apache.org/jira/browse/HIVE-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208940#comment-13208940 ]

Ashutosh Chauhan commented on HIVE-1054:
I think the problem there is: what will happen to existing partitions? One has to come up with a convention for how adding/dropping a column maps to the directory structure on HDFS, and then possibly move around or rename partition dirs to match that convention.

CHANGE COLUMN does not support changing partition column types.
--
Key: HIVE-1054
URL: https://issues.apache.org/jira/browse/HIVE-1054
Project: Hive
Issue Type: Bug
Reporter: He Yongqiang
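For context on why partition columns are special: Hive materializes each partition as a key=value directory under the table's warehouse location, so a partition column's name and string-formatted values are baked into HDFS paths. A minimal sketch (table and column names are illustrative, not from the issue):

```sql
CREATE TABLE logs (line STRING) PARTITIONED BY (ds STRING);
ALTER TABLE logs ADD PARTITION (ds='2012-02-15');
-- The partition's data lands under a directory like:
--   <warehouse-dir>/logs/ds=2012-02-15/
-- Changing the type (or name) of ds would have to remain consistent with
-- every such existing directory name, which is the migration problem
-- raised in the comment above.
```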
[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region
[ https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208945#comment-13208945 ]

Ashutosh Chauhan commented on HIVE-2799:
My understanding of thrift is limited, but AFAIK you can add new params to an existing thrift API and it will still be backward compatible. Is that not the case? Or is there some other reason you want to add a whole new set of parallel APIs?

change the following thrift apis to add a region
--
Key: HIVE-2799
URL: https://issues.apache.org/jira/browse/HIVE-2799
Project: Hive
Issue Type: New Feature
Components: Metastore, Thrift API
Reporter: Namit Jain
Assignee: Kevin Wilfong

list<string> get_tables(1: string db_name, 2: string pattern) throws (1: MetaException o1)
list<string> get_all_tables(1: string db_name) throws (1: MetaException o1)
Table get_table(1:string dbname, 2:string tbl_name) throws (1:MetaException o1, 2:NoSuchObjectException o2)
list<Table> get_table_objects_by_name(1:string dbname, 2:list<string> tbl_names) throws (1:MetaException o1, 2:InvalidOperationException o2, 3:UnknownDBException o3)
list<string> get_table_names_by_filter(1:string dbname, 2:string filter, 3:i16 max_tables=-1) throws (1:MetaException o1, 2:InvalidOperationException o2, 3:UnknownDBException o3)
Partition add_partition(1:Partition new_part) throws(1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
i32 add_partitions(1:list<Partition> new_parts) throws(1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
Partition append_partition(1:string db_name, 2:string tbl_name, 3:list<string> part_vals) throws (1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
Partition append_partition_by_name(1:string db_name, 2:string tbl_name, 3:string part_name) throws (1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
bool drop_partition(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:bool deleteData) throws(1:NoSuchObjectException o1, 2:MetaException o2)
bool drop_partition_by_name(1:string db_name, 2:string tbl_name, 3:string part_name, 4:bool deleteData) throws(1:NoSuchObjectException o1, 2:MetaException o2)
Partition get_partition(1:string db_name, 2:string tbl_name, 3:list<string> part_vals) throws(1:MetaException o1, 2:NoSuchObjectException o2)
Partition get_partition_with_auth(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4: string user_name, 5: list<string> group_names) throws(1:MetaException o1, 2:NoSuchObjectException o2)
Partition get_partition_by_name(1:string db_name 2:string tbl_name, 3:string part_name) throws(1:MetaException o1, 2:NoSuchObjectException o2)
list<Partition> get_partitions(1:string db_name, 2:string tbl_name, 3:i16 max_parts=-1) throws(1:NoSuchObjectException o1, 2:MetaException o2)
list<Partition> get_partitions_with_auth(1:string db_name, 2:string tbl_name, 3:i16 max_parts=-1, 4: string user_name, 5: list<string> group_names) throws(1:NoSuchObjectException o1, 2:MetaException o2)
list<string> get_partition_names(1:string db_name, 2:string tbl_name, 3:i16 max_parts=-1) throws(1:MetaException o2)
list<Partition> get_partitions_ps(1:string db_name 2:string tbl_name 3:list<string> part_vals, 4:i16 max_parts=-1) throws(1:MetaException o1, 2:NoSuchObjectException o2)
list<Partition> get_partitions_ps_with_auth(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:i16 max_parts=-1, 5: string user_name, 6: list<string> group_names) throws(1:NoSuchObjectException o1, 2:MetaException o2)
list<string> get_partition_names_ps(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:i16 max_parts=-1) throws(1:MetaException o1, 2:NoSuchObjectException o2)
list<Partition> get_partitions_by_filter(1:string db_name 2:string tbl_name 3:string filter, 4:i16 max_parts=-1) throws(1:MetaException o1, 2:NoSuchObjectException o2)
list<Partition> get_partitions_by_names(1:string db_name 2:string tbl_name 3:list<string> names) throws(1:MetaException o1, 2:NoSuchObjectException o2)
bool drop_index_by_name(1:string db_name, 2:string tbl_name, 3:string index_name, 4:bool deleteData) throws(1:NoSuchObjectException o1, 2:MetaException o2)
Index
[jira] [Commented] (HIVE-2801) When join key is null, random distribute this tuple
[ https://issues.apache.org/jira/browse/HIVE-2801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208953#comment-13208953 ]

Ashutosh Chauhan commented on HIVE-2801:
I didn't get the context. Can you expand a bit more? Better still, you could add a testcase which illustrates the fix for the problem.

When join key is null, random distribute this tuple
--
Key: HIVE-2801
URL: https://issues.apache.org/jira/browse/HIVE-2801
Project: Hive
Issue Type: Improvement
Reporter: binlijin
Attachments: HIVE-2801.patch
[jira] [Commented] (HIVE-2612) support hive table/partitions exists in more than one region
[ https://issues.apache.org/jira/browse/HIVE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208958#comment-13208958 ]

Ashutosh Chauhan commented on HIVE-2612:
@Kevin, you named the scripts upgrade-0.9.0-to-0.10.0.mysql.sql, upgrade-0.9.0-to-0.10.0.derby.sql, hive-schema-0.10.0.derby.sql, and hive-schema-0.10.0.mysql.sql, but we have not released 0.9 yet. These should be named upgrade-0.8.0-to-0.9.0.mysql.sql, upgrade-0.8.0-to-0.9.0.derby.sql, hive-schema-0.9.0.derby.sql, and hive-schema-0.9.0.mysql.sql, respectively.

support hive table/partitions exists in more than one region
--
Key: HIVE-2612
URL: https://issues.apache.org/jira/browse/HIVE-2612
Project: Hive
Issue Type: New Feature
Components: Metastore
Reporter: He Yongqiang
Assignee: Kevin Wilfong
Fix For: 0.9.0
Attachments: HIVE-2612.1.patch, HIVE-2612.2.patch.txt, HIVE-2612.3.patch.txt, HIVE-2612.4.patch.txt, HIVE-2612.6.patch.txt, HIVE-2612.7.patch.txt, HIVE-2612.8.patch.txt, HIVE-2612.D1569.1.patch, HIVE-2612.D1569.2.patch, HIVE-2612.D1569.3.patch, HIVE-2612.D1569.4.patch, HIVE-2612.D1569.5.patch, HIVE-2612.D1569.6.patch, HIVE-2612.D1569.7.patch, HIVE-2612.D1707.1.patch, hive.2612.5.patch

1) add region object into hive metastore
2) each partition/table has a primary region and a list of living regions, and also data location in each region
[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region
[ https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208972#comment-13208972 ]

Kevin Wilfong commented on HIVE-2799:
I was under the impression that that's true in the sense that, if you are using an old client which thinks a method takes 1 parameter and the server is running new code which thinks that method takes 2 parameters, it will still work. However, once you upgrade your client to the new code, you will always have to provide 2 parameters, at least if the client is in Java; I'm not sure whether this applies to all languages Thrift supports. I wanted to make sure that this would not break code that uses the current Thrift APIs, even after the clients are upgraded.
[jira] [Commented] (HIVE-2612) support hive table/partitions exists in more than one region
[ https://issues.apache.org/jira/browse/HIVE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208978#comment-13208978 ] Kevin Wilfong commented on HIVE-2612:

@Ashutosh I'm sorry about that; there already were scripts with the name upgrade-0.8.0-to-0.9.0.mysql.sql. I didn't realize those were Hive version numbers, I thought this was a metastore versioning system. I can move the sql commands in those files into the 0.8.0-to-0.9.0 scripts and rename the schema-0.10.0 scripts to schema-0.9.0.

support hive table/partitions exists in more than one region
Key: HIVE-2612
URL: https://issues.apache.org/jira/browse/HIVE-2612
Project: Hive
Issue Type: New Feature
Components: Metastore
Reporter: He Yongqiang
Assignee: Kevin Wilfong
Fix For: 0.9.0
Attachments: HIVE-2612.1.patch, HIVE-2612.2.patch.txt, HIVE-2612.3.patch.txt, HIVE-2612.4.patch.txt, HIVE-2612.6.patch.txt, HIVE-2612.7.patch.txt, HIVE-2612.8.patch.txt, HIVE-2612.D1569.1.patch, HIVE-2612.D1569.2.patch, HIVE-2612.D1569.3.patch, HIVE-2612.D1569.4.patch, HIVE-2612.D1569.5.patch, HIVE-2612.D1569.6.patch, HIVE-2612.D1569.7.patch, HIVE-2612.D1707.1.patch, hive.2612.5.patch

1) add region object into hive metastore
2) each partition/table has a primary region and a list of living regions, and also data location in each region

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HIVE-2805) Move metastore upgrade scripts labeled 0.10.0 into scripts labeled 0.9.0
Move metastore upgrade scripts labeled 0.10.0 into scripts labeled 0.9.0
Key: HIVE-2805
URL: https://issues.apache.org/jira/browse/HIVE-2805
Project: Hive
Issue Type: Task
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong

Move contents of upgrade-0.9.0-to-0.10.0.mysql.sql, upgrade-0.9.0-to-0.10.0.derby.sql into upgrade-0.8.0-to-0.9.0.mysql.sql, upgrade-0.8.0-to-0.9.0.derby.sql
Rename hive-schema-0.10.0.derby.sql, hive-schema-0.10.0.mysql.sql to hive-schema-0.9.0.derby.sql, hive-schema-0.9.0.mysql.sql
[jira] [Commented] (HIVE-2805) Move metastore upgrade scripts labeled 0.10.0 into scripts labeled 0.9.0
[ https://issues.apache.org/jira/browse/HIVE-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208980#comment-13208980 ] Kevin Wilfong commented on HIVE-2805:

See https://issues.apache.org/jira/browse/HIVE-2612
[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region
[ https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208988#comment-13208988 ] Ashutosh Chauhan commented on HIVE-2799:

In case client is upgraded but server is not, my impression is that the extra param passed by the client is automatically dropped by rpc before making a call on the server side. So, that will still work.

change the following thrift apis to add a region
Key: HIVE-2799
URL: https://issues.apache.org/jira/browse/HIVE-2799
Project: Hive
Issue Type: New Feature
Components: Metastore, Thrift API
Reporter: Namit Jain
Assignee: Kevin Wilfong
[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region
[ https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208997#comment-13208997 ] Kevin Wilfong commented on HIVE-2799:

@Ashutosh I agree, that will work as well. I meant I wanted to make sure I don't force people who don't want to use multi-region to have to add a region to their Thrift API calls, once both the server and client are upgraded.

change the following thrift apis to add a region
Key: HIVE-2799
URL: https://issues.apache.org/jira/browse/HIVE-2799
Project: Hive
Issue Type: New Feature
Components: Metastore, Thrift API
Reporter: Namit Jain
Assignee: Kevin Wilfong
[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region
[ https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209013#comment-13209013 ] Ashutosh Chauhan commented on HIVE-2799:

For that we can modify HiveMetaStoreClient.java (the most widely used client) to wrap these methods in ones which don't take a region as an argument (which is the current api), and then pass null for the new param through the rpc client. Folks who are using real rpc clients will see their clients continue to work without recompilation, and if they are indeed recompiling they can pass a null in there. At this point, I think we should reconsider whether we want to add a new set of apis or modify the existing ones. To me, the latter seems a better choice to avoid code duplication and confusion.

change the following thrift apis to add a region
Key: HIVE-2799
URL: https://issues.apache.org/jira/browse/HIVE-2799
Project: Hive
Issue Type: New Feature
Components: Metastore, Thrift API
Reporter: Namit Jain
Assignee: Kevin Wilfong
[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region
[ https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209016#comment-13209016 ] Kevin Wilfong commented on HIVE-2799:

I'm definitely in favor of that approach. The primary reason I intended to add duplicate Thrift API calls was to keep open source users happy, but if people are content with simply wrapping them in HiveMetaStoreClient, I am more than happy to oblige.

change the following thrift apis to add a region
Key: HIVE-2799
URL: https://issues.apache.org/jira/browse/HIVE-2799
Project: Hive
Issue Type: New Feature
Components: Metastore, Thrift API
Reporter: Namit Jain
Assignee: Kevin Wilfong
[jira] [Commented] (HIVE-2797) Make the IP address of a Thrift client available to HMSHandler.
[ https://issues.apache.org/jira/browse/HIVE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209026#comment-13209026 ] Phabricator commented on HIVE-2797:

ashutoshc has commented on the revision "HIVE-2797 [jira] Make the IP address of a Thrift client available to HMSHandler.":

Looks good. Mind adding a test for it?

REVISION DETAIL https://reviews.facebook.net/D1701

Make the IP address of a Thrift client available to HMSHandler.
Key: HIVE-2797
URL: https://issues.apache.org/jira/browse/HIVE-2797
Project: Hive
Issue Type: Improvement
Components: Metastore
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Attachments: HIVE-2797.D1701.1.patch

Currently, in unsecured mode, metastore Thrift calls are, from the HMSHandler's point of view, anonymous. If we expose the IP address of the Thrift client to the HMSHandler from the Processor, this will help give some context, in particular for audit logging, of where the call is coming from.
[jira] [Assigned] (HIVE-2796) Support auto completion for hive configs in CliDriver
[ https://issues.apache.org/jira/browse/HIVE-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reassigned HIVE-2796:
Assignee: Navis

If it's ready for review, feel free to change the status to Patch Available.

Support auto completion for hive configs in CliDriver
Key: HIVE-2796
URL: https://issues.apache.org/jira/browse/HIVE-2796
Project: Hive
Issue Type: Improvement
Reporter: Navis
Assignee: Navis
Priority: Trivial
Attachments: HIVE-2796.D1689.1.patch, HIVE-2796.D1689.2.patch

It's very cumbersome to memorize hive conf vars.
[jira] [Updated] (HIVE-2796) Support auto completion for hive configs in CliDriver
[ https://issues.apache.org/jira/browse/HIVE-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-2796:
Fix Version/s: 0.9.0
Affects Version/s: 0.9.0
Status: Patch Available (was: Open)
Potential change to metastore Thrift APIs
This is regarding https://issues.apache.org/jira/browse/HIVE-2799

I am working on adding a new string parameter to most metastore RPCs which create, get, or drop a table, partition, or index. This can be done either by:

1. adding the parameter to the existing RPCs and modifying HiveMetaStoreClient.java to wrap them in methods with the old signatures, or
2. adding new, nearly duplicate RPCs which take the parameter.

Option 1 will break any existing Thrift clients other than HiveMetaStoreClient.java once the user updates ThriftHiveMetastore.java: calls to the modified RPCs will need to be updated to pass some value for the parameter even if the user doesn't plan to use the new feature (in which case they can pass null). However, Option 1 will reduce code duplication and keep the interface simpler.

I wanted to get a sense of how many people are using Thrift clients other than HiveMetaStoreClient.java, and which option is generally preferred. I am, of course, also open to other ideas.

- Kevin Wilfong
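The wrapping half of Option 1 can be sketched roughly as follows. This is an illustrative stand-in, not Hive's actual HiveMetaStoreClient code: RegionAwareStore and these getTable signatures are hypothetical, standing in for a regenerated Thrift stub whose RPCs gained the new string parameter.

```java
// Sketch of Option 1: keep the old client-facing signature by wrapping the
// new parameter-taking RPC behind an overload that passes null.
// All names here are hypothetical, not Hive's real metastore API.
public class MetaStoreClientSketch {

    // Stand-in for the regenerated Thrift stub whose RPCs now take a region.
    interface RegionAwareStore {
        String getTable(String dbName, String tblName, String region);
    }

    private final RegionAwareStore stub;

    MetaStoreClientSketch(RegionAwareStore stub) {
        this.stub = stub;
    }

    // Old two-argument API kept intact: existing callers compile unchanged.
    public String getTable(String dbName, String tblName) {
        // null region = "I don't use multi-region", the default behavior.
        return getTable(dbName, tblName, null);
    }

    // New API for callers that do want to name a region.
    public String getTable(String dbName, String tblName, String region) {
        return stub.getTable(dbName, tblName, region);
    }

    public static void main(String[] args) {
        MetaStoreClientSketch c = new MetaStoreClientSketch(
            (db, tbl, region) ->
                db + "." + tbl + "@" + (region == null ? "default" : region));
        System.out.println(c.getTable("db1", "t1"));         // old call site
        System.out.println(c.getTable("db1", "t1", "west")); // new call site
    }
}
```

The point of the sketch is that only the generated Thrift layer changes shape; callers of the wrapper class keep their existing call sites, which is what makes Option 1 tolerable for users of HiveMetaStoreClient while still breaking raw Thrift clients.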
[jira] [Updated] (HIVE-2805) Move metastore upgrade scripts labeled 0.10.0 into scripts labeled 0.9.0
[ https://issues.apache.org/jira/browse/HIVE-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2805:
Attachment: HIVE-2805.D1743.1.patch

kevinwilfong requested code review of "HIVE-2805 [jira] Move metastore upgrade scripts labeled 0.10.0 into scripts labeled 0.9.0".

Reviewers: JIRA

https://issues.apache.org/jira/browse/HIVE-2805

Changed all metastore upgrade script names from 010 to 009, replaced the 0.9.0 schema scripts with the contents of the 0.10.0 schema scripts and deleted the 0.10.0 schema scripts, and modified the 0.8.0-to-0.9.0 upgrade scripts to call the new scripts and deleted the 0.9.0-to-0.10.0 scripts.

Move contents of upgrade-0.9.0-to-0.10.0.mysql.sql, upgrade-0.9.0-to-0.10.0.derby.sql into upgrade-0.8.0-to-0.9.0.mysql.sql, upgrade-0.8.0-to-0.9.0.derby.sql
Rename hive-schema-0.10.0.derby.sql, hive-schema-0.10.0.mysql.sql to hive-schema-0.9.0.derby.sql, hive-schema-0.9.0.mysql.sql

TEST PLAN EMPTY

REVISION DETAIL https://reviews.facebook.net/D1743

AFFECTED FILES
metastore/scripts/upgrade/derby/010-HIVE-2612.derby.sql
metastore/scripts/upgrade/derby/upgrade-0.9.0-to-0.10.0.derby.sql
metastore/scripts/upgrade/derby/009-HIVE-2612.derby.sql
metastore/scripts/upgrade/derby/hive-schema-0.9.0.derby.sql
metastore/scripts/upgrade/derby/hive-schema-0.10.0.derby.sql
metastore/scripts/upgrade/derby/upgrade-0.8.0-to-0.9.0.derby.sql
metastore/scripts/upgrade/mysql/010-HIVE-2612.mysql.sql
metastore/scripts/upgrade/mysql/upgrade-0.9.0-to-0.10.0.mysql.sql
metastore/scripts/upgrade/mysql/009-HIVE-2612.mysql.sql
metastore/scripts/upgrade/mysql/hive-schema-0.9.0.mysql.sql
metastore/scripts/upgrade/mysql/hive-schema-0.10.0.mysql.sql
metastore/scripts/upgrade/mysql/upgrade-0.8.0-to-0.9.0.mysql.sql
metastore/scripts/upgrade/postgres/010-HIVE-2612.postgres.sql
metastore/scripts/upgrade/postgres/009-HIVE-2612.postgres.sql

MANAGE HERALD DIFFERENTIAL RULES https://reviews.facebook.net/herald/view/differential/
WHY DID I GET THIS EMAIL?
https://reviews.facebook.net/herald/transcript/3711/ Tip: use the X-Herald-Rules header to filter Herald messages in your client.

Move metastore upgrade scripts labeled 0.10.0 into scripts labeled 0.9.0
Key: HIVE-2805
URL: https://issues.apache.org/jira/browse/HIVE-2805
Project: Hive
Issue Type: Task
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Attachments: HIVE-2805.D1743.1.patch
Re: Running hive in eclipse
Hello there, https://issues.apache.org/jira/browse/HIVE-2673 might be helpful.

On Thu, Feb 16, 2012 at 1:22 AM, Aaron Sun aaron.su...@gmail.com wrote:

Hi Team, I am trying to run and debug Hive in Eclipse. I checked out release-0.8.0 (r1215012) from the SVN repository and built the project with the thrift and fb303 libraries installed correctly. The build process returned "Build Successful". Then I tried to launch the CLI by running CliDriver.java as a Java Application, and it returned errors:

Exception in thread "main" java.lang.RuntimeException: Failed to load Hive builtin functions
    at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:190)
    at org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:81)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:576)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
Caused by: java.util.zip.ZipException: error in opening zip file
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:131)
    at java.util.jar.JarFile.<init>(JarFile.java:150)
    at java.util.jar.JarFile.<init>(JarFile.java:87)
    at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:90)
    at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:66)
    at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:71)
    at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
    at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
    at java.net.URL.openStream(URL.java:1029)
    at org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerFunctionsFromPluginJar(FunctionRegistry.java:1194)
    at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:187)
    ...
3 more

I looked over the build.xml under the ./builtins directory and noticed that the compile and jar targets are both stubbed out, so no jar is generated for builtins:

<target name="compile" depends="init, setup">
  <echo message="Project: ${ant.project.name}"/>
  <!-- defer compilation until package phase -->
</target>

<target name="jar" depends="init">
  <echo message="Project: ${ant.project.name}"/>
  <!-- defer compilation until package phase -->
</target>

I then manually changed the compile target in build.xml as follows and rebuilt the project:

<target name="compile" depends="init, setup">
  <echo message="Project: ${ant.project.name}"/>
  <javac encoding="${build.encoding}" srcdir="${src.dir}" includes="**/*.java"
         destdir="${build.classes}" debug="${javac.debug}" deprecation="${javac.deprecation}"
         includeantruntime="false">
    <compilerarg line="${javac.args} ${javac.args.warnings}"/>
    <classpath refid="classpath"/>
  </javac>
</target>

Now hive-builtins-0.8.0-SNAPSHOT.jar is under the build/builtins directory. However, I am still getting the same "Failed to load Hive builtin functions" error. Could someone kindly let me know what the problem is and how I should run the CLI correctly in Eclipse?

Thanks
Aaron
Hive-trunk-h0.21 - Build # 1261 - Fixed
Changes for Build #1220
[namit] HIVE-2727 add a testcase for partitioned view on union and base tables have index (He Yongqiang via namit)

Changes for Build #1221
[hashutosh] HIVE-2746 : Metastore client doesn't log properly in case of connection failure to server (hashutosh)
[cws] HIVE-2698 [jira] Enable Hadoop-1.0.0 in Hive (Enis Söztutar via Carl Steinbach)
Summary: third version of the patch. Hadoop-1.0.0 was recently released and is, AFAIK, API-compatible with the 0.20S release.
Test Plan: EMPTY
Reviewers: JIRA, cwsteinbach
Reviewed By: cwsteinbach
CC: cwsteinbach, enis
Differential Revision: https://reviews.facebook.net/D1389

Changes for Build #1222
[namit] HIVE-2750 Hive multi group by single reducer optimization causes invalid column reference error (Kevin Wilfong via namit)

Changes for Build #1223

Changes for Build #1224
[cws] HIVE-2734 [jira] Fix some nondeterministic test output (Zhenxiao Luo via Carl Steinbach)
Summary: Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit, input11_limit, input1_limit, input_lazyserde, join18_multi_distinct, join_1to1, join_casesensitive, join_filters, join_nulls, merge3, rcfile_columnar, rcfile_lazydecompress, rcfile_union, sample10, udf_sentences, union24, columnarserde_create_shortcut, combine1, global_limit
Test Plan: EMPTY
Reviewers: JIRA, cwsteinbach
Reviewed By: cwsteinbach
CC: zhenxiao, cwsteinbach
Differential Revision: https://reviews.facebook.net/D1449
[namit] HIVE-2754 NPE in union with lateral view (Yongqiang He via namit)

Changes for Build #1225

Changes for Build #1226

Changes for Build #1227
[namit] HIVE-2755 union followed by union_subq does not work if the subquery union has reducers (He Yongqiang via namit)

Changes for Build #1228

Changes for Build #1229
[hashutosh] HIVE-2735: PlanUtils.configureTableJobPropertiesForStorageHandler() is not called for partitioned table (sushanth via ashutosh)

Changes for Build #1230
[cws] HIVE-2760 [jira] TestCliDriver should log elapsed time
Summary: HIVE-2760. TestCliDriver should log elapsed time
Test Plan: EMPTY
Reviewers: JIRA, ashutoshc
Reviewed By: ashutoshc
CC: ashutoshc, cwsteinbach
Differential Revision: https://reviews.facebook.net/D1503
[cws] HIVE-2662 [jira] Add Ant configuration property for dumping classpath of tests
Summary: HIVE-2662. Add Ant configuration property for dumping classpath of tests
Test Plan: EMPTY
Reviewers: JIRA, jsichi, ashutoshc
Reviewed By: ashutoshc
CC: ashutoshc
Differential Revision: https://reviews.facebook.net/D903

Changes for Build #1231
[hashutosh] HIVE-2645: Hive Web Server startup messages logs incorrect path it is searching for WAR (Edward Capriolo via Ashutosh Chauhan)

Changes for Build #1232

Changes for Build #1233
[sdong] HIVE-2249 When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double (Zhiqiu Kong via Siying Dong)

Changes for Build #1234

Changes for Build #1235
[heyongqiang] HIVE-2765 hbase handler uses ZooKeeperConnectionException which is not compatible with HBase versions other than 0.89 (Pei Yue via He Yongqiang)

Changes for Build #1236

Changes for Build #1237

Changes for Build #1238
[heyongqiang] HIVE-2772 [jira] make union31.q deterministic (Namit Jain via Yongqiang He)
Summary: https://issues.apache.org/jira/browse/HIVE-2772
Test Plan: EMPTY
Reviewers: JIRA, ashutoshc
Reviewed By: ashutoshc
CC: ashutoshc
Differential Revision: https://reviews.facebook.net/D1557
[kevinwilfong] HIVE-2758 Metastore is caching too aggressively (Kevin Wilfong reviewed by Carl Steinbach)

Changes for Build #1239

Changes for Build #1240
[namit] HIVE-2762 Alter Table Partition Concatenate Fails On Certain Characters (Kevin Wilfong via namit)

Changes for Build #1241
[namit] HIVE-2756 Views should be added to the inputs of queries. (Yongqiang He via namit)

Changes for Build #1242

Changes for Build #1243

Changes for Build #1244

Changes for Build #1245

Changes for Build #1246

Changes for Build #1247
[namit] HIVE-2779 Improve Hooks run in Driver (Kevin Wilfong via namit)

Changes for Build #1248

Changes for Build #1249
[namit] HIVE-2759 Change global_limit.q into linux format file (Zhenxiao Luo via namit)

Changes for Build #1250
[namit] HIVE-2749 CONV returns incorrect results sometimes (Jonathan Chang via namit)

Changes for Build #1251

Changes for Build #1252
[namit] HIVE-2795 View partitions do not have a storage descriptor (Kevin Wilfong via namit)

Changes for Build #1253

Changes for Build #1254
[namit] HIVE-2612 support hive table/partitions exists in more than one region (Kevin Wilfong via namit)

Changes for Build #1256

Changes for Build #1257
[cws] HIVE-2753 [jira] Remove empty java files (Owen O'Malley via Carl Steinbach)
Summary: remove dead
[jira] [Commented] (HIVE-2760) TestCliDriver should log elapsed time
[ https://issues.apache.org/jira/browse/HIVE-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209156#comment-13209156 ]

Phabricator commented on HIVE-2760:
-----------------------------------

cwsteinbach has committed the revision "HIVE-2760 [jira] TestCliDriver should log elapsed time". Change committed by cws.

REVISION DETAIL
https://reviews.facebook.net/D1503

COMMIT
https://reviews.facebook.net/rHIVE1237511

TestCliDriver should log elapsed time
-------------------------------------
Key: HIVE-2760
URL: https://issues.apache.org/jira/browse/HIVE-2760
Project: Hive
Issue Type: Improvement
Components: Testing Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
Fix For: 0.9.0
Attachments: HIVE-2760.D1503.1.patch, HIVE-2760.D1503.2.patch

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
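The change above is small but easy to picture. The following is an illustrative sketch only, not Hive's actual TestCliDriver code: `ElapsedTimeDemo`, `runTimed`, and the log-line format are hypothetical names showing the basic mechanics of logging how long each query test takes.

```java
// Illustrative sketch of per-test elapsed-time logging (NOT Hive's real code).
public class ElapsedTimeDemo {
    // Record a start timestamp, run the test body, and report wall-clock time.
    public static long runTimed(String testName, Runnable testBody) {
        long start = System.nanoTime();
        testBody.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Done " + testName + " elapsedTime=" + elapsedMs + "ms");
        return elapsedMs;
    }

    public static void main(String[] args) {
        runTimed("ba_table_udfs.q", () -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
    }
}
```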
[jira] [Commented] (HIVE-2772) make union31.q deterministic
[ https://issues.apache.org/jira/browse/HIVE-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209157#comment-13209157 ]

Phabricator commented on HIVE-2772:
-----------------------------------

heyongqiang has committed the revision "HIVE-2772 [jira] make union31.q deterministic".

REVISION DETAIL
https://reviews.facebook.net/D1557

COMMIT
https://reviews.facebook.net/rHIVE1239286

make union31.q deterministic
----------------------------
Key: HIVE-2772
URL: https://issues.apache.org/jira/browse/HIVE-2772
Project: Hive
Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
Fix For: 0.9.0
Attachments: HIVE-2772.D1557.1.patch
[jira] [Commented] (HIVE-2753) Remove empty java files
[ https://issues.apache.org/jira/browse/HIVE-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209158#comment-13209158 ]

Phabricator commented on HIVE-2753:
-----------------------------------

omalley has committed the revision "HIVE-2753 [jira] Remove empty java files". Change committed by cws.

REVISION DETAIL
https://reviews.facebook.net/D1611

COMMIT
https://reviews.facebook.net/rHIVE1243762

Remove empty java files
-----------------------
Key: HIVE-2753
URL: https://issues.apache.org/jira/browse/HIVE-2753
Project: Hive
Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Fix For: 0.9.0
Attachments: HIVE-2753.D1611.1.patch, h-2753.patch

When looking at the 0.8.1 rc1, I discovered there were a set of empty Java files that were likely left over from using 'patch' without the -E.
{quote}
jdbc/src/java/org/apache/hadoop/hive/jdbc/JdbcSessionState.java
ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeIndexEvaluator.java
ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinObject.java
ql/src/java/org/apache/hadoop/hive/ql/exec/PathUtil.java
ql/src/java/org/apache/hadoop/hive/ql/exec/TypedBytesRecordReader.java
ql/src/java/org/apache/hadoop/hive/ql/plan/AlterPartitionProtectModeDesc.java
ql/src/java/org/apache/hadoop/hive/ql/plan/TouchDesc.java
ql/src/test/org/apache/hadoop/hive/ql/plan/TestAddPartition.java
serde/src/gen-java/org/apache/hadoop/hive/serde/test/Constants.java
shims/src/0.20/java/org/apache/hadoop/fs/ProxyFileSystem.java
shims/src/0.20/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
{quote}
[jira] [Commented] (HIVE-2769) union with a multi-table insert is not working
[ https://issues.apache.org/jira/browse/HIVE-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209174#comment-13209174 ]

Phabricator commented on HIVE-2769:
-----------------------------------

heyongqiang has committed the revision "HIVE-2769 [jira] union with a multi-table insert is not working".

REVISION DETAIL
https://reviews.facebook.net/D1545

COMMIT
https://reviews.facebook.net/rHIVE1239161

union with a multi-table insert is not working
----------------------------------------------
Key: HIVE-2769
URL: https://issues.apache.org/jira/browse/HIVE-2769
Project: Hive
Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
Fix For: 0.9.0
Attachments: HIVE-2769.D1545.1.patch
[jira] [Commented] (HIVE-2782) New BINARY type produces unexpected results with supported UDFS when using MapReduce2
[ https://issues.apache.org/jira/browse/HIVE-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209172#comment-13209172 ]

Phabricator commented on HIVE-2782:
-----------------------------------

cwsteinbach has committed the revision "HIVE-2782 [jira] New BINARY type produces unexpected results with supported UDFS when using MapReduce2". Change committed by cws.

REVISION DETAIL
https://reviews.facebook.net/D1653

COMMIT
https://reviews.facebook.net/rHIVE1244314

New BINARY type produces unexpected results with supported UDFS when using MapReduce2
-------------------------------------------------------------------------------------
Key: HIVE-2782
URL: https://issues.apache.org/jira/browse/HIVE-2782
Project: Hive
Issue Type: Bug
Reporter: Zhenxiao Luo
Assignee: Carl Steinbach
Fix For: 0.9.0
Attachments: HIVE-2782.D1653.1.patch

When using MapReduce2 for Hive, ba_table_udfs is failing with unexpected output:

[junit] Begin query: ba_table_udfs.q
[junit] 12/01/23 13:32:28 WARN conf.Configuration: mapred.system.dir is deprecated. Instead, use mapreduce.jobtracker.system.dir
[junit] 12/01/23 13:32:28 WARN conf.Configuration: mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I LOCATION ' -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I LOCK_TIME: -I grantTime -I [.][.][.] [0-9]* more -I job_[0-9]*_[0-9]* -I USING 'java -cp /home/cloudera/Code/hive/build/ql/test/logs/clientpositive/ba_table_udfs.q.out /home/cloudera/Code/hive/ql/src/test/results/clientpositive/ba_table_udfs.q.out
[junit] 20,26c20,26
[junit] 2 10val_101
[junit] 3 164val_164 1
[junit] 3 150val_150 1
[junit] 2 18val_181
[junit] 3 177val_177 1
[junit] 2 12val_121
[junit] 2 11val_111
[junit] ---
[junit] 3 120val_120 1
[junit] 3 192val_192 1
[junit] 3 119val_119 1
[junit] 3 187val_187 1
[junit] 3 176val_176 1
[junit] 3 199val_199 1
[junit] 3 118val_118 1
[junit] Exception: Client execution results failed with error code = 1
[junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.
[junit] junit.framework.AssertionFailedError: Client execution results failed with error code = 1
[junit] at junit.framework.Assert.fail(Assert.java:50)
[junit] at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_udfs(TestCliDriver.java:129)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit] at java.lang.reflect.Method.invoke(Method.java:616)
[junit] at junit.framework.TestCase.runTest(TestCase.java:168)
[junit] at junit.framework.TestCase.runBare(TestCase.java:134)
[junit] at junit.framework.TestResult$1.protect(TestResult.java:110)
[junit] at junit.framework.TestResult.runProtected(TestResult.java:128)
[junit] at junit.framework.TestResult.run(TestResult.java:113)
[junit] at junit.framework.TestCase.run(TestCase.java:124)
[junit] at junit.framework.TestSuite.runTest(TestSuite.java:243)
[junit] at junit.framework.TestSuite.run(TestSuite.java:238)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
[junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.
[junit] Cleaning up TestCliDriver
[junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 10.751 sec
[junit] Test org.apache.hadoop.hive.cli.TestCliDriver FAILED
[for] /home/cloudera/Code/hive/ql/build.xml: The following error occurred while executing this line:
[for] /home/cloudera/Code/hive/build.xml:328: The following error occurred while executing this line:
[for] /home/cloudera/Code/hive/build-common.xml:453: Tests failed!
[jira] [Updated] (HIVE-2261) Add cleanup stages for UDFs
[ https://issues.apache.org/jira/browse/HIVE-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navis updated HIVE-2261:
Fix Version/s: 0.9.0
Affects Version/s: 0.9.0
Status: Patch Available (was: Open)

Add cleanup stages for UDFs
---------------------------
Key: HIVE-2261
URL: https://issues.apache.org/jira/browse/HIVE-2261
Project: Hive
Issue Type: Wish
Components: Query Processor
Affects Versions: 0.9.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
Fix For: 0.9.0
Attachments: HIVE-2261.D1329.1.patch, HIVE-2261.D1329.2.patch

In some cases we bind values at the last stage of a big SQL query from other sources, especially from memcached; I have made that kind of UDF for internal use. The 'initialize' method of the GenericUDF class is a good place to open connections to the memcached cluster, but I could not find anywhere to close or clean up those connections. A cleanup method in the GenericUDF class would make things much neater, and if the initializing entities (map/reduce/fetch) also exposed lifecycle hooks (init/close), that would be perfect.
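The lifecycle being requested can be pictured with a small stand-in class. This is plain Java, not Hive code: `LifecycleUDF`, `close`, and `isOpen` are hypothetical names; the point is only that `initialize` acquires a resource and a matching hook releases it.

```java
// Hypothetical sketch of the init/close UDF lifecycle HIVE-2261 asks for.
public class LifecycleUDF {
    private StringBuilder connection;   // stands in for a memcached client

    // Analogous to GenericUDF.initialize(): acquire the resource once.
    public void initialize() {
        connection = new StringBuilder("connected");
    }

    public String evaluate(String key) {
        return connection + ":" + key;
    }

    // The missing hook the ticket requests: release the resource.
    public void close() {
        connection = null;
    }

    public boolean isOpen() {
        return connection != null;
    }

    public static void main(String[] args) {
        LifecycleUDF udf = new LifecycleUDF();
        udf.initialize();
        System.out.println(udf.evaluate("k1"));
        udf.close();
        System.out.println(udf.isOpen());
    }
}
```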
[jira] [Updated] (HIVE-2503) HiveServer should provide per session configuration
[ https://issues.apache.org/jira/browse/HIVE-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navis updated HIVE-2503:
Status: Patch Available (was: Open)

HiveServer should provide per session configuration
---------------------------------------------------
Key: HIVE-2503
URL: https://issues.apache.org/jira/browse/HIVE-2503
Project: Hive
Issue Type: Bug
Components: CLI, Server Infrastructure
Affects Versions: 0.9.0
Reporter: Navis
Assignee: Navis
Fix For: 0.9.0
Attachments: HIVE-2503.1.patch.txt

Currently ThriftHiveProcessorFactory returns the same HiveConf instance to every HiveServerHandler, making it impossible to use per-session configuration. Simply wrapping 'conf' as 'new HiveConf(conf)' seems to solve the problem.
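The one-line fix described above, giving each session its own copy of the shared conf rather than the shared instance itself, can be sketched with a stand-in class. `Conf` here is hypothetical, not Hive's HiveConf; the copy constructor plays the role of `new HiveConf(conf)`.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of per-session configuration via a copy constructor.
public class Conf {
    private final Map<String, String> props;

    public Conf() {
        this.props = new HashMap<>();
    }

    // Copy constructor: the session gets its own mutable view,
    // analogous to "new HiveConf(conf)" in the HIVE-2503 fix.
    public Conf(Conf other) {
        this.props = new HashMap<>(other.props);
    }

    public void set(String key, String value) { props.put(key, value); }
    public String get(String key) { return props.get(key); }

    public static void main(String[] args) {
        Conf shared = new Conf();
        shared.set("hive.exec.parallel", "false");

        Conf session = new Conf(shared);           // per-session copy
        session.set("hive.exec.parallel", "true"); // session-local override

        // The override does not leak into the shared conf.
        System.out.println(shared.get("hive.exec.parallel"));
        System.out.println(session.get("hive.exec.parallel"));
    }
}
```

Handing every handler the same instance makes any `set` call a global side effect; copying on session creation isolates sessions at the cost of one map copy.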
[jira] [Created] (HIVE-2806) 'The database has reported an Exception: Query returned non-zero code: 9; cause: FAILED' - Queries that have 'JOIN' on tables that have huge data result in this exception
'The database has reported an Exception: Query returned non-zero code: 9; cause: FAILED' - Queries that have 'JOIN' on tables that have huge data result in this exception
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Key: HIVE-2806
URL: https://issues.apache.org/jira/browse/HIVE-2806
Project: Hive
Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Archit Garg
Priority: Critical

Below are the two queries that result in an exception when executed on Hive 0.8.0:

1. SELECT T5.C47 AS C56, ((T5.C48 + T5.C49) - T5.C50) AS C57 FROM (SELECT count(T1.b2) AS C47, sum(T2.b2) AS C48, max(T3.e2) AS C49, max(T4.e3) AS C50 FROM default.qutest2 T1 JOIN default.qutest2 T2 JOIN default.qutest2 T3 JOIN default.qutest3 T4 WHERE (((T4.a3 = T3.a2) AND (T3.a2 = T2.a2)) AND (T2.a2 = T1.a2))) T5 ORDER BY C56 ASC, C57 DESC;

2. SELECT (T5.C43 - T5.C44) AS C58, T5.C45 AS C59, T5.C46 AS C60, T5.C47 AS C61 FROM (SELECT sum(T1.a2) AS C43, count(T4.b2) AS C44, count(T1.b2) AS C45, sum(T3.b1) AS C46, max(T3.b1) AS C47 FROM default.qutest2 T1 JOIN default.qutest2 T2 JOIN default.qutest1 T3 JOIN default.qutest2 T4 WHERE (((T4.a2 = T3.a1) AND (T4.a2 = T2.a2)) AND (T4.a2 = T1.a2))) T5 ORDER BY C58 ASC, C59 ASC, C60 DESC, C61 ASC;

The queries keep running for a long time (more than an hour) and finally fail with the following exception:

Database has reported an Exception: Query returned non-zero code: 9; cause: FAILED: Execution Error; return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask (SQLState: 08S01; Vendor code: 9)

The schema of all the tables used is the same; each has four columns [(INT), (INT), (STRING), (INT)]:
For qutest1, the columns are a1, b1, d1, e1
For qutest2, the columns are a2, b2, d2, e2 [this table has 1000 rows and is joined three times in both queries]
For qutest3, the columns are a3, b3, d3, e3
For qutest4, the columns are a4, b4, d4, e4
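One plausible contributing factor, stated here as an assumption rather than a confirmed diagnosis: the reported queries write their equality predicates in WHERE rather than in JOIN ... ON clauses, so the joins can materialize a cross product before any filtering. The sketch below only computes that intermediate-row blow-up; the 1000-row size comes from the report, and equal sizes for the other tables are assumed for illustration.

```java
// Back-of-the-envelope size of an unconstrained (ON-less) multi-way join:
// before the WHERE filter runs, the intermediate result is the cross
// product of all joined tables.
public class CrossJoinSize {
    public static long crossProductRows(long... tableSizes) {
        long rows = 1;
        for (long n : tableSizes) {
            rows *= n;   // each joined table multiplies the row count
        }
        return rows;
    }

    public static void main(String[] args) {
        // T1 JOIN T2 JOIN T3 JOIN T4, assuming ~1000 rows each:
        long rows = crossProductRows(1000, 1000, 1000, 1000);
        System.out.println(rows + " intermediate rows before WHERE filtering");
    }
}
```

If this is indeed the cause, moving each equality predicate into an ON clause (e.g. `T1 JOIN T2 ON (T1.a2 = T2.a2)`) lets Hive execute equi-joins instead of cross products.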