[jira] [Updated] (HIVE-2872) Store which configs the user has explicitly changed
[ https://issues.apache.org/jira/browse/HIVE-2872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-2872:
-----------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

Committed. Thanks, Kevin!

Store which configs the user has explicitly changed
---------------------------------------------------
                Key: HIVE-2872
                URL: https://issues.apache.org/jira/browse/HIVE-2872
            Project: Hive
         Issue Type: Improvement
           Reporter: Kevin Wilfong
           Assignee: Kevin Wilfong
        Attachments: HIVE-2872.D2337.1.patch, HIVE-2872.D2337.2.patch

It would be useful to keep track of which config variables the user has explicitly changed from the values which are either default or loaded from hive-site.xml. These include config variables set using the hiveconf argument to the CLI, and via the SET command. This could be used to prevent Hive from changing a config variable which has been explicitly set by the user, and also potentially for logging to help with later debugging of failed queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
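The idea in HIVE-2872 can be sketched in a few lines. The names below are illustrative, not Hive's actual API: the point is simply to record, alongside the live config, which keys the user set explicitly (via SET or hiveconf) so later code can consult that set instead of clobbering user choices.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of HIVE-2872's idea (names are illustrative, not Hive's API):
// track which config variables the user explicitly overrode, separately from
// values that came from defaults or hive-site.xml.
public class SessionConf {
    private final Map<String, String> conf = new HashMap<>();
    private final Map<String, String> overriddenByUser = new HashMap<>();

    // Values loaded from defaults or hive-site.xml.
    public void loadDefault(String key, String value) {
        conf.put(key, value);
    }

    // Values set explicitly via the SET command or the hiveconf CLI argument.
    public void userSet(String key, String value) {
        conf.put(key, value);
        overriddenByUser.put(key, value);
    }

    // Lets other code avoid changing user-set variables, or log them for debugging.
    public boolean isUserOverridden(String key) {
        return overriddenByUser.containsKey(key);
    }

    public static void main(String[] args) {
        SessionConf c = new SessionConf();
        c.loadDefault("hive.exec.reducers.max", "999");
        c.userSet("hive.exec.dynamic.partition", "true");
        System.out.println(c.isUserOverridden("hive.exec.dynamic.partition")); // true
        System.out.println(c.isUserOverridden("hive.exec.reducers.max"));      // false
    }
}
```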
[jira] [Commented] (HIVE-2875) Renaming partition changes partition location prefix
[ https://issues.apache.org/jira/browse/HIVE-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230929#comment-13230929 ]

Phabricator commented on HIVE-2875:
-----------------------------------

njain has accepted the revision "HIVE-2875 [jira] Renaming partition changes partition location prefix".

REVISION DETAIL
  https://reviews.facebook.net/D2349

BRANCH
  svn

Renaming partition changes partition location prefix
----------------------------------------------------
                Key: HIVE-2875
                URL: https://issues.apache.org/jira/browse/HIVE-2875
            Project: Hive
         Issue Type: Bug
           Reporter: Kevin Wilfong
           Assignee: Kevin Wilfong
        Attachments: HIVE-2875.D2349.1.patch

Renaming a partition changes the location of the partition to the default location of the table, followed by the partition specification. It should just change the partition specification part of the path. If the path does not end with the old partition specification, we should probably throw an exception: renaming a partition should not change the path so dramatically, and not changing the path to reflect the new partition name could leave the partition in a very confusing state.
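The rename rule the ticket asks for is easy to state as code. This is a sketch of the rule only, not Hive's implementation: swap only the trailing partition spec of the location, and fail fast when the location does not end with the old spec.

```java
// Sketch of HIVE-2875's desired behavior (not Hive's actual code): a rename
// should replace only the trailing partition spec of the location, and throw
// if the location does not end with the old spec.
public class PartitionRename {
    public static String renameLocation(String location, String oldSpec, String newSpec) {
        if (!location.endsWith(oldSpec)) {
            throw new IllegalStateException(
                "Partition location does not end with its partition spec: " + location);
        }
        // Keep the prefix untouched; swap only the partition spec suffix.
        return location.substring(0, location.length() - oldSpec.length()) + newSpec;
    }

    public static void main(String[] args) {
        String loc = "hdfs://nn/warehouse/tbl/ds=2012-03-15";
        System.out.println(renameLocation(loc, "ds=2012-03-15", "ds=2012-03-16"));
        // hdfs://nn/warehouse/tbl/ds=2012-03-16
    }
}
```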
[jira] [Updated] (HIVE-2835) Change default configuration for hive.exec.dynamic.partition
[ https://issues.apache.org/jira/browse/HIVE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-2835:
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 0.9.0
     Release Note: Dynamic Partitioning is now on by default.
           Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Owen!

Change default configuration for hive.exec.dynamic.partition
------------------------------------------------------------
                Key: HIVE-2835
                URL: https://issues.apache.org/jira/browse/HIVE-2835
            Project: Hive
         Issue Type: Improvement
           Reporter: Owen O'Malley
           Assignee: Owen O'Malley
            Fix For: 0.9.0
        Attachments: HIVE-2835.D2157.1.patch, HIVE-2835.D2157.2.patch

I think we should enable dynamic partitions by default.
[jira] [Updated] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-2860:
-----------------------------------
    Status: Open  (was: Patch Available)

TestNegativeCliDriver autolocal1.q fails on 0.23
------------------------------------------------
                Key: HIVE-2860
                URL: https://issues.apache.org/jira/browse/HIVE-2860
            Project: Hive
         Issue Type: Bug
         Components: Testing Infrastructure
   Affects Versions: 0.9.0
           Reporter: Carl Steinbach
           Assignee: Carl Steinbach
            Fix For: 0.9.0
        Attachments: HIVE-2860.D2253.1.patch, HIVE-2860.D2253.1.patch
[jira] [Updated] (HIVE-2819) Closed range scans on hbase keys
[ https://issues.apache.org/jira/browse/HIVE-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phabricator updated HIVE-2819:
------------------------------
    Attachment: HIVE-2819.D1923.2.patch

ashutoshc updated the revision "HIVE-2819 [jira] Closed range scans on hbase keys".

Reviewers: JIRA, cwsteinbach

Rebased to trunk. Incorporated Carl's comments.

REVISION DETAIL
  https://reviews.facebook.net/D1923

AFFECTED FILES
  hbase-handler/src/test/results/ppd_key_ranges.q.out
  hbase-handler/src/test/results/hbase_ppd_key_range.q.out
  hbase-handler/src/test/queries/hbase_ppd_key_range.q
  hbase-handler/src/test/queries/ppd_key_ranges.q
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java

Closed range scans on hbase keys
--------------------------------
                Key: HIVE-2819
                URL: https://issues.apache.org/jira/browse/HIVE-2819
            Project: Hive
         Issue Type: Improvement
         Components: HBase Handler
           Reporter: Ashutosh Chauhan
           Assignee: Ashutosh Chauhan
        Attachments: HIVE-2819.D1923.1.patch, HIVE-2819.D1923.2.patch

This patch pushes range scans on keys of closed form into hbase.
[jira] [Updated] (HIVE-2819) Closed range scans on hbase keys
[ https://issues.apache.org/jira/browse/HIVE-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-2819:
-----------------------------------
    Status: Patch Available  (was: Open)

Ready for review.

Closed range scans on hbase keys
--------------------------------
                Key: HIVE-2819
                URL: https://issues.apache.org/jira/browse/HIVE-2819
            Project: Hive
         Issue Type: Improvement
         Components: HBase Handler
           Reporter: Ashutosh Chauhan
           Assignee: Ashutosh Chauhan
        Attachments: HIVE-2819.D1923.1.patch, HIVE-2819.D1923.2.patch

This patch pushes range scans on keys of closed form into hbase.
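What a "closed range scan" on keys buys can be illustrated with plain Java. A TreeMap stands in for HBase's key-ordered table here; this is an illustration of the concept, not the patch's code. A predicate of the form key >= start AND key <= stop becomes a bounded scan over just that slice, instead of a full scan filtered client-side.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustration of a closed range scan over sorted keys; a TreeMap stands in
// for HBase's key-ordered table. Pushing the [start, stop] bounds into the
// store means only that slice is read, rather than the whole table.
public class ClosedRangeScan {
    public static NavigableMap<String, String> scan(
            TreeMap<String, String> table, String start, String stop) {
        // Both endpoints inclusive: a "closed" range.
        return table.subMap(start, true, stop, true);
    }

    public static void main(String[] args) {
        TreeMap<String, String> table = new TreeMap<>();
        for (String k : new String[] {"a", "b", "c", "d", "e"}) {
            table.put(k, "row-" + k);
        }
        System.out.println(scan(table, "b", "d").keySet()); // [b, c, d]
    }
}
```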
[jira] [Commented] (HIVE-2503) HiveServer should provide per session configuration
[ https://issues.apache.org/jira/browse/HIVE-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230949#comment-13230949 ]

Ashutosh Chauhan commented on HIVE-2503:
----------------------------------------

+1. Will commit if tests pass. Navis, can you also post the patch on JIRA, granting license?

HiveServer should provide per session configuration
---------------------------------------------------
                Key: HIVE-2503
                URL: https://issues.apache.org/jira/browse/HIVE-2503
            Project: Hive
         Issue Type: Bug
         Components: CLI, Server Infrastructure
   Affects Versions: 0.9.0
           Reporter: Navis
           Assignee: Navis
            Fix For: 0.9.0
        Attachments: HIVE-2503.1.patch.txt

Currently ThriftHiveProcessorFactory returns the same HiveConf instance to every HiveServerHandler, making it impossible to use per-session configuration. Just wrapping 'conf' as 'new HiveConf(conf)' seemed to solve this problem.
Hive-trunk-h0.21 - Build # 1313 - Still Failing
Changes for Build #1312
[cws] HIVE-2856. Fix TestCliDriver escape1.q failure on MR2 (Zhenxiao Luo via cws)

Changes for Build #1313
[cws] HIVE-2815 [jira] Filter pushdown in hbase for keys stored in binary format (Ashutosh Chauhan via Carl Steinbach)

  Summary: Further support for pushdown on keys stored in binary format. This patch enables filter pushdown for keys stored in binary format in hbase.
  Test Plan: Included a new test case.
  Reviewers: JIRA, jsichi, njain, cwsteinbach
  Reviewed By: cwsteinbach
  Differential Revision: https://reviews.facebook.net/D1875

[cws] HIVE-2778 [jira] Fail on table sampling (Navis Ryu via Carl Steinbach)

  Summary: HIVE-2778 fix NPE on table sampling. Trying table sampling on any non-empty table throws NPE. This does not occur by test on mini-MR.

{noformat}
select count(*) from emp tablesample (0.1 percent);
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.sampleSplits(CombineHiveInputFormat.java:450)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:403)
    at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963)
    at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:432)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
Job Submission failed with exception 'java.lang.NullPointerException(null)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
{noformat}

  Test Plan: EMPTY
  Reviewers: JIRA, cwsteinbach
  Reviewed By: cwsteinbach
  Differential Revision: https://reviews.facebook.net/D1593

1 tests failed.
FAILED: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception. See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get more logs.
    at junit.framework.Assert.fail(Assert.java:50)
    at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10340)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at junit.framework.TestCase.runTest(TestCase.java:168)
    at junit.framework.TestCase.runBare(TestCase.java:134)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
[jira] [Commented] (HIVE-2778) Fail on table sampling
[ https://issues.apache.org/jira/browse/HIVE-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230954#comment-13230954 ]

Hudson commented on HIVE-2778:
------------------------------

Integrated in Hive-trunk-h0.21 #1313 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1313/])
HIVE-2778 [jira] Fail on table sampling (Navis Ryu via Carl Steinbach) (Revision 1301310)

Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301310
Files :
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java

Fail on table sampling
----------------------
                Key: HIVE-2778
                URL: https://issues.apache.org/jira/browse/HIVE-2778
            Project: Hive
         Issue Type: Bug
         Components: Query Processor
   Affects Versions: 0.9.0
        Environment: Reproduced only on hadoop-0.20.2-CDH3u1, works fine on hadoop-0.20.2
           Reporter: Navis
           Assignee: Navis
            Fix For: 0.9.0
        Attachments: HIVE-2778.D1593.1.patch, HIVE-2778.D1593.2.patch, HIVE-2778.D1593.2.patch

Trying table sampling on any non-empty table throws NPE. This does not occur by test on mini-MR.
[jira] [Commented] (HIVE-2815) Filter pushdown in hbase for keys stored in binary format
[ https://issues.apache.org/jira/browse/HIVE-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230955#comment-13230955 ]

Hudson commented on HIVE-2815:
------------------------------

Integrated in Hive-trunk-h0.21 #1313 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1313/])
HIVE-2815 [jira] Filter pushdown in hbase for keys stored in binary format (Ashutosh Chauhan via Carl Steinbach) (Revision 1301315)

  Summary: Further support for pushdown on keys stored in binary format. This patch enables filter pushdown for keys stored in binary format in hbase.
  Test Plan: Included a new test case.
  Reviewers: JIRA, jsichi, njain, cwsteinbach
  Reviewed By: cwsteinbach
  Differential Revision: https://reviews.facebook.net/D1875

Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301315
Files :
* /hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java
* /hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* /hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* /hive/trunk/hbase-handler/src/test/queries/external_table_ppd.q
* /hive/trunk/hbase-handler/src/test/results/external_table_ppd.q.out

Filter pushdown in hbase for keys stored in binary format
---------------------------------------------------------
                Key: HIVE-2815
                URL: https://issues.apache.org/jira/browse/HIVE-2815
            Project: Hive
         Issue Type: Improvement
         Components: HBase Handler
   Affects Versions: 0.6.0, 0.7.0, 0.7.1, 0.8.0, 0.8.1
           Reporter: Ashutosh Chauhan
           Assignee: Ashutosh Chauhan
            Fix For: 0.9.0
        Attachments: HIVE-2815.D1875.1.patch, HIVE-2815.D1875.2.patch

This patch enables filter pushdown for keys stored in binary format in hbase.
[jira] [Updated] (HIVE-2702) listPartitionsByFilter only supports non-string partitions
[ https://issues.apache.org/jira/browse/HIVE-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-2702:
-----------------------------------
    Status: Open  (was: Patch Available)

listPartitionsByFilter only supports non-string partitions
----------------------------------------------------------
                Key: HIVE-2702
                URL: https://issues.apache.org/jira/browse/HIVE-2702
            Project: Hive
         Issue Type: Bug
   Affects Versions: 0.8.1
           Reporter: Aniket Mokashi
           Assignee: Aniket Mokashi
        Attachments: HIVE-2702.1.patch, HIVE-2702.D2043.1.patch

listPartitionsByFilter supports only string partitions. This is explicitly specified in generateJDOFilterOverPartitions in ExpressionTree.java:

// Can only support partitions whose types are string
if (!table.getPartitionKeys().get(partitionColumnIndex)
    .getType().equals(org.apache.hadoop.hive.serde.Constants.STRING_TYPE_NAME)) {
  throw new MetaException(
      "Filtering is supported only on partition keys of type string");
}
[jira] [Commented] (HIVE-2702) listPartitionsByFilter only supports non-string partitions
[ https://issues.apache.org/jira/browse/HIVE-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230971#comment-13230971 ]

Ashutosh Chauhan commented on HIVE-2702:
----------------------------------------

@Aniket, this is by design. Partition values are stored as strings in the backend db (mysql), so pushing filters into the db where the partition column is of a numeric type won't work, since the comparison will then happen lexicographically. You should be able to catch this with rigorous tests: e.g., with your patch on, create a table with a partition key of int type, add partitions 1-11, and then filter with p < 2; you will get partitions 1, 10, and 11 instead of just 1. You can still push an equality predicate, though. Enabling this feature requires mysql table schema updates which can retain type information for partition keys.

listPartitionsByFilter only supports non-string partitions
----------------------------------------------------------
                Key: HIVE-2702
                URL: https://issues.apache.org/jira/browse/HIVE-2702
            Project: Hive
         Issue Type: Bug
   Affects Versions: 0.8.1
           Reporter: Aniket Mokashi
           Assignee: Aniket Mokashi
        Attachments: HIVE-2702.1.patch, HIVE-2702.D2043.1.patch

listPartitionsByFilter supports only string partitions. This is explicitly specified in generateJDOFilterOverPartitions in ExpressionTree.java:

// Can only support partitions whose types are string
if (!table.getPartitionKeys().get(partitionColumnIndex)
    .getType().equals(org.apache.hadoop.hive.serde.Constants.STRING_TYPE_NAME)) {
  throw new MetaException(
      "Filtering is supported only on partition keys of type string");
}
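Ashutosh's example can be reproduced directly. This is a sketch of the failure mode he describes, not Hive's code: comparing numeric partition values as strings admits the wrong partitions.

```java
import java.util.ArrayList;
import java.util.List;

// Reproduces the failure mode described above (illustration, not Hive code):
// partition values stored as strings compare lexicographically, so a pushed
// "p < 2" over int-typed partitions 1..11 matches "1", "10", and "11".
public class LexicographicFilter {
    public static List<String> lessThan(List<String> storedValues, String bound) {
        List<String> out = new ArrayList<>();
        for (String v : storedValues) {
            if (v.compareTo(bound) < 0) { // string comparison, not numeric
                out.add(v);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> partitions = new ArrayList<>();
        for (int p = 1; p <= 11; p++) {
            partitions.add(String.valueOf(p)); // how the metastore stores them
        }
        // Numeric intent "p < 2" should match only partition 1.
        System.out.println(lessThan(partitions, "2")); // [1, 10, 11]
    }
}
```

An equality predicate is unaffected because string equality and numeric equality agree on canonical values, which is why it can still be pushed down.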
Fwd: Hive with JDBC
-- Forwarded message --
From: hadoop hive hadooph...@gmail.com
Date: Fri, Mar 16, 2012 at 2:04 PM
Subject: Hive with JDBC
To: u...@hive.apache.org

Hi folks,

I'm facing a problem: when I fire a query through Java code, it returns around half a million records, which leaves the result set in a hung state. Please advise if there is any solution for this.

Regards,
Vikas Srivastva
Hive-trunk-h0.21 - Build # 1314 - Still Failing
Changes for Build #1312
[cws] HIVE-2856. Fix TestCliDriver escape1.q failure on MR2 (Zhenxiao Luo via cws)

Changes for Build #1313
[cws] HIVE-2815 [jira] Filter pushdown in hbase for keys stored in binary format (Ashutosh Chauhan via Carl Steinbach)
[cws] HIVE-2778 [jira] Fail on table sampling (Navis Ryu via Carl Steinbach)

Changes for Build #1314
[hashutosh] HIVE-2835: Change default configuration for hive.exec.dynamic.partition (Owen O'Malley via hashutosh)
[namit] HIVE-2872 Store which configs the user has explicitly changed (Kevin Wilfong via namit)

1 tests failed.
FAILED: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception. See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get more logs.
    at junit.framework.Assert.fail(Assert.java:50)
    at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10340)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[jira] [Commented] (HIVE-2835) Change default configuration for hive.exec.dynamic.partition
[ https://issues.apache.org/jira/browse/HIVE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231135#comment-13231135 ]

Hudson commented on HIVE-2835:
------------------------------

Integrated in Hive-trunk-h0.21 #1314 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1314/])
HIVE-2835: Change default configuration for hive.exec.dynamic.partition (Owen O'Malley via hashutosh) (Revision 1301348)

Result = FAILURE
hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301348
Files :
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template

Change default configuration for hive.exec.dynamic.partition
------------------------------------------------------------
                Key: HIVE-2835
                URL: https://issues.apache.org/jira/browse/HIVE-2835
            Project: Hive
         Issue Type: Improvement
           Reporter: Owen O'Malley
           Assignee: Owen O'Malley
            Fix For: 0.9.0
        Attachments: HIVE-2835.D2157.1.patch, HIVE-2835.D2157.2.patch

I think we should enable dynamic partitions by default.
[jira] [Commented] (HIVE-2872) Store which configs the user has explicitly changed
[ https://issues.apache.org/jira/browse/HIVE-2872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231136#comment-13231136 ]

Hudson commented on HIVE-2872:
------------------------------

Integrated in Hive-trunk-h0.21 #1314 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1314/])
HIVE-2872 Store which configs the user has explicitly changed (Kevin Wilfong via namit) (Revision 1301347)

Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301347
Files :
* /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/SetProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/VerifyOverriddenConfigsHook.java
* /hive/trunk/ql/src/test/queries/clientpositive/overridden_confs.q
* /hive/trunk/ql/src/test/results/clientpositive/overridden_confs.q.out

Store which configs the user has explicitly changed
---------------------------------------------------
                Key: HIVE-2872
                URL: https://issues.apache.org/jira/browse/HIVE-2872
            Project: Hive
         Issue Type: Improvement
           Reporter: Kevin Wilfong
           Assignee: Kevin Wilfong
        Attachments: HIVE-2872.D2337.1.patch, HIVE-2872.D2337.2.patch

It would be useful to keep track of which config variables the user has explicitly changed from the values which are either default or loaded from hive-site.xml. These include config variables set using the hiveconf argument to the CLI, and via the SET command. This could be used to prevent Hive from changing a config variable which has been explicitly set by the user, and also potentially for logging to help with later debugging of failed queries.
[jira] [Updated] (HIVE-2863) Ambiguous table name or column reference message displays when table and column names are the same
[ https://issues.apache.org/jira/browse/HIVE-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navis updated HIVE-2863:
------------------------
    Assignee: Navis
      Status: Patch Available  (was: Open)

Passed all tests.

Ambiguous table name or column reference message displays when table and column names are the same
--------------------------------------------------------------------------------------------------
                Key: HIVE-2863
                URL: https://issues.apache.org/jira/browse/HIVE-2863
            Project: Hive
         Issue Type: Bug
           Reporter: Mauro Cazzari
           Assignee: Navis
        Attachments: HIVE-2863.D2361.1.patch

Given the following table:

CREATE TABLE `Y` (`y` DOUBLE) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;

the following query fails:

SELECT `Y`.`y` FROM `Y` WHERE ( `y` = 1 )

ERROR: java.sql.SQLException: Query returned non-zero code: 10, cause: FAILED: Error in semantic analysis: Line 1:36 Ambiguous table alias or column reference '`y`'
ERROR: Unable to execute Hadoop query.
ERROR: Prepare error. SQL statement: SELECT `Y`.`y` FROM `Y` WHERE ( `y` = 1 ).

The problem goes away if the table and column names do not match.
[jira] [Commented] (HIVE-2797) Make the IP address of a Thrift client available to HMSHandler.
[ https://issues.apache.org/jira/browse/HIVE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231249#comment-13231249 ] Phabricator commented on HIVE-2797: --- ashutoshc has requested changes to the revision HIVE-2797 [jira] Make the IP address of a Thrift client available to HMSHandler.. I saw multiple failures in metastore tests. REVISION DETAIL https://reviews.facebook.net/D1701 BRANCH svn Make the IP address of a Thrift client available to HMSHandler. --- Key: HIVE-2797 URL: https://issues.apache.org/jira/browse/HIVE-2797 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2797.D1701.1.patch, HIVE-2797.D1701.2.patch, HIVE-2797.D1701.3.patch, HIVE-2797.D1701.4.patch Currently, in unsecured mode, metastore Thrift calls are, from the HMSHandler's point of view, anonymous. If we expose the IP address of the Thrift client to the HMSHandler from the Processor, this will help to give some context, in particular for audit logging, of where the call is coming from. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2503) HiveServer should provide per session configuration
[ https://issues.apache.org/jira/browse/HIVE-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-2503: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Navis! HiveServer should provide per session configuration --- Key: HIVE-2503 URL: https://issues.apache.org/jira/browse/HIVE-2503 Project: Hive Issue Type: Bug Components: CLI, Server Infrastructure Affects Versions: 0.9.0 Reporter: Navis Assignee: Navis Fix For: 0.9.0 Attachments: HIVE-2503.1.patch.txt Currently ThriftHiveProcessorFactory returns the same HiveConf instance to HiveServerHandler, making it impossible to use per-session configuration. Just wrapping 'conf' -> 'new HiveConf(conf)' seemed to solve this problem. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
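The actual fix is in Java (the processor factory wraps the shared conf in `new HiveConf(conf)` per connection). The same copy-per-session pattern can be sketched in a few lines of Python; the class names here are illustrative, not Hive's.

```python
import copy

class SessionHandlerFactory:
    """Sketch of the HIVE-2503 fix: hand each new session its own copy
    of the base configuration, so one client's SET commands cannot leak
    into another session. (Hypothetical names; the real fix wraps the
    shared conf with `new HiveConf(conf)` in ThriftHiveProcessorFactory.)"""

    def __init__(self, base_conf):
        self.base_conf = base_conf

    def new_session(self):
        # Copy, don't share: the Java equivalent of `new HiveConf(conf)`.
        return copy.deepcopy(self.base_conf)


factory = SessionHandlerFactory({"hive.exec.reducers.max": "999"})
s1 = factory.new_session()
s2 = factory.new_session()
s1["hive.exec.reducers.max"] = "10"   # SET in session 1 only
```

With the pre-patch behavior (returning the shared dict from `new_session`), the assignment in session 1 would have been visible to session 2 as well, which is exactly the bug the patch removes.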
[jira] [Updated] (HIVE-2850) Remove zero length files
[ https://issues.apache.org/jira/browse/HIVE-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HIVE-2850: Status: Patch Available (was: Open) Tests pass with files removed. Remove zero length files Key: HIVE-2850 URL: https://issues.apache.org/jira/browse/HIVE-2850 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Attachments: HIVE-2850.D2163.1.patch There are also zero-length non-source files that need to be removed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2871) Add a new hook to run at the beginning and end of the Driver.run method
[ https://issues.apache.org/jira/browse/HIVE-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Jain updated HIVE-2871: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed. Thanks Kevin Add a new hook to run at the beginning and end of the Driver.run method --- Key: HIVE-2871 URL: https://issues.apache.org/jira/browse/HIVE-2871 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2871.D2331.1.patch, HIVE-2871.D2331.2.patch, HIVE-2871.D2331.3.patch Driver.run is the highest level method which all queries go through, whether they come from Hive Server, the CLI, or any other entry. We also do not have any hooks before the compilation method is called, and having hooks in Driver.run would provide this. Having hooks in Driver.run will allow, for example, being able to overwrite config values used throughout query processing, including compilation, and at the other end, cleaning up any resources/logging any final values just before returning to the user. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
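The begin/end hook pattern described above can be sketched as follows. The interface and key names are hypothetical stand-ins; real Hive hooks are Java classes registered by name in the configuration.

```python
class RunHook:
    """Hypothetical stand-in for the hook interface HIVE-2871 adds
    around Driver.run; not Hive's actual API."""
    def pre_run(self, ctx): pass
    def post_run(self, ctx): pass


class ConfOverrideHook(RunHook):
    # Mirrors the use cases from the JIRA description: overwrite config
    # values before compilation, and log final state before returning.
    def pre_run(self, ctx):
        ctx["conf"]["example.conf.key"] = "overridden"

    def post_run(self, ctx):
        ctx["log"].append("final conf: %s" % ctx["conf"])


def driver_run(query, hooks, conf):
    """Toy Driver.run: hooks fire before compilation and just before
    returning to the user, so they see every query regardless of entry
    point (CLI, Hive Server, ...)."""
    ctx = {"conf": dict(conf), "log": []}
    for h in hooks:
        h.pre_run(ctx)                      # before compilation
    ctx["log"].append("compiled and executed: %s" % query)
    for h in hooks:
        h.post_run(ctx)                     # just before returning
    return ctx


ctx = driver_run("SELECT 1", [ConfOverrideHook()], {"example.conf.key": "default"})
```

Because `driver_run` is the single entry point, a hook placed here sees the query both before compilation and after execution, which is what the existing pre/post execution hooks could not offer.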
[jira] [Commented] (HIVE-2850) Remove zero length files
[ https://issues.apache.org/jira/browse/HIVE-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231321#comment-13231321 ] Phabricator commented on HIVE-2850: --- ashutoshc has accepted the revision HIVE-2850 [jira] Remove zero length files. Thanks, Owen, for running tests. Will commit soon. REVISION DETAIL https://reviews.facebook.net/D2163 BRANCH emptyfile Remove zero length files Key: HIVE-2850 URL: https://issues.apache.org/jira/browse/HIVE-2850 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Attachments: HIVE-2850.D2163.1.patch There are also zero-length non-source files that need to be removed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2864) If hive history file's directory doesn't exist don't crash
[ https://issues.apache.org/jira/browse/HIVE-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231323#comment-13231323 ] Phabricator commented on HIVE-2864: --- njain has accepted the revision HIVE-2864 [jira] If hive history file's directory doesn't exist don't crash. REVISION DETAIL https://reviews.facebook.net/D2265 BRANCH svn If hive history file's directory doesn't exist don't crash -- Key: HIVE-2864 URL: https://issues.apache.org/jira/browse/HIVE-2864 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2864.D2265.1.patch, HIVE-2864.D2265.2.patch Currently, if the history file's directory does not exist the Hive client crashes. Instead, since this is not a vital feature, it should just display a warning to the user and continue without it. This will become more important once the directory becomes configurable, see: https://issues.apache.org/jira/browse/HIVE-1708 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2850) Remove zero length files
[ https://issues.apache.org/jira/browse/HIVE-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-2850: --- Resolution: Fixed Fix Version/s: 0.9.0 Status: Resolved (was: Patch Available) Committed. Thanks, Owen! Remove zero length files Key: HIVE-2850 URL: https://issues.apache.org/jira/browse/HIVE-2850 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.9.0 Attachments: HIVE-2850.D2163.1.patch There are also zero-length non-source files that need to be removed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2850) Remove zero length files
[ https://issues.apache.org/jira/browse/HIVE-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231338#comment-13231338 ] Phabricator commented on HIVE-2850: --- omalley has committed the revision HIVE-2850 [jira] Remove zero length files. Change committed by hashutosh. REVISION DETAIL https://reviews.facebook.net/D2163 COMMIT https://reviews.facebook.net/rHIVE1301629 Remove zero length files Key: HIVE-2850 URL: https://issues.apache.org/jira/browse/HIVE-2850 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.9.0 Attachments: HIVE-2850.D2163.1.patch There are also zero-length non-source files that need to be removed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2831) TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-2831: --- Resolution: Fixed Fix Version/s: 0.9.0 Status: Resolved (was: Patch Available) Committed. Thanks, Carl! TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23 Key: HIVE-2831 URL: https://issues.apache.org/jira/browse/HIVE-2831 Project: Hive Issue Type: Bug Components: Tests Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0 Attachments: HIVE-2831.1.patch.txt, HIVE-2831.D2049.1.patch, HIVE-2831.D2049.1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2831) TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231349#comment-13231349 ] Phabricator commented on HIVE-2831: --- cwsteinbach has committed the revision HIVE-2831 [jira] Mask FsShell output in QTestUtil. Change committed by hashutosh. REVISION DETAIL https://reviews.facebook.net/D2049 COMMIT https://reviews.facebook.net/rHIVE1301630 TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23 Key: HIVE-2831 URL: https://issues.apache.org/jira/browse/HIVE-2831 Project: Hive Issue Type: Bug Components: Tests Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0 Attachments: HIVE-2831.1.patch.txt, HIVE-2831.D2049.1.patch, HIVE-2831.D2049.1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2797) Make the IP address of a Thrift client available to HMSHandler.
[ https://issues.apache.org/jira/browse/HIVE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Wilfong updated HIVE-2797: Status: Open (was: Patch Available) Make the IP address of a Thrift client available to HMSHandler. --- Key: HIVE-2797 URL: https://issues.apache.org/jira/browse/HIVE-2797 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2797.D1701.1.patch, HIVE-2797.D1701.2.patch, HIVE-2797.D1701.3.patch, HIVE-2797.D1701.4.patch Currently, in unsecured mode, metastore Thrift calls are, from the HMSHandler's point of view, anonymous. If we expose the IP address of the Thrift client to the HMSHandler from the Processor, this will help to give some context, in particular for audit logging, of where the call is coming from. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HIVE-2876) Need more server side authorization for Hive Metastore operations
Need more server side authorization for Hive Metastore operations - Key: HIVE-2876 URL: https://issues.apache.org/jira/browse/HIVE-2876 Project: Hive Issue Type: Bug Components: Metastore Reporter: Rohini Palaniswamy Currently the metastore client performs the authorization by checking hdfs acls before talking to metastore server which is controlled by hive.security.authorization.enabled setting. Server only does authorization checks for drop operations (HIVE-1943). Need authorization checks for add and alter operations on the server side for better security. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2471) Add timestamp column with index to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2471: -- Attachment: HIVE-2471.D2367.1.patch kevinwilfong requested code review of HIVE-2471 [jira] Add timestamp column with index to the partition stats table.. Reviewers: JIRA https://issues.apache.org/jira/browse/HIVE-2471 Added a timestamp column to the stats table. It defaults to the current timestamp on inserts. I also updated the update query to update the timestamp, since derby does not support the on update option as far as I can tell. I modified the insert query to specify the columns it's inserting to. This is not only necessary to prevent the query from inserting into the timestamp column, it is safer in general. Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. I am also adding an index on this column. This will allow us to create scripts that go through periodically and clean out old entries from the table. The index will help to keep the runtime of these scripts short, and hence reduce the amount of time they need to lock the table/indexes for. TEST PLAN EMPTY REVISION DETAIL https://reviews.facebook.net/D2367 AFFECTED FILES ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsSetupConstants.java ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java MANAGE HERALD DIFFERENTIAL RULES https://reviews.facebook.net/herald/view/differential/ WHY DID I GET THIS EMAIL? https://reviews.facebook.net/herald/transcript/5253/ Tip: use the X-Herald-Rules header to filter Herald messages in your client. 
Add timestamp column with index to the partition stats table. - Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. I am also adding an index on this column. This will allow us to create scripts that go through periodically and clean out old entries from the table. The index will help to keep the runtime of these scripts short, and hence reduce the amount of time they need to lock the table/indexes for. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
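The schema change described above (a timestamp column that defaults to the current time, an index on it, inserts that name their columns explicitly, and updates that set the timestamp themselves because Derby has no ON UPDATE clause) can be sketched as follows. This uses SQLite in place of Derby for a self-contained example, and the table and column names are illustrative, not the patch's actual identifiers.

```python
import sqlite3

# Sketch of the HIVE-2471 pattern, using SQLite in place of Derby
# (syntax differs slightly; names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE partition_stats_example (
        id         TEXT,
        row_count  INTEGER,
        last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
# Index on the timestamp keeps the periodic cleanup scans cheap.
conn.execute("CREATE INDEX idx_last_updated "
             "ON partition_stats_example (last_updated)")

# The insert names its columns explicitly, so the timestamp default applies.
conn.execute("INSERT INTO partition_stats_example (id, row_count) "
             "VALUES (?, ?)", ("ds=2012-03-16", 42))

# Derby lacks ON UPDATE, so the update statement sets the timestamp itself.
conn.execute("UPDATE partition_stats_example "
             "SET row_count = ?, last_updated = CURRENT_TIMESTAMP "
             "WHERE id = ?", (43, "ds=2012-03-16"))

# A periodic cleanup script can then drop stale entries left behind by
# interrupted runs, without touching rows that were recently updated.
conn.execute("DELETE FROM partition_stats_example "
             "WHERE last_updated < datetime('now', '-7 days')")
```

The cleanup DELETE is what keeps the table from growing without bound when entries are orphaned by exceptions or keyboard interrupts.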
[jira] [Updated] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Wilfong updated HIVE-2471: Description: Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. I am also adding an index on this column. This will allow us to create scripts that go through periodically and clean out old entries from the table. (was: Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. I am also adding an index on this column. This will allow us to create scripts that go through periodically and clean out old entries from the table. The index will help to keep the runtime of these scripts short, and hence reduce the amount of time they need to lock the table/indexes for.) Summary: Add timestamp column to the partition stats table. (was: Add timestamp column with index to the partition stats table.) Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. 
These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. I am also adding an index on this column. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Wilfong updated HIVE-2471: Description: Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. (was: Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. I am also adding an index on this column. This will allow us to create scripts that go through periodically and clean out old entries from the table.) Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. 
This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Wilfong updated HIVE-2471: Status: Patch Available (was: Open) Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Sorting algorithm
On Fri, Mar 16, 2012 at 6:05 PM, indrani gorti indrani.go...@gmail.com wrote: Hi Which is the sorting algorithm used in map-reduce to sort the data set in the shuffling stage, i.e. after the map phase for each split of the entire dataset? Take a look at Chris Douglas' presentation on the sort. Slides: http://www.slideshare.net/hadoopusergroup/ordered-record-collection Video: http://developer.yahoo.com/blogs/hadoop/posts/2010/01/hadoop_bay_area_january_2010_u/ The original in-memory sort is a quicksort. After that it is a merge sort. -- Owen
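The two-phase scheme Owen describes (sort each in-memory buffer, then merge the sorted runs) can be sketched with toy sizes; the function below is only an illustration, not Hadoop's implementation, and Python's built-in `sorted` stands in for the quicksort used on each spill.

```python
import heapq

def external_sort(records, buffer_size=3):
    """Toy sketch of map-side sorting: each in-memory buffer is sorted
    on its own (a quicksort in Hadoop), then the resulting sorted runs
    are combined with a k-way merge (the merge-sort phase)."""
    runs = []
    for i in range(0, len(records), buffer_size):
        run = sorted(records[i:i + buffer_size])  # per-spill in-memory sort
        runs.append(run)
    return list(heapq.merge(*runs))               # merge of sorted runs

external_sort([5, 3, 8, 1, 9, 2, 7])  # → [1, 2, 3, 5, 7, 8, 9]
```

In the real framework the runs are spill files on disk and the merge streams them, but the ordering logic is the same.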
Re: Sorting algorithm
Thanks a lot Owen! :-) On Fri, Mar 16, 2012 at 2:49 PM, Owen O'Malley omal...@apache.org wrote: On Fri, Mar 16, 2012 at 6:05 PM, indrani gorti indrani.go...@gmail.com wrote: Hi Which is the sorting algorithm used in map-reduce to sort the data set in the shuffling stage, i.e. after the map phase for each split of the entire dataset? Take a look at Chris Douglas' presentation on the sort. Slides: http://www.slideshare.net/hadoopusergroup/ordered-record-collection Video: http://developer.yahoo.com/blogs/hadoop/posts/2010/01/hadoop_bay_area_january_2010_u/ The original in-memory sort is a quicksort. After that it is a merge sort. -- Owen -- Indrani Gorti
[jira] [Updated] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2471: -- Attachment: HIVE-2471.D2367.2.patch kevinwilfong updated the revision HIVE-2471 [jira] Add timestamp column with index to the partition stats table.. Reviewers: JIRA, njain Changed the name of the stats table so that this update will apply automatically and immediately; otherwise the update command will fail on old schemas. Also introduced versioning to the name, which I think is better than the old method of coming up with a new combination of (PARTITION, PART) and (STATISTICS, STATS). REVISION DETAIL https://reviews.facebook.net/D2367 AFFECTED FILES ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsSetupConstants.java ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch, HIVE-2471.D2367.2.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2865) hive-config.sh should honor HIVE_HOME env
[ https://issues.apache.org/jira/browse/HIVE-2865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231556#comment-13231556 ] Ashutosh Chauhan commented on HIVE-2865: +1 looks good. will commit if tests pass. hive-config.sh should honor HIVE_HOME env -- Key: HIVE-2865 URL: https://issues.apache.org/jira/browse/HIVE-2865 Project: Hive Issue Type: Improvement Affects Versions: 0.8.0 Reporter: Giridharan Kesavan Assignee: Giridharan Kesavan Attachments: HIVE-2865.patch hive-config.sh should honor HIVE_HOME env variable if set. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
Hive-trunk-h0.21 - Build # 1315 - Still Failing
Changes for Build #1312 [cws] HIVE-2856. Fix TestCliDriver escape1.q failure on MR2 (Zhenxiao Luo via cws) Changes for Build #1313 [cws] HIVE-2815 [jira] Filter pushdown in hbase for keys stored in binary format (Ashutosh Chauhan via Carl Steinbach) Summary: Further support for pushdown on keys stored in binary format This patch enables filter pushdown for keys stored in binary format in hbase Test Plan: Included a new test case. Reviewers: JIRA, jsichi, njain, cwsteinbach Reviewed By: cwsteinbach Differential Revision: https://reviews.facebook.net/D1875 [cws] HIVE-2778 [jira] Fail on table sampling (Navis Ryu via Carl Steinbach) Summary: HIVE-2778 fix NPE on table sampling Trying table sampling on any non-empty table throws NPE. This does not occur by test on mini-MR. select count(*) from emp tablesample (0.1 percent); Total MapReduce jobs = 1 Launching Job 1 out of 1 Number of reduce tasks determined at compile time: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=number In order to limit the maximum number of reducers: set hive.exec.reducers.max=number In order to set a constant number of reducers: set mapred.reduce.tasks=number java.lang.NullPointerException at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.sampleSplits(CombineHiveInputFormat.java:450) at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:403) at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971) at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963) at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807) at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:432) at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:186) Job Submission failed with exception 'java.lang.NullPointerException(null)' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask Test Plan: EMPTY Reviewers: JIRA, cwsteinbach Reviewed By: cwsteinbach Differential Revision: https://reviews.facebook.net/D1593 Changes for Build #1314 [hashutosh] HIVE-2835: Change default configuration for hive.exec.dynamic.partition (Owen Omalley via hashutosh) [namit] HIVE-2872 Store which configs the user has explicitly changed (Kevin Wilfong via namit) Changes for Build #1315 [hashutosh] HIVE-2503: HiveServer should 
provide per session configuration (navis via hashutosh) 1 tests failed. REGRESSION: org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:50) at org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8(TestMinimrCliDriver.java:607) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at
[jira] [Commented] (HIVE-2503) HiveServer should provide per session configuration
[ https://issues.apache.org/jira/browse/HIVE-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231560#comment-13231560 ] Hudson commented on HIVE-2503: -- Integrated in Hive-trunk-h0.21 #1315 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1315/]) HIVE-2503: HiveServer should provide per session configuration (navis via hashutosh) (Revision 1301568) Result = FAILURE hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301568 Files : * /hive/trunk/service/src/java/org/apache/hadoop/hive/service/HiveServer.java * /hive/trunk/service/src/test/org/apache/hadoop/hive/service/TestHiveServerSessions.java HiveServer should provide per session configuration --- Key: HIVE-2503 URL: https://issues.apache.org/jira/browse/HIVE-2503 Project: Hive Issue Type: Bug Components: CLI, Server Infrastructure Affects Versions: 0.9.0 Reporter: Navis Assignee: Navis Fix For: 0.9.0 Attachments: HIVE-2503.1.patch.txt Currently ThriftHiveProcessorFactory returns the same HiveConf instance to every HiveServerHandler, making it impossible to use per-session configuration. Just wrapping 'conf' as 'new HiveConf(conf)' seems to solve this problem. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
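The one-line remedy described above — constructing a fresh HiveConf per handler instead of sharing one — can be illustrated with a minimal, self-contained analogue. java.util.Properties stands in for HiveConf here, and the class and method names are invented for this sketch; the point is only that copy-on-construct isolates each session's SET commands:

```java
import java.util.Properties;

// Minimal analogue of the HIVE-2503 fix: instead of handing every session
// handler the same shared configuration object, each handler receives a copy
// constructed from it, so per-session changes cannot leak between sessions.
// Properties stands in for HiveConf; all names here are illustrative only.
public class PerSessionConf {
    static Properties newSessionConf(Properties shared) {
        Properties session = new Properties();
        session.putAll(shared); // copy-on-construct, like new HiveConf(conf)
        return session;
    }

    public static void main(String[] args) {
        Properties shared = new Properties();
        shared.setProperty("hive.exec.parallel", "false");

        Properties sessionA = newSessionConf(shared);
        Properties sessionB = newSessionConf(shared);
        sessionA.setProperty("hive.exec.parallel", "true"); // SET in session A only

        // Session B and the shared defaults are unaffected.
        System.out.println(sessionB.getProperty("hive.exec.parallel")); // prints "false"
    }
}
```

With the pre-fix behavior (every handler holding the same object), session A's SET would have been visible to session B.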
Hive-0.8.1-SNAPSHOT-h0.21 - Build # 224 - Failure
Changes for Build #224 1 tests failed. REGRESSION: org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:50) at org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8(TestMinimrCliDriver.java:578) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:243) at junit.framework.TestSuite.run(TestSuite.java:238) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) The Apache Jenkins build system has built Hive-0.8.1-SNAPSHOT-h0.21 (build #224) Status: Failure Check console output at https://builds.apache.org/job/Hive-0.8.1-SNAPSHOT-h0.21/224/ to view the results.
[jira] [Updated] (HIVE-2875) Renaming partition changes partition location prefix
[ https://issues.apache.org/jira/browse/HIVE-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Jain updated HIVE-2875: - Status: Open (was: Patch Available) Renaming partition changes partition location prefix Key: HIVE-2875 URL: https://issues.apache.org/jira/browse/HIVE-2875 Project: Hive Issue Type: Bug Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2875.D2349.1.patch Renaming a partition changes the location of the partition to the default location of the table, followed by the partition specification. It should just change the partition specification of the path. If the path does not end with the old partition specification, we should probably throw an exception because renaming a partition should not change the path so dramatically, and not changing the path to reflect the new partition name could leave the partition in a very confusing state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2875) Renaming partition changes partition location prefix
[ https://issues.apache.org/jira/browse/HIVE-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231597#comment-13231597 ] Namit Jain commented on HIVE-2875: -- A lot of tests are failing for me. Can you debug this? Renaming partition changes partition location prefix Key: HIVE-2875 URL: https://issues.apache.org/jira/browse/HIVE-2875 Project: Hive Issue Type: Bug Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2875.D2349.1.patch Renaming a partition changes the location of the partition to the default location of the table, followed by the partition specification. It should just change the partition specification of the path. If the path does not end with the old partition specification, we should probably throw an exception because renaming a partition should not change the path so dramatically, and not changing the path to reflect the new partition name could leave the partition in a very confusing state. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
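The renaming rule argued for in this issue — swap only the trailing partition specification, and fail loudly when the location does not end with the old spec — can be sketched as follows. This illustrates the described behavior only; it is not Hive's actual implementation, and the class and method names are invented:

```java
// Sketch of the partition-rename rule described in HIVE-2875 (illustrative,
// not Hive's code): the new location should differ from the old only in the
// trailing partition specification. If the old location does not end with
// the old spec, throw rather than silently relocating the partition.
public class RenamePartitionPath {
    static String renamedLocation(String oldLocation, String oldSpec, String newSpec) {
        if (!oldLocation.endsWith("/" + oldSpec)) {
            throw new IllegalStateException(
                "partition location does not end with its partition spec: " + oldLocation);
        }
        // Keep the prefix (including the slash) and substitute only the spec.
        return oldLocation.substring(0, oldLocation.length() - oldSpec.length()) + newSpec;
    }
}
```

For example, renaming `ds=2012-03-15` to `ds=2012-03-16` turns `/warehouse/tbl/ds=2012-03-15` into `/warehouse/tbl/ds=2012-03-16`, leaving the prefix untouched.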
[jira] [Commented] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231603#comment-13231603 ] Phabricator commented on HIVE-2471: --- njain has commented on the revision HIVE-2471 [jira] Add timestamp column with index to the partition stats table.. INLINE COMMENTS ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsSetupConstants.java:26 Write a big comment here that it is the user's responsibility to delete the old table ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java:128 I am not sure this will work - I am assuming this is invoked by StatsAggregator, but the data is inserted by StatsPublisher. The timestamp will be different in the two places REVISION DETAIL https://reviews.facebook.net/D2367 Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch, HIVE-2471.D2367.2.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231621#comment-13231621 ] Phabricator commented on HIVE-2471: --- kevinwilfong has commented on the revision HIVE-2471 [jira] Add timestamp column with index to the partition stats table.. INLINE COMMENTS ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsSetupConstants.java:26 Will do ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java:128 This is invoked by the StatsPublisher, it is used for the case where a row was not deleted by a previous StatsPublisher, otherwise there is a conflict between the primary keys. The StatsAggregator only invokes SELECT and DELETE statements. The aggregated stats are added to the metastore via a call to the metastore's alter_table method. REVISION DETAIL https://reviews.facebook.net/D2367 Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch, HIVE-2471.D2367.2.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2471: -- Attachment: HIVE-2471.D2367.3.patch kevinwilfong updated the revision HIVE-2471 [jira] Add timestamp column with index to the partition stats table.. Reviewers: JIRA, njain Added a big comment saying it is up to the Hive administrator to drop old partition stats tables. REVISION DETAIL https://reviews.facebook.net/D2367 AFFECTED FILES ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsSetupConstants.java ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch, HIVE-2471.D2367.2.patch, HIVE-2471.D2367.3.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2471) Add timestamp column to the partition stats table.
[ https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231643#comment-13231643 ] Phabricator commented on HIVE-2471: --- njain has commented on the revision HIVE-2471 [jira] Add timestamp column with index to the partition stats table.. INLINE COMMENTS ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java:128 Let us discuss offline - I am not sure I understood REVISION DETAIL https://reviews.facebook.net/D2367 Add timestamp column to the partition stats table. -- Key: HIVE-2471 URL: https://issues.apache.org/jira/browse/HIVE-2471 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2471.1.patch.txt, HIVE-2471.D2367.1.patch, HIVE-2471.D2367.2.patch, HIVE-2471.D2367.3.patch Occasionally, when entries are added to the partition stats table the program is halted before it can delete those entries, by an exception, keyboard interrupt, etc. These build up to the point where the table gets very large, and it hurts the performance of the update statement which is often called. In order to fix this, I am adding a column to the table which is auto-populated with the current timestamp. This will allow us to create scripts that go through periodically and clean out old entries from the table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
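The cleanup that the new auto-populated timestamp column enables could look roughly like the statement built below. The table and column names are hypothetical (the real schema lives in JDBCStatsSetupConstants), and the retention window is the administrator's choice:

```java
// Illustrative only: with a timestamp column on the stats table, a periodic
// job can drop rows left behind by interrupted queries. The table name
// "PARTITION_STAT_TBL" and column name "LAST_UPDATED" are assumptions for
// this sketch, not Hive's actual schema.
public class StatsCleanup {
    static String cleanupSql(String table, String tsColumn, int retentionHours) {
        return "DELETE FROM " + table
             + " WHERE " + tsColumn + " < NOW() - INTERVAL '" + retentionHours + "' HOUR";
    }

    public static void main(String[] args) {
        // A nightly cron job might run this against the stats database.
        System.out.println(cleanupSql("PARTITION_STAT_TBL", "LAST_UPDATED", 24));
    }
}
```

The exact interval syntax varies by JDBC backend; the point is that old rows become identifiable once every insert carries a timestamp.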
[jira] [Updated] (HIVE-2865) hive-config.sh should honor HIVE_HOME env
[ https://issues.apache.org/jira/browse/HIVE-2865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-2865: --- Fix Version/s: 0.9.0 Status: Patch Available (was: Open) Committed to trunk. Thanks, Giri! hive-config.sh should honor HIVE_HOME env -- Key: HIVE-2865 URL: https://issues.apache.org/jira/browse/HIVE-2865 Project: Hive Issue Type: Improvement Affects Versions: 0.8.0 Reporter: Giridharan Kesavan Assignee: Giridharan Kesavan Fix For: 0.9.0 Attachments: HIVE-2865.patch hive-config.sh should honor HIVE_HOME env variable if set. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HIVE-2784) Integrating with MapReduce2 gets an NPE thrown when executing a query with a TABLESAMPLE(x percent) clause
[ https://issues.apache.org/jira/browse/HIVE-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach resolved HIVE-2784. -- Resolution: Duplicate Integrating with MapReduce2 get NPE throwed when executing a query with a TABLESAMPLE(x percent) clause - Key: HIVE-2784 URL: https://issues.apache.org/jira/browse/HIVE-2784 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Carl Steinbach the following TestCliDriver testcases fail: sample_islocalmode_hook split_sample [junit] java.lang.NullPointerException [junit] at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.sampleSplits(CombineHiveInputFormat.java:450) [junit] at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:403) [junit] at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:472) [junit] at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:464) [junit] at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:360) [junit] at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215) [junit] at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1212) [junit] at java.security.AccessController.doPrivileged(Native Method) [junit] at javax.security.auth.Subject.doAs(Subject.java:396) [junit] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157) [junit] at org.apache.hadoop.mapreduce.Job.submit(Job.java:1212) [junit] at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:592) [junit] at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:587) [junit] at java.security.AccessController.doPrivileged(Native Method) [junit] at javax.security.auth.Subject.doAs(Subject.java:396) [junit] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157) [junit] at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:587) [junit] at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:452) [junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:710) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [junit] at java.lang.reflect.Method.invoke(Method.java:597) [junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:200) There are other qfiles which pass which use TABLESAMPLE without specifying a percent -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HIVE-2877) TABLESAMPLE(x PERCENT) tests fail on 0.22/0.23
TABLESAMPLE(x PERCENT) tests fail on 0.22/0.23 -- Key: HIVE-2877 URL: https://issues.apache.org/jira/browse/HIVE-2877 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Carl Steinbach Assignee: Carl Steinbach -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2871) Add a new hook to run at the beginning and end of the Driver.run method
[ https://issues.apache.org/jira/browse/HIVE-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231762#comment-13231762 ] Hudson commented on HIVE-2871: -- Integrated in Hive-trunk-h0.21 #1316 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1316/]) HIVE-2871 Add a new hook to run at the beginning and end of the Driver.run method (Kevin Wilfong via namit) (Revision 1301610) Result = FAILURE namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301610 Files : * /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java * /hive/trunk/conf/hive-default.xml.template * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/HiveDriverRunHook.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/HiveDriverRunHookContext.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/HiveDriverRunHookContextImpl.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/VerifyHooksRunInOrder.java * /hive/trunk/ql/src/test/queries/clientpositive/hook_order.q * /hive/trunk/ql/src/test/results/clientpositive/hook_order.q.out Add a new hook to run at the beginning and end of the Driver.run method --- Key: HIVE-2871 URL: https://issues.apache.org/jira/browse/HIVE-2871 Project: Hive Issue Type: Improvement Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2871.D2331.1.patch, HIVE-2871.D2331.2.patch, HIVE-2871.D2331.3.patch Driver.run is the highest level method which all queries go through, whether they come from Hive Server, the CLI, or any other entry. We also do not have any hooks before the compilation method is called, and having hooks in Driver.run would provide this. Having hooks in Driver.run will allow, for example, being able to overwrite config values used throughout query processing, including compilation, and at the other end, cleaning up any resources/logging any final values just before returning to the user. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
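The pre/post wrapping that this hook provides around Driver.run can be mocked in a few lines. The interface and method names below are illustrative, not the exact API added by the patch; the point is the ordering — hooks fire before compilation begins and again after execution finishes:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal mock of the Driver.run hook pattern from HIVE-2871. The interface
// and method names are invented for this sketch, not Hive's exact API.
public class DriverRunHooks {
    interface RunHook {
        void preDriverRun(String command);   // runs before compilation starts
        void postDriverRun(String command);  // runs after execution finishes
    }

    static final List<String> trace = new ArrayList<>();

    static void run(String command, List<RunHook> hooks) {
        for (RunHook h : hooks) h.preDriverRun(command);  // wrap the whole run
        trace.add("compile+execute: " + command);         // stand-in for the query
        for (RunHook h : hooks) h.postDriverRun(command);
    }
}
```

A hook implementation could, for example, override config values in preDriverRun and log final state in postDriverRun, as the description suggests.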
[jira] [Commented] (HIVE-2831) TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231763#comment-13231763 ] Hudson commented on HIVE-2831: -- Integrated in Hive-trunk-h0.21 #1316 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1316/]) HIVE-2831 [jira] Mask FsShell output in QTestUtil (Carl Steinbach via Ashutosh Chauhan) Summary: HIVE-2831. Mask FsShell output in QTestUtil Test Plan: EMPTY Reviewers: JIRA, edwardcapriolo, ashutoshc Reviewed By: ashutoshc Differential Revision: https://reviews.facebook.net/D2049 (Revision 1301630) Result = FAILURE hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301630 Files : * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23 Key: HIVE-2831 URL: https://issues.apache.org/jira/browse/HIVE-2831 Project: Hive Issue Type: Bug Components: Tests Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0 Attachments: HIVE-2831.1.patch.txt, HIVE-2831.D2049.1.patch, HIVE-2831.D2049.1.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2850) Remove zero length files
[ https://issues.apache.org/jira/browse/HIVE-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231764#comment-13231764 ] Hudson commented on HIVE-2850: -- Integrated in Hive-trunk-h0.21 #1316 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1316/]) HIVE-2850 [jira] Remove zero length files (Owen O'Malley via Ashutosh Chauhan) Summary: Enter Revision Title Remove empty files There are also zero-length non-source files that need to be removed. Test Plan: EMPTY Reviewers: JIRA, ashutoshc Reviewed By: ashutoshc Differential Revision: https://reviews.facebook.net/D2163 (Revision 1301629) Result = FAILURE hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301629 Files : * /hive/trunk/data/warehouse/src/.gitignore * /hive/trunk/hwi/web/set_processor.jsp * /hive/trunk/metastore/src/gen-py/__init__.py * /hive/trunk/metastore/src/gen/thrift/gen-py/__init__.py * /hive/trunk/ql/src/gen/thrift/gen-py/__init__.py * /hive/trunk/ql/src/test/queries/clientpositive/describe_function.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_avg.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_count.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_max.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_min.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_std.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_stddev_samp.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_sum.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_var_samp.q * /hive/trunk/ql/src/test/queries/clientpositive/udaf_variance.q * /hive/trunk/ql/src/test/queries/clientpositive/udf_divider.q * /hive/trunk/ql/src/test/queries/clientpositive/udf_hour_minute_second.q * /hive/trunk/ql/src/test/queries/clientpositive/udf_json.q * /hive/trunk/ql/src/test/queries/clientpositive/udf_lpad_rpad.q * /hive/trunk/ql/src/test/results/clientpositive/describe_function.q.out * 
/hive/trunk/ql/src/test/results/clientpositive/udaf_avg.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_count.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_max.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_min.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_std.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_stddev_samp.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_sum.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_var_samp.q.out * /hive/trunk/ql/src/test/results/clientpositive/udaf_variance.q.out * /hive/trunk/ql/src/test/results/clientpositive/udf_divider.q.out * /hive/trunk/ql/src/test/results/clientpositive/udf_hour_minute_second.q.out * /hive/trunk/ql/src/test/results/clientpositive/udf_json.q.out * /hive/trunk/ql/src/test/results/clientpositive/udf_lpad_rpad.q.out * /hive/trunk/ql/src/test/results/compiler/errors/invalid_function_param1.q.out * /hive/trunk/ql/src/test/results/compiler/errors/unknown_function5.q.out * /hive/trunk/serde/src/gen-py/__init__.py * /hive/trunk/serde/src/gen/thrift/gen-py/__init__.py * /hive/trunk/service/src/gen-py/__init__.py * /hive/trunk/service/src/gen/thrift/gen-py/__init__.py Remove zero length files Key: HIVE-2850 URL: https://issues.apache.org/jira/browse/HIVE-2850 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.9.0 Attachments: HIVE-2850.D2163.1.patch There are also zero-length non-source files that need to be removed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
Hive-trunk-h0.21 - Build # 1316 - Still Failing
Changes for Build #1312 [cws] HIVE-2856. Fix TestCliDriver escape1.q failure on MR2 (Zhenxiao Luo via cws) Changes for Build #1313 [cws] HIVE-2815 [jira] Filter pushdown in hbase for keys stored in binary format (Ashutosh Chauhan via Carl Steinbach) Summary: Further support for pushdown on keys stored in binary format This patch enables filter pushdown for keys stored in binary format in hbase Test Plan: Included a new test case. Reviewers: JIRA, jsichi, njain, cwsteinbach Reviewed By: cwsteinbach Differential Revision: https://reviews.facebook.net/D1875 [cws] HIVE-2778 [jira] Fail on table sampling (Navis Ryu via Carl Steinbach) Summary: HIVE-2778 fix NPE on table sampling Trying table sampling on any non-empty table throws an NPE. This does not occur by test on mini-MR. select count(*) from emp tablesample (0.1 percent); Total MapReduce jobs = 1 Launching Job 1 out of 1 Number of reduce tasks determined at compile time: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=number In order to limit the maximum number of reducers: set hive.exec.reducers.max=number In order to set a constant number of reducers: set mapred.reduce.tasks=number java.lang.NullPointerException at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.sampleSplits(CombineHiveInputFormat.java:450) at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:403) at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971) at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963) at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807) at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:432) at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:186) Job Submission failed with exception 'java.lang.NullPointerException(null)' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask Test Plan: EMPTY Reviewers: JIRA, cwsteinbach Reviewed By: cwsteinbach Differential Revision: https://reviews.facebook.net/D1593 Changes for Build #1314 [hashutosh] HIVE-2835: Change default configuration for hive.exec.dynamic.partition (Owen O'Malley via hashutosh) [namit] HIVE-2872 Store which configs the user has explicitly changed (Kevin Wilfong via namit) Changes for Build #1315 [hashutosh] HIVE-2503: HiveServer should
provide per session configuration (navis via hashutosh) Changes for Build #1316 [hashutosh] HIVE-2831 [jira] Mask FsShell output in QTestUtil (Carl Steinbach via Ashutosh Chauhan) Summary: HIVE-2831. Mask FsShell output in QTestUtil Test Plan: EMPTY Reviewers: JIRA, edwardcapriolo, ashutoshc Reviewed By: ashutoshc Differential Revision: https://reviews.facebook.net/D2049 [hashutosh] HIVE-2850 [jira] Remove zero length files (Owen O'Malley via Ashutosh Chauhan) Summary: Enter Revision Title Remove empty files There are also zero-length non-source files that need to be removed. Test Plan: EMPTY Reviewers: JIRA, ashutoshc Reviewed By: ashutoshc Differential Revision: https://reviews.facebook.net/D2163 [namit] HIVE-2871 Add a new hook to run at the beginning and end of the Driver.run method (Kevin Wilfong via
[jira] [Updated] (HIVE-2828) make timestamp accessible in the hbase KeyValue
[ https://issues.apache.org/jira/browse/HIVE-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2828: -- Attachment: HIVE-2828.D1989.4.patch navis updated the revision HIVE-2828 [jira] make timestamp accessible in the hbase KeyValue. Reviewers: JIRA Rebased on trunk REVISION DETAIL https://reviews.facebook.net/D1989 AFFECTED FILES hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java hbase-handler/src/test/queries/hbase_timestamp.q hbase-handler/src/test/results/hbase_timestamp.q.out make timestamp accessible in the hbase KeyValue Key: HIVE-2828 URL: https://issues.apache.org/jira/browse/HIVE-2828 Project: Hive Issue Type: Improvement Components: HBase Handler Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-2828.D1989.1.patch, HIVE-2828.D1989.2.patch, HIVE-2828.D1989.3.patch, HIVE-2828.D1989.4.patch Originated from HIVE-2781 and not accepted, but I think this could be helpful to someone. By using the special column notation ':timestamp' in HBASE_COLUMNS_MAPPING, users can access the timestamp value in the hbase KeyValue. {code} CREATE TABLE hbase_table (key int, value string, time timestamp) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:string,:timestamp") {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2828) make timestamp accessible in the hbase KeyValue
[ https://issues.apache.org/jira/browse/HIVE-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-2828: Status: Patch Available (was: Open) make timestamp accessible in the hbase KeyValue Key: HIVE-2828 URL: https://issues.apache.org/jira/browse/HIVE-2828 Project: Hive Issue Type: Improvement Components: HBase Handler Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-2828.D1989.1.patch, HIVE-2828.D1989.2.patch, HIVE-2828.D1989.3.patch, HIVE-2828.D1989.4.patch Originated from HIVE-2781 and not accepted, but I think this could be helpful to someone. By using the special column notation ':timestamp' in HBASE_COLUMNS_MAPPING, users can access the timestamp value in the hbase KeyValue. {code} CREATE TABLE hbase_table (key int, value string, time timestamp) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:string,:timestamp") {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
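The ':timestamp' notation above extends the existing ':key' special column in hbase.columns.mapping. A toy parser, written as a sketch rather than the actual HBaseSerDe logic, shows how such special virtual-column entries can be picked out of the mapping string:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parsing only, not the HBaseSerDe implementation: entries in a
// mapping like ":key,cf:string,:timestamp" are either special virtual columns
// (leading-colon names) or real column-family/qualifier mappings.
public class ColumnMapping {
    static boolean isSpecial(String entry) {
        return entry.equals(":key") || entry.equals(":timestamp");
    }

    // Collect the special entries from a comma-separated mapping string.
    static List<String> specials(String mapping) {
        List<String> out = new ArrayList<>();
        for (String entry : mapping.split(",")) {
            if (isSpecial(entry)) {
                out.add(entry);
            }
        }
        return out;
    }
}
```

For the mapping in the issue description, this would report both `:key` and `:timestamp` as special, leaving `cf:string` to be handled as an ordinary column-family mapping.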