[jira] [Commented] (HIVE-2838) cleanup readentity/writeentity
[ https://issues.apache.org/jira/browse/HIVE-2838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226007#comment-13226007 ] Hudson commented on HIVE-2838: -- Integrated in Hive-trunk-h0.21 #1300 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1300/]) HIVE-2838. cleanup readentity/writeentity. (namit via kevinwilfong) (Revision 1298699) Result = FAILURE kevinwilfong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1298699 Files : * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/hooks/ReadEntity.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java cleanup readentity/writeentity -- Key: HIVE-2838 URL: https://issues.apache.org/jira/browse/HIVE-2838 Project: Hive Issue Type: Bug Reporter: Namit Jain Assignee: Namit Jain Fix For: 0.9.0 Attachments: HIVE-2838.D2193.1.patch, HIVE-2838.D2193.2.patch Ideally, there should be one common entity instead of readentity/writeentity. Unfortunately, that would be a backward-incompatible change, since users of Hive might have written their own hooks that use readentity/writeentity. We should at least create a common class, and then we can deprecate read/write entity later, in a new release. For now, I propose to make a backward-compatible change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
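The committed files (Entity.java, ReadEntity.java, WriteEntity.java) suggest the shape of the backward-compatible cleanup. As a minimal sketch (not Hive's actual classes; the field and constructor are illustrative assumptions), shared state moves into a common Entity base class while the existing ReadEntity/WriteEntity names survive as thin subclasses, so user-written hooks keep compiling:

```java
// Hypothetical sketch of the backward-compatible refactoring described in
// HIVE-2838: a common Entity base class, with the old names kept as subclasses.
class Entity {
    protected final String name; // illustrative field, not Hive's real state

    Entity(String name) { this.name = name; }

    public String getName() { return name; }
}

class ReadEntity extends Entity {
    ReadEntity(String name) { super(name); }
}

class WriteEntity extends Entity {
    WriteEntity(String name) { super(name); }
}

public class EntityDemo {
    public static void main(String[] args) {
        // A hook written against the old API still compiles unchanged...
        ReadEntity r = new ReadEntity("src_table");
        WriteEntity w = new WriteEntity("dst_table");
        // ...and new code can treat both uniformly via the base class,
        // which is what lets read/write entity be deprecated later.
        for (Entity e : new Entity[] { r, w }) {
            System.out.println(e.getName());
        }
    }
}
```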
Hive-trunk-h0.21 - Build # 1300 - Still Failing
Changes for Build #1299 [hashutosh] HIVE-1634: Allow access to Primitive types stored in binary format in HBase (Basab Maulik, Ashutosh Chauhan via hashutosh) Changes for Build #1300 [kevinwilfong] HIVE-2838. cleanup readentity/writeentity. (namit via kevinwilfong) 1 tests failed. FAILED: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:50) at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10262) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:243) at junit.framework.TestSuite.run(TestSuite.java:238) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785) The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1300) Status: Still Failing Check console output at 
https://builds.apache.org/job/Hive-trunk-h0.21/1300/ to view the results.
[jira] [Updated] (HIVE-2837) insert into external tables should not be allowed
[ https://issues.apache.org/jira/browse/HIVE-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Wilfong updated HIVE-2837: Resolution: Fixed Status: Resolved (was: Patch Available) Committed, thanks Namit. insert into external tables should not be allowed - Key: HIVE-2837 URL: https://issues.apache.org/jira/browse/HIVE-2837 Project: Hive Issue Type: Bug Reporter: Namit Jain Assignee: Namit Jain Attachments: HIVE-2837.D2211.1.patch, HIVE-2837.D2211.2.patch This is a very risky thing to allow, since external tables can point to any user location and can potentially corrupt other tables.
[jira] [Commented] (HIVE-2835) Change default configuration for hive.exec.dynamic.partition
[ https://issues.apache.org/jira/browse/HIVE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226300#comment-13226300 ] Owen O'Malley commented on HIVE-2835: - Thanks for catching the problem, Ed. I uploaded a new patch although clearly you already have the fixed patch. Change default configuration for hive.exec.dynamic.partition Key: HIVE-2835 URL: https://issues.apache.org/jira/browse/HIVE-2835 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Attachments: HIVE-2835.D2157.1.patch, HIVE-2835.D2157.2.patch I think we should enable dynamic partitions by default.
[jira] [Updated] (HIVE-2835) Change default configuration for hive.exec.dynamic.partition
[ https://issues.apache.org/jira/browse/HIVE-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2835: -- Attachment: HIVE-2835.D2157.2.patch omalley updated the revision HIVE-2835 [jira] Change default configuration for hive.exec.dynamic.partition. Reviewers: JIRA Added change to HiveConf.java REVISION DETAIL https://reviews.facebook.net/D2157 AFFECTED FILES common/src/java/org/apache/hadoop/hive/conf/HiveConf.java conf/hive-default.xml.template Change default configuration for hive.exec.dynamic.partition Key: HIVE-2835 URL: https://issues.apache.org/jira/browse/HIVE-2835 Project: Hive Issue Type: Improvement Reporter: Owen O'Malley Assignee: Owen O'Malley Attachments: HIVE-2835.D2157.1.patch, HIVE-2835.D2157.2.patch I think we should enable dynamic partitions by default.
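The affected files are HiveConf.java and conf/hive-default.xml.template. As a rough illustration of what the template change would look like (the property names are Hive's real settings, but the values and description wording below are assumptions, not quotes from the patch), flipping the default amounts to:

```xml
<!-- Illustrative fragment for conf/hive-default.xml.template.
     Property names are real; descriptions here are paraphrased. -->
<property>
  <name>hive.exec.dynamic.partition</name>
  <value>true</value>
  <description>Whether dynamic partition inserts are allowed.</description>
</property>
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>strict</value>
  <description>In strict mode, at least one static partition
  column must still be specified in a dynamic partition insert.</description>
</property>
```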
[jira] [Updated] (HIVE-2832) Cache error messages for additional logging
[ https://issues.apache.org/jira/browse/HIVE-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Jain updated HIVE-2832: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed. Thanks Kevin. Cache error messages for additional logging --- Key: HIVE-2832 URL: https://issues.apache.org/jira/browse/HIVE-2832 Project: Hive Issue Type: Improvement Components: Logging Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2832.D2025.1.patch, HIVE-2832.D2025.2.patch It would be good if we could cache logs written to SessionState.err so that they could be exposed to hooks for additional logging. This would allow logging of error messages with the queries that failed in a central location.
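The commit for this issue adds a CachingPrintStream class. A minimal sketch of the idea (this is an assumption about the approach, not Hive's actual implementation): a PrintStream that records every line it prints, so a post-execution hook can later retrieve the error output alongside the failed query:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a caching print stream in the spirit of HIVE-2832:
// lines written to it are both cached for hooks and passed through.
class CachingPrintStream extends PrintStream {
    private final List<String> output = new ArrayList<>();

    CachingPrintStream(PrintStream delegate) {
        super(delegate, true); // write-through with auto-flush
    }

    @Override
    public void println(String line) {
        output.add(line);    // cache the message for later inspection
        super.println(line); // still write to the underlying stream
    }

    List<String> getOutput() {
        return output;
    }
}

public class CachingDemo {
    public static void main(String[] args) {
        // SessionState.err could be wrapped like this (illustrative usage):
        CachingPrintStream err =
            new CachingPrintStream(new PrintStream(new ByteArrayOutputStream()));
        err.println("FAILED: semantic analysis error");
        // A hook can now log the cached messages in a central location.
        System.out.println(err.getOutput());
    }
}
```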
Hive-trunk-h0.21 - Build # 1301 - Still Failing
Changes for Build #1299 [hashutosh] HIVE-1634: Allow access to Primitive types stored in binary format in HBase (Basab Maulik, Ashutosh Chauhan via hashutosh) Changes for Build #1300 [kevinwilfong] HIVE-2838. cleanup readentity/writeentity. (namit via kevinwilfong) Changes for Build #1301 1 tests failed. FAILED: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:50) at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10262) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:243) at junit.framework.TestSuite.run(TestSuite.java:238) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785) The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1301) Status: Still Failing Check console output at 
https://builds.apache.org/job/Hive-trunk-h0.21/1301/ to view the results.
[jira] [Created] (HIVE-2859) STRING data corruption in internationalized data -- based on LANG env variable
STRING data corruption in internationalized data -- based on LANG env variable -- Key: HIVE-2859 URL: https://issues.apache.org/jira/browse/HIVE-2859 Project: Hive Issue Type: Bug Components: Configuration, Import/Export, Serializers/Deserializers, Types Affects Versions: 0.7.1 Environment: Windows / RHEL5 with LANG = en_US.CP1252 Reporter: John Gordon Fix For: 0.9.0, 0.7.1 This is a bug in Hive that is exacerbated by replatforming it to Windows without CYGWIN. Basically, it assumes that the default file.encoding is UTF8. There are something like 6-7 getBytes() calls and write() calls that don't specify the encoding. The rest specify UTF-8 explicitly, which blocks auto-detection of UTF-16 data in files with a BOM present. The mix of explicit encodings and default encoding assumptions means that Hive must be run in a JVM whose default encoding is UTF-8 and only UTF-8. When the JVM starts up, it derives the default encoding from the C runtime setlocale() call. On Linux/Unix, this would use the LANG env variable (which is almost always locale.UTF8 for machines handling internationalized data, but not guaranteed to be so). On Windows, this is derived from the user's language settings, and cannot return a UTF-8 encoding, right now. So there isn't an environment setting for Windows that would reliably provide the JVM with a set of inputs to cause it to set the default encoding to UTF-8 on startup without additional options. However, there are 2 feasible options: 1.) the JVM has a startup option -Dfile.encoding=UTF-8 which should explicitly override the default encoding detection behavior in the JVM to make it always UTF-8 regardless of the environmental configuration. This would make all deployments on all OS/environment configs behave consistently. I don't know where Hive sets the JVM options we use when it starts the service. 2.) We could add UTF8 explicitly to all the remaining getBytes() calls that need it, and make all the string I/O explicitly UTF-8 encoded. 
This is probably being changed right now as part of HIVE-1505, so we would duplicate effort and maybe make that change harder. Seems easier to trick the JVM into behaving like it is on a well-configured machine WRT default encoding instead of setting explicit encodings everywhere. So: - Pretty much any globalized strings other than Western European are going to be corrupted in the current Hive service on Windows with this bug present, because there really isn't a way to have the JVM read the environment and determine by default that UTF8 should be the default encoding. - Anyone can repro this on Linux fairly easily -- Add export LANG=en_US.CP1252 to /etc/profile to modify the global LANG default encoding to CP1252 explicitly, then restart the service and do a query over internationalized UTF-8 data. - We shouldn't rely on JVM default codepage selection if we want to support UTF-8 consistently and reliably as the default encoding. - The estimate can range wildly, but adding an explicit default encoding on startup should only take a little while if you know where to do it, theoretically. - I don't know where to update the start arguments of the JVM when the service is started, just getting into the code for the first time with this bug investigation.
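The class of bug described above is easy to demonstrate. The sketch below shows the difference between the two forms mentioned in the report: String.getBytes() with no argument uses the JVM default charset (derived from LANG / file.encoding), while getBytes(StandardCharsets.UTF_8) is deterministic. Option 2 in the report amounts to replacing the first form with the second everywhere:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String s = "caf\u00e9"; // "café", a non-ASCII string

        byte[] defaultBytes = s.getBytes();                    // LANG-dependent
        byte[] utf8Bytes = s.getBytes(StandardCharsets.UTF_8); // always 5 bytes

        System.out.println("default charset: " + Charset.defaultCharset());
        System.out.println("default-encoding length: " + defaultBytes.length);
        System.out.println("utf-8 length: " + utf8Bytes.length);
        // Under -Dfile.encoding=Cp1252 the lengths differ (4 vs 5): the 'é'
        // is one byte in CP1252 but two in UTF-8. Mixing the two encodings
        // on a round trip is exactly how STRING data gets corrupted.
    }
}
```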
Hive-0.8.1-SNAPSHOT-h0.21 - Build # 217 - Failure
Changes for Build #217 1 tests failed. REGRESSION: org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. at junit.framework.Assert.fail(Assert.java:50) at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:9440) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:243) at junit.framework.TestSuite.run(TestSuite.java:238) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) The Apache Jenkins build system has built Hive-0.8.1-SNAPSHOT-h0.21 (build #217) Status: Failure Check console output at https://builds.apache.org/job/Hive-0.8.1-SNAPSHOT-h0.21/217/ to view the results.
[jira] [Updated] (HIVE-2854) Support between filter pushdown for key ranges in hbase
[ https://issues.apache.org/jira/browse/HIVE-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-2854: Status: Patch Available (was: Open) Support between filter pushdown for key ranges in hbase --- Key: HIVE-2854 URL: https://issues.apache.org/jira/browse/HIVE-2854 Project: Hive Issue Type: Improvement Components: HBase Handler Environment: ubuntu 10.04 Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-2854.D2169.1.patch https://issues.apache.org/jira/browse/HIVE-2771 omitted 'between' operator.
[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226650#comment-13226650 ] Phabricator commented on HIVE-2646: --- abayer has commented on the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. INLINE COMMENTS shims/ivy.xml:34 Sure thing. build-common.xml:86 It is - will do so. EDIT: actually, no, it isn't. It was originally but is general now. Will remove the misleading comment. build-common.xml:154 ivy-{resolve,retrieve}-hadoop20 is an obsolete bit of cruft - removing it. I've also replaced the ivy-{resolve,retrieve}-hadoop0.{20,20S,23}-shim targets with ivy-{resolve,retrieve}-hadoop-shim, which is called with a parameter that specifies the shim conf to get. build.xml:700 Working for me? build.xml:740 Again - works for me, with ant clean binary REVISION DETAIL https://reviews.facebook.net/D2133 BRANCH HIVE-2646-dev-branch Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2646: -- Attachment: HIVE-2646.D2133.4.patch abayer updated the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. Reviewers: JIRA, cwsteinbach Redid the shim ivy targets, clarified a few things. REVISION DETAIL https://reviews.facebook.net/D2133 AFFECTED FILES build-common.xml build.properties build.xml builtins/build.xml builtins/ivy.xml cli/ivy.xml common/ivy.xml contrib/build.xml contrib/ivy.xml hbase-handler/build.xml hbase-handler/ivy.xml hwi/build.xml hwi/ivy.xml ivy/common-configurations.xml ivy/ivysettings.xml ivy/libraries.properties jdbc/build.xml jdbc/ivy.xml metastore/ivy.xml pdk/ivy.xml pdk/scripts/build-plugin.xml ql/build.xml ql/ivy.xml serde/ivy.xml service/build.xml service/ivy.xml shims/build.xml shims/ivy.xml testutils/hadoop Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Commented] (HIVE-2837) insert into external tables should not be allowed
[ https://issues.apache.org/jira/browse/HIVE-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226664#comment-13226664 ] Hudson commented on HIVE-2837: -- Integrated in Hive-trunk-h0.21 #1302 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1302/]) HIVE-2837. insert into external tables should not be allowed. (namit via kevinwilfong) (Revision 1298936) Result = SUCCESS kevinwilfong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1298936 Files : * /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java * /hive/trunk/ql/src/test/queries/clientnegative/insertexternal1.q * /hive/trunk/ql/src/test/results/clientnegative/insertexternal1.q.out insert into external tables should not be allowed - Key: HIVE-2837 URL: https://issues.apache.org/jira/browse/HIVE-2837 Project: Hive Issue Type: Bug Reporter: Namit Jain Assignee: Namit Jain Attachments: HIVE-2837.D2211.1.patch, HIVE-2837.D2211.2.patch This is a very risky thing to allow, since external tables can point to any user location and can potentially corrupt other tables.
[jira] [Commented] (HIVE-2832) Cache error messages for additional logging
[ https://issues.apache.org/jira/browse/HIVE-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226665#comment-13226665 ] Hudson commented on HIVE-2832: -- Integrated in Hive-trunk-h0.21 #1302 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1302/]) HIVE-2832 Cache error messages for additional logging (Kevin Wilfong via namit) (Revision 1299000) Result = SUCCESS namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299000 Files : * /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java * /hive/trunk/common/src/java/org/apache/hadoop/hive/common/io/CachingPrintStream.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/VerifyCachingPrintStreamHook.java * /hive/trunk/ql/src/test/queries/clientnegative/cachingprintstream.q * /hive/trunk/ql/src/test/results/clientnegative/cachingprintstream.q.out Cache error messages for additional logging --- Key: HIVE-2832 URL: https://issues.apache.org/jira/browse/HIVE-2832 Project: Hive Issue Type: Improvement Components: Logging Reporter: Kevin Wilfong Assignee: Kevin Wilfong Attachments: HIVE-2832.D2025.1.patch, HIVE-2832.D2025.2.patch It would be good if we could cache logs written to SessionState.err so that they could be exposed to hooks for additional logging. This would allow logging of error messages with the queries that failed in a central location.
[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226673#comment-13226673 ] Phabricator commented on HIVE-2646: --- thw has commented on the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. arc patch --revision 2133 gives me two rejects (applying to trunk head)? patching file build-common.xml patching file build.properties patching file build.xml Hunk #1 FAILED at 684. 1 out of 1 hunk FAILED -- saving rejects to file build.xml.rej patching file builtins/build.xml patching file builtins/ivy.xml patching file cli/ivy.xml patching file common/ivy.xml patching file contrib/build.xml patching file contrib/ivy.xml patching file hbase-handler/build.xml patching file hbase-handler/ivy.xml patching file hwi/build.xml patching file hwi/ivy.xml patching file ivy/common-configurations.xml Hunk #1 FAILED at 18. 1 out of 1 hunk FAILED -- saving rejects to file ivy/common-configurations.xml.rej patching file ivy/ivysettings.xml patching file ivy/libraries.properties patching file jdbc/build.xml patching file jdbc/ivy.xml patching file metastore/ivy.xml patching file pdk/ivy.xml patching file pdk/scripts/build-plugin.xml patching file ql/build.xml patching file ql/ivy.xml patching file serde/ivy.xml patching file service/build.xml patching file service/ivy.xml patching file shims/build.xml patching file shims/ivy.xml patching file testutils/hadoop REVISION DETAIL https://reviews.facebook.net/D2133 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its 
Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226674#comment-13226674 ] Phabricator commented on HIVE-2646: --- thw has commented on the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. also tried arc export --unified --revision 2133 and then patch command, same result REVISION DETAIL https://reviews.facebook.net/D2133 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2646: -- Attachment: HIVE-2646.D2133.5.patch abayer updated the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. Reviewers: JIRA, cwsteinbach Rebased on trunk - got the build.xml conflict when rebasing, but didn't see any issues with ivy/common-configurations.xml. REVISION DETAIL https://reviews.facebook.net/D2133 AFFECTED FILES build-common.xml build.properties build.xml builtins/build.xml builtins/ivy.xml cli/ivy.xml common/ivy.xml contrib/build.xml contrib/ivy.xml hbase-handler/build.xml hbase-handler/ivy.xml hwi/build.xml hwi/ivy.xml ivy/common-configurations.xml ivy/ivysettings.xml ivy/libraries.properties jdbc/build.xml jdbc/ivy.xml metastore/ivy.xml pdk/ivy.xml pdk/scripts/build-plugin.xml ql/build.xml ql/ivy.xml serde/ivy.xml service/build.xml service/ivy.xml shims/build.xml shims/ivy.xml testutils/hadoop Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226685#comment-13226685 ] Phabricator commented on HIVE-2646: --- thw has commented on the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. build.xml conflict is gone, ivy/common-configurations.xml.rej:

```
*** 18,23 ****
  <!--these match the Maven configurations-->
  <conf name="default" extends="master,compile"/>
  <conf name="master" description="contains the artifact but no dependencies"/>
- <conf name="compile" description="contains the artifact but no dependencies"/>
  <conf name="runtime" description="runtime but not the artifact"/>
  </configurations>
--- 18,31 ----
  <!--these match the Maven configurations-->
  <conf name="default" extends="master,compile"/>
  <conf name="master" description="contains the artifact but no dependencies"/>
+ <conf name="compile" extends="hadoop${hadoop.mr.rev}" description="contains the artifact but no dependencies" visibility="private"/>
  <conf name="runtime" description="runtime but not the artifact"/>
+ <conf name="test" extends="hadoop${hadoop.mr.rev}test,compile" visibility="private" />
+ <conf name="hadoop20" visibility="private"/>
+ <conf name="hadoop23" visibility="private"/>
+ <conf name="hadoop20test" visibility="private"/>
+ <conf name="hadoop23test" visibility="private"/>
+ <conf name="hadoop0.20.shim" visibility="private"/>
+ <conf name="hadoop0.20S.shim" visibility="private"/>
+ <conf name="hadoop0.23.shim" visibility="private"/>
  </configurations>
```

REVISION DETAIL https://reviews.facebook.net/D2133 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, HIVE-2646.diff.txt The current Hive Ivy 
dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
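For context, the change HIVE-2646 asks for amounts to declaring each Hadoop jar as an ordinary Ivy dependency instead of downloading a tarball and extracting jars from it. A minimal, hypothetical sketch of what such an ivy.xml fragment might look like (the org/name/rev values and conf mappings here are illustrative assumptions, not taken from the actual patch):

```xml
<!-- Hypothetical ivy.xml fragment: depend on Hadoop jars directly
     rather than pulling a tarball and extracting them.
     Module names, revisions, and conf mappings are illustrative only. -->
<dependency org="org.apache.hadoop" name="hadoop-core"
            rev="${hadoop.version}" conf="hadoop20->default">
  <artifact name="hadoop-core" type="jar"/>
</dependency>
<dependency org="org.apache.hadoop" name="hadoop-test"
            rev="${hadoop.version}" conf="hadoop20test->default">
  <artifact name="hadoop-test" type="jar"/>
</dependency>
```

This also makes the private hadoop20/hadoop23 configurations in the rejected hunk above meaningful: each shim build can resolve only the jar set for its Hadoop line.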
[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226688#comment-13226688 ] Phabricator commented on HIVE-2646: --- thw has commented on the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. INLINE COMMENTS shims/build.xml:28 Is this here still needed? REVISION DETAIL https://reviews.facebook.net/D2133 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
[ https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226691#comment-13226691 ] Phabricator commented on HIVE-2646: --- abayer has commented on the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs. Not sure why that doesn't apply cleanly. REVISION DETAIL https://reviews.facebook.net/D2133 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs Key: HIVE-2646 URL: https://issues.apache.org/jira/browse/HIVE-2646 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Priority: Critical Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, HIVE-2646.diff.txt The current Hive Ivy dependency logic for its Hadoop dependencies is problematic - depending on the tarball and extracting the jars from there, rather than depending on the jars directly. It'd be great if this was fixed to actually have the jar dependencies defined directly.
[jira] [Commented] (HIVE-2856) When integrating into MapReduce2, additional '^' in escape test
[ https://issues.apache.org/jira/browse/HIVE-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226716#comment-13226716 ] Carl Steinbach commented on HIVE-2856: -- @Zhenxiao: Please attach an updated copy of escape1.q.out. You can either attach this separately, or generate a new diff that includes binary changes using the command git diff --binary HEAD^ > patch.txt. Thanks. When integrating into MapReduce2, additional '^' in escape test --- Key: HIVE-2856 URL: https://issues.apache.org/jira/browse/HIVE-2856 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2856.1.patch.txt Additional '^' in escape test: [junit] Begin query: escape1.q [junit] Copying file: file:/home/cloudera/Code/hive/data/files/escapetest.txt [junit] 12/01/23 15:22:15 WARN conf.Configuration: mapred.system.dir is deprecated. Instead, use mapreduce.jobtracker.system.dir [junit] 12/01/23 15:22:15 WARN conf.Configuration: mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir [junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I LOCATION ' -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I LOCK_TIME: -I grantTime -I [.][.][.] [0-9]* more -I job_[0-9]*_[0-9]* -I USING 'java -cp /home/cloudera/Code/hive/build/ql/test/logs/clientpositive/escape1.q.out /home/cloudera/Code/hive/ql/src/test/results/clientpositive/escape1.q.out [junit] 893d892 [junit] 1 1 ^ [junit] junit.framework.AssertionFailedError: Client execution results failed with error code = 1 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. 
[junit] at junit.framework.Assert.fail(Assert.java:50) [junit] at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape1(TestCliDriver.java:131) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit] at java.lang.reflect.Method.invoke(Method.java:616) [junit] at junit.framework.TestCase.runTest(TestCase.java:168) [junit] at junit.framework.TestCase.runBare(TestCase.java:134) [junit] at junit.framework.TestResult$1.protect(TestResult.java:110) [junit] at junit.framework.TestResult.runProtected(TestResult.java:128) [junit] at junit.framework.TestResult.run(TestResult.java:113) [junit] at junit.framework.TestCase.run(TestCase.java:124) [junit] at junit.framework.TestSuite.runTest(TestSuite.java:243) [junit] at junit.framework.TestSuite.run(TestSuite.java:238) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768) [junit] Exception: Client execution results failed with error code = 1 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.)
[jira] [Assigned] (HIVE-2748) Upgrade Hbase and ZK dependcies
[ https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar reassigned HIVE-2748: --- Assignee: Enis Soztutar (was: Ashutosh Chauhan) Upgrade Hbase and ZK dependcies --- Key: HIVE-2748 URL: https://issues.apache.org/jira/browse/HIVE-2748 Project: Hive Issue Type: Task Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0 Reporter: Ashutosh Chauhan Assignee: Enis Soztutar Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, HIVE-2748.D1431.2.patch Both softwares have moved forward with significant improvements. Lets bump compile time dependency to keep up
[jira] [Updated] (HIVE-2748) Upgrade Hbase and ZK dependcies
[ https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HIVE-2748: Attachment: HIVE-2748_v4.patch Attaching a patch with ZK updated to 3.4.3, and HBase 0.92.0 as in Alan's patch. Also adds a TokenStore.close() method, so that ZKTokenStore can properly close the client connection. Running the tests. Upgrade Hbase and ZK dependcies --- Key: HIVE-2748 URL: https://issues.apache.org/jira/browse/HIVE-2748 Project: Hive Issue Type: Task Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0 Reporter: Ashutosh Chauhan Assignee: Enis Soztutar Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, HIVE-2748.D1431.2.patch, HIVE-2748_v4.patch Both softwares have moved forward with significant improvements. Lets bump compile time dependency to keep up
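The TokenStore.close() addition described in that comment can be sketched roughly as follows. This is a simplified illustration under stated assumptions, not the actual Hive delegation-token code: the interface shape, method names, and the ZKTokenStore internals are all invented here to show the idea of letting a ZooKeeper-backed store release its client connection deterministically.

```java
// Simplified sketch (hypothetical types, not Hive's real API):
// give the token store a close() so a ZooKeeper-backed implementation
// can shut down its client connection instead of leaking it.
import java.io.Closeable;
import java.io.IOException;

interface TokenStore extends Closeable {
    void addToken(String identifier, String token);

    @Override
    void close() throws IOException; // new: implementations clean up resources here
}

class ZKTokenStore implements TokenStore {
    // Stands in for a real ZooKeeper client handle.
    private boolean connected = true;

    @Override
    public void addToken(String identifier, String token) {
        if (!connected) {
            throw new IllegalStateException("token store is closed");
        }
        // A real implementation would write the token under a ZK znode here.
    }

    @Override
    public void close() {
        // A real implementation would close the ZooKeeper client here.
        connected = false;
    }

    public boolean isConnected() {
        return connected;
    }
}
```

The design point is small but real: without close() on the interface, callers holding only a TokenStore reference have no way to release the ZK connection.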
[jira] [Created] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23
TestNegativeCliDriver autolocal1.q fails on 0.23 Key: HIVE-2860 URL: https://issues.apache.org/jira/browse/HIVE-2860 Project: Hive Issue Type: Bug Components: Testing Infrastructure Affects Versions: 0.9.0 Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0
[jira] [Commented] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226749#comment-13226749 ] Carl Steinbach commented on HIVE-2860: -- The expected result is: {noformat} PREHOOK: Output: file:/data/users/jssarma/hive_trunk/build/ql/scratchdir/hive_2010-07-25_01-23-08_631_3140637565055726778/-mr-1 Job Submission failed with exception 'java.lang.RuntimeException(Not a host:port pair: abracadabra)' {noformat} But on 0.23, the actual result is: {noformat} PREHOOK: Output: file:/Users/carl/Work/repos/hive1/build/ql/scratchdir/hive_2012-03-09_17-27-12_806_7440178579833934359/-mr-1 Job Submission failed with exception 'java.lang.IllegalArgumentException(Does not contain a valid host:port authority: abracadabra)' {noformat} QTestUtil already masks output lines containing the string java.lang.RuntimeException. It looks like it should also mask occurrences of java.lang.IllegalArgumentException. TestNegativeCliDriver autolocal1.q fails on 0.23 Key: HIVE-2860 URL: https://issues.apache.org/jira/browse/HIVE-2860 Project: Hive Issue Type: Bug Components: Testing Infrastructure Affects Versions: 0.9.0 Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0
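The masking described in that comment can be illustrated with a toy version. The class and method names below are assumptions for illustration only (Hive's real QTestUtil works differently); the point is just that any output line containing a masked string is replaced wholesale, so environment-specific exception text cannot cause spurious diffs:

```java
import java.util.Arrays;
import java.util.List;

// Toy sketch of QTestUtil-style output masking: lines containing any of
// the masked substrings are replaced with a fixed marker before diffing.
// The gist of the HIVE-2860 fix is adding java.lang.IllegalArgumentException
// to such a list, since Hadoop 0.23 throws it where 0.20 threw RuntimeException.
class OutputMasker {
    private static final List<String> MASKED = Arrays.asList(
        "java.lang.RuntimeException",
        "java.lang.IllegalArgumentException" // added for Hadoop 0.23's error text
    );

    static String maskLine(String line) {
        for (String pattern : MASKED) {
            if (line.contains(pattern)) {
                return "### MASKED ###";
            }
        }
        return line;
    }
}
```

With this in place, both the 0.20 output (RuntimeException) and the 0.23 output (IllegalArgumentException) collapse to the same masked line.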
[jira] [Updated] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2860: -- Attachment: HIVE-2860.D2253.1.patch cwsteinbach requested code review of HIVE-2860 [jira] TestNegativeCliDriver autolocal1.q fails on 0.23. Reviewers: JIRA HIVE-2860. TestNegativeCliDriver autolocal1.q fails on 0.23 Add java.lang.IllegalArgumentException to the list of masked strings in QTestUtil TEST PLAN EMPTY REVISION DETAIL https://reviews.facebook.net/D2253 AFFECTED FILES ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java MANAGE HERALD DIFFERENTIAL RULES https://reviews.facebook.net/herald/view/differential/ WHY DID I GET THIS EMAIL? https://reviews.facebook.net/herald/transcript/4935/ Tip: use the X-Herald-Rules header to filter Herald messages in your client. TestNegativeCliDriver autolocal1.q fails on 0.23 Key: HIVE-2860 URL: https://issues.apache.org/jira/browse/HIVE-2860 Project: Hive Issue Type: Bug Components: Testing Infrastructure Affects Versions: 0.9.0 Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0 Attachments: HIVE-2860.D2253.1.patch
[jira] [Updated] (HIVE-2856) When integrating into MapReduce2, additional '^' in escape test
[ https://issues.apache.org/jira/browse/HIVE-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenxiao Luo updated HIVE-2856: --- Attachment: escape1.q.out the new expected output When integrating into MapReduce2, additional '^' in escape test --- Key: HIVE-2856 URL: https://issues.apache.org/jira/browse/HIVE-2856 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2856.1.patch.txt, escape1.q.out Additional '^' in escape test: [junit] Begin query: escape1.q [junit] Copying file: file:/home/cloudera/Code/hive/data/files/escapetest.txt [junit] 12/01/23 15:22:15 WARN conf.Configuration: mapred.system.dir is deprecated. Instead, use mapreduce.jobtracker.system.dir [junit] 12/01/23 15:22:15 WARN conf.Configuration: mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir [junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I LOCATION ' -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I LOCK_TIME: -I grantTime -I [.][.][.] [0-9]* more -I job_[0-9]*_[0-9]* -I USING 'java -cp /home/cloudera/Code/hive/build/ql/test/logs/clientpositive/escape1.q.out /home/cloudera/Code/hive/ql/src/test/results/clientpositive/escape1.q.out [junit] 893d892 [junit] 1 1 ^ [junit] junit.framework.AssertionFailedError: Client execution results failed with error code = 1 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. 
[junit] at junit.framework.Assert.fail(Assert.java:50) [junit] at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape1(TestCliDriver.java:131) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit] at java.lang.reflect.Method.invoke(Method.java:616) [junit] at junit.framework.TestCase.runTest(TestCase.java:168) [junit] at junit.framework.TestCase.runBare(TestCase.java:134) [junit] at junit.framework.TestResult$1.protect(TestResult.java:110) [junit] at junit.framework.TestResult.runProtected(TestResult.java:128) [junit] at junit.framework.TestResult.run(TestResult.java:113) [junit] at junit.framework.TestCase.run(TestCase.java:124) [junit] at junit.framework.TestSuite.runTest(TestSuite.java:243) [junit] at junit.framework.TestSuite.run(TestSuite.java:238) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768) [junit] Exception: Client execution results failed with error code = 1 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs.)
[jira] [Updated] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2860: - Attachment: HIVE-2860.D2253.1.patch TestNegativeCliDriver autolocal1.q fails on 0.23 Key: HIVE-2860 URL: https://issues.apache.org/jira/browse/HIVE-2860 Project: Hive Issue Type: Bug Components: Testing Infrastructure Affects Versions: 0.9.0 Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0 Attachments: HIVE-2860.D2253.1.patch, HIVE-2860.D2253.1.patch
[jira] [Updated] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23
[ https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2860: - Status: Patch Available (was: Open) TestNegativeCliDriver autolocal1.q fails on 0.23 Key: HIVE-2860 URL: https://issues.apache.org/jira/browse/HIVE-2860 Project: Hive Issue Type: Bug Components: Testing Infrastructure Affects Versions: 0.9.0 Reporter: Carl Steinbach Assignee: Carl Steinbach Fix For: 0.9.0 Attachments: HIVE-2860.D2253.1.patch, HIVE-2860.D2253.1.patch
[jira] [Created] (HIVE-2861) Support eventual constant expression for filter pushdown for key ranges in hbase
Support eventual constant expression for filter pushdown for key ranges in hbase Key: HIVE-2861 URL: https://issues.apache.org/jira/browse/HIVE-2861 Project: Hive Issue Type: Improvement Components: HBase Handler Reporter: Navis Assignee: Navis Priority: Trivial Minor upgrade from HIVE-2771, which supports simple eventual constant expression as a filter (especially 'cast'). For example, {noformat} select * from hbase_pushdown where key < cast(20 + 30 as string); {noformat}