[jira] [Commented] (HIVE-3599) missing return of compression codec to pool
[ https://issues.apache.org/jira/browse/HIVE-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480765#comment-13480765 ]

Edward Capriolo commented on HIVE-3599:
---------------------------------------

+1. Will commit.

> missing return of compression codec to pool
> -------------------------------------------
>
>            Key: HIVE-3599
>            URL: https://issues.apache.org/jira/browse/HIVE-3599
>        Project: Hive
>     Issue Type: Bug
>     Components: Query Processor
>       Reporter: Owen O'Malley
>       Assignee: Owen O'Malley
>    Attachments: hive-3599.patch
>
> The RCFile writer is currently missing a call to return one of its compression codecs to the pool.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
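The bug class here is generic resource pooling: a writer checks a codec out of a shared pool and must hand it back when it finishes, or every subsequent checkout allocates a fresh codec. A minimal Python sketch of that discipline (the `CodecPool`, `checkout`, and `release` names are illustrative, not Hive's or Hadoop's actual API):

```python
class CodecPool:
    """Toy object pool illustrating the checkout/return discipline."""

    def __init__(self):
        self._idle = []          # codecs available for reuse
        self.created = 0         # how many codecs were ever allocated

    def checkout(self):
        if self._idle:
            return self._idle.pop()
        self.created += 1
        return object()          # stand-in for a real compressor

    def release(self, codec):
        self._idle.append(codec) # the kind of call the writer was missing

pool = CodecPool()

def write_file(pool, leak=False):
    codec = pool.checkout()
    try:
        pass                     # ... compress and write ...
    finally:
        if not leak:
            pool.release(codec)  # always hand the codec back

for _ in range(3):
    write_file(pool)             # well-behaved writers reuse one codec
assert pool.created == 1

for _ in range(3):
    write_file(pool, leak=True)  # leaking writers force fresh allocations
assert pool.created == 3
```

Returning the codec in a `finally` block (Java's try/finally in the real writer) is what guarantees the pool is repaid even when the write fails.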
[jira] [Updated] (HIVE-3599) missing return of compression codec to pool
[ https://issues.apache.org/jira/browse/HIVE-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Edward Capriolo updated HIVE-3599:
----------------------------------
       Resolution: Fixed
    Fix Version/s: 0.9.1
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Committed. Thanks, Owen.

> missing return of compression codec to pool
> -------------------------------------------
>
>            Key: HIVE-3599
>            URL: https://issues.apache.org/jira/browse/HIVE-3599
>        Project: Hive
>     Issue Type: Bug
>     Components: Query Processor
>       Reporter: Owen O'Malley
>       Assignee: Owen O'Malley
>         Fix For: 0.9.1
>    Attachments: hive-3599.patch
>
> The RCFile writer is currently missing a call to return one of its compression codecs to the pool.
Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #173
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/

[...truncated 10125 lines...]
     [echo] Project: odbc
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/odbc/src/conf does not exist.
ivy-resolve-test:
     [echo] Project: odbc
ivy-retrieve-test:
     [echo] Project: odbc
compile-test:
     [echo] Project: odbc
create-dirs:
     [echo] Project: serde
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/serde/src/test/resources does not exist.
init:
     [echo] Project: serde
ivy-init-settings:
     [echo] Project: serde
ivy-resolve:
     [echo] Project: serde
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-serde-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/ivy/report/org.apache.hive-hive-serde-default.html
ivy-retrieve:
     [echo] Project: serde
dynamic-serde:
compile:
     [echo] Project: serde
ivy-resolve-test:
     [echo] Project: serde
ivy-retrieve-test:
     [echo] Project: serde
compile-test:
     [echo] Project: serde
    [javac] Compiling 26 source files to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/serde/test/classes
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
create-dirs:
     [echo] Project: service
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/service/src/test/resources does not exist.
init:
     [echo] Project: service
ivy-init-settings:
     [echo] Project: service
ivy-resolve:
     [echo] Project: service
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-service-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/ivy/report/org.apache.hive-hive-service-default.html
ivy-retrieve:
     [echo] Project: service
compile:
     [echo] Project: service
ivy-resolve-test:
     [echo] Project: service
ivy-retrieve-test:
     [echo] Project: service
compile-test:
     [echo] Project: service
    [javac] Compiling 2 source files to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/service/test/classes
test:
     [echo] Project: hive
test-shims:
     [echo] Project: hive
test-conditions:
     [echo] Project: shims
gen-test:
     [echo] Project: shims
create-dirs:
     [echo] Project: shims
     [copy] Warning: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/test/resources does not exist.
init:
     [echo] Project: shims
ivy-init-settings:
     [echo] Project: shims
ivy-resolve:
     [echo] Project: shims
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-shims-default.xml to https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/ivy/report/org.apache.hive-hive-shims-default.html
ivy-retrieve:
     [echo] Project: shims
compile:
     [echo] Project: shims
     [echo] Building shims 0.20
build_shims:
     [echo] Project: shims
     [echo] Compiling https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20/java against hadoop 0.20.2 (https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/173/artifact/hive/build/hadoopcore/hadoop-0.20.2)
ivy-init-settings:
     [echo] Project: shims
ivy-resolve-hadoop-shim:
     [echo] Project: shims
[ivy:resolve] :: loading settings :: file = https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
ivy-retrieve-hadoop-shim:
     [echo] Project: shims
     [echo] Building shims 0.20S
build_shims:
     [echo] Project: shims
     [echo] Compiling
[jira] [Commented] (HIVE-3152) Disallow certain character patterns in partition names
[ https://issues.apache.org/jira/browse/HIVE-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480777#comment-13480777 ]

Ashutosh Chauhan commented on HIVE-3152:
----------------------------------------

+1. Kevin, can you take care of running the tests and committing it? Ivan, on a side note for your future patches: I suggest avoiding whitespace changes; they make it harder, while reading a patch, to see where the important changes are.

> Disallow certain character patterns in partition names
> ------------------------------------------------------
>
>            Key: HIVE-3152
>            URL: https://issues.apache.org/jira/browse/HIVE-3152
>        Project: Hive
>     Issue Type: New Feature
>     Components: Metastore
>       Reporter: Andrew Poland
>       Assignee: Ivan Gorbachev
>        Priority: Minor
>          Labels: api-addition, configuration-addition
>    Attachments: jira-3152.0.patch, jira-3152.1.patch, jira-3152.2.patch
>
> New event listener to allow the metastore to reject a partition name if it contains undesired character patterns such as Unicode characters and commas. The match pattern is implemented as a regular expression. Modifies append_partition to call a new MetaStorePreEventListener implementation, PreAppendPartitionEvent.
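The listener described above boils down to matching each candidate partition name against a configured regular expression and vetoing the append when it fails. A hedged sketch of that check in Python (the whitelist pattern and function name are made up for illustration; Hive's actual configuration key and listener wiring differ):

```python
import re

# Hypothetical whitelist: allow only ASCII letters, digits, and a few safe
# separators -- commas and non-ASCII characters will be rejected.
WHITELIST = re.compile(r'[A-Za-z0-9_\-./ =]*\Z')

def validate_partition_name(name):
    """Mimics a pre-event listener that vetoes bad partition names."""
    if not WHITELIST.match(name):
        raise ValueError("Invalid partition name: %r" % name)
    return name

assert validate_partition_name("ds=2012-10-20") == "ds=2012-10-20"
try:
    validate_partition_name("ds=2012,10,20")   # comma is disallowed
    raise AssertionError("comma should have been rejected")
except ValueError:
    pass
```

Anchoring the pattern with `\Z` forces the whole name to match, so a single bad character anywhere in the string is enough to reject it.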
[jira] [Updated] (HIVE-3602) Provide ability to roll up column stats from specific partitions
[ https://issues.apache.org/jira/browse/HIVE-3602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shreepadma Venugopalan updated HIVE-3602:
-----------------------------------------
    Summary: Provide ability to roll up column stats from specific partitions  (was: Provide ability to roll up column stats from different partitions)

> Provide ability to roll up column stats from specific partitions
> ----------------------------------------------------------------
>
>            Key: HIVE-3602
>            URL: https://issues.apache.org/jira/browse/HIVE-3602
>        Project: Hive
>     Issue Type: Bug
>     Components: Statistics
>    Affects Versions: 0.10.0
>       Reporter: Shreepadma Venugopalan
>       Assignee: Shreepadma Venugopalan
>
> When executing a query on a partitioned table, it becomes necessary to combine the statistics from different partitions to learn the combined data distribution. For example, consider a table with five partitions P0 to P4 and a query that scans partitions P0, P2, and P4: the statistics from those individual partitions must be combined to obtain the statistical properties of the combined data. This JIRA covers the task of (a) implementing logic to roll up statistics from specific partitions and (b) providing an API to retrieve the combined statistics.
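Rolling up column stats across only the scanned partitions amounts to merging per-partition summaries: row and null counts add, min/max combine directly, while a true distinct-value count cannot be recovered exactly from per-partition NDVs and can only be bounded. A sketch under those assumptions (the dict field names are illustrative, not the metastore's schema):

```python
# Per-partition column stats for partitions P0, P2, P4 of a 5-partition table.
p0 = {"num_rows": 100, "num_nulls": 3, "min": 5,  "max": 40, "ndv": 20}
p2 = {"num_rows": 250, "num_nulls": 0, "min": 1,  "max": 99, "ndv": 60}
p4 = {"num_rows": 50,  "num_nulls": 7, "min": 12, "max": 18, "ndv": 9}

def roll_up(stats_list):
    """Combine column stats from the selected partitions only."""
    return {
        "num_rows":  sum(s["num_rows"] for s in stats_list),
        "num_nulls": sum(s["num_nulls"] for s in stats_list),
        "min": min(s["min"] for s in stats_list),
        "max": max(s["max"] for s in stats_list),
        # True NDV is not additive (values repeat across partitions);
        # summing per-partition NDVs gives only an upper bound.
        "ndv_upper_bound": sum(s["ndv"] for s in stats_list),
    }

combined = roll_up([p0, p2, p4])
assert combined["num_rows"] == 400
assert (combined["min"], combined["max"]) == (1, 99)
assert combined["ndv_upper_bound"] == 89
```

The NDV caveat is why production systems typically store mergeable sketches (e.g. hyperloglog-style summaries) rather than bare distinct counts.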
[jira] [Created] (HIVE-3602) Provide ability to roll up column stats from different partitions
Shreepadma Venugopalan created HIVE-3602:
-----------------------------------------

             Summary: Provide ability to roll up column stats from different partitions
                 Key: HIVE-3602
                 URL: https://issues.apache.org/jira/browse/HIVE-3602
             Project: Hive
          Issue Type: Bug
          Components: Statistics
    Affects Versions: 0.10.0
            Reporter: Shreepadma Venugopalan
            Assignee: Shreepadma Venugopalan

When executing a query on a partitioned table, it becomes necessary to combine the statistics from different partitions to learn the combined data distribution. For example, consider a table with five partitions P0 to P4 and a query that scans partitions P0, P2, and P4: the statistics from those individual partitions must be combined to obtain the statistical properties of the combined data. This JIRA covers the task of (a) implementing logic to roll up statistics from specific partitions and (b) providing an API to retrieve the combined statistics.
Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #173
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/173/

[...truncated 36569 lines...]
    [junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: file:/tmp/jenkins/hive_2012-10-20_13-51-04_935_5269897821644439348/-mr-1
    [junit] OK
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: default@testhivedrivertable
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] Hive history file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/173/artifact/hive/build/service/tmp/hive_job_log_jenkins_201210201351_109311393.txt
    [junit] Copying file: https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] OK
    [junit] PREHOOK: query: create table testhivedrivertable (num int)
    [junit] PREHOOK: type: DROPTABLE
    [junit] POSTHOOK: query: create table testhivedrivertable (num int)
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] PREHOOK: query: load data local inpath 'https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt' into table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] PREHOOK: Output: default@testhivedrivertable
    [junit] Copying data from https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt
    [junit] Loading data to table default.testhivedrivertable
    [junit] POSTHOOK: query: load data local inpath 'https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt' into table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] PREHOOK: query: select * from testhivedrivertable limit 10
    [junit] PREHOOK: type: DROPTABLE
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: file:/tmp/jenkins/hive_2012-10-20_13-51-09_157_7507575820152155082/-mr-1
    [junit] POSTHOOK: query: select * from testhivedrivertable limit 10
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: file:/tmp/jenkins/hive_2012-10-20_13-51-09_157_7507575820152155082/-mr-1
    [junit] OK
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: default@testhivedrivertable
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] Hive history file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/173/artifact/hive/build/service/tmp/hive_job_log_jenkins_201210201351_825601501.txt
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] OK
    [junit] PREHOOK: query: create table testhivedrivertable (num int)
    [junit] PREHOOK: type: DROPTABLE
    [junit] POSTHOOK: query: create table testhivedrivertable (num int)
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] PREHOOK: Input: default@testhivedrivertable
    [junit] PREHOOK: Output: default@testhivedrivertable
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK: type: DROPTABLE
    [junit] POSTHOOK: Input: default@testhivedrivertable
    [junit] POSTHOOK: Output: default@testhivedrivertable
    [junit] OK
    [junit] Hive history file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/173/artifact/hive/build/service/tmp/hive_job_log_jenkins_201210201351_255059004.txt
    [junit] Hive history file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/173/artifact/hive/build/service/tmp/hive_job_log_jenkins_201210201351_1390329658.txt
    [junit] PREHOOK: query: drop table testhivedrivertable
    [junit] PREHOOK: type: DROPTABLE
    [junit] POSTHOOK: query: drop table testhivedrivertable
    [junit] POSTHOOK:
[jira] [Commented] (HIVE-2794) Aggregations without grouping should return NULL when applied to partitioning column of a partitionless table
[ https://issues.apache.org/jira/browse/HIVE-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480841#comment-13480841 ]

Shreepadma Venugopalan commented on HIVE-2794:
----------------------------------------------

The SQL standard dictates that the return value should be NULL. However, HiveQL seems to deviate and return an empty result set for other aggregate functions as well. This needs to be fixed to achieve parity with the SQL standard.

> Aggregations without grouping should return NULL when applied to partitioning column of a partitionless table
> -------------------------------------------------------------------------------------------------------------
>
>            Key: HIVE-2794
>            URL: https://issues.apache.org/jira/browse/HIVE-2794
>        Project: Hive
>     Issue Type: Bug
>     Components: Query Processor
>       Reporter: Carl Steinbach
>       Assignee: Zhenxiao Luo
>    Attachments: HIVE-2794.1.patch.txt
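The distinction at issue: a scalar aggregate (one with no GROUP BY) over zero input rows should produce exactly one row containing NULL, not zero rows. A toy Python model of the standard-conforming behavior, with `None` standing in for SQL NULL:

```python
def scalar_max(rows):
    """A scalar aggregate always yields exactly one row; over empty input
    the SQL standard says that row holds NULL (None here), not zero rows."""
    return [None] if not rows else [max(rows)]

assert scalar_max([3, 1, 2]) == [3]
# One row containing NULL -- not an empty result set, which is the
# deviation the comment above describes.
assert scalar_max([]) == [None]
```

(COUNT is the usual exception: over empty input it returns 0, not NULL.)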
[jira] [Commented] (HIVE-3056) Ability to bulk update location field in Db/Table/Partition records
[ https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480843#comment-13480843 ]

Shreepadma Venugopalan commented on HIVE-3056:
----------------------------------------------

Metatool provides the following functionality:

* The ability to search and replace the HDFS NameNode location in metastore records that reference the NameNode. Note that one use is the ability to transition an existing Hive deployment to an HDFS HA NameNode.
* A command-line tool to execute JDOQL against the metastore. We believe the ability to execute JDOQL against the metastore will be a useful debugging tool for both users and Hive developers.

> Ability to bulk update location field in Db/Table/Partition records
> -------------------------------------------------------------------
>
>            Key: HIVE-3056
>            URL: https://issues.apache.org/jira/browse/HIVE-3056
>        Project: Hive
>     Issue Type: New Feature
>     Components: Metastore
>       Reporter: Carl Steinbach
>       Assignee: Shreepadma Venugopalan
>        Fix For: 0.10.0
>    Attachments: HIVE-3056.2.patch.txt, HIVE-3056.3.patch.txt, HIVE-3056.4.patch.txt, HIVE-3056.5.patch.txt, HIVE-3056.7.patch.txt, HIVE-3056.patch
[jira] [Updated] (HIVE-3056) Create a new metastore tool to bulk update location field in Db/Table/Partition records
[ https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shreepadma Venugopalan updated HIVE-3056:
-----------------------------------------
    Summary: Create a new metastore tool to bulk update location field in Db/Table/Partition records  (was: Ability to bulk update location field in Db/Table/Partition records)

> Create a new metastore tool to bulk update location field in Db/Table/Partition records
> ---------------------------------------------------------------------------------------
>
>            Key: HIVE-3056
>            URL: https://issues.apache.org/jira/browse/HIVE-3056
>        Project: Hive
>     Issue Type: New Feature
>     Components: Metastore
>       Reporter: Carl Steinbach
>       Assignee: Shreepadma Venugopalan
>        Fix For: 0.10.0
>    Attachments: HIVE-3056.2.patch.txt, HIVE-3056.3.patch.txt, HIVE-3056.4.patch.txt, HIVE-3056.5.patch.txt, HIVE-3056.7.patch.txt, HIVE-3056.patch
[jira] [Commented] (HIVE-3056) Create a new metastore tool to bulk update location field in Db/Table/Partition records
[ https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480845#comment-13480845 ]

Shreepadma Venugopalan commented on HIVE-3056:
----------------------------------------------

Commands and notes:

metatool -listFSRoot
    Obtain the NameNode location. The value is prefixed with the hdfs:// scheme.

metatool -updateLocation [-dryRun] <new-HA-NN-loc> <pre-NN-loc>
    Update records in the Hive metastore to point to the new NameNode location.

metatool -updateLocation [-dryRun] [-tablePropKey <table-prop-key>] [-serdePropKey <serde-prop-key>] <new-HA-NN-loc> <pre-NN-loc>
    Update metastore records, including the ones that reference the Avro SerDe schema URL, to point to the new HDFS NameNode location.

metatool -listFSRoot
    Verify that the update executed successfully; it should now return the new location, which matches the value of the dfs.nameservices property.

> Create a new metastore tool to bulk update location field in Db/Table/Partition records
> ---------------------------------------------------------------------------------------
>
>            Key: HIVE-3056
>            URL: https://issues.apache.org/jira/browse/HIVE-3056
>        Project: Hive
>     Issue Type: New Feature
>     Components: Metastore
>       Reporter: Carl Steinbach
>       Assignee: Shreepadma Venugopalan
>        Fix For: 0.10.0
>    Attachments: HIVE-3056.2.patch.txt, HIVE-3056.3.patch.txt, HIVE-3056.4.patch.txt, HIVE-3056.5.patch.txt, HIVE-3056.7.patch.txt, HIVE-3056.patch
[jira] [Commented] (HIVE-3056) Create a new metastore tool to bulk update location field in Db/Table/Partition records
[ https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480846#comment-13480846 ]

Shreepadma Venugopalan commented on HIVE-3056:
----------------------------------------------

The options used for the updateLocation command are:

* new-HA-NN-loc: specifies the new location and should match the value of the dfs.nameservices property.
* pre-NN-loc: the return value from the listFSRoot command that references the NameNode location.
* dryRun: outputs the location to the console without making persistent changes.
* tablePropKey <table-prop-key>: allows users to specify a table property key whose value field may reference the HDFS NameNode location and hence may require an update. To update the Avro SerDe schema URL, specify avro.schema.url for this argument.
* serdePropKey <serde-prop-key>: allows users to specify a SerDe property key whose value field may reference the HDFS NameNode location and hence may require an update. To update the Haivvreo schema URL, specify schema.url for this argument.

If you are unsure which version of the Avro SerDe is in use, pass both the serdePropKey and tablePropKey arguments, with their respective values for the keys, to updateLocation.

> Create a new metastore tool to bulk update location field in Db/Table/Partition records
> ---------------------------------------------------------------------------------------
>
>            Key: HIVE-3056
>            URL: https://issues.apache.org/jira/browse/HIVE-3056
>        Project: Hive
>     Issue Type: New Feature
>     Components: Metastore
>       Reporter: Carl Steinbach
>       Assignee: Shreepadma Venugopalan
>        Fix For: 0.10.0
>    Attachments: HIVE-3056.2.patch.txt, HIVE-3056.3.patch.txt, HIVE-3056.4.patch.txt, HIVE-3056.5.patch.txt, HIVE-3056.7.patch.txt, HIVE-3056.patch
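Conceptually, -updateLocation is a prefix search-and-replace over the location URIs stored in metastore records, and -dryRun previews the rewrite before anything is persisted. A hedged sketch of that core operation (the URIs, record list, and function are made up for illustration; the real tool operates on metastore records through JDO):

```python
# Toy metastore "location" records; the URIs are invented for this example.
records = [
    "hdfs://oldnn:8020/user/hive/warehouse/t1",
    "hdfs://oldnn:8020/user/hive/warehouse/t2/part=a",
    "file:///tmp/scratch",            # unrelated scheme, left untouched
]

def update_location(records, old_prefix, new_prefix):
    """Rewrite the NameNode prefix on every matching location URI."""
    return [new_prefix + r[len(old_prefix):] if r.startswith(old_prefix) else r
            for r in records]

preview = update_location(records, "hdfs://oldnn:8020", "hdfs://ha-nameservice")
# A dry run would print `preview` and stop; committing would persist it.
assert preview[0] == "hdfs://ha-nameservice/user/hive/warehouse/t1"
assert preview[2] == "file:///tmp/scratch"
```

Matching on the full scheme-plus-authority prefix, rather than a bare substring, is what keeps unrelated locations (local paths, other clusters) untouched.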
Hive-trunk-h0.21 - Build # 1749 - Still Failing
Changes for Build #1747

Changes for Build #1748

[namit] HIVE-3544 union involving double column with a map join subquery will fail or give wrong results (Kevin Wilfong via namit)

[cws] HIVE-3590. TCP KeepAlive and connection timeout for the HiveServer (Esteban Gutierrez via cws)

Changes for Build #1749

6 tests failed.

REGRESSION: org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherOneStat

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
    at junit.framework.Assert.fail(Assert.java:47)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherOneStat(TestStatsPublisherEnhanced.java:81)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at junit.framework.TestCase.runTest(TestCase.java:168)
    at junit.framework.TestCase.runBare(TestCase.java:134)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at junit.framework.TestResult.runProtected(TestResult.java:128)
    at junit.framework.TestResult.run(TestResult.java:113)
    at junit.framework.TestCase.run(TestCase.java:124)
    at junit.framework.TestSuite.runTest(TestSuite.java:232)
    at junit.framework.TestSuite.run(TestSuite.java:227)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:79)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)

REGRESSION: org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisher

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
    at junit.framework.Assert.fail(Assert.java:47)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisher(TestStatsPublisherEnhanced.java:129)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at junit.framework.TestCase.runTest(TestCase.java:168)
    at junit.framework.TestCase.runBare(TestCase.java:134)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at junit.framework.TestResult.runProtected(TestResult.java:128)
    at junit.framework.TestResult.run(TestResult.java:113)
    at junit.framework.TestCase.run(TestCase.java:124)
    at junit.framework.TestSuite.runTest(TestSuite.java:232)
    at junit.framework.TestSuite.run(TestSuite.java:227)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:79)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)

REGRESSION: org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherMultipleUpdates

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
    at junit.framework.Assert.fail(Assert.java:47)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherMultipleUpdates(TestStatsPublisherEnhanced.java:190)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at junit.framework.TestCase.runTest(TestCase.java:168)
    at junit.framework.TestCase.runBare(TestCase.java:134)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at
[jira] [Commented] (HIVE-2206) add a new optimizer for query correlation discovery and optimization
[ https://issues.apache.org/jira/browse/HIVE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480878#comment-13480878 ]

alex gemini commented on HIVE-2206:
-----------------------------------

Does this jira have a short-version description? I know that a join followed by a group-by is optimized into a pipeline; what else might we want to add to the wiki?

> add a new optimizer for query correlation discovery and optimization
> ---------------------------------------------------------------------
>
>            Key: HIVE-2206
>            URL: https://issues.apache.org/jira/browse/HIVE-2206
>        Project: Hive
>     Issue Type: New Feature
>     Components: Query Processor
>    Affects Versions: 0.10.0
>       Reporter: He Yongqiang
>       Assignee: Yin Huai
>    Attachments: HIVE-2206.10-r1384442.patch.txt, HIVE-2206.11-r1385084.patch.txt, HIVE-2206.12-r1386996.patch.txt, HIVE-2206.13-r1389072.patch.txt, HIVE-2206.14-r1389704.patch.txt, HIVE-2206.15-r1392491.patch.txt, HIVE-2206.16-r1399936.patch.txt, HIVE-2206.1.patch.txt, HIVE-2206.2.patch.txt, HIVE-2206.3.patch.txt, HIVE-2206.4.patch.txt, HIVE-2206.5-1.patch.txt, HIVE-2206.5.patch.txt, HIVE-2206.6.patch.txt, HIVE-2206.7.patch.txt, HIVE-2206.8.r1224646.patch.txt, HIVE-2206.8-r1237253.patch.txt, testQueries.2.q, YSmartPatchForHive.patch
>
> reference: http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-11-7.pdf
[jira] [Commented] (HIVE-2206) add a new optimizer for query correlation discovery and optimization
[ https://issues.apache.org/jira/browse/HIVE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480881#comment-13480881 ]

Yin Huai commented on HIVE-2206:
--------------------------------

[~gemini5201314] I do not have a short-version description right now. Let me write one and create a wiki page.

> add a new optimizer for query correlation discovery and optimization
> ---------------------------------------------------------------------
>
>            Key: HIVE-2206
>            URL: https://issues.apache.org/jira/browse/HIVE-2206
>        Project: Hive
>     Issue Type: New Feature
>     Components: Query Processor
>    Affects Versions: 0.10.0
>       Reporter: He Yongqiang
>       Assignee: Yin Huai
>    Attachments: HIVE-2206.10-r1384442.patch.txt, HIVE-2206.11-r1385084.patch.txt, HIVE-2206.12-r1386996.patch.txt, HIVE-2206.13-r1389072.patch.txt, HIVE-2206.14-r1389704.patch.txt, HIVE-2206.15-r1392491.patch.txt, HIVE-2206.16-r1399936.patch.txt, HIVE-2206.1.patch.txt, HIVE-2206.2.patch.txt, HIVE-2206.3.patch.txt, HIVE-2206.4.patch.txt, HIVE-2206.5-1.patch.txt, HIVE-2206.5.patch.txt, HIVE-2206.6.patch.txt, HIVE-2206.7.patch.txt, HIVE-2206.8.r1224646.patch.txt, HIVE-2206.8-r1237253.patch.txt, testQueries.2.q, YSmartPatchForHive.patch
>
> reference: http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-11-7.pdf
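As a candidate short version for that wiki page: when a join and a subsequent group-by use the same key, a naive plan shuffles the data twice (once per operation), while a correlation-aware plan notices that one partitioning by the key serves both and does the work in a single keyed pass. A toy Python illustration of the single-pass shape (the tables and names are invented; this is not the optimizer itself):

```python
from collections import defaultdict

# Two "tables" joined on a user key, then aggregated on that same key.
orders = [("u1", 10), ("u1", 5), ("u2", 7)]
users  = [("u1", "US"), ("u2", "DE")]

def joined_sum_one_pass(orders, users):
    """Join orders to users on the key and sum amounts per key, in one
    keyed pass -- the shape a correlation optimizer aims for, versus a
    naive plan that shuffles once for the join and again for the group-by."""
    country = dict(users)                 # build side of the join
    totals = defaultdict(int)
    for key, amount in orders:            # single pass over the probe side
        if key in country:                # inner join semantics
            totals[(key, country[key])] += amount
    return dict(totals)

assert joined_sum_one_pass(orders, users) == {("u1", "US"): 15, ("u2", "DE"): 7}
```

Because the group-by key equals the join key, every row needed for one aggregate group arrives at the same place the join already sent it; that shared partitioning is the "correlation" being discovered.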