[jira] [Commented] (HIVE-6806) Native Avro support in Hive
[ https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057183#comment-14057183 ] Hive QA commented on HIVE-6806:
---
{color:red}Overall{color}: -1, at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654935/HIVE-6806.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5723 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_native
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_schema_error_message
org.apache.hadoop.hive.serde2.avro.TestAvroSerde.noSchemaProvidedReturnsErrorSchema
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/730/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/730/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-730/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated. ATTACHMENT ID: 12654935

Native Avro support in Hive
---
Key: HIVE-6806
URL: https://issues.apache.org/jira/browse/HIVE-6806
Project: Hive
Issue Type: New Feature
Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jeremy Beard
Assignee: Ashish Kumar Singh
Priority: Minor
Labels: Avro
Attachments: HIVE-6806.patch

Avro is well established and widely used within Hive; however, creating Avro-backed tables requires messily listing the SerDe, InputFormat and OutputFormat classes. Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had native Avro support.

-- This message was sent by Atlassian JIRA (v6.2#6252)
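The "messy listing" can be made concrete. A sketch of the current Avro DDL next to the shorthand this ticket proposes (table name and columns are illustrative; the class names are Hive's stock Avro SerDe and container formats):

```sql
-- Today: an Avro-backed table needs all three classes spelled out.
CREATE TABLE episodes (title STRING, air_date STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat';

-- With native support, the same table would reduce to:
CREATE TABLE episodes (title STRING, air_date STRING)
STORED AS AVRO;
```

This mirrors what HIVE-5783 did for Parquet with STORED AS PARQUET.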
[jira] [Assigned] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengxiang Li reassigned HIVE-7372:
---
Assignee: Chengxiang Li

Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
---
Key: HIVE-7372
URL: https://issues.apache.org/jira/browse/HIVE-7372
Project: Hive
Issue Type: Bug
Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li

In SparkClient.java, if the following property is set, an unpredictable, incorrect result may be observed:
{code}
sparkConf.set("spark.default.parallelism", "1");
{code}
It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores.

-- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23355: Hive unnecessarily validates table SerDes when dropping a table
---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23355/#review47553
---

ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java
https://reviews.apache.org/r/23355/#comment83560
Can you change this message to output just the column name rather than the entire FieldSchema, to match the code that this method replaces?
String partCol = partColsIter.next().getName();

- Jason Dere

On July 9, 2014, 7:55 a.m., Navis Ryu wrote:
---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23355/
---
(Updated July 9, 2014, 7:55 a.m.)

Review request for hive.

Bugs: HIVE-3392
https://issues.apache.org/jira/browse/HIVE-3392

Repository: hive-git

Description
---
natty@hadoop1:~$ hive
hive> add jar /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
Added /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar to class path
Added resource: /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';
OK
Time taken: 2.399 seconds
natty@hadoop1:~$ hive
hive> drop table test;
FAILED: Hive Internal Error: java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe hive.serde.JSONSerDe does not exist))
java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe hive.serde.JSONSerDe does not exist)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
 at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
 at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
 at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
 at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
 at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
 at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
 at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
 ... 20 more
hive> add jar /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
Added /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar to class path
Added resource: /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
hive> drop table test;
OK
Time taken: 0.658 seconds
hive>

Diffs
---
ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 4d8e10c
ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 250756c
ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java 3a1e7fd
ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 3df2690
ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java 2537b75

Diff: https://reviews.apache.org/r/23355/diff/

Testing
---

Thanks,
Navis Ryu
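The stack trace shows why DROP TABLE fails: checkValidity calls getCols, which calls getDeserializer, instantiating the SerDe class even though dropping a table never reads a row. A minimal plain-Java sketch of the fix idea, deferring SerDe loading so metadata-only operations never trigger it (LazyTable and its methods are hypothetical illustrations, not Hive's actual API):

```java
import java.util.function.Supplier;

// Hypothetical sketch: the SerDe is looked up lazily, so a metadata-only
// operation such as DROP TABLE succeeds even when the SerDe class is missing.
public class LazyTable {
    private final Supplier<Object> serdeLoader; // may throw if the class is absent
    private Object deserializer;

    public LazyTable(Supplier<Object> serdeLoader) {
        this.serdeLoader = serdeLoader;
    }

    // Row-reading paths pay the class-loading cost (and surface the error).
    public Object getDeserializer() {
        if (deserializer == null) {
            deserializer = serdeLoader.get();
        }
        return deserializer;
    }

    // DROP TABLE only needs the table name; it never touches the SerDe.
    public String drop(String name) {
        return "OK: dropped " + name;
    }

    public static void main(String[] args) {
        LazyTable t = new LazyTable(() -> {
            throw new RuntimeException("SerDe hive.serde.JSONSerDe does not exist");
        });
        System.out.println(t.drop("test")); // succeeds without loading the SerDe
    }
}
```

With the eager version, constructing or validating the table would already have invoked the loader and failed before the drop could run.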
[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table
[ https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057194#comment-14057194 ] Jason Dere commented on HIVE-3392:
--
Left a comment on RB on the change that I think is causing the failure in TestNegativeCliDriver.testNegativeCliDriver_altern1.

Hive unnecessarily validates table SerDes when dropping a table
---
Key: HIVE-3392
URL: https://issues.apache.org/jira/browse/HIVE-3392
Project: Hive
Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Navis
Labels: patch
Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, HIVE-3392.4.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt

-- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23355: Hive unnecessarily validates table SerDes when dropping a table
---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23355/
---
(Updated July 10, 2014, 6:50 a.m.)

Review request for hive.

Changes
---
Fixed the test failure; rebased to trunk.

Bugs: HIVE-3392
https://issues.apache.org/jira/browse/HIVE-3392

Repository: hive-git

Diffs (updated)
---
ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java bbf89ef
ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java fea1e47
ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java 3a1e7fd
ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 563dbd1
ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java 52a8096

Diff: https://reviews.apache.org/r/23355/diff/

Testing
---

Thanks,
Navis Ryu
[jira] [Updated] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table
[ https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-3392:
---
Attachment: HIVE-3392.5.patch.txt

Hive unnecessarily validates table SerDes when dropping a table
---
Key: HIVE-3392
URL: https://issues.apache.org/jira/browse/HIVE-3392
Project: Hive
Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Navis
Labels: patch
Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, HIVE-3392.4.patch.txt, HIVE-3392.5.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table
[ https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057211#comment-14057211 ] Navis commented on HIVE-3392:
-
Rebased to trunk and fixed that. Thanks.

Hive unnecessarily validates table SerDes when dropping a table
---
Key: HIVE-3392
URL: https://issues.apache.org/jira/browse/HIVE-3392
Project: Hive
Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Navis
Labels: patch
Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, HIVE-3392.4.patch.txt, HIVE-3392.5.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7376) add minimizeJar to jdbc/pom.xml
[ https://issues.apache.org/jira/browse/HIVE-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057227#comment-14057227 ] Hive QA commented on HIVE-7376:
---
{color:red}Overall{color}: -1, at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654941/HIVE-7376.1.patch.txt

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5718 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/732/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/732/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-732/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated. ATTACHMENT ID: 12654941

add minimizeJar to jdbc/pom.xml
---
Key: HIVE-7376
URL: https://issues.apache.org/jira/browse/HIVE-7376
Project: Hive
Issue Type: Bug
Reporter: Eugene Koifman
Attachments: HIVE-7376.1.patch.txt

Adding {code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin reduces the uber jar (hive-jdbc-0.14.0-SNAPSHOT-standalone.jar) from 51MB to 27MB. Is there any reason not to add it? https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#minimizeJar

-- This message was sent by Atlassian JIRA (v6.2#6252)
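For reference, the flag goes inside the shade plugin's configuration in jdbc/pom.xml; a sketch (plugin coordinates are the standard maven-shade-plugin ones; any existing configuration in the real pom is omitted):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- Drop classes not reachable from the project's own classes. -->
    <minimizeJar>true</minimizeJar>
  </configuration>
</plugin>
```

One known caveat of minimizeJar: it removes classes that are only loaded reflectively (common for JDBC drivers and ServiceLoader plugins), so classes needed at runtime may have to be retained with explicit filters.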
[jira] [Updated] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengxiang Li updated HIVE-7372:
---
Attachment: HIVE-7372.patch

The key/value pair passed into SparkCollector.collect would be reused by Hive, so we need to make copies of these key/value BytesWritable objects.

Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
---
Key: HIVE-7372
URL: https://issues.apache.org/jira/browse/HIVE-7372
Project: Hive
Issue Type: Bug
Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
Attachments: HIVE-7372.patch

-- This message was sent by Atlassian JIRA (v6.2#6252)
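The reuse hazard behind this patch can be shown without any Hadoop dependencies. A minimal sketch in which a plain byte[] stands in for the BytesWritable that Hive overwrites between rows (ReuseDemo is illustrative, not the actual SparkCollector code):

```java
import java.util.ArrayList;
import java.util.List;

// Collecting a reference to a reused buffer yields N aliases of the last row;
// collecting a copy preserves each row. This is the bug the patch fixes by
// copying the key/value BytesWritable before collecting.
public class ReuseDemo {
    public static List<byte[]> collect(boolean copy) {
        byte[] reused = new byte[1];               // buffer the framework reuses
        List<byte[]> collected = new ArrayList<>();
        for (byte row = 1; row <= 3; row++) {
            reused[0] = row;                       // overwritten in place per row
            collected.add(copy ? reused.clone() : reused);
        }
        return collected;
    }

    public static void main(String[] args) {
        // Without copying, every entry aliases the same array, so the first
        // collected "row" now holds the last row's value.
        System.out.println(collect(false).get(0)[0]); // 3
        // With copying, each entry keeps its own row.
        System.out.println(collect(true).get(0)[0]);  // 1
    }
}
```

BytesWritable has the same shape: `new BytesWritable(value.copyBytes())` (or an equivalent deep copy) detaches the collected record from the reused instance.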
[jira] [Work started] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-7372 started by Chengxiang Li.

Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
---
Key: HIVE-7372
URL: https://issues.apache.org/jira/browse/HIVE-7372
Project: Hive
Issue Type: Bug
Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
Attachments: HIVE-7372.patch

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengxiang Li updated HIVE-7372:
---
Status: Patch Available (was: In Progress)

Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
---
Key: HIVE-7372
URL: https://issues.apache.org/jira/browse/HIVE-7372
Project: Hive
Issue Type: Bug
Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
Attachments: HIVE-7372.patch

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Work started] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-7371 started by Chengxiang Li.

Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
-
Key: HIVE-7371
URL: https://issues.apache.org/jira/browse/HIVE-7371
Project: Hive
Issue Type: Task
Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li

Currently, the Spark client ships all Hive JARs, including those that Hive depends on, to the Spark cluster when a query is executed by Spark. This is inefficient and can cause library conflicts. Ideally, only a minimum set of JARs would be shipped, and this task is to identify that set. We should learn from the current MR setup, where I assume only the hive-exec JAR is shipped to the cluster. We also need to ensure that user-supplied JARs are shipped to the Spark cluster, just as MR does.

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057246#comment-14057246 ] Chengxiang Li commented on HIVE-7372:
-
select count(*) from test returns 5 results because reducerCount is hard-coded to 5 in SparkClient. Reduce parallelism should take several factors into account; I changed the default value to 1 temporarily as a workaround for this POC.

Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
---
Key: HIVE-7372
URL: https://issues.apache.org/jira/browse/HIVE-7372
Project: Hive
Issue Type: Bug
Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
Attachments: HIVE-7372.patch

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7374) SHOW COMPACTIONS fail on trunk
[ https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057252#comment-14057252 ] Damien Carol commented on HIVE-7374: I looked into the test failures. Here my conclusions : Many tests rely on the fact that the metastore returns an object {{ShowCompactResponse}} with {{compacts}} property at {{null}} But this patch changed that because this throws errors in Thrift (this property is required). I made this change in unit tests : {code} ShowCompactResponse rsp = txnHandler.showCompact(new ShowCompactRequest()); Assert.assertEquals(0, rsp.getCompactsSize()); {code} Instead of : {code} ShowCompactResponse rsp = txnHandler.showCompact(new ShowCompactRequest()); Assert.assertNull(rsp.getCompacts()); {code} This fix these tests : * org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testMarkCleaned * org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompaction * org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompactionNoBase * org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorTableCompaction * org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorPartitionCompaction * org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorTableCompaction * org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactOnManyDifferentPartitionAborts * org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactTableDeltaPctNotHighEnough * org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactTableNotEnoughDeltas * org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactWhenNoCompactSet This test IS NOT due to this patch. 
* org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection SHOW COMPACTIONS fail on trunk -- Key: HIVE-7374 URL: https://issues.apache.org/jira/browse/HIVE-7374 Project: Hive Issue Type: Bug Components: CLI, Metastore Affects Versions: 0.14.0 Reporter: Damien Carol Assignee: Damien Carol Labels: cli, compaction, metastore Attachments: HIVE-7374.1.patch In CLI in trunk after doing this : {{show compactions;}} Return error : {noformat} FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.thrift.transport.TTransportException {noformat} In metatore : {noformat} 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing of message. org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is unset! Struct:ShowCompactResponse(compacts:null) at org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) {noformat} -- This 
message was sent by Atlassian JIRA (v6.2#6252)
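The Thrift constraint behind these test changes can be sketched with a small, self-contained stand-in for the generated {{ShowCompactResponse}}. This is a hypothetical class, not Hive's generated code (which comes from the metastore Thrift IDL), but required fields behave the same way: a required collection must be set, even to an empty list, or validation fails before serialization.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the Thrift-generated ShowCompactResponse;
// the real generated class enforces required fields the same way.
public class ShowCompactResponseSketch {
    private List<String> compacts; // 'required' field in the IDL

    public void setCompacts(List<String> compacts) { this.compacts = compacts; }
    public List<String> getCompacts() { return compacts; }
    public int getCompactsSize() { return compacts == null ? 0 : compacts.size(); }

    // Mirrors the generated validate(): a required field may not be null.
    public void validate() {
        if (compacts == null) {
            throw new IllegalStateException("Required field 'compacts' is unset!");
        }
    }

    public static void main(String[] args) {
        ShowCompactResponseSketch rsp = new ShowCompactResponseSketch();
        // Server-side fix: always set the required field, even when empty.
        rsp.setCompacts(new ArrayList<String>());
        rsp.validate(); // passes; with compacts == null this would throw
        System.out.println("compactsSize=" + rsp.getCompactsSize());
    }
}
```

This is why the tests assert `getCompactsSize() == 0` rather than `getCompacts() == null`: once the server always sets the field, an empty result is an empty list, never null.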
[jira] [Updated] (HIVE-7374) SHOW COMPACTIONS fail on trunk
[ https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Carol updated HIVE-7374: --- Attachment: HIVE-7374.2.patch I rebased the patch and fixed the errors in the unit tests. Ready for review. SHOW COMPACTIONS fail on trunk -- Key: HIVE-7374 URL: https://issues.apache.org/jira/browse/HIVE-7374 Project: Hive Issue Type: Bug Components: CLI, Metastore Affects Versions: 0.14.0 Reporter: Damien Carol Assignee: Damien Carol Labels: cli, compaction, metastore Attachments: HIVE-7374.1.patch, HIVE-7374.2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table
[ https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057275#comment-14057275 ] Jason Dere commented on HIVE-3392: -- +1 if tests pass ok. Hive unnecessarily validates table SerDes when dropping a table --- Key: HIVE-3392 URL: https://issues.apache.org/jira/browse/HIVE-3392 Project: Hive Issue Type: Bug Affects Versions: 0.9.0 Reporter: Jonathan Natkins Assignee: Navis Labels: patch Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, HIVE-3392.4.patch.txt, HIVE-3392.5.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt natty@hadoop1:~$ hive hive> add jar /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar; Added /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar to class path Added resource: /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar hive> create table test (a int) row format serde 'hive.serde.JSONSerDe'; OK Time taken: 2.399 seconds natty@hadoop1:~$ hive hive> drop table test; FAILED: Hive Internal Error: java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe hive.serde.JSONSerDe does not exist)) java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe hive.serde.JSONSerDe does not exist) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253) at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490) at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162) at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943) at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700) at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210) at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:208) Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe com.cloudera.hive.serde.JSONSerDe does not exist) at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260) ... 20 more hive> add jar /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar; Added /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar to class path Added resource: /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar hive> drop table test; OK Time taken: 0.658 seconds hive> -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6806) Native Avro support in Hive
[ https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057276#comment-14057276 ] Damien Carol commented on HIVE-6806: Isn't this JIRA superseded by HIVE-5976? Native Avro support in Hive --- Key: HIVE-6806 URL: https://issues.apache.org/jira/browse/HIVE-6806 Project: Hive Issue Type: New Feature Components: Serializers/Deserializers Affects Versions: 0.12.0 Reporter: Jeremy Beard Assignee: Ashish Kumar Singh Priority: Minor Labels: Avro Attachments: HIVE-6806.patch Avro is well established and widely used within Hive; however, creating Avro-backed tables requires the messy listing of the SerDe, InputFormat and OutputFormat classes. Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had native Avro support. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table
[ https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057330#comment-14057330 ] Hive QA commented on HIVE-3392: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12654951/HIVE-3392.5.patch.txt {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5703 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testReadDataPrimitiveTypes org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/733/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/733/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-733/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12654951 Hive unnecessarily validates table SerDes when dropping a table --- Key: HIVE-3392 URL: https://issues.apache.org/jira/browse/HIVE-3392 Project: Hive Issue Type: Bug Affects Versions: 0.9.0 Reporter: Jonathan Natkins Assignee: Navis Labels: patch Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, HIVE-3392.4.patch.txt, HIVE-3392.5.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt
[jira] [Commented] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057331#comment-14057331 ] Hive QA commented on HIVE-7372: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12654952/HIVE-7372.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/734/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/734/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-734/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]] + export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-734/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ svn = \s\v\n ]] + [[ -n '' ]] + [[ -d apache-svn-trunk-source ]] + [[ ! 
-d apache-svn-trunk-source/.svn ]] + [[ ! -d apache-svn-trunk-source ]] + cd apache-svn-trunk-source + svn revert -R . Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java' Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java' Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java' Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java' Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java' ++ egrep -v '^X|^Performing status on external' ++ awk '{print $2}' ++ svn status --no-ignore + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/core/target hcatalog/streaming/target hcatalog/server-extensions/target hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hwi/target common/target common/src/gen contrib/target service/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target + svn update Fetching external item into 'hcatalog/src/test/e2e/harness' External at revision 1609430. At revision 1609430. 
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12654952 Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch] --- Key: HIVE-7372 URL: https://issues.apache.org/jira/browse/HIVE-7372 Project: Hive Issue Type: Bug Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Attachments: HIVE-7372.patch In SparkClient.java, if the following property is set, unpredictable, incorrect results may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. -- This message was sent by Atlassian JIRA (v6.2#6252)
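The object-reuse hazard suspected in HIVE-7372 can be shown with a short, hypothetical illustration. This is not Hive's actual code path; it only demonstrates the general pattern in which Hive rows (backed by Hadoop Writables that are mutated in place during iteration) can produce wrong results when references are kept without cloning.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the reuse hazard: one "row" buffer is
// mutated in place per record, as Hadoop Writables are during iteration.
public class RowReuseSketch {
    public static List<int[]> collectWithoutClone() {
        int[] row = new int[1];           // single buffer, reused per record
        List<int[]> out = new ArrayList<>();
        for (int v = 1; v <= 3; v++) {
            row[0] = v;
            out.add(row);                 // BUG: stores the shared reference
        }
        return out;                       // every element now reads the last value
    }

    public static List<int[]> collectWithClone() {
        int[] row = new int[1];
        List<int[]> out = new ArrayList<>();
        for (int v = 1; v <= 3; v++) {
            row[0] = v;
            out.add(row.clone());         // defensive copy keeps each value
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(collectWithoutClone().get(0)[0]); // 3, not 1
        System.out.println(collectWithClone().get(0)[0]);    // 1
    }
}
```

With parallelism greater than 1, the same kind of sharing across threads or across cached partitions would make results depend on timing, which matches the "unpredictable" symptom described in the issue.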
Auto scaling on ec2
Scaling a Hadoop cluster running Hive has the following issues: 1. Adding a compute node (scaling up) when load on the cluster is high decreases query execution time, but there is still a large time lag, because the new node works on data fetched from other nodes. 2. Removing a node from the cluster (scaling down) when load on the cluster is low is also time-consuming. To reduce the time needed to scale the Hadoop cluster, we came up with the following solution. Before adding a new node, move data from the existing nodes to the new node. This balances the cluster, and when a new task arrives, the newly added node can take it up because it already has the data (data locality). When decommissioning a node, first move the data on that node to the other nodes in the cluster, then decommission it. We tested this with Hive on Hadoop on a 5-node cluster. *Time taken for the Hive query:* * 4-node cluster: 16 min, 25 sec * Existing procedure (added new node, 5-node cluster): 13 min, 38 sec * New procedure (added new node, 5-node cluster): 9 min, 41 sec Check the results and the approach here: https://github.com/rajuch/Auto-scaling-on-ec2 We would like to hear any drawbacks of, or suggestions on, this approach. -- Thanks Regards, Raju Chinthala
[jira] [Created] (HIVE-7378) Could not build hive 0.13.1 with hadoop 2.2.0
John created HIVE-7378: -- Summary: Could not build hive 0.13.1 with hadoop 2.2.0 Key: HIVE-7378 URL: https://issues.apache.org/jira/browse/HIVE-7378 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: John I attempted to build hive 0.13.1 with hadoop 2.2.0 and got a failure. 1. Steps a. set `hadoop-23.version' to 2.2.0 in the main POM file b. build with the command `mvn clean install -DskipTests -Phadoop-2' 2. Error Messages [INFO] [INFO] Building Hive Shims 0.23 0.13.1 [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-shims-0.23 --- [INFO] Deleting /home/pivotal/PHD/hive/shims/0.23/target [INFO] Deleting /home/pivotal/PHD/hive/shims/0.23 (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-shims-0.23 --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-shims-0.23 --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /home/pivotal/PHD/hive/shims/0.23/src/main/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims-0.23 --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-shims-0.23 --- [INFO] Compiling 4 source files to /home/pivotal/PHD/hive/shims/0.23/target/classes [INFO] - [WARNING] COMPILATION WARNING : [INFO] - [WARNING] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java uses or overrides a deprecated API. [WARNING] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: Recompile with -Xlint:deprecation for details. 
[INFO] 2 warnings [INFO] - [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[25,28] cannot find symbol symbol: class ReadOption location: package org.apache.hadoop.fs [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[28,28] cannot find symbol symbol: class ByteBufferPool location: package org.apache.hadoop.io [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[29,37] cannot find symbol symbol: class DirectDecompressor location: package org.apache.hadoop.io.compress [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[30,63] cannot find symbol symbol: class SnappyDirectDecompressor location: class org.apache.hadoop.io.compress.snappy.SnappyDecompressor [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[32,59] cannot find symbol symbol: class ZlibDirectDecompressor location: class org.apache.hadoop.io.compress.zlib.ZlibDecompressor [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[38,63] cannot find symbol symbol: class ByteBufferPool location: class org.apache.hadoop.hive.shims.ZeroCopyShims [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[59,34] cannot find symbol symbol: class ReadOption location: class org.apache.hadoop.hive.shims.ZeroCopyShims.ZeroCopyAdapter [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[61,34] cannot find symbol symbol: class ReadOption location: class org.apache.hadoop.hive.shims.ZeroCopyShims.ZeroCopyAdapter [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[94,19] cannot find symbol symbol: class 
DirectDecompressor location: class org.apache.hadoop.hive.shims.ZeroCopyShims.DirectDecompressorAdapter [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[96,38] cannot find symbol symbol: class DirectDecompressor location: class org.apache.hadoop.hive.shims.ZeroCopyShims.DirectDecompressorAdapter [ERROR] /home/pivotal/PHD/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[45,5] method does not override or
[jira] [Commented] (HIVE-7374) SHOW COMPACTIONS fail on trunk
[ https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057373#comment-14057373 ] Hive QA commented on HIVE-7374: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12654955/HIVE-7374.2.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5718 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/735/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/735/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-735/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12654955 SHOW COMPACTIONS fail on trunk -- Key: HIVE-7374 URL: https://issues.apache.org/jira/browse/HIVE-7374 Project: Hive Issue Type: Bug Components: CLI, Metastore Affects Versions: 0.14.0 Reporter: Damien Carol Assignee: Damien Carol Labels: cli, compaction, metastore Attachments: HIVE-7374.1.patch, HIVE-7374.2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7213) COUNT(*) returns out-dated count value after TRUNCATE or INSERT INTO
[ https://issues.apache.org/jira/browse/HIVE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057430#comment-14057430 ] Moustafa Aboul Atta commented on HIVE-7213: --- [~ashutoshc] thanks COUNT(*) returns out-dated count value after TRUNCATE or INSERT INTO Key: HIVE-7213 URL: https://issues.apache.org/jira/browse/HIVE-7213 Project: Hive Issue Type: Bug Components: Query Processor, Statistics Affects Versions: 0.13.0 Environment: HDP 2.1 Windows Server 2012 64-bit Reporter: Moustafa Aboul Atta Counting the rows in a table with {{SELECT COUNT( * ) FROM t}} always returns only the number of rows last added through {{INSERT INTO TABLE t SELECT r FROM t2}}. However, running {{SELECT * FROM t}} returns the expected results, i.e. both the old and the newly added rows. Likewise, after {{TRUNCATE TABLE t;}}, COUNT(*) still returns the original row count of the table, although {{SELECT * FROM t;}} returns nothing, as expected. -- This message was sent by Atlassian JIRA (v6.2#6252)
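Given that the issue is filed under the Statistics component, one plausible cause (an assumption, not confirmed in this thread) is that COUNT(*) is being answered from stale table statistics rather than by scanning the data. If so, disabling stats-based query answering should return the true count:

```sql
-- Hedged workaround sketch: hive.compute.query.using.stats controls whether
-- Hive answers aggregates such as COUNT(*) from stored table statistics.
set hive.compute.query.using.stats=false;
SELECT COUNT(*) FROM t;
```

If this returns the correct count, the stale values point at statistics not being invalidated by TRUNCATE/INSERT rather than at the query execution itself.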
[jira] [Updated] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-7372: -- Description: In SparkClient.java, if the following property is set, unpredictable, incorrect results may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. NO PRECOMMIT TESTS. This is for spark branch only. was: In SparkClient.java, if the following property is set, unpredictable, incorrect results may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch] --- Key: HIVE-7372 URL: https://issues.apache.org/jira/browse/HIVE-7372 Project: Hive Issue Type: Bug Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Attachments: HIVE-7372.patch In SparkClient.java, if the following property is set, unpredictable, incorrect results may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. NO PRECOMMIT TESTS. This is for spark branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-7371: -- Description: Currently, Spark client ships all Hive JARs, including those that Hive depends on, to Spark cluster when a query is executed by Spark. This is not efficient, causing potential library conflicts. Ideally, only a minimum set of JARs needs to be shipped. This task is to identify such a set. We should learn from current MR cluster, for which I assume only hive-exec JAR is shipped to MR cluster. We also need to ensure that user-supplied JARs are also shipped to Spark cluster, in a similar fashion as MR does. NO PRECOMMIT TESTS. This is for spark-branch only. was: Currently, Spark client ships all Hive JARs, including those that Hive depends on, to Spark cluster when a query is executed by Spark. This is not efficient, causing potential library conflicts. Ideally, only a minimum set of JARs needs to be shipped. This task is to identify such a set. We should learn from current MR cluster, for which I assume only hive-exec JAR is shipped to MR cluster. We also need to ensure that user-supplied JARs are also shipped to Spark cluster, in a similar fashion as MR does. Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch] - Key: HIVE-7371 URL: https://issues.apache.org/jira/browse/HIVE-7371 Project: Hive Issue Type: Task Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Currently, Spark client ships all Hive JARs, including those that Hive depends on, to Spark cluster when a query is executed by Spark. This is not efficient, causing potential library conflicts. Ideally, only a minimum set of JARs needs to be shipped. This task is to identify such a set. We should learn from current MR cluster, for which I assume only hive-exec JAR is shipped to MR cluster. We also need to ensure that user-supplied JARs are also shipped to Spark cluster, in a similar fashion as MR does. 
NO PRECOMMIT TESTS. This is for spark-branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
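The direction HIVE-7371 describes can be sketched as a filtering step: instead of shipping every JAR on Hive's classpath, select only an explicit minimal set plus the user-supplied JARs. The class, method names, and the one-entry allowlist below are illustrative assumptions; identifying the real minimal set is the task itself.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: keep only the JARs the Spark cluster actually needs
// from a full Hive classpath, plus user-supplied JARs (mirroring what MR does
// with hive-exec).
public class SparkJarSelector {
    private static final List<String> REQUIRED_PREFIXES =
        Arrays.asList("hive-exec"); // illustrative minimal set, not confirmed

    public static List<String> selectJars(List<String> classpath, List<String> userJars) {
        List<String> out = new ArrayList<>();
        for (String jar : classpath) {
            for (String prefix : REQUIRED_PREFIXES) {
                if (jar.startsWith(prefix)) {
                    out.add(jar); // matches the allowlist, ship it
                    break;
                }
            }
        }
        out.addAll(userJars); // user-supplied JARs always ship
        return out;
    }

    public static void main(String[] args) {
        List<String> cp = Arrays.asList(
            "hive-exec-0.14.0.jar", "hive-metastore-0.14.0.jar", "guava-11.0.2.jar");
        System.out.println(selectJars(cp, Arrays.asList("my-udfs.jar")));
    }
}
```

An allowlist like this also gives a single place to audit for the library-conflict concern raised in the description.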
[jira] [Created] (HIVE-7379) Beeline to fetch full stack trace for job (task) failures
Viji created HIVE-7379: -- Summary: Beeline to fetch full stack trace for job (task) failures Key: HIVE-7379 URL: https://issues.apache.org/jira/browse/HIVE-7379 Project: Hive Issue Type: Improvement Components: CLI, Clients, JDBC Affects Versions: 0.13.0, 0.12.0 Reporter: Viji Priority: Minor When a query submitted via Beeline fails, Beeline displays a generic error message as below: {quote}FAILED: Execution Error, return code 1 from …{quote} This is expected, as Beeline is basically a regular JDBC client and is hence limited by JDBC's capabilities today. But it would be useful if Beeline can return the full remote stack trace and task diagnostics or job ID. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057509#comment-14057509 ] Xuefu Zhang commented on HIVE-7372: --- Thanks for working on this, [~chengxiang li]. Patch looks good to me. One minor nit: for the cloning, it might be better to reuse some existing utility methods, or to put our implementation in a utility class for later reuse. Could you please also check if the same problem exists in HiveReduceFunction, where rows are clustered? If so, that can be addressed in a separate JIRA. Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch] --- Key: HIVE-7372 URL: https://issues.apache.org/jira/browse/HIVE-7372 Project: Hive Issue Type: Bug Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Attachments: HIVE-7372.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6806) Native Avro support in Hive
[ https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057543#comment-14057543 ] Brock Noland commented on HIVE-6806: I don't think this is superseded by HIVE-5976; rather, we have to figure out which one goes first, and the other will have to update its patch. Since HIVE-5976 is a larger patch and will make this patch much smaller, I am inclined to let that one go first. [~davidzchen] what are your thoughts? Native Avro support in Hive --- Key: HIVE-6806 URL: https://issues.apache.org/jira/browse/HIVE-6806 Project: Hive Issue Type: New Feature Components: Serializers/Deserializers Affects Versions: 0.12.0 Reporter: Jeremy Beard Assignee: Ashish Kumar Singh Priority: Minor Labels: Avro Attachments: HIVE-6806.patch Avro is well established and widely used within Hive, however creating Avro-backed tables requires the messy listing of the SerDe, InputFormat and OutputFormat classes. Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had native Avro support. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47568 --- I *love* this patch! Thank you so much. serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java https://reviews.apache.org/r/23387/#comment83585 Can we add some unit tests for this class? serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java https://reviews.apache.org/r/23387/#comment83588 Two thoughts: 1) Char/varchar support? 2) By defaulting to null won't any new types end up with null if this code is not updated? I think instead we should throw an exception for unknown types. - Brock Noland On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION 
ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
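Brock's second point above (throwing for unknown types instead of defaulting to null) could be sketched like this. The mapping table and method name are hypothetical simplifications, not the patch's actual AvroSchemaGenerator code:

```java
import java.util.HashMap;
import java.util.Map;

public class TypeMapping {
    private static final Map<String, String> HIVE_TO_AVRO = new HashMap<>();
    static {
        HIVE_TO_AVRO.put("int", "int");
        HIVE_TO_AVRO.put("bigint", "long");
        HIVE_TO_AVRO.put("string", "string");
        // Char/varchar would map to Avro string, per the first review comment.
        HIVE_TO_AVRO.put("char", "string");
        HIVE_TO_AVRO.put("varchar", "string");
    }

    // Throw for unknown types rather than silently returning null, so
    // newly added Hive types cannot slip through unmapped.
    static String toAvroType(String hiveType) {
        String avro = HIVE_TO_AVRO.get(hiveType);
        if (avro == null) {
            throw new UnsupportedOperationException(hiveType + " is not yet supported");
        }
        return avro;
    }
}
```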
Dealing with monstrous hive startup overhead
So everyone is running around saying Hive is slow and X is faster. I think Hive's biggest issue is that the entire MR2 process of acquiring containers and then launching a job in them is overkill. I see it result in 40 seconds of startup time for what amounts to a 2-second job. In the old Hadoop 0.20.2 days these queries were much faster. Honestly, I know everyone is in the camp that Tez/Spark is some magical answer, but how about we make a YARN service that just keeps N containers per node open and ready for action? Cut out the entire ask-the-manager-for-containers step on each job.
[jira] [Created] (HIVE-7380) HWI war is not packaged in tar.gz
Brock Noland created HIVE-7380: -- Summary: HWI war is not packaged in tar.gz Key: HIVE-7380 URL: https://issues.apache.org/jira/browse/HIVE-7380 Project: Hive Issue Type: Bug Reporter: Brock Noland packaging pom or assembly needs to be modified to include the HWI interface -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7380) HWI war is not packaged in tar.gz
[ https://issues.apache.org/jira/browse/HIVE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057651#comment-14057651 ] Brock Noland commented on HIVE-7380: FYI [~appodictic] HWI war is not packaged in tar.gz - Key: HIVE-7380 URL: https://issues.apache.org/jira/browse/HIVE-7380 Project: Hive Issue Type: Bug Reporter: Brock Noland packaging pom or assembly needs to be modified to include the HWI interface -- This message was sent by Atlassian JIRA (v6.2#6252)
Need help on hive error -- Error: GC overhead limit exceeded
Hi Team, We are facing an error while running a Hive query: "Error: GC overhead limit exceeded". Can you please help us find a setting to overcome this? Thanks, Hazarath.
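A common first step for a "GC overhead limit exceeded" error is to give the failing JVM a larger heap; which JVM to grow depends on whether the error comes from the Hive client or from the map/reduce tasks. The values below are illustrative suggestions, not tuned recommendations:

```shell
# If the Hive client JVM itself runs out, raise its heap (value illustrative).
export HADOOP_CLIENT_OPTS="-Xmx4g $HADOOP_CLIENT_OPTS"

# If the error appears in task logs, raise the task JVM heap instead,
# e.g. from within the Hive session (property name per Hadoop 1.x):
#   set mapred.child.java.opts=-Xmx2g;
```

If raising the heap only delays the error, the query itself (e.g. a map-side join loading too large a table) usually needs revisiting.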
[jira] [Commented] (HIVE-7374) SHOW COMPACTIONS fail on trunk
[ https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057735#comment-14057735 ] Damien Carol commented on HIVE-7374: Work done. Ready for review. SHOW COMPACTIONS fail on trunk -- Key: HIVE-7374 URL: https://issues.apache.org/jira/browse/HIVE-7374 Project: Hive Issue Type: Bug Components: CLI, Metastore Affects Versions: 0.14.0 Reporter: Damien Carol Assignee: Damien Carol Labels: cli, compaction, metastore Attachments: HIVE-7374.1.patch, HIVE-7374.2.patch In the CLI on trunk, running {{show compactions;}} returns the error: {noformat} FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.thrift.transport.TTransportException {noformat} In the metastore log: {noformat} 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing of message. org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is unset! 
Struct:ShowCompactResponse(compacts:null) at org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
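The TProtocolException above is generic Thrift behavior: generated code validates required fields before serializing a response, so a server that never sets {{compacts}} fails at write time. A minimal model of the pattern, in plain Java rather than the generated Hive class:

```java
import java.util.List;

public class ShowCompactResponseModel {
    // Models a Thrift struct declared with a required list field:
    //   struct ShowCompactResponse { 1: required list<...> compacts }
    List<String> compacts; // stays null unless the server sets it

    // Generated Thrift code performs a check like this before writing
    // the response to the wire; a null required field aborts serialization.
    void validate() {
        if (compacts == null) {
            throw new IllegalStateException("Required field 'compacts' is unset!");
        }
    }
}
```

The usual server-side fix is to always initialize the field, e.g. with an empty list when there are no compactions, rather than leaving it null.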
Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/#review47565 --- metastore/if/hive_metastore.thrift https://reviews.apache.org/r/23373/#comment83571 Making these fields optional can be confusing for external users. Should we just update this unit test to set some dummy values? metastore/if/hive_metastore.thrift https://reviews.apache.org/r/23373/#comment83572 Can you also add comments saying that the grant_role and revoke_role functions are deprecated? Unfortunately, thrift does not seem to have proper built-in support for deprecation. (https://issues.apache.org/jira/browse/THRIFT-640) ql/src/test/results/clientnegative/authorization_role_grant2.q.out https://reviews.apache.org/r/23373/#comment83663 This is not related to your changes, but can you make this minor correction to the error message? "ADMIN privileges on role" is better worded as "ADMIN OPTION on role". - Thejas Nair On July 10, 2014, 2:53 a.m., Jason Dere wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/ --- (Updated July 10, 2014, 2:53 a.m.) Review request for hive and Thejas Nair. 
Bugs: HIVE-6252 https://issues.apache.org/jira/browse/HIVE-6252 Repository: hive-git Description --- Parser changes - support REVOKE ADMIN ROLE FOR New grant_revoke_role() thrift metastore method Diffs - itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java 6b2f28e metastore/if/hive_metastore.thrift d425d2b metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 2a1b4d7 metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 9567874 metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp b18009c metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h a0f208a metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp a6cd09a metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsRequest.java 791c46b metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsResult.java 2471690 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java aa647d4 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java b8d5a56 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Function.java 4a24bbf metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsInfoResponse.java 427204e metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsResponse.java eda18ad metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPrincipalsInRoleResponse.java 083699b metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetRoleGrantsForPrincipalResponse.java f745c08 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HeartbeatTxnRangeResponse.java 0fc4310 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HiveObjectRef.java 997060f metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LockRequest.java c35aadd 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/OpenTxnsResponse.java 3d47286 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java 312807e metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java ea8f0bb metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsRequest.java a46bdc8 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsResult.java 27f654d metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java eea86e5 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrivilegeBag.java a4687ad metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java 5119b83 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java d91ca2d metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowCompactResponse.java a9f9f7c metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowLocksResponse.java d2657e0 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java 83438c7
[jira] [Assigned] (HIVE-7329) Create SparkWork
[ https://issues.apache.org/jira/browse/HIVE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang reassigned HIVE-7329: - Assignee: Xuefu Zhang Create SparkWork Key: HIVE-7329 URL: https://issues.apache.org/jira/browse/HIVE-7329 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Xuefu Zhang Assignee: Xuefu Zhang This class encapsulates all the work objects that can be executed in a single Spark job. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7381) Class TezEdgeProperty missing license header
Xuefu Zhang created HIVE-7381: - Summary: Class TezEdgeProperty missing license header Key: HIVE-7381 URL: https://issues.apache.org/jira/browse/HIVE-7381 Project: Hive Issue Type: Task Components: Documentation Affects Versions: 0.13.1, 0.13.0 Reporter: Xuefu Zhang Priority: Trivial -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api
On July 10, 2014, 6:05 p.m., Thejas Nair wrote: metastore/if/hive_metastore.thrift, line 178 https://reviews.apache.org/r/23373/diff/1-2/?file=627078#file627078line178 Making these fields optional can be confusing for external users. Should we just update this unit test to set some dummy values? Will there be any backward compatibility issues here since the previous grant/revoke role methods allowed null? Especially since grant/revoke will now be using this new call. I can define them as default fields (neither required nor optional), which I think will still allow the nulls. Making them required caused odd errors in TestAuthzApiEmbedAuthorizerInRemote when nulls were passed in - I don't think they should be marked as required. While the thrift call probably should have failed altogether due to the invalid request struct, the method somehow still got invoked, but with a null request struct. On July 10, 2014, 6:05 p.m., Thejas Nair wrote: metastore/if/hive_metastore.thrift, line 980 https://reviews.apache.org/r/23373/diff/2/?file=627481#file627481line980 Can you also add comments saying that the grant_role and revoke_role functions are deprecated? Unfortunately, thrift does not seem to have proper built-in support for deprecation. (https://issues.apache.org/jira/browse/THRIFT-640) Ok, will do. On July 10, 2014, 6:05 p.m., Thejas Nair wrote: ql/src/test/results/clientnegative/authorization_role_grant2.q.out, line 62 https://reviews.apache.org/r/23373/diff/2/?file=627538#file627538line62 This is not related to your changes, but can you make this minor correction to the error message? "ADMIN privileges on role" is better worded as "ADMIN OPTION on role". Will change. - Jason --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/#review47565 --- On July 10, 2014, 2:53 a.m., Jason Dere wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/ --- (Updated July 10, 2014, 2:53 a.m.) 
Review request for hive and Thejas Nair. Bugs: HIVE-6252 https://issues.apache.org/jira/browse/HIVE-6252 Repository: hive-git Description --- Parser changes - support REVOKE ADMIN ROLE FOR New grant_revoke_role() thrift metastore method Diffs - itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java 6b2f28e metastore/if/hive_metastore.thrift d425d2b metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 2a1b4d7 metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 9567874 metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp b18009c metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h a0f208a metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp a6cd09a metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsRequest.java 791c46b metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsResult.java 2471690 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java aa647d4 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java b8d5a56 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Function.java 4a24bbf metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsInfoResponse.java 427204e metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsResponse.java eda18ad metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPrincipalsInRoleResponse.java 083699b metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetRoleGrantsForPrincipalResponse.java f745c08 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HeartbeatTxnRangeResponse.java 0fc4310 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HiveObjectRef.java 997060f 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LockRequest.java c35aadd metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/OpenTxnsResponse.java 3d47286 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java 312807e metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java ea8f0bb metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsRequest.java a46bdc8
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47609 --- serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java https://reviews.apache.org/r/23387/#comment83713 It might be better to call this TypeInfoToSchema to make it consistent with SchemaToTypeInfo, which converts from Avro Schema to Hive TypeInfo. - David Chen On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION 
serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
[jira] [Commented] (HIVE-6806) Native Avro support in Hive
[ https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057891#comment-14057891 ] David Chen commented on HIVE-6806: -- Thanks for contributing this, [~singhashish]! I agree with [~brocknoland] on this. Once HIVE-5976 goes in, then this patch will become much simpler since adding native support for Avro will no longer require the changes to the Hive grammar and parser. Native Avro support in Hive --- Key: HIVE-6806 URL: https://issues.apache.org/jira/browse/HIVE-6806 Project: Hive Issue Type: New Feature Components: Serializers/Deserializers Affects Versions: 0.12.0 Reporter: Jeremy Beard Assignee: Ashish Kumar Singh Priority: Minor Labels: Avro Attachments: HIVE-6806.patch Avro is well established and widely used within Hive, however creating Avro-backed tables requires the messy listing of the SerDe, InputFormat and OutputFormat classes. Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had native Avro support. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6806) Native Avro support in Hive
[ https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057906#comment-14057906 ] Ashish Kumar Singh commented on HIVE-6806: -- [~brocknoland] and [~davidchen], thanks for reviewing the work. Sounds good to me. I have to address [~brocknoland]'s review comments anyway. Native Avro support in Hive --- Key: HIVE-6806 URL: https://issues.apache.org/jira/browse/HIVE-6806 Project: Hive Issue Type: New Feature Components: Serializers/Deserializers Affects Versions: 0.12.0 Reporter: Jeremy Beard Assignee: Ashish Kumar Singh Priority: Minor Labels: Avro Attachments: HIVE-6806.patch Avro is well established and widely used within Hive, however creating Avro-backed tables requires the messy listing of the SerDe, InputFormat and OutputFormat classes. Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had native Avro support. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47612 --- ql/src/test/queries/clientpositive/avro_decimal_native.q https://reviews.apache.org/r/23387/#comment83721 I think we can remove COMMENT 'from deserializer' as well. Thx!! - Brock Noland On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION 
serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
[jira] [Commented] (HIVE-7286) Parameterize HCatMapReduceTest for testing against all Hive storage formats
[ https://issues.apache.org/jira/browse/HIVE-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057911#comment-14057911 ] David Chen commented on HIVE-7286: -- Currently, the patch will only test against the entries in the table that specify a TestStorageFormat class since there is currently no way to infer the mapping of SerDe to InputFormat/OutputFormat. Once HIVE-5976 is in, then the TestStorageFormat classes will be removed and the StorageFormatDescriptor classes will be used. However, this will require additional code changes to properly handle storage formats with configurable SerDes, such as RCFile. The test code for Avro is in this patch, but the tests (along with the tests for Parquet) will currently fail due to HIVE-4329. To clarify, once HIVE-5976 is in and the corresponding changes to this fixture are made, then SerDe devs would only need to add an entry to the table if they _do not_ want their SerDe tested against HCatalog. Parameterize HCatMapReduceTest for testing against all Hive storage formats --- Key: HIVE-7286 URL: https://issues.apache.org/jira/browse/HIVE-7286 Project: Hive Issue Type: Test Components: HCatalog Reporter: David Chen Assignee: David Chen Attachments: HIVE-7286.1.patch Currently, HCatMapReduceTest is extended by the following test suites: * TestHCatDynamicPartitioned * TestHCatNonPartitioned * TestHCatPartitioned * TestHCatExternalDynamicPartitioned * TestHCatExternalNonPartitioned * TestHCatExternalPartitioned * TestHCatMutableDynamicPartitioned * TestHCatMutableNonPartitioned * TestHCatMutablePartitioned These tests run against RCFile. Currently, only TestHCatDynamicPartitioned is run against any other storage format (ORC). Ideally, HCatalog should be tested against all storage formats supported by Hive. The easiest way to accomplish this is to turn HCatMapReduceTest into a parameterized test fixture that enumerates all Hive storage formats. 
Until HIVE-5976 is implemented, we would need to manually create the mapping of SerDe to InputFormat and OutputFormat. This way, we can explicitly keep track of which storage formats currently work with HCatalog and which ones are untested or have test failures. The test fixture should also use reflection to find all classes in the classpath that implement the SerDe interface and raise a failure if any of them are not enumerated. -- This message was sent by Atlassian JIRA (v6.2#6252)
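The parameterization described above is, in JUnit terms, a Parameterized runner over a table of storage formats. As a plain-Java sketch of the same idea (the format names and method names are hypothetical, not the actual HCatMapReduceTest API):

```java
import java.util.Arrays;
import java.util.List;

public class StorageFormatMatrix {
    // Stand-in for the SerDe -> InputFormat/OutputFormat table the
    // fixture would maintain until HIVE-5976 makes the mapping discoverable.
    static final List<String> FORMATS =
            Arrays.asList("rcfile", "orc", "textfile", "sequencefile");

    // The real fixture would be a JUnit Parameterized runner; here the
    // same idea is shown as a plain loop over the format table.
    static int runAll() {
        int ran = 0;
        for (String format : FORMATS) {
            // Hypothetical per-format steps:
            // createTableStoredAs(format); writeRows(); readBackAndCompare();
            ran++;
        }
        return ran;
    }
}
```

A new storage format would then only need a row in the table (or, post-HIVE-5976, a StorageFormatDescriptor) to be covered by every suite that extends the fixture.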
[jira] [Commented] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057919#comment-14057919 ] Gunther Hagleitner commented on HIVE-6988: -- .2 is rebased + fixes routeErrorInput... incompat Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x - tez.0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-6988: - Attachment: HIVE-6988.2.patch Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x - tez.0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-6988: - Status: Open (was: Patch Available) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x - tez.0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-6988: - Status: Patch Available (was: Open) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x - tez.0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057936#comment-14057936 ] Xuefu Zhang commented on HIVE-7372: --- [~chengxiang li] In my original POC code, I had the following way of cloning, which somehow got lost during refactoring/rebasing. {code} @Override public void collect(BytesWritable key, BytesWritable value) throws IOException { result.add(new Tuple2<BytesWritable, BytesWritable>(new BytesWritable(key.copyBytes()), new BytesWritable(value.copyBytes()))); } {code} Could you evaluate this and your approach? Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch] --- Key: HIVE-7372 URL: https://issues.apache.org/jira/browse/HIVE-7372 Project: Hive Issue Type: Bug Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Attachments: HIVE-7372.patch In SparkClient.java, if the following property is set, an unpredictable, incorrect result may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. NO PRECOMMIT TESTS. This is for the spark branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
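Background for the cloning discussion: Hadoop input formats routinely reuse the same Writable instance for every record, so collecting references without copying leaves every collected row pointing at the last record's bytes. A self-contained sketch of the hazard and the copy-per-record fix (plain byte arrays stand in for BytesWritable):

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseHazard {
    // Models Hadoop's object reuse: the "reader" hands back the same
    // mutable buffer for every record, just like a reused BytesWritable.
    static List<byte[]> collect(boolean defensiveCopy) {
        byte[] shared = new byte[1];
        List<byte[]> result = new ArrayList<>();
        for (byte b = 'a'; b <= 'c'; b++) {
            shared[0] = b; // the reader overwrites the shared buffer
            result.add(defensiveCopy ? shared.clone() : shared);
        }
        return result;
    }

    public static void main(String[] args) {
        // Without copying, every collected entry aliases the same buffer,
        // so the first entry shows the LAST value written ('c').
        System.out.println((char) collect(false).get(0)[0]); // prints c
        // With a copy per record (the copyBytes() approach above),
        // each entry keeps its own value.
        System.out.println((char) collect(true).get(0)[0]);  // prints a
    }
}
```

This is why the copy must happen inside collect(); deferring it until after the iterator advances is already too late.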
[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords
[ https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057974#comment-14057974 ] David Chen commented on HIVE-5976: -- [~brocknoland] I realized that the reason why the alter table tests are now passing and a number of the create table tests are now failing is that the create table and alter table codepaths do things differently in terms of setting the SerDe for text and sequencefile. The create table codepath does not set the SerDe for these two storage formats (see BaseSemanticAnalyzer.StorageFormat.fillStorageFormat()). However, the alter table codepath, in fact, does (see DDLSemanticAnalyzer.analyzeAlterTableFileFormat()) and sets them to LazySimpleSerDe. Now that both code paths go through the new StorageFormat class, not setting the SerDe to LazySimpleSerDe for text and sequencefile causes alter table fileformat to fail because the SerDe remains unchanged -- which is clearly incorrect -- but setting the SerDe to LazySimpleSerDe causes the create table tests to fail because the SerDe is now being set, adding one extra line to the expected output. It seems to me that the create table codepath should also set the SerDe to LazySimpleSerDe for text and sequencefile. Is there a reason why it is currently not doing so? Decouple input formats from STORED as keywords -- Key: HIVE-5976 URL: https://issues.apache.org/jira/browse/HIVE-5976 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch As noted in HIVE-5783, we hard code the input formats mapped to keywords. It'd be nice if there was a registration system so we didn't need to do that. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/ --- (Updated July 10, 2014, 9:20 p.m.) Review request for hive and Thejas Nair. Changes --- Changes based on Thejas' feedback Bugs: HIVE-6252 https://issues.apache.org/jira/browse/HIVE-6252 Repository: hive-git Description --- Parser changes - support REVOKE ADMIN ROLE FOR New grant_revoke_role() thrift metastore method Diffs (updated) - itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java 6b2f28e metastore/if/hive_metastore.thrift d425d2b metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java acef599 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 0595b09 metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 0c2209b metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 911c997 metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java 5c00aa1 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java 5025b83 ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java bbf89ef ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java fea1e47 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 5ac6452 ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java 419117c ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java 6ede03c ql/src/test/queries/clientnegative/authorization_role_grant2.q PRE-CREATION ql/src/test/queries/clientpositive/authorization_role_grant1.q 051bdee ql/src/test/results/clientnegative/authorization_role_grant.q.out 0e5e724 ql/src/test/results/clientnegative/authorization_role_grant2.q.out PRE-CREATION ql/src/test/results/clientpositive/authorization_role_grant1.q.out cdbcb26 Diff: 
https://reviews.apache.org/r/23373/diff/ Testing --- unit tests added Thanks, Jason Dere
[jira] [Updated] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api
[ https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-6252: - Attachment: HIVE-6252.3.patch Patch v3, changes based on Thejas feedback. sql std auth - support 'with admin option' in revoke role metastore api --- Key: HIVE-6252 URL: https://issues.apache.org/jira/browse/HIVE-6252 Project: Hive Issue Type: Sub-task Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Jason Dere Attachments: HIVE-6252.1.patch, HIVE-6252.2.patch, HIVE-6252.3.patch Original Estimate: 24h Remaining Estimate: 24h The metastore api for revoking role privileges does not accept 'with admin option' , though the syntax supports it. SQL syntax also supports grantor specification in revoke role statement. It should be similar to the grant_role api. -- This message was sent by Atlassian JIRA (v6.2#6252)
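[Editor's illustration] For readers unfamiliar with the feature, the SQL-standard shape of the statements involved looks roughly like this. The role and user names are illustrative, and the exact Hive grammar is defined by the patch, not by this sketch.

```sql
-- SQL-standard style: revoke only the admin option on a role
-- (the grantee keeps the role itself, but can no longer grant it).
REVOKE ADMIN OPTION FOR role_analyst FROM USER user_bob;

-- Compare: revoking the role entirely.
REVOKE role_analyst FROM USER user_bob;
```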
Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/#review47624 --- Ship it! Ship It! - Thejas Nair On July 10, 2014, 9:20 p.m., Jason Dere wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23373/ --- (Updated July 10, 2014, 9:20 p.m.) Review request for hive and Thejas Nair. Bugs: HIVE-6252 https://issues.apache.org/jira/browse/HIVE-6252 Repository: hive-git Description --- Parser changes - support REVOKE ADMIN ROLE FOR New grant_revoke_role() thrift metastore method Diffs - itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java 6b2f28e metastore/if/hive_metastore.thrift d425d2b metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java acef599 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 0595b09 metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 0c2209b metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 911c997 metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java 5c00aa1 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java 5025b83 ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java bbf89ef ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java fea1e47 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 5ac6452 ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java 419117c ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java 6ede03c ql/src/test/queries/clientnegative/authorization_role_grant2.q PRE-CREATION ql/src/test/queries/clientpositive/authorization_role_grant1.q 051bdee ql/src/test/results/clientnegative/authorization_role_grant.q.out 0e5e724 
ql/src/test/results/clientnegative/authorization_role_grant2.q.out PRE-CREATION ql/src/test/results/clientpositive/authorization_role_grant1.q.out cdbcb26 Diff: https://reviews.apache.org/r/23373/diff/ Testing --- unit tests added Thanks, Jason Dere
[jira] [Commented] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api
[ https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057987#comment-14057987 ] Thejas M Nair commented on HIVE-6252: - +1 sql std auth - support 'with admin option' in revoke role metastore api --- Key: HIVE-6252 URL: https://issues.apache.org/jira/browse/HIVE-6252 Project: Hive Issue Type: Sub-task Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Jason Dere Attachments: HIVE-6252.1.patch, HIVE-6252.2.patch, HIVE-6252.3.patch Original Estimate: 24h Remaining Estimate: 24h The metastore api for revoking role privileges does not accept 'with admin option' , though the syntax supports it. SQL syntax also supports grantor specification in revoke role statement. It should be similar to the grant_role api. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7342: Attachment: HIVE-7342.2.patch support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive, i.e., components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server-specific hive-site.xml, so that you can have some different configuration values set for a server. For example, you might want to enable authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component-specific configuration as a command-line (-hiveconf) argument. Using server-specific config files (e.g. hiveserver2-site.xml, metastore-site.xml) that override the entries in hive-site.xml will make the configuration much easier to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23319: HIVE-7342 - support hiveserver2, metastore specific config files
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23319/ --- (Updated July 10, 2014, 9:37 p.m.) Review request for hive. Changes --- With the earlier patch, hivemetastore-site.xml would take precedence over hiveserver2-site.xml if the embedded metastore is used with hiveserver2, as hivemetastore-site.xml was getting added later. With this change, HiveConf initialization itself checks whether the embedded metastore is used and loads hivemetastore-site.xml. This way the order of adding the resources to the Configuration always remains the same. The patch also adds tests for both embedded and remote metastore mode. The order of precedence (later entries take precedence): hive-site.xml - hivemetastore-site.xml - hiveserver2-site.xml - HiveConf.ConfVars set through system properties (same as ones set through -hiveconf cmdline params) Bugs: HIVE-7342 https://issues.apache.org/jira/browse/HIVE-7342 Repository: hive-git Description --- See jira Diffs (updated) - common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8bff2a9 common/src/java/org/apache/hadoop/hive/conf/HiveConfUtil.java PRE-CREATION data/conf/hive-site.xml 1c9c598 data/conf/hivemetastore-site.xml PRE-CREATION data/conf/hiveserver2-site.xml PRE-CREATION itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestServerSpecificConfig.java PRE-CREATION metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java acef599 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 664dccd service/src/java/org/apache/hive/service/cli/thrift/EmbeddedThriftBinaryCLIService.java 62b1d9c service/src/java/org/apache/hive/service/server/HiveServer2.java e7ed267 Diff: https://reviews.apache.org/r/23319/diff/ Testing --- New tests added Thanks, Thejas Nair
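[Editor's illustration] The precedence order above can be pictured with a hypothetical override. The file names come from the patch; the property shown is merely an example of a value one might want to differ per server.

```xml
<!-- hive-site.xml (shared by all components) -->
<property>
  <name>hive.security.authorization.enabled</name>
  <value>false</value>
</property>

<!-- hiveserver2-site.xml (loaded later, so this value wins for HiveServer2 only) -->
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
</property>
```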
[jira] [Updated] (HIVE-7369) Support agg distinct function with GB
[ https://issues.apache.org/jira/browse/HIVE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-7369: - Status: Patch Available (was: Open) Support agg distinct function with GB - Key: HIVE-7369 URL: https://issues.apache.org/jira/browse/HIVE-7369 Project: Hive Issue Type: Sub-task Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-7369.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7369) Support agg distinct function with GB
[ https://issues.apache.org/jira/browse/HIVE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-7369: - Attachment: HIVE-7369.patch Support agg distinct function with GB - Key: HIVE-7369 URL: https://issues.apache.org/jira/browse/HIVE-7369 Project: Hive Issue Type: Sub-task Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-7369.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14057996#comment-14057996 ] Thejas M Nair commented on HIVE-7342: - [~sushanth] Thanks for prompting me to take a closer look at the precedence! I found an issue, here is the updated patch. HIVE-7342.2.patch - With the earlier patch, hivemetastore-site.xml would take precedence over hiveserver2-site.xml if the embedded metastore is used with hiveserver2, as hivemetastore-site.xml was getting added later. With this change, HiveConf initialization itself checks whether the embedded metastore is used and loads hivemetastore-site.xml. This way the order of adding the resources to the Configuration always remains the same. The patch also adds tests for both embedded and remote metastore mode. The order of precedence (later entries take precedence): hive-site.xml - hivemetastore-site.xml - hiveserver2-site.xml - HiveConf.ConfVars set through system properties (same as ones set through -hiveconf cmdline params) support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive, i.e., components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server-specific hive-site.xml, so that you can have some different configuration values set for a server. For example, you might want to enable authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component-specific configuration as a command-line (-hiveconf) argument. 
Using server-specific config files (e.g. hiveserver2-site.xml, metastore-site.xml) that override the entries in hive-site.xml will make the configuration much easier to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7382) Create a MiniSparkCluster and set up a testing framework
Xuefu Zhang created HIVE-7382: - Summary: Create a MiniSparkCluster and set up a testing framework Key: HIVE-7382 URL: https://issues.apache.org/jira/browse/HIVE-7382 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Xuefu Zhang To automatically test Hive functionality over the Spark execution engine, we need to create a test framework that can execute Hive queries with Spark as the backend. For that, we should create a MiniSparkCluster, similar to what exists for other execution engines. Spark has a way to create a local cluster with a few processes on the local machine, each process being a worker node. It's fairly close to a real Spark cluster. Our mini cluster can be based on that. For more info, please refer to the design doc on the wiki. -- This message was sent by Atlassian JIRA (v6.2#6252)
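[Editor's illustration] Spark selects this multi-process local mode through the master URL. A sketch of what such a mini cluster might pass (the values are illustrative, not taken from the ticket):

```properties
# master URL format: local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB]
# e.g. two worker processes, each with one core and 512 MB:
spark.master=local-cluster[2,1,512]
```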
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
On July 10, 2014, 3:01 p.m., Brock Noland wrote: I *love* this patch! Thank you so much. Thanks! On July 10, 2014, 3:01 p.m., Brock Noland wrote: serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java, line 34 https://reviews.apache.org/r/23387/diff/1/?file=627564#file627564line34 Can we add some unit tests for this class? Will do. On July 10, 2014, 3:01 p.m., Brock Noland wrote: serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java, line 110 https://reviews.apache.org/r/23387/diff/1/?file=627564#file627564line110 Two thoughts: 1) Char/varchar support? 2) By defaulting to null won't any new types end up with null if this code is not updated? I think instead we should throw an exception for unknown types. 1. Char/varchar is not yet supported by avro. It is planned though. 2. Good point. Will do. - Ashish --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47568 --- On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. 
Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
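[Editor's illustration] The reviewer's second point -- fail fast on unmapped types instead of silently defaulting to null -- can be sketched like this. The method is hypothetical and maps only a handful of primitives; it is not the patch's actual code.

```java
// Illustrative only: maps a few Hive primitive type names to Avro type
// names and throws for anything unrecognized, so that a newly added Hive
// type cannot silently map to null as discussed in the review above.
public class HiveToAvroTypeSketch {
    static String toAvroType(String hiveType) {
        switch (hiveType) {
            case "int":     return "int";
            case "bigint":  return "long";
            case "double":  return "double";
            case "string":  return "string";
            case "boolean": return "boolean";
            default:
                throw new UnsupportedOperationException(
                    "No Avro mapping for Hive type: " + hiveType);
        }
    }

    public static void main(String[] args) {
        System.out.println(toAvroType("bigint")); // prints "long"
    }
}
```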
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
On July 10, 2014, 8:13 p.m., David Chen wrote: serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java, line 34 https://reviews.apache.org/r/23387/diff/1/?file=627564#file627564line34 It might be better to call this TypeInfoToSchema to make it consistent with SchemaToTypeInfo, which converts from Avro Schema to Hive TypeInfo. That sounds reasonable, but the functionality of this class is not just to convert typeinfo to schema. It also uses column names. From functionality point of view, AvroSchemaGenerator still sounds better than TypeInfoToSchema. TypeInfoToSchema would have been suitable had the class been converting one typeInfo to avro schema. Let me know if you disagree. - Ashish --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47609 --- On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. 
Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
On July 10, 2014, 8:28 p.m., Brock Noland wrote: ql/src/test/queries/clientpositive/avro_decimal_native.q, line 13 https://reviews.apache.org/r/23387/diff/1/?file=627556#file627556line13 I think we can remove COMMENT 'from deserializer' as well. Thx!! Yes, it is redundant and we will be better without it. Will do. - Ashish --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47612 --- On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION 
serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
Re: Review Request 23387: HIVE-6806: Native Avro support in Hive
On July 10, 2014, 8:13 p.m., David Chen wrote: serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java, line 34 https://reviews.apache.org/r/23387/diff/1/?file=627564#file627564line34 It might be better to call this TypeInfoToSchema to make it consistent with SchemaToTypeInfo, which converts from Avro Schema to Hive TypeInfo. Ashish Singh wrote: That sounds reasonable, but the functionality of this class is not just to convert typeinfo to schema. It also uses column names. From functionality point of view, AvroSchemaGenerator still sounds better than TypeInfoToSchema. TypeInfoToSchema would have been suitable had the class been converting one typeInfo to avro schema. Let me know if you disagree. Just realized that SchemaToTypeInfo also generates all typeinfos from schema. That makes your comment even more reasonable. Will do. - Ashish --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47609 --- On July 10, 2014, 4:50 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 10, 2014, 4:50 a.m.) Review request for hive. 
Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native Avro support in Hive Diffs - ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 75394f3bc4f2285a7ced97ea90788d5bbff6b563 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 6cd1f39df3d419651755c35aab7cfc06833b16a4 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4e3b736eed1b3060fa516124c67f9a2f87 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSchemaGenerator.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests Thanks, Ashish Singh
[jira] [Updated] (HIVE-7279) UDF format_number() does not work on DECIMAL types
[ https://issues.apache.org/jira/browse/HIVE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wilbur Yang updated HIVE-7279: -- Status: Patch Available (was: Open) UDF format_number() does not work on DECIMAL types -- Key: HIVE-7279 URL: https://issues.apache.org/jira/browse/HIVE-7279 Project: Hive Issue Type: Bug Components: UDF Reporter: Szehon Ho Assignee: Wilbur Yang Priority: Minor Attachments: HIVE-7279.1.patch I believe UDF format_number should work on decimal types. {noformat} hive> select format_number(decimal_1.u,1) from decimal_1; FAILED: SemanticException [Error 10016]: Line 1:21 Argument type mismatch 'u': Argument 1 of function FORMAT_NUMBER must be tinyint or smallint or int or bigint or double or float, but decimal(5,0) was found. {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
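[Editor's illustration] With the patch applied, queries of this shape should be accepted; an illustrative example (not output from the patch's tests):

```sql
-- Fails before the fix with the type-mismatch error quoted above;
-- after the fix, DECIMAL should be accepted like the other numeric types.
SELECT format_number(CAST(12345.6789 AS DECIMAL(10,4)), 2) FROM decimal_1;
```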
[jira] [Updated] (HIVE-7279) UDF format_number() does not work on DECIMAL types
[ https://issues.apache.org/jira/browse/HIVE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wilbur Yang updated HIVE-7279: -- Attachment: HIVE-7279.1.patch UDF format_number() does not work on DECIMAL types -- Key: HIVE-7279 URL: https://issues.apache.org/jira/browse/HIVE-7279 Project: Hive Issue Type: Bug Components: UDF Reporter: Szehon Ho Assignee: Wilbur Yang Priority: Minor Attachments: HIVE-7279.1.patch I believe UDF format_number should work on decimal types. {noformat} hive> select format_number(decimal_1.u,1) from decimal_1; FAILED: SemanticException [Error 10016]: Line 1:21 Argument type mismatch 'u': Argument 1 of function FORMAT_NUMBER must be tinyint or smallint or int or bigint or double or float, but decimal(5,0) was found. {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7369) Support agg distinct function with GB
[ https://issues.apache.org/jira/browse/HIVE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-7369: - Status: Open (was: Patch Available) Canceling patch available to avoid HIVE QA running the patch. This is for cbo branch only. (looking to commit right now). Support agg distinct function with GB - Key: HIVE-7369 URL: https://issues.apache.org/jira/browse/HIVE-7369 Project: Hive Issue Type: Sub-task Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-7369.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7383) Add SQuirrelSQLClient notes from HiveServer v1 client wiki to v2 wiki
[ https://issues.apache.org/jira/browse/HIVE-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058080#comment-14058080 ] Vaibhav Gumashta commented on HIVE-7383: [~leftylev] I've modified the wiki (https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients). Let me know if it looks fine. Thanks! Add SQuirrelSQLClient notes from HiveServer v1 client wiki to v2 wiki - Key: HIVE-7383 URL: https://issues.apache.org/jira/browse/HIVE-7383 Project: Hive Issue Type: Task Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Copy this: https://cwiki.apache.org/confluence/display/Hive/HiveJDBCInterface#HiveJDBCInterface-IntegrationwithSQuirrelSQLClient to this: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-IntegrationwithSQuirrelSQLClient -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HIVE-7369) Support agg distinct function with GB
[ https://issues.apache.org/jira/browse/HIVE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner resolved HIVE-7369. -- Resolution: Fixed Committed to branch. Thanks [~jpullokkaran]! Support agg distinct function with GB - Key: HIVE-7369 URL: https://issues.apache.org/jira/browse/HIVE-7369 Project: Hive Issue Type: Sub-task Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-7369.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7383) Add SQuirrelSQLClient notes from HiveServer v1 client wiki to v2 wiki
Vaibhav Gumashta created HIVE-7383: -- Summary: Add SQuirrelSQLClient notes from HiveServer v1 client wiki to v2 wiki Key: HIVE-7383 URL: https://issues.apache.org/jira/browse/HIVE-7383 Project: Hive Issue Type: Task Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Copy this: https://cwiki.apache.org/confluence/display/Hive/HiveJDBCInterface#HiveJDBCInterface-IntegrationwithSQuirrelSQLClient to this: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-IntegrationwithSQuirrelSQLClient -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords
[ https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058098#comment-14058098 ] Brock Noland commented on HIVE-5976: Thank you for your analysis! I think that both alter and create should set the serde. AFAIK it is populated in the HMS despite not being set. If tests fail because of an additional line of output, we'll just have to update them. Instructions on how to update them are under "How do I update the output of a CliDriver testcase?" here: https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ Decouple input formats from STORED as keywords -- Key: HIVE-5976 URL: https://issues.apache.org/jira/browse/HIVE-5976 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch As noted in HIVE-5783, we hard-code the input formats mapped to keywords. It'd be nice if there was a registration system so we didn't need to do that. -- This message was sent by Atlassian JIRA (v6.2#6252)
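[Editor's illustration] The linked FAQ describes regenerating the expected .q.out files roughly as follows; the flags here reflect the FAQ of this era, so verify the exact invocation against the wiki page before running.

```
# from the Hive source tree
cd itests/qtest
mvn test -Dtest=TestCliDriver -Dqfile=<testcase>.q -Dtest.output.overwrite=true
```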
[jira] [Created] (HIVE-7384) Research into reduce-side join
Xuefu Zhang created HIVE-7384: - Summary: Research into reduce-side join Key: HIVE-7384 URL: https://issues.apache.org/jira/browse/HIVE-7384 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Xuefu Zhang Hive's join operator is very sophisticated, especially for reduce-side join. While we expect that other types of join, such as map-side join and SMB map-side join, will work out of the box with our design, there may be some complication in reduce-side join, which extensively utilizes key tag and shuffle behavior. Our design principle prefers to make the Hive implementation work out of the box as well, which might require new functionality from Spark. The task is to research this area, identifying requirements for the Spark community and work to be done on Hive to make reduce-side join work. A design doc might be needed for this. For more information, please refer to the overall design doc on the wiki. -- This message was sent by Atlassian JIRA (v6.2#6252)
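[Editor's illustration] To make the key-tag mechanism concrete, here is a toy sketch in plain Java (no Hive or Spark code): rows from each table are tagged with their source before the shuffle, and the "reducer" pairs rows across tags for each join key.

```java
import java.util.*;

// Toy illustration of a tag-based reduce-side inner join. Each row is
// tagged with its source table ("L" or "R"); the shuffle groups rows by
// join key; the reduce phase crosses rows from the two tags per key.
public class ReduceSideJoinSketch {
    public static Map<String, List<String>> join(
            Map<String, List<String>> left, Map<String, List<String>> right) {
        // Map phase: emit (key, tag:value) pairs into the shuffle.
        Map<String, List<String[]>> shuffled = new HashMap<>();
        left.forEach((k, vs) -> vs.forEach(v ->
            shuffled.computeIfAbsent(k, x -> new ArrayList<>()).add(new String[]{"L", v})));
        right.forEach((k, vs) -> vs.forEach(v ->
            shuffled.computeIfAbsent(k, x -> new ArrayList<>()).add(new String[]{"R", v})));

        // Reduce phase: for each key, pair every left row with every right row.
        Map<String, List<String>> out = new HashMap<>();
        shuffled.forEach((k, rows) -> {
            for (String[] a : rows)
                if (a[0].equals("L"))
                    for (String[] b : rows)
                        if (b[0].equals("R"))
                            out.computeIfAbsent(k, x -> new ArrayList<>())
                               .add(a[1] + "|" + b[1]);
        });
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> l = Map.of("k1", List.of("a"));
        Map<String, List<String>> r = Map.of("k1", List.of("b"), "k2", List.of("c"));
        System.out.println(join(l, r)); // {k1=[a|b]}
    }
}
```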
[jira] [Assigned] (HIVE-5701) vectorized groupby should work with vectorized reduce sink
[ https://issues.apache.org/jira/browse/HIVE-5701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-5701: -- Assignee: Matt McCline (was: Jitendra Nath Pandey) vectorized groupby should work with vectorized reduce sink -- Key: HIVE-5701 URL: https://issues.apache.org/jira/browse/HIVE-5701 Project: Hive Issue Type: Improvement Components: Vectorization Reporter: Sergey Shelukhin Assignee: Matt McCline As far as I understand right now vectorized group by works with regular reduce sink. [~jnp] fyi -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7384) Research into reduce-side join
[ https://issues.apache.org/jira/browse/HIVE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-7384: -- Description: Hive's join operator is very sophisticated, especially for reduce-side join. While we expect that other types of join, such as map-side join and SMB map-side join, will work out of the box with our design, there may be some complications in reduce-side join, which extensively utilizes key tags and shuffle behavior. Our design principle prefers to make the Hive implementation work out of the box as well, which might require new functionality from Spark. The task is to research this area, identifying requirements for the Spark community and the work to be done on Hive to make reduce-side join work. A design doc might be needed for this. For more information, please refer to the overall design doc on the wiki. was: Hive's join operator is very sophisticated, especially for reduce-side join. While we expect that other types of join, such as map-side join and SMB map-side join, will work out of the box with our design, there may be some complication in reduce-side join, which extensively utilizes key tag and shuffle behavior. Our design principle prefer to make Hive implementation work out of box also, which might requires new functionality from Spark. The tasks is to research into this area, identifying requirements for Spark community and work to be done on Hive to make reduce-side join work. A design doc might be needed for this. For more information, please refer to the overall design doc on wiki. Research into reduce-side join -- Key: HIVE-7384 URL: https://issues.apache.org/jira/browse/HIVE-7384 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Xuefu Zhang Hive's join operator is very sophisticated, especially for reduce-side join. 
While we expect that other types of join, such as map-side join and SMB map-side join, will work out of the box with our design, there may be some complications in reduce-side join, which extensively utilizes key tags and shuffle behavior. Our design principle prefers to make the Hive implementation work out of the box as well, which might require new functionality from Spark. The task is to research this area, identifying requirements for the Spark community and the work to be done on Hive to make reduce-side join work. A design doc might be needed for this. For more information, please refer to the overall design doc on the wiki. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058108#comment-14058108 ] Hive QA commented on HIVE-6988: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655071/HIVE-6988.2.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5718 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/736/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/736/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-736/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12655071 Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x - tez.0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7385) Optimize for empty relation scans
[ https://issues.apache.org/jira/browse/HIVE-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7385: --- Status: Patch Available (was: Open) Optimize for empty relation scans - Key: HIVE-7385 URL: https://issues.apache.org/jira/browse/HIVE-7385 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7385.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7385) Optimize for empty relation scans
Ashutosh Chauhan created HIVE-7385: -- Summary: Optimize for empty relation scans Key: HIVE-7385 URL: https://issues.apache.org/jira/browse/HIVE-7385 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7385.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7385) Optimize for empty relation scans
[ https://issues.apache.org/jira/browse/HIVE-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7385: --- Attachment: HIVE-7385.patch Optimize for empty relation scans - Key: HIVE-7385 URL: https://issues.apache.org/jira/browse/HIVE-7385 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7385.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 23404: Optimize for empty relation scans
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23404/ --- Review request for hive. Bugs: HIVE-7385 https://issues.apache.org/jira/browse/HIVE-7385 Repository: hive-git Description --- Optimize for empty relation scans Diffs - common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8bff2a9 itests/qtest/testconfiguration.properties f074b8e ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java 5bad2e5 ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyTaskDispatcher.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/NullScanOptimizer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/PhysicalOptimizer.java cf049b2 ql/src/test/queries/clientpositive/optimize_nullscan.q PRE-CREATION ql/src/test/results/clientpositive/optimize_nullscan.q.out PRE-CREATION ql/src/test/results/clientpositive/tez/optimize_nullscan.q.out PRE-CREATION Diff: https://reviews.apache.org/r/23404/diff/ Testing --- added new tests. Thanks, Ashutosh Chauhan
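For context on what the NullScanOptimizer in this patch targets, here is a sketch of the kind of query involved (table and column names are illustrative, and the config property name is an assumption based on the patch touching HiveConf.java): when the predicate is provably false, the scan over the relation can be replaced with a null scan so no data is read at all.

```sql
-- Assumed flag introduced by the patch; the exact name may differ.
SET hive.optimize.null.scan=true;

-- The WHERE clause is always false, so the table scan reads no rows;
-- with the optimization, Hive need not touch the underlying files.
SELECT count(1) FROM src WHERE 1 = 0;
```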
[jira] [Created] (HIVE-7386) PTest support non-spot instances and higher cpu masters
Brock Noland created HIVE-7386: -- Summary: PTest support non-spot instances and higher cpu masters Key: HIVE-7386 URL: https://issues.apache.org/jira/browse/HIVE-7386 Project: Hive Issue Type: Improvement Reporter: Brock Noland Today we don't support non-spot instances and when a master has more CPUs we do not allow configuration of the number of RSync threads. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7386) PTest support non-spot instances and higher cpu masters
[ https://issues.apache.org/jira/browse/HIVE-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-7386: --- Attachment: HIVE-7386.patch Unit tests pass and I've tested this manually. PTest support non-spot instances and higher cpu masters --- Key: HIVE-7386 URL: https://issues.apache.org/jira/browse/HIVE-7386 Project: Hive Issue Type: Improvement Reporter: Brock Noland Attachments: HIVE-7386.patch Today we don't support non-spot instances and when a master has more CPUs we do not allow configuration of the number of RSync threads. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7386) PTest support non-spot instances and higher cpu masters
[ https://issues.apache.org/jira/browse/HIVE-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058154#comment-14058154 ] Szehon Ho commented on HIVE-7386: - +1. We should probably create a wiki at some point about all these flags PTest support non-spot instances and higher cpu masters --- Key: HIVE-7386 URL: https://issues.apache.org/jira/browse/HIVE-7386 Project: Hive Issue Type: Improvement Reporter: Brock Noland Attachments: HIVE-7386.patch Today we don't support non-spot instances and when a master has more CPUs we do not allow configuration of the number of RSync threads. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api
[ https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058164#comment-14058164 ] Hive QA commented on HIVE-6252: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655079/HIVE-6252.3.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5719 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/737/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/737/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-737/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12655079 sql std auth - support 'with admin option' in revoke role metastore api --- Key: HIVE-6252 URL: https://issues.apache.org/jira/browse/HIVE-6252 Project: Hive Issue Type: Sub-task Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Jason Dere Attachments: HIVE-6252.1.patch, HIVE-6252.2.patch, HIVE-6252.3.patch Original Estimate: 24h Remaining Estimate: 24h The metastore api for revoking role privileges does not accept 'with admin option' , though the syntax supports it. SQL syntax also supports grantor specification in revoke role statement. It should be similar to the grant_role api. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-538) make hive_jdbc.jar self-containing
[ https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058185#comment-14058185 ] Eugene Koifman commented on HIVE-538: - the current build system produces 2 jdbc jars: hive-jdbc-0.14.0-SNAPSHOT-standalone.jar - the 51MB uber jar hive-jdbc-0.14.0-SNAPSHOT.jar - the 135K jar The pom file hive-jdbc-0.14.0-SNAPSHOT.pom (which I will attach) does not mention the hive-jdbc-0.14.0-SNAPSHOT-standalone.jar at all. The standalone jar is not part of the hive tar bundle either. How is the end user supposed to access this standalone jar? make hive_jdbc.jar self-containing -- Key: HIVE-538 URL: https://issues.apache.org/jira/browse/HIVE-538 Project: Hive Issue Type: Improvement Components: JDBC Affects Versions: 0.3.0, 0.4.0, 0.6.0, 0.13.0 Reporter: Raghotham Murthy Assignee: Nick White Fix For: 0.14.0 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-538.D2553.1.patch, ASF.LICENSE.NOT.GRANTED--HIVE-538.D2553.2.patch, HIVE-538.patch Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are required in the classpath to run jdbc applications on hive. We need to do at least the following to get rid of most unnecessary dependencies: 1. get rid of dynamic serde and use a standard serialization format, maybe tab separated, json or avro 2. don't use hadoop configuration parameters 3. repackage thrift and fb303 classes into hive_jdbc.jar -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf
[ https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-6037: Attachment: HIVE-6037.18.patch.txt Synchronize HiveConf with hive-default.xml.template and support show conf - Key: HIVE-6037 URL: https://issues.apache.org/jira/browse/HIVE-6037 Project: Hive Issue Type: Improvement Components: Configuration Reporter: Navis Assignee: Navis Priority: Minor Fix For: 0.14.0 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037-0.13.0, HIVE-6037.1.patch.txt, HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, HIVE-6037.12.patch.txt, HIVE-6037.14.patch.txt, HIVE-6037.15.patch.txt, HIVE-6037.16.patch.txt, HIVE-6037.17.patch, HIVE-6037.18.patch.txt, HIVE-6037.2.patch.txt, HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt, HIVE-6037.patch see HIVE-5879 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf
[ https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-6037: Status: Patch Available (was: Reopened) Rebased to trunk and fixed a dozen typos. Synchronize HiveConf with hive-default.xml.template and support show conf - Key: HIVE-6037 URL: https://issues.apache.org/jira/browse/HIVE-6037 Project: Hive Issue Type: Improvement Components: Configuration Reporter: Navis Assignee: Navis Priority: Minor Fix For: 0.14.0 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037-0.13.0, HIVE-6037.1.patch.txt, HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, HIVE-6037.12.patch.txt, HIVE-6037.14.patch.txt, HIVE-6037.15.patch.txt, HIVE-6037.16.patch.txt, HIVE-6037.17.patch, HIVE-6037.18.patch.txt, HIVE-6037.2.patch.txt, HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt, HIVE-6037.patch see HIVE-5879 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords
[ https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Chen updated HIVE-5976: - Attachment: HIVE-5976.7.patch Thanks, [~brocknoland]! I have updated the test output and attached a new patch. Decouple input formats from STORED as keywords -- Key: HIVE-5976 URL: https://issues.apache.org/jira/browse/HIVE-5976 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.7.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch As noted in HIVE-5783, we hard code the input formats mapped to keywords. It'd be nice if there was a registration system so we didn't need to do that. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23153: HIVE-5976: Decouple input formats from STORED as keywords.
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23153/ --- (Updated July 11, 2014, 1:20 a.m.) Review request for hive. Summary (updated) - HIVE-5976: Decouple input formats from STORED as keywords. Bugs: HIVE-5976 https://issues.apache.org/jira/browse/HIVE-5976 Repository: hive-git Description (updated) --- HIVE-5976: Decouple input formats from STORED as keywords. Diffs (updated) - common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8bff2a96fbfc572d86e6a6cdbc2a74ff4f5b0609 hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java ec24531117203a5c75c62d0e5b54d5a43d37fa79 itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextSerDe.java PRE-CREATION itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextStorageFormatDescriptor.java PRE-CREATION itests/custom-serde/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/AbstractStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 41310661ced0616f6bee27af2b1195127e5230e8 ql/src/java/org/apache/hadoop/hive/ql/io/ORCFileStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/ParquetFileStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/RCFileStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/SequenceFileStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/TextFileStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 60d54b6a04e1a9601342b0159387114f7b666338 ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
640b6b319ce84a875cc78cb8b29fa6bbc1067fc5 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046488eaea42a6416c7cbd514715d37e249f ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 5ac64527497d3d047d6c7bffd64c4201a66a2a04 ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9c001c1495b423c19f3fa710c74f1bb1e24a08f4 ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java 0af25360ee6f3088c764f0c4d812f30d1eeb91d6 ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java c42923f716afb89ac6c60fb386fb91c1c94413dd ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java PRE-CREATION ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor PRE-CREATION ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java PRE-CREATION ql/src/test/queries/clientpositive/storage_format_descriptor.q PRE-CREATION ql/src/test/results/clientnegative/fileformat_bad_class.q.out ab1e9357c0a7d4e21816290fbf7ed99396932b92 ql/src/test/results/clientnegative/genericFileFormat.q.out 9613df95c8fc977c0ad1f717afa2db3870dfd904 ql/src/test/results/clientpositive/create_union_table.q.out dc994f161a0a4372bfe009017f45ade56f06ae6e ql/src/test/results/clientpositive/ctas.q.out 5af90d03b72d42c30c4d31ce6b28bfd5493470ac ql/src/test/results/clientpositive/ctas_colname.q.out 20259a7662ec2e4b3157f90ab1c3913b57798d65 ql/src/test/results/clientpositive/ctas_uses_database_location.q.out a2c8c4a874e6ba4e926f47b354bf9e5dd8b0569e ql/src/test/results/clientpositive/groupby_duplicate_key.q.out e37b2d4ea286971dd2e351463e98e92c64c5d7d5 ql/src/test/results/clientpositive/input15.q.out a9575ddb675961fdc3fb73f2774c2fa8f2c08cd9 ql/src/test/results/clientpositive/inputddl1.q.out 17bdd7b220166b077f6368b1d51b928d7d1d638a ql/src/test/results/clientpositive/inputddl2.q.out f53b0b7039bfbbdf87a09a16d96049739b069ee8 ql/src/test/results/clientpositive/inputddl3.q.out 6682b09e33d673aac02e50a6d260797d66ea1676 
ql/src/test/results/clientpositive/merge3.q.out 41b7972381a69f8066c5ca52dcc8335c2c9cd05d ql/src/test/results/clientpositive/nonmr_fetch.q.out 5a13e841ec53e7a59ad34595ef95ee6f5480992c ql/src/test/results/clientpositive/nullformat.q.out 07dae64f410cc0e847e5ded1e00198d47c65e497 ql/src/test/results/clientpositive/nullformatCTAS.q.out c76c30bc0b0431b31424ea31b934241674da2f83 ql/src/test/results/clientpositive/parallel_orderby.q.out 39582a83a553f7b769695797afcdf6866d8bbdef ql/src/test/results/clientpositive/skewjoin_noskew.q.out 44e920e5c1fde042c6c789ff098eb42313beefcd ql/src/test/results/clientpositive/smb_mapjoin9.q.out f0ab703eeca399e82d891b9c6b9ac6581c1b872a
[jira] [Commented] (HIVE-6806) Native Avro support in Hive
[ https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058196#comment-14058196 ] David Chen commented on HIVE-6806: -- By the way, it may also be good to add a qfile test for Avro schema evolution over different partitions. I remember we have had to fix some issues related to schema evolution, such as HIVE-6835. FYI, I also have a TypeInfo to Avro Schema converter in my patch for HIVE-7286 along with some unit tests for the converter. Feel free to go ahead and make use of it. Native Avro support in Hive --- Key: HIVE-6806 URL: https://issues.apache.org/jira/browse/HIVE-6806 Project: Hive Issue Type: New Feature Components: Serializers/Deserializers Affects Versions: 0.12.0 Reporter: Jeremy Beard Assignee: Ashish Kumar Singh Priority: Minor Labels: Avro Attachments: HIVE-6806.patch Avro is well established and widely used within Hive, however creating Avro-backed tables requires the messy listing of the SerDe, InputFormat and OutputFormat classes. Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had native Avro support. -- This message was sent by Atlassian JIRA (v6.2#6252)
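For readers following this thread, a sketch of the DDL contrast the issue describes (column names are illustrative): today, creating an Avro-backed table means spelling out the SerDe, InputFormat and OutputFormat classes; with native support the same table can be declared with a single STORED AS keyword.

```sql
-- Today: the "messy listing" of classes the issue description mentions.
CREATE TABLE episodes (title STRING, air_date STRING)
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
  STORED AS INPUTFORMAT
    'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT
    'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat';

-- With native support, as proposed by HIVE-6806:
CREATE TABLE episodes (title STRING, air_date STRING) STORED AS AVRO;
```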
[jira] [Commented] (HIVE-7379) Beeline to fetch full stack trace for job (task) failures
[ https://issues.apache.org/jira/browse/HIVE-7379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058208#comment-14058208 ] Navis commented on HIVE-7379: - With HIVE-7127, you can get full stack trace in verbose mode. Beeline to fetch full stack trace for job (task) failures -- Key: HIVE-7379 URL: https://issues.apache.org/jira/browse/HIVE-7379 Project: Hive Issue Type: Improvement Components: CLI, Clients, JDBC Affects Versions: 0.12.0, 0.13.0 Reporter: Viji Priority: Minor When a query submitted via Beeline fails, Beeline displays a generic error message as below: {quote}FAILED: Execution Error, return code 1 from …{quote} This is expected, as Beeline is basically a regular JDBC client and is hence limited by JDBC's capabilities today. But it would be useful if Beeline can return the full remote stack trace and task diagnostics or job ID. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HIVE-7379) Beeline to fetch full stack trace for job (task) failures
[ https://issues.apache.org/jira/browse/HIVE-7379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis resolved HIVE-7379. - Resolution: Duplicate Beeline to fetch full stack trace for job (task) failures -- Key: HIVE-7379 URL: https://issues.apache.org/jira/browse/HIVE-7379 Project: Hive Issue Type: Improvement Components: CLI, Clients, JDBC Affects Versions: 0.12.0, 0.13.0 Reporter: Viji Priority: Minor When a query submitted via Beeline fails, Beeline displays a generic error message as below: {quote}FAILED: Execution Error, return code 1 from …{quote} This is expected, as Beeline is basically a regular JDBC client and is hence limited by JDBC's capabilities today. But it would be useful if Beeline can return the full remote stack trace and task diagnostics or job ID. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058216#comment-14058216 ] Hive QA commented on HIVE-7342: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655084/HIVE-7342.2.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5721 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/739/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/739/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-739/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12655084 support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in Hive, i.e., components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server-specific hive-site.xml, so that you can have some configuration values set differently for a server. 
For example, you might want to enable authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component-specific configuration as a command-line (-hiveconf) argument. Using server-specific config files (e.g. hiveserver2-site.xml, metastore-site.xml) that override the entries in hive-site.xml will make the configuration much easier to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
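A minimal sketch of what the proposed override file could look like, matching the hiveserver2 authorization example in the issue description (the property shown is a real Hive setting, but its use here is purely illustrative):

```xml
<!-- hiveserver2-site.xml: read only by HiveServer2; entries here would
     take precedence over the shared hive-site.xml. -->
<configuration>
  <property>
    <name>hive.security.authorization.enabled</name>
    <!-- Enabled for HiveServer2 while hive-site.xml can leave it
         disabled for the Hive CLI. -->
    <value>true</value>
  </property>
</configuration>
```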
[jira] [Updated] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table
[ https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-3392: Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks Jason, for the review. Hive unnecessarily validates table SerDes when dropping a table --- Key: HIVE-3392 URL: https://issues.apache.org/jira/browse/HIVE-3392 Project: Hive Issue Type: Bug Affects Versions: 0.9.0 Reporter: Jonathan Natkins Assignee: Navis Labels: patch Fix For: 0.14.0 Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, HIVE-3392.4.patch.txt, HIVE-3392.5.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt natty@hadoop1:~$ hive hive add jar /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar; Added /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar to class path Added resource: /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar hive create table test (a int) row format serde 'hive.serde.JSONSerDe'; OK Time taken: 2.399 seconds natty@hadoop1:~$ hive hive drop table test; FAILED: Hive Internal Error: java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe hive.serde.JSONSerDe does not exist)) java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe hive.serde.JSONSerDe does not exist) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253) at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490) at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162) at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943) at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700) at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:208) Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe com.cloudera.hive.serde.JSONSerDe does not exist) at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260) ... 20 more hive add jar /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar; Added /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar to class path Added resource: /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar hive drop table test; OK Time taken: 0.658 seconds hive -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Case problem in complex type
Any opinions? IMO, field names should be case-sensitive, but I have doubts about the backward compatibility issue. Thanks, Navis 2014-07-10 13:31 GMT+09:00 Lefty Leverenz leftylever...@gmail.com: Struct doesn't have its own section in the Types doc https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types, but it could (see Complex Types https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ComplexTypes ). However I don't think people will look there for information about case sensitivity -- it belongs in the DDL and DML docs. Case-insensitivity for column names is mentioned here: - Create Table https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTable (notes immediately after the syntax) - Alter Column -- Rules for Column Names https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterColumn - Select Syntax https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select#LanguageManualSelect-SelectSyntax (notes after the syntax) The ORC doc could also mention this issue, preferably in the section Hive QL Syntax https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax . -- Lefty On Wed, Jul 9, 2014 at 11:48 PM, Navis류승우 navis@nexr.com wrote: For column names, Hive restricts them to lower case strings. But how about field names? Currently, StructObjectInspector implementations other than ORC's ignore case (lower case only). This should not be implementation-dependent and should be documented somewhere. see https://issues.apache.org/jira/browse/HIVE-6198 Thanks, Navis
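To make the ambiguity in this thread concrete, here is an illustrative HiveQL sketch (table and field names are hypothetical): whether the mixed-case reference resolves depends on whether the table's StructObjectInspector lower-cases field names.

```sql
-- A struct field declared with mixed case.
CREATE TABLE t (s STRUCT<myField:INT>);

-- Resolves wherever the ObjectInspector lower-cases field names.
SELECT s.myfield FROM t;

-- A case-sensitive lookup; per this thread, behavior can differ by
-- implementation (e.g. ORC's inspector vs. the others).
SELECT s.myField FROM t;
```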
[jira] [Updated] (HIVE-7378) Could not build hive 0.13.1 with hadoop 2.2.0
[ https://issues.apache.org/jira/browse/HIVE-7378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John updated HIVE-7378: --- Description: I attempted to build hive 0.13.1 with hadoop 2.2.0 and got a failure. 1. Steps a. set `hadoop-23.version' to 2.2.0 in main pom file b. build with command `mvn clean install -DskipTests -Phadoop-2' 2. Error Messages [INFO] [INFO] Building Hive Shims 0.23 0.13.1 [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-shims-0.23 --- [INFO] Deleting /home/apache/hive/shims/0.23/target [INFO] Deleting /home/apache/hive/shims/0.23 (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-shims-0.23 --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-shims-0.23 --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /home/apache/hive/shims/0.23/src/main/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims-0.23 --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-shims-0.23 --- [INFO] Compiling 4 source files to /home/apache/hive/shims/0.23/target/classes [INFO] - [WARNING] COMPILATION WARNING : [INFO] - [WARNING] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java uses or overrides a deprecated API. [WARNING] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: Recompile with -Xlint:deprecation for details. 
[INFO] 2 warnings [INFO] - [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[25,28] cannot find symbol symbol: class ReadOption location: package org.apache.hadoop.fs [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[28,28] cannot find symbol symbol: class ByteBufferPool location: package org.apache.hadoop.io [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[29,37] cannot find symbol symbol: class DirectDecompressor location: package org.apache.hadoop.io.compress [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[30,63] cannot find symbol symbol: class SnappyDirectDecompressor location: class org.apache.hadoop.io.compress.snappy.SnappyDecompressor [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[32,59] cannot find symbol symbol: class ZlibDirectDecompressor location: class org.apache.hadoop.io.compress.zlib.ZlibDecompressor [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[38,63] cannot find symbol symbol: class ByteBufferPool location: class org.apache.hadoop.hive.shims.ZeroCopyShims [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[59,34] cannot find symbol symbol: class ReadOption location: class org.apache.hadoop.hive.shims.ZeroCopyShims.ZeroCopyAdapter [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[61,34] cannot find symbol symbol: class ReadOption location: class org.apache.hadoop.hive.shims.ZeroCopyShims.ZeroCopyAdapter [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[94,19] cannot find symbol symbol: class DirectDecompressor location: class 
org.apache.hadoop.hive.shims.ZeroCopyShims.DirectDecompressorAdapter [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[96,38] cannot find symbol symbol: class DirectDecompressor location: class org.apache.hadoop.hive.shims.ZeroCopyShims.DirectDecompressorAdapter [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[45,5] method does not override or implement a method from a supertype [ERROR] /home/apache/hive/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/ZeroCopyShims.java:[50,5] method does not override or implement a method from a supertype [ERROR]
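The missing symbols above (ReadOption, ByteBufferPool, DirectDecompressor) belong to Hadoop's zero-copy read API, which, to my knowledge, first shipped in Hadoop 2.3.0, so a 2.2.0 classpath cannot compile ZeroCopyShims. A minimal runtime probe (a hypothetical class written for this illustration) can confirm whether the API is available on a given classpath:

```java
// Probe whether a class is resolvable on the current classpath.
// On a Hadoop 2.2.0 classpath, org.apache.hadoop.fs.ReadOption is
// expected to be "missing", matching the compile errors above.
public class ZeroCopyProbe {
    static String probe(String className) {
        try {
            Class.forName(className);
            return "present";
        } catch (ClassNotFoundException e) {
            return "missing";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe("org.apache.hadoop.fs.ReadOption"));
    }
}
```

If the probe prints "missing", the fix is to build against a Hadoop version that provides the API rather than 2.2.0.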
[jira] [Commented] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058279#comment-14058279 ] Chengxiang Li commented on HIVE-7372: - {quote} Thanks for working on this, Chengxiang Li. Patch looks good to me. One minor nit: for cloning, it might be better to reuse some existing utility methods, or put our implementation in a utility class for later reuse. {quote} I took this as a POC workaround and did not pay much attention to the clone implementation, as we won't need to copy key/value in the eventual SparkCollector implementation. But you are right, we need a reasonable coding style at all times. :D {quote} Could you please also check if the same problem exists in HiveReduceFunction, where rows are clustered? If so, that can be addressed in a separate JIRA. {quote} HiveReduceFunction uses SparkCollector as well, so it's covered. Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch] --- Key: HIVE-7372 URL: https://issues.apache.org/jira/browse/HIVE-7372 Project: Hive Issue Type: Bug Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Attachments: HIVE-7372.patch In SparkClient.java, if the following property is set, an unpredictable, incorrect result may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. NO PRECOMMIT TESTS. This is for spark branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
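The object-reuse hazard that the patch's cloning guards against can be sketched as follows (a minimal standalone illustration, not Hive's actual SparkCollector; the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// When a producer reuses one mutable row object per record, a collector
// that stores the reference sees every entry mutate to the last value.
// Copying ("cloning") the payload before storing avoids the aliasing.
public class ReuseDemo {
    static class MutableRow { int value; }

    // Buggy collector: stores references to the reused object.
    static List<Integer> collectWithoutClone(int[] values) {
        MutableRow reused = new MutableRow();
        List<MutableRow> stored = new ArrayList<>();
        for (int v : values) {
            reused.value = v;    // producer overwrites the same object
            stored.add(reused);  // every entry aliases one row
        }
        List<Integer> out = new ArrayList<>();
        for (MutableRow r : stored) out.add(r.value);
        return out;
    }

    // Fixed collector: copies the payload before storing.
    static List<Integer> collectWithClone(int[] values) {
        MutableRow reused = new MutableRow();
        List<Integer> out = new ArrayList<>();
        for (int v : values) {
            reused.value = v;
            out.add(reused.value); // defensive copy of the payload
        }
        return out;
    }

    public static void main(String[] args) {
        int[] rows = {1, 2, 3};
        System.out.println(collectWithoutClone(rows)); // [3, 3, 3]
        System.out.println(collectWithClone(rows));    // [1, 2, 3]
    }
}
```

This also illustrates why the bug only surfaces with parallelism greater than 1: multiple tasks sharing a JVM make the window for observing a mutated shared buffer much wider.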
[jira] [Commented] (HIVE-7279) UDF format_number() does not work on DECIMAL types
[ https://issues.apache.org/jira/browse/HIVE-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058280#comment-14058280 ] Hive QA commented on HIVE-7279: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655101/HIVE-7279.1.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5718 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_udf_format_number_wrong5 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_udf_format_number_wrong7 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/740/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/740/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-740/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12655101 UDF format_number() does not work on DECIMAL types -- Key: HIVE-7279 URL: https://issues.apache.org/jira/browse/HIVE-7279 Project: Hive Issue Type: Bug Components: UDF Reporter: Szehon Ho Assignee: Wilbur Yang Priority: Minor Attachments: HIVE-7279.1.patch I believe UDF format_number should work on decimal types. {noformat} hive> select format_number(decimal_1.u, 1) from decimal_1; FAILED: SemanticException [Error 10016]: Line 1:21 Argument type mismatch 'u': Argument 1 of function FORMAT_NUMBER must be tinyint or smallint or int or bigint or double or float, but decimal(5,0) was found. {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
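For reference, extending format_number to DECIMAL essentially amounts to formatting a BigDecimal with grouping separators and a fixed fraction-digit count. A minimal sketch under that assumption (not Hive's actual GenericUDFFormatNumber; the class and method names are illustrative):

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Sketch of format_number semantics for a DECIMAL input: group digits in
// threes and pad/truncate the fraction to the requested number of places.
public class FormatDecimal {
    static String formatNumber(BigDecimal d, int scale) {
        StringBuilder pattern = new StringBuilder("#,##0");
        if (scale > 0) {
            pattern.append('.');
            for (int i = 0; i < scale; i++) pattern.append('0');
        }
        // Pin US symbols so grouping/decimal separators are deterministic.
        return new DecimalFormat(pattern.toString(),
                DecimalFormatSymbols.getInstance(Locale.US)).format(d);
    }

    public static void main(String[] args) {
        System.out.println(formatNumber(new BigDecimal("12345"), 1)); // 12,345.0
    }
}
```

Using BigDecimal directly avoids the precision loss that a cast to double would introduce, which is presumably why a DECIMAL-aware path is worth a dedicated patch rather than an implicit conversion.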
[jira] [Updated] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengxiang Li updated HIVE-7372: Attachment: HIVE-7372-Spark.1.patch Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch] --- Key: HIVE-7372 URL: https://issues.apache.org/jira/browse/HIVE-7372 Project: Hive Issue Type: Bug Components: Spark Reporter: Xuefu Zhang Assignee: Chengxiang Li Attachments: HIVE-7372-Spark.1.patch, HIVE-7372.patch In SparkClient.java, if the following property is set, an unpredictable, incorrect result may be observed. {code} sparkConf.set("spark.default.parallelism", "1"); {code} It's suspected that there are some concurrency issues, as Spark may process multiple datasets in a single JVM when parallelism is greater than 1 in order to use multiple cores. NO PRECOMMIT TESTS. This is for spark branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7387) Guava version conflict between hadoop and spark
Chengxiang Li created HIVE-7387: --- Summary: Guava version conflict between hadoop and spark Key: HIVE-7387 URL: https://issues.apache.org/jira/browse/HIVE-7387 Project: Hive Issue Type: Bug Components: Spark Reporter: Chengxiang Li hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while Spark depends on guava-14.0.1.jar. guava-11.0.2 has API conflicts with guava-14.0.1; since the Hive CLI currently loads both dependencies into the classpath, queries fail on either the spark engine or the mr engine. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7387) Guava version conflict between hadoop and spark
[ https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14058302#comment-14058302 ] Chengxiang Li commented on HIVE-7387: - Hive on Spark can submit the Spark job in a separate JVM when hive.exec.submitviachild is configured as true, so that Hive only needs to load the dependencies identified in HIVE-7371. But we still hit this issue when Hive submits the Spark job in the same JVM as the Hive CLI. Guava version conflict between hadoop and spark --- Key: HIVE-7387 URL: https://issues.apache.org/jira/browse/HIVE-7387 Project: Hive Issue Type: Bug Components: Spark Reporter: Chengxiang Li hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while Spark depends on guava-14.0.1.jar. guava-11.0.2 has API conflicts with guava-14.0.1; since the Hive CLI currently loads both dependencies into the classpath, queries fail on either the spark engine or the mr engine. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7388) Remove non-ascii char from comments
Gunther Hagleitner created HIVE-7388: Summary: Remove non-ascii char from comments Key: HIVE-7388 URL: https://issues.apache.org/jira/browse/HIVE-7388 Project: Hive Issue Type: Sub-task Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner -- This message was sent by Atlassian JIRA (v6.2#6252)