[jira] [Updated] (HIVE-6859) Test JIRA
[ https://issues.apache.org/jira/browse/HIVE-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6859: Status: Patch Available (was: Reopened) Test JIRA - Key: HIVE-6859 URL: https://issues.apache.org/jira/browse/HIVE-6859 Project: Hive Issue Type: Bug Reporter: Szehon Ho Attachments: HIVE-6859.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6859) Test JIRA
[ https://issues.apache.org/jira/browse/HIVE-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6859: Summary: Test JIRA (was: 8) Test JIRA - Key: HIVE-6859 URL: https://issues.apache.org/jira/browse/HIVE-6859 Project: Hive Issue Type: Bug Reporter: Szehon Ho Attachments: HIVE-6859.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6859) Test JIRA
[ https://issues.apache.org/jira/browse/HIVE-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6859: Attachment: HIVE-6859.patch Bogus patch to test jenkins Test JIRA - Key: HIVE-6859 URL: https://issues.apache.org/jira/browse/HIVE-6859 Project: Hive Issue Type: Bug Reporter: Szehon Ho Attachments: HIVE-6859.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Reopened] (HIVE-6859) Test JIRA
[ https://issues.apache.org/jira/browse/HIVE-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho reopened HIVE-6859: - Test JIRA - Key: HIVE-6859 URL: https://issues.apache.org/jira/browse/HIVE-6859 Project: Hive Issue Type: Bug Reporter: Szehon Ho Attachments: HIVE-6859.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6859) Test JIRA
[ https://issues.apache.org/jira/browse/HIVE-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970527#comment-13970527 ] Hive QA commented on HIVE-6859: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640399/HIVE-6859.patch Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/4/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/4/console Messages: {noformat} This message was trimmed, see log for full details D hcatalog/storage-handlers/hbase/src/gen-java/org/apache/hcatalog/hbase/snapshot/transaction/thrift/StoreFamilyRevisionList.java D hcatalog/storage-handlers/hbase/src/gen-java/org/apache/hcatalog/hbase/snapshot/transaction/thrift/StoreFamilyRevision.java D hcatalog/storage-handlers/hbase/src/resources/revision-manager-default.xml D hcatalog/storage-handlers/hbase/if/transaction.thrift D hcatalog/storage-handlers/hbase/conf/revision-manager-site.xml D hcatalog/server-extensions/src/test/java/org/apache/hcatalog/listener/TestMsgBusConnection.java D hcatalog/server-extensions/src/test/java/org/apache/hcatalog/listener/TestNotificationListener.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/listener/NotificationListener.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/DropDatabaseMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/CreateTableMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/DropTableMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/DropPartitionMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/HCatEventMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/AddPartitionMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/MessageDeserializer.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/MessageFactory.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/CreateDatabaseMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/jms/MessagingUtils.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONCreateTableMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONDropTableMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONDropPartitionMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONAddPartitionMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONMessageDeserializer.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONMessageFactory.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONCreateDatabaseMessage.java D hcatalog/server-extensions/src/main/java/org/apache/hcatalog/messaging/json/JSONDropDatabaseMessage.java U hcatalog/conf/proto-hive-site.xml U hcatalog/src/docs/src/documentation/content/xdocs/readerwriter.xml D hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hcatalog/utils/HBaseReadWrite.java U hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteTextPartitioned.java U hcatalog/src/test/e2e/hcatalog/tests/hadoop.conf U hcatalog/src/test/e2e/hcatalog/tests/pig.conf U hcatalog/src/test/e2e/templeton/tests/jobstatus.conf D hcatalog/src/java/org/apache/hcatalog/package-info.java U hcatalog/bin/templeton.cmd U hcatalog/bin/hcat U hcatalog/bin/hcat.py D hcatalog/core/src/test/java/org/apache/hcatalog/ExitException.java D hcatalog/core/src/test/java/org/apache/hcatalog/NoExitSecurityManager.java D hcatalog/core/src/test/java/org/apache/hcatalog/MiniCluster.java D hcatalog/core/src/test/java/org/apache/hcatalog/HcatTestUtils.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/HCatMapReduceTest.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestInputJobInfo.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestHCatDynamicPartitioned.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestHCatInputFormat.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestHCatOutputFormat.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/HCatBaseTest.java D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestHCatEximInputFormat.java.broken D hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestHCatEximOutputFormat.java.broken D
[jira] [Commented] (HIVE-6907) HiveServer2 - wrong user gets used for metastore operation with embedded metastore
[ https://issues.apache.org/jira/browse/HIVE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970539#comment-13970539 ] Navis commented on HIVE-6907: - Yes, it's related to HIVE-6478, not to this issue. Some authorization takes place in the metastore, but we cannot hand over the username in SessionState to the metastore, because SessionState is in hive-exec, which is not accessible from hive-metastore. HiveServer2 - wrong user gets used for metastore operation with embedded metastore -- Key: HIVE-6907 URL: https://issues.apache.org/jira/browse/HIVE-6907 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Priority: Blocker Fix For: 0.13.0 Attachments: HIVE-6907.1.patch, HIVE-6907.2.patch, HIVE-6907.3.patch When queries are being run concurrently against HS2, sometimes the wrong user ends up performing the metastore action and you get an error like - {code} ..INFO|java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.security.AccessControlException: action WRITE not permitted on path hdfs://example.net:8020/apps/hive/warehouse/tbl_4eeulg9zp4 for user hrt_qa) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (HIVE-6907) HiveServer2 - wrong user gets used for metastore operation with embedded metastore
[ https://issues.apache.org/jira/browse/HIVE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970539#comment-13970539 ] Navis edited comment on HIVE-6907 at 4/16/14 7:32 AM: -- Yes, it's related to HIVE-6478, not to this issue. Some authorization takes place in the metastore and, in stand-alone mode, we cannot hand over the username in SessionState to the metastore, because SessionState is in hive-exec, which is not accessible from hive-metastore. Also, with a remote metastore, the current set_ugi hands over the HS2 user instead of the user in the session state. Should we fix this, too? was (Author: navis): Yes, it's related to HIVE-6478, not to this issue. Some authorization takes place in the metastore, but we cannot hand over the username in SessionState to the metastore, because SessionState is in hive-exec, which is not accessible from hive-metastore. HiveServer2 - wrong user gets used for metastore operation with embedded metastore -- Key: HIVE-6907 URL: https://issues.apache.org/jira/browse/HIVE-6907 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Priority: Blocker Fix For: 0.13.0 Attachments: HIVE-6907.1.patch, HIVE-6907.2.patch, HIVE-6907.3.patch When queries are being run concurrently against HS2, sometimes the wrong user ends up performing the metastore action and you get an error like - {code} ..INFO|java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.security.AccessControlException: action WRITE not permitted on path hdfs://example.net:8020/apps/hive/warehouse/tbl_4eeulg9zp4 for user hrt_qa) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
Submit Precommit jobs on temporary Jenkins
To unblock people with waiting JIRAs, I set up a Jenkins on my own EC2 instance to run precommit tests, as the Bigtop guys with authorization to fix their Jenkins host are not currently available. It is at the following location: http://ec2-54-237-84-140.compute-1.amazonaws.com/job/precommit-hive/ I don't have permission to redirect JIRA's Submit Patch Auto-Trigger from the Bigtop Jenkins to my Jenkins, so please submit manually at that URL if you have a patch you want to test. I have submitted a few JIRAs that were missed during the outage, and also granted all users permission to trigger the job. Steps: 1. Click 'Build with parameters' on the left. 2. In the parameter box ISSUE_NUM, type the JIRA number part, like 6908. 3. Click 'Build'. This uses the existing PTest cluster to run the tests. As soon as the Bigtop Jenkins comes back, we can switch back. Hope this works and can help! Szehon On Tue, Apr 15, 2014 at 8:26 PM, Szehon Ho sze...@cloudera.com wrote: Bumping in case some people missed this. I emailed the bigtop-dev apache list yesterday about this issue, as they are hosting the Jenkins running the Hive builds. Some guys have looked at it and confirmed the machine is out of space, but only two guys have the access, and they have not responded yet (they may not be available). I'll email again tomorrow, but feel free to pile on the thread. https://mail-archives.apache.org/mod_mbox/bigtop-dev/201404.mbox/%3C20140415212726.GP22142%40boudnik.org%3E If the problem persists, I could try setting up a temp Jenkins on a new EC2 host, but I'd rather not if it's going to be fixed, so let's see if they respond tomorrow. Thanks Szehon On Mon, Apr 14, 2014 at 1:44 PM, Szehon Ho sze...@cloudera.com wrote: Hi, New precommit builds haven't been submitted successfully on bigtop01 since yesterday morning. http://bigtop01.cloudera.org:8080/view/Hive/job/PreCommit-HIVE-Build/ The machine might be out of space again, or some other issue. 
I mailed the Bigtop dev list; hopefully they can respond soon. Until then, new patches submitted to the Hive JIRA won't get picked up for testing. I'll notify if there are any updates. Thanks, Szehon
[jira] [Work started] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-6820 started by Bing Li. HiveServer(2) ignores HIVE_OPTS --- Key: HIVE-6820 URL: https://issues.apache.org/jira/browse/HIVE-6820 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0 Reporter: Richard Ding Assignee: Bing Li Priority: Minor In hiveserver2.sh: {code} exec $HADOOP jar $JAR $CLASS $@ {code} While cli.sh has: {code} exec $HADOOP jar ${HIVE_LIB}/hive-cli-*.jar $CLASS $HIVE_OPTS $@ {code} Hence some Hive commands that run properly in the Hive shell fail in HiveServer. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6820: -- Attachment: HIVE-6820.1.patch Append $HIVE_OPTS to hiveserver2.sh and hiveserver.sh HiveServer(2) ignores HIVE_OPTS --- Key: HIVE-6820 URL: https://issues.apache.org/jira/browse/HIVE-6820 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0 Reporter: Richard Ding Assignee: Bing Li Priority: Minor Attachments: HIVE-6820.1.patch In hiveserver2.sh: {code} exec $HADOOP jar $JAR $CLASS $@ {code} While cli.sh has: {code} exec $HADOOP jar ${HIVE_LIB}/hive-cli-*.jar $CLASS $HIVE_OPTS $@ {code} Hence some Hive commands that run properly in the Hive shell fail in HiveServer. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6820: -- Status: Patch Available (was: In Progress) The patch is created based on the trunk branch HiveServer(2) ignores HIVE_OPTS --- Key: HIVE-6820 URL: https://issues.apache.org/jira/browse/HIVE-6820 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0 Reporter: Richard Ding Assignee: Bing Li Priority: Minor Attachments: HIVE-6820.1.patch In hiveserver2.sh: {code} exec $HADOOP jar $JAR $CLASS $@ {code} While cli.sh has: {code} exec $HADOOP jar ${HIVE_LIB}/hive-cli-*.jar $CLASS $HIVE_OPTS $@ {code} Hence some Hive commands that run properly in the Hive shell fail in HiveServer. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6835) Reading of partitioned Avro data fails if partition schema does not match table schema
[ https://issues.apache.org/jira/browse/HIVE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970596#comment-13970596 ] Hive QA commented on HIVE-6835: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640124/HIVE-6835.2.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5402 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 org.apache.hive.service.cli.thrift.TestThriftHttpCLIService.testExecuteStatementAsync {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/5/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/5/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12640124 Reading of partitioned Avro data fails if partition schema does not match table schema -- Key: HIVE-6835 URL: https://issues.apache.org/jira/browse/HIVE-6835 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6835.1.patch, HIVE-6835.2.patch To reproduce: {code} create table testarray (a array<string>); load data local inpath '/home/ahsu/test/array.txt' into table testarray; # create partitioned Avro table with one array column create table avroarray partitioned by (y string) row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties ('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record","fields":[{"name":"a","type":{"type":"array","items":"string"}}]}') STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'; insert into table avroarray partition(y=1) select * from testarray; # add an int column with a default value of 0 alter table avroarray set serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record","fields":[{"name":"intfield","type":"int","default":0},{"name":"a","type":{"type":"array","items":"string"}}]}'); # fails with ClassCastException select * from avroarray; {code} The select * fails with: {code} Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6920) Parquet Serde Simplification
Justin Coffey created HIVE-6920: --- Summary: Parquet Serde Simplification Key: HIVE-6920 URL: https://issues.apache.org/jira/browse/HIVE-6920 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Justin Coffey Assignee: Justin Coffey Priority: Minor Fix For: 0.14.0 Various fixes and code simplification in the ParquetHiveSerde (with minor optimizations) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6920) Parquet Serde Simplification
[ https://issues.apache.org/jira/browse/HIVE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Justin Coffey updated HIVE-6920: Attachment: HIVE-6920.patch Parquet Serde Simplification Key: HIVE-6920 URL: https://issues.apache.org/jira/browse/HIVE-6920 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Justin Coffey Assignee: Justin Coffey Priority: Minor Fix For: 0.14.0 Attachments: HIVE-6920.patch Various fixes and code simplification in the ParquetHiveSerde (with minor optimizations) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6920) Parquet Serde Simplification
[ https://issues.apache.org/jira/browse/HIVE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Justin Coffey updated HIVE-6920: Release Note: - Removed unused serde stats - Simplified initialize code - Renamed test class to match serde class name - Separated serialize and deserialize tests Status: Patch Available (was: Open) Parquet Serde Simplification Key: HIVE-6920 URL: https://issues.apache.org/jira/browse/HIVE-6920 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Justin Coffey Assignee: Justin Coffey Priority: Minor Fix For: 0.14.0 Attachments: HIVE-6920.patch Various fixes and code simplification in the ParquetHiveSerde (with minor optimizations) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6920) Parquet Serde Simplification
[ https://issues.apache.org/jira/browse/HIVE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Justin Coffey updated HIVE-6920: Release Note: - Removed unused serde stats - Simplified initialize code - Renamed test class to match serde class name - Separated serialize and deserialize tests - Bumped Parquet version to 1.4.1 was: - Removed unused serde stats - Simplified initialize code - Renamed test class to match serde class name - Separated serialize and deserialize tests Parquet Serde Simplification Key: HIVE-6920 URL: https://issues.apache.org/jira/browse/HIVE-6920 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Justin Coffey Assignee: Justin Coffey Priority: Minor Fix For: 0.14.0 Attachments: HIVE-6920.patch Various fixes and code simplification in the ParquetHiveSerde (with minor optimizations) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6913) Hive unable to find the hashtable file during complex multi-staged map join
[ https://issues.apache.org/jira/browse/HIVE-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970669#comment-13970669 ] Hive QA commented on HIVE-6913: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640286/HIVE-6913.patch {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_map_operators org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/6/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/6/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640286 Hive unable to find the hashtable file during complex multi-staged map join --- Key: HIVE-6913 URL: https://issues.apache.org/jira/browse/HIVE-6913 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-6913.patch If a query has multiple mapjoins and one of the tables to be mapjoined is empty, the query can result in a 'no such file or directory' error when looking for the hashtable. This is because when we generate a dummy hash table, we do not close the TableScan (TS) operator for that table. Additionally, HashTableSinkOperator (HTSO) outputs its hash tables in the closeOp method. 
However, when close is called on HTSO, we check to ensure that all parents are closed: https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java#L333 which is not true in this case, because the TS operator for the empty table was never closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
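The parent-close gating described in this report can be sketched in isolation. The following is a minimal, hypothetical model (names like `Op` and `closeOpRan` are illustrative, not Hive's actual Operator API): a sink whose parent TableScan is never closed will never run its own closeOp, so its hash table file is never written.

```java
import java.util.ArrayList;
import java.util.List;

public class OpCloseDemo {
    static class Op {
        final String name;
        final List<Op> parents = new ArrayList<>();
        boolean closed = false;
        boolean closeOpRan = false;

        Op(String name) { this.name = name; }

        boolean allParentsClosed() {
            for (Op p : parents) {
                if (!p.closed) return false;
            }
            return true;
        }

        // Mirrors the check described above: bail out unless all parents
        // are already closed, so closeOp-style work may never run.
        void close() {
            if (!allParentsClosed()) return;
            closed = true;
            closeOpRan = true; // this is where a sink would dump its hash table
        }
    }

    public static void main(String[] args) {
        Op ts = new Op("TS-empty-table"); // the TS that is never closed
        Op htso = new Op("HTSO");
        htso.parents.add(ts);

        htso.close();
        System.out.println("hash table written: " + htso.closeOpRan); // false

        ts.close();   // once the TS is closed...
        htso.close(); // ...the sink can actually close
        System.out.println("hash table written: " + htso.closeOpRan); // true
    }
}
```

With the bug, the first `close()` call is the only one that ever happens, so the downstream map join looks for a hash table file that was never produced.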
Re: Submit Precommit jobs on temporary Jenkins
Hi, Nice work!! I do have permission to 'redirect JIRA's Submit Patch Auto-Trigger from Bigtop Jenkins to my Jenkins' should I do that? Brock On Wed, Apr 16, 2014 at 12:49 AM, Szehon Ho sze...@cloudera.com wrote: To unblock people with waiting JIRA's, I setup a Jenkins on my own EC2 instance to run precommit tests, as the Bigtop guys with authorization to fix their Jenkins host are not available currently. It is at the following location: http://ec2-54-237-84-140.compute-1.amazonaws.com/job/precommit-hive/ I don't have permission to redirect JIRA's Submit Patch Auto-Trigger from Bigtop Jenkins to my Jenkins, so please submit manually in the url if you have a patch you want to test. I have submitted a few JIRA's that have been missed during the outage, and also granted all users permission to trigger the job. Steps: 1. Clicking 'Build with parameters' on the left 2. In the parameter box ISSUE_NUM, type the JIRA number part, like 6908. 3. Click 'Build' This uses the existing PTest cluster to run the tests. As soon as the Bigtop Jenkins come back, we can switch back. Hope this works and can help! Szehon On Tue, Apr 15, 2014 at 8:26 PM, Szehon Ho sze...@cloudera.com wrote: Bumping in case some people miss this. I emailed the bigtop-dev apache list yesterday about this issue, as they are hosting the jenkins running the hive builds. Some guys have looked at it and confirmed the machine is out of space, but there's only two guys who have the access, they have not responded yet (they may not be available). I'll email again tomorrow, but feel free to pile on the thread. https://mail-archives.apache.org/mod_mbox/bigtop-dev/201404.mbox/%3C20140415212726.GP22142%40boudnik.org%3E If problem persists, I could try setting up a temp jenkins on a new ec2 host, but I'd rather not if its going to be fixed, so let's see if they respond tomorrow. 
Thanks Szehon On Mon, Apr 14, 2014 at 1:44 PM, Szehon Ho sze...@cloudera.com wrote: Hi, New precommit builds haven't been submitted successfully on bigtop01 since yesterday morning. http://bigtop01.cloudera.org:8080/view/Hive/job/PreCommit-HIVE-Build/ The machine might be out of space again, or other issue. I mailed the Bigtop dev list, hopefully they can respond soon. Until then, new patches submitted to the Hive JIRA wont get picked up for testing. I'll notify if there are any updates. Thanks, Szehon
[jira] [Commented] (HIVE-6785) query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe
[ https://issues.apache.org/jira/browse/HIVE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970726#comment-13970726 ] Brock Noland commented on HIVE-6785: +1 query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe -- Key: HIVE-6785 URL: https://issues.apache.org/jira/browse/HIVE-6785 Project: Hive Issue Type: Bug Components: File Formats, Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Tongjie Chen Fix For: 0.14.0 Attachments: HIVE-6785.1.patch.txt, HIVE-6785.2.patch.txt, HIVE-6785.3.patch When a Hive table's SerDe is ParquetHiveSerDe, while some partitions use a different SerDe, AND the table has string column[s], Hive generates a confusing error message: Failed with exception java.io.IOException:java.lang.ClassCastException: parquet.hive.serde.primitive.ParquetStringInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableTimestampObjectInspector This is confusing because timestamp is mentioned even though it is not used by the table. The reason is that when there is a SerDe difference between table and partition, Hive tries to convert between the object inspectors of the two SerDes. ParquetHiveSerDe's object inspector for the string type is ParquetStringInspector (newly introduced), which is neither a subclass of WritableStringObjectInspector nor of JavaStringObjectInspector, which ObjectInspectorConverters expects for a string-category object inspector. There is no break statement in the STRING case, so the following TIMESTAMP case is executed, generating the confusing error message. See also the following parquet issue: https://github.com/Parquet/parquet-mr/issues/324 The fix is relatively easy: make ParquetStringInspector a subclass of JavaStringObjectInspector instead of AbstractPrimitiveJavaObjectInspector. 
But because the constructor of JavaStringObjectInspector is package-scoped instead of public or protected, we would need to move ParquetStringInspector to the same package as JavaStringObjectInspector. ArrayWritableObjectInspector's setStructFieldData also needs to accept List data, since the corresponding setStructFieldData and create methods return a list. This is also needed when the table SerDe is ParquetHiveSerDe and the partition SerDe is something else. -- This message was sent by Atlassian JIRA (v6.2#6252)
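The missing-break behavior this report describes can be illustrated with a self-contained sketch. This is hypothetical code, not Hive's actual ObjectInspectorConverters source: it only shows how a STRING case without a `break` falls through into the TIMESTAMP case, so a string column surfaces a timestamp-related message.

```java
public class FallThroughDemo {
    enum Category { STRING, TIMESTAMP }

    // Simulates the buggy switch: when the string inspector is not one of
    // the recognized subclasses, the STRING case does nothing and, lacking
    // a break, execution falls through into the TIMESTAMP case.
    static String describeBuggy(Category c, boolean recognizedInspector) {
        String msg = "ok";
        switch (c) {
            case STRING:
                if (recognizedInspector) {
                    msg = "converted as string";
                    break;
                }
                // missing break here: falls through to TIMESTAMP
            case TIMESTAMP:
                msg = "cannot be cast to SettableTimestampObjectInspector";
                break;
        }
        return msg;
    }

    public static void main(String[] args) {
        // A string column with an unrecognized inspector reports a
        // timestamp-flavored error, matching the confusing message above.
        System.out.println(describeBuggy(Category.STRING, false));
    }
}
```

Making the inspector a recognized subclass (the proposed fix) takes the `break` path, and the timestamp branch is never reached for string columns.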
[jira] [Commented] (HIVE-5072) [WebHCat]Enable directly invoke Sqoop job through Templeton
[ https://issues.apache.org/jira/browse/HIVE-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13970733#comment-13970733 ] Hive QA commented on HIVE-5072: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640189/HIVE-5072.4.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/7/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/7/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640189 [WebHCat]Enable directly invoke Sqoop job through Templeton --- Key: HIVE-5072 URL: https://issues.apache.org/jira/browse/HIVE-5072 Project: Hive Issue Type: Improvement Components: WebHCat Affects Versions: 0.12.0 Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-5072.1.patch, HIVE-5072.2.patch, HIVE-5072.3.patch, HIVE-5072.4.patch, Templeton-Sqoop-Action.pdf Now it is hard to invoke a Sqoop job through Templeton. The only way is to use the classpath jar generated by a Sqoop job and use the jar delegator in Templeton. We should implement a Sqoop delegator to enable directly invoking a Sqoop job through Templeton. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6919) hive sql std auth select query fails on partitioned tables
[ https://issues.apache.org/jira/browse/HIVE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971485#comment-13971485 ] Thejas M Nair commented on HIVE-6919: - Ran tests locally and they passed. hive sql std auth select query fails on partitioned tables -- Key: HIVE-6919 URL: https://issues.apache.org/jira/browse/HIVE-6919 Project: Hive Issue Type: Bug Components: Authorization Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Priority: Critical Attachments: HIVE-6919.1.patch {code} analyze table studentparttab30k partition (ds) compute statistics; Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied. Principal [name=hadoopqa, type=USER] does not have following privileges on Object [type=PARTITION, name=null] : [SELECT] (state=42000,code=4) {code} Sql std auth is supposed to ignore partition level objects for privilege checks, but that is not working as intended. -- This message was sent by Atlassian JIRA (v6.2#6252)
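The intended behavior in HIVE-6919 (SQL standard auth ignoring partition-level objects) can be sketched as a small filter over privilege objects. All names here (`PrivObject`, `filterForAuth`) are hypothetical, not Hive's actual authorization API; the point is only that partition-level entries should be dropped before the privilege check, so a query on a partitioned table is authorized against the table object alone.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AuthFilterDemo {
    enum ObjType { TABLE, PARTITION }

    static class PrivObject {
        final ObjType type;
        final String name;
        PrivObject(ObjType type, String name) { this.type = type; this.name = name; }
    }

    // Drop partition-level objects before checking privileges, so the
    // check never fails on an Object [type=PARTITION, name=null].
    static List<PrivObject> filterForAuth(List<PrivObject> inputs) {
        List<PrivObject> out = new ArrayList<>();
        for (PrivObject o : inputs) {
            if (o.type != ObjType.PARTITION) out.add(o);
        }
        return out;
    }

    public static void main(String[] args) {
        List<PrivObject> inputs = Arrays.asList(
            new PrivObject(ObjType.TABLE, "studentparttab30k"),
            new PrivObject(ObjType.PARTITION, "ds=1"));
        // Only the table object remains for the privilege check.
        System.out.println(filterForAuth(inputs).size()); // prints 1
    }
}
```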
[jira] [Commented] (HIVE-6910) Invalid column access info for partitioned table
[ https://issues.apache.org/jira/browse/HIVE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13971490#comment-13971490 ] Hive QA commented on HIVE-6910: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640374/HIVE-6910.2.patch.txt {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2 {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/8/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/8/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640374 Invalid column access info for partitioned table Key: HIVE-6910 URL: https://issues.apache.org/jira/browse/HIVE-6910 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.11.0, 0.12.0, 0.13.0 Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6910.1.patch.txt, HIVE-6910.2.patch.txt From http://www.mail-archive.com/user@hive.apache.org/msg11324.html neededColumnIDs in TS is only for non-partition columns. But ColumnAccessAnalyzer is calculating it on all columns. -- This message was sent by Atlassian JIRA (v6.2#6252)
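The distinction behind HIVE-6910 (neededColumnIDs in the TableScan indexing only non-partition columns) can be illustrated with a hypothetical sketch; `resolveNeeded` and the column lists are made-up names, not Hive's API. Resolving an ID against all columns, partition columns included, would report the wrong column name, which is the bug the report describes.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NeededColumnsDemo {
    // IDs in neededColumnIDs are positions within the non-partition
    // columns only, so partition columns must be excluded before
    // mapping an ID back to a column name.
    static List<String> resolveNeeded(List<String> allCols,
                                      List<String> partCols,
                                      List<Integer> neededColumnIDs) {
        List<String> nonPart = new ArrayList<>(allCols);
        nonPart.removeAll(partCols);
        List<String> out = new ArrayList<>();
        for (int id : neededColumnIDs) {
            out.add(nonPart.get(id));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("a", "b", "ds"); // ds = partition column
        List<String> part = Arrays.asList("ds");
        // ID 1 must resolve to "b"; indexing all three columns (the bug)
        // would make ID 1 ambiguous with the partition column layout.
        System.out.println(resolveNeeded(all, part, Arrays.asList(1))); // prints [b]
    }
}
```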
Re: [VOTE] Apache Hive 0.13.0 Release Candidate 2
I found a major issue in the working of SQL standard auth - HIVE-6919. It has a fix, and was reviewed yesterday night, and I have also run the tests. Since SQL standard auth is one of the major new features in this release, I think it would make sense to roll out another RC with this fix. Thoughts? Sorry about this late find! On Tue, Apr 15, 2014 at 7:02 PM, Gunther Hagleitner ghagleit...@hortonworks.com wrote: +1 - Verified checksums and signatures - Compiled source and ran partial unit tests - Installed both binary and hive built from source on cluster - Ran a number of test queries without any problems on both Thanks, Gunther. On Tue, Apr 15, 2014 at 2:06 PM, Harish Butani rhbut...@apache.org wrote: Apache Hive 0.13.0 Release Candidate 2 is available here: http://people.apache.org/~rhbutani/hive-0.13.0-candidate-2 Maven artifacts are available here: https://repository.apache.org/content/repositories/orgapachehive-1011 Source tag for RCN is at: https://svn.apache.org/repos/asf/hive/tags/release-0.13.0-rc2/ Voting will conclude in 72 hours. Hive PMC Members: Please test and vote. Thanks. -- CONFIDENTIALITY NOTICE NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
[jira] [Assigned] (HIVE-538) make hive_jdbc.jar self-containing
[ https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick White reassigned HIVE-538: --- Assignee: Nick White (was: Ashutosh Chauhan) make hive_jdbc.jar self-containing -- Key: HIVE-538 URL: https://issues.apache.org/jira/browse/HIVE-538 Project: Hive Issue Type: Improvement Components: JDBC Affects Versions: 0.3.0, 0.4.0, 0.6.0, 0.13.0 Reporter: Raghotham Murthy Assignee: Nick White Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-538.D2553.1.patch, ASF.LICENSE.NOT.GRANTED--HIVE-538.D2553.2.patch Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are required in the classpath to run jdbc applications on hive. We need to do at least the following to get rid of most unnecessary dependencies: 1. get rid of dynamic serde and use a standard serialization format, maybe tab separated, json or avro 2. don't use hadoop configuration parameters 3. repackage thrift and fb303 classes into hive_jdbc.jar -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-538) make hive_jdbc.jar self-containing
[ https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick White updated HIVE-538: Affects Version/s: 0.13.0 make hive_jdbc.jar self-containing -- Key: HIVE-538 URL: https://issues.apache.org/jira/browse/HIVE-538 Project: Hive Issue Type: Improvement Components: JDBC Affects Versions: 0.3.0, 0.4.0, 0.6.0, 0.13.0 Reporter: Raghotham Murthy Assignee: Ashutosh Chauhan Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-538.D2553.1.patch, ASF.LICENSE.NOT.GRANTED--HIVE-538.D2553.2.patch Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are required in the classpath to run jdbc applications on hive. We need to do at least the following to get rid of most unnecessary dependencies: 1. get rid of dynamic serde and use a standard serialization format, maybe tab separated, json or avro 2. don't use hadoop configuration parameters 3. repackage thrift and fb303 classes into hive_jdbc.jar -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6835) Reading of partitioned Avro data fails if partition schema does not match table schema
[ https://issues.apache.org/jira/browse/HIVE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anthony Hsu updated HIVE-6835: -- Attachment: (was: HIVE-6835.2.patch) Reading of partitioned Avro data fails if partition schema does not match table schema -- Key: HIVE-6835 URL: https://issues.apache.org/jira/browse/HIVE-6835 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6835.1.patch, HIVE-6835.2.patch To reproduce: {code} create table testarray (a array<string>); load data local inpath '/home/ahsu/test/array.txt' into table testarray; # create partitioned Avro table with one array column create table avroarray partitioned by (y string) row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties ('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record", "fields": [ { "name":"a", "type":{"type":"array","items":"string"} } ] }') STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'; insert into table avroarray partition(y=1) select * from testarray; # add an int column with a default value of 0 alter table avroarray set serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record", "fields": [ {"name":"intfield","type":"int","default":0},{ "name":"a", "type":{"type":"array","items":"string"} } ] }'); # fails with ClassCastException select * from avroarray; {code} The select * fails with: {code} Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6835) Reading of partitioned Avro data fails if partition schema does not match table schema
[ https://issues.apache.org/jira/browse/HIVE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anthony Hsu updated HIVE-6835: -- Attachment: HIVE-6835.2.patch Reuploading patch version 2 to trigger the tests again. I ran locally the tests that failed in the last pre-commit build run, and they both passed for me. Reading of partitioned Avro data fails if partition schema does not match table schema -- Key: HIVE-6835 URL: https://issues.apache.org/jira/browse/HIVE-6835 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6835.1.patch, HIVE-6835.2.patch To reproduce: {code} create table testarray (a array<string>); load data local inpath '/home/ahsu/test/array.txt' into table testarray; # create partitioned Avro table with one array column create table avroarray partitioned by (y string) row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties ('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record", "fields": [ { "name":"a", "type":{"type":"array","items":"string"} } ] }') STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'; insert into table avroarray partition(y=1) select * from testarray; # add an int column with a default value of 0 alter table avroarray set serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record", "fields": [ {"name":"intfield","type":"int","default":0},{ "name":"a", "type":{"type":"array","items":"string"} } ] }'); # fails with ClassCastException select * from avroarray; {code} The select * fails with: {code} Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector {code} -- This message was sent by
Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6835) Reading of partitioned Avro data fails if partition schema does not match table schema
[ https://issues.apache.org/jira/browse/HIVE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6835: --- Status: Open (was: Patch Available) Please don't modify the generated file serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/serdeConstants.java. Instead, place the new constant in serde/if/serde.thrift. Reading of partitioned Avro data fails if partition schema does not match table schema -- Key: HIVE-6835 URL: https://issues.apache.org/jira/browse/HIVE-6835 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6835.1.patch, HIVE-6835.2.patch To reproduce: {code} create table testarray (a array<string>); load data local inpath '/home/ahsu/test/array.txt' into table testarray; # create partitioned Avro table with one array column create table avroarray partitioned by (y string) row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties ('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record", "fields": [ { "name":"a", "type":{"type":"array","items":"string"} } ] }') STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'; insert into table avroarray partition(y=1) select * from testarray; # add an int column with a default value of 0 alter table avroarray set serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' with serdeproperties('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record", "fields": [ {"name":"intfield","type":"int","default":0},{ "name":"a", "type":{"type":"array","items":"string"} } ] }'); # fails with ClassCastException select * from avroarray; {code} The select * fails with: {code} Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector {code} --
This message was sent by Atlassian JIRA (v6.2#6252)
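The repro above relies on Avro's schema-resolution rules: when the reader (table) schema adds a field carrying a "default", records written without that field should resolve using the default instead of failing. The Python below is a toy model of that rule (field names taken from the repro, resolution logic simplified), not Hive's AvroSerDe code.

```python
# Toy model of Avro schema resolution with defaults, the behavior the
# HIVE-6835 repro exercises when the table schema gains "intfield".

def resolve_record(record, reader_fields):
    out = {}
    for field in reader_fields:
        if field["name"] in record:
            out[field["name"]] = record[field["name"]]
        elif "default" in field:
            out[field["name"]] = field["default"]  # writer lacked the field
        else:
            raise ValueError("no value and no default for %s" % field["name"])
    return out

reader_fields = [
    {"name": "intfield", "type": "int", "default": 0},
    {"name": "a", "type": {"type": "array", "items": "string"}},
]
print(resolve_record({"a": ["x", "y"]}, reader_fields))
# -> {'intfield': 0, 'a': ['x', 'y']}
```

The bug is that Hive reads the old partition with object inspectors built from the new table schema without going through this resolution step, hence the ListObjectInspector/PrimitiveObjectInspector cast failure.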
[jira] [Updated] (HIVE-6913) Hive unable to find the hashtable file during complex multi-staged map join
[ https://issues.apache.org/jira/browse/HIVE-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-6913: --- Attachment: HIVE-6913.patch All three tests passed locally. Re-uploading for another run. Hive unable to find the hashtable file during complex multi-staged map join --- Key: HIVE-6913 URL: https://issues.apache.org/jira/browse/HIVE-6913 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-6913.patch, HIVE-6913.patch If a query has multiple mapjoins and one of the tables to be mapjoined is empty, the query can fail with a 'no such file or directory' error when looking for the hashtable. This is because when we generate a dummy hash table, we do not close the TableScan (TS) operator for that table. Additionally, HashTableSinkOperator (HTSO) outputs its hash tables in the closeOp method. However, when close is called on HTSO we check to ensure that all parents are closed: https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java#L333 which is not true in this case, because the TS operator for the empty table was never closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
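The close bookkeeping Brock describes can be illustrated with a toy operator graph (a Python model, not Hive's Operator class): an operator's closeOp logic runs only once every parent is closed, so a TableScan for an empty table that is never closed keeps the HashTableSinkOperator from ever writing its hashtable file.

```python
# Toy model of the HIVE-6913 failure: closeOp is gated on all parents
# being closed (the check Brock points to at Operator.java:333).

class Op:
    def __init__(self, name, parents=()):
        self.name, self.parents, self.closed = name, list(parents), False
        self.close_op_ran = False  # for HTSO, this is where hash tables are written

    def close(self):
        self.closed = True
        if all(p.closed for p in self.parents):
            self.close_op_ran = True

ts_big = Op("TS big")
ts_empty = Op("TS empty")  # dummy hash table path: never closed before the fix
htso = Op("HTSO", parents=[ts_big, ts_empty])
ts_big.close()
htso.close()
print(htso.close_op_ran)  # -> False: hashtable file never written
```

Closing `ts_empty` as well (the fix direction: close the TS even when a dummy hash table is generated) lets `htso.close()` run its closeOp and produce the file.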
[jira] [Commented] (HIVE-5072) [WebHCat]Enable directly invoke Sqoop job through Templeton
[ https://issues.apache.org/jira/browse/HIVE-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971702#comment-13971702 ] Eugene Koifman commented on HIVE-5072: -- If you are sure that this works, that's fine. Please make sure the doc ticket explains this clearly. [WebHCat]Enable directly invoke Sqoop job through Templeton --- Key: HIVE-5072 URL: https://issues.apache.org/jira/browse/HIVE-5072 Project: Hive Issue Type: Improvement Components: WebHCat Affects Versions: 0.12.0 Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-5072.1.patch, HIVE-5072.2.patch, HIVE-5072.3.patch, HIVE-5072.4.patch, Templeton-Sqoop-Action.pdf Now it is hard to invoke a Sqoop job through templeton. The only way is to use the classpath jar generated by a sqoop job and use the jar delegator in Templeton. We should implement Sqoop Delegator to enable directly invoke Sqoop job through Templeton. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: [VOTE] Apache Hive 0.13.0 Release Candidate 2
On second thoughts lets go ahead with this RC. We had several cycles already for this 0.13 I can include this in a 0.13.1, which we can hopefully get out in a week or so after 0.13.0. On Wed, Apr 16, 2014 at 7:44 AM, Thejas Nair the...@hortonworks.com wrote: I found a major issue in working of SQL standard auth - HIVE-6919 . It has a fix, and was reviewed yesterday night, and I have also run the tests. Since sql standard auth is one of major new features in this release, I think it would make sense to roll out another rc with this fix. Thoughts ? Sorry about this late find! On Tue, Apr 15, 2014 at 7:02 PM, Gunther Hagleitner ghagleit...@hortonworks.com wrote: +1 - Verified checksums and signatures - Compiled source and ran partial unit tests - Installed both binary and hive built from source on cluster - Ran a number of test queries without any problems on both Thanks, Gunther. On Tue, Apr 15, 2014 at 2:06 PM, Harish Butani rhbut...@apache.org wrote: Apache Hive 0.13.0 Release Candidate 2 is available here: http://people.apache.org/~rhbutani/hive-0.13.0-candidate-2 Maven artifacts are available here: https://repository.apache.org/content/repositories/orgapachehive-1011 Source tag for RCN is at: https://svn.apache.org/repos/asf/hive/tags/release-0.13.0-rc2/ Voting will conclude in 72 hours. Hive PMC Members: Please test and vote. Thanks. -- CONFIDENTIALITY NOTICE NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You. 
[jira] [Updated] (HIVE-6903) Change default value of hive.metastore.execute.setugi to true
[ https://issues.apache.org/jira/browse/HIVE-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6903: --- Status: Open (was: Patch Available) Change default value of hive.metastore.execute.setugi to true - Key: HIVE-6903 URL: https://issues.apache.org/jira/browse/HIVE-6903 Project: Hive Issue Type: Task Components: Metastore Affects Versions: 0.12.0, 0.11.0, 0.10.0, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6903.1.patch, HIVE-6903.patch Since its introduction in HIVE-2616 I haven't seen any bug reported for it, only grief from users who expect the system to work as if this is true by default. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6903) Change default value of hive.metastore.execute.setugi to true
[ https://issues.apache.org/jira/browse/HIVE-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6903: --- Status: Patch Available (was: Open) Change default value of hive.metastore.execute.setugi to true - Key: HIVE-6903 URL: https://issues.apache.org/jira/browse/HIVE-6903 Project: Hive Issue Type: Task Components: Metastore Affects Versions: 0.12.0, 0.11.0, 0.10.0, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6903.1.patch, HIVE-6903.patch Since its introduction in HIVE-2616 I haven't seen any bug reported for it, only grief from users who expect the system to work as if this is true by default. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6903) Change default value of hive.metastore.execute.setugi to true
[ https://issues.apache.org/jira/browse/HIVE-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6903: --- Attachment: HIVE-6903.1.patch Incorporating [~navis] feedback. Change default value of hive.metastore.execute.setugi to true - Key: HIVE-6903 URL: https://issues.apache.org/jira/browse/HIVE-6903 Project: Hive Issue Type: Task Components: Metastore Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6903.1.patch, HIVE-6903.patch Since its introduction in HIVE-2616 I haven't seen any bug reported for it, only grief from users who expect the system to work as if this is true by default. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5072) [WebHCat]Enable directly invoke Sqoop job through Templeton
[ https://issues.apache.org/jira/browse/HIVE-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971735#comment-13971735 ] Eugene Koifman commented on HIVE-5072: -- also, your test cases still use user.name as form parameter - that's deprecated, so please make it a query parameter. I think implementing version/sqoop to throw an exception to say that it's not implemented is not what is required. [WebHCat]Enable directly invoke Sqoop job through Templeton --- Key: HIVE-5072 URL: https://issues.apache.org/jira/browse/HIVE-5072 Project: Hive Issue Type: Improvement Components: WebHCat Affects Versions: 0.12.0 Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-5072.1.patch, HIVE-5072.2.patch, HIVE-5072.3.patch, HIVE-5072.4.patch, Templeton-Sqoop-Action.pdf Now it is hard to invoke a Sqoop job through templeton. The only way is to use the classpath jar generated by a sqoop job and use the jar delegator in Templeton. We should implement Sqoop Delegator to enable directly invoke Sqoop job through Templeton. -- This message was sent by Atlassian JIRA (v6.2#6252)
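The change Eugene asks for (user.name as a query parameter instead of a deprecated form parameter) amounts to putting the parameter in the request URL. A small sketch, where the endpoint path and values are illustrative (50111 is WebHCat's default port; the sqoop endpoint is what this JIRA proposes, not an existing API):

```python
# Build a WebHCat request URL with user.name in the query string
# rather than sending it as a form (body) parameter.
from urllib.parse import urlencode

base = "http://localhost:50111/templeton/v1/sqoop"
query = urlencode({"user.name": "hadoopqa"})
url = "%s?%s" % (base, query)
print(url)  # -> http://localhost:50111/templeton/v1/sqoop?user.name=hadoopqa
```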
[jira] [Updated] (HIVE-6862) add DB schema DDL and upgrade 12to13 scripts for MS SQL Server
[ https://issues.apache.org/jira/browse/HIVE-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-6862: - Attachment: HIVE-6862.3.patch HIVE-6862.3.patch to address [~leftylev]'s comments add DB schema DDL and upgrade 12to13 scripts for MS SQL Server -- Key: HIVE-6862 URL: https://issues.apache.org/jira/browse/HIVE-6862 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Attachments: HIVE-6862.2.patch, HIVE-6862.3.patch, HIVE-6862.patch need to add a unified 0.13 script and a separate script for ACID support NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6908) TestThriftBinaryCLIService.testExecuteStatementAsync has intermittent failures
[ https://issues.apache.org/jira/browse/HIVE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971772#comment-13971772 ] Hive QA commented on HIVE-6908: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640168/HIVE-6908.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/10/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/10/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640168 TestThriftBinaryCLIService.testExecuteStatementAsync has intermittent failures -- Key: HIVE-6908 URL: https://issues.apache.org/jira/browse/HIVE-6908 Project: Hive Issue Type: Bug Components: Tests Affects Versions: 0.13.0 Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-6908.patch This has failed sometimes in the pre-commit tests. ThriftCLIServiceTest.testExecuteStatementAsync runs two statements. They are given a 100-second timeout in total; not sure if that's intentional. As the first is a select query, it will take a majority of the time. The second statement (create table) should be quicker, but it fails sometimes because the timeout is already mostly used up. The timeout should probably be reset after the first statement. If the operation finishes before the timeout, it won't have any effect, as it'll break out. -- This message was sent by Atlassian JIRA (v6.2#6252)
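The fix Szehon suggests, resetting the timeout per statement instead of sharing one 100-second budget across both, can be sketched as a polling loop (names and the polling approach are illustrative, not the actual test code):

```python
# Model of a per-statement wait: each call computes a fresh deadline, so a
# slow first statement cannot starve the second one's timeout budget.
import time

def wait_for_completion(is_finished, timeout_sec=100.0, poll_sec=0.01):
    deadline = time.monotonic() + timeout_sec  # reset for every statement
    while time.monotonic() < deadline:
        if is_finished():
            return True  # breaks out early, so the reset costs nothing
        time.sleep(poll_sec)
    return False

print(wait_for_completion(lambda: True))  # -> True
```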
[jira] [Commented] (HIVE-6862) add DB schema DDL and upgrade 12to13 scripts for MS SQL Server
[ https://issues.apache.org/jira/browse/HIVE-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971792#comment-13971792 ] Lefty Leverenz commented on HIVE-6862: -- Sorry to be a pest, but step 2 in Updating still repeats "to create" -- {quote} + in your hive-site.xml. This will cause DataNucleus to create to + create tables which are missing from your database once metastore starts. {quote} add DB schema DDL and upgrade 12to13 scripts for MS SQL Server -- Key: HIVE-6862 URL: https://issues.apache.org/jira/browse/HIVE-6862 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Attachments: HIVE-6862.2.patch, HIVE-6862.3.patch, HIVE-6862.patch need to add a unified 0.13 script and a separate script for ACID support NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Submit Precommit jobs on temporary Jenkins
Sure, that would be great if you haven't done so already. Thanks, Brock! Szehon On Wed, Apr 16, 2014 at 5:10 AM, Brock Noland br...@cloudera.com wrote: Hi, Nice work!! I do have permission to 'redirect JIRA's Submit Patch Auto-Trigger from Bigtop Jenkins to my Jenkins' should I do that? Brock On Wed, Apr 16, 2014 at 12:49 AM, Szehon Ho sze...@cloudera.com wrote: To unblock people with waiting JIRAs, I set up a Jenkins on my own EC2 instance to run precommit tests, as the Bigtop guys with authorization to fix their Jenkins host are not available currently. It is at the following location: http://ec2-54-237-84-140.compute-1.amazonaws.com/job/precommit-hive/ I don't have permission to redirect JIRA's Submit Patch Auto-Trigger from Bigtop Jenkins to my Jenkins, so please submit manually in the URL if you have a patch you want to test. I have submitted a few JIRAs that were missed during the outage, and also granted all users permission to trigger the job. Steps: 1. Click 'Build with parameters' on the left 2. In the parameter box ISSUE_NUM, type the JIRA number part, like 6908. 3. Click 'Build' This uses the existing PTest cluster to run the tests. As soon as the Bigtop Jenkins comes back, we can switch back. Hope this works and can help! Szehon On Tue, Apr 15, 2014 at 8:26 PM, Szehon Ho sze...@cloudera.com wrote: Bumping in case some people miss this. I emailed the bigtop-dev apache list yesterday about this issue, as they are hosting the jenkins running the hive builds. Some guys have looked at it and confirmed the machine is out of space, but there are only two people who have access, and they have not responded yet (they may not be available). I'll email again tomorrow, but feel free to pile on the thread.
https://mail-archives.apache.org/mod_mbox/bigtop-dev/201404.mbox/%3C20140415212726.GP22142%40boudnik.org%3E If the problem persists, I could try setting up a temp jenkins on a new ec2 host, but I'd rather not if it's going to be fixed, so let's see if they respond tomorrow. Thanks, Szehon On Mon, Apr 14, 2014 at 1:44 PM, Szehon Ho sze...@cloudera.com wrote: Hi, New precommit builds haven't been submitted successfully on bigtop01 since yesterday morning. http://bigtop01.cloudera.org:8080/view/Hive/job/PreCommit-HIVE-Build/ The machine might be out of space again, or it may be some other issue. I mailed the Bigtop dev list, hopefully they can respond soon. Until then, new patches submitted to the Hive JIRA won't get picked up for testing. I'll notify if there are any updates. Thanks, Szehon
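The manual steps Szehon lists can also be driven through Jenkins' standard remote-trigger endpoint, buildWithParameters, assuming remote triggering is enabled on the job (he mentions granting all users permission to trigger it). A sketch that only constructs the URL, using the host and job name from the thread:

```python
# Build the Jenkins remote-trigger URL for the temporary precommit job.
# buildWithParameters is Jenkins' standard endpoint for parameterized jobs.
from urllib.parse import urlencode

def precommit_trigger_url(issue_num):
    base = "http://ec2-54-237-84-140.compute-1.amazonaws.com/job/precommit-hive"
    return "%s/buildWithParameters?%s" % (base, urlencode({"ISSUE_NUM": issue_num}))

print(precommit_trigger_url(6908))
# -> http://ec2-54-237-84-140.compute-1.amazonaws.com/job/precommit-hive/buildWithParameters?ISSUE_NUM=6908
```

An actual trigger would POST to this URL (for example with curl); the sketch stops at URL construction since the host was temporary.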
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971829#comment-13971829 ] Lefty Leverenz commented on HIVE-6549: -- Thanks [~eugene.koifman]. I've removed it. We never decided whether to maintain the doc or just refer readers to webhcat-default.xml. Now that the HCatalog 0.5.0 docs have gone missing, I'm inclined to remove the default values from the table but keep the variable names and descriptions, which seem to be fairly stable. Then instead of the introduction discussing default values, it could give the path to webhcat-default.xml. What do you think? By the way, I notice SNAPSHOT in the templeton.jar default in 0.13.0 RC2 (escape chars added): {quote} property nametempleton.jar/name value$\{env.TEMPLETON_HOME\}/share/webhcat/svr/webhcat-0.6.0-SNAPSHOT.jar/value descriptionThe path to the Templeton jar file./description /property {quote} It's also in branch-0.11 and branch-0.12 even though the doc for branch-0.11 gives webhcat-0.11.0.jar instead of webhcat-0.6.0-SNAPSHOT.jar. remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch this property is no longer used also removed corresponding AppConfig.TEMPLETON_JAR_NAME hcatalog/bin/hive-config.sh is not used NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5376) Hive does not honor type for partition columns when altering column type
[ https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-5376: Status: Open (was: Patch Available) Hive does not honor type for partition columns when altering column type Key: HIVE-5376 URL: https://issues.apache.org/jira/browse/HIVE-5376 Project: Hive Issue Type: Bug Components: CLI Reporter: Sergey Shelukhin Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-5376.1.patch, HIVE-5376.2.patch, HIVE-5376.3.patch Followup for HIVE-5297. If partition column of type string is changed to int, the data is not verified. The values for partition columns are all in metastore db, so it's easy to check and fail the type change. alter_partition_coltype.q (or some other test?) checks this behavior right now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5376) Hive does not honor type for partition columns when altering column type
[ https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-5376: Attachment: HIVE-5376.3.patch Hive does not honor type for partition columns when altering column type Key: HIVE-5376 URL: https://issues.apache.org/jira/browse/HIVE-5376 Project: Hive Issue Type: Bug Components: CLI Reporter: Sergey Shelukhin Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-5376.1.patch, HIVE-5376.2.patch, HIVE-5376.3.patch Followup for HIVE-5297. If partition column of type string is changed to int, the data is not verified. The values for partition columns are all in metastore db, so it's easy to check and fail the type change. alter_partition_coltype.q (or some other test?) checks this behavior right now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5376) Hive does not honor type for partition columns when altering column type
[ https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971864#comment-13971864 ] Hari Sankar Sivarama Subramaniyan commented on HIVE-5376: - [~rhbutani] The DEFAULT_PARTITION issue is no longer present. It looks like a local setup issue. Please look at the new patch Hive does not honor type for partition columns when altering column type Key: HIVE-5376 URL: https://issues.apache.org/jira/browse/HIVE-5376 Project: Hive Issue Type: Bug Components: CLI Reporter: Sergey Shelukhin Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-5376.1.patch, HIVE-5376.2.patch, HIVE-5376.3.patch Followup for HIVE-5297. If partition column of type string is changed to int, the data is not verified. The values for partition columns are all in metastore db, so it's easy to check and fail the type change. alter_partition_coltype.q (or some other test?) checks this behavior right now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5376) Hive does not honor type for partition columns when altering column type
[ https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-5376: Status: Patch Available (was: Open) Hive does not honor type for partition columns when altering column type Key: HIVE-5376 URL: https://issues.apache.org/jira/browse/HIVE-5376 Project: Hive Issue Type: Bug Components: CLI Reporter: Sergey Shelukhin Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-5376.1.patch, HIVE-5376.2.patch, HIVE-5376.3.patch Followup for HIVE-5297. If partition column of type string is changed to int, the data is not verified. The values for partition columns are all in metastore db, so it's easy to check and fail the type change. alter_partition_coltype.q (or some other test?) checks this behavior right now. -- This message was sent by Atlassian JIRA (v6.2#6252)
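The check HIVE-5376 asks for, verify that every existing partition value parses as the new column type before allowing the ALTER, can be sketched as follows (a Python model with assumed names, not the metastore code; only string-to-int is shown since that is the case in the description):

```python
# Sketch of validating existing partition values against a new column type,
# failing the type change when any stored value does not parse.

def _parses(parse, value):
    try:
        parse(value)
        return True
    except ValueError:
        return False

def validate_partition_values(values, new_type):
    parsers = {"int": int, "string": str}  # illustrative subset of Hive types
    parse = parsers[new_type]
    bad = [v for v in values if not _parses(parse, v)]
    if bad:
        raise ValueError("cannot change column type: invalid values %s" % bad)

validate_partition_values(["1", "2"], "int")      # ok
# validate_partition_values(["1", "abc"], "int")  # would raise ValueError
```

Since the partition values live in the metastore database, this scan is cheap relative to reading data, which is the point made in the issue description.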
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971869#comment-13971869 ] Eugene Koifman commented on HIVE-6549: -- the value for templeton.jar is not relevant - it's not used (HIVE-6549 covers removing it). I agree that listing defaults is not useful. I think including a link on github/svn to webhcat-default.xml would be useful. remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch This property is no longer used; the corresponding AppConfig.TEMPLETON_JAR_NAME is also removed. hcatalog/bin/hive-config.sh is not used NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5523) HiveHBaseStorageHandler should pass Kerberos credentials down to HBase
[ https://issues.apache.org/jira/browse/HIVE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-5523: --- Status: Open (was: Patch Available) HiveHBaseStorageHandler should pass kerbros credentials down to HBase - Key: HIVE-5523 URL: https://issues.apache.org/jira/browse/HIVE-5523 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.11.0 Reporter: Nick Dimiduk Assignee: Sushanth Sowmyan Attachments: HIVE-5523.patch, Task Logs_ 'attempt_201310110032_0023_r_00_0'.html Running on a secured cluster, I have an HBase table defined thusly {noformat} CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey STRING, pageviews STRING, bytes STRING) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1,f:c2') TBLPROPERTIES ('hbase.table.name' = 'pagecounts'); {noformat} and a query to populate that table {noformat} -- ensure hbase dependency jars are shipped with the MR job SET hive.aux.jars.path = file:///etc/hbase/conf/hbase-site.xml,file:///usr/lib/hive/lib/hive-hbase-handler-0.11.0.1.3.2.0-111.jar,file:///usr/lib/hbase/hbase-0.94.6.1.3.2.0-111-security.jar,file:///usr/lib/zookeeper/zookeeper-3.4.5.1.3.2.0-111.jar; -- populate our hbase table FROM pgc INSERT INTO TABLE pagecounts_hbase SELECT pgc.* WHERE rowkey LIKE 'en/q%' LIMIT 10; {noformat} The reduce tasks fail with what boils down to the following exception: {noformat} Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'. 
at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:263)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
at org.apache.hadoop.hbase.security.User.call(User.java:590)
at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:444)
at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.handleSaslConnectionFailure(SecureClient.java:224)
at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:313)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104)
at $Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.SecureRpcEngine.getProxy(SecureRpcEngine.java:146)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1346)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1292)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1001)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:896)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:998)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:900)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:234)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:174)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:133)
at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat.getHiveRecordWriter(HiveHBaseTableOutputFormat.java:83)
at
[jira] [Commented] (HIVE-5523) HiveHBaseStorageHandler should pass Kerberos credentials down to HBase
[ https://issues.apache.org/jira/browse/HIVE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971876#comment-13971876 ] Sushanth Sowmyan commented on HIVE-5523: I'm canceling this patch because I feel there is still more code cleanup required here to make things more obvious, and code cleanup is now the point of this patch, since the originally reported issue works without it. HIVE-6915 fixes adding the delegation token for Tez, and in doing so opens up the question of whether the current approach works because it happens to work or because that's the way it's supposed to; I'm leaning towards the former. HiveHBaseStorageHandler should pass Kerberos credentials down to HBase - Key: HIVE-5523 URL: https://issues.apache.org/jira/browse/HIVE-5523 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.11.0 Reporter: Nick Dimiduk Assignee: Sushanth Sowmyan Attachments: HIVE-5523.patch, Task Logs_ 'attempt_201310110032_0023_r_00_0'.html Running on a secured cluster, I have an HBase table defined thusly {noformat} CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey STRING, pageviews STRING, bytes STRING) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1,f:c2') TBLPROPERTIES ('hbase.table.name' = 'pagecounts'); {noformat} and a query to populate that table {noformat} -- ensure hbase dependency jars are shipped with the MR job SET hive.aux.jars.path = file:///etc/hbase/conf/hbase-site.xml,file:///usr/lib/hive/lib/hive-hbase-handler-0.11.0.1.3.2.0-111.jar,file:///usr/lib/hbase/hbase-0.94.6.1.3.2.0-111-security.jar,file:///usr/lib/zookeeper/zookeeper-3.4.5.1.3.2.0-111.jar; -- populate our hbase table FROM pgc INSERT INTO TABLE pagecounts_hbase SELECT pgc.* WHERE rowkey LIKE 'en/q%' LIMIT 10; {noformat} The reduce tasks fail with what boils down to the following exception: {noformat} Caused by: java.lang.RuntimeException: SASL
authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:263)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
at org.apache.hadoop.hbase.security.User.call(User.java:590)
at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:444)
at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.handleSaslConnectionFailure(SecureClient.java:224)
at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:313)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104)
at $Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.SecureRpcEngine.getProxy(SecureRpcEngine.java:146)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1346)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1292)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1001)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:896)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:998)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:900)
at
[jira] [Commented] (HIVE-6901) Explain plan doesn't show operator tree for the fetch operator
[ https://issues.apache.org/jira/browse/HIVE-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971880#comment-13971880 ] Hive QA commented on HIVE-6901: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640079/HIVE-6901.1.patch {color:red}ERROR:{color} -1 due to 65 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_union org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binarysortable_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin8 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin9 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_like_view org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_filter_join_breaktask org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input23 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input42 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part7 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part9 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_filters_overlap org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_louter_join_ppr org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataonly1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nullformatCTAS org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_outer_join_ppr org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pcr org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join_filter org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_vc org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppr_allchildsarenull org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_push_or org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rand_partitionpruner1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rand_partitionpruner3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_regexp_extract org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_router_join_ppr org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample10 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample8 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample9 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_serde_user_properties org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_create_table_alter 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_create_table_serde org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_15 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_transform_ppr1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_transform_ppr2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_explode org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_explode org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union24 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_ppr org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unset_table_view_property org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket4 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin7
[jira] [Commented] (HIVE-5775) Introduce Cost Based Optimizer to Hive
[ https://issues.apache.org/jira/browse/HIVE-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971936#comment-13971936 ] Vaibhav Gumashta commented on HIVE-5775: Hi [~jpullokkaran]; wanted to go through the code - can you please upload to review board? Thanks! Introduce Cost Based Optimizer to Hive -- Key: HIVE-5775 URL: https://issues.apache.org/jira/browse/HIVE-5775 Project: Hive Issue Type: New Feature Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: CBO-2.pdf, HIVE-5775.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6921) index creation fails with sql std auth turned on
Ashutosh Chauhan created HIVE-6921: -- Summary: index creation fails with sql std auth turned on Key: HIVE-6921 URL: https://issues.apache.org/jira/browse/HIVE-6921 Project: Hive Issue Type: Bug Components: Authorization, Indexing, Security Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6921) index creation fails with sql std auth turned on
[ https://issues.apache.org/jira/browse/HIVE-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6921: --- Attachment: HIVE-6921.patch index creation fails with sql std auth turned on - Key: HIVE-6921 URL: https://issues.apache.org/jira/browse/HIVE-6921 Project: Hive Issue Type: Bug Components: Authorization, Indexing, Security Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6921.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6921) index creation fails with sql std auth turned on
[ https://issues.apache.org/jira/browse/HIVE-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6921: --- Status: Patch Available (was: Open) index creation fails with sql std auth turned on - Key: HIVE-6921 URL: https://issues.apache.org/jira/browse/HIVE-6921 Project: Hive Issue Type: Bug Components: Authorization, Indexing, Security Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6921.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 20426: index creation fails with std sql auth turned on
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20426/ --- Review request for hive and Thejas Nair. Bugs: HIVE-6921 https://issues.apache.org/jira/browse/HIVE-6921 Repository: hive-git Description --- Issue was owner and default grants were not set for underlying table of index. Diffs - ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 254e2b0 ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java ae05f04 ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java ae3c11b ql/src/test/queries/clientpositive/authorization_index.q PRE-CREATION ql/src/test/results/clientpositive/authorization_index.q.out PRE-CREATION Diff: https://reviews.apache.org/r/20426/diff/ Testing --- Added new test and ran existing auth tests. Thanks, Ashutosh Chauhan
[jira] [Commented] (HIVE-5775) Introduce Cost Based Optimizer to Hive
[ https://issues.apache.org/jira/browse/HIVE-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971970#comment-13971970 ] Laljo John Pullokkaran commented on HIVE-5775: -- I don't think this should go into trunk yet. I need to remove some of the limitations (outer join, union) before it can go into trunk. A better algorithm for join permutations is also being worked on. Introduce Cost Based Optimizer to Hive -- Key: HIVE-5775 URL: https://issues.apache.org/jira/browse/HIVE-5775 Project: Hive Issue Type: New Feature Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: CBO-2.pdf, HIVE-5775.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5072) [WebHCat]Enable directly invoke Sqoop job through Templeton
[ https://issues.apache.org/jira/browse/HIVE-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971971#comment-13971971 ] Shuaishuai Nie commented on HIVE-5072: -- Hi [~ekoifman], the way I pass 'user.name' in the new test is the same as in the other tests in jobsubmission.conf. Can you point me to the JIRA that deprecates the user.name parameter so that I can get a better understanding of this issue? For version/sqoop, I think it is similar to version/pig here. Given that Sqoop and Pig are projects separate from WebHCat, and WebHCat doesn't have dependencies on them, we cannot get the version of Pig or Sqoop the same way we do with Hive and Hadoop. So I think for now we can treat Sqoop the same as Pig here until we have a better way to determine the version of these projects in the code. [WebHCat]Enable directly invoke Sqoop job through Templeton --- Key: HIVE-5072 URL: https://issues.apache.org/jira/browse/HIVE-5072 Project: Hive Issue Type: Improvement Components: WebHCat Affects Versions: 0.12.0 Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-5072.1.patch, HIVE-5072.2.patch, HIVE-5072.3.patch, HIVE-5072.4.patch, Templeton-Sqoop-Action.pdf Now it is hard to invoke a Sqoop job through Templeton. The only way is to use the classpath jar generated by a Sqoop job and use the jar delegator in Templeton. We should implement a Sqoop delegator to enable directly invoking Sqoop jobs through Templeton. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: [VOTE] Apache Hive 0.13.0 Release Candidate 2
+1 Compiled sources and built package. Ran some basic tests. Looks good. On Wed, Apr 16, 2014 at 10:38 AM, Thejas Nair the...@hortonworks.com wrote: On second thoughts let's go ahead with this RC. We had several cycles already for 0.13. I can include this in a 0.13.1, which we can hopefully get out in a week or so after 0.13.0. On Wed, Apr 16, 2014 at 7:44 AM, Thejas Nair the...@hortonworks.com wrote: I found a major issue in the working of SQL standard auth - HIVE-6919. It has a fix, which was reviewed yesterday night, and I have also run the tests. Since SQL standard auth is one of the major new features in this release, I think it would make sense to roll out another RC with this fix. Thoughts? Sorry about this late find! On Tue, Apr 15, 2014 at 7:02 PM, Gunther Hagleitner ghagleit...@hortonworks.com wrote: +1 - Verified checksums and signatures - Compiled source and ran partial unit tests - Installed both binary and hive built from source on cluster - Ran a number of test queries without any problems on both Thanks, Gunther. On Tue, Apr 15, 2014 at 2:06 PM, Harish Butani rhbut...@apache.org wrote: Apache Hive 0.13.0 Release Candidate 2 is available here: http://people.apache.org/~rhbutani/hive-0.13.0-candidate-2 Maven artifacts are available here: https://repository.apache.org/content/repositories/orgapachehive-1011 Source tag for RCN is at: https://svn.apache.org/repos/asf/hive/tags/release-0.13.0-rc2/ Voting will conclude in 72 hours. Hive PMC Members: Please test and vote. Thanks. -- CONFIDENTIALITY NOTICE NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited.
If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
[jira] [Commented] (HIVE-5072) [WebHCat]Enable directly invoke Sqoop job through Templeton
[ https://issues.apache.org/jira/browse/HIVE-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971986#comment-13971986 ] Eugene Koifman commented on HIVE-5072: -- It's not the 'user.name' parameter that is deprecated, it's how you pass it in using a Form parameter. You should pass it as a query parameter instead. Here is the ticket HIVE-6576. Just because version/pig is not finished, it doesn't mean it should not be done for other components. Sqoop provides a way to get the version (https://sqoop.apache.org/docs/1.99.1/CommandLineClient.html#show-version-function) [WebHCat]Enable directly invoke Sqoop job through Templeton --- Key: HIVE-5072 URL: https://issues.apache.org/jira/browse/HIVE-5072 Project: Hive Issue Type: Improvement Components: WebHCat Affects Versions: 0.12.0 Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-5072.1.patch, HIVE-5072.2.patch, HIVE-5072.3.patch, HIVE-5072.4.patch, Templeton-Sqoop-Action.pdf Now it is hard to invoke a Sqoop job through templeton. The only way is to use the classpath jar generated by a sqoop job and use the jar delegator in Templeton. We should implement Sqoop Delegator to enable directly invoke Sqoop job through Templeton. -- This message was sent by Atlassian JIRA (v6.2#6252)
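Passing user.name as a query parameter, as suggested in the comment above, just means appending it to the request URL rather than sending it in the request body. A hedged sketch of the idea; the host, port, and /templeton/v1/sqoop endpoint path are illustrative assumptions, not taken from the patch:

```java
// Hedged sketch: build a WebHCat request URL with user.name carried as a
// query parameter rather than a form parameter. Host, port, and endpoint
// path are assumptions for illustration.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class WebHCatUrl {
    static String sqoopSubmitUrl(String hostPort, String user)
            throws UnsupportedEncodingException {
        // user.name travels in the query string, independent of whatever
        // job definition the request body carries.
        return "http://" + hostPort + "/templeton/v1/sqoop?user.name="
                + URLEncoder.encode(user, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sqoopSubmitUrl("localhost:50111", "hcat"));
        // prints http://localhost:50111/templeton/v1/sqoop?user.name=hcat
    }
}
```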
Re: Review Request 20399: Invalid column access info for partitioned table
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20399/#review40603 --- Patch is only adding partition columns from where clause not from select list. ql/src/test/results/clientpositive/column_access_stats.q.out https://reviews.apache.org/r/20399/#comment73629 ds,hr is missing ql/src/test/results/clientpositive/column_access_stats.q.out https://reviews.apache.org/r/20399/#comment73630 ds is missing - Ashutosh Chauhan On April 16, 2014, 1:07 a.m., Navis Ryu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20399/ --- (Updated April 16, 2014, 1:07 a.m.) Review request for hive. Bugs: HIVE-6910 https://issues.apache.org/jira/browse/HIVE-6910 Repository: hive-git Description --- From http://www.mail-archive.com/user@hive.apache.org/msg11324.html neededColumnIDs in TS is only for non-partition columns. But ColumnAccessAnalyzer is calculating it on all columns. Diffs - ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java 8c4b891 ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java f285312 ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java 6bdf394 ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnAccessAnalyzer.java 74b595a ql/src/java/org/apache/hadoop/hive/ql/parse/ProcessAnalyzeTable.java c26be3c ql/src/java/org/apache/hadoop/hive/ql/parse/PrunedPartitionList.java d3268dd ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java a7cec5d ql/src/test/queries/clientpositive/column_access_stats.q fbf8bba ql/src/test/results/clientpositive/column_access_stats.q.out 7eee4ba Diff: https://reviews.apache.org/r/20399/diff/ Testing --- Thanks, Navis Ryu
[jira] [Updated] (HIVE-6910) Invalid column access info for partitioned table
[ https://issues.apache.org/jira/browse/HIVE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6910: --- Status: Open (was: Patch Available) Seems like the patch is only adding partition columns from the where clause, not from the select list. See my comments on RB. Invalid column access info for partitioned table Key: HIVE-6910 URL: https://issues.apache.org/jira/browse/HIVE-6910 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.12.0, 0.11.0, 0.13.0 Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6910.1.patch.txt, HIVE-6910.2.patch.txt From http://www.mail-archive.com/user@hive.apache.org/msg11324.html neededColumnIDs in TS is only for non-partition columns. But ColumnAccessAnalyzer is calculating it on all columns. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6913) Hive unable to find the hashtable file during complex multi-staged map join
[ https://issues.apache.org/jira/browse/HIVE-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971999#comment-13971999 ] Xuefu Zhang commented on HIVE-6913: --- +1 Hive unable to find the hashtable file during complex multi-staged map join --- Key: HIVE-6913 URL: https://issues.apache.org/jira/browse/HIVE-6913 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-6913.patch, HIVE-6913.patch If a query has multiple mapjoins and one of the tables to be mapjoined is empty, the query can result in a 'no such file or directory' error when looking for the hashtable. This is because when we generate a dummy hash table, we do not close the TableScan (TS) operator for that table. Additionally, HashTableSinkOperator (HTSO) outputs its hash tables in the closeOp method. However, when close is called on HTSO we check to ensure that all parents are closed: https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java#L333 which is not true in this case, because the TS operator for the empty table was never closed. -- This message was sent by Atlassian JIRA (v6.2#6252)
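The close-ordering invariant described in that issue can be sketched with toy classes (these are not Hive's actual Operator API): a child's close is a no-op until every parent has closed, so if the TableScan over the empty table never closes, the HashTableSinkOperator's closeOp, and with it the hashtable dump, never runs.

```java
// Toy sketch (assumed names, not Hive's classes) of the invariant behind
// HIVE-6913: an operator only completes close() after all parents closed.
import java.util.ArrayList;
import java.util.List;

public class CloseOrdering {
    static class Op {
        final String name;
        final List<Op> parents = new ArrayList<>();
        boolean closed = false;

        Op(String name) { this.name = name; }

        boolean allParentsClosed() {
            for (Op p : parents) {
                if (!p.closed) return false;
            }
            return true;
        }

        // Mirrors the check in Operator.java: closeOp-style work (e.g. the
        // hashtable dump) is skipped while any parent remains open.
        boolean close() {
            if (!allParentsClosed()) return false;
            closed = true;
            return true;
        }
    }

    public static void main(String[] args) {
        Op ts = new Op("TS");     // TableScan over the empty table
        Op htso = new Op("HTSO"); // HashTableSinkOperator
        htso.parents.add(ts);
        System.out.println(htso.close()); // false: TS never closed, no hashtable written
        ts.closed = true;
        System.out.println(htso.close()); // true once the parent has closed
    }
}
```

With the fix, the TS operator for the empty table is closed as well, so the child's close actually runs and the hashtable file gets written.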
[jira] [Commented] (HIVE-6913) Hive unable to find the hashtable file during complex multi-staged map join
[ https://issues.apache.org/jira/browse/HIVE-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972015#comment-13972015 ] Hive QA commented on HIVE-6913: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640498/HIVE-6913.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/13/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/13/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640498 Hive unable to find the hashtable file during complex multi-staged map join --- Key: HIVE-6913 URL: https://issues.apache.org/jira/browse/HIVE-6913 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-6913.patch, HIVE-6913.patch If a query has multiple mapjoins and one of the tables to be mapjoined is empty, the query can result in a no such file or directory when looking for the hashtable. This is because when we generate a dummy hash table, we do not close the TableScan (TS) operator for that table. Additionally, HashTableSinkOperator (HTSO) outputs it's hash tables in the closeOp method. However, when close is called on HTSO we check to ensure that all parents are closed: https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java#L333 which is not true on this case, because the TS operator for the empty table was never closed. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972085#comment-13972085 ] Lefty Leverenz commented on HIVE-6549: -- My github knowledge is rusty and never was much to begin with, so what would the link be? For svn, I've found this: http://svn.apache.org/repos/asf/hive/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml (or equivalent files in branches, such as http://svn.apache.org/repos/asf/hive/branches/branch-0.12/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml). Is that what you mean? remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch this property is no longer used also removed corresponding AppConfig.TEMPLETON_JAR_NAME hcatalog/bin/hive-config.sh is not used NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972088#comment-13972088 ] Eugene Koifman commented on HIVE-6549: -- I think http://svn.apache.org/repos/asf/hive/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml is perfect remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch this property is no longer used also removed corresponding AppConfig.TEMPLETON_JAR_NAME hcatalog/bin/hive-config.sh is not used NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5538) Turn on vectorization by default.
[ https://issues.apache.org/jira/browse/HIVE-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972096#comment-13972096 ] Hari Sankar Sivarama Subramaniyan commented on HIVE-5538: - [~jnp] It might be worth running the tests again, given that a lot of vectorization-related issues have been fixed over the past few months; a new run might expose any remaining issues with vectorization. Turn on vectorization by default. - Key: HIVE-5538 URL: https://issues.apache.org/jira/browse/HIVE-5538 Project: Hive Issue Type: Sub-task Reporter: Jitendra Nath Pandey Assignee: Jitendra Nath Pandey Attachments: HIVE-5538.1.patch Vectorization should be turned on by default, so that users don't have to specifically enable vectorization. The vectorization code validates and ensures that a query falls back to row mode if it is not supported on the vectorized code path. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6891) Alter rename partition Perm inheritance and general partition/table group inheritance
[ https://issues.apache.org/jira/browse/HIVE-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasad Mujumdar updated HIVE-6891: -- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks [~szehon] for the contribution, thanks [~brocknoland] for the review. Alter rename partition Perm inheritance and general partition/table group inheritance - Key: HIVE-6891 URL: https://issues.apache.org/jira/browse/HIVE-6891 Project: Hive Issue Type: Bug Reporter: Szehon Ho Assignee: Szehon Ho Fix For: 0.14.0 Attachments: HIVE-6891.2.patch, HIVE-6891.3.patch, HIVE-6891.4.patch, HIVE-6891.patch Found this issue while looking at the method mentioned by HIVE-6648. 'alter table .. partition .. rename to ..' and other commands calling Warehouse.mkdirs() do not inherit permissions on the partition directories (and consequently the data) when hive.warehouse.subdir.inherit.perms is set. Also, in these directory-creation scenarios, the group is not inherited. Data files already inherit the group, per HIVE-3756. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972098#comment-13972098 ] Lefty Leverenz commented on HIVE-6549: -- Good, will do. In the meantime I found this on github: https://github.com/apache/hive/blob/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml (or for branch-0.12 it's https://github.com/apache/hive/blob/branch-0.12/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml). Should we give both github and svn, in trunk? Or mention how to find the branches too? remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch this property is no longer used also removed corresponding AppConfig.TEMPLETON_JAR_NAME hcatalog/bin/hive-config.sh is not used NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972109#comment-13972109 ] Eugene Koifman commented on HIVE-6549: -- github repo is a mirror of the svn repo so one should be sufficient. Now that you mention it, I think http://svn.apache.org/repos/asf/hive/branches/branch-0.12/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml would be better (or perhaps 0.13) since 'trunk' is the version currently in development. I think users will be able to look at the URL and figure out how to change it to get to a different branch. remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch this property is no longer used also removed corresponding AppConfig.TEMPLETON_JAR_NAME hcatalog/bin/hive-config.sh is not used NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6903) Change default value of hive.metastore.execute.setugi to true
[ https://issues.apache.org/jira/browse/HIVE-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972114#comment-13972114 ] Hive QA commented on HIVE-6903: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640507/HIVE-6903.1.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5401 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/14/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/14/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640507 Change default value of hive.metastore.execute.setugi to true - Key: HIVE-6903 URL: https://issues.apache.org/jira/browse/HIVE-6903 Project: Hive Issue Type: Task Components: Metastore Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6903.1.patch, HIVE-6903.patch Since its introduction in HIVE-2616, I haven't seen any bug reported for it, only grief from users who expect the system to work as if this were true by default. -- This message was sent by Atlassian JIRA (v6.2#6252)
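For anyone tracking the default flip discussed above, the flag can be pinned explicitly in hive-site.xml so behavior does not change underneath an upgrade. A minimal sketch, assuming the standard Hadoop configuration XML layout (the semantics in the comment are a paraphrase, not quoted from the issue):

{code}
<!-- hive-site.xml: pin hive.metastore.execute.setugi explicitly.
     false = metastore DFS operations run as the metastore server user (old default);
     true  = metastore executes using the client's reported user/group. -->
<property>
  <name>hive.metastore.execute.setugi</name>
  <value>false</value>
</property>
{code}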
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972122#comment-13972122 ] Szehon Ho commented on HIVE-6783: - Hi [~rhbutani], the Parquet community is wondering if this can also be in 0.13.1, or some release of 0.13? That would be helpful, as this fixes Parquet format compatibility between Hive and Pig. Thanks. Incompatible schema for maps between parquet-hive and parquet-pig - Key: HIVE-6783 URL: https://issues.apache.org/jira/browse/HIVE-6783 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 0.13.0 Reporter: Tongjie Chen Fix For: 0.14.0 Attachments: HIVE-6783.1.patch.txt, HIVE-6783.2.patch.txt, HIVE-6783.3.patch.txt, HIVE-6783.4.patch.txt see also the following parquet issue: https://github.com/Parquet/parquet-mr/issues/290 The schema written for maps isn't compatible between Hive and Pig. This means any files written in one cannot be properly read in the other. More specifically, for the same map column c1, parquet-pig generates schema:
{noformat}
message pig_schema {
  optional group c1 (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (UTF8);
      optional binary value;
    }
  }
}
{noformat}
while parquet-hive generates schema:
{noformat}
message hive_schema {
  optional group c1 (MAP_KEY_VALUE) {
    repeated group map {
      required binary key;
      optional binary value;
    }
  }
}
{noformat}
-- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 20435: HIVE-6916 - Export/import inherit permissions from parent directory
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20435/ --- Review request for hive and Brock Noland. Repository: hive-git Description --- This fixes the CopyTask (used by export/import) to also do the permission inheritance semantics, if the flag is on. Like elsewhere in the code, this is using the HDFS shell because it allows specification of recursion (whereas I could not find it in the HDFS-API). Diffs - common/src/java/org/apache/hadoop/hive/common/FileUtils.java b36a016 itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestFolderPermissions.java a635bb0 ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java b429a58 Diff: https://reviews.apache.org/r/20435/diff/ Testing --- Added a unit test for the same. Thanks, Szehon Ho
[jira] [Updated] (HIVE-6916) Export/import inherit permissions from parent directory
[ https://issues.apache.org/jira/browse/HIVE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6916: Status: Patch Available (was: Open) Export/import inherit permissions from parent directory --- Key: HIVE-6916 URL: https://issues.apache.org/jira/browse/HIVE-6916 Project: Hive Issue Type: Bug Components: Security Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-6916.patch Export table into an external location and importing into hive, should set the table to have the permission of the parent directory, if the flag hive.warehouse.subdir.inherit.perms is set. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6916) Export/import inherit permissions from parent directory
[ https://issues.apache.org/jira/browse/HIVE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6916: Attachment: HIVE-6916.patch Export/import inherit permissions from parent directory --- Key: HIVE-6916 URL: https://issues.apache.org/jira/browse/HIVE-6916 Project: Hive Issue Type: Bug Components: Security Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-6916.patch Export table into an external location and importing into hive, should set the table to have the permission of the parent directory, if the flag hive.warehouse.subdir.inherit.perms is set. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-1643) support range scans and non-key columns in HBase filter pushdown
[ https://issues.apache.org/jira/browse/HIVE-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972168#comment-13972168 ] Craig Condit commented on HIVE-1643: Is this issue still being worked? Would love to see this in 0.13... support range scans and non-key columns in HBase filter pushdown Key: HIVE-1643 URL: https://issues.apache.org/jira/browse/HIVE-1643 Project: Hive Issue Type: Improvement Components: HBase Handler Affects Versions: 0.9.0 Reporter: John Sichi Assignee: bharath v Labels: patch Attachments: HIVE-1643.patch, Hive-1643.2.patch, hbase_handler.patch HIVE-1226 added support for WHERE rowkey=3. We would like to support WHERE rowkey BETWEEN 10 and 20, as well as predicates on non-rowkeys (plus conjunctions etc). Non-rowkey conditions can't be used to filter out entire ranges, but they can be used to push the per-row filter processing as far down as possible. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6430) MapJoin hash table has large memory overhead
[ https://issues.apache.org/jira/browse/HIVE-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6430: --- Attachment: HIVE-6430.07.patch Patch that fixes some issues; the main thing is that the Murmur hash from Guava is now used, since hashing behavior was very bad with the previous hash code method and perf suffered a lot. There's also an issue with the previously used expand method. To make expand fast, the hash is now stored fully. This is not necessary for anything else, so it's a tradeoff: more memory (+4 bytes per key) or an expensive rehash. We may revisit it later. Fast paths were added to WriteBuffers for the majority of cases where whatever we are doing is all in one buffer. There's some bug in there that causes some queries to fail; I'll investigate. I want to upload the patch with what is done: the queries with large map joins that do work now run approximately as fast as before (will measure more precisely later) in a fraction of the memory. MapJoin hash table has large memory overhead Key: HIVE-6430 URL: https://issues.apache.org/jira/browse/HIVE-6430 Project: Hive Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6430.01.patch, HIVE-6430.02.patch, HIVE-6430.03.patch, HIVE-6430.04.patch, HIVE-6430.05.patch, HIVE-6430.06.patch, HIVE-6430.07.patch, HIVE-6430.patch Right now, in some queries, I see that storing e.g. 4 ints (2 for key and 2 for row) can take several hundred bytes, which is ridiculous. I am reducing the size of MJKey and MJRowContainer in other jiras, but in general we don't need to have java hash table there. We can either use primitive-friendly hashtable like the one from HPPC (Apache-licenced), or some variation, to map primitive keys to single row storage structure without an object per row (similar to vectorization). -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 18936: HIVE-6430 MapJoin hash table has large memory overhead
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18936/ --- (Updated April 17, 2014, 1:07 a.m.) Review request for hive, Gopal V and Gunther Hagleitner. Repository: hive-git Description --- See JIRA Diffs (updated) - common/src/java/org/apache/hadoop/hive/conf/HiveConf.java da45f1a hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java 142bfd8 ql/src/java/org/apache/hadoop/hive/ql/Driver.java 370f6e4 ql/src/java/org/apache/hadoop/hive/ql/debug/Utils.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java 2b1438d ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java 1104a2b ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/AbstractMapJoinTableContainer.java 8854b19 ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HashMapWrapper.java 9df425b ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinBytesTableContainer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinKey.java 64f0be2 ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinPersistableTableContainer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinRowContainer.java 008a8db ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainer.java 988959f ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java 55b7415 ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HashTableLoader.java e392592 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java eef7656 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedColumnarSerDe.java d4be78d ql/src/java/org/apache/hadoop/hive/ql/udf/UDFToString.java 118b339 
ql/src/test/org/apache/hadoop/hive/ql/exec/persistence/TestBytesBytesMultiHashMap.java PRE-CREATION ql/src/test/org/apache/hadoop/hive/ql/exec/persistence/TestMapJoinEqualityTableContainer.java 65e3779 ql/src/test/org/apache/hadoop/hive/ql/exec/persistence/TestMapJoinTableContainer.java 755d783 ql/src/test/queries/clientpositive/mapjoin_decimal.q b65a7be ql/src/test/queries/clientpositive/mapjoin_mapjoin.q 1eb95f6 ql/src/test/results/clientpositive/mapjoin_mapjoin.q.out 8350670 ql/src/test/results/clientpositive/tez/mapjoin_decimal.q.out 3c55b5c ql/src/test/results/clientpositive/tez/mapjoin_mapjoin.q.out 284cc03 serde/src/java/org/apache/hadoop/hive/serde2/ByteStream.java 73d9b29 serde/src/java/org/apache/hadoop/hive/serde2/WriteBuffers.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarSerDe.java 5870884 serde/src/java/org/apache/hadoop/hive/serde2/columnar/LazyBinaryColumnarSerDe.java bab505e serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java 6f344bb serde/src/java/org/apache/hadoop/hive/serde2/io/DateWritable.java 1f4ccdd serde/src/java/org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.java a99c7b4 serde/src/java/org/apache/hadoop/hive/serde2/io/TimestampWritable.java 435d6c6 serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java 82c1263 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java b188c3f serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java caf3517 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java 6c14081 serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorConverter.java 06d5c5e serde/src/test/org/apache/hadoop/hive/serde2/lazy/TestLazyPrimitive.java 868dd4c serde/src/test/org/apache/hadoop/hive/serde2/thrift_test/CreateSequenceFile.java 1fb49e5 Diff: https://reviews.apache.org/r/18936/diff/ Testing --- Thanks, Sergey Shelukhin
Re: Review Request 20096: HIVE-6835: Reading of partitioned Avro data fails if partition schema does not match table schema
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20096/ --- (Updated April 17, 2014, 1:14 a.m.) Review request for hive. Changes --- Addressed Ashutosh's comments in HIVE-6835. Added the constant to serde.thrift and used the Thrift compiler to generate all the language-specific bindings. Repository: hive-git Description --- The problem occurs when you store the avro.schema.(literal|url) in the SERDEPROPERTIES instead of the TBLPROPERTIES, add a partition, change the table's schema, and then try reading from the old partition. I fixed this problem by passing the table properties to the partition with a table. prefix, and changing the Avro SerDe to always use the table properties when available. Diffs (updated) - ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java 43cef5c ql/src/test/queries/clientpositive/avro_partitioned.q 6fe5117 ql/src/test/results/clientpositive/avro_partitioned.q.out 644716d serde/if/serde.thrift 31c87ee serde/src/gen/thrift/gen-cpp/serde_constants.h d56c917 serde/src/gen/thrift/gen-cpp/serde_constants.cpp 54503e3 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/serdeConstants.java 515cf25 serde/src/gen/thrift/gen-php/org/apache/hadoop/hive/serde/Types.php 837dd11 serde/src/gen/thrift/gen-py/org_apache_hadoop_hive_serde/constants.py 8eac87d serde/src/gen/thrift/gen-rb/serde_constants.rb ed86522 serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java 9d58d13 serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerdeUtils.java 67d5570 Diff: https://reviews.apache.org/r/20096/diff/ Testing --- Added test cases Thanks, Anthony Hsu
[jira] [Updated] (HIVE-6835) Reading of partitioned Avro data fails if partition schema does not match table schema
[ https://issues.apache.org/jira/browse/HIVE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anthony Hsu updated HIVE-6835: -- Attachment: HIVE-6835.3.patch Thanks for catching this, Ashutosh. My bad for not noticing I was modifying a generated file. I have updated my [Review Board request|https://reviews.apache.org/r/20096/] and also uploaded a new patch. Reading of partitioned Avro data fails if partition schema does not match table schema -- Key: HIVE-6835 URL: https://issues.apache.org/jira/browse/HIVE-6835 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6835.1.patch, HIVE-6835.2.patch, HIVE-6835.3.patch To reproduce:
{code}
create table testarray (a array<string>);
load data local inpath '/home/ahsu/test/array.txt' into table testarray;

# create partitioned Avro table with one array column
create table avroarray partitioned by (y string)
row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
with serdeproperties ('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record","fields":[{"name":"a","type":{"type":"array","items":"string"}}]}')
STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat';

insert into table avroarray partition(y=1) select * from testarray;

# add an int column with a default value of 0
alter table avroarray set serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
with serdeproperties('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record","fields":[{"name":"intfield","type":"int","default":0},{"name":"a","type":{"type":"array","items":"string"}}]}');

# fails with ClassCastException
select * from avroarray;
{code}
The select * fails with:
{code}
Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6921) index creation fails with sql std auth turned on
[ https://issues.apache.org/jira/browse/HIVE-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972215#comment-13972215 ] Hive QA commented on HIVE-6921: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640536/HIVE-6921.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5406 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_map_operators {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/15/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/15/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640536 index creation fails with sql std auth turned on - Key: HIVE-6921 URL: https://issues.apache.org/jira/browse/HIVE-6921 Project: Hive Issue Type: Bug Components: Authorization, Indexing, Security Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6921.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 20096: HIVE-6835: Reading of partitioned Avro data fails if partition schema does not match table schema
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20096/#review40626 --- Ship it! Ship It! - Carl Steinbach On April 17, 2014, 1:14 a.m., Anthony Hsu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20096/ --- (Updated April 17, 2014, 1:14 a.m.) Review request for hive. Repository: hive-git Description --- The problem occurs when you store the avro.schema.(literal|url) in the SERDEPROPERTIES instead of the TBLPROPERTIES, add a partition, change the table's schema, and then try reading from the old partition. I fixed this problem by passing the table properties to the partition with a table. prefix, and changing the Avro SerDe to always use the table properties when available. Diffs - ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java 43cef5c ql/src/test/queries/clientpositive/avro_partitioned.q 6fe5117 ql/src/test/results/clientpositive/avro_partitioned.q.out 644716d serde/if/serde.thrift 31c87ee serde/src/gen/thrift/gen-cpp/serde_constants.h d56c917 serde/src/gen/thrift/gen-cpp/serde_constants.cpp 54503e3 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/serdeConstants.java 515cf25 serde/src/gen/thrift/gen-php/org/apache/hadoop/hive/serde/Types.php 837dd11 serde/src/gen/thrift/gen-py/org_apache_hadoop_hive_serde/constants.py 8eac87d serde/src/gen/thrift/gen-rb/serde_constants.rb ed86522 serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java 9d58d13 serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerdeUtils.java 67d5570 Diff: https://reviews.apache.org/r/20096/diff/ Testing --- Added test cases Thanks, Anthony Hsu
[jira] [Commented] (HIVE-6835) Reading of partitioned Avro data fails if partition schema does not match table schema
[ https://issues.apache.org/jira/browse/HIVE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972241#comment-13972241 ] Carl Steinbach commented on HIVE-6835: -- [~ashutoshc]: Thanks for catching the Thrift codegen problem. [~erwaman]: Updated patch looks good. +1 Reading of partitioned Avro data fails if partition schema does not match table schema -- Key: HIVE-6835 URL: https://issues.apache.org/jira/browse/HIVE-6835 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6835.1.patch, HIVE-6835.2.patch, HIVE-6835.3.patch To reproduce:
{code}
create table testarray (a array<string>);
load data local inpath '/home/ahsu/test/array.txt' into table testarray;

# create partitioned Avro table with one array column
create table avroarray partitioned by (y string)
row format serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
with serdeproperties ('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record","fields":[{"name":"a","type":{"type":"array","items":"string"}}]}')
STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat';

insert into table avroarray partition(y=1) select * from testarray;

# add an int column with a default value of 0
alter table avroarray set serde 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
with serdeproperties('avro.schema.literal'='{"namespace":"test","name":"avroarray","type":"record","fields":[{"name":"intfield","type":"int","default":0},{"name":"a","type":{"type":"array","items":"string"}}]}');

# fails with ClassCastException
select * from avroarray;
{code}
The select * fails with:
{code}
Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6916) Export/import inherit permissions from parent directory
[ https://issues.apache.org/jira/browse/HIVE-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972270#comment-13972270 ] Hive QA commented on HIVE-6916: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12640558/HIVE-6916.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5406 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketizedhiveinputformat {noformat} Test results: http://bigtop01.cloudera.org:8080/job/precommit-hive/16/testReport Console output: http://bigtop01.cloudera.org:8080/job/precommit-hive/16/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12640558 Export/import inherit permissions from parent directory --- Key: HIVE-6916 URL: https://issues.apache.org/jira/browse/HIVE-6916 Project: Hive Issue Type: Bug Components: Security Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-6916.patch Export table into an external location and importing into hive, should set the table to have the permission of the parent directory, if the flag hive.warehouse.subdir.inherit.perms is set. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6922) NullPointerException in collect_set() UDAF
Sun Rui created HIVE-6922: - Summary: NullPointerException in collect_set() UDAF Key: HIVE-6922 URL: https://issues.apache.org/jira/browse/HIVE-6922 Project: Hive Issue Type: Bug Components: UDF Reporter: Sun Rui Assignee: Sun Rui Steps to reproduce the bug:
{noformat}
create table temp(key int, value string);
-- leave the table empty
select collect_set(key) from temp where key=0;

Error: java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:326)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:471)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577)
	at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:318)
	... 7 more
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMkCollectionEvaluator.merge(GenericUDAFMkCollectionEvaluator.java:140)
	at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:186)
	at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1132)
	... 9 more
{noformat}
The root cause is that in GenericUDAFMkCollectionEvaluator.merge() partialResult could be null but is not validated before it is used.
{code}
List<Object> partialResult = (ArrayList<Object>) internalMergeOI.getList(partial);
for (Object i : partialResult) {
  putIntoCollection(i, myagg);
}
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
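The fix implied by the analysis above is a null guard before iterating over the partial aggregation. The following is a minimal, self-contained sketch of that guard; the class and method names here are illustrative stand-ins, not the actual Hive patch:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NullGuardDemo {
    // Mirrors the merge() fix: when the partial aggregation buffer is null
    // (e.g. the input group was empty), skip the merge instead of iterating.
    static void mergeInto(Set<Object> agg, List<Object> partialResult) {
        if (partialResult == null) {
            return; // nothing to merge for an empty partial
        }
        for (Object i : partialResult) {
            agg.add(i);
        }
    }

    public static void main(String[] args) {
        Set<Object> agg = new HashSet<>();
        mergeInto(agg, null); // previously this path would throw NullPointerException
        mergeInto(agg, Arrays.asList(1, 2, 2));
        System.out.println(agg.size()); // 2 distinct values collected
    }
}
{code}

The same shape applies inside GenericUDAFMkCollectionEvaluator.merge(): guard the list returned by internalMergeOI.getList(partial) before looping.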
[jira] [Commented] (HIVE-6430) MapJoin hash table has large memory overhead
[ https://issues.apache.org/jira/browse/HIVE-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13972288#comment-13972288 ] Gopal V commented on HIVE-6430: --- This is an excellent find! The hash collision scenario seems to be affecting the regular hashmap cases as well. I flipped the MapJoinKeyBytes::hashCode() over to an inlined Murmur hash, which resulted in a ~2 second savings in my map tasks. MapJoin hash table has large memory overhead Key: HIVE-6430 URL: https://issues.apache.org/jira/browse/HIVE-6430 Project: Hive Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6430.01.patch, HIVE-6430.02.patch, HIVE-6430.03.patch, HIVE-6430.04.patch, HIVE-6430.05.patch, HIVE-6430.06.patch, HIVE-6430.07.patch, HIVE-6430.patch Right now, in some queries, I see that storing e.g. 4 ints (2 for key and 2 for row) can take several hundred bytes, which is ridiculous. I am reducing the size of MJKey and MJRowContainer in other jiras, but in general we don't need to have java hash table there. We can either use primitive-friendly hashtable like the one from HPPC (Apache-licenced), or some variation, to map primitive keys to single row storage structure without an object per row (similar to vectorization). -- This message was sent by Atlassian JIRA (v6.2#6252)
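The observation above hinges on bit-mixing quality: under a weak hashCode(), similar keys cluster into the same buckets and probe chains grow. A minimal sketch of the 32-bit Murmur3 finalizer, the avalanche step commonly inlined for exactly this purpose (illustrative only; not the exact code used in the HIVE-6430 patch):

{code}
public class MurmurMixDemo {
    // 32-bit Murmur3 finalization mix: spreads every input bit
    // across the whole output word, so adjacent keys stop clustering.
    static int fmix32(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    public static void main(String[] args) {
        // Adjacent keys land in very different buckets after mixing.
        int mask = (1 << 10) - 1; // 1024 buckets
        System.out.println((fmix32(1) & mask) + " vs " + (fmix32(2) & mask));
    }
}
{code}

The mix is a bijection on 32-bit ints, so it adds no collisions of its own; it only redistributes them.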
[jira] [Updated] (HIVE-6922) NullPointerException in collect_set() UDAF
[ https://issues.apache.org/jira/browse/HIVE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sun Rui updated HIVE-6922: -- Status: Patch Available (was: Open) NullPointerException in collect_set() UDAF -- Key: HIVE-6922 URL: https://issues.apache.org/jira/browse/HIVE-6922 Project: Hive Issue Type: Bug Components: UDF Reporter: Sun Rui Assignee: Sun Rui Attachments: HIVE-6922.patch Steps to reproduce the bug: {noformat} create table temp(key int, value string); -- leave the table empty select collect_set(key) from temp where key=0; Error: java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:326) at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:471) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141) at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577) at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:318) ... 7 more Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMkCollectionEvaluator.merge(GenericUDAFMkCollectionEvaluator.java:140) at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:186) at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1132) ... 
9 more {noformat} The root cause is that in GenericUDAFMkCollectionEvaluator.merge() partialResult could be null but is not validated before it is used. {code} List<Object> partialResult = (ArrayList<Object>) internalMergeOI.getList(partial); for (Object i : partialResult) { putIntoCollection(i, myagg); } {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6922) NullPointerException in collect_set() UDAF
[ https://issues.apache.org/jira/browse/HIVE-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sun Rui updated HIVE-6922: -- Attachment: HIVE-6922.patch NullPointerException in collect_set() UDAF -- Key: HIVE-6922 URL: https://issues.apache.org/jira/browse/HIVE-6922 Project: Hive Issue Type: Bug Components: UDF Reporter: Sun Rui Assignee: Sun Rui Attachments: HIVE-6922.patch Steps to reproduce the bug: {noformat} create table temp(key int, value string); -- leave the table empty select collect_set(key) from temp where key=0; Error: java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:326) at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:471) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141) at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577) at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:318) ... 7 more Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMkCollectionEvaluator.merge(GenericUDAFMkCollectionEvaluator.java:140) at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:186) at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1132) ... 
9 more {noformat} The root cause is that in GenericUDAFMkCollectionEvaluator.merge() partialResult could be null but is not validated before it is used. {code} List<Object> partialResult = (ArrayList<Object>) internalMergeOI.getList(partial); for (Object i : partialResult) { putIntoCollection(i, myagg); } {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 20435: HIVE-6916 - Export/import inherit permissions from parent directory
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/20435/ --- (Updated April 17, 2014, 4:58 a.m.) Review request for hive and Brock Noland. Bugs: HIVE-6916 https://issues.apache.org/jira/browse/HIVE-6916 Repository: hive-git Description --- This fixes the CopyTask (used by export/import) to also apply the permission inheritance semantics when the flag is on. Like elsewhere in the code, this uses the HDFS shell because it allows recursion to be specified (whereas I could not find that in the HDFS API). Diffs - common/src/java/org/apache/hadoop/hive/common/FileUtils.java b36a016 itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestFolderPermissions.java a635bb0 ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java b429a58 Diff: https://reviews.apache.org/r/20435/diff/ Testing --- Added a unit test for the same. Thanks, Szehon Ho
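The inheritance semantics being described can be sketched on a local POSIX filesystem: recursively stamp the parent directory's permission bits onto a target and everything under it. Hive's CopyTask does the HDFS equivalent (shelling out for the recursive form); the class and method names below are illustrative, not Hive's implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;
import java.util.stream.Stream;

public class InheritPermsSketch {
    // Copy the parent directory's permissions onto target and every path
    // beneath it (the "inherit permissions from parent" semantics).
    static void inheritFromParent(Path target) throws IOException {
        Set<PosixFilePermission> parentPerms =
                Files.getPosixFilePermissions(target.getParent());
        try (Stream<Path> paths = Files.walk(target)) {
            paths.forEach(p -> {
                try {
                    Files.setPosixFilePermissions(p, parentPerms);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path parent = Files.createTempDirectory("perm-demo");
        Files.setPosixFilePermissions(parent,
                PosixFilePermissions.fromString("rwxr-x---"));
        Path child = Files.createDirectories(parent.resolve("export/part"));
        inheritFromParent(parent.resolve("export"));
        // The nested directory now carries the parent's bits.
        System.out.println(PosixFilePermissions.toString(
                Files.getPosixFilePermissions(child)));
    }
}
```

On HDFS the per-path call is `FileSystem.setPermission`, which is why a recursive shell invocation (`-chmod -R`-style) is the convenient route the review description mentions.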
[jira] [Commented] (HIVE-6549) remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh
[ https://issues.apache.org/jira/browse/HIVE-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972311#comment-13972311 ] Lefty Leverenz commented on HIVE-6549: -- Here's what I put in the wiki after the table: [Default Values |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure#WebHCatConfigure-DefaultValues] {quote} Some of the default values for WebHCat configuration variables depend on the release number. For the default values in the Hive release you are using, see the webhcat-default.xml file. It can be found in the SVN repository at: http://svn.apache.org/repos/asf/hive/branches/branch-release_number/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml where release_number is 0.11, 0.12, and so on. (Prior to Hive 0.11, WebHCat was in the Apache incubator.) For example, the file for Hive 0.12 is at http://svn.apache.org/repos/asf/hive/branches/branch-0.12/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml. {quote} I'd like to include a link to the file in HCatalog 0.5.0 too, but don't know where to find it. I sent email to dev@hive yesterday and might open a JIRA, because lots of wiki links to the HCat 0.5.0 docs are broken. remove templeton.jar from webhcat-default.xml, remove hcatalog/bin/hive-config.sh - Key: HIVE-6549 URL: https://issues.apache.org/jira/browse/HIVE-6549 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Priority: Minor Attachments: HIVE-6549.patch This property is no longer used; the corresponding AppConfig.TEMPLETON_JAR_NAME is also removed. hcatalog/bin/hive-config.sh is not used. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)