[jira] [Commented] (HIVE-2752) Index names are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951781#comment-13951781 ]

Lefty Leverenz commented on HIVE-2752:
--------------------------------------

Updated in two wikidocs:
{quote}
In Hive 0.12.0 and earlier releases, the index name is case-sensitive for CREATE INDEX and DROP INDEX statements. However, ALTER INDEX requires an index name that was created with lowercase letters (see HIVE-2752). This bug is fixed in Hive 0.13.0 by making index names case-insensitive for all HiveQL statements. For releases prior to 0.13.0, the best practice is to use lowercase letters for all index names.
{quote}
* [DDL: Create/Drop/Alter Index|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/AlterIndex]
* [Indexing: Simple Examples|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Indexing#LanguageManualIndexing-SimpleExamples]

Index names are case sensitive
------------------------------

Key: HIVE-2752
URL: https://issues.apache.org/jira/browse/HIVE-2752
Project: Hive
Issue Type: Bug
Components: Indexing, Metastore, Query Processor
Affects Versions: 0.9.0
Reporter: Philip Tromans
Assignee: Navis
Priority: Minor
Fix For: 0.13.0
Attachments: HIVE-2752.1.patch.txt
Original Estimate: 4h
Remaining Estimate: 4h

The following script:
{code}
DROP TABLE IF EXISTS TestTable;
CREATE TABLE TestTable (a INT);
DROP INDEX IF EXISTS TestTableA_IDX ON TestTable;
CREATE INDEX TestTableA_IDX ON TABLE TestTable (a)
  AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
  WITH DEFERRED REBUILD;
ALTER INDEX TestTableA_IDX ON TestTable REBUILD;
{code}
results in the following exception:
{noformat}
MetaException(message:index testtablea_idx doesn't exist)
	at org.apache.hadoop.hive.metastore.ObjectStore.alterIndex(ObjectStore.java:1880)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$30.run(HiveMetaStore.java:1930)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$30.run(HiveMetaStore.java:1927)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:356)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_index(HiveMetaStore.java:1927)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_index(HiveMetaStoreClient.java:868)
	at org.apache.hadoop.hive.ql.metadata.Hive.alterIndex(Hive.java:398)
	at org.apache.hadoop.hive.ql.exec.DDLTask.alterIndex(DDLTask.java:902)
	at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:236)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:338)
	at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:436)
	at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:446)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:642)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
{noformat}
When you execute {{SHOW INDEXES ON TestTable;}}, you get:
{noformat}
TestTableA_IDX	testtable	a	default__testtable_testtablea_idx__	compact
{noformat}
so it looks like things don't get lowercased when they go into the metastore, but they do when the rebuild op is trying to execute.
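The shape of the bug can be illustrated with a minimal stand-in (this is illustrative Python, not Hive's actual metastore code): the index name is stored with its original casing at CREATE time, but the ALTER path lowercases the name before the lookup.

```python
# Illustrative sketch of the mismatch, NOT actual Hive code: store verbatim
# on create, lowercase on lookup, so mixed-case names appear to be missing.
store = {}

def create_index(name):
    store[name] = "index-metadata"        # stored with original casing

def alter_index(name):
    key = name.lower()                    # lookup lowercases first
    if key not in store:
        raise KeyError("index %s doesn't exist" % key)
    return store[key]

create_index("TestTableA_IDX")
try:
    alter_index("TestTableA_IDX")
except KeyError as e:
    print(e.args[0])    # index testtablea_idx doesn't exist
```

Lowercasing the name consistently on both paths (the 0.13.0 fix) makes the lookup succeed regardless of how the index was created.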
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6758) Beeline doesn't work with -e option when started in background
[ https://issues.apache.org/jira/browse/HIVE-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951794#comment-13951794 ]

Harsh J commented on HIVE-6758:
-------------------------------

Beeline is running into one of the SIGTTOU or SIGTTIN signals from the TTY.

Beeline doesn't work with -e option when started in background
--------------------------------------------------------------

Key: HIVE-6758
URL: https://issues.apache.org/jira/browse/HIVE-6758
Project: Hive
Issue Type: Improvement
Components: CLI
Affects Versions: 0.11.0
Reporter: Johndee Burks
Assignee: Xuefu Zhang

With the Hive CLI you could easily integrate its use into a script and background the process like this:
{code}
hive -e "some query"
{code}
Beeline does not run when you do the same, even with the -f switch.

-- This message was sent by Atlassian JIRA (v6.2#6252)
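A backgrounded process that reads its controlling terminal is stopped by SIGTTIN, which matches the symptom above. The usual workaround idea is to detach stdin before backgrounding (for beeline itself, one commonly suggested shape is `beeline -u <url> -e "<query>" </dev/null &`, untested here). The sketch below demonstrates the mechanism with `cat` as a stand-in for any stdin-reading process:

```python
# Sketch of the stdin-detach workaround; 'cat' stands in for a process
# (like beeline) that reads the terminal. With stdin pointed at /dev/null
# it exits cleanly instead of blocking or stopping on a TTY read.
import subprocess

proc = subprocess.Popen(["cat"], stdin=subprocess.DEVNULL,
                        stdout=subprocess.DEVNULL)
code = proc.wait()
print("exit code:", code)   # exit code: 0
```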
[jira] [Commented] (HIVE-5825) Case statement type checking too restrictive for parameterized types
[ https://issues.apache.org/jira/browse/HIVE-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951801#comment-13951801 ]

Jason Dere commented on HIVE-5825:
----------------------------------

Hi [~dougsedlak], yes, that query looks like it compiles/runs without error.

Case statement type checking too restrictive for parameterized types
--------------------------------------------------------------------

Key: HIVE-5825
URL: https://issues.apache.org/jira/browse/HIVE-5825
Project: Hive
Issue Type: Bug
Components: UDF
Reporter: Jason Dere
Assignee: Jason Dere
Fix For: 0.13.0
Attachments: HIVE-5825.1.patch

{code}
explain select case when (key = '0') then 123.456BD else 0.0BD end from src limit 2
{code}
{noformat}
FAILED: SemanticException [Error 10016]: Line 3:44 Argument type mismatch '0.0BD': The expression after ELSE should have the same type as those after THEN: decimal(6,3) is expected but decimal(1,0) is found
{noformat}
The return type checking is too strict and won't allow different decimal types to be returned if they are not the exact same type (precision/scale). There are similar issues with char/varchar length, but even in the general case it seems odd that you wouldn't be able to specify 1 and 0.0 in the same case statement. I would propose setting returnOIResolver so that it is able to convert the return values to a common type.

-- This message was sent by Atlassian JIRA (v6.2#6252)
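One plausible common-type rule for two decimal types (a hedged sketch, not the actual returnOIResolver code) is to keep enough digits for the widest integer part and the widest fractional part of either operand:

```python
# Illustrative common-type rule for decimal(p, s) pairs (assumption: not
# Hive's actual implementation): widen to cover both operands.
def common_decimal(p1, s1, p2, s2):
    scale = max(s1, s2)                  # widest fractional part
    int_digits = max(p1 - s1, p2 - s2)   # widest integer part
    return (int_digits + scale, scale)

# decimal(6,3) vs decimal(1,0) from the failing query resolve to a type
# that can represent both branches:
print(common_decimal(6, 3, 1, 0))   # (6, 3)
```

Under such a rule the THEN branch (decimal(6,3)) and ELSE branch (decimal(1,0)) would both be converted to decimal(6,3) instead of failing type checking.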
[jira] [Resolved] (HIVE-6047) Permanent UDFs in Hive
[ https://issues.apache.org/jira/browse/HIVE-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Dere resolved HIVE-6047.
------------------------------
Resolution: Fixed
Fix Version/s: 0.13.0

Yeah, this should be closed; thanks for the reminder, Lefty.

Permanent UDFs in Hive
----------------------

Key: HIVE-6047
URL: https://issues.apache.org/jira/browse/HIVE-6047
Project: Hive
Issue Type: Bug
Components: UDF
Reporter: Jason Dere
Assignee: Jason Dere
Fix For: 0.13.0
Attachments: PermanentFunctionsinHive.pdf, PermanentFunctionsinHive.pdf

Currently Hive only supports temporary UDFs, which must be re-registered when starting up a Hive session. Provide some support to register permanent UDFs with Hive.

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6460) Need new show functionality for transactions
[ https://issues.apache.org/jira/browse/HIVE-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951810#comment-13951810 ]

Lefty Leverenz commented on HIVE-6460:
--------------------------------------

Added three sections to the DDL wiki and revised another, based on the release note:
* [DDL: Alter Table/Partition Compact|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/PartitionCompact]
* [DDL: Show Locks (_revised_)|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowLocks]
* [DDL: Show Transactions|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowTransactions]
* [DDL: Show Compactions|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowCompactions]

The alter table syntax differs from other sections by spelling out the partition spec, but I find that helpful so didn't change it to partition_spec.

Still needed: revision of the Locking doc and a separate doc for ACID transactions. When the ACID doc exists, various links in the DDL doc can be switched from this jira to the new doc.

Need new show functionality for transactions
--------------------------------------------

Key: HIVE-6460
URL: https://issues.apache.org/jira/browse/HIVE-6460
Project: Hive
Issue Type: Sub-task
Components: SQL
Reporter: Alan Gates
Assignee: Alan Gates
Fix For: 0.13.0
Attachments: 6460.wip.patch, HIVE-6460.1.patch, HIVE-6460.3.patch, HIVE-6460.4.patch, HIVE-6460.5.patch, HIVE-6460.patch

With the addition of transactions and compactions for delta files, some new show commands are required:
* show transactions, to show currently open or aborted transactions
* show compactions, to show currently waiting or running compactions
* show locks needs to work with the new db style of locks as well.

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6763) HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception
[ https://issues.apache.org/jira/browse/HIVE-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951818#comment-13951818 ]

Hive QA commented on HIVE-6763:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637592/HIVE-6763.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5502 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
org.apache.hive.service.cli.thrift.TestThriftHttpCLIService.testExecuteStatementAsync
{noformat}

Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2028/testReport
Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2028/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637592

HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception
----------------------------------------------------------------------------------------------------------------------------------------------

Key: HIVE-6763
URL: https://issues.apache.org/jira/browse/HIVE-6763
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
Fix For: 0.13.0
Attachments: HIVE-6763.1.patch, HIVE-6763.2.patch

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6782) HiveServer2Concurrency issue when running with tez intermittently, throwing org.apache.tez.dag.api.SessionNotRunning: Application not running error
[ https://issues.apache.org/jira/browse/HIVE-6782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951854#comment-13951854 ]

Hive QA commented on HIVE-6782:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637599/HIVE-6782.2.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5502 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
{noformat}

Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2029/testReport
Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2029/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637599

HiveServer2 concurrency issue when running with tez intermittently, throwing org.apache.tez.dag.api.SessionNotRunning: Application not running error
----------------------------------------------------------------------------------------------------------------------------------------------------

Key: HIVE-6782
URL: https://issues.apache.org/jira/browse/HIVE-6782
Project: Hive
Issue Type: Bug
Components: Tez
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Fix For: 0.13.0, 0.14.0
Attachments: HIVE-6782.1.patch, HIVE-6782.2.patch

HiveServer2 concurrency is failing intermittently when using tez, throwing an org.apache.tez.dag.api.SessionNotRunning: Application not running error.

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954269#comment-13954269 ]

Hive QA commented on HIVE-6783:
-------------------------------

{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637609/HIVE-6783.1.patch.txt

{color:green}SUCCESS:{color} +1 5503 tests passed

Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2031/testReport
Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2031/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637609

Incompatible schema for maps between parquet-hive and parquet-pig
-----------------------------------------------------------------

Key: HIVE-6783
URL: https://issues.apache.org/jira/browse/HIVE-6783
Project: Hive
Issue Type: Bug
Components: File Formats
Affects Versions: 0.13.0
Reporter: Tongjie Chen
Fix For: 0.13.0
Attachments: HIVE-6783.1.patch.txt

See also the following parquet issue: https://github.com/Parquet/parquet-mr/issues/290

The schema written for maps isn't compatible between hive and pig. This means any files written in one cannot be properly read in the other. More specifically, for the same map column c1, parquet-pig generates the schema:
{noformat}
message pig_schema {
  optional group c1 (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (UTF8);
      optional binary value;
    }
  }
}
{noformat}
while parquet-hive generates the schema:
{noformat}
message hive_schema {
  optional group c1 (MAP_KEY_VALUE) {
    repeated group map {
      required binary key;
      optional binary value;
    }
  }
}
{noformat}

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6773) Update readme for ptest2 framework
[ https://issues.apache.org/jira/browse/HIVE-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954277#comment-13954277 ]

Brock Noland commented on HIVE-6773:
------------------------------------

+1

Update readme for ptest2 framework
----------------------------------

Key: HIVE-6773
URL: https://issues.apache.org/jira/browse/HIVE-6773
Project: Hive
Issue Type: Bug
Components: Testing Infrastructure
Reporter: Szehon Ho
Assignee: Szehon Ho
Priority: Minor
Attachments: HIVE-6773.patch

Approvals dependency is needed for testing. Need to add instructions.

NO PRECOMMIT TESTS

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954288#comment-13954288 ]

Brock Noland commented on HIVE-6783:
------------------------------------

Hi, the patch has tabs, whereas Hive requires two spaces. Can you fix that and then put a Review Board item up? (reviews.apache.org)

FYI [~jcoffey] [~xuefuz]

Incompatible schema for maps between parquet-hive and parquet-pig
-----------------------------------------------------------------

Key: HIVE-6783
URL: https://issues.apache.org/jira/browse/HIVE-6783
Project: Hive
Issue Type: Bug
Components: File Formats
Affects Versions: 0.13.0
Reporter: Tongjie Chen
Fix For: 0.13.0
Attachments: HIVE-6783.1.patch.txt

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6784) parquet-hive should allow column type change
[ https://issues.apache.org/jira/browse/HIVE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954289#comment-13954289 ]

Brock Noland commented on HIVE-6784:
------------------------------------

FYI [~jcoffey] [~xuefuz]

parquet-hive should allow column type change
--------------------------------------------

Key: HIVE-6784
URL: https://issues.apache.org/jira/browse/HIVE-6784
Project: Hive
Issue Type: Bug
Components: File Formats, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Tongjie Chen

See also the following parquet issue: https://github.com/Parquet/parquet-mr/issues/323

Currently, if we change a Parquet-format Hive table using {{alter table parquet_table change c1 c1 bigint}} (assuming the original type of c1 is int), it results in an exception thrown from the SerDe at query runtime: {{org.apache.hadoop.io.IntWritable cannot be cast to org.apache.hadoop.io.LongWritable}}. This differs from Hive's behavior with other file formats, where it will try to perform a cast (yielding a null value in case of an incompatible type).

-- This message was sent by Atlassian JIRA (v6.2#6252)
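The lenient behavior described for other file formats can be sketched as follows (illustrative Python, not the actual SerDe logic): widen the value when the conversion is safe, and yield None (SQL NULL) when it is not, instead of raising a cast error.

```python
# Sketch of cast-or-NULL semantics for reading a column as bigint
# (assumption: illustrative only, not Hive/Parquet SerDe code).
def read_as_bigint(value):
    if isinstance(value, int):   # int widens safely to bigint
        return value
    return None                  # incompatible value -> NULL, not an exception

print(read_as_bigint(42))        # 42
print(read_as_bigint("oops"))    # None
```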
[jira] [Commented] (HIVE-6784) parquet-hive should allow column type change
[ https://issues.apache.org/jira/browse/HIVE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954290#comment-13954290 ]

Brock Noland commented on HIVE-6784:
------------------------------------

FYI [~szehon]

parquet-hive should allow column type change
--------------------------------------------

Key: HIVE-6784
URL: https://issues.apache.org/jira/browse/HIVE-6784
Project: Hive
Issue Type: Bug
Components: File Formats, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Tongjie Chen

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6785) query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe
[ https://issues.apache.org/jira/browse/HIVE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954291#comment-13954291 ]

Brock Noland commented on HIVE-6785:
------------------------------------

FYI [~jcoffey] [~xuefuz] [~szehon]

query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe
--------------------------------------------------------------------------------------------------------------------------

Key: HIVE-6785
URL: https://issues.apache.org/jira/browse/HIVE-6785
Project: Hive
Issue Type: Bug
Components: File Formats, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Tongjie Chen

More specifically, if the table contains string type columns, it results in the following exception:
{noformat}
Failed with exception java.io.IOException: java.lang.ClassCastException: parquet.hive.serde.primitive.ParquetStringInspector cannot be cast to org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableTimestampObjectInspector
{noformat}
See also the following parquet issue: https://github.com/Parquet/parquet-mr/issues/324

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954294#comment-13954294 ]

Brock Noland commented on HIVE-6783:
------------------------------------

FYI [~szehon]

Incompatible schema for maps between parquet-hive and parquet-pig
-----------------------------------------------------------------

Key: HIVE-6783
URL: https://issues.apache.org/jira/browse/HIVE-6783
Project: Hive
Issue Type: Bug
Components: File Formats
Affects Versions: 0.13.0
Reporter: Tongjie Chen
Fix For: 0.13.0
Attachments: HIVE-6783.1.patch.txt

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package
[ https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954307#comment-13954307 ]

Brock Noland commented on HIVE-6757:
------------------------------------

Hi,

The work that was done in HIVE-5783 by the Hive community ensured that the change was backwards compatible for Parquet Hive users, who are also members of the Hive community. There would be no issue with a patch that kept the backwards-compatibility work.

The simplest solution would be to update the serde, input, and output class names in the metastore via the upgrade scripts.

Brock

Remove deprecated parquet classes from outside of org.apache package
--------------------------------------------------------------------

Key: HIVE-6757
URL: https://issues.apache.org/jira/browse/HIVE-6757
Project: Hive
Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
Fix For: 0.13.0
Attachments: HIVE-6757.patch, parquet-hive.patch

Apache shouldn't release projects with files outside of the org.apache namespace.

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6570) Hive variable substitution does not work with the source command
[ https://issues.apache.org/jira/browse/HIVE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954321#comment-13954321 ]

Anthony Hsu commented on HIVE-6570:
-----------------------------------

Thanks. Could one of you guys commit the patch for me, please?

Hive variable substitution does not work with the source command
----------------------------------------------------------------

Key: HIVE-6570
URL: https://issues.apache.org/jira/browse/HIVE-6570
Project: Hive
Issue Type: Bug
Reporter: Anthony Hsu
Assignee: Anthony Hsu
Attachments: HIVE-6570.1.patch

The following does not work:
{code}
source ${hivevar:test-dir}/test.q;
{code}

-- This message was sent by Atlassian JIRA (v6.2#6252)
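The expected substitution behavior can be sketched with a minimal stand-in (an illustrative regex, not Hive's actual VariableSubstitution implementation): every `${hivevar:name}` occurrence is replaced with the variable's value before the command runs.

```python
# Minimal stand-in for ${hivevar:...} substitution; the regex and function
# are illustrative, not Hive's real implementation.
import re

def substitute(line, hivevars):
    return re.sub(r"\$\{hivevar:([^}]+)\}",
                  lambda m: hivevars[m.group(1)], line)

print(substitute("source ${hivevar:test-dir}/test.q;",
                 {"test-dir": "/tmp/queries"}))
# source /tmp/queries/test.q;
```

The bug report is that this expansion is applied to most statements but not to the argument of `source`.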
[jira] [Commented] (HIVE-6721) Streaming ingest needs to be able to send many heartbeats together
[ https://issues.apache.org/jira/browse/HIVE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954336#comment-13954336 ]

Alan Gates commented on HIVE-6721:
----------------------------------

Ok, I'm not sure how to proceed here. All the merge failures are in the generated code. I don't know how to make thrift generate code that will merge properly. If the committer applies the src-only patch and re-runs the thrift generation all should work. Please let me know how to proceed.

Streaming ingest needs to be able to send many heartbeats together
------------------------------------------------------------------

Key: HIVE-6721
URL: https://issues.apache.org/jira/browse/HIVE-6721
Project: Hive
Issue Type: Bug
Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
Fix For: 0.13.0
Attachments: HIVE-6721.patch, HIVE-6721.src-only.patch

The heartbeat method added to HiveMetaStoreClient is intended for SQL operations where the user will have one transaction and a handful of locks. But in the streaming ingest case the client opens a batch of transactions together. In this case we need a way for the client to send a heartbeat for this batch of transactions rather than being forced to send the heartbeats one at a time.

-- This message was sent by Atlassian JIRA (v6.2#6252)
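The requested batching can be sketched as follows (a hypothetical sketch; the function and callback names are illustrative, not the actual HiveMetaStoreClient API): since streaming ingest opens transactions as one contiguous batch, a single call covering the id range replaces one RPC per transaction.

```python
# Hypothetical sketch of batched heartbeating: one range call instead of
# len(txn_ids) individual calls. Names are illustrative, not the real API.
def heartbeat_txn_batch(txn_ids, send_range):
    # A streaming-ingest batch is contiguous, so (min, max) covers it all
    # in a single round trip.
    send_range(min(txn_ids), max(txn_ids))

calls = []
heartbeat_txn_batch([101, 102, 103], lambda lo, hi: calls.append((lo, hi)))
print(calls)   # [(101, 103)]
```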
[jira] [Commented] (HIVE-6763) HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception
[ https://issues.apache.org/jira/browse/HIVE-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954340#comment-13954340 ]

Thejas M Nair commented on HIVE-6763:
-------------------------------------

Vaibhav, can you please check if the test failures are related?

HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception
----------------------------------------------------------------------------------------------------------------------------------------------

Key: HIVE-6763
URL: https://issues.apache.org/jira/browse/HIVE-6763
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
Fix For: 0.13.0
Attachments: HIVE-6763.1.patch, HIVE-6763.2.patch

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5835) Null pointer exception in DeleteDelegator in templeton code
[ https://issues.apache.org/jira/browse/HIVE-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-5835:
--------------------------------
Attachment: HIVE-5835.1.patch

Null pointer exception in DeleteDelegator in templeton code
-----------------------------------------------------------

Key: HIVE-5835
URL: https://issues.apache.org/jira/browse/HIVE-5835
Project: Hive
Issue Type: Bug
Components: WebHCat
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Fix For: 0.13.0
Attachments: HIVE-5835.1.patch, HIVE-5835.1.patch

The following NPE is possible with the current implementation:
{noformat}
ERROR | 13 Nov 2013 08:01:04,292 | org.apache.hcatalog.templeton.CatchallExceptionMapper | java.lang.NullPointerException
	at org.apache.hcatalog.templeton.tool.JobState.getChildren(JobState.java:180)
	at org.apache.hcatalog.templeton.DeleteDelegator.run(DeleteDelegator.java:51)
	at org.apache.hcatalog.templeton.Server.deleteJobId(Server.java:849)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350)
	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1360)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:382)
	at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1331)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
	at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:47)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
	at org.eclipse.jetty.server.Server.handle(Server.java:349)
	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:449)
	at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:910)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
	at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:76)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:609)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:45)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
	at
{noformat}
[jira] [Commented] (HIVE-5835) Null pointer exception in DeleteDelegator in templeton code
[ https://issues.apache.org/jira/browse/HIVE-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954355#comment-13954355 ] Thejas M Nair commented on HIVE-5835: - +1 Null pointer exception in DeleteDelegator in templeton code Key: HIVE-5835 URL: https://issues.apache.org/jira/browse/HIVE-5835 Project: Hive Issue Type: Bug Components: WebHCat Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-5835.1.patch, HIVE-5835.1.patch The following NPE is possible with the current implementation: ERROR | 13 Nov 2013 08:01:04,292 | org.apache.hcatalog.templeton.CatchallExceptionMapper | java.lang.NullPointerException at org.apache.hcatalog.templeton.tool.JobState.getChildren(JobState.java:180) at org.apache.hcatalog.templeton.DeleteDelegator.run(DeleteDelegator.java:51) at org.apache.hcatalog.templeton.Server.deleteJobId(Server.java:849) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1360) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:382) at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1331) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117) at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:47) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111) at org.eclipse.jetty.server.Server.handle(Server.java:349) at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:449) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:910) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:76) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:609) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:45) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599) at
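The fix here is essentially a null guard: JobState.getChildren() can return null for a job with no recorded children, and DeleteDelegator.run() dereferenced that result unconditionally. A minimal sketch of the defensive pattern, in Python with hypothetical names (this is not Templeton's actual code):

```python
def get_children(job_store, job_id):
    """Hypothetical stand-in for JobState.getChildren(), which can
    return None when a job has no recorded children."""
    return job_store.get(job_id)  # may be None

def delete_job(job_store, job_id):
    # Guard against the None return that produced the NPE: fall back
    # to an empty list instead of iterating over None.
    children = get_children(job_store, job_id) or []
    for child in children:
        job_store.pop(child, None)
    job_store.pop(job_id, None)

store = {"job_1": ["job_1_a"], "job_1_a": [], "job_2": None}
delete_job(store, "job_2")  # a None child list no longer crashes
delete_job(store, "job_1")
```

The same shape applies in the Java code: check the returned list for null (or substitute an empty list) before iterating.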
[jira] [Commented] (HIVE-6721) Streaming ingest needs to be able to send many heartbeats together
[ https://issues.apache.org/jira/browse/HIVE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954356#comment-13954356 ] Thejas M Nair commented on HIVE-6721: - Alan, I suspect the merge failures are because of commits with thrift changes that went in yesterday. Patches with generated code usually apply fine in my experience. Can you try regenerating the patch on the latest trunk? I think it might just work. Streaming ingest needs to be able to send many heartbeats together -- Key: HIVE-6721 URL: https://issues.apache.org/jira/browse/HIVE-6721 Project: Hive Issue Type: Bug Components: Locking Affects Versions: 0.13.0 Reporter: Alan Gates Assignee: Alan Gates Fix For: 0.13.0 Attachments: HIVE-6721.patch, HIVE-6721.src-only.patch The heartbeat method added to HiveMetaStoreClient is intended for SQL operations where the user has one transaction and a handful of locks. But in the streaming ingest case the client opens a batch of transactions together, so the client needs a way to send a heartbeat for the whole batch of transactions rather than being forced to send the heartbeats one at a time. -- This message was sent by Atlassian JIRA (v6.2#6252)
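The batch-heartbeat idea can be sketched as a single call that refreshes a contiguous range of transaction ids instead of one RPC per transaction. This is a toy model in Python; the class and method names are illustrative, not Hive's actual metastore API:

```python
class TxnManager:
    """Toy transaction manager; names are illustrative, not Hive's API."""
    def __init__(self):
        self.last_beat = {}  # txn_id -> time of last heartbeat
        self.clock = 0

    def open_txns(self, n):
        # Streaming ingest opens a batch of consecutive transactions.
        start = len(self.last_beat) + 1
        ids = list(range(start, start + n))
        for t in ids:
            self.last_beat[t] = self.clock
        return ids

    def heartbeat(self, txn_id):
        # Per-transaction heartbeat: one round trip each.
        self.last_beat[txn_id] = self.clock

    def heartbeat_txn_range(self, min_txn, max_txn):
        # Batch heartbeat: one round trip refreshes the whole range,
        # which is what a client holding a batch of txns needs.
        for t in range(min_txn, max_txn + 1):
            if t in self.last_beat:
                self.last_beat[t] = self.clock

mgr = TxnManager()
batch = mgr.open_txns(5)
mgr.clock = 10
mgr.heartbeat_txn_range(batch[0], batch[-1])  # one call, not five
```

The point of the design is purely amortization: a streaming writer with hundreds of open transactions pays one round trip per heartbeat interval instead of one per transaction.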
[jira] [Updated] (HIVE-6721) Streaming ingest needs to be able to send many heartbeats together
[ https://issues.apache.org/jira/browse/HIVE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-6721: - Status: Patch Available (was: Open) Attachments: HIVE-6721.patch, HIVE-6721.patch, HIVE-6721.src-only.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6721) Streaming ingest needs to be able to send many heartbeats together
[ https://issues.apache.org/jira/browse/HIVE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-6721: - Status: Open (was: Patch Available) Canceling and resubmitting patch to get a test run. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tongjie Chen updated HIVE-6783: --- Attachment: HIVE-6783.2.patch.txt remove tab and clean up some format Incompatible schema for maps between parquet-hive and parquet-pig - Key: HIVE-6783 URL: https://issues.apache.org/jira/browse/HIVE-6783 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 0.13.0 Reporter: Tongjie Chen Fix For: 0.13.0 Attachments: HIVE-6783.1.patch.txt, HIVE-6783.2.patch.txt See also the following parquet issue: https://github.com/Parquet/parquet-mr/issues/290 The schema written for maps isn't compatible between Hive and Pig, so files written by one cannot be properly read by the other. More specifically, for the same map column c1, parquet-pig generates this schema:

message pig_schema {
  optional group c1 (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (UTF8);
      optional binary value;
    }
  }
}

while parquet-hive generates this schema:

message hive_schema {
  optional group c1 (MAP_KEY_VALUE) {
    repeated group map {
      required binary key;
      optional binary value;
    }
  }
}

-- This message was sent by Atlassian JIRA (v6.2#6252)
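The incompatibility is structural rather than semantic: both schemas describe a map with a binary key and an optional binary value, but the MAP / MAP_KEY_VALUE / UTF8 annotations sit at different levels, so a strict schema comparison rejects files written by the other engine. A small Python sketch (hypothetical tuple encoding, not parquet-mr's API) of strict versus shape-only comparison:

```python
# (name, annotations, children) tuples standing in for the two schemas above.
pig_c1 = ("c1", {"MAP"}, [
    ("map", {"MAP_KEY_VALUE"}, [
        ("key", {"UTF8"}, []),
        ("value", set(), [])])])
hive_c1 = ("c1", {"MAP_KEY_VALUE"}, [
    ("map", set(), [
        ("key", set(), []),
        ("value", set(), [])])])

def strict_equal(a, b):
    # What a naive reader effectively does: compare everything.
    return a == b

def shape_equal(a, b):
    # Shape-only compatibility: same field names and nesting, ignoring
    # where the MAP / MAP_KEY_VALUE / UTF8 annotations were attached.
    (name_a, _, kids_a), (name_b, _, kids_b) = a, b
    return (name_a == name_b and len(kids_a) == len(kids_b)
            and all(shape_equal(x, y) for x, y in zip(kids_a, kids_b)))
```

Under this encoding, strict_equal rejects the pair while shape_equal accepts it, which is the gap the patch has to bridge in one direction or the other.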
[jira] [Updated] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tongjie Chen updated HIVE-6783: --- Attachment: HIVE-6783.3.patch.txt -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tongjie Chen updated HIVE-6783: --- Attachment: HIVE-6783.4.patch.txt -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954387#comment-13954387 ] Tongjie Chen commented on HIVE-6783: https://reviews.apache.org/r/19825/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6788) Abandoned opened transactions not being timed out
Alan Gates created HIVE-6788: Summary: Abandoned opened transactions not being timed out Key: HIVE-6788 URL: https://issues.apache.org/jira/browse/HIVE-6788 Project: Hive Issue Type: Bug Components: Locking Affects Versions: 0.13.0 Reporter: Alan Gates Assignee: Alan Gates If a client abandons an open transaction it is never closed. This does not cause any immediate problems (as locks are timed out) but it will eventually lead to high levels of open transactions in the lists that readers need to be aware of when reading tables or partitions. -- This message was sent by Atlassian JIRA (v6.2#6252)
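A reaper of the kind this issue calls for can be sketched simply: track the last heartbeat per open transaction and abort anything that has gone quiet for longer than a timeout, mirroring how locks are already timed out. Python sketch with hypothetical names, not Hive's implementation:

```python
class TxnTable:
    """Toy open-transaction table; a hypothetical sketch of the reaper
    HIVE-6788 asks for, not Hive's actual metastore code."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.open = {}  # txn_id -> last heartbeat time (seconds)

    def open_txn(self, txn_id, now):
        self.open[txn_id] = now

    def heartbeat(self, txn_id, now):
        if txn_id in self.open:
            self.open[txn_id] = now

    def abort_timed_out(self, now):
        # Abort any transaction whose client stopped heartbeating, so
        # abandoned txns do not pile up in the open-txn list that
        # readers must consult.
        dead = [t for t, beat in self.open.items()
                if now - beat > self.timeout_s]
        for t in dead:
            del self.open[t]
        return dead

table = TxnTable(timeout_s=300)
table.open_txn(1, now=0)
table.open_txn(2, now=0)
table.heartbeat(2, now=400)   # txn 2 is alive; txn 1 was abandoned
aborted = table.abort_timed_out(now=500)
```

Run periodically (e.g. from a housekeeping thread), this keeps the open-transaction list bounded by the set of clients that are actually alive.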
[jira] [Commented] (HIVE-6694) Beeline should provide a way to execute shell command as Hive CLI does
[ https://issues.apache.org/jira/browse/HIVE-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954415#comment-13954415 ] Hive QA commented on HIVE-6694: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637087/HIVE-6694.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5503 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part org.apache.hive.service.cli.thrift.TestThriftBinaryCLIService.testExecuteStatementAsync {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2032/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2032/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12637087 Beeline should provide a way to execute shell command as Hive CLI does -- Key: HIVE-6694 URL: https://issues.apache.org/jira/browse/HIVE-6694 Project: Hive Issue Type: Improvement Components: CLI, Clients Affects Versions: 0.11.0, 0.12.0, 0.13.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-6694.patch Hive CLI allows a user to execute a shell command using the ! notation, for instance !cat myfile.txt. Being able to execute a shell command may be important for some users. As a replacement, however, Beeline provides no such capability, possibly because the ! notation is reserved for SQLLine commands. It's possible to provide this using a slightly different syntax such as !sh cat myfile.txt. -- This message was sent by Atlassian JIRA (v6.2#6252)
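The proposed escape can be sketched as a tiny dispatch step in the line handler: ! stays reserved for SQLLine commands, and only a hypothetical !sh prefix hands the remainder to a shell. Python sketch of the parsing only (Beeline itself is Java):

```python
def dispatch(line):
    # '!' remains reserved for SQLLine commands; only the (hypothetical)
    # '!sh ' prefix passes the rest of the line to a shell.
    if line.startswith("!sh "):
        return ("shell", line[len("!sh "):])
    if line.startswith("!"):
        return ("sqlline", line[1:])
    return ("sql", line)

print(dispatch("!sh cat myfile.txt"))
```

The design keeps backward compatibility: every existing !command keeps its SQLLine meaning, and shell execution is opt-in through the new subcommand.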
[jira] [Updated] (HIVE-6642) Query fails to vectorize when a non string partition column is part of the query expression
[ https://issues.apache.org/jira/browse/HIVE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-6642: Status: Patch Available (was: Open) Query fails to vectorize when a non string partition column is part of the query expression --- Key: HIVE-6642 URL: https://issues.apache.org/jira/browse/HIVE-6642 Project: Hive Issue Type: Bug Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-6642-2.patch, HIVE-6642-3.patch, HIVE-6642-4.patch, HIVE-6642.1.patch, HIVE-6642.5.patch, HIVE-6642.6.patch

drop table if exists alltypesorc_part;

CREATE TABLE alltypesorc_part (
  ctinyint tinyint,
  csmallint smallint,
  cint int,
  cbigint bigint,
  cfloat float,
  cdouble double,
  cstring1 string,
  cstring2 string,
  ctimestamp1 timestamp,
  ctimestamp2 timestamp,
  cboolean1 boolean,
  cboolean2 boolean)
partitioned by (ds int) STORED AS ORC;

insert overwrite table alltypesorc_part partition (ds=2011) select * from alltypesorc limit 100;
insert overwrite table alltypesorc_part partition (ds=2012) select * from alltypesorc limit 200;

explain select * from (select ds from alltypesorc_part) t1, alltypesorc t2
where t1.ds = t2.cint order by t2.ctimestamp1 limit 100;

The above query fails to vectorize because (select ds from alltypesorc_part) t1 returns a string column and the join equality on t2 is performed on an int column.
The correct output when vectorization is turned on should be:

STAGE DEPENDENCIES:
  Stage-5 is a root stage
  Stage-2 depends on stages: Stage-5
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-5
    Map Reduce Local Work
      Alias -> Map Local Tables:
        t1:alltypesorc_part
          Fetch Operator
            limit: -1
      Alias -> Map Local Operator Tree:
        t1:alltypesorc_part
          TableScan
            alias: alltypesorc_part
            Statistics: Num rows: 300 Data size: 62328 Basic stats: COMPLETE Column stats: COMPLETE
            Select Operator
              expressions: ds (type: int)
              outputColumnNames: _col0
              Statistics: Num rows: 300 Data size: 1200 Basic stats: COMPLETE Column stats: COMPLETE
              HashTable Sink Operator
                condition expressions:
                  0 {_col0}
                  1 {ctinyint} {csmallint} {cint} {cbigint} {cfloat} {cdouble} {cstring1} {cstring2} {ctimestamp1} {ctimestamp2} {cboolean1} {cboolean2}
                keys:
                  0 _col0 (type: int)
                  1 cint (type: int)

  Stage: Stage-2
    Map Reduce
      Map Operator Tree:
        TableScan
          alias: t2
          Statistics: Num rows: 3536 Data size: 1131711 Basic stats: COMPLETE Column stats: NONE
          Map Join Operator
            condition map:
              Inner Join 0 to 1
            condition expressions:
              0 {_col0}
              1 {ctinyint} {csmallint} {cint} {cbigint} {cfloat} {cdouble} {cstring1} {cstring2} {ctimestamp1} {ctimestamp2} {cboolean1} {cboolean2}
            keys:
              0 _col0 (type: int)
              1 cint (type: int)
            outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12
            Statistics: Num rows: 3889 Data size: 1244882 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (_col0 = _col3) (type: boolean)
              Statistics: Num rows: 1944 Data size: 622280 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: _col0 (type: int), _col1 (type: tinyint), _col2 (type: smallint), _col3 (type: int), _col4 (type: bigint), _col5 (type: float), _col6 (type: double), _col7 (type: string), _col8 (type: string), _col9 (type: timestamp), _col10 (type: timestamp), _col11 (type: boolean), _col12 (type: boolean)
                outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12
                Statistics: Num rows: 1944 Data size: 622280 Basic stats: COMPLETE Column stats: NONE
                Reduce Output Operator
                  key expressions: _col9 (type: timestamp)
                  sort order: +
                  Statistics: Num rows: 1944 Data size: 622280 Basic stats: COMPLETE Column stats: NONE
                  value expressions: _col0 (type: int), _col1 (type: tinyint), _col2 (type: smallint), _col3 (type: int), _col4 (type: bigint), _col5 (type: float), _col6 (type:
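The underlying type confusion can be illustrated in a few lines: partition values are stored as strings in partition metadata, so a planner that consults the metadata type instead of the declared type from "partitioned by (ds int)" concludes the join compares a string with an int and skips vectorization. A hypothetical Python sketch of that check (not Hive's vectorizer code):

```python
table_types = {"cint": "int"}              # regular column of t2
declared_partition_types = {"ds": "int"}   # from "partitioned by (ds int)"

def join_keys_vectorizable(part_key, col_key, use_declared_type):
    # Partition values live in metadata as strings; without consulting
    # the declared type, the join-key types appear to mismatch and the
    # plan falls back to non-vectorized execution.
    left = declared_partition_types[part_key] if use_declared_type else "string"
    return left == table_types[col_key]
```

With use_declared_type=False the check fails (string vs int), which is the reported behavior; resolving the partition column to its declared int type lets the types match and the query vectorize.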
[jira] [Updated] (HIVE-6642) Query fails to vectorize when a non string partition column is part of the query expression
[ https://issues.apache.org/jira/browse/HIVE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-6642: Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-6642) Query fails to vectorize when a non string partition column is part of the query expression
[ https://issues.apache.org/jira/browse/HIVE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-6642: Attachment: HIVE-6642.6.patch more .q.out file changes
[jira] [Updated] (HIVE-6642) Query fails to vectorize when a non string partition column is part of the query expression
[ https://issues.apache.org/jira/browse/HIVE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-6642: Attachment: (was: HIVE-6642.6.patch)
[jira] [Updated] (HIVE-6642) Query fails to vectorize when a non string partition column is part of the query expression
[ https://issues.apache.org/jira/browse/HIVE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-6642: Attachment: HIVE-6642.6.patch
[jira] [Commented] (HIVE-6570) Hive variable substitution does not work with the source command
[ https://issues.apache.org/jira/browse/HIVE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954441#comment-13954441 ] Lefty Leverenz commented on HIVE-6570: -- Does the bug fix need to be mentioned in the wiki? A version note could be added in the CLI doc and/or Variable Substitution: * [CLI: Hive Interactive Shell Commands |https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-HiveInteractiveShellCommands] * [Variable Substitution |https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VariableSubstitution] Hive variable substitution does not work with the source command -- Key: HIVE-6570 URL: https://issues.apache.org/jira/browse/HIVE-6570 Project: Hive Issue Type: Bug Reporter: Anthony Hsu Assignee: Anthony Hsu Attachments: HIVE-6570.1.patch The following does not work: {code} source ${hivevar:test-dir}/test.q; {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6781) Hive JDBC in http mode is using HiveConf - should be removed
[ https://issues.apache.org/jira/browse/HIVE-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-6781: --- Description: This change is needed so that in unsecured mode, the jdbc driver does not depend on HiveConf, which is derived from Hadoop's Configuration class, and continues to be a thin client. Hive JDBC in http mode is using HiveConf - should be removed - Key: HIVE-6781 URL: https://issues.apache.org/jira/browse/HIVE-6781 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6781.1.patch This change is needed so that in unsecured mode, the jdbc driver does not depend on HiveConf, which is derived from Hadoop's Configuration class, and continues to be a thin client. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6763) HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception
[ https://issues.apache.org/jira/browse/HIVE-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954445#comment-13954445 ] Vaibhav Gumashta commented on HIVE-6763: [~thejas] The failures are unrelated. Thanks! HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception -- Key: HIVE-6763 URL: https://issues.apache.org/jira/browse/HIVE-6763 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6763.1.patch, HIVE-6763.2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 19827: HiveStatement client transport lock should unlock in finally block.
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/19827/ --- Review request for hive and Thejas Nair. Bugs: HIVE-6789 https://issues.apache.org/jira/browse/HIVE-6789 Repository: hive-git Description --- https://issues.apache.org/jira/browse/HIVE-6789 Diffs - jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java 95a1843 Diff: https://reviews.apache.org/r/19827/diff/ Testing --- TestJdbcDriver2 Thanks, Vaibhav Gumashta
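The fix under review is the standard lock/finally idiom: release the client transport lock even when the guarded call throws. A minimal, self-contained sketch of that pattern (hypothetical class and method names; the actual change is in jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the lock/finally pattern the patch applies. The names here
// are illustrative; the real lock guards the Thrift transport in
// HiveStatement.
public class TransportLockSketch {
    private final ReentrantLock transportLock = new ReentrantLock();

    // Stand-in for the RPC section of HiveStatement.execute(): the lock
    // is released in finally, so a throwing call cannot leave it held.
    public boolean runWithLock(Runnable rpc) {
        transportLock.lock();
        try {
            rpc.run();           // may throw; unlock below still runs
            return true;
        } finally {
            transportLock.unlock();
        }
    }

    public boolean isLocked() {
        return transportLock.isLocked();
    }

    public static void main(String[] args) {
        TransportLockSketch s = new TransportLockSketch();
        try {
            s.runWithLock(() -> { throw new RuntimeException("simulated transport error"); });
        } catch (RuntimeException expected) {
            // the exception propagates, but the lock must not stay held
        }
        System.out.println("locked after failure: " + s.isLocked());
    }
}
```

Without the finally block, the failing call would leave the lock held and deadlock the next statement on the same connection.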
[jira] [Commented] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.
[ https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954463#comment-13954463 ] Vaibhav Gumashta commented on HIVE-6789: cc [~thejas] [~rhbutani] bug for 13! HiveStatement client transport lock should unlock in finally block. --- Key: HIVE-6789 URL: https://issues.apache.org/jira/browse/HIVE-6789 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6789.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.
Vaibhav Gumashta created HIVE-6789: -- Summary: HiveStatement client transport lock should unlock in finally block. Key: HIVE-6789 URL: https://issues.apache.org/jira/browse/HIVE-6789 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.
[ https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-6789: --- Attachment: HIVE-6789.1.patch HiveStatement client transport lock should unlock in finally block. --- Key: HIVE-6789 URL: https://issues.apache.org/jira/browse/HIVE-6789 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6789.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5835) Null pointer exception in DeleteDelegator in templeton code
[ https://issues.apache.org/jira/browse/HIVE-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954461#comment-13954461 ] Hive QA commented on HIVE-5835: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637655/HIVE-5835.1.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5502 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2033/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2033/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12637655 Null pointer exception in DeleteDelegator in templeton code Key: HIVE-5835 URL: https://issues.apache.org/jira/browse/HIVE-5835 Project: Hive Issue Type: Bug Components: WebHCat Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-5835.1.patch, HIVE-5835.1.patch The following NPE is possible with the current implementation: ERROR | 13 Nov 2013 08:01:04,292 | org.apache.hcatalog.templeton.CatchallExceptionMapper | java.lang.NullPointerException at org.apache.hcatalog.templeton.tool.JobState.getChildren(JobState.java:180) at org.apache.hcatalog.templeton.DeleteDelegator.run(DeleteDelegator.java:51) at org.apache.hcatalog.templeton.Server.deleteJobId(Server.java:849) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1360) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:382) at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1331) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117) at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:47) at
[jira] [Updated] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.
[ https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-6789: --- Status: Patch Available (was: Open) HiveStatement client transport lock should unlock in finally block. --- Key: HIVE-6789 URL: https://issues.apache.org/jira/browse/HIVE-6789 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6789.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6790) Document min jars required for JDBC client
Vaibhav Gumashta created HIVE-6790: -- Summary: Document min jars required for JDBC client Key: HIVE-6790 URL: https://issues.apache.org/jira/browse/HIVE-6790 Project: Hive Issue Type: Task Components: JDBC Reporter: Vaibhav Gumashta For:
1. Unsecured binary http mode HS2
2. Unsecured binary http mode with SSL
3. Kerberized setup binary http
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6721) Streaming ingest needs to be able to send many heartbeats together
[ https://issues.apache.org/jira/browse/HIVE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954511#comment-13954511 ] Hive QA commented on HIVE-6721: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637659/HIVE-6721.patch {color:green}SUCCESS:{color} +1 5505 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2035/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2035/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12637659 Streaming ingest needs to be able to send many heartbeats together -- Key: HIVE-6721 URL: https://issues.apache.org/jira/browse/HIVE-6721 Project: Hive Issue Type: Bug Components: Locking Affects Versions: 0.13.0 Reporter: Alan Gates Assignee: Alan Gates Fix For: 0.13.0 Attachments: HIVE-6721.patch, HIVE-6721.patch, HIVE-6721.src-only.patch The heartbeat method added to HiveMetaStoreClient is intended for SQL operations where the user will have one transaction and a hand full of locks. But in the streaming ingest case the client opens a batch of transactions together. In this case we need a way for the client to send a heartbeat for this batch of transactions rather than being forced to send the heartbeats one at a time. -- This message was sent by Atlassian JIRA (v6.2#6252)
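The batching idea described in the HIVE-6721 report above can be sketched as follows. The method name mirrors the range-based heartbeat the issue calls for, but the body is an illustrative stand-in, not the committed metastore client code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a range heartbeat for streaming ingest: one client call
// covers a contiguous batch of transaction ids instead of one RPC per
// transaction. Hypothetical stand-in for the HiveMetaStoreClient API.
public class HeartbeatBatchSketch {
    // Heartbeats every open txn in [minTxnId, maxTxnId] and returns the
    // ids that were touched; server-side this is one loop, client-side
    // one round trip.
    static List<Long> heartbeatTxnRange(long minTxnId, long maxTxnId) {
        List<Long> beaten = new ArrayList<>();
        for (long txn = minTxnId; txn <= maxTxnId; txn++) {
            beaten.add(txn);
        }
        return beaten;
    }

    public static void main(String[] args) {
        // A streaming client that opened txns 100..104 sends a single call
        // instead of five.
        List<Long> beaten = heartbeatTxnRange(100L, 104L);
        System.out.println("heartbeated " + beaten.size() + " transactions in one call");
    }
}
```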
[jira] [Updated] (HIVE-6786) Off by one error in ORC PPD
[ https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth J updated HIVE-6786: - Attachment: HIVE-6786.1.patch Off by one error in ORC PPD Key: HIVE-6786 URL: https://issues.apache.org/jira/browse/HIVE-6786 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Gopal V Assignee: Prasanth J Priority: Critical Fix For: 0.13.0 Attachments: HIVE-6786.1.patch Turning on ORC PPD makes split computation fail for a 10Tb benchmark. Narrowed down to the following code fragment https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757 {code} includeStripe[i] = (i > stripeStats.size()) || isStripeSatisfyPredicate(stripeStats.get(i), sarg, filterColumns); {code} I would guess that should be a >=, but [~prasanth_j], can you comment if that is the right fix? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6786) Off by one error in ORC PPD
[ https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954528#comment-13954528 ] Prasanth J commented on HIVE-6786: -- [~gopalv] That looks like an off-by-one error. It should be >=. I think this case will only be hit when the stripe statistics are missing, in which case stripeStatistics.size() will be 0 and all stripes should be included. The attached patch fixes it. I am curious how the stripe statistics went missing: is it an old ORC file that did not have stripe statistics, or was the ORC file generated by means other than Hive? Off by one error in ORC PPD Key: HIVE-6786 URL: https://issues.apache.org/jira/browse/HIVE-6786 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Gopal V Assignee: Prasanth J Priority: Critical Fix For: 0.13.0 Attachments: HIVE-6786.1.patch Turning on ORC PPD makes split computation fail for a 10Tb benchmark. Narrowed down to the following code fragment https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757 {code} includeStripe[i] = (i > stripeStats.size()) || isStripeSatisfyPredicate(stripeStats.get(i), sarg, filterColumns); {code} I would guess that should be a >=, but [~prasanth_j], can you comment if that is the right fix? -- This message was sent by Atlassian JIRA (v6.2#6252)
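The boundary condition discussed in the comment above can be illustrated with a small standalone sketch (this is not the actual OrcInputFormat code; the predicate helper is a hypothetical placeholder):

```java
import java.util.Collections;
import java.util.List;

// Sketch of the ORC PPD off-by-one: when stripe statistics are missing,
// the stats list is empty, so every stripe index must short-circuit to
// "include" before stripeStats.get(i) is reached.
public class StripeIncludeSketch {
    // With a strict ">", the index i == stripeStats.size() slips past the
    // guard into stripeStats.get(i) and throws; ">=" short-circuits.
    static boolean includeStripe(int i, List<Object> stripeStats) {
        return (i >= stripeStats.size()) || satisfiesPredicate(stripeStats.get(i));
    }

    // Hypothetical stand-in for isStripeSatisfyPredicate(stats, sarg, cols).
    static boolean satisfiesPredicate(Object stats) {
        return false;
    }

    public static void main(String[] args) {
        List<Object> emptyStats = Collections.emptyList();
        // Missing stripe statistics: every stripe is included, no exception.
        for (int i = 0; i < 3; i++) {
            System.out.println("stripe " + i + " included: " + includeStripe(i, emptyStats));
        }
    }
}
```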
[jira] [Commented] (HIVE-6781) Hive JDBC in http mode is using HiveConf - should be removed
[ https://issues.apache.org/jira/browse/HIVE-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954529#comment-13954529 ] Thejas M Nair commented on HIVE-6781: - +1 [~rhbutani] It will be very useful to have this patch in 0.13. Otherwise, we will need to add Hadoop as a dependency for JDBC applications, even in unsecured mode. Hive JDBC in http mode is using HiveConf - should be removed - Key: HIVE-6781 URL: https://issues.apache.org/jira/browse/HIVE-6781 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6781.1.patch This change is needed so that in unsecured mode, the jdbc driver does not depend on HiveConf, which is derived from Hadoop's Configuration class, and continues to be a thin client. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6786) Off by one error in ORC PPD
[ https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth J updated HIVE-6786: - Status: Patch Available (was: Open) Off by one error in ORC PPD Key: HIVE-6786 URL: https://issues.apache.org/jira/browse/HIVE-6786 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Gopal V Assignee: Prasanth J Priority: Critical Fix For: 0.13.0 Attachments: HIVE-6786.1.patch Turning on ORC PPD makes split computation fail for a 10Tb benchmark. Narrowed down to the following code fragment https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757 {code} includeStripe[i] = (i > stripeStats.size()) || isStripeSatisfyPredicate(stripeStats.get(i), sarg, filterColumns); {code} I would guess that should be a >=, but [~prasanth_j], can you comment if that is the right fix? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6786) Off by one error in ORC PPD
[ https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954531#comment-13954531 ] Prasanth J commented on HIVE-6786: -- [~gopalv] Was it throwing an ArrayIndexOutOfBounds exception when PPD was enabled? Off by one error in ORC PPD Key: HIVE-6786 URL: https://issues.apache.org/jira/browse/HIVE-6786 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Gopal V Assignee: Prasanth J Priority: Critical Fix For: 0.13.0 Attachments: HIVE-6786.1.patch Turning on ORC PPD makes split computation fail for a 10Tb benchmark. Narrowed down to the following code fragment https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757 {code} includeStripe[i] = (i > stripeStats.size()) || isStripeSatisfyPredicate(stripeStats.get(i), sarg, filterColumns); {code} I would guess that should be a >=, but [~prasanth_j], can you comment if that is the right fix? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6763) HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception
[ https://issues.apache.org/jira/browse/HIVE-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6763: Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to 0.13 branch and trunk. Thanks for the contribution Vaibhav! HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception -- Key: HIVE-6763 URL: https://issues.apache.org/jira/browse/HIVE-6763 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6763.1.patch, HIVE-6763.2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6781) Hive JDBC in http mode is using HiveConf - should be removed
[ https://issues.apache.org/jira/browse/HIVE-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6781: Status: Patch Available (was: Open) Hive JDBC in http mode is using HiveConf - should be removed - Key: HIVE-6781 URL: https://issues.apache.org/jira/browse/HIVE-6781 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6781.1.patch This change is needed so that in unsecured mode, the jdbc driver does not depend on HiveConf, which is derived from Hadoop's Configuration class, and continues to be a thin client. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954543#comment-13954543 ] Hive QA commented on HIVE-6783: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637662/HIVE-6783.4.patch.txt {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5503 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestJdbcDriver2.testNewConnectionConfiguration {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2036/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2036/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12637662 Incompatible schema for maps between parquet-hive and parquet-pig - Key: HIVE-6783 URL: https://issues.apache.org/jira/browse/HIVE-6783 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 0.13.0 Reporter: Tongjie Chen Fix For: 0.13.0 Attachments: HIVE-6783.1.patch.txt, HIVE-6783.2.patch.txt, HIVE-6783.3.patch.txt, HIVE-6783.4.patch.txt see also in following parquet issue: https://github.com/Parquet/parquet-mr/issues/290 The schema written for maps isn't compatible between hive and pig. This means any files written in one cannot be properly read in the other. 
More specifically, for the same map column c1, parquet-pig generates schema:
{code}
message pig_schema {
  optional group c1 (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (UTF8);
      optional binary value;
    }
  }
}
{code}
while parquet-hive generates schema:
{code}
message hive_schema {
  optional group c1 (MAP_KEY_VALUE) {
    repeated group map {
      required binary key;
      optional binary value;
    }
  }
}
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6642) Query fails to vectorize when a non string partition column is part of the query expression
[ https://issues.apache.org/jira/browse/HIVE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954544#comment-13954544 ] Hive QA commented on HIVE-6642: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637667/HIVE-6642.6.patch Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2038/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2038/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n '' ]] + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-2038/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ svn = \s\v\n ]] + [[ -n '' ]] + [[ -d apache-svn-trunk-source ]] + [[ ! -d apache-svn-trunk-source/.svn ]] + [[ ! -d apache-svn-trunk-source ]] + cd apache-svn-trunk-source + svn revert -R . 
Reverted 'ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestHiveSchemaConverter.java' Reverted 'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java' ++ awk '{print $2}' ++ egrep -v '^X|^Performing status on external' ++ svn status --no-ignore + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target hcatalog/server-extensions/target hcatalog/core/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen service/target contrib/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target + svn update Ujdbc/src/java/org/apache/hive/jdbc/HttpKerberosRequestInterceptor.java Fetching external item into 'hcatalog/src/test/e2e/harness' Updated external to revision 1583097. Updated to revision 1583097. + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12637667 Query fails to vectorize when a non string partition column is part of the query expression --- Key: HIVE-6642 URL: https://issues.apache.org/jira/browse/HIVE-6642 Project: Hive Issue Type: Bug Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-6642-2.patch, HIVE-6642-3.patch, HIVE-6642-4.patch, HIVE-6642.1.patch, HIVE-6642.5.patch, HIVE-6642.6.patch drop table if exists alltypesorc_part; CREATE TABLE alltypesorc_part ( ctinyint tinyint, csmallint smallint, cint int, cbigint bigint, cfloat float, cdouble double, cstring1 string, cstring2 string, ctimestamp1 timestamp, ctimestamp2 timestamp, cboolean1 boolean, cboolean2 boolean) partitioned by (ds int) STORED AS ORC; insert overwrite table alltypesorc_part partition (ds=2011) select * from alltypesorc limit 100; insert overwrite table alltypesorc_part partition (ds=2012) select * from alltypesorc limit 200; explain select * from (select ds from alltypesorc_part) t1, alltypesorc t2 where t1.ds = t2.cint order by t2.ctimestamp1 limit 100; The above query fails to vectorize because (select ds from alltypesorc_part) t1 returns a string column and the join equality on t2 is performed on an int column. The correct output when vectorization is turned on should be: STAGE DEPENDENCIES: Stage-5 is a root stage Stage-2 depends on stages: Stage-5 Stage-0 is a root stage STAGE PLANS: Stage: Stage-5 Map Reduce Local Work Alias - Map Local Tables:
[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig
[ https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954548#comment-13954548 ] Tongjie Chen commented on HIVE-6783: Hmm, why would formatting code cause a previously passing test to fail? The failure is related to JDBC, which is entirely unrelated. Is that a transient error? Incompatible schema for maps between parquet-hive and parquet-pig - Key: HIVE-6783 URL: https://issues.apache.org/jira/browse/HIVE-6783 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 0.13.0 Reporter: Tongjie Chen Fix For: 0.13.0 Attachments: HIVE-6783.1.patch.txt, HIVE-6783.2.patch.txt, HIVE-6783.3.patch.txt, HIVE-6783.4.patch.txt see also in following parquet issue: https://github.com/Parquet/parquet-mr/issues/290 The schema written for maps isn't compatible between hive and pig. This means any files written in one cannot be properly read in the other. More specifically, for the same map column c1, parquet-pig generates schema:
{code}
message pig_schema {
  optional group c1 (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (UTF8);
      optional binary value;
    }
  }
}
{code}
while parquet-hive generates schema:
{code}
message hive_schema {
  optional group c1 (MAP_KEY_VALUE) {
    repeated group map {
      required binary key;
      optional binary value;
    }
  }
}
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.
[ https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954560#comment-13954560 ] Hive QA commented on HIVE-6789: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637670/HIVE-6789.1.patch {color:green}SUCCESS:{color} +1 5502 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2040/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2040/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12637670 HiveStatement client transport lock should unlock in finally block. --- Key: HIVE-6789 URL: https://issues.apache.org/jira/browse/HIVE-6789 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6789.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6786) Off by one error in ORC PPD
[ https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954585#comment-13954585 ] Hive QA commented on HIVE-6786: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12637678/HIVE-6786.1.patch {color:green}SUCCESS:{color} +1 5502 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2041/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2041/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12637678 Off by one error in ORC PPD Key: HIVE-6786 URL: https://issues.apache.org/jira/browse/HIVE-6786 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Gopal V Assignee: Prasanth J Priority: Critical Fix For: 0.13.0 Attachments: HIVE-6786.1.patch Turning on ORC PPD makes split computation fail for a 10Tb benchmark. Narrowed down to the following code fragment https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757 {code} includeStripe[i] = (i > stripeStats.size()) || isStripeSatisfyPredicate(stripeStats.get(i), sarg, filterColumns); {code} I would guess that should be a >=, but [~prasanth_j], can you comment if that is the right fix? -- This message was sent by Atlassian JIRA (v6.2#6252)