[jira] [Commented] (HIVE-11609) Capability to add a filter to hbase scan via composite key doesn't work

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299679#comment-16299679
 ] 

Hive QA commented on HIVE-11609:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903135/HIVE-11609.10.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11536 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join] (batchId=84)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8349/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8349/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8349/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903135 - PreCommit-HIVE-Build

> Capability to add a filter to hbase scan via composite key doesn't work
> ---
>
> Key: HIVE-11609
> URL: https://issues.apache.org/jira/browse/HIVE-11609
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Swarnim Kulkarni
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-11609.08.patch, HIVE-11609.09.patch, 
> HIVE-11609.1.patch.txt, HIVE-11609.10.patch, HIVE-11609.2.patch.txt, 
> HIVE-11609.3.patch.txt, HIVE-11609.4.patch.txt, HIVE-11609.5.patch, 
> HIVE-11609.6.patch.txt, HIVE-11609.7.patch.txt
>
>
> It seems like the capability to add a filter to an HBase scan, which was added 
> as part of HIVE-6411, doesn't work. This is primarily because 
> HiveHBaseInputFormat adds the filter in getSplits instead of getRecordReader. 
> That works fine for start and stop keys, but not for a filter, because a filter 
> is only respected when the actual scan is performed. This is also related to 
> the initial refactoring that was done as part of HIVE-3420.
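For illustration, a minimal standalone sketch (hypothetical table name, keys, and prefix; not the actual HiveHBaseInputFormat code) of why the placement matters: start/stop keys narrow the range even when decided at split time, but a Filter only takes effect on the Scan object that the record reader actually executes.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative sketch only; the table name, keys, and prefix are made-up values.
public class FilterAtScanTime {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();

    Scan scan = new Scan();
    // Start/stop keys restrict the range even when they are decided at split time.
    scan.setStartRow(Bytes.toBytes("row_100"));
    scan.setStopRow(Bytes.toBytes("row_200"));
    // A Filter is only honored on the Scan that is actually executed, i.e. the one
    // the record reader runs; this is the point of the bug described above.
    scan.setFilter(new PrefixFilter(Bytes.toBytes("row_1")));

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("some_hbase_table"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    }
  }
}
{code}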



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299686#comment-16299686
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Design document is attached in the umbrella jira HIVE-18320.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: ACID, DR
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> that will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persistable map of global txn ID -> table -> write ID for that table has 
> to be maintained to still allow snapshot isolation.
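As a rough in-memory illustration of that bookkeeping (class and method names are hypothetical; the actual design is in the document attached to HIVE-18320, and a real implementation would persist this in the metastore rather than in a map):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "global txn ID -> table -> write ID" map.
public class TableWriteIdMap {
  private final Map<Long, Map<String, Long>> txnToTableWriteId = new ConcurrentHashMap<>();

  // Returns the write ID already allocated to this (txn, table) pair, or records
  // the supplied candidate if none exists yet.
  public long allocate(long txnId, String qualifiedTableName, long candidateWriteId) {
    return txnToTableWriteId
        .computeIfAbsent(txnId, t -> new ConcurrentHashMap<>())
        .computeIfAbsent(qualifiedTableName, t -> candidateWriteId);
  }

  // Returns null if the transaction never wrote to the table.
  public Long lookup(long txnId, String qualifiedTableName) {
    Map<String, Long> perTable = txnToTableWriteId.get(txnId);
    return perTable == null ? null : perTable.get(qualifiedTableName);
  }
}
{code}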



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18052) Run p-tests on mm tables

2017-12-21 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18052:
--
Attachment: HIVE-18052.11.patch

> Run p-tests on mm tables
> 
>
> Key: HIVE-18052
> URL: https://issues.apache.org/jira/browse/HIVE-18052
> Project: Hive
>  Issue Type: Task
>Reporter: Steve Yeom
>Assignee: Steve Yeom
> Attachments: HIVE-18052.1.patch, HIVE-18052.10.patch, 
> HIVE-18052.11.patch, HIVE-18052.2.patch, HIVE-18052.3.patch, 
> HIVE-18052.4.patch, HIVE-18052.5.patch, HIVE-18052.6.patch, 
> HIVE-18052.7.patch, HIVE-18052.8.patch, HIVE-18052.9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-11609) Capability to add a filter to hbase scan via composite key doesn't work

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299700#comment-16299700
 ] 

Hive QA commented on HIVE-11609:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} hbase-handler: The patch generated 22 new + 29 
unchanged - 34 fixed = 51 total (was 63) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 21e18de |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8350/yetus/diff-checkstyle-hbase-handler.txt
 |
| modules | C: ql hbase-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8350/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Capability to add a filter to hbase scan via composite key doesn't work
> ---
>
> Key: HIVE-11609
> URL: https://issues.apache.org/jira/browse/HIVE-11609
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Swarnim Kulkarni
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-11609.08.patch, HIVE-11609.09.patch, 
> HIVE-11609.1.patch.txt, HIVE-11609.10.patch, HIVE-11609.2.patch.txt, 
> HIVE-11609.3.patch.txt, HIVE-11609.4.patch.txt, HIVE-11609.5.patch, 
> HIVE-11609.6.patch.txt, HIVE-11609.7.patch.txt
>
>
> It seems like the capability to add a filter to an HBase scan, which was added 
> as part of HIVE-6411, doesn't work. This is primarily because 
> HiveHBaseInputFormat adds the filter in getSplits instead of getRecordReader. 
> That works fine for start and stop keys, but not for a filter, because a filter 
> is only respected when the actual scan is performed. This is also related to 
> the initial refactoring that was done as part of HIVE-3420.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18031) Support replication for Alter Database operation.

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299703#comment-16299703
 ] 

Sankar Hariappan commented on HIVE-18031:
-

Thanks for the review [~anishek]!
The test failures are unrelated to the patch. The testConstraints case has been 
failing for the last 29 builds and I believe [~daijy] is already looking at it.
I'll commit the patch to master in a while.

> Support replication for Alter Database operation.
> -
>
> Key: HIVE-18031
> URL: https://issues.apache.org/jira/browse/HIVE-18031
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-18031.01.patch, HIVE-18031.02.patch
>
>
> Currently, alter database operations that change the database properties or 
> description do not generate any events, so the changes are not replicated.
> We need to add an event for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18031:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Support replication for Alter Database operation.
> -
>
> Key: HIVE-18031
> URL: https://issues.apache.org/jira/browse/HIVE-18031
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-18031.01.patch, HIVE-18031.02.patch
>
>
> Currently, alter database operations that change the database properties or 
> description do not generate any events, so the changes are not replicated.
> We need to add an event for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18031) Support replication for Alter Database operation.

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299718#comment-16299718
 ] 

Sankar Hariappan commented on HIVE-18031:
-

Patch committed to master!

> Support replication for Alter Database operation.
> -
>
> Key: HIVE-18031
> URL: https://issues.apache.org/jira/browse/HIVE-18031
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-18031.01.patch, HIVE-18031.02.patch
>
>
> Currently, alter database operations that change the database properties or 
> description do not generate any events, so the changes are not replicated.
> We need to add an event for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Status: Open  (was: Patch Available)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}
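A small standalone sketch of the round-trip described above (the partition path is made up, and the URI-based deserialization at the end is just one possible way to preserve the space, not necessarily the fix chosen in the attached patch):

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class PathRoundTrip {
  public static void main(String[] args) {
    Path original = new Path("/warehouse/tbl/part=a b");

    // Serializing with toUri().toString() escapes the space as %20.
    String serialized = original.toUri().toString();   // /warehouse/tbl/part=a%20b

    // Deserializing with new Path(String) keeps the %20 as literal characters,
    // so the path no longer points at the real partition directory.
    Path broken = new Path(serialized);
    System.out.println(broken);                        // /warehouse/tbl/part=a%20b

    // Going back through a URI object restores the original path.
    Path restored = new Path(URI.create(serialized));
    System.out.println(restored);                      // /warehouse/tbl/part=a b
  }
}
{code}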



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan reassigned HIVE-17897:
---

Assignee: Sankar Hariappan  (was: Thejas M Nair)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Attachment: HIVE-17897.02.patch

Re-attaching the same patch after rebasing with master for the ptest build to 
trigger.

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.02.patch, HIVE-17897.1.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Assignee: Thejas M Nair  (was: Sankar Hariappan)
  Status: Patch Available  (was: Open)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.02.patch, HIVE-17897.1.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18306) Fix spark smb tests

2017-12-21 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299738#comment-16299738
 ] 

Deepak Jaiswal commented on HIVE-18306:
---

Thanks [~jdere] for review and commit.

> Fix spark smb tests
> ---
>
> Key: HIVE-18306
> URL: https://issues.apache.org/jira/browse/HIVE-18306
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Deepak Jaiswal
> Fix For: 3.0.0
>
> Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch
>
>
> It seems that 
> {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and 
> {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} have been 
> failing since HIVE-18208 went in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299736#comment-16299736
 ] 

Sankar Hariappan edited comment on HIVE-17897 at 12/21/17 9:12 AM:
---

Re-attaching the same patch after rebasing with master to trigger ptest build.


was (Author: sankarh):
Re-attaching the same patch after rebasing with master for the ptest build to 
trigger.

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.02.patch, HIVE-17897.1.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-11609) Capability to add a filter to hbase scan via composite key doesn't work

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299739#comment-16299739
 ] 

Hive QA commented on HIVE-11609:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903135/HIVE-11609.10.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11536 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort] 
(batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=177)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8350/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8350/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8350/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903135 - PreCommit-HIVE-Build

> Capability to add a filter to hbase scan via composite key doesn't work
> ---
>
> Key: HIVE-11609
> URL: https://issues.apache.org/jira/browse/HIVE-11609
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Swarnim Kulkarni
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-11609.08.patch, HIVE-11609.09.patch, 
> HIVE-11609.1.patch.txt, HIVE-11609.10.patch, HIVE-11609.2.patch.txt, 
> HIVE-11609.3.patch.txt, HIVE-11609.4.patch.txt, HIVE-11609.5.patch, 
> HIVE-11609.6.patch.txt, HIVE-11609.7.patch.txt
>
>
> It seems like the capability to add a filter to an HBase scan, which was added 
> as part of HIVE-6411, doesn't work. This is primarily because 
> HiveHBaseInputFormat adds the filter in getSplits instead of getRecordReader. 
> That works fine for start and stop keys, but not for a filter, because a filter 
> is only respected when the actual scan is performed. This is also related to 
> the initial refactoring that was done as part of HIVE-3420.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18095) add a unmanaged flag to triggers (applies to container based sessions)

2017-12-21 Thread Harish Jaiprakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299749#comment-16299749
 ] 

Harish Jaiprakash commented on HIVE-18095:
--

+1 Thanks.

> add a unmanaged flag to triggers (applies to container based sessions)
> --
>
> Key: HIVE-18095
> URL: https://issues.apache.org/jira/browse/HIVE-18095
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18095.01.patch, HIVE-18095.02.patch, 
> HIVE-18095.nogen.patch, HIVE-18095.patch
>
>
> cc [~prasanth_j]
> It should be impossible to attach global triggers to pools. Setting the global 
> flag should probably automatically remove any attachments to pools.
> Global triggers would only support actions that Tez supports (for simplicity; 
> also, for now, move doesn't make a lot of sense because the trigger would 
> apply again after the move).
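A purely hypothetical sketch of that invariant (these are not the actual workload-management classes): a trigger carrying the global/unmanaged flag refuses pool attachments, and setting the flag drops any existing ones.

{code}
import java.util.HashSet;
import java.util.Set;

// Hypothetical class, only to illustrate the rule described in the issue.
public class TriggerDef {
  private final String name;
  private boolean unmanaged;                       // the "global" flag
  private final Set<String> poolPaths = new HashSet<>();

  public TriggerDef(String name) {
    this.name = name;
  }

  public void attachToPool(String poolPath) {
    if (unmanaged) {
      throw new IllegalStateException(
          "Trigger " + name + " is global/unmanaged and cannot be attached to pool " + poolPath);
    }
    poolPaths.add(poolPath);
  }

  public void setUnmanaged(boolean value) {
    this.unmanaged = value;
    if (value) {
      poolPaths.clear();                           // setting the flag removes pool attachments
    }
  }
}
{code}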



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18095) add a unmanaged flag to triggers (applies to container based sessions)

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299769#comment-16299769
 ] 

Hive QA commented on HIVE-18095:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} standalone-metastore: The patch generated 0 new + 
491 unchanged - 1 fixed = 491 total (was 492) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} The patch metastore passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 519 unchanged - 24 
fixed = 519 total (was 543) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / ad5bcb1 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8351/yetus/whitespace-eol.txt 
|
| modules | C: standalone-metastore metastore ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8351/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> add a unmanaged flag to triggers (applies to container based sessions)
> --
>
> Key: HIVE-18095
> URL: https://issues.apache.org/jira/browse/HIVE-18095
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18095.01.patch, HIVE-18095.02.patch, 
> HIVE-18095.nogen.patch, HIVE-18095.patch
>
>
> cc [~prasanth_j]
> It should be impossible to attach global triggers to pools. Setting the global 
> flag should probably automatically remove any attachments to pools.
> Global triggers would only support actions that Tez supports (for simplicity; 
> also, for now, move doesn't make a lot of 

[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18031:

Description: 
Currently, alter database operations that change the database properties or owner 
info do not generate any events, so the changes are not replicated.
We need to add an event for this.

  was:
Currently, alter database operations that change the database properties or 
description do not generate any events, so the changes are not replicated.
We need to add an event for this.


> Support replication for Alter Database operation.
> -
>
> Key: HIVE-18031
> URL: https://issues.apache.org/jira/browse/HIVE-18031
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-18031.01.patch, HIVE-18031.02.patch
>
>
> Currently, alter database operations that change the database properties or 
> owner info do not generate any events, so the changes are not replicated.
> We need to add an event for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18095) add a unmanaged flag to triggers (applies to container based sessions)

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299844#comment-16299844
 ] 

Hive QA commented on HIVE-18095:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903153/HIVE-18095.02.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11537 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8351/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8351/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8351/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903153 - PreCommit-HIVE-Build

> add a unmanaged flag to triggers (applies to container based sessions)
> --
>
> Key: HIVE-18095
> URL: https://issues.apache.org/jira/browse/HIVE-18095
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18095.01.patch, HIVE-18095.02.patch, 
> HIVE-18095.nogen.patch, HIVE-18095.patch
>
>
> cc [~prasanth_j]
> It should be impossible to attach global triggers to pools. Setting the global 
> flag should probably automatically remove any attachments to pools.
> Global triggers would only support actions that Tez supports (for simplicity; 
> also, for now, move doesn't make a lot of sense because the trigger would 
> apply again after the move).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299859#comment-16299859
 ] 

Hive QA commented on HIVE-18257:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} standalone-metastore: The patch generated 3 new + 572 
unchanged - 0 fixed = 575 total (was 572) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
35s{color} | {color:red} ql: The patch generated 3 new + 477 unchanged - 1 
fixed = 480 total (was 478) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / ad5bcb1 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8352/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8352/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8352/yetus/whitespace-eol.txt 
|
| modules | C: standalone-metastore ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8352/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> implement scheduling policy configuration instead of hardcoding fair 
> scheduling
> ---
>
> Key: HIVE-18257
> URL: https://issues.apache.org/jira/browse/HIVE-18257
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18257.01.patch, HIVE-18257.02.patch, 
> HIVE-18257.02.patch, HIVE-18257.03.patch, HIVE-18257.patch
>
>
> Not sure it makes sense to actually make it pluggable. At least the standard 
> ones will be an enum; we don't expect people to implement custom classes - 
> phase 2 if someone wants to
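A minimal sketch of the enum-plus-configuration idea (the names are made up, not the classes introduced by the attached patches): the standard policies are an enum resolved from a config value instead of hardcoding fair scheduling.

{code}
// Hypothetical enum of the standard scheduling policies.
public enum SchedulingPolicy {
  FAIR, FIFO;

  public static SchedulingPolicy fromConf(String value) {
    if (value == null || value.trim().isEmpty()) {
      return FAIR;   // preserve the current default behavior
    }
    return SchedulingPolicy.valueOf(value.trim().toUpperCase());
  }
}
{code}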



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299900#comment-16299900
 ] 

Hive QA commented on HIVE-18257:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903154/HIVE-18257.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11538 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8352/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8352/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8352/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903154 - PreCommit-HIVE-Build

> implement scheduling policy configuration instead of hardcoding fair 
> scheduling
> ---
>
> Key: HIVE-18257
> URL: https://issues.apache.org/jira/browse/HIVE-18257
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18257.01.patch, HIVE-18257.02.patch, 
> HIVE-18257.02.patch, HIVE-18257.03.patch, HIVE-18257.patch
>
>
> Not sure it makes sense to actually make it pluggable. At least the standard 
> ones will be an enum; we don't expect people to implement custom classes - 
> phase 2 if someone wants to



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17829) ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299913#comment-16299913
 ] 

Hive QA commented on HIVE-17829:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} hbase-handler: The patch generated 0 new + 6 
unchanged - 4 fixed = 6 total (was 10) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / ad5bcb1 |
| Default Java | 1.8.0_111 |
| modules | C: hbase-handler U: hbase-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8353/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2
> --
>
> Key: HIVE-17829
> URL: https://issues.apache.org/jira/browse/HIVE-17829
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 2.1.0
>Reporter: Chiran Ravani
>Assignee: anishek
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HIVE-17829.0.patch, HIVE-17829.1.patch
>
>
> Stack
> {code}
> 2017-10-09T09:39:54,804 ERROR [HiveServer2-Background-Pool: Thread-95]: 
> metadata.Table (Table.java:getColsInternal(642)) - Unable to get field from 
> serde: org.apache.hadoop.hive.hbase.HBaseSerDe
> java.lang.ArrayIndexOutOfBoundsException: 1
> at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77]
> at 
> org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:104)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.(LazySimpleStructObjectInspector.java:77)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyH

[jira] [Commented] (HIVE-17829) ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299983#comment-16299983
 ] 

Hive QA commented on HIVE-17829:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903163/HIVE-17829.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11538 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fp_literal_arithmetic] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testSyntheticComplexSchema[5]
 (batchId=190)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag[5]
 (batchId=190)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8353/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8353/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8353/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903163 - PreCommit-HIVE-Build

> ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2
> --
>
> Key: HIVE-17829
> URL: https://issues.apache.org/jira/browse/HIVE-17829
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 2.1.0
>Reporter: Chiran Ravani
>Assignee: anishek
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HIVE-17829.0.patch, HIVE-17829.1.patch
>
>
> Stack
> {code}
> 2017-10-09T09:39:54,804 ERROR [HiveServer2-Background-Pool: Thread-95]: 
> metadata.Table (Table.java:getColsInternal(642)) - Unable to get field from 
> serde: org.apache.hadoop.hive.hbase.HBaseSerDe
> java.lang.ArrayIndexOutOfBoundsException: 1
> at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77]
> at 
> org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:104)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.(LazySimpleStructObjectInspector.java:77)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyHBaseStructInspector(HBaseLazyObjectFactory.java:79)
>  ~[hive-hbase-h

[jira] [Commented] (HIVE-14146) Column comments with "\n" character "corrupts" table metadata

2017-12-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300051#comment-16300051
 ] 

Marcin Łuczyński commented on HIVE-14146:
-

I have a customer on Hive 1.2.1.2.4 who has the same problem. Was this fix 
backported to the 1.x stream?

> Column comments with "\n" character "corrupts" table metadata
> -
>
> Key: HIVE-14146
> URL: https://issues.apache.org/jira/browse/HIVE-14146
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Fix For: 2.3.0
>
> Attachments: HIVE-14146.10.patch, HIVE-14146.11.patch, 
> HIVE-14146.2.patch, HIVE-14146.3.patch, HIVE-14146.4.patch, 
> HIVE-14146.5.patch, HIVE-14146.6.patch, HIVE-14146.7.patch, 
> HIVE-14146.8.patch, HIVE-14146.9.patch, HIVE-14146.patch, changes
>
>
> Create a table with the following (note the \n in the COMMENT):
> {noformat}
> CREATE TABLE commtest(first_nm string COMMENT 'Indicates First name\nof an 
> individual');
> {noformat}
> Describe shows that now the metadata is messed up:
> {noformat}
> beeline> describe commtest;
> +-------------------+------------+------------------------+
> | col_name          | data_type  | comment                |
> +-------------------+------------+------------------------+
> | first_nm          | string     | Indicates First name   |
> | of an individual  | NULL       | NULL                   |
> +-------------------+------------+------------------------+
> {noformat}
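
A minimal client-side workaround sketch (not the fix tracked by this issue): strip embedded 
newlines from the comment text before issuing the DDL. The JDBC URL and table name below are 
hypothetical.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CommentSanitizer {
  // Collapse embedded newlines so DESCRIBE keeps one row per column.
  static String sanitize(String comment) {
    return comment.replaceAll("[\\r\\n]+", " ");
  }

  public static void main(String[] args) throws Exception {
    String comment = sanitize("Indicates First name\nof an individual");
    // Hypothetical connection string; adjust for your HiveServer2 instance.
    try (Connection con =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = con.createStatement()) {
      stmt.execute("CREATE TABLE commtest (first_nm string COMMENT '" + comment + "')");
    }
  }
}
{code}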



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-18325) sending flag to do case unaware schema evolution to reader.

2017-12-21 Thread piyush mukati (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

piyush mukati reassigned HIVE-18325:



> sending flag to do case unaware schema evolution to reader.
> ---
>
> Key: HIVE-18325
> URL: https://issues.apache.org/jira/browse/HIVE-18325
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Reporter: piyush mukati
>Assignee: piyush mukati
>
> In the ORC case, the reader schema passed by Hive is all lowercase, so if the 
> column name stored in the file contains any uppercase characters, the reader 
> returns null values for those columns even though the data is present in the 
> file. Column name matching during schema evolution should be case-unaware, so 
> we need to pass a config for this from Hive. The ORC config 
> (orc.schema.evolution.case.sensitive) will be exposed by 
> https://issues.apache.org/jira/browse/ORC-264
>  
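
A hedged sketch of the reader-side setting once ORC-264 exposes it. Only the property name comes 
from this issue; the file path is hypothetical.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;

public class CaseUnawareOrcRead {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Property named above, to be exposed by ORC-264; false asks the reader to
    // match column names case-insensitively during schema evolution.
    conf.setBoolean("orc.schema.evolution.case.sensitive", false);
    Reader reader = OrcFile.createReader(new Path("/tmp/sample.orc"),
        OrcFile.readerOptions(conf));
    System.out.println(reader.getSchema());
  }
}
{code}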



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18052) Run p-tests on mm tables

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300103#comment-16300103
 ] 

Hive QA commented on HIVE-18052:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903180/HIVE-18052.11.patch

{color:green}SUCCESS:{color} +1 due to 94 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1388 failed/errored test(s), 10189 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=90)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,index_bitmap_auto.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[udf_invalid.q,authorization_uri_export.q,druid_datasource2.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,orc_replace_columns2_acid.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,duplicate_alias_in_transform.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,udaf_collect_set_unsupported.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,merge_negative_3.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,split_sample_out_of_range.q,alter_concatenate_indexed_table.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,analyze_view.q,exim_14_nonpart_part.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,add_partition_with_whitelist.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,exim_03_nonpart_noncompat_colschema.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,archive1.q,subquery_multiple_cols_in_select.q,drop_index_failure.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,delete_non_acid_table.q,udf_greatest_error_2.q,create_with_constraints_validate.q,authorization_view_6.q,show_tablestatus.q,describe_xpath3.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,authorization_create_tbl.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_size_wrong_type.q,exim_04_nonpart_noncompat_colnumber.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,dyn_part4.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,create_unknown_genericudf.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,udf_map_values_arg_type.q,alter_partition_change_co
l_nonexist.q,create_with_constraints_enforced.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_uri_index.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,truncate_column_seqfile.q,create_view_failure5.q,authorization_create_view.q,ctasnullcol.q,create_or_replace_view1.q,udf_max.q,exim_01_nonpart_over_

[jira] [Commented] (HIVE-18052) Run p-tests on mm tables

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300105#comment-16300105
 ] 

Hive QA commented on HIVE-18052:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 6 new + 1642 unchanged - 2 
fixed = 1648 total (was 1644) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
49s{color} | {color:red} root: The patch generated 6 new + 2761 unchanged - 2 
fixed = 2767 total (was 2763) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / ad5bcb1 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8354/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8354/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8354/yetus/whitespace-eol.txt 
|
| modules | C: common standalone-metastore ql service hcatalog/core 
hcatalog/hcatalog-pig-adapter hcatalog/server-extensions 
hcatalog/webhcat/java-client hcatalog/streaming . itests/hcatalog-unit 
itests/hive-minikdc itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8354/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Run p-tests on mm tables
> 
>
> Key: HIVE-18052
> URL: https://issues.apache.org/jira/browse/HIVE-18052
> Project: Hive
>  Issue Type: Task
>Reporter: Steve Yeom
>Assignee: Steve Yeom
> Attachments: HIVE-18052.1.patch, HIVE-18052.10.patch, 
> HIVE-18052.11.patch, HIVE-18052.2.patch, HIVE-18052.3.patch, 
> HIVE-18052.4.patch, HIVE-18052.5.patch, HIVE-18052.6.patch, 
> HIVE-18052.7.patch, HIVE-18052.8.patch, HIVE-18052.9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#640

[jira] [Commented] (HIVE-11609) Capability to add a filter to hbase scan via composite key doesn't work

2017-12-21 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300116#comment-16300116
 ] 

Yongzhi Chen commented on HIVE-11609:
-

The failures are not related.

> Capability to add a filter to hbase scan via composite key doesn't work
> ---
>
> Key: HIVE-11609
> URL: https://issues.apache.org/jira/browse/HIVE-11609
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Swarnim Kulkarni
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-11609.08.patch, HIVE-11609.09.patch, 
> HIVE-11609.1.patch.txt, HIVE-11609.10.patch, HIVE-11609.2.patch.txt, 
> HIVE-11609.3.patch.txt, HIVE-11609.4.patch.txt, HIVE-11609.5.patch, 
> HIVE-11609.6.patch.txt, HIVE-11609.7.patch.txt
>
>
> It seems like the capability to add a filter to an HBase scan, which was added 
> as part of HIVE-6411, doesn't work. This is primarily because in 
> HiveHBaseInputFormat the filter is added in getSplits instead of 
> getRecordReader. This works fine for start and stop keys but not for filters, 
> because a filter is respected only when an actual scan is performed. This is 
> also related to the initial refactoring that was done as part of HIVE-3420.
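
A hedged, standalone sketch of the intended behavior using the plain HBase client API (this is 
not the actual HiveHBaseInputFormat code; the table name and row-key prefix are hypothetical). 
The point is that the filter must be attached to the Scan that is actually executed when rows 
are read, not only to the one used for split computation.

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilteredScanSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("hive_hbase_table"))) {
      Scan scan = new Scan();
      // The filter takes effect only on the Scan that performs the actual read.
      scan.setFilter(new PrefixFilter(Bytes.toBytes("key_prefix")));
      try (ResultScanner results = table.getScanner(scan)) {
        results.forEach(r -> System.out.println(Bytes.toString(r.getRow())));
      }
    }
  }
}
{code}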



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Status: Open  (was: Patch Available)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.02.patch, HIVE-17897.1.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}
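
A minimal standalone reproduction of the serialize/deserialize mismatch described above (the 
warehouse path is made up):

{code}
import org.apache.hadoop.fs.Path;

public class PathUriRoundTrip {
  public static void main(String[] args) {
    Path original = new Path("/warehouse/db.db/tbl/part=has space");
    // toUri().toString() percent-encodes the space ...
    String serialized = original.toUri().toString();  // .../part=has%20space
    // ... but new Path(String) keeps "%20" as literal characters,
    // so the deserialized location no longer matches the original one.
    Path roundTripped = new Path(serialized);
    System.out.println(original.equals(roundTripped));  // false
  }
}
{code}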



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Attachment: HIVE-17897.2.patch

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Attachment: (was: HIVE-17897.02.patch)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Status: Patch Available  (was: Open)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18324) drop database failed for not empty tables, but acid table info in metastore is still deleted

2017-12-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18324:
--
Component/s: Transactions

> drop database failed for not empty tables, but  acid table info in metastore 
> is still deleted
> -
>
> Key: HIVE-18324
> URL: https://issues.apache.org/jira/browse/HIVE-18324
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 2.1.1, 2.3.2
>Reporter: J.P Feng
>
> I used "drop database hive_test" to drop a database (without cascade). Because 
> the database is not empty, the operation fails and throws 
> InvalidOperationException(message:Database hive_test is not empty. One or 
> more tables exist.),
> but the ACID table info in 
> TXN_COMPONENTS, COMPLETED_TXN_COMPONENTS, COMPACTION_QUEUE, and 
> COMPLETED_COMPACTIONS is still deleted by AcidEventListener.
> So I advise that onDropDatabase in AcidEventListener should check 
> DropDatabaseEvent.getStatus before deleting the ACID table info.
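
A hedged sketch of the suggested guard. Class and package names follow Hive 2.x; the cleanup 
call is a placeholder for the existing logic, not an implementation of it.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.MetaStoreEventListener;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.events.DropDatabaseEvent;

public class GuardedAcidEventListener extends MetaStoreEventListener {
  public GuardedAcidEventListener(Configuration conf) {
    super(conf);
  }

  @Override
  public void onDropDatabase(DropDatabaseEvent dbEvent) throws MetaException {
    // Only clean up ACID metadata when the drop actually succeeded.
    if (!dbEvent.getStatus()) {
      return;
    }
    cleanupAcidMetadata(dbEvent.getDatabase().getName());
  }

  private void cleanupAcidMetadata(String dbName) {
    // Placeholder: the existing deletes from TXN_COMPONENTS, COMPLETED_TXN_COMPONENTS,
    // COMPACTION_QUEUE and COMPLETED_COMPACTIONS would go here.
  }
}
{code}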



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300245#comment-16300245
 ] 

Hive QA commented on HIVE-17897:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / ad5bcb1 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8355/yetus/whitespace-eol.txt 
|
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8355/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(T

[jira] [Commented] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300318#comment-16300318
 ] 

Hive QA commented on HIVE-17897:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903274/HIVE-17897.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11538 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=177)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8355/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8355/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8355/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903274 - PreCommit-HIVE-Build

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
>

[jira] [Commented] (HIVE-18315) update tests use non-acid tables

2017-12-21 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300437#comment-16300437
 ] 

Jason Dere commented on HIVE-18315:
---

+1

> update tests use non-acid tables
> 
>
> Key: HIVE-18315
> URL: https://issues.apache.org/jira/browse/HIVE-18315
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18315.01.patch, HIVE-18315.02.patch, 
> HIVE-18315.03.patch
>
>
> Some tests like TestTxnLoadData need to create non-acid tables so that 
> non-acid to acid conversion can be tested; they need explicit 
> tblproperties('transactional'='false').
> HCat doesn't support ACID, so its tests need to use non-acid tables.
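
A hedged sketch of such a test table created over Hive JDBC. The connection URL and table name 
are hypothetical; only the tblproperties setting comes from this issue.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class NonAcidTestTable {
  public static void main(String[] args) throws Exception {
    try (Connection con =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = con.createStatement()) {
      // Explicitly opt out of ACID so the test can later exercise
      // non-acid to acid conversion on this table.
      stmt.execute("CREATE TABLE non_acid_src (a int, b string) STORED AS ORC "
          + "TBLPROPERTIES ('transactional'='false')");
    }
  }
}
{code}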



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300543#comment-16300543
 ] 

Sankar Hariappan commented on HIVE-17897:
-

Patch committed to master!
Thanks for the fix [~thejas]!

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Labels: DR replication  (was: )

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
>  Labels: DR, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300543#comment-16300543
 ] 

Sankar Hariappan edited comment on HIVE-17897 at 12/21/17 8:46 PM:
---

Test failures are irrelevant. Patch is committed to master!
Thanks for the fix [~thejas]!


was (Author: sankarh):
Patch committed to master!
Thanks for the fix [~thejas]!

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
>  Labels: DR, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17897) "repl load" in bootstrap phase fails when partitions have whitespace

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-17897:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> "repl load" in bootstrap phase fails when partitions have whitespace
> 
>
> Key: HIVE-17897
> URL: https://issues.apache.org/jira/browse/HIVE-17897
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sankar Hariappan
>Assignee: Thejas M Nair
>Priority: Critical
>  Labels: DR, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-17897.1.patch, HIVE-17897.2.patch
>
>
> The issue is that Path.toURI().toString() is being used to serialize the 
> location, while new Path(String) is used to deserialize it. URI escapes chars 
> such as space, so the deserialized location doesn't point to the correct file 
> location.
> Following exception is seen - 
> {code}
> 2017-10-24T11:58:34,451 ERROR [d5606640-8174-4584-8b54-936b0f5628fa main] 
> exec.Task: Failed with exception null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.regularCopy(CopyUtils.java:211)
> at 
> org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:71)
> at 
> org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:137)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:206)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2276)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1906)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1623)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1362)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1352)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Work started] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2017-12-21 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-18192 started by Sankar Hariappan.
---
> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: ACID, DR
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persistable map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.
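
A hedged, in-memory stand-in for the persistable map described above (this is an illustration 
only, not the actual metastore schema or API):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TxnToWriteIdMap {
  // global txn id -> (fully qualified table name -> write id for that table)
  private final Map<Long, Map<String, Long>> txnToTableWriteId = new ConcurrentHashMap<>();

  /** Returns the write id already recorded for (txnId, table), or records the candidate. */
  public long allocate(long txnId, String table, long candidateWriteId) {
    return txnToTableWriteId
        .computeIfAbsent(txnId, t -> new ConcurrentHashMap<>())
        .computeIfAbsent(table, t -> candidateWriteId);
  }

  public Long lookup(long txnId, String table) {
    Map<String, Long> perTable = txnToTableWriteId.get(txnId);
    return perTable == null ? null : perTable.get(table);
  }
}
{code}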



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18326:
---


> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18326:

Attachment: HIVE-18326.patch

A patch. I didn't have time to test it because I have to run an errand now :) 
It does compile.
[~ewohlstadter]  can you take a look?
[~sseth] [~gopalv]  I'd also need a binding plus-one, or you could even review 
:P

> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18326.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18326:

Status: Patch Available  (was: Open)

> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18326.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18326:

Description: It is currently possible for e.g. two sides of a union (or a 
join for that matter) to have slightly different priorities. We don't want to 
preempt running tasks on one side in favor of the other side in such cases.
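
Editorial illustration only (the attached HIVE-18326.patch is not reproduced here): the sketch below captures the intended rule that a higher-priority pending task may preempt a running task only when the pending task's vertex transitively depends on the running task's vertex, so two independent sides of a union or join never preempt each other. All type and method names are invented, and the "lower number = higher priority" ordering is an assumption.

{code}
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for a dependency-aware preemption guard; not the real patch.
final class PreemptionGuardSketch {
  /** upstreamEdges: vertex name -> the vertices it consumes input from. */
  static boolean shouldPreempt(String runningVertex, String pendingVertex,
      int runningPriority, int pendingPriority,
      Map<String, Set<String>> upstreamEdges) {
    if (pendingPriority >= runningPriority) {
      return false;                       // assumption: lower number = higher priority
    }
    // Walk the pending vertex's upstream closure; preempt only if it reaches the
    // running vertex, i.e. the pending task actually waits on the running one.
    Deque<String> stack = new ArrayDeque<>();
    Set<String> seen = new HashSet<>();
    stack.push(pendingVertex);
    while (!stack.isEmpty()) {
      String v = stack.pop();
      if (!seen.add(v)) {
        continue;
      }
      if (v.equals(runningVertex)) {
        return true;
      }
      stack.addAll(upstreamEdges.getOrDefault(v, Collections.emptySet()));
    }
    return false;                         // e.g. two sides of a union: no dependency
  }
}
{code}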

> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18326.patch
>
>
> It is currently possible for e.g. two sides of a union (or a join for that 
> matter) to have slightly different priorities. We don't want to preempt 
> running tasks on one side in favor of the other side in such cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300650#comment-16300650
 ] 

Sergey Shelukhin edited comment on HIVE-18326 at 12/21/17 10:24 PM:


A patch. I didn't have time to test it because I have to run an errand now :) 
It does compile. Will test later :)
[~ewohlstadter]  can you take a look?
[~sseth] [~gopalv]  I'd also need a binding plus-one, or you could even review 
:P


was (Author: sershe):
A patch. I didn't have time to test it cause I have to run an errand now :) It 
does compile.
[~ewohlstadter]  can you take a look?
[~sseth] [~gopalv]  I'd also need a binding plus-one, or you could even review 
:P

> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18326.patch
>
>
> It is currently possible for e.g. two sides of a union (or a join for that 
> matter) to have slightly different priorities. We don't want to preempt 
> running tasks on one side in favor of the other side in such cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300692#comment-16300692
 ] 

Hive QA commented on HIVE-18326:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} llap-tez: The patch generated 4 new + 146 unchanged - 
0 fixed = 150 total (was 146) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / b4b06ac |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8356/yetus/diff-checkstyle-llap-tez.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8356/yetus/whitespace-eol.txt 
|
| modules | C: common llap-tez U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8356/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18326.patch
>
>
> It is currently possible for e.g. two sides of a union (or a join for that 
> matter) to have slightly different priorities. We don't want to preempt 
> running tasks on one side in favor of the other side in such cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18224) Introduce interface above driver

2017-12-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300713#comment-16300713
 ] 

Ashutosh Chauhan commented on HIVE-18224:
-

There are a bunch of deprecations. Are the deprecated usages hard to get rid 
of? Do you plan to do that in a follow-up?
Other than that, the patch looks good.

> Introduce interface above driver
> 
>
> Key: HIVE-18224
> URL: https://issues.apache.org/jira/browse/HIVE-18224
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-18224.01.patch, HIVE-18224.02.patch, 
> HIVE-18224.03.patch, HIVE-18224.04.patch
>
>
> Add an interface above driver; and use it outside of ql.
> The goal is to enable the overlaying of the Driver with some strategy.
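
Editorial sketch: to make the idea concrete, a hypothetical version of such an interface follows. The name and method set are assumptions made for illustration, not the actual HIVE-18224 patch, whose interface may well differ.

{code}
// Hypothetical sketch of "an interface above driver". Callers outside ql would
// depend on this interface instead of the concrete Driver, which lets the Driver
// be overlaid or swapped for another strategy. Names and methods are assumptions.
public interface QueryDriver extends AutoCloseable {
  /** Compile and execute a single command, returning a process-style exit code. */
  int run(String command);

  /** Release resources held for the current command/session. */
  @Override
  void close();
}
{code}

Under this sketch, a concrete Driver would implement the interface, and entry points such as the CLI or HiveServer2 operation handlers would be written against the interface only.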



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15393) Update Guava version

2017-12-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-15393:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Slim!

> Update Guava version
> 
>
> Key: HIVE-15393
> URL: https://issues.apache.org/jira/browse/HIVE-15393
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, 
> HIVE-15393.5.patch, HIVE-15393.6.patch, HIVE-15393.7.patch, 
> HIVE-15393.8.patch, HIVE-15393.9.patch, HIVE-15393.patch
>
>
> Druid base code is using newer version of guava 16.0.1 that is not compatible 
> with the current version used by Hive.
> FYI Hadoop project is moving to Guava 18 not sure if it is better to move to 
> guava 18 or even 19.
> https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18224) Introduce interface above driver

2017-12-21 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300717#comment-16300717
 ] 

Zoltan Haindrich commented on HIVE-18224:
-

[~ashutoshc] yes, absolutely... after the issues I hit when using a clean 
config to isolate each command from the others, I will probably become much 
more careful... I will file follow-ups to remove those.

> Introduce interface above driver
> 
>
> Key: HIVE-18224
> URL: https://issues.apache.org/jira/browse/HIVE-18224
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-18224.01.patch, HIVE-18224.02.patch, 
> HIVE-18224.03.patch, HIVE-18224.04.patch
>
>
> Add an interface above driver; and use it outside of ql.
> The goal is to enable the overlaying of the Driver with some strategy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18052) Run p-tests on mm tables

2017-12-21 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18052:
--
Attachment: HIVE-18052.12.patch

Resubmitting the same patch 11 as patch 12 since patch 11 run did generate logs 
but not the report. 

> Run p-tests on mm tables
> 
>
> Key: HIVE-18052
> URL: https://issues.apache.org/jira/browse/HIVE-18052
> Project: Hive
>  Issue Type: Task
>Reporter: Steve Yeom
>Assignee: Steve Yeom
> Attachments: HIVE-18052.1.patch, HIVE-18052.10.patch, 
> HIVE-18052.11.patch, HIVE-18052.12.patch, HIVE-18052.2.patch, 
> HIVE-18052.3.patch, HIVE-18052.4.patch, HIVE-18052.5.patch, 
> HIVE-18052.6.patch, HIVE-18052.7.patch, HIVE-18052.8.patch, HIVE-18052.9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18326) LLAP Tez scheduler - only preempt tasks if there's a dependency between them

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300735#comment-16300735
 ] 

Hive QA commented on HIVE-18326:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903318/HIVE-18326.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11524 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=237)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_pad_convert] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.org.apache.hadoop.hive.cli.TestSparkCliDriver
 (batchId=105)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8356/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8356/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8356/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903318 - PreCommit-HIVE-Build

> LLAP Tez scheduler - only preempt tasks if there's a dependency between them
> 
>
> Key: HIVE-18326
> URL: https://issues.apache.org/jira/browse/HIVE-18326
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18326.patch
>
>
> It is currently possible for e.g. two sides of a union (or a join for that 
> matter) to have slightly different priorities. We don't want to preempt 
> running tasks on one side in favor of the other side in such cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18224) Introduce interface above driver

2017-12-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300748#comment-16300748
 ] 

Ashutosh Chauhan commented on HIVE-18224:
-

+1

> Introduce interface above driver
> 
>
> Key: HIVE-18224
> URL: https://issues.apache.org/jira/browse/HIVE-18224
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-18224.01.patch, HIVE-18224.02.patch, 
> HIVE-18224.03.patch, HIVE-18224.04.patch
>
>
> Add an interface above driver; and use it outside of ql.
> The goal is to enable the overlaying of the Driver with some strategy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18053) Support different table types for MVs

2017-12-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300760#comment-16300760
 ] 

Ashutosh Chauhan commented on HIVE-18053:
-

Code changes look good. A few comments, mostly about adding comments:
{code}
final List<String> qualifiedTableName;
if (tableRel instanceof Project) {
  qualifiedTableName = tableRel.getInput(0).getTable().getQualifiedName();
} else {
  qualifiedTableName = tableRel.getTable().getQualifiedName();
}
{code}
Good to add a comment on when tableRel can be a Project, and that it cannot be 
of a third type.

{code}
DruidQuery.create(cluster, cluster.traitSetOf(BindableConvention.INSTANCE),
{code}
Good to add a comment to the effect that ideally we should use the HiveRelNode 
convention, but since the Volcano planner throws in that case, we set it to 
Bindable. Since we don't really use the convention currently, it's OK. In the 
future, if we want to make use of the convention (e.g., while directly 
generating the operator tree instead of the AST), this needs to be changed.

{code}
String viewText = tab.getViewExpandedText();
{code}
Good to add a comment on why this needs the expanded text.

Also, the patch introduces checkstyle warnings.
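
Editorial illustration of the first request: a sketch of the snippet with the suggested comments folded in. The wrapper class is invented, and the comment wording draws on the follow-up reply further down (the Project branch comes from earlier HiveRelNode-convention work and, per that reply, is not actually needed); this is not the committed HIVE-18053 code.

{code}
import java.util.List;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Project;

// Sketch only: the reviewed snippet with the kind of comments the review asks for.
final class QualifiedNameSketch {
  static List<String> qualifiedNameOf(RelNode tableRel) {
    // tableRel is expected to be a TableScan; the Project case is a leftover from
    // earlier work that wrapped the scan while experimenting with the HiveRelNode
    // convention, and no third node type is expected here.
    final List<String> qualifiedTableName;
    if (tableRel instanceof Project) {
      qualifiedTableName = tableRel.getInput(0).getTable().getQualifiedName();
    } else {
      qualifiedTableName = tableRel.getTable().getQualifiedName();
    }
    return qualifiedTableName;
  }
}
{code}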

> Support different table types for MVs
> -
>
> Key: HIVE-18053
> URL: https://issues.apache.org/jira/browse/HIVE-18053
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-18053.patch
>
>
> MVs backed by MM tables, managed tables, external tables. This might work 
> already, but we need to add tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18285) StatsTask uses a cached ql.metadata.Table object

2017-12-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300771#comment-16300771
 ] 

Ashutosh Chauhan commented on HIVE-18285:
-

I agree with Zoltan. We don't want to make extra metastore calls.
All modifications of metadata objects shall be made on the compiler side; the 
metastore should just be used to read and persist metadata objects. This is 
for two reasons: a) making metastore calls adds latency to compilation, and 
b) having metadata objects modified in two different processes makes it hard 
to reason about which process is making which changes.
Secondly, the logic to convert the table type based on config seems to me to 
belong to the compiler; the metastore shall not govern that.
Lastly, with the standalone metastore, the ACID concept seems specific to Hive 
and may not be applicable to other engines, so the Hive compiler seems like a 
logical place for that logic rather than the metastore, which is meant to be 
used by multiple engines.

> StatsTask uses a cached ql.metadata.Table object
> 
>
> Key: HIVE-18285
> URL: https://issues.apache.org/jira/browse/HIVE-18285
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Statistics
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18285.01.patch
>
>
> this then causes BasicStatsTask.aggregateStats(Hive) to call 
> Hive.alterTable() with a stale Table object.  (It misses any changes made by 
> any MetaStorePreEventListener)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18052) Run p-tests on mm tables

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300785#comment-16300785
 ] 

Hive QA commented on HIVE-18052:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903331/HIVE-18052.12.patch

{color:green}SUCCESS:{color} +1 due to 94 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1393 failed/errored test(s), 10201 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[udf_invalid.q,authorization_uri_export.q,druid_datasource2.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,orc_replace_columns2_acid.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,duplicate_alias_in_transform.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,udaf_collect_set_unsupported.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,merge_negative_3.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,split_sample_out_of_range.q,alter_concatenate_indexed_table.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,analyze_view.q,exim_14_nonpart_part.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,add_partition_with_whitelist.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,exim_03_nonpart_noncompat_colschema.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,archive1.q,subquery_multiple_cols_in_select.q,drop_index_failure.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,delete_non_acid_table.q,udf_greatest_error_2.q,create_with_constraints_validate.q,authorization_view_6.q,show_tablestatus.q,describe_xpath3.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,authorization_create_tbl.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_size_wrong_type.q,exim_04_nonpart_noncompat_colnumber.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,dyn_part4.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,create_unknown_genericudf.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,udf_map_values_arg_type.q,alter_partition_change_co
l_nonexist.q,create_with_constraints_enforced.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_uri_index.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,truncate_column_seqfile.q,create_view_failure5.q,authorization_create_view.q,ctasnullcol.q,create_or_replace_view1.q,udf_max.q,exim_01_nonpart_over_loaded.q,msck_repair_1.q,orc_change_fileformat_acid.q,udf_nonexistent_resource.q,exim_19_external_over_existing.q,serde_regex2.q,msck_repair_2.q,exim_06_nonpart_noncompat_storage.q,illegal_partition_type4.q,udf_sort_array_by_wrong1.q,create_or_replace_view5.q,windowing_leadlag_in_udaf.q,authorization_drop_index.q,truncate_column_indexed_table.q,avro_decimal.q,inva

[jira] [Updated] (HIVE-18053) Support different table types for MVs

2017-12-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18053:
---
Attachment: HIVE-18053.01.patch

> Support different table types for MVs
> -
>
> Key: HIVE-18053
> URL: https://issues.apache.org/jira/browse/HIVE-18053
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-18053.01.patch, HIVE-18053.patch
>
>
> MVs backed by MM tables, managed tables, external tables. This might work 
> already, but we need to add tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18052) Run p-tests on mm tables

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300787#comment-16300787
 ] 

Hive QA commented on HIVE-18052:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
48s{color} | {color:red} ql: The patch generated 6 new + 1642 unchanged - 2 
fixed = 1648 total (was 1644) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
53s{color} | {color:red} root: The patch generated 6 new + 2761 unchanged - 2 
fixed = 2767 total (was 2763) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 2a5ba5c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8357/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8357/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8357/yetus/whitespace-eol.txt 
|
| modules | C: common standalone-metastore ql service hcatalog/core 
hcatalog/hcatalog-pig-adapter hcatalog/server-extensions 
hcatalog/webhcat/java-client hcatalog/streaming . itests/hcatalog-unit 
itests/hive-minikdc itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8357/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Run p-tests on mm tables
> 
>
> Key: HIVE-18052
> URL: https://issues.apache.org/jira/browse/HIVE-18052
> Project: Hive
>  Issue Type: Task
>Reporter: Steve Yeom
>Assignee: Steve Yeom
> Attachments: HIVE-18052.1.patch, HIVE-18052.10.patch, 
> HIVE-18052.11.patch, HIVE-18052.12.patch, HIVE-18052.2.patch, 
> HIVE-18052.3.patch, HIVE-18052.4.patch, HIVE-18052.5.patch, 
> HIVE-18052.6.patch, HIVE-18052.7.patch, HIVE-18052.8.patch, HIVE-18052.9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (HIVE-18053) Support different table types for MVs

2017-12-21 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300790#comment-16300790
 ] 

Jesus Camacho Rodriguez commented on HIVE-18053:


[~ashutoshc], thanks, I have uploaded a new patch. I added the comments that 
you suggested, except for the first one: actually that check is not needed, as 
it will always be a TS (it was part of the work that I had been doing to use 
HiveRelNode convention instead of Bindable by adding a Project on top of the 
DruidQuery, then I did not remove the code).

> Support different table types for MVs
> -
>
> Key: HIVE-18053
> URL: https://issues.apache.org/jira/browse/HIVE-18053
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-18053.01.patch, HIVE-18053.patch
>
>
> MVs backed by MM tables, managed tables, external tables. This might work 
> already, but we need to add tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18053) Support different table types for MVs

2017-12-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300793#comment-16300793
 ] 

Ashutosh Chauhan commented on HIVE-18053:
-

+1

> Support different table types for MVs
> -
>
> Key: HIVE-18053
> URL: https://issues.apache.org/jira/browse/HIVE-18053
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-18053.01.patch, HIVE-18053.patch
>
>
> MVs backed by MM tables, managed tables, external tables. This might work 
> already, but we need to add tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18053) Support different table types for MVs

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300805#comment-16300805
 ] 

Hive QA commented on HIVE-18053:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 15 new + 1197 unchanged - 6 
fixed = 1212 total (was 1203) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 2a5ba5c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8358/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8358/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support different table types for MVs
> -
>
> Key: HIVE-18053
> URL: https://issues.apache.org/jira/browse/HIVE-18053
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-18053.01.patch, HIVE-18053.patch
>
>
> MVs backed by MM tables, managed tables, external tables. This might work 
> already, but we need to add tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver

2017-12-21 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-18148:
--
Attachment: HIVE-18148.3.patch

Fix the check style.

> NPE in SparkDynamicPartitionPruningResolver
> ---
>
> Key: HIVE-18148
> URL: https://issues.apache.org/jira/browse/HIVE-18148
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch, 
> HIVE-18148.3.patch
>
>
> The stack trace is:
> {noformat}
> 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] 
> ql.Driver: FAILED: NullPointerException null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125)
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568)
> {noformat}
> At this stage, there shouldn't be a DPP sink whose target map work is null. 
> The root cause seems to be a malformed operator tree generated by 
> SplitOpTreeForDPP.
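
Editorial sketch only, not any of the attached HIVE-18148 patches: it simply restates the invariant above (a DPP sink must have a non-null target map work) as an explicit check, using invented stand-in types, so that a malformed operator tree fails with a descriptive error instead of an NPE.

{code}
// Stand-in types; the real code deals with SparkPartitionPruningSinkDesc and MapWork.
final class DppSinkCheckSketch {
  static final class DppSink {
    final String targetWorkName;   // may be null if the operator tree is malformed
    DppSink(String targetWorkName) { this.targetWorkName = targetWorkName; }
  }

  /** Resolve the target work, failing with a descriptive error instead of an NPE. */
  static String resolveTargetWork(DppSink sink) {
    if (sink.targetWorkName == null) {
      throw new IllegalStateException(
          "DPP sink has no target map work; operator tree is likely malformed "
              + "(see SplitOpTreeForDPP)");
    }
    return sink.targetWorkName;
  }
}
{code}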



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18053) Support different table types for MVs

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300875#comment-16300875
 ] 

Hive QA commented on HIVE-18053:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/1290/HIVE-18053.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 11540 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_lineage2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=253)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8358/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8358/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8358/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 1290 - PreCommit-HIVE-Build

> Support different table types for MVs
> -
>
> Key: HIVE-18053
> URL: https://issues.apache.org/jira/browse/HIVE-18053
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-18053.01.patch, HIVE-18053.patch
>
>
> MVs backed by MM tables, managed tables, external tables. This might work 
> already, but we need to add tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300895#comment-16300895
 ] 

Hive QA commented on HIVE-18148:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 2a5ba5c |
| Default Java | 1.8.0_111 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8359/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> NPE in SparkDynamicPartitionPruningResolver
> ---
>
> Key: HIVE-18148
> URL: https://issues.apache.org/jira/browse/HIVE-18148
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch, 
> HIVE-18148.3.patch
>
>
> The stack trace is:
> {noformat}
> 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] 
> ql.Driver: FAILED: NullPointerException null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125)
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568)
> {noformat}
> At this stage, there shouldn't be a DPP sink whose target map work is null. 
> The root cause seems to be a malformed operator tree generated by 
> SplitOpTreeForDPP.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Assigned] (HIVE-18328) Improve schematool validator to report duplicate rows for column statistics

2017-12-21 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-18328:



> Improve schematool validator to report duplicate rows for column statistics
> ---
>
> Key: HIVE-18328
> URL: https://issues.apache.org/jira/browse/HIVE-18328
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.1
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>
> By design, in the {{TAB_COL_STATS}} table of the HMS schema, there should be 
> ONE AND ONLY ONE row, representing its statistics, for each column defined in 
> Hive. The combination of DB_NAME, TABLE_NAME and COLUMN_NAME constitutes a 
> primary key/unique row.
> Each time the statistics are computed for a column, this row is updated. 
> However, if we somehow end up with multiple rows in this table for a given 
> column (e.g. via a BDR/replication process), the HMS server is left to 
> recompute the statistics thereafter.
> So it would be good to detect this data anomaly via the schema validation 
> tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver

2017-12-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300925#comment-16300925
 ] 

Hive QA commented on HIVE-18148:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12903341/HIVE-18148.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 11540 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_lineage2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols]
 (batchId=158)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation
 (batchId=213)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=253)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=225)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testRenamePartitionWithCM
 (batchId=225)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8359/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8359/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8359/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 24 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12903341 - PreCommit-HIVE-Build

> NPE in SparkDynamicPartitionPruningResolver
> ---
>
> Key: HIVE-18148
> URL: https://issues.apache.org/jira/browse/HIVE-18148
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch, 
> HIVE-18148.3.patch
>
>
> The stack trace is:
> {noformat}
> 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] 
> ql.Driver: FAILED: NullPointerException null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125)
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568)
> {noformat}
> At this stage, there shouldn't be a DPP sink whose target map work is null. 
> The root cause seems to be a malformed operator tree generat

[jira] [Updated] (HIVE-18328) Improve schematool validator to report duplicate rows for column statistics

2017-12-21 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18328:
-
Attachment: HIVE-18328.patch

[~aihuaxu] Attaching a patch to add validation for this table in the schema.

> Improve schematool validator to report duplicate rows for column statistics
> ---
>
> Key: HIVE-18328
> URL: https://issues.apache.org/jira/browse/HIVE-18328
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.1
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-18328.patch
>
>
> By design, in the {{TAB_COL_STATS}} table of the HMS schema, there should be 
> ONE AND ONLY ONE row, representing its statistics, for each column defined in 
> Hive. The combination of DB_NAME, TABLE_NAME and COLUMN_NAME constitutes a 
> primary key/unique row.
> Each time the statistics are computed for a column, this row is updated. 
> However, if we somehow end up with multiple rows in this table for a given 
> column (e.g. via a BDR/replication process), the HMS server is left to 
> recompute the statistics thereafter.
> So it would be good to detect this data anomaly via the schema validation 
> tool.
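
Editorial sketch of the intended validation, not the attached HIVE-18328.patch: it reports (DB_NAME, TABLE_NAME, COLUMN_NAME) combinations that appear more than once in TAB_COL_STATS, as the description calls for. The JDBC wiring is simplified for the example, and how schematool actually surfaces the result is an assumption.

{code}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative duplicate-row check over the HMS backing database.
final class TabColStatsDuplicateCheckSketch {
  static boolean reportDuplicates(Connection metastoreDb) throws SQLException {
    final String query =
        "SELECT DB_NAME, TABLE_NAME, COLUMN_NAME, COUNT(*) AS CNT "
            + "FROM TAB_COL_STATS "
            + "GROUP BY DB_NAME, TABLE_NAME, COLUMN_NAME "
            + "HAVING COUNT(*) > 1";
    boolean duplicatesFound = false;
    try (Statement stmt = metastoreDb.createStatement();
         ResultSet rs = stmt.executeQuery(query)) {
      while (rs.next()) {
        duplicatesFound = true;
        System.out.println("Duplicate column statistics rows: "
            + rs.getString("DB_NAME") + "." + rs.getString("TABLE_NAME")
            + "." + rs.getString("COLUMN_NAME") + " x" + rs.getLong("CNT"));
      }
    }
    return duplicatesFound;
  }
}
{code}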



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17829) ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2

2017-12-21 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300955#comment-16300955
 ] 

anishek commented on HIVE-17829:


Test failures are not related to this patch. Patch committed to master. Thanks 
[~thejas] for the review!

> ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2
> --
>
> Key: HIVE-17829
> URL: https://issues.apache.org/jira/browse/HIVE-17829
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 2.1.0
>Reporter: Chiran Ravani
>Assignee: anishek
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17829.0.patch, HIVE-17829.1.patch
>
>
> Stack
> {code}
> 2017-10-09T09:39:54,804 ERROR [HiveServer2-Background-Pool: Thread-95]: 
> metadata.Table (Table.java:getColsInternal(642)) - Unable to get field from 
> serde: org.apache.hadoop.hive.hbase.HBaseSerDe
> java.lang.ArrayIndexOutOfBoundsException: 1
> at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77]
> at 
> org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:104)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.<init>(LazySimpleStructObjectInspector.java:77)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyHBaseStructInspector(HBaseLazyObjectFactory.java:79)
>  ~[hive-hbase-handler-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:127) 
> ~[hive-hbase-handler-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54) 
> ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:531) 
> ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:424)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:411)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:279)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) 
> ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:622) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:833) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:869) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4228) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:347) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1905) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1607) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1354) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]

[jira] [Updated] (HIVE-17829) ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2

2017-12-21 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-17829:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2
> --
>
> Key: HIVE-17829
> URL: https://issues.apache.org/jira/browse/HIVE-17829
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 2.1.0
>Reporter: Chiran Ravani
>Assignee: anishek
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17829.0.patch, HIVE-17829.1.patch
>
>
> Stack
> {code}
> 2017-10-09T09:39:54,804 ERROR [HiveServer2-Background-Pool: Thread-95]: 
> metadata.Table (Table.java:getColsInternal(642)) - Unable to get field from 
> serde: org.apache.hadoop.hive.hbase.HBaseSerDe
> java.lang.ArrayIndexOutOfBoundsException: 1
> at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77]
> at 
> org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:104)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.<init>(LazySimpleStructObjectInspector.java:77)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyHBaseStructInspector(HBaseLazyObjectFactory.java:79)
>  ~[hive-hbase-handler-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:127) 
> ~[hive-hbase-handler-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54) 
> ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:531) 
> ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:424)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:411)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:279)
>  ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) 
> ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:622) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:833) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:869) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4228) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:347) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1905) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1607) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1354) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:

[jira] [Commented] (HIVE-18285) StatsTask uses a cached ql.metadata.Table object

2017-12-21 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300971#comment-16300971
 ] 

Eugene Koifman commented on HIVE-18285:
---

This patch should not add any time to compilation, since it only changes where 
StatsTask gets the Table object when it runs. I believe this happens on the 
client side, long after compilation is done.

bq. Secondly, logic to convert table type based on config to me seems to belong 
to compiler. Metastore shall not govern that.
Table is a metadata object, and whether a table supports Acid ops is a property 
of this metadata object. The Metastore seems like exactly the place to govern 
metadata objects; normally the compiler just reads them.

Finally, and perhaps most importantly: a large part of acid is in the metastore. 
The entire state of the compactor, lock manager, and transaction manager is 
currently managed by the metastore, and this is very specific to Hive. If you 
don't think Hive-specific logic belongs in the metastore, where do you see all 
of this logic going once the metastore becomes standalone?

> StatsTask uses a cached ql.metadata.Table object
> 
>
> Key: HIVE-18285
> URL: https://issues.apache.org/jira/browse/HIVE-18285
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Statistics
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18285.01.patch
>
>
> This then causes BasicStatsTask.aggregateStats(Hive) to call 
> Hive.alterTable() with a stale Table object (it misses any changes made by 
> any MetaStorePreEventListener).
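
As a rough, Hive-independent illustration of the staleness pattern described above (all class and field names below are hypothetical, not taken from the Hive code base), a snapshot captured earlier silently misses anything a listener changes in between, while re-fetching just before the alter picks the change up:
{code}
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of altering metadata through a stale cached snapshot.
public class StaleTableDemo {
  static class TableMeta {
    final Map<String, String> params = new HashMap<>();
  }

  // stand-in for the metastore's authoritative copy of the table
  static final TableMeta metastoreCopy = new TableMeta();

  // returns a snapshot, the way a remote metastore call would
  static TableMeta fetchFromMetastore() {
    TableMeta snapshot = new TableMeta();
    snapshot.params.putAll(metastoreCopy.params);
    return snapshot;
  }

  public static void main(String[] args) {
    TableMeta cachedEarlier = fetchFromMetastore();          // analogous to the cached Table

    // a pre-event listener mutates the stored table afterwards
    metastoreCopy.params.put("added.by.listener", "true");

    // an "alter" driven from the stale snapshot does not see the listener's change
    System.out.println("stale copy has listener param: "
        + cachedEarlier.params.containsKey("added.by.listener"));          // false

    // re-fetching immediately before the alter does see it
    System.out.println("fresh copy has listener param: "
        + fetchFromMetastore().params.containsKey("added.by.listener"));   // true
  }
}
{code}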





[jira] [Updated] (HIVE-18221) test acid default

2017-12-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18221:
--
Status: Open  (was: Patch Available)

> test acid default
> -
>
> Key: HIVE-18221
> URL: https://issues.apache.org/jira/browse/HIVE-18221
> Project: Hive
>  Issue Type: Test
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18221.01.patch, HIVE-18221.02.patch, 
> HIVE-18221.03.patch, HIVE-18221.04.patch, HIVE-18221.07.patch, 
> HIVE-18221.08.patch, HIVE-18221.09.patch, HIVE-18221.10.patch, 
> HIVE-18221.11.patch, HIVE-18221.12.patch, HIVE-18221.13.patch, 
> HIVE-18221.14.patch, HIVE-18221.16.patch
>
>






[jira] [Updated] (HIVE-18315) update tests use non-acid tables

2017-12-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18315:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master.
Thanks Jason for the review!

> update tests use non-acid tables
> 
>
> Key: HIVE-18315
> URL: https://issues.apache.org/jira/browse/HIVE-18315
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 3.0.0
>
> Attachments: HIVE-18315.01.patch, HIVE-18315.02.patch, 
> HIVE-18315.03.patch
>
>
> Some tests, like TestTxnLoadData, need to create non-acid tables so that 
> non-acid to acid conversion can be tested, so they need an explicit 
> tblproperties('transactional'='false').
> HCat doesn't support acid, so those tests need to use non-acid tables.
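
For context, the explicit opt-out these tests rely on looks roughly like the DDL below, issued here through the Hive JDBC driver as a standalone sketch; the connection URL, credentials, and table name are placeholders and not taken from the tests themselves:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: create a table that is explicitly non-transactional so that a later
// non-acid to acid conversion can be exercised against it.
public class CreateNonAcidTable {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:hive2://localhost:10000/default";   // placeholder HiveServer2 URL
    try (Connection conn = DriverManager.getConnection(url, "hive", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE TABLE t_nonacid (id INT, val STRING) "
          + "STORED AS ORC "
          + "TBLPROPERTIES ('transactional'='false')");
    }
  }
}
{code}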





[jira] [Updated] (HIVE-18288) merge/concat not supported on Acid table

2017-12-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18288:
--
Summary: merge/concat not supported on Acid table  (was: merge/concat not 
support on Acid table)

> merge/concat not supported on Acid table
> 
>
> Key: HIVE-18288
> URL: https://issues.apache.org/jira/browse/HIVE-18288
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> For example, mvn test -Dtest=TestCliDriver -Dqfile=orc_merge10.q
> now ends up with 
> {noformat}
> 2017-12-15T15:12:30,753 ERROR [7c3ff5b2-285c-44f2-8b13-5c3ccbd41b13 main] 
> ql.Driver: FAILED: SemanticException 
> org.apache.hadoop.hive.ql.parse.SemanticException: Concatenate/Merge can not 
> be performed on transactional tables
> org.apache.hadoop.hive.ql.parse.SemanticException: 
> org.apache.hadoop.hive.ql.parse.SemanticException: Concatenate/Merge can not 
> be performed on transactional tables
> at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTablePartMergeFiles(DDLSemanticAnalyzer.java:2172)
> at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:343)
> {noformat}





[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark

2017-12-21 Thread liyunzhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300999#comment-16300999
 ] 

liyunzhang commented on HIVE-18301:
---

The reason for this NPE is that 
[org.apache.hadoop.hive.ql.io.IOContextMap#sparkThreadLocal|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/IOContextMap.java#L54
 ] is a ThreadLocal variable. The value of the same ThreadLocal variable may be 
different in different threads.
We set the InputPath via:
{code}
 CombineHiveRecordReader#init
->HiveContextAwareRecordReader.initIOContext
->IOContext.setInputPath
{code}

We get the InputPath via:
{code}
SparkMapRecordHandler.processRow
->MapOperator.process
->MapOperator.cleanUpInputFileChangedOp
->ExecMapperContext.getCurrentInputPath
->IOContext#getInputPath
{code}

When the cache is enabled, setInputPath is never called, so 
IOContext#getInputPath returns null, as in the sketch below.
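
To make the thread-visibility point concrete, here is a minimal, Hive-independent sketch; the class and variable names are hypothetical and this is not the actual IOContextMap code. A value stored into a ThreadLocal by one thread is simply absent when a different thread calls get(), and that missing value is the null that later surfaces as the NPE:
{code}
// Demonstrates why a ThreadLocal value set in one thread is not visible in another.
public class ThreadLocalVisibilityDemo {
  // analogous in spirit to a per-thread IOContext slot: each thread sees its own copy
  private static final ThreadLocal<String> inputPath = new ThreadLocal<>();

  public static void main(String[] args) throws InterruptedException {
    // the "record reader" thread sets the path (cf. IOContext.setInputPath)
    Thread readerThread = new Thread(() -> inputPath.set("/warehouse/t1/part-00000"));
    readerThread.start();
    readerThread.join();

    // the "processing" thread reads the path (cf. IOContext#getInputPath):
    // it never called set() itself, so it observes null
    Thread processingThread = new Thread(() ->
        System.out.println("path seen by processing thread: " + inputPath.get()));
    processingThread.start();
    processingThread.join();
  }
}
{code}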

> Investigate to enable MapInput cache in Hive on Spark
> -
>
> Key: HIVE-18301
> URL: https://issues.apache.org/jira/browse/HIVE-18301
> Project: Hive
>  Issue Type: Bug
>Reporter: liyunzhang
>Assignee: liyunzhang
>
> An IOContext problem was previously found in MapTran when the spark rdd cache 
> was enabled (HIVE-8920), so we disabled the rdd cache in MapTran at 
> [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202].
> The problem is that IOContext does not seem to be initialized correctly in 
> spark yarn client/cluster mode, which causes exceptions like:
> {code}
> Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most 
> recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): 
> java.lang.RuntimeException: Error processing row: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
>   at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>   at org.apache.spark.scheduler.Task.run(Task.scala:85)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152)
>   ... 12 more
> Driver stacktrace:
> {code}
> In yarn client/cluster mode, 
> [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109]
>  is sometimes null when the rdd cache is enabled.





[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark

2017-12-21 Thread liyunzhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301027#comment-16301027
 ] 

liyunzhang commented on HIVE-18301:
---

[~xuefuz], [~lirui]: Is there any way to store the IOContext somewhere and then 
reinitialize it in the current code when the rdd cache is enabled?

> Investigate to enable MapInput cache in Hive on Spark
> -
>
> Key: HIVE-18301
> URL: https://issues.apache.org/jira/browse/HIVE-18301
> Project: Hive
>  Issue Type: Bug
>Reporter: liyunzhang
>Assignee: liyunzhang
>
> An IOContext problem was previously found in MapTran when the spark rdd cache 
> was enabled (HIVE-8920), so we disabled the rdd cache in MapTran at 
> [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202].
> The problem is that IOContext does not seem to be initialized correctly in 
> spark yarn client/cluster mode, which causes exceptions like:
> {code}
> Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most 
> recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): 
> java.lang.RuntimeException: Error processing row: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
>   at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>   at org.apache.spark.scheduler.Task.run(Task.scala:85)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152)
>   ... 12 more
> Driver stacktrace:
> {code}
> In yarn client/cluster mode, 
> [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109]
>  is sometimes null when the rdd cache is enabled.


