[jira] [Commented] (HIVE-20830) JdbcStorageHandler range query assertion failure in some cases

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668149#comment-16668149
 ] 

Hive QA commented on HIVE-20830:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 22s{color} | {color:blue} jdbc-handler in master has 12 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 50s{color} | {color:blue} ql in master has 2315 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} jdbc-handler: The patch generated 1 new + 20 unchanged - 1 fixed = 21 total (was 21) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 46s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-14687/dev-support/hive-personality.sh |
| git revision | master / 2f7abcc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-14687/yetus/diff-checkstyle-jdbc-handler.txt |
| modules | C: jdbc-handler ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-14687/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JdbcStorageHandler range query assertion failure in some cases
> --
>
> Key: HIVE-20830
> URL: https://issues.apache.org/jira/browse/HIVE-20830
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20830.1.patch
>
>
> {code}
> 2018-10-29T10:10:16,325 ERROR [b4bf5eb2-a986-4aae-908e-93b9908acd32 
> HiveServer2-HttpHandler-Pool: Thread-124]: dao.GenericJdbcDatabaseAccessor 
> (:()) - Caught exception while trying to execute query
> java.lang.IllegalArgumentException: null
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:108) 
> ~[guava-19.0.jar:?]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.addBoundaryToQuery(GenericJdbcDatabaseAccessor.java:238)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> 
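The IllegalArgumentException above is thrown by a Guava Preconditions.checkArgument call guarding the split-boundary computation in addBoundaryToQuery. A minimal self-contained sketch of that failure mode (the splitStart method and its range logic are illustrative, not Hive's actual code; checkArgument is re-implemented so the example has no Guava dependency but throws the same exception with the same null message):

```java
// Sketch: how a Guava-style precondition on a range split can fail with
// "IllegalArgumentException: null" as seen in the log above.
public class RangeSplitSketch {
    // Mirrors Guava's Preconditions.checkArgument(boolean): throws an
    // IllegalArgumentException with no message when the condition is false.
    static void checkArgument(boolean condition) {
        if (!condition) {
            throw new IllegalArgumentException();
        }
    }

    // Hypothetical boundary computation: split [lower, upper) into numSplits
    // pieces and return the start of the given split. When upper <= lower
    // (e.g. an empty table or an unsatisfiable predicate), the guard trips.
    static long splitStart(long lower, long upper, int numSplits, int split) {
        checkArgument(upper > lower);
        checkArgument(split >= 0 && split < numSplits);
        return lower + (upper - lower) * split / numSplits;
    }

    public static void main(String[] args) {
        System.out.println(splitStart(0, 100, 4, 1)); // prints 25
        try {
            splitStart(10, 10, 4, 1); // empty range: precondition fails
        } catch (IllegalArgumentException e) {
            // getMessage() is null, matching "IllegalArgumentException: null"
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```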

[jira] [Updated] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20829:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Patch pushed to master. Thanks Thejas for review!

> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_161]

[jira] [Commented] (HIVE-20834) Hive QueryResultCache entries keeping reference to SemanticAnalyzer from cached query

2018-10-29 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668122#comment-16668122
 ] 

Jason Dere commented on HIVE-20834:
---

The patch fetches the ValidTxnWriteIdList upfront when creating the results 
cache entry, rather than deferring the call to fetch it later.
Comparing heap dumps before and after the patch, the results cache entry no 
longer hangs onto a reference to the SemanticAnalyzer.

[~jcamachorodriguez] [~gopalv] can you review?
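The retention mechanism being fixed here can be sketched generically: a deferred lookup captured in a lambda keeps the object that created it reachable, while an eagerly fetched value does not. The classes below are illustrative stand-ins, not Hive's actual QueryResultCache or SemanticAnalyzer:

```java
import java.util.function.Supplier;

// Illustrative sketch: why deferring a lookup via a lambda keeps the whole
// analyzer reachable from a cache entry, and why fetching eagerly does not.
public class CacheRetentionSketch {
    static class Analyzer {
        final byte[] bigState = new byte[1 << 20]; // stands in for SemanticAnalyzer memory
        String fetchTxnList() { return "txn-list"; }
    }

    // The lambda captures 'a'; as long as a cache entry holds this Supplier,
    // the Analyzer and its bigState cannot be garbage collected.
    static Supplier<String> deferred(Analyzer a) {
        return () -> a.fetchTxnList();
    }

    // Fetch up front and keep only the small result, as the patch does with
    // the ValidTxnWriteIdList; the Analyzer then becomes collectable.
    static String eager(Analyzer a) {
        return a.fetchTxnList();
    }

    public static void main(String[] args) {
        String cached = eager(new Analyzer()); // cache holds just the string
        System.out.println(cached);            // prints txn-list
    }
}
```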

> Hive QueryResultCache entries keeping reference to SemanticAnalyzer from 
> cached query
> -
>
> Key: HIVE-20834
> URL: https://issues.apache.org/jira/browse/HIVE-20834
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20834.1.patch, dominator_tree.png
>
>
> QueryResultCache.LookupInfo ends up keeping a reference to the 
> SemanticAnalyzer from the cached query for as long as the entry stays in 
> the cache. We should not keep the SemanticAnalyzer around after the query 
> finishes executing, since it can hold on to quite a bit of memory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20834) Hive QueryResultCache entries keeping reference to SemanticAnalyzer from cached query

2018-10-29 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-20834:
--
Attachment: HIVE-20834.1.patch

> Hive QueryResultCache entries keeping reference to SemanticAnalyzer from 
> cached query
> -
>
> Key: HIVE-20834
> URL: https://issues.apache.org/jira/browse/HIVE-20834
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20834.1.patch, dominator_tree.png
>
>
> QueryResultCache.LookupInfo ends up keeping a reference to the 
> SemanticAnalyzer from the cached query for as long as the entry stays in 
> the cache. We should not keep the SemanticAnalyzer around after the query 
> finishes executing, since it can hold on to quite a bit of memory.





[jira] [Commented] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668110#comment-16668110
 ] 

Hive QA commented on HIVE-20829:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946115/HIVE-20829.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15518 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/14686/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14686/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14686/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946115 - PreCommit-HIVE-Build

> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> 

[jira] [Updated] (HIVE-20821) Rewrite SUM0 into SUM + COALESCE combination

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20821:
---
Attachment: HIVE-20821.01.patch

> Rewrite SUM0 into SUM + COALESCE combination
> 
>
> Key: HIVE-20821
> URL: https://issues.apache.org/jira/browse/HIVE-20821
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20821.01.patch, HIVE-20821.patch
>
>
> SUM0 is not vectorized, while the SUM + COALESCE combination is.





[jira] [Updated] (HIVE-20822) Improvements to push computation to JDBC from Calcite

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20822:
---
Attachment: HIVE-20822.01.patch

> Improvements to push computation to JDBC from Calcite
> -
>
> Key: HIVE-20822
> URL: https://issues.apache.org/jira/browse/HIVE-20822
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20822.01.patch, HIVE-20822.patch
>
>






[jira] [Work started] (HIVE-20835) Interaction between constraints and MV rewriting may create loop in Calcite planner

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-20835 started by Jesus Camacho Rodriguez.
--
> Interaction between constraints and MV rewriting may create loop in Calcite 
> planner
> ---
>
> Key: HIVE-20835
> URL: https://issues.apache.org/jira/browse/HIVE-20835
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20835.patch
>
>






[jira] [Updated] (HIVE-20835) Interaction between constraints and MV rewriting may create loop in Calcite planner

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20835:
---
Attachment: HIVE-20835.patch

> Interaction between constraints and MV rewriting may create loop in Calcite 
> planner
> ---
>
> Key: HIVE-20835
> URL: https://issues.apache.org/jira/browse/HIVE-20835
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20835.patch
>
>






[jira] [Updated] (HIVE-20835) Interaction between constraints and MV rewriting may create loop in Calcite planner

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20835:
---
Status: Patch Available  (was: In Progress)

> Interaction between constraints and MV rewriting may create loop in Calcite 
> planner
> ---
>
> Key: HIVE-20835
> URL: https://issues.apache.org/jira/browse/HIVE-20835
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20835.patch
>
>






[jira] [Assigned] (HIVE-20835) Interaction between constraints and MV rewriting may create loop in Calcite planner

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-20835:
--


> Interaction between constraints and MV rewriting may create loop in Calcite 
> planner
> ---
>
> Key: HIVE-20835
> URL: https://issues.apache.org/jira/browse/HIVE-20835
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>






[jira] [Updated] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-20805:
--
Labels: pull-request-available  (was: )

> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch, HIVE-20805.2.patch
>
>
> While loading data into a managed table from a user-supplied path, Hive uses a 
> move operation to transfer data from the user location to the table location. 
> When the move cannot be used, e.g. due to a permission issue or a mismatched 
> encryption zone, Hive copies the files and then deletes them from the source 
> location to keep the behavior the same. But if the user does not have write 
> access to the source location, the delete fails with a file permission 
> exception and the whole load operation fails.
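The move-then-copy-then-delete sequence described in the issue can be sketched with java.nio.file. This is a generic illustration of the pattern, not Hive's actual Hadoop FileSystem code; the writability check before the delete is the behavior the issue asks for:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Generic sketch of the load-data pattern: prefer a move, fall back to a
// copy, and only delete the source when it is actually deletable -- otherwise
// the copy succeeded but the cleanup would fail the whole load.
public class MoveOrCopySketch {
    static void loadFile(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
            return; // fast path: the rename worked
        } catch (IOException moveFailed) {
            // e.g. cross-device or cross-encryption-zone: fall back to copy
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        }
        if (Files.isWritable(src.getParent())) {
            Files.delete(src); // keep semantics identical to a move
        }
        // If the source cannot be deleted, leave it in place rather than
        // failing the load with a permission exception.
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("load");
        Path src = Files.writeString(dir.resolve("in.txt"), "data");
        Path dst = dir.resolve("out.txt");
        loadFile(src, dst);
        System.out.println(Files.readString(dst)); // prints data
    }
}
```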





[jira] [Commented] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668078#comment-16668078
 ] 

ASF GitHub Bot commented on HIVE-20805:
---

GitHub user maheshk114 opened a pull request:

https://github.com/apache/hive/pull/482

HIVE-20805 : Hive does not copy source data when importing as non-hive user

…

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maheshk114/hive HIVE-20805

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/482.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #482


commit bfd8a8005d6a646d53ab2ab7093da8894e57f911
Author: Mahesh Kumar Behera 
Date:   2018-10-30T03:37:38Z

HIVE-20805 : Hive does not copy source data when importing as non-hive user




> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch, HIVE-20805.2.patch
>
>
> While loading data into a managed table from a user-supplied path, Hive uses a 
> move operation to transfer data from the user location to the table location. 
> When the move cannot be used, e.g. due to a permission issue or a mismatched 
> encryption zone, Hive copies the files and then deletes them from the source 
> location to keep the behavior the same. But if the user does not have write 
> access to the source location, the delete fails with a file permission 
> exception and the whole load operation fails.





[jira] [Updated] (HIVE-20817) Reading Timestamp datatype via HiveServer2 gives errors

2018-10-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-20817:
--
Labels: pull-request-available  (was: )

> Reading Timestamp datatype via HiveServer2 gives errors
> ---
>
> Key: HIVE-20817
> URL: https://issues.apache.org/jira/browse/HIVE-20817
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20817.01.patch
>
>
> CREATE TABLE JdbcBasicRead ( empno int, desg string,empname string,doj 
> timestamp,Salary float,mgrid smallint, deptno tinyint ) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY ',';
> LOAD DATA LOCAL INPATH '/tmp/art_jdbc/hive/input/input_7columns.txt' 
> OVERWRITE INTO TABLE JdbcBasicRead;
> Sample Data.
> —
> 7369,M,SMITH,1980-12-17 17:07:29.234234,5000.00,7902,20
> 7499,X,ALLEN,1981-02-20 17:07:29.234234,1250.00,7698,30
> 7521,X,WARD,1981-02-22 17:07:29.234234,01600.57,7698,40
> 7566,M,JONES,1981-04-02 17:07:29.234234,02975.65,7839,10
> 7654,X,MARTIN,1981-09-28 17:07:29.234234,01250.00,7698,20
> 7698,M,BLAKE,1981-05-01 17:07:29.234234,2850.98,7839,30
> 7782,M,CLARK,1981-06-09 17:07:29.234234,02450.00,7839,20
> —
> Select statement: SELECT empno, desg, empname, doj, salary, mgrid, deptno 
> FROM JdbcBasicWrite
> {code}
> 2018-09-25T07:11:03,222 WARN [HiveServer2-Handler-Pool: Thread-83]: 
> thrift.ThriftCLIService (:()) - Error fetching results:
> org.apache.hive.service.cli.HiveSQLException: java.lang.ClassCastException: 
> org.apache.hadoop.hive.common.type.Timestamp cannot be cast to 
> java.sql.Timestamp
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at com.sun.proxy.$Proxy46.fetchResults(Unknown Source) ~[?:?]
> at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[?:1.8.0_112]
> at 
> 
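The ClassCastException above arises because org.apache.hadoop.hive.common.type.Timestamp is not a subclass of java.sql.Timestamp, so the old cast fails and an explicit conversion is needed. A sketch of that conversion using only stdlib types, with a plain string standing in for the internal timestamp value (the formatter pattern matches the sample data's microsecond precision; this is not Hive's actual fix):

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: rebuilding a java.sql.Timestamp from a Hive-style timestamp value
// via java.time, preserving the sub-second digits seen in the sample rows.
public class TimestampConvertSketch {
    static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS");

    static Timestamp toSqlTimestamp(String hiveValue) {
        LocalDateTime ldt = LocalDateTime.parse(hiveValue, FMT);
        return Timestamp.valueOf(ldt); // carries nanosecond precision
    }

    public static void main(String[] args) {
        Timestamp ts = toSqlTimestamp("1980-12-17 17:07:29.234234");
        System.out.println(ts);            // prints 1980-12-17 17:07:29.234234
        System.out.println(ts.getNanos()); // prints 234234000
    }
}
```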

[jira] [Commented] (HIVE-20817) Reading Timestamp datatype via HiveServer2 gives errors

2018-10-29 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668077#comment-16668077
 ] 

ASF GitHub Bot commented on HIVE-20817:
---

GitHub user maheshk114 opened a pull request:

https://github.com/apache/hive/pull/481

HIVE-20817 : Reading Timestamp datatype via HiveServer2 gives errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maheshk114/hive HIVE-20817-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/481.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #481


commit 132a10459e8348707b014215509727f563dbd57a
Author: Mahesh Kumar Behera 
Date:   2018-10-30T02:48:55Z

HIVE-20817 : Reading Timestamp datatype via HiveServer2 gives errors




> Reading Timestamp datatype via HiveServer2 gives errors
> ---
>
> Key: HIVE-20817
> URL: https://issues.apache.org/jira/browse/HIVE-20817
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20817.01.patch
>
>
> CREATE TABLE JdbcBasicRead ( empno int, desg string,empname string,doj 
> timestamp,Salary float,mgrid smallint, deptno tinyint ) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY ',';
> LOAD DATA LOCAL INPATH '/tmp/art_jdbc/hive/input/input_7columns.txt' 
> OVERWRITE INTO TABLE JdbcBasicRead;
> Sample Data.
> —
> 7369,M,SMITH,1980-12-17 17:07:29.234234,5000.00,7902,20
> 7499,X,ALLEN,1981-02-20 17:07:29.234234,1250.00,7698,30
> 7521,X,WARD,1981-02-22 17:07:29.234234,01600.57,7698,40
> 7566,M,JONES,1981-04-02 17:07:29.234234,02975.65,7839,10
> 7654,X,MARTIN,1981-09-28 17:07:29.234234,01250.00,7698,20
> 7698,M,BLAKE,1981-05-01 17:07:29.234234,2850.98,7839,30
> 7782,M,CLARK,1981-06-09 17:07:29.234234,02450.00,7839,20
> —
> Select statement: SELECT empno, desg, empname, doj, salary, mgrid, deptno 
> FROM JdbcBasicWrite
> {code}
> 2018-09-25T07:11:03,222 WARN [HiveServer2-Handler-Pool: Thread-83]: 
> thrift.ThriftCLIService (:()) - Error fetching results:
> org.apache.hive.service.cli.HiveSQLException: java.lang.ClassCastException: 
> org.apache.hadoop.hive.common.type.Timestamp cannot be cast to 
> java.sql.Timestamp
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at com.sun.proxy.$Proxy46.fetchResults(Unknown Source) ~[?:?]
> at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  

[jira] [Updated] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20805:
---
Attachment: HIVE-20805.2.patch

> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch, HIVE-20805.2.patch
>
>
> While loading data into a managed table from a user-given path, Hive uses a move 
> operation to transfer data from the user location to the table location. When a move 
> cannot be used, e.g. due to a permission issue or mismatched encryption zones, Hive 
> falls back to copying the files and then deleting them from the source location to 
> keep the behavior the same. But if the user does not have write access to the source 
> location, the delete fails with a file permission exception and the load operation fails. 
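The copy-then-delete fallback described above can be sketched as follows. This is a minimal, self-contained illustration using `java.nio.file` on the local filesystem, not Hive's actual `FileSystem`-based code; the class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class LoadDataMove {
    /**
     * Try an atomic move first; if that fails, fall back to copy followed by a
     * best-effort delete of the source so the end state matches a move. A failed
     * source delete (e.g. no write access to the source directory) is logged
     * rather than failing the whole load -- the behavior the issue asks for.
     */
    public static void moveOrCopy(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException moveFailed) {
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            try {
                Files.delete(src);  // may fail if the user lacks write access
            } catch (IOException deleteFailed) {
                System.err.println("WARN: could not delete source " + src
                        + ": " + deleteFailed.getMessage());
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("load", ".txt");
        Files.writeString(src, "7369,M,SMITH");
        Path dst = src.resolveSibling("moved-" + src.getFileName());
        moveOrCopy(src, dst);
        // after a successful move/copy the destination exists and the source is gone
        System.out.println(Files.exists(dst) && !Files.exists(src));
    }
}
```

The design point is that the delete is cosmetic (matching move semantics), so its failure should be downgraded to a warning instead of aborting the load.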



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20805:
---
Status: Open  (was: Patch Available)

> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch, HIVE-20805.2.patch
>
>
> While loading data into a managed table from a user-given path, Hive uses a move 
> operation to transfer data from the user location to the table location. When a move 
> cannot be used, e.g. due to a permission issue or mismatched encryption zones, Hive 
> falls back to copying the files and then deleting them from the source location to 
> keep the behavior the same. But if the user does not have write access to the source 
> location, the delete fails with a file permission exception and the load operation fails. 





[jira] [Updated] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20805:
---
Status: Patch Available  (was: Open)

Fixed test failures.

> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch, HIVE-20805.2.patch
>
>
> While loading data into a managed table from a user-given path, Hive uses a move 
> operation to transfer data from the user location to the table location. When a move 
> cannot be used, e.g. due to a permission issue or mismatched encryption zones, Hive 
> falls back to copying the files and then deleting them from the source location to 
> keep the behavior the same. But if the user does not have write access to the source 
> location, the delete fails with a file permission exception and the load operation fails. 





[jira] [Commented] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668071#comment-16668071
 ] 

Hive QA commented on HIVE-20829:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} jdbc-handler in master has 12 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} jdbc-handler: The patch generated 1 new + 9 unchanged - 
0 fixed = 10 total (was 9) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-14686/dev-support/hive-personality.sh
 |
| git revision | master / 1656e1b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14686/yetus/diff-checkstyle-jdbc-handler.txt
 |
| modules | C: jdbc-handler U: jdbc-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14686/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   

[jira] [Commented] (HIVE-20817) Reading Timestamp datatype via HiveServer2 gives errors

2018-10-29 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668055#comment-16668055
 ] 

mahesh kumar behera commented on HIVE-20817:


[~thejas]

Can you please review the patch?

No test cases are added, as we don't have a framework to test using an old 
version of the client.

> Reading Timestamp datatype via HiveServer2 gives errors
> ---
>
> Key: HIVE-20817
> URL: https://issues.apache.org/jira/browse/HIVE-20817
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20817.01.patch
>
>
> CREATE TABLE JdbcBasicRead ( empno int, desg string,empname string,doj 
> timestamp,Salary float,mgrid smallint, deptno tinyint ) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY ',';
> LOAD DATA LOCAL INPATH '/tmp/art_jdbc/hive/input/input_7columns.txt' 
> OVERWRITE INTO TABLE JdbcBasicRead;
> Sample Data.
> —
> 7369,M,SMITH,1980-12-17 17:07:29.234234,5000.00,7902,20
> 7499,X,ALLEN,1981-02-20 17:07:29.234234,1250.00,7698,30
> 7521,X,WARD,1981-02-22 17:07:29.234234,01600.57,7698,40
> 7566,M,JONES,1981-04-02 17:07:29.234234,02975.65,7839,10
> 7654,X,MARTIN,1981-09-28 17:07:29.234234,01250.00,7698,20
> 7698,M,BLAKE,1981-05-01 17:07:29.234234,2850.98,7839,30
> 7782,M,CLARK,1981-06-09 17:07:29.234234,02450.00,7839,20
> —
> Select statement: SELECT empno, desg, empname, doj, salary, mgrid, deptno 
> FROM JdbcBasicWrite
> {code}
> 2018-09-25T07:11:03,222 WARN [HiveServer2-Handler-Pool: Thread-83]: 
> thrift.ThriftCLIService (:()) - Error fetching results:
> org.apache.hive.service.cli.HiveSQLException: java.lang.ClassCastException: 
> org.apache.hadoop.hive.common.type.Timestamp cannot be cast to 
> java.sql.Timestamp
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at com.sun.proxy.$Proxy46.fetchResults(Unknown Source) ~[?:?]
> at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> 

[jira] [Updated] (HIVE-20817) Reading Timestamp datatype via HiveServer2 gives errors

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20817:
---
Status: Patch Available  (was: Open)

> Reading Timestamp datatype via HiveServer2 gives errors
> ---
>
> Key: HIVE-20817
> URL: https://issues.apache.org/jira/browse/HIVE-20817
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20817.01.patch
>
>
> CREATE TABLE JdbcBasicRead ( empno int, desg string,empname string,doj 
> timestamp,Salary float,mgrid smallint, deptno tinyint ) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY ',';
> LOAD DATA LOCAL INPATH '/tmp/art_jdbc/hive/input/input_7columns.txt' 
> OVERWRITE INTO TABLE JdbcBasicRead;
> Sample Data.
> —
> 7369,M,SMITH,1980-12-17 17:07:29.234234,5000.00,7902,20
> 7499,X,ALLEN,1981-02-20 17:07:29.234234,1250.00,7698,30
> 7521,X,WARD,1981-02-22 17:07:29.234234,01600.57,7698,40
> 7566,M,JONES,1981-04-02 17:07:29.234234,02975.65,7839,10
> 7654,X,MARTIN,1981-09-28 17:07:29.234234,01250.00,7698,20
> 7698,M,BLAKE,1981-05-01 17:07:29.234234,2850.98,7839,30
> 7782,M,CLARK,1981-06-09 17:07:29.234234,02450.00,7839,20
> —
> Select statement: SELECT empno, desg, empname, doj, salary, mgrid, deptno 
> FROM JdbcBasicWrite
> {code}
> 2018-09-25T07:11:03,222 WARN [HiveServer2-Handler-Pool: Thread-83]: 
> thrift.ThriftCLIService (:()) - Error fetching results:
> org.apache.hive.service.cli.HiveSQLException: java.lang.ClassCastException: 
> org.apache.hadoop.hive.common.type.Timestamp cannot be cast to 
> java.sql.Timestamp
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at com.sun.proxy.$Proxy46.fetchResults(Unknown Source) ~[?:?]
> at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[?:1.8.0_112]
> at 
> 

[jira] [Updated] (HIVE-20817) Reading Timestamp datatype via HiveServer2 gives errors

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20817:
---
Attachment: HIVE-20817.01.patch

> Reading Timestamp datatype via HiveServer2 gives errors
> ---
>
> Key: HIVE-20817
> URL: https://issues.apache.org/jira/browse/HIVE-20817
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20817.01.patch
>
>
> CREATE TABLE JdbcBasicRead ( empno int, desg string,empname string,doj 
> timestamp,Salary float,mgrid smallint, deptno tinyint ) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY ',';
> LOAD DATA LOCAL INPATH '/tmp/art_jdbc/hive/input/input_7columns.txt' 
> OVERWRITE INTO TABLE JdbcBasicRead;
> Sample Data.
> —
> 7369,M,SMITH,1980-12-17 17:07:29.234234,5000.00,7902,20
> 7499,X,ALLEN,1981-02-20 17:07:29.234234,1250.00,7698,30
> 7521,X,WARD,1981-02-22 17:07:29.234234,01600.57,7698,40
> 7566,M,JONES,1981-04-02 17:07:29.234234,02975.65,7839,10
> 7654,X,MARTIN,1981-09-28 17:07:29.234234,01250.00,7698,20
> 7698,M,BLAKE,1981-05-01 17:07:29.234234,2850.98,7839,30
> 7782,M,CLARK,1981-06-09 17:07:29.234234,02450.00,7839,20
> —
> Select statement: SELECT empno, desg, empname, doj, salary, mgrid, deptno 
> FROM JdbcBasicWrite
> {code}
> 2018-09-25T07:11:03,222 WARN [HiveServer2-Handler-Pool: Thread-83]: 
> thrift.ThriftCLIService (:()) - Error fetching results:
> org.apache.hive.service.cli.HiveSQLException: java.lang.ClassCastException: 
> org.apache.hadoop.hive.common.type.Timestamp cannot be cast to 
> java.sql.Timestamp
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) ~[?:?]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at com.sun.proxy.$Proxy46.fetchResults(Unknown Source) ~[?:?]
> at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[?:1.8.0_112]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

[jira] [Commented] (HIVE-20825) Hive ACID Merge generates invalid ORC files (bucket files 0 or 3 bytes in length) causing the "Not a valid ORC file" error

2018-10-29 Thread Tom Zeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668031#comment-16668031
 ] 

Tom Zeng commented on HIVE-20825:
-

Thanks Eugene for testing it. I could reproduce this with a 3-node EMR 5.18.0 
cluster created using the quick-launch option, and I have also been able to 
reproduce it on a number of other EMR clusters of different sizes and versions. 
This could be EMR-specific, then; I will find a different environment to try 
this on and confirm.

> Hive ACID Merge generates invalid ORC files (bucket files 0 or 3 bytes in 
> length) causing the "Not a valid ORC file" error
> --
>
> Key: HIVE-20825
> URL: https://issues.apache.org/jira/browse/HIVE-20825
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC, Transactions
>Affects Versions: 2.2.0, 2.3.1, 2.3.2
> Environment: Hive 2.3.x on Amazon EMR 5.8.0 to 5.18.0
>Reporter: Tom Zeng
>Priority: Major
> Attachments: hive-merge-invalid-orc-repro.hql, 
> hive-merge-invalid-orc-repro.log
>
>
> When using Hive ACID Merge (supported with the ORC format) to update/insert 
> data, bucket files of 0 bytes or 3 bytes (containing only the three characters 
> "ORC") are generated during MERGE INTO operations, which finish with no errors. 
> Subsequent queries on the base table then fail with the "Not a valid ORC file" 
> error.
>  
> The following script can be used to reproduce the issue (note that with a small 
> amount of data like this, increasing the number of buckets can make the query 
> work, but with a large data set it fails regardless of the bucket count):
> set hive.auto.convert.join=false;
>  set hive.enforce.bucketing=true;
>  set hive.exec.dynamic.partition.mode = nonstrict;
>  set hive.support.concurrency=true;
>  set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> drop table if exists mergedelta_txt_1;
>  drop table if exists mergedelta_txt_2;
> CREATE TABLE mergedelta_txt_1 (
>  id_str varchar(12), time_key int, value bigint)
>  PARTITIONED BY (date_key int)
>  ROW FORMAT DELIMITED
>  STORED AS TEXTFILE;
> CREATE TABLE mergedelta_txt_2 (
>  id_str varchar(12), time_key int, value bigint)
>  PARTITIONED BY (date_key int)
>  ROW FORMAT DELIMITED
>  STORED AS TEXTFILE;
> INSERT INTO TABLE mergedelta_txt_1
>  partition(date_key=20170103)
>  VALUES
>  ("AB94LIENR0",46700,12345676836978),
>  ("AB94LIENR1",46825,12345676836978),
>  ("AB94LIENS0",46709,12345676836978),
>  ("AB94LIENS1",46834,12345676836978),
>  ("AB94LIENT0",46709,12345676836978),
>  ("AB94LIENT1",46834,12345676836978),
>  ("AB94LIENU0",46718,12345676836978),
>  ("AB94LIENU1",46844,12345676836978),
>  ("AB94LIENV0",46719,12345676836978),
>  ("AB94LIENV1",46844,12345676836978),
>  ("AB94LIENW0",46728,12345676836978),
>  ("AB94LIENW1",46854,12345676836978),
>  ("AB94LIENX0",46728,12345676836978),
>  ("AB94LIENX1",46854,12345676836978),
>  ("AB94LIENY0",46737,12345676836978),
>  ("AB94LIENY1",46863,12345676836978),
>  ("AB94LIENZ0",46738,12345676836978),
>  ("AB94LIENZ1",46863,12345676836978),
>  ("AB94LIERA0",47176,12345676836982),
>  ("AB94LIERA1",47302,12345676836982);
> INSERT INTO TABLE mergedelta_txt_2
>  partition(date_key=20170103)
>  VALUES 
>  ("AB94LIENT1",46834,12345676836978),
>  ("AB94LIENU0",46718,12345676836978),
>  ("AB94LIENU1",46844,12345676836978),
>  ("AB94LIENV0",46719,12345676836978),
>  ("AB94LIENV1",46844,12345676836978),
>  ("AB94LIENW0",46728,12345676836978),
>  ("AB94LIENW1",46854,12345676836978),
>  ("AB94LIENX0",46728,12345676836978),
>  ("AB94LIENX1",46854,12345676836978),
>  ("AB94LIENY0",46737,12345676836978),
>  ("AB94LIENY1",46863,12345676836978),
>  ("AB94LIENZ0",46738,12345676836978),
>  ("AB94LIENZ1",46863,12345676836978),
>  ("AB94LIERA0",47176,12345676836982),
>  ("AB94LIERA1",47302,12345676836982),
>  ("AB94LIERA2",47418,12345676836982),
>  ("AB94LIERB0",47176,12345676836982),
>  ("AB94LIERB1",47302,12345676836982),
>  ("AB94LIERB2",47418,12345676836982),
>  ("AB94LIERC0",47185,12345676836982);
> DROP TABLE IF EXISTS mergebase_1;
>  CREATE TABLE mergebase_1 (
>  id_str varchar(12) , time_key int , value bigint)
>  PARTITIONED BY (date_key int)
>  CLUSTERED BY (id_str,time_key) INTO 4 BUCKETS
>  STORED AS ORC
>  TBLPROPERTIES (
>  'orc.compress'='SNAPPY',
>  'pk_columns'='id_str,date_key,time_key',
>  'NO_AUTO_COMPACTION'='true',
>  'transactional'='true');
> MERGE INTO mergebase_1 AS base
>  USING (SELECT * 
>  FROM (
>  SELECT id_str ,time_key ,value, date_key, rank() OVER (PARTITION BY 
> id_str,date_key,time_key ORDER BY id_str,date_key,time_key) AS rk 
>  FROM mergedelta_txt_1
>  DISTRIBUTE BY date_key
>  ) rankedtbl 
>  WHERE rankedtbl.rk=1
>  ) AS delta
>  ON delta.id_str=base.id_str AND delta.date_key=base.date_key AND 
> 
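The symptom described above, bucket files of 0 bytes or exactly 3 bytes (just the "ORC" magic with no footer), can be detected with a simple length check before such files poison reads. A minimal sketch on the local filesystem; a real check would use the ORC reader itself, and the class name here is hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OrcBucketCheck {
    /**
     * An ORC file must contain at least the 3-byte "ORC" magic plus a footer
     * and postscript, so a file that is empty or only 3 bytes long cannot be
     * valid and will trigger the "Not a valid ORC file" error on read.
     */
    public static boolean looksTruncated(Path bucketFile) throws IOException {
        long len = Files.size(bucketFile);
        return len == 0 || len == 3;  // 3 bytes == bare "ORC" magic
    }

    public static void main(String[] args) throws IOException {
        Path bad = Files.createTempFile("bucket_00000", "");
        Files.write(bad, "ORC".getBytes());  // simulate a truncated bucket file
        System.out.println(looksTruncated(bad)); // true
    }
}
```

Running such a scan over the table's delta/base directories after a MERGE would flag the invalid bucket files before a subsequent query fails on them.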

[jira] [Reopened] (HIVE-20552) Get Schema from LogicalPlan faster

2018-10-29 Thread Eric Wohlstadter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wohlstadter reopened HIVE-20552:
-

Reopened. The patch is failing for me with:
{code:java}
Caused by: java.lang.NullPointerException: null
at org.apache.hadoop.hive.ql.parse.QBSubQuery.<init>(QBSubQuery.java:489)
at org.apache.hadoop.hive.ql.parse.SubQueryUtils.buildSubQuery(SubQueryUtils.java:249)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.subqueryRestrictionCheck(CalcitePlanner.java:3141)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSubQueryRelNode(CalcitePlanner.java:3322)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genFilterRelNode(CalcitePlanner.java:3379)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genFilterLogicalPlan(CalcitePlanner.java:3434)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4970)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1722)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1670)
at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118)
at org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:1052)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:154)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1431)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genLogicalPlan(CalcitePlanner.java:393)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parseQueryAndGetSchema(ParseUtils.java:554)
at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.createPlanFragment(GenericUDTFGetSplits.java:254)
at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:206)
at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116)
at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:519)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:511)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
... 16 more{code}

> Get Schema from LogicalPlan faster
> --
>
> Key: HIVE-20552
> URL: https://issues.apache.org/jira/browse/HIVE-20552
> Project: Hive
>  Issue Type: Improvement
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20552.1.patch, HIVE-20552.2.patch, 
> HIVE-20552.3.patch
>
>
> To get the schema of a query faster, it currently needs to compile, optimize, 
> and generate a TezPlan, which creates extra overhead when only the 
> LogicalPlan is needed.
> 1. Copy the method \{{HiveMaterializedViewsRegistry.parseQuery}}, making it 
> \{{public static}} and putting it in a utility class.
> 2. Change the return statement of the method to \{{return 
> analyzer.getResultSchema();}}
> 3. Change the return type of the method to \{{List<FieldSchema>}}.
> 4. Call the new method from \{{GenericUDTFGetSplits.createPlanFragment}}, 
> replacing the current code which does this:
> {code}
>  if (num == 0) {
>    // Schema only
>    return new PlanFragment(null, schema, null);
>  }
> {code}
> and moving the call earlier in \{{getPlanFragment}} ... right after the HiveConf 
> is created ... bypassing the code that uses \{{HiveTxnManager}} and 
> \{{Driver}}.
> 5. Convert the \{{List<FieldSchema>}} to 
> \{{org.apache.hadoop.hive.llap.Schema}}.
> 6. Return from \{{getPlanFragment}} by returning \{{new PlanFragment(null, 
> schema, null)}}.
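The proposed flow above can be sketched as a minimal, self-contained Java outline. The stub types and the placeholder method body below are illustrative assumptions standing in for Hive's real `FieldSchema`, LLAP `Schema`, and semantic analyzer; they are not the actual patch:

```java
import java.util.ArrayList;
import java.util.List;

// Local stubs standing in for org.apache.hadoop.hive.metastore.api.FieldSchema
// and org.apache.hadoop.hive.llap.Schema -- illustrative only.
class FieldSchema {
    public final String name;
    public final String type;
    FieldSchema(String name, String type) { this.name = name; this.type = type; }
}

class Schema {
    public final List<FieldSchema> columns;
    Schema(List<FieldSchema> columns) { this.columns = columns; }
}

public final class SchemaOnlyPlan {
    // Steps 1-3: a public static utility that stops after logical planning
    // and returns the analyzer's result schema (stubbed here with a
    // placeholder column instead of a real analysis).
    public static List<FieldSchema> parseQueryAndGetSchema(String query) {
        List<FieldSchema> result = new ArrayList<>();
        result.add(new FieldSchema("_c0", "bigint")); // placeholder analysis result
        return result;
    }

    // Steps 4-6: call the utility early and convert the List<FieldSchema>
    // into the LLAP Schema, bypassing TezPlan generation entirely.
    public static Schema getSchemaFragment(String query) {
        return new Schema(parseQueryAndGetSchema(query));
    }
}
```

The point of the sketch is the shape of the refactoring: the schema-only path never touches transaction management or physical planning.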



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668001#comment-16668001
 ] 

Jason Dere commented on HIVE-20833:
---

+1

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch
>
>
> The following test will fail if run with TestMiniLlapLocalCliDriver:
> {code:sql}
> CREATE TABLE `alterPartTbl`(
>`po_header_id` bigint,
>`vendor_num` string,
>`requester_name` string,
>`approver_name` string,
>`buyer_name` string,
>`preparer_name` string,
>`po_requisition_number` string,
>`po_requisition_id` bigint,
>`po_requisition_desc` string,
>`rate_type` string,
>`rate_date` date,
>`rate` double,
>`blanket_total_amount` double,
>`authorization_status` string,
>`revision_num` bigint,
>`revised_date` date,
>`approved_flag` string,
>`approved_date` timestamp,
>`amount_limit` double,
>`note_to_authorizer` string,
>`note_to_vendor` string,
>`note_to_receiver` string,
>`vendor_order_num` string,
>`comments` string,
>`acceptance_required_flag` string,
>`acceptance_due_date` date,
>`closed_date` timestamp,
>`user_hold_flag` string,
>`approval_required_flag` string,
>`cancel_flag` string,
>`firm_status_lookup_code` string,
>`firm_date` date,
>`frozen_flag` string,
>`closed_code` string,
>`org_id` bigint,
>`reference_num` string,
>`wf_item_type` string,
>`wf_item_key` string,
>`submit_date` date,
>`sap_company_code` string,
>`sap_fiscal_year` bigint,
>`po_number` string,
>`sap_line_item` bigint,
>`closed_status_flag` string,
>`balancing_segment` string,
>`cost_center_segment` string,
>`base_amount_limit` double,
>`base_blanket_total_amount` double,
>`base_open_amount` double,
>`base_ordered_amount` double,
>`cancel_date` timestamp,
>`cbc_accounting_date` date,
>`change_requested_by` string,
>`change_summary` string,
>`confirming_order_flag` string,
>`document_creation_method` string,
>`edi_processed_flag` string,
>`edi_processed_status` string,
>`enabled_flag` string,
>`encumbrance_required_flag` string,
>`end_date` date,
>`end_date_active` date,
>`from_header_id` bigint,
>`from_type_lookup_code` string,
>`global_agreement_flag` string,
>`government_context` string,
>`interface_source_code` string,
>`ledger_currency_code` string,
>`open_amount` double,
>`ordered_amount` double,
>`pay_on_code` string,
>`payment_term_name` string,
>`pending_signature_flag` string,
>`po_revision_num` double,
>`preparer_id` bigint,
>`price_update_tolerance` double,
>`print_count` double,
>`printed_date` date,
>`reply_date` date,
>`reply_method_lookup_code` string,
>`rfq_close_date` date,
>`segment2` string,
>`segment3` string,
>`segment4` string,
>`segment5` string,
>`shipping_control` string,
>`start_date` date,
>`start_date_active` date,
>`summary_flag` string,
>`supply_agreement_flag` string,
>`usd_amount_limit` double,
>`usd_blanket_total_amount` double,
>`usd_exchange_rate` double,
>`usd_open_amount` double,
>`usd_order_amount` double,
>`ussgl_transaction_code` string,
>`xml_flag` string,
>`purchasing_organization_id` bigint,
>`purchasing_group_code` string,
>`last_updated_by_name` string,
>`created_by_name` string,
>`incoterms_1` string,
>`incoterms_2` string,
>`ame_approval_id` double,
>`ame_transaction_type` string,
>`auto_sourcing_flag` string,
>`cat_admin_auth_enabled_flag` string,
>`clm_document_number` string,
>`comm_rev_num` double,
>`consigned_consumption_flag` string,
>`consume_req_demand_flag` string,
>`conterms_articles_upd_date` timestamp,
>`conterms_deliv_upd_date` timestamp,
>`conterms_exist_flag` string,
>`cpa_reference` double,
>`created_language` string,
>`email_address` string,
>`enable_all_sites` string,
>`fax` string,
>`lock_owner_role` string,
>`lock_owner_user_id` double,
>`min_release_amount` double,
>`mrc_rate` string,
>`mrc_rate_date` string,
>`mrc_rate_type` string,
>`otm_recovery_flag` string,
>`otm_status_code` string,
>`pay_when_paid` string,
>`pcard_id` bigint,
>`program_update_date` timestamp,
>`quotation_class_code` string,
>`quote_type_lookup_code` string,
>

[jira] [Commented] (HIVE-20822) Improvements to push computation to JDBC from Calcite

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667997#comment-16667997
 ] 

Hive QA commented on HIVE-20822:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946112/HIVE-20822.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 105 failed/errored test(s), 15519 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_coltype] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ambiguitycheck] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join1] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_simple_select] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_simple_select] 
(batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_udf] (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynamic_partition_skip_default]
 (batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[external_jdbc_table_perf]
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_join_pushdown] 
(batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_join_preds] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input8] (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_multiskew_2]
 (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[macro] (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[order3] (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcr] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup2] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup3] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup4] 
(batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup] (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_constant_expr] 
(batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_isops_simplify] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_offcbo] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[windowing_duplicate] 
(batchId=33)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=194)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketpruning1]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_simple_select]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[count] 
(batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dec_str] 
(batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] 
(batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin_hint]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_views]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_0]
 (batchId=181)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=115)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_simple_select] 
(batchId=118)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[count] (batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_0]
 (batchId=117)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[pcr] (batchId=137)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=142)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_0] 
(batchId=148)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query11] 
(batchId=274)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query31] 
(batchId=274)

[jira] [Commented] (HIVE-20822) Improvements to push computation to JDBC from Calcite

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667987#comment-16667987
 ] 

Hive QA commented on HIVE-20822:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 2315 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 6 new + 175 unchanged - 0 
fixed = 181 total (was 175) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 552 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
7s{color} | {color:red} The patch has 142 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-14685/dev-support/hive-personality.sh
 |
| git revision | master / 1656e1b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14685/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14685/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14685/yetus/whitespace-tabs.txt
 |
| modules | C: . ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14685/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Improvements to push computation to JDBC from Calcite
> -
>
> Key: HIVE-20822
> URL: https://issues.apache.org/jira/browse/HIVE-20822
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20822.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20807) Refactor LlapStatusServiceDriver

2018-10-29 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20807:
--
Attachment: HIVE-20807.03.patch

> Refactor LlapStatusServiceDriver
> 
>
> Key: HIVE-20807
> URL: https://issues.apache.org/jira/browse/HIVE-20807
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20807.01.patch, HIVE-20807.02.patch, 
> HIVE-20807.03.patch
>
>
> LlapStatusServiceDriver is the class used to determine if LLAP has started. 
> The following problems should be solved by refactoring:
> 1. The main class is more than 800 lines long and should be cut into multiple 
> smaller classes.
> 2. The current design makes it extremely hard to write unit tests.
> 3. There are some overcomplicated, over-engineered parts of the code.
> 4. Most of the code is under org.apache.hadoop.hive.llap.cli, but some parts 
> are under org.apache.hadoop.hive.llap.cli.status. The whole program could be 
> moved to the latter.
> 5. LlapStatusHelpers serves as a class for holding classes, which doesn't 
> make much sense.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20807) Refactor LlapStatusServiceDriver

2018-10-29 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20807:
--
Status: Open  (was: Patch Available)

> Refactor LlapStatusServiceDriver
> 
>
> Key: HIVE-20807
> URL: https://issues.apache.org/jira/browse/HIVE-20807
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20807.01.patch, HIVE-20807.02.patch, 
> HIVE-20807.03.patch
>
>
> LlapStatusServiceDriver is the class used to determine if LLAP has started. 
> The following problems should be solved by refactoring:
> 1. The main class is more than 800 lines long and should be cut into multiple 
> smaller classes.
> 2. The current design makes it extremely hard to write unit tests.
> 3. There are some overcomplicated, over-engineered parts of the code.
> 4. Most of the code is under org.apache.hadoop.hive.llap.cli, but some parts 
> are under org.apache.hadoop.hive.llap.cli.status. The whole program could be 
> moved to the latter.
> 5. LlapStatusHelpers serves as a class for holding classes, which doesn't 
> make much sense.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20807) Refactor LlapStatusServiceDriver

2018-10-29 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20807:
--
Status: Patch Available  (was: Open)

> Refactor LlapStatusServiceDriver
> 
>
> Key: HIVE-20807
> URL: https://issues.apache.org/jira/browse/HIVE-20807
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20807.01.patch, HIVE-20807.02.patch, 
> HIVE-20807.03.patch
>
>
> LlapStatusServiceDriver is the class used to determine if LLAP has started. 
> The following problems should be solved by refactoring:
> 1. The main class is more than 800 lines long and should be cut into multiple 
> smaller classes.
> 2. The current design makes it extremely hard to write unit tests.
> 3. There are some overcomplicated, over-engineered parts of the code.
> 4. Most of the code is under org.apache.hadoop.hive.llap.cli, but some parts 
> are under org.apache.hadoop.hive.llap.cli.status. The whole program could be 
> moved to the latter.
> 5. LlapStatusHelpers serves as a class for holding classes, which doesn't 
> make much sense.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20823) Make Compactor run in a transaction

2018-10-29 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-20823:
--
Status: Patch Available  (was: Open)

01.patch - see what breaks

> Make Compactor run in a transaction
> ---
>
> Key: HIVE-20823
> URL: https://issues.apache.org/jira/browse/HIVE-20823
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20823.01.patch
>
>
> Have compactor open a transaction and run the job in that transaction.
> # make compactor produced base/delta include this txn id in the folder name, 
> e.g. base_7_c17 where 17 is the txnid.
> # add {{CQ_TXN_ID bigint}} to COMPACTION_QUEUE and COMPLETED_COMPACTIONS to 
> record this txn id
> # make sure {{AcidUtils.getAcidState()}} pays attention to this transaction 
> on read and ignores this dir if this txn id is not committed in the current 
> snapshot
> ## this means not only validWriteIdList but ValidTxnIdList should be passed 
> along in config (if it isn't yet)
> # once this is done, {{CompactorMR.createCompactorMarker()}} can be 
> eliminated and {{AcidUtils.isValidBase}} modified accordingly
> # modify Cleaner so that it doesn't clean old files until new file is visible 
> to all readers
> # 
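The directory naming in step 1 and the snapshot check in step 3 can be sketched in a few lines of Java. `CompactorDirName`, its method names, and the `Set<Long>` standing in for `ValidTxnIdList` are illustrative assumptions, not Hive's actual implementation:

```java
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class CompactorDirName {

    private static final Pattern BASE = Pattern.compile("base_(\\d+)_c(\\d+)");

    // Parses a compactor-produced base directory name such as "base_7_c17"
    // (writeId 7, compactor txnId 17, per the naming proposed in step 1).
    // Returns {writeId, compactorTxnId}, or null if the name does not match.
    public static long[] parseBase(String dirName) {
        Matcher m = BASE.matcher(dirName);
        if (!m.matches()) {
            return null;
        }
        return new long[] {Long.parseLong(m.group(1)), Long.parseLong(m.group(2))};
    }

    // Step 3: a reader ignores the directory unless the compactor txn that
    // produced it is committed in the reader's snapshot. The committed-txn
    // set here is a stand-in for ValidTxnIdList.
    public static boolean isVisible(String dirName, Set<Long> committedTxns) {
        long[] parsed = parseBase(dirName);
        return parsed != null && committedTxns.contains(parsed[1]);
    }
}
```

A reader whose snapshot does not include txn 17 would skip `base_7_c17` and read the previous base instead, which is also why the Cleaner (last step) must not remove old files until the new base is visible to all readers.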



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20823) Make Compactor run in a transaction

2018-10-29 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-20823:
--
Attachment: HIVE-20823.01.patch

> Make Compactor run in a transaction
> ---
>
> Key: HIVE-20823
> URL: https://issues.apache.org/jira/browse/HIVE-20823
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20823.01.patch
>
>
> Have compactor open a transaction and run the job in that transaction.
> # make compactor produced base/delta include this txn id in the folder name, 
> e.g. base_7_c17 where 17 is the txnid.
> # add {{CQ_TXN_ID bigint}} to COMPACTION_QUEUE and COMPLETED_COMPACTIONS to 
> record this txn id
> # make sure {{AcidUtils.getAcidState()}} pays attention to this transaction 
> on read and ignores this dir if this txn id is not committed in the current 
> snapshot
> ## this means not only validWriteIdList but ValidTxnIdList should be passed 
> along in config (if it isn't yet)
> # once this is done, {{CompactorMR.createCompactorMarker()}} can be 
> eliminated and {{AcidUtils.isValidBase}} modified accordingly
> # modify Cleaner so that it doesn't clean old files until new file is visible 
> to all readers
> # 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-20833:
--

Assignee: Vineet Garg

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch
>
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: HIVE-20833.1.patch

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch
>
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: HIVE-20833.1.patch

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
>
> The following test fails when run with TestMiniLlapLocalCliDriver:
> {code:sql}
> CREATE TABLE `alterPartTbl`(
>`po_header_id` bigint,
>`vendor_num` string,
>`requester_name` string,
>`approver_name` string,
>`buyer_name` string,
>`preparer_name` string,
>`po_requisition_number` string,
>`po_requisition_id` bigint,
>`po_requisition_desc` string,
>`rate_type` string,
>`rate_date` date,
>`rate` double,
>`blanket_total_amount` double,
>`authorization_status` string,
>`revision_num` bigint,
>`revised_date` date,
>`approved_flag` string,
>`approved_date` timestamp,
>`amount_limit` double,
>`note_to_authorizer` string,
>`note_to_vendor` string,
>`note_to_receiver` string,
>`vendor_order_num` string,
>`comments` string,
>`acceptance_required_flag` string,
>`acceptance_due_date` date,
>`closed_date` timestamp,
>`user_hold_flag` string,
>`approval_required_flag` string,
>`cancel_flag` string,
>`firm_status_lookup_code` string,
>`firm_date` date,
>`frozen_flag` string,
>`closed_code` string,
>`org_id` bigint,
>`reference_num` string,
>`wf_item_type` string,
>`wf_item_key` string,
>`submit_date` date,
>`sap_company_code` string,
>`sap_fiscal_year` bigint,
>`po_number` string,
>`sap_line_item` bigint,
>`closed_status_flag` string,
>`balancing_segment` string,
>`cost_center_segment` string,
>`base_amount_limit` double,
>`base_blanket_total_amount` double,
>`base_open_amount` double,
>`base_ordered_amount` double,
>`cancel_date` timestamp,
>`cbc_accounting_date` date,
>`change_requested_by` string,
>`change_summary` string,
>`confirming_order_flag` string,
>`document_creation_method` string,
>`edi_processed_flag` string,
>`edi_processed_status` string,
>`enabled_flag` string,
>`encumbrance_required_flag` string,
>`end_date` date,
>`end_date_active` date,
>`from_header_id` bigint,
>`from_type_lookup_code` string,
>`global_agreement_flag` string,
>`government_context` string,
>`interface_source_code` string,
>`ledger_currency_code` string,
>`open_amount` double,
>`ordered_amount` double,
>`pay_on_code` string,
>`payment_term_name` string,
>`pending_signature_flag` string,
>`po_revision_num` double,
>`preparer_id` bigint,
>`price_update_tolerance` double,
>`print_count` double,
>`printed_date` date,
>`reply_date` date,
>`reply_method_lookup_code` string,
>`rfq_close_date` date,
>`segment2` string,
>`segment3` string,
>`segment4` string,
>`segment5` string,
>`shipping_control` string,
>`start_date` date,
>`start_date_active` date,
>`summary_flag` string,
>`supply_agreement_flag` string,
>`usd_amount_limit` double,
>`usd_blanket_total_amount` double,
>`usd_exchange_rate` double,
>`usd_open_amount` double,
>`usd_order_amount` double,
>`ussgl_transaction_code` string,
>`xml_flag` string,
>`purchasing_organization_id` bigint,
>`purchasing_group_code` string,
>`last_updated_by_name` string,
>`created_by_name` string,
>`incoterms_1` string,
>`incoterms_2` string,
>`ame_approval_id` double,
>`ame_transaction_type` string,
>`auto_sourcing_flag` string,
>`cat_admin_auth_enabled_flag` string,
>`clm_document_number` string,
>`comm_rev_num` double,
>`consigned_consumption_flag` string,
>`consume_req_demand_flag` string,
>`conterms_articles_upd_date` timestamp,
>`conterms_deliv_upd_date` timestamp,
>`conterms_exist_flag` string,
>`cpa_reference` double,
>`created_language` string,
>`email_address` string,
>`enable_all_sites` string,
>`fax` string,
>`lock_owner_role` string,
>`lock_owner_user_id` double,
>`min_release_amount` double,
>`mrc_rate` string,
>`mrc_rate_date` string,
>`mrc_rate_type` string,
>`otm_recovery_flag` string,
>`otm_status_code` string,
>`pay_when_paid` string,
>`pcard_id` bigint,
>`program_update_date` timestamp,
>`quotation_class_code` string,
>`quote_type_lookup_code` string,
>`quote_vendor_quote_number` string,
>`quote_warning_delay` double,
>`quote_warning_delay_unit` 
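
The thread does not show the reproduction command, but Hive q-file tests are conventionally run from the itests directory with a Maven invocation like the sketch below. The q-file name `alter_part_tbl.q` is a placeholder, not taken from this thread:

```shell
# Build the Maven invocation for running a single q-file with
# TestMiniLlapLocalCliDriver (conventionally run from itests/qtest
# in a Hive checkout). QFILE is a placeholder, not from the thread.
QFILE=alter_part_tbl.q
CMD="mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqtest=${QFILE}"
echo "$CMD"
```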

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: (was: HIVE-20833.1.patch)

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: (was: HIVE-20833.1.patch)

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: HIVE-20833.1.patch

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
>

[jira] [Updated] (HIVE-20778) Join reordering may not be triggered if all joins in plan are created by decorrelation logic

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20778:
---
Attachment: HIVE-20778.4.patch

> Join reordering may not be triggered if all joins in plan are created by 
> decorrelation logic
> 
>
> Key: HIVE-20778
> URL: https://issues.apache.org/jira/browse/HIVE-20778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20778.1.patch, HIVE-20778.2.patch, 
> HIVE-20778.3.patch, HIVE-20778.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20778) Join reordering may not be triggered if all joins in plan are created by decorrelation logic

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20778:
---
Status: Open  (was: Patch Available)



[jira] [Updated] (HIVE-20778) Join reordering may not be triggered if all joins in plan are created by decorrelation logic

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20778:
---
Status: Patch Available  (was: Open)



[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: (was: first.patch)

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: (was: HIVE-20833.1.patch)

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
>

[jira] [Commented] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667912#comment-16667912
 ] 

Jason Dere commented on HIVE-20833:
---

Thanks for finding this. +1

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch, first.patch
>
>
>`auto_sourcing_flag` string,
>`cat_admin_auth_enabled_flag` string,
>`clm_document_number` string,
>`comm_rev_num` double,
>`consigned_consumption_flag` string,
>`consume_req_demand_flag` string,
>`conterms_articles_upd_date` timestamp,
>`conterms_deliv_upd_date` timestamp,
>`conterms_exist_flag` string,
>`cpa_reference` double,
>`created_language` string,
>`email_address` string,
>`enable_all_sites` string,
>`fax` string,
>`lock_owner_role` string,
>`lock_owner_user_id` double,
>`min_release_amount` double,
>`mrc_rate` string,
>`mrc_rate_date` string,
>`mrc_rate_type` string,
>`otm_recovery_flag` string,
>`otm_status_code` string,
>`pay_when_paid` string,
>`pcard_id` bigint,
>`program_update_date` timestamp,
>`quotation_class_code` string,
>`quote_type_lookup_code` string,
>

[jira] [Resolved] (HIVE-17042) Expose NOT NULL constraint in optimizer so constant folding can take advantage of it

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg resolved HIVE-17042.

Resolution: Fixed

Fixed in HIVE-17043

> Expose NOT NULL constraint in optimizer so constant folding can take 
> advantage of it
> 
>
> Key: HIVE-17042
> URL: https://issues.apache.org/jira/browse/HIVE-17042
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Priority: Major
>
> We need to mark the type of those columns as not nullable in the optimizer.
> Among other things, this would make it possible to simplify IS NOT NULL, NVL,
> and COALESCE predicates.
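> For illustration (a hypothetical example, not taken from this issue), once the
> optimizer knows a column is NOT NULL it can constant-fold predicates over it:
> {code:sql}
> -- Assuming t.col is declared NOT NULL:
> SELECT * FROM t WHERE col IS NOT NULL;  -- the predicate folds to TRUE
> SELECT COALESCE(col, 0) FROM t;         -- COALESCE(col, 0) folds to col
> {code}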



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20834) Hive QueryResultCache entries keeping reference to SemanticAnalyzer from cached query

2018-10-29 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-20834:
--
Attachment: dominator_tree.png

> Hive QueryResultCache entries keeping reference to SemanticAnalyzer from 
> cached query
> -
>
> Key: HIVE-20834
> URL: https://issues.apache.org/jira/browse/HIVE-20834
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: dominator_tree.png
>
>
> QueryResultCache.LookupInfo ends up keeping a reference to the 
> SemanticAnalyzer from the cached query, for as long as the entry remains in 
> the cache. We should not be keeping the SemanticAnalyzer around after the 
> query is done executing, since it can hold on to quite a bit of memory.





[jira] [Updated] (HIVE-17041) Aggregate elimination with UNIQUE and NOT NULL column

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17041:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

This was fixed in HIVE-17043

> Aggregate elimination with UNIQUE and NOT NULL column
> -
>
> Key: HIVE-17041
> URL: https://issues.apache.org/jira/browse/HIVE-17041
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17041.1.patch
>
>
> If columns are part of a GROUP BY expression and they are UNIQUE and do not 
> accept NULL values, i.e. PK or UK+NOTNULL, the _Aggregate_ operator can be 
> transformed into a Project operator, as each row will end up in a different 
> group.
> For instance, given that _pk_ is the PRIMARY KEY of the table, the GROUP BY 
> could be removed from the following query:
> {code:sql}
> SELECT pk, value1
> FROM table_1
> GROUP BY value1, pk, value2;
> {code}
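> Since _pk_ is unique and not null, each group contains exactly one row, so the
> query above is equivalent to a plain projection (an illustrative rewrite, not
> taken from the patch):
> {code:sql}
> SELECT pk, value1
> FROM table_1;
> {code}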





[jira] [Assigned] (HIVE-20834) Hive QueryResultCache entries keeping reference to SemanticAnalyzer from cached query

2018-10-29 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-20834:
-


> Hive QueryResultCache entries keeping reference to SemanticAnalyzer from 
> cached query
> -
>
> Key: HIVE-20834
> URL: https://issues.apache.org/jira/browse/HIVE-20834
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> QueryResultCache.LookupInfo ends up keeping a reference to the 
> SemanticAnalyzer from the cached query, for as long as the entry remains in 
> the cache. We should not be keeping the SemanticAnalyzer around after the 
> query is done executing, since it can hold on to quite a bit of memory.





[jira] [Commented] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667898#comment-16667898
 ] 

Vineet Garg commented on HIVE-20833:


Uploaded a patch which should fix the issue.

 

cc [~jdere]

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch, first.patch
>
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Status: Patch Available  (was: Open)

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch, first.patch
>
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: HIVE-20833.1.patch

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: HIVE-20833.1.patch, first.patch
>
>

[jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667884#comment-16667884
 ] 

Hive QA commented on HIVE-20512:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946097/HIVE-20512.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15518 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/14684/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14684/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14684/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946097 - PreCommit-HIVE-Build

> Improve record and memory usage logging in SparkRecordHandler
> -
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, 
> HIVE-20512.3.patch, HIVE-20512.4.patch, HIVE-20512.5.patch
>
>
> We currently log memory usage and the number of records processed in Spark 
> tasks, but we should improve how frequently we log this info. Currently we 
> use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of the number of rows processed by
>   // the reducer. It dumps every 1 million times, and quickly before that.
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
> The issue is that, after a while, the 10x growth factor means that a huge 
> number of records must be processed before the next log line is emitted.
> A better approach would be to log this info at a fixed time interval, which 
> would help in debugging tasks that are seemingly hung.
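A minimal sketch of the interval-based alternative described above (illustrative only; the class and field names below are hypothetical and not part of the actual patch):

```java
// Illustrative sketch of time-interval-based log throttling, as suggested
// in the description above. Names here are made up, not from Hive.
public class IntervalLogger {
  private final long logIntervalMs;
  private long lastLogTimeMs;

  public IntervalLogger(long logIntervalMs, long startTimeMs) {
    this.logIntervalMs = logIntervalMs;
    this.lastLogTimeMs = startTimeMs;
  }

  // Returns true when at least logIntervalMs has elapsed since the last
  // emitted log line, regardless of how many records were processed.
  public boolean shouldLog(long nowMs) {
    if (nowMs - lastLogTimeMs >= logIntervalMs) {
      lastLogTimeMs = nowMs;
      return true;
    }
    return false;
  }
}
```

Checked on each record processed, this emits at most one progress line per interval, instead of requiring 10x more records between log lines each time the threshold grows.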





[jira] [Commented] (HIVE-20815) JdbcRecordReader.next shall not eat exception

2018-10-29 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667865#comment-16667865
 ] 

Thejas M Nair commented on HIVE-20815:
--

+1 pending tests



> JdbcRecordReader.next shall not eat exception
> -
>
> Key: HIVE-20815
> URL: https://issues.apache.org/jira/browse/HIVE-20815
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20815.1.patch, HIVE-20815.2.patch
>
>






[jira] [Commented] (HIVE-20830) JdbcStorageHandler range query assertion failure in some cases

2018-10-29 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667864#comment-16667864
 ] 

Thejas M Nair commented on HIVE-20830:
--

+1 pending tests

> JdbcStorageHandler range query assertion failure in some cases
> --
>
> Key: HIVE-20830
> URL: https://issues.apache.org/jira/browse/HIVE-20830
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20830.1.patch
>
>
> {code}
> 2018-10-29T10:10:16,325 ERROR [b4bf5eb2-a986-4aae-908e-93b9908acd32 
> HiveServer2-HttpHandler-Pool: Thread-124]: dao.GenericJdbcDatabaseAccessor 
> (:()) - Caught exception while trying to execute query
> java.lang.IllegalArgumentException: null
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:108) 
> ~[guava-19.0.jar:?]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.addBoundaryToQuery(GenericJdbcDatabaseAccessor.java:238)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getRecordIterator(GenericJdbcDatabaseAccessor.java:161)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:58) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:35) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:569)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2734) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:790)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.server.TServlet.doPost(TServlet.java:83) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:208)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>  ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  

[jira] [Commented] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667861#comment-16667861
 ] 

Thejas M Nair commented on HIVE-20829:
--

+1 pending tests


> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, 
> 

[jira] [Updated] (HIVE-20820) MV partition on clause position

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20820:
---
Description: 
It should obey the following syntax as per 
https://cwiki.apache.org/confluence/display/Hive/Materialized+views :
{code}
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
  [DISABLE REWRITE]
  [COMMENT materialized_view_comment]
  [PARTITIONED ON (col_name, ...)]
  [
[ROW FORMAT row_format]
[STORED AS file_format]
  | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]
AS
<query>;
{code}
Currently it is positioned just before TBLPROPERTIES.

  was:
It should obey the following syntax as per 
https://cwiki.apache.org/confluence/display/Hive/Materialized+views:
{code}
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
  [DISABLE REWRITE]
  [COMMENT materialized_view_comment]
  [PARTITIONED ON (col_name, ...)]
  [
[ROW FORMAT row_format]
[STORED AS file_format]
  | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]
AS
<query>;
{code}
Currently it is positioned just before TBLPROPERTIES.


> MV partition on clause position
> ---
>
> Key: HIVE-20820
> URL: https://issues.apache.org/jira/browse/HIVE-20820
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20820.patch
>
>
> It should obey the following syntax as per 
> https://cwiki.apache.org/confluence/display/Hive/Materialized+views :
> {code}
> CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
>   [DISABLE REWRITE]
>   [COMMENT materialized_view_comment]
>   [PARTITIONED ON (col_name, ...)]
>   [
> [ROW FORMAT row_format]
> [STORED AS file_format]
>   | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
>   ]
>   [LOCATION hdfs_path]
>   [TBLPROPERTIES (property_name=property_value, ...)]
> AS
> <query>;
> {code}
> Currently it is positioned just before TBLPROPERTIES.
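For concreteness, a statement that follows the grammar above might look like this (the database, table, and column names are made up for illustration):

{code:sql}
CREATE MATERIALIZED VIEW IF NOT EXISTS sales_db.mv_daily_totals
  COMMENT 'Daily sales totals per store'
  PARTITIONED ON (sale_date)
  STORED AS ORC
AS
SELECT store_id, SUM(amount) AS total_amount, sale_date
FROM sales_db.sales
GROUP BY store_id, sale_date;
{code}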



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20820) MV partition on clause position

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20820:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, branch-3. Thanks [~vgarg]

> MV partition on clause position
> ---
>
> Key: HIVE-20820
> URL: https://issues.apache.org/jira/browse/HIVE-20820
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20820.patch
>
>
> It should obey the following syntax as per 
> https://cwiki.apache.org/confluence/display/Hive/Materialized+views:
> {code}
> CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
>   [DISABLE REWRITE]
>   [COMMENT materialized_view_comment]
>   [PARTITIONED ON (col_name, ...)]
>   [
> [ROW FORMAT row_format]
> [STORED AS file_format]
>   | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
>   ]
>   [LOCATION hdfs_path]
>   [TBLPROPERTIES (property_name=property_value, ...)]
> AS
> <query>;
> {code}
> Currently it is positioned just before TBLPROPERTIES.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20827) Inconsistent results for empty arrays

2018-10-29 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667854#comment-16667854
 ] 

ASF GitHub Bot commented on HIVE-20827:
---

GitHub user pudidic opened a pull request:

https://github.com/apache/hive/pull/480

HIVE-20827: Inconsistent results for empty arrays (Teddy Choi)

Signed-off-by: Teddy Choi 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pudidic/hive HIVE-20827

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/480.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #480


commit 7b4ef6676dcc103d6149eda0365541cb3ba8cabd
Author: Teddy Choi 
Date:   2018-10-29T22:47:14Z

HIVE-20827: Inconsistent results for empty arrays (Teddy Choi)

Signed-off-by: Teddy Choi 




> Inconsistent results for empty arrays
> -
>
> Key: HIVE-20827
> URL: https://issues.apache.org/jira/browse/HIVE-20827
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20827.1.patch
>
>
> LazySimpleDeserializeRead parses an empty array incorrectly. For example, a line 
> ',' in a text file table with a delimiter ',' and schema 'array<string>, 
> array<array<string>>' shows \[null\], \[\[""\]\], instead of \[\], \[\] with 
> MapReduce engine and vectorized execution enabled. LazySimpleDeserializeRead 
> has the following code:
> {code:java}
> switch (complexField.complexCategory) {
> case LIST:
>   {
> // Allow for empty string, etc.
> final boolean isNext = (fieldPosition <= complexFieldEnd);
> {code}
> The empty-string value read should only be applied to string families, not to 
> other data types. 
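The inconsistency can be reproduced in miniature outside Hive. The sketch below is plain Python, not the LazySimpleDeserializeRead code; it contrasts a parser that special-cases an empty field (returning an empty list) with a naive one that always splits and therefore yields [""]:

```python
# Hive's default collection-item delimiter is Ctrl-B (\x02);
# the field delimiter in the example table is ','.
ITEM_DELIM = "\x02"

def parse_list_field(field):
    """Intended behavior: an empty complex field is an empty list []."""
    if field == "":
        return []
    return field.split(ITEM_DELIM)

def buggy_parse_list_field(field):
    """The bug class described above: splitting "" yields [""]
    (a one-element list holding an empty string), not []."""
    return field.split(ITEM_DELIM)

# A line "," in the example table carries two empty array fields.
line = ","
fields = line.split(",")
print([parse_list_field(f) for f in fields])        # [[], []]
print([buggy_parse_list_field(f) for f in fields])  # [[''], ['']]
```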



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20827) Inconsistent results for empty arrays

2018-10-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-20827:
--
Labels: pull-request-available  (was: )

> Inconsistent results for empty arrays
> -
>
> Key: HIVE-20827
> URL: https://issues.apache.org/jira/browse/HIVE-20827
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20827.1.patch
>
>
> LazySimpleDeserializeRead parses an empty array wrong. For example, a line 
> ',' in a text file table with a delimiter ',' and schema 'array<string>, 
> array<array<string>>' shows \[null\], \[\[""\]\], instead of \[\], \[\] with 
> MapReduce engine and vectorized execution enabled. LazySimpleDeserializeRead 
> has the following code:
> {code:java}
> switch (complexField.complexCategory) {
> case LIST:
>   {
> // Allow for empty string, etc.
> final boolean isNext = (fieldPosition <= complexFieldEnd);
> {code}
> The empty-string value read should only be applied to string families, not to 
> other data types. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667853#comment-16667853
 ] 

Hive QA commented on HIVE-20512:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
33s{color} | {color:blue} ql in master has 2315 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} ql: The patch generated 0 new + 4 unchanged - 5 
fixed = 4 total (was 9) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
55s{color} | {color:red} ql generated 1 new + 2315 unchanged - 0 fixed = 2316 
total (was 2315) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Increment of volatile field 
org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler.rowNumber in 
org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler.incrementRowNumber()  
At SparkRecordHandler.java:in 
org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler.incrementRowNumber()  
At SparkRecordHandler.java:[line 109] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-14684/dev-support/hive-personality.sh
 |
| git revision | master / 64bea03 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14684/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14684/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Improve record and memory usage logging in SparkRecordHandler
> -
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, 
> HIVE-20512.3.patch, HIVE-20512.4.patch, HIVE-20512.5.patch
>
>
> We currently log memory usage and # of records processed in Spark tasks, but 
> we should improve the methodology for how frequently we log this info. 
> Currently we use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
> // A very simple counter to keep track of number of rows processed by the
> // reducer. It dumps
> // every 1 million times, and quickly before that
> if (currentThreshold >= 1000000) {
>   return currentThreshold + 1000000;
> }
> return 10 * currentThreshold;
>   }
> {code}
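Per the comment, the intent is a geometric ramp (1, 10, 100, ...) that switches to fixed million-row strides once large. A minimal Python sketch of that progression; the 1,000,000 constant is taken from the comment, since the literal in the quoted snippet appears truncated:

```python
CAP = 1_000_000  # "every 1 million times" per the comment above

def next_log_threshold(current):
    """Next row count at which to log: 10x growth while small,
    then fixed 1,000,000-row strides once past the cap."""
    if current >= CAP:
        return current + CAP
    return 10 * current

# First few logging points starting from row 1:
t, points = 1, []
for _ in range(9):
    points.append(t)
    t = next_log_threshold(t)
print(points)  # [1, 10, 100, 1000, 10000, 100000, 1000000, 2000000, 3000000]
```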

[jira] [Commented] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667846#comment-16667846
 ] 

Vineet Garg commented on HIVE-20833:


[~kgyrtkirk] Would you mind taking a look? I tried updating package.jdo to 
change the datatype from varchar(4000) to CLOB but other queries are failing.

I have attached a patch which contains the test as well as the change to 
package.jdo. If you run the test it will fail.

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: first.patch
>
>
> The following test, if run with TestMiniLlapLocalCliDriver, will fail:
> {code:sql}
> CREATE TABLE `alterPartTbl`(
>`po_header_id` bigint,
>`vendor_num` string,
>`requester_name` string,
>`approver_name` string,
>`buyer_name` string,
>`preparer_name` string,
>`po_requisition_number` string,
>`po_requisition_id` bigint,
>`po_requisition_desc` string,
>`rate_type` string,
>`rate_date` date,
>`rate` double,
>`blanket_total_amount` double,
>`authorization_status` string,
>`revision_num` bigint,
>`revised_date` date,
>`approved_flag` string,
>`approved_date` timestamp,
>`amount_limit` double,
>`note_to_authorizer` string,
>`note_to_vendor` string,
>`note_to_receiver` string,
>`vendor_order_num` string,
>`comments` string,
>`acceptance_required_flag` string,
>`acceptance_due_date` date,
>`closed_date` timestamp,
>`user_hold_flag` string,
>`approval_required_flag` string,
>`cancel_flag` string,
>`firm_status_lookup_code` string,
>`firm_date` date,
>`frozen_flag` string,
>`closed_code` string,
>`org_id` bigint,
>`reference_num` string,
>`wf_item_type` string,
>`wf_item_key` string,
>`submit_date` date,
>`sap_company_code` string,
>`sap_fiscal_year` bigint,
>`po_number` string,
>`sap_line_item` bigint,
>`closed_status_flag` string,
>`balancing_segment` string,
>`cost_center_segment` string,
>`base_amount_limit` double,
>`base_blanket_total_amount` double,
>`base_open_amount` double,
>`base_ordered_amount` double,
>`cancel_date` timestamp,
>`cbc_accounting_date` date,
>`change_requested_by` string,
>`change_summary` string,
>`confirming_order_flag` string,
>`document_creation_method` string,
>`edi_processed_flag` string,
>`edi_processed_status` string,
>`enabled_flag` string,
>`encumbrance_required_flag` string,
>`end_date` date,
>`end_date_active` date,
>`from_header_id` bigint,
>`from_type_lookup_code` string,
>`global_agreement_flag` string,
>`government_context` string,
>`interface_source_code` string,
>`ledger_currency_code` string,
>`open_amount` double,
>`ordered_amount` double,
>`pay_on_code` string,
>`payment_term_name` string,
>`pending_signature_flag` string,
>`po_revision_num` double,
>`preparer_id` bigint,
>`price_update_tolerance` double,
>`print_count` double,
>`printed_date` date,
>`reply_date` date,
>`reply_method_lookup_code` string,
>`rfq_close_date` date,
>`segment2` string,
>`segment3` string,
>`segment4` string,
>`segment5` string,
>`shipping_control` string,
>`start_date` date,
>`start_date_active` date,
>`summary_flag` string,
>`supply_agreement_flag` string,
>`usd_amount_limit` double,
>`usd_blanket_total_amount` double,
>`usd_exchange_rate` double,
>`usd_open_amount` double,
>`usd_order_amount` double,
>`ussgl_transaction_code` string,
>`xml_flag` string,
>`purchasing_organization_id` bigint,
>`purchasing_group_code` string,
>`last_updated_by_name` string,
>`created_by_name` string,
>`incoterms_1` string,
>`incoterms_2` string,
>`ame_approval_id` double,
>`ame_transaction_type` string,
>`auto_sourcing_flag` string,
>`cat_admin_auth_enabled_flag` string,
>`clm_document_number` string,
>`comm_rev_num` double,
>`consigned_consumption_flag` string,
>`consume_req_demand_flag` string,
>`conterms_articles_upd_date` timestamp,
>`conterms_deliv_upd_date` timestamp,
>`conterms_exist_flag` string,
>`cpa_reference` double,
>`created_language` string,
>`email_address` string,
>`enable_all_sites` string,
>`fax` string,
>`lock_owner_role` string,
>`lock_owner_user_id` double,
>`min_release_amount` double,
>`mrc_rate` string,
>`mrc_rate_date` string,
>`mrc_rate_type` string,
>

[jira] [Updated] (HIVE-20833) package.jdo needs to be updated to conform with HIVE-20221 changes

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20833:
---
Attachment: first.patch

> package.jdo needs to be updated to conform with HIVE-20221 changes
> --
>
> Key: HIVE-20833
> URL: https://issues.apache.org/jira/browse/HIVE-20833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Priority: Major
> Attachments: first.patch
>
>
> The following test, if run with TestMiniLlapLocalCliDriver, will fail:
> {code:sql}
> CREATE TABLE `alterPartTbl`(
>`po_header_id` bigint,
>`vendor_num` string,
>`requester_name` string,
>`approver_name` string,
>`buyer_name` string,
>`preparer_name` string,
>`po_requisition_number` string,
>`po_requisition_id` bigint,
>`po_requisition_desc` string,
>`rate_type` string,
>`rate_date` date,
>`rate` double,
>`blanket_total_amount` double,
>`authorization_status` string,
>`revision_num` bigint,
>`revised_date` date,
>`approved_flag` string,
>`approved_date` timestamp,
>`amount_limit` double,
>`note_to_authorizer` string,
>`note_to_vendor` string,
>`note_to_receiver` string,
>`vendor_order_num` string,
>`comments` string,
>`acceptance_required_flag` string,
>`acceptance_due_date` date,
>`closed_date` timestamp,
>`user_hold_flag` string,
>`approval_required_flag` string,
>`cancel_flag` string,
>`firm_status_lookup_code` string,
>`firm_date` date,
>`frozen_flag` string,
>`closed_code` string,
>`org_id` bigint,
>`reference_num` string,
>`wf_item_type` string,
>`wf_item_key` string,
>`submit_date` date,
>`sap_company_code` string,
>`sap_fiscal_year` bigint,
>`po_number` string,
>`sap_line_item` bigint,
>`closed_status_flag` string,
>`balancing_segment` string,
>`cost_center_segment` string,
>`base_amount_limit` double,
>`base_blanket_total_amount` double,
>`base_open_amount` double,
>`base_ordered_amount` double,
>`cancel_date` timestamp,
>`cbc_accounting_date` date,
>`change_requested_by` string,
>`change_summary` string,
>`confirming_order_flag` string,
>`document_creation_method` string,
>`edi_processed_flag` string,
>`edi_processed_status` string,
>`enabled_flag` string,
>`encumbrance_required_flag` string,
>`end_date` date,
>`end_date_active` date,
>`from_header_id` bigint,
>`from_type_lookup_code` string,
>`global_agreement_flag` string,
>`government_context` string,
>`interface_source_code` string,
>`ledger_currency_code` string,
>`open_amount` double,
>`ordered_amount` double,
>`pay_on_code` string,
>`payment_term_name` string,
>`pending_signature_flag` string,
>`po_revision_num` double,
>`preparer_id` bigint,
>`price_update_tolerance` double,
>`print_count` double,
>`printed_date` date,
>`reply_date` date,
>`reply_method_lookup_code` string,
>`rfq_close_date` date,
>`segment2` string,
>`segment3` string,
>`segment4` string,
>`segment5` string,
>`shipping_control` string,
>`start_date` date,
>`start_date_active` date,
>`summary_flag` string,
>`supply_agreement_flag` string,
>`usd_amount_limit` double,
>`usd_blanket_total_amount` double,
>`usd_exchange_rate` double,
>`usd_open_amount` double,
>`usd_order_amount` double,
>`ussgl_transaction_code` string,
>`xml_flag` string,
>`purchasing_organization_id` bigint,
>`purchasing_group_code` string,
>`last_updated_by_name` string,
>`created_by_name` string,
>`incoterms_1` string,
>`incoterms_2` string,
>`ame_approval_id` double,
>`ame_transaction_type` string,
>`auto_sourcing_flag` string,
>`cat_admin_auth_enabled_flag` string,
>`clm_document_number` string,
>`comm_rev_num` double,
>`consigned_consumption_flag` string,
>`consume_req_demand_flag` string,
>`conterms_articles_upd_date` timestamp,
>`conterms_deliv_upd_date` timestamp,
>`conterms_exist_flag` string,
>`cpa_reference` double,
>`created_language` string,
>`email_address` string,
>`enable_all_sites` string,
>`fax` string,
>`lock_owner_role` string,
>`lock_owner_user_id` double,
>`min_release_amount` double,
>`mrc_rate` string,
>`mrc_rate_date` string,
>`mrc_rate_type` string,
>`otm_recovery_flag` string,
>`otm_status_code` string,
>`pay_when_paid` string,
>`pcard_id` bigint,
>`program_update_date` timestamp,
>`quotation_class_code` string,
>`quote_type_lookup_code` string,
>`quote_vendor_quote_number` string,
>`quote_warning_delay` double,
>  

[jira] [Updated] (HIVE-20827) Inconsistent results for empty arrays

2018-10-29 Thread Teddy Choi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-20827:
--
Status: Patch Available  (was: Open)

> Inconsistent results for empty arrays
> -
>
> Key: HIVE-20827
> URL: https://issues.apache.org/jira/browse/HIVE-20827
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-20827.1.patch
>
>
> LazySimpleDeserializeRead parses an empty array wrong. For example, a line 
> ',' in a text file table with a delimiter ',' and schema 'array<string>, 
> array<array<string>>' shows \[null\], \[\[""\]\], instead of \[\], \[\] with 
> MapReduce engine and vectorized execution enabled. LazySimpleDeserializeRead 
> has the following code:
> {code:java}
> switch (complexField.complexCategory) {
> case LIST:
>   {
> // Allow for empty string, etc.
> final boolean isNext = (fieldPosition <= complexFieldEnd);
> {code}
> The empty-string value read should only be applied to string families, not to 
> other data types. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20830) JdbcStorageHandler range query assertion failure in some cases

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20830:
--
Status: Patch Available  (was: Open)

> JdbcStorageHandler range query assertion failure in some cases
> --
>
> Key: HIVE-20830
> URL: https://issues.apache.org/jira/browse/HIVE-20830
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20830.1.patch
>
>
> {code}
> 2018-10-29T10:10:16,325 ERROR [b4bf5eb2-a986-4aae-908e-93b9908acd32 
> HiveServer2-HttpHandler-Pool: Thread-124]: dao.GenericJdbcDatabaseAccessor 
> (:()) - Caught exception while trying to execute query
> java.lang.IllegalArgumentException: null
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:108) 
> ~[guava-19.0.jar:?]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.addBoundaryToQuery(GenericJdbcDatabaseAccessor.java:238)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getRecordIterator(GenericJdbcDatabaseAccessor.java:161)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:58) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:35) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:569)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2734) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:790)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.server.TServlet.doPost(TServlet.java:83) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:208)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>  ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  
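The failure is a Guava Preconditions.checkArgument call firing inside addBoundaryToQuery while Hive computes per-split key boundaries. A hedged Python sketch (not the Hive code) of how splitting a key range can produce a degenerate interval that such a guard rejects:

```python
def split_range(lower, upper, num_splits):
    """Split [lower, upper) into num_splits contiguous sub-ranges.

    When upper - lower < num_splits, integer division yields
    zero-width intervals; a guard like Guava's checkArgument would
    reject those before they reach the SQL query builder.
    """
    if upper <= lower:
        raise ValueError("empty range")  # analogous to checkArgument
    width = (upper - lower) // num_splits
    bounds = []
    for i in range(num_splits):
        lo = lower + i * width
        hi = upper if i == num_splits - 1 else lo + width
        if lo >= hi:
            raise ValueError("degenerate split")  # the failing case
        bounds.append((lo, hi))
    return bounds

print(split_range(0, 10, 3))  # [(0, 3), (3, 6), (6, 10)]
```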

[jira] [Updated] (HIVE-20830) JdbcStorageHandler range query assertion failure in some cases

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20830:
--
Attachment: HIVE-20830.1.patch

> JdbcStorageHandler range query assertion failure in some cases
> --
>
> Key: HIVE-20830
> URL: https://issues.apache.org/jira/browse/HIVE-20830
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20830.1.patch
>
>
> {code}
> 2018-10-29T10:10:16,325 ERROR [b4bf5eb2-a986-4aae-908e-93b9908acd32 
> HiveServer2-HttpHandler-Pool: Thread-124]: dao.GenericJdbcDatabaseAccessor 
> (:()) - Caught exception while trying to execute query
> java.lang.IllegalArgumentException: null
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:108) 
> ~[guava-19.0.jar:?]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.addBoundaryToQuery(GenericJdbcDatabaseAccessor.java:238)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getRecordIterator(GenericJdbcDatabaseAccessor.java:161)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:58) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:35) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:569)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2734) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:790)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.server.TServlet.doPost(TServlet.java:83) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:208)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>  ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  

[jira] [Updated] (HIVE-20707) Automatic partition management

2018-10-29 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20707:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks Jason for the review!

> Automatic partition management
> --
>
> Key: HIVE-20707
> URL: https://issues.apache.org/jira/browse/HIVE-20707
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20702.3.patch, HIVE-20707.1.patch, 
> HIVE-20707.2.patch, HIVE-20707.4.patch, HIVE-20707.5.patch, 
> HIVE-20707.6.patch, HIVE-20707.6.patch, HIVE-20707.7.patch
>
>
> In the current scenario, adding partitions for external tables to the 
> metastore requires running the MSCK REPAIR command manually. To avoid this 
> manual step, external tables can be given a table property based on which a 
> background metastore thread syncs partitions periodically. Tables can also be 
> given a partition retention period; any partition whose age exceeds the 
> retention period is dropped automatically.
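As a sketch of how such a table might be declared (the property names `discover.partitions` and `partition.retention.period` below are assumptions about the eventual patch, not confirmed in this thread):

```sql
-- Hypothetical DDL sketch: opt an external table into automatic partition
-- discovery and give it a retention period. Property names are assumptions.
CREATE EXTERNAL TABLE sales (id BIGINT, amount DOUBLE)
PARTITIONED BY (date_key INT)
STORED AS ORC
LOCATION '/data/sales'
TBLPROPERTIES (
  'discover.partitions' = 'true',        -- background thread syncs partitions
  'partition.retention.period' = '30d'   -- partitions older than 30 days are dropped
);
```

With such properties set, no manual MSCK REPAIR would be needed after new partition directories appear under the table location.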



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667804#comment-16667804
 ] 

Daniel Dai commented on HIVE-20829:
---

I cannot add a .q test, as the issue only happens in non-local mode. However, 
testing with MiniCluster results in a "Derby may have already booted" issue.

> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_161]
>   at 

[jira] [Updated] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20829:
--
Attachment: HIVE-20829.1.patch

> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, 
> vertexName=Map 1, 

[jira] [Updated] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20829:
--
Status: Patch Available  (was: Open)

> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20829.1.patch
>
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, 
> vertexName=Map 1, 

[jira] [Commented] (HIVE-20707) Automatic partition management

2018-10-29 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667796#comment-16667796
 ] 

Jason Dere commented on HIVE-20707:
---

+1

> Automatic partition management
> --
>
> Key: HIVE-20707
> URL: https://issues.apache.org/jira/browse/HIVE-20707
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20702.3.patch, HIVE-20707.1.patch, 
> HIVE-20707.2.patch, HIVE-20707.4.patch, HIVE-20707.5.patch, 
> HIVE-20707.6.patch, HIVE-20707.6.patch, HIVE-20707.7.patch
>
>
> In the current scenario, adding partitions for external tables to the 
> metastore requires running the MSCK REPAIR command manually. To avoid this 
> manual step, external tables can be given a table property based on which a 
> background metastore thread syncs partitions periodically. Tables can also be 
> given a partition retention period; any partition whose age exceeds the 
> retention period is dropped automatically.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20822) Improvements to push computation to JDBC from Calcite

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20822:
---
Status: Patch Available  (was: In Progress)

> Improvements to push computation to JDBC from Calcite
> -
>
> Key: HIVE-20822
> URL: https://issues.apache.org/jira/browse/HIVE-20822
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20822.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20822) Improvements to push computation to JDBC from Calcite

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20822:
---
Attachment: HIVE-20822.patch

> Improvements to push computation to JDBC from Calcite
> -
>
> Key: HIVE-20822
> URL: https://issues.apache.org/jira/browse/HIVE-20822
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20822.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-20822) Improvements to push computation to JDBC from Calcite

2018-10-29 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-20822 started by Jesus Camacho Rodriguez.
--
> Improvements to push computation to JDBC from Calcite
> -
>
> Key: HIVE-20822
> URL: https://issues.apache.org/jira/browse/HIVE-20822
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20822.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20825) Hive ACID Merge generates invalid ORC files (bucket files 0 or 3 bytes in length) causing the "Not a valid ORC file" error

2018-10-29 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-20825:
-

Assignee: (was: Eugene Koifman)

> Hive ACID Merge generates invalid ORC files (bucket files 0 or 3 bytes in 
> length) causing the "Not a valid ORC file" error
> --
>
> Key: HIVE-20825
> URL: https://issues.apache.org/jira/browse/HIVE-20825
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC, Transactions
>Affects Versions: 2.2.0, 2.3.1, 2.3.2
> Environment: Hive 2.3.x on Amazon EMR 5.8.0 to 5.18.0
>Reporter: Tom Zeng
>Priority: Major
> Attachments: hive-merge-invalid-orc-repro.hql, 
> hive-merge-invalid-orc-repro.log
>
>
> When using Hive ACID Merge (supported with the ORC format) to update/insert 
> data, bucket files of 0 bytes or 3 bytes (the file content being just the 
> three characters "ORC") are generated during MERGE INTO operations that 
> finish with no errors. Subsequent queries on the base table then fail with a 
> "Not a valid ORC file" error.
>  
> The following script can be used to reproduce the issue (note that with a 
> small amount of data like this, increasing the number of buckets may make the 
> query work, but with a large data set it fails regardless of the bucket 
> count):
> set hive.auto.convert.join=false;
>  set hive.enforce.bucketing=true;
>  set hive.exec.dynamic.partition.mode = nonstrict;
>  set hive.support.concurrency=true;
>  set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> drop table if exists mergedelta_txt_1;
>  drop table if exists mergedelta_txt_2;
> CREATE TABLE mergedelta_txt_1 (
>  id_str varchar(12), time_key int, value bigint)
>  PARTITIONED BY (date_key int)
>  ROW FORMAT DELIMITED
>  STORED AS TEXTFILE;
> CREATE TABLE mergedelta_txt_2 (
>  id_str varchar(12), time_key int, value bigint)
>  PARTITIONED BY (date_key int)
>  ROW FORMAT DELIMITED
>  STORED AS TEXTFILE;
> INSERT INTO TABLE mergedelta_txt_1
>  partition(date_key=20170103)
>  VALUES
>  ("AB94LIENR0",46700,12345676836978),
>  ("AB94LIENR1",46825,12345676836978),
>  ("AB94LIENS0",46709,12345676836978),
>  ("AB94LIENS1",46834,12345676836978),
>  ("AB94LIENT0",46709,12345676836978),
>  ("AB94LIENT1",46834,12345676836978),
>  ("AB94LIENU0",46718,12345676836978),
>  ("AB94LIENU1",46844,12345676836978),
>  ("AB94LIENV0",46719,12345676836978),
>  ("AB94LIENV1",46844,12345676836978),
>  ("AB94LIENW0",46728,12345676836978),
>  ("AB94LIENW1",46854,12345676836978),
>  ("AB94LIENX0",46728,12345676836978),
>  ("AB94LIENX1",46854,12345676836978),
>  ("AB94LIENY0",46737,12345676836978),
>  ("AB94LIENY1",46863,12345676836978),
>  ("AB94LIENZ0",46738,12345676836978),
>  ("AB94LIENZ1",46863,12345676836978),
>  ("AB94LIERA0",47176,12345676836982),
>  ("AB94LIERA1",47302,12345676836982);
> INSERT INTO TABLE mergedelta_txt_2
>  partition(date_key=20170103)
>  VALUES 
>  ("AB94LIENT1",46834,12345676836978),
>  ("AB94LIENU0",46718,12345676836978),
>  ("AB94LIENU1",46844,12345676836978),
>  ("AB94LIENV0",46719,12345676836978),
>  ("AB94LIENV1",46844,12345676836978),
>  ("AB94LIENW0",46728,12345676836978),
>  ("AB94LIENW1",46854,12345676836978),
>  ("AB94LIENX0",46728,12345676836978),
>  ("AB94LIENX1",46854,12345676836978),
>  ("AB94LIENY0",46737,12345676836978),
>  ("AB94LIENY1",46863,12345676836978),
>  ("AB94LIENZ0",46738,12345676836978),
>  ("AB94LIENZ1",46863,12345676836978),
>  ("AB94LIERA0",47176,12345676836982),
>  ("AB94LIERA1",47302,12345676836982),
>  ("AB94LIERA2",47418,12345676836982),
>  ("AB94LIERB0",47176,12345676836982),
>  ("AB94LIERB1",47302,12345676836982),
>  ("AB94LIERB2",47418,12345676836982),
>  ("AB94LIERC0",47185,12345676836982);
> DROP TABLE IF EXISTS mergebase_1;
>  CREATE TABLE mergebase_1 (
>  id_str varchar(12) , time_key int , value bigint)
>  PARTITIONED BY (date_key int)
>  CLUSTERED BY (id_str,time_key) INTO 4 BUCKETS
>  STORED AS ORC
>  TBLPROPERTIES (
>  'orc.compress'='SNAPPY',
>  'pk_columns'='id_str,date_key,time_key',
>  'NO_AUTO_COMPACTION'='true',
>  'transactional'='true');
> MERGE INTO mergebase_1 AS base
>  USING (SELECT * 
>  FROM (
>  SELECT id_str ,time_key ,value, date_key, rank() OVER (PARTITION BY 
> id_str,date_key,time_key ORDER BY id_str,date_key,time_key) AS rk 
>  FROM mergedelta_txt_1
>  DISTRIBUTE BY date_key
>  ) rankedtbl 
>  WHERE rankedtbl.rk=1
>  ) AS delta
>  ON delta.id_str=base.id_str AND delta.date_key=base.date_key AND 
> delta.time_key=base.time_key
>  WHEN MATCHED THEN UPDATE SET value=delta.value
>  WHEN NOT MATCHED THEN INSERT VALUES ( delta.id_str , delta.time_key , 
> delta.value, delta.date_key);
> MERGE INTO mergebase_1 AS base
>  USING (SELECT * 
>  FROM (
>  SELECT id_str ,time_key ,value, date_key, 

[jira] [Commented] (HIVE-20825) Hive ACID Merge generates invalid ORC files (bucket files 0 or 3 bytes in length) causing the "Not a valid ORC file" error

2018-10-29 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667760#comment-16667760
 ] 

Eugene Koifman commented on HIVE-20825:
---

Perhaps HIVE-14014 is relevant.
I tried these examples on master and branch-2.2 and I don't see the error 
(count returns 25), nor do I see any empty (3-byte) files.

> Hive ACID Merge generates invalid ORC files (bucket files 0 or 3 bytes in 
> length) causing the "Not a valid ORC file" error
> --
>
> Key: HIVE-20825
> URL: https://issues.apache.org/jira/browse/HIVE-20825
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC, Transactions
>Affects Versions: 2.2.0, 2.3.1, 2.3.2
> Environment: Hive 2.3.x on Amazon EMR 5.8.0 to 5.18.0
>Reporter: Tom Zeng
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: hive-merge-invalid-orc-repro.hql, 
> hive-merge-invalid-orc-repro.log
>
>
> When using Hive ACID Merge (supported with the ORC format) to update/insert 
> data, bucket files of 0 bytes or 3 bytes (the file content being just the 
> three characters "ORC") are generated during MERGE INTO operations that 
> finish with no errors. Subsequent queries on the base table then fail with a 
> "Not a valid ORC file" error.
>  
> The following script can be used to reproduce the issue (note that with a 
> small amount of data like this, increasing the number of buckets may make the 
> query work, but with a large data set it fails regardless of the bucket 
> count):
> set hive.auto.convert.join=false;
>  set hive.enforce.bucketing=true;
>  set hive.exec.dynamic.partition.mode = nonstrict;
>  set hive.support.concurrency=true;
>  set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> drop table if exists mergedelta_txt_1;
>  drop table if exists mergedelta_txt_2;
> CREATE TABLE mergedelta_txt_1 (
>  id_str varchar(12), time_key int, value bigint)
>  PARTITIONED BY (date_key int)
>  ROW FORMAT DELIMITED
>  STORED AS TEXTFILE;
> CREATE TABLE mergedelta_txt_2 (
>  id_str varchar(12), time_key int, value bigint)
>  PARTITIONED BY (date_key int)
>  ROW FORMAT DELIMITED
>  STORED AS TEXTFILE;
> INSERT INTO TABLE mergedelta_txt_1
>  partition(date_key=20170103)
>  VALUES
>  ("AB94LIENR0",46700,12345676836978),
>  ("AB94LIENR1",46825,12345676836978),
>  ("AB94LIENS0",46709,12345676836978),
>  ("AB94LIENS1",46834,12345676836978),
>  ("AB94LIENT0",46709,12345676836978),
>  ("AB94LIENT1",46834,12345676836978),
>  ("AB94LIENU0",46718,12345676836978),
>  ("AB94LIENU1",46844,12345676836978),
>  ("AB94LIENV0",46719,12345676836978),
>  ("AB94LIENV1",46844,12345676836978),
>  ("AB94LIENW0",46728,12345676836978),
>  ("AB94LIENW1",46854,12345676836978),
>  ("AB94LIENX0",46728,12345676836978),
>  ("AB94LIENX1",46854,12345676836978),
>  ("AB94LIENY0",46737,12345676836978),
>  ("AB94LIENY1",46863,12345676836978),
>  ("AB94LIENZ0",46738,12345676836978),
>  ("AB94LIENZ1",46863,12345676836978),
>  ("AB94LIERA0",47176,12345676836982),
>  ("AB94LIERA1",47302,12345676836982);
> INSERT INTO TABLE mergedelta_txt_2
>  partition(date_key=20170103)
>  VALUES 
>  ("AB94LIENT1",46834,12345676836978),
>  ("AB94LIENU0",46718,12345676836978),
>  ("AB94LIENU1",46844,12345676836978),
>  ("AB94LIENV0",46719,12345676836978),
>  ("AB94LIENV1",46844,12345676836978),
>  ("AB94LIENW0",46728,12345676836978),
>  ("AB94LIENW1",46854,12345676836978),
>  ("AB94LIENX0",46728,12345676836978),
>  ("AB94LIENX1",46854,12345676836978),
>  ("AB94LIENY0",46737,12345676836978),
>  ("AB94LIENY1",46863,12345676836978),
>  ("AB94LIENZ0",46738,12345676836978),
>  ("AB94LIENZ1",46863,12345676836978),
>  ("AB94LIERA0",47176,12345676836982),
>  ("AB94LIERA1",47302,12345676836982),
>  ("AB94LIERA2",47418,12345676836982),
>  ("AB94LIERB0",47176,12345676836982),
>  ("AB94LIERB1",47302,12345676836982),
>  ("AB94LIERB2",47418,12345676836982),
>  ("AB94LIERC0",47185,12345676836982);
> DROP TABLE IF EXISTS mergebase_1;
>  CREATE TABLE mergebase_1 (
>  id_str varchar(12) , time_key int , value bigint)
>  PARTITIONED BY (date_key int)
>  CLUSTERED BY (id_str,time_key) INTO 4 BUCKETS
>  STORED AS ORC
>  TBLPROPERTIES (
>  'orc.compress'='SNAPPY',
>  'pk_columns'='id_str,date_key,time_key',
>  'NO_AUTO_COMPACTION'='true',
>  'transactional'='true');
> MERGE INTO mergebase_1 AS base
>  USING (SELECT * 
>  FROM (
>  SELECT id_str ,time_key ,value, date_key, rank() OVER (PARTITION BY 
> id_str,date_key,time_key ORDER BY id_str,date_key,time_key) AS rk 
>  FROM mergedelta_txt_1
>  DISTRIBUTE BY date_key
>  ) rankedtbl 
>  WHERE rankedtbl.rk=1
>  ) AS delta
>  ON delta.id_str=base.id_str AND delta.date_key=base.date_key AND 
> delta.time_key=base.time_key
>  WHEN MATCHED THEN UPDATE SET value=delta.value
>  WHEN NOT 

[jira] [Commented] (HIVE-20804) Further improvements to group by optimization with constraints

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667747#comment-16667747
 ] 

Hive QA commented on HIVE-20804:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946087/HIVE-20804.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15507 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth.org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth
 (batchId=261)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/14683/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14683/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14683/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946087 - PreCommit-HIVE-Build

> Further improvements to group by optimization with constraints
> --
>
> Key: HIVE-20804
> URL: https://issues.apache.org/jira/browse/HIVE-20804
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20804.1.patch, HIVE-20804.2.patch, 
> HIVE-20804.3.patch
>
>
> Continuation of HIVE-17043



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-10-29 Thread Bharathkrishna Guruvayoor Murali (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667698#comment-16667698
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-20512:
-

Tests run locally, but here they fail with "did not produce a TEST-*.xml file 
(likely timed out)". Attaching HIVE-20512.5.patch with no awaitTermination to 
see if the tests pass.

> Improve record and memory usage logging in SparkRecordHandler
> -
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, 
> HIVE-20512.3.patch, HIVE-20512.4.patch, HIVE-20512.5.patch
>
>
> We currently log memory usage and # of records processed in Spark tasks, but 
> we should improve the methodology for how frequently we log this info. 
> Currently we use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of the number of rows processed by
>   // the reducer. It dumps every 1 million rows, and more quickly before that.
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
> The issue is that after a while, the increase by 10x factor means that you 
> have to process a huge # of records before this gets triggered.
> A better approach would be to log this info at a given interval. This would 
> help in debugging tasks that are seemingly hung.
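
The interval-based alternative suggested above can be sketched as follows. This is a minimal, hypothetical illustration in plain Java; the class, method names, and interval are invented for the example and are not the actual SparkRecordHandler API:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: log progress whenever a fixed amount of wall-clock time
// has elapsed since the last log line, instead of at an exponentially growing
// row-count threshold. A seemingly hung task then keeps producing log lines.
public class IntervalLogger {
  private final long intervalNanos;
  private long lastLogNanos;
  private long rowCount;

  public IntervalLogger(long intervalSeconds) {
    this.intervalNanos = TimeUnit.SECONDS.toNanos(intervalSeconds);
    this.lastLogNanos = System.nanoTime();
  }

  // Called once per processed row; returns true when a log line was emitted.
  public boolean recordProcessed() {
    rowCount++;
    long now = System.nanoTime();
    if (now - lastLogNanos >= intervalNanos) {
      lastLogNanos = now;
      long usedMem = Runtime.getRuntime().totalMemory()
          - Runtime.getRuntime().freeMemory();
      System.out.println("processed " + rowCount + " rows, used memory: " + usedMem);
      return true;
    }
    return false;
  }
}
```

With a one-hour interval the first row never triggers a log line; with a zero interval every row does, which makes the policy easy to unit test.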



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-10-29 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20512:

Attachment: HIVE-20512.5.patch

> Improve record and memory usage logging in SparkRecordHandler
> -
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, 
> HIVE-20512.3.patch, HIVE-20512.4.patch, HIVE-20512.5.patch
>
>
> We currently log memory usage and # of records processed in Spark tasks, but 
> we should improve the methodology for how frequently we log this info. 
> Currently we use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of the number of rows processed by
>   // the reducer. It dumps every 1 million rows, and more quickly before that.
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
> The issue is that after a while, the increase by 10x factor means that you 
> have to process a huge # of records before this gets triggered.
> A better approach would be to log this info at a given interval. This would 
> help in debugging tasks that are seemingly hung.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20804) Further improvements to group by optimization with constraints

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667677#comment-16667677
 ] 

Hive QA commented on HIVE-20804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
43s{color} | {color:blue} ql in master has 2317 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 6 unchanged - 0 fixed 
= 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
2s{color} | {color:red} ql generated 1 new + 2317 unchanged - 0 fixed = 2318 
total (was 2317) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Dead store to mapInToOutPos in 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelOptUtil$CardinalityChange.isCardinalitySameAsSource(HiveProject,
 ImmutableBitSet)  At 
HiveRelOptUtil.java:org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelOptUtil$CardinalityChange.isCardinalitySameAsSource(HiveProject,
 ImmutableBitSet)  At HiveRelOptUtil.java:[line 823] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-14683/dev-support/hive-personality.sh
 |
| git revision | master / 54bba9c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14683/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14683/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14683/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Further improvements to group by optimization with constraints
> --
>
> Key: HIVE-20804
> URL: https://issues.apache.org/jira/browse/HIVE-20804
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20804.1.patch, HIVE-20804.2.patch, 
> HIVE-20804.3.patch
>
>
> Continuation of HIVE-17043



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20804) Further improvements to group by optimization with constraints

2018-10-29 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667592#comment-16667592
 ] 

Vineet Garg commented on HIVE-20804:


[~jcamachorodriguez] Can you take a look please? 
https://reviews.apache.org/r/69202/

> Further improvements to group by optimization with constraints
> --
>
> Key: HIVE-20804
> URL: https://issues.apache.org/jira/browse/HIVE-20804
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20804.1.patch, HIVE-20804.2.patch, 
> HIVE-20804.3.patch
>
>
> Continuation of HIVE-17043



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20804) Further improvements to group by optimization with constraints

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20804:
---
Status: Patch Available  (was: Open)

> Further improvements to group by optimization with constraints
> --
>
> Key: HIVE-20804
> URL: https://issues.apache.org/jira/browse/HIVE-20804
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20804.1.patch, HIVE-20804.2.patch, 
> HIVE-20804.3.patch
>
>
> Continuation of HIVE-17043



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20804) Further improvements to group by optimization with constraints

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20804:
---
Status: Open  (was: Patch Available)

> Further improvements to group by optimization with constraints
> --
>
> Key: HIVE-20804
> URL: https://issues.apache.org/jira/browse/HIVE-20804
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20804.1.patch, HIVE-20804.2.patch, 
> HIVE-20804.3.patch
>
>
> Continuation of HIVE-17043



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20804) Further improvements to group by optimization with constraints

2018-10-29 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-20804:
---
Attachment: HIVE-20804.3.patch

> Further improvements to group by optimization with constraints
> --
>
> Key: HIVE-20804
> URL: https://issues.apache.org/jira/browse/HIVE-20804
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-20804.1.patch, HIVE-20804.2.patch, 
> HIVE-20804.3.patch
>
>
> Continuation of HIVE-17043



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20830) JdbcStorageHandler range query assertion failure in some cases

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-20830:
-


> JdbcStorageHandler range query assertion failure in some cases
> --
>
> Key: HIVE-20830
> URL: https://issues.apache.org/jira/browse/HIVE-20830
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
>
> {code}
> 2018-10-29T10:10:16,325 ERROR [b4bf5eb2-a986-4aae-908e-93b9908acd32 
> HiveServer2-HttpHandler-Pool: Thread-124]: dao.GenericJdbcDatabaseAccessor 
> (:()) - Caught exception while trying to execute query
> java.lang.IllegalArgumentException: null
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:108) 
> ~[guava-19.0.jar:?]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.addBoundaryToQuery(GenericJdbcDatabaseAccessor.java:238)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getRecordIterator(GenericJdbcDatabaseAccessor.java:161)
>  ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:58) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hive.storage.jdbc.JdbcRecordReader.next(JdbcRecordReader.java:35) 
> ~[hive-jdbc-handler-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-99]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:569)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2734) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:469)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:910)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564) 
> ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:790)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
>  ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at org.apache.thrift.server.TServlet.doPost(TServlet.java:83) 
> ~[hive-exec-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:208)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584) 
> ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>  ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  ~[jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
>   at 
> 

[jira] [Assigned] (HIVE-20829) JdbcStorageHandler range split throws NPE

2018-10-29 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-20829:
-


> JdbcStorageHandler range split throws NPE
> -
>
> Key: HIVE-20829
> URL: https://issues.apache.org/jira/browse/HIVE-20829
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
>
> {code}
> 2018-10-29T06:37:14,982 ERROR [HiveServer2-Background-Pool: Thread-44466]: 
> operation.Operation (:()) - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1540588928441_0121_2_00, diagnostics=[Vertex 
> vertex_1540588928441_0121_2_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: employees initializer failed, 
> vertex=vertex_1540588928441_0121_2_00 [Map 1], java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:272)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ]Vertex killed, vertexName=Reducer 2, 
> vertexId=vertex_1540588928441_0121_2_01, diagnostics=[Vertex received Kill in 
> INITED state., Vertex vertex_1540588928441_0121_2_01 [Reducer 2] 
> killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to 
> VERTEX_FAILURE. failedVertices:1 killedVertices:1
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:318)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_161]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_161]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.0.3.0-150.jar:?]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:338)
>  ~[hive-service-3.1.0.3.0.3.0-150.jar:3.1.0.3.0.3.0-150]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_161]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_161]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, 
> vertexName=Map 1, vertexId=vertex_1540588928441_0121_2_00, 
> diagnostics=[Vertex 

[jira] [Updated] (HIVE-20828) Upgrade to Spark 2.4.0

2018-10-29 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-20828:

Description: The Spark community is in the process of releasing Spark 
2.4.0. We should do some testing with the RC candidates and then upgrade once 
the release is finalized.  (was: Spark is in the process of releasing Spark 
2.4.0. We should do something testing with the RC candidates and then upgrade 
once the release is finalized.)

> Upgrade to Spark 2.4.0
> --
>
> Key: HIVE-20828
> URL: https://issues.apache.org/jira/browse/HIVE-20828
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> The Spark community is in the process of releasing Spark 2.4.0. We should do 
> some testing with the RC candidates and then upgrade once the release is 
> finalized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667558#comment-16667558
 ] 

Hive QA commented on HIVE-20805:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12945842/HIVE-20805.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15506 tests 
executed
*Failed tests:*
{noformat}
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=196)

[druidmini_masking.q,druidmini_test1.q,druidkafkamini_basic.q,druidmini_joins.q,druid_timestamptz.q]
org.apache.hadoop.hive.common.TestFileUtils.testCopyWithDistCpAs (batchId=281)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/14682/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14682/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14682/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12945842 - PreCommit-HIVE-Build

> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch
>
>
> While loading data into a managed table from a user-given path, Hive uses a 
> move operation to transfer the data from the user location to the table 
> location. When a move cannot be used, due to a permission issue or a 
> mismatched encryption zone etc., Hive copies the files and then deletes them 
> from the source location to keep the behavior the same. But if the user does 
> not have write access to the source location, the delete fails with a file 
> permission exception and the load operation fails. 
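
The move-then-copy fallback described above can be sketched with java.nio.file. The names and the best-effort delete are illustrative assumptions, not Hive's actual FileSystem code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch of the load-path behavior described above: try a cheap
// rename first; if that fails (permissions, encryption zones, ...), copy the
// file and then try to delete the source. Treating the delete as best-effort
// (one possible remedy) keeps a read-only source from aborting the load.
public class LoadMove {
  public static void moveOrCopy(Path src, Path dst) throws IOException {
    try {
      Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
    } catch (IOException moveFailed) {
      Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
      try {
        Files.delete(src);            // preserve move semantics when possible
      } catch (IOException deleteFailed) {
        // no write access to the source: leave the file rather than failing
      }
    }
  }
}
```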



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20828) Upgrade to Spark 2.4.0

2018-10-29 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar reassigned HIVE-20828:
---


> Upgrade to Spark 2.4.0
> --
>
> Key: HIVE-20828
> URL: https://issues.apache.org/jira/browse/HIVE-20828
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> Spark is in the process of releasing Spark 2.4.0. We should do some testing 
> with the RC candidates and then upgrade once the release is finalized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667497#comment-16667497
 ] 

Hive QA commented on HIVE-20805:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2317 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-14682/dev-support/hive-personality.sh
 |
| git revision | master / 54bba9c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14682/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch
>
>
> While loading data into a managed table from a user-given path, Hive uses a 
> move operation to transfer the data from the user location to the table 
> location. When a move cannot be used, due to a permission issue or a 
> mismatched encryption zone etc., Hive copies the files and then deletes them 
> from the source location to keep the behavior the same. But if the user does 
> not have write access to the source location, the delete fails with a file 
> permission exception and the load operation fails. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667420#comment-16667420
 ] 

Hive QA commented on HIVE-18661:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946051/HIVE-18661.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 15492 tests 
executed
*Failed tests:*
{noformat}
TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) 
(batchId=249)
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=249)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=249)
TestReplAcidTablesWithJsonMessage - did not produce a TEST-*.xml file (likely 
timed out) (batchId=249)
TestReplIncrementalLoadAcidTablesWithJsonMessage - did not produce a TEST-*.xml 
file (likely timed out) (batchId=249)
TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely 
timed out) (batchId=249)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/14681/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14681/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14681/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946051 - PreCommit-HIVE-Build

> CachedStore: Use metastore notification log events to update cache
> --
>
> Key: HIVE-18661
> URL: https://issues.apache.org/jira/browse/HIVE-18661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-18661.02.patch
>
>
> Currently, a background thread updates the entire cache which is pretty 
> inefficient. We capture the updates to metadata in NOTIFICATION_LOG table 
> which is getting used in the Replication work. We should have the background 
> thread apply these notifications to incrementally update the cache.
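
As a hedged sketch of the incremental scheme above (the event types and cache shape here are simplified assumptions, not the actual CachedStore or NOTIFICATION_LOG schema): the background thread remembers the last event id it applied and replays only newer events against the cache, instead of rebuilding it.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of applying notification-log events to a metadata cache
// incrementally. Events at or below lastAppliedId are skipped, so replaying
// an overlapping batch of the log is idempotent.
public class NotificationCache {
  public static final class Event {
    final long id; final String type; final String table; final String payload;
    public Event(long id, String type, String table, String payload) {
      this.id = id; this.type = type; this.table = table; this.payload = payload;
    }
  }

  private final Map<String, String> tables = new HashMap<>();
  private long lastAppliedId = 0;

  public void apply(List<Event> log) {
    for (Event e : log) {
      if (e.id <= lastAppliedId) {
        continue;                       // already applied in an earlier batch
      }
      if ("DROP_TABLE".equals(e.type)) {
        tables.remove(e.table);
      } else {                          // CREATE_TABLE / ALTER_TABLE
        tables.put(e.table, e.payload);
      }
      lastAppliedId = e.id;
    }
  }

  public String lookup(String table) { return tables.get(table); }
  public long lastAppliedId() { return lastAppliedId; }
}
```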



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-10-29 Thread Bharathkrishna Guruvayoor Murali (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667405#comment-16667405
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-20512:
-

Thanks [~stakiar] for the review.
The test failures are all related to Hive-on-Spark, so I definitely want to 
check whether those are working. I will follow up on that.

> Improve record and memory usage logging in SparkRecordHandler
> -
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, 
> HIVE-20512.3.patch, HIVE-20512.4.patch
>
>
> We currently log memory usage and # of records processed in Spark tasks, but 
> we should improve the methodology for how frequently we log this info. 
> Currently we use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
> // A very simple counter to keep track of number of rows processed by the
> // reducer. It dumps
> // every 1 million times, and quickly before that
> if (currentThreshold >= 100) {
>   return currentThreshold + 100;
> }
> return 10 * currentThreshold;
>   }
> {code}
> The issue is that after a while, the increase by 10x factor means that you 
> have to process a huge # of records before this gets triggered.
> A better approach would be to log this info at a given interval. This would 
> help in debugging tasks that are seemingly hung.
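The interval-based approach suggested above could look roughly like the following. This is a hypothetical sketch, not the patch's actual code; the `IntervalLogger` class and its names are illustrative only:

```java
/**
 * Sketch: log processed-record counts at a fixed wall-clock interval
 * instead of at exponentially growing record-count thresholds.
 * Illustrative only -- not Hive's SparkRecordHandler implementation.
 */
public class IntervalLogger {
  private final long logIntervalMs;
  private long lastLogTimeMs;
  private long rowCount;

  public IntervalLogger(long logIntervalMs) {
    this.logIntervalMs = logIntervalMs;
    this.lastLogTimeMs = System.currentTimeMillis();
  }

  /** Called once per record; returns true when a log line was emitted. */
  public boolean recordProcessed() {
    rowCount++;
    long now = System.currentTimeMillis();
    if (now - lastLogTimeMs >= logIntervalMs) {
      long usedMem = Runtime.getRuntime().totalMemory()
          - Runtime.getRuntime().freeMemory();
      // A hung task keeps emitting these lines, so progress (or the lack
      // of it) stays visible regardless of how many rows were processed.
      System.out.println("processed " + rowCount + " rows, used memory = " + usedMem);
      lastLogTimeMs = now;
      return true;
    }
    return false;
  }
}
```

With a fixed interval, the logging cost is bounded per unit of time rather than per row, which is what makes seemingly hung tasks easier to diagnose.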



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-10-29 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667403#comment-16667403
 ] 

Sahil Takiar commented on HIVE-20512:
-

+1. Are the test failures related? Given there are no new unit tests for this 
patch, I'm assuming it was at least manually verified to work?

> Improve record and memory usage logging in SparkRecordHandler
> -
>
> Key: HIVE-20512
> URL: https://issues.apache.org/jira/browse/HIVE-20512
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, 
> HIVE-20512.3.patch, HIVE-20512.4.patch
>
>
> We currently log memory usage and # of records processed in Spark tasks, but 
> we should improve the methodology for how frequently we log this info. 
> Currently we use the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
> // A very simple counter to keep track of number of rows processed by the
> // reducer. It dumps
> // every 1 million times, and quickly before that
> if (currentThreshold >= 100) {
>   return currentThreshold + 100;
> }
> return 10 * currentThreshold;
>   }
> {code}
> The issue is that after a while, the increase by 10x factor means that you 
> have to process a huge # of records before this gets triggered.
> A better approach would be to log this info at a given interval. This would 
> help in debugging tasks that are seemingly hung.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667374#comment-16667374
 ] 

Hive QA commented on HIVE-18661:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
13s{color} | {color:blue} standalone-metastore/metastore-common in master has 
28 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
0s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
38s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 38s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} itests/hive-unit: The patch generated 11 new + 0 
unchanged - 0 fixed = 11 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-14681/dev-support/hive-personality.sh
 |
| git revision | master / 54bba9c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus/patch-mvninstall-itests_hive-unit.txt
 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus/patch-compile-itests_hive-unit.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus/patch-compile-itests_hive-unit.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus/patch-findbugs-itests_hive-unit.txt
 |
| modules | C: standalone-metastore/metastore-common itests/hive-unit 
standalone-metastore/metastore-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-14681/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CachedStore: Use metastore notification log events to update cache
> 

[jira] [Commented] (HIVE-18874) JDBC: HiveConnection shades log4j interfaces

2018-10-29 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667257#comment-16667257
 ] 

Kevin Risden commented on HIVE-18874:
-

This breaks when using JMeter 5 to performance-test Hive JDBC 3.1.0. I have 
not checked later versions of Hive.

> JDBC: HiveConnection shades log4j interfaces
> 
>
> Key: HIVE-18874
> URL: https://issues.apache.org/jira/browse/HIVE-18874
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Priority: Major
>
> This prevents Hive JDBC from being instantiated into a regular SLF4J logger 
> env.
> {code}
> java.lang.IncompatibleClassChangeError: Class 
> org.apache.logging.slf4j.Log4jLoggerFactory does not implement the requested 
> interface org.apache.hive.org.slf4j.ILoggerFactory
> at 
> org.apache.hive.org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:285)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20707) Automatic partition management

2018-10-29 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667242#comment-16667242
 ] 

Hive QA commented on HIVE-20707:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946012/HIVE-20707.7.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15518 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/14680/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14680/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14680/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946012 - PreCommit-HIVE-Build

> Automatic partition management
> --
>
> Key: HIVE-20707
> URL: https://issues.apache.org/jira/browse/HIVE-20707
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20702.3.patch, HIVE-20707.1.patch, 
> HIVE-20707.2.patch, HIVE-20707.4.patch, HIVE-20707.5.patch, 
> HIVE-20707.6.patch, HIVE-20707.6.patch, HIVE-20707.7.patch
>
>
> In current scenario, to add partitions for external tables to metastore, MSCK 
> REPAIR command has to be executed manually. To avoid this manual step, 
> external tables can be specified a table property based on which a background 
> metastore thread can sync partitions periodically. Tables can also be 
> specified with partition retention period. Any partition whose age exceeds 
> the retention period will be dropped automatically.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20805) Hive does not copy source data when importing as non-hive user

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-20805:
---
Status: Patch Available  (was: Open)

> Hive does not copy source data when importing as non-hive user 
> ---
>
> Key: HIVE-20805
> URL: https://issues.apache.org/jira/browse/HIVE-20805
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20805.1.patch
>
>
> While loading data into a managed table from a user-given path, Hive uses a 
> move operation to transfer data from the user location to the table location. 
> When a move cannot be used, e.g. due to a permission issue or mismatched 
> encryption zones, Hive copies the files and then deletes them from the source 
> location to keep the behavior the same. But if the user does not have write 
> access to the source location, the delete fails with a file permission 
> exception and the load operation fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-10-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-18661:
--
Labels: pull-request-available  (was: )

> CachedStore: Use metastore notification log events to update cache
> --
>
> Key: HIVE-18661
> URL: https://issues.apache.org/jira/browse/HIVE-18661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-18661.02.patch
>
>
> Currently, a background thread updates the entire cache which is pretty 
> inefficient. We capture the updates to metadata in NOTIFICATION_LOG table 
> which is getting used in the Replication work. We should have the background 
> thread apply these notifications to incrementally update the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-10-29 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667216#comment-16667216
 ] 

ASF GitHub Bot commented on HIVE-18661:
---

GitHub user maheshk114 opened a pull request:

https://github.com/apache/hive/pull/479

HIVE-18661 : CachedStore: Use metastore notification log events to update 
cache

…

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maheshk114/hive HIVE-18661-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/479.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #479


commit d1698c7cfe9dc87c6044a1d78478a8c1968e2501
Author: Mahesh Kumar Behera 
Date:   2018-10-29T12:49:26Z

HIVE-18661 : CachedStore: Use metastore notification log events to update 
cache




> CachedStore: Use metastore notification log events to update cache
> --
>
> Key: HIVE-18661
> URL: https://issues.apache.org/jira/browse/HIVE-18661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-18661.02.patch
>
>
> Currently, a background thread updates the entire cache which is pretty 
> inefficient. We capture the updates to metadata in NOTIFICATION_LOG table 
> which is getting used in the Replication work. We should have the background 
> thread apply these notifications to incrementally update the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-10-29 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667213#comment-16667213
 ] 

mahesh kumar behera commented on HIVE-18661:


[~vgumashta]

Can you please review the patch?

> CachedStore: Use metastore notification log events to update cache
> --
>
> Key: HIVE-18661
> URL: https://issues.apache.org/jira/browse/HIVE-18661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-18661.02.patch
>
>
> Currently, a background thread updates the entire cache which is pretty 
> inefficient. We capture the updates to metadata in NOTIFICATION_LOG table 
> which is getting used in the Replication work. We should have the background 
> thread apply these notifications to incrementally update the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-10-29 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-18661:
---
Attachment: HIVE-18661.02.patch

> CachedStore: Use metastore notification log events to update cache
> --
>
> Key: HIVE-18661
> URL: https://issues.apache.org/jira/browse/HIVE-18661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-18661.02.patch
>
>
> Currently, a background thread updates the entire cache which is pretty 
> inefficient. We capture the updates to metadata in NOTIFICATION_LOG table 
> which is getting used in the Replication work. We should have the background 
> thread apply these notifications to incrementally update the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

