[jira] [Updated] (HIVE-27321) Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-27321:
--
Labels: pull-request-available  (was: )

> Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q
> ---
>
> Key: HIVE-27321
> URL: https://issues.apache.org/jira/browse/HIVE-27321
> Project: Hive
>  Issue Type: Test
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS_BYTES_READ/WRITTEN depends on the ORC file size, which can change after 
> an ORC library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27321) Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q

2023-05-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-27321:
--
Priority: Minor  (was: Major)

> Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q
> ---
>
> Key: HIVE-27321
> URL: https://issues.apache.org/jira/browse/HIVE-27321
> Project: Hive
>  Issue Type: Test
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS_BYTES_READ/WRITTEN depends on the ORC file size, which can change after 
> an ORC library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27321) Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27321?focusedWorklogId=860680&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860680
 ]

ASF GitHub Bot logged work on HIVE-27321:
-

Author: ASF GitHub Bot
Created on: 05/May/23 06:09
Start Date: 05/May/23 06:09
Worklog Time Spent: 10m 
  Work Description: kasakrisz opened a new pull request, #4295:
URL: https://github.com/apache/hive/pull/4295

   
   
   ### What changes were proposed in this pull request?
   See jira
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   ### How was this patch tested?
   ```
   mvn test -Dtest.output.overwrite -Dtest=TestMiniLlapCliDriver 
-Dqfile=orc_ppd_basic.q -pl itests/qtest -Pitests
   ```




Issue Time Tracking
---

Worklog Id: (was: 860680)
Remaining Estimate: 0h
Time Spent: 10m

> Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q
> ---
>
> Key: HIVE-27321
> URL: https://issues.apache.org/jira/browse/HIVE-27321
> Project: Hive
>  Issue Type: Test
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS_BYTES_READ/WRITTEN depends on the ORC file size, which can change after 
> an ORC library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27320) Mask total size in materialized_view_create_acid.q.out

2023-05-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-27320:
--
Priority: Minor  (was: Major)

> Mask total size in materialized_view_create_acid.q.out
> --
>
> Key: HIVE-27320
> URL: https://issues.apache.org/jira/browse/HIVE-27320
> Project: Hive
>  Issue Type: Test
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Total size depends on the ORC file size, which can change after an ORC 
> library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27321) Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q

2023-05-04 Thread Krisztian Kasa (Jira)
Krisztian Kasa created HIVE-27321:
-

 Summary: Mask HDFS_BYTES_READ/WRITTEN in orc_ppd_basic.q
 Key: HIVE-27321
 URL: https://issues.apache.org/jira/browse/HIVE-27321
 Project: Hive
  Issue Type: Test
Reporter: Krisztian Kasa
Assignee: Krisztian Kasa


HDFS_BYTES_READ/WRITTEN depends on the ORC file size, which can change after 
an ORC library upgrade, but this value is not relevant to this test
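The masking idea above can be sketched as a regex pass over the counter lines of a q.out file; a minimal illustration with a hypothetical pattern and placeholder, not Hive's actual QOutProcessor code:

```python
import re

# Replace volatile byte counters with a stable placeholder so the golden
# q.out output does not change when an ORC library upgrade alters file sizes.
# Pattern and placeholder text are illustrative only.
COUNTER_MASK = re.compile(r"(HDFS_BYTES_(?:READ|WRITTEN)):\s*\d+")

def mask_counters(line: str) -> str:
    return COUNTER_MASK.sub(r"\1: ### MASKED ###", line)
```

For example, `mask_counters("HDFS_BYTES_READ: 348765")` yields `"HDFS_BYTES_READ: ### MASKED ###"`, while lines with unrelated counters pass through unchanged.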



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27320) Mask total size in materialized_view_create_acid.q.out

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-27320:
--
Labels: pull-request-available  (was: )

> Mask total size in materialized_view_create_acid.q.out
> --
>
> Key: HIVE-27320
> URL: https://issues.apache.org/jira/browse/HIVE-27320
> Project: Hive
>  Issue Type: Test
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Total size depends on the ORC file size, which can change after an ORC 
> library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27320) Mask total size in materialized_view_create_acid.q.out

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27320?focusedWorklogId=860679&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860679
 ]

ASF GitHub Bot logged work on HIVE-27320:
-

Author: ASF GitHub Bot
Created on: 05/May/23 05:53
Start Date: 05/May/23 05:53
Worklog Time Spent: 10m 
  Work Description: kasakrisz opened a new pull request, #4294:
URL: https://github.com/apache/hive/pull/4294

   
   
   ### What changes were proposed in this pull request?
   See jira
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   ```
   mvn test -Dtest.output.overwrite -Dtest=TestMiniLlapLocalCliDriver 
-Dqfile=materialized_view_create_acid.q -pl itests/qtest -Pitests
   ```




Issue Time Tracking
---

Worklog Id: (was: 860679)
Remaining Estimate: 0h
Time Spent: 10m

> Mask total size in materialized_view_create_acid.q.out
> --
>
> Key: HIVE-27320
> URL: https://issues.apache.org/jira/browse/HIVE-27320
> Project: Hive
>  Issue Type: Test
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Total size depends on the ORC file size, which can change after an ORC 
> library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27320) Mask total size in materialized_view_create_acid.q.out

2023-05-04 Thread Krisztian Kasa (Jira)
Krisztian Kasa created HIVE-27320:
-

 Summary: Mask total size in materialized_view_create_acid.q.out
 Key: HIVE-27320
 URL: https://issues.apache.org/jira/browse/HIVE-27320
 Project: Hive
  Issue Type: Test
Reporter: Krisztian Kasa
Assignee: Krisztian Kasa


Total size depends on the ORC file size, which can change after an ORC 
library upgrade, but this value is not relevant to this test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27315) DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan throws NPE when hive.tez.dynamic.semijoin.reduction=true

2023-05-04 Thread FangBO (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

FangBO updated HIVE-27315:
--
Description: 
Hive version:
{code:java}
2.3.9
3.1.2{code}
 

Table definition:
{code:java}
CREATE TABLE `t`(
  `emplid` string COMMENT '', 
  `effdt` string COMMENT '', 
  `mar_status` string COMMENT ''); {code}
Query:
{code:java}
SELECT t1.emplid, t1.effdt, t2.mar_status
FROM (
 SELECT emplid, max(effdt) AS effdt
 FROM t
 GROUP BY emplid
) t1
 LEFT JOIN t t2
 ON t1.emplid = t2.emplid
 AND t1.effdt = t2.effdt; {code}
Exception stacktrace:

 
{code:java}
java.lang.NullPointerException
        at 
org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan(DynamicPartitionPruningOptimization.java:463)
        at 
org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.process(DynamicPartitionPruningOptimization.java:226)
        at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
        at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
        at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
        at 
org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74)
        at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
        at 
org.apache.hadoop.hive.ql.parse.TezCompiler.runDynamicPartitionPruning(TezCompiler.java:370)
        at 
org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:94)
        at 
org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:140)
        at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11273)
        at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
        at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
        at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) {code}

  was:
Hive version:
{code:java}
2.3.9{code}
 

Table definition:
{code:java}

CREATE TABLE `t`(
  `emplid` string COMMENT '', 
  `effdt` string COMMENT '', 
  `mar_status` string COMMENT ''); {code}
Query:
{code:java}
SELECT t1.emplid, t1.effdt, t2.mar_status
FROM (
 SELECT emplid, max(effdt) AS effdt
 FROM t
 GROUP BY emplid
) t1
 LEFT JOIN t t2
 ON t1.emplid = t2.emplid
 AND t1.effdt = t2.effdt; {code}
Exception stacktrace:

 
{code:java}
java.lang.NullPointerException
        at 
org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan(DynamicPartitionPruningOptimization.java:463)
        at 
org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.process(DynamicPartitionPruningOptimization.java:226)
        at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
        at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
        at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
        at 
org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74)
        at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
        at 
org.apache.hadoop.hive.ql.parse.TezCompiler.runDynamicPartitionPruning(TezCompiler.java:370)
        at 
org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:94)
        at 
org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:140)
        at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11273)
        at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
        at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
        at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) {code}


> DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan throws NPE 
> when hive.tez.dynamic.semijoin.reduction=true
> --

[jira] [Work logged] (HIVE-27120) Warn when Authorizer V2 is configured

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27120?focusedWorklogId=860665&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860665
 ]

ASF GitHub Bot logged work on HIVE-27120:
-

Author: ASF GitHub Bot
Created on: 05/May/23 00:18
Start Date: 05/May/23 00:18
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #4095:
URL: https://github.com/apache/hive/pull/4095#issuecomment-1535551765

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 860665)
Time Spent: 1h  (was: 50m)

> Warn when Authorizer V2 is configured
> -
>
> Key: HIVE-27120
> URL: https://issues.apache.org/jira/browse/HIVE-27120
> Project: Hive
>  Issue Type: Improvement
>Reporter: okumin
>Assignee: okumin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SessionState sets an internal parameter that is not listed in HiveConf, 
> which causes a WARN log message.
> {code:java}
> pod/hive-hiveserver2-7fc4df88b6-dmn8v: 2023-03-01T13:50:38,959  WARN [main] 
> conf.HiveConf: HiveConf of name 
> hive.internal.ss.authz.settings.applied.marker does not exist {code}
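The warning can be pictured as a membership check against the registered configuration schema; a minimal sketch with made-up key names, not HiveConf's real implementation:

```python
import logging

# Illustrative stand-in for the registered schema; the real one lives in
# HiveConf.ConfVars. Key names here are examples only.
KNOWN_KEYS = {"hive.exec.parallel", "hive.tez.container.size"}

def set_conf(conf: dict, key: str, value: str) -> None:
    # An internal marker key like the one in the ticket would trip this check.
    if key not in KNOWN_KEYS:
        logging.warning("HiveConf of name %s does not exist", key)
    conf[key] = value
```

Setting an unregistered key still succeeds; the check only emits the WARN seen in the log excerpt above.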



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27094) Big numbers support for `conv` function

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27094?focusedWorklogId=860666&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860666
 ]

ASF GitHub Bot logged work on HIVE-27094:
-

Author: ASF GitHub Bot
Created on: 05/May/23 00:18
Start Date: 05/May/23 00:18
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on PR #4074:
URL: https://github.com/apache/hive/pull/4074#issuecomment-1535551775

   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.




Issue Time Tracking
---

Worklog Id: (was: 860666)
Time Spent: 1h 20m  (was: 1h 10m)

> Big numbers support for `conv` function 
> 
>
> Key: HIVE-27094
> URL: https://issues.apache.org/jira/browse/HIVE-27094
> Project: Hive
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Narek Karapetian
>Assignee: Narek Karapetian
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add support for converting big numbers between different radixes without 
> overflow.
> For example, a query such as:
> {code:java}
> SELECT
>   conv(9223372036854775807, 36, 16),
>   conv(9223372036854775807, 36, -16),
>   conv(-9223372036854775807, 36, 16),
>   conv(-9223372036854775807, 36, -16)
> FROM src tablesample (1 rows); {code}
>  should give a correct result, like:
> {code:java}
> 12DDAC15F246BAF8C0D551AC7   12DDAC15F246BAF8C0D551AC7  
> D2253EA0DB945073F2AAE539   -12DDAC15F246BAF8C0D551AC7
>  {code}
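The requested overflow-free behaviour can be sketched with arbitrary-precision integers; hypothetical helper names, not Hive's GenericUDFConv implementation, and the negative-radix (signed output) case is omitted:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n: int, base: int) -> str:
    # Python ints are arbitrary precision, so values beyond 2^63 do not wrap.
    if n == 0:
        return "0"
    sign, n = ("-", -n) if n < 0 else ("", n)
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return sign + "".join(reversed(out))

def conv(num: str, from_base: int, to: int) -> str:
    return to_base(int(num, from_base), to)
```

Because nothing is squeezed through a 64-bit long, a base-36 input like the one in the query round-trips exactly even though its value far exceeds Long.MAX_VALUE.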



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?focusedWorklogId=860663&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860663
 ]

ASF GitHub Bot logged work on HIVE-27317:
-

Author: ASF GitHub Bot
Created on: 04/May/23 23:58
Start Date: 04/May/23 23:58
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4293:
URL: https://github.com/apache/hive/pull/4293#issuecomment-1535539746

   Kudos, SonarCloud Quality Gate passed!
   Bugs: 0 (rating A) | Vulnerabilities: 0 (rating A) | Security Hotspots: 0 (rating A) | Code Smells: 1 (rating A)
   No Coverage information | No Duplication information
   Full report: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4293
   




Issue Time Tracking
---

Worklog Id: (was: 860663)
Time Spent: 20m  (was: 10m)

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-27317.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a Hive session is killed, the shutdown hook gets no chance to clean up 
> temp files.
> There is a Hive service that cleans residual files 
> (https://issues.apache.org/jira/browse/HIVE-13429), and its execution was 
> later scheduled inside HS2 (https://issues.apache.org/jira/browse/HIVE-15068) 
> to make sure no temp files are left behind. However, this service cleans up 
> only HDFS temp files; residual files/dirs remain in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows;
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 

[jira] [Work logged] (HIVE-27288) Backport of HIVE-23262 : Remove dependency on activemq

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27288?focusedWorklogId=860661&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860661
 ]

ASF GitHub Bot logged work on HIVE-27288:
-

Author: ASF GitHub Bot
Created on: 04/May/23 23:49
Start Date: 04/May/23 23:49
Worklog Time Spent: 10m 
  Work Description: vihangk1 merged PR #4261:
URL: https://github.com/apache/hive/pull/4261




Issue Time Tracking
---

Worklog Id: (was: 860661)
Time Spent: 0.5h  (was: 20m)

> Backport of HIVE-23262 : Remove dependency on activemq
> --
>
> Key: HIVE-27288
> URL: https://issues.apache.org/jira/browse/HIVE-27288
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aman Raj
>Assignee: Aman Raj
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27319) HMS server should throw InvalidObjectException when the table is missing in get_partitions_by_names()

2023-05-04 Thread Sai Hemanth Gantasala (Jira)
Sai Hemanth Gantasala created HIVE-27319:


 Summary: HMS server should throw InvalidObjectException when the 
table is missing in get_partitions_by_names()
 Key: HIVE-27319
 URL: https://issues.apache.org/jira/browse/HIVE-27319
 Project: Hive
  Issue Type: Bug
  Components: Hive, Standalone Metastore
Reporter: Sai Hemanth Gantasala
Assignee: Sai Hemanth Gantasala


When the table object is dropped by a concurrent thread, the 
get_partitions_by_names_req() API currently throws a TApplicationException to 
the client. Instead, the HMS server should propagate the InvalidObjectException 
thrown by getTable() to the HMS client, so that other services using the HMS 
client can handle the error properly.
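The proposed change can be sketched as raising a typed error instead of letting a generic server fault escape; Python stand-ins for the Thrift types, purely illustrative:

```python
class InvalidObjectException(Exception):
    """Stand-in for the Thrift InvalidObjectException."""

def get_partitions_by_names(tables: dict, table_name: str, part_names: list):
    # Propagate a specific, typed exception the HMS client can act on,
    # rather than letting a NullPointerException surface as a generic
    # TApplicationException.
    if table_name not in tables:
        raise InvalidObjectException(f"Table {table_name} does not exist")
    return [p for p in tables[table_name] if p in part_names]
```

A client can then catch the typed exception and, for example, retry or refresh its metadata cache, which is impossible with an opaque server fault.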



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27302) Iceberg: Support write to iceberg branch

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27302?focusedWorklogId=860660&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860660
 ]

ASF GitHub Bot logged work on HIVE-27302:
-

Author: ASF GitHub Bot
Created on: 04/May/23 23:31
Start Date: 04/May/23 23:31
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4292:
URL: https://github.com/apache/hive/pull/4292#issuecomment-1535526849

   Kudos, SonarCloud Quality Gate passed!
   Bugs: 0 (rating A) | Vulnerabilities: 0 (rating A) | Security Hotspots: 0 (rating A) | Code Smells: 5 (rating A)
   No Coverage information | No Duplication information
   Full report: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4292
   




Issue Time Tracking
---

Worklog Id: (was: 860660)
Time Spent: 20m  (was: 10m)

> Iceberg: Support write to iceberg branch
> ---
>
> Key: HIVE-27302
> URL: https://issues.apache.org/jira/browse/HIVE-27302
> Project: Hive
>  Issue Type: Sub-task
>  Components: Iceberg integration
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This feature depends on an Iceberg 1.2.0 interface: 
> [https://github.com/apache/iceberg/pull/5234] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27243) Iceberg: Implement Load data via temp table

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27243?focusedWorklogId=860659&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860659
 ]

ASF GitHub Bot logged work on HIVE-27243:
-

Author: ASF GitHub Bot
Created on: 04/May/23 22:21
Start Date: 04/May/23 22:21
Worklog Time Spent: 10m 
  Work Description: aturoczy commented on code in PR #4289:
URL: https://github.com/apache/hive/pull/4289#discussion_r1185567260


##
ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java:
##
@@ -38,7 +38,7 @@
 public class StorageFormat {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(StorageFormat.class);
-  private static final StorageFormatFactory storageFormatFactory = new 
StorageFormatFactory();
+  public static final StorageFormatFactory storageFormatFactory = new 
StorageFormatFactory();

Review Comment:
   Wouldn't it be better to expose this via a getter method, like 
getStorageFormatFactory()?





Issue Time Tracking
---

Worklog Id: (was: 860659)
Time Spent: 40m  (was: 0.5h)

> Iceberg: Implement Load data via temp table
> ---
>
> Key: HIVE-27243
> URL: https://issues.apache.org/jira/browse/HIVE-27243
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Do a LOAD DATA for Iceberg by ingesting into a temp table and then doing an 
> insert into/overwrite (the Impala approach).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27032) Introduce liquibase for HMS schema evolution

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27032?focusedWorklogId=860658&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860658
 ]

ASF GitHub Bot logged work on HIVE-27032:
-

Author: ASF GitHub Bot
Created on: 04/May/23 21:45
Start Date: 04/May/23 21:45
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4060:
URL: https://github.com/apache/hive/pull/4060#issuecomment-1535452128

   Kudos, SonarCloud Quality Gate passed!
   Bugs: 1 (rating C) | Vulnerabilities: 0 (rating A) | Security Hotspots: 6 (rating E) | Code Smells: 204 (rating A)
   No Coverage information | No Duplication information
   Full report: https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4060
   




Issue Time Tracking
---

Worklog Id: (was: 860658)
Time Spent: 4.5h  (was: 4h 20m)

> Introduce liquibase for HMS schema evolution
> 
>
> Key: HIVE-27032
> URL: https://issues.apache.org/jira/browse/HIVE-27032
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Végh
>Assignee: László Végh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Introduce Liquibase, and replace the current upgrade procedure with it.
> The Schematool CLI API should remain untouched; under the hood, 
> Liquibase should be used for HMS schema evolution.
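As a hedged illustration (not taken from the patch; the file name, changeset id, table, and column below are hypothetical), a Liquibase-managed HMS schema change would be declared declaratively in a changelog and applied exactly once per metastore database:

```xml
<!-- hypothetical hive-schema-changelog.xml -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.1.xsd">
  <changeSet id="example-add-column" author="example">
    <!-- hypothetical change; Liquibase records applied changesets in its
         DATABASECHANGELOG table, so each runs exactly once per database -->
    <addColumn tableName="TBLS">
      <column name="EXAMPLE_COL" type="varchar(256)"/>
    </addColumn>
  </changeSet>
</databaseChangeLog>
```

Under this approach, Schematool's upgrade path could delegate to Liquibase's update operation instead of replaying versioned SQL scripts.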



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27186) A persistent property store

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27186?focusedWorklogId=860635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860635
 ]

ASF GitHub Bot logged work on HIVE-27186:
-

Author: ASF GitHub Bot
Created on: 04/May/23 20:09
Start Date: 04/May/23 20:09
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4194:
URL: https://github.com/apache/hive/pull/4194#issuecomment-1535349475

   Kudos, SonarCloud Quality Gate passed! 
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4194)
   Bugs: 5 (rating E); Vulnerabilities: 5 (B); Security Hotspots: 1 (E); 
   Code Smells: 113 (A). No coverage or duplication information.




Issue Time Tracking
---

Worklog Id: (was: 860635)
Time Spent: 15h 20m  (was: 15h 10m)

> A persistent property store 
> 
>
> Key: HIVE-27186
> URL: https://issues.apache.org/jira/browse/HIVE-27186
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0-alpha-2
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 15h 20m
>  Remaining Estimate: 0h
>
> WHAT
> A persistent property store usable as a support facility for any metadata 
> augmentation feature.
> WHY
> When adding new meta-data oriented features, we usually need to persist 
> information linking the feature data and the HiveMetaStore objects it applies 
> to. Any information related to a database, a table, or the cluster - like 
> statistics, for example, or any operational state or data (think rolling 
> backup) - falls into this use case.
> Typically, accommodating such a feature requires modifying the Metastore 
> database schema by adding or altering a ta

[jira] [Work logged] (HIVE-27277) Set up github actions workflow to build and push docker image to docker hub

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27277?focusedWorklogId=860630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860630
 ]

ASF GitHub Bot logged work on HIVE-27277:
-

Author: ASF GitHub Bot
Created on: 04/May/23 19:55
Start Date: 04/May/23 19:55
Worklog Time Spent: 10m 
  Work Description: simhadri-g commented on PR #4274:
URL: https://github.com/apache/hive/pull/4274#issuecomment-1535334329

   Will reopen the PR after testing with the new changes completes. This is to 
prevent unnecessary runs of Hive precommit tests.




Issue Time Tracking
---

Worklog Id: (was: 860630)
Time Spent: 3h  (was: 2h 50m)

> Set up github actions workflow to build and push docker image to docker hub
> ---
>
> Key: HIVE-27277
> URL: https://issues.apache.org/jira/browse/HIVE-27277
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Simhadri Govindappa
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-27277) Set up github actions workflow to build and push docker image to docker hub

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27277?focusedWorklogId=860631&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860631
 ]

ASF GitHub Bot logged work on HIVE-27277:
-

Author: ASF GitHub Bot
Created on: 04/May/23 19:55
Start Date: 04/May/23 19:55
Worklog Time Spent: 10m 
  Work Description: simhadri-g closed pull request #4274: HIVE-27277: GH 
actions to build and push docker image
URL: https://github.com/apache/hive/pull/4274




Issue Time Tracking
---

Worklog Id: (was: 860631)
Time Spent: 3h 10m  (was: 3h)

> Set up github actions workflow to build and push docker image to docker hub
> ---
>
> Key: HIVE-27277
> URL: https://issues.apache.org/jira/browse/HIVE-27277
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Simhadri Govindappa
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-23394) TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23394?focusedWorklogId=860609&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860609
 ]

ASF GitHub Bot logged work on HIVE-23394:
-

Author: ASF GitHub Bot
Created on: 04/May/23 17:45
Start Date: 04/May/23 17:45
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4249:
URL: https://github.com/apache/hive/pull/4249#issuecomment-1535169276

   Kudos, SonarCloud Quality Gate passed! 
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4249)
   Bugs: 0 (rating A); Vulnerabilities: 0 (A); Security Hotspots: 0 (A); 
   Code Smells: 0 (A). No coverage or duplication information.




Issue Time Tracking
---

Worklog Id: (was: 860609)
Time Spent: 4h  (was: 3h 50m)

> TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky
> 
>
> Key: HIVE-23394
> URL: https://issues.apache.org/jira/browse/HIVE-23394
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Both 
> TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1 and
> TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
> can fail with the exception below; it seems the connection was lost.
> {code}
> Error Message
> Failed to close statement
> Stacktrace
> java.sql.SQLException: Failed to close statement
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:200)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:205)
>   at org.apache.hive.jdbc.HiveSt

[jira] [Commented] (HIVE-27312) Backport of HIVE-24965: Describe table partition stats fetch should be configurable

2023-05-04 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17719438#comment-17719438
 ] 

Ayush Saxena commented on HIVE-27312:
-

Committed to branch-3.

Thanx [~diksha193] for the contribution!!!

> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable
> ---
>
> Key: HIVE-27312
> URL: https://issues.apache.org/jira/browse/HIVE-27312
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Diksha
>Assignee: Diksha
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable





[jira] [Work logged] (HIVE-27312) Backport of HIVE-24965: Describe table partition stats fetch should be configurable

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27312?focusedWorklogId=860601&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860601
 ]

ASF GitHub Bot logged work on HIVE-27312:
-

Author: ASF GitHub Bot
Created on: 04/May/23 17:29
Start Date: 04/May/23 17:29
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged PR #4285:
URL: https://github.com/apache/hive/pull/4285




Issue Time Tracking
---

Worklog Id: (was: 860601)
Time Spent: 0.5h  (was: 20m)

> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable
> ---
>
> Key: HIVE-27312
> URL: https://issues.apache.org/jira/browse/HIVE-27312
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Diksha
>Assignee: Diksha
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable





[jira] [Resolved] (HIVE-27312) Backport of HIVE-24965: Describe table partition stats fetch should be configurable

2023-05-04 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HIVE-27312.
-
Fix Version/s: 3.2.0
   Resolution: Fixed

> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable
> ---
>
> Key: HIVE-27312
> URL: https://issues.apache.org/jira/browse/HIVE-27312
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Diksha
>Assignee: Diksha
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable





[jira] [Updated] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-27317:
--
Labels: pull-request-available  (was: )

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-27317.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files 
> (https://issues.apache.org/jira/browse/HIVE-13429), and its execution was 
> later scheduled inside HS2 (https://issues.apache.org/jira/browse/HIVE-15068) 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; residual files/dirs remain in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows;
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}
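The residual layout above can be reproduced and cleaned with a minimal shell sketch. This is illustrative only, not the actual HIVE-27317 patch: the session id is copied from the listing, and the dangling-session detection (which the real service does by inspecting HDFS locks) is only hinted at in a comment.

```shell
#!/bin/sh
# Illustrative sketch: recreate the residual local scratch layout from the
# report, then remove everything belonging to one dangling session id.
set -e
scratch=$(mktemp -d)                 # stand-in for hive.exec.local.scratchdir
sid="97c4ef50-5e80-480e-a6f0-4f779050852b"

mkdir "$scratch/$sid"
touch "$scratch/${sid}10571819313894728966.pipeout" \
      "$scratch/${sid}16013956055489853961.pipeout"

# A dangling-session cleaner would first derive "$sid" from sessions with no
# live lock, then remove the session dir and its *.pipeout files:
rm -rf "$scratch/$sid" "$scratch/$sid"*.pipeout

remaining=$(ls -A "$scratch" | wc -l | tr -d ' ')
echo "remaining entries: $remaining"
```

After the cleanup step the scratch directory is empty, which is the state the proposed local-FS cleanup aims for.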





[jira] [Commented] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17719435#comment-17719435
 ] 

Sercan Tekin commented on HIVE-27317:
-

Hi [~zhangbutao], thank you for the response. I have created the PR 
[https://github.com/apache/hive/pull/4293]

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-27317.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files 
> (https://issues.apache.org/jira/browse/HIVE-13429), and its execution was 
> later scheduled inside HS2 (https://issues.apache.org/jira/browse/HIVE-15068) 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; residual files/dirs remain in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows;
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}





[jira] [Work logged] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?focusedWorklogId=860598&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860598
 ]

ASF GitHub Bot logged work on HIVE-27317:
-

Author: ASF GitHub Bot
Created on: 04/May/23 17:21
Start Date: 04/May/23 17:21
Worklog Time Spent: 10m 
  Work Description: sercanCyberVision opened a new pull request, #4293:
URL: https://github.com/apache/hive/pull/4293

   ### What changes were proposed in this pull request?
   When the `ClearDanglingScratchDir` service identifies dangling sessions 
while cleaning the HDFS FS, it will now also clean the corresponding files/dirs 
in `HiveConf.ConfVars.LOCALSCRATCHDIR` (local FS).
   
   ### Why are the changes needed?
   When a Hive session is killed, the shutdown hook has no chance to clean up 
tmp files. This causes an accumulation of tmp files/dirs on the local FS, as 
below;
   ```
   > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
   drwx

Issue Time Tracking
---

Worklog Id: (was: 860598)
Remaining Estimate: 0h
Time Spent: 10m

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
> Attachments: HIVE-27317.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files 
> (https://issues.apache.org/jira/browse/HIVE-13429), and its execution was 
> later scheduled inside HS2 (https://issues.apache.org/jira/browse/HIVE-15068) 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; residual files/dirs remain in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows;
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}





[jira] [Work logged] (HIVE-27234) Iceberg: CREATE BRANCH SQL implementation

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27234?focusedWorklogId=860593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860593
 ]

ASF GitHub Bot logged work on HIVE-27234:
-

Author: ASF GitHub Bot
Created on: 04/May/23 17:03
Start Date: 04/May/23 17:03
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4216:
URL: https://github.com/apache/hive/pull/4216#issuecomment-1535099360

   Kudos, SonarCloud Quality Gate passed! 
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4216)
   Bugs: 1 (rating C); Vulnerabilities: 0 (A); Security Hotspots: 0 (A); 
   Code Smells: 2 (A). No coverage or duplication information.




Issue Time Tracking
---

Worklog Id: (was: 860593)
Time Spent: 7h 20m  (was: 7h 10m)

> Iceberg:  CREATE BRANCH SQL implementation
> --
>
> Key: HIVE-27234
> URL: https://issues.apache.org/jira/browse/HIVE-27234
> Project: Hive
>  Issue Type: Sub-task
>  Components: Iceberg integration
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Maybe we can follow the Spark SQL branch DDL implementation: 
> [https://github.com/apache/iceberg/pull/6617]





[jira] [Work logged] (HIVE-27312) Backport of HIVE-24965: Describe table partition stats fetch should be configurable

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27312?focusedWorklogId=860587&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860587
 ]

ASF GitHub Bot logged work on HIVE-27312:
-

Author: ASF GitHub Bot
Created on: 04/May/23 16:43
Start Date: 04/May/23 16:43
Worklog Time Spent: 10m 
  Work Description: Diksha628 commented on PR #4285:
URL: https://github.com/apache/hive/pull/4285#issuecomment-1535064830

   @sankarh , @ayushtkn , @vihangk1 , can you please review and merge this PR?
   




Issue Time Tracking
---

Worklog Id: (was: 860587)
Time Spent: 20m  (was: 10m)

> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable
> ---
>
> Key: HIVE-27312
> URL: https://issues.apache.org/jira/browse/HIVE-27312
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Diksha
>Assignee: Diksha
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Backport of HIVE-24965: Describe table partition stats fetch should be 
> configurable





[jira] [Work logged] (HIVE-27187) Incremental rebuild of materialized view having aggregate and stored by iceberg

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27187?focusedWorklogId=860585&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860585
 ]

ASF GitHub Bot logged work on HIVE-27187:
-

Author: ASF GitHub Bot
Created on: 04/May/23 16:35
Start Date: 04/May/23 16:35
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on code in PR #4278:
URL: https://github.com/apache/hive/pull/4278#discussion_r1185253932


##
ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java:
##
@@ -3096,6 +3096,8 @@ Seems much cleaner if each stmt is identified as a 
particular HiveOperation (whi
 assert t != null;
 if (AcidUtils.isTransactionalTable(t) && sharedWrite) {
   compBuilder.setSharedWrite();
+} else if (MetaStoreUtils.isNonNativeTable(t.getTTable())) {
+  compBuilder.setLock(getLockTypeFromStorageHandler(output, t));

Review Comment:
   Yes, I think the function `AcidUtils.makeLockComponents()` has to be 
reconsidered from an Iceberg standpoint.
   But I think that is beyond the scope of this patch.





Issue Time Tracking
---

Worklog Id: (was: 860585)
Time Spent: 5h 20m  (was: 5h 10m)

> Incremental rebuild of materialized view having aggregate and stored by 
> iceberg
> ---
>
> Key: HIVE-27187
> URL: https://issues.apache.org/jira/browse/HIVE-27187
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration, Materialized views
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Currently, the incremental rebuild of a materialized view stored by Iceberg 
> whose definition query contains an aggregate operator is transformed into an 
> insert-overwrite statement containing a union operator if the source tables 
> contain insert operations only. One branch of the union scans the view; the 
> other produces the delta.
> This can be improved further: transform the statement into a multi-insert 
> statement representing a merge statement, to insert new aggregations and 
> update existing ones.





[jira] [Work logged] (HIVE-27268) Hive.getPartitionsByNames should not enforce SessionState to be available

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27268?focusedWorklogId=860582&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860582
 ]

ASF GitHub Bot logged work on HIVE-27268:
-

Author: ASF GitHub Bot
Created on: 04/May/23 15:49
Start Date: 04/May/23 15:49
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4241:
URL: https://github.com/apache/hive/pull/4241#issuecomment-1535015163

   Kudos, SonarCloud Quality Gate passed! 
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4241)
   Bugs: 0 (rating A); Vulnerabilities: 0 (A); Security Hotspots: 0 (A); 
   Code Smells: 1 (A). No coverage information.
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=4241&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 860582)
Time Spent: 1h  (was: 50m)

> Hive.getPartitionsByNames should not enforce SessionState to be available
> -
>
> Key: HIVE-27268
> URL: https://issues.apache.org/jira/browse/HIVE-27268
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.3
>Reporter: Henri Biestro
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HIVE-24743 and HIVE-24392 enforce a check for a valid write-id list in 
> Hive.getPartitionsByNames.
> This breaks basic API integration: a user who only needs basic partition 
> details is forced to have a SessionState.
> The request in this ticket is to ensure that if SessionState.get() is null, 
> an empty validWriteIdList is returned.
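The requested fallback can be illustrated with a minimal sketch. The class and method names below are hypothetical stand-ins, not Hive's actual API; the point is only the null-guard shape:

```java
// Hypothetical sketch: guard against a missing thread-local session state
// and fall back to an empty write-id list instead of failing.
public class Main {
    /** Stand-in for Hive's thread-local SessionState. */
    static final ThreadLocal<String> SESSION = new ThreadLocal<>();

    /**
     * Returns the session's valid write-id list, or an empty string when no
     * SessionState is attached to the current thread.
     */
    static String validWriteIdListOrEmpty() {
        String state = SESSION.get();
        if (state == null) {
            return "";   // no session: degrade gracefully with an empty list
        }
        return state;
    }

    public static void main(String[] args) {
        // No session attached to this thread: the lookup returns empty.
        System.out.println(validWriteIdListOrEmpty().isEmpty());
    }
}
```

Callers that do have a SessionState are unaffected; only the no-session path changes.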



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread zhangbutao (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17719385#comment-17719385
 ] 

zhangbutao commented on HIVE-27317:
---

Hi [~sercan.tekin] Please create a GitHub pull request at 
[https://github.com/apache/hive/pulls], as patch review has not been used for a 
long time.

 

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
> Attachments: HIVE-27317.patch
>
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27302) Iceberg: Support write to iceberg branch

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-27302:
--
Labels: pull-request-available  (was: )

> Iceberg: Support write to iceberg branch
> ---
>
> Key: HIVE-27302
> URL: https://issues.apache.org/jira/browse/HIVE-27302
> Project: Hive
>  Issue Type: Sub-task
>  Components: Iceberg integration
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This feature depends on an Iceberg 1.2.0 interface: 
> [https://github.com/apache/iceberg/pull/5234] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27302) Iceberg: Support write to iceberg branch

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27302?focusedWorklogId=860572&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860572
 ]

ASF GitHub Bot logged work on HIVE-27302:
-

Author: ASF GitHub Bot
Created on: 04/May/23 14:48
Start Date: 04/May/23 14:48
Worklog Time Spent: 10m 
  Work Description: zhangbutao opened a new pull request, #4292:
URL: https://github.com/apache/hive/pull/4292

   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   




Issue Time Tracking
---

Worklog Id: (was: 860572)
Remaining Estimate: 0h
Time Spent: 10m

> Iceberg: Support write to iceberg branch
> ---
>
> Key: HIVE-27302
> URL: https://issues.apache.org/jira/browse/HIVE-27302
> Project: Hive
>  Issue Type: Sub-task
>  Components: Iceberg integration
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This feature depends on an Iceberg 1.2.0 interface: 
> [https://github.com/apache/iceberg/pull/5234] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27186) A persistent property store

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27186?focusedWorklogId=860567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860567
 ]

ASF GitHub Bot logged work on HIVE-27186:
-

Author: ASF GitHub Bot
Created on: 04/May/23 14:16
Start Date: 04/May/23 14:16
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4194:
URL: https://github.com/apache/hive/pull/4194#issuecomment-1534864126

   Kudos, SonarCloud Quality Gate passed!    [![Quality Gate 
passed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/passed-16px.png
 'Quality Gate 
passed')](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4194)
   
   
[![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png
 
'Bug')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=BUG)
 
[![E](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E-16px.png
 
'E')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=BUG)
 [5 
Bugs](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=BUG)
  
   
[![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png
 
'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=VULNERABILITY)
 
[![B](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/B-16px.png
 
'B')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=VULNERABILITY)
 [5 
Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=VULNERABILITY)
  
   [![Security 
Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png
 'Security 
Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=4194&resolved=false&types=SECURITY_HOTSPOT)
 
[![E](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E-16px.png
 
'E')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=4194&resolved=false&types=SECURITY_HOTSPOT)
 [1 Security 
Hotspot](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=4194&resolved=false&types=SECURITY_HOTSPOT)
  
   [![Code 
Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png
 'Code 
Smell')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=CODE_SMELL)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=CODE_SMELL)
 [112 Code 
Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4194&resolved=false&types=CODE_SMELL)
   
   [![No Coverage 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/NoCoverageInfo-16px.png
 'No Coverage 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=4194&metric=coverage&view=list)
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=4194&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 860567)
Time Spent: 15h 10m  (was: 15h)

> A persistent property store 
> 
>
> Key: HIVE-27186
> URL: https://issues.apache.org/jira/browse/HIVE-27186
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0-alpha-2
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 15h 10m
>  Remaining Estimate: 0h
>
> WHAT
> A persistent property store usable as a support facility for any metadata 
> augmentation feature.
> WHY
> When adding new meta-data oriented features, we usually need to persist 
> information linking the feature data and the HiveMetaStore objects it applies 
> to. Any information related to a database, a table or the cluster - like 
> statistics for example or any operational data state or data (think rolling 
> backup) -  fall in this use-case.
> Typically, accommodating such a feature requires modifying the Metastore 
> database schema by adding or altering a table.

[jira] [Created] (HIVE-27318) Hive CLI creates 2 Tez sessions

2023-05-04 Thread Jing-Long Wu (Jira)
Jing-Long Wu created HIVE-27318:
---

 Summary: Hive CLI creates 2 Tez sessions
 Key: HIVE-27318
 URL: https://issues.apache.org/jira/browse/HIVE-27318
 Project: Hive
  Issue Type: Bug
  Components: Hive, Tez
Affects Versions: 3.2.0
 Environment: Bigtop 3.2.0 on Debian 10
Reporter: Jing-Long Wu


Running the Hive CLI, I noticed that with the new versions of Hive 3.1.3 and 
Tez 0.10.1 in Bigtop 3.2.0 (as opposed to Bigtop 1.5.0), there are now 2 Tez 
sessions created with "hive.execution.engine=tez" enabled as the default:
{code:java}
2023-05-04T13:46:19,189  INFO [main] SessionState: Hive Session ID = 
dbd5b26f-9904-44a1-b70b-9477287e0c03

2023-05-04T13:46:22,275  INFO [pool-6-thread-1] SessionState: Hive Session ID = 
58d15cbd-f815-4c55-b323-02559dcacd61{code}
Compared to the previous behavior, only 1 session was created, by the main 
thread. The worst part is that the session created by the pool thread is not 
utilized at all; it just lingers and takes up resources for the 
ApplicationMaster container until it dies from the timeout defined in 
"tez.session.am.dag.submit.timeout.secs".

Running queries also utilizes only the session created by the main thread, i.e.

 
{code:java}
Status: Running (Executing on YARN cluster with App id 
application_1682588680048_0135)
2023-05-04T13:50:18,804  INFO [dbd5b26f-9904-44a1-b70b-9477287e0c03 main] 
SessionState: Status: Running (Executing on YARN cluster with App id 
application_1682588680048_0135)
{code}
This is using identical configs in Hive for both Hive 3.1.3 and Hive 2.3.6.

This behavior is not present in HiveServer2, though, and can only be observed 
when using the Hive CLI.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-27078) Bucket Map Join can hang if the source vertex parallelism is changed by reducer autoparallelism

2023-05-04 Thread Jacques (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17719342#comment-17719342
 ] 

Jacques commented on HIVE-27078:


Any progress on this issue? We basically cannot enable bucket map join on our 
cluster because of this.

> Bucket Map Join can hang if the source vertex parallelism is changed by 
> reducer autoparallelism
> ---
>
> Key: HIVE-27078
> URL: https://issues.apache.org/jira/browse/HIVE-27078
> Project: Hive
>  Issue Type: Bug
>Reporter: László Bodor
>Priority: Major
>
> Considering this DAG:
> {code}
> | Map 1 <- Reducer 3 (CUSTOM_EDGE)   |
> | Map 2 <- Map 4 (CUSTOM_EDGE)   |
> | Map 5 <- Map 1 (CUSTOM_EDGE)   |
> | Reducer 3 <- Map 2 (SIMPLE_EDGE)   
> {code}
> this can be simplified further; it was just picked from a customer query. The 
> problematic vertices and edge are:
> {code}
> | Map 1 <- Reducer 3 (CUSTOM_EDGE)   |
> {code}
> Reducer 3 was initially scheduled with 20 tasks, and later auto reducer 
> parallelism decided that only 4 tasks are needed:
> {code}
> 2023-02-07 13:00:36,078 [INFO] [App Shared Pool - #4] 
> |vertexmanager.ShuffleVertexManager|: Reducing auto parallelism for vertex: 
> Reducer 3 from 20 to 4
> {code}
> in this case, Map 1 can hang as it still expects 20 inputs:
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 4 .. container SUCCEEDED 16 1600  
>  0   0
> Map 2 .. container SUCCEEDED 48 4800  
>  0   0
> Reducer 3 .. container SUCCEEDED  4  400  
>  0   0
> Map 1container   RUNNING192  0   13  179  
>  0   0
> Map 5containerINITED241  00  241  
>  0   0
> --
> VERTICES: 03/05  [===>>---] 13%   ELAPSED TIME: 901.18 s
> --
> {code}
> in logs it's like:
> {code}
> 2022-12-08 09:42:26,845 [INFO] [I/O Setup 2 Start: {Reducer 3}] 
> |impl.ShuffleManager|: Reducer_3: numInputs=20, 
> compressionCodec=org.apache.hadoop.io.compress.SnappyCodec, numFetchers=10, 
> ifileBufferSize=4096, ifileReadAheadEnabled=true, 
> ifileReadAheadLength=4194304, localDiskFetchEnabled=true, 
> sharedFetchEnabled=false, keepAlive=true, keepAliveMaxConnections=20, 
> connectionTimeout=18, readTimeout=18, bufferSize=8192, 
> bufferSize=8192, maxTaskOutputAtOnce=20, asyncHttp=false
> ...receives the input event:
> 2022-12-08 09:42:27,134 [INFO] [TaskHeartbeatThread] |task.TaskReporter|: 
> Routing events from heartbeat response to task, 
> currentTaskAttemptId=attempt_1670331499491_1408_1_03_39_0, eventCount=1 
> fromEventId=0 nextFromEventId=0
> ...but then it hangs while waiting for further inputs:
> "TezChild" #29 daemon prio=5 os_prio=0 tid=0x7f3fae141000 nid=0x9581 
> waiting on condition [0x7f3f737ba000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00071ad90a00> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
>   at 
> java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
>   at 
> org.apache.tez.runtime.library.common.shuffle.impl.ShuffleManager.getNextInput(ShuffleManager.java:1033)
>   at 
> org.apache.tez.runtime.library.common.readers.UnorderedKVReader.moveToNextInput(UnorderedKVReader.java:202)
>   at 
> org.apache.tez.runtime.library.common.readers.UnorderedKVReader.next(UnorderedKVReader.java:125)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast.VectorMapJoinFastHashTableLoader.load(VectorMapJoinFastHashTableLoader.java:129)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTableInternal(MapJoinOperator.java:385)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:454)
>   at 
> org.apache.hadoop.hive.
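The mismatch described in the report can be sketched outside Tez. This is an illustrative stand-in, not Tez code: a consumer configured for `numInputs=20` keeps waiting when the producer vertex is rescaled to 4 tasks, because only 4 "input ready" events ever arrive. A timed latch makes the deficit observable instead of actually hanging:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        int expectedInputs = 20;   // Map 1 still expects the original parallelism
        int actualProducers = 4;   // Reducer 3 was auto-reduced to 4 tasks

        CountDownLatch inputsReady = new CountDownLatch(expectedInputs);
        for (int i = 0; i < actualProducers; i++) {
            inputsReady.countDown();   // only 4 completion events arrive
        }
        // A real blocking take() would wait forever here; the timed await
        // shows the consumer never sees its pending-input count reach zero.
        boolean allArrived = inputsReady.await(100, TimeUnit.MILLISECONDS);
        System.out.println(allArrived);
        System.out.println(inputsReady.getCount());   // inputs still missing
    }
}
```

This mirrors the stack trace above: the `ShuffleManager` thread parks on a blocking queue waiting for input events that will never be routed.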

[jira] [Updated] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sercan Tekin updated HIVE-27317:

Attachment: HIVE-27317.patch

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
> Attachments: HIVE-27317.patch
>
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sercan Tekin updated HIVE-27317:

Attachment: (was: HIVE-27317.patch)

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sercan Tekin updated HIVE-27317:

Attachment: (was: HIVE-27317-1.patch)

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sercan Tekin updated HIVE-27317:

Attachment: HIVE-27317-1.patch
Status: Patch Available  (was: Open)

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17719319#comment-17719319
 ] 

Sercan Tekin commented on HIVE-27317:
-

Created a patch for the master branch.

With the patch, when *ClearDanglingScratchDir* identifies dangling sessions, 
files/dirs in *HiveConf.ConfVars.LOCALSCRATCHDIR* are cleaned up as well.

Added a unit test as well.
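The local-scratch part of the cleanup can be sketched as below. This is a hedged illustration of the idea, not the actual ClearDanglingScratchDir code; the "dangling session has already been identified" step is assumed, and the paths and helper names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class Main {
    /** Deletes every entry under localScratchDir whose name starts with sessionId. */
    static int cleanLocalScratch(Path localScratchDir, String sessionId) throws IOException {
        int removed = 0;
        try (DirectoryStream<Path> entries =
                 Files.newDirectoryStream(localScratchDir, sessionId + "*")) {
            for (Path p : entries) {
                deleteRecursively(p);
                removed++;
            }
        }
        return removed;
    }

    static void deleteRecursively(Path p) throws IOException {
        if (Files.isDirectory(p)) {
            try (DirectoryStream<Path> children = Files.newDirectoryStream(p)) {
                for (Path c : children) deleteRecursively(c);
            }
        }
        Files.delete(p);
    }

    public static void main(String[] args) throws IOException {
        // Simulate a local scratch dir holding one dangling session's leftovers
        // (a session dir plus a .pipeout file) and one live session's file.
        Path scratch = Files.createTempDirectory("scratch");
        String danglingId = "97c4ef50";
        Files.createDirectory(scratch.resolve(danglingId));
        Files.createFile(scratch.resolve(danglingId + "123.pipeout"));
        Files.createFile(scratch.resolve("other.session"));   // live session: untouched
        System.out.println(cleanLocalScratch(scratch, danglingId));
        System.out.println(Files.exists(scratch.resolve("other.session")));
    }
}
```

Matching on the session-id prefix is what ties the `.pipeout` files in the report back to their (dangling) session.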

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
> Attachments: HIVE-27317.patch
>
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sercan Tekin updated HIVE-27317:

Attachment: HIVE-27317.patch

> Temporary (local) session files cleanup improvements
> 
>
> Key: HIVE-27317
> URL: https://issues.apache.org/jira/browse/HIVE-27317
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sercan Tekin
>Assignee: Sercan Tekin
>Priority: Major
> Attachments: HIVE-27317.patch
>
>
> When a Hive session is killed, the shutdown hook has no chance to clean up 
> tmp files.
> There is a Hive service to clean residual files, 
> https://issues.apache.org/jira/browse/HIVE-13429, and later its execution 
> was scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, 
> to make sure no temp file is left behind. But this service cleans up only 
> HDFS temp files; there are still residual files/dirs in the 
> *HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
> {code:java}
> > ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
> drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
> -rw--- 1 user user    0 Oct 29 10:09 
> 97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27317) Temporary (local) session files cleanup improvements

2023-05-04 Thread Sercan Tekin (Jira)
Sercan Tekin created HIVE-27317:
---

 Summary: Temporary (local) session files cleanup improvements
 Key: HIVE-27317
 URL: https://issues.apache.org/jira/browse/HIVE-27317
 Project: Hive
  Issue Type: Improvement
Reporter: Sercan Tekin
Assignee: Sercan Tekin
 Attachments: HIVE-27317.patch

When a Hive session is killed, the shutdown hook has no chance to clean up tmp 
files.

There is a Hive service to clean residual files, 
https://issues.apache.org/jira/browse/HIVE-13429, and later its execution was 
scheduled inside HS2, https://issues.apache.org/jira/browse/HIVE-15068, to make 
sure no temp file is left behind. But this service cleans up only HDFS temp 
files; there are still residual files/dirs in the 
*HiveConf.ConfVars.LOCALSCRATCHDIR* location, as follows:
{code:java}
> ll /tmp/user/97c4ef50-5e80-480e-a6f0-4f779050852b*
drwx-- 2 user user 4096 Oct 29 10:09 97c4ef50-5e80-480e-a6f0-4f779050852b
-rw--- 1 user user    0 Oct 29 10:09 
97c4ef50-5e80-480e-a6f0-4f779050852b10571819313894728966.pipeout
-rw--- 1 user user    0 Oct 29 10:09 
97c4ef50-5e80-480e-a6f0-4f779050852b16013956055489853961.pipeout
-rw--- 1 user user    0 Oct 29 10:09 
97c4ef50-5e80-480e-a6f0-4f779050852b4383913570068173450.pipeout
-rw--- 1 user user    0 Oct 29 10:09 
97c4ef50-5e80-480e-a6f0-4f779050852b889740171428672108.pipeout {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-27307) NPE when generating incremental rebuild plan of materialized view with empty Iceberg source table

2023-05-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa resolved HIVE-27307.
---
Resolution: Fixed

Merged to master. Thanks [~lvegh] for review.

> NPE when generating incremental rebuild plan of materialized view with empty 
> Iceberg source table
> -
>
> Key: HIVE-27307
> URL: https://issues.apache.org/jira/browse/HIVE-27307
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create external table tbl_ice(a int, b string, c int) stored by iceberg 
> stored as orc tblproperties ('format-version'='1');
> create external table tbl_ice_v2(d int, e string, f int) stored by iceberg 
> stored as orc tblproperties ('format-version'='2');
> insert into tbl_ice_v2 values (1, 'one v2', 50), (4, 'four v2', 53), (5, 
> 'five v2', 54);
> create materialized view mat1 as
> select tbl_ice.b, tbl_ice.c, tbl_ice_v2.e from tbl_ice join tbl_ice_v2 on 
> tbl_ice.a=tbl_ice_v2.d where tbl_ice.c > 52;
> -- insert some new values to one of the source tables
> insert into tbl_ice values (1, 'one', 50), (2, 'two', 51), (3, 'three', 52), 
> (4, 'four', 53), (5, 'five', 54);
> alter materialized view mat1 rebuild;
> {code}
> {code}
> 2023-04-28T07:34:17,949  WARN [1fb94a8e-8d75-4a1f-8f44-a5beaa8aafb6 Listener 
> at 0.0.0.0/36857] rebuild.AlterMaterializedViewRebuildAnalyzer: Exception 
> loading materialized views
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2298)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2204)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnalyzer.java:215)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1722)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1591)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1343)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12824)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer.analyzeInternal(AlterMaterializedViewRebuildAnalyzer.java:135)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:326)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:326)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.ap

[jira] [Work logged] (HIVE-27307) NPE when generating incremental rebuild plan of materialized view with empty Iceberg source table

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27307?focusedWorklogId=860558&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860558
 ]

ASF GitHub Bot logged work on HIVE-27307:
-

Author: ASF GitHub Bot
Created on: 04/May/23 12:50
Start Date: 04/May/23 12:50
Worklog Time Spent: 10m 
  Work Description: kasakrisz merged PR #4279:
URL: https://github.com/apache/hive/pull/4279




Issue Time Tracking
---

Worklog Id: (was: 860558)
Time Spent: 1h 20m  (was: 1h 10m)

> NPE when generating incremental rebuild plan of materialized view with empty 
> Iceberg source table
> -
>
> Key: HIVE-27307
> URL: https://issues.apache.org/jira/browse/HIVE-27307
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create external table tbl_ice(a int, b string, c int) stored by iceberg 
> stored as orc tblproperties ('format-version'='1');
> create external table tbl_ice_v2(d int, e string, f int) stored by iceberg 
> stored as orc tblproperties ('format-version'='2');
> insert into tbl_ice_v2 values (1, 'one v2', 50), (4, 'four v2', 53), (5, 
> 'five v2', 54);
> create materialized view mat1 as
> select tbl_ice.b, tbl_ice.c, tbl_ice_v2.e from tbl_ice join tbl_ice_v2 on 
> tbl_ice.a=tbl_ice_v2.d where tbl_ice.c > 52;
> -- insert some new values to one of the source tables
> insert into tbl_ice values (1, 'one', 50), (2, 'two', 51), (3, 'three', 52), 
> (4, 'four', 53), (5, 'five', 54);
> alter materialized view mat1 rebuild;
> {code}
> {code}
> 2023-04-28T07:34:17,949  WARN [1fb94a8e-8d75-4a1f-8f44-a5beaa8aafb6 Listener 
> at 0.0.0.0/36857] rebuild.AlterMaterializedViewRebuildAnalyzer: Exception 
> loading materialized views
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2298)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2204)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnalyzer.java:215)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1722)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1591)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1343)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12824)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer.analyzeInternal(AlterMaterializedViewRebuildAnalyzer.java:135)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:326)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.ana

[jira] [Work logged] (HIVE-27187) Incremental rebuild of materialized view having aggregate and stored by iceberg

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27187?focusedWorklogId=860557&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860557
 ]

ASF GitHub Bot logged work on HIVE-27187:
-

Author: ASF GitHub Bot
Created on: 04/May/23 12:48
Start Date: 04/May/23 12:48
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #4278:
URL: https://github.com/apache/hive/pull/4278#discussion_r1184967460


##
ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java:
##
@@ -3096,6 +3096,8 @@ Seems much cleaner if each stmt is identified as a 
particular HiveOperation (whi
 assert t != null;
 if (AcidUtils.isTransactionalTable(t) && sharedWrite) {
   compBuilder.setSharedWrite();
+} else if (MetaStoreUtils.isNonNativeTable(t.getTTable())) {
+  compBuilder.setLock(getLockTypeFromStorageHandler(output, t));

Review Comment:
   Do we need locks for Iceberg delete/update? That should be handled by 
Iceberg itself, shouldn't it?
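
For context, a minimal standalone sketch of the branching being discussed. All names here (`LockType`, `chooseLock`, the boolean flags, and the fallback branch) are simplified stand-ins for illustration, not Hive's actual `AcidUtils` API:

```java
// Hedged sketch of the lock-selection branch under review; the enum, method,
// and default branch are illustrative assumptions, not Hive's real code.
public class LockChoiceSketch {
    enum LockType { SHARED_READ, SHARED_WRITE, EXCLUSIVE }

    // transactional + sharedWrite keeps the existing behavior; otherwise a
    // non-native table (e.g. Iceberg) delegates the choice to its storage handler.
    static LockType chooseLock(boolean transactional, boolean sharedWrite,
                               boolean nonNative, LockType handlerLock) {
        if (transactional && sharedWrite) {
            return LockType.SHARED_WRITE;
        } else if (nonNative) {
            return handlerLock;  // whatever the storage handler asked for
        }
        return LockType.SHARED_READ;  // illustrative default only
    }

    public static void main(String[] args) {
        System.out.println(chooseLock(true, true, false, null));                // SHARED_WRITE
        System.out.println(chooseLock(false, false, true, LockType.EXCLUSIVE)); // EXCLUSIVE
    }
}
```

The point of the new `else if` in the diff is exactly this second branch: non-native tables get to pick their own lock type instead of falling through to the generic default.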





Issue Time Tracking
---

Worklog Id: (was: 860557)
Time Spent: 5h 10m  (was: 5h)

> Incremental rebuild of materialized view having aggregate and stored by 
> iceberg
> ---
>
> Key: HIVE-27187
> URL: https://issues.apache.org/jira/browse/HIVE-27187
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration, Materialized views
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, the incremental rebuild of a materialized view stored by Iceberg 
> whose definition query contains an aggregate operator is transformed into an 
> insert overwrite statement containing a union operator, provided the source 
> tables contain insert operations only. One branch of the union scans the view; 
> the other produces the delta.
> This can be improved further: transform the statement into a multi-insert 
> statement representing a merge statement, to insert new aggregations and 
> update existing ones.
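
As a rough illustration of the current union-based rebuild described above (table and column names are made up, not from the actual generated plan):

```sql
-- Hedged sketch: the rebuild today is roughly an insert overwrite with a union,
-- one branch re-scanning the view, the other aggregating the inserted delta.
INSERT OVERWRITE TABLE mat1
SELECT b, c, sum_e FROM mat1                 -- branch 1: existing view contents
UNION ALL
SELECT b, c, sum(e) AS sum_e FROM delta_rows -- branch 2: aggregate over new inserts
GROUP BY b, c;
```

The proposed improvement would replace this with a multi-insert plan that behaves like a MERGE: update groups that already exist in the view, insert groups that are new.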





[jira] [Work logged] (HIVE-27307) NPE when generating incremental rebuild plan of materialized view with empty Iceberg source table

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27307?focusedWorklogId=860554&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860554
 ]

ASF GitHub Bot logged work on HIVE-27307:
-

Author: ASF GitHub Bot
Created on: 04/May/23 12:45
Start Date: 04/May/23 12:45
Worklog Time Spent: 10m 
  Work Description: veghlaci05 commented on code in PR #4279:
URL: https://github.com/apache/hive/pull/4279#discussion_r1184964019


##
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##
@@ -1297,8 +1297,12 @@ public boolean 
shouldOverwrite(org.apache.hadoop.hive.ql.metadata.Table mTable,
   public Boolean hasAppendsOnly(org.apache.hadoop.hive.ql.metadata.Table 
hmsTable, SnapshotContext since) {
 TableDesc tableDesc = Utilities.getTableDesc(hmsTable);
 Table table = IcebergTableUtil.getTable(conf, tableDesc.getProperties());
-boolean foundSince = false;
-for (Snapshot snapshot : table.snapshots()) {
+return hasAppendsOnly(table.snapshots(), since);
+  }
+
+  public Boolean hasAppendsOnly(Iterable<Snapshot> snapshots, SnapshotContext since) {
+boolean foundSince = since == null;

Review Comment:
   OK, it's a very compact and efficient construct; although it was a bit 
confusing for me at first, I'm fine with it.
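
For readers following along, a minimal sketch of the construct being discussed. `Snapshot` and `SnapshotContext` below are simplified stand-ins for the Iceberg types, the return type is simplified to a plain boolean (the real method returns a `Boolean` and may behave differently when `since` is never found), and the append check is illustrative:

```java
// Hedged sketch of the "foundSince = (since == null)" scan pattern from the diff;
// types and the append check are simplified stand-ins, not the real Iceberg API.
import java.util.List;

public class FoundSinceSketch {
    record Snapshot(long id, String operation) {}
    record SnapshotContext(long snapshotId) {}

    // true  -> every snapshot after `since` (or all of them when since == null) is an append
    // false -> a non-append operation was seen in the inspected range
    static boolean hasAppendsOnly(Iterable<Snapshot> snapshots, SnapshotContext since) {
        boolean foundSince = since == null;  // null means "inspect the whole history"
        for (Snapshot snapshot : snapshots) {
            if (foundSince) {
                if (!"append".equals(snapshot.operation())) {
                    return false;
                }
            } else if (snapshot.id() == since.snapshotId()) {
                foundSince = true;  // start inspecting from the next snapshot
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Snapshot> history = List.of(
            new Snapshot(1, "append"), new Snapshot(2, "overwrite"), new Snapshot(3, "append"));
        System.out.println(hasAppendsOnly(history, new SnapshotContext(2)));  // true
        System.out.println(hasAppendsOnly(history, null));                    // false
    }
}
```

The initializer compresses the null handling: when there is no `since` snapshot, the loop starts checking operations immediately instead of first scanning for a starting point, which is what made it look confusing at first glance.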





Issue Time Tracking
---

Worklog Id: (was: 860554)
Time Spent: 1h 10m  (was: 1h)

> NPE when generating incremental rebuild plan of materialized view with empty 
> Iceberg source table
> -
>
> Key: HIVE-27307
> URL: https://issues.apache.org/jira/browse/HIVE-27307
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code}
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create external table tbl_ice(a int, b string, c int) stored by iceberg 
> stored as orc tblproperties ('format-version'='1');
> create external table tbl_ice_v2(d int, e string, f int) stored by iceberg 
> stored as orc tblproperties ('format-version'='2');
> insert into tbl_ice_v2 values (1, 'one v2', 50), (4, 'four v2', 53), (5, 
> 'five v2', 54);
> create materialized view mat1 as
> select tbl_ice.b, tbl_ice.c, tbl_ice_v2.e from tbl_ice join tbl_ice_v2 on 
> tbl_ice.a=tbl_ice_v2.d where tbl_ice.c > 52;
> -- insert some new values to one of the source tables
> insert into tbl_ice values (1, 'one', 50), (2, 'two', 51), (3, 'three', 52), 
> (4, 'four', 53), (5, 'five', 54);
> alter materialized view mat1 rebuild;
> {code}
> {code}
> 2023-04-28T07:34:17,949  WARN [1fb94a8e-8d75-4a1f-8f44-a5beaa8aafb6 Listener 
> at 0.0.0.0/36857] rebuild.AlterMaterializedViewRebuildAnalyzer: Exception 
> loading materialized views
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2298)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2204)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnalyzer.java:215)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1722)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1591)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1343)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.had

[jira] [Work logged] (HIVE-27243) Iceberg: Implement Load data via temp table

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27243?focusedWorklogId=860553&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860553
 ]

ASF GitHub Bot logged work on HIVE-27243:
-

Author: ASF GitHub Bot
Created on: 04/May/23 12:25
Start Date: 04/May/23 12:25
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4289:
URL: https://github.com/apache/hive/pull/4289#issuecomment-1534681590

   Kudos, SonarCloud Quality Gate passed!
   (https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4289)

   0 Bugs
   0 Vulnerabilities
   0 Security Hotspots
   1 Code Smell
   No Coverage information
   No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 860553)
Time Spent: 0.5h  (was: 20m)

> Iceberg: Implement Load data via temp table
> ---
>
> Key: HIVE-27243
> URL: https://issues.apache.org/jira/browse/HIVE-27243
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Implement LOAD DATA for Iceberg by ingesting into a temp table and then doing 
> an insert into/overwrite (the approach Impala takes).
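
A hedged sketch of the temp-table approach, in the same HiveQL style as the other snippets in this thread (table names, the column list, and the staging path are made up for illustration):

```sql
-- Sketch only: stage into a temporary native table, then write into the Iceberg table.
CREATE TEMPORARY TABLE tmp_load (a int, b string) STORED AS ORC;
LOAD DATA INPATH '/tmp/staged_files' INTO TABLE tmp_load;
INSERT INTO target_ice SELECT * FROM tmp_load;   -- or INSERT OVERWRITE for overwrite semantics
DROP TABLE tmp_load;
```

The indirection exists because LOAD DATA is a file move, while Iceberg tables need writes to go through the table format so that snapshots and metadata are maintained.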





[jira] [Work logged] (HIVE-27307) NPE when generating incremental rebuild plan of materialized view with empty Iceberg source table

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27307?focusedWorklogId=860552&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860552
 ]

ASF GitHub Bot logged work on HIVE-27307:
-

Author: ASF GitHub Bot
Created on: 04/May/23 12:18
Start Date: 04/May/23 12:18
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on code in PR #4279:
URL: https://github.com/apache/hive/pull/4279#discussion_r1184934962


##
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java:
##
@@ -1297,8 +1297,12 @@ public boolean 
shouldOverwrite(org.apache.hadoop.hive.ql.metadata.Table mTable,
   public Boolean hasAppendsOnly(org.apache.hadoop.hive.ql.metadata.Table 
hmsTable, SnapshotContext since) {
 TableDesc tableDesc = Utilities.getTableDesc(hmsTable);
 Table table = IcebergTableUtil.getTable(conf, tableDesc.getProperties());
-boolean foundSince = false;
-for (Snapshot snapshot : table.snapshots()) {
+return hasAppendsOnly(table.snapshots(), since);
+  }
+
+  public Boolean hasAppendsOnly(Iterable<Snapshot> snapshots, SnapshotContext since) {

Review Comment:
   removed `public` and added `@VisibleForTests`





Issue Time Tracking
---

Worklog Id: (was: 860552)
Time Spent: 1h  (was: 50m)

> NPE when generating incremental rebuild plan of materialized view with empty 
> Iceberg source table
> -
>
> Key: HIVE-27307
> URL: https://issues.apache.org/jira/browse/HIVE-27307
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code}
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create external table tbl_ice(a int, b string, c int) stored by iceberg 
> stored as orc tblproperties ('format-version'='1');
> create external table tbl_ice_v2(d int, e string, f int) stored by iceberg 
> stored as orc tblproperties ('format-version'='2');
> insert into tbl_ice_v2 values (1, 'one v2', 50), (4, 'four v2', 53), (5, 
> 'five v2', 54);
> create materialized view mat1 as
> select tbl_ice.b, tbl_ice.c, tbl_ice_v2.e from tbl_ice join tbl_ice_v2 on 
> tbl_ice.a=tbl_ice_v2.d where tbl_ice.c > 52;
> -- insert some new values to one of the source tables
> insert into tbl_ice values (1, 'one', 50), (2, 'two', 51), (3, 'three', 52), 
> (4, 'four', 53), (5, 'five', 54);
> alter materialized view mat1 rebuild;
> {code}
> {code}
> 2023-04-28T07:34:17,949  WARN [1fb94a8e-8d75-4a1f-8f44-a5beaa8aafb6 Listener 
> at 0.0.0.0/36857] rebuild.AlterMaterializedViewRebuildAnalyzer: Exception 
> loading materialized views
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2298)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2204)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnalyzer.java:215)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1722)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1591)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1343)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12824)
>  ~[hive-exec-4.0.0-SNAPSHOT.ja

[jira] [Assigned] (HIVE-27114) Provide a configurable filter for removing useless properties in Partition objects from listPartitions HMS Calls

2023-05-04 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro reassigned HIVE-27114:


Assignee: Henri Biestro

> Provide a configurable filter for removing useless properties in Partition 
> objects from listPartitions HMS Calls
> 
>
> Key: HIVE-27114
> URL: https://issues.apache.org/jira/browse/HIVE-27114
> Project: Hive
>  Issue Type: Bug
>Reporter: Naresh P R
>Assignee: Henri Biestro
>Priority: Major
>
> HMS API calls are throwing the following exception because of the Thrift upgrade:
> {code:java}
> org.apache.thrift.transport.TTransportException: MaxMessageSize reached
>         at 
> org.apache.thrift.transport.TEndpointTransport.countConsumedMessageBytes(TEndpointTransport.java:96)
>  
>         at 
> org.apache.thrift.transport.TMemoryInputTransport.read(TMemoryInputTransport.java:97)
>  
>         at 
> org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:390) 
>         at 
> org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:39)
>  
>         at 
> org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) 
>         at 
> org.apache.hadoop.hive.metastore.security.TFilterTransport.readAll(TFilterTransport.java:63)
>  
>         at 
> org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:417)
>  
>         at 
> org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:411)
>  
>         at 
> org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1286)
>  
>         at 
> org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1205)
>  
>         at 
> org.apache.hadoop.hive.metastore.api.Partition.read(Partition.java:1062) 
>         at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java)
>  
>         at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java)
>  
>         at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java)
>  
>         at 
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88) 
>         at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:3290)
>  
>         at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:3275)
>  
>         at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1782)
>  
>         at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.listPartitions(SessionHiveMetaStoreClient.java:1134)
>  
>         at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1775)
>  
>         at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source) 
> ~[?:?]
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_311]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_311]
>         at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213)
>  
>         at com.sun.proxy.$Proxy52.listPartitions(Unknown Source) ~[?:?]
>         at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source) 
> ~[?:?]
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_311]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_311]
>         at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3550)
>  
>         at com.sun.proxy.$Proxy52.listPartitions(Unknown Source) ~[?:?]
>         at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllPartitionsOf(Hive.java:3793) 
>         at 
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.getAllPartitions(PartitionPruner.java:485)
>    {code}
> Large partition metadata is causing this issue.
> E.g., Impala stores a huge stats chunk in the partition metadata with param_keys = 
> (impala_intermediate_stats_chunk*); these PARTITION_PARAM_KEYS are not 
> required by Hive. Such params should be skipped while preparing the partition 
> objects returned from HMS to HS2.
> Similar to HIVE-25501, param_keys matching a user-defined regex should be 
> skipped in the listPartitions HMS API call response.
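
A hypothetical sketch of the proposed filtering: drop partition parameters whose keys match a user-defined exclusion regex before returning Partition objects to HS2. `filterParams` and the regex below are illustrative, not the actual HMS implementation:

```java
// Hedged sketch: strip partition params matching an exclusion regex, so bulky
// keys (e.g. Impala's intermediate stats chunks) never cross the Thrift wire.
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class ParamFilterSketch {
    static Map<String, String> filterParams(Map<String, String> params, String excludeRegex) {
        Pattern exclude = Pattern.compile(excludeRegex);
        Map<String, String> kept = new HashMap<>();
        params.forEach((k, v) -> {
            if (!exclude.matcher(k).matches()) {
                kept.put(k, v);  // keep only keys that do not match the exclusion regex
            }
        });
        return kept;
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("impala_intermediate_stats_chunk0", "...huge blob...");
        params.put("transient_lastDdlTime", "1683200000");
        Map<String, String> kept = filterParams(params, "impala_intermediate_stats_chunk.*");
        System.out.println(kept.keySet());  // [transient_lastDdlTime]
    }
}
```

Filtering on the server side also sidesteps the MaxMessageSize limit in the stack trace above, since the oversized values never make it into the Thrift response.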





[jira] [Work logged] (HIVE-27311) Improve LDAP auth to support generic search bind authentication

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27311?focusedWorklogId=860548&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860548
 ]

ASF GitHub Bot logged work on HIVE-27311:
-

Author: ASF GitHub Bot
Created on: 04/May/23 11:47
Start Date: 04/May/23 11:47
Worklog Time Spent: 10m 
  Work Description: henrib commented on code in PR #4284:
URL: https://github.com/apache/hive/pull/4284#discussion_r1184904759


##
service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java:
##
@@ -34,6 +34,16 @@ public interface DirSearch extends Closeable {
*/
   String findUserDn(String user) throws NamingException;
 
+  /**
+   * Finds user's distinguished name.
+   * @param user username
+   * @param userSearchFilter Generic LDAP Search filter for ex: 
(&(uid={0})(objectClass=person))
+   * @param baseDn LDAP BaseDN for user searches for ex: dc=apache,dc=org
+   * @return DN for the specific user if exists, null otherwise
+   * @throws NamingException
+   */
+  String findUserDnBySearch(String user, String userSearchFilter, String 
baseDn) throws NamingException;

Review Comment:
   I'm confused and missing the obvious; I'm just suggesting renaming `String 
findUserDnBySearch(String user, String userSearchFilter, String baseDn) throws 
NamingException;` to `String findUserDn(String user, String userSearchFilter, 
String baseDn) throws NamingException;`. The new method with search arguments 
seems a nice extension (overload) of the original one. 





Issue Time Tracking
---

Worklog Id: (was: 860548)
Time Spent: 1h 10m  (was: 1h)

> Improve LDAP auth to support generic search bind authentication
> ---
>
> Key: HIVE-27311
> URL: https://issues.apache.org/jira/browse/HIVE-27311
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 4.0.0-alpha-2
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Hive's LDAP auth configuration is home-grown and a bit specific to Hive. This 
> was by design, intended to be as flexible as possible to accommodate 
> various LDAP implementations. But it does not make it easy to 
> configure Hive with such custom LDAP filtering values when most other 
> components accept generic LDAP filters, for example search bind filters.
> A layer of translation is needed to configure it. Instead, we can 
> enhance Hive to support generic search bind filters.
> To support this, I am proposing adding NEW alternate configurations. 
> hive.server2.authentication.ldap.userSearchFilter
> hive.server2.authentication.ldap.groupSearchFilter
> hive.server2.authentication.ldap.groupBaseDN
> Search bind filtering will also use EXISTING config param
> hive.server2.authentication.ldap.baseDN
> This is an alternate configuration and will be used first if specified, so 
> users can continue to use the existing configuration as well. These changes 
> should not interfere with existing configurations.
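
A hedged hive-site.xml fragment showing how the proposed properties might be set. The property names are taken verbatim from the proposal above; the DNs and filters are example values only:

```xml
<!-- Example values only; search bind auth kicks in when these are specified. -->
<property>
  <name>hive.server2.authentication.ldap.baseDN</name>
  <value>dc=apache,dc=org</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.userSearchFilter</name>
  <value>(&amp;(uid={0})(objectClass=person))</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.groupBaseDN</name>
  <value>ou=groups,dc=apache,dc=org</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.groupSearchFilter</name>
  <value>(&amp;(member={0})(objectClass=groupOfNames))</value>
</property>
```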



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
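The search-bind flow proposed in HIVE-27311 — substitute the username into a generic LDAP filter such as `(&(uid={0})(objectClass=person))`, then search under the configured base DN — can be sketched with plain JDK JNDI. This is a minimal illustration, not Hive's actual implementation: the class name, the `buildFilter` helper, and the fallback behavior are assumptions; only the `{0}` placeholder convention and the config semantics come from the issue.

```java
import java.text.MessageFormat;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class SearchBindSketch {

    // Substitute the bare username into a generic search filter such as
    // (&(uid={0})(objectClass=person)) -- the {0} placeholder convention
    // of hive.server2.authentication.ldap.userSearchFilter.
    static String buildFilter(String userSearchFilter, String user) {
        return MessageFormat.format(userSearchFilter, user);
    }

    // Resolve the user's DN by searching under baseDn with the filled-in
    // filter; returns null when no entry matches. Hypothetical helper
    // mirroring the findUserDnBySearch signature discussed in the review.
    static String findUserDnBySearch(DirContext ctx, String user,
                                     String userSearchFilter, String baseDn)
            throws NamingException {
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
                ctx.search(baseDn, buildFilter(userSearchFilter, user), controls);
        return results.hasMore() ? results.next().getNameInNamespace() : null;
    }

    public static void main(String[] args) {
        // No live LDAP server here, so only the pure filter step is exercised.
        // Prints: (&(uid=hive)(objectClass=person))
        System.out.println(buildFilter("(&(uid={0})(objectClass=person))", "hive"));
    }
}
```

The point of the search-bind style is that the filter string is opaque to Hive: any filter another component accepts can be dropped in unchanged, removing the translation layer the issue describes.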


[jira] [Work logged] (HIVE-27311) Improve LDAP auth to support generic search bind authentication

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27311?focusedWorklogId=860549&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860549
 ]

ASF GitHub Bot logged work on HIVE-27311:
-

Author: ASF GitHub Bot
Created on: 04/May/23 11:47
Start Date: 04/May/23 11:47
Worklog Time Spent: 10m 
  Work Description: henrib commented on code in PR #4284:
URL: https://github.com/apache/hive/pull/4284#discussion_r1184904759


##
service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java:
##
@@ -34,6 +34,16 @@ public interface DirSearch extends Closeable {
*/
   String findUserDn(String user) throws NamingException;
 
+  /**
+   * Finds user's distinguished name.
+   * @param user username
+   * @param userSearchFilter Generic LDAP Search filter for ex: 
(&(uid={0})(objectClass=person))
+   * @param baseDn LDAP BaseDN for user searches for ex: dc=apache,dc=org
+   * @return DN for the specific user if exists, null otherwise
+   * @throws NamingException
+   */
+  String findUserDnBySearch(String user, String userSearchFilter, String 
baseDn) throws NamingException;

Review Comment:
   I'm confused and missing the obvious; I'm just suggesting renaming
   `String findUserDnBySearch(String user, String userSearchFilter, String 
baseDn) throws NamingException;`
   to
   `String findUserDn(String user, String userSearchFilter, String baseDn) 
throws NamingException;`.
   The new method with search arguments seems like a nice extension (overload) 
of the original one. 





Issue Time Tracking
---

Worklog Id: (was: 860549)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve LDAP auth to support generic search bind authentication
> ---
>
> Key: HIVE-27311
> URL: https://issues.apache.org/jira/browse/HIVE-27311
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 4.0.0-alpha-2
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hive's LDAP auth configuration is home-baked and a bit specific to Hive. This 
> was by design, intended to be as flexible as possible to accommodate various 
> LDAP implementations. But this does not necessarily make it easy to configure 
> Hive with such custom values for LDAP filtering, when most other components 
> accept generic LDAP filters, for example search bind filters.
> There has to be a layer of translation to have it configured. Instead, we can 
> enhance Hive to support generic search bind filters.
> To support this, I am proposing adding NEW alternate configurations:
> hive.server2.authentication.ldap.userSearchFilter
> hive.server2.authentication.ldap.groupSearchFilter
> hive.server2.authentication.ldap.groupBaseDN
> Search bind filtering will also use the EXISTING config param
> hive.server2.authentication.ldap.baseDN
> This is an alternate configuration and will be used first if specified, so 
> users can continue to use existing configurations as well. These changes 
> should not interfere with existing configurations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-26913) Iceberg: HiveVectorizedReader::parquetRecordReader should reuse footer information

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26913?focusedWorklogId=860546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860546
 ]

ASF GitHub Bot logged work on HIVE-26913:
-

Author: ASF GitHub Bot
Created on: 04/May/23 11:42
Start Date: 04/May/23 11:42
Worklog Time Spent: 10m 
  Work Description: deniskuzZ merged PR #4136:
URL: https://github.com/apache/hive/pull/4136




Issue Time Tracking
---

Worklog Id: (was: 860546)
Time Spent: 1.5h  (was: 1h 20m)

> Iceberg: HiveVectorizedReader::parquetRecordReader should reuse footer 
> information
> --
>
> Key: HIVE-26913
> URL: https://issues.apache.org/jira/browse/HIVE-26913
> Project: Hive
>  Issue Type: Improvement
>  Components: Iceberg integration
>Reporter: Rajesh Balamohan
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: performance, pull-request-available, stability
> Fix For: 4.0.0
>
> Attachments: Screenshot 2023-01-09 at 4.01.14 PM.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HiveVectorizedReader::parquetRecordReader should reuse details of parquet 
> footer, instead of reading it again.
>  
> It reads parquet footer here:
> [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/vector/HiveVectorizedReader.java#L230-L232]
> Again it reads the footer here for constructing vectorized recordreader
> [https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/vector/HiveVectorizedReader.java#L249]
>  
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java#L50]
>  
> Check the codepath of 
> VectorizedParquetRecordReader::setupMetadataAndParquetSplit
> [https://github.com/apache/hive/blob/6b0139188aba6a95808c8d1bec63a651ec9e4bdc/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedParquetRecordReader.java#L180]
>  
> It should be possible to share "ParquetMetadata" in 
> VectorizedParquetRecordReader.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
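The fix described in HIVE-26913 amounts to parsing the Parquet footer once and threading the resulting metadata into the vectorized reader, rather than re-opening the file in each code path. A dependency-free sketch of that memoization pattern follows; the `FooterCache` class and the string stand-in for `ParquetMetadata` are illustrative assumptions, not the actual Hive or parquet-mr API.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class FooterCache {

    // Wraps an expensive read (e.g. parsing a Parquet footer from storage)
    // so it runs at most once; later callers reuse the cached result.
    static <T> Supplier<T> memoize(Supplier<T> expensiveRead) {
        return new Supplier<T>() {
            private T cached;
            @Override public synchronized T get() {
                if (cached == null) {
                    cached = expensiveRead.get();
                }
                return cached;
            }
        };
    }

    static final AtomicInteger FOOTER_READS = new AtomicInteger();

    public static void main(String[] args) {
        // Stand-in for the footer parse that HiveVectorizedReader and
        // VectorizedParquetRecordReader each performed independently.
        Supplier<String> footer = memoize(() -> {
            FOOTER_READS.incrementAndGet();
            return "parquet-footer-metadata";
        });
        footer.get();                            // first call parses the footer
        footer.get();                            // second call reuses it
        System.out.println(FOOTER_READS.get());  // prints 1
    }
}
```

In the actual patch the sharing is done by passing the already-parsed metadata object between the two call sites, but the invariant is the same: one footer parse per split instead of two.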


[jira] [Created] (HIVE-27316) Select query on table with remote database returns NULL values with postgreSQL and Redshift data connectors

2023-05-04 Thread Venugopal Reddy K (Jira)
Venugopal Reddy K created HIVE-27316:


 Summary: Select query on table with remote database returns NULL 
values with postgreSQL and Redshift data connectors
 Key: HIVE-27316
 URL: https://issues.apache.org/jira/browse/HIVE-27316
 Project: Hive
  Issue Type: Bug
Reporter: Venugopal Reddy K


*Brief Description:*

A few data types are not mapped from PostgreSQL/Redshift to Hive data types, so 
values for the unmapped columns are shown as NULL.

*Steps to reproduce:*

1. Create connectors for PostgreSQL and Redshift, and create remote databases 
with the respective connectors.

 
{code:java}
create connector rscon1 type 'postgres' url 
'jdbc:redshift://redshift.us-east-2.redshift.amazonaws.com:5439/dev?ssl=false&tcpKeepAlive=true'
 WITH DCPROPERTIES 
('hive.sql.dbcp.username'='cloudera','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='com.amazon.redshift.jdbc.Driver','hive.sql.schema'
 = 'public');

create REMOTE database localdev1 using rscon1 with 
DBPROPERTIES("connector.remoteDbName"="dev");

create connector pscon1 type 'postgres' url 
'jdbc:postgresql://postgre.us-east-2.rds.amazonaws.com:5432/postgres?ssl=false&tcpKeepAlive=true'
 WITH DCPROPERTIES 
('hive.sql.dbcp.username'='postgres','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='org.postgresql.Driver','hive.sql.schema'
 = 'public');

create REMOTE database localdevps1 using pscon1 with 
DBPROPERTIES("connector.remoteDbName"="postgres");
{code}
2. Executing a select query on a table (in the remote db) having int2, int4, 
float4, float8, bool, or character(n) columns shows NULL values.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
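The symptom in HIVE-27316 points to a gap in the PostgreSQL-to-Hive type translation table: when the connector meets a type name it does not recognize, the column comes back NULL. A minimal sketch of the missing entries is below; the `PgTypeMapping` class, the `toHiveType` method, and the `string` fallback are hypothetical illustrations, not Hive's connector-provider API. The left-hand names are the standard PostgreSQL internal type names (`character(n)` reports as `bpchar`).

```java
import java.util.HashMap;
import java.util.Map;

public class PgTypeMapping {

    // PostgreSQL internal type name -> Hive type. These are the six types
    // the issue reports as coming back NULL when left unmapped.
    private static final Map<String, String> PG_TO_HIVE = new HashMap<>();
    static {
        PG_TO_HIVE.put("int2", "smallint");
        PG_TO_HIVE.put("int4", "int");
        PG_TO_HIVE.put("float4", "float");
        PG_TO_HIVE.put("float8", "double");
        PG_TO_HIVE.put("bool", "boolean");
        PG_TO_HIVE.put("bpchar", "char"); // character(n) reports as bpchar
    }

    // Returns the Hive type, falling back to "string" -- lossy, but better
    // than silently surfacing NULLs for every value in the column.
    static String toHiveType(String pgType) {
        return PG_TO_HIVE.getOrDefault(pgType.toLowerCase(), "string");
    }

    public static void main(String[] args) {
        System.out.println(toHiveType("int2"));   // smallint
        System.out.println(toHiveType("float8")); // double
    }
}
```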


[jira] [Updated] (HIVE-27316) Select query on table with remote database returns NULL values with postgreSQL and Redshift data connectors

2023-05-04 Thread Venugopal Reddy K (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venugopal Reddy K updated HIVE-27316:
-
Description: 
*Brief Description:*

Few datatypes are not mapped from postgres/redshift to hive data types. Thus 
values for unmapped columns are shown as null.

 

*Steps to reproduce:*

1. create connectors for postgres,redshift, and create remote database with the 
respective connectors.

 
{code:java}
create connector rscon1 type 'postgres' url 
'jdbc:redshift://redshift.us-east-2.redshift.amazonaws.com:5439/dev?ssl=false&tcpKeepAlive=true'
 WITH DCPROPERTIES 
('hive.sql.dbcp.username'='cloudera','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='com.amazon.redshift.jdbc.Driver','hive.sql.schema'
 = 'public');

create REMOTE database localdev1 using rscon1 with 
DBPROPERTIES("connector.remoteDbName"="dev");

create connector pscon1 type 'postgres' url 
'jdbc:postgresql://postgre.us-east-2.rds.amazonaws.com:5432/postgres?ssl=false&tcpKeepAlive=true'
 WITH DCPROPERTIES 
('hive.sql.dbcp.username'='postgres','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='org.postgresql.Driver','hive.sql.schema'
 = 'public');

create REMOTE database localdevps1 using pscon1 with 
DBPROPERTIES("connector.remoteDbName"="postgres");
{code}
2. Execute select query on table(in remote db) having int2, int4, float4, 
float8, bool, character columns shows NULL values.

 

  was:
*Brief Description:*

Few datatypes are not mapped from postgres/redshift to hive data types. Thus 
values for unmapped columns are shown as null.

 

*Steps to reproduce:*

1. create connectors for postgres,redshift, and create remote database with the 
respective connectors.

 
{code:java}
create connector rscon1 type 'postgres' url 
'jdbc:redshift://redshift.us-east-2.redshift.amazonaws.com:5439/dev?ssl=false&tcpKeepAlive=true'
 WITH DCPROPERTIES 
('hive.sql.dbcp.username'='cloudera','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='com.amazon.redshift.jdbc.Driver','hive.sql.schema'
 = 'public');

create REMOTE database localdev1 using rscon1 with 
DBPROPERTIES("connector.remoteDbName"="dev");

create connector pscon1 type 'postgres' url 
'jdbc:postgresql://postgre.us-east-2.rds.amazonaws.com:5432/postgres?ssl=false&tcpKeepAlive=true'
 WITH DCPROPERTIES 
('hive.sql.dbcp.username'='postgres','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='org.postgresql.Driver','hive.sql.schema'
 = 'public');

create REMOTE database localdevps1 using pscon1 with 
DBPROPERTIES("connector.remoteDbName"="postgres");
{code}
2. Execute select query on table(in remote db) having int2, int4, float4, 
float8, bool, character(n) columns shows NULL values.

 


> Select query on table with remote database returns NULL values with 
> postgreSQL and Redshift data connectors
> ---
>
> Key: HIVE-27316
> URL: https://issues.apache.org/jira/browse/HIVE-27316
> Project: Hive
>  Issue Type: Bug
>Reporter: Venugopal Reddy K
>Priority: Major
>
> *Brief Description:*
> Few datatypes are not mapped from postgres/redshift to hive data types. Thus 
> values for unmapped columns are shown as null.
>  
> *Steps to reproduce:*
> 1. create connectors for postgres,redshift, and create remote database with 
> the respective connectors.
>  
> {code:java}
> create connector rscon1 type 'postgres' url 
> 'jdbc:redshift://redshift.us-east-2.redshift.amazonaws.com:5439/dev?ssl=false&tcpKeepAlive=true'
>  WITH DCPROPERTIES 
> ('hive.sql.dbcp.username'='cloudera','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='com.amazon.redshift.jdbc.Driver','hive.sql.schema'
>  = 'public');
> create REMOTE database localdev1 using rscon1 with 
> DBPROPERTIES("connector.remoteDbName"="dev");
> create connector pscon1 type 'postgres' url 
> 'jdbc:postgresql://postgre.us-east-2.rds.amazonaws.com:5432/postgres?ssl=false&tcpKeepAlive=true'
>  WITH DCPROPERTIES 
> ('hive.sql.dbcp.username'='postgres','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='org.postgresql.Driver','hive.sql.schema'
>  = 'public');
> create REMOTE database localdevps1 using pscon1 with 
> DBPROPERTIES("connector.remoteDbName"="postgres");
> {code}
> 2. Execute select query on table(in remote db) having int2, int4, float4, 
> float8, bool, character columns shows NULL values.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27316) Select query on table with remote database returns NULL values with postgreSQL and Redshift data connectors

2023-05-04 Thread Venugopal Reddy K (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venugopal Reddy K reassigned HIVE-27316:


Assignee: Venugopal Reddy K

> Select query on table with remote database returns NULL values with 
> postgreSQL and Redshift data connectors
> ---
>
> Key: HIVE-27316
> URL: https://issues.apache.org/jira/browse/HIVE-27316
> Project: Hive
>  Issue Type: Bug
>Reporter: Venugopal Reddy K
>Assignee: Venugopal Reddy K
>Priority: Major
>
> *Brief Description:*
> Few datatypes are not mapped from postgres/redshift to hive data types. Thus 
> values for unmapped columns are shown as null.
>  
> *Steps to reproduce:*
> 1. create connectors for postgres,redshift, and create remote database with 
> the respective connectors.
>  
> {code:java}
> create connector rscon1 type 'postgres' url 
> 'jdbc:redshift://redshift.us-east-2.redshift.amazonaws.com:5439/dev?ssl=false&tcpKeepAlive=true'
>  WITH DCPROPERTIES 
> ('hive.sql.dbcp.username'='cloudera','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='com.amazon.redshift.jdbc.Driver','hive.sql.schema'
>  = 'public');
> create REMOTE database localdev1 using rscon1 with 
> DBPROPERTIES("connector.remoteDbName"="dev");
> create connector pscon1 type 'postgres' url 
> 'jdbc:postgresql://postgre.us-east-2.rds.amazonaws.com:5432/postgres?ssl=false&tcpKeepAlive=true'
>  WITH DCPROPERTIES 
> ('hive.sql.dbcp.username'='postgres','hive.sql.dbcp.password'='Cloudera#123','hive.sql.jdbc.driver'='org.postgresql.Driver','hive.sql.schema'
>  = 'public');
> create REMOTE database localdevps1 using pscon1 with 
> DBPROPERTIES("connector.remoteDbName"="postgres");
> {code}
> 2. Execute select query on table(in remote db) having int2, int4, float4, 
> float8, bool, character columns shows NULL values.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27195) Drop table if Exists . fails during authorization for temporary tables

2023-05-04 Thread Riju Trivedi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Riju Trivedi reassigned HIVE-27195:
---

Assignee: Riju Trivedi  (was: Srinivasu Majeti)

> Drop table if Exists . fails during authorization for 
> temporary tables
> ---
>
> Key: HIVE-27195
> URL: https://issues.apache.org/jira/browse/HIVE-27195
> Project: Hive
>  Issue Type: Bug
>Reporter: Riju Trivedi
>Assignee: Riju Trivedi
>Priority: Major
>
> https://issues.apache.org/jira/browse/HIVE-20051 handles skipping 
> authorization for temporary tables, but drop table if exists still fails 
> with HiveAccessControlException.
> Steps to Repro:
> {code:java}
> use test; CREATE TEMPORARY TABLE temp_table (id int);
> drop table if exists test.temp_table;
> Error: Error while compiling statement: FAILED: HiveAccessControlException 
> Permission denied: user [rtrivedi] does not have [DROP] privilege on 
> [test/temp_table] (state=42000,code=4) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27234) Iceberg: CREATE BRANCH SQL implementation

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27234?focusedWorklogId=860538&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860538
 ]

ASF GitHub Bot logged work on HIVE-27234:
-

Author: ASF GitHub Bot
Created on: 04/May/23 10:37
Start Date: 04/May/23 10:37
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4216:
URL: https://github.com/apache/hive/pull/4216#issuecomment-1534519816

   Kudos, SonarCloud Quality Gate passed!    [![Quality Gate 
passed](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/QualityGateBadge/passed-16px.png
 'Quality Gate 
passed')](https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4216)
   
   
[![Bug](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug-16px.png
 
'Bug')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=BUG)
 
[![C](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/C-16px.png
 
'C')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=BUG)
 [1 
Bug](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=BUG)
  
   
[![Vulnerability](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability-16px.png
 
'Vulnerability')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=VULNERABILITY)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=VULNERABILITY)
  
   [![Security 
Hotspot](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot-16px.png
 'Security 
Hotspot')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=4216&resolved=false&types=SECURITY_HOTSPOT)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=4216&resolved=false&types=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/security_hotspots?id=apache_hive&pullRequest=4216&resolved=false&types=SECURITY_HOTSPOT)
  
   [![Code 
Smell](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell-16px.png
 'Code 
Smell')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=CODE_SMELL)
 
[![A](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A-16px.png
 
'A')](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=CODE_SMELL)
 [3 Code 
Smells](https://sonarcloud.io/project/issues?id=apache_hive&pullRequest=4216&resolved=false&types=CODE_SMELL)
   
   [![No Coverage 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/NoCoverageInfo-16px.png
 'No Coverage 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=4216&metric=coverage&view=list)
 No Coverage information  
   [![No Duplication 
information](https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/NoDuplicationInfo-16px.png
 'No Duplication 
information')](https://sonarcloud.io/component_measures?id=apache_hive&pullRequest=4216&metric=duplicated_lines_density&view=list)
 No Duplication information
   
   




Issue Time Tracking
---

Worklog Id: (was: 860538)
Time Spent: 7h 10m  (was: 7h)

> Iceberg:  CREATE BRANCH SQL implementation
> --
>
> Key: HIVE-27234
> URL: https://issues.apache.org/jira/browse/HIVE-27234
> Project: Hive
>  Issue Type: Sub-task
>  Components: Iceberg integration
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Maybe we can follow spark sql about branch ddl implementation 
> [https://github.com/apache/iceberg/pull/6617]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-27300) Upgrade to Parquet 1.13.0

2023-05-04 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17719244#comment-17719244
 ] 

Denys Kuzmenko commented on HIVE-27300:
---

Hi [~fokko], do you know why the Parquet version wasn't updated in Iceberg? It's 
still [1.12.3|https://github.com/apache/iceberg/blob/master/versions.props#L8] 

> Upgrade to Parquet 1.13.0
> -
>
> Key: HIVE-27300
> URL: https://issues.apache.org/jira/browse/HIVE-27300
> Project: Hive
>  Issue Type: Improvement
>  Components: Parquet
>Affects Versions: 3.1.3
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-23394) TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23394?focusedWorklogId=860524&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860524
 ]

ASF GitHub Bot logged work on HIVE-23394:
-

Author: ASF GitHub Bot
Created on: 04/May/23 09:57
Start Date: 04/May/23 09:57
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #4249:
URL: https://github.com/apache/hive/pull/4249#discussion_r1184802785


##
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcGenericUDTFGetSplits.java:
##
@@ -38,14 +38,16 @@
 public class TestJdbcGenericUDTFGetSplits extends 
AbstractTestJdbcGenericUDTFGetSplits {
 
   @Test(timeout = 20)
-  @Ignore("HIVE-23394")
   public void testGenericUDTFOrderBySplitCount1() throws Exception {
-super.testGenericUDTFOrderBySplitCount1("get_splits", new int[]{10, 1, 0, 
2, 2, 2, 1, 10});
+super.testGenericUDTFOrderBySplitCount1("get_splits", new int[] { 10, 5, 
0, 2, 2, 2, 5 });
+super.testGenericUDTFOrderBySplitCount1("get_llap_splits", new int[] { 12, 
7, 1, 4, 4, 4, 7 });
   }
 
+
   @Test(timeout = 20)
   public void testGenericUDTFOrderBySplitCount1OnPartitionedTable() throws 
Exception {
 super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_splits", 
new int[]{5, 5, 1, 1, 1});
+
super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_llap_splits", 
new int[]{7, 7, 3, 3, 3});

Review Comment:
   no need for `super` and `GenericUDTF` in a method name





Issue Time Tracking
---

Worklog Id: (was: 860524)
Time Spent: 3h 50m  (was: 3h 40m)

> TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky
> 
>
> Key: HIVE-23394
> URL: https://issues.apache.org/jira/browse/HIVE-23394
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> both 
> TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1 and
> TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
> can fail with the exception below
> seems like the connection was lost
> {code}
> Error Message
> Failed to close statement
> Stacktrace
> java.sql.SQLException: Failed to close statement
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:200)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:205)
>   at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:222)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.runQuery(AbstractTestJdbcGenericUDTFGetSplits.java:135)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1(AbstractTestJdbcGenericUDTFGetSplits.java:164)
>   at 
> org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1(TestJdbcGenericUDTFGetSplits2.java:28)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.thrift.TApplicationException: CloseOperation failed: 
> out of sequence response
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:84)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_CloseOperation(TCLIService.java:521)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.CloseOperation(TCLIService.java:508)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1732)
>   at com.sun.proxy.$Proxy146.CloseOperation(Unknown Source)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:193)
>  

[jira] [Work logged] (HIVE-23394) TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23394?focusedWorklogId=860523&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860523
 ]

ASF GitHub Bot logged work on HIVE-23394:
-

Author: ASF GitHub Bot
Created on: 04/May/23 09:55
Start Date: 04/May/23 09:55
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #4249:
URL: https://github.com/apache/hive/pull/4249#discussion_r1184802785


##
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcGenericUDTFGetSplits.java:
##
@@ -38,14 +38,16 @@
 public class TestJdbcGenericUDTFGetSplits extends 
AbstractTestJdbcGenericUDTFGetSplits {
 
   @Test(timeout = 20)
-  @Ignore("HIVE-23394")
   public void testGenericUDTFOrderBySplitCount1() throws Exception {
-super.testGenericUDTFOrderBySplitCount1("get_splits", new int[]{10, 1, 0, 
2, 2, 2, 1, 10});
+super.testGenericUDTFOrderBySplitCount1("get_splits", new int[] { 10, 5, 
0, 2, 2, 2, 5 });
+super.testGenericUDTFOrderBySplitCount1("get_llap_splits", new int[] { 12, 
7, 1, 4, 4, 4, 7 });
   }
 
+
   @Test(timeout = 20)
   public void testGenericUDTFOrderBySplitCount1OnPartitionedTable() throws 
Exception {
 super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_splits", 
new int[]{5, 5, 1, 1, 1});
+
super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_llap_splits", 
new int[]{7, 7, 3, 3, 3});

Review Comment:
   no need for super



##
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcGenericUDTFGetSplits.java:
##
@@ -38,14 +38,16 @@
 public class TestJdbcGenericUDTFGetSplits extends 
AbstractTestJdbcGenericUDTFGetSplits {
 
   @Test(timeout = 20)
-  @Ignore("HIVE-23394")
   public void testGenericUDTFOrderBySplitCount1() throws Exception {
-super.testGenericUDTFOrderBySplitCount1("get_splits", new int[]{10, 1, 0, 
2, 2, 2, 1, 10});
+super.testGenericUDTFOrderBySplitCount1("get_splits", new int[] { 10, 5, 
0, 2, 2, 2, 5 });
+super.testGenericUDTFOrderBySplitCount1("get_llap_splits", new int[] { 12, 
7, 1, 4, 4, 4, 7 });
   }
 
+
   @Test(timeout = 20)
   public void testGenericUDTFOrderBySplitCount1OnPartitionedTable() throws 
Exception {
 super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_splits", 
new int[]{5, 5, 1, 1, 1});
+
super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_llap_splits", 
new int[]{7, 7, 3, 3, 3});

Review Comment:
   no need for `super`





Issue Time Tracking
---

Worklog Id: (was: 860523)
Time Spent: 3h 40m  (was: 3.5h)

> TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky
> 
>
> Key: HIVE-23394
> URL: https://issues.apache.org/jira/browse/HIVE-23394
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> both 
> TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1 and
> TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
> can fail with the exception below
> seems like the connection was lost
> {code}
> Error Message
> Failed to close statement
> Stacktrace
> java.sql.SQLException: Failed to close statement
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:200)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:205)
>   at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:222)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.runQuery(AbstractTestJdbcGenericUDTFGetSplits.java:135)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1(AbstractTestJdbcGenericUDTFGetSplits.java:164)
>   at 
> org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1(TestJdbcGenericUDTFGetSplits2.java:28)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit

[jira] [Work logged] (HIVE-23394) TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23394?focusedWorklogId=860520&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860520
 ]

ASF GitHub Bot logged work on HIVE-23394:
-

Author: ASF GitHub Bot
Created on: 04/May/23 09:48
Start Date: 04/May/23 09:48
Worklog Time Spent: 10m 
  Work Description: simhadri-g commented on code in PR #4249:
URL: https://github.com/apache/hive/pull/4249#discussion_r1184794707


##
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcGenericUDTFGetSplits.java:
##
@@ -38,14 +38,16 @@
 public class TestJdbcGenericUDTFGetSplits extends 
AbstractTestJdbcGenericUDTFGetSplits {
 
   @Test(timeout = 20)
-  @Ignore("HIVE-23394")
   public void testGenericUDTFOrderBySplitCount1() throws Exception {
-super.testGenericUDTFOrderBySplitCount1("get_splits", new int[]{10, 1, 0, 
2, 2, 2, 1, 10});
+super.testGenericUDTFOrderBySplitCount1("get_splits", new int[] { 10, 5, 
0, 2, 2, 2, 5 });
+super.testGenericUDTFOrderBySplitCount1("get_llap_splits", new int[] { 12, 
7, 1, 4, 4, 4, 7 });
   }
 
+
   @Test(timeout = 20)
   public void testGenericUDTFOrderBySplitCount1OnPartitionedTable() throws 
Exception {
 super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_splits", 
new int[]{5, 5, 1, 1, 1});
+
super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_llap_splits", 
new int[]{7, 7, 3, 3, 3});

Review Comment:
   Done





Issue Time Tracking
---

Worklog Id: (was: 860520)
Time Spent: 3.5h  (was: 3h 20m)

> TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky
> 
>
> Key: HIVE-23394
> URL: https://issues.apache.org/jira/browse/HIVE-23394
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> both 
> TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1 and
> TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
> can fail with the exception below
> seems like the connection was lost
> {code}
> Error Message
> Failed to close statement
> Stacktrace
> java.sql.SQLException: Failed to close statement
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:200)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:205)
>   at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:222)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.runQuery(AbstractTestJdbcGenericUDTFGetSplits.java:135)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1(AbstractTestJdbcGenericUDTFGetSplits.java:164)
>   at 
> org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1(TestJdbcGenericUDTFGetSplits2.java:28)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.thrift.TApplicationException: CloseOperation failed: 
> out of sequence response
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:84)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_CloseOperation(TCLIService.java:521)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.CloseOperation(TCLIService.java:508)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1732)
>   at com.sun.proxy.$Proxy146.CloseOperation(Unknown Source)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:193)
>   ... 14 more
> {code}



--
This message was sent

[jira] [Work logged] (HIVE-23394) TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23394?focusedWorklogId=860505&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860505
 ]

ASF GitHub Bot logged work on HIVE-23394:
-

Author: ASF GitHub Bot
Created on: 04/May/23 09:31
Start Date: 04/May/23 09:31
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #4249:
URL: https://github.com/apache/hive/pull/4249#discussion_r1184773121


##
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcGenericUDTFGetSplits.java:
##
@@ -38,14 +38,16 @@
 public class TestJdbcGenericUDTFGetSplits extends AbstractTestJdbcGenericUDTFGetSplits {
 
   @Test(timeout = 20)
-  @Ignore("HIVE-23394")
   public void testGenericUDTFOrderBySplitCount1() throws Exception {
-    super.testGenericUDTFOrderBySplitCount1("get_splits", new int[]{10, 1, 0, 2, 2, 2, 1, 10});
+    super.testGenericUDTFOrderBySplitCount1("get_splits", new int[] { 10, 5, 0, 2, 2, 2, 5 });
+    super.testGenericUDTFOrderBySplitCount1("get_llap_splits", new int[] { 12, 7, 1, 4, 4, 4, 7 });
   }
 
+
   @Test(timeout = 20)
   public void testGenericUDTFOrderBySplitCount1OnPartitionedTable() throws Exception {
     super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_splits", new int[]{5, 5, 1, 1, 1});
+    super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_llap_splits", new int[]{7, 7, 3, 3, 3});

Review Comment:
   but they test different UDFs; could we at least have separate test methods for them?
   
   testGetSplitsOrderBySplitCount1OnPartitionedTable
   testGetLlapSplitsOrderBySplitCount1OnPartitionedTable
   





Issue Time Tracking
---

Worklog Id: (was: 860505)
Time Spent: 3h 20m  (was: 3h 10m)

> TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky
> 
>
> Key: HIVE-23394
> URL: https://issues.apache.org/jira/browse/HIVE-23394
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> both 
> TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1 and
> TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
> can fail with the exception below
> seems like the connection was lost
> {code}
> Error Message
> Failed to close statement
> Stacktrace
> java.sql.SQLException: Failed to close statement
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:200)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:205)
>   at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:222)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.runQuery(AbstractTestJdbcGenericUDTFGetSplits.java:135)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1(AbstractTestJdbcGenericUDTFGetSplits.java:164)
>   at 
> org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1(TestJdbcGenericUDTFGetSplits2.java:28)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.thrift.TApplicationException: CloseOperation failed: 
> out of sequence response
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:84)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_CloseOperation(TCLIService.java:521)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.CloseOperation(TCLIService.java:508)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1732)
>   at co

[jira] [Work logged] (HIVE-23394) TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23394?focusedWorklogId=860504&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860504
 ]

ASF GitHub Bot logged work on HIVE-23394:
-

Author: ASF GitHub Bot
Created on: 04/May/23 09:29
Start Date: 04/May/23 09:29
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on code in PR #4249:
URL: https://github.com/apache/hive/pull/4249#discussion_r1184773121


##
itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcGenericUDTFGetSplits.java:
##
@@ -38,14 +38,16 @@
 public class TestJdbcGenericUDTFGetSplits extends AbstractTestJdbcGenericUDTFGetSplits {
 
   @Test(timeout = 20)
-  @Ignore("HIVE-23394")
   public void testGenericUDTFOrderBySplitCount1() throws Exception {
-    super.testGenericUDTFOrderBySplitCount1("get_splits", new int[]{10, 1, 0, 2, 2, 2, 1, 10});
+    super.testGenericUDTFOrderBySplitCount1("get_splits", new int[] { 10, 5, 0, 2, 2, 2, 5 });
+    super.testGenericUDTFOrderBySplitCount1("get_llap_splits", new int[] { 12, 7, 1, 4, 4, 4, 7 });
   }
 
+
   @Test(timeout = 20)
   public void testGenericUDTFOrderBySplitCount1OnPartitionedTable() throws Exception {
     super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_splits", new int[]{5, 5, 1, 1, 1});
+    super.testGenericUDTFOrderBySplitCount1OnPartitionedTable("get_llap_splits", new int[]{7, 7, 3, 3, 3});

Review Comment:
   but they test different UDFs; could we at least have separate test methods for them?





Issue Time Tracking
---

Worklog Id: (was: 860504)
Time Spent: 3h 10m  (was: 3h)

> TestJdbcGenericUDTFGetSplits2#testGenericUDTFOrderBySplitCount1 is flaky
> 
>
> Key: HIVE-23394
> URL: https://issues.apache.org/jira/browse/HIVE-23394
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Simhadri Govindappa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> both 
> TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1 and
> TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
> can fail with the exception below
> seems like the connection was lost
> {code}
> Error Message
> Failed to close statement
> Stacktrace
> java.sql.SQLException: Failed to close statement
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:200)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:205)
>   at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:222)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.runQuery(AbstractTestJdbcGenericUDTFGetSplits.java:135)
>   at 
> org.apache.hive.jdbc.AbstractTestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1(AbstractTestJdbcGenericUDTFGetSplits.java:164)
>   at 
> org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1(TestJdbcGenericUDTFGetSplits2.java:28)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.thrift.TApplicationException: CloseOperation failed: 
> out of sequence response
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:84)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_CloseOperation(TCLIService.java:521)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.CloseOperation(TCLIService.java:508)
>   at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1732)
>   at com.sun.proxy.$Proxy146.CloseOperation(Unknown Source)
>   at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveS

[jira] [Work logged] (HIVE-27243) Iceberg: Implement Load data via temp table

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27243?focusedWorklogId=860490&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860490
 ]

ASF GitHub Bot logged work on HIVE-27243:
-

Author: ASF GitHub Bot
Created on: 04/May/23 08:29
Start Date: 04/May/23 08:29
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4289:
URL: https://github.com/apache/hive/pull/4289#issuecomment-1534292597

   Kudos, SonarCloud Quality Gate passed! 
(https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4289)
   0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 1 Code Smell.
   No Coverage information, No Duplication information.
   
   




Issue Time Tracking
---

Worklog Id: (was: 860490)
Time Spent: 20m  (was: 10m)

> Iceberg: Implement Load data via temp table
> ---
>
> Key: HIVE-27243
> URL: https://issues.apache.org/jira/browse/HIVE-27243
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement LOAD DATA for Iceberg tables by ingesting the data into a temp 
> table and then doing an INSERT INTO/OVERWRITE (the Impala approach).
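A rough sketch of this temp-table approach in HiveQL, for illustration only: the table names, column types, and file path below are assumptions, not taken from the patch.

```sql
-- Hypothetical staging flow: LOAD DATA into a plain temporary table first,
-- then rewrite the staged rows into the Iceberg table with an INSERT.
CREATE TEMPORARY TABLE staging_tbl (id INT, name STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA LOCAL INPATH '/tmp/data.csv' INTO TABLE staging_tbl;

-- INSERT OVERWRITE could be used instead when replacing existing data.
INSERT INTO ice_tbl SELECT * FROM staging_tbl;
```

The extra rewrite step is what lets Hive produce proper Iceberg data/metadata files instead of moving raw files into place.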



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (HIVE-27308) Exposing client keystore and truststore passwords in the JDBC URL can be a security concern

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27308?focusedWorklogId=860472&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860472
 ]

ASF GitHub Bot logged work on HIVE-27308:
-

Author: ASF GitHub Bot
Created on: 04/May/23 07:01
Start Date: 04/May/23 07:01
Worklog Time Spent: 10m 
  Work Description: sonarcloud[bot] commented on PR #4282:
URL: https://github.com/apache/hive/pull/4282#issuecomment-1534187316

   Kudos, SonarCloud Quality Gate passed! 
(https://sonarcloud.io/dashboard?id=apache_hive&pullRequest=4282)
   0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells.
   No Coverage information, No Duplication information.
   
   




Issue Time Tracking
---

Worklog Id: (was: 860472)
Time Spent: 0.5h  (was: 20m)

> Exposing client keystore and truststore passwords in the JDBC URL can be a 
> security concern
> ---
>
> Key: HIVE-27308
> URL: https://issues.apache.org/jira/browse/HIVE-27308
> Project: Hive
>  Issue Type: Improvement
>Reporter: Venugopal Reddy K
>Assignee: Venugopal Reddy K
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> At present, we may have the following keystore and truststore passwords in 
> the JDBC URL.
>  # trustStorePassword
>  # keyStorePassword
>  # zooKeeperTruststorePassword
>  # zooKeeperKeystorePassword
> Exposing these passwords in the URL can be a security concern. We can hide 
> all of these passwords from the JDBC URL by protecting them in a local JCEKS 
> keystore file and passing the JCEKS file in the URL instead.
> 1. Leverage the hadoop credential provider 
> [Link|https://hadoop.apache.org/docs/stable/hadoop-project-d
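
One way to populate such a JCEKS keystore is the Hadoop credential provider CLI. A hedged, non-executable sketch (it requires a Hadoop installation; the keystore path is an assumption, and the alias names simply mirror the URL parameters listed above):

```shell
# Store each client-side password in a local JCEKS keystore
# instead of embedding it in the JDBC URL.
hadoop credential create trustStorePassword \
  -provider jceks://file/home/user/hive_client_creds.jceks
hadoop credential create keyStorePassword \
  -provider jceks://file/home/user/hive_client_creds.jceks

# Verify which aliases the keystore now holds.
hadoop credential list -provider jceks://file/home/user/hive_client_creds.jceks
```

Each `create` prompts for the password interactively, so the secret never appears in shell history or in the connection string.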