[jira] [Updated] (HIVE-25980) Reduce fs calls in HiveMetaStoreChecker.checkTable

2022-03-16 Thread Chiran Ravani (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chiran Ravani updated HIVE-25980:
-
Description: 
MSCK REPAIR TABLE for a table with many partitions can be slow on cloud storage 
such as S3; one case we found showed the slowness in 
HiveMetaStoreChecker.checkTable.
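
On S3 every per-partition filesystem probe turns into an HTTP round trip (the
getObjectMetadata HEAD calls visible in the thread dump below). The following is
only a minimal, hypothetical sketch of the general idea of reducing such calls by
discovering partition directories through bulk listings instead of per-partition
probes; it is not the actual HiveMetaStoreChecker code, and the class and method
names are illustrative.
{code:java}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class PartitionListingSketch {
  private PartitionListingSketch() {}

  /**
   * Collects the directories that exist partitionDepth levels below the table
   * location with one listStatus() call per directory per level, instead of
   * one exists()/getFileStatus() probe per expected partition.
   */
  public static Set<Path> listPartitionDirs(FileSystem fs, Path tableDir, int partitionDepth)
      throws IOException {
    Set<Path> current = new HashSet<>();
    current.add(tableDir);
    for (int level = 0; level < partitionDepth; level++) {
      Set<Path> next = new HashSet<>();
      for (Path dir : current) {
        for (FileStatus child : fs.listStatus(dir)) {   // one LIST instead of many HEADs
          if (child.isDirectory()) {
            next.add(child.getPath());
          }
        }
      }
      current = next;
    }
    return current;
  }
}
{code}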


{code:java}
"HiveServer2-Background-Pool: Thread-382" #382 prio=5 os_prio=0 
tid=0x7f97fc4a4000 nid=0x5c2a runnable [0x7f97c41a8000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:464)
at 
sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68)
at 
sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1341)
at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73)
at 
sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:957)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
com.amazonaws.thirdparty.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
com.amazonaws.thirdparty.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at 
com.amazonaws.thirdparty.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:82)
at 
com.amazonaws.thirdparty.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at 
com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1331)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5437)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5384)
at 
com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1367)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$10(S3AFileSystem.java:2458)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem$$Lambda$437/835000758.apply(Unknown 
Source)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:377)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2446)
at 

[jira] [Updated] (HIVE-25980) Reduce fs calls in HiveMetaStoreChecker.checkTable

2022-03-16 Thread Chiran Ravani (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chiran Ravani updated HIVE-25980:
-
Summary: Reduce fs calls in HiveMetaStoreChecker.checkTable  (was: Support 
HiveMetaStoreChecker.checkTable operation with multi-threaded)

> Reduce fs calls in HiveMetaStoreChecker.checkTable
> --
>
> Key: HIVE-25980
> URL: https://issues.apache.org/jira/browse/HIVE-25980
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.1.2, 4.0.0
>Reporter: Chiran Ravani
>Assignee: Chiran Ravani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> MSCK REPAIR TABLE for a table with many partitions can be slow on cloud storage 
> such as S3; one case we found showed the slowness in 
> HiveMetaStoreChecker.checkTable.
> {code:java}
> "HiveServer2-Background-Pool: Thread-382" #382 prio=5 os_prio=0 
> tid=0x7f97fc4a4000 nid=0x5c2a runnable [0x7f97c41a8000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at 
> sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:464)
>   at 
> sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68)
>   at 
> sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1341)
>   at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73)
>   at 
> sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:957)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
>   at 
> com.amazonaws.thirdparty.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
>   at 
> com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:82)
>   at 
> com.amazonaws.thirdparty.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>   at 
> com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>   at 
> com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1331)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
>

[jira] [Work logged] (HIVE-25758) OOM due to recursive application of CBO rules

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25758?focusedWorklogId=742896&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742896
 ]

ASF GitHub Bot logged work on HIVE-25758:
-

Author: ASF GitHub Bot
Created on: 17/Mar/22 00:16
Start Date: 17/Mar/22 00:16
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] closed pull request #2840:
URL: https://github.com/apache/hive/pull/2840


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742896)
Time Spent: 1h 40m  (was: 1.5h)

> OOM due to recursive application of CBO rules
> -
>
> Key: HIVE-25758
> URL: https://issues.apache.org/jira/browse/HIVE-25758
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Affects Versions: 4.0.0
>Reporter: Alessandro Solimando
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
>  
> Reproducing query is as follows:
> {code:java}
> create table test1 (act_nbr string);
> create table test2 (month int);
> create table test3 (mth int, con_usd double);
> EXPLAIN
>SELECT c.month,
>   d.con_usd
>FROM
>  (SELECT 
> cast(regexp_replace(substr(add_months(from_unixtime(unix_timestamp(), 
> 'yyyy-MM-dd'), -1), 1, 7), '-', '') AS int) AS month
>   FROM test1
>   UNION ALL
>   SELECT month
>   FROM test2
>   WHERE month = 202110) c
>JOIN test3 d ON c.month = d.mth; {code}
>  
> Different plans are generated during the first CBO steps, last being:
> {noformat}
> 2021-12-01T08:28:08,598 DEBUG [a18191bb-3a2b-4193-9abf-4e37dd1996bb main] 
> parse.CalcitePlanner: Plan after decorre
> lation:
> HiveProject(month=[$0], con_usd=[$2])
>   HiveJoin(condition=[=($0, $1)], joinType=[inner], algorithm=[none], 
> cost=[not available])
>     HiveProject(month=[$0])
>       HiveUnion(all=[true])
>         
> HiveProject(month=[CAST(regexp_replace(substr(add_months(FROM_UNIXTIME(UNIX_TIMESTAMP,
>  _UTF-16LE'-MM-d
> d':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"), -1), 1, 7), 
> _UTF-16LE'-':VARCHAR(2147483647) CHARACTER SET "UTF-
> 16LE", _UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE")):INTEGER])
>           HiveTableScan(table=[[default, test1]], table:alias=[test1])
>         HiveProject(month=[$0])
>           HiveFilter(condition=[=($0, CAST(202110):INTEGER)])
>             HiveTableScan(table=[[default, test2]], table:alias=[test2])
>     HiveTableScan(table=[[default, test3]], table:alias=[d]){noformat}
>  
> Then, the HEP planner will keep expanding the filter expression with 
> redundant expressions, such as the following, where the identical CAST 
> expression is present multiple times:
>  
> {noformat}
> rel#118:HiveFilter.HIVE.[].any(input=HepRelVertex#39,condition=IN(CAST(regexp_replace(substr(add_months(FROM_UNIXTIME(UNIX_TIMESTAMP,
>  _UTF-16LE'-MM-dd':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"), -1), 1, 
> 7), _UTF-16LE'-':VARCHAR(2147483647) CHARACTER SET "UTF-16LE", 
> _UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE")):INTEGER, 
> CAST(regexp_replace(substr(add_months(FROM_UNIXTIME(UNIX_TIMESTAMP, 
> _UTF-16LE'-MM-dd':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"), -1), 1, 
> 7), _UTF-16LE'-':VARCHAR(2147483647) CHARACTER SET "UTF-16LE", 
> _UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE")):INTEGER, 
> 202110)){noformat}
>  
> The problem seems to come from a bad interaction of at least 
> _HiveFilterProjectTransposeRule_ and _HiveJoinPushTransitivePredicatesRule_, 
> possibly more.
> Most probably the UNION part can be removed and the reproducer simplified 
> even further.
>  
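
As a general illustration of how runaway rule matching can be bounded in Calcite's
HEP planner (a sketch only, not the fix adopted for this issue; it uses the stock
CoreRules.FILTER_PROJECT_TRANSPOSE rather than Hive's own rule variants):
{code:java}
import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.rules.CoreRules;

public final class BoundedHepPlannerSketch {
  private BoundedHepPlannerSketch() {}

  /** A HEP planner whose rule firings are capped, so a bad rule interaction
   *  degrades into a suboptimal plan instead of unbounded expression growth. */
  public static HepPlanner boundedPlanner(int maxRuleApplications) {
    HepProgram program = new HepProgramBuilder()
        .addMatchLimit(maxRuleApplications)                 // default is MATCH_UNTIL_FIXPOINT
        .addRuleInstance(CoreRules.FILTER_PROJECT_TRANSPOSE)
        .build();
    return new HepPlanner(program);
  }
}
{code}
With the default unlimited match count, two rules that keep rewriting each other's
output can grow a filter like the one quoted above without bound, which is the OOM
pattern reported here.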



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-26028) Upgrade pac4j-saml-opensamlv3 to 4.5.5 due to CVE

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26028?focusedWorklogId=742722&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742722
 ]

ASF GitHub Bot logged work on HIVE-26028:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 19:37
Start Date: 16/Mar/22 19:37
Worklog Time Spent: 10m 
  Work Description: nrg4878 commented on pull request #3096:
URL: https://github.com/apache/hive/pull/3096#issuecomment-1069541758


   The second re-run succeeded. The force-push re-triggered the tests a third 
time; I cancelled that run because there were no new changes. I am committing even 
though the PR shows test failures, since the tests actually passed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742722)
Time Spent: 20m  (was: 10m)

> Upgrade pac4j-saml-opensamlv3 to 4.5.5 due to CVE
> -
>
> Key: HIVE-26028
> URL: https://issues.apache.org/jira/browse/HIVE-26028
> Project: Hive
>  Issue Type: Improvement
>Reporter: Yu-Wen Lai
>Assignee: Yu-Wen Lai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As title.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25575) Add support for JWT authentication in HTTP mode

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25575?focusedWorklogId=742708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742708
 ]

ASF GitHub Bot logged work on HIVE-25575:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 19:28
Start Date: 16/Mar/22 19:28
Worklog Time Spent: 10m 
  Work Description: hsnusonic commented on a change in pull request #3006:
URL: https://github.com/apache/hive/pull/3006#discussion_r828373075



##
File path: 
itests/hive-unit/src/test/java/org/apache/hive/service/auth/jwt/TestHttpJwtAuthentication.java
##
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hive.service.auth.jwt;
+
+import com.github.tomakehurst.wiremock.junit.WireMockRule;
+import com.nimbusds.jose.JWSAlgorithm;
+import com.nimbusds.jose.JWSHeader;
+import com.nimbusds.jose.JWSSigner;
+import com.nimbusds.jose.crypto.RSASSASigner;
+import com.nimbusds.jose.jwk.RSAKey;
+import com.nimbusds.jwt.JWTClaimsSet;
+import com.nimbusds.jwt.SignedJWT;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hive.jdbc.HiveConnection;
+import org.apache.hive.jdbc.Utils;
+import org.apache.hive.jdbc.miniHS2.MiniHS2;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+
+import java.io.File;
+import java.lang.reflect.Field;
+import java.lang.reflect.Modifier;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import static com.github.tomakehurst.wiremock.client.WireMock.get;
+import static com.github.tomakehurst.wiremock.client.WireMock.ok;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+
+public class TestHttpJwtAuthentication {
+  private static final Map<String, String> DEFAULTS = new HashMap<>(System.getenv());
+  private static Map<String, String> envMap;
+
+  private static final File jwtAuthorizedKeyFile =
+  new File("src/test/resources/auth.jwt/jwt-authorized-key.json");
+  private static final File jwtUnauthorizedKeyFile =
+  new File("src/test/resources/auth.jwt/jwt-unauthorized-key.json");
+  private static final File jwtVerificationJWKSFile =
+  new File("src/test/resources/auth.jwt/jwt-verification-jwks.json");
+
+  public static final String USER_1 = "USER_1";
+
+  private static MiniHS2 miniHS2;
+
+  private static final int MOCK_JWKS_SERVER_PORT = 8089;
+  @ClassRule
+  public static final WireMockRule MOCK_JWKS_SERVER = new 
WireMockRule(MOCK_JWKS_SERVER_PORT);
+
+  @BeforeClass
+  public static void makeEnvModifiable() throws Exception {

Review comment:
   We can't set an environment variable inside a Java program directly. Ref: 
https://stackoverflow.com/questions/318239/how-do-i-set-environment-variables-from-java
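
As an aside on the test above: a minimal sketch of how such a test could mint a
bearer token with the nimbus-jose-jwt classes it already imports. The helper name,
JWK string and claim values are hypothetical; the actual test code is in the PR.
{code:java}
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.JWSSigner;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jose.jwk.RSAKey;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

import java.util.Date;
import java.util.concurrent.TimeUnit;

public final class JwtTokenSketch {
  private JwtTokenSketch() {}

  /** Signs a short-lived JWT for the given user with an RSA private key read from a JWK document. */
  public static String signedTokenFor(String user, String jwkJson) throws Exception {
    RSAKey rsaKey = RSAKey.parse(jwkJson);             // e.g. the contents of jwt-authorized-key.json
    JWSSigner signer = new RSASSASigner(rsaKey);
    JWTClaimsSet claims = new JWTClaimsSet.Builder()
        .subject(user)
        .issueTime(new Date())
        .expirationTime(new Date(System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(5)))
        .build();
    SignedJWT jwt = new SignedJWT(
        new JWSHeader.Builder(JWSAlgorithm.RS256).keyID(rsaKey.getKeyID()).build(), claims);
    jwt.sign(signer);
    return jwt.serialize();                            // value for the Authorization: Bearer header
  }
}
{code}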




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742708)
Time Spent: 3h 40m  (was: 3.5h)

> Add support for JWT authentication in HTTP mode
> ---
>
> Key: HIVE-25575
> URL: https://issues.apache.org/jira/browse/HIVE-25575
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2, JDBC
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Yu-Wen Lai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining 

[jira] [Commented] (HIVE-25540) Enable batch update of column stats only for MySql and Postgres

2022-03-16 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17507813#comment-17507813
 ] 

Peter Vary commented on HIVE-25540:
---

[~zabetak], [~maheshk114]: I have a fix for the issue above; see HIVE-26040. 
Could you please check whether the commands below run successfully, which would 
confirm that the fix is enough to close this Jira?
{code}
mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
-Dtest.metastore.db=mssql
mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
-Dtest.metastore.db=oracle
mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
-Dtest.metastore.db=postgres
mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
-Dtest.metastore.db=derby
mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
-Dtest.metastore.db=mysql
{code}

> Enable batch update of column stats only for MySql and Postgres 
> 
>
> Key: HIVE-25540
> URL: https://issues.apache.org/jira/browse/HIVE-25540
> Project: Hive
>  Issue Type: Sub-task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The batch update of partition column stats using direct SQL is tested only 
> for MySQL and Postgres.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-26041) Fix wrong type supplied for getLatestCommittedCompaction

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26041?focusedWorklogId=742695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742695
 ]

ASF GitHub Bot logged work on HIVE-26041:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 18:56
Start Date: 16/Mar/22 18:56
Worklog Time Spent: 10m 
  Work Description: hsnusonic opened a new pull request #3113:
URL: https://github.com/apache/hive/pull/3113


   We should use preparedStatement.setLong() to set the correct type for CC_ID.
   
   
   
   ### What changes were proposed in this pull request?
   
   To fix a type error in prepared statement.
   
   ### Why are the changes needed?
   
   Most databases need correct type casting. I tested on PostgreSQL, and the 
SQL query for retrieving the latest compaction failed.
   
   ### Does this PR introduce _any_ user-facing change?
   
   No
   
   ### How was this patch tested?
   
   Added one more test case for compaction id filter.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742695)
Remaining Estimate: 0h
Time Spent: 10m

> Fix wrong type supplied for getLatestCommittedCompaction
> 
>
> Key: HIVE-26041
> URL: https://issues.apache.org/jira/browse/HIVE-26041
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Yu-Wen Lai
>Assignee: Yu-Wen Lai
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In HIVE-25753, we filter compactions by CC_ID, but I used the string type as the 
> parameter for the prepared statement. That causes a type error on some 
> databases (it failed at least on PostgreSQL).
> To correctly handle the filter, we should use 
> {code:java}
> preparedStatement.setLong(...){code}
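
A minimal JDBC sketch of the point above; the query text is a simplified stand-in
for the real metastore query, and only the parameter binding matters here.
{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class CompactionFilterSketch {
  private CompactionFilterSketch() {}

  /** Returns the highest committed compaction id above the given one, or -1 if none. */
  public static long latestCommittedCompaction(Connection conn, long minCompactionId)
      throws SQLException {
    String sql = "SELECT MAX(\"CC_ID\") FROM \"COMPLETED_COMPACTIONS\" WHERE \"CC_ID\" > ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, minCompactionId);      // CC_ID is numeric, so bind a long
      // ps.setString(1, String.valueOf(minCompactionId));  // fails on PostgreSQL: bigint vs varchar
      try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
          long id = rs.getLong(1);
          return rs.wasNull() ? -1L : id;
        }
        return -1L;
      }
    }
  }
}
{code}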



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HIVE-26041) Fix wrong type supplied for getLatestCommittedCompaction

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26041:
--
Labels: pull-request-available  (was: )

> Fix wrong type supplied for getLatestCommittedCompaction
> 
>
> Key: HIVE-26041
> URL: https://issues.apache.org/jira/browse/HIVE-26041
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Yu-Wen Lai
>Assignee: Yu-Wen Lai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In HIVE-25753, we filter compactions by CC_ID, but I used the string type as the 
> parameter for the prepared statement. That causes a type error on some 
> databases (it failed at least on PostgreSQL).
> To correctly handle the filter, we should use 
> {code:java}
> preparedStatement.setLong(...){code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (HIVE-26040) Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-26040 started by Peter Vary.
-
> Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql
> --
>
> Key: HIVE-26040
> URL: https://issues.apache.org/jira/browse/HIVE-26040
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
> -Dtest.metastore.db=mssql
> {code}
> Fails with
> {code}
> 2022-03-15T07:57:17,078 ERROR [2b933b88-6083-4750-b151-2d2c7e04ccce main] 
> metastore.DirectSqlUpdateStat: Unable to 
> getNextCSIdForMPartitionColumnStatistics
> com.microsoft.sqlserver.jdbc.SQLServerException: Line 1: FOR UPDATE clause 
> allowed only for DECLARE CURSOR.
> at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:845)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:752)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151) 
> ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:219)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:199)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:654)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108) 
> ~[HikariCP-2.6.1.jar:?]
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
>  ~[HikariCP-2.6.1.jar:?]
> at 
> org.apache.hadoop.hive.metastore.DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics(DirectSqlUpdateStat.java:676)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.updatePartitionColumnStatisticsBatch(MetaStoreDirectSql.java:2966)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.updatePartitionColumnStatisticsInBatch(ObjectStore.java:9849)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> com.sun.proxy.$Proxy60.updatePartitionColumnStatisticsInBatch(Unknown Source) 
> [?:?]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.updatePartitionColStatsForOneBatch(HMSHandler.java:7060)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.updatePartitionColStatsInBatch(HMSHandler.java:7113)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.set_aggr_stats_for(HMSHandler.java:9137)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> 

[jira] [Work logged] (HIVE-26040) Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26040?focusedWorklogId=742681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742681
 ]

ASF GitHub Bot logged work on HIVE-26040:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 18:52
Start Date: 16/Mar/22 18:52
Worklog Time Spent: 10m 
  Work Description: pvary opened a new pull request #3112:
URL: https://github.com/apache/hive/pull/3112


   ### What changes were proposed in this pull request?
   Fix directsql `FOR UPDATE` to work with all of the databases
   
   ### Why are the changes needed?
   `mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
-Dtest.metastore.db=mssql` was failing
   
   ### Does this PR introduce _any_ user-facing change?
   Fixing the issue
   
   ### How was this patch tested?
   By running the unit test above
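
For illustration only, not necessarily what this patch does: the usual way to make
this kind of row locking portable is to pick the locking syntax per database
product, because SQL Server rejects SELECT ... FOR UPDATE outside a cursor
declaration but accepts table hints. Table and column names follow the metastore's
SEQUENCE_TABLE; the class and method names are hypothetical, and per-database
identifier quoting is left out of the sketch.
{code:java}
public final class NextCsIdQuerySketch {
  private NextCsIdQuerySketch() {}

  /** Builds the row-locking query used to reserve the next statistics id. */
  public static String nextCsIdQuery(String dbProductName) {
    if (dbProductName != null && dbProductName.toUpperCase().contains("SQL SERVER")) {
      // SQL Server: lock the row with table hints instead of FOR UPDATE
      return "SELECT NEXT_VAL FROM SEQUENCE_TABLE WITH (UPDLOCK) WHERE SEQUENCE_NAME = ?";
    }
    // MySQL, PostgreSQL, Oracle and Derby accept the standard FOR UPDATE clause
    return "SELECT NEXT_VAL FROM SEQUENCE_TABLE WHERE SEQUENCE_NAME = ? FOR UPDATE";
  }
}
{code}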


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742681)
Remaining Estimate: 0h
Time Spent: 10m

> Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql
> --
>
> Key: HIVE-26040
> URL: https://issues.apache.org/jira/browse/HIVE-26040
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
> -Dtest.metastore.db=mssql
> {code}
> Fails with
> {code}
> 2022-03-15T07:57:17,078 ERROR [2b933b88-6083-4750-b151-2d2c7e04ccce main] 
> metastore.DirectSqlUpdateStat: Unable to 
> getNextCSIdForMPartitionColumnStatistics
> com.microsoft.sqlserver.jdbc.SQLServerException: Line 1: FOR UPDATE clause 
> allowed only for DECLARE CURSOR.
> at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:845)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:752)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151) 
> ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:219)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:199)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:654)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108) 
> ~[HikariCP-2.6.1.jar:?]
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
>  ~[HikariCP-2.6.1.jar:?]
> at 
> org.apache.hadoop.hive.metastore.DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics(DirectSqlUpdateStat.java:676)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.updatePartitionColumnStatisticsBatch(MetaStoreDirectSql.java:2966)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.updatePartitionColumnStatisticsInBatch(ObjectStore.java:9849)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> 

[jira] [Updated] (HIVE-26040) Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-26040:
--
Labels: pull-request-available  (was: )

> Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql
> --
>
> Key: HIVE-26040
> URL: https://issues.apache.org/jira/browse/HIVE-26040
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
> -Dtest.metastore.db=mssql
> {code}
> Fails with
> {code}
> 2022-03-15T07:57:17,078 ERROR [2b933b88-6083-4750-b151-2d2c7e04ccce main] 
> metastore.DirectSqlUpdateStat: Unable to 
> getNextCSIdForMPartitionColumnStatistics
> com.microsoft.sqlserver.jdbc.SQLServerException: Line 1: FOR UPDATE clause 
> allowed only for DECLARE CURSOR.
> at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:845)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:752)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151) 
> ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:219)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:199)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:654)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108) 
> ~[HikariCP-2.6.1.jar:?]
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
>  ~[HikariCP-2.6.1.jar:?]
> at 
> org.apache.hadoop.hive.metastore.DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics(DirectSqlUpdateStat.java:676)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.updatePartitionColumnStatisticsBatch(MetaStoreDirectSql.java:2966)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.updatePartitionColumnStatisticsInBatch(ObjectStore.java:9849)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> com.sun.proxy.$Proxy60.updatePartitionColumnStatisticsInBatch(Unknown Source) 
> [?:?]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.updatePartitionColStatsForOneBatch(HMSHandler.java:7060)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.updatePartitionColStatsInBatch(HMSHandler.java:7113)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.set_aggr_stats_for(HMSHandler.java:9137)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> 

[jira] [Commented] (HIVE-25540) Enable batch update of column stats only for MySql and Postgres

2022-03-16 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17507810#comment-17507810
 ] 

Peter Vary commented on HIVE-25540:
---

Created a Jira to fix the issue mentioned above: HIVE-26040

> Enable batch update of column stats only for MySql and Postgres 
> 
>
> Key: HIVE-25540
> URL: https://issues.apache.org/jira/browse/HIVE-25540
> Project: Hive
>  Issue Type: Sub-task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The batch update of partition column stats using direct SQL is tested only 
> for MySQL and Postgres.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (HIVE-26041) Fix wrong type supplied for getLatestCommittedCompaction

2022-03-16 Thread Yu-Wen Lai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Wen Lai reassigned HIVE-26041:
-


> Fix wrong type supplied for getLatestCommittedCompaction
> 
>
> Key: HIVE-26041
> URL: https://issues.apache.org/jira/browse/HIVE-26041
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Yu-Wen Lai
>Assignee: Yu-Wen Lai
>Priority: Major
>
> In HIVE-25753, we filter compactions by CC_ID, but I used the string type as the 
> parameter for the prepared statement. That causes a type error on some 
> databases (it failed at least on PostgreSQL).
> To correctly handle the filter, we should use 
> {code:java}
> preparedStatement.setLong(...){code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-26025) Remove IMetaStoreClient#listPartitionNames which is not used

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26025?focusedWorklogId=742660&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742660
 ]

ASF GitHub Bot logged work on HIVE-26025:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 18:44
Start Date: 16/Mar/22 18:44
Worklog Time Spent: 10m 
  Work Description: pvary merged pull request #3093:
URL: https://github.com/apache/hive/pull/3093


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742660)
Time Spent: 20m  (was: 10m)

> Remove IMetaStoreClient#listPartitionNames which is not used
> 
>
> Key: HIVE-26025
> URL: https://issues.apache.org/jira/browse/HIVE-26025
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the following method is not used and not yet released:
> {code:java}
> List<String> listPartitionNames(String catName, String dbName, String tblName,
> String defaultPartName, byte[] exprBytes, String order, short maxParts)
> throws MetaException, TException, NoSuchObjectException; {code}
> We should not release unused methods



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (HIVE-26040) Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-26040:
-


> Fix DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics for mssql
> --
>
> Key: HIVE-26040
> URL: https://issues.apache.org/jira/browse/HIVE-26040
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Fix For: 4.0.0-alpha-1
>
>
> {code}
> mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=list_bucket_dml_9.q 
> -Dtest.metastore.db=mssql
> {code}
> Fails with
> {code}
> 2022-03-15T07:57:17,078 ERROR [2b933b88-6083-4750-b151-2d2c7e04ccce main] 
> metastore.DirectSqlUpdateStat: Unable to 
> getNextCSIdForMPartitionColumnStatistics
> com.microsoft.sqlserver.jdbc.SQLServerException: Line 1: FOR UPDATE clause 
> allowed only for DECLARE CURSOR.
> at 
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1535)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:845)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:752)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151) 
> ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:219)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:199)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:654)
>  ~[mssql-jdbc-6.2.1.jre8.jar:?]
> at 
> com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108) 
> ~[HikariCP-2.6.1.jar:?]
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
>  ~[HikariCP-2.6.1.jar:?]
> at 
> org.apache.hadoop.hive.metastore.DirectSqlUpdateStat.getNextCSIdForMPartitionColumnStatistics(DirectSqlUpdateStat.java:676)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.updatePartitionColumnStatisticsBatch(MetaStoreDirectSql.java:2966)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.updatePartitionColumnStatisticsInBatch(ObjectStore.java:9849)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> com.sun.proxy.$Proxy60.updatePartitionColumnStatisticsInBatch(Unknown Source) 
> [?:?]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.updatePartitionColStatsForOneBatch(HMSHandler.java:7060)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.updatePartitionColStatsInBatch(HMSHandler.java:7113)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.HMSHandler.set_aggr_stats_for(HMSHandler.java:9137)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_261]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_261]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:146)
>  

[jira] [Resolved] (HIVE-26025) Remove IMetaStoreClient#listPartitionNames which is not used

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary resolved HIVE-26025.
---
Resolution: Fixed

Pushed to master.
Thanks for the PR [~dengzh]!

> Remove IMetaStoreClient#listPartitionNames which is not used
> 
>
> Key: HIVE-26025
> URL: https://issues.apache.org/jira/browse/HIVE-26025
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the following method is not used and not yet released:
> {code:java}
> List<String> listPartitionNames(String catName, String dbName, String tblName,
> String defaultPartName, byte[] exprBytes, String order, short maxParts)
> throws MetaException, TException, NoSuchObjectException; {code}
> We should not release unused methods



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?focusedWorklogId=742423&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742423
 ]

ASF GitHub Bot logged work on HIVE-25994:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 15:03
Start Date: 16/Mar/22 15:03
Worklog Time Spent: 10m 
  Work Description: asolimando commented on a change in pull request #3095:
URL: https://github.com/apache/hive/pull/3095#discussion_r828101962



##
File path: ql/pom.xml
##
@@ -1072,6 +1072,7 @@
   
 
   
+  <include>org.antlr:antlr-runtime</include>

Review comment:
   It's the Tez worker throwing the ClassNotFoundException; I don't think 
Hive plays any role in that. From what you said earlier, it's also clear 
that nothing has changed on the Hive side regarding the shipping of the antlr 
classes. 
   
   I have checked the jars for Tez 0.9.1 and 0.10.0 (in all the locations), and 
none of them has the antlr classes either, so I am a bit puzzled: either they were 
not needed before, or there is something trickier going on, like classloader 
issues. I am not sure where to go from here, but I understand your concern.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742423)
Time Spent: 1h 10m  (was: 1h)

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:174)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:168)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 

[jira] [Work logged] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?focusedWorklogId=742392&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742392
 ]

ASF GitHub Bot logged work on HIVE-25994:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 14:26
Start Date: 16/Mar/22 14:26
Worklog Time Spent: 10m 
  Work Description: zabetak commented on a change in pull request #3095:
URL: https://github.com/apache/hive/pull/3095#discussion_r828064727



##
File path: ql/pom.xml
##
@@ -1072,6 +1072,7 @@
   
 
   
+  <include>org.antlr:antlr-runtime</include>

Review comment:
   Adding/removing things from the shaded jar can have a big impact on 
projects using Hive, which is why I am insisting a bit more on this. Have you 
checked whether it is an issue with the classloader (e.g., HIVE-21971)?
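
A generic JVM diagnostic, not specific to this PR, that helps distinguish a
genuinely missing jar from a classloader visibility problem is to print where the
class actually comes from:
{code:java}
import org.antlr.runtime.tree.CommonTree;

public final class WhereIsCommonTree {
  public static void main(String[] args) {
    // Prints the jar the class was loaded from and the classloader that loaded it.
    System.out.println(CommonTree.class.getName()
        + " loaded from " + CommonTree.class.getProtectionDomain().getCodeSource().getLocation()
        + " by " + CommonTree.class.getClassLoader());
  }
}
{code}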




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742392)
Time Spent: 1h  (was: 50m)

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:174)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:168)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NoClassDefFoundError: org/antlr/runtime/tree/CommonTree
> Serialization trace:
> tableSpec (org.apache.hadoop.hive.ql.metadata.Table)
> tableMetadata (org.apache.hadoop.hive.ql.plan.TableScanDesc)
> conf (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> 

[jira] [Work logged] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?focusedWorklogId=742386&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742386
 ]

ASF GitHub Bot logged work on HIVE-25994:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 14:19
Start Date: 16/Mar/22 14:19
Worklog Time Spent: 10m 
  Work Description: zabetak commented on a change in pull request #3095:
URL: https://github.com/apache/hive/pull/3095#discussion_r828058065



##
File path: ql/pom.xml
##
@@ -1072,6 +1072,7 @@
   
 
   
+  org.antlr:antlr-runtime

Review comment:
   I ran `for f in *.jar; do echo $f; jar tf $f | grep CommonTree; done` in 
the lib folder (for both 3.1.2 and master) and saw the following:
   **Hive 3.1.2**
   ```
   antlr-runtime-3.5.2.jar
   org/antlr/runtime/tree/CommonTree.class
   org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/antlr/runtime/tree/CommonTreeNodeStream.class
   ...
   hive-druid-handler-3.1.2.jar
   org/antlr/runtime/tree/CommonTree.class
   org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/antlr/runtime/tree/CommonTreeNodeStream.class
   org/skife/jdbi/org/antlr/runtime/tree/CommonTree.class
   org/skife/jdbi/org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/skife/jdbi/org/antlr/runtime/tree/CommonTreeNodeStream.class
   ```
   **master**
   ```
   antlr-runtime-3.5.2.jar
   org/antlr/runtime/tree/CommonTree.class
   org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/antlr/runtime/tree/CommonTreeNodeStream.class
   ...
   hive-druid-handler-4.0.0-SNAPSHOT.jar
   org/antlr/runtime/tree/CommonTree.class
   org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/antlr/runtime/tree/CommonTreeNodeStream.class
   org/skife/jdbi/org/antlr/runtime/tree/CommonTree.class
   org/skife/jdbi/org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/skife/jdbi/org/antlr/runtime/tree/CommonTreeNodeStream.class
   ...
   hive-exec-4.0.0-SNAPSHOT.jar
   org/antlr/runtime/tree/CommonTree.class
   org/antlr/runtime/tree/CommonTreeAdaptor.class
   org/antlr/runtime/tree/CommonTreeNodeStream.class
   ```
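   To confirm which of these jars actually serves the class at runtime (rather than merely which jars contain it), one option is to ask the classloader for the class resource. A minimal sketch, assuming a plain JVM with the relevant lib jars on its classpath; the class name `WhichJar` is illustrative and not part of the patch:
   ```java
   // Locate the jar that serves CommonTree on the current classpath.
   public class WhichJar {
       public static void main(String[] args) {
           String resource = "org/antlr/runtime/tree/CommonTree.class";
           java.net.URL url = WhichJar.class.getClassLoader().getResource(resource);
           // Prints e.g. jar:file:/.../antlr-runtime-3.5.2.jar!/org/antlr/runtime/tree/CommonTree.class,
           // or null if no jar visible to this classloader exposes the class
           // (which is the NoClassDefFoundError situation from the bug report).
           System.out.println(url);
       }
   }
   ```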




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742386)
Time Spent: 50m  (was: 40m)

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> 

[jira] [Work started] (HIVE-26002) Preparing for 4.0.0-alpha-1 development

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-26002 started by Peter Vary.
-
> Preparing for 4.0.0-alpha-1 development
> ---
>
> Key: HIVE-26002
> URL: https://issues.apache.org/jira/browse/HIVE-26002
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> For the release we need to create the appropriate sql scripts for HMS db 
> initialization



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HIVE-26002) Preparing for 4.0.0-alpha-1 development

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-26002:
--
Summary: Preparing for 4.0.0-alpha-1 development  (was: Update next version 
to 4.0.0-alpha-1)

> Preparing for 4.0.0-alpha-1 development
> ---
>
> Key: HIVE-26002
> URL: https://issues.apache.org/jira/browse/HIVE-26002
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> For the release we need to create the appropriate sql scripts for HMS db 
> initialization



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?focusedWorklogId=742353&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742353
 ]

ASF GitHub Bot logged work on HIVE-25994:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 13:52
Start Date: 16/Mar/22 13:52
Worklog Time Spent: 10m 
  Work Description: asolimando commented on a change in pull request #3095:
URL: https://github.com/apache/hive/pull/3095#discussion_r828033152



##
File path: ql/pom.xml
##
@@ -1072,6 +1072,7 @@
   
 
   
+  org.antlr:antlr-runtime

Review comment:
   I did not figure this out; however, we discussed it with @abstractdog 
offline and agreed that it wasn't on the Tez side, so there were no other 
options left. A reasonable explanation would be that antlr's classes were 
shipped by one of Hive's dependencies, which changed behaviour.
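   One way to test that explanation is to print the code source of the loaded class, which names the jar that actually shipped it. A minimal sketch, assuming the class is reachable from the current classloader; the class name `AntlrOrigin` is hypothetical:
   ```java
   public class AntlrOrigin {
       public static void main(String[] args) throws Exception {
           Class<?> c = Class.forName("org.antlr.runtime.tree.CommonTree");
           java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
           // A null code source indicates a bootstrap/JDK-loaded class; otherwise the
           // URL points at the dependency jar that shipped the org.antlr.runtime classes.
           System.out.println(src == null ? "bootstrap" : src.getLocation());
       }
   }
   ```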




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742353)
Time Spent: 40m  (was: 0.5h)

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:174)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:168)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NoClassDefFoundError: org/antlr/runtime/tree/CommonTree
> Serialization trace:
> tableSpec (org.apache.hadoop.hive.ql.metadata.Table)
> tableMetadata 

[jira] [Work logged] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?focusedWorklogId=742344&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742344
 ]

ASF GitHub Bot logged work on HIVE-25994:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 13:39
Start Date: 16/Mar/22 13:39
Worklog Time Spent: 10m 
  Work Description: zabetak commented on a change in pull request #3095:
URL: https://github.com/apache/hive/pull/3095#discussion_r828021152



##
File path: ql/pom.xml
##
@@ -1072,6 +1072,7 @@
   
 
   
+  org.antlr:antlr-runtime

Review comment:
   Those antlr classes are not in the hive-exec jar in previous Hive versions. 
For instance, `jar tf hive-exec-3.1.2.jar | grep antlr` does not return 
anything. Any idea why we need them now?
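   For reference, the same check can be done without shelling out to `jar tf`; a minimal sketch in Java, assuming the path to a locally built hive-exec jar is passed as the first argument (the class name `GrepJar` is hypothetical):
   ```java
   import java.util.jar.JarEntry;
   import java.util.jar.JarFile;

   public class GrepJar {
       public static void main(String[] args) throws Exception {
           // args[0]: path to the jar under inspection, e.g. hive-exec-3.1.2.jar.
           try (JarFile jar = new JarFile(args[0])) {
               jar.stream()
                  .map(JarEntry::getName)
                  .filter(name -> name.contains("antlr"))
                  .forEach(System.out::println); // empty output matches the 3.1.2 observation
           }
       }
   }
   ```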




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742344)
Time Spent: 0.5h  (was: 20m)

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:174)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:168)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NoClassDefFoundError: org/antlr/runtime/tree/CommonTree
> Serialization trace:
> tableSpec (org.apache.hadoop.hive.ql.metadata.Table)
> tableMetadata (org.apache.hadoop.hive.ql.plan.TableScanDesc)
> conf (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork 

[jira] [Resolved] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary resolved HIVE-25994.
---
Resolution: Fixed

Pushed to master.
Thanks [~asolimando] and [~abstractdog]!

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:174)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:168)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NoClassDefFoundError: org/antlr/runtime/tree/CommonTree
> Serialization trace:
> tableSpec (org.apache.hadoop.hive.ql.metadata.Table)
> tableMetadata (org.apache.hadoop.hive.ql.plan.TableScanDesc)
> conf (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ReflectField.read(ReflectField.java:147)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:124)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:729)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:216)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ReflectField.read(ReflectField.java:125)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:124)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:729)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:216)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ReflectField.read(ReflectField.java:125)
> at 
> 

[jira] [Work logged] (HIVE-25994) Analyze table runs into ClassNotFoundException-s

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25994?focusedWorklogId=742334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742334
 ]

ASF GitHub Bot logged work on HIVE-25994:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 13:32
Start Date: 16/Mar/22 13:32
Worklog Time Spent: 10m 
  Work Description: pvary merged pull request #3095:
URL: https://github.com/apache/hive/pull/3095


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742334)
Time Spent: 20m  (was: 10m)

> Analyze table runs into ClassNotFoundException-s
> 
>
> Key: HIVE-25994
> URL: https://issues.apache.org/jira/browse/HIVE-25994
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Alessandro Solimando
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> any nightly release can be used to reproduce this:
> {code}
> create table t (a integer); insert into t values (1) ; analyze table t 
> compute statistics for columns;
> {code}
> results in
> {code}
> DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, 
> vertexId=vertex_164683
> 1571866_0006_2_00, diagnostics=[Vertex vertex_1646831571866_0006_2_00 [Map 1] 
> killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test
> initializer failed, vertex=vertex_1646831571866_0006_2_00 [Map 1], 
> java.lang.RuntimeException: Failed to load plan: file:/tmp/dev/eebb53b4-db79
> -48b9-b78e-cd71fbe1b9d3/hive_2022-03-09_19-00-08_579_8816359375110151189-14/dev/_tez_scratch_dir/55415d69-07cf-45c3-8c57-fa607633a580/map.xml
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:535)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.prepare(HiveSplitGenerator.java:152)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:164)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:193)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:174)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:168)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.lang.NoClassDefFoundError: org/antlr/runtime/tree/CommonTree
> Serialization trace:
> tableSpec (org.apache.hadoop.hive.ql.metadata.Table)
> tableMetadata (org.apache.hadoop.hive.ql.plan.TableScanDesc)
> conf (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ReflectField.read(ReflectField.java:147)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:124)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:729)
> at 
> 

[jira] [Work logged] (HIVE-26002) Update next version to 4.0.0-alpha-1

2022-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26002?focusedWorklogId=742108&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-742108
 ]

ASF GitHub Bot logged work on HIVE-26002:
-

Author: ASF GitHub Bot
Created on: 16/Mar/22 08:39
Start Date: 16/Mar/22 08:39
Worklog Time Spent: 10m 
  Work Description: pvary commented on a change in pull request #3081:
URL: https://github.com/apache/hive/pull/3081#discussion_r827750423



##
File path: standalone-metastore/pom.xml
##
@@ -31,7 +31,7 @@
   
   
 4.0.0-SNAPSHOT

Review comment:
   Yeah, I needed to fix it in several more places.
   I hope I am finished for now.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 742108)
Time Spent: 1h 20m  (was: 1h 10m)

> Update next version to 4.0.0-alpha-1
> 
>
> Key: HIVE-26002
> URL: https://issues.apache.org/jira/browse/HIVE-26002
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> For the release we need to create the appropriate sql scripts for HMS db 
> initialization



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HIVE-26002) Update next version to 4.0.0-alpha-1

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-26002:
--
Summary: Update next version to 4.0.0-alpha-1  (was: Create db scripts for 
4.0.0-alpha-1)

> Update next version to 4.0.0-alpha-1
> 
>
> Key: HIVE-26002
> URL: https://issues.apache.org/jira/browse/HIVE-26002
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For the release we need to create the appropriate sql scripts for HMS db 
> initialization



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (HIVE-26039) Fix TestSessionManagerMetrics.testActiveSessionMetrics

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-26039:
-

Assignee: (was: Peter Vary)

> Fix TestSessionManagerMetrics.testActiveSessionMetrics
> --
>
> Key: HIVE-26039
> URL: https://issues.apache.org/jira/browse/HIVE-26039
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Priority: Major
>
> The test is flaky.
> See: http://ci.hive.apache.org/job/hive-flaky-check/540/



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (HIVE-26039) Fix TestSessionManagerMetrics.testActiveSessionMetrics

2022-03-16 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-26039:
-


> Fix TestSessionManagerMetrics.testActiveSessionMetrics
> --
>
> Key: HIVE-26039
> URL: https://issues.apache.org/jira/browse/HIVE-26039
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>
> The test is flaky.
> See: http://ci.hive.apache.org/job/hive-flaky-check/540/



--
This message was sent by Atlassian Jira
(v8.20.1#820001)