[jira] [Created] (HIVE-26124) Upgrade HBase from 2.0.0-alpha4 to 2.0.0
Peter Vary created HIVE-26124:
-------------------------------
Summary: Upgrade HBase from 2.0.0-alpha4 to 2.0.0
Key: HIVE-26124
URL: https://issues.apache.org/jira/browse/HIVE-26124
Project: Hive
Issue Type: Task
Reporter: Peter Vary

We should move from the alpha version to the stable one.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
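In practice the bump amounts to changing one version property in the root pom.xml. A minimal sketch, assuming the property is named `hbase.version` following Hive's usual pom conventions (verify the actual name in the tree before patching):

```xml
<!-- Sketch of the intended change in the root pom.xml.
     The property name hbase.version is an assumption based on
     common Hive/Maven conventions, not taken from the ticket. -->
<properties>
  <hbase.version>2.0.0</hbase.version> <!-- was 2.0.0-alpha4 -->
</properties>
```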
Re: [VOTE] Apache Hive 3.1.3 Release Candidate 3
+1 (binding) Downloaded and ran create, insert, and a simple query on Postgres. Verified checksums. Built from source.

Thanks,
Szehon

On Mon, Apr 4, 2022 at 7:56 AM Naveen Gangam wrote:
> [No new commits from RC2]. Just cleaned up the apache-hive-3.1.3-src.tar.gz archive.
>
> Apache Hive 3.1.3 Release Candidate 3 is available here:
> https://people.apache.org/~ngangam/apache-hive-3.1.3-rc-3
>
> The checksums are these:
>
> - 0c9b6a6359a7341b6029cc9347435ee7b379f93846f779d710b13f795b54bb16 apache-hive-3.1.3-bin.tar.gz
> - b5e17f664afbb5ac702f0de0a31363caf58e067b19229df63da01c38430f6fcc apache-hive-3.1.3-src.tar.gz
>
> Maven artifacts are available here:
> https://repository.apache.org/content/repositories/orgapachehive-1116
>
> The tag release-3.1.3-rc3 has been applied to the source for this release in github, you can see it at
> https://github.com/apache/hive/tree/release-3.1.3-rc2
>
> The git commit hash is: 4df4d75bf1e16fe0af75aad0b4179c34c07fc975
> https://github.com/apache/hive/commit/4df4d75bf1e16fe0af75aad0b4179c34c07fc975
>
> Voting will conclude in 72 hours.
>
> Hive PMC Members: Please test and vote.
>
> Thanks.
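For anyone else re-verifying the candidate, the checksum step can be scripted with `sha256sum -c`. A minimal sketch, demonstrated on a throwaway file; for the RC, replace the file name and hash with apache-hive-3.1.3-src.tar.gz and the value posted above:

```shell
# Sketch of the checksum-verification step of an RC vote, using a
# locally generated file so the commands actually run anywhere.
printf 'hello\n' > artifact.tar.gz
sha256sum artifact.tar.gz > artifact.tar.gz.sha256   # record expected hash
sha256sum -c artifact.tar.gz.sha256                  # prints "artifact.tar.gz: OK"
rm -f artifact.tar.gz artifact.tar.gz.sha256
```

For the real artifacts, the `.sha256` file would instead contain the hash string from the vote email followed by two spaces and the tarball name.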
Re: separated authN configuration for binary and http transports
Thanks Naveen, I missed that HIVE-25875 recently provided explicit multi-AuthN support.

The use case is simple: a secured cluster with hive.server2.transport.mode=all for both secured native clients (Kerberos) and external clients (user/pass):
- KERBEROS is needed for cluster-local (or near-cluster) clients.
- LDAP is needed for clients who can do only user/password authN.
- Binary transport is the default for both native and external clients.
- Http transport is only for Knox, which in that case is KERBEROS/http.

For binary transport we were using authentication=LDAP as this gave us both KERBEROS *and* LDAP. For http transport - which would be needed for Knox - authentication=LDAP is exclusive. I thought HIVE-25875 would do something similar for http transport, but based on the review comments it won't work, right? I mean authentication 'LDAP,KERBEROS'.

Regarding the separated configuration property: the new one could have a valid technical value - which would be the default value for now - meaning that if it's set, just use the value(s) from the other config property. That way it would be backward compatible, and in one of the later versions this could be deprecated and the technical 'fallback' value removed.

R,
Janos

Naveen Gangam wrote (on Mon, Mar 28, 2022, 23:11):
> Hi Janos,
> LDAP auth works in http mode as well.
>
> We have made some enhancements recently:
> HiveServer2 is now capable of supporting multiple authentication modes. For example: in http mode, you can set it to "LDAP,SAML".
> We have just added another auth mode (JWT) for http transport via HIVE-25575. So now, we can add "JWT" to this list as well.
>
> While we have checks to set it to something like "KERBEROS,SAML" (KERBEROS in binary mode and SAML in http mode only), I understand your general point about having the ability to use LDAP with binary mode and SAML in http mode.
> I am not certain this is a huge use case for us, but if there is general consensus that we need this, we could create a jira around this. My biggest concern with the separation of the properties is backward compatibility.
>
> Thank you
> Naveen
>
> On Mon, Mar 28, 2022 at 4:56 AM Stamatis Zampetakis wrote:
> > Hey Janos,
> >
> > You brought up an interesting subject.
> >
> > I haven't worked on the code around the authentication process, so I cannot foresee the impact on the codebase, but at a high level your idea seems reasonable to me.
> >
> > I would be favorable to such a change, but I would definitely like to see some tests and documentation come along from whoever pushes this forward.
> >
> > Best,
> > Stamatis
> >
> > On Fri, Mar 18, 2022, 6:40 PM Janos Kovacs wrote:
> > > Hi,
> > >
> > > I just found that while HS2 can do authentication with mixed methods - like Kerberos+LDAP - it only works with the binary protocol. With the transport set to http, the authentication basically works only against what is set by hive.server2.authentication. If e.g. it's set to LDAP, it doesn't try other methods, even if the client is sending the Negotiate headers in the request.
> > >
> > > While this is something that probably could be fixed, I was thinking about a quick(er) fix that might sound like just a workaround at first. But given that HS2 can now do both binary and http transports together (HIVE-5312), and that there are other authentication methods which support only one type of transport - like SAML, which works only with http transport - this might be a good enhancement by itself: split hive.server2.authentication between binary and http by introducing hive.server2.http.authentication.
> > >
> > > If the http transport could be configured independently from the binary transport, then HS2 could run in dual-transport mode, e.g. binary offering Kerberos+LDAP while http offering SAML (or any other independent method).
> > >
> > > Could you please share your thoughts on splitting the authN method between the two transport modes?
> > >
> > > Thanks, Janos
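The split proposed in this thread could look like the following hive-site.xml fragment. Note that hive.server2.http.authentication does NOT exist today; it is exactly the property being proposed here, and the values simply mirror the dual-transport example from the thread:

```xml
<!-- Sketch only: hive.server2.http.authentication is the proposed,
     not-yet-existing property from this thread. -->
<property>
  <name>hive.server2.transport.mode</name>
  <value>all</value> <!-- serve binary and http together (HIVE-5312) -->
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>LDAP,KERBEROS</value> <!-- binary transport: native + external clients -->
</property>
<property>
  <name>hive.server2.http.authentication</name>
  <value>SAML</value> <!-- http transport only, e.g. fronted by Knox -->
</property>
```

The backward-compatibility idea from the thread would map to a default "fallback" value for the new property, meaning "inherit whatever hive.server2.authentication says".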
[jira] [Created] (HIVE-26123) Introduce test coverage for sysdb for the different metastores
Alessandro Solimando created HIVE-26123:
----------------------------------------
Summary: Introduce test coverage for sysdb for the different metastores
Key: HIVE-26123
URL: https://issues.apache.org/jira/browse/HIVE-26123
Project: Hive
Issue Type: Test
Components: Testing Infrastructure
Affects Versions: 4.0.0-alpha-1
Reporter: Alessandro Solimando
Assignee: Alessandro Solimando
Fix For: 4.0.0-alpha-2

_sysdb_ provides a view over (some) metastore tables from Hive via JDBC queries. Existing tests run only against Derby, meaning that changes to the sysdb query mapping are not covered by CI. The present ticket aims at bridging this gap by introducing test coverage for the different supported metastores for sysdb.
[jira] [Created] (HIVE-26122) Factorize out common docker code between DatabaseRule and AbstractExternalDB
Alessandro Solimando created HIVE-26122:
----------------------------------------
Summary: Factorize out common docker code between DatabaseRule and AbstractExternalDB
Key: HIVE-26122
URL: https://issues.apache.org/jira/browse/HIVE-26122
Project: Hive
Issue Type: Improvement
Components: Testing Infrastructure
Affects Versions: 4.0.0-alpha-1
Reporter: Alessandro Solimando
Assignee: Alessandro Solimando
Fix For: 4.0.0-alpha-2

Currently there is a lot of shared code between the two classes, which could be extracted into a utility class called DockerUtils, since all of this code pertains to Docker.
[jira] [Created] (HIVE-26121) Hive transaction rollback should be thread-safe
Denys Kuzmenko created HIVE-26121:
----------------------------------
Summary: Hive transaction rollback should be thread-safe
Key: HIVE-26121
URL: https://issues.apache.org/jira/browse/HIVE-26121
Project: Hive
Issue Type: Task
Reporter: Denys Kuzmenko
[jira] [Created] (HIVE-26120) Beeline returns 0 when it could not open a connection to the HS2 server
MK created HIVE-26120:
----------------------
Summary: Beeline returns 0 when it could not open a connection to the HS2 server
Key: HIVE-26120
URL: https://issues.apache.org/jira/browse/HIVE-26120
Project: Hive
Issue Type: Bug
Components: Beeline
Reporter: MK

When executing:

beeline -u 'jdbc:hive2://bigdata-hs111:10003' -n 'etl' -p '**' -f /opt/project/DWD/SPD/xxx.sql

and bigdata-hs111 does not exist or cannot be reached, the command's return code is 0, NOT a non-zero value:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/programs/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.17.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/programs/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://bigdata-hs111:10003
2022-04-06T17:28:04,247 WARN [main] org.apache.hive.jdbc.Utils - Could not retrieve canonical hostname for bigdata-hs111
java.net.UnknownHostException: bigdata-hs111: Name or service not known
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[?:1.8.0_191]
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[?:1.8.0_191]
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[?:1.8.0_191]
    at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[?:1.8.0_191]
    at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_191]
    at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_191]
    at java.net.InetAddress.getByName(InetAddress.java:1077) ~[?:1.8.0_191]
    at org.apache.hive.jdbc.Utils.getCanonicalHostName(Utils.java:701) [hive-jdbc-3.1.2.jar:3.1.2]
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178) [hive-jdbc-3.1.2.jar:3.1.2]
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) [hive-jdbc-3.1.2.jar:3.1.2]
    at java.sql.DriverManager.getConnection(DriverManager.java:664) [?:1.8.0_191]
    at java.sql.DriverManager.getConnection(DriverManager.java:208) [?:1.8.0_191]
    at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:209) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.Commands.connect(Commands.java:1641) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.Commands.connect(Commands.java:1536) [hive-beeline-3.1.2.jar:3.1.2]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]
    at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:56) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1384) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1423) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.connectUsingArgs(BeeLine.java:900) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:795) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1048) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:538) [hive-beeline-3.1.2.jar:3.1.2]
    at org.apache.hive.beeline.BeeLine.main(BeeLine.java:520) [hive-beeline-3.1.2.jar:3.1.2]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.4.jar:?]
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.4.jar:?]
2022-04-06T17:28:04,335 WARN [main] org.apache.hive.jdbc.HiveConnection - Failed to connect to bigdata-hs111:10003
Could not open connection to the HS2 server. Please check the server URI and if the URI is correct, then ask the administrator to check the server status.
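Why this bug matters in practice: batch callers gate the next pipeline step on the client's exit status. A minimal sketch of the convention such callers rely on (the beeline invocation is the one from the report, shown only as a comment; the live lines just demonstrate POSIX exit-status handling):

```shell
# A typical caller that depends on the exit code, e.g.:
#   beeline -u 'jdbc:hive2://bigdata-hs111:10003' -n 'etl' -f /opt/project/DWD/SPD/xxx.sql
#   if [ $? -ne 0 ]; then echo "connect/query failed" >&2; exit 1; fi
# With the reported bug, that check never fires on a connection failure.

# The POSIX convention the caller assumes:
rc=0
false || rc=$?            # a failing command reports a non-zero status
echo "failure status: $rc"
true
echo "success status: $?" # a successful command reports 0
```

Under the reported behavior, beeline acts like `true` even when the connection could not be opened, so wrapper scripts cannot distinguish success from failure.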