[jira] [Resolved] (HIVE-28092) Clean up invalid exception thrown in MetaStoreClient

2024-05-29 Thread Butao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Butao Zhang resolved HIVE-28092.

Fix Version/s: 4.1.0
   Resolution: Fixed

Merged to master.

Thanks [~dkuzmenko] for the review!

> Clean up invalid exception thrown in MetaStoreClient
> 
>
> Key: HIVE-28092
> URL: https://issues.apache.org/jira/browse/HIVE-28092
> Project: Hive
>  Issue Type: Improvement
>Reporter: Butao Zhang
>Assignee: Butao Zhang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28286) Add filtering support for get_table_metas API in Hive metastore

2024-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-28286:
--
Labels: pull-request-available  (was: )

> Add filtering support for get_table_metas API in Hive metastore
> ---
>
> Key: HIVE-28286
> URL: https://issues.apache.org/jira/browse/HIVE-28286
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>  Labels: pull-request-available
>
> Hive Metastore has support for filtering objects through the plugin authorizer 
> for some APIs, such as getTables(), getDatabases(), and getDataConnectors(). 
> However, the same should be done for the get_table_metas() API call.





[jira] [Updated] (HIVE-28287) Attempt make the scratch directory writable before failing

2024-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-28287:
--
Labels: pull-request-available  (was: )

> Attempt make the scratch directory writable before failing
> --
>
> Key: HIVE-28287
> URL: https://issues.apache.org/jira/browse/HIVE-28287
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Agnes Tevesz
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> When Hive starts up, it checks the /tmp/hive directory privileges. Even if 
> the Azure managed identity has write access on the Azure storage account, the 
> rwx-wx-wx privileges are still enforced, so if rwxr-xr-x is set, the Hive 
> process startup will fail. 
> See the logs of a startup that resulted in failure.
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.hadoop.hive.common.StringInternUtils 
> (file:/usr/lib/hive/lib/hive-common-3.1.3000.2023.0.16.0-142.jar) to field 
> java.net.URI.string
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.hadoop.hive.common.StringInternUtils
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> Exception in thread "main" java.lang.RuntimeException: Error applying 
> authorization policy on hive configuration: The dir: /tmp/hive on HDFS should 
> be writable. Current permissions are: rwxr-xr-x
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:121)
> at 
> org.apache.hive.service.cli.thrift.EmbeddedThriftBinaryCLIService.init(EmbeddedThriftBinaryCLIService.java:63)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:357)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:287)
> at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
> at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
> at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:88)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:103)
> at 
> org.apache.hadoop.hive.metastore.CDHMetaStoreSchemaInfo.getMetaStoreSchemaVersion(CDHMetaStoreSchemaInfo.java:323)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.SchemaToolTaskInitOrUpgrade.execute(SchemaToolTaskInitOrUpgrade.java:41)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.MetastoreSchemaTool.run(MetastoreSchemaTool.java:482)
> at 
> org.apache.hive.beeline.schematool.HiveSchemaTool.main(HiveSchemaTool.java:143)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> Caused by: java.lang.RuntimeException: The dir: /tmp/hive on HDFS should be 
> writable. Current permissions are: rwxr-xr-x
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:5088)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:896)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:837)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:749)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:708)
> at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:133)
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:118)
> ... 18 more
> Information schema initialization failed!
> + '[' 1 -eq 0 ']'
> + echo 'Information schema initialization failed!'
> + exit 1
> {code}
> To overcome this issue, one approach could be to attempt to set the /tmp/hive 
> folder privileges to rwx-wx-wx before failing the startup.





[jira] [Assigned] (HIVE-28287) Attempt make the scratch directory writable before failing

2024-05-29 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HIVE-28287:
---

Assignee: Ayush Saxena  (was: Ayush Saxena)

> Attempt make the scratch directory writable before failing
> --
>
> Key: HIVE-28287
> URL: https://issues.apache.org/jira/browse/HIVE-28287
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Agnes Tevesz
>Assignee: Ayush Saxena
>Priority: Major
>
> When Hive starts up, it checks the /tmp/hive directory privileges. Even if 
> the Azure managed identity has write access on the Azure storage account, the 
> rwx-wx-wx privileges are still enforced, so if rwxr-xr-x is set, the Hive 
> process startup will fail. 
> See the logs of a startup that resulted in failure.
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.hadoop.hive.common.StringInternUtils 
> (file:/usr/lib/hive/lib/hive-common-3.1.3000.2023.0.16.0-142.jar) to field 
> java.net.URI.string
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.hadoop.hive.common.StringInternUtils
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> Exception in thread "main" java.lang.RuntimeException: Error applying 
> authorization policy on hive configuration: The dir: /tmp/hive on HDFS should 
> be writable. Current permissions are: rwxr-xr-x
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:121)
> at 
> org.apache.hive.service.cli.thrift.EmbeddedThriftBinaryCLIService.init(EmbeddedThriftBinaryCLIService.java:63)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:357)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:287)
> at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
> at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
> at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:88)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:103)
> at 
> org.apache.hadoop.hive.metastore.CDHMetaStoreSchemaInfo.getMetaStoreSchemaVersion(CDHMetaStoreSchemaInfo.java:323)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.SchemaToolTaskInitOrUpgrade.execute(SchemaToolTaskInitOrUpgrade.java:41)
> at 
> org.apache.hadoop.hive.metastore.tools.schematool.MetastoreSchemaTool.run(MetastoreSchemaTool.java:482)
> at 
> org.apache.hive.beeline.schematool.HiveSchemaTool.main(HiveSchemaTool.java:143)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> Caused by: java.lang.RuntimeException: The dir: /tmp/hive on HDFS should be 
> writable. Current permissions are: rwxr-xr-x
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:5088)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:896)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:837)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:749)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:708)
> at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:133)
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:118)
> ... 18 more
> Information schema initialization failed!
> + '[' 1 -eq 0 ']'
> + echo 'Information schema initialization failed!'
> + exit 1
> {code}
> To overcome this issue, one approach could be to attempt to set the /tmp/hive 
> folder privileges to rwx-wx-wx before failing the startup.





[jira] [Created] (HIVE-28287) Attempt make the scratch directory writable before failing

2024-05-29 Thread Agnes Tevesz (Jira)
Agnes Tevesz created HIVE-28287:
---

 Summary: Attempt make the scratch directory writable before failing
 Key: HIVE-28287
 URL: https://issues.apache.org/jira/browse/HIVE-28287
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Agnes Tevesz
Assignee: Ayush Saxena


When Hive starts up, it checks the /tmp/hive directory privileges. Even if 
the Azure managed identity has write access on the Azure storage account, the 
rwx-wx-wx privileges are still enforced, so if rwxr-xr-x is set, the Hive 
process startup will fail. 

See the logs of a startup that resulted in failure.
{code}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.apache.hadoop.hive.common.StringInternUtils 
(file:/usr/lib/hive/lib/hive-common-3.1.3000.2023.0.16.0-142.jar) to field 
java.net.URI.string
WARNING: Please consider reporting this to the maintainers of 
org.apache.hadoop.hive.common.StringInternUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.RuntimeException: Error applying 
authorization policy on hive configuration: The dir: /tmp/hive on HDFS should 
be writable. Current permissions are: rwxr-xr-x
at org.apache.hive.service.cli.CLIService.init(CLIService.java:121)
at 
org.apache.hive.service.cli.thrift.EmbeddedThriftBinaryCLIService.init(EmbeddedThriftBinaryCLIService.java:63)
at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:357)
at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:287)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
at 
org.apache.hadoop.hive.metastore.tools.schematool.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:88)
at 
org.apache.hadoop.hive.metastore.tools.schematool.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:103)
at 
org.apache.hadoop.hive.metastore.CDHMetaStoreSchemaInfo.getMetaStoreSchemaVersion(CDHMetaStoreSchemaInfo.java:323)
at 
org.apache.hadoop.hive.metastore.tools.schematool.SchemaToolTaskInitOrUpgrade.execute(SchemaToolTaskInitOrUpgrade.java:41)
at 
org.apache.hadoop.hive.metastore.tools.schematool.MetastoreSchemaTool.run(MetastoreSchemaTool.java:482)
at 
org.apache.hive.beeline.schematool.HiveSchemaTool.main(HiveSchemaTool.java:143)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.RuntimeException: The dir: /tmp/hive on HDFS should be 
writable. Current permissions are: rwxr-xr-x
at 
org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:5088)
at 
org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:896)
at 
org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:837)
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:749)
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:708)
at 
org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:133)
at org.apache.hive.service.cli.CLIService.init(CLIService.java:118)
... 18 more
Information schema initialization failed!
+ '[' 1 -eq 0 ']'
+ echo 'Information schema initialization failed!'
+ exit 1
{code}

To overcome this issue, one approach could be to attempt to set the /tmp/hive 
folder privileges to rwx-wx-wx before failing the startup.
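The proposed fallback can be sketched in isolation. The following is a hypothetical standalone example using java.nio on a local POSIX filesystem; the real fix would go through the Hadoop FileSystem API, and the class and method names here are illustrative, not Hive's:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class ScratchDirCheck {

    // Verify the scratch dir carries the expected rwx-wx-wx (733) bits;
    // instead of failing immediately, first attempt to set them.
    public static void ensureWritable(Path dir) throws IOException {
        Set<PosixFilePermission> expected = PosixFilePermissions.fromString("rwx-wx-wx");
        Set<PosixFilePermission> current = Files.getPosixFilePermissions(dir);
        if (!current.containsAll(expected)) {
            try {
                // Attempt the fix before giving up, as proposed above.
                Files.setPosixFilePermissions(dir, expected);
            } catch (IOException e) {
                throw new RuntimeException("The dir: " + dir
                        + " should be writable. Current permissions are: "
                        + PosixFilePermissions.toString(current), e);
            }
        }
    }
}
```

With this shape, a directory left at rwxr-xr-x is repaired on startup when the process owner is allowed to chmod it, and the original error is only raised when the chmod itself fails.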





[jira] [Created] (HIVE-28286) Add filtering support for get_table_metas API in Hive metastore

2024-05-29 Thread Naveen Gangam (Jira)
Naveen Gangam created HIVE-28286:


 Summary: Add filtering support for get_table_metas API in Hive 
metastore
 Key: HIVE-28286
 URL: https://issues.apache.org/jira/browse/HIVE-28286
 Project: Hive
  Issue Type: Bug
  Components: Standalone Metastore
Affects Versions: 4.0.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam


Hive Metastore has support for filtering objects through the plugin authorizer for 
some APIs, such as getTables(), getDatabases(), and getDataConnectors(). However, 
the same should be done for the get_table_metas() API call.
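A rough sketch of the intended change, with illustrative placeholder types rather than Hive's actual metastore classes: the raw results of the API would be passed through an authorizer-supplied filter before being returned, mirroring what already happens for getTables() and getDatabases().

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class TableMetaFiltering {

    // Simplified stand-in for the Thrift TableMeta struct.
    public record TableMeta(String dbName, String tableName, String tableType) {}

    // Filter the raw results through an authorizer-backed predicate, so
    // unauthorized tables never reach the client.
    public static List<TableMeta> filterTableMetas(List<TableMeta> metas,
                                                   Predicate<TableMeta> authorized) {
        return metas.stream().filter(authorized).collect(Collectors.toList());
    }
}
```

In the real fix the predicate's role would presumably be played by the configured metastore filter hook, the same mechanism the other listed APIs use.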





[jira] [Commented] (HIVE-28285) Exception when querying JDBC tables with Hive/DB column types mismatch

2024-05-29 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850412#comment-17850412
 ] 

Stamatis Zampetakis commented on HIVE-28285:


This is a regression caused by HIVE-27487.

> Exception when querying JDBC tables with Hive/DB column types mismatch
> --
>
> Key: HIVE-28285
> URL: https://issues.apache.org/jira/browse/HIVE-28285
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Major
>
> Queries over JDBC tables fail at runtime when the following conditions hold:
>  # there is a mismatch between the Hive type and the database type for some 
> columns
>  # CBO is not used
> CBO may not be used when compiling the query for various reasons:
>  * CBO is explicitly disabled (via the hive.cbo.enable property)
>  * The query is explicitly not supported in CBO (e.g., it contains a DISTRIBUTE BY clause)
>  * A problem/bug in compilation causes CBO execution to be skipped
> The examples below demonstrate the problem with Postgres but the problem 
> itself is not database specific (although different errors may pop up 
> depending on the underlying database). Different type mappings may also lead 
> to different errors.
> h3. Map Postgres DATE to Hive TIMESTAMP
> +Postgres+
> {code:sql}
> create table date_table (cdate date);
> insert into date_table values ('2024-05-29');
> {code}
> +Hive+
> {code:sql}
> CREATE TABLE h_type_table (cdate timestamp)
> STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
> TBLPROPERTIES (
> "hive.sql.database.type" = "POSTGRES",
> "hive.sql.jdbc.driver" = "org.postgresql.Driver",
> "hive.sql.jdbc.url" = "jdbc:postgresql://...",
> "hive.sql.dbcp.username" = "user",
> "hive.sql.dbcp.password" = "pwd",
> "hive.sql.table" = "date_table"
> );
> {code}
> +Hive Result (CBO on)+
> |2024-05-29 00:00:00|
> +Error (CBO off)+
> {noformat}
>  java.lang.RuntimeException: java.io.IOException: 
> java.lang.IllegalArgumentException: Cannot create timestamp, parsing error 
> 2024-05-29
>   at 
> org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:210)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.execute(FetchTask.java:95)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:212)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:154)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:185)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:230)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:732)
>   at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:702)
>   at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:116)
>   at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
>   at 
> org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62)
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Cannot 
> create timestamp, parsing error 2024-05-29
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:535)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:194)
>   ... 55 more
> Caused by: java.lang.IllegalArgumentException: Cannot create timestamp, 
> parsing error 2024-05-29
>   at 
> org.apache.hadoop.hive.common.type.Timestamp.valueOf(Timestamp.java:194)
>   at 
> org.apache.hive.storage.jdbc.JdbcSerDe.deserialize(JdbcSerDe.java:314)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:609)
>   ... 57 more
> Caused by: java.time.format.DateTimeParseException: Text '2024-05-29' could 
> not be parsed: Unable to obtain LocalDateTime from TemporalAccessor: {},ISO 
> resolved to 2024-05-29 of type java.time.format.Parsed
>   at 
> java.time.format.DateTimeFormatter.createError(DateTimeFormatter.java:1920)
>   at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1855)
>   at 

[jira] [Created] (HIVE-28285) Exception when querying JDBC tables with Hive/DB column types mismatch

2024-05-29 Thread Stamatis Zampetakis (Jira)
Stamatis Zampetakis created HIVE-28285:
--

 Summary: Exception when querying JDBC tables with Hive/DB column 
types mismatch
 Key: HIVE-28285
 URL: https://issues.apache.org/jira/browse/HIVE-28285
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 4.0.0
Reporter: Stamatis Zampetakis
Assignee: Stamatis Zampetakis


Queries over JDBC tables fail at runtime when the following conditions hold:
 # there is a mismatch between the Hive type and the database type for some 
columns
 # CBO is not used

CBO may not be used when compiling the query for various reasons:
 * CBO is explicitly disabled (via the hive.cbo.enable property)
 * The query is explicitly not supported in CBO (e.g., it contains a DISTRIBUTE BY clause)
 * A problem/bug in compilation causes CBO execution to be skipped

The examples below demonstrate the problem with Postgres but the problem itself 
is not database specific (although different errors may pop up depending on the 
underlying database). Different type mappings may also lead to different errors.
h3. Map Postgres DATE to Hive TIMESTAMP

+Postgres+
{code:sql}
create table date_table (cdate date);
insert into date_table values ('2024-05-29');
{code}
+Hive+
{code:sql}
CREATE TABLE h_type_table (cdate timestamp)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
"hive.sql.database.type" = "POSTGRES",
"hive.sql.jdbc.driver" = "org.postgresql.Driver",
"hive.sql.jdbc.url" = "jdbc:postgresql://...",
"hive.sql.dbcp.username" = "user",
"hive.sql.dbcp.password" = "pwd",
"hive.sql.table" = "date_table"
);
{code}
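Judging from the stack trace reported for the CBO-off case, the failure boils down to java.time refusing to resolve a date-only string into a LocalDateTime, which is what Hive's Timestamp.valueOf ultimately attempts on the raw value from the database. A minimal standalone illustration (not Hive code; the method names are made up for the demo):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeParseException;

public class DateVsTimestamp {

    // Parsing a date-only string as a LocalDateTime fails: there is no
    // time-of-day component for the ISO formatter to resolve.
    public static boolean parsesAsDateTime(String s) {
        try {
            LocalDateTime.parse(s);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    // A tolerant conversion: parse as a date and take the start of day,
    // which matches the CBO-on result of 2024-05-29 00:00:00.
    public static LocalDateTime dateAtStartOfDay(String s) {
        return LocalDate.parse(s).atStartOfDay();
    }
}
```

The CBO-on path presumably reads the value through the JDBC driver as a typed object rather than reparsing the string, which is why it produces the midnight timestamp instead of the error.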
+Hive Result (CBO on)+
|2024-05-29 00:00:00|

+Error (CBO off)+
{noformat}
 java.lang.RuntimeException: java.io.IOException: 
java.lang.IllegalArgumentException: Cannot create timestamp, parsing error 
2024-05-29
at 
org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:210)
at org.apache.hadoop.hive.ql.exec.FetchTask.execute(FetchTask.java:95)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:212)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:154)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:185)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:230)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257)
at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356)
at 
org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:732)
at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:702)
at 
org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:116)
at 
org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
at 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62)
Caused by: java.io.IOException: java.lang.IllegalArgumentException: Cannot 
create timestamp, parsing error 2024-05-29
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:628)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:535)
at 
org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:194)
... 55 more
Caused by: java.lang.IllegalArgumentException: Cannot create timestamp, parsing 
error 2024-05-29
at 
org.apache.hadoop.hive.common.type.Timestamp.valueOf(Timestamp.java:194)
at 
org.apache.hive.storage.jdbc.JdbcSerDe.deserialize(JdbcSerDe.java:314)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:609)
... 57 more
Caused by: java.time.format.DateTimeParseException: Text '2024-05-29' could not 
be parsed: Unable to obtain LocalDateTime from TemporalAccessor: {},ISO 
resolved to 2024-05-29 of type java.time.format.Parsed
at 
java.time.format.DateTimeFormatter.createError(DateTimeFormatter.java:1920)
at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1855)
at java.time.LocalDateTime.parse(LocalDateTime.java:492)
at 
org.apache.hadoop.hive.common.type.Timestamp.valueOf(Timestamp.java:188)
... 59 more
Caused by: java.time.DateTimeException: Unable to obtain LocalDateTime from 
TemporalAccessor: {},ISO resolved to 2024-05-29 of type java.time.format.Parsed
at java.time.LocalDateTime.from(LocalDateTime.java:461)
at java.time.format.Parsed.query(Parsed.java:226)
at 

[jira] [Created] (HIVE-28284) TestJdbcWithMiniHS2 to run on Tez

2024-05-29 Thread Jira
László Bodor created HIVE-28284:
---

 Summary: TestJdbcWithMiniHS2 to run on Tez
 Key: HIVE-28284
 URL: https://issues.apache.org/jira/browse/HIVE-28284
 Project: Hive
  Issue Type: Sub-task
Reporter: László Bodor








[jira] [Updated] (HIVE-28282) Merging into iceberg table fails with copy on write when values clause has a function call

2024-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-28282:
--
Labels: pull-request-available  (was: )

> Merging into iceberg table fails with copy on write when values clause has a 
> function call
> --
>
> Key: HIVE-28282
> URL: https://issues.apache.org/jira/browse/HIVE-28282
> Project: Hive
>  Issue Type: Bug
>  Components: Iceberg integration, Query Planning
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>
> {code}
> create external table target_ice(a int, b string, c int) stored by iceberg 
> tblproperties ('format-version'='2', 'write.merge.mode'='copy-on-write');
> create table source(a int, b string, c int);
> explain
> merge into target_ice as t using source src ON t.a = src.a
> when matched and t.a > 100 THEN DELETE
> when not matched then insert (a, b) values (src.a, concat(src.b, '-merge new 
> 2'));
> {code}
> {code}
>  org.apache.hadoop.hive.ql.parse.SemanticException: Encountered parse error 
> while parsing rewritten merge/update or delete query
>   at 
> org.apache.hadoop.hive.ql.parse.ParseUtils.parseRewrittenQuery(ParseUtils.java:721)
>   at 
> org.apache.hadoop.hive.ql.parse.rewrite.CopyOnWriteMergeRewriter.rewrite(CopyOnWriteMergeRewriter.java:84)
>   at 
> org.apache.hadoop.hive.ql.parse.rewrite.CopyOnWriteMergeRewriter.rewrite(CopyOnWriteMergeRewriter.java:48)
>   at 
> org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.rewriteAndAnalyze(RewriteSemanticAnalyzer.java:93)
>   at 
> org.apache.hadoop.hive.ql.parse.MergeSemanticAnalyzer.analyze(MergeSemanticAnalyzer.java:201)
>   at 
> org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.analyze(RewriteSemanticAnalyzer.java:84)
>   at 
> org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.analyzeInternal(RewriteSemanticAnalyzer.java:72)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224)
>   at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:229)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:732)
>   at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:702)
>   at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:115)
>   at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
>   at 
> org.apache.hadoop.hive.cli.TestIcebergLlapLocalCliDriver.testCliDriver(TestIcebergLlapLocalCliDriver.java:60)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:135)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> 

[jira] [Updated] (HIVE-28283) CliServiceTest.testExecuteStatementParallel to run on Tez

2024-05-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-28283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-28283:

Attachment: 
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService-output.txt

> CliServiceTest.testExecuteStatementParallel to run on Tez
> -
>
> Key: HIVE-28283
> URL: https://issues.apache.org/jira/browse/HIVE-28283
> Project: Hive
>  Issue Type: Sub-task
>Reporter: László Bodor
>Priority: Major
> Attachments: 
> org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService-output.txt
>
>
> Hit TEZ-4566 while adapting this unit test to Tez.
> {code}
> 2024-05-29T06:02:24,695  INFO [CallbackExecutor] 
> launcher.LocalContainerLauncher: Container: 
> container_1716987673112_0001_00_08: Execution Failed: 
> java.lang.NullPointerException: null
>   at org.apache.tez.runtime.task.TezChild.run(TezChild.java:252) 
> ~[tez-runtime-internals-0.10.3.jar:0.10.3]
>   at 
> org.apache.tez.dag.app.launcher.LocalContainerLauncher$1.call(LocalContainerLauncher.java:409)
>  ~[tez-dag-0.10.3.jar:0.10.3]
>   at 
> org.apache.tez.dag.app.launcher.LocalContainerLauncher$1.call(LocalContainerLauncher.java:400)
>  ~[tez-dag-0.10.3.jar:0.10.3]
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
>  ~[guava-22.0.jar:?]
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
>  ~[guava-22.0.jar:?]
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
>  ~[guava-22.0.jar:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_292]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_292]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28283) CliServiceTest.testExecuteStatementParallel to run on Tez

2024-05-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-28283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-28283:

Description: 
hit TEZ-4566 while adapting this unit test to tez
{code}
2024-05-29T06:02:24,695  INFO [CallbackExecutor] 
launcher.LocalContainerLauncher: Container: 
container_1716987673112_0001_00_08: Execution Failed: 
java.lang.NullPointerException: null
at org.apache.tez.runtime.task.TezChild.run(TezChild.java:252) 
~[tez-runtime-internals-0.10.3.jar:0.10.3]
at 
org.apache.tez.dag.app.launcher.LocalContainerLauncher$1.call(LocalContainerLauncher.java:409)
 ~[tez-dag-0.10.3.jar:0.10.3]
at 
org.apache.tez.dag.app.launcher.LocalContainerLauncher$1.call(LocalContainerLauncher.java:400)
 ~[tez-dag-0.10.3.jar:0.10.3]
at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
 ~[guava-22.0.jar:?]
at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
 ~[guava-22.0.jar:?]
at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
 ~[guava-22.0.jar:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_292]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_292]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
{code}

> CliServiceTest.testExecuteStatementParallel to run on Tez
> -
>
> Key: HIVE-28283
> URL: https://issues.apache.org/jira/browse/HIVE-28283
> Project: Hive
>  Issue Type: Sub-task
>Reporter: László Bodor
>Priority: Major
> Attachments: 
> org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService-output.txt
>
>
> hit TEZ-4566 while adapting this unit test to tez
> {code}
> 2024-05-29T06:02:24,695  INFO [CallbackExecutor] 
> launcher.LocalContainerLauncher: Container: 
> container_1716987673112_0001_00_08: Execution Failed: 
> java.lang.NullPointerException: null
>   at org.apache.tez.runtime.task.TezChild.run(TezChild.java:252) 
> ~[tez-runtime-internals-0.10.3.jar:0.10.3]
>   at 
> org.apache.tez.dag.app.launcher.LocalContainerLauncher$1.call(LocalContainerLauncher.java:409)
>  ~[tez-dag-0.10.3.jar:0.10.3]
>   at 
> org.apache.tez.dag.app.launcher.LocalContainerLauncher$1.call(LocalContainerLauncher.java:400)
>  ~[tez-dag-0.10.3.jar:0.10.3]
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
>  ~[guava-22.0.jar:?]
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
>  ~[guava-22.0.jar:?]
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
>  ~[guava-22.0.jar:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_292]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_292]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28283) CliServiceTest.testExecuteStatementParallel to run on Tez

2024-05-29 Thread Jira
László Bodor created HIVE-28283:
---

 Summary: CliServiceTest.testExecuteStatementParallel to run on Tez
 Key: HIVE-28283
 URL: https://issues.apache.org/jira/browse/HIVE-28283
 Project: Hive
  Issue Type: Sub-task
Reporter: László Bodor






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28202) Incorrect projected column size after ORC upgrade to v1.6.7

2024-05-29 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850333#comment-17850333
 ] 

Stamatis Zampetakis commented on HIVE-28202:


Hey [~dkuzmenko], can you please add a small comment in this ticket 
explaining how the bucket_00952_0 file was generated? Binary files are 
cumbersome in ASF releases, so it's good to have a clear idea of what is inside 
for future reference.

> Incorrect projected column size after ORC upgrade to v1.6.7 
> 
>
> Key: HIVE-28202
> URL: https://issues.apache.org/jira/browse/HIVE-28202
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0, 4.0.0-beta-1
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Critical
>  Labels: hive-4.0.1-must, performance, pull-request-available
> Fix For: 4.1.0
>
>
> `ReaderImpl.getRawDataSizeFromColIndices` changed behavior for handling 
> struct types and now includes their subtypes. That caused an issue in Hive, 
> as the root struct index is always "included", so the size estimation covers 
> the complete schema rather than just the selected columns, leading to 
> incorrect split estimations.
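The estimation issue described above can be sketched with a toy model (an 
assumption-laden illustration only: `sizeOf`, `expandRoot`, and the sizes are 
hypothetical and not ORC's actual `ReaderImpl` code). It shows how expanding 
the always-included root struct into all of its subtypes counts every column 
instead of just the projected ones:

```java
import java.util.List;

public class ProjectedSizeDemo {
    // colSizes holds per-column raw sizes; index 0 is the root struct,
    // indices 1..n are its child columns.
    static long sizeOf(long[] colSizes, List<Integer> included, boolean expandRoot) {
        long total = 0;
        for (int col : included) {
            if (col == 0 && expandRoot) {
                // Root struct expands to all subtypes: counts the whole schema.
                for (int c = 1; c < colSizes.length; c++) {
                    total += colSizes[c];
                }
            } else {
                total += colSizes[col];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long[] sizes = {0, 100, 200, 700};       // root struct + 3 children
        List<Integer> projected = List.of(0, 1); // root index is always "included"
        System.out.println(sizeOf(sizes, projected, true));  // whole schema: 1100
        System.out.println(sizeOf(sizes, projected, false)); // selected column only: 100
    }
}
```

With the root struct expanded, a single projected column is costed at the size 
of the full schema, which matches the incorrect split estimations described.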



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28282) Merging into iceberg table fails with copy on write when values clause has a function call

2024-05-29 Thread Krisztian Kasa (Jira)
Krisztian Kasa created HIVE-28282:
-

 Summary: Merging into iceberg table fails with copy on write when 
values clause has a function call
 Key: HIVE-28282
 URL: https://issues.apache.org/jira/browse/HIVE-28282
 Project: Hive
  Issue Type: Bug
  Components: Iceberg integration, Query Planning
Reporter: Krisztian Kasa
Assignee: Krisztian Kasa


{code}
create external table target_ice(a int, b string, c int) stored by iceberg 
tblproperties ('format-version'='2', 'write.merge.mode'='copy-on-write');
create table source(a int, b string, c int);

explain
merge into target_ice as t using source src ON t.a = src.a
when matched and t.a > 100 THEN DELETE
when not matched then insert (a, b) values (src.a, concat(src.b, '-merge new 
2'));
{code}
{code}
 org.apache.hadoop.hive.ql.parse.SemanticException: Encountered parse error 
while parsing rewritten merge/update or delete query
at 
org.apache.hadoop.hive.ql.parse.ParseUtils.parseRewrittenQuery(ParseUtils.java:721)
at 
org.apache.hadoop.hive.ql.parse.rewrite.CopyOnWriteMergeRewriter.rewrite(CopyOnWriteMergeRewriter.java:84)
at 
org.apache.hadoop.hive.ql.parse.rewrite.CopyOnWriteMergeRewriter.rewrite(CopyOnWriteMergeRewriter.java:48)
at 
org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.rewriteAndAnalyze(RewriteSemanticAnalyzer.java:93)
at 
org.apache.hadoop.hive.ql.parse.MergeSemanticAnalyzer.analyze(MergeSemanticAnalyzer.java:201)
at 
org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.analyze(RewriteSemanticAnalyzer.java:84)
at 
org.apache.hadoop.hive.ql.parse.RewriteSemanticAnalyzer.analyzeInternal(RewriteSemanticAnalyzer.java:72)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
at 
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224)
at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:229)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257)
at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356)
at 
org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:732)
at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:702)
at 
org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:115)
at 
org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
at 
org.apache.hadoop.hive.cli.TestIcebergLlapLocalCliDriver.testCliDriver(TestIcebergLlapLocalCliDriver.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:135)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at 

[jira] [Created] (HIVE-28281) Hive produces incorrect result when a vertex contains multiple LimitOperator

2024-05-29 Thread Seonggon Namgung (Jira)
Seonggon Namgung created HIVE-28281:
---

 Summary: Hive produces incorrect result when a vertex contains 
multiple LimitOperator
 Key: HIVE-28281
 URL: https://issues.apache.org/jira/browse/HIVE-28281
 Project: Hive
  Issue Type: Bug
Reporter: Seonggon Namgung


I observed that running the following query with TestMiniLlapLocalCliDriver 
produces an incorrect result. I expected 3 rows in table401_3, but got only a 
single row.

 
{code:java}
--! qt:dataset:src

set mapred.min.split.size=10;
set mapred.max.split.size=10;
set tez.grouping.min-size=10;
set tez.grouping.max-size=10;
set tez.grouping.split-waves=10;

create table table230_1 (key string, value string);
create table table401_3 (key string, value string);

from (select * from src) a
insert overwrite table table230_1 select key, value where key = 230 limit 1
insert overwrite table table401_3 select key, value where key = 401 limit 3;

select * from table230_1;
select * from table401_3;{code}
 

It seems that the current LimitReached optimization does not work well when a 
vertex contains multiple LimitOperators.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)