[jira] [Created] (HIVE-27223) Show Compactions failing with NPE
Ayush Saxena created HIVE-27223:
-----------------------------------

             Summary: Show Compactions failing with NPE
                 Key: HIVE-27223
                 URL: https://issues.apache.org/jira/browse/HIVE-27223
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

{noformat}
java.lang.NullPointerException: null
    at java.io.DataOutputStream.writeBytes(DataOutputStream.java:274) ~[?:?]
    at org.apache.hadoop.hive.ql.ddl.process.show.compactions.ShowCompactionsOperation.writeRow(ShowCompactionsOperation.java:135)
    at org.apache.hadoop.hive.ql.ddl.process.show.compactions.ShowCompactionsOperation.execute(ShowCompactionsOperation.java:57)
    at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:84)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
    at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:360)
{noformat}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
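The NPE originates in {{DataOutputStream.writeBytes(String)}}, which dereferences its argument, so any null column value passed through {{writeRow}} fails. A minimal sketch of a null-safe field writer (hypothetical names, not the actual Hive patch):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class NullSafeRowWriter {
    // Substitute a placeholder for null so writeBytes(String) never NPEs.
    // The " --- " marker is illustrative, not Hive's actual output format.
    static void writeField(DataOutputStream out, String value) throws IOException {
        out.writeBytes(value == null ? " --- " : value);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeField(out, null);   // a raw out.writeBytes(null) would throw NPE here
        writeField(out, "minor");
        System.out.println(buf.toString());
    }
}
```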
[jira] [Created] (HIVE-27208) Iceberg: Add support for rename table
Ayush Saxena created HIVE-27208:
-----------------------------------

             Summary: Iceberg: Add support for rename table
                 Key: HIVE-27208
                 URL: https://issues.apache.org/jira/browse/HIVE-27208
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add support for renaming Iceberg tables.
[jira] [Created] (HIVE-27185) Iceberg: Cache iceberg table while loading for stats
Ayush Saxena created HIVE-27185:
-----------------------------------

             Summary: Iceberg: Cache iceberg table while loading for stats
                 Key: HIVE-27185
                 URL: https://issues.apache.org/jira/browse/HIVE-27185
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Presently, stats collection loads the Iceberg table multiple times via different code paths. Cache the table to avoid reading/loading it repeatedly.
[jira] [Created] (HIVE-27182) tez_union_with_udf.q with TestMiniTezCliDriver is flaky
Ayush Saxena created HIVE-27182:
-----------------------------------

             Summary: tez_union_with_udf.q with TestMiniTezCliDriver is flaky
                 Key: HIVE-27182
                 URL: https://issues.apache.org/jira/browse/HIVE-27182
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena

Looks like a memory issue:

{noformat}
< Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.OutOfMemoryError: GC overhead limit exceeded
< Serialization trace:
< genericUDF (org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc)
< colExprMap (org.apache.hadoop.hive.ql.plan.SelectDesc)
< conf (org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator)
< childOperators (org.apache.hadoop.hive.ql.exec.vector.VectorLimitOperator)
< childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
{noformat}
[jira] [Created] (HIVE-27177) Add alter table...Convert to Iceberg command
Ayush Saxena created HIVE-27177:
-----------------------------------

             Summary: Add alter table...Convert to Iceberg command
                 Key: HIVE-27177
                 URL: https://issues.apache.org/jira/browse/HIVE-27177
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add an {{alter table ... convert to Iceberg [TBLPROPERTIES('','')]}} command to convert existing external tables to Iceberg tables.
[jira] [Created] (HIVE-27070) Multiple is NULL check together fails in CalcitePlanner
Ayush Saxena created HIVE-27070:
-----------------------------------

             Summary: Multiple is NULL check together fails in CalcitePlanner
                 Key: HIVE-27070
                 URL: https://issues.apache.org/jira/browse/HIVE-27070
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena

Steps to repro:
{noformat}
create external table ice01 (id int, name string);
select (name is null) is NULL from ice01;
{noformat}

Exception:
{noformat}
Caused by: java.lang.AssertionError
    at org.apache.calcite.rex.RexSimplify.validateStrongPolicy(RexSimplify.java:851)
    at org.apache.calcite.rex.RexSimplify.simplifyIs2(RexSimplify.java:695)
    at org.apache.calcite.rex.RexSimplify.simplifyIs(RexSimplify.java:666)
{noformat}
[jira] [Created] (HIVE-27066) Fix mv_iceberg_partitioned_orc and mv_iceberg_partitioned_orc2
Ayush Saxena created HIVE-27066:
-----------------------------------

             Summary: Fix mv_iceberg_partitioned_orc and mv_iceberg_partitioned_orc2
                 Key: HIVE-27066
                 URL: https://issues.apache.org/jira/browse/HIVE-27066
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena

These tests are flaky due to totalSize in the query output. The tests were committed recently and cover materialized views on Iceberg. They are missing

{noformat}
--! qt:replace:/(\s+totalSize\s+)\S+(\s+)/$1#Masked#$2/
{noformat}

which most tests that emit totalSize in the query output apply to avoid flakiness.
[jira] [Created] (HIVE-27031) Iceberg: Implement Copy-On-Write for Delete Queries
Ayush Saxena created HIVE-27031:
-----------------------------------

             Summary: Iceberg: Implement Copy-On-Write for Delete Queries
                 Key: HIVE-27031
                 URL: https://issues.apache.org/jira/browse/HIVE-27031
             Project: Hive
          Issue Type: Sub-task
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Implement copy-on-write mode for deletes on Iceberg tables.
[jira] [Created] (HIVE-26885) Iceberg: Parquet Vectorized V2 reads fails with NPE
Ayush Saxena created HIVE-26885:
-----------------------------------

             Summary: Iceberg: Parquet Vectorized V2 reads fails with NPE
                 Key: HIVE-26885
                 URL: https://issues.apache.org/jira/browse/HIVE-26885
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

If an Iceberg Parquet read ends up with an empty batch, fetching the row numbers (used for filtering) leads to an NPE. The row-number-to-block mapping is only built when the Parquet split isn't null, here:

{code:java}
if (parquetInputSplit != null) {
  initialize(parquetInputSplit, conf);
}
{code}

In that case row numbers aren't initialised, so we should skip fetching the row numbers later.
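The guard described above can be sketched as follows (hypothetical names, not the actual Hive/Iceberg patch): row-number lookup is skipped entirely when the split-based initialization never ran, instead of dereferencing uninitialized state.

```java
public class RowNumberGuard {
    private long[] blockRowOffsets;  // populated only when a Parquet split is present

    boolean initialized() {
        return blockRowOffsets != null;
    }

    void initialize(long[] offsets) {
        blockRowOffsets = offsets;
    }

    // Return -1 (meaning "no positional filtering needed") for an empty
    // batch / missing split rather than throwing an NPE.
    long rowNumberFor(int blockIdx) {
        if (!initialized()) {
            return -1;
        }
        return blockRowOffsets[blockIdx];
    }

    public static void main(String[] args) {
        RowNumberGuard guard = new RowNumberGuard();
        System.out.println(guard.rowNumberFor(0));  // safe before initialize()
        guard.initialize(new long[] {0L, 100L});
        System.out.println(guard.rowNumberFor(1));
    }
}
```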
[jira] [Created] (HIVE-26884) Iceberg: V2 Vectorization returns wrong results with deletes
Ayush Saxena created HIVE-26884:
-----------------------------------

             Summary: Iceberg: V2 Vectorization returns wrong results with deletes
                 Key: HIVE-26884
                 URL: https://issues.apache.org/jira/browse/HIVE-26884
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

In Iceberg V2 reads, if we have delete files and a couple of Parquet blocks are skipped, the row number calculation goes wrong. The computed row numbers then no longer match the delete filter's row positions, leading to wrong results.
[jira] [Created] (HIVE-26868) Iceberg: Allow IOW on empty table with Partition Evolution
Ayush Saxena created HIVE-26868:
-----------------------------------

             Summary: Iceberg: Allow IOW on empty table with Partition Evolution
                 Key: HIVE-26868
                 URL: https://issues.apache.org/jira/browse/HIVE-26868
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

If an Iceberg table has gone through partition evolution, we don't allow an IOW (insert overwrite) operation on it. But if the table is empty we can allow IOW, since there is no data that the overwrite could corrupt. This makes it possible to compact data & merge the delete files into data files via Truncate -> IOW using the snapshot ID from before the truncate. Impala uses the same flow for compacting Iceberg tables.
[jira] [Created] (HIVE-26848) Ozone: LOAD data fails in case of move across buckets
Ayush Saxena created HIVE-26848:
-----------------------------------

             Summary: Ozone: LOAD data fails in case of move across buckets
                 Key: HIVE-26848
                 URL: https://issues.apache.org/jira/browse/HIVE-26848
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Hive decides whether to copy or to move on the basis of the FileSystem URI, but on Ozone the file may reside in a different bucket than the target. Rename isn't possible across buckets, so we should fall back to copy.
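The decision described above can be sketched with plain {{java.net.URI}} (assumed Ozone path layout {{ofs://service/volume/bucket/key}}; names are hypothetical, not Hive's actual code): rename is only chosen when scheme, authority, volume, and bucket all match.

```java
import java.net.URI;

public class MoveOrCopy {
    // First two path components (volume/bucket) must match for a rename
    // to be valid on Ozone. Assumes ofs://service/volume/bucket/key layout.
    static boolean sameBucket(URI src, URI dst) {
        String[] s = src.getPath().split("/", 4);
        String[] d = dst.getPath().split("/", 4);
        return s.length > 2 && d.length > 2
                && s[1].equals(d[1]) && s[2].equals(d[2]);
    }

    // "rename" only when both URIs point at the same FS and the same bucket;
    // otherwise fall back to "copy".
    static String plan(URI src, URI dst) {
        boolean sameFs = src.getScheme().equals(dst.getScheme())
                && String.valueOf(src.getAuthority()).equals(String.valueOf(dst.getAuthority()));
        return (sameFs && sameBucket(src, dst)) ? "rename" : "copy";
    }

    public static void main(String[] args) {
        System.out.println(plan(URI.create("ofs://om/vol1/bucketA/staging/f"),
                                URI.create("ofs://om/vol1/bucketB/warehouse/f")));
    }
}
```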
[jira] [Created] (HIVE-26847) Allow a config to disable creation of Materialized Views
Ayush Saxena created HIVE-26847:
-----------------------------------

             Summary: Allow a config to disable creation of Materialized Views
                 Key: HIVE-26847
                 URL: https://issues.apache.org/jira/browse/HIVE-26847
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena
[jira] [Created] (HIVE-26789) Add UserName in CallerContext for queries
Ayush Saxena created HIVE-26789:
-----------------------------------

             Summary: Add UserName in CallerContext for queries
                 Key: HIVE-26789
                 URL: https://issues.apache.org/jira/browse/HIVE-26789
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

When impersonation is disabled, the HDFS audit log tracks only the Hive service user. We can pass the actual end user as part of the CallerContext, so that it gets logged as well for better tracking.
[jira] [Created] (HIVE-26786) Iceberg: Read queries with copy-on-write failing
Ayush Saxena created HIVE-26786:
-----------------------------------

             Summary: Iceberg: Read queries with copy-on-write failing
                 Key: HIVE-26786
                 URL: https://issues.apache.org/jira/browse/HIVE-26786
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Hive by default only supports merge-on-read, but read queries have nothing to do with this config and shouldn't fail because of it.
[jira] [Created] (HIVE-26756) Iceberg: Fetch format version from metadata file to avoid conflicts with spark
Ayush Saxena created HIVE-26756:
-----------------------------------

             Summary: Iceberg: Fetch format version from metadata file to avoid conflicts with spark
                 Key: HIVE-26756
                 URL: https://issues.apache.org/jira/browse/HIVE-26756
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Spark & other engines don't set the format version for Iceberg tables in the HMS properties, which leads to misinterpreting the Iceberg format and to wrong query results. Propose to always extract the format version from the metadata file rather than relying on the HMS properties.
[jira] [Created] (HIVE-26742) Iceberg: Vectorization fails in case of multiple tables in queries
Ayush Saxena created HIVE-26742:
-----------------------------------

             Summary: Iceberg: Vectorization fails in case of multiple tables in queries
                 Key: HIVE-26742
                 URL: https://issues.apache.org/jira/browse/HIVE-26742
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

If the Iceberg table has orc.files.only set to false and the query involves multiple tables:

{noformat}
Caused by: java.lang.ClassCastException: class org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector cannot be cast to class org.apache.hadoop.hive.ql.exec.vector.LongColumnVector (org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector and org.apache.hadoop.hive.ql.exec.vector.LongColumnVector are in unnamed module of loader 'app')
    at org.apache.hadoop.hive.ql.exec.vector.expressions.FuncLongToDecimal.evaluate(FuncLongToDecimal.java:53)
    at org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:314)
    at org.apache.hadoop.hive.ql.exec.vector.expressions.gen.DecimalColMultiplyDecimalColumn.evaluate(DecimalColMultiplyDecimalColumn.java:56)
    at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
    ... 25 more
{noformat}
[jira] [Created] (HIVE-26736) Authorization failure for nested Views having WITH clause
Ayush Saxena created HIVE-26736:
-----------------------------------

             Summary: Authorization failure for nested Views having WITH clause
                 Key: HIVE-26736
                 URL: https://issues.apache.org/jira/browse/HIVE-26736
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Authorization fails for nested views created using a WITH clause, if the user doesn't have permissions on the inner view.
[jira] [Created] (HIVE-26734) Iceberg: Add an option to allow positional delete files without actual row data
Ayush Saxena created HIVE-26734:
-----------------------------------

             Summary: Iceberg: Add an option to allow positional delete files without actual row data
                 Key: HIVE-26734
                 URL: https://issues.apache.org/jira/browse/HIVE-26734
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add an option to make the actual row data in Iceberg positional delete files optional, to avoid reading and writing huge amounts of row data during query execution.
[jira] [Created] (HIVE-26724) Mask UDF failing with NPE
Ayush Saxena created HIVE-26724:
-----------------------------------

             Summary: Mask UDF failing with NPE
                 Key: HIVE-26724
                 URL: https://issues.apache.org/jira/browse/HIVE-26724
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

The mask UDF fails with an NPE in prod, due to unavailability of the session conf.

Trace:
{noformat}
... 20 more
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.udf.generic.MaskHashTransformer.transform(GenericUDFMaskHash.java:50)
    at org.apache.hadoop.hive.ql.udf.generic.StringTransformerAdapter.getTransformedWritable(BaseMaskUDF.java:459)
    at org.apache.hadoop.hive.ql.udf.generic.BaseMaskUDF.evaluate(BaseMaskUDF.java:84)
    at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:235)
    at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
    at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:68)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
    ... 24 more
], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : attempt_1667823513257_0010_2_00_00_1:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
{noformat}
[jira] [Created] (HIVE-26709) Iceberg: Count(*) fails for V2 tables with delete files.
Ayush Saxena created HIVE-26709:
-----------------------------------

             Summary: Iceberg: Count(*) fails for V2 tables with delete files.
                 Key: HIVE-26709
                 URL: https://issues.apache.org/jira/browse/HIVE-26709
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Steps to repro:
 * Create a V2 table
 * Add some data
 * Delete a row
 * Do a count(*) on the table

*Reason:* Missing RoaringBitmap dependency; Iceberg now requires it at runtime for delete file filtering.

StackTrace:
{noformat}
Caused by: java.lang.ClassNotFoundException: org.roaringbitmap.longlong.Roaring64Bitmap
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 42 more
, errorMessage=Cannot recover from this error:java.lang.NoClassDefFoundError: org/roaringbitmap/longlong/Roaring64Bitmap
    at org.apache.iceberg.deletes.BitmapPositionDeleteIndex.<init>(BitmapPositionDeleteIndex.java:28)
    at org.apache.iceberg.deletes.Deletes.toPositionIndex(Deletes.java:102)
    at org.apache.iceberg.deletes.Deletes.toPositionIndex(Deletes.java:97)
    at org.apache.iceberg.data.DeleteFilter.applyPosDeletes(DeleteFilter.java:229)
    at org.apache.iceberg.data.DeleteFilter.filter(DeleteFilter.java:132)
    at org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.open(IcebergInputFormat.java:376)
    at org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.nextTask(IcebergInputFormat.java:266)
    at org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.initialize(IcebergInputFormat.java:262)
    at org.apache.iceberg.mr.mapred.AbstractMapredIcebergRecordReader.<init>(AbstractMapredIcebergRecordReader.java:40)
    at org.apache.iceberg.mr.mapred.MapredIcebergInputFormat$MapredIcebergRecordReader.<init>(MapredIcebergInputFormat.java:89)
    at org.apache.iceberg.mr.mapred.MapredIcebergInputFormat.getRecordReader(MapredIcebergInputFormat.java:79)
    at org.apache.iceberg.mr.hive.HiveIcebergInputFormat.getRecordReader(HiveIcebergInputFormat.java:169)
    at org.apache.hadoop.hive.ql.io.RecordReaderWrapper.create(RecordReaderWrapper.java:72)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:461)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
    at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:706)
    at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:665)
    at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
    at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:543)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:189)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
{noformat}
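Since the root cause is a missing runtime dependency, the fix amounts to shipping RoaringBitmap with the Iceberg handler. A sketch of the Maven coordinates (the coordinates are the library's real ones; the version shown is illustrative, not taken from the issue, and should match what Iceberg's own POM pulls in):

```xml
<!-- Version is illustrative; align it with the Iceberg release in use. -->
<dependency>
  <groupId>org.roaringbitmap</groupId>
  <artifactId>RoaringBitmap</artifactId>
  <version>0.9.22</version>
</dependency>
```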
[jira] [Created] (HIVE-26707) Iceberg: Write failing due to Ranger Authorization failure
Ayush Saxena created HIVE-26707:
-----------------------------------

             Summary: Iceberg: Write failing due to Ranger Authorization failure
                 Key: HIVE-26707
                 URL: https://issues.apache.org/jira/browse/HIVE-26707
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena
[jira] [Created] (HIVE-26614) Fix adding custom jars in Job Classpath
Ayush Saxena created HIVE-26614:
-----------------------------------

             Summary: Fix adding custom jars in Job Classpath
                 Key: HIVE-26614
                 URL: https://issues.apache.org/jira/browse/HIVE-26614
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Custom jars added from the local FS throw FileNotFoundException:

{noformat}
ERROR : Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://localhost:9000/Users/ayushsaxena/code/hive-os/hive/packaging/target/apache-hive-4.0.0-alpha-2-SNAPSHOT-bin/apache-hive-4.0.0-alpha-2-SNAPSHOT-bin/lib/hive-iceberg-handler-4.0.0-alpha-2-SNAPSHOT.jar)'
java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/Users/ayushsaxena/code/hive-os/hive/packaging/target/apache-hive-4.0.0-alpha-2-SNAPSHOT-bin/apache-hive-4.0.0-alpha-2-SNAPSHOT-bin/lib/hive-iceberg-handler-4.0.0-alpha-2-SNAPSHOT.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1756)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
    at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:958)
    at org.apache.hadoop.mapreduce.v2.util.MRApps.addToClasspathIfNotJar(MRApps.java:342)
    at org.apache.hadoop.mapreduce.v2.util.MRApps.setClasspath(MRApps.java:275)
    at org.apache.hadoop.mapred.YARNRunner.setupContainerLaunchContextForAM(YARNRunner.java:525)
    at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:584)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:326)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:251)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1571)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1568)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1568)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:416)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:158)
{noformat}

Some applications consider every jar path to be on the local filesystem, while the rest follow the standard Hadoop practice of checking fs.defaultFS. We should make the path fully qualified.
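The "make the path qualified" idea can be sketched with plain {{java.net.URI}} (no Hadoop dependency; in Hive itself this would go through Hadoop's {{Path}}/{{FileSystem.makeQualified}}): a jar path without a scheme gets anchored to the default filesystem, so every consumer sees the same absolute {{file://}} or {{hdfs://}} URI instead of guessing.

```java
import java.net.URI;

public class QualifyPath {
    // If the path already carries a scheme (file:, hdfs:, ...), keep it
    // as-is; otherwise resolve it against the fs.defaultFS URI.
    static URI qualify(URI defaultFs, String path) {
        URI u = URI.create(path);
        return (u.getScheme() != null) ? u : defaultFs.resolve(u);
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://localhost:9000/");
        // Scheme-less path gets anchored to the default FS.
        System.out.println(qualify(defaultFs, "/tmp/custom.jar"));
        // A local-FS jar keeps its file: scheme and is never looked up on HDFS.
        System.out.println(qualify(defaultFs, "file:///tmp/custom.jar"));
    }
}
```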
[jira] [Created] (HIVE-26608) Iceberg: Allow parquet write properties to iceberg via session conf and Table Properties
Ayush Saxena created HIVE-26608:
-----------------------------------

             Summary: Iceberg: Allow parquet write properties to iceberg via session conf and Table Properties
                 Key: HIVE-26608
                 URL: https://issues.apache.org/jira/browse/HIVE-26608
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Allow passing parquet.block.size & parquet.compression to Iceberg via TBLPROPERTIES and session conf.
[jira] [Created] (HIVE-26556) Iceberg: Properties set in HiveIcebergSerde are not propagated to jobconf
Ayush Saxena created HIVE-26556:
-----------------------------------

             Summary: Iceberg: Properties set in HiveIcebergSerde are not propagated to jobconf
                 Key: HIVE-26556
                 URL: https://issues.apache.org/jira/browse/HIVE-26556
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Some Hive properties (e.g. InputFormatConfig.CASE_SENSITIVE) are not propagated to the jobconf. This scenario can be reproduced by running the TestHiveIcebergSelects#testScanTableCaseInsensitive test method.
[jira] [Created] (HIVE-26535) Iceberg: Support adding parquet compression type via Table properties
Ayush Saxena created HIVE-26535:
-----------------------------------

             Summary: Iceberg: Support adding parquet compression type via Table properties
                 Key: HIVE-26535
                 URL: https://issues.apache.org/jira/browse/HIVE-26535
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

As of now, the Parquet compression format is ignored for Iceberg tables.
[jira] [Created] (HIVE-26519) Iceberg: Add support for CTLT queries
Ayush Saxena created HIVE-26519:
-----------------------------------

             Summary: Iceberg: Add support for CTLT queries
                 Key: HIVE-26519
                 URL: https://issues.apache.org/jira/browse/HIVE-26519
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add support for running {{CREATE TABLE ... LIKE}} queries with Iceberg tables.
[jira] [Created] (HIVE-26511) Fix NoClassDefFoundError in HMS for HBaseConfiguration
Ayush Saxena created HIVE-26511:
-----------------------------------

             Summary: Fix NoClassDefFoundError in HMS for HBaseConfiguration
                 Key: HIVE-26511
                 URL: https://issues.apache.org/jira/browse/HIVE-26511
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

While accessing HBase tables via PySpark, the query fails with NoClassDefFoundError due to missing HBase jars in the classpath.
[jira] [Created] (HIVE-26488) Fix NPE in DDLSemanticAnalyzerFactory during compilation
Ayush Saxena created HIVE-26488:
-----------------------------------

             Summary: Fix NPE in DDLSemanticAnalyzerFactory during compilation
                 Key: HIVE-26488
                 URL: https://issues.apache.org/jira/browse/HIVE-26488
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

*Exception Trace:*
{noformat}
java.lang.ExceptionInInitializerError
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory.getInternal(SemanticAnalyzerFactory.java:62)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory.get(SemanticAnalyzerFactory.java:41)
    at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:209)
    at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:106)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:507)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:459)
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:424)
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:418)
{noformat}

*Cause:*
{noformat}
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.<clinit>(DDLSemanticAnalyzerFactory.java:84)
    ... 40 more
{noformat}
[jira] [Created] (HIVE-26224) Add support for ESRI GeoSpatial SERDE formats
Ayush Saxena created HIVE-26224:
-----------------------------------

             Summary: Add support for ESRI GeoSpatial SERDE formats
                 Key: HIVE-26224
                 URL: https://issues.apache.org/jira/browse/HIVE-26224
             Project: Hive
          Issue Type: Sub-task
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add support to use ESRI geospatial SerDe formats.
[jira] [Created] (HIVE-26223) Integrate ESRI GeoSpatial UDFs
Ayush Saxena created HIVE-26223:
-----------------------------------

             Summary: Integrate ESRI GeoSpatial UDFs
                 Key: HIVE-26223
                 URL: https://issues.apache.org/jira/browse/HIVE-26223
             Project: Hive
          Issue Type: Sub-task
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Add the ESRI geospatial UDFs to Hive.
[jira] [Created] (HIVE-26141) Fix vector_ptf_part_simple_all_datatypes source file
Ayush Saxena created HIVE-26141:
-----------------------------------

             Summary: Fix vector_ptf_part_simple_all_datatypes source file
                 Key: HIVE-26141
                 URL: https://issues.apache.org/jira/browse/HIVE-26141
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

The source file fails to parse into a Hive table due to tab/space irregularities.
[jira] [Created] (HIVE-26033) Repl Load fails with Wrong FS error.
Ayush Saxena created HIVE-26033:
-----------------------------------

             Summary: Repl Load fails with Wrong FS error.
                 Key: HIVE-26033
                 URL: https://issues.apache.org/jira/browse/HIVE-26033
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena

For external table replication with staging on the source, the replication load fails with a Wrong FS error while cleaning up snapshots.

{noformat}
Exception : Wrong FS: hdfs://cluster1:8020/user/hive/replDir/policy_1646973828/_file_list_external_current, expected: hdfs://cluster2:8020
{noformat}
[jira] [Created] (HIVE-25990) Optimise multiple copies in case of CTAS in external tables for Object stores
Ayush Saxena created HIVE-25990:
-----------------------------------

             Summary: Optimise multiple copies in case of CTAS in external tables for Object stores
                 Key: HIVE-25990
                 URL: https://issues.apache.org/jira/browse/HIVE-25990
             Project: Hive
          Issue Type: Improvement
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Presently, CTAS with external tables performs two rename operations: one from tmp to _ext, and then from _ext to the actual target. On object stores, renames are actual copies. Avoid the rename from tmp to _ext by instead creating a list of files to be copied in that directory, which the move task can consume to copy directly from tmp to the actual target.
[jira] [Created] (HIVE-25960) Fix S3a recursive listing logic
Ayush Saxena created HIVE-25960:
-----------------------------------

             Summary: Fix S3a recursive listing logic
                 Key: HIVE-25960
                 URL: https://issues.apache.org/jira/browse/HIVE-25960
             Project: Hive
          Issue Type: Bug
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

To make the path relative:

{code:java}
Path relativePath = new Path(each.getPath().toString().replace(base.toString(), ""));
{code}

Here {{base}} is the FileStatus, not the Path: it should be {{base.getPath().toString()}}. And instead of {{replace()}} it should be {{replaceFirst()}}.
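A minimal sketch (plain strings, hypothetical names) of why {{replace()}} is wrong here: it substitutes every occurrence of the base string, so a child path that happens to repeat the base's last component gets mangled, while {{replaceFirst()}} only strips the leading prefix. Note that {{replaceFirst()}} takes a regex, so the base is quoted to be treated literally.

```java
import java.util.regex.Pattern;

public class RelativePathDemo {
    // Strip the base prefix from a fully qualified child path.
    // Pattern.quote makes replaceFirst treat the base as a literal string.
    static String relativize(String base, String child) {
        return child.replaceFirst(Pattern.quote(base), "");
    }

    public static void main(String[] args) {
        String base = "/data";
        String child = "/data/sub/data/file";  // base's last component recurs
        // replace() strips both "/data" occurrences and corrupts the path:
        System.out.println(child.replace(base, ""));   // "/sub/file" - wrong
        System.out.println(relativize(base, child));   // "/sub/data/file" - right
    }
}
```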
[jira] [Created] (HIVE-25928) Auto bootstrap in case of incremental failure
Ayush Saxena created HIVE-25928:
-----------------------------------

             Summary: Auto bootstrap in case of incremental failure
                 Key: HIVE-25928
                 URL: https://issues.apache.org/jira/browse/HIVE-25928
             Project: Hive
          Issue Type: Sub-task
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

If the target database is dropped due to failures in the incremental cycle, make dump & load automatically detect this and fall back to a bootstrap cycle.
[jira] [Created] (HIVE-25921) Overwrite table metadata for bootstrapped tables
Ayush Saxena created HIVE-25921:
-----------------------------------

             Summary: Overwrite table metadata for bootstrapped tables
                 Key: HIVE-25921
                 URL: https://issues.apache.org/jira/browse/HIVE-25921
             Project: Hive
          Issue Type: Sub-task
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

When bootstrapping the tables, the initial design drops & re-creates them. Instead of dropping & recreating, implement an overwrite of the table metadata.
[jira] [Created] (HIVE-25895) Bootstrap tables in table_diff during Incremental Load
Ayush Saxena created HIVE-25895:
-----------------------------------

             Summary: Bootstrap tables in table_diff during Incremental Load
                 Key: HIVE-25895
                 URL: https://issues.apache.org/jira/browse/HIVE-25895
             Project: Hive
          Issue Type: Sub-task
            Reporter: Ayush Saxena
            Assignee: Ayush Saxena

Consume the table_diff_ack file and do a bootstrap dump & load for those tables.
[jira] [Created] (HIVE-25857) Replication fails in case of Control Character in the table description
Ayush Saxena created HIVE-25857: --- Summary: Replication fails in case of Control Character in the table description Key: HIVE-25857 URL: https://issues.apache.org/jira/browse/HIVE-25857 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena In case there is a control character in the table metadata. The LOAD fails while decoding the JSON. *Exception:* {noformat} Caused by: com.fasterxml.jackson.core.JsonParseException: Illegal unquoted character ((CTRL-CHAR, code 24)): has to be escaped using backslash to be included in string value at [Source: (String)"{"server":"","servicePrincipal":"","db":"sampletestreplic","table":"testlmo","tableType":"MANAGED_TABLE","tableObjBeforeJson":"{\"1\":{\"str\":\"testlmo\"},\"2\":{\"str\":\"sampletestreplic\"},\"3\":{\"str\":\"hive\"},\"4\":{\"i32\":1641717786},\"5\":{\"i32\":0},\"6\":{\"i32\":0},\"7\":{\"rec\":{\"1\":{\"lst\":[\"rec\",1,{\"1\":{\"str\":\"dc_codeacteurcandidat\"},\"2\":{\"str\":\"string\"},\"3\":{\"str\":\"Code de l'acteur de candidature (^XA' a dterminer, ^XC' conseiller ou ^XD' candidat)\"}}]},\"[truncated 3054 chars]; line: 1, column: 445] at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840) ~[jackson-core-2.10.5.jar:2.10.5] at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:712) ~[jackson-core-2.10.5.jar:2.10.5] at com.fasterxml.jackson.core.base.ParserBase._throwUnquotedSpace(ParserBase.java:1046) ~[jackson-core-2.10.5.jar:2.10.5] at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._finishString2(ReaderBasedJsonParser.java:2073) ~[jackson-core-2.10.5.jar:2.10.5] at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._finishString(ReaderBasedJsonParser.java:2044) ~[jackson-core-2.10.5.jar:2.10.5] at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.getText(ReaderBasedJsonParser.java:293) ~[jackson-core-2.10.5.jar:2.10.5] at 
com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:35) ~[jackson-databind-2.10.5.1.jar:2.10.5.1] at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:10) ~[jackson-databind-2.10.5.1.jar:2.10.5.1] at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138) ~[jackson-databind-2.10.5.1.jar:2.10.5.1] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288) ~[jackson-databind-2.10.5.1.jar:2.10.5.1] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.10.5.1.jar:2.10.5.1] at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4218) ~[jackson-databind-2.10.5.1.jar:2.10.5.1] at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3214) at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3182) at org.apache.hadoop.hive.metastore.messaging.json.JSONMessageDeserializer.getAlterTableMessage(JSONMessageDeserializer.java:111) at org.apache.hadoop.hive.ql.parse.repl.load.message.TableHandler.extract(TableHandler.java:111) at org.apache.hadoop.hive.ql.parse.repl.load.message.TableHandler.handle(TableHandler.java:51) at org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.analyzeEventLoad(IncrementalLoadTasksBuilder.java:213){noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25819) Track event id on target cluster with respect to source cluster
Ayush Saxena created HIVE-25819: --- Summary: Track event id on target cluster with respect to source cluster Key: HIVE-25819 URL: https://issues.apache.org/jira/browse/HIVE-25819 Project: Hive Issue Type: Sub-task Reporter: Ayush Saxena Assignee: Ayush Saxena When loading an event on the target cluster, we keep track of the event id loaded with respect to the source cluster. Keep track of the event id with respect to the target cluster as well, so that while coming back B->A we can figure out from where to start dumping events. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25787) Prevent duplicate paths in the fileList while adding an entry to NotificationLog
Ayush Saxena created HIVE-25787: --- Summary: Prevent duplicate paths in the fileList while adding an entry to NotificationLog Key: HIVE-25787 URL: https://issues.apache.org/jira/browse/HIVE-25787 Project: Hive Issue Type: Bug Reporter: Ayush Saxena As of now, while adding entries to notification logs, in case of retries, the same path sometimes gets added to the insert notification log entry, which during replication leads to failures during copy. Avoid having the same path more than once for a single transaction. -- This message was sent by Atlassian Jira (v8.20.1#820001)
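The retry-safe behaviour described above can be sketched with an order-preserving set; this is a hypothetical illustration (RetrySafeFileList is not the actual Hive class), assuming entries are plain path strings:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: a file list that silently ignores a path it has
// already recorded, so a retried add cannot produce a duplicate entry.
class RetrySafeFileList {
    // LinkedHashSet keeps insertion order while rejecting duplicates.
    private final Set<String> paths = new LinkedHashSet<>();

    // Returns true only when the path is newly recorded.
    boolean add(String path) {
        return paths.add(path);
    }

    List<String> entries() {
        return new ArrayList<>(paths);
    }
}
```

With this shape, a retried write of the same transaction's path is a no-op rather than a second list entry.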
[jira] [Created] (HIVE-25742) Fix WriteOutput in Utils for Replication
Ayush Saxena created HIVE-25742: --- Summary: Fix WriteOutput in Utils for Replication Key: HIVE-25742 URL: https://issues.apache.org/jira/browse/HIVE-25742 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena The present implementation uses {{IOUtils.closeStream(outStream);}} which swallows any exception thrown while closing the file, hence falsely conveying that the file has been written successfully even when data fails to flush during close or some other exception occurs while concluding the file. -- This message was sent by Atlassian Jira (v8.20.1#820001)
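A minimal sketch of this failure mode, using a generic Closeable in place of the real output stream (the method names here are illustrative, not Hive's):

```java
import java.io.Closeable;
import java.io.IOException;

class CloseSemantics {
    // The problematic pattern: the close exception is swallowed, so a
    // flush failure during close() is reported as success.
    static boolean finishQuietly(Closeable out) {
        try {
            out.close();
        } catch (IOException ignored) {
            // exception eaten; the caller never learns the write failed
        }
        return true;
    }

    // The safer pattern: let the close exception propagate
    // (try-with-resources does this) so the caller can tell the file
    // was not completely written.
    static boolean finishStrictly(Closeable out) {
        try (Closeable c = out) {
            return true;
        } catch (IOException e) {
            return false; // failure while concluding the file is surfaced
        }
    }
}
```

The design point is simply that close() is part of the write: an exception during it must reach the caller, not be logged and dropped.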
[jira] [Created] (HIVE-25708) Implement creation of table_diff
Ayush Saxena created HIVE-25708: --- Summary: Implement creation of table_diff Key: HIVE-25708 URL: https://issues.apache.org/jira/browse/HIVE-25708 Project: Hive Issue Type: Sub-task Reporter: Ayush Saxena Assignee: Ayush Saxena Generate a table_diff file with the list of tables modified on cluster A after the last successfully loaded event id on B; these tables need to be bootstrapped. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25700) Prevent deletion of Notification Events post restarts
Ayush Saxena created HIVE-25700: --- Summary: Prevent deletion of Notification Events post restarts Key: HIVE-25700 URL: https://issues.apache.org/jira/browse/HIVE-25700 Project: Hive Issue Type: Sub-task Reporter: Ayush Saxena Assignee: Ayush Saxena In DR scenarios, when the Hive service goes down, prevent immediate deletion of entries in the Notification Log. Give time for admins to reconfigure properties to handle the further replication process. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25699) Optimise Bootstrap in case of Failovers
Ayush Saxena created HIVE-25699: --- Summary: Optimise Bootstrap in case of Failovers Key: HIVE-25699 URL: https://issues.apache.org/jira/browse/HIVE-25699 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Optimise the bootstrap process in case of planned and unplanned failovers. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25579) LOAD overwrite appends rather than overwriting
Ayush Saxena created HIVE-25579: --- Summary: LOAD overwrite appends rather than overwriting Key: HIVE-25579 URL: https://issues.apache.org/jira/browse/HIVE-25579 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena The overwrite query gets converted to append. {noformat} 7b6-4b43-8452-52c44e8a2f71): LOAD DATA INPATH 'hdfs://ayushsaxena-1.ayushsaxena.root.hwx.site:8020/warehouse/tablespace/external/hive/test_ext/00_0' OVERWRITE INTO TABLE test_spark 2021-09-30 03:30:23,033 INFO org.apache.hadoop.hive.ql.lockmgr.DbTxnManager: [db2ab9c9-bf54-4304-bc06-e3bef76f2e79 HiveServer2-Handler-Pool: Thread-2600]: Opened txnid:15 2021-09-30 03:30:23,035 INFO org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer: [db2ab9c9-bf54-4304-bc06-e3bef76f2e79 HiveServer2-Handler-Pool: Thread-2600]: Starting caching scope for: hive_20210930033023_bb1f6dc4-d7b6-4b43-8452-52c44e8a2f71 2021-09-30 03:30:23,042 INFO org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer: [db2ab9c9-bf54-4304-bc06-e3bef76f2e79 HiveServer2-Handler-Pool: Thread-2600]: Load data triggered a Tez job instead of usual file operation 2021-09-30 03:30:23,042 INFO org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer: [db2ab9c9-bf54-4304-bc06-e3bef76f2e79 HiveServer2-Handler-Pool: Thread-2600]: Going to reparse as {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25571) Fix Metastore script for Oracle Database
Ayush Saxena created HIVE-25571: --- Summary: Fix Metastore script for Oracle Database Key: HIVE-25571 URL: https://issues.apache.org/jira/browse/HIVE-25571 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena Error:1 {noformat} 354/359 CREATE UNIQUE INDEX DBPRIVILEGEINDEX ON DC_PRIVS (AUTHORIZER,NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,DC_PRIV,GRANTOR,GRANTOR_TYPE); Error: ORA-00955: name is already used by an existing object (state=42000,code=955) Aborting command set because "force" is false and command failed: "CREATE UNIQUE INDEX DBPRIVILEGEINDEX ON DC_PRIVS (AUTHORIZER,NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,DC_PRIV,GRANTOR,GRANTOR_TYPE);" [ERROR] 2021-09-29 09:18:59.075 [main] MetastoreSchemaTool - Schema initialization FAILED! Metastore state would be inconsistent! Schema initialization FAILED! Metastore state would be inconsistent!{noformat} Error:2 {noformat} Error: ORA-00900: invalid SQL statement (state=42000,code=900) Aborting command set because "force" is false and command failed: "=== -- HIVE-24396 -- Create DataCo{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25551) Fix Schema upgrade sql for mssql
Ayush Saxena created HIVE-25551: --- Summary: Fix Schema upgrade sql for mssql Key: HIVE-25551 URL: https://issues.apache.org/jira/browse/HIVE-25551 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena The schema upgrade is failing with: {noformat} ALTER TABLE "DBS" ADD "TYPE" nvarchar(32) DEFAULT "NATIVE" NOT NULL; Error: The name "NATIVE" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted. (state=S0001,code=128) Aborting command set because "force" is false and command failed: "ALTER TABLE "DBS" ADD "TYPE" nvarchar(32) DEFAULT "NATIVE" NOT NULL;" [ERROR] 2021-09-23 18:34:34.917 [main] MetastoreSchemaTool - Upgrade FAILED! Metastore state would be inconsistent !! Upgrade FAILED! Metastore state would be inconsistent !! [ERROR] 2021-09-23 18:34:34.917 [main] MetastoreSchemaTool - Underlying cause: java.io.IOException : Schema script failed, errorcode OTHER Underlying cause: java.io.IOException : Schema script failed, errorcode OTHER [ERROR] 2021-09-23 18:34:34.917 [main] MetastoreSchemaTool - Use --verbose for detailed stacktrace. Use --verbose for detailed stacktrace. [ERROR] 2021-09-23 18:34:34.917 [main] MetastoreSchemaTool - *** schemaTool failed *** {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25550) Increase the RM_PROGRESS column to accommodate the metrics stat
Ayush Saxena created HIVE-25550: --- Summary: Increase the RM_PROGRESS column to accommodate the metrics stat Key: HIVE-25550 URL: https://issues.apache.org/jira/browse/HIVE-25550 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena Presently it fails with the following trace: {noformat} [[Event Name: EVENT_ALLOC_WRITE_ID; Total Number: 213; Total Time: 85347.0; Mean: 400.6901408450704; Median: 392.0; Standard Deviation: 33.99178239314741; Variance: 1155.4412702630862; Kurtosis: 83.69411620601193; Skewness: 83.69411620601193; 25th Percentile: 384.0; 50th Percentile: 392.0; 75th Percentile: 408.0; 90th Percentile: 417.0; Top 5 EventIds(EventId=Time) {1498476=791, 1498872=533, 1497805=508, 1498808=500, 1499027=492};]]}"}]}" in column ""RM_PROGRESS"" that has maximum length of 4000. Please correct your data! at org.datanucleus.store.rdbms.mapping.datastore.CharRDBMSMapping.setString(CharRDBMSMapping.java:254) ~[datanucleus-rdbms-4.1.19.jar:?] at org.datanucleus.store.rdbms.mapping.java.SingleFieldMapping.setString(SingleFieldMapping.java:180) ~{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25538) CommitTxn replay failing during incremental run
Ayush Saxena created HIVE-25538: --- Summary: CommitTxn replay failing during incremental run Key: HIVE-25538 URL: https://issues.apache.org/jira/browse/HIVE-25538 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena CommitTxn fails during an incremental run if the source file is deleted after the copy and before checksum validation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25350) Replication fails for external tables on setting owner/groups
Ayush Saxena created HIVE-25350: --- Summary: Replication fails for external tables on setting owner/groups Key: HIVE-25350 URL: https://issues.apache.org/jira/browse/HIVE-25350 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena DirCopyTask tries to preserve user and group permissions, irrespective of whether they have been specified to be preserved or not. Changing user/group requires superuser privileges, hence the task fails. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25341) Reduce FileSystem calls in case of drop database cascade
Ayush Saxena created HIVE-25341: --- Summary: Reduce FileSystem calls in case of drop database cascade Key: HIVE-25341 URL: https://issues.apache.org/jira/browse/HIVE-25341 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Reduce the number of FileSystem calls made in case of drop database cascade. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25336) Use single call to get tables in DropDatabaseAnalyzer
Ayush Saxena created HIVE-25336: --- Summary: Use single call to get tables in DropDatabaseAnalyzer Key: HIVE-25336 URL: https://issues.apache.org/jira/browse/HIVE-25336 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Optimise org.apache.hadoop.hive.ql.ddl.database.drop.DropDatabaseAnalyzer.analyzeInternal(DropDatabaseAnalyzer.java:61), where it fetches the full table objects one by one. Move to a single call; this could save around 20+ seconds when a large number of tables are present. -- This message was sent by Atlassian Jira (v8.3.4#803005)
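The optimisation amounts to replacing a per-table fetch loop with one batched call. A hedged sketch, where MetaClient is an illustrative stand-in for the metastore client rather than the real IMetaStoreClient API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the metastore client; not the real HMS API.
interface MetaClient {
    List<String> listTableNames(String db);
    Object getTable(String db, String table);               // one round trip per table
    List<Object> getTables(String db, List<String> names);  // one round trip for all
}

class DropDatabaseSketch {
    // Before: one metastore round trip per table.
    static List<Object> fetchOneByOne(MetaClient client, String db) {
        List<Object> tables = new ArrayList<>();
        for (String name : client.listTableNames(db)) {
            tables.add(client.getTable(db, name));
        }
        return tables;
    }

    // After: a single batched round trip fetches every table object.
    static List<Object> fetchBatched(MetaClient client, String db) {
        return client.getTables(db, client.listTableNames(db));
    }
}
```

For a database with N tables this turns N+1 round trips into 2, which is where the multi-second saving comes from.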
[jira] [Created] (HIVE-25301) Expose notification log table through sys db
Ayush Saxena created HIVE-25301: --- Summary: Expose notification log table through sys db Key: HIVE-25301 URL: https://issues.apache.org/jira/browse/HIVE-25301 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Expose the notification_log table in RDBMS through Hive sys database -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25231) Add an ability to migrate the generated CSV to a hive table in replstats
Ayush Saxena created HIVE-25231: --- Summary: Add an ability to migrate the generated CSV to a hive table in replstats Key: HIVE-25231 URL: https://issues.apache.org/jira/browse/HIVE-25231 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add an option to replstats.sh to load the CSV generated using the replication policy into a hive table/view. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25218) Add a replication migration tool for external tables
Ayush Saxena created HIVE-25218: --- Summary: Add a replication migration tool for external tables Key: HIVE-25218 URL: https://issues.apache.org/jira/browse/HIVE-25218 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add a tool which can confirm migration of external tables post replication from one cluster to another. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25207) Expose incremental load statistics via JMX
Ayush Saxena created HIVE-25207: --- Summary: Expose incremental load statistics via JMX Key: HIVE-25207 URL: https://issues.apache.org/jira/browse/HIVE-25207 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Expose the incremental load details and statistics at per policy level in the JMX. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25165) Generate & track statistics per event type for incremental load in replication metrics
Ayush Saxena created HIVE-25165: --- Summary: Generate & track statistics per event type for incremental load in replication metrics Key: HIVE-25165 URL: https://issues.apache.org/jira/browse/HIVE-25165 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Generate and track statistics like mean, median, standard deviation, variance, etc. per event type during incremental load and store them in the replication statistics. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25133) Allow custom configs for database level paths in external table replication
Ayush Saxena created HIVE-25133: --- Summary: Allow custom configs for database level paths in external table replication Key: HIVE-25133 URL: https://issues.apache.org/jira/browse/HIVE-25133 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Allow a way to provide configurations which should be used only by the external data copy task of database level paths -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25092) Add a shell script to fetch the statistics of replication data copy tasks
Ayush Saxena created HIVE-25092: --- Summary: Add a shell script to fetch the statistics of replication data copy tasks Key: HIVE-25092 URL: https://issues.apache.org/jira/browse/HIVE-25092 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add a shell script which can fetch the statistics of the Mapred (Distcp) jobs launched as part of replication. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25059) Alter event is converted to rename during replication
Ayush Saxena created HIVE-25059: --- Summary: Alter event is converted to rename during replication Key: HIVE-25059 URL: https://issues.apache.org/jira/browse/HIVE-25059 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena If the database/table names have different cases, then while creating an alter event it treats this as a change of name and creates a RENAME event rather than an ALTER. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-25035) Allow creating single copy tasks for configured paths during external table replication
Ayush Saxena created HIVE-25035: --- Summary: Allow creating single copy tasks for configured paths during external table replication Key: HIVE-25035 URL: https://issues.apache.org/jira/browse/HIVE-25035 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena As of now, one task per table is created for external table replication. In case there are multiple tables under one common directory, provide a way to create a single task for all those tables. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24978) Optimise number of DROP_PARTITION events created.
Ayush Saxena created HIVE-24978: --- Summary: Optimise number of DROP_PARTITION events created. Key: HIVE-24978 URL: https://issues.apache.org/jira/browse/HIVE-24978 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Presently there is one event for every drop; optimise by merging them to reduce the number of calls to HMS. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24961) Use database name as default in the mapred job name for replication
Ayush Saxena created HIVE-24961: --- Summary: Use database name as default in the mapred job name for replication Key: HIVE-24961 URL: https://issues.apache.org/jira/browse/HIVE-24961 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add database as job name for replication. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24924) Optimize checkpointing flow in incremental load
Ayush Saxena created HIVE-24924: --- Summary: Optimize checkpointing flow in incremental load Key: HIVE-24924 URL: https://issues.apache.org/jira/browse/HIVE-24924 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Attempt reducing alter calls for checkpointing during repl load -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24919) Optimise AddPartition replication call
Ayush Saxena created HIVE-24919: --- Summary: Optimise AddPartition replication call Key: HIVE-24919 URL: https://issues.apache.org/jira/browse/HIVE-24919 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Avoid unnecessary alter partition call while replaying add partition call during replication. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24895) Add a DataCopyEnd task for external table replication
Ayush Saxena created HIVE-24895: --- Summary: Add a DataCopyEnd task for external table replication Key: HIVE-24895 URL: https://issues.apache.org/jira/browse/HIVE-24895 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add a task to mark the end of external table copy. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24853) HMS leaks queries in case of timeout
Ayush Saxena created HIVE-24853: --- Summary: HMS leaks queries in case of timeout Key: HIVE-24853 URL: https://issues.apache.org/jira/browse/HIVE-24853 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena The queries aren't closed in case of timeout. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24852) Add support for Snapshots during external table replication
Ayush Saxena created HIVE-24852: --- Summary: Add support for Snapshots during external table replication Key: HIVE-24852 URL: https://issues.apache.org/jira/browse/HIVE-24852 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add support for use of snapshot diff for external table replication. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24836) Add replication policy name and schedule id as a job name for all the distcp jobs
Ayush Saxena created HIVE-24836: --- Summary: Add replication policy name and schedule id as a job name for all the distcp jobs Key: HIVE-24836 URL: https://issues.apache.org/jira/browse/HIVE-24836 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Add replication policy name and schedule id as a job name for all the distcp jobs launched as part of the schedule -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24827) Hive aggregation query returns incorrect results for non text files
Ayush Saxena created HIVE-24827: --- Summary: Hive aggregation query returns incorrect results for non text files Key: HIVE-24827 URL: https://issues.apache.org/jira/browse/HIVE-24827 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena When a header & footer are configured for non-text files, the aggregation query returns a wrong result. Propose ignoring this property for non-text files. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24755) Recycle fails if the cm root isn't absolute
Ayush Saxena created HIVE-24755: --- Summary: Recycle fails if the cm root isn't absolute Key: HIVE-24755 URL: https://issues.apache.org/jira/browse/HIVE-24755 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24750) Create a single copy task for external tables within default DB location
Ayush Saxena created HIVE-24750: --- Summary: Create a single copy task for external tables within default DB location Key: HIVE-24750 URL: https://issues.apache.org/jira/browse/HIVE-24750 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena Assignee: Ayush Saxena Presently we create one task for each table, but for tables within the default DB location we can copy the DB location in one task. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24698) Sync ACLs for the table directory during external table replication.
Ayush Saxena created HIVE-24698: --- Summary: Sync ACLs for the table directory during external table replication. Key: HIVE-24698 URL: https://issues.apache.org/jira/browse/HIVE-24698 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena Set similar ACLs on the destination table directory in case the source has ACLs enabled or set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24674) Set repl.source.for property in the db if db is under replication
Ayush Saxena created HIVE-24674: --- Summary: Set repl.source.for property in the db if db is under replication Key: HIVE-24674 URL: https://issues.apache.org/jira/browse/HIVE-24674 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena Add the repl.source.for property to the database, in case it is not already set, if the database is under replication. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24426) Spark job fails with fixed umbilical server port
Ayush Saxena created HIVE-24426: --- Summary: Spark job fails with fixed umbilical server port Key: HIVE-24426 URL: https://issues.apache.org/jira/browse/HIVE-24426 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena In case of cloud deployments, multiple executors are launched on the name node, and in case a fixed umbilical port is specified using {{spark.hadoop.hive.llap.daemon.umbilical.port=30006}}, the job fails with a BindException. {noformat} Caused by: java.net.BindException: Problem binding to [0.0.0.0:30006] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:840) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:741) at org.apache.hadoop.ipc.Server.bind(Server.java:605) at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1169) at org.apache.hadoop.ipc.Server.<init>(Server.java:3032) at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1039) at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:438) at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:332) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:848) at org.apache.hadoop.hive.llap.tezplugins.helpers.LlapTaskUmbilicalServer.<init>(LlapTaskUmbilicalServer.java:67) at org.apache.hadoop.hive.llap.ext.LlapTaskUmbilicalExternalClient$SharedUmbilicalServer.<init>(LlapTaskUmbilicalExternalClient.java:122) ... 
26 more Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85) at org.apache.hadoop.ipc.Server.bind(Server.java:588) ... 34 more{noformat} To counter this, it is better to provide a range of ports. -- This message was sent by Atlassian Jira (v8.3.4#803005)
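The range-of-ports idea can be sketched with plain java.net.ServerSocket; the class and method names here are illustrative, not the LLAP umbilical code:

```java
import java.io.IOException;
import java.net.ServerSocket;

class PortRangeBinder {
    // Try each port in [lo, hi] in order and return the first successful
    // bind; rethrow the last bind failure if the whole range is busy.
    static ServerSocket bindInRange(int lo, int hi) throws IOException {
        IOException last = null;
        for (int port = lo; port <= hi; port++) {
            try {
                return new ServerSocket(port);
            } catch (IOException e) {
                last = e; // port already in use, try the next one
            }
        }
        throw last != null ? last : new IOException("empty port range " + lo + "-" + hi);
    }
}
```

With a configured range instead of a single fixed port, a second executor on the same host simply binds to the next free port rather than failing with a BindException.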
[jira] [Created] (HIVE-24407) Unable to read data with Hbase snapshot set
Ayush Saxena created HIVE-24407: --- Summary: Unable to read data with Hbase snapshot set Key: HIVE-24407 URL: https://issues.apache.org/jira/browse/HIVE-24407 Project: Hive Issue Type: Bug Components: HBase Handler Reporter: Ayush Saxena # CREATE TABLE foo(rowkey STRING, a STRING ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1') TBLPROPERTIES ('hbase.table.name' = 'foo'); # insert into foo values('row0','0'),('row1','1'),('row2','2'); # Move To Hbase-shell # hbase(main):002:0> snapshot 'foo','testing' # set hive.hbase.snapshot.name=testing; # select count(*) FROM foo WHERE rowkey = 'row0'; This should return 1, but doesn't -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24229) DirectSql fails in case of OracleDB
Ayush Saxena created HIVE-24229: --- Summary: DirectSql fails in case of OracleDB Key: HIVE-24229 URL: https://issues.apache.org/jira/browse/HIVE-24229 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena Direct Sql fails due to different data type mapping in case of Oracle DB. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-23935) Fetching primaryKey through beeline fails with NPE
Ayush Saxena created HIVE-23935: --- Summary: Fetching primaryKey through beeline fails with NPE Key: HIVE-23935 URL: https://issues.apache.org/jira/browse/HIVE-23935 Project: Hive Issue Type: Bug Reporter: Ayush Saxena Assignee: Ayush Saxena Fetching PrimaryKey of a table through Beeline !primarykey fails with NPE {noformat} 0: jdbc:hive2://localhost:1> !primarykeys Persons Error: MetaException(message:java.lang.NullPointerException) (state=,code=0) org.apache.hive.service.cli.HiveSQLException: MetaException(message:java.lang.NullPointerException) at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:360) at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:351) at org.apache.hive.jdbc.HiveDatabaseMetaData.getPrimaryKeys(HiveDatabaseMetaData.java:573) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hive.beeline.Reflector.invoke(Reflector.java:89) at org.apache.hive.beeline.Commands.metadata(Commands.java:125) at org.apache.hive.beeline.Commands.primarykeys(Commands.java:231) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:57) at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1465) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1504) at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1364) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1134) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1082) at 
org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:546) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:528) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:323) at org.apache.hadoop.util.RunJar.main(RunJar.java:236){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-23772) Relocate calcite-core to prevent NoSuchFieldError
Ayush Saxena created HIVE-23772: --- Summary: Relocate calcite-core to prevent NoSuchFieldError Key: HIVE-23772 URL: https://issues.apache.org/jira/browse/HIVE-23772 Project: Hive Issue Type: Bug Affects Versions: 4.0.0 Reporter: Ayush Saxena Assignee: Ayush Saxena Exception trace due to conflict with {{calcite-core}}: {noformat} Caused by: java.lang.NoSuchFieldError: operands at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitCall(ASTConverter.java:785) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitCall(ASTConverter.java:509) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.calcite.rex.RexCall.accept(RexCall.java:191) ~[calcite-core-1.21.0.jar:1.21.0] at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convert(ASTConverter.java:239) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convertSource(ASTConverter.java:437) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convert(ASTConverter.java:124) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convert(ASTConverter.java:112) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1620) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:555) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12456) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:433) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:220) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:104) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:184) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:602) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:548) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:542) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:125) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)