[jira] [Created] (HIVE-26395) Support CREATE TABLE LIKE FILE for PARQUET

2022-07-14 Thread John Sherman (Jira)
John Sherman created HIVE-26395:
---

 Summary: Support CREATE TABLE LIKE FILE for PARQUET
 Key: HIVE-26395
 URL: https://issues.apache.org/jira/browse/HIVE-26395
 Project: Hive
  Issue Type: New Feature
  Components: HiveServer2
Reporter: John Sherman
Assignee: John Sherman


The intent is to allow a user to create a table and derive its schema from a 
user-provided Parquet file. A secondary goal is to generalize this support so 
that other SerDes/formats can implement the feature easily.

The proposed syntax is:
CREATE TABLE <table name> LIKE FILE <file format> '<path to file>';

For example:
{code:java}
CREATE TABLE like_test_all_types LIKE FILE PARQUET 
'${system:test.tmp.dir}/test_all_types/00_0';{code}
With partitioning:
{code}
CREATE TABLE like_test_partitioning LIKE FILE PARQUET 
'${system:test.tmp.dir}/test_all_types/00_0' PARTITIONED BY (year STRING, 
month STRING);
{code}
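
Presumably the schema would be derived by reading the Parquet file footer. A rough sketch of that idea using the parquet-mr API follows (the class name ParquetSchemaPeek is hypothetical and not part of any proposed patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.schema.MessageType;

public class ParquetSchemaPeek {
  // Illustrative only: read the footer of a user-provided Parquet file and
  // print its schema, which is the information CREATE TABLE LIKE FILE would
  // need to translate into Hive column definitions.
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(new Path(args[0]), conf))) {
      MessageType schema = reader.getFooter().getFileMetaData().getSchema();
      System.out.println(schema);
    }
  }
}
{code}
Generalizing the feature could then amount to letting each SerDe translate its native schema (here a Parquet MessageType) into Hive column types.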



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-26394) Query based compaction fails for table with more than 6 columns

2022-07-14 Thread mahesh kumar behera (Jira)
mahesh kumar behera created HIVE-26394:
--

 Summary: Query based compaction fails for table with more than 6 
columns
 Key: HIVE-26394
 URL: https://issues.apache.org/jira/browse/HIVE-26394
 Project: Hive
  Issue Type: Bug
  Components: Hive, HiveServer2
Reporter: mahesh kumar behera
Assignee: mahesh kumar behera


Query-based compaction creates a temporary external table whose location points 
to the location of the table being compacted, so the external table's directory 
contains ACID-format files. When a query is run against this temporary table, 
the table type is inferred by reading the files present at the table location; 
because those files are ACID-compatible, the table is incorrectly treated as an 
ACID table. This causes a failure while generating the SARG columns, since the 
column numbers do not match the temporary table's schema.
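
As a sketch of the failure mode (a hypothetical helper named looksLikeAcid, not the actual Hive code path): because the temporary external table points at the compacted table's directory, a layout-based check along these lines finds delta/base directories and concludes the data is ACID.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AcidLayoutCheck {
  // Hypothetical helper: decides whether a directory "looks like" ACID data
  // purely from its file layout, the way the table type ends up being inferred
  // for the temp external table that points at the compacted table's location.
  static boolean looksLikeAcid(Path tableLocation, Configuration conf) throws IOException {
    FileSystem fs = tableLocation.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(tableLocation)) {
      String name = status.getPath().getName();
      if (name.startsWith("delta_") || name.startsWith("base_")) {
        return true; // ACID-style directories present, so the data is treated as ACID
      }
    }
    return false;
  }
}
{code}
The mismatch between that inferred ACID schema and the temporary table's declared columns is what surfaces as the ArrayIndexOutOfBoundsException in the log below.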

 
{code:java}
Error doing query based minor compaction
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run INSERT into table delta_cara_pn_tmp_compactor_clean_1656061070392_result select `operation`, `originalTransaction`, `bucket`, `rowId`, `currentTransaction`, `row` from delta_clean_1656061070392 where `originalTransaction` not in (749,750,766,768,779,783,796,799,818,1145,1149,1150,1158,1159,1160,1165,1166,1169,1173,1175,1176,1871,9631)
    at org.apache.hadoop.hive.ql.DriverUtils.runOnDriver(DriverUtils.java:73)
    at org.apache.hadoop.hive.ql.txn.compactor.QueryCompactor.runCompactionQueries(QueryCompactor.java:138)
    at org.apache.hadoop.hive.ql.txn.compactor.MinorQueryCompactor.runCompaction(MinorQueryCompactor.java:70)
    at org.apache.hadoop.hive.ql.txn.compactor.Worker.findNextCompactionAndExecute(Worker.java:498)
    at org.apache.hadoop.hive.ql.txn.compactor.Worker.lambda$run$0(Worker.java:120)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: (responseCode = 2, errorMessage = FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1656061159324__1_00, diagnostics=[Task failed, taskId=task_1656061159324__1_00_00, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1656061159324__1_00_00_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 6
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:348)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:277)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:75)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:62)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:62)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:38)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 6
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
    at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:706)
{code}

[jira] [Created] (HIVE-26393) add udf jar on tez has version conflict

2022-07-14 Thread katty he (Jira)
katty he created HIVE-26393:
---

 Summary: add udf jar on tez has version conflict
 Key: HIVE-26393
 URL: https://issues.apache.org/jira/browse/HIVE-26393
 Project: Hive
  Issue Type: Bug
Reporter: katty he


When I add a custom UDF jar that was built against a different Hadoop version 
than the Hive cluster and then run a SELECT query on Tez, the query fails 
because of a version conflict with the cluster. The same query succeeds on MR. 
Why does running the UDF jar on Tez need to load the dependencies bundled in 
the UDF?
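
A common way to avoid this kind of conflict is to compile the UDF against Hive/Hadoop as provided dependencies and not bundle them in the jar at all, so the cluster's own versions are used at runtime on both Tez and MR. A minimal sketch (the class UpperCaseUDF is purely illustrative and unrelated to this report):
{code:java}
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Minimal illustrative UDF: it compiles against Hive/Hadoop classes, but the
// jar itself should not bundle them, so the cluster's own Hadoop version is
// used at runtime on both Tez and MR.
public class UpperCaseUDF extends UDF {
  public Text evaluate(Text input) {
    if (input == null) {
      return null;
    }
    return new Text(input.toString().toUpperCase());
  }
}
{code}
Whether the bundled dependencies fully explain the Tez vs. MR difference here still needs investigation.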



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-26392) Move StringTableMap tests into a dedicated test class

2022-07-14 Thread Zsolt Miskolczi (Jira)
Zsolt Miskolczi created HIVE-26392:
--

 Summary: Move StringTableMap tests into a dedicated test class
 Key: HIVE-26392
 URL: https://issues.apache.org/jira/browse/HIVE-26392
 Project: Hive
  Issue Type: Test
  Components: Hive
Reporter: Zsolt Miskolczi


`StringTableMap` has unit tests in `TestWorker.java`. They could be in their 
own dedicated test class instead. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-26391) [CVE-2020-36518] Upgrade com.fasterxml.jackson.core:jackson-databind from 2.13.0 to 2.13.2.1 to fix the vulnerability.

2022-07-14 Thread Aman Raj (Jira)
Aman Raj created HIVE-26391:
---

 Summary: [CVE-2020-36518] Upgrade 
com.fasterxml.jackson.core:jackson-databind from 2.13.0 to 2.13.2.1 to fix the 
vulnerability.
 Key: HIVE-26391
 URL: https://issues.apache.org/jira/browse/HIVE-26391
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Aman Raj


jackson-databind is a data-binding package for the Jackson Data Processor. 
jackson-databind allows a Java stack overflow exception and denial of service 
via a large depth of nested objects.

Upgrade com.fasterxml.jackson.core:jackson-databind from 2.13.0 to 2.13.2.1 to 
fix the vulnerability.
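
To illustrate the vulnerability class (a hedged sketch; DeepNestingRepro is not code from this project): on the affected versions, binding deeply nested JSON into an untyped Object can overflow the parser's stack.
{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;

public class DeepNestingRepro {
  // Illustrative sketch of CVE-2020-36518: on vulnerable jackson-databind
  // versions, binding deeply nested JSON into an untyped Object can throw
  // StackOverflowError, which an attacker can use for denial of service.
  public static void main(String[] args) throws Exception {
    int depth = 100_000; // deep enough to exhaust the default thread stack
    StringBuilder json = new StringBuilder();
    for (int i = 0; i < depth; i++) {
      json.append("{\"a\":");
    }
    json.append("1");
    for (int i = 0; i < depth; i++) {
      json.append("}");
    }
    ObjectMapper mapper = new ObjectMapper();
    // Expected on 2.13.0: StackOverflowError; fixed versions avoid the unbounded recursion.
    Object value = mapper.readValue(json.toString(), Object.class);
    System.out.println("Parsed without error: " + value.getClass());
  }
}
{code}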



--
This message was sent by Atlassian Jira
(v8.20.10#820010)