This is an automated email from the ASF dual-hosted git repository.
dkuzmenko pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git
The following commit(s) were added to refs/heads/master by this push:
new 26154ad51f2 HIVE-28591: Fix partition column names evaluation in Vectorizer#validateInputFormatAndSchemaEvolution (Denys Kuzmenko, reviewed by Dmitriy Fingerman, Soumyakanti Das)
26154ad51f2 is described below
commit 26154ad51f20d7dd21e4b8efc4052a18b4289c3c
Author: Denys Kuzmenko <[email protected]>
AuthorDate: Wed Nov 27 15:36:35 2024 +0100
HIVE-28591: Fix partition column names evaluation in Vectorizer#validateInputFormatAndSchemaEvolution (Denys Kuzmenko, reviewed by Dmitriy Fingerman, Soumyakanti Das)
Closes #5519
---
.../positive/dynamic_partition_writes.q.out | 70 +++++++++++-----------
.../hive/ql/optimizer/physical/Vectorizer.java | 4 +-
.../results/clientpositive/beeline/mapjoin2.q.out | 4 ++
.../clientpositive/llap/acid_nullscan.q.out | 2 +-
.../clientpositive/llap/annotate_stats_table.q.out | 2 +-
.../results/clientpositive/llap/auto_join29.q.out | 2 +-
.../results/clientpositive/llap/cte_mat_10.q.out | 6 +-
.../results/clientpositive/llap/cte_mat_8.q.out | 4 +-
.../results/clientpositive/llap/dynpart_cast.q.out | 2 +-
.../results/clientpositive/llap/fold_case.q.out | 4 +-
.../test/results/clientpositive/llap/input9.q.out | 2 +-
.../results/clientpositive/llap/insert_into1.q.out | 4 +-
.../llap/insert_only_empty_query.q.out | 2 +-
.../results/clientpositive/llap/mapjoin2.q.out | 16 ++---
.../clientpositive/llap/multi_insert_gby5.q.out | 2 +-
.../clientpositive/llap/partition_boolexpr.q.out | 4 +-
.../clientpositive/llap/ptf_register_use.q.out | 2 +-
.../clientpositive/llap/scratch_col_issue.q.out | 5 +-
.../clientpositive/llap/subquery_null_agg.q.out | 2 +-
.../llap/temp_table_partition_boolexpr.q.out | 4 +-
.../llap/tez_dynpart_hashjoin_4.q.out | 2 +-
.../clientpositive/llap/vector_bucket.q.out | 5 +-
.../vector_reduce_groupby_duplicate_cols.q.out | 23 ++++++-
.../llap/vector_tablesample_rows.q.out | 60 +++++++++++++++++--
.../vectorized_insert_into_bucketed_table.q.out | 5 +-
.../clientpositive/llap/vectorized_mapjoin3.q.out | 5 +-
26 files changed, 159 insertions(+), 84 deletions(-)
diff --git a/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out b/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
index b7690c5579f..f1180d54e0a 100644
--- a/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
+++ b/iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
@@ -1079,9 +1079,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_year_date"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_year(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1095,10 +1095,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=890)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=890)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=596)
+ Group By Operator [GBY_18] (rows=1 width=596)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
@@ -1164,9 +1164,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_year_timestamp"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_year(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1180,10 +1180,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=863)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=863)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=564)
+ Group By Operator [GBY_18] (rows=1 width=564)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
@@ -1249,9 +1249,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_month_date"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_month(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1265,10 +1265,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=890)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=890)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=596)
+ Group By Operator [GBY_18] (rows=1 width=596)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
@@ -1334,9 +1334,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_month_timestamp"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_month(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1350,10 +1350,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=863)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=863)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=564)
+ Group By Operator [GBY_18] (rows=1 width=564)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
@@ -1419,9 +1419,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_day_date"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_day(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1435,10 +1435,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=890)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=890)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=596)
+ Group By Operator [GBY_18] (rows=1 width=596)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
@@ -1504,9 +1504,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_day_timestamp"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_day(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1520,10 +1520,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=863)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=863)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=564)
+ Group By Operator [GBY_18] (rows=1 width=564)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
@@ -1589,9 +1589,9 @@ Stage-3
Dependency Collection{}
Stage-1
Reducer 2 vectorized
- File Output Operator [FS_16]
+ File Output Operator [FS_17]
table:{"name:":"default.tbl_hour_timestamp"}
- Select Operator [SEL_15]
+ Select Operator [SEL_16]
Output:["_col0","_col1","_col2","_col2","iceberg_hour(_col1)"]
<-Map 1 [SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_13]
@@ -1605,10 +1605,10 @@ Stage-3
TableScan [TS_0] (rows=1 width=10)
_dummy_database@_dummy_table,_dummy_table,Tbl:COMPLETE,Col:COMPLETE
Reducer 3 vectorized
- File Output Operator [FS_19]
- Select Operator [SEL_18] (rows=1 width=863)
+ File Output Operator [FS_20]
+ Select Operator [SEL_19] (rows=1 width=863)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17"]
- Group By Operator [GBY_17] (rows=1 width=564)
+ Group By Operator [GBY_18] (rows=1 width=564)
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12"],aggregations:["max(VALUE._col0)","avg(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","min(VALUE._col5)","max(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)","min(VALUE._col9)","max(VALUE._col10)","count(VALUE._col11)","compute_bit_vector_hll(VALUE._col12)"]
<-Map 1 [CUSTOM_SIMPLE_EDGE]
PARTITION_ONLY_SHUFFLE [RS_9]
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java
index 6033c190355..f3dadc46011 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java
@@ -42,6 +42,7 @@ import java.util.regex.Pattern;
import java.util.stream.Collectors;
import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.ArrayUtils;
import org.apache.commons.lang3.tuple.ImmutablePair;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedInputFormatInterface;
@@ -1830,7 +1831,8 @@ public class Vectorizer implements PhysicalPlanResolver {
// (e.g. Avro provides the table schema and ignores the partition schema..).
//
String nextDataColumnsString = ObjectInspectorUtils.getFieldNames(partObjectInspector);
- String[] nextDataColumns = nextDataColumnsString.split(",");
+ String[] nextDataColumns = StringUtils.isBlank(nextDataColumnsString) ?
+ new String[0] : nextDataColumnsString.split(",");
List<String> nextDataColumnList = Arrays.asList(nextDataColumns);
/*
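The guard added above matters because Java's String.split never returns an empty array for an empty input, so an unpartitioned table would otherwise appear to have one (empty) column name. A minimal standalone sketch of that behavior (plain Java, using String#isBlank instead of commons-lang3's StringUtils.isBlank so it runs without dependencies):

```java
public class SplitOnBlank {
    public static void main(String[] args) {
        // "".split(",") yields a one-element array holding "", not an
        // empty array, which inflates the column-name count to 1.
        String[] unguarded = "".split(",");
        System.out.println(unguarded.length); // prints 1

        // The guarded form mirrors the fix: a blank field-name string
        // maps to zero column names instead of one empty name.
        String names = "";
        String[] guarded = names.isBlank() ? new String[0] : names.split(",");
        System.out.println(guarded.length); // prints 0
    }
}
```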
diff --git a/ql/src/test/results/clientpositive/beeline/mapjoin2.q.out b/ql/src/test/results/clientpositive/beeline/mapjoin2.q.out
index 52b4ad11743..4d8296c9e99 100644
--- a/ql/src/test/results/clientpositive/beeline/mapjoin2.q.out
+++ b/ql/src/test/results/clientpositive/beeline/mapjoin2.q.out
@@ -287,6 +287,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
@@ -372,6 +373,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
@@ -463,6 +465,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
@@ -554,6 +557,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
diff --git a/ql/src/test/results/clientpositive/llap/acid_nullscan.q.out b/ql/src/test/results/clientpositive/llap/acid_nullscan.q.out
index dfcd31721b7..c1b59389137 100644
--- a/ql/src/test/results/clientpositive/llap/acid_nullscan.q.out
+++ b/ql/src/test/results/clientpositive/llap/acid_nullscan.q.out
@@ -81,7 +81,7 @@ STAGE PLANS:
tag: -1
value expressions: _col0 (type: bigint)
auto parallelism: false
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Path -> Alias:
nullscan://null/_dummy_database._dummy_table/part_ [_dummy_table]
diff --git a/ql/src/test/results/clientpositive/llap/annotate_stats_table.q.out b/ql/src/test/results/clientpositive/llap/annotate_stats_table.q.out
index ab3c65005dc..88a5a65c890 100644
--- a/ql/src/test/results/clientpositive/llap/annotate_stats_table.q.out
+++ b/ql/src/test/results/clientpositive/llap/annotate_stats_table.q.out
@@ -331,7 +331,7 @@ STAGE PLANS:
output format:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.tmp_n0
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-2
diff --git a/ql/src/test/results/clientpositive/llap/auto_join29.q.out b/ql/src/test/results/clientpositive/llap/auto_join29.q.out
index 18f08394401..14182967c86 100644
--- a/ql/src/test/results/clientpositive/llap/auto_join29.q.out
+++ b/ql/src/test/results/clientpositive/llap/auto_join29.q.out
@@ -3327,7 +3327,7 @@ STAGE PLANS:
null sort order:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/cte_mat_10.q.out b/ql/src/test/results/clientpositive/llap/cte_mat_10.q.out
index 45281a78563..d1444a9cd4c 100644
--- a/ql/src/test/results/clientpositive/llap/cte_mat_10.q.out
+++ b/ql/src/test/results/clientpositive/llap/cte_mat_10.q.out
@@ -84,7 +84,7 @@ STAGE PLANS:
output format:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.a2
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-2
@@ -182,7 +182,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 7
Map Operator Tree:
@@ -275,7 +275,7 @@ STAGE PLANS:
output format:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.b1
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-8
diff --git a/ql/src/test/results/clientpositive/llap/cte_mat_8.q.out b/ql/src/test/results/clientpositive/llap/cte_mat_8.q.out
index a241fa36554..dd29a4fbcb4 100644
--- a/ql/src/test/results/clientpositive/llap/cte_mat_8.q.out
+++ b/ql/src/test/results/clientpositive/llap/cte_mat_8.q.out
@@ -61,7 +61,7 @@ STAGE PLANS:
output format:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.a1
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-2
@@ -139,7 +139,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 6
Map Operator Tree:
diff --git a/ql/src/test/results/clientpositive/llap/dynpart_cast.q.out b/ql/src/test/results/clientpositive/llap/dynpart_cast.q.out
index cdc44d8f5e1..fcace1dab4f 100644
--- a/ql/src/test/results/clientpositive/llap/dynpart_cast.q.out
+++ b/ql/src/test/results/clientpositive/llap/dynpart_cast.q.out
@@ -75,7 +75,7 @@ STAGE PLANS:
Map-reduce partition columns: _col0 (type: int),
_col1 (type: int)
Statistics: Num rows: 1 Data size: 176 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col2 (type: int), _col3 (type:
int), _col4 (type: bigint), _col5 (type: bigint), _col6 (type: binary)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/fold_case.q.out b/ql/src/test/results/clientpositive/llap/fold_case.q.out
index a9125241801..12daf8a143d 100644
--- a/ql/src/test/results/clientpositive/llap/fold_case.q.out
+++ b/ql/src/test/results/clientpositive/llap/fold_case.q.out
@@ -180,7 +180,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
@@ -338,7 +338,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/input9.q.out b/ql/src/test/results/clientpositive/llap/input9.q.out
index 8034c7da7d4..9c528bb93ee 100644
--- a/ql/src/test/results/clientpositive/llap/input9.q.out
+++ b/ql/src/test/results/clientpositive/llap/input9.q.out
@@ -70,7 +70,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 400 Basic
stats: COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: int), _col1 (type:
struct<count:bigint,sum:double,input:int>), _col2 (type: bigint), _col3 (type:
bigint), _col4 (type: binary), _col5 (type: int), _col6 (type: int), _col7
(type: bigint), _col8 (type: binary)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/insert_into1.q.out b/ql/src/test/results/clientpositive/llap/insert_into1.q.out
index d5e366f86ff..5febf3e4e93 100644
--- a/ql/src/test/results/clientpositive/llap/insert_into1.q.out
+++ b/ql/src/test/results/clientpositive/llap/insert_into1.q.out
@@ -595,7 +595,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 400 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: int), _col1 (type:
int), _col2 (type: bigint), _col3 (type: bigint), _col4 (type: binary), _col5
(type: int), _col6 (type: struct<count:bigint,sum:double,input:int>), _col7
(type: bigint), _col8 (type: binary)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
@@ -703,7 +703,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 400 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: int), _col1 (type:
int), _col2 (type: bigint), _col3 (type: bigint), _col4 (type: binary), _col5
(type: int), _col6 (type: struct<count:bigint,sum:double,input:int>), _col7
(type: bigint), _col8 (type: binary)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/insert_only_empty_query.q.out b/ql/src/test/results/clientpositive/llap/insert_only_empty_query.q.out
index c7f7147180c..1f7a34d81ad 100644
--- a/ql/src/test/results/clientpositive/llap/insert_only_empty_query.q.out
+++ b/ql/src/test/results/clientpositive/llap/insert_only_empty_query.q.out
@@ -80,7 +80,7 @@ STAGE PLANS:
Map-reduce partition columns: _col1 (type: int)
Statistics: Num rows: 1 Data size: 200 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: string), _col2 (type:
decimal(3,2))
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/mapjoin2.q.out b/ql/src/test/results/clientpositive/llap/mapjoin2.q.out
index 5db5aac3a8a..6a04e58ba62 100644
--- a/ql/src/test/results/clientpositive/llap/mapjoin2.q.out
+++ b/ql/src/test/results/clientpositive/llap/mapjoin2.q.out
@@ -257,7 +257,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 2
Map Operator Tree:
@@ -271,7 +271,7 @@ STAGE PLANS:
null sort order:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-0
@@ -342,7 +342,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 2
Map Operator Tree:
@@ -359,7 +359,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: int), _col1 (type: int)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-0
@@ -416,7 +416,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 12 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: int), _col1 (type: int),
_col2 (type: int)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 2
Map Operator Tree:
@@ -450,7 +450,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-0
@@ -507,7 +507,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 12 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: int), _col1 (type: int),
_col2 (type: int)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 2
Map Operator Tree:
@@ -541,7 +541,7 @@ STAGE PLANS:
input format:
org.apache.hadoop.mapred.SequenceFileInputFormat
output format:
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
serde:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Stage: Stage-0
diff --git a/ql/src/test/results/clientpositive/llap/multi_insert_gby5.q.out b/ql/src/test/results/clientpositive/llap/multi_insert_gby5.q.out
index 1345395e0ee..7ea49fea98c 100644
--- a/ql/src/test/results/clientpositive/llap/multi_insert_gby5.q.out
+++ b/ql/src/test/results/clientpositive/llap/multi_insert_gby5.q.out
@@ -61,7 +61,7 @@ STAGE PLANS:
Map-reduce partition columns: 100 (type: int)
Statistics: Num rows: 1 Data size: 10 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: 200 (type: int)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: llap
diff --git a/ql/src/test/results/clientpositive/llap/partition_boolexpr.q.out b/ql/src/test/results/clientpositive/llap/partition_boolexpr.q.out
index f05fd536fcc..59c0d1f865b 100644
--- a/ql/src/test/results/clientpositive/llap/partition_boolexpr.q.out
+++ b/ql/src/test/results/clientpositive/llap/partition_boolexpr.q.out
@@ -87,7 +87,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
@@ -209,7 +209,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/ptf_register_use.q.out b/ql/src/test/results/clientpositive/llap/ptf_register_use.q.out
index 3a788430661..8e44476cea8 100644
--- a/ql/src/test/results/clientpositive/llap/ptf_register_use.q.out
+++ b/ql/src/test/results/clientpositive/llap/ptf_register_use.q.out
@@ -30,7 +30,7 @@ STAGE PLANS:
sort order: +
Map-reduce partition columns: 0 (type: int)
Statistics: Num rows: 1 Data size: 10 Basic stats:
COMPLETE Column stats: COMPLETE
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: llap
diff --git a/ql/src/test/results/clientpositive/llap/scratch_col_issue.q.out b/ql/src/test/results/clientpositive/llap/scratch_col_issue.q.out
index b542bc46799..907b6aa41b2 100644
--- a/ql/src/test/results/clientpositive/llap/scratch_col_issue.q.out
+++ b/ql/src/test/results/clientpositive/llap/scratch_col_issue.q.out
@@ -261,10 +261,11 @@ STAGE PLANS:
Execution mode: llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ notVectorizedReason: UDTF Operator (UDTF) not supported
+ vectorized: false
Stage: Stage-0
Fetch Operator
diff --git a/ql/src/test/results/clientpositive/llap/subquery_null_agg.q.out b/ql/src/test/results/clientpositive/llap/subquery_null_agg.q.out
index fff14fd4537..9b763c9cb4a 100644
--- a/ql/src/test/results/clientpositive/llap/subquery_null_agg.q.out
+++ b/ql/src/test/results/clientpositive/llap/subquery_null_agg.q.out
@@ -86,7 +86,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 4
Map Operator Tree:
diff --git a/ql/src/test/results/clientpositive/llap/temp_table_partition_boolexpr.q.out b/ql/src/test/results/clientpositive/llap/temp_table_partition_boolexpr.q.out
index 1e5edd8b923..c815f2e0851 100644
--- a/ql/src/test/results/clientpositive/llap/temp_table_partition_boolexpr.q.out
+++ b/ql/src/test/results/clientpositive/llap/temp_table_partition_boolexpr.q.out
@@ -87,7 +87,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
@@ -209,7 +209,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Reducer 2
Execution mode: vectorized, llap
diff --git a/ql/src/test/results/clientpositive/llap/tez_dynpart_hashjoin_4.q.out b/ql/src/test/results/clientpositive/llap/tez_dynpart_hashjoin_4.q.out
index 9c999893817..75cb43899a1 100644
--- a/ql/src/test/results/clientpositive/llap/tez_dynpart_hashjoin_4.q.out
+++ b/ql/src/test/results/clientpositive/llap/tez_dynpart_hashjoin_4.q.out
@@ -152,7 +152,7 @@ STAGE PLANS:
null sort order:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats:
COMPLETE Column stats: COMPLETE
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map 3
Map Operator Tree:
diff --git a/ql/src/test/results/clientpositive/llap/vector_bucket.q.out b/ql/src/test/results/clientpositive/llap/vector_bucket.q.out
index 1c0f255e566..51da815f601 100644
--- a/ql/src/test/results/clientpositive/llap/vector_bucket.q.out
+++ b/ql/src/test/results/clientpositive/llap/vector_bucket.q.out
@@ -61,10 +61,11 @@ STAGE PLANS:
Execution mode: llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ notVectorizedReason: UDTF Operator (UDTF) not supported
+ vectorized: false
Reducer 2
Execution mode: vectorized, llap
Reduce Vectorization:
diff --git a/ql/src/test/results/clientpositive/llap/vector_reduce_groupby_duplicate_cols.q.out b/ql/src/test/results/clientpositive/llap/vector_reduce_groupby_duplicate_cols.q.out
index 2f374d31426..5d9fcd5ee67 100644
--- a/ql/src/test/results/clientpositive/llap/vector_reduce_groupby_duplicate_cols.q.out
+++ b/ql/src/test/results/clientpositive/llap/vector_reduce_groupby_duplicate_cols.q.out
@@ -61,19 +61,36 @@ STAGE PLANS:
alias: _dummy_table
Row Limit Per Split: 1
Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: COMPLETE
+ TableScan Vectorization:
+ native: true
+ vectorizationSchemaColumns: [0:ROW__ID:struct<writeid:bigint,bucketid:int,rowid:bigint>, 1:ROW__IS__DELETED:boolean]
Reduce Output Operator
key expressions: 1 (type: int), 2 (type: int)
null sort order: zz
sort order: ++
Map-reduce partition columns: 1 (type: int), 2 (type: int)
+ Reduce Sink Vectorization:
+ className: VectorReduceSinkOperator
+ native: false
+ nativeConditionsMet: hive.execution.engine tez IN [tez] IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
+ nativeConditionsNotMet: hive.vectorized.execution.reducesink.new.enabled IS false
Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: COMPLETE
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
+ inputFormatFeatureSupport: []
+ featureSupportInUse: []
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ allNative: false
+ usesVectorUDFAdaptor: false
+ vectorized: true
+ rowBatchContext:
+ dataColumnCount: 0
+ includeColumns: []
+ partitionColumnCount: 0
+ scratchColumnTypeNames: [bigint, bigint]
Map 2
Map Operator Tree:
TableScan
diff --git a/ql/src/test/results/clientpositive/llap/vector_tablesample_rows.q.out b/ql/src/test/results/clientpositive/llap/vector_tablesample_rows.q.out
index 2a10ca7fb85..5898daafe07 100644
--- a/ql/src/test/results/clientpositive/llap/vector_tablesample_rows.q.out
+++ b/ql/src/test/results/clientpositive/llap/vector_tablesample_rows.q.out
@@ -249,10 +249,24 @@ STAGE PLANS:
alias: _dummy_table
Row Limit Per Split: 1
Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: COMPLETE
+ TableScan Vectorization:
+ native: true
+ vectorizationSchemaColumns: [0:ROW__ID:struct<writeid:bigint,bucketid:int,rowid:bigint>, 1:ROW__IS__DELETED:boolean]
Select Operator
+ Select Vectorization:
+ className: VectorSelectOperator
+ native: true
+ projectedOutputColumnNums: []
Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: COMPLETE
Group By Operator
aggregations: count()
+ Group By Vectorization:
+ aggregators: VectorUDAFCountStar(*) -> bigint
+ className: VectorGroupByOperator
+ groupByMode: HASH
+ native: false
+ vectorProcessingMode: HASH
+ projectedOutputColumnNums: [0]
minReductionHashAggr: 0.4
mode: hash
outputColumnNames: _col0
@@ -260,15 +274,29 @@ STAGE PLANS:
Reduce Output Operator
null sort order:
sort order:
+ Reduce Sink Vectorization:
+ className: VectorReduceSinkEmptyKeyOperator
+ native: true
+ nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, hive.execution.engine tez IN [tez] IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
+ valueColumns: 0:bigint
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
+ inputFormatFeatureSupport: []
+ featureSupportInUse: []
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ allNative: false
+ usesVectorUDFAdaptor: false
+ vectorized: true
+ rowBatchContext:
+ dataColumnCount: 0
+ includeColumns: []
+ partitionColumnCount: 0
+ scratchColumnTypeNames: []
Reducer 2
Execution mode: vectorized, llap
Reduce Vectorization:
@@ -359,25 +387,45 @@ STAGE PLANS:
alias: _dummy_table
Row Limit Per Split: 1
Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: COMPLETE
+ TableScan Vectorization:
+ native: true
+ vectorizationSchemaColumns: [0:ROW__ID:struct<writeid:bigint,bucketid:int,rowid:bigint>, 1:ROW__IS__DELETED:boolean]
Select Operator
expressions: 1 (type: int)
outputColumnNames: _col0
+ Select Vectorization:
+ className: VectorSelectOperator
+ native: true
+ projectedOutputColumnNums: [2]
+ selectExpressions: ConstantVectorExpression(val 1) -> 2:int
Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: COMPLETE
File Output Operator
compressed: false
+ File Sink Vectorization:
+ className: VectorFileSinkOperator
+ native: false
Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: COMPLETE
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dual
- Execution mode: llap
+ Execution mode: vectorized, llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
+ inputFormatFeatureSupport: []
+ featureSupportInUse: []
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ allNative: false
+ usesVectorUDFAdaptor: false
+ vectorized: true
+ rowBatchContext:
+ dataColumnCount: 0
+ includeColumns: []
+ partitionColumnCount: 0
+ scratchColumnTypeNames: [bigint]
Stage: Stage-2
Dependency Collection
diff --git a/ql/src/test/results/clientpositive/llap/vectorized_insert_into_bucketed_table.q.out b/ql/src/test/results/clientpositive/llap/vectorized_insert_into_bucketed_table.q.out
index 27b0a922d0d..b2f88bf294f 100644
--- a/ql/src/test/results/clientpositive/llap/vectorized_insert_into_bucketed_table.q.out
+++ b/ql/src/test/results/clientpositive/llap/vectorized_insert_into_bucketed_table.q.out
@@ -58,10 +58,11 @@ STAGE PLANS:
Execution mode: llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ notVectorizedReason: UDTF Operator (UDTF) not supported
+ vectorized: false
Reducer 2
Execution mode: vectorized, llap
Reduce Vectorization:
diff --git a/ql/src/test/results/clientpositive/llap/vectorized_mapjoin3.q.out b/ql/src/test/results/clientpositive/llap/vectorized_mapjoin3.q.out
index a20a3d90ee4..8c5c55cda36 100644
--- a/ql/src/test/results/clientpositive/llap/vectorized_mapjoin3.q.out
+++ b/ql/src/test/results/clientpositive/llap/vectorized_mapjoin3.q.out
@@ -974,10 +974,11 @@ STAGE PLANS:
Execution mode: llap
LLAP IO: no inputs
Map Vectorization:
- enabled: false
+ enabled: true
enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true
- enabledConditionsNotMet: Could not enable vectorization due to partition column names size 1 is greater than the number of table column names size 0 IS false
inputFileFormats: org.apache.hadoop.hive.ql.io.NullRowsInputFormat
+ notVectorizedReason: UDTF Operator (UDTF) not supported
+ vectorized: false
Map 3
Map Operator Tree:
TableScan