okumin commented on code in PR #6048:
URL: https://github.com/apache/hive/pull/6048#discussion_r2312973032
##########
ql/src/test/results/clientpositive/llap/bucketmapjoin14.q.out:
##########
@@ -0,0 +1,112 @@
+PREHOOK: query: CREATE TABLE tbl (foid string, part string, id string) CLUSTERED BY (id, part) INTO 64 BUCKETS
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl
+POSTHOOK: query: CREATE TABLE tbl (foid string, part string, id string) CLUSTERED BY (id, part) INTO 64 BUCKETS
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl
+PREHOOK: query: INSERT INTO tbl VALUES ('1234', 'PART_123', '1'), ('1235', 'PART_124', '2')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tbl
+POSTHOOK: query: INSERT INTO tbl VALUES ('1234', 'PART_123', '1'), ('1235', 'PART_124', '2')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tbl
+POSTHOOK: Lineage: tbl.foid SCRIPT []
+POSTHOOK: Lineage: tbl.id SCRIPT []
+POSTHOOK: Lineage: tbl.part SCRIPT []
+PREHOOK: query: EXPLAIN
+SELECT * FROM tbl JOIN tbl tbl2 ON tbl.id = tbl2.id AND tbl.part = tbl2.part
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl
+#### A masked pattern was here ####
+POSTHOOK: query: EXPLAIN
+SELECT * FROM tbl JOIN tbl tbl2 ON tbl.id = tbl2.id AND tbl.part = tbl2.part
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl
+#### A masked pattern was here ####
+STAGE DEPENDENCIES:
+ Stage-1 is a root stage
+ Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+ Stage: Stage-1
+ Tez
+#### A masked pattern was here ####
+ Edges:
+ Map 1 <- Map 2 (CUSTOM_EDGE)
+#### A masked pattern was here ####
+ Vertices:
+ Map 1
+ Map Operator Tree:
+ TableScan
+ alias: tbl
+                  filterExpr: (id is not null and part is not null) (type: boolean)
+                  Statistics: Num rows: 2 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
+ Filter Operator
+                    predicate: (id is not null and part is not null) (type: boolean)
+                    Statistics: Num rows: 2 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
+ Select Operator
+                      expressions: foid (type: string), part (type: string), id (type: string)
+                      outputColumnNames: _col0, _col1, _col2
+                      Statistics: Num rows: 2 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
+ Map Join Operator
+ condition map:
+ Inner Join 0 to 1
+ keys:
+ 0 _col1 (type: string), _col2 (type: string)
+ 1 _col1 (type: string), _col2 (type: string)
+                        outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+                        input vertices:
+                          1 Map 2
+                        Statistics: Num rows: 2 Data size: 1060 Basic stats: COMPLETE Column stats: COMPLETE
+ File Output Operator
+ compressed: false
+                          Statistics: Num rows: 2 Data size: 1060 Basic stats: COMPLETE Column stats: COMPLETE
+                          table:
+                              input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+                              output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+                              serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized, llap
+ LLAP IO: all inputs
+ Map 2
+ Map Operator Tree:
+ TableScan
+ alias: tbl2
+                  filterExpr: (id is not null and part is not null) (type: boolean)
+                  Statistics: Num rows: 2 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
+ Filter Operator
+                    predicate: (id is not null and part is not null) (type: boolean)
+                    Statistics: Num rows: 2 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
+ Select Operator
+                      expressions: foid (type: string), part (type: string), id (type: string)
+                      outputColumnNames: _col0, _col1, _col2
+                      Statistics: Num rows: 2 Data size: 530 Basic stats: COMPLETE Column stats: COMPLETE
+ Reduce Output Operator
+                        key expressions: _col1 (type: string), _col2 (type: string)
+                        null sort order: zz
+                        sort order: ++
+                        Map-reduce partition columns: _col2 (type: string), _col1 (type: string)
Review Comment:
I verified that on the master branch this partition-column list is inverted, and with that ordering the query returns an empty result:
```
Map-reduce partition columns: _col1 (type: string), _col2 (type: string)
```
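For intuition, here is a standalone sketch (not Hive code) of why the inverted column order empties the join: a bucket map join only works if both sides hash the join keys in the same column order. The `bucket` function, helper names, and row routing below are hypothetical stand-ins for Hive's actual bucketing logic.

```python
import hashlib

NUM_BUCKETS = 64

def bucket(*cols):
    # Hypothetical, deterministic stand-in for Hive's bucketing hash.
    digest = hashlib.md5("|".join(cols).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

# The two rows inserted by the test above: (foid, part, id).
ROWS = [("1234", "PART_123", "1"), ("1235", "PART_124", "2")]

def self_join_matches(build_order, probe_order):
    # Route build-side rows into buckets using build_order, then look up
    # each probe-side row using probe_order; count exact key matches.
    buckets = {}
    for _foid, part, rid in ROWS:
        cols = {"part": part, "id": rid}
        key = bucket(*(cols[c] for c in build_order))
        buckets.setdefault(key, []).append((part, rid))
    found = 0
    for _foid, part, rid in ROWS:
        cols = {"part": part, "id": rid}
        if (part, rid) in buckets.get(bucket(*(cols[c] for c in probe_order)), []):
            found += 1
    return found

print(self_join_matches(("part", "id"), ("part", "id")))  # 2: every row finds itself
print(self_join_matches(("part", "id"), ("id", "part")))  # usually 0: probe hashes the wrong order
```

With a consistent column order every row probes its own bucket and the self-join finds both rows; with the inverted order the probe side computes a different bucket, so the join comes back empty even though the key values match.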
##########
iceberg/iceberg-handler/src/test/results/positive/bucket_map_join_9.q.out:
##########
@@ -0,0 +1,65 @@
+PREHOOK: query: CREATE TABLE tbl (foid string, part string, id string) PARTITIONED BY SPEC(bucket(10, id), bucket(10, part)) STORED BY ICEBERG
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl
+POSTHOOK: query: CREATE TABLE tbl (foid string, part string, id string) PARTITIONED BY SPEC(bucket(10, id), bucket(10, part)) STORED BY ICEBERG
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl
+PREHOOK: query: INSERT INTO tbl VALUES ('1234', 'PART_123', '1'), ('1235', 'PART_124', '2')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tbl
+POSTHOOK: query: INSERT INTO tbl VALUES ('1234', 'PART_123', '1'), ('1235', 'PART_124', '2')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tbl
+PREHOOK: query: EXPLAIN
+SELECT * FROM tbl JOIN tbl tbl2 ON tbl.id = tbl2.id AND tbl.part = tbl2.part
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl
+PREHOOK: Output: hdfs://### HDFS PATH ###
+POSTHOOK: query: EXPLAIN
+SELECT * FROM tbl JOIN tbl tbl2 ON tbl.id = tbl2.id AND tbl.part = tbl2.part
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl
+POSTHOOK: Output: hdfs://### HDFS PATH ###
+Plan optimized by CBO.
+
+Vertex dependency in root stage
+Map 1 <- Map 2 (CUSTOM_EDGE)
+
+Stage-0
+ Fetch Operator
+ limit:-1
+ Stage-1
+ Map 1 vectorized
+ File Output Operator [FS_53]
+ Map Join Operator [MAPJOIN_52] (rows=2 width=530)
+            BucketMapJoin:true,Conds:SEL_51._col1, _col2=RS_49._col1, _col2(Inner),Output:["_col0","_col1","_col2","_col3","_col4","_col5"]
+ <-Map 2 [CUSTOM_EDGE] vectorized
+ MULTICAST [RS_49]
+ PartitionCols:_col2, _col1
Review Comment:
I verified that on the master branch this partition-column list is inverted, and with that ordering the query returns an empty result:
```
PartitionCols:_col1, _col2
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]