Yohahaha opened a new issue, #9311: URL: https://github.com/apache/incubator-gluten/issues/9311
### Backend
VL (Velox)
### Bug description
In Spark, I can use `spark.sql.planChangeLog.level` to check the details of plan transformations. This used to work in Gluten, but now all I see is a single `HeuristicTransform` entry.

@zhztheplayer would you help unwrap the transformation loop so that `spark.sql.planChangeLog.level` works as before?

By the way, `spark.gluten.sql.transform.logLevel` does not work for `HeuristicTransform` either.
```
=== Applying Rule org.apache.gluten.extension.columnar.heuristic.HeuristicTransform took 53 ms ===
 Execute InsertIntoHadoopFsRelationCommand file:/root/emr-gluten/spark-warehouse/org.apache.spark.sql.execution.GlutenHiveUDFSuite/dest, false, [dt#71], 4 buckets, bucket columns: [bid], Parquet, [serialization.format=1, mergeSchema=false, __hive_compatible_bucketed_table_insertion__=true, partitionOverwriteMode=DYNAMIC], Overwrite, `spark_catalog`.`default`.`dest`, org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, org.apache.spark.sql.execution.datasources.CatalogFileIndex(file:/root/emr-gluten/spark-warehouse/org.apache.spark.sql.execution.GlutenHiveUDFSuite/dest), [id, bid, dt]   Execute InsertIntoHadoopFsRelationCommand file:/root/emr-gluten/spark-warehouse/org.apache.spark.sql.execution.GlutenHiveUDFSuite/dest, false, [dt#71], 4 buckets, bucket columns: [bid], Parquet, [serialization.format=1, mergeSchema=false, __hive_compatible_bucketed_table_insertion__=true, partitionOverwriteMode=DYNAMIC], Overwrite, `spark_catalog`.`default`.`dest`, org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, org.apache.spark.sql.execution.datasources.CatalogFileIndex(file:/root/emr-gluten/spark-warehouse/org.apache.spark.sql.execution.GlutenHiveUDFSuite/dest), [id, bid, dt]
!+- WriteFiles   +- VeloxColumnarWriteFiles Parquet, [dt#71], 4 buckets, bucket columns: [bid], [serialization.format=1, mergeSchema=false, __hive_compatible_bucketed_table_insertion__=true, partitionOverwriteMode=DYNAMIC]
!   +- Sort [dt#71 ASC NULLS FIRST, pmod((hive-hash(bid#63) & 2147483647), 4) ASC NULLS FIRST], false, 0   :- WriteFilesExecTransformer Parquet, [dt#71], 4 buckets, bucket columns: [bid], [serialization.format=1, mergeSchema=false, __hive_compatible_bucketed_table_insertion__=true, partitionOverwriteMode=DYNAMIC]
!      +- Project [id#61, bid#63, empty2null(dt#62) AS dt#71]   :  +- ProjectExecTransformer [id#61, bid#63, dt#71]
!         +- FileScan parquet spark_catalog.default.src[id#61,dt#62,bid#63] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/root/emr-gluten/spark-warehouse/org.apache.spark.sql.execution.G..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int,dt:string,bid:int>   :     +- SortExecTransformer [dt#71 ASC NULLS FIRST, _pre_0#72 ASC NULLS FIRST], false, 0, 0
!   :        +- Project [id#61, bid#63, dt#71, pmod((hive-hash(bid#63) & 2147483647), 4) AS _pre_0#72]
!   :           +- ProjectExecTransformer [id#61, bid#63, empty2null(dt#62) AS dt#71]
!   :              +- FileScanTransformer parquet spark_catalog.default.src[id#61,dt#62,bid#63] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/root/emr-gluten/spark-warehouse/org.apache.spark.sql.execution.G..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int,dt:string,bid:int> NativeFilters: []
!   +- !WriteFiles
```
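For reference, a minimal sketch of how to enable the logging discussed above. The plugin class name, off-heap sizing, and the demo write are assumptions for illustration, not taken from the suite in the log:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: enable plan-change logging under Gluten.
val spark = SparkSession.builder()
  .appName("plan-change-log-demo")
  .master("local[*]")
  // Gluten driver/executor plugin (class name assumed for recent releases).
  .config("spark.plugins", "org.apache.gluten.GlutenPlugin")
  // Gluten needs off-heap memory; sizes here are illustrative.
  .config("spark.memory.offHeap.enabled", "true")
  .config("spark.memory.offHeap.size", "2g")
  // Vanilla Spark knob: log every rule/batch that changes the plan at this level.
  .config("spark.sql.planChangeLog.level", "WARN")
  // Gluten knob from this report; per the issue it has no effect on HeuristicTransform.
  .config("spark.gluten.sql.transform.logLevel", "WARN")
  .getOrCreate()

// Any offloaded query will do; with the current behavior the change log prints one
// big HeuristicTransform entry instead of the individual rules it runs internally.
spark.range(100).selectExpr("id", "id % 4 AS bid").write
  .mode("overwrite")
  .parquet("/tmp/plan_change_log_demo")
```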
### Spark version
None
### Spark configurations
_No response_
### System information
_No response_
### Relevant logs
```bash
```
