jzhuge commented on code in PR #15193:
URL: https://github.com/apache/iceberg/pull/15193#discussion_r2755431346


##########
spark/v4.0/spark/src/test/java/org/apache/iceberg/spark/sql/TestSelect.java:
##########
@@ -743,4 +892,102 @@ public void variantTypeInFilter() {
                 tableName))
         .containsExactlyInAnyOrder(row(1L, 15), row(2L, 20));
   }
+
+  @TestTemplate
+  public void testSessionPropertyWithMultiTableJoin() {
+    // Create two tables with initial data
+    String table1Name = tableName("table1");
+    TableIdentifier table1Identifier = TableIdentifier.of(Namespace.of("default"), "table1");
+    String table2Name = tableName("table2");
+    TableIdentifier table2Identifier = TableIdentifier.of(Namespace.of("default"), "table2");
+    System.out.println(table1Identifier);

Review Comment:
   Is this a leftover debugging statement? There are other printlns like this.
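   If the identifier is still useful while iterating on the test, one option is a debug-level logger instead of stdout. A minimal sketch (the class and method here are hypothetical, assuming SLF4J is on the test classpath):

   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   // Hypothetical sketch: emit the identifier via debug logging rather than
   // System.out, so it only shows up when debug output is enabled.
   class TableIdentifierLoggingSketch {
     private static final Logger LOG = LoggerFactory.getLogger(TableIdentifierLoggingSketch.class);

     static void logIdentifier(Object tableIdentifier) {
       // Parameterized logging defers string construction until debug is on.
       LOG.debug("Created table identifier: {}", tableIdentifier);
     }
   }
   ```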



##########
docs/docs/spark-configuration.md:
##########
@@ -171,31 +171,32 @@ val spark = SparkSession.builder()
   .getOrCreate()
 ```
 
-| Spark option | Default | Description |
-|--------------|---------|-------------|
-| spark.sql.iceberg.vectorization.enabled | Table default | Enables vectorized reads of data files |
-| spark.sql.iceberg.parquet.reader-type | ICEBERG | Sets Parquet reader implementation (`ICEBERG`, `COMET`) |
-| spark.sql.iceberg.check-nullability | true | Validates that the write schema's nullability matches the table's nullability |
-| spark.sql.iceberg.check-ordering | true | Validates that the write schema column order matches the table schema order |
-| spark.sql.iceberg.planning.preserve-data-grouping | false | When true, co-locates scan tasks for the same partition in the same read split; used in Storage Partitioned Joins |
-| spark.sql.iceberg.aggregate-push-down.enabled | true | Enables pushdown of aggregate functions (MAX, MIN, COUNT) |
-| spark.sql.iceberg.distribution-mode | See [Spark Writes](spark-writes.md#writing-distribution-modes) | Controls distribution strategy during writes |
-| spark.wap.id | null | [Write-Audit-Publish](branching.md#audit-branch) snapshot staging ID |
-| spark.wap.branch | null | WAP branch name for snapshot commit |
-| spark.sql.iceberg.compression-codec | Table default | Write compression codec (e.g., `zstd`, `snappy`) |
-| spark.sql.iceberg.compression-level | Table default | Compression level for Parquet/Avro |
-| spark.sql.iceberg.compression-strategy | Table default | Compression strategy for ORC |
-| spark.sql.iceberg.data-planning-mode | AUTO | Scan planning mode for data files (`AUTO`, `LOCAL`, `DISTRIBUTED`) |
-| spark.sql.iceberg.delete-planning-mode | AUTO | Scan planning mode for delete files (`AUTO`, `LOCAL`, `DISTRIBUTED`) |
-| spark.sql.iceberg.advisory-partition-size | Table default | Advisory size (bytes) used for writing to the table when Spark's Adaptive Query Execution is enabled; used to size output files |
-| spark.sql.iceberg.locality.enabled | false | Reports locality information for Spark task placement on executors |
-| spark.sql.iceberg.executor-cache.enabled | true | Enables the executor-side cache (currently used to cache delete files) |
-| spark.sql.iceberg.executor-cache.timeout | 10 | Timeout in minutes for executor cache entries |
-| spark.sql.iceberg.executor-cache.max-entry-size | 67108864 (64MB) | Max size per cache entry (bytes) |
-| spark.sql.iceberg.executor-cache.max-total-size | 134217728 (128MB) | Max total executor cache size (bytes) |
-| spark.sql.iceberg.executor-cache.locality.enabled | false | Enables locality-aware executor cache usage |
-| spark.sql.iceberg.merge-schema | false | Enables modifying the table schema to match the write schema; only adds missing columns |
-| spark.sql.iceberg.report-column-stats | true | Reports Puffin table statistics to Spark's Cost Based Optimizer if available; CBO must be enabled for this to be effective |
+| Spark option | Default | Description |

Review Comment:
   Please remove the accidental formatting changes to this table.
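   For anyone cross-checking the table, these options are ordinary Spark session confs. A minimal Java sketch (the chosen values and local-master setup are illustrative, not recommendations):

   ```java
   import org.apache.spark.sql.SparkSession;

   // Illustrative sketch: set two of the documented Iceberg session options.
   public class IcebergSessionConfSketch {
     public static void main(String[] args) {
       SparkSession spark =
           SparkSession.builder()
               .master("local[*]") // illustrative local setup
               .appName("iceberg-session-conf-sketch")
               // Scan planning mode for data files: AUTO, LOCAL, or DISTRIBUTED.
               .config("spark.sql.iceberg.data-planning-mode", "AUTO")
               // Cap the executor cache at its documented default of 128 MB.
               .config("spark.sql.iceberg.executor-cache.max-total-size", "134217728")
               .getOrCreate();

       spark.stop();
     }
   }
   ```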



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

