andygrove commented on PR #2205: URL: https://github.com/apache/datafusion-comet/pull/2205#issuecomment-3208054795
@parthchandra @hsiang-c This PR confirms that https://github.com/apache/datafusion-comet/issues/2086 is fixed, but the following tests fail when Comet shuffle is enabled:

```
2025-08-20T20:06:43.7522129Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption() > format = PARQUET, branch = null FAILED
2025-08-20T20:06:44.2516704Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption() > format = PARQUET, branch = main FAILED
2025-08-20T20:06:44.7567492Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption() > format = PARQUET, branch = testBranch FAILED
2025-08-20T20:06:44.9515019Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption() > format = AVRO, branch = null FAILED
2025-08-20T20:06:45.3517948Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption() > format = ORC, branch = testBranch FAILED
2025-08-20T20:06:49.0526007Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption2() > format = PARQUET, branch = null FAILED
2025-08-20T20:06:49.3521908Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption2() > format = PARQUET, branch = main FAILED
2025-08-20T20:06:49.8519083Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption2() > format = PARQUET, branch = testBranch FAILED
2025-08-20T20:06:50.0520942Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption2() > format = AVRO, branch = null FAILED
2025-08-20T20:06:50.3515999Z TestSparkDataWrite > testPartitionedFanoutCreateWithTargetFileSizeViaOption2() > format = ORC, branch = testBranch FAILED
2025-08-20T20:06:54.1525873Z TestSparkDataWrite > testPartitionedCreateWithTargetFileSizeViaOption() > format = PARQUET, branch = null FAILED
2025-08-20T20:06:54.5515512Z TestSparkDataWrite > testPartitionedCreateWithTargetFileSizeViaOption() > format = PARQUET, branch = main FAILED
2025-08-20T20:06:55.0543297Z TestSparkDataWrite > testPartitionedCreateWithTargetFileSizeViaOption() > format = PARQUET, branch = testBranch FAILED
2025-08-20T20:06:55.3532509Z TestSparkDataWrite > testPartitionedCreateWithTargetFileSizeViaOption() > format = AVRO, branch = null FAILED
2025-08-20T20:06:55.7548046Z TestSparkDataWrite > testPartitionedCreateWithTargetFileSizeViaOption() > format = ORC, branch = testBranch FAILED
2025-08-20T20:25:16.8516314Z TestStoragePartitionedJoins > testJoinsWithBucketingOnLongColumn() > catalogName = testhadoop, implementation = org.apache.iceberg.spark.SparkCatalog, config = {type=hadoop, cache-enabled=false}, planningMode = LOCAL FAILED
2025-08-20T20:25:20.3514538Z TestStoragePartitionedJoins > testJoinsWithBucketingOnLongColumn() > catalogName = testhadoop, implementation = org.apache.iceberg.spark.SparkCatalog, config = {type=hadoop, cache-enabled=false}, planningMode = DISTRIBUTED FAILED
```
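For reference, a minimal sketch of how Comet shuffle is typically turned on for a Spark session. The plugin class, shuffle manager class, and property names below are assumptions based on the general Comet setup, not configuration taken from this PR or the Iceberg test harness:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object CometShuffleSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("comet-shuffle-sketch")
      // Load the Comet plugin and enable native execution (assumed class/property names).
      .config("spark.plugins", "org.apache.spark.CometPlugin")
      .config("spark.comet.enabled", "true")
      .config("spark.comet.exec.enabled", "true")
      // Swap in Comet's shuffle manager and enable Comet shuffle (assumed class/property names).
      .config("spark.shuffle.manager",
        "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
      .config("spark.comet.exec.shuffle.enabled", "true")
      .getOrCreate()

    // Any shuffle-inducing query (e.g. an aggregation) now exercises the
    // Comet shuffle path instead of Spark's default shuffle.
    spark.range(0, 1000)
      .groupBy((col("id") % 10).as("bucket"))
      .count()
      .show()

    spark.stop()
  }
}
```

With a configuration along these lines applied to the Iceberg Spark test suites, the `TestSparkDataWrite` and `TestStoragePartitionedJoins` cases listed above are the ones that start failing.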
