This is an automated email from the ASF dual-hosted git repository.

agrove pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/main by this push:
     new b24a6d46f chore: Reenable nested types for CometFuzzTestSuite with int96 (#1761)
b24a6d46f is described below

commit b24a6d46f9c386f8e45e16bf5c33d127fd9c6cea
Author: Matt Butrovich <[email protected]>
AuthorDate: Wed May 21 09:55:53 2025 -0400

    chore: Reenable nested types for CometFuzzTestSuite with int96 (#1761)
---
 docs/source/user-guide/compatibility.md                        | 1 -
 docs/templates/compatibility-template.md                       | 1 -
 spark/src/test/scala/org/apache/comet/CometFuzzTestSuite.scala | 7 +------
 3 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/docs/source/user-guide/compatibility.md b/docs/source/user-guide/compatibility.md
index 6e9f0f825..4316c1c8c 100644
--- a/docs/source/user-guide/compatibility.md
+++ b/docs/source/user-guide/compatibility.md
@@ -62,7 +62,6 @@ logical types. Arrow-based readers, such as DataFusion and Comet do respect thes
 rather than signed. By default, Comet will fall back to Spark when scanning Parquet files containing `byte` or `short`
 types (regardless of the logical type). This behavior can be disabled by setting
 `spark.comet.scan.allowIncompatible=true`.
-- Reading legacy INT96 timestamps contained within complex types can produce different results to Spark
 - There is a known performance issue when pushing filters down to Parquet. See the [Comet Tuning Guide] for more
 information.
 - There are failures in the Spark SQL test suite when enabling these new scans (tracking issues: [#1542] and [#1545]).
diff --git a/docs/templates/compatibility-template.md b/docs/templates/compatibility-template.md
index a96382454..191507385 100644
--- a/docs/templates/compatibility-template.md
+++ b/docs/templates/compatibility-template.md
@@ -62,7 +62,6 @@ The new scans currently have the following limitations:
   rather than signed. By default, Comet will fall back to Spark when scanning Parquet files containing `byte` or `short`
   types (regardless of the logical type). This behavior can be disabled by setting
   `spark.comet.scan.allowIncompatible=true`.
-- Reading legacy INT96 timestamps contained within complex types can produce different results to Spark
 - There is a known performance issue when pushing filters down to Parquet. See the [Comet Tuning Guide] for more
   information.
 - There are failures in the Spark SQL test suite when enabling these new scans (tracking issues: [#1542] and [#1545]).
diff --git a/spark/src/test/scala/org/apache/comet/CometFuzzTestSuite.scala b/spark/src/test/scala/org/apache/comet/CometFuzzTestSuite.scala
index 05901339b..7f3aa24d2 100644
--- a/spark/src/test/scala/org/apache/comet/CometFuzzTestSuite.scala
+++ b/spark/src/test/scala/org/apache/comet/CometFuzzTestSuite.scala
@@ -247,12 +247,7 @@ class CometFuzzTestSuite extends CometTestBase with AdaptiveSparkPlanHelper {
   }
 
   test("Parquet temporal types written as INT96") {
-    // int96 coercion in DF does not work with nested types yet
-    // https://github.com/apache/datafusion/issues/15763
-    testParquetTemporalTypes(
-      ParquetOutputTimestampType.INT96,
-      generateArray = false,
-      generateStruct = false)
+    testParquetTemporalTypes(ParquetOutputTimestampType.INT96)
   }
 
   test("Parquet temporal types written as TIMESTAMP_MICROS") {

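For context, the `spark.comet.scan.allowIncompatible` flag mentioned in the documentation lines retained by this diff is an ordinary Spark SQL configuration key. A minimal sketch of setting it when building a session follows; the application name and session setup are illustrative assumptions, not part of this commit:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical session setup: per the compatibility docs, this setting lets
// Comet scan Parquet files containing `byte` or `short` columns instead of
// falling back to Spark's native reader.
val spark = SparkSession
  .builder()
  .appName("comet-scan-example") // illustrative name only
  .config("spark.comet.scan.allowIncompatible", "true")
  .getOrCreate()
```

The same key can alternatively be supplied at launch time (for example via `--conf` on `spark-submit`), since it is read through the standard Spark configuration mechanism.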

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
