[ https://issues.apache.org/jira/browse/SPARK-32808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yang Jie updated SPARK-32808:
-----------------------------
Description: 
There are currently 319 TESTS FAILED based on commit `f5360e761ef161f7e04526b59a4baf53f1cf8cd5`:

{code:java}
Run completed in 1 hour, 20 minutes, 25 seconds.
Total number of tests run: 8485
Suites: completed 357, aborted 0
Tests: succeeded 8166, failed 319, canceled 1, ignored 52, pending 0
*** 319 TESTS FAILED ***
{code}

293 of the failures are associated with TPCDS_XXX_PlanStabilitySuite and TPCDS_XXX_PlanStabilityWithStatsSuite:
 * TPCDSV2_7_PlanStabilitySuite (33 FAILED)
 * TPCDSV1_4_PlanStabilityWithStatsSuite (94 FAILED)
 * TPCDSModifiedPlanStabilityWithStatsSuite (21 FAILED)
 * TPCDSV1_4_PlanStabilitySuite (92 FAILED)
 * TPCDSModifiedPlanStabilitySuite (21 FAILED)
 * TPCDSV2_7_PlanStabilityWithStatsSuite (32 FAILED)

The other 26 FAILED cases are as follows:
 * StreamingAggregationSuite
 ** count distinct - state format version 1
 ** count distinct - state format version 2
 * GeneratorFunctionSuite
 ** explode and other columns
 ** explode_outer and other columns
 * UDFSuite
 ** SPARK-26308: udf with complex types of decimal
 ** SPARK-32459: UDF should not fail on WrappedArray
 * SQLQueryTestSuite
 ** decimalArithmeticOperations.sql
 ** postgreSQL/aggregates_part2.sql
 ** ansi/decimalArithmeticOperations.sql
 ** udf/postgreSQL/udf-aggregates_part2.sql - Scala UDF
 ** udf/postgreSQL/udf-aggregates_part2.sql - Regular Python UDF
 * WholeStageCodegenSuite
 ** SPARK-26680: Stream in groupBy does not cause StackOverflowError
 * DataFrameSuite
 ** explode
 ** SPARK-28067: Aggregate sum should not return wrong results for decimal overflow
 ** Star Expansion - ds.explode should fail with a meaningful message if it takes a star
 * DataStreamReaderWriterSuite
 ** SPARK-18510: use user specified types for partition columns in file sources
 * OrcV1QuerySuite / OrcV2QuerySuite
 ** Simple selection form ORC table * 2
 * ExpressionsSchemaSuite
 ** Check schemas for expression examples
 * DataFrameStatSuite
 ** SPARK-28818: Respect original column nullability in `freqItems`
 * JsonV1Suite / JsonV2Suite / JsonLegacyTimeParserSuite
 ** SPARK-4228 DataFrame to JSON * 3
 ** backward compatibility * 3

was:
Now there are 319 TESTS FAILED based on commit `f5360e761ef161f7e04526b59a4baf53f1cf8cd5`

{code:java}
Run completed in 1 hour, 20 minutes, 25 seconds.
Total number of tests run: 8485
Suites: completed 357, aborted 0
Tests: succeeded 8166, failed 319, canceled 1, ignored 52, pending 0
*** 319 TESTS FAILED ***
{code}

> Pass all `sql/core` module UTs in Scala 2.13
> --------------------------------------------
>
>                 Key: SPARK-32808
>                 URL: https://issues.apache.org/jira/browse/SPARK-32808
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.1.0
>            Reporter: Yang Jie
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org