alamb opened a new issue, #17882:
URL: https://github.com/apache/datafusion/issues/17882

   ### Describe the bug
   
   CI is failing on main.
   
   Here is an example: https://github.com/apache/datafusion/actions/runs/18184102333/job/51765212634
   
   ```
   failures:
   
   ---- physical_optimizer::partition_statistics::test::test_statistic_by_partition_of_repartition_hash_partitioning stdout ----
   
   thread 'physical_optimizer::partition_statistics::test::test_statistic_by_partition_of_repartition_hash_partitioning' panicked at datafusion/core/tests/physical_optimizer/partition_statistics.rs:976:9:
   assertion `left == right` failed
     left: 4
    right: 1
   stack backtrace:
      0: __rustc::rust_begin_unwind
      1: core::panicking::panic_fmt
      2: core::panicking::assert_failed_inner
      3: core::panicking::assert_failed
      4: core_integration::physical_optimizer::partition_statistics::test::test_statistic_by_partition_of_repartition_hash_partitioning::{{closure}}
      5: <core::pin::Pin<P> as core::future::future::Future>::poll
      6: <core::pin::Pin<P> as core::future::future::Future>::poll
      7: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
      8: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
      9: tokio::runtime::scheduler::current_thread::Context::enter
     10: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
     11: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
     12: tokio::runtime::context::scoped::Scoped<T>::set
     13: tokio::runtime::context::set_scheduler::{{closure}}
     14: std::thread::local::LocalKey<T>::try_with
     15: std::thread::local::LocalKey<T>::with
     16: tokio::runtime::context::set_scheduler
     17: tokio::runtime::scheduler::current_thread::CoreGuard::enter
     18: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
     19: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
     20: tokio::runtime::context::runtime::enter_runtime
     21: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
     22: tokio::runtime::runtime::Runtime::block_on_inner
     23: tokio::runtime::runtime::Runtime::block_on
     24: core_integration::physical_optimizer::partition_statistics::test::test_statistic_by_partition_of_repartition_hash_partitioning
     25: core_integration::physical_optimizer::partition_statistics::test::test_statistic_by_partition_of_repartition_hash_partitioning::{{closure}}
     26: core::ops::function::FnOnce::call_once
   note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
   
   ---- physical_optimizer::partition_statistics::test::test_statistics_by_partition_of_interleave stdout ----
   
   thread 'physical_optimizer::partition_statistics::test::test_statistics_by_partition_of_interleave' panicked at datafusion/core/tests/physical_optimizer/partition_statistics.rs:443:9:
   assertion `left == right` failed
     left: 8
    right: 2
   stack backtrace:
      0: __rustc::rust_begin_unwind
      1: core::panicking::panic_fmt
      2: core::panicking::assert_failed_inner
      3: core::panicking::assert_failed
      4: core_integration::physical_optimizer::partition_statistics::test::test_statistics_by_partition_of_interleave::{{closure}}
      5: <core::pin::Pin<P> as core::future::future::Future>::poll
      6: <core::pin::Pin<P> as core::future::future::Future>::poll
      7: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
      8: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
      9: tokio::runtime::scheduler::current_thread::Context::enter
     10: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
     11: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
     12: tokio::runtime::context::scoped::Scoped<T>::set
     13: tokio::runtime::context::set_scheduler::{{closure}}
     14: std::thread::local::LocalKey<T>::try_with
     15: std::thread::local::LocalKey<T>::with
     16: tokio::runtime::context::set_scheduler
     17: tokio::runtime::scheduler::current_thread::CoreGuard::enter
     18: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
     19: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
     20: tokio::runtime::context::runtime::enter_runtime
     21: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
     22: tokio::runtime::runtime::Runtime::block_on_inner
     23: tokio::runtime::runtime::Runtime::block_on
     24: core_integration::physical_optimizer::partition_statistics::test::test_statistics_by_partition_of_interleave
     25: core_integration::physical_optimizer::partition_statistics::test::test_statistics_by_partition_of_interleave::{{closure}}
     26: core::ops::function::FnOnce::call_once
   note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
   
   
   failures:
       physical_optimizer::partition_statistics::test::test_statistic_by_partition_of_repartition_hash_partitioning
       physical_optimizer::partition_statistics::test::test_statistics_by_partition_of_interleave
   
   test result: FAILED. 699 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out; finished in 28.39s
   
   ```
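   
   To reproduce locally, something like the following should run just the two failing tests; this assumes the `core_integration` test binary name taken from the backtrace above and the core `datafusion` package, so adjust as needed:
   
   ```
   # Run the partition_statistics tests from the core_integration test binary.
   # The `partition_statistics` filter matches both failing tests by substring.
   cargo test -p datafusion --test core_integration partition_statistics
   ```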
   
   The failures appear to have first started with this PR:
   - https://github.com/apache/datafusion/pull/17051
   
   <img width="885" height="697" alt="Image" src="https://github.com/user-attachments/assets/242d50ba-67a7-46b9-b379-f07e81e63d0d" />
   
   ### To Reproduce
   
   _No response_
   
   ### Expected behavior
   
   _No response_
   
   ### Additional context
   
   _No response_

