This is an automated email from the ASF dual-hosted git repository.

kevinjqliu pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/iceberg-rust.git


The following commit(s) were added to refs/heads/main by this push:
     new ef6fb435 chore: Bump the Java integration test to 1.10.0 (#1701)
ef6fb435 is described below

commit ef6fb435d5669d391b8293cd5278450802a6d225
Author: Fokko Driesprong <[email protected]>
AuthorDate: Mon Sep 22 22:40:40 2025 +0200

    chore: Bump the Java integration test to 1.10.0 (#1701)
    
    ## Which issue does this PR close?
    
    
    - Closes #.
    
    ## What changes are included in this PR?
    
    
    ## Are these changes tested?
    
---
 crates/integration_tests/testdata/spark/Dockerfile                     | 2 +-
 crates/integration_tests/tests/shared_tests/read_positional_deletes.rs | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/crates/integration_tests/testdata/spark/Dockerfile b/crates/integration_tests/testdata/spark/Dockerfile
index 420edb23..339051bf 100644
--- a/crates/integration_tests/testdata/spark/Dockerfile
+++ b/crates/integration_tests/testdata/spark/Dockerfile
@@ -29,7 +29,7 @@ WORKDIR ${SPARK_HOME}
 
 ENV SPARK_VERSION=3.5.6
 ENV ICEBERG_SPARK_RUNTIME_VERSION=3.5_2.12
-ENV ICEBERG_VERSION=1.6.0
+ENV ICEBERG_VERSION=1.10.0
 
 RUN curl --retry 5 -s -C - https://dlcdn.apache.org/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop3.tgz -o spark-${SPARK_VERSION}-bin-hadoop3.tgz \
  && tar xzf spark-${SPARK_VERSION}-bin-hadoop3.tgz --directory /opt/spark --strip-components 1 \
diff --git a/crates/integration_tests/tests/shared_tests/read_positional_deletes.rs b/crates/integration_tests/tests/shared_tests/read_positional_deletes.rs
index 565f8ba4..76418100 100644
--- a/crates/integration_tests/tests/shared_tests/read_positional_deletes.rs
+++ b/crates/integration_tests/tests/shared_tests/read_positional_deletes.rs
@@ -53,7 +53,7 @@ async fn test_read_table_with_positional_deletes() {
 
     // Scan plan phase should include delete files in file plan
     // when with_delete_file_processing_enabled == true
-    assert_eq!(plan[0].deletes.len(), 2);
+    assert_eq!(plan[0].deletes.len(), 1);
 
     // we should see two rows deleted, returning 10 rows instead of 12
     let batch_stream = scan.to_arrow().await.unwrap();
