Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-azure-cosmos for openSUSE:Factory checked in at 2026-01-13 21:34:26
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-azure-cosmos (Old)
 and      /work/SRC/openSUSE:Factory/.python-azure-cosmos.new.1928 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-azure-cosmos"

Tue Jan 13 21:34:26 2026 rev:20 rq:1326971 version:4.14.4

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-azure-cosmos/python-azure-cosmos.changes  2025-12-10 15:32:17.634713812 +0100
+++ /work/SRC/openSUSE:Factory/.python-azure-cosmos.new.1928/python-azure-cosmos.changes        2026-01-13 21:34:38.908204325 +0100
@@ -1,0 +2,8 @@
+Tue Jan 13 11:43:36 UTC 2026 - John Paul Adrian Glaubitz <[email protected]>
+
+- New upstream release
+  + Version 4.14.4
+  + For detailed information about changes see the
+    CHANGELOG.md file provided with this package
+
+-------------------------------------------------------------------

Old:
----
  azure_cosmos-4.14.3.tar.gz

New:
----
  azure_cosmos-4.14.4.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-azure-cosmos.spec ++++++
--- /var/tmp/diff_new_pack.BW77nc/_old  2026-01-13 21:34:39.920246125 +0100
+++ /var/tmp/diff_new_pack.BW77nc/_new  2026-01-13 21:34:39.924246290 +0100
@@ -18,7 +18,7 @@
 
 %{?sle15_python_module_pythons}
 Name:           python-azure-cosmos
-Version:        4.14.3
+Version:        4.14.4
 Release:        0
 Summary:        Microsoft Azure Cosmos client library for Python
 License:        MIT

++++++ azure_cosmos-4.14.3.tar.gz -> azure_cosmos-4.14.4.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/CHANGELOG.md new/azure_cosmos-4.14.4/CHANGELOG.md
--- old/azure_cosmos-4.14.3/CHANGELOG.md        2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/CHANGELOG.md        2026-01-12 02:16:28.000000000 +0100
@@ -1,5 +1,10 @@
 ## Release History
 
+### 4.14.4 (2026-01-12)
+
+#### Bugs Fixed
+* Fixed a bug where the SDK was not properly retrying requests in some edge cases after partition splits. See [PR 44425](https://github.com/Azure/azure-sdk-for-python/pull/44425)
+
 ### 4.14.3 (2025-12-08)
 
 #### Bugs Fixed
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/PKG-INFO new/azure_cosmos-4.14.4/PKG-INFO
--- old/azure_cosmos-4.14.3/PKG-INFO    2025-12-08 16:07:23.850633000 +0100
+++ new/azure_cosmos-4.14.4/PKG-INFO    2026-01-12 02:17:04.133261000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: azure-cosmos
-Version: 4.14.3
+Version: 4.14.4
 Summary: Microsoft Azure Cosmos Client Library for Python
 Home-page: https://github.com/Azure/azure-sdk-for-python
 Author: Microsoft Corporation
@@ -1148,7 +1148,7 @@
 [virtualenv]: https://virtualenv.pypa.io
 [telemetry_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/tracing_open_telemetry.py
 [timeouts_document]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/docs/TimeoutAndRetriesConfig.md
-[cosmos_transactional_batch]: https://learn.microsoft.com/azure/cosmos-db/nosql/transactional-batch
+[cosmos_transactional_batch]: https://learn.microsoft.com/azure/cosmos-db/transactional-batch
 [cosmos_concurrency_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/concurrency_sample.py
 [cosmos_index_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/index_management.py
 [cosmos_index_sample_async]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/index_management_async.py
@@ -1177,6 +1177,11 @@
 
 ## Release History
 
+### 4.14.4 (2026-01-12)
+
+#### Bugs Fixed
+* Fixed a bug where the SDK was not properly retrying requests in some edge cases after partition splits. See [PR 44425](https://github.com/Azure/azure-sdk-for-python/pull/44425)
+
 ### 4.14.3 (2025-12-08)
 
 #### Bugs Fixed
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/README.md new/azure_cosmos-4.14.4/README.md
--- old/azure_cosmos-4.14.3/README.md   2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/README.md   2026-01-12 02:16:28.000000000 +0100
@@ -1104,7 +1104,7 @@
 [virtualenv]: https://virtualenv.pypa.io
 [telemetry_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/tracing_open_telemetry.py
 [timeouts_document]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/docs/TimeoutAndRetriesConfig.md
-[cosmos_transactional_batch]: https://learn.microsoft.com/azure/cosmos-db/nosql/transactional-batch
+[cosmos_transactional_batch]: https://learn.microsoft.com/azure/cosmos-db/transactional-batch
 [cosmos_concurrency_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/concurrency_sample.py
 [cosmos_index_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/index_management.py
 [cosmos_index_sample_async]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/index_management_async.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure/cosmos/_base.py new/azure_cosmos-4.14.4/azure/cosmos/_base.py
--- old/azure_cosmos-4.14.3/azure/cosmos/_base.py       2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure/cosmos/_base.py       2026-01-12 02:16:28.000000000 +0100
@@ -124,6 +124,115 @@
         options['accessCondition'] = {'type': 'IfNoneMatch', 'condition': if_none_match}
     return options
 
+def _merge_query_results(
+        results: dict[str, Any],
+        partial_result: dict[str, Any],
+        query: Optional[Union[str, dict[str, Any]]]
+) -> dict[str, Any]:
+    """Merges partial query results from different partitions.
+
+    This method is required for queries that are manually fanned out to multiple
+    partitions or ranges within the SDK, such as prefix partition key queries.
+    For non-aggregated queries, results from each partition are simply concatenated.
+    However, for aggregate queries (COUNT, SUM, MIN, MAX, AVG), each partition
+    returns a partial aggregate. This method merges these partial results to compute
+    the final, correct aggregate value.
+
+    TODO: This client-side aggregation is a temporary workaround. Ideally, this logic
+    should be integrated into the core pipeline: aggregate queries are handled by
+    DefaultExecutionContext rather than MultiAggregatorExecutionContext, so they are
+    not split-proof until the logic is moved into the core pipeline.
+    This method handles the aggregation of results when a query spans multiple
+    partitions. It specifically handles:
+    1. Standard queries: Appends documents from partial_result to results.
+    2. Aggregate queries that return a JSON object (e.g., `SELECT COUNT(1) FROM c`, `SELECT MIN(c.field) FROM c`).
+    3. VALUE queries with aggregation that return a scalar value (e.g., `SELECT VALUE COUNT(1) FROM c`).
+
+    :param dict[str, Any] results: The accumulated results dictionary.
+    :param dict[str, Any] partial_result: The new partial result dictionary to merge.
+    :param query: The query being executed.
+    :type query: str or dict[str, Any]
+    :return: The merged results dictionary.
+    :rtype: dict[str, Any]
+    """
+    if not results:
+        return partial_result
+
+    partial_docs = partial_result.get("Documents")
+    if not partial_docs:
+        return results
+
+    results_docs = results.get("Documents")
+
+    # Check if both results are aggregate queries
+    is_partial_agg = (
+            isinstance(partial_docs, list)
+            and len(partial_docs) == 1
+            and isinstance(partial_docs[0], dict)
+            and partial_docs[0].get("_aggregate") is not None
+    )
+    is_results_agg = (
+            results_docs
+            and isinstance(results_docs, list)
+            and len(results_docs) == 1
+            and isinstance(results_docs[0], dict)
+            and results_docs[0].get("_aggregate") is not None
+    )
+
+    if is_partial_agg and is_results_agg:
+        agg_results = results_docs[0]["_aggregate"] # type: ignore[index]
+        agg_partial = partial_docs[0]["_aggregate"]
+        for key in agg_partial:
+            if key not in agg_results:
+                agg_results[key] = agg_partial[key]
+            elif isinstance(agg_partial.get(key), dict) and "count" in agg_partial[key]:  # AVG
+                if isinstance(agg_results.get(key), dict):
+                    agg_results[key]["sum"] += agg_partial[key]["sum"]
+                    agg_results[key]["count"] += agg_partial[key]["count"]
+            elif key.lower().startswith("min"):
+                agg_results[key] = min(agg_results[key], agg_partial[key])
+            elif key.lower().startswith("max"):
+                agg_results[key] = max(agg_results[key], agg_partial[key])
+            else:  # COUNT, SUM
+                agg_results[key] += agg_partial[key]
+        return results
+
+    # Check if both are VALUE aggregate queries
+    is_partial_value_agg = (
+            isinstance(partial_docs, list)
+            and len(partial_docs) == 1
+            and isinstance(partial_docs[0], (int, float))
+    )
+    is_results_value_agg = (
+            results_docs
+            and isinstance(results_docs, list)
+            and len(results_docs) == 1
+            and isinstance(results_docs[0], (int, float))
+    )
+
+    if is_partial_value_agg and is_results_value_agg:
+        query_text = query.get("query") if isinstance(query, dict) else query
+        if query_text:
+            query_upper = query_text.upper()
+            # For MIN/MAX, we find the min/max of the partial results.
+            # For COUNT/SUM, we sum the partial results.
+            # Without robust query parsing, we can't distinguish them reliably.
+            # Defaulting to sum for COUNT/SUM. MIN/MAX VALUE queries are not fully supported client-side.
+            if " SELECT VALUE MIN" in query_upper:
+                results_docs[0] = min(results_docs[0], partial_docs[0]) # type: ignore[index]
+            elif " SELECT VALUE MAX" in query_upper:
+                results_docs[0] = max(results_docs[0], partial_docs[0]) # type: ignore[index]
+            else:  # For COUNT/SUM, we sum the partial results
+                results_docs[0] += partial_docs[0] # type: ignore[index]
+        return results
+
+    # Standard query, append documents
+    if results_docs is None:
+        results["Documents"] = partial_docs
+    elif isinstance(results_docs, list) and isinstance(partial_docs, list):
+        results_docs.extend(partial_docs)
+    results["_count"] = len(results["Documents"])
+    return results
+
 
 def GetHeaders(  # pylint: disable=too-many-statements,too-many-branches
         cosmos_client_connection: Union["CosmosClientConnection", "AsyncClientConnection"],
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure/cosmos/_cosmos_client_connection.py new/azure_cosmos-4.14.4/azure/cosmos/_cosmos_client_connection.py
--- old/azure_cosmos-4.14.3/azure/cosmos/_cosmos_client_connection.py   2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure/cosmos/_cosmos_client_connection.py   2026-01-12 02:16:28.000000000 +0100
@@ -3314,11 +3314,16 @@
                 )
                 self.last_response_headers = last_response_headers
                 self._UpdateSessionIfRequired(req_headers, partial_result, last_response_headers)
-                if results:
-                    # add up all the query results from all over lapping ranges
-                    results["Documents"].extend(partial_result["Documents"])
-                else:
-                    results = partial_result
+                # Introducing a temporary complex function into a critical path to handle aggregated queries
+                # during splits; as a precaution, fall back to the original logic if anything goes wrong.
+                try:
+                    results = base._merge_query_results(results, partial_result, query)
+                except Exception: # pylint: disable=broad-exception-caught
+                    # If the new merge logic fails, fall back to the original logic.
+                    if results:
+                        results["Documents"].extend(partial_result["Documents"])
+                    else:
+                        results = partial_result
                 if response_hook:
                     response_hook(last_response_headers, partial_result)
             # if the prefix partition query has results lets return it
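
The defensive shape of the change above can be read in isolation: try the new
merge helper, and revert to plain concatenation if it raises. A minimal sketch
of the pattern, with merge_fn and the result dictionaries as stand-ins for the
SDK's actual objects:

    def safe_merge(results, partial_result, merge_fn):
        try:
            return merge_fn(results, partial_result)
        except Exception:  # intentionally broad: any failure reverts to old behavior
            if results:
                results["Documents"].extend(partial_result["Documents"])
                return results
            return partial_result

    def broken_merge(a, b):
        raise RuntimeError("simulated merge failure")

    old = {"Documents": [{"id": "1"}]}
    new = {"Documents": [{"id": "2"}]}
    merged = safe_merge(old, new, broken_merge)
    assert [d["id"] for d in merged["Documents"]] == ["1", "2"]
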
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure/cosmos/_execution_context/aio/base_execution_context.py new/azure_cosmos-4.14.4/azure/cosmos/_execution_context/aio/base_execution_context.py
--- old/azure_cosmos-4.14.3/azure/cosmos/_execution_context/aio/base_execution_context.py      2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure/cosmos/_execution_context/aio/base_execution_context.py      2026-01-12 02:16:28.000000000 +0100
@@ -27,7 +27,8 @@
 import copy
 
 from ...aio import _retry_utility_async
-from ... import http_constants
+from ... import http_constants, exceptions
+
 
 # pylint: disable=protected-access
 
@@ -136,12 +137,30 @@
         # ExecuteAsync passes retry context parameters (timeout, operation start time, logger, etc.)
         # The callback needs to accept these parameters even if unused
         # Removing **kwargs results in a TypeError when ExecuteAsync tries to pass these parameters
-        async def callback(**kwargs):  # pylint: disable=unused-argument
-            return await self._fetch_items_helper_no_retries(fetch_function)
-
-        return await _retry_utility_async.ExecuteAsync(
-            self._client, self._client._global_endpoint_manager, callback, **self._options
-        )
+        async def execute_fetch():
+            async def callback(**kwargs):  # pylint: disable=unused-argument
+                return await self._fetch_items_helper_no_retries(fetch_function)
+
+            return await _retry_utility_async.ExecuteAsync(
+                self._client, self._client._global_endpoint_manager, callback, **self._options
+            )
+
+        max_retries = 3
+        attempt = 0
+        while attempt <= max_retries:
+            try:
+                return await execute_fetch()
+            except exceptions.CosmosHttpResponseError as e:
+                if exceptions._partition_range_is_gone(e):
+                    attempt += 1
+                    if attempt > max_retries:
+                        raise  # Exhausted retries, propagate error
+
+                    # Refresh routing map to get new partition key ranges
+                    self._client.refresh_routing_map_provider()
+                    # Retry immediately (no backoff needed for partition splits)
+                    continue
+                raise  # Not a partition split error, propagate immediately
 
 
 class _DefaultQueryExecutionContext(_QueryExecutionContextBase):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure/cosmos/_execution_context/base_execution_context.py new/azure_cosmos-4.14.4/azure/cosmos/_execution_context/base_execution_context.py
--- old/azure_cosmos-4.14.3/azure/cosmos/_execution_context/base_execution_context.py  2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure/cosmos/_execution_context/base_execution_context.py  2026-01-12 02:16:28.000000000 +0100
@@ -25,7 +25,8 @@
 
 from collections import deque
 import copy
-from .. import _retry_utility, http_constants
+from .. import _retry_utility, http_constants, exceptions
+
 
 # pylint: disable=protected-access
 
@@ -131,14 +132,34 @@
     def _fetch_items_helper_with_retries(self, fetch_function):
         # TODO: Properly propagate kwargs from retry utility to fetch function
         # the callback keeps the **kwargs parameter to maintain compatibility with the retry utility's execution pattern.
-        # ExecuteAsync passes retry context parameters (timeout, operation start time, logger, etc.)
+        # Execute passes retry context parameters (timeout, operation start time, logger, etc.)
         # The callback needs to accept these parameters even if unused
-        # Removing **kwargs results in a TypeError when ExecuteAsync tries to pass these parameters
-        def callback(**kwargs): # pylint: disable=unused-argument
-            return self._fetch_items_helper_no_retries(fetch_function)
-
-        return _retry_utility.Execute(self._client, self._client._global_endpoint_manager, callback, **self._options)
-
+        # Removing **kwargs results in a TypeError when Execute tries to pass these parameters
+        def execute_fetch():
+            def callback(**kwargs):  # pylint: disable=unused-argument
+                return self._fetch_items_helper_no_retries(fetch_function)
+
+            return _retry_utility.Execute(
+                self._client, self._client._global_endpoint_manager, callback, **self._options
+            )
+
+        max_retries = 3
+        attempt = 0
+
+        while attempt <= max_retries:
+            try:
+                return execute_fetch()
+            except exceptions.CosmosHttpResponseError as e:
+                if exceptions._partition_range_is_gone(e):
+                    attempt += 1
+                    if attempt > max_retries:
+                        raise  # Exhausted retries, propagate error
+
+                    # Refresh routing map to get new partition key ranges
+                    self._client.refresh_routing_map_provider()
+                    # Retry immediately (no backoff needed for partition splits)
+                    continue
+                raise  # Not a partition split error, propagate immediately
     next = __next__  # Python 2 compatibility.
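
The retry wrapper added above (in both the async and sync execution contexts)
follows the same shape: retry the fetch up to three times when the service
signals that a partition key range is gone, refreshing the routing map between
attempts. Below is a generic sketch of that pattern; PartitionGoneError and
refresh_routing_map are illustrative stand-ins for the SDK's
CosmosHttpResponseError check (_partition_range_is_gone) and
refresh_routing_map_provider():

    class PartitionGoneError(Exception):
        pass

    def fetch_with_split_retries(execute_fetch, refresh_routing_map, max_retries=3):
        attempt = 0
        while True:
            try:
                return execute_fetch()
            except PartitionGoneError:
                attempt += 1
                if attempt > max_retries:
                    raise  # exhausted retries, propagate
                # Pick up the new partition key ranges, then retry immediately;
                # no backoff is needed, a split is resolved once the map is fresh.
                refresh_routing_map()

    calls = {"n": 0}
    def flaky_fetch():
        calls["n"] += 1
        if calls["n"] < 3:
            raise PartitionGoneError()
        return ["doc"]

    assert fetch_with_split_retries(flaky_fetch, lambda: None) == ["doc"]
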
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure/cosmos/_version.py new/azure_cosmos-4.14.4/azure/cosmos/_version.py
--- old/azure_cosmos-4.14.3/azure/cosmos/_version.py    2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure/cosmos/_version.py    2026-01-12 02:16:28.000000000 +0100
@@ -19,4 +19,4 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 # SOFTWARE.
 
-VERSION = "4.14.3"
+VERSION = "4.14.4"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure/cosmos/aio/_cosmos_client_connection_async.py new/azure_cosmos-4.14.4/azure/cosmos/aio/_cosmos_client_connection_async.py
--- old/azure_cosmos-4.14.3/azure/cosmos/aio/_cosmos_client_connection_async.py 2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure/cosmos/aio/_cosmos_client_connection_async.py 2026-01-12 02:16:28.000000000 +0100
@@ -3121,11 +3121,18 @@
                 )
                 self.last_response_headers = last_response_headers
                 self._UpdateSessionIfRequired(req_headers, partial_result, last_response_headers)
-                if results:
-                    # add up all the query results from all over lapping ranges
-                    results["Documents"].extend(partial_result["Documents"])
-                else:
-                    results = partial_result
+
+                # Introducing a temporary complex function into a critical path to handle aggregated queries
+                # during splits; as a precaution, fall back to the original logic if anything goes wrong.
+                try:
+                    results = base._merge_query_results(results, partial_result, query)
+                except Exception: # pylint: disable=broad-exception-caught
+                    # If the new merge logic fails, fall back to the original logic.
+                    if results:
+                        results["Documents"].extend(partial_result["Documents"])
+                    else:
+                        results = partial_result
+
                 if response_hook:
                     response_hook(self.last_response_headers, partial_result)
             # if the prefix partition query has results lets return it
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/azure_cosmos.egg-info/PKG-INFO new/azure_cosmos-4.14.4/azure_cosmos.egg-info/PKG-INFO
--- old/azure_cosmos-4.14.3/azure_cosmos.egg-info/PKG-INFO      2025-12-08 16:07:23.000000000 +0100
+++ new/azure_cosmos-4.14.4/azure_cosmos.egg-info/PKG-INFO      2026-01-12 02:17:04.000000000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: azure-cosmos
-Version: 4.14.3
+Version: 4.14.4
 Summary: Microsoft Azure Cosmos Client Library for Python
 Home-page: https://github.com/Azure/azure-sdk-for-python
 Author: Microsoft Corporation
@@ -1148,7 +1148,7 @@
 [virtualenv]: https://virtualenv.pypa.io
 [telemetry_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/tracing_open_telemetry.py
 [timeouts_document]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/docs/TimeoutAndRetriesConfig.md
-[cosmos_transactional_batch]: https://learn.microsoft.com/azure/cosmos-db/nosql/transactional-batch
+[cosmos_transactional_batch]: https://learn.microsoft.com/azure/cosmos-db/transactional-batch
 [cosmos_concurrency_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/concurrency_sample.py
 [cosmos_index_sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/index_management.py
 [cosmos_index_sample_async]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples/index_management_async.py
@@ -1177,6 +1177,11 @@
 
 ## Release History
 
+### 4.14.4 (2026-01-12)
+
+#### Bugs Fixed
+* Fixed a bug where the SDK was not properly retrying requests in some edge cases after partition splits. See [PR 44425](https://github.com/Azure/azure-sdk-for-python/pull/44425)
+
 ### 4.14.3 (2025-12-08)
 
 #### Bugs Fixed
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/tests/test_query_feed_range.py new/azure_cosmos-4.14.4/tests/test_query_feed_range.py
--- old/azure_cosmos-4.14.3/tests/test_query_feed_range.py      2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/tests/test_query_feed_range.py      2026-01-12 02:16:28.000000000 +0100
@@ -1,13 +1,13 @@
 # The MIT License (MIT)
 # Copyright (c) Microsoft Corporation. All rights reserved.
+import time
 
 import pytest
 import test_config
 import unittest
 import uuid
 
-from azure.cosmos import CosmosClient
-from itertools import combinations
+from azure.cosmos import CosmosClient, exceptions
 from azure.cosmos.partition_key import PartitionKey
 from typing import List, Mapping, Set
 
@@ -32,7 +32,7 @@
 @pytest.fixture(scope="class", autouse=True)
 def setup_and_teardown():
     print("Setup: This runs before any tests")
-    document_definitions = [{PARTITION_KEY: pk, 'id': str(uuid.uuid4())} for pk in PK_VALUES]
+    document_definitions = [{PARTITION_KEY: pk, 'id': str(uuid.uuid4()), 'value': 100} for pk in PK_VALUES]
     database = CosmosClient(HOST, KEY).get_database_client(DATABASE_ID)
 
     for container_id, offer_throughput in zip(TEST_CONTAINERS_IDS, TEST_OFFER_THROUGHPUTS):
@@ -123,6 +123,210 @@
         add_all_pk_values_to_set(items, actual_pk_values)
         assert expected_pk_values.issubset(actual_pk_values)
 
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    def test_query_with_feed_range_during_partition_split_combined(self, container_id):
+        container = get_container(container_id)
+
+        # Differentiate behavior based on container type
+        if container_id == SINGLE_PARTITION_CONTAINER_ID:
+            # Single partition: starts at 400 RU/s, increase to trigger split
+            target_throughput = 11000
+            print(f"Single-partition container: increasing from ~400 to {target_throughput}")
+        else:  # MULTI_PARTITION_CONTAINER_ID
+            # Multi-partition: starts at 30000 RU/s, increase further to trigger more splits
+            target_throughput = 60000
+            print(f"Multi-partition container: increasing from 30000 to {target_throughput}")
+
+        # Get feed ranges before split
+        feed_ranges_before_split = list(container.read_feed_ranges())
+        print(f"BEFORE SPLIT: Number of feed ranges: {len(feed_ranges_before_split)}")
+
+        # Get initial counts and sums before split
+        initial_count = 0
+        initial_sum = 0
+
+        for feed_range in feed_ranges_before_split:
+            count_items = list(container.query_items(
+                query='SELECT VALUE COUNT(1) FROM c',
+                feed_range=feed_range
+            ))
+            initial_count += count_items[0] if count_items else 0
+
+            sum_items = list(container.query_items(
+                query='SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])',
+                feed_range=feed_range
+            ))
+            initial_sum += sum_items[0] if sum_items else 0
+
+        print(f"Initial count: {initial_count}, Initial sum: {initial_sum}")
+
+        # verify we have some data
+        assert initial_count > 0, "Container should have at least some documents"
+
+        # Collect all PK values before split
+        expected_pk_values = set()
+        for feed_range in feed_ranges_before_split:
+            items = list(container.query_items(query='SELECT * FROM c', feed_range=feed_range))
+            add_all_pk_values_to_set(items, expected_pk_values)
+
+        print(f"Found {len(expected_pk_values)} unique partition keys before split")
+
+        # Trigger split
+        # test_config.TestConfig.trigger_split(container, target_throughput)
+        container.replace_throughput(target_throughput)
+        # wait for the split to begin
+        time.sleep(20)
+
+        # Test 1: Basic query with stale feed ranges (SDK should handle split)
+        actual_pk_values = set()
+        query = 'SELECT * from c'
+
+        for feed_range in feed_ranges_before_split:
+            items = list(container.query_items(query=query, feed_range=feed_range))
+            add_all_pk_values_to_set(items, actual_pk_values)
+
+        assert expected_pk_values == actual_pk_values, f"Expected {len(expected_pk_values)} PKs, got {len(actual_pk_values)}"
+        print("Test 1 (basic query with stale feed ranges) passed")
+
+        # Test 2: Order by query with stale feed ranges
+        actual_pk_values_order_by = set()
+        query_order_by = 'SELECT * FROM c ORDER BY c.id'
+
+        for feed_range in feed_ranges_before_split:
+            items = list(container.query_items(query=query_order_by, feed_range=feed_range))
+            add_all_pk_values_to_set(items, actual_pk_values_order_by)
+
+        assert expected_pk_values == actual_pk_values_order_by, f"Expected {len(expected_pk_values)} PKs, got {len(actual_pk_values_order_by)}"
+        print("Test 2 (order by query with stale feed ranges) passed")
+
+        # Test 3: Count aggregate query with stale feed ranges
+        post_split_count = 0
+        query_count = 'SELECT VALUE COUNT(1) FROM c'
+
+        for i, feed_range in enumerate(feed_ranges_before_split):
+            items = list(container.query_items(query=query_count, feed_range=feed_range))
+            count = items[0] if items else 0
+            print(f"Feed range {i} count AFTER split: {count}")
+            post_split_count += count
+
+        print(f"Total count AFTER split: {post_split_count}, Expected: {initial_count}")
+        assert initial_count == post_split_count, f"Count mismatch: before={initial_count}, after={post_split_count}"
+        print("Test 3 (count aggregate with stale feed ranges) passed")
+
+        # Test 4: Sum aggregate query with stale feed ranges
+        post_split_sum = 0
+        query_sum = 'SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])'
+
+        for feed_range in feed_ranges_before_split:
+            items = list(container.query_items(query=query_sum, feed_range=feed_range))
+            current_sum = items[0] if items else 0
+            post_split_sum += current_sum
+
+        print(f"Total sum AFTER split: {post_split_sum}, Expected: {initial_sum}")
+        assert initial_sum == post_split_sum, f"Sum mismatch: before={initial_sum}, after={post_split_sum}"
+        print("Test 4 (sum aggregate with stale feed ranges) passed")
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_during_partition_split_combined")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    def test_query_with_feed_range_during_partition_split(self, container_id):
+        container = get_container(container_id)
+        query = 'SELECT * from c'
+
+        expected_pk_values = set(PK_VALUES)
+        actual_pk_values = set()
+
+        feed_ranges = list(container.read_feed_ranges())
+        test_config.TestConfig.trigger_split(container, 11000)
+        for feed_range in feed_ranges:
+            items = list(container.query_items(
+                query=query,
+                feed_range=feed_range
+            ))
+            add_all_pk_values_to_set(items, actual_pk_values)
+        assert expected_pk_values == actual_pk_values
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_during_partition_split_combined")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    def test_query_with_order_by_and_feed_range_during_partition_split(self, container_id):
+        container = get_container(container_id)
+        query = 'SELECT * FROM c ORDER BY c.id'
+
+        expected_pk_values = set(PK_VALUES)
+        actual_pk_values = set()
+
+        feed_ranges = list(container.read_feed_ranges())
+        test_config.TestConfig.trigger_split(container, 11000)
+
+        for feed_range in feed_ranges:
+            items = list(container.query_items(
+                query=query,
+                feed_range=feed_range
+            ))
+            add_all_pk_values_to_set(items, actual_pk_values)
+
+        assert expected_pk_values == actual_pk_values
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_during_partition_split_combined")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    def test_query_with_count_aggregate_and_feed_range_during_partition_split(self, container_id):
+        container = get_container(container_id)
+        # Get initial counts per feed range before split
+        feed_ranges = list(container.read_feed_ranges())
+        initial_total_count = 0
+
+        for feed_range in feed_ranges:
+            query = 'SELECT VALUE COUNT(1) FROM c'
+            items = list(container.query_items(query=query, feed_range=feed_range))
+            count = items[0] if items else 0
+            initial_total_count += count
+
+        # Trigger split
+        test_config.TestConfig.trigger_split(container, 11000)
+
+        # Query with aggregate after split using original feed ranges
+        post_split_total_count = 0
+        for feed_range in feed_ranges:
+            query = 'SELECT VALUE COUNT(1) FROM c'
+            items = list(container.query_items(query=query, feed_range=feed_range))
+            count = items[0] if items else 0
+            post_split_total_count += count
+
+        # Verify counts match (no data loss during split)
+        assert initial_total_count == post_split_total_count
+        assert post_split_total_count == len(PK_VALUES)
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_during_partition_split_combined")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    def test_query_with_sum_aggregate_and_feed_range_during_partition_split(self, container_id):
+        container = get_container(container_id)
+        # Get initial sums per feed range before split
+        feed_ranges = list(container.read_feed_ranges())
+        initial_total_sum = 0
+        expected_total_sum = len(PK_VALUES) * 100
+
+        for feed_range in feed_ranges:
+            query = 'SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])'
+            items = list(container.query_items(query=query, feed_range=feed_range))
+            # The result for a SUM query on an empty set of rows is `undefined`.
+            # The query returns no result pages in this case.
+            current_sum = items[0] if items else 0
+            initial_total_sum += current_sum
+
+        # Trigger split
+        test_config.TestConfig.trigger_split(container, 11000)
+
+        # Query with aggregate after split using original feed ranges
+        post_split_total_sum = 0
+        for feed_range in feed_ranges:
+            query = 'SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])'
+            items = list(container.query_items(query=query, feed_range=feed_range))
+            current_sum = items[0] if items else 0
+            post_split_total_sum += current_sum
+
+        # Verify sums match (no data loss during split)
+        assert initial_total_sum == post_split_total_sum
+        assert post_split_total_sum == expected_total_sum
+
     def test_query_with_static_continuation(self):
         container = get_container(SINGLE_PARTITION_CONTAINER_ID)
         query = 'SELECT * from c'
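
The pattern these tests exercise can be summarized in a short usage sketch:
snapshot the container's feed ranges, then fan a query out across them. The
endpoint, key, and resource names below are placeholders; with 4.14.4 the SDK
is expected to retry transparently if a snapshotted range goes stale because
of a partition split:

    from azure.cosmos import CosmosClient

    client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
    container = client.get_database_client("<db>").get_container_client("<container>")

    # Fan an aggregate out over all current feed ranges and combine the partials.
    total = 0
    for feed_range in container.read_feed_ranges():
        results = list(container.query_items(
            query="SELECT VALUE COUNT(1) FROM c",
            feed_range=feed_range,
        ))
        total += results[0] if results else 0
    print(f"document count across all feed ranges: {total}")
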
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/azure_cosmos-4.14.3/tests/test_query_feed_range_async.py new/azure_cosmos-4.14.4/tests/test_query_feed_range_async.py
--- old/azure_cosmos-4.14.3/tests/test_query_feed_range_async.py        2025-12-08 16:06:39.000000000 +0100
+++ new/azure_cosmos-4.14.4/tests/test_query_feed_range_async.py        2026-01-12 02:16:28.000000000 +0100
@@ -1,5 +1,7 @@
 # The MIT License (MIT)
 # Copyright (c) Microsoft Corporation. All rights reserved.
+import time
+from unittest import mock
 
 import pytest
 import pytest_asyncio
@@ -8,7 +10,6 @@
 import uuid
 
 from azure.cosmos.aio import CosmosClient
-from itertools import combinations
 from azure.cosmos.partition_key import PartitionKey
 from typing import List, Mapping, Set
 
@@ -33,7 +34,7 @@
 @pytest_asyncio.fixture(scope="class", autouse=True)
 async def setup_and_teardown_async():
     print("Setup: This runs before any tests")
-    document_definitions = [{PARTITION_KEY: pk, 'id': str(uuid.uuid4())} for pk in PK_VALUES]
+    document_definitions = [{PARTITION_KEY: pk, 'id': str(uuid.uuid4()), 'value': 100} for pk in PK_VALUES]
     database = CosmosClient(HOST, KEY).get_database_client(DATABASE_ID)
 
     for container_id, offer_throughput in zip(TEST_CONTAINERS_IDS, TEST_OFFER_THROUGHPUTS):
@@ -48,6 +49,7 @@
     # Code to run after tests
     print("Teardown: This runs after all tests")
 
+
 async def get_container(container_id: str):
     client = CosmosClient(HOST, KEY)
     db = client.get_database_client(DATABASE_ID)
@@ -134,6 +136,256 @@
         await add_all_pk_values_to_set_async(items, actual_pk_values)
         assert expected_pk_values.issubset(actual_pk_values)
 
+    @pytest.mark.skip(reason="will be moved to a new pipeline")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    async def test_query_with_feed_range_async_during_back_to_back_partition_splits_async(self, container_id):
+        container = await get_container(container_id)
+        query = 'SELECT * from c'
+
+        expected_pk_values = set(PK_VALUES)
+        actual_pk_values = set()
+
+        # Get feed ranges before any splits
+        feed_ranges = [feed_range async for feed_range in container.read_feed_ranges()]
+
+        # Trigger two consecutive splits
+        await test_config.TestConfig.trigger_split_async(container, 11000)
+        await test_config.TestConfig.trigger_split_async(container, 24000)
+
+        # Query using the original feed ranges; the SDK should handle the splits
+        for feed_range in feed_ranges:
+            items = [item async for item in
+                     (container.query_items(
+                         query=query,
+                         feed_range=feed_range
+                     )
+                     )]
+            await add_all_pk_values_to_set_async(items, actual_pk_values)
+
+        assert expected_pk_values == actual_pk_values
+
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    async def test_query_with_feed_range_async_during_partition_split_combined_async(self, container_id):
+        container = await get_container(container_id)
+
+        # Differentiate behavior based on container type
+        if container_id == SINGLE_PARTITION_CONTAINER_ID:
+            # Single partition: starts at 400 RU/s, increase to trigger split
+            target_throughput = 11000
+            print(f"Single-partition container: increasing from ~400 to {target_throughput}")
+        else:  # MULTI_PARTITION_CONTAINER_ID
+            # Multi-partition: starts at 30000 RU/s, increase further to trigger more splits
+            target_throughput = 60000
+            print(f"Multi-partition container: increasing from 30000 to {target_throughput}")
+
+        # Get feed ranges before split
+        feed_ranges_before_split = [feed_range async for feed_range in container.read_feed_ranges()]
+        print(f"BEFORE SPLIT: Number of feed ranges: {len(feed_ranges_before_split)}")
+
+        # Get initial counts and sums before split
+        initial_count = 0
+        initial_sum = 0
+
+        for feed_range in feed_ranges_before_split:
+            count_items = [item async for item in container.query_items(
+                query='SELECT VALUE COUNT(1) FROM c',
+                feed_range=feed_range
+            )]
+            initial_count += count_items[0] if count_items else 0
+
+            sum_items = [item async for item in container.query_items(
+                query='SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])',
+                feed_range=feed_range
+            )]
+            initial_sum += sum_items[0] if sum_items else 0
+
+        print(f"Initial count: {initial_count}, Initial sum: {initial_sum}")
+
+        # verify we have some data
+        assert initial_count > 0, "Container should have at least some documents"
+
+        # Collect all PK values before split
+        expected_pk_values = set()
+        for feed_range in feed_ranges_before_split:
+            items = [item async for item in container.query_items(query='SELECT * FROM c', feed_range=feed_range)]
+            await add_all_pk_values_to_set_async(items, expected_pk_values)
+
+        print(f"Found {len(expected_pk_values)} unique partition keys before split")
+
+        # Trigger split
+        # await test_config.TestConfig.trigger_split_async(container, target_throughput)
+        container.replace_throughput(target_throughput)
+        # wait for the split to begin
+        time.sleep(20)
+
+        # Test 1: Basic query with stale feed ranges (SDK should handle split)
+        actual_pk_values = set()
+        query = 'SELECT * from c'
+
+        for feed_range in feed_ranges_before_split:
+            items = [item async for item in container.query_items(query=query, feed_range=feed_range)]
+            await add_all_pk_values_to_set_async(items, actual_pk_values)
+
+        assert expected_pk_values == actual_pk_values, f"Expected {len(expected_pk_values)} PKs, got {len(actual_pk_values)}"
+        print("Test 1 (basic query with stale feed ranges) passed")
+
+        # Test 2: Order by query with stale feed ranges
+        actual_pk_values_order_by = set()
+        query_order_by = 'SELECT * FROM c ORDER BY c.id'
+
+        for feed_range in feed_ranges_before_split:
+            items = [item async for item in container.query_items(query=query_order_by, feed_range=feed_range)]
+            await add_all_pk_values_to_set_async(items, actual_pk_values_order_by)
+
+        assert expected_pk_values == actual_pk_values_order_by, f"Expected {len(expected_pk_values)} PKs, got {len(actual_pk_values_order_by)}"
+        print("Test 2 (order by query with stale feed ranges) passed")
+
+        # Test 3: Count aggregate query with stale feed ranges
+        post_split_count = 0
+        query_count = 'SELECT VALUE COUNT(1) FROM c'
+
+        for i, feed_range in enumerate(feed_ranges_before_split):
+            items = [item async for item in container.query_items(query=query_count, feed_range=feed_range)]
+            count = items[0] if items else 0
+            print(f"Feed range {i} count AFTER split: {count}")
+            post_split_count += count
+
+        print(f"Total count AFTER split: {post_split_count}, Expected: {initial_count}")
+        assert initial_count == post_split_count, f"Count mismatch: before={initial_count}, after={post_split_count}"
+        print("Test 3 (count aggregate with stale feed ranges) passed")
+
+        # Test 4: Sum aggregate query with stale feed ranges
+        post_split_sum = 0
+        query_sum = 'SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])'
+
+        for feed_range in feed_ranges_before_split:
+            items = [item async for item in container.query_items(query=query_sum, feed_range=feed_range)]
+            current_sum = items[0] if items else 0
+            post_split_sum += current_sum
+
+        print(f"Total sum AFTER split: {post_split_sum}, Expected: {initial_sum}")
+        assert initial_sum == post_split_sum, f"Sum mismatch: before={initial_sum}, after={post_split_sum}"
+        print("Test 4 (sum aggregate with stale feed ranges) passed")
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_async_during_partition_split_combined_async")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    async def test_query_with_feed_range_async_during_partition_split_async(self, container_id):
+        container = await get_container(container_id)
+        query = 'SELECT * from c'
+
+        expected_pk_values = set(PK_VALUES)
+        actual_pk_values = set()
+
+        feed_ranges = [feed_range async for feed_range in container.read_feed_ranges()]
+        await test_config.TestConfig.trigger_split_async(container, 11000)
+        for feed_range in feed_ranges:
+            items = [item async for item in
+                     (container.query_items(
+                         query=query,
+                         feed_range=feed_range
+                     )
+                     )]
+            await add_all_pk_values_to_set_async(items, actual_pk_values)
+        assert expected_pk_values == actual_pk_values
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_async_during_partition_split_combined_async")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    async def test_query_with_order_by_and_feed_range_async_during_partition_split_async(self, container_id):
+        container = await get_container(container_id)
+        query = 'SELECT * FROM c ORDER BY c.id'
+
+        expected_pk_values = set(PK_VALUES)
+        actual_pk_values = set()
+
+        feed_ranges = [feed_range async for feed_range in container.read_feed_ranges()]
+        await test_config.TestConfig.trigger_split_async(container, 11000)
+
+        for feed_range in feed_ranges:
+            items = [item async for item in
+                     (container.query_items(
+                         query=query,
+                         feed_range=feed_range
+                     )
+                     )]
+            await add_all_pk_values_to_set_async(items, actual_pk_values)
+
+        assert expected_pk_values == actual_pk_values
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_async_during_partition_split_combined_async")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    async def test_query_with_count_aggregate_and_feed_range_async_during_partition_split_async(self, container_id):
+        container = await get_container(container_id)
+        # Get initial counts per feed range before split
+        feed_ranges = [feed_range async for feed_range in container.read_feed_ranges()]
+        print(f"BEFORE SPLIT: Number of feed ranges: {len(feed_ranges)}")
+        initial_total_count = 0
+
+        for i, feed_range in enumerate(feed_ranges):
+            query = 'SELECT VALUE COUNT(1) FROM c'
+            items = [item async for item in container.query_items(query=query, feed_range=feed_range)]
+            count = items[0] if items else 0
+            print(f"Feed range {i} count BEFORE split: {count}")
+            initial_total_count += count
+
+        print(f"Total count BEFORE split: {initial_total_count}")
+
+        # Trigger split
+        await test_config.TestConfig.trigger_split_async(container, 11000)
+
+        # Query with aggregate after split using original feed ranges
+        post_split_total_count = 0
+        for i, feed_range in enumerate(feed_ranges):
+            query = 'SELECT VALUE COUNT(1) FROM c'
+            items = [item async for item in container.query_items(query=query, feed_range=feed_range)]
+            count = items[0] if items else 0
+            print(f"Original feed range {i} count AFTER split: {count}")
+            post_split_total_count += count
+
+        print(f"Total count AFTER split using OLD ranges: {post_split_total_count}")
+        print(f"Expected: {len(PK_VALUES)}")
+
+        assert initial_total_count == post_split_total_count
+        assert post_split_total_count == len(PK_VALUES)
+
+        # Verify counts match (no data loss during split)
+        print(f"Initial total count: {initial_total_count}, Post-split total count: {post_split_total_count}")
+        assert initial_total_count == post_split_total_count
+        print(f"len(PK_VALUES): {len(PK_VALUES)}, Post-split total count: {post_split_total_count}")
+        assert post_split_total_count == len(PK_VALUES)
+
+    @pytest.mark.skip(reason="Covered by test_query_with_feed_range_async_during_partition_split_combined_async")
+    @pytest.mark.parametrize('container_id', TEST_CONTAINERS_IDS)
+    async def test_query_with_sum_aggregate_and_feed_range_async_during_partition_split_async(self, container_id):
+        container = await get_container(container_id)
+        # Get initial sums per feed range before split
+        feed_ranges = [feed_range async for feed_range in container.read_feed_ranges()]
+        initial_total_sum = 0
+        expected_total_sum = len(PK_VALUES) * 100
+
+        for feed_range in feed_ranges:
+            query = 'SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])'
+            items = [item async for item in container.query_items(query=query, feed_range=feed_range)]
+            # The result for a SUM query on an empty set of rows is `undefined`.
+            # The query returns no result pages in this case.
+            current_sum = items[0] if items else 0
+            initial_total_sum += current_sum
+
+        # Trigger split
+        await test_config.TestConfig.trigger_split_async(container, 11000)
+
+        # Query with aggregate after split using original feed ranges
+        post_split_total_sum = 0
+        for feed_range in feed_ranges:
+            query = 'SELECT VALUE SUM(c["value"]) FROM c WHERE IS_DEFINED(c["value"])'
+            items = [item async for item in container.query_items(query=query, feed_range=feed_range)]
+            current_sum = items[0] if items else 0
+            post_split_total_sum += current_sum
+
+        # Verify sums match (no data loss during split)
+        assert initial_total_sum == post_split_total_sum
+        assert post_split_total_sum == expected_total_sum
+
+
     async def test_query_with_static_continuation_async(self):
         container = await get_container(SINGLE_PARTITION_CONTAINER_ID)
         query = 'SELECT * from c'
