Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-xarray for openSUSE:Factory checked in at 2024-09-09 14:45:45
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-xarray (Old)
 and      /work/SRC/openSUSE:Factory/.python-xarray.new.10096 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-xarray"

Mon Sep  9 14:45:45 2024 rev:50 rq:1199626 version:2024.7.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-xarray/python-xarray.changes      2024-06-07 15:04:25.970102948 +0200
+++ /work/SRC/openSUSE:Factory/.python-xarray.new.10096/python-xarray.changes   2024-09-09 14:47:04.965278193 +0200
@@ -1,0 +2,144 @@
+Wed Sep  4 09:11:37 UTC 2024 - Ben Greiner <c...@bnavigator.de>
+
+- Update to 2024.7.0
+  * Add test for rechunking to a size string by @dcherian in #9117
+  * Update docstring in api.py for open_mfdataset(), clarifying
+    "chunks" argument by @arthur-e in #9121
+  * Grouper refactor by @dcherian in #9122
+  * adjust repr tests to account for different platforms (#9127) by
+    @mgorny in #9128
+  * Support duplicate dimensions in .chunk by @mraspaud in #9099
+  * Update zendoo badge link by @max-sixty in #9133
+  * Split out distributed writes in zarr docs by @max-sixty in
+    #9132
+  * Improve to_zarr docs by @max-sixty in #9139
+  * groupby: remove some internal use of IndexVariable by @dcherian
+    in #9123
+  * Improve zarr chunks docs by @max-sixty in #9140
+  * Include numbagg in type checks by @max-sixty in #9159
+  * Remove mypy exclusions for a couple more libraries by
+    @max-sixty in #9160
+  * Add test for #9155 by @max-sixty in #9161
+  * switch to datetime unit "D" by @keewis in #9170
+  * Slightly improve DataTree repr by @shoyer in #9064
+  * Fix example code formatting for CachingFileManager by @djhoese
+    in #9178
+  * Change np.core.defchararray to np.char (#9165) by @pont-us in
+    #9166
+  * temporarily remove pydap from CI by @keewis in #9183
+  * also pin numpy in the all-but-dask CI by @keewis in #9184
+  * promote floating-point numeric datetimes to 64-bit before
+    decoding by @keewis in #9182
+  * "source" encoding for datasets opened from fsspec objects by
+    @keewis in #8923
+  * properly diff objects with arrays as attributes on variables by
+    @keewis in #9169
+  * Allow str in static typing of reindex, ffill etc. by @headtr1ck
+    in #9194
+  * Fix dark-theme in html[data-theme=dark]-tags by @prisae in
+    #9200
+  * Add open_datatree benchmark by @aladinor in #9158
+  * use a composite strategy to generate the dataframe with a
+    tz-aware datetime column by @keewis in #9174
+  * Hierarchical coordinates in DataTree by @shoyer in #9063
+  * avoid converting custom indexes to pandas indexes when
+    formatting coordinate diffs by @keewis in #9157
+  * Fix reductions for np.complex_ dtypes with numbagg by
+    @max-sixty in #9210
+  * Consolidate some numbagg tests by @max-sixty in #9211
+  * Use numpy 2.0-compat np.complex64 dtype in test by @max-sixty
+    in #9217
+  * Fix two bugs in DataTree.update() by @shoyer in #9214
+  * Only use necessary dims when creating temporary dataarray by
+    @Illviljan in #9206
+  * Cleanup test_coding_times.py by @Illviljan in #9223
+  * Use reshape and ravel from duck_array_ops in coding/times.py by
+    @Illviljan in #9225
+  * Use duckarray assertions in test_coding_times by @Illviljan in
+    #9226
+  * Fix time indexing regression in convert_calendar by @hmaarrfk
+    in #9192
+  * numpy 2 compatibility in the netcdf4 and h5netcdf backends by
+    @keewis in #9136
+  * numpy 2 compatibility in the iris code paths by @keewis in
+    #9156
+  * switch the documentation to run with numpy>=2 by @keewis in
+    #9177
+  * exclude the bots from the release notes by @keewis in #9235
+  * Add a .drop_attrs method by @max-sixty in #8258
+  * Allow mypy to run in vscode by @max-sixty in #9239
+  * Fix typing for test_plot.py by @Illviljan in #9234
+  * Added a space to the documentation by @ChrisCleaner in #9247
+  * Per-variable specification of boolean parameters in
+    open_dataset by @Ostheer in #9218
+  * Enable pandas type checking by @headtr1ck in #9213
+  * fix typing of fallback isdtype method by @headtr1ck in #9250
+  * Grouper, Resampler as public api by @dcherian in #8840
+  * Update dropna docstring by @TomNicholas in #9257
+  * Delete base and loffset parameters to resample by @dcherian in
+    #9233
+  * groupby, resample: Deprecate some positional args by @dcherian
+    in #9236
+  * Add encode_cf_datetime benchmark by @spencerkclark in #9262
+  * Update signature for _arrayfunction.array by @Illviljan in
+    #9237
+  * Fix copybutton for multi line examples in double digit ipython
+    cells by @mosc9575 in #9264
+  * add backend intro and how-to diagram by @JessicaS11 in #9175
+  * Restore ability to specify _FillValue as Python native integers
+    by @djhoese in #9258
+  * Adding open_datatree backend-specific keyword arguments by
+    @aladinor in #9199
+  * Change .groupby fastpath to work for monotonic increasing and
+    decreasing by @JoelJaeschke in #7427
+  * Fully deprecate squeeze kwarg to groupby by @dcherian in #9280
+  * Support rechunking to a frequency. by @dcherian in #9109
+  * automate extracting the contributors by @keewis in #9288
+  * Allow importing from xarray.groupers by @dcherian in #9289
+- Release 2024.06.0
+  * TEST: Fix numbagg or bottlekneck skip by @hmaarrfk in #9034
+  * Use ME in test_plot instead of M by @hmaarrfk in #9035
+  * (fix): equality check against singleton PandasExtensionArray by
+    @ilan-gold in #9032
+  * array api-related upstream-dev failures by @keewis in #8854
+  * User-guide - pandas : Add alternative to
+    xarray.Dataset.from_dataframe by @loco-philippe in #9020
+  * Clarify matmul does xarray.dot by @mthramann in #9060
+  * Run tests on changes to root dotfiles by @max-sixty in #9062
+  * Speed up netCDF4, h5netcdf backends by @dcherian in #9067
+  * citation / orcid by @keewis in #9082
+  * fixes for the pint tests by @keewis in #8983
+  * Address latest pandas-related upstream test failures by
+    @spencerkclark in #9081
+  * Add scottyhq to CITATION.cff by @scottyhq in #9089
+  * Fix Typo in Bfill benchmark by @Ockenfuss in #9087
+  * add link to CF conventions on packed data in
+    doc/user-guide/io.rst by @kmuehlbauer in #9045
+  * add order for polynomial interpolation, fixes #8762 by
+    @nkarasiak in #9079
+  * Fix upcasting with python builtin numbers and numpy 2 by
+    @djhoese in #8946
+  * Add Eni to CITATION.cff by @eni-awowale in #9095
+  * add Jessica to citation by @JessicaS11 in #9096
+  * (fix): don't handle time-dtypes as extension arrays in
+    from_dataframe by @ilan-gold in #9042
+  * Micro optimizations to improve indexing by @hmaarrfk in #9002
+  * DAS-2067 - Migrate datatree io.py and common.py by
+    @owenlittlejohns in #9011
+  * open_datatree performance improvement on NetCDF, H5, and Zarr
+    files by @aladinor in #9014
+  * Adds Matt Savoie to CITATION.cff by @flamingbear in #9103
+  * skip the pandas datetime roundtrip test with pandas=3.0 by
+    @keewis in #9104
+  * Add user survey announcement to docs by @jhamman in #9101
+  * add remaining core-dev citations by @keewis in #9110
+  * Undo custom padding-top. by @dcherian in #9107
+- Drop xarray-pr8854-np2.patch
+- Drop xarray-pr9305-cftime.patch
+  * was actually gh#pydata/xarray#9035
+- Add xarray-pr9321-dasktests.patch gh#pydata/xarray#9321
+- Add xarray-pr9356-dasktests.patch gh#pydata/xarray#9356
+- Add xarray-pr9403-np2.1-scalar.patch gh#pydata/xarray#9403
+- Remove obsolete versions from extra requirements
+
+-------------------------------------------------------------------
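
Two of the user-facing additions above, the public grouper API
(gh#pydata/xarray#8840, gh#pydata/xarray#9289) and rechunking to a time
frequency (gh#pydata/xarray#9109), change how downstream code is written.
A minimal sketch, assuming xarray 2024.7.0 with pandas and dask available;
the toy dataset and the "label" coordinate are purely illustrative:

  import numpy as np
  import pandas as pd
  import xarray as xr
  from xarray.groupers import UniqueGrouper, TimeResampler  # newly public module

  # toy data: one year of daily values plus a small label coordinate
  ds = xr.Dataset(
      {"temp": ("time", np.random.rand(365))},
      coords={
          "time": pd.date_range("2000-01-01", periods=365, freq="D"),
          "label": ("time", np.resize(["a", "b", "c"], 365)),
      },
  )

  # Grouper objects are now passed by coordinate name instead of bare labels
  by_label = ds.groupby(label=UniqueGrouper()).mean()

  # rechunk the dask-backed dataset so each chunk covers one calendar month
  chunked = ds.chunk(time=TimeResampler(freq="MS"))

The sketch only exercises what the changelog entries name; both entry points
were internal-only before this release.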

Old:
----
  xarray-2024.05.0-gh.tar.gz
  xarray-pr8854-np2.patch
  xarray-pr9305-cftime.patch

New:
----
  xarray-2024.07.0-gh.tar.gz
  xarray-pr9321-dasktests.patch
  xarray-pr9356-dasktests.patch
  xarray-pr9403-np2.1-scalar.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-xarray.spec ++++++
--- /var/tmp/diff_new_pack.LAJtur/_old  2024-09-09 14:47:05.805313013 +0200
+++ /var/tmp/diff_new_pack.LAJtur/_new  2024-09-09 14:47:05.805313013 +0200
@@ -25,11 +25,11 @@
 %define psuffix %{nil}
 %endif
 
-%define ghversion 2024.05.0
+%define ghversion 2024.07.0
 
 %{?sle15_python_module_pythons}
 Name:           python-xarray%{psuffix}
-Version:        2024.5.0
+Version:        2024.7.0
 Release:        0
 Summary:        N-D labeled arrays and datasets in Python
 License:        Apache-2.0
@@ -38,10 +38,12 @@
 # PATCH-FEATURE-UPSTREAM local_dataset.patch gh#pydata/xarray#5377 mc...@suse.com
 # fix xr.tutorial.open_dataset to work with the preloaded cache.
 Patch0:         local_dataset.patch
-# PATCH-FIX-UPSTREAM xarray-pr8854-np2.patch gh#pydata/xarray#8854
-Patch1:         xarray-pr8854-np2.patch
-# PATCH-FIX-UPSTREAM xarray-pr9305-cftime.patch gh#pydata/xarray#9305
-Patch2:         xarray-pr9305-cftime.patch
+# PATCH-FIX-UPSTREAM xarray-pr9321-dasktests.patch gh#pydata/xarray#9321
+Patch1:         xarray-pr9321-dasktests.patch
+# PATCH-FIX-UPSTREAM xarray-pr9356-dasktests.patch gh#pydata/xarray#9356
+Patch2:         xarray-pr9356-dasktests.patch
+# PATCH-FIX-UPSTREAM xarray-pr9403-np2.1-scalar.patch gh#pydata/xarray#9403
+Patch3:         xarray-pr9403-np2.1-scalar.patch
 BuildRequires:  %{python_module base >= 3.9}
 BuildRequires:  %{python_module pip}
 BuildRequires:  %{python_module setuptools_scm}
@@ -72,12 +74,12 @@
 %package accel
 # for minimum versions, check ci/requirements/min-all-deps.yml
 Summary:        The python xarray[accel] extra
-Requires:       python-Bottleneck >= 1.3
+Requires:       python-Bottleneck
 Requires:       python-opt-einsum
 Requires:       python-scipy
 Requires:       python-xarray = %{version}
 # not available yet
-Recommends:     python-flox >= 0.7
+Recommends:     python-flox
 Recommends:     python-numbagg
 
 %description accel
@@ -117,21 +119,21 @@
 
 %package io
 Summary:        The python xarray[io] extra
-Requires:       python-cftime >= 1.6
+Requires:       python-cftime
 Requires:       python-fsspec
-Requires:       python-h5netcdf >= 1.1
-Requires:       python-netCDF4 >= 1.6
+Requires:       python-h5netcdf
+Requires:       python-netCDF4
 Requires:       python-pooch
-Requires:       python-scipy >= 1.10
+Requires:       python-scipy
 Requires:       python-xarray = %{version}
-Requires:       python-zarr >= 2.13
+Requires:       python-zarr
 
 %description io
 The [io] extra for xarray, N-D labeled arrays and datasets in Python
 
 %package parallel
 Summary:        The python xarray[parallel] extra
-Requires:       python-dask-complete >= 2022.12
+Requires:       python-dask-complete
 Requires:       python-xarray = %{version}
 
 %description parallel
@@ -139,8 +141,8 @@
 
 %package viz
 Summary:        The python xarray[viz] extra
-Requires:       python-matplotlib >= 3.6
-Requires:       python-seaborn >= 0.12
+Requires:       python-matplotlib
+Requires:       python-seaborn
 Requires:       python-xarray = %{version}
 # Not available yet
 Recommends:     python-nc-time-axis

++++++ xarray-2024.05.0-gh.tar.gz -> xarray-2024.07.0-gh.tar.gz ++++++
++++ 15539 lines of diff (skipped)

++++++ xarray-pr9321-dasktests.patch ++++++
From 9406c49fb281d9ffbf88bfd46133288bd23649a4 Mon Sep 17 00:00:00 2001
From: Deepak Cherian <dee...@cherian.net>
Date: Tue, 6 Aug 2024 22:21:29 -0600
Subject: [PATCH 1/2] Fix some dask tests

---
 xarray/tests/test_dask.py | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xarray/tests/test_dask.py b/xarray/tests/test_dask.py
index 20491eca91a..1ef759b3d6a 100644
--- a/xarray/tests/test_dask.py
+++ b/xarray/tests/test_dask.py
@@ -640,8 +640,10 @@ def counting_get(*args, **kwargs):
 
     def test_duplicate_dims(self):
         data = np.random.normal(size=(4, 4))
-        arr = DataArray(data, dims=("x", "x"))
-        chunked_array = arr.chunk({"x": 2})
+        with pytest.warns(UserWarning, match="Duplicate dimension"):
+            arr = DataArray(data, dims=("x", "x"))
+        with pytest.warns(UserWarning, match="Duplicate dimension"):
+            chunked_array = arr.chunk({"x": 2})
         assert chunked_array.chunks == ((2, 2), (2, 2))
         assert chunked_array.chunksizes == {"x": (2, 2)}
 
@@ -1364,7 +1366,8 @@ def test_map_blocks_ds_transformations(func, map_ds):
 @pytest.mark.parametrize("obj", [make_da(), make_ds()])
 def test_map_blocks_da_ds_with_template(obj):
     func = lambda x: x.isel(x=[1])
-    template = obj.isel(x=[1, 5, 9])
+    # a simple .isel(x=[1, 5, 9]) puts all those in a single chunk.
+    template = xr.concat([obj.isel(x=[i]) for i in [1, 5, 9]], dim="x")
     with raise_if_dask_computes():
         actual = xr.map_blocks(func, obj, template=template)
     assert_identical(actual, template)
@@ -1395,15 +1398,16 @@ def test_map_blocks_roundtrip_string_index():
 
 def test_map_blocks_template_convert_object():
     da = make_da()
+    ds = da.to_dataset()
+
     func = lambda x: x.to_dataset().isel(x=[1])
-    template = da.to_dataset().isel(x=[1, 5, 9])
+    template = xr.concat([da.to_dataset().isel(x=[i]) for i in [1, 5, 9]], dim="x")
     with raise_if_dask_computes():
         actual = xr.map_blocks(func, da, template=template)
     assert_identical(actual, template)
 
-    ds = da.to_dataset()
     func = lambda x: x.to_dataarray().isel(x=[1])
-    template = ds.to_dataarray().isel(x=[1, 5, 9])
+    template = xr.concat([ds.to_dataarray().isel(x=[i]) for i in [1, 5, 9]], dim="x")
     with raise_if_dask_computes():
         actual = xr.map_blocks(func, ds, template=template)
     assert_identical(actual, template)
@@ -1429,7 +1433,7 @@ def test_map_blocks_errors_bad_template(obj):
         xr.map_blocks(
             lambda a: a.isel(x=[1]).assign_coords(x=[120]),  # assign bad index values
             obj,
-            template=obj.isel(x=[1, 5, 9]),
+            template=xr.concat([obj.isel(x=[i]) for i in [1, 5, 9]], dim="x"),
         ).compute()
 
 

From 6fa200e542fe18b99a86a53126c10639192ea5e1 Mon Sep 17 00:00:00 2001
From: Deepak Cherian <dee...@cherian.net>
Date: Tue, 6 Aug 2024 22:29:24 -0600
Subject: [PATCH 2/2] Cleanup

---
 xarray/tests/test_variable.py | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/xarray/tests/test_variable.py b/xarray/tests/test_variable.py
index 3f3f1756e45..ff6522c00eb 100644
--- a/xarray/tests/test_variable.py
+++ b/xarray/tests/test_variable.py
@@ -318,12 +318,11 @@ def test_datetime64_valid_range(self):
         with pytest.raises(pderror, match=r"Out of bounds nanosecond"):
             self.cls(["t"], [data])
 
-    @pytest.mark.xfail(reason="pandas issue 36615")
     @pytest.mark.filterwarnings("ignore:Converting non-nanosecond")
     def test_timedelta64_valid_range(self):
         data = np.timedelta64("200000", "D")
         pderror = pd.errors.OutOfBoundsTimedelta
-        with pytest.raises(pderror, match=r"Out of bounds nanosecond"):
+        with pytest.raises(pderror, match=r"Cannot convert"):
             self.cls(["t"], [data])
 
     def test_pandas_data(self):
@@ -2301,20 +2300,20 @@ def test_chunk(self):
         assert blocked.chunks == ((3,), (3, 1))
         assert blocked.data.name != first_dask_name
 
-    @pytest.mark.xfail
+    @pytest.mark.skip
     def test_0d_object_array_with_list(self):
         super().test_0d_object_array_with_list()
 
-    @pytest.mark.xfail
+    @pytest.mark.skip
     def test_array_interface(self):
         # dask array does not have `argsort`
         super().test_array_interface()
 
-    @pytest.mark.xfail
+    @pytest.mark.skip
     def test_copy_index(self):
         super().test_copy_index()
 
-    @pytest.mark.xfail
+    @pytest.mark.skip
     @pytest.mark.filterwarnings("ignore:elementwise comparison failed.*:FutureWarning")
     def test_eq_all_dtypes(self):
         super().test_eq_all_dtypes()
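
The template rewrites in this patch replace obj.isel(x=[1, 5, 9]) because list
indexing on a chunked array typically collapses the selection into a single
chunk, while map_blocks expects the template's chunks to line up with the
per-block results. A minimal sketch of the difference, assuming dask is
installed; the printed chunk tuples are what dask usually produces, not a
guarantee:

  import numpy as np
  import xarray as xr

  da = xr.DataArray(np.arange(10.0), dims="x").chunk({"x": 4})

  # fancy indexing: the three picked points usually land in one chunk
  print(da.isel(x=[1, 5, 9]).chunks)
  # concatenating single-element selections keeps one chunk per pick
  print(xr.concat([da.isel(x=[i]) for i in [1, 5, 9]], dim="x").chunks)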

++++++ xarray-pr9356-dasktests.patch ++++++
From 70e3f30d5a636f6d847acb2dd0d12cffeb601d41 Mon Sep 17 00:00:00 2001
From: Deepak Cherian <dee...@cherian.net>
Date: Tue, 13 Aug 2024 19:47:10 -0600
Subject: [PATCH 1/2] xfail np.cross tests

xref #9327
---
 xarray/core/computation.py       |  6 +++---
 xarray/tests/test_computation.py | 12 ++++++++----
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/xarray/core/computation.py b/xarray/core/computation.py
index 5d21d0836b9..bb7122e82de 100644
--- a/xarray/core/computation.py
+++ b/xarray/core/computation.py
@@ -23,7 +23,7 @@
 from xarray.core.merge import merge_attrs, merge_coordinates_without_align
 from xarray.core.options import OPTIONS, _get_keep_attrs
 from xarray.core.types import Dims, T_DataArray
-from xarray.core.utils import is_dict_like, is_duck_dask_array, is_scalar, parse_dims
+from xarray.core.utils import is_dict_like, is_scalar, parse_dims
 from xarray.core.variable import Variable
 from xarray.namedarray.parallelcompat import get_chunked_array_type
 from xarray.namedarray.pycompat import is_chunked_array
@@ -1693,11 +1693,11 @@ def cross(
             if a.sizes[dim] < b.sizes[dim]:
                 a = a.pad({dim: (0, 1)}, constant_values=0)
                 # TODO: Should pad or apply_ufunc handle correct chunking?
-                a = a.chunk({dim: -1}) if is_duck_dask_array(a.data) else a
+                a = a.chunk({dim: -1}) if is_chunked_array(a.data) else a
             else:
                 b = b.pad({dim: (0, 1)}, constant_values=0)
                 # TODO: Should pad or apply_ufunc handle correct chunking?
-                b = b.chunk({dim: -1}) if is_duck_dask_array(b.data) else b
+                b = b.chunk({dim: -1}) if is_chunked_array(b.data) else b
         else:
             raise ValueError(
                 f"{dim!r} on {'a' if a.sizes[dim] == 1 else 'b'} is 
incompatible:"
diff --git a/xarray/tests/test_computation.py b/xarray/tests/test_computation.py
index 8b480b02472..e974b8b1ac8 100644
--- a/xarray/tests/test_computation.py
+++ b/xarray/tests/test_computation.py
@@ -2547,7 +2547,8 @@ def test_polyfit_polyval_integration(
             "cartesian",
             1,
         ],
-        [  # Test 1 sized arrays with coords:
+        # Test 1 sized arrays with coords:
+        pytest.param(
             xr.DataArray(
                 np.array([1]),
                 dims=["cartesian"],
@@ -2562,8 +2563,10 @@ def test_polyfit_polyval_integration(
             np.array([4, 5, 6]),
             "cartesian",
             -1,
-        ],
-        [  # Test filling in between with coords:
+            marks=(pytest.mark.xfail(),),
+        ),
+        # Test filling in between with coords:
+        pytest.param(
             xr.DataArray(
                 [1, 2],
                 dims=["cartesian"],
@@ -2578,7 +2581,8 @@ def test_polyfit_polyval_integration(
             np.array([4, 5, 6]),
             "cartesian",
             -1,
-        ],
+            marks=(pytest.mark.xfail(),),
+        ),
     ],
 )
 def test_cross(a, b, ae, be, dim: str, axis: int, use_dask: bool) -> None:

From deb9e3266ca163575b200960c14c87fc999dcfc6 Mon Sep 17 00:00:00 2001
From: Deepak Cherian <dee...@cherian.net>
Date: Tue, 13 Aug 2024 19:49:56 -0600
Subject: [PATCH 2/2] Force numpy>=2

---
 ci/requirements/environment.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ci/requirements/environment.yml b/ci/requirements/environment.yml
index ef02a3e7f23..40ef4a7fc74 100644
--- a/ci/requirements/environment.yml
+++ b/ci/requirements/environment.yml
@@ -26,7 +26,7 @@ dependencies:
   - numba
   - numbagg
   - numexpr
-  - numpy
+  - numpy>=2
   - opt_einsum
   - packaging
   - pandas
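
The computation.py hunk swaps is_duck_dask_array() for is_chunked_array(), so
the pad-and-rechunk branch of xarray.cross now fires for any chunked duck
array rather than dask arrays specifically. A minimal sketch of the predicate
itself, assuming dask is installed; is_chunked_array comes from internal API
(xarray.namedarray.pycompat, as imported in the hunk above) and may move
between releases:

  import numpy as np
  import xarray as xr
  from xarray.namedarray.pycompat import is_chunked_array

  da = xr.DataArray(np.arange(4.0), dims="x").chunk({"x": 2})
  print(is_chunked_array(da.data))       # True for the dask-backed payload
  print(is_chunked_array(np.arange(4)))  # False for a plain numpy array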

++++++ xarray-pr9403-np2.1-scalar.patch ++++++
From 17367f3545a48d8b8a18bf8f7054b19351c255dc Mon Sep 17 00:00:00 2001
From: Justus Magin <kee...@posteo.de>
Date: Tue, 27 Aug 2024 15:18:32 +0200
Subject: [PATCH 1/3] also call `np.asarray` on numpy scalars

---
 xarray/core/variable.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: xarray-2024.07.0/xarray/core/variable.py
===================================================================
--- xarray-2024.07.0.orig/xarray/core/variable.py
+++ xarray-2024.07.0/xarray/core/variable.py
@@ -309,7 +309,7 @@ def as_compatible_data(
         else:
             data = np.asarray(data)
 
-    if not isinstance(data, np.ndarray) and (
+    if not isinstance(data, np.ndarray | np.generic) and (
         hasattr(data, "__array_function__") or hasattr(data, 
"__array_namespace__")
     ):
         return cast("T_DuckArray", data)
Index: xarray-2024.07.0/xarray/tests/test_variable.py
===================================================================
--- xarray-2024.07.0.orig/xarray/tests/test_variable.py
+++ xarray-2024.07.0/xarray/tests/test_variable.py
@@ -2585,10 +2585,15 @@ class TestAsCompatibleData(Generic[T_Duc
                 assert source_ndarray(x) is source_ndarray(as_compatible_data(x))
 
     def test_converted_types(self):
-        for input_array in [[[0, 1, 2]], pd.DataFrame([[0, 1, 2]])]:
+        for input_array in [
+            [[0, 1, 2]],
+            pd.DataFrame([[0, 1, 2]]),
+            np.float64(1.4),
+            np.str_("abc"),
+        ]:
             actual = as_compatible_data(input_array)
             assert_array_equal(np.asarray(input_array), actual)
-            assert np.ndarray == type(actual)
+            assert np.ndarray is type(actual)
             assert np.asarray(input_array).dtype == actual.dtype
 
     def test_masked_array(self):
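
The variable.py hunk widens the isinstance check to np.generic so that NumPy
scalars, which with NumPy 2.1 can satisfy the __array_function__ /
__array_namespace__ duck-array test guarded here, are still coerced to 0-d
ndarrays. A minimal sketch of the behaviour the added test asserts; the
Variable construction is illustrative, not part of the patch:

  import numpy as np
  import xarray as xr

  # a numpy scalar is converted rather than passed through as a duck array
  v = xr.Variable((), np.float64(1.4))
  print(type(v.data))   # <class 'numpy.ndarray'>
  print(v.data.dtype)   # float64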
