Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-cdflib for openSUSE:Factory 
checked in at 2025-09-15 19:52:17
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-cdflib (Old)
 and      /work/SRC/openSUSE:Factory/.python-cdflib.new.1977 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-cdflib"

Mon Sep 15 19:52:17 2025 rev:5 rq:1304691 version:1.3.6

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-cdflib/python-cdflib.changes      2025-08-05 14:23:47.175499248 +0200
+++ /work/SRC/openSUSE:Factory/.python-cdflib.new.1977/python-cdflib.changes    2025-09-15 19:56:09.555297247 +0200
@@ -1,0 +2,9 @@
+Sun Sep 14 21:19:47 UTC 2025 - Dirk Müller <[email protected]>
+
+- update to 1.3.6:
+  * Stopping uncertainty "DELTA_VAR" variables from becoming
+    coordinate variables
+  * newbyteorder call was changed to update with newer version of
+    numpy
+
+-------------------------------------------------------------------

Old:
----
  cdflib-1.3.4-gh.tar.gz

New:
----
  cdflib-1.3.6-gh.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-cdflib.spec ++++++
--- /var/tmp/diff_new_pack.6ibh97/_old  2025-09-15 19:56:10.031317234 +0200
+++ /var/tmp/diff_new_pack.6ibh97/_new  2025-09-15 19:56:10.035317402 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-cdflib
 #
-# Copyright (c) 2025 SUSE LLC
+# Copyright (c) 2025 SUSE LLC and contributors
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -17,7 +17,7 @@
 
 
 Name:           python-cdflib
-Version:        1.3.4
+Version:        1.3.6
 Release:        0
 Summary:        A python CDF reader toolkit
 License:        MIT

++++++ cdflib-1.3.4-gh.tar.gz -> cdflib-1.3.6-gh.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/cdflib/cdfread.py new/cdflib-1.3.6/cdflib/cdfread.py
--- old/cdflib-1.3.4/cdflib/cdfread.py  2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/cdflib/cdfread.py  2025-08-06 01:57:13.000000000 +0200
@@ -1744,7 +1744,7 @@
 
         # Put the data into system byte order
         if self._convert_option() != "=":
-            ret = ret.view(ret.dtype.newbyteorder()).byteswap()
+            ret = ret.view(ret.dtype.newbyteorder()).byteswap()  # type: ignore
 
         if self._majority == "Column_major":
             if dimensions is not None:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/cdflib/cdfwrite.py new/cdflib-1.3.6/cdflib/cdfwrite.py
--- old/cdflib-1.3.4/cdflib/cdfwrite.py 2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/cdflib/cdfwrite.py 2025-08-06 01:57:13.000000000 +0200
@@ -1431,18 +1431,18 @@
                 return numElems, self.CDF_EPOCH16
             elif hasattr(value, "dtype"):
                 # We are likely dealing with a numpy number at this point
-                if value.dtype.type in (np.int8, np.int16, np.int32, np.int64):
+                if value.dtype.type in (np.int8, np.int16, np.int32, np.int64):  # type: ignore
                     return numElems, self.CDF_INT8
-                elif value.dtype.type == np.complex128:
+                elif value.dtype.type == np.complex128:  # type: ignore
                     return numElems, self.CDF_EPOCH16
-                elif value.dtype.type in (np.uint8, np.uint16, np.uint32):
+                elif value.dtype.type in (np.uint8, np.uint16, np.uint32):  # type: ignore
                     return numElems, self.CDF_UINT4
-                elif value.dtype.type in (np.float16, np.float32, np.float64):
+                elif value.dtype.type in (np.float16, np.float32, np.float64):  # type: ignore
                     return numElems, self.CDF_DOUBLE
-                elif value.dtype.type == np.str_:
+                elif value.dtype.type == np.str_:  # type: ignore
                     return numElems, self.CDF_CHAR
                 else:
-                    logger.warning(f"Invalid data type for data {value.dtype.type}.... Skip")
+                    logger.warning(f"Invalid data type for data {value.dtype.type}.... Skip")  # type: ignore
                     return None, None
             else:
                 logger.warning("Invalid data type for data.... Skip")
@@ -2273,7 +2273,7 @@
         else:
             recs = len(indata)
 
-        if npdata.dtype.kind in ("S", "U"):
+        if npdata.dtype.kind in ("S", "U"):  # type: ignore
             dt_string = self._convert_type(data_type)
             form = str(recs * num_values * num_elems) + dt_string
             form2 = tofrom + str(recs * num_values * num_elems) + dt_string
@@ -2286,9 +2286,9 @@
             if tofrom in ("<", ">"):
                 # Only swap if our current endianness is not already correct.
                 # Note: '|' means not applicable (e.g., for strings) or already in a “neutral” byte order.
-                if npdata.dtype.byteorder not in (tofrom, "|"):
+                if npdata.dtype.byteorder not in (tofrom, "|"):  # type: ignore
                     # byteswap + newbyteorder will produce a correctly endianness-tagged array.
-                    npdata.byteswap(inplace=True).newbyteorder()  # type: ignore
+                    npdata = npdata.byteswap().view(npdata.dtype.newbyteorder())  # type: ignore
 
         return recs, npdata.tobytes()
 
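For context on the cdfwrite.py hunk above: NumPy 2.0 removed ndarray.newbyteorder(), so the old byteswap(inplace=True).newbyteorder() chain no longer runs on current NumPy; the replacement swaps the bytes and then re-tags the dtype via dtype.newbyteorder(). A minimal sketch of the new idiom (the array and values below are illustrative, not taken from the package):

    import numpy as np

    # Hypothetical little-endian array that must be emitted big-endian.
    data = np.arange(4, dtype="<i4")

    # Old idiom (removed in NumPy 2.0): data.byteswap(inplace=True).newbyteorder()
    # New idiom, matching the cdfwrite.py change: swap the bytes, then view the
    # result with a dtype whose byte order was flipped via dtype.newbyteorder().
    swapped = data.byteswap().view(data.dtype.newbyteorder())

    print(swapped.dtype.byteorder)                             # '>' on a little-endian host
    print(swapped.tobytes() == data.astype(">i4").tobytes())   # True: same big-endian bytes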
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/cdflib/epochs.py new/cdflib-1.3.6/cdflib/epochs.py
--- old/cdflib-1.3.4/cdflib/epochs.py   2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/cdflib/epochs.py   2025-08-06 01:57:13.000000000 +0200
@@ -143,11 +143,11 @@
             1D if scalar input, 2D otherwise.
         """
         epochs = np.array(epochs)
-        if epochs.dtype.type == np.int64:
+        if epochs.dtype.type == np.int64:  # type: ignore
             return CDFepoch.breakdown_tt2000(epochs)
-        elif epochs.dtype.type == np.float64:
+        elif epochs.dtype.type == np.float64:  # type: ignore
             return CDFepoch.breakdown_epoch(epochs)
-        elif epochs.dtype.type == np.complex128:
+        elif epochs.dtype.type == np.complex128:  # type: ignore
             return CDFepoch.breakdown_epoch16(epochs)
         else:
             raise TypeError(f"Not sure how to handle type {epochs.dtype}")
@@ -356,7 +356,7 @@
             raise TypeError("datetime must be in list form")
 
         datetimes = np.atleast_2d(datetimes)
-        items = datetimes.shape[1]
+        items = datetimes.shape[1]  # type: ignore
 
         if items == 7:
             return _squeeze_or_scalar_real(CDFepoch.compute_epoch(datetimes))
@@ -1206,7 +1206,7 @@
         if isinstance(epochs, (complex, np.complex128)) or isinstance(epochs, (list, tuple, np.ndarray)):
             new_epochs = np.asarray(epochs)
             if new_epochs.shape == ():
-                cshape = []
+                cshape: list[cdf_epoch16_type] = []
                 new_epochs = np.array([epochs])
             else:
                 cshape = list(new_epochs.shape)
@@ -1214,7 +1214,7 @@
             raise TypeError("Bad data for epochs: {:}".format(type(epochs)))
 
         cshape.append(10)
-        components = np.full(shape=cshape, fill_value=[9999, 12, 31, 23, 59, 59, 999, 999, 999, 999])
+        components = np.full(shape=cshape, fill_value=[9999, 12, 31, 23, 59, 59, 999, 999, 999, 999])  # type: ignore
         for i, epoch16 in enumerate(new_epochs):
             # Ignore fill values
             if (epoch16.real != -1.0e31) or (epoch16.imag != -1.0e31) or np.isnan(epoch16):
@@ -1402,7 +1402,7 @@
         # TODO Add docstring. What is the output format?
 
         new_dates = np.atleast_2d(dates)
-        count = new_dates.shape[0]
+        count = new_dates.shape[0]  # type: ignore
 
         epochs = []
         for x in range(0, count):
@@ -1540,7 +1540,7 @@
         ):
             new_epochs = np.asarray(epochs).astype(float)
             if new_epochs.shape == ():
-                cshape = []
+                cshape: list[cdf_epoch_type] = []
                 new_epochs = np.array([epochs], dtype=float)
             else:
                 cshape = list(new_epochs.shape)
@@ -1549,7 +1549,7 @@
 
         # Initialize output to default values
         cshape.append(7)
-        components = np.full(shape=cshape, fill_value=[9999, 12, 31, 23, 59, 59, 999])
+        components = np.full(shape=cshape, fill_value=[9999, 12, 31, 23, 59, 59, 999])  # type: ignore
         for i, epoch in enumerate(new_epochs):
             # Ignore fill values and NaNs
             if (epoch != -1.0e31) and not np.isnan(epoch):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/cdflib/xarray/cdf_to_xarray.py new/cdflib-1.3.6/cdflib/xarray/cdf_to_xarray.py
--- old/cdflib-1.3.4/cdflib/xarray/cdf_to_xarray.py     2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/cdflib/xarray/cdf_to_xarray.py     2025-08-06 01:57:13.000000000 +0200
@@ -167,7 +167,7 @@
         )
         return False
     if len(coordinate_data.shape) == 2:
-        if primary_data.shape[0] != coordinate_data.shape[0]:
+        if primary_data.shape[0] != coordinate_data.shape[0]:  # type: ignore
             logger.warning(
                 f"ISTP Compliance Warning: {coordinate_variable_name} is listed as the DEPEND_{dimension_number} for variable {primary_variable_name}, but the Epoch dimensions do not match."
             )
@@ -180,7 +180,7 @@
             )
             return False
 
-        if primary_data.shape[dimension_number] != coordinate_data.shape[-1]:
+        if primary_data.shape[dimension_number] != coordinate_data.shape[-1]:  # type: ignore
             logger.warning(
                 f"ISTP Compliance Warning: {coordinate_variable_name} is listed as the DEPEND_{dimension_number} for variable {primary_variable_name}, but the dimensions do not match."
             )
@@ -192,12 +192,12 @@
             )
             return False
 
-        if primary_data.shape[dimension_number - 1] != coordinate_data.shape[-1]:
+        if primary_data.shape[dimension_number - 1] != coordinate_data.shape[-1]:  # type: ignore
             # This is kind of a hack for now.
             # DEPEND_1 can sometimes refer to the first dimension in a variable, and sometimes the second.
             # So we require both the first and second dimensions don't match the coordinate size before we definitely
             # reject it.
-            if len(primary_data.shape) > dimension_number and primary_data.shape[dimension_number] != coordinate_data.shape[-1]:
+            if len(primary_data.shape) > dimension_number and primary_data.shape[dimension_number] != coordinate_data.shape[-1]:  # type: ignore
                 logger.warning(
                     f"ISTP Compliance Warning: {coordinate_variable_name} is listed as the DEPEND_{dimension_number} for variable {primary_variable_name}, but the dimensions do not match."
                 )
@@ -302,13 +302,13 @@
         ):
             if new_data.size > 1:
                 if new_data[new_data == var_atts["FILLVAL"]].size != 0:
-                    if new_data.dtype.type == np.datetime64:
+                    if new_data.dtype.type == np.datetime64:  # type: ignore
                         new_data[new_data == var_atts["FILLVAL"]] = np.datetime64("nat")
                     else:
                         new_data[new_data == var_atts["FILLVAL"]] = np.nan
             elif new_data.size == 1:
                 if new_data == var_atts["FILLVAL"]:
-                    if new_data.dtype.type == np.datetime64:
+                    if new_data.dtype.type == np.datetime64:  # type: ignore
                         new_data[new_data == var_atts["FILLVAL"]] = np.array(np.datetime64("nat"))
                     else:
                         new_data[new_data == var_atts["FILLVAL"]] = np.array(np.nan)
@@ -423,7 +423,7 @@
     if len(var_props.Dim_Sizes) != 0 and var_props.Last_Rec >= 0:
         i = 0
         skip_first_dim = bool(record_name_found)
-        for dim_size in var_data.shape:
+        for dim_size in var_data.shape:  # type: ignore
             if skip_first_dim:
                 skip_first_dim = False
                 continue
@@ -450,7 +450,7 @@
                         depend_i_variable_data.size != 0
                         and len(depend_i_variable_data.shape) == 1
                         and len(var_data.shape) > dimension_number
-                        and (depend_i_variable_data.shape[0] == var_data.shape[dimension_number])
+                        and (depend_i_variable_data.shape[0] == var_data.shape[dimension_number])  # type: ignore
                     ):
                         return_list.append((depend_i_variable_name, dim_size, True, False))
                         continue
@@ -458,7 +458,7 @@
                         len(depend_i_variable_data.shape) > 1
                         and depend_i_variable_data.size != 0
                         and len(var_data.shape) > dimension_number
-                        and (depend_i_variable_data.shape[1] == var_data.shape[dimension_number])
+                        and (depend_i_variable_data.shape[1] == var_data.shape[dimension_number])  # type: ignore
                     ):
                         return_list.append((depend_i_variable_name + "_dim", dim_size, True, False))
                         continue
@@ -478,7 +478,7 @@
                         depend_i_variable_data.size != 0
                         and len(depend_i_variable_data.shape) == 1
                         and len(var_data.shape) > i - 1
-                        and (depend_i_variable_data.shape[0] == var_data.shape[i - 1])
+                        and (depend_i_variable_data.shape[0] == var_data.shape[i - 1])  # type: ignore
                     ):
                         logger.warning(
                             f"Warning: Variable {var_name} has no determined time-varying component, but  "
@@ -491,7 +491,7 @@
                         len(depend_i_variable_data.shape) > 1
                         and depend_i_variable_data.size != 0
                         and len(var_data.shape) > i - 1
-                        and (depend_i_variable_data.shape[1] == var_data.shape[i - 1])
+                        and (depend_i_variable_data.shape[1] == var_data.shape[i - 1])  # type: ignore
                     ):
                         logger.warning(
                             f"Warning: Variable {var_name} has no determined time-varying component, but  "
@@ -800,7 +800,7 @@
                         else:
                             created_vars[lab].dims = created_vars[var_name].dims
                     else:
-                        if created_vars[lab].shape[0] != created_vars[var_name].shape[-1]:
+                        if created_vars[lab].shape[0] != created_vars[var_name].shape[-1]:  # type: ignore
                             logger.warning(
                                 f"Warning, label variable {lab} does not match the expected dimension sizes of {var_name}"
                             )
@@ -812,9 +812,7 @@
             # If there is an uncertainty variable, link it to the uncertainty along a dimension
             if created_vars[var_name].size == created_vars[uncertainty_variables[var_name]].size:
                 created_vars[var_name].dims = created_vars[uncertainty_variables[var_name]].dims
-                created_coord_vars[var_name] = created_vars[var_name]
-            else:
-                created_data_vars[var_name] = created_vars[var_name]
+            created_data_vars[var_name] = created_vars[var_name]
         else:
             created_data_vars[var_name] = created_vars[var_name]
 
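For context on the cdf_to_xarray.py hunk above: from 1.3.6 a variable referenced through a DELTA (uncertainty) attribute still has its dims linked to its parent, but it is always routed to the Dataset's data variables rather than its coordinates. A rough illustration of how the change surfaces to a caller (the file name and variable names are hypothetical):

    from cdflib.xarray import cdf_to_xarray

    # "example.cdf" stands in for any CDF whose variables carry
    # DELTA_PLUS_VAR / DELTA_MINUS_VAR attributes pointing at uncertainty variables.
    ds = cdf_to_xarray("example.cdf")

    # With 1.3.4 a matching-size uncertainty variable could be promoted to a
    # coordinate; with 1.3.6 it stays a plain data variable.
    print("flux_delta" in ds.coords)      # expected False under 1.3.6
    print("flux_delta" in ds.data_vars)   # expected True under 1.3.6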
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/cdflib/xarray/xarray_to_cdf.py new/cdflib-1.3.6/cdflib/xarray/xarray_to_cdf.py
--- old/cdflib-1.3.4/cdflib/xarray/xarray_to_cdf.py     2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/cdflib/xarray/xarray_to_cdf.py     2025-08-06 01:57:13.000000000 +0200
@@ -116,7 +116,7 @@
     # Returns true if the input has a type of np.datetime64
     try:
         x = np.array(data)
-        return x.dtype.type == np.datetime64
+        return x.dtype.type == np.datetime64  # type: ignore
     except:
         return False
 
@@ -172,7 +172,7 @@
         cdf_data_type = "CDF_UINT4"
     elif numpy_data_type == np.complex128:
         cdf_data_type = "CDF_EPOCH16"
-    elif numpy_data_type.type in (np.str_, np.bytes_):
+    elif numpy_data_type.type in (np.str_, np.bytes_):  # type: ignore
         element_size = int(numpy_data_type.str[2:])  # The length of the longest string in the numpy array
     elif var.dtype == object:  # This commonly means we either have multidimensional arrays of strings or datetime objects
         if _is_datetime_array(var.data):
@@ -189,7 +189,7 @@
                     f"NOT SUPPORTED: Data in variable {var.name} has data type {var.dtype}.  Attempting to convert it to strings ran into the error: {str(e)}",
                     terminate_on_warning,
                 )
-    elif var.dtype.type == np.datetime64:
+    elif var.dtype.type == np.datetime64:  # type: ignore
         cdf_data_type = "CDF_TIME_TT2000"
     else:
         _warn_or_except(f"NOT SUPPORTED: Data in variable {var.name} has data type of {var.dtype}.", terminate_on_warning)
@@ -213,7 +213,7 @@
     for var_name in new_data.data_vars:
         data_array = new_data[var_name]
         fill_value = _dtype_to_fillval(data_array)
-        if fill_value.dtype.type != np.datetime64:
+        if fill_value.dtype.type != np.datetime64:  # type: ignore
             try:
                 new_data[var_name] = new_data[var_name].fillna(fill_value)
             except:
@@ -229,7 +229,7 @@
     for var_name in new_data.coords:
         data_array = new_data[var_name]
         fill_value = _dtype_to_fillval(data_array)
-        if fill_value.dtype.type != np.datetime64:
+        if fill_value.dtype.type != np.datetime64:  # type: ignore
             try:
                 new_data[var_name] = new_data[var_name].fillna(fill_value)
             except:
@@ -278,7 +278,7 @@
             return False
 
         if len(coordinate_data.shape) == 2:
-            if primary_data.shape[0] != coordinate_data.shape[0]:
+            if primary_data.shape[0] != coordinate_data.shape[0]:  # type: ignore
                 _warn_or_except(
                     f"ISTP Compliance Warning: {coordinate_variable_name} is listed as the DEPEND_{dimension_number} for variable {primary_variable_name}, but the Epoch dimensions do not match.",
                     terminate_on_warning,
@@ -318,14 +318,14 @@
         # Check that the size of the dimension that DEPEND_{i} is refering to is
         # also the same size of the DEPEND_{i}'s last dimension
         if any(k.lower() == "depend_0" for k in dataset[primary_variable_name].attrs):
-            if primary_data.shape[dimension_number] != coordinate_data.shape[-1]:
+            if primary_data.shape[dimension_number] != coordinate_data.shape[-1]:  # type: ignore
                 _warn_or_except(
                     f"ISTP Compliance Warning: {coordinate_variable_name} is listed as the DEPEND_{dimension_number} for variable {primary_variable_name}, but the dimensions do not match.",
                     terminate_on_warning,
                 )
                 return False
         else:
-            if primary_data.shape[dimension_number - 1] != coordinate_data.shape[-1]:
+            if primary_data.shape[dimension_number - 1] != coordinate_data.shape[-1]:  # type: ignore
                 _warn_or_except(
                     f"ISTP Compliance Warning: {coordinate_variable_name} is listed as the DEPEND_{dimension_number} for variable {primary_variable_name}, but the dimensions do not match.",
                     terminate_on_warning,
@@ -1098,7 +1098,7 @@
 
             if len(d[var].dims) > 0:
                 if var in time_varying_dimensions or var in depend_0_vars:
-                    dim_sizes = d[var].shape[1:]
+                    dim_sizes = d[var].shape[1:]  # type: ignore
                     record_vary = True
                 else:
                     dim_sizes = d[var].shape
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/doc/changelog.rst new/cdflib-1.3.6/doc/changelog.rst
--- old/cdflib-1.3.4/doc/changelog.rst  2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/doc/changelog.rst  2025-08-06 01:57:13.000000000 +0200
@@ -2,6 +2,16 @@
 Changelog
 =========
 
+1.3.6
+=====
+cdf_to_xarray
+-------------
+- Stopping uncertainty "DELTA_VAR" variables from becoming coordinate variables
+
+cdfwrite
+--------
+- newbyteorder call was changed to update with newer version of numpy
+
 1.3.4
 =====
 Performance improvements in cdfwrite
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/cdflib-1.3.4/meta.yaml new/cdflib-1.3.6/meta.yaml
--- old/cdflib-1.3.4/meta.yaml  2025-04-10 19:57:37.000000000 +0200
+++ new/cdflib-1.3.6/meta.yaml  2025-08-06 01:57:13.000000000 +0200
@@ -1,5 +1,5 @@
 {% set name = "cdflib" %}
-{% set version = "1.3.4" %}
+{% set version = "1.3.6" %}
 
 package:
   name: "{{ name|lower }}"
@@ -8,7 +8,7 @@
 source:
   git_url: https://github.com/MAVENSDC/cdflib.git
   git_depth: 20
-  git_rev: 1.3.4
+  git_rev: 1.3.6
 
 build:
   number: 0
