Your message dated Mon, 21 Nov 2022 15:12:02 +0000
with message-id <e1ox8so-00fyoh...@fasolo.debian.org>
and subject line Bug#1022255: fixed in python-xarray 2022.11.0-2
has caused the Debian Bug report #1022255,
regarding python-xarray breaks satpy autopkgtest
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
1022255: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1022255
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Source: python-xarray, satpy
Control: found -1 python-xarray/2022.10.0-1
Control: found -1 satpy/0.37.1-1
Severity: serious
Tags: sid bookworm
User: debian...@lists.debian.org
Usertags: breaks needs-update

Dear maintainer(s),

With a recent upload of python-xarray the autopkgtest of satpy fails in testing when that autopkgtest is run with the binary packages of python-xarray from unstable. It passes when run with only packages from testing. In tabular form:

                       pass            fail
python-xarray          from testing    2022.10.0-1
satpy                  from testing    0.37.1-1
all others             from testing    from testing

I copied some of the output at the bottom of this report.

Currently this regression is blocking the migration of python-xarray to testing [1]. Due to the nature of this issue, I filed this bug report against both packages. Can you please investigate the situation and reassign the bug to the right package?

More information about this bug and the reason for filing it can be found on
https://wiki.debian.org/ContinuousIntegration/RegressionEmailInformation

Paul

[1] https://qa.debian.org/excuses.php?package=python-xarray

https://ci.debian.net/data/autopkgtest/testing/amd64/s/satpy/27389134/log.gz
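
Most of the TestSceneResampling failures below end in the same KeyError: the
'comp19' composite is never generated, so looking it up on the Scene falls
through to satpy's get_key(), which finds no matching DataID. As a rough,
hand-written illustration only (a sketch using the get_key signature and the
make_dataid/DataQuery helpers visible in the traceback, assuming the module
paths of satpy 0.37.1; it is not an excerpt from the test suite):

    # Sketch of the lookup that raises the repeated KeyError below.
    # Imports are assumed from the file paths shown in the traceback.
    from satpy.dataset.data_dict import get_key
    from satpy.dataset.dataid import DataQuery
    from satpy.tests.utils import make_dataid

    # DataIDs actually present in scene._datasets according to the traceback
    available = [
        make_dataid(name='comp13'),
        make_dataid(name='ds2', resolution=250, calibration='reflectance', modifiers=()),
        make_dataid(name='ds5', resolution=250, modifiers=('res_change',)),
    ]

    # 'comp19' was never generated, so nothing matches and get_key raises
    try:
        get_key(DataQuery(name='comp19'), available, num_results=1)
    except KeyError as err:
        # -> "No dataset matching 'DataQuery(name='comp19')' found"
        print(err)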

____________________ TestNIRReflectance.test_no_sunz_no_co2 ____________________

self = <satpy.tests.test_modifiers.TestNIRReflectance testMethod=test_no_sunz_no_co2>
calculator = <MagicMock name='Calculator' id='139765627652032'>
apply_modifier_info = <MagicMock name='apply_modifier_info' id='139765627664464'>
sza = <MagicMock name='sun_zenith_angle' id='139765627656784'>

    @mock.patch('satpy.modifiers.spectral.sun_zenith_angle')
    @mock.patch('satpy.modifiers.NIRReflectance.apply_modifier_info')
    @mock.patch('satpy.modifiers.spectral.Calculator')
    def test_no_sunz_no_co2(self, calculator, apply_modifier_info, sza):
        """Test NIR reflectance compositor with minimal parameters."""
        calculator.return_value = mock.MagicMock(
            reflectance_from_tbs=self.refl_from_tbs)
        sza.return_value = self.da_sunz
        from satpy.modifiers.spectral import NIRReflectance
        comp = NIRReflectance(name='test')
        info = {'modifiers': None}
        res = comp([self.nir, self.ir_], optional_datasets=[], **info)
    >       self.get_lonlats.assert_called()

/usr/lib/python3/dist-packages/satpy/tests/test_modifiers.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MagicMock name='mock.get_lonlats' id='139765625209184'>

    def assert_called(self):
        """assert that the mock was called at least once
        """
        if self.call_count == 0:
            msg = ("Expected '%s' to have been called." %
                   (self._mock_name or 'mock'))
          raise AssertionError(msg)
E           AssertionError: Expected 'get_lonlats' to have been called.

/usr/lib/python3.10/unittest/mock.py:888: AssertionError
_______________________ TestSceneLoading.test_load_comp4 _______________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e012eb610>

    def test_load_comp4(self):
        """Test loading a composite that depends on a composite."""
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp4'])
        loaded_ids = list(scene._datasets.keys())
      assert len(loaded_ids) == 1
E       AssertionError: assert 2 == 1
E + where 2 = len([DataID(name='comp2'), DataID(name='ds3', modifiers=())])

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1059: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp4')
_______________ TestSceneLoading.test_load_multiple_resolutions ________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e012eb8b0>

    def test_load_multiple_resolutions(self):
"""Test loading a dataset has multiple resolutions available with different resolutions."""
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        comp25 = make_cid(name='comp25', resolution=1000)
scene[comp25] = xr.DataArray([], attrs={'name': 'comp25', 'resolution': 1000})
        scene.load(['comp25'], resolution=500)
            loaded_ids = list(scene._datasets.keys())
      assert len(loaded_ids) == 2
E       AssertionError: assert 3 == 2
E + where 3 = len([DataID(name='comp24', resolution=500), DataID(name='comp25', resolution=1000), DataID(name='ds5', resolution=500, modifiers=())])

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1070: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp25')
_________________ TestSceneLoading.test_load_same_subcomposite _________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e012eb190>

    def test_load_same_subcomposite(self):
"""Test loading a composite and one of it's subcomposites at the same time."""
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp24', 'comp25'], resolution=500)
        loaded_ids = list(scene._datasets.keys())
        assert len(loaded_ids) == 2
        assert loaded_ids[0]['name'] == 'comp24'
        assert loaded_ids[0]['resolution'] == 500
      assert loaded_ids[1]['name'] == 'comp25'
E       AssertionError: assert 'ds5' == 'comp25'
E         - comp25
E         + ds5

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1084: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp25')
_______________________ TestSceneLoading.test_load_comp9 _______________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e012e9000>

    def test_load_comp9(self):
"""Test loading a composite that has a non-existent optional prereq."""
        # it is fine that an optional prereq doesn't exist
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp9'])
        loaded_ids = list(scene._datasets.keys())
      assert len(loaded_ids) == 1
E       AssertionError: assert 2 == 1
E + where 2 = len([DataID(name='comp2'), DataID(name='ds1', resolution=250, calibration=<calibration.reflectance>, modifiers=())])

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1114: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp9')
______________________ TestSceneLoading.test_load_comp10 _______________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e012e97e0>

    def test_load_comp10(self):
        """Test loading a composite that depends on a modified dataset."""
        # it is fine that an optional prereq doesn't exist
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp10'])
        loaded_ids = list(scene._datasets.keys())
      assert len(loaded_ids) == 1
E       AssertionError: assert 2 == 1
E + where 2 = len([DataID(name='comp2'), DataID(name='ds1', resolution=250, calibration=<calibration.reflectance>, modifiers=('mod1',))])

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1123: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp10')
______________________ TestSceneLoading.test_load_comp19 _______________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e011abfa0>

    def test_load_comp19(self):
        """Test loading a composite that shares a dep with a dependency.

        More importantly test that loading a dependency that depends on
        the same dependency as this composite (a sibling dependency) and
        that sibling dependency includes a modifier. This test makes sure
        that the Node in the dependency tree is the exact same node.
        """
        # Check dependency tree nodes
        # initialize the dep tree without loading the data
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene._update_dependency_tree({'comp19'}, None)
        this_node = scene._dependency_tree['comp19']
        shared_dep_id = make_dataid(name='ds5', modifiers=('res_change',))
        shared_dep_expected_node = scene._dependency_tree[shared_dep_id]
        # get the node for the first dep in the prereqs list of the
        # comp13 node
        shared_dep_node = scene._dependency_tree['comp13'].data[1][0]
        shared_dep_node2 = this_node.data[1][0]
        assert shared_dep_expected_node is shared_dep_node
        assert shared_dep_expected_node is shared_dep_node2
        scene.load(['comp19'])
        loaded_ids = list(scene._datasets.keys())
      assert len(loaded_ids) == 1
E       AssertionError: assert 3 == 1
E + where 3 = len([DataID(name='comp13'), DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()), DataID(name='ds5', resolution=250, modifiers=('res_change',))])

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1266: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp19')
_____________ TestSceneLoading.test_load_dataset_after_composite2 ______________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e011ab280>

    def test_load_dataset_after_composite2(self):
        """Test load complex composite followed by other datasets."""
        from satpy.readers.yaml_reader import FileYAMLReader
        from satpy.tests.utils import FakeCompositor, FakeModifier
        load_mock = spy_decorator(FileYAMLReader.load)
        comp_mock = spy_decorator(FakeCompositor.__call__)
        mod_mock = spy_decorator(FakeModifier.__call__)
        with mock.patch.object(FileYAMLReader, 'load', load_mock), \
             mock.patch.object(FakeCompositor, '__call__', comp_mock), \
             mock.patch.object(FakeModifier, '__call__', mod_mock):
            lmock = load_mock.mock
            scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
            scene.load(['comp10'])
            assert lmock.call_count == 1
            loaded_ids = list(scene._datasets.keys())
          assert len(loaded_ids) == 1
E           AssertionError: assert 2 == 1
E + where 2 = len([DataID(name='comp2'), DataID(name='ds1', resolution=250, calibration=<calibration.reflectance>, modifiers=('mod1',))])

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1353: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp10')
___________________ TestSceneLoading.test_no_generate_comp10 ___________________

self = <satpy.tests.test_scene.TestSceneLoading object at 0x7f1e01467520>

    def test_no_generate_comp10(self):
        """Test generating a composite after loading."""
        # it is fine that an optional prereq doesn't exist
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp10'], generate=False)
        assert any(ds_id['name'] == 'comp10' for ds_id in scene._wishlist)
        assert 'comp10' not in scene._datasets
        # two dependencies should have been loaded
        assert len(scene._datasets) == 2
        assert len(scene.missing_datasets) == 1
        scene._generate_composites_from_loaded_datasets()
        assert any(ds_id['name'] == 'comp10' for ds_id in scene._wishlist)
      assert 'comp10' in scene._datasets
E AssertionError: assert 'comp10' in {DataID(name='ds1', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_like-1d67fcaf0431507eb1a9b35f8d43c7dc' (y: 20, x: 20)>\ndask.array<zeros_like, shape=(20, 20), dtype=float64, chunksize=(20, 20), chunktype=numpy.ndarray>\nCoordinates:\n crs object +proj=longlat +ellps=WGS84 +type=crs\nDimensions without coordinates: y, x\nAttributes: (12/14)\n start_time: 2020-01-01 00:00:00\n end_time: 2020-01-01 01:00:00\n name: ds1\n resolution: 250\n calibration: reflectance\n modifiers: ()\n ... ...\n sensor: fake_sensor\n platform_name: fake_platform\n reader: fake1\n area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_lik...\n _satpy_id: DataID(name='ds1', resolution=250, calibration=<cal...\n ancillary_variables: [], DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_like-1d67fcaf0431507eb1a9b35f8d43c7dc' (y: 20, x: 20)>\ndask.array<zeros_like, shape=(20, 20), dtype=float64, chunksize=(20, 20), chunktyp...: 2020-01-01 00:00:00\n end_time: 2020-01-01 01:00:00\n name: ds1\n resolution: 250\n calibration: reflectance\n modifiers: ('mod1',)\n ... ...\n sensor: fake_sensor\n platform_name: fake_platform\n reader: fake1\n area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_lik...\n _satpy_id: DataID(name='ds1', resolution=250, calibration=<cal...\n ancillary_variables: [], DataID(name='comp2'): <xarray.DataArray 'zeros_like-92483e10e77d6b5cc4ab18a829286920' (y: 20, x: 20,\n bands: 3)>\ndask.array<zeros_like, shape=(20, 20, 3), dtype=float64, chunksize=(20, 20, 3), chunktype=numpy.ndarray>\nCoordinates:\n * bands (bands) <U1 'R' 'G' 'B'\nDimensions without coordinates: y, x\nAttributes:\n _satpy_id: DataID(name='comp2')\n name: comp2\n prerequisites: ['ds1', 'ds2']\n optional_prerequisites: []\n optional_datasets: []\n area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_...} E + where {DataID(name='ds1', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_like-1d67fcaf0431507eb1a9b35f8d43c7dc' (y: 20, x: 20)>\ndask.array<zeros_like, shape=(20, 20), dtype=float64, chunksize=(20, 20), chunktype=numpy.ndarray>\nCoordinates:\n crs object +proj=longlat +ellps=WGS84 +type=crs\nDimensions without coordinates: y, x\nAttributes: (12/14)\n start_time: 2020-01-01 00:00:00\n end_time: 2020-01-01 01:00:00\n name: ds1\n resolution: 250\n calibration: reflectance\n modifiers: ()\n ... ...\n sensor: fake_sensor\n platform_name: fake_platform\n reader: fake1\n area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_lik...\n _satpy_id: DataID(name='ds1', resolution=250, calibration=<cal...\n ancillary_variables: [], DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_like-1d67fcaf0431507eb1a9b35f8d43c7dc' (y: 20, x: 20)>\ndask.array<zeros_like, shape=(20, 20), dtype=float64, chunksize=(20, 20), chunktyp...: 2020-01-01 00:00:00\n end_time: 2020-01-01 01:00:00\n name: ds1\n resolution: 250\n calibration: reflectance\n modifiers: ('mod1',)\n ... 
...\n sensor: fake_sensor\n platform_name: fake_platform\n reader: fake1\n area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_lik...\n _satpy_id: DataID(name='ds1', resolution=250, calibration=<cal...\n ancillary_variables: [], DataID(name='comp2'): <xarray.DataArray 'zeros_like-92483e10e77d6b5cc4ab18a829286920' (y: 20, x: 20,\n bands: 3)>\ndask.array<zeros_like, shape=(20, 20, 3), dtype=float64, chunksize=(20, 20, 3), chunktype=numpy.ndarray>\nCoordinates:\n * bands (bands) <U1 'R' 'G' 'B'\nDimensions without coordinates: y, x\nAttributes:\n _satpy_id: DataID(name='comp2')\n name: comp2\n prerequisites: ['ds1', 'ds2']\n optional_prerequisites: []\n optional_datasets: []\n area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_...} = <satpy.scene.Scene object at 0x7f1dd68203a0>._datasets

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1433: AssertionError
_____________ TestSceneResampling.test_resample_reduce_data_toggle _____________

self = {DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_lik...: []
    optional_datasets:       []
area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_...}
item = 'comp19'

    def __getitem__(self, item):
        """Get item from container."""
        try:
            # short circuit - try to get the object without more work
          return super(DatasetDict, self).__getitem__(item)
E           KeyError: 'comp19'

/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:169: KeyError

During handling of the above exception, another exception occurred:

self = <satpy.tests.test_scene.TestSceneResampling object at 0x7f1e014664a0>
rs = <MagicMock name='resample_dataset' id='139766129303824'>

    @mock.patch('satpy.scene.resample_dataset')
    def test_resample_reduce_data_toggle(self, rs):
"""Test that the Scene can be reduced or not reduced during resampling."""
        from pyresample.geometry import AreaDefinition
            rs.side_effect = self._fake_resample_dataset_force_20x20
        proj_str = ('+proj=lcc +datum=WGS84 +ellps=WGS84 '
                    '+lon_0=-95. +lat_0=25 +lat_1=25 +units=m +no_defs')
target_area = AreaDefinition('test', 'test', 'test', proj_str, 4, 4, (-1000., -1500., 1000., 1500.)) area_def = AreaDefinition('test', 'test', 'test', proj_str, 5, 5, (-1000., -1500., 1000., 1500.))
        area_def.get_area_slices = mock.MagicMock()
        get_area_slices = area_def.get_area_slices
get_area_slices.return_value = (slice(0, 3, None), slice(0, 3, None)) area_def_big = AreaDefinition('test', 'test', 'test', proj_str, 10, 10, (-1000., -1500., 1000., 1500.))
        area_def_big.get_area_slices = mock.MagicMock()
        get_area_slices_big = area_def_big.get_area_slices
get_area_slices_big.return_value = (slice(0, 6, None), slice(0, 6, None))
            # Test that data reduction can be disabled
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp19'])
      scene['comp19'].attrs['area'] = area_def

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1665:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/satpy/scene.py:803: in __getitem__
    return self._datasets[key]
/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:171: in __getitem__
    key = self.get_key(item)
/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:158: in get_key
    return get_key(match_key, self.keys(), num_results=num_results,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
key = DataQuery(name='comp19')
key_container = [DataID(name='comp13'), DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()), DataID(name='ds5', resolution=250, modifiers=('res_change',))]
num_results = 1, best = True, query = None, kwargs = {}, res = []

    def get_key(key, key_container, num_results=1, best=True, query=None,
                **kwargs):
        """Get the fully-specified key best matching the provided key.
        Only the best match is returned if `best` is `True` (default).
        See `get_best_dataset_key` for more information on how this is
        determined.

        `query` is provided as a convenience to filter by multiple parameters
        at once without having to filter by multiple `key` inputs.

        Args:
            key (DataID): DataID of query parameters to use for
                             searching. Any parameter that is `None`
                             is considered a wild card and any match is
                             accepted.
            key_container (dict or set): Container of DataID objects that
                             uses hashing to quickly access items.
            num_results (int): Number of results to return. Use `0` for all
                               matching results. If `1` then the single
                               matching key is returned instead of a list
                               of length 1. (default: 1)
            best (bool): Sort results to get "best" result first
                         (default: True). See `get_best_dataset_key` for
                         details.
            query (DataQuery): filter for the key which can contain for
                               example:

                resolution (float, int, or list): Resolution of the dataset in
                                                dataset units (typically
                                                meters). This can also be a
                                                list of these numbers.
                calibration (str or list): Dataset calibration
                                        (ex.'reflectance'). This can also be a
                                        list of these strings.
                polarization (str or list): Dataset polarization
                                            (ex.'V'). This can also be a
                                            list of these strings.
                level (number or list): Dataset level (ex. 100). This can
                                        also be a list of these numbers.
                modifiers (list): Modifiers applied to the dataset. Unlike
                                resolution and calibration this is the exact
                                desired list of modifiers for one dataset,
                                not a list of possible modifiers.

        Returns:
            list or DataID: Matching key(s)

        Raises:
            KeyError if no matching results or if more than one result is
            found when `num_results` is `1`.

        """
        key = create_filtered_query(key, query)
        res = key.filter_dataids(key_container)
        if not res:
          raise KeyError("No dataset matching '{}' found".format(str(key)))
E           KeyError: "No dataset matching 'DataQuery(name='comp19')' found"

/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:107: KeyError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp19')
_________________ TestSceneResampling.test_resample_ancillary __________________

self = {DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_lik...: []
    optional_datasets:       []
area: Shape: (10, 10)\nLons: <xarray.DataArray 'zeros_...}
item = 'comp19'

    def __getitem__(self, item):
        """Get item from container."""
        try:
            # short circuit - try to get the object without more work
          return super(DatasetDict, self).__getitem__(item)
E           KeyError: 'comp19'

/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:169: KeyError

During handling of the above exception, another exception occurred:

self = <satpy.tests.test_scene.TestSceneResampling object at 0x7f1e01464700>

    def test_resample_ancillary(self):
"""Test that the Scene reducing data does not affect final output."""
        from pyresample.geometry import AreaDefinition
        from pyresample.utils import proj4_str_to_dict
proj_dict = proj4_str_to_dict('+proj=lcc +datum=WGS84 +ellps=WGS84 '
                                      '+lon_0=-95. +lat_0=25 +lat_1=25 '
                                      '+units=m +no_defs')
area_def = AreaDefinition('test', 'test', 'test', proj_dict, 5, 5, (-1000., -1500., 1000., 1500.))
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
            scene.load(['comp19', 'comp20'])
      scene['comp19'].attrs['area'] = area_def

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1707:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/satpy/scene.py:803: in __getitem__
    return self._datasets[key]
/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:171: in __getitem__
    key = self.get_key(item)
/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:158: in get_key
    return get_key(match_key, self.keys(), num_results=num_results,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
key = DataQuery(name='comp19')
key_container = [DataID(name='comp13'), DataID(name='comp20'), DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()), DataID(name='ds5', resolution=250, modifiers=('res_change',))]
num_results = 1, best = True, query = None, kwargs = {}, res = []

    def get_key(key, key_container, num_results=1, best=True, query=None,
                **kwargs):
        """Get the fully-specified key best matching the provided key.
        Only the best match is returned if `best` is `True` (default).
        See `get_best_dataset_key` for more information on how this is
        determined.

        `query` is provided as a convenience to filter by multiple parameters
        at once without having to filter by multiple `key` inputs.

        Args:
            key (DataID): DataID of query parameters to use for
                             searching. Any parameter that is `None`
                             is considered a wild card and any match is
                             accepted.
            key_container (dict or set): Container of DataID objects that
                             uses hashing to quickly access items.
            num_results (int): Number of results to return. Use `0` for all
                               matching results. If `1` then the single
                               matching key is returned instead of a list
                               of length 1. (default: 1)
            best (bool): Sort results to get "best" result first
                         (default: True). See `get_best_dataset_key` for
                         details.
            query (DataQuery): filter for the key which can contain for
                               example:

                resolution (float, int, or list): Resolution of the dataset in
                                                dataset units (typically
                                                meters). This can also be a
                                                list of these numbers.
                calibration (str or list): Dataset calibration
                                        (ex.'reflectance'). This can also be a
                                        list of these strings.
                polarization (str or list): Dataset polarization
                                            (ex.'V'). This can also be a
                                            list of these strings.
                level (number or list): Dataset level (ex. 100). This can
                                        also be a list of these numbers.
                modifiers (list): Modifiers applied to the dataset. Unlike
                                resolution and calibration this is the exact
                                desired list of modifiers for one dataset,
                                not a list of possible modifiers.

        Returns:
            list or DataID: Matching key(s)

        Raises:
            KeyError if no matching results or if more than one result is
            found when `num_results` is `1`.

        """
        key = create_filtered_query(key, query)
        res = key.filter_dataids(key_container)
        if not res:
          raise KeyError("No dataset matching '{}' found".format(str(key)))
E           KeyError: "No dataset matching 'DataQuery(name='comp19')' found"

/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:107: KeyError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp19')
________________ TestSceneResampling.test_resample_reduce_data _________________

self = {DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()): <xarray.DataArray 'zeros_lik...: []
    optional_datasets:       []
area: Shape: (20, 20)\nLons: <xarray.DataArray 'zeros_...}
item = 'comp19'

    def __getitem__(self, item):
        """Get item from container."""
        try:
            # short circuit - try to get the object without more work
          return super(DatasetDict, self).__getitem__(item)
E           KeyError: 'comp19'

/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:169: KeyError

During handling of the above exception, another exception occurred:

self = <satpy.tests.test_scene.TestSceneResampling object at 0x7f1e01467a90>

    def test_resample_reduce_data(self):
"""Test that the Scene reducing data does not affect final output."""
        from pyresample.geometry import AreaDefinition
        proj_str = ('+proj=lcc +datum=WGS84 +ellps=WGS84 '
                    '+lon_0=-95. +lat_0=25 +lat_1=25 +units=m +no_defs')
area_def = AreaDefinition('test', 'test', 'test', proj_str, 20, 20, (-1000., -1500., 1000., 1500.))
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
            scene.load(['comp19'])
      scene['comp19'].attrs['area'] = area_def

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1728:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/satpy/scene.py:803: in __getitem__
    return self._datasets[key]
/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:171: in __getitem__
    key = self.get_key(item)
/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:158: in get_key
    return get_key(match_key, self.keys(), num_results=num_results,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
key = DataQuery(name='comp19')
key_container = [DataID(name='comp13'), DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=()), DataID(name='ds5', resolution=250, modifiers=('res_change',))]
num_results = 1, best = True, query = None, kwargs = {}, res = []

    def get_key(key, key_container, num_results=1, best=True, query=None,
                **kwargs):
        """Get the fully-specified key best matching the provided key.
        Only the best match is returned if `best` is `True` (default).
        See `get_best_dataset_key` for more information on how this is
        determined.

        `query` is provided as a convenience to filter by multiple parameters
        at once without having to filter by multiple `key` inputs.

        Args:
            key (DataID): DataID of query parameters to use for
                             searching. Any parameter that is `None`
                             is considered a wild card and any match is
                             accepted.
            key_container (dict or set): Container of DataID objects that
                             uses hashing to quickly access items.
            num_results (int): Number of results to return. Use `0` for all
                               matching results. If `1` then the single
                               matching key is returned instead of a list
                               of length 1. (default: 1)
            best (bool): Sort results to get "best" result first
                         (default: True). See `get_best_dataset_key` for
                         details.
            query (DataQuery): filter for the key which can contain for
                               example:

                resolution (float, int, or list): Resolution of the dataset in
                                                dataset units (typically
                                                meters). This can also be a
                                                list of these numbers.
                calibration (str or list): Dataset calibration
                                        (ex.'reflectance'). This can also be a
                                        list of these strings.
                polarization (str or list): Dataset polarization
                                            (ex.'V'). This can also be a
                                            list of these strings.
                level (number or list): Dataset level (ex. 100). This can
                                        also be a list of these numbers.
                modifiers (list): Modifiers applied to the dataset. Unlike
                                resolution and calibration this is the exact
                                desired list of modifiers for one dataset,
                                not a list of possible modifiers.

        Returns:
            list or DataID: Matching key(s)

        Raises:
            KeyError if no matching results or if more than one result is
            found when `num_results` is `1`.

        """
        key = create_filtered_query(key, query)
        res = key.filter_dataids(key_container)
        if not res:
          raise KeyError("No dataset matching '{}' found".format(str(key)))
E           KeyError: "No dataset matching 'DataQuery(name='comp19')' found"

/usr/lib/python3/dist-packages/satpy/dataset/data_dict.py:107: KeyError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp19')
_________________ TestSceneResampling.test_no_generate_comp10 __________________

self = <satpy.tests.test_scene.TestSceneResampling object at 0x7f1e014665f0>
rs = <MagicMock name='resample_dataset' id='139767046526560'>

    @mock.patch('satpy.scene.resample_dataset')
    def test_no_generate_comp10(self, rs):
        """Test generating a composite after loading."""
        from pyresample.geometry import AreaDefinition
        from pyresample.utils import proj4_str_to_dict
        rs.side_effect = self._fake_resample_dataset
        proj_dict = proj4_str_to_dict('+proj=lcc +datum=WGS84 +ellps=WGS84 '
                                      '+lon_0=-95. +lat_0=25 +lat_1=25 '
                                      '+units=m +no_defs')
        area_def = AreaDefinition(
            'test',
            'test',
            'test',
            proj_dict,
            200,
            400,
            (-1000., -1500., 1000., 1500.),
        )
        # it is fine that an optional prereq doesn't exist
        scene = Scene(filenames=['fake1_1.txt'], reader='fake1')
        scene.load(['comp10'], generate=False)
        assert any(ds_id['name'] == 'comp10' for ds_id in scene._wishlist)
        assert 'comp10' not in scene
        # two dependencies should have been loaded
        assert len(scene._datasets) == 2
        assert len(scene.missing_datasets) == 1
        new_scn = scene.resample(area_def, generate=False)
        assert 'comp10' not in scene
        # two dependencies should have been loaded
        assert len(scene._datasets) == 2
        assert len(scene.missing_datasets) == 1
        new_scn._generate_composites_from_loaded_datasets()
        assert any(ds_id['name'] == 'comp10' for ds_id in new_scn._wishlist)
      assert 'comp10' in new_scn
E AssertionError: assert 'comp10' in <satpy.scene.Scene object at 0x7f1db8963550>

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1778: AssertionError
______ TestSceneResampling.test_comp_loading_after_resampling_new_sensor _______

self = <satpy.tests.test_scene.TestSceneResampling object at 0x7f1e01466380>

    def test_comp_loading_after_resampling_new_sensor(self):
"""Test requesting a composite after resampling when the sensor composites weren't loaded before."""
        # this is our base Scene with sensor "fake_sensor2"
        scene1 = Scene(filenames=['fake2_3ds_1.txt'], reader='fake2_3ds')
        scene1.load(["ds2"])
        new_scn = scene1.resample(resampler='native')
        # Can't load from readers after resampling
        with pytest.raises(KeyError):
            new_scn.load(["ds3"])
        # Can't load the composite from fake_sensor composites yet
        # 'ds1' is missing
        with pytest.raises(KeyError):
            new_scn.load(["comp2"])
        # artificial DataArray "created by the user"
        # mimics a user adding their own data with the same sensor
        user_da = scene1["ds2"].copy()
        user_da.attrs["name"] = "ds1"
        user_da.attrs["sensor"] = {"fake_sensor2"}
        # Add 'ds1' that doesn't provide the 'fake_sensor' sensor
        new_scn["ds1"] = user_da
        with pytest.raises(KeyError):
            new_scn.load(["comp2"])
        assert "comp2" not in new_scn
        # artificial DataArray "created by the user"
        # mimics a user adding their own data with its own sensor to the Scene
        user_da = scene1["ds2"].copy()
        user_da.attrs["name"] = "ds1"
        user_da.attrs["sensor"] = {"fake_sensor"}
        # Now 'fake_sensor' composites have been loaded
        new_scn["ds1"] = user_da
        new_scn.load(["comp2"])
      assert "comp2" in new_scn
E AssertionError: assert 'comp2' in <satpy.scene.Scene object at 0x7f1db8d45a50>

/usr/lib/python3/dist-packages/satpy/tests/test_scene.py:1837: AssertionError
------------------------------ Captured log call -------------------------------
WARNING satpy.scene:scene.py:1275 The following datasets were not created and may require resampling to be generated: DataID(name='comp2')
=============================== warnings summary ===============================
../../../../usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:453
/usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:453: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    ('GsicsCalMode', np.bool),

../../../../usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:454
/usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:454: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    ('GsicsCalValidity', np.bool),

../../../../usr/lib/python3/dist-packages/satpy/tests/reader_tests/test_mviri_l1b_fiduceo_nc.py:541

/usr/lib/python3/dist-packages/satpy/tests/reader_tests/test_mviri_l1b_fiduceo_nc.py:541: PytestUnknownMarkWarning: Unknown pytest.mark.file_handler_data - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
    @pytest.mark.file_handler_data(mask_bad_quality=False)

tests/test_composites.py::TestMatchDataArrays::test_nondimensional_coords
tests/test_composites.py::TestMatchDataArrays::test_nondimensional_coords
tests/reader_tests/test_goes_imager_nc.py::GOESNCEUMFileHandlerRadianceTest::test_get_dataset_radiance
tests/reader_tests/test_goes_imager_nc.py::GOESNCEUMFileHandlerRadianceTest::test_get_dataset_radiance
tests/reader_tests/test_goes_imager_nc.py::GOESNCEUMFileHandlerRadianceTest::test_get_dataset_radiance
tests/reader_tests/test_goes_imager_nc.py::GOESNCEUMFileHandlerRadianceTest::test_get_dataset_radiance
tests/reader_tests/test_goes_imager_nc.py::GOESNCEUMFileHandlerReflectanceTest::test_get_dataset_reflectance
/usr/lib/python3/dist-packages/xarray/core/dataarray.py:2837: PendingDeprecationWarning: dropping variables using `drop` will be deprecated; using drop_vars is encouraged.
    ds = self._to_temp_dataset().drop(labels, dim, errors=errors, **labels_kwargs)

tests/test_config.py: 137 warnings
tests/test_multiscene.py: 8 warnings
tests/test_resample.py: 14 warnings
tests/test_scene.py: 4 warnings
tests/test_writers.py: 4 warnings
tests/test_yaml_reader.py: 3 warnings
tests/modifier_tests/test_parallax.py: 39 warnings
tests/reader_tests/test_ahi_hsd.py: 2 warnings
tests/reader_tests/test_cmsaf_claas.py: 12 warnings
tests/reader_tests/test_fci_l1c_nc.py: 16 warnings
tests/reader_tests/test_fci_l2_nc.py: 1 warning
tests/reader_tests/test_geocat.py: 6 warnings
tests/reader_tests/test_geos_area.py: 1 warning
tests/reader_tests/test_goes_imager_hrit.py: 1 warning
tests/reader_tests/test_gpm_imerg.py: 1 warning
tests/reader_tests/test_hrit_base.py: 1 warning
tests/reader_tests/test_mviri_l1b_fiduceo_nc.py: 12 warnings
tests/reader_tests/test_nwcsaf_msg.py: 1 warning
tests/reader_tests/test_nwcsaf_nc.py: 3 warnings
tests/reader_tests/test_oceancolorcci_l3_nc.py: 1 warning
tests/reader_tests/test_seviri_l1b_hrit.py: 3 warnings
tests/reader_tests/test_seviri_l1b_native.py: 2 warnings
tests/writer_tests/test_mitiff.py: 24 warnings
/usr/lib/python3/dist-packages/pyproj/crs/crs.py:1286: UserWarning: You will likely lose important projection information when converting to a PROJ string from another format. See: https://proj.org/faq.html#what-is-the-best-format-for-describing-coordinate-reference-systems
    proj = self._crs.to_proj4(version=version)

tests/test_config.py::TestPluginsConfigs::test_plugin_reader_available_readers
/usr/lib/python3/dist-packages/pygac/reader.py:45: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
    from distutils.version import LooseVersion

tests/test_config.py::TestPluginsConfigs::test_plugin_writer_available_writers
/usr/lib/python3/dist-packages/pyninjotiff/tifffile.py:154: UserWarning: failed to import the optional _tifffile C extension module.
  Loading of some compressed images will be slow.
  Tifffile.c can be obtained at http://www.lfd.uci.edu/~gohlke/
    warnings.warn(

tests/test_dataset.py::test_combine_dicts_close
/usr/lib/python3/dist-packages/satpy/tests/test_dataset.py:329: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    'e': np.str('bar'),

tests/test_dataset.py::test_combine_dicts_close
/usr/lib/python3/dist-packages/satpy/tests/test_dataset.py:342: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    'e': np.str('bar'),

tests/test_dataset.py::test_combine_dicts_different[test_mda5]
/usr/lib/python3/dist-packages/satpy/dataset/metadata.py:199: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
    res = comp_func(a, b)

tests/test_dataset.py::TestIDQueryInteractions::test_seviri_hrv_has_priority_over_vis008
/usr/lib/python3/dist-packages/satpy/tests/test_dataset.py:688: UserWarning: Attribute access to DataIDs is deprecated, use key access instead.
    assert res[0].name == "HRV"

tests/test_dependency_tree.py::TestMultipleSensors::test_compositor_loaded_sensor_order

/usr/lib/python3/dist-packages/satpy/tests/test_dependency_tree.py:222: UserWarning: Attribute access to DataIDs is deprecated, use key access instead.
    self.assertEqual(comp_nodes[0].name.resolution, 500)

tests/test_readers.py::TestReaderLoader::test_missing_requirements
/usr/lib/python3/dist-packages/satpy/readers/yaml_reader.py:509: UserWarning: No handler for reading requirement 'HRIT_EPI' for H-000-MSG4__-MSG4________-IR_108___-000006___-201809050900-__
    warnings.warn(msg)

tests/test_readers.py::TestReaderLoader::test_missing_requirements
/usr/lib/python3/dist-packages/satpy/readers/yaml_reader.py:509: UserWarning: No handler for reading requirement 'HRIT_PRO' for H-000-MSG4__-MSG4________-IR_108___-000006___-201809050900-__
    warnings.warn(msg)

tests/test_readers.py::TestReaderLoader::test_missing_requirements
/usr/lib/python3/dist-packages/satpy/readers/yaml_reader.py:512: UserWarning: No matching requirement file of type HRIT_PRO for H-000-MSG4__-MSG4________-IR_108___-000006___-201809051000-__
    warnings.warn(str(err) + ' for {}'.format(filename))

tests/test_regressions.py::test_1088
/usr/lib/python3/dist-packages/numpy/lib/function_base.py:1292: RuntimeWarning: invalid value encountered in subtract
    a = op(a[slice1], a[slice2])

tests/test_resample.py::TestHLResample::test_type_preserve
tests/test_resample.py::TestHLResample::test_type_preserve
/usr/lib/python3/dist-packages/pyresample/geometry.py:658: DeprecationWarning: This function is deprecated. See: https://pyproj4.github.io/pyproj/stable/gotchas.html#upgrading-to-pyproj-2-from-pyproj-1
    xyz = np.stack(transform(src, dst, lons, lats, alt), axis=1)

tests/test_resample.py::TestKDTreeResampler::test_check_numpy_cache
/usr/lib/python3/dist-packages/satpy/resample.py:554: UserWarning: Using Numpy files as resampling cache is deprecated.
    warnings.warn("Using Numpy files as resampling cache is "

tests/test_resample.py::TestBucketAvg::test_compute_and_not_use_skipna_handling
tests/test_resample.py::TestBucketAvg::test_compute_and_not_use_skipna_handling
tests/test_resample.py::TestBucketSum::test_compute_and_not_use_skipna_handling
tests/test_resample.py::TestBucketSum::test_compute_and_not_use_skipna_handling
/usr/lib/python3/dist-packages/satpy/resample.py:1100: DeprecationWarning: Argument mask_all_nan is deprecated.Please update Pyresample and use skipna for missing values handling.
    warnings.warn('Argument mask_all_nan is deprecated.'

tests/test_resample.py::TestBucketAvg::test_compute_and_use_skipna_handling
tests/test_resample.py::TestBucketSum::test_compute_and_use_skipna_handling
/usr/lib/python3/dist-packages/satpy/resample.py:1095: DeprecationWarning: Argument mask_all_nan is deprecated. Please use skipna for missing values handling. Continuing with default skipna=True, if not provided differently.
    warnings.warn('Argument mask_all_nan is deprecated. Please use skipna for missing values handling. '

tests/test_scene.py: 2 warnings
tests/test_writers.py: 14 warnings
tests/writer_tests/test_geotiff.py: 5 warnings
/usr/lib/python3/dist-packages/rasterio/__init__.py:287: NotGeoreferencedWarning: Dataset has no geotransform, gcps, or rpcs. The identity matrix will be returned.
    dataset = writer(

tests/test_scene.py: 3 warnings
tests/test_writers.py: 10 warnings
tests/reader_tests/test_aapp_l1b.py: 3 warnings
tests/writer_tests/test_geotiff.py: 3 warnings
tests/writer_tests/test_mitiff.py: 5 warnings
tests/writer_tests/test_simple_image.py: 2 warnings
/usr/lib/python3/dist-packages/dask/core.py:119: RuntimeWarning: divide by zero encountered in true_divide
    return func(*(_execute_task(a, cache) for a in args))

tests/test_scene.py: 3 warnings
tests/test_writers.py: 10 warnings
tests/writer_tests/test_geotiff.py: 3 warnings
tests/writer_tests/test_mitiff.py: 5 warnings
tests/writer_tests/test_simple_image.py: 2 warnings
/usr/lib/python3/dist-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in multiply
    return func(*(_execute_task(a, cache) for a in args))

tests/enhancement_tests/test_enhancements.py::TestEnhancementStretch::test_crefl_scaling
/usr/lib/python3/dist-packages/satpy/enhancements/__init__.py:129: DeprecationWarning: 'crefl_scaling' is deprecated, use 'piecewise_linear_stretch' instead.
    warnings.warn("'crefl_scaling' is deprecated, use 'piecewise_linear_stretch' instead.", DeprecationWarning)

tests/enhancement_tests/test_enhancements.py::TestColormapLoading::test_cmap_list
/usr/lib/python3/dist-packages/trollimage/colormap.py:229: UserWarning: Colormap 'colors' should be flotaing point numbers between 0 and 1.
    warnings.warn("Colormap 'colors' should be flotaing point numbers between 0 and 1.")

tests/modifier_tests/test_parallax.py: 46 warnings
/usr/lib/python3/dist-packages/pyresample/geometry.py:1545: PendingDeprecationWarning: 'name' is deprecated, use 'description' instead.
    warnings.warn("'name' is deprecated, use 'description' instead.", PendingDeprecationWarning)

tests/modifier_tests/test_parallax.py: 37 warnings
/usr/lib/python3/dist-packages/satpy/modifiers/parallax.py:390: UserWarning: Overlap checking not impelemented. Waiting for fix for https://github.com/pytroll/pyresample/issues/329
    warnings.warn(

tests/modifier_tests/test_parallax.py::TestParallaxCorrectionClass::test_correct_area_clearsky_different_resolutions[0.08-0.3]
tests/modifier_tests/test_parallax.py::TestParallaxCorrectionClass::test_correct_area_clearsky_different_resolutions[0.3-0.08]
/usr/lib/python3/dist-packages/_pytest/python.py:192: PytestRemovedIn8Warning: Passing None has been deprecated. See https://docs.pytest.org/en/latest/how-to/capture-warnings.html#additional-use-cases-of-warnings-in-tests for alternatives in common use cases.
    result = testfunction(**testargs)

tests/modifier_tests/test_parallax.py::TestParallaxCorrectionModifier::test_parallax_modifier_interface_with_cloud
/usr/lib/python3/dist-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in cos
    return func(*(_execute_task(a, cache) for a in args))

tests/modifier_tests/test_parallax.py::TestParallaxCorrectionModifier::test_parallax_modifier_interface_with_cloud
/usr/lib/python3/dist-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in sin
    return func(*(_execute_task(a, cache) for a in args))

tests/modifier_tests/test_parallax.py::TestParallaxCorrectionModifier::test_parallax_modifier_interface_with_cloud
/usr/lib/python3/dist-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in remainder
    return func(*(_execute_task(a, cache) for a in args))

tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_navigation
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_navigation
/usr/lib/python3/dist-packages/geotiepoints/geointerpolator.py:81: RuntimeWarning: invalid value encountered in arcsin
    lats = np.sign(z__) * (90 - np.rad2deg(np.arcsin(np.sqrt(x__ ** 2 + y__ ** 2) / radius)))

tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_aapp_l1b.py::TestAAPPL1BAllChannelsPresent::test_read
tests/reader_tests/test_ahi_hsd.py::TestAHICalibration::test_default_calibrate
/usr/lib/python3/dist-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in log
    return func(*(_execute_task(a, cache) for a in args))

tests/reader_tests/test_aapp_mhs_amsub_l1c.py::TestMHS_AMSUB_AAPPL1CReadData::test_read
tests/reader_tests/test_aapp_mhs_amsub_l1c.py::TestMHS_AMSUB_AAPPL1CReadData::test_read
tests/reader_tests/test_aapp_mhs_amsub_l1c.py::TestMHS_AMSUB_AAPPL1CReadData::test_read
tests/reader_tests/test_aapp_mhs_amsub_l1c.py::TestMHS_AMSUB_AAPPL1CReadData::test_read
tests/reader_tests/test_aapp_mhs_amsub_l1c.py::TestMHS_AMSUB_AAPPL1CReadData::test_read

/usr/lib/python3/dist-packages/satpy/readers/aapp_mhs_amsub_l1c.py:402: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    channel = channel.astype(np.float)

tests/reader_tests/test_abi_l2_nc.py::TestMCMIPReading::test_mcmip_get_dataset
/usr/lib/python3/dist-packages/satpy/readers/abi_l2_nc.py:41: UserWarning: Attribute access to DataIDs is deprecated, use key access instead.
    var += "_" + key.name

tests/reader_tests/test_ahi_hsd.py: 5 warnings
tests/reader_tests/test_utils.py: 6 warnings
/usr/lib/python3/dist-packages/satpy/readers/utils.py:352: DeprecationWarning: This function is deprecated. See: https://pyproj4.github.io/pyproj/stable/gotchas.html#upgrading-to-pyproj-2-from-pyproj-1
    x, y, z = pyproj.transform(latlong, geocent, lon, lat, 0.)

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block1 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block2 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block3 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block4 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block5 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block6 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block7 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block8 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block9 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block10 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_read_band
tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_scene_loading
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:515: UserWarning: Actual block11 header size does not match expected
    warnings.warn(f"Actual {block} header size does not match expected")

tests/reader_tests/test_ahi_hsd.py::TestAHIHSDFileHandler::test_time_rounding
/usr/lib/python3/dist-packages/satpy/readers/ahi_hsd.py:461: UserWarning: Observation timeline is fill value, not rounding observation time.
    warnings.warn("Observation timeline is fill value, not rounding observation time.")

tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDF::test_get_dataset
tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDF::test_get_dataset_counts
tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDF::test_get_dataset_vis
tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDFIRCal::test_default_calibrate
tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDFIRCal::test_gsics_radiance_corr
tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDFIRCal::test_infile_calibrate
tests/reader_tests/test_ami_l1b.py::TestAMIL1bNetCDFIRCal::test_user_radiance_corr
/usr/lib/python3/dist-packages/satpy/readers/ami_l1b.py:165: DeprecationWarning: This function is deprecated. See: https://pyproj4.github.io/pyproj/stable/gotchas.html#upgrading-to-pyproj-2-from-pyproj-1
    sc_position = pyproj.transform(
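
For reference, the replacement pyproj >= 2 points to is the Transformer API rather than the deprecated pyproj.transform() call flagged above. A minimal sketch, with illustrative CRS codes and coordinates (not taken from the reader):

    from pyproj import Transformer

    # always_xy=True keeps the traditional (lon, lat) / (x, y) axis order.
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
    x, y = transformer.transform(12.5, 55.7)  # lon, lat -> projected metres
    print(x, y)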

tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTGetCalibratedReflectances::test_calibrated_reflectances_values
tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTGetCalibratedBT::test_calibrated_bt_values
tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTChannel3::test_channel_3a_masking
tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTChannel3::test_channel_3b_masking
tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTNavigation::test_latitudes_are_returned
tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTNavigation::test_longitudes_are_returned
/usr/lib/python3/dist-packages/satpy/readers/hrpt.py:80: DeprecationWarning: parsing timezone aware datetimes is deprecated; this will raise an error in the future
    return (np.datetime64(

tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTGetCalibratedReflectances::test_calibrated_reflectances_values
tests/reader_tests/test_avhrr_l0_hrpt.py::TestHRPTChannel3::test_channel_3a_masking
/usr/lib/python3/dist-packages/satpy/readers/hrpt.py:227: DeprecationWarning: parsing timezone aware datetimes is deprecated; this will raise an error in the future
    - np.datetime64(str(self.year) + '-01-01T00:00:00Z'))
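
The hrpt.py warnings above come from passing a timezone suffix ('Z') to np.datetime64, which NumPy deprecates because datetime64 values are timezone-naive. A minimal sketch of the naive form, with an illustrative year:

    import numpy as np

    year = 2022  # illustrative
    start = np.datetime64(f"{year}-01-01T00:00:00")    # naive timestamp, no warning
    # np.datetime64(f"{year}-01-01T00:00:00Z")         # would trigger the deprecation
    print(start)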

tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCFileHandler::test_area_definition
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCFileHandler::test_dataset
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCFileHandler::test_dataset_with_layer
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCFileHandler::test_dataset_with_total_cot
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCReadingByteData::test_byte_extraction
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCReadingByteData::test_byte_extraction
/usr/lib/python3/dist-packages/satpy/readers/fci_l2_nc.py:255: UserWarning: Attribute access to DataIDs is deprecated, use key access instead.
    res = dataset_id.resolution
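
The DataID deprecation above asks for dict-style key access instead of attribute access. A minimal sketch, with a plain dict standing in for satpy's DataID and illustrative values:

    dataset_id = {"name": "cloud_top_height", "resolution": 2000}  # illustrative stand-in
    res = dataset_id["resolution"]      # preferred key access
    # res = dataset_id.resolution       # deprecated attribute access on a real DataID
    print(res)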

tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef_and_wrongs_dims
/usr/lib/python3/dist-packages/satpy/readers/fci_l2_nc.py:363: UserWarning: Attribute access to DataIDs is deprecated, use key access instead.
    res = dataset_id.resolution

tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef_and_wrongs_dims
/usr/lib/python3/dist-packages/pyresample/geometry.py:1291: PendingDeprecationWarning: 'x_size' is deprecated, use 'width' instead.
    warnings.warn("'x_size' is deprecated, use 'width' instead.", PendingDeprecationWarning)

tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCSegmentFileHandler::test_dataset_with_adef_and_wrongs_dims
/usr/lib/python3/dist-packages/pyresample/geometry.py:1297: PendingDeprecationWarning: 'y_size' is deprecated, use 'height' instead.
    warnings.warn("'y_size' is deprecated, use 'height' instead.", PendingDeprecationWarning)
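
The pyresample warnings above only concern attribute names: width/height replace x_size/y_size on area definitions. A minimal sketch with an illustrative lat/lon AreaDefinition:

    from pyresample.geometry import AreaDefinition

    area = AreaDefinition("test_area", "illustrative grid", "test_area",
                          "EPSG:4326", 100, 50, (-10.0, 40.0, 10.0, 50.0))
    ncols, nrows = area.width, area.height     # preferred
    # ncols, nrows = area.x_size, area.y_size  # deprecated, triggers the warning
    print(ncols, nrows)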

tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCReadingByteData::test_byte_extraction
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCReadingByteData::test_byte_extraction
/usr/lib/python3/dist-packages/satpy/readers/fci_l2_nc.py:239: RuntimeWarning: Mean of empty slice.
    scale_factor = (x[1:]-x[0:-1]).values.mean()

tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCReadingByteData::test_byte_extraction
tests/reader_tests/test_fci_l2_nc.py::TestFciL2NCReadingByteData::test_byte_extraction
/usr/lib/python3/dist-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in true_divide
    ret = ret.dtype.type(ret / rcount)

tests/reader_tests/test_generic_image.py::TestGenericImage::test_GenericImageFileHandler
tests/reader_tests/test_generic_image.py::TestGenericImage::test_GenericImageFileHandler_datasetid
tests/reader_tests/test_generic_image.py::TestGenericImage::test_GenericImageFileHandler_nodata
tests/reader_tests/test_generic_image.py::TestGenericImage::test_geotiff_scene
tests/reader_tests/test_generic_image.py::TestGenericImage::test_geotiff_scene
tests/reader_tests/test_generic_image.py::TestGenericImage::test_geotiff_scene_nan
tests/reader_tests/test_generic_image.py::TestGenericImage::test_geotiff_scene_nan
tests/reader_tests/test_generic_image.py::TestGenericImage::test_png_scene
tests/reader_tests/test_generic_image.py::TestGenericImage::test_png_scene
/usr/lib/python3/dist-packages/satpy/readers/generic_image.py:78: DeprecationWarning: open_rasterio is Deprecated in favor of rioxarray. For information about transitioning, see: https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html
    data = xr.open_rasterio(dataset, chunks=(1, CHUNK_SIZE, CHUNK_SIZE))
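
The generic_image warning points to rioxarray as the replacement for xr.open_rasterio. A minimal sketch, with an illustrative file name and chunk sizes:

    import rioxarray

    data = rioxarray.open_rasterio("image.tif", chunks={"band": 1, "x": 4096, "y": 4096})
    print(data.dims, data.rio.crs)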

tests/reader_tests/test_generic_image.py::TestGenericImage::test_png_scene
tests/reader_tests/test_generic_image.py::TestGenericImage::test_png_scene
/usr/lib/python3/dist-packages/rasterio/__init__.py:277: NotGeoreferencedWarning: Dataset has no geotransform, gcps, or rpcs. The identity matrix will be returned.
    dataset = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)

tests/reader_tests/test_modis_l1b.py::TestModisL1b::test_load_longitude_latitude[modis_l1b_nasa_mod021km_file-True-False-False-1000]

/usr/lib/python3/dist-packages/geotiepoints/simple_modis_interpolator.py:38: DeprecationWarning: Please use `map_coordinates` from the `scipy.ndimage` namespace, the `scipy.ndimage.interpolation` namespace is deprecated.
    from scipy.ndimage.interpolation import map_coordinates

tests/reader_tests/test_msi_safe.py::TestMTDXML::test_satellite_zenith_array
/usr/lib/python3/dist-packages/satpy/readers/msi_safe.py:320: RuntimeWarning: Mean of empty slice
    angles = np.nanmean(np.dstack(arrays), -1)

tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_angles
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_angles
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_angles
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_angles
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_meteo
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_meteo
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_meteo
tests/reader_tests/test_olci_nc.py::TestOLCIReader::test_olci_meteo
/usr/lib/python3/dist-packages/geotiepoints/interpolator.py:239: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
    if np.all(self.hrow_indices == self.row_indices):
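
The geotiepoints warning appears to be the generic NumPy caution about elementwise comparison of arrays that may not match in shape; np.array_equal expresses the intended "same contents" test explicitly. A minimal sketch with illustrative index arrays:

    import numpy as np

    hrow_indices = np.array([0, 1, 2])   # illustrative
    row_indices = np.array([0, 1, 2])
    if np.array_equal(hrow_indices, row_indices):
        print("indices already match, no interpolation needed")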

tests/reader_tests/test_satpy_cf_nc.py: 8 warnings
tests/writer_tests/test_cf.py: 20 warnings
/usr/lib/python3/dist-packages/satpy/writers/cf_writer.py:571: FutureWarning: The default behaviour of the CF writer will soon change to not compress data by default.
    warnings.warn("The default behaviour of the CF writer will soon change to not compress data by default.",

tests/reader_tests/test_satpy_cf_nc.py: 8 warnings
tests/writer_tests/test_cf.py: 24 warnings
/usr/lib/python3/dist-packages/satpy/writers/cf_writer.py:746: UserWarning: Dtype int64 not compatible with CF-1.7.
    warnings.warn('Dtype {} not compatible with {}.'.format(str(ds.dtype), CF_VERSION))

tests/reader_tests/test_satpy_cf_nc.py: 18 warnings
/usr/lib/python3/dist-packages/satpy/readers/satpy_cf_nc.py:240: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
    if 'modifiers' in ds_info and not ds_info['modifiers']:
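
The satpy_cf_nc warning is about relying on the truth value of a possibly empty array; NumPy asks for an explicit size check instead. A minimal sketch with illustrative metadata:

    import numpy as np

    ds_info = {"modifiers": np.array([])}    # illustrative: empty modifiers
    # Deprecated: "not ds_info['modifiers']" is ambiguous for an ndarray.
    if "modifiers" in ds_info and ds_info["modifiers"].size == 0:
        print("no modifiers applied")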

tests/reader_tests/test_satpy_cf_nc.py::TestCFReader::test_read_prefixed_channels_by_user_no_prefix
tests/writer_tests/test_cf.py::TestCFWriter::test_save_dataset_a_digit_no_prefix_include_attr
/usr/lib/python3/dist-packages/satpy/writers/cf_writer.py:566: UserWarning: Invalid NetCDF dataset name: 1 starts with a digit.
    warnings.warn('Invalid NetCDF dataset name: {} starts with a digit.'.format(name))

tests/reader_tests/test_seviri_base.py::TestOrbitPolynomialFinder::test_get_orbit_polynomial[orbit_polynomials1-time1-orbit_polynomial_exp1]
tests/reader_tests/test_seviri_base.py::TestOrbitPolynomialFinder::test_get_orbit_polynomial_exceptions[orbit_polynomials1-time1]
tests/reader_tests/test_seviri_l1b_native.py::TestNativeMSGDataset::test_satpos_no_valid_orbit_polynomial
/usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:774: UserWarning: No orbit polynomial valid for 2006-01-01T12:15:00.000000. Using closest match.
    warnings.warn(

tests/reader_tests/test_seviri_base.py::TestOrbitPolynomialFinder::test_get_orbit_polynomial_exceptions[orbit_polynomials0-time0]
/usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:774: UserWarning: No orbit polynomial valid for 2006-01-02T12:15:00.000000. Using closest match.
    warnings.warn(

tests/reader_tests/test_seviri_l1b_hrit.py::TestHRITMSGFileHandler::test_satpos_no_valid_orbit_polynomial
/usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:774: UserWarning: No orbit polynomial valid for 2006-01-01T12:15:09.304888. Using closest match.
    warnings.warn(

tests/reader_tests/test_seviri_l1b_nc.py::TestNCSEVIRIFileHandler::test_satpos_no_valid_orbit_polynomial
/usr/lib/python3/dist-packages/satpy/readers/seviri_base.py:774: UserWarning: No orbit polynomial valid for 2020-01-01T00:00:00.000000. Using closest match.
    warnings.warn(

tests/reader_tests/test_seviri_l2_grib.py::Test_SeviriL2GribFileHandler::test_data_reading
/usr/lib/python3/dist-packages/satpy/readers/seviri_l2_grib.py:114: UserWarning: Attribute access to DataIDs is deprecated, use key access instead.
    self._res = dataset_id.resolution

tests/reader_tests/test_slstr_l1b.py::TestSLSTRReader::test_instantiate
/usr/lib/python3/dist-packages/satpy/readers/slstr_l1b.py:173: UserWarning: Warning: No radiance adjustment supplied for channel foo_nadir
    warnings.warn("Warning: No radiance adjustment supplied " +

tests/reader_tests/test_smos_l2_wind.py::TestSMOSL2WINDReader::test_load_lat
tests/reader_tests/test_smos_l2_wind.py::TestSMOSL2WINDReader::test_load_lon
tests/reader_tests/test_smos_l2_wind.py::TestSMOSL2WINDReader::test_load_wind_speed
tests/writer_tests/test_mitiff.py::TestMITIFFWriter::test_convert_proj4_string
tests/writer_tests/test_mitiff.py::TestMITIFFWriter::test_convert_proj4_string
tests/writer_tests/test_mitiff.py::TestMITIFFWriter::test_convert_proj4_string
tests/writer_tests/test_mitiff.py::TestMITIFFWriter::test_convert_proj4_string
tests/writer_tests/test_mitiff.py::TestMITIFFWriter::test_convert_proj4_string
/usr/lib/python3/dist-packages/pyproj/crs/crs.py:141: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
    in_crs_string = _prepare_from_proj_string(in_crs_string)
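
The pyproj FutureWarning only concerns how the CRS is spelled: "EPSG:4326" replaces the old "+init=EPSG:4326" form. A minimal sketch (illustrative EPSG code; note the axis-order caveat in the linked gotchas page):

    from pyproj import CRS

    crs = CRS.from_user_input("EPSG:4326")           # preferred
    # crs = CRS.from_user_input("+init=EPSG:4326")   # deprecated '+init=' syntax
    print(crs.to_epsg())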

tests/reader_tests/test_viirs_compact.py::TestCompact::test_distributed
tests/reader_tests/test_viirs_compact.py::TestCompact::test_distributed
/usr/lib/python3/dist-packages/tornado/ioloop.py:265: DeprecationWarning: There is no current event loop
    loop = asyncio.get_event_loop()

tests/reader_tests/test_viirs_compact.py::TestCompact::test_distributed
/usr/lib/python3/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
    self.make_current()

tests/reader_tests/test_viirs_compact.py::TestCompact::test_distributed
/usr/lib/python3/dist-packages/tornado/platform/asyncio.py:360: DeprecationWarning: There is no current event loop
    self.old_asyncio = asyncio.get_event_loop()

tests/reader_tests/test_viirs_compact.py::TestCompact::test_distributed
/usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Profile
    Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/distributed/profile.py", line 115, in process
      d = state["children"][ident]
KeyError: 'write;/usr/lib/python3/dist-packages/distributed/comm/tcp.py;245'
    During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
    File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
      self.run()
    File "/usr/lib/python3.10/threading.py", line 953, in run
      self._target(*self._args, **self._kwargs)
File "/usr/lib/python3/dist-packages/distributed/profile.py", line 274, in _watch
      process(frame, None, recent, omit=omit)
File "/usr/lib/python3/dist-packages/distributed/profile.py", line 119, in process
      "description": info_frame(frame),
File "/usr/lib/python3/dist-packages/distributed/profile.py", line 72, in info_frame line = linecache.getline(co.co_filename, frame.f_lineno, frame.f_globals).lstrip()
    File "/usr/lib/python3.10/linecache.py", line 31, in getline
      if 1 <= lineno <= len(lines):
  TypeError: '<=' not supported between instances of 'int' and 'NoneType'
      warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))

tests/writer_tests/test_awips_tiled.py::TestAWIPSTiledWriter::test_lettered_tiles_no_valid_data
/usr/lib/python3/dist-packages/dask/array/reductions.py:569: RuntimeWarning: All-NaN slice encountered
    return np.nanmax(x_chunk, axis=axis, keepdims=keepdims)

tests/writer_tests/test_awips_tiled.py::TestAWIPSTiledWriter::test_lettered_tiles_no_valid_data
/usr/lib/python3/dist-packages/dask/array/reductions.py:540: RuntimeWarning: All-NaN slice encountered
    return np.nanmin(x_chunk, axis=axis, keepdims=keepdims)

tests/writer_tests/test_awips_tiled.py::TestAWIPSTiledWriter::test_basic_numbered_tiles_rgb
/usr/lib/python3/dist-packages/satpy/tests/writer_tests/test_awips_tiled.py:427: UserWarning: rename 'y' to 'y' does not create an index anymore. Try using swap_dims instead or use set_index after rename to create an indexed coordinate.
    ds = ds.rename(dict((old, new) for old, new in zip(ds.dims, ['bands', 'y', 'x'])))

tests/writer_tests/test_awips_tiled.py::TestAWIPSTiledWriter::test_basic_numbered_tiles_rgb
/usr/lib/python3/dist-packages/satpy/tests/writer_tests/test_awips_tiled.py:427: UserWarning: rename 'x' to 'x' does not create an index anymore. Try using swap_dims instead or use set_index after rename to create an indexed coordinate.
    ds = ds.rename(dict((old, new) for old, new in zip(ds.dims, ['bands', 'y', 'x'])))

tests/writer_tests/test_awips_tiled.py: 54 warnings
/usr/lib/python3/dist-packages/satpy/writers/awips_tiled.py:942: UserWarning: Production location attribute is longer than 31 characters (AWIPS limit). Set it to a smaller value with the 'ORGANIZATION' environment variable. Defaults to hostname and is currently set to '11111111111111111111111111111111111111111111111111'.
    warnings.warn("Production location attribute is longer than 31 "

tests/writer_tests/test_cf.py::TestCFWriter::test_groups
/usr/lib/python3/dist-packages/satpy/writers/cf_writer.py:361: UserWarning: Cannot pretty-format "acq_time" coordinates because they are not unique among the given datasets
    warnings.warn('Cannot pretty-format "{}" coordinates because they are not unique among the '

tests/writer_tests/test_cf.py::TestCFWriter::test_link_coords
/usr/lib/python3/dist-packages/satpy/writers/cf_writer.py:305: UserWarning: Coordinate "not_exist" referenced by dataarray var4 does not exist, dropping reference.
    warnings.warn('Coordinate "{}" referenced by dataarray {} does not exist, dropping reference.'

tests/writer_tests/test_cf.py::TestCFWriter::test_save_with_compression
/usr/lib/python3/dist-packages/satpy/writers/cf_writer.py:576: FutureWarning: The `compression` keyword will soon be deprecated. Please use the `encoding` of the DataArrays to tune compression from now on.
    warnings.warn("The `compression` keyword will soon be deprecated. Please use the `encoding` of the "
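
The direction the CF writer warning points in is per-variable compression via the xarray encoding mechanism rather than a global compression keyword. A minimal sketch using plain xarray, with an illustrative dataset and variable name:

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"tb": (("y", "x"), np.zeros((2, 3), dtype="float32"))})
    # zlib/complevel in the per-variable encoding control netCDF4 compression.
    ds.to_netcdf("out.nc", encoding={"tb": {"zlib": True, "complevel": 4}})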

tests/writer_tests/test_mitiff.py: 23 warnings
/usr/lib/python3/dist-packages/libtiff/libtiff_ctypes.py:644: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    elif arr.dtype in np.sctypes['uint'] + [np.bool]:
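
The libtiff warning is the NumPy 1.20 deprecation of the np.bool alias (removed in later NumPy releases); plain bool or np.bool_ is the replacement. A minimal sketch with an illustrative array:

    import numpy as np

    arr = np.array([True, False, True])   # illustrative boolean array
    if arr.dtype == np.bool_:             # use np.bool_ (or bool), not np.bool
        print("boolean array")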

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/test_modifiers.py::TestNIRReflectance::test_no_sunz_no_co2 - Ass...
FAILED tests/test_scene.py::TestSceneLoading::test_load_comp4 - AssertionErro...
FAILED tests/test_scene.py::TestSceneLoading::test_load_multiple_resolutions
FAILED tests/test_scene.py::TestSceneLoading::test_load_same_subcomposite - A...
FAILED tests/test_scene.py::TestSceneLoading::test_load_comp9 - AssertionErro...
FAILED tests/test_scene.py::TestSceneLoading::test_load_comp10 - AssertionErr...
FAILED tests/test_scene.py::TestSceneLoading::test_load_comp19 - AssertionErr...
FAILED tests/test_scene.py::TestSceneLoading::test_load_dataset_after_composite2
FAILED tests/test_scene.py::TestSceneLoading::test_no_generate_comp10 - Asser...
FAILED tests/test_scene.py::TestSceneResampling::test_resample_reduce_data_toggle
FAILED tests/test_scene.py::TestSceneResampling::test_resample_ancillary - Ke...
FAILED tests/test_scene.py::TestSceneResampling::test_resample_reduce_data - ...
FAILED tests/test_scene.py::TestSceneResampling::test_no_generate_comp10 - As...
FAILED tests/test_scene.py::TestSceneResampling::test_comp_loading_after_resampling_new_sensor
= 14 failed, 1681 passed, 8 skipped, 64 deselected, 7 xfailed, 786 warnings in 243.92s (0:04:03) =
autopkgtest [21:51:10]: test python3



--- End Message ---
--- Begin Message ---
Source: python-xarray
Source-Version: 2022.11.0-2
Done: Alastair McKinstry <mckins...@debian.org>

We believe that the bug you reported is fixed in the latest version of
python-xarray, which is due to be installed in the Debian FTP archive.

A summary of the changes between this version and the previous one is
attached.

Thank you for reporting the bug, which will now be closed.  If you
have further comments please address them to 1022...@bugs.debian.org,
and the maintainer will reopen the bug report if appropriate.

Debian distribution maintenance software
pp.
Alastair McKinstry <mckins...@debian.org> (supplier of updated python-xarray 
package)

(This message was generated automatically at their request; if you
believe that there is a problem with it please contact the archive
administrators by mailing ftpmas...@ftp-master.debian.org)


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Format: 1.8
Date: Mon, 21 Nov 2022 14:37:38 +0000
Source: python-xarray
Architecture: source
Version: 2022.11.0-2
Distribution: unstable
Urgency: medium
Maintainer: Debian Science Maintainers 
<debian-science-maintain...@lists.alioth.debian.org>
Changed-By: Alastair McKinstry <mckins...@debian.org>
Closes: 1004869 1022255 1023222
Changes:
 python-xarray (2022.11.0-2) unstable; urgency=medium
 .
   * Ack bugs fixed in this and previous releases:
     Closes: #1022255, #1004869, #1023222
Checksums-Sha1:
 25bfc4ad68c46f9f7428c7a008736478f3bd2fa7 3395 python-xarray_2022.11.0-2.dsc
 9d5fa48523ea38b6684f9b18e3d3a10a8aee2875 14092 
python-xarray_2022.11.0-2.debian.tar.xz
Checksums-Sha256:
 2921fcf4d7b5b82ed54813180ceacb2261264e99eb22e1b9bcf11e8660755e6e 3395 
python-xarray_2022.11.0-2.dsc
 c480af287094772516773b559ac385d6eed33001e1ded8aa1ddab4db721fc91b 14092 
python-xarray_2022.11.0-2.debian.tar.xz
Files:
 16ee7843761bf700d78213b6a8061f38 3395 python optional 
python-xarray_2022.11.0-2.dsc
 b483b08efbb572a27c4af434d3df17d1 14092 python optional 
python-xarray_2022.11.0-2.debian.tar.xz

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgjg86RZbNHx4cIGiy+a7Tl2a06UFAmN7jiUACgkQy+a7Tl2a
06Vb0Q//YmFju6NG15oqqygNRQiiGIGPXS+qvtwPbXKGJDq6g3rX24KdF2tP77i6
Aw9LjfskclEe7TRQCwaS/49qh+6CE+Tndv47nMGb3jnBGL0vYa9YqOn3XeBAHCcw
aYCvbkks5HxDWizJ6Kh2Ohef0okEl2Q/pHDKMDfF+Q8DEvGh7iFtedVF6ARquPSW
95u3V6QhCCQPws26cYPz1L2a+QJuEf49lohoVDC8u4EtaJHbv1NKXtQP1srEqf10
zqbSd6KDqT3mn4dONtF7etxzQGxSHRySEPJpXQ+2kkdmcueLUT5UAEazC6SCdCUB
GMCqMFbwi4kY7CaaK5646XxCm3QQUg3UeZAMHqsBI9gYModfKG8twRhlyA4N9mam
NCdKPU7mme9M5E8lY3oh2TeLCh3o52zpfEq0Y1Dq/QMSQ/HuHDsYU0Bi3QRNdEA9
vKkvV19cuN5/g7mkYLSEr7AvsfX4Nm2RKY6lo96NDNm/FIZlbL4HVwptnZnqECTy
R9ldkljopheHnEgora1f09tsti6mnypi8RwEZQ1Gd4FoXOfRcZgU5Qbz5ZTY7crj
UvkkKMreOMMDPUQ5noz2mG06XOdKkx20j5BMsf0pKbtOlR0+Uhn5/UE3IR261bTY
OVztEldVmpPWOes8mOGnzcoYDcAwHGGoey2y6VU1vDjBEm9jsjM=
=xhBQ
-----END PGP SIGNATURE-----

--- End Message ---
