Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-partd for openSUSE:Factory 
checked in at 2022-10-06 07:42:07
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-partd (Old)
 and      /work/SRC/openSUSE:Factory/.python-partd.new.2275 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-partd"

Thu Oct  6 07:42:07 2022 rev:7 rq:1008154 version:1.3.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-partd/python-partd.changes        2021-08-16 10:17:40.082653159 +0200
+++ /work/SRC/openSUSE:Factory/.python-partd.new.2275/python-partd.changes      2022-10-06 07:42:17.636699307 +0200
@@ -1,0 +2,9 @@
+Tue Oct  4 22:42:10 UTC 2022 - Yogalakshmi Arunachalam <yarunacha...@suse.com>
+
+- Update to Version 1.3.0
+  * Remove deprecated use of `Index._get_attributes_dict` (#60)
+  * Avoid `pandas` testing deprecation warning (#59)
+  * Add support for Python 3.9, drop support for Python 3.5 and 3.6 (#54)
+  * Use `is_alive` in favour of `isAlive` for Python 3.9 compatibility (#53)
+
+-------------------------------------------------------------------

Old:
----
  partd-1.2.0.tar.gz

New:
----
  partd-1.3.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-partd.spec ++++++
--- /var/tmp/diff_new_pack.2V94cZ/_old  2022-10-06 07:42:18.104700349 +0200
+++ /var/tmp/diff_new_pack.2V94cZ/_new  2022-10-06 07:42:18.108700358 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-partd
 #
-# Copyright (c) 2021 SUSE LLC
+# Copyright (c) 2022 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -18,7 +18,7 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-partd
-Version:        1.2.0
+Version:        1.3.0
 Release:        0
 Summary:        Appendable key-value storage
 License:        BSD-3-Clause

++++++ partd-1.2.0.tar.gz -> partd-1.3.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/PKG-INFO new/partd-1.3.0/PKG-INFO
--- old/partd-1.2.0/PKG-INFO    2021-04-08 19:32:24.124318600 +0200
+++ new/partd-1.3.0/PKG-INFO    2022-08-12 01:22:05.256742000 +0200
@@ -1,146 +1,145 @@
 Metadata-Version: 2.1
 Name: partd
-Version: 1.2.0
+Version: 1.3.0
 Summary: Appendable key-value storage
 Home-page: http://github.com/dask/partd/
 Maintainer: Matthew Rocklin
 Maintainer-email: mrock...@gmail.com
 License: BSD
-Description: PartD
-        =====
-        
-        |Build Status| |Version Status|
-        
-        Key-value byte store with appendable values
-        
-            Partd stores key-value pairs.
-            Values are raw bytes.
-            We append on old values.
-        
-        Partd excels at shuffling operations.
-        
-        Operations
-        ----------
-        
-        PartD has two main operations, ``append`` and ``get``.
-        
-        
-        Example
-        -------
-        
-        1.  Create a Partd backed by a directory::
-        
-                >>> import partd
-                >>> p = partd.File('/path/to/new/dataset/')
-        
-        2.  Append key-byte pairs to dataset::
-        
-                >>> p.append({'x': b'Hello ', 'y': b'123'})
-                >>> p.append({'x': b'world!', 'y': b'456'})
-        
-        3.  Get bytes associated to keys::
-        
-                >>> p.get('x')         # One key
-                b'Hello world!'
-        
-                >>> p.get(['y', 'x'])  # List of keys
-                [b'123456', b'Hello world!']
-        
-        4.  Destroy partd dataset::
-        
-                >>> p.drop()
-        
-        That's it.
-        
-        
-        Implementations
-        ---------------
-        
-        We can back a partd by an in-memory dictionary::
-        
-            >>> p = Dict()
-        
-        For larger amounts of data or to share data between processes we back a partd
-        by a directory of files.  This uses file-based locks for consistency.::
-        
-            >>> p = File('/path/to/dataset/')
-        
-        However this can fail for many small writes.  In these cases you may wish to buffer one partd with another, keeping a fixed maximum of data in the buffering partd.  This writes the larger elements of the first partd to the second partd when space runs low::
-        
-            >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
-        
-        You might also want to have many distributed process write to a single partd
-        consistently.  This can be done with a server
-        
-        *   Server Process::
-        
-                >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
-                >>> s = Server(p, address='ipc://server')
-        
-        *   Worker processes::
-        
-                >>> p = Client('ipc://server')  # Client machine talks to remote server
-        
-        
-        Encodings and Compression
-        -------------------------
-        
-        Once we can robustly and efficiently append bytes to a partd we consider
-        compression and encodings.  This is generally available with the ``Encode``
-        partd, which accepts three functions, one to apply on bytes as they are
-        written, one to apply to bytes as they are read, and one to join bytestreams.
-        Common configurations already exist for common data and compression formats.
-        
-        We may wish to compress and decompress data transparently as we interact with a
-        partd.  Objects like ``BZ2``, ``Blosc``, ``ZLib`` and ``Snappy`` exist and take
-        another partd as an argument.::
-        
-            >>> p = File(...)
-            >>> p = ZLib(p)
-        
-        These work exactly as before, the (de)compression happens automatically.
-        
-        Common data formats like Python lists, numpy arrays, and pandas
-        dataframes are also supported out of the box.::
-        
-            >>> p = File(...)
-            >>> p = NumPy(p)
-            >>> p.append({'x': np.array([...])})
-        
-        This lets us forget about bytes and think instead in our normal data types.
-        
-        Composition
-        -----------
-        
-        In principle we want to compose all of these choices together
-        
-        1.  Write policy:  ``Dict``, ``File``, ``Buffer``, ``Client``
-        2.  Encoding:  ``Pickle``, ``Numpy``, ``Pandas``, ...
-        3.  Compression:  ``Blosc``, ``Snappy``, ...
-        
-        Partd objects compose by nesting.  Here we make a partd that writes pickle
-        encoded BZ2 compressed bytes directly to disk::
-        
-            >>> p = Pickle(BZ2(File('foo')))
-        
-        We could construct more complex systems that include compression,
-        serialization, buffering, and remote access.::
-        
-            >>> server = Server(Buffer(Dict(), File(), available_memory=2e0))
-        
-            >>> client = Pickle(Snappy(Client(server.address)))
-            >>> client.append({'x': [1, 2, 3]})
-        
-        .. |Build Status| image:: https://github.com/dask/partd/workflows/CI/badge.svg
-           :target: https://github.com/dask/partd/actions?query=workflow%3ACI
-        .. |Version Status| image:: https://img.shields.io/pypi/v/partd.svg
-           :target: https://pypi.python.org/pypi/partd/
-        
-Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.5
-Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
-Requires-Python: >=3.5
+Classifier: Programming Language :: Python :: 3.9
+Requires-Python: >=3.7
 Provides-Extra: complete
+License-File: LICENSE.txt
+
+PartD
+=====
+
+|Build Status| |Version Status|
+
+Key-value byte store with appendable values
+
+    Partd stores key-value pairs.
+    Values are raw bytes.
+    We append on old values.
+
+Partd excels at shuffling operations.
+
+Operations
+----------
+
+PartD has two main operations, ``append`` and ``get``.
+
+
+Example
+-------
+
+1.  Create a Partd backed by a directory::
+
+        >>> import partd
+        >>> p = partd.File('/path/to/new/dataset/')
+
+2.  Append key-byte pairs to dataset::
+
+        >>> p.append({'x': b'Hello ', 'y': b'123'})
+        >>> p.append({'x': b'world!', 'y': b'456'})
+
+3.  Get bytes associated to keys::
+
+        >>> p.get('x')         # One key
+        b'Hello world!'
+
+        >>> p.get(['y', 'x'])  # List of keys
+        [b'123456', b'Hello world!']
+
+4.  Destroy partd dataset::
+
+        >>> p.drop()
+
+That's it.
+
+
+Implementations
+---------------
+
+We can back a partd by an in-memory dictionary::
+
+    >>> p = Dict()
+
+For larger amounts of data or to share data between processes we back a partd
+by a directory of files.  This uses file-based locks for consistency.::
+
+    >>> p = File('/path/to/dataset/')
+
+However this can fail for many small writes.  In these cases you may wish to buffer one partd with another, keeping a fixed maximum of data in the buffering partd.  This writes the larger elements of the first partd to the second partd when space runs low::
+
+    >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
+
+You might also want to have many distributed process write to a single partd
+consistently.  This can be done with a server
+
+*   Server Process::
+
+        >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
+        >>> s = Server(p, address='ipc://server')
+
+*   Worker processes::
+
+        >>> p = Client('ipc://server')  # Client machine talks to remote server
+
+
+Encodings and Compression
+-------------------------
+
+Once we can robustly and efficiently append bytes to a partd we consider
+compression and encodings.  This is generally available with the ``Encode``
+partd, which accepts three functions, one to apply on bytes as they are
+written, one to apply to bytes as they are read, and one to join bytestreams.
+Common configurations already exist for common data and compression formats.
+
+We may wish to compress and decompress data transparently as we interact with a
+partd.  Objects like ``BZ2``, ``Blosc``, ``ZLib`` and ``Snappy`` exist and take
+another partd as an argument.::
+
+    >>> p = File(...)
+    >>> p = ZLib(p)
+
+These work exactly as before, the (de)compression happens automatically.
+
+Common data formats like Python lists, numpy arrays, and pandas
+dataframes are also supported out of the box.::
+
+    >>> p = File(...)
+    >>> p = NumPy(p)
+    >>> p.append({'x': np.array([...])})
+
+This lets us forget about bytes and think instead in our normal data types.
+
+Composition
+-----------
+
+In principle we want to compose all of these choices together
+
+1.  Write policy:  ``Dict``, ``File``, ``Buffer``, ``Client``
+2.  Encoding:  ``Pickle``, ``Numpy``, ``Pandas``, ...
+3.  Compression:  ``Blosc``, ``Snappy``, ...
+
+Partd objects compose by nesting.  Here we make a partd that writes pickle
+encoded BZ2 compressed bytes directly to disk::
+
+    >>> p = Pickle(BZ2(File('foo')))
+
+We could construct more complex systems that include compression,
+serialization, buffering, and remote access.::
+
+    >>> server = Server(Buffer(Dict(), File(), available_memory=2e0))
+
+    >>> client = Pickle(Snappy(Client(server.address)))
+    >>> client.append({'x': [1, 2, 3]})
+
+.. |Build Status| image:: https://github.com/dask/partd/workflows/CI/badge.svg
+   :target: https://github.com/dask/partd/actions?query=workflow%3ACI
+.. |Version Status| image:: https://img.shields.io/pypi/v/partd.svg
+   :target: https://pypi.python.org/pypi/partd/
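[Editor's note: the README above describes partd's append/get contract and composition by nesting. The sketch below is NOT the real partd API — `DictStore` and `ZLibStore` are hypothetical, simplified stand-ins written only to illustrate the idea; the real library additionally frames each write so that appends compose with compression.]

```python
import zlib

class DictStore:
    """In-memory key-value store with appendable byte values (illustrative
    stand-in for partd.Dict)."""
    def __init__(self):
        self.data = {}

    def append(self, items):
        # Values are raw bytes; new writes are appended onto old values.
        for key, value in items.items():
            self.data[key] = self.data.get(key, b'') + value

    def get(self, keys):
        if isinstance(keys, (list, tuple)):
            return [self.data.get(k, b'') for k in keys]
        return self.data.get(keys, b'')

class ZLibStore:
    """Compression layer wrapping another store (illustrative stand-in for
    partd.ZLib; unlike the real thing, this only supports one write per key
    because it does no framing)."""
    def __init__(self, inner):
        self.inner = inner

    def append(self, items):
        self.inner.append({k: zlib.compress(v) for k, v in items.items()})

    def get(self, keys):
        if isinstance(keys, (list, tuple)):
            return [zlib.decompress(v) for v in self.inner.get(keys)]
        return zlib.decompress(self.inner.get(keys))

raw = DictStore()
raw.append({'x': b'Hello ', 'y': b'123'})
raw.append({'x': b'world!', 'y': b'456'})
merged = raw.get('x')  # appended values concatenate: b'Hello world!'
```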
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/__init__.py new/partd-1.3.0/partd/__init__.py
--- old/partd-1.2.0/partd/__init__.py   2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/__init__.py   2021-10-21 22:59:21.000000000 +0200
@@ -1,4 +1,4 @@
-from __future__ import absolute_import
+from contextlib import suppress
 
 from .file import File
 from .dict import Dict
@@ -7,12 +7,11 @@
 from .pickle import Pickle
 from .python import Python
 from .compressed import *
-from .utils import ignoring
-with ignoring(ImportError):
+with suppress(ImportError):
     from .numpy import Numpy
-with ignoring(ImportError):
+with suppress(ImportError):
     from .pandas import PandasColumns, PandasBlocks
-with ignoring(ImportError):
+with suppress(ImportError):
     from .zmq import Client, Server
 
 from ._version import get_versions
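[Editor's note: this hunk swaps partd's home-grown `ignoring` context manager for the standard library's `contextlib.suppress`, which behaves identically for the optional-import pattern. A minimal stdlib sketch of that pattern (the missing module name below is made up):]

```python
from contextlib import suppress

imported = []

# Import succeeds: the body runs to completion.
with suppress(ImportError):
    import json
    imported.append('json')

# Import fails: the ImportError is swallowed and execution continues,
# so the optional name is simply never registered.
with suppress(ImportError):
    import not_a_real_module_xyz  # assumed to not exist
    imported.append('not_a_real_module_xyz')
```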
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/_version.py new/partd-1.3.0/partd/_version.py
--- old/partd-1.2.0/partd/_version.py   2021-04-08 19:32:24.125734800 +0200
+++ new/partd-1.3.0/partd/_version.py   2022-08-12 01:22:05.258515100 +0200
@@ -8,11 +8,11 @@
 
 version_json = '''
 {
- "date": "2021-04-08T12:31:19-0500",
+ "date": "2022-08-11T18:17:31-0500",
  "dirty": false,
  "error": null,
- "full-revisionid": "9c9ba0a3a91b6b1eeb560615114a1df81fc427c1",
- "version": "1.2.0"
+ "full-revisionid": "d1faa885e2a7789ff3599b49be31b4c8cf48ba4d",
+ "version": "1.3.0"
 }
 '''  # END VERSION_JSON
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/buffer.py new/partd-1.3.0/partd/buffer.py
--- old/partd-1.2.0/partd/buffer.py     2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/buffer.py     2021-10-21 22:59:21.000000000 +0200
@@ -4,7 +4,7 @@
 from operator import add
 from bisect import bisect
 from collections import defaultdict
-from .compatibility import Queue, Empty
+from queue import Queue, Empty
 
 
 def zero():
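[Editor's note: with Python 2 support gone, buffer.py imports `Queue` and `Empty` straight from the stdlib `queue` module instead of the compatibility shim. The usage is unchanged:]

```python
from queue import Queue, Empty

q = Queue()
q.put(b'chunk')
first = q.get_nowait()

# get_nowait() raises Empty once the queue is drained.
try:
    q.get_nowait()
    drained = False
except Empty:
    drained = True
```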
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/compatibility.py new/partd-1.3.0/partd/compatibility.py
--- old/partd-1.2.0/partd/compatibility.py      2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/compatibility.py      1970-01-01 01:00:00.000000000 +0100
@@ -1,15 +0,0 @@
-from __future__ import absolute_import
-
-import sys
-
-if sys.version_info[0] == 3:
-    from io import StringIO
-    unicode = str
-    import pickle
-    from queue import Queue, Empty
-if sys.version_info[0] == 2:
-    from StringIO import StringIO
-    unicode = unicode
-    import cPickle as pickle
-    from Queue import Queue, Empty
-
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/compressed.py new/partd-1.3.0/partd/compressed.py
--- old/partd-1.2.0/partd/compressed.py 2021-04-02 02:20:44.000000000 +0200
+++ new/partd-1.3.0/partd/compressed.py 2021-10-21 22:59:21.000000000 +0200
@@ -1,7 +1,8 @@
-from .utils import ignoring
-from .encode import Encode
+from contextlib import suppress
 from functools import partial
 
+from .encode import Encode
+
 __all__ = []
 
 
@@ -9,7 +10,7 @@
     return b''.join(L)
 
 
-with ignoring(ImportError, AttributeError):
+with suppress(ImportError, AttributeError):
     # In case snappy is not installed, or another package called snappy that 
does not implement compress / decompress.
     # For example, SnapPy (https://pypi.org/project/snappy/)
     import snappy
@@ -20,7 +21,7 @@
     __all__.append('Snappy')
 
 
-with ignoring(ImportError):
+with suppress(ImportError):
     import zlib
     ZLib = partial(Encode,
                    zlib.compress,
@@ -29,7 +30,7 @@
     __all__.append('ZLib')
 
 
-with ignoring(ImportError):
+with suppress(ImportError):
     import bz2
     BZ2 = partial(Encode,
                   bz2.compress,
@@ -38,7 +39,7 @@
     __all__.append('BZ2')
 
 
-with ignoring(ImportError):
+with suppress(ImportError):
     import blosc
     Blosc = partial(Encode,
                     blosc.compress,
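[Editor's note: each codec in compressed.py is built by partially applying `Encode` with a compress function, a decompress function, and a join for the framed bytestreams. The stdlib sketch below shows why that works: frames are compressed independently, decompressed independently, and joined afterwards, so appends keep composing with compression:]

```python
import zlib

def join(frames):
    # Encode's join step: concatenate the decompressed frames
    # back into one logical bytestream.
    return b''.join(frames)

# Two separate writes, each compressed on its own.
frames = [zlib.compress(b'Hello '), zlib.compress(b'world!')]

# Reading decompresses frame by frame, then joins.
restored = join([zlib.decompress(f) for f in frames])
```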
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/core.py new/partd-1.3.0/partd/core.py
--- old/partd-1.2.0/partd/core.py       2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/core.py       2021-10-21 22:59:21.000000000 +0200
@@ -1,5 +1,3 @@
-from __future__ import absolute_import
-
 import os
 import shutil
 import locket
@@ -44,7 +42,7 @@
         return str(key)
 
 
-class Interface(object):
+class Interface:
     def __init__(self):
         self._iset_seen = set()
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/file.py new/partd-1.3.0/partd/file.py
--- old/partd-1.2.0/partd/file.py       2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/file.py       2021-10-21 22:59:21.000000000 +0200
@@ -1,6 +1,5 @@
-from __future__ import absolute_import
-
 import atexit
+from contextlib import suppress
 import os
 import shutil
 import string
@@ -8,7 +7,6 @@
 
 from .core import Interface
 import locket
-from .utils import ignoring
 
 
 class File(Interface):
@@ -21,7 +19,7 @@
             self._explicitly_given_path = True
         self.path = path
         if not os.path.exists(path):
-            with ignoring(OSError):
+            with suppress(OSError):
                 os.makedirs(path)
         self.lock = locket.lock_file(self.filename('.lock'))
         Interface.__init__(self)
@@ -57,7 +55,7 @@
                 try:
                     with open(self.filename(key), 'rb') as f:
                         result.append(f.read())
-                except IOError:
+                except OSError:
                     result.append(b'')
         finally:
             if lock:
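[Editor's note: the `IOError` → `OSError` change in file.py is cosmetic on Python 3, where `IOError` has been a plain alias of `OSError` since 3.3, and `FileNotFoundError` is one of its subclasses. Quick check (the path below is assumed not to exist):]

```python
# On Python 3, the two names are the same class.
alias_ok = IOError is OSError

# Catching OSError therefore also covers FileNotFoundError,
# which is what a missing partd key file raises.
try:
    open('/no/such/directory/no-such-file')
    caught = None
except OSError as exc:
    caught = type(exc).__name__
```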
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/numpy.py new/partd-1.3.0/partd/numpy.py
--- old/partd-1.2.0/partd/numpy.py      2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/numpy.py      2021-10-21 22:59:21.000000000 +0200
@@ -4,13 +4,14 @@
 Alongside each array x we ensure the value x.dtype which stores the string
 description of the array's dtype.
 """
-from __future__ import absolute_import
+from contextlib import suppress
+import pickle
+
 import numpy as np
 from toolz import valmap, identity, partial
-from .compatibility import pickle
 from .core import Interface
 from .file import File
-from .utils import frame, framesplit, suffix, ignoring
+from .utils import frame, framesplit, suffix
 
 
 def serialize_dtype(dt):
@@ -94,7 +95,7 @@
 def serialize(x):
     if x.dtype == 'O':
         l = x.flatten().tolist()
-        with ignoring(Exception):  # Try msgpack (faster on strings)
+        with suppress(Exception):  # Try msgpack (faster on strings)
             return frame(msgpack.packb(l, use_bin_type=True))
         return frame(pickle.dumps(l, protocol=pickle.HIGHEST_PROTOCOL))
     else:
@@ -132,7 +133,7 @@
 compress_bytes = lambda bytes, itemsize: bytes
 decompress_bytes = identity
 
-with ignoring(ImportError):
+with suppress(ImportError):
     import blosc
     blosc.set_nthreads(1)
 
@@ -142,7 +143,7 @@
     compress_text = partial(blosc.compress, typesize=1)
     decompress_text = blosc.decompress
 
-with ignoring(ImportError):
+with suppress(ImportError):
     from snappy import compress as compress_text
     from snappy import decompress as decompress_text
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/pandas.py new/partd-1.3.0/partd/pandas.py
--- old/partd-1.2.0/partd/pandas.py     2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/pandas.py     2022-08-12 01:16:45.000000000 +0200
@@ -1,6 +1,5 @@
-from __future__ import absolute_import
-
 from functools import partial
+import pickle
 
 import numpy as np
 import pandas as pd
@@ -8,7 +7,6 @@
 
 from . import numpy as pnp
 from .core import Interface
-from .compatibility import pickle
 from .encode import Encode
 from .utils import extend, framesplit, frame
 
@@ -47,11 +45,11 @@
 
         # TODO: don't use values, it does some work.  Look at _blocks instead
         #       pframe/cframe do this well
-        arrays = dict((extend(k, col), df[col].values)
+        arrays = {extend(k, col): df[col].values
                        for k, df in data.items()
-                       for col in df.columns)
-        arrays.update(dict((extend(k, '.index'), df.index.values)
-                            for k, df in data.items()))
+                       for col in df.columns}
+        arrays.update({extend(k, '.index'): df.index.values
+                            for k, df in data.items()})
         # TODO: handle categoricals
         self.partd.append(arrays, **kwargs)
 
@@ -110,7 +108,7 @@
         cat = None
         values = ind.values
 
-    header = (type(ind), ind._get_attributes_dict(), values.dtype, cat)
+    header = (type(ind), {k: getattr(ind, k, None) for k in ind._attributes}, values.dtype, cat)
     bytes = pnp.compress(pnp.serialize(values), values.dtype)
     return header, bytes
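[Editor's note: this is the fix for the removed `Index._get_attributes_dict` (#60) — the dict is now built by hand from the private `_attributes` name list. The construct on its own, demonstrated on a plain stand-in class rather than a real pandas Index:]

```python
class FakeIndex:
    # Stand-in for a pandas Index: _attributes names the metadata fields.
    _attributes = ['name', 'dtype']

    def __init__(self):
        self.name = 'idx'
        self.dtype = 'int64'

ind = FakeIndex()
# getattr with a None default tolerates attributes a given Index
# subclass may not define.
attrs = {k: getattr(ind, k, None) for k in ind._attributes}
```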
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/pickle.py new/partd-1.3.0/partd/pickle.py
--- old/partd-1.2.0/partd/pickle.py     2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/pickle.py     2021-10-21 22:59:21.000000000 +0200
@@ -1,9 +1,7 @@
 """
 get/put functions that consume/produce Python lists using Pickle to serialize
 """
-from __future__ import absolute_import
-from .compatibility import pickle
-
+import pickle
 
 from .encode import Encode
 from functools import partial
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/python.py new/partd-1.3.0/partd/python.py
--- old/partd-1.2.0/partd/python.py     2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/python.py     2021-10-21 22:59:21.000000000 +0200
@@ -4,8 +4,7 @@
 
 First we try msgpack (it's faster).  If that fails then we default to pickle.
 """
-from __future__ import absolute_import
-from .compatibility import pickle
+import pickle
 
 try:
     from pandas import msgpack
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/tests/test_numpy.py new/partd-1.3.0/partd/tests/test_numpy.py
--- old/partd-1.2.0/partd/tests/test_numpy.py   2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/tests/test_numpy.py   2021-10-21 22:59:21.000000000 +0200
@@ -1,5 +1,3 @@
-from __future__ import absolute_import
-
 import pytest
 np = pytest.importorskip('numpy')  # noqa
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/tests/test_pandas.py new/partd-1.3.0/partd/tests/test_pandas.py
--- old/partd-1.2.0/partd/tests/test_pandas.py  2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/tests/test_pandas.py  2022-08-05 23:19:42.000000000 +0200
@@ -1,11 +1,9 @@
-from __future__ import absolute_import
-
 import pytest
 pytest.importorskip('pandas')  # noqa
 
 import numpy as np
 import pandas as pd
-import pandas.util.testing as tm
+import pandas.testing as tm
 import os
 
 from partd.pandas import PandasColumns, PandasBlocks, serialize, deserialize
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/tests/test_zmq.py new/partd-1.3.0/partd/tests/test_zmq.py
--- old/partd-1.2.0/partd/tests/test_zmq.py     2021-04-02 02:20:44.000000000 +0200
+++ new/partd-1.3.0/partd/tests/test_zmq.py     2021-10-21 22:59:21.000000000 +0200
@@ -54,7 +54,7 @@
         held_append.start()
 
         sleep(0.1)
-        assert held_append.isAlive()  # held!
+        assert held_append.is_alive()  # held!
 
         assert not s._frozen_sockets.empty()
 
@@ -64,7 +64,7 @@
         free_frozen_sockets_thread.start()
 
         sleep(0.2)
-        assert not held_append.isAlive()
+        assert not held_append.is_alive()
         assert s._frozen_sockets.empty()
     finally:
         s.close()
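[Editor's note: the camelCase `Thread.isAlive` alias was removed in Python 3.9, hence the switch to `is_alive` (#53). The renamed method in action:]

```python
import threading

release = threading.Event()

# Worker blocks on the event, like the held append in the test above.
t = threading.Thread(target=release.wait)
t.start()
held = t.is_alive()       # still blocked, so the thread is alive

release.set()             # let the worker finish
t.join()
finished = not t.is_alive()
```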
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/utils.py new/partd-1.3.0/partd/utils.py
--- old/partd-1.2.0/partd/utils.py      2021-04-02 02:20:51.000000000 +0200
+++ new/partd-1.3.0/partd/utils.py      2021-10-21 22:59:21.000000000 +0200
@@ -74,19 +74,6 @@
             yield bytes[i: i+n]
 
 
-@contextmanager
-def ignoring(*exc):
-    try:
-        yield
-    except exc:
-        pass
-
-
-@contextmanager
-def do_nothing(*args, **kwargs):
-    yield
-
-
 def nested_get(ind, coll, lazy=False):
     """ Get nested index from collection
 
@@ -129,8 +116,7 @@
     """
     for item in seq:
         if isinstance(item, list):
-            for item2 in flatten(item):
-                yield item2
+            yield from flatten(item)
         else:
             yield item
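[Editor's note: the hunk above modernizes `flatten` with `yield from`, which delegates to the recursive generator without the manual inner loop. The rewritten function, reproduced runnable:]

```python
def flatten(seq):
    """Recursively flatten nested lists, as in partd.utils.flatten."""
    for item in seq:
        if isinstance(item, list):
            yield from flatten(item)  # delegate to the recursive generator
        else:
            yield item

flat = list(flatten([1, [2, [3, 4]], 5]))
```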
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd/zmq.py new/partd-1.3.0/partd/zmq.py
--- old/partd-1.2.0/partd/zmq.py        2021-04-08 19:23:04.000000000 +0200
+++ new/partd-1.3.0/partd/zmq.py        2021-10-21 22:59:21.000000000 +0200
@@ -1,5 +1,3 @@
-from __future__ import absolute_import, print_function
-
 import zmq
 import logging
 from itertools import chain
@@ -10,7 +8,7 @@
 from toolz import accumulate, topk, pluck, merge, keymap
 import uuid
 from collections import defaultdict
-from contextlib import contextmanager
+from contextlib import contextmanager, suppress
 from threading import Thread, Lock
 from datetime import datetime
 from multiprocessing import Process
@@ -20,8 +18,6 @@
 from .file import File
 from .buffer import Buffer
 from . import core
-from .compatibility import Queue, Empty, unicode
-from .utils import ignoring
 
 
 tuple_sep = b'-|-'
@@ -38,7 +34,7 @@
         raise
 
 
-class Server(object):
+class Server:
     def __init__(self, partd=None, bind=None, start=True, block=False,
             hostname=None):
         self.context = zmq.Context()
@@ -50,7 +46,7 @@
 
         if hostname is None:
             hostname = socket.gethostname()
-        if isinstance(bind, unicode):
+        if isinstance(bind, str):
             bind = bind.encode()
         if bind is None:
             port = self.socket.bind_to_random_port('tcp://*')
@@ -173,9 +169,9 @@
         logger.debug('Server closes')
         self.status = 'closed'
         self.block()
-        with ignoring(zmq.error.ZMQError):
+        with suppress(zmq.error.ZMQError):
             self.socket.close(1)
-        with ignoring(zmq.error.ZMQError):
+        with suppress(zmq.error.ZMQError):
             self.context.destroy(3)
         self.partd.lock.release()
 
@@ -305,12 +301,12 @@
 
     def close(self):
         if hasattr(self, 'server_process'):
-            with ignoring(zmq.error.ZMQError):
+            with suppress(zmq.error.ZMQError):
                 self.close_server()
             self.server_process.join()
-        with ignoring(zmq.error.ZMQError):
+        with suppress(zmq.error.ZMQError):
             self.socket.close(1)
-        with ignoring(zmq.error.ZMQError):
+        with suppress(zmq.error.ZMQError):
             self.context.destroy(1)
 
     def __exit__(self, type, value, traceback):
@@ -321,7 +317,7 @@
         self.close()
 
 
-class NotALock(object):
+class NotALock:
     def acquire(self): pass
     def release(self): pass
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd.egg-info/PKG-INFO new/partd-1.3.0/partd.egg-info/PKG-INFO
--- old/partd-1.2.0/partd.egg-info/PKG-INFO     2021-04-08 19:32:23.000000000 +0200
+++ new/partd-1.3.0/partd.egg-info/PKG-INFO     2022-08-12 01:22:04.000000000 +0200
@@ -1,146 +1,145 @@
 Metadata-Version: 2.1
 Name: partd
-Version: 1.2.0
+Version: 1.3.0
 Summary: Appendable key-value storage
 Home-page: http://github.com/dask/partd/
 Maintainer: Matthew Rocklin
 Maintainer-email: mrock...@gmail.com
 License: BSD
-Description: PartD
-        =====
-        
-        |Build Status| |Version Status|
-        
-        Key-value byte store with appendable values
-        
-            Partd stores key-value pairs.
-            Values are raw bytes.
-            We append on old values.
-        
-        Partd excels at shuffling operations.
-        
-        Operations
-        ----------
-        
-        PartD has two main operations, ``append`` and ``get``.
-        
-        
-        Example
-        -------
-        
-        1.  Create a Partd backed by a directory::
-        
-                >>> import partd
-                >>> p = partd.File('/path/to/new/dataset/')
-        
-        2.  Append key-byte pairs to dataset::
-        
-                >>> p.append({'x': b'Hello ', 'y': b'123'})
-                >>> p.append({'x': b'world!', 'y': b'456'})
-        
-        3.  Get bytes associated to keys::
-        
-                >>> p.get('x')         # One key
-                b'Hello world!'
-        
-                >>> p.get(['y', 'x'])  # List of keys
-                [b'123456', b'Hello world!']
-        
-        4.  Destroy partd dataset::
-        
-                >>> p.drop()
-        
-        That's it.
-        
-        
-        Implementations
-        ---------------
-        
-        We can back a partd by an in-memory dictionary::
-        
-            >>> p = Dict()
-        
-        For larger amounts of data or to share data between processes we back a partd
-        by a directory of files.  This uses file-based locks for consistency.::
-        
-            >>> p = File('/path/to/dataset/')
-        
-        However this can fail for many small writes.  In these cases you may wish to buffer one partd with another, keeping a fixed maximum of data in the buffering partd.  This writes the larger elements of the first partd to the second partd when space runs low::
-        
-            >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
-        
-        You might also want to have many distributed process write to a single partd
-        consistently.  This can be done with a server
-        
-        *   Server Process::
-        
-                >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
-                >>> s = Server(p, address='ipc://server')
-        
-        *   Worker processes::
-        
-                >>> p = Client('ipc://server')  # Client machine talks to remote server
-        
-        
-        Encodings and Compression
-        -------------------------
-        
-        Once we can robustly and efficiently append bytes to a partd we consider
-        compression and encodings.  This is generally available with the ``Encode``
-        partd, which accepts three functions, one to apply on bytes as they are
-        written, one to apply to bytes as they are read, and one to join bytestreams.
-        Common configurations already exist for common data and compression formats.
-        
-        We may wish to compress and decompress data transparently as we interact with a
-        partd.  Objects like ``BZ2``, ``Blosc``, ``ZLib`` and ``Snappy`` exist and take
-        another partd as an argument.::
-        
-            >>> p = File(...)
-            >>> p = ZLib(p)
-        
-        These work exactly as before, the (de)compression happens automatically.
-        
-        Common data formats like Python lists, numpy arrays, and pandas
-        dataframes are also supported out of the box.::
-        
-            >>> p = File(...)
-            >>> p = NumPy(p)
-            >>> p.append({'x': np.array([...])})
-        
-        This lets us forget about bytes and think instead in our normal data 
types.
-        
-        Composition
-        -----------
-        
-        In principle we want to compose all of these choices together
-        
-        1.  Write policy:  ``Dict``, ``File``, ``Buffer``, ``Client``
-        2.  Encoding:  ``Pickle``, ``Numpy``, ``Pandas``, ...
-        3.  Compression:  ``Blosc``, ``Snappy``, ...
-        
-        Partd objects compose by nesting.  Here we make a partd that writes 
pickle
-        encoded BZ2 compressed bytes directly to disk::
-        
-            >>> p = Pickle(BZ2(File('foo')))
-        
-        We could construct more complex systems that include compression,
-        serialization, buffering, and remote access.::
-        
-            >>> server = Server(Buffer(Dict(), File(), available_memory=2e0))
-        
-            >>> client = Pickle(Snappy(Client(server.address)))
-            >>> client.append({'x': [1, 2, 3]})
-        
-        .. |Build Status| image:: 
https://github.com/dask/partd/workflows/CI/badge.svg
-           :target: https://github.com/dask/partd/actions?query=workflow%3ACI
-        .. |Version Status| image:: https://img.shields.io/pypi/v/partd.svg
-           :target: https://pypi.python.org/pypi/partd/
-        
-Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.5
-Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
-Requires-Python: >=3.5
+Classifier: Programming Language :: Python :: 3.9
+Requires-Python: >=3.7
 Provides-Extra: complete
+License-File: LICENSE.txt
+
+PartD
+=====
+
+|Build Status| |Version Status|
+
+Key-value byte store with appendable values
+
+    Partd stores key-value pairs.
+    Values are raw bytes.
+    We append on old values.
+
+Partd excels at shuffling operations.
+
+Operations
+----------
+
+PartD has two main operations, ``append`` and ``get``.
+
+
+Example
+-------
+
+1.  Create a Partd backed by a directory::
+
+        >>> import partd
+        >>> p = partd.File('/path/to/new/dataset/')
+
+2.  Append key-byte pairs to dataset::
+
+        >>> p.append({'x': b'Hello ', 'y': b'123'})
+        >>> p.append({'x': b'world!', 'y': b'456'})
+
+3.  Get bytes associated to keys::
+
+        >>> p.get('x')         # One key
+        b'Hello world!'
+
+        >>> p.get(['y', 'x'])  # List of keys
+        [b'123456', b'Hello world!']
+
+4.  Destroy partd dataset::
+
+        >>> p.drop()
+
+That's it.
+
+
+Implementations
+---------------
+
+We can back a partd by an in-memory dictionary::
+
+    >>> p = Dict()
+
+For larger amounts of data or to share data between processes we back a partd
+by a directory of files.  This uses file-based locks for consistency.::
+
+    >>> p = File('/path/to/dataset/')
+
+However this can fail for many small writes.  In these cases you may wish to buffer one partd with another, keeping a fixed maximum of data in the buffering partd.  This writes the larger elements of the first partd to the second partd when space runs low::
+
+    >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
+
+You might also want to have many distributed processes write to a single partd
+consistently.  This can be done with a server:
+
+*   Server Process::
+
+        >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
+        >>> s = Server(p, address='ipc://server')
+
+*   Worker processes::
+
+        >>> p = Client('ipc://server')  # Client machine talks to remote server
+
+
+Encodings and Compression
+-------------------------
+
+Once we can robustly and efficiently append bytes to a partd we consider
+compression and encodings.  This is generally available with the ``Encode``
+partd, which accepts three functions, one to apply on bytes as they are
+written, one to apply to bytes as they are read, and one to join bytestreams.
+Common configurations already exist for common data and compression formats.
+
+We may wish to compress and decompress data transparently as we interact with a
+partd.  Objects like ``BZ2``, ``Blosc``, ``ZLib`` and ``Snappy`` exist and take
+another partd as an argument.::
+
+    >>> p = File(...)
+    >>> p = ZLib(p)
+
+These work exactly as before, the (de)compression happens automatically.
+
+Common data formats like Python lists, numpy arrays, and pandas
+dataframes are also supported out of the box.::
+
+    >>> p = File(...)
+    >>> p = NumPy(p)
+    >>> p.append({'x': np.array([...])})
+
+This lets us forget about bytes and think instead in our normal data types.
+
+Composition
+-----------
+
+In principle we want to compose all of these choices together
+
+1.  Write policy:  ``Dict``, ``File``, ``Buffer``, ``Client``
+2.  Encoding:  ``Pickle``, ``Numpy``, ``Pandas``, ...
+3.  Compression:  ``Blosc``, ``Snappy``, ...
+
+Partd objects compose by nesting.  Here we make a partd that writes pickle
+encoded BZ2 compressed bytes directly to disk::
+
+    >>> p = Pickle(BZ2(File('foo')))
+
+We could construct more complex systems that include compression,
+serialization, buffering, and remote access.::
+
+    >>> server = Server(Buffer(Dict(), File(), available_memory=2e0))
+
+    >>> client = Pickle(Snappy(Client(server.address)))
+    >>> client.append({'x': [1, 2, 3]})
+
+.. |Build Status| image:: https://github.com/dask/partd/workflows/CI/badge.svg
+   :target: https://github.com/dask/partd/actions?query=workflow%3ACI
+.. |Version Status| image:: https://img.shields.io/pypi/v/partd.svg
+   :target: https://pypi.python.org/pypi/partd/
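The append/get contract that the README above describes can be sketched with a toy in-memory model in a few lines of plain Python. This is purely illustrative (the class name `ToyPartd` is invented here); the real ``partd.Dict`` and ``partd.File`` backends add file locking, deletion, and the encoding layers described in the README on top of this idea:

```python
from collections import defaultdict


class ToyPartd:
    """Toy key-value byte store: values are raw bytes, and appending
    to an existing key concatenates onto the old value."""

    def __init__(self):
        self._data = defaultdict(bytearray)

    def append(self, data):
        # data maps keys to bytes; appends concatenate per key
        for key, value in data.items():
            self._data[key] += value

    def get(self, keys):
        # Accept a single key or a list of keys, as in the README example
        if isinstance(keys, list):
            return [bytes(self._data[k]) for k in keys]
        return bytes(self._data[keys])

    def drop(self):
        self._data.clear()


p = ToyPartd()
p.append({'x': b'Hello ', 'y': b'123'})
p.append({'x': b'world!', 'y': b'456'})
print(p.get('x'))           # b'Hello world!'
print(p.get(['y', 'x']))    # [b'123456', b'Hello world!']
```

The concatenate-on-append semantics are what make partd useful for shuffles: many small writes to the same key accumulate into one contiguous byte string per key.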
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/partd.egg-info/SOURCES.txt new/partd-1.3.0/partd.egg-info/SOURCES.txt
--- old/partd-1.2.0/partd.egg-info/SOURCES.txt  2021-04-08 19:32:23.000000000 +0200
+++ new/partd-1.3.0/partd.egg-info/SOURCES.txt  2022-08-12 01:22:05.000000000 +0200
@@ -8,7 +8,6 @@
 partd/__init__.py
 partd/_version.py
 partd/buffer.py
-partd/compatibility.py
 partd/compressed.py
 partd/core.py
 partd/dict.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/setup.cfg new/partd-1.3.0/setup.cfg
--- old/partd-1.2.0/setup.cfg   2021-04-08 19:32:24.125133000 +0200
+++ new/partd-1.3.0/setup.cfg   2022-08-12 01:22:05.257659400 +0200
@@ -1,5 +1,5 @@
 [versioneer]
-vcs = git
+VCS = git
 style = pep440
 versionfile_source = partd/_version.py
 versionfile_build = partd/_version.py
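The ``vcs`` to ``VCS`` change above is a key-capitalization fix: versioneer's documented ``setup.cfg`` example uses the uppercase key, so the section now reads (shown here for reference; only the first key changed):

```ini
[versioneer]
VCS = git
style = pep440
versionfile_source = partd/_version.py
versionfile_build = partd/_version.py
```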
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/partd-1.2.0/setup.py new/partd-1.3.0/setup.py
--- old/partd-1.2.0/setup.py    2021-04-07 17:20:46.000000000 +0200
+++ new/partd-1.3.0/setup.py    2022-05-23 23:39:55.000000000 +0200
@@ -15,13 +15,12 @@
       keywords='',
       packages=['partd'],
      install_requires=list(open('requirements.txt').read().strip().split('\n')),
-      python_requires=">=3.5",
+      python_requires=">=3.7",
       classifiers=[
           "Programming Language :: Python :: 3",
-          "Programming Language :: Python :: 3.5",
-          "Programming Language :: Python :: 3.6",
           "Programming Language :: Python :: 3.7",
           "Programming Language :: Python :: 3.8",
+          "Programming Language :: Python :: 3.9",
       ],
       long_description=(open('README.rst').read() if exists('README.rst')
                         else ''),
