Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-pylru for openSUSE:Factory 
checked in at 2022-10-08 01:25:01
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-pylru (Old)
 and      /work/SRC/openSUSE:Factory/.python-pylru.new.2275 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-pylru"

Sat Oct  8 01:25:01 2022 rev:2 rq:1008676 version:1.2.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-pylru/python-pylru.changes       2019-12-10 22:42:55.301798020 +0100
+++ /work/SRC/openSUSE:Factory/.python-pylru.new.2275/python-pylru.changes     2022-10-08 01:25:06.826214349 +0200
@@ -1,0 +2,40 @@
+Wed Oct  5 00:23:31 UTC 2022 - Yogalakshmi Arunachalam <[email protected]>
+
+- Update to Version 1.2.1
+  * Small optimization to popitem().
+  * Improved comments, removed whitespace, etc.
+  * Added __getstate__() and __setstate__().
+  * Moved from distutils to setuptools in setup.py.
+  * Moved the readme to reStructuredText.
+
+- Update to Version 1.2.0
+  * Renamed the markdown formatted README.txt to README.md
+  * Updated documentation.
+  * Added optional callback to FunctionCacheManager.
+  * Merge pull request #28 from marc1n/master
+  * Minimize memory consumption of _dlnode
+  * Merge pull request #26 from bpsuntrup/master
+  * Add optional callback functionality to lrudecorator.
+
+- Update to Version 1.1.0
+  * Added pop, popitem, and setdefault methods to lrucache class.
+  * Improved update() method of lrucache.
+  * Added len() method to WriteBackCacheManager.
+  * Simplified the logic of a couple of __getitem__ implementations.
+  * Cleaned up some of the comments and whitespace.
+  * Merge pull request #22 from btimby/master
+  * Undo whitespace changes.
+  * Added update() method.
+  * Merge pull request #20 from pp-qq/patch-1
+  * refactor(lrucache): improve lrucache.get()
+  * Fixes #13, a bug in lrudecorator.
+  * Small change to README.
+  * Small change to README.
+  * Moved version to 1.0.8
+  * Added documentation for FunctionCacheManager.
+  * lrudecorator now updates the metadata to look more like the wrapped function.
+  * Refactored lrudecorator using FunctionCacheManager.
+  * Added clear() and size() to FunctionCacheManager.
+  * Added FunctionCacheManager.
+
+-------------------------------------------------------------------

Old:
----
  pylru-1.2.0.tar.gz

New:
----
  pylru-1.2.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-pylru.spec ++++++
--- /var/tmp/diff_new_pack.WXzoph/_old  2022-10-08 01:25:08.090217249 +0200
+++ /var/tmp/diff_new_pack.WXzoph/_new  2022-10-08 01:25:08.094217258 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-pylru
 #
-# Copyright (c) 2019 SUSE LLC
+# Copyright (c) 2022 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -18,7 +18,7 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-pylru
-Version:        1.2.0
+Version:        1.2.1
 Release:        0
 Summary:        A least recently used (LRU) cache implementation
 License:        GPL-2.0-only

++++++ pylru-1.2.0.tar.gz -> pylru-1.2.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/pylru-1.2.0/PKG-INFO new/pylru-1.2.1/PKG-INFO
--- old/pylru-1.2.0/PKG-INFO    2019-03-13 22:53:19.000000000 +0100
+++ new/pylru-1.2.1/PKG-INFO    2022-03-06 21:14:42.631002400 +0100
@@ -1,264 +1,11 @@
-Metadata-Version: 1.1
+Metadata-Version: 2.1
 Name: pylru
-Version: 1.2.0
+Version: 1.2.1
 Summary: A least recently used (LRU) cache implementation
 Home-page: https://github.com/jlhutch/pylru
 Author: Jay Hutchinson
 Author-email: [email protected]
 License: UNKNOWN
-Description: 
-        
-        PyLRU
-        =====
-        
-        A least recently used (LRU) cache for Python.
-        
-        Introduction
-        ============
-        
-        Pylru implements a true LRU cache along with several support classes. 
The cache is efficient and written in pure Python. It works with Python 2.6+ 
including the 3.x series. Basic operations (lookup, insert, delete) all run in 
a constant amount of time. Pylru provides a cache class with a simple dict 
interface. It also provides classes to wrap any object that has a dict 
interface with a cache. Both write-through and write-back semantics are 
supported. Pylru also provides classes to wrap functions in a similar way, 
including a function decorator.
-        
-        You can install pylru or you can just copy the source file pylru.py 
and use it directly in your own project. The rest of this file explains what 
the pylru module provides and how to use it. If you want to know more examine 
pylru.py. The code is straightforward and well commented.
-        
-        Usage
-        =====
-        
-        lrucache
-        --------
-        
-        An lrucache object has a dictionary-like interface and can be used in 
the same way::
-        
-            import pylru
-        
-            size = 100          # Size of the cache. The maximum number of 
key/value
-                                # pairs you want the cache to hold.
-            
-            cache = pylru.lrucache(size)
-                                # Create a cache object.
-            
-            value = cache[key]  # Lookup a value given its key.
-            cache[key] = value  # Insert a key/value pair.
-            del cache[key]      # Delete a value given its key.
-                                #
-                                # These three operations affect the order of 
the cache.
-                                # Lookup and insert both move the key/value to 
the most
-                                # recently used position. Delete (obviously) 
removes a
-                                # key/value from whatever position it was in.
-                                
-            key in cache        # Test for membership. Does not affect the 
cache order.
-            
-            value = cache.peek(key)
-                                # Lookup a value given its key. Does not 
affect the
-                                # cache order.
-        
-            cache.keys()        # Return an iterator over the keys in the cache
-            cache.values()      # Return an iterator over the values in the 
cache
-            cache.items()       # Return an iterator over the (key, value) 
pairs in the
-                                # cache.
-                                #
-                                # These calls have no effect on the cache 
order.
-                                # lrucache is scan resistant when these calls 
are used.
-                                # The iterators iterate over their respective 
elements
-                                # in the order of most recently used to least 
recently
-                                # used.
-                                #
-                                # WARNING - While these iterators do not 
affect the
-                                # cache order the lookup, insert, and delete 
operations
-                                # do. The result of changing the cache's order
-                                # during iteration is undefined. If you really 
need to
-                                # do something of the sort use 
list(cache.keys()), then
-                                # loop over the list elements.
-                                
-            for key in cache:   # Caches support __iter__ so you can use them 
directly
-                pass            # in a for loop to loop over the keys just like
-                                # cache.keys()
-        
-            cache.size()        # Returns the size of the cache
-            cache.size(x)       # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            x = len(cache)      # Returns the number of items stored in the 
cache.
-                                # x will be less than or equal to cache.size()
-        
-            cache.clear()       # Remove all items from the cache.
-        
-        
-        Lrucache takes an optional callback function as a second argument. 
Since the cache has a fixed size, some operations (such as an insertion) may 
cause the least recently used key/value pair to be ejected. If the optional 
callback function is given it will be called when this occurs. For example::
-        
-            import pylru
-        
-            def callback(key, value):
-                print (key, value)    # A dumb callback that just prints the 
key/value
-        
-            size = 100
-            cache = pylru.lrucache(size, callback)
-        
-            # Use the cache... When it gets full some pairs may be ejected due 
to
-            # the fixed cache size. But, not before the callback is called to 
let you
-            # know.
-        
-        WriteThroughCacheManager
-        ------------------------
-        
-        Often a cache is used to speed up access to some other high latency 
object. For example, imagine you have a backend storage object that 
reads/writes from/to a remote server. Let us call this object *store*. If store 
has a dictionary interface a cache manager class can be used to compose the 
store object and an lrucache. The manager object exposes a dictionary 
interface. The programmer can then interact with the manager object as if it 
were the store. The manager object takes care of communicating with the store 
and caching key/value pairs in the lrucache object.
-        
-        Two different semantics are supported, write-through 
(WriteThroughCacheManager class) and write-back (WriteBackCacheManager class). 
With write-through, lookups from the store are cached for future lookups. 
Insertions and deletions are updated in the cache and written through to the 
store immediately. Write-back works the same way, but insertions are updated 
only in the cache. These "dirty" key/value pair will only be updated to the 
underlying store when they are ejected from the cache or when a sync is 
performed. The WriteBackCacheManager class is discussed more below. 
-        
-        The WriteThroughCacheManager class takes as arguments the store object 
you want to compose and the cache size. It then creates an LRU cache and 
automatically manages it::
-        
-            import pylru
-        
-            size = 100
-            cached = pylru.WriteThroughCacheManager(store, size)
-                                # Or
-            cached = pylru.lruwrap(store, size)
-                                # This is a factory function that does the 
same thing.
-        
-            # Now the object *cached* can be used just like store, except 
caching is
-            # automatically handled.
-            
-            value = cached[key] # Lookup a value given its key.
-            cached[key] = value # Insert a key/value pair.
-            del cached[key]     # Delete a value given its key.
-            
-            key in cached       # Test for membership. Does not affect the 
cache order.
-        
-            cached.keys()       # Returns store.keys()
-            cached.values()     # Returns store.values() 
-            cached.items()      # Returns store.items()
-                                #
-                                # These calls have no effect on the cache 
order.
-                                # The iterators iterate over their respective 
elements
-                                # in the order dictated by store.
-                                
-            for key in cached:  # Same as store.keys()
-        
-            cached.size()       # Returns the size of the cache
-            cached.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            x = len(cached)     # Returns the number of items stored in the 
store.
-        
-            cached.clear()      # Remove all items from the store and cache.
-        
-        
-        WriteBackCacheManager
-        ---------------------
-        
-        Similar to the WriteThroughCacheManager class except write-back 
semantics are used to manage the cache. The programmer is responsible for one 
more thing as well. They MUST call sync() when they are finished. This ensures 
that the last of the "dirty" entries in the cache are written back. This is not 
too bad as WriteBackCacheManager objects can be used in with statements. More 
about that below::
-        
-        
-            import pylru
-        
-            size = 100
-            cached = pylru.WriteBackCacheManager(store, size)
-                                # Or
-            cached = pylru.lruwrap(store, size, True)
-                                # This is a factory function that does the 
same thing.
-                                
-            value = cached[key] # Lookup a value given its key.
-            cached[key] = value # Insert a key/value pair.
-            del cached[key]     # Delete a value given its key.
-            
-            key in cached       # Test for membership. Does not affect the 
cache order.
-        
-                                
-            cached.keys()       # Return an iterator over the keys in the 
cache/store
-            cached.values()     # Return an iterator over the values in the 
cache/store
-            cached.items()      # Return an iterator over the (key, value) 
pairs in the
-                                # cache/store.
-                                #
-                                # The iterators iterate over a consistent view 
of the
-                                # respective elements. That is, except for the 
order,
-                                # the elements are the same as those returned 
if you
-                                # first called sync() then called
-                                # store.keys() [or values() or items()]
-                                #
-                                # These calls have no effect on the cache 
order.
-                                # The iterators iterate over their respective 
elements
-                                # in arbitrary order.
-                                #
-                                # WARNING - While these iterators do not affect the
-                                # cache order the lookup, insert, and delete operations
-                                # do. The result of changing the cache's order
-                                # during iteration is undefined. If you really 
need to
-                                # do something of the sort use 
list(cached.keys()),
-                                # then loop over the list elements.
-                                
-            for key in cached:  # Same as cached.keys()
-        
-            cached.size()       # Returns the size of the cache
-            cached.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            x = len(cached)     # Returns the number of items stored in the 
store.
-                                #
-                                # WARNING - This method calls sync() 
internally. If
-                                # that has adverse performance effects for your
-                                # application, you may want to avoid calling 
this
-                                # method frequently.
-        
-            cached.clear()      # Remove all items from the store and cache.
-            
-            cached.sync()       # Make the store and cache consistent. Write all
-                                # cached changes to the store that have not been
-                                # written yet.
-                                
-            cached.flush()      # Calls sync() then clears the cache.
-            
-        
-        To help the programmer ensure that the final sync() is called, 
WriteBackCacheManager objects can be used in a with statement::
-        
-            with pylru.WriteBackCacheManager(store, size) as cached:
-                # Use cached just like you would store. sync() is called 
automatically
-                # for you when leaving the with statement block.
-        
-        
-        FunctionCacheManager
-        ---------------------
-        
-        FunctionCacheManager allows you to compose a function with an 
lrucache. The resulting object can be called just like the original function, 
but the results are cached to speed up future calls. The function must have 
arguments that are hashable. FunctionCacheManager takes an optional callback 
function as a third argument::
-        
-            import pylru
-        
-            def square(x):
-                return x * x
-        
-            size = 100
-            cached = pylru.FunctionCacheManager(square, size)
-        
-            y = cached(7)
-        
-            # The results of cached are the same as square, but automatically 
cached
-            # to speed up future calls.
-        
-            cached.size()       # Returns the size of the cache
-            cached.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            cached.clear()      # Remove all items from the cache.
-        
-        
-        
-        lrudecorator
-        ------------
-        
-        PyLRU also provides a function decorator. This is basically the same 
functionality as FunctionCacheManager, but in the form of a decorator. The 
decorator takes an optional callback function as a second argument::
-        
-            from pylru import lrudecorator
-        
-            @lrudecorator(100)
-            def square(x):
-                return x * x
-        
-            # The results of the square function are cached to speed up future 
calls.
-        
-            square.size()       # Returns the size of the cache
-            square.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            square.clear()      # Remove all items from the cache.
-        
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 2.6
 Classifier: Programming Language :: Python :: 2.7
@@ -268,3 +15,259 @@
 Classifier: License :: OSI Approved :: GNU General Public License (GPL)
 Classifier: Operating System :: OS Independent
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
+License-File: LICENSE.txt
+
+
+
+PyLRU
+=====
+
+A least recently used (LRU) cache for Python.
+
+Introduction
+============
+
+Pylru implements a true LRU cache along with several support classes. The 
cache is efficient and written in pure Python. It works with Python 2.6+ 
including the 3.x series. Basic operations (lookup, insert, delete) all run in 
a constant amount of time. Pylru provides a cache class with a simple dict 
interface. It also provides classes to wrap any object that has a dict 
interface with a cache. Both write-through and write-back semantics are 
supported. Pylru also provides classes to wrap functions in a similar way, 
including a function decorator.
+
+You can install pylru or you can just copy the source file pylru.py and use it 
directly in your own project. The rest of this file explains what the pylru 
module provides and how to use it. If you want to know more examine pylru.py. 
The code is straightforward and well commented.
+
+Usage
+=====
+
+lrucache
+--------
+
+An lrucache object has a dictionary-like interface and can be used in the same 
way::
+
+    import pylru
+
+    size = 100          # Size of the cache. The maximum number of key/value
+                        # pairs you want the cache to hold.
+    
+    cache = pylru.lrucache(size)
+                        # Create a cache object.
+    
+    value = cache[key]  # Lookup a value given its key.
+    cache[key] = value  # Insert a key/value pair.
+    del cache[key]      # Delete a value given its key.
+                        #
+                        # These three operations affect the order of the cache.
+                        # Lookup and insert both move the key/value to the most
+                        # recently used position. Delete (obviously) removes a
+                        # key/value from whatever position it was in.
+                        
+    key in cache        # Test for membership. Does not affect the cache order.
+    
+    value = cache.peek(key)
+                        # Lookup a value given its key. Does not affect the
+                        # cache order.
+
+    cache.keys()        # Return an iterator over the keys in the cache
+    cache.values()      # Return an iterator over the values in the cache
+    cache.items()       # Return an iterator over the (key, value) pairs in the
+                        # cache.
+                        #
+                        # These calls have no effect on the cache order.
+                        # lrucache is scan resistant when these calls are used.
+                        # The iterators iterate over their respective elements
+                        # in the order of most recently used to least recently
+                        # used.
+                        #
+                        # WARNING - While these iterators do not affect the
+                        # cache order the lookup, insert, and delete operations
+                        # do. The result of changing the cache's order
+                        # during iteration is undefined. If you really need to
+                        # do something of the sort use list(cache.keys()), then
+                        # loop over the list elements.
+                        
+    for key in cache:   # Caches support __iter__ so you can use them directly
+        pass            # in a for loop to loop over the keys just like
+                        # cache.keys()
+
+    cache.size()        # Returns the size of the cache
+    cache.size(x)       # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    x = len(cache)      # Returns the number of items stored in the cache.
+                        # x will be less than or equal to cache.size()
+
+    cache.clear()       # Remove all items from the cache.
+
+
+Lrucache takes an optional callback function as a second argument. Since the 
cache has a fixed size, some operations (such as an insertion) may cause the 
least recently used key/value pair to be ejected. If the optional callback 
function is given it will be called when this occurs. For example::
+
+    import pylru
+
+    def callback(key, value):
+        print (key, value)    # A dumb callback that just prints the key/value
+
+    size = 100
+    cache = pylru.lrucache(size, callback)
+
+    # Use the cache... When it gets full some pairs may be ejected due to
+    # the fixed cache size. But, not before the callback is called to let you
+    # know.
+
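+An editorial sketch, not from the upstream README: filling a size-2 cache
+with a third pair ejects the least recently used entry and fires the
+callback defined above::
+
+    cache = pylru.lrucache(2, callback)
+    cache['a'] = 1
+    cache['b'] = 2
+    cache['c'] = 3    # The cache is full, so the least recently used pair
+                      # ('a', 1) is ejected and callback('a', 1) is invoked.
+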
+WriteThroughCacheManager
+------------------------
+
+Often a cache is used to speed up access to some other high latency object. 
For example, imagine you have a backend storage object that reads/writes 
from/to a remote server. Let us call this object *store*. If store has a 
dictionary interface a cache manager class can be used to compose the store 
object and an lrucache. The manager object exposes a dictionary interface. The 
programmer can then interact with the manager object as if it were the store. 
The manager object takes care of communicating with the store and caching 
key/value pairs in the lrucache object.
+
+Two different semantics are supported, write-through (WriteThroughCacheManager 
class) and write-back (WriteBackCacheManager class). With write-through, 
lookups from the store are cached for future lookups. Insertions and deletions 
are updated in the cache and written through to the store immediately. 
Write-back works the same way, but insertions are updated only in the cache. 
These "dirty" key/value pair will only be updated to the underlying store when 
they are ejected from the cache or when a sync is performed. The 
WriteBackCacheManager class is discussed more below. 
+
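+As an editorial illustration, not from the upstream README (it assumes a
+plain dict as the store, which satisfies the dictionary interface),
+write-through updates the store immediately::
+
+    import pylru
+
+    store = {}
+    cached = pylru.WriteThroughCacheManager(store, 100)
+    cached['key'] = 'value'          # Written to the cache and, immediately,
+                                     # through to the store.
+    assert store['key'] == 'value'
+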
+The WriteThroughCacheManager class takes as arguments the store object you 
want to compose and the cache size. It then creates an LRU cache and 
automatically manages it::
+
+    import pylru
+
+    size = 100
+    cached = pylru.WriteThroughCacheManager(store, size)
+                        # Or
+    cached = pylru.lruwrap(store, size)
+                        # This is a factory function that does the same thing.
+
+    # Now the object *cached* can be used just like store, except caching is
+    # automatically handled.
+    
+    value = cached[key] # Lookup a value given its key.
+    cached[key] = value # Insert a key/value pair.
+    del cached[key]     # Delete a value given its key.
+    
+    key in cached       # Test for membership. Does not affect the cache order.
+
+    cached.keys()       # Returns store.keys()
+    cached.values()     # Returns store.values() 
+    cached.items()      # Returns store.items()
+                        #
+                        # These calls have no effect on the cache order.
+                        # The iterators iterate over their respective elements
+                        # in the order dictated by store.
+                        
+    for key in cached:  # Same as store.keys()
+
+    cached.size()       # Returns the size of the cache
+    cached.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    x = len(cached)     # Returns the number of items stored in the store.
+
+    cached.clear()      # Remove all items from the store and cache.
+
+
+WriteBackCacheManager
+---------------------
+
+Similar to the WriteThroughCacheManager class except write-back semantics are 
used to manage the cache. The programmer is responsible for one more thing:
they MUST call sync() when they are finished. This ensures that the last 
of the "dirty" entries in the cache are written back. This is not too bad as 
WriteBackCacheManager objects can be used in with statements. More about that 
below::
+
+
+    import pylru
+
+    size = 100
+    cached = pylru.WriteBackCacheManager(store, size)
+                        # Or
+    cached = pylru.lruwrap(store, size, True)
+                        # This is a factory function that does the same thing.
+                        
+    value = cached[key] # Lookup a value given its key.
+    cached[key] = value # Insert a key/value pair.
+    del cached[key]     # Delete a value given its key.
+    
+    key in cached       # Test for membership. Does not affect the cache order.
+
+                        
+    cached.keys()       # Return an iterator over the keys in the cache/store
+    cached.values()     # Return an iterator over the values in the cache/store
+    cached.items()      # Return an iterator over the (key, value) pairs in the
+                        # cache/store.
+                        #
+                        # The iterators iterate over a consistent view of the
+                        # respective elements. That is, except for the order,
+                        # the elements are the same as those returned if you
+                        # first called sync() then called
+                        # store.keys() [or values() or items()]
+                        #
+                        # These calls have no effect on the cache order.
+                        # The iterators iterate over their respective elements
+                        # in arbitrary order.
+                        #
+                        # WARNING - While these iterators do not affect the
+                        # cache order the lookup, insert, and delete operations
+                        # do. The result of changing the cache's order
+                        # during iteration is undefined. If you really need to
+                        # do something of the sort use list(cached.keys()),
+                        # then loop over the list elements.
+                        
+    for key in cached:  # Same as cached.keys()
+
+    cached.size()       # Returns the size of the cache
+    cached.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    x = len(cached)     # Returns the number of items stored in the store.
+                        #
+                        # WARNING - This method calls sync() internally. If
+                        # that has adverse performance effects for your
+                        # application, you may want to avoid calling this
+                        # method frequently.
+
+    cached.clear()      # Remove all items from the store and cache.
+    
+    cached.sync()       # Make the store and cache consistent. Write all
+                        # cached changes to the store that have not been
+                        # written yet.
+                        
+    cached.flush()      # Calls sync() then clears the cache.
+    
+
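+An editorial sketch, again assuming a plain dict as the store, of the
+write-back behaviour described above::
+
+    import pylru
+
+    store = {}
+    cached = pylru.WriteBackCacheManager(store, 100)
+    cached['key'] = 'value'
+    assert 'key' not in store        # Still "dirty"; held only in the cache.
+    cached.sync()
+    assert store['key'] == 'value'   # Written back to the store by sync().
+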
+To help the programmer ensure that the final sync() is called, 
WriteBackCacheManager objects can be used in a with statement::
+
+    with pylru.WriteBackCacheManager(store, size) as cached:
+        # Use cached just like you would store. sync() is called automatically
+        # for you when leaving the with statement block.
+
+
+FunctionCacheManager
+---------------------
+
+FunctionCacheManager allows you to compose a function with an lrucache. The 
resulting object can be called just like the original function, but the results 
are cached to speed up future calls. The function must have arguments that are 
hashable. FunctionCacheManager takes an optional callback function as a third 
argument::
+
+    import pylru
+
+    def square(x):
+        return x * x
+
+    size = 100
+    cached = pylru.FunctionCacheManager(square, size)
+
+    y = cached(7)
+
+    # The results of cached are the same as square, but automatically cached
+    # to speed up future calls.
+
+    cached.size()       # Returns the size of the cache
+    cached.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    cached.clear()      # Remove all items from the cache.
+
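+An editorial sketch of the optional callback (an assumption: as with
+lrucache, it appears to receive the evicted cache key, built from the call
+arguments, and the cached result)::
+
+    def evicted(key, result):
+        print(key, result)  # Log entries as they are ejected from the cache.
+
+    cached = pylru.FunctionCacheManager(square, size, evicted)
+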
+
+
+lrudecorator
+------------
+
+PyLRU also provides a function decorator. This is basically the same 
functionality as FunctionCacheManager, but in the form of a decorator. The 
decorator takes an optional callback function as a second argument::
+
+    from pylru import lrudecorator
+
+    @lrudecorator(100)
+    def square(x):
+        return x * x
+
+    # The results of the square function are cached to speed up future calls.
+
+    square.size()       # Returns the size of the cache
+    square.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    square.clear()      # Remove all items from the cache.
+
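+An editorial sketch of the decorator's optional callback, under the same
+assumption as the FunctionCacheManager example above::
+
+    @lrudecorator(100, evicted)
+    def cube(x):
+        return x * x * x
+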
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/pylru-1.2.0/pylru.egg-info/PKG-INFO 
new/pylru-1.2.1/pylru.egg-info/PKG-INFO
--- old/pylru-1.2.0/pylru.egg-info/PKG-INFO     2019-03-13 22:53:18.000000000 
+0100
+++ new/pylru-1.2.1/pylru.egg-info/PKG-INFO     2022-03-06 21:14:42.000000000 
+0100
@@ -1,264 +1,11 @@
-Metadata-Version: 1.1
+Metadata-Version: 2.1
 Name: pylru
-Version: 1.2.0
+Version: 1.2.1
 Summary: A least recently used (LRU) cache implementation
 Home-page: https://github.com/jlhutch/pylru
 Author: Jay Hutchinson
 Author-email: [email protected]
 License: UNKNOWN
-Description: 
-        
-        PyLRU
-        =====
-        
-        A least recently used (LRU) cache for Python.
-        
-        Introduction
-        ============
-        
-        Pylru implements a true LRU cache along with several support classes. 
The cache is efficient and written in pure Python. It works with Python 2.6+ 
including the 3.x series. Basic operations (lookup, insert, delete) all run in 
a constant amount of time. Pylru provides a cache class with a simple dict 
interface. It also provides classes to wrap any object that has a dict 
interface with a cache. Both write-through and write-back semantics are 
supported. Pylru also provides classes to wrap functions in a similar way, 
including a function decorator.
-        
-        You can install pylru or you can just copy the source file pylru.py 
and use it directly in your own project. The rest of this file explains what 
the pylru module provides and how to use it. If you want to know more examine 
pylru.py. The code is straightforward and well commented.
-        
-        Usage
-        =====
-        
-        lrucache
-        --------
-        
-        An lrucache object has a dictionary-like interface and can be used in 
the same way::
-        
-            import pylru
-        
-            size = 100          # Size of the cache. The maximum number of 
key/value
-                                # pairs you want the cache to hold.
-            
-            cache = pylru.lrucache(size)
-                                # Create a cache object.
-            
-            value = cache[key]  # Lookup a value given its key.
-            cache[key] = value  # Insert a key/value pair.
-            del cache[key]      # Delete a value given its key.
-                                #
-                                # These three operations affect the order of 
the cache.
-                                # Lookup and insert both move the key/value to 
the most
-                                # recently used position. Delete (obviously) 
removes a
-                                # key/value from whatever position it was in.
-                                
-            key in cache        # Test for membership. Does not affect the 
cache order.
-            
-            value = cache.peek(key)
-                                # Lookup a value given its key. Does not 
affect the
-                                # cache order.
-        
-            cache.keys()        # Return an iterator over the keys in the cache
-            cache.values()      # Return an iterator over the values in the 
cache
-            cache.items()       # Return an iterator over the (key, value) 
pairs in the
-                                # cache.
-                                #
-                                # These calls have no effect on the cache 
order.
-                                # lrucache is scan resistant when these calls 
are used.
-                                # The iterators iterate over their respective 
elements
-                                # in the order of most recently used to least 
recently
-                                # used.
-                                #
-                                # WARNING - While these iterators do not 
affect the
-                                # cache order the lookup, insert, and delete 
operations
-                                # do. The result of changing the cache's order
-                                # during iteration is undefined. If you really 
need to
-                                # do something of the sort use 
list(cache.keys()), then
-                                # loop over the list elements.
-                                
-            for key in cache:   # Caches support __iter__ so you can use them 
directly
-                pass            # in a for loop to loop over the keys just like
-                                # cache.keys()
-        
-            cache.size()        # Returns the size of the cache
-            cache.size(x)       # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            x = len(cache)      # Returns the number of items stored in the 
cache.
-                                # x will be less than or equal to cache.size()
-        
-            cache.clear()       # Remove all items from the cache.
-        
-        
-        Lrucache takes an optional callback function as a second argument. 
Since the cache has a fixed size, some operations (such as an insertion) may 
cause the least recently used key/value pair to be ejected. If the optional 
callback function is given it will be called when this occurs. For example::
-        
-            import pylru
-        
-            def callback(key, value):
-                print (key, value)    # A dumb callback that just prints the 
key/value
-        
-            size = 100
-            cache = pylru.lrucache(size, callback)
-        
-            # Use the cache... When it gets full some pairs may be ejected due 
to
-            # the fixed cache size. But, not before the callback is called to 
let you
-            # know.
-        
-        WriteThroughCacheManager
-        ------------------------
-        
-        Often a cache is used to speed up access to some other high latency 
object. For example, imagine you have a backend storage object that 
reads/writes from/to a remote server. Let us call this object *store*. If store 
has a dictionary interface a cache manager class can be used to compose the 
store object and an lrucache. The manager object exposes a dictionary 
interface. The programmer can then interact with the manager object as if it 
were the store. The manager object takes care of communicating with the store 
and caching key/value pairs in the lrucache object.
-        
-        Two different semantics are supported, write-through 
(WriteThroughCacheManager class) and write-back (WriteBackCacheManager class). 
With write-through, lookups from the store are cached for future lookups. 
Insertions and deletions are updated in the cache and written through to the 
store immediately. Write-back works the same way, but insertions are updated 
only in the cache. These "dirty" key/value pair will only be updated to the 
underlying store when they are ejected from the cache or when a sync is 
performed. The WriteBackCacheManager class is discussed more below. 
-        
-        The WriteThroughCacheManager class takes as arguments the store object 
you want to compose and the cache size. It then creates an LRU cache and 
automatically manages it::
-        
-            import pylru
-        
-            size = 100
-            cached = pylru.WriteThroughCacheManager(store, size)
-                                # Or
-            cached = pylru.lruwrap(store, size)
-                                # This is a factory function that does the 
same thing.
-        
-            # Now the object *cached* can be used just like store, except 
caching is
-            # automatically handled.
-            
-            value = cached[key] # Lookup a value given its key.
-            cached[key] = value # Insert a key/value pair.
-            del cached[key]     # Delete a value given its key.
-            
-            key in cached       # Test for membership. Does not affect the 
cache order.
-        
-            cached.keys()       # Returns store.keys()
-            cached.values()     # Returns store.values() 
-            cached.items()      # Returns store.items()
-                                #
-                                # These calls have no effect on the cache 
order.
-                                # The iterators iterate over their respective 
elements
-                                # in the order dictated by store.
-                                
-            for key in cached:  # Same as store.keys()
-        
-            cached.size()       # Returns the size of the cache
-            cached.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            x = len(cached)     # Returns the number of items stored in the 
store.
-        
-            cached.clear()      # Remove all items from the store and cache.
-        
-        
-        WriteBackCacheManager
-        ---------------------
-        
-        Similar to the WriteThroughCacheManager class except write-back 
semantics are used to manage the cache. The programmer is responsible for one 
more thing as well. They MUST call sync() when they are finished. This ensures 
that the last of the "dirty" entries in the cache are written back. This is not 
too bad as WriteBackCacheManager objects can be used in with statements. More 
about that below::
-        
-        
-            import pylru
-        
-            size = 100
-            cached = pylru.WriteBackCacheManager(store, size)
-                                # Or
-            cached = pylru.lruwrap(store, size, True)
-                                # This is a factory function that does the 
same thing.
-                                
-            value = cached[key] # Lookup a value given its key.
-            cached[key] = value # Insert a key/value pair.
-            del cached[key]     # Delete a value given its key.
-            
-            key in cached       # Test for membership. Does not affect the 
cache order.
-        
-                                
-            cached.keys()       # Return an iterator over the keys in the 
cache/store
-            cached.values()     # Return an iterator over the values in the 
cache/store
-            cached.items()      # Return an iterator over the (key, value) 
pairs in the
-                                # cache/store.
-                                #
-                                # The iterators iterate over a consistent view 
of the
-                                # respective elements. That is, except for the 
order,
-                                # the elements are the same as those returned 
if you
-                                # first called sync() then called
-                                # store.keys() [or values() or items()]
-                                #
-                                # These calls have no effect on the cache 
order.
-                                # The iterators iterate over their respective 
elements
-                                # in arbitrary order.
-                                #
-                                # WARNING - While these iterators do not affect the
-                                # cache order the lookup, insert, and delete operations
-                                # do. The result of changing the cache's order
-                                # during iteration is undefined. If you really 
need to
-                                # do something of the sort use 
list(cached.keys()),
-                                # then loop over the list elements.
-                                
-            for key in cached:  # Same as cached.keys()
-        
-            cached.size()       # Returns the size of the cache
-            cached.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            x = len(cached)     # Returns the number of items stored in the 
store.
-                                #
-                                # WARNING - This method calls sync() 
internally. If
-                                # that has adverse performance effects for your
-                                # application, you may want to avoid calling 
this
-                                # method frequently.
-        
-            cached.clear()      # Remove all items from the store and cache.
-            
-            cached.sync()       # Make the store and cache consistent. Write all
-                                # cached changes to the store that have not been
-                                # written yet.
-                                
-            cached.flush()      # Calls sync() then clears the cache.
-            
-        
-        To help the programmer ensure that the final sync() is called, 
WriteBackCacheManager objects can be used in a with statement::
-        
-            with pylru.WriteBackCacheManager(store, size) as cached:
-                # Use cached just like you would store. sync() is called 
automatically
-                # for you when leaving the with statement block.
-        
-        
-        FunctionCacheManager
-        ---------------------
-        
-        FunctionCacheManager allows you to compose a function with an 
lrucache. The resulting object can be called just like the original function, 
but the results are cached to speed up future calls. The function must have 
arguments that are hashable. FunctionCacheManager takes an optional callback 
function as a third argument::
-        
-            import pylru
-        
-            def square(x):
-                return x * x
-        
-            size = 100
-            cached = pylru.FunctionCacheManager(square, size)
-        
-            y = cached(7)
-        
-            # The results of cached are the same as square, but automatically 
cached
-            # to speed up future calls.
-        
-            cached.size()       # Returns the size of the cache
-            cached.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            cached.clear()      # Remove all items from the cache.
-        
-        
-        
-        lrudecorator
-        ------------
-        
-        PyLRU also provides a function decorator. This is basically the same 
functionality as FunctionCacheManager, but in the form of a decorator. The 
decorator takes an optional callback function as a second argument::
-        
-            from pylru import lrudecorator
-        
-            @lrudecorator(100)
-            def square(x):
-                return x * x
-        
-            # The results of the square function are cached to speed up future 
calls.
-        
-            square.size()       # Returns the size of the cache
-            square.size(x)      # Changes the size of the cache. x MUST be 
greater than
-                                # zero. Returns the new size x.
-        
-            square.clear()      # Remove all items from the cache.
-        
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 2.6
 Classifier: Programming Language :: Python :: 2.7
@@ -268,3 +15,259 @@
 Classifier: License :: OSI Approved :: GNU General Public License (GPL)
 Classifier: Operating System :: OS Independent
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
+License-File: LICENSE.txt
+
+
+
+PyLRU
+=====
+
+A least recently used (LRU) cache for Python.
+
+Introduction
+============
+
+Pylru implements a true LRU cache along with several support classes. The 
cache is efficient and written in pure Python. It works with Python 2.6+ 
including the 3.x series. Basic operations (lookup, insert, delete) all run in 
a constant amount of time. Pylru provides a cache class with a simple dict 
interface. It also provides classes to wrap any object that has a dict 
interface with a cache. Both write-through and write-back semantics are 
supported. Pylru also provides classes to wrap functions in a similar way, 
including a function decorator.
+
+You can install pylru or you can just copy the source file pylru.py and use it 
directly in your own project. The rest of this file explains what the pylru 
module provides and how to use it. If you want to know more examine pylru.py. 
The code is straightforward and well commented.
+
+Usage
+=====
+
+lrucache
+--------
+
+An lrucache object has a dictionary-like interface and can be used in the same 
way::
+
+    import pylru
+
+    size = 100          # Size of the cache. The maximum number of key/value
+                        # pairs you want the cache to hold.
+    
+    cache = pylru.lrucache(size)
+                        # Create a cache object.
+    
+    value = cache[key]  # Lookup a value given its key.
+    cache[key] = value  # Insert a key/value pair.
+    del cache[key]      # Delete a value given its key.
+                        #
+                        # These three operations affect the order of the cache.
+                        # Lookup and insert both move the key/value to the most
+                        # recently used position. Delete (obviously) removes a
+                        # key/value from whatever position it was in.
+                        
+    key in cache        # Test for membership. Does not affect the cache order.
+    
+    value = cache.peek(key)
+                        # Lookup a value given its key. Does not affect the
+                        # cache order.
+
+    cache.keys()        # Return an iterator over the keys in the cache
+    cache.values()      # Return an iterator over the values in the cache
+    cache.items()       # Return an iterator over the (key, value) pairs in the
+                        # cache.
+                        #
+                        # These calls have no effect on the cache order.
+                        # lrucache is scan resistant when these calls are used.
+                        # The iterators iterate over their respective elements
+                        # in the order of most recently used to least recently
+                        # used.
+                        #
+                        # WARNING - While these iterators do not affect the
+                        # cache order the lookup, insert, and delete operations
+                        # do. The result of changing the cache's order
+                        # during iteration is undefined. If you really need to
+                        # do something of the sort use list(cache.keys()), then
+                        # loop over the list elements.
+                        
+    for key in cache:   # Caches support __iter__ so you can use them directly
+        pass            # in a for loop to loop over the keys just like
+                        # cache.keys()
+
+    cache.size()        # Returns the size of the cache
+    cache.size(x)       # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    x = len(cache)      # Returns the number of items stored in the cache.
+                        # x will be less than or equal to cache.size()
+
+    cache.clear()       # Remove all items from the cache.
+
+
+Lrucache takes an optional callback function as a second argument. Since the 
cache has a fixed size, some operations (such as an insertion) may cause the 
least recently used key/value pair to be ejected. If the optional callback 
function is given it will be called when this occurs. For example::
+
+    import pylru
+
+    def callback(key, value):
+        print (key, value)    # A dumb callback that just prints the key/value
+
+    size = 100
+    cache = pylru.lrucache(size, callback)
+
+    # Use the cache... When it gets full some pairs may be ejected due to
+    # the fixed cache size. But, not before the callback is called to let you
+    # know.
+
+WriteThroughCacheManager
+------------------------
+
+Often a cache is used to speed up access to some other high latency object. 
For example, imagine you have a backend storage object that reads/writes 
from/to a remote server. Let us call this object *store*. If store has a 
dictionary interface a cache manager class can be used to compose the store 
object and an lrucache. The manager object exposes a dictionary interface. The 
programmer can then interact with the manager object as if it were the store. 
The manager object takes care of communicating with the store and caching 
key/value pairs in the lrucache object.
+
+Two different semantics are supported, write-through (WriteThroughCacheManager 
class) and write-back (WriteBackCacheManager class). With write-through, 
lookups from the store are cached for future lookups. Insertions and deletions 
are updated in the cache and written through to the store immediately. 
Write-back works the same way, but insertions are updated only in the cache. 
These "dirty" key/value pair will only be updated to the underlying store when 
they are ejected from the cache or when a sync is performed. The 
WriteBackCacheManager class is discussed more below. 
+
+The WriteThroughCacheManager class takes as arguments the store object you 
want to compose and the cache size. It then creates an LRU cache and 
automatically manages it::
+
+    import pylru
+
+    size = 100
+    cached = pylru.WriteThroughCacheManager(store, size)
+                        # Or
+    cached = pylru.lruwrap(store, size)
+                        # This is a factory function that does the same thing.
+
+    # Now the object *cached* can be used just like store, except caching is
+    # automatically handled.
+    
+    value = cached[key] # Lookup a value given its key.
+    cached[key] = value # Insert a key/value pair.
+    del cached[key]     # Delete a value given its key.
+    
+    key in cached       # Test for membership. Does not affect the cache order.
+
+    cached.keys()       # Returns store.keys()
+    cached.values()     # Returns store.values() 
+    cached.items()      # Returns store.items()
+                        #
+                        # These calls have no effect on the cache order.
+                        # The iterators iterate over their respective elements
+                        # in the order dictated by store.
+                        
+    for key in cached:  # Same as store.keys()
+
+    cached.size()       # Returns the size of the cache
+    cached.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    x = len(cached)     # Returns the number of items stored in the store.
+
+    cached.clear()      # Remove all items from the store and cache.
+
+
+WriteBackCacheManager
+---------------------
+
+Similar to the WriteThroughCacheManager class except write-back semantics are 
used to manage the cache. The programmer is responsible for one more thing:
they MUST call sync() when they are finished. This ensures that the last 
of the "dirty" entries in the cache are written back. This is not too bad as 
WriteBackCacheManager objects can be used in with statements. More about that 
below::
+
+
+    import pylru
+
+    size = 100
+    cached = pylru.WriteBackCacheManager(store, size)
+                        # Or
+    cached = pylru.lruwrap(store, size, True)
+                        # This is a factory function that does the same thing.
+                        
+    value = cached[key] # Lookup a value given its key.
+    cached[key] = value # Insert a key/value pair.
+    del cached[key]     # Delete a value given its key.
+    
+    key in cached       # Test for membership. Does not affect the cache order.
+
+                        
+    cached.keys()       # Return an iterator over the keys in the cache/store
+    cached.values()     # Return an iterator over the values in the cache/store
+    cached.items()      # Return an iterator over the (key, value) pairs in the
+                        # cache/store.
+                        #
+                        # The iterators iterate over a consistent view of the
+                        # respective elements. That is, except for the order,
+                        # the elements are the same as those returned if you
+                        # first called sync() then called
+                        # store.keys() [or values() or items()]
+                        #
+                        # These calls have no effect on the cache order.
+                        # The iterators iterate over their respective elements
+                        # in arbitrary order.
+                        #
+                        # WARNING - While these iterators do not affect the
+                        # cache order the lookup, insert, and delete operations
+                        # do. The result of changing the cache's order
+                        # during iteration is undefined. If you really need to
+                        # do something of the sort use list(cached.keys()),
+                        # then loop over the list elements.
+                        
+    for key in cached:  # Same as cached.keys()
+
+    cached.size()       # Returns the size of the cache
+    cached.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    x = len(cached)     # Returns the number of items in the store.
+                        #
+                        # WARNING - This method calls sync() internally. If
+                        # that has adverse performance effects for your
+                        # application, you may want to avoid calling this
+                        # method frequently.
+
+    cached.clear()      # Remove all items from the store and cache.
+    
+    cached.sync()       # Make the store and cache consistent. Writes back
+                        # any cached changes that have not yet been written
+                        # to the store.
+                        
+    cached.flush()      # Calls sync() then clears the cache.
+    
+
+To help the programmer ensure that the final sync() is called, 
WriteBackCacheManager objects can be used in a with statement::
+
+    with pylru.WriteBackCacheManager(store, size) as cached:
+        # Use cached just like you would store. sync() is called automatically
+        # for you when leaving the with statement block.
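+
+For example, a minimal sketch (again with a plain dict standing in for the
+store) showing that dirty entries reach the store once the with block is
+left::
+
+    import pylru
+
+    store = {}
+    with pylru.WriteBackCacheManager(store, 100) as cached:
+        cached['alpha'] = 1   # Dirty: may live only in the cache for now.
+    # Leaving the block called sync(), so the store is now consistent.
+    assert store['alpha'] == 1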
+
+
+FunctionCacheManager
+---------------------
+
+FunctionCacheManager allows you to compose a function with an lrucache. The 
resulting object can be called just like the original function, but the results 
are cached to speed up future calls. The function's arguments must be 
hashable. FunctionCacheManager takes an optional callback function as a third 
argument::
+
+    import pylru
+
+    def square(x):
+        return x * x
+
+    size = 100
+    cached = pylru.FunctionCacheManager(square, size)
+
+    y = cached(7)
+
+    # The results of cached are the same as square, but automatically cached
+    # to speed up future calls.
+
+    cached.size()       # Returns the size of the cache
+    cached.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    cached.clear()      # Remove all items from the cache.
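+
+For example, a minimal sketch using the optional callback. One assumption
+here: the callback is taken to follow the lrucache convention of being
+invoked with the evicted key and value when an entry falls out of the
+cache::
+
+    import pylru
+
+    def square(x):
+        return x * x
+
+    def evicted(key, value):    # Assumed (key, value) eviction signature.
+        print('evicted:', key, '->', value)
+
+    cached = pylru.FunctionCacheManager(square, 100, evicted)
+    y = cached(7)   # Computed once; identical calls are served from cache.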
+
+
+
+lrudecorator
+------------
+
+PyLRU also provides a function decorator. It offers the same functionality as 
FunctionCacheManager, but in the form of a decorator. The 
decorator takes an optional callback function as a second argument::
+
+    from pylru import lrudecorator
+
+    @lrudecorator(100)
+    def square(x):
+        return x * x
+
+    # The results of the square function are cached to speed up future calls.
+
+    square.size()       # Returns the size of the cache
+    square.size(x)      # Changes the size of the cache. x MUST be greater than
+                        # zero. Returns the new size x.
+
+    square.clear()      # Remove all items from the cache.
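+
+For example, a minimal sketch with the optional callback (assuming, as with
+FunctionCacheManager above, that it receives the evicted key and value)::
+
+    from pylru import lrudecorator
+
+    def evicted(key, value):    # Assumed (key, value) eviction signature.
+        print('evicted:', key, '->', value)
+
+    @lrudecorator(100, evicted)
+    def square(x):
+        return x * x
+
+    square(7)   # Cached; a later square(7) call does not recompute.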
+
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/pylru-1.2.0/pylru.py new/pylru-1.2.1/pylru.py
--- old/pylru-1.2.0/pylru.py    2019-03-13 22:53:00.000000000 +0100
+++ new/pylru-1.2.1/pylru.py    2022-03-06 17:33:57.000000000 +0100
@@ -2,7 +2,7 @@
 # Cache implementation with a Least Recently Used (LRU) replacement policy and
 # a basic dictionary interface.
 
-# Copyright (C) 2006-2019 Jay Hutchinson
+# Copyright (C) 2006-2022 Jay Hutchinson
 
 # This program is free software; you can redistribute it and/or modify it
 # under the terms of the GNU General Public License as published by the Free
@@ -46,7 +46,6 @@
 
 
 class lrucache(object):
-
     def __init__(self, size, callback=None):
         self.callback = callback
 
@@ -99,7 +98,6 @@
         return node.value
 
     def get(self, key, default=None):
-        """Get an item - return default (None) if not present"""
         if key not in self.table:
             return default
         
@@ -207,9 +205,27 @@
         if len(self) < 1:
             raise KeyError
 
-        key = self.head.key
-        value = self.head.value
-        del self[key]
+        # Grab the head node
+        node = self.head
+
+        # Save the key and value so that we can return them.
+        key = node.key
+        value = node.value
+
+        # Remove the key from the hash table and mark the node as empty.
+        del self.table[key]
+        node.empty = True
+
+        # Not strictly necessary.
+        node.key = None
+        node.value = None
+
+        # Because this node is now empty, we want to reuse it before any
+        # non-empty node. To do that we want to move it to the tail of the
+        # list. This node is the head node. Because the list is circular,
+        # the ordering is already correct; we just need to adjust the 'head'
+        # variable.
+        self.head = node.next
 
         return key, value
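+
+        # For illustration (a sketch, not part of the module): popitem()
+        # removes and returns the entry at the head of the list, i.e. the
+        # most recently used pair:
+        #
+        #     cache = lrucache(2)
+        #     cache['a'] = 1
+        #     cache['b'] = 2
+        #     assert cache.popitem() == ('b', 2)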
 
@@ -317,6 +333,60 @@
             yield node
             node = node.next
 
+    # The methods __getstate__() and __setstate__() are used to correctly
+    # support the copy and pickle modules from the standard library. In
+    # particular, the doubly linked list trips up the introspection machinery
+    # used by copy/pickle.
+    def __getstate__(self):
+        # Copy the instance attributes.
+        d = self.__dict__.copy()
+
+        # Remove those that we need to do by hand.
+        del d['table']
+        del d['head']
+
+        # Package up the key/value pairs from the doubly linked list into a
+        # normal list that can be copied/pickled correctly. We put the
+        # key/value pairs into the list in order, as returned by dli(), from
+        # most recently to least recently used, so that the copy can be
+        # restored with the same ordering.
+        elements = [(node.key, node.value) for node in self.dli()]
+        return (d, elements)
+
+    def __setstate__(self, state):
+        d = state[0]
+        elements = state[1]
+
+        # Restore the instance attributes, except for the table and head.
+        self.__dict__.update(d)
+
+        # Rebuild the table and doubly linked list from the simple list of
+        # key/value pairs in 'elements'.
+
+        # The listSize is the size of the original cache. We want this cache
+        # to have the same size, but we need to reset it temporarily to set up
+        # table and head correctly, so save a copy of the size.
+        size = self.listSize
+
+        # Set up a table and doubly linked list. This is identical to the way
+        # __init__() does it.
+        self.table = {}
+
+        self.head = _dlnode()
+        self.head.next = self.head
+        self.head.prev = self.head
+
+        self.listSize = 1
+
+        # Now adjust the list to the desired size.
+        self.size(size)
+
+        # Fill the cache with the keys/values. Because inserted items are
+        # moved to the top of the doubly linked list, we insert the key/value
+        # pairs in reverse order. This ensures that the order of the doubly
+        # linked list is identical to the original cache.
+        for key, value in reversed(elements):
+            self[key] = value
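+
+    # For illustration (a sketch, not part of the module): with
+    # __getstate__()/__setstate__() in place, an lrucache round-trips
+    # through pickle (and copy) with its LRU ordering intact:
+    #
+    #     import pickle
+    #     cache = lrucache(4)
+    #     for k in 'abcd':
+    #         cache[k] = k.upper()
+    #     clone = pickle.loads(pickle.dumps(cache))
+    #     assert list(clone.keys()) == list(cache.keys())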
 
 
 class WriteThroughCacheManager(object):
@@ -358,7 +428,6 @@
         return value
 
     def get(self, key, default=None):
-        """Get an item - return default (None) if not present"""
         try:
             return self[key]
         except KeyError:
@@ -394,7 +463,6 @@
         return self.store.items()
 
 
-
 class WriteBackCacheManager(object):
     def __init__(self, store, size):
         self.store = store
@@ -450,7 +518,6 @@
         return value
 
     def get(self, key, default=None):
-        """Get an item - return default (None) if not present"""
         try:
             return self[key]
         except KeyError:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/pylru-1.2.0/setup.py new/pylru-1.2.1/setup.py
--- old/pylru-1.2.0/setup.py    2019-03-13 22:53:00.000000000 +0100
+++ new/pylru-1.2.1/setup.py    2022-03-06 17:33:57.000000000 +0100
@@ -2,7 +2,7 @@
 
 setuptools.setup(
     name = "pylru",
-    version = "1.2.0",
+    version = "1.2.1",
     py_modules=['pylru'],
     description = "A least recently used (LRU) cache implementation",
     author = "Jay Hutchinson",
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/pylru-1.2.0/test.py new/pylru-1.2.1/test.py
--- old/pylru-1.2.0/test.py     2019-03-13 22:53:00.000000000 +0100
+++ new/pylru-1.2.1/test.py     2022-03-06 17:33:57.000000000 +0100
@@ -8,22 +8,18 @@
 class simplelrucache:
 
     def __init__(self, size):
-
         # Initialize the cache as empty.
         self.cache = []
         self.size = size
 
     def __contains__(self, key):
-
         for x in self.cache:
             if x[0] == key:
                 return True
 
         return False
 
-
     def __getitem__(self, key):
-
         for i in range(len(self.cache)):
             x = self.cache[i]
             if x[0] == key:
@@ -33,9 +29,7 @@
 
         raise KeyError
 
-
     def __setitem__(self, key, value):
-
         for i in range(len(self.cache)):
             x = self.cache[i]
             if x[0] == key:
@@ -49,9 +43,7 @@
 
         self.cache.append([key, value])
 
-
     def __delitem__(self, key):
-
         for i in range(len(self.cache)):
             if self.cache[i][0] == key:
                 del self.cache[i]
@@ -67,7 +59,6 @@
 
 
 def test(a, b, c, d, verify):
-
     for i in range(1000):
         x = random.randint(0, 512)
         y = random.randint(0, 512)
@@ -115,7 +106,6 @@
         assert list(zip(a.keys(), a.values())) == q2
         assert list(a.keys()) == list(a)
 
-
     a = lrucache(128)
     b = simplelrucache(128)
     verify(a, b)
@@ -138,7 +128,6 @@
 
 
 def wraptest():
-
     def verify(p, x):
         assert p == x.store
         for key, value in x.cache.items():
@@ -159,9 +148,7 @@
     test(p, x, p, x, verify)
 
 
-
 def wraptest2():
-
     def verify(p, x):
         for key, value in x.store.items():
             if key not in x.dirty:
@@ -192,7 +179,6 @@
     assert p == q
 
 def wraptest3():
-
     def verify(p, x):
         for key, value in x.store.items():
             if key not in x.dirty:
@@ -224,10 +210,8 @@
 
 
 if __name__ == '__main__':
-
     random.seed()
 
-
     for i in range(20):
         testcache()
         wraptest()
