Hello community,

here is the log from the commit of package python-lz4 for openSUSE:Factory checked in at 2018-11-08 09:47:19
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-lz4 (Old)
 and      /work/SRC/openSUSE:Factory/.python-lz4.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-lz4"

Thu Nov  8 09:47:19 2018 rev:3 rq:644635 version:2.1.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-lz4/python-lz4.changes    2018-07-21 10:25:34.354952361 +0200
+++ /work/SRC/openSUSE:Factory/.python-lz4.new/python-lz4.changes       2018-11-08 09:47:33.981165039 +0100
@@ -1,0 +2,8 @@
+Thu Oct 25 12:35:12 UTC 2018 - Tomáš Chvátal <tchva...@suse.com>
+
+- Version update to 2.1.1:
+  * fixes a bug with the block format compression/decompression
+  * fixes the handling of errors for block decompression when uncompressed_size > 0
+  * introduces a new exception, LZ4BlockError, which is raised whenever the LZ4 library fails
+
+-------------------------------------------------------------------

Old:
----
  lz4-2.0.2.tar.gz

New:
----
  lz4-2.1.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-lz4.spec ++++++
--- /var/tmp/diff_new_pack.jvHxgT/_old  2018-11-08 09:47:35.877162811 +0100
+++ /var/tmp/diff_new_pack.jvHxgT/_new  2018-11-08 09:47:35.885162801 +0100
@@ -12,14 +12,14 @@
 # license that conforms to the Open Source Definition (Version 1.9)
 # published by the Open Source Initiative.
 
-# Please submit bugfixes or comments via http://bugs.opensuse.org/
+# Please submit bugfixes or comments via https://bugs.opensuse.org/
 #
 
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 %define modname lz4
 Name:           python-%{modname}
-Version:        2.0.2
+Version:        2.1.1
 Release:        0
 Summary:        LZ4 Bindings for Python
 License:        BSD-3-Clause

++++++ lz4-2.0.2.tar.gz -> lz4-2.1.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/PKG-INFO new/lz4-2.1.1/PKG-INFO
--- old/lz4-2.0.2/PKG-INFO      2018-07-07 19:52:21.000000000 +0200
+++ new/lz4-2.1.1/PKG-INFO      2018-10-13 13:49:16.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: lz4
-Version: 2.0.2
+Version: 2.1.1
 Summary: LZ4 Bindings for Python
 Home-page: https://github.com/python-lz4/python-lz4
 Author: Jonathan Underwood
@@ -81,6 +81,7 @@
 Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
 Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
 Provides-Extra: flake8
 Provides-Extra: tests
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/docs/lz4.block.rst new/lz4-2.1.1/docs/lz4.block.rst
--- old/lz4-2.0.2/docs/lz4.block.rst    2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/docs/lz4.block.rst    2018-10-13 13:45:30.000000000 +0200
@@ -5,30 +5,95 @@
 =====================
 
 This sub-package provides the capability to compress and decompress data using
-the `block specification <http://lz4.github.io/lz4/lz4_Block_format.html>`_.
+the `block specification <https://lz4.github.io/lz4/lz4_Block_format.html>`_.
 
-Because the LZ4 block format doesn't define a container format, the Python
-bindings will by default insert the original data size as an integer at the
-start of the compressed payload, like most other bindings do (Java...). However,
-it is possible to disable this functionality.
+Because the LZ4 block format doesn't define a container format, the
+Python bindings will by default insert the original data size as an
+integer at the start of the compressed payload. However, it is
+possible to disable this functionality, and you may wish to do so for
+compatibility with other language bindings, such as the `Java bindings
+<https://github.com/lz4/lz4-java>`_.
 
 
 
 Example usage
 -------------
-To use the lz4 block format bindings is straightforward::
+To use the lz4 block format bindings is straightforward:
 
 .. doctest::
 
-    >>> import lz4.block
-    >>> import os
-    >>> input_data = 20 * 128 * os.urandom(1024)  # Read 20 * 128kb
-    >>> compressed_data = lz4.block.compress(input_data)
-    >>> output_data = lz4.block.decompress(compressed_data)
-    >>> input_data == output_data
-    True
+   >>> import lz4.block
+   >>> import os
+   >>> input_data = 20 * 128 * os.urandom(1024)  # Read 20 * 128kb
+   >>> compressed_data = lz4.block.compress(input_data)
+   >>> output_data = lz4.block.decompress(compressed_data)
+   >>> input_data == output_data
+   True
+
+In this simple example, the size of the uncompressed data is stored in
+the compressed data, and this size is then utilized when uncompressing
+the data in order to correctly size the buffer. Instead, you may want
+to not store the size of the uncompressed data to ensure compatibility
+with the `Java bindings <https://github.com/lz4/lz4-java>`_. The
+example below demonstrates how to use the block format without storing
+the size of the uncompressed data.
 
+.. doctest::
+
+   >>> import lz4.block
+   >>> data = b'0' * 255
+   >>> compressed = lz4.block.compress(data, store_size=False)
+   >>> decompressed = lz4.block.decompress(compressed, uncompressed_size=255)
+   >>> decompressed == data
+   True
+
+The `uncompressed_size` argument specifies an upper bound on the size
+of the uncompressed data rather than an exact value, so the
+following example also works.
 
+.. doctest::
+   
+   >>> import lz4.block
+   >>> data = b'0' * 255
+   >>> compressed = lz4.block.compress(data, store_size=False)
+   >>> decompressed = lz4.block.decompress(compressed, uncompressed_size=2048)
+   >>> decompressed == data
+   True
+
+A common situation is not knowing the size of the uncompressed data at
+decompression time. The following example illustrates a strategy that
+can be used in this case.
+
+.. doctest::
+   
+   >>> import lz4.block
+   >>> data = b'0' * 2048
+   >>> compressed = lz4.block.compress(data, store_size=False)
+   >>> usize = 255
+   >>> max_size = 4096
+   >>> while True:
+   ...     try:
+   ...         decompressed = lz4.block.decompress(compressed, uncompressed_size=usize)
+   ...         break
+   ...     except lz4.block.LZ4BlockError:
+   ...         usize *= 2
+   ...         if usize > max_size:
+   ...             print('Error: data too large or corrupt')
+   ...             break
+   >>> decompressed == data
+   True
+
+In this example we are catching the `lz4.block.LZ4BlockError`
+exception. This exception is raised if the LZ4 library call fails,
+which can be caused by either the buffer used to store the
+uncompressed data (as set by `usize`) being too small, or the input
+compressed data being invalid. It is not possible to distinguish the
+two cases, which is why we set an absolute upper bound (`max_size`)
+on the memory that can be allocated for the uncompressed data. If we
+did not take this precaution, the code, if passed invalid compressed
+data, would continuously try to allocate a larger and larger buffer
+for decompression until the system ran out of memory.
+   
 Contents
 ----------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4/block/__init__.py new/lz4-2.1.1/lz4/block/__init__.py
--- old/lz4-2.0.2/lz4/block/__init__.py 2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4/block/__init__.py 2018-10-13 13:45:30.000000000 +0200
@@ -1 +1 @@
-from ._block import compress, decompress  # noqa: F401
+from ._block import compress, decompress, LZ4BlockError  # noqa: F401
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4/block/_block.c new/lz4-2.1.1/lz4/block/_block.c
--- old/lz4-2.0.2/lz4/block/_block.c    2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4/block/_block.c    2018-10-13 13:45:30.000000000 +0200
@@ -89,6 +89,8 @@
   HIGH_COMPRESSION
 } compression_type;
 
+static PyObject * LZ4BlockError;
+
 static inline int
 lz4_compress_generic (int comp, char* source, char* dest, int source_size, int dest_size,
                       char* dict, int dict_size, int acceleration, int compression)
@@ -251,7 +253,7 @@
 
   if (output_size <= 0)
     {
-      PyErr_SetString (PyExc_ValueError, "Compression failed");
+      PyErr_SetString (LZ4BlockError, "Compression failed");
       PyMem_Free (dest);
       return NULL;
     }
@@ -360,7 +362,7 @@
     {
       PyBuffer_Release(&source);
       PyBuffer_Release(&dict);
-      PyErr_Format (PyExc_ValueError, "Invalid size in header: 0x%zu",
+      PyErr_Format (PyExc_ValueError, "Invalid size: 0x%zu",
                     dest_size);
       return NULL;
     }
@@ -384,14 +386,15 @@
 
   if (output_size < 0)
     {
-      PyErr_Format (PyExc_ValueError, "Corrupt input at byte %u", -output_size);
+      PyErr_Format (LZ4BlockError,
+                    "Decompression failed: corrupt input or insufficient space in destination buffer. Error code: %u",
+                    -output_size);
       PyMem_Free (dest);
       return NULL;
     }
-  else if ((size_t)output_size != dest_size)
+  else if (((size_t)output_size != dest_size) && (uncompressed_size < 0))
     {
-      /* Better to fail explicitly than to allow fishy data to pass through. */
-      PyErr_Format (PyExc_ValueError,
+      PyErr_Format (LZ4BlockError,
                    "Decompressor wrote %u bytes, but %zu bytes expected from header",
                     output_size, dest_size);
       PyMem_Free (dest);
@@ -461,14 +464,24 @@
              "Keyword Args:\n"                                          \
             "    uncompressed_size (int): If not specified or negative, the uncompressed\n" \
             "        data size is read from the start of the source block. If specified,\n" \
-             "        it is assumed that the full source data is compressed data.\n" \
+             "        it is assumed that the full source data is compressed data. If this\n" \
+             "        argument is specified, it is considered to be a maximum possible size\n" \
+             "        for the buffer used to hold the uncompressed data, and so less data\n" \
+             "        may be returned. If `uncompressed_size` is too small, `LZ4BlockError`\n" \
+             "        will be raised. By catching `LZ4BlockError` it is possible to increase\n" \
+             "        `uncompressed_size` and try again.\n"             \
             "    return_bytearray (bool): If ``False`` (the default) then the function\n" \
             "        will return a bytes object. If ``True``, then the function will\n" \
             "        return a bytearray object.\n\n" \
             "    dict (str, bytes or buffer-compatible object): If specified, perform\n" \
-             "        decompression using this initial dictionary.\n" \
+             "        decompression using this initial dictionary.\n"   \
+             "\n"                                                       \
             "Returns:\n"                                               \
-             "    bytes or bytearray: Decompressed data.\n");
+             "    bytes or bytearray: Decompressed data.\n"             \
+             "\n"                                                       \
+             "Raises:\n"                                                \
+             "    LZ4BlockError: raised if the call to the LZ4 library fails. This can be\n" \
+             "        caused by `uncompressed_size` being too small, or invalid data.\n");
 
 PyDoc_STRVAR(lz4block__doc,
              "A Python wrapper for the LZ4 block protocol"
@@ -517,5 +530,13 @@
   PyModule_AddIntConstant (module, "HC_LEVEL_OPT_MIN", LZ4HC_CLEVEL_OPT_MIN);
   PyModule_AddIntConstant (module, "HC_LEVEL_MAX", LZ4HC_CLEVEL_MAX);
 
+  LZ4BlockError = PyErr_NewExceptionWithDoc("_block.LZ4BlockError", "Call to LZ4 library failed.", NULL, NULL);
+  if (LZ4BlockError == NULL)
+    {
+      return NULL;
+    }
+  Py_INCREF(LZ4BlockError);
+  PyModule_AddObject(module, "LZ4BlockError", LZ4BlockError);
+  
   return module;
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4/version.py new/lz4-2.1.1/lz4/version.py
--- old/lz4-2.0.2/lz4/version.py        2018-07-07 19:52:20.000000000 +0200
+++ new/lz4-2.1.1/lz4/version.py        2018-10-13 13:49:16.000000000 +0200
@@ -1,4 +1,4 @@
 # coding: utf-8
 # file generated by setuptools_scm
 # don't change, don't track in version control
-version = '2.0.2'
+version = '2.1.1'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4.egg-info/PKG-INFO new/lz4-2.1.1/lz4.egg-info/PKG-INFO
--- old/lz4-2.0.2/lz4.egg-info/PKG-INFO 2018-07-07 19:52:20.000000000 +0200
+++ new/lz4-2.1.1/lz4.egg-info/PKG-INFO 2018-10-13 13:49:16.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: lz4
-Version: 2.0.2
+Version: 2.1.1
 Summary: LZ4 Bindings for Python
 Home-page: https://github.com/python-lz4/python-lz4
 Author: Jonathan Underwood
@@ -81,6 +81,7 @@
 Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
 Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
 Provides-Extra: flake8
 Provides-Extra: tests
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4.egg-info/SOURCES.txt new/lz4-2.1.1/lz4.egg-info/SOURCES.txt
--- old/lz4-2.0.2/lz4.egg-info/SOURCES.txt      2018-07-07 19:52:21.000000000 +0200
+++ new/lz4-2.1.1/lz4.egg-info/SOURCES.txt      2018-10-13 13:49:16.000000000 +0200
@@ -50,6 +50,7 @@
 py3c/py3c/py3shims.h
 py3c/py3c/tpflags.h
 tests/block/conftest.py
+tests/block/numpy_byte_array.bin
 tests/block/test_block_0.py
 tests/block/test_block_1.py
 tests/block/test_block_2.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4libs/lz4.c new/lz4-2.1.1/lz4libs/lz4.c
--- old/lz4-2.0.2/lz4libs/lz4.c 2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4libs/lz4.c 2018-10-13 13:45:30.000000000 +0200
@@ -1,6 +1,6 @@
 /*
    LZ4 - Fast LZ compression algorithm
-   Copyright (C) 2011-2017, Yann Collet.
+   Copyright (C) 2011-present, Yann Collet.
 
    BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 
@@ -297,8 +297,9 @@
 #define MINMATCH 4
 
 #define WILDCOPYLENGTH 8
-#define LASTLITERALS 5
-#define MFLIMIT (WILDCOPYLENGTH+MINMATCH)
+#define LASTLITERALS   5   /* see ../doc/lz4_Block_format.md#parsing-restrictions */
+#define MFLIMIT       12   /* see ../doc/lz4_Block_format.md#parsing-restrictions */
+#define MATCH_SAFEGUARD_DISTANCE  ((2*WILDCOPYLENGTH) - MINMATCH)   /* ensure it's possible to write 2 x wildcopyLength without overflowing output buffer */
 static const int LZ4_minLength = (MFLIMIT+1);
 
 #define KB *(1 <<10)
@@ -483,9 +484,6 @@
 typedef enum { noDict = 0, withPrefix64k, usingExtDict, usingDictCtx } dict_directive;
 typedef enum { noDictIssue = 0, dictSmall } dictIssue_directive;
 
-typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive;
-typedef enum { full = 0, partial = 1 } earlyEnd_directive;
-
 
 /*-************************************
 *  Local Utils
@@ -496,6 +494,21 @@
 int LZ4_sizeofState() { return LZ4_STREAMSIZE; }
 
 
+/*-************************************
+*  Internal Definitions used in Tests
+**************************************/
+#if defined (__cplusplus)
+extern "C" {
+#endif
+
+int LZ4_compress_forceExtDict (LZ4_stream_t* LZ4_stream, const char* source, char* dest, int inputSize);
+
+int LZ4_decompress_safe_forceExtDict(const char* in, char* out, int inSize, int outSize, const void* dict, size_t dictSize);
+
+#if defined (__cplusplus)
+}
+#endif
+
 /*-******************************
 *  Compression functions
 ********************************/
@@ -669,9 +682,9 @@
 
     /* the dictCtx currentOffset is indexed on the start of the dictionary,
      * while a dictionary in the current context precedes the currentOffset */
-    const BYTE* dictBase = dictDirective == usingDictCtx ?
-        dictionary + dictSize - dictCtx->currentOffset :
-        dictionary + dictSize - startIndex;
+    const BYTE* dictBase = (dictDirective == usingDictCtx) ?
+                            dictionary + dictSize - dictCtx->currentOffset :
+                            dictionary + dictSize - startIndex;
 
     BYTE* op = (BYTE*) dest;
     BYTE* const olimit = op + maxOutputSize;
@@ -699,7 +712,7 @@
         cctx->dictSize += (U32)inputSize;
     }
     cctx->currentOffset += (U32)inputSize;
-    cctx->tableType = tableType;
+    cctx->tableType = (U16)tableType;
 
     if (inputSize<LZ4_minLength) goto _last_literals;        /* Input too small, no compression (all literals) */
 
@@ -1370,26 +1383,32 @@
 
 
 
-/*-*****************************
-*  Decompression functions
-*******************************/
+/*-*******************************
+ *  Decompression functions
+ ********************************/
+
+typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive;
+typedef enum { decode_full_block = 0, partial_decode = 1 } earlyEnd_directive;
+
+#undef MIN
+#define MIN(a,b)    ( (a) < (b) ? (a) : (b) )
+
 /*! LZ4_decompress_generic() :
  *  This generic decompression function covers all use cases.
  *  It shall be instantiated several times, using different sets of directives.
 *  Note that it is important for performance that this function really get inlined,
  *  in order to remove useless branches during compilation optimization.
  */
-LZ4_FORCE_O2_GCC_PPC64LE
-LZ4_FORCE_INLINE int LZ4_decompress_generic(
+LZ4_FORCE_INLINE int
+LZ4_decompress_generic(
                  const char* const src,
                  char* const dst,
                  int srcSize,
                 int outputSize,         /* If endOnInput==endOnInputSize, this value is `dstCapacity` */
 
-                 int endOnInput,         /* endOnOutputSize, endOnInputSize */
-                 int partialDecoding,    /* full, partial */
-                 int targetOutputSize,   /* only used if partialDecoding==partial */
-                 int dict,               /* noDict, withPrefix64k, usingExtDict */
+                 endCondition_directive endOnInput,   /* endOnOutputSize, endOnInputSize */
+                 earlyEnd_directive partialDecoding,  /* full, partial */
+                 dict_directive dict,                 /* noDict, withPrefix64k, usingExtDict */
                 const BYTE* const lowPrefix,  /* always <= dst, == dst when no prefix */
                  const BYTE* const dictStart,  /* only if dict==usingExtDict */
                  const size_t dictSize         /* note : = 0 if noDict */
@@ -1401,7 +1420,6 @@
     BYTE* op = (BYTE*) dst;
     BYTE* const oend = op + outputSize;
     BYTE* cpy;
-    BYTE* oexit = op + targetOutputSize;
 
     const BYTE* const dictEnd = (const BYTE*)dictStart + dictSize;
     const unsigned inc32table[8] = {0, 1, 2,  1,  0,  4, 4, 4};
@@ -1414,12 +1432,13 @@
     const BYTE* const shortiend = iend - (endOnInput ? 14 : 8) /*maxLL*/ - 2 /*offset*/;
     const BYTE* const shortoend = oend - (endOnInput ? 14 : 8) /*maxLL*/ - 18 /*maxML*/;
 
-    DEBUGLOG(5, "LZ4_decompress_generic (srcSize:%i)", srcSize);
+    DEBUGLOG(5, "LZ4_decompress_generic (srcSize:%i, dstSize:%i)", srcSize, outputSize);
 
     /* Special cases */
-    if ((partialDecoding) && (oexit > oend-MFLIMIT)) oexit = oend-MFLIMIT;                      /* targetOutputSize too high => just decode everything */
+    assert(lowPrefix <= op);
+    assert(src != NULL);
     if ((endOnInput) && (unlikely(outputSize==0))) return ((srcSize==1) && (*ip==0)) ? 0 : -1;  /* Empty output buffer */
-    if ((!endOnInput) && (unlikely(outputSize==0))) return (*ip==0?1:-1);
+    if ((!endOnInput) && (unlikely(outputSize==0))) return (*ip==0 ? 1 : -1);
     if ((endOnInput) && unlikely(srcSize==0)) return -1;
 
     /* Main Loop : decode sequences */
@@ -1428,7 +1447,7 @@
         size_t offset;
 
         unsigned const token = *ip++;
-        size_t length = token >> ML_BITS; /* literal length */
+        size_t length = token >> ML_BITS;  /* literal length */
 
         assert(!endOnInput || ip <= iend); /* ip < iend before the increment */
 
@@ -1453,6 +1472,7 @@
             length = token & ML_MASK; /* match length */
             offset = LZ4_readLE16(ip); ip += 2;
             match = op - offset;
+            assert(match <= op); /* check overflow */
 
             /* Do not deal with overlapping matches. */
             if ( (length != ML_MASK)
@@ -1486,11 +1506,12 @@
 
         /* copy literals */
         cpy = op+length;
-        if ( ((endOnInput) && ((cpy>(partialDecoding?oexit:oend-MFLIMIT)) || (ip+length>iend-(2+1+LASTLITERALS))) )
-            || ((!endOnInput) && (cpy>oend-WILDCOPYLENGTH)) )
+        LZ4_STATIC_ASSERT(MFLIMIT >= WILDCOPYLENGTH);
+        if ( ((endOnInput) && ((cpy>oend-MFLIMIT) || (ip+length>iend-(2+1+LASTLITERALS))) )
+          || ((!endOnInput) && (cpy>oend-WILDCOPYLENGTH)) )
         {
             if (partialDecoding) {
-                if (cpy > oend) goto _output_error;                            /* Error : write attempt beyond end of output buffer */
+                if (cpy > oend) { cpy = oend; length = oend-op; }              /* Partial decoding : stop in the middle of literal segment */
                 if ((endOnInput) && (ip+length > iend)) goto _output_error;    /* Error : read attempt beyond end of input buffer */
             } else {
                 if ((!endOnInput) && (cpy != oend)) goto _output_error;        /* Error : block decoding must stop exactly there */
@@ -1499,10 +1520,15 @@
             memcpy(op, ip, length);
             ip += length;
             op += length;
-            break;     /* Necessarily EOF, due to parsing restrictions */
+            if (!partialDecoding || (cpy == oend)) {
+                /* Necessarily EOF, due to parsing restrictions */
+                break;
+            }
+
+        } else {
+            LZ4_wildCopy(op, ip, cpy);   /* may overwrite up to WILDCOPYLENGTH beyond cpy */
+            ip += length; op = cpy;
         }
-        LZ4_wildCopy(op, ip, cpy);
-        ip += length; op = cpy;
 
         /* get offset */
         offset = LZ4_readLE16(ip); ip+=2;
@@ -1513,7 +1539,11 @@
 
 _copy_match:
         if ((checkOffset) && (unlikely(match + dictSize < lowPrefix))) goto _output_error;   /* Error : offset outside buffers */
-        LZ4_write32(op, (U32)offset);   /* costs ~1%; silence an msan warning when offset==0 */
+        if (!partialDecoding) {
+            assert(oend > op);
+            assert(oend - op >= 4);
+            LZ4_write32(op, 0);   /* silence an msan warning when offset==0; costs <1%; */
+        }   /* note : when partialDecoding, there is no guarantee that at least 4 bytes remain available in output buffer */
 
         if (length == ML_MASK) {
             unsigned s;
@@ -1526,21 +1556,24 @@
         }
         length += MINMATCH;
 
-        /* check external dictionary */
+        /* match starting within external dictionary */
         if ((dict==usingExtDict) && (match < lowPrefix)) {
-            if (unlikely(op+length > oend-LASTLITERALS)) goto _output_error;   /* doesn't respect parsing restriction */
+            if (unlikely(op+length > oend-LASTLITERALS)) {
+                if (partialDecoding) length = MIN(length, (size_t)(oend-op));
+                else goto _output_error;   /* doesn't respect parsing restriction */
+            }
 
             if (length <= (size_t)(lowPrefix-match)) {
-                /* match can be copied as a single segment from external dictionary */
+                /* match fits entirely within external dictionary : just copy */
                 memmove(op, dictEnd - (lowPrefix-match), length);
                 op += length;
             } else {
-                /* match encompass external dictionary and current block */
+                /* match stretches into both external dictionary and current block */
+                size_t const copySize = (size_t)(lowPrefix - match);
                 size_t const restSize = length - copySize;
                 memcpy(op, dictEnd - copySize, copySize);
                 op += copySize;
-                if (restSize > (size_t)(op-lowPrefix)) {  /* overlap copy */
+                if (restSize > (size_t)(op - lowPrefix)) {  /* overlap copy */
                     BYTE* const endOfMatch = op + restSize;
                     const BYTE* copyFrom = lowPrefix;
                     while (op < endOfMatch) *op++ = *copyFrom++;
@@ -1553,6 +1586,23 @@
 
         /* copy match within block */
         cpy = op + length;
+
+        /* partialDecoding : may not respect endBlock parsing restrictions */
+        assert(op<=oend);
+        if (partialDecoding && (cpy > oend-MATCH_SAFEGUARD_DISTANCE)) {
+            size_t const mlen = MIN(length, (size_t)(oend-op));
+            const BYTE* const matchEnd = match + mlen;
+            BYTE* const copyEnd = op + mlen;
+            if (matchEnd > op) {   /* overlap copy */
+                while (op < copyEnd) *op++ = *match++;
+            } else {
+                memcpy(op, match, mlen);
+            }
+            op = copyEnd;
+            if (op==oend) break;
+            continue;
+        }
+
         if (unlikely(offset<8)) {
             op[0] = match[0];
             op[1] = match[1];
@@ -1561,23 +1611,26 @@
             match += inc32table[offset];
             memcpy(op+4, match, 4);
             match -= dec64table[offset];
-        } else { memcpy(op, match, 8); match+=8; }
+        } else {
+            memcpy(op, match, 8);
+            match += 8;
+        }
         op += 8;
 
-        if (unlikely(cpy>oend-12)) {
-            BYTE* const oCopyLimit = oend-(WILDCOPYLENGTH-1);
+        if (unlikely(cpy > oend-MATCH_SAFEGUARD_DISTANCE)) {
+            BYTE* const oCopyLimit = oend - (WILDCOPYLENGTH-1);
             if (cpy > oend-LASTLITERALS) goto _output_error;    /* Error : last LASTLITERALS bytes must be literals (uncompressed) */
             if (op < oCopyLimit) {
                 LZ4_wildCopy(op, match, oCopyLimit);
                 match += oCopyLimit - op;
                 op = oCopyLimit;
             }
-            while (op<cpy) *op++ = *match++;
+            while (op < cpy) *op++ = *match++;
         } else {
             memcpy(op, match, 8);
-            if (length>16) LZ4_wildCopy(op+8, match+8, cpy);
+            if (length > 16) LZ4_wildCopy(op+8, match+8, cpy);
         }
-        op = cpy;   /* correction */
+        op = cpy;   /* wildcopy correction */
     }
 
     /* end of decoding */
@@ -1598,23 +1651,24 @@
 int LZ4_decompress_safe(const char* source, char* dest, int compressedSize, int maxDecompressedSize)
 {
     return LZ4_decompress_generic(source, dest, compressedSize, maxDecompressedSize,
-                                  endOnInputSize, full, 0, noDict,
+                                  endOnInputSize, decode_full_block, noDict,
                                   (BYTE*)dest, NULL, 0);
 }
 
 LZ4_FORCE_O2_GCC_PPC64LE
-int LZ4_decompress_safe_partial(const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize)
+int LZ4_decompress_safe_partial(const char* src, char* dst, int compressedSize, int targetOutputSize, int dstCapacity)
 {
-    return LZ4_decompress_generic(source, dest, compressedSize, maxDecompressedSize,
-                                  endOnInputSize, partial, targetOutputSize,
-                                  noDict, (BYTE*)dest, NULL, 0);
+    dstCapacity = MIN(targetOutputSize, dstCapacity);
+    return LZ4_decompress_generic(src, dst, compressedSize, dstCapacity,
+                                  endOnInputSize, partial_decode,
+                                  noDict, (BYTE*)dst, NULL, 0);
 }
 
 LZ4_FORCE_O2_GCC_PPC64LE
 int LZ4_decompress_fast(const char* source, char* dest, int originalSize)
 {
     return LZ4_decompress_generic(source, dest, 0, originalSize,
-                                  endOnOutputSize, full, 0, withPrefix64k,
+                                  endOnOutputSize, decode_full_block, withPrefix64k,
                                   (BYTE*)dest - 64 KB, NULL, 0);
 }
 
@@ -1624,7 +1678,7 @@
 int LZ4_decompress_safe_withPrefix64k(const char* source, char* dest, int compressedSize, int maxOutputSize)
 {
     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
-                                  endOnInputSize, full, 0, withPrefix64k,
+                                  endOnInputSize, decode_full_block, withPrefix64k,
                                   (BYTE*)dest - 64 KB, NULL, 0);
 }
 
@@ -1641,17 +1695,17 @@
                                                size_t prefixSize)
 {
     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
-                                  endOnInputSize, full, 0, noDict,
+                                  endOnInputSize, decode_full_block, noDict,
                                   (BYTE*)dest-prefixSize, NULL, 0);
 }
 
-LZ4_FORCE_O2_GCC_PPC64LE /* Exported under another name, for tests/fullbench.c */
-#define LZ4_decompress_safe_extDict LZ4_decompress_safe_forceExtDict
-int LZ4_decompress_safe_extDict(const char* source, char* dest, int compressedSize, int maxOutputSize,
-                                const void* dictStart, size_t dictSize)
+LZ4_FORCE_O2_GCC_PPC64LE
+int LZ4_decompress_safe_forceExtDict(const char* source, char* dest,
+                                     int compressedSize, int maxOutputSize,
+                                     const void* dictStart, size_t dictSize)
 {
     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
-                                  endOnInputSize, full, 0, usingExtDict,
+                                  endOnInputSize, decode_full_block, usingExtDict,
                                   (BYTE*)dest, (const BYTE*)dictStart, dictSize);
 }
 
@@ -1660,7 +1714,7 @@
                                        const void* dictStart, size_t dictSize)
 {
     return LZ4_decompress_generic(source, dest, 0, originalSize,
-                                  endOnOutputSize, full, 0, usingExtDict,
+                                  endOnOutputSize, decode_full_block, usingExtDict,
                                   (BYTE*)dest, (const BYTE*)dictStart, dictSize);
 }
 
@@ -1673,7 +1727,7 @@
                                    size_t prefixSize, const void* dictStart, size_t dictSize)
 {
     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
-                                  endOnInputSize, full, 0, usingExtDict,
+                                  endOnInputSize, decode_full_block, usingExtDict,
                                   (BYTE*)dest-prefixSize, (const BYTE*)dictStart, dictSize);
 }
 
@@ -1682,7 +1736,7 @@
                                    size_t prefixSize, const void* dictStart, size_t dictSize)
 {
     return LZ4_decompress_generic(source, dest, 0, originalSize,
-                                  endOnOutputSize, full, 0, usingExtDict,
+                                  endOnOutputSize, decode_full_block, usingExtDict,
                                   (BYTE*)dest-prefixSize, (const BYTE*)dictStart, dictSize);
 }
 
@@ -1773,8 +1827,8 @@
         /* The buffer wraps around, or they're switching to another buffer. */
         lz4sd->extDictSize = lz4sd->prefixSize;
         lz4sd->externalDict = lz4sd->prefixEnd - lz4sd->extDictSize;
-        result = LZ4_decompress_safe_extDict(source, dest, compressedSize, maxOutputSize,
-                                             lz4sd->externalDict, lz4sd->extDictSize);
+        result = LZ4_decompress_safe_forceExtDict(source, dest, compressedSize, maxOutputSize,
+                                                  lz4sd->externalDict, lz4sd->extDictSize);
         if (result <= 0) return result;
         lz4sd->prefixSize = result;
         lz4sd->prefixEnd  = (BYTE*)dest + result;
@@ -1834,7 +1888,7 @@
             return LZ4_decompress_safe_withPrefix64k(source, dest, compressedSize, maxOutputSize);
         return LZ4_decompress_safe_withSmallPrefix(source, dest, compressedSize, maxOutputSize, dictSize);
     }
-    return LZ4_decompress_safe_extDict(source, dest, compressedSize, maxOutputSize, dictStart, dictSize);
+    return LZ4_decompress_safe_forceExtDict(source, dest, compressedSize, maxOutputSize, dictStart, dictSize);
 }
 
 int LZ4_decompress_fast_usingDict(const char* source, char* dest, int originalSize, const char* dictStart, int dictSize)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4libs/lz4.h new/lz4-2.1.1/lz4libs/lz4.h
--- old/lz4-2.0.2/lz4libs/lz4.h 2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4libs/lz4.h 2018-10-13 13:45:30.000000000 +0200
@@ -1,7 +1,7 @@
 /*
  *  LZ4 - Fast LZ compression algorithm
  *  Header File
- *  Copyright (C) 2011-2017, Yann Collet.
+ *  Copyright (C) 2011-present, Yann Collet.
 
    BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 
@@ -46,7 +46,7 @@
 /**
   Introduction
 
-  LZ4 is lossless compression algorithm, providing compression speed at 400 MB/s per core,
+  LZ4 is lossless compression algorithm, providing compression speed at 500 MB/s per core,
   scalable with multi-cores CPU. It features an extremely fast decoder, with speed in
   multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.
 
@@ -62,8 +62,8 @@
 
   An additional format, called LZ4 frame specification (doc/lz4_Frame_format.md),
   take care of encoding standard metadata alongside LZ4-compressed blocks.
-  If your application requires interoperability, it's recommended to use it.
-  A library is provided to take care of it, see lz4frame.h.
+  Frame format is required for interoperability.
+  It is delivered through a companion API, declared in lz4frame.h.
 */
 
 /*^***************************************************************
@@ -93,7 +93,7 @@
 /*------   Version   ------*/
 #define LZ4_VERSION_MAJOR    1    /* for breaking interface changes  */
 #define LZ4_VERSION_MINOR    8    /* for new (non-breaking) interface capabilities */
-#define LZ4_VERSION_RELEASE  2    /* for tweaks, bug-fixes, or development */
+#define LZ4_VERSION_RELEASE  3    /* for tweaks, bug-fixes, or development */
 
 #define LZ4_VERSION_NUMBER (LZ4_VERSION_MAJOR *100*100 + LZ4_VERSION_MINOR *100 + LZ4_VERSION_RELEASE)
 
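The release bump above feeds into the derived version constant. A minimal sketch (pure Python, the function name is mine) of how the LZ4_VERSION_NUMBER macro composes it:

```python
def lz4_version_number(major, minor, release):
    # Mirrors the C macro: MAJOR *100*100 + MINOR *100 + RELEASE
    return major * 100 * 100 + minor * 100 + release

# The bundled library moves from 1.8.2 to 1.8.3 in this update.
print(lz4_version_number(1, 8, 2))  # 10802
print(lz4_version_number(1, 8, 3))  # 10803
```

A single integer like this makes compile-time version checks (`#if LZ4_VERSION_NUMBER >= 10803`) a plain comparison.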
@@ -183,55 +183,72 @@
     Same compression function, just using an externally allocated memory space to store compression state.
     Use LZ4_sizeofState() to know how much memory must be allocated,
     and allocate it on 8-bytes boundaries (using malloc() typically).
-    Then, provide it as 'void* state' to compression function.
+    Then, provide this buffer as 'void* state' to compression function.
 */
 LZ4LIB_API int LZ4_sizeofState(void);
 LZ4LIB_API int LZ4_compress_fast_extState (void* state, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration);
 
 
-/*!
-LZ4_compress_destSize() :
-    Reverse the logic : compresses as much data as possible from 'src' buffer
-    into already allocated buffer 'dst' of size 'targetDestSize'.
-    This function either compresses the entire 'src' content into 'dst' if it's large enough,
-    or fill 'dst' buffer completely with as much data as possible from 'src'.
-        *srcSizePtr : will be modified to indicate how many bytes where read from 'src' to fill 'dst'.
-                      New value is necessarily <= old value.
-        return : Nb bytes written into 'dst' (necessarily <= targetDestSize)
-                 or 0 if compression fails
+/*! LZ4_compress_destSize() :
+ *  Reverse the logic : compresses as much data as possible from 'src' buffer
+ *  into already allocated buffer 'dst', of size >= 'targetDestSize'.
+ *  This function either compresses the entire 'src' content into 'dst' if it's large enough,
+ *  or fill 'dst' buffer completely with as much data as possible from 'src'.
+ *  note: acceleration parameter is fixed to "default".
+ *
+ * *srcSizePtr : will be modified to indicate how many bytes where read from 'src' to fill 'dst'.
+ *               New value is necessarily <= input value.
+ * @return : Nb bytes written into 'dst' (necessarily <= targetDestSize)
+ *           or 0 if compression fails.
 */
 LZ4LIB_API int LZ4_compress_destSize (const char* src, char* dst, int* srcSizePtr, int targetDstSize);
 
 
-/*!
-LZ4_decompress_fast() : **unsafe!**
-This function is a bit faster than LZ4_decompress_safe(),
-but it may misbehave on malformed input because it doesn't perform full validation of compressed data.
-    originalSize : is the uncompressed size to regenerate
-                   Destination buffer must be already allocated, and its size must be >= 'originalSize' bytes.
-    return : number of bytes read from source buffer (== compressed size).
-             If the source stream is detected malformed, the function stops decoding and return a negative result.
-    note : This function is only usable if the originalSize of uncompressed data is known in advance.
-           The caller should also check that all the compressed input has been consumed properly,
-           i.e. that the return value matches the size of the buffer with compressed input.
-           The function never writes past the output buffer.  However, since it doesn't know its 'src' size,
-           it may read past the intended input.  Also, because match offsets are not validated during decoding,
-           reads from 'src' may underflow.  Use this function in trusted environment **only**.
-*/
+/*! LZ4_decompress_fast() : **unsafe!**
+ *  This function used to be a bit faster than LZ4_decompress_safe(),
+ *  though situation has changed in recent versions,
+ *  and now `LZ4_decompress_safe()` can be as fast and sometimes faster than `LZ4_decompress_fast()`.
+ *  Moreover, LZ4_decompress_fast() is not protected vs malformed input, as it doesn't perform full validation of compressed data.
+ *  As a consequence, this function is no longer recommended, and may be deprecated in future versions.
+ *  It's only remaining specificity is that it can decompress data without knowing its compressed size.
+ *
+ *  originalSize : is the uncompressed size to regenerate.
+ *                 `dst` must be already allocated, its size must be >= 'originalSize' bytes.
+ * @return : number of bytes read from source buffer (== compressed size).
+ *           If the source stream is detected malformed, the function stops decoding and returns a negative result.
+ *  note : This function requires uncompressed originalSize to be known in advance.
+ *         The function never writes past the output buffer.
+ *         However, since it doesn't know its 'src' size, it may read past the intended input.
+ *         Also, because match offsets are not validated during decoding,
+ *         reads from 'src' may underflow.
+ *         Use this function in trusted environment **only**.
+ */
 LZ4LIB_API int LZ4_decompress_fast (const char* src, char* dst, int originalSize);
 
-/*!
-LZ4_decompress_safe_partial() :
-    This function decompress a compressed block of size 'srcSize' at position 'src'
-    into destination buffer 'dst' of size 'dstCapacity'.
-    The function will decompress a minimum of 'targetOutputSize' bytes, and stop after that.
-    However, it's not accurate, and may write more than 'targetOutputSize' (but always <= dstCapacity).
-   @return : the number of bytes decoded in the destination buffer (necessarily <= dstCapacity)
-        Note : this number can also be < targetOutputSize, if compressed block contains less data.
-            Therefore, always control how many bytes were decoded.
-            If source stream is detected malformed, function returns a negative result.
-            This function is protected against malicious data packets.
-*/
+/*! LZ4_decompress_safe_partial() :
+ *  Decompress an LZ4 compressed block, of size 'srcSize' at position 'src',
+ *  into destination buffer 'dst' of size 'dstCapacity'.
+ *  Up to 'targetOutputSize' bytes will be decoded.
+ *  The function stops decoding on reaching this objective,
+ *  which can boost performance when only the beginning of a block is required.
+ *
+ * @return : the number of bytes decoded in `dst` (necessarily <= dstCapacity)
+ *           If source stream is detected malformed, function returns a negative result.
+ *
+ *  Note : @return can be < targetOutputSize, if compressed block contains less data.
+ *
+ *  Note 2 : this function features 2 parameters, targetOutputSize and dstCapacity,
+ *           and expects targetOutputSize <= dstCapacity.
+ *           It effectively stops decoding on reaching targetOutputSize,
+ *           so dstCapacity is kind of redundant.
+ *           This is because in a previous version of this function,
+ *           decoding operation would not "break" a sequence in the middle.
+ *           As a consequence, there was no guarantee that decoding would stop at exactly targetOutputSize,
+ *           it could write more bytes, though only up to dstCapacity.
+ *           Some "margin" used to be required for this operation to work properly.
+ *           This is no longer necessary.
+ *           The function nonetheless keeps its signature, in an effort to not break API.
+ */
 LZ4LIB_API int LZ4_decompress_safe_partial (const char* src, char* dst, int srcSize, int targetOutputSize, int dstCapacity);
 
 
@@ -266,16 +283,23 @@
  *  'dst' buffer must be already allocated.
  *  If dstCapacity >= LZ4_compressBound(srcSize), compression is guaranteed to succeed, and runs faster.
  *
- *  Important : The previous 64KB of compressed data is assumed to remain present and unmodified in memory!
- *
- *  Special 1 : When input is a double-buffer, they can have any size, including < 64 KB.
- *              Make sure that buffers are separated by at least one byte.
- *              This way, each block only depends on previous block.
- *  Special 2 : If input buffer is a ring-buffer, it can have any size, including < 64 KB.
- *
  * @return : size of compressed block
  *           or 0 if there is an error (typically, cannot fit into 'dst').
- *  After an error, the stream status is invalid, it can only be reset or freed.
+ *
+ *  Note 1 : Each invocation to LZ4_compress_fast_continue() generates a new block.
+ *           Each block has precise boundaries.
+ *           It's not possible to append blocks together and expect a single invocation of LZ4_decompress_*() to decompress them together.
+ *           Each block must be decompressed separately, calling LZ4_decompress_*() with associated metadata.
+ *
+ *  Note 2 : The previous 64KB of source data is __assumed__ to remain present, unmodified, at same address in memory!
+ *
+ *  Note 3 : When input is structured as a double-buffer, each buffer can have any size, including < 64 KB.
+ *           Make sure that buffers are separated, by at least one byte.
+ *           This construction ensures that each block only depends on previous block.
+ *
+ *  Note 4 : If input buffer is a ring-buffer, it can have any size, including < 64 KB.
+ *
+ *  Note 5 : After an error, the stream status is invalid, it can only be reset or freed.
  */
 LZ4LIB_API int LZ4_compress_fast_continue (LZ4_stream_t* streamPtr, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration);
 
@@ -305,7 +329,7 @@
 /*! LZ4_setStreamDecode() :
  *  An LZ4_streamDecode_t context can be allocated once and re-used multiple times.
  *  Use this function to start decompression of a new stream of blocks.
- *  A dictionary can optionnally be set. Use NULL or size 0 for a reset order.
+ *  A dictionary can optionally be set. Use NULL or size 0 for a reset order.
  *  Dictionary is presumed stable : it must remain accessible and unmodified during next decompression.
  * @return : 1 if OK, 0 if error
  */
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4libs/lz4frame.c new/lz4-2.1.1/lz4libs/lz4frame.c
--- old/lz4-2.0.2/lz4libs/lz4frame.c    2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4libs/lz4frame.c    2018-10-13 13:45:30.000000000 +0200
@@ -738,7 +738,7 @@
 
 static int LZ4F_compressBlock(void* ctx, const char* src, char* dst, int srcSize, int dstCapacity, int level, const LZ4F_CDict* cdict)
 {
-    int const acceleration = (level < -1) ? -level : 1;
+    int const acceleration = (level < 0) ? -level + 1 : 1;
     LZ4F_initStream(ctx, cdict, level, LZ4F_blockIndependent);
     if (cdict) {
         return LZ4_compress_fast_continue((LZ4_stream_t*)ctx, src, dst, srcSize, dstCapacity, acceleration);
@@ -749,7 +749,7 @@
 
 static int LZ4F_compressBlock_continue(void* ctx, const char* src, char* dst, int srcSize, int dstCapacity, int level, const LZ4F_CDict* cdict)
 {
-    int const acceleration = (level < -1) ? -level : 1;
+    int const acceleration = (level < 0) ? -level + 1 : 1;
     (void)cdict; /* init once at beginning of frame */
     return LZ4_compress_fast_continue((LZ4_stream_t*)ctx, src, dst, srcSize, dstCapacity, acceleration);
 }
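The two hunks above change how negative frame compression levels map to block-level acceleration: previously level -1 fell through to the default, and only levels below -1 selected `-level`; now every negative level selects `-level + 1`. A small sketch of both mappings (pure Python, function names mine):

```python
def acceleration_old(level):
    # Before the patch: only levels below -1 selected fast acceleration.
    return -level if level < -1 else 1

def acceleration_new(level):
    # After the patch: every negative level selects fast acceleration,
    # offset by one so that level -1 is already faster than the default.
    return -level + 1 if level < 0 else 1

for level in (1, 0, -1, -2, -5):
    print(level, acceleration_old(level), acceleration_new(level))
```

The practical effect is that level -1 now behaves as a genuine "fast" setting (acceleration 2) instead of silently being treated as the default.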
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4libs/lz4frame.h new/lz4-2.1.1/lz4libs/lz4frame.h
--- old/lz4-2.0.2/lz4libs/lz4frame.h    2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4libs/lz4frame.h    2018-10-13 13:45:30.000000000 +0200
@@ -33,9 +33,10 @@
 */
 
 /* LZ4F is a stand-alone API to create LZ4-compressed frames
- * conformant with specification v1.5.1.
+ * conformant with specification v1.6.1.
  * It also offers streaming capabilities.
- * lz4.h is not required when using lz4frame.h.
+ * lz4.h is not required when using lz4frame.h,
+ * except to get constant such as LZ4_VERSION_NUMBER.
  * */
 
 #ifndef LZ4F_H_09782039843
@@ -159,8 +160,9 @@
 
 /*! LZ4F_frameInfo_t :
  *  makes it possible to set or read frame parameters.
- *  It's not required to set all fields, as long as the structure was initially memset() to zero.
- *  For all fields, 0 sets it to default value */
+ *  Structure must be first init to 0, using memset() or LZ4F_INIT_FRAMEINFO,
+ *  setting all parameters to default.
+ *  It's then possible to update selectively some parameters */
 typedef struct {
   LZ4F_blockSizeID_t     blockSizeID;         /* max64KB, max256KB, max1MB, max4MB; 0 == default */
   LZ4F_blockMode_t       blockMode;           /* LZ4F_blockLinked, LZ4F_blockIndependent; 0 == default */
@@ -171,24 +173,30 @@
   LZ4F_blockChecksum_t   blockChecksumFlag;   /* 1: each block followed by a checksum of block's compressed data; 0: disabled (default) */
 } LZ4F_frameInfo_t;
 
+#define LZ4F_INIT_FRAMEINFO   { 0, 0, 0, 0, 0, 0, 0 }    /* v1.8.3+ */
+
 /*! LZ4F_preferences_t :
- *  makes it possible to supply detailed compression parameters to the stream interface.
- *  Structure is presumed initially memset() to zero, representing default settings.
+ *  makes it possible to supply advanced compression instructions to streaming interface.
+ *  Structure must be first init to 0, using memset() or LZ4F_INIT_PREFERENCES,
+ *  setting all parameters to default.
  *  All reserved fields must be set to zero. */
 typedef struct {
   LZ4F_frameInfo_t frameInfo;
   int      compressionLevel;    /* 0: default (fast mode); values > LZ4HC_CLEVEL_MAX count as LZ4HC_CLEVEL_MAX; values < 0 trigger "fast acceleration" */
-  unsigned autoFlush;           /* 1: always flush, to reduce usage of internal buffers */
-  unsigned favorDecSpeed;       /* 1: parser favors decompression speed vs compression ratio. Only works for high compression modes (>= LZ4LZ4HC_CLEVEL_OPT_MIN) */  /* >= v1.8.2 */
+  unsigned autoFlush;           /* 1: always flush; reduces usage of internal buffers */
+  unsigned favorDecSpeed;       /* 1: parser favors decompression speed vs compression ratio. Only works for high compression modes (>= LZ4HC_CLEVEL_OPT_MIN) */  /* v1.8.2+ */
   unsigned reserved[3];         /* must be zero for forward compatibility */
 } LZ4F_preferences_t;
 
-LZ4FLIB_API int LZ4F_compressionLevel_max(void);
+#define LZ4F_INIT_PREFERENCES   { LZ4F_INIT_FRAMEINFO, 0, 0, 0, { 0, 0, 0 } }    /* v1.8.3+ */
 
 
 /*-*********************************
 *  Simple compression function
 ***********************************/
+
+LZ4FLIB_API int LZ4F_compressionLevel_max(void);
+
 /*! LZ4F_compressFrameBound() :
  *  Returns the maximum possible compressed size with LZ4F_compressFrame() given srcSize and preferences.
  * `preferencesPtr` is optional. It can be replaced by NULL, in which case, the function will assume default preferences.
@@ -222,8 +230,9 @@
 
 /*---   Resource Management   ---*/
 
-#define LZ4F_VERSION 100
+#define LZ4F_VERSION 100    /* This number can be used to check for an incompatible API breaking change */
 LZ4FLIB_API unsigned LZ4F_getVersion(void);
+
 /*! LZ4F_createCompressionContext() :
  * The first thing to do is to create a compressionContext object, which will be used in all compression operations.
  * This is achieved using LZ4F_createCompressionContext(), which takes as argument a version.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4libs/lz4hc.c new/lz4-2.1.1/lz4libs/lz4hc.c
--- old/lz4-2.0.2/lz4libs/lz4hc.c       2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4libs/lz4hc.c       2018-10-13 13:45:30.000000000 +0200
@@ -327,6 +327,8 @@
                             if (lookBackLength==0) {  /* no back possible */
                                 size_t const maxML = MIN(currentSegmentLength, srcPatternLength);
                                 if ((size_t)longest < maxML) {
+                                    assert(base + matchIndex < ip);
+                                    if (ip - (base+matchIndex) > MAX_DISTANCE) break;
                                     assert(maxML < 2 GB);
                                     longest = (int)maxML;
                                     *matchpos = base + matchIndex;   /* virtual pos, relative to ip, to retrieve offset */
@@ -450,6 +452,7 @@
     *op += length;
 
     /* Encode Offset */
+    assert( (*ip - match) <= MAX_DISTANCE );   /* note : consider providing offset as a value, rather than as a pointer difference */
     LZ4_writeLE16(*op, (U16)(*ip-match)); *op += 2;
 
     /* Encode MatchLength */
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/lz4libs/lz4hc.h new/lz4-2.1.1/lz4libs/lz4hc.h
--- old/lz4-2.0.2/lz4libs/lz4hc.h       2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/lz4libs/lz4hc.h       2018-10-13 13:45:30.000000000 +0200
@@ -246,6 +246,10 @@
 #ifndef LZ4_HC_SLO_098092834
 #define LZ4_HC_SLO_098092834
 
+#if defined (__cplusplus)
+extern "C" {
+#endif
+
 /*! LZ4_compress_HC_destSize() : v1.8.0 (experimental)
  *  Will try to compress as much data from `src` as possible
  *  that can fit into `targetDstSize` budget.
@@ -343,5 +347,9 @@
  */
 LZ4LIB_API void LZ4_attach_HC_dictionary(LZ4_streamHC_t *working_stream, const LZ4_streamHC_t *dictionary_stream);
 
+#if defined (__cplusplus)
+}
+#endif
+
 #endif   /* LZ4_HC_SLO_098092834 */
 #endif   /* LZ4_HC_STATIC_LINKING_ONLY */
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/setup.py new/lz4-2.1.1/setup.py
--- old/lz4-2.0.2/setup.py      2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/setup.py      2018-10-13 13:45:30.000000000 +0200
@@ -185,5 +185,6 @@
         'Programming Language :: Python :: 3.4',
         'Programming Language :: Python :: 3.5',
         'Programming Language :: Python :: 3.6',
+        'Programming Language :: Python :: 3.7',
     ],
 )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/tests/block/conftest.py new/lz4-2.1.1/tests/block/conftest.py
--- old/lz4-2.0.2/tests/block/conftest.py       2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/tests/block/conftest.py       2018-10-13 13:45:30.000000000 +0200
@@ -9,7 +9,9 @@
     (b'0' * 8 * 1024),
     (bytearray(b'')),
     (bytearray(os.urandom(8 * 1024))),
+    (bytearray(open(os.path.join(os.path.dirname(__file__), 'numpy_byte_array.bin'), 'rb').read()))
 ]
+
 if sys.version_info > (2, 7):
     test_data += [
         (memoryview(b'')),
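The fixture above runs every block test over several buffer types (bytes, bytearray, and on Python > 2.7 also memoryview), now including the new numpy_byte_array.bin sample. A stdlib-only sketch of the same round-trip parametrization idea, using zlib as a stand-in codec so the sketch runs without lz4 installed:

```python
import os
import sys
import zlib  # stand-in codec; the real suite exercises lz4.block

# Mirror the kinds of inputs the conftest parametrizes over.
test_data = [
    b'',
    b'0' * 8 * 1024,
    bytearray(b''),
    bytearray(os.urandom(8 * 1024)),
]
if sys.version_info > (2, 7):
    # memoryview inputs are only parametrized on newer Pythons.
    test_data += [memoryview(b''), memoryview(bytes(os.urandom(64)))]

for buf in test_data:
    raw = bytes(buf)
    # Every buffer type must survive a compress/decompress round trip.
    assert zlib.decompress(zlib.compress(raw)) == raw
```

The point of the fixture is that the C extension must accept any bytes-like object, not just `bytes`.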
Binary files old/lz4-2.0.2/tests/block/numpy_byte_array.bin and new/lz4-2.1.1/tests/block/numpy_byte_array.bin differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lz4-2.0.2/tests/block/test_block_1.py new/lz4-2.1.1/tests/block/test_block_1.py
--- old/lz4-2.0.2/tests/block/test_block_1.py   2018-07-07 19:47:33.000000000 +0200
+++ new/lz4-2.1.1/tests/block/test_block_1.py   2018-10-13 13:45:30.000000000 +0200
@@ -14,10 +14,17 @@
     # Verify that hand-crafted packet does not leak uninitialized(?) memory.
     data = lz4.block.compress(b'A' * 64)
     message = r'^Decompressor wrote 64 bytes, but 79 bytes expected from header$'
-    with pytest.raises(ValueError, match=message):
+    with pytest.raises(lz4.block.LZ4BlockError, match=message):
         lz4.block.decompress(b'\x4f' + data[1:])
-    with pytest.raises(ValueError, match=message):
-        lz4.block.decompress(data[4:], uncompressed_size=79)
+
+
+def test_decompress_with_small_buffer():
+    data = lz4.block.compress(b'A' * 64, store_size=False)
+    message = r'^Decompression failed: corrupt input or insufficient space in destination buffer. Error code: \d+$'
+    with pytest.raises(lz4.block.LZ4BlockError, match=message):
+        lz4.block.decompress(data[4:], uncompressed_size=64)
+    with pytest.raises(lz4.block.LZ4BlockError, match=message):
+        lz4.block.decompress(data, uncompressed_size=60)
 
 
 def test_decompress_truncated():
@@ -34,19 +41,19 @@
         with pytest.raises(ValueError, match='Input source data size too small'):
             lz4.block.decompress(compressed[:n])
     for n in [24, 25, -2, 27, 67, 85]:
-        with pytest.raises(ValueError, match=r'Corrupt input at byte \d+|Decompressor wrote \d+ bytes, but \d+ bytes expected from header'):
+        with pytest.raises(lz4.block.LZ4BlockError):
             lz4.block.decompress(compressed[:n])
 
 
 def test_decompress_with_trailer():
     data = b'A' * 64
     comp = lz4.block.compress(data)
-    message = r'^Corrupt input at byte'
-    with pytest.raises(ValueError, match=message):
+    message = r'^Decompression failed: corrupt input or insufficient space in destination buffer. Error code: \d+$'
+    with pytest.raises(lz4.block.LZ4BlockError, match=message):
         lz4.block.decompress(comp + b'A')
-    with pytest.raises(ValueError, match=message):
+    with pytest.raises(lz4.block.LZ4BlockError, match=message):
         lz4.block.decompress(comp + comp)
-    with pytest.raises(ValueError, match=message):
+    with pytest.raises(lz4.block.LZ4BlockError, match=message):
         lz4.block.decompress(comp + comp[4:])
 
 
@@ -105,11 +112,12 @@
     input_data = b"2099023098234882923049823094823094898239230982349081231290381209380981203981209381238901283098908123109238098123" * 24
     dict1 = input_data[10:30]
     dict2 = input_data[20:40]
+    message = r'^Decompression failed: corrupt input or insufficient space in destination buffer. Error code: \d+$'
     for mode in ['default', 'high_compression']:
         compressed = lz4.block.compress(input_data, mode=mode, dict=dict1)
-        with pytest.raises(ValueError, match=r'Corrupt input at byte \d+'):
+        with pytest.raises(lz4.block.LZ4BlockError, match=message):
             lz4.block.decompress(compressed)
-        with pytest.raises(ValueError, match=r'Corrupt input at byte \d+'):
+        with pytest.raises(lz4.block.LZ4BlockError, match=message):
             lz4.block.decompress(compressed, dict=dict1[:2])
         assert lz4.block.decompress(compressed, dict=dict2) != input_data
         assert lz4.block.decompress(compressed, dict=dict1) == input_data

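The updated tests above all pin the new LZ4BlockError message to one regex. A stdlib-only sketch of how that pattern matches (the sample message below is hypothetical, constructed to follow the format the tests expect; error code 7 is made up):

```python
import re

# Pattern copied verbatim from the updated tests.
pattern = (r'^Decompression failed: corrupt input or insufficient space in '
           r'destination buffer. Error code: \d+$')
# Hypothetical message in the format the tests expect.
sample = ('Decompression failed: corrupt input or insufficient space in '
          'destination buffer. Error code: 7')

# pytest.raises(..., match=pattern) uses re.search internally,
# so the ^...$ anchors tie the match to the whole message.
assert re.search(pattern, sample) is not None
# The pre-2.1.1 message format no longer matches.
assert re.search(pattern, 'Corrupt input at byte 24') is None
```

Centralizing on one message plus a numeric error code is what lets several formerly distinct ValueError checks collapse into a single LZ4BlockError pattern.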
