On 3/12/20 4:22 AM, Denis Plotnikov wrote:
zstd significantly reduces cluster compression time.
It provides better compression performance while
maintaining the same compression ratio as zlib,
which is currently the only available compression
method.


+++ b/docs/interop/qcow2.txt
@@ -208,6 +208,7 @@ version 2.
Available compression type values:
                          0: zlib <https://www.zlib.net/>
+                        1: zstd <http://github.com/facebook/zstd>
=== Header padding ===
@@ -575,11 +576,30 @@ Compressed Clusters Descriptor (x = 62 - (cluster_bits - 8)):
                      Another compressed cluster may map to the tail of the final
                      sector used by this compressed cluster.
+                    The layout of the compressed data depends on the compression
+                    type used for the image (see compressed cluster layout).
+
  If a cluster is unallocated, read requests shall read the data from the backing
  file (except if bit 0 in the Standard Cluster Descriptor is set). If there is
  no backing file or the backing file is smaller than the image, they shall read
  zeros for all parts that are not covered by the backing file.
+=== Compressed Cluster Layout ===
+
+The compressed cluster data has a layout depending on the compression
+type used for the image, as follows:
+
+Compressed data layout for the available compression types:
+data_space_lenght - data chunk length available to store a compressed cluster.

s/lenght/length/

+(for more details see "Compressed Clusters Descriptor")
+x = data_space_length - 1

If I understand correctly, data_space_length is really an upper bound on the length available, because it is computed by rounding UP to the next 512-byte boundary (that is, the L2 descriptor lists the number of additional sectors used in storing the compressed data). Which really means we have the following, where + marks cluster boundaries, S and E are the start and end of the compressed data, and D is the offset determined by data_space_length:

+-------+-------+------+
      S============E...D
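
For concreteness, data_space_length can be derived from the L2 descriptor roughly like this (identifier names here are mine, not necessarily what the code uses):

    /* S: host offset of the compressed data, from the descriptor */
    uint64_t coffset = l2_entry & cluster_offset_mask;
    /* the descriptor stores "additional 512-byte sectors", hence the +1 */
    uint64_t nb_csectors = ((l2_entry >> x) & csize_mask) + 1;
    /* D - S: everything from S up to the end of the last sector */
    size_t data_space_length = nb_csectors * 512 - (coffset % 512);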

+
+    0:  (default)  zlib <http://zlib.net/>:
+            Byte  0 -  x:     the compressed data content
+                              all the space provided used for compressed data

For zlib, bytes 0-E are compressed data, and bytes (E+1)-D (if any) are ignored. There is no way to tell how many bytes lie between E and D, but zlib doesn't care: the compression stream is self-terminating, so decompression stops consuming input at E, exactly when the output reaches the cluster boundary.
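
That behavior, roughly, for reference (a simplified sketch of the existing zlib path; the real qcow2_zlib_decompress() has more error handling):

    z_stream strm = {
        .next_in = (void *) src,     /* bytes 0..D, slop included */
        .avail_in = src_size,
        .next_out = dest,
        .avail_out = dest_size,      /* exactly one cluster */
    };
    int ret = inflateInit2(&strm, -12);  /* raw deflate, per the spec */

    if (ret == Z_OK) {
        ret = inflate(&strm, Z_FINISH);
        /*
         * The deflate stream self-terminates at E, so the bytes
         * between E and D are simply never consumed; success is
         * Z_STREAM_END with strm.total_out == dest_size.
         */
        inflateEnd(&strm);
    }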

+    1:  zstd <http://github.com/facebook/zstd>:
+            Byte  0 -  3:     the length of compressed data in bytes
+                  4 -  x:     the compressed data content

Whereas for zstd, the decompression MUST know the actual location of E, rather than passing in the slop between E and D; bytes 0-3 give us that information.

But your description is not very accurate: if 'x' is point E, then it is NOT data_space_length - 1, but rather data_space_length - slop, where slop can be up to 511 bytes (the number of bytes from (E+1) to D). And if 'x' really is data_space_length - 1 (point D above), then the real layout for zlib is:

byte 0 - E: the compressed data content
byte E+1 - x: ignored slop (E is implied solely by the compressed data)

and for zstd is:

byte 0 - 3: the length of the compressed data
byte 4 - E: the compressed data (E computed from byte 0-3)
byte E+1 - x: ignored

I'm not sure what the best way is to document this.

+++ b/block/qcow2-threads.c

+static ssize_t qcow2_zstd_compress(void *dest, size_t dest_size,
+                                   const void *src, size_t src_size)
+{
+    size_t ret;
+
+    /*
+     * steal ZSTD_LEN_BUF bytes in the very beginning of the buffer
+     * to store compressed chunk size
+     */
+    char *d_buf = ((char *) dest) + ZSTD_LEN_BUF;
+
+    /*
+     * sanity check that we can store the compressed data length,
+     * and there is some space left for the compressor buffer
+     */
+    if (dest_size <= ZSTD_LEN_BUF) {
+        return -ENOMEM;
+    }
+
+    dest_size -= ZSTD_LEN_BUF;
+
+    ret = ZSTD_compress(d_buf, dest_size, src, src_size, 5);

Where does the magic number 5 come from?
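
If 5 is simply the chosen default level, a named constant would document that choice; something along these lines (constant name hypothetical):

    /* hypothetical name; presumably 5 was picked as a speed/ratio trade-off */
    #define ZSTD_COMPRESSION_LEVEL 5

    ret = ZSTD_compress(d_buf, dest_size, src, src_size,
                        ZSTD_COMPRESSION_LEVEL);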

+
+    if (ZSTD_isError(ret)) {
+        if (ZSTD_getErrorCode(ret) == ZSTD_error_dstSize_tooSmall) {
+            return -ENOMEM;
+        } else {
+            return -EIO;
+        }
+    }
+
+    /*
+     * paranoid sanity check that we can store
+     * the compressed size in the first 4 bytes
+     */
+    if (ret > UINT32_MAX) {
+        return -ENOMEM;
+    }

The if is awkward.  I'd prefer to change this to:

    /*
     * Our largest cluster is 2M, and we insist that compression
     * actually compressed things.
     */
    assert(ret < UINT32_MAX);

or even tighten to assert(ret <= dest_size)

+
+    /* store the compressed chunk size in the very beginning of the buffer */
+    stl_be_p(dest, ret);
+
+    return ret + ZSTD_LEN_BUF;
+}
+
+/*
+ * qcow2_zstd_decompress()
+ *
+ * Decompress some data (not more than @src_size bytes) to produce exactly
+ * @dest_size bytes using zstd compression method
+ *
+ * @dest - destination buffer, @dest_size bytes
+ * @src - source buffer, @src_size bytes
+ *
+ * Returns: 0 on success
+ *          -EIO on any error
+ */
+static ssize_t qcow2_zstd_decompress(void *dest, size_t dest_size,
+                                     const void *src, size_t src_size)
+{
+    /*
+     * zstd decompress wants to know the exact length of the data.
+     * For that purpose, on compression, the length is stored in
+     * the very beginning of the compressed buffer
+     */
+    size_t s_size;
+    const char *s_buf = ((const char *) src) + ZSTD_LEN_BUF;
+
+    /*
+     * sanity check that we can read the 4-byte content length and
+     * that there is some content to decompress
+     */
+    if (src_size <= ZSTD_LEN_BUF) {
+        return -EIO;
+    }
+
+    s_size = ldl_be_p(src);
+
+    /* sanity check that the buffer is big enough to read the content from */
+    if (src_size - ZSTD_LEN_BUF < s_size) {
+        return -EIO;
+    }
+
+    if (ZSTD_isError(
+            ZSTD_decompress(dest, dest_size, s_buf, s_size))) {

You are correct that ZSTD_decompress() is picky: it must be given the exact size of the compressed buffer it is decompressing. But the ZSTD manual mentions that if the exact size is not known in advance, the streaming API can be used instead:

https://facebook.github.io/zstd/zstd_manual.html#Chapter9

In other words, would it be possible to NOT have to prepend four bytes of exact size information, by instead setting up decompression via the streaming API, where the input is (usually) oversized but the output buffer, limited to exactly one cluster, is sufficient to consume the exact compressed data and ignore the slop, just as we do for zlib?
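
For illustration, an untested sketch of what that might look like, using only the documented streaming entry points (ZSTD_createDStream/ZSTD_initDStream/ZSTD_decompressStream); this is my sketch of the idea, not a demand for this exact shape:

    static ssize_t qcow2_zstd_decompress(void *dest, size_t dest_size,
                                         const void *src, size_t src_size)
    {
        ZSTD_DStream *dstream = ZSTD_createDStream();
        ZSTD_outBuffer output = { dest, dest_size, 0 };
        ZSTD_inBuffer input = { src, src_size, 0 };  /* may include slop */
        ssize_t ret = 0;

        if (!dstream) {
            return -EIO;
        }
        ZSTD_initDStream(dstream);

        /*
         * Ask for exactly one cluster of output; zstd stops reading
         * input at the end of the frame, so any slop past E is never
         * consumed and no length prefix is needed.
         */
        while (output.pos < output.size) {
            size_t zret = ZSTD_decompressStream(dstream, &output, &input);

            if (ZSTD_isError(zret)) {
                ret = -EIO;
                break;
            }
            /* frame ended (zret == 0) or input ran dry before a full cluster */
            if ((zret == 0 || input.pos == input.size) &&
                output.pos < output.size) {
                ret = -EIO;
                break;
            }
        }

        ZSTD_freeDStream(dstream);
        return ret;
    }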

The rest of this patch looks okay.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

