Moving the Z_FINISH into the loop also means we don't have to force a flush after every input page to guarantee that there won't be more than 4 KiB left to write at the end. This patch lets zlib decide when to flush its buffer, which gives a modest space saving (on my system, my 400 MB test logfile goes from a compression ratio of 11.9% to 11.2%, which is nothing to write home about) and might offer a similarly slight performance boost.

Since the end result is still a valid zlib stream, it is completely backwards-compatible with the existing method.
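For anyone who wants to see the flush pattern outside the kernel: this is not the btrfs code, just a minimal userspace sketch of the same idea (Z_NO_FLUSH on intermediate chunks, Z_FINISH only on the last one, modelled on zlib's zpipe.c example). The chunk size and function name here are made up for illustration.

/* sketch.c: build with  cc sketch.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define CHUNK 4096

static int compress_chunks(const unsigned char *in, size_t in_len, FILE *out)
{
	z_stream strm;
	unsigned char obuf[CHUNK];
	size_t off = 0;
	int ret;

	memset(&strm, 0, sizeof(strm));	/* zalloc/zfree/opaque = Z_NULL */
	if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
		return -1;

	while (off < in_len) {
		size_t n = (in_len - off > CHUNK) ? CHUNK : in_len - off;
		/* Z_FINISH only on the final chunk; otherwise let zlib
		 * decide when to emit output. */
		int flush = (off + n == in_len) ? Z_FINISH : Z_NO_FLUSH;

		strm.next_in = (Bytef *)(in + off);
		strm.avail_in = n;
		do {
			strm.next_out = obuf;
			strm.avail_out = CHUNK;
			ret = deflate(&strm, flush);
			if (ret == Z_STREAM_ERROR) {
				deflateEnd(&strm);
				return -1;
			}
			fwrite(obuf, 1, CHUNK - strm.avail_out, out);
		} while (strm.avail_out == 0);
		off += n;
	}
	deflateEnd(&strm);
	return 0;
}

int main(void)
{
	const char msg[] = "example input standing in for page-sized chunks";

	return compress_chunks((const unsigned char *)msg,
			       sizeof(msg) - 1, stdout) ? 1 : 0;
}

The only behavioural difference from the old approach is which flush value gets passed for the non-final chunks; the output is still one valid zlib stream either way.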

Signed-off-by: Danielle Church <dchu...@cheri.shyou.org>
---
 fs/btrfs/zlib.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 1dc0455..df7d957 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -89,7 +89,7 @@ static int zlib_compress_pages(struct list_head *ws,
        struct page *in_page = NULL;
        struct page *out_page = NULL;
        unsigned long bytes_left;
-       int deflate_flush = Z_SYNC_FLUSH;
+       int deflate_flush = Z_NO_FLUSH;

        *out_pages = 0;
        *total_out = 0;
--
2.2.1