Based on kdave for-next. As the heuristic skeleton is already merged,
this series populates the heuristic with basic code.
First patch: add simple sampling code. It takes 16-byte samples with a
256-byte shift over the input data and collects info about how many
different bytes (symbols) are found in the sample data.

Second patch: add code to calculate how many unique bytes are found in
the sample data. That can quickly detect easily compressible data.

Third patch: add code to calculate the byte core set size, i.e. how many
unique bytes are used by 90% of the sample data. That code requires the
counts in the bucket to be sorted. It can detect easily compressible
data with many repeated bytes, and incompressible data with evenly
distributed bytes.

Changes v1 -> v2:
  - Change input data iterator shift 512 -> 256
  - Replace magic macro numbers with direct values
  - Drop useless symbol population in the bucket, as nothing cares where
    and what symbol is stored in the bucket for now

Timofey Titovets (3):
  Btrfs: heuristic add simple sampling logic
  Btrfs: heuristic add byte set calculation
  Btrfs: heuristic add byte core set calculation

 fs/btrfs/compression.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++-
 fs/btrfs/compression.h |  13 ++++++
 2 files changed, 119 insertions(+), 2 deletions(-)

--
2.13.3