On Fri, Dec 04, 2020 at 04:50:14PM +0800, Chao Yu wrote:
...
> > About the speed, I think it is also limited by the storage device and other
> > conditions (I mean the CPU load during writeback might differ between lz4
> > and lz4hc-9 due to many other bounds, e.g. UFS 3.0 seq...
Hi Xiang,

On 2020/12/4 15:43, Gao Xiang wrote:
> Hi Chao,
>
> On Fri, Dec 04, 2020 at 03:09:20PM +0800, Chao Yu wrote:
> > On 2020/12/4 8:31, Gao Xiang wrote:
> > > could make more sense), could you leave some CR numbers about these
> > > algorithms on typical datasets (enwik9, silesia.tar or else) with 16k
> > > cluster size?
Hi Chao,

On Fri, Dec 04, 2020 at 03:09:20PM +0800, Chao Yu wrote:
> On 2020/12/4 8:31, Gao Xiang wrote:
> > could make more sense), could you leave some CR numbers about these
> > algorithms on typical datasets (enwik9, silesia.tar or else) with 16k
> > cluster size?
>
> Just from a quick test with enwik9 on vm ...
On 2020/12/4 8:31, Gao Xiang wrote:
> could make more sense), could you leave some CR numbers about these
> algorithms on typical datasets (enwik9, silesia.tar or else) with 16k
> cluster size?

Just from a quick test with enwik9 on vm:

Original blocks: 244382

lz4 ...
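The quick test above counts blocks before and after per-cluster compression. As a rough sketch of that methodology, the snippet below compresses a buffer in 16k clusters and reports the fraction of 4k blocks saved. It uses Python's stdlib zlib as a stand-in codec (lz4/lz4hc bindings are not in the standard library), and the repetitive sample buffer is a made-up placeholder for a real corpus such as enwik9:

```python
import zlib

CLUSTER = 16 * 1024   # 16k cluster size, as discussed in the thread
BLOCK = 4096          # f2fs block size

def saved_block_ratio(data: bytes, level: int) -> float:
    """Compress data cluster-by-cluster and report the fraction of blocks
    saved, mimicking how f2fs accounts compression at block granularity
    (a cluster only wins if it saves at least one whole block)."""
    orig = saved = 0
    for off in range(0, len(data), CLUSTER):
        chunk = data[off:off + CLUSTER]
        n = -(-len(chunk) // BLOCK)                         # blocks before
        c = -(-len(zlib.compress(chunk, level)) // BLOCK)   # blocks after
        orig += n
        if c < n:
            saved += n - c
    return saved / orig

# placeholder sample; substitute a real dataset to get meaningful CR numbers
sample = b"the quick brown fox jumps over the lazy dog " * 4096
print(saved_block_ratio(sample, 1), saved_block_ratio(sample, 9))
```

Because savings are counted at block granularity, a higher level only pays off when it crosses a whole-block boundary, which is why per-dataset CR numbers at the actual cluster size are worth collecting.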
On Fri, Dec 04, 2020 at 11:11:03AM +0800, Chao Yu wrote:
> On 2020/12/4 10:47, Gao Xiang wrote:
...
> > future (and add more dependency to algorithms; you might see the BWT-based
> > bzip2 removal patch ...
>
> Oops, is that really allowed? I don't think this is a good idea... and I
> don't see there are ...
On 2020/12/4 10:47, Gao Xiang wrote:
> On Fri, Dec 04, 2020 at 10:38:08AM +0800, Chao Yu wrote:
> > On 2020/12/4 10:06, Gao Xiang wrote:
> > > On Fri, Dec 04, 2020 at 09:56:27AM +0800, Chao Yu wrote:
...
> Keeping lz4hc dirty data under writeback could block writeback, keep kswapd
> busy, and stall the direct memory ...
On Fri, Dec 04, 2020 at 10:38:08AM +0800, Chao Yu wrote:
> On 2020/12/4 10:06, Gao Xiang wrote:
> > On Fri, Dec 04, 2020 at 09:56:27AM +0800, Chao Yu wrote:
...
> > Keeping lz4hc dirty data under writeback could block writeback, keep kswapd
> > busy, and stall the direct memory reclaim path; I guess t...
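The concern above is that a slower, higher-ratio compressor holds dirty pages longer on the writeback and reclaim paths. A small illustration of that CPU-time/ratio trade-off, using stdlib zlib levels as a stand-in for lz4 vs. lz4hc-9 (the absolute numbers are machine-dependent and the data is a placeholder):

```python
import time
import zlib

# ~4 MiB of somewhat compressible placeholder data
data = bytes(range(256)) * (16 * 1024)

def mib_per_s(level: int) -> float:
    """Single-shot compression throughput at the given level."""
    t0 = time.perf_counter()
    zlib.compress(data, level)
    return len(data) / (1 << 20) / (time.perf_counter() - t0)

for level in (1, 6, 9):
    print(f"level {level}: {mib_per_s(level):.1f} MiB/s")
```

The higher levels trade throughput for a (usually) smaller output, which is exactly the extra time spent while pages stay pinned under writeback.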
On 2020/12/4 10:06, Gao Xiang wrote:
> On Fri, Dec 04, 2020 at 09:56:27AM +0800, Chao Yu wrote:
> > Hi Xiang,
> >
> > On 2020/12/4 8:31, Gao Xiang wrote:
> > > Hi Chao,
> > >
> > > On Thu, Dec 03, 2020 at 11:32:34AM -0800, Eric Biggers wrote:
> > >
> > > ...
> > >
> > > > What is the use case for storing the compression level on-disk?
> > > >
> > > > Keep in mind that compression levels are an implementation detail...
On Fri, Dec 04, 2020 at 09:56:27AM +0800, Chao Yu wrote:
> Hi Xiang,
>
> On 2020/12/4 8:31, Gao Xiang wrote:
> > Hi Chao,
> >
> > On Thu, Dec 03, 2020 at 11:32:34AM -0800, Eric Biggers wrote:
> >
> > ...
> >
> > > What is the use case for storing the compression level on-disk?
Hi Xiang,

On 2020/12/4 8:31, Gao Xiang wrote:
> Hi Chao,
>
> On Thu, Dec 03, 2020 at 11:32:34AM -0800, Eric Biggers wrote:
>
> ...
>
> > What is the use case for storing the compression level on-disk?
> >
> > Keep in mind that compression levels are an implementation detail; the exact
> > compressed data that is produced by a particular algorithm at a part...
On 2020/12/4 3:32, Eric Biggers wrote:
> On Thu, Dec 03, 2020 at 02:17:15PM +0800, Chao Yu wrote:
> > +config F2FS_FS_LZ4HC
> > +	bool "LZ4HC compression support"
> > +	depends on F2FS_FS_COMPRESSION
> > +	depends on F2FS_FS_LZ4
> > +	select LZ4HC_COMPRESS
> > +	default y
> > +	help
> > +	  ...
Hi Chao,
On Thu, Dec 03, 2020 at 11:32:34AM -0800, Eric Biggers wrote:
...
>
> What is the use case for storing the compression level on-disk?
>
> Keep in mind that compression levels are an implementation detail; the exact
> compressed data that is produced by a particular algorithm at a part
On Thu, Dec 03, 2020 at 02:17:15PM +0800, Chao Yu wrote:
> +config F2FS_FS_LZ4HC
> +	bool "LZ4HC compression support"
> +	depends on F2FS_FS_COMPRESSION
> +	depends on F2FS_FS_LZ4
> +	select LZ4HC_COMPRESS
> +	default y
> +	help
> +	  Support LZ4HC compress algorithm, if...
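For reference, if the Kconfig entry quoted above is applied as posted, enabling it together with its stated dependencies would presumably correspond to a kernel .config fragment like the following (the F2FS_FS_LZ4HC symbol comes from this patch; the others are existing f2fs options):

```
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_COMPRESSION=y
CONFIG_F2FS_FS_LZ4=y
CONFIG_F2FS_FS_LZ4HC=y
```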
Expand the 'compress_algorithm' mount option to accept a parameter in the
format <algorithm>:<level>; this gives users a way to configure the lz4 and
zstd compression levels more specifically, so that f2fs compression can
provide a higher compression ratio.

In order to set the compress level for the lz4 algorithm, it needs ...
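With a kernel carrying this patch, usage would presumably look like the following sketch; the device, mount point, level, and extension are placeholders, and the exact accepted level range is defined by the patch itself:

```shell
# Select lz4 with compression level 9 (the lz4hc path) at mount time;
# compress_extension restricts compression to files with that extension.
mount -t f2fs -o compress_algorithm=lz4:9,compress_extension=txt /dev/sdb1 /mnt
```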
14 matches