[ceph-users] cephfs compression?

2018-06-28 Thread Youzhong Yang
For RGW, compression works very well. We use RGW to store crash dumps, and in
most cases the compression ratio is about 2.0-4.0.

I tried to enable compression for the cephfs_data pool:

# ceph osd pool get cephfs_data all | grep ^compression
compression_mode: force
compression_algorithm: lz4
compression_required_ratio: 0.95
compression_max_blob_size: 4194304
compression_min_blob_size: 4096

(We built the Ceph packages ourselves with lz4 support enabled.)
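
For reference, the settings above were applied with commands along these
lines (a sketch only; pool name and values as in the output above):

# ceph osd pool set cephfs_data compression_mode force
# ceph osd pool set cephfs_data compression_algorithm lz4
# ceph osd pool set cephfs_data compression_required_ratio 0.95
# ceph osd pool set cephfs_data compression_min_blob_size 4096
# ceph osd pool set cephfs_data compression_max_blob_size 4194304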

It doesn't seem to work. I copied an 8.7 GB folder to CephFS, and ceph df says it
used 8.7 GB:

root@ceph-admin:~# ceph df
GLOBAL:
SIZE   AVAIL  RAW USED %RAW USED
16 TiB 16 TiB  111 GiB  0.69
POOLS:
NAME            ID USED    %USED MAX AVAIL OBJECTS
cephfs_data      1 8.7 GiB  0.17   5.0 TiB  360545
cephfs_metadata  2 221 MiB     0   5.0 TiB   77707

I know this folder can be compressed to ~4.0 GB under ZFS lz4 compression.

Am I missing anything? How do I make CephFS compression work? Is there any
trick?

By the way, I am evaluating Ceph Mimic v13.2.0.

Thanks in advance,
--Youzhong


Re: [ceph-users] cephfs compression?

2018-06-28 Thread Richard Bade
I'm using compression on a cephfs-data pool in Luminous. I didn't do
anything special:

$ sudo ceph osd pool get cephfs-data all | grep ^compression
compression_mode: aggressive
compression_algorithm: zlib

You can check how much compression you're getting on the OSDs:
$ for osd in `seq 0 11`; do echo osd.$osd; sudo ceph daemon osd.$osd perf dump | grep 'bluestore_compressed'; done
osd.0
"bluestore_compressed": 686487948225,
"bluestore_compressed_allocated": 788659830784,
"bluestore_compressed_original": 1660064620544,

osd.11
"bluestore_compressed": 700999601387,
"bluestore_compressed_allocated": 808854355968,
"bluestore_compressed_original": 1752045551616,

I can't speak for Mimic, but on Luminous v12.2.5 compression is definitely
working well with mostly default options.

-Rich



Re: [ceph-users] cephfs compression?

2018-06-28 Thread Richard Bade
Oh, also: because the compression happens at the OSD level, you don't see it
in ceph df; you just see that your RAW USED is not increasing as much as
you'd expect. E.g.:
$ sudo ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
785T  300T 485T 61.73
POOLS:
NAME            ID USED %USED MAX AVAIL   OBJECTS
cephfs-metadata 11 185M     0    68692G       178
cephfs-data     12 408T 75.26      134T 132641159

You can see that we've used 408 TB in the pool but only 485 TB RAW, rather
than the ~600 TB RAW I'd expect for my k=4, m=2 pool settings.
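
Spelled out (just the back-of-envelope arithmetic, using the figures above):
a k=4, m=2 EC pool stores (k+m)/k = 1.5x the logical data, so 408T in the
pool would normally need about 612T raw, and the 485T actually used is
roughly 0.79 of that:
$ echo "408 * (4 + 2) / 4" | bc
612
$ echo "scale=2; 485 / 612" | bc
.79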


Re: [ceph-users] cephfs compression?

2018-06-29 Thread Youzhong Yang
Thanks, Richard. Yes, according to perf dump it does seem to be working:

osd.6
"bluestore_compressed":   62622444,
"bluestore_compressed_allocated": 186777600,
"bluestore_compressed_original":373555200,

It's very interesting that bluestore_compressed_allocated is approximately
50% of bluestore_compressed_original across all OSDs. Just curious: why is that?
