Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Paul Emmerich
On Wed, Nov 20, 2019 at 5:16 PM  wrote:
>
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to get
> this to go away is to dump the contents of the RGW bucket(s), and recreate
> it (them)?

Things to try:

* check the bucket sharding status: radosgw-admin bucket limit check
* reshard the bucket if you aren't running multi-site and the shards
are simply too big (did you disable resharding?)
* if resharding already worked and the index is actually smaller: check
whether the large index shard is still in use by comparing its ID to the
current bucket instance ID (see bucket stats); it could be a
leftover/leaked object (that sometimes happened during resharding in
older versions). Example commands are sketched below.
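
A minimal sketch of those checks (the bucket name and shard count are
placeholders, adjust for your setup):

$ radosgw-admin bucket limit check
$ radosgw-admin bucket stats --bucket=<bucket-name>    # note the current "id"
$ radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=<new-count>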


Paul


[ceph-users] Introducing DeepSpace

2019-11-20 Thread Cranage, Steve
Greetings everyone, I wanted to post this notice that we are opening up our
Catalog Service and file system extensions for Ceph as an open source project.
DeepSpace takes a different approach in that we advocate using standard file
systems (pretty much just XFS at this time), so that the file system acts as a
cache for working data and we transparently move data back and forth between
the file system and the Ceph cluster. We are using Ceph only at the libRADOS
layer of the stack.


In the process we add a few notable features:


  *   File search across N file systems by name or time

  *   File versioning so you never overwrite files and can reverse a 
ransomware attack instantly

  *   File System synthesis from a database query – This is very cool but needs 
a longer explanation than I want to do here…

  *   Automated tape library support so data can flow freely between tape and 
Ceph.


There is a lot more to cover and we’ll be posting some 600 pages of
documentation and demo videos shortly, but in the meantime, if you are at the
Supercomputing show in Denver this week, drop by booth 392 and we’ll be happy
to show you more.





Steve Cranage

Principal Architect, Co-Founder

DeepSpace Storage

719-930-6960




Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Nathan Fish
It's a warning, not an error, and if you consider it to not be a
problem, I believe you can raise
osd_deep_scrub_large_omap_object_key_threshold back to its previous
default of 2 million keys (the warning here is triggered by the key
count, not by the total value size).
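
A minimal sketch of how that could be applied (assuming a release with
centralized config, i.e. ceph config; the old default was 2000000 keys):

$ ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000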

On Wed, Nov 20, 2019 at 11:37 AM  wrote:
>
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to get
> this to go away is to dump the contents of the RGW bucket(s), and recreate
> it (them)?
>
> How did this get past release approval?  A change which makes a valid cluster
> state invalid, with no mitigation other than downtime, in a minor release.

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread DHilsbos
All;

Since I haven't heard otherwise, I have to assume that the only way to get this 
to go away is to dump the contents of the RGW bucket(s), and recreate it 
(them)?

How did this get past release approval?  A change which makes a valid cluster 
state invalid, with no mitigation other than downtime, in a minor release.

Thank you,

Dominic L. Hilsbos, MBA 
Director – Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com


-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
dhils...@performair.com
Sent: Friday, November 15, 2019 9:13 AM
To: ceph-users@lists.ceph.com
Cc: Stephen Self
Subject: Re: [ceph-users] Large OMAP Object

Wido;

Ok, yes, I have tracked it down to the index for one of our buckets.  I missed 
the ID in the ceph df output previously.  Next time I'll wait to read replies 
until I've finished my morning coffee.

How would I go about correcting this?

The content for this bucket is basically just junk, as we're still doing 
production qualification, and workflow planning.  Moving from Windows file 
shares to self-hosted cloud storage is a significant undertaking.

Thank you,

Dominic L. Hilsbos, MBA 
Director – Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com



-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido 
den Hollander
Sent: Friday, November 15, 2019 8:40 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Object



On 11/15/19 4:35 PM, dhils...@performair.com wrote:
> All;
> 
> Thank you for your help so far.  I have found the log entries from when the 
> object was found, but don't see a reference to the pool.
> 
> Here are the logs:
> 2019-11-14 03:10:16.508601 osd.1 (osd.1) 21 : cluster [DBG] 56.7 deep-scrub 
> starts
> 2019-11-14 03:10:18.325881 osd.1 (osd.1) 22 : cluster [WRN] Large omap object 
> found. Object: 
> 56:f7d15b13:::.dir.f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1:head Key 
> count: 380425 Size (bytes): 82896978
> 

In this case it's in pool 56, check 'ceph df' to see which pool that is.

To me this seems like an RGW bucket whose index grew too big.

Use:

$ radosgw-admin bucket list
$ radosgw-admin metadata get bucket:

And match that UUID back to the bucket.
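
For example, a minimal sketch of that matching step (the angle-bracket names
are placeholders):

$ radosgw-admin bucket list
$ radosgw-admin bucket stats --bucket=<bucket-name> | grep '"id"'
$ radosgw-admin metadata get bucket:<bucket-name> | grep bucket_id

and look for the bucket whose id matches the
f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1 portion of the ".dir." object
name from the scrub warning (possibly minus a trailing shard number).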

Wido

> Thank you,
> 
> Dominic L. Hilsbos, MBA 
> Director – Information Technology 
> Perform Air International Inc.
> dhils...@performair.com 
> www.PerformAir.com
> 
> 
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com] 
> Sent: Friday, November 15, 2019 1:56 AM
> To: Dominic Hilsbos; ceph-users@lists.ceph.com
> Cc: Stephen Self
> Subject: Re: [ceph-users] Large OMAP Object
> 
> Did you check /var/log/ceph/ceph.log on one of the Monitors to see which
> pool and Object the large Object is in?
> 
> Wido
> 
> On 11/15/19 12:23 AM, dhils...@performair.com wrote:
>> All;
>>
>> We had a warning about a large OMAP object pop up in one of our clusters 
>> overnight.  The cluster is configured for CephFS, but nothing mounts a 
>> CephFS, at this time.
>>
>> The cluster mostly uses RGW.  I've checked the cluster log, the MON log, and 
>> the MGR log on one of the mons, with no useful references to the pool / pg 
>> where the large OMAP object resides.
>>
>> Is my only option to find this large OMAP object to go through the OSD logs 
>> for the individual OSDs in the cluster?
>>
>> Thank you,
>>
>> Dominic L. Hilsbos, MBA 
>> Director - Information Technology 
>> Perform Air International Inc.
>> dhils...@performair.com 
>> www.PerformAir.com


Re: [ceph-users] dashboard hangs

2019-11-20 Thread thoralf schulze
hi,

we were able to track this down to the auto balancer: disabling the auto
balancer and cleaning out old (and probably not very meaningful)
upmap entries via ceph osd rm-pg-upmap-items brought back stable mgr
daemons and a usable dashboard.
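
for reference, a minimal sketch of that cleanup (the pg id is a placeholder;
the existing upmap entries can be listed from the osdmap first):

$ ceph balancer off
$ ceph osd dump | grep pg_upmap_items
$ ceph osd rm-pg-upmap-items <pg-id>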

the not-so-sensible upmap-entries might or might not have been caused by
us updating from mimic to nautilus - it's too late to debug this now.
this seems to be consistent with bryan stillwell's findings ("mgr hangs
with upmap balancer").

thank you very much & with kind regards,
thoralf.





[ceph-users] scrub error on object storage pool

2019-11-20 Thread M Ranga Swami Reddy
Hello - We recently upgraded to Luminous 12.2.11. Since then we see scrub
errors, but only on the object storage pool, on a daily basis. After a repair
the errors clear, but they come back the next day once the PG has been
scrubbed again.

Are there any known issues with scrub errors in 12.2.11?
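
For context, the daily check/repair cycle described above roughly corresponds
to the following commands (a sketch; pool name and PG ID are placeholders):

$ ceph health detail
$ rados list-inconsistent-pg <pool-name>
$ rados list-inconsistent-obj <pg-id> --format=json-pretty
$ ceph pg repair <pg-id>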

Thanks
Swami


[ceph-users] Error in MGR log: auth: could not find secret_id

2019-11-20 Thread Thomas Schneider
Hi,
my Ceph cluster is in an unhealthy state and busy with recovery.

I'm observing the MGR log, and it regularly shows this error message:
2019-11-20 09:51:45.211 7f7205581700  0 auth: could not find secret_id=4193
2019-11-20 09:51:45.211 7f7205581700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193
2019-11-20 09:51:46.403 7f7205581700  0 auth: could not find secret_id=4193
2019-11-20 09:51:46.403 7f7205581700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193
2019-11-20 09:51:46.543 7f71f3826700  0 log_channel(cluster) log [DBG] :
pgmap v2508: 8432 pgs: 1 active+recovering+remapped, 1
active+remapped+backfilling, 4 active+recovering, 2
undersized+degraded+peered, 3 remapped+peering, 104 peering, 24
activating, 3 creating+peering, 8290 active+clean; 245 TiB data, 732 TiB
used, 791 TiB / 1.5 PiB avail; 67 KiB/s wr, 1 op/s; 8272/191737068
objects degraded (0.004%); 4392/191737068 objects misplaced (0.002%)
2019-11-20 09:51:46.603 7f7205d82700  0 auth: could not find secret_id=4193
2019-11-20 09:51:46.603 7f7205d82700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193
2019-11-20 09:51:46.947 7f7205d82700  0 auth: could not find secret_id=4193
2019-11-20 09:51:46.947 7f7205d82700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193
2019-11-20 09:51:47.015 7f7205d82700  0 auth: could not find secret_id=4193
2019-11-20 09:51:47.015 7f7205d82700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193
2019-11-20 09:51:47.815 7f7205d82700  0 auth: could not find secret_id=4193
2019-11-20 09:51:47.815 7f7205d82700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193
2019-11-20 09:51:48.567 7f71f3826700  0 log_channel(cluster) log [DBG] :
pgmap v2509: 8432 pgs: 1 active+recovering+remapped, 1
active+remapped+backfilling, 4 active+recovering, 2
undersized+degraded+peered, 3 remapped+peering, 104 peering, 24
activating, 3 creating+peering, 8290 active+clean; 245 TiB data, 732 TiB
used, 791 TiB / 1.5 PiB avail; 65 KiB/s wr, 0 op/s; 8272/191737068
objects degraded (0.004%); 4392/191737068 objects misplaced (0.002%)
2019-11-20 09:51:49.447 7f7204d80700  0 auth: could not find secret_id=4193
2019-11-20 09:51:49.447 7f7204d80700  0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=4193

The relevant MON log entries for this timestamp are:
2019-11-20 09:51:41.559 7f4f28311700  0 mon.ld5505@0(leader) e9
handle_command mon_command({"prefix":"df","format":"json"} v 0) v1
2019-11-20 09:51:41.559 7f4f28311700  0 log_channel(audit) log [DBG] :
from='client.? 10.97.206.97:0/1141066028' entity='client.admin'
cmd=[{"prefix":"df","format":"json"}]: dispatch
2019-11-20 09:51:45.847 7f4f28311700  0 mon.ld5505@0(leader) e9
handle_command mon_command({"format":"json","prefix":"df"} v 0) v1
2019-11-20 09:51:45.847 7f4f28311700  0 log_channel(audit) log [DBG] :
from='client.? 10.97.206.91:0/1573121305' entity='client.admin'
cmd=[{"format":"json","prefix":"df"}]: dispatch
2019-11-20 09:51:46.307 7f4f2730f700  0 --1-
[v2:10.97.206.93:3300/0,v1:10.97.206.93:6789/0] >>  conn(0x56253e8f5180
0x56253ebc1800 :6789 s=ACCEPTING pgs=0 cs=0 l=0).handle_client_banner
accept peer addr is really - (socket is v1:10.97.206.95:51494/0)
2019-11-20 09:51:46.839 7f4f28311700  0 mon.ld5505@0(leader) e9
handle_command mon_command({"format":"json","prefix":"df"} v 0) v1
2019-11-20 09:51:46.839 7f4f28311700  0 log_channel(audit) log [DBG] :
from='client.? 10.97.206.99:0/413315398' entity='client.admin'
cmd=[{"format":"json","prefix":"df"}]: dispatch
2019-11-20 09:51:49.579 7f4f28311700  0 mon.ld5505@0(leader) e9
handle_command mon_command({"prefix":"df","format":"json"} v 0) v1
2019-11-20 09:51:49.579 7f4f28311700  0 log_channel(audit) log [DBG] :
from='client.? 10.97.206.96:0/2753573650' entity='client.admin'
cmd=[{"prefix":"df","format":"json"}]: dispatch
2019-11-20 09:51:49.607 7f4f28311700  0 mon.ld5505@0(leader) e9
handle_command mon_command({"format":"json","prefix":"df"} v 0) v1
2019-11-20 09:51:49.607 7f4f28311700  0 log_channel(audit) log [DBG] :
from='client.? 10.97.206.98:0/2643276575' entity='client.admin'
cmd=[{"format":"json","prefix":"df"}]: dispatch
^C2019-11-20 09:51:50.703 7f4f2730f700  0 --1-
[v2:10.97.206.93:3300/0,v1:10.97.206.93:6789/0] >>  conn(0x562542ed2400
0x562541a8d000 :6789 s=ACCEPTING pgs=0 cs=0 l=0).handle_client_banner
accept peer addr is really - (socket is v1:10.97.206.98:52420/0)
2019-11-20 09:51:50.951 7f4f28311700  0 mon.ld5505@0(leader) e9
handle_command mon_command({"format":"json","prefix":"df"} v 0) v1
2019-11-20 09:51:50.951 7f4f28311700  0 log_channel(audit) log [DBG] :
from='client.127514502 10.97.206.92:0/3526816880' entity='client.admin'
cmd=[{"format":"json","prefix":"df"}]: dispatch


This auth issue must be fixed soon, because if not the error occurs
every second and this

[ceph-users] POOL_TARGET_SIZE_BYTES_OVERCOMMITTED and POOL_TARGET_SIZE_RATIO_OVERCOMMITTED

2019-11-20 Thread Björn Hinz
Hello,

I can also confirm the problem described by Joe Ryner (on 14.2.2) and Oliver
Freyermuth.

My Ceph version is 14.2.4.

-
# ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
    Pools ['volumes', 'backups', 'images', 'cephfs_cindercache', 'rbd', 'vms'] overcommit available storage by 1.308x due to target_size_bytes 0 on pools []
POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
    Pools ['volumes', 'backups', 'images', 'cephfs_cindercache', 'rbd', 'vms'] overcommit available storage by 1.308x due to target_size_ratio 0.000 on pools []
-

-
# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       659 TiB     371 TiB     287 TiB      288 TiB         43.71
    ssd        67 TiB      56 TiB      11 TiB       11 TiB         16.47
    TOTAL     726 TiB     427 TiB     298 TiB      299 TiB         41.19

POOLS:
    POOL                            ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    volumes                          5      87 TiB      22.94M     261 TiB     50.63        85 TiB
    backups                          6         0 B           0         0 B         0       127 TiB
    images                           7     8.6 TiB       2.26M      26 TiB      9.21        85 TiB
    fastvolumes                      9     3.7 TiB       1.93M      11 TiB     18.67        16 TiB
    cephfs_cindercache              10         0 B           0         0 B         0        85 TiB
    cephfs_cindercache_metadata     11     312 KiB         102     1.3 MiB         0        16 TiB
    rbd                             12         0 B           0         0 B         0        85 TiB
    vms                             13         0 B           0         0 B         0        85 TiB
-

-
# ceph osd pool autoscale-status
 POOL                          SIZE    TARGET SIZE   RATE   RAW CAPACITY   RATIO   TARGET RATIO   BIAS   PG_NUM   NEW PG_NUM   AUTOSCALE
 cephfs_cindercache_metadata   1300k                  3.0         68930G   0.0000                  1.0        4                on
 fastvolumes                  11275G                  3.0         68930G   0.4907                  1.0      128                on
 cephfs_cindercache                0                  3.0         658.5T   0.0000                  1.0        4                on
 volumes                      261.2T                  3.0         658.5T   1.1898                  1.0     2048                on
 images                       26455G                  3.0         658.5T   0.1177                  1.0      128                on
 backups                           0                  2.0         658.5T   0.0000                  1.0        4                on
 rbd                               0                  3.0         658.5T   0.0000                  1.0        4                on
 vms                               0                  3.0         658.5T   0.0000                  1.0        4                on
-
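
For reference, the hints named in the warning are per-pool properties; they can
be inspected via the autoscale-status output above and adjusted per pool, for
example (a sketch for inspecting/clearing the hints only, not a known fix;
'volumes' is just one of the pools listed above):

$ ceph osd pool set volumes target_size_ratio 0
$ ceph osd pool set volumes target_size_bytes 0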

Best regards
Björn