Re: [ceph-users] ceph df shows global-used more than real data size

2019-12-23 Thread zx
Are you using HDDs?
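
For context, a back-of-envelope check, assuming the Mimic-era BlueStore default
of bluestore_min_alloc_size_hdd = 64 KiB: each 4 KiB RGW object still occupies a
full 64 KiB allocation unit per replica, which roughly accounts for the RAW USED
figure quoted below. A sketch, not a definitive diagnosis:

# check the effective allocation unit on one HDD OSD (run on the node hosting osd.0)
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
# rough arithmetic with that 64 KiB assumption and 3x replication:
#   4,834,552,282 objects * 64 KiB * 3 replicas ~= 864 TiB
# which is close to the 889 TiB RAW USED reported below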


> On Dec 24, 2019, at 3:06 PM, Ch Wan wrote:
> 
> Hi all. I deployed a Ceph cluster with Mimic 13.2.4. There are 26 nodes, 286
> OSDs and 1.4 PiB of available space.
> I created nearly 5,000,000,000 objects through ceph-rgw, each 4 KiB in size,
> so roughly 18 TB * 3 of disk should be used. But `ceph df detail` shows that
> RAW USED is 889 TiB.
> Is this a bug, or did I miss something?
> 
> ceph df 
>  
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     1.4 PiB     541 TiB     889 TiB      62.15
> POOLS:
>     NAME                        ID     USED        %USED     MAX AVAIL     OBJECTS
>     .rgw.root                   7      4.6 KiB     0         12 TiB        20
>     default.rgw.control         8      0 B         0         12 TiB        8
>     default.rgw.meta            9      0 B         0         12 TiB        0
>     default.rgw.log             10     0 B         0         12 TiB        175
>     test.rgw.buckets.index      11     0 B         0         39 TiB        35349
>     test.rgw.buckets.data       12     18 TiB      59.35     12 TiB        4834552282
>     test.rgw.buckets.non-ec     13     0 B         0         12 TiB        0
>     test.rgw.control            17     0 B         0         39 TiB        8
>     test.rgw.meta               18     3.0 KiB     0         39 TiB        13
>     test.rgw.log                19     63 B        0         39 TiB        211
> 
> Here is `ceph -s` output
>   cluster:
> id: a61656e0-6086-42ce-97b7-330b3e44
> health: HEALTH_WARN
> 4 backfillfull osd(s)
> 9 nearfull osd(s)
> 6 pool(s) backfillfull
>   services:
> mon: 3 daemons, quorum ceph-test01,ceph-test03,ceph-test04
> mgr: ceph-test03(active), standbys: ceph-test01, ceph-test04
> osd: 286 osds: 286 up, 286 in; 
> rgw: 3 daemons active
>   data:
> pools:   10 pools, 8480 pgs
> objects: 4.83 G objects, 18 TiB
> usage:   889 TiB used, 541 TiB / 1.4 PiB avail
>  
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph df shows global-used more than real data size

2019-12-23 Thread Ch Wan
Hi all. I deployed a Ceph cluster with Mimic 13.2.4. There are 26 nodes, 286
OSDs and 1.4 PiB of available space.
I created nearly 5,000,000,000 objects through ceph-rgw, each 4 KiB in size,
so roughly 18 TB * 3 of disk should be used. But `ceph df detail` shows that
RAW USED is 889 TiB.
Is this a bug, or did I miss something?

ceph df

GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    1.4 PiB     541 TiB     889 TiB      62.15
POOLS:
    NAME                        ID     USED        %USED     MAX AVAIL     OBJECTS
    .rgw.root                   7      4.6 KiB     0         12 TiB        20
    default.rgw.control         8      0 B         0         12 TiB        8
    default.rgw.meta            9      0 B         0         12 TiB        0
    default.rgw.log             10     0 B         0         12 TiB        175
    test.rgw.buckets.index      11     0 B         0         39 TiB        35349
    test.rgw.buckets.data       12     18 TiB      59.35     12 TiB        4834552282
    test.rgw.buckets.non-ec     13     0 B         0         12 TiB        0
    test.rgw.control            17     0 B         0         39 TiB        8
    test.rgw.meta               18     3.0 KiB     0         39 TiB        13
    test.rgw.log                19     63 B        0         39 TiB        211

Here is `ceph -s` output

>   cluster:
> id: a61656e0-6086-42ce-97b7-330b3e44
> health: HEALTH_WARN
> 4 backfillfull osd(s)
> 9 nearfull osd(s)
> 6 pool(s) backfillfull
>   services:
> mon: 3 daemons, quorum ceph-test01,ceph-test03,ceph-test04
> mgr: ceph-test03(active), standbys: ceph-test01, ceph-test04
> osd: 286 osds: 286 up, 286 in;
> rgw: 3 daemons active
>   data:
> pools:   10 pools, 8480 pgs
> objects: 4.83 G objects, 18 TiB
> usage:   889 TiB used, 541 TiB / 1.4 PiB avail
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-mgr send zabbix data

2019-12-23 Thread Rene Diepstraten - PCextreme B.V.
Hi,

It seems that either you haven't imported the Zabbix template on your Zabbix
server, or you haven't attached that template to the host
"ceph-5f23a710-ca98-44f6-a323-41d412256f4d", which must exist on the Zabbix
server.
Also, if you have named the host "ceph-5f23a710-ca98-44f6-a323-41d412256f4d",
you should set that name as the identifier in the ceph config.
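
For example, assuming the host object on the Zabbix server really is named
"ceph-5f23a710-ca98-44f6-a323-41d412256f4d" (the value from your log output), a
sketch of setting and verifying the identifier would be:

# make the mgr zabbix module report under that host name
ceph zabbix config-set identifier ceph-5f23a710-ca98-44f6-a323-41d412256f4d
# verify the configuration and trigger a manual send
ceph zabbix config-show
ceph zabbix send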

Kind regards,

Rene Diepstraten
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow rbd read performance

2019-12-23 Thread Christian Balzer

Hello,

On Mon, 23 Dec 2019 22:14:15 +0100 Ml Ml wrote:

> Hohoho, Merry Christmas and hello,
> 
> I set up a "poor man's" Ceph cluster with 3 nodes, one switch and
> normal standard HDDs.
> 
> My problem: with an rbd benchmark I get 190 MB/s write, but only
> 45 MB/s read speed.
>
Something is severely off with your testing or cluster if reads are slower
than writes, especially by this margin.
 
> Here is the Setup: https://i.ibb.co/QdYkBYG/ceph.jpg
> 
> I plan to implement a separate switch to separate public from cluster
> network. But i think this is not my current problem here.
> 
You don't mention how many HDDs per server. 10 Gb/s is most likely fine, and
a separate network (either physical or logical) is usually not needed or
beneficial.
Your results indicate that the HIGHEST peak used 70% of your bandwidth and
that your disks can only sustain 20% of it.

Do your tests consistently with the same tool.
Neither rados bench nor rbd bench is ideal, but at least they give ballpark
figures.
fio on the actual mount on your backup server would be best.
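
A minimal sequential-read example of that (the mount path and file name are
placeholders; it assumes fio with the libaio engine is installed on the backup
server):

fio --name=seqread --filename=/mnt/rbd-backup/fio.testfile --rw=read \
    --bs=4M --size=10G --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=60 --group_reporting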

Also, testing on a Ceph node is prone to skewed results; test from the
actual client, your backup server.

Make sure your network does what you want, and monitor the Ceph nodes with
e.g. atop during the test runs to see where the obvious bottlenecks are.
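
For instance (interval in seconds; assuming atop and sysstat are installed on
the OSD nodes):

atop 2
iostat -xm 2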

Christian

> I mount the image via rbd from the backup server. It seems that I get
> good write but slow read speed. More details at the end of the mail.
> 
> rados bench -p scbench 30 write --no-cleanup:
> -
> Total time run: 34.269336
> ...
> Bandwidth (MB/sec): 162.945
> Stddev Bandwidth:   198.818
> Max bandwidth (MB/sec): 764
> Min bandwidth (MB/sec): 0
> Average IOPS:   40
> Stddev IOPS:49
> Max IOPS:   191
> Min IOPS:   0
> Average Latency(s): 0.387122
> Stddev Latency(s):  1.24094
> Max latency(s): 11.883
> Min latency(s): 0.0161869
> 
> 
> Here are the rbd benchmarks run on ceph01:
> --
> rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type write --io-size 8192
> --io-threads 256 --io-total 10G --io-pattern seq
> ...
> elapsed:56  ops:  1310720  ops/sec: 23295.63  bytes/sec:
> 190837820.82 (190MB/sec) => OKAY
> 
> 
> rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type read --io-size 8192
> --io-threads 256 --io-total 10G --io-pattern seq
> ...
> elapsed:   237  ops:  1310720  ops/sec:  5517.19  bytes/sec:
> 45196784.26 (45MB/sec) => WHY JUST 45MB/sec?
> 
> Since I ran those rbd benchmarks on ceph01, I guess the problem is not
> related to my backup rbd mount at all?
> 
> Thanks,
> Mario
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Christian BalzerNetwork/Systems Engineer
ch...@gol.com   Rakuten Mobile Inc.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Unexpected "out" OSD behaviour

2019-12-23 Thread Oliver Freyermuth
Dear Jonas,

I tried just now on a 14.2.5 cluster, and sadly, the unexpected behaviour is
still there,
i.e. an OSD marked "out" and then restarted is no longer considered as a data
source.
I also tried with a 13.2.8 OSD (in a cluster running 13.2.6 on the other OSDs,
MONs and MGRs), with the same effect.

However, the trick you described ("mark your OSD in and then out right away")
helps in both cases:
the data on the OSDs is considered as a data source again and any degradation
is gone.
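
For reference, that trick amounts to nothing more than the following two
commands back to back (OSD id 12 here is a placeholder):

ceph osd in 12
ceph osd out 12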

So while I think your patch should solve the issue, for some reason it does
not seem to be effective.

Cheers,
Oliver

Am 22.12.19 um 23:50 schrieb Oliver Freyermuth:
> Dear Jonas,
> 
> Am 22.12.19 um 23:40 schrieb Jonas Jelten:
>> hi!
>>
>> I've also noticed that behavior and have submitted a patch some time ago 
>> that should fix (2):
>> https://github.com/ceph/ceph/pull/27288
> 
> thanks, this does indeed seem very much like the issue I saw! 
> I'm luckily not in a critical situation at the moment, but was just wondering 
> if this behaviour was normal (since it does not fit well
> with the goal of ensuring maximum possible redundancy at all times). 
> 
> However, I observed this on 13.2.6, which - if I read the release notes 
> correctly - should already have your patch in. Strange. 
> 
>> But it may well be that there's more cases where PGs are not discovered on 
>> devices that do have them. Just recently a
>> lot of my data was degraded and then recreated even though it would have 
>> been available on a node that had taken very
>> long to reboot.
> 
> We've set "mon_osd_down_out_subtree_limit" to "host" to make sure recovery of 
> data from full hosts does not start without one of us admins
> telling Ceph to go ahead. Maybe this also helps in your case? 
> 
>> What you can do also is to mark your OSD in and then out right away, the 
>> data is discovered then. Although with my patch
>> that shouldn't be necessary any more. Hope this helps you.
> 
> I will keep this in mind the next time it happens (I may be able to provoke 
> it, we have to drain more nodes, and once the next node is almost-empty,
> I can just restart one of the "out" OSDs and see what happens). 
> 
> Cheers and many thanks,
>   Oliver
> 
>>
>> Cheers
>>   -- Jonas
>>
>>
>> On 22/12/2019 19.48, Oliver Freyermuth wrote:
>>> Dear Cephers,
>>>
>>> I realized the following behaviour only recently:
>>>
>>> 1. Marking an OSD "out" sets the weight to zero and allows to migrate data 
>>> away (as long as it is up),
>>>i.e. it is still considered as a "source" and nothing goes to degraded 
>>> state (so far, everything expected). 
>>> 2. Restarting an "out" OSD, however, means it will come back with "0 pgs", 
>>> and if data was not fully migrated away yet,
>>>it means the PGs which were still kept on it before will enter degraded 
>>> state since they now lack a copy / shard.
>>>
>>> Is (2) expected? 
>>>
>>> If so, my understanding that taking an OSD "out" to let the data be 
>>> migrated away without losing any redundancy is wrong,
>>> since redundancy will be lost as soon as the "out" OSD is restarted (e.g. 
>>> due to a crash, node reboot,...) and the only safe options would be:
>>> 1. Disable the automatic balancer. 
>>> 2. Either adjust the weights of the OSDs to drain to zero, or use pg upmap 
>>> to drain them. 
>>> 3. Reenable the automatic balancer only after having fully drained those 
>>> OSDs and performing the necessary intervention
>>>(in our case, recreating the OSDs with a faster blockdb). 
>>>
>>> Is this correct? 
>>>
>>> Cheers,
>>> Oliver
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
> 



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Slow rbd read performance

2019-12-23 Thread Ml Ml
Hohoho, Merry Christmas and hello,

I set up a "poor man's" Ceph cluster with 3 nodes, one switch and
normal standard HDDs.

My problem: with an rbd benchmark I get 190 MB/s write, but only
45 MB/s read speed.

Here is the Setup: https://i.ibb.co/QdYkBYG/ceph.jpg

I plan to implement a separate switch to separate public from cluster
network. But i think this is not my current problem here.

I mount the image via rbd from the backup server. It seems that I get
good write but slow read speed. More details at the end of the mail.

rados bench -p scbench 30 write --no-cleanup:
-
Total time run: 34.269336
...
Bandwidth (MB/sec): 162.945
Stddev Bandwidth:   198.818
Max bandwidth (MB/sec): 764
Min bandwidth (MB/sec): 0
Average IOPS:   40
Stddev IOPS:49
Max IOPS:   191
Min IOPS:   0
Average Latency(s): 0.387122
Stddev Latency(s):  1.24094
Max latency(s): 11.883
Min latency(s): 0.0161869
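
Since the write run used --no-cleanup, a matching read pass over the same
objects could be done with rados bench as well, which would make the write and
read numbers directly comparable (a suggestion, not part of the original test):

rados bench -p scbench 30 seq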


Here are the rbd benchmarks run on ceph01:
--
rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type write --io-size 8192
--io-threads 256 --io-total 10G --io-pattern seq
...
elapsed:56  ops:  1310720  ops/sec: 23295.63  bytes/sec:
190837820.82 (190MB/sec) => OKAY


rbd -p rbdbench bench $RBD_IMAGE_NAME --io-type read --io-size 8192
--io-threads 256 --io-total 10G --io-pattern seq
...
elapsed:   237  ops:  1310720  ops/sec:  5517.19  bytes/sec:
45196784.26 (45MB/sec) => WHY JUST 45MB/sec?

Since I ran those rbd benchmarks on ceph01, I guess the problem is not
related to my backup rbd mount at all?

Thanks,
Mario
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Bucket link tenanted to non-tenanted

2019-12-23 Thread Paul Emmerich
I think you need this pull request, https://github.com/ceph/ceph/pull/28813,
to do this; I don't think it was ever backported to any upstream release
branch.

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Mon, Dec 23, 2019 at 2:53 PM Marcelo Miziara  wrote:

> Hello all. I'm trying to link a bucket from a user that's not "tenanted"
> (user1) to a tenanted user (usert$usert), but I'm getting an error message.
>
> I'm using Luminous:
> # ceph version
> ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous
> (stable)
>
> The steps I took were:
> 1) create user1:
> # radosgw-admin user create --uid='user1' --display-name='user1'
>
> 2) create one bucket (bucket1) with the user1 credentials:
> # s3cmd -c .s3cfg mb s3://bucket1
> Bucket 's3://bucket1/' created
>
> 3) create usert$usert:
> # radosgw-admin user create --uid='usert$usert' --display-name='usert'
>
> 4) Tried to link:
> # radosgw-admin bucket link --uid="usert\$usert" --bucket=bucket1
> failure: (2) No such file or directory: 2019-12-23 10:20:49.078908
> 7f8f4f3d9dc0  0 could not get bucket info for bucket=bucket1
>
> Am I doing something wrong?
>
> Thanks in advance, Marcelo.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Bucket link tenanted to non-tenanted

2019-12-23 Thread Marcelo Miziara
Hello all. I'm trying to link a bucket from a user that's not "tenanted"
(user1) to a tenanted user (usert$usert), but I'm getting an error message.

I'm using Luminous:
# ceph version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous
(stable)

The steps I took were:
1) create user1:
# radosgw-admin user create --uid='user1' --display-name='user1'

2) create one bucket (bucket1) with the user1 credentials:
# s3cmd -c .s3cfg mb s3://bucket1
Bucket 's3://bucket1/' created

3) create usert$usert:
# radosgw-admin user create --uid='usert$usert' --display-name='usert'

4) Tried to link:
# radosgw-admin bucket link --uid="usert\$usert" --bucket=bucket1
failure: (2) No such file or directory: 2019-12-23 10:20:49.078908
7f8f4f3d9dc0  0 could not get bucket info for bucket=bucket1

Am I doing something wrong?

Thanks in advance, Marcelo.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Sum of bucket sizes doesn't match up to the cluster occupancy

2019-12-23 Thread Ronnie Puthukkeril
Hi,
We have a situation where the sum of the bucket sizes as observed from the 
bucket stats command is way less than the actual cluster usage.

Sum of bucket sizes = 11TB
Replication size = 2
Total expected cluster occupancy = 22TB
Actual cluster occupancy = 100TB

Any pointers on debugging this?

What we have checked:

  1.  We don't have a whole lot of small objects, which would take up more space
on the OSDs due to the min_alloc size for HDDs defaulting to 64 KB.
  2.  We tried to list orphan objects, but this seems to take forever.

Questions:

  1.  Is there a way to manually identify stale objects in the data pool, so
that these could subsequently be deleted directly at the pool level?
  2.  I guess the bucket index stores only the reference to the head object?
How do I find the associated tail objects for a given head object? (See the
sketch after this list.)
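
On question 2: a possible way to inspect the head-to-tail mapping, with
placeholder bucket, object and pool names (the exact tail-object naming varies
with how the object was uploaded):

# dump the object's manifest; its "manifest" section shows the tail placement
radosgw-admin object stat --bucket=mybucket --object=myobject
# tail RADOS objects in the data pool usually carry the bucket marker plus a
# "__shadow_" (or "__multipart_") infix, so they can be listed roughly like:
rados -p default.rgw.buckets.data ls | grep '__shadow_'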

Thanks,
Ronnie



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-mgr send zabbix data

2019-12-23 Thread 展荣臻(信泰)
Hi all:
I want to monitor my Luminous Ceph cluster with Zabbix. My Ceph runs in Docker.
I enabled the mgr zabbix module:

[root@ceph2 /]# ceph mgr module ls
{
"enabled_modules": [
"dashboard",
"restful",
"status",
"zabbix"
],
"disabled_modules": [
"balancer",
"influx",
"localpool",
"prometheus",
"selftest"
]
}


I configured the zabbix module according to https://docs.ceph.com/docs/luminous/mgr/zabbix/:

[root@ceph2 ceph]# ceph zabbix config-show
{"zabbix_port": 10051, "zabbix_host": "172.16.195.49", "identifier": "", 
"zabbix_sender": "/usr/bin/zabbix_sender", "interval": 10}


When sending data to Zabbix, an error occurs:

[root@ceph2 ceph]# ceph zabbix send
Failed to send data to Zabbix


I sent the data manually; that also fails:
[root@ceph2 ceph]# /usr/bin/zabbix_sender -z 172.16.195.49 -s 
ceph-5f23a710-ca98-44f6-a323-41d412256f4d -p 10051 -k ceph.total_bytes -o 30 -vv
zabbix_sender [1200]: DEBUG: answer [{"response":"success","info":"processed: 
0; failed: 1; total: 1; seconds spent: 0.33"}]
info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.33"
sent: 1; skipped: 0; total: 1


messages in ceph-mgr log:

Dec 23 11:01:12 ceph2.novalocal docker[21930]: 2019-12-23 11:01:12.758460 
7fa750c57700 20 mgr update loading 0 new types, 0 old types, had 211 types, got 
534 bytes of data
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.083029 
7fa74e452700 20 mgr[zabbix] Waking up for new iteration
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.084548 
7fa74e452700  4 mgr[zabbix] Sending data to Zabbix server 172.16.195.49 as 
host/identifier ceph-5f23a710-ca98-44f6-a323-41d412256f4d
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.084667 
7fa74e452700 20 mgr[zabbix] {'rd_bytes': 4114432L, 'total_bytes': 
321958871040L, 'overall_status_int': 0, 'osd_latency_apply_min': 0L, 
'osd_latency_apply_max': 0L, 'num_osd_up': 6, 'osd_max_fill': 
0.21190729542446343, 'osd_backfillfull_ratio': 0.899761581421, 
'osd_latency_commit_max': 0L, 'wr_bytes': 14336L, 'osd_latency_commit_min': 0L, 
'num_mon': 3, 'osd_min_fill': 0.21118976775003168, 'total_used_bytes': 
681099264L, 'wr_ops': 2816L, 'osd_nearfull_ratio': 0.850238418579, 
'osd_latency_apply_avg': 0.0, 'overall_status': u'HEALTH_OK', 'num_pg': 80L, 
'osd_latency_commit_avg': 0.0, 'osd_avg_fill': 0.21154853158724754, 'num_osd': 
6, 'osd_full_ratio': 0.94988079071, 'total_objects': 210L, 'num_pools': 10, 
'num_osd_in': 6, 'num_pg_temp': 0, 'rd_ops': 4215L, 'total_avail_bytes': 
32121776L}
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.100752 
7fa74e452700  0 mgr[zabbix] Exception when sending: /usr/bin/zabbix_sender 
exited non-zero: zabbix_sender [1020]: DEBUG: answer 
[{"response":"success","info":"processed: 0; failed: 29; total: 29; seconds 
spent: 0.000311"}]
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.100944 
7fa74e452700 20 mgr[zabbix] Sleeping for 10 seconds
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.859475 
7fa76444a700 10 mgr tick tick
Dec 23 11:01:13 ceph2.novalocal docker[21930]: 2019-12-23 11:01:13.859504 
7fa76444a700  1 mgr send_beacon active

Where is this going wrong?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How can I stop this logging?

2019-12-23 Thread Marc Roos



 384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.4 
KiB/s rd, 573 KiB/s wr, 20 op/s
Dec 23 11:58:25 c02 ceph-mgr: 2019-12-23 11:58:25.194 7f7d3a2f8700  0 
log_channel(cluster) log [DBG] : pgmap v411196: 384 pgs: 384 
active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.3 
KiB/s rd, 521 KiB/s wr, 20 op/s
Dec 23 11:58:27 c02 ceph-mgr: 2019-12-23 11:58:27.196 7f7d3a2f8700  0 
log_channel(cluster) log [DBG] : pgmap v411197: 384 pgs: 384 
active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.4 
KiB/s rd, 237 KiB/s wr, 19 op/s
Dec 23 11:58:29 c02 ceph-mgr: 2019-12-23 11:58:29.197 7f7d3a2f8700  0 
log_channel(cluster) log [DBG] : pgmap v411198: 384 pgs: 384 
active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.2 
KiB/s rd, 254 KiB/s wr, 17 op/s
Dec 23 11:58:31 c02 ceph-mgr: 2019-12-23 11:58:31.199 7f7d3a2f8700  0 
log_channel(cluster) log [DBG] : pgmap v411199: 384 pgs: 384 
active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 2.9 
KiB/s rd, 258 KiB/s wr, 17 op/s
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com