[ceph-users] Re: how to handle rgw leaked data (aka data that is not available via buckets but eats diskspace)

2021-04-26 Thread Boris Behrens
Hi Anthony, yes, we are using replication; the lost space is calculated before it is replicated.

RAW STORAGE:
    CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
    hdd    1.1 PiB  191 TiB  968 TiB  968 TiB   83.55
    TOTAL  1.1 PiB  191 TiB  968
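A quick cross-check, assuming 3x replication (the pool size is not stated in this excerpt): 968 TiB raw used / 3 ≈ 323 TiB of logical data, which is roughly the ~312 TiB "stored" figure in the original post below, so the gap is between what the buckets account for and what the pool actually holds, not replication overhead. The replication factor of the RGW data pool can be confirmed with (the pool name here is the usual default and may differ on this cluster):

    ceph df detail
    ceph osd pool get default.rgw.buckets.data size   # replication factor of the RGW data pool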

[ceph-users] how to handle rgw leaked data (aka data that is not available via buckets but eats diskspace)

2021-04-26 Thread Boris Behrens
Hi, we still have the problem that our rgw eats more disk space than it should. Summing up the "size_kb_actual" of all buckets shows only half of the used disk space. There are 312 TiB stored according to "ceph df", but we only need around 158 TB. I've already written to this ML about the problem, but
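For reference, one way to produce that per-bucket sum is to dump the stats once and add up size_kb_actual with jq (a minimal sketch; jq and the exact usage categories are assumptions on my part, not something stated in the thread):

    # sum size_kb_actual over every bucket and every usage category, reported in TiB
    radosgw-admin bucket stats > /tmp/bucket-stats.json
    jq '[.[].usage[]?.size_kb_actual // 0] | add / 1024 / 1024 / 1024' /tmp/bucket-stats.json

Comparing that figure with the STORED column of "ceph df" for the RGW data pool narrows the question down to RADOS objects that no bucket index references; depending on the release, rgw-orphan-list (or the older radosgw-admin orphans find) is the usual tool for hunting those down.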

[ceph-users] Cephadm multiple public networks

2021-04-26 Thread Stanislav Datskevych
Hi all, I've installed the latest Pacific version 16.2.1 using Cephadm. I'm trying to use multiple public networks with this setting: ceph config set mon public_network "100.90.1.0/24,100.90.2.0/24" The networks seem to be passed successfully to /etc/ceph/ceph.conf on the daemons; however, I
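For what it's worth, a quick way to see what the cluster actually stored and which addresses the monitors bound to (standard commands, nothing specific to this report):

    ceph config get mon public_network        # value stored in the cluster config database
    ceph config dump | grep public_network    # any per-daemon overrides
    ceph mon dump                             # addresses the monitors are actually using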

[ceph-users] Ceph Pacific and Openstack Wallaby - ERROR cinder.scheduler.flows.create_volume

2021-04-26 Thread Tecnología CHARNE . NET
Hello! I'm working with Openstack Wallaby (1 controller, 2 compute nodes) connected to a Ceph Pacific cluster in a devel environment. With Openstack Victoria and Ceph Pacific (before last Friday's update) everything was running like a charm. Then I upgraded Openstack to Wallaby and Ceph to
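One routine thing to rule out after such an upgrade is the Cinder client's cephx caps; the check below follows the standard Ceph/OpenStack documentation, and the client name and pool names (cinder, volumes, vms, images) are the documented defaults rather than anything taken from this report:

    ceph auth get client.cinder
    # expected along the lines of:
    #   mon 'profile rbd'
    #   osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'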

[ceph-users] DocuBetter Meeting 1AM UTC Thursday, April 29 2021

2021-04-26 Thread John Zachary Dover
A DocuBetter meeting is scheduled for later this week at 11AM AEST Thursday, which is 6PM PDT Wednesday. This meeting is not well attended, though, so unless I get responses to this email thread, I'm not going to hold it. This email is a sincere request for documentation complaints. If anything

[ceph-users] Re: [Suspicious newsletter] RGW: Multiple Site does not sync olds data

2021-04-26 Thread 特木勒
Hi Amit, both clusters have a lot of recovering shards. Actually, I do not know whether that is normal or not. rgw_rados_hander is at its default value; I have not touched this parameter. Do I need to increase it? Thanks. Amit Ghadge wrote on Mon, Apr 26, 2021 at 10:42 PM: > Both clusters show sync status
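For readers following along, the recovering-shard state is usually inspected with the standard multisite sync commands (the zone and bucket names below are placeholders, not values from this thread):

    radosgw-admin sync status
    radosgw-admin data sync status --source-zone=<source-zone>
    radosgw-admin bucket sync status --bucket=<bucket>
    radosgw-admin sync error list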

[ceph-users] Updating a CentOS7-Nautilus cluster to CentOS8-Pacific

2021-04-26 Thread Massimo Sgaravatto
Hi, I have a Ceph cluster running Nautilus. The Ceph services are hosted on CentOS 7 servers. Right now I have:
- 3 servers, each running MON+MGR
- 10 servers running OSDs
- 2 servers running RGW
I need to upgrade this cluster to CentOS 8 (actually CentOS Stream 8) and Pacific. What is the
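As a rough outline of the host-by-host path that is usually suggested (a sketch only; the exact order, and whether to move to cephadm at the same time, are precisely what this thread is asking about), following the documented daemon order of mons, then mgrs, OSDs, and finally RGWs:

    ceph versions            # confirm everything reports Nautilus before starting
    ceph osd set noout       # before taking a host down for the OS reinstall
    # ... reinstall the host with CentOS Stream 8 and redeploy its daemons on Pacific ...
    ceph osd unset noout
    ceph -s                  # wait for HEALTH_OK before moving to the next host
    ceph versions            # at the end, confirm all daemons report Pacific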