Hi Anthony,
yes, we are using replication; the lost space is calculated before it is
replicated.
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
    hdd       1.1 PiB     191 TiB     968 TiB     968 TiB         83.55
    TOTAL     1.1 PiB     191 TiB     968 TiB     968 TiB         83.55
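(As a rough sanity check, assuming the RGW data sits in a replicated pool
with size=3; the pool name below is the usual default and may differ on
your cluster:)

  # replication factor of the RGW data pool
  ceph osd pool get default.rgw.buckets.data size
  # logical (pre-replication) data: 968 TiB raw used / 3 replicas ~= 323 TiB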
Hi,
we still have the problem that our RGW eats more disk space than it
should. Summing up the "size_kb_actual" of all buckets shows only half
of the used disk space.
There are 312 TiB stored according to "ceph df", but we only need around 158 TB.
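(For reference, the sum can be reproduced roughly like this; this assumes
jq is installed and that per-bucket usage sits under the "rgw.main" key,
as it does on our cluster:)

  # sum size_kb_actual over all buckets and report the total in TiB
  radosgw-admin bucket stats --format=json \
      | jq '[.[].usage["rgw.main"].size_kb_actual // 0] | add / 1024 / 1024 / 1024'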
I've already written to this ML about the problem, but
Hi all,
I've installed the latest Pacific version, 16.2.1, using Cephadm. I'm
trying to use multiple public networks with this setting:
ceph config set mon public_network "100.90.1.0/24,100.90.2.0/24"
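To see what actually took effect, I checked the stored option and the
generated config on a daemon host (standard config commands, nothing
exotic):

  ceph config get mon public_network
  grep public_network /etc/ceph/ceph.conf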
The networks seem to be successfully passed to /etc/ceph/ceph.conf on
the daemons; however, I
Hello!
I'm working with OpenStack Wallaby (1 controller, 2 compute nodes)
connected to a Ceph Pacific cluster in a devel environment.
With OpenStack Victoria and Ceph Pacific (before last Friday's update),
everything was running like a charm.
Then I upgraded OpenStack to Wallaby and Ceph to
A DocuBetter meeting is scheduled for later this week at 11AM AEST
Thursday, which is 6PM PDT Wednesday. This meeting is not much attended,
though, so unless I get responses to this email thread, I'm not going to
hold it.
This email is a sincere request for documentation complaints. If anything
Hi Amit,
Both clusters have a lot of recovering shards. Actually, I do not know
whether that is normal or not.
rgw_num_rados_handles is at its default value; I have not touched this
parameter. Do I need to increase it?
Thanks
Amit Ghadge wrote on Mon, Apr 26, 2021, at 10:42 PM:
> Both clusters show sync status
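(In case it helps, this is what I run on each side to look at the sync
state; the zone and shard id below are placeholders:)

  radosgw-admin sync status
  # per-shard detail for data sync from a given peer zone
  radosgw-admin data sync status --source-zone=<zone> --shard-id=<id>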
Hi
I have a Ceph cluster running Nautilus. The Ceph services are hosted on
CentOS 7 servers.
Right now I have:
- 3 servers, each one running MON+MGR
- 10 servers running OSDs
- 2 servers running RGW
I need to update this cluster to CentOS 8 (actually CentOS Stream 8) and
Pacific.
What is the
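(For reference, the rolling order I assume applies here, sketched with
the standard commands; the Pacific release notes are the authority, and
as far as I know Pacific packages are only built for el8, so the OS move
has to happen first or together with the Ceph upgrade:)

  ceph osd set noout                      # keep CRUSH from rebalancing during restarts
  # reinstall each MON+MGR host on CentOS 8 + Pacific, one at a time,
  # then the OSD hosts, then the RGW hosts
  ceph versions                           # confirm every daemon reports 16.2.x
  ceph osd require-osd-release pacific    # once all OSDs run Pacific
  ceph osd unset noout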