same, but with function name
"_set_alloc_sizes":
https://github.com/ceph/ceph/blob/v15.2.17/src/os/bluestore/BlueStore.cc#L5220-L5226,
although "dout(10)" probably means it only outputs this at a higher debug
level.
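If you want to see that message, the BlueStore debug level can be raised on a running OSD with standard ceph commands (a sketch; `osd.0` and the log path are placeholders for your cluster):

```shell
# Raise BlueStore debug logging to level 10 on one OSD (osd.0 is a placeholder)
ceph tell osd.0 config set debug_bluestore 10

# Reproduce the situation, then look for the _set_alloc_sizes output
grep min_alloc_size /var/log/ceph/ceph-osd.0.log

# Restore the default level afterwards
ceph tell osd.0 config set debug_bluestore 1/5
```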
Best regar
As a data point: we've been running Octopus (solely for CephFS) on
Ubuntu 20.04 with 5.4.0(-122) for some time now, with packages from
download.ceph.com.
On 11/05/2023 07.12, Szabo, Istvan (Agoda) wrote:
I can answer my question, even in the official ubuntu repo they are using by
default the
dacted.local",
"interval": 60,
"log_level": "",
"log_to_cluster": false,
"log_to_cluster_level": "info",
"log_to_file": false,
"zabbix_host": "zabbix.redacted.local",
"zabbix_port": 10051,
"zabbix_sender": "/usr/bin/zabbix_sender"
}
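For comparison, a quick way to sanity-check those module settings is to parse the JSON that `ceph zabbix config-show` prints. A minimal sketch; the field names follow the output quoted above, and the hostnames are the redacted placeholders from this thread:

```python
import json
import subprocess

def zabbix_config(raw=None):
    """Return the mgr zabbix module settings as a dict.

    If raw is None, shell out to `ceph zabbix config-show`;
    otherwise parse the given JSON string (handy for offline checks).
    """
    if raw is None:
        raw = subprocess.check_output(["ceph", "zabbix", "config-show"])
    return json.loads(raw)

# Offline example using the settings quoted above (hostnames redacted)
sample = """{
    "identifier": "redacted.local",
    "interval": 60,
    "log_level": "",
    "log_to_cluster": false,
    "log_to_cluster_level": "info",
    "log_to_file": false,
    "zabbix_host": "zabbix.redacted.local",
    "zabbix_port": 10051,
    "zabbix_sender": "/usr/bin/zabbix_sender"
}"""
cfg = zabbix_config(sample)
assert cfg["zabbix_port"] == 10051
assert cfg["zabbix_sender"] == "/usr/bin/zabbix_sender"
```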
Hope this helps.
Best regards,
Gerdriaan Mulder
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
rue to false in 16.2.8).
Best regards,
Gerdriaan Mulder
On 01/09/2022 14.39, Kamil Madac wrote:
Hi Ceph Community
One of my customers has an issue with the MDS cluster. The Ceph cluster is
deployed with cephadm and runs version 16.2.7. As soon as the MDS is switched
from Active-Standby to Active-Active-S
$ s3cmd info s3://bucket/kanariepiet.jpg
[snip]
Last mod: Tue, 10 Dec 2019 08:09:58 GMT
Storage: STANDARD
[snip]
$ s3cmd info s3://bucket/darthvader.png
[snip]
Last mod: Wed, 04 Dec 2019 10:35:14 GMT
Storage: SPINNING_RUST
[snip]
$ s3cmd info s3://bucket/2019-10-15-090436_1254x522_scrubbed.png
[snip]
Last mod: Tue, 10 Dec 2019 10:33:24 GMT
Storage: STANDARD
[snip]
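For reference, the storage class an object lands in is the one requested at upload time; with s3cmd that can be set explicitly. A sketch with placeholder bucket/object names, reusing the SPINNING_RUST class seen above:

```shell
# Upload requesting a non-default storage class (names are placeholders)
s3cmd put darthvader.png s3://bucket/darthvader.png --storage-class=SPINNING_RUST

# Verify which storage class the object was stored under
s3cmd info s3://bucket/darthvader.png | grep Storage
```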
==
Any thoughts on what might occur here?
Best regards,
Gerdriaan Mulder
-hdd. This suggests that I made an error in configuring the
storage_class->pool association.
My hunch is that the zone(group) placement is incorrect, but I can't
really find clear documentation on that subject.
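The zonegroup/zone placement configuration can be inspected with radosgw-admin; comparing the two views usually shows whether a storage_class->pool mapping was picked up (standard subcommands; the pool names in the output are cluster-specific):

```shell
# List the placement targets and storage classes defined in the zonegroup
radosgw-admin zonegroup placement list

# List the per-zone mapping of each storage class to its data pool
radosgw-admin zone placement list
```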
Any thoughts on that?
Best regards,
Gerdriaan Mulder
On T
jects, which have funky names with _shadow_
in them, and it's these objects that you see placed correctly in the
tier2-hdd pool.
Thanks for the explanation. It seems the documentation on Nautilus is
somewhat lacking on these particular intricacies.
Best regards,
Gerdriaan Mulder