Good morning,
I am using version 17 but cannot get S3 object lock get-object-retention working:
$ aws --profile=user01 --endpoint-url=http://x.x.x.x:80 s3api
list-object-versions --bucket demo-compliance
{
"ETag": "\"b8d6acf7d330d241a8ef851694365b94\"",
"Size": 32,
"St
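For what it's worth, retention is stored per object version, and the bucket must have been created with object lock enabled; a minimal sketch against the same endpoint (object key and version ID are placeholders):

```shell
# Confirm object lock is actually enabled on the bucket:
aws --profile=user01 --endpoint-url=http://x.x.x.x:80 s3api \
    get-object-lock-configuration --bucket demo-compliance

# Retention must be queried for a specific object version:
aws --profile=user01 --endpoint-url=http://x.x.x.x:80 s3api \
    get-object-retention --bucket demo-compliance \
    --key <object-key> --version-id <version-id>
```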
Hi,
could you share your storage specs?
Quoting Oğuz Yarımtepe:
Hi,
I have a cluster with rook operator running the ceph version 1.6 and
upgraded first rook operator and then the ceph cluster definition.
Everything was fine, every component except from osds are upgraded. Below
is the reaso
Hello, I have an issue with my multisite configuration.
pacific 16.2.9
My problem:
I get a permission denied error on the master zone when I use the command below.
$ radosgw-admin sync status
realm 8df19226-a200-48fa-bd43-1491d32c636c (myrealm)
zonegroup 29592d75-224d-49b6-bc36-2703efa4f67f
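A hedged guess, in case it helps: sync status reads the period and zone configuration using the zone's system user keys, so a permission denied there often means those keys are missing from the zone definition. A sketch of how to check and fix that (zone and user names are placeholders):

```shell
# Check whether the zone carries the system user's keys (system_key section):
radosgw-admin zone get --rgw-zone=<master-zone>

# If the keys are empty, copy them from the system user and commit the period:
radosgw-admin user info --uid=<system-user>
radosgw-admin zone modify --rgw-zone=<master-zone> \
    --access-key=<access-key> --secret=<secret-key>
radosgw-admin period update --commit
```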
Hi,
I am trying to deploy Ceph Quincy using ceph-ansible on Rocky9. I am having
some problems and I don't know where to look for the cause.
PS : I did the same deployment on Rocky8 using ceph-ansible for the Pacific
version on the same hardware and it worked perfectly.
I have 03 controllers n
Hi, we are currently testing LRC codes and I have a cluster set up with 3 racks
and 4 hosts in each of those. What I want to achieve is to have a storage
efficient erasure code (<=200%) and also availability during a rack outage. In
(my) theory, that should have worked with the LRC k6m3l3 having a
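For reference, a sketch of how such a profile might be created (profile and pool names are made up): with k=6, m=3, l=3 the LRC plugin adds (k+m)/l = 3 locality chunks, so 12 chunks store 6 data chunks, i.e. exactly 200% raw usage.

```shell
# Hypothetical LRC profile: 6 data + 3 coding + 3 locality chunks = 12 total.
ceph osd erasure-code-profile set lrc-k6m3l3 \
    plugin=lrc k=6 m=3 l=3 \
    crush-failure-domain=host crush-locality=rack
ceph osd pool create lrcpool erasure lrc-k6m3l3
```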
Hi Claas,
which type of SSD are you using? If these are enterprise-grade NVMe SSDs, there
is a good chance they support multiple namespaces. In that case, I would
suggest creating 4 namespaces per SSD (you might consider more, depending on
your load, available CPU cores and type of SSD) and de
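As a rough sketch with nvme-cli (device name, sizes and controller ID are placeholders; check the controller's limits first):

```shell
# How many namespaces does this controller support?
nvme id-ctrl /dev/nvme0 | grep -i '^nn'

# Create and attach one namespace; repeat with the capacity split four ways.
nvme create-ns /dev/nvme0 --nsze=<blocks> --ncap=<blocks> --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<ctrl-id>
nvme reset /dev/nvme0
```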
I have an orchestrated (cephadm) Ceph cluster (16.2.11) with 2 radosgw services
on 2 separate hosts without HA (i.e. no ingress/haproxy in front). Both of the
rgw servers use SSL and have a properly signed certificate. We can access them
with standard S3 tools like s3cmd, cyberduck, etc.
The
Good evening everyone.
I'm having trouble with rbd-mirror.
In a test environment I have the following scenario:
DC1:
public_network: 172.20.0.0/24, 192.168.0.0/24
--mon-ip 172.20.0.1
ip: 192.168.0.1
DC2:
public_network: 172.21.0.0/24, 192.168.0.0/24
--mon-ip 172.21.0.1
ip 192.168.0.2
If I add the
I have a large number of misplaced objects, and I already have all the OSD
settings at “1”:
sudo ceph tell osd.\* injectargs '--osd_max_backfills=1
--osd_recovery_max_active=1 --osd_recovery_op_priority=1'
How can I slow it down even more? The cluster is too large, it’s impacting
other network t
How about sleep:
ceph tell osd.\* injectargs '--osd_recovery_sleep 0.5'
You can raise this value (in seconds) further to throttle recovery down to an
impact level you can live with.
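If you prefer a setting that survives OSD restarts, the same throttle can also be set in the config database (a sketch; the values are just examples):

```shell
ceph config set osd osd_recovery_sleep 0.5
# Device-class specific variants also exist:
ceph config set osd osd_recovery_sleep_hdd 0.5
ceph config set osd osd_recovery_sleep_ssd 0.1
```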
--
Alex Gorbachev
ISS/Storcium
On Wed, Mar 8, 2023 at 6:56 PM Rice, Christian wrote:
> I have a large number of misplaced objects, and I have
On 08.03.23 13:22, wodel youchi wrote:
I am trying to deploy Ceph Quincy using ceph-ansible on Rocky9. I am having
some problems and I don't know where to look for the cause.
The README.rst of the ceph-ansible project on
https://github.com/ceph/ceph-ansible encourages you to move to cephadm
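If you do try cephadm instead, the bootstrap is short (a sketch; host names and IPs are placeholders, and it assumes the cephadm package is already installed on the first controller):

```shell
cephadm bootstrap --mon-ip <first-controller-ip>
# Then add the remaining controllers/hosts to the orchestrator:
ceph orch host add <host2> <host2-ip>
ceph orch host add <host3> <host3-ip>
```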