[ceph-users] s3 lock api get-object-retention

2023-03-08 Thread garcetto
Good morning, I am using version 17 (Quincy) but cannot get S3 object lock get-object-retention working: $ aws --profile=user01 --endpoint-url=http://x.x.x.x:80 s3api list-object-versions --bucket demo-compliance { "ETag": "\"b8d6acf7d330d241a8ef851694365b94\"", "Size": 32, "St
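
For reference, a minimal sketch of the calls involved, using the bucket from the thread and placeholder object key/version ID; the bucket needs to have been created with object lock enabled, and a retention must first be set on that object version or the get call will typically return an error:

$ aws --profile=user01 --endpoint-url=http://x.x.x.x:80 s3api put-object-retention \
      --bucket demo-compliance --key <object-key> --version-id <version-id> \
      --retention '{"Mode":"COMPLIANCE","RetainUntilDate":"2024-01-01T00:00:00Z"}'
$ aws --profile=user01 --endpoint-url=http://x.x.x.x:80 s3api get-object-retention \
      --bucket demo-compliance --key <object-key> --version-id <version-id>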

[ceph-users] Re: Upgrade problem from 1.6 to 1.7

2023-03-08 Thread Eugen Block
Hi, could you share your storage specs? Quoting Oğuz Yarımtepe: Hi, I have a cluster with the Rook operator running version 1.6 and upgraded first the Rook operator and then the Ceph cluster definition. Everything was fine; every component except the OSDs was upgraded. Below is the reaso
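
If the cluster is managed by Rook, something like the following should print the storage section of the CephCluster CR and show which OSD pods are still on the old image (namespace and cluster name are the Rook defaults; adjust if yours differ):

$ kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.spec.storage}'
$ kubectl -n rook-ceph get pods -l app=rook-ceph-osd \
      -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image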

[ceph-users] user and bucket not sync ( permission denied )

2023-03-08 Thread Guillaume Morin
Hello, I have an issue with my multisite configuration (Pacific 16.2.9). My problem: I get a permission denied error on the master zone when I use the command below. $ radosgw-admin sync status realm 8df19226-a200-48fa-bd43-1491d32c636c (myrealm) zonegroup 29592d75-224d-49b6-bc36-2703efa4f67f
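
A permission denied during multisite sync often comes down to the system user's keys not matching the system_key stored in the zone configuration. A hedged way to compare them (zone, endpoint and user names below are placeholders):

$ radosgw-admin zone get --rgw-zone=<secondary-zone> | grep -A 2 system_key
$ radosgw-admin user info --uid=<sync-user> | grep -A 4 '"keys"'
$ radosgw-admin period pull --url=http://<master-endpoint> \
      --access-key=<system-access-key> --secret=<system-secret>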

[ceph-users] Error deploying Ceph Quincy using ceph-ansible 7 on Rocky 9

2023-03-08 Thread wodel youchi
Hi, I am trying to deploy Ceph Quincy using ceph-ansible on Rocky 9. I am having some problems and I don't know where to look for the cause. PS: I did the same deployment on Rocky 8 using ceph-ansible for the Pacific version on the same hardware and it worked perfectly. I have 3 controllers n
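
A rough sketch of a containerized Quincy deployment with ceph-ansible, for comparison; stable-7.0 is the branch that targets Quincy, the inventory path is a placeholder, and running the playbook with -vv usually narrows down where it fails:

$ git clone -b stable-7.0 https://github.com/ceph/ceph-ansible.git
$ cd ceph-ansible && pip install -r requirements.txt
$ cp site-container.yml.sample site-container.yml
$ ansible-playbook -i <inventory> site-container.yml -vv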

[ceph-users] LRC k6m3l3, rack outage and availability

2023-03-08 Thread steve . bakerx1
Hi, we are currently testing LRC codes and I have a cluster set up with 3 racks and 4 hosts in each of them. What I want to achieve is a storage-efficient erasure code (<=200%) and availability during a rack outage. In (my) theory, that should have worked with the LRC k6m3l3 having a
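
A sketch of how such a profile could be defined: k/m/l match the values above, while crush-locality=rack and crush-failure-domain=host are assumptions about the intended placement (one local group of 4 chunks per rack across the 3 racks):

$ ceph osd erasure-code-profile set lrc_k6m3l3 \
      plugin=lrc k=6 m=3 l=3 \
      crush-locality=rack crush-failure-domain=host
$ ceph osd pool create lrc-test erasure lrc_k6m3l3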

[ceph-users] Re: Problem with cephadm and deploying 4 ODSs on nvme Storage

2023-03-08 Thread Gregor Radtke
Hi Claas, which type of SSD are you using? If these are enterprise-grade NVMe SSDs, there is a good chance they support multiple namespaces. In that case, I would suggest creating 4 namespaces per SSD (you might consider more, depending on your load, available CPU cores and type of SSD) and de
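
With nvme-cli, splitting a drive into namespaces might look like the sketch below; the sizes are placeholders in blocks, and whether delete/create are allowed depends on the drive's namespace-management support:

$ nvme id-ctrl /dev/nvme0 | grep -i '^nn '     # maximum number of namespaces supported
$ nvme delete-ns /dev/nvme0 --namespace-id=1   # drop the existing full-size namespace
$ nvme create-ns /dev/nvme0 --nsze=<blocks> --ncap=<blocks> --flbas=0
$ nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<ctrl-id>
$ nvme reset /dev/nvme0                        # namespaces then appear as /dev/nvme0n1 ... /dev/nvme0n4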

[ceph-users] Dashboard for Object Servers using wrong hostname

2023-03-08 Thread Wyll Ingersoll
I have an orchestrated (cephadm) Ceph cluster (16.2.11) with 2 radosgw services on 2 separate hosts without HA (i.e. no ingress/haproxy in front). Both of the RGW servers use SSL and have a properly signed certificate. We can access them with standard S3 tools like s3cmd, Cyberduck, etc. The

[ceph-users] Difficulty with rbd-mirror on different networks.

2023-03-08 Thread Murilo Morais
Good evening everyone. I'm having trouble with rbd-mirror. In a test environment I have the following scenario: DC1: public_network: 172.20.0.0/24, 192.168.0.0/24 --mon-ip 172.20.0.1 ip: 192.168.0.1 DC2: public_network: 172.21.0.0/24, 192.168.0.0/24 --mon-ip 172.21.0.1 ip: 192.168.0.2 If I add the
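
For context, the peer relationship itself is usually set up with the bootstrap commands below (pool and site names are placeholders; the create runs on DC1, the import on DC2). Each rbd-mirror daemon must be able to reach the other cluster's public_network, which is where a mismatch between 172.20/172.21 and the shared 192.168.0.0/24 can bite:

$ rbd mirror pool enable <pool> image                                              # on both clusters
$ rbd mirror pool peer bootstrap create --site-name dc1 <pool> > /tmp/dc1.token    # on DC1
$ rbd mirror pool peer bootstrap import --site-name dc2 <pool> /tmp/dc1.token      # on DC2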

[ceph-users] Trying to throttle global backfill

2023-03-08 Thread Rice, Christian
I have a large number of misplaced objects, and I already have all the relevant OSD settings at “1”: sudo ceph tell osd.\* injectargs '--osd_max_backfills=1 --osd_recovery_max_active=1 --osd_recovery_op_priority=1' How can I slow it down even more? The cluster is too large, it’s impacting other network t

[ceph-users] Re: Trying to throttle global backfill

2023-03-08 Thread Alex Gorbachev
How about sleep: ceph tell osd.* injectargs '--osd-recovery-sleep 0.5' You can raise this number (in seconds) to obtain your desired throughput. -- Alex Gorbachev ISS/Storcium
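
There are also device-class-specific variants of the sleep option, and the value can be persisted with ceph config (note that, as far as I know, the mclock scheduler in newer releases ignores the recovery sleep settings):

$ ceph tell osd.\* injectargs '--osd_recovery_sleep_hdd 0.5'
$ ceph tell osd.\* injectargs '--osd_recovery_sleep_ssd 0.1'
$ ceph config set osd osd_recovery_sleep_hybrid 0.3    # persists across OSD restarts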

[ceph-users] Re: Error deploying Ceph Quincy using ceph-ansible 7 on Rocky 9

2023-03-08 Thread Robert Sander
On 08.03.23 13:22, wodel youchi wrote: I am trying to deploy Ceph Quincy using ceph-ansible on Rocky9. I am having some problems and I don't know where to search for the reason. The README.rst of the ceph-ansible project on https://github.com/ceph/ceph-ansible encourages you to move to cephadm
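
A minimal cephadm sketch for comparison, assuming the cephadm package from the Quincy repo is installed and using placeholder hostnames/IPs:

$ cephadm bootstrap --mon-ip <controller1-ip>
$ ceph orch host add controller2 <controller2-ip>
$ ceph orch host add controller3 <controller3-ip>
$ ceph orch apply osd --all-available-devices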