[ceph-users] Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-25 Thread Jorge JP
Hello. After a deep-scrub my cluster shows this error: HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy: 2/77158878 objects degraded (0.000%), 1 pg degraded [WRN] OBJECT_UNFOUND: 1/38578006 obj
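A minimal sketch of the usual triage steps for this health state, assuming a placeholder PG id of `2.1a` (the real id comes from `ceph health detail`); `mark_unfound_lost` is destructive and is only a last resort after recovery sources are exhausted:

```shell
# Identify the affected PGs and the unfound object
ceph health detail
ceph pg 2.1a list_unfound          # hypothetical PG id; shows which object is unfound
ceph pg 2.1a query                 # check "might_have_unfound" for OSDs still being probed

# For the inconsistent PG found by scrub, ask Ceph to repair it
ceph pg repair 2.1a

# Last resort only, after all candidate OSDs have been probed:
# revert the unfound object to a previous version (or delete it)
ceph pg 2.1a mark_unfound_lost revert
```

Whether `revert` or `delete` is appropriate depends on whether an older copy of the object is acceptable for the workload.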

[ceph-users] copy file in nfs over cephfs error "error: error in file IO (code 11)"

2023-06-25 Thread farhad kh
hi everybody, we have a problem with the NFS Ganesha load balancer. When we use `rsync -rav` to copy files from another share to a Ceph NFS share path, we get this error: `rsync -rav /mnt/elasticsearch/newLogCluster/acr-202* /archive/Elastic-v7-archive` rsync: close failed on "/archive/Elastic-v7-archive/"
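Note that "(code 11)" here is rsync's own exit code for "error in file IO", not an OS errno. When a message does carry an underlying errno, Python's standard library can decode it; as an illustration, errno 28 (a common cause of `close failed` on a full filesystem) decodes as follows:

```python
import errno
import os

# Map a numeric errno to its symbolic name and human-readable message.
# errno 28 is used purely as an example of a typical close()-time failure.
code = 28
print(errno.errorcode[code])   # symbolic name, e.g. ENOSPC
print(os.strerror(code))       # OS-provided description
```

Checking free space and quota on the NFS export (`df -h` on the mount) is a cheap first step before digging into Ganesha itself.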

[ceph-users] Re: alerts in dashboard

2023-06-25 Thread Ben
attached screenshot was filtered out. Here it is partially:
name | Severity | Group | Duration | Summary
CephadmDaemonFailed | critical | cephadm | 30 seconds | A ceph daemon managed by cephadm is down
CephadmPaused | warning | cephadm | 1 minute | Orchestration tasks via cephadm are PAUSED
CephadmUpgradeFailed | criti

[ceph-users] Re: radosgw hang under pressure

2023-06-25 Thread Szabo, Istvan (Agoda)
Hi, Can you check the read and write latency of your OSDs? Maybe it hangs because it's waiting for PGs, but maybe the PGs are under scrub or something else. Also, with many small objects, don't rely on the PG autoscaler; it might not tell you to increase pg_num even when it should. Istvan Szabo Staff Infra
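The checks suggested above can be run with standard Ceph CLI commands; a short sketch (output columns vary slightly between releases):

```shell
# Per-OSD commit/apply latency in ms; consistently high values point at slow devices
ceph osd perf

# List PGs and filter for ones currently scrubbing or deep-scrubbing
ceph pg ls | grep -i scrub

# Autoscaler's current view of pg_num vs. its target, per pool,
# to sanity-check whether it is undersizing a small-object pool
ceph osd pool autoscale-status
```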

[ceph-users] Changing bucket owner in a multi-zonegroup Ceph cluster

2023-06-25 Thread Ramin Najjarbashi
Hi all, I have a Ceph cluster consisting of two zonegroups with metadata syncing enabled. I need to change the owner of a bucket that is located in the secondary zonegroup. I followed the steps below. Unlinked the bucket from the old user on the secondary zonegroup: $ radosgw-admin
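A sketch of the usual ownership-change sequence, with hypothetical `mybucket`/`olduser`/`newuser` names. In a multisite setup, metadata writes generally must be made against the metadata master zone and then sync outward, which is a common pitfall when the bucket lives in a secondary zonegroup:

```shell
# Run against the metadata master zone, not the secondary
radosgw-admin bucket unlink --bucket=mybucket --uid=olduser
radosgw-admin bucket link   --bucket=mybucket --uid=newuser

# Transfer ownership of the objects already in the bucket
radosgw-admin bucket chown  --bucket=mybucket --uid=newuser

# Verify the change has propagated
radosgw-admin metadata sync status
```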