[ceph-users] Re: Multisite recovering shards
Hi,

> We are using octopus 15.2.7 for bucket sync with symmetrical replication.

Replication is asynchronous with both CephFS and RGW, so if your clients keep writing new data into the cluster, as you state, the sync status will always lag behind a little. I have two one-node test clusters with no client traffic where the sync status is actually up to date:

siteb:~ # radosgw-admin sync status
          realm c7d5fd30-9c06-46a1-baf4-497f95bf3abc (masterrealm)
      zonegroup 68adec15-aace-403d-bd63-f5182a6437b1 (master-zonegroup)
           zone 69329911-c3b0-48c3-a359-7f6214e0480c (siteb-zone)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 0fb33fa1-8110-4179-ae45-acf5f5f825c5 (sitea-zone)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

Quoting "Szabo, Istvan (Agoda)":

> Hi,
>
> I’ve never seen a healthy output in our multisite sync status; almost all the sync shards are recovering.
>
> What can I do with the recovering shards?
>
> We have 1 realm, 1 zonegroup, and inside the zonegroup we have 3 zones in 3 different geo locations.
>
> We are using octopus 15.2.7 for bucket sync with symmetrical replication.
>
> The user is currently migrating their data, and each site is always behind on the data that was uploaded at the other sites.
>
> I’ve restarted all RGWs and disabled/enabled bucket sync; it started to work again, but I think once it gets close to being in sync it will stop again because of the recovering shards.
>
> Any idea?
>
> Thank you
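If some shards stay in "recovering" even when the clients pause, it is worth looking at the sync error list and at the per-bucket sync state. A rough sketch of where to start on Octopus (the bucket name below is a placeholder; the source zone is the one from my example output above):

  # objects that failed to sync and are queued for retry
  radosgw-admin sync error list

  # per-bucket view of data sync against each source zone
  radosgw-admin bucket sync status --bucket=<bucket-name>

  # overall data sync state for a single source zone
  radosgw-admin data sync status --source-zone=sitea-zone

As far as I understand it, a shard is reported as "recovering" while it retries entries that failed earlier, so a growing error list would explain why the shards never leave that state.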
[ceph-users] Re: Multisite recovering shards
Two things I forgot to mention which might be interesting: we have only 2 buckets at the moment, one presharded to 9000 shards, the other presharded to 24000 shards (different users).

> On 2021. Jan 30., at 10:02, Szabo, Istvan (Agoda) wrote:
>
> Hi,
>
> I’ve never seen a healthy output in our multisite sync status; almost all the sync shards are recovering.
>
> What can I do with the recovering shards?
>
> We have 1 realm, 1 zonegroup, and inside the zonegroup we have 3 zones in 3 different geo locations.
>
> We are using octopus 15.2.7 for bucket sync with symmetrical replication.
>
> The user is currently migrating their data, and each site is always behind on the data that was uploaded at the other sites.
>
> I’ve restarted all RGWs and disabled/enabled bucket sync; it started to work again, but I think once it gets close to being in sync it will stop again because of the recovering shards.
>
> Any idea?
>
> Thank you
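For reference, the shard counts and the per-bucket sync state can be checked with something like the following (the bucket name is a placeholder; these are standard radosgw-admin subcommands):

  # shows num_shards and fill status for each bucket
  radosgw-admin bucket limit check

  # sync state of one bucket against each source zone
  radosgw-admin bucket sync status --bucket=<bucket-name>

As far as I know, the 128 data sync shards in `sync status` are independent of the bucket index shard count; the 9000/24000 index shards only affect how many bucket index log shards have to be replayed per bucket.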
[ceph-users] Re: Multisite recovering shards
Hi Szabo,

For what it's worth, I have two clusters in a multisite setup that have never appeared to be fully synced either, but I have never found a single object that isn't present in both clusters. There are always at least a few recovering shards, while the "data sync source" is always "syncing" with "full sync: 0/128 shards" and "incremental sync: 128/128 shards" in both clusters.

For us, the secondary site is for DR purposes, and backups are automatically tested from there every week, which leads me to believe that everything _appears_ to be syncing correctly.

What does your `radosgw-admin sync status` look like?

Thanks,
Matt

On 2021-01-29 22:17, Szabo, Istvan (Agoda) wrote:
> Two things I forgot to mention which might be interesting: we have only 2 buckets at the moment, one presharded to 9000 shards, the other presharded to 24000 shards (different users).
>
> On 2021. Jan 30., at 10:02, Szabo, Istvan (Agoda) wrote:
>
>> Hi,
>>
>> I’ve never seen a healthy output in our multisite sync status; almost all the sync shards are recovering.
>>
>> What can I do with the recovering shards?
>>
>> We have 1 realm, 1 zonegroup, and inside the zonegroup we have 3 zones in 3 different geo locations.
>>
>> We are using octopus 15.2.7 for bucket sync with symmetrical replication.
>>
>> The user is currently migrating their data, and each site is always behind on the data that was uploaded at the other sites.
>>
>> I’ve restarted all RGWs and disabled/enabled bucket sync; it started to work again, but I think once it gets close to being in sync it will stop again because of the recovering shards.
>>
>> Any idea?
>>
>> Thank you
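In case it helps as a comparison, a rough way to spot-check that the data really exists on both sides despite the recovering shards (the bucket name is a placeholder; run the same commands against each cluster):

  # compare object counts for the same bucket on both sites
  radosgw-admin bucket stats --bucket=<bucket-name> | grep num_objects

  # anything that genuinely failed to replicate should show up here
  radosgw-admin sync error list

If the counts converge and the error list stays empty, that would suggest the recovering shards are just retries working themselves off rather than data actually missing.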