Hello. I had a one-way multisite S3 cluster, and after running into rgw-sync issues caused by sharding problems, I stopped the multisite sync. That isn't the topic here, just background on my situation. I now have some leftover 0-byte objects on the destination, and I'm trying to overwrite them with rclone "path to path", but somehow I cannot overwrite these objects. If I delete one with rclone or rados rm and run rclone copy again, I get the result below: rclone reports an error, but the object is recreated as 0 bytes with pending attrs. Why is this happening? I think I need to clean up these objects somehow and copy them from the source again, but how?
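For reference, the delete-and-retry sequence I'm running looks roughly like this (a sketch; "src:" and "dst:" stand in for my real rclone remotes):

    # delete the leftover 0-byte object on the destination
    rclone deletefile dst:mybucket/images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f
    # ...or remove the head object directly from the data pool:
    # rados -p prod.rgw.buckets.data rm c106b26b-xxx-xxxx-xxx-dee3ca5c0968.121384004.3_images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f

    # copy the object from the source again; rclone reports an error,
    # but a 0-byte object with pending attrs reappears on the destination
    rclone copy src:mybucket/images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f dst:mybucket/images/2019/05/29/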
What is "user.rgw.olh.pending" ? [root@SRV1]# radosgw-admin --id radosgw.prod1 object stat --bucket=mybucket --object=images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f { "name": "images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f", "size": 0, "tag": "713li30rvcrjfwhctx894mj7vf1wa1a8", "attrs": { "user.rgw.manifest": "", "user.rgw.olh.idtag": "v1m9jy4cjck38ptel09qebsbb10pe2af", "user.rgw.olh.info": "\u0001\u0001�", "user.rgw.olh.pending.00000000606b04728gs23ecq11b3i3l1": "\u0001\u0001\u0008", "user.rgw.olh.pending.00000000606b0472bfhdzxeb9wesd8t7": "\u0001\u0001\u0008", "user.rgw.olh.pending.00000000606b0472fv06t1dob3vmo4da": "\u0001\u0001\u0008", "user.rgw.olh.pending.00000000606b0472lql6c9o88rt211r9": "\u0001\u0001\u0008", "user.rgw.olh.ver": "" } } [root@SRV1]# rados listxattr -p prod.rgw.buckets.data c106b26b-xxx-xxxx-xxx-dee3ca5c0968.121384004.3_images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f user.rgw.idtag user.rgw.olh.idtag user.rgw.olh.info user.rgw.olh.ver [root@SRV1]# rados -p prod.rgw.buckets.data stat c106b26b-xxx-xxxx-xxx-dee3ca5c0968.121384004.3_images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f prod.rgw.buckets.data/c106b26b-xxx-xxxx-xxx-dee3ca5c0968.121384004.3_images/2019/05/29/ad4ba79c-bb66-4ff6-847a-09a1e0cff49f mtime 2021-04-05 17:10:55.000000, size 0 _______________________________________________ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io