On Wed, Sep 12, 2018 at 6:49 PM Anuradha Talur <ata...@commvault.com> wrote:

> Hi,
>
> We recently started testing the cloudsync xlator on a replica volume,
> and we have noticed a few issues. We would like some advice on how to
> proceed with them.
>
> 1) As we know, when stubbing a file, cloudsync uses the file's mtime to
> decide whether it should be truncated or not.
>
> If the mtime provided as part of the setfattr operation is less than the
> current mtime of the file on the brick, stubbing isn't completed.
>
> This works fine on a plain distribute volume. But in the case of a replica
> volume, the mtime of a file can differ across the replica bricks.
>
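> (For illustration, assuming hypothetical brick paths /bricks/brick{1,2,3},
> the divergence can be seen by comparing the backend mtimes of the same file
> directly on each brick:
>
>     # print epoch mtime and path for the file on every replica brick
>     stat -c '%Y %n' /bricks/brick1/file1 /bricks/brick2/file1 \
>         /bricks/brick3/file1
>
> The three values need not match, since each brick records the time of its
> own local write.)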
>
> During our testing we came across the following scenario for a replica 3
> volume with 3 bricks:
>
>     We performed `setfattr -n "trusted.glusterfs.csou.complete" -v m1 file1`
>     from our gluster mount to stub the file. On brick1 this operation
>     succeeded and truncated file1 as it should have. But on brick2 and
>     brick3, the mtime found on file1 was greater than m1, so the setfattr
>     failed there.
>
>     From AFR's perspective the operation failed as a whole because quorum
>     could not be met. But on the brick where the setxattr succeeded, the
>     truncate had already been performed. So now one of the replica bricks is
>     out of sync and AFR has no awareness of this. The file needs to be
>     rolled back to its state before the setfattr.
>
> Ideally, it appears that we should add intelligence to AFR to handle this.
> How do you suggest we do that?
>
> This case is also applicable to EC volumes, of course.
>
> 2) Given that cloudsync depends on mtime to decide whether to truncate,
> how do we ensure that we don't end up in this situation again?
>

Thank you for your feedback.

At the outset, it looks like these problems can be addressed by enabling the
consistent time attributes (ctime) feature in posix [1]. Can you please enable
that option and re-test these cases?
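
For reference, here's a minimal sketch of how that could be enabled on the
test volume (option names as I recall them from the ctime work; <VOLNAME> is
a placeholder, please double-check against the doc in [1] for your build):

    # the client-side utime xlator passes a consistent time from the client,
    # and the posix ctime option stores it on every brick
    gluster volume set <VOLNAME> features.utime on
    gluster volume set <VOLNAME> features.ctime on

With this on, all bricks should report the same mtime for a file (served from
a consistent metadata xattr rather than the backend filesystem), so the mtime
comparison cloudsync does during stubbing should no longer diverge across
replicas.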

Regards,
Vijay

[1] https://review.gluster.org/#/c/19267/8/doc/features/ctime.md
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel
