Re: [Gluster-devel] Cloudsync with AFR

2018-09-16 Thread Kotresh Hiremath Ravishankar
Hi Anuradha,

To enable the ctime (consistent time) feature, please enable the following two
options:

gluster vol set <VOLNAME> utime on
gluster vol set <VOLNAME> ctime on
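
For example (a sketch only: the volume name repvol, the mount point, and the
brick paths below are assumptions), you can write a file from the client mount
and confirm that the time metadata now agrees on every brick. To my knowledge
the ctime feature stores the consistent times in the trusted.glusterfs.mdata
xattr, so its value should be identical across the bricks:

touch /mnt/repvol/file1
# Compare the stored time metadata on each brick; the hex values should match:
getfattr -e hex -n trusted.glusterfs.mdata /bricks/brick1/file1 \
    /bricks/brick2/file1 /bricks/brick3/file1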

Thanks,
Kotresh HR

On Fri, Sep 14, 2018 at 12:18 PM, Rafi Kavungal Chundattu Parambil <
rkavu...@redhat.com> wrote:

> Hi Anuradha,
>
> We have an xlator that provides consistent time across a replica set. You can
> enable this xlator to get consistent mtime, atime, and ctime.
>
>
> Regards
> Rafi KC
>
> ----- Original Message -----
> From: "Anuradha Talur" 
> To: gluster-devel@gluster.org
> Cc: ama...@redhat.com, "Ram Ankireddypalle" ,
> "Sachin Pandit" 
> Sent: Thursday, September 13, 2018 7:19:26 AM
> Subject: [Gluster-devel] Cloudsync with AFR
>
>
>
> Hi,
>
> We recently started testing the cloudsync xlator on a replica volume and
> have noticed a few issues. We would like some advice on how to proceed with
> them.
>
>
>
> 1) As we know, when stubbing a file, cloudsync uses the mtime of files to
> decide whether a file should be truncated or not.
>
> If the mtime provided as part of the setfattr operation is less than the
> current mtime of the file on the brick, stubbing isn't completed.
>
> This works fine in a plain distribute volume. But in the case of a replica
> volume, the mtime could be different for the file on each of the replica
> bricks.
>
>
> During our testing we came across the following scenario for a replica 3
> volume with 3 bricks:
>
> We performed `setfattr -n "trusted.glusterfs.csou.complete" -v m1 file1`
> from our gluster mount to stub the files.
> It so happened that on brick1 this operation succeeded and truncated file1
> as it should have. But on brick2 and brick3, the mtime found on file1
> was greater than m1, leading to failure there.
>
> From AFR's perspective, this operation failed as a whole because quorum
> could not be met. But on the brick where this setxattr succeeded, the
> truncate was already performed. So now we have one of the replica bricks out
> of sync, and AFR has no awareness of this. This file needs to be rolled back
> to its state before the setfattr.
>
> Ideally, it appears that we should add intelligence to AFR to handle this.
> How do you suggest we do that?
>
>
> This case also applies to EC volumes, of course.
>
> 2) Given that cloudsync depends on mtime to make the truncation decision,
> how do we ensure that we don't end up in this situation again?
>
> Thanks,
> Anuradha



-- 
Thanks and Regards,
Kotresh H R
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Cloudsync with AFR

2018-09-13 Thread Rafi Kavungal Chundattu Parambil
Hi Anuradha,

We have an xlator that provides consistent time across a replica set. You can
enable this xlator to get consistent mtime, atime, and ctime.


Regards
Rafi KC

----- Original Message -----
From: "Anuradha Talur" 
To: gluster-devel@gluster.org
Cc: ama...@redhat.com, "Ram Ankireddypalle" , "Sachin 
Pandit" 
Sent: Thursday, September 13, 2018 7:19:26 AM
Subject: [Gluster-devel] Cloudsync with AFR



Hi, 

We recently started testing the cloudsync xlator on a replica volume and have
noticed a few issues. We would like some advice on how to proceed with them.



1) As we know, when stubbing a file, cloudsync uses the mtime of files to
decide whether a file should be truncated or not.

If the mtime provided as part of the setfattr operation is less than the
current mtime of the file on the brick, stubbing isn't completed.
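
To illustrate the comparison (a shell sketch only; the real check happens
inside the cloudsync xlator, and m1 below is a hypothetical epoch value
standing in for the mtime carried by the setfattr):

m1=1536800000                       # hypothetical mtime sent with the stub request
file_mtime=$(stat -c '%Y' file1)    # current mtime of the file on the brick
if [ "$m1" -lt "$file_mtime" ]; then
    echo "stub refused: provided mtime is older than the on-brick mtime"
else
    echo "stub allowed: file1 can be truncated"
fi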

This works fine in a plain distribute volume. But in the case of a replica
volume, the mtime could be different for the file on each of the replica
bricks.


During our testing we came across the following scenario for a replica 3 volume 
with 3 bricks: 

We performed `setfattr -n "trusted.glusterfs.csou.complete" -v m1 file1` from 
our gluster mount to stub the files. 
It so happened that on brick1 this operation succeeded and truncated file1 as
it should have. But on brick2 and brick3, the mtime found on file1
was greater than m1, leading to failure there.
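
At this point the divergence is easy to observe directly on the bricks (the
brick paths below are assumptions): the truncated copy reports size 0 while
the other two still hold the data.

for b in /bricks/brick1 /bricks/brick2 /bricks/brick3; do
    stat -c '%n size=%s mtime=%Y' "$b/file1"
done
# Expected: brick1 shows size=0 (stubbed); brick2 and brick3 show the original size.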

From AFR's perspective, this operation failed as a whole because quorum could
not be met. But on the brick where this setxattr succeeded, the truncate was
already performed. So now we have one of the replica bricks out of sync, and
AFR has no awareness of this. This file needs to be rolled back to its state
before the setfattr.

Ideally, it appears that we should add intelligence to AFR to handle this. How
do you suggest we do that?


This case also applies to EC volumes, of course.

2) Given that cloudsync depends on mtime to make the truncation decision,
how do we ensure that we don't end up in this situation again?

Thanks, 
Anuradha 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cloudsync with AFR

2018-09-12 Thread Vijay Bellur
On Wed, Sep 12, 2018 at 6:49 PM Anuradha Talur  wrote:

> Hi,
>
> We recently started testing the cloudsync xlator on a replica volume and
> have noticed a few issues. We would like some advice on how to proceed with
> them.
>
> 1) As we know, when stubbing a file, cloudsync uses the mtime of files to
> decide whether a file should be truncated or not.
>
> If the mtime provided as part of the setfattr operation is less than the
> current mtime of the file on the brick, stubbing isn't completed.
>
> This works fine in a plain distribute volume. But in the case of a replica
> volume, the mtime could be different for the file on each of the replica
> bricks.
>
>
> During our testing we came across the following scenario for a replica 3
> volume with 3 bricks:
>
> We performed `setfattr -n "trusted.glusterfs.csou.complete" -v m1
> file1` from our gluster mount to stub the files.
> It so happened that on brick1 this operation succeeded and truncated
> file1 as it should have. But on brick2 and brick3, the mtime found on file1
> was greater than m1, leading to failure there.
>
> From AFR's perspective, this operation failed as a whole because quorum
> could not be met. But on the brick where this setxattr succeeded, the
> truncate was already performed. So now we have one of the replica bricks
> out of sync, and AFR has no awareness of this. This file needs to be rolled
> back to its state before the setfattr.
>
> Ideally, it appears that we should add intelligence to AFR to handle this.
> How do you suggest we do that?
>
> This case also applies to EC volumes, of course.
>
> 2) Given that cloudsync depends on mtime to make the truncation decision,
> how do we ensure that we don't end up in this situation again?
>

Thank you for your feedback.

At the outset, it looks like these problems can be addressed by enabling the
consistent attributes feature in posix [1]. Can you please enable that option
and re-test these cases?
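
A minimal re-test sketch (the volume name is a placeholder, and the two option
names are the ones Kotresh lists at the top of this thread):

gluster vol set <VOLNAME> utime on
gluster vol set <VOLNAME> ctime on
# Repeat the stubbing step from the client mount; with consistent times enabled,
# the setfattr should now see the same mtime on every brick:
setfattr -n "trusted.glusterfs.csou.complete" -v m1 file1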

Regards,
Vijay

[1] https://review.gluster.org/#/c/19267/8/doc/features/ctime.md
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel