[Cluster-devel] [PATCH] gfs2: fix timestamp handling on quota inodes

2023-07-13 Thread Jeff Layton
While these aren't generally visible from userland, it's best to be consistent with timestamp handling. When adjusting the quota, update the mtime and ctime like we would with a write operation on any other inode, and avoid updating the atime, which should only be done for reads. Signed-off-by: Jef
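
The change amounts to using the same timestamp update a data write would use. A minimal sketch of the idea for a pre-6.6 kernel (not the literal patch; the helper name is made up for illustration):

    /* sketch: a quota adjustment is a data modification, so bump mtime/ctime */
    static void quota_inode_dirty_time(struct inode *inode)
    {
            struct timespec64 now = current_time(inode);

            inode->i_mtime = now;   /* data changed */
            inode->i_ctime = now;   /* metadata changed as a consequence */
            /* i_atime is deliberately left alone; it is a read-side stamp */
            mark_inode_dirty(inode);
    }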

Re: [Cluster-devel] [PATCH] gfs2: fix timestamp handling on quota inodes

2023-07-13 Thread Christian Brauner
On Thu, Jul 13, 2023 at 09:52:48AM -0400, Jeff Layton wrote: > While these aren't generally visible from userland, it's best to be > consistent with timestamp handling. When adjusting the quota, update the > mtime and ctime like we would with a write operation on any other inode, > and avoid updati

Re: [Cluster-devel] [PATCH] gfs2: fix timestamp handling on quota inodes

2023-07-13 Thread Christian Brauner
On Thu, 13 Jul 2023 09:52:48 -0400, Jeff Layton wrote: > While these aren't generally visible from userland, it's best to be > consistent with timestamp handling. When adjusting the quota, update the > mtime and ctime like we would with a write operation on any other inode, > and avoid updating the

Re: [Cluster-devel] [PATCH] gfs2: fix timestamp handling on quota inodes

2023-07-13 Thread Andreas Gruenbacher
Jeff and Christian, On Thu, Jul 13, 2023 at 3:52 PM Jeff Layton wrote: > While these aren't generally visible from userland, it's best to be > consistent with timestamp handling. When adjusting the quota, update the > mtime and ctime like we would with a write operation on any other inode, > and

[Cluster-devel] [PATCH v6.5-rc1 2/2] fs: dlm: allow to F_SETLKW getting interrupted

2023-07-13 Thread Alexander Aring
This patch implements the dlm plock F_SETLKW interruption feature. If the pending plock operation has not been sent to user space yet, it can simply be dropped from the send_list. If it has already been sent, we need to try to remove the waiter in the dlm user space tool. If that was successful, a reply with D
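
In rough terms the cancellation has two cases, depending on whether the op has already been handed to user space. A hedged sketch of that flow (list and helper names are illustrative, loosely modelled on fs/dlm/plock.c):

    /* sketch: interrupt handling for a blocked F_SETLKW plock op */
    static int plock_try_cancel(struct plock_op *op)
    {
            spin_lock(&ops_lock);
            if (!list_empty(&op->list)) {
                    /* still queued on send_list: nobody saw it, just drop it */
                    list_del_init(&op->list);
                    spin_unlock(&ops_lock);
                    return 0;
            }
            spin_unlock(&ops_lock);

            /* already sent: ask dlm_controld to remove the waiter (may fail
             * if the lock was granted in the meantime) */
            return plock_send_cancel(op);   /* illustrative helper */
    }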

[Cluster-devel] [PATCH v6.5-rc1 1/2] fs: dlm: introduce DLM_PLOCK_FL_NO_REPLY flag

2023-07-13 Thread Alexander Aring
This patch introduces a new flag DLM_PLOCK_FL_NO_REPLY for the case where a dlm plock operation should not send a reply back. Currently this is partly handled by DLM_PLOCK_FL_CLOSE, but DLM_PLOCK_FL_CLOSE has the additional meaning that it removes all waiters for specific nodeid/owner values by doing
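
The flag sits next to DLM_PLOCK_FL_CLOSE in the plock uapi and is set by the kernel on ops for which no answer should ever come back. A hedged sketch (the numeric value and the helper are illustrative, not the exact patch):

    /* sketch of the uapi addition; the numeric value is illustrative */
    #define DLM_PLOCK_FL_CLOSE      0x00000001
    #define DLM_PLOCK_FL_NO_REPLY   0x00000002

    /* kernel side: mark an op that dlm_controld must not answer */
    static void plock_send_no_reply(struct plock_op *op)
    {
            op->info.flags |= DLM_PLOCK_FL_NO_REPLY;
            send_op(op);    /* queued for user space, never waited on */
    }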

Re: [Cluster-devel] [PATCH v6.5-rc1 1/2] fs: dlm: introduce DLM_PLOCK_FL_NO_REPLY flag

2023-07-13 Thread Greg KH
On Thu, Jul 13, 2023 at 10:40:28AM -0400, Alexander Aring wrote: > This patch introduces a new flag DLM_PLOCK_FL_NO_REPLY in case an dlm > plock operation should not send a reply back. Currently this is kind of > being handled in DLM_PLOCK_FL_CLOSE, but DLM_PLOCK_FL_CLOSE has more > meanings that i

Re: [Cluster-devel] [PATCH v6.5-rc1 1/2] fs: dlm: introduce DLM_PLOCK_FL_NO_REPLY flag

2023-07-13 Thread Alexander Aring
Hi, On Thu, Jul 13, 2023 at 10:49 AM Greg KH wrote: > > On Thu, Jul 13, 2023 at 10:40:28AM -0400, Alexander Aring wrote: > > This patch introduces a new flag DLM_PLOCK_FL_NO_REPLY in case an dlm > > plock operation should not send a reply back. Currently this is kind of > > being handled in DLM_P

Re: [Cluster-devel] [PATCH v6.5-rc1 1/2] fs: dlm: introduce DLM_PLOCK_FL_NO_REPLY flag

2023-07-13 Thread Alexander Aring
Hi, On Thu, Jul 13, 2023 at 10:57 AM Alexander Aring wrote: > > Hi, > > On Thu, Jul 13, 2023 at 10:49 AM Greg KH wrote: > > > > On Thu, Jul 13, 2023 at 10:40:28AM -0400, Alexander Aring wrote: > > > This patch introduces a new flag DLM_PLOCK_FL_NO_REPLY in case an dlm > > > plock operation shoul

Re: [Cluster-devel] [LTP] [linus:master] [iomap] 219580eea1: ltp.writev07.fail

2023-07-13 Thread Cyril Hrubis
Hi! The test description: Verify writev() behaviour with partially valid iovec list. Kernels <4.8 used to shorten the write up to the first invalid iovec. Starting with 4.8, a writev with short data (under page size) is likely to get shortened to 0 bytes and return EFAULT. This test doesn't make a
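
For reference, the behaviour under discussion can be triggered with a writev() whose iovec list is only partially valid; whether it returns a short count or -EFAULT is exactly what changed between kernels. A small stand-alone reproducer (not the LTP test itself):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/uio.h>

    int main(void)
    {
            char buf[64] = "valid data";
            struct iovec iov[2] = {
                    { .iov_base = buf,         .iov_len = sizeof(buf) },
                    { .iov_base = (void *)0x1, .iov_len = 64 },  /* invalid */
            };
            int fd = open("testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);
            ssize_t ret = writev(fd, iov, 2);

            /* kernels <4.8: ret == 64 (short write up to the bad iovec);
             * later kernels may instead return -1 with errno == EFAULT */
            printf("writev returned %zd\n", ret);
            close(fd);
            return 0;
    }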

Re: [Cluster-devel] [LTP] [linus:master] [iomap] 219580eea1: ltp.writev07.fail

2023-07-13 Thread Cyril Hrubis
Hi! > I can't reproduce this on current mainline. Is this a robust failure > or flapping test? Especially as the FAIL conditions look rather > unrelated. Actually the test is spot on; the difference is that previously the error was returned from iomap_file_buffered_write() only if we failed w

Re: [Cluster-devel] [LTP] [linus:master] [iomap] 219580eea1: ltp.writev07.fail

2023-07-13 Thread Christoph Hellwig
On Thu, Jul 13, 2023 at 05:34:55PM +0200, Cyril Hrubis wrote:
>	iter.processed = iomap_write_iter(&iter, i);
>
> +	iocb->ki_pos += iter.pos - iocb->ki_pos;
> +
>	if (unlikely(ret < 0))
>		return ret;
> -	ret = iter.pos - iocb->ki_pos;
> -	io
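
The quoted (truncated) hunk moves the ki_pos advance ahead of the error check, so a write that copied some bytes before hitting the bad iovec reports those bytes instead of the error. Roughly, the tail of iomap_file_buffered_write() ends up with this shape (a sketch of the idea being discussed, not the final patch):

        while ((ret = iomap_iter(&iter, ops)) > 0)
                iter.processed = iomap_write_iter(&iter, i);

        /* nothing written at all: report the error (or 0) */
        if (unlikely(iter.pos == iocb->ki_pos))
                return ret;
        /* otherwise report the (possibly short) byte count and advance ki_pos */
        ret = iter.pos - iocb->ki_pos;
        iocb->ki_pos = iter.pos;
        return ret;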

[Cluster-devel] [PATCHv2 v6.5-rc1 2/3] fs: dlm: introduce DLM_PLOCK_FL_NO_REPLY flag

2023-07-13 Thread Alexander Aring
This patch introduces a new flag DLM_PLOCK_FL_NO_REPLY for the case where a dlm plock operation should never send a reply back. Currently this is partly handled by DLM_PLOCK_FL_CLOSE, but DLM_PLOCK_FL_CLOSE has the additional meaning that it removes all waiters for specific nodeid/owner values by doin

[Cluster-devel] [PATCHv2 v6.5-rc1 1/3] fs: dlm: ignore DLM_PLOCK_FL_CLOSE flag results

2023-07-13 Thread Alexander Aring
This patch will ignore dlm plock results with DLM_PLOCK_FL_CLOSE being set. When DLM_PLOCK_FL_CLOSE is set, no reply is expected and a plock op cannot be matched, so the result cannot be delivered to the caller. In some user space software applications like dlm_controld (the common applicati
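
On the kernel side this boils down to dropping any result from user space whose flags carry DLM_PLOCK_FL_CLOSE before trying to match it against a pending op. A rough sketch of that check (illustrative, not the exact hunk in dev_write()):

    /* sketch: should a result written by dlm_controld be silently dropped? */
    static bool plock_result_unexpected(const struct dlm_plock_info *info)
    {
            /*
             * Ops carrying DLM_PLOCK_FL_CLOSE never expect an answer, so a
             * result for them cannot be matched to a waiting op; ignore it
             * instead of mis-matching it against some other request.
             */
            return info->flags & DLM_PLOCK_FL_CLOSE;
    }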

[Cluster-devel] [PATCHv2 v6.5-rc1 3/3] fs: dlm: allow to F_SETLKW getting interrupted

2023-07-13 Thread Alexander Aring
This patch implements the dlm plock F_SETLKW interruption feature. If the pending plock operation has not been sent to user space yet, it can simply be dropped from the send_list. If it has already been sent, we need to try to remove the waiter in the dlm user space tool. If that was successful, a reply with D

[Cluster-devel] [PATCHv2 v6.5-rc1 0/3] fs: dlm: workarounds and cancellation

2023-07-13 Thread Alexander Aring
Hi, this patch series tries to avoid issues where plock ops with the DLM_PLOCK_FL_CLOSE flag set send a reply back, which should never be the case. This problem gets more serious when introducing a new plock op for which an answer is not expected, as I changed in v2 to check on DLM_PLOCK_FL_CLOSE fl

[Cluster-devel] [PATCH dlm-tool 1/2] fs: dlm: handle DLM_PLOCK_FL_NO_REPLY

2023-07-13 Thread Alexander Aring
This patch will handle the newly introduced op flag DLM_PLOCK_FL_NO_REPLY to be sure we never send a result back when the kernel does not expect one. --- dlm_controld/plock.c | 10 ++ 1 file changed, 10 insertions(+) diff --git a/dlm_controld/plock.c b/dlm_controld/ploc
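
In dlm_controld the handling is essentially a guard in front of the write of the result back to the kernel's misc device. A hedged sketch (the helper is illustrative of what dlm_controld/plock.c does, not the actual diff):

    /* sketch: user space side, just before answering the kernel */
    static void plock_reply(struct dlm_plock_info *info, int rv)
    {
            if (info->flags & DLM_PLOCK_FL_NO_REPLY)
                    return;         /* kernel explicitly asked for no answer */

            info->rv = rv;
            write_result_to_kernel(info);   /* write() to the plock misc device */
    }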

[Cluster-devel] [PATCH dlm-tool 2/2] fs: dlm: implement DLM_PLOCK_OP_CANCEL

2023-07-13 Thread Alexander Aring
This patch implements DLM_PLOCK_OP_CANCEL to try to delete waiters for a lock request which are waiting to be granted. If the waiter can be deleted, the reply to the kernel is changed from DLM_PLOCK_OP_LOCK to the sent DLM_PLOCK_OP_CANCEL, clearing the DLM_PLOCK_FL_NO_REPLY flag. --- dl
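
The cancel op looks for the matching waiter on the resource; if it is still waiting, the waiter is removed and the otherwise reply-less request is answered as a DLM_PLOCK_OP_CANCEL. A rough sketch of the flow (names simplified; find_waiter() and the reply helper are illustrative):

    /* sketch: handling DLM_PLOCK_OP_CANCEL in dlm_controld */
    static void do_cancel(struct resource *r, struct dlm_plock_info *in)
    {
            struct lock_waiter *w = find_waiter(r, in);     /* illustrative */

            if (!w)
                    return;         /* already granted or gone, nothing to cancel */

            list_del(&w->list);
            free(w);

            /* answer the original request as a cancel, and do reply this time */
            in->optype = DLM_PLOCK_OP_CANCEL;
            in->flags &= ~DLM_PLOCK_FL_NO_REPLY;
            in->rv = 0;
            write_result_to_kernel(in);     /* illustrative helper */
    }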

[Cluster-devel] [PATCH v5 5/8] xfs: XFS_ICHGTIME_CREATE is unused

2023-07-13 Thread Jeff Layton
Nothing ever sets this flag, which makes sense since the create time is set at inode instantiation and is never changed. Remove it and the handling of it in xfs_trans_ichgtime. Signed-off-by: Jeff Layton --- fs/xfs/libxfs/xfs_shared.h | 2 -- fs/xfs/libxfs/xfs_trans_inode.c | 2 -- 2 files
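
With the CREATE case gone, xfs_trans_ichgtime only has to care about the MOD and CHG flags. Roughly, the remaining logic looks like this (a sketch; asserts and locking details omitted):

    void xfs_trans_ichgtime(struct xfs_trans *tp, struct xfs_inode *ip, int flags)
    {
            struct inode            *inode = VFS_I(ip);
            struct timespec64       tv = current_time(inode);

            /* XFS_ICHGTIME_CREATE is gone; only MOD and CHG remain */
            if (flags & XFS_ICHGTIME_MOD)
                    inode->i_mtime = tv;
            if (flags & XFS_ICHGTIME_CHG)
                    inode->i_ctime = tv;
    }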

[Cluster-devel] [PATCH v5 0/8] fs: implement multigrain timestamps

2023-07-13 Thread Jeff Layton
8 insertions(+), 129 deletions(-) --- base-commit: cf22d118b89a09a0160586412160d89098f7c4c7 change-id: 20230713-mgctime-f2a9fc324918 Best regards, -- Jeff Layton

[Cluster-devel] [PATCH v5 3/8] tmpfs: bump the mtime/ctime/iversion when page becomes writeable

2023-07-13 Thread Jeff Layton
Most filesystems that use the pagecache will update the mtime, ctime, and change attribute when a page becomes writeable. Add a page_mkwrite operation for tmpfs and just use it to bump the mtime, ctime and change attribute. This fixes xfstest generic/080 on tmpfs. Signed-off-by: Jeff Layton ---
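
The new operation only has to bump the times before the page is dirtied; the rest is ordinary page_mkwrite boilerplate. A minimal sketch of what such a handler looks like (not the exact patch):

    static vm_fault_t shmem_page_mkwrite(struct vm_fault *vmf)
    {
            struct inode *inode = file_inode(vmf->vma->vm_file);
            struct folio *folio = page_folio(vmf->page);

            /* bump mtime, ctime (and i_version where enabled) before the write */
            file_update_time(vmf->vma->vm_file);

            folio_lock(folio);
            if (folio->mapping != inode->i_mapping) {
                    folio_unlock(folio);
                    return VM_FAULT_NOPAGE;
            }
            return VM_FAULT_LOCKED;
    }
    /* hooked up via shmem's vm_operations_struct: .page_mkwrite = shmem_page_mkwrite */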

[Cluster-devel] [PATCH v5 1/8] fs: pass the request_mask to generic_fillattr

2023-07-13 Thread Jeff Layton
generic_fillattr just fills in the entire stat struct indiscriminately today, copying data from the inode. There is at least one attribute (STATX_CHANGE_COOKIE) that can have side effects when it is reported, and we're looking at adding more with the addition of multigrain timestamps. Add a reques
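
After the change the request_mask rides along with the idmap and inode, so generic_fillattr can skip attributes with side effects that were not asked for. A sketch of the new calling convention (foo_getattr is just an illustrative caller):

    /* new signature: callers pass their request_mask through */
    void generic_fillattr(struct mnt_idmap *idmap, u32 request_mask,
                          struct inode *inode, struct kstat *stat);

    /* a filesystem ->getattr then becomes, roughly: */
    static int foo_getattr(struct mnt_idmap *idmap, const struct path *path,
                           struct kstat *stat, u32 request_mask,
                           unsigned int query_flags)
    {
            generic_fillattr(idmap, request_mask, d_inode(path->dentry), stat);
            return 0;
    }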

[Cluster-devel] [PATCH v5 7/8] ext4: switch to multigrain timestamps

2023-07-13 Thread Jeff Layton
Enable multigrain timestamps, which should ensure that there is an apparent change to the timestamp whenever it has been written after being actively observed via getattr. For ext4, we only need to enable the FS_MGTIME flag. Signed-off-by: Jeff Layton --- fs/ext4/super.c | 2 +- 1 file changed,
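
The ext4 side really is a one-liner: the FS_MGTIME flag introduced earlier in the series is OR-ed into the filesystem type flags. Roughly (other fields elided):

    static struct file_system_type ext4_fs_type = {
            .owner          = THIS_MODULE,
            .name           = "ext4",
            /* ... init_fs_context, parameters, kill_sb unchanged ... */
            /* FS_MGTIME opts this filesystem in to multigrain mtime/ctime */
            .fs_flags       = FS_REQUIRES_DEV | FS_ALLOW_IDMAP | FS_MGTIME,
    };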

[Cluster-devel] [PATCH v5 4/8] tmpfs: add support for multigrain timestamps

2023-07-13 Thread Jeff Layton
Enable multigrain timestamps, which should ensure that there is an apparent change to the timestamp whenever it has been written after being actively observed via getattr. tmpfs only requires the FS_MGTIME flag. Signed-off-by: Jeff Layton --- mm/shmem.c | 2 +- 1 file changed, 1 insertion(+), 1

[Cluster-devel] [PATCH v5 2/8] fs: add infrastructure for multigrain timestamps

2023-07-13 Thread Jeff Layton
The VFS always uses coarse-grained timestamps when updating the ctime and mtime after a change. This has the benefit of allowing filesystems to optimize away a lot of metadata updates, down to around 1 per jiffy, even when a file is under heavy writes. Unfortunately, this has always been an issue whe
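
The core idea is to keep handing out cheap coarse-grained stamps, but switch to a fine-grained stamp for the next update once the current ctime has actually been observed via getattr. A heavily simplified sketch of that decision (the "queried" tracking and both helpers are illustrative, not the series' exact API):

    /* sketch: choose the timestamp for the next ctime/mtime update */
    static struct timespec64 mgtime_pick(struct inode *inode)
    {
            /*
             * If nobody looked at the current ctime, a coarse-grained stamp is
             * fine and lets many updates within one jiffy collapse into one.
             */
            if (!ctime_was_queried(inode))          /* illustrative helper */
                    return current_time(inode);     /* coarse-grained */

            /*
             * Someone observed the old ctime via getattr: take a fine-grained
             * stamp so the change is guaranteed to be visible to them.
             */
            return fine_grained_time(inode);        /* illustrative helper */
    }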

[Cluster-devel] [PATCH v5 6/8] xfs: switch to multigrain timestamps

2023-07-13 Thread Jeff Layton
Enable multigrain timestamps, which should ensure that there is an apparent change to the timestamp whenever it has been written after being actively observed via getattr. Also, anytime the mtime changes, the ctime must also change, and those are now the only two options for xfs_trans_ichgtime. Ha

[Cluster-devel] [PATCH v5 8/8] btrfs: convert to multigrain timestamps

2023-07-13 Thread Jeff Layton
Enable multigrain timestamps, which should ensure that there is an apparent change to the timestamp whenever it has been written after being actively observed via getattr. Beyond enabling the FS_MGTIME flag, this patch eliminates update_time_for_write, which goes to great lengths to avoid in-memory