Re: [Devel] [PATCH rh7] fs/cleancache: fix data invalidation in the cleancache during direct_io

2017-04-11 Thread Alexey Kuznetsov
Hello!

Good job!

Before submitting this to mainstream look
at truncate_inode_pages.

It has condition:

if (mapping->nrpages == 0 && mapping->nrexceptional == 0)
return;

I have no idea what those exceptional entries are, but it definitely
looks wrong to check only nrpages in invalidate_inode_pages2_range(),
since it clears exceptional entries as well.

Also I see no point in invalidating the cleancache on entry
to these routines. It is a waste of time: the cleancache will be
repopulated by the invalidation itself (which is stupid, of course).
It is enough to do this once at exit.
___
Devel mailing list
Devel@openvz.org
https://lists.openvz.org/mailman/listinfo/devel


Re: [Devel] [PATCH rh7] fs/cleancache: fix data invalidation in the cleancache during direct_io

2017-04-12 Thread Andrey Ryabinin
On 04/11/2017 08:08 PM, Alexey Kuznetsov wrote:
> Hello!
> 
> Good job!
> 
> Before submitting this to mainstream look
> at truncate_inode_pages.
> 
> It has condition:
> 
> if (mapping->nrpages == 0 && mapping->nrexceptional == 0)
> return;
> 
> I have no idea what those exceptional entries are, but it definitely
> looks wrong to check only nrpages in invalidate_inode_pages2_range(),
> since it clears exceptional entries as well.
> 

AFAIU exceptional entries are either DAX entries, which store the sector
number and entry size (PMD, PTE, ...), or they are used by the workingset
code to store some information about page eviction.

Given that invalidate_inode_pages2_range() is supposed to invalidate stale
data in the page cache/cleancache (per my understanding at least), I would
say that invalidate_inode_pages2_range() shouldn't remove exceptional
entries. But I'm not sure that my understanding is correct, so I'm going to
add the ->nrexceptional check in v2, but will ask the upstream folks about
this ambiguity.

> Also I see no point in invalidating the cleancache on entry
> to these routines. It is a waste of time: the cleancache will be
> repopulated by the invalidation itself (which is stupid, of course).
> It is enough to do this once at exit.
> 

Agreed.


Re: [Devel] [PATCH rh7] fs/cleancache: fix data invalidation in the cleancache during direct_io

2017-04-12 Thread Andrey Ryabinin


On 04/12/2017 05:45 PM, Andrey Ryabinin wrote:
> On 04/11/2017 08:08 PM, Alexey Kuznetsov wrote:
>> Hello!
>>
>> Good job!
>>
>> Before submitting this to mainstream look
>> at truncate_inode_pages.
>>
>> It has condition:
>>
>> if (mapping->nrpages == 0 && mapping->nrexceptional == 0)
>> return;
>>
>> I have no idea what those exceptional entries are, but it definitely
>> looks wrong to check only nrpages in invalidate_inode_pages2_range(),
>> since it clears exceptional entries as well.
>>
> 
> AFAIU exceptional entries are either DAX entries, which store the sector
> number and entry size (PMD, PTE, ...), or they are used by the workingset
> code to store some information about page eviction.
> 
> Given that invalidate_inode_pages2_range() is supposed to invalidate stale
> data in the page cache/cleancache (per my understanding at least), I would
> say that invalidate_inode_pages2_range() shouldn't remove exceptional
> entries. But I'm not sure that my understanding is correct, so I'm going to
> add the ->nrexceptional check in v2, but will ask the upstream folks about
> this ambiguity.
> 


Just for easier review, this is diff between v1 and v2:

---
 fs/xfs/xfs_file.c | 14 ++
 mm/truncate.c |  2 +-
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index d758443..0b7a35b 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -346,7 +346,7 @@ xfs_file_aio_read(
 * serialisation.
 */
xfs_rw_ilock(ip, XFS_IOLOCK_SHARED);
-   if ((ioflags & XFS_IO_ISDIRECT) && inode->i_mapping->nrpages) {
+   if ((ioflags & XFS_IO_ISDIRECT)) {
xfs_rw_iunlock(ip, XFS_IOLOCK_SHARED);
xfs_rw_ilock(ip, XFS_IOLOCK_EXCL);
 
@@ -361,14 +361,12 @@ xfs_file_aio_read(
 * flush and reduce the chances of repeated iolock cycles going
 * forward.
 */
-   if (inode->i_mapping->nrpages) {
-   ret = filemap_write_and_wait(VFS_I(ip)->i_mapping);
-   if (ret) {
-   xfs_rw_iunlock(ip, XFS_IOLOCK_EXCL);
-   return ret;
-   }
-
+   ret = filemap_write_and_wait(VFS_I(ip)->i_mapping);
+   if (ret) {
+   xfs_rw_iunlock(ip, XFS_IOLOCK_EXCL);
+   return ret;
}
+
/*
 * Invalidate whole pages. This can return an error if
 * we fail to invalidate a page, but this should never
diff --git a/mm/truncate.c b/mm/truncate.c
index f015a86..ce4b1d8 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -629,7 +629,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
cleancache_invalidate_inode(mapping);
 
-   if (mapping->nrpages == 0)
+   if (mapping->nrpages == 0 && mapping->nrexceptional == 0)
return 0;
 
pagevec_init(&pvec, 0);
-- 
2.10.2
