On Fri, Oct 23, 2020 at 06:44:23PM +0300, Konstantin Komarov wrote:
> +
> +/*ntfs_readpage*/
> +/*ntfs_readpages*/
> +/*ntfs_writepage*/
> +/*ntfs_writepages*/
> +/*ntfs_block_truncate_page*/

What are these for?

> +int ntfs_readpage(struct file *file, struct page *page)
> +{
> +     int err;
> +     struct address_space *mapping = page->mapping;
> +     struct inode *inode = mapping->host;
> +     struct ntfs_inode *ni = ntfs_i(inode);
> +     u64 vbo = (u64)page->index << PAGE_SHIFT;
> +     u64 valid;
> +     struct ATTRIB *attr;
> +     const char *data;
> +     u32 data_size;
> +
[...]
> +
> +     if (is_compressed(ni)) {
> +             if (PageUptodate(page)) {
> +                     unlock_page(page);
> +                     return 0;
> +             }

You can skip this -- the readpage op won't be called for pages which
are Uptodate.

> +     /* normal + sparse files */
> +     err = mpage_readpage(page, ntfs_get_block);
> +     if (err)
> +             goto out;

It would be nice to use iomap instead of mpage, but that's a big ask.
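
For the record, a rough sketch of what the iomap version could look
like (untested; ntfs_map_range() and the NTFS_HOLE sentinel are names
I'm making up here, nothing in this patch):

#include <linux/iomap.h>

static int ntfs_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
			    unsigned int flags, struct iomap *iomap,
			    struct iomap *srcmap)
{
	struct ntfs_inode *ni = ntfs_i(inode);
	u64 lbo, bytes;
	int err;

	/* made-up helper: resolve [pos, pos + length) to a device byte
	 * offset, or NTFS_HOLE for an unallocated range */
	err = ntfs_map_range(ni, pos, length, &lbo, &bytes);
	if (err)
		return err;

	iomap->bdev = inode->i_sb->s_bdev;
	iomap->offset = pos;
	iomap->length = bytes;
	if (lbo == NTFS_HOLE) {
		iomap->type = IOMAP_HOLE;
		iomap->addr = IOMAP_NULL_ADDR;
	} else {
		iomap->type = IOMAP_MAPPED;
		iomap->addr = lbo;
	}
	return 0;
}

static const struct iomap_ops ntfs_iomap_ops = {
	.iomap_begin = ntfs_iomap_begin,
};

static int ntfs_readpage(struct file *file, struct page *page)
{
	return iomap_readpage(page, &ntfs_iomap_ops);
}

iomap zeroes holes and the part of the page past i_size for you; the
i_valid handling would still be on you.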

> +     valid = ni->i_valid;
> +     if (vbo < valid && valid < vbo + PAGE_SIZE) {
> +             if (PageLocked(page))
> +                     wait_on_page_bit(page, PG_locked);
> +             if (PageError(page)) {
> +                     ntfs_inode_warn(inode, "file garbage at 0x%llx", valid);
> +                     goto out;
> +             }
> +             zero_user_segment(page, valid & (PAGE_SIZE - 1), PAGE_SIZE);

No, no -- you can't zero data after the page has been unlocked.  You can
handle this case in ntfs_get_block().  If the block is entirely beyond
i_size, returning a hole will cause mpage_readpage() to zero it.  If it
straddles i_size, you can either ensure that the on-media block contains
zeroes after the EOF, or if you can't depend on that, you can read it
in synchronously in your get_block() and then zero the tail and set the
buffer Uptodate.  Not the most appetising solution, but what you have
here races with userspace writing to the page after it has been read.
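
Roughly something like this, untested, with made-up helper names
(ntfs_vbo_to_lbo() and ntfs_read_block_sync() are not in your patch);
only the read side is sketched:

static int ntfs_get_block(struct inode *inode, sector_t iblock,
			  struct buffer_head *bh, int create)
{
	struct ntfs_inode *ni = ntfs_i(inode);
	unsigned int bsize = i_blocksize(inode);
	u64 vbo = (u64)iblock << inode->i_blkbits;
	u64 valid = ni->i_valid;
	u64 lbo;
	int err;

	/* read side only: create == 0, called from do_mpage_readpage(),
	 * which has already set bh->b_page */
	if (vbo >= valid)
		return 0;	/* beyond the valid size: leave it a hole,
				 * mpage_readpage() zeroes it */

	err = ntfs_vbo_to_lbo(ni, vbo, &lbo);	/* byte offset on disk */
	if (err)
		return err;

	map_bh(bh, inode->i_sb, lbo >> inode->i_blkbits);
	bh->b_size = bsize;

	if (vbo + bsize <= valid)
		return 0;	/* entirely below the valid size */

	/*
	 * The block straddles the valid size.  If the on-media tail isn't
	 * guaranteed to be zero, read the block synchronously into the
	 * page, zero everything past 'valid' and mark the buffer uptodate;
	 * do_mpage_readpage() then copies it with map_buffer_to_page()
	 * instead of issuing another read.
	 */
	err = ntfs_read_block_sync(ni, lbo, bh);
	if (err)
		return err;

	zero_user_segment(bh->b_page, valid & (PAGE_SIZE - 1), PAGE_SIZE);
	set_buffer_uptodate(bh);
	return 0;
}

That way the zeroing happens while the page is still locked and not yet
Uptodate, so nobody can be writing to it underneath you.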
