On Thu, Nov 03, 2022 at 09:33:28PM +0000, David Howells wrote:
> +++ b/fs/netfs/buffered_read.c
> @@ -46,10 +46,15 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
>  
>       rcu_read_lock();
>       xas_for_each(&xas, folio, last_page) {
> -             unsigned int pgpos = (folio_index(folio) - start_page) * PAGE_SIZE;
> -             unsigned int pgend = pgpos + folio_size(folio);
> +             unsigned int pgpos, pgend;

"unsigned int" assumes that the number of bytes isn't going to exceed 32
bits.  I tend to err on the side of safety here and use size_t.
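Something like this, perhaps (just sketching the type change against the
hunk above; untested, and size_t is only a suggestion):

	size_t pgpos, pgend;	/* byte offsets within the request */
	...
	pgpos = (folio_index(folio) - start_page) * PAGE_SIZE;
	pgend = pgpos + folio_size(folio);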

>               bool pg_failed = false;
>  
> +             if (xas_retry(&xas, folio))
> +                     continue;
> +
> +             pgpos = (folio_index(folio) - start_page) * PAGE_SIZE;
> +             pgend = pgpos + folio_size(folio);

What happens if start_page is somewhere inside folio?  Seems to me
that pgend ends up overhanging into the next folio?
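To put hypothetical numbers on that (assuming 4KiB pages and a four-page
folio starting at index 16, with start_page == 17, i.e. one page into the
folio):

	pgpos = (16 - 17) * PAGE_SIZE;	/* folio_index() is below start_page, so this underflows */
	pgend = pgpos + 4 * PAGE_SIZE;

so the [pgpos, pgend) pair no longer describes just the part of the folio
that this request actually covers.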

