Hugh Dickins wrote:
> Checking page counts in a GB file prior to sealing does not appeal at
> all: we'd be lucky ever to find them all accounted for.

Here is a refinement of that idea: during a seal operation, iterate over
all the pages in the file and check their refcounts.  For any page that
has an unexpected extra reference, allocate a new page, copy the data
into it, and then replace the referenced page in the file with the newly
allocated one.  That way you still get zero-copy for pages without extra
references, and you don't have to fail the seal operation just because
something else is still holding references to some of the pages.
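
To make that concrete, here is a rough sketch of what such a copy-on-seal
pass might look like.  The function name (seal_copy_pinned_pages) and the
expected-refcount heuristic are my own invention, and the real thing would
also need proper shmem locking, handling of swapped-out pages, and
unmapping/migration of pages that are mapped into user page tables; I'm
only using existing pagecache helpers (find_get_page(),
replace_page_cache_page(), copy_highpage()) to show the shape of the loop:

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>

static int seal_copy_pinned_pages(struct address_space *mapping,
				  pgoff_t nr_pages)
{
	pgoff_t index;

	for (index = 0; index < nr_pages; index++) {
		struct page *page = find_get_page(mapping, index);
		struct page *newpage;
		int err;

		if (!page)
			continue;	/* hole or swapped out */

		/*
		 * Expected refs: one from the page cache, one from
		 * find_get_page() above, plus one per page-table mapping.
		 * Anything beyond that is an "extra" reference, e.g. a
		 * pending O_DIRECT read into the page.
		 */
		if (page_count(page) - page_mapcount(page) <= 2) {
			put_page(page);
			continue;	/* no extra refs: keep it, zero-copy */
		}

		newpage = alloc_page(GFP_HIGHUSER_MOVABLE);
		if (!newpage) {
			put_page(page);
			return -ENOMEM;
		}

		/* replace_page_cache_page() wants both pages locked */
		lock_page(page);
		lock_page(newpage);
		copy_highpage(newpage, page);	/* the memcpy overhead */
		err = replace_page_cache_page(page, newpage, GFP_KERNEL);
		unlock_page(newpage);
		unlock_page(page);

		/* on success the page cache holds its own ref on newpage */
		put_page(newpage);
		put_page(page);	/* old page now belongs to whoever pinned it */
		if (err)
			return err;
	}
	return 0;
}

There is also an obvious race here: a new extra reference could still be
taken between the refcount check and the point where the seal takes
effect, so a real implementation would presumably have to run this under
whatever locking prevents new pins from appearing during sealing.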

The downside, of course, is the extra memory usage and memcpy overhead
when something is holding extra references to the pages.  So whether this
is a good approach depends on:

*) Whether extra page references would happen frequently or infrequently
under various kernel configurations and usage scenarios.  I don't know
enough about the mm system to answer this myself.

*) Whether the extra memory usage and memcpy overhead could be exploited
as a DoS vector by someone who has found a way to intentionally add
extra references to the pages.

Tony Battersby