David Howells <[EMAIL PROTECTED]> wrote:
> I don't quite understand what your limitations are
(8) Handling mappings that extend beyond the end of file.
David
David Howells <[EMAIL PROTECTED]> wrote:
> I don't quite understand what your limitations are
(7) Two shared mappings on the same offset in the same file must, of
necessity, appear at the same address. This means you get two VMAs in the
rbtree at the same place.
David
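A minimal sketch of that collision, using invented names (nommu_vma and its fields are made up for illustration, not the kernel's vm_area_struct): with no MMU there is only one copy of the data, so both processes' shared mappings of the same file offset necessarily start at the same address, and an rbtree keyed purely on that address sees two equal keys.

/* Illustration only; nommu_vma is invented for this sketch. */
struct nommu_vma {
        unsigned long   start;          /* where the region sits in memory */
        unsigned long   end;
        unsigned long   pgoff;          /* offset into the backing file */
};

/*
 * Two processes both do mmap(..., MAP_SHARED, fd, 0) on the same file.
 * Without an MMU the data cannot be remapped per process, so:
 *
 *      a.start == b.start, a.pgoff == b.pgoff
 *
 * and inserting 'b' into an address-keyed rbtree lands on the same key
 * as 'a' - the tree code has to tolerate duplicate keys, or the two
 * mappings have to share a single VMA.
 */
struct nommu_vma a = { .start = 0x100000, .end = 0x104000, .pgoff = 0 };
struct nommu_vma b = { .start = 0x100000, .end = 0x104000, .pgoff = 0 };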
Eric W. Biederman <[EMAIL PROTECTED]> wrote:
> >> For shared mappings you share in some sense the page cache.
> >
> > Currently, no - not unless the driver does something clever as ramfs does.
> > Sharing through the page cache is a nice idea, but it has some limitations,
> > mainly that non-shari
David Howells <[EMAIL PROTECTED]> writes:
> Eric W. Biederman <[EMAIL PROTECTED]> wrote:
>
>> As I understand your description for non-shared mappings the VMAs are
>> per process.
>
> Are you talking about the current state of play? If so, not precisely. In
> the current scheme of things, *all*
Eric W. Biederman <[EMAIL PROTECTED]> wrote:
> As I understand your description for non-shared mappings the VMAs are
> per process.
Are you talking about the current state of play? If so, not precisely. In
the current scheme of things, *all* VMAs are kept in a global tree and are
globally avail
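For reference, the global tree being described is, roughly from memory of the 2.6-era mm/nommu.c (paraphrased, not verbatim source), a single address-keyed rbtree shared by the whole system:

/* Paraphrased from memory of 2.6-era mm/nommu.c: every VMA in the system
 * lives in this one tree, so any process's mapping can be found and
 * shared by any other. */
struct rb_root nommu_vma_tree = RB_ROOT;
DECLARE_RWSEM(nommu_vma_sem);           /* serialises access to the tree */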
I'm just trying to digest this a little.
As I understand your description for non-shared mappings the VMAs are
per process.
For shared mappings you share in some sense the page cache.
My gut feel says just keep a vma per process of the regions the
process has and do the appropriate bookkeeping.
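A minimal sketch of that suggestion, with invented names (shared_region, proc_vma) rather than the real kernel structures: each process keeps its own small VMA for the ranges it has mapped, while the shared backing store is a separate refcounted object, and that refcount is exactly the bookkeeping SYSV SHM's nattch wants.

#include <stdlib.h>

/* Invented names for illustration; not the kernel's actual types. */
struct shared_region {          /* one per shared object, e.g. a SYSV SHM segment */
        unsigned long   base;   /* where the backing memory actually sits */
        unsigned long   size;
        int             usage;  /* how many per-process VMAs reference it */
};

struct proc_vma {               /* one per process per mapping */
        struct proc_vma         *next;  /* link in that process's own list */
        unsigned long           start;  /* == region->base + offset on NOMMU */
        unsigned long           end;
        struct shared_region    *region;
};

/* Attach a process to a shared region: add a per-process VMA to its list
 * and bump the region's usage count (which is what nattch would report). */
static struct proc_vma *attach_region(struct proc_vma **list,
                                      struct shared_region *region)
{
        struct proc_vma *vma = calloc(1, sizeof(*vma));

        if (!vma)
                return NULL;
        vma->start  = region->base;
        vma->end    = region->base + region->size;
        vma->region = region;
        region->usage++;
        vma->next = *list;
        *list = vma;
        return vma;
}

Detaching would do the reverse: unlink the proc_vma, drop region->usage, and free the backing store when it reaches zero, so the count doubles as the nattch value.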
Hugh Dickins <[EMAIL PROTECTED]> wrote:
> But if "the SYSV SHM problem" you mention at the beginning
> is just the "nattch" problem you mention at the end, I doubt
> that's worth such a redesign as you're considering here.
Yes, as far as I know that's the problem. nattch is available to userspac
On Fri, 9 Mar 2007, David Howells wrote:
>
> I've been considering how to deal with the SYSV SHM problem, and I think we
> may have to move to unshared VMAs in NOMMU mode to deal with this. Currently,
> what we have is each mm_struct has in its arch-specific context argument
Robin Getz <[EMAIL PROTECTED]> wrote:
> We (noMMU) folks need to have special code anyway - so why not put it there,
> and try not to increase memory footprint?
I'd like the drivers and filesystems to need to know as little as possible
about whether they're working in MMU-mode or NOMMU-mode.
On Fri 9 Mar 2007 09:12, David Howells pondered:
> I've been considering how to deal with the SYSV SHM problem, and I think we
> may have to move to unshared VMAs in NOMMU mode to deal with this.
Thanks for putting some good thoughts down.
> Currently, what we have is each mm_str
I've been considering how to deal with the SYSV SHM problem, and I think we
may have to move to unshared VMAs in NOMMU mode to deal with this. Currently,
what we have is each mm_struct has in its arch-specific context argument a
list of VMLs. Take the FRV context for example:
[in
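The FRV example is cut off above; from memory, the NOMMU side hangs a singly linked list of these off each mm's context, roughly as follows (paraphrased from the 2.6-era include/linux/mm.h, not verbatim):

/* Paraphrased from memory of the 2.6-era NOMMU definitions: each mm's
 * arch-specific context carries a list of these, one per mapping the
 * process holds, each pointing at a vm_area_struct that may also appear
 * on other processes' lists. */
struct vm_area_struct;                  /* the region itself, shareable */

struct vm_list_struct {
        struct vm_list_struct   *next;  /* next mapping in this mm's list */
        struct vm_area_struct   *vma;
};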