On Wed, 6 Feb 2008, Andrea Arcangeli wrote:
> > You can of course set up a 2M granularity lock to get the same granularity
> > as the pte lock. That would even work for the cases where you have to
> > page-pin now.
>
> If you set a 2M granularity lock, the _start callback would need to
> do:
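
A minimal sketch of the 2M granularity idea, with made-up names
(my_region, chunk_locks, region_lock_range) rather than code from any
posted patch: the _start callback takes every 2M chunk lock covering
the invalidated range, _end releases them, and taking the locks in
ascending order keeps two overlapping invalidates from deadlocking.

	#include <linux/mutex.h>

	#define CHUNK_SHIFT	21		/* 2M, pmd granularity */

	struct my_region {
		unsigned long	start;		/* 2M-aligned base */
		struct mutex	*chunk_locks;	/* one mutex per 2M chunk */
	};

	/* Called from _start; the range is assumed clamped to the region. */
	static void region_lock_range(struct my_region *r,
				      unsigned long start, unsigned long end)
	{
		unsigned long i = (start - r->start) >> CHUNK_SHIFT;
		unsigned long last = (end - 1 - r->start) >> CHUNK_SHIFT;

		for (; i <= last; i++)		/* ascending order */
			mutex_lock(&r->chunk_locks[i]);
	}

	/* Called from _end, releasing exactly what _start took. */
	static void region_unlock_range(struct my_region *r,
					unsigned long start, unsigned long end)
	{
		unsigned long i = (start - r->start) >> CHUNK_SHIFT;
		unsigned long last = (end - 1 - r->start) >> CHUNK_SHIFT;

		for (; i <= last; i++)
			mutex_unlock(&r->chunk_locks[i]);
	}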
>
On Tue, Feb 05, 2008 at 03:10:52PM -0800, Christoph Lameter wrote:
> On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
>
> > > You can avoid the page-pin and the pt lock completely by zapping the
> > > mappings at _start and then holding off new references until _end.
> >
> > "holding off new referenc
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> > You can avoid the page-pin and the pt lock completely by zapping the
> > mappings at _start and then holding off new references until _end.
>
> "holding off new references until _end" = per-range mutex less scalar
> and more expensive than the PT l
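
For comparison, the fault side of such a hypothetical per-range mutex
scheme, reusing the my_region sketch above: a secondary-MMU driver
(GRU/XPMEM-style) would take the same chunk mutex before instantiating
an external pte, so no new references can appear between _start and
_end. establish_external_pte() is a made-up helper, not a real API.

	/* Hypothetical helper: instantiate the external (secondary-MMU) pte. */
	int establish_external_pte(struct my_region *r, unsigned long address);

	static int my_secondary_fault(struct my_region *r, unsigned long address)
	{
		unsigned long i = (address - r->start) >> CHUNK_SHIFT;
		int ret;

		/* Blocks here while an invalidate of this chunk is running. */
		mutex_lock(&r->chunk_locks[i]);
		ret = establish_external_pte(r, address);
		mutex_unlock(&r->chunk_locks[i]);
		return ret;
	}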
On Tue, Feb 05, 2008 at 02:06:23PM -0800, Christoph Lameter wrote:
> On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
>
> > On Tue, Feb 05, 2008 at 10:17:41AM -0800, Christoph Lameter wrote:
> > > The other approach will not have any remote ptes at that point. Why would
> > > there be a coherency issue?
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> On Tue, Feb 05, 2008 at 10:17:41AM -0800, Christoph Lameter wrote:
> > The other approach will not have any remote ptes at that point. Why would
> > there be a coherency issue?
>
> It never happens that two threads write to two different physical
> pages by working on the same process virtual address. …
On Tue, Feb 05, 2008 at 10:17:41AM -0800, Christoph Lameter wrote:
> The other approach will not have any remote ptes at that point. Why would
> there be a coherency issue?
It never happens that two threads write to two different physical
pages by working on the same process virtual address. This …
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> given I never allow a coherency-loss between two threads that will
> read/write to two different physical pages for the same virtual
> address in remap_file_pages).
The other approach will not have any remote ptes at that point. Why would
there be a coherency issue?
On Mon, Feb 04, 2008 at 10:11:24PM -0800, Christoph Lameter wrote:
> Zero problems only if you find having a single callout for every page
> acceptable. So the invalidate_range in your patch is only working …
invalidate_pages is only a further optimization that was
straightforward in some places …
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> On Mon, Feb 04, 2008 at 11:09:01AM -0800, Christoph Lameter wrote:
> > On Sun, 3 Feb 2008, Andrea Arcangeli wrote:
> >
> > > > Right but that pin requires taking a refcount which we cannot do.
> > >
> > > GRU can use my patch without the pin. XPMEM obviously can't use my patch …
On Mon, Feb 04, 2008 at 11:09:01AM -0800, Christoph Lameter wrote:
> On Sun, 3 Feb 2008, Andrea Arcangeli wrote:
>
> > > Right but that pin requires taking a refcount which we cannot do.
> >
> > GRU can use my patch without the pin. XPMEM obviously can't use my
> > patch as my invalidate_page[s] are under the PT lock …
On Sun, 3 Feb 2008, Andrea Arcangeli wrote:
> > Right but that pin requires taking a refcount which we cannot do.
>
> GRU can use my patch without the pin. XPMEM obviously can't use my
> patch as my invalidate_page[s] are under the PT lock (a feature to fit
> GRU/KVM in the simplest way), this is …
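
What "invalidate_page[s] under the PT lock" amounts to, as a simplified
sketch rather than the literal #v5 patch: the notifier fires while the
pte spinlock is still held, so a KVM/GRU-style secondary fault that
serializes on the same lock can never install a stale external mapping.

	#include <linux/mm.h>
	#include <asm/tlbflush.h>

	static void zap_one_pte(struct mm_struct *mm, struct vm_area_struct *vma,
				pmd_t *pmd, unsigned long address)
	{
		spinlock_t *ptl;
		pte_t *ptep = pte_offset_map_lock(mm, pmd, address, &ptl);

		ptep_get_and_clear(mm, address, ptep);
		flush_tlb_page(vma, address);
		/*
		 * Still under the pte lock: the secondary MMU drops its
		 * mapping before any fault can look this pte up again.
		 */
		mmu_notifier_invalidate_page(mm, address);
		pte_unmap_unlock(ptep, ptl);
	}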
On Sat, Feb 02, 2008 at 09:14:57PM -0600, Jack Steiner wrote:
> Also, most (but not all) applications that use the GRU do not usually do
> anything that requires frequent flushing (fortunately). The GRU is intended
> for HPC-like applications. These don't usually do frequent map/unmap
> operations
On Sun, Feb 03, 2008 at 03:17:04AM +0100, Andrea Arcangeli wrote:
> On Fri, Feb 01, 2008 at 11:23:57AM -0800, Christoph Lameter wrote:
> > Yes so your invalidate_range is still some sort of dysfunctional
> > optimization? Gazillions of invalidate_page's will have to be executed
> > when tearing down large memory areas.
On Fri, Feb 01, 2008 at 11:23:57AM -0800, Christoph Lameter wrote:
> Yes so your invalidate_range is still some sort of dysfunctional
> optimization? Gazillions of invalidate_page's will have to be executed
> when tearing down large memory areas.
I don't know if GRU can flush the external TLB …
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> Note that my #v5 doesn't require to increase the page count all the
> time, so GRU will work fine with #v5.
But that comes with the cost of firing invalidate_page for every page
being evicted. In order to make your single invalidate_range work without …
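
The cost trade-off being argued here, sketched with stand-in functions
(zap_range is hypothetical; the callout names follow the proposed APIs,
not a final interface):

	#include <linux/mm.h>

	/* Stand-in for the real pte teardown of [start, end). */
	void zap_range(struct mm_struct *mm, unsigned long start, unsigned long end);

	static void teardown_per_page(struct mm_struct *mm,
				      unsigned long start, unsigned long end)
	{
		unsigned long addr;

		/* #v5 style: one callout per evicted page. */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			mmu_notifier_invalidate_page(mm, addr);
	}

	static void teardown_ranged(struct mm_struct *mm,
				    unsigned long start, unsigned long end)
	{
		/*
		 * Range style: a single callout pair brackets the whole
		 * zap, so a remote partition can tear everything down
		 * in one operation.
		 */
		mmu_notifier_invalidate_range_start(mm, start, end);
		zap_range(mm, start, end);
		mmu_notifier_invalidate_range_end(mm, start, end);
	}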
On Thu, Jan 31, 2008 at 05:44:24PM -0800, Christoph Lameter wrote:
> The trouble is that the invalidates are much more expensive if you have to
> send these to remote partitions (XPmem). And it's really great if you can
> simply tear down everything. Certainly this is a significant improvement
> …
On Thu, Jan 31, 2008 at 05:37:21PM -0800, Christoph Lameter wrote:
> On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
>
> > I appreciate the review! I hope my entirely bug-free and
> > straightforward #v5 will strongly increase the probability of getting
> > this in sooner than later. If nothing else …
On Thu, 31 Jan 2008, Robin Holt wrote:
> > Mutex locking? Could you be more specific?
>
> I think he is talking about the external locking that xpmem will need
> to do to ensure we are not able to refault pages inside of regions that
> are undergoing recall/page table clearing. At least that has …
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> GRU. Thanks to the PT lock this remains a totally obviously safe
> design and it requires zero additional locking anywhere (not in the Linux VM,
> nor in the mmu notifier methods, nor in the KVM/GRU page fault).
Na. I would not be so sure about having caught …
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> I appreciate the review! I hope my entirely bug-free and
> straightforward #v5 will strongly increase the probability of getting
> this in sooner than later. If nothing else it shows the approach I
> prefer to cover GRU/KVM 100%, leaving the overkill …
On Thu, Jan 31, 2008 at 03:09:55PM -0800, Christoph Lameter wrote:
> On Thu, 31 Jan 2008, Christoph Lameter wrote:
>
> > > pagefault against the main linux page fault, given we already have all
> > > needed serialization out of the PT lock. XPMEM is forced to do that
> >
> > pt lock cannot serialize with invalidate_range since it is split. …
On Thu, Jan 31, 2008 at 12:18:54PM -0800, Christoph Lameter wrote:
> pt lock cannot serialize with invalidate_range since it is split. A range
> requires locking for a series of ptes not only individual ones.
The lock I take already protects up to 512 ptes, yes. I call
invalidate_pages only across …
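
Roughly what that batching means, as a sketch simplified from the idea
in #v5 rather than the literal patch: the split pte lock protects one
page-table page, up to 512 ptes on x86-64, so a single
invalidate_pages() callout can cover everything that lock guards.

	#include <linux/mm.h>

	/* Caller guarantees [addr, end) stays within the one pte page
	 * mapped by this pmd, so a single split lock covers it all. */
	static void zap_pte_batch(struct mm_struct *mm, pmd_t *pmd,
				  unsigned long addr, unsigned long end)
	{
		unsigned long start = addr;
		spinlock_t *ptl;
		pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

		do {
			ptep_get_and_clear(mm, addr, pte);
		} while (pte++, addr += PAGE_SIZE, addr != end);

		/* One callout for the whole pte page, still under its lock. */
		mmu_notifier_invalidate_pages(mm, start, end);
		pte_unmap_unlock(pte - 1, ptl);
	}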
On Thu, 31 Jan 2008, Christoph Lameter wrote:
> > pagefault against the main linux page fault, given we already have all
> > needed serialization out of the PT lock. XPMEM is forced to do that
>
> pt lock cannot serialize with invalidate_range since it is split. A range
> requires locking for a series of ptes, not only individual ones.
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> My suggestion is to add the invalidate_range_start/end incrementally
> with this, and to keep all the xpmem mmu notifiers in a separate
> incremental patch (those are going to require many more changes to
> perfect). They're very different things. GRU …
GRU should implement at least invalidate_page and invalidate_pages,
and it should be mostly covered with that.
My suggestion is to add the invalidate_range_start/end incrementally
with this, and to keep all the xpmem mmu notifiers in a separate
incremental patch (those are going to require many more changes to perfect). …
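
A minimal sketch of that suggestion for GRU; the ops layout mirrors the
API under discussion, and gru_flush_tlb_range() is a hypothetical
stand-in for whatever flushes the GRU's external TLB, not the actual
driver code.

	/* Hypothetical: flush the GRU's external TLB for [start, end). */
	void gru_flush_tlb_range(struct mm_struct *mm,
				 unsigned long start, unsigned long end);

	static void gru_invalidate_page(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long address)
	{
		gru_flush_tlb_range(mm, address, address + PAGE_SIZE);
	}

	static void gru_invalidate_pages(struct mmu_notifier *mn,
					 struct mm_struct *mm,
					 unsigned long start, unsigned long end)
	{
		gru_flush_tlb_range(mm, start, end);
	}

	static const struct mmu_notifier_ops gru_notifier_ops = {
		.invalidate_page	= gru_invalidate_page,
		.invalidate_pages	= gru_invalidate_pages,
	};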