On Wed, Oct 23, 2019 at 12:52:23PM -0400, Jerome Glisse wrote:
> > Going another step further what hinders us to put the lock into the mmu
> > range notifier itself and have _lock()/_unlock() helpers?
> >
> > I mean having the lock in the driver only makes sense when the driver would
> > be us[…]
On Wed, Oct 23, 2019 at 05:24:45PM +, Jason Gunthorpe wrote:
> mlx5 is similar, but not currently coded quite right, there is one
> lock that protects the command queue for submitting invalidations to
> the HW and it doesn't make a lot of sense to have additional fine-grained
> locking beyond t[…]
On Mon, Oct 21, 2019 at 07:06:00PM +, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:40:41PM -0400, Jerome Glisse wrote:
> > On Tue, Oct 15, 2019 at 03:12:27PM -0300, Jason Gunthorpe wrote:
> > > From: Jason Gunthorpe
> > >
> > > 8 of the mmu_notifier using drivers (i915_gem, radeon_mn,[…]
On Wed, Oct 23, 2019 at 11:32:16AM +0200, Christian König wrote:
> On 23.10.19 at 11:08 Daniel Vetter wrote:
> > On Tue, Oct 22, 2019 at 03:01:13PM +, Jason Gunthorpe wrote:
> > > On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
> > >
> > > > > The unusual bit in all of this is[…]
On 23.10.19 at 11:08 Daniel Vetter wrote:
> On Tue, Oct 22, 2019 at 03:01:13PM +, Jason Gunthorpe wrote:
>> On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
>>>> The unusual bit in all of this is using a lock's critical region to
>>>> 'protect' data for read, but updating that same data b[…]
On Tue, Oct 22, 2019 at 03:01:13PM +, Jason Gunthorpe wrote:
> On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
>
> > > The unusual bit in all of this is using a lock's critical region to
> > > 'protect' data for read, but updating that same data before the lock's
> > > critical[…]
On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
> > The unusual bit in all of this is using a lock's critical region to
> > 'protect' data for read, but updating that same data before the lock's
> > critical section, i.e. relying on the unlock barrier to 'release' program
> > ordered s[…]
On Tue, Oct 22, 2019 at 07:56:12AM -0400, Dennis Dalessandro wrote:
> On 10/21/2019 12:58 PM, Jason Gunthorpe wrote:
> > On Mon, Oct 21, 2019 at 11:55:51AM -0400, Dennis Dalessandro wrote:
> > > On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> > > > This is still being tested, but I figured to send it to start getting
> > > > help from the xen, amd and hfi drivers which I cannot test here.
On 10/21/2019 12:58 PM, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 11:55:51AM -0400, Dennis Dalessandro wrote:
>> On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
>>> This is still being tested, but I figured to send it to start getting help
>>> from the xen, amd and hfi drivers which I cannot test here.
On Mon, Oct 21, 2019 at 03:12:26PM +, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:28:46PM +, Koenig, Christian wrote:
> > On 21.10.19 at 15:57 Jason Gunthorpe wrote:
> > > On Sun, Oct 20, 2019 at 02:21:42PM +, Koenig, Christian wrote:
> > >> On 18.10.19 at 22:36 Jason Gunthorpe wrote:
On Mon, Oct 21, 2019 at 02:40:41PM -0400, Jerome Glisse wrote:
> On Tue, Oct 15, 2019 at 03:12:27PM -0300, Jason Gunthorpe wrote:
> > From: Jason Gunthorpe
> >
> > 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> > scif_dma, vhost, gntdev, hmm) drivers are using a common[…]
On Mon, Oct 21, 2019 at 02:28:46PM +, Koenig, Christian wrote:
> On 21.10.19 at 15:57 Jason Gunthorpe wrote:
> > On Sun, Oct 20, 2019 at 02:21:42PM +, Koenig, Christian wrote:
> >> On 18.10.19 at 22:36 Jason Gunthorpe wrote:
> >>> On Thu, Oct 17, 2019 at 04:47:20PM +, Koenig, Christian wrote:
On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> This is still being tested, but I figured to send it to start getting help
> from the xen, amd and hfi drivers which I cannot test here.
Sorry for the delay, I never seen this. Was not on Cc list and didn't
register to me it impacted hfi. I'll take a[…]
On Sun, Oct 20, 2019 at 02:21:42PM +, Koenig, Christian wrote:
> On 18.10.19 at 22:36 Jason Gunthorpe wrote:
> > On Thu, Oct 17, 2019 at 04:47:20PM +, Koenig, Christian wrote:
> >
> >>> get_user_pages/hmm_range_fault() and invalidate_range_start() both are
> >>> called while holding mm->mmap_sem, so they are always serialized.
On Mon, Oct 21, 2019 at 11:55:51AM -0400, Dennis Dalessandro wrote:
> On 10/15/2019 2:12 PM, Jason Gunthorpe wrote:
> > This is still being tested, but I figured to send it to start getting help
> > from the xen, amd and hfi drivers which I cannot test here.
>
> Sorry for the delay, I never seen this. Was not on Cc list and didn't
> register to me it impacted hfi. I'll take a[…]
On Tue, Oct 15, 2019 at 03:12:27PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
>
> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> > they only use invalidate_range_start/end and immediately[…]
On 21.10.19 at 15:57 Jason Gunthorpe wrote:
> On Sun, Oct 20, 2019 at 02:21:42PM +, Koenig, Christian wrote:
>> On 18.10.19 at 22:36 Jason Gunthorpe wrote:
>>> On Thu, Oct 17, 2019 at 04:47:20PM +, Koenig, Christian wrote:
>>> [SNIP]
>>>
>>>> So again how are they serialized?
>>> Th[…]
On 18.10.19 at 22:36 Jason Gunthorpe wrote:
> On Thu, Oct 17, 2019 at 04:47:20PM +, Koenig, Christian wrote:
>
>>> get_user_pages/hmm_range_fault() and invalidate_range_start() both are
>>> called while holding mm->mmap_sem, so they are always serialized.
>> Not even remotely.
>>
>> For calling get_user_pages()/hmm_range_fault() you only need[…]
On Thu, Oct 17, 2019 at 04:47:20PM +, Koenig, Christian wrote:
> > get_user_pages/hmm_range_fault() and invalidate_range_start() both are
> > called while holding mm->mmap_sem, so they are always serialized.
>
> Not even remotely.
>
> For calling get_user_pages()/hmm_range_fault() you only need[…]
Sending once more as text.
On 17.10.19 at 18:26 Yang, Philip wrote:
> On 2019-10-17 4:54 a.m., Christian König wrote:
>> On 16.10.19 at 18:04 Jason Gunthorpe wrote:
>>> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>>>> On 15.10.19 at 20:12 Jason Gunthorpe wrote:
>>>>> From: Jason Gunthorpe[…]
On 17.10.2019 18:26, "Yang, Philip" wrote:
On 2019-10-17 4:54 a.m., Christian König wrote:
> On 16.10.19 at 18:04 Jason Gunthorpe wrote:
>> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>>> On 15.10.19 at 20:12 Jason Gunthorpe wrote:
>>>> From: Jason Gunthorpe
>>>>[…]
On 2019-10-17 4:54 a.m., Christian König wrote:
> On 16.10.19 at 18:04 Jason Gunthorpe wrote:
>> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>>> On 15.10.19 at 20:12 Jason Gunthorpe wrote:
>>>> From: Jason Gunthorpe
>>>> 8 of the mmu_notifier using drivers (i915_gem[…]
On 16.10.19 at 18:04 Jason Gunthorpe wrote:
> On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
>> On 15.10.19 at 20:12 Jason Gunthorpe wrote:
>>> From: Jason Gunthorpe
>>> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
>>> scif_dma, vhost, gntdev, hmm) drivers are[…]
On Wed, Oct 16, 2019 at 10:58:02AM +0200, Christian König wrote:
> On 15.10.19 at 20:12 Jason Gunthorpe wrote:
> > From: Jason Gunthorpe
> >
> > 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> > scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
>
On 15.10.19 at 20:12 Jason Gunthorpe wrote:
> From: Jason Gunthorpe
> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
> they only use invalidate_range_start/end and immediately check the
> invalidating range[…]
From: Jason Gunthorpe
8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if[…]