Re: [Intel-gfx] [PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

2021-07-14 Thread Daniel Vetter
On Wed, Jul 14, 2021 at 11:01 PM Jason Ekstrand  wrote:
>
> On Tue, Jul 13, 2021 at 10:23 AM Daniel Vetter  wrote:
> >
> > On Tue, Jul 13, 2021 at 04:06:13PM +0100, Matthew Auld wrote:
> > > On Tue, 13 Jul 2021 at 15:44, Daniel Vetter  wrote:
> > > >
> > > > On Mon, Jul 12, 2021 at 06:12:34PM -0500, Jason Ekstrand wrote:
> > > > > From: Thomas Hellström 
> > > > >
> > > > > Until we support p2p dma or as a complement to that, migrate data
> > > > > to system memory at dma-buf attach time if possible.
> > > > >
> > > > > v2:
> > > > > - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> > > > >   selftest to migrate if we are LMEM capable.
> > > > > v3:
> > > > > - Migrate also in the pin() callback.
> > > > > v4:
> > > > > - Migrate in attach
> > > > > v5: (jason)
> > > > > - Lock around the migration
> > > > >
> > > > > Signed-off-by: Thomas Hellström 
> > > > > Signed-off-by: Michael J. Ruhl 
> > > > > Reported-by: kernel test robot 
> > > > > Signed-off-by: Jason Ekstrand 
> > > > > Reviewed-by: Jason Ekstrand 
> > > > > ---
> > > > >  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c| 25 ++-
> > > > >  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
> > > > >  2 files changed, 27 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > > > index 9a655f69a0671..3163f00554476 100644
> > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > > > @@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> > > > > struct dma_buf_attachment *attach)
> > > > >  {
> > > > >   struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > > > > + struct i915_gem_ww_ctx ww;
> > > > > + int err;
> > > > > +
> > > > > + for_i915_gem_ww(&ww, err, true) {
> > > > > + err = i915_gem_object_lock(obj, &ww);
> > > > > + if (err)
> > > > > + continue;
> > > > > +
> > > > > + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
> > > > > + err = -EOPNOTSUPP;
> > > > > + continue;
> > > > > + }
> > > > > +
> > > > > + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> > > > > + if (err)
> > > > > + continue;
> > > > >
> > > > > - return i915_gem_object_pin_pages_unlocked(obj);
> > > > > + err = i915_gem_object_wait_migration(obj, 0);
> > > > > + if (err)
> > > > > + continue;
> > > > > +
> > > > > + err = i915_gem_object_pin_pages(obj);
> > > > > + }
> > > > > +
> > > > > + return err;
> > > > >  }
> > > > >
> > > > >  static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > > > > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > > > index 3dc0f8b3cdab0..4f7e77b1c0152 100644
> > > > > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > > > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > > > @@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
> > > > >   int err;
> > > > >
> > > > >   force_different_devices = true;
> > > > > - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > > > > + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
> > > >
> > > > I'm wondering (and couldn't answer) whether this creates an lmem+smem
> > > > buffer, since if we create an lmem-only buffer then the migration above
> > > > should fail.
> > >
> > > It's lmem-only, but it's also a kernel internal object, so the
> > > migration path will still happily migrate it if asked. On the other
> > > hand if it's a userspace object then we always have to respect the
> > > placements.
> > >
> > > I think for now the only use case for that is in the selftests.
> >
> > Yeah I've read the kerneldoc, it's all nicely documented but feels a bit
> > dangerous. What I proposed on irc:
> > - i915_gem_object_migrate does the placement check, i.e. as strict as
> >   can_migrate.
> > - A new __i915_gem_object_migrate is for selftests that do special stuff.
>
> I just sent out a patch which does this, except we don't actually need
> the __ version because there are no self-tests that want to do a
> dangerous migrate.  We could add such a helper later if we need it.
>
> > - In the import selftest we check that lmem-only fails (because we can't
> >   pin it into smem) for a non-dynamic importer, but lmem+smem works and
> >   gets migrated.
>
> I think we maybe want multiple things here?  The test we have right
> now is useful because, by creating an internal LMEM buffer we ensure
> that the migration actually happens.  If we create LMEM+SMEM, then

Re: [Intel-gfx] [PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

2021-07-14 Thread Jason Ekstrand
On Tue, Jul 13, 2021 at 10:23 AM Daniel Vetter  wrote:
>
> On Tue, Jul 13, 2021 at 04:06:13PM +0100, Matthew Auld wrote:
> > On Tue, 13 Jul 2021 at 15:44, Daniel Vetter  wrote:
> > >
> > > On Mon, Jul 12, 2021 at 06:12:34PM -0500, Jason Ekstrand wrote:
> > > > From: Thomas Hellström 
> > > >
> > > > Until we support p2p dma or as a complement to that, migrate data
> > > > to system memory at dma-buf attach time if possible.
> > > >
> > > > v2:
> > > > - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> > > >   selftest to migrate if we are LMEM capable.
> > > > v3:
> > > > - Migrate also in the pin() callback.
> > > > v4:
> > > > - Migrate in attach
> > > > v5: (jason)
> > > > - Lock around the migration
> > > >
> > > > Signed-off-by: Thomas Hellström 
> > > > Signed-off-by: Michael J. Ruhl 
> > > > Reported-by: kernel test robot 
> > > > Signed-off-by: Jason Ekstrand 
> > > > Reviewed-by: Jason Ekstrand 
> > > > ---
> > > >  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c| 25 ++-
> > > >  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
> > > >  2 files changed, 27 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > > index 9a655f69a0671..3163f00554476 100644
> > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > > @@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> > > > struct dma_buf_attachment *attach)
> > > >  {
> > > >   struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > > > + struct i915_gem_ww_ctx ww;
> > > > + int err;
> > > > +
> > > > + for_i915_gem_ww(&ww, err, true) {
> > > > + err = i915_gem_object_lock(obj, &ww);
> > > > + if (err)
> > > > + continue;
> > > > +
> > > > + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
> > > > + err = -EOPNOTSUPP;
> > > > + continue;
> > > > + }
> > > > +
> > > > + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> > > > + if (err)
> > > > + continue;
> > > >
> > > > - return i915_gem_object_pin_pages_unlocked(obj);
> > > > + err = i915_gem_object_wait_migration(obj, 0);
> > > > + if (err)
> > > > + continue;
> > > > +
> > > > + err = i915_gem_object_pin_pages(obj);
> > > > + }
> > > > +
> > > > + return err;
> > > >  }
> > > >
> > > >  static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > > > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > > index 3dc0f8b3cdab0..4f7e77b1c0152 100644
> > > > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > > @@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
> > > >   int err;
> > > >
> > > >   force_different_devices = true;
> > > > - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > > > + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
> > >
> > > I'm wondering (and couldn't answer) whether this creates an lmem+smem
> > > buffer, since if we create an lmem-only buffer then the migration above
> > > should fail.
> >
> > It's lmem-only, but it's also a kernel internal object, so the
> > migration path will still happily migrate it if asked. On the other
> > hand if it's a userspace object then we always have to respect the
> > placements.
> >
> > I think for now the only use case for that is in the selftests.
>
> Yeah I've read the kerneldoc, it's all nicely documented but feels a bit
> dangerous. What I proposed on irc:
> - i915_gem_object_migrate does the placement check, i.e. as strict as
>   can_migrate.
> - A new __i915_gem_object_migrate is for selftests that do special stuff.

I just sent out a patch which does this, except we don't actually need
the __ version because there are no self-tests that want to do a
dangerous migrate.  We could add such a helper later if we need it.

> - In the import selftest we check that lmem-only fails (because we can't
>   pin it into smem) for a non-dynamic importer, but lmem+smem works and
>   gets migrated.

I think we maybe want multiple things here?  The test we have right
now is useful because, by creating an internal LMEM buffer we ensure
that the migration actually happens.  If we create LMEM+SMEM, then
it's possible it'll start off in SMEM and the migration would be a
no-op.  Not sure how likely that is in reality in a self-test
environment, though.
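
For illustration, one way to keep that "the migration really happened"
guarantee even with an LMEM+SMEM object would be to assert the starting
placement in the selftest before exporting.  Rough, untested sketch
(reusing the existing i915_gem_object_is_lmem() helper and the test's
out_ret path):

	/*
	 * Hypothetical guard: if the exported object does not actually
	 * start out in LMEM, the import below migrates nothing and the
	 * test silently degenerates into a no-op.
	 */
	if (!i915_gem_object_is_lmem(obj)) {
		pr_err("object not initially placed in LMEM\n");
		err = -EINVAL;
		goto out_ret;
	}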

--Jason

> - Once we have dynamic dma-buf for p2p pci, then we'll have another
>   selftest which checks that things work for lmem only if and only if the
>   importer is dynamic and has set the allow_p2p flag.

Re: [Intel-gfx] [PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

2021-07-13 Thread Daniel Vetter
On Tue, Jul 13, 2021 at 04:06:13PM +0100, Matthew Auld wrote:
> On Tue, 13 Jul 2021 at 15:44, Daniel Vetter  wrote:
> >
> > On Mon, Jul 12, 2021 at 06:12:34PM -0500, Jason Ekstrand wrote:
> > > From: Thomas Hellström 
> > >
> > > Until we support p2p dma or as a complement to that, migrate data
> > > to system memory at dma-buf attach time if possible.
> > >
> > > v2:
> > > - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> > >   selftest to migrate if we are LMEM capable.
> > > v3:
> > > - Migrate also in the pin() callback.
> > > v4:
> > > - Migrate in attach
> > > v5: (jason)
> > > - Lock around the migration
> > >
> > > Signed-off-by: Thomas Hellström 
> > > Signed-off-by: Michael J. Ruhl 
> > > Reported-by: kernel test robot 
> > > Signed-off-by: Jason Ekstrand 
> > > Reviewed-by: Jason Ekstrand 
> > > ---
> > >  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c| 25 ++-
> > >  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
> > >  2 files changed, 27 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > index 9a655f69a0671..3163f00554476 100644
> > > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > > @@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> > > struct dma_buf_attachment *attach)
> > >  {
> > >   struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > > + struct i915_gem_ww_ctx ww;
> > > + int err;
> > > +
> > > + for_i915_gem_ww(&ww, err, true) {
> > > + err = i915_gem_object_lock(obj, &ww);
> > > + if (err)
> > > + continue;
> > > +
> > > + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
> > > + err = -EOPNOTSUPP;
> > > + continue;
> > > + }
> > > +
> > > + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> > > + if (err)
> > > + continue;
> > >
> > > - return i915_gem_object_pin_pages_unlocked(obj);
> > > + err = i915_gem_object_wait_migration(obj, 0);
> > > + if (err)
> > > + continue;
> > > +
> > > + err = i915_gem_object_pin_pages(obj);
> > > + }
> > > +
> > > + return err;
> > >  }
> > >
> > >  static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > index 3dc0f8b3cdab0..4f7e77b1c0152 100644
> > > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > > @@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
> > >   int err;
> > >
> > >   force_different_devices = true;
> > > - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > > + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
> >
> > I'm wondering (and couldn't answer) whether this creates an lmem+smem
> > buffer, since if we create an lmem-only buffer then the migration above
> > should fail.
> 
> It's lmem-only, but it's also a kernel internal object, so the
> migration path will still happily migrate it if asked. On the other
> hand if it's a userspace object then we always have to respect the
> placements.
> 
> I think for now the only use case for that is in the selftests.

Yeah I've read the kerneldoc, it's all nicely documented but feels a bit
dangerous. What I proposed on irc:
- i915_gem_object_migrate does the placement check, i.e. as strict as
  can_migrate.
- A new __i915_gem_object_migrate is for selftests that do special stuff.
- In the import selftest we check that lmem-only fails (because we can't
  pin it into smem) for a non-dynamic importer, but lmem+smem works and
  gets migrated.
- Once we have dynamic dma-buf for p2p pci, then we'll have another
  selftest which checks that things work for lmem only if and only if the
  importer is dynamic and has set the allow_p2p flag.

We could also add the can_migrate check everywhere (including
dma_buf->attach), but that feels like the less safe API.
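
To make the split concrete, an untested sketch (where
__i915_gem_object_migrate is simply today's unchecked implementation, and
the errno choice is only illustrative):

	int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
				    struct i915_gem_ww_ctx *ww,
				    enum intel_region_id id)
	{
		/* Refuse placements the object was not created with. */
		if (!i915_gem_object_can_migrate(obj, id))
			return -EINVAL;

		return __i915_gem_object_migrate(obj, ww, id);
	}

Selftests that really need to override the placement list would then call
__i915_gem_object_migrate() directly.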
-Daniel


> 
> >
> > Which I'm also not sure we have a testcase for either ...
> >
> > I tried to read some code here, but got a bit lost. Ideas?
> > -Daniel
> >
> > > + if (IS_ERR(obj))
> > > + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > >   if (IS_ERR(obj))
> > >   goto out_ret;
> > >
> > > --
> > > 2.31.1
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [Intel-gfx] [PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

2021-07-13 Thread Matthew Auld
On Tue, 13 Jul 2021 at 15:44, Daniel Vetter  wrote:
>
> On Mon, Jul 12, 2021 at 06:12:34PM -0500, Jason Ekstrand wrote:
> > From: Thomas Hellström 
> >
> > Until we support p2p dma or as a complement to that, migrate data
> > to system memory at dma-buf attach time if possible.
> >
> > v2:
> > - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> >   selftest to migrate if we are LMEM capable.
> > v3:
> > - Migrate also in the pin() callback.
> > v4:
> > - Migrate in attach
> > v5: (jason)
> > - Lock around the migration
> >
> > Signed-off-by: Thomas Hellström 
> > Signed-off-by: Michael J. Ruhl 
> > Reported-by: kernel test robot 
> > Signed-off-by: Jason Ekstrand 
> > Reviewed-by: Jason Ekstrand 
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c| 25 ++-
> >  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
> >  2 files changed, 27 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > index 9a655f69a0671..3163f00554476 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > @@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> > struct dma_buf_attachment *attach)
> >  {
> >   struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > + struct i915_gem_ww_ctx ww;
> > + int err;
> > +
> > + for_i915_gem_ww(&ww, err, true) {
> > + err = i915_gem_object_lock(obj, &ww);
> > + if (err)
> > + continue;
> > +
> > + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
> > + err = -EOPNOTSUPP;
> > + continue;
> > + }
> > +
> > + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> > + if (err)
> > + continue;
> >
> > - return i915_gem_object_pin_pages_unlocked(obj);
> > + err = i915_gem_object_wait_migration(obj, 0);
> > + if (err)
> > + continue;
> > +
> > + err = i915_gem_object_pin_pages(obj);
> > + }
> > +
> > + return err;
> >  }
> >
> >  static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > index 3dc0f8b3cdab0..4f7e77b1c0152 100644
> > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > @@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
> >   int err;
> >
> >   force_different_devices = true;
> > - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
>
> I'm wondering (and couldn't answer) whether this creates an lmem+smem
> buffer, since if we create an lmem-only buffer then the migration above
> should fail.

It's lmem-only, but it's also a kernel internal object, so the
migration path will still happily migrate it if asked. On the other
hand if it's a userspace object then we always have to respect the
placements.

I think for now the only use case for that is in the selftests.
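
Roughly speaking, the distinction is whether the object carries a
userspace-supplied placement list that migration has to honour; kernel
internal objects don't have one.  As pseudocode only (field names from
memory, not the actual check in the driver):

	static bool placement_allows(const struct drm_i915_gem_object *obj,
				     enum intel_region_id id)
	{
		int i;

		/* Kernel internal object: no userspace placements to respect. */
		if (!obj->mm.n_placements)
			return true;

		for (i = 0; i < obj->mm.n_placements; i++)
			if (obj->mm.placements[i]->id == id)
				return true;

		return false;
	}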

>
> Which I'm also not sure we have a testcase for either ...
>
> I tried to read some code here, but got a bit lost. Ideas?
> -Daniel
>
> > + if (IS_ERR(obj))
> > + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> >   if (IS_ERR(obj))
> >   goto out_ret;
> >
> > --
> > 2.31.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch


Re: [Intel-gfx] [PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

2021-07-13 Thread Daniel Vetter
On Mon, Jul 12, 2021 at 06:12:34PM -0500, Jason Ekstrand wrote:
> From: Thomas Hellström 
> 
> Until we support p2p dma or as a complement to that, migrate data
> to system memory at dma-buf attach time if possible.
> 
> v2:
> - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
>   selftest to migrate if we are LMEM capable.
> v3:
> - Migrate also in the pin() callback.
> v4:
> - Migrate in attach
> v5: (jason)
> - Lock around the migration
> 
> Signed-off-by: Thomas Hellström 
> Signed-off-by: Michael J. Ruhl 
> Reported-by: kernel test robot 
> Signed-off-by: Jason Ekstrand 
> Reviewed-by: Jason Ekstrand 
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c| 25 ++-
>  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
>  2 files changed, 27 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index 9a655f69a0671..3163f00554476 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> struct dma_buf_attachment *attach)
>  {
>   struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> + struct i915_gem_ww_ctx ww;
> + int err;
> +
> + for_i915_gem_ww(&ww, err, true) {
> + err = i915_gem_object_lock(obj, &ww);
> + if (err)
> + continue;
> +
> + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
> + err = -EOPNOTSUPP;
> + continue;
> + }
> +
> + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> + if (err)
> + continue;
>  
> - return i915_gem_object_pin_pages_unlocked(obj);
> + err = i915_gem_object_wait_migration(obj, 0);
> + if (err)
> + continue;
> +
> + err = i915_gem_object_pin_pages(obj);
> + }
> +
> + return err;
>  }
>  
>  static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> index 3dc0f8b3cdab0..4f7e77b1c0152 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> @@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
>   int err;
>  
>   force_different_devices = true;
> - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);

I'm wondering (and couldn't answer) whether this creates an lmem+smem
buffer, since if we create an lmem-only buffer then the migration above
should fail.

Which I'm also not sure we have a testcase for either ...

I tried to read some code here, but got a bit lost. Ideas?
-Daniel

> + if (IS_ERR(obj))
> + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
>   if (IS_ERR(obj))
>   goto out_ret;
>  
> -- 
> 2.31.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Intel-gfx] [PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

2021-07-12 Thread Jason Ekstrand
From: Thomas Hellström 

Until we support p2p dma or as a complement to that, migrate data
to system memory at dma-buf attach time if possible.

v2:
- Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
  selftest to migrate if we are LMEM capable.
v3:
- Migrate also in the pin() callback.
v4:
- Migrate in attach
v5: (jason)
- Lock around the migration

Signed-off-by: Thomas Hellström 
Signed-off-by: Michael J. Ruhl 
Reported-by: kernel test robot 
Signed-off-by: Jason Ekstrand 
Reviewed-by: Jason Ekstrand 
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c| 25 ++-
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 9a655f69a0671..3163f00554476 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
  struct dma_buf_attachment *attach)
 {
struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
+   struct i915_gem_ww_ctx ww;
+   int err;
+
+   for_i915_gem_ww(&ww, err, true) {
+   err = i915_gem_object_lock(obj, &ww);
+   if (err)
+   continue;
+
+   if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
+   err = -EOPNOTSUPP;
+   continue;
+   }
+
+   err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
+   if (err)
+   continue;
 
-   return i915_gem_object_pin_pages_unlocked(obj);
+   err = i915_gem_object_wait_migration(obj, 0);
+   if (err)
+   continue;
+
+   err = i915_gem_object_pin_pages(obj);
+   }
+
+   return err;
 }
 
 static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 3dc0f8b3cdab0..4f7e77b1c0152 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
int err;
 
force_different_devices = true;
-   obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
+   obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
+   if (IS_ERR(obj))
+   obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
if (IS_ERR(obj))
goto out_ret;
 
-- 
2.31.1
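
A note on the control flow above for readers new to the i915 ww helpers:
for_i915_gem_ww() wraps the body in a ww-mutex transaction with automatic
backoff and retry, which is why every failure path uses continue rather
than returning directly.  Simplified sketch of the pattern (not the exact
macro expansion):

	i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible */
retry:
	err = body(&ww);			/* lock, migrate, wait, pin as above */
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);

A continue hands the current err back to the loop header: -EDEADLK triggers
the backoff-and-retry, anything else ends the transaction with err
preserved, so the final return err reports the first real failure.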
