[PATCH] drm/ttm: avoid allocating memory while spinlock is held v2

2013-02-04 Thread Daniel Vetter
On Mon, Feb 4, 2013 at 8:36 PM, Jerome Glisse  wrote:
> On Mon, Feb 4, 2013 at 2:21 PM, Daniel Vetter  wrote:
>> On Mon, Feb 4, 2013 at 7:34 PM,   wrote:
>>> From: Jerome Glisse 
>>>
>>> We need to take a reference on the sync object while holding the
>>> fence spinlock, but at the same time we don't want to allocate
>>> memory while holding the spinlock. This patch makes sure we
>>> enforce both of these constraints.
>>>
>>> v2: actually test build it
>>>
>>> Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296
>>>
>>> Signed-off-by: Jerome Glisse 
>>
>> Isn't that just another iteration of
>> https://patchwork.kernel.org/patch/1972071/ which somehow never
>> reached -fixes?
>> -Daniel
>
> Yes, but my version doesn't drop the lock before taking the ref; iirc
> there might be a race if dropping the lock and then taking it again.
> Another process might race to unref the sync obj, but I haven't
> tortured my brain too much over how likely, if at all, this is.

Hm, mine rechecks whether the sync object disappeared (spotted by
Maarten) before grabbing a reference, so it should be ok if the fence
signals. Of course, if we don't hold a reservation on the bo someone
else might sneak in and add a new one, but since we're trying to move
the bo that would be a pretty big bug already.
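
Roughly, in a stand-alone user-space sketch (all names invented for the
sketch, not the TTM API, and a pthread spinlock standing in for the
bdev fence_lock; only the ordering matters), that recheck-then-ref step
looks like this:

/*
 * Illustrative user-space model only; struct bo, struct sync_obj and
 * sync_obj_ref() are made up for this sketch.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct sync_obj { atomic_int refcount; };

struct bo {
        pthread_spinlock_t fence_lock;
        struct sync_obj *sync_obj;      /* cleared elsewhere once the fence signals */
};

static struct sync_obj *sync_obj_ref(struct sync_obj *o)
{
        atomic_fetch_add(&o->refcount, 1);
        return o;
}

/* re-take the lock and re-check the pointer before grabbing the reference */
static struct sync_obj *grab_fence_rechecked(struct bo *bo)
{
        struct sync_obj *obj = NULL;

        pthread_spin_lock(&bo->fence_lock);
        if (bo->sync_obj)                       /* may have gone away meanwhile */
                obj = sync_obj_ref(bo->sync_obj);
        pthread_spin_unlock(&bo->fence_lock);
        return obj;                             /* NULL: fence already signalled */
}

int main(void)
{
        struct sync_obj fence = { .refcount = 1 };
        struct bo bo = { .sync_obj = &fence };
        struct sync_obj *obj;

        pthread_spin_init(&bo.fence_lock, PTHREAD_PROCESS_PRIVATE);
        obj = grab_fence_rechecked(&bo);
        printf("got %p, refcount now %d\n", (void *)obj,
               atomic_load(&fence.refcount));
        pthread_spin_destroy(&bo.fence_lock);
        return 0;
}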

In any case yours is a bit nicer since it only grabs the fence_lock
once. My poke was more a stab at Dave, since he originally prodded me
on irc for breaking this, and then it seems to have fallen by the
wayside ;-)

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


[PATCH] drm/ttm: avoid allocating memory while spinlock is held v2

2013-02-04 Thread Daniel Vetter
On Mon, Feb 4, 2013 at 7:34 PM,   wrote:
> From: Jerome Glisse 
>
> We need to take a reference on the sync object while holding the
> fence spinlock, but at the same time we don't want to allocate
> memory while holding the spinlock. This patch makes sure we
> enforce both of these constraints.
>
> v2: actually test build it
>
> Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296
>
> Signed-off-by: Jerome Glisse 

Isn't that just another iteration of
https://patchwork.kernel.org/patch/1972071/ which somehow never
reached -fixes?
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


[PATCH] drm/ttm: avoid allocating memory while spinlock is held v2

2013-02-04 Thread Jerome Glisse
On Mon, Feb 4, 2013 at 2:21 PM, Daniel Vetter  wrote:
> On Mon, Feb 4, 2013 at 7:34 PM,   wrote:
>> From: Jerome Glisse 
>>
>> We need to take a reference on the sync object while holding the
>> fence spinlock, but at the same time we don't want to allocate
>> memory while holding the spinlock. This patch makes sure we
>> enforce both of these constraints.
>>
>> v2: actually test build it
>>
>> Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296
>>
>> Signed-off-by: Jerome Glisse 
>
> Isn't that just another iteration of
> https://patchwork.kernel.org/patch/1972071/ which somehow never
> reached -fixes?
> -Daniel

Yes, but my version doesn't drop the lock before taking the ref; iirc
there might be a race if dropping the lock and then taking it again.
Another process might race to unref the sync obj, but I haven't
tortured my brain too much over how likely, if at all, this is.
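
Spelled out as a stand-alone user-space sketch (all names invented, not
the TTM code), the window in question is the gap between unlock and
ref:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct sync_obj { atomic_int refcount; };

struct bo {
        pthread_spinlock_t fence_lock;
        struct sync_obj *sync_obj;
};

static struct sync_obj *sync_obj_ref(struct sync_obj *o)
{
        atomic_fetch_add(&o->refcount, 1);
        return o;
}

static void sync_obj_unref(struct sync_obj *o)
{
        if (atomic_fetch_sub(&o->refcount, 1) == 1)
                free(o);                /* last reference gone */
}

static struct sync_obj *grab_fence_racy(struct bo *bo)
{
        struct sync_obj *obj;

        pthread_spin_lock(&bo->fence_lock);
        obj = bo->sync_obj;             /* pointer only, no reference yet */
        pthread_spin_unlock(&bo->fence_lock);
        /*
         * Window: another thread may unref and free the object here, so
         * the sync_obj_ref() below could touch freed memory.  Taking the
         * ref before unlocking (this patch) or re-checking bo->sync_obj
         * after re-locking (the patchwork version) closes the window.
         */
        return obj ? sync_obj_ref(obj) : NULL;
}

int main(void)
{
        struct bo bo = { .sync_obj = malloc(sizeof(struct sync_obj)) };
        struct sync_obj *obj;

        if (!bo.sync_obj)
                return 1;
        atomic_init(&bo.sync_obj->refcount, 1);
        pthread_spin_init(&bo.fence_lock, PTHREAD_PROCESS_PRIVATE);
        obj = grab_fence_racy(&bo);     /* harmless single-threaded */
        sync_obj_unref(obj);
        sync_obj_unref(bo.sync_obj);
        pthread_spin_destroy(&bo.fence_lock);
        return 0;
}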

Cheers,
Jerome


[PATCH] drm/ttm: avoid allocating memory while spinlock is held v2

2013-02-04 Thread j.gli...@gmail.com
From: Jerome Glisse 

We need to take a reference on the sync object while holding the
fence spinlock, but at the same time we don't want to allocate
memory while holding the spinlock. This patch makes sure we
enforce both of these constraints.

v2: actually test build it

Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/ttm/ttm_bo_util.c |   17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 44420fc..f4b7acd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -413,6 +413,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
  * @bo: A pointer to a struct ttm_buffer_object.
  * @new_obj: A pointer to a pointer to a newly created ttm_buffer_object,
  * holding the data of @bo with the old placement.
+ * @sync_obj: the sync object caller is responsible to take a reference on
+ * behalf of this function
  *
  * This is a utility function that may be called after an accelerated move
  * has been scheduled. A new buffer object is created as a placeholder for
@@ -423,11 +425,11 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
  */

 static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
- struct ttm_buffer_object **new_obj)
+ struct ttm_buffer_object **new_obj,
+ void *sync_obj)
 {
struct ttm_buffer_object *fbo;
struct ttm_bo_device *bdev = bo->bdev;
-   struct ttm_bo_driver *driver = bdev->driver;

fbo = kzalloc(sizeof(*fbo), GFP_KERNEL);
if (!fbo)
@@ -448,7 +450,8 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
fbo->vm_node = NULL;
atomic_set(&fbo->cpu_writers, 0);

-   fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+   /* reference on sync obj is taken by the caller of this function */
+   fbo->sync_obj = sync_obj;
kref_init(&fbo->list_kref);
kref_init(&fbo->kref);
fbo->destroy = &ttm_transfered_destroy;
@@ -652,6 +655,8 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
}
ttm_bo_free_old_node(bo);
} else {
+   void *sync_obj;
+
/**
 * This should help pipeline ordinary buffer moves.
 *
@@ -662,12 +667,14 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,

set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);

-   /* ttm_buffer_object_transfer accesses bo->sync_obj */
-   ret = ttm_buffer_object_transfer(bo, &ghost_obj);
+   /* take the ref on the sync object before releasing the spinlock */
+   sync_obj = driver->sync_obj_ref(bo->sync_obj);
spin_unlock(&bdev->fence_lock);
+
if (tmp_obj)
driver->sync_obj_unref(&tmp_obj);

+   ret = ttm_buffer_object_transfer(bo, &ghost_obj, sync_obj);
if (ret)
return ret;

-- 
1.7.10.4
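
For reference, the ordering the v2 patch establishes can be modelled as
a small stand-alone C program: the reference is taken while the fence
spinlock is held, the lock is dropped, and only then does the
(possibly sleeping) kzalloc-style allocation for the ghost object
happen. All names below (obj_get, obj_put, ghost_create) are invented
for the sketch; only the lock/ref/alloc ordering mirrors the patch.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct sync_obj { atomic_int refcount; };

static struct sync_obj *obj_get(struct sync_obj *o)
{
        atomic_fetch_add(&o->refcount, 1);
        return o;
}

static void obj_put(struct sync_obj *o)
{
        if (atomic_fetch_sub(&o->refcount, 1) == 1)
                free(o);
}

/* stands in for the ghost buffer object created by the transfer helper */
struct ghost { struct sync_obj *sync_obj; };

/*
 * Mirrors the patched ttm_buffer_object_transfer(): it takes no
 * reference of its own and must be called with no spinlock held,
 * because the allocation may sleep.
 */
static struct ghost *ghost_create(struct sync_obj *sync_obj)
{
        struct ghost *g = calloc(1, sizeof(*g));

        if (!g)
                return NULL;
        g->sync_obj = sync_obj;         /* caller-provided reference handed over */
        return g;
}

int main(void)
{
        pthread_spinlock_t fence_lock;
        struct sync_obj *fence = malloc(sizeof(*fence));
        struct sync_obj *sync_obj;
        struct ghost *g;

        if (!fence)
                return 1;
        atomic_init(&fence->refcount, 1);
        pthread_spin_init(&fence_lock, PTHREAD_PROCESS_PRIVATE);

        pthread_spin_lock(&fence_lock);
        sync_obj = obj_get(fence);      /* reference taken under the lock */
        pthread_spin_unlock(&fence_lock);

        g = ghost_create(sync_obj);     /* allocation only after unlocking */
        if (!g) {
                obj_put(sync_obj);      /* error path must drop the extra ref */
                return 1;
        }
        printf("ghost carries the fence, refcount now %d\n",
               atomic_load(&g->sync_obj->refcount));

        obj_put(g->sync_obj);
        free(g);
        obj_put(fence);
        pthread_spin_destroy(&fence_lock);
        return 0;
}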



[PATCH] drm/ttm: avoid allocating memory while spinlock is held

2013-02-04 Thread j.gli...@gmail.com
From: Jerome Glisse 

We need to take a reference on the sync object while holding the
fence spinlock, but at the same time we don't want to allocate
memory while holding the spinlock. This patch makes sure we
enforce both of these constraints.

Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/ttm/ttm_bo_util.c |   16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 44420fc..77799a5 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -413,6 +413,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
  * @bo: A pointer to a struct ttm_buffer_object.
  * @new_obj: A pointer to a pointer to a newly created ttm_buffer_object,
  * holding the data of @bo with the old placement.
+ * @sync_obj: the sync object caller is responsible to take a reference on
+ * behalf of this function
  *
  * This is a utility function that may be called after an accelerated move
  * has been scheduled. A new buffer object is created as a placeholder for
@@ -423,7 +425,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
  */

 static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
- struct ttm_buffer_object **new_obj)
+ struct ttm_buffer_object **new_obj,
+ void sync_obj)
 {
struct ttm_buffer_object *fbo;
struct ttm_bo_device *bdev = bo->bdev;
@@ -448,7 +451,8 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
fbo->vm_node = NULL;
atomic_set(&fbo->cpu_writers, 0);

-   fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+   /* reference on sync obj is taken by the caller of this function */
+   fbo->sync_obj = sync_obj;
kref_init(&fbo->list_kref);
kref_init(&fbo->kref);
fbo->destroy = &ttm_transfered_destroy;
@@ -652,6 +656,8 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
}
ttm_bo_free_old_node(bo);
} else {
+   void *sync_obj;
+
/**
 * This should help pipeline ordinary buffer moves.
 *
@@ -662,12 +668,14 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,

set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);

-   /* ttm_buffer_object_transfer accesses bo->sync_obj */
-   ret = ttm_buffer_object_transfer(bo, &ghost_obj);
+   /* take the ref on the sync object before releasing the spinlock */
+   sync_obj = driver->sync_obj_ref(bo->sync_obj);
spin_unlock(&bdev->fence_lock);
+
if (tmp_obj)
driver->sync_obj_unref(&tmp_obj);

+   ret = ttm_buffer_object_transfer(bo, &ghost_obj, sync_obj);
if (ret)
return ret;

-- 
1.7.10.4