[PATCH 2/2] drm/ttm: improve uncached page deallocation.

2015-07-09 Thread j.gli...@gmail.com
From: Jérôme Glisse 

Calls to set_memory_wb() incur a heavy TLB flush and IPI cost. To
minimize those, wait until the pool grows beyond the batch size before
draining the pool.

Signed-off-by: Jérôme Glisse 
Reviewed-by: Mario Kleiner 
Reviewed-and-Tested-by: Michel Dänzer 
Reviewed-by: Konrad Rzeszutek Wilk 
Cc: Thomas Hellstrom 
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index af23080..624d941 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -963,13 +963,13 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
} else {
pool->npages_free += count;
list_splice(&ttm_dma->pages_list, &pool->free_list);
-   if (pool->npages_free > _manager->options.max_size) {
+   /*
+* Wait to have at at least NUM_PAGES_TO_ALLOC number of pages
+* to free in order to minimize calls to set_memory_wb().
+*/
+   if (pool->npages_free >= (_manager->options.max_size +
+ NUM_PAGES_TO_ALLOC))
npages = pool->npages_free - _manager->options.max_size;
-   /* free at least NUM_PAGES_TO_ALLOC number of pages
-* to reduce calls to set_memory_wb */
-   if (npages < NUM_PAGES_TO_ALLOC)
-   npages = NUM_PAGES_TO_ALLOC;
-   }
}
spin_unlock_irqrestore(&pool->lock, irq_flags);

-- 
1.8.3.1
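
For readers skimming the thread, here is a minimal, self-contained sketch of
the drain rule the hunk above introduces: pages are only handed to
set_memory_wb() once the pool has grown a full batch beyond its configured
maximum, instead of on every unpopulate. Names mirror ttm_page_alloc_dma.c;
the constant values below are illustrative, not the kernel's.

    #include <stdio.h>

    #define NUM_PAGES_TO_ALLOC 64   /* batch size, illustrative value */

    /* How many pages to drain from the pool for a given fill level. */
    static unsigned pages_to_drain(unsigned npages_free, unsigned max_size)
    {
        /* Old code drained as soon as npages_free > max_size; the patch
         * waits for a full batch beyond max_size before draining. */
        if (npages_free >= max_size + NUM_PAGES_TO_ALLOC)
            return npages_free - max_size;
        return 0;
    }

    int main(void)
    {
        unsigned max_size = 256;

        printf("%u\n", pages_to_drain(300, max_size)); /* 0: below batch threshold */
        printf("%u\n", pages_to_drain(320, max_size)); /* 64: drain back to max_size */
        return 0;
    }

The effect is that the TLB-flush/IPI cost of set_memory_wb() is paid once per
batch of pages rather than on every unpopulate once the pool is full.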



[PATCH 1/2] drm/ttm: fix uncached page deallocation to properly fill page pool v3.

2015-07-09 Thread j.gli...@gmail.com
From: Jérôme Glisse 

The current code never allowed the page pool to actually fill up.
This fixes it, so that we only start freeing pages from the pool when
we go over the pool size.

Changed since v1:
  - Move the page batching optimization to its own separate patch.

Changed since v2:
  - Do not remove code that is part of the batching optimization in
    this patch.
  - Better commit message.

Signed-off-by: Jérôme Glisse 
Reviewed-by: Mario Kleiner 
Reviewed-and-Tested-by: Michel Dänzer 
Reviewed-by: Konrad Rzeszutek Wilk 
Cc: Thomas Hellstrom 
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 3077f15..af23080 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -963,7 +963,6 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
} else {
pool->npages_free += count;
list_splice(&ttm_dma->pages_list, &pool->free_list);
-   npages = count;
if (pool->npages_free > _manager->options.max_size) {
npages = pool->npages_free - _manager->options.max_size;
/* free at least NUM_PAGES_TO_ALLOC number of pages
-- 
1.8.3.1
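
As a side note for anyone reading the two patches together: the removed
"npages = count" line is what prevented the pool from ever filling, because
every unpopulate immediately scheduled the pages it had just returned for
freeing whenever the pool was still below max_size. A tiny before/after
sketch in plain C (names mirror the driver; the NUM_PAGES_TO_ALLOC clamp
that patch 2/2 reworks is left out here):

    /* Behaviour before the patch: npages, the number of pages to free
     * right away, defaulted to count, so a pool below max_size still
     * freed everything it had just been given. */
    unsigned old_pages_to_drain(unsigned count, unsigned npages_free,
                                unsigned max_size)
    {
        unsigned npages = count;        /* line removed by the patch */

        if (npages_free > max_size)
            npages = npages_free - max_size;
        return npages;
    }

    /* Behaviour after the patch: nothing is freed until the pool
     * actually exceeds max_size. */
    unsigned new_pages_to_drain(unsigned count, unsigned npages_free,
                                unsigned max_size)
    {
        unsigned npages = 0;

        (void)count;                    /* count no longer drives the drain */
        if (npages_free > max_size)
            npages = npages_free - max_size;
        return npages;
    }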



[PATCH 2/2] drm/ttm: improve uncached page deallocation.

2015-07-08 Thread j.gli...@gmail.com
From: Jérôme Glisse 

Calls to set_memory_wb() incur a heavy TLB flush and IPI cost. To
minimize those, wait until the pool grows beyond the batch size before
draining the pool.

Signed-off-by: Jérôme Glisse 
Reviewed-by: Mario Kleiner 
Cc: Michel Dänzer 
Cc: Thomas Hellstrom 
Cc: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 0194a93..8028dd6 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,7 +953,12 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
} else {
pool->npages_free += count;
list_splice(&ttm_dma->pages_list, &pool->free_list);
-   if (pool->npages_free > _manager->options.max_size)
+   /*
+* Wait to have at at least NUM_PAGES_TO_ALLOC number of pages
+* to free in order to minimize calls to set_memory_wb().
+*/
+   if (pool->npages_free >= (_manager->options.max_size +
+ NUM_PAGES_TO_ALLOC))
npages = pool->npages_free - _manager->options.max_size;
}
spin_unlock_irqrestore(&pool->lock, irq_flags);
-- 
1.8.3.1



[PATCH 1/2] drm/ttm: fix object deallocation to properly fill in the page pool.

2015-07-08 Thread j.gli...@gmail.com
From: Jérôme Glisse 

The current code never allowed the page pool to actually fill up.
This fixes it, so that we only start freeing pages from the pool when
we go over the pool size.

Signed-off-by: Jérôme Glisse 
Reviewed-by: Mario Kleiner 
Tested-by: Michel Dänzer 
Cc: Thomas Hellstrom 
Cc: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index c96db43..0194a93 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,14 +953,8 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
} else {
pool->npages_free += count;
list_splice(&ttm_dma->pages_list, &pool->free_list);
-   npages = count;
-   if (pool->npages_free > _manager->options.max_size) {
+   if (pool->npages_free > _manager->options.max_size)
npages = pool->npages_free - _manager->options.max_size;
-   /* free at least NUM_PAGES_TO_ALLOC number of pages
-* to reduce calls to set_memory_wb */
-   if (npages < NUM_PAGES_TO_ALLOC)
-   npages = NUM_PAGES_TO_ALLOC;
-   }
}
spin_unlock_irqrestore(&pool->lock, irq_flags);

-- 
1.8.3.1



[PATCH 2/2] drm/radeon: SDMA fix hibernation (CI GPU family).

2015-06-19 Thread j.gli...@gmail.com
From: Jérôme Glisse 

In order for hibernation to work reliably we need to properly turn
off the SDMA block; sadly, after numerous attempts I have not found a
proper sequence for a clean and full shutdown. So simply reset both
SDMA blocks; this makes hibernation work reliably on the Sea Islands
GPU family (CI).

Hibernation and suspend to RAM were tested (several times) on:
Bonaire
Hawaii
Mullins
Kaveri
Kabini

Cc: stable at vger.kernel.org
Signed-off-by: Jérôme Glisse 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/radeon/cik_sdma.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index f86eb54..d16f2ee 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -268,6 +268,17 @@ static void cik_sdma_gfx_stop(struct radeon_device *rdev)
}
rdev->ring[R600_RING_TYPE_DMA_INDEX].ready = false;
rdev->ring[CAYMAN_RING_TYPE_DMA1_INDEX].ready = false;
+
+   /* FIXME use something else than big hammer but after few days can not
+* seem to find good combination so reset SDMA blocks as it seems we
+* do not shut them down properly. This fix hibernation and does not
+* affect suspend to ram.
+*/
+   WREG32(SRBM_SOFT_RESET, SOFT_RESET_SDMA | SOFT_RESET_SDMA1);
+   (void)RREG32(SRBM_SOFT_RESET);
+   udelay(50);
+   WREG32(SRBM_SOFT_RESET, 0);
+   (void)RREG32(SRBM_SOFT_RESET);
 }

 /**
-- 
2.1.0
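
A compact restatement of the reset sequence above, for readers who want the
shape without the surrounding driver: assert both SDMA soft-reset bits, post
the write with a read-back so it reaches the hardware before the delay, wait,
then deassert. Register and bit names follow the CIK code, but the values and
the stub accessors below are only there so the sketch stands alone.

    #include <stdint.h>

    #define SRBM_SOFT_RESET    0xe60        /* illustrative offset */
    #define SOFT_RESET_SDMA    (1u << 20)   /* illustrative bit positions */
    #define SOFT_RESET_SDMA1   (1u << 6)

    static void WREG32(uint32_t reg, uint32_t v) { (void)reg; (void)v; } /* MMIO write stub */
    static uint32_t RREG32(uint32_t reg) { (void)reg; return 0; }        /* MMIO read stub */
    static void udelay(unsigned long usecs) { (void)usecs; }             /* delay stub */

    static void sdma_soft_reset(void)
    {
        WREG32(SRBM_SOFT_RESET, SOFT_RESET_SDMA | SOFT_RESET_SDMA1);
        (void)RREG32(SRBM_SOFT_RESET);  /* posted read flushes the write */
        udelay(50);
        WREG32(SRBM_SOFT_RESET, 0);     /* release both SDMA engines from reset */
        (void)RREG32(SRBM_SOFT_RESET);
    }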



[PATCH 1/2] drm/radeon: compute ring fix hibernation (CI GPU family) v2.

2015-06-19 Thread j.gli...@gmail.com
From: Jérôme Glisse 

In order for hibernation to work reliably we need to clean up the
compute ring more thoroughly. Hibernation is different from suspend/resume:
when we resume from hibernation, the hardware is first fully initialized
by the regular kernel, then the freeze callback happens (which corresponds
to a suspend inside the radeon kernel driver) and turns off each of the
blocks. It turns out we were not cleanly shutting down the compute ring.
This patch fixes that.

Hibernation and suspend to RAM were tested (several times) on:
Bonaire
Hawaii
Mullins
Kaveri
Kabini

Changed since v1:
  - Factor the ring stop logic into a function taking ring as arg.

Cc: stable at vger.kernel.org
Signed-off-by: Jérôme Glisse 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/radeon/cik.c | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index ba50f3c..8de5081 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -4579,6 +4579,31 @@ void cik_compute_set_wptr(struct radeon_device *rdev,
WDOORBELL32(ring->doorbell_index, ring->wptr);
 }

+static void cik_compute_stop(struct radeon_device *rdev,
+struct radeon_ring *ring)
+{
+   u32 j, tmp;
+
+   cik_srbm_select(rdev, ring->me, ring->pipe, ring->queue, 0);
+   /* Disable wptr polling. */
+   tmp = RREG32(CP_PQ_WPTR_POLL_CNTL);
+   tmp &= ~WPTR_POLL_EN;
+   WREG32(CP_PQ_WPTR_POLL_CNTL, tmp);
+   /* Disable HQD. */
+   if (RREG32(CP_HQD_ACTIVE) & 1) {
+   WREG32(CP_HQD_DEQUEUE_REQUEST, 1);
+   for (j = 0; j < rdev->usec_timeout; j++) {
+   if (!(RREG32(CP_HQD_ACTIVE) & 1))
+   break;
+   udelay(1);
+   }
+   WREG32(CP_HQD_DEQUEUE_REQUEST, 0);
+   WREG32(CP_HQD_PQ_RPTR, 0);
+   WREG32(CP_HQD_PQ_WPTR, 0);
+   }
+   cik_srbm_select(rdev, 0, 0, 0, 0);
+}
+
 /**
  * cik_cp_compute_enable - enable/disable the compute CP MEs
  *
@@ -4592,6 +4617,15 @@ static void cik_cp_compute_enable(struct radeon_device *rdev, bool enable)
if (enable)
WREG32(CP_MEC_CNTL, 0);
else {
+   /*
+* To make hibernation reliable we need to clear compute ring
+* configuration before halting the compute ring.
+*/
+   mutex_lock(&rdev->srbm_mutex);
+   cik_compute_stop(rdev,&rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX]);
+   cik_compute_stop(rdev,&rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX]);
+   mutex_unlock(&rdev->srbm_mutex);
+
WREG32(CP_MEC_CNTL, (MEC_ME1_HALT | MEC_ME2_HALT));
rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX].ready = false;
rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false;
-- 
2.1.0
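
The core of the new cik_compute_stop() helper is a bounded poll: request a
dequeue, then spin until the HQD active bit clears or rdev->usec_timeout
microseconds elapse. Pulled out of context it is just this generic wait; the
register accessor and delay below are stubs, purely illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t RREG32(uint32_t reg) { (void)reg; return 0; }  /* MMIO read stub */
    static void udelay(unsigned long usecs) { (void)usecs; }       /* delay stub */

    /* Poll @reg until (value & @mask) == 0, giving up after @usec_timeout us. */
    static bool wait_reg_bits_clear(uint32_t reg, uint32_t mask,
                                    unsigned int usec_timeout)
    {
        unsigned int j;

        for (j = 0; j < usec_timeout; j++) {
            if (!(RREG32(reg) & mask))
                return true;
            udelay(1);
        }
        return false;   /* timed out; the patch proceeds regardless */
    }

In the patch this wait is applied to CP_HQD_ACTIVE with a mask of 1, after
writing 1 to CP_HQD_DEQUEUE_REQUEST; the request register and the ring
pointers are then cleared whether or not the bit dropped in time.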



[PATCH 2/2] drm/radeon: SDMA fix hibernation (CI GPU family).

2015-06-18 Thread j.gli...@gmail.com
From: Jérôme Glisse 

In order for hibernation to work reliably we need to properly turn
off the SDMA block; sadly, after numerous attempts I have not found a
proper sequence for a clean and full shutdown. So simply reset both
SDMA blocks; this makes hibernation work reliably on the Sea Islands
GPU family (CI).

Hibernation and suspend to RAM were tested (several times) on:
Bonaire
Hawaii
Mullins
Kaveri
Kabini

Cc: stable at vger.kernel.org
Signed-off-by: Jérôme Glisse 
---
 drivers/gpu/drm/radeon/cik_sdma.c |   11 +++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
index f86eb54..d16f2ee 100644
--- a/drivers/gpu/drm/radeon/cik_sdma.c
+++ b/drivers/gpu/drm/radeon/cik_sdma.c
@@ -268,6 +268,17 @@ static void cik_sdma_gfx_stop(struct radeon_device *rdev)
}
rdev->ring[R600_RING_TYPE_DMA_INDEX].ready = false;
rdev->ring[CAYMAN_RING_TYPE_DMA1_INDEX].ready = false;
+
+   /* FIXME use something else than big hammer but after few days can not
+* seem to find good combination so reset SDMA blocks as it seems we
+* do not shut them down properly. This fix hibernation and does not
+* affect suspend to ram.
+*/
+   WREG32(SRBM_SOFT_RESET, SOFT_RESET_SDMA | SOFT_RESET_SDMA1);
+   (void)RREG32(SRBM_SOFT_RESET);
+   udelay(50);
+   WREG32(SRBM_SOFT_RESET, 0);
+   (void)RREG32(SRBM_SOFT_RESET);
 }

 /**
-- 
1.7.1



[PATCH 1/2] drm/radeon: compute ring fix hibernation (CI GPU family).

2015-06-18 Thread j.gli...@gmail.com
From: Jérôme Glisse 

In order for hibernation to work reliably we need to clean up the
compute ring more thoroughly. Hibernation is different from suspend/resume:
when we resume from hibernation, the hardware is first fully initialized
by the regular kernel, then the freeze callback happens (which corresponds
to a suspend inside the radeon kernel driver) and turns off each of the
blocks. It turns out we were not cleanly shutting down the compute ring.
This patch fixes that.

Hibernation and suspend to RAM were tested (several times) on:
Bonaire
Hawaii
Mullins
Kaveri
Kabini

Cc: stable at vger.kernel.org
Signed-off-by: Jérôme Glisse 
---
 drivers/gpu/drm/radeon/cik.c |   55 ++
 1 files changed, 55 insertions(+), 0 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index ba50f3c..d2576d4 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -4592,6 +4592,61 @@ static void cik_cp_compute_enable(struct radeon_device *rdev, bool enable)
if (enable)
WREG32(CP_MEC_CNTL, 0);
else {
+   u32 tmp;
+   int idx, j;
+
+   /*
+* To make hibernation reliable we need to clear compute ring
+* configuration before halting the compute ring.
+*/
+   mutex_lock(&rdev->srbm_mutex);
+
+   idx = CAYMAN_RING_TYPE_CP1_INDEX;
+   cik_srbm_select(rdev, rdev->ring[idx].me,
+   rdev->ring[idx].pipe,
+   rdev->ring[idx].queue, 0);
+   /* Disable wptr polling. */
+   tmp = RREG32(CP_PQ_WPTR_POLL_CNTL);
+   tmp &= ~WPTR_POLL_EN;
+   WREG32(CP_PQ_WPTR_POLL_CNTL, tmp);
+   /* Disable HQD. */
+   if (RREG32(CP_HQD_ACTIVE) & 1) {
+   WREG32(CP_HQD_DEQUEUE_REQUEST, 1);
+   for (j = 0; j < rdev->usec_timeout; j++) {
+   if (!(RREG32(CP_HQD_ACTIVE) & 1))
+   break;
+   udelay(1);
+   }
+   WREG32(CP_HQD_DEQUEUE_REQUEST, 0);
+   WREG32(CP_HQD_PQ_RPTR, 0);
+   WREG32(CP_HQD_PQ_WPTR, 0);
+   }
+   cik_srbm_select(rdev, 0, 0, 0, 0);
+
+   idx = CAYMAN_RING_TYPE_CP2_INDEX;
+   cik_srbm_select(rdev, rdev->ring[idx].me,
+   rdev->ring[idx].pipe,
+   rdev->ring[idx].queue, 0);
+   /* Disable wptr polling. */
+   tmp = RREG32(CP_PQ_WPTR_POLL_CNTL);
+   tmp &= ~WPTR_POLL_EN;
+   WREG32(CP_PQ_WPTR_POLL_CNTL, tmp);
+   /* Disable HQD. */
+   if (RREG32(CP_HQD_ACTIVE) & 1) {
+   WREG32(CP_HQD_DEQUEUE_REQUEST, 1);
+   for (j = 0; j < rdev->usec_timeout; j++) {
+   if (!(RREG32(CP_HQD_ACTIVE) & 1))
+   break;
+   udelay(1);
+   }
+   WREG32(CP_HQD_DEQUEUE_REQUEST, 0);
+   WREG32(CP_HQD_PQ_RPTR, 0);
+   WREG32(CP_HQD_PQ_WPTR, 0);
+   }
+   cik_srbm_select(rdev, 0, 0, 0, 0);
+
+   mutex_unlock(&rdev->srbm_mutex);
+
WREG32(CP_MEC_CNTL, (MEC_ME1_HALT | MEC_ME2_HALT));
rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX].ready = false;
rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false;
-- 
1.7.1



[PATCH] drm/radeon: fix freeze for laptop with Turks/Thames GPU.

2015-06-05 Thread j.gli...@gmail.com
From: Jérôme Glisse 

Laptops with a Turks/Thames GPU will freeze if DPM is enabled. It seems
the SMC engine relies on some state inside the CP engine: the CP needs
to chew through at least one packet to get into a good state for dynamic
power management.

This patch simply disables and re-enables DPM after the ring test, which
is enough to avoid the freeze.

Signed-off-by: Jérôme Glisse 
Cc: stable at vger.kernel.org
---
 drivers/gpu/drm/radeon/radeon_device.c |   15 +++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index b7ca4c5..a7fdfa4 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1463,6 +1463,21 @@ int radeon_device_init(struct radeon_device *rdev,
if (r)
DRM_ERROR("ib ring test failed (%d).\n", r);

+   /*
+* Turks/Thames GPU will freeze whole laptop if DPM is not restarted
+* after the CP ring have chew one packet at least. Hence here we stop
+* and restart DPM after the radeon_ib_ring_tests().
+*/
+   if (rdev->pm.dpm_enabled &&
+   (rdev->pm.pm_method == PM_METHOD_DPM) &&
+   (rdev->family == CHIP_TURKS) &&
+   (rdev->flags & RADEON_IS_MOBILITY)) {
+   mutex_lock(&rdev->pm.mutex);
+   radeon_dpm_disable(rdev);
+   radeon_dpm_enable(rdev);
+   mutex_unlock(&rdev->pm.mutex);
+   }
+
if ((radeon_testing & 1)) {
if (rdev->accel_working)
radeon_test_moves(rdev);
-- 
1.7.1



[PATCH] drm/radeon: fix cut and paste issue for hawaii.

2014-07-24 Thread j.gli...@gmail.com
From: Jerome Glisse 

This is a halfway fix for Hawaii acceleration. More fixes are to come,
but hopefully they will be isolated to userspace.

Signed-off-by: Jérôme Glisse 
Cc: stable at vger.kernel.org
---
 drivers/gpu/drm/radeon/cik.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index fc560b0..2b87edb 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -2521,6 +2521,7 @@ static void cik_tiling_mode_table_init(struct radeon_device *rdev)
gb_tile_moden = 0;
break;
}
+   rdev->config.cik.macrotile_mode_array[reg_offset] = 
gb_tile_moden;
WREG32(GB_MACROTILE_MODE0 + (reg_offset * 4), 
gb_tile_moden);
}
} else if (num_pipe_configs == 8) {
-- 
1.8.3.1



[PATCH] exynos: Put a stop to the userptr heresy.

2014-06-30 Thread j.gli...@gmail.com
From: Jerome Glisse 

get_user_pages() gives no guarantee that the pages it returns are still
the ones backing the vma by the time it returns. Thus any ioctl that
relies on this behavior is broken and relies on pure luck. To avoid any
false hope from userspace, stop such usage by simply returning -EFAULT
outright. Better to have reliable behavior than to depend on pure luck
and the currently observed behavior of mm code.

Note this was not even compile tested, but I think I updated all the
places.

Signed-off-by: Jérôme Glisse 
---
 drivers/gpu/drm/exynos/exynos_drm_drv.h |   1 -
 drivers/gpu/drm/exynos/exynos_drm_g2d.c | 277 +---
 drivers/gpu/drm/exynos/exynos_drm_gem.c |  60 ---
 drivers/gpu/drm/exynos/exynos_drm_gem.h |  20 ---
 4 files changed, 3 insertions(+), 355 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.h b/drivers/gpu/drm/exynos/exynos_drm_drv.h
index 36535f3..7b55e89 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_drv.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_drv.h
@@ -233,7 +233,6 @@ struct exynos_drm_g2d_private {
struct device   *dev;
struct list_headinuse_cmdlist;
struct list_headevent_list;
-   struct list_headuserptr_list;
 };

 struct exynos_drm_ipp_private {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
index 8001587..d0be6dc 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
@@ -118,9 +118,6 @@
 #define G2D_CMDLIST_POOL_SIZE  (G2D_CMDLIST_SIZE * G2D_CMDLIST_NUM)
 #define G2D_CMDLIST_DATA_NUM   (G2D_CMDLIST_SIZE / sizeof(u32) - 2)

-/* maximum buffer pool size of userptr is 64MB as default */
-#define MAX_POOL   (64 * 1024 * 1024)
-
 enum {
BUF_TYPE_GEM = 1,
BUF_TYPE_USERPTR,
@@ -185,19 +182,6 @@ struct drm_exynos_pending_g2d_event {
struct drm_exynos_g2d_event event;
 };

-struct g2d_cmdlist_userptr {
-   struct list_headlist;
-   dma_addr_t  dma_addr;
-   unsigned long   userptr;
-   unsigned long   size;
-   struct page **pages;
-   unsigned intnpages;
-   struct sg_table *sgt;
-   struct vm_area_struct   *vma;
-   atomic_trefcount;
-   boolin_pool;
-   boolout_of_list;
-};
 struct g2d_cmdlist_node {
struct list_headlist;
struct g2d_cmdlist  *cmdlist;
@@ -242,7 +226,6 @@ struct g2d_data {
struct kmem_cache   *runqueue_slab;

unsigned long   current_pool;
-   unsigned long   max_pool;
 };

 static int g2d_init_cmdlist(struct g2d_data *g2d)
@@ -354,232 +337,6 @@ add_to_list:
list_add_tail(&node->event->base.link, &g2d_priv->event_list);
 }

-static void g2d_userptr_put_dma_addr(struct drm_device *drm_dev,
-   unsigned long obj,
-   bool force)
-{
-   struct g2d_cmdlist_userptr *g2d_userptr =
-   (struct g2d_cmdlist_userptr *)obj;
-
-   if (!obj)
-   return;
-
-   if (force)
-   goto out;
-
-   atomic_dec(&g2d_userptr->refcount);
-
-   if (atomic_read(&g2d_userptr->refcount) > 0)
-   return;
-
-   if (g2d_userptr->in_pool)
-   return;
-
-out:
-   exynos_gem_unmap_sgt_from_dma(drm_dev, g2d_userptr->sgt,
-   DMA_BIDIRECTIONAL);
-
-   exynos_gem_put_pages_to_userptr(g2d_userptr->pages,
-   g2d_userptr->npages,
-   g2d_userptr->vma);
-
-   exynos_gem_put_vma(g2d_userptr->vma);
-
-   if (!g2d_userptr->out_of_list)
-   list_del_init(&g2d_userptr->list);
-
-   sg_free_table(g2d_userptr->sgt);
-   kfree(g2d_userptr->sgt);
-
-   drm_free_large(g2d_userptr->pages);
-   kfree(g2d_userptr);
-}
-
-static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
-   unsigned long userptr,
-   unsigned long size,
-   struct drm_file *filp,
-   unsigned long *obj)
-{
-   struct drm_exynos_file_private *file_priv = filp->driver_priv;
-   struct exynos_drm_g2d_private *g2d_priv = file_priv->g2d_priv;
-   struct g2d_cmdlist_userptr *g2d_userptr;
-   struct g2d_data *g2d;
-   struct page **pages;
-   struct sg_table *sgt;
-   struct vm_area_struct *vma;
-   unsigned long start, end;
-   unsigned int npages, offset;
-   int ret;
-
-   if (!size) {
-   DRM_ERROR("invalid userptr size.\n");
-   return ERR_PTR(-EINVAL);
-   }
-
-   g2d = dev_get_dr
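
The hunk that actually makes the ioctl fail is cut off by the archive above;
going by the commit message ("flat out returning -EFAULT"), the rejection in
the command-list mapping path has roughly this shape. The buffer-type
constants are from the driver (the enum the diff leaves in place); the
function and its signature here are hypothetical, purely to illustrate the
behaviour change.

    #include <errno.h>

    enum {
        BUF_TYPE_GEM = 1,       /* as in exynos_drm_g2d.c */
        BUF_TYPE_USERPTR,
    };

    /* Hypothetical shape of the check: GEM-backed buffers keep working,
     * raw user-pointer buffers are refused instead of being pinned via
     * get_user_pages(). */
    static int g2d_check_buf_type(unsigned int buf_type)
    {
        switch (buf_type) {
        case BUF_TYPE_GEM:
            return 0;
        case BUF_TYPE_USERPTR:
            return -EFAULT;     /* userptr submissions now always fail */
        default:
            return -EINVAL;
        }
    }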

[PATCH] i915: purify user ptr ioctl through the fire of truth.

2014-06-30 Thread j.gli...@gmail.com
From: Jerome Glisse 

Heresy should not be tolerated; any ioctl that relies on pure luck
should die. Violating the kernel's memory-pinning policy and all the
reasonable expectations the kernel has about users of the mmu_notifier
API is not tolerable.

Because we cannot break old userspace either, the ioctl is left in
place, but it will never succeed, which is the only reliable option
for it.

Signed-off-by: Jérôme Glisse 
---
 drivers/gpu/drm/i915/i915_drv.h |  15 -
 drivers/gpu/drm/i915/i915_gem.c |   1 -
 drivers/gpu/drm/i915/i915_gem_userptr.c | 687 +---
 drivers/gpu/drm/i915/i915_gpu_error.c   |   2 -
 4 files changed, 5 insertions(+), 700 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8cea596..3970bd0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -393,7 +393,6 @@ struct drm_i915_error_state {
u32 tiling:2;
u32 dirty:1;
u32 purgeable:1;
-   u32 userptr:1;
s32 ring:4;
u32 cache_level:3;
} **active_bo, **pinned_bo;
@@ -1750,19 +1749,6 @@ struct drm_i915_gem_object {

/** for phy allocated objects */
drm_dma_handle_t *phys_handle;
-
-   union {
-   struct i915_gem_userptr {
-   uintptr_t ptr;
-   unsigned read_only :1;
-   unsigned workers :4;
-#define I915_GEM_USERPTR_MAX_WORKERS 15
-
-   struct mm_struct *mm;
-   struct i915_mmu_object *mn;
-   struct work_struct *work;
-   } userptr;
-   };
 };
 #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)

@@ -2195,7 +2181,6 @@ int i915_gem_set_tiling(struct drm_device *dev, void *data,
struct drm_file *file_priv);
 int i915_gem_get_tiling(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-int i915_gem_init_userptr(struct drm_device *dev);
 int i915_gem_userptr_ioctl(struct drm_device *dev, void *data,
   struct drm_file *file);
 int i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f6d1238..30fc784 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4754,7 +4754,6 @@ int i915_gem_init(struct drm_device *dev)
DRM_DEBUG_DRIVER("allow wake ack timed out\n");
}

-   i915_gem_init_userptr(dev);
i915_gem_init_global_gtt(dev);

ret = i915_gem_context_init(dev);
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 191ac71..f261cb9 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -22,692 +22,15 @@
  *
  */

-#include "drmP.h"
 #include "i915_drm.h"
 #include "i915_drv.h"
-#include "i915_trace.h"
 #include "intel_drv.h"
-#include 
-#include 
-#include 
-#include 

-#if defined(CONFIG_MMU_NOTIFIER)
-#include 
-
-struct i915_mmu_notifier {
-   spinlock_t lock;
-   struct hlist_node node;
-   struct mmu_notifier mn;
-   struct rb_root objects;
-   struct drm_device *dev;
-   struct mm_struct *mm;
-   struct work_struct work;
-   unsigned long count;
-   unsigned long serial;
-};
-
-struct i915_mmu_object {
-   struct i915_mmu_notifier *mmu;
-   struct interval_tree_node it;
-   struct drm_i915_gem_object *obj;
-};
-
-static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier 
*_mn,
-  struct mm_struct *mm,
-  struct vm_area_struct 
*vma,
-  unsigned long start,
-  unsigned long end,
-  enum mmu_event event)
-{
-   struct i915_mmu_notifier *mn = container_of(_mn, struct 
i915_mmu_notifier, mn);
-   struct interval_tree_node *it = NULL;
-   unsigned long serial = 0;
-
-   end--; /* interval ranges are inclusive, but invalidate range is 
exclusive */
-   while (start < end) {
-   struct drm_i915_gem_object *obj;
-
-   obj = NULL;
-   spin_lock(&mn->lock);
-   if (serial == mn->serial)
-   it = interval_tree_iter_next(it, start, end);
-   else
-   it = interval_tree_iter_first(&mn->objects, start, end);
-   if (it != NULL) {
-   obj = container_of(it, struct i915_mmu_object, it)->obj;
-   drm_gem_object_reference(&obj->base);
-   serial = mn->serial;
-   }
-   spin_unlock(&mn->lock);
-   

[PATCH] drm/radeon: avoid segfault on device open when accel is not working.

2014-05-07 Thread j.gli...@gmail.com
From: Jérôme Glisse 

When acceleration is not working on a device with virtual address space,
radeon segfaults because the IB buffer is NULL, and trying to map it
inside the virtual address space triggers the segfault. This patch only
maps the IB buffer if acceleration is working.

Cc: 
Signed-off-by: Jérôme Glisse 
---
 drivers/gpu/drm/radeon/radeon_kms.c | 55 +++--
 1 file changed, 29 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
index 0cc47f1..eaaedba 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -577,28 +577,29 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
return r;
}

-   r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
-   if (r) {
-   radeon_vm_fini(rdev, &fpriv->vm);
-   kfree(fpriv);
-   return r;
-   }
+   if (rdev->accel_working) {
+   r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
+   if (r) {
+   radeon_vm_fini(rdev, &fpriv->vm);
+   kfree(fpriv);
+   return r;
+   }

-   /* map the ib pool buffer read only into
-* virtual address space */
-   bo_va = radeon_vm_bo_add(rdev, &fpriv->vm,
-rdev->ring_tmp_bo.bo);
-   r = radeon_vm_bo_set_addr(rdev, bo_va, RADEON_VA_IB_OFFSET,
- RADEON_VM_PAGE_READABLE |
- RADEON_VM_PAGE_SNOOPED);
+   /* map the ib pool buffer read only into
+* virtual address space */
+   bo_va = radeon_vm_bo_add(rdev, &fpriv->vm,
+rdev->ring_tmp_bo.bo);
+   r = radeon_vm_bo_set_addr(rdev, bo_va, 
RADEON_VA_IB_OFFSET,
+ RADEON_VM_PAGE_READABLE |
+ RADEON_VM_PAGE_SNOOPED);

-   radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
-   if (r) {
-   radeon_vm_fini(rdev, &fpriv->vm);
-   kfree(fpriv);
-   return r;
+   radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
+   if (r) {
+   radeon_vm_fini(rdev, &fpriv->vm);
+   kfree(fpriv);
+   return r;
+   }
}
-
file_priv->driver_priv = fpriv;
}

@@ -626,13 +627,15 @@ void radeon_driver_postclose_kms(struct drm_device *dev,
struct radeon_bo_va *bo_va;
int r;

-   r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
-   if (!r) {
-   bo_va = radeon_vm_bo_find(&fpriv->vm,
- rdev->ring_tmp_bo.bo);
-   if (bo_va)
-   radeon_vm_bo_rmv(rdev, bo_va);
-   radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
+   if (rdev->accel_working) {
+   r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
+   if (!r) {
+   bo_va = radeon_vm_bo_find(&fpriv->vm,
+ rdev->ring_tmp_bo.bo);
+   if (bo_va)
+   radeon_vm_bo_rmv(rdev, bo_va);
+   radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
+   }
}

radeon_vm_fini(rdev, &fpriv->vm);
-- 
1.9.0



[PATCH] drm/radeon: free uvd ring on unload

2014-02-26 Thread j.gli...@gmail.com
From: Jerome Glisse 

We need to free the UVD ring. Also reshuffle the GART teardown to
happen after the UVD teardown.

Signed-off-by: Jérôme Glisse 
Cc: stable at vger.kernel.org
---
 drivers/gpu/drm/radeon/evergreen.c  | 2 +-
 drivers/gpu/drm/radeon/radeon_uvd.c | 2 ++
 drivers/gpu/drm/radeon/rv770.c  | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index 5623e75..8a2c010 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -5475,9 +5475,9 @@ void evergreen_fini(struct radeon_device *rdev)
radeon_wb_fini(rdev);
radeon_ib_pool_fini(rdev);
radeon_irq_kms_fini(rdev);
-   evergreen_pcie_gart_fini(rdev);
uvd_v1_0_fini(rdev);
radeon_uvd_fini(rdev);
+   evergreen_pcie_gart_fini(rdev);
r600_vram_scratch_fini(rdev);
radeon_gem_fini(rdev);
radeon_fence_driver_fini(rdev);
diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
index 6781fee..3e6804b 100644
--- a/drivers/gpu/drm/radeon/radeon_uvd.c
+++ b/drivers/gpu/drm/radeon/radeon_uvd.c
@@ -171,6 +171,8 @@ void radeon_uvd_fini(struct radeon_device *rdev)

radeon_bo_unref(&rdev->uvd.vcpu_bo);

+   radeon_ring_fini(rdev, &rdev->ring[R600_RING_TYPE_UVD_INDEX]);
+
release_firmware(rdev->uvd_fw);
 }

diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
index 6c772e5..4e37a42 100644
--- a/drivers/gpu/drm/radeon/rv770.c
+++ b/drivers/gpu/drm/radeon/rv770.c
@@ -1955,9 +1955,9 @@ void rv770_fini(struct radeon_device *rdev)
radeon_wb_fini(rdev);
radeon_ib_pool_fini(rdev);
radeon_irq_kms_fini(rdev);
-   rv770_pcie_gart_fini(rdev);
uvd_v1_0_fini(rdev);
radeon_uvd_fini(rdev);
+   rv770_pcie_gart_fini(rdev);
r600_vram_scratch_fini(rdev);
radeon_gem_fini(rdev);
radeon_fence_driver_fini(rdev);
-- 
1.8.3.1



[PATCH] radeon: workaround pining failure on low ram gpu

2013-11-12 Thread j.gli...@gmail.com
From: Jerome Glisse 

GPUs with a low amount of RAM can fail at pinning a new framebuffer
before unpinning the old one. On such a failure, retry by unpinning
the old one before pinning the new one, which works around the issue.
This is somewhat ugly but only affects those old GPUs we care about.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_legacy_crtc.c | 28 
 1 file changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
index 0c7b8c6..0b158f9 100644
--- a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
+++ b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
@@ -422,6 +422,7 @@ int radeon_crtc_do_set_base(struct drm_crtc *crtc,
/* Pin framebuffer & get tilling informations */
obj = radeon_fb->obj;
rbo = gem_to_radeon_bo(obj);
+retry:
r = radeon_bo_reserve(rbo, false);
if (unlikely(r != 0))
return r;
@@ -430,6 +431,33 @@ int radeon_crtc_do_set_base(struct drm_crtc *crtc,
 &base);
if (unlikely(r != 0)) {
radeon_bo_unreserve(rbo);
+
+   /* On old GPU like RN50 with little vram pining can fails 
because
+* current fb is taking all space needed. So instead of unpining
+* the old buffer after pining the new one, first unpin old one
+* and then retry pining new one.
+*
+* As only master can set mode only master can pin and it is
+* unlikely the master client will race with itself especialy
+* on those old gpu with single crtc.
+*
+* We don't shutdown the display controller because new buffer
+* will end up in same spot.
+*/
+   if (!atomic && fb && fb != crtc->fb) {
+   struct radeon_bo *old_rbo;
+   unsigned long nsize, osize;
+
+   old_rbo = 
gem_to_radeon_bo(to_radeon_framebuffer(fb)->obj);
+   osize = radeon_bo_size(old_rbo);
+   nsize = radeon_bo_size(rbo);
+   if (nsize <= osize && !radeon_bo_reserve(old_rbo, 
false)) {
+   radeon_bo_unpin(old_rbo);
+   radeon_bo_unreserve(old_rbo);
+   fb = NULL;
+   goto retry;
+   }
+   }
return -EINVAL;
}
radeon_bo_get_tiling_flags(rbo, &tiling_flags, NULL);
-- 
1.8.3.1



[PATCH] drm/radeon: use radeon device for request firmware

2013-07-11 Thread j.gli...@gmail.com
From: Jerome Glisse 

Avoid creating a temporary platform device, which leads to issues
when several radeon GPUs are in the same computer. Instead, directly
use the radeon device for requesting firmware.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/cik.c| 25 +++--
 drivers/gpu/drm/radeon/ni.c | 21 +
 drivers/gpu/drm/radeon/r100.c   | 11 +--
 drivers/gpu/drm/radeon/r600.c   | 19 ---
 drivers/gpu/drm/radeon/radeon_uvd.c | 13 +
 drivers/gpu/drm/radeon/si.c | 23 ++-
 6 files changed, 24 insertions(+), 88 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index db507a4..b893165 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -22,7 +22,6 @@
  * Authors: Alex Deucher
  */
 #include 
-#include 
 #include 
 #include 
 #include "drmP.h"
@@ -742,7 +741,6 @@ static int ci_mc_load_microcode(struct radeon_device *rdev)
  */
 static int cik_init_microcode(struct radeon_device *rdev)
 {
-   struct platform_device *pdev;
const char *chip_name;
size_t pfp_req_size, me_req_size, ce_req_size,
mec_req_size, rlc_req_size, mc_req_size,
@@ -752,13 +750,6 @@ static int cik_init_microcode(struct radeon_device *rdev)

DRM_DEBUG("\n");

-   pdev = platform_device_register_simple("radeon_cp", 0, NULL, 0);
-   err = IS_ERR(pdev);
-   if (err) {
-   printk(KERN_ERR "radeon_cp: Failed to register firmware\n");
-   return -EINVAL;
-   }
-
switch (rdev->family) {
case CHIP_BONAIRE:
chip_name = "BONAIRE";
@@ -794,7 +785,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
DRM_INFO("Loading %s Microcode\n", chip_name);

snprintf(fw_name, sizeof(fw_name), "radeon/%s_pfp.bin", chip_name);
-   err = request_firmware(&rdev->pfp_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->pfp_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->pfp_fw->size != pfp_req_size) {
@@ -806,7 +797,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
}

snprintf(fw_name, sizeof(fw_name), "radeon/%s_me.bin", chip_name);
-   err = request_firmware(&rdev->me_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->me_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->me_fw->size != me_req_size) {
@@ -817,7 +808,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
}

snprintf(fw_name, sizeof(fw_name), "radeon/%s_ce.bin", chip_name);
-   err = request_firmware(&rdev->ce_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->ce_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->ce_fw->size != ce_req_size) {
@@ -828,7 +819,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
}

snprintf(fw_name, sizeof(fw_name), "radeon/%s_mec.bin", chip_name);
-   err = request_firmware(&rdev->mec_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->mec_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->mec_fw->size != mec_req_size) {
@@ -839,7 +830,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
}

snprintf(fw_name, sizeof(fw_name), "radeon/%s_rlc.bin", chip_name);
-   err = request_firmware(&rdev->rlc_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->rlc_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->rlc_fw->size != rlc_req_size) {
@@ -850,7 +841,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
}

snprintf(fw_name, sizeof(fw_name), "radeon/%s_sdma.bin", chip_name);
-   err = request_firmware(&rdev->sdma_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->sdma_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->sdma_fw->size != sdma_req_size) {
@@ -863,7 +854,7 @@ static int cik_init_microcode(struct radeon_device *rdev)
/* No MC ucode on APUs */
if (!(rdev->flags & RADEON_IS_IGP)) {
snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", 
chip_name);
-   err = request_firmware(&rdev->mc_fw, fw_name, &pdev->dev);
+   err = request_firmware(&rdev->mc_fw, fw_name, rdev->dev);
if (err)
goto out;
if (rdev->mc_fw->size != mc_req_size) {
@@ -875,8 +866,6 @@ static int cik_init_microcode(struct radeon_device *rdev)
}

 out:
-   platform_device_unregister(pdev);
-
if (err) {
if (err != -EINVAL)
printk(KERN_ERR
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index f30127c..465b17e 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers
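
The diff for the remaining files is truncated by the archive, but every hunk
follows the same substitution shown for cik.c: build the blob name, then call
request_firmware() against the radeon device rather than a throwaway platform
device (the throwaway device is what collided when several GPUs probed). A
standalone sketch of that pattern; the firmware struct and request_firmware()
here are stand-ins for the kernel API, and the size check mirrors the driver.

    #include <stddef.h>
    #include <stdio.h>

    struct device;                          /* opaque, as in the kernel */
    struct firmware { size_t size; };

    /* Stub with the same shape as the kernel's request_firmware(). */
    static int request_firmware(const struct firmware **fw, const char *name,
                                struct device *dev)
    {
        static const struct firmware dummy = { .size = 0 };
        (void)name; (void)dev;
        *fw = &dummy;                       /* stand-in blob for the sketch */
        return 0;
    }

    /* After the patch: the ucode is requested against the GPU's own
     * struct device, so nothing needs to be registered or unregistered. */
    static int load_pfp_ucode(const struct firmware **pfp_fw,
                              struct device *rdev_dev,
                              const char *chip_name, size_t pfp_req_size)
    {
        char fw_name[64];
        int err;

        snprintf(fw_name, sizeof(fw_name), "radeon/%s_pfp.bin", chip_name);
        err = request_firmware(pfp_fw, fw_name, rdev_dev);
        if (err)
            return err;
        if ((*pfp_fw)->size != pfp_req_size)
            return -1;                      /* the driver returns -EINVAL here */
        return 0;
    }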

[PATCH] drm/radeon: update lockup tracking when scheduling in empty ring

2013-06-19 Thread j.gli...@gmail.com
From: Jerome Glisse 

There might be an issue with lockup detection when scheduling on an
empty ring that has been sitting idle for a while. Thus update the
lockup tracking data when scheduling new work on an empty ring.

Signed-off-by: Jerome Glisse 
Tested-by: Andy Lutomirski 
Cc: stable at vger.kernel.org
---
 drivers/gpu/drm/radeon/radeon_ring.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
index e17faa7..82434018 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -402,6 +402,13 @@ int radeon_ring_alloc(struct radeon_device *rdev, struct radeon_ring *ring, unsi
return -ENOMEM;
/* Align requested size with padding so unlock_commit can
 * pad safely */
+   radeon_ring_free_size(rdev, ring);
+   if (ring->ring_free_dw == (ring->ring_size / 4)) {
+   /* This is an empty ring update lockup info to avoid
+* false positive.
+*/
+   radeon_ring_lockup_update(ring);
+   }
ndw = (ndw + ring->align_mask) & ~ring->align_mask;
while (ndw > (ring->ring_free_dw - 1)) {
radeon_ring_free_size(rdev, ring);
-- 
1.7.11.7
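
One detail worth spelling out: ring_size is kept in bytes while ring_free_dw
counts free 32-bit dwords, so a completely drained ring reports exactly
ring_size / 4 free dwords, which is what the new check treats as "empty". A
small standalone illustration (field names mirror struct radeon_ring):

    #include <assert.h>
    #include <stdbool.h>

    struct ring {
        unsigned int ring_size;     /* ring buffer size, in bytes */
        unsigned int ring_free_dw;  /* free space, in 32-bit dwords */
    };

    /* The condition the patch uses to refresh the lockup timestamp. */
    static bool ring_is_idle(const struct ring *ring)
    {
        return ring->ring_free_dw == ring->ring_size / 4;
    }

    int main(void)
    {
        struct ring r = { .ring_size = 4096, .ring_free_dw = 1024 };

        assert(ring_is_idle(&r));   /* 4096 bytes == 1024 free dwords */
        r.ring_free_dw = 1000;
        assert(!ring_is_idle(&r));  /* some work still in flight */
        return 0;
    }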



[PATCH] drm/radeon: fix write back suspend regression with uvd

2013-06-06 Thread j.gli...@gmail.com
From: Jerome Glisse 

The UVD ring can't use scratch registers, thus it needs the writeback
buffer to keep a valid address or radeon_ring_backup will trigger a
kernel fault.

It's OK not to unpin the writeback buffer on suspend as it stays in
GTT and thus does not need eviction.

Reported and tracked by Wojtek 

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_device.c | 15 +--
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index 1899738..eb8068a 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -244,16 +244,6 @@ void radeon_scratch_free(struct radeon_device *rdev, uint32_t reg)
  */
 void radeon_wb_disable(struct radeon_device *rdev)
 {
-   int r;
-
-   if (rdev->wb.wb_obj) {
-   r = radeon_bo_reserve(rdev->wb.wb_obj, false);
-   if (unlikely(r != 0))
-   return;
-   radeon_bo_kunmap(rdev->wb.wb_obj);
-   radeon_bo_unpin(rdev->wb.wb_obj);
-   radeon_bo_unreserve(rdev->wb.wb_obj);
-   }
rdev->wb.enabled = false;
 }

@@ -269,6 +259,11 @@ void radeon_wb_fini(struct radeon_device *rdev)
 {
radeon_wb_disable(rdev);
if (rdev->wb.wb_obj) {
+   if (!radeon_bo_reserve(rdev->wb.wb_obj, false)) {
+   radeon_bo_kunmap(rdev->wb.wb_obj);
+   radeon_bo_unpin(rdev->wb.wb_obj);
+   radeon_bo_unreserve(rdev->wb.wb_obj);
+   }
radeon_bo_unref(&rdev->wb.wb_obj);
rdev->wb.wb = NULL;
rdev->wb.wb_obj = NULL;
-- 
1.7.11.7



[PATCH] drm/radeon: do not try to uselessly update virtual memory pagetable

2013-06-06 Thread j.gli...@gmail.com
From: Jerome Glisse 

If a buffer was never bound to a virtual memory page table then don't try
to unbind it. The only drawback is that we don't update the page table
when unbinding the IB pool buffer, which is fine because that only happens
at suspend or module unload/shutdown.

Cc: stable at kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_gart.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
index 6e24f84..daf9710 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -1209,11 +1209,13 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
 int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo_va *bo_va)
 {
-   int r;
+   int r = 0;

mutex_lock(&rdev->vm_manager.lock);
mutex_lock(&bo_va->vm->mutex);
-   r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL);
+   if (bo_va->soffset) {
+   r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL);
+   }
mutex_unlock(&rdev->vm_manager.lock);
list_del(&bo_va->vm_list);
mutex_unlock(&bo_va->vm->mutex);
-- 
1.7.11.7



[PATCH] radeon: add bo tracking debugfs

2013-04-25 Thread j.gli...@gmail.com
From: Jerome Glisse 

This is to allow debugging of userspace programs that do not free their
buffers afterwards, which is basically a memory leak. It prints the list
of all GEM objects along with their size and placement (VRAM, GTT, CPU)
and the PID of the task that created them.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  5 +++-
 drivers/gpu/drm/radeon/radeon_device.c |  5 
 drivers/gpu/drm/radeon/radeon_gem.c| 50 ++
 3 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 18904fb..bd28ee6 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -358,7 +358,8 @@ struct radeon_bo {
struct radeon_device*rdev;
struct drm_gem_object   gem_base;

-   struct ttm_bo_kmap_obj dma_buf_vmap;
+   struct ttm_bo_kmap_obj  dma_buf_vmap;
+   pid_t   pid;
 };
 #define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, gem_base)

@@ -372,6 +373,8 @@ struct radeon_bo_list {
u32 tiling_flags;
 };

+int radeon_gem_debugfs_init(struct radeon_device *rdev);
+
 /* sub-allocation manager, it has to be protected by another lock.
  * By conception this is an helper for other part of the driver
  * like the indirect buffer or semaphore, which both have their
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index 62d0ba3..76166ae 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1142,6 +1142,11 @@ int radeon_device_init(struct radeon_device *rdev,
if (r)
DRM_ERROR("ib ring test failed (%d).\n", r);

+   r = radeon_gem_debugfs_init(rdev);
+   if (r) {
+   DRM_ERROR("registering gem debugfs failed (%d).\n", r);
+   }
+
if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) {
/* Acceleration not working on AGP card try again
 * with fallback to PCI or PCIE GART
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index fe5c1f6..87f8c52 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -84,6 +84,7 @@ retry:
return r;
}
*obj = &robj->gem_base;
+   robj->pid = task_pid_nr(current);

mutex_lock(&rdev->gem.mutex);
list_add_tail(&robj->list, &rdev->gem.objects);
@@ -575,3 +576,52 @@ int radeon_mode_dumb_destroy(struct drm_file *file_priv,
 {
return drm_gem_handle_delete(file_priv, handle);
 }
+
+#if defined(CONFIG_DEBUG_FS)
+static int radeon_debugfs_gem_info(struct seq_file *m, void *data)
+{
+   struct drm_info_node *node = (struct drm_info_node *)m->private;
+   struct drm_device *dev = node->minor->dev;
+   struct radeon_device *rdev = dev->dev_private;
+   struct radeon_bo *rbo;
+   unsigned i = 0;
+
+   mutex_lock(&rdev->gem.mutex);
+   list_for_each_entry(rbo, &rdev->gem.objects, list) {
+   unsigned domain;
+   const char *placement;
+
+   domain = radeon_mem_type_to_domain(rbo->tbo.mem.mem_type);
+   switch (domain) {
+   case RADEON_GEM_DOMAIN_VRAM:
+   placement = "VRAM";
+   break;
+   case RADEON_GEM_DOMAIN_GTT:
+   placement = " GTT";
+   break;
+   case RADEON_GEM_DOMAIN_CPU:
+   default:
+   placement = " CPU";
+   break;
+   }
+   seq_printf(m, "bo[0x%08x] %8dkB %8dMB %s pid %8ld\n",
+  i, radeon_bo_size(rbo) >> 10, radeon_bo_size(rbo) >> 
20,
+  placement, (unsigned long)rbo->pid);
+   i++;
+   }
+   mutex_unlock(&rdev->gem.mutex);
+   return 0;
+}
+
+static struct drm_info_list radeon_debugfs_gem_list[] = {
+   {"radeon_gem_info", &radeon_debugfs_gem_info, 0, NULL},
+};
+#endif
+
+int radeon_gem_debugfs_init(struct radeon_device *rdev)
+{
+#if defined(CONFIG_DEBUG_FS)
+   return radeon_debugfs_add_files(rdev, radeon_debugfs_gem_list, 1);
+#endif
+   return 0;
+}
-- 
1.8.2.1



[PATCH 2/2] radeon: add si tiling support v5

2013-04-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

v2: Only write the tile index if the flag for it is set
v3: Remove the useless allow2d scanout flag
v4: Split the radeon_drm.h update into its own patch
v5: Update against the latest radeon next tree

Signed-off-by: Jerome Glisse 
---
 radeon/radeon_surface.c | 658 
 radeon/radeon_surface.h |  31 +++
 2 files changed, 644 insertions(+), 45 deletions(-)

diff --git a/radeon/radeon_surface.c b/radeon/radeon_surface.c
index 5935c23..288b5e2 100644
--- a/radeon/radeon_surface.c
+++ b/radeon/radeon_surface.c
@@ -83,12 +83,14 @@ typedef int (*hw_best_surface_t)(struct radeon_surface_manager *surf_man,

 struct radeon_hw_info {
 /* apply to r6, eg */
-uint32_tgroup_bytes;
-uint32_tnum_banks;
-uint32_tnum_pipes;
+uint32_tgroup_bytes;
+uint32_tnum_banks;
+uint32_tnum_pipes;
 /* apply to eg */
-uint32_trow_size;
-unsignedallow_2d;
+uint32_trow_size;
+unsignedallow_2d;
+/* apply to si */
+uint32_ttile_mode_array[32];
 };

 struct radeon_surface_manager {
@@ -1000,12 +1002,403 @@ static int eg_surface_best(struct radeon_surface_manager *surf_man,
 /* ===
  * Southern Islands family
  */
+#define SI__GB_TILE_MODE__PIPE_CONFIG(x)(((x) >> 6) & 0x1f)
+#define SI__PIPE_CONFIG__ADDR_SURF_P2   0
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_8x16  4
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_16x16 5
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_16x32 6
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_32x32 7
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_16x16_8x168
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_8x169
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_8x1610
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_16x16   11
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_16x16   12
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_16x32   13
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x64_32x32   14
+#define SI__GB_TILE_MODE__TILE_SPLIT(x) (((x) >> 11) & 0x7)
+#define SI__TILE_SPLIT__64B 0
+#define SI__TILE_SPLIT__128B1
+#define SI__TILE_SPLIT__256B2
+#define SI__TILE_SPLIT__512B3
+#define SI__TILE_SPLIT__1024B   4
+#define SI__TILE_SPLIT__2048B   5
+#define SI__TILE_SPLIT__4096B   6
+#define SI__GB_TILE_MODE__BANK_WIDTH(x) (((x) >> 14) & 0x3)
+#define SI__BANK_WIDTH__1   0
+#define SI__BANK_WIDTH__2   1
+#define SI__BANK_WIDTH__4   2
+#define SI__BANK_WIDTH__8   3
+#define SI__GB_TILE_MODE__BANK_HEIGHT(x)(((x) >> 16) & 0x3)
+#define SI__BANK_HEIGHT__1  0
+#define SI__BANK_HEIGHT__2  1
+#define SI__BANK_HEIGHT__4  2
+#define SI__BANK_HEIGHT__8  3
+#define SI__GB_TILE_MODE__MACRO_TILE_ASPECT(x)  (((x) >> 18) & 0x3)
+#define SI__MACRO_TILE_ASPECT__10
+#define SI__MACRO_TILE_ASPECT__21
+#define SI__MACRO_TILE_ASPECT__42
+#define SI__MACRO_TILE_ASPECT__83
+#define SI__GB_TILE_MODE__NUM_BANKS(x)  (((x) >> 20) & 0x3)
+#define SI__NUM_BANKS__2_BANK   0
+#define SI__NUM_BANKS__4_BANK   1
+#define SI__NUM_BANKS__8_BANK   2
+#define SI__NUM_BANKS__16_BANK  3
+
+
+static void si_gb_tile_mode(uint32_t gb_tile_mode,
+unsigned *num_pipes,
+unsigned *num_banks,
+uint32_t *macro_tile_aspect,
+uint32_t *bank_w,
+uint32_t *bank_h,
+uint32_t *tile_split)
+{
+if (num_pipes) {
+switch (SI__GB_TILE_MODE__PIPE_CONFIG(gb_tile_mode)) {
+case SI__PIPE_CONFIG__ADDR_SURF_P2:
+default:
+*num_pipes = 2;
+break;
+case SI__PIPE_CONFIG__ADDR_SURF_P4_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P4_16x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P4_16x32:
+case SI__PIPE_CONFIG__ADDR_SURF_P4_32x32:
+*num_pipes = 4;
+break;
+case SI__PIPE_CONFIG__ADDR_SURF_P8_16x16_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_8x16:
+case
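
The decode helpers this patch adds are used by reading one GB_TILE_MODE dword
and pulling the individual fields out with the accessor macros. As a
self-contained example, extracting the bank count (macros copied from the hunk
above; the register value is made up):

    #include <stdint.h>
    #include <stdio.h>

    #define SI__GB_TILE_MODE__NUM_BANKS(x)  (((x) >> 20) & 0x3)
    #define SI__NUM_BANKS__2_BANK   0
    #define SI__NUM_BANKS__4_BANK   1
    #define SI__NUM_BANKS__8_BANK   2
    #define SI__NUM_BANKS__16_BANK  3

    /* Same shape as the num_banks branch of si_gb_tile_mode(). */
    static unsigned int si_num_banks(uint32_t gb_tile_mode)
    {
        switch (SI__GB_TILE_MODE__NUM_BANKS(gb_tile_mode)) {
        case SI__NUM_BANKS__2_BANK:  return 2;
        case SI__NUM_BANKS__4_BANK:  return 4;
        case SI__NUM_BANKS__8_BANK:  return 8;
        default:                     return 16;
        }
    }

    int main(void)
    {
        uint32_t gb_tile_mode = (uint32_t)SI__NUM_BANKS__8_BANK << 20; /* made-up value */

        printf("%u banks\n", si_num_banks(gb_tile_mode));   /* prints "8 banks" */
        return 0;
    }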

[PATCH 1/2] radeon: update radeon_drm.h to kernel last API additions v2

2013-04-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

v2: sync with radeon-next tree for 3.10

http://cgit.freedesktop.org/~agd5f/linux/log/?h=drm-next-3.10-wip

Signed-off-by: Jerome Glisse 
---
 include/drm/radeon_drm.h | 81 
 1 file changed, 81 insertions(+)

diff --git a/include/drm/radeon_drm.h b/include/drm/radeon_drm.h
index 00d66b3..86cef15 100644
--- a/include/drm/radeon_drm.h
+++ b/include/drm/radeon_drm.h
@@ -509,6 +509,7 @@ typedef struct {
 #define DRM_RADEON_GEM_SET_TILING  0x28
 #define DRM_RADEON_GEM_GET_TILING  0x29
 #define DRM_RADEON_GEM_BUSY0x2a
+#define DRM_RADEON_GEM_VA  0x2b

 #define DRM_IOCTL_RADEON_CP_INITDRM_IOW( DRM_COMMAND_BASE + 
DRM_RADEON_CP_INIT, drm_radeon_init_t)
 #define DRM_IOCTL_RADEON_CP_START   DRM_IO(  DRM_COMMAND_BASE + 
DRM_RADEON_CP_START)
@@ -550,6 +551,7 @@ typedef struct {
 #define DRM_IOCTL_RADEON_SET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_SET_TILING, struct drm_radeon_gem_set_tiling)
 #define DRM_IOCTL_RADEON_GET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_GET_TILING, struct drm_radeon_gem_get_tiling)
 #define DRM_IOCTL_RADEON_GEM_BUSY  DRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy)
+#define DRM_IOCTL_RADEON_GEM_VADRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_VA, struct drm_radeon_gem_va)

 typedef struct drm_radeon_init {
enum {
@@ -882,8 +884,43 @@ struct drm_radeon_gem_pwrite {
uint64_t data_ptr;
 };

+#define RADEON_VA_MAP  1
+#define RADEON_VA_UNMAP2
+
+#define RADEON_VA_RESULT_OK0
+#define RADEON_VA_RESULT_ERROR 1
+#define RADEON_VA_RESULT_VA_EXIST  2
+
+#define RADEON_VM_PAGE_VALID   (1 << 0)
+#define RADEON_VM_PAGE_READABLE(1 << 1)
+#define RADEON_VM_PAGE_WRITEABLE   (1 << 2)
+#define RADEON_VM_PAGE_SYSTEM  (1 << 3)
+#define RADEON_VM_PAGE_SNOOPED (1 << 4)
+
+struct drm_radeon_gem_va {
+   uint32_thandle;
+   uint32_toperation;
+   uint32_tvm_id;
+   uint32_tflags;
+   uint64_toffset;
+};
+
 #define RADEON_CHUNK_ID_RELOCS 0x01
 #define RADEON_CHUNK_ID_IB 0x02
+#define RADEON_CHUNK_ID_FLAGS  0x03
+#define RADEON_CHUNK_ID_CONST_IB   0x04
+
+/* The first dword of RADEON_CHUNK_ID_FLAGS is a uint32 of these flags: */
+#define RADEON_CS_KEEP_TILING_FLAGS 0x01
+#define RADEON_CS_USE_VM0x02
+#define RADEON_CS_END_OF_FRAME  0x04 /* a hint from userspace which CS is 
the last one */
+/* The second dword of RADEON_CHUNK_ID_FLAGS is a uint32 that sets the ring 
type */
+#define RADEON_CS_RING_GFX  0
+#define RADEON_CS_RING_COMPUTE  1
+#define RADEON_CS_RING_DMA  2
+#define RADEON_CS_RING_UVD  3
+/* The third dword of RADEON_CHUNK_ID_FLAGS is a sint32 that sets the priority 
*/
+/* 0 = normal, + = higher priority, - = lower priority */

 struct drm_radeon_cs_chunk {
uint32_tchunk_id;
@@ -891,6 +928,8 @@ struct drm_radeon_cs_chunk {
uint64_tchunk_data;
 };

+/* drm_radeon_cs_reloc.flags */
+
 struct drm_radeon_cs_reloc {
uint32_thandle;
uint32_tread_domains;
@@ -916,6 +955,30 @@ struct drm_radeon_cs {
 #define RADEON_INFO_ACCEL_WORKING2 0x05
 #define RADEON_INFO_TILING_CONFIG  0x06
 #define RADEON_INFO_WANT_HYPERZ0x07
+#define RADEON_INFO_WANT_CMASK 0x08 /* get access to CMASK on r300 */
+#define RADEON_INFO_CLOCK_CRYSTAL_FREQ 0x09 /* clock crystal frequency */
+#define RADEON_INFO_NUM_BACKENDS   0x0a /* DB/backends for r600+ - need 
for OQ */
+#define RADEON_INFO_NUM_TILE_PIPES 0x0b /* tile pipes for r600+ */
+#define RADEON_INFO_FUSION_GART_WORKING0x0c /* fusion writes to GTT 
were broken before this */
+#define RADEON_INFO_BACKEND_MAP0x0d /* pipe to backend map, 
needed by mesa */
+/* virtual address start, va < start are reserved by the kernel */
+#define RADEON_INFO_VA_START   0x0e
+/* maximum size of ib using the virtual memory cs */
+#define RADEON_INFO_IB_VM_MAX_SIZE 0x0f
+/* max pipes - needed for compute shaders */
+#define RADEON_INFO_MAX_PIPES  0x10
+/* timestamp for GL_ARB_timer_query (OpenGL), returns the current GPU clock */
+#define RADEON_INFO_TIMESTAMP  0x11
+/* max shader engines (SE) - needed for geometry shaders, etc. */
+#define RADEON_INFO_MAX_SE 0x12
+/* max SH per SE */
+#define RADEON_INFO_MAX_SH_PER_SE  0x13
+/* fast fb access is enabled */
+#define RADEON_INFO_FASTFB_WORKING 0x14
+/* query if a RADEON_CS_RING_* submission is supported */
+#define RADEON_INFO_RING_WORKING   0x15
+/* SI tile mode array */
+#define RADEON_INFO_SI_TILE_MODE_ARRAY 0x16

 struct drm_radeon_info {
uint32_trequest;
@@ -923,4 +986,22 @@

[PATCH 2/2] radeon: add si tiling support v4

2013-04-08 Thread j.gli...@gmail.com
From: Jerome Glisse 

v2: Only write the tile index if the flag for it is set
v3: Remove the useless allow2d scanout flag
v4: Split the radeon_drm.h update into its own patch

Signed-off-by: Jerome Glisse 
---
 radeon/radeon_surface.c | 658 
 radeon/radeon_surface.h |  31 +++
 2 files changed, 644 insertions(+), 45 deletions(-)

diff --git a/radeon/radeon_surface.c b/radeon/radeon_surface.c
index 5935c23..e567fba 100644
--- a/radeon/radeon_surface.c
+++ b/radeon/radeon_surface.c
@@ -83,12 +83,14 @@ typedef int (*hw_best_surface_t)(struct radeon_surface_manager *surf_man,

 struct radeon_hw_info {
 /* apply to r6, eg */
-uint32_tgroup_bytes;
-uint32_tnum_banks;
-uint32_tnum_pipes;
+uint32_tgroup_bytes;
+uint32_tnum_banks;
+uint32_tnum_pipes;
 /* apply to eg */
-uint32_trow_size;
-unsignedallow_2d;
+uint32_trow_size;
+unsignedallow_2d;
+/* apply to si */
+uint32_ttile_mode_array[32];
 };

 struct radeon_surface_manager {
@@ -1000,12 +1002,403 @@ static int eg_surface_best(struct radeon_surface_manager *surf_man,
 /* ===
  * Southern Islands family
  */
+#define SI__GB_TILE_MODE__PIPE_CONFIG(x)(((x) >> 6) & 0x1f)
+#define SI__PIPE_CONFIG__ADDR_SURF_P2   0
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_8x16  4
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_16x16 5
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_16x32 6
+#define SI__PIPE_CONFIG__ADDR_SURF_P4_32x32 7
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_16x16_8x168
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_8x169
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_8x1610
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_16x16   11
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_16x16   12
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_16x32   13
+#define SI__PIPE_CONFIG__ADDR_SURF_P8_32x64_32x32   14
+#define SI__GB_TILE_MODE__TILE_SPLIT(x) (((x) >> 11) & 0x7)
+#define SI__TILE_SPLIT__64B 0
+#define SI__TILE_SPLIT__128B1
+#define SI__TILE_SPLIT__256B2
+#define SI__TILE_SPLIT__512B3
+#define SI__TILE_SPLIT__1024B   4
+#define SI__TILE_SPLIT__2048B   5
+#define SI__TILE_SPLIT__4096B   6
+#define SI__GB_TILE_MODE__BANK_WIDTH(x) (((x) >> 14) & 0x3)
+#define SI__BANK_WIDTH__1   0
+#define SI__BANK_WIDTH__2   1
+#define SI__BANK_WIDTH__4   2
+#define SI__BANK_WIDTH__8   3
+#define SI__GB_TILE_MODE__BANK_HEIGHT(x)(((x) >> 16) & 0x3)
+#define SI__BANK_HEIGHT__1  0
+#define SI__BANK_HEIGHT__2  1
+#define SI__BANK_HEIGHT__4  2
+#define SI__BANK_HEIGHT__8  3
+#define SI__GB_TILE_MODE__MACRO_TILE_ASPECT(x)  (((x) >> 18) & 0x3)
+#define SI__MACRO_TILE_ASPECT__10
+#define SI__MACRO_TILE_ASPECT__21
+#define SI__MACRO_TILE_ASPECT__42
+#define SI__MACRO_TILE_ASPECT__83
+#define SI__GB_TILE_MODE__NUM_BANKS(x)  (((x) >> 20) & 0x3)
+#define SI__NUM_BANKS__2_BANK   0
+#define SI__NUM_BANKS__4_BANK   1
+#define SI__NUM_BANKS__8_BANK   2
+#define SI__NUM_BANKS__16_BANK  3
+
+
+static void si_gb_tile_mode(uint32_t gb_tile_mode,
+unsigned *num_pipes,
+unsigned *num_banks,
+uint32_t *macro_tile_aspect,
+uint32_t *bank_w,
+uint32_t *bank_h,
+uint32_t *tile_split)
+{
+if (num_pipes) {
+switch (SI__GB_TILE_MODE__PIPE_CONFIG(gb_tile_mode)) {
+case SI__PIPE_CONFIG__ADDR_SURF_P2:
+default:
+*num_pipes = 2;
+break;
+case SI__PIPE_CONFIG__ADDR_SURF_P4_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P4_16x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P4_16x32:
+case SI__PIPE_CONFIG__ADDR_SURF_P4_32x32:
+*num_pipes = 4;
+break;
+case SI__PIPE_CONFIG__ADDR_SURF_P8_16x16_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P8_32x32_8x16:
+case SI__PIPE_CONFIG__ADDR_SURF_P8_16x32_16x16:
+   

[PATCH 1/2] radeon: update radeon_drm.h to kernel last API additions

2013-04-08 Thread j.gli...@gmail.com
From: Jerome Glisse 

Signed-off-by: Jerome Glisse 
---
 include/drm/radeon_drm.h | 61 
 1 file changed, 61 insertions(+)

diff --git a/include/drm/radeon_drm.h b/include/drm/radeon_drm.h
index 00d66b3..ff3ce3a 100644
--- a/include/drm/radeon_drm.h
+++ b/include/drm/radeon_drm.h
@@ -509,6 +509,7 @@ typedef struct {
 #define DRM_RADEON_GEM_SET_TILING  0x28
 #define DRM_RADEON_GEM_GET_TILING  0x29
 #define DRM_RADEON_GEM_BUSY0x2a
+#define DRM_RADEON_GEM_VA  0x2b

 #define DRM_IOCTL_RADEON_CP_INITDRM_IOW( DRM_COMMAND_BASE + 
DRM_RADEON_CP_INIT, drm_radeon_init_t)
 #define DRM_IOCTL_RADEON_CP_START   DRM_IO(  DRM_COMMAND_BASE + 
DRM_RADEON_CP_START)
@@ -550,6 +551,7 @@ typedef struct {
 #define DRM_IOCTL_RADEON_SET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_SET_TILING, struct drm_radeon_gem_set_tiling)
 #define DRM_IOCTL_RADEON_GET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_GET_TILING, struct drm_radeon_gem_get_tiling)
 #define DRM_IOCTL_RADEON_GEM_BUSY  DRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy)
+#define DRM_IOCTL_RADEON_GEM_VADRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_VA, struct drm_radeon_gem_va)

 typedef struct drm_radeon_init {
enum {
@@ -882,6 +884,27 @@ struct drm_radeon_gem_pwrite {
uint64_t data_ptr;
 };

+#define RADEON_VA_MAP  1
+#define RADEON_VA_UNMAP2
+
+#define RADEON_VA_RESULT_OK0
+#define RADEON_VA_RESULT_ERROR 1
+#define RADEON_VA_RESULT_VA_EXIST  2
+
+#define RADEON_VM_PAGE_VALID   (1 << 0)
+#define RADEON_VM_PAGE_READABLE(1 << 1)
+#define RADEON_VM_PAGE_WRITEABLE   (1 << 2)
+#define RADEON_VM_PAGE_SYSTEM  (1 << 3)
+#define RADEON_VM_PAGE_SNOOPED (1 << 4)
+
+struct drm_radeon_gem_va {
+   uint32_thandle;
+   uint32_toperation;
+   uint32_tvm_id;
+   uint32_tflags;
+   uint64_toffset;
+};
+
 #define RADEON_CHUNK_ID_RELOCS 0x01
 #define RADEON_CHUNK_ID_IB 0x02

@@ -916,6 +939,26 @@ struct drm_radeon_cs {
 #define RADEON_INFO_ACCEL_WORKING2 0x05
 #define RADEON_INFO_TILING_CONFIG  0x06
 #define RADEON_INFO_WANT_HYPERZ0x07
+#define RADEON_INFO_WANT_CMASK 0x08 /* get access to CMASK on r300 */
+#define RADEON_INFO_CLOCK_CRYSTAL_FREQ 0x09 /* clock crystal frequency */
+#define RADEON_INFO_NUM_BACKENDS   0x0a /* DB/backends for r600+ - need 
for OQ */
+#define RADEON_INFO_NUM_TILE_PIPES 0x0b /* tile pipes for r600+ */
+#define RADEON_INFO_FUSION_GART_WORKING0x0c /* fusion writes to GTT 
were broken before this */
+#define RADEON_INFO_BACKEND_MAP0x0d /* pipe to backend map, 
needed by mesa */
+/* virtual address start, va < start are reserved by the kernel */
+#define RADEON_INFO_VA_START   0x0e
+/* maximum size of ib using the virtual memory cs */
+#define RADEON_INFO_IB_VM_MAX_SIZE 0x0f
+/* max pipes - needed for compute shaders */
+#define RADEON_INFO_MAX_PIPES  0x10
+/* timestamp for GL_ARB_timer_query (OpenGL), returns the current GPU clock */
+#define RADEON_INFO_TIMESTAMP  0x11
+/* max shader engines (SE) - needed for geometry shaders, etc. */
+#define RADEON_INFO_MAX_SE 0x12
+/* max SH per SE */
+#define RADEON_INFO_MAX_SH_PER_SE  0x13
+/* SI tile mode array */
+#define RADEON_INFO_SI_TILE_MODE_ARRAY 0x14

 struct drm_radeon_info {
uint32_trequest;
@@ -923,4 +966,22 @@ struct drm_radeon_info {
uint64_tvalue;
 };

+/* Those correspond to the tile index to use, this is to explicitly state
+ * the API that is implicitly defined by the tile mode array.
+ */
+#define SI_TILE_MODE_COLOR_LINEAR_ALIGNED  8
+#define SI_TILE_MODE_COLOR_1D  13
+#define SI_TILE_MODE_COLOR_1D_SCANOUT  9
+#define SI_TILE_MODE_COLOR_2D_8BPP 14
+#define SI_TILE_MODE_COLOR_2D_16BPP15
+#define SI_TILE_MODE_COLOR_2D_32BPP16
+#define SI_TILE_MODE_COLOR_2D_64BPP17
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_16BPP11
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_32BPP12
+#define SI_TILE_MODE_DEPTH_STENCIL_1D  4
+#define SI_TILE_MODE_DEPTH_STENCIL_2D  0
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_2AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_4AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_8AA  2
+
 #endif
-- 
1.8.1.4



[PATCH] radeon: add si tiling support v3

2013-04-05 Thread j.gli...@gmail.com
From: Jerome Glisse 

v2: Only write the tile index if the flag for it is set
v3: Remove useless allow2d scanout flags

Signed-off-by: Jerome Glisse 
---
 include/drm/radeon_drm.h |  61 +
 radeon/radeon_surface.c  | 658 +++
 radeon/radeon_surface.h  |  31 +++
 3 files changed, 705 insertions(+), 45 deletions(-)

diff --git a/include/drm/radeon_drm.h b/include/drm/radeon_drm.h
index 00d66b3..ff3ce3a 100644
--- a/include/drm/radeon_drm.h
+++ b/include/drm/radeon_drm.h
@@ -509,6 +509,7 @@ typedef struct {
 #define DRM_RADEON_GEM_SET_TILING  0x28
 #define DRM_RADEON_GEM_GET_TILING  0x29
 #define DRM_RADEON_GEM_BUSY0x2a
+#define DRM_RADEON_GEM_VA  0x2b

 #define DRM_IOCTL_RADEON_CP_INITDRM_IOW( DRM_COMMAND_BASE + 
DRM_RADEON_CP_INIT, drm_radeon_init_t)
 #define DRM_IOCTL_RADEON_CP_START   DRM_IO(  DRM_COMMAND_BASE + 
DRM_RADEON_CP_START)
@@ -550,6 +551,7 @@ typedef struct {
 #define DRM_IOCTL_RADEON_SET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_SET_TILING, struct drm_radeon_gem_set_tiling)
 #define DRM_IOCTL_RADEON_GET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_GET_TILING, struct drm_radeon_gem_get_tiling)
 #define DRM_IOCTL_RADEON_GEM_BUSY  DRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy)
+#define DRM_IOCTL_RADEON_GEM_VADRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_VA, struct drm_radeon_gem_va)

 typedef struct drm_radeon_init {
enum {
@@ -882,6 +884,27 @@ struct drm_radeon_gem_pwrite {
uint64_t data_ptr;
 };

+#define RADEON_VA_MAP  1
+#define RADEON_VA_UNMAP2
+
+#define RADEON_VA_RESULT_OK0
+#define RADEON_VA_RESULT_ERROR 1
+#define RADEON_VA_RESULT_VA_EXIST  2
+
+#define RADEON_VM_PAGE_VALID   (1 << 0)
+#define RADEON_VM_PAGE_READABLE(1 << 1)
+#define RADEON_VM_PAGE_WRITEABLE   (1 << 2)
+#define RADEON_VM_PAGE_SYSTEM  (1 << 3)
+#define RADEON_VM_PAGE_SNOOPED (1 << 4)
+
+struct drm_radeon_gem_va {
+   uint32_thandle;
+   uint32_toperation;
+   uint32_tvm_id;
+   uint32_tflags;
+   uint64_toffset;
+};
+
 #define RADEON_CHUNK_ID_RELOCS 0x01
 #define RADEON_CHUNK_ID_IB 0x02

@@ -916,6 +939,26 @@ struct drm_radeon_cs {
 #define RADEON_INFO_ACCEL_WORKING2 0x05
 #define RADEON_INFO_TILING_CONFIG  0x06
 #define RADEON_INFO_WANT_HYPERZ0x07
+#define RADEON_INFO_WANT_CMASK 0x08 /* get access to CMASK on r300 */
+#define RADEON_INFO_CLOCK_CRYSTAL_FREQ 0x09 /* clock crystal frequency */
+#define RADEON_INFO_NUM_BACKENDS   0x0a /* DB/backends for r600+ - need 
for OQ */
+#define RADEON_INFO_NUM_TILE_PIPES 0x0b /* tile pipes for r600+ */
+#define RADEON_INFO_FUSION_GART_WORKING0x0c /* fusion writes to GTT 
were broken before this */
+#define RADEON_INFO_BACKEND_MAP0x0d /* pipe to backend map, 
needed by mesa */
+/* virtual address start, va < start are reserved by the kernel */
+#define RADEON_INFO_VA_START   0x0e
+/* maximum size of ib using the virtual memory cs */
+#define RADEON_INFO_IB_VM_MAX_SIZE 0x0f
+/* max pipes - needed for compute shaders */
+#define RADEON_INFO_MAX_PIPES  0x10
+/* timestamp for GL_ARB_timer_query (OpenGL), returns the current GPU clock */
+#define RADEON_INFO_TIMESTAMP  0x11
+/* max shader engines (SE) - needed for geometry shaders, etc. */
+#define RADEON_INFO_MAX_SE 0x12
+/* max SH per SE */
+#define RADEON_INFO_MAX_SH_PER_SE  0x13
+/* SI tile mode array */
+#define RADEON_INFO_SI_TILE_MODE_ARRAY 0x14

 struct drm_radeon_info {
uint32_trequest;
@@ -923,4 +966,22 @@ struct drm_radeon_info {
uint64_tvalue;
 };

+/* Those correspond to the tile index to use, this is to explicitly state
+ * the API that is implicitly defined by the tile mode array.
+ */
+#define SI_TILE_MODE_COLOR_LINEAR_ALIGNED  8
+#define SI_TILE_MODE_COLOR_1D  13
+#define SI_TILE_MODE_COLOR_1D_SCANOUT  9
+#define SI_TILE_MODE_COLOR_2D_8BPP 14
+#define SI_TILE_MODE_COLOR_2D_16BPP15
+#define SI_TILE_MODE_COLOR_2D_32BPP16
+#define SI_TILE_MODE_COLOR_2D_64BPP17
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_16BPP11
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_32BPP12
+#define SI_TILE_MODE_DEPTH_STENCIL_1D  4
+#define SI_TILE_MODE_DEPTH_STENCIL_2D  0
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_2AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_4AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_8AA  2
+
 #endif
diff --git a/radeon/radeon_surface.c b/radeon/radeon_surface.c
index 5935c23..e567fba 100644
--- a/radeon/radeon_surface.c
+++ b/radeon/radeon_surface.c
@@ -83,12 +83,14 @@ typedef int (*hw_best_surface_t)(struct 
radeon_

[PATCH] drm/radeon: add si tile mode array query v2

2013-04-05 Thread j.gli...@gmail.com
From: Jerome Glisse 

Allow userspace to query for the tile mode array so userspace can properly
compute surface pitch and alignment requirements depending on tiling.

v2: Make strict aliasing safer by casting to char when copying
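
For reference, here is a minimal userspace sketch of this query. It is not
part of the patch: si_query_tile_modes() is a hypothetical helper name, the
radeon_drm.h include path is an assumption, and it presumes an open radeon
DRM fd plus a kernel exposing RADEON_INFO_SI_TILE_MODE_ARRAY (KMS driver
2.31.0 or newer, per the version bump below).

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/radeon_drm.h>   /* include path is an assumption */

/* Hypothetical helper: copy the 32-entry SI tile mode array from the kernel
 * into tile_modes[]. info.value carries a userspace pointer that the kernel
 * writes the array to. Returns 0 on success, -1 (with errno set) on error. */
static int si_query_tile_modes(int fd, uint32_t tile_modes[32])
{
    struct drm_radeon_info info;

    memset(&info, 0, sizeof(info));
    memset(tile_modes, 0, 32 * sizeof(uint32_t));
    info.request = RADEON_INFO_SI_TILE_MODE_ARRAY;
    info.value = (uint64_t)(uintptr_t)tile_modes;

    return ioctl(fd, DRM_IOCTL_RADEON_INFO, &info);
}

Userspace would then index the returned array with the SI_TILE_MODE_* indices
from the radeon_drm.h hunk below and decode the pipe config, bank width/height
and tile split fields, much as the SI__GB_TILE_MODE__* helpers in the libdrm
tiling patch elsewhere in this thread do.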

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h |   1 +
 drivers/gpu/drm/radeon/radeon_drv.c |   3 +-
 drivers/gpu/drm/radeon/radeon_kms.c | 158 +++-
 drivers/gpu/drm/radeon/si.c |   2 +
 include/uapi/drm/radeon_drm.h   |  20 +
 5 files changed, 109 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 8263af3..961659e 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1443,6 +1443,7 @@ struct si_asic {
unsigned multi_gpu_tile_size;

unsigned tile_config;
+   uint32_t tile_mode_array[32];
 };

 union radeon_asic_config {
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
b/drivers/gpu/drm/radeon/radeon_drv.c
index 66a7f0f..1e48ab6 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -71,9 +71,10 @@
  *   2.28.0 - r600-eg: Add MEM_WRITE packet support
  *   2.29.0 - R500 FP16 color clear registers
  *   2.30.0 - fix for FMASK texturing
+ *   2.31.0 - Add SI tiling mode array query
  */
 #define KMS_DRIVER_MAJOR   2
-#define KMS_DRIVER_MINOR   30
+#define KMS_DRIVER_MINOR   31
 #define KMS_DRIVER_PATCHLEVEL  0
 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
 int radeon_driver_unload_kms(struct drm_device *dev);
diff --git a/drivers/gpu/drm/radeon/radeon_kms.c 
b/drivers/gpu/drm/radeon/radeon_kms.c
index c75cb2c..63e28e0 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -176,80 +176,65 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
struct radeon_device *rdev = dev->dev_private;
struct drm_radeon_info *info = data;
struct radeon_mode_info *minfo = &rdev->mode_info;
-   uint32_t value, *value_ptr;
-   uint64_t value64, *value_ptr64;
+   uint32_t *value, value_tmp, *value_ptr, value_size;
+   uint64_t value64;
struct drm_crtc *crtc;
int i, found;

-   /* TIMESTAMP is a 64-bit value, needs special handling. */
-   if (info->request == RADEON_INFO_TIMESTAMP) {
-   if (rdev->family >= CHIP_R600) {
-   value_ptr64 = (uint64_t*)((unsigned long)info->value);
-   value64 = radeon_get_gpu_clock_counter(rdev);
-
-   if (DRM_COPY_TO_USER(value_ptr64, &value64, 
sizeof(value64))) {
-   DRM_ERROR("copy_to_user %s:%u\n", __func__, 
__LINE__);
-   return -EFAULT;
-   }
-   return 0;
-   } else {
-   DRM_DEBUG_KMS("timestamp is r6xx+ only!\n");
-   return -EINVAL;
-   }
-   }
-
value_ptr = (uint32_t *)((unsigned long)info->value);
-   if (DRM_COPY_FROM_USER(&value, value_ptr, sizeof(value))) {
-   DRM_ERROR("copy_from_user %s:%u\n", __func__, __LINE__);
-   return -EFAULT;
-   }
+   value = &value_tmp;
+   value_size = sizeof(uint32_t);

switch (info->request) {
case RADEON_INFO_DEVICE_ID:
-   value = dev->pci_device;
+   *value = dev->pci_device;
break;
case RADEON_INFO_NUM_GB_PIPES:
-   value = rdev->num_gb_pipes;
+   *value = rdev->num_gb_pipes;
break;
case RADEON_INFO_NUM_Z_PIPES:
-   value = rdev->num_z_pipes;
+   *value = rdev->num_z_pipes;
break;
case RADEON_INFO_ACCEL_WORKING:
/* xf86-video-ati 6.13.0 relies on this being false for 
evergreen */
if ((rdev->family >= CHIP_CEDAR) && (rdev->family <= 
CHIP_HEMLOCK))
-   value = false;
+   *value = false;
else
-   value = rdev->accel_working;
+   *value = rdev->accel_working;
break;
case RADEON_INFO_CRTC_FROM_ID:
+   if (DRM_COPY_FROM_USER(value, value_ptr, sizeof(uint32_t))) {
+   DRM_ERROR("copy_from_user %s:%u\n", __func__, __LINE__);
+   return -EFAULT;
+   }
for (i = 0, found = 0; i < rdev->num_crtc; i++) {
crtc = (struct drm_crtc *)minfo->crtcs[i];
-   if (crtc && crtc->base.id == value) {
+   if (crtc && crtc->base.id == *value) {
struct radeon_crtc *radeon_crtc = 
to_radeon_crtc(crtc);
-   value = radeon_crtc->crtc_id;
+

[PATCH] radeon: add si tiling support v2

2013-04-04 Thread j.gli...@gmail.com
From: Jerome Glisse 

v2: Only write the tile index if the flag for it is set

Signed-off-by: Jerome Glisse 
---
 include/drm/radeon_drm.h |  61 +
 radeon/radeon_surface.c  | 664 +++
 radeon/radeon_surface.h  |  31 +++
 3 files changed, 711 insertions(+), 45 deletions(-)

diff --git a/include/drm/radeon_drm.h b/include/drm/radeon_drm.h
index 00d66b3..ff3ce3a 100644
--- a/include/drm/radeon_drm.h
+++ b/include/drm/radeon_drm.h
@@ -509,6 +509,7 @@ typedef struct {
 #define DRM_RADEON_GEM_SET_TILING  0x28
 #define DRM_RADEON_GEM_GET_TILING  0x29
 #define DRM_RADEON_GEM_BUSY0x2a
+#define DRM_RADEON_GEM_VA  0x2b

 #define DRM_IOCTL_RADEON_CP_INITDRM_IOW( DRM_COMMAND_BASE + 
DRM_RADEON_CP_INIT, drm_radeon_init_t)
 #define DRM_IOCTL_RADEON_CP_START   DRM_IO(  DRM_COMMAND_BASE + 
DRM_RADEON_CP_START)
@@ -550,6 +551,7 @@ typedef struct {
 #define DRM_IOCTL_RADEON_SET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_SET_TILING, struct drm_radeon_gem_set_tiling)
 #define DRM_IOCTL_RADEON_GET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_GET_TILING, struct drm_radeon_gem_get_tiling)
 #define DRM_IOCTL_RADEON_GEM_BUSY  DRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy)
+#define DRM_IOCTL_RADEON_GEM_VADRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_VA, struct drm_radeon_gem_va)

 typedef struct drm_radeon_init {
enum {
@@ -882,6 +884,27 @@ struct drm_radeon_gem_pwrite {
uint64_t data_ptr;
 };

+#define RADEON_VA_MAP  1
+#define RADEON_VA_UNMAP2
+
+#define RADEON_VA_RESULT_OK0
+#define RADEON_VA_RESULT_ERROR 1
+#define RADEON_VA_RESULT_VA_EXIST  2
+
+#define RADEON_VM_PAGE_VALID   (1 << 0)
+#define RADEON_VM_PAGE_READABLE(1 << 1)
+#define RADEON_VM_PAGE_WRITEABLE   (1 << 2)
+#define RADEON_VM_PAGE_SYSTEM  (1 << 3)
+#define RADEON_VM_PAGE_SNOOPED (1 << 4)
+
+struct drm_radeon_gem_va {
+   uint32_thandle;
+   uint32_toperation;
+   uint32_tvm_id;
+   uint32_tflags;
+   uint64_toffset;
+};
+
 #define RADEON_CHUNK_ID_RELOCS 0x01
 #define RADEON_CHUNK_ID_IB 0x02

@@ -916,6 +939,26 @@ struct drm_radeon_cs {
 #define RADEON_INFO_ACCEL_WORKING2 0x05
 #define RADEON_INFO_TILING_CONFIG  0x06
 #define RADEON_INFO_WANT_HYPERZ0x07
+#define RADEON_INFO_WANT_CMASK 0x08 /* get access to CMASK on r300 */
+#define RADEON_INFO_CLOCK_CRYSTAL_FREQ 0x09 /* clock crystal frequency */
+#define RADEON_INFO_NUM_BACKENDS   0x0a /* DB/backends for r600+ - need 
for OQ */
+#define RADEON_INFO_NUM_TILE_PIPES 0x0b /* tile pipes for r600+ */
+#define RADEON_INFO_FUSION_GART_WORKING0x0c /* fusion writes to GTT 
were broken before this */
+#define RADEON_INFO_BACKEND_MAP0x0d /* pipe to backend map, 
needed by mesa */
+/* virtual address start, va < start are reserved by the kernel */
+#define RADEON_INFO_VA_START   0x0e
+/* maximum size of ib using the virtual memory cs */
+#define RADEON_INFO_IB_VM_MAX_SIZE 0x0f
+/* max pipes - needed for compute shaders */
+#define RADEON_INFO_MAX_PIPES  0x10
+/* timestamp for GL_ARB_timer_query (OpenGL), returns the current GPU clock */
+#define RADEON_INFO_TIMESTAMP  0x11
+/* max shader engines (SE) - needed for geometry shaders, etc. */
+#define RADEON_INFO_MAX_SE 0x12
+/* max SH per SE */
+#define RADEON_INFO_MAX_SH_PER_SE  0x13
+/* SI tile mode array */
+#define RADEON_INFO_SI_TILE_MODE_ARRAY 0x14

 struct drm_radeon_info {
uint32_trequest;
@@ -923,4 +966,22 @@ struct drm_radeon_info {
uint64_tvalue;
 };

+/* Those correspond to the tile index to use, this is to explicitly state
+ * the API that is implicitly defined by the tile mode array.
+ */
+#define SI_TILE_MODE_COLOR_LINEAR_ALIGNED  8
+#define SI_TILE_MODE_COLOR_1D  13
+#define SI_TILE_MODE_COLOR_1D_SCANOUT  9
+#define SI_TILE_MODE_COLOR_2D_8BPP 14
+#define SI_TILE_MODE_COLOR_2D_16BPP15
+#define SI_TILE_MODE_COLOR_2D_32BPP16
+#define SI_TILE_MODE_COLOR_2D_64BPP17
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_16BPP11
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_32BPP12
+#define SI_TILE_MODE_DEPTH_STENCIL_1D  4
+#define SI_TILE_MODE_DEPTH_STENCIL_2D  0
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_2AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_4AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_8AA  2
+
 #endif
diff --git a/radeon/radeon_surface.c b/radeon/radeon_surface.c
index 5935c23..2056899 100644
--- a/radeon/radeon_surface.c
+++ b/radeon/radeon_surface.c
@@ -83,12 +83,15 @@ typedef int (*hw_best_surface_t)(struct 
radeon_surface_manager *surf_man,

 struct radeo

[PATCH] drm/radeon: add si tile mode array query

2013-04-03 Thread j.gli...@gmail.com
From: Jerome Glisse 

Allow userspace to query for the tile mode array so userspace can properly
compute surface pitch and alignment requirements depending on tiling.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h |   1 +
 drivers/gpu/drm/radeon/radeon_drv.c |   3 +-
 drivers/gpu/drm/radeon/radeon_kms.c | 158 +++-
 drivers/gpu/drm/radeon/si.c |   2 +
 include/uapi/drm/radeon_drm.h   |  20 +
 5 files changed, 109 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 8263af3..961659e 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1443,6 +1443,7 @@ struct si_asic {
unsigned multi_gpu_tile_size;

unsigned tile_config;
+   uint32_t tile_mode_array[32];
 };

 union radeon_asic_config {
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
b/drivers/gpu/drm/radeon/radeon_drv.c
index 66a7f0f..1e48ab6 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -71,9 +71,10 @@
  *   2.28.0 - r600-eg: Add MEM_WRITE packet support
  *   2.29.0 - R500 FP16 color clear registers
  *   2.30.0 - fix for FMASK texturing
+ *   2.31.0 - Add SI tiling mode array query
  */
 #define KMS_DRIVER_MAJOR   2
-#define KMS_DRIVER_MINOR   30
+#define KMS_DRIVER_MINOR   31
 #define KMS_DRIVER_PATCHLEVEL  0
 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
 int radeon_driver_unload_kms(struct drm_device *dev);
diff --git a/drivers/gpu/drm/radeon/radeon_kms.c 
b/drivers/gpu/drm/radeon/radeon_kms.c
index c75cb2c..8076434 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -176,80 +176,65 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
struct radeon_device *rdev = dev->dev_private;
struct drm_radeon_info *info = data;
struct radeon_mode_info *minfo = &rdev->mode_info;
-   uint32_t value, *value_ptr;
-   uint64_t value64, *value_ptr64;
+   uint32_t *value, value_tmp, *value_ptr, value_size;
+   uint64_t value64;
struct drm_crtc *crtc;
int i, found;

-   /* TIMESTAMP is a 64-bit value, needs special handling. */
-   if (info->request == RADEON_INFO_TIMESTAMP) {
-   if (rdev->family >= CHIP_R600) {
-   value_ptr64 = (uint64_t*)((unsigned long)info->value);
-   value64 = radeon_get_gpu_clock_counter(rdev);
-
-   if (DRM_COPY_TO_USER(value_ptr64, &value64, 
sizeof(value64))) {
-   DRM_ERROR("copy_to_user %s:%u\n", __func__, 
__LINE__);
-   return -EFAULT;
-   }
-   return 0;
-   } else {
-   DRM_DEBUG_KMS("timestamp is r6xx+ only!\n");
-   return -EINVAL;
-   }
-   }
-
value_ptr = (uint32_t *)((unsigned long)info->value);
-   if (DRM_COPY_FROM_USER(&value, value_ptr, sizeof(value))) {
-   DRM_ERROR("copy_from_user %s:%u\n", __func__, __LINE__);
-   return -EFAULT;
-   }
+   value = &value_tmp;
+   value_size = sizeof(uint32_t);

switch (info->request) {
case RADEON_INFO_DEVICE_ID:
-   value = dev->pci_device;
+   *value = dev->pci_device;
break;
case RADEON_INFO_NUM_GB_PIPES:
-   value = rdev->num_gb_pipes;
+   *value = rdev->num_gb_pipes;
break;
case RADEON_INFO_NUM_Z_PIPES:
-   value = rdev->num_z_pipes;
+   *value = rdev->num_z_pipes;
break;
case RADEON_INFO_ACCEL_WORKING:
/* xf86-video-ati 6.13.0 relies on this being false for 
evergreen */
if ((rdev->family >= CHIP_CEDAR) && (rdev->family <= 
CHIP_HEMLOCK))
-   value = false;
+   *value = false;
else
-   value = rdev->accel_working;
+   *value = rdev->accel_working;
break;
case RADEON_INFO_CRTC_FROM_ID:
+   if (DRM_COPY_FROM_USER(value, value_ptr, sizeof(uint32_t))) {
+   DRM_ERROR("copy_from_user %s:%u\n", __func__, __LINE__);
+   return -EFAULT;
+   }
for (i = 0, found = 0; i < rdev->num_crtc; i++) {
crtc = (struct drm_crtc *)minfo->crtcs[i];
-   if (crtc && crtc->base.id == value) {
+   if (crtc && crtc->base.id == *value) {
struct radeon_crtc *radeon_crtc = 
to_radeon_crtc(crtc);
-   value = radeon_crtc->crtc_id;
+   *value = radeon_crtc->crtc_id;
  

[PATCH] radeon: add si tiling support

2013-04-03 Thread j.gli...@gmail.com
From: Jerome Glisse 

Signed-off-by: Jerome Glisse 
---
 include/drm/radeon_drm.h |  61 +
 radeon/radeon_surface.c  | 663 +++
 radeon/radeon_surface.h  |  30 +++
 3 files changed, 709 insertions(+), 45 deletions(-)

diff --git a/include/drm/radeon_drm.h b/include/drm/radeon_drm.h
index 00d66b3..ff3ce3a 100644
--- a/include/drm/radeon_drm.h
+++ b/include/drm/radeon_drm.h
@@ -509,6 +509,7 @@ typedef struct {
 #define DRM_RADEON_GEM_SET_TILING  0x28
 #define DRM_RADEON_GEM_GET_TILING  0x29
 #define DRM_RADEON_GEM_BUSY0x2a
+#define DRM_RADEON_GEM_VA  0x2b

 #define DRM_IOCTL_RADEON_CP_INITDRM_IOW( DRM_COMMAND_BASE + 
DRM_RADEON_CP_INIT, drm_radeon_init_t)
 #define DRM_IOCTL_RADEON_CP_START   DRM_IO(  DRM_COMMAND_BASE + 
DRM_RADEON_CP_START)
@@ -550,6 +551,7 @@ typedef struct {
 #define DRM_IOCTL_RADEON_SET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_SET_TILING, struct drm_radeon_gem_set_tiling)
 #define DRM_IOCTL_RADEON_GET_TILINGDRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_GET_TILING, struct drm_radeon_gem_get_tiling)
 #define DRM_IOCTL_RADEON_GEM_BUSY  DRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_BUSY, struct drm_radeon_gem_busy)
+#define DRM_IOCTL_RADEON_GEM_VADRM_IOWR(DRM_COMMAND_BASE + 
DRM_RADEON_GEM_VA, struct drm_radeon_gem_va)

 typedef struct drm_radeon_init {
enum {
@@ -882,6 +884,27 @@ struct drm_radeon_gem_pwrite {
uint64_t data_ptr;
 };

+#define RADEON_VA_MAP  1
+#define RADEON_VA_UNMAP2
+
+#define RADEON_VA_RESULT_OK0
+#define RADEON_VA_RESULT_ERROR 1
+#define RADEON_VA_RESULT_VA_EXIST  2
+
+#define RADEON_VM_PAGE_VALID   (1 << 0)
+#define RADEON_VM_PAGE_READABLE(1 << 1)
+#define RADEON_VM_PAGE_WRITEABLE   (1 << 2)
+#define RADEON_VM_PAGE_SYSTEM  (1 << 3)
+#define RADEON_VM_PAGE_SNOOPED (1 << 4)
+
+struct drm_radeon_gem_va {
+   uint32_thandle;
+   uint32_toperation;
+   uint32_tvm_id;
+   uint32_tflags;
+   uint64_toffset;
+};
+
 #define RADEON_CHUNK_ID_RELOCS 0x01
 #define RADEON_CHUNK_ID_IB 0x02

@@ -916,6 +939,26 @@ struct drm_radeon_cs {
 #define RADEON_INFO_ACCEL_WORKING2 0x05
 #define RADEON_INFO_TILING_CONFIG  0x06
 #define RADEON_INFO_WANT_HYPERZ0x07
+#define RADEON_INFO_WANT_CMASK 0x08 /* get access to CMASK on r300 */
+#define RADEON_INFO_CLOCK_CRYSTAL_FREQ 0x09 /* clock crystal frequency */
+#define RADEON_INFO_NUM_BACKENDS   0x0a /* DB/backends for r600+ - need 
for OQ */
+#define RADEON_INFO_NUM_TILE_PIPES 0x0b /* tile pipes for r600+ */
+#define RADEON_INFO_FUSION_GART_WORKING0x0c /* fusion writes to GTT 
were broken before this */
+#define RADEON_INFO_BACKEND_MAP0x0d /* pipe to backend map, 
needed by mesa */
+/* virtual address start, va < start are reserved by the kernel */
+#define RADEON_INFO_VA_START   0x0e
+/* maximum size of ib using the virtual memory cs */
+#define RADEON_INFO_IB_VM_MAX_SIZE 0x0f
+/* max pipes - needed for compute shaders */
+#define RADEON_INFO_MAX_PIPES  0x10
+/* timestamp for GL_ARB_timer_query (OpenGL), returns the current GPU clock */
+#define RADEON_INFO_TIMESTAMP  0x11
+/* max shader engines (SE) - needed for geometry shaders, etc. */
+#define RADEON_INFO_MAX_SE 0x12
+/* max SH per SE */
+#define RADEON_INFO_MAX_SH_PER_SE  0x13
+/* SI tile mode array */
+#define RADEON_INFO_SI_TILE_MODE_ARRAY 0x14

 struct drm_radeon_info {
uint32_trequest;
@@ -923,4 +966,22 @@ struct drm_radeon_info {
uint64_tvalue;
 };

+/* Those correspond to the tile index to use, this is to explicitly state
+ * the API that is implicitly defined by the tile mode array.
+ */
+#define SI_TILE_MODE_COLOR_LINEAR_ALIGNED  8
+#define SI_TILE_MODE_COLOR_1D  13
+#define SI_TILE_MODE_COLOR_1D_SCANOUT  9
+#define SI_TILE_MODE_COLOR_2D_8BPP 14
+#define SI_TILE_MODE_COLOR_2D_16BPP15
+#define SI_TILE_MODE_COLOR_2D_32BPP16
+#define SI_TILE_MODE_COLOR_2D_64BPP17
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_16BPP11
+#define SI_TILE_MODE_COLOR_2D_SCANOUT_32BPP12
+#define SI_TILE_MODE_DEPTH_STENCIL_1D  4
+#define SI_TILE_MODE_DEPTH_STENCIL_2D  0
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_2AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_4AA  3
+#define SI_TILE_MODE_DEPTH_STENCIL_2D_8AA  2
+
 #endif
diff --git a/radeon/radeon_surface.c b/radeon/radeon_surface.c
index 5935c23..dfdcbbc 100644
--- a/radeon/radeon_surface.c
+++ b/radeon/radeon_surface.c
@@ -83,12 +83,15 @@ typedef int (*hw_best_surface_t)(struct 
radeon_surface_manager *surf_man,

 struct radeon_hw_info {
 /* apply to r6, eg */
-uint32_

[PATCH] drm/radeon: Catch reservation deadlock on same buffer with different handle v2

2013-02-13 Thread j.gli...@gmail.com
From: Jerome Glisse 

This patch prints a warning message when trying to reserve the same buffer
twice in the same CS ioctl (because the buffer is known to userspace under
two different handles). It does not try to fix the issue like:

https://patchwork.kernel.org/patch/1812991/

It just makes this case easier to debug.

v2: Make message a debug one not an error
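
As an aside, here is a hedged illustration of how a buffer can end up under
two handles in the first place (dup_handle_via_flink() is a hypothetical
name; the usual path is a GEM flink followed by a GEM open of one's own
global name, each open returning a fresh handle for the same object):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/drm.h>          /* include path is an assumption */

/* Hypothetical helper: create a second, distinct GEM handle that refers to
 * the same underlying buffer object. Submitting both handles in one CS is
 * the -EDEADLK scenario this patch reports. */
static int dup_handle_via_flink(int fd, uint32_t handle, uint32_t *second)
{
    struct drm_gem_flink flink;
    struct drm_gem_open open_arg;

    memset(&flink, 0, sizeof(flink));
    flink.handle = handle;
    if (ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink))    /* get a global name */
        return -1;

    memset(&open_arg, 0, sizeof(open_arg));
    open_arg.name = flink.name;
    if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &open_arg))  /* reopen our own name */
        return -1;

    *second = open_arg.handle;    /* new handle, same buffer object */
    return 0;
}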

Cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_object.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index d3aface..624ea8c 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -355,6 +355,18 @@ int radeon_bo_list_validate(struct list_head *head)

r = ttm_eu_reserve_buffers(head);
if (unlikely(r != 0)) {
+   if (r == -EDEADLK) {
+   /* this is not a GPU lockup, ttm_eu_reserve_buffers
+* can not trigger detection of GPU lockup. This is
+* a dead lock trying to reserve the same buffer again
+* probably because the buffer is know as 2 different
+* handle by userspace. Print a warning message so
+* that we know what's going on.
+*/
+   DRM_DEBUG("Dead lock reserving buffer (one buffer is 
know by userspace under 2 different handle)\n");
+   /* Do not return -EDEADLK to avoid useless GPU reset */
+   return -EINVAL;
+   }
return r;
}
list_for_each_entry(lobj, head, tv.head) {
-- 
1.7.11.7



[PATCH] drm/radeon: Catch reservation deadlock on same buffer with different handle

2013-02-13 Thread j.gli...@gmail.com
From: Jerome Glisse 

This patch prints a warning message when trying to reserve the same buffer
twice in the same CS ioctl (because the buffer is known to userspace under
two different handles). It does not try to fix the issue like:

https://patchwork.kernel.org/patch/1812991/

It just makes this case easier to debug.

Cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_object.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index d3aface..e40743d 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -355,6 +355,18 @@ int radeon_bo_list_validate(struct list_head *head)

r = ttm_eu_reserve_buffers(head);
if (unlikely(r != 0)) {
+   if (r == -EDEADLK) {
+   /* this is not a GPU lockup, ttm_eu_reserve_buffers
+* can not trigger detection of GPU lockup. This is
+* a dead lock trying to reserve the same buffer again
+* probably because the buffer is know as 2 different
+* handle by userspace. Print a warning message so
+* that we know what's going on.
+*/
+   DRM_ERROR("Dead lock reserving buffer (one buffer is 
know by userspace under 2 different handle)\n");
+   /* Do not return -EDEADLK to avoid useless GPU reset */
+   return -EINVAL;
+   }
return r;
}
list_for_each_entry(lobj, head, tv.head) {
-- 
1.7.11.7



[PATCH] drm/radeon: enforce use of radeon_get_ib_value when reading user cmd

2013-02-11 Thread j.gli...@gmail.com
From: Jerome Glisse 

Whenever parsing a command buffer supplied by userspace we need to use
radeon_get_ib_value rather than directly accessing the ib, as the user
command might not yet be copied into the ib. The parser might then read
a value that does not correspond to what the user is sending, possibly
allowing the user to send a malicious command undetected.
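
To illustrate the hazard, here is a standalone sketch (not the kernel's
radeon_get_ib_value() implementation): the IB copy is filled from the user
command stream one page at a time, so a raw ib[idx] read can return stale
data for a dword whose page has not been pulled in yet, while an accessor
that copies the page first always sees the real command.

#include <stdint.h>
#include <string.h>

#define DWORDS_PER_PAGE 1024u                /* 4096-byte page / 4-byte dword */
#define SKETCH_PAGES    4u

/* Standalone illustration, not kernel code: pages of the user command
 * stream are copied into the IB lazily, so reads must go through the
 * accessor that copies the page in first. */
struct ib_sketch {
    const uint32_t *user_cmd;                         /* user command stream */
    uint32_t ib[SKETCH_PAGES * DWORDS_PER_PAGE];      /* kernel-side copy */
    int page_copied[SKETCH_PAGES];
};

static uint32_t get_ib_value_sketch(struct ib_sketch *p, unsigned idx)
{
    unsigned page = idx / DWORDS_PER_PAGE;

    if (!p->page_copied[page]) {
        memcpy(&p->ib[page * DWORDS_PER_PAGE],
               &p->user_cmd[page * DWORDS_PER_PAGE],
               DWORDS_PER_PAGE * sizeof(uint32_t));
        p->page_copied[page] = 1;
    }
    return p->ib[idx];        /* safe: the page is guaranteed to be copied */
}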

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen_cs.c | 86 +--
 drivers/gpu/drm/radeon/r600_cs.c  | 38 
 2 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c 
b/drivers/gpu/drm/radeon/evergreen_cs.c
index 7a44566..ee4cff5 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -2909,14 +2909,14 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
return -EINVAL;
}
if (tiled) {
-   dst_offset = ib[idx+1];
+   dst_offset = radeon_get_ib_value(p, idx+1);
dst_offset <<= 8;

ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset 
>> 8);
p->idx += count + 7;
} else {
-   dst_offset = ib[idx+1];
-   dst_offset |= ((u64)(ib[idx+2] & 0xff)) << 32;
+   dst_offset = radeon_get_ib_value(p, idx+1);
+   dst_offset |= ((u64)(radeon_get_ib_value(p, 
idx+2) & 0xff)) << 32;

ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset & 
0xfffc);
ib[idx+2] += 
upper_32_bits(dst_reloc->lobj.gpu_offset) & 0xff;
@@ -2954,12 +2954,12 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
DRM_ERROR("bad L2T, 
frame to fields DMA_PACKET_COPY\n");
return -EINVAL;
}
-   dst_offset = ib[idx+1];
+   dst_offset = 
radeon_get_ib_value(p, idx+1);
dst_offset <<= 8;
-   dst2_offset = ib[idx+2];
+   dst2_offset = 
radeon_get_ib_value(p, idx+2);
dst2_offset <<= 8;
-   src_offset = ib[idx+8];
-   src_offset |= ((u64)(ib[idx+9] 
& 0xff)) << 32;
+   src_offset = 
radeon_get_ib_value(p, idx+8);
+   src_offset |= 
((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
if ((src_offset + (count * 4)) 
> radeon_bo_size(src_reloc->robj)) {
dev_warn(p->dev, "DMA 
L2T, frame to fields src buffer too small (%llu %lu)\n",
 src_offset + 
(count * 4), radeon_bo_size(src_reloc->robj));
@@ -3014,12 +3014,12 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
DRM_ERROR("bad L2T, 
broadcast DMA_PACKET_COPY\n");
return -EINVAL;
}
-   dst_offset = ib[idx+1];
+   dst_offset = 
radeon_get_ib_value(p, idx+1);
dst_offset <<= 8;
-   dst2_offset = ib[idx+2];
+   dst2_offset = 
radeon_get_ib_value(p, idx+2);
dst2_offset <<= 8;
-   src_offset = ib[idx+8];
-   src_offset |= ((u64)(ib[idx+9] 
& 0xff)) << 32;
+   src_offset = 
radeon_get_ib_value(p, idx+8);
+   src_offset |= 
((u64)(radeon_get_ib_value(p, idx+9) & 0xff)) << 32;
if ((src_offset + (count * 4)) 
> radeon_bo_size(src_reloc->robj)) {
dev_warn(p->dev, "DMA 
L2T, broadcast src buffer too small (%llu %lu)\n",
 src_offset + 
(count * 4), radeon_bo_size(src_reloc->robj));
@@ -3046,22 +3046,22 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
   

[PATCH] drm/radeon: copy userspace cmd to local copy before processing it v3

2013-02-08 Thread j.gli...@gmail.com
From: Jerome Glisse 

In some rare cases where a packet is big enough to go over a page boundary
we might not yet have copied the userspace data into the local copy,
resulting in the kernel reading garbage data.

Without this patch the kernel might submit an unprocessed/unrelocated
command to the GPU, which might lead to a GPU lockup.

v2: Make sure we copy all of the pages and don't forget some when
the packet count in dwords is bigger than one page
v3: Rebase patch against Linus master
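
A small worked example of the page-crossing check this patch adds to
evergreen_cs_packet_parse() (1024 dwords per 4 KiB page, so ">> 10" turns a
dword index into a page index; the values below are made up for illustration):

#include <stdio.h>

int main(void)
{
    unsigned idx = 1020;    /* packet header near the end of page 0 */
    unsigned count = 10;    /* packet body length in dwords */
    /* same arithmetic as the patch: pages touched beyond the header's page */
    unsigned npages = ((idx + count + 1) >> 10) - (idx >> 10);

    printf("npages = %u\n", npages);   /* prints 1: one extra page to copy */
    return 0;
}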

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen_cs.c | 35 ++-
 drivers/gpu/drm/radeon/r600_cs.c  | 19 ++-
 2 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c 
b/drivers/gpu/drm/radeon/evergreen_cs.c
index 7a44566..51ad74a 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -1021,7 +1021,7 @@ static int evergreen_cs_packet_parse(struct 
radeon_cs_parser *p,
  unsigned idx)
 {
struct radeon_cs_chunk *ib_chunk = &p->chunks[p->chunk_ib_idx];
-   uint32_t header;
+   uint32_t header, i, npages;

if (idx >= ib_chunk->length_dw) {
DRM_ERROR("Can not parse packet at %d after CS end %d !\n",
@@ -1052,6 +1052,11 @@ static int evergreen_cs_packet_parse(struct 
radeon_cs_parser *p,
  pkt->idx, pkt->type, pkt->count, ib_chunk->length_dw);
return -EINVAL;
}
+   /* make sure we copied packet fully from userspace */
+   npages = ((idx + pkt->count + 1) >> 10) - (idx >> 10);
+   for (i = 1; i <= npages; i++) {
+   radeon_get_ib_value(p, (idx & 0xfc00) + i * 0x400);
+   }
return 0;
 }

@@ -2909,12 +2914,16 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
return -EINVAL;
}
if (tiled) {
+   /* make sure we copied packet fully from 
userspace */
+   radeon_get_ib_value(p, idx + 6);
dst_offset = ib[idx+1];
dst_offset <<= 8;

ib[idx+1] += (u32)(dst_reloc->lobj.gpu_offset 
>> 8);
p->idx += count + 7;
} else {
+   /* make sure we copied packet fully from 
userspace */
+   radeon_get_ib_value(p, idx + 2);
dst_offset = ib[idx+1];
dst_offset |= ((u64)(ib[idx+2] & 0xff)) << 32;

@@ -2945,6 +2954,8 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
switch (misc) {
case 0:
/* L2T, frame to fields */
+   /* make sure we copied packet 
fully from userspace */
+   radeon_get_ib_value(p, idx + 9);
if (idx_value & (1 << 31)) {
DRM_ERROR("bad L2T, 
frame to fields DMA_PACKET_COPY\n");
return -EINVAL;
@@ -2983,6 +2994,8 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
break;
case 1:
/* L2T, T2L partial */
+   /* make sure we copied packet 
fully from userspace */
+   radeon_get_ib_value(p, idx + 
11);
if (p->family < CHIP_CAYMAN) {
DRM_ERROR("L2T, T2L 
Partial is cayman only !\n");
return -EINVAL;
@@ -3005,6 +3018,8 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
break;
case 3:
/* L2T, broadcast */
+   /* make sure we copied packet 
fully from userspace */
+   radeon_get_ib_value(p, idx + 9);
if (idx_value & (1 << 31)) {
DRM_ERROR("bad L2T, 
broadcast DMA_PACKET_COPY\n");
return -EINVAL;
@@ -3043,6 +3058,8 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
break;
case 4:
/* 

[PATCH] drm/ttm: avoid allocation memory while spinlock is held v2

2013-02-04 Thread j.gli...@gmail.com
From: Jerome Glisse 

We need to take a reference on the sync object while holding the
fence spinlock, but at the same time we don't want to allocate
memory while holding the spinlock. This patch makes sure we
enforce both of these constraints.

v2: actually test build it

Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296
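
A minimal userspace sketch of the ordering the patch enforces, with a mutex
standing in for the fence spinlock and calloc() for the sleeping allocation
(generic pattern only, not the TTM API; all names here are hypothetical):

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical refcounted sync object and ghost object, for illustration. */
struct sync_sketch  { int refcount; };
struct ghost_sketch { struct sync_sketch *sync_obj; };

static pthread_mutex_t fence_lock = PTHREAD_MUTEX_INITIALIZER;

static struct sync_sketch *sync_ref(struct sync_sketch *s)
{
    s->refcount++;            /* only ever called with fence_lock held */
    return s;
}

static struct ghost_sketch *make_ghost(struct sync_sketch *current_sync)
{
    struct sync_sketch *sync;
    struct ghost_sketch *g;

    pthread_mutex_lock(&fence_lock);
    sync = sync_ref(current_sync);     /* take the reference under the lock */
    pthread_mutex_unlock(&fence_lock);

    g = calloc(1, sizeof(*g));         /* allocate only after dropping it */
    if (!g)
        return NULL;                   /* caller must drop the extra ref */
    g->sync_obj = sync;                /* hand over the pre-taken reference */
    return g;
}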

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/ttm/ttm_bo_util.c |   17 -
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 44420fc..f4b7acd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -413,6 +413,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object 
*bo)
  * @bo: A pointer to a struct ttm_buffer_object.
  * @new_obj: A pointer to a pointer to a newly created ttm_buffer_object,
  * holding the data of @bo with the old placement.
+ * @sync_obj: the sync object caller is responsible to take a reference on
+ * behalf of this function
  *
  * This is a utility function that may be called after an accelerated move
  * has been scheduled. A new buffer object is created as a placeholder for
@@ -423,11 +425,11 @@ static void ttm_transfered_destroy(struct 
ttm_buffer_object *bo)
  */

 static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
- struct ttm_buffer_object **new_obj)
+ struct ttm_buffer_object **new_obj,
+ void *sync_obj)
 {
struct ttm_buffer_object *fbo;
struct ttm_bo_device *bdev = bo->bdev;
-   struct ttm_bo_driver *driver = bdev->driver;

fbo = kzalloc(sizeof(*fbo), GFP_KERNEL);
if (!fbo)
@@ -448,7 +450,8 @@ static int ttm_buffer_object_transfer(struct 
ttm_buffer_object *bo,
fbo->vm_node = NULL;
atomic_set(&fbo->cpu_writers, 0);

-   fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+   /* reference on sync obj is taken by the caller of this function */
+   fbo->sync_obj = sync_obj;
kref_init(&fbo->list_kref);
kref_init(&fbo->kref);
fbo->destroy = &ttm_transfered_destroy;
@@ -652,6 +655,8 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
}
ttm_bo_free_old_node(bo);
} else {
+   void *sync_obj;
+
/**
 * This should help pipeline ordinary buffer moves.
 *
@@ -662,12 +667,14 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object 
*bo,

set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);

-   /* ttm_buffer_object_transfer accesses bo->sync_obj */
-   ret = ttm_buffer_object_transfer(bo, &ghost_obj);
+   /* take the ref on the sync object before releasing the 
spinlock */
+   sync_obj = driver->sync_obj_ref(bo->sync_obj);
spin_unlock(&bdev->fence_lock);
+
if (tmp_obj)
driver->sync_obj_unref(&tmp_obj);

+   ret = ttm_buffer_object_transfer(bo, &ghost_obj, sync_obj);
if (ret)
return ret;

-- 
1.7.10.4



[PATCH] drm/ttm: avoid allocation memory while spinlock is held

2013-02-04 Thread j.gli...@gmail.com
From: Jerome Glisse 

We need to take a reference on the sync object while holding the
fence spinlock, but at the same time we don't want to allocate
memory while holding the spinlock. This patch makes sure we
enforce both of these constraints.

Fix https://bugzilla.redhat.com/show_bug.cgi?id=906296

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/ttm/ttm_bo_util.c |   16 
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 44420fc..77799a5 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -413,6 +413,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object 
*bo)
  * @bo: A pointer to a struct ttm_buffer_object.
  * @new_obj: A pointer to a pointer to a newly created ttm_buffer_object,
  * holding the data of @bo with the old placement.
+ * @sync_obj: the sync object caller is responsible to take a reference on
+ * behalf of this function
  *
  * This is a utility function that may be called after an accelerated move
  * has been scheduled. A new buffer object is created as a placeholder for
@@ -423,7 +425,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object 
*bo)
  */

 static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
- struct ttm_buffer_object **new_obj)
+ struct ttm_buffer_object **new_obj,
+ void sync_obj)
 {
struct ttm_buffer_object *fbo;
struct ttm_bo_device *bdev = bo->bdev;
@@ -448,7 +451,8 @@ static int ttm_buffer_object_transfer(struct 
ttm_buffer_object *bo,
fbo->vm_node = NULL;
atomic_set(&fbo->cpu_writers, 0);

-   fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+   /* reference on sync obj is taken by the caller of this function */
+   fbo->sync_obj = sync_obj;
kref_init(&fbo->list_kref);
kref_init(&fbo->kref);
fbo->destroy = &ttm_transfered_destroy;
@@ -652,6 +656,8 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
}
ttm_bo_free_old_node(bo);
} else {
+   void *sync_obj;
+
/**
 * This should help pipeline ordinary buffer moves.
 *
@@ -662,12 +668,14 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object 
*bo,

set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);

-   /* ttm_buffer_object_transfer accesses bo->sync_obj */
-   ret = ttm_buffer_object_transfer(bo, &ghost_obj);
+   /* take the ref on the sync object before releasing the 
spinlock */
+   sync_obj = driver->sync_obj_ref(bo->sync_obj);
spin_unlock(&bdev->fence_lock);
+
if (tmp_obj)
driver->sync_obj_unref(&tmp_obj);

+   ret = ttm_buffer_object_transfer(bo, &ghost_obj, sync_obj);
if (ret)
return ret;

-- 
1.7.10.4



[PATCH] drm/radeon: fix cursor corruption on aruba and newer

2013-01-21 Thread j.gli...@gmail.com
From: Jerome Glisse 

Aruba and newer GPUs do not need the avivo cursor workaround;
quite the opposite, this workaround leads to corruption.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_cursor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_cursor.c 
b/drivers/gpu/drm/radeon/radeon_cursor.c
index ad6df62..30f71cc 100644
--- a/drivers/gpu/drm/radeon/radeon_cursor.c
+++ b/drivers/gpu/drm/radeon/radeon_cursor.c
@@ -241,7 +241,7 @@ int radeon_crtc_cursor_move(struct drm_crtc *crtc,
y = 0;
}

-   if (ASIC_IS_AVIVO(rdev)) {
+   if (ASIC_IS_AVIVO(rdev) && (rdev->family < CHIP_ARUBA)) {
int i = 0;
struct drm_crtc *crtc_p;

-- 
1.7.11.7



[PATCH] drm/radeon: improve semaphore debugging on lockup

2013-01-11 Thread j.gli...@gmail.com
From: Jerome Glisse 

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h   | 2 ++
 drivers/gpu/drm/radeon/radeon_ring.c  | 2 ++
 drivers/gpu/drm/radeon/radeon_semaphore.c | 4 
 3 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 9b9422c..f0bb8d5 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -649,6 +649,8 @@ struct radeon_ring {
u32 ptr_reg_mask;
u32 nop;
u32 idx;
+   u64 last_semaphore_signal_addr;
+   u64 last_semaphore_wait_addr;
 };

 /*
diff --git a/drivers/gpu/drm/radeon/radeon_ring.c 
b/drivers/gpu/drm/radeon/radeon_ring.c
index 141f2b6..2430d80 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -784,6 +784,8 @@ static int radeon_debugfs_ring_info(struct seq_file *m, 
void *data)
}
seq_printf(m, "driver's copy of the wptr: 0x%08x [%5d]\n", ring->wptr, 
ring->wptr);
seq_printf(m, "driver's copy of the rptr: 0x%08x [%5d]\n", ring->rptr, 
ring->rptr);
+   seq_printf(m, "last semaphore signal addr : 0x%016llx\n", 
ring->last_semaphore_signal_addr);
+   seq_printf(m, "last semaphore wait addr   : 0x%016llx\n", 
ring->last_semaphore_wait_addr);
seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw);
seq_printf(m, "%u dwords in ring\n", count);
/* print 8 dw before current rptr as often it's the last executed
diff --git a/drivers/gpu/drm/radeon/radeon_semaphore.c 
b/drivers/gpu/drm/radeon/radeon_semaphore.c
index 97f3ece..8dcc20f 100644
--- a/drivers/gpu/drm/radeon/radeon_semaphore.c
+++ b/drivers/gpu/drm/radeon/radeon_semaphore.c
@@ -95,6 +95,10 @@ int radeon_semaphore_sync_rings(struct radeon_device *rdev,
/* we assume caller has already allocated space on waiters ring */
radeon_semaphore_emit_wait(rdev, waiter, semaphore);

+   /* for debugging lockup only, used by sysfs debug files */
+   rdev->ring[signaler].last_semaphore_signal_addr = semaphore->gpu_addr;
+   rdev->ring[waiter].last_semaphore_wait_addr = semaphore->gpu_addr;
+
return 0;
 }

-- 
1.7.11.7



[PATCH 2/2] radeon/kms: cleanup async dma packet checking

2013-01-09 Thread j.gli...@gmail.com
From: Jerome Glisse 

This simplifies and cleans up the async DMA checking.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c|  16 +-
 drivers/gpu/drm/radeon/evergreen_cs.c | 807 +-
 drivers/gpu/drm/radeon/evergreend.h   |  29 +-
 3 files changed, 417 insertions(+), 435 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index f92f6bb..28f8d4f 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -3223,14 +3223,14 @@ void evergreen_dma_fence_ring_emit(struct radeon_device 
*rdev,
struct radeon_ring *ring = &rdev->ring[fence->ring];
u64 addr = rdev->fence_drv[fence->ring].gpu_addr;
/* write the fence */
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_FENCE, 0, 0, 0));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_FENCE, 0, 0));
radeon_ring_write(ring, addr & 0xfffc);
radeon_ring_write(ring, (upper_32_bits(addr) & 0xff));
radeon_ring_write(ring, fence->seq);
/* generate an interrupt */
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_TRAP, 0, 0, 0));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_TRAP, 0, 0));
/* flush HDP */
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_SRBM_WRITE, 0, 0, 0));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_SRBM_WRITE, 0, 0));
radeon_ring_write(ring, (0xf << 16) | HDP_MEM_COHERENCY_FLUSH_CNTL);
radeon_ring_write(ring, 1);
 }
@@ -3253,7 +3253,7 @@ void evergreen_dma_ring_ib_execute(struct radeon_device 
*rdev,
while ((next_rptr & 7) != 5)
next_rptr++;
next_rptr += 3;
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_WRITE, 0, 0, 1));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_WRITE, 0, 1));
radeon_ring_write(ring, ring->next_rptr_gpu_addr & 0xfffc);
radeon_ring_write(ring, upper_32_bits(ring->next_rptr_gpu_addr) 
& 0xff);
radeon_ring_write(ring, next_rptr);
@@ -3263,8 +3263,8 @@ void evergreen_dma_ring_ib_execute(struct radeon_device 
*rdev,
 * Pad as necessary with NOPs.
 */
while ((ring->wptr & 7) != 5)
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_NOP, 0, 0, 0));
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_INDIRECT_BUFFER, 0, 0, 
0));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_NOP, 0, 0));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_INDIRECT_BUFFER, 0, 0));
radeon_ring_write(ring, (ib->gpu_addr & 0xFFE0));
radeon_ring_write(ring, (ib->length_dw << 12) | 
(upper_32_bits(ib->gpu_addr) & 0xFF));

@@ -3323,7 +3323,7 @@ int evergreen_copy_dma(struct radeon_device *rdev,
if (cur_size_in_dw > 0xF)
cur_size_in_dw = 0xF;
size_in_dw -= cur_size_in_dw;
-   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_COPY, 0, 0, 
cur_size_in_dw));
+   radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_COPY, 0, 
cur_size_in_dw));
radeon_ring_write(ring, dst_offset & 0xfffc);
radeon_ring_write(ring, src_offset & 0xfffc);
radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xff);
@@ -3431,7 +3431,7 @@ static int evergreen_startup(struct radeon_device *rdev)
ring = &rdev->ring[R600_RING_TYPE_DMA_INDEX];
r = radeon_ring_init(rdev, ring, ring->ring_size, 
R600_WB_DMA_RPTR_OFFSET,
 DMA_RB_RPTR, DMA_RB_WPTR,
-2, 0x3fffc, DMA_PACKET(DMA_PACKET_NOP, 0, 0, 0));
+2, 0x3fffc, DMA_PACKET(DMA_PACKET_NOP, 0, 0));
if (r)
return r;

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c 
b/drivers/gpu/drm/radeon/evergreen_cs.c
index 7a44566..32c07bb 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -2858,16 +2858,6 @@ int evergreen_cs_parse(struct radeon_cs_parser *p)
return 0;
 }

-/*
- *  DMA
- */
-
-#define GET_DMA_CMD(h) (((h) & 0xf000) >> 28)
-#define GET_DMA_COUNT(h) ((h) & 0x000f)
-#define GET_DMA_T(h) (((h) & 0x0080) >> 23)
-#define GET_DMA_NEW(h) (((h) & 0x0400) >> 26)
-#define GET_DMA_MISC(h) (((h) & 0x070) >> 20)
-
 /**
  * evergreen_dma_cs_parse() - parse the DMA IB
  * @p: parser structure holding parsing context.
@@ -2881,9 +2871,9 @@ int evergreen_dma_cs_parse(struct radeon_cs_parser *p)
 {
struct radeon_cs_chunk *ib_chunk = &p->chunks[p->chunk_ib_idx];
struct radeon_cs_reloc *src_reloc, *dst_reloc, *dst2_reloc;
-   u32 header, cmd, count, tiled, new_cmd, misc;
+   u32 header, cmd, count, sub_cmd;
volatile u32 *ib = p->ib.ptr;
-   u32 idx, idx_value;
+   u32 idx;
u64 src_offset, dst_offset, dst2_offset;
int r;

@@ -2897

[PATCH 1/2] radeon/kms: fix dma relocation checking

2013-01-09 Thread j.gli...@gmail.com
From: Jerome Glisse 

We were checking the index against the size of the relocation buffer
instead of against the last index. This fixes a kernel segfault when
userspace submits an ill-formatted command stream/relocation buffer pair.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/r600_cs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
index 9ea13d0..f91919e 100644
--- a/drivers/gpu/drm/radeon/r600_cs.c
+++ b/drivers/gpu/drm/radeon/r600_cs.c
@@ -2561,16 +2561,16 @@ int r600_dma_cs_next_reloc(struct radeon_cs_parser *p,
struct radeon_cs_chunk *relocs_chunk;
unsigned idx;

+   *cs_reloc = NULL;
if (p->chunk_relocs_idx == -1) {
DRM_ERROR("No relocation chunk !\n");
return -EINVAL;
}
-   *cs_reloc = NULL;
relocs_chunk = &p->chunks[p->chunk_relocs_idx];
idx = p->dma_reloc_idx;
-   if (idx >= relocs_chunk->length_dw) {
+   if (idx >= p->nrelocs) {
DRM_ERROR("Relocs at %d after relocations chunk end %d !\n",
- idx, relocs_chunk->length_dw);
+ idx, p->nrelocs);
return -EINVAL;
}
*cs_reloc = p->relocs_ptr[idx];
-- 
1.7.11.7



[PATCH] radeon/kms: force rn50 chip to always report connected on analog output

2013-01-08 Thread j.gli...@gmail.com
From: Jerome Glisse 

Those RN50 chips are often connected to console remoting hardware and
load detection often fails with them. Just don't try load detection and
report the output as connected.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_legacy_encoders.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c 
b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
index f5ba224..62cd512 100644
--- a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
+++ b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
@@ -640,6 +640,14 @@ static enum drm_connector_status 
radeon_legacy_primary_dac_detect(struct drm_enc
enum drm_connector_status found = connector_status_disconnected;
bool color = true;

+   /* just don't bother on RN50 those chip are often connected to remoting
+* console hw and often we get failure to load detect those. So to make
+* everyone happy report the encoder as always connected.
+*/
+   if (ASIC_IS_RN50(rdev)) {
+   return connector_status_connected;
+   }
+
/* save the regs we need */
vclk_ecp_cntl = RREG32_PLL(RADEON_VCLK_ECP_CNTL);
crtc_ext_cntl = RREG32(RADEON_CRTC_EXT_CNTL);
-- 
1.7.11.7



[PATCH 2/2] drm/radeon: reset dma engine on gpu reset

2013-01-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

This tries to reset the DMA engine when performing a GPU reset,
hopefully bringing the GPU DMA engine back to a sane state.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c  | 30 +-
 drivers/gpu/drm/radeon/evergreend.h | 10 +-
 drivers/gpu/drm/radeon/ni.c | 30 +-
 drivers/gpu/drm/radeon/nid.h|  2 +-
 drivers/gpu/drm/radeon/r600.c   | 28 ++--
 5 files changed, 74 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index 6dc9ee7..f92f6bb 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -2309,19 +2309,19 @@ bool evergreen_gpu_is_lockup(struct radeon_device 
*rdev, struct radeon_ring *rin
 static int evergreen_gpu_soft_reset(struct radeon_device *rdev)
 {
struct evergreen_mc_save save;
-   u32 grbm_reset = 0;
+   u32 grbm_reset = 0, tmp;

if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
return 0;

dev_info(rdev->dev, "GPU softreset \n");
-   dev_info(rdev->dev, "  GRBM_STATUS=0x%08X\n",
+   dev_info(rdev->dev, "  GRBM_STATUS   = 0x%08X\n",
RREG32(GRBM_STATUS));
-   dev_info(rdev->dev, "  GRBM_STATUS_SE0=0x%08X\n",
+   dev_info(rdev->dev, "  GRBM_STATUS_SE0   = 0x%08X\n",
RREG32(GRBM_STATUS_SE0));
-   dev_info(rdev->dev, "  GRBM_STATUS_SE1=0x%08X\n",
+   dev_info(rdev->dev, "  GRBM_STATUS_SE1   = 0x%08X\n",
RREG32(GRBM_STATUS_SE1));
-   dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
+   dev_info(rdev->dev, "  SRBM_STATUS   = 0x%08X\n",
RREG32(SRBM_STATUS));
dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
RREG32(CP_STALLED_STAT1));
@@ -2337,9 +2337,21 @@ static int evergreen_gpu_soft_reset(struct radeon_device 
*rdev)
if (evergreen_mc_wait_for_idle(rdev)) {
dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
}
+
/* Disable CP parsing/prefetching */
WREG32(CP_ME_CNTL, CP_ME_HALT | CP_PFP_HALT);

+   /* Disable DMA */
+   tmp = RREG32(DMA_RB_CNTL);
+   tmp &= ~DMA_RB_ENABLE;
+   WREG32(DMA_RB_CNTL, tmp);
+
+   /* Reset dma */
+   WREG32(SRBM_SOFT_RESET, SOFT_RESET_DMA);
+   RREG32(SRBM_SOFT_RESET);
+   udelay(50);
+   WREG32(SRBM_SOFT_RESET, 0);
+
/* reset all the gfx blocks */
grbm_reset = (SOFT_RESET_CP |
  SOFT_RESET_CB |
@@ -2362,13 +2374,13 @@ static int evergreen_gpu_soft_reset(struct 
radeon_device *rdev)
(void)RREG32(GRBM_SOFT_RESET);
/* Wait a little for things to settle down */
udelay(50);
-   dev_info(rdev->dev, "  GRBM_STATUS=0x%08X\n",
+   dev_info(rdev->dev, "  GRBM_STATUS   = 0x%08X\n",
RREG32(GRBM_STATUS));
-   dev_info(rdev->dev, "  GRBM_STATUS_SE0=0x%08X\n",
+   dev_info(rdev->dev, "  GRBM_STATUS_SE0   = 0x%08X\n",
RREG32(GRBM_STATUS_SE0));
-   dev_info(rdev->dev, "  GRBM_STATUS_SE1=0x%08X\n",
+   dev_info(rdev->dev, "  GRBM_STATUS_SE1   = 0x%08X\n",
RREG32(GRBM_STATUS_SE1));
-   dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
+   dev_info(rdev->dev, "  SRBM_STATUS   = 0x%08X\n",
RREG32(SRBM_STATUS));
dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
RREG32(CP_STALLED_STAT1));
diff --git a/drivers/gpu/drm/radeon/evergreend.h 
b/drivers/gpu/drm/radeon/evergreend.h
index f82f98a..5786a32 100644
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -742,8 +742,9 @@
 #defineSOFT_RESET_ROM  (1 << 14)
 #defineSOFT_RESET_SEM  (1 << 15)
 #defineSOFT_RESET_VMC  (1 << 17)
+#defineSOFT_RESET_DMA  (1 << 20)
 #defineSOFT_RESET_TST  (1 << 21)
-#defineSOFT_RESET_REGBB(1 << 22)
+#defineSOFT_RESET_REGBB(1 << 22)
 #defineSOFT_RESET_ORB  (1 << 23)

 /* display watermarks */
@@ -2028,6 +2029,13 @@
 #defineCAYMAN_PACKET3_DEALLOC_STATE0x14

 /* DMA regs common on r6xx/r7xx/evergreen/ni */
+#define DMA_RB_CNTL   0xd000
+#   define DMA_RB_ENABLE  (1 << 0)
+#   define DMA_RB_SIZE(x) ((x) << 1) /* log2 */
+#   define DMA_RB_SWAP_ENABLE (1 << 9) /* 8IN32 */
+#   define DMA_RPTR_WRITEBACK_ENABLE  (1 << 12)
+#   define DMA_RPTR_WRITE

[PATCH 1/2] drm/radeon: improve ring debugfs printing

2013-01-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

Print 32 dwords before the last known rptr, as the problem most likely
comes from a previous command. Also a small cosmetic change to the printing.
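
A worked example of the wrap-safe "start 32 dwords before rptr" index
computation used below (ptr_mask + 1 is the ring size in dwords; the sizes
here are made up for illustration):

#include <stdio.h>

int main(void)
{
    unsigned ptr_mask = 0xfff;   /* hypothetical 4096-dword ring */
    unsigned rptr = 5;           /* rptr just past the ring start */
    /* same arithmetic as the patch: back up 32 dwords, wrapping if needed */
    unsigned i = (rptr + ptr_mask + 1 - 32) & ptr_mask;

    printf("first dumped index = %u\n", i);   /* prints 4069 */
    return 0;
}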

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_ring.c | 20 +---
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ring.c 
b/drivers/gpu/drm/radeon/radeon_ring.c
index 9410e43..141f2b6 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -770,22 +770,28 @@ static int radeon_debugfs_ring_info(struct seq_file *m, 
void *data)
int ridx = *(int*)node->info_ent->data;
struct radeon_ring *ring = &rdev->ring[ridx];
unsigned count, i, j;
+   u32 tmp;

radeon_ring_free_size(rdev, ring);
count = (ring->ring_size / 4) - ring->ring_free_dw;
-   seq_printf(m, "wptr(0x%04x): 0x%08x\n", ring->wptr_reg, 
RREG32(ring->wptr_reg));
-   seq_printf(m, "rptr(0x%04x): 0x%08x\n", ring->rptr_reg, 
RREG32(ring->rptr_reg));
+   tmp = RREG32(ring->wptr_reg) >> ring->ptr_reg_shift;
+   seq_printf(m, "wptr(0x%04x): 0x%08x [%5d]\n", ring->wptr_reg, tmp, tmp);
+   tmp = RREG32(ring->rptr_reg) >> ring->ptr_reg_shift;
+   seq_printf(m, "rptr(0x%04x): 0x%08x [%5d]\n", ring->rptr_reg, tmp, tmp);
if (ring->rptr_save_reg) {
seq_printf(m, "rptr next(0x%04x): 0x%08x\n", 
ring->rptr_save_reg,
   RREG32(ring->rptr_save_reg));
}
-   seq_printf(m, "driver's copy of the wptr: 0x%08x\n", ring->wptr);
-   seq_printf(m, "driver's copy of the rptr: 0x%08x\n", ring->rptr);
+   seq_printf(m, "driver's copy of the wptr: 0x%08x [%5d]\n", ring->wptr, 
ring->wptr);
+   seq_printf(m, "driver's copy of the rptr: 0x%08x [%5d]\n", ring->rptr, 
ring->rptr);
seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw);
seq_printf(m, "%u dwords in ring\n", count);
-   i = ring->rptr;
-   for (j = 0; j <= count; j++) {
-   seq_printf(m, "r[%04d]=0x%08x\n", i, ring->ring[i]);
+   /* print 8 dw before current rptr as often it's the last executed
+* packet that is the root issue
+*/
+   i = (ring->rptr + ring->ptr_mask + 1 - 32) & ring->ptr_mask;
+   for (j = 0; j <= (count + 32); j++) {
+   seq_printf(m, "r[%5d]=0x%08x\n", i, ring->ring[i]);
i = (i + 1) & ring->ptr_mask;
}
return 0;
-- 
1.7.11.7



[PATCH 2/2] drm/radeon: print dma status reg on lockup

2013-01-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

To help debug DMA-related lockups.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c  | 4 
 drivers/gpu/drm/radeon/evergreend.h | 3 +++
 drivers/gpu/drm/radeon/ni.c | 4 
 drivers/gpu/drm/radeon/nid.h| 1 -
 drivers/gpu/drm/radeon/r600.c   | 4 
 5 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index f95d7fc..6dc9ee7 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -2331,6 +2331,8 @@ static int evergreen_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(CP_BUSY_STAT));
dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
RREG32(CP_STAT));
+   dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
+   RREG32(DMA_STATUS_REG));
evergreen_mc_stop(rdev, &save);
if (evergreen_mc_wait_for_idle(rdev)) {
dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
@@ -2376,6 +2378,8 @@ static int evergreen_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(CP_BUSY_STAT));
dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
RREG32(CP_STAT));
+   dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
+   RREG32(DMA_STATUS_REG));
evergreen_mc_resume(rdev, &save);
return 0;
 }
diff --git a/drivers/gpu/drm/radeon/evergreend.h 
b/drivers/gpu/drm/radeon/evergreend.h
index cb9baaa..f82f98a 100644
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -2027,4 +2027,7 @@
 /* cayman packet3 addition */
 #defineCAYMAN_PACKET3_DEALLOC_STATE0x14

+/* DMA regs common on r6xx/r7xx/evergreen/ni */
+#define DMA_STATUS_REG0xd034
+
 #endif
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index 7bdbcb0..6dae387 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1331,6 +1331,8 @@ static int cayman_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(CP_BUSY_STAT));
dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
RREG32(CP_STAT));
+   dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
+   RREG32(DMA_STATUS_REG));
dev_info(rdev->dev, "  VM_CONTEXT0_PROTECTION_FAULT_ADDR   0x%08X\n",
 RREG32(0x14F8));
dev_info(rdev->dev, "  VM_CONTEXT0_PROTECTION_FAULT_STATUS 0x%08X\n",
@@ -1387,6 +1389,8 @@ static int cayman_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(CP_BUSY_STAT));
dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
RREG32(CP_STAT));
+   dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
+   RREG32(DMA_STATUS_REG));
evergreen_mc_resume(rdev, &save);
return 0;
 }
diff --git a/drivers/gpu/drm/radeon/nid.h b/drivers/gpu/drm/radeon/nid.h
index b93186b..22a62c6 100644
--- a/drivers/gpu/drm/radeon/nid.h
+++ b/drivers/gpu/drm/radeon/nid.h
@@ -675,4 +675,3 @@
 #defineDMA_PACKET_NOP0xf

 #endif
-
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 2aaf147..4605551 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -1297,6 +1297,8 @@ static int r600_gpu_soft_reset(struct radeon_device *rdev)
RREG32(CP_BUSY_STAT));
dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
RREG32(CP_STAT));
+   dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
+   RREG32(DMA_STATUS_REG));
rv515_mc_stop(rdev, &save);
if (r600_mc_wait_for_idle(rdev)) {
dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
@@ -1348,6 +1350,8 @@ static int r600_gpu_soft_reset(struct radeon_device *rdev)
RREG32(CP_BUSY_STAT));
dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
RREG32(CP_STAT));
+   dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
+   RREG32(DMA_STATUS_REG));
rv515_mc_resume(rdev, &save);
return 0;
 }
-- 
1.7.11.7



[PATCH 1/2] drm/radeon: add debugfs file for dma rings

2013-01-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_ring.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_ring.c 
b/drivers/gpu/drm/radeon/radeon_ring.c
index ebd6956..9410e43 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -794,11 +794,15 @@ static int radeon_debugfs_ring_info(struct seq_file *m, 
void *data)
 static int radeon_ring_type_gfx_index = RADEON_RING_TYPE_GFX_INDEX;
 static int cayman_ring_type_cp1_index = CAYMAN_RING_TYPE_CP1_INDEX;
 static int cayman_ring_type_cp2_index = CAYMAN_RING_TYPE_CP2_INDEX;
+static int radeon_ring_type_dma1_index = R600_RING_TYPE_DMA_INDEX;
+static int radeon_ring_type_dma2_index = CAYMAN_RING_TYPE_DMA1_INDEX;

 static struct drm_info_list radeon_debugfs_ring_info_list[] = {
{"radeon_ring_gfx", radeon_debugfs_ring_info, 0, 
&radeon_ring_type_gfx_index},
{"radeon_ring_cp1", radeon_debugfs_ring_info, 0, 
&cayman_ring_type_cp1_index},
{"radeon_ring_cp2", radeon_debugfs_ring_info, 0, 
&cayman_ring_type_cp2_index},
+   {"radeon_ring_dma1", radeon_debugfs_ring_info, 0, 
&radeon_ring_type_dma1_index},
+   {"radeon_ring_dma2", radeon_debugfs_ring_info, 0, 
&radeon_ring_type_dma2_index},
 };

 static int radeon_debugfs_sa_info(struct seq_file *m, void *data)
-- 
1.7.11.7



[PATCH] drm/radeon: add support for MEM_WRITE packet

2012-12-19 Thread j.gli...@gmail.com
From: Jerome Glisse 

To make it easier to debug lockups from userspace, add support for
the MEM_WRITE packet.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen_cs.c | 29 +
 drivers/gpu/drm/radeon/r600_cs.c  | 29 +
 drivers/gpu/drm/radeon/radeon_drv.c   |  3 ++-
 3 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c 
b/drivers/gpu/drm/radeon/evergreen_cs.c
index 74c6b42..5cea852 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -2654,6 +2654,35 @@ static int evergreen_packet3_check(struct 
radeon_cs_parser *p,
ib[idx+4] = upper_32_bits(offset) & 0xff;
}
break;
+   case PACKET3_MEM_WRITE:
+   {
+   u64 offset;
+
+   if (pkt->count != 3) {
+   DRM_ERROR("bad MEM_WRITE (invalid count)\n");
+   return -EINVAL;
+   }
+   r = evergreen_cs_packet_next_reloc(p, &reloc);
+   if (r) {
+   DRM_ERROR("bad MEM_WRITE (missing reloc)\n");
+   return -EINVAL;
+   }
+   offset = radeon_get_ib_value(p, idx+0);
+   offset += ((u64)(radeon_get_ib_value(p, idx+1) & 0xff)) << 32UL;
+   if (offset & 0x7) {
+   DRM_ERROR("bad MEM_WRITE (address not qwords 
aligned)\n");
+   return -EINVAL;
+   }
+   if ((offset + 8) > radeon_bo_size(reloc->robj)) {
+   DRM_ERROR("bad MEM_WRITE bo too small: 0x%llx, 0x%lx\n",
+ offset + 8, radeon_bo_size(reloc->robj));
+   return -EINVAL;
+   }
+   offset += reloc->lobj.gpu_offset;
+   ib[idx+0] = offset;
+   ib[idx+1] = upper_32_bits(offset) & 0xff;
+   break;
+   }
case PACKET3_COPY_DW:
if (pkt->count != 4) {
DRM_ERROR("bad COPY_DW (invalid count)\n");
diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
index 0be768b..9ea13d0 100644
--- a/drivers/gpu/drm/radeon/r600_cs.c
+++ b/drivers/gpu/drm/radeon/r600_cs.c
@@ -2294,6 +2294,35 @@ static int r600_packet3_check(struct radeon_cs_parser *p,
ib[idx+4] = upper_32_bits(offset) & 0xff;
}
break;
+   case PACKET3_MEM_WRITE:
+   {
+   u64 offset;
+
+   if (pkt->count != 3) {
+   DRM_ERROR("bad MEM_WRITE (invalid count)\n");
+   return -EINVAL;
+   }
+   r = r600_cs_packet_next_reloc(p, &reloc);
+   if (r) {
+   DRM_ERROR("bad MEM_WRITE (missing reloc)\n");
+   return -EINVAL;
+   }
+   offset = radeon_get_ib_value(p, idx+0);
+   offset += ((u64)(radeon_get_ib_value(p, idx+1) & 0xff)) << 32UL;
+   if (offset & 0x7) {
+   DRM_ERROR("bad MEM_WRITE (address not qwords 
aligned)\n");
+   return -EINVAL;
+   }
+   if ((offset + 8) > radeon_bo_size(reloc->robj)) {
+   DRM_ERROR("bad MEM_WRITE bo too small: 0x%llx, 0x%lx\n",
+ offset + 8, radeon_bo_size(reloc->robj));
+   return -EINVAL;
+   }
+   offset += reloc->lobj.gpu_offset;
+   ib[idx+0] = offset;
+   ib[idx+1] = upper_32_bits(offset) & 0xff;
+   break;
+   }
case PACKET3_COPY_DW:
if (pkt->count != 4) {
DRM_ERROR("bad COPY_DW (invalid count)\n");
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
b/drivers/gpu/drm/radeon/radeon_drv.c
index 9b1a727..ff75934 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -68,9 +68,10 @@
  *   2.25.0 - eg+: new info request for num SE and num SH
  *   2.26.0 - r600-eg: fix htile size computation
  *   2.27.0 - r600-SI: Add CS ioctl support for async DMA
+ *   2.28.0 - r600-eg: Add MEM_WRITE packet support
  */
 #define KMS_DRIVER_MAJOR   2
-#define KMS_DRIVER_MINOR   27
+#define KMS_DRIVER_MINOR   28
 #define KMS_DRIVER_PATCHLEVEL  0
 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
 int radeon_driver_unload_kms(struct drm_device *dev);
-- 
1.7.11.7



[PATCH] drm/radeon: avoid deadlock in pm path when waiting for fence

2012-12-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

radeon_fence_wait_empty_locked should not trigger a GPU reset, as
none of its callers would benefit from one, and it can actually lead
to a kernel deadlock when the reset is triggered from the pm code path.
Instead, force ring completion where it makes sense and return early
in the other cases.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  2 +-
 drivers/gpu/drm/radeon/radeon_device.c | 13 +++--
 drivers/gpu/drm/radeon/radeon_fence.c  | 30 ++
 drivers/gpu/drm/radeon/radeon_pm.c | 15 ---
 4 files changed, 38 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 9c7625c..071b2d7 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -231,7 +231,7 @@ void radeon_fence_process(struct radeon_device *rdev, int 
ring);
 bool radeon_fence_signaled(struct radeon_fence *fence);
 int radeon_fence_wait(struct radeon_fence *fence, bool interruptible);
 int radeon_fence_wait_next_locked(struct radeon_device *rdev, int ring);
-void radeon_fence_wait_empty_locked(struct radeon_device *rdev, int ring);
+int radeon_fence_wait_empty_locked(struct radeon_device *rdev, int ring);
 int radeon_fence_wait_any(struct radeon_device *rdev,
  struct radeon_fence **fences,
  bool intr);
diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index 774fae7..53a9223 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1163,6 +1163,7 @@ int radeon_suspend_kms(struct drm_device *dev, 
pm_message_t state)
struct drm_crtc *crtc;
struct drm_connector *connector;
int i, r;
+   bool force_completion = false;

if (dev == NULL || dev->dev_private == NULL) {
return -ENODEV;
@@ -1205,8 +1206,16 @@ int radeon_suspend_kms(struct drm_device *dev, 
pm_message_t state)

mutex_lock(&rdev->ring_lock);
/* wait for gpu to finish processing current batch */
-   for (i = 0; i < RADEON_NUM_RINGS; i++)
-   radeon_fence_wait_empty_locked(rdev, i);
+   for (i = 0; i < RADEON_NUM_RINGS; i++) {
+   r = radeon_fence_wait_empty_locked(rdev, i);
+   if (r) {
+   /* delay GPU reset to resume */
+   force_completion = true;
+   }
+   }
+   if (force_completion) {
+   radeon_fence_driver_force_completion(rdev);
+   }
mutex_unlock(&rdev->ring_lock);

radeon_save_bios_scratch_regs(rdev);
diff --git a/drivers/gpu/drm/radeon/radeon_fence.c 
b/drivers/gpu/drm/radeon/radeon_fence.c
index bf7b20e..28c09b6 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -609,26 +609,20 @@ int radeon_fence_wait_next_locked(struct radeon_device 
*rdev, int ring)
  * Returns 0 if the fences have passed, error for all other cases.
  * Caller must hold ring lock.
  */
-void radeon_fence_wait_empty_locked(struct radeon_device *rdev, int ring)
+int radeon_fence_wait_empty_locked(struct radeon_device *rdev, int ring)
 {
uint64_t seq = rdev->fence_drv[ring].sync_seq[ring];
+   int r;

-   while(1) {
-   int r;
-   r = radeon_fence_wait_seq(rdev, seq, ring, false, false);
+   r = radeon_fence_wait_seq(rdev, seq, ring, false, false);
+   if (r) {
if (r == -EDEADLK) {
-   mutex_unlock(&rdev->ring_lock);
-   r = radeon_gpu_reset(rdev);
-   mutex_lock(&rdev->ring_lock);
-   if (!r)
-   continue;
-   }
-   if (r) {
-   dev_err(rdev->dev, "error waiting for ring to become"
-   " idle (%d)\n", r);
+   return -EDEADLK;
}
-   return;
+   dev_err(rdev->dev, "error waiting for ring[%d] to become idle 
(%d)\n",
+   ring, r);
}
+   return 0;
 }

 /**
@@ -854,13 +848,17 @@ int radeon_fence_driver_init(struct radeon_device *rdev)
  */
 void radeon_fence_driver_fini(struct radeon_device *rdev)
 {
-   int ring;
+   int ring, r;

mutex_lock(&rdev->ring_lock);
for (ring = 0; ring < RADEON_NUM_RINGS; ring++) {
if (!rdev->fence_drv[ring].initialized)
continue;
-   radeon_fence_wait_empty_locked(rdev, ring);
+   r = radeon_fence_wait_empty_locked(rdev, ring);
+   if (r) {
+   /* no need to trigger GPU reset as we are unloading */
+   radeon_fence_driver_force_completion(rdev);
+   }
wake_up_all(&rdev->fence_queue);
radeon_scratch_free(

[PATCH] drm/radeon: don't leave fence blocked process on failed GPU reset

2012-12-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

Force all fences to signal if the GPU reset failed so that no process
gets stuck waiting on a fence.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  1 +
 drivers/gpu/drm/radeon/radeon_device.c |  1 +
 drivers/gpu/drm/radeon/radeon_fence.c  | 19 +++
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d68346..9c7625c 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -225,6 +225,7 @@ struct radeon_fence {
 int radeon_fence_driver_start_ring(struct radeon_device *rdev, int ring);
 int radeon_fence_driver_init(struct radeon_device *rdev);
 void radeon_fence_driver_fini(struct radeon_device *rdev);
+void radeon_fence_driver_force_completion(struct radeon_device *rdev);
 int radeon_fence_emit(struct radeon_device *rdev, struct radeon_fence **fence, 
int ring);
 void radeon_fence_process(struct radeon_device *rdev, int ring);
 bool radeon_fence_signaled(struct radeon_fence *fence);
diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index e2f5f88..774fae7 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1357,6 +1357,7 @@ retry:
}
}
} else {
+   radeon_fence_driver_force_completion(rdev);
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
kfree(ring_data[i]);
}
diff --git a/drivers/gpu/drm/radeon/radeon_fence.c 
b/drivers/gpu/drm/radeon/radeon_fence.c
index 22bd6c2..bf7b20e 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -868,6 +868,25 @@ void radeon_fence_driver_fini(struct radeon_device *rdev)
mutex_unlock(&rdev->ring_lock);
 }

+/**
+ * radeon_fence_driver_force_completion - force all fence waiter to complete
+ *
+ * @rdev: radeon device pointer
+ *
+ * In case of GPU reset failure make sure no process keep waiting on fence
+ * that will never complete.
+ */
+void radeon_fence_driver_force_completion(struct radeon_device *rdev)
+{
+   int ring;
+
+   for (ring = 0; ring < RADEON_NUM_RINGS; ring++) {
+   if (!rdev->fence_drv[ring].initialized)
+   continue;
+   radeon_fence_write(rdev, rdev->fence_drv[ring].sync_seq[ring], 
ring);
+   }
+}
+

 /*
  * Fence debugfs
-- 
1.7.11.7



[PATCH] drm/radeon: restore modeset late in GPU reset path

2012-12-14 Thread j.gli...@gmail.com
From: Jerome Glisse 

The modeset path sometimes conflicts with the memory management,
leading to a kernel deadlock. This moves the modesetting reset to
after the GPU acceleration reset.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_device.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index e2f5f88..ffd5534 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1337,7 +1337,6 @@ retry:
}

radeon_restore_bios_scratch_regs(rdev);
-   drm_helper_resume_force_mode(rdev->ddev);

if (!r) {
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
@@ -1362,6 +1361,8 @@ retry:
}
}

+   drm_helper_resume_force_mode(rdev->ddev);
+
ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
if (r) {
/* bad news, how to tell it to userspace ? */
-- 
1.7.11.7



[PATCH] drm/radeon: resume fence driver to last sync sequence on lockup

2012-12-14 Thread j.gli...@gmail.com
From: Jerome Glisse 

After a lockup we need to resume the fence driver to the last sync
sequence, not the last received sequence, so that all threads waiting
on the command stream that locked up can resume. Otherwise the GPU
reset will be ineffective in most cases.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_fence.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_fence.c 
b/drivers/gpu/drm/radeon/radeon_fence.c
index 22bd6c2..38233e7 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -787,7 +787,7 @@ int radeon_fence_driver_start_ring(struct radeon_device 
*rdev, int ring)
}
rdev->fence_drv[ring].cpu_addr = &rdev->wb.wb[index/4];
rdev->fence_drv[ring].gpu_addr = rdev->wb.gpu_addr + index;
-   radeon_fence_write(rdev, 
atomic64_read(&rdev->fence_drv[ring].last_seq), ring);
+   radeon_fence_write(rdev, rdev->fence_drv[ring].sync_seq[ring], ring);
rdev->fence_drv[ring].initialized = true;
dev_info(rdev->dev, "fence driver on ring %d use gpu addr 0x%016llx and 
cpu addr 0x%p\n",
 ring, rdev->fence_drv[ring].gpu_addr, 
rdev->fence_drv[ring].cpu_addr);
-- 
1.7.11.7



[PATCH] drm/radeon: fix htile buffer size computation for command stream checker

2012-12-13 Thread j.gli...@gmail.com
From: Jerome Glisse 

Fix the size computation of the htile buffer.
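
For illustration, a worked example of the new computation (a sketch only,
assuming nbx/nby start out as the surface width/height in pixels and
htile_offset is 0; a hypothetical 1920x1080 depth buffer on a 4-pipe asic
with 8x8 htiles):

    nbx  = round_up(1920, 64 * 8);           /* -> 2048 */
    nby  = round_up(1080, 32 * 8);           /* -> 1280 */
    nbx >>= 3; nby >>= 3;                    /* -> 256 x 160 htiles */
    size = roundup(256 * 160 * 4, 4 * 2048); /* -> 163840 bytes, already aligned */

so the validated htile bo must be at least 163840 bytes in this hypothetical case.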

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen_cs.c | 17 +--
 drivers/gpu/drm/radeon/r600_cs.c  | 92 ---
 drivers/gpu/drm/radeon/radeon_drv.c   |  3 +-
 3 files changed, 35 insertions(+), 77 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c 
b/drivers/gpu/drm/radeon/evergreen_cs.c
index 62c2271..fc7e613 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -507,20 +507,28 @@ static int evergreen_cs_track_validate_htile(struct 
radeon_cs_parser *p,
/* height is npipes htiles aligned == npipes * 8 pixel aligned 
*/
nby = round_up(nby, track->npipes * 8);
} else {
+   /* always assume 8x8 htile */
+   /* align is htile align * 8, htile align vary according to
+* number of pipe and tile width and nby
+*/
switch (track->npipes) {
case 8:
+   /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 8*/
nbx = round_up(nbx, 64 * 8);
nby = round_up(nby, 64 * 8);
break;
case 4:
+   /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 8*/
nbx = round_up(nbx, 64 * 8);
nby = round_up(nby, 32 * 8);
break;
case 2:
+   /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 8*/
nbx = round_up(nbx, 32 * 8);
nby = round_up(nby, 32 * 8);
break;
case 1:
+   /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 8*/
nbx = round_up(nbx, 32 * 8);
nby = round_up(nby, 16 * 8);
break;
@@ -531,9 +539,10 @@ static int evergreen_cs_track_validate_htile(struct 
radeon_cs_parser *p,
}
}
/* compute number of htile */
-   nbx = nbx / 8;
-   nby = nby / 8;
-   size = nbx * nby * 4;
+   nbx = nbx >> 3;
+   nby = nby >> 3;
+   /* size must be aligned on npipes * 2K boundary */
+   size = roundup(nbx * nby * 4, track->npipes * (2 << 10));
size += track->htile_offset;

if (size > radeon_bo_size(track->htile_bo)) {
@@ -1790,6 +1799,8 @@ static int evergreen_cs_check_reg(struct radeon_cs_parser 
*p, u32 reg, u32 idx)
case DB_HTILE_SURFACE:
/* 8x8 only */
track->htile_surface = radeon_get_ib_value(p, idx);
+   /* force 8x8 htile width and height */
+   ib[idx] |= 3;
track->db_dirty = true;
break;
case CB_IMMED0_BASE:
diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
index 5d6e7f9..0b4d833 100644
--- a/drivers/gpu/drm/radeon/r600_cs.c
+++ b/drivers/gpu/drm/radeon/r600_cs.c
@@ -657,87 +657,30 @@ static int r600_cs_track_validate_db(struct 
radeon_cs_parser *p)
/* nby is npipes htiles aligned == npipes * 8 pixel 
aligned */
nby = round_up(nby, track->npipes * 8);
} else {
-   /* htile widht & nby (8 or 4) make 2 bits number */
-   tmp = track->htile_surface & 3;
+   /* always assume 8x8 htile */
/* align is htile align * 8, htile align vary according 
to
 * number of pipe and tile width and nby
 */
switch (track->npipes) {
case 8:
-   switch (tmp) {
-   case 3: /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 8*/
-   nbx = round_up(nbx, 64 * 8);
-   nby = round_up(nby, 64 * 8);
-   break;
-   case 2: /* HTILE_WIDTH = 4 & HTILE_HEIGHT = 8*/
-   case 1: /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 4*/
-   nbx = round_up(nbx, 64 * 8);
-   nby = round_up(nby, 32 * 8);
-   break;
-   case 0: /* HTILE_WIDTH = 4 & HTILE_HEIGHT = 4*/
-   nbx = round_up(nbx, 32 * 8);
-   nby = round_up(nby, 32 * 8);
-   break;
-   default:
-   return -EINVAL;
-   }
+   /* HTILE_WIDTH = 8 & HTILE_HEIGHT = 8*/
+   nbx = round_up(nbx, 64 * 8);
+   nby = round_up(nby, 64 * 8);
  

[PATCH] drm/radeon: fix fence driver for dma ring when wb is disabled

2012-12-12 Thread j.gli...@gmail.com
From: Jerome Glisse 

The DMA ring can't write to registers and thus has to write its fence
value to memory. This ensures that the fence driver doesn't try to use
a scratch register for the DMA ring.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/r600.c | 3 ++-
 drivers/gpu/drm/radeon/radeon_fence.c | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index a76eca1..2aaf147 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -2533,11 +2533,12 @@ void r600_dma_fence_ring_emit(struct radeon_device 
*rdev,
 {
struct radeon_ring *ring = &rdev->ring[fence->ring];
u64 addr = rdev->fence_drv[fence->ring].gpu_addr;
+
/* write the fence */
radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_FENCE, 0, 0, 0));
radeon_ring_write(ring, addr & 0xfffc);
radeon_ring_write(ring, (upper_32_bits(addr) & 0xff));
-   radeon_ring_write(ring, fence->seq);
+   radeon_ring_write(ring, lower_32_bits(fence->seq));
/* generate an interrupt */
radeon_ring_write(ring, DMA_PACKET(DMA_PACKET_TRAP, 0, 0, 0));
 }
diff --git a/drivers/gpu/drm/radeon/radeon_fence.c 
b/drivers/gpu/drm/radeon/radeon_fence.c
index 22bd6c2..410a975 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -772,7 +772,7 @@ int radeon_fence_driver_start_ring(struct radeon_device 
*rdev, int ring)
int r;

radeon_scratch_free(rdev, rdev->fence_drv[ring].scratch_reg);
-   if (rdev->wb.use_event) {
+   if (rdev->wb.use_event || !radeon_ring_supports_scratch_reg(rdev, 
&rdev->ring[ring])) {
rdev->fence_drv[ring].scratch_reg = 0;
index = R600_WB_EVENT_OFFSET + ring * 4;
} else {
-- 
1.8.0



[PATCH] drm/radeon: fix amd fusion gpu setup aka sumo v2

2012-12-11 Thread j.gli...@gmail.com
From: Jerome Glisse 

Set the proper number of tile pipes; it should be a multiple of the
pipe count, depending on the number of SE engines.

Fix:
https://bugs.freedesktop.org/show_bug.cgi?id=56405
https://bugs.freedesktop.org/show_bug.cgi?id=56720

v2: Don't change sumo2

Signed-off-by: Jerome Glisse 
Cc: stable at vger.kernel.org
---
 drivers/gpu/drm/radeon/evergreen.c  | 8 
 drivers/gpu/drm/radeon/evergreend.h | 2 ++
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index 14313ad..b957de1 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1819,7 +1819,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
case CHIP_SUMO:
rdev->config.evergreen.num_ses = 1;
rdev->config.evergreen.max_pipes = 4;
-   rdev->config.evergreen.max_tile_pipes = 2;
+   rdev->config.evergreen.max_tile_pipes = 4;
if (rdev->pdev->device == 0x9648)
rdev->config.evergreen.max_simds = 3;
else if ((rdev->pdev->device == 0x9647) ||
@@ -1842,7 +1842,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
rdev->config.evergreen.sc_prim_fifo_size = 0x40;
rdev->config.evergreen.sc_hiz_tile_fifo_size = 0x30;
rdev->config.evergreen.sc_earlyz_tile_fifo_size = 0x130;
-   gb_addr_config = REDWOOD_GB_ADDR_CONFIG_GOLDEN;
+   gb_addr_config = SUMO_GB_ADDR_CONFIG_GOLDEN;
break;
case CHIP_SUMO2:
rdev->config.evergreen.num_ses = 1;
@@ -1864,7 +1864,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
rdev->config.evergreen.sc_prim_fifo_size = 0x40;
rdev->config.evergreen.sc_hiz_tile_fifo_size = 0x30;
rdev->config.evergreen.sc_earlyz_tile_fifo_size = 0x130;
-   gb_addr_config = REDWOOD_GB_ADDR_CONFIG_GOLDEN;
+   gb_addr_config = SUMO2_GB_ADDR_CONFIG_GOLDEN;
break;
case CHIP_BARTS:
rdev->config.evergreen.num_ses = 2;
@@ -1912,7 +1912,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
break;
case CHIP_CAICOS:
rdev->config.evergreen.num_ses = 1;
-   rdev->config.evergreen.max_pipes = 4;
+   rdev->config.evergreen.max_pipes = 2;
rdev->config.evergreen.max_tile_pipes = 2;
rdev->config.evergreen.max_simds = 2;
rdev->config.evergreen.max_backends = 1 * 
rdev->config.evergreen.num_ses;
diff --git a/drivers/gpu/drm/radeon/evergreend.h 
b/drivers/gpu/drm/radeon/evergreend.h
index df542f1..52c89c9 100644
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -45,6 +45,8 @@
 #define TURKS_GB_ADDR_CONFIG_GOLDEN  0x02010002
 #define CEDAR_GB_ADDR_CONFIG_GOLDEN  0x02010001
 #define CAICOS_GB_ADDR_CONFIG_GOLDEN 0x02010001
+#define SUMO_GB_ADDR_CONFIG_GOLDEN   0x02010002
+#define SUMO2_GB_ADDR_CONFIG_GOLDEN  0x02010002

 /* Registers */

-- 
1.7.11.7



[PATCH 2/2] drm/radeon: buffer memory placement work thread WIP

2012-11-29 Thread j.gli...@gmail.com
From: Jerome Glisse 

Use a delayed work thread to move buffers out of VRAM if they haven't
been used over some period of time. This makes room for buffers that
are actively used.
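
A minimal sketch of the intended flow, for illustration only (the handler
name, lists and fields below come from this WIP diff, but the body is
hypothetical and simplified; the real heuristic lives in radeon_object.c
and is truncated in this archive):

static void radeon_placement_work_handler(struct work_struct *work)
{
    struct radeon_device *rdev =
        container_of(work, struct radeon_device, placement_work.work);
    struct radeon_bo *bo, *tmp;

    mutex_lock(&rdev->placement_mutex);
    /* hypothetical: walk VRAM buffers and demote the ones that were not
     * used within the last RADEON_PLACEMENT_WORK_MS milliseconds
     */
    list_for_each_entry_safe(bo, tmp, &rdev->wvram_out_list, plist) {
        if (time_after(jiffies, bo->last_use_jiffies +
                       msecs_to_jiffies(RADEON_PLACEMENT_WORK_MS))) {
            /* queue the bo for eviction to GTT (not shown) */
        }
    }
    mutex_unlock(&rdev->placement_mutex);

    /* re-arm so placement is re-evaluated periodically */
    schedule_delayed_work(&rdev->placement_work,
                          msecs_to_jiffies(RADEON_PLACEMENT_WORK_MS));
}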

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  13 ++
 drivers/gpu/drm/radeon/radeon_cs.c |   2 +-
 drivers/gpu/drm/radeon/radeon_device.c |   8 ++
 drivers/gpu/drm/radeon/radeon_object.c | 241 -
 drivers/gpu/drm/radeon/radeon_object.h |   3 +-
 5 files changed, 262 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 0a2664c..a2e92da 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -102,6 +102,8 @@ extern int radeon_lockup_timeout;
  */
 #define RADEON_MAX_USEC_TIMEOUT10  /* 100 ms */
 #define RADEON_FENCE_JIFFIES_TIMEOUT   (HZ / 2)
+#define RADEON_PLACEMENT_WORK_MS   500
+#define RADEON_PLACEMENT_MAX_EVICTION  8
 /* RADEON_IB_POOL_SIZE must be a power of 2 */
 #define RADEON_IB_POOL_SIZE16
 #define RADEON_DEBUGFS_MAX_COMPONENTS  32
@@ -311,6 +313,10 @@ struct radeon_bo_va {
 struct radeon_bo {
/* Protected by gem.mutex */
struct list_headlist;
+   /* Protected by rdev->placement_mutex */
+   struct list_headplist;
+   struct list_head*head;
+   unsigned long   last_use_jiffies;
/* Protected by tbo.reserved */
u32 placements[3];
u32 busy_placements[3];
@@ -1523,6 +1529,13 @@ struct radeon_device {
struct drm_device   *ddev;
struct pci_dev  *pdev;
struct rw_semaphore exclusive_lock;
+   struct mutexplacement_mutex;
+   struct list_headwvram_in_list;
+   struct list_headrvram_in_list;
+   struct list_headwvram_out_list;
+   struct list_headrvram_out_list;
+   struct delayed_work placement_work;
+   unsigned long   vram_in_size;
/* ASIC */
union radeon_asic_configconfig;
enum radeon_family  family;
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 41672cc..e9e90bc 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -88,7 +88,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
} else
p->relocs[i].handle = 0;
}
-   return radeon_bo_list_validate(&p->validated);
+   return radeon_bo_list_validate(p->rdev, &p->validated);
 }

 static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 
priority)
diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index e2f5f88..0c4c874 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1001,6 +1001,14 @@ int radeon_device_init(struct radeon_device *rdev,
init_rwsem(&rdev->pm.mclk_lock);
init_rwsem(&rdev->exclusive_lock);
init_waitqueue_head(&rdev->irq.vblank_queue);
+
+   mutex_init(&rdev->placement_mutex);
+   INIT_LIST_HEAD(&rdev->wvram_in_list);
+   INIT_LIST_HEAD(&rdev->rvram_in_list);
+   INIT_LIST_HEAD(&rdev->wvram_out_list);
+   INIT_LIST_HEAD(&rdev->rvram_out_list);
+   INIT_DELAYED_WORK(&rdev->placement_work, radeon_placement_work_handler);
+
r = radeon_gem_init(rdev);
if (r)
return r;
diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index e25ae20..f2bcc5f 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -64,6 +64,10 @@ static void radeon_ttm_bo_destroy(struct ttm_buffer_object 
*tbo)
mutex_lock(&bo->rdev->gem.mutex);
list_del_init(&bo->list);
mutex_unlock(&bo->rdev->gem.mutex);
+   mutex_lock(&bo->rdev->placement_mutex);
+   list_del_init(&bo->plist);
+   bo->head = NULL;
+   mutex_unlock(&bo->rdev->placement_mutex);
radeon_bo_clear_surface_reg(bo);
radeon_bo_clear_va(bo);
drm_gem_object_release(&bo->gem_base);
@@ -153,6 +157,8 @@ int radeon_bo_create(struct radeon_device *rdev,
bo->surface_reg = -1;
INIT_LIST_HEAD(&bo->list);
INIT_LIST_HEAD(&bo->va);
+   INIT_LIST_HEAD(&bo->plist);
+   bo->head = NULL;
radeon_ttm_placement_from_domain(bo, domain);
/* Kernel allocation are uninterruptible */
down_read(&rdev->pm.mclk_lock);
@@ -263,8 +269,14 @@ int radeon_bo_pin_restricted(struct radeon_bo *bo, u32 
domain, u64 max_offset,
if (gpu_addr != NULL)
*g

[PATCH 1/2] drm/radeon: do not move bo to different placement at each cs

2012-11-29 Thread j.gli...@gmail.com
From: Jerome Glisse 

The bo creation placement is where the bo will stay. Instead of trying
to move bos at each command stream, leave this work to another worker
thread that will use more advanced heuristics.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  1 +
 drivers/gpu/drm/radeon/radeon_object.c | 17 -
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 8c42d54..0a2664c 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -313,6 +313,7 @@ struct radeon_bo {
struct list_headlist;
/* Protected by tbo.reserved */
u32 placements[3];
+   u32 busy_placements[3];
struct ttm_placementplacement;
struct ttm_buffer_objecttbo;
struct ttm_bo_kmap_obj  kmap;
diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index 3f9f3bb..e25ae20 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -84,7 +84,6 @@ void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, 
u32 domain)
rbo->placement.fpfn = 0;
rbo->placement.lpfn = 0;
rbo->placement.placement = rbo->placements;
-   rbo->placement.busy_placement = rbo->placements;
if (domain & RADEON_GEM_DOMAIN_VRAM)
rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_VRAM;
@@ -105,6 +104,14 @@ void radeon_ttm_placement_from_domain(struct radeon_bo 
*rbo, u32 domain)
if (!c)
rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
rbo->placement.num_placement = c;
+
+   c = 0;
+   rbo->placement.busy_placement = rbo->busy_placements;
+   if (rbo->rdev->flags & RADEON_IS_AGP) {
+   rbo->busy_placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_TT;
+   } else {
+   rbo->busy_placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_TT;
+   }
rbo->placement.num_busy_placement = c;
 }

@@ -360,17 +367,9 @@ int radeon_bo_list_validate(struct list_head *head)
list_for_each_entry(lobj, head, tv.head) {
bo = lobj->bo;
if (!bo->pin_count) {
-   domain = lobj->wdomain ? lobj->wdomain : lobj->rdomain;
-   
-   retry:
-   radeon_ttm_placement_from_domain(bo, domain);
r = ttm_bo_validate(&bo->tbo, &bo->placement,
true, false, false);
if (unlikely(r)) {
-   if (r != -ERESTARTSYS && domain == 
RADEON_GEM_DOMAIN_VRAM) {
-   domain |= RADEON_GEM_DOMAIN_GTT;
-   goto retry;
-   }
return r;
}
}
-- 
1.7.11.7



[RFC] improve memory placement for radeon

2012-11-29 Thread j.gli...@gmail.com
As a follow-up here are 2 patches. The first one just stops trying to
move objects at each cs ioctl; I believe it could be included in 3.7 as
it improves performance (especially with vram changes from userspace).

The second one implements a vram eviction policy. It's a simple one:
buffers used for write operations are more important than buffers used
for read operations. Buffers get evicted from vram only if they haven't
been used in the last 50ms (so in the last few frames) and only if
there are buffers that have been recently used and that could be moved
into vram. This is mostly where I believe the discussion should be:
what kind of heuristic would work better than that.
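
A minimal sketch of that heuristic (the helper below is hypothetical; only
the last_use_jiffies field actually appears in the WIP patch in this
series):

/* hypothetical: decide whether a bo may be evicted from VRAM; among the
 * evictable bos, read-only buffers would be picked before buffers the
 * GPU writes to
 */
static bool radeon_bo_vram_evictable(struct radeon_bo *bo, bool vram_has_waiters)
{
    /* only evict when a recently used bo is waiting to move into VRAM */
    if (!vram_has_waiters)
        return false;
    /* never evict anything used in the last 50ms (the last few frames) */
    return time_after(jiffies, bo->last_use_jiffies + msecs_to_jiffies(50));
}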

Without the first patch and with mesa master, xonotic on high is at
17fps; with the first patch it goes to 40fps, and with the second patch
to 48fps.

Cheers,
Jerome



[PATCH] drm/radeon: fix rare segfault after gpu lockup on r7xx

2012-11-29 Thread j.gli...@gmail.com
From: Jerome Glisse 

If the GPU reset fails the gart table pointer might be NULL; avoid a
kernel segfault in this rare event.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/r600.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index cda280d..0e3a68a 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -843,7 +843,9 @@ void r600_pcie_gart_tlb_flush(struct radeon_device *rdev)
 * method for them.
 */
WREG32(HDP_DEBUG1, 0);
-   tmp = readl((void __iomem *)ptr);
+   if (ptr) {
+   tmp = readl((void __iomem *)ptr);
+   }
} else
WREG32(R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL, 0x1);

-- 
1.7.11.7



[PATCH] drm/radeon: use cached memory when evicting for vram on non agp

2012-11-28 Thread j.gli...@gmail.com
From: Jerome Glisse 

Force the use of cached memory when evicting from vram on non-AGP
hardware, and force write-combined memory on AGP hardware. This ensures
a minimum of cache type changes when allocating memory and improves
memory eviction, especially on PCI/PCIE hardware.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_object.c | 18 ++
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index b91118c..3f9f3bb 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -88,10 +88,20 @@ void radeon_ttm_placement_from_domain(struct radeon_bo 
*rbo, u32 domain)
if (domain & RADEON_GEM_DOMAIN_VRAM)
rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_VRAM;
-   if (domain & RADEON_GEM_DOMAIN_GTT)
-   rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
-   if (domain & RADEON_GEM_DOMAIN_CPU)
-   rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+   if (domain & RADEON_GEM_DOMAIN_GTT) {
+   if (rbo->rdev->flags & RADEON_IS_AGP) {
+   rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_TT;
+   } else {
+   rbo->placements[c++] = TTM_PL_FLAG_CACHED | 
TTM_PL_FLAG_TT;
+   }
+   }
+   if (domain & RADEON_GEM_DOMAIN_CPU) {
+   if (rbo->rdev->flags & RADEON_IS_AGP) {
+   rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_TT;
+   } else {
+   rbo->placements[c++] = TTM_PL_FLAG_CACHED | 
TTM_PL_FLAG_TT;
+   }
+   }
if (!c)
rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
rbo->placement.num_placement = c;
-- 
1.7.11.7



[PATCH] drm/ttm: add minimum residency constraint for bo eviction

2012-11-28 Thread j.gli...@gmail.com
From: Jerome Glisse 

This patch adds a minimum residency time, configurable for each memory
pool (VRAM, GTT, ...). The intention is to avoid so much memory
eviction from VRAM that the GPU ends up spending most of its time
moving things in and out.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 3 +++
 drivers/gpu/drm/ttm/ttm_bo.c| 7 +++
 include/drm/ttm/ttm_bo_api.h| 1 +
 include/drm/ttm/ttm_bo_driver.h | 1 +
 4 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 5ebe1b3..88722c4 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -129,11 +129,13 @@ static int radeon_init_mem_type(struct ttm_bo_device 
*bdev, uint32_t type,
switch (type) {
case TTM_PL_SYSTEM:
/* System memory */
+   man->minimum_residency_time_ms = 0;
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_TT:
+   man->minimum_residency_time_ms = 0;
man->func = &ttm_bo_manager_func;
man->gpu_offset = rdev->mc.gtt_start;
man->available_caching = TTM_PL_MASK_CACHING;
@@ -156,6 +158,7 @@ static int radeon_init_mem_type(struct ttm_bo_device *bdev, 
uint32_t type,
break;
case TTM_PL_VRAM:
/* "On-card" video ram */
+   man->minimum_residency_time_ms = 500;
man->func = &ttm_bo_manager_func;
man->gpu_offset = rdev->mc.vram_start;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 39dcc58..40476121 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -452,6 +452,7 @@ moved:
bo->cur_placement = bo->mem.placement;
} else
bo->offset = 0;
+   bo->jiffies = jiffies;

return 0;

@@ -810,6 +811,12 @@ retry:
}

bo = list_first_entry(&man->lru, struct ttm_buffer_object, lru);
+
+   if (time_after(jiffies, bo->jiffies) && jiffies_to_msecs(jiffies - 
bo->jiffies) >= man->minimum_residency_time_ms) {
+   spin_unlock(&glob->lru_lock);
+   return -EBUSY;
+   }
+
kref_get(&bo->list_kref);

if (!list_empty(&bo->ddestroy)) {
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index e8028ad..9e12313 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -275,6 +275,7 @@ struct ttm_buffer_object {

unsigned long offset;
uint32_t cur_placement;
+   unsigned long jiffies;

struct sg_table *sg;
 };
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index d803b92..7f60a18e6 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -280,6 +280,7 @@ struct ttm_mem_type_manager {
struct mutex io_reserve_mutex;
bool use_io_reserve_lru;
bool io_reserve_fastpath;
+   unsigned long minimum_residency_time_ms;

/*
 * Protected by @io_reserve_mutex:
-- 
1.7.11.7



[RFC] drm/ttm: add minimum residency constraint for bo eviction

2012-11-28 Thread j.gli...@gmail.com
So I spent the day looking at ttm and eviction. The first patch I sent
earlier is, I believe, something that should be merged. This patch however
is more about discussing whether other people are interested in a similar
mechanism being shared among drivers through ttm. I could otherwise just
move its logic into the radeon driver.

The idea of this patch is that we don't want to constantly move objects
in and out of certain memory pools, mostly VRAM. So it adds a minimum
residency time, and no object that has been in a given pool for less
than this residency time can be moved out. It closely addresses the
regression we are having with radeon since the gallium driver change and
probably improves some other workloads.

Statistics I gathered on xonotic/realquake showed that we can move as much
as 1GB in each direction (VRAM to system and system to VRAM) over a second,
so we are obviously not saturating the PCIE bandwidth. Profiling shows that
80-90% of the cost of this eviction is in memory allocation/deallocation for
the system memory (a lot of irq locking, and mostly the kernel spending time
allocating pages; think 256 000 or more pages per second to allocate/deallocate).

I used this WIP patch to gather statistics and play with various combinations:
http://people.freedesktop.org/~glisse/0001-TTM-EVICT-WIP.patch

Some numbers with xonotic:
17.369fps stock 3.7 kernel
27.883fps 3.7 kernel + do-not-preserve-caching patch (~ +60%)
49.292fps 3.7 kernel + WIP with 500ms residency for all pools and no bo wait
  for eviction
49.258fps 3.7 kernel + WIP with 500ms residency for all pools and bo wait
48.213fps 3.7 kernel always allowing GTT placement (basically reverting the
  gallium patch effect)

Another design I am thinking of is changing the way radeon handles its
memory and stopping the attempts to revalidate objects to a different
memory pool at each cs. Instead I think we should keep a vram LRU list,
probably per process, and move bos out of vram according to this LRU
following some heuristic, so radeon would only move bos into vram when
there is room.
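
A rough sketch of what such a per-process vram LRU could look like (entirely
hypothetical; neither the structure nor the helper exists in any patch in
this series):

/* hypothetical per-file-private VRAM bookkeeping */
struct radeon_vram_lru {
    struct mutex     lock;
    struct list_head lru;   /* least recently used bo at the head */
};

/* mark a bo as used: move it to the tail of its process' VRAM LRU */
static void radeon_vram_lru_touch(struct radeon_vram_lru *vlru,
                                  struct radeon_bo *bo)
{
    mutex_lock(&vlru->lock);
    list_move_tail(&bo->plist, &vlru->lru);
    mutex_unlock(&vlru->lock);
}

Eviction would then walk the list from the head until enough vram has been
freed for the incoming bo.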

Another improvement I am thinking of is to reuse the GTT memory of objects
that are moved in for the objects that are evicted, as the statistics I
gathered showed that it's often a close amount that moves in and out. But
this would require true dma, as it would mean scheduling in/out moves at
page granularity or in groups of pages (write 4 pages from vram to scratch,
4 pages into system memory, write 4 pages of the system memory bo into those
4 vram pages, write 4 pages of vram to the just-moved 4 pages of system
memory, ...).

Cheers,
Jerome



[PATCH] drm/ttm: do not try to preserve caching state

2012-11-28 Thread j.gli...@gmail.com
From: Jerome Glisse 

It makes no sense to preserve the caching state, especially when
moving from vram to system memory. It burdens the page allocator with
matching the vram caching (often WC), which just burns CPU cycles
for no good reason.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/ttm/ttm_bo.c | 15 +++
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index bf6e4b5..39dcc58 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -896,19 +896,12 @@ static int ttm_bo_mem_force_space(struct 
ttm_buffer_object *bo,
 }

 static uint32_t ttm_bo_select_caching(struct ttm_mem_type_manager *man,
- uint32_t cur_placement,
  uint32_t proposed_placement)
 {
uint32_t caching = proposed_placement & TTM_PL_MASK_CACHING;
uint32_t result = proposed_placement & ~TTM_PL_MASK_CACHING;

-   /**
-* Keep current caching if possible.
-*/
-
-   if ((cur_placement & caching) != 0)
-   result |= (cur_placement & caching);
-   else if ((man->default_caching & caching) != 0)
+   if ((man->default_caching & caching) != 0)
result |= man->default_caching;
else if ((TTM_PL_FLAG_CACHED & caching) != 0)
result |= TTM_PL_FLAG_CACHED;
@@ -978,8 +971,7 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
if (!type_ok)
continue;

-   cur_flags = ttm_bo_select_caching(man, bo->mem.placement,
- cur_flags);
+   cur_flags = ttm_bo_select_caching(man, cur_flags);
/*
 * Use the access and other non-mapping-related flag bits from
 * the memory placement flags to the current flags
@@ -1023,8 +1015,7 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
&cur_flags))
continue;

-   cur_flags = ttm_bo_select_caching(man, bo->mem.placement,
- cur_flags);
+   cur_flags = ttm_bo_select_caching(man, cur_flags);
/*
 * Use the access and other non-mapping-related flag bits from
 * the memory placement flags to the current flags
-- 
1.7.11.7



[PATCH] radeon: fix pll/crtc mapping on dce2 and dce3 hardware

2012-11-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

This fixes a black screen on resume issue that some people are
experiencing. There is a bug in the atombios code regarding the
pll/crtc mapping: the atombios code reverses the logic for the
pll and crtc mapping.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_crtc.c | 54 +-
 1 file changed, 20 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c 
b/drivers/gpu/drm/radeon/atombios_crtc.c
index 3bce029..7c1f080 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -1696,42 +1696,28 @@ static int radeon_atom_pick_pll(struct drm_crtc *crtc)
return ATOM_PPLL2;
DRM_ERROR("unable to allocate a PPLL\n");
return ATOM_PPLL_INVALID;
-   } else if (ASIC_IS_AVIVO(rdev)) {
-   /* in DP mode, the DP ref clock can come from either PPLL
-* depending on the asic:
-* DCE3: PPLL1 or PPLL2
-*/
-   if 
(ENCODER_MODE_IS_DP(atombios_get_encoder_mode(radeon_crtc->encoder))) {
-   /* use the same PPLL for all DP monitors */
-   pll = radeon_get_shared_dp_ppll(crtc);
-   if (pll != ATOM_PPLL_INVALID)
-   return pll;
-   } else {
-   /* use the same PPLL for all monitors with the same 
clock */
-   pll = radeon_get_shared_nondp_ppll(crtc);
-   if (pll != ATOM_PPLL_INVALID)
-   return pll;
-   }
-   /* all other cases */
-   pll_in_use = radeon_get_pll_use_mask(crtc);
-   /* the order shouldn't matter here, but we probably
-* need this until we have atomic modeset
-*/
-   if (rdev->flags & RADEON_IS_IGP) {
-   if (!(pll_in_use & (1 << ATOM_PPLL1)))
-   return ATOM_PPLL1;
-   if (!(pll_in_use & (1 << ATOM_PPLL2)))
-   return ATOM_PPLL2;
-   } else {
-   if (!(pll_in_use & (1 << ATOM_PPLL2)))
-   return ATOM_PPLL2;
-   if (!(pll_in_use & (1 << ATOM_PPLL1)))
-   return ATOM_PPLL1;
-   }
-   DRM_ERROR("unable to allocate a PPLL\n");
-   return ATOM_PPLL_INVALID;
} else {
/* on pre-R5xx asics, the crtc to pll mapping is hardcoded */
+   /* some atombios (observed in some DCE2/DCE3) code have a bug,
+* the matching btw pll and crtc is done through
+* PCLK_CRTC[1|2]_CNTL (0x480/0x484) but atombios code use the
+* pll (1 or 2) to select which register to write. ie if using
+* pll1 it will use PCLK_CRTC1_CNTL (0x480) and if using pll2
+* it will use PCLK_CRTC2_CNTL (0x484), it then use crtc id to
+* choose which value to write. Which is reverse order from
+* register logic. So only case that works is when pllid is
+* same as crtcid or when both pll and crtc are enabled and
+* both use same clock.
+*
+* So just return crtc id as if crtc and pll were hard linked
+* together even if they aren't
+*/
+   if (radeon_crtc->crtc_id > 1) {
+   /* crtc other than crtc1 and crtc2 can only be use for
+* DP those doesn't need a valid pll to work.
+*/
+   return ATOM_PPLL_INVALID;
+   }
return radeon_crtc->crtc_id;
}
 }
-- 
1.7.11.7



[PATCH] drm/radeon: track global bo name and always return the same

2012-11-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

To avoid the kernel rejecting the cs when we return a different global
name for the same bo, keep track of the global name and always return
the same one. This seems to fix an issue with suspend/resume failing and
repeatedly printing the following message:
[drm:radeon_cs_ioctl] *ERROR* Failed to parse relocation -35!

There might still be a way for a rogue program to trigger this issue.

Signed-off-by: Jerome Glisse 
---
 radeon/radeon_bo_gem.c | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/radeon/radeon_bo_gem.c b/radeon/radeon_bo_gem.c
index 265f177..fca0aaf 100644
--- a/radeon/radeon_bo_gem.c
+++ b/radeon/radeon_bo_gem.c
@@ -47,11 +47,11 @@
 #include "radeon_bo_gem.h"
 #include 
 struct radeon_bo_gem {
-struct radeon_bo_int base;
-uint32_tname;
-int map_count;
-atomic_treloc_in_cs;
-void *priv_ptr;
+struct radeon_bo_intbase;
+uint32_tname;
+int map_count;
+atomic_treloc_in_cs;
+void*priv_ptr;
 };

 struct bo_manager_gem {
@@ -320,15 +320,21 @@ void *radeon_gem_get_reloc_in_cs(struct radeon_bo *bo)

 int radeon_gem_get_kernel_name(struct radeon_bo *bo, uint32_t *name)
 {
+struct radeon_bo_gem *bo_gem = (struct radeon_bo_gem*)bo;
 struct radeon_bo_int *boi = (struct radeon_bo_int *)bo;
 struct drm_gem_flink flink;
 int r;

+if (bo_gem->name) {
+*name = bo_gem->name;
+return 0;
+}
 flink.handle = bo->handle;
 r = drmIoctl(boi->bom->fd, DRM_IOCTL_GEM_FLINK, &flink);
 if (r) {
 return r;
 }
+bo_gem->name = flink.name;
 *name = flink.name;
 return 0;
 }
-- 
1.7.11.7



[PATCH 2/2] drm/radeon: fix deadlock when bo is associated to different handle

2012-11-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

There is a rare case, which seems to only happen across a suspend/resume
cycle, where a bo is associated with several different handles. This
leads to a deadlock in the ttm buffer reservation path. It can only
happen with flinked (globally exported) objects; userspace should not
reopen a globally exported object multiple times.

However, the kernel should handle this corner case gracefully and not
keep rejecting the userspace command stream. That is the purpose of
this patch.

Fix a suspend/resume issue where the user sees the following message:
[drm:radeon_cs_ioctl] *ERROR* Failed to parse relocation -35!

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_cs.c | 53 ++
 1 file changed, 31 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 41672cc..064e64d 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -54,39 +54,48 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser 
*p)
return -ENOMEM;
}
for (i = 0; i < p->nrelocs; i++) {
-   struct drm_radeon_cs_reloc *r;
-
+   struct drm_radeon_cs_reloc *reloc;
+
+   /* One bo could be associated with several different handle.
+* Only happen for flinked bo that are open several time.
+*
+* FIXME:
+* Maybe we should consider an alternative to idr for gem
+* object to insure a 1:1 uniq mapping btw handle and gem
+* object.
+*/
duplicate = false;
-   r = (struct drm_radeon_cs_reloc *)&chunk->kdata[i*4];
+   reloc = (struct drm_radeon_cs_reloc *)&chunk->kdata[i*4];
+   p->relocs[i].handle = 0;
+   p->relocs[i].flags = reloc->flags;
+   p->relocs[i].gobj = drm_gem_object_lookup(ddev,
+ p->filp,
+ reloc->handle);
+   if (p->relocs[i].gobj == NULL) {
+   DRM_ERROR("gem object lookup failed 0x%x\n",
+ reloc->handle);
+   return -ENOENT;
+   }
+   p->relocs[i].robj = gem_to_radeon_bo(p->relocs[i].gobj);
+   p->relocs[i].lobj.bo = p->relocs[i].robj;
+   p->relocs[i].lobj.wdomain = reloc->write_domain;
+   p->relocs[i].lobj.rdomain = reloc->read_domains;
+   p->relocs[i].lobj.tv.bo = &p->relocs[i].robj->tbo;
+
for (j = 0; j < i; j++) {
-   if (r->handle == p->relocs[j].handle) {
+   if (p->relocs[i].lobj.bo == p->relocs[j].lobj.bo) {
p->relocs_ptr[i] = &p->relocs[j];
duplicate = true;
break;
}
}
+
if (!duplicate) {
-   p->relocs[i].gobj = drm_gem_object_lookup(ddev,
- p->filp,
- r->handle);
-   if (p->relocs[i].gobj == NULL) {
-   DRM_ERROR("gem object lookup failed 0x%x\n",
- r->handle);
-   return -ENOENT;
-   }
p->relocs_ptr[i] = &p->relocs[i];
-   p->relocs[i].robj = gem_to_radeon_bo(p->relocs[i].gobj);
-   p->relocs[i].lobj.bo = p->relocs[i].robj;
-   p->relocs[i].lobj.wdomain = r->write_domain;
-   p->relocs[i].lobj.rdomain = r->read_domains;
-   p->relocs[i].lobj.tv.bo = &p->relocs[i].robj->tbo;
-   p->relocs[i].handle = r->handle;
-   p->relocs[i].flags = r->flags;
+   p->relocs[i].handle = reloc->handle;
radeon_bo_list_add_object(&p->relocs[i].lobj,
  &p->validated);
-
-   } else
-   p->relocs[i].handle = 0;
+   }
}
return radeon_bo_list_validate(&p->validated);
 }
-- 
1.7.11.7



[PATCH 1/2] radeon: fix pll/crtc mapping on dce2 and dce3 hardware v2

2012-11-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

This fixes a black screen on resume issue that some people are
experiencing. There is a bug in the atombios code regarding the
pll/crtc mapping: the atombios code reverses the logic for the
pll and crtc mapping.

v2: DCE3 and DCE2 only have 2 crtcs

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_crtc.c | 48 ++
 1 file changed, 14 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c 
b/drivers/gpu/drm/radeon/atombios_crtc.c
index 3bce029..24d932f 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -1696,42 +1696,22 @@ static int radeon_atom_pick_pll(struct drm_crtc *crtc)
return ATOM_PPLL2;
DRM_ERROR("unable to allocate a PPLL\n");
return ATOM_PPLL_INVALID;
-   } else if (ASIC_IS_AVIVO(rdev)) {
-   /* in DP mode, the DP ref clock can come from either PPLL
-* depending on the asic:
-* DCE3: PPLL1 or PPLL2
-*/
-   if 
(ENCODER_MODE_IS_DP(atombios_get_encoder_mode(radeon_crtc->encoder))) {
-   /* use the same PPLL for all DP monitors */
-   pll = radeon_get_shared_dp_ppll(crtc);
-   if (pll != ATOM_PPLL_INVALID)
-   return pll;
-   } else {
-   /* use the same PPLL for all monitors with the same 
clock */
-   pll = radeon_get_shared_nondp_ppll(crtc);
-   if (pll != ATOM_PPLL_INVALID)
-   return pll;
-   }
-   /* all other cases */
-   pll_in_use = radeon_get_pll_use_mask(crtc);
-   /* the order shouldn't matter here, but we probably
-* need this until we have atomic modeset
-*/
-   if (rdev->flags & RADEON_IS_IGP) {
-   if (!(pll_in_use & (1 << ATOM_PPLL1)))
-   return ATOM_PPLL1;
-   if (!(pll_in_use & (1 << ATOM_PPLL2)))
-   return ATOM_PPLL2;
-   } else {
-   if (!(pll_in_use & (1 << ATOM_PPLL2)))
-   return ATOM_PPLL2;
-   if (!(pll_in_use & (1 << ATOM_PPLL1)))
-   return ATOM_PPLL1;
-   }
-   DRM_ERROR("unable to allocate a PPLL\n");
-   return ATOM_PPLL_INVALID;
} else {
/* on pre-R5xx asics, the crtc to pll mapping is hardcoded */
+   /* some atombios (observed in some DCE2/DCE3) code have a bug,
+* the matching btw pll and crtc is done through
+* PCLK_CRTC[1|2]_CNTL (0x480/0x484) but atombios code use the
+* pll (1 or 2) to select which register to write. ie if using
+* pll1 it will use PCLK_CRTC1_CNTL (0x480) and if using pll2
+* it will use PCLK_CRTC2_CNTL (0x484), it then use crtc id to
+* choose which value to write. Which is reverse order from
+* register logic. So only case that works is when pllid is
+* same as crtcid or when both pll and crtc are enabled and
+* both use same clock.
+*
+* So just return crtc id as if crtc and pll were hard linked
+* together even if they aren't
+*/
return radeon_crtc->crtc_id;
}
 }
-- 
1.7.11.7



[PATCH] drm/radeon: force dma32 to fix regression rs4xx,rs6xx,rs740

2012-08-28 Thread j.gli...@gmail.com
From: Jerome Glisse 

It seems some of those IGPs dislike non-dma32 pages despite what the
documentation says. Fix a regression introduced since we allowed
non-dma32 pages. It seems to only affect some revisions of those IGP
chips; as we don't know which ones, just force dma32 for all of them.

https://bugzilla.redhat.com/show_bug.cgi?id=785375
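
For context, the need_dma32 flag set in this hunk feeds the DMA mask
selection a few lines below it, roughly as follows (paraphrased from the
surrounding radeon_device.c code, not part of this diff):

    dma_bits = rdev->need_dma32 ? 32 : 40;
    r = pci_set_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
    if (r) {
        rdev->need_dma32 = true;
        dma_bits = 32;
        printk(KERN_WARNING "radeon: No suitable DMA available.\n");
    }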

Signed-off-by: Jerome Glisse 
Cc: 
---
 drivers/gpu/drm/radeon/radeon_device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index 066c98b..8867400 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -774,7 +774,7 @@ int radeon_device_init(struct radeon_device *rdev,
if (rdev->flags & RADEON_IS_AGP)
rdev->need_dma32 = true;
if ((rdev->flags & RADEON_IS_PCI) &&
-   (rdev->family < CHIP_RS400))
+   (rdev->family <= CHIP_RS740))
rdev->need_dma32 = true;

dma_bits = rdev->need_dma32 ? 32 : 40;
-- 
1.7.11.2



[PATCH] drm/radeon: force dma32 on rs400, rs690, rs740 IGP

2012-08-28 Thread j.gli...@gmail.com
From: Jerome Glisse 

It seems some of those IGPs dislike non-dma32 pages.

https://bugzilla.redhat.com/show_bug.cgi?id=785375

Signed-off-by: Jerome Glisse 
Cc: 
---
 drivers/gpu/drm/radeon/radeon_device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index 066c98b..8867400 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -774,7 +774,7 @@ int radeon_device_init(struct radeon_device *rdev,
if (rdev->flags & RADEON_IS_AGP)
rdev->need_dma32 = true;
if ((rdev->flags & RADEON_IS_PCI) &&
-   (rdev->family < CHIP_RS400))
+   (rdev->family <= CHIP_RS740))
rdev->need_dma32 = true;

dma_bits = rdev->need_dma32 ? 32 : 40;
-- 
1.7.11.2



[PATCH] drm/radeon: avoid turning off spread spectrum for used pll

2012-08-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

If spread spectrum is enabled and in use for a given pll, we should
not turn it off, as doing so will turn off the display for any crtc
that uses the pll (this behavior was observed on chelsea eDP).

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_crtc.c |   25 +
 1 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c 
b/drivers/gpu/drm/radeon/atombios_crtc.c
index c6fcb5b..cb18813 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -444,11 +444,28 @@ union atom_enable_ss {
 static void atombios_crtc_program_ss(struct radeon_device *rdev,
 int enable,
 int pll_id,
+int crtc_id,
 struct radeon_atom_ss *ss)
 {
+   unsigned i;
int index = GetIndexIntoMasterTable(COMMAND, 
EnableSpreadSpectrumOnPPLL);
union atom_enable_ss args;

+   if (!enable) {
+   for (i = 0; i < 6; i++) {
+   if (rdev->mode_info.crtcs[i] &&
+   rdev->mode_info.crtcs[i]->enabled &&
+   i != crtc_id &&
+   pll_id == rdev->mode_info.crtcs[i]->pll_id) {
+   /* one other crtc is using this pll don't turn
+* off spread spectrum as it might turn off
+* display on active crtc
+*/
+   return;
+   }
+   }
+   }
+
memset(&args, 0, sizeof(args));

if (ASIC_IS_DCE5(rdev)) {
@@ -1028,7 +1045,7 @@ static void atombios_crtc_set_pll(struct drm_crtc *crtc, 
struct drm_display_mode
radeon_compute_pll_legacy(pll, adjusted_clock, &pll_clock, 
&fb_div, &frac_fb_div,
  &ref_div, &post_div);

-   atombios_crtc_program_ss(rdev, ATOM_DISABLE, radeon_crtc->pll_id, &ss);
+   atombios_crtc_program_ss(rdev, ATOM_DISABLE, radeon_crtc->pll_id, 
radeon_crtc->crtc_id, &ss);

atombios_crtc_program_pll(crtc, radeon_crtc->crtc_id, 
radeon_crtc->pll_id,
  encoder_mode, radeon_encoder->encoder_id, 
mode->clock,
@@ -1051,7 +1068,7 @@ static void atombios_crtc_set_pll(struct drm_crtc *crtc, 
struct drm_display_mode
ss.step = step_size;
}

-   atombios_crtc_program_ss(rdev, ATOM_ENABLE, 
radeon_crtc->pll_id, &ss);
+   atombios_crtc_program_ss(rdev, ATOM_ENABLE, 
radeon_crtc->pll_id, radeon_crtc->crtc_id, &ss);
}
 }

@@ -1572,11 +1589,11 @@ void radeon_atom_disp_eng_pll_init(struct radeon_device 
*rdev)
   
ASIC_INTERNAL_SS_ON_DCPLL,
   
rdev->clock.default_dispclk);
if (ss_enabled)
-   atombios_crtc_program_ss(rdev, ATOM_DISABLE, 
ATOM_DCPLL, &ss);
+   atombios_crtc_program_ss(rdev, ATOM_DISABLE, 
ATOM_DCPLL, -1, &ss);
/* XXX: DCE5, make sure voltage, dispclk is high enough */
atombios_crtc_set_disp_eng_pll(rdev, 
rdev->clock.default_dispclk);
if (ss_enabled)
-   atombios_crtc_program_ss(rdev, ATOM_ENABLE, ATOM_DCPLL, 
&ss);
+   atombios_crtc_program_ss(rdev, ATOM_ENABLE, ATOM_DCPLL, 
-1, &ss);
}

 }
-- 
1.7.1
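
For illustration only, the check this patch open-codes in
atombios_crtc_program_ss() can be read as the following hypothetical helper
(the helper name and the fixed walk over six crtcs are assumptions of this
sketch, not code from the patch):

    static bool radeon_pll_in_use_by_other_crtc(struct radeon_device *rdev,
                                                int pll_id, int crtc_id)
    {
            unsigned i;

            for (i = 0; i < 6; i++) {
                    struct radeon_crtc *crtc = rdev->mode_info.crtcs[i];

                    /* another enabled crtc sharing this pll means spread
                     * spectrum must not be turned off for it
                     */
                    if (crtc && crtc->enabled && i != crtc_id &&
                        crtc->pll_id == pll_id)
                            return true;
            }
            return false;
    }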



[PATCH] drm/edid: limit printk when facing bad edid

2012-08-09 Thread j.gli...@gmail.com
From: Jerome Glisse 

Limit printing of bad EDID information to once per connector.
A connector that is connected to a bad monitor/kvm will likely
stay connected to the same bad monitor/kvm, and it makes no
sense to keep printing the bad EDID message.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/drm_edid.c  | 22 ++
 drivers/gpu/drm/drm_edid_load.c |  6 --
 include/drm/drm_crtc.h  |  3 ++-
 3 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index a8743c3..7380ee3 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -158,7 +158,7 @@ MODULE_PARM_DESC(edid_fixup,
  * Sanity check the EDID block (base or extension).  Return 0 if the block
  * doesn't check out, or 1 if it's valid.
  */
-bool drm_edid_block_valid(u8 *raw_edid, int block)
+bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid)
 {
int i;
u8 csum = 0;
@@ -181,7 +181,9 @@ bool drm_edid_block_valid(u8 *raw_edid, int block)
for (i = 0; i < EDID_LENGTH; i++)
csum += raw_edid[i];
if (csum) {
-   DRM_ERROR("EDID checksum is invalid, remainder is %d\n", csum);
+   if (print_bad_edid) {
+   DRM_ERROR("EDID checksum is invalid, remainder is 
%d\n", csum);
+   }

/* allow CEA to slide through, switches mangle this */
if (raw_edid[0] != 0x02)
@@ -207,7 +209,7 @@ bool drm_edid_block_valid(u8 *raw_edid, int block)
return 1;

 bad:
-   if (raw_edid) {
+   if (raw_edid && print_bad_edid) {
printk(KERN_ERR "Raw EDID:\n");
print_hex_dump(KERN_ERR, " \t", DUMP_PREFIX_NONE, 16, 1,
   raw_edid, EDID_LENGTH, false);
@@ -231,7 +233,7 @@ bool drm_edid_is_valid(struct edid *edid)
return false;

for (i = 0; i <= edid->extensions; i++)
-   if (!drm_edid_block_valid(raw + i * EDID_LENGTH, i))
+   if (!drm_edid_block_valid(raw + i * EDID_LENGTH, i, true))
return false;

return true;
@@ -303,6 +305,7 @@ drm_do_get_edid(struct drm_connector *connector, struct 
i2c_adapter *adapter)
 {
int i, j = 0, valid_extensions = 0;
u8 *block, *new;
+   bool print_bad_edid = !connector->bad_edid_counter || (drm_debug & 
DRM_UT_KMS);

if ((block = kmalloc(EDID_LENGTH, GFP_KERNEL)) == NULL)
return NULL;
@@ -311,7 +314,7 @@ drm_do_get_edid(struct drm_connector *connector, struct 
i2c_adapter *adapter)
for (i = 0; i < 4; i++) {
if (drm_do_probe_ddc_edid(adapter, block, 0, EDID_LENGTH))
goto out;
-   if (drm_edid_block_valid(block, 0))
+   if (drm_edid_block_valid(block, 0, print_bad_edid))
break;
if (i == 0 && drm_edid_is_zero(block, EDID_LENGTH)) {
connector->null_edid_counter++;
@@ -336,7 +339,7 @@ drm_do_get_edid(struct drm_connector *connector, struct 
i2c_adapter *adapter)
  block + (valid_extensions + 1) * EDID_LENGTH,
  j, EDID_LENGTH))
goto out;
-   if (drm_edid_block_valid(block + (valid_extensions + 1) 
* EDID_LENGTH, j)) {
+   if (drm_edid_block_valid(block + (valid_extensions + 1) 
* EDID_LENGTH, j, print_bad_edid)) {
valid_extensions++;
break;
}
@@ -359,8 +362,11 @@ drm_do_get_edid(struct drm_connector *connector, struct 
i2c_adapter *adapter)
return block;

 carp:
-   dev_warn(connector->dev->dev, "%s: EDID block %d invalid.\n",
-drm_get_connector_name(connector), j);
+   if (print_bad_edid) {
+   dev_warn(connector->dev->dev, "%s: EDID block %d invalid.\n",
+drm_get_connector_name(connector), j);
+   }
+   connector->bad_edid_counter++;

 out:
kfree(block);
diff --git a/drivers/gpu/drm/drm_edid_load.c b/drivers/gpu/drm/drm_edid_load.c
index 66d4a28..14f46dd 100644
--- a/drivers/gpu/drm/drm_edid_load.c
+++ b/drivers/gpu/drm/drm_edid_load.c
@@ -123,6 +123,7 @@ static int edid_load(struct drm_connector *connector, char 
*name,
int fwsize, expected;
int builtin = 0, err = 0;
int i, valid_extensions = 0;
+   bool print_bad_edid = !connector->bad_edid_counter || (drm_debug & 
DRM_UT_KMS);

pdev = platform_device_register_simple(connector_name, -1, NULL, 0);
if (IS_ERR(pdev)) {
@@ -173,7 +174,8 @@ static int edid_load(struct drm_connector *connector, char 
*name,
}
memcpy(edid, fwdata, fwsize);

-   if (!drm_edid_block_valid(edid, 0)) {
+   if (!drm_edid_block_valid(edid, 0, print_bad_edid)) {
+   

[PATCH] drm/radeon: delay virtual address destruction to bo destruction

2012-08-08 Thread j.gli...@gmail.com
From: Jerome Glisse 

Use the ttm bo delayed destruction queue so that we don't block
userspace when destroying a bo. The virtual address destruction
will happen at the same time as the real bo destruction, once
everything using the bo is done.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_drv.c |2 +-
 drivers/gpu/drm/radeon/radeon_gem.c |   20 
 2 files changed, 1 insertion(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
b/drivers/gpu/drm/radeon/radeon_drv.c
index dcea6f0..38443e7 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -368,7 +368,7 @@ static struct drm_driver kms_driver = {
.gem_init_object = radeon_gem_object_init,
.gem_free_object = radeon_gem_object_free,
.gem_open_object = radeon_gem_object_open,
-   .gem_close_object = radeon_gem_object_close,
+   .gem_close_object = NULL,
.dma_ioctl = radeon_dma_ioctl_kms,
.dumb_create = radeon_mode_dumb_create,
.dumb_map_offset = radeon_mode_dumb_mmap,
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c 
b/drivers/gpu/drm/radeon/radeon_gem.c
index 1b57b00..b5835c8 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -127,26 +127,6 @@ int radeon_gem_object_open(struct drm_gem_object *obj, 
struct drm_file *file_pri
return 0;
 }

-void radeon_gem_object_close(struct drm_gem_object *obj,
-struct drm_file *file_priv)
-{
-   struct radeon_bo *rbo = gem_to_radeon_bo(obj);
-   struct radeon_device *rdev = rbo->rdev;
-   struct radeon_fpriv *fpriv = file_priv->driver_priv;
-   struct radeon_vm *vm = &fpriv->vm;
-
-   if (rdev->family < CHIP_CAYMAN) {
-   return;
-   }
-
-   if (radeon_bo_reserve(rbo, false)) {
-   dev_err(rdev->dev, "leaking bo va because we fail to reserve 
bo\n");
-   return;
-   }
-   radeon_vm_bo_rmv(rdev, vm, rbo);
-   radeon_bo_unreserve(rbo);
-}
-
 static int radeon_gem_handle_lockup(struct radeon_device *rdev, int r)
 {
if (r == -EDEADLK) {
-- 
1.7.10.4
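
To make the "delayed destruction queue" wording concrete, here is a call-chain
illustration (an assumption about where the teardown ends up, not a hunk from
this patch); TTM only runs the final destroy callback once the bo is idle:

    /*
     * drm_gem_object_unreference()
     *   -> radeon_gem_object_free()         (.gem_free_object)
     *      -> radeon_bo_unref()
     *         -> ttm_bo_unref()             queued on TTM's delayed-destroy
     *                                       list while fences are outstanding
     *            -> radeon_ttm_bo_destroy() assumed home for the virtual
     *                                       address teardown
     */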



[PATCH] drm/radeon: fence virtual address and free it once idle [3.5] v4

2012-08-06 Thread j.gli...@gmail.com
From: Jerome Glisse 

Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.

v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).
v3: Fix compilation warnings.
v4: We need a special version for 3.5 because the locking scheme
is different between 3.5 and 3.6. There is no longer a cs mutex in
3.6; instead there is a global vm mutex.

This version is for stable 3.5 only.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  1 +
 drivers/gpu/drm/radeon/radeon_cs.c | 30 +++---
 drivers/gpu/drm/radeon/radeon_gart.c   | 24 ++--
 drivers/gpu/drm/radeon/radeon_gem.c| 13 ++---
 drivers/gpu/drm/radeon/radeon_object.c |  6 +-
 5 files changed, 53 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index fefcca5..01d2a87 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -323,6 +323,7 @@ struct radeon_bo_va {
uint64_tsoffset;
uint64_teoffset;
uint32_tflags;
+   struct radeon_fence *fence;
boolvalid;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 142f894..3680df0 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -294,6 +294,28 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
return 0;
 }

+static void radeon_bo_vm_fence_va(struct radeon_cs_parser *parser,
+ struct radeon_fence *fence)
+{
+   struct radeon_fpriv *fpriv = parser->filp->driver_priv;
+   struct radeon_vm *vm = &fpriv->vm;
+   struct radeon_bo_list *lobj;
+
+   if (parser->chunk_ib_idx == -1)
+   return;
+   if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
+   return;
+
+   list_for_each_entry(lobj, &parser->validated, tv.head) {
+   struct radeon_bo_va *bo_va;
+   struct radeon_bo *rbo = lobj->bo;
+
+   bo_va = radeon_bo_va(rbo, vm);
+   radeon_fence_unref(&bo_va->fence);
+   bo_va->fence = radeon_fence_ref(fence);
+   }
+}
+
 /**
  * cs_parser_fini() - clean parser states
  * @parser:parser structure holding parsing context.
@@ -306,11 +328,14 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error)
 {
unsigned i;

-   if (!error)
+   if (!error) {
+   /* fence all bo va before ttm_eu_fence_buffer_objects so bo are 
still reserved */
+   radeon_bo_vm_fence_va(parser, parser->ib.fence);
ttm_eu_fence_buffer_objects(&parser->validated,
parser->ib.fence);
-   else
+   } else {
ttm_eu_backoff_reservation(&parser->validated);
+   }

if (parser->relocs != NULL) {
for (i = 0; i < parser->nrelocs; i++) {
@@ -407,7 +432,6 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,

if (parser->chunk_ib_idx == -1)
return 0;
-
if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
return 0;

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index 84b648a..f651f22 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -564,7 +564,7 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
return -EINVAL;
}

-   if (bo_va->valid)
+   if (bo_va->valid && mem)
return 0;

ngpu_pages = radeon_bo_ngpu_pages(bo);
@@ -597,11 +597,27 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo *bo)
 {
struct radeon_bo_va *bo_va;
+   int r;

bo_va = radeon_bo_va(bo, vm);
if (bo_va == NULL)
return 0;

+   /* wait for va use to end */
+   while (bo_va->fence) {
+   r = radeon_fence_wait(bo_va->fence, false);
+   if (r) {
+   DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
+   if (r == -EDEADLK) {
+   r = radeon_gpu_reset(rdev);
+   if (!r)
+   continue;
+   }
+   break;
+   }
+   radeon_fence_unref(&bo_va->fence);
+
radeon_mutex_lock(&rdev->cs_mutex);
mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
@@ -661,12 +677,15 @@ void radeon_vm_fini(struct radeon_device *rdev, struct 
radeon_vm *vm)
radeon_vm_unbind_locked(rdev, vm);
radeon_mutex_unlock(&rdev->cs_mutex);

[PATCH] drm/radeon: fence virtual address and free it once idle v4

2012-08-06 Thread j.gli...@gmail.com
From: Jerome Glisse 

Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.

Kernel 3.5/3.4 needs a similar patch, but adapted for the difference in mutex locking.

v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).
v3: Add kernel 3.5/3.4 comment.
v4: Fix compilation warnings.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  1 +
 drivers/gpu/drm/radeon/radeon_cs.c | 32 +---
 drivers/gpu/drm/radeon/radeon_gart.c   | 24 ++--
 drivers/gpu/drm/radeon/radeon_gem.c| 13 ++---
 drivers/gpu/drm/radeon/radeon_object.c |  6 +-
 5 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5431af2..8d75c65 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -300,6 +300,7 @@ struct radeon_bo_va {
uint64_tsoffset;
uint64_teoffset;
uint32_tflags;
+   struct radeon_fence *fence;
boolvalid;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 8a4c49e..b4a0db24 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -278,6 +278,30 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
return 0;
 }

+static void radeon_bo_vm_fence_va(struct radeon_cs_parser *parser,
+ struct radeon_fence *fence)
+{
+   struct radeon_fpriv *fpriv = parser->filp->driver_priv;
+   struct radeon_vm *vm = &fpriv->vm;
+   struct radeon_bo_list *lobj;
+
+   if (parser->chunk_ib_idx == -1) {
+   return;
+   }
+   if ((parser->cs_flags & RADEON_CS_USE_VM) == 0) {
+   return;
+   }
+
+   list_for_each_entry(lobj, &parser->validated, tv.head) {
+   struct radeon_bo_va *bo_va;
+   struct radeon_bo *rbo = lobj->bo;
+
+   bo_va = radeon_bo_va(rbo, vm);
+   radeon_fence_unref(&bo_va->fence);
+   bo_va->fence = radeon_fence_ref(fence);
+   }
+}
+
 /**
  * cs_parser_fini() - clean parser states
  * @parser:parser structure holding parsing context.
@@ -290,11 +314,14 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error)
 {
unsigned i;

-   if (!error)
+   if (!error) {
+   /* fence all bo va before ttm_eu_fence_buffer_objects so bo are 
still reserved */
+   radeon_bo_vm_fence_va(parser, parser->ib.fence);
ttm_eu_fence_buffer_objects(&parser->validated,
parser->ib.fence);
-   else
+   } else {
ttm_eu_backoff_reservation(&parser->validated);
+   }

if (parser->relocs != NULL) {
for (i = 0; i < parser->nrelocs; i++) {
@@ -388,7 +415,6 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,

if (parser->chunk_ib_idx == -1)
return 0;
-
if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
return 0;

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index b372005..9912182 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -814,7 +814,7 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
return -EINVAL;
}

-   if (bo_va->valid)
+   if (bo_va->valid && mem)
return 0;

ngpu_pages = radeon_bo_ngpu_pages(bo);
@@ -859,11 +859,27 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo *bo)
 {
struct radeon_bo_va *bo_va;
+   int r;

bo_va = radeon_bo_va(bo, vm);
if (bo_va == NULL)
return 0;

+   /* wait for va use to end */
+   while (bo_va->fence) {
+   r = radeon_fence_wait(bo_va->fence, false);
+   if (r) {
+   DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
+   if (r == -EDEADLK) {
+   r = radeon_gpu_reset(rdev);
+   if (!r)
+   continue;
+   }
+   break;
+   }
+   radeon_fence_unref(&bo_va->fence);
+
mutex_lock(&rdev->vm_manager.lock);
mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
@@ -952,12 +968,15 @@ void radeon_vm_fini(struct radeon_device *rdev, struct 
radeon_vm *vm)
radeon_vm_unbind_locked(rdev, vm);
mutex_unlock(&rdev->vm_manager.lock);

-   /* remove all bo */
+   /* remove all bo at this point non are busy any more because un

[PATCH] drm/radeon: fence virtual address and free it once idle [3.5] v3

2012-08-06 Thread j.gli...@gmail.com
From: Jerome Glisse 

Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.

v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).
v3: Fix compilation warnings.

This version is for stable 3.5 only.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  1 +
 drivers/gpu/drm/radeon/radeon_cs.c | 30 +++---
 drivers/gpu/drm/radeon/radeon_gart.c   | 24 ++--
 drivers/gpu/drm/radeon/radeon_gem.c| 13 ++---
 drivers/gpu/drm/radeon/radeon_object.c |  6 +-
 5 files changed, 53 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index fefcca5..01d2a87 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -323,6 +323,7 @@ struct radeon_bo_va {
uint64_tsoffset;
uint64_teoffset;
uint32_tflags;
+   struct radeon_fence *fence;
boolvalid;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 142f894..3680df0 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -294,6 +294,28 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
return 0;
 }

+static void radeon_bo_vm_fence_va(struct radeon_cs_parser *parser,
+ struct radeon_fence *fence)
+{
+   struct radeon_fpriv *fpriv = parser->filp->driver_priv;
+   struct radeon_vm *vm = &fpriv->vm;
+   struct radeon_bo_list *lobj;
+
+   if (parser->chunk_ib_idx == -1)
+   return;
+   if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
+   return;
+
+   list_for_each_entry(lobj, &parser->validated, tv.head) {
+   struct radeon_bo_va *bo_va;
+   struct radeon_bo *rbo = lobj->bo;
+
+   bo_va = radeon_bo_va(rbo, vm);
+   radeon_fence_unref(&bo_va->fence);
+   bo_va->fence = radeon_fence_ref(fence);
+   }
+}
+
 /**
  * cs_parser_fini() - clean parser states
  * @parser:parser structure holding parsing context.
@@ -306,11 +328,14 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error)
 {
unsigned i;

-   if (!error)
+   if (!error) {
+   /* fence all bo va before ttm_eu_fence_buffer_objects so bo are 
still reserved */
+   radeon_bo_vm_fence_va(parser, parser->ib.fence);
ttm_eu_fence_buffer_objects(&parser->validated,
parser->ib.fence);
-   else
+   } else {
ttm_eu_backoff_reservation(&parser->validated);
+   }

if (parser->relocs != NULL) {
for (i = 0; i < parser->nrelocs; i++) {
@@ -407,7 +432,6 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,

if (parser->chunk_ib_idx == -1)
return 0;
-
if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
return 0;

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index 84b648a..f651f22 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -564,7 +564,7 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
return -EINVAL;
}

-   if (bo_va->valid)
+   if (bo_va->valid && mem)
return 0;

ngpu_pages = radeon_bo_ngpu_pages(bo);
@@ -597,11 +597,27 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo *bo)
 {
struct radeon_bo_va *bo_va;
+   int r;

bo_va = radeon_bo_va(bo, vm);
if (bo_va == NULL)
return 0;

+   /* wait for va use to end */
+   while (bo_va->fence) {
+   r = radeon_fence_wait(bo_va->fence, false);
+   if (r) {
+   DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
+   if (r == -EDEADLK) {
+   r = radeon_gpu_reset(rdev);
+   if (!r)
+   continue;
+   }
+   break;
+   }
+   radeon_fence_unref(&bo_va->fence);
+
radeon_mutex_lock(&rdev->cs_mutex);
mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
@@ -661,12 +677,15 @@ void radeon_vm_fini(struct radeon_device *rdev, struct 
radeon_vm *vm)
radeon_vm_unbind_locked(rdev, vm);
radeon_mutex_unlock(&rdev->cs_mutex);

-   /* remove all bo */
+   /* remove all bo at this point non are busy any more because unbind
+* waited for the last vm fence to signal
+*/
   

[PATCH] drm/radeon: fence virtual address and free it once idle v3

2012-08-03 Thread j.gli...@gmail.com
From: Jerome Glisse 

Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.

Kernel 3.5/3.4 needs a similar patch, but adapted for the difference in mutex locking.

v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).
v3: Add kernel 3.5/3.4 comment.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|1 +
 drivers/gpu/drm/radeon/radeon_cs.c |   32 +---
 drivers/gpu/drm/radeon/radeon_gart.c   |   24 ++--
 drivers/gpu/drm/radeon/radeon_gem.c|   13 ++---
 drivers/gpu/drm/radeon/radeon_object.c |6 +-
 5 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5431af2..8d75c65 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -300,6 +300,7 @@ struct radeon_bo_va {
uint64_tsoffset;
uint64_teoffset;
uint32_tflags;
+   struct radeon_fence *fence;
boolvalid;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 8a4c49e..995f3ab 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -278,6 +278,30 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
return 0;
 }

+static void radeon_bo_vm_fence_va(struct radeon_cs_parser *parser,
+ struct radeon_fence *fence)
+{
+   struct radeon_fpriv *fpriv = parser->filp->driver_priv;
+   struct radeon_vm *vm = &fpriv->vm;
+   struct radeon_bo_list *lobj;
+   int r;
+
+   if (parser->chunk_ib_idx == -1)
+   return;
+   if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
+   return;
+
+   list_for_each_entry(lobj, &parser->validated, tv.head) {
+   struct radeon_bo_va *bo_va;
+   struct radeon_bo *rbo = lobj->bo;
+
+   bo_va = radeon_bo_va(rbo, vm);
+   radeon_fence_unref(&bo_va->fence);
+   bo_va->fence = radeon_fence_ref(fence);
+   }
+   return 0;
+}
+
 /**
  * cs_parser_fini() - clean parser states
  * @parser:parser structure holding parsing context.
@@ -290,11 +314,14 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error)
 {
unsigned i;

-   if (!error)
+   if (!error) {
+   /* fence all bo va before ttm_eu_fence_buffer_objects so bo are 
still reserved */
+   radeon_bo_vm_fence_va(parser, parser->ib.fence);
ttm_eu_fence_buffer_objects(&parser->validated,
parser->ib.fence);
-   else
+   } else {
ttm_eu_backoff_reservation(&parser->validated);
+   }

if (parser->relocs != NULL) {
for (i = 0; i < parser->nrelocs; i++) {
@@ -388,7 +415,6 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,

if (parser->chunk_ib_idx == -1)
return 0;
-
if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
return 0;

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index b372005..9912182 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -814,7 +814,7 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
return -EINVAL;
}

-   if (bo_va->valid)
+   if (bo_va->valid && mem)
return 0;

ngpu_pages = radeon_bo_ngpu_pages(bo);
@@ -859,11 +859,27 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo *bo)
 {
struct radeon_bo_va *bo_va;
+   int r;

bo_va = radeon_bo_va(bo, vm);
if (bo_va == NULL)
return 0;

+   /* wait for va use to end */
+   while (bo_va->fence) {
+   r = radeon_fence_wait(bo_va->fence, false);
+   if (r) {
+   DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
+   if (r == -EDEADLK) {
+   r = radeon_gpu_reset(rdev);
+   if (!r)
+   continue;
+   }
+   break;
+   }
+   radeon_fence_unref(&bo_va->fence);
+
mutex_lock(&rdev->vm_manager.lock);
mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
@@ -952,12 +968,15 @@ void radeon_vm_fini(struct radeon_device *rdev, struct 
radeon_vm *vm)
radeon_vm_unbind_locked(rdev, vm);
mutex_unlock(&rdev->vm_manager.lock);

-   /* remove all bo */
+   /* remove all bo at this point non are busy any more because unbind
+  

[PATCH] drm/radeon: fence virtual address and free it once idle [3.5] v2

2012-08-03 Thread j.gli...@gmail.com
From: Jerome Glisse 

Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.

v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).

This version is for stable 3.5 only.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|  1 +
 drivers/gpu/drm/radeon/radeon_cs.c | 32 +---
 drivers/gpu/drm/radeon/radeon_gart.c   | 24 ++--
 drivers/gpu/drm/radeon/radeon_gem.c| 13 ++---
 drivers/gpu/drm/radeon/radeon_object.c |  6 +-
 5 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index fefcca5..01d2a87 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -323,6 +323,7 @@ struct radeon_bo_va {
uint64_tsoffset;
uint64_teoffset;
uint32_tflags;
+   struct radeon_fence *fence;
boolvalid;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 142f894..70f6d08 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -294,6 +294,30 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
return 0;
 }

+static void radeon_bo_vm_fence_va(struct radeon_cs_parser *parser,
+ struct radeon_fence *fence)
+{
+   struct radeon_fpriv *fpriv = parser->filp->driver_priv;
+   struct radeon_vm *vm = &fpriv->vm;
+   struct radeon_bo_list *lobj;
+   int r;
+
+   if (parser->chunk_ib_idx == -1)
+   return;
+   if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
+   return;
+
+   list_for_each_entry(lobj, &parser->validated, tv.head) {
+   struct radeon_bo_va *bo_va;
+   struct radeon_bo *rbo = lobj->bo;
+
+   bo_va = radeon_bo_va(rbo, vm);
+   radeon_fence_unref(&bo_va->fence);
+   bo_va->fence = radeon_fence_ref(fence);
+   }
+   return 0;
+}
+
 /**
  * cs_parser_fini() - clean parser states
  * @parser:parser structure holding parsing context.
@@ -306,11 +330,14 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error)
 {
unsigned i;

-   if (!error)
+   if (!error) {
+   /* fence all bo va before ttm_eu_fence_buffer_objects so bo are 
still reserved */
+   radeon_bo_vm_fence_va(parser, parser->ib.fence);
ttm_eu_fence_buffer_objects(&parser->validated,
parser->ib.fence);
-   else
+   } else {
ttm_eu_backoff_reservation(&parser->validated);
+   }

if (parser->relocs != NULL) {
for (i = 0; i < parser->nrelocs; i++) {
@@ -407,7 +434,6 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,

if (parser->chunk_ib_idx == -1)
return 0;
-
if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
return 0;

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index 84b648a..f651f22 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -564,7 +564,7 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
return -EINVAL;
}

-   if (bo_va->valid)
+   if (bo_va->valid && mem)
return 0;

ngpu_pages = radeon_bo_ngpu_pages(bo);
@@ -597,11 +597,27 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo *bo)
 {
struct radeon_bo_va *bo_va;
+   int r;

bo_va = radeon_bo_va(bo, vm);
if (bo_va == NULL)
return 0;

+   /* wait for va use to end */
+   while (bo_va->fence) {
+   r = radeon_fence_wait(bo_va->fence, false);
+   if (r) {
+   DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
+   if (r == -EDEADLK) {
+   r = radeon_gpu_reset(rdev);
+   if (!r)
+   continue;
+   }
+   break;
+   }
+   radeon_fence_unref(&bo_va->fence);
+
radeon_mutex_lock(&rdev->cs_mutex);
mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
@@ -661,12 +677,15 @@ void radeon_vm_fini(struct radeon_device *rdev, struct 
radeon_vm *vm)
radeon_vm_unbind_locked(rdev, vm);
radeon_mutex_unlock(&rdev->cs_mutex);

-   /* remove all bo */
+   /* remove all bo at this point non are busy any more because unbind
+* waited for the last vm fence to signal
+*/
 

[PATCH] drm/radeon: fence virtual address and free it once idle v2

2012-08-03 Thread j.gli...@gmail.com
From: Jerome Glisse 

Virtual addresses need to be fenced to know when we can safely remove them.
This patch also properly clears the pagetable. Previously it was
seriously broken.

v2: Force the pagetable update when unbinding a bo (don't bail out if
bo_va->valid is true).

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|1 +
 drivers/gpu/drm/radeon/radeon_cs.c |   32 +---
 drivers/gpu/drm/radeon/radeon_gart.c   |   24 ++--
 drivers/gpu/drm/radeon/radeon_gem.c|   13 ++---
 drivers/gpu/drm/radeon/radeon_object.c |6 +-
 5 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5431af2..8d75c65 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -300,6 +300,7 @@ struct radeon_bo_va {
uint64_tsoffset;
uint64_teoffset;
uint32_tflags;
+   struct radeon_fence *fence;
boolvalid;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index 8a4c49e..995f3ab 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -278,6 +278,30 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void 
*data)
return 0;
 }

+static void radeon_bo_vm_fence_va(struct radeon_cs_parser *parser,
+ struct radeon_fence *fence)
+{
+   struct radeon_fpriv *fpriv = parser->filp->driver_priv;
+   struct radeon_vm *vm = &fpriv->vm;
+   struct radeon_bo_list *lobj;
+   int r;
+
+   if (parser->chunk_ib_idx == -1)
+   return;
+   if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
+   return;
+
+   list_for_each_entry(lobj, &parser->validated, tv.head) {
+   struct radeon_bo_va *bo_va;
+   struct radeon_bo *rbo = lobj->bo;
+
+   bo_va = radeon_bo_va(rbo, vm);
+   radeon_fence_unref(&bo_va->fence);
+   bo_va->fence = radeon_fence_ref(fence);
+   }
+   return 0;
+}
+
 /**
  * cs_parser_fini() - clean parser states
  * @parser:parser structure holding parsing context.
@@ -290,11 +314,14 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser 
*parser, int error)
 {
unsigned i;

-   if (!error)
+   if (!error) {
+   /* fence all bo va before ttm_eu_fence_buffer_objects so bo are 
still reserved */
+   radeon_bo_vm_fence_va(parser, parser->ib.fence);
ttm_eu_fence_buffer_objects(&parser->validated,
parser->ib.fence);
-   else
+   } else {
ttm_eu_backoff_reservation(&parser->validated);
+   }

if (parser->relocs != NULL) {
for (i = 0; i < parser->nrelocs; i++) {
@@ -388,7 +415,6 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,

if (parser->chunk_ib_idx == -1)
return 0;
-
if ((parser->cs_flags & RADEON_CS_USE_VM) == 0)
return 0;

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index b372005..9912182 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -814,7 +814,7 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
return -EINVAL;
}

-   if (bo_va->valid)
+   if (bo_va->valid && mem)
return 0;

ngpu_pages = radeon_bo_ngpu_pages(bo);
@@ -859,11 +859,27 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
 struct radeon_bo *bo)
 {
struct radeon_bo_va *bo_va;
+   int r;

bo_va = radeon_bo_va(bo, vm);
if (bo_va == NULL)
return 0;

+   /* wait for va use to end */
+   while (bo_va->fence) {
+   r = radeon_fence_wait(bo_va->fence, false);
+   if (r) {
+   DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
+   if (r == -EDEADLK) {
+   r = radeon_gpu_reset(rdev);
+   if (!r)
+   continue;
+   }
+   break;
+   }
+   radeon_fence_unref(&bo_va->fence);
+
mutex_lock(&rdev->vm_manager.lock);
mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
@@ -952,12 +968,15 @@ void radeon_vm_fini(struct radeon_device *rdev, struct 
radeon_vm *vm)
radeon_vm_unbind_locked(rdev, vm);
mutex_unlock(&rdev->vm_manager.lock);

-   /* remove all bo */
+   /* remove all bo at this point non are busy any more because unbind
+* waited for the last vm fence to signal
+*/
r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);

[PATCH] drm/radeon: fix virtual memory locking in case of reset

2012-08-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

Lock/unlock the mutexes in the proper order to avoid a deadlock
in case of a GPU reset triggered from the VM code path.

Cc: stable at vger.kernel.org [3.5]
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_gart.c |   11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index b372005..7eabb59 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -508,14 +508,19 @@ static void radeon_vm_unbind_locked(struct radeon_device 
*rdev,
while (vm->fence) {
int r;
r = radeon_fence_wait(vm->fence, false);
-   if (r)
+   if (r) {
DRM_ERROR("error while waiting for fence: %d\n", r);
+   }
if (r == -EDEADLK) {
+   /* release mutex and lock in right order */
mutex_unlock(&rdev->vm_manager.lock);
+   mutex_unlock(&vm->mutex);
r = radeon_gpu_reset(rdev);
mutex_lock(&rdev->vm_manager.lock);
-   if (!r)
+   mutex_lock(&vm->mutex);
+   if (!r) {
continue;
+   }
}
break;
}
@@ -551,7 +556,9 @@ void radeon_vm_manager_fini(struct radeon_device *rdev)
mutex_lock(&rdev->vm_manager.lock);
/* unbind all active vm */
list_for_each_entry_safe(vm, tmp, &rdev->vm_manager.lru_vm, list) {
+   mutex_lock(&vm->mutex);
radeon_vm_unbind_locked(rdev, vm);
+   mutex_unlock(&vm->mutex);
}
rdev->vm_manager.funcs->fini(rdev);
mutex_unlock(&rdev->vm_manager.lock);
-- 
1.7.10.4
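
As an illustration of the ordering rule this fix relies on (a sketch restating
the hunk above, with the rationale spelled out as an assumption):

    /* lock order: rdev->vm_manager.lock (outer), then vm->mutex (inner).
     * Both are dropped before radeon_gpu_reset(), which may need to touch
     * the VM state itself, and re-taken outer-first afterwards.
     */
    mutex_unlock(&rdev->vm_manager.lock);
    mutex_unlock(&vm->mutex);
    r = radeon_gpu_reset(rdev);
    mutex_lock(&rdev->vm_manager.lock);
    mutex_lock(&vm->mutex);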



[PATCH 2/3] drm/radeon: try to keep current vram GPU address

2012-07-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

It seems we can't move the VRAM GPU address without disabling the CRTCs.
Thus if we want to support flicker free boot from UEFI to X, we need
to keep the VRAM GPU address that UEFI programmed. So far, on all UEFI
systems checked, this address was something sane.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c   |   10 +++-
 drivers/gpu/drm/radeon/r600.c|  105 --
 drivers/gpu/drm/radeon/radeon_asic.h |3 +-
 drivers/gpu/drm/radeon/rv770.c   |   44 ++
 drivers/gpu/drm/radeon/si.c  |   59 ++-
 5 files changed, 80 insertions(+), 141 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index db85262..c56fa0a 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -2166,6 +2166,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev)

 int evergreen_mc_init(struct radeon_device *rdev)
 {
+   u64 fb_base;
u32 tmp;
int chansize, numchan;

@@ -2217,7 +2218,14 @@ int evergreen_mc_init(struct radeon_device *rdev)
rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
}
rdev->mc.visible_vram_size = rdev->mc.aper_size;
-   r700_vram_gtt_location(rdev, &rdev->mc);
+   fb_base = RREG32(MC_VM_FB_LOCATION) & 0x;
+   fb_base <<= 24;
+   if ((rdev->family == CHIP_PALM) ||
+   (rdev->family == CHIP_SUMO) ||
+   (rdev->family == CHIP_SUMO2)) {
+   fb_base |= ((RREG32(MC_FUS_VM_FB_OFFSET) >> 20) & 0xf) << 20;
+   }
+   r600_vram_gtt_location(rdev, &rdev->mc, fb_base, 1ULL << 36ULL);
radeon_update_bandwidth_info(rdev);

return 0;
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 637280f..87f47b6 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -1097,69 +1097,84 @@ static void r600_mc_program(struct radeon_device *rdev)
  * r600_vram_gtt_location - try to find VRAM & GTT location
  * @rdev: radeon device structure holding all necessary informations
  * @mc: memory controller structure holding memory informations
+ * @fb_base: current start address of VRAM
+ * @mc_max: maximum address supported by the MC controler (32bits on
+ *  r6xx/r7xx, 36bits on evergreen, 40bits cayman and newer)
  *
- * Function will place try to place VRAM at same place as in CPU (PCI)
- * address space as some GPU seems to have issue when we reprogram at
- * different address space.
+ * Function will place try to not move VRAM start address. But if VRAM
+ * conflict with AGP then we move VRAM as we have more issue with moving
+ * AGP than with moving VRAM start address.
  *
- * If there is not enough space to fit the unvisible VRAM after the
- * aperture then we limit the VRAM size to the aperture.
+ * We will also move VRAM if current address + VRAM size overflow mc_max.
  *
- * If we are using AGP then place VRAM adjacent to AGP aperture are we need
- * them to be in one from GPU point of view so that we can program GPU to
- * catch access outside them (weird GPU policy see ??).
- *
- * This function will never fails, worst case are limiting VRAM or GTT.
+ * GTT is place before VRAM if there is enough room there, or after VRAM.
+ * GTT is shrink to maximum size we can find, we shrink GTT instead of
+ * VRAM as VRAM is better than GTT :)
  *
  * Note: GTT start, end, size should be initialized before calling this
  * function on AGP platform.
  */
-static void r600_vram_gtt_location(struct radeon_device *rdev, struct 
radeon_mc *mc)
+void r600_vram_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc,
+   uint64_t fb, uint64_t mc_max)
 {
-   u64 size_bf, size_af;
+   /* try to keep same vram address */
+   mc->vram_start = fb;

-   if (mc->mc_vram_size > 0xE000) {
-   /* leave room for at least 512M GTT */
-   dev_warn(rdev->dev, "limiting VRAM\n");
-   mc->real_vram_size = 0xE000;
-   mc->mc_vram_size = 0xE000;
+   /* make sure there is enough room for vram */
+   if ((mc->vram_start + mc->mc_vram_size) > mc_max) {
+   /* moving fb, show me the GPU on which this happen free beer 
reward */
+   mc->vram_start = mc_max - mc->mc_vram_size;
}
+   mc->vram_end = mc->vram_start + mc->mc_vram_size - 1;
+
if (rdev->flags & RADEON_IS_AGP) {
-   size_bf = mc->gtt_start;
-   size_af = 0x - mc->gtt_end;
-   if (size_bf > size_af) {
-   if (mc->mc_vram_size > size_bf) {
-   dev_warn(rdev->dev, "limiting VRAM\n");
-   mc->real_vram_size = size_bf;
-   mc->mc_vram_size = size_bf;
-   }
-   mc->vram_start = mc->gtt_start - mc->

[PATCH 1/3] drm/radeon: do not reenable crtc after moving vram start address

2012-07-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

It seems we can not update the crtc scanout address. After disabling
a crtc, updates to the base address do not take effect once the crtc
is reenabled, leading to at least one frame being scanned out from the
old crtc base address. Disabling the crtc display request leads to the
same behavior.

So after changing the vram address, if we don't keep the crtc disabled
the GPU will try to read some random system memory address; with some
iommus this will break the crtc engine and lead to a broken display
and iommu error messages.

So to avoid this, disable the crtc. For flicker-less boot we will need
to avoid moving the vram start address.

This patch should also fix:

https://bugs.freedesktop.org/show_bug.cgi?id=42373

Cc: 
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c   |   57 --
 drivers/gpu/drm/radeon/radeon_asic.h |8 ++---
 drivers/gpu/drm/radeon/rv515.c   |   13 
 3 files changed, 2 insertions(+), 76 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index e585a3b..db85262 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1229,24 +1229,8 @@ void evergreen_agp_enable(struct radeon_device *rdev)

 void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save 
*save)
 {
-   save->vga_control[0] = RREG32(D1VGA_CONTROL);
-   save->vga_control[1] = RREG32(D2VGA_CONTROL);
save->vga_render_control = RREG32(VGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(VGA_HDP_CONTROL);
-   save->crtc_control[0] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC0_REGISTER_OFFSET);
-   save->crtc_control[1] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC1_REGISTER_OFFSET);
-   if (rdev->num_crtc >= 4) {
-   save->vga_control[2] = RREG32(EVERGREEN_D3VGA_CONTROL);
-   save->vga_control[3] = RREG32(EVERGREEN_D4VGA_CONTROL);
-   save->crtc_control[2] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC2_REGISTER_OFFSET);
-   save->crtc_control[3] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC3_REGISTER_OFFSET);
-   }
-   if (rdev->num_crtc >= 6) {
-   save->vga_control[4] = RREG32(EVERGREEN_D5VGA_CONTROL);
-   save->vga_control[5] = RREG32(EVERGREEN_D6VGA_CONTROL);
-   save->crtc_control[4] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC4_REGISTER_OFFSET);
-   save->crtc_control[5] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC5_REGISTER_OFFSET);
-   }

/* Stop all video */
WREG32(VGA_RENDER_CONTROL, 0);
@@ -1357,47 +1341,6 @@ void evergreen_mc_resume(struct radeon_device *rdev, 
struct evergreen_mc_save *s
/* Unlock host access */
WREG32(VGA_HDP_CONTROL, save->vga_hdp_control);
mdelay(1);
-   /* Restore video state */
-   WREG32(D1VGA_CONTROL, save->vga_control[0]);
-   WREG32(D2VGA_CONTROL, save->vga_control[1]);
-   if (rdev->num_crtc >= 4) {
-   WREG32(EVERGREEN_D3VGA_CONTROL, save->vga_control[2]);
-   WREG32(EVERGREEN_D4VGA_CONTROL, save->vga_control[3]);
-   }
-   if (rdev->num_crtc >= 6) {
-   WREG32(EVERGREEN_D5VGA_CONTROL, save->vga_control[4]);
-   WREG32(EVERGREEN_D6VGA_CONTROL, save->vga_control[5]);
-   }
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC0_REGISTER_OFFSET, 1);
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC1_REGISTER_OFFSET, 1);
-   if (rdev->num_crtc >= 4) {
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + 
EVERGREEN_CRTC2_REGISTER_OFFSET, 1);
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + 
EVERGREEN_CRTC3_REGISTER_OFFSET, 1);
-   }
-   if (rdev->num_crtc >= 6) {
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + 
EVERGREEN_CRTC4_REGISTER_OFFSET, 1);
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + 
EVERGREEN_CRTC5_REGISTER_OFFSET, 1);
-   }
-   WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET, 
save->crtc_control[0]);
-   WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC1_REGISTER_OFFSET, 
save->crtc_control[1]);
-   if (rdev->num_crtc >= 4) {
-   WREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC2_REGISTER_OFFSET, save->crtc_control[2]);
-   WREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC3_REGISTER_OFFSET, save->crtc_control[3]);
-   }
-   if (rdev->num_crtc >= 6) {
-   WREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC4_REGISTER_OFFSET, save->crtc_control[4]);
-   WREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC5_REGISTER_OFFSET, save->crtc_control[5]);
-   }
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
-   if (rdev->num_crtc >= 4) {
-   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + 
EVERGREEN_CRTC2_REGISTER_OFFSET, 0)

Fix GPU triggering random system read after VRAM start change

2012-07-27 Thread j.gli...@gmail.com
So the first patch is a fix in itself, the smallest possible, and should
go to stable. The second patch is an improvement, a first step toward
flicker free boot.

I have not yet extensively tested the second patch, especially not on AGP,
but so far on a few GPU/motherboard combinations it looks good. It can
probably wait for 3.7. Will test it more and report.

I have a third patch that is a step closer to flicker free boot on uefi;
I am waiting for an ack to release a reg.

Cheers,
Jerome



[PATCH] drm/radeon: cleanup and fix crtc while programming mc

2012-07-26 Thread j.gli...@gmail.com
From: Jerome Glisse 

When we change the vram start address for the GPU memory controller
we need to make sure that nothing in the GPU still uses the old vram
address. This patch cleans up and fixes the crtc addresses.

However there is still some issue somewhere if we reenable the crtcs
after updating the address at which they scan out. So to avoid any
issue, disable the crtcs. Once we want to do a flicker-less transition
between uefi and radeon we will probably want to revisit how we program
the GPU memory controller.

This probably fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=52467
https://bugs.freedesktop.org/show_bug.cgi?id=42373

Cc: 
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c   |  178 --
 drivers/gpu/drm/radeon/radeon_asic.h |   18 +++-
 2 files changed, 99 insertions(+), 97 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index e585a3b..c6ede66 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1227,70 +1227,94 @@ void evergreen_agp_enable(struct radeon_device *rdev)
WREG32(VM_CONTEXT1_CNTL, 0);
 }

+static void evergreen_crtc_save(struct radeon_device *rdev, struct 
evergreen_mc_save *save, unsigned idx)
+{
+   save->crtc[idx].paddress = 0;
+   save->crtc[idx].saddress = 0;
+   save->crtc[idx].crtc_control = 0;
+
+   if (idx >= rdev->num_crtc) {
+   /* it seems accessing non existant crtc trigger high latency */
+   return;
+   }
+
+   save->crtc[idx].paddress = 
RREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + save->crtc[idx].offset);
+   save->crtc[idx].paddress |= 
((uint64_t)RREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + 
save->crtc[idx].offset)) << 32ULL;
+   save->crtc[idx].saddress = 
RREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + save->crtc[idx].offset);
+   save->crtc[idx].saddress |= 
((uint64_t)RREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + 
save->crtc[idx].offset)) << 32ULL;
+   save->crtc[idx].crtc_control = RREG32(EVERGREEN_CRTC_CONTROL + 
save->crtc[idx].offset);
+   /* We need to disable all crtc as for some reason crtc scanout
+* base address does not happen properly when changing the
+* mc vram start address. Don't remove this or you will break
+* things on uefi system.
+*/
+   save->crtc[idx].crtc_control = 0;
+   save->crtc[idx].vga_control = 
RREG32(save->crtc[idx].vga_control_offset);
+
+   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + save->crtc[idx].offset, 1);
+   WREG32(EVERGREEN_CRTC_CONTROL + save->crtc[idx].offset, 0);
+   WREG32(save->crtc[idx].vga_control_offset, 0);
+   WREG32(EVERGREEN_CRTC_UPDATE_LOCK + save->crtc[idx].offset, 0);
+}
+
 void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save 
*save)
 {
-   save->vga_control[0] = RREG32(D1VGA_CONTROL);
-   save->vga_control[1] = RREG32(D2VGA_CONTROL);
+   save->crtc[0].offset = EVERGREEN_CRTC0_REGISTER_OFFSET;
+   save->crtc[1].offset = EVERGREEN_CRTC1_REGISTER_OFFSET;
+   save->crtc[2].offset = EVERGREEN_CRTC2_REGISTER_OFFSET;
+   save->crtc[3].offset = EVERGREEN_CRTC3_REGISTER_OFFSET;
+   save->crtc[4].offset = EVERGREEN_CRTC4_REGISTER_OFFSET;
+   save->crtc[5].offset = EVERGREEN_CRTC5_REGISTER_OFFSET;
+   save->crtc[0].vga_control_offset = D1VGA_CONTROL;
+   save->crtc[1].vga_control_offset = D2VGA_CONTROL;
+   save->crtc[2].vga_control_offset = EVERGREEN_D3VGA_CONTROL;
+   save->crtc[3].vga_control_offset = EVERGREEN_D4VGA_CONTROL;
+   save->crtc[4].vga_control_offset = EVERGREEN_D5VGA_CONTROL;
+   save->crtc[5].vga_control_offset = EVERGREEN_D6VGA_CONTROL;
+
+   save->fb_address = (uint64_t)(RREG32(MC_VM_FB_LOCATION) & 0x) << 
24ULL;
save->vga_render_control = RREG32(VGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(VGA_HDP_CONTROL);
-   save->crtc_control[0] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC0_REGISTER_OFFSET);
-   save->crtc_control[1] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC1_REGISTER_OFFSET);
-   if (rdev->num_crtc >= 4) {
-   save->vga_control[2] = RREG32(EVERGREEN_D3VGA_CONTROL);
-   save->vga_control[3] = RREG32(EVERGREEN_D4VGA_CONTROL);
-   save->crtc_control[2] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC2_REGISTER_OFFSET);
-   save->crtc_control[3] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC3_REGISTER_OFFSET);
-   }
-   if (rdev->num_crtc >= 6) {
-   save->vga_control[4] = RREG32(EVERGREEN_D5VGA_CONTROL);
-   save->vga_control[5] = RREG32(EVERGREEN_D6VGA_CONTROL);
-   save->crtc_control[4] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC4_REGISTER_OFFSET);
-   save->crtc_control[5] = RREG32(EVERGREEN_CRTC_CONTROL + 
EVERGREEN_CRTC5_REGISTER_OFFSET);
-   }

/* Stop all video *

[PATCH] drm/radeon: fix dpms on/off on trinity/aruba v2

2012-07-24 Thread j.gli...@gmail.com
From: Jerome Glisse 

The external encoder needs to be set up again before enabling the
transmitter. This seems to be needed only on some trinity/aruba
to fix dpms on.

v2: Add comment; only set up again on dce6, i.e. aruba or newer.

Cc: 
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_encoders.c |   12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c 
b/drivers/gpu/drm/radeon/atombios_encoders.c
index 486ccdf..8676b1b 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -1392,10 +1392,18 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder 
*encoder, int mode)
case DRM_MODE_DPMS_ON:
/* some early dce3.2 boards have a bug in their transmitter 
control table */
if ((rdev->family == CHIP_RV710) || (rdev->family == 
CHIP_RV730) ||
-   ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev))
+   ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) {
+   if (ASIC_IS_DCE6(rdev)) {
+   /* It seems we need to call 
ATOM_ENCODER_CMD_SETUP again
+* before reenabling encoder on DPMS ON, 
otherwise we never
+* get picture
+*/
+   atombios_dig_encoder_setup(encoder, 
ATOM_ENCODER_CMD_SETUP, 0);
+   }
atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0);
-   else
+   } else {
atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
+   }
if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && 
connector) {
if (connector->connector_type == 
DRM_MODE_CONNECTOR_eDP) {
atombios_set_edp_panel_power(connector,
-- 
1.7.10.4



[PATCH] drm/radeon: fix dpms on/off on trinity/aruba

2012-07-24 Thread j.gli...@gmail.com
From: Jerome Glisse 

The external encoder needs to be set up again before enabling the
transmitter. This seems to be needed only on some trinity/aruba
to fix dpms on.

Cc: 
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_encoders.c |6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c 
b/drivers/gpu/drm/radeon/atombios_encoders.c
index 486ccdf..5302c42 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -1392,10 +1392,12 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder 
*encoder, int mode)
case DRM_MODE_DPMS_ON:
/* some early dce3.2 boards have a bug in their transmitter 
control table */
if ((rdev->family == CHIP_RV710) || (rdev->family == 
CHIP_RV730) ||
-   ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev))
+   ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) {
+   atombios_dig_encoder_setup(encoder, 
ATOM_ENCODER_CMD_SETUP, 0);
atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0);
-   else
+   } else {
atombios_dig_transmitter_setup(encoder, 
ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
+   }
if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && 
connector) {
if (connector->connector_type == 
DRM_MODE_CONNECTOR_eDP) {
atombios_set_edp_panel_power(connector,
-- 
1.7.10.4



[PATCH] drm/radeon: hotplug of passive dp to dvi|hdmi|vga adaptor

2012-07-19 Thread j.gli...@gmail.com
From: Jerome Glisse 

We should not turn off the connector nor try to retrain the DP link
if a passive DP adaptor is connected to a DP port.

Cc: 
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_connectors.c |   22 --
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c 
b/drivers/gpu/drm/radeon/radeon_connectors.c
index 2914c57..890cf1d 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -49,6 +49,7 @@ void radeon_connector_hotplug(struct drm_connector *connector)
struct drm_device *dev = connector->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_connector *radeon_connector = 
to_radeon_connector(connector);
+   struct radeon_connector_atom_dig *radeon_dig_connector = 
radeon_connector->con_priv;

/* bail if the connector does not have hpd pin, e.g.,
 * VGA, TV, etc.
@@ -62,15 +63,32 @@ void radeon_connector_hotplug(struct drm_connector 
*connector)
if (connector->dpms != DRM_MODE_DPMS_ON)
return;

+   /* don't do anything is sink is not display port
+* (passive dp->(dvi|hdmi|vga) adaptor
+*/
+   if (radeon_dig_connector->dp_sink_type != 
CONNECTOR_OBJECT_ID_DISPLAYPORT) {
+   return;
+   }
+
/* just deal with DP (not eDP) here. */
if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
int saved_dpms = connector->dpms;

/* Only turn off the display it it's physically disconnected */
-   if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd))
+   if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
-   else if (radeon_dp_needs_link_train(radeon_connector))
+   } else if (radeon_dp_needs_link_train(radeon_connector)) {
+
+   /* first get sink type as it's reset after unplug */
+   radeon_dig_connector->dp_sink_type = 
radeon_dp_getsinktype(radeon_connector);
+   /* don't do anything is sink is not display port
+* (passive dp->(dvi|hdmi|vga) adaptor
+*/
+   if (radeon_dig_connector->dp_sink_type != 
CONNECTOR_OBJECT_ID_DISPLAYPORT) {
+   return;
+   }
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
+   }
connector->dpms = saved_dpms;
}
 }
-- 
1.7.10.4



[PATCH] drm/radeon: fix non-relevant error message

2012-07-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

We want to print the "link status query failed" message only if it's
an unexpected failure. If we query to see whether we need
link training, it might be because there is nothing
connected, and thus the link status query has the right
to fail in that case.

To avoid printing a failure when it's expected, move the
failure message to the proper place.

Cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_dp.c |   10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_dp.c 
b/drivers/gpu/drm/radeon/atombios_dp.c
index 7bb5d7e..43e9bcd 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -22,6 +22,7 @@
  *
  * Authors: Dave Airlie
  *  Alex Deucher
+ *  Jerome Glisse
  */
 #include "drmP.h"
 #include "radeon_drm.h"
@@ -654,7 +655,6 @@ static bool radeon_dp_get_link_status(struct 
radeon_connector *radeon_connector,
ret = radeon_dp_aux_native_read(radeon_connector, DP_LANE0_1_STATUS,
link_status, DP_LINK_STATUS_SIZE, 100);
if (ret <= 0) {
-   DRM_ERROR("displayport link status failed\n");
return false;
}

@@ -833,8 +833,10 @@ static int radeon_dp_link_train_cr(struct 
radeon_dp_link_train_info *dp_info)
else
mdelay(dp_info->rd_interval * 4);

-   if (!radeon_dp_get_link_status(dp_info->radeon_connector, 
dp_info->link_status))
+   if (!radeon_dp_get_link_status(dp_info->radeon_connector, 
dp_info->link_status)) {
+   DRM_ERROR("displayport link status failed\n");
break;
+   }

if (dp_clock_recovery_ok(dp_info->link_status, 
dp_info->dp_lane_count)) {
clock_recovery = true;
@@ -896,8 +898,10 @@ static int radeon_dp_link_train_ce(struct 
radeon_dp_link_train_info *dp_info)
else
mdelay(dp_info->rd_interval * 4);

-   if (!radeon_dp_get_link_status(dp_info->radeon_connector, 
dp_info->link_status))
+   if (!radeon_dp_get_link_status(dp_info->radeon_connector, 
dp_info->link_status)) {
+   DRM_ERROR("displayport link status failed\n");
break;
+   }

if (dp_channel_eq_ok(dp_info->link_status, 
dp_info->dp_lane_count)) {
channel_eq = true;
-- 
1.7.10.4



[PATCH] drm/radeon: on hotplug force link training to happen

2012-07-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

To have the kernel behave like VGA/DVI we need to retrain the link
on hotplug. For this to happen we need to report that
link training is needed if we fail to get the link
status, and we need to force link training to happen by
setting the connector dpms to off before asking for it on.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/atombios_dp.c   |2 +-
 drivers/gpu/drm/radeon/radeon_connectors.c |7 +--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_dp.c 
b/drivers/gpu/drm/radeon/atombios_dp.c
index 5131b3b..7bb5d7e 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -670,7 +670,7 @@ bool radeon_dp_needs_link_train(struct radeon_connector 
*radeon_connector)
struct radeon_connector_atom_dig *dig = radeon_connector->con_priv;

if (!radeon_dp_get_link_status(radeon_connector, link_status))
-   return false;
+   return true;
if (dp_channel_eq_ok(link_status, dig->dp_lane_count))
return false;
return true;
diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c 
b/drivers/gpu/drm/radeon/radeon_connectors.c
index 2914c57..4b42c9f 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -67,10 +67,13 @@ void radeon_connector_hotplug(struct drm_connector 
*connector)
int saved_dpms = connector->dpms;

/* Only turn off the display it it's physically disconnected */
-   if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd))
+   if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
-   else if (radeon_dp_needs_link_train(radeon_connector))
+   } else if (radeon_dp_needs_link_train(radeon_connector)) {
+   /* force a mode on to trigger dp link training */
+   connector->dpms = DRM_MODE_DPMS_OFF;
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
+   }
connector->dpms = saved_dpms;
}
 }
-- 
1.7.10.4



[PATCH] drm/radeon: fix bo creation retry path

2012-07-12 Thread j.gli...@gmail.com
From: Jerome Glisse 

The retry label was at the wrong place in the function, leading
to a memory leak.
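
In other words (sketch only, the helper names are made up, this is
not the function below): the one-time allocation has to live above
the retry label, and only the placement/validation step may be
retried, otherwise every retry leaks the object allocated on the
previous pass:

	bo = kzalloc(sizeof(*bo), GFP_KERNEL);	/* allocate exactly once */
	if (bo == NULL)
		return -ENOMEM;
	/* ... one-time init of bo ... */
retry:
	setup_placement(bo, domain);		/* hypothetical helper */
	r = validate_placement(bo);		/* hypothetical helper */
	if (r == -ENOMEM && domain != fallback_domain) {
		domain = fallback_domain;
		goto retry;	/* safe: nothing allocated since the label */
	}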

Cc: 
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_object.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index 6ecb200..f71e472 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -138,7 +138,6 @@ int radeon_bo_create(struct radeon_device *rdev,
acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size,
   sizeof(struct radeon_bo));

-retry:
bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
@@ -152,6 +151,8 @@ retry:
bo->surface_reg = -1;
INIT_LIST_HEAD(&bo->list);
INIT_LIST_HEAD(&bo->va);
+
+retry:
radeon_ttm_placement_from_domain(bo, domain);
/* Kernel allocation are uninterruptible */
down_read(&rdev->pm.mclk_lock);
-- 
1.7.10.4



[PATCH] drm/radeon: add an exclusive lock for GPU reset v2

2012-07-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

GPU reset needs to be exclusive, with only one happening at a time.
For this, add a rw semaphore so that any path that triggers GPU
activity has to take the semaphore as a reader, thus allowing
concurrency.

The GPU reset path takes the semaphore as a writer, ensuring that
no concurrent reset takes place.

v2: init rw semaphore
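
Roughly, the locking pattern this introduces looks like the following
(sketch only, simplified; submit_work() stands in for the cs ioctl,
gem ioctls, etc., and the reset helper name is made up):

	/* readers: paths that trigger GPU activity, may run concurrently */
	down_read(&rdev->exclusive_lock);
	r = submit_work(rdev);
	up_read(&rdev->exclusive_lock);

	/* writer: the GPU reset path, excludes readers and other resets */
	down_write(&rdev->exclusive_lock);
	r = do_gpu_reset(rdev);
	up_write(&rdev->exclusive_lock);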

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|1 +
 drivers/gpu/drm/radeon/radeon_cs.c |5 +
 drivers/gpu/drm/radeon/radeon_device.c |3 +++
 drivers/gpu/drm/radeon/radeon_gem.c|8 
 4 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 77b4519b..29d6986 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1446,6 +1446,7 @@ struct radeon_device {
struct device   *dev;
struct drm_device   *ddev;
struct pci_dev  *pdev;
+   struct rw_semaphore exclusive_lock;
/* ASIC */
union radeon_asic_configconfig;
enum radeon_family  family;
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index f1b7527..7ee6491 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -499,7 +499,9 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
struct radeon_cs_parser parser;
int r;

+   down_read(&rdev->exclusive_lock);
if (!rdev->accel_working) {
+   up_read(&rdev->exclusive_lock);
return -EBUSY;
}
/* initialize parser */
@@ -512,6 +514,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
if (r) {
DRM_ERROR("Failed to initialize parser !\n");
radeon_cs_parser_fini(&parser, r);
+   up_read(&rdev->exclusive_lock);
r = radeon_cs_handle_lockup(rdev, r);
return r;
}
@@ -520,6 +523,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
if (r != -ERESTARTSYS)
DRM_ERROR("Failed to parse relocation %d!\n", r);
radeon_cs_parser_fini(&parser, r);
+   up_read(&rdev->exclusive_lock);
r = radeon_cs_handle_lockup(rdev, r);
return r;
}
@@ -533,6 +537,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
}
 out:
radeon_cs_parser_fini(&parser, r);
+   up_read(&rdev->exclusive_lock);
r = radeon_cs_handle_lockup(rdev, r);
return r;
 }
diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index f654ba8..254fdb4 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -734,6 +734,7 @@ int radeon_device_init(struct radeon_device *rdev,
mutex_init(&rdev->gem.mutex);
mutex_init(&rdev->pm.mutex);
init_rwsem(&rdev->pm.mclk_lock);
+   init_rwsem(&rdev->exclusive_lock);
init_waitqueue_head(&rdev->irq.vblank_queue);
init_waitqueue_head(&rdev->irq.idle_queue);
r = radeon_gem_init(rdev);
@@ -988,6 +989,7 @@ int radeon_gpu_reset(struct radeon_device *rdev)
int r;
int resched;

+   down_write(&rdev->exclusive_lock);
radeon_save_bios_scratch_regs(rdev);
/* block TTM */
resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
@@ -1007,6 +1009,7 @@ int radeon_gpu_reset(struct radeon_device *rdev)
dev_info(rdev->dev, "GPU reset failed\n");
}

+   up_write(&rdev->exclusive_lock);
return r;
 }

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c 
b/drivers/gpu/drm/radeon/radeon_gem.c
index d9b0809..b0be9c4 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -215,12 +215,14 @@ int radeon_gem_create_ioctl(struct drm_device *dev, void 
*data,
uint32_t handle;
int r;

+   down_read(&rdev->exclusive_lock);
/* create a gem object to contain this object in */
args->size = roundup(args->size, PAGE_SIZE);
r = radeon_gem_object_create(rdev, args->size, args->alignment,
args->initial_domain, false,
false, &gobj);
if (r) {
+   up_read(&rdev->exclusive_lock);
r = radeon_gem_handle_lockup(rdev, r);
return r;
}
@@ -228,10 +230,12 @@ int radeon_gem_create_ioctl(struct drm_device *dev, void 
*data,
/* drop reference from allocate - handle holds it now */
drm_gem_object_unreference_unlocked(gobj);
if (r) {
+   up_read(&rdev->exclusive_lock);
r = radeon_gem_handle_lockup(rdev, r);
r

[PATCH] drm/radeon: fix rare segfault

2012-07-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

In the gem idle/busy ioctls the radeon object was dereferenced
after drm_gem_object_unreference_unlocked(), which, in case the
object had been destroyed, led to use of a possibly freed pointer
with possibly wrong data.
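
The ordering issue, boiled down (sketch, not the hunks below):

	/* before: robj may already be freed by the unreference */
	drm_gem_object_unreference_unlocked(gobj);
	r = radeon_gem_handle_lockup(robj->rdev, r);	/* use after free */

	/* after: grab what we need from the device before dropping the ref */
	struct radeon_device *rdev = dev->dev_private;
	...
	drm_gem_object_unreference_unlocked(gobj);
	r = radeon_gem_handle_lockup(rdev, r);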

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_gem.c |   10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c 
b/drivers/gpu/drm/radeon/radeon_gem.c
index 74176c5..c8838fc 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -325,6 +325,7 @@ int radeon_gem_mmap_ioctl(struct drm_device *dev, void 
*data,
 int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
  struct drm_file *filp)
 {
+   struct radeon_device *rdev = dev->dev_private;
struct drm_radeon_gem_busy *args = data;
struct drm_gem_object *gobj;
struct radeon_bo *robj;
@@ -350,13 +351,14 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void 
*data,
break;
}
drm_gem_object_unreference_unlocked(gobj);
-   r = radeon_gem_handle_lockup(robj->rdev, r);
+   r = radeon_gem_handle_lockup(rdev, r);
return r;
 }

 int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
  struct drm_file *filp)
 {
+   struct radeon_device *rdev = dev->dev_private;
struct drm_radeon_gem_wait_idle *args = data;
struct drm_gem_object *gobj;
struct radeon_bo *robj;
@@ -369,10 +371,10 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, 
void *data,
robj = gem_to_radeon_bo(gobj);
r = radeon_bo_wait(robj, NULL, false);
/* callback hw specific functions if any */
-   if (robj->rdev->asic->ioctl_wait_idle)
-   robj->rdev->asic->ioctl_wait_idle(robj->rdev, robj);
+   if (rdev->asic->ioctl_wait_idle)
+   robj->rdev->asic->ioctl_wait_idle(rdev, robj);
drm_gem_object_unreference_unlocked(gobj);
-   r = radeon_gem_handle_lockup(robj->rdev, r);
+   r = radeon_gem_handle_lockup(rdev, r);
return r;
 }

-- 
1.7.10.2



[PATCH] drm/radeon: add an exclusive lock for GPU reset

2012-07-02 Thread j.gli...@gmail.com
From: Jerome Glisse 

GPU reset needs to be exclusive, with only one happening at a time.
For this, add a rw semaphore so that any path that triggers GPU
activity has to take the semaphore as a reader, thus allowing
concurrency.

The GPU reset path takes the semaphore as a writer, ensuring that
no concurrent reset takes place.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|1 +
 drivers/gpu/drm/radeon/radeon_cs.c |5 +
 drivers/gpu/drm/radeon/radeon_device.c |2 ++
 drivers/gpu/drm/radeon/radeon_gem.c|7 +++
 4 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 77b4519b..29d6986 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1446,6 +1446,7 @@ struct radeon_device {
struct device   *dev;
struct drm_device   *ddev;
struct pci_dev  *pdev;
+   struct rw_semaphore exclusive_lock;
/* ASIC */
union radeon_asic_configconfig;
enum radeon_family  family;
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index f1b7527..7ee6491 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -499,7 +499,9 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
struct radeon_cs_parser parser;
int r;

+   down_read(&rdev->exclusive_lock);
if (!rdev->accel_working) {
+   up_read(&rdev->exclusive_lock);
return -EBUSY;
}
/* initialize parser */
@@ -512,6 +514,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
if (r) {
DRM_ERROR("Failed to initialize parser !\n");
radeon_cs_parser_fini(&parser, r);
+   up_read(&rdev->exclusive_lock);
r = radeon_cs_handle_lockup(rdev, r);
return r;
}
@@ -520,6 +523,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
if (r != -ERESTARTSYS)
DRM_ERROR("Failed to parse relocation %d!\n", r);
radeon_cs_parser_fini(&parser, r);
+   up_read(&rdev->exclusive_lock);
r = radeon_cs_handle_lockup(rdev, r);
return r;
}
@@ -533,6 +537,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
}
 out:
radeon_cs_parser_fini(&parser, r);
+   up_read(&rdev->exclusive_lock);
r = radeon_cs_handle_lockup(rdev, r);
return r;
 }
diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index f654ba8..c8fdb40 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -988,6 +988,7 @@ int radeon_gpu_reset(struct radeon_device *rdev)
int r;
int resched;

+   down_write(&rdev->exclusive_lock);
radeon_save_bios_scratch_regs(rdev);
/* block TTM */
resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
@@ -1007,6 +1008,7 @@ int radeon_gpu_reset(struct radeon_device *rdev)
dev_info(rdev->dev, "GPU reset failed\n");
}

+   up_write(&rdev->exclusive_lock);
return r;
 }

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c 
b/drivers/gpu/drm/radeon/radeon_gem.c
index d9b0809..f99db63 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -215,12 +215,14 @@ int radeon_gem_create_ioctl(struct drm_device *dev, void 
*data,
uint32_t handle;
int r;

+   down_read(&rdev->exclusive_lock);
/* create a gem object to contain this object in */
args->size = roundup(args->size, PAGE_SIZE);
r = radeon_gem_object_create(rdev, args->size, args->alignment,
args->initial_domain, false,
false, &gobj);
if (r) {
+   up_read(&rdev->exclusive_lock);
r = radeon_gem_handle_lockup(rdev, r);
return r;
}
@@ -228,10 +230,12 @@ int radeon_gem_create_ioctl(struct drm_device *dev, void 
*data,
/* drop reference from allocate - handle holds it now */
drm_gem_object_unreference_unlocked(gobj);
if (r) {
+   up_read(&rdev->exclusive_lock);
r = radeon_gem_handle_lockup(rdev, r);
return r;
}
args->handle = handle;
+   up_read(&rdev->exclusive_lock);
return 0;
 }

@@ -240,6 +244,7 @@ int radeon_gem_set_domain_ioctl(struct drm_device *dev, 
void *data,
 {
/* transition the BO to a domain -
 * just validate the BO into a certain domain */
+   struct radeon_device *rdev = dev->dev_private;
struct drm_radeon_gem_set_dom

[PATCH] drm/radeon: disable any GPU activity after unrecovered lockup v5

2012-06-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

After an unrecovered GPU lockup, avoid any GPU activity so that
things like kernel segfaults cannot happen in any of the paths
that assume the hw is working.

The segfault is due to the PCIE vram gart table being unmapped
after suspend in the GPU reset path. To prevent the segfault, and
to avoid further GPU activity if we are unsuccessful at resetting
the GPU, we use the accel_working boolean to turn ttm activities
into noops. It does not impact the module load path because in
that path ttm has an empty schedule queue and accel_working will
be set to true as soon as the gart table is in a valid state.
Because ttm might have work queued, it is better to use
accel_working than to disable the radeon_bo ioctl.

To trigger the segfault, launch a program that repeatedly creates
bos in ttm and let it run in the background, then trigger a gpu
lockup from another process.

This patch also forces video mode restoring on r1xx,r2xx,r3xx,r4xx,
r5xx,rs4xx,rs6xx GPUs even if the GPU reset fails. When GPU reset
fails it is very likely (so far I have never seen it not work) that
the modesetting part of the GPU is still alive. So we have a chance
to get a kernel backtrace or other debugging information on the
screen if we always restore the video mode.

v2: fix spelling error and disable accel before suspend and reenable
it after pcie gart initialization to be even more cautious about
possible segfaults. Improve commit message.
v3: Improve commit message to describe the video mode restoring no
matter what.
v4: Avoid issue after successful GPU lockup recovery. Don't do a noop
ttm move, instead report an error if the move needs a bind or
unbind, or fall back to memcpy. Don't restrict the new bo domain,
instead refuse to create a new bo if the gpu reset failed. Disable
accel_working in the gart vram table unpin so we don't change the
behavior of the suspend path.
v5: Avoid having set domain also trigger a noop bind/unbind, instead
force it to wait for the GPU reset to go through, or return
failure if the gpu reset fails.
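
As a rough sketch of what the v4/v5 behaviour means for the bo move
path (this is not the actual hunk, which is further down; the helper
predicate is made up and the exact error code is an assumption):

	if (!rdev->accel_working) {
		/* GPU is in a broken state: refuse anything that would
		 * need a GPU-side bind/unbind, otherwise fall back to a
		 * CPU copy so ttm can still make progress */
		if (move_needs_bind_or_unbind(old_mem, new_mem))
			return -EINVAL;
		return ttm_bo_move_memcpy(bo, evict, no_wait_reserve,
					  no_wait_gpu, new_mem);
	}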

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c |2 +-
 drivers/gpu/drm/radeon/ni.c|2 +-
 drivers/gpu/drm/radeon/r300.c  |2 +-
 drivers/gpu/drm/radeon/r520.c  |2 +-
 drivers/gpu/drm/radeon/r600.c  |2 +-
 drivers/gpu/drm/radeon/radeon_device.c |8 +---
 drivers/gpu/drm/radeon/radeon_gart.c   |1 +
 drivers/gpu/drm/radeon/radeon_gem.c|   33 
 drivers/gpu/drm/radeon/radeon_ttm.c|   23 ++
 drivers/gpu/drm/radeon/rs400.c |2 +-
 drivers/gpu/drm/radeon/rs600.c |2 +-
 drivers/gpu/drm/radeon/rs690.c |2 +-
 drivers/gpu/drm/radeon/rv515.c |2 +-
 drivers/gpu/drm/radeon/rv770.c |2 +-
 drivers/gpu/drm/radeon/si.c|2 +-
 15 files changed, 73 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index c3073f7..5f154e3 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -3071,6 +3071,7 @@ static int evergreen_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;
evergreen_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -3145,7 +3146,6 @@ int evergreen_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = evergreen_startup(rdev);
if (r) {
DRM_ERROR("evergreen startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index dc2e34d..486faa8 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1245,6 +1245,7 @@ static int cayman_startup(struct radeon_device *rdev)
r = cayman_pcie_gart_enable(rdev);
if (r)
return r;
+   rdev->accel_working = true;
cayman_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -1337,7 +1338,6 @@ int cayman_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = cayman_startup(rdev);
if (r) {
DRM_ERROR("cayman startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 97722a3..206ac1f 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -1358,6 +1358,7 @@ static int r300_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;

if (rdev->family == CHIP_R300 ||
rdev->family == CHIP_R350 ||
@@ -1426,7 +1427,6 @@ int r300_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeo

[PATCH] drm/radeon: disable any GPU activity after unrecovered lockup v4

2012-06-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

After an unrecovered GPU lockup, avoid any GPU activity so that
things like kernel segfaults cannot happen in any of the paths
that assume the hw is working.

The segfault is due to the PCIE vram gart table being unmapped
after suspend in the GPU reset path. To prevent the segfault, and
to avoid further GPU activity if we are unsuccessful at resetting
the GPU, we use the accel_working boolean to turn ttm activities
into noops. It does not impact the module load path because in
that path ttm has an empty schedule queue and accel_working will
be set to true as soon as the gart table is in a valid state.
Because ttm might have work queued, it is better to use
accel_working than to disable the radeon_bo ioctl.

To trigger the segfault, launch a program that repeatedly creates
bos in ttm and let it run in the background, then trigger a gpu
lockup from another process.

This patch also forces video mode restoring on r1xx,r2xx,r3xx,r4xx,
r5xx,rs4xx,rs6xx GPUs even if the GPU reset fails. When GPU reset
fails it is very likely (so far I have never seen it not work) that
the modesetting part of the GPU is still alive. So we have a chance
to get a kernel backtrace or other debugging information on the
screen if we always restore the video mode.

v2: fix spelling error and disable accel before suspend and reenable
it after pcie gart initialization to be even more cautious about
possible segfaults. Improve commit message.
v3: Improve commit message to describe the video mode restoring no
matter what.
v4: Avoid issue after successful GPU lockup recovery. Don't do a noop
ttm move, instead report an error if the move needs a bind or
unbind, or fall back to memcpy. Don't restrict the new bo domain,
instead refuse to create a new bo if the gpu reset failed. Disable
accel_working in the gart vram table unpin so we don't change the
behavior of the suspend path.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c |2 +-
 drivers/gpu/drm/radeon/ni.c|2 +-
 drivers/gpu/drm/radeon/r300.c  |2 +-
 drivers/gpu/drm/radeon/r520.c  |2 +-
 drivers/gpu/drm/radeon/r600.c  |2 +-
 drivers/gpu/drm/radeon/radeon_device.c |8 +---
 drivers/gpu/drm/radeon/radeon_gart.c   |1 +
 drivers/gpu/drm/radeon/radeon_gem.c|   13 +
 drivers/gpu/drm/radeon/radeon_ttm.c|   23 +++
 drivers/gpu/drm/radeon/rs400.c |2 +-
 drivers/gpu/drm/radeon/rs600.c |2 +-
 drivers/gpu/drm/radeon/rs690.c |2 +-
 drivers/gpu/drm/radeon/rv515.c |2 +-
 drivers/gpu/drm/radeon/rv770.c |2 +-
 drivers/gpu/drm/radeon/si.c|2 +-
 15 files changed, 53 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index c3073f7..5f154e3 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -3071,6 +3071,7 @@ static int evergreen_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;
evergreen_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -3145,7 +3146,6 @@ int evergreen_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = evergreen_startup(rdev);
if (r) {
DRM_ERROR("evergreen startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index dc2e34d..486faa8 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1245,6 +1245,7 @@ static int cayman_startup(struct radeon_device *rdev)
r = cayman_pcie_gart_enable(rdev);
if (r)
return r;
+   rdev->accel_working = true;
cayman_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -1337,7 +1338,6 @@ int cayman_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = cayman_startup(rdev);
if (r) {
DRM_ERROR("cayman startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 97722a3..206ac1f 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -1358,6 +1358,7 @@ static int r300_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;

if (rdev->family == CHIP_R300 ||
rdev->family == CHIP_R350 ||
@@ -1426,7 +1427,6 @@ int r300_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);

-   rdev->accel_working = true;
r = r300_startup(rdev);
if (r) {
rdev->accel_working = false;
diff --git a/driver

[PATCH] drm/radeon: disable any GPU activity after unrecovered lockup v3

2012-06-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

After an unrecovered GPU lockup, avoid any GPU activity so that
things like kernel segfaults cannot happen in any of the paths
that assume the hw is working.

The segfault is due to the PCIE vram gart table being unmapped
after suspend in the GPU reset path. To prevent the segfault, and
to avoid further GPU activity if we are unsuccessful at resetting
the GPU, we use the accel_working boolean to turn ttm activities
into noops. It does not impact the module load path because in
that path ttm has an empty schedule queue and accel_working will
be set to true as soon as the gart table is in a valid state.
Because ttm might have work queued, it is better to use
accel_working than to disable the radeon_bo ioctl.

To trigger the segfault, launch a program that repeatedly creates
bos in ttm and let it run in the background, then trigger a gpu
lockup from another process.

This patch also forces video mode restoring on r1xx,r2xx,r3xx,r4xx,
r5xx,rs4xx,rs6xx GPUs even if the GPU reset fails. When GPU reset
fails it is very likely (so far I have never seen it not work) that
the modesetting part of the GPU is still alive. So we have a chance
to get a kernel backtrace or other debugging information on the
screen if we always restore the video mode.

v2: fix spelling error and disable accel before suspend and reenable
it after pcie gart initialization to be even more cautious about
possible segfaults. Improve commit message.
v3: Improve commit message to describe the video mode restoring no
matter what.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c |2 +-
 drivers/gpu/drm/radeon/ni.c|2 +-
 drivers/gpu/drm/radeon/r300.c  |2 +-
 drivers/gpu/drm/radeon/r520.c  |2 +-
 drivers/gpu/drm/radeon/r600.c  |2 +-
 drivers/gpu/drm/radeon/radeon_device.c |9 ---
 drivers/gpu/drm/radeon/radeon_object.c |7 ++
 drivers/gpu/drm/radeon/radeon_ttm.c|   41 
 drivers/gpu/drm/radeon/rs400.c |2 +-
 drivers/gpu/drm/radeon/rs600.c |2 +-
 drivers/gpu/drm/radeon/rs690.c |2 +-
 drivers/gpu/drm/radeon/rv515.c |2 +-
 drivers/gpu/drm/radeon/rv770.c |2 +-
 drivers/gpu/drm/radeon/si.c|2 +-
 drivers/gpu/drm/ttm/ttm_tt.c   |1 +
 15 files changed, 66 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index c3073f7..5f154e3 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -3071,6 +3071,7 @@ static int evergreen_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;
evergreen_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -3145,7 +3146,6 @@ int evergreen_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = evergreen_startup(rdev);
if (r) {
DRM_ERROR("evergreen startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index dc2e34d..486faa8 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1245,6 +1245,7 @@ static int cayman_startup(struct radeon_device *rdev)
r = cayman_pcie_gart_enable(rdev);
if (r)
return r;
+   rdev->accel_working = true;
cayman_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -1337,7 +1338,6 @@ int cayman_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = cayman_startup(rdev);
if (r) {
DRM_ERROR("cayman startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 97722a3..206ac1f 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -1358,6 +1358,7 @@ static int r300_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;

if (rdev->family == CHIP_R300 ||
rdev->family == CHIP_R350 ||
@@ -1426,7 +1427,6 @@ int r300_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);

-   rdev->accel_working = true;
r = r300_startup(rdev);
if (r) {
rdev->accel_working = false;
diff --git a/drivers/gpu/drm/radeon/r520.c b/drivers/gpu/drm/radeon/r520.c
index b5cf837..6409eb0 100644
--- a/drivers/gpu/drm/radeon/r520.c
+++ b/drivers/gpu/drm/radeon/r520.c
@@ -181,6 +181,7 @@ static int r520_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;

/* allocate w

[PATCH] drm/radeon: improve GPU lockup debugging info on r6xx/r7xx/r8xx/r9xx

2012-06-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

Print various CP registers that contain valuable information
regarding GPU lockups.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c  |   16 
 drivers/gpu/drm/radeon/evergreend.h |4 
 drivers/gpu/drm/radeon/ni.c |   16 
 drivers/gpu/drm/radeon/nid.h|4 
 drivers/gpu/drm/radeon/r600.c   |   16 
 drivers/gpu/drm/radeon/r600d.h  |3 +++
 6 files changed, 59 insertions(+)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index 7fb3d2e..c3073f7 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -2188,6 +2188,14 @@ static int evergreen_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(GRBM_STATUS_SE1));
dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
RREG32(SRBM_STATUS));
+   dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT1));
+   dev_info(rdev->dev, "  R_008678_CP_STALLED_STAT2 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT2));
+   dev_info(rdev->dev, "  R_00867C_CP_BUSY_STAT = 0x%08X\n",
+   RREG32(CP_BUSY_STAT));
+   dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
+   RREG32(CP_STAT));
evergreen_mc_stop(rdev, &save);
if (evergreen_mc_wait_for_idle(rdev)) {
dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
@@ -2225,6 +2233,14 @@ static int evergreen_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(GRBM_STATUS_SE1));
dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
RREG32(SRBM_STATUS));
+   dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT1));
+   dev_info(rdev->dev, "  R_008678_CP_STALLED_STAT2 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT2));
+   dev_info(rdev->dev, "  R_00867C_CP_BUSY_STAT = 0x%08X\n",
+   RREG32(CP_BUSY_STAT));
+   dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
+   RREG32(CP_STAT));
evergreen_mc_resume(rdev, &save);
return 0;
 }
diff --git a/drivers/gpu/drm/radeon/evergreend.h 
b/drivers/gpu/drm/radeon/evergreend.h
index b50b15c..d3bd098 100644
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -88,6 +88,10 @@
 #defineCONFIG_MEMSIZE  0x5428

 #defineCP_COHER_BASE   0x85F8
+#defineCP_STALLED_STAT10x8674
+#defineCP_STALLED_STAT20x8678
+#defineCP_BUSY_STAT0x867C
+#defineCP_STAT 0x8680
 #define CP_ME_CNTL 0x86D8
 #defineCP_ME_HALT  (1 << 
28)
 #defineCP_PFP_HALT (1 << 
26)
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index b7bf18e..dc2e34d 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1132,6 +1132,14 @@ static int cayman_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(GRBM_STATUS_SE1));
dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
RREG32(SRBM_STATUS));
+   dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT1));
+   dev_info(rdev->dev, "  R_008678_CP_STALLED_STAT2 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT2));
+   dev_info(rdev->dev, "  R_00867C_CP_BUSY_STAT = 0x%08X\n",
+   RREG32(CP_BUSY_STAT));
+   dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
+   RREG32(CP_STAT));
dev_info(rdev->dev, "  VM_CONTEXT0_PROTECTION_FAULT_ADDR   0x%08X\n",
 RREG32(0x14F8));
dev_info(rdev->dev, "  VM_CONTEXT0_PROTECTION_FAULT_STATUS 0x%08X\n",
@@ -1180,6 +1188,14 @@ static int cayman_gpu_soft_reset(struct radeon_device 
*rdev)
RREG32(GRBM_STATUS_SE1));
dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
RREG32(SRBM_STATUS));
+   dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT1));
+   dev_info(rdev->dev, "  R_008678_CP_STALLED_STAT2 = 0x%08X\n",
+   RREG32(CP_STALLED_STAT2));
+   dev_info(rdev->dev, "  R_00867C_CP_BUSY_STAT = 0x%08X\n",
+   RREG32(CP_BUSY_STAT));
+   dev_info(rdev->dev, "  R_008680_CP_STAT  = 0x%08X\n",
+   RREG32(CP_STAT));
evergreen_mc_resume(rdev, &save);
return 0;
 }
diff --git a/drivers/gpu/drm/radeon/nid.h b/drivers/gpu/drm/radeon/nid.h
index a0b9806..870db34 100644
--- a/drivers/gpu/drm/radeon/nid.h
+++ b/drivers/gpu/drm/radeo

[PATCH] drm/radeon: disable any GPU activity after unrecovered lockup v2

2012-06-27 Thread j.gli...@gmail.com
From: Jerome Glisse 

After an unrecovered GPU lockup, avoid any GPU activity so that
things like kernel segfaults cannot happen in any of the paths
that assume the hw is working.

The segfault is due to the PCIE vram gart table being unmapped
after suspend in the GPU reset path. To prevent the segfault, and
to avoid further GPU activity if we are unsuccessful at resetting
the GPU, we use the accel_working boolean to turn ttm activities
into noops. It does not impact the module load path because in
that path ttm has an empty schedule queue and accel_working will
be set to true as soon as the gart table is in a valid state.
Because ttm might have work queued, it is better to use
accel_working than to disable the radeon_bo ioctl.

To trigger the segfault, launch a program that repeatedly creates
bos in ttm and let it run in the background, then trigger a gpu
lockup from another process.

v2: fix spelling error and disable accel before suspend and reenable
it after pcie gart initialization to be even more cautious about
possible segfaults. Improve commit message.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen.c |2 +-
 drivers/gpu/drm/radeon/ni.c|2 +-
 drivers/gpu/drm/radeon/r300.c  |2 +-
 drivers/gpu/drm/radeon/r520.c  |2 +-
 drivers/gpu/drm/radeon/r600.c  |2 +-
 drivers/gpu/drm/radeon/radeon_device.c |9 ---
 drivers/gpu/drm/radeon/radeon_object.c |7 ++
 drivers/gpu/drm/radeon/radeon_ttm.c|   41 
 drivers/gpu/drm/radeon/rs400.c |2 +-
 drivers/gpu/drm/radeon/rs600.c |2 +-
 drivers/gpu/drm/radeon/rs690.c |2 +-
 drivers/gpu/drm/radeon/rv515.c |2 +-
 drivers/gpu/drm/radeon/rv770.c |2 +-
 drivers/gpu/drm/radeon/si.c|2 +-
 drivers/gpu/drm/ttm/ttm_tt.c   |1 +
 15 files changed, 66 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index 7fb3d2e..2a4be53 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -3055,6 +3055,7 @@ static int evergreen_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;
evergreen_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -3129,7 +3130,6 @@ int evergreen_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = evergreen_startup(rdev);
if (r) {
DRM_ERROR("evergreen startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index b7bf18e..18f87ca 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1229,6 +1229,7 @@ static int cayman_startup(struct radeon_device *rdev)
r = cayman_pcie_gart_enable(rdev);
if (r)
return r;
+   rdev->accel_working = true;
cayman_gpu_init(rdev);

r = evergreen_blit_init(rdev);
@@ -1321,7 +1322,6 @@ int cayman_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);

-   rdev->accel_working = true;
r = cayman_startup(rdev);
if (r) {
DRM_ERROR("cayman startup failed on resume\n");
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 97722a3..206ac1f 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -1358,6 +1358,7 @@ static int r300_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;

if (rdev->family == CHIP_R300 ||
rdev->family == CHIP_R350 ||
@@ -1426,7 +1427,6 @@ int r300_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);

-   rdev->accel_working = true;
r = r300_startup(rdev);
if (r) {
rdev->accel_working = false;
diff --git a/drivers/gpu/drm/radeon/r520.c b/drivers/gpu/drm/radeon/r520.c
index b5cf837..6409eb0 100644
--- a/drivers/gpu/drm/radeon/r520.c
+++ b/drivers/gpu/drm/radeon/r520.c
@@ -181,6 +181,7 @@ static int r520_startup(struct radeon_device *rdev)
if (r)
return r;
}
+   rdev->accel_working = true;

/* allocate wb buffer */
r = radeon_wb_init(rdev);
@@ -236,7 +237,6 @@ int r520_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);

-   rdev->accel_working = true;
r = r520_startup(rdev);
if (r) {
rdev->accel_working = false;
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 78c0d0d..692b48b 100644
--- a/drivers/gpu/drm/radeon/r

[PATCH] drm/radeon: disable any GPU activity after unrecovered lockup

2012-06-26 Thread j.gli...@gmail.com
From: Jerome Glisse 

After an unrecovered GPU lockup, avoid any GPU activity so that
things like kernel segfaults cannot happen in any of the paths
that assume the hw is working.

cc: stable at vger.kernel.org
Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon_device.c |9 ---
 drivers/gpu/drm/radeon/radeon_object.c |7 ++
 drivers/gpu/drm/radeon/radeon_ttm.c|   41 
 drivers/gpu/drm/ttm/ttm_tt.c   |1 +
 4 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index 066c98b..653f352 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -993,16 +993,19 @@ int radeon_gpu_reset(struct radeon_device *rdev)
/* block TTM */
resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
radeon_suspend(rdev);
+   rdev->accel_working = false;

r = radeon_asic_reset(rdev);
if (!r) {
dev_info(rdev->dev, "GPU reset succeed\n");
radeon_resume(rdev);
-   radeon_restore_bios_scratch_regs(rdev);
-   drm_helper_resume_force_mode(rdev->ddev);
-   ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
}

+   /* no matter what restore video mode */
+   radeon_restore_bios_scratch_regs(rdev);
+   drm_helper_resume_force_mode(rdev->ddev);
+   ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
+
if (r) {
/* bad news, how to tell it to userspace ? */
dev_info(rdev->dev, "GPU reset failed\n");
diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index 830f1a7..27e8e53 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -89,6 +89,13 @@ void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, 
u32 domain)
rbo->placement.lpfn = 0;
rbo->placement.placement = rbo->placements;
rbo->placement.busy_placement = rbo->placements;
+   if (!rbo->rdev->accel_working) {
+   /* for new bo to system ram when GPU is not working */
+   rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
+   rbo->placement.num_placement = c;
+   rbo->placement.num_busy_placement = c;
+   return;
+   }
if (domain & RADEON_GEM_DOMAIN_VRAM)
rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_VRAM;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index c94a225..0994d1e 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -215,6 +215,25 @@ static void radeon_move_null(struct ttm_buffer_object *bo,
new_mem->mm_node = NULL;
 }

+static void radeon_move_noop(struct ttm_buffer_object *bo,
+struct ttm_mem_reg *new_mem)
+{
+   struct ttm_bo_device *bdev = bo->bdev;
+   struct ttm_mem_type_manager *man = &bdev->man[new_mem->mem_type];
+   struct ttm_mem_reg *old_mem = &bo->mem;
+   struct ttm_mem_reg old_copy = *old_mem;
+
+   *old_mem = *new_mem;
+   new_mem->mm_node = NULL;
+
+   if ((man->flags & TTM_MEMTYPE_FLAG_FIXED) && (bo->ttm != NULL)) {
+   ttm_tt_destroy(bo->ttm);
+   bo->ttm = NULL;
+   }
+
+   ttm_bo_mem_put(bo, &old_copy);
+}
+
 static int radeon_move_blit(struct ttm_buffer_object *bo,
bool evict, int no_wait_reserve, bool no_wait_gpu,
struct ttm_mem_reg *new_mem,
@@ -399,6 +418,14 @@ static int radeon_bo_move(struct ttm_buffer_object *bo,
radeon_move_null(bo, new_mem);
return 0;
}
+   if (!rdev->accel_working) {
+   /* when accel is not working GPU is in broken state just
+* do nothing for any ttm operation to avoid making the
+* situation worst than it's
+*/
+   radeon_move_noop(bo, new_mem);
+   return 0;
+   }
if ((old_mem->mem_type == TTM_PL_TT &&
 new_mem->mem_type == TTM_PL_SYSTEM) ||
(old_mem->mem_type == TTM_PL_SYSTEM &&
@@ -545,6 +572,13 @@ static int radeon_ttm_backend_bind(struct ttm_tt *ttm,
WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
 ttm->num_pages, bo_mem, ttm);
}
+   if (!gtt->rdev->accel_working) {
+   /* when accel is not working GPU is in broken state just
+* do nothing for any ttm operation to avoid making the
+* situation worst than it's
+*/
+   return 0;
+   }
r = radeon_gart_bind(gtt->rdev, gtt->offset,
 ttm->num_pages, t

[PATCH] drm/radeon: fix tiling and command stream checking on evergreen v3

2012-06-09 Thread j.gli...@gmail.com
From: Jerome Glisse 

Fix a regression since the introduction of command stream checking
on evergreen (thread referenced below). The issue is caused by the
ddx allocating bos with the formula width*height*bpp while
programming the GPU command stream with ALIGN(height, 8). In some
cases (where page alignment does not hide the extra size the bo
should have according to the height alignment) the kernel will
reject the command stream.

This patch reprograms the command stream to slice - 1 (slice is a
value derived from height), which avoids rejecting the command
stream while keeping the value of command stream checking from a
security point of view.

This patch also fixes the wrong computation of layer size for 2D
tiled surfaces, which should fix issues when 2D color tiling is
enabled. It also bumps the radeon KMS_DRIVER_MINOR so userspace
can know whether it is on a fixed kernel or not.

https://lkml.org/lkml/2012/6/3/80
https://bugs.freedesktop.org/show_bug.cgi?id=50892
https://bugs.freedesktop.org/show_bug.cgi?id=50857

!!! STABLE needs a custom version of this patch for 3.4 !!!

v2: actually bump the minor version and add a comment about stable
v3: do compute the height the ddx was trying to use
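
For reference, the corrected 2D-tiled layer size in the hunk below is
computed from whole macro tiles rather than directly from nbx * nby
(this is just a restatement of the code, not an independent
derivation):

	mtileb     = (palign / 8) * (halign / 8) * tileb;  /* bytes per macro tile   */
	mtile_pr   = nbx / palign;                         /* macro tiles per row    */
	mtile_ps   = (mtile_pr * nby) / halign;            /* macro tiles per slice  */
	layer_size = mtile_ps * mtileb * slice_pt;

where the old code used nbx * nby * bpe * slice_pt.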

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/evergreen_cs.c |   50 ++---
 drivers/gpu/drm/radeon/radeon_drv.c   |3 +-
 2 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c 
b/drivers/gpu/drm/radeon/evergreen_cs.c
index 4e7dd2b..29c43c6 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -52,6 +52,7 @@ struct evergreen_cs_track {
u32 cb_color_view[12];
u32 cb_color_pitch[12];
u32 cb_color_slice[12];
+   u32 cb_color_slice_idx[12];
u32 cb_color_attrib[12];
u32 cb_color_cmask_slice[8];/* unused */
u32 cb_color_fmask_slice[8];/* unused */
@@ -127,12 +128,14 @@ static void evergreen_cs_track_init(struct 
evergreen_cs_track *track)
track->cb_color_info[i] = 0;
track->cb_color_view[i] = 0x;
track->cb_color_pitch[i] = 0;
-   track->cb_color_slice[i] = 0;
+   track->cb_color_slice[i] = 0xfff;
+   track->cb_color_slice_idx[i] = 0;
}
track->cb_target_mask = 0x;
track->cb_shader_mask = 0x;
track->cb_dirty = true;

+   track->db_depth_slice = 0x;
track->db_depth_view = 0xC000;
track->db_depth_size = 0x;
track->db_depth_control = 0x;
@@ -250,10 +253,9 @@ static int evergreen_surface_check_2d(struct 
radeon_cs_parser *p,
 {
struct evergreen_cs_track *track = p->track;
unsigned palign, halign, tileb, slice_pt;
+   unsigned mtile_pr, mtile_ps, mtileb;

tileb = 64 * surf->bpe * surf->nsamples;
-   palign = track->group_size / (8 * surf->bpe * surf->nsamples);
-   palign = MAX(8, palign);
slice_pt = 1;
if (tileb > surf->tsplit) {
slice_pt = tileb / surf->tsplit;
@@ -262,7 +264,10 @@ static int evergreen_surface_check_2d(struct 
radeon_cs_parser *p,
/* macro tile width & height */
palign = (8 * surf->bankw * track->npipes) * surf->mtilea;
halign = (8 * surf->bankh * surf->nbanks) / surf->mtilea;
-   surf->layer_size = surf->nbx * surf->nby * surf->bpe * slice_pt;
+   mtileb = (palign / 8) * (halign / 8) * tileb;;
+   mtile_pr = surf->nbx / palign;
+   mtile_ps = (mtile_pr * surf->nby) / halign;
+   surf->layer_size = mtile_ps * mtileb * slice_pt;
surf->base_align = (palign / 8) * (halign / 8) * tileb;
surf->palign = palign;
surf->halign = halign;
@@ -434,6 +439,39 @@ static int evergreen_cs_track_validate_cb(struct 
radeon_cs_parser *p, unsigned i

offset += surf.layer_size * mslice;
if (offset > radeon_bo_size(track->cb_color_bo[id])) {
+   /* old ddx are broken they allocate bo with w*h*bpp but
+* program slice with ALIGN(h, 8), catch this and patch
+* command stream.
+*/
+   if (!surf.mode) {
+   volatile u32 *ib = p->ib.ptr;
+   unsigned long tmp, nby, bsize, size, min = 0;
+
+   /* find the height the ddx wants */
+   if (surf.nby > 8) {
+   min = surf.nby - 8;
+   }
+   bsize = radeon_bo_size(track->cb_color_bo[id]);
+   tmp = track->cb_color_bo_offset[id] << 8;
+   for (nby = surf.nby; nby > min; nby--) {
+   size = nby * surf.nbx * surf.bpe * 
surf.nsamples;
+   if ((tmp + size * mslice) <

[PATCH 5/5] drm/radeon: restore consistent whitespace & indentation

2012-05-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h  |  528 +-
 drivers/gpu/drm/radeon/radeon_ring.c |3 +-
 2 files changed, 267 insertions(+), 264 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 3fbb469..efc642a 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -132,9 +132,9 @@ extern int radeon_lockup_timeout;
  * Errata workarounds.
  */
 enum radeon_pll_errata {
-   CHIP_ERRATA_R300_CG = 0x0001,
-   CHIP_ERRATA_PLL_DUMMYREADS  = 0x0002,
-   CHIP_ERRATA_PLL_DELAY   = 0x0004
+   CHIP_ERRATA_R300_CG = 0x0001,
+   CHIP_ERRATA_PLL_DUMMYREADS  = 0x0002,
+   CHIP_ERRATA_PLL_DELAY   = 0x0004
 };


@@ -218,17 +218,17 @@ void radeon_dummy_page_fini(struct radeon_device *rdev);
  * Clocks
  */
 struct radeon_clock {
-   struct radeon_pll p1pll;
-   struct radeon_pll p2pll;
-   struct radeon_pll dcpll;
-   struct radeon_pll spll;
-   struct radeon_pll mpll;
+   struct radeon_pll   p1pll;
+   struct radeon_pll   p2pll;
+   struct radeon_pll   dcpll;
+   struct radeon_pll   spll;
+   struct radeon_pll   mpll;
/* 10 Khz units */
-   uint32_t default_mclk;
-   uint32_t default_sclk;
-   uint32_t default_dispclk;
-   uint32_t dp_extclk;
-   uint32_t max_pixel_clock;
+   uint32_tdefault_mclk;
+   uint32_tdefault_sclk;
+   uint32_tdefault_dispclk;
+   uint32_tdp_extclk;
+   uint32_tmax_pixel_clock;
 };

 /*
@@ -296,7 +296,7 @@ unsigned radeon_fence_count_emitted(struct radeon_device 
*rdev, int ring);
  * Tiling registers
  */
 struct radeon_surface_reg {
-   struct radeon_bo *bo;
+   struct radeon_bo*bo;
 };

 #define RADEON_GEM_MAX_SURFACES 8
@@ -305,7 +305,7 @@ struct radeon_surface_reg {
  * TTM.
  */
 struct radeon_mman {
-   struct ttm_bo_global_refbo_global_ref;
+   struct ttm_bo_global_refbo_global_ref;
struct drm_global_reference mem_global_ref;
struct ttm_bo_devicebdev;
boolmem_global_referenced;
@@ -351,12 +351,12 @@ struct radeon_bo {
 #define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, gem_base)

 struct radeon_bo_list {
-   struct ttm_validate_buffer tv;
-   struct radeon_bo*bo;
-   uint64_tgpu_offset;
-   unsignedrdomain;
-   unsignedwdomain;
-   u32 tiling_flags;
+   struct ttm_validate_buffer  tv;
+   struct radeon_bo*bo;
+   uint64_tgpu_offset;
+   unsignedrdomain;
+   unsignedwdomain;
+   u32 tiling_flags;
 };

 /* sub-allocation manager, it has to be protected by another lock.
@@ -522,7 +522,7 @@ struct radeon_mc {
int vram_mtrr;
boolvram_is_ddr;
booligp_sideport_enabled;
-   u64 gtt_base_align;
+   u64 gtt_base_align;
 };

 bool radeon_combios_sideport_present(struct radeon_device *rdev);
@@ -533,7 +533,7 @@ bool radeon_atombios_sideport_present(struct radeon_device 
*rdev);
  */
 struct radeon_scratch {
unsignednum_reg;
-   uint32_treg_base;
+   uint32_treg_base;
boolfree[32];
uint32_treg[32];
 };
@@ -547,55 +547,55 @@ void radeon_scratch_free(struct radeon_device *rdev, 
uint32_t reg);
  */

 struct radeon_unpin_work {
-   struct work_struct work;
-   struct radeon_device *rdev;
-   int crtc_id;
-   struct radeon_fence *fence;
-   struct drm_pending_vblank_event *event;
-   struct radeon_bo *old_rbo;
-   u64 new_crtc_base;
+   struct work_struct  work;
+   struct radeon_device*rdev;
+   int crtc_id;
+   struct radeon_fence *fence;
+   struct drm_pending_vblank_event *event;
+   struct radeon_bo*old_rbo;
+   u64 new_crtc_base;
 };

 struct r500_irq_stat_regs {
-   u32 disp_int;
-   u32 hdmi0_status;
+   u32 disp_int;
+   u32 hdmi0_status;
 };

 struct r600_irq_stat_regs {
-   u32 disp_int;
-   u32 disp_int_cont;
-   u32 disp_int_cont2;
-   u32 d1grph_int;
-   u32 d2grph_int;
-   u32 hdmi0_status;
-   u32 hdmi1_status;
+   u32 disp_int;
+   u32 disp_int_cont;
+

[PATCH 4/5] drm/radeon: add lockup faulty command recording v4

2012-05-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

This tries to identify the faulty user command stream that caused
the lockup. If it finds one, it creates a big blob that contains
all the information; this includes the packet stream but also a
snapshot of all the bos used by the faulty packet stream.

This means that the blob is self contained and can be fully
replayed.

v2: Better commit message. Split out the radeon debugfs change
into its own patch. Split out the vm offset change into its
own patch. Add data buffer flags so the kernel can flag bos
that are dumped with valid data. Remove the family from the
header and instead let the userspace tools rely on the pci
id. Avoid doing whitespace/indentation cleaning.
v3: Add a chunk size field so older userspace can easily skip
newer chunks.
v4: Add flags to the cmd buffer to facilitate the userspace tools'
job. Allow the userspace tool to easily know whether it needs to
clear offsets or to add relocation packets.
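
A minimal sketch of how a userspace tool could rely on the chunk size
field added in v3 (the chunk layout and field names here are
assumptions for illustration, not the actual rati_* definitions):

	#include <stddef.h>
	#include <stdint.h>

	struct chunk_hdr {
		uint32_t id;	/* chunk type */
		uint32_t ver;
		uint32_t size;	/* payload size, lets old tools skip it */
	};

	static void walk_blob(const uint8_t *blob, size_t len)
	{
		size_t off = 0;

		while (off + sizeof(struct chunk_hdr) <= len) {
			const struct chunk_hdr *h =
				(const struct chunk_hdr *)(blob + off);
			if (is_known_chunk(h->id))	/* hypothetical */
				handle_chunk(h);	/* hypothetical */
			/* unknown chunk types are skipped thanks to size */
			off += sizeof(*h) + h->size;
		}
	}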

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|   16 
 drivers/gpu/drm/radeon/radeon_cs.c |   72 +--
 drivers/gpu/drm/radeon/radeon_device.c |   17 +
 drivers/gpu/drm/radeon/radeon_object.h |   11 +++-
 drivers/gpu/drm/radeon/radeon_ring.c   |   18 +-
 drivers/gpu/drm/radeon/radeon_sa.c |  121 
 include/drm/radeon_drm.h   |  110 +
 7 files changed, 357 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index dc51ee9..3fbb469 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -74,6 +74,7 @@
 #include "radeon_family.h"
 #include "radeon_mode.h"
 #include "radeon_reg.h"
+#include "radeon_drm.h"

 /*
  * Modules parameters.
@@ -395,6 +396,11 @@ struct radeon_sa_manager {

 struct radeon_sa_bo;

+struct radeon_dump {
+   struct rati_data_buffer buffer;
+   struct radeon_bo*bo;
+};
+
 /* sub-allocation buffer */
 struct radeon_sa_bo {
struct list_headolist;
@@ -403,6 +409,9 @@ struct radeon_sa_bo {
unsignedsoffset;
unsignedeoffset;
struct radeon_fence *fence;
+   unsignednbuffers;
+   struct radeon_dump  *buffers;
+   uint32_tcmd_flags;
 };

 /*
@@ -846,6 +855,8 @@ struct radeon_cs_parser {
u32 cs_flags;
u32 ring;
s32 priority;
+   unsignednbuffers;
+   struct radeon_dump  *buffers;
 };

 extern int radeon_cs_update_pages(struct radeon_cs_parser *p, int pg_idx);
@@ -1548,6 +1559,11 @@ struct radeon_device {
unsigneddebugfs_count;
/* virtual memory */
struct radeon_vm_managervm_manager;
+   /* lockup blob dumping */
+   unsignedblob_dump;
+   struct rati_header  blob_header;
+   uint64_tblob_size;
+   void*blob;
 };

 int radeon_device_init(struct radeon_device *rdev,
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index e86907a..a5b6610 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -36,7 +36,7 @@ int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
 {
struct drm_device *ddev = p->rdev->ddev;
struct radeon_cs_chunk *chunk;
-   unsigned i, j;
+   unsigned i, j, ib;
bool duplicate;

if (p->chunk_relocs_idx == -1) {
@@ -53,7 +53,16 @@ int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
if (p->relocs == NULL) {
return -ENOMEM;
}
-   for (i = 0; i < p->nrelocs; i++) {
+   p->buffers = NULL;
+   p->nbuffers = 0;
+   if (p->rdev->blob_dump) {
+   p->buffers = kcalloc(p->nrelocs, sizeof(*p->buffers), 
GFP_KERNEL);
+   if (p->buffers == NULL) {
+   return -ENOMEM;
+   }
+   p->nbuffers = p->nrelocs;
+   }
+   for (i = 0, ib = 0; i < p->nrelocs; i++) {
struct drm_radeon_cs_reloc *r;

duplicate = false;
@@ -85,8 +94,24 @@ int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
radeon_bo_list_add_object(&p->relocs[i].lobj,
  &p->validated);

-   } else
+   /* initialize dump struct */
+   if (p->rdev->blob_dump) {
+   p->buffers[ib].bo =  p->relocs[i].robj;
+   p->buffers[ib].buffer.id = RATI_DATA_BUFFER;
+   p->buffers[ib].buffer.ver = 1;
+   p->buffers[ib].buffer.size = 
radeon_bo_size(p->buffers[i].

[PATCH 3/5] drm/radeon: allow radeon_vm_bo_update_pte caller to get bo virtual offset

2012-05-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

Allow the caller of radeon_vm_bo_update_pte to get the virtual bo offset.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h  |3 ++-
 drivers/gpu/drm/radeon/radeon_cs.c   |2 +-
 drivers/gpu/drm/radeon/radeon_gart.c |   11 ---
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 2aa0dfa..dc51ee9 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1794,7 +1794,8 @@ void radeon_vm_unbind(struct radeon_device *rdev, struct 
radeon_vm *vm);
 int radeon_vm_bo_update_pte(struct radeon_device *rdev,
struct radeon_vm *vm,
struct radeon_bo *bo,
-   struct ttm_mem_reg *mem);
+   struct ttm_mem_reg *mem,
+   uint64_t *vm_offset);
 void radeon_vm_bo_invalidate(struct radeon_device *rdev,
 struct radeon_bo *bo);
 int radeon_vm_bo_add(struct radeon_device *rdev,
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index c7d64a7..e86907a 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -386,7 +386,7 @@ static int radeon_bo_vm_update_pte(struct radeon_cs_parser 
*parser,

list_for_each_entry(lobj, &parser->validated, tv.head) {
bo = lobj->bo;
-   r = radeon_vm_bo_update_pte(parser->rdev, vm, bo, &bo->tbo.mem);
+   r = radeon_vm_bo_update_pte(parser->rdev, vm, bo, &bo->tbo.mem, 
NULL);
if (r) {
return r;
}
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index 8e9ef34..f91a95d 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -433,7 +433,7 @@ retry_id:
vm->id = id;
list_add_tail(&vm->list, &rdev->vm_manager.lru_vm);
return radeon_vm_bo_update_pte(rdev, vm, rdev->ring_tmp_bo.bo,
-  &rdev->ring_tmp_bo.bo->tbo.mem);
+  &rdev->ring_tmp_bo.bo->tbo.mem, NULL);
 }

 /* object have to be reserved */
@@ -540,7 +540,8 @@ static u64 radeon_vm_get_addr(struct radeon_device *rdev,
 int radeon_vm_bo_update_pte(struct radeon_device *rdev,
struct radeon_vm *vm,
struct radeon_bo *bo,
-   struct ttm_mem_reg *mem)
+   struct ttm_mem_reg *mem,
+   uint64_t *vm_offset)
 {
struct radeon_bo_va *bo_va;
unsigned ngpu_pages, i;
@@ -560,6 +561,10 @@ int radeon_vm_bo_update_pte(struct radeon_device *rdev,
if (bo_va->valid)
return 0;

+   if (vm_offset) {
+   *vm_offset = bo_va->soffset;
+   }
+
ngpu_pages = radeon_bo_ngpu_pages(bo);
bo_va->flags &= ~RADEON_VM_PAGE_VALID;
bo_va->flags &= ~RADEON_VM_PAGE_SYSTEM;
@@ -597,7 +602,7 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,

mutex_lock(&vm->mutex);
radeon_mutex_lock(&rdev->cs_mutex);
-   radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
+   radeon_vm_bo_update_pte(rdev, vm, bo, NULL, NULL);
radeon_mutex_unlock(&rdev->cs_mutex);
list_del(&bo_va->vm_list);
mutex_unlock(&vm->mutex);
-- 
1.7.7.6



[PATCH 2/5] drm/radeon: allow radeon debugfs helper to provide custom read

2012-05-17 Thread j.gli...@gmail.com
From: Jerome Glisse 

Allow a radeon debugfs file to provide a custom read function. This
is useful in case you don't want to double buffer with seq_file, or
simply in case the buffer data is too big to be buffered by
seq_file.

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/r100.c  |6 +++---
 drivers/gpu/drm/radeon/r300.c  |2 +-
 drivers/gpu/drm/radeon/r420.c  |2 +-
 drivers/gpu/drm/radeon/r600.c  |2 +-
 drivers/gpu/drm/radeon/radeon.h|3 ++-
 drivers/gpu/drm/radeon/radeon_device.c |7 ---
 drivers/gpu/drm/radeon/radeon_fence.c  |2 +-
 drivers/gpu/drm/radeon/radeon_pm.c |2 +-
 drivers/gpu/drm/radeon/radeon_ring.c   |4 ++--
 drivers/gpu/drm/radeon/radeon_ttm.c|2 +-
 drivers/gpu/drm/radeon/rs400.c |2 +-
 drivers/gpu/drm/radeon/rv515.c |4 ++--
 12 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index 0874a6d..8ed365f 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -2694,7 +2694,7 @@ static struct drm_info_list r100_debugfs_mc_info_list[] = 
{
 int r100_debugfs_rbbm_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-   return radeon_debugfs_add_files(rdev, r100_debugfs_rbbm_list, 1);
+   return radeon_debugfs_add_files(rdev, r100_debugfs_rbbm_list, 1, NULL);
 #else
return 0;
 #endif
@@ -2703,7 +2703,7 @@ int r100_debugfs_rbbm_init(struct radeon_device *rdev)
 int r100_debugfs_cp_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-   return radeon_debugfs_add_files(rdev, r100_debugfs_cp_list, 2);
+   return radeon_debugfs_add_files(rdev, r100_debugfs_cp_list, 2, NULL);
 #else
return 0;
 #endif
@@ -2712,7 +2712,7 @@ int r100_debugfs_cp_init(struct radeon_device *rdev)
 int r100_debugfs_mc_info_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-   return radeon_debugfs_add_files(rdev, r100_debugfs_mc_info_list, 1);
+   return radeon_debugfs_add_files(rdev, r100_debugfs_mc_info_list, 1, 
NULL);
 #else
return 0;
 #endif
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 97722a3..edb9eeb 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -586,7 +586,7 @@ static struct drm_info_list rv370_pcie_gart_info_list[] = {
 static int rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-   return radeon_debugfs_add_files(rdev, rv370_pcie_gart_info_list, 1);
+   return radeon_debugfs_add_files(rdev, rv370_pcie_gart_info_list, 1, 
NULL);
 #else
return 0;
 #endif
diff --git a/drivers/gpu/drm/radeon/r420.c b/drivers/gpu/drm/radeon/r420.c
index 99137be..4eddcfc 100644
--- a/drivers/gpu/drm/radeon/r420.c
+++ b/drivers/gpu/drm/radeon/r420.c
@@ -491,7 +491,7 @@ static struct drm_info_list r420_pipes_info_list[] = {
 int r420_debugfs_pipes_info_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-   return radeon_debugfs_add_files(rdev, r420_pipes_info_list, 1);
+   return radeon_debugfs_add_files(rdev, r420_pipes_info_list, 1, NULL);
 #else
return 0;
 #endif
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 4c0d8c9..afc458a 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -3589,7 +3589,7 @@ static struct drm_info_list r600_mc_info_list[] = {
 int r600_debugfs_mc_info_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-   return radeon_debugfs_add_files(rdev, r600_mc_info_list, 
ARRAY_SIZE(r600_mc_info_list));
+   return radeon_debugfs_add_files(rdev, r600_mc_info_list, 
ARRAY_SIZE(r600_mc_info_list), NULL);
 #else
return 0;
 #endif
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 9783178..2aa0dfa 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1128,7 +1128,8 @@ struct radeon_debugfs {

 int radeon_debugfs_add_files(struct radeon_device *rdev,
 struct drm_info_list *files,
-unsigned nfiles);
+unsigned nfiles,
+drm_debugfs_read_t read);
 int radeon_debugfs_fence_init(struct radeon_device *rdev);


diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index 944ac11..b5f4fb9 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1015,7 +1015,8 @@ int radeon_gpu_reset(struct radeon_device *rdev)
  */
 int radeon_debugfs_add_files(struct radeon_device *rdev,
 struct drm_info_list *files,
-unsigned nfiles)
+unsigned nfiles,
+drm_debugfs_read_t read)
 {
unsigned i;

@@ -1039,10 +1040,10 @@ i
