[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=40952





--- Comment #4 from Linus Torvalds   
2011-08-24 21:57:45 ---
On Wed, Aug 24, 2011 at 2:04 PM,   
wrote:
>
> --- Comment #3 from Michel Dänzer  2011-08-24 
> 21:03:48 ---
> From Elimar Riesebieter (riesebie at lxtec.de):
>
> bisecting brought me to commit
> 288d5abec8314ae50fe6692f324b0444acae8486. Reverting seems to work as
> microcode is loaded with compiled in firmware and radeon kms driver.

Grr.

So _request_firmware() does this:

    if (WARN_ON(usermodehelper_is_disabled())) {
            dev_err(device, "firmware: %s will not be loaded\n", name);
            return -EBUSY;
    }

which is reasonable, but it does mean that it will warn even if the
firmware is built into the kernel.

On the one hand, that's really nice, because it implies a driver does
a firmware load too early, at a point where it cannot do the generic
firmware load.

On the other hand, it sucks, because it does disallow this situation
that used to work, now that we actually do the sane thing and don't
allow usermode helpers before init has been set up.

So I bet the attached patch fixes the R100 problem, but I'm not 100%
happy with it.

Comments?

   Linus

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=40952


Michel Dänzer  changed:

   What|Removed |Added

 CC||riesebie at lxtec.de,
   ||torvalds at linux-foundation.org




--- Comment #3 from Michel Dänzer   2011-08-24 21:03:48 
---


No subject

2011-08-24 Thread
bisecting brought me to commit
288d5abec8314ae50fe6692f324b0444acae8486. Reverting seems to work as
microcode is loaded with compiled in firmware and radeon kms driver.


That's

commit 288d5abec8314ae50fe6692f324b0444acae8486
Author: Linus Torvalds 
Date:   Wed Aug 3 22:03:29 2011 -1000

Boot up with usermodehelper disabled

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 41642] radeon [XPRESS 200M 5955] wakeup from S3 and going into S4 broken with KMS and enabled framebuffer

2011-08-24 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=41642


Alex Deucher  changed:

   What|Removed |Added

 CC||alexdeucher at gmail.com




--- Comment #3 from Alex Deucher   2011-08-24 
20:11:01 ---
Does this patch help?

diff --git a/drivers/gpu/drm/radeon/radeon_combios.c
b/drivers/gpu/drm/radeon/radeon_combios.c
index e0138b6..913a30b 100644
--- a/drivers/gpu/drm/radeon/radeon_combios.c
+++ b/drivers/gpu/drm/radeon/radeon_combios.c
@@ -3298,6 +3298,8 @@ void radeon_combios_asic_init(struct drm_device *dev)
rdev->pdev->subsystem_device == 0x30a4)
return;

+   return;
+
/* DYN CLK 1 */
table = combios_get_table_offset(dev, COMBIOS_DYN_CLK_1_TABLE);
if (table)

If so, please attach your lspci -vnn output so we can add a proper quirk for
your laptop.

-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 41642] radeon [XPRESS 200M 5955] wakeup from S3 and going into S4 broken with KMS and enabled framebuffer

2011-08-24 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=41642


Rafael J. Wysocki  changed:

   What|Removed |Added

 Blocks||7216




-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 41642] radeon [XPRESS 200M 5955] wakeup from S3 and going into S4 broken with KMS and enabled framebuffer

2011-08-24 Thread bugzilla-dae...@bugzilla.kernel.org
https://bugzilla.kernel.org/show_bug.cgi?id=41642


Rafael J. Wysocki  changed:

   What|Removed |Added

 CC||rjw at sisk.pl
  Component|Hibernation/Suspend |Video(DRI - non Intel)
 AssignedTo|power-management_other at |drivers_video-dri at
           |kernel-bugs.osdl.org    |kernel-bugs.osdl.org
Product|Power Management|Drivers




-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[PATCH 7/8] w1: ds2760 and ds2780, use ida for id and ida_simple_get to get it.

2011-08-24 Thread Barnes, Clifton A.
On Fri 7/22/2011 12:41 PM, Jonathan Cameron wrote:

> Straightforward.  As an aside, the ida_init calls are not needed as far
> as I can see. (DEFINE_IDA does the same already.)

> Signed-off-by: Jonathan Cameron 

Checked out ok here for ds2780.

Acked-by: Clifton Barnes 


[PATCH] drm/radeon/kms: evergreen & ni reset SPI block on CP resume

2011-08-24 Thread Alex Deucher
On Wed, Aug 24, 2011 at 4:00 PM,   wrote:
> From: Jerome Glisse 
>
> For some reason the SPI block is in a broken state after module
> unloading. This leads to broken rendering after reloading the
> module. Fix this by resetting the SPI block in the CP resume function.

Looks good to me.

Reviewed-by: Alex Deucher 

>
> Signed-off-by: Jerome Glisse 
> ---
>  drivers/gpu/drm/radeon/evergreen.c |    1 +
>  drivers/gpu/drm/radeon/ni.c        |    1 +
>  2 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/evergreen.c 
> b/drivers/gpu/drm/radeon/evergreen.c
> index fb5fa08..d8d71a3 100644
> --- a/drivers/gpu/drm/radeon/evergreen.c
> +++ b/drivers/gpu/drm/radeon/evergreen.c
> @@ -1357,6 +1357,7 @@ int evergreen_cp_resume(struct radeon_device *rdev)
>                                 SOFT_RESET_PA |
>                                 SOFT_RESET_SH |
>                                 SOFT_RESET_VGT |
> +                               SOFT_RESET_SPI |
>                                 SOFT_RESET_SX));
>        RREG32(GRBM_SOFT_RESET);
>        mdelay(15);
> diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
> index 44c4750..a2e00fa 100644
> --- a/drivers/gpu/drm/radeon/ni.c
> +++ b/drivers/gpu/drm/radeon/ni.c
> @@ -1159,6 +1159,7 @@ int cayman_cp_resume(struct radeon_device *rdev)
>                                 SOFT_RESET_PA |
>                                 SOFT_RESET_SH |
>                                 SOFT_RESET_VGT |
> +                               SOFT_RESET_SPI |
>                                 SOFT_RESET_SX));
>        RREG32(GRBM_SOFT_RESET);
>        mdelay(15);
> --
> 1.7.1
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>


[PATCH] drm/radeon/kms: evergreen & ni reset SPI block on CP resume

2011-08-24 Thread j.gli...@gmail.com
From: Jerome Glisse 

For some reason the SPI block is in a broken state after module
unloading. This leads to broken rendering after reloading the
module. Fix this by resetting the SPI block in the CP resume function.

Signed-off-by: Jerome Glisse 

[PATCH 6/6] ttm: Add 'no_dma' parameter to turn the TTM DMA pool off during runtime.

2011-08-24 Thread Konrad Rzeszutek Wilk
The TTM DMA pool only gets turned on when SWIOTLB is enabled - but
we might also want to turn it off even when SWIOTLB is on, to
use the non-DMA TTM pool code.

In the future this parameter can be removed.

Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |4 
 include/drm/ttm/ttm_page_alloc.h |4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 2cc4b54..9e09eb9 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -51,6 +51,10 @@
 #include 
 #endif

+int __read_mostly dma_ttm_disable;
+MODULE_PARM_DESC(no_dma, "Disable TTM DMA pool");
+module_param_named(no_dma, dma_ttm_disable, bool, S_IRUGO);
+
 #define NUM_PAGES_TO_ALLOC (PAGE_SIZE/sizeof(struct page *))
 #define SMALL_ALLOCATION   16
 #define FREE_ALL_PAGES (~0U)
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
index 192c5f8..e75af77 100644
--- a/include/drm/ttm/ttm_page_alloc.h
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -103,10 +103,10 @@ extern struct ttm_page_alloc_func ttm_page_alloc_default;
 #ifdef CONFIG_SWIOTLB
 /* Defined in ttm_page_alloc_dma.c */
 extern struct ttm_page_alloc_func ttm_page_alloc_dma;
-
+extern int dma_ttm_disable;
 static inline bool ttm_page_alloc_need_dma(void)
 {
-   if (swiotlb_enabled()) {
+   if (!dma_ttm_disable && swiotlb_enabled()) {
ttm_page_alloc = &ttm_page_alloc_dma;
return true;
}
-- 
1.7.4.1



[PATCH 5/6] ttm: Provide a DMA aware TTM page pool code.

2011-08-24 Thread Konrad Rzeszutek Wilk
In TTM world the pages for the graphic drivers are kept in three different
pools: write combined, uncached, and cached (write-back). When the pages
are used by the graphic driver the graphic adapter via its built in MMU
(or AGP) programs these pages in. The programming requires the virtual address
(from the graphic adapter perspective) and the physical address (either System 
RAM
or the memory on the card) which is obtained using the pci_map_* calls (which
do the
virtual to physical - or bus address translation). During the graphic 
application's
"life" those pages can be shuffled around, swapped out to disk, moved from the
VRAM to System RAM or vice-versa. This all works with the existing TTM pool code
- except when we want to use the software IOTLB (SWIOTLB) code to "map" the 
physical
addresses to the graphic adapter MMU. We end up programming the bounce buffer's
physical address instead of the TTM pool memory's and get a non-worky driver.
There are two solutions:
1) using the DMA API to allocate pages that are screened by the DMA API, or
2) using the pci_sync_* calls to copy the pages from the bounce-buffer and back.

This patch fixes the issue by allocating pages using the DMA API. The second
is a viable option - but it has performance drawbacks and potential correctness
issues - think of the write cache page being bounced (SWIOTLB->TTM), the
WC is set on the TTM page and the copy from SWIOTLB not making it to the TTM
page until the page has been recycled in the pool (and used by another 
application).

The bounce buffer does not get activated often - only in cases where we have
a 32-bit capable card and we want to use a page that is allocated above the
4GB limit. The bounce buffer offers the solution of copying the contents
of that 4GB page to a location below 4GB and then back when the operation has 
been
completed (or vice-versa). This is done by using the 'pci_sync_*' calls.
Note: If you look carefully enough in the existing TTM page pool code you will
notice the GFP_DMA32 flag is used - which should guarantee that the provided
page is under 4GB. It certainly is the case, except this gets ignored in two cases:
 - If user specifies 'swiotlb=force' which bounces _every_ page.
 - If user is using a Xen PV Linux guest (which uses the SWIOTLB and the
   underlying PFNs aren't necessarily under 4GB).

To avoid this extra copying, the other option is to allocate the pages
using the DMA API so that there is no need to map the page and perform the
expensive 'pci_sync_*' calls.

For this, the DMA API capable TTM pool requires the 'struct device' to
properly call the DMA API. It also has to track the virtual and bus address of
the page being handed out in case it ends up being swapped out or de-allocated -
to make sure it is de-allocated using the proper 'struct device'.

Implementation wise the code keeps two lists: one that is attached to the
'struct device' (via the dev->dma_pools list) and a global one to be used when
the 'struct device' is unavailable (think shrinker code). The global list can
iterate over all of the 'struct device' and its associated dma_pool. The list
in dev->dma_pools can only iterate the device's dma_pool.
[ASCII diagram garbled in the archive]
[Two pools associated with the device (WC and UC), and the parallel list
containing the 'struct dev' and 'struct dma_pool' entries]

The maximum amount of dma pools a device can have is six: write-combined,
uncached, and cached; then there are the DMA32 variants which are:
write-combined dma32, uncached dma32, and cached dma32.

Currently this code only gets activated when any variant of the SWIOTLB IOMMU
code is running (Intel without VT-d, AMD without GART, IBM Calgary and Xen PV
with PCI devices).

Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/ttm/Makefile |3 +
 drivers/gpu/drm/ttm/ttm_memory.c |2 +
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1307 ++
 include/drm/ttm/ttm_page_alloc.h |   32 +-
 4 files changed, 1340 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c

diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index f3cf6f0..8300bc0 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ b/drivers/gpu/drm/ttm/Makefile
@@ -7,4 +7,7 @@ ttm-y := ttm_agp_backend.o ttm_memory.o ttm_tt.o ttm_bo.o \
ttm_object.o ttm_lock.o ttm_execbuf_util.o 

[PATCH 4/6] swiotlb: Expose swiotlb_nr_tlb function to modules as swiotlb_enabled

2011-08-24 Thread Konrad Rzeszutek Wilk
As a mechanism to detect whether SWIOTLB is enabled or not.
And as such, we might as well wrap it within a 'swiotlb_enabled()'
function that calls swiotlb_nr_tbl().

We also fix the spelling - it was swioltb instead of
swiotlb.

CC: FUJITA Tomonori 
Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/xen/swiotlb-xen.c |2 +-
 include/linux/swiotlb.h   |7 ++-
 lib/swiotlb.c |5 +++--
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 6e8c15a..cbcd8cc 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -149,7 +149,7 @@ void __init xen_swiotlb_init(int verbose)
int rc;
unsigned long nr_tbl;

-   nr_tbl = swioltb_nr_tbl();
+   nr_tbl = swiotlb_nr_tbl();
if (nr_tbl)
xen_io_tlb_nslabs = nr_tbl;
else {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 445702c..014ff53 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -24,7 +24,12 @@ extern int swiotlb_force;

 extern void swiotlb_init(int verbose);
 extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int 
verbose);
-extern unsigned long swioltb_nr_tbl(void);
+extern unsigned long swiotlb_nr_tbl(void);
+
+static inline bool swiotlb_enabled(void)
+{
+   return !!swiotlb_nr_tbl();
+}

 /*
  * Enumeration for sync targets
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 99093b3..058935e 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -110,11 +110,11 @@ setup_io_tlb_npages(char *str)
 __setup("swiotlb=", setup_io_tlb_npages);
 /* make io_tlb_overflow tunable too? */

-unsigned long swioltb_nr_tbl(void)
+unsigned long swiotlb_nr_tbl(void)
 {
return io_tlb_nslabs;
 }
-
+EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
  volatile void *address)
@@ -321,6 +321,7 @@ void __init swiotlb_free(void)
free_bootmem_late(__pa(io_tlb_start),
  PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
}
+   io_tlb_nslabs = 0;
 }

 static int is_swiotlb_buffer(phys_addr_t paddr)
-- 
1.7.4.1



[PATCH 3/6] ttm: Pass in 'struct device' to TTM so it can do DMA API on behalf of device.

2011-08-24 Thread Konrad Rzeszutek Wilk
We want to pass in the 'struct device' to the TTM layer so that
the TTM DMA pool code (if enabled) can use it. The DMA API code
needs the 'struct device' to do the DMA API operations.

Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/nouveau/nouveau_mem.c |3 ++-
 drivers/gpu/drm/radeon/radeon_ttm.c   |3 ++-
 drivers/gpu/drm/ttm/ttm_bo.c  |4 +++-
 drivers/gpu/drm/ttm/ttm_page_alloc.c  |   17 ++---
 drivers/gpu/drm/ttm/ttm_tt.c  |5 +++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c   |4 ++--
 include/drm/ttm/ttm_bo_driver.h   |7 ++-
 include/drm/ttm/ttm_page_alloc.h  |   16 
 8 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c 
b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 5ee14d2..a2d7e35 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -417,7 +417,8 @@ nouveau_mem_vram_init(struct drm_device *dev)
ret = ttm_bo_device_init(&dev_priv->ttm.bdev,
 dev_priv->ttm.bo_global_ref.ref.object,
 &nouveau_bo_driver, DRM_FILE_PAGE_OFFSET,
-dma_bits <= 32 ? true : false);
+dma_bits <= 32 ? true : false,
+dev->dev);
if (ret) {
NV_ERROR(dev, "Error initialising bo driver: %d\n", ret);
return ret;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 60125dd..dbc6bcb 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -517,7 +517,8 @@ int radeon_ttm_init(struct radeon_device *rdev)
r = ttm_bo_device_init(&rdev->mman.bdev,
   rdev->mman.bo_global_ref.ref.object,
   &radeon_bo_driver, DRM_FILE_PAGE_OFFSET,
-  rdev->need_dma32);
+  rdev->need_dma32,
+  rdev->dev);
if (r) {
DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
return r;
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 2e618b5..0358889 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1527,12 +1527,14 @@ int ttm_bo_device_init(struct ttm_bo_device *bdev,
   struct ttm_bo_global *glob,
   struct ttm_bo_driver *driver,
   uint64_t file_page_offset,
-  bool need_dma32)
+  bool need_dma32,
+  struct device *dev)
 {
int ret = -EINVAL;

rwlock_init(&bdev->vm_lock);
bdev->driver = driver;
+   bdev->dev = dev;

memset(bdev->man, 0, sizeof(bdev->man));

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 6a888f8..f9a4d83 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -666,7 +666,7 @@ out:
  */
 int __ttm_get_pages(struct list_head *pages, int flags,
enum ttm_caching_state cstate, unsigned count,
-   dma_addr_t *dma_address)
+   dma_addr_t *dma_address, struct device *dev)
 {
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
struct page *p = NULL;
@@ -724,7 +724,8 @@ int __ttm_get_pages(struct list_head *pages, int flags,
printk(KERN_ERR TTM_PFX
   "Failed to allocate extra pages "
   "for large request.");
-   ttm_put_pages(pages, 0, flags, cstate, NULL);
+   ttm_put_pages(pages, 0, flags, cstate, dma_address,
+ dev);
return r;
}
}
@@ -735,7 +736,8 @@ int __ttm_get_pages(struct list_head *pages, int flags,

 /* Put all pages in pages list to correct pool to wait for reuse */
 void __ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
-enum ttm_caching_state cstate, dma_addr_t *dma_address)
+enum ttm_caching_state cstate, dma_addr_t *dma_address,
+struct device *dev)
 {
unsigned long irq_flags;
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -867,19 +869,20 @@ struct ttm_page_alloc_func ttm_page_alloc_default = {

 int ttm_get_pages(struct list_head *pages, int flags,
  enum ttm_caching_state cstate, unsigned count,
- dma_addr_t *dma_address)
+ dma_addr_t *dma_address, struct device *dev)
 {
if (ttm_page_alloc && ttm_page_alloc->get_pages)
return ttm_page_alloc->get_pages(pages, flags, cstate, count,
-dma_address);
+  

[PATCH 2/6] ttm: Introduce ttm_page_alloc_func structure.

2011-08-24 Thread Konrad Rzeszutek Wilk
Which has the function members for all of the current page pool
operations defined. The old calls (ttm_put_pages, ttm_get_pages, etc)
are plumbed through little functions which look up the appropriate
implementation in the ttm_page_alloc_func structure and call it.

There is currently only one page pool code so the default registration
goes to 'ttm_page_alloc_default'. The subsequent patch
"ttm: Provide a DMA aware TTM page pool code." introduces the one
to be used when the SWIOTLB code is turned on (that implementation
is a union of the default TTM pool code with the DMA pool code).

Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/ttm/ttm_memory.c |3 ++
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   58 
 include/drm/ttm/ttm_page_alloc.h |   60 ++
 3 files changed, 113 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
index e70ddd8..c7d97a5 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/ttm/ttm_memory.c
@@ -356,6 +356,8 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global 
*glob,
 }
 #endif

+struct ttm_page_alloc_func *ttm_page_alloc;
+
 int ttm_mem_global_init(struct ttm_mem_global *glob)
 {
struct sysinfo si;
@@ -394,6 +396,7 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
   "Zone %7s: Available graphics memory: %llu kiB.\n",
   zone->name, (unsigned long long) zone->max_mem >> 10);
}
+   ttm_page_alloc = &ttm_page_alloc_default;
ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
return 0;
 out_no_zone:
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index d948575..6a888f8 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -664,9 +664,9 @@ out:
  * On success pages list will hold count number of correctly
  * cached pages.
  */
-int ttm_get_pages(struct list_head *pages, int flags,
- enum ttm_caching_state cstate, unsigned count,
- dma_addr_t *dma_address)
+int __ttm_get_pages(struct list_head *pages, int flags,
+   enum ttm_caching_state cstate, unsigned count,
+   dma_addr_t *dma_address)
 {
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
struct page *p = NULL;
@@ -734,8 +734,8 @@ int ttm_get_pages(struct list_head *pages, int flags,
 }

 /* Put all pages in pages list to correct pool to wait for reuse */
-void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
-  enum ttm_caching_state cstate, dma_addr_t *dma_address)
+void __ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+enum ttm_caching_state cstate, dma_addr_t *dma_address)
 {
unsigned long irq_flags;
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -785,7 +785,7 @@ static void ttm_page_pool_init_locked(struct ttm_page_pool 
*pool, int flags,
pool->name = name;
 }

-int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
+int __ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
 {
int ret;

@@ -822,7 +822,7 @@ int ttm_page_alloc_init(struct ttm_mem_global *glob, 
unsigned max_pages)
return 0;
 }

-void ttm_page_alloc_fini(void)
+void __ttm_page_alloc_fini(void)
 {
int i;

@@ -836,7 +836,7 @@ void ttm_page_alloc_fini(void)
_manager = NULL;
 }

-int ttm_page_alloc_debugfs(struct seq_file *m, void *data)
+int __ttm_page_alloc_debugfs(struct seq_file *m, void *data)
 {
struct ttm_page_pool *p;
unsigned i;
@@ -856,4 +856,46 @@ int ttm_page_alloc_debugfs(struct seq_file *m, void *data)
}
return 0;
 }
+
+struct ttm_page_alloc_func ttm_page_alloc_default = {
+   .get_pages  = __ttm_get_pages,
+   .put_pages  = __ttm_put_pages,
+   .alloc_init = __ttm_page_alloc_init,
+   .alloc_fini = __ttm_page_alloc_fini,
+   .debugfs= __ttm_page_alloc_debugfs,
+};
+
+int ttm_get_pages(struct list_head *pages, int flags,
+ enum ttm_caching_state cstate, unsigned count,
+ dma_addr_t *dma_address)
+{
+   if (ttm_page_alloc && ttm_page_alloc->get_pages)
+   return ttm_page_alloc->get_pages(pages, flags, cstate, count,
+dma_address);
+   return -1;
+}
+void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+  enum ttm_caching_state cstate, dma_addr_t *dma_address)
+{
+   if (ttm_page_alloc && ttm_page_alloc->put_pages)
+   ttm_page_alloc->put_pages(pages, page_count, flags, cstate,
+ dma_address);
+}
+int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
+{
+   if 

[PATCH 1/6] ttm/radeon/nouveau: Check the DMA address from TTM against known value.

2011-08-24 Thread Konrad Rzeszutek Wilk
. instead of checking against the DMA_ERROR_CODE value, which is
platform-specific. The zero value is a known invalid value
that the TTM layer sets on the dma_address array if it is not
used (ttm_tt_alloc_page_directory calls drm_calloc_large which
creates a page with GFP_ZERO).

We can't use pci_dma_mapping_error as that is IOMMU
specific (some check for a specific physical address, some
for ranges, some just do a check against zero).

Also update the comments in the header about the true state
of that parameter.

Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/nouveau/nouveau_sgdma.c |3 +--
 drivers/gpu/drm/radeon/radeon_gart.c|4 +---
 include/drm/ttm/ttm_page_alloc.h|4 ++--
 3 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c 
b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index 82fad91..624e2db 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -42,8 +42,7 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long 
num_pages,

nvbe->nr_pages = 0;
while (num_pages--) {
-   /* this code path isn't called and is incorrect anyways */
-   if (0) { /*dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE)*/
+   if (dma_addrs[nvbe->nr_pages] != 0) {
nvbe->pages[nvbe->nr_pages] =
dma_addrs[nvbe->nr_pages];
nvbe->ttm_alloced[nvbe->nr_pages] = true;
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index a533f52..41f7e51 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -181,9 +181,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned 
offset,
p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);

for (i = 0; i < pages; i++, p++) {
-   /* we reverted the patch using dma_addr in TTM for now but this
-* code stops building on alpha so just comment it out for now 
*/
-   if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
+   if (dma_addr[i] != 0) {
rdev->gart.ttm_alloced[p] = true;
rdev->gart.pages_addr[p] = dma_addr[i];
} else {
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
index 8062890..0017b17 100644
--- a/include/drm/ttm/ttm_page_alloc.h
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -36,7 +36,7 @@
  * @flags: ttm flags for page allocation.
  * @cstate: ttm caching state for the page.
  * @count: number of pages to allocate.
- * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set).
+ * @dma_address: The DMA (bus) address of pages - (by default zero).
  */
 int ttm_get_pages(struct list_head *pages,
  int flags,
@@ -51,7 +51,7 @@ int ttm_get_pages(struct list_head *pages,
  * count.
  * @flags: ttm flags for page allocation.
  * @cstate: ttm caching state.
- * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set).
+ * @dma_address: The DMA (bus) address of pages (by default zero).
  */
 void ttm_put_pages(struct list_head *pages,
   unsigned page_count,
-- 
1.7.4.1



[RFC PATCH] TTM DMA pool v1

2011-08-24 Thread Konrad Rzeszutek Wilk
Way back in January this patchset:
http://lists.freedesktop.org/archives/dri-devel/2011-January/006905.html
was merged in, but pieces of it had to be reverted b/c they did not
work properly under PowerPC, ARM, and when swapping out pages to disk.

After a bit of discussion on the mailing list
http://marc.info/?i=4D769726.2030307 at shipmail.org I started working on it,
but got waylaid by other things .. and finally I am able to post the RFC patches.

There was a lot of discussion about it and I am not sure if I captured
everybody's thoughts - if I did not - that is _not_ intentional - it has just
been quite some time..

Anyhow .. the patches explore what "lib/dmapool.c" does - which is to have a
DMA pool associated with the device. I kind of married that code
along with drivers/gpu/drm/ttm/ttm_page_alloc.c to create a TTM DMA pool code.
The end result is a DMA pool with extra features: it can do write-combine, uncached,
writeback (and tracks them and sets back to WB when freed); tracks "cached"
pages that don't really need to be returned to a pool; and hooks up to
the shrinker code so that the pools can be shrunk.

If you guys think this set of patches makes sense - my future plans were to
move a bulk of this in the lib/dmapool.c (I spoke with Matthew Wilcox about it
and he is OK as long as I don't introduce performance regressions).

As I mentioned, in the past, the patches I introduced broke certain scenarios
so I've been running compile/runtime tests to make sure I don't repeat my past
mistakes.  Both nouveau and radeon - IGP, PCI-e, and with various IOMMUs -
Calgary, GART, SWIOTLB, Xen SWIOTLB, AMD-VI work correctly (still need to test
Intel VT-d).

My PowerPC box has issues with booting a virgin 3.0 kernel (something about
"ELF image not correct") so I don't have that covered yet.

The patches are also located in a git tree:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git 
devel/ttm.dma_pool.v1.5

And the full diffstat is:

 drivers/gpu/drm/nouveau/nouveau_mem.c|3 +-
 drivers/gpu/drm/nouveau/nouveau_sgdma.c  |3 +-
 drivers/gpu/drm/radeon/radeon_gart.c |4 +-
 drivers/gpu/drm/radeon/radeon_ttm.c  |3 +-
 drivers/gpu/drm/ttm/Makefile |3 +
 drivers/gpu/drm/ttm/ttm_bo.c |4 +-
 drivers/gpu/drm/ttm/ttm_memory.c |5 +
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   63 ++-
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1311 ++
 drivers/gpu/drm/ttm/ttm_tt.c |5 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c  |4 +-
 drivers/xen/swiotlb-xen.c|2 +-
 include/drm/ttm/ttm_bo_driver.h  |7 +-
 include/drm/ttm/ttm_page_alloc.h |  100 +++-
 include/linux/swiotlb.h  |7 +-
 lib/swiotlb.c|5 +-
 16 files changed, 1499 insertions(+), 30 deletions(-)


[PATCH 3/6] ttm: Pass in 'struct device' to TTM so it can do DMA API on behalf of device.

2011-08-24 Thread Konrad Rzeszutek Wilk
We want to pass in the 'struct device' to the TTM layer so that
the TTM DMA pool code (if enabled) can use it. The DMA API code
needs the 'struct device' to do the DMA API operations.

Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/gpu/drm/nouveau/nouveau_mem.c |3 ++-
 drivers/gpu/drm/radeon/radeon_ttm.c   |3 ++-
 drivers/gpu/drm/ttm/ttm_bo.c  |4 +++-
 drivers/gpu/drm/ttm/ttm_page_alloc.c  |   17 ++---
 drivers/gpu/drm/ttm/ttm_tt.c  |5 +++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c   |4 ++--
 include/drm/ttm/ttm_bo_driver.h   |7 ++-
 include/drm/ttm/ttm_page_alloc.h  |   16 
 8 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c 
b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 5ee14d2..a2d7e35 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -417,7 +417,8 @@ nouveau_mem_vram_init(struct drm_device *dev)
ret = ttm_bo_device_init(dev_priv-ttm.bdev,
 dev_priv-ttm.bo_global_ref.ref.object,
 nouveau_bo_driver, DRM_FILE_PAGE_OFFSET,
-dma_bits = 32 ? true : false);
+dma_bits = 32 ? true : false,
+dev-dev);
if (ret) {
NV_ERROR(dev, Error initialising bo driver: %d\n, ret);
return ret;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 60125dd..dbc6bcb 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -517,7 +517,8 @@ int radeon_ttm_init(struct radeon_device *rdev)
r = ttm_bo_device_init(rdev-mman.bdev,
   rdev-mman.bo_global_ref.ref.object,
   radeon_bo_driver, DRM_FILE_PAGE_OFFSET,
-  rdev-need_dma32);
+  rdev-need_dma32,
+  rdev-dev);
if (r) {
DRM_ERROR(failed initializing buffer object driver(%d).\n, r);
return r;
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 2e618b5..0358889 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1527,12 +1527,14 @@ int ttm_bo_device_init(struct ttm_bo_device *bdev,
   struct ttm_bo_global *glob,
   struct ttm_bo_driver *driver,
   uint64_t file_page_offset,
-  bool need_dma32)
+  bool need_dma32,
+  struct device *dev)
 {
int ret = -EINVAL;
 
	rwlock_init(&bdev->vm_lock);
	bdev->driver = driver;
+	bdev->dev = dev;

	memset(bdev->man, 0, sizeof(bdev->man));
 
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 6a888f8..f9a4d83 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -666,7 +666,7 @@ out:
  */
 int __ttm_get_pages(struct list_head *pages, int flags,
enum ttm_caching_state cstate, unsigned count,
-   dma_addr_t *dma_address)
+   dma_addr_t *dma_address, struct device *dev)
 {
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
struct page *p = NULL;
@@ -724,7 +724,8 @@ int __ttm_get_pages(struct list_head *pages, int flags,
	printk(KERN_ERR TTM_PFX
	       "Failed to allocate extra pages "
	       "for large request.");
-   ttm_put_pages(pages, 0, flags, cstate, NULL);
+   ttm_put_pages(pages, 0, flags, cstate, dma_address,
+ dev);
return r;
}
}
@@ -735,7 +736,8 @@ int __ttm_get_pages(struct list_head *pages, int flags,
 
 /* Put all pages in pages list to correct pool to wait for reuse */
 void __ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
-enum ttm_caching_state cstate, dma_addr_t *dma_address)
+enum ttm_caching_state cstate, dma_addr_t *dma_address,
+struct device *dev)
 {
unsigned long irq_flags;
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -867,19 +869,20 @@ struct ttm_page_alloc_func ttm_page_alloc_default = {
 
 int ttm_get_pages(struct list_head *pages, int flags,
  enum ttm_caching_state cstate, unsigned count,
- dma_addr_t *dma_address)
+ dma_addr_t *dma_address, struct device *dev)
 {
	if (ttm_page_alloc && ttm_page_alloc->get_pages)
		return ttm_page_alloc->get_pages(pages, flags, cstate, count,
-						 dma_address);

[PATCH 5/6] ttm: Provide a DMA aware TTM page pool code.

2011-08-24 Thread Konrad Rzeszutek Wilk
In the TTM world the pages for the graphics drivers are kept in three different
pools: write-combined, uncached, and cached (write-back). When the pages are
used by the graphics driver, the graphics adapter programs them in via its
built-in MMU (or AGP). The programming requires the virtual address (from the
graphics adapter's perspective) and the physical address (either system RAM or
the memory on the card), which is obtained using the pci_map_* calls (which do
the virtual-to-physical, or bus, address translation). During the graphics
application's life those pages can be shuffled around, swapped out to disk, or
moved from VRAM to system RAM and vice versa. This all works with the existing
TTM pool code, except when we want to use the software IOTLB (SWIOTLB) code to
map the physical addresses into the graphics adapter's MMU: we end up
programming the bounce buffer's physical address instead of the TTM pool
memory's, and get a non-working driver.
There are two solutions:
1) using the DMA API to allocate pages that are screened by the DMA API, or
2) using the pci_sync_* calls to copy the pages from the bounce-buffer and back.

This patch fixes the issue by allocating pages using the DMA API (the first
option). The second option is viable, but it has performance drawbacks and
potential correctness issues: think of a write-combined page being bounced
(SWIOTLB->TTM), where WC is set on the TTM page and the copy from the SWIOTLB
does not make it to the TTM page until the page has been recycled in the pool
(and used by another application).

The bounce buffer does not get activated often, only in cases where we have
a 32-bit capable card and we want to use a page that is allocated above the
4GB limit. The bounce buffer offers the solution of copying the contents of
that page to a location below 4GB and then back when the operation has been
completed (or vice versa). This is done by using the 'pci_sync_*' calls.
Note: if you look carefully enough in the existing TTM page pool code you will
notice the GFP_DMA32 flag is used, which should guarantee that the provided
page is under 4GB. That certainly is the case, except it gets ignored in two
situations:
 - If the user specifies 'swiotlb=force', which bounces _every_ page.
 - If the user is running a Xen PV Linux guest (which uses the SWIOTLB and
   whose underlying PFNs aren't necessarily under 4GB).

To avoid this extra copying, the other option is to allocate the pages using
the DMA API so that there is no need to map the page and perform the
expensive 'pci_sync_*' calls.

This DMA API capable TTM pool requires the 'struct device' to properly call
the DMA API. It also has to track the virtual and bus address of each page
being handed out, in case it ends up being swapped out or de-allocated, to
make sure it is de-allocated using the proper 'struct device'.

Implementation-wise the code keeps two lists: one that is attached to the
'struct device' (via the dev->dma_pools list) and a global one to be used when
the 'struct device' is unavailable (think shrinker code). The global list can
iterate over all of the 'struct device's and their associated dma_pools; the
list in dev->dma_pools can only iterate that device's dma_pools.
                      /--------[struct device_pool]--\
          /-----------| dev                          |
         /+-----------| dma_pool                     |
 /------+-------\     \------------------------------/
 | struct device | /--[struct dma_pool for WC]----\    /--------[struct device_pool]--\
 | dma_pools     +-+                               \---| dev                          |
 |  ...          | \--[struct dma_pool for uncached]---| dma_pool                     |
 \------+-------/                                      \------------------------------/
        \----------------------------------------------/
[Two pools associated with the device (WC and UC), and the parallel list
containing the 'struct dev' and 'struct dma_pool' entries]

The maximum number of DMA pools a device can have is six: write-combined,
uncached, and cached, plus their DMA32 variants: write-combined dma32,
uncached dma32, and cached dma32.

Currently this code only gets activated when any variant of the SWIOTLB IOMMU
code is running (Intel without VT-d, AMD without GART, IBM Calgary and Xen PV
with PCI devices).

Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/gpu/drm/ttm/Makefile |3 +
 drivers/gpu/drm/ttm/ttm_memory.c |2 +
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1307 ++
 include/drm/ttm/ttm_page_alloc.h |   32 +-
 4 files changed, 1340 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c

diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index f3cf6f0..8300bc0 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ 

[RFC PATCH] TTM DMA pool v1

2011-08-24 Thread Konrad Rzeszutek Wilk
Way back in January this patchset:
http://lists.freedesktop.org/archives/dri-devel/2011-January/006905.html
was merged in, but pieces of it had to be reverted b/c they did not
work properly under PowerPC, ARM, and when swapping out pages to disk.

After a bit of discussion on the mailing list
http://marc.info/?i=4d769726.2030...@shipmail.org I started working on it, but
got waylaid by other things .. and finally I am able to post the RFC patches.

There was a lot of discussion about it and I am not sure if I captured
everybody's thoughts - if I did not - that is _not_ intentional - it has just
been quite some time..

Anyhow .. the patches explore what lib/dmapool.c does, which is to have a
DMA pool associated with the device. I kind of married that code to
drivers/gpu/drm/ttm/ttm_page_alloc.c to create a TTM DMA pool code.
The end result is DMA pool with extra features: can do write-combine, uncached,
writeback (and tracks them and sets back to WB when freed); tracks cached
pages that don't really need to be returned to a pool; and hooks up to
the shrinker code so that the pools can be shrunk.

If you guys think this set of patches makes sense, my future plan is to
move the bulk of this into lib/dmapool.c (I spoke with Matthew Wilcox about
it and he is OK with it as long as I don't introduce performance regressions).

As I mentioned, in the past, the patches I introduced broke certain scenarios
so I've been running compile/runtime tests to make sure I don't repeat my past
mistakes.  Both nouveau and radeon - IGP, PCI-e, and with various IOMMUs -
Calgary, GART, SWIOTLB, Xen SWIOTLB, AMD-VI work correctly (still need to test
Intel VT-d).

My PowerPC box has issues booting a virgin 3.0 kernel (something about the
ELF image not being correct), so I don't have that covered yet.

The patches are also located in a git tree:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git 
devel/ttm.dma_pool.v1.5

And the full diffstat is:

 drivers/gpu/drm/nouveau/nouveau_mem.c|3 +-
 drivers/gpu/drm/nouveau/nouveau_sgdma.c  |3 +-
 drivers/gpu/drm/radeon/radeon_gart.c |4 +-
 drivers/gpu/drm/radeon/radeon_ttm.c  |3 +-
 drivers/gpu/drm/ttm/Makefile |3 +
 drivers/gpu/drm/ttm/ttm_bo.c |4 +-
 drivers/gpu/drm/ttm/ttm_memory.c |5 +
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   63 ++-
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1311 ++
 drivers/gpu/drm/ttm/ttm_tt.c |5 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c  |4 +-
 drivers/xen/swiotlb-xen.c|2 +-
 include/drm/ttm/ttm_bo_driver.h  |7 +-
 include/drm/ttm/ttm_page_alloc.h |  100 +++-
 include/linux/swiotlb.h  |7 +-
 lib/swiotlb.c|5 +-
 16 files changed, 1499 insertions(+), 30 deletions(-)
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH 1/6] ttm/radeon/nouveau: Check the DMA address from TTM against known value.

2011-08-24 Thread Konrad Rzeszutek Wilk
. instead of checking against the DMA_ERROR_CODE value, which is
platform-specific. Zero is a known invalid value that the TTM layer sets
in the dma_address array if it is not used (ttm_tt_alloc_page_directory
calls drm_calloc_large, which returns zeroed memory via GFP_ZERO).

We can't use pci_dma_mapping_error as that is IOMMU-specific (some
implementations check for a specific physical address, some for ranges,
and some just check against zero).

Also update the comments in the header about the true state
of that parameter.

Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/gpu/drm/nouveau/nouveau_sgdma.c |3 +--
 drivers/gpu/drm/radeon/radeon_gart.c|4 +---
 include/drm/ttm/ttm_page_alloc.h|4 ++--
 3 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c 
b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index 82fad91..624e2db 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -42,8 +42,7 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long 
num_pages,
 
	nvbe->nr_pages = 0;
	while (num_pages--) {
-		/* this code path isn't called and is incorrect anyways */
-		if (0) { /*dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE)*/
+		if (dma_addrs[nvbe->nr_pages] != 0) {
			nvbe->pages[nvbe->nr_pages] =
				dma_addrs[nvbe->nr_pages];
			nvbe->ttm_alloced[nvbe->nr_pages] = true;
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index a533f52..41f7e51 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -181,9 +181,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned 
offset,
	p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);

	for (i = 0; i < pages; i++, p++) {
-		/* we reverted the patch using dma_addr in TTM for now but this
-		 * code stops building on alpha so just comment it out for now */
-		if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
+		if (dma_addr[i] != 0) {
			rdev->gart.ttm_alloced[p] = true;
			rdev->gart.pages_addr[p] = dma_addr[i];
		} else {
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
index 8062890..0017b17 100644
--- a/include/drm/ttm/ttm_page_alloc.h
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -36,7 +36,7 @@
  * @flags: ttm flags for page allocation.
  * @cstate: ttm caching state for the page.
  * @count: number of pages to allocate.
- * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set).
+ * @dma_address: The DMA (bus) address of pages - (by default zero).
  */
 int ttm_get_pages(struct list_head *pages,
  int flags,
@@ -51,7 +51,7 @@ int ttm_get_pages(struct list_head *pages,
  * count.
  * @flags: ttm flags for page allocation.
  * @cstate: ttm caching state.
- * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set).
+ * @dma_address: The DMA (bus) address of pages (by default zero).
  */
 void ttm_put_pages(struct list_head *pages,
   unsigned page_count,
-- 
1.7.4.1



[PATCH 2/6] ttm: Introduce ttm_page_alloc_func structure.

2011-08-24 Thread Konrad Rzeszutek Wilk
This structure has the function members for all of the current page pool
operations defined. The old calls (ttm_put_pages, ttm_get_pages, etc.) are
plumbed through small wrappers that look up the appropriate implementation
in the ttm_page_alloc_func structure and call it.

There is currently only one page pool implementation, so the default
registration goes to 'ttm_page_alloc_default'. The subsequent patch,
"ttm: Provide a DMA aware TTM page pool code.", introduces the one to be
used when the SWIOTLB code is turned on (that implementation is a union of
the default TTM pool code and the DMA pool code).

Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/gpu/drm/ttm/ttm_memory.c |3 ++
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   58 
 include/drm/ttm/ttm_page_alloc.h |   60 ++
 3 files changed, 113 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
index e70ddd8..c7d97a5 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/ttm/ttm_memory.c
@@ -356,6 +356,8 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global 
*glob,
 }
 #endif
 
+struct ttm_page_alloc_func *ttm_page_alloc;
+
 int ttm_mem_global_init(struct ttm_mem_global *glob)
 {
struct sysinfo si;
@@ -394,6 +396,7 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
		   "Zone %7s: Available graphics memory: %llu kiB.\n",
		   zone->name, (unsigned long long) zone->max_mem >> 10);
	}
+	ttm_page_alloc = &ttm_page_alloc_default;
	ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
return 0;
 out_no_zone:
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index d948575..6a888f8 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -664,9 +664,9 @@ out:
  * On success pages list will hold count number of correctly
  * cached pages.
  */
-int ttm_get_pages(struct list_head *pages, int flags,
- enum ttm_caching_state cstate, unsigned count,
- dma_addr_t *dma_address)
+int __ttm_get_pages(struct list_head *pages, int flags,
+   enum ttm_caching_state cstate, unsigned count,
+   dma_addr_t *dma_address)
 {
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
struct page *p = NULL;
@@ -734,8 +734,8 @@ int ttm_get_pages(struct list_head *pages, int flags,
 }
 
 /* Put all pages in pages list to correct pool to wait for reuse */
-void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
-  enum ttm_caching_state cstate, dma_addr_t *dma_address)
+void __ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+enum ttm_caching_state cstate, dma_addr_t *dma_address)
 {
unsigned long irq_flags;
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -785,7 +785,7 @@ static void ttm_page_pool_init_locked(struct ttm_page_pool 
*pool, int flags,
pool-name = name;
 }
 
-int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
+int __ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
 {
int ret;
 
@@ -822,7 +822,7 @@ int ttm_page_alloc_init(struct ttm_mem_global *glob, 
unsigned max_pages)
return 0;
 }
 
-void ttm_page_alloc_fini(void)
+void __ttm_page_alloc_fini(void)
 {
int i;
 
@@ -836,7 +836,7 @@ void ttm_page_alloc_fini(void)
_manager = NULL;
 }
 
-int ttm_page_alloc_debugfs(struct seq_file *m, void *data)
+int __ttm_page_alloc_debugfs(struct seq_file *m, void *data)
 {
struct ttm_page_pool *p;
unsigned i;
@@ -856,4 +856,46 @@ int ttm_page_alloc_debugfs(struct seq_file *m, void *data)
}
return 0;
 }
+
+struct ttm_page_alloc_func ttm_page_alloc_default = {
+   .get_pages  = __ttm_get_pages,
+   .put_pages  = __ttm_put_pages,
+   .alloc_init = __ttm_page_alloc_init,
+   .alloc_fini = __ttm_page_alloc_fini,
+   .debugfs= __ttm_page_alloc_debugfs,
+};
+
+int ttm_get_pages(struct list_head *pages, int flags,
+ enum ttm_caching_state cstate, unsigned count,
+ dma_addr_t *dma_address)
+{
+	if (ttm_page_alloc && ttm_page_alloc->get_pages)
+		return ttm_page_alloc->get_pages(pages, flags, cstate, count,
+						 dma_address);
+	return -1;
+}
+void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+		   enum ttm_caching_state cstate, dma_addr_t *dma_address)
+{
+	if (ttm_page_alloc && ttm_page_alloc->put_pages)
+		ttm_page_alloc->put_pages(pages, page_count, flags, cstate,
+					  dma_address);
+}
+int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
+{
+   if 

[PATCH 6/6] ttm: Add 'no_dma' parameter to turn the TTM DMA pool off during runtime.

2011-08-24 Thread Konrad Rzeszutek Wilk
The TTM DMA pool only gets turned on when the SWIOTLB is enabled, but we
might also want to turn it off while the SWIOTLB is on, in order to use the
non-DMA TTM pool code.

In the future this parameter can be removed.

Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |4 
 include/drm/ttm/ttm_page_alloc.h |4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 2cc4b54..9e09eb9 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -51,6 +51,10 @@
 #include <asm/agp.h>
 #endif
 
+int __read_mostly dma_ttm_disable;
+MODULE_PARM_DESC(no_dma, "Disable TTM DMA pool");
+module_param_named(no_dma, dma_ttm_disable, bool, S_IRUGO);
+
 #define NUM_PAGES_TO_ALLOC (PAGE_SIZE/sizeof(struct page *))
 #define SMALL_ALLOCATION   16
 #define FREE_ALL_PAGES (~0U)
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
index 192c5f8..e75af77 100644
--- a/include/drm/ttm/ttm_page_alloc.h
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -103,10 +103,10 @@ extern struct ttm_page_alloc_func ttm_page_alloc_default;
 #ifdef CONFIG_SWIOTLB
 /* Defined in ttm_page_alloc_dma.c */
 extern struct ttm_page_alloc_func ttm_page_alloc_dma;
-
+extern int dma_ttm_disable;
 static inline bool ttm_page_alloc_need_dma(void)
 {
-   if (swiotlb_enabled()) {
+	if (!dma_ttm_disable && swiotlb_enabled()) {
		ttm_page_alloc = &ttm_page_alloc_dma;
return true;
}
-- 
1.7.4.1



[PATCH 4/6] swiotlb: Expose swiotlb_nr_tlb function to modules as swiotlb_enabled

2011-08-24 Thread Konrad Rzeszutek Wilk
As a mechanism to detect whether SWIOTLB is enabled or not, we might as
well wrap it in a 'swiotlb_enabled()' function that calls swiotlb_nr_tbl().

We also fix the spelling - it was swioltb instead of
swiotlb.

CC: FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp
Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/xen/swiotlb-xen.c |2 +-
 include/linux/swiotlb.h   |7 ++-
 lib/swiotlb.c |5 +++--
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 6e8c15a..cbcd8cc 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -149,7 +149,7 @@ void __init xen_swiotlb_init(int verbose)
int rc;
unsigned long nr_tbl;
 
-   nr_tbl = swioltb_nr_tbl();
+   nr_tbl = swiotlb_nr_tbl();
if (nr_tbl)
xen_io_tlb_nslabs = nr_tbl;
else {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 445702c..014ff53 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -24,7 +24,12 @@ extern int swiotlb_force;
 
 extern void swiotlb_init(int verbose);
 extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int 
verbose);
-extern unsigned long swioltb_nr_tbl(void);
+extern unsigned long swiotlb_nr_tbl(void);
+
+static inline bool swiotlb_enabled(void)
+{
+   return !!swiotlb_nr_tbl();
+}
 
 /*
  * Enumeration for sync targets
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 99093b3..058935e 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -110,11 +110,11 @@ setup_io_tlb_npages(char *str)
 __setup("swiotlb=", setup_io_tlb_npages);
 /* make io_tlb_overflow tunable too? */
 
-unsigned long swioltb_nr_tbl(void)
+unsigned long swiotlb_nr_tbl(void)
 {
return io_tlb_nslabs;
 }
-
+EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
  volatile void *address)
@@ -321,6 +321,7 @@ void __init swiotlb_free(void)
free_bootmem_late(__pa(io_tlb_start),
				  PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
}
+   io_tlb_nslabs = 0;
 }
 
 static int is_swiotlb_buffer(phys_addr_t paddr)
-- 
1.7.4.1



[PATCH] drm/radeon/kms: evergreen ni reset SPI block on CP resume

2011-08-24 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

For some reason the SPI block is in a broken state after module unloading.
This leads to broken rendering after reloading the module. Fix this by
resetting the SPI block in the CP resume function.

Signed-off-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/radeon/evergreen.c |1 +
 drivers/gpu/drm/radeon/ni.c|1 +
 2 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index fb5fa08..d8d71a3 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1357,6 +1357,7 @@ int evergreen_cp_resume(struct radeon_device *rdev)
 SOFT_RESET_PA |
 SOFT_RESET_SH |
 SOFT_RESET_VGT |
+SOFT_RESET_SPI |
 SOFT_RESET_SX));
RREG32(GRBM_SOFT_RESET);
mdelay(15);
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index 44c4750..a2e00fa 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1159,6 +1159,7 @@ int cayman_cp_resume(struct radeon_device *rdev)
 SOFT_RESET_PA |
 SOFT_RESET_SH |
 SOFT_RESET_VGT |
+SOFT_RESET_SPI |
 SOFT_RESET_SX));
RREG32(GRBM_SOFT_RESET);
mdelay(15);
-- 
1.7.1



Re: [PATCH] drm/radeon/kms: evergreen ni reset SPI block on CP resume

2011-08-24 Thread Alex Deucher
On Wed, Aug 24, 2011 at 4:00 PM,  j.gli...@gmail.com wrote:
 From: Jerome Glisse jgli...@redhat.com

 For some reason the SPI block is in a broken state after module unloading.
 This leads to broken rendering after reloading the module. Fix this by
 resetting the SPI block in the CP resume function.

Looks good to me.

Reviewed-by: Alex Deucher alexander.deuc...@amd.com


 Signed-off-by: Jerome Glisse jgli...@redhat.com
 ---
  drivers/gpu/drm/radeon/evergreen.c |    1 +
  drivers/gpu/drm/radeon/ni.c        |    1 +
  2 files changed, 2 insertions(+), 0 deletions(-)

 diff --git a/drivers/gpu/drm/radeon/evergreen.c 
 b/drivers/gpu/drm/radeon/evergreen.c
 index fb5fa08..d8d71a3 100644
 --- a/drivers/gpu/drm/radeon/evergreen.c
 +++ b/drivers/gpu/drm/radeon/evergreen.c
 @@ -1357,6 +1357,7 @@ int evergreen_cp_resume(struct radeon_device *rdev)
                                 SOFT_RESET_PA |
                                 SOFT_RESET_SH |
                                 SOFT_RESET_VGT |
 +                                SOFT_RESET_SPI |
                                 SOFT_RESET_SX));
        RREG32(GRBM_SOFT_RESET);
        mdelay(15);
 diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
 index 44c4750..a2e00fa 100644
 --- a/drivers/gpu/drm/radeon/ni.c
 +++ b/drivers/gpu/drm/radeon/ni.c
 @@ -1159,6 +1159,7 @@ int cayman_cp_resume(struct radeon_device *rdev)
                                 SOFT_RESET_PA |
                                 SOFT_RESET_SH |
                                 SOFT_RESET_VGT |
 +                                SOFT_RESET_SPI |
                                 SOFT_RESET_SX));
        RREG32(GRBM_SOFT_RESET);
        mdelay(15);
 --
 1.7.1




[Bug 41642] radeon [XPRESS 200M 5955] wakeup from S3 and going into S4 broken with KMS and enabled framebuffer

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=41642


Rafael J. Wysocki r...@sisk.pl changed:

   What|Removed |Added

 CC||r...@sisk.pl
  Component|Hibernation/Suspend |Video(DRI - non Intel)
 AssignedTo|power-management_other@kern |drivers_video-dri@kernel-bu
   |el-bugs.osdl.org|gs.osdl.org
Product|Power Management|Drivers




-- 
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.


[Bug 41642] radeon [XPRESS 200M 5955] wakeup from S3 and going into S4 broken with KMS and enabled framebuffer

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=41642


Rafael J. Wysocki r...@sisk.pl changed:

   What|Removed |Added

 Blocks||7216






[Bug 41642] radeon [XPRESS 200M 5955] wakeup from S3 and going into S4 broken with KMS and enabled framebuffer

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=41642


Alex Deucher alexdeuc...@gmail.com changed:

   What|Removed |Added

 CC||alexdeuc...@gmail.com




--- Comment #3 from Alex Deucher alexdeuc...@gmail.com  2011-08-24 20:11:01 
---
Does this patch help?

diff --git a/drivers/gpu/drm/radeon/radeon_combios.c
b/drivers/gpu/drm/radeon/radeon_combios.c
index e0138b6..913a30b 100644
--- a/drivers/gpu/drm/radeon/radeon_combios.c
+++ b/drivers/gpu/drm/radeon/radeon_combios.c
@@ -3298,6 +3298,8 @@ void radeon_combios_asic_init(struct drm_device *dev)
	    rdev->pdev->subsystem_device == 0x30a4)
return;

+   return;
+
/* DYN CLK 1 */
table = combios_get_table_offset(dev, COMBIOS_DYN_CLK_1_TABLE);
if (table)

If so, please attach your lspci -vnn output so we can add a proper quirk for
your laptop.



[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=40952


Michel Dänzer mic...@daenzer.net changed:

   What|Removed |Added

 CC||riese...@lxtec.de,
   ||torvalds@linux-foundation.o
   ||rg




--- Comment #3 from Michel Dänzer mic...@daenzer.net  2011-08-24 21:03:48 ---
From Elimar Riesebieter (riesebie at lxtec.de):

bisecting brought me to commit
288d5abec8314ae50fe6692f324b0444acae8486. Reverting seems to work as
microcode is loaded with compiled in firmware and radeon kms driver.


That's

commit 288d5abec8314ae50fe6692f324b0444acae8486
Author: Linus Torvalds torva...@linux-foundation.org
Date:   Wed Aug 3 22:03:29 2011 -1000

Boot up with usermodehelper disabled



[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=40952





--- Comment #4 from Linus Torvalds torva...@linux-foundation.org  2011-08-24 
21:57:45 ---
On Wed, Aug 24, 2011 at 2:04 PM,  bugzilla-dae...@bugzilla.kernel.org wrote:

 --- Comment #3 from Michel Dänzer mic...@daenzer.net  2011-08-24 21:03:48 
 ---
 From Elimar Riesebieter (riesebie at lxtec.de):

 bisecting brought me to commit
 288d5abec8314ae50fe6692f324b0444acae8486. Reverting seems to work as
 microcode is loaded with compiled in firmware and radeon kms driver.

Grr.

So _request_firmware() does this:

if (WARN_ON(usermodehelper_is_disabled())) {
                dev_err(device, "firmware: %s will not be loaded\n", name);
return -EBUSY;
}

which is reasonable, but it does mean that it will warn even if the
firmware is built into the kernel.

On the one hand, that's really nice, because it implies a driver does
a firmware load too early, at a point where it cannot do the generic
firmware load.

On the other hand, it sucks, because it does disallow this situation
that used to work, now that we actually do the sane thing and don't
allow usermode helpers before init has been set up.

So I bet the attached patch fixes the R100 problem, but I'm not 100%
happy with it.

Comments?

   Linus



[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=40952





--- Comment #5 from Linus Torvalds torva...@linux-foundation.org  2011-08-24 
23:05:19 ---
Ok, that fix is committed as caca9510ff4e ("firmware loader: allow builtin
firmware load even if usermodehelper is disabled").

Please verify that it does fix it, and close this bug if so.



[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=40952





--- Comment #6 from James Cloos cl...@jhcloos.com  2011-08-24 23:54:29 ---
 Ok, that fix is committed as caca9510ff4e ("firmware loader: allow
 builtin firmware load even if usermodehelper is disabled").

 Please verify that it does fix it, and close this bug if so.

Will do, just as soon as it hits hera.  (Not there yet.)



[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=40952





--- Comment #7 from Linus Torvalds torva...@linux-foundation.org  2011-08-25 
00:00:21 ---
James Cloos cl...@jhcloos.com  2011-08-24 23:54:29:

 Will do, just as soon as it hits hera.  (Not there yet.)

Duh. I forgot to push.

Done,

  Linus



[Bug 40952] R100 firmware no longer loads

2011-08-24 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=40952





--- Comment #8 from Anonymous Emailer anonym...@kernel-bugs.osdl.org  
2011-08-25 01:07:40 ---
Reply-To: gre...@suse.de

On Wed, Aug 24, 2011 at 02:22:27PM -0700, Linus Torvalds wrote:
 On Wed, Aug 24, 2011 at 2:04 PM,  bugzilla-dae...@bugzilla.kernel.org wrote:
 
  --- Comment #3 from Michel Dänzer mic...@daenzer.net  2011-08-24 21:03:48 
  ---
  From Elimar Riesebieter (riesebie at lxtec.de):
 
  bisecting brought me to commit
  288d5abec8314ae50fe6692f324b0444acae8486. Reverting seems to work as
  microcode is loaded with compiled in firmware and radeon kms driver.
 
 Grr.
 
 So _request_firmware() does this:
 
 if (WARN_ON(usermodehelper_is_disabled())) {
 dev_err(device, "firmware: %s will not be loaded\n", name);
 return -EBUSY;
 }
 
 which is reasonable, but it does mean that it will warn even if the
 firmware is built into the kernel.
 
 On the one hand, that's really nice, because it implies a driver does
 a firmware load too early, at a point where it cannot do the generic
 firmware load.
 
 On the other hand, it sucks, because it does disallow this situation
 that used to work, now that we actually do the sane thing and don't
 allow usermode helpers before init has been set up.
 
 So I bet the attached patch fixes the R100 problem, but I'm not 100%
 happy with it.
 
 Comments?

That patch looks good to me.

Any ideas on ways that this all could be rewritten to be saner?

greg k-h
