Re: bcm43xx: FATAL ERROR DMA RINGMEMORY >1G

2006-04-12 Thread Benjamin Herrenschmidt

 +static inline unsigned long device_to_mask(struct device *hwdev)
 +{
 +	struct pci_dev *pdev;
 +
 +	if (!hwdev) {
 +		pdev = ppc64_isabridge_dev;
 +		if (!pdev) /* This is the best guess we can do */
 +			return 0xfffffffful;
 +	} else
 +		pdev = to_pci_dev(hwdev);
 +
 +	if (pdev->dma_mask)
 +		return pdev->dma_mask;
 +
 +	/* Assume devices without mask can take 32 bit addresses */
 +	return 0xfffffffful;
 +}

Won't that blow up in flames with non-PCI devices like ... vio ?

We really need the kernel to move the dma mask to struct device instead
of struct pci_dev but that's a different debate ...
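
A mask on the bus-neutral struct device would let the device_to_mask() helper above serve PCI and vio alike, with no pci_dev cast. A minimal userspace sketch of that arrangement (the struct layout and helper here are simplified stand-ins, not the kernel's real types; the 32-bit default mirrors the patch's comment):

```c
#include <assert.h>

typedef unsigned long long u64;

/* Hypothetical stand-in: the DMA mask lives directly on the
 * bus-neutral device, so no bus-specific downcast is needed. */
struct device {
	u64 dma_mask;
};

static u64 device_to_mask(struct device *dev)
{
	if (dev && dev->dma_mask)
		return dev->dma_mask;
	/* Assume devices without a mask can take 32-bit addresses. */
	return 0xffffffffull;
}
```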

Ben.


___
Bcm43xx-dev mailing list
Bcm43xx-dev@lists.berlios.de
http://lists.berlios.de/mailman/listinfo/bcm43xx-dev


Re: bcm43xx: FATAL ERROR DMA RINGMEMORY >1G

2006-04-12 Thread Benjamin Herrenschmidt

 Why would vio use pci_iommu_map_.*()? That's where the above change
 is. The change to the vio code (arch/powerpc/kernel/vio.c) will pass in
 ~0ul as the mask.
 
 Or did I misunderstand your comment?

No, it's me. I misread the patch, thought you were hacking in dma_*.
I forgot about the pci_* intermediary wrappers; funny, I think I actually
wrote them :) 

  We really need the kernel to move the dma mask to struct device instead
  of struct pci_dev but that's a different debate ...
 
 I'm not so sure. Besides the awkwardness above, it's not like it's a
 huge penalty to go to the pci_dev. We have to do it to get to the table
 anyway, so caches are hot, etc.
 
 
 -Olof



Re: bcm43xx: FATAL ERROR DMA RINGMEMORY >1G

2006-04-12 Thread Benjamin Herrenschmidt
On Wed, 2006-04-12 at 00:41 -0500, Olof Johansson wrote:
 On Tue, Apr 11, 2006 at 01:16:26PM -0400, [EMAIL PROTECTED] wrote:
  I have had intermittent success using my PowerMac G5's Airport Extreme
  card, and it turns out it is because I've switched up how much RAM my
  system uses between installs (specifically from 512MB to 1.5GB).
 
 Yup. Care to give this patch a run? It's against 2.6.17-rc1-git5, but it
 should apply on top of 2.6.16 with just a few line offsets.
 
 I don't have a G5 with airport myself, so I'm afraid I can't fully test
 it. Not only that, but since I'm not at home at the moment and my G5s
 are there, I can't even really boot test it on Apple hardware. I have
 at least been able to boot it on a pSeries.
 
 I'd appreciate quick feedback; there might still be a slim chance to get
 this into 2.6.17. I'll submit it if it works for you.

You also need to boot with iommu=on to force-enable it if you have 2Gb
or less of RAM, as currently the kernel only enables it by default on
configs with more than 2Gb.
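
For example, on a yaboot-based install the option goes on the kernel command line (file name and image stanza are the usual defaults and may differ per distribution):

```shell
# In /etc/yaboot.conf, add iommu=on to the kernel arguments, e.g.:
#
#   image=/boot/vmlinux
#           label=Linux
#           append="iommu=on"
#
# then re-run ybin, reboot, and confirm the option took effect:
grep -o 'iommu=on' /proc/cmdline
```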

Ben



Re: bcm43xx: FATAL ERROR DMA RINGMEMORY >1G

2006-04-11 Thread Benjamin Herrenschmidt

 Apparently it's a DMA issue specific to Broadcom and the PPC Linux kernel 
 people don't
 want to mess with the purity of the kernel just to fix it.

Heh... where did you get that from ? :)

 Any chance you guys could
 give it a whirl or know how I can resolve this?  Forgive my inexperience.  
 I'm using
 Ubuntu Linux Dapper Flight 6, upgraded to the latest PPC 64 kernel.

There isn't much the driver folks can do there except maybe allowing for
a PIO only mode.

Regarding the purity of the kernel, it's not that simple :) Adding a
ZONE_DMA like x86 would really suck big time and would affect all sorts
of things in pretty shitty ways. It's an ISA-ism; there is simply no
point in bringing that to ppc.

At this point, the best option would be to use iommu tricks, along with
always enabling that iommu even on configs with less than 2Gb of RAM. A
bit nasty but I can't see any other way out at this point. I plan to
give it a go sooner or later, just didn't have time so far.
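
Concretely, the iommu trick amounts to clamping the allocator's search window so no mapping lands above the device's DMA mask, which is what Olof's patch later in this thread does inside iommu_range_alloc(). A standalone sketch of that arithmetic, in IOMMU-page units (helper names and numbers are illustrative, not the kernel's):

```c
#include <assert.h>

typedef unsigned long ulong;

/* Clamp [0, limit) so that (entry + offset) never exceeds mask. */
static ulong clamp_limit(ulong limit, ulong offset, ulong mask)
{
	if (limit + offset > mask)
		limit = mask - offset + 1;
	return limit;
}

/* Pick a starting hint: keep it if it fits under the clamped limit,
 * otherwise restart the search at 0 (as on a second pass). */
static ulong pick_start(ulong hint, ulong limit, ulong mask, int pass)
{
	if ((hint & mask) >= limit || pass > 0)
		return 0;
	return hint & mask;
}
```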

Ben.




Re: bcm43xx: FATAL ERROR DMA RINGMEMORY >1G

2006-04-11 Thread Joseph Jezak

There isn't much the driver folks can do there except maybe allowing for
a PIO only mode.


Unfortunately, on newer cards, the hardware doesn't support PIO mode.
The queue lengths are reported as 0, so we can't use them. :p
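
In other words, the driver's choice collapses roughly like this (choose_mode and the queue-length array are hypothetical stand-ins, not real bcm43xx driver code):

```c
#include <assert.h>

enum xfer_mode { MODE_DMA, MODE_PIO, MODE_NONE };

/* If any hardware queue reports length 0 (as newer cards do),
 * PIO is unusable and DMA, with its addressing limits, is the
 * only option left. */
static enum xfer_mode choose_mode(const unsigned int *queue_len,
				  int nqueues, int dma_usable)
{
	int i, pio_ok = 1;

	for (i = 0; i < nqueues; i++)
		if (queue_len[i] == 0)
			pio_ok = 0;

	if (dma_usable)
		return MODE_DMA;
	return pio_ok ? MODE_PIO : MODE_NONE;
}
```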

-Joe


Re: bcm43xx: FATAL ERROR DMA RINGMEMORY >1G

2006-04-11 Thread Olof Johansson
On Tue, Apr 11, 2006 at 01:16:26PM -0400, [EMAIL PROTECTED] wrote:
 I have had intermittent success using my PowerMac G5's Airport Extreme
 card, and it turns out it is because I've switched up how much RAM my
 system uses between installs (specifically from 512MB to 1.5GB).

Yup. Care to give this patch a run? It's against 2.6.17-rc1-git5, but it
should apply on top of 2.6.16 with just a few line offsets.

I don't have a G5 with airport myself, so I'm afraid I can't fully test
it. Not only that, but since I'm not at home at the moment and my G5s
are there, I can't even really boot test it on Apple hardware. I have
at least been able to boot it on a pSeries.

I'd appreciate quick feedback; there might still be a slim chance to get
this into 2.6.17. I'll submit it if it works for you.


-Olof



Index: 2.6/arch/powerpc/kernel/iommu.c
===
--- 2.6.orig/arch/powerpc/kernel/iommu.c
+++ 2.6/arch/powerpc/kernel/iommu.c
@@ -61,6 +61,7 @@ __setup("iommu=", setup_iommu);
 static unsigned long iommu_range_alloc(struct iommu_table *tbl,
unsigned long npages,
unsigned long *handle,
+  unsigned long mask,
unsigned int align_order)
 { 
unsigned long n, end, i, start;
@@ -97,9 +98,21 @@ static unsigned long iommu_range_alloc(s
 */
        if (start >= limit)
                start = largealloc ? tbl->it_largehint : tbl->it_hint;
-
+
  again:
 
+       if (limit + tbl->it_offset > mask) {
+               limit = mask - tbl->it_offset + 1;
+               /* If we're constrained on address range, first try
+                * at the masked hint to avoid O(n) search complexity,
+                * but on second pass, start at 0.
+                */
+               if ((start & mask) >= limit || pass > 0)
+                       start = 0;
+               else
+                       start &= mask;
+       }
+
        n = find_next_zero_bit(tbl->it_map, limit, start);
 
/* Align allocation */
@@ -150,14 +163,14 @@ static unsigned long iommu_range_alloc(s
 
 static dma_addr_t iommu_alloc(struct iommu_table *tbl, void *page,
   unsigned int npages, enum dma_data_direction direction,
-  unsigned int align_order)
+  unsigned long mask, unsigned int align_order)
 {
unsigned long entry, flags;
dma_addr_t ret = DMA_ERROR_CODE;
-   
+
        spin_lock_irqsave(&(tbl->it_lock), flags);
 
-       entry = iommu_range_alloc(tbl, npages, NULL, align_order);
+       entry = iommu_range_alloc(tbl, npages, NULL, mask, align_order);
 
        if (unlikely(entry == DMA_ERROR_CODE)) {
                spin_unlock_irqrestore(&(tbl->it_lock), flags);
@@ -236,7 +249,7 @@ static void iommu_free(struct iommu_tabl
 
 int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
struct scatterlist *sglist, int nelems,
-   enum dma_data_direction direction)
+   unsigned long mask, enum dma_data_direction direction)
 {
dma_addr_t dma_next = 0, dma_addr;
unsigned long flags;
@@ -274,7 +287,7 @@ int iommu_map_sg(struct device *dev, str
        vaddr = (unsigned long)page_address(s->page) + s->offset;
        npages = PAGE_ALIGN(vaddr + slen) - (vaddr & PAGE_MASK);
        npages >>= PAGE_SHIFT;
-       entry = iommu_range_alloc(tbl, npages, &handle, 0);
+       entry = iommu_range_alloc(tbl, npages, &handle,
+                                 mask >> PAGE_SHIFT, 0);
 
        DBG("  - vaddr: %lx, size: %lx\n", vaddr, slen);
 
@@ -479,7 +492,8 @@ void iommu_free_table(struct device_node
  * byte within the page as vaddr.
  */
 dma_addr_t iommu_map_single(struct iommu_table *tbl, void *vaddr,
-   size_t size, enum dma_data_direction direction)
+   size_t size, unsigned long mask,
+   enum dma_data_direction direction)
 {
dma_addr_t dma_handle = DMA_ERROR_CODE;
unsigned long uaddr;
@@ -492,7 +506,8 @@ dma_addr_t iommu_map_single(struct iommu
        npages >>= PAGE_SHIFT;
 
        if (tbl) {
-               dma_handle = iommu_alloc(tbl, vaddr, npages, direction, 0);
+               dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
+                                        mask >> PAGE_SHIFT, 0);
                if (dma_handle == DMA_ERROR_CODE) {
                        if (printk_ratelimit())  {
                                printk(KERN_INFO "iommu_alloc failed, "
@@ -521,7 +536,7 @@ void iommu_unmap_single(struct iommu_tab
  * to the dma address (mapping) of the first page.
  */
 void *iommu_alloc_coherent(struct iommu_table *tbl, size_t size,
-   dma_addr_t *dma_handle, gfp_t flag)
+   dma_addr_t *dma_handle, unsigned long mask, gfp_t flag)
 {
void *ret = NULL;
dma_addr_t mapping;
@@ -551,7 +566,8 @@