Hi Thomas,

Thomas Huth <th...@redhat.com> writes:
> On Wed, 22 Apr 2015 16:27:19 +0530,
> Nikunj A Dadhania <nik...@linux.vnet.ibm.com> wrote:
>
>> With the addition of 64-bit BARs and the increase in the MMIO address
>> space, the code was hitting this limit. The memory of PCI devices
>> across the bridges was not accessible, due to which the drivers
>> failed.
>>
>> Signed-off-by: Nikunj A Dadhania <nik...@linux.vnet.ibm.com>
>> ---
>>  board-qemu/slof/pci-phb.fs | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs
>> index 529772f..e307d95 100644
>> --- a/board-qemu/slof/pci-phb.fs
>> +++ b/board-qemu/slof/pci-phb.fs
>> @@ -258,7 +258,8 @@ setup-puid
>>      decode-64 2 / dup >r                 \ Decode and calc size/2
>>      pci-next-mem @ + dup pci-max-mem !   \ and calc max mem address
>
> Could pci-max-mem overflow, too?
Should not; it was only the boundary that was an issue. QEMU sends base
and size, and base + size can reach the 32-bit boundary. For example,
with a base of 0xC000.0000 and a size of 0x4000.0000, we add base + size
and set pci-max-mmio to 0x1.0000.0000, which overflows 32 bits. That
would get programmed into the bridge BARs as 0xC000 for the lower limit
and 0x0000 for the upper limit, so no MMIO accesses were going across
the bridge. (A small sketch of the truncation is in the P.S. below.)

In my testing I have found one more issue, with translate-my-address:
it does not take care of 64-bit addresses. I have a patch working for
SLOF, but it's breaking guest kernel booting.

>>      dup pci-next-mmio !                \ which is the same as MMIO base
>> -    r> + pci-max-mmio !                \ calc max MMIO address
>> +    r> + FFFFFFFF min pci-max-mmio !   \ calc max MMIO address and
>> +                                       \ check the 32-bit boundary
>
>  Thomas

Regards,
Nikunj
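
P.S.: A minimal sketch of the truncation, in plain ANS Forth rather
than the actual SLOF words (mmio-base and mmio-size are made-up names
for this example; run on a Forth with 64-bit cells, e.g. gforth on a
64-bit host):

  hex
  C0000000 constant mmio-base   \ example MMIO base handed out by QEMU
  40000000 constant mmio-size   \ example MMIO window size

  mmio-base mmio-size + .               \ prints 100000000: one past uint32 max
  mmio-base mmio-size + FFFFFFFF and .  \ prints 0: what a 32-bit register keeps
  mmio-base mmio-size + FFFFFFFF min .  \ prints FFFFFFFF: what the patch stores
  decimal

With the clamp, pci-max-mmio stays at the top of the 32-bit window
instead of wrapping around to zero.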