When we get an MMIO request, we always get variables in host endianness. The only time we need to actually reverse byte order is when we read bytes from guest memory.
Apparently the DBDMA implementation is different there. A lot of the logic in there depends on values being big endian.

Now, qemu does all the conversion in the MMIO handlers for us already though, so it turns out that we're in the same byte order from a C point of view, but cpu_to_be32 and be32_to_cpu end up being nops. This makes the code work differently on x86 (little endian) than on ppc (big endian). On x86 it works, on ppc it doesn't.

This patch (while being seriously hacky and ugly) makes dbdma emulation work on ppc hosts. I'll leave the real fixing to someone else.

Signed-off-by: Alexander Graf <ag...@suse.de>
CC: Laurent Vivier <laur...@vivier.eu>
---
 hw/mac_dbdma.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/hw/mac_dbdma.c b/hw/mac_dbdma.c
index 98dccfd..4dbfc16 100644
--- a/hw/mac_dbdma.c
+++ b/hw/mac_dbdma.c
@@ -40,6 +40,14 @@
 #include "isa.h"
 #include "mac_dbdma.h"
 
+/*
+ * XXX This is just plain wrong. Apparently we don't want to have big endian
+ * values, but reversed endian ones. The code as is doesn't work on big
+ * endian hosts. With these defines it does.
+ */
+#define cpu_to_be32 bswap32
+#define be32_to_cpu bswap32
+
 /* debug DBDMA */
 //#define DEBUG_DBDMA
-- 
1.6.0.2