On 06/12/2012 04:32 PM, Peter Maydell wrote:
> On 12 June 2012 14:18, Avi Kivity <a...@redhat.com> wrote:
>> On 06/12/2012 03:58 PM, Peter Maydell wrote:
>>> On 11 June 2012 18:31, Avi Kivity <a...@redhat.com> wrote:
>>>> On 06/11/2012 06:01 PM, Anthony Liguori wrote:
>>>>> Perhaps we should just make MemoryRegion work in both directions?
>>>
>>>> The other direction is currently cpu_physical_memory_rw().  We do need
>>>> to support issuing transactions from arbitrary points in the memory
>>>> hierarchy, but I don't think a device's MemoryRegion is the right
>>>> interface.  Being able to respond to memory transactions, and being
>>>> able to issue them are two different things.
>>>
>>> ...they're just opposite sides of the same interface, though,
>>> really.  For instance you could say that any memory transaction
>>> master (cpu, dma controller, whatever) should take a single
>>> MemoryRegion* and must issue all its memory accesses to that MR*.
>>> (obviously that would usually be a container region.)
>>
>> It would be a container region, and it would be unrelated to any other
>> regions held by the device (the device might not have any memory
>> regions; instead it would only be able to do dma).
>
> It shouldn't actually be owned by the transaction master, but
> by whatever the parent object is that created the transaction
> master.  So for instance for an ARM board you'd have something
> like:
> * top level machine QOM object creates a 'system-memory'
>   container region, and puts all the devices in it in their
>   correct locations
> * top level object also creates the cortex-a9 device, and
>   passes it a pointer to the system-memory container
> * the cortex-a9 device instantiates the CPU cores and the
>   per-cpu devices, and creates a container region for
>   each cpu containing (devices for that cpu, plus the
>   system-memory region it got passed).
> It passes a pointer
>   to the right region to each cpu core
> * the cpu cores just use the region they're given
> * if there's a dma controller in the system, the top level
>   machine object creates the controller and hands it a
>   pointer to the system-memory container region too.  (So
>   the dma controller correctly doesn't see the per-cpu
>   devices.)
>
> (when I say 'passes a pointer' I mean something involving
> QOM links I expect.  I'm not sure if anybody's thought about
> how we expose memory regions in a QOM manner.)
>
> Notice that in this approach it's perfectly valid to have
> a board model which creates a single device and a single
> CPU and passes the device's MemoryRegion directly to the
> CPU.  This corresponds to a hardware design where the CPU's
> address lines just connect straight to the device's, ie
> there's no bus fabric or address decoding.
Yes, exactly.  If the device sees a byte-swapping bus, then instead of
giving it a container region, we give it an io region; the callbacks
byte-swap and write the contents to a container that does the rest of
the forwarding.

If it's an address remapping iommu, then we pass it a container region
with an alias-per-page that remaps the device addresses to addresses in
another container:

  iommu
   |
   +-alias-page-0 ---> system_memory[7]
   |-alias-page-1 ---> system_memory[3]
   .
   .
   .

So a device write to page 1 is redirected to page 3.

Of course in both cases we'll want to fold the functionality into the
memory API instead of making the iommu writer work so hard.

-- 
error compiling committee.c: too many arguments to function