On 09/24/2012 08:33 AM, liu ping fan wrote:
> On Wed, Sep 19, 2012 at 5:50 PM, Avi Kivity <a...@redhat.com> wrote:
> > On 09/19/2012 12:34 PM, Jan Kiszka wrote:
> >>
> >> What about the following:
> >>
> >> What we really need to support in practice is an MMIO access that
> >> triggers a RAM access of a device model.  Scenarios where a device
> >> access triggers another MMIO access could likely just be rejected
> >> without causing trouble.
> >>
> >> So, when we dispatch a request to a device, we mark that the
> >> current thread is in an MMIO dispatch and reject any follow-up
> >> c_p_m_rw that does _not_ target RAM, i.e. is another, nested MMIO
> >> request - independent of its destination.  How much of the known
> >> issues would this solve?  And what would remain open?
> >
> > Various iommu-like devices re-dispatch I/O, e.g. for changing
> > endianness or for bitband.  I don't know whether that targets I/O
> > rather than RAM.
> >
> I have not found the exact code, but I think the call chain may look
> like this:
>
>   dev mmio-handler --> c_p_m_rw() --> iommu mmio-handler --> c_p_m_rw()
>
> I think you are worried about the "c_p_m_rw() --> iommu mmio-handler"
> case, right?  How about introducing a can_nest member in the
> MemoryRegionOps of the iommu's mr?
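For concreteness, a minimal sketch of the per-thread guard Jan
describes might look like the following.  All names here
(in_mmio_dispatch, mmio_access_allowed, mmio_dispatch) are
hypothetical and not existing QEMU code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Per-thread flag: set while a device MMIO handler is running. */
static __thread bool in_mmio_dispatch;

/* Check made by a c_p_m_rw()-style dispatcher when the destination
 * turns out to be another MMIO region rather than RAM. */
static bool mmio_access_allowed(void)
{
    if (in_mmio_dispatch) {
        /* Nested MMIO request issued from within a device handler. */
        fprintf(stderr, "rejecting nested MMIO access\n");
        return false;
    }
    return true;
}

/* Wrapper around the call into a device's MMIO handler. */
static void mmio_dispatch(void (*handler)(uint64_t addr, uint64_t val),
                          uint64_t addr, uint64_t val)
{
    in_mmio_dispatch = true;
    handler(addr, val);
    in_mmio_dispatch = false;
}

A can_nest member along the lines liu suggests would then amount to
exempting regions that opt in from the mmio_access_allowed() check.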
I would rather push the iommu logic into the memory API:

  memory_region_init_iommu(MemoryRegion *mr, const char *name,
                           MemoryRegion *target,
                           MemoryRegionIOMMUOps *ops,
                           unsigned size)

  struct MemoryRegionIOMMUOps {
      target_physical_addr_t (*translate)(target_physical_addr_t addr,
                                          bool write);
      void (*fault)(target_physical_addr_t addr);
  };

I'll look at a proposal for this.  It's a generalized case of
memory_region_init_alias().

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
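To illustrate, a device model might fill in the proposed ops like
this.  Everything below follows Avi's sketch above and is
hypothetical; it is not an existing QEMU interface, and the typedef
merely stands in for QEMU's address type:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

typedef uint64_t target_physical_addr_t;  /* stand-in for QEMU's type */

typedef struct MemoryRegionIOMMUOps {
    target_physical_addr_t (*translate)(target_physical_addr_t addr,
                                        bool write);
    void (*fault)(target_physical_addr_t addr);
} MemoryRegionIOMMUOps;

/* Trivial identity-plus-offset translation: a window into the target
 * region starting at 1MB.  Real translate hooks would consult page
 * tables or device state, possibly depending on 'write'. */
static target_physical_addr_t my_iommu_translate(target_physical_addr_t addr,
                                                 bool write)
{
    (void)write;
    return addr + 0x100000;
}

/* Called when translation fails; here we just log the address. */
static void my_iommu_fault(target_physical_addr_t addr)
{
    fprintf(stderr, "iommu fault at 0x%" PRIx64 "\n", (uint64_t)addr);
}

static MemoryRegionIOMMUOps my_iommu_ops = {
    .translate = my_iommu_translate,
    .fault     = my_iommu_fault,
};

Registration would then presumably look something like

  memory_region_init_iommu(&iommu_mr, "my-iommu", system_memory,
                           &my_iommu_ops, 0x100000);

with the region dispatching accesses to its target through
translate(), i.e. behaving like a programmable
memory_region_init_alias().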