device-assignment: difference between assigned_dev_iomem_map and ...map_slow

2011-04-21 Thread Jan Kiszka
Hi,

The latest qemu-kvm bails out on cleanup because it tries to call
cpu_register_physical_memory with a zero-sized region of an assigned
device. That made me dig into the setup/cleanup of memory-mapped I/O
regions, trying to consolidate and fix the code.

What are the differences between normal and slow mmio regions? The
former are mapped directly to the physical device (via
qemu_ram_alloc_from_ptr + cpu_register_physical_memory), while the
latter have to be dispatched in userspace (thus cpu_register_io_memory +
cpu_register_physical_memory), right?
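
For reference, this is roughly how I read the two paths; handler and
field names below are paraphrased from device-assignment.c and may not
match the source exactly:

    /* Fast path: map the mmap'ed BAR pages straight into guest memory. */
    ram_addr_t ram = qemu_ram_alloc_from_ptr(&dev->dev.qdev, name, size,
                                             region->u.r_virtbase);
    cpu_register_physical_memory(e_phys, size, ram);

    /* Slow path: every guest access traps and gets dispatched to the
     * slow_bar_read/slow_bar_write handler tables in userspace. */
    region->memory_index = cpu_register_io_memory(slow_bar_read,
                                                  slow_bar_write,
                                                  region,
                                                  DEVICE_NATIVE_ENDIAN);
    cpu_register_physical_memory(e_phys, size, region->memory_index);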

But why do we need to postpone cpu_register_io_memory to
assigned_dev_iomem_map_slow? It looks like that's effectively the only
difference between the two mapping callbacks (bugs and dead code
aside). Can't we set up the I/O region in assigned_dev_register_regions
analogously to normal regions?

BTW, the current code leaks the slow I/O region on cleanup.

Comments appreciated; I will translate them into a cleanup patch series.

Thanks,
Jan (who wanted to do something completely different...)

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: device-assignment: difference between assigned_dev_iomem_map and ...map_slow

2011-04-21 Thread Alex Williamson
On Thu, 2011-04-21 at 18:23 +0200, Jan Kiszka wrote:
> Hi,
> 
> The latest qemu-kvm bails out on cleanup because it tries to call
> cpu_register_physical_memory with a zero-sized region of an assigned
> device. That made me dig into the setup/cleanup of memory-mapped I/O
> regions, trying to consolidate and fix the code.

The teardown is gated by memory_index, so that means it's tearing down a
region that wasn't mapped by the guest?

> What are the differences between normal and slow mmio regions? The
> former are mapped directly to the physical device (via
> qemu_ram_alloc_from_ptr + cpu_register_physical_memory), while the
> latter have to be dispatched in userspace (thus cpu_register_io_memory +
> cpu_register_physical_memory), right?

Right.

> But why do we need to postpone cpu_register_io_memory to
> assigned_dev_iomem_map_slow? It looks like that's effectively the only
> difference between the two mapping callbacks (bugs and dead code
> aside). Can't we set up the I/O region in assigned_dev_register_regions
> analogously to normal regions?

I imagine it was an attempt not to overload memory_index for tests like
the one above, but apparently it's not working out so well. I don't see
any reason we shouldn't do the cpu_register_io_memory() call at setup.
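
I.e., something like the following in assigned_dev_register_regions(),
leaving only the physical mapping to the map callback. This is just a
sketch; the struct and field names are from memory and untested:

    /* At region setup time, once per slow BAR: */
    region->memory_index = cpu_register_io_memory(slow_bar_read,
                                                  slow_bar_write,
                                                  region,
                                                  DEVICE_NATIVE_ENDIAN);

    /* The map callback then shrinks to the physical registration: */
    static void assigned_dev_iomem_map_slow(PCIDevice *pci_dev, int region_num,
                                            pcibus_t e_phys, pcibus_t e_size,
                                            int type)
    {
        AssignedDevice *r_dev = container_of(pci_dev, AssignedDevice, dev);
        AssignedDevRegion *region = &r_dev->v_addrs[region_num];

        cpu_register_physical_memory(e_phys, e_size, region->memory_index);
    }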

> BTW, the current code leaks the slow I/O region on cleanup.

Yep, I don't see a cpu_unregister_io_memory() for that region either.
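
Something along these lines in the teardown path should plug it; again
only a sketch, assuming memory_index holds the value returned by
cpu_register_io_memory() and that the e_physbase/e_size field names are
right:

    if (region->memory_index) {
        /* Drop the guest-physical mapping before releasing the slot. */
        cpu_register_physical_memory(region->e_physbase, region->e_size,
                                     IO_MEM_UNASSIGNED);
        cpu_unregister_io_memory(region->memory_index);
        region->memory_index = 0;
    }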

> Comments appreciated; I will translate them into a cleanup patch series.

Thanks!

Alex

