On Wed, 28 Feb 2024 at 19:07, Heinrich Schuchardt
<heinrich.schucha...@canonical.com> wrote:
>
> On 28.02.24 19:39, Peter Maydell wrote:
> > The limitation to a page dates back to commit 6d16c2f88f2a in 2009,
> > which was the first implementation of this function. I don't think
> > there is any particular reason for that value beyond it probably
> > being a convenient size that was assumed to be "big enough".
> >
> > I think the idea with this bounce-buffer has always been that this
> > isn't really a code path we expected to end up in very often --
> > it's supposed to be for when devices are doing DMA, which they
> > will typically be doing to memory (backed by host RAM), not
> > devices (backed by MMIO and needing a bounce buffer). So the
> > whole mechanism is a bit of a "last fallback to stop things
> > breaking entirely".
> >
> > The address_space_map() API says that it's allowed to return
> > a subset of the range you ask for, so if the virtio code can't
> > cope with the returned length being capped at TARGET_PAGE_SIZE
> > then we need to either fix that virtio code or change the API
> > of this function. (But I think you will also get a reduced
> > range if you try to use it across a boundary between normal
> > host-memory-backed RAM and a device MemoryRegion.)
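
As an aside, that contract means callers always have to be prepared
to loop. A minimal sketch of the expected pattern, assuming as, addr,
len and is_write are in scope, and with access_bytes() as a purely
hypothetical stand-in for whatever the caller does with each chunk:

    while (len > 0) {
        hwaddr xlen = len;
        void *p = address_space_map(as, addr, &xlen, is_write,
                                    MEMTXATTRS_UNSPECIFIED);
        if (!p) {
            break;              /* e.g. the bounce buffer is busy */
        }
        access_bytes(p, xlen);  /* hypothetical per-chunk work */
        address_space_unmap(as, p, xlen, is_write, xlen);
        addr += xlen;
        len -= xlen;
    }
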
>
> If a bounce buffer can only serve one mapping at a time (enforced via
> the in_use flag), why do we allow only a single bounce buffer?
>
> Could address_space_map() allocate a new bounce buffer on every call and
> address_space_unmap() deallocate it?
>
> Isn't a design with a single bounce buffer bound to fail with a
> multi-threaded client, since collisions are to be expected?

Yeah, I don't suppose multi-threaded use was particularly expected.
Again, this is really a "handle the case where the guest does
something silly" setup, which is why there is only one bounce buffer.
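
Concretely, the fallback path in address_space_map() is built around
a single static buffer, roughly like this (paraphrased, so treat it
as a sketch rather than the exact code in the tree):

    static BounceBuffer bounce;   /* one buffer, system-wide */

    if (!memory_access_is_direct(mr, is_write)) {
        if (qatomic_xchg(&bounce.in_use, true)) {
            *plen = 0;
            return NULL;          /* second mapper loses the race */
        }
        /* Avoid unbounded allocations */
        l = MIN(l, TARGET_PAGE_SIZE);
        bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
        ...
    }

So two threads that hit MMIO-backed DMA at the same time will indeed
collide: the second one just gets a NULL mapping back.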

Why is your guest ending up in the bounce-buffer path?

-- PMM