Hello all.
The potential issue on Arm (which might happen when remapping the
grant-table frame) is still present; it hasn't disappeared.
The current patch is an attempt to fix it. Although I have addressed
(I hope) all review comments received for this patch, I realize that
the patch (in its current form) cannot go in without resolving the
locking issue I described below the commit message (we don't want to
make things worse than the current state). I would appreciate any
thoughts on that.
On 25.09.21 04:48, Julien Grall wrote:
Hi Roger,
On 24/09/2021 21:10, Roger Pau Monné wrote:
On Fri, Sep 24, 2021 at 07:52:24PM +0500, Julien Grall wrote:
Hi Roger,
On 24/09/2021 13:41, Roger Pau Monné wrote:
On Thu, Sep 23, 2021 at 09:59:26PM +0100, Andrew Cooper wrote:
On 23/09/2021 20:32, Oleksandr Tyshchenko wrote:
Suggested-by: Julien Grall <jgr...@amazon.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
---
You can find the related discussions at:
https://lore.kernel.org/xen-devel/93d0df14-2c8a-c2e3-8c51-544121901...@xen.org/
https://lore.kernel.org/xen-devel/1628890077-12545-1-git-send-email-olekst...@gmail.com/
https://lore.kernel.org/xen-devel/1631652245-30746-1-git-send-email-olekst...@gmail.com/
! Please note, there is still an unresolved locking question here for
which I failed to find a suitable solution. So, it is still an RFC !
Just FYI, I thought I'd share some of the plans for ABI v2. Obviously
these plans are future work and don't solve the current problem.
Guests mapping Xen pages is backwards. There are reasons why it was
used for x86 PV guests, but the entire interface should have been
designed differently for x86 HVM.
In particular, Xen should be mapping guest RAM, rather than the guest
manipulating the 2nd stage tables to map Xen RAM. Amongst other
things, it's far, far lower overhead.
A much better design is one where the grant table looks like an MMIO
device. The domain builder decides the ABI (v1 vs v2 - none of this
dynamic switch at runtime nonsense) and picks a block of guest
physical addresses, which are registered with Xen. This forms the
grant table, the status table (v2 only), and holes to map into.
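As a rough illustration of the layout Andrew describes (all names, the
struct, and the frame ordering are hypothetical, not a real or proposed
ABI), the reserved GPA block could be carved up like this:

```c
/* Hypothetical sketch: the domain builder reserves one contiguous block
 * of guest-physical address space and splits it into grant-table frames,
 * status frames (ABI v2 only) and "holes" that Xen later fills by
 * mapping granted guest RAM. Purely illustrative, not a real interface. */
#include <stdint.h>

#define GT_FRAME_SIZE 4096u

struct gt_region {
    uint64_t base_gfn;  /* start of the reserved GPA block, in frames */
    uint32_t nr_grant;  /* grant-table frames */
    uint32_t nr_status; /* status frames (0 for ABI v1) */
    uint32_t nr_holes;  /* frames Xen maps guest RAM into */
};

/* GPA of the i-th hole frame, placed after the grant and status frames. */
static uint64_t gt_hole_gpa(const struct gt_region *r, uint32_t i)
{
    return (r->base_gfn + r->nr_grant + r->nr_status + i) * GT_FRAME_SIZE;
}
```

The point of the fixed, builder-chosen block is that Xen maps guest RAM
into the holes, instead of the guest editing 2nd-stage tables to map Xen
pages.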
I think this could be problematic for identity mapped Arm dom0, as
IIRC in that case grants are mapped so that gfn == mfn in order to
account for the lack of an IOMMU. You could use a bounce buffer, but
that would introduce a big performance penalty.
Or you could find a hole that is outside of the RAM regions. This is
not trivial but not impossible (see [1]).
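To illustrate Julien's suggestion (function and structure names are made
up; real code, cf. the referenced series, must also avoid MMIO and
reserved regions), finding such a hole amounts to scanning the sorted
RAM banks for a large enough gap:

```c
/* Illustrative sketch: pick the first gap of at least 'len' bytes
 * between sorted, non-overlapping RAM banks. Returns 0 if none found.
 * Names are hypothetical, not taken from the Xen tree. */
#include <stdint.h>
#include <stddef.h>

struct ram_bank { uint64_t start, size; };

static uint64_t find_unallocated_hole(const struct ram_bank *banks,
                                      size_t nr, uint64_t len)
{
    for (size_t i = 0; i + 1 < nr; i++) {
        uint64_t end = banks[i].start + banks[i].size;
        if (banks[i + 1].start - end >= len)
            return end;
    }
    return 0;
}
```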
I'm certainly not familiar with the Arm identity map.
If you map them at random addresses (so no longer identity mapped),
how do you pass the addresses to the physical devices for DMA
operations? I assume there must be some kind of translation that
converts from gfn to mfn in order to cope with the lack of an IOMMU,
For grant mapping, the hypercall will return the machine address in
dev_bus_addr. Dom0 will keep the dom0 GFN <-> MFN conversion for
later use in the swiotlb.
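A minimal sketch of the bookkeeping Julien describes (the linear table
and all function names are illustrative assumptions; Linux's swiotlb-xen
uses its own data structures): when dom0 grant-maps a frame, the machine
address returned in dev_bus_addr is recorded against the dom0 GFN so the
swiotlb can later hand the real bus address to a device.

```c
/* Illustrative dom0-side GFN <-> MFN tracking for swiotlb use.
 * A flat table is used purely for clarity. */
#include <stdint.h>
#include <stddef.h>

#define MAX_ENTRIES 64

struct gfn_mfn { uint64_t gfn, mfn; };

static struct gfn_mfn xlat[MAX_ENTRIES];
static size_t xlat_used;

/* Record the mapping when the grant-map hypercall returns dev_bus_addr. */
static int record_grant_map(uint64_t gfn, uint64_t dev_bus_addr)
{
    if (xlat_used == MAX_ENTRIES)
        return -1;
    xlat[xlat_used].gfn = gfn;
    xlat[xlat_used].mfn = dev_bus_addr >> 12; /* machine frame number */
    xlat_used++;
    return 0;
}

/* swiotlb lookup: translate a dom0 GFN to the frame to program for DMA.
 * Ordinary RAM is identity mapped, so an unknown gfn maps to itself. */
static uint64_t gfn_to_mfn(uint64_t gfn)
{
    for (size_t i = 0; i < xlat_used; i++)
        if (xlat[i].gfn == gfn)
            return xlat[i].mfn;
    return gfn;
}
```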
For foreign mappings, AFAICT, we expect them to bounce every
time. But DMA into a foreign mapping should be rarer.
and because
dom0 doesn't know the mfn of the grant reference in order to map it at
the same gfn.
IIRC, we tried an approach where the grant mapping would be direct
mapped in dom0. However, this was an issue on arm32 because Debian was
(is?) using short-descriptor page tables. This didn't allow dom0 to
cover all the mappings, and therefore some mappings would not be
accessible.
--
Regards,
Oleksandr Tyshchenko