On Mon, Jan 24, 2022 at 05:01:04PM +0100, Jan Beulich wrote:
> On 24.01.2022 16:23, Roger Pau Monne wrote:
> > RMRRs are setup ahead of populating the p2m and hence the ASSERT when
> > populating the low 1MB needs to be relaxed when it finds an existing
> > entry: it's either RAM or a RMRR resulting from the IOMMU setup.
> >
> > Rework the logic a bit and introduce a local mfn variable in order to
> > assert that if the gfn is populated and not RAM it is an identity map.
> >
> > Fixes: 6b4f6a31ac ('x86/PVH: de-duplicate mappings for first Mb of Dom0
> > memory')
> > Signed-off-by: Roger Pau Monné <roger....@citrix.com>
>
> Reviewed-by: Jan Beulich <jbeul...@suse.com>
> albeit ...
>
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -466,11 +466,17 @@ static int __init pvh_populate_p2m(struct domain *d)
> >      for ( i = rc = 0; i < MB1_PAGES; ++i )
> >      {
> >          p2m_type_t p2mt;
> > +        mfn_t mfn;
> >
> > -        if ( mfn_eq(get_gfn_query(d, i, &p2mt), INVALID_MFN) )
> > +        mfn = get_gfn_query(d, i, &p2mt);
>
> ... preferably with this becoming the initializer of the variable then
> and ...
>
> > +        if ( mfn_eq(mfn, INVALID_MFN) )
> >              rc = set_mmio_p2m_entry(d, _gfn(i), _mfn(i), PAGE_ORDER_4K);
> > -        else
> > -            ASSERT(p2mt == p2m_ram_rw);
> > +        else if ( p2mt != p2m_ram_rw && !mfn_eq(mfn, _mfn(i)) )
> > +            /*
> > +             * If the p2m entry is already set it must belong to a RMRR and
> > +             * already be identity mapped, or be a RAM region.
> > +             */
> > +            ASSERT_UNREACHABLE();
>
> ... (not just preferably) indentation reduced by a level here. (I wonder
> why you didn't simply extend the existing ASSERT() - ASSERT_UNREACHABLE()
> is nice in certain cases, but the expression it logs is entirely unhelpful
> for disambiguating the reason without looking at the source.)
Indeed, that's better. Sorry about the indentation, not sure what my editor
has done here.

Thanks, Roger.