Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Marek Marczykowski-Górecki
On Tue, Jun 11, 2024 at 04:07:03PM +0200, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 03:15:42PM +0200, Marek Marczykowski-Górecki wrote:
> > Its location is discovered at startup
> > (device presents a linked-list of capabilities in one of its BARs).
> > The spec talks only about alignment of individual registers, not the
> > whole group...
> 
> Never mind then; I had the expectation we could get away with a single
> page, but it doesn't look to be the case.
> 
> I assume the spec doesn't mention anything about the BAR where the
> capabilities reside having a size <= 4KiB.

No, and in fact I see it's a BAR of 64KiB on one of the devices...

> > > Maybe worth adding a comment that the logic here intends to deal only
> > > with the RW bits of a page that's otherwise RO, and that by not
> > > handling the RO regions the intention is that those are dealt with just
> > > like fully RO pages.
> > 
> > I can extend the comment, but I assumed it's kinda implied already (if
> > nothing else, by the function name).
> 
> Well, at this point we know the write is not going to make it to host
> memory.  The only reason to not handle the access here is that we want
> to unify the consequences it has for a guest writing to a RO address.

Yup.

> > > I guess there's some message printed when attempting to write to a RO
> > > page that you would also like to print here?
> > 
> > If an HVM domain writes to an R/O area, it is crashed, so you will get a
> > message. This applies to both full-page R/O and partial R/O. PV doesn't
> > go through subpage_mmio_write_accept().
> 
> Oh, crashing the domain is more strict than I was expecting.

That's how it was before; I'm not really changing it here. It's less
strict for PV though (the write either gets a #PF forwarded back to the
guest, or is ignored).

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab




Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Roger Pau Monné
On Tue, Jun 11, 2024 at 03:15:42PM +0200, Marek Marczykowski-Górecki wrote:
> On Tue, Jun 11, 2024 at 02:55:22PM +0200, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 01:38:35PM +0200, Marek Marczykowski-Górecki wrote:
> > > On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monné wrote:
> > > > On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki 
> > > > wrote:
> > > > > +if ( !entry )
> > > > > +{
> > > > > +/* iter == NULL marks it was a newly allocated entry */
> > > > > +iter = NULL;
> > > > > +entry = xzalloc(struct subpage_ro_range);
> > > > > +if ( !entry )
> > > > > +return -ENOMEM;
> > > > > +entry->mfn = mfn;
> > > > > +}
> > > > > +
> > > > > +for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > > > +{
> > > > > +bool oldbit = __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> > > > > +entry->ro_elems);
> > > > > +ASSERT(!oldbit);
> > > > > +}
> > > > > +
> > > > > +if ( !iter )
> > > > > +list_add(&entry->list, &subpage_ro_ranges);
> > > > > +
> > > > > +return iter ? 1 : 0;
> > > > > +}
> > > > > +
> > > > > +/* This needs subpage_ro_lock already taken */
> > > > > +static void __init subpage_mmio_ro_remove_page(
> > > > > +mfn_t mfn,
> > > > > +unsigned int offset_s,
> > > > > +unsigned int offset_e)
> > > > > +{
> > > > > +struct subpage_ro_range *entry = NULL, *iter;
> > > > > +unsigned int i;
> > > > > +
> > > > > +list_for_each_entry(iter, &subpage_ro_ranges, list)
> > > > > +{
> > > > > +if ( mfn_eq(iter->mfn, mfn) )
> > > > > +{
> > > > > +entry = iter;
> > > > > +break;
> > > > > +}
> > > > > +}
> > > > > +if ( !entry )
> > > > > +return;
> > > > > +
> > > > > +for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > > > +__clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > > > > +
> > > > > +if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN) )
> > > > > +return;
> > > > > +
> > > > > +list_del(&entry->list);
> > > > > +if ( entry->mapped )
> > > > > +iounmap(entry->mapped);
> > > > > +xfree(entry);
> > > > > +}
> > > > > +
> > > > > +int __init subpage_mmio_ro_add(
> > > > > +paddr_t start,
> > > > > +size_t size)
> > > > > +{
> > > > > +mfn_t mfn_start = maddr_to_mfn(start);
> > > > > +paddr_t end = start + size - 1;
> > > > > +mfn_t mfn_end = maddr_to_mfn(end);
> > > > > +unsigned int offset_end = 0;
> > > > > +int rc;
> > > > > +bool subpage_start, subpage_end;
> > > > > +
> > > > > +ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > > > > +ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > > > > +if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > > > > +size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > > > > +
> > > > > +if ( !size )
> > > > > +return 0;
> > > > > +
> > > > > +if ( mfn_eq(mfn_start, mfn_end) )
> > > > > +{
> > > > > +/* Both starting and ending parts handled at once */
> > > > > +subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > > > > +subpage_end = false;
> > > > 
> > > > Given the intended usage of this, don't we want to limit to only a
> > > > single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
> > > > that would simplify the logic here?
> > > 
> > > I have considered that, but I haven't found anything in the spec
> > > mandating that the XHCI DbC registers not cross a page boundary.
> > > Currently (on the system I test this on) they don't cross a page
> > > boundary, but I don't want to assume extra constraints - to avoid issues
> > > like before (when on the older system I tested, the DbC registers didn't
> > > share a page with other registers, but then they did share a page on
> > > newer hardware).
> > 
> > Oh, from our conversation at XenSummit I got the impression the debug
> > registers were always at the same position.  Looking at patch 2/2, it
> > seems you only need to block access to a single register.  Are XHCI
> > registers size-aligned?  That would guarantee the register doesn't cross
> > a page boundary (as long as the register is <= 4096 in size).
> 
> It's a couple of registers (one "extended capability"), see
> `struct dbc_reg` in xhci-dbc.c.

That looks to be an awful lot of individual registers...

> Its location is discovered at startup
> (device presents a linked-list of capabilities in one of its BARs).
> The spec talks only about alignment of individual registers, not the
> whole group...

Never mind then; I had the expectation we could get away with a single
page, but it doesn't look to be the case.

I assume the spec doesn't mention anything about the BAR where the
capabilities reside having a size <= 4KiB.

> > > > > +if ( !addr )
> > 

Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Marek Marczykowski-Górecki
On Tue, Jun 11, 2024 at 02:55:22PM +0200, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 01:38:35PM +0200, Marek Marczykowski-Górecki wrote:
> > On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monné wrote:
> > > On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki 
> > > wrote:
> > > > +if ( !entry )
> > > > +{
> > > > +/* iter == NULL marks it was a newly allocated entry */
> > > > +iter = NULL;
> > > > +entry = xzalloc(struct subpage_ro_range);
> > > > +if ( !entry )
> > > > +return -ENOMEM;
> > > > +entry->mfn = mfn;
> > > > +}
> > > > +
> > > > +for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > > +{
> > > > +bool oldbit = __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> > > > +entry->ro_elems);
> > > > +ASSERT(!oldbit);
> > > > +}
> > > > +
> > > > +if ( !iter )
> > > > +list_add(&entry->list, &subpage_ro_ranges);
> > > > +
> > > > +return iter ? 1 : 0;
> > > > +}
> > > > +
> > > > +/* This needs subpage_ro_lock already taken */
> > > > +static void __init subpage_mmio_ro_remove_page(
> > > > +mfn_t mfn,
> > > > +unsigned int offset_s,
> > > > +unsigned int offset_e)
> > > > +{
> > > > +struct subpage_ro_range *entry = NULL, *iter;
> > > > +unsigned int i;
> > > > +
> > > > +list_for_each_entry(iter, &subpage_ro_ranges, list)
> > > > +{
> > > > +if ( mfn_eq(iter->mfn, mfn) )
> > > > +{
> > > > +entry = iter;
> > > > +break;
> > > > +}
> > > > +}
> > > > +if ( !entry )
> > > > +return;
> > > > +
> > > > +for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > > +__clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > > > +
> > > > +if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN) )
> > > > +return;
> > > > +
> > > > +list_del(&entry->list);
> > > > +if ( entry->mapped )
> > > > +iounmap(entry->mapped);
> > > > +xfree(entry);
> > > > +}
> > > > +
> > > > +int __init subpage_mmio_ro_add(
> > > > +paddr_t start,
> > > > +size_t size)
> > > > +{
> > > > +mfn_t mfn_start = maddr_to_mfn(start);
> > > > +paddr_t end = start + size - 1;
> > > > +mfn_t mfn_end = maddr_to_mfn(end);
> > > > +unsigned int offset_end = 0;
> > > > +int rc;
> > > > +bool subpage_start, subpage_end;
> > > > +
> > > > +ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > > > +ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > > > +if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > > > +size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > > > +
> > > > +if ( !size )
> > > > +return 0;
> > > > +
> > > > +if ( mfn_eq(mfn_start, mfn_end) )
> > > > +{
> > > > +/* Both starting and ending parts handled at once */
> > > > +subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > > > +subpage_end = false;
> > > 
> > > Given the intended usage of this, don't we want to limit to only a
> > > single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
> > > that would simplify the logic here?
> > 
> > I have considered that, but I haven't found anything in the spec
> > mandating that the XHCI DbC registers not cross a page boundary.
> > Currently (on the system I test this on) they don't cross a page
> > boundary, but I don't want to assume extra constraints - to avoid issues
> > like before (when on the older system I tested, the DbC registers didn't
> > share a page with other registers, but then they did share a page on
> > newer hardware).
> 
> Oh, from our conversation at XenSummit I got the impression the debug
> registers were always at the same position.  Looking at patch 2/2, it
> seems you only need to block access to a single register.  Are XHCI
> registers size-aligned?  That would guarantee the register doesn't cross
> a page boundary (as long as the register is <= 4096 in size).

It's a couple of registers (one "extended capability"), see
`struct dbc_reg` in xhci-dbc.c. Its location is discovered at startup
(device presents a linked-list of capabilities in one of its BARs).
The spec talks only about alignment of individual registers, not the
whole group...

> > > > +if ( !addr )
> > > > +{
> > > > +gprintk(XENLOG_ERR,
> > > > +"Failed to map page for MMIO write at 
> > > > 0x%"PRI_mfn"%03x\n",
> > > > +mfn_x(mfn), offset);
> > > > +return;
> > > > +}
> > > > +
> > > > +switch ( len )
> > > > +{
> > > > +case 1:
> > > > +writeb(*(const uint8_t*)data, addr);
> > > > +break;
> > > > +case 2:
> > > > +writew(*(const uint16_t*)data, addr);
> > > > +   

Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Roger Pau Monné
On Tue, Jun 11, 2024 at 01:38:35PM +0200, Marek Marczykowski-Górecki wrote:
> On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monné wrote:
> > On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
> > > +if ( !entry )
> > > +{
> > > +/* iter == NULL marks it was a newly allocated entry */
> > > +iter = NULL;
> > > +entry = xzalloc(struct subpage_ro_range);
> > > +if ( !entry )
> > > +return -ENOMEM;
> > > +entry->mfn = mfn;
> > > +}
> > > +
> > > +for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > +{
> > > +bool oldbit = __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> > > +entry->ro_elems);
> > > +ASSERT(!oldbit);
> > > +}
> > > +
> > > +if ( !iter )
> > > +list_add(&entry->list, &subpage_ro_ranges);
> > > +
> > > +return iter ? 1 : 0;
> > > +}
> > > +
> > > +/* This needs subpage_ro_lock already taken */
> > > +static void __init subpage_mmio_ro_remove_page(
> > > +mfn_t mfn,
> > > +unsigned int offset_s,
> > > +unsigned int offset_e)
> > > +{
> > > +struct subpage_ro_range *entry = NULL, *iter;
> > > +unsigned int i;
> > > +
> > > +list_for_each_entry(iter, &subpage_ro_ranges, list)
> > > +{
> > > +if ( mfn_eq(iter->mfn, mfn) )
> > > +{
> > > +entry = iter;
> > > +break;
> > > +}
> > > +}
> > > +if ( !entry )
> > > +return;
> > > +
> > > +for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > +__clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > > +
> > > +if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN) )
> > > +return;
> > > +
> > > +list_del(&entry->list);
> > > +if ( entry->mapped )
> > > +iounmap(entry->mapped);
> > > +xfree(entry);
> > > +}
> > > +
> > > +int __init subpage_mmio_ro_add(
> > > +paddr_t start,
> > > +size_t size)
> > > +{
> > > +mfn_t mfn_start = maddr_to_mfn(start);
> > > +paddr_t end = start + size - 1;
> > > +mfn_t mfn_end = maddr_to_mfn(end);
> > > +unsigned int offset_end = 0;
> > > +int rc;
> > > +bool subpage_start, subpage_end;
> > > +
> > > +ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > > +ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > > +if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > > +size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > > +
> > > +if ( !size )
> > > +return 0;
> > > +
> > > +if ( mfn_eq(mfn_start, mfn_end) )
> > > +{
> > > +/* Both starting and ending parts handled at once */
> > > +subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > > +subpage_end = false;
> > 
> > Given the intended usage of this, don't we want to limit to only a
> > single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
> > that would simplify the logic here?
> 
> I have considered that, but I haven't found anything in the spec
> mandating that the XHCI DbC registers not cross a page boundary.
> Currently (on the system I test this on) they don't cross a page
> boundary, but I don't want to assume extra constraints - to avoid issues
> like before (when on the older system I tested, the DbC registers didn't
> share a page with other registers, but then they did share a page on
> newer hardware).

Oh, from our conversation at XenSummit I got the impression the debug
registers were always at the same position.  Looking at patch 2/2, it
seems you only need to block access to a single register.  Are XHCI
registers size-aligned?  That would guarantee the register doesn't cross
a page boundary (as long as the register is <= 4096 in size).
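
(To spell out that claim as a standalone sketch with made-up names: a
power-of-two sized register that is aligned to its own size cannot straddle
a 4 KiB page, because the page size is a multiple of the register size.)

    /* Sketch only: 'addr' aligned to 'size', 'size' a power of two <= PAGE_SIZE. */
    static bool crosses_page(unsigned long addr, unsigned long size)
    {
        return (addr & (PAGE_SIZE - 1)) + size > PAGE_SIZE; /* always false then */
    }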

> > > +if ( !addr )
> > > +{
> > > +gprintk(XENLOG_ERR,
> > > +"Failed to map page for MMIO write at 
> > > 0x%"PRI_mfn"%03x\n",
> > > +mfn_x(mfn), offset);
> > > +return;
> > > +}
> > > +
> > > +switch ( len )
> > > +{
> > > +case 1:
> > > +writeb(*(const uint8_t*)data, addr);
> > > +break;
> > > +case 2:
> > > +writew(*(const uint16_t*)data, addr);
> > > +break;
> > > +case 4:
> > > +writel(*(const uint32_t*)data, addr);
> > > +break;
> > > +case 8:
> > > +writeq(*(const uint64_t*)data, addr);
> > > +break;
> > > +default:
> > > +/* mmio_ro_emulated_write() already validated the size */
> > > +ASSERT_UNREACHABLE();
> > > +goto write_ignored;
> > > +}
> > > +return;
> > > +}
> > > +}
> > > +/* Do not print message for pages without any writable par

Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Marek Marczykowski-Górecki
On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monné wrote:
> On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
> > In some cases, only a few registers on a page need to be write-protected.
> > Examples include the USB3 console (64 bytes worth of registers) or MSI-X's
> > PBA table (which doesn't need to span the whole table either), although
> > in the latter case the spec forbids placing other registers on the same
> > page. The current API allows only marking whole pages read-only, which
> > sometimes may cover other registers that the guest may need to write to.
> > 
> > Currently, when a guest tries to write to an MMIO page on the
> > mmio_ro_ranges, it is either immediately crashed on an EPT violation (if
> > HVM), or it gets a #PF (if PV). In the case of Linux PV, if the access was
> > from userspace (like /dev/mem), it will try to fix it up by updating the
> > page tables (which Xen will again force to be read-only) and will hit that
> > #PF again (looping endlessly). Both behaviors are undesirable if the guest
> > could actually be allowed to write.
> > 
> > Introduce an API that allows marking part of a page read-only. Since
> > sub-page permissions are not a thing in page tables (they are in EPT,
> > but not granular enough), do this via emulation (or simply the page fault
> > handler for PV) that handles the writes that are supposed to be allowed.
> > The new subpage_mmio_ro_add() takes a start physical address and the
> > region size in bytes. Both the start address and the size need to be
> > 8-byte aligned, as a practical simplification (it allows using a smaller
> > bitmask, and a smaller granularity isn't really necessary right now).
> > It will internally add the relevant pages to mmio_ro_ranges, but if either
> > the start or the end address is not page-aligned, it additionally adds
> > that page to a list for sub-page R/O handling. The list holds a bitmask of
> > which qwords are supposed to be read-only and an address where the page is
> > mapped for write emulation - this mapping is done only on the first
> > access. A plain list is used instead of a more efficient structure,
> > because there aren't supposed to be many pages needing this precise r/o
> > control.
> > 
> > The mechanism this API is plugged into is slightly different for PV and
> > HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
> > it is already called for a #PF on a read-only MMIO page. For HVM however,
> > an EPT violation on a p2m_mmio_direct page results in a direct
> > domain_crash() for non-hardware domains. To reach
> > mmio_ro_emulated_write(), change how write violations for p2m_mmio_direct
> > are handled - specifically, check whether they relate to such a partially
> > protected page via subpage_mmio_write_accept() and, if so, call
> > hvm_emulate_one_mmio() for them too. This decodes what the guest is trying
> > to write and finally calls mmio_ro_emulated_write(). The EPT write
> > violation is detected as npfec.write_access and npfec.present both being
> > true (similar to other places), which may cover some other (future?)
> > cases - if that happens, the emulator might get involved unnecessarily,
> > but since it's limited to pages marked with subpage_mmio_ro_add() only,
> > the impact is minimal.
> > Both of those paths need the MFN to which the guest tried to write (to
> > check which part of the page is supposed to be read-only, and where the
> > page is mapped for writes). This information currently isn't available
> > directly in mmio_ro_emulated_write(), but in both cases it is already
> > resolved somewhere higher in the call tree. Pass it down to
> > mmio_ro_emulated_write() via the new mmio_ro_emulate_ctxt.mfn field.
> > 
> > This may give HVM guests a bit more access to the instruction emulator
> > (via the change in hvm_hap_nested_page_fault()), but only for pages
> > explicitly marked with subpage_mmio_ro_add() - so, only if the guest has
> > a passed-through device partially used by Xen.
> > As of the next patch, this applies only to a configuration explicitly
> > documented as not security supported.
> > 
> > The subpage_mmio_ro_add() function cannot be called with overlapping
> > ranges, nor on pages already added to mmio_ro_ranges separately.
> > Successful calls would result in correct handling, but error paths may
> > result in incorrect state (like pages removed from mmio_ro_ranges too
> > early). Debug builds have asserts for the relevant cases.
> > 
> > Signed-off-by: Marek Marczykowski-Górecki 
> > ---
> > Shadow mode is not tested, but I don't expect it to work differently than
> > HAP in areas related to this patch.
> > 
> > Changes in v4:
> > - rename SUBPAGE_MMIO_RO_ALIGN to MMIO_RO_SUBPAGE_GRAN
> > - guard subpage_mmio_write_accept with CONFIG_HVM, as it's used only
> >   there
> > - rename ro_qwords to ro_elems
> > - use unsigned arguments for subpage_mmio_ro_remove_page()
> > - use volatile for __iomem
> > - do not set mmio_ro_ctxt.mfn for mmcfg case
> > - comment where fields of mmio_ro_ctxt are used
> > - use bool for result of __test_and_set

Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Jan Beulich
On 11.06.2024 12:40, Roger Pau Monné wrote:
> On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
>> +int __init subpage_mmio_ro_add(
>> +paddr_t start,
>> +size_t size)
>> +{
>> +mfn_t mfn_start = maddr_to_mfn(start);
>> +paddr_t end = start + size - 1;
>> +mfn_t mfn_end = maddr_to_mfn(end);
>> +unsigned int offset_end = 0;
>> +int rc;
>> +bool subpage_start, subpage_end;
>> +
>> +ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
>> +ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
>> +if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
>> +size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
>> +
>> +if ( !size )
>> +return 0;
>> +
>> +if ( mfn_eq(mfn_start, mfn_end) )
>> +{
>> +/* Both starting and ending parts handled at once */
>> +subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
>> +subpage_end = false;
> 
> Given the intended usage of this, don't we want to limit to only a
> single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
> that would simplify the logic here?
> 
> Mostly asking because I think for the usage of XHCI the registers that
> need to be marked RO are all inside the same page, and hence would
> like to avoid introducing logic to handle multipage ranges if that's
> not tested at all.
> 
>> +}
>> +else
>> +{
>> +subpage_start = PAGE_OFFSET(start);
>> +subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
>> +}
>> +
>> +spin_lock(&subpage_ro_lock);
> 
> Do you really need the lock if modifications can only happen during
> init?  Xen initialization is single threaded, so you can likely avoid
> the lock during boot.

I was wondering the same, but then concluded the locking here is for
the sake of completeness, not because it's strictly needed.

Jan



Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Marek Marczykowski-Górecki
On Fri, Jun 07, 2024 at 09:01:25AM +0200, Jan Beulich wrote:
> On 22.05.2024 17:39, Marek Marczykowski-Górecki wrote:
> > --- a/xen/arch/x86/include/asm/mm.h
> > +++ b/xen/arch/x86/include/asm/mm.h
> > @@ -522,9 +522,34 @@ extern struct rangeset *mmio_ro_ranges;
> >  void memguard_guard_stack(void *p);
> >  void memguard_unguard_stack(void *p);
> >  
> > +/*
> > + * Add more precise r/o marking for a MMIO page. Range specified here
> > + * will still be R/O, but the rest of the page (not marked as R/O via 
> > another
> > + * call) will have writes passed through.
> > + * The start address and the size must be aligned to MMIO_RO_SUBPAGE_GRAN.
> > + *
> > + * This API cannot be used for overlapping ranges, nor for pages already 
> > added
> > + * to mmio_ro_ranges separately.
> > + *
> > + * Since there is currently no subpage_mmio_ro_remove(), relevant device 
> > should
> > + * not be hot-unplugged.
> 
> Yet there are no guarantees whatsoever. I think we should refuse
> hot-unplug attempts (not just here, but also e.g. for an EHCI
> controller that we use the debug feature of), but doing so would
> likely require coordination with Dom0. Nothing to be done right
> here, of course.
> 
> > + * Return values:
> > + *  - negative: error
> > + *  - 0: success
> > + */
> > +#define MMIO_RO_SUBPAGE_GRAN 8
> > +int subpage_mmio_ro_add(paddr_t start, size_t size);
> > +#ifdef CONFIG_HVM
> > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
> > +#endif
> 
> I'd suggest to omit the #ifdef here. The declaration alone doesn't
> hurt, and the #ifdef harms readability (if only a bit).

Ok.


> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -150,6 +150,17 @@ bool __read_mostly machine_to_phys_mapping_valid;
> >  
> >  struct rangeset *__read_mostly mmio_ro_ranges;
> >  
> > +/* Handling sub-page read-only MMIO regions */
> > +struct subpage_ro_range {
> > +struct list_head list;
> > +mfn_t mfn;
> > +void __iomem *mapped;
> > +DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> > +};
> > +
> > +static LIST_HEAD(subpage_ro_ranges);
> 
> With modifications all happening from __init code, this likely wants
> to be LIST_HEAD_RO_AFTER_INIT() (which would need introducing, to
> parallel LIST_HEAD_READ_MOSTLY()).

Makes sense. And then I would be comfortable with dropping the spinlock
as Roger suggested.
I tried to make this API a bit more generic than I currently need, but
indeed it can be simplified for this particular use case.
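
For reference, the only intended caller right now is the XHCI console driver
in patch 2/2, protecting the DbC extended-capability registers. A call would
look roughly like the below (hypothetical names and offsets, not the actual
patch 2/2 hunk):

    /* Hypothetical sketch: make the DbC register block read-only for guests. */
    int rc = subpage_mmio_ro_add(xhc_mmio_phys_base + dbc_cap_offset, /* 8-byte aligned */
                                 sizeof(struct dbc_reg));             /* multiple of 8 */
    if ( rc )
        printk(XENLOG_WARNING "Failed to mark XHCI console registers read-only\n");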

> > +int __init subpage_mmio_ro_add(
> > +paddr_t start,
> > +size_t size)
> > +{
> > +mfn_t mfn_start = maddr_to_mfn(start);
> > +paddr_t end = start + size - 1;
> > +mfn_t mfn_end = maddr_to_mfn(end);
> > +unsigned int offset_end = 0;
> > +int rc;
> > +bool subpage_start, subpage_end;
> > +
> > +ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > +ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > +if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > +size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> 
> I'm puzzled: You first check suitable alignment and then adjust size
> to have suitable granularity. Either it is a mistake to call the
> function with a bad size, or it is not. If it's a mistake, the
> release build alternative to the assertion would be to return an
> error. If it's not a mistake, the assertion ought to go away.
> 
> If the assertion is to stay, then I'll further question why the
> other one doesn't also have release build safety fallback logic.

For some reason I read your earlier comment as a request to (try to)
continue safely in this case. But indeed an error is a better option; it
isn't supposed to happen anyway.
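
I.e. something along these lines instead of the assertion-plus-roundup
(just a sketch of what I have in mind, not a final hunk):

    if ( !IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN) ||
         !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
    {
        ASSERT_UNREACHABLE();
        return -EINVAL;
    }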

> > +if ( !size )
> > +return 0;
> > +
> > +if ( mfn_eq(mfn_start, mfn_end) )
> > +{
> > +/* Both starting and ending parts handled at once */
> > +subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > +subpage_end = false;
> > +}
> > +else
> > +{
> > +subpage_start = PAGE_OFFSET(start);
> > +subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > +}
> 
> Since you calculate "end" before adjusting "size", the logic here
> depends on there being the assertion further up.
> 
> Jan

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab




Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-11 Thread Roger Pau Monné
On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
> In some cases, only a few registers on a page need to be write-protected.
> Examples include the USB3 console (64 bytes worth of registers) or MSI-X's
> PBA table (which doesn't need to span the whole table either), although
> in the latter case the spec forbids placing other registers on the same
> page. The current API allows only marking whole pages read-only, which
> sometimes may cover other registers that the guest may need to write to.
> 
> Currently, when a guest tries to write to an MMIO page on the
> mmio_ro_ranges, it is either immediately crashed on an EPT violation (if
> HVM), or it gets a #PF (if PV). In the case of Linux PV, if the access was
> from userspace (like /dev/mem), it will try to fix it up by updating the
> page tables (which Xen will again force to be read-only) and will hit that
> #PF again (looping endlessly). Both behaviors are undesirable if the guest
> could actually be allowed to write.
> 
> Introduce an API that allows marking part of a page read-only. Since
> sub-page permissions are not a thing in page tables (they are in EPT,
> but not granular enough), do this via emulation (or simply the page fault
> handler for PV) that handles the writes that are supposed to be allowed.
> The new subpage_mmio_ro_add() takes a start physical address and the
> region size in bytes. Both the start address and the size need to be
> 8-byte aligned, as a practical simplification (it allows using a smaller
> bitmask, and a smaller granularity isn't really necessary right now).
> It will internally add the relevant pages to mmio_ro_ranges, but if either
> the start or the end address is not page-aligned, it additionally adds
> that page to a list for sub-page R/O handling. The list holds a bitmask of
> which qwords are supposed to be read-only and an address where the page is
> mapped for write emulation - this mapping is done only on the first
> access. A plain list is used instead of a more efficient structure,
> because there aren't supposed to be many pages needing this precise r/o
> control.
> 
> The mechanism this API is plugged into is slightly different for PV and
> HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
> it is already called for a #PF on a read-only MMIO page. For HVM however,
> an EPT violation on a p2m_mmio_direct page results in a direct
> domain_crash() for non-hardware domains. To reach
> mmio_ro_emulated_write(), change how write violations for p2m_mmio_direct
> are handled - specifically, check whether they relate to such a partially
> protected page via subpage_mmio_write_accept() and, if so, call
> hvm_emulate_one_mmio() for them too. This decodes what the guest is trying
> to write and finally calls mmio_ro_emulated_write(). The EPT write
> violation is detected as npfec.write_access and npfec.present both being
> true (similar to other places), which may cover some other (future?)
> cases - if that happens, the emulator might get involved unnecessarily,
> but since it's limited to pages marked with subpage_mmio_ro_add() only,
> the impact is minimal.
> Both of those paths need the MFN to which the guest tried to write (to
> check which part of the page is supposed to be read-only, and where the
> page is mapped for writes). This information currently isn't available
> directly in mmio_ro_emulated_write(), but in both cases it is already
> resolved somewhere higher in the call tree. Pass it down to
> mmio_ro_emulated_write() via the new mmio_ro_emulate_ctxt.mfn field.
> 
> This may give HVM guests a bit more access to the instruction emulator
> (via the change in hvm_hap_nested_page_fault()), but only for pages
> explicitly marked with subpage_mmio_ro_add() - so, only if the guest has
> a passed-through device partially used by Xen.
> As of the next patch, this applies only to a configuration explicitly
> documented as not security supported.
> 
> The subpage_mmio_ro_add() function cannot be called with overlapping
> ranges, nor on pages already added to mmio_ro_ranges separately.
> Successful calls would result in correct handling, but error paths may
> result in incorrect state (like pages removed from mmio_ro_ranges too
> early). Debug builds have asserts for the relevant cases.
> 
> Signed-off-by: Marek Marczykowski-Górecki 
> ---
> Shadow mode is not tested, but I don't expect it to work differently than
> HAP in areas related to this patch.
> 
> Changes in v4:
> - rename SUBPAGE_MMIO_RO_ALIGN to MMIO_RO_SUBPAGE_GRAN
> - guard subpage_mmio_write_accept with CONFIG_HVM, as it's used only
>   there
> - rename ro_qwords to ro_elems
> - use unsigned arguments for subpage_mmio_ro_remove_page()
> - use volatile for __iomem
> - do not set mmio_ro_ctxt.mfn for mmcfg case
> - comment where fields of mmio_ro_ctxt are used
> - use bool for result of __test_and_set_bit
> - do not open-code mfn_to_maddr()
> - remove leftover RCU
> - mention hvm_hap_nested_page_fault() explicitly in the commit message
> Changes in v3:
> - use unsigned int for loop iterators
> - use __set_bit/__clear_bit when

Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-06-07 Thread Jan Beulich
On 22.05.2024 17:39, Marek Marczykowski-Górecki wrote:
> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -522,9 +522,34 @@ extern struct rangeset *mmio_ro_ranges;
>  void memguard_guard_stack(void *p);
>  void memguard_unguard_stack(void *p);
>  
> +/*
> + * Add more precise r/o marking for a MMIO page. Range specified here
> + * will still be R/O, but the rest of the page (not marked as R/O via another
> + * call) will have writes passed through.
> + * The start address and the size must be aligned to MMIO_RO_SUBPAGE_GRAN.
> + *
> + * This API cannot be used for overlapping ranges, nor for pages already 
> added
> + * to mmio_ro_ranges separately.
> + *
> + * Since there is currently no subpage_mmio_ro_remove(), relevant device 
> should
> + * not be hot-unplugged.

Yet there are no guarantees whatsoever. I think we should refuse
hot-unplug attempts (not just here, but also e.g. for an EHCI
controller that we use the debug feature of), but doing so would
likely require coordination with Dom0. Nothing to be done right
here, of course.

> + * Return values:
> + *  - negative: error
> + *  - 0: success
> + */
> +#define MMIO_RO_SUBPAGE_GRAN 8
> +int subpage_mmio_ro_add(paddr_t start, size_t size);
> +#ifdef CONFIG_HVM
> +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
> +#endif

I'd suggest to omit the #ifdef here. The declaration alone doesn't
hurt, and the #ifdef harms readability (if only a bit).

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -150,6 +150,17 @@ bool __read_mostly machine_to_phys_mapping_valid;
>  
>  struct rangeset *__read_mostly mmio_ro_ranges;
>  
> +/* Handling sub-page read-only MMIO regions */
> +struct subpage_ro_range {
> +struct list_head list;
> +mfn_t mfn;
> +void __iomem *mapped;
> +DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> +};
> +
> +static LIST_HEAD(subpage_ro_ranges);

With modifications all happening from __init code, this likely wants
to be LIST_HEAD_RO_AFTER_INIT() (which would need introducing, to
parallel LIST_HEAD_READ_MOSTLY()).
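
I.e. something along the lines of the below (sketch only, assuming an
__ro_after_init annotation is available for the purpose):

    #define LIST_HEAD_RO_AFTER_INIT(name) \
        struct list_head __ro_after_init name = LIST_HEAD_INIT(name)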

> +int __init subpage_mmio_ro_add(
> +paddr_t start,
> +size_t size)
> +{
> +mfn_t mfn_start = maddr_to_mfn(start);
> +paddr_t end = start + size - 1;
> +mfn_t mfn_end = maddr_to_mfn(end);
> +unsigned int offset_end = 0;
> +int rc;
> +bool subpage_start, subpage_end;
> +
> +ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> +ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> +if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> +size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);

I'm puzzled: You first check suitable alignment and then adjust size
to have suitable granularity. Either it is a mistake to call the
function with a bad size, or it is not. If it's a mistake, the
release build alternative to the assertion would be to return an
error. If it's not a mistake, the assertion ought to go away.

If the assertion is to stay, then I'll further question why the
other one doesn't also have release build safety fallback logic.

> +if ( !size )
> +return 0;
> +
> +if ( mfn_eq(mfn_start, mfn_end) )
> +{
> +/* Both starting and ending parts handled at once */
> +subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> +subpage_end = false;
> +}
> +else
> +{
> +subpage_start = PAGE_OFFSET(start);
> +subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
> +}

Since you calculate "end" before adjusting "size", the logic here
depends on there being the assertion further up.

Jan



[PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO page read only

2024-05-22 Thread Marek Marczykowski-Górecki
In some cases, only a few registers on a page need to be write-protected.
Examples include the USB3 console (64 bytes worth of registers) or MSI-X's
PBA table (which doesn't need to span the whole table either), although
in the latter case the spec forbids placing other registers on the same
page. The current API allows only marking whole pages read-only, which
sometimes may cover other registers that the guest may need to write to.

Currently, when a guest tries to write to an MMIO page on the
mmio_ro_ranges, it is either immediately crashed on an EPT violation (if
HVM), or it gets a #PF (if PV). In the case of Linux PV, if the access was
from userspace (like /dev/mem), it will try to fix it up by updating the
page tables (which Xen will again force to be read-only) and will hit that
#PF again (looping endlessly). Both behaviors are undesirable if the guest
could actually be allowed to write.

Introduce an API that allows marking part of a page read-only. Since
sub-page permissions are not a thing in page tables (they are in EPT,
but not granular enough), do this via emulation (or simply the page fault
handler for PV) that handles the writes that are supposed to be allowed.
The new subpage_mmio_ro_add() takes a start physical address and the
region size in bytes. Both the start address and the size need to be
8-byte aligned, as a practical simplification (it allows using a smaller
bitmask, and a smaller granularity isn't really necessary right now).
It will internally add the relevant pages to mmio_ro_ranges, but if either
the start or the end address is not page-aligned, it additionally adds
that page to a list for sub-page R/O handling. The list holds a bitmask of
which qwords are supposed to be read-only and an address where the page is
mapped for write emulation - this mapping is done only on the first
access. A plain list is used instead of a more efficient structure,
because there aren't supposed to be many pages needing this precise r/o
control.
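
For illustration, the offset-to-bit mapping implied by the 8-byte granularity
is simply the following (standalone sketch with a made-up helper name, not a
hunk from this patch):

    /* Sketch: which ro_elems bit governs byte offset 'off' within the page.
     * With an 8-byte granularity, a 4 KiB page needs 4096 / 8 = 512 bits. */
    static inline unsigned int subpage_ro_bit(unsigned int off)
    {
        return off / MMIO_RO_SUBPAGE_GRAN;   /* e.g. offset 0x18 -> bit 3 */
    }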

The mechanism this API is plugged into is slightly different for PV and
HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
it is already called for a #PF on a read-only MMIO page. For HVM however,
an EPT violation on a p2m_mmio_direct page results in a direct
domain_crash() for non-hardware domains. To reach
mmio_ro_emulated_write(), change how write violations for p2m_mmio_direct
are handled - specifically, check whether they relate to such a partially
protected page via subpage_mmio_write_accept() and, if so, call
hvm_emulate_one_mmio() for them too. This decodes what the guest is trying
to write and finally calls mmio_ro_emulated_write(). The EPT write
violation is detected as npfec.write_access and npfec.present both being
true (similar to other places), which may cover some other (future?)
cases - if that happens, the emulator might get involved unnecessarily,
but since it's limited to pages marked with subpage_mmio_ro_add() only,
the impact is minimal.
Both of those paths need the MFN to which the guest tried to write (to
check which part of the page is supposed to be read-only, and where the
page is mapped for writes). This information currently isn't available
directly in mmio_ro_emulated_write(), but in both cases it is already
resolved somewhere higher in the call tree. Pass it down to
mmio_ro_emulated_write() via the new mmio_ro_emulate_ctxt.mfn field.
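
Paraphrased, the hvm_hap_nested_page_fault() change amounts to a check of
the following shape for p2m_mmio_direct pages (sketch, not the literal hunk):

    /* Sketch: a write EPT violation on a partially read-only MMIO page. */
    if ( npfec.write_access && npfec.present &&
         subpage_mmio_write_accept(mfn, gla) &&
         (hvm_emulate_one_mmio(mfn_x(mfn), gla) == X86EMUL_OKAY) )
        return 1;   /* handled (allowed or discarded) via mmio_ro_emulated_write() */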

This may give HVM guests a bit more access to the instruction emulator
(via the change in hvm_hap_nested_page_fault()), but only for pages
explicitly marked with subpage_mmio_ro_add() - so, only if the guest has
a passed-through device partially used by Xen.
As of the next patch, this applies only to a configuration explicitly
documented as not security supported.

The subpage_mmio_ro_add() function cannot be called with overlapping
ranges, nor on pages already added to mmio_ro_ranges separately.
Successful calls would result in correct handling, but error paths may
result in incorrect state (like pages removed from mmio_ro_ranges too
early). Debug builds have asserts for the relevant cases.

Signed-off-by: Marek Marczykowski-Górecki 
---
Shadow mode is not tested, but I don't expect it to work differently than
HAP in areas related to this patch.

Changes in v4:
- rename SUBPAGE_MMIO_RO_ALIGN to MMIO_RO_SUBPAGE_GRAN
- guard subpage_mmio_write_accept with CONFIG_HVM, as it's used only
  there
- rename ro_qwords to ro_elems
- use unsigned arguments for subpage_mmio_ro_remove_page()
- use volatile for __iomem
- do not set mmio_ro_ctxt.mfn for mmcfg case
- comment where fields of mmio_ro_ctxt are used
- use bool for result of __test_and_set_bit
- do not open-code mfn_to_maddr()
- remove leftover RCU
- mention hvm_hap_nested_page_fault() explicitly in the commit message
Changes in v3:
- use unsigned int for loop iterators
- use __set_bit/__clear_bit when under spinlock
- avoid ioremap() under spinlock
- do not cast away const
- handle unaligned parameters in release build
- comment fixes
- remove RCU - the add functions are __init and actual usage is only
  much later after domains are running
- add