Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Jan Beulich
>>> On 04.09.17 at 17:06,  wrote:
> On Mon, Sep 04, 2017 at 04:52:35PM +0800, Chao Gao wrote:
>> On Mon, Sep 04, 2017 at 10:26:04AM +0100, Roger Pau Monné wrote:
>> >On Mon, Sep 04, 2017 at 10:00:00AM +0100, Roger Pau Monné wrote:
>> >> So your box seems to be capable of generating faults. Missing RMRR
>> >> regions are (sadly) expected, but at least you get faults and not a
>> >> complete hang. Which chipset does this box have? Is it a C600/X79?
>> 
>> No. The Haswell's chipset is C610/X99.
> 
> Can you try with the C600/X79 chipset? I'm afraid the issue is
> probably related to the chipset rather than to the CPU itself.

Or even the firmware.

Jan



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Roger Pau Monné
OK, I know why my MUA doesn't add your email to the To or Cc when
replying: it's because your original email contains the following
header:

Mail-Followup-To: Roger Pau =?iso-8859-1?Q?Monn=E9?= ,
"Tian, Kevin" ,
Jan Beulich ,
Andrew Cooper ,
"xen-de...@lists.xenproject.org" 

When replying, the addresses from that header are placed in the "To"
field, and as you can see your address is missing from the list. So
either add your address there, or stop setting "Mail-Followup-To".

Roger.

On Mon, Sep 04, 2017 at 04:06:51PM +0100, Roger Pau Monné wrote:
> On Mon, Sep 04, 2017 at 04:52:35PM +0800, Chao Gao wrote:
> > On Mon, Sep 04, 2017 at 10:26:04AM +0100, Roger Pau Monné wrote:
> > >(Adding Chao again because my MUA seems to drop him each time)
> > >
> > >On Mon, Sep 04, 2017 at 10:00:00AM +0100, Roger Pau Monné wrote:
> > >> On Mon, Sep 04, 2017 at 02:25:10PM +0800, Chao Gao wrote:
> > >> > On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
> > >> > >I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
> > >> > >(in fact I didn't even know about Ivy Bridge, that's why I said all
> > >> > >pre-Haswell).
> > >> > >
> > >> > >In fact I'm now trying with a Nehalem processor that seems to work, so
> > >> > >whatever this issue is, it certainly doesn't affect all models or
> > >> > >chipsets.
> > >> > 
> > >> > Hi, Roger.
> > >> > 
> > >> > Last week, I borrowed a Sandy Bridge box with an Intel(R) Xeon(R) E5-2690
> > >> > 2.7GHz and tested with 'dom0=pvh', but I didn't see the machine hang.
> > >> > 
> > >> > I also tested on Haswell and found the RMRRs in the DMAR are incorrect
> > >> > on my Haswell box. The e820 on that machine is:
> > >> > (XEN) [0.00] Xen-e820 RAM map:
> > >> > (XEN) [0.00]   - 0009a400 (usable)
> > >> > (XEN) [0.00]  0009a400 - 000a (reserved)
> > >> > (XEN) [0.00]  000e - 0010 (reserved)
> > >> > (XEN) [0.00]  0010 - 6ff84000 (usable)
> > >> > (XEN) [0.00]  6ff84000 - 7ac51000 (reserved)
> > >> > (XEN) [0.00]  7ac51000 - 7b681000 (ACPI NVS)
> > >> > (XEN) [0.00]  7b681000 - 7b7cf000 (ACPI data)
> > >> > (XEN) [0.00]  7b7cf000 - 7b80 (usable)
> > >> > (XEN) [0.00]  7b80 - 9000 (reserved)
> > >> > (XEN) [0.00]  fed1c000 - fed2 (reserved)
> > >> > (XEN) [0.00]  ff40 - 0001 (reserved)
> > >> > (XEN) [0.00]  0001 - 00208000 (usable)
> > >> > 
> > >> > And the RMRRs in DMAR are:
> > >> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> > >> > (XEN) [0.00] [VT-D] endpoint: :05:00.0
> > >> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 
> > >> > 723b4000
> > >> > end_addr 7a3f3fff
> > >> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> > >> > (XEN) [0.00] [VT-D] endpoint: :00:1d.0
> > >> > (XEN) [0.00] [VT-D] endpoint: :00:1a.0
> > >> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 
> > >> > 723ac000
> > >> > end_addr 723aefff
> > >> > (Endpoint 05:00.0 is a RAID bus controller. Endpoints 00:1d.0 and
> > >> > 00:1a.0 are USB controllers.)
> > >> > 
> > >> > After DMA remapping is enabled, two DMA translation faults are reported
> > >> > by VT-d:
> > >> > (XEN) [9.547924] [VT-D]iommu_enable_translation: iommu->reg =
> > >> > 82c00021b000
> > >> > (XEN) [9.550620] [VT-D]iommu_enable_translation: iommu->reg =
> > >> > 82c00021d000
> > >> > (XEN) [9.553327] [VT-D]iommu.c:921: iommu_fault_status: Primary
> > >> > Pending Fault
> > >> > (XEN) [9.555906] [VT-D]DMAR:[DMA Read] Request device 
> > >> > [:00:1a.0]
> > >> > fault addr 7a3f5000, iommu reg = 82c00021d000
> > >> > (XEN) [9.558537] [VT-D]DMAR: reason 06 - PTE Read access is not set
> > >> > (XEN) [9.559860] print_vtd_entries: iommu #1 dev :00:1a.0 gmfn
> > >> > 7a3f5
> > >> > (XEN) [9.561179] root_entry[00] = 107277c001
> > >> > (XEN) [9.562447] context[d0] = 2_1072c06001
> > >> > (XEN) [9.563776] l4[000] = 9c202f171107
> > >> > (XEN) [9.565125] l3[001] = 9c202f152107
> > >> > (XEN) [9.566483] l2[1d1] = 9c10727ce107
> > >> > (XEN) [9.567821] l1[1f5] = 8000
> > >> > (XEN) [9.569168] l1[1f5] not present
> > >> > (XEN) [9.570502] [VT-D]DMAR:[DMA Read] Request device 
> > >> > [:00:1d.0]
> > >> > fault addr 7a3f4000, iommu reg = 82c00021d000
> > >> > (XEN) [9.573147] [VT-D]DMAR: reason 06 - PTE Read access is not set
> > >> > (XEN) [9.574488] print_vtd_entries: iommu #1 dev :00:1d.0 gmfn
> > >> > 7a3f4
> > >> 

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Roger Pau Monné
On Mon, Sep 04, 2017 at 04:52:35PM +0800, Chao Gao wrote:
> On Mon, Sep 04, 2017 at 10:26:04AM +0100, Roger Pau Monné wrote:
> >(Adding Chao again because my MUA seems to drop him each time)
> >
> >On Mon, Sep 04, 2017 at 10:00:00AM +0100, Roger Pau Monné wrote:
> >> On Mon, Sep 04, 2017 at 02:25:10PM +0800, Chao Gao wrote:
> >> > On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
> >> > >I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
> >> > >(in fact I didn't even know about Ivy Bridge, that's why I said all
> >> > >pre-Haswell).
> >> > >
> >> > >In fact I'm now trying with a Nehalem processor that seems to work, so
> >> > >whatever this issue is, it certainly doesn't affect all models or
> >> > >chipsets.
> >> > 
> >> > Hi, Roger.
> >> > 
> >> > Last week, I borrowed a Sandy Bridge box with an Intel(R) Xeon(R) E5-2690
> >> > 2.7GHz and tested with 'dom0=pvh', but I didn't see the machine hang.
> >> > 
> >> > I also tested on Haswell and found the RMRRs in the DMAR are incorrect
> >> > on my Haswell box. The e820 on that machine is:
> >> > (XEN) [0.00] Xen-e820 RAM map:
> >> > (XEN) [0.00]   - 0009a400 (usable)
> >> > (XEN) [0.00]  0009a400 - 000a (reserved)
> >> > (XEN) [0.00]  000e - 0010 (reserved)
> >> > (XEN) [0.00]  0010 - 6ff84000 (usable)
> >> > (XEN) [0.00]  6ff84000 - 7ac51000 (reserved)
> >> > (XEN) [0.00]  7ac51000 - 7b681000 (ACPI NVS)
> >> > (XEN) [0.00]  7b681000 - 7b7cf000 (ACPI data)
> >> > (XEN) [0.00]  7b7cf000 - 7b80 (usable)
> >> > (XEN) [0.00]  7b80 - 9000 (reserved)
> >> > (XEN) [0.00]  fed1c000 - fed2 (reserved)
> >> > (XEN) [0.00]  ff40 - 0001 (reserved)
> >> > (XEN) [0.00]  0001 - 00208000 (usable)
> >> > 
> >> > And the RMRRs in DMAR are:
> >> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> >> > (XEN) [0.00] [VT-D] endpoint: :05:00.0
> >> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723b4000
> >> > end_addr 7a3f3fff
> >> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> >> > (XEN) [0.00] [VT-D] endpoint: :00:1d.0
> >> > (XEN) [0.00] [VT-D] endpoint: :00:1a.0
> >> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723ac000
> >> > end_addr 723aefff
> >> > (Endpoint 05:00.0 is a RAID bus controller. Endpoints 00:1d.0 and
> >> > 00:1a.0 are USB controllers.)
> >> > 
> >> > After DMA remapping is enabled, two DMA translation faults are reported
> >> > by VT-d:
> >> > (XEN) [9.547924] [VT-D]iommu_enable_translation: iommu->reg =
> >> > 82c00021b000
> >> > (XEN) [9.550620] [VT-D]iommu_enable_translation: iommu->reg =
> >> > 82c00021d000
> >> > (XEN) [9.553327] [VT-D]iommu.c:921: iommu_fault_status: Primary
> >> > Pending Fault
> >> > (XEN) [9.555906] [VT-D]DMAR:[DMA Read] Request device [:00:1a.0]
> >> > fault addr 7a3f5000, iommu reg = 82c00021d000
> >> > (XEN) [9.558537] [VT-D]DMAR: reason 06 - PTE Read access is not set
> >> > (XEN) [9.559860] print_vtd_entries: iommu #1 dev :00:1a.0 gmfn
> >> > 7a3f5
> >> > (XEN) [9.561179] root_entry[00] = 107277c001
> >> > (XEN) [9.562447] context[d0] = 2_1072c06001
> >> > (XEN) [9.563776] l4[000] = 9c202f171107
> >> > (XEN) [9.565125] l3[001] = 9c202f152107
> >> > (XEN) [9.566483] l2[1d1] = 9c10727ce107
> >> > (XEN) [9.567821] l1[1f5] = 8000
> >> > (XEN) [9.569168] l1[1f5] not present
> >> > (XEN) [9.570502] [VT-D]DMAR:[DMA Read] Request device [:00:1d.0]
> >> > fault addr 7a3f4000, iommu reg = 82c00021d000
> >> > (XEN) [9.573147] [VT-D]DMAR: reason 06 - PTE Read access is not set
> >> > (XEN) [9.574488] print_vtd_entries: iommu #1 dev :00:1d.0 gmfn
> >> > 7a3f4
> >> > (XEN) [9.575819] root_entry[00] = 107277c001
> >> > (XEN) [9.577129] context[e8] = 2_1072c06001
> >> > (XEN) [9.578439] l4[000] = 9c202f171107
> >> > (XEN) [9.579778] l3[001] = 9c202f152107
> >> > (XEN) [9.58] l2[1d1] = 9c10727ce107
> >> > (XEN) [9.582482] l1[1f4] = 8000
> >> > (XEN) [9.583812] l1[1f4] not present
> >> > (XEN) [   10.520172] Unable to find XEN_ELFNOTE_PHYS32_ENTRY address
> >> > (XEN) [   10.521499] Failed to load Dom0 kernel
> >> > (XEN) [   10.532171] 
> >> > (XEN) [   10.535464] 
> >> > (XEN) [   10.542636] Panic on CPU 0:
> >> > (XEN) [   10.547394] Could not set up DOM0 guest OS
> >> > (XEN) [   10.553605] 
> >> > 
> >> > The fault address the devices failed to access is marked as reserved in
> >> > the e820 and isn't reserved for the devices according to the RMRRs in the
> >> > DMAR.

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Chao Gao
On Mon, Sep 04, 2017 at 10:26:04AM +0100, Roger Pau Monné wrote:
>(Adding Chao again because my MUA seems to drop him each time)
>
>On Mon, Sep 04, 2017 at 10:00:00AM +0100, Roger Pau Monné wrote:
>> On Mon, Sep 04, 2017 at 02:25:10PM +0800, Chao Gao wrote:
>> > On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
>> > >I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
>> > >(in fact I didn't even know about Ivy Bridge, that's why I said all
>> > >pre-Haswell).
>> > >
>> > >In fact I'm now trying with a Nehalem processor that seems to work, so
>> > >whatever this issue is, it certainly doesn't affect all models or
>> > >chipsets.
>> > 
>> > Hi, Roger.
>> > 
>> > Last week, I borrowed a Sandy Bridge box with an Intel(R) Xeon(R) E5-2690
>> > 2.7GHz and tested with 'dom0=pvh', but I didn't see the machine hang.
>> > 
>> > I also tested on Haswell and found the RMRRs in the DMAR are incorrect
>> > on my Haswell box. The e820 on that machine is:
>> > (XEN) [0.00] Xen-e820 RAM map:
>> > (XEN) [0.00]   - 0009a400 (usable)
>> > (XEN) [0.00]  0009a400 - 000a (reserved)
>> > (XEN) [0.00]  000e - 0010 (reserved)
>> > (XEN) [0.00]  0010 - 6ff84000 (usable)
>> > (XEN) [0.00]  6ff84000 - 7ac51000 (reserved)
>> > (XEN) [0.00]  7ac51000 - 7b681000 (ACPI NVS)
>> > (XEN) [0.00]  7b681000 - 7b7cf000 (ACPI data)
>> > (XEN) [0.00]  7b7cf000 - 7b80 (usable)
>> > (XEN) [0.00]  7b80 - 9000 (reserved)
>> > (XEN) [0.00]  fed1c000 - fed2 (reserved)
>> > (XEN) [0.00]  ff40 - 0001 (reserved)
>> > (XEN) [0.00]  0001 - 00208000 (usable)
>> > 
>> > And the RMRRs in DMAR are:
>> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
>> > (XEN) [0.00] [VT-D] endpoint: :05:00.0
>> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723b4000
>> > end_addr 7a3f3fff
>> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
>> > (XEN) [0.00] [VT-D] endpoint: :00:1d.0
>> > (XEN) [0.00] [VT-D] endpoint: :00:1a.0
>> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723ac000
>> > end_addr 723aefff
>> > (Endpoint 05:00.0 is a RAID bus controller. Endpoints 00:1d.0 and
>> > 00:1a.0 are USB controllers.)
>> > 
>> > After DMA remapping is enabled, two DMA translation faults are reported
>> > by VT-d:
>> > (XEN) [9.547924] [VT-D]iommu_enable_translation: iommu->reg =
>> > 82c00021b000
>> > (XEN) [9.550620] [VT-D]iommu_enable_translation: iommu->reg =
>> > 82c00021d000
>> > (XEN) [9.553327] [VT-D]iommu.c:921: iommu_fault_status: Primary
>> > Pending Fault
>> > (XEN) [9.555906] [VT-D]DMAR:[DMA Read] Request device [:00:1a.0]
>> > fault addr 7a3f5000, iommu reg = 82c00021d000
>> > (XEN) [9.558537] [VT-D]DMAR: reason 06 - PTE Read access is not set
>> > (XEN) [9.559860] print_vtd_entries: iommu #1 dev :00:1a.0 gmfn
>> > 7a3f5
>> > (XEN) [9.561179] root_entry[00] = 107277c001
>> > (XEN) [9.562447] context[d0] = 2_1072c06001
>> > (XEN) [9.563776] l4[000] = 9c202f171107
>> > (XEN) [9.565125] l3[001] = 9c202f152107
>> > (XEN) [9.566483] l2[1d1] = 9c10727ce107
>> > (XEN) [9.567821] l1[1f5] = 8000
>> > (XEN) [9.569168] l1[1f5] not present
>> > (XEN) [9.570502] [VT-D]DMAR:[DMA Read] Request device [:00:1d.0]
>> > fault addr 7a3f4000, iommu reg = 82c00021d000
>> > (XEN) [9.573147] [VT-D]DMAR: reason 06 - PTE Read access is not set
>> > (XEN) [9.574488] print_vtd_entries: iommu #1 dev :00:1d.0 gmfn
>> > 7a3f4
>> > (XEN) [9.575819] root_entry[00] = 107277c001
>> > (XEN) [9.577129] context[e8] = 2_1072c06001
>> > (XEN) [9.578439] l4[000] = 9c202f171107
>> > (XEN) [9.579778] l3[001] = 9c202f152107
>> > (XEN) [9.58] l2[1d1] = 9c10727ce107
>> > (XEN) [9.582482] l1[1f4] = 8000
>> > (XEN) [9.583812] l1[1f4] not present
>> > (XEN) [   10.520172] Unable to find XEN_ELFNOTE_PHYS32_ENTRY address
>> > (XEN) [   10.521499] Failed to load Dom0 kernel
>> > (XEN) [   10.532171] 
>> > (XEN) [   10.535464] 
>> > (XEN) [   10.542636] Panic on CPU 0:
>> > (XEN) [   10.547394] Could not set up DOM0 guest OS
>> > (XEN) [   10.553605] 
>> > 
>> > The fault address the devices failed to access is marked as reserved in
>> > the e820 and isn't reserved for the devices according to the RMRRs in the
>> > DMAR. So I think we can conclude that some existing BIOSes don't expose
>> > correct RMRRs to the OS via the DMAR, and we need a workaround such as
>> > iommu_inclusive_mapping to deal with that kind of BIOS for both PV dom0
>> > and PVH dom0.
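
For readers following the diagnosis: the check Chao describes, whether the
faulting address is covered by a reported RMRR or only by an e820 reserved
range, is easy to reproduce standalone. A minimal sketch (plain C, not Xen
source; the two range tables are transcribed from the quoted logs):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct range { uint64_t base, end; };          /* inclusive bounds */

    /* RMRRs from the quoted DMAR dump. */
    static const struct range rmrr[] = {
        { 0x723b4000, 0x7a3f3fff },                /* 05:00.0 (RAID) */
        { 0x723ac000, 0x723aefff },                /* 00:1a.0 / 00:1d.0 (USB) */
    };
    /* The relevant reserved range from the quoted e820 map. */
    static const struct range e820_resv = { 0x6ff84000, 0x7ac50fff };

    static bool in_ranges(const struct range *r, unsigned int n, uint64_t a)
    {
        for ( unsigned int i = 0; i < n; i++ )
            if ( a >= r[i].base && a <= r[i].end )
                return true;
        return false;
    }

    int main(void)
    {
        uint64_t fault = 0x7a3f5000;               /* addr from the VT-d fault */

        /* Reserved in the e820, yet outside every reported RMRR: the
         * firmware under-reports the region the USB controllers DMA to. */
        printf("in e820 reserved: %d, in an RMRR: %d\n",
               fault >= e820_resv.base && fault <= e820_resv.end,
               in_ranges(rmrr, 2, fault));
        return 0;
    }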

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Roger Pau Monné
(Adding Chao again because my MUA seems to drop him each time)

On Mon, Sep 04, 2017 at 10:00:00AM +0100, Roger Pau Monné wrote:
> On Mon, Sep 04, 2017 at 02:25:10PM +0800, Chao Gao wrote:
> > On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
> > >I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
> > >(in fact I didn't even know about Ivy Bridge, that's why I said all
> > >pre-Haswell).
> > >
> > >In fact I'm now trying with a Nehalem processor that seems to work, so
> > >whatever this issue is, it certainly doesn't affect all models or
> > >chipsets.
> > 
> > Hi, Roger.
> > 
> > Last week, I borrowed a Sandy Bridge box with an Intel(R) Xeon(R) E5-2690
> > 2.7GHz and tested with 'dom0=pvh', but I didn't see the machine hang.
> > 
> > I also tested on Haswell and found the RMRRs in the DMAR are incorrect
> > on my Haswell box. The e820 on that machine is:
> > (XEN) [0.00] Xen-e820 RAM map:
> > (XEN) [0.00]   - 0009a400 (usable)
> > (XEN) [0.00]  0009a400 - 000a (reserved)
> > (XEN) [0.00]  000e - 0010 (reserved)
> > (XEN) [0.00]  0010 - 6ff84000 (usable)
> > (XEN) [0.00]  6ff84000 - 7ac51000 (reserved)
> > (XEN) [0.00]  7ac51000 - 7b681000 (ACPI NVS)
> > (XEN) [0.00]  7b681000 - 7b7cf000 (ACPI data)
> > (XEN) [0.00]  7b7cf000 - 7b80 (usable)
> > (XEN) [0.00]  7b80 - 9000 (reserved)
> > (XEN) [0.00]  fed1c000 - fed2 (reserved)
> > (XEN) [0.00]  ff40 - 0001 (reserved)
> > (XEN) [0.00]  0001 - 00208000 (usable)
> > 
> > And the RMRRs in DMAR are:
> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [0.00] [VT-D] endpoint: :05:00.0
> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723b4000
> > end_addr 7a3f3fff
> > (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [0.00] [VT-D] endpoint: :00:1d.0
> > (XEN) [0.00] [VT-D] endpoint: :00:1a.0
> > (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723ac000
> > end_addr 723aefff
> > (Endpoint 05:00.0 is a RAID bus controller. Endpoints 00:1d.0 and
> > 00:1a.0 are USB controllers.)
> > 
> > After DMA remapping is enabled, two DMA translation faults are reported
> > by VT-d:
> > (XEN) [9.547924] [VT-D]iommu_enable_translation: iommu->reg =
> > 82c00021b000
> > (XEN) [9.550620] [VT-D]iommu_enable_translation: iommu->reg =
> > 82c00021d000
> > (XEN) [9.553327] [VT-D]iommu.c:921: iommu_fault_status: Primary
> > Pending Fault
> > (XEN) [9.555906] [VT-D]DMAR:[DMA Read] Request device [:00:1a.0]
> > fault addr 7a3f5000, iommu reg = 82c00021d000
> > (XEN) [9.558537] [VT-D]DMAR: reason 06 - PTE Read access is not set
> > (XEN) [9.559860] print_vtd_entries: iommu #1 dev :00:1a.0 gmfn
> > 7a3f5
> > (XEN) [9.561179] root_entry[00] = 107277c001
> > (XEN) [9.562447] context[d0] = 2_1072c06001
> > (XEN) [9.563776] l4[000] = 9c202f171107
> > (XEN) [9.565125] l3[001] = 9c202f152107
> > (XEN) [9.566483] l2[1d1] = 9c10727ce107
> > (XEN) [9.567821] l1[1f5] = 8000
> > (XEN) [9.569168] l1[1f5] not present
> > (XEN) [9.570502] [VT-D]DMAR:[DMA Read] Request device [:00:1d.0]
> > fault addr 7a3f4000, iommu reg = 82c00021d000
> > (XEN) [9.573147] [VT-D]DMAR: reason 06 - PTE Read access is not set
> > (XEN) [9.574488] print_vtd_entries: iommu #1 dev :00:1d.0 gmfn
> > 7a3f4
> > (XEN) [9.575819] root_entry[00] = 107277c001
> > (XEN) [9.577129] context[e8] = 2_1072c06001
> > (XEN) [9.578439] l4[000] = 9c202f171107
> > (XEN) [9.579778] l3[001] = 9c202f152107
> > (XEN) [9.58] l2[1d1] = 9c10727ce107
> > (XEN) [9.582482] l1[1f4] = 8000
> > (XEN) [9.583812] l1[1f4] not present
> > (XEN) [   10.520172] Unable to find XEN_ELFNOTE_PHYS32_ENTRY address
> > (XEN) [   10.521499] Failed to load Dom0 kernel
> > (XEN) [   10.532171] 
> > (XEN) [   10.535464] 
> > (XEN) [   10.542636] Panic on CPU 0:
> > (XEN) [   10.547394] Could not set up DOM0 guest OS
> > (XEN) [   10.553605] 
> > 
> > The fault address the devices failed to access is marked as reserved in
> > the e820 and isn't reserved for the devices according to the RMRRs in the
> > DMAR. So I think we can conclude that some existing BIOSes don't expose
> > correct RMRRs to the OS via the DMAR, and we need a workaround such as
> > iommu_inclusive_mapping to deal with that kind of BIOS for both PV dom0
> > and PVH dom0.
> 
So your box seems to be capable of generating faults. Missing RMRR
regions are (sadly) expected, but at least you get faults and not a
complete hang. Which chipset does this box have? Is it a C600/X79?
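
As an aside, the l4/l3/l2/l1 indices that print_vtd_entries reports in the
quoted log are simply the four 9-bit fields of the faulting guest frame
number, so the walk can be checked by hand. A standalone sketch (plain C,
not Xen code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t fault_addr = 0x7a3f5000;   /* from the quoted DMAR fault */
        uint64_t gfn = fault_addr >> 12;    /* 4K page frame number */

        /* Each VT-d page-table level indexes 9 bits of the GFN. */
        printf("l4[%03llx] l3[%03llx] l2[%03llx] l1[%03llx]\n",
               (unsigned long long)((gfn >> 27) & 0x1ff),
               (unsigned long long)((gfn >> 18) & 0x1ff),
               (unsigned long long)((gfn >>  9) & 0x1ff),
               (unsigned long long)(gfn & 0x1ff));
        return 0;
    }

This prints "l4[000] l3[001] l2[1d1] l1[1f5]", matching the log, and the
l1 entry being non-present is exactly what yields "PTE Read access is not
set".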

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Roger Pau Monné
On Mon, Sep 04, 2017 at 02:25:10PM +0800, Chao Gao wrote:
> On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
> >I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
> >(in fact I didn't even know about Ivy Bridge, that's why I said all
> >pre-Haswell).
> >
> >In fact I'm now trying with a Nehalem processor that seems to work, so
> >whatever this issue is, it certainly doesn't affect all models or
> >chipsets.
> 
> Hi, Roger.
> 
> Last week, I borrowed a Sandy Bridge box with an Intel(R) Xeon(R) E5-2690
> 2.7GHz and tested with 'dom0=pvh', but I didn't see the machine hang.
> 
> I also tested on Haswell and found the RMRRs in the DMAR are incorrect
> on my Haswell box. The e820 on that machine is:
> (XEN) [0.00] Xen-e820 RAM map:
> (XEN) [0.00]   - 0009a400 (usable)
> (XEN) [0.00]  0009a400 - 000a (reserved)
> (XEN) [0.00]  000e - 0010 (reserved)
> (XEN) [0.00]  0010 - 6ff84000 (usable)
> (XEN) [0.00]  6ff84000 - 7ac51000 (reserved)
> (XEN) [0.00]  7ac51000 - 7b681000 (ACPI NVS)
> (XEN) [0.00]  7b681000 - 7b7cf000 (ACPI data)
> (XEN) [0.00]  7b7cf000 - 7b80 (usable)
> (XEN) [0.00]  7b80 - 9000 (reserved)
> (XEN) [0.00]  fed1c000 - fed2 (reserved)
> (XEN) [0.00]  ff40 - 0001 (reserved)
> (XEN) [0.00]  0001 - 00208000 (usable)
> 
> And the RMRRs in DMAR are:
> (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> (XEN) [0.00] [VT-D] endpoint: :05:00.0
> (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723b4000
> end_addr 7a3f3fff
> (XEN) [0.00] [VT-D]found ACPI_DMAR_RMRR:
> (XEN) [0.00] [VT-D] endpoint: :00:1d.0
> (XEN) [0.00] [VT-D] endpoint: :00:1a.0
> (XEN) [0.00] [VT-D]dmar.c:638:   RMRR region: base_addr 723ac000
> end_addr 723aefff
> (Endpoint 05:00.0 is a RAID bus controller. Endpoints 00:1d.0 and
> 00:1a.0 are USB controllers.)
> 
> After DMA remapping is enabled, two DMA translation faults are reported
> by VT-d:
> (XEN) [9.547924] [VT-D]iommu_enable_translation: iommu->reg =
> 82c00021b000
> (XEN) [9.550620] [VT-D]iommu_enable_translation: iommu->reg =
> 82c00021d000
> (XEN) [9.553327] [VT-D]iommu.c:921: iommu_fault_status: Primary
> Pending Fault
> (XEN) [9.555906] [VT-D]DMAR:[DMA Read] Request device [:00:1a.0]
> fault addr 7a3f5000, iommu reg = 82c00021d000
> (XEN) [9.558537] [VT-D]DMAR: reason 06 - PTE Read access is not set
> (XEN) [9.559860] print_vtd_entries: iommu #1 dev :00:1a.0 gmfn
> 7a3f5
> (XEN) [9.561179] root_entry[00] = 107277c001
> (XEN) [9.562447] context[d0] = 2_1072c06001
> (XEN) [9.563776] l4[000] = 9c202f171107
> (XEN) [9.565125] l3[001] = 9c202f152107
> (XEN) [9.566483] l2[1d1] = 9c10727ce107
> (XEN) [9.567821] l1[1f5] = 8000
> (XEN) [9.569168] l1[1f5] not present
> (XEN) [9.570502] [VT-D]DMAR:[DMA Read] Request device [:00:1d.0]
> fault addr 7a3f4000, iommu reg = 82c00021d000
> (XEN) [9.573147] [VT-D]DMAR: reason 06 - PTE Read access is not set
> (XEN) [9.574488] print_vtd_entries: iommu #1 dev :00:1d.0 gmfn
> 7a3f4
> (XEN) [9.575819] root_entry[00] = 107277c001
> (XEN) [9.577129] context[e8] = 2_1072c06001
> (XEN) [9.578439] l4[000] = 9c202f171107
> (XEN) [9.579778] l3[001] = 9c202f152107
> (XEN) [9.58] l2[1d1] = 9c10727ce107
> (XEN) [9.582482] l1[1f4] = 8000
> (XEN) [9.583812] l1[1f4] not present
> (XEN) [   10.520172] Unable to find XEN_ELFNOTE_PHYS32_ENTRY address
> (XEN) [   10.521499] Failed to load Dom0 kernel
> (XEN) [   10.532171] 
> (XEN) [   10.535464] 
> (XEN) [   10.542636] Panic on CPU 0:
> (XEN) [   10.547394] Could not set up DOM0 guest OS
> (XEN) [   10.553605] 
> 
> The fault address the devices failed to access is marked as reserved in
> the e820 and isn't reserved for the devices according to the RMRRs in the
> DMAR. So I think we can conclude that some existing BIOSes don't expose
> correct RMRRs to the OS via the DMAR, and we need a workaround such as
> iommu_inclusive_mapping to deal with that kind of BIOS for both PV dom0
> and PVH dom0.

So your box seems to be capable of generating faults. Missing RMRR
regions are (sadly) expected, but at least you get faults and not a
complete hang. Which chipset does this box have? Is it a C600/X79?

> 
> As to the machine hang Roger observed, I have no idea about the cause. Roger,
> have you ever seen the VT-d on that machine report a DMA
> translation fault? If not, can you trigger one natively?
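
Note that the final panic in the quoted boot log ("Unable to find
XEN_ELFNOTE_PHYS32_ENTRY address") is a separate problem from the IOMMU
faults: the Dom0 kernel being loaded simply does not advertise a PVH entry
point. For illustration, such a kernel carries a "Xen" ELF note of type
XEN_ELFNOTE_PHYS32_ENTRY (18, per Xen's public/elfnote.h) whose payload is
the 32-bit physical entry address. A hand-rolled sketch of the note layout
(real kernels emit it with an ELFNOTE assembler macro; the entry value
below is a placeholder):

    #include <stdint.h>

    #define XEN_ELFNOTE_PHYS32_ENTRY 18   /* from Xen's public/elfnote.h */

    struct xen_elfnote {
        uint32_t namesz;                  /* sizeof("Xen") incl. NUL = 4 */
        uint32_t descsz;                  /* payload size = 4 */
        uint32_t type;                    /* XEN_ELFNOTE_* */
        char     name[4];                 /* "Xen", NUL-padded */
        uint32_t desc;                    /* 32-bit physical entry point */
    };

    /* Placed in a note section so the Xen PVH loader can find it. */
    static const struct xen_elfnote pvh_note
        __attribute__((used, section(".note.Xen"), aligned(4))) = {
        .namesz = 4,
        .descsz = 4,
        .type   = XEN_ELFNOTE_PHYS32_ENTRY,
        .name   = "Xen",
        .desc   = 0x1000000,              /* placeholder entry address */
    };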

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-09-04 Thread Chao Gao
On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
>On Thu, Aug 31, 2017 at 04:45:23PM +0800, Chao Gao wrote:
>> On Thu, Aug 31, 2017 at 10:03:19AM +0100, Roger Pau Monne wrote:
>> >On Thu, Aug 31, 2017 at 03:32:42PM +0800, Chao Gao wrote:
>> >> On Tue, Aug 29, 2017 at 08:33:25AM +0100, Roger Pau Monne wrote:
>> >> >On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
>> >> >> > From: Roger Pau Monne [mailto:roger@citrix.com]
>> >> >> > Sent: Friday, August 25, 2017 9:59 PM
>> >> >> > 
>> >> >> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
>> >> >> > > >>> On 25.08.17 at 14:15,  wrote:
>> >> >> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
>> >> >> > > >> >>> On 22.08.17 at 15:54,  wrote:
>> >> >> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
>> >> >> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
>> >> >> > > >> >> > --- a/xen/arch/x86/dom0_build.c
>> >> >> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
>> >> >> > > >> >> > @@ -440,6 +440,10 @@ int __init
>> >> >> > dom0_setup_permissions(struct domain *d)
>> >> >> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, 
>> >> >> > > >> >> > mfn);
>> >> >> > > >> >> >  }
>> >> >> > > >> >> >
>> >> >> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
>> >> >> > > >> >> > +if ( dom0_pvh )
>> >> >> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
>> >> >> > > >> >>
>> >> >> > > >> >> What about ones reported by Dom0 later on? Which then raises 
>> >> >> > > >> >> the
>> >> >> > > >> >> question whether ...
>> >> >> > > >> >
>> >> >> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
>> >> >> > handler.
>> >> >> > > >> > But since you propose to do white listing, I guess it doesn't 
>> >> >> > > >> > matter
>> >> >> > > >> > that much anymore.
>> >> >> > > >>
>> >> >> > > >> Well, a fundamental question is whether white listing would 
>> >> >> > > >> work in
>> >> >> > > >> the first place. I could see room for severe problems e.g. with 
>> >> >> > > >> ACPI
>> >> >> > > >> methods wanting to access MMIO that's not described by any PCI
>> >> >> > > >> devices' BARs. Typically that would be regions in the chipset 
>> >> >> > > >> which
>> >> >> > > >> firmware is responsible for configuring/managing, the addresses 
>> >> >> > > >> of
>> >> >> > > >> which can be found/set in custom config space registers.
>> >> >> > > >
>> >> >> > > > The question would also be what would Xen allow in such 
>> >> >> > > > white-listing.
>> >> >> > > > Obviously you can get to map the same using both white-list and
>> >> >> > > > black-listing (see below).
>> >> >> > >
>> >> >> > > Not really - what you've said there regarding MMCFG regions is
>> >> >> > > a clear indication that we should _not_ map reserved regions, i.e.
>> >> >> > > it would need to be full white listing with perhaps just the PCI
>> >> >> > > device BARs being handled automatically.
>> >> >> > 
>> >> >> > I've tried just mapping the BARs and that sadly doesn't work, the box
>> >> >> > hangs after the IOMMU is enabled:
>> >> >> > 
>> >> >> > [...]
>> >> >> > (XEN) [VT-D]d0:PCI: map :3f:13.5
>> >> >> > (XEN) [VT-D]d0:PCI: map :3f:13.6
>> >> >> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
>> >> >> > 
>> >> >> > I will park this ATM and leave it for the Intel guys to diagnose.
>> >> >> > 
>> >> >> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
>> >> >> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
>> >> >> > 
>> >> >> 
>> >> >> +Chao who can help check whether we have such a box at hand.
>> >> >> 
>> >> >> btw please also give your BIOS version.
>> >> >
>> >> >It's a Precision T3600 BIOS A14.
>> >> 
>> >> Hi, Roger.
>> >> 
>> >> I found an Ivy Bridge box with E5-2697 v2 and tested with "dom0=pvh", and
>> >
>> >The ones I've seen issues with are Sandy Bridge or Nehalem; can you
>> >find some of that hardware?
>> 
>> As I expected, I was removed from the recipients :(, which makes it
>> hard for me to notice your replies in time.
>
>Sorry, I have no idea why my MUA does that; it seems to be able to
>deal fine with other recipients.
>
>> Yes, I will, but it may take some time (even Ivy Bridge boxes are rare).
>> 
>> >
>> >I haven't tested Ivy Bridge, but all Haswell boxes I've tested seem to
>> >work just fine.
>> 
>> Part of the reason I chose Ivy Bridge is that you said you found this bug
>> on almost all pre-Haswell boxes.
>
>I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
>(in fact I didn't even know about Ivy Bridge, that's why I said all
>pre-Haswell).
>
>In fact I'm now trying with a Nehalem processor that seems to work, so
>whatever this issue is, it certainly doesn't affect all models or
>chipsets.

Hi, Roger.

Last week, I borrowed a Sandy Bridge box with an Intel(R) Xeon(R) E5-2690
2.7GHz and tested with 'dom0=pvh', but I didn't see the machine hang.

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-31 Thread Roger Pau Monne
On Thu, Aug 31, 2017 at 04:45:23PM +0800, Chao Gao wrote:
> On Thu, Aug 31, 2017 at 10:03:19AM +0100, Roger Pau Monne wrote:
> >On Thu, Aug 31, 2017 at 03:32:42PM +0800, Chao Gao wrote:
> >> On Tue, Aug 29, 2017 at 08:33:25AM +0100, Roger Pau Monne wrote:
> >> >On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
> >> >> > From: Roger Pau Monne [mailto:roger@citrix.com]
> >> >> > Sent: Friday, August 25, 2017 9:59 PM
> >> >> > 
> >> >> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
> >> >> > > >>> On 25.08.17 at 14:15,  wrote:
> >> >> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> >> >> > > >> >>> On 22.08.17 at 15:54,  wrote:
> >> >> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> >> >> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
> >> >> > > >> >> > --- a/xen/arch/x86/dom0_build.c
> >> >> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
> >> >> > > >> >> > @@ -440,6 +440,10 @@ int __init
> >> >> > dom0_setup_permissions(struct domain *d)
> >> >> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, 
> >> >> > > >> >> > mfn);
> >> >> > > >> >> >  }
> >> >> > > >> >> >
> >> >> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
> >> >> > > >> >> > +if ( dom0_pvh )
> >> >> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> >> >> > > >> >>
> >> >> > > >> >> What about ones reported by Dom0 later on? Which then raises 
> >> >> > > >> >> the
> >> >> > > >> >> question whether ...
> >> >> > > >> >
> >> >> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
> >> >> > handler.
> >> >> > > >> > But since you propose to do white listing, I guess it doesn't 
> >> >> > > >> > matter
> >> >> > > >> > that much anymore.
> >> >> > > >>
> >> >> > > >> Well, a fundamental question is whether white listing would work 
> >> >> > > >> in
> >> >> > > >> the first place. I could see room for severe problems e.g. with 
> >> >> > > >> ACPI
> >> >> > > >> methods wanting to access MMIO that's not described by any PCI
> >> >> > > >> devices' BARs. Typically that would be regions in the chipset 
> >> >> > > >> which
> >> >> > > >> firmware is responsible for configuring/managing, the addresses 
> >> >> > > >> of
> >> >> > > >> which can be found/set in custom config space registers.
> >> >> > > >
> >> >> > > > The question would also be what would Xen allow in such 
> >> >> > > > white-listing.
> >> >> > > > Obviously you can get to map the same using both white-list and
> >> >> > > > black-listing (see below).
> >> >> > >
> >> >> > > Not really - what you've said there regarding MMCFG regions is
> >> >> > > a clear indication that we should _not_ map reserved regions, i.e.
> >> >> > > it would need to be full white listing with perhaps just the PCI
> >> >> > > device BARs being handled automatically.
> >> >> > 
> >> >> > I've tried just mapping the BARs and that sadly doesn't work, the box
> >> >> > hangs after the IOMMU is enabled:
> >> >> > 
> >> >> > [...]
> >> >> > (XEN) [VT-D]d0:PCI: map :3f:13.5
> >> >> > (XEN) [VT-D]d0:PCI: map :3f:13.6
> >> >> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
> >> >> > 
> >> >> > I will park this ATM and leave it for the Intel guys to diagnose.
> >> >> > 
> >> >> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
> >> >> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
> >> >> > 
> >> >> 
> >> >> +Chao who can help check whether we have such a box at hand.
> >> >> 
> >> >> btw please also give your BIOS version.
> >> >
> >> >It's a Precision T3600 BIOS A14.
> >> 
> >> Hi, Roger.
> >> 
> >> I found an Ivy Bridge box with E5-2697 v2 and tested with "dom0=pvh", and
> >
> >The ones I've seen issues with are Sandy Bridge or Nehalem; can you
> >find some of that hardware?
> 
> As I expected, I was removed from the recipients :(, which makes it
> hard for me to notice your replies in time.

Sorry, I have no idea why my MUA does that; it seems to be able to
deal fine with other recipients.

> Yes, I will, but it may take some time (even Ivy Bridge boxes are rare).
> 
> >
> >I haven't tested Ivy Bridge, but all Haswell boxes I've tested seem to
> >work just fine.
> 
> Part of the reason I chose Ivy Bridge is that you said you found this bug
> on almost all pre-Haswell boxes.

I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
(in fact I didn't even know about Ivy Bridge, that's why I said all
pre-Haswell).

In fact I'm now trying with a Nehalem processor that seems to work, so
whatever this issue is, it certainly doesn't affect all models or
chipsets.

Thanks, Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-31 Thread Chao Gao
On Thu, Aug 31, 2017 at 10:03:19AM +0100, Roger Pau Monne wrote:
>On Thu, Aug 31, 2017 at 03:32:42PM +0800, Chao Gao wrote:
>> On Tue, Aug 29, 2017 at 08:33:25AM +0100, Roger Pau Monne wrote:
>> >On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
>> >> > From: Roger Pau Monne [mailto:roger@citrix.com]
>> >> > Sent: Friday, August 25, 2017 9:59 PM
>> >> > 
>> >> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
>> >> > > >>> On 25.08.17 at 14:15,  wrote:
>> >> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
>> >> > > >> >>> On 22.08.17 at 15:54,  wrote:
>> >> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
>> >> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
>> >> > > >> >> > --- a/xen/arch/x86/dom0_build.c
>> >> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
>> >> > > >> >> > @@ -440,6 +440,10 @@ int __init
>> >> > dom0_setup_permissions(struct domain *d)
>> >> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, 
>> >> > > >> >> > mfn);
>> >> > > >> >> >  }
>> >> > > >> >> >
>> >> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
>> >> > > >> >> > +if ( dom0_pvh )
>> >> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
>> >> > > >> >>
>> >> > > >> >> What about ones reported by Dom0 later on? Which then raises the
>> >> > > >> >> question whether ...
>> >> > > >> >
>> >> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
>> >> > handler.
>> >> > > >> > But since you propose to do white listing, I guess it doesn't 
>> >> > > >> > matter
>> >> > > >> > that much anymore.
>> >> > > >>
>> >> > > >> Well, a fundamental question is whether white listing would work in
>> >> > > >> the first place. I could see room for severe problems e.g. with 
>> >> > > >> ACPI
>> >> > > >> methods wanting to access MMIO that's not described by any PCI
>> >> > > >> devices' BARs. Typically that would be regions in the chipset which
>> >> > > >> firmware is responsible for configuring/managing, the addresses of
>> >> > > >> which can be found/set in custom config space registers.
>> >> > > >
>> >> > > > The question would also be what would Xen allow in such 
>> >> > > > white-listing.
>> >> > > > Obviously you can get to map the same using both white-list and
>> >> > > > black-listing (see below).
>> >> > >
>> >> > > Not really - what you've said there regarding MMCFG regions is
>> >> > > a clear indication that we should _not_ map reserved regions, i.e.
>> >> > > it would need to be full white listing with perhaps just the PCI
>> >> > > device BARs being handled automatically.
>> >> > 
>> >> > I've tried just mapping the BARs and that sadly doesn't work, the box
>> >> > hangs after the IOMMU is enabled:
>> >> > 
>> >> > [...]
>> >> > (XEN) [VT-D]d0:PCI: map :3f:13.5
>> >> > (XEN) [VT-D]d0:PCI: map :3f:13.6
>> >> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
>> >> > 
>> >> > I will park this ATM and leave it for the Intel guys to diagnose.
>> >> > 
>> >> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
>> >> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
>> >> > 
>> >> 
>> >> +Chao who can help check whether we have such a box at hand.
>> >> 
>> >> btw please also give your BIOS version.
>> >
>> >It's a Precision T3600 BIOS A14.
>> 
>> Hi, Roger.
>> 
>> I found an Ivy Bridge box with E5-2697 v2 and tested with "dom0=pvh", and
>
>The ones I've seen issues with are Sandy Bridge or Nehalem; can you
>find some of that hardware?

As I expected, I was removed from the recipients :(, which makes it
hard for me to notice your replies in time.

Yes, I will, but it may take some time (even Ivy Bridge boxes are rare).

>
>I haven't tested Ivy Bridge, but all Haswell boxes I've tested seem to
>work just fine.

Part of the reason I chose Ivy Bridge is that you said you found this bug
on almost all pre-Haswell boxes.

Thanks
Chao



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-31 Thread Roger Pau Monne
On Thu, Aug 31, 2017 at 03:32:42PM +0800, Chao Gao wrote:
> On Tue, Aug 29, 2017 at 08:33:25AM +0100, Roger Pau Monne wrote:
> >On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
> >> > From: Roger Pau Monne [mailto:roger@citrix.com]
> >> > Sent: Friday, August 25, 2017 9:59 PM
> >> > 
> >> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
> >> > > >>> On 25.08.17 at 14:15,  wrote:
> >> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> >> > > >> >>> On 22.08.17 at 15:54,  wrote:
> >> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> >> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
> >> > > >> >> > --- a/xen/arch/x86/dom0_build.c
> >> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
> >> > > >> >> > @@ -440,6 +440,10 @@ int __init
> >> > dom0_setup_permissions(struct domain *d)
> >> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> >> > > >> >> >  }
> >> > > >> >> >
> >> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
> >> > > >> >> > +if ( dom0_pvh )
> >> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> >> > > >> >>
> >> > > >> >> What about ones reported by Dom0 later on? Which then raises the
> >> > > >> >> question whether ...
> >> > > >> >
> >> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
> >> > handler.
> >> > > >> > But since you propose to do white listing, I guess it doesn't 
> >> > > >> > matter
> >> > > >> > that much anymore.
> >> > > >>
> >> > > >> Well, a fundamental question is whether white listing would work in
> >> > > >> the first place. I could see room for severe problems e.g. with ACPI
> >> > > >> methods wanting to access MMIO that's not described by any PCI
> >> > > >> devices' BARs. Typically that would be regions in the chipset which
> >> > > >> firmware is responsible for configuring/managing, the addresses of
> >> > > >> which can be found/set in custom config space registers.
> >> > > >
> >> > > > The question would also be what would Xen allow in such 
> >> > > > white-listing.
> >> > > > Obviously you can get to map the same using both white-list and
> >> > > > black-listing (see below).
> >> > >
> >> > > Not really - what you've said there regarding MMCFG regions is
> >> > > a clear indication that we should _not_ map reserved regions, i.e.
> >> > > it would need to be full white listing with perhaps just the PCI
> >> > > device BARs being handled automatically.
> >> > 
> >> > I've tried just mapping the BARs and that sadly doesn't work, the box
> >> > hangs after the IOMMU is enabled:
> >> > 
> >> > [...]
> >> > (XEN) [VT-D]d0:PCI: map :3f:13.5
> >> > (XEN) [VT-D]d0:PCI: map :3f:13.6
> >> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
> >> > 
> >> > I will park this ATM and leave it for the Intel guys to diagnose.
> >> > 
> >> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
> >> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
> >> > 
> >> 
> >> +Chao who can help check whether we have such a box at hand.
> >> 
> >> btw please also give your BIOS version.
> >
> >It's a Precision T3600 BIOS A14.
> 
> Hi, Roger.
> 
> I found an Ivy Bridge box with E5-2697 v2 and tested with "dom0=pvh", and

The ones I've seen issues with are Sandy Bridge or Nehalem; can you
find some of that hardware?

I haven't tested Ivy Bridge, but all Haswell boxes I've tested seem to
work just fine.

Thanks, Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-31 Thread Roger Pau Monne
On Thu, Aug 31, 2017 at 03:32:42PM +0800, Chao Gao wrote:
> On Tue, Aug 29, 2017 at 08:33:25AM +0100, Roger Pau Monne wrote:
> >On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
> >> > From: Roger Pau Monne [mailto:roger@citrix.com]
> >> > Sent: Friday, August 25, 2017 9:59 PM
> >> > 
> >> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
> >> > > >>> On 25.08.17 at 14:15,  wrote:
> >> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> >> > > >> >>> On 22.08.17 at 15:54,  wrote:
> >> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> >> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
> >> > > >> >> > --- a/xen/arch/x86/dom0_build.c
> >> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
> >> > > >> >> > @@ -440,6 +440,10 @@ int __init
> >> > dom0_setup_permissions(struct domain *d)
> >> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> >> > > >> >> >  }
> >> > > >> >> >
> >> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
> >> > > >> >> > +if ( dom0_pvh )
> >> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> >> > > >> >>
> >> > > >> >> What about ones reported by Dom0 later on? Which then raises the
> >> > > >> >> question whether ...
> >> > > >> >
> >> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
> >> > handler.
> >> > > >> > But since you propose to do white listing, I guess it doesn't 
> >> > > >> > matter
> >> > > >> > that much anymore.
> >> > > >>
> >> > > >> Well, a fundamental question is whether white listing would work in
> >> > > >> the first place. I could see room for severe problems e.g. with ACPI
> >> > > >> methods wanting to access MMIO that's not described by any PCI
> >> > > >> devices' BARs. Typically that would be regions in the chipset which
> >> > > >> firmware is responsible for configuring/managing, the addresses of
> >> > > >> which can be found/set in custom config space registers.
> >> > > >
> >> > > > The question would also be what would Xen allow in such 
> >> > > > white-listing.
> >> > > > Obviously you can get to map the same using both white-list and
> >> > > > black-listing (see below).
> >> > >
> >> > > Not really - what you've said there regarding MMCFG regions is
> >> > > a clear indication that we should _not_ map reserved regions, i.e.
> >> > > it would need to be full white listing with perhaps just the PCI
> >> > > device BARs being handled automatically.
> >> > 
> >> > I've tried just mapping the BARs and that sadly doesn't work, the box
> >> > hangs after the IOMMU is enabled:
> >> > 
> >> > [...]
> >> > (XEN) [VT-D]d0:PCI: map :3f:13.5
> >> > (XEN) [VT-D]d0:PCI: map :3f:13.6
> >> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
> >> > 
> >> > I will park this ATM and leave it for the Intel guys to diagnose.
> >> > 
> >> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
> >> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
> >> > 
> >> 
> >> +Chao who can help check whether we have such a box at hand.
> >> 
> >> btw please also give your BIOS version.
> >
> >It's a Precision T3600 BIOS A14.
> 
> Hi, Roger.
> 
> I found an Ivy Bridge box with E5-2697 v2 and tested with "dom0=pvh", and
> the bug didn't occur on this box. The log is below:
> (XEN) [7.509588] [VT-D]d0:PCIe: map :ff:1e.2
> (XEN) [7.511047] [VT-D]d0:PCIe: map :ff:1e.3
> (XEN) [7.512463] [VT-D]d0:PCIe: map :ff:1e.4
> (XEN) [7.513927] [VT-D]d0:PCIe: map :ff:1e.5
> (XEN) [7.515342] [VT-D]d0:PCIe: map :ff:1e.6
> (XEN) [7.516808] [VT-D]d0:PCIe: map :ff:1e.7
> (XEN) [7.519449] [VT-D]iommu_enable_translation: iommu->reg =
> 82c00021b000
> (XEN) [7.522295] [VT-D]iommu_enable_translation: iommu->reg =
> 82c00021d000
> (XEN) [8.675096] OS: linux version: 2.6 loader: generic bitness:
> 64-bit
> (XEN) [8.726763] 
> (XEN) [8.730171] 
> (XEN) [8.737491] Panic on CPU 0:
> (XEN) [8.742376] Building a PVHv2 Dom0 is not yet supported.
> (XEN) [8.750148] 
> (XEN) [8.757457] 
> (XEN) [8.760877] Reboot in five seconds...
> (XEN) [   13.769050] Resetting with ACPI MEMORY or I/O RESET_REG
> 
> I agree with you that there may be some bugs in firmware and VT-d.
> I ran trials on a Haswell box with iommu_inclusive_mapping=false. I did
> see a DMA translation fault. The address to be translated is reserved in
> the e820 but isn't included in any RMRR. Even so, the box booted up
> successfully.
> 
> But if the bug exists for PVH dom0, it also exists for PV dom0. Could you
> try booting a PV dom0 with iommu_inclusive_mapping=false? Theoretically,
> the system would hang exactly as it did with the PVH dom0. Is that right,
> or am I missing some difference between PVH dom0 and PV dom0?

Yes, the same 

Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-31 Thread Chao Gao
On Tue, Aug 29, 2017 at 08:33:25AM +0100, Roger Pau Monne wrote:
>On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
>> > From: Roger Pau Monne [mailto:roger@citrix.com]
>> > Sent: Friday, August 25, 2017 9:59 PM
>> > 
>> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
>> > > >>> On 25.08.17 at 14:15,  wrote:
>> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
>> > > >> >>> On 22.08.17 at 15:54,  wrote:
>> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
>> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
>> > > >> >> > --- a/xen/arch/x86/dom0_build.c
>> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
>> > > >> >> > @@ -440,6 +440,10 @@ int __init
>> > dom0_setup_permissions(struct domain *d)
>> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
>> > > >> >> >  }
>> > > >> >> >
>> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
>> > > >> >> > +if ( dom0_pvh )
>> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
>> > > >> >>
>> > > >> >> What about ones reported by Dom0 later on? Which then raises the
>> > > >> >> question whether ...
>> > > >> >
>> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
>> > handler.
>> > > >> > But since you propose to do white listing, I guess it doesn't matter
>> > > >> > that much anymore.
>> > > >>
>> > > >> Well, a fundamental question is whether white listing would work in
>> > > >> the first place. I could see room for severe problems e.g. with ACPI
>> > > >> methods wanting to access MMIO that's not described by any PCI
>> > > >> devices' BARs. Typically that would be regions in the chipset which
>> > > >> firmware is responsible for configuring/managing, the addresses of
>> > > >> which can be found/set in custom config space registers.
>> > > >
>> > > > The question would also be what would Xen allow in such white-listing.
>> > > > Obviously you can get to map the same using both white-list and
>> > > > black-listing (see below).
>> > >
>> > > Not really - what you've said there regarding MMCFG regions is
>> > > a clear indication that we should _not_ map reserved regions, i.e.
>> > > it would need to be full white listing with perhaps just the PCI
>> > > device BARs being handled automatically.
>> > 
>> > I've tried just mapping the BARs and that sadly doesn't work, the box
>> > hangs after the IOMMU is enabled:
>> > 
>> > [...]
>> > (XEN) [VT-D]d0:PCI: map :3f:13.5
>> > (XEN) [VT-D]d0:PCI: map :3f:13.6
>> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
>> > 
>> > I will park this ATM and leave it for the Intel guys to diagnose.
>> > 
>> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
>> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
>> > 
>> 
>> +Chao who can help check whether we have such a box at hand.
>> 
>> btw please also give your BIOS version.
>
>It's a Precision T3600 BIOS A14.

Hi, Roger.

I found an Ivy Bridge box with E5-2697 v2 and tested with "dom0=pvh", and
the bug didn't occur on this box. The log is below:
(XEN) [7.509588] [VT-D]d0:PCIe: map :ff:1e.2
(XEN) [7.511047] [VT-D]d0:PCIe: map :ff:1e.3
(XEN) [7.512463] [VT-D]d0:PCIe: map :ff:1e.4
(XEN) [7.513927] [VT-D]d0:PCIe: map :ff:1e.5
(XEN) [7.515342] [VT-D]d0:PCIe: map :ff:1e.6
(XEN) [7.516808] [VT-D]d0:PCIe: map :ff:1e.7
(XEN) [7.519449] [VT-D]iommu_enable_translation: iommu->reg =
82c00021b000
(XEN) [7.522295] [VT-D]iommu_enable_translation: iommu->reg =
82c00021d000
(XEN) [8.675096] OS: linux version: 2.6 loader: generic bitness:
64-bit
(XEN) [8.726763] 
(XEN) [8.730171] 
(XEN) [8.737491] Panic on CPU 0:
(XEN) [8.742376] Building a PVHv2 Dom0 is not yet supported.
(XEN) [8.750148] 
(XEN) [8.757457] 
(XEN) [8.760877] Reboot in five seconds...
(XEN) [   13.769050] Resetting with ACPI MEMORY or I/O RESET_REG

I agree with you that there may be some bugs in firmware and VT-d.
I ran trials on a Haswell box with iommu_inclusive_mapping=false. I did
see a DMA translation fault. The address to be translated is reserved in
the e820 but isn't included in any RMRR. Even so, the box booted up
successfully.

But if the bug exists for PVH dom0, it also exists for PV dom0. Could you
try booting a PV dom0 with iommu_inclusive_mapping=false? Theoretically,
the system would hang exactly as it did with the PVH dom0. Is that right,
or am I missing some difference between PVH dom0 and PV dom0?

Thanks
Chao
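
For context, the iommu_inclusive_mapping workaround Chao refers to is a
Xen command-line option: for a PV hardware domain it identity-maps not
only RAM but also non-RAM pages below 4GB in the IOMMU, so DMA to
firmware-reserved areas that the DMAR fails to describe as RMRRs keeps
working. Conceptually it is a loop of roughly this shape (a simplified,
self-contained sketch of the idea, not the verbatim
vtd_set_hwdom_mapping() source; page_is_ram() here is an illustrative
stand-in for the real e820 lookup):

    #include <stdbool.h>
    #include <stdio.h>

    #define PFN_4GB (1UL << (32 - 12))

    static bool iommu_inclusive_mapping = true;  /* the command-line knob */

    /* Illustrative stand-in: conventional RAM per the quoted e820. */
    static bool page_is_ram(unsigned long pfn)
    {
        return pfn < 0x6ff84 || (pfn >= 0x100000 && pfn < 0x208000);
    }

    int main(void)
    {
        unsigned long pfn, mapped = 0, max_pfn = 0x208000;

        for ( pfn = 0; pfn < max_pfn; pfn++ )
        {
            /* Default: identity-map RAM.  Inclusive mapping: also map
             * non-RAM below 4GB (Xen-owned pages, MMCFG, etc. would be
             * skipped here; elided for brevity). */
            if ( !page_is_ram(pfn) &&
                 !(iommu_inclusive_mapping && pfn < PFN_4GB) )
                continue;
            mapped++;                   /* stands in for a 1:1 IOMMU map */
        }
        printf("would identity-map %lu pages\n", mapped);
        return 0;
    }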



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-29 Thread Roger Pau Monne
On Mon, Aug 28, 2017 at 06:18:13AM +, Tian, Kevin wrote:
> > From: Roger Pau Monne [mailto:roger@citrix.com]
> > Sent: Friday, August 25, 2017 9:59 PM
> > 
> > On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
> > > >>> On 25.08.17 at 14:15,  wrote:
> > > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> > > >> >>> On 22.08.17 at 15:54,  wrote:
> > > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> > > >> >> >>> On 11.08.17 at 18:43,  wrote:
> > > >> >> > --- a/xen/arch/x86/dom0_build.c
> > > >> >> > +++ b/xen/arch/x86/dom0_build.c
> > > >> >> > @@ -440,6 +440,10 @@ int __init
> > dom0_setup_permissions(struct domain *d)
> > > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> > > >> >> >  }
> > > >> >> >
> > > >> >> > +/* For PVH prevent access to the MMCFG areas. */
> > > >> >> > +if ( dom0_pvh )
> > > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> > > >> >>
> > > >> >> What about ones reported by Dom0 later on? Which then raises the
> > > >> >> question whether ...
> > > >> >
> > > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
> > handler.
> > > >> > But since you propose to do white listing, I guess it doesn't matter
> > > >> > that much anymore.
> > > >>
> > > >> Well, a fundamental question is whether white listing would work in
> > > >> the first place. I could see room for severe problems e.g. with ACPI
> > > >> methods wanting to access MMIO that's not described by any PCI
> > > >> devices' BARs. Typically that would be regions in the chipset which
> > > >> firmware is responsible for configuring/managing, the addresses of
> > > >> which can be found/set in custom config space registers.
> > > >
> > > > The question would also be what would Xen allow in such white-listing.
> > > > Obviously you can get to map the same using both white-list and
> > > > black-listing (see below).
> > >
> > > Not really - what you've said there regarding MMCFG regions is
> > > a clear indication that we should _not_ map reserved regions, i.e.
> > > it would need to be full white listing with perhaps just the PCI
> > > device BARs being handled automatically.
> > 
> > I've tried just mapping the BARs and that sadly doesn't work, the box
> > hangs after the IOMMU is enabled:
> > 
> > [...]
> > (XEN) [VT-D]d0:PCI: map :3f:13.5
> > (XEN) [VT-D]d0:PCI: map :3f:13.6
> > (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
> > 
> > I will park this ATM and leave it for the Intel guys to diagnose.
> > 
> > For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
> > E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
> > 
> 
> +Chao who can help check whether we have such a box at hand.
> 
> btw please also give your BIOS version.

It's a Precision T3600 BIOS A14.

Thanks, Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-28 Thread Tian, Kevin
> From: Roger Pau Monne [mailto:roger@citrix.com]
> Sent: Friday, August 25, 2017 9:59 PM
> 
> On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
> > >>> On 25.08.17 at 14:15,  wrote:
> > > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> > >> >>> On 22.08.17 at 15:54,  wrote:
> > >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> > >> >> >>> On 11.08.17 at 18:43,  wrote:
> > >> >> > --- a/xen/arch/x86/dom0_build.c
> > >> >> > +++ b/xen/arch/x86/dom0_build.c
> > >> >> > @@ -440,6 +440,10 @@ int __init
> dom0_setup_permissions(struct domain *d)
> > >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> > >> >> >  }
> > >> >> >
> > >> >> > +/* For PVH prevent access to the MMCFG areas. */
> > >> >> > +if ( dom0_pvh )
> > >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> > >> >>
> > >> >> What about ones reported by Dom0 later on? Which then raises the
> > >> >> question whether ...
> > >> >
> > >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved
> handler.
> > >> > But since you propose to do white listing, I guess it doesn't matter
> > >> > that much anymore.
> > >>
> > >> Well, a fundamental question is whether white listing would work in
> > >> the first place. I could see room for severe problems e.g. with ACPI
> > >> methods wanting to access MMIO that's not described by any PCI
> > >> devices' BARs. Typically that would be regions in the chipset which
> > >> firmware is responsible for configuring/managing, the addresses of
> > >> which can be found/set in custom config space registers.
> > >
> > > The question would also be what would Xen allow in such white-listing.
> > > Obviously you can get to map the same using both white-list and
> > > black-listing (see below).
> >
> > Not really - what you've said there regarding MMCFG regions is
> > a clear indication that we should _not_ map reserved regions, i.e.
> > it would need to be full white listing with perhaps just the PCI
> > device BARs being handled automatically.
> 
> I've tried just mapping the BARs and that sadly doesn't work, the box
> hangs after the IOMMU is enabled:
> 
> [...]
> (XEN) [VT-D]d0:PCI: map :3f:13.5
> (XEN) [VT-D]d0:PCI: map :3f:13.6
> (XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000
> 
> I will park this ATM and leave it for the Intel guys to diagnose.
> 
> For the reference, the specific box I'm testing ATM has a Xeon(R) CPU
> E5-1607 0 @ 3.00GHz and a C600/X79 chipset.
> 

+Chao who can help check whether we have such a box at hand.

btw please also give your BIOS version.

Thanks
kevin



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-28 Thread Tian, Kevin
> From: Roger Pau Monne [mailto:roger@citrix.com]
> Sent: Thursday, August 17, 2017 5:32 PM
> 
> On Thu, Aug 17, 2017 at 03:12:02AM +, Tian, Kevin wrote:
> > > From: Roger Pau Monne
> > > Sent: Saturday, August 12, 2017 12:43 AM
> > >
> > > They are emulated by Xen, so they must not be mapped into Dom0 p2m.
> > > Introduce a helper function to add the MMCFG areas to the list of
> > > denied iomem regions for PVH Dom0.
> > >
> > > Signed-off-by: Roger Pau Monné 
> >
> > this patch is a general fix, not just for inclusive mapping. please send
> > it separately.
> 
> Hm, not really.
> 
> PV Dom0 should have access to the MMCFG areas, PVH Dom0 shouldn't,
> because they will be emulated by Xen.
> 
> So far MMCFG areas are not mapped into PVH Dom0 p2m, but they will be
> once iommu_inclusive_mapping is implemented for PVH Dom0. So I
> consider this a preparatory change before enabling
> iommu_inclusive_mapping for PVH, rather than a fix. It would be a
> fix if iommu_inclusive_mapping was already enabled for PVH Dom0.
>  

Possibly you need a better description here. Otherwise the current
description has nothing to do with inclusive mapping, and on its own it
reads like a basic PVH dom0 problem (while from your explanation it
isn't a valid issue today).

Thanks
Kevin



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-25 Thread Roger Pau Monne
On Fri, Aug 25, 2017 at 06:25:36AM -0600, Jan Beulich wrote:
> >>> On 25.08.17 at 14:15,  wrote:
> > On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> >> >>> On 22.08.17 at 15:54,  wrote:
> >> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> >> >> >>> On 11.08.17 at 18:43,  wrote:
> >> >> > --- a/xen/arch/x86/dom0_build.c
> >> >> > +++ b/xen/arch/x86/dom0_build.c
> >> >> > @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain 
> >> >> > *d)
> >> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> >> >> >  }
> >> >> >  
> >> >> > +/* For PVH prevent access to the MMCFG areas. */
> >> >> > +if ( dom0_pvh )
> >> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> >> >> 
> >> >> What about ones reported by Dom0 later on? Which then raises the
> >> >> question whether ...
> >> > 
> >> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved handler.
> >> > But since you propose to do white listing, I guess it doesn't matter
> >> > that much anymore.
> >> 
> >> Well, a fundamental question is whether white listing would work in
> >> the first place. I could see room for severe problems e.g. with ACPI
> >> methods wanting to access MMIO that's not described by any PCI
> >> devices' BARs. Typically that would be regions in the chipset which
> >> firmware is responsible for configuring/managing, the addresses of
> >> which can be found/set in custom config space registers.
> > 
> > The question would also be what Xen would allow in such white-listing.
> > Obviously you can end up mapping the same regions using either
> > white-listing or black-listing (see below).
> 
> Not really - what you've said there regarding MMCFG regions is
> a clear indication that we should _not_ map reserved regions, i.e.
> it would need to be full white listing with perhaps just the PCI
> device BARs being handled automatically.

I've tried just mapping the BARs and that sadly doesn't work; the box
hangs after the IOMMU is enabled:

[...]
(XEN) [VT-D]d0:PCI: map :3f:13.5
(XEN) [VT-D]d0:PCI: map :3f:13.6
(XEN) [VT-D]iommu_enable_translation: iommu->reg = 82c00021b000

I will park this ATM and leave it for the Intel guys to diagnose.

For reference, the specific box I'm testing ATM has a Xeon(R) CPU
E5-1607 0 @ 3.00GHz and a C600/X79 chipset.

Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-25 Thread Jan Beulich
>>> On 25.08.17 at 14:15,  wrote:
> On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
>> >>> On 22.08.17 at 15:54,  wrote:
>> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
>> >> >>> On 11.08.17 at 18:43,  wrote:
>> >> > --- a/xen/arch/x86/dom0_build.c
>> >> > +++ b/xen/arch/x86/dom0_build.c
>> >> > @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
>> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
>> >> >  }
>> >> >  
>> >> > +/* For PVH prevent access to the MMCFG areas. */
>> >> > +if ( dom0_pvh )
>> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
>> >> 
>> >> What about ones reported by Dom0 later on? Which then raises the
>> >> question whether ...
>> > 
>> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved handler.
>> > But since you propose to do white listing, I guess it doesn't matter
>> > that much anymore.
>> 
>> Well, a fundamental question is whether white listing would work in
>> the first place. I could see room for severe problems e.g. with ACPI
>> methods wanting to access MMIO that's not described by any PCI
>> devices' BARs. Typically that would be regions in the chipset which
>> firmware is responsible for configuring/managing, the addresses of
>> which can be found/set in custom config space registers.
> 
> The question would also be what Xen would allow in such white-listing.
> Obviously you can end up mapping the same regions using either
> white-listing or black-listing (see below).

Not really - what you've said there regarding MMCFG regions is
a clear indication that we should _not_ map reserved regions, i.e.
it would need to be full white listing with perhaps just the PCI
device BARs being handled automatically.
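
[ For illustration, roughly how such automatic BAR handling could
look; a sketch only - for_each_pdev() and iomem_permit_access() are
existing interfaces, but the BAR-reading helper below is hypothetical: ]

static int __init dom0_permit_pci_bars(struct domain *d)
{
    const struct pci_dev *pdev;
    int rc = 0;

    for_each_pdev ( d, pdev )
    {
        unsigned int i;

        /* 6 BARs for a header type 0 device; bridges have fewer. */
        for ( i = 0; i < 6; i++ )
        {
            uint64_t addr, size;

            /* read_bar() is a placeholder for sizing the BAR from
             * config space; skip unimplemented/empty BARs. */
            if ( read_bar(pdev, i, &addr, &size) || !size )
                continue;

            rc = iomem_permit_access(d, PFN_DOWN(addr),
                                     PFN_DOWN(addr + size - 1));
            if ( rc )
                return rc;
        }
    }

    return rc;
}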

Jan




Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-25 Thread Roger Pau Monne
On Wed, Aug 23, 2017 at 02:16:38AM -0600, Jan Beulich wrote:
> >>> On 22.08.17 at 15:54,  wrote:
> > On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> >> >>> On 11.08.17 at 18:43,  wrote:
> >> > --- a/xen/arch/x86/dom0_build.c
> >> > +++ b/xen/arch/x86/dom0_build.c
> >> > @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
> >> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> >> >  }
> >> >  
> >> > +/* For PVH prevent access to the MMCFG areas. */
> >> > +if ( dom0_pvh )
> >> > +rc |= pci_mmcfg_set_domain_permissions(d);
> >> 
> >> What about ones reported by Dom0 later on? Which then raises the
> >> question whether ...
> > 
> > This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved handler.
> > But since you propose to do white listing, I guess it doesn't matter
> > that much anymore.
> 
> Well, a fundamental question is whether white listing would work in
> the first place. I could see room for severe problems e.g. with ACPI
> methods wanting to access MMIO that's not described by any PCI
> devices' BARs. Typically that would be regions in the chipset which
> firmware is responsible for configuring/managing, the addresses of
> which can be found/set in custom config space registers.

The question would also be what Xen would allow in such white-listing.
Obviously you can end up mapping the same regions using either
white-listing or black-listing (see below).

> >> > @@ -175,6 +177,25 @@ void pci_mmcfg_arch_disable(unsigned int idx)
> >> > cfg->pci_segment, cfg->start_bus_number, 
> >> > cfg->end_bus_number);
> >> >  }
> >> >  
> >> > +int pci_mmcfg_set_domain_permissions(struct domain *d)
> >> > +{
> >> > +unsigned int idx;
> >> > +int rc = 0;
> >> > +
> >> > +for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
> >> > +{
> >> > +const struct acpi_mcfg_allocation *cfg = 
> >> > pci_mmcfg_virt[idx].cfg;
> >> > +unsigned long start = PFN_DOWN(cfg->address) +
> >> > +  PCI_BDF(cfg->start_bus_number, 0, 0);
> >> > +unsigned long end = PFN_DOWN(cfg->address) +
> >> > +PCI_BDF(cfg->end_bus_number, ~0, ~0);
> >> > +
> >> > +rc |= iomem_deny_access(d, start, end);
> >> 
> >> ... this shouldn't be made unnecessary by, other than for PV
> >> Dom0, starting out with no I/O memory being made accessible (i.e.
> >> white listing, just like we decided we would do for other
> >> properties for PVH).
> > 
> > So would you like to switch to this white listing mode even for PV
> > Dom0, or just for PVH?
> 
> No, I certainly don't think we should touch PV here.
> 
> > Should reserved regions and holes be added to it? Maybe only reserved
> > regions?
> 
> See above - reserved regions may be the minimum that needs to be
> added, but then again we can't be certain all BIOSes properly
> report everything in use by the chipset/firmware as reserved. Otoh
> they're called reserved because no-one outside of the firmware
> should touch them.

Right. On a more general note, I can see your reservations about this
series; TBH I don't like implementing something like this either. The
series just papers over an issue in either the VT-d IOMMU
implementation in Xen or a hardware erratum in some IOMMUs found on
older hardware.

That said, I've now tested a slightly less intrusive variant, which
only maps reserved regions. This still requires Xen to blacklist the
MMCFG regions, which reside in reserved areas. Is there anything else
Xen should blacklist from reserved regions?
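
[ For clarity, the tested variant boils down to the following sketch,
assuming d is the hardware domain being built and the MMCFG pfn ranges
were first put on the denied-iomem list via iomem_deny_access();
identity_map_pfn() stands in for the actual p2m insertion helper: ]

unsigned int i;
unsigned long pfn, start, end;

for ( i = 0; i < e820.nr_map; i++ )
{
    if ( e820.map[i].type != E820_RESERVED )
        continue;

    start = PFN_DOWN(e820.map[i].addr);
    end = PFN_UP(e820.map[i].addr + e820.map[i].size);

    /* Map reserved areas, honouring the denied list (MMCFG). */
    for ( pfn = start; pfn < end; pfn++ )
        if ( iomem_access_permitted(d, pfn, pfn) )
            identity_map_pfn(d, pfn);
}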

Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-23 Thread Jan Beulich
>>> On 22.08.17 at 15:54,  wrote:
> On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
>> >>> On 11.08.17 at 18:43,  wrote:
>> > They are emulated by Xen, so they must not be mapped into Dom0 p2m.
>> > Introduce a helper function to add the MMCFG areas to the list of
>> > denied iomem regions for PVH Dom0.
>> 
>> "They are" or "They are going to be"?
> 
> This started as a series on top of vPCI, but I think it has a chance
> of getting in before vPCI. I will change it.

I guessed this would be the reason, but while reviewing the vPCI
series you've said somewhere that functionality from this series
would be implied.

>> > --- a/xen/arch/x86/dom0_build.c
>> > +++ b/xen/arch/x86/dom0_build.c
>> > @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
>> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
>> >  }
>> >  
>> > +/* For PVH prevent access to the MMCFG areas. */
>> > +if ( dom0_pvh )
>> > +rc |= pci_mmcfg_set_domain_permissions(d);
>> 
>> What about ones reported by Dom0 later on? Which then raises the
>> question whether ...
> 
> This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved handler.
> But since you propose to do white listing, I guess it doesn't matter
> that much anymore.

Well, a fundamental question is whether white listing would work in
the first place. I could see room for severe problems e.g. with ACPI
methods wanting to access MMIO that's not described by any PCI
devices' BARs. Typically that would be regions in the chipset which
firmware is responsible for configuring/managing, the addresses of
which can be found/set in custom config space registers.

>> > @@ -175,6 +177,25 @@ void pci_mmcfg_arch_disable(unsigned int idx)
>> > cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
>> >  }
>> >  
>> > +int pci_mmcfg_set_domain_permissions(struct domain *d)
>> > +{
>> > +unsigned int idx;
>> > +int rc = 0;
>> > +
>> > +for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
>> > +{
>> > +const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
>> > +unsigned long start = PFN_DOWN(cfg->address) +
>> > +  PCI_BDF(cfg->start_bus_number, 0, 0);
>> > +unsigned long end = PFN_DOWN(cfg->address) +
>> > +PCI_BDF(cfg->end_bus_number, ~0, ~0);
>> > +
>> > +rc |= iomem_deny_access(d, start, end);
>> 
>> ... this shouldn't be made unnecessary by, other than for PV Dom0,
>> starting out with no I/O memory being made accessible (i.e. white
>> listing, just like we decided we would do for other properties for
>> PVH).
> 
> So would you like to switch to this white listing mode even for PV
> Dom0, or just for PVH?

No, I certainly don't think we should touch PV here.

> Should reserved regions and holes be added to it? Maybe only reserved
> regions?

See above - reserved regions may be the minimum that needs to be
added, but then again we can't be certain all BIOSes properly
report everything in use by the chipset/firmware as reserved. Otoh
they're called reserved because no-one outside of the firmware
should touch them.

Jan




Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-22 Thread Roger Pau Monne
On Tue, Aug 22, 2017 at 06:26:23AM -0600, Jan Beulich wrote:
> >>> On 11.08.17 at 18:43,  wrote:
> > They are emulated by Xen, so they must not be mapped into Dom0 p2m.
> > Introduce a helper function to add the MMCFG areas to the list of
> > denied iomem regions for PVH Dom0.
> 
> "They are" or "They are going to be"?

This started as a series on top of vPCI, but I think it has a chance
of getting in before vPCI. I will change it.

> > --- a/xen/arch/x86/dom0_build.c
> > +++ b/xen/arch/x86/dom0_build.c
> > @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
> >  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
> >  }
> >  
> > +/* For PVH prevent access to the MMCFG areas. */
> > +if ( dom0_pvh )
> > +rc |= pci_mmcfg_set_domain_permissions(d);
> 
> What about ones reported by Dom0 later on? Which then raises the
> question whether ...

This should be dealt with in the PHYSDEVOP_pci_mmcfg_reserved handler.
But since you propose to do white listing, I guess it doesn't matter
that much anymore.
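
[ Sketch of what that could look like; the helper and the PV check
are illustrative only, not existing code: ]

/* Called from the PHYSDEVOP_pci_mmcfg_reserved path once the new
 * area has been validated, mirroring the boot time denial. */
static int deny_mmcfg_area(struct domain *d, uint64_t address,
                           uint8_t start_bus, uint8_t end_bus)
{
    if ( is_pv_domain(d) )
        return 0; /* PV Dom0 keeps access to MMCFG areas. */

    return iomem_deny_access(d,
                             PFN_DOWN(address) + PCI_BDF(start_bus, 0, 0),
                             PFN_DOWN(address) + PCI_BDF(end_bus, ~0, ~0));
}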

> > @@ -175,6 +177,25 @@ void pci_mmcfg_arch_disable(unsigned int idx)
> > cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
> >  }
> >  
> > +int pci_mmcfg_set_domain_permissions(struct domain *d)
> > +{
> > +unsigned int idx;
> > +int rc = 0;
> > +
> > +for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
> > +{
> > +const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
> > +unsigned long start = PFN_DOWN(cfg->address) +
> > +  PCI_BDF(cfg->start_bus_number, 0, 0);
> > +unsigned long end = PFN_DOWN(cfg->address) +
> > +PCI_BDF(cfg->end_bus_number, ~0, ~0);
> > +
> > +rc |= iomem_deny_access(d, start, end);
> 
> ... this shouldn't be made unnecessary by, other than for PV Dom0,
> starting out with no I/O memory being made accessible (i.e. white
> listing, just like we decided we would do for other properties for
> PVH).

So would you like to switch to this white listing mode even for PV
Dom0, or just for PVH?

Should reserved regions and holes be added to it? Maybe only reserved
regions?

> Additionally, while using |= was fine in the code that
> dom0_setup_permissions() was broken out from, there and here it's
> not really appropriate unless we want to continue to bake in the
> assumption that either (a) iomem_deny_access() can only ever
> return a single error indicator or (b) the callers only care about
> the value being (non-)zero.

Right, I can fix that.

Thanks, Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-22 Thread Jan Beulich
>>> On 11.08.17 at 18:43,  wrote:
> They are emulated by Xen, so they must not be mapped into Dom0 p2m.
> Introduce a helper function to add the MMCFG areas to the list of
> denied iomem regions for PVH Dom0.

"They are" or "They are going to be"?

> --- a/xen/arch/x86/dom0_build.c
> +++ b/xen/arch/x86/dom0_build.c
> @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
>  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
>  }
>  
> +/* For PVH prevent access to the MMCFG areas. */
> +if ( dom0_pvh )
> +rc |= pci_mmcfg_set_domain_permissions(d);

What about ones reported by Dom0 later on? Which then raises the
question whether ...

> @@ -175,6 +177,25 @@ void pci_mmcfg_arch_disable(unsigned int idx)
> cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
>  }
>  
> +int pci_mmcfg_set_domain_permissions(struct domain *d)
> +{
> +unsigned int idx;
> +int rc = 0;
> +
> +for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
> +{
> +const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
> +unsigned long start = PFN_DOWN(cfg->address) +
> +  PCI_BDF(cfg->start_bus_number, 0, 0);
> +unsigned long end = PFN_DOWN(cfg->address) +
> +PCI_BDF(cfg->end_bus_number, ~0, ~0);
> +
> +rc |= iomem_deny_access(d, start, end);

... this shouldn't be made unnecessary by, other than for PV Dom0,
starting out with no I/O memory being made accessible (i.e. white
listing, just like we decided we would do for other properties for
PVH).

Additionally, while using |= was fine in the code that
dom0_setup_permissions() was broken out from, there and here it's not
really appropriate unless we want to continue to bake in the
assumption that either (a) iomem_deny_access() can only ever
return a single error indicator or (b) the callers only care about
the value being (non-)zero.
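
[ A minimal sketch of the rework being asked for, latching the first
real error instead of OR-ing -errno values together: ]

int pci_mmcfg_set_domain_permissions(struct domain *d)
{
    unsigned int idx;
    int rc = 0;

    for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
    {
        const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
        int err = iomem_deny_access(d,
                                    PFN_DOWN(cfg->address) +
                                    PCI_BDF(cfg->start_bus_number, 0, 0),
                                    PFN_DOWN(cfg->address) +
                                    PCI_BDF(cfg->end_bus_number, ~0, ~0));

        if ( err && !rc )
            rc = err; /* keep the first failure, keep denying the rest */
    }

    return rc;
}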

Jan




Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-17 Thread Roger Pau Monne
On Thu, Aug 17, 2017 at 03:12:02AM +, Tian, Kevin wrote:
> > From: Roger Pau Monne
> > Sent: Saturday, August 12, 2017 12:43 AM
> > 
> > They are emulated by Xen, so they must not be mapped into Dom0 p2m.
> > Introduce a helper function to add the MMCFG areas to the list of
> > denied iomem regions for PVH Dom0.
> > 
> > Signed-off-by: Roger Pau Monné 
> 
> this patch is a general fix, not just for inclusive mapping. please send
> it separately.

Hm, not really.

PV Dom0 should have access to the MMCFG areas, PVH Dom0 shouldn't
because they will be emulated by Xen.

So far MMCFG areas are not mapped into PVH Dom0 p2m, but they will be
once iommu_inclusive_mapping is implemented for PVH Dom0. So I
consider this a preparatory change before enabling
iommu_inclusive_mapping for PVH, rather than a fix. It would be a
fix if iommu_inclusive_mapping was already enabled for PVH Dom0.

Roger.



Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-16 Thread Tian, Kevin
> From: Roger Pau Monne
> Sent: Saturday, August 12, 2017 12:43 AM
> 
> They are emulated by Xen, so they must not be mapped into Dom0 p2m.
> Introduce a helper function to add the MMCFG areas to the list of
> denied iomem regions for PVH Dom0.
> 
> Signed-off-by: Roger Pau Monné 

This patch is a general fix, not just for inclusive mapping. Please
send it separately.

> ---
> Cc: Jan Beulich 
> Cc: Andrew Cooper 
> ---
> Changes since RFC:
>  - Introduce as helper instead of exposing the internal mmcfg
>variables to the Dom0 builder.
> ---
>  xen/arch/x86/dom0_build.c |  4 
>  xen/arch/x86/x86_64/mmconfig_64.c | 21 +
>  xen/include/xen/pci.h |  2 ++
>  3 files changed, 27 insertions(+)
> 
> diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
> index 0c125e61eb..3e0910d779 100644
> --- a/xen/arch/x86/dom0_build.c
> +++ b/xen/arch/x86/dom0_build.c
> @@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
>  rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
>  }
> 
> +/* For PVH prevent access to the MMCFG areas. */
> +if ( dom0_pvh )
> +rc |= pci_mmcfg_set_domain_permissions(d);
> +
>  return rc;
>  }
> 
> diff --git a/xen/arch/x86/x86_64/mmconfig_64.c b/xen/arch/x86/x86_64/mmconfig_64.c
> index e84a67dfc4..271fad407f 100644
> --- a/xen/arch/x86/x86_64/mmconfig_64.c
> +++ b/xen/arch/x86/x86_64/mmconfig_64.c
> @@ -15,6 +15,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
> 
>  #include "mmconfig.h"
> 
> @@ -175,6 +177,25 @@ void pci_mmcfg_arch_disable(unsigned int idx)
> cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
>  }
> 
> +int pci_mmcfg_set_domain_permissions(struct domain *d)
> +{
> +unsigned int idx;
> +int rc = 0;
> +
> +for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
> +{
> +const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
> +unsigned long start = PFN_DOWN(cfg->address) +
> +  PCI_BDF(cfg->start_bus_number, 0, 0);
> +unsigned long end = PFN_DOWN(cfg->address) +
> +PCI_BDF(cfg->end_bus_number, ~0, ~0);
> +
> +rc |= iomem_deny_access(d, start, end);
> +}
> +
> +return rc;
> +}
> +
>  bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg,
>  unsigned int *bdf)
>  {
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 59b6e8a81c..ea6a66b248 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -170,4 +170,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
>  void msixtbl_pt_unregister(struct domain *, struct pirq *);
>  void msixtbl_pt_cleanup(struct domain *d);
> 
> +int pci_mmcfg_set_domain_permissions(struct domain *d);
> +
>  #endif /* __XEN_PCI_H__ */
> --
> 2.11.0 (Apple Git-81)
> 
> 


[Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0

2017-08-11 Thread Roger Pau Monne
They are emulated by Xen, so they must not be mapped into Dom0 p2m.
Introduce a helper function to add the MMCFG areas to the list of
denied iomem regions for PVH Dom0.

Signed-off-by: Roger Pau Monné 
---
Cc: Jan Beulich 
Cc: Andrew Cooper 
---
Changes since RFC:
 - Introduce as helper instead of exposing the internal mmcfg
   variables to the Dom0 builder.
---
 xen/arch/x86/dom0_build.c |  4 
 xen/arch/x86/x86_64/mmconfig_64.c | 21 +
 xen/include/xen/pci.h |  2 ++
 3 files changed, 27 insertions(+)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 0c125e61eb..3e0910d779 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -440,6 +440,10 @@ int __init dom0_setup_permissions(struct domain *d)
 rc |= rangeset_add_singleton(mmio_ro_ranges, mfn);
 }
 
+/* For PVH prevent access to the MMCFG areas. */
+if ( dom0_pvh )
+rc |= pci_mmcfg_set_domain_permissions(d);
+
 return rc;
 }
 
diff --git a/xen/arch/x86/x86_64/mmconfig_64.c b/xen/arch/x86/x86_64/mmconfig_64.c
index e84a67dfc4..271fad407f 100644
--- a/xen/arch/x86/x86_64/mmconfig_64.c
+++ b/xen/arch/x86/x86_64/mmconfig_64.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include "mmconfig.h"
 
@@ -175,6 +177,25 @@ void pci_mmcfg_arch_disable(unsigned int idx)
cfg->pci_segment, cfg->start_bus_number, cfg->end_bus_number);
 }
 
+int pci_mmcfg_set_domain_permissions(struct domain *d)
+{
+unsigned int idx;
+int rc = 0;
+
+for ( idx = 0; idx < pci_mmcfg_config_num; idx++ )
+{
+const struct acpi_mcfg_allocation *cfg = pci_mmcfg_virt[idx].cfg;
+unsigned long start = PFN_DOWN(cfg->address) +
+  PCI_BDF(cfg->start_bus_number, 0, 0);
+unsigned long end = PFN_DOWN(cfg->address) +
+PCI_BDF(cfg->end_bus_number, ~0, ~0);
+
+rc |= iomem_deny_access(d, start, end);
+}
+
+return rc;
+}
+
 bool_t pci_mmcfg_decode(unsigned long mfn, unsigned int *seg,
 unsigned int *bdf)
 {
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 59b6e8a81c..ea6a66b248 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -170,4 +170,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
 void msixtbl_pt_unregister(struct domain *, struct pirq *);
 void msixtbl_pt_cleanup(struct domain *d);
 
+int pci_mmcfg_set_domain_permissions(struct domain *d);
+
 #endif /* __XEN_PCI_H__ */
-- 
2.11.0 (Apple Git-81)
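
[ For reference, the pfn arithmetic in the mmconfig_64.c hunk above
follows from the ECAM layout, where every bus/device/function owns one
4KiB page of config space; a worked example, assuming Xen's usual
PCI_BDF encoding of (bus << 8) | (dev << 3) | func:

  a segment based at 0xe0000000 covering buses 0x00-0xff gives
    start = PFN_DOWN(0xe0000000) + PCI_BDF(0x00,  0,  0) = 0xe0000
    end   = PFN_DOWN(0xe0000000) + PCI_BDF(0xff, ~0, ~0) = 0xeffff
  i.e. 0x10000 pages (256 buses * 32 devices * 8 functions) = 256MiB. ]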

