Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-18 Thread Roger Pau Monné
On Tue, Apr 18, 2017 at 02:48:31AM -0600, Jan Beulich wrote:
> >>> On 18.04.17 at 09:34,  wrote:
> > (XEN) Before DMA_GCMD_TE
> > (
> > 
> > The hang seems to happen when writing DMA_GCMD_TE to the global command
> > register, which enables the DMA remapping. After that the box is completely
> > unresponsive, not even the watchdog is working.
> 
> How sure are you that this is pre-Haswell specific vs e.g. chipset or
> firmware (think of RMRRs [or their lack] for the latter) dependent?
> Iirc Elena's command line specifiable RMRR patch series was
> motivated by similar behavior she had observed on some system.

This is mostly from trial/error. I don't think it's strictly CPU related, but
rather chipset related (ie: chipsets that come with pre-haswell CPUs).

Elena IIRC was at least getting IOMMU faults, which I don't even get in my
case, and I think that's the issue itself.

> Another odd aspect is - why would IOMMU enabling cause the hang
> only when intending to use a PVH Dom0? The IOMMU is being
> enabled in either case, which again might point at differences in use
> of memory.

Not sure, for PVH Dom0 the IOMMU is enabled quite early in the domain build
process (before populating the domain p2m), which seems to be fine on other
systems.

I've done that (initializing the IOMMU so early) to avoid having to iterate
over the list of domain pages afterwards, as would be needed if the IOMMU was
initialized with the p2m already populated.

FWIW, moving the iommu_hwdom_init call to the end of the PVH Dom0 build process
doesn't solve the issue. I've also tried with and without shared pt, and the
result is the same.

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-18 Thread Jan Beulich
>>> On 18.04.17 at 09:34,  wrote:
> (XEN) Before DMA_GCMD_TE
> (
> 
> The hang seems to happen when writing DMA_GCMD_TE to the global command
> register, which enables the DMA remapping. After that the box is completely
> unresponsive, not even the watchdog is working.

How sure are you that this is pre-Haswell specific vs e.g. chipset or
firmware (think of RMRRs [or their lack] for the latter) dependent?
Iirc Elena's command line specifiable RMRR patch series was
motivated by similar behavior she had observed on some system.

Another odd aspect is - why would IOMMU enabling cause the hang
only when intending to use a PVH Dom0? The IOMMU is being
enabled in either case, which again might point at differences in use
of memory.

Jan




Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-18 Thread Roger Pau Monné
On Tue, Apr 18, 2017 at 03:04:51AM +, Tian, Kevin wrote:
> > From: Roger Pau Monné [mailto:roger@citrix.com]
> > Sent: Friday, April 14, 2017 11:35 PM
> > 
> > Hello,
> > 
> > Although PVHv2 Dom0 is not yet finished, I've been trying the current code
> > on
> > different hardware, and found that with pre-Haswell Intel hardware PVHv2
> > Dom0
> > completely freezes the box when calling iommu_hwdom_init in
> > dom0_construct_pvh.
> > OTOH the same doesn't happen when using a newer CPU (ie: haswell or
> > newer).
> > 
> > I'm not able to debug that in any meaningful way because the box seems to
> > lock
> > up completely, even the watchdog NMI stops working. Here is the boot log,
> > up to
> > the point where it freezes:
> > 
> 
> I don't have any ideas right now w/o seeing more meaningful debug messages.
> Maybe you could add more fine-grained prints to capture some
> useful hints.

Hello, I've added the following debug patch:

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a5c61c6e21..cb039d74e7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -765,7 +765,9 @@ static void iommu_enable_translation(struct acpi_drhd_unit *drhd)
                 iommu->reg);
     spin_lock_irqsave(&iommu->register_lock, flags);
 sts = dmar_readl(iommu->reg, DMAR_GSTS_REG);
+printk("Before DMA_GCMD_TE\n");
 dmar_writel(iommu->reg, DMAR_GCMD_REG, sts | DMA_GCMD_TE);
+printk("After DMA_GCMD_TE\n");
 
 /* Make sure hardware complete it */
 IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl,

And got this output:

(XEN) Xen version 4.9-rc (root@) (FreeBSD clang version 3.9.0 
(tags/RELEASE_390/final 280324) (based on LLVM 3.9.0)) debug=y  Tue Apr 18 
08:22:39 BST 2017
(XEN) Latest ChangeSet:
(XEN) Console output is synchronous.
(XEN) Bootloader: FreeBSD Loader
(XEN) Command line: dom0_mem=4096M dom0=pvh com1=115200,8n1 console=com1,vga 
guest_loglvl=all loglvl=all iommu=debug,verbose sync_console watchdog
(XEN) Xen image load base address: 0
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)   - 0008dc00 (usable)
(XEN)  0008dc00 - 000a (reserved)
(XEN)  000e - 0010 (reserved)
(XEN)  0010 - 18ebb000 (usable)
(XEN)  18ebb000 - 18fe8000 (ACPI NVS)
(XEN)  18fe8000 - 18fe9000 (usable)
(XEN)  18fe9000 - 1900 (ACPI NVS)
(XEN)  1900 - 1dffd000 (usable)
(XEN)  1dffd000 - 1e00 (ACPI data)
(XEN)  1e00 - ac784000 (usable)
(XEN)  ac784000 - ac818000 (reserved)
(XEN)  ac818000 - ad80 (usable)
(XEN)  b000 - b400 (reserved)
(XEN)  fed2 - fed4 (reserved)
(XEN)  fed5 - fed9 (reserved)
(XEN)  ffa0 - ffa4 (reserved)
(XEN)  0001 - 00025000 (usable)
(XEN) New Xen image base address: 0xad20
(XEN) ACPI: RSDP 000FE300, 0024 (r2 DELL  )
(XEN) ACPI: XSDT 1DFFEE18, 0074 (r1 DELLCBX3 6222004 MSFT10013)
(XEN) ACPI: FACP 18FEFD98, 00F4 (r4 DELLCBX3 6222004 MSFT10013)
(XEN) ACPI: DSDT 18FA9018, 6373 (r1 DELLCBX3   0 INTL 20091112)
(XEN) ACPI: FACS 18FF1F40, 0040
(XEN) ACPI: APIC 1DFFDC18, 0158 (r2 DELLCBX3 6222004 MSFT10013)
(XEN) ACPI: MCFG 18FFED18, 003C (r1 A M I  OEMMCFG.  6222004 MSFT   97)
(XEN) ACPI: TCPA 18FFEC98, 0032 (r20 0)
(XEN) ACPI: SSDT 18FF0A98, 0306 (r1 DELLTP  TPM 3000 INTL 20091112)
(XEN) ACPI: HPET 18FFEC18, 0038 (r1 A M I   PCHHPET  6222004 AMI.3)
(XEN) ACPI: BOOT 18FFEB98, 0028 (r1 DELL   CBX3  6222004 AMI 10013)
(XEN) ACPI: SSDT 18FB0018, 36FFE (r2  INTELCpuPm 4000 INTL 20091112)
(XEN) ACPI: SLIC 18FEEC18, 0176 (r3 DELLCBX3 6222004 MSFT10013)
(XEN) ACPI: DMAR 18FF1B18, 0094 (r1 A M I   OEMDMAR1 INTL1)
(XEN) System RAM: 8149MB (8345288kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at -00025000
(XEN) Domain heap initialised
(XEN) CPU Vendor: Intel, Family 6 (0x6), Model 45 (0x2d), Stepping 7 (raw 
000206d7)
(XEN) found SMP MP-table at 000f1db0
(XEN) DMI 2.6 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x408 (32 bits)
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1:404,1:0], pm1x_evt[1:400,1:0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - 18ffdf40/18ff1f40, 
using 32
(XEN) ACPI: wakeup_vec[18ffdf4c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee0
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) ACPI: 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Tian, Kevin
> From: Roger Pau Monné [mailto:roger@citrix.com]
> Sent: Friday, April 14, 2017 11:35 PM
> 
> Hello,
> 
> Although PVHv2 Dom0 is not yet finished, I've been trying the current code
> on
> different hardware, and found that with pre-Haswell Intel hardware PVHv2
> Dom0
> completely freezes the box when calling iommu_hwdom_init in
> dom0_construct_pvh.
> OTOH the same doesn't happen when using a newer CPU (ie: haswell or
> newer).
> 
> I'm not able to debug that in any meaningful way because the box seems to
> lock
> up completely, even the watchdog NMI stops working. Here is the boot log,
> up to
> the point where it freezes:
> 

I don't have any ideas right now w/o seeing more meaningful debug messages.
Maybe you could add more fine-grained prints to capture some
useful hints.

Thanks
Kevin



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 01:57:10PM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 01:47:47PM +0800, Chao Gao wrote:
>> On Mon, Apr 17, 2017 at 01:21:01PM +0100, Roger Pau Monné wrote:
>> >On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
>> >[...]
>> >> It works. I can test for you when you send out a formal patch.
>> >
>> >Thanks for the testing, will send formal patch shortly.
>> >
>> >Have you been able to reproduce the IOMMU issue with that, or you just hit 
>> >the
>> >panic at the end of PVH Dom0 build?
>> 
>> No, I haven't. The output is something like "ELFxxx not found", I think, due to
>> the lack of a PVH Dom0 kernel. As mentioned before, my platform is Skylake.
>
>Right, if you get to the ELF stuff it means the IOMMU has been initialized
>successfully. Skylake is post-haswell, so I don't think it's going to exhibit
>those issues. Is there any chance you can test on something older
>(pre-haswell?).

I am not sure that I can find a pre-haswell machine. Will try later.

Thanks
Chao



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 01:47:47PM +0800, Chao Gao wrote:
> On Mon, Apr 17, 2017 at 01:21:01PM +0100, Roger Pau Monné wrote:
> >On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
> >[...]
> >> It works. I can test for you when you send out a formal patch.
> >
> >Thanks for the testing, will send formal patch shortly.
> >
> >Have you been able to reproduce the IOMMU issue with that, or you just hit 
> >the
> >panic at the end of PVH Dom0 build?
> 
> No, I haven't. The output is something like "ELFxxx not found", I think, due to
> the lack of a PVH Dom0 kernel. As mentioned before, my platform is Skylake.

Right, if you get to the ELF stuff it means the IOMMU has been initialized
successfully. Skylake is post-haswell, so I don't think it's going to exhibit
those issues. Is there any chance you can test on something older
(pre-haswell?).

Thanks, Roger.



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 01:21:01PM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
>[...]
>> It works. I can test for you when you send out a formal patch.
>
>Thanks for the testing, will send formal patch shortly.
>
>Have you been able to reproduce the IOMMU issue with that, or you just hit the
>panic at the end of PVH Dom0 build?

No, I haven't. The output is something like "ELFxxx not found", I think, due to
the lack of a PVH Dom0 kernel. As mentioned before, my platform is Skylake.

Thanks
Chao



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 01:12:27PM +0800, Chao Gao wrote:
[...]
> It works. I can test for you when you send out a formal patch.

Thanks for the testing, will send formal patch shortly.

Have you been able to reproduce the IOMMU issue with that, or you just hit the
panic at the end of PVH Dom0 build?

Roger.



Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 11:38:33AM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 10:49:45AM +0800, Chao Gao wrote:
>> On Mon, Apr 17, 2017 at 09:38:54AM +0100, Roger Pau Monné wrote:
>> >On Mon, Apr 17, 2017 at 09:03:12AM +0800, Chao Gao wrote:
>> >> On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
>> >> >On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
>> >> >> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>> >> >> >Hello,
>> >> >> >
>> >> >> >Although PVHv2 Dom0 is not yet finished, I've been trying the current 
>> >> >> >code on
>> >> >> >different hardware, and found that with pre-Haswell Intel hardware 
>> >> >> >PVHv2 Dom0
>> >> >> >completely freezes the box when calling iommu_hwdom_init in 
>> >> >> >dom0_construct_pvh.
>> >> >> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or 
>> >> >> >newer).
>> >> >> >
>> >> >> >I'm not able to debug that in any meaningful way because the box 
>> >> >> >seems to lock
>> >> >> >up completely, even the watchdog NMI stops working. Here is the boot 
>> >> >> >log, up to
>> >> >> >the point where it freezes:
>> >> >> 
>> >> >> I try "dom0=pvh" with my skylake. An assertion failed. Is it a 
>> >> >> software bug?
>> >> >> 
>> >
>> >It seems like we are not properly adding/accounting the vIO APICs, but I 
>> >cannot
>> >really see how. I have another patch for you to try below.
>> >
>> >Thanks, Roger.
>> >
>> >---8<---
>> >diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>> >index 527ac2aadd..40075e2756 100644
>> >--- a/xen/arch/x86/hvm/vioapic.c
>> >+++ b/xen/arch/x86/hvm/vioapic.c
>> >@@ -610,11 +610,15 @@ int vioapic_init(struct domain *d)
>> >xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
>> > return -ENOMEM;
>> > 
>> >+printk("Adding %u vIO APICs\n", nr_vioapics);
>> >+
>> > for ( i = 0; i < nr_vioapics; i++ )
>> > {
>> > unsigned int nr_pins = is_hardware_domain(d) ? 
>> > nr_ioapic_entries[i] :
>> > ARRAY_SIZE(domain_vioapic(d, 0)->domU.redirtbl);
>> > 
>> >+printk("vIO APIC %u has %u pins\n", i, nr_pins);
>> >+
>> > if ( (domain_vioapic(d, i) =
>> >   xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
>> > {
>> >@@ -623,8 +627,12 @@ int vioapic_init(struct domain *d)
>> > }
>> > domain_vioapic(d, i)->nr_pins = nr_pins;
>> > nr_gsis += nr_pins;
>> >+printk("nr_gsis: %u\n", nr_gsis);
>> > }
>> > 
>> >+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u 
>> >highest_gsi: %u\n",
>> >+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, 
>> >highest_gsi());
>> >+
>> > ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
>> > 
>> > d->arch.hvm_domain.nr_vioapics = nr_vioapics;
>> >
>> 
>> Please Cc me or put me in "To:".  Are there holes in the physical IOAPICs' GSI ranges?
>
>That's weird, my MUA (Mutt) seems to automatically remove your address from the
>"To:" field. I have no idea why it does that.
>
>So yes, your box has a GSI gap which is not handled by any IO APIC. TBH, I
>didn't even know that was possible. In any case, the patch below should solve it.
>
>---8<---
>commit f52d05fca03440d771eb56077c9d60bb630eb423
>diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>index 5157db7a4e..ec87a97651 100644
>--- a/xen/arch/x86/hvm/vioapic.c
>+++ b/xen/arch/x86/hvm/vioapic.c
>@@ -64,37 +64,23 @@ static struct hvm_vioapic *addr_vioapic(const struct 
>domain *d,
> struct hvm_vioapic *gsi_vioapic(const struct domain *d, unsigned int gsi,
> unsigned int *pin)
> {
>-unsigned int i, base_gsi = 0;
>+unsigned int i;
> 
> for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
> {
> struct hvm_vioapic *vioapic = domain_vioapic(d, i);
> 
>-if ( gsi >= base_gsi && gsi < base_gsi + vioapic->nr_pins )
>+if ( gsi >= vioapic->base_gsi &&
>+ gsi < vioapic->base_gsi + vioapic->nr_pins )
> {
>-*pin = gsi - base_gsi;
>+*pin = gsi - vioapic->base_gsi;
> return vioapic;
> }
>-
>-base_gsi += vioapic->nr_pins;
> }
> 
> return NULL;
> }
> 
>-static unsigned int base_gsi(const struct domain *d,
>- const struct hvm_vioapic *vioapic)
>-{
>-unsigned int nr_vioapics = d->arch.hvm_domain.nr_vioapics;
>-unsigned int base_gsi = 0, i = 0;
>-const struct hvm_vioapic *tmp;
>-
>-while ( i < nr_vioapics && (tmp = domain_vioapic(d, i++)) != vioapic )
>-base_gsi += tmp->nr_pins;
>-
>-return base_gsi;
>-}
>-
> static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
> {
> uint32_t result = 0;
>@@ -180,7 +166,7 @@ static void vioapic_write_redirent(
> struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> union vioapic_redir_entry *pent, ent;
> int unmasked = 0;
>-unsigned int gsi = base_gsi(d, vioapic) + idx;
>+unsigned int gsi = vioapic->base_gsi + idx;

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 10:49:45AM +0800, Chao Gao wrote:
> On Mon, Apr 17, 2017 at 09:38:54AM +0100, Roger Pau Monné wrote:
> >On Mon, Apr 17, 2017 at 09:03:12AM +0800, Chao Gao wrote:
> >> On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
> >> >On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
> >> >> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
> >> >> >Hello,
> >> >> >
> >> >> >Although PVHv2 Dom0 is not yet finished, I've been trying the current 
> >> >> >code on
> >> >> >different hardware, and found that with pre-Haswell Intel hardware 
> >> >> >PVHv2 Dom0
> >> >> >completely freezes the box when calling iommu_hwdom_init in 
> >> >> >dom0_construct_pvh.
> >> >> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or 
> >> >> >newer).
> >> >> >
> >> >> >I'm not able to debug that in any meaningful way because the box seems 
> >> >> >to lock
> >> >> >up completely, even the watchdog NMI stops working. Here is the boot 
> >> >> >log, up to
> >> >> >the point where it freezes:
> >> >> 
> >> >> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software 
> >> >> bug?
> >> >> 
> >
> >It seems like we are not properly adding/accounting the vIO APICs, but I 
> >cannot
> >really see how. I have another patch for you to try below.
> >
> >Thanks, Roger.
> >
> >---8<---
> > diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> >index 527ac2aadd..40075e2756 100644
> >--- a/xen/arch/x86/hvm/vioapic.c
> >+++ b/xen/arch/x86/hvm/vioapic.c
> >@@ -610,11 +610,15 @@ int vioapic_init(struct domain *d)
> >xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
> > return -ENOMEM;
> > 
> >+printk("Adding %u vIO APICs\n", nr_vioapics);
> >+
> > for ( i = 0; i < nr_vioapics; i++ )
> > {
> > unsigned int nr_pins = is_hardware_domain(d) ? nr_ioapic_entries[i] 
> > :
> > ARRAY_SIZE(domain_vioapic(d, 0)->domU.redirtbl);
> > 
> >+printk("vIO APIC %u has %u pins\n", i, nr_pins);
> >+
> > if ( (domain_vioapic(d, i) =
> >   xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
> > {
> >@@ -623,8 +627,12 @@ int vioapic_init(struct domain *d)
> > }
> > domain_vioapic(d, i)->nr_pins = nr_pins;
> > nr_gsis += nr_pins;
> >+printk("nr_gsis: %u\n", nr_gsis);
> > }
> > 
> >+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u 
> >highest_gsi: %u\n",
> >+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, highest_gsi());
> >+
> > ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
> > 
> > d->arch.hvm_domain.nr_vioapics = nr_vioapics;
> >
> 
> Please Cc me or put me in "To:".  Are there holes in the physical IOAPICs' GSI ranges?

That's weird, my MUA (Mutt) seems to automatically remove your address from the
"To:" field. I have no idea why it does that.

So yes, your box has a GSI gap which is not handled by any IO APIC. TBH, I
didn't even know that was possible. In any case, the patch below should solve it.

---8<---
commit f52d05fca03440d771eb56077c9d60bb630eb423
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 5157db7a4e..ec87a97651 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -64,37 +64,23 @@ static struct hvm_vioapic *addr_vioapic(const struct domain 
*d,
 struct hvm_vioapic *gsi_vioapic(const struct domain *d, unsigned int gsi,
 unsigned int *pin)
 {
-unsigned int i, base_gsi = 0;
+unsigned int i;
 
 for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
 {
 struct hvm_vioapic *vioapic = domain_vioapic(d, i);
 
-if ( gsi >= base_gsi && gsi < base_gsi + vioapic->nr_pins )
+if ( gsi >= vioapic->base_gsi &&
+ gsi < vioapic->base_gsi + vioapic->nr_pins )
 {
-*pin = gsi - base_gsi;
+*pin = gsi - vioapic->base_gsi;
 return vioapic;
 }
-
-base_gsi += vioapic->nr_pins;
 }
 
 return NULL;
 }
 
-static unsigned int base_gsi(const struct domain *d,
- const struct hvm_vioapic *vioapic)
-{
-unsigned int nr_vioapics = d->arch.hvm_domain.nr_vioapics;
-unsigned int base_gsi = 0, i = 0;
-const struct hvm_vioapic *tmp;
-
-while ( i < nr_vioapics && (tmp = domain_vioapic(d, i++)) != vioapic )
-base_gsi += tmp->nr_pins;
-
-return base_gsi;
-}
-
 static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
 {
 uint32_t result = 0;
@@ -180,7 +166,7 @@ static void vioapic_write_redirent(
 struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 union vioapic_redir_entry *pent, ent;
 int unmasked = 0;
-unsigned int gsi = base_gsi(d, vioapic) + idx;
+unsigned int gsi = vioapic->base_gsi + idx;
 
 spin_lock(&d->arch.hvm_domain.irq_lock);
 
@@ -340,7 +326,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, 
unsigned int pin)
 struct domain 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 09:38:54AM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 09:03:12AM +0800, Chao Gao wrote:
>> On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
>> >On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
>> >> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>> >> >Hello,
>> >> >
>> >> >Although PVHv2 Dom0 is not yet finished, I've been trying the current 
>> >> >code on
>> >> >different hardware, and found that with pre-Haswell Intel hardware PVHv2 
>> >> >Dom0
>> >> >completely freezes the box when calling iommu_hwdom_init in 
>> >> >dom0_construct_pvh.
>> >> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or 
>> >> >newer).
>> >> >
>> >> >I'm not able to debug that in any meaningful way because the box seems 
>> >> >to lock
>> >> >up completely, even the watchdog NMI stops working. Here is the boot 
>> >> >log, up to
>> >> >the point where it freezes:
>> >> 
>> >> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software 
>> >> bug?
>> >> 
>
>It seems like we are not properly adding/accounting the vIO APICs, but I cannot
>really see how. I have another patch for you to try below.
>
>Thanks, Roger.
>
>---8<---
>   diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>index 527ac2aadd..40075e2756 100644
>--- a/xen/arch/x86/hvm/vioapic.c
>+++ b/xen/arch/x86/hvm/vioapic.c
>@@ -610,11 +610,15 @@ int vioapic_init(struct domain *d)
>xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
> return -ENOMEM;
> 
>+printk("Adding %u vIO APICs\n", nr_vioapics);
>+
> for ( i = 0; i < nr_vioapics; i++ )
> {
> unsigned int nr_pins = is_hardware_domain(d) ? nr_ioapic_entries[i] :
> ARRAY_SIZE(domain_vioapic(d, 0)->domU.redirtbl);
> 
>+printk("vIO APIC %u has %u pins\n", i, nr_pins);
>+
> if ( (domain_vioapic(d, i) =
>   xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
> {
>@@ -623,8 +627,12 @@ int vioapic_init(struct domain *d)
> }
> domain_vioapic(d, i)->nr_pins = nr_pins;
> nr_gsis += nr_pins;
>+printk("nr_gsis: %u\n", nr_gsis);
> }
> 
>+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u highest_gsi: 
>%u\n",
>+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, highest_gsi());
>+
> ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
> 
> d->arch.hvm_domain.nr_vioapics = nr_vioapics;
>

Please Cc me or put me in "To:".  Are there holes in the physical IOAPICs' GSI ranges?

With the above patch,

(XEN) [   14.262237] Dom0 has maximum 1448 PIRQs
(XEN) [   14.264413] Adding 9 vIO APICs
(XEN) [   14.265827] vIO APIC 0 has 24 pins
(XEN) [   14.267256] nr_gsis: 24
(XEN) [   14.268673] vIO APIC 1 has 8 pins
(XEN) [   14.270175] nr_gsis: 32
(XEN) [   14.271589] vIO APIC 2 has 8 pins
(XEN) [   14.273011] nr_gsis: 40
(XEN) [   14.274434] vIO APIC 3 has 8 pins
(XEN) [   14.275864] nr_gsis: 48
(XEN) [   14.277283] vIO APIC 4 has 8 pins
(XEN) [   14.278709] nr_gsis: 56
(XEN) [   14.280127] vIO APIC 5 has 8 pins
(XEN) [   14.281561] nr_gsis: 64
(XEN) [   14.282986] vIO APIC 6 has 8 pins
(XEN) [   14.284417] nr_gsis: 72
(XEN) [   14.285837] vIO APIC 7 has 8 pins
(XEN) [   14.287262] nr_gsis: 80
(XEN) [   14.288683] vIO APIC 8 has 8 pins
(XEN) [   14.290114] nr_gsis: 88
(XEN) [   14.291538] domain nr_gsis: 104 vioapic gsis: 88 nr_irqs_gsi: 104 
highest_gsi: 103
(XEN) [   14.294417] Assertion 'hvm_domain_irq(d)->nr_gsis == nr_gsis' failed 
at vioapic.c:608
(XEN) [   14.297282] [ Xen-4.9-unstable  x86_64  debug=y   Not tainted ]
(XEN) [   14.298743] CPU:0
(XEN) [   14.300161] RIP:e008:[] vioapic_init+0x186/0x1dd
(XEN) [   14.301633] RFLAGS: 00010287   CONTEXT: hypervisor
(XEN) [   14.303094] rax: 830837c7ea00   rbx: 0009   rcx: 

(XEN) [   14.305976] rdx: 82d080457fff   rsi: 000a   rdi: 
82d08044d6b8
(XEN) [   14.308851] rbp: 82d080457d28   rsp: 82d080457ce8   r8:  
83083e00
(XEN) [   14.311781] r9:  0006   r10: 000472d2   r11: 
0006
(XEN) [   14.314654] r12: 0008   r13: 830837d2e000   r14: 
0058
(XEN) [   14.317528] r15: 830837c7eb20   cr0: 8005003b   cr4: 
003526e0
(XEN) [   14.320403] cr3: 6f84c000   cr2: 
(XEN) [   14.321855] ds:    es:    fs:    gs:    ss:    cs: 
e008
(XEN) [   14.324734] Xen code around  
(vioapic_init+0x186/0x1dd):
(XEN) [   14.327591]  00 00 44 3b 70 40 74 02 <0f> 0b 8b 45 cc 41 89 85 b0 02 
00 00 4c 89 ef e8
(XEN) [   14.330458] Xen stack trace from rsp=82d080457ce8:
(XEN) [   14.331908]82d08029e7de 000937c7e010 82d080457d08 
830837d2e000
(XEN) [   14.334790]0068 0001  

(XEN) [   14.337661]82d080457d48 82d0802de276 830837d2e000 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Mon, Apr 17, 2017 at 08:47:48AM +0100, Roger Pau Monné wrote:
>On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
>> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>> >Hello,
>> >
>> >Although PVHv2 Dom0 is not yet finished, I've been trying the current code 
>> >on
>> >different hardware, and found that with pre-Haswell Intel hardware PVHv2 
>> >Dom0
>> >completely freezes the box when calling iommu_hwdom_init in 
>> >dom0_construct_pvh.
>> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or newer).
>> >
>> >I'm not able to debug that in any meaningful way because the box seems to 
>> >lock
>> >up completely, even the watchdog NMI stops working. Here is the boot log, 
>> >up to
>> >the point where it freezes:
>> 
>> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software bug?
>> 
>---8<---
>diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
>index 527ac2aadd..1df7710041 100644
>--- a/xen/arch/x86/hvm/vioapic.c
>+++ b/xen/arch/x86/hvm/vioapic.c
>@@ -625,6 +625,9 @@ int vioapic_init(struct domain *d)
> nr_gsis += nr_pins;
> }
> 
>+printk("domain nr_gsis: %u vioapic gsis: %u nr_irqs_gsi: %u highest_gsi: 
>%u\n",
>+   hvm_domain_irq(d)->nr_gsis, nr_gsis, nr_irqs_gsi, highest_gsi());
>+
> ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
> 
> d->arch.hvm_domain.nr_vioapics = nr_vioapics;

With the above patch,
(XEN) [   10.420001] PCI: MCFG area at 8000 reserved in E820
(XEN) [   10.426854] PCI: Using MCFG for segment  bus 00-ff
(XEN) [   10.433952] Intel VT-d iommu 6 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.441856] Intel VT-d iommu 5 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.449759] Intel VT-d iommu 4 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.457671] Intel VT-d iommu 3 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.465585] Intel VT-d iommu 2 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.473485] Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.481394] Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.489299] Intel VT-d iommu 7 supported page sizes: 4kB, 2MB, 1GB.
(XEN) [   10.497196] Intel VT-d Snoop Control enabled.
(XEN) [   10.503196] Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) [   10.510145] Intel VT-d Queued Invalidation enabled.
(XEN) [   10.516646] Intel VT-d Interrupt Remapping enabled.
(XEN) [   10.523173] Intel VT-d Posted Interrupt not enabled.
(XEN) [   10.529775] Intel VT-d Shared EPT tables enabled.
(XEN) [   10.548815] I/O virtualisation enabled
(XEN) [   10.554186]  - Dom0 mode: Relaxed
(XEN) [   10.559264] Interrupt remapping enabled
(XEN) [   10.564854] nr_sockets: 5
(XEN) [   10.569231] Enabled directed EOI with ioapic_ack_old on!
(XEN) [   10.577294] ENABLING IO-APIC IRQs
(XEN) [   10.582245]  -> Using old ACK method
(XEN) [   10.587967] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) [   10.797645] TSC deadline timer enabled
(XEN) [   10.887286] Defaulting to alternative key handling; send 'A' to switch 
to normal mode.
(XEN) [   10.897864] mwait-idle: MWAIT substates: 0x2020
(XEN) [   10.899335] mwait-idle: v0.4.1 model 0x55
(XEN) [   10.900799] mwait-idle: lapic_timer_reliable_states 0x
(XEN) [   10.902304] VMX: Supported advanced features:
(XEN) [   10.903781]  - APIC MMIO access virtualisation
(XEN) [   10.905258]  - APIC TPR shadow
(XEN) [   10.907138]  - Extended Page Tables (EPT)
(XEN) [   10.908782]  - Virtual-Processor Identifiers (VPID)
(XEN) [   10.910262]  - Virtual NMI
(XEN) [   10.911719]  - MSR direct-access bitmap
(XEN) [   10.913188]  - Unrestricted Guest
(XEN) [   10.914650]  - APIC Register Virtualization
(XEN) [   10.916126]  - Virtual Interrupt Delivery
(XEN) [   10.917596]  - Posted Interrupt Processing
(XEN) [   10.919066]  - VMCS shadowing
(XEN) [   10.920519]  - VM Functions
(XEN) [   10.921976]  - Virtualisation Exceptions
(XEN) [   10.923448]  - Page Modification Logging
(XEN) [   10.924918]  - TSC Scaling
(XEN) [   10.926371] HVM: ASIDs enabled.
(XEN) [   10.927829] HVM: VMX enabled
(XEN) [   10.929278] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [   10.930762] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 6, using 0x1
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 9, using 0x1
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 10, using 0x1
(XEN) [0.00] CMCI: threshold 0x2 too large for CPU56 bank 11, using 0x1
(XEN) [   13.216648] Brought up 112 CPUs
(XEN) [   13.739330] build-id: dc4540250abe5d96614d340c67069e390c37c21c
(XEN) [   13.740816] Running stub recovery selftests...
(XEN) [   13.742258] traps.c:3466: GPF (): 82d0b041 
[82d0b041] -> 82d080359cf2
(XEN) [   13.745155] traps.c:813: Trap 12: 82d0b040 [82d0b040] 
-> 82d080359cf2
(XEN) [   13.748046] traps.c:1215: Trap 3: 82d0b041 

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Roger Pau Monné
On Mon, Apr 17, 2017 at 07:32:45AM +0800, Chao Gao wrote:
> On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
> >Hello,
> >
> >Although PVHv2 Dom0 is not yet finished, I've been trying the current code on
> >different hardware, and found that with pre-Haswell Intel hardware PVHv2 Dom0
> >completely freezes the box when calling iommu_hwdom_init in 
> >dom0_construct_pvh.
> >OTOH the same doesn't happen when using a newer CPU (ie: haswell or newer).
> >
> >I'm not able to debug that in any meaningful way because the box seems to 
> >lock
> >up completely, even the watchdog NMI stops working. Here is the boot log, up 
> >to
> >the point where it freezes:
> 
> I try "dom0=pvh" with my skylake. An assertion failed. Is it a software bug?
> 
[...]
> (XEN) [0.00] ACPI: IOAPIC (id[0x08] address[0xfec0] gsi_base[0])
> (XEN) [0.00] IOAPIC[0]: apic_id 8, version 32, address 0xfec0, GSI 0-23
> (XEN) [0.00] ACPI: IOAPIC (id[0x09] address[0xfec01000] gsi_base[24])
> (XEN) [0.00] IOAPIC[1]: apic_id 9, version 32, address 0xfec01000, GSI 24-31
> (XEN) [0.00] ACPI: IOAPIC (id[0x0a] address[0xfec08000] gsi_base[32])
> (XEN) [0.00] IOAPIC[2]: apic_id 10, version 32, address 0xfec08000, GSI 32-39
> (XEN) [0.00] ACPI: IOAPIC (id[0x0b] address[0xfec1] gsi_base[40])
> (XEN) [0.00] IOAPIC[3]: apic_id 11, version 32, address 0xfec1, GSI 40-47
> (XEN) [0.00] ACPI: IOAPIC (id[0x0c] address[0xfec18000] gsi_base[48])
> (XEN) [0.00] IOAPIC[4]: apic_id 12, version 32, address 0xfec18000, GSI 48-55
> (XEN) [0.00] ACPI: IOAPIC (id[0x0f] address[0xfec2] gsi_base[72])
> (XEN) [0.00] IOAPIC[5]: apic_id 15, version 32, address 0xfec2, GSI 72-79
> (XEN) [0.00] ACPI: IOAPIC (id[0x10] address[0xfec28000] gsi_base[80])
> (XEN) [0.00] IOAPIC[6]: apic_id 16, version 32, address 0xfec28000, GSI 80-87
> (XEN) [0.00] ACPI: IOAPIC (id[0x11] address[0xfec3] gsi_base[88])
> (XEN) [0.00] IOAPIC[7]: apic_id 17, version 32, address 0xfec3, GSI 88-95
> (XEN) [0.00] ACPI: IOAPIC (id[0x12] address[0xfec38000] gsi_base[96])
> (XEN) [0.00] IOAPIC[8]: apic_id 18, version 32, address 0xfec38000, GSI 96-103
[...]
> (XEN) [0.00] IRQ limits: 104 GSI, 21416 MSI/MSI-X
[...]
> (XEN) [   14.147217] Dom0 has maximum 1448 PIRQs
> (XEN) [   14.151527] Assertion 'hvm_domain_irq(d)->nr_gsis == nr_gsis' failed at vioapic.c:600
> (XEN) [   14.154404] [ Xen-4.9-unstable  x86_64  debug=y   Not tainted ]
> (XEN) [   14.155867] CPU:0
> (XEN) [   14.157286] RIP:e008:[] vioapic_init+0x110/0x167
> (XEN) [   14.158750] RFLAGS: 00010287   CONTEXT: hypervisor
> (XEN) [   14.160203] rax: 830837c7fa00   rbx: 0009   rcx: c8381c70
> (XEN) [   14.163073] rdx: 0071   rsi: 830837c7e400   rdi: 83083fff7868
> (XEN) [   14.165937] rbp: 82d080457d28   rsp: 82d080457ce8   r8:  82e0
> (XEN) [   14.168797] r9:  0381   r10: 82d08045f400   r11:
> (XEN) [   14.171657] r12: 0008   r13: 830837d29000   r14: 0058
> (XEN) [   14.174568] r15: 830837c7fb20   cr0: 8005003b   cr4: 003526e0
> (XEN) [   14.177437] cr3: 6f84c000   cr2:
> (XEN) [   14.178887] ds:    es:    fs:    gs:    ss:    cs: e008
> (XEN) [   14.181753] Xen code around  (vioapic_init+0x110/0x167):
> (XEN) [   14.184609]  00 00 44 3b 70 40 74 02 <0f> 0b 8b 45 cc 41 89 85 b0 02 00 00 4c 89 ef e8
> (XEN) [   14.187473] Xen stack trace from rsp=82d080457ce8:
> (XEN) [   14.188916]82d08029e7de 000937c7f010 82d080457d08 830837d29000
> (XEN) [   14.191784]0068 0001 
> (XEN) [   14.194645]82d080457d48 82d0802de276 830837d29000 
> (XEN) [   14.197507]82d080457d78 82d08026d593 82d080457d78 830837d29000
> (XEN) [   14.200371]001f 0007 82d080457de8 82d080205226
> (XEN) [   14.203234]82d0804380e0 0004 82d080457eb4 
> (XEN) [   14.206097]82d080457dc8 f7fa32231fcbfbff 01212c100800 00e0
> (XEN) [   14.208956]830838543850 00e0 82d08043b780 006f
> (XEN) [   14.211817]82d080457f08 82d0803ee1be 0028fe80 015c
> (XEN) [   14.214739]01df 0002 0002 0002
> (XEN) [   14.217598]0002 0001 0001 0001
> (XEN) [   14.220459]0001  82d080429a90 0017
> (XEN) [   14.223317]001075ec7000 013b7000 0108

Re: [Xen-devel] PVH Dom0 Intel IOMMU issues

2017-04-17 Thread Chao Gao
On Fri, Apr 14, 2017 at 04:34:41PM +0100, Roger Pau Monné wrote:
>Hello,
>
>Although PVHv2 Dom0 is not yet finished, I've been trying the current code on
>different hardware, and found that with pre-Haswell Intel hardware PVHv2 Dom0
>completely freezes the box when calling iommu_hwdom_init in dom0_construct_pvh.
>OTOH the same doesn't happen when using a newer CPU (ie: haswell or newer).
>
>I'm not able to debug that in any meaningful way because the box seems to lock
>up completely, even the watchdog NMI stops working. Here is the boot log, up to
>the point where it freezes:

I try "dom0=pvh" with my skylake. An assertion failed. Is it a software bug?

 Xen 4.9-unstable
(XEN) [0.00] Xen version 4.9-unstable (r...@sh.intel.com) (gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)) debug=y  Mon Apr 17 04:41:08 CST 2017
(XEN) [0.00] Latest ChangeSet: Mon Apr 10 17:32:01 2017 +0200 git:17cd662
(XEN) [0.00] Bootloader: GRUB 2.02~beta2
(XEN) [0.00] Command line: conring_size=16m iommu=verbose,debug loglvl=all guest_loglvl=all com1=115200,8n1,0x3f8,4 console=com1,vga console_timestamps=boot vvtd_debug=0x3a dom0_mem=10G dom0=pvh
(XEN) [0.00] Xen image load base address: 0 
(XEN) [0.00] Video information: 
(XEN) [0.00]  VGA is text mode 80x25, font 8x16 
(XEN) [0.00]  VBE/DDC methods: none; EDID transfer time: 1 seconds  
(XEN) [0.00]  EDID info not retrieved because no DDC retrieval method detected
(XEN) [0.00] Disc information:  
(XEN) [0.00]  Found 1 MBR signatures
(XEN) [0.00]  Found 1 EDD information structures
(XEN) [0.00] Xen-e820 RAM map:  
(XEN) [0.00]   - 00099800 (usable)  
(XEN) [0.00]  00099800 - 000a (reserved)
(XEN) [0.00]  000e - 0010 (reserved)
(XEN) [0.00]  0010 - 67b3b000 (usable)
(XEN) [0.00]  67b3b000 - 67d62000 (reserved)
(XEN) [0.00]  67d62000 - 681fc000 (usable)
(XEN) [0.00]  681fc000 - 6829f000 (ACPI data)
(XEN) [0.00]  6829f000 - 6908a000 (usable)
(XEN) [0.00]  6908a000 - 6a08a000 (reserved)
(XEN) [0.00]  6a08a000 - 6b6e6000 (usable)
(XEN) [0.00]  6b6e6000 - 6b9e6000 (reserved)
(XEN) [0.00]  6b9e6000 - 6c416000 (ACPI NVS)
(XEN) [0.00]  6c416000 - 6c516000 (ACPI data)
(XEN) [0.00]  6c516000 - 6fb0 (usable)
(XEN) [0.00]  6fb0 - 9000 (reserved)
(XEN) [0.00]  fd00 - fe80 (reserved)
(XEN) [0.00]  fec0 - fec01000 (reserved)
(XEN) [0.00]  fec8 - fed01000 (reserved)
(XEN) [0.00]  ff80 - 000100c0 (reserved)
(XEN) [0.00]  000100c0 - 00108000 (usable)
(XEN) [0.00] New Xen image base address: 0x6f40
(XEN) [0.00] ACPI: RSDP 000F0510, 0024 (r2 INTEL )
(XEN) [0.00] ACPI: XSDT 6C42C188, 0104 (r1 INTEL  S2600WF 0 INTL 20091013)
(XEN) [0.00] ACPI: FACP 6C512000, 010C (r5 INTEL  S2600WF 0 INTL 20091013)
(XEN) [0.00] ACPI: DSDT 6C4B4000, 36756 (r2 INTEL  S2600WF 3 INTL 20091013)
(XEN) [0.00] ACPI: FACS 6C38E000, 0040
(XEN) [0.00] ACPI: SSDT 6C513000, 04B0 (r2 INTEL  S2600WF 0 MSFT  10D)
(XEN) [0.00] ACPI: UEFI 6C405000, 0042 (r1 INTEL  S2600WF 2 INTL 20091013)
(XEN) [0.00] ACPI: UEFI 6C39, 005C (r1  INTEL RstUefiV0 0)
(XEN) [0.00] ACPI: HPET 6C511000, 0038 (r1 INTEL  S2600WF 1 INTL 20091013)
(XEN) [0.00] ACPI: APIC 6C50F000, 16DE (r3 INTEL  S2600WF 0 INTL 20091013)
(XEN) [0.00] ACPI: MCFG 6C50E000, 003C (r1 INTEL  S2600WF 1 INTL 20091013)
(XEN) [0.00] ACPI: MSCT 6C50D000, 0090 (r1 INTEL  S2600WF 1 INTL 20091013)
(XEN) [0.00] ACPI: NFIT 6C4F4000, 18028 (r10 0)
(XEN) [0.00] ACPI: PCAT 6C4F3000, 0048 (r1 INTEL  S2600WF 2 INTL 20091013)
(XEN) [0.00] ACPI: PCCT 6C4F2000, 00AC (r1 INTEL  S2600WF 2 INTL 20091013)
(XEN) [0.00] ACPI: RASF 6C4F1000, 0030 (r1 INTEL  S2600WF 1 INTL 20091013)
(XEN) [0.00] ACPI: SLIT 6C4F, 006C (r1 INTEL  S2600WF