[linux-linus test] 167949: tolerable FAIL - PUSHED

2022-01-29 Thread osstest service owner
flight 167949 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167949/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10   fail REGR. vs. 167941

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail blocked in 167941
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 167941
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167941
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 167941
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 167941
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 167941
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 167941
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail like 167941
 test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
 test-arm64-arm64-xl-vhd 14 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl 15 migrate-support-check fail never pass
 test-armhf-armhf-xl 16 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail never pass

version targeted for testing:
 linux f8c7e4ede46fe63ff1669652648aab09d112
baseline version:
 linux 169387e2aa291a4e3cb856053730fe99d6cec06f

Last test of basis   167941  2022-01-29 02:33:22 Z    1 days
Testing same since   167949  2022-01-29 19:11:13 Z    0 days    1 attempts


People who touched revisions under test:
  Alan Stern 
  Alex Xu (Hello71) 
  Amelie Delaunay 
  Anshuman Khandual 
  Arnaud Pouliquen 
  Athira Rajeev 
  Badhri Jagan Sridharan 
  Bartosz Golaszewski 
  Bjorn Helgaas 
  Cameron Williams 
  Casey Schaufler 
  Catalin Marinas 
  Changcheng

RE: [PATCH v3 16/23] IOMMU: fold flush-all hook into "flush one"

2022-01-29 Thread Tian, Kevin
> From: Jan Beulich 
> Sent: Tuesday, January 11, 2022 12:34 AM
> 
> Having a separate flush-all hook has always been puzzling me some. We
> will want to be able to force a full flush via accumulated flush flags
> from the map/unmap functions. Introduce a respective new flag and fold
> all flush handling to use the single remaining hook.
> 
> Note that because of the respective comments in SMMU and IPMMU-VMSA
> code, I've folded the two prior hook functions into one. For SMMU-v3,
> which lacks a comment towards incapable hardware, I've left both
> functions in place on the assumption that selective and full flushes
> will eventually want separating.
> 
> Signed-off-by: Jan Beulich 
> Reviewed-by: Roger Pau Monné 
> [IPMMU-VMSA and SMMU-V2]
> Reviewed-by: Oleksandr Tyshchenko 
> [SMMUv3]
> Reviewed-by: Rahul Singh 
> [Arm]
> Acked-by: Julien Grall 

Reviewed-by: Kevin Tian 

> ---
> TBD: What we really are going to need is for the map/unmap functions to
>  specify that a wider region needs flushing than just the one
>  covered by the present set of (un)maps. This may still be less than
>  a full flush, but at least as a first step it seemed better to me
>  to keep things simple and go the flush-all route.
> ---
> v3: Re-base over changes earlier in the series.
> v2: New.
> 
> --- a/xen/drivers/passthrough/amd/iommu.h
> +++ b/xen/drivers/passthrough/amd/iommu.h
> @@ -255,7 +255,6 @@ int amd_iommu_get_reserved_device_memory
>  int __must_check amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t
> dfn,
>   unsigned long page_count,
>   unsigned int flush_flags);
> -int __must_check amd_iommu_flush_iotlb_all(struct domain *d);
>  void amd_iommu_print_entries(const struct amd_iommu *iommu, unsigned
> int dev_id,
>   dfn_t dfn);
> 
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -478,15 +478,18 @@ int amd_iommu_flush_iotlb_pages(struct d
>  {
>  unsigned long dfn_l = dfn_x(dfn);
> 
> -ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
> -ASSERT(flush_flags);
> +if ( !(flush_flags & IOMMU_FLUSHF_all) )
> +{
> +ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
> +ASSERT(flush_flags);
> +}
> 
>  /* Unless a PTE was modified, no flush is required */
>  if ( !(flush_flags & IOMMU_FLUSHF_modified) )
>  return 0;
> 
> -/* If the range wraps then just flush everything */
> -if ( dfn_l + page_count < dfn_l )
> +/* If so requested or if the range wraps then just flush everything. */
> +if ( (flush_flags & IOMMU_FLUSHF_all) || dfn_l + page_count < dfn_l )
>  {
>  amd_iommu_flush_all_pages(d);
>  return 0;
> @@ -511,13 +514,6 @@ int amd_iommu_flush_iotlb_pages(struct d
> 
>  return 0;
>  }
> -
> -int amd_iommu_flush_iotlb_all(struct domain *d)
> -{
> -amd_iommu_flush_all_pages(d);
> -
> -return 0;
> -}
> 
>  int amd_iommu_reserve_domain_unity_map(struct domain *d,
> const struct ivrs_unity_map *map,
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -642,7 +642,6 @@ static const struct iommu_ops __initcons
>  .map_page = amd_iommu_map_page,
>  .unmap_page = amd_iommu_unmap_page,
>  .iotlb_flush = amd_iommu_flush_iotlb_pages,
> -.iotlb_flush_all = amd_iommu_flush_iotlb_all,
>  .reassign_device = reassign_device,
>  .get_device_group_id = amd_iommu_group_id,
>  .enable_x2apic = iov_enable_xt,
> --- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> @@ -930,13 +930,19 @@ out:
>  }
> 
>  /* Xen IOMMU ops */
> -static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
> +static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
> +  unsigned long page_count,
> +  unsigned int flush_flags)
>  {
>  struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)-
> >arch.priv;
> 
> +ASSERT(flush_flags);
> +
>  if ( !xen_domain || !xen_domain->root_domain )
>  return 0;
> 
> +/* The hardware doesn't support selective TLB flush. */
> +
>  spin_lock(&xen_domain->lock);
>  ipmmu_tlb_invalidate(xen_domain->root_domain);
>  spin_unlock(&xen_domain->lock);
> @@ -944,16 +950,6 @@ static int __must_check ipmmu_iotlb_flus
>  return 0;
>  }
> 
> -static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
> -  unsigned long page_count,
> -  unsigned int flush_flags)
> -{
> -ASSERT(flush_flags);
> -
> -/* The hardware doesn't support selective TLB flush. */
> -return ipmmu_iotlb_flush_all(d);
> -}
> -
>  static struct ipmmu_vmsa_domain *ipmmu_get_cache_domain(struct
> 
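
The approach in the patch above (a single flush hook, where an "all" flush flag forces a full flush rather than a separate flush-all callback) can be sketched in plain C. This is a toy model, not the actual Xen code: the flag values, counters, and `toy_flush()` are made up for illustration, but the control flow mirrors the amd_iommu_flush_iotlb_pages() diff.

```c
#include <assert.h>

/* Hypothetical flush flags, modelled on the patch's IOMMU_FLUSHF_* bits. */
#define FLUSHF_added    (1u << 0)
#define FLUSHF_modified (1u << 1)
#define FLUSHF_all      (1u << 2)  /* new flag: force a full flush */

static unsigned int full_flushes, range_flushes;

/* Single hook, replacing the separate flush-all callback. */
static int toy_flush(unsigned long dfn, unsigned long page_count,
                     unsigned int flush_flags)
{
    /* Unless a PTE was modified, no flush is required. */
    if ( !(flush_flags & FLUSHF_modified) )
        return 0;

    /* If so requested, or if the range wraps, just flush everything. */
    if ( (flush_flags & FLUSHF_all) || dfn + page_count < dfn )
    {
        ++full_flushes;
        return 0;
    }

    ++range_flushes;  /* selective flush of [dfn, dfn + page_count) */
    return 0;
}
```

The benefit, as the patch notes, is that flush decisions accumulate in one set of flags passed down from map/unmap, instead of callers choosing between two hooks.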

RE: [PATCH v3 15/23] VT-d: allow use of superpage mappings

2022-01-29 Thread Tian, Kevin
> From: Jan Beulich
> Sent: Tuesday, January 11, 2022 12:32 AM
> 
> ... depending on feature availability (and absence of quirks).
> 
> Also make the page table dumping function aware of superpages.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Kevin Tian 

> ---
> v3: Rename queue_free_pt()'s last parameter. Replace "level > 1" checks
> where possible. Tighten assertion.
> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -826,18 +826,37 @@ static int __must_check iommu_flush_iotl
>  return iommu_flush_iotlb(d, INVALID_DFN, 0, 0);
>  }
> 
> +static void queue_free_pt(struct domain *d, mfn_t mfn, unsigned int level)
> +{
> +if ( level > 1 )
> +{
> +struct dma_pte *pt = map_domain_page(mfn);
> +unsigned int i;
> +
> +for ( i = 0; i < PTE_NUM; ++i )
> +if ( dma_pte_present(pt[i]) && !dma_pte_superpage(pt[i]) )
> +queue_free_pt(d, maddr_to_mfn(dma_pte_addr(pt[i])),
> +  level - 1);
> +
> +unmap_domain_page(pt);
> +}
> +
> +iommu_queue_free_pgtable(d, mfn_to_page(mfn));
> +}
> +
>  /* clear one page's page table */
>  static int dma_pte_clear_one(struct domain *domain, daddr_t addr,
>   unsigned int order,
>   unsigned int *flush_flags)
>  {
>  struct domain_iommu *hd = dom_iommu(domain);
> -struct dma_pte *page = NULL, *pte = NULL;
> +struct dma_pte *page = NULL, *pte = NULL, old;
>  u64 pg_maddr;
> +unsigned int level = (order / LEVEL_STRIDE) + 1;
> 
>  spin_lock(&hd->arch.mapping_lock);
> -/* get last level pte */
> -pg_maddr = addr_to_dma_page_maddr(domain, addr, 1, flush_flags,
> false);
> +/* get target level pte */
> +pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
> false);
>  if ( pg_maddr < PAGE_SIZE )
>  {
>  spin_unlock(&hd->arch.mapping_lock);
> @@ -845,7 +864,7 @@ static int dma_pte_clear_one(struct doma
>  }
> 
>  page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> -pte = page + address_level_offset(addr, 1);
> +pte = &page[address_level_offset(addr, level)];
> 
>  if ( !dma_pte_present(*pte) )
>  {
> @@ -854,14 +873,20 @@ static int dma_pte_clear_one(struct doma
>  return 0;
>  }
> 
> +old = *pte;
>  dma_clear_pte(*pte);
> -*flush_flags |= IOMMU_FLUSHF_modified;
> 
>  spin_unlock(&hd->arch.mapping_lock);
>  iommu_sync_cache(pte, sizeof(struct dma_pte));
> 
>  unmap_vtd_domain_page(page);
> 
> +*flush_flags |= IOMMU_FLUSHF_modified;
> +
> +if ( order && !dma_pte_superpage(old) )
> +queue_free_pt(domain, maddr_to_mfn(dma_pte_addr(old)),
> +  order / LEVEL_STRIDE);
> +
>  return 0;
>  }
> 
> @@ -1952,6 +1977,7 @@ static int __must_check intel_iommu_map_
>  struct domain_iommu *hd = dom_iommu(d);
>  struct dma_pte *page, *pte, old, new = {};
>  u64 pg_maddr;
> +unsigned int level = (IOMMUF_order(flags) / LEVEL_STRIDE) + 1;
>  int rc = 0;
> 
>  /* Do nothing if VT-d shares EPT page table */
> @@ -1976,7 +2002,7 @@ static int __must_check intel_iommu_map_
>  return 0;
>  }
> 
> -pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1,
> flush_flags,
> +pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), level,
> flush_flags,
>true);
>  if ( pg_maddr < PAGE_SIZE )
>  {
> @@ -1985,13 +2011,15 @@ static int __must_check intel_iommu_map_
>  }
> 
>  page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> -pte = &page[dfn_x(dfn) & LEVEL_MASK];
> +pte = &page[address_level_offset(dfn_to_daddr(dfn), level)];
>  old = *pte;
> 
>  dma_set_pte_addr(new, mfn_to_maddr(mfn));
>  dma_set_pte_prot(new,
>   ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
>   ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> +if ( IOMMUF_order(flags) )
> +dma_set_pte_superpage(new);
> 
>  /* Set the SNP on leaf page table if Snoop Control available */
>  if ( iommu_snoop )
> @@ -2012,8 +2040,14 @@ static int __must_check intel_iommu_map_
> 
>  *flush_flags |= IOMMU_FLUSHF_added;
>  if ( dma_pte_present(old) )
> +{
>  *flush_flags |= IOMMU_FLUSHF_modified;
> 
> +if ( IOMMUF_order(flags) && !dma_pte_superpage(old) )
> +queue_free_pt(d, maddr_to_mfn(dma_pte_addr(old)),
> +  IOMMUF_order(flags) / LEVEL_STRIDE);
> +}
> +
>  return rc;
>  }
> 
> @@ -2370,6 +2404,7 @@ static int __init vtd_setup(void)
>  {
>  struct acpi_drhd_unit *drhd;
>  struct vtd_iommu *iommu;
> +unsigned int large_sizes = PAGE_SIZE_2M | PAGE_SIZE_1G;
>  int ret;
>  bool reg_inval_supported = true;
> 
> @@ -2412,6 +2447,11 @@ static int __init vtd_setup(void)
> cap_sps_2mb(iommu->cap) ? "
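
The order-to-level arithmetic the patch uses in dma_pte_clear_one() and intel_iommu_map_page() (`level = order / LEVEL_STRIDE + 1`) can be checked in isolation. The 9-bit stride below matches VT-d's 512-entry page tables, but the constant and helper name are restated here as a local sketch, not the Xen definitions.

```c
#include <assert.h>

/* VT-d page tables resolve 9 address bits per level (512 entries/table). */
#define LEVEL_STRIDE 9

/* Map a mapping order (log2 of size in 4k pages) to the page-table
 * level whose PTE covers the mapping: 1 = leaf, 2 = 2M, 3 = 1G. */
static unsigned int order_to_level(unsigned int order)
{
    return order / LEVEL_STRIDE + 1;
}
```

This is why a superpage map/unmap can stop the page-table walk early: the target level is derived directly from the order, rather than always walking to level 1.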

RE: [PATCH v3 03/23] VT-d: limit page table population in domain_pgd_maddr()

2022-01-29 Thread Tian, Kevin
> From: Jan Beulich 
> Sent: Tuesday, January 11, 2022 12:23 AM
> 
> I have to admit that I never understood why domain_pgd_maddr() wants to
> populate all page table levels for DFN 0. I can only assume that despite
> the comment there what is needed is population just down to the smallest
> possible nr_pt_levels that the loop later in the function may need to
> run to. Hence what is needed is the minimum of all possible
> iommu->nr_pt_levels, to then be passed into addr_to_dma_page_maddr()
> instead of literal 1.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Kevin Tian 

> ---
> v3: New.
> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -55,6 +55,7 @@ bool __read_mostly iommu_snoop = true;
>  #endif
> 
>  static unsigned int __read_mostly nr_iommus;
> +static unsigned int __read_mostly min_pt_levels = UINT_MAX;
> 
>  static struct iommu_ops vtd_ops;
>  static struct tasklet vtd_fault_tasklet;
> @@ -482,8 +483,11 @@ static uint64_t domain_pgd_maddr(struct
>  {
>  if ( !hd->arch.vtd.pgd_maddr )
>  {
> -/* Ensure we have pagetables allocated down to leaf PTE. */
> -addr_to_dma_page_maddr(d, 0, 1, NULL, true);
> +/*
> + * Ensure we have pagetables allocated down to the smallest
> + * level the loop below may need to run to.
> + */
> +addr_to_dma_page_maddr(d, 0, min_pt_levels, NULL, true);
> 
>  if ( !hd->arch.vtd.pgd_maddr )
>  return 0;
> @@ -1381,6 +1385,8 @@ int __init iommu_alloc(struct acpi_drhd_
>  return -ENODEV;
>  }
>  iommu->nr_pt_levels = agaw_to_level(agaw);
> +if ( min_pt_levels > iommu->nr_pt_levels )
> +min_pt_levels = iommu->nr_pt_levels;
> 
>  if ( !ecap_coherent(iommu->ecap) )
>  vtd_ops.sync_cache = sync_cache;
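
The min_pt_levels bookkeeping above is a simple running minimum over all IOMMUs discovered at boot, seeded with UINT_MAX. A standalone sketch (names are local to this example, not the Xen symbols):

```c
#include <assert.h>
#include <limits.h>

/* Running minimum of page-table levels across discovered IOMMUs. */
static unsigned int min_pt_levels = UINT_MAX;

static void account_iommu(unsigned int nr_pt_levels)
{
    if ( min_pt_levels > nr_pt_levels )
        min_pt_levels = nr_pt_levels;
}
```

Seeding with UINT_MAX means the first IOMMU always sets the value, and later, shallower IOMMUs can only lower it, which is exactly the depth domain_pgd_maddr() must pre-populate to.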



RE: [PATCH v3 02/23] VT-d: have callers specify the target level for page table walks

2022-01-29 Thread Tian, Kevin
> From: Jan Beulich 
> Sent: Tuesday, January 11, 2022 12:23 AM
> 
> In order to be able to insert/remove super-pages we need to allow
> callers of the walking function to specify at which point to stop the
> walk.
> 
> For intel_iommu_lookup_page() integrate the last level access into
> the main walking function.
> 
> dma_pte_clear_one() gets only partly adjusted for now: Error handling
> and order parameter get put in place, but the order parameter remains
> ignored (just like intel_iommu_map_page()'s order part of the flags).
> 
> Signed-off-by: Jan Beulich 
> ---
> I was actually wondering whether it wouldn't make sense to integrate
> dma_pte_clear_one() into its only caller intel_iommu_unmap_page(), for
> better symmetry with intel_iommu_map_page().

I think it's the right thing to do. It was a separate function because it had
multiple callers when first introduced. But now, given there is only one
caller, merging it with the caller for symmetry makes sense.

with or without that change (given it's simple):

Reviewed-by: Kevin Tian 

> ---
> v2: Fix build.
> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -347,63 +347,116 @@ static u64 bus_to_context_maddr(struct v
>  return maddr;
>  }
> 
> -static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int
> alloc)
> +/*
> + * This function walks (and if requested allocates) page tables to the
> + * designated target level. It returns
> + * - 0 when a non-present entry was encountered and no allocation was
> + *   requested,
> + * - a small positive value (the level, i.e. below PAGE_SIZE) upon allocation
> + *   failure,
> + * - for target > 0 the physical address of the page table holding the leaf
> + *   PTE for the requested address,
> + * - for target == 0 the full PTE.
> + */
> +static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t
> addr,
> +   unsigned int target,
> +   unsigned int *flush_flags, bool alloc)
>  {
>  struct domain_iommu *hd = dom_iommu(domain);
>  int addr_width = agaw_to_width(hd->arch.vtd.agaw);
>  struct dma_pte *parent, *pte = NULL;
> -int level = agaw_to_level(hd->arch.vtd.agaw);
> -int offset;
> +unsigned int level = agaw_to_level(hd->arch.vtd.agaw), offset;
>  u64 pte_maddr = 0;
> 
>  addr &= (((u64)1) << addr_width) - 1;
>  ASSERT(spin_is_locked(&hd->arch.mapping_lock));
> +ASSERT(target || !alloc);
> +
>  if ( !hd->arch.vtd.pgd_maddr )
>  {
>  struct page_info *pg;
> 
> -if ( !alloc || !(pg = iommu_alloc_pgtable(domain)) )
> +if ( !alloc )
> +goto out;
> +
> +pte_maddr = level;
> +if ( !(pg = iommu_alloc_pgtable(domain)) )
>  goto out;
> 
>  hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
>  }
> 
> -parent = (struct dma_pte *)map_vtd_domain_page(hd-
> >arch.vtd.pgd_maddr);
> -while ( level > 1 )
> +pte_maddr = hd->arch.vtd.pgd_maddr;
> +parent = map_vtd_domain_page(pte_maddr);
> +while ( level > target )
>  {
>  offset = address_level_offset(addr, level);
>  pte = &parent[offset];
> 
>  pte_maddr = dma_pte_addr(*pte);
> -if ( !pte_maddr )
> +if ( !dma_pte_present(*pte) || (level > 1 &&
> dma_pte_superpage(*pte)) )
>  {
>  struct page_info *pg;
> +/*
> + * Higher level tables always set r/w, last level page table
> + * controls read/write.
> + */
> +struct dma_pte new_pte = { DMA_PTE_PROT };
> 
>  if ( !alloc )
> -break;
> +{
> +pte_maddr = 0;
> +if ( !dma_pte_present(*pte) )
> +break;
> +
> +/*
> + * When the leaf entry was requested, pass back the full PTE,
> + * with the address adjusted to account for the residual of
> + * the walk.
> + */
> +pte_maddr = pte->val +
> +(addr & ((1UL << level_to_offset_bits(level)) - 1) &
> + PAGE_MASK);
> +if ( !target )
> +break;
> +}
> 
> +pte_maddr = level - 1;
>  pg = iommu_alloc_pgtable(domain);
>  if ( !pg )
>  break;
> 
>  pte_maddr = page_to_maddr(pg);
> -dma_set_pte_addr(*pte, pte_maddr);
> +dma_set_pte_addr(new_pte, pte_maddr);
> 
> -/*
> - * high level table always sets r/w, last level
> - * page table control read/write
> - */
> -dma_set_pte_readable(*pte);
> -dma_set_pte_writable(*pte);
> +if ( dma_pte_present(*pte) )
> +{
> +struct dma_pte *split = map_vtd_domain_page(pte_maddr);
> +unsigned long inc = 1UL << 
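
The return-value convention documented in the comment block of the new addr_to_dma_page_maddr() (0 for non-present without allocation, a small positive level value below PAGE_SIZE on allocation failure, otherwise a page-table address) lets callers detect failure with a single `< PAGE_SIZE` comparison. A hypothetical encoding of those three outcomes:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Toy model of the walk's result encoding (not the real walker). */
static uint64_t walk_result(bool present, bool alloc_failed,
                            unsigned int level, uint64_t maddr)
{
    if ( alloc_failed )
        return level;   /* small positive value, always < PAGE_SIZE */
    if ( !present )
        return 0;       /* non-present entry, no allocation requested */
    return maddr;       /* page-aligned table address, >= PAGE_SIZE */
}
```

Because page-table addresses are page aligned and never below PAGE_SIZE, the overloading is unambiguous, which is why callers in the series test `if ( pg_maddr < PAGE_SIZE )` rather than comparing against 0.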

RE: [PATCH v2 1/3] VMX: sync VM-exit perf counters with known VM-exit reasons

2022-01-29 Thread Tian, Kevin
> From: Beulich
> Sent: Wednesday, January 5, 2022 9:58 PM
> 
> This has gone out of sync over time. Introduce a simplistic mechanism to
> hopefully keep things in sync going forward.
> 
> Also limit the array index to just the "basic exit reason" part, which is
> what the pseudo-enumeration covers.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Kevin Tian 

> ---
> v2: Use sentinel comment only.
> 
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3869,7 +3869,7 @@ void vmx_vmexit_handler(struct cpu_user_
>  else
>  HVMTRACE_ND(VMEXIT, 0, 1/*cycles*/, exit_reason, regs->eip);
> 
> -perfc_incra(vmexits, exit_reason);
> +perfc_incra(vmexits, (uint16_t)exit_reason);
> 
>  /* Handle the interrupt we missed before allowing any more in. */
>  switch ( (uint16_t)exit_reason )
> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> @@ -219,6 +219,7 @@ static inline void pi_clear_sn(struct pi
>  #define EXIT_REASON_PML_FULL62
>  #define EXIT_REASON_XSAVES  63
>  #define EXIT_REASON_XRSTORS 64
> +/* Remember to also update VMX_PERF_EXIT_REASON_SIZE! */
> 
>  /*
>   * Interruption-information format
> --- a/xen/arch/x86/include/asm/perfc_defn.h
> +++ b/xen/arch/x86/include/asm/perfc_defn.h
> @@ -6,7 +6,7 @@ PERFCOUNTER_ARRAY(exceptions,
> 
>  #ifdef CONFIG_HVM
> 
> -#define VMX_PERF_EXIT_REASON_SIZE 56
> +#define VMX_PERF_EXIT_REASON_SIZE 65
>  #define VMX_PERF_VECTOR_SIZE 0x20
>  PERFCOUNTER_ARRAY(vmexits,  "vmexits",
> VMX_PERF_EXIT_REASON_SIZE)
>  PERFCOUNTER_ARRAY(cause_vector, "cause vector",
> VMX_PERF_VECTOR_SIZE)
> 



[qemu-mainline test] 167947: tolerable FAIL - PUSHED

2022-01-29 Thread osstest service owner
flight 167947 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167947/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 167939
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167939
 test-armhf-armhf-libvirt 16 saverestore-support-check fail like 167939
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 167939
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 167939
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 167939
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 167939
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail like 167939
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt 15 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim 14 guest-start fail never pass
 test-arm64-arm64-xl 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
 test-arm64-arm64-xl-vhd 14 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
 test-armhf-armhf-xl 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass

version targeted for testing:
 qemuu 95a6af2a006e7160c958215c20e513ed29a0a76c
baseline version:
 qemuu 7a1043cef91739ff4b59812d30f1ed2850d3d34e

Last test of basis   167939  2022-01-28 23:39:37 Z    1 days
Testing same since   167947  2022-01-29 16:06:59 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Baumann 
  Cédric Le Goater 
  Edgar E. Igles

[ovmf test] 167950: all pass - PUSHED

2022-01-29 Thread osstest service owner
flight 167950 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167950/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf ba79becd553c4d9118fafcaedef4d36f1cb9c851
baseline version:
 ovmf ae35314e7b86417c166eb873eb26df012ae3787a

Last test of basis   167946  2022-01-29 15:43:04 Z    0 days
Testing same since   167950  2022-01-29 20:11:41 Z    0 days    1 attempts


People who touched revisions under test:
  Abner Chang 
  Ard Biesheuvel 
  Gerd Hoffmann 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ae35314e7b..ba79becd55  ba79becd553c4d9118fafcaedef4d36f1cb9c851 -> xen-tested-master



[linux-5.4 test] 167945: tolerable FAIL - PUSHED

2022-01-29 Thread osstest service owner
flight 167945 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167945/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 167916
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 167916
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 167916
 test-armhf-armhf-libvirt 16 saverestore-support-check fail like 167916
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167916
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 167916
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 167916
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 167916
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 167916
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail like 167916
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 167916
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 167916
 test-amd64-i386-xl-pvshim 14 guest-start fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl 15 migrate-support-check fail never pass
 test-armhf-armhf-xl 16 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass

version targeted for testing:
 linux  7cdf2951f80d189e9a0a5b6836664ccc8bfb2e7e
baseline version:
 linux  411d8da1c84369f4d4e

[xen-unstable test] 167944: tolerable FAIL

2022-01-29 Thread osstest service owner
flight 167944 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167944/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 167938
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 167938
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167938
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 167938
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 167938
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 167938
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 167938
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 167938
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 167938
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 167938
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 167938
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 167938
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-checkfail never pass

version targeted for testing:
 xen  21170a738c11b24815b4afab2151bd3aa2a29acc
baseline version:
 xen  21170a738c11b248

[ovmf test] 167946: all pass - PUSHED

2022-01-29 Thread osstest service owner
flight 167946 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167946/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf ae35314e7b86417c166eb873eb26df012ae3787a
baseline version:
 ovmf 8542fc5f956821841154d4c11851c5484847ac0d

Last test of basis   167940  2022-01-29 01:41:58 Z    0 days
Testing same since   167946  2022-01-29 15:43:04 Z    0 days    1 attempts


People who touched revisions under test:
  Ard Biesheuvel 
  Sami Mujawar 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8542fc5f95..ae35314e7b  ae35314e7b86417c166eb873eb26df012ae3787a -> xen-tested-master



Re: [PATCH v3 5/5] tools: add example application to initialize dom0less PV drivers

2022-01-29 Thread Julien Grall

Hi,

On 28/01/2022 21:33, Stefano Stabellini wrote:

From: Luca Miccio 

Add an example application that can be run in dom0 to complete the
dom0less domains initialization so that they can get access to xenstore
and use PV drivers.

Signed-off-by: Luca Miccio 
Signed-off-by: Stefano Stabellini 
CC: Wei Liu 
CC: Anthony PERARD 
CC: Juergen Gross 
---
Changes in v3:
- handle xenstore errors
- add an in-code comment about xenstore entries
- less verbose output
- clean-up error path in main

Changes in v2:
- do not set HVM_PARAM_STORE_EVTCHN twice
- rename restore_xenstore to create_xenstore
- increase maxmem
---
  tools/helpers/Makefile|  13 ++
  tools/helpers/init-dom0less.c | 269 ++


Should we document how this is meant to be used?


  2 files changed, 282 insertions(+)
  create mode 100644 tools/helpers/init-dom0less.c

diff --git a/tools/helpers/Makefile b/tools/helpers/Makefile
index 7f6c422440..8e42997052 100644
--- a/tools/helpers/Makefile
+++ b/tools/helpers/Makefile
@@ -10,6 +10,9 @@ ifeq ($(CONFIG_Linux),y)
  ifeq ($(CONFIG_X86),y)
  PROGS += init-xenstore-domain
  endif
+ifeq ($(CONFIG_ARM),y)
+PROGS += init-dom0less
+endif
  endif
  
  XEN_INIT_DOM0_OBJS = xen-init-dom0.o init-dom-json.o

@@ -26,6 +29,13 @@ $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenstore)
  $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenlight)
  $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
  
+INIT_DOM0LESS_OBJS = init-dom0less.o init-dom-json.o

+$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
+$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenstore)
+$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenlight)
+$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
+$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenevtchn)
+
  .PHONY: all
  all: $(PROGS)
  
@@ -35,6 +45,9 @@ xen-init-dom0: $(XEN_INIT_DOM0_OBJS)

  init-xenstore-domain: $(INIT_XENSTORE_DOMAIN_OBJS)
$(CC) $(LDFLAGS) -o $@ $(INIT_XENSTORE_DOMAIN_OBJS) 
$(LDLIBS_libxentoollog) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) 
$(LDLIBS_libxenguest) $(LDLIBS_libxenlight) $(APPEND_LDFLAGS)
  
+init-dom0less: $(INIT_DOM0LESS_OBJS)

+   $(CC) $(LDFLAGS) -o $@ $(INIT_DOM0LESS_OBJS) $(LDLIBS_libxenctrl) 
$(LDLIBS_libxenevtchn) $(LDLIBS_libxentoollog) $(LDLIBS_libxenstore) 
$(LDLIBS_libxenlight) $(LDLIBS_libxenguest)  $(APPEND_LDFLAGS)
+
  .PHONY: install
  install: all
$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
diff --git a/tools/helpers/init-dom0less.c b/tools/helpers/init-dom0less.c
new file mode 100644
index 00..b6a3831cb5
--- /dev/null
+++ b/tools/helpers/init-dom0less.c
@@ -0,0 +1,269 @@
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "init-dom-json.h"
+
+#define NR_MAGIC_PAGES 4


Why are we allocating 4 pages when only 2 (maybe 1) are necessary?


+#define CONSOLE_PFN_OFFSET 0
+#define XENSTORE_PFN_OFFSET 1
+#define STR_MAX_LENGTH 64
+
+static int alloc_magic_pages(libxl_dominfo *info, struct xc_dom_image *dom)
+{
+int rc, i;
+const xen_pfn_t base = GUEST_MAGIC_BASE >> XC_PAGE_SHIFT;
+xen_pfn_t p2m[NR_MAGIC_PAGES];
+
+rc = xc_domain_setmaxmem(dom->xch, dom->guest_domid,
+ info->max_memkb + NR_MAGIC_PAGES * 4);


Please don't rely on the fact the page size will be 4KB in Xen. Instead, 
use XC_PAGE_*.



+if (rc < 0)
+return rc;
+
+for (i = 0; i < NR_MAGIC_PAGES; i++)
+p2m[i] = base + i;
+
+rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
+  NR_MAGIC_PAGES, 0, 0, p2m);
+if (rc < 0)
+return rc;
+
+dom->xenstore_pfn = base + XENSTORE_PFN_OFFSET;
+
+xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);


So you allocate 4 pages, use 2, but only clear 1. Can you explain why?

Also, shouldn't you check the error return here and  ...


+
+xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
+ dom->xenstore_pfn);


here...?

Also, in theory, as soon as xc_hvm_param_set() is called, the guest may be 
able to start using Xenstore. So wouldn't it be better to set it once 
you know everything is in place (i.e. just before calling 
xs_introduce_domain())?



+return 0;
+}
+
+static bool do_xs_write_dom(struct xs_handle *xsh, xs_transaction_t t,
+domid_t domid, char *path, char *val)
+{
+char full_path[STR_MAX_LENGTH];
+
+snprintf(full_path, STR_MAX_LENGTH,
+ "/local/domain/%d/%s", domid, path);
+return xs_write(xsh, t, full_path, val, strlen(val));


From my understanding, xs_write() will create a node that will only be 
readable/writable by the domain executing this binary (i.e. dom0). IOW, 
the guest will not see the nodes.


So shouldn't you also set the permissions?


+}
+
+static bool do_xs_write_libxl(struct xs_handle *xsh, xs_transaction_t

[linux-linus test] 167941: tolerable FAIL - PUSHED

2022-01-29 Thread osstest service owner
flight 167941 linux-linus real [real]
flight 167948 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/167941/
http://logs.test-lab.xenproject.org/osstest/logs/167948/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 167948-retest
 test-armhf-armhf-libvirt 10 host-ping-check-xen fail pass in 167948-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 167948 like 167937
 test-armhf-armhf-libvirt15 migrate-support-check fail in 167948 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 167937
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167937
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 167937
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 167937
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 167937
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 167937
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 167937
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-checkfail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-checkfail never pass

version targeted for testing:
 linux  169387e2aa291a4e3cb856053730fe99d6cec06f
baseline version:
 linux  145d9b498fc827b79c1260b4caa29a8e59d4c2b9

Last test of basis   167937  2022-01-28 15:11:08 Z    1 days
Testing same since   167941  2022-01-29 02:33:22 Z    0 days    1 attempts


People who touched revisions under test:
  "Eric W. Biederman" 
  Aditya Garg 
  Amadeusz Sławiński 

Re: [PATCH v3 3/5] xen/arm: configure dom0less domain for enabling xenstore after boot

2022-01-29 Thread Julien Grall

Hi Stefano,

On 28/01/2022 21:33, Stefano Stabellini wrote:

From: Luca Miccio 

If "xen,enhanced" is enabled, then add to dom0less domains:

- the hypervisor node in device tree
- the xenstore event channel

The xenstore event channel is also used for the first notification to
let the guest know that xenstore has become available.

Signed-off-by: Luca Miccio 
Signed-off-by: Stefano Stabellini 
Reviewed-by: Bertrand Marquis 
CC: Julien Grall 
CC: Volodymyr Babchuk 
CC: Bertrand Marquis 

---
Changes in v3:
- use evtchn_alloc_unbound

Changes in v2:
- set HVM_PARAM_STORE_PFN to ~0ULL at domain creation
- in alloc_xenstore_evtchn do not call _evtchn_alloc_unbound
---
  xen/arch/arm/domain_build.c | 41 +
  1 file changed, 41 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9144d6c0b6..8e030a7f05 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -27,6 +27,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -2619,6 +2620,8 @@ static int __init prepare_dtb_domU(struct domain *d, 
struct kernel_info *kinfo)
  int ret;
  
  kinfo->phandle_gic = GUEST_PHANDLE_GIC;

+kinfo->gnttab_start = GUEST_GNTTAB_BASE;
+kinfo->gnttab_size = GUEST_GNTTAB_SIZE;
  
  addrcells = GUEST_ROOT_ADDRESS_CELLS;

  sizecells = GUEST_ROOT_SIZE_CELLS;
@@ -2693,6 +2696,13 @@ static int __init prepare_dtb_domU(struct domain *d, 
struct kernel_info *kinfo)
  goto err;
  }
  
+if ( kinfo->dom0less_enhanced )

+{
+ret = make_hypervisor_node(d, kinfo, addrcells, sizecells);


Looking at the code, I think the extended regions will not work properly 
because we are looking at the host memory layout. In the case of a domU, 
we want to use the guest layout. Please have a look at how it was done in 
libxl.



+if ( ret )
+goto err;
+}
+
  ret = fdt_end_node(kinfo->fdt);
  if ( ret < 0 )
  goto err;
@@ -2959,6 +2969,25 @@ static int __init construct_domain(struct domain *d, 
struct kernel_info *kinfo)
  return 0;
  }
  
+static int __init alloc_xenstore_evtchn(struct domain *d)

+{
+evtchn_alloc_unbound_t alloc;
+int rc;
+
+alloc.dom = d->domain_id;
+alloc.remote_dom = hardware_domain->domain_id;


The first thing evtchn_alloc_unbound() will do is look up the domain. 
This seems a bit pointless given that we already have the domain in hand. 
Shouldn't we extend evtchn_alloc_unbound() to pass the domain?



+rc = evtchn_alloc_unbound(&alloc, true);
+if ( rc )
+{
+printk("Failed allocating event channel for domain\n");
+return rc;
+}
+
+d->arch.hvm.params[HVM_PARAM_STORE_EVTCHN] = alloc.port;
+
+return 0;
+}
+
  static int __init construct_domU(struct domain *d,
   const struct dt_device_node *node)
  {
@@ -3014,7 +3043,19 @@ static int __init construct_domU(struct domain *d,
  return rc;
  
  if ( kinfo.vpl011 )

+{
  rc = domain_vpl011_init(d, NULL);
+if ( rc < 0 )
+return rc;
+}
+
+if ( kinfo.dom0less_enhanced )
+{
+rc = alloc_xenstore_evtchn(d);
+if ( rc < 0 )
+return rc;
+d->arch.hvm.params[HVM_PARAM_STORE_PFN] = ~0ULL;


I think it would be easy to allocate the page right now. So what prevents 
us from doing it right now?


Cheers,

--
Julien Grall



Re: [PATCH v3 1/5] xen: introduce xen,enhanced dom0less property

2022-01-29 Thread Julien Grall

Hi Stefano,

On 28/01/2022 21:33, Stefano Stabellini wrote:

From: Stefano Stabellini 

Introduce a new "xen,enhanced" dom0less property to enable/disable PV
driver interfaces for dom0less guests. Currently only "enabled" and
"disabled" are supported property values (and empty). Leave the option
open to implement further possible values in the future (e.g.
"xenstore" to enable only xenstore.)

The configurable option is for domUs only. For dom0 we always set the
corresponding property in the Xen code to true (PV interfaces enabled.)

This patch only parses the property. Next patches will make use of it.

Signed-off-by: Stefano Stabellini 
Reviewed-by: Bertrand Marquis 
CC: Julien Grall 
CC: Volodymyr Babchuk 
CC: Bertrand Marquis 
---
Changes in v3:
- improve commit message

Changes in v2:
- rename kinfo.enhanced to kinfo.dom0less_enhanced
- set kinfo.dom0less_enhanced to true for dom0
- handle -ENODATA in addition to -EILSEQ
---
  docs/misc/arm/device-tree/booting.txt | 18 ++
  xen/arch/arm/domain_build.c   |  8 
  xen/arch/arm/include/asm/kernel.h |  3 +++
  3 files changed, 29 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt 
b/docs/misc/arm/device-tree/booting.txt
index 71895663a4..38c29fb3d8 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -169,6 +169,24 @@ with the following properties:
  Please note that the SPI used for the virtual pl011 could clash with the
  physical SPI of a physical device assigned to the guest.
  
+- xen,enhanced


NIT: I find it a bit strange that this is added in the middle of the property 
list. Can you either sort the properties alphabetically or move this one to the end?



+
+A string property. Possible property values are:
+
+- "enabled" (or missing property value)
+Xen PV interfaces, including grant-table and xenstore, will be
+enabled for the VM.
+
+- "disabled"
+Xen PV interfaces are disabled.
+
+If the xen,enhanced property is present with no value, it defaults
+to "enabled". If the xen,enhanced property is not present, PV
+interfaces are disabled.
+
+In the future other possible property values might be added to
+enable only selected interfaces.
+
  - nr_spis
  
  Optional. A 32-bit integer specifying the number of SPIs (Shared

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 6931c022a2..9144d6c0b6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2963,6 +2963,7 @@ static int __init construct_domU(struct domain *d,
   const struct dt_device_node *node)
  {
  struct kernel_info kinfo = {};
+const char *dom0less_enhanced;
  int rc;
  u64 mem;
  
@@ -2978,6 +2979,12 @@ static int __init construct_domU(struct domain *d,
  
  kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
  
+rc = dt_property_read_string(node, "xen,enhanced", &dom0less_enhanced);

+if ( rc == -EILSEQ ||


I think the use of -EILSEQ wants an explanation. In a previous version, 
you wrote that the value would be returned when:


fdt set /chosen/domU0 xen,enhanced

But it is not clear why. Can you print pp->value, pp->length, 
strnlen(..) when this happens?




+ rc == -ENODATA ||
+ (rc == 0 && !strcmp(dom0less_enhanced, "enabled")) )
+kinfo.dom0less_enhanced = true;
+
  if ( vcpu_create(d, 0) == NULL )
  return -ENOMEM;
  
@@ -3095,6 +3102,7 @@ static int __init construct_dom0(struct domain *d)
  
  kinfo.unassigned_mem = dom0_mem;

  kinfo.d = d;
+kinfo.dom0less_enhanced = true;


This is a bit odd. The name suggests that this is a dom0less-specific 
option. But then you are setting it for dom0.


Given that this variable is about enabling PV drivers, I think this should 
be false for dom0.


  
  rc = kernel_probe(&kinfo, NULL);

  if ( rc < 0 )
diff --git a/xen/arch/arm/include/asm/kernel.h 
b/xen/arch/arm/include/asm/kernel.h
index 874aa108a7..c4dc039b54 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -36,6 +36,9 @@ struct kernel_info {
  /* Enable pl011 emulation */
  bool vpl011;
  
+/* Enable PV drivers */

+bool dom0less_enhanced;
+
  /* GIC phandle */
  uint32_t phandle_gic;
  


Cheers,

--
Julien Grall
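
As context for the review above, the device-tree usage the booting.txt hunk describes would look roughly like the fragment below. This is an illustrative sketch only: the node name `domU0` and the sibling properties are examples, not taken from the patch.

```
chosen {
    domU0 {
        compatible = "xen,domain";
        vpl011;
        xen,enhanced = "enabled";
        /* omitting "xen,enhanced" disables PV interfaces;
           an empty value defaults to "enabled" */
    };
};
```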



Re: [XEN v1] xen/arm: io: Check ESR_EL2.ISV != 0 before searching for a MMIO handler

2022-01-29 Thread Julien Grall

Hi Stefano,

On 28/01/2022 20:23, Stefano Stabellini wrote:

On Fri, 28 Jan 2022, Julien Grall wrote:

On 28/01/2022 01:20, Stefano Stabellini wrote:

On Thu, 27 Jan 2022, Julien Grall wrote:

On Thu, 27 Jan 2022 at 23:05, Julien Grall 
wrote:


On Thu, 27 Jan 2022 at 22:40, Stefano Stabellini
 wrote:

I am with you on both points.

One thing I noticed is that the code today is not able to deal with
IO_UNHANDLED for MMIO regions handled by IOREQ servers or Xen MMIO
emulator handlers. p2m_resolve_translation_fault and try_map_mmio are
called after try_handle_mmio returns IO_UNHANDLED but try_handle_mmio
is
not called a second time (or am I mistaken?)


Why would you need it? If try_mmio_fault() doesn't work the first time,
then


Sorry I meant try_handle_mmio().


it will not work the second time.


I think I explained myself badly, I'll try again below.



Another thing I noticed is that currently find_mmio_handler and
try_fwd_ioserv expect dabt to be already populated and valid so it
would
be better if we could get there only when dabt.valid.

With these two things in mind, I think maybe the best thing to do is
to
change the code in do_trap_stage2_abort_guest slightly so that
p2m_resolve_translation_fault and try_map_mmio are called first when
!dabt.valid.


An abort will most likely happen because of emulated I/O. If we call
p2m_resolve_translation_fault() and try_map_mmio() first, then it means
the processing will take longer than necessary for the common case.

So I think we want to keep the order as it is. I.e first trying the MMIO
and then falling back to the less likely reason for a trap.


Yeah I thought about it as well. The idea would be that if dabt.valid is
set then we leave things as they are (we call try_handle_mmio first) but
if dabt.valid is not set (it is not valid) then we skip the
try_handle_mmio() call because it wouldn't succeed anyway and go
directly to p2m_resolve_translation_fault() and try_map_mmio().

If either of them work (also reading what you wrote about it) then we
return immediately.


Ok. So the assumption is that a data abort with an invalid syndrome would most
likely be because of a fault handled by p2m_resolve_translation_fault().

I think this makes sense. However, I am not convinced we can currently safely
call try_map_mmio() before try_handle_mmio(). This is because the logic in
try_map_mmio() is quite fragile and we may mistakenly map an emulated region.

Similarly, we can't call try_map_mmio() before p2m_resolve_translation_fault()
because a transient fault may be
misinterpreted.

I think we may be able to harden try_map_mmio() by checking if the I/O region
is emulated. But this will need to be fully thought through first.


That's a good point. I wonder if it could be as simple as making sure
that iomem_access_permitted returns false for all emulated regions?


I have replied to that in the other thread. The short answer is no and...


Looking at the code, it looks like it is already the case today. Is that
right?


not 100%. The thing is iomem_access_permitted() is telling you which 
*host* physical address is accessible. Not which *guest* physical 
address is emulated.


We could possibly take some short cuts, at the risk of them biting back in the 
future if we end up emulating a non-existing region in the host physical 
address space.


Cheers,

--
Julien Grall



Re: [XEN v1] xen/arm: io: Check ESR_EL2.ISV != 0 before searching for a MMIO handler

2022-01-29 Thread Julien Grall

Hi,

Replying to Ayan's e-mail at the same time.

On 28/01/2022 20:30, Stefano Stabellini wrote:

On Fri, 28 Jan 2022, Ayan Kumar Halder wrote:

Hi Julien/Stefano,

Good discussion to learn about Xen (from a newbie's perspective). :)

I am trying to clarify my understanding. Some queries as below :-

On 28/01/2022 09:46, Julien Grall wrote:



On 28/01/2022 01:20, Stefano Stabellini wrote:

On Thu, 27 Jan 2022, Julien Grall wrote:

On Thu, 27 Jan 2022 at 23:05, Julien Grall 
wrote:


On Thu, 27 Jan 2022 at 22:40, Stefano Stabellini
 wrote:

I am with you on both points.

One thing I noticed is that the code today is not able to deal with
IO_UNHANDLED for MMIO regions handled by IOREQ servers or Xen MMIO
emulator handlers. p2m_resolve_translation_fault and try_map_mmio
are
called after try_handle_mmio returns IO_UNHANDLED but
try_handle_mmio is
not called a second time (or am I mistaken?)


Why would you need it? If try_mmio_fault() doesn't work the first
time, then


Sorry I meant try_handle_mmio().


it will not work the second time.


I think I explained myself badly, I'll try again below.



Another thing I noticed is that currently find_mmio_handler and
try_fwd_ioserv expect dabt to be already populated and valid so it
would
be better if we could get there only when dabt.valid.

With these two things in mind, I think maybe the best thing to do is
to
change the code in do_trap_stage2_abort_guest slightly so that
p2m_resolve_translation_fault and try_map_mmio are called first when
!dabt.valid.


An abort will most likely happen because of emulated I/O. If we call
p2m_resolve_translation_fault() and try_map_mmio() first, then it
means
the processing will take longer than necessary for the common case.

So I think we want to keep the order as it is. I.e first trying the
MMIO
and then falling back to the less likely reason for a trap.


Yeah I thought about it as well. The idea would be that if dabt.valid is
set then we leave things as they are (we call try_handle_mmio first) but
if dabt.valid is not set (it is not valid) then we skip the
try_handle_mmio() call because it wouldn't succeed anyway and go
directly to p2m_resolve_translation_fault() and try_map_mmio().

If either of them work (also reading what you wrote about it) then we
return immediately.


Ok. So the assumption is that a data abort with an invalid syndrome would most
likely be because of a fault handled by p2m_resolve_translation_fault().

I think this makes sense. However, I am not convinced we can currently
safely call try_map_mmio() before try_handle_mmio(). This is because the
logic in try_map_mmio() is quite fragile and we may mistakenly map an
emulated region.


By emulated region, you mean vgic.dbase (Refer
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/arch/arm/vgic-v2.c;h=589c033eda8f5e11af33c868eae2c159f985eac9;hb=0bdc43c8dec993258e930b34855853c22b917519#l702,
which has not been mapped to the guest) and thus requires an MMIO handler.

Is my understanding correct?
  
I'll try to answer for Julien but yes.




If so, can Xen maintain a table of such emulated regions? I am guessing that
all emulated regions will have an mmio_handler. Then, before invoking
try_map_mmio(), it can check the table.


Today we keep those as a list, see find_mmio_handler (for regions
emulated in Xen) and also ioreq_server_select (for regions emulated by
QEMU or other external emulators.)

But I think there might be a simpler way: if you look at try_map_mmio,
you'll notice that there is iomem_access_permitted check. I don't think
that check can succeed for an emulated region. 


It can. iomem_access_permitted() is telling which host physical frame is 
accessible by the domain. This is different to which guest physical 
address is emulated.


It happens that most (all?) of them are the same today for the hardware 
domain. But that's not something we should rely on.


So I think we want to check that the region will be used for emulated I/O.

You could use find_mmio() but I think ioreq_server_select() is not 
directly suitable for us because we want to check that the full page is 
not emulated (you could technically emulate only part of it).



Similarly, we can't call try_map_mmio() before
p2m_resolve_translation_fault() because a transient fault may be
misinterpreted.

I think we may be able to harden try_map_mmio() by checking if the I/O
region is emulated. But this will need to be fully thought through first.



If not, then we call decode_instruction from do_trap_stage2_abort_guest
and try again. The second time dabt.valid is set so we end up calling
try_handle_mmio() as usual.


With the approach below, you will also end up calling
p2m_resolve_translation_fault() and try_map_mmio() a second time if
try_handle_mmio() fails.



Just for clarity let me copy/paste the relevant code, apologies if it
was already obvious to you -- I got the impression my suggestion wasn't
very clear.



+again:
+    if ( is_data && hsr.dabt.valid )
  {
    
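
The quoted snippet is cut off here; to make the proposed ordering concrete, the control flow under discussion might look like the pseudocode below. This is a sketch assembled from the thread, not a tested patch: the function names are the existing Xen ones being discussed, but argument lists and error handling are elided.

```
again:
    if ( is_data && hsr.dabt.valid )
    {
        /* Common case first: emulated I/O with a valid syndrome. */
        if ( try_handle_mmio(regs, hsr, gpa) == IO_HANDLED )
            return; /* advance PC and resume the guest */
    }

    /* Transient translation faults and not-yet-mapped MMIO regions. */
    if ( p2m_resolve_translation_fault(d, gfn) || try_map_mmio(gfn) )
        return; /* let the guest retry the access */

    if ( is_data && !hsr.dabt.valid )
    {
        /* Invalid syndrome: decode the instruction, then retry the
           emulation path with a now-populated dabt. */
        if ( decode_instruction(regs, &hsr) == 0 )
            goto again;
    }
```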

[libvirt test] 167942: regressions - FAIL

2022-01-29 Thread osstest service owner
flight 167942 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167942/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   6 libvirt-buildfail REGR. vs. 151777
 build-arm64-libvirt   6 libvirt-buildfail REGR. vs. 151777
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 151777
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  18813edbf28b42ad9d068e0584c4408019c09bff
baseline version:
 libvirt  2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  568 days
Failing since        151818  2020-07-11 04:18:52 Z  567 days  549 attempts
Testing same since   167942  2022-01-29 04:18:56 Z0 days1 attempts


People who touched revisions under test:
Adolfo Jayme Barrientos 
  Aleksandr Alekseev 
  Aleksei Zakharov 
  Andika Triwidada 
  Andrea Bolognani 
  Ani Sinha 
  Balázs Meskó 
  Barrett Schonefeld 
  Bastian Germann 
  Bastien Orivel 
  BiaoXiang Ye 
  Bihong Yu 
  Binfeng Wu 
  Bjoern Walk 
  Boris Fiuczynski 
  Brad Laue 
  Brian Turek 
  Bruno Haible 
  Chris Mayo 
  Christian Borntraeger 
  Christian Ehrhardt 
  Christian Kirbach 
  Christian Schoenebeck 
  Christophe Fergeau 
  Cole Robinson 
  Collin Walling 
  Cornelia Huck 
  Cédric Bosdonnat 
  Côme Borsoi 
  Daniel Henrique Barboza 
  Daniel Letai 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Didik Supriadi 
  dinglimin 
  Divya Garg 
  Dmitrii Shcherbakov 
  Dmytro Linkin 
  Eiichi Tsukata 
  Eric Farman 
  Erik Skultety 
  Fabian Affolter 
  Fabian Freyer 
  Fabiano Fidêncio 
  Fangge Jin 
  Farhan Ali 
  Fedora Weblate Translation 
  Franck Ridel 
  Gavi Teitz 
  gongwei 
  Guoyi Tu
  Göran Uddeborg 
  Halil Pasic 
  Han Han 
  Hao Wang 
  Hela Basa 
  Helmut Grohne 
  Hiroki Narukawa 
  Hyman Huang(黄勇) 
  Ian Wienand 
  Ioanna Alifieraki 
  Ivan Teterevkov 
  Jakob Meng 
  Jamie Strandboge 
  Jamie Strandboge 
  Jan Kuparinen 
  jason lee 
  Jean-Baptiste Holcroft 
  Jia Zhou 
  Jianan Gao 
  Jim Fehlig 
  Jin Yan 
  Jinsheng Zhang 
  Jiri Denemark 
  Joachim Falk 
  John Ferlan 
  Jonathan Watt 
  Jonathon Jongsma 
  Julio Faracco 
  Justin Gatzen 
  Ján Tomko 
  Kashyap Chamarthy 
  Kevin Locke 
  Koichi Murase 
  Kristina Hanicova 
  Laine Stump 
  Laszlo Ersek 
  Lee Yarwood 
  Lei Yang 
  Liao Pingfang 
  Lin Ma 
  Lin Ma 
  Lin Ma 
  Liu Yiding 
  Luke Yue 
  Luyao Zhong 
  Marc Hartmayer 
  Marc-André Lureau 
  Marek Marczykowski-Górecki 
  Markus Schade 
  Martin Kletzander 
  Masayoshi Mizuma 
  Matej Cepl 
  Matt Coleman 
  Matt Coleman 
  Mauro Matteo Cascella 
  Meina Li 
  Michal Privoznik 
  Michał Smyk 
  Milo Casagrande 
  Moshe Levi 
  Muha Aliss 
  Nathan 
  Neal Gompa 
  Nick Chevsky 
  Nick Shyrokovskiy 
  Nickys Music Group 
  Nico Pache 
  Nicolas Lécureuil 
  Nicolas Lécureuil 
  Nikolay Shirokovskiy 
  Olaf Hering 
  Olesya Gerasimenko 
  Or Ozeri 
  Orion Poplawski 
  Pany 
  Patrick Magauran 
  Paulo de Rezende Pinatti 
  Pavel Hrdina 
  Peng Liang 
  Peter Krempa 
  Pino Toscano 
  Pino Toscano 
  Piotr Drąg 
  Prathamesh Chavan 
  Praveen K Paladugu 
  Richard W.M. Jones 
  Ricky Tigg 
  Robin Lee 
  Rohit Kumar 
  Roman Bogorodskiy 
  Roman Bolshakov 
  Ryan Gahagan 
  Ryan Schmidt 
  Sam Hartman 
  Scott Shambarger 
  Sebastian Mitterle 
  SeongHyun Jo 
  Shalini Chellathurai Saroja 
  Shaojun Yang 
  shenjiatong 
  Shi Lei 
  simmon 
  Simon Ch

[qemu-mainline test] 167939: tolerable FAIL - PUSHED

2022-01-29 Thread osstest service owner
flight 167939 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/167939/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 167936
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167936
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 167936
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 167936
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 167936
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 167936
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 167936
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 167936
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-checkfail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass

version targeted for testing:
 qemuu                7a1043cef91739ff4b59812d30f1ed2850d3d34e
baseline version:
 qemuu                b367db48126d4ee14579af6cf5cdbffeb9496627

Last test of basis   167936  2022-01-28 14:38:09 Z0 days
Testing same since   167939  2022-01-28 23:39:37 Z0 days1 attempts


People who touched revisions under test:
  Bernhard Beschow 
  Marc-André Lureau 
  Matheus Fer

[xen-unstable test] 167938: tolerable FAIL - PUSHED

2022-01-29 Thread osstest service owner
flight 167938 xen-unstable real [real]
flight 167943 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/167938/
http://logs.test-lab.xenproject.org/osstest/logs/167943/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 167943-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 167931
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 167931
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167931
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 167931
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 167931
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 167931
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 167931
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 167931
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 167931
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 167931
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 167931
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 167931
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-check