[Xen-devel] [linux-3.14 test] 100510: tolerable FAIL - PUSHED

2016-08-16 Thread osstest service owner
flight 100510 linux-3.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100510/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 100389
 build-i386-rumpuserxen        6 xen-build  fail like 100400
 build-amd64-rumpuserxen       6 xen-build  fail like 100400
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 100400
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 100400
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 100400

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check    fail never pass
 test-amd64-amd64-libvirt     12 migrate-support-check    fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check    fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-i386-libvirt      12 migrate-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check    fail never pass

version targeted for testing:
 linux                c0e754d6bff2367d1de98e637285c8efbd1680ea
baseline version:
 linux                b8b6a72089869dee41bd9f29e86bbcf6501e5524

Last test of basis   100400  2016-08-10 22:50:01 Z    6 days
Testing same since   100510  2016-08-16 07:47:50 Z    0 days    1 attempts


People who touched revisions under test:
  Alan Stern 
  Alexandru Cornea 
  Andrew Morton 
  Andy Lutomirski 
  Beniamino Galvani 
  Bjørn Mork 
  Charles (Chas) Williams 
  Chas Williams <3ch...@gmail.com>
  Chas Williams 
  Christoph Hellwig 
  Dave Weinstein 
  David Howells 
  David S. Miller 
  Doug Ledford 
  Eric Dumazet 
  Fabian Frederick 
  Greg Kroah-Hartman 
  Herbert Xu 
  Hugh Dickins 
  Ingo Molnar 
  Jack Wang 
  James Bottomley 
  James E.J. Bottomley 
  Jan Kara 
  Jan Kara 
  Jason Gunthorpe 
  Jens Axboe 
  John Johansen 
  Karl Heiss 
  Linus Torvalds 
  Luis Henriques 
  Martin K. Petersen 
  Miklos Szeredi 
  Neal Cardwell 
  Phil Turnbull 
  Ralf Baechle 
  Seth Arnold 
  Soheil Hassas Yeganeh 
  Tejun Heo 
  Theodore Ts'o 
  Vegard Nossum 
  Wei Fang 
  Yuchung Cheng 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 

Re: [Xen-devel] [PATCH v1 6/9] livepatch: Initial ARM64 support.

2016-08-16 Thread Konrad Rzeszutek Wilk
On Mon, Aug 15, 2016 at 04:27:12PM +0100, Andrew Cooper wrote:
> On 15/08/16 16:25, Julien Grall wrote:
> >
> >
> > On 15/08/2016 17:17, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Aug 15, 2016 at 04:57:26PM +0200, Julien Grall wrote:
> >>> Hi Jan and Konrad,
> >>>
> >>> On 15/08/2016 16:23, Jan Beulich wrote:
> >>> On 15.08.16 at 16:09,  wrote:
> > On Mon, Aug 15, 2016 at 02:21:48AM -0600, Jan Beulich wrote:
> > On 15.08.16 at 01:07,  wrote:
> >>> @@ -711,9 +711,15 @@ static int prepare_payload(struct payload *payload,
> >>>  return -EINVAL;
> >>>  }
> >>>  }
> >>> +#ifndef CONFIG_ARM
> >>>  apply_alternatives_nocheck(start, end);
> >>> +#else
> >>> +apply_alternatives(start, sec->sec->sh_size);
> >>> +#endif
> >>
> >> Conditionals like this are ugly - can't this be properly abstracted?
> >
> > Yes, I can introduce an apply_alternatives_nocheck on ARM that will
> > hava the same set of arguments on x86.
> >
> > Or I can make a new function name?
> 
>  Either way is fine with me, with a slight preference to the former
>  one.
> >>>
> >>> I am fine with the prototype of the function
> >>> apply_alternatives_nocheck but
> >>> I don't think the name is relevant for ARM.
> >>>
> >>> Is there any reason we don't want to call directly
> >>> apply_alternatives in
> >>> x86?
> >>
> >> It assumes (and has an ASSERT) that it is called with interrupts
> >> disabled.
> >> And we don't need to do that (as during livepatch loading we can
> >> modify the
> >> livepatch payload without worrying about interrupts).
> >
> > Oh, it makes more sense now.
> >
> >>
> >> P.S.
> >> loading != applying.
> >>
> >> I could do a patch where we rename 'apply_alternatives' ->
> >> 'apply_alternatives_boot'
> >> and 'apply_alternatives_nocheck' to 'apply_alternatives'.
> 
> The only reason apply_alternatives() is named thusly is to match Linux. 
> I am not fussed if it changes.

Would this be OK with folks?

There is a bit of discrepancy - ARM has 'const struct alt_instr *'
where the 'const' gets dropped later on. That can't be done on x86
as 'apply_alternatives' there modifies the structure.

Thoughts? Make it the same on ARM and x86?

From 2c26d4d214926cd23b73f98c1fdaecd98b010da6 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk 
Date: Tue, 16 Aug 2016 22:20:54 -0400
Subject: [PATCH] alternatives: x86 rename and change parameters on ARM

On x86 we rename 'apply_alternatives' -> 'apply_alternatives_boot'
and 'apply_alternatives_nocheck' to 'apply_alternatives'.

On ARM we change the parameters for 'apply_alternatives'
to be of 'const struct alt_instr *' instead of void pointer and
size length.

Signed-off-by: Konrad Rzeszutek Wilk 

---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Andrew Cooper 
Cc: Jan Beulich 

v3.1: First submission.
---
 xen/arch/arm/alternative.c| 4 ++--
 xen/arch/x86/alternative.c| 8 
 xen/common/livepatch.c| 2 +-
 xen/include/asm-arm/alternative.h | 2 +-
 xen/include/asm-x86/alternative.h | 5 ++---
 5 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index bf4101c..aba06db 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -200,11 +200,11 @@ void __init apply_alternatives_all(void)
 BUG_ON(ret);
 }
 
-int apply_alternatives(void *start, size_t length)
+int apply_alternatives(const struct alt_instr *start, const struct alt_instr *end)
 {
 const struct alt_region region = {
 .begin = start,
-.end = start + length,
+.end = end,
 };
 
 return __apply_alternatives();
diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index fd8528e..7addc2c 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -144,7 +144,7 @@ static void *init_or_livepatch text_poke(void *addr, const void *opcode, size_t
  * APs have less capabilities than the boot processor are not handled.
  * Tough. Make sure you disable such features by hand.
  */
-void init_or_livepatch apply_alternatives_nocheck(struct alt_instr *start, struct alt_instr *end)
+void init_or_livepatch apply_alternatives(struct alt_instr *start, struct alt_instr *end)
 {
 struct alt_instr *a;
 u8 *instr, *replacement;
@@ -187,7 +187,7 @@ void init_or_livepatch apply_alternatives_nocheck(struct alt_instr *start, struc
  * This routine is called with local interrupt disabled and used during
  * bootup.
  */
-void __init apply_alternatives(struct alt_instr *start, struct alt_instr *end)
+void __init apply_alternatives_boot(struct alt_instr *start, struct alt_instr *end)
 {
 unsigned long cr0 = read_cr0();
 
@@ -196,7 

Re: [Xen-devel] [RFC 00/22] xen/arm: Rework the P2M code to follow break-before-make sequence

2016-08-16 Thread Shanker Donthineni

Hi Julien,

I have verified this patch series on Qualcomm Server platform QDF2XXX 
without any issue.


Tested-by: Shanker Donthineni 

On 08/15/2016 10:06 AM, Edgar E. Iglesias wrote:

On Thu, Jul 28, 2016 at 03:51:23PM +0100, Julien Grall wrote:

Hello all,

The ARM architecture mandates the use of a break-before-make sequence when
changing translation entries if the page table is shared between multiple
CPUs whenever a valid entry is replaced by another valid entry (see D4.7.1
in ARM DDI 0487A.j for more details).

The current P2M code does not respect this sequence and may result in
broken coherency on some processors.

Adapting the current implementation to use break-before-make sequence would
imply some code duplication and more TLBs invalidations than necessary.
For instance, if we are replacing a 4KB page and the current mapping in
the P2M is using a 1GB superpage, the following steps will happen:
 1) Shatter the 1GB superpage into a series of 2MB superpages
 2) Shatter the 2MB superpage into a series of 4KB pages
 3) Replace the 4KB page

As the current implementation shatters while descending and installs
the mapping before continuing to the next level, Xen would need to issue 3
TLB invalidation instructions, which is clearly inefficient.

Furthermore, all the operations which modify the page table use the
same skeleton. It is more complicated to maintain different code paths than
to have a generic function that sets an entry and takes care of the
break-before-make sequence.

The new implementation is based on the x86 EPT one which, I think, fits
quite well for the break-before-make sequence whilst keeping the code
simple.

I am sending this patch series as an RFC because there are still some TODOs
in the code (mostly sanity checks and possible optimizations) and I have
done limited testing. However, I think it is in good shape to start reviewing,
get more feedback and have wider testing on different boards.

Also, I need to figure out the impact on ARM32 because the domheap is not
always mapped.

This series has dependencies on some rework sent separately ([1] and [2]).
I have provided a branch with all the dependencies and this series applied:

git://xenbits.xen.org/people/julieng/xen-unstable.git branch p2m-rfc


Hi Julien,

FWIW, I gave this a spin on the ZynqMP and it seems to be working fine.
I tried dom0 and starting a few additional guests. All looks good.

Tested-by: Edgar E. Iglesias 

Cheers,
Edgar



Comments are welcome.

Yours sincerely,

Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
Cc: Shanker Donthineni 
Cc: Dirk Behme 
Cc: Edgar E. Iglesias 

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02936.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02830.html

Julien Grall (22):
   xen/arm: do_trap_instr_abort_guest: Move the IPA computation out of
 the switch
   xen/arm: p2m: Store in p2m_domain whether we need to clean the entry
   xen/arm: p2m: Rename parameter in p2m_{remove,write}_pte...
   xen/arm: p2m: Use typesafe gfn in p2m_mem_access_radix_set
   xen/arm: traps: Move MMIO emulation code in a separate helper
   xen/arm: traps: Check the P2M before injecting a data/instruction
 abort
   xen/arm: p2m: Rework p2m_put_l3_page
   xen/arm: p2m: Invalidate the TLBs when write unlocking the p2m
   xen/arm: p2m: Change the type of level_shifts from paddr_t to unsigned
 int
   xen/arm: p2m: Move the lookup helpers at the top of the file
   xen/arm: p2m: Introduce p2m_get_root_pointer and use it in
 __p2m_lookup
   xen/arm: p2m: Introduce p2m_get_entry and use it to implement
 __p2m_lookup
   xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry
   xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry
   xen/arm: p2m: Re-implement relinquish_p2m_mapping using p2m_get_entry
   xen/arm: p2m: Make p2m_{valid,table,mapping} helpers inline
   xen/arm: p2m: Introduce a helper to check if an entry is a superpage
   xen/arm: p2m: Introduce p2m_set_entry and __p2m_set_entry
   xen/arm: p2m: Re-implement p2m_remove_using using p2m_set_entry
   xen/arm: p2m: Re-implement p2m_insert_mapping using p2m_set_entry
   xen/arm: p2m: Re-implement p2m_set_mem_access using
 p2m_{set,get}_entry
   xen/arm: p2m: Do not handle shattering in p2m_create_table

  xen/arch/arm/domain.c  |8 +-
  xen/arch/arm/p2m.c | 1274 ++--
  xen/arch/arm/traps.c   |  126 +++--
  xen/include/asm-arm/p2m.h  |   14 +
  xen/include/asm-arm/page.h |4 +
  5 files changed, 742 insertions(+), 684 deletions(-)

--
1.9.1



--
Shanker Donthineni
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm 
Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a 

Re: [Xen-devel] [PATCH v2 5/6] xen/arm: traps: Avoid unnecessary VA -> IPA translation in abort handlers

2016-08-16 Thread Shanker Donthineni

Hi Julien,

On 07/27/2016 12:09 PM, Julien Grall wrote:

Translating a VA to an IPA is expensive. Currently, Xen is assuming that
HPFAR_EL2 is only valid when the stage-2 data/instruction abort happened
during a translation table walk of a first stage translation (i.e S1PTW
is set).

However, based on the ARM ARM (D7.2.34 in DDI 0487A.j), the register is
also valid when the data/instruction abort occurred for a translation
fault.

With this change, the VA -> IPA translation will only happen for
permission faults that are not related to a translation table of a
first stage translation.

Signed-off-by: Julien Grall 

---
 Changes in v2:
 - Use fsc in the switch in do_trap_data_abort_guest
---
  xen/arch/arm/traps.c | 24 
  1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea105f2..83a30fa 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2382,13 +2382,28 @@ static inline paddr_t get_faulting_ipa(vaddr_t gva)
  return ipa;
  }

+static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
+{
+/*
+ * HPFAR is valid if one of the following cases is true:
+ *  1. the stage 2 fault happened during a stage 1 page table walk
+ *  (the bit ESR_EL2.S1PTW is set)
+ *  2. the fault was due to a translation fault
+ *
+ * Note that technically HPFAR is valid for other cases, but they
+ * are currently not supported by Xen.
+ */
+return s1ptw || (fsc == FSC_FLT_TRANS);


Yes, Xen does not support the stage 2 access flag, but we should handle
a stage 2 address size fault.

I think we should do something like the below to match the ARM ARM.

return s1ptw || (fsc != FSC_FLT_PERM);



+}
+
  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
const union hsr hsr)
  {
  int rc;
  register_t gva = READ_SYSREG(FAR_EL2);
+uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;

-switch ( hsr.iabt.ifsc & ~FSC_LL_MASK )
+switch ( fsc )
  {
  case FSC_FLT_PERM:
  {
@@ -2399,7 +2414,7 @@ static void do_trap_instr_abort_guest(struct
cpu_user_regs *regs,
  .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
npfec_kind_with_gla
  };

-if ( hsr.iabt.s1ptw )
+if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
  gpa = get_faulting_ipa(gva);
  else
  {
@@ -2434,6 +2449,7 @@ static void do_trap_data_abort_guest(struct
cpu_user_regs *regs,
  const struct hsr_dabt dabt = hsr.dabt;
  int rc;
  mmio_info_t info;
+uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;

  info.dabt = dabt;
  #ifdef CONFIG_ARM_32
@@ -2442,7 +2458,7 @@ static void do_trap_data_abort_guest(struct
cpu_user_regs *regs,
  info.gva = READ_SYSREG64(FAR_EL2);
  #endif

-if ( dabt.s1ptw )
-if ( hpfar_is_valid(dabt.s1ptw, fsc) )
  info.gpa = get_faulting_ipa(info.gva);
  else
  {
@@ -2451,7 +2467,7 @@ static void do_trap_data_abort_guest(struct
cpu_user_regs *regs,
  return; /* Try again */
  }

-switch ( dabt.dfsc & ~FSC_LL_MASK )
+switch ( fsc )
  {
  case FSC_FLT_PERM:
  {


--
Shanker Donthineni
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm 
Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux 
Foundation Collaborative Project.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 6/9] livepatch: Initial ARM64 support.

2016-08-16 Thread Konrad Rzeszutek Wilk
> > +int arch_livepatch_perform_rela(struct livepatch_elf *elf,
> > +const struct livepatch_elf_sec *base,
> > +const struct livepatch_elf_sec *rela)
> > +{
.. snip..
> > +switch ( ELF64_R_TYPE(r->r_info) ) {
> > +/* Data */
> > +case R_AARCH64_ABS64:
> > +if ( r->r_offset + sizeof(uint64_t) > base->sec->sh_size )
> > +goto bad_offset;
> 
> As you borrow the code from Linux, could we keep the abstraction with
> reloc_data and defer the overflow check? It would avoid to have the same if
> in multiple place in this code.

The above 'if' conditional is a check to make sure that we don't
go past the section (sh_size). In other words it is a boundary check to
make sure the Elf file is not messed up.

I can still copy the reloc_data approach so we lessen the number of:
> > +if ( (int64_t)val !=  *(int32_t *)dest )
> > +err = -EOVERFLOW;

And such.



[Xen-devel] [xen-4.5-testing test] 100507: tolerable FAIL - PUSHED

2016-08-16 Thread osstest service owner
flight 100507 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100507/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow29 debian-di-install fail in 100496 pass in 100507
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 15 guest-localmigrate/x10 fail in 100496 pass in 100507
 test-amd64-i386-xl-qemuu-winxpsp3 15 guest-localmigrate/x10 fail in 100496 pass in 100507
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 100496

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start  fail REGR. vs. 100338
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-localmigrate fail in 100496 like 100308
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail in 100496 like 100338
 test-amd64-amd64-xl-rtds  6 xen-boot   fail like 100338
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 100338
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 100338
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 100338

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt      12 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt     12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-vhd      10 guest-start               fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check     fail never pass
 test-armhf-armhf-xl          13 saverestore-support-check fail never pass
 test-armhf-armhf-xl          12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check   fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt     14 guest-saverestore         fail never pass
 test-armhf-armhf-libvirt     12 migrate-support-check     fail never pass
 test-armhf-armhf-libvirt-qcow2 10 guest-start             fail never pass
 test-armhf-armhf-libvirt-raw 10 guest-start               fail never pass

version targeted for testing:
 xen  2ad058efbbe42035784d8b32b53e7708b05cf94c
baseline version:
 xen  08313b45bfc75fa4cbadb9d25a0561e5f5b2fee6

Last test of basis   100338  2016-08-08 08:24:15 Z    8 days
Testing same since   100496  2016-08-15 12:13:37 Z    1 days    2 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anil Madhavapeddy 
  Anthony PERARD 
  Bob Liu 
  Boris Ostrovsky 
  Daniel De Graaf 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Jason Andryuk 
  Juergen Gross 
  Konrad Rzeszutek Wilk 
  Marek Marczykowski-Górecki 
  Matthew Daley 
  Olaf Hering 
  Roger Pau Monne 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  pass
 build-i386-rumpuserxen   pass
 test-amd64-amd64-xl  

[Xen-devel] [PATCH 8/9] x86/mtrr: drop unused func prototypes and struct

2016-08-16 Thread Doug Goldstein
These weren't used so drop them.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/mtrr.h | 15 ---
 1 file changed, 15 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index 9405cbc..1a3b1e5 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -22,18 +22,6 @@ void mtrr_generic_set(unsigned int reg, unsigned long base,
 unsigned long size, mtrr_type type);
 int mtrr_generic_have_wrcomb(void);
 
-/* library functions for processor-specific routines */
-struct set_mtrr_context {
-   unsigned long flags;
-   unsigned long cr4val;
-   uint64_t deftype;
-   u32 ccr3;
-};
-
-void set_mtrr_done(struct set_mtrr_context *ctxt);
-void set_mtrr_cache_disable(struct set_mtrr_context *ctxt);
-void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);
-
 void get_mtrr_state(void);
 
 extern u64 size_or_mask, size_and_mask;
@@ -41,6 +29,3 @@ extern u64 size_or_mask, size_and_mask;
 extern unsigned int num_var_ranges;
 
 void mtrr_state_warn(void);
-
-extern int amd_init_mtrr(void);
-extern int cyrix_init_mtrr(void);
-- 
2.7.3




[Xen-devel] [PATCH 5/9] x86/mtrr: drop unused is_cpu() macro

2016-08-16 Thread Doug Goldstein
is_cpu() always evaluated to Intel, so just drop it entirely.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/generic.c | 2 +-
 xen/arch/x86/cpu/mtrr/mtrr.h| 2 --
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 5c4b273..45d4def 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -520,7 +520,7 @@ int mtrr_generic_validate_add_page(unsigned long base, unsigned long size, unsig
 
/*  For Intel PPro stepping <= 7, must be 4 MiB aligned 
and not touch 0x7000->0x7003 */
-   if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
+   if (boot_cpu_data.x86 == 6 &&
boot_cpu_data.x86_model == 1 &&
boot_cpu_data.x86_mask <= 7) {
if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index 5e0d832..25f4867 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -63,8 +63,6 @@ extern void set_mtrr_ops(const struct mtrr_ops *);
 extern u64 size_or_mask, size_and_mask;
 extern const struct mtrr_ops *mtrr_if;
 
-#define is_cpu(vnd)(X86_VENDOR_INTEL == X86_VENDOR_##vnd)
-
 extern unsigned int num_var_ranges;
 
 void mtrr_state_warn(void);
-- 
2.7.3




[Xen-devel] [PATCH 6/9] x86/mtrr: drop unused mtrr_ops struct

2016-08-16 Thread Doug Goldstein
There are no users of the mtrr_ops struct or any of the callbacks in it,
so drop those.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/generic.c | 12 
 xen/arch/x86/cpu/mtrr/mtrr.h| 23 ---
 2 files changed, 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 45d4def..1d67035 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -560,15 +560,3 @@ int positive_have_wrcomb(void)
 {
return 1;
 }
-
-/* generic structure...
- */
-const struct mtrr_ops generic_mtrr_ops = {
-   .use_intel_if  = 1,
-   .set_all   = mtrr_generic_set_all,
-   .get   = mtrr_generic_get,
-   .get_free_region   = mtrr_generic_get_free_region,
-   .set   = mtrr_generic_set,
-   .validate_add_page = mtrr_generic_validate_add_page,
-   .have_wrcomb   = mtrr_generic_have_wrcomb,
-};
diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index 25f4867..9391fc5 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -11,24 +11,6 @@
 #define MTRR_CHANGE_MASK_VARIABLE  0x02
 #define MTRR_CHANGE_MASK_DEFTYPE   0x04
 
-
-struct mtrr_ops {
-   u32 vendor;
-   u32 use_intel_if;
-// void(*init)(void);
-   void(*set)(unsigned int reg, unsigned long base,
-  unsigned long size, mtrr_type type);
-   void(*set_all)(void);
-
-   void(*get)(unsigned int reg, unsigned long *base,
-  unsigned long *size, mtrr_type * type);
-   int (*get_free_region)(unsigned long base, unsigned long size,
-  int replace_reg);
-   int (*validate_add_page)(unsigned long base, unsigned long size,
-unsigned int type);
-   int (*have_wrcomb)(void);
-};
-
 void mtrr_generic_get(unsigned int reg, unsigned long *base,
 unsigned long *size, mtrr_type *type);
 int mtrr_generic_get_free_region(unsigned long base, unsigned long size,
@@ -40,8 +22,6 @@ void mtrr_generic_set(unsigned int reg, unsigned long base,
 unsigned long size, mtrr_type type);
 int mtrr_generic_have_wrcomb(void);
 
-extern const struct mtrr_ops generic_mtrr_ops;
-
 extern int positive_have_wrcomb(void);
 
 /* library functions for processor-specific routines */
@@ -58,10 +38,7 @@ void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);
 
 void get_mtrr_state(void);
 
-extern void set_mtrr_ops(const struct mtrr_ops *);
-
 extern u64 size_or_mask, size_and_mask;
-extern const struct mtrr_ops *mtrr_if;
 
 extern unsigned int num_var_ranges;
 
-- 
2.7.3




[Xen-devel] [PATCH 2/9] x86/mtrr: drop mtrr_if indirection

2016-08-16 Thread Doug Goldstein
There can only ever be one mtrr_if now and that is the generic
implementation so instead of going through an indirect call change
everything to call the generic implementation directly. The is_cpu()
macro would result in the left side always being equal to
X86_VENDOR_INTEL for the generic implementation due to Intel having a
value of 0. The use_intel() macro was always true in the generic
implementation case as well. Removed some extraneous whitespace at
the same time.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/generic.c   |  2 +-
 xen/arch/x86/cpu/mtrr/main.c  | 47 ++-
 xen/arch/x86/cpu/mtrr/mtrr.h  |  4 ++--
 xen/arch/x86/platform_hypercall.c |  2 +-
 4 files changed, 21 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 224d231..5c4b273 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -265,7 +265,7 @@ int mtrr_generic_get_free_region(unsigned long base, unsigned long size, int rep
if (replace_reg >= 0 && replace_reg < max)
return replace_reg;
for (i = 0; i < max; ++i) {
-   mtrr_if->get(i, &lbase, &lsize, &ltype);
+   mtrr_generic_get(i, &lbase, &lsize, &ltype);
if (lsize == 0)
return i;
}
diff --git a/xen/arch/x86/cpu/mtrr/main.c b/xen/arch/x86/cpu/mtrr/main.c
index bf489e3..ff908ad 100644
--- a/xen/arch/x86/cpu/mtrr/main.c
+++ b/xen/arch/x86/cpu/mtrr/main.c
@@ -58,8 +58,6 @@ static DEFINE_MUTEX(mtrr_mutex);
 u64 __read_mostly size_or_mask;
 u64 __read_mostly size_and_mask;
 
-const struct mtrr_ops *__read_mostly mtrr_if = NULL;
-
 static void set_mtrr(unsigned int reg, unsigned long base,
 unsigned long size, mtrr_type type);
 
@@ -82,7 +80,7 @@ static const char *mtrr_attrib_to_str(int x)
 /*  Returns non-zero if we have the write-combining memory type  */
 static int have_wrcomb(void)
 {
-   return (mtrr_if->have_wrcomb ? mtrr_if->have_wrcomb() : 0);
+   return mtrr_generic_have_wrcomb();
 }
 
 /*  This function returns the number of variable MTRRs  */
@@ -150,9 +148,9 @@ static void ipi_handler(void *info)
if (data->smp_reg == ~0U) /* update all mtrr registers */
/* At the cpu hot-add time this will reinitialize mtrr 
 * registres on the existing cpus. It is ok.  */
-   mtrr_if->set_all();
+   mtrr_generic_set_all();
else /* single mtrr register update */
-   mtrr_if->set(data->smp_reg, data->smp_base, 
+   mtrr_generic_set(data->smp_reg, data->smp_base,
 data->smp_size, data->smp_type);
 
atomic_dec(&data->count);
@@ -200,7 +198,7 @@ static inline int types_compatible(mtrr_type type1, mtrr_type type2) {
  * until it hits 0 and proceed. We set the data.gate flag and reset data.count.
  * Meanwhile, they are waiting for that flag to be set. Once it's set, each 
  * CPU goes through the transition of updating MTRRs. The CPU vendors may each do it 
- * differently, so we call mtrr_if->set() callback and let them take care of it.
+ * differently, so we call mtrr_generic_set() callback and let them take care of it.
  * When they're done, they again decrement data->count and wait for data.gate to 
  * be reset. 
  * When we finish, we wait for data.count to hit 0 and toggle the data.gate flag.
@@ -252,9 +250,9 @@ static void set_mtrr(unsigned int reg, unsigned long base,
if (reg == ~0U)  /* update all mtrr registers */
/* at boot or resume time, this will reinitialize the mtrrs on 
 * the bp. It is ok. */
-   mtrr_if->set_all();
+   mtrr_generic_set_all();
else /* update the single mtrr register */
-   mtrr_if->set(reg,base,size,type);
+   mtrr_generic_set(reg, base, size, type);
 
/* wait for the others */
while (atomic_read(&data.count))
@@ -317,10 +315,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
mtrr_type ltype;
unsigned long lbase, lsize;
 
-   if (!mtrr_if)
-   return -ENXIO;
-   
-   if ((error = mtrr_if->validate_add_page(base,size,type)))
+   if ((error = mtrr_generic_validate_add_page(base,size,type)))
return error;
 
if (type >= MTRR_NUM_TYPES) {
@@ -351,7 +346,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
/*  Search for existing MTRR  */
mutex_lock(&mtrr_mutex);
for (i = 0; i < num_var_ranges; ++i) {
-   mtrr_if->get(i, &lbase, &lsize, &ltype);
+   mtrr_generic_get(i, &lbase, &lsize, &ltype);
if (!lsize || base > lbase + lsize - 1 || base + size - 1 < lbase)
continue;
/*  At this point we know there is some kind of overlap/enclosure  */
@@ -386,7 +381,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,

[Xen-devel] [PATCH 4/9] x86/mtrr: drop unnecessary use_intel() macro

2016-08-16 Thread Doug Goldstein
The use_intel() macro always evaluates to true so don't bother using it.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/main.c | 21 -
 xen/arch/x86/cpu/mtrr/mtrr.h |  1 -
 2 files changed, 4 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/main.c b/xen/arch/x86/cpu/mtrr/main.c
index 5dd1f5d..6f0113a 100644
--- a/xen/arch/x86/cpu/mtrr/main.c
+++ b/xen/arch/x86/cpu/mtrr/main.c
@@ -82,12 +82,7 @@ static void __init set_num_var_ranges(void)
 {
unsigned long config = 0;
 
-   if (use_intel()) {
-   rdmsrl(MSR_MTRRcap, config);
-   } else if (is_cpu(AMD))
-   config = 2;
-   else if (is_cpu(CYRIX) || is_cpu(CENTAUR))
-   config = 8;
+rdmsrl(MSR_MTRRcap, config);
num_var_ranges = config & 0xff;
 }
 
@@ -561,13 +556,12 @@ void __init mtrr_bp_init(void)
 
 set_num_var_ranges();
 init_table();
-if (use_intel())
-get_mtrr_state();
+get_mtrr_state();
 }
 
 void mtrr_ap_init(void)
 {
-   if (!use_intel() || hold_mtrr_updates_on_aps)
+   if (hold_mtrr_updates_on_aps)
return;
/*
 * Ideally we should hold mtrr_mutex here to avoid mtrr entries changed,
@@ -596,30 +590,23 @@ void mtrr_save_state(void)
 
 void mtrr_aps_sync_begin(void)
 {
-   if (!use_intel())
-   return;
hold_mtrr_updates_on_aps = 1;
 }
 
 void mtrr_aps_sync_end(void)
 {
-   if (!use_intel())
-   return;
set_mtrr(~0U, 0, 0, 0);
hold_mtrr_updates_on_aps = 0;
 }
 
 void mtrr_bp_restore(void)
 {
-   if (!use_intel())
-   return;
mtrr_generic_set_all();
 }
 
 static int __init mtrr_init_finialize(void)
 {
-   if (use_intel())
-   mtrr_state_warn();
+   mtrr_state_warn();
return 0;
 }
 __initcall(mtrr_init_finialize);
diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index 92b0b11..5e0d832 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -64,7 +64,6 @@ extern u64 size_or_mask, size_and_mask;
 extern const struct mtrr_ops *mtrr_if;
 
 #define is_cpu(vnd)(X86_VENDOR_INTEL == X86_VENDOR_##vnd)
-#define use_intel()(1)
 
 extern unsigned int num_var_ranges;
 
-- 
2.7.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 7/9] x86/mtrr: drop unused positive_have_wrcomb()

2016-08-16 Thread Doug Goldstein
Unused function, gone.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/generic.c | 5 -
 xen/arch/x86/cpu/mtrr/mtrr.h| 2 --
 2 files changed, 7 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 1d67035..012aca4 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -555,8 +555,3 @@ int mtrr_generic_have_wrcomb(void)
rdmsrl(MSR_MTRRcap, config);
return (config & (1ULL << 10));
 }
-
-int positive_have_wrcomb(void)
-{
-   return 1;
-}
diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index 9391fc5..9405cbc 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -22,8 +22,6 @@ void mtrr_generic_set(unsigned int reg, unsigned long base,
 unsigned long size, mtrr_type type);
 int mtrr_generic_have_wrcomb(void);
 
-extern int positive_have_wrcomb(void);
-
 /* library functions for processor-specific routines */
 struct set_mtrr_context {
unsigned long flags;
-- 
2.7.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 1/9] x86/mtrr: prefix fns with mtrr and drop static

2016-08-16 Thread Doug Goldstein
For the functions that make up the interface to the MTRR code, drop the
static keyword and prefix them all with mtrr for improved clarity when
they're called outside this file. This also required adjusting or
providing function prototypes to make them callable.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/generic.c | 24 
 xen/arch/x86/cpu/mtrr/mtrr.h| 14 ++
 2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 234d2ba..224d231 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -250,7 +250,7 @@ static void set_fixed_range(int msr, int * changed, unsigned int * msrwords)
}
 }
 
-int generic_get_free_region(unsigned long base, unsigned long size, int replace_reg)
+int mtrr_generic_get_free_region(unsigned long base, unsigned long size, int replace_reg)
 /*  [SUMMARY] Get a free MTRR.
  The starting (base) address of the region.
  The size (in bytes) of the region.
@@ -272,7 +272,7 @@ int generic_get_free_region(unsigned long base, unsigned long size, int replace_
return -ENOSPC;
 }
 
-static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+void mtrr_generic_get(unsigned int reg, unsigned long *base,
 unsigned long *size, mtrr_type *type)
 {
uint64_t _mask, _base;
@@ -448,7 +448,7 @@ static void post_set(void)
spin_unlock(_atomicity_lock);
 }
 
-static void generic_set_all(void)
+void mtrr_generic_set_all(void)
 {
unsigned long mask, count;
unsigned long flags;
@@ -471,7 +471,7 @@ static void generic_set_all(void)

 }
 
-static void generic_set_mtrr(unsigned int reg, unsigned long base,
+void mtrr_generic_set(unsigned int reg, unsigned long base,
 unsigned long size, mtrr_type type)
 /*  [SUMMARY] Set variable MTRR register on the local CPU.
  The register to set.
@@ -514,7 +514,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
local_irq_restore(flags);
 }
 
-int generic_validate_add_page(unsigned long base, unsigned long size, unsigned int type)
+int mtrr_generic_validate_add_page(unsigned long base, unsigned long size, unsigned int type)
 {
unsigned long lbase, last;
 
@@ -549,7 +549,7 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
 }
 
 
-static int generic_have_wrcomb(void)
+int mtrr_generic_have_wrcomb(void)
 {
unsigned long config;
rdmsrl(MSR_MTRRcap, config);
@@ -565,10 +565,10 @@ int positive_have_wrcomb(void)
  */
 const struct mtrr_ops generic_mtrr_ops = {
.use_intel_if  = 1,
-   .set_all   = generic_set_all,
-   .get   = generic_get_mtrr,
-   .get_free_region   = generic_get_free_region,
-   .set   = generic_set_mtrr,
-   .validate_add_page = generic_validate_add_page,
-   .have_wrcomb   = generic_have_wrcomb,
+   .set_all   = mtrr_generic_set_all,
+   .get   = mtrr_generic_get,
+   .get_free_region   = mtrr_generic_get_free_region,
+   .set   = mtrr_generic_set,
+   .validate_add_page = mtrr_generic_validate_add_page,
+   .have_wrcomb   = mtrr_generic_have_wrcomb,
 };
diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index b41eb58..619575f 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -29,10 +29,16 @@ struct mtrr_ops {
int (*have_wrcomb)(void);
 };
 
-extern int generic_get_free_region(unsigned long base, unsigned long size,
-  int replace_reg);
-extern int generic_validate_add_page(unsigned long base, unsigned long size,
-unsigned int type);
+void mtrr_generic_get(unsigned int reg, unsigned long *base,
+unsigned long *size, mtrr_type *type);
+int mtrr_generic_get_free_region(unsigned long base, unsigned long size,
+int replace_reg);
+int mtrr_generic_validate_add_page(unsigned long base, unsigned long size,
+unsigned int type);
+void mtrr_generic_set_all(void);
+void mtrr_generic_set(unsigned int reg, unsigned long base,
+unsigned long size, mtrr_type type);
+int mtrr_generic_have_wrcomb(void);
 
 extern const struct mtrr_ops generic_mtrr_ops;
 
-- 
2.7.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 9/9] x86/mtrr: use stdbool instead of int + define

2016-08-16 Thread Doug Goldstein
Instead of using an int and providing defines for TRUE and FALSE,
change the code to use the stdbool definitions that Xen provides.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/generic.c | 21 +++--
 xen/arch/x86/cpu/mtrr/mtrr.h|  5 -
 2 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 012aca4..2d2eadc 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -3,6 +3,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -237,7 +238,7 @@ static void mtrr_wrmsr(unsigned int msr, uint64_t msr_content)
  * \param changed pointer which indicates whether the MTRR needed to be changed
  * \param msrwords pointer to the MSR values which the MSR should have
  */
-static void set_fixed_range(int msr, int * changed, unsigned int * msrwords)
+static void set_fixed_range(int msr, bool * changed, unsigned int * msrwords)
 {
uint64_t msr_content, val;
 
@@ -246,7 +247,7 @@ static void set_fixed_range(int msr, int * changed, unsigned int * msrwords)
 
if (msr_content != val) {
mtrr_wrmsr(msr, val);
-   *changed = TRUE;
+   *changed = true;
}
 }
 
@@ -302,10 +303,10 @@ void mtrr_generic_get(unsigned int reg, unsigned long *base,
  * Checks and updates the fixed-range MTRRs if they differ from the saved set
  * \param frs pointer to fixed-range MTRR values, saved by get_fixed_ranges()
  */
-static int set_fixed_ranges(mtrr_type * frs)
+static bool set_fixed_ranges(mtrr_type * frs)
 {
unsigned long long *saved = (unsigned long long *) frs;
-   int changed = FALSE;
+   bool changed = false;
int block=-1, range;
 
while (fixed_range_blocks[++block].ranges)
@@ -316,13 +317,13 @@ static int set_fixed_ranges(mtrr_type * frs)
return changed;
 }
 
-/*  Set the MSR pair relating to a var range. Returns TRUE if
+/*  Set the MSR pair relating to a var range. Returns true if
 changes are made  */
-static int set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
 {
uint32_t lo, hi, base_lo, base_hi, mask_lo, mask_hi;
uint64_t msr_content;
-   int changed = FALSE;
+   bool changed = false;
 
rdmsrl(MSR_IA32_MTRR_PHYSBASE(index), msr_content);
lo = (uint32_t)msr_content;
@@ -337,7 +338,7 @@ static int set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
 
if ((base_lo != lo) || (base_hi != hi)) {
mtrr_wrmsr(MSR_IA32_MTRR_PHYSBASE(index), vr->base);
-   changed = TRUE;
+   changed = true;
}
 
rdmsrl(MSR_IA32_MTRR_PHYSMASK(index), msr_content);
@@ -353,7 +354,7 @@ static int set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
 
if ((mask_lo != lo) || (mask_hi != hi)) {
mtrr_wrmsr(MSR_IA32_MTRR_PHYSMASK(index), vr->mask);
-   changed = TRUE;
+   changed = true;
}
return changed;
 }
@@ -478,7 +479,7 @@ void mtrr_generic_set(unsigned int reg, unsigned long base,
  The base address of the region.
  The size of the region. If this is 0 the region is disabled.
  The type of the region.
- If TRUE, do the change safely. If FALSE, safety measures should
+ If true, do the change safely. If false, safety measures should
 be done externally.
 [RETURNS] Nothing.
 */
diff --git a/xen/arch/x86/cpu/mtrr/mtrr.h b/xen/arch/x86/cpu/mtrr/mtrr.h
index 1a3b1e5..9d55c68 100644
--- a/xen/arch/x86/cpu/mtrr/mtrr.h
+++ b/xen/arch/x86/cpu/mtrr/mtrr.h
@@ -2,11 +2,6 @@
  * local mtrr defines.
  */
 
-#ifndef TRUE
-#define TRUE  1
-#define FALSE 0
-#endif
-
 #define MTRR_CHANGE_MASK_FIXED 0x01
 #define MTRR_CHANGE_MASK_VARIABLE  0x02
 #define MTRR_CHANGE_MASK_DEFTYPE   0x04
-- 
2.7.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 3/9] x86/mtrr: drop have_wrcomb() wrapper

2016-08-16 Thread Doug Goldstein
The only call was always to the generic implementation.

Signed-off-by: Doug Goldstein 
---
 xen/arch/x86/cpu/mtrr/main.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/main.c b/xen/arch/x86/cpu/mtrr/main.c
index ff908ad..5dd1f5d 100644
--- a/xen/arch/x86/cpu/mtrr/main.c
+++ b/xen/arch/x86/cpu/mtrr/main.c
@@ -77,12 +77,6 @@ static const char *mtrr_attrib_to_str(int x)
return (x <= 6) ? mtrr_strings[x] : "?";
 }
 
-/*  Returns non-zero if we have the write-combining memory type  */
-static int have_wrcomb(void)
-{
-   return mtrr_generic_have_wrcomb();
-}
-
 /*  This function returns the number of variable MTRRs  */
 static void __init set_num_var_ranges(void)
 {
@@ -324,7 +318,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
}
 
/*  If the type is WC, check that this processor supports it  */
-   if ((type == MTRR_TYPE_WRCOMB) && !have_wrcomb()) {
+   if ((type == MTRR_TYPE_WRCOMB) && !mtrr_generic_have_wrcomb()) {
printk(KERN_WARNING
   "mtrr: your processor doesn't support write-combining\n");
return -ENOSYS;
-- 
2.7.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 0/9] x86/mtrr: basic cleanups

2016-08-16 Thread Doug Goldstein
I was stuck on an airplane and was idly reading this code when I noticed that
Xen does not need multiple MTRR implementations anymore since only x86_64
is supported. This guts some of the indirection and drops what should be
dead code paths. I will admit I have only compiled this and not booted it.

Doug Goldstein (9):
  x86/mtrr: prefix fns with mtrr and drop static
  x86/mtrr: drop mtrr_if indirection
  x86/mtrr: drop have_wrcomb() wrapper
  x86/mtrr: drop unnecessary use_intel() macro
  x86/mtrr: drop unused is_cpu() macro
  x86/mtrr: drop unused mtrr_ops struct
  x86/mtrr: drop unused positive_have_wrcomb()
  x86/mtrr: drop unused func prototypes and struct
  x86/mtrr: use stdbool instead of int + define

 xen/arch/x86/cpu/mtrr/generic.c   | 54 +++
 xen/arch/x86/cpu/mtrr/main.c  | 68 +++
 xen/arch/x86/cpu/mtrr/mtrr.h  | 62 ++-
 xen/arch/x86/platform_hypercall.c |  2 +-
 4 files changed, 48 insertions(+), 138 deletions(-)

-- 
2.7.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [libvirt test] 100509: tolerable FAIL - PUSHED

2016-08-16 Thread osstest service owner
flight 100509 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100509/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass

version targeted for testing:
 libvirt  da5dfd0e06cac740f746bcc7c839f839c4d64c30
baseline version:
 libvirt  541e9ae6d4290b9004ed73648ea663563b329b3d

Last test of basis   100482  2016-08-14 04:22:37 Z2 days
Testing same since   100509  2016-08-16 04:20:45 Z0 days1 attempts


People who touched revisions under test:
  John Ferlan 
  Jovanka Gulicoska 
  Michal Privoznik 
  Pavel Hrdina 
  Roman Bogorodskiy 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt fail
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-armhf-armhf-libvirt-qcow2   fail
 test-armhf-armhf-libvirt-raw fail
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=libvirt
+ revision=da5dfd0e06cac740f746bcc7c839f839c4d64c30
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ 

[Xen-devel] [xen-4.6-testing test] 100504: tolerable FAIL - PUSHED

2016-08-16 Thread osstest service owner
flight 100504 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100504/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm   6 xen-boot fail in 100495 pass in 100504
 test-amd64-amd64-rumpuserxen-amd64 15 rumpuserxen-demo-xenstorels/xenstorels.repeat fail pass in 100495

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 100352
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 100352
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 100352
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 100352
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail  like 100352

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass

version targeted for testing:
 xen  e06d2bae53cb1a3542e7269fd35bf3885dd2e244
baseline version:
 xen  55292d3dee83c974e3d89d3a24cd35a8956ceaf5

Last test of basis   100352  2016-08-08 19:46:24 Z8 days
Testing same since   100495  2016-08-15 11:13:56 Z1 days2 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anil Madhavapeddy 
  Anthony PERARD 
  Bob Liu 
  Boris Ostrovsky 
  Daniel De Graaf 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Jason Andryuk 
  Juergen Gross 
  Konrad Rzeszutek Wilk 
  Marek Marczykowski-Górecki 
  Matthew Daley 
  Olaf Hering 
  Roger Pau Monne 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64

[Xen-devel] [PATCH v3 29/38] arm/p2m: Add HVMOP_altp2m_set_mem_access

2016-08-16 Thread Sergej Proskurin
The HVMOP_altp2m_set_mem_access operation allows setting gfn permissions
(currently one page at a time) in a specific altp2m view. If the view
does not hold the requested gfn entry, the entry is first copied from
the host's p2m table and then modified as requested.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Prevent the page reference count from being falsely updated on
altp2m modification. Therefore, we add a check determining whether
the target p2m is a hostp2m before p2m_put_l3_page is called.

v3: Cosmetic fixes.

Added the functionality to set/get the default_access also in/from
the requested altp2m view.

Read-locked hp2m in "altp2m_set_mem_access".

Moved the functions "p2m_is_(hostp2m|altp2m)" out of this commit.

Moved the function "modify_altp2m_entry" out of this commit.

Moved the function "p2m_lookup_attr" out of this commit.

Moved guards for "p2m_put_l3_page" out of this commit.
---
 xen/arch/arm/altp2m.c| 53 
 xen/arch/arm/hvm.c   |  7 +++-
 xen/arch/arm/p2m.c   | 82 
 xen/include/asm-arm/altp2m.h | 12 +++
 4 files changed, 131 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index ba345b9..03b8ce5 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -80,6 +80,59 @@ int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 return rc;
 }
 
+int altp2m_set_mem_access(struct domain *d,
+  struct p2m_domain *hp2m,
+  struct p2m_domain *ap2m,
+  p2m_access_t a,
+  gfn_t gfn)
+{
+p2m_type_t p2mt;
+p2m_access_t old_a;
+mfn_t mfn;
+unsigned int page_order;
+int rc;
+
+altp2m_lock(d);
+p2m_read_lock(hp2m);
+
+/* Check if entry is part of the altp2m view. */
+mfn = p2m_lookup_attr(ap2m, gfn, &p2mt, NULL, &page_order);
+
+/* Check host p2m if no valid entry in ap2m. */
+if ( mfn_eq(mfn, INVALID_MFN) )
+{
+/* Check if entry is part of the host p2m view. */
+mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &old_a, &page_order);
+if ( mfn_eq(mfn, INVALID_MFN) ||
+ ((p2mt != p2m_ram_rw) && (p2mt != p2m_ram_ro)) )
+{
+rc = -ESRCH;
+goto out;
+}
+
+/* If this is a superpage, copy that first. */
+if ( page_order != THIRD_ORDER )
+{
+rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, old_a, page_order);
+if ( rc < 0 )
+{
+rc = -ESRCH;
+goto out;
+}
+}
+}
+
+/* Set mem access attributes - currently supporting only one (4K) page. */
+page_order = THIRD_ORDER;
+rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, a, page_order);
+
+out:
+p2m_read_unlock(hp2m);
+altp2m_unlock(d);
+
+return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 struct altp2mvcpu *av = &altp2m_vcpu(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 9ac3422..df78893 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -136,7 +136,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_set_mem_access:
-rc = -EOPNOTSUPP;
+if ( a.u.set_mem_access.pad )
+rc = -EINVAL;
+else
+rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+a.u.set_mem_access.hvmmem_access,
+a.u.set_mem_access.view);
 break;
 
 case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index df2b85b..8dee02187 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1913,7 +1913,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 uint32_t start, uint32_t mask, xenmem_access_t access,
 unsigned int altp2m_idx)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
+struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
 p2m_access_t a;
 unsigned int order;
 long rc = 0;
@@ -1933,13 +1933,26 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 #undef ACCESS
 };
 
+/* altp2m view 0 is treated as the hostp2m */
+if ( altp2m_idx )
+{
+if ( altp2m_idx >= MAX_ALTP2M ||
+ d->arch.altp2m_p2m[altp2m_idx] == NULL )
+return -EINVAL;
+
+ap2m = d->arch.altp2m_p2m[altp2m_idx];
+}
+
 switch ( access )
 {
 case 0 ... ARRAY_SIZE(memaccess) - 1:
 a = memaccess[access];
 break;
 case XENMEM_access_default:
-a = p2m->default_access;
+if ( ap2m )
+a = ap2m->default_access;
+else
+

[Xen-devel] [PATCH v3 05/38] arm/p2m: Add hvm_allow_(set|get)_param

2016-08-16 Thread Sergej Proskurin
This commit introduces the functions hvm_allow_(set|get)_param. These
can be used as a filter controlling access to HVM params. This
functionality has been inspired by the x86 implementation.

The introduced filter ensures that the HVM param HVM_PARAM_ALTP2M is set
once and not altered by guest domains.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c | 65 ++
 1 file changed, 56 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 45d51c6..ce6a436 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -117,6 +117,48 @@ out:
 return rc;
 }
 
+static int hvm_allow_set_param(struct domain *d, const struct xen_hvm_param *a)
+{
+uint64_t value = d->arch.hvm_domain.params[a->index];
+int rc;
+
+rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
+if ( rc )
+return rc;
+
+switch ( a->index )
+{
+/* The following parameters should only be changed once. */
+case HVM_PARAM_ALTP2M:
+if ( value != 0 && a->value != value )
+rc = -EEXIST;
+break;
+default:
+break;
+}
+
+return rc;
+}
+
+static int hvm_allow_get_param(struct domain *d, const struct xen_hvm_param *a)
+{
+int rc;
+
+rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_param);
+if ( rc )
+return rc;
+
+switch ( a->index )
+{
+/* This switch statement can be used to control/limit guest access to
+ * certain HVM params. */
+default:
+break;
+}
+
+return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
@@ -139,21 +181,26 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 if ( d == NULL )
 return -ESRCH;
 
-rc = xsm_hvm_param(XSM_TARGET, d, op);
-if ( rc )
-goto param_fail;
-
-if ( op == HVMOP_set_param )
+switch ( op )
 {
+case HVMOP_set_param:
+rc = hvm_allow_set_param(d, &a);
+if ( rc )
+break;
+
 d->arch.hvm_domain.params[a.index] = a.value;
-}
-else
-{
+break;
+
+case HVMOP_get_param:
+rc = hvm_allow_get_param(d, &a);
+if ( rc )
+break;
+
 a.value = d->arch.hvm_domain.params[a.index];
 rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
+break;
 }
 
-param_fail:
 rcu_unlock_domain(d);
 break;
 }
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v3 17/38] arm/p2m: Add HVMOP_altp2m_create_p2m

2016-08-16 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Cosmetic fixes.

v3: Cosmetic fixes.

Renamed the function "altp2m_init_next" to
"altp2m_init_next_available".

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_init_next_available".
---
 xen/arch/arm/altp2m.c| 23 +++
 xen/arch/arm/hvm.c   |  3 ++-
 xen/include/asm-arm/altp2m.h |  4 
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 02a52ec..b5d1951 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -122,6 +122,29 @@ int altp2m_init_by_id(struct domain *d, unsigned int idx)
 return rc;
 }
 
+int altp2m_init_next_available(struct domain *d, uint16_t *idx)
+{
+int rc = -EINVAL;
+uint16_t i;
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_p2m[i] != NULL )
+continue;
+
+rc = altp2m_init_helper(d, i);
+*idx = i;
+
+break;
+}
+
+altp2m_unlock(d);
+
+return rc;
+}
+
 int altp2m_init(struct domain *d)
 {
 unsigned int i;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c69da36..a504dfd 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -123,7 +123,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_create_p2m:
-rc = -EOPNOTSUPP;
+if ( !(rc = altp2m_init_next_available(d, &a.u.view.view)) )
+rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index f604ffd..5701012 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -56,6 +56,10 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 int altp2m_init_by_id(struct domain *d,
   unsigned int idx);
 
+/* Find and initialize the next available alternate p2m. */
+int altp2m_init_next_available(struct domain *d,
+   uint16_t *idx);
+
 /* Flush all the alternate p2m's for a domain. */
 void altp2m_flush(struct domain *d);
 
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v3 24/38] arm/p2m: Make p2m_mem_access_check ready for altp2m

2016-08-16 Thread Sergej Proskurin
This commit extends the functions "p2m_mem_access_check" and
"p2m_mem_access_check_and_get_page" to consider altp2m. The function
"p2m_mem_access_check_and_get_page" needs to translate the gva upon the
hostp2m's vttbr, as it contains all valid mappings while the currently
active altp2m view might not have the required gva mapping yet.

Also, the new implementation fills the request buffer to hold
altp2m-related information.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Extended the function "p2m_mem_access_check_and_get_page" to
consider altp2m. Similar to "get_page_from_gva", the function
"p2m_mem_access_check_and_get_page" needs to translate the gva upon
the hostp2m's vttbr. Although the function "gva_to_ipa" (called in
"p2m_mem_access_check_and_get_page") performs a stage 1 table walk,
it will access page tables residing in memory. Accesses to this
memory are controlled by the underlying 2nd stage translation table
and hence require the original mappings of the hostp2m.
---
 xen/arch/arm/p2m.c | 43 +++
 1 file changed, 39 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5819ae0..ed9e0f0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 
+#include 
 #include 
 
 #ifdef CONFIG_ARM_64
@@ -1479,9 +1480,32 @@ p2m_mem_access_check_and_get_page(struct vcpu *v, vaddr_t gva, unsigned long fla
 xenmem_access_t xma;
 p2m_type_t t;
 struct page_info *page = NULL;
-struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+struct domain *d = v->domain;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+/*
+ * If altp2m is active, we need to translate the gva upon the hostp2m's
+ * vttbr, as it contains all valid mappings while the currently active
+ * altp2m view might not have the required gva mapping yet. Although the
+ * function gva_to_ipa performs a stage 1 table walk, it will access page
+ * tables residing in memory. Accesses to this memory are controlled by the
+ * underlying 2nd stage translation table and hence require the original
+ * mappings of the hostp2m.
+ */
+if ( unlikely(altp2m_active(d)) )
+{
+unsigned long flags = 0;
+uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
+
+p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
+
+rc = gva_to_ipa(gva, &ipa, flag);
+
+p2m_restore_vttbr_and_set_flags(ovttbr, flags);
+}
+else
+rc = gva_to_ipa(gva, &ipa, flag);
 
-rc = gva_to_ipa(gva, &ipa, flag);
 if ( rc < 0 )
 goto err;
 
@@ -1698,13 +1722,16 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 xenmem_access_t xma;
 vm_event_request_t *req;
 struct vcpu *v = current;
-struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+struct domain *d = v->domain;
+struct p2m_domain *p2m = p2m_get_active_p2m(v);
 
 /* Mem_access is not in use. */
 if ( !p2m->mem_access_enabled )
 return true;
 
-rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+p2m_read_lock(p2m);
+rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(gpa)), &xma);
+p2m_read_unlock(p2m);
 if ( rc )
 return true;
 
@@ -1810,6 +1837,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
 req->vcpu_id = v->vcpu_id;
 
+vm_event_fill_regs(req);
+
+if ( unlikely(altp2m_active(d)) )
+{
+req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
+req->altp2m_idx = altp2m_vcpu(v).p2midx;
+}
+
 mem_access_send_req(v->domain, req);
 xfree(req);
 }
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v3 33/38] arm/p2m: Add altp2m paging mechanism

2016-08-16 Thread Sergej Proskurin
This commit adds the function "altp2m_lazy_copy", which implements the
altp2m paging mechanism: on 2nd stage translation faults caused by
instruction or data accesses, it lazily copies the hostp2m's mapping
into the currently active altp2m view.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Cosmetic fixes.

Locked the hostp2m in the function "altp2m_lazy_copy" to avoid a mapping
being changed in the hostp2m before it has been inserted into the
altp2m view.

Removed unnecessary calls to "p2m_mem_access_check" in the functions
"do_trap_instr_abort_guest" and "do_trap_data_abort_guest" after a
translation fault has been handled by the function
"altp2m_lazy_copy".

Adapted "altp2m_lazy_copy" to return "true" if the translation fault
hits a valid entry inside the currently active altp2m view. If
multiple vCPUs use the same altp2m, it is likely that both generate a
translation fault, in which case the first one will already have been
handled by "altp2m_lazy_copy". With this change the second vCPU will
retry accessing the faulting address.

Changed order of altp2m checking and MMIO emulation within the
function "do_trap_data_abort_guest".  Now, altp2m is checked and
handled only if the MMIO does not have to be emulated.

Changed the function prototype of "altp2m_lazy_copy": this commit
removes the unnecessary struct p2m_domain* from the previous
function prototype, as well as the unnecessary argument gva.
Finally, this commit changes the type of the function parameter gpa
from paddr_t to gfn_t and renames it to gfn.

Moved the altp2m handling mechanism into a separate function
"try_handle_altp2m".

Moved the functions "p2m_altp2m_check" and
"altp2m_switch_vcpu_altp2m_by_id" out of this patch.

Moved applied code movement into a separate patch.
---
 xen/arch/arm/altp2m.c| 62 
 xen/arch/arm/traps.c | 35 +
 xen/include/asm-arm/altp2m.h |  5 
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 11272e9..2009bad 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -165,6 +165,68 @@ out:
 return rc;
 }
 
+/*
+ * The function altp2m_lazy_copy returns "false" on error. The return value
+ * "true" signals that the mapping either has been successfully lazy-copied
+ * from the hostp2m to the currently active altp2m view, or that the altp2m
+ * view already holds a valid mapping. The latter is the case if multiple
+ * vCPUs using the same altp2m view fault on the same mapping and the first
+ * fault has already been handled.
+ */
+bool_t altp2m_lazy_copy(struct vcpu *v,
+gfn_t gfn,
+struct npfec npfec)
+{
+struct domain *d = v->domain;
+struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
+p2m_type_t p2mt;
+p2m_access_t p2ma;
+mfn_t mfn;
+unsigned int page_order;
+int rc;
+
+ap2m = altp2m_get_altp2m(v);
+if ( ap2m == NULL )
+return false;
+
+/* Check if entry is part of the altp2m view. */
+mfn = p2m_lookup_attr(ap2m, gfn, NULL, NULL, NULL);
+if ( !mfn_eq(mfn, INVALID_MFN) )
+/*
+ * If multiple vCPUs are using the same altp2m, it is likely that both
+ * generate a translation fault on the same mapping, whereas the first one
+ * will be handled successfully and the second will encounter a valid
+ * mapping that has already been added as a result of the previous
+ * translation fault. In this case, the second vCPU needs to retry
+ * accessing the faulting address.
+ */
+return true;
+
+/*
+ * Lock hp2m to prevent the hostp2m from changing a mapping before it is added
+ * to the altp2m view.
+ */
+p2m_read_lock(hp2m);
+
+/* Check if entry is part of the host p2m view. */
+mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &p2ma, &page_order);
+if ( mfn_eq(mfn, INVALID_MFN) )
+goto out;
+
+rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, p2ma, page_order);
+if ( rc )
+{
+gdprintk(XENLOG_ERR, "altp2m[%d] failed to set entry for %#"PRI_gfn" -> %#"PRI_mfn"\n",
+ altp2m_vcpu(v).p2midx, gfn_x(gfn), mfn_x(mfn));
+domain_crash(hp2m->domain);
+}
+
+out:
+p2m_read_unlock(hp2m);
+
+return true;
+}
+
 static inline void altp2m_reset(struct p2m_domain *p2m)
 {
 p2m_write_lock(p2m);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 0bf1653..a4c923c 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -48,6 +48,8 @@
 #include 
 #include 
 
+#include 
+
 /* The base of the stack must always be double-word aligned, which means
  

[Xen-devel] [PATCH v3 06/38] arm/p2m: Add HVMOP_altp2m_get_domain_state

2016-08-16 Thread Sergej Proskurin
This commit adopts the x86 HVMOP_altp2m_get_domain_state implementation.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Removed the "altp2m_enabled" check in HVMOP_altp2m_get_domain_state
case as it has been moved in front of the switch statement in
"do_altp2m_op".

Removed the macro "altp2m_enabled". Instead, check directly for the
HVM_PARAM_ALTP2M param in d->arch.hvm_domain.
---
 xen/arch/arm/hvm.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index ce6a436..180154e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -66,7 +66,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 goto out;
 }
 
-if ( !(d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
 {
 rc = -EINVAL;
 goto out;
@@ -78,7 +78,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 switch ( a.cmd )
 {
 case HVMOP_altp2m_get_domain_state:
-rc = -EOPNOTSUPP;
+a.u.domain_state.state = altp2m_active(d);
+rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_set_domain_state:
-- 
2.9.0




[Xen-devel] [PATCH v3 01/38] arm/p2m: Cosmetic fixes - apply p2m_get_hostp2m

2016-08-16 Thread Sergej Proskurin
This commit substitutes direct accesses to the host's p2m
(&d->arch.p2m) with the macro "p2m_get_hostp2m". This macro simplifies
the differentiation between the host's p2m and the alternative p2m
views introduced in the following commits.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index beaaf43..da6c7d4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -107,7 +107,7 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
 
 void p2m_dump_info(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
 p2m_read_lock(p2m);
 printk("p2m mappings for domain %d (vmid %d):\n",
@@ -127,7 +127,7 @@ void memory_type_changed(struct domain *d)
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
 printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
@@ -146,7 +146,7 @@ void p2m_save_state(struct vcpu *p)
 void p2m_restore_state(struct vcpu *n)
 {
 register_t hcr;
-struct p2m_domain *p2m = &n->domain->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
 
 if ( is_idle_vcpu(n) )
 return;
@@ -405,7 +405,7 @@ out:
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
 mfn_t ret;
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
 p2m_read_lock(p2m);
 ret = p2m_get_entry(p2m, gfn, t, NULL, NULL);
@@ -1052,7 +1052,7 @@ static inline int p2m_insert_mapping(struct domain *d,
  mfn_t mfn,
  p2m_type_t t)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc;
 
 p2m_write_lock(p2m);
@@ -1071,7 +1071,7 @@ static inline int p2m_remove_mapping(struct domain *d,
  unsigned long nr,
  mfn_t mfn)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc;
 
 p2m_write_lock(p2m);
@@ -1153,7 +1153,7 @@ void guest_physmap_remove_page(struct domain *d,
 
 static int p2m_alloc_table(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *page;
 unsigned int i;
 
@@ -1196,7 +1196,7 @@ void p2m_vmid_allocator_init(void)
 
 static int p2m_alloc_vmid(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
 int rc, nr;
 
@@ -1226,7 +1226,7 @@ out:
 
 static void p2m_free_vmid(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
spin_lock(&vmid_alloc_lock);
 if ( p2m->vmid != INVALID_VMID )
 clear_bit(p2m->vmid, vmid_mask);
@@ -1236,7 +1236,7 @@ static void p2m_free_vmid(struct domain *d)
 
 void p2m_teardown(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *pg;
 
while ( (pg = page_list_remove_head(&p2m->pages)) )
@@ -1254,7 +1254,7 @@ void p2m_teardown(struct domain *d)
 
 int p2m_init(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc = 0;
 
rwlock_init(&p2m->lock);
@@ -1296,7 +1296,7 @@ int p2m_init(struct domain *d)
  */
 int relinquish_p2m_mapping(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 unsigned long count = 0;
 p2m_type_t t;
 int rc = 0;
@@ -1347,7 +1347,7 @@ int relinquish_p2m_mapping(struct domain *d)
 
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 gfn_t end = gfn_add(start, nr);
 p2m_type_t t;
 unsigned int order;
@@ -1410,7 +1410,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
 xenmem_access_t xma;
 p2m_type_t t;
 struct page_info *page = NULL;
-struct p2m_domain *p2m = &current->domain->arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(current->domain);
 
rc = gva_to_ipa(gva, &ipa, flag);
 if ( rc < 0 )
@@ -1497,7 +1497,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 unsigned long flags)
 {
 struct domain *d = v->domain;
-struct p2m_domain *p2m = >arch.p2m;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *page = NULL;
 paddr_t maddr = 0;
 int rc;
-- 
2.9.0



[Xen-devel] [PATCH v3 10/38] arm/p2m: Move hostp2m init/teardown to individual functions

2016-08-16 Thread Sergej Proskurin
This commit pulls generic init/teardown functionality out of
"p2m_init" and "p2m_teardown" into "p2m_init_one", "p2m_teardown_one",
and "p2m_flush_table" functions.  This allows our future implementation
to reuse existing code for the initialization/teardown of altp2m views.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Added the function p2m_flush_table to the previous version.

v3: Removed struct vttbr.

Moved define INVALID_VTTBR to p2m.h.

Exported function prototypes of "p2m_flush_table", "p2m_init_one",
and "p2m_teardown_one" in p2m.h.

Extended the function "p2m_flush_table" by additionally resetting
the fields lowest_mapped_gfn and max_mapped_gfn.

Added a "p2m_flush_tlb" call in "p2m_flush_table". On altp2m reset
in function "altp2m_reset", it is important to flush the TLBs after
clearing the root table pages and before clearing the intermediate
altp2m page tables to prevent illegal access to stale TLB entries
on currently active VCPUs.

Added a check whether p2m->root is NULL in "p2m_flush_table".

Renamed the function "p2m_free_one" to "p2m_teardown_one".

Removed resetting p2m->vttbr in "p2m_teardown_one", as the p2m
will be destroyed afterwards.

Moved call to "p2m_alloc_table" back to "p2m_init_one".

Moved the introduction of the type p2m_class_t out of this patch.

Moved the backpointer to the struct domain out of the struct
p2m_domain.
---
 xen/arch/arm/p2m.c| 71 +--
 xen/include/asm-arm/p2m.h | 11 
 2 files changed, 73 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e859fca..9ef19d4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1245,27 +1245,53 @@ static void p2m_free_vmid(struct domain *d)
spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+/* Reset this p2m table to be empty. */
+void p2m_flush_table(struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
-struct page_info *pg;
+struct page_info *page, *pg;
+unsigned int i;
+
+if ( p2m->root )
+{
+page = p2m->root;
+
+/* Clear all concatenated first level pages. */
+for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+clear_and_clean_page(page + i);
+}
+
+/*
+ * Flush TLBs before releasing remaining intermediate p2m page tables to
+ * prevent illegal access to stale TLB entries.
+ */
+p2m_flush_tlb(p2m);
 
+/* Free the rest of the trie pages back to the paging pool. */
while ( (pg = page_list_remove_head(&p2m->pages)) )
 free_domheap_page(pg);
 
+p2m->lowest_mapped_gfn = INVALID_GFN;
+p2m->max_mapped_gfn = _gfn(0);
+}
+
+void p2m_teardown_one(struct p2m_domain *p2m)
+{
+p2m_flush_table(p2m);
+
 if ( p2m->root )
 free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
 
 p2m->root = NULL;
 
-p2m_free_vmid(d);
+p2m_free_vmid(p2m->domain);
+
+p2m->vttbr = INVALID_VTTBR;
 
radix_tree_destroy(&p2m->mem_access_settings, NULL);
 }
 
-int p2m_init(struct domain *d)
+int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc = 0;
 
rwlock_init(&p2m->lock);
@@ -1278,11 +1304,14 @@ int p2m_init(struct domain *d)
 return rc;
 
 p2m->max_mapped_gfn = _gfn(0);
-p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
+p2m->lowest_mapped_gfn = INVALID_GFN;
 
 p2m->domain = d;
+p2m->access_required = false;
 p2m->default_access = p2m_access_rwx;
 p2m->mem_access_enabled = false;
+p2m->root = NULL;
+p2m->vttbr = INVALID_VTTBR;
radix_tree_init(&p2m->mem_access_settings);
 
 /*
@@ -1293,9 +1322,33 @@ int p2m_init(struct domain *d)
 p2m->clean_pte = iommu_enabled &&
 !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-rc = p2m_alloc_table(d);
+return p2m_alloc_table(d);
+}
 
-return rc;
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+p2m_teardown_one(p2m);
+}
+
+void p2m_teardown(struct domain *d)
+{
+p2m_teardown_hostp2m(d);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+p2m->p2m_class = p2m_host;
+
+return p2m_init_one(d, p2m);
+}
+
+int p2m_init(struct domain *d)
+{
+return p2m_init_hostp2m(d);
 }
 
 /*
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index fa07e19..1a004ed 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -11,6 +11,8 @@
 
 #define paddr_bits PADDR_BITS
 
+#define INVALID_VTTBR (0UL)
+
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
@@ -226,6 +228,15 @@ void guest_physmap_remove_page(struct domain *d,
 
 mfn_t gfn_to_mfn(struct domain 

[Xen-devel] [PATCH v3 31/38] altp2m: Introduce altp2m_switch_vcpu_altp2m_by_id

2016-08-16 Thread Sergej Proskurin
This commit adds the function "altp2m_switch_vcpu_altp2m_by_id" that is
executed after checking whether the vcpu should be switched to a
different altp2m within the function "altp2m_check".

Please note that in this commit, the function "p2m_altp2m_check" is
renamed to "altp2m_check" and moved from p2m.c to altp2m.c for the x86
architecture. This change was performed in order to gather altp2m-related
functions in one spot (altp2m.c). The function was renamed to match the
name of the .c file it now resides in.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: George Dunlap 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
---
v3: This commit has been moved out of the commit "arm/p2m: Add altp2m
paging mechanism".

Moved the function "p2m_altp2m_check" from p2m.c to altp2m.c and
renamed it to "altp2m_check". This change required adapting the
complementary function in the x86 architecture.
---
 xen/arch/arm/altp2m.c| 32 
 xen/arch/x86/mm/altp2m.c |  6 ++
 xen/arch/x86/mm/p2m.c|  6 --
 xen/common/vm_event.c|  3 ++-
 xen/include/asm-arm/altp2m.h |  7 ---
 xen/include/asm-arm/p2m.h|  6 --
 xen/include/asm-x86/altp2m.h |  3 +++
 xen/include/asm-x86/p2m.h|  3 ---
 8 files changed, 47 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index b10711e..11272e9 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -32,6 +32,38 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
 return v->domain->arch.altp2m_p2m[index];
 }
 
+static bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+struct domain *d = v->domain;
+bool_t rc = false;
+
+if ( idx >= MAX_ALTP2M )
+return rc;
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] != NULL )
+{
+if ( idx != altp2m_vcpu(v).p2midx )
+{
+atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
+altp2m_vcpu(v).p2midx = idx;
+atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+}
+rc = true;
+}
+
+altp2m_unlock(d);
+
+return rc;
+}
+
+void altp2m_check(struct vcpu *v, uint16_t idx)
+{
+if ( altp2m_active(v->domain) )
+altp2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 {
 struct vcpu *v;
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 930bdc2..00abb5a 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -65,6 +65,12 @@ altp2m_vcpu_destroy(struct vcpu *v)
 vcpu_unpause(v);
 }
 
+void altp2m_check(struct vcpu *v, uint16_t idx)
+{
+if ( altp2m_active(v->domain) )
+p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 812dbf6..cb28cc2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1646,12 +1646,6 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
 }
 }
 
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
-{
-if ( altp2m_active(v->domain) )
-p2m_switch_vcpu_altp2m_by_id(v, idx);
-}
-
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
 struct npfec npfec,
 vm_event_request_t **req_ptr)
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 8398af7..e48d111 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* for public/io/ring.h macros */
 #define xen_mb()   mb()
@@ -423,7 +424,7 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
 
 /* Check for altp2m switch */
 if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
-p2m_altp2m_check(v, rsp.altp2m_idx);
+altp2m_check(v, rsp.altp2m_idx);
 
 /* Check flags which apply only when the vCPU is paused */
if ( atomic_read(&v->vm_event_pause_count) )
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 7f385d9..ef80829 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -38,9 +38,7 @@ static inline bool_t altp2m_active(const struct domain *d)
 /* Alternate p2m VCPU */
 static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 {
-/* Not implemented on ARM, should not be reached. */
-BUG();
-return 0;
+return altp2m_vcpu(v).p2midx;
 }
 
 int altp2m_init(struct domain *d);
@@ -52,6 +50,9 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 /* Get current alternate p2m table. */
 struct p2m_domain *altp2m_get_altp2m(struct 

[Xen-devel] [PATCH v3 23/38] arm/p2m: Cosmetic fixes -- __p2m_get_mem_access

2016-08-16 Thread Sergej Proskurin
This commit extends the function prototypes of the functions:
* __p2m_get_mem_access
* p2m_mem_access_check_and_get_page

We extend the function prototype of "__p2m_get_mem_access" to hold an
argument of type "struct p2m_domain*", as we need to distinguish between
the host's p2m and different altp2m views. While doing so, we needed to
extend the function's prototype of "p2m_mem_access_check_and_get_page"
to hold an argument of type "struct vcpu*".

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Changed the parameter of "p2m_mem_access_check_and_get_page"
from "struct p2m_domain*" to "struct vcpu*".
---
 xen/arch/arm/p2m.c | 15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 06f7eb8..5819ae0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -606,10 +606,9 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 return 0;
 }
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
 xenmem_access_t *access)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 void *i;
 unsigned int index;
 
@@ -1471,7 +1470,7 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
  * we indeed found a conflicting mem_access setting.
  */
 static struct page_info*
-p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
+p2m_mem_access_check_and_get_page(struct vcpu *v, vaddr_t gva, unsigned long flag)
 {
 long rc;
 paddr_t ipa;
@@ -1480,7 +1479,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
 xenmem_access_t xma;
 p2m_type_t t;
 struct page_info *page = NULL;
-struct p2m_domain *p2m = p2m_get_hostp2m(current->domain);
+struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
 
rc = gva_to_ipa(gva, &ipa, flag);
 if ( rc < 0 )
@@ -1492,7 +1491,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
  * We do this first as this is faster in the default case when no
  * permission is set on the page.
  */
-rc = __p2m_get_mem_access(current->domain, gfn, &xma);
+rc = __p2m_get_mem_access(p2m, gfn, &xma);
 if ( rc < 0 )
 goto err;
 
@@ -1556,7 +1555,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
 
 page = mfn_to_page(mfn_x(mfn));
 
-if ( unlikely(!get_page(page, current->domain)) )
+if ( unlikely(!get_page(page, v->domain)) )
 page = NULL;
 
 err:
@@ -1614,7 +1613,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 
 err:
 if ( !page && p2m->mem_access_enabled )
-page = p2m_mem_access_check_and_get_page(va, flags);
+page = p2m_mem_access_check_and_get_page(v, va, flags);
 
 p2m_read_unlock(p2m);
 
@@ -1927,7 +1926,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
 struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
 p2m_read_lock(p2m);
-ret = __p2m_get_mem_access(d, gfn, access);
+ret = __p2m_get_mem_access(p2m, gfn, access);
 p2m_read_unlock(p2m);
 
 return ret;
-- 
2.9.0




[Xen-devel] [PATCH v3 15/38] arm/p2m: Add altp2m table flushing routine

2016-08-16 Thread Sergej Proskurin
The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the function "altp2m_flush",
which releases all of the alternate p2m views.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Pages in p2m->pages are not cleared in p2m_flush_table anymore.
VMID is freed in p2m_free_one.
Cosmetic fixes.

v3: Changed the locking mechanism to "p2m_write_lock" inside the
function "altp2m_flush".

Do not flush but rather tear down the altp2m in the function
"altp2m_flush".

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_flush".
---
 xen/arch/arm/altp2m.c| 31 +++
 xen/include/asm-arm/altp2m.h |  3 +++
 2 files changed, 34 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 66a373a..02cffd7 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -34,6 +34,37 @@ int altp2m_init(struct domain *d)
 return 0;
 }
 
+void altp2m_flush(struct domain *d)
+{
+unsigned int i;
+struct p2m_domain *p2m;
+
+/*
+ * If altp2m is active, we are not allowed to flush altp2m[0]. This special
+ * view is considered the hostp2m as long as altp2m is active.
+ */
+ASSERT(!altp2m_active(d));
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_p2m[i] == NULL )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+
+p2m_write_lock(p2m);
+p2m_teardown_one(p2m);
+p2m_write_unlock(p2m);
+
+xfree(p2m);
+d->arch.altp2m_p2m[i] = NULL;
+}
+
+altp2m_unlock(d);
+}
+
 void altp2m_teardown(struct domain *d)
 {
 unsigned int i;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a156109..4c15b75 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -42,4 +42,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 int altp2m_init(struct domain *d);
 void altp2m_teardown(struct domain *d);
 
+/* Flush all the alternate p2m's for a domain. */
+void altp2m_flush(struct domain *d);
+
 #endif /* __ASM_ARM_ALTP2M_H */
-- 
2.9.0




[Xen-devel] [PATCH v3 37/38] arm/p2m: Extend xen-access for altp2m on ARM

2016-08-16 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
Acked-by: Razvan Cojocaru 
---
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
Cc: Ian Jackson 
Cc: Wei Liu 
---
 tools/tests/xen-access/xen-access.c | 27 +--
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index ebb63b1..eafd7d6 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -337,8 +337,9 @@ void usage(char* progname)
 {
 fprintf(stderr, "Usage: %s [-m]  write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cpuid");
+fprintf(stderr, "|breakpoint|debug|cpuid");
 #endif
+fprintf(stderr, "|altp2m_write|altp2m_exec");
 fprintf(stderr,
 "\n"
"Logs first page writes, execs, or breakpoint traps that occur on the domain.\n"
@@ -411,6 +412,15 @@ int main(int argc, char *argv[])
 {
 breakpoint = 1;
 }
+else if ( !strcmp(argv[0], "debug") )
+{
+debug = 1;
+}
+else if ( !strcmp(argv[0], "cpuid") )
+{
+cpuid = 1;
+}
+#endif
 else if ( !strcmp(argv[0], "altp2m_write") )
 {
 default_access = XENMEM_access_rx;
@@ -423,15 +433,6 @@ int main(int argc, char *argv[])
 altp2m = 1;
 memaccess = 1;
 }
-else if ( !strcmp(argv[0], "debug") )
-{
-debug = 1;
-}
-else if ( !strcmp(argv[0], "cpuid") )
-{
-cpuid = 1;
-}
-#endif
 else
 {
 usage(argv[0]);
@@ -504,12 +505,14 @@ int main(int argc, char *argv[])
 goto exit;
 }
 
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep( xch, domain_id, 1 );
 if ( rc < 0 )
 {
 ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
 goto exit;
 }
+#endif
 }
 
 if ( memaccess && !altp2m )
@@ -583,7 +586,9 @@ int main(int argc, char *argv[])
 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
 } else {
rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 
START_PFN,
@@ -773,9 +778,11 @@ int main(int argc, char *argv[])
 exit:
 if ( altp2m )
 {
+#if defined(__i386__) || defined(__x86_64__)
 uint32_t vcpu_id;
 for ( vcpu_id = 0; vcpu_id

[Xen-devel] [PATCH v3 00/38] arm/altp2m: Introducing altp2m to ARM

2016-08-16 Thread Sergej Proskurin
Hello all,

The following patch series can be found on Github[0] and is part of my
contribution to this year's Google Summer of Code (GSoC)[1]. My project is
managed by the organization The Honeynet Project. As part of GSoC, I am being
supervised by the Xen developer Tamas K. Lengyel , George
D. Webster, and Steven Maresca.

In this patch series, we provide an implementation of the altp2m subsystem for
ARM. Our implementation is based on the altp2m subsystem for x86, providing
additional --alternate-- views on the guest's physical memory by means of the
ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM and
modify xen-access to test the suggested functionality.

To be more precise, altp2m allows creating and switching to additional p2m views
(i.e. gfn to mfn mappings). These views can be manipulated and activated at
will through the provided HVMOPs. In this way, the active guest instance in
question can seamlessly proceed execution without noticing that anything has
changed. The prime field of application for altp2m is Virtual Machine
Introspection, where guest systems are analyzed from outside the VM.

Altp2m can be activated by means of the guest control parameter "altp2m" on x86
and ARM architectures. For use cases requiring purely external access to
altp2m, this patch series allows specifying whether the altp2m interface should
be external-only.

The current code-base is based on Julien Grall's branch p2m-rfc[2].

Please note: To work properly, the provided patch must include the fix that has
been presented in [3]. The fix makes sure that the flag p2m->mem_access_enabled
is considered during the manipulation of (alt)p2m entries.

Best regards,
~Sergej

[0] https://github.com/sergej-proskurin/xen (branch arm-altp2m-v3)
[1] https://summerofcode.withgoogle.com/projects/#4970052843470848
[2] git://xenbits.xen.org/people/julieng/xen-unstable.git (branch p2m-rfc)
[3] https://lists.xenproject.org/archives/html/xen-devel/2016-08/msg01870.html

Sergej Proskurin (37):
  arm/p2m: Cosmetic fixes - apply p2m_get_hostp2m
  arm/p2m: Expose p2m_*lock helpers
  arm/p2m: Introduce p2m_(switch|restore)_vttbr_and_(g|s)et_flags
  arm/p2m: Add first altp2m HVMOP stubs
  arm/p2m: Add hvm_allow_(set|get)_param
  arm/p2m: Add HVMOP_altp2m_get_domain_state
  arm/p2m: Introduce p2m_is_(hostp2m|altp2m)
  arm/p2m: Free p2m entries only in the hostp2m
  arm/p2m: Add backpointer to the domain in p2m_domain
  arm/p2m: Move hostp2m init/teardown to individual functions
  arm/p2m: Cosmetic fix - function prototype of p2m_alloc_table
  arm/p2m: Rename parameter in p2m_alloc_vmid
  arm/p2m: Change func prototype and impl of p2m_(alloc|free)_vmid
  arm/p2m: Add altp2m init/teardown routines
  arm/p2m: Add altp2m table flushing routine
  arm/p2m: Add HVMOP_altp2m_set_domain_state
  arm/p2m: Add HVMOP_altp2m_create_p2m
  arm/p2m: Add HVMOP_altp2m_destroy_p2m
  arm/p2m: Add HVMOP_altp2m_switch_p2m
  arm/p2m: Add p2m_get_active_p2m macro
  arm/p2m: Make p2m_restore_state ready for altp2m
  arm/p2m: Make get_page_from_gva ready for altp2m
  arm/p2m: Cosmetic fixes -- __p2m_get_mem_access
  arm/p2m: Make p2m_mem_access_check ready for altp2m
  arm/p2m: Cosmetic fixes - function prototypes
  arm/p2m: Introduce helpers managing altp2m entries
  arm/p2m: Introduce p2m_lookup_attr
  arm/p2m: Modify reference count only if hostp2m active
  arm/p2m: Add HVMOP_altp2m_set_mem_access
  arm/p2m: Add altp2m_propagate_change
  altp2m: Introduce altp2m_switch_vcpu_altp2m_by_id
  arm/p2m: Code movement in instr/data abort handlers
  arm/p2m: Add altp2m paging mechanism
  arm/p2m: Add HVMOP_altp2m_change_gfn
  arm/p2m: Adjust debug information to altp2m
  arm/p2m: Extend xen-access for altp2m on ARM
  arm/p2m: Add test of xc_altp2m_change_gfn

Tamas K Lengyel (1):
  altp2m: Allow specifying external-only use-case

 docs/man/xl.cfg.pod.5.in|  37 ++-
 tools/libxl/libxl.h |  10 +-
 tools/libxl/libxl_create.c  |   7 +-
 tools/libxl/libxl_dom.c |  30 +-
 tools/libxl/libxl_types.idl |  13 +
 tools/libxl/xl_cmdimpl.c|  25 +-
 tools/tests/xen-access/xen-access.c | 183 ++-
 xen/arch/arm/Makefile   |   1 +
 xen/arch/arm/altp2m.c   | 627 
 xen/arch/arm/hvm.c  | 210 +++-
 xen/arch/arm/p2m.c  | 452 +++---
 xen/arch/arm/traps.c|  65 +++-
 xen/arch/x86/hvm/hvm.c  |  20 +-
 xen/arch/x86/mm/altp2m.c|   6 +
 xen/arch/x86/mm/p2m.c   |   6 -
 xen/common/vm_event.c   |   3 +-
 xen/include/asm-arm/altp2m.h|  77 -
 xen/include/asm-arm/domain.h|  16 +
 xen/include/asm-arm/p2m.h   |  85 -
 xen/include/asm-x86/altp2m.h|   3 +
 xen/include/asm-x86/p2m.h   |   3 -
 

[Xen-devel] [PATCH v3 27/38] arm/p2m: Introduce p2m_lookup_attr

2016-08-16 Thread Sergej Proskurin
The function "p2m_lookup_attr" looks up the MFN, memory type, access
rights, and page order corresponding to a domain's GFN.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Change function prototype of "p2m_lookup_attr" by removing the
function parameter "unsigned int *mattr", as it is not needed by the
callers.

Change function prototype of "p2m_lookup_attr" by changing the
parameter of type xenmem_access_t to p2m_access_t.
---
 xen/arch/arm/p2m.c| 15 +++
 xen/include/asm-arm/p2m.h | 10 ++
 2 files changed, 25 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1d3df0f..cef05ed 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -429,6 +429,21 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 return ret;
 }
 
+mfn_t p2m_lookup_attr(struct p2m_domain *p2m,
+  gfn_t gfn,
+  p2m_type_t *t,
+  p2m_access_t *a,
+  unsigned int *page_order)
+{
+mfn_t ret;
+
+p2m_read_lock(p2m);
+ret = p2m_get_entry(p2m, gfn, t, a, page_order);
+p2m_read_unlock(p2m);
+
+return ret;
+}
+
 int guest_physmap_mark_populate_on_demand(struct domain *d,
   unsigned long gfn,
   unsigned int order)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e02f69e..384ef3b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -196,6 +196,16 @@ void p2m_dump_info(struct domain *d);
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
+/*
+ * Lookup the MFN, memory type, access rights, and page table level
+ * corresponding to a domain's GFN.
+ */
+mfn_t p2m_lookup_attr(struct p2m_domain *p2m,
+  gfn_t gfn,
+  p2m_type_t *t,
+  p2m_access_t *a,
+  unsigned int *page_order);
+
 /* Remove an altp2m view's entry. */
 int remove_altp2m_entry(struct p2m_domain *p2m,
 gfn_t gfn,
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v3 03/38] arm/p2m: Introduce p2m_(switch|restore)_vttbr_and_(g|s)et_flags

2016-08-16 Thread Sergej Proskurin
This commit introduces macros for switching and restoring the vttbr
considering the currently set irq flags. We define these macros, as the
following commits will use the associated functionality multiple times
throughout the file ./xen/arch/arm/p2m.c.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 37 +++--
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 08114d8..02e9ee7 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -27,6 +27,26 @@ static unsigned int __read_mostly p2m_root_level;
 
 #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
 
+#define p2m_switch_vttbr_and_get_flags(ovttbr, nvttbr, flags)       \
+({                                                                  \
+    if ( ovttbr != nvttbr )                                         \
+    {                                                               \
+        local_irq_save(flags);                                      \
+        WRITE_SYSREG64(nvttbr, VTTBR_EL2);                          \
+        isb();                                                      \
+    }                                                               \
+})
+
+#define p2m_restore_vttbr_and_set_flags(ovttbr, flags)              \
+({                                                                  \
+    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )                       \
+    {                                                               \
+        WRITE_SYSREG64(ovttbr, VTTBR_EL2);                          \
+        isb();                                                      \
+        local_irq_restore(flags);                                   \
+    }                                                               \
+})
+
-    if ( ovttbr != p2m->vttbr )
-    {
-        local_irq_save(flags);
-        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
-        isb();
-    }
+    p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
 
 flush_tlb();
 
-if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
-{
-WRITE_SYSREG64(ovttbr, VTTBR_EL2);
-isb();
-local_irq_restore(flags);
-}
+p2m_restore_vttbr_and_set_flags(ovttbr, flags);
 }
 
 /*
-- 
2.9.0




[Xen-devel] [PATCH v3 02/38] arm/p2m: Expose p2m_*lock helpers

2016-08-16 Thread Sergej Proskurin
This commit exposes the "p2m_*lock" helpers, as they will be used within
the file ./xen/arch/arm/altp2m.c, as will be shown in the following
commits.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c| 12 ++--
 xen/include/asm-arm/p2m.h | 16 
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index da6c7d4..08114d8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -62,14 +62,14 @@ static inline bool_t p2m_is_superpage(lpae_t pte, unsigned int level)
 return (level < 3) && p2m_mapping(pte);
 }
 
-static inline void p2m_write_lock(struct p2m_domain *p2m)
+void p2m_write_lock(struct p2m_domain *p2m)
 {
write_lock(&p2m->lock);
 }
 
 static void p2m_flush_tlb(struct p2m_domain *p2m);
 
-static inline void p2m_write_unlock(struct p2m_domain *p2m)
+void p2m_write_unlock(struct p2m_domain *p2m)
 {
 if ( p2m->need_flush )
 {
@@ -85,22 +85,22 @@ static inline void p2m_write_unlock(struct p2m_domain *p2m)
write_unlock(&p2m->lock);
 }
 
-static inline void p2m_read_lock(struct p2m_domain *p2m)
+void p2m_read_lock(struct p2m_domain *p2m)
 {
read_lock(&p2m->lock);
 }
 
-static inline void p2m_read_unlock(struct p2m_domain *p2m)
+void p2m_read_unlock(struct p2m_domain *p2m)
 {
read_unlock(&p2m->lock);
 }
 
-static inline int p2m_is_locked(struct p2m_domain *p2m)
+int p2m_is_locked(struct p2m_domain *p2m)
 {
return rw_is_locked(&p2m->lock);
 }
 
-static inline int p2m_is_write_locked(struct p2m_domain *p2m)
+int p2m_is_write_locked(struct p2m_domain *p2m)
 {
return rw_is_write_locked(&p2m->lock);
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e6be3ea..eae31c1 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -216,6 +216,22 @@ void guest_physmap_remove_page(struct domain *d,
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 
 /*
+ * P2M rwlock helpers.
+ */
+
+void p2m_write_lock(struct p2m_domain *p2m);
+
+void p2m_write_unlock(struct p2m_domain *p2m);
+
+void p2m_read_lock(struct p2m_domain *p2m);
+
+void p2m_read_unlock(struct p2m_domain *p2m);
+
+int p2m_is_locked(struct p2m_domain *p2m);
+
+int p2m_is_write_locked(struct p2m_domain *p2m);
+
+/*
  * Populate-on-demand
  */
 
-- 
2.9.0




[Xen-devel] [PATCH v3 04/38] arm/p2m: Add first altp2m HVMOP stubs

2016-08-16 Thread Sergej Proskurin
This commit moves the altp2m-related code from x86 to ARM. Functions
that are not yet supported notify the caller or print a BUG message
stating their absence.

Also, the struct arch_domain is extended with the altp2m_active
attribute, representing the current altp2m activity configuration of the
domain.
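
As a rough standalone sketch (not part of this patch), the sanity checks at
the top of "do_altp2m_op" can be modelled as follows. The struct layout and
the enum values below are illustrative placeholders, not the actual Xen ABI:

```c
#include <errno.h>
#include <stdint.h>

/*
 * Hypothetical, simplified model of the checks at the top of
 * do_altp2m_op(): reject mismatched interface versions, non-zero
 * padding, and commands outside the supported range. All values are
 * placeholders, not the real Xen ABI.
 */
enum { ALTP2M_get_domain_state = 1, ALTP2M_change_gfn = 8 };
#define ALTP2M_INTERFACE_VERSION 1

struct altp2m_op {
    uint32_t version;
    uint32_t cmd;
    uint16_t pad1;
    uint32_t pad2;
};

static int validate_op(const struct altp2m_op *a)
{
    if ( a->pad1 || a->pad2 ||
         (a->version != ALTP2M_INTERFACE_VERSION) ||
         (a->cmd < ALTP2M_get_domain_state) ||
         (a->cmd > ALTP2M_change_gfn) )
        return -EINVAL;
    return 0;
}

/* Convenience wrapper so the check is easy to exercise. */
static int check(uint32_t ver, uint32_t cmd, uint16_t p1, uint32_t p2)
{
    struct altp2m_op a = { ver, cmd, p1, p2 };
    return validate_op(&a);
}
```

Rejecting unknown commands and non-zero padding up front keeps the switch
statement below free of defensive checks.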

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
Removed not used altp2m helper stubs in altp2m.h.

v3: Cosmetic fixes.

Added domain lock in "do_altp2m_op" to avoid concurrent execution of
altp2m-related HVMOPs.

Added check making sure that HVM_PARAM_ALTP2M is set before
execution of altp2m-related HVMOPs.
---
 xen/arch/arm/hvm.c   | 89 
 xen/include/asm-arm/altp2m.h |  4 +-
 xen/include/asm-arm/domain.h |  3 ++
 3 files changed, 94 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index d999bde..45d51c6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,91 @@
 
 #include 
 
+#include 
+
+static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+struct xen_hvm_altp2m_op a;
+struct domain *d = NULL;
+int rc = 0;
+
+if ( copy_from_guest(&a, arg, 1) )
+return -EFAULT;
+
+if ( a.pad1 || a.pad2 ||
+ (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
+ (a.cmd < HVMOP_altp2m_get_domain_state) ||
+ (a.cmd > HVMOP_altp2m_change_gfn) )
+return -EINVAL;
+
+d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
+
+if ( d == NULL )
+return -ESRCH;
+
+/* Prevent concurrent execution of the following HVMOPs. */
+domain_lock(d);
+
+if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
+ (a.cmd != HVMOP_altp2m_set_domain_state) &&
+ !altp2m_active(d) )
+{
+rc = -EOPNOTSUPP;
+goto out;
+}
+
+if ( !(d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+{
+rc = -EINVAL;
+goto out;
+}
+
+if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
+goto out;
+
+switch ( a.cmd )
+{
+case HVMOP_altp2m_get_domain_state:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_set_domain_state:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_vcpu_enable_notify:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_create_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_destroy_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_switch_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_set_mem_access:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_change_gfn:
+rc = -EOPNOTSUPP;
+break;
+}
+
+out:
+domain_unlock(d);
+rcu_unlock_domain(d);
+
+return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
@@ -80,6 +165,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 rc = -EINVAL;
 break;
 
+case HVMOP_altp2m:
+rc = do_altp2m_op(arg);
+break;
+
 default:
 {
 gdprintk(XENLOG_DEBUG, "HVMOP op=%lu: not implemented\n", op);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a..0711796 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
  * Alternate p2m
  *
  * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin .
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-/* Not implemented on ARM. */
-return 0;
+return d->arch.altp2m_active;
 }
 
 /* Alternate p2m VCPU */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9452fcd..cc4bda0 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -126,6 +126,9 @@ struct arch_domain
 paddr_t efi_acpi_gpa;
 paddr_t efi_acpi_len;
 #endif
+
+/* altp2m: allow multiple copies of host p2m */
+bool_t altp2m_active;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.9.0




[Xen-devel] [PATCH v3 28/38] arm/p2m: Modify reference count only if hostp2m active

2016-08-16 Thread Sergej Proskurin
This commit makes sure that the page reference count is updated through
the function "p2m_put_l3_page" only if the entries have been freed from
the host's p2m.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index cef05ed..df2b85b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -754,7 +754,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
 if ( !p2m_valid(entry) || p2m_is_superpage(entry, level) )
 return;
 
-if ( level == 3 )
+if ( level == 3 && p2m_is_hostp2m(p2m) )
 {
 p2m_put_l3_page(_mfn(entry.p2m.base), entry.p2m.type);
 return;
-- 
2.9.0




[Xen-devel] [PATCH v3 26/38] arm/p2m: Introduce helpers managing altp2m entries

2016-08-16 Thread Sergej Proskurin
This commit introduces the following functions:
* remove_altp2m_entry
* modify_altp2m_entry

These functions are responsible for managing an altp2m view's entries
and their attributes.
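
As a rough standalone sketch (not part of this patch), the gfn/mfn
alignment these helpers perform, masking a frame number down to the first
frame of its 2^page_order-sized block, can be illustrated in isolation:

```c
#include <stdint.h>

/*
 * Editorial sketch of the masking used by remove_altp2m_entry() and
 * modify_altp2m_entry(): align a frame number down to the start of
 * its 2^order-sized block, i.e. frame & ~((1UL << order) - 1).
 */
static uint64_t align_to_order(uint64_t frame, unsigned int order)
{
    return frame & ~((UINT64_C(1) << order) - 1);
}
```

For order 9 (a 2 MiB superpage of 4 KiB frames) this clears the low nine
bits, so both gfn and mfn then address the superpage boundary.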

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Changed the function prototype of "modify_altp2m_entry" and
"remove_altp2m_entry" to hold arguments of type gfn_t/mfn_t instead
of addresses.

Remove the argument of type "struct domain*" from the function's
prototypes.

Remove the function "modify_altp2m_range".
---
 xen/arch/arm/p2m.c| 36 
 xen/include/asm-arm/p2m.h | 14 ++
 2 files changed, 50 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ca5ae97..1d3df0f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1165,6 +1165,42 @@ void guest_physmap_remove_page(struct domain *d,
 p2m_remove_mapping(p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
 }
 
+int remove_altp2m_entry(struct p2m_domain *ap2m,
+gfn_t gfn,
+mfn_t mfn,
+unsigned int page_order)
+{
+ASSERT(p2m_is_altp2m(ap2m));
+
+/* Align the gfn and mfn to the given page order. */
+gfn = _gfn(gfn_x(gfn) & ~((1UL << page_order)-1));
+mfn = _mfn(mfn_x(mfn) & ~((1UL << page_order)-1));
+
+return p2m_remove_mapping(ap2m, gfn, (1UL << page_order), mfn);
+}
+
+int modify_altp2m_entry(struct p2m_domain *ap2m,
+gfn_t gfn,
+mfn_t mfn,
+p2m_type_t t,
+p2m_access_t a,
+unsigned int page_order)
+{
+int rc;
+
+ASSERT(p2m_is_altp2m(ap2m));
+
+/* Align the gfn and mfn to the given page order. */
+gfn = _gfn(gfn_x(gfn) & ~((1UL << page_order)-1));
+mfn = _mfn(mfn_x(mfn) & ~((1UL << page_order)-1));
+
+p2m_write_lock(ap2m);
+rc = p2m_set_entry(ap2m, gfn, (1UL << page_order), mfn, t, a);
+p2m_write_unlock(ap2m);
+
+return rc;
+}
+
 static int p2m_alloc_table(struct p2m_domain *p2m)
 {
 struct page_info *page;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 978125a..e02f69e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -196,6 +196,20 @@ void p2m_dump_info(struct domain *d);
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
+/* Remove an altp2m view's entry. */
+int remove_altp2m_entry(struct p2m_domain *p2m,
+gfn_t gfn,
+mfn_t mfn,
+unsigned int page_order);
+
+/* Modify an altp2m view's entry or its attributes. */
+int modify_altp2m_entry(struct p2m_domain *p2m,
+gfn_t gfn,
+mfn_t mfn,
+p2m_type_t t,
+p2m_access_t a,
+unsigned int page_order);
+
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
-- 
2.9.0




[Xen-devel] [PATCH v3 30/38] arm/p2m: Add altp2m_propagate_change

2016-08-16 Thread Sergej Proskurin
This commit introduces the function "altp2m_propagate_change" that is
responsible for propagating changes applied to the host's p2m to a
specific altp2m view or to all altp2m views. In this way, Xen can
increase or decrease the guest's physmem at run-time without leaving the
altp2m views with stale/invalid entries.
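
As a rough standalone sketch (not part of this patch), the reset heuristic
described below, flush the first affected view individually, but fall back
to flushing every view once a second one is affected, can be modelled like
this. The struct and function names are invented for the illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Editorial model of the reset heuristic in altp2m_propagate_change():
 * when a page is dropped from the host p2m and its gfn falls into a
 * view's mapped range, that view is flushed; if a second view is also
 * affected, all remaining views are flushed wholesale rather than
 * tracked individually.
 */
struct view {
    bool present;
    bool flushed;
    uint64_t lo, hi; /* mapped gfn range */
};

static void propagate_drop(struct view *v, unsigned int n, uint64_t gfn)
{
    unsigned int resets = 0, last = ~0u;

    for ( unsigned int i = 0; i < n; i++ )
    {
        if ( !v[i].present || gfn < v[i].lo || gfn > v[i].hi )
            continue;

        if ( !resets++ )
        {
            v[i].flushed = true; /* first affected view: flush just it */
            last = i;
        }
        else
        {
            /* At least two views impacted: flush everything else too. */
            for ( unsigned int j = 0; j < n; j++ )
                if ( v[j].present && j != last )
                    v[j].flushed = true;
            return;
        }
    }
}

/* Self-checks: one affected view vs. two affected views. */
static bool demo_single(void)
{
    struct view v[2] = { { true, false, 0, 100 }, { true, false, 200, 300 } };
    propagate_drop(v, 2, 250);
    return !v[0].flushed && v[1].flushed;
}

static bool demo_multi(void)
{
    struct view v[3] = { { true, false, 0, 100 },
                         { true, false, 50, 60 },
                         { true, false, 200, 300 } };
    propagate_drop(v, 3, 55);
    return v[0].flushed && v[1].flushed && v[2].flushed;
}
```

The wholesale reset trades precision for bounded bookkeeping: tracking
per-view dirty state across many views would cost more than re-populating
them lazily.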

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Cosmetic fixes.

Changed the locking mechanism to "p2m_write_lock" inside the
function "altp2m_reset".

Removed TLB flushing and resetting of the max_mapped_gfn
lowest_mapped_gfn fields within the function "altp2m_reset". These
operations are performed in the function "p2m_flush_table".

Protected altp2m_active(d) check in "altp2m_propagate_change".

The function "altp2m_propagate_change" now decides whether an entry
needs to be dropped out of the altp2m view only if the smfn value
equals INVALID_MFN.

Extended the function "altp2m_propagate_change" so that it returns
an int value to the caller. Also, the function "apply_p2m_changes"
checks the return value and fails the entire operation on error.

Moved the function "modify_altp2m_range" out of this commit.
---
 xen/arch/arm/altp2m.c| 74 
 xen/arch/arm/p2m.c   |  4 +++
 xen/include/asm-arm/altp2m.h |  8 +
 3 files changed, 86 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 03b8ce5..b10711e 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -133,6 +133,80 @@ out:
 return rc;
 }
 
+static inline void altp2m_reset(struct p2m_domain *p2m)
+{
+p2m_write_lock(p2m);
+p2m_flush_table(p2m);
+p2m_write_unlock(p2m);
+}
+
+int altp2m_propagate_change(struct domain *d,
+gfn_t sgfn,
+unsigned int page_order,
+mfn_t smfn,
+p2m_type_t p2mt,
+p2m_access_t p2ma)
+{
+int rc = 0;
+unsigned int i;
+unsigned int reset_count = 0;
+unsigned int last_reset_idx = ~0;
+struct p2m_domain *p2m;
+mfn_t m;
+
+altp2m_lock(d);
+
+if ( !altp2m_active(d) )
+goto out;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_p2m[i] == NULL )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+
+/*
+ * Get the altp2m mapping. If the smfn has not been dropped, a valid
+ * altp2m mapping needs to be changed/modified accordingly.
+ */
+m = p2m_lookup_attr(p2m, sgfn, NULL, NULL, NULL);
+
+/* Check for a dropped page that may impact this altp2m. */
+if ( mfn_eq(smfn, INVALID_MFN) &&
+ (gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn)) &&
+ (gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn)) )
+{
+if ( !reset_count++ )
+{
+altp2m_reset(p2m);
+last_reset_idx = i;
+}
+else
+{
+/* At least 2 altp2m's impacted, so reset everything. */
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( i == last_reset_idx ||
+ d->arch.altp2m_p2m[i] == NULL )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+altp2m_reset(p2m);
+}
+goto out;
+}
+}
+else if ( !mfn_eq(m, INVALID_MFN) )
+rc = modify_altp2m_entry(p2m, sgfn, smfn, p2mt, p2ma, page_order);
+}
+
+out:
+altp2m_unlock(d);
+
+return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
struct altp2mvcpu *av = &altp2m_vcpu(v);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8dee02187..dea3038 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1033,6 +1033,10 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 
 rc = 0;
 
+/* Update all affected altp2m views if necessary. */
+if ( p2m_is_hostp2m(p2m) )
+rc = altp2m_propagate_change(p2m->domain, sgfn, page_order, smfn, t, a);
+
 out:
 unmap_domain_page(table);
 
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 3e4c36d..7f385d9 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -83,4 +83,12 @@ int altp2m_set_mem_access(struct domain *d,
   p2m_access_t a,
   gfn_t gfn);
 
+/* Propagates changes made to hostp2m to affected altp2m views. */
+int altp2m_propagate_change(struct domain *d,
+gfn_t sgfn,
+unsigned int page_order,
+mfn_t smfn,
+p2m_type_t p2mt,
+p2m_access_t p2ma);
+
#endif /* __ASM_ARM_ALTP2M_H */

[Xen-devel] [PATCH v3 12/38] arm/p2m: Rename parameter in p2m_alloc_vmid

2016-08-16 Thread Sergej Proskurin
This commit does not change or introduce any additional functionality;
it is a preparatory part of the following commit, which alters the
functionality of the function "p2m_alloc_vmid".

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index dd5d700..a295fdc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1208,24 +1208,24 @@ static int p2m_alloc_vmid(struct domain *d)
 {
 struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-int rc, nr;
+int rc, vmid;
 
spin_lock(&vmid_alloc_lock);
 
-nr = find_first_zero_bit(vmid_mask, MAX_VMID);
+vmid = find_first_zero_bit(vmid_mask, MAX_VMID);
 
-ASSERT(nr != INVALID_VMID);
+ASSERT(vmid != INVALID_VMID);
 
-if ( nr == MAX_VMID )
+if ( vmid == MAX_VMID )
 {
 rc = -EBUSY;
 printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n", d->domain_id);
 goto out;
 }
 
-set_bit(nr, vmid_mask);
+set_bit(vmid, vmid_mask);
 
-p2m->vmid = nr;
+p2m->vmid = vmid;
 
 rc = 0;
 
-- 
2.9.0




[Xen-devel] [PATCH v3 21/38] arm/p2m: Make p2m_restore_state ready for altp2m

2016-08-16 Thread Sergej Proskurin
This commit adapts the function "p2m_restore_state" in a way that the
currently active altp2m table is considered during state restoration.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Moved declaration of "altp2m_switch_domain_altp2m_by_id" out of this
patch.
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 12b3dcc..15abd39 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -171,7 +171,7 @@ void p2m_save_state(struct vcpu *p)
 void p2m_restore_state(struct vcpu *n)
 {
 register_t hcr;
-struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
+struct p2m_domain *p2m = p2m_get_active_p2m(n);
 
 if ( is_idle_vcpu(n) )
 return;
-- 
2.9.0




[Xen-devel] [PATCH v3 36/38] altp2m: Allow specifying external-only use-case

2016-08-16 Thread Sergej Proskurin
From: Tamas K Lengyel 

Currently setting altp2mhvm=1 in the domain configuration allows access to the
altp2m interface for both in-guest and external privileged tools. This poses
a problem for use-cases where only external access should be allowed, requiring
the user to compile Xen with XSM enabled to be able to appropriately restrict
access.

In this patch we deprecate the altp2mhvm domain configuration option and
introduce the altp2m option, which allows specifying if by default the altp2m
interface should be external-only. The information is stored in
HVM_PARAM_ALTP2M which we now define with specific XEN_ALTP2M_* modes.
If external_only mode is selected, the XSM check is shifted to use XSM_DM_PRIV
type check, thus restricting access to the interface by the guest itself. Note
that we keep the default XSM policy untouched. Users of XSM who wish to enforce
external_only mode for altp2m can do so by adjusting their XSM policy directly,
as this domain config option does not override an active XSM policy.

Also, as part of this patch we adjust the hvmop handler to require
HVM_PARAM_ALTP2M to be of a type other than disabled for all ops. This has been
previously only required for get/set altp2m domain state, all other options
were gated on altp2m_enabled. Since altp2m_enabled only gets set during set
altp2m domain state, this change introduces no new requirements to the other
ops but makes it more clear that it is required for all ops.
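
As a rough standalone sketch (not part of this patch), the access modes the
commit message describes can be modelled with a small predicate pair. The
enum names mirror the XEN_ALTP2M_* modes mentioned above, but the values
and helper names here are illustrative only:

```c
#include <stdbool.h>

/*
 * Editorial sketch of the HVM_PARAM_ALTP2M modes described above.
 * Values and helper names are invented for illustration.
 */
enum altp2m_mode {
    altp2m_disabled,   /* interface unavailable */
    altp2m_mixed,      /* guest and external tools may use it */
    altp2m_external,   /* only privileged external tools may use it */
};

static bool guest_may_use_altp2m(enum altp2m_mode m)
{
    return m == altp2m_mixed;
}

static bool external_tool_may_use_altp2m(enum altp2m_mode m)
{
    return m != altp2m_disabled;
}
```

In external-only mode the guest-side predicate fails while the tool-side
one succeeds, which is exactly the asymmetry the relaxed XSM_DM_PRIV check
encodes.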

Signed-off-by: Tamas K Lengyel 
Signed-off-by: Sergej Proskurin 
---
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Daniel De Graaf 

v2: Rename HVMALTP2M_* to XEN_ALTP2M_*
Relax xsm check to XSM_DM_PRIV for external-only mode

v3: Introduce macro LIBXL_HAVE_ARM_ALTP2M in parallel to the former
LIBXL_HAVE_ALTP2M to differentiate between altp2m for x86 and
altp2m for ARM architectures.

Document the option "altp2m" in ./docs/man/xl.cfg.pod.5.in.

Maintain the legacy info->u.hvm.altp2m field for x86 HVM domains in
parallel to the introduced info->altp2m field for x86 HVM and ARM
domains.
---
 docs/man/xl.cfg.pod.5.in| 37 -
 tools/libxl/libxl.h | 10 +-
 tools/libxl/libxl_create.c  |  7 +--
 tools/libxl/libxl_dom.c | 30 --
 tools/libxl/libxl_types.idl | 13 +
 tools/libxl/xl_cmdimpl.c| 25 -
 xen/arch/arm/hvm.c  | 14 +-
 xen/arch/x86/hvm/hvm.c  | 20 ++--
 xen/include/public/hvm/params.h | 10 +-
 xen/include/xsm/dummy.h | 14 +++---
 xen/include/xsm/xsm.h   |  6 +++---
 xen/xsm/flask/hooks.c   |  2 +-
 12 files changed, 162 insertions(+), 26 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 48c9c0d..bf9a48a 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -1268,6 +1268,37 @@ enabled by default and you should usually omit it. It 
may be necessary
 to disable the HPET in order to improve compatibility with guest
 Operating Systems (X86 only)
 
+=item 

[Xen-devel] [PATCH v3 07/38] arm/p2m: Introduce p2m_is_(hostp2m|altp2m)

2016-08-16 Thread Sergej Proskurin
This commit adds a p2m class to the struct p2m_domain to distinguish
between the host's original p2m and alternate p2m's. The need for this
functionality will be shown in the following commits.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/include/asm-arm/p2m.h | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index eae31c1..040ca13 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -18,6 +18,11 @@ struct domain;
 
 extern void memory_type_changed(struct domain *);
 
+typedef enum {
+p2m_host,
+p2m_alternate,
+} p2m_class_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
 /* Lock that protects updates to the p2m */
@@ -92,6 +97,9 @@ struct p2m_domain {
  * enough available bits to store this information.
  */
 struct radix_tree_root mem_access_settings;
+
+/* Choose between: host/alternate. */
+p2m_class_t p2m_class;
 };
 
 /*
@@ -303,6 +311,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-- 
2.9.0




[Xen-devel] [PATCH v3 13/38] arm/p2m: Change func prototype and impl of p2m_(alloc|free)_vmid

2016-08-16 Thread Sergej Proskurin
This commit changes the prototype and implementation of the functions
"p2m_alloc_vmid" and "p2m_free_vmid". The function "p2m_alloc_vmid" does
not expect the struct domain as argument anymore and returns an
allocated vmid. The function "p2m_free_vmid" takes only the vmid that is
to be freed as argument.
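
As a rough standalone sketch (not part of this patch), the allocator
behaviour after this change can be modelled as below. A plain array stands
in for the bitmap, the spinlock is omitted, and all names are invented for
the illustration:

```c
#include <stdint.h>
#include <string.h>

/*
 * Editorial model of the reworked VMID allocator: it hands out the
 * first free VMID, or INVALID_VMID when the pool is exhausted, and
 * the caller stores the result itself.
 */
#define MAX_VMID     256
#define INVALID_VMID 0

static unsigned char vmid_in_use[MAX_VMID];

static void vmid_allocator_init(void)
{
    memset(vmid_in_use, 0, sizeof(vmid_in_use));
    vmid_in_use[INVALID_VMID] = 1; /* never hand out the invalid VMID */
}

static uint8_t vmid_alloc(void)
{
    /* Stands in for find_first_zero_bit() over the VMID bitmap. */
    for ( unsigned int v = 0; v < MAX_VMID; v++ )
    {
        if ( !vmid_in_use[v] )
        {
            vmid_in_use[v] = 1;
            return (uint8_t)v;
        }
    }
    return INVALID_VMID; /* pool exhausted */
}

static void vmid_free(uint8_t vmid)
{
    if ( vmid != INVALID_VMID )
        vmid_in_use[vmid] = 0;
}
```

Returning the VMID instead of writing it into a p2m passed by the caller
is what lets altp2m views, which are not tied to one host p2m per domain,
share the same allocator.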

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Changed function prototypes and implementation of the functions
"p2m_alloc_vmid" and "p2m_free_vmid".

Changes in "p2m_alloc_vmid":
This function does not expect any arguments. Also, in this commit,
the function "p2m_alloc_vmid" returns either the successfully
allocated vmid or the value INVALID_VMID. Thus, it is now the
responsibility of the caller to set the returned vmid in the
associated fields.

Changes in "p2m_free_vmid":
This function expects now only the vmid of type uint8_t.
---
 xen/arch/arm/p2m.c | 35 ---
 1 file changed, 12 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a295fdc..23ceb96 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1204,11 +1204,9 @@ void p2m_vmid_allocator_init(void)
 set_bit(INVALID_VMID, vmid_mask);
 }
 
-static int p2m_alloc_vmid(struct domain *d)
+static uint8_t p2m_alloc_vmid(void)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-int rc, vmid;
+uint8_t vmid;
 
spin_lock(&vmid_alloc_lock);
 
@@ -1218,28 +1216,23 @@ static int p2m_alloc_vmid(struct domain *d)
 
 if ( vmid == MAX_VMID )
 {
-rc = -EBUSY;
-printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n", d->domain_id);
+vmid = INVALID_VMID;
+printk(XENLOG_ERR "p2m.c: VMID pool exhausted\n");
 goto out;
 }
 
 set_bit(vmid, vmid_mask);
 
-p2m->vmid = vmid;
-
-rc = 0;
-
 out:
spin_unlock(&vmid_alloc_lock);
-return rc;
+return vmid;
 }
 
-static void p2m_free_vmid(struct domain *d)
+static void p2m_free_vmid(uint8_t vmid)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
spin_lock(&vmid_alloc_lock);
-if ( p2m->vmid != INVALID_VMID )
-clear_bit(p2m->vmid, vmid_mask);
+if ( vmid != INVALID_VMID )
+clear_bit(vmid, vmid_mask);
 
spin_unlock(&vmid_alloc_lock);
 }
@@ -1282,7 +1275,7 @@ void p2m_teardown_one(struct p2m_domain *p2m)
 
 p2m->root = NULL;
 
-p2m_free_vmid(p2m->domain);
+p2m_free_vmid(p2m->vmid);
 
 p2m->vttbr = INVALID_VTTBR;
 
@@ -1291,16 +1284,12 @@ void p2m_teardown_one(struct p2m_domain *p2m)
 
 int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 {
-int rc = 0;
-
 rwlock_init(>lock);
 INIT_PAGE_LIST_HEAD(>pages);
 
-p2m->vmid = INVALID_VMID;
-
-rc = p2m_alloc_vmid(d);
-if ( rc != 0 )
-return rc;
+p2m->vmid = p2m_alloc_vmid();
+if ( p2m->vmid == INVALID_VMID )
+return -EBUSY;
 
 p2m->max_mapped_gfn = _gfn(0);
 p2m->lowest_mapped_gfn = INVALID_GFN;
-- 
2.9.0




[Xen-devel] [PATCH v3 11/38] arm/p2m: Cosmetic fix - function prototype of p2m_alloc_table

2016-08-16 Thread Sergej Proskurin
The function "p2m_alloc_table" should be able to allocate 2nd stage
translation tables not only for the host's p2m but also for alternate
p2m's.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Removed altp2m table initialization from "p2m_table_init".

v3: Removed initialization of the field d->arch.altp2m_active in
"p2m_table_init" to avoid altp2m initialization throughout different
files.

Merged the function "p2m_alloc_table" and "p2m_table_init".
---
 xen/arch/arm/p2m.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9ef19d4..dd5d700 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1162,9 +1162,8 @@ void guest_physmap_remove_page(struct domain *d,
 p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
-static int p2m_alloc_table(struct domain *d)
+static int p2m_alloc_table(struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *page;
 unsigned int i;
 
@@ -1322,7 +1321,7 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 p2m->clean_pte = iommu_enabled &&
 !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-return p2m_alloc_table(d);
+return p2m_alloc_table(p2m);
 }
 
 static void p2m_teardown_hostp2m(struct domain *d)
-- 
2.9.0




[Xen-devel] [PATCH v3 20/38] arm/p2m: Add p2m_get_active_p2m macro

2016-08-16 Thread Sergej Proskurin
This commit introduces the macro "p2m_get_active_p2m" returning the
currently active (alt)p2m. The need for this macro will be shown in the
following commits.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 63c0df0..12b3dcc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -29,6 +29,9 @@ static unsigned int __read_mostly p2m_root_level;
 
 #define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
 
+#define p2m_get_active_p2m(v) (unlikely(altp2m_active((v)->domain)) ?  \
+                               altp2m_get_altp2m(v) :                  \
+                               p2m_get_hostp2m((v)->domain))
+
+
 #define p2m_switch_vttbr_and_get_flags(ovttbr, nvttbr, flags)   \
 ({  \
 if ( ovttbr != nvttbr ) \
-- 
2.9.0




[Xen-devel] [PATCH v3 08/38] arm/p2m: Free p2m entries only in the hostp2m

2016-08-16 Thread Sergej Proskurin
Freeing p2m entries of arbitrary p2m's (in particular of alternate
p2m's) will lead to unpredictable behavior, as the entries might still
be used within the host's p2m. The host's p2m should, however, free the
entries, as it is the main instance responsible for their management. If
entries were freed in the host's p2m but still reside in one or more of
the alternate p2m's, the change will be propagated to these altp2m
views, as will be shown in the following commits.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 02e9ee7..bfbccca 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1004,7 +1004,9 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
  * Free the entry only if the original pte was valid and the base
  * is different (to avoid freeing when permission is changed).
  */
-if ( p2m_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
+if ( p2m_valid(orig_pte) &&
+ entry->p2m.base != orig_pte.p2m.base &&
+ p2m_is_hostp2m(p2m) )
 p2m_free_entry(p2m, orig_pte, level);
 
 /* XXX: Flush iommu */
-- 
2.9.0




[Xen-devel] [PATCH v3 22/38] arm/p2m: Make get_page_from_gva ready for altp2m

2016-08-16 Thread Sergej Proskurin
The function get_page_from_gva uses ARM's hardware support to translate
gva's to machine addresses. This function is used, among others, for
memory-regulation purposes, e.g., within the context of memory ballooning.
To ensure correct behavior while altp2m is in use, we use the host's p2m
table for the associated gva-to-ma translation. This is required at this
point, as altp2m lazily copies pages from the host's p2m and even might
be flushed because of changes to the host's p2m (as is done within the
context of memory ballooning).
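The reasoning can be illustrated with a toy model (hypothetical Python, not Xen code): the active altp2m view is a lazily filled subset of the host p2m, so a translation through the altp2m view may fault on pages not copied yet, while the host p2m always has the full set of mappings.

```python
hostp2m = {0x1000: 0xAA000, 0x2000: 0xBB000}   # page address -> maddr (toy)
altp2m  = {0x1000: 0xAA000}                     # lazily copied subset; 0x2000 missing

def gvirt_to_maddr(table, gva):
    """Toy stage-2 walk through the given table; returns None on a fault."""
    return table.get(gva & ~0xFFF)

# Translating through the altp2m view can fault on not-yet-copied pages...
assert gvirt_to_maddr(altp2m, 0x2000) is None
# ...which is why get_page_from_gva temporarily switches the VTTBR to the
# host p2m, where every valid guest mapping is present.
assert gvirt_to_maddr(hostp2m, 0x2000) == 0xBB000
```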

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Cosmetic fixes.

Make use of the p2m_(switch|restore)_vttbr_and_(g|s)et_flags macros
to avoid code duplication.
---
 xen/arch/arm/p2m.c | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 15abd39..06f7eb8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1581,7 +1581,24 @@ struct page_info *get_page_from_gva(struct vcpu *v, 
vaddr_t va,
 
 p2m_read_lock(p2m);
 
-rc = gvirt_to_maddr(va, &maddr, flags);
+/*
+ * If altp2m is active, we need to translate the gva upon the hostp2m's
+ * vttbr, as it contains all valid mappings while the currently active
+ * altp2m view might not have the required gva mapping yet.
+ */
+if ( unlikely(altp2m_active(d)) )
+{
+unsigned long flags = 0;
+uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
+
+p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
+
+rc = gvirt_to_maddr(va, &maddr, flags);
+
+p2m_restore_vttbr_and_set_flags(ovttbr, flags);
+}
+else
+rc = gvirt_to_maddr(va, &maddr, flags);
 
 if ( rc )
 goto err;
-- 
2.9.0




[Xen-devel] [PATCH v3 32/38] arm/p2m: Code movement in instr/data abort handlers

2016-08-16 Thread Sergej Proskurin
This commit moves code in the functions
"do_trap_(instr|data)_abort_guest" without changing the original
functionality. The code movement is limited to moving the struct npfec
out of the switch statements in both functions. This commit serves as a
basis for the following commit, which implements the altp2m paging
mechanism.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/traps.c | 30 +-
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index da56cc0..0bf1653 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2406,6 +2406,12 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 paddr_t gpa;
 mfn_t mfn;
 
+const struct npfec npfec = {
+.insn_fetch = 1,
+.gla_valid = 1,
+.kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+};
+
 if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
 gpa = get_faulting_ipa(gva);
 else
@@ -2431,20 +2437,12 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 switch ( fsc )
 {
 case FSC_FLT_PERM:
-{
-const struct npfec npfec = {
-.insn_fetch = 1,
-.gla_valid = 1,
-.kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
-};
-
 rc = p2m_mem_access_check(gpa, gva, npfec);
 
 /* Trap was triggered by mem_access, work here is done */
 if ( !rc )
 return;
 break;
-}
 case FSC_FLT_TRANS:
 /*
  * The PT walk may have failed because someone was playing
@@ -2500,6 +2498,13 @@ static void do_trap_data_abort_guest(struct 
cpu_user_regs *regs,
 uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
 mfn_t mfn;
 
+const struct npfec npfec = {
+.read_access = !dabt.write,
+.write_access = dabt.write,
+.gla_valid = 1,
+.kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+};
+
 info.dabt = dabt;
 #ifdef CONFIG_ARM_32
 info.gva = READ_CP32(HDFAR);
@@ -2524,21 +2529,12 @@ static void do_trap_data_abort_guest(struct 
cpu_user_regs *regs,
 switch ( fsc )
 {
 case FSC_FLT_PERM:
-{
-const struct npfec npfec = {
-.read_access = !dabt.write,
-.write_access = dabt.write,
-.gla_valid = 1,
-.kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
-};
-
 rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
 
 /* Trap was triggered by mem_access, work here is done */
 if ( !rc )
 return;
 break;
-}
 case FSC_FLT_TRANS:
 /*
  * Attempt first to emulate the MMIO has the data abort will
-- 
2.9.0




[Xen-devel] [PATCH v3 09/38] arm/p2m: Add backpointer to the domain in p2m_domain

2016-08-16 Thread Sergej Proskurin
With the introduction of altp2m, many functions have been adapted to
receive an argument of type "struct p2m_domain *" instead of "struct
domain *". A backpointer to the associated domain within "struct
p2m_domain" reduces the number of function parameters without losing
access to the "struct domain". The need for this pointer is shown in the
following commits.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c| 1 +
 xen/include/asm-arm/p2m.h | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index bfbccca..e859fca 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1280,6 +1280,7 @@ int p2m_init(struct domain *d)
 p2m->max_mapped_gfn = _gfn(0);
 p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
+p2m->domain = d;
 p2m->default_access = p2m_access_rwx;
 p2m->mem_access_enabled = false;
 radix_tree_init(&p2m->mem_access_settings);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 040ca13..fa07e19 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -100,6 +100,9 @@ struct p2m_domain {
 
 /* Choose between: host/alternate. */
 p2m_class_t p2m_class;
+
+/* Back pointer to struct domain. */
+struct domain *domain;
 };
 
 /*
-- 
2.9.0




[Xen-devel] [PATCH v3 38/38] arm/p2m: Add test of xc_altp2m_change_gfn

2016-08-16 Thread Sergej Proskurin
This commit extends xen-access by a simple test of the functionality
provided by "xc_altp2m_change_gfn". The idea is to dynamically remap a
trapping gfn to another mfn that holds the same content as the
original mfn. On success, the guest will continue to run. Subsequent
altp2m access violations will trap into Xen, and xen-access will force a
switch to the default view (altp2m[0]) as before. The introduced test
can be invoked by providing the argument "altp2m_remap".
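The remap flow of the test can be modeled as follows (hypothetical Python sketch of the control flow, not the actual libxc calls): allocate a fresh page, copy the trapping page's contents into it, then point the trapping gfn at the new backing page in the altp2m view.

```python
memory = {}            # mfn -> page contents (toy)
view = {}              # altp2m view: gfn -> mfn
next_mfn = [100]

def populate():
    """Stands in for xc_domain_populate_physmap_exact: allocate a page."""
    mfn = next_mfn[0]; next_mfn[0] += 1
    memory[mfn] = b""
    return mfn

def change_gfn(gfn_old):
    """Remap gfn_old to a fresh mfn holding a copy of the original contents."""
    old_mfn = view[gfn_old]
    new_mfn = populate()
    memory[new_mfn] = memory[old_mfn]      # role of xenaccess_copy_gfn
    view[gfn_old] = new_mfn                # role of xc_altp2m_change_gfn
    return new_mfn

view[0x10] = 1
memory[1] = b"guest code page"
new = change_gfn(0x10)
# The guest keeps running because the remapped page has identical contents.
assert view[0x10] == new and memory[new] == b"guest code page"
```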

Signed-off-by: Sergej Proskurin 
---
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
Cc: Ian Jackson 
Cc: Wei Liu 
---
v3: Cosmetic fixes in "xenaccess_copy_gfn" and "xenaccess_change_gfn".

Added munmap in "copy_gfn" in the second error case.

Added option "altp2m_remap" selecting the altp2m-remap test.
---
 tools/tests/xen-access/xen-access.c | 162 +++-
 1 file changed, 158 insertions(+), 4 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index eafd7d6..5909a8a 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 
+#define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include 
 #include 
 #include 
@@ -49,6 +50,8 @@
 #define START_PFN 0ULL
 #endif
 
+#define INVALID_GFN ~(0UL)
+
 #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
 #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
@@ -72,9 +75,14 @@ typedef struct xenaccess {
 xen_pfn_t max_gpfn;
 
 vm_event_t vm_event;
+
+unsigned int ap2m_idx;
+xen_pfn_t gfn_old;
+xen_pfn_t gfn_new;
 } xenaccess_t;
 
 static int interrupted;
+static int gfn_changed = 0;
 bool evtchn_bind = 0, evtchn_open = 0, mem_access_enable = 0;
 
 static void close_handler(int sig)
@@ -82,6 +90,100 @@ static void close_handler(int sig)
 interrupted = sig;
 }
 
+static int xenaccess_copy_gfn(xc_interface *xch,
+  domid_t domain_id,
+  xen_pfn_t dst_gfn,
+  xen_pfn_t src_gfn)
+{
+void *src_vaddr = NULL;
+void *dst_vaddr = NULL;
+
+src_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
+ PROT_READ, src_gfn);
+if ( src_vaddr == MAP_FAILED || src_vaddr == NULL)
+return -1;
+
+dst_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
+ PROT_WRITE, dst_gfn);
+if ( dst_vaddr == MAP_FAILED || dst_vaddr == NULL)
+{
+munmap(src_vaddr, XC_PAGE_SIZE);
+return -1;
+}
+
+memcpy(dst_vaddr, src_vaddr, XC_PAGE_SIZE);
+
+munmap(src_vaddr, XC_PAGE_SIZE);
+munmap(dst_vaddr, XC_PAGE_SIZE);
+
+return 0;
+}
+
+/*
+ * This function allocates and populates a page in the guest's physmap that is
+ * subsequently filled with contents of the trapping address. Finally, through
+ * the invocation of xc_altp2m_change_gfn, the altp2m subsystem changes the gfn
+ * to mfn mapping of the target altp2m view.
+ */
+static int xenaccess_change_gfn(xc_interface *xch,
+domid_t domain_id,
+unsigned int ap2m_idx,
+xen_pfn_t gfn_old,
+xen_pfn_t *gfn_new)
+{
+int rc;
+
+/*
+ * We perform this function only once as it is intended to be used for
+ * testing and demonstration purposes. Thus, we signal that further
+ * altp2m-related traps will not change trapping gfn's.
+ */
+gfn_changed = 1;
+
+rc = xc_domain_increase_reservation_exact(xch, domain_id, 1, 0, 0, gfn_new);
+if ( rc < 0 )
+return -1;
+
+rc = xc_domain_populate_physmap_exact(xch, domain_id, 1, 0, 0, gfn_new);
+if ( rc < 0 )
+goto err;
+
+/* Copy content of the old gfn into the newly allocated gfn */
+rc = xenaccess_copy_gfn(xch, domain_id, *gfn_new, gfn_old);
+if ( rc < 0 )
+goto err;
+
+xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, *gfn_new);
+
+return 0;
+
+err:
+xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, gfn_new);
+
+return -1;
+}
+
+static int xenaccess_reset_gfn(xc_interface *xch,
+   domid_t domain_id,
+   unsigned int ap2m_idx,
+   xen_pfn_t gfn_old,
+   xen_pfn_t gfn_new)
+{
+int rc;
+
+/* Reset previous state */
+xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, INVALID_GFN);
+
+/* Invalidate the new gfn */
+xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_new, INVALID_GFN);
+
+rc = xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &gfn_new);
+if ( rc < 0 )
+return -1;
+
+

[Xen-devel] [PATCH v3 14/38] arm/p2m: Add altp2m init/teardown routines

2016-08-16 Thread Sergej Proskurin
The p2m initialization now invokes initialization routines responsible
for the allocation and initialization of altp2m structures. The same
applies to the teardown routines. The functionality has been adapted from
the x86 altp2m implementation.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Shared code between host/altp2m init/teardown functions.
Added conditional init/teardown of altp2m.
Altp2m related functions are moved to altp2m.c

v3: Removed locking the altp2m_lock in altp2m_teardown. Locking this
lock at this point is unnecessary.

Removed re-setting altp2m_vttbr, altp2m_p2m, and altp2m_active
values in the function "altp2m_teardown". Re-setting these values is
unnecessary as the entire domain will be destroyed right afterwards.

Removed check for "altp2m_enabled" in "p2m_init" as altp2m has not yet
been enabled by libxl at this point.

Removed check for "altp2m_enabled" before tearing down altp2m within
the function "p2m_teardown" so that altp2m gets destroyed even if
the HVM_PARAM_ALTP2M gets reset before "p2m_teardown" is called.

Added initialization of the field d->arch.altp2m_active in
"altp2m_init".

Removed check for already initialized vmid's in "altp2m_init_one",
as "altp2m_init_one" is now called always with an uninitialized p2m.

Removed the array altp2m_vttbr[] in struct arch_domain.
---
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/altp2m.c| 61 
 xen/arch/arm/p2m.c   | 16 +++-
 xen/include/asm-arm/altp2m.h |  6 +
 xen/include/asm-arm/domain.h |  6 +
 xen/include/asm-arm/p2m.h|  2 ++
 6 files changed, 91 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 23aaf52..4a7f660 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -5,6 +5,7 @@ subdir-$(CONFIG_ARM_64) += efi
 subdir-$(CONFIG_ACPI) += acpi
 
 obj-$(CONFIG_ALTERNATIVE) += alternative.o
+obj-y += altp2m.o
 obj-y += bootfdt.o
 obj-y += cpu.o
 obj-y += cpuerrata.o
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 000..66a373a
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,61 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see .
+ */
+
+#include 
+#include 
+
+int altp2m_init(struct domain *d)
+{
+unsigned int i;
+
+spin_lock_init(>arch.altp2m_lock);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+d->arch.altp2m_p2m[i] = NULL;
+
+d->arch.altp2m_active = false;
+
+return 0;
+}
+
+void altp2m_teardown(struct domain *d)
+{
+unsigned int i;
+struct p2m_domain *p2m;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( !d->arch.altp2m_p2m[i] )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+p2m_teardown_one(p2m);
+xfree(p2m);
+}
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 23ceb96..63c0df0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -14,6 +14,8 @@
 #include 
 #include 
 
+#include 
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -1322,6 +1324,12 @@ static void p2m_teardown_hostp2m(struct domain *d)
 
 void p2m_teardown(struct domain *d)
 {
+/*
+ * Teardown altp2m unconditionally so that altp2m gets always destroyed --
+ * even if HVM_PARAM_ALTP2M gets reset before teardown.
+ */
+altp2m_teardown(d);
+
 p2m_teardown_hostp2m(d);
 }
 
@@ -1336,7 +1344,13 @@ static int p2m_init_hostp2m(struct domain *d)
 
 int p2m_init(struct domain *d)
 {
-return p2m_init_hostp2m(d);
+int rc;
+
+rc = p2m_init_hostp2m(d);
+if ( rc )
+return rc;
+
+return altp2m_init(d);
 }
 
 /*
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 0711796..a156109 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -22,6 +22,9 @@
 
 #include 
 
+#define altp2m_lock(d)

[Xen-devel] [PATCH v3 34/38] arm/p2m: Add HVMOP_altp2m_change_gfn

2016-08-16 Thread Sergej Proskurin
This commit adds the functionality to change mfn mappings for specified
gfn's in altp2m views. This mechanism can be used within the context of
VMI, e.g., to establish stealthy debugging.
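The semantics implemented by this operation can be summarized in a small model (hypothetical Python, not the hypervisor code): remapping installs new_gfn's backing mfn at old_gfn in the chosen view, and passing INVALID_GFN resets old_gfn so the host p2m applies again.

```python
INVALID_GFN = -1

def change_gfn(hostp2m, ap2m, old_gfn, new_gfn):
    """Toy altp2m_change_gfn: returns 0 on success, -1 (EINVAL) on error."""
    if new_gfn == INVALID_GFN:
        ap2m.pop(old_gfn, None)        # drop the override; host mapping applies
        return 0
    # Resolve new_gfn in the view first, then fall back to the host p2m.
    mfn = ap2m.get(new_gfn, hostp2m.get(new_gfn))
    if mfn is None:
        return -1
    ap2m[old_gfn] = mfn                # old_gfn now reads new_gfn's page
    return 0

host = {0x10: 111, 0x20: 222}
view = {}
assert change_gfn(host, view, 0x10, 0x20) == 0 and view[0x10] == 222
assert change_gfn(host, view, 0x10, INVALID_GFN) == 0 and 0x10 not in view
assert change_gfn(host, view, 0x10, 0x999) == -1   # unmapped target gfn
```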

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Moved the altp2m_lock to guard access to d->arch.altp2m_vttbr[idx]
in altp2m_change_gfn.

Locked hp2m to prevent hp2m entries from being modified while the
function "altp2m_change_gfn" is active.

Removed setting ap2m->mem_access_enabled in "altp2m_change_gfn", as
we do not need explicitly splitting pages at this point.

Extended checks allowing to change gfn's in p2m_ram_(rw|ro) memory
only.

Moved the funtion "remove_altp2m_entry" out of this commit.
---
 xen/arch/arm/altp2m.c| 98 
 xen/arch/arm/hvm.c   |  7 +++-
 xen/include/asm-arm/altp2m.h |  6 +++
 3 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 2009bad..fa8d526 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -301,6 +301,104 @@ out:
 return rc;
 }
 
+int altp2m_change_gfn(struct domain *d,
+  unsigned int idx,
+  gfn_t old_gfn,
+  gfn_t new_gfn)
+{
+struct p2m_domain *hp2m, *ap2m;
+mfn_t mfn;
+p2m_access_t p2ma;
+p2m_type_t p2mt;
+unsigned int page_order;
+int rc = -EINVAL;
+
+hp2m = p2m_get_hostp2m(d);
+ap2m = d->arch.altp2m_p2m[idx];
+
+altp2m_lock(d);
+p2m_read_lock(hp2m);
+
+if ( idx >= MAX_ALTP2M || d->arch.altp2m_p2m[idx] == NULL )
+goto out;
+
+mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, NULL, &page_order);
+
+/* Check whether the page needs to be reset. */
+if ( gfn_eq(new_gfn, INVALID_GFN) )
+{
+/* If mfn is mapped by old_gfn, remove old_gfn from the altp2m table. */
+if ( !mfn_eq(mfn, INVALID_MFN) )
+{
+rc = remove_altp2m_entry(ap2m, old_gfn, mfn, page_order);
+if ( rc )
+{
+rc = -EINVAL;
+goto out;
+}
+}
+
+rc = 0;
+goto out;
+}
+
+/* Check hostp2m if no valid entry in altp2m present. */
+if ( mfn_eq(mfn, INVALID_MFN) )
+{
+mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &p2ma, &page_order);
+if ( mfn_eq(mfn, INVALID_MFN) ||
+ /* Allow changing gfn's in p2m_ram_(rw|ro) memory only. */
+ ((p2mt != p2m_ram_rw) && (p2mt != p2m_ram_ro)) )
+{
+rc = -EINVAL;
+goto out;
+}
+
+/* If this is a superpage, copy that first. */
+if ( page_order != THIRD_ORDER )
+{
+rc = modify_altp2m_entry(ap2m, old_gfn, mfn, p2mt, p2ma, page_order);
+if ( rc )
+{
+rc = -EINVAL;
+goto out;
+}
+}
+}
+
+mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &p2ma, &page_order);
+
+/* If new_gfn is not part of altp2m, get the mapping information from hp2m */
+if ( mfn_eq(mfn, INVALID_MFN) )
+mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &p2ma, &page_order);
+
+if ( mfn_eq(mfn, INVALID_MFN) ||
+ /* Allow changing gfn's in p2m_ram_(rw|ro) memory only. */
+ ((p2mt != p2m_ram_rw) && (p2mt != p2m_ram_ro)) )
+{
+rc = -EINVAL;
+goto out;
+}
+
+/* Set mem access attributes - currently supporting only one (4K) page. */
+page_order = THIRD_ORDER;
+rc = modify_altp2m_entry(ap2m, old_gfn, mfn, p2mt, p2ma, page_order);
+if ( rc )
+{
+rc = -EINVAL;
+goto out;
+}
+
+rc = 0;
+
+out:
+p2m_read_unlock(hp2m);
+altp2m_unlock(d);
+
+return rc;
+}
+
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 struct altp2mvcpu *av = &altp2m_vcpu(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index df78893..c754ad1 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -145,7 +145,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_change_gfn:
-rc = -EOPNOTSUPP;
+if ( a.u.change_gfn.pad1 || a.u.change_gfn.pad2 )
+rc = -EINVAL;
+else
+rc = altp2m_change_gfn(d, a.u.change_gfn.view,
+   _gfn(a.u.change_gfn.old_gfn),
+   _gfn(a.u.change_gfn.new_gfn));
 break;
 }
 
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 8e40c45..8b459bf 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -97,4 +97,10 @@ int altp2m_propagate_change(struct domain *d,
 p2m_type_t p2mt,
 p2m_access_t p2ma);
 
+/* Change a gfn->mfn mapping */
+int altp2m_change_gfn(struct domain *d,
+  

[Xen-devel] [PATCH v3 16/38] arm/p2m: Add HVMOP_altp2m_set_domain_state

2016-08-16 Thread Sergej Proskurin
The HVMOP_altp2m_set_domain_state operation allows activating altp2m on
a specific domain. This commit adapts the x86
HVMOP_altp2m_set_domain_state implementation.
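The state transition handled here can be sketched as a toy model (hypothetical Python, not the hypervisor code): enabling altp2m initializes view 0 and points every vcpu at it, while disabling detaches the vcpus again.

```python
INVALID_ALTP2M = None

class Domain:
    def __init__(self, vcpus):
        self.altp2m_active = False
        self.views = {}                       # view idx -> set of vcpu ids
        self.p2midx = {v: INVALID_ALTP2M for v in range(vcpus)}

def set_domain_state(d, state):
    """Toy HVMOP_altp2m_set_domain_state: toggle altp2m for the domain."""
    if state == d.altp2m_active:
        return
    if state:
        d.views[0] = set()                    # role of altp2m_init_by_id(d, 0)
        for v in d.p2midx:
            d.p2midx[v] = 0                   # role of altp2m_vcpu_initialise
            d.views[0].add(v)
    else:
        for v in d.p2midx:
            d.views[d.p2midx[v]].discard(v)   # role of altp2m_vcpu_destroy
            d.p2midx[v] = INVALID_ALTP2M
    d.altp2m_active = state

d = Domain(2)
set_domain_state(d, True)
assert d.altp2m_active and d.p2midx == {0: 0, 1: 0} and d.views[0] == {0, 1}
set_domain_state(d, False)
assert not d.altp2m_active and d.views[0] == set()
```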

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Dynamically allocate memory for altp2m views only when needed.
Move altp2m related helpers to altp2m.c.
p2m_flush_tlb is made publicly accessible.

v3: Cosmetic fixes.

Removed call to "p2m_alloc_table" in "altp2m_init_helper" as the
entire p2m allocation is now done within the function
"p2m_init_one". The same applies to the call of the function
"p2m_flush_tlb" from "p2m_init_one".

Removed the "altp2m_enabled" check in HVMOP_altp2m_set_domain_state
case as it has been moved in front of the switch statement in
"do_altp2m_op".

Changed the order of setting the new altp2m state (depending on
setting/resetting the state) in HVMOP_altp2m_set_domain_state case.

Removed the call to altp2m_vcpu_reset from altp2m_vcpu_initialise,
as the p2midx is set right after the call to 0, representing the
default view.

Moved the define "vcpu_altp2m" from domain.h to altp2m.h to avoid
defining altp2m-related functionality in multiple files. Also renamed
"vcpu_altp2m" to "altp2m_vcpu".

Declared the function "p2m_flush_tlb" as static, as it is not called
from altp2m.h anymore.

Exported the function "altp2m_get_altp2m" in altp2m.h.

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_init_by_id".

Set the field p2m->access_required to false by default.
---
 xen/arch/arm/altp2m.c| 102 +++
 xen/arch/arm/hvm.c   |  34 ++-
 xen/include/asm-arm/altp2m.h |  14 ++
 xen/include/asm-arm/domain.h |   7 +++
 xen/include/asm-arm/p2m.h|   5 +++
 5 files changed, 161 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 02cffd7..02a52ec 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -20,6 +20,108 @@
 #include 
 #include 
 
+struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
+{
+unsigned int index = altp2m_vcpu(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return NULL;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return v->domain->arch.altp2m_p2m[index];
+}
+
+static void altp2m_vcpu_reset(struct vcpu *v)
+{
+struct altp2mvcpu *av = &altp2m_vcpu(v);
+
+av->p2midx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialise(struct vcpu *v)
+{
+if ( v != current )
+vcpu_pause(v);
+
+altp2m_vcpu(v).p2midx = 0;
+atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+struct p2m_domain *p2m;
+
+if ( v != current )
+vcpu_pause(v);
+
+if ( (p2m = altp2m_get_altp2m(v)) )
+atomic_dec(&p2m->active_vcpus);
+
+altp2m_vcpu_reset(v);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+static int altp2m_init_helper(struct domain *d, unsigned int idx)
+{
+int rc;
+struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
+
+ASSERT(p2m == NULL);
+
+/* Allocate a new, zeroed altp2m view. */
+p2m = xzalloc(struct p2m_domain);
+if ( p2m == NULL)
+{
+rc = -ENOMEM;
+goto err;
+}
+
+p2m->p2m_class = p2m_alternate;
+
+/* Initialize the new altp2m view. */
+rc = p2m_init_one(d, p2m);
+if ( rc )
+goto err;
+
+p2m->access_required = false;
+_atomic_set(&p2m->active_vcpus, 0);
+
+d->arch.altp2m_p2m[idx] = p2m;
+
+return rc;
+
+err:
+if ( p2m )
+xfree(p2m);
+
+d->arch.altp2m_p2m[idx] = NULL;
+
+return rc;
+}
+
+int altp2m_init_by_id(struct domain *d, unsigned int idx)
+{
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+return rc;
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] == NULL )
+rc = altp2m_init_helper(d, idx);
+
+altp2m_unlock(d);
+
+return rc;
+}
+
 int altp2m_init(struct domain *d)
 {
 unsigned int i;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 180154e..c69da36 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -83,8 +83,40 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_set_domain_state:
-rc = -EOPNOTSUPP;
+{
+struct vcpu *v;
+bool_t ostate, nstate;
+
+ostate = d->arch.altp2m_active;
+nstate = !!a.u.domain_state.state;
+
+/* If the alternate p2m state has changed, handle appropriately */
+if ( (nstate != ostate) &&
+ (ostate || !(rc = altp2m_init_by_id(d, 0))) )
+{
+for_each_vcpu( d, v )
+{
+if ( !ostate )
+{
+altp2m_vcpu_initialise(v);
+

[Xen-devel] [PATCH v3 19/38] arm/p2m: Add HVMOP_altp2m_switch_p2m

2016-08-16 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v3: Extended the function "altp2m_switch_domain_altp2m_by_id" so that if
the guest domain indirectly calls this function, the current vcpu also
changes the altp2m view without performing an explicit context switch.

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_switch_domain_altp2m_by_id".
---
 xen/arch/arm/altp2m.c| 48 
 xen/arch/arm/hvm.c   |  2 +-
 xen/include/asm-arm/altp2m.h |  4 
 3 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index c14ab0b..ba345b9 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -32,6 +32,54 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
 return v->domain->arch.altp2m_p2m[index];
 }
 
+int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+struct vcpu *v;
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+return rc;
+
+domain_pause_except_self(d);
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] != NULL )
+{
+for_each_vcpu( d, v )
+if ( idx != altp2m_vcpu(v).p2midx )
+{
+atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
+altp2m_vcpu(v).p2midx = idx;
+atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+
+/*
+ * In case it is the guest domain, which indirectly called this
+ * function, the current vcpu will not switch its context
+ * within the function "p2m_restore_state". That is, changing
+ * the altp2m_vcpu(v).p2midx will not lead to the requested
+ * altp2m switch on that specific vcpu. To achieve the desired
+ * behavior, we write to VTTBR_EL2 directly.
+ */
+if ( v->domain == d && v == current )
+{
+struct p2m_domain *ap2m = d->arch.altp2m_p2m[idx];
+
+WRITE_SYSREG64(ap2m->vttbr, VTTBR_EL2);
+isb();
+}
+}
+
+rc = 0;
+}
+
+altp2m_unlock(d);
+
+domain_unpause_except_self(d);
+
+return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 struct altp2mvcpu *av = &altp2m_vcpu(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index df973ef..9ac3422 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -132,7 +132,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_switch_p2m:
-rc = -EOPNOTSUPP;
+rc = altp2m_switch_domain_altp2m_by_id(d, a.u.view.view);
 break;
 
 case HVMOP_altp2m_set_mem_access:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 6074079..c2e44ab 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -52,6 +52,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 /* Get current alternate p2m table. */
 struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 
+/* Switch alternate p2m for entire domain */
+int altp2m_switch_domain_altp2m_by_id(struct domain *d,
+  unsigned int idx);
+
 /* Make a specific alternate p2m valid. */
 int altp2m_init_by_id(struct domain *d,
   unsigned int idx);
-- 
2.9.0




[Xen-devel] [PATCH v3 35/38] arm/p2m: Adjust debug information to altp2m

2016-08-16 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Dump p2m information of the hostp2m and all altp2m views.
---
 xen/arch/arm/p2m.c | 20 
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index dea3038..86e2a1d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -162,6 +162,26 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
 
 dump_pt_walk(page_to_maddr(p2m->root), addr,
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+printk("\n");
+
+if ( altp2m_active(d) )
+{
+unsigned int i;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_p2m[i] == NULL )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+
+printk("AP2M[%d] @ %p mfn:0x%lx\n",
+i, p2m->root, page_to_mfn(p2m->root));
+
+dump_pt_walk(page_to_maddr(p2m->root), addr, P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+printk("\n");
+}
+}
 }
 
 void p2m_save_state(struct vcpu *p)
-- 
2.9.0




[Xen-devel] [PATCH v3 18/38] arm/p2m: Add HVMOP_altp2m_destroy_p2m

2016-08-16 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
v2: Substituted the call to tlb_flush for p2m_flush_table.
Added comments.
Cosmetic fixes.

v3: Changed the locking mechanism to "p2m_write_lock" inside the
function "altp2m_destroy_by_id".

Do not flush but rather teardown the altp2m in the function
"altp2m_destroy_by_id".

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_destroy_by_id".
---
 xen/arch/arm/altp2m.c| 43 +++
 xen/arch/arm/hvm.c   |  2 +-
 xen/include/asm-arm/altp2m.h |  4 
 3 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index b5d1951..c14ab0b 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -190,6 +190,49 @@ void altp2m_flush(struct domain *d)
 altp2m_unlock(d);
 }
 
+int altp2m_destroy_by_id(struct domain *d, unsigned int idx)
+{
+struct p2m_domain *p2m;
+int rc = -EBUSY;
+
+/*
+ * The altp2m[0] is considered as the hostp2m and is used as a safe harbor
+ * to which you can switch as long as altp2m is active. After deactivating
+ * altp2m, the system switches back to the original hostp2m view. That is,
+ * altp2m[0] should only be destroyed/flushed/freed, when altp2m is
+ * deactivated.
+ */
+if ( !idx || idx >= MAX_ALTP2M )
+return rc;
+
+domain_pause_except_self(d);
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] != NULL )
+{
+p2m = d->arch.altp2m_p2m[idx];
+
+if ( !_atomic_read(p2m->active_vcpus) )
+{
+p2m_write_lock(p2m);
+p2m_teardown_one(p2m);
+p2m_write_unlock(p2m);
+
+xfree(p2m);
+d->arch.altp2m_p2m[idx] = NULL;
+
+rc = 0;
+}
+}
+
+altp2m_unlock(d);
+
+domain_unpause_except_self(d);
+
+return rc;
+}
+
 void altp2m_teardown(struct domain *d)
 {
 unsigned int i;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index a504dfd..df973ef 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -128,7 +128,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_destroy_p2m:
-rc = -EOPNOTSUPP;
+rc = altp2m_destroy_by_id(d, a.u.view.view);
 break;
 
 case HVMOP_altp2m_switch_p2m:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 5701012..6074079 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -63,4 +63,8 @@ int altp2m_init_next_available(struct domain *d,
 /* Flush all the alternate p2m's for a domain. */
 void altp2m_flush(struct domain *d);
 
+/* Make a specific alternate p2m invalid */
+int altp2m_destroy_by_id(struct domain *d,
+ unsigned int idx);
+
 #endif /* __ASM_ARM_ALTP2M_H */
-- 
2.9.0




Re: [Xen-devel] [PATCH v2] domctl: relax getdomaininfo permissions

2016-08-16 Thread Daniel De Graaf

On 08/11/2016 07:33 AM, Jan Beulich wrote:

On 05.08.16 at 13:20,  wrote:


Daniel,

I've only now realized that I forgot to Cc you on this v2.

Jan


Qemu needs access to this for the domain it controls, both due to it
being used by xc_domain_memory_mapping() (which qemu calls) and the
explicit use in hw/xenpv/xen_domainbuild.c:xen_domain_poll(). Extend
permissions to that of any "ordinary" domctl: A domain controlling the
targeted domain can invoke this operation for that target domain (which
is being achieved by no longer passing NULL to xsm_domctl()).

This at once avoids a for_each_domain() loop when the ID of an
existing domain gets passed in.

Reported-by: Marek Marczykowski-Górecki 
Signed-off-by: Jan Beulich 


Acked-by: Daniel De Graaf 

[...]

I know there had been an alternative patch suggestion, but that one
doesn't seem to have seen a formal submission so far, so here is my
original proposal.

I wonder what good the duplication of the returned domain ID does: I'm
tempted to remove the one in the command-specific structure. Does
anyone have insight into why it was done that way?

I further wonder why we have XSM_OTHER: The respective conversion into
other XSM_* values in xsm/dummy.h could as well move into the callers,
making intentions more obvious when looking at the actual code.


The XSM_* values are not actually present in the XSM hook functions, so
they have to be a static value per function.  Otherwise, the dummy XSM
module won't have enough information to make the same decision as the
inlined dummy.h version does.

An alternate solution would be to add an explicit action parameter to
the hooks that currently use XSM_OTHER, but that mostly just moves the
conversion switch statement around and adds a pointless computation in
the case when the parameter is not used.
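
The distinction described above can be sketched in a few lines. This is a toy model with hypothetical names and policy, not Xen's actual xsm/dummy.h: an XSM_OTHER hook derives the effective default action from an op-specific value inside the dummy handler itself, instead of the caller passing a fixed XSM_* action.

```c
#include <assert.h>

/* Toy model (hypothetical, simplified): ordinary hooks carry one static
 * default action; an XSM_OTHER hook converts an op-specific code into
 * one of those actions inside the dummy module. */
enum xsm_default { XSM_HOOK, XSM_TARGET, XSM_PRIV };

struct domain { int id; int target_id; int is_priv; };

/* The generic default policy: who may act on whom. */
static int default_check(enum xsm_default action,
                         const struct domain *src, const struct domain *tgt)
{
    switch (action) {
    case XSM_HOOK:   return 0;                        /* always allowed */
    case XSM_TARGET: return (src->id == tgt->id ||
                             src->target_id == tgt->id) ? 0 : -1;
    case XSM_PRIV:   return src->is_priv ? 0 : -1;
    }
    return -1;
}

/* An XSM_OTHER-style hook: the op-specific 'mode' picks the real action
 * here, rather than being passed in by the caller. */
static int dummy_example_op(const struct domain *src, const struct domain *tgt,
                            int mode)
{
    enum xsm_default action = (mode == 1) ? XSM_PRIV : XSM_TARGET;
    return default_check(action, src, tgt);
}
```

Moving the mode-to-action switch into the callers would let each call site name its action explicitly, at the cost of the dummy module no longer being able to reproduce the decision on its own.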




Re: [Xen-devel] [PATCH v2 2/2] x86/altp2m: allow specifying external-only use-case

2016-08-16 Thread Daniel De Graaf

On 08/11/2016 10:51 AM, Jan Beulich wrote:

On 11.08.16 at 16:37,  wrote:

On Aug 11, 2016 06:02, "Jan Beulich"  wrote:



On 10.08.16 at 17:00,  wrote:

@@ -5238,18 +5238,19 @@ static int do_altp2m_op(
 goto out;
 }

-if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
+if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+{
+rc = -EINVAL;
+goto out;
+}
+
+if ( (rc = xsm_hvm_altp2mhvm_op(XSM_OTHER, d,
+d->arch.hvm_domain.params[HVM_PARAM_ALTP2M])) )


I'm sorry that this didn't occur to me on v1 already, but is there
really a need for passing this extra argument, when the callee
could - if it cared in the first place - read the value itself?


I'm not sure if it's ok to have XSM poke around in arch-specific parts like
this. We are adding this hvm param for ARM in another series, but still...


Daniel, what's your opinion?

Jan


XSM does have some required arch-specific knowledge already (x86 IO port
labeling, in particular), so it's really a style question.  I'd prefer the
form with the value passed in so that it's clearer what the XSM check is
inspecting to determine what to do, especially in this case where it changes
what permissions are actually being enforced (in the non-FLASK case).

--
Daniel De Graaf
National Security Agency



[Xen-devel] [ovmf test] 100505: all pass - PUSHED

2016-08-16 Thread osstest service owner
flight 100505 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100505/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf d35ec1e0507dc612ed6485410f12e683a726a3bf
baseline version:
 ovmf de74668f5ea713b7e91e01318f0d15d2bf0effce

Last test of basis   100489  2016-08-15 07:14:39 Z1 days
Testing same since   100505  2016-08-16 02:44:25 Z0 days1 attempts


People who touched revisions under test:
  Dong, Eric 
  Eric Dong 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=ovmf
+ revision=d35ec1e0507dc612ed6485410f12e683a726a3bf
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push ovmf 
d35ec1e0507dc612ed6485410f12e683a726a3bf
+ branch=ovmf
+ revision=d35ec1e0507dc612ed6485410f12e683a726a3bf
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=ovmf
+ xenbranch=xen-unstable
+ '[' xovmf = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable
+ prevxenbranch=xen-4.7-testing
+ '[' xd35ec1e0507dc612ed6485410f12e683a726a3bf = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' xgit://cache:9419/ '!=' x ']'
+++ echo 

[Xen-devel] [xen-unstable test] 100501: regressions - FAIL

2016-08-16 Thread osstest service owner
flight 100501 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100501/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl   6 xen-boot fail REGR. vs. 100488
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 15 guest-localmigrate/x10 fail 
REGR. vs. 100488

Regressions which are regarded as allowable (not blocking):
 build-amd64-rumpuserxen   6 xen-buildfail  like 100488
 build-i386-rumpuserxen6 xen-buildfail  like 100488
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 100488
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 100488
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeatfail  like 100488
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 100488
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 100488
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 100488

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass

version targeted for testing:
 xen  1f848de6f229e2b3a5aa84399d2639a958a6e945
baseline version:
 xen  a55ad65d3a30d5b3a026a7481ce05f28065920f0

Last test of basis   100488  2016-08-15 01:58:52 Z1 days
Failing since100492  2016-08-15 10:43:55 Z1 days2 attempts
Testing same since   100501  2016-08-15 21:46:33 Z0 days1 attempts


People who touched revisions under test:
  Jan Beulich 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   

Re: [Xen-devel] [PATCH 0/4] boot-wrapper: arm64: Xen support

2016-08-16 Thread Julien Grall

Hi Andre,

Do you plan to send a new version of this series?

Cheers,

On 20/06/2016 16:09, Andre Przywara wrote:

These patches make it possible to include a Xen hypervisor binary in a boot-wrapper
ELF file, so that a Foundation Platform or a Fast Model can boot a Xen
system (including a Dom0 kernel).
This has been floating around for a while; I just updated the patches
to apply to the latest boot-wrapper tree. I also increased Xen's load
address to accommodate Dom0 kernels bigger than 16 MB.
For testing this just add: "--with-xen=/path/to/xen/xen/xen" to the
./configure command line and feed the resulting xen-system.axf file to
the model.

Cheers,
Andre.

Christoffer Dall (3):
  Support for building in a Xen binary
  Xen: Support adding DT nodes
  Explicitly clean linux-system.axf and xen-system.axf

Ian Campbell (1):
  Xen: Select correct dom0 console

 .gitignore|  1 +
 Makefile.am   | 38 +-
 boot_common.c |  4 ++--
 configure.ac  | 26 +-
 model.lds.S   | 14 ++
 5 files changed, 67 insertions(+), 16 deletions(-)



--
Julien Grall



Re: [Xen-devel] [RFC 01/22] xen/arm: do_trap_instr_abort_guest: Move the IPA computation out of the switch

2016-08-16 Thread Julien Grall

Hi Stefano,

On 16/08/2016 01:21, Stefano Stabellini wrote:

On Thu, 28 Jul 2016, Julien Grall wrote:

A follow-up patch will add more cases to the switch that will require the
IPA. So move the computation out of the switch.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 683bcb2..46e0663 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2403,35 +2403,35 @@ static void do_trap_instr_abort_guest(struct 
cpu_user_regs *regs,
 int rc;
 register_t gva = READ_SYSREG(FAR_EL2);
 uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
+paddr_t gpa;
+
+if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
+gpa = get_faulting_ipa(gva);
+else
+{
+/*
+ * Flush the TLB to make sure the DTLB is clear before
+ * doing GVA->IPA translation. If we got here because of
+ * an entry only present in the ITLB, this translation may
+ * still be inaccurate.
+ */
+flush_tlb_local();
+
+rc = gva_to_ipa(gva, &gpa, GV2M_READ);
+if ( rc == -EFAULT )
+return; /* Try again */


The issue with this is that now for any cases that don't require a gpa
if gva_to_ipa fails we wrongly return -EFAULT.


Well, stage-1 faults are prioritized over stage-2 faults (see B3.12.3 in
ARM DDI 0406C.b), so gva_to_ipa should never fail unless someone is
playing with the stage-1 page tables at the same time, or because of an
erratum (see 834220). In both cases, we should replay the instruction to
let the processor inject the correct fault.


FWIW, this is already what we do for the data abort handler.



I suggest having two switches or falling through from the first case to
the second.


I am not sure I understand your suggestion. Could you explain it in more detail?

Regards,

--
Julien Grall



[Xen-devel] [distros-debian-snapshot test] 67540: regressions - FAIL

2016-08-16 Thread Platform Team regression test user
flight 67540 distros-debian-snapshot real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/67540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-i386-current-netinst-pygrub 10 guest-start fail REGR. vs. 66946

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-armhf-daily-netboot-pygrub 9 debian-di-install fail like 66946
 test-amd64-i386-amd64-daily-netboot-pygrub 9 debian-di-install fail like 66946
 test-amd64-amd64-i386-daily-netboot-pygrub 9 debian-di-install fail like 66946
 test-amd64-i386-i386-weekly-netinst-pygrub 9 debian-di-install fail like 66946
 test-amd64-i386-i386-daily-netboot-pvgrub  9 debian-di-install fail like 66946
 test-amd64-i386-amd64-weekly-netinst-pygrub 9 debian-di-install fail like 66946
 test-amd64-amd64-amd64-daily-netboot-pvgrub 9 debian-di-install fail like 66946
 test-amd64-amd64-i386-weekly-netinst-pygrub 9 debian-di-install fail like 66946
 test-amd64-amd64-amd64-weekly-netinst-pygrub 9 debian-di-install fail like 
66946

baseline version:
 flight   66946

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-daily-netboot-pvgrub  fail
 test-amd64-i386-i386-daily-netboot-pvgrubfail
 test-amd64-i386-amd64-daily-netboot-pygrub   fail
 test-armhf-armhf-armhf-daily-netboot-pygrub  fail
 test-amd64-amd64-i386-daily-netboot-pygrub   fail
 test-amd64-amd64-amd64-current-netinst-pygrubpass
 test-amd64-i386-amd64-current-netinst-pygrub pass
 test-amd64-amd64-i386-current-netinst-pygrub pass
 test-amd64-i386-i386-current-netinst-pygrub  fail
 test-amd64-amd64-amd64-weekly-netinst-pygrub fail
 test-amd64-i386-amd64-weekly-netinst-pygrub  fail
 test-amd64-amd64-i386-weekly-netinst-pygrub  fail
 test-amd64-i386-i386-weekly-netinst-pygrub   fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.




Re: [Xen-devel] [PATCH 1/3] x86emul: use DstEax also for {,I}{MUL,DIV}

2016-08-16 Thread Andrew Cooper
On 16/08/16 17:07, Jan Beulich wrote:
 On 16.08.16 at 17:23,  wrote:
> On 16.08.16 at 17:11,  wrote:
>>> On 16/08/16 15:57, Jan Beulich wrote:
>>> On 16.08.16 at 16:08,  wrote:
> On 16/08/16 10:32, Jan Beulich wrote:
>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>> possible"): While it avoids just a few instructions, we should
>> nevertheless make use of generic code as much as possible.
>>
>> Signed-off-by: Jan Beulich 
> This does reduce the amount of code, but it isn't strictly true.  The
> mul and div instructions are DstEaxEdx, as are a number of other
> instructions.
>
> We shouldn't end up with special casing the eax part because we have an
> easy literal for it, but leaving the edx hard coded because that is
> easier to express in the current code.
 I think the code reduction is nevertheless worth it, and reduction
 here can only help readability imo. Would you be okay if I added
 a comment to the place where the DstEax gets set here? (Note
 that DstEdxEax wouldn't be true for 8-bit operations, so I'd rather
 not use this as another alias or even a completely new operand
 kind description. And please also remember that the tables don't
 express all operands in all cases anyway - just consider
 SHLD/SHRD.)
>>> The other option would be to use DstNone and explicitly fill in
>>> _regs.eax, which avoids all the code to play with dst, and matches how
>>> rdtsc/rdmsr/wrmsr currently work.
>> Well, that would be more code, but not less of a lie. Or maybe, if
>> we stayed with DstImplicit (as it is without this patch) instead of
>> making it DstNone. Let me see how that ends up looking.
> Actually - no, we can't do that: The imul case has other imul cases
> funneled into it (via the imul: label), and I wouldn't want the mul,
> div, and idiv cases be different from the imul one. So I'd really like
> to ask you to reconsider whether the patch in its current form
> (perhaps with some comment added) isn't acceptable.

Ok - with a comment, Reviewed-by: Andrew Cooper 



Re: [Xen-devel] [PATCH] x86/PV: don't hide CPUID.OSXSAVE from user mode

2016-08-16 Thread Andrew Cooper
On 16/08/16 17:00, Jan Beulich wrote:
 On 16.08.16 at 17:41,  wrote:
>> On 16/08/16 16:20, Jan Beulich wrote:
>>> User mode code generally cannot be expected to invoke the PV-enabled
>>> CPUID Xen supports, and prior to the CPUID levelling changes for 4.7
>>> (as well as even nowadays on levelling incapable hardware) such CPUID
>>> invocations actually saw the host CR4.OSXSAVE value. Fold in the guest
>>> view of CR4.OSXSAVE when setting the levelling MSRs, just like we do
>>> in other CPUID handling.
>> How does this work?  OSXSAVE is a fast-forwarded bit, not a regular bit.
>>
>> There is nothing you can do to control it on Intel, as the MSRs are
>> strictly an AND mask, applied before OSXSAVE and APIC are fast
>> forwarded from real hardware state.
> Considering that the change works (and things didn't work before) I
> assume the AND-ing happens after the fast forwarding.

That is specifically contrary to my findings.  What hardware is this
on?  (Given the undocumented state of the rest of masking, I wouldn't be
surprised if it differed across models).

>
>> On AMD, you can force it to zero by clearing the OSXSAVE bit, but you
>> can never cause it to appear set if Xen has it cleared in CR4.
> We don't allow guests to use XSAVE (and hence set the bit in CR4) if
> we don't enable it ourselves. Hence if it's off in Xen, it'll be off
> everywhere else (and that's what we want); i.e. in the consideration
> of how this works, please assume CR4.OSXSAVE=1 for the raw
> hardware reg.

And I presume the use case is to hide it from guest userspace if it is
not enabled in the guest kernel?

On the AMD side, this is a simple two-liner

/* Force OSXSAVE to zero if not enabled by the guest kernel. */
if (masks != _defaults &&
    !(current->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE))
        masks->_1cd &= ~cpufeat_mask(X86_FEATURE_OSXSAVE);

to counteract the default set up in update_domain_cpuid_info().

~Andrew



Re: [Xen-devel] [PATCH 1/3] x86emul: use DstEax also for {,I}{MUL,DIV}

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 17:23,  wrote:
 On 16.08.16 at 17:11,  wrote:
>> On 16/08/16 15:57, Jan Beulich wrote:
>> On 16.08.16 at 16:08,  wrote:
 On 16/08/16 10:32, Jan Beulich wrote:
> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
> possible"): While it avoids just a few instructions, we should
> nevertheless make use of generic code as much as possible.
>
> Signed-off-by: Jan Beulich 
 This does reduce the amount of code, but it isn't strictly true.  The
 mul and div instructions are DstEaxEdx, as are a number of other
 instructions.

 We shouldn't end up with special casing the eax part because we have an
 easy literal for it, but leaving the edx hard coded because that is
 easier to express in the current code.
>>> I think the code reduction is nevertheless worth it, and reduction
>>> here can only help readability imo. Would you be okay if I added
>>> a comment to the place where the DstEax gets set here? (Note
>>> that DstEdxEax wouldn't be true for 8-bit operations, so I'd rather
>>> not use this as another alias or even a completely new operand
>>> kind description. And please also remember that the tables don't
>>> express all operands in all cases anyway - just consider
>>> SHLD/SHRD.)
>> 
>> The other option would be to use DstNone and explicitly fill in
>> _regs.eax, which avoids all the code to play with dst, and matches how
>> rdtsc/rdmsr/wrmsr currently work.
> 
> Well, that would be more code, but not less of a lie. Or maybe, if
> we stayed with DstImplicit (as it is without this patch) instead of
> making it DstNone. Let me see how that ends up looking.

Actually - no, we can't do that: The imul case has other imul cases
funneled into it (via the imul: label), and I wouldn't want the mul,
div, and idiv cases be different from the imul one. So I'd really like
to ask you to reconsider whether the patch in its current form
(perhaps with some comment added) isn't acceptable.

Jan




[Xen-devel] [xen-unstable-smoke test] 100515: tolerable all pass - PUSHED

2016-08-16 Thread osstest service owner
flight 100515 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100515/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  c4e7a67e3a109a3d507d2617b77017e40d59f04a
baseline version:
 xen  1f848de6f229e2b3a5aa84399d2639a958a6e945

Last test of basis   100493  2016-08-15 11:01:42 Z1 days
Testing same since   100515  2016-08-16 14:03:06 Z0 days1 attempts


People who touched revisions under test:
  Jan Beulich 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=c4e7a67e3a109a3d507d2617b77017e40d59f04a
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
c4e7a67e3a109a3d507d2617b77017e40d59f04a
+ branch=xen-unstable-smoke
+ revision=c4e7a67e3a109a3d507d2617b77017e40d59f04a
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.7-testing
+ '[' xc4e7a67e3a109a3d507d2617b77017e40d59f04a = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' xgit://cache:9419/ '!=' x ']'
+++ echo 

Re: [Xen-devel] [PATCH] x86/PV: don't hide CPUID.OSXSAVE from user mode

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 17:41,  wrote:
> On 16/08/16 16:20, Jan Beulich wrote:
>> User mode code generally cannot be expected to invoke the PV-enabled
>> CPUID Xen supports, and prior to the CPUID levelling changes for 4.7
>> (as well as even nowadays on levelling incapable hardware) such CPUID
>> invocations actually saw the host CR4.OSXSAVE value. Fold in the guest
>> view of CR4.OSXSAVE when setting the levelling MSRs, just like we do
>> in other CPUID handling.
> 
> How does this work?  OSXSAVE is a fast-forwarded bit, not a regular bit.
> 
> There is nothing you can do to control it on Intel, as the MSRs are
> strictly an AND mask, applied before OSXSAVE and APIC are fast
> forwarded from real hardware state.

Considering that the change works (and things didn't work before) I
assume the AND-ing happens after the fast forwarding.

> On AMD, you can force it to zero by clearing the OSXSAVE bit, but you
> can never cause it to appear set if Xen has it cleared in CR4.

We don't allow guests to use XSAVE (and hence set the bit in CR4) if
we don't enable it ourselves. Hence if it's off in Xen, it'll be off
everywhere else (and that's what we want); i.e. in the consideration
of how this works, please assume CR4.OSXSAVE=1 for the raw
hardware reg.

Jan




Re: [Xen-devel] [PATCH] x86/PV: don't hide CPUID.OSXSAVE from user mode

2016-08-16 Thread Andrew Cooper
On 16/08/16 16:20, Jan Beulich wrote:
> User mode code generally cannot be expected to invoke the PV-enabled
> CPUID Xen supports, and prior to the CPUID levelling changes for 4.7
> (as well as even nowadays on levelling incapable hardware) such CPUID
> invocations actually saw the host CR4.OSXSAVE value. Fold in the guest
> view of CR4.OSXSAVE when setting the levelling MSRs, just like we do
> in other CPUID handling.

How does this work?  OSXSAVE is a fast-forwarded bit, not a regular bit.

There is nothing you can do to control it on Intel, as the MSRs are
strictly an AND mask, applied before OSXSAVE and APIC are fast
forwarded from real hardware state.

On AMD, you can force it to zero by clearing the OSXSAVE bit, but you
can never cause it to appear set if Xen has it cleared in CR4.

~Andrew
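
The before-or-after ordering question in this exchange can be made concrete with a toy model. To be clear, this is not a description of real hardware: the thread is precisely about which ordering the silicon implements, and it may differ between vendors and possibly across models.

```c
#include <assert.h>
#include <stdint.h>

#define OSXSAVE (1u << 27) /* CPUID.1:ECX.OSXSAVE */

/* Ordering A: apply the levelling mask first, then fast-forward
 * CR4.OSXSAVE. The mask can never hide the bit. */
static uint32_t mask_then_forward(uint32_t raw_ecx, uint32_t mask,
                                  int cr4_osxsave)
{
    uint32_t ecx = raw_ecx & mask;
    if (cr4_osxsave)
        ecx |= OSXSAVE;
    else
        ecx &= ~OSXSAVE;
    return ecx;
}

/* Ordering B: fast-forward first, then apply the mask. A cleared mask
 * bit hides OSXSAVE even when CR4.OSXSAVE is set. */
static uint32_t forward_then_mask(uint32_t raw_ecx, uint32_t mask,
                                  int cr4_osxsave)
{
    uint32_t ecx = cr4_osxsave ? (raw_ecx | OSXSAVE) : (raw_ecx & ~OSXSAVE);
    return ecx & mask;
}
```

Ordering A matches the reading above, under which the mask cannot control the bit; ordering B is what folding the guest's CR4.OSXSAVE view into the levelling MSRs relies on.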



Re: [Xen-devel] [PATCH 1/3] x86emul: use DstEax also for {,I}{MUL,DIV}

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 17:11,  wrote:
> On 16/08/16 15:57, Jan Beulich wrote:
> On 16.08.16 at 16:08,  wrote:
>>> On 16/08/16 10:32, Jan Beulich wrote:
 Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
 possible"): While it avoids just a few instructions, we should
 nevertheless make use of generic code as much as possible.

 Signed-off-by: Jan Beulich 
>>> This does reduce the amount of code, but it isn't strictly true.  The
>>> mul and div instructions are DstEaxEdx, as are a number of other
>>> instructions.
>>>
>>> We shouldn't end up with special casing the eax part because we have an
>>> easy literal for it, but leaving the edx hard coded because that is
>>> easier to express in the current code.
>> I think the code reduction is nevertheless worth it, and reduction
>> here can only help readability imo. Would you be okay if I added
>> a comment to the place where the DstEax gets set here? (Note
>> that DstEdxEax wouldn't be true for 8-bit operations, so I'd rather
>> not use this as another alias or even a completely new operand
>> kind description. And please also remember that the tables don't
>> express all operands in all cases anyway - just consider
>> SHLD/SHRD.)
> 
> The other option would be to use DstNone and explicitly fill in
> _regs.eax, which avoids all the code to play with dst, and matches how
> rdtsc/rdmsr/wrmsr currently work.

Well, that would be more code, but not less of a lie. Or maybe, if
we stayed with DstImplicit (as it is without this patch) instead of
making it DstNone. Let me see how that ends up looking.

Jan




[Xen-devel] Remus memcpy dirty pages if local host

2016-08-16 Thread Sunny Raj
Hi,

I'm Sunny Raj, and this is the first time I'm posting to xen-devel. I'm a
research student currently working on some aspects of virtual machine
introspection. Specifically, instead of introspecting the VM directly, I
use Remus to checkpoint the VM to a backup on the localhost, and run
introspection on the backup; one of the main reasons is that this allows me
to do more complex introspection techniques without incurring additional
overhead on the primary VM.

But Remus, regardless of whether the target is the localhost or not, seems
to use ssh to write the data (dirty pages) to a stream, and have the backup
read from the stream. This seems to take a lot of time (~50 to 100 milliseconds).

What I would like to do instead is to see if there is a way to memcpy the
dirty pages from the primary to the backup, since it is on the localhost.
Currently, in the write_batch() function in xc_sr_save.c, we have all the
data we need in the iov data structure. Is it possible to memcpy this onto
the backup?
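For illustration, a localhost fast path might flatten the iov's scattered dirty-page buffers into one contiguous region with memcpy, roughly as sketched below. All names here are invented for illustration — this is not the xc_sr_save.c interface, and a real implementation would still need a shared mapping between primary and backup plus the usual checkpoint ordering.

```c
#include <string.h>
#include <sys/uio.h>

/* Hypothetical sketch: instead of pushing the iov through an ssh
 * stream, a localhost fast path could flatten the scattered
 * dirty-page buffers into one shared region with memcpy.  The
 * function name and layout are illustrative only. */
static size_t flatten_iov(const struct iovec *iov, int iovcnt,
                          char *dst, size_t dst_len)
{
    size_t off = 0;

    for (int i = 0; i < iovcnt; i++) {
        if (off + iov[i].iov_len > dst_len)
            return 0; /* would overflow the shared region */
        memcpy(dst + off, iov[i].iov_base, iov[i].iov_len);
        off += iov[i].iov_len;
    }
    return off; /* total bytes the backup can now read locally */
}
```

Whether this beats the stream in practice would depend on how the backup maps the region and synchronizes with the primary's checkpoints.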

Thanks,
Sunny
-- 
Sunny


[Xen-devel] [PATCH] x86/PV: don't hide CPUID.OSXSAVE from user mode

2016-08-16 Thread Jan Beulich
User mode code generally cannot be expected to invoke the PV-enabled
CPUID Xen supports, and prior to the CPUID levelling changes for 4.7
(as well as even nowadays on levelling incapable hardware) such CPUID
invocations actually saw the host CR4.OSXSAVE value. Fold in the guest
view of CR4.OSXSAVE when setting the levelling MSRs, just like we do
in other CPUID handling.

To make guest CR4 changes immediately visible via CPUID, also invoke
ctxt_switch_levelling() from the CR4 write path.
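As a toy illustration of the fold-in (the bit positions are the architectural ones, but the names and helper are invented — this is not the Xen code): the OSXSAVE bit in the CPUID.1.ECX mask simply tracks the guest's CR4.OSXSAVE.

```c
#include <stdint.h>

#define TOY_CR4_OSXSAVE   (1u << 18)  /* CR4.OSXSAVE */
#define TOY_FEAT_OSXSAVE  (1u << 27)  /* CPUID.1:ECX.OSXSAVE */

/* Hypothetical helper: recompute the levelling mask so that the
 * OSXSAVE feature bit mirrors the guest's CR4 view. */
static uint32_t toy_fold_osxsave(uint32_t base_mask, uint32_t guest_cr4)
{
    uint32_t mask = base_mask & ~TOY_FEAT_OSXSAVE;

    if (guest_cr4 & TOY_CR4_OSXSAVE)
        mask |= TOY_FEAT_OSXSAVE;
    return mask;
}
```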

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -206,17 +206,30 @@ static void __init noinline probe_maskin
 static void amd_ctxt_switch_levelling(const struct domain *nextd)
 {
	struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
-	const struct cpuidmasks *masks =
-		(nextd && is_pv_domain(nextd) && nextd->arch.pv_domain.cpuidmasks)
-		? nextd->arch.pv_domain.cpuidmasks : &cpuidmask_defaults;
+   const struct cpuidmasks *masks = NULL;
+   unsigned long cr4;
+   uint64_t val__1cd = 0, val_e1cd = 0, val__7ab0 = 0, val__6c = 0;
+
+   if (nextd && is_pv_domain(nextd) && !is_idle_domain(nextd)) {
+   cr4 = current->arch.pv_vcpu.ctrlreg[4];
+   masks = nextd->arch.pv_domain.cpuidmasks;
+   } else
+   cr4 = read_cr4();
+
+   if (cr4 & X86_CR4_OSXSAVE)
+   val__1cd |= (uint64_t)cpufeat_mask(X86_FEATURE_OSXSAVE) << 32;
+
+   if (!masks)
+   masks = &cpuidmask_defaults;
 
 #define LAZY(cap, msr, field)  \
({  \
-   if (unlikely(these_masks->field != masks->field) && \
+   val_##field |= masks->field;\
+   if (unlikely(these_masks->field != val_##field) &&  \
((levelling_caps & cap) == cap))\
{   \
-   wrmsr_amd(msr, masks->field);   \
-   these_masks->field = masks->field;  \
+   wrmsr_amd(msr, val_##field);\
+   these_masks->field = val_##field;   \
}   \
})
 
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -154,7 +154,9 @@ static void __init probe_masking_msrs(vo
 static void intel_ctxt_switch_levelling(const struct domain *nextd)
 {
	struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
-   const struct cpuidmasks *masks;
+   const struct cpuidmasks *masks = NULL;
+   unsigned long cr4;
+   uint64_t val__1cd = 0, val_e1cd = 0, val_Da1 = 0;
 
if (cpu_has_cpuid_faulting) {
/*
@@ -178,16 +180,27 @@ static void intel_ctxt_switch_levelling(
return;
}
 
-	masks = (nextd && is_pv_domain(nextd) && nextd->arch.pv_domain.cpuidmasks)
-		? nextd->arch.pv_domain.cpuidmasks : &cpuidmask_defaults;
+   if (nextd && is_pv_domain(nextd) && !is_idle_domain(nextd)) {
+   cr4 = current->arch.pv_vcpu.ctrlreg[4];
+   masks = nextd->arch.pv_domain.cpuidmasks;
+   } else
+   cr4 = read_cr4();
+
+   /* OSXSAVE cleared by pv_featureset.  Fast-forward CR4 back in. */
+   if (cr4 & X86_CR4_OSXSAVE)
+   val__1cd |= cpufeat_mask(X86_FEATURE_OSXSAVE);
+
+   if (!masks)
+   masks = &cpuidmask_defaults;
 
 #define LAZY(msr, field)   \
({  \
-   if (unlikely(these_masks->field != masks->field) && \
+   val_##field |= masks->field;\
+   if (unlikely(these_masks->field != val_##field) &&  \
(msr))  \
{   \
-   wrmsrl((msr), masks->field);\
-   these_masks->field = masks->field;  \
+   wrmsrl((msr), val_##field); \
+   these_masks->field = val_##field;   \
}   \
})
 
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2733,6 +2733,7 @@ static int emulate_privileged_op(struct
 case 4: /* Write CR4 */
 v->arch.pv_vcpu.ctrlreg[4] = pv_guest_cr4_fixup(v, *reg);
 write_cr4(pv_guest_cr4_to_real_cr4(v));
+ctxt_switch_levelling(currd);
 break;
 
 default:




Re: [Xen-devel] [PATCH 1/3] x86emul: use DstEax also for {, I}{MUL, DIV}

2016-08-16 Thread Andrew Cooper
On 16/08/16 15:57, Jan Beulich wrote:
 On 16.08.16 at 16:08,  wrote:
>> On 16/08/16 10:32, Jan Beulich wrote:
>>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>>> possible"): While it avoids just a few instructions, we should
>>> nevertheless make use of generic code as much as possible.
>>>
>>> Signed-off-by: Jan Beulich 
>> This does reduce the amount of code, but it isn't strictly true.  The
>> mul and div instructions are DstEaxEdx, as are a number of other
>> instructions.
>>
>> We shouldn't end up with special casing the eax part because we have an
>> easy literal for it, but leaving the edx hard coded because that is
>> easier to express in the current code.
> I think the code reduction is nevertheless worth it, and reduction
> here can only help readability imo. Would you be okay if I added
> a comment to the place where the DstEax gets set here? (Note
> that DstEdxEax wouldn't be true for 8-bit operations, so I'd rather
> not use this as another alias or even a completely new operand
> kind description. And please also remember that the tables don't
> express all operands in all cases anyway - just consider
> SHLD/SHRD.)

The other option would be to use DstNone and explicitly fill in
_regs.eax, which avoids all the code to play with dst, and matches how
rdtsc/rdmsr/wrmsr currently work.
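The rdtsc-style shape referred to here, reduced to a toy model with invented types (nothing below matches x86_emulate.c): the table marks the destination as none/implicit, and the handler writes the registers itself rather than going through the generic dst writeback.

```c
#include <stdint.h>

/* Invented stand-in register file, not struct cpu_user_regs. */
struct toy_regs { uint32_t eax, edx; };

/* DstNone-style handling: no generic destination writeback; the
 * handler fills EDX:EAX explicitly, the way rdtsc/rdmsr do. */
static void toy_rdtsc(struct toy_regs *regs, uint64_t tsc)
{
    regs->eax = (uint32_t)tsc;
    regs->edx = (uint32_t)(tsc >> 32);
}
```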

~Andrew



Re: [Xen-devel] [PATCH] x86emul: improve LOCK handling

2016-08-16 Thread Andrew Cooper
On 16/08/16 14:51, Jan Beulich wrote:
> Certain opcodes would so far not have got #UD when a LOCK prefix was
> present. Adjust this by
> - moving the too early generic check into destination operand decoding,
>   where DstNone and DstReg already have respective handling
> - switching source and destination of TEST r,r/m, for it to be taken
>   care of by aforementioned generic checks
> - explicitly dealing with all forms of CMP, SHLD, SHRD, as well as
>   TEST $imm,r/m
>
> To make the handling of opcodes F6 and F7 more obvious, reduce the
> amount of state set in the table, and adjust the respective switch()
> statement accordingly.
>
> Also eliminate the latent bug of the check in DstNone handling not
> considering the opcode extension set.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 


Re: [Xen-devel] [PATCH 1/3] x86emul: use DstEax also for {, I}{MUL, DIV}

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 16:08,  wrote:
> On 16/08/16 10:32, Jan Beulich wrote:
>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>> possible"): While it avoids just a few instructions, we should
>> nevertheless make use of generic code as much as possible.
>>
>> Signed-off-by: Jan Beulich 
> 
> This does reduce the amount of code, but it isn't strictly true.  The
> mul and div instructions are DstEaxEdx, as are a number of other
> instructions.
> 
> We shouldn't end up with special casing the eax part because we have an
> easy literal for it, but leaving the edx hard coded because that is
> easier to express in the current code.

I think the code reduction is nevertheless worth it, and reduction
here can only help readability imo. Would you be okay if I added
a comment to the place where the DstEax gets set here? (Note
that DstEdxEax wouldn't be true for 8-bit operations, so I'd rather
not use this as another alias or even a completely new operand
kind description. And please also remember that the tables don't
express all operands in all cases anyway - just consider
SHLD/SHRD.)
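The 8-bit caveat above, as a toy model (invented types; the widths are the architectural ones): MUL r/m8 writes its 16-bit product to AX (AH:AL) and leaves DX alone, so a DstEdxEax operand kind would mis-describe the byte forms.

```c
#include <stdint.h>

/* Invented stand-in, not the emulator's register state. */
struct toy_regs8 { uint16_t ax, dx; };

/* 8-bit MUL: AX = AL * src; DX is deliberately not written. */
static void toy_mul8(struct toy_regs8 *r, uint8_t src)
{
    uint8_t al = (uint8_t)r->ax;

    r->ax = (uint16_t)((uint16_t)al * src);
}
```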

Jan




Re: [Xen-devel] [PATCH] x86emul: introduce SrcEax for XCHG

2016-08-16 Thread Andrew Cooper
On 16/08/16 15:50, Jan Beulich wrote:
 On 16.08.16 at 16:32,  wrote:
>> On 16/08/16 14:51, Jan Beulich wrote:
>>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>>> possible"): While it avoids just a few instructions, we should
>>> nevertheless make use of generic code as much as possible. Here we can
>>> arrange for that by simply introducing SrcEax (which requires no other
>>> code adjustments).
>>>
>>> Signed-off-by: Jan Beulich 
>> Reviewed-by: Andrew Cooper 
> I'll take the liberty to correct the typo, to aid Lars' stats collection.

Oops yes.  Sorry about that.

~Andrew



Re: [Xen-devel] [PATCH] x86emul: introduce SrcEax for XCHG

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 16:32,  wrote:
> On 16/08/16 14:51, Jan Beulich wrote:
>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>> possible"): While it avoids just a few instructions, we should
>> nevertheless make use of generic code as much as possible. Here we can
>> arrange for that by simply introducing SrcEax (which requires no other
>> code adjustments).
>>
>> Signed-off-by: Jan Beulich 
> 
> Reviewed-by: Andrew Cooper 

I'll take the liberty to correct the typo, to aid Lars' stats collection.

Jan




Re: [Xen-devel] [PATCH] x86emul: introduce SrcEax for XCHG

2016-08-16 Thread Andrew Cooper
On 16/08/16 14:51, Jan Beulich wrote:
> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
> possible"): While it avoids just a few instructions, we should
> nevertheless make use of generic code as much as possible. Here we can
> arrange for that by simply introducing SrcEax (which requires no other
> code adjustments).
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 



Re: [Xen-devel] [PATCH 3/3] x86emul: re-order main 2-byte opcode switch() statement

2016-08-16 Thread Andrew Cooper
On 16/08/16 10:34, Jan Beulich wrote:
> This was meant to be numerically sorted (with reasonable exceptions),
> but we've managed to diverge from that.
>
> No functional change, only code movement.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 


Re: [Xen-devel] [PATCH 1/3] x86emul: use DstEax also for {, I}{MUL, DIV}

2016-08-16 Thread Andrew Cooper
On 16/08/16 10:32, Jan Beulich wrote:
> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
> possible"): While it avoids just a few instructions, we should
> nevertheless make use of generic code as much as possible.
>
> Signed-off-by: Jan Beulich 

This does reduce the amount of code, but it isn't strictly true.  The
mul and div instructions are DstEaxEdx, as are a number of other
instructions.

We shouldn't end up with special casing the eax part because we have an
easy literal for it, but leaving the edx hard coded because that is
easier to express in the current code.

~Andrew



[Xen-devel] [PATCH] x86emul: introduce SrcEax for XCHG

2016-08-16 Thread Jan Beulich
Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
possible"): While it avoids just a few instructions, we should
nevertheless make use of generic code as much as possible. Here we can
arrange for that by simply introducing SrcEax (which requires no other
code adjustments).

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -34,6 +34,7 @@
 #define SrcNone (0<<3) /* No source operand. */
 #define SrcImplicit (0<<3) /* Source operand is implicit in the opcode. */
 #define SrcReg  (1<<3) /* Register operand. */
+#define SrcEax  SrcReg /* Register EAX (aka SrcReg with no ModRM) */
 #define SrcMem  (2<<3) /* Memory operand. */
 #define SrcMem16(3<<3) /* Memory operand (16-bit). */
 #define SrcImm  (4<<3) /* Immediate operand. */
@@ -118,8 +119,10 @@ static uint8_t opcode_table[256] = {
 DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
 DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
 /* 0x90 - 0x97 */
-ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+DstImplicit|SrcEax, DstImplicit|SrcEax,
+DstImplicit|SrcEax, DstImplicit|SrcEax,
+DstImplicit|SrcEax, DstImplicit|SrcEax,
+DstImplicit|SrcEax, DstImplicit|SrcEax,
 /* 0x98 - 0x9F */
 ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
 ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps, ImplicitOps,
@@ -2491,12 +2494,11 @@ x86_emulate(
 case 0x90: /* nop / xchg %%r8,%%rax */
 if ( !(rex_prefix & 1) )
 break; /* nop */
+/* fall through */
 
 case 0x91 ... 0x97: /* xchg reg,%%rax */
-src.type = dst.type = OP_REG;
-src.bytes = dst.bytes = op_bytes;
-src.reg  = (unsigned long *)&_regs.eax;
-src.val  = *src.reg;
+dst.type = OP_REG;
+dst.bytes = op_bytes;
 dst.reg  = decode_register(
 (b & 7) | ((rex_prefix & 1) << 3), &_regs, 0);
 dst.val  = *dst.reg;





Re: [Xen-devel] [PATCH v5 4/4] x86/ioreq server: Reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 15:35,  wrote:
> Although really, it seems like having a "p2m_finish_type_change()"
> function which looked for misconfigured entries and reset them would be
> a step closer to the right direction, in that it could be re-used in
> other situations where the type change may not have finished.

That's a good idea imo.
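A minimal sketch of what such a "p2m_finish_type_change()" sweep could look like, over an invented flat table (the real version would walk the p2m under its lock and preempt via hypercall continuation):

```c
/* Invented stand-ins, not Xen's p2m types. */
enum toy_p2m_type { toy_ram_rw, toy_ioreq_server };

/* Reset any remaining entries of the old type back to the new one,
 * so a subsequent server mapping sees a clean table.  Returns the
 * number of entries changed. */
static unsigned int toy_finish_type_change(enum toy_p2m_type *table,
                                           unsigned long nr,
                                           enum toy_p2m_type from,
                                           enum toy_p2m_type to)
{
    unsigned int changed = 0;

    for (unsigned long gfn = 0; gfn < nr; gfn++)
        if (table[gfn] == from) {
            table[gfn] = to;
            changed++;
        }
    return changed;
}
```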

Jan




[Xen-devel] [PATCH] x86emul: improve LOCK handling

2016-08-16 Thread Jan Beulich
Certain opcodes would so far not have got #UD when a LOCK prefix was
present. Adjust this by
- moving the too early generic check into destination operand decoding,
  where DstNone and DstReg already have respective handling
- switching source and destination of TEST r,r/m, for it to be taken
  care of by aforementioned generic checks
- explicitly dealing with all forms of CMP, SHLD, SHRD, as well as
  TEST $imm,r/m

To make the handling of opcodes F6 and F7 more obvious, reduce the
amount of state set in the table, and adjust the respective switch()
statement accordingly.

Also eliminate the latent bug of the check in DstNone handling not
considering the opcode extension set.
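The rule being enforced can be condensed into a toy predicate over invented decode flags (the real table encodes far more state): a LOCK prefix is only acceptable when the instruction read-modify-writes a memory destination.

```c
#include <stdbool.h>

#define TOY_DST_MEM  (1u << 0)  /* destination is a memory operand */
#define TOY_MOV      (1u << 1)  /* plain load/store, not RMW */

/* Hypothetical check: should this decode accept a LOCK prefix? */
static bool toy_lock_ok(unsigned int d, bool lock_prefix)
{
    if (!lock_prefix)
        return true;            /* nothing to check */
    /* LOCK needs a memory destination that is actually modified. */
    return (d & TOY_DST_MEM) && !(d & TOY_MOV);
}
```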

Signed-off-by: Jan Beulich 
---
This will only apply cleanly on top of
https://lists.xenproject.org/archives/html/xen-devel/2016-08/msg01975.html.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -110,7 +110,7 @@ static uint8_t opcode_table[256] = {
 /* 0x80 - 0x87 */
 ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImm|ModRM,
 ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM,
-ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
 ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
 /* 0x88 - 0x8F */
 ByteOp|DstMem|SrcReg|ModRM|Mov, DstMem|SrcReg|ModRM|Mov,
@@ -169,8 +169,7 @@ static uint8_t opcode_table[256] = {
 DstEax|SrcImplicit, DstEax|SrcImplicit, ImplicitOps, ImplicitOps,
 /* 0xF0 - 0xF7 */
 0, ImplicitOps, 0, 0,
-ImplicitOps, ImplicitOps,
-ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|ModRM,
+ImplicitOps, ImplicitOps, ByteOp|ModRM, ModRM,
 /* 0xF8 - 0xFF */
 ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
 ImplicitOps, ImplicitOps, ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|ModRM
@@ -1651,9 +1650,6 @@ x86_emulate(
 }
 }
 
-/* Lock prefix is allowed only on RMW instructions. */
-generate_exception_if((d & Mov) && lock_prefix, EXC_UD, -1);
-
 /* ModRM and SIB bytes. */
 if ( d & ModRM )
 {
@@ -1729,13 +1725,17 @@ x86_emulate(
 switch ( modrm_reg & 7 )
 {
 case 0 ... 1: /* test */
-d = (d & ~SrcMask) | SrcImm;
+d |= DstMem | SrcImm;
+break;
+case 2: /* not */
+case 3: /* neg */
+d |= DstMem;
 break;
 case 4: /* mul */
 case 5: /* imul */
 case 6: /* div */
 case 7: /* idiv */
-d = (d & (ByteOp | ModRM)) | DstEax | SrcMem;
+d |= DstEax | SrcMem;
 break;
 }
 break;
@@ -1983,8 +1983,9 @@ x86_emulate(
  */
 generate_exception_if(
 lock_prefix &&
-((b < 0x20) || (b > 0x23)) && /* MOV CRn/DRn */
-(b != 0xc7),  /* CMPXCHG{8,16}B */
+(ext != ext_0f ||
+ (((b < 0x20) || (b > 0x23)) && /* MOV CRn/DRn */
+  (b != 0xc7))),/* CMPXCHG{8,16}B */
 EXC_UD, -1);
 dst.type = OP_NONE;
 break;
@@ -2062,6 +2063,8 @@ x86_emulate(
 goto done;
 dst.orig_val = dst.val;
 }
+else /* Lock prefix is allowed only on RMW instructions. */
+generate_exception_if(lock_prefix, EXC_UD, -1);
 break;
 }
 
@@ -2111,6 +2114,7 @@ x86_emulate(
 break;
 
 case 0x38 ... 0x3d: cmp: /* cmp */
+generate_exception_if(lock_prefix, EXC_UD, -1);
 emulate_2op_SrcV("cmp", src, dst, _regs.eflags);
 dst.type = OP_NONE;
 break;
@@ -3545,6 +3549,7 @@ x86_emulate(
 unsigned long u[2], v;
 
 case 0 ... 1: /* test */
+generate_exception_if(lock_prefix, EXC_UD, -1);
 goto test;
 case 2: /* not */
 dst.val = ~dst.val;
@@ -4507,6 +4512,7 @@ x86_emulate(
 case 0xad: /* shrd %%cl,r,r/m */ {
 uint8_t shift, width = dst.bytes << 3;
 
+generate_exception_if(lock_prefix, EXC_UD, -1);
 if ( b & 1 )
 shift = _regs.ecx;
 else



Re: [Xen-devel] [PATCH 3/4] x86emul: drop SrcInvalid

2016-08-16 Thread Andrew Cooper
On 16/08/16 12:27, Jan Beulich wrote:
 On 16.08.16 at 12:12,  wrote:
>> On 15/08/16 09:35, Jan Beulich wrote:
>>> As of commit a800e4f611 ("x86emul: drop pointless and add useful
>>> default cases") we no longer need the early bailing when "d == 0" (the
>>> default cases in the main switch() statements take care of that),
>>> removal of which renders internal_error() wrong and SrcInvalid useless.
>> "the removal of which".
> Is the article really necessary in that case? So far I thought I had
> learned it's optional in such situations.

The sentence sounds wrong without it.

>
>> However, SrcInvalid is already unused, irrespective of internal_error().
> Well, it's not explicitly referenced, but it having been zero and
> the zero checks now getting dropped ...
>
>> I don't however see how this renders internal_error() incorrect.
> ... both callers of internal_error() need to go away (perhaps I
> simply used unclear wording, which obviously I could improve:
> "renders both callers of internal_error() wrong"). IOW it is now
> no longer an internal error to reach these default labels.

Ah - that makes more sense.  With suitable wording adjustments,
Reviewed-by: Andrew Cooper 



Re: [Xen-devel] [PATCH v5 4/4] x86/ioreq server: Reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.

2016-08-16 Thread George Dunlap
On 12/07/16 10:02, Yu Zhang wrote:
> This patch resets p2m_ioreq_server entries back to p2m_ram_rw,
> after an ioreq server has unmapped. The resync is done both
> asynchronously with the current p2m_change_entry_type_global()
> interface, and synchronously by iterating the p2m table. The
> synchronous resetting is necessary because we need to guarantee
> the p2m table is clean before another ioreq server is mapped.
> And since the sweeping of p2m table could be time consuming,
> it is done with hypercall continuation. Asynchronous approach
> is also taken so that p2m_ioreq_server entries can also be reset
> when the HVM is scheduled between hypercall continuations.
> 
> This patch also disallows live migration, when there's still any
> outstanding p2m_ioreq_server entry left. The core reason is our
> current implementation of p2m_change_entry_type_global() can not
> tell the state of p2m_ioreq_server entries(can not decide if an
> entry is to be emulated or to be resynced).
> 
> Signed-off-by: Yu Zhang 

Thanks for doing this Yu Zhang!  A couple of comments.

> ---
> Cc: Paul Durrant 
> Cc: Jan Beulich 
> Cc: Andrew Cooper 
> Cc: George Dunlap 
> Cc: Jun Nakajima 
> Cc: Kevin Tian 
> ---
>  xen/arch/x86/hvm/hvm.c| 52 
> ---
>  xen/arch/x86/mm/hap/hap.c |  9 
>  xen/arch/x86/mm/p2m-ept.c |  6 +-
>  xen/arch/x86/mm/p2m-pt.c  | 10 +++--
>  xen/arch/x86/mm/p2m.c |  3 +++
>  xen/include/asm-x86/p2m.h |  5 -
>  6 files changed, 78 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 4d98cc6..e57c8b9 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5485,6 +5485,7 @@ static int hvmop_set_mem_type(
>  {
>  unsigned long pfn = a.first_pfn + start_iter;
>  p2m_type_t t;
> +struct p2m_domain *p2m = p2m_get_hostp2m(d);
>  
>  get_gfn_unshare(d, pfn, &t);
>  if ( p2m_is_paging(t) )
> @@ -5512,6 +5513,12 @@ static int hvmop_set_mem_type(
>  if ( rc )
>  goto out;
>  
> +if ( t == p2m_ram_rw && memtype[a.hvmmem_type] == p2m_ioreq_server )
> +p2m->ioreq.entry_count++;
> +
> +if ( t == p2m_ioreq_server && memtype[a.hvmmem_type] == p2m_ram_rw )
> +p2m->ioreq.entry_count--;

Changing these here might make sense if they were *only* changed in the
hvm code; but as you also have to modify this value in the p2m code (in
resolve_misconfig), I think it makes sense to try to do all the counting
in the p2m code.  That would take care of any locking issues as well.

Logically the most sensible place to do it would be
atomic_write_ept_entry(); that would make it basically impossible to
miss a case where we change to or from p2m_ioreq_server.

On the other hand, it would mean adding code to a core path for all p2m
updates.

> +
>  /* Check for continuation if it's not the last iteration */
>  if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
>   hypercall_preempt_check() )
> @@ -5530,11 +5537,13 @@ static int hvmop_set_mem_type(
>  }
>  
>  static int hvmop_map_mem_type_to_ioreq_server(
> -XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop)
> +XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop,
> +unsigned long *iter)
>  {
>  xen_hvm_map_mem_type_to_ioreq_server_t op;
>  struct domain *d;
>  int rc;
> +unsigned long gfn = *iter;
>  
>  if ( copy_from_guest(&op, uop, 1) )
>  return -EFAULT;
> @@ -5559,7 +5568,42 @@ static int hvmop_map_mem_type_to_ioreq_server(
>  if ( rc != 0 )
>  goto out;
>  
> -rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
> +if ( gfn == 0 || op.flags != 0 )
> +rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
> +
> +/*
> + * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
> + * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
> + */
> +if ( op.flags == 0 && rc == 0 )
> +{
> +struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +while ( gfn <= p2m->max_mapped_pfn )
> +{
> +p2m_type_t t;
> +
> +if ( p2m->ioreq.entry_count == 0 )
> +break;

Any reason not to make this part of the while() condition?

> +
> +get_gfn_unshare(d, gfn, &t);

This will completely unshare all pages in a domain below the last
dangling p2m_ioreq_server page.  I don't think unsharing is necessary at
all; if it's shared, it certainly won't be of type p2m_ioreq_server.

Actually -- it seems like ept_get_entry() really should be calling
resolve_misconfig(), the same way that ept_set_entry() does.  In that
case, simply 

Re: [Xen-devel] [PATCH 4/4] x86emul: use DstEax also for XCHG

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 14:46,  wrote:
> On 16/08/16 12:31, Jan Beulich wrote:
> On 16.08.16 at 12:59,  wrote:
>>> On 15/08/16 09:35, Jan Beulich wrote:
 Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
 possible"): While it avoids just a few instructions, we should
 nevertheless make use of generic code as much as possible. Here we can
 arrange for that by simply swapping source and destination (as they're
 interchangeable).

 Signed-off-by: Jan Beulich 

 --- a/xen/arch/x86/x86_emulate/x86_emulate.c
 +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
 @@ -118,8 +118,10 @@ static uint8_t opcode_table[256] = {
  DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
  DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
  /* 0x90 - 0x97 */
 -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
 -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
 +DstEax|SrcImplicit, DstEax|SrcImplicit,
 +DstEax|SrcImplicit, DstEax|SrcImplicit,
 +DstEax|SrcImplicit, DstEax|SrcImplicit,
 +DstEax|SrcImplicit, DstEax|SrcImplicit,
>>> Please add a comment explaining that DstEax is interchangeable with
>>> SrcEax in the xchg case.  Otherwise, the decode table reads incorrectly.
>> Do you mean me to do so even considering there's no SrcEax
>> (yet, it'll come with the not yet posted patch finally doing the
>> split off of the decode part)? (Nor can I see why the decode
>> table reads incorrectly the way it is above.)
> 
> xchg is explicitly specified to have SrcEax, so people comparing the
> instruction manuals to our implementation can be forgiven for thinking
> that our code is wrong if it has DstEax instead.
> 
> If SrcEax is imminent then perhaps it doesn't matter too much.

Otoh I could obviously re-order this with the other one and
use SrcEax here, making for a slightly smaller overall change.
Or introduce SrcEax right here. Maybe that's the most natural
route to go.

Jan




Re: [Xen-devel] [RFC PATCH] tools: remove blktap2 related code and documentation

2016-08-16 Thread Yang Hongyang
On Mon, Aug 15, 2016 at 6:50 PM, Wei Liu  wrote:

> Blktap2 is effectively dead code for a few years.
>
> Notable changes in this patch:
>
> 0. Unhook blktap2 from build system
> 1. Now libxl no longer supports TAP ask backend, appropriate assertions
>

s/ask/disk/

   are added and some code paths now return ERROR_FAIL
> 2. Tap is no longer a supported backend in doc
> 3. Remove relevant entries in MAINTAINERS
>
> A patch to actually remove blktap2 directory will come later.
>
> Signed-off-by: Wei Liu 
> ---
> Compile-test only at this stage.
>
> Ross, do you have any objection for this? I haven't seen update from the
> joint blktap2 maintenance for a few months.
>
> Cc: Andrew Cooper 
> Cc: George Dunlap 
> Cc: Ian Jackson 
> Cc: Jan Beulich 
> Cc: Konrad Rzeszutek Wilk 
> Cc: Stefano Stabellini 
> Cc: Tim Deegan 
> Cc: Shriram Rajagopalan 
> Cc: Yang Hongyang 
> Cc: Ross Philipson 
> Cc: Lars Kurth 
> ---
>  .gitignore  | 14 --
>  INSTALL |  4 --
>  MAINTAINERS |  2 -
>  config/Tools.mk.in  |  1 -
>  docs/misc/xl-disk-configuration.txt |  2 +-
>  tools/Makefile  |  1 -
>  tools/Rules.mk  | 17 +--
>  tools/config.h.in   |  6 ---
>  tools/configure | 83 
>  tools/configure.ac  | 22 -
>  tools/libxl/Makefile|  8 +---
>  tools/libxl/check-xl-disk-parse |  2 +-
>  tools/libxl/libxl.c | 25 ++
>  tools/libxl/libxl_blktap2.c | 94 --
> ---
>  tools/libxl/libxl_device.c  | 32 ++---
>  tools/libxl/libxl_dm.c  | 17 ++-
>  tools/libxl/libxl_internal.h| 19 
>  tools/libxl/libxl_noblktap2.c   | 42 -
>  tools/xenstore/hashtable.c  |  5 --
>  tools/xenstore/hashtable.h  |  5 --
>  tools/xenstore/hashtable_private.h  |  5 --
>  21 files changed, 13 insertions(+), 393 deletions(-)
>  delete mode 100644 tools/libxl/libxl_blktap2.c
>  delete mode 100644 tools/libxl/libxl_noblktap2.c
>
> diff --git a/.gitignore b/.gitignore
> index d193820..ea2 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -97,19 +97,6 @@ tools/libs/evtchn/headers.chk
>  tools/libs/gnttab/headers.chk
>  tools/libs/call/headers.chk
>  tools/libs/foreignmemory/headers.chk
> -tools/blktap2/daemon/blktapctrl
> -tools/blktap2/drivers/img2qcow
> -tools/blktap2/drivers/lock-util
> -tools/blktap2/drivers/qcow-create
> -tools/blktap2/drivers/qcow2raw
> -tools/blktap2/drivers/tapdisk
> -tools/blktap2/drivers/tapdisk-client
> -tools/blktap2/drivers/tapdisk-diff
> -tools/blktap2/drivers/tapdisk-stream
> -tools/blktap2/drivers/tapdisk2
> -tools/blktap2/drivers/td-util
> -tools/blktap2/vhd/vhd-update
> -tools/blktap2/vhd/vhd-util
>  tools/console/xenconsole
>  tools/console/xenconsoled
>  tools/console/client/_paths.h
> @@ -327,7 +314,6 @@ tools/libxl/*.pyc
>  tools/libxl/libxl-save-helper
>  tools/libxl/test_timedereg
>  tools/libxl/test_fdderegrace
> -tools/blktap2/control/tap-ctl
>  tools/firmware/etherboot/eb-roms.h
>  tools/firmware/etherboot/gpxe-git-snapshot.tar.gz
>  tools/misc/xenwatchdogd
> diff --git a/INSTALL b/INSTALL
> index 9759354..3b255c7 100644
> --- a/INSTALL
> +++ b/INSTALL
> @@ -144,10 +144,6 @@ this detection and the sysv runlevel scripts have to
> be used.
>--with-systemd=DIR
>--with-systemd-modules-load=DIR
>
> -The old backend drivers are disabled because qdisk is now the default.
> -This option can be used to build them anyway.
> -  --enable-blktap2
> -
>  Build various stubom components, some are only example code. Its usually
>  enough to specify just --enable-stubdom and leave these options alone.
>--enable-ioemu-stubdom
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 97720a8..d54795b 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -322,8 +322,6 @@ M:  Shriram Rajagopalan 
>  M: Yang Hongyang 
>  S: Maintained
>  F: docs/README.remus
> -F: tools/blktap2/drivers/block-remus.c
> -F: tools/blktap2/drivers/hashtable*
>  F: tools/libxl/libxl_remus_*
>  F: tools/libxl/libxl_netbuffer.c
>  F: tools/libxl/libxl_nonetbuffer.c
> diff --git a/config/Tools.mk.in b/config/Tools.mk.in
> index 0f79f4e..511406c 100644
> --- a/config/Tools.mk.in
> +++ b/config/Tools.mk.in
> @@ -56,7 +56,6 @@ CONFIG_ROMBIOS  := @rombios@
>  CONFIG_SEABIOS  := @seabios@
>  CONFIG_QEMU_TRAD:= @qemu_traditional@
>  CONFIG_QEMU_XEN := @qemu_xen@
> -CONFIG_BLKTAP2  := @blktap2@
>  

Re: [Xen-devel] [PATCH 4/4] x86emul: use DstEax also for XCHG

2016-08-16 Thread Andrew Cooper
On 16/08/16 12:31, Jan Beulich wrote:
 On 16.08.16 at 12:59,  wrote:
>> On 15/08/16 09:35, Jan Beulich wrote:
>>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>>> possible"): While it avoids just a few instructions, we should
>>> nevertheless make use of generic code as much as possible. Here we can
>>> arrange for that by simply swapping source and destination (as they're
>>> interchangeable).
>>>
>>> Signed-off-by: Jan Beulich 
>>>
>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>> @@ -118,8 +118,10 @@ static uint8_t opcode_table[256] = {
>>>  DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
>>>  DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
>>>  /* 0x90 - 0x97 */
>>> -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
>>> -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
>>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>> Please add a comment explaining that DstEax is interchangeable with
>> SrcEax in the xchg case.  Otherwise, the decode table reads incorrectly.
> Do you mean me to do so even considering there's no SrcEax
> (yet, it'll come with the not yet posted patch finally doing the
> split off of the decode part)? (Nor can I see why the decode
> table reads incorrectly the way it is above.)

xchg is explicitly specified to have SrcEax, so people comparing the
instruction manuals to our implementation can be forgiven for thinking
that our code is wrong if it has DstEax instead.

If SrcEax is imminent, then perhaps it doesn't matter too much.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 4/4] x86emul: use DstEax also for XCHG

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 12:59,  wrote:
> On 15/08/16 09:35, Jan Beulich wrote:
>> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
>> possible"): While it avoids just a few instructions, we should
>> nevertheless make use of generic code as much as possible. Here we can
>> arrange for that by simply swapping source and destination (as they're
>> interchangeable).
>>
>> Signed-off-by: Jan Beulich 
>>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -118,8 +118,10 @@ static uint8_t opcode_table[256] = {
>>  DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
>>  DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
>>  /* 0x90 - 0x97 */
>> -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
>> -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
>> +DstEax|SrcImplicit, DstEax|SrcImplicit,
> 
> Please add a comment explaining that DstEax is interchangeable with
> SrcEax in the xchg case.  Otherwise, the decode table reads incorrectly.

Do you mean me to do so even considering there's no SrcEax
(yet, it'll come with the not yet posted patch finally doing the
split off of the decode part)? (Nor can I see why the decode
table reads incorrectly the way it is above.)

>> @@ -2491,16 +2493,14 @@ x86_emulate(
>>  
>>  case 0x90: /* nop / xchg %%r8,%%rax */
>>  if ( !(rex_prefix & 1) )
>> -break; /* nop */
>> +goto no_writeback; /* nop */
> 
> Could you add an explicit /* fallthrough */ here?  The only reason it
> isn't currently a coverity defect is because of the /* nop */ comment.

Will do; I actually had considered that, but then thought the
present comment is enough to silence Coverity.

> With these two, Reviewed-by: Andrew Cooper 

I'll wait with using this until the first point above got clarified.

Jan




Re: [Xen-devel] [PATCH 3/4] x86emul: drop SrcInvalid

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 12:12,  wrote:
> On 15/08/16 09:35, Jan Beulich wrote:
>> As of commit a800e4f611 ("x86emul: drop pointless and add useful
>> default cases") we no longer need the early bailing when "d == 0" (the
>> default cases in the main switch() statements take care of that),
>> removal of which renders internal_error() wrong and SrcInvalid useless.
> 
> "the removal of which".

Is the article really necessary in that case? So far I thought I had
learned it's optional in such situations.

> However, SrcInvalid is already unused, irrespective of internal_error().

Well, it's not explicitly referenced, but it having been zero and
the zero checks now getting dropped ...

> I don't however see how this renders internal_error() incorrect.

... both callers of internal_error() need to go away (perhaps I
simply used unclear wording, which obviously I could improve:
"renders both callers of internal_error() wrong"). IOW it is now
no longer an internal error to reach these default labels.

Jan




Re: [Xen-devel] dependences for backporting to 4.6 [was: Re: [PATCH 2/3] xen: Have schedulers revise initial placement]

2016-08-16 Thread Jan Beulich
>>> On 16.08.16 at 12:21,  wrote:
> On Fri, 2016-08-12 at 07:53 -0600, Jan Beulich wrote:
>> Same
>> for 4.5 then, where the backport adjusted for 4.6 then applied
>> cleanly.
>> 
> So, you've done the backports yourself, and you don't want/need me to
> do them, right?

Indeed.

> I'm asking because that's how I read what you're saying here, but I
> don't see that having happened in staging-{4.5,4.6}. If that's me
> failing to check, or checking in the wrong place, sorry for the noise.

Well, I do things in batches, so these will now simply be part of the
next batch.

Jan




Re: [Xen-devel] [PATCH 4/4] x86emul: use DstEax also for XCHG

2016-08-16 Thread Andrew Cooper
On 15/08/16 09:35, Jan Beulich wrote:
> Just like said in commit c0bc0adf24 ("x86emul: use DstEax where
> possible"): While it avoids just a few instructions, we should
> nevertheless make use of generic code as much as possible. Here we can
> arrange for that by simply swapping source and destination (as they're
> interchangeable).
>
> Signed-off-by: Jan Beulich 
>
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -118,8 +118,10 @@ static uint8_t opcode_table[256] = {
>  DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
>  DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
>  /* 0x90 - 0x97 */
> -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
> -ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
> +DstEax|SrcImplicit, DstEax|SrcImplicit,
> +DstEax|SrcImplicit, DstEax|SrcImplicit,
> +DstEax|SrcImplicit, DstEax|SrcImplicit,
> +DstEax|SrcImplicit, DstEax|SrcImplicit,

Please add a comment explaining that DstEax is interchangeable with
SrcEax in the xchg case.  Otherwise, the decode table reads incorrectly.

>  /* 0x98 - 0x9F */
>  ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
>  ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps, ImplicitOps,
> @@ -2491,16 +2493,14 @@ x86_emulate(
>  
>  case 0x90: /* nop / xchg %%r8,%%rax */
>  if ( !(rex_prefix & 1) )
> -break; /* nop */
> +goto no_writeback; /* nop */

Could you add an explicit /* fallthrough */ here?  The only reason it
isn't currently a coverity defect is because of the /* nop */ comment.

With these two, Reviewed-by: Andrew Cooper 

>  
>  case 0x91 ... 0x97: /* xchg reg,%%rax */
> -src.type = dst.type = OP_REG;
> -src.bytes = dst.bytes = op_bytes;
> -src.reg  = (unsigned long *)&_regs.eax;
> -src.val  = *src.reg;
> -dst.reg  = decode_register(
> +src.type = OP_REG;
> +src.bytes = op_bytes;
> +src.reg  = decode_register(
>  (b & 7) | ((rex_prefix & 1) << 3), &_regs, 0);
> -dst.val  = *dst.reg;
> +src.val  = *src.reg;
>  goto xchg;
>  
>  case 0x98: /* cbw/cwde/cdqe */
>
>
>




[Xen-devel] [xen-4.7-testing test] 100499: tolerable FAIL - PUSHED

2016-08-16 Thread osstest service owner
flight 100499 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/100499/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2 19 guest-start/debian.repeat fail in 100491 pass in 100499
 test-armhf-armhf-xl-credit2   6 xen-boot   fail pass in 100491

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  9 debian-installfail REGR. vs. 99972
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 99972
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 99972

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 100491 never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail in 100491 never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  822464961ae1bac44dcabb049255d61d5511e368
baseline version:
 xen  f2160ba6e60e990060de96f2fc9be645f51f5995

Last test of basis   99972  2016-08-05 23:11:40 Z   10 days
Testing same since   100491  2016-08-15 10:42:45 Z0 days2 attempts


People who touched revisions under test:
  Bob Liu 
  Boris Ostrovsky 
  Ian Jackson 
  Juergen Gross 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt

[Xen-devel] [PATCH v4 00/16] Xen ARM DomU ACPI support

2016-08-16 Thread Shannon Zhao
From: Shannon Zhao 

The design of this feature is described below.
Firstly, the toolstack (libxl) generates the ACPI tables according to the
number of vcpus and the GIC controller.

Then, it copies these ACPI tables to DomU non-RAM memory map space and
passes them to UEFI firmware through the "ARM multiboot" protocol.

Finally, UEFI gets the ACPI tables through the "ARM multiboot" protocol,
installs these tables in the usual way, and passes both ACPI and DT
information to the Xen DomU.

Currently libxl only generates the RSDP, XSDT, GTDT, MADT, FADT and DSDT
tables, since that is enough for now.

This has been tested using a guest kernel with the Dom0 ACPI support
patches, which can be fetched from Linux master or:
https://git.kernel.org/cgit/linux/kernel/git/mfleming/efi.git/log/?h=efi/arm-xen

The UEFI binary can be fetched from the link below or built from the edk2
master branch:
http://people.linaro.org/~shannon.zhao/DomU_ACPI/XEN_EFI.fd

This series can be fetched from:
https://git.linaro.org/people/shannon.zhao/xen.git  domu_acpi_v4

Changes since v3:
* use goto style error handle
* unify configuration option for ACPI
* use extended_checksum instead of checksum in RSDP table
* only require iasl on arm64
* count acpi tables size for maxmem

Changes since v2:
* return error for 32bit domain with acpi enabled
* include actypes.h to reuse the definitions
* rename libxl_arm_acpi.h to libxl_arm.h
* use ACPI_MADT_ENABLED
* rebased on top of Boris's ACPI branch to reuse mk_dsdt.c

Changes since v1:
* move ACPI tables generation codes to a new file
* use static asl file to generate DSDT table and include processor
  device objects
* assign a non-RAM map for ACPI blob
* use existing ACPI table definitions under xen/include/acpi/
* add a configuration for user to enable/disable ACPI generation
* calculate the ACPI table checksum

Shannon Zhao (16):
  tools/libxl: Add an unified configuration option for ACPI
  libxl/arm: prepare for constructing ACPI tables
  libxl/arm: Generate static ACPI DSDT table
  libxl/arm: Estimate the size of ACPI tables
  libxl/arm: Construct ACPI RSDP table
  libxl/arm: Construct ACPI XSDT table
  libxl/arm: Construct ACPI GTDT table
  libxl/arm: Factor MPIDR computing codes out as a helper
  libxl/arm: Construct ACPI MADT table
  libxl/arm: Construct ACPI FADT table
  libxl/arm: Construct ACPI DSDT table
  libxl/arm: Factor finalise_one_memory_node as a generic function
  libxl/arm: Add ACPI module
  public/hvm/params.h: Add macros for HVM_PARAM_CALLBACK_TYPE_PPI
  libxl/arm: Initialize domain param HVM_PARAM_CALLBACK_IRQ
  libxl/arm: Add the size of ACPI tables to maxmem

 docs/misc/arm/device-tree/acpi.txt |  24 +++
 tools/configure|   2 +-
 tools/libacpi/Makefile |  15 +-
 tools/libacpi/mk_dsdt.c|  51 --
 tools/libxl/Makefile   |   7 +
 tools/libxl/libxl_arm.c|  87 +++--
 tools/libxl/libxl_arm.h|  55 ++
 tools/libxl/libxl_arm_acpi.c   | 365 +
 tools/libxl/libxl_create.c |   9 +-
 tools/libxl/libxl_dm.c |   6 +-
 tools/libxl/libxl_types.idl|   4 +
 tools/libxl/xl_cmdimpl.c   |   2 +-
 xen/arch/arm/domain_build.c|   8 +-
 xen/include/public/arch-arm.h  |   7 +
 xen/include/public/hvm/params.h|   4 +
 15 files changed, 612 insertions(+), 34 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/acpi.txt
 create mode 100644 tools/libxl/libxl_arm.h
 create mode 100644 tools/libxl/libxl_arm_acpi.c

-- 
2.0.4





[Xen-devel] [PATCH v4 14/16] public/hvm/params.h: Add macros for HVM_PARAM_CALLBACK_TYPE_PPI

2016-08-16 Thread Shannon Zhao
From: Shannon Zhao 

Add macros for HVM_PARAM_CALLBACK_TYPE_PPI operation values and update
them in evtchn_fixup().

Signed-off-by: Shannon Zhao 
---
 xen/arch/arm/domain_build.c | 8 +---
 xen/include/public/hvm/params.h | 4 
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 60db9e4..94cd3ce 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2019,9 +2019,11 @@ static void evtchn_fixup(struct domain *d, struct kernel_info *kinfo)
d->arch.evtchn_irq);
 
 /* Set the value of domain param HVM_PARAM_CALLBACK_IRQ */
-val = (u64)HVM_PARAM_CALLBACK_TYPE_PPI << 56;
-val |= (2 << 8); /* Active-low level-sensitive  */
-val |= d->arch.evtchn_irq & 0xff;
+val = (u64)HVM_PARAM_CALLBACK_TYPE_PPI << HVM_PARAM_CALLBACK_IRQ_TYPE_SHIFT;
+/* Active-low level-sensitive  */
+val |= (HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_LOW_LEVEL <<
+HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_SHIFT);
+val |= d->arch.evtchn_irq & HVM_PARAM_CALLBACK_TYPE_PPI_MASK;
 d->arch.hvm_domain.params[HVM_PARAM_CALLBACK_IRQ] = val;
 
 /*
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index f7338a3..8a0327d 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -30,6 +30,7 @@
  */
 
 #define HVM_PARAM_CALLBACK_IRQ 0
+#define HVM_PARAM_CALLBACK_IRQ_TYPE_SHIFT 56
 /*
  * How should CPU0 event-channel notifications be delivered?
  *
@@ -66,6 +67,9 @@
  * This is only used by ARM/ARM64 and masking/eoi the interrupt associated to
  * the notification is handled by the interrupt controller.
  */
+#define HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_SHIFT 8
+#define HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_LOW_LEVEL 2
+#define HVM_PARAM_CALLBACK_TYPE_PPI_MASK   0xff
 #endif
 
 /*
-- 
2.0.4





[Xen-devel] [PATCH v4 15/16] libxl/arm: Initialize domain param HVM_PARAM_CALLBACK_IRQ

2016-08-16 Thread Shannon Zhao
From: Shannon Zhao 

The guest kernel will get the event channel interrupt information via
domain param HVM_PARAM_CALLBACK_IRQ. Initialize it here.

Signed-off-by: Shannon Zhao 
---
 tools/libxl/libxl_arm.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 11a6f6e..d436167 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -900,8 +900,21 @@ int libxl__arch_domain_init_hw_description(libxl__gc *gc,
struct xc_dom_image *dom)
 {
 int rc;
+uint64_t val;
 
 assert(info->type == LIBXL_DOMAIN_TYPE_PV);
+
+/* Set the value of domain param HVM_PARAM_CALLBACK_IRQ. */
+val = (uint64_t)HVM_PARAM_CALLBACK_TYPE_PPI << HVM_PARAM_CALLBACK_IRQ_TYPE_SHIFT;
+/* Active-low level-sensitive  */
+val |= (HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_LOW_LEVEL <<
+HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_SHIFT);
+val |= GUEST_EVTCHN_PPI & HVM_PARAM_CALLBACK_TYPE_PPI_MASK;
+rc = xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CALLBACK_IRQ,
+  val);
+if (rc)
+return rc;
+
 rc = libxl__prepare_dtb(gc, info, state, dom);
 if (rc) goto out;
 
-- 
2.0.4





[Xen-devel] [PATCH v4 03/16] libxl/arm: Generate static ACPI DSDT table

2016-08-16 Thread Shannon Zhao
From: Shannon Zhao 

It uses a static DSDT table, the same way x86 does. Currently the DSDT
table only contains processor device objects, and it generates the
maximum number of objects, which is 128 for now.

Also, only check for iasl on aarch64 in configure, since ACPI on ARM32
is not supported.

Signed-off-by: Shannon Zhao 
---
 tools/configure   |  2 +-
 tools/libacpi/Makefile| 15 -
 tools/libacpi/mk_dsdt.c   | 51 ---
 tools/libxl/Makefile  |  5 -
 tools/libxl/libxl_arm_acpi.c  |  5 +
 xen/include/public/arch-arm.h |  3 +++
 6 files changed, 65 insertions(+), 16 deletions(-)

diff --git a/tools/configure b/tools/configure
index 5b5dcce..48239c0 100755
--- a/tools/configure
+++ b/tools/configure
@@ -7458,7 +7458,7 @@ then
 as_fn_error $? "Unable to find xgettext, please install xgettext" "$LINENO" 5
 fi
 case "$host_cpu" in
-i[3456]86|x86_64)
+i[3456]86|x86_64|aarch64)
 # Extract the first word of "iasl", so it can be a program name with args.
 set dummy iasl; ac_word=$2
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
index d741ac5..7f50a33 100644
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -19,6 +19,7 @@ MK_DSDT = $(ACPI_BUILD_DIR)/mk_dsdt
 
 # Sources to be generated
 C_SRC = $(addprefix $(ACPI_BUILD_DIR)/, dsdt_anycpu.c dsdt_15cpu.c  dsdt_anycpu_qemu_xen.c dsdt_pvh.c)
+C_SRC += $(ACPI_BUILD_DIR)/dsdt_anycpu_arm.c
 H_SRC = $(addprefix $(ACPI_BUILD_DIR)/, ssdt_s3.h ssdt_s4.h ssdt_pm.h ssdt_tpm.h)
 
 vpath iasl $(PATH)
@@ -32,7 +33,7 @@ $(H_SRC): $(ACPI_BUILD_DIR)/%.h: %.asl iasl
cd $(CURDIR)
 
 $(MK_DSDT): mk_dsdt.c
-   $(HOSTCC) $(HOSTCFLAGS) $(CFLAGS_xeninclude) -o $@ mk_dsdt.c
+   $(HOSTCC) $(HOSTCFLAGS) $(CFLAGS_xeninclude) -D__XEN_TOOLS__ -o $@ mk_dsdt.c
 
 $(ACPI_BUILD_DIR)/dsdt_anycpu_qemu_xen.asl: dsdt.asl dsdt_acpi_info.asl $(MK_DSDT)
awk 'NR > 1 {print s} {s=$$0}' $< > $@
@@ -62,6 +63,18 @@ $(ACPI_BUILD_DIR)/dsdt_pvh.c: iasl $(ACPI_BUILD_DIR)/dsdt_pvh.asl
echo "int dsdt_pvh_len=sizeof(dsdt_pvh);" >>$@
rm -f $(ACPI_BUILD_DIR)/$*.aml $(ACPI_BUILD_DIR)/$*.hex
 
+$(ACPI_BUILD_DIR)/dsdt_anycpu_arm.asl: $(MK_DSDT)
+   printf "DefinitionBlock (\"DSDT.aml\", \"DSDT\", 3, \"XenARM\", \"Xen DSDT\", 1)\n{" > $@
+   $(MK_DSDT) --debug=$(debug) --arch arm >> $@
+
+$(ACPI_BUILD_DIR)/dsdt_anycpu_arm.c: iasl $(ACPI_BUILD_DIR)/dsdt_anycpu_arm.asl
+   cd $(ACPI_BUILD_DIR)
+   iasl -vs -p $* -tc $(ACPI_BUILD_DIR)/$*.asl
+   sed -e 's/AmlCode/$*/g' $*.hex >$@
+   echo "int $*_len=sizeof($*);" >>$@
+   rm -f $*.aml $*.hex
+   cd $(CURDIR)
+
 iasl:
@echo
@echo "ACPI ASL compiler (iasl) is needed"
diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index 7d76784..f3ab28f 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static unsigned int indent_level;
 static bool debug = false;
@@ -99,6 +100,7 @@ static struct option options[] = {
 { "dm-version", 1, 0, 'q' },
 { "debug", 1, 0, 'd' },
 { "no-dm", 0, 0, 'n' },
+{ "arch", 1, 0, 'a' },
 { 0, 0, 0, 0 }
 };
 
@@ -106,7 +108,7 @@ int main(int argc, char **argv)
 {
 unsigned int slot, dev, intx, link, cpu, max_cpus = HVM_MAX_VCPUS;
 dm_version dm_version = QEMU_XEN_TRADITIONAL;
-bool no_dm = 0;
+bool no_dm = 0, arch_is_arm = false;
 
 for ( ; ; )
 {
@@ -145,6 +147,10 @@ int main(int argc, char **argv)
 case 'n':
 no_dm = 1;
 break;
+case 'a':
+if (strcmp(optarg, "arm") == 0)
+arch_is_arm = true;
+break;
 case 'd':
 if (*optarg == 'y')
 debug = true;
@@ -154,6 +160,9 @@ int main(int argc, char **argv)
 }
 }
 
+if (arch_is_arm)
+max_cpus = GUEST_MAX_VCPUS;
+
 /**** DSDT DefinitionBlock start ****/
 /* (we append to existing DSDT definition block) */
 indent_level++;
@@ -161,19 +170,21 @@ int main(int argc, char **argv)
 /**** Processor start ****/
 push_block("Scope", "\\_SB");
 
-/* MADT checksum */
-stmt("OperationRegion", "MSUM, SystemMemory, \\_SB.MSUA, 1");
-push_block("Field", "MSUM, ByteAcc, NoLock, Preserve");
-indent(); printf("MSU, 8\n");
-pop_block();
+if (!arch_is_arm) {
+/* MADT checksum */
+stmt("OperationRegion", "MSUM, SystemMemory, \\_SB.MSUA, 1");
+push_block("Field", "MSUM, ByteAcc, NoLock, Preserve");
+indent(); printf("MSU, 8\n");
+pop_block();
 
-/* Processor object helpers. */
-push_block("Method", "PMAT, 2");
-push_block("If", "LLess(Arg0, NCPU)");
-stmt("Return", "ToBuffer(Arg1)");
-pop_block();
-stmt("Return", "Buffer() {0, 8, 0xff, 0xff, 0, 0, 0, 

Re: [Xen-devel] [V4 PATCH 2/2] mips/panic: Replace smp_send_stop() with kdump friendly version in panic path

2016-08-16 Thread 河合英宏 / KAWAI,HIDEHIRO
> From: Corey Minyard [mailto:cminy...@mvista.com]
> Sent: Tuesday, August 16, 2016 3:02 AM
> On 08/15/2016 12:06 PM, Corey Minyard wrote:
> > On 08/15/2016 06:35 AM, 河合英宏 / KAWAI,HIDEHIRO wrote:
> >> Hi Corey,
> >>
> >>> From: Corey Minyard [mailto:cminy...@mvista.com]
> >>> Sent: Friday, August 12, 2016 10:56 PM
> >>> I'll try to test this, but I have one comment inline...
> >> Thank you very much!
> >>
> >>> On 08/11/2016 10:17 PM, Dave Young wrote:
>  On 08/10/16 at 05:09pm, Hidehiro Kawai wrote:
> >> [snip]
> > diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c
> > index 610f0f3..1723b17 100644
> > --- a/arch/mips/kernel/crash.c
> > +++ b/arch/mips/kernel/crash.c
> > @@ -47,9 +47,14 @@ static void crash_shutdown_secondary(void
> > *passed_regs)
> >
> >static void crash_kexec_prepare_cpus(void)
> >{
> > +static int cpus_stopped;
> >unsigned int msecs;
> > +unsigned int ncpus;
> >
> > -unsigned int ncpus = num_online_cpus() - 1;/* Excluding the
> > panic cpu */
> > +if (cpus_stopped)
> > +return;
> >>> Wouldn't you want an atomic operation and some special handling here to
> >>> ensure that only one CPU does this?  So if a CPU comes in here and
> >>> another CPU is already in the process stopping the CPUs it won't
> >>> result in a
> >>> deadlock.
> >> Because this function can be called only one panicking CPU,
> >> there is no problem.
> >>
> >> There are two paths which crash_kexec_prepare_cpus is called.
> >>
> >> Path 1 (panic path):
> >> panic()
> >>crash_smp_send_stop()
> >>  crash_kexec_prepare_cpus()
> >>
> >> Path 2 (oops path):
> >> crash_kexec()
> >>__crash_kexec()
> >>  machine_crash_shutdown()
> >>default_machine_crash_shutdown() // for MIPS
> >>  crash_kexec_prepare_cpus()
> >>
> >> Here, panic() and crash_kexec() run exclusively via
> >> panic_cpu atomic variable.  So we can use cpus_stopped as
> >> normal variable.
> >
> > Ok, if the code can only be entered once, what's the purpose of
> > cpus_stopped?
> > I guess that's what confused me.  You are right, the panic_cpu atomic
> > should
> > keep this on a single CPU.
> 
> Never mind, I see the path through panic() where that is required. My
> question
> below still remains, though.
> 
> > Also, panic() will call panic_smp_self_stop() if it finds another CPU
> > has already
> > called panic, which will just spin with interrupts off by default. I
> > didn't see a
> > definition for it in MIPS, wouldn't it need to be overridden to avoid
> > a deadlock?

No deadlock should happen. The panicking CPU calls
crash_kexec_prepare_cpus(), which issues an IPI and waits for the other
CPUs to handle it.  If some of them are looping in panic_smp_self_stop()
with interrupts disabled, they can't handle the IPI.  But that's not a
severe problem: crash_kexec_prepare_cpus() has a timeout mechanism, and
it will exit the wait loop when the timeout expires.

In that case, of course, their registers are not saved.  This could be
improved, but I'd like to entrust that improvement to the MIPS experts.
It is a separate issue.

Best regards,

Hidehiro Kawai



[Xen-devel] [PATCH v4 01/16] tools/libxl: Add an unified configuration option for ACPI

2016-08-16 Thread Shannon Zhao
From: Shannon Zhao 

Since the existing configuration option "u.hvm.acpi" is x86-specific and
we want to reuse it on ARM as well, add a unified option "acpi" for both
x86 and ARM; for ARM it is disabled by default.

Signed-off-by: Shannon Zhao 
---
 tools/libxl/libxl_create.c  | 9 -
 tools/libxl/libxl_dm.c  | 6 --
 tools/libxl/libxl_types.idl | 4 
 tools/libxl/xl_cmdimpl.c| 2 +-
 4 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 08822e3..3043b1f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -215,6 +215,12 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 if (!b_info->event_channels)
 b_info->event_channels = 1023;
 
+#if defined(__arm__) || defined(__aarch64__)
+libxl_defbool_setdefault(&b_info->acpi, false);
+#else
+libxl_defbool_setdefault(&b_info->acpi, true);
+#endif
+
 switch (b_info->type) {
 case LIBXL_DOMAIN_TYPE_HVM:
 if (b_info->shadow_memkb == LIBXL_MEMKB_DEFAULT)
@@ -454,7 +460,8 @@ int libxl__domain_build(libxl__gc *gc,
 localents = libxl__calloc(gc, 9, sizeof(char *));
 i = 0;
 localents[i++] = "platform/acpi";
-localents[i++] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
+localents[i++] = (libxl_defbool_val(info->acpi) &&
+ libxl_defbool_val(info->u.hvm.acpi)) ? "1" : "0";
 localents[i++] = "platform/acpi_s3";
 localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
 localents[i++] = "platform/acpi_s4";
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index de16a59..12e4084 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -583,7 +583,8 @@ static int libxl__build_device_model_args_old(libxl__gc *gc,
 if (b_info->u.hvm.soundhw) {
 flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
 }
-if (libxl_defbool_val(b_info->u.hvm.acpi)) {
+if (libxl_defbool_val(b_info->acpi) &&
+libxl_defbool_val(b_info->u.hvm.acpi)) {
 flexarray_append(dm_args, "-acpi");
 }
 if (b_info->max_vcpus > 1) {
@@ -1204,7 +1205,8 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 if (b_info->u.hvm.soundhw) {
 flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
 }
-if (!libxl_defbool_val(b_info->u.hvm.acpi)) {
+if (!(libxl_defbool_val(b_info->acpi) &&
+ libxl_defbool_val(b_info->u.hvm.acpi))) {
 flexarray_append(dm_args, "-no-acpi");
 }
 if (b_info->max_vcpus > 1) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 98bfc3a..a02446f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -494,11 +494,15 @@ libxl_domain_build_info = Struct("domain_build_info",[
 # Note that the partial device tree should avoid to use the phandle
 # 65000 which is reserved by the toolstack.
 ("device_tree",  string),
+("acpi", libxl_defbool),
 ("u", KeyedUnion(None, libxl_domain_type, "type",
 [("hvm", Struct(None, [("firmware", string),
("bios", libxl_bios_type),
("pae",  libxl_defbool),
("apic", libxl_defbool),
+   # The following acpi field is deprecated.
+   # Please use the unified acpi field above
+   # which works for both x86 and ARM.
("acpi", libxl_defbool),
("acpi_s3",  libxl_defbool),
("acpi_s4",  libxl_defbool),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 1d06598..be17702 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1547,6 +1547,7 @@ static void parse_config_data(const char *config_source,
 b_info->cmdline = parse_cmdline(config);
 
 xlu_cfg_get_defbool(config, "driver_domain", &b_info->driver_domain, 0);
+xlu_cfg_get_defbool(config, "acpi", &b_info->acpi, 0);
 
 switch(b_info->type) {
 case LIBXL_DOMAIN_TYPE_HVM:
@@ -1576,7 +1577,6 @@ static void parse_config_data(const char *config_source,
 
 xlu_cfg_get_defbool(config, "pae", &b_info->u.hvm.pae, 0);
 xlu_cfg_get_defbool(config, "apic", &b_info->u.hvm.apic, 0);
-xlu_cfg_get_defbool(config, "acpi", &b_info->u.hvm.acpi, 0);
 xlu_cfg_get_defbool(config, "acpi_s3", &b_info->u.hvm.acpi_s3, 0);
 xlu_cfg_get_defbool(config, "acpi_s4", &b_info->u.hvm.acpi_s4, 0);
 xlu_cfg_get_defbool(config, "nx", 

[Xen-devel] [PATCH v4 16/16] libxl/arm: Add the size of ACPI tables to maxmem

2016-08-16 Thread Shannon Zhao
From: Shannon Zhao 

The guest memory layout defines the maximum size of the guest ACPI
tables; add that size when setting the target maxmem, so the guest is
not left with less available memory.

Signed-off-by: Shannon Zhao 
---
 tools/libxl/libxl_arm.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index d436167..75b2589 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -103,6 +103,17 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
   uint32_t domid)
 {
+libxl_domain_build_info *const info = &d_config->b_info;
+libxl_ctx *ctx = libxl__gc_owner(gc);
+
+/* Add the size of ACPI tables to maxmem if ACPI is enabled for guest. */
+if (libxl_defbool_val(info->acpi) &&
+xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
+LIBXL_MAXMEM_CONSTANT + GUEST_ACPI_SIZE / 1024) < 0) {
+LOGE(ERROR, "Couldn't set max memory");
+return ERROR_FAIL;
+}
+
 return 0;
 }
 
-- 
2.0.4




