Re: [Xen-devel] Ping: [PATCH v2 01/11] public / x86: introduce hvmctl hypercall

2016-07-04 Thread Jan Beulich
>>> On 01.07.16 at 18:42,  wrote:
> On 01/07/16 17:18, Jan Beulich wrote:
> On 24.06.16 at 12:28,  wrote:
>>> ... as a means to replace all HVMOP_* which a domain can't issue on
>>> itself (i.e. intended for use by only the control domain or device
>>> model).
>>>
>>> Signed-off-by: Jan Beulich 
>>> Reviewed-by: Wei Liu 
>> On the x86 side I'm just lacking feedback for this patch.
> 
> I have just spent the afternoon being bitten extremely hard by our
> current unstable domctl abi, and in particular, the change of
> DOMCTL_API_VERSION when nothing relevant has changed, and am leaning
> towards David's views.
> 
> With the current definition, we have 32 bits of cmd space, proper
> continuation logic via the opaque field, and 120 bytes of per-cmd space
> in the union, which is plenty.
> 
> How about making a proactive start to our ABI stabilisation effort,
> dropping the interface_version entirely and declaring this stable?  We
> would of course want to triple check the suitability of the existing
> ops, but that can easily be rolled into this series (if any action is
> needed).
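
[A rough sketch of the kind of layout being described above; the struct and
field names are illustrative only, not taken from the actual patch, and the
Xen public-header types (uint32_t, domid_t, ...) are assumed:]

struct xen_hvmctl {
    uint32_t cmd;        /* 32 bits of command space */
    uint32_t opaque;     /* continuation point, preserved by the hypervisor
                          * across hypercall restarts */
    domid_t  domain;     /* target domain */
    uint16_t pad;
    union {
        /* one member per sub-command, each at most 120 bytes */
        uint8_t pad[120];
    } u;
};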

I have to admit that I'm a little frustrated by this request: The series
has been out for quite some time, and was supposedly ready to go
in once all acks had been given. Yet with what you say above you
would effectively withdraw the ones you gave on the later patches
in the series, even if you don't say so explicitly. The fact that you've
been bitten by the domctl being modeled in similar ways (yet without
actually saying how you got bitten, or what's wrong with that model)
shouldn't really have much of an influence here. Even more so if, as
I would guess, that issue of yours was with the domctl wrapping
logic in your privcmd driver in Dom0 (which is already conceptually
problematic, as the kernel isn't intended to know of or make use of
domctls, and hence having made it know about them set you up for such
problems).

Nor do I see myself doing the auditing of the involved operations right
now (and basically as a prereq for this series to go in), all the more so
since I've learned from commits aa956976a9 ("domctl: perform initial
post-XSA-77 auditing") and then 5590bd17c4 ("XSA-77: widen scope
again") that it's very easy to overlook some aspect(s), no matter
how much time and effort one invests. I really think that for any
such future efforts we first need to put down a complete checklist
of things that need to be ensured prior to making _any_ interface
stable and usable by other than fully trusted entities.

As to the option of marking this interface stable without doing the
security audit - I don't think I would see the point.

> Another area (which is related to the issue which bit me) is the
> stability of operations like DOMCTL_pausedomain, which is unlikely to
> ever change.
> 
> If we do choose to stabilise, we should design the new calls around how
> they would be used.  We could do with a stable interface for "general
> emulator routines", which applies equally to things like pause/unpause
> and ioreq_server*, as opposed to most of the new hvmctl ops, which are
> specific to qemu being an LPC bus emulator.
> 
> One thing I hadn't considered until now is that this takes an existing
> stable ABI and replaces it with an unstable ABI, which is a step
> backwards in terms of usability.  There are certainly other advantages
> to moving the ops out of hvmop, but the instability is a detracting factor.

This is not true imo: The existing interface wasn't stable (demonstrated
by it being framed by __XEN_ / __XEN_TOOLS__ conditionals), yet it
_also_ wasn't versioned, so its instability wasn't represented properly.
So as said to David already, I continue to think this series is an
improvement, albeit not in the direction that both of you would
like things to move. And while I'm with you on those intentions, I don't
think I should be required to make two steps at once here.

Plus you don't comment at all on my counter proposal to stabilize the
libxc wrappers around these hypercalls instead of the hypercalls
themselves.

Jan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] x86: use gcc6'es flags asm() output support

2016-07-04 Thread Jan Beulich
>>> On 01.07.16 at 18:51,  wrote:
> On 01/07/16 17:10, Jan Beulich wrote:
> On 01.07.16 at 17:38,  wrote:
>>> As for interleaving inside the asm statement itself, we already have
>>> precedent for that with the HAVE_GAS_* predicates.  It would make the
>>> patch rather larger, but might end up looking cleaner.  It is probably
>>> also worth switching to named parameters to reduce the risk of getting
>>> positional parameters out of order.
>> So taking just the first example I've converted: Do you think this
>>
>> static bool_t even_parity(uint8_t v)
>> {
>> asm ( "test %1,%1"
>> #ifdef __GCC_ASM_FLAG_OUTPUTS__
>>   : "=@ccp" (v)
>> #else
>>   "; setp %0"
>>   : "=qm" (v)
>> #endif
>>   : "q" (v) );
>>
>> return v;
>> }
>>
>> is better than the original?
> 
> How about a different example, from the second hunk
> 
> diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c
> b/xen/arch/x86/x86_emulate/x86_emulate.c
> index 460d1f7..8d52a41 100644
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -832,8 +832,19 @@ static int read_ulong(
>  static bool_t mul_dbl(unsigned long m[2])
>  {
>  bool_t rc;
> -asm ( "mul %1; seto %2"
> -  : "+a" (m[0]), "+d" (m[1]), "=qm" (rc) );
> +
> +asm ( "mul %1;"
> +#ifndef __GCC_ASM_FLAG_OUTPUTS__
> +  "seto %[rc];"
> +#endif
> +  : "+a" (m[0]), "+d" (m[1]),
> +#ifdef __GCC_ASM_FLAG_OUTPUTS__
> +[rc] "=@cco" (rc)
> +#else
> +[rc] "=qm" (rc)
> +#endif
> +);
> +
>  return rc;
>  }
>  
> This at least doesn't mix the : inside an #ifdef

At the price of two #ifdef-s. And in the example I'm really not
worried about the colon going into both branches of the #if, but
about general readability of the resulting code.

>> I'm unsure, and I'm actually inclined to
>> think that then the abstraction alternative might look better.
> 
> If the abstraction comes in two parts, one which may insert a `setcc`
> instruction, and one which selects between =qm and =@cc, it wouldn't end
> up hiding the :.

Opening an easy route to making mistakes. Imo such an abstraction
needs to be either a single item, or not be done at all.

Jan
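
[A sketch of what such a single-item abstraction could look like;
ASM_FLAG_OUT is a hypothetical helper name, not code from this thread:]

#ifdef __GCC_ASM_FLAG_OUTPUTS__
/* gcc6+: let the compiler take the result straight from the flags. */
# define ASM_FLAG_OUT(yes, no) yes
#else
/* Older gcc: fall back to an explicit setcc into the output operand. */
# define ASM_FLAG_OUT(yes, no) no
#endif

static bool_t even_parity(uint8_t v)
{
    asm ( "test %1,%1" ASM_FLAG_OUT(, "; setp %0")
          : ASM_FLAG_OUT("=@ccp", "=qm") (v)
          : "q" (v) );

    return v;
}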


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [distros-debian-stretch test] 66512: trouble: blocked/broken

2016-07-04 Thread Platform Team regression test user
flight 66512 distros-debian-stretch real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/66512/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops 4 capture-logs !broken [st=!broken!]
 build-armhf   4 capture-logs !broken [st=!broken!]

Regressions which are regarded as allowable (not blocking):
 build-armhf-pvops 3 host-install(3)  broken like 44415
 build-armhf   3 host-install(3)  broken like 44415
 build-i386-pvops  3 host-install(3)  broken like 44415
 build-i3863 host-install(3)  broken like 44415
 build-amd64   3 host-install(3)  broken like 44415
 build-amd64-pvops 3 host-install(3)  broken like 44415

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-i386-stretch-netboot-pygrub  1 build-check(1) blocked n/a
 test-amd64-i386-amd64-stretch-netboot-pygrub  1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-stretch-netboot-pvgrub  1 build-check(1)blocked n/a
 test-amd64-i386-i386-stretch-netboot-pvgrub  1 build-check(1)  blocked n/a
 test-armhf-armhf-armhf-stretch-netboot-pygrub  1 build-check(1)blocked n/a

baseline version:
 flight   44415

jobs:
 build-amd64  broken  
 build-armhf  broken  
 build-i386   broken  
 build-amd64-pvopsbroken  
 build-armhf-pvopsbroken  
 build-i386-pvops broken  
 test-amd64-amd64-amd64-stretch-netboot-pvgrubblocked 
 test-amd64-i386-i386-stretch-netboot-pvgrub  blocked 
 test-amd64-i386-amd64-stretch-netboot-pygrub blocked 
 test-armhf-armhf-armhf-stretch-netboot-pygrubblocked 
 test-amd64-amd64-i386-stretch-netboot-pygrub blocked 



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.6 script calling conventions

2016-07-04 Thread John Nemeth
On Jul 2, 12:40pm, Wei Liu wrote:
} On Tue, Jun 28, 2016 at 06:00:49PM -0700, John Nemeth wrote:
} >  I'm trying to package Xen 4.6 (specifically Xen 4.6.3) for
} > use with NetBSD.  I have it mostly done; however, when I try to
} > create a domU, libxl goes into an infinite loop calling the scripts.
} > If I try to create a domU with no network or disk, it works fine
} > (albeit of rather limited use).  Have there been changes between
} > Xen 4.5 and Xen 4.6 in the calling convention for the scripts?  Is
} > there documentation on what is expected somewhere?  Please CC me on
} > any responses.  Here is my domU config file:
} 
} Can you give this patch a try? I don't have netbsd system at hand to
} test it.

 Thanks.  It pretty much did the trick.  I just had to make
one minor change to your patch.

} I suspect netbsd doesn't support stubdom because that pile of code is a
} bit Linux centric, but it wouldn't hurt to prepare for it.

 No, NetBSD doesn't do stubdom.  However, I would certainly
like to see NetBSD's support for Xen become equal to Linux's support
(or better :-) ), so anything that makes that easier is a good
thing.

}-- End of excerpt from Wei Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] x86/VT-x: Dump VMCS on VMLAUNCH/VMRESUME failure

2016-07-04 Thread Jan Beulich
>>> On 01.07.16 at 19:52,  wrote:
> If a VMLAUNCH/VMRESUME fails due to invalid control or host state, dump the
> VMCS before crashing the domain.
> 
> Signed-off-by: Andrew Cooper 

Good idea:
Reviewed-by: Jan Beulich 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH v14 0/3] VT-d Device-TLB flush issue

2016-07-04 Thread Xu, Quan
From: Quan Xu 

These patches fix the current timeout concern and also allow limited ATS support.

These patches are the remaining ones:
1. move the domain crash logic up to the generic IOMMU layer

2. If Device-TLB flush timed out, we hide the target ATS device
   immediately. By hiding the device, we make sure it can't be
   assigned to any domain any longer (see device_assigned).

---
Not covered in this series:

a) Eliminate the panic() in IOMMU_WAIT_OP, used only in VT-d register 
read/write.
   Further discussion is required on whether and how to improve it.
b) Handle IOTLB/Context/IEC flush timeout.
---
Quan Xu (3):
  IOMMU/x86: use a struct pci_dev* instead of SBDF
  IOMMU: add domain crash logic
  IOMMU: fix vt-d Device-TLB flush timeout issue

 xen/drivers/passthrough/amd/iommu_cmd.c | 19 
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 +-
 xen/drivers/passthrough/ats.h   | 10 ++---
 xen/drivers/passthrough/iommu.c | 54 ++-
 xen/drivers/passthrough/pci.c   |  6 +--
 xen/drivers/passthrough/vtd/extern.h|  5 ++-
 xen/drivers/passthrough/vtd/intremap.c  |  8 ++--
 xen/drivers/passthrough/vtd/iommu.c | 25 ---
 xen/drivers/passthrough/vtd/qinval.c| 56 ++--
 xen/drivers/passthrough/vtd/x86/ats.c   | 21 +
 xen/drivers/passthrough/x86/ats.c   | 67 ++---
 xen/include/xen/iommu.h |  3 ++
 xen/include/xen/pci.h   |  1 +
 13 files changed, 195 insertions(+), 84 deletions(-)

-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH v14 3/3] IOMMU: fix vt-d Device-TLB flush timeout issue

2016-07-04 Thread Xu, Quan
From: Quan Xu 

If Device-TLB flush timed out, we hide the target ATS device
immediately. By hiding the device, we make sure it can't be
assigned to any domain any longer (see device_assigned).

Signed-off-by: Quan Xu 

CC: Jan Beulich 
CC: Kevin Tian 
CC: Feng Wu 

---
v14: release the lock before return.
---
 xen/drivers/passthrough/iommu.c   | 24 +++
 xen/drivers/passthrough/pci.c |  6 ++--
 xen/drivers/passthrough/vtd/extern.h  |  5 ++--
 xen/drivers/passthrough/vtd/qinval.c  | 56 +++
 xen/drivers/passthrough/vtd/x86/ats.c | 11 ++-
 xen/include/xen/iommu.h   |  3 ++
 xen/include/xen/pci.h |  1 +
 7 files changed, 79 insertions(+), 27 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index d793f5d..2353c7d 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -361,6 +361,30 @@ int iommu_iotlb_flush_all(struct domain *d)
 return rc;
 }
 
+void iommu_dev_iotlb_flush_timeout(struct domain *d,
+   struct pci_dev *pdev)
+{
+pcidevs_lock();
+
+ASSERT(pdev->domain);
+if ( d != pdev->domain )
+{
+pcidevs_unlock();
+return;
+}
+
+list_del(&pdev->domain_list);
+pdev->domain = NULL;
+pci_hide_existing_device(pdev);
+if ( !d->is_shutting_down && printk_ratelimit() )
+printk(XENLOG_ERR
+   "dom%d: ATS device %04x:%02x:%02x.%u flush failed\n",
+   d->domain_id, pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
+   PCI_FUNC(pdev->devfn));
+
+pcidevs_unlock();
+}
+
 int __init iommu_setup(void)
 {
 int rc = -ENODEV;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index bb5f344..58bfb79 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -419,7 +419,7 @@ static void free_pdev(struct pci_seg *pseg, struct pci_dev 
*pdev)
 xfree(pdev);
 }
 
-static void _pci_hide_device(struct pci_dev *pdev)
+void pci_hide_existing_device(struct pci_dev *pdev)
 {
 if ( pdev->domain )
 return;
@@ -436,7 +436,7 @@ int __init pci_hide_device(int bus, int devfn)
 pdev = alloc_pdev(get_pseg(0), bus, devfn);
 if ( pdev )
 {
-_pci_hide_device(pdev);
+pci_hide_existing_device(pdev);
 rc = 0;
 }
 pcidevs_unlock();
@@ -466,7 +466,7 @@ int __init pci_ro_device(int seg, int bus, int devfn)
 }
 
 __set_bit(PCI_BDF2(bus, devfn), pseg->ro_map);
-_pci_hide_device(pdev);
+pci_hide_existing_device(pdev);
 
 return 0;
 }
diff --git a/xen/drivers/passthrough/vtd/extern.h 
b/xen/drivers/passthrough/vtd/extern.h
index 45357f2..efaff28 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -25,6 +25,7 @@
 
 #define VTDPREFIX "[VT-D]"
 
+struct pci_ats_dev;
 extern bool_t rwbf_quirk;
 
 void print_iommu_regs(struct acpi_drhd_unit *drhd);
@@ -60,8 +61,8 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
  u64 addr, unsigned int size_order, u64 type);
 
 int __must_check qinval_device_iotlb_sync(struct iommu *iommu,
-  u32 max_invs_pend,
-  u16 sid, u16 size, u64 addr);
+  struct pci_ats_dev *ats_dev,
+  u16 did, u16 size, u64 addr);
 
 unsigned int get_cache_line_size(void);
 void cacheline_flush(char *);
diff --git a/xen/drivers/passthrough/vtd/qinval.c 
b/xen/drivers/passthrough/vtd/qinval.c
index 4492b29..7a5c433 100644
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -27,11 +27,11 @@
 #include "dmar.h"
 #include "vtd.h"
 #include "extern.h"
+#include "../ats.h"
 
 #define VTD_QI_TIMEOUT 1
 
-static int __must_check invalidate_sync(struct iommu *iommu,
-bool_t flush_dev_iotlb);
+static int __must_check invalidate_sync(struct iommu *iommu);
 
 static void print_qi_regs(struct iommu *iommu)
 {
@@ -103,7 +103,7 @@ static int __must_check 
queue_invalidate_context_sync(struct iommu *iommu,
 
 unmap_vtd_domain_page(qinval_entries);
 
-return invalidate_sync(iommu, 0);
+return invalidate_sync(iommu);
 }
 
 static int __must_check queue_invalidate_iotlb_sync(struct iommu *iommu,
@@ -140,7 +140,7 @@ static int __must_check queue_invalidate_iotlb_sync(struct 
iommu *iommu,
 qinval_update_qtail(iommu, index);
 spin_unlock_irqrestore(&iommu->register_lock, flags);
 
-return invalidate_sync(iommu, 0);
+return invalidate_sync(iommu);
 }
 
 static int __must_check queue_invalidate_wait(struct iommu *iommu,
@@ -199,25 +199,55 @@ static int __must_check queue_invalidate_wait(struct 
iommu *iommu,
 return -EOPNOTSUPP;
 }
 
-static int __must_check invalidate_sync(struct iommu *iommu,
-

[Xen-devel] [PATCH v2] mini-os: replace lib/printf.c with a version not under GPL

2016-07-04 Thread Juergen Gross
Instead of a Linux kernel based implementation, use one from FreeBSD.

As a result some of the printed output will change due to the more
POSIX-like behavior of the %p format (omitting leading zeroes,
prepending "0x").

Signed-off-by: Juergen Gross 
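
[For illustration only - the %p difference described above, assuming
mini-OS's printk; the values shown are hypothetical (64-bit build):]

static void show_ptr_format(void)
{
    void *p = (void *)0x1234;

    /* The old (Linux-derived) implementation would print "0000000000001234";
     * the new (FreeBSD-derived) one prints "0x1234". */
    printk("%p\n", p);
}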
---
V2: remove include/lib-gpl.h as requested by Samuel Thibault
---
 blkfront.c|4 -
 include/lib-gpl.h |   59 --
 include/lib.h |   27 +-
 lib/printf.c  | 1744 +
 tpmback.c |4 -
 5 files changed, 1119 insertions(+), 719 deletions(-)
 delete mode 100644 include/lib-gpl.h

diff --git a/blkfront.c b/blkfront.c
index bdb7765..f747216 100644
--- a/blkfront.c
+++ b/blkfront.c
@@ -17,10 +17,6 @@
 #include 
 #include 
 
-#ifndef HAVE_LIBC
-#define strtoul simple_strtoul
-#endif
-
 /* Note: we generally don't need to disable IRQs since we hardly do anything in
  * the interrupt handler.  */
 
diff --git a/include/lib-gpl.h b/include/lib-gpl.h
deleted file mode 100644
index d5602b2..000
--- a/include/lib-gpl.h
+++ /dev/null
@@ -1,59 +0,0 @@
-/* -*-  Mode:C; c-basic-offset:4; tab-width:4 -*-
- 
- * (C) 2003 - Rolf Neugebauer - Intel Research Cambridge
- 
- *
- *File: lib.h
- *  Author: Rolf Neugebauer (neuge...@dcs.gla.ac.uk)
- * Changes: 
- *  
- *Date: Aug 2003
- * 
- * Environment: Xen Minimal OS
- * Description: Random useful library functions, from Linux'
- * include/linux/kernel.h
- *
- *  This program is free software; you can redistribute it and/or modify
- *  it under the terms of the GNU General Public License as published by
- *  the Free Software Foundation; either version 2 of the License, or
- *  (at your option) any later version.
- *
- *  This program is distributed in the hope that it will be useful,
- *  but WITHOUT ANY WARRANTY; without even the implied warranty of
- *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- *  GNU General Public License for more details.
- *
- *  You should have received a copy of the GNU General Public License
- *  along with this program; if not, write to the Free Software
- *  Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
- */
-
-#ifndef _LIB_GPL_H_
-#define _LIB_GPL_H_
-
-#ifndef HAVE_LIBC
-/* printing */
-extern unsigned long simple_strtoul(const char *,char **,unsigned int);
-extern long simple_strtol(const char *,char **,unsigned int);
-extern unsigned long long simple_strtoull(const char *,char **,unsigned int);
-extern long long simple_strtoll(const char *,char **,unsigned int);
-
-extern int sprintf(char * buf, const char * fmt, ...)
-   __attribute__ ((format (printf, 2, 3)));
-extern int vsprintf(char *buf, const char *, va_list)
-   __attribute__ ((format (printf, 2, 0)));
-extern int snprintf(char * buf, size_t size, const char * fmt, ...)
-   __attribute__ ((format (printf, 3, 4)));
-extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
-   __attribute__ ((format (printf, 3, 0)));
-extern int scnprintf(char * buf, size_t size, const char * fmt, ...)
-   __attribute__ ((format (printf, 3, 4)));
-extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
-   __attribute__ ((format (printf, 3, 0)));
-extern int sscanf(const char *, const char *, ...)
-   __attribute__ ((format (scanf, 2, 3)));
-extern int vsscanf(const char *, const char *, va_list)
-   __attribute__ ((format (scanf, 2, 0)));
-#endif
-
-#endif /* _LIB_GPL_H_ */
diff --git a/include/lib.h b/include/lib.h
index 62836c7..39d6a18 100644
--- a/include/lib.h
+++ b/include/lib.h
@@ -66,11 +66,6 @@
 #ifdef HAVE_LIBC
 #include 
 #include 
-#else
-#include 
-#endif
-
-#ifdef HAVE_LIBC
 #include 
 #else
 /* string and memory manipulation */
@@ -107,6 +102,28 @@ char *strrchr(const char *p, int ch);
 void   *memcpy(void *to, const void *from, size_t len);
 
 size_t strnlen(const char *, size_t);
+
+unsigned long strtoul(const char *nptr, char **endptr, int base);
+int64_t strtoq(const char *nptr, char **endptr, int base);
+uint64_t strtouq(const char *nptr, char **endptr, int base);
+
+extern int sprintf(char * buf, const char * fmt, ...)
+__attribute__ ((format (printf, 2, 3)));
+extern int vsprintf(char *buf, const char *, va_list)
+__attribute__ ((format (printf, 2, 0)));
+extern int snprintf(char * buf, size_t size, const char * fmt, ...)
+__attribute__ ((format (printf, 3, 4)));
+extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
+__attribute__ ((format (printf, 3, 0)));
+extern int scnprintf(char * buf, size_t size, const char * fmt, ...)
+__attribute__ ((format (printf, 3, 4)));
+extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
+__attribute__ ((format (printf, 3, 0)));
+extern int 

[Xen-devel] [PATCH v14 2/3] IOMMU: add domain crash logic

2016-07-04 Thread Xu, Quan
From: Quan Xu 

Add domain crash logic to the generic IOMMU layer to benefit
all platforms.

No spamming of the log can occur. For DomU, we avoid logging any
message for already dying domains. For Dom0, that'll still be more
verbose than we'd really like, but it at least wouldn't outright
flood the console.

Signed-off-by: Quan Xu 
Acked-by: Kevin Tian 

CC: Julien Grall 
CC: Kevin Tian 
CC: Feng Wu 
CC: Jan Beulich 
CC: Suravee Suthikulpanit 
---
 xen/drivers/passthrough/iommu.c | 30 --
 xen/drivers/passthrough/vtd/iommu.c | 11 +++
 2 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 7656aeb..d793f5d 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -318,21 +318,47 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
   unsigned int page_count)
 {
 const struct domain_iommu *hd = dom_iommu(d);
+int rc;
 
 if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush 
)
 return 0;
 
-return hd->platform_ops->iotlb_flush(d, gfn, page_count);
+rc = hd->platform_ops->iotlb_flush(d, gfn, page_count);
+if ( unlikely(rc) )
+{
+if ( !d->is_shutting_down && printk_ratelimit() )
+printk(XENLOG_ERR
+   "d%d: IOMMU IOTLB flush failed: %d, gfn %#lx, page count 
%u\n",
+   d->domain_id, rc, gfn, page_count);
+
+if ( !is_hardware_domain(d) )
+domain_crash(d);
+}
+
+return rc;
 }
 
 int iommu_iotlb_flush_all(struct domain *d)
 {
 const struct domain_iommu *hd = dom_iommu(d);
+int rc;
 
 if ( !iommu_enabled || !hd->platform_ops || 
!hd->platform_ops->iotlb_flush_all )
 return 0;
 
-return hd->platform_ops->iotlb_flush_all(d);
+rc = hd->platform_ops->iotlb_flush_all(d);
+if ( unlikely(rc) )
+{
+if ( !d->is_shutting_down && printk_ratelimit() )
+printk(XENLOG_ERR
+   "d%d: IOMMU IOTLB flush all failed: %d\n",
+   d->domain_id, rc);
+
+if ( !is_hardware_domain(d) )
+domain_crash(d);
+}
+
+return rc;
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/vtd/iommu.c 
b/xen/drivers/passthrough/vtd/iommu.c
index cc34497..a02b4c40 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1847,6 +1847,17 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
 }
 }
 
+if ( unlikely(rc) )
+{
+if ( !d->is_shutting_down && printk_ratelimit() )
+printk(XENLOG_ERR VTDPREFIX
+   " d%d: IOMMU pages flush failed: %d\n",
+   d->domain_id, rc);
+
+if ( !is_hardware_domain(d) )
+domain_crash(d);
+}
+
 return rc;
 }
 
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH v14 1/3] IOMMU/x86: use a struct pci_dev* instead of SBDF

2016-07-04 Thread Xu, Quan
From: Quan Xu 

A struct pci_dev* instead of SBDF is stored inside struct
pci_ats_dev and parameter to *_ats_device().

Also use ats_dev for "struct pci_ats_dev" variable, while
pdev (_pdev, if there is already a pdev) for "struct pci_dev"
in the scope of IOMMU.

Signed-off-by: Quan Xu 
Acked-by: Kevin Tian 

CC: Jan Beulich 
CC: Kevin Tian 
CC: Feng Wu 
CC: Suravee Suthikulpanit 

---
v14: change 'ats_pdev' to 'ats_dev'.
---
 xen/drivers/passthrough/amd/iommu_cmd.c | 19 
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 +-
 xen/drivers/passthrough/ats.h   | 10 ++---
 xen/drivers/passthrough/vtd/intremap.c  |  8 ++--
 xen/drivers/passthrough/vtd/iommu.c | 14 +++---
 xen/drivers/passthrough/vtd/x86/ats.c   | 24 +++
 xen/drivers/passthrough/x86/ats.c   | 67 ++---
 7 files changed, 84 insertions(+), 62 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c 
b/xen/drivers/passthrough/amd/iommu_cmd.c
index 7c9d9be..7e010e6 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -289,35 +289,34 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev 
*pdev,
 unsigned long flags;
 struct amd_iommu *iommu;
 unsigned int req_id, queueid, maxpend;
-struct pci_ats_dev *ats_pdev;
+struct pci_ats_dev *ats_dev;
 
 if ( !ats_enabled )
 return;
 
-ats_pdev = get_ats_device(pdev->seg, pdev->bus, pdev->devfn);
-if ( ats_pdev == NULL )
+ats_dev = get_ats_device(pdev);
+if ( ats_dev == NULL )
 return;
 
-if ( !pci_ats_enabled(ats_pdev->seg, ats_pdev->bus, ats_pdev->devfn) )
+if ( !pci_ats_enabled(pdev->seg, pdev->bus, pdev->devfn) )
 return;
 
-iommu = find_iommu_for_device(ats_pdev->seg,
-  PCI_BDF2(ats_pdev->bus, ats_pdev->devfn));
+iommu = find_iommu_for_device(pdev->seg, PCI_BDF2(pdev->bus, pdev->devfn));
 
 if ( !iommu )
 {
 AMD_IOMMU_DEBUG("%s: Can't find iommu for %04x:%02x:%02x.%u\n",
-__func__, ats_pdev->seg, ats_pdev->bus,
-PCI_SLOT(ats_pdev->devfn), PCI_FUNC(ats_pdev->devfn));
+__func__, pdev->seg, pdev->bus,
+PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
 return;
 }
 
 if ( !iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
 return;
 
-req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(ats_pdev->bus, devfn));
+req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(pdev->bus, 
pdev->devfn));
 queueid = req_id;
-maxpend = ats_pdev->ats_queue_depth & 0xff;
+maxpend = ats_dev->ats_queue_depth & 0xff;
 
 /* send INVALIDATE_IOTLB_PAGES command */
 spin_lock_irqsave(&iommu->lock, flags);
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c 
b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 7761241..dad4a71 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -162,7 +162,7 @@ static void amd_iommu_setup_domain_device(
  !pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
 {
 if ( devfn == pdev->devfn )
-enable_ats_device(iommu->seg, bus, devfn, iommu);
+enable_ats_device(iommu, pdev);
 
 amd_iommu_flush_iotlb(devfn, pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
 }
@@ -356,7 +356,7 @@ void amd_iommu_disable_domain_device(struct domain *domain,
 if ( devfn == pdev->devfn &&
  pci_ats_device(iommu->seg, bus, devfn) &&
  pci_ats_enabled(iommu->seg, bus, devfn) )
-disable_ats_device(iommu->seg, bus, devfn);
+disable_ats_device(pdev);
 }
 
 static int reassign_device(struct domain *source, struct domain *target,
diff --git a/xen/drivers/passthrough/ats.h b/xen/drivers/passthrough/ats.h
index 5c91572..47ff22d 100644
--- a/xen/drivers/passthrough/ats.h
+++ b/xen/drivers/passthrough/ats.h
@@ -19,9 +19,7 @@
 
 struct pci_ats_dev {
 struct list_head list;
-u16 seg;
-u8 bus;
-u8 devfn;
+struct pci_dev *pdev;
 u16 ats_queue_depth;/* ATS device invalidation queue depth */
 const void *iommu;  /* No common IOMMU struct so use void pointer */
 };
@@ -34,9 +32,9 @@ struct pci_ats_dev {
 extern struct list_head ats_devices;
 extern bool_t ats_enabled;
 
-int enable_ats_device(int seg, int bus, int devfn, const void *iommu);
-void disable_ats_device(int seg, int bus, int devfn);
-struct pci_ats_dev *get_ats_device(int seg, int bus, int devfn);
+int enable_ats_device(const void *iommu, struct pci_dev *pdev);
+void disable_ats_device(struct pci_dev *pdev);
+struct pci_ats_dev *get_ats_device(const struct pci_dev *pdev);
 
 static inline int pci_ats_enabled(int seg, int bus, int devfn)
 {
diff --git a/xen/drivers/passthrough/vtd/intremap.c 
b/xen/drivers/passthrough/vtd/intremap.c
index 7c11565..28e3cf4 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen

Re: [Xen-devel] [PATCH 2/3] arm/xen: add support for vm_assist hypercall

2016-07-04 Thread Juergen Gross
On 22/06/16 09:03, Juergen Gross wrote:
> Add support for the Xen HYPERVISOR_vm_assist hypercall.
> 
> Signed-off-by: Juergen Gross 

Stefano, could you please comment?


Juergen

> ---
>  arch/arm/include/asm/xen/hypercall.h | 1 +
>  arch/arm/xen/enlighten.c | 1 +
>  arch/arm/xen/hypercall.S | 1 +
>  arch/arm64/xen/hypercall.S   | 1 +
>  4 files changed, 4 insertions(+)
> 
> diff --git a/arch/arm/include/asm/xen/hypercall.h 
> b/arch/arm/include/asm/xen/hypercall.h
> index b6b962d..9d874db 100644
> --- a/arch/arm/include/asm/xen/hypercall.h
> +++ b/arch/arm/include/asm/xen/hypercall.h
> @@ -52,6 +52,7 @@ int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
>  int HYPERVISOR_physdev_op(int cmd, void *arg);
>  int HYPERVISOR_vcpu_op(int cmd, int vcpuid, void *extra_args);
>  int HYPERVISOR_tmem_op(void *arg);
> +int HYPERVISOR_vm_assist(unsigned int cmd, unsigned int type);
>  int HYPERVISOR_platform_op_raw(void *arg);
>  static inline int HYPERVISOR_platform_op(struct xen_platform_op *op)
>  {
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 71db30c..0f3aa12 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -389,4 +389,5 @@ EXPORT_SYMBOL_GPL(HYPERVISOR_vcpu_op);
>  EXPORT_SYMBOL_GPL(HYPERVISOR_tmem_op);
>  EXPORT_SYMBOL_GPL(HYPERVISOR_platform_op);
>  EXPORT_SYMBOL_GPL(HYPERVISOR_multicall);
> +EXPORT_SYMBOL_GPL(HYPERVISOR_vm_assist);
>  EXPORT_SYMBOL_GPL(privcmd_call);
> diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
> index 9a36f4f..a648dfc 100644
> --- a/arch/arm/xen/hypercall.S
> +++ b/arch/arm/xen/hypercall.S
> @@ -91,6 +91,7 @@ HYPERCALL3(vcpu_op);
>  HYPERCALL1(tmem_op);
>  HYPERCALL1(platform_op_raw);
>  HYPERCALL2(multicall);
> +HYPERCALL2(vm_assist);
>  
>  ENTRY(privcmd_call)
>   stmdb sp!, {r4}
> diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
> index 70df80e..329c802 100644
> --- a/arch/arm64/xen/hypercall.S
> +++ b/arch/arm64/xen/hypercall.S
> @@ -82,6 +82,7 @@ HYPERCALL3(vcpu_op);
>  HYPERCALL1(tmem_op);
>  HYPERCALL1(platform_op_raw);
>  HYPERCALL2(multicall);
> +HYPERCALL2(vm_assist);
>  
>  ENTRY(privcmd_call)
>   mov x16, x0
> 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH] README: Update version to 4.7 (from 4.7.0)

2016-07-04 Thread Ian Jackson
For ongoing stable releases.

Signed-off-by: Ian Jackson 
CC: Jan Beulich 

[already applied, following discussion in irc -iwj]
---
 README | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README b/README
index 6380a79..9b69d91 100644
--- a/README
+++ b/README
@@ -1,10 +1,10 @@
 #
-__  ___  _   _ ___  
-\ \/ /___ _ __   | || | |___  / _ \ 
- \  // _ \ '_ \  | || |_   / / | | |
- /  \  __/ | | | |__   _| / /| |_| |
-/_/\_\___|_| |_||_|(_)_/(_)___/ 
-
+__  ___  _   _ 
+\ \/ /___ _ __   | || | |___  |
+ \  // _ \ '_ \  | || |_   / / 
+ /  \  __/ | | | |__   _| / /  
+/_/\_\___|_| |_||_|(_)_/   
+   
 #
 
 http://www.xen.org/
-- 
2.1.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [linux-3.10 baseline-only test] 66494: regressions - trouble: blocked/broken/fail/pass

2016-07-04 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 66494 linux-3.10 real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/66494/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt        4 capture-logs !broken [st=!broken!]
 build-i386-libvirt        3 host-install(3) broken REGR. vs. 44253
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 guest-saverestore fail 
REGR. vs. 44253

Regressions which are regarded as allowable (not blocking):
 build-i386-rumpuserxen        6 xen-build    fail   like 44253
 build-amd64-rumpuserxen       6 xen-build    fail   like 44253
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 44253
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 44253

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail never pass

version targeted for testing:
 linux    ca1199fccf14540e86f6da955333e31d6fec5f3e
baseline version:
 linux    326a1b2d50cbe1f890e56ab60b9215cd30053f5a

Last test of basis    44253  2016-03-17 07:54:49 Z  109 days
Testing same since    66494  2016-07-01 10:55:17 Z    2 days    1 attempts


People who touched revisions under test:
  Aaro Koskinen 
  Adrian Hunter 
  Al Viro 
  Alan Stern 
  Alex Deucher 
  Alexandre Belloni 
  Alexandre Bounine 
  Alexey Khoroshilov 
  Allain Legacy 
  Andi Kleen 
  Andrew Morton 
  Andrey Gelman 
  Andy Lutomirski 
  Andy Lutomirski  # On a Dell XPS 13 9350
  Anton Blanchard 
  Antonio Quartulli 
  Aristeu Rozanski 
  Arnaldo Carvalho de Melo 
  Arnd Bergmann 
  Artem Bityutskiy 
  Aurelien Jacquiot 
  Behan Webster 
  Ben Hutchings 
  Bill Sommerfeld 
  Bjorn Helgaas 
  Bjørn Mork 
  Bob Moore 
  Borislav Petkov 
  Brian King 
  Brian Norris 
  Chanwoo Choi 
  Chas Williams <3ch...@gmail.com>
  Chris Friesen 
  Dan Carpenter 
  Dan Streetman 
  Daniel Fraga 
  Daniel Lezcano 
  David S. Miller 
  Diego Viola 
  Dinh Nguyen 
  Dmitry Ivanov 
  Dmitry Ivanov 
  Dmitry Torokhov 
  Douglas Gilbert 
  Eric Wheeler 
  Eric Wheeler 
  Eryu Guan 
  Felipe Balbi 
  Florian Westphal 
  Gabriel Krisman Bertazi 
  Geert Uytterhoeven 
  Greg Kroah-Hartman 
  Guenter Roeck 
  Guillaume Nault 
  H. Nikolaus Schaller 
  H. Peter Anvin 
  Haibo Chen 
  Haishuang Yan 
  Hans de Goede 
  Hans Verkuil 
  Hans-Christoph Schemmel 
  Helge Deller 
  Herbert Xu 
  Ian Campbell 
  Ignat Korchagin 
  Igor Grinberg 
  Ingo Molnar 
  Insu Yun 
  Jan-Simon Möller 
  Jasem Mutlaq 
  Jes Sorensen 
  Jiri Kosina 
  Jiri Slaby 
  Joe Perches 
  Johan Hovold 
  Johannes Berg 
  Joseph Qi 
  Josh Boyer 
  Julia Lawall 
  Julian Anastasov 
  K. Y. Srinivasan 
  Kamal Mostafa 
  Kangjie Lu 
  Kangjie Lu 
  Kevin Hilman 
  Kevin Hilman 
  Krzysztof Kozlowski 
  Laszlo Ersek 
  Lee Jones 
  Linus Lüssing 
  Linus Torvalds 
  Linus Walleij 
  Lu Baolu 
  Lutz Euler 
  Lv Zheng 
  Manish Chopra 
  Marcel Holtmann 
  Marco Angaroni 
  Marek Lindner 
  Marek Szyprowski 
  Mario Kleiner 
  Mark Brown 
  Markus Pargmann 
  Martin K. Petersen 
  Martyn Welch 
  Mathias Krause 
  Mathias Nyman 
  Matt Fleming 
  Matt Gumbel 
  Maurizio Lombardi 
  Mauro Carvalho Chehab 
  Mauro Carvalho Chehab 
  Max Filippov 
  Michael Ellerman 
  Michael Hennerich 
  Michael S. Tsirkin 
  Michal Marek 
  Mike Manning 
  Nicolai Hähnle 
  Nicolas Dichtel 
  Nikolay Aleksandrov 
  Nishanth Menon 
  OGAWA Hirofumi 
  Oliver Neukum 
  Pali Rohár 
  Paolo Bonzini 
  Paul Gortmaker 
  Pavel Emelyanov 
  Peter Hurley 
  Peter Zijlstra (Intel) 
  Philip Müller 
  ph...@manjaro.org
  Prarit Bhargava 
  Rabin Vincent 
  Radim Krčmář 
  Rafael J. Wysoc

Re: [Xen-devel] [PATCH 3/8] x86/vm-event/monitor: relocate code-motion more appropriately

2016-07-04 Thread Jan Beulich
>>> On 30.06.16 at 20:43,  wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -475,8 +475,6 @@ void hvm_do_resume(struct vcpu *v)
>  
>  if ( unlikely(v->arch.vm_event) )
>  {
> -struct monitor_write_data *w = &v->arch.vm_event->write_data;
> -
>  if ( v->arch.vm_event->emulate_flags )
>  {
>  enum emul_kind kind = EMUL_KIND_NORMAL;
> @@ -493,32 +491,10 @@ void hvm_do_resume(struct vcpu *v)
>  
>  v->arch.vm_event->emulate_flags = 0;
>  }
> -
> -if ( w->do_write.msr )
> -{
> -hvm_msr_write_intercept(w->msr, w->value, 0);
> -w->do_write.msr = 0;
> -}
> -
> -if ( w->do_write.cr0 )
> -{
> -hvm_set_cr0(w->cr0, 0);
> -w->do_write.cr0 = 0;
> -}
> -
> -if ( w->do_write.cr4 )
> -{
> -hvm_set_cr4(w->cr4, 0);
> -w->do_write.cr4 = 0;
> -}
> -
> -if ( w->do_write.cr3 )
> -{
> -hvm_set_cr3(w->cr3, 0);
> -w->do_write.cr3 = 0;
> -}
>  }
>  
> +arch_monitor_write_data(v);

Why does this get moved outside the if(), with the same condition
getting added inside the function (inverted for bailing early)?

> @@ -119,6 +156,55 @@ bool_t monitored_msr(const struct domain *d, u32 msr)
>  return test_bit(msr, bitmap);
>  }
>  
> +static void write_ctrlreg_adjust_traps(struct domain *d)
> +{
> +struct vcpu *v;
> +struct arch_vmx_struct *avmx;
> +unsigned int cr3_bitmask;
> +bool_t cr3_vmevent, cr3_ldexit;
> +
> +/* Adjust CR3 load-exiting. */
> +
> +/* vmx only */
> +ASSERT(cpu_has_vmx);
> +
> +/* non-hap domains trap CR3 writes unconditionally */
> +if ( !paging_mode_hap(d) )
> +{
> +for_each_vcpu ( d, v )
> +ASSERT(v->arch.hvm_vmx.exec_control & 
> CPU_BASED_CR3_LOAD_EXITING);
> +return;
> +}
> +
> +cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
> +cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);
> +
> +for_each_vcpu ( d, v )
> +{
> +avmx = &v->arch.hvm_vmx;
> +cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);
> +
> +if ( cr3_vmevent == cr3_ldexit )
> +continue;
> +
> +/*
> + * If CR0.PE=0, CR3 load exiting must remain enabled.
> + * See vmx_update_guest_cr code motion for cr = 0.
> + */
> +if ( cr3_ldexit && !hvm_paging_enabled(v) && 
> !vmx_unrestricted_guest(v) 
> )
> +continue;
> +
> +if ( cr3_vmevent )
> +avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
> +else
> +avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;
> +
> +vmx_vmcs_enter(v);
> +vmx_update_cpu_exec_control(v);
> +vmx_vmcs_exit(v);
> +}
> +}

While Razvan gave his ack already, I wonder whether it's really a
good idea to put deeply VMX-specific code outside of a VMX-specific
file.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 0/3] libxl: add framework for device types

2016-07-04 Thread Juergen Gross
On 21/06/16 16:24, Juergen Gross wrote:
> Instead of duplicate coding for each device type (vtpms, usbctrls, ...)
> especially on domain creation introduce a framework for that purpose.
> 
> I especially found it annoying that e.g. the vtpm callback issued the
> error message for a failed attach of nic devices.
> 
> Juergen Gross (3):
>   libxl: add framework for device types
>   libxl: refactor domcreate_attach_pci() to use device type framework
>   libxl: refactor domcreate_attach_dtdev() to use device type framework
> 
>  tools/libxl/libxl.c  |  12 ++
>  tools/libxl/libxl_create.c   | 275 
> +--
>  tools/libxl/libxl_internal.h |  14 +++
>  tools/libxl/libxl_pci.c  |  36 ++
>  tools/libxl/libxl_pvusb.c|  13 ++
>  5 files changed, 159 insertions(+), 191 deletions(-)
> 

Ping?


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v7 00/11] grub-xen: support booting huge pv-domains

2016-07-04 Thread Juergen Gross
On 12/05/16 07:35, Juergen Gross wrote:
> Gentle ping...

Okay, now 4 months since posting the last version. Could it please be
included in a more timely manner?


Juergen

> 
> On 03/03/16 10:38, Juergen Gross wrote:
>> The Xen hypervisor supports starting a dom0 with large memory (up to
>> the TB range) by not including the initrd and p2m list in the initial
>> kernel mapping. Especially the p2m list can grow larger than the
>> available virtual space in the initial mapping.
>>
>> The started kernel is indicating the support of each feature via
>> elf notes.
>>
>> This series enables grub-xen to do the same as the hypervisor.
>>
>> Tested with:
>> - 32 bit domU (kernel not supporting unmapped initrd)
>> - 32 bit domU (kernel supporting unmapped initrd)
>> - 1 GB 64 bit domU (kernel supporting unmapped initrd, not p2m)
>> - 1 GB 64 bit domU (kernel supporting unmapped initrd and p2m)
>> - 900GB 64 bit domU (kernel supporting unmapped initrd and p2m)
>>
>> Changes in V7:
>> - patch 9: set initrd size once instead of in if and else clause as requested
>>   by Daniel Kiper
>> - patch 10: add GRUB_PACKED attribute to structure, drop alignments in 
>> assembly
>>   files as requested by Daniel Kiper
>>
>> Changes in V6:
>> - patch 9: rename grub_xen_alloc_final() as requested by Daniel Kiper
>>
>> Changes in V5:
>> - patch 2: set grub_errno to GRUB_ERR_NONE to avoid false error reports as
>>   requested by Daniel Kiper
>> - patch 9: let call grub_xen_alloc_final() all subfunctions unconditionally
>>   and let them decide whether they need to do anything as suggested by
>>   Daniel Kiper
>>
>> Changes in V4:
>> - split patch 1 into two patches as requested by Daniel Kiper
>> - patch 9 (was 8): rename grub_xen_alloc_end() as requested by Daniel Kiper
>> - patch 10 (was 9): align variables in assembly sources,
>>   use separate structure define as requested by Daniel Kiper
>>
>> Changes in V3:
>> - added new patch 1 (free memory in case of error) as requested by
>>   Daniel Kiper
>> - added new patch 2 (avoid global variables) as requested by Daniel Kiper
>> - added new patch 3 (use constants for elf notes) as requested by Daniel 
>> Kiper
>> - added new patch 4 (sync with new Xen headers) in order to use constants
>>   in assembly code
>> - modified patch 9 (was patch 5) to use constants instead of numbers as
>>   requested by Daniel Kiper
>>
>> Changes in V2:
>> - rebased patch 5 to current master
>>
>> Juergen Gross (11):
>>   xen: make xen loader callable multiple times
>>   xen: avoid memleaks on error
>>   xen: reduce number of global variables in xen loader
>>   xen: add elfnote.h to avoid using numbers instead of constants
>>   xen: synchronize xen header
>>   xen: factor out p2m list allocation into separate function
>>   xen: factor out allocation of special pages into separate function
>>   xen: factor out allocation of page tables into separate function
>>   xen: add capability to load initrd outside of initial mapping
>>   xen: modify page table construction
>>   xen: add capability to load p2m list outside of kernel mapping
>>
>>  grub-core/lib/i386/xen/relocator.S   |  87 ++--
>>  grub-core/lib/x86_64/xen/relocator.S | 134 +++---
>>  grub-core/lib/xen/relocator.c|  28 +-
>>  grub-core/loader/i386/xen.c  | 778 
>> +++
>>  grub-core/loader/i386/xen_fileXX.c   |  45 +-
>>  include/grub/i386/memory.h   |   7 +
>>  include/grub/xen/relocator.h |   6 +-
>>  include/grub/xen_file.h  |   3 +
>>  include/xen/arch-x86/xen-x86_32.h|  22 +-
>>  include/xen/arch-x86/xen-x86_64.h|   8 +-
>>  include/xen/elfnote.h| 281 +
>>  include/xen/xen.h| 125 --
>>  12 files changed, 1076 insertions(+), 448 deletions(-)
>>  create mode 100644 include/xen/elfnote.h
>>
> 
> 
> ___
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] Fwd: [Xen-users] Adaptec (PMC Sierra) RAID controller is not working under XEN

2016-07-04 Thread George Dunlap
Forwarding to xen-devel to see if anyone has any ideas...

 -George

-- Forwarded message --
From: Alex Wakizashi 
Date: Tue, Jun 28, 2016 at 11:38 PM
Subject: [Xen-users] Adaptec (PMC Sierra) RAID controller is not
working under XEN
To: xen-us...@lists.xen.org


Hello, all,

Have old Supermicro server with 55xx chipset, and tried to use it as
server for virtual machines.
The system has only SATA2 onboard, so I've used a spare Adaptec RAID
controller in HBA mode (using ZFS, so no need to use its hw RAID
capabilities).
Server is remote-controlled over IPMI (placed at colocation), so I
have cryptsetup and dropbear installed in initramfs.
OS is Debian Jessie 8.5.

Have tried both standard Jessie kernel (3.16), and backported (just
recompiled from .dsc) kernel from SID (4.6.2).
There is no difference with Xen - both always hang at Adaptec
adapter initialization.
Without XEN both are working (Except for extremely poor disk
performance in 3.16, but that's a different story, maybe related to
chipset bug).
Have tried XEN 4.4 (from Jessie) and 4.6 (compiled manually) - no difference.

While looking for solution, I've found this link:
http://support.citrix.com/article/CTX136517
This is an article about broken 55xx chipset versions, and yes - it's
exactly the case.
Spec update: 5520-and-5500-chipset-ioh-specification-update.pdf

Recommendation was to disable interrupt remapping for XEN and for
Linux kernel... but that does not work.

Also, I've tried to disable IOMMU completely (Both for kernel and for
Xen, as well as disable VT-d and IO/AT in BIOS) - and that does not
work either.

In general, I don't need IOMMU of VT-d, just need to run few Debian
VMs in HVM(preferable) or PV/PVH mode...

Situation looks very similar to some Marvell SATA controllers, which
are refusing to work with XEN as well... AFAIK, there is no solution
for it so far.

Does anyone have advice - how to make it working?
Or maybe there is recommended model of HBA/RAID controller, which are
known to work correctly with buggy 5520 chipset?

Here is lspci output:

- Cut -
# lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation 5520 I/O Hub to ESI Port
[8086:3406] (rev 22)
00:01.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI
Express Root Port 1 [8086:3408] (rev 22)
00:03.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI
Express Root Port 3 [8086:340a] (rev 22)
00:05.0 PCI bridge [0604]: Intel Corporation 5520/X58 I/O Hub PCI
Express Root Port 5 [8086:340c] (rev 22)
00:07.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI
Express Root Port 7 [8086:340e] (rev 22)
00:09.0 PCI bridge [0604]: Intel Corporation 7500/5520/5500/X58 I/O
Hub PCI Express Root Port 9 [8086:3410] (rev 22)
00:13.0 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub
I/OxAPIC Interrupt Controller [8086:342d] (rev 22)
00:14.0 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub
System Management Registers [8086:342e] (rev 22)
00:14.1 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO
and Scratch Pad Registers [8086:3422] (rev 22)
00:14.2 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub
Control Status and RAS Registers [8086:3423] (rev 22)
00:14.3 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub
Throttle Registers [8086:3438] (rev 22)
00:16.0 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:3430] (rev 22)
00:16.1 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:3431] (rev 22)
00:16.2 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:3432] (rev 22)
00:16.3 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:3433] (rev 22)
00:16.4 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:3429] (rev 22)
00:16.5 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:342a] (rev 22)
00:16.6 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:342b] (rev 22)
00:16.7 System peripheral [0880]: Intel Corporation 5520/5500/X58
Chipset QuickData Technology Device [8086:342c] (rev 22)
00:1a.0 USB controller [0c03]: Intel Corporation 82801JI (ICH10
Family) USB UHCI Controller #4 [8086:3a37]
00:1a.1 USB controller [0c03]: Intel Corporation 82801JI (ICH10
Family) USB UHCI Controller #5 [8086:3a38]
00:1a.2 USB controller [0c03]: Intel Corporation 82801JI (ICH10
Family) USB UHCI Controller #6 [8086:3a39]
00:1a.7 USB controller [0c03]: Intel Corporation 82801JI (ICH10
Family) USB2 EHCI Controller #2 [8086:3a3c]
00:1d.0 USB controller [0c03]: Intel Corporation 82801JI (ICH10
Family) USB UHCI Controller #1 [8086:3a34]
00:1d.1 USB controller [0c03]: Intel Corporation 82801JI (ICH10
Family) USB UHCI Controller #2 [8086:3a35]
00:1d.2 USB controller [0c03]: Intel Corporation 82801JI

Re: [Xen-devel] [PATCH v7 00/11] grub-xen: support booting huge pv-domains

2016-07-04 Thread Daniel Kiper
On Mon, Jul 04, 2016 at 12:33:17PM +0200, Juergen Gross wrote:
> On 12/05/16 07:35, Juergen Gross wrote:
> > Gentle ping...
>
> Okay, now 4 months since posting the last version. Could it please be
> included in a more timely manner?

Juergen and others, please be patient a bit longer. GNU, the current maintainer,
others and I are working on improving GRUB2 maintenance. It will take some
time (2-4 weeks). We will drop more info when everything is established.

Stay tuned...

Daniel

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 3/8] x86/vm-event/monitor: relocate code-motion more appropriately

2016-07-04 Thread Corneliu ZUZU

Hi Jan,

On 7/4/2016 1:22 PM, Jan Beulich wrote:

On 30.06.16 at 20:43,  wrote:

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -475,8 +475,6 @@ void hvm_do_resume(struct vcpu *v)
  
  if ( unlikely(v->arch.vm_event) )

  {
-struct monitor_write_data *w = &v->arch.vm_event->write_data;
-
  if ( v->arch.vm_event->emulate_flags )
  {
  enum emul_kind kind = EMUL_KIND_NORMAL;
@@ -493,32 +491,10 @@ void hvm_do_resume(struct vcpu *v)
  
  v->arch.vm_event->emulate_flags = 0;

  }
-
-if ( w->do_write.msr )
-{
-hvm_msr_write_intercept(w->msr, w->value, 0);
-w->do_write.msr = 0;
-}
-
-if ( w->do_write.cr0 )
-{
-hvm_set_cr0(w->cr0, 0);
-w->do_write.cr0 = 0;
-}
-
-if ( w->do_write.cr4 )
-{
-hvm_set_cr4(w->cr4, 0);
-w->do_write.cr4 = 0;
-}
-
-if ( w->do_write.cr3 )
-{
-hvm_set_cr3(w->cr3, 0);
-w->do_write.cr3 = 0;
-}
  }
  
+arch_monitor_write_data(v);

Why does this get moved outside the if(), with the same condition
getting added inside the function (inverted for bailing early)?


I left it that way because of patch 5/8 - specifically, monitor_write_data 
handling shouldn't depend on the vm_event subsystem being initialized.
But you're right, it still does depend on that initialization in this 
patch, so I should leave the call inside the if (and remove the check 
inside the function) as you suggest and only get it out in 5/8.

Will do that in v2.




@@ -119,6 +156,55 @@ bool_t monitored_msr(const struct domain *d, u32 msr)
  return test_bit(msr, bitmap);
  }
  
+static void write_ctrlreg_adjust_traps(struct domain *d)

+{
+struct vcpu *v;
+struct arch_vmx_struct *avmx;
+unsigned int cr3_bitmask;
+bool_t cr3_vmevent, cr3_ldexit;
+
+/* Adjust CR3 load-exiting. */
+
+/* vmx only */
+ASSERT(cpu_has_vmx);
+
+/* non-hap domains trap CR3 writes unconditionally */
+if ( !paging_mode_hap(d) )
+{
+for_each_vcpu ( d, v )
+ASSERT(v->arch.hvm_vmx.exec_control & CPU_BASED_CR3_LOAD_EXITING);
+return;
+}
+
+cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
+cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);
+
+for_each_vcpu ( d, v )
+{
+avmx = &v->arch.hvm_vmx;
+cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);
+
+if ( cr3_vmevent == cr3_ldexit )
+continue;
+
+/*
+ * If CR0.PE=0, CR3 load exiting must remain enabled.
+ * See vmx_update_guest_cr code motion for cr = 0.
+ */
+if ( cr3_ldexit && !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v)
)
+continue;
+
+if ( cr3_vmevent )
+avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
+else
+avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;
+
+vmx_vmcs_enter(v);
+vmx_update_cpu_exec_control(v);
+vmx_vmcs_exit(v);
+}
+}

While Razvan gave his ack already, I wonder whether it's really a
good idea to put deeply VMX-specific code outside of a VMX-specific
file.

Jan


Well, a summary of what this function does would sound like: "adjusts 
CR3 load-exiting for cr-write monitor vm-events". IMHO that's (monitor) 
vm-event specific enough to be placed within the vm-event subsystem.
Could you suggest concretely what this separation would look like (where 
to put this function or parts of it, and what name it should have once 
moved)? Another reason this was done (besides avoiding 
hackishly doing a CR0 update when we actually need a CR3 update 
specifically for a vm-event to happen) is keeping symmetry between 
ARM<->X86 in a future patch that would implement monitor CR vm-events 
for ARM. In that patch write_ctrlreg_adjust_traps is renamed and 
implemented per-architecture; on ARM it would have the same job, i.e. 
updating some hypervisor traps (~ vmx execution controls) for CR 
vm-events to happen.


On a different note, one thing I forgot to do though is to also move the 
following check (instead of completely removing it from 
arch_monitor_domctl_event):


if ( VM_EVENT_X86_CR3 == mop->u.mov_to_cr.index )

inside write_ctrlreg_adjust_traps. Will remedy that in v2.

Thanks,
Corneliu.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] libxl/netbsd: check num_exec in hotplug function

2016-07-04 Thread Wei Liu
Add back xen-devel. Please reply to all recipients in the future.

On Mon, Jul 04, 2016 at 01:11:04AM -0700, John Nemeth wrote:
> On Jul 2, 12:35pm, Wei Liu wrote:
> }
> } This basically replicates the same logic in libxl_linux.c. Without this
> } libxl will loop indefinitely trying to execute hotplug script.
> 
>  One minor change required (see below).
> 
> } Reported-by: John Nemeth 
> } Signed-off-by: Wei Liu 
> } ---
> }  tools/libxl/libxl_netbsd.c | 18 ++
> }  1 file changed, 18 insertions(+)
> } 
> } diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
> } index 096c057..92d3c89 100644
> } --- a/tools/libxl/libxl_netbsd.c
> } +++ b/tools/libxl/libxl_netbsd.c
> } @@ -68,7 +68,25 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, 
> libxl__device *dev,
> }  
> }  switch (dev->backend_kind) {
> }  case LIBXL__DEVICE_KIND_VBD:
> } +if (num_exec != 0) {
> } +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
> num_exec);
> } +rc = 0;
> } +goto out;
> } +}
> } +rc = libxl__hotplug(gc, dev, args, action);
> } +if (!rc) rc = 1;
> } +break;
> }  case LIBXL__DEVICE_KIND_VIF:
> } +/*
> } + * If domain has a stubdom we don't have to execute hotplug scripts
> } + * for emulated interfaces
> } + */
> } +if ((num_exec > 1) ||
> 
>  The function is called with num_exec set to 0 and 1, so this
> should be:
> 
>if ((num_exec != 0) ||
> 

AIUI this is related to how the network is set up, because we would need to
hotplug both the emulated nic in QEMU and the PV nic. Is this line
causing problems for you?

Wei.

> } +(libxl_get_stubdom_id(CTX, dev->domid) && num_exec)) {
> } +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
> num_exec);
> } +rc = 0;
> } +goto out;
> } +}
> }  rc = libxl__hotplug(gc, dev, args, action);
> }  if (!rc) rc = 1;
> }  break;
> } -- 
> } 2.1.4
> } 
> }-- End of excerpt from Wei Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen: arm64: Add support for Renesas RCar Gen3 H3 Salvator-X platform

2016-07-04 Thread Julien Grall

(CC Wei for the release part)

Hi Dirk,

On 04/07/16 07:51, Dirk Behme wrote:

Signed-off-by: Dirk Behme 


Thank you for adding support for a new board to Xen.

During the last hackathon, we discussed improving pre-release 
testing on ARM hardware [1] and helping users to boot Xen on supported 
boards.


This patch adds the first board officially supported since then, so I 
would like to start applying what was discussed (I will put it on a wiki 
page later). Below is the list of things that I would like to see when a 
new board is added:


   - Create a wiki page to explain the requirements to boot Xen on the 
board (e.g. new firmware if not supported out of the box, Linux 
version, ...);

   - Add a link to the new page in [2];
   - Add the contact details in [3] of someone who could test 
pre-releases and help users boot Xen on the board.


I do not expect the latter point to be time consuming. It is basically 
checking whether Xen boots before each release and possibly updating the 
requirements of the board. As for helping users, it will mostly be 
questions related to booting Xen on the hardware. Others may not be able 
to answer because they do not have the board on their desk.


In the future, I would like to see Xen tested before each release on all 
the officially supported boards. If a board was not tested, we will 
consider it as not supported.


[...]


diff --git a/xen/arch/arm/platforms/rcar3.c b/xen/arch/arm/platforms/rcar3.c
new file mode 100644
index 000..5a53f15
--- /dev/null
+++ b/xen/arch/arm/platforms/rcar3.c
@@ -0,0 +1,41 @@
+/*
+ * xen/arch/arm/platforms/rcar3.c
+ *
+ * Renesas R-Car Gen3 specific settings
+ *
+ * Dirk Behme 
+ *
+ * based on Renesas R-Car Gen2 specific settings
+ * Iurii Konovalenko 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include 
+
+static const char const *rcar3_dt_compat[] __initdata =
+{
+"renesas,salvator-x",
+NULL
+};
+
+PLATFORM_START(rcar3, "Renesas R-Car Gen3")
+.compatible = rcar3_dt_compat,


Your platform does not seem to require specific bring-up code, so this 
file can be dropped.



+PLATFORM_END
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */


Regards,

[1] 
http://lists.xenproject.org/archives/html/xen-devel/2016-03/msg03683.html


[2] 
http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions#Hardware


[3] http://wiki.xenproject.org/wiki/Xen_ARM_Manual_Smoke_Test/Results

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.6 script calling conventions

2016-07-04 Thread Wei Liu
On Mon, Jul 04, 2016 at 01:07:51AM -0700, John Nemeth wrote:
> On Jul 2, 12:40pm, Wei Liu wrote:
> } On Tue, Jun 28, 2016 at 06:00:49PM -0700, John Nemeth wrote:
> } >  I'm trying to package Xen 4.6 (specifically Xen 4.6.3) for
> } > use with NetBSD.  I have it mostly done; however, when I try to
> } > create a domU, libxl goes into an infinite loop calling the scripts.
> } > If I try to create a domU with no network or disk, it works fine
> } > (albeit of rather limited use).  Have there been changes between
> } > Xen 4.5 and Xen 4.6 in the calling convention for the scripts?  Is
> } > there documentation on what is expected somewhere?  Please CC me on
> } > any responses.  Here is my domU config file:
> } 
> } Can you give this patch a try? I don't have netbsd system at hand to
> } test it.
> 
>  Thanks.  It pretty much did the trick.  I just had to make
> one minor change to your patch.
> 
> } I suspect netbsd doesn't support stubdom because that pile of code is a
> } bit Linux centric, but it wouldn't hurt to prepare for it.
> 
>  No, NetBSD doesn't do stubdom.  However, I would certainly
> like to NetBSD's support for Xen become equal to Linux's support
> (or better :-) ), so anything that makes that easier is a good
> thing.

Oh, right, I missed this email. Sorry.

Let's discuss in the other email how we can fix this minor issue. I
certainly need to be educated about netbsd xen setup.

Wei.

> 
> }-- End of excerpt from Wei Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 2/3] arm/xen: add support for vm_assist hypercall

2016-07-04 Thread Julien Grall

Hi Juergen,

On 22/06/16 08:03, Juergen Gross wrote:

Add support for the Xen HYPERVISOR_vm_assist hypercall.

Signed-off-by: Juergen Gross 


Reviewed-by: Julien Grall 

Regards,


---
  arch/arm/include/asm/xen/hypercall.h | 1 +
  arch/arm/xen/enlighten.c | 1 +
  arch/arm/xen/hypercall.S | 1 +
  arch/arm64/xen/hypercall.S   | 1 +
  4 files changed, 4 insertions(+)

diff --git a/arch/arm/include/asm/xen/hypercall.h 
b/arch/arm/include/asm/xen/hypercall.h
index b6b962d..9d874db 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -52,6 +52,7 @@ int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
  int HYPERVISOR_physdev_op(int cmd, void *arg);
  int HYPERVISOR_vcpu_op(int cmd, int vcpuid, void *extra_args);
  int HYPERVISOR_tmem_op(void *arg);
+int HYPERVISOR_vm_assist(unsigned int cmd, unsigned int type);
  int HYPERVISOR_platform_op_raw(void *arg);
  static inline int HYPERVISOR_platform_op(struct xen_platform_op *op)
  {
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 71db30c..0f3aa12 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -389,4 +389,5 @@ EXPORT_SYMBOL_GPL(HYPERVISOR_vcpu_op);
  EXPORT_SYMBOL_GPL(HYPERVISOR_tmem_op);
  EXPORT_SYMBOL_GPL(HYPERVISOR_platform_op);
  EXPORT_SYMBOL_GPL(HYPERVISOR_multicall);
+EXPORT_SYMBOL_GPL(HYPERVISOR_vm_assist);
  EXPORT_SYMBOL_GPL(privcmd_call);
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
index 9a36f4f..a648dfc 100644
--- a/arch/arm/xen/hypercall.S
+++ b/arch/arm/xen/hypercall.S
@@ -91,6 +91,7 @@ HYPERCALL3(vcpu_op);
  HYPERCALL1(tmem_op);
  HYPERCALL1(platform_op_raw);
  HYPERCALL2(multicall);
+HYPERCALL2(vm_assist);

  ENTRY(privcmd_call)
stmdb sp!, {r4}
diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
index 70df80e..329c802 100644
--- a/arch/arm64/xen/hypercall.S
+++ b/arch/arm64/xen/hypercall.S
@@ -82,6 +82,7 @@ HYPERCALL3(vcpu_op);
  HYPERCALL1(tmem_op);
  HYPERCALL1(platform_op_raw);
  HYPERCALL2(multicall);
+HYPERCALL2(vm_assist);

  ENTRY(privcmd_call)
mov x16, x0



--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] libxl/netbsd: check num_exec in hotplug function

2016-07-04 Thread Wei Liu
Also CC Roger since he authored the original code.

Feel free to correct my misunderstanding on this issue.

On Mon, Jul 04, 2016 at 12:09:30PM +0100, Wei Liu wrote:
> Add back xen-devel. Please reply to all recipients in the future.
> 
> On Mon, Jul 04, 2016 at 01:11:04AM -0700, John Nemeth wrote:
> > On Jul 2, 12:35pm, Wei Liu wrote:
> > }
> > } This basically replicates the same logic in libxl_linux.c. Without this
> > } libxl will loop indefinitely trying to execute hotplug script.
> > 
> >  One minor change required (see below).
> > 
> > } Reported-by: John Nemeth 
> > } Signed-off-by: Wei Liu 
> > } ---
> > }  tools/libxl/libxl_netbsd.c | 18 ++
> > }  1 file changed, 18 insertions(+)
> > } 
> > } diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
> > } index 096c057..92d3c89 100644
> > } --- a/tools/libxl/libxl_netbsd.c
> > } +++ b/tools/libxl/libxl_netbsd.c
> > } @@ -68,7 +68,25 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, 
> > libxl__device *dev,
> > }  
> > }  switch (dev->backend_kind) {
> > }  case LIBXL__DEVICE_KIND_VBD:
> > } +if (num_exec != 0) {
> > } +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
> > num_exec);
> > } +rc = 0;
> > } +goto out;
> > } +}
> > } +rc = libxl__hotplug(gc, dev, args, action);
> > } +if (!rc) rc = 1;
> > } +break;
> > }  case LIBXL__DEVICE_KIND_VIF:
> > } +/*
> > } + * If domain has a stubdom we don't have to execute hotplug 
> > scripts
> > } + * for emulated interfaces
> > } + */
> > } +if ((num_exec > 1) ||
> > 
> >  The function is called with num_exec set to 0 and 1, so this
> > should be:
> > 
> >if ((num_exec != 0) ||
> > 
> 
> AIUI this is related to how the network is set up, because we would need
> to hotplug both the emulated nic in QEMU and the PV nic. Is this line
> causing a problem for you?
> 
> Wei.
> 
> > } +(libxl_get_stubdom_id(CTX, dev->domid) && num_exec)) {
> > } +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
> > num_exec);
> > } +rc = 0;
> > } +goto out;
> > } +}
> > }  rc = libxl__hotplug(gc, dev, args, action);
> > }  if (!rc) rc = 1;
> > }  break;
> > } -- 
> > } 2.1.4
> > } 
> > }-- End of excerpt from Wei Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch

2016-07-04 Thread George Dunlap
On Wed, Jun 29, 2016 at 5:27 PM, PGNet Dev  wrote:
> In summary, there's a problem
>
> An indication of the guest trying to allocate more memory that the
> host admin has allowed.
>
> that's filling logs with 10s of thousands of redundant log entries, with a
> suspicion that it's 'ballooning' issue in the guest
>
> Perhaps something wrong in the guest's balloon driver.
>
> With no currently known way to identify or troubleshoot the problem, and
> provide info here that could be helpful
>
> I'm simply not aware of existing output which would help; I can't
> see any way around instrumenting involved code.
>
> Not particularly ideal.
>
> Since this is the recommended bug-report channel, any next suggestions?
>
> Is there a particular dev involved in the ballooning that can be cc'd,
> perhaps to add some insight?

Thanks for your persistence. :-)

It's likely that this is related to a known problem with the interface
between the balloon driver and the toolstack.  The warning itself is
benign: it simply means that the balloon driver asked Xen for another
page (thinking incorrectly it was a few pages short), and was told
"No" by Xen.

Fixing it properly requires a re-architecting of the interface between
all the different components that use memory (Xen, qemu, the
toolstack, the guest balloon driver, &c). This is on the to-do list,
but since it's quite a complicated problem and the main side-effect
is mostly just warnings like this, it hasn't been a high priority.

If the log space is an issue for you, your best bet for now is to turn
down the log level so that this warning doesn't show up.
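
For example, the relevant knobs are the hypervisor command line options
loglvl= and guest_loglvl= (see docs/misc/xen-command-line.markdown in
the Xen tree). Assuming this message is emitted at the guest "info"
level, something like the following on the Xen boot command line should
hide it while keeping warnings and errors:

    guest_loglvl=warning loglvl=warning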

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.6 script calling conventions

2016-07-04 Thread Roger Pau Monné
On Mon, Jul 04, 2016 at 01:07:51AM -0700, John Nemeth wrote:
> On Jul 2, 12:40pm, Wei Liu wrote:
> } On Tue, Jun 28, 2016 at 06:00:49PM -0700, John Nemeth wrote:
> } >  I'm trying to package Xen 4.6 (specifically Xen 4.6.3) for
> } > use with NetBSD.  I have it mostly done; however, when I try to
> } > create a domU, libxl goes into an infinite loop calling the scripts.
> } > If I try to create a domU with no network or disk, it works fine
> } > (albeit of rather limited use).  Have there been changes between
> } > Xen 4.5 and Xen 4.6 in the calling convention for the scripts?  Is
> } > there documentation on what is expected somewhere?  Please CC me on
> } > any responses.  Here is my domU config file:
> } 
> } Can you give this patch a try? I don't have netbsd system at hand to
> } test it.
> 
>  Thanks.  It pretty much did the trick.  I just had to make
> one minor change to your patch.
> 
> } I suspect netbsd doesn't support stubdom because that pile of code is a
> } bit Linux centric, but it wouldn't hurt to prepare for it.
> 
>  No, NetBSD doesn't do stubdom.  However, I would certainly
> like to NetBSD's support for Xen become equal to Linux's support
> (or better :-) ), so anything that makes that easier is a good
> thing.

You should be able to mostly use stubdoms on NetBSD (like you can on 
FreeBSD); it's just that you won't be able to compile the stubdom kernel 
itself, as the build system is too broken/tailored to Linux.

You can for example pick it from the Fedora xen-runtime package:

https://apps.fedoraproject.org/packages/xen-runtime/

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes.

2016-07-04 Thread Sergej Proskurin
This commit changes the prototype of the following functions:
- apply_p2m_changes
- apply_one_level
- p2m_shatter_page
- p2m_create_table
- __p2m_lookup
- __p2m_get_mem_access

These changes are required as our implementation reuses most of the
existing ARM p2m implementation to set page table attributes of the
individual altp2m views. Therefore, existing function prototypes have
been extended to take another argument (of type struct p2m_domain *).
This allows specifying the p2m/altp2m domain that should be processed by
the individual function -- instead of accessing the host's default p2m
domain.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 80 +-
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 019f10e..9c8fefd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -200,9 +200,8 @@ void flush_tlb_domain(struct domain *d)
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a a software walk.
  */
-static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+static paddr_t __p2m_lookup(struct p2m_domain *p2m, paddr_t paddr, p2m_type_t 
*t)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
 const unsigned int offsets[4] = {
 zeroeth_table_offset(paddr),
 first_table_offset(paddr),
@@ -282,10 +281,11 @@ err:
 paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
 {
 paddr_t ret;
-struct p2m_domain *p2m = &d->arch.p2m;
+struct vcpu *v = current;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 
 spin_lock(&p2m->lock);
-ret = __p2m_lookup(d, paddr, t);
+ret = __p2m_lookup(p2m, paddr, t);
 spin_unlock(&p2m->lock);
 
 return ret;
@@ -441,10 +441,12 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t 
flush_cache)
  *
  * level_shift is the number of bits at the level we want to create.
  */
-static int p2m_create_table(struct domain *d, lpae_t *entry,
-int level_shift, bool_t flush_cache)
+static int p2m_create_table(struct domain *d,
+struct p2m_domain *p2m,
+lpae_t *entry,
+int level_shift,
+bool_t flush_cache)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
 struct page_info *page;
 lpae_t *p;
 lpae_t pte;
@@ -502,10 +504,9 @@ static int p2m_create_table(struct domain *d, lpae_t 
*entry,
 return 0;
 }
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
 xenmem_access_t *access)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 void *i;
 unsigned int index;
 
@@ -548,7 +549,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
  * No setting was found in the Radix tree. Check if the
  * entry exists in the page-tables.
  */
-paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
+paddr_t maddr = __p2m_lookup(p2m, gfn_x(gfn) << PAGE_SHIFT, NULL);
 if ( INVALID_PADDR == maddr )
 return -ESRCH;
 
@@ -677,17 +678,17 @@ static const paddr_t level_shifts[] =
 { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
 static int p2m_shatter_page(struct domain *d,
+struct p2m_domain *p2m,
 lpae_t *entry,
 unsigned int level,
 bool_t flush_cache)
 {
 const paddr_t level_shift = level_shifts[level];
-int rc = p2m_create_table(d, entry,
+int rc = p2m_create_table(d, p2m, entry,
   level_shift - PAGE_SHIFT, flush_cache);
 
 if ( !rc )
 {
-struct p2m_domain *p2m = &d->arch.p2m;
 p2m->stats.shattered[level]++;
 p2m->stats.mappings[level]--;
 p2m->stats.mappings[level+1] += LPAE_ENTRIES;
@@ -704,6 +705,7 @@ static int p2m_shatter_page(struct domain *d,
  * -ve == (-Exxx) error.
  */
 static int apply_one_level(struct domain *d,
+   struct p2m_domain *p2m,
lpae_t *entry,
unsigned int level,
bool_t flush_cache,
@@ -721,7 +723,6 @@ static int apply_one_level(struct domain *d,
 const paddr_t level_mask = level_masks[level];
 const paddr_t level_shift = level_shifts[level];
 
-struct p2m_domain *p2m = &d->arch.p2m;
 lpae_t pte;
 const lpae_t orig_pte = *entry;
 int rc;
@@ -776,7 +777,7 @@ static int apply_one_level(struct domain *d,
  * L3) or mem_access is in use. Create a page table and
  * continue to descend so we try smaller allocations.
  */
-rc = p2m_create_table(d, entry, 0, flush_cac

[Xen-devel] [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table.

2016-07-04 Thread Sergej Proskurin
The function initially named "p2m_alloc_table" allocated only the pages
required for the p2m. The new implementation keeps the p2m allocation
related parts inside this function (which is made static) and provides a
wrapper function "p2m_table_init" that can be called externally to
initialize p2m tables in general. It thereby distinguishes between
the domain's p2m and altp2m mappings, which are allocated similarly.

NOTE: Inside the function "p2m_alloc_table" we no longer take the p2m
lock. Also, we flush the TLBs outside of the function "p2m_alloc_table".
Instead, we perform the associated locking and TLB flushing as part of
the function p2m_table_init. This allows us to provide a uniform
interface for p2m-related table allocation, which can be used for altp2m
(and potentially nested p2m tables in a future implementation) -- as is
done in the x86 implementation.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/domain.c |  2 +-
 xen/arch/arm/p2m.c| 53 +--
 xen/include/asm-arm/p2m.h |  2 +-
 3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6ce4645..6102ed0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -573,7 +573,7 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 if ( (rc = domain_io_init(d)) != 0 )
 goto fail;
 
-if ( (rc = p2m_alloc_table(d)) != 0 )
+if ( (rc = p2m_table_init(d)) != 0 )
 goto fail;
 
 switch ( config->gic_version )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8bf23ee..7e721f9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1315,35 +1315,66 @@ void guest_physmap_remove_page(struct domain *d,
   d->arch.p2m.default_access);
 }
 
-int p2m_alloc_table(struct domain *d)
+static int p2m_alloc_table(struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
-struct page_info *page;
+struct page_info *page = NULL;
 unsigned int i;
 
 page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
 if ( page == NULL )
 return -ENOMEM;
 
-spin_lock(&p2m->lock);
-
-/* Clear both first level pages */
+/* Clear all first level pages */
 for ( i = 0; i < P2M_ROOT_PAGES; i++ )
 clear_and_clean_page(page + i);
 
 p2m->root = page;
 
-d->arch.vttbr = page_to_maddr(p2m->root)
-| ((uint64_t)p2m->vmid&0xff)<<48;
+p2m->vttbr.vttbr = 0;
+p2m->vttbr.vttbr_vmid = p2m->vmid & 0xff;
+p2m->vttbr.vttbr_baddr = page_to_maddr(p2m->root);
 
-/* Make sure that all TLBs corresponding to the new VMID are flushed
- * before using it
+return 0;
+}
+
+int p2m_table_init(struct domain *d)
+{
+int i = 0;
+int rc = -ENOMEM;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+spin_lock(&p2m->lock);
+
+rc = p2m_alloc_table(p2m);
+if ( rc != 0 )
+goto out;
+
+d->arch.vttbr = d->arch.p2m.vttbr.vttbr;
+
+/*
+ * Make sure that all TLBs corresponding to the new VMID are flushed
+ * before using it.
  */
 flush_tlb_domain(d);
 
 spin_unlock(&p2m->lock);
 
-return 0;
+if ( hvm_altp2m_supported() )
+{
+/* Init alternate p2m data */
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+d->arch.altp2m_vttbr[i] = INVALID_MFN;
+rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+if ( rc != 0 )
+goto out;
+}
+
+d->arch.altp2m_active = 0;
+}
+
+out:
+return rc;
 }
 
 #define MAX_VMID 256
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 783db5c..451b097 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -171,7 +171,7 @@ int relinquish_p2m_mapping(struct domain *d);
  *
  * Returns 0 for success or -errno.
  */
-int p2m_alloc_table(struct domain *d);
+int p2m_table_init(struct domain *d);
 
 /* Context switch */
 void p2m_save_state(struct vcpu *p);
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
Cc: Ian Jackson 
Cc: Wei Liu 
---
 tools/tests/xen-access/xen-access.c | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c 
b/tools/tests/xen-access/xen-access.c
index f26e723..ef21d0d 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -333,8 +333,9 @@ void usage(char* progname)
 {
 fprintf(stderr, "Usage: %s [-m]  write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec");
+fprintf(stderr, "|breakpoint");
 #endif
+fprintf(stderr, "|altp2m_write|altp2m_exec");
 fprintf(stderr,
 "\n"
 "Logs first page writes, execs, or breakpoint traps that occur on 
the domain.\n"
@@ -402,6 +403,7 @@ int main(int argc, char *argv[])
 {
 breakpoint = 1;
 }
+#endif
 else if ( !strcmp(argv[0], "altp2m_write") )
 {
 default_access = XENMEM_access_rx;
@@ -412,7 +414,6 @@ int main(int argc, char *argv[])
 default_access = XENMEM_access_rw;
 altp2m = 1;
 }
-#endif
 else
 {
 usage(argv[0]);
@@ -485,12 +486,14 @@ int main(int argc, char *argv[])
 goto exit;
 }
 
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep( xch, domain_id, 1 );
 if ( rc < 0 )
 {
 ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
 goto exit;
 }
+#endif
 }
 
 if ( !altp2m )
@@ -540,7 +543,9 @@ int main(int argc, char *argv[])
 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
 } else {
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 
~0ull, 0);
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 
START_PFN,
@@ -695,9 +700,11 @@ int main(int argc, char *argv[])
 exit:
 if ( altp2m )
 {
+#if defined(__i386__) || defined(__x86_64__)
 uint32_t vcpu_id;
+for ( vcpu_id = 0; vcpu_id

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.

2016-07-04 Thread Sergej Proskurin
HVMOP_altp2m_set_domain_state allows activating altp2m on a specific
domain. This commit adopts the x86 HVMOP_altp2m_set_domain_state
implementation. The function p2m_flush_altp2m is currently implemented
in the form of a stub.
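
For context, a rough sketch of how a privileged toolstack/monitoring
application would drive this operation through libxc (error handling
trimmed; it assumes the domain was created with the HVM_PARAM_ALTP2M
parameter enabled, as checked by the handler below):

#include <xenctrl.h>

static int toggle_altp2m(xc_interface *xch, uint32_t domid)
{
    /* Activate altp2m for the domain ... */
    int rc = xc_altp2m_set_domain_state(xch, domid, 1);

    if ( rc )
        return rc;

    /* ... create and switch altp2m views here ... */

    /* ... and deactivate it again. */
    return xc_altp2m_set_domain_state(xch, domid, 0);
}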

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/altp2m.c| 68 
 xen/arch/arm/hvm.c   | 30 ++-
 xen/arch/arm/p2m.c   | 46 ++
 xen/include/asm-arm/altp2m.h | 20 ++---
 xen/include/asm-arm/domain.h |  9 ++
 xen/include/asm-arm/p2m.h| 19 +
 7 files changed, 175 insertions(+), 18 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9e38da3..abd6f1a 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -41,6 +41,7 @@ obj-y += decode.o
 obj-y += processor.o
 obj-y += smc.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += altp2m.o
 
 #obj-bin-y += o
 
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 000..1d2505f
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,68 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see .
+ */
+
+#include 
+#include 
+#include 
+
+void altp2m_vcpu_reset(struct vcpu *v)
+{
+struct altp2mvcpu *av = &vcpu_altp2m(v);
+
+av->p2midx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialise(struct vcpu *v)
+{
+if ( v != current )
+vcpu_pause(v);
+
+altp2m_vcpu_reset(v);
+vcpu_altp2m(v).p2midx = 0;
+atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+struct p2m_domain *p2m;
+
+if ( v != current )
+vcpu_pause(v);
+
+if ( (p2m = p2m_get_altp2m(v)) )
+atomic_dec(&p2m->active_vcpus);
+
+altp2m_vcpu_reset(v);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8e8e0f7..cb90a55 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -104,8 +104,36 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_set_domain_state:
-rc = -EOPNOTSUPP;
+{
+struct vcpu *v;
+bool_t ostate;
+
+if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+{
+rc = -EINVAL;
+break;
+}
+
+ostate = d->arch.altp2m_active;
+d->arch.altp2m_active = !!a.u.domain_state.state;
+
+/* If the alternate p2m state has changed, handle appropriately */
+if ( d->arch.altp2m_active != ostate &&
+ (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
+{
+for_each_vcpu( d, v )
+{
+if ( !ostate )
+altp2m_vcpu_initialise(v);
+else
+altp2m_vcpu_destroy(v);
+}
+
+if ( ostate )
+p2m_flush_altp2m(d);
+}
 break;
+}
 
 case HVMOP_altp2m_vcpu_enable_notify:
 rc = -EOPNOTSUPP;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e72ca7a..4a745fd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2064,6 +2064,52 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
 return ret;
 }
 
+struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
+{
+unsigned int index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return NULL;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return v->domain->arch.altp2m_p2m[index];
+}
+
+static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
+{
+struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+struct vttbr_data *vttbr = &p2m->vttbr;
+
+p2m->lowest_mapped_gfn = INVALID_GFN;
+p2m->max_mapped_gfn = 0;
+
+vttbr->vttbr_baddr = page_to_maddr(p2m->root);
+vttbr->vttbr_vmid = p2m->vmid;
+
+d->arch.altp2m_vttbr[i] = vttbr->vttbr;
+}
+
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+re

[Xen-devel] [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m.

2016-07-04 Thread Sergej Proskurin
This commit adapts get_page_from_gva to consider the currently mapped
altp2m view during address translation. We also adapt the function's
prototype (it now takes a "struct vcpu *" instead of a "struct domain
*"). This change is required, as the function indirectly calls
gva_to_ma_par, which requires the MMU to use the current p2m mapping. So
if the caller is interested in a page that must be claimed from a vCPU
other than current, it must temporarily set the altp2m view that is used
by the vCPU in question. Therefore, we need to provide the particular
vCPU to this function.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/guestcopy.c |  6 +++---
 xen/arch/arm/p2m.c   | 19 +--
 xen/arch/arm/traps.c |  2 +-
 xen/include/asm-arm/mm.h |  2 +-
 4 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index ce1c3c3..413125f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const 
void *from,
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
 if ( page == NULL )
 return len;
 
@@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
 if ( page == NULL )
 return len;
 
@@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user 
*from, unsigned le
 unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
 struct page_info *page;
 
-page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
 if ( page == NULL )
 return len;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9c8fefd..23b482f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1829,10 +1829,11 @@ err:
 return page;
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 unsigned long flags)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct domain *d = v->domain;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 struct page_info *page = NULL;
 paddr_t maddr = 0;
 int rc;
@@ -1844,17 +1845,23 @@ struct page_info *get_page_from_gva(struct domain *d, 
vaddr_t va,
 unsigned long irq_flags;
 
 local_irq_save(irq_flags);
-p2m_load_VTTBR(d);
+
+if ( altp2m_active(d) )
+p2m_load_altp2m_VTTBR(v);
+else
+p2m_load_VTTBR(d);
 
 rc = gvirt_to_maddr(va, &maddr, flags);
 
-p2m_load_VTTBR(current->domain);
+if ( altp2m_active(current->domain) )
+p2m_load_altp2m_VTTBR(current);
+else
+p2m_load_VTTBR(current->domain);
+
 local_irq_restore(irq_flags);
 }
 else
-{
 rc = gvirt_to_maddr(va, &maddr, flags);
-}
 
 if ( rc )
 goto err;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44926ca..6995971 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -957,7 +957,7 @@ static void show_guest_stack(struct vcpu *v, struct 
cpu_user_regs *regs)
 return;
 }
 
-page = get_page_from_gva(v->domain, sp, GV2M_READ);
+page = get_page_from_gva(v, sp, GV2M_READ);
 if ( page == NULL )
 {
 printk("Failed to convert stack to physical address\n");
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 68cf203..19eadd2 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -281,7 +281,7 @@ static inline void *page_to_virt(const struct page_info *pg)
 return mfn_to_virt(page_to_mfn(pg));
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 unsigned long flags);
 
 /*
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.

2016-07-04 Thread Sergej Proskurin
The HVMOP HVMOP_altp2m_set_mem_access allows setting gfn permissions
(currently one page at a time) of a specific altp2m view. In case the
view does not hold the requested gfn entry, the entry will first be
copied from the hostp2m table and then modified as requested.
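
For context, the toolstack-side counterpart, as a rough sketch (the
libxc wrapper name and prototype are assumed to match the existing x86
xc_altp2m_set_mem_access; one gfn at a time, as described above):

#include <xenctrl.h>

/* Make a single gfn read-only in altp2m view 'view_id' of domain 'domid'. */
static int restrict_gfn(xc_interface *xch, uint32_t domid,
                        uint16_t view_id, xen_pfn_t gfn)
{
    return xc_altp2m_set_mem_access(xch, domid, view_id, gfn,
                                    XENMEM_access_r);
}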

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c |   7 +-
 xen/arch/arm/p2m.c | 207 +
 2 files changed, 200 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 9a536b2..8218737 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -153,7 +153,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_set_mem_access:
-rc = -EOPNOTSUPP;
+if ( a.u.set_mem_access.pad )
+rc = -EINVAL;
+else
+rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+a.u.set_mem_access.hvmmem_access,
+a.u.set_mem_access.view);
 break;
 
 case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 23b482f..395ea0f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2085,6 +2085,159 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, 
const struct npfec npfec)
 return false;
 }
 
+static int p2m_get_gfn_level_and_attr(struct p2m_domain *p2m,
+  paddr_t paddr, unsigned int *level,
+  unsigned long *mattr)
+{
+const unsigned int offsets[4] = {
+zeroeth_table_offset(paddr),
+first_table_offset(paddr),
+second_table_offset(paddr),
+third_table_offset(paddr)
+};
+lpae_t pte, *map;
+unsigned int root_table;
+
+ASSERT(spin_is_locked(&p2m->lock));
+BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
+
+if ( P2M_ROOT_PAGES > 1 )
+{
+/*
+ * Concatenated root-level tables. The table number will be
+ * the offset at the previous level. It is not possible to
+ * concatenate a level-0 root.
+ */
+ASSERT(P2M_ROOT_LEVEL > 0);
+root_table = offsets[P2M_ROOT_LEVEL - 1];
+if ( root_table >= P2M_ROOT_PAGES )
+goto err;
+}
+else
+root_table = 0;
+
+map = __map_domain_page(p2m->root + root_table);
+
+ASSERT(P2M_ROOT_LEVEL < 4);
+
+/* Find the p2m level of the wanted paddr */
+for ( *level = P2M_ROOT_LEVEL ; *level < 4 ; (*level)++ )
+{
+pte = map[offsets[*level]];
+
+if ( *level == 3 || !p2m_table(pte) )
+/* Done */
+break;
+
+ASSERT(*level < 3);
+
+/* Map for next level */
+unmap_domain_page(map);
+map = map_domain_page(_mfn(pte.p2m.base));
+}
+
+unmap_domain_page(map);
+
+if ( !p2m_valid(pte) )
+goto err;
+
+/* Provide mattr information of the paddr */
+*mattr = pte.p2m.mattr;
+
+return 0;
+
+err:
+return -EINVAL;
+}
+
+static inline
+int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
+  struct p2m_domain *ap2m, p2m_access_t a,
+  gfn_t gfn)
+{
+p2m_type_t p2mt;
+xenmem_access_t xma_old;
+paddr_t gpa = pfn_to_paddr(gfn_x(gfn));
+paddr_t maddr, mask = 0;
+unsigned int level;
+unsigned long mattr;
+int rc;
+
+static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+ACCESS(n),
+ACCESS(r),
+ACCESS(w),
+ACCESS(rw),
+ACCESS(x),
+ACCESS(rx),
+ACCESS(wx),
+ACCESS(rwx),
+ACCESS(rx2rw),
+ACCESS(n2rwx),
+#undef ACCESS
+};
+
+/* Check if entry is part of the altp2m view. */
+spin_lock(&ap2m->lock);
+maddr = __p2m_lookup(ap2m, gpa, &p2mt);
+spin_unlock(&ap2m->lock);
+
+/* Check host p2m if no valid entry in ap2m. */
+if ( maddr == INVALID_PADDR )
+{
+/* Check if entry is part of the host p2m view. */
+spin_lock(&hp2m->lock);
+maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+if ( maddr == INVALID_PADDR || p2mt != p2m_ram_rw )
+goto out;
+
+rc = __p2m_get_mem_access(hp2m, gfn, &xma_old);
+if ( rc )
+goto out;
+
+rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+if ( rc )
+goto out;
+spin_unlock(&hp2m->lock);
+
+mask = level_masks[level];
+
+/* If this is a superpage, copy that first. */
+if ( level != 3 )
+{
+rc = apply_p2m_changes(d, ap2m, INSERT,
+   gpa & mask,
+   (gpa + level_sizes[level]) & mask,
+   maddr & mask, mattr, 0, p2mt,
+   memaccess[xma

[Xen-devel] [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 96892a5..de97a12 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,7 +51,8 @@ static bool_t p2m_mapping(lpae_t pte)
 
 void p2m_dump_info(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct vcpu *v = current;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 
 spin_lock(&p2m->lock);
 printk("p2m mappings for domain %d (vmid %d):\n",
@@ -71,7 +72,8 @@ void memory_type_changed(struct domain *d)
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct vcpu *v = current;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 
 printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.

2016-07-04 Thread Sergej Proskurin
The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the functions p2m_flush_altp2m
and p2m_flush_table, which allow flushing all or individual altp2m views
without destroying the entire table. In this way, altp2m views can be
reused at a later point in time.

In addition, the implementation clears all altp2m entries during the
process of flushing. The same applies to hostp2m entries when the
hostp2m is destroyed. In this way, further domain and p2m allocations
will not unintentionally reuse old p2m mappings.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c| 67 +++
 xen/include/asm-arm/p2m.h | 15 ---
 2 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4a745fd..ae789e6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int 
idx)
 return rc;
 }
 
+/* Reset this p2m table to be empty */
+static void p2m_flush_table(struct p2m_domain *p2m)
+{
+struct page_info *top, *pg;
+mfn_t mfn;
+unsigned int i;
+
+/* Check whether the p2m table has already been flushed before. */
+if ( p2m->root == NULL)
+return;
+
+spin_lock(&p2m->lock);
+
+/*
+ * "Host" p2m tables can have shared entries &c that need a bit more care
+ * when discarding them
+ */
+ASSERT(!p2m_is_hostp2m(p2m));
+
+/* Zap the top level of the trie */
+top = p2m->root;
+
+/* Clear all concatenated first level pages */
+for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+{
+mfn = _mfn(page_to_mfn(top + i));
+clear_domain_page(mfn);
+}
+
+/* Free the rest of the trie pages back to the paging pool */
+while ( (pg = page_list_remove_head(&p2m->pages)) )
+if ( pg != top  )
+{
+/*
+ * Before freeing the individual pages, we clear them to prevent
+ * reusing old table entries in future p2m allocations.
+ */
+mfn = _mfn(page_to_mfn(pg));
+clear_domain_page(mfn);
+free_domheap_page(pg);
+}
+
+page_list_add(top, &p2m->pages);
+
+/* Invalidate VTTBR */
+p2m->vttbr.vttbr = 0;
+p2m->vttbr.vttbr_baddr = INVALID_MFN;
+
+spin_unlock(&p2m->lock);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+unsigned int i;
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+p2m_flush_table(d->arch.altp2m_p2m[i]);
+flush_tlb();
+d->arch.altp2m_vttbr[i] = INVALID_MFN;
+}
+
+altp2m_unlock(d);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8ee78e0..51d784f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
 /* Flush all the alternate p2m's for a domain */
-static inline void p2m_flush_altp2m(struct domain *d)
-{
-/* Not supported on ARM. */
-}
+void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
@@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c|  2 +-
 xen/arch/arm/p2m.c| 32 
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index f4ec5cf..9a536b2 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -149,7 +149,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_switch_p2m:
-rc = -EOPNOTSUPP;
+rc = p2m_switch_domain_altp2m_by_id(d, a.u.view.view);
 break;
 
 case HVMOP_altp2m_set_mem_access:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f82f1ea..8bf23ee 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2232,6 +2232,38 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned 
int idx)
 return rc;
 }
 
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+struct vcpu *v;
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+return rc;
+
+domain_pause_except_self(d);
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+{
+for_each_vcpu( d, v )
+if ( idx != vcpu_altp2m(v).p2midx )
+{
+atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
+vcpu_altp2m(v).p2midx = idx;
+atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+}
+
+rc = 0;
+}
+
+altp2m_unlock(d);
+
+domain_unpause_except_self(d);
+
+return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 255a282..783db5c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -143,6 +143,9 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 /* Make a specific alternate p2m invalid */
 int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Switch alternate p2m for entire domain */
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)  ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c|  2 +-
 xen/arch/arm/p2m.c| 32 
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 005d7c6..f4ec5cf 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -145,7 +145,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_destroy_p2m:
-rc = -EOPNOTSUPP;
+rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
 break;
 
 case HVMOP_altp2m_switch_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6c41b98..f82f1ea 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
 altp2m_unlock(d);
 }
 
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+struct p2m_domain *p2m;
+int rc = -EBUSY;
+
+if ( !idx || idx >= MAX_ALTP2M )
+return rc;
+
+domain_pause_except_self(d);
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+{
+p2m = d->arch.altp2m_p2m[idx];
+
+if ( !_atomic_read(p2m->active_vcpus) )
+{
+p2m_flush_table(p2m);
+flush_tlb();
+d->arch.altp2m_vttbr[idx] = INVALID_MFN;
+rc = 0;
+}
+}
+
+altp2m_unlock(d);
+
+domain_unpause_except_self(d);
+
+return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index c51532a..255a282 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -140,6 +140,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int 
idx);
 /* Find an available alternate p2m and make it valid */
 int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 
+/* Make a specific alternate p2m invalid */
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)  ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.

2016-07-04 Thread Sergej Proskurin
The Xen altp2m subsystem is currently supported only on x86-64 based
architectures. By utilizing ARM's virtualization extensions, we intend
to implement altp2m support for ARM architectures and thus further
extend current Virtual Machine Introspection (VMI) capabilities on ARM.

With this commit, Xen is now able to activate altp2m support on ARM by
means of the command-line argument 'altp2m' (bool).
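
As an illustration (not part of the patch; the other parameters are just
placeholders), the option is appended to the hypervisor command line in
the bootloader configuration, e.g.:

    multiboot /boot/xen.gz dom0_mem=1024M altp2m=1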

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c| 22 
 xen/include/asm-arm/hvm/hvm.h | 47 +++
 2 files changed, 69 insertions(+)
 create mode 100644 xen/include/asm-arm/hvm/hvm.h

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index d999bde..3615036 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,28 @@
 
 #include 
 
+#include 
+
+/* Xen command-line option enabling altp2m */
+static bool_t __initdata opt_altp2m_enabled = 0;
+boolean_param("altp2m", opt_altp2m_enabled);
+
+struct hvm_function_table hvm_funcs __read_mostly = {
+.name = "ARM_HVM",
+};
+
+/* Initcall enabling hvm functionality. */
+static int __init hvm_enable(void)
+{
+if ( opt_altp2m_enabled )
+hvm_funcs.altp2m_supported = 1;
+else
+hvm_funcs.altp2m_supported = 0;
+
+return 0;
+}
+presmp_initcall(hvm_enable);
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
new file mode 100644
index 000..96c455c
--- /dev/null
+++ b/xen/include/asm-arm/hvm/hvm.h
@@ -0,0 +1,47 @@
+/*
+ * include/asm-arm/hvm/hvm.h
+ *
+ * Copyright (c) 2016, Sergej Proskurin 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see .
+ */
+
+#ifndef __ASM_ARM_HVM_HVM_H__
+#define __ASM_ARM_HVM_HVM_H__
+
+struct hvm_function_table {
+char *name;
+
+/* Necessary hardware support for alternate p2m's. */
+bool_t altp2m_supported;
+};
+
+extern struct hvm_function_table hvm_funcs;
+
+/* Returns true if hardware supports alternate p2m's */
+static inline bool_t hvm_altp2m_supported(void)
+{
+return hvm_funcs.altp2m_supported;
+}
+
+#endif /* __ASM_ARM_HVM_HVM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m.

2016-07-04 Thread Sergej Proskurin
This commit adapts get_page_from_gva to consider the currently mapped
altp2m view during address translation. We also change the function's
prototype to take a "struct vcpu *" instead of a "struct domain *". This
change is required because the function indirectly calls gva_to_ma_par,
which needs the MMU to use the currently active p2m mapping. So, if the
caller is interested in a page that must be claimed on behalf of a vCPU
other than current, the altp2m view used by that vCPU must temporarily
be loaded. Therefore, we need to pass the particular vCPU to this
function.
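
For illustration only (not part of this patch): with the new prototype, a
caller that inspects another vCPU's guest memory simply passes that vCPU,
and get_page_from_gva() loads the matching (alt)p2m view internally. The
wrapper name below is hypothetical:

    /* Hypothetical wrapper: read a guest page on behalf of an arbitrary vCPU. */
    static struct page_info *get_vcpu_page_for_read(struct vcpu *v, vaddr_t va)
    {
        /*
         * get_page_from_gva() receives the vCPU, so it can load that vCPU's
         * altp2m VTTBR (or the host p2m VTTBR) before translating va.
         */
        return get_page_from_gva(v, va, GV2M_READ);
    }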

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/guestcopy.c |  6 +++---
 xen/arch/arm/p2m.c   | 19 +--
 xen/arch/arm/traps.c |  2 +-
 xen/include/asm-arm/mm.h |  2 +-
 4 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index ce1c3c3..413125f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const 
void *from,
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
 if ( page == NULL )
 return len;
 
@@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
 if ( page == NULL )
 return len;
 
@@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user 
*from, unsigned le
 unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
 struct page_info *page;
 
-page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
 if ( page == NULL )
 return len;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9c8fefd..23b482f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1829,10 +1829,11 @@ err:
 return page;
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 unsigned long flags)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct domain *d = v->domain;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 struct page_info *page = NULL;
 paddr_t maddr = 0;
 int rc;
@@ -1844,17 +1845,23 @@ struct page_info *get_page_from_gva(struct domain *d, 
vaddr_t va,
 unsigned long irq_flags;
 
 local_irq_save(irq_flags);
-p2m_load_VTTBR(d);
+
+if ( altp2m_active(d) )
+p2m_load_altp2m_VTTBR(v);
+else
+p2m_load_VTTBR(d);
 
 rc = gvirt_to_maddr(va, &maddr, flags);
 
-p2m_load_VTTBR(current->domain);
+if ( altp2m_active(current->domain) )
+p2m_load_altp2m_VTTBR(current);
+else
+p2m_load_VTTBR(current->domain);
+
 local_irq_restore(irq_flags);
 }
 else
-{
 rc = gvirt_to_maddr(va, &maddr, flags);
-}
 
 if ( rc )
 goto err;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44926ca..6995971 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -957,7 +957,7 @@ static void show_guest_stack(struct vcpu *v, struct 
cpu_user_regs *regs)
 return;
 }
 
-page = get_page_from_gva(v->domain, sp, GV2M_READ);
+page = get_page_from_gva(v, sp, GV2M_READ);
 if ( page == NULL )
 {
 printk("Failed to convert stack to physical address\n");
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 68cf203..19eadd2 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -281,7 +281,7 @@ static inline void *page_to_virt(const struct page_info *pg)
 return mfn_to_virt(page_to_mfn(pg));
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 unsigned long flags);
 
 /*
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 96892a5..de97a12 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,7 +51,8 @@ static bool_t p2m_mapping(lpae_t pte)
 
 void p2m_dump_info(struct domain *d)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct vcpu *v = current;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 
 spin_lock(&p2m->lock);
 printk("p2m mappings for domain %d (vmid %d):\n",
@@ -71,7 +72,8 @@ void memory_type_changed(struct domain *d)
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+struct vcpu *v = current;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 
 printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state.

2016-07-04 Thread Sergej Proskurin
This commit adopts the x86 HVMOP_altp2m_get_domain_state implementation.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 1118f22..8e8e0f7 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -93,7 +93,14 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 switch ( a.cmd )
 {
 case HVMOP_altp2m_get_domain_state:
-rc = -EOPNOTSUPP;
+if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+{
+rc = -EINVAL;
+break;
+}
+
+a.u.domain_state.state = altp2m_active(d);
+rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_set_domain_state:
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.

2016-07-04 Thread Sergej Proskurin
This commit moves the altp2m-related code from x86 to ARM. Functions
that are not yet supported notify the caller or trigger a BUG() stating
their absence.

Also, the struct arch_domain is extended with the altp2m_active
attribute, representing the current altp2m activity configuration of the
domain.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c   | 82 
 xen/include/asm-arm/altp2m.h | 22 ++--
 xen/include/asm-arm/domain.h |  3 ++
 3 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 3615036..1118f22 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,7 @@
 
 #include 
 
+#include 
 #include 
 
 /* Xen command-line option enabling altp2m */
@@ -54,6 +55,83 @@ static int __init hvm_enable(void)
 }
 presmp_initcall(hvm_enable);
 
+static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+struct xen_hvm_altp2m_op a;
+struct domain *d = NULL;
+int rc = 0;
+
+if ( !hvm_altp2m_supported() )
+return -EOPNOTSUPP;
+
+if ( copy_from_guest(&a, arg, 1) )
+return -EFAULT;
+
+if ( a.pad1 || a.pad2 ||
+ (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
+ (a.cmd < HVMOP_altp2m_get_domain_state) ||
+ (a.cmd > HVMOP_altp2m_change_gfn) )
+return -EINVAL;
+
+d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
+
+if ( d == NULL )
+return -ESRCH;
+
+if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
+ (a.cmd != HVMOP_altp2m_set_domain_state) &&
+ !d->arch.altp2m_active )
+{
+rc = -EOPNOTSUPP;
+goto out;
+}
+
+if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
+goto out;
+
+switch ( a.cmd )
+{
+case HVMOP_altp2m_get_domain_state:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_set_domain_state:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_vcpu_enable_notify:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_create_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_destroy_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_switch_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_set_mem_access:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_change_gfn:
+rc = -EOPNOTSUPP;
+break;
+}
+
+out:
+rcu_unlock_domain(d);
+
+return rc;
+}
+
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
@@ -102,6 +180,10 @@ long do_hvm_op(unsigned long op, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 rc = -EINVAL;
 break;
 
+case HVMOP_altp2m:
+rc = do_altp2m_op(arg);
+break;
+
 default:
 {
 gdprintk(XENLOG_DEBUG, "HVMOP op=%lu: not implemented\n", op);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a..16ae9d6 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
  * Alternate p2m
  *
  * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin .
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-/* Not implemented on ARM. */
-return 0;
+return d->arch.altp2m_active;
 }
 
 /* Alternate p2m VCPU */
@@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 return 0;
 }
 
+static inline void altp2m_vcpu_initialise(struct vcpu *v)
+{
+/* Not implemented on ARM, should not be reached. */
+BUG();
+}
+
+static inline void altp2m_vcpu_destroy(struct vcpu *v)
+{
+/* Not implemented on ARM, should not be reached. */
+BUG();
+}
+
+static inline void altp2m_vcpu_reset(struct vcpu *v)
+{
+/* Not implemented on ARM, should not be reached. */
+BUG();
+}
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 979f7de..2039f16 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -127,6 +127,9 @@ struct arch_domain
 paddr_t efi_acpi_gpa;
 paddr_t efi_acpi_len;
 #endif
+
+/* altp2m: allow multiple copies of host p2m */
+bool_t altp2m_active;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Sergej Proskurin
This commit makes sure that flushing a domain's TLBs also covers all of
the associated altp2m views. If the TLBs of a domain other than the
currently active one shall be flushed, the implementation loops over the
VTTBRs of the domain's per-vCPU altp2m mappings and flushes the TLBs for
each of them, so that a change to any altp2m mapping takes effect. Note
that the domain whose TLBs are to be flushed is not locked at this
point.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 71 --
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 
+#include 
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint16_t index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return INVALID_MFN;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+if ( is_idle_domain(d) )
+return;
+
+BUG_ON(vttbr == INVALID_MFN);
+WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
 if ( is_idle_domain(d) )
 return;
+
 BUG_ON(!d->arch.vttbr);
 WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
 isb(); /* Ensure update is visible */
 }
 
@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
 WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
 isb();
 
-p2m_load_VTTBR(n->domain);
+if ( altp2m_active(n->domain) )
+p2m_load_altp2m_VTTBR(n);
+else
+p2m_load_VTTBR(n->domain);
+
 isb();
 
 if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
 unsigned long flags = 0;
+struct vcpu *v = NULL;
 
-/* Update the VTTBR if necessary with the domain d. In this case,
- * it's only necessary to flush TLBs on every CPUs with the current VMID
- * (our domain).
+/*
+ * Update the VTTBR if necessary with the domain d. In this case, it is 
only
+ * necessary to flush TLBs on every CPUs with the current VMID (our
+ * domain).
  */
 if ( d != current->domain )
 {
 local_irq_save(flags);
-p2m_load_VTTBR(d);
-}
 
-flush_tlb();
+/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+if ( altp2m_active(d) )
+{
+for_each_vcpu( d, v )
+{
+p2m_load_altp2m_VTTBR(v);
+flush_tlb();
+}
+}
+else
+{
+p2m_load_VTTBR(d);
+flush_tlb();
+}
+}
+else
+flush_tlb();
 
 if ( d != current->domain )
 {
-p2m_load_VTTBR(current->domain);
+/* Make sure altp2m mapping is valid. */
+if ( altp2m_active(current->domain) )
+p2m_load_altp2m_VTTBR(current);
+else
+p2m_load_VTTBR(current->domain);
 local_irq_restore(flags);
 }
 }
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.

2016-07-04 Thread Sergej Proskurin
This commit adds the function p2m_altp2m_lazy_copy, which implements the
altp2m paging mechanism: on 2nd stage instruction or data access
violations, it lazily copies the hostp2m's mapping into the currently
active altp2m view. Every altp2m violation generates a vm_event.
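
For orientation only: a rough, hypothetical sketch of how the stage-2
abort path can make use of the new helper (the actual traps.c hunk is
part of this patch; the handler name and control flow below are
illustrative):

    /* Illustrative only: shape of a stage-2 fault hook using the new helper. */
    static bool_t handle_stage2_fault(struct vcpu *v, paddr_t gpa,
                                      unsigned long gva, struct npfec npfec)
    {
        struct p2m_domain *ap2m = NULL;

        /* Try to lazily populate the active altp2m view from the hostp2m. */
        if ( altp2m_active(v->domain) &&
             p2m_altp2m_lazy_copy(v, gpa, gva, npfec, &ap2m) )
            return true;   /* Mapping copied; the guest retries the access. */

        /* Otherwise fall back to the usual mem_access handling. */
        return p2m_mem_access_check(gpa, gva, npfec);
    }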

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c   | 130 ++-
 xen/arch/arm/traps.c | 102 +++--
 xen/include/asm-arm/altp2m.h |   4 +-
 xen/include/asm-arm/p2m.h|  17 --
 4 files changed, 224 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 395ea0f..96892a5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 
+#include 
 #include 
 
 #ifdef CONFIG_ARM_64
@@ -1955,6 +1956,12 @@ void __init setup_virt_paging(void)
 smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+if ( altp2m_active(v->domain) )
+p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
 int rc;
@@ -1962,13 +1969,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, 
const struct npfec npfec)
 xenmem_access_t xma;
 vm_event_request_t *req;
 struct vcpu *v = current;
-struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+struct domain *d = v->domain;
+struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : 
p2m_get_hostp2m(d);
 
 /* Mem_access is not in use. */
 if ( !p2m->mem_access_enabled )
 return true;
 
-rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+rc = p2m_get_mem_access(d, _gfn(paddr_to_pfn(gpa)), &xma);
 if ( rc )
 return true;
 
@@ -2074,6 +2082,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, 
const struct npfec npfec)
 req->u.mem_access.flags |= npfec.insn_fetch ? MEM_ACCESS_X : 0;
 req->vcpu_id = v->vcpu_id;
 
+vm_event_fill_regs(req);
+
+if ( altp2m_active(v->domain) )
+{
+req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
+req->altp2m_idx = vcpu_altp2m(v).p2midx;
+}
+
 mem_access_send_req(v->domain, req);
 xfree(req);
 }
@@ -2356,6 +2372,116 @@ struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 return v->domain->arch.altp2m_p2m[index];
 }
 
+bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+struct domain *d = v->domain;
+bool_t rc = 0;
+
+if ( idx >= MAX_ALTP2M )
+return rc;
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+{
+if ( idx != vcpu_altp2m(v).p2midx )
+{
+atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
+vcpu_altp2m(v).p2midx = idx;
+atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+}
+rc = 1;
+}
+
+altp2m_unlock(d);
+
+return rc;
+}
+
+/*
+ * If the fault is for a not present entry:
+ * if the entry in the host p2m has a valid mfn, copy it and retry
+ * else indicate that outer handler should handle fault
+ *
+ * If the fault is for a present entry:
+ * indicate that outer handler should handle fault
+ */
+bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
+unsigned long gva, struct npfec npfec,
+struct p2m_domain **ap2m)
+{
+struct domain *d = v->domain;
+struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
+p2m_type_t p2mt;
+xenmem_access_t xma;
+paddr_t maddr, mask = 0;
+gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+unsigned int level;
+unsigned long mattr;
+int rc = 0;
+
+static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+ACCESS(n),
+ACCESS(r),
+ACCESS(w),
+ACCESS(rw),
+ACCESS(x),
+ACCESS(rx),
+ACCESS(wx),
+ACCESS(rwx),
+ACCESS(rx2rw),
+ACCESS(n2rwx),
+#undef ACCESS
+};
+
+*ap2m = p2m_get_altp2m(v);
+if ( *ap2m == NULL)
+return 0;
+
+/* Check if entry is part of the altp2m view */
+spin_lock(&(*ap2m)->lock);
+maddr = __p2m_lookup(*ap2m, gpa, NULL);
+spin_unlock(&(*ap2m)->lock);
+if ( maddr != INVALID_PADDR )
+return 0;
+
+/* Check if entry is part of the host p2m view */
+spin_lock(&hp2m->lock);
+maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+if ( maddr == INVALID_PADDR )
+goto out;
+
+rc = __p2m_get_mem_access(hp2m, gfn, &xma);
+if ( rc )
+goto out;
+
+rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+if ( rc )
+goto out;
+spin_unlock(&hp2m->lock);
+
+mask = level_masks[level];
+
+rc = apply_p2m_changes(d, *ap2m, INSERT,
+   

[Xen-devel] [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Razvan Cojocaru 
Cc: Tamas K Lengyel 
Cc: Ian Jackson 
Cc: Wei Liu 
---
 tools/tests/xen-access/xen-access.c | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c 
b/tools/tests/xen-access/xen-access.c
index f26e723..ef21d0d 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -333,8 +333,9 @@ void usage(char* progname)
 {
 fprintf(stderr, "Usage: %s [-m]  write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec");
+fprintf(stderr, "|breakpoint");
 #endif
+fprintf(stderr, "|altp2m_write|altp2m_exec");
 fprintf(stderr,
 "\n"
 "Logs first page writes, execs, or breakpoint traps that occur on 
the domain.\n"
@@ -402,6 +403,7 @@ int main(int argc, char *argv[])
 {
 breakpoint = 1;
 }
+#endif
 else if ( !strcmp(argv[0], "altp2m_write") )
 {
 default_access = XENMEM_access_rx;
@@ -412,7 +414,6 @@ int main(int argc, char *argv[])
 default_access = XENMEM_access_rw;
 altp2m = 1;
 }
-#endif
 else
 {
 usage(argv[0]);
@@ -485,12 +486,14 @@ int main(int argc, char *argv[])
 goto exit;
 }
 
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep( xch, domain_id, 1 );
 if ( rc < 0 )
 {
 ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
 goto exit;
 }
+#endif
 }
 
 if ( !altp2m )
@@ -540,7 +543,9 @@ int main(int argc, char *argv[])
 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
 } else {
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 
~0ull, 0);
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 
START_PFN,
@@ -695,9 +700,11 @@ int main(int argc, char *argv[])
 exit:
 if ( altp2m )
 {
+#if defined(__i386__) || defined(__x86_64__)
 uint32_t vcpu_id;
        for ( vcpu_id = 0; vcpu_id
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.

2016-07-04 Thread Sergej Proskurin
The current implementation allows setting the parameter HVM_PARAM_ALTP2M,
which enables further usage of altp2m on ARM. For this, we define an
additional altp2m field for PV domains in the libxl_domain_build_info
struct. This field can be set, on ARM systems only, through the "altp2m"
switch in the domain's configuration file (i.e. altp2m=1).
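
As an illustration (not part of the patch; names and values are
placeholders), a guest configuration enabling altp2m would then contain
something like:

    # guest.cfg: ARM guest with altp2m enabled
    name   = "armguest"
    kernel = "/path/to/guest-kernel"
    memory = 512
    vcpus  = 2
    altp2m = 1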

Signed-off-by: Sergej Proskurin 
---
Cc: Ian Jackson 
Cc: Wei Liu 
---
 tools/libxl/libxl_create.c  |  1 +
 tools/libxl/libxl_dom.c | 14 ++
 tools/libxl/libxl_types.idl |  1 +
 tools/libxl/xl_cmdimpl.c|  5 +
 4 files changed, 21 insertions(+)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b99472..40b5f61 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 b_info->cmdline = b_info->u.pv.cmdline;
 b_info->u.pv.cmdline = NULL;
 }
+libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
 break;
 default:
 LOG(ERROR, "invalid domain type %s in create info",
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index ec29060..ab023a2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -277,6 +277,16 @@ err:
 }
 #endif
 
+#if defined(__arm__) || defined(__aarch64__)
+static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
+   libxl_domain_build_info *const info)
+{
+if ( libxl_defbool_val(info->u.pv.altp2m) )
+xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
+ libxl_defbool_val(info->u.pv.altp2m));
+}
+#endif
+
 static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
 libxl_domain_build_info *const info)
 {
@@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
 return rc;
 #endif
 }
+#if defined(__arm__) || defined(__aarch64__)
+else /* info->type == LIBXL_DOMAIN_TYPE_PV */
+pv_set_conf_params(ctx->xch, domid, info);
+#endif
 
 rc = libxl__arch_domain_create(gc, d_config, domid);
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index ef614be..0a164f9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
   ("features", string, {'const': True}),
   # Use host's E820 for PCI passthrough.
   ("e820_host", libxl_defbool),
+  ("altp2m", libxl_defbool),
   ])),
  ("invalid", None),
  ], keyvar_init_val = "LIBXL_DOMAIN_TYPE_INVALID")),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 6459eec..12c6e48 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1718,6 +1718,11 @@ static void parse_config_data(const char *config_source,
 exit(1);
 }
 
+#if defined(__arm__) || defined(__aarch64__)
+/* Enable altp2m for PV guests solely on ARM */
+xlu_cfg_get_defbool(config, "altp2m", &b_info->u.pv.altp2m, 0);
+#endif
+
 break;
 }
 default:
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c|  3 ++-
 xen/arch/arm/p2m.c| 23 +++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index cb90a55..005d7c6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -140,7 +140,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_create_p2m:
-rc = -EOPNOTSUPP;
+if ( !(rc = p2m_init_next_altp2m(d, &a.u.view.view)) )
+rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ae789e6..6c41b98 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,29 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int 
idx)
 return rc;
 }
 
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
+{
+int rc = -EINVAL;
+unsigned int i;
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_vttbr[i] != INVALID_MFN )
+continue;
+
+p2m_init_altp2m_helper(d, i);
+*idx = i;
+rc = 0;
+
+break;
+}
+
+altp2m_unlock(d);
+return rc;
+}
+
 /* Reset this p2m table to be empty */
 static void p2m_flush_table(struct p2m_domain *p2m)
 {
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 51d784f..c51532a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -137,6 +137,9 @@ void p2m_flush_altp2m(struct domain *d);
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Find an available alternate p2m and make it valid */
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)  ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM.

2016-07-04 Thread Sergej Proskurin
Hello everyone,

Since this is my first contribution to the Xen development mailing list, I
would like to briefly introduce myself. My name is Sergej Proskurin. I am a
PhD student at the Technical University of Munich. My research areas are
Virtual Machine Introspection, Hypervisor/OS Security, and Reverse Engineering.

The following patch series can be found on Github[0] and is part of my
contribution to this year's Google Summer of Code (GSoC)[1]. My project is
managed by the organization The Honeynet Project. As part of GSoC, I am being
supervised by the Xen developer Tamas K. Lengyel , George
D. Webster, and Steven Maresca.

In this patch series, we provide an implementation of the altp2m subsystem for
ARM. Our implementation is based on the altp2m subsystem for x86, providing
additional --alternate-- views on the guest's physical memory by means of the
ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
extend the p2m subsystem. We extend libxl to support altp2m on ARM and modify
xen-access to test the suggested functionality.

[0] https://github.com/sergej-proskurin/xen (Branch arm-altp2m-patch)
[1] https://summerofcode.withgoogle.com/projects/#4970052843470848

Sergej Proskurin (18):
  arm/altp2m: Add cmd-line support for altp2m on ARM.
  arm/altp2m: Add first altp2m HVMOP stubs.
  arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  arm/altp2m: Add altp2m init/teardown routines.
  arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  arm/altp2m: Add a(p2m) table flushing routines.
  arm/altp2m: Add HVMOP_altp2m_create_p2m.
  arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  arm/altp2m: Renamed and extended p2m_alloc_table.
  arm/altp2m: Make flush_tlb_domain ready for altp2m.
  arm/altp2m: Cosmetic fixes - function prototypes.
  arm/altp2m: Make get_page_from_gva ready for altp2m.
  arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  arm/altp2m: Add altp2m paging mechanism.
  arm/altp2m: Extended libxl to activate altp2m on ARM.
  arm/altp2m: Adjust debug information to altp2m.
  arm/altp2m: Extend xen-access for altp2m on ARM.

 tools/libxl/libxl_create.c  |   1 +
 tools/libxl/libxl_dom.c |  14 +
 tools/libxl/libxl_types.idl |   1 +
 tools/libxl/xl_cmdimpl.c|   5 +
 tools/tests/xen-access/xen-access.c |  11 +-
 xen/arch/arm/Makefile   |   1 +
 xen/arch/arm/altp2m.c   |  68 +++
 xen/arch/arm/domain.c   |   2 +-
 xen/arch/arm/guestcopy.c|   6 +-
 xen/arch/arm/hvm.c  | 145 ++
 xen/arch/arm/p2m.c  | 930 
 xen/arch/arm/traps.c| 104 +++-
 xen/include/asm-arm/altp2m.h|  12 +-
 xen/include/asm-arm/domain.h|  18 +
 xen/include/asm-arm/hvm/hvm.h   |  59 +++
 xen/include/asm-arm/mm.h|   2 +-
 xen/include/asm-arm/p2m.h   |  72 ++-
 17 files changed, 1330 insertions(+), 121 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c
 create mode 100644 xen/include/asm-arm/hvm/hvm.h

-- 
2.8.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.

2016-07-04 Thread Sergej Proskurin
HVMOP_altp2m_set_domain_state allows activating altp2m on a specific
domain. This commit adopts the x86 HVMOP_altp2m_set_domain_state
implementation. The function p2m_flush_altp2m is currently implemented
in the form of a stub.
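
For context (not part of the patch): with this op wired up, a privileged
tool can toggle altp2m at run time through the existing libxc wrapper, as
xen-access already does for teardown. A minimal sketch, assuming the
usual xc_altp2m_set_domain_state() interface:

    #include <stdbool.h>
    #include <stdint.h>
    #include <xenctrl.h>

    /* Sketch: enable (or disable) altp2m for a domain via libxc. */
    static int set_altp2m(xc_interface *xch, uint32_t domid, bool enable)
    {
        /* Requires HVM_PARAM_ALTP2M to have been set for the domain. */
        return xc_altp2m_set_domain_state(xch, domid, enable);
    }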

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/altp2m.c| 68 
 xen/arch/arm/hvm.c   | 30 ++-
 xen/arch/arm/p2m.c   | 46 ++
 xen/include/asm-arm/altp2m.h | 20 ++---
 xen/include/asm-arm/domain.h |  9 ++
 xen/include/asm-arm/p2m.h| 19 +
 7 files changed, 175 insertions(+), 18 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9e38da3..abd6f1a 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -41,6 +41,7 @@ obj-y += decode.o
 obj-y += processor.o
 obj-y += smc.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += altp2m.o
 
 #obj-bin-y += o
 
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 000..1d2505f
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,68 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see .
+ */
+
+#include 
+#include 
+#include 
+
+void altp2m_vcpu_reset(struct vcpu *v)
+{
+struct altp2mvcpu *av = &vcpu_altp2m(v);
+
+av->p2midx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialise(struct vcpu *v)
+{
+if ( v != current )
+vcpu_pause(v);
+
+altp2m_vcpu_reset(v);
+vcpu_altp2m(v).p2midx = 0;
+atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+struct p2m_domain *p2m;
+
+if ( v != current )
+vcpu_pause(v);
+
+if ( (p2m = p2m_get_altp2m(v)) )
+atomic_dec(&p2m->active_vcpus);
+
+altp2m_vcpu_reset(v);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8e8e0f7..cb90a55 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -104,8 +104,36 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_set_domain_state:
-rc = -EOPNOTSUPP;
+{
+struct vcpu *v;
+bool_t ostate;
+
+if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+{
+rc = -EINVAL;
+break;
+}
+
+ostate = d->arch.altp2m_active;
+d->arch.altp2m_active = !!a.u.domain_state.state;
+
+/* If the alternate p2m state has changed, handle appropriately */
+if ( d->arch.altp2m_active != ostate &&
+ (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
+{
+for_each_vcpu( d, v )
+{
+if ( !ostate )
+altp2m_vcpu_initialise(v);
+else
+altp2m_vcpu_destroy(v);
+}
+
+if ( ostate )
+p2m_flush_altp2m(d);
+}
 break;
+}
 
 case HVMOP_altp2m_vcpu_enable_notify:
 rc = -EOPNOTSUPP;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e72ca7a..4a745fd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2064,6 +2064,52 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
 return ret;
 }
 
+struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
+{
+unsigned int index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return NULL;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return v->domain->arch.altp2m_p2m[index];
+}
+
+static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
+{
+struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+struct vttbr_data *vttbr = &p2m->vttbr;
+
+p2m->lowest_mapped_gfn = INVALID_GFN;
+p2m->max_mapped_gfn = 0;
+
+vttbr->vttbr_baddr = page_to_maddr(p2m->root);
+vttbr->vttbr_vmid = p2m->vmid;
+
+d->arch.altp2m_vttbr[i] = vttbr->vttbr;
+}
+
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+re

[Xen-devel] [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.

2016-07-04 Thread Sergej Proskurin
The p2m initialization now invokes initialization routines responsible
for the allocation and initialization of altp2m structures. The same
applies to the teardown routines. The functionality has been adopted
from the x86 altp2m implementation.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c| 166 --
 xen/include/asm-arm/domain.h  |   6 ++
 xen/include/asm-arm/hvm/hvm.h |  12 +++
 xen/include/asm-arm/p2m.h |  20 +
 4 files changed, 198 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index aa4e774..e72ca7a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1400,19 +1400,103 @@ static void p2m_free_vmid(struct domain *d)
 spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = &d->arch.p2m;
+int ret = 0;
+
+spin_lock_init(&p2m->lock);
+INIT_PAGE_LIST_HEAD(&p2m->pages);
+
+spin_lock(&p2m->lock);
+
+p2m->domain = d;
+p2m->access_required = false;
+p2m->mem_access_enabled = false;
+p2m->default_access = p2m_access_rwx;
+p2m->p2m_class = p2m_host;
+p2m->root = NULL;
+
+/* Adopt VMID of the associated domain */
+p2m->vmid = d->arch.p2m.vmid;
+p2m->vttbr.vttbr = 0;
+p2m->vttbr.vttbr_vmid = p2m->vmid;
+
+p2m->max_mapped_gfn = 0;
+p2m->lowest_mapped_gfn = ULONG_MAX;
+radix_tree_init(&p2m->mem_access_settings);
+
+spin_unlock(&p2m->lock);
+
+return ret;
+}
+
+static void p2m_free_one(struct p2m_domain *p2m)
+{
+mfn_t mfn;
+unsigned int i;
 struct page_info *pg;
 
 spin_lock(&p2m->lock);
 
 while ( (pg = page_list_remove_head(&p2m->pages)) )
-free_domheap_page(pg);
+if ( pg != p2m->root )
+free_domheap_page(pg);
+
+for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+{
+mfn = _mfn(page_to_mfn(p2m->root) + i);
+clear_domain_page(mfn);
+}
+free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+p2m->root = NULL;
+
+radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
+spin_unlock(&p2m->lock);
+
+xfree(p2m);
+}
+
+static struct p2m_domain *p2m_init_one(struct domain *d)
+{
+struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+if ( !p2m )
+return NULL;
+
+if ( p2m_initialise(d, p2m) )
+goto free_p2m;
+
+return p2m;
+
+free_p2m:
+xfree(p2m);
+return NULL;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+struct page_info *pg = NULL;
+mfn_t mfn;
+unsigned int i;
+
+spin_lock(&p2m->lock);
 
-if ( p2m->root )
-free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+while ( (pg = page_list_remove_head(&p2m->pages)) )
+if ( pg != p2m->root )
+{
+mfn = _mfn(page_to_mfn(pg));
+clear_domain_page(mfn);
+free_domheap_page(pg);
+}
 
+for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+{
+mfn = _mfn(page_to_mfn(p2m->root) + i);
+clear_domain_page(mfn);
+}
+free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
 p2m->root = NULL;
 
 p2m_free_vmid(d);
@@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
 spin_unlock(&p2m->lock);
 }
 
-int p2m_init(struct domain *d)
+static int p2m_init_hostp2m(struct domain *d)
 {
 struct p2m_domain *p2m = &d->arch.p2m;
 int rc = 0;
@@ -1437,6 +1521,8 @@ int p2m_init(struct domain *d)
 if ( rc != 0 )
 goto err;
 
+p2m->vttbr.vttbr_vmid = p2m->vmid;
+
 d->arch.vttbr = 0;
 
 p2m->root = NULL;
@@ -1454,6 +1540,74 @@ err:
 return rc;
 }
 
+static void p2m_teardown_altp2m(struct domain *d)
+{
+unsigned int i;
+struct p2m_domain *p2m;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( !d->arch.altp2m_p2m[i] )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+p2m_free_one(p2m);
+d->arch.altp2m_vttbr[i] = INVALID_MFN;
+d->arch.altp2m_p2m[i] = NULL;
+}
+
+d->arch.altp2m_active = false;
+}
+
+static int p2m_init_altp2m(struct domain *d)
+{
+unsigned int i;
+struct p2m_domain *p2m;
+
+spin_lock_init(&d->arch.altp2m_lock);
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+d->arch.altp2m_vttbr[i] = INVALID_MFN;
+d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+if ( p2m == NULL )
+{
+p2m_teardown_altp2m(d);
+return -ENOMEM;
+}
+p2m->p2m_class = p2m_alternate;
+p2m->access_required = 1;
+_atomic_set(&p2m->active_vcpus, 0);
+}
+
+return 0;
+}
+
+void p2m_teardown(struct domain *d)
+{
+/*
+ * We must teardown altp2m unconditionally because
+ * we initialise it unconditionally.
+ */
+p2m_teardown_altp2m(d);
+
+p2m_teardown_hostp2m(d);
+}
+
+int p2m_init(stru

[Xen-devel] [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.

2016-07-04 Thread Sergej Proskurin
The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the functions
p2m_flush_altp2m and p2m_flush_table, which allow flushing of all or
individual altp2m views without destroying the entire table. In this
way, altp2m views can be reused at a later point in time.

In addition, the implementation clears all altp2m entries during the
process of flushing. The same applies to hostp2m entries when the
hostp2m is destroyed. In this way, further domain and p2m allocations
will not unintentionally reuse old p2m mappings.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c| 67 +++
 xen/include/asm-arm/p2m.h | 15 ---
 2 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4a745fd..ae789e6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 return rc;
 }
 
+/* Reset this p2m table to be empty */
+static void p2m_flush_table(struct p2m_domain *p2m)
+{
+struct page_info *top, *pg;
+mfn_t mfn;
+unsigned int i;
+
+/* Check whether the p2m table has already been flushed before. */
+if ( p2m->root == NULL)
+return;
+
+spin_lock(&p2m->lock);
+
+/*
+ * "Host" p2m tables can have shared entries &c that need a bit more care
+ * when discarding them
+ */
+ASSERT(!p2m_is_hostp2m(p2m));
+
+/* Zap the top level of the trie */
+top = p2m->root;
+
+/* Clear all concatenated first level pages */
+for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+{
+mfn = _mfn(page_to_mfn(top + i));
+clear_domain_page(mfn);
+}
+
+/* Free the rest of the trie pages back to the paging pool */
+while ( (pg = page_list_remove_head(&p2m->pages)) )
+if ( pg != top  )
+{
+/*
+ * Before freeing the individual pages, we clear them to prevent
+ * reusing old table entries in future p2m allocations.
+ */
+mfn = _mfn(page_to_mfn(pg));
+clear_domain_page(mfn);
+free_domheap_page(pg);
+}
+
+page_list_add(top, &p2m->pages);
+
+/* Invalidate VTTBR */
+p2m->vttbr.vttbr = 0;
+p2m->vttbr.vttbr_baddr = INVALID_MFN;
+
+spin_unlock(&p2m->lock);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+unsigned int i;
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+p2m_flush_table(d->arch.altp2m_p2m[i]);
+flush_tlb();
+d->arch.altp2m_vttbr[i] = INVALID_MFN;
+}
+
+altp2m_unlock(d);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8ee78e0..51d784f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
 /* Flush all the alternate p2m's for a domain */
-static inline void p2m_flush_altp2m(struct domain *d)
-{
-/* Not supported on ARM. */
-}
+void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
@@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-- 
2.8.3




[Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Sergej Proskurin
This commit makes sure that flushing the TLB of a domain also covers
all of its associated altp2m views. Therefore, when the TLBs of a
domain other than the currently active one are to be flushed, the
implementation loops over the VTTBRs of the domain's altp2m mappings
per vCPU and flushes the TLBs for each of them. This way, a change to
any of the altp2m mappings is taken into account. Note that the domain
whose TLBs are to be flushed is not locked at this point.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 71 --
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 
+#include 
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint16_t index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return INVALID_MFN;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+if ( is_idle_domain(d) )
+return;
+
+BUG_ON(vttbr == INVALID_MFN);
+WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
 if ( is_idle_domain(d) )
 return;
+
 BUG_ON(!d->arch.vttbr);
 WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
 isb(); /* Ensure update is visible */
 }
 
@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
 WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
 isb();
 
-p2m_load_VTTBR(n->domain);
+if ( altp2m_active(n->domain) )
+p2m_load_altp2m_VTTBR(n);
+else
+p2m_load_VTTBR(n->domain);
+
 isb();
 
 if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
 unsigned long flags = 0;
+struct vcpu *v = NULL;
 
-/* Update the VTTBR if necessary with the domain d. In this case,
- * it's only necessary to flush TLBs on every CPUs with the current VMID
- * (our domain).
+/*
+ * Update the VTTBR if necessary with the domain d. In this case, it is only
+ * necessary to flush TLBs on every CPUs with the current VMID (our
+ * domain).
  */
 if ( d != current->domain )
 {
 local_irq_save(flags);
-p2m_load_VTTBR(d);
-}
 
-flush_tlb();
+/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+if ( altp2m_active(d) )
+{
+for_each_vcpu( d, v )
+{
+p2m_load_altp2m_VTTBR(v);
+flush_tlb();
+}
+}
+else
+{
+p2m_load_VTTBR(d);
+flush_tlb();
+}
+}
+else
+flush_tlb();
 
 if ( d != current->domain )
 {
-p2m_load_VTTBR(current->domain);
+/* Make sure altp2m mapping is valid. */
+if ( altp2m_active(current->domain) )
+p2m_load_altp2m_VTTBR(current);
+else
+p2m_load_VTTBR(current->domain);
 local_irq_restore(flags);
 }
 }
-- 
2.8.3


[Xen-devel] [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m.

2016-07-04 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/hvm.c|  3 ++-
 xen/arch/arm/p2m.c| 23 +++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index cb90a55..005d7c6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -140,7 +140,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_create_p2m:
-rc = -EOPNOTSUPP;
+if ( !(rc = p2m_init_next_altp2m(d, &a.u.view.view)) )
+rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ae789e6..6c41b98 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,29 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 return rc;
 }
 
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
+{
+int rc = -EINVAL;
+unsigned int i;
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_vttbr[i] != INVALID_MFN )
+continue;
+
+p2m_init_altp2m_helper(d, i);
+*idx = i;
+rc = 0;
+
+break;
+}
+
+altp2m_unlock(d);
+return rc;
+}
+
 /* Reset this p2m table to be empty */
 static void p2m_flush_table(struct p2m_domain *p2m)
 {
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 51d784f..c51532a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -137,6 +137,9 @@ void p2m_flush_altp2m(struct domain *d);
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Find an available alternate p2m and make it valid */
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)  ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3




[Xen-devel] [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.

2016-07-04 Thread Sergej Proskurin
The current implementation allows setting the parameter
HVM_PARAM_ALTP2M. This parameter enables further usage of altp2m on
ARM. For this, we define an additional altp2m field for PV domains as
part of the libxl_domain_build_info struct. This field can be set only
on ARM systems through the "altp2m" switch in the domain's
configuration file (i.e. set altp2m=1).

Signed-off-by: Sergej Proskurin 
---
Cc: Ian Jackson 
Cc: Wei Liu 
---
 tools/libxl/libxl_create.c  |  1 +
 tools/libxl/libxl_dom.c | 14 ++
 tools/libxl/libxl_types.idl |  1 +
 tools/libxl/xl_cmdimpl.c|  5 +
 4 files changed, 21 insertions(+)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b99472..40b5f61 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 b_info->cmdline = b_info->u.pv.cmdline;
 b_info->u.pv.cmdline = NULL;
 }
+libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
 break;
 default:
 LOG(ERROR, "invalid domain type %s in create info",
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index ec29060..ab023a2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -277,6 +277,16 @@ err:
 }
 #endif
 
+#if defined(__arm__) || defined(__aarch64__)
+static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
+   libxl_domain_build_info *const info)
+{
+if ( libxl_defbool_val(info->u.pv.altp2m) )
+xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
+ libxl_defbool_val(info->u.pv.altp2m));
+}
+#endif
+
 static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
 libxl_domain_build_info *const info)
 {
@@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
 return rc;
 #endif
 }
+#if defined(__arm__) || defined(__aarch64__)
+else /* info->type == LIBXL_DOMAIN_TYPE_PV */
+pv_set_conf_params(ctx->xch, domid, info);
+#endif
 
 rc = libxl__arch_domain_create(gc, d_config, domid);
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index ef614be..0a164f9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
   ("features", string, {'const': True}),
   # Use host's E820 for PCI passthrough.
   ("e820_host", libxl_defbool),
+  ("altp2m", libxl_defbool),
   ])),
  ("invalid", None),
  ], keyvar_init_val = "LIBXL_DOMAIN_TYPE_INVALID")),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 6459eec..12c6e48 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1718,6 +1718,11 @@ static void parse_config_data(const char *config_source,
 exit(1);
 }
 
+#if defined(__arm__) || defined(__aarch64__)
+/* Enable altp2m for PV guests solely on ARM */
+xlu_cfg_get_defbool(config, "altp2m", &b_info->u.pv.altp2m, 0);
+#endif
+
 break;
 }
 default:
-- 
2.8.3




[Xen-devel] [PATCH v2 0/3] x86: instruction emulator adjustments

2016-07-04 Thread Jan Beulich
1: use consistent exit mechanism
2: drop pointless and add useful default cases
3: fold local variables

Signed-off-by: Jan Beulich 





[Xen-devel] [PATCH v2 1/3] use consistent exit mechanism

2016-07-04 Thread Jan Beulich
Similar code should use similar exit mechanisms (return vs goto).

Signed-off-by: Jan Beulich 
---
v2: Use "goto" instead of "return", making things consistent right away
instead of only after a (series of) future patch(es) converting
more code to "return".

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2106,7 +2106,7 @@ x86_emulate(
 generate_exception_if(mode_64bit() && !ext, EXC_UD, -1);
 fail_if(ops->read_segment == NULL);
 if ( (rc = ops->read_segment(src.val, &reg, ctxt)) != 0 )
-return rc;
+goto done;
 /* 64-bit mode: PUSH defaults to a 64-bit operand. */
 if ( mode_64bit() && (op_bytes == 4) )
 op_bytes = 8;
@@ -2125,10 +2125,9 @@ x86_emulate(
 if ( mode_64bit() && (op_bytes == 4) )
 op_bytes = 8;
 if ( (rc = read_ulong(x86_seg_ss, sp_post_inc(op_bytes),
-  &dst.val, op_bytes, ctxt, ops)) != 0 )
+  &dst.val, op_bytes, ctxt, ops)) != 0 ||
+ (rc = load_seg(src.val, dst.val, 0, NULL, ctxt, ops)) != 0 )
 goto done;
-if ( (rc = load_seg(src.val, dst.val, 0, NULL, ctxt, ops)) != 0 )
-return rc;
 break;
 
 case 0x0e: /* push %%cs */





[Xen-devel] [PATCH v2 3/3] x86emul: fold local variables

2016-07-04 Thread Jan Beulich
Declare some variables so they can be used by multiple pieces of code,
allowing some curly braces to be dropped (which don't align nicely
when used inside of case-labeled statements).

Signed-off-by: Jan Beulich 
Reviewed-by: Andrew Cooper 

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3561,6 +3561,8 @@ x86_emulate(
 case 0xf6 ... 0xf7: /* Grp3 */
 switch ( modrm_reg & 7 )
 {
+unsigned long u[2], v;
+
 case 0 ... 1: /* test */
 goto test;
 case 2: /* not */
@@ -3599,15 +3601,15 @@ x86_emulate(
 _regs.edx = (uint32_t)(dst.val >> 32);
 break;
 #endif
-default: {
-unsigned long m[2] = { src.val, dst.val };
-if ( mul_dbl(m) )
+default:
+u[0] = src.val;
+u[1] = dst.val;
+if ( mul_dbl(u) )
 _regs.eflags |= EFLG_OF|EFLG_CF;
-_regs.edx = m[1];
-dst.val  = m[0];
+_regs.edx = u[1];
+dst.val  = u[0];
 break;
 }
-}
 break;
 case 5: /* imul */
 dst.type = OP_REG;
@@ -3643,20 +3645,18 @@ x86_emulate(
 _regs.edx = (uint32_t)(dst.val >> 32);
 break;
 #endif
-default: {
-unsigned long m[2] = { src.val, dst.val };
-if ( imul_dbl(m) )
+default:
+u[0] = src.val;
+u[1] = dst.val;
+if ( imul_dbl(u) )
 _regs.eflags |= EFLG_OF|EFLG_CF;
 if ( b > 0x6b )
-_regs.edx = m[1];
-dst.val  = m[0];
+_regs.edx = u[1];
+dst.val  = u[0];
 break;
 }
-}
 break;
-case 6: /* div */ {
-unsigned long u[2], v;
-
+case 6: /* div */
 dst.type = OP_REG;
 dst.reg  = (unsigned long *)&_regs.eax;
 switch ( dst.bytes = src.bytes )
@@ -3703,10 +3703,7 @@ x86_emulate(
 break;
 }
 break;
-}
-case 7: /* idiv */ {
-unsigned long u[2], v;
-
+case 7: /* idiv */
 dst.type = OP_REG;
 dst.reg  = (unsigned long *)&_regs.eax;
 switch ( dst.bytes = src.bytes )
@@ -3754,7 +3751,6 @@ x86_emulate(
 }
 break;
 }
-}
 break;
 
 case 0xf8: /* clc */




Re: [Xen-devel] [PATCH] libxl/netbsd: check num_exec in hotplug function

2016-07-04 Thread John Nemeth
On Jul 4, 12:23pm, Wei Liu wrote:
}
} Also CC Roger since he authored the original code.
} 
} Feel free to correct me misunderstanding on this issue.
} 
} On Mon, Jul 04, 2016 at 12:09:30PM +0100, Wei Liu wrote:
} > Add back xen-devel. Please reply to all recipients in the future.

 Oops, I usually do (*checks header*; yep, good this time).

} > On Mon, Jul 04, 2016 at 01:11:04AM -0700, John Nemeth wrote:
} > > On Jul 2, 12:35pm, Wei Liu wrote:
} > > }
} > > } This basically replicates the same logic in libxl_linux.c. Without this
} > > } libxl will loop indefinitely trying to execute hotplug script.
} > > 
} > >  One minor change required (see below).
} > > 
} > > } Reported-by: John Nemeth 
} > > } Signed-off-by: Wei Liu 
} > > } ---
} > > }  tools/libxl/libxl_netbsd.c | 18 ++
} > > }  1 file changed, 18 insertions(+)
} > > } 
} > > } diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
} > > } index 096c057..92d3c89 100644
} > > } --- a/tools/libxl/libxl_netbsd.c
} > > } +++ b/tools/libxl/libxl_netbsd.c
} > > } @@ -68,7 +68,25 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, 
libxl__device *dev,
} > > }  
} > > }  switch (dev->backend_kind) {
} > > }  case LIBXL__DEVICE_KIND_VBD:
} > > } +if (num_exec != 0) {
} > > } +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
num_exec);
} > > } +rc = 0;
} > > } +goto out;
} > > } +}
} > > } +rc = libxl__hotplug(gc, dev, args, action);
} > > } +if (!rc) rc = 1;
} > > } +break;
} > > }  case LIBXL__DEVICE_KIND_VIF:
} > > } +/*
} > > } + * If domain has a stubdom we don't have to execute hotplug 
scripts
} > > } + * for emulated interfaces
} > > } + */
} > > } +if ((num_exec > 1) ||
} > > 
} > >  The function is called with num_exec set to 0 and 1, so this
} > > should be:
} > > 
} > >if ((num_exec != 0) ||
} > 
} > AIUI this is related to how network is setup because we would need to
} > hotplug both the emulated nic in QEMU and the PV nic. Is this line
} > causing problem for you?
} > 
} > Wei.

 Yes, when the script is called the second time, it fails when
trying to add the interface to the bridge and returns an error.
This results in xl aborting the domU creation process and tearing
it down.  If the script is supposed to be called twice, then there
needs to be some way to distinguish the calls and what is supposed
to happen during each call so the script can be adjusted.  The
change above allowed me to bring up a domU and work with it (including
network).  However, I only tested a PV domU, not an HVM one, so
QEMU wasn't involved.

} > > } +(libxl_get_stubdom_id(CTX, dev->domid) && num_exec)) {
} > > } +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
num_exec);
} > > } +rc = 0;
} > > } +goto out;
} > > } +}
} > > }  rc = libxl__hotplug(gc, dev, args, action);
} > > }  if (!rc) rc = 1;
} > > }  break;
} > > } -- 
} > > } 2.1.4
} > > } 
} > > }-- End of excerpt from Wei Liu
}-- End of excerpt from Wei Liu



[Xen-devel] [PATCH v2 2/3] drop pointless and add useful default cases

2016-07-04 Thread Jan Beulich
There's no point in having default cases when all possible values have
respective case statements, or when there's just a "break" statement.

On the other hand, the two main switch() statements had better get
default cases added, just to cover the case of someone altering one of
the two lookup arrays without suitably changing these switch
statements.

Signed-off-by: Jan Beulich 
---
v2: use ASSERT_UNREACHABLE() in favor of BUG(), and log a message once.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1379,7 +1379,6 @@ decode_segment(uint8_t modrm_reg)
 case 3: return x86_seg_ds;
 case 4: return x86_seg_fs;
 case 5: return x86_seg_gs;
-default: break;
 }
 return decode_segment_failed;
 }
@@ -1503,6 +1502,19 @@ int x86emul_unhandleable_rw(
 return X86EMUL_UNHANDLEABLE;
 }
 
+static void internal_error(const char *which, uint8_t byte,
+   const struct cpu_user_regs *regs)
+{
+#ifdef __XEN__
+static bool_t logged;
+
+if ( !test_and_set_bool(logged) )
+gprintk(XENLOG_ERR, "Internal error: %s/%02x [%04x:%08lx]\n",
+which, byte, regs->cs, regs->eip);
+#endif
+ASSERT_UNREACHABLE();
+}
+
 int
 x86_emulate(
 struct x86_emulate_ctxt *ctxt,
@@ -2996,8 +3008,6 @@ x86_emulate(
 case 7: /* fdivr */
 emulate_fpu_insn_memsrc("fdivrs", src.val);
 break;
-default:
-goto cannot_emulate;
 }
 }
 break;
@@ -3128,8 +3138,6 @@ x86_emulate(
 case 7: /* fidivr m32i */
 emulate_fpu_insn_memsrc("fidivrl", src.val);
 break;
-default:
-goto cannot_emulate;
 }
 }
 break;
@@ -3352,8 +3360,6 @@ x86_emulate(
 case 7: /* fidivr m16i */
 emulate_fpu_insn_memsrc("fidivrs", src.val);
 break;
-default:
-goto cannot_emulate;
 }
 }
 break;
@@ -3431,8 +3437,6 @@ x86_emulate(
 dst.type = OP_MEM;
 emulate_fpu_insn_memdst("fistpll", dst.val);
 break;
-default:
-goto cannot_emulate;
 }
 }
 break;
@@ -3750,8 +3754,6 @@ x86_emulate(
 }
 break;
 }
-default:
-goto cannot_emulate;
 }
 break;
 
@@ -3845,10 +3847,12 @@ x86_emulate(
 goto push;
 case 7:
 generate_exception_if(1, EXC_UD, -1);
-default:
-goto cannot_emulate;
 }
 break;
+
+default:
+internal_error("primary", b, ctxt->regs);
+goto cannot_emulate;
 }
 
  writeback:
@@ -4815,6 +4819,10 @@ x86_emulate(
 break;
 }
 break;
+
+default:
+internal_error("secondary", b, ctxt->regs);
+goto cannot_emulate;
 }
 goto writeback;
 




Re: [Xen-devel] Xen 4.6 script calling conventions

2016-07-04 Thread Roger Pau Monné
On Sat, Jul 02, 2016 at 12:40:43PM +0100, Wei Liu wrote:
> On Tue, Jun 28, 2016 at 06:00:49PM -0700, John Nemeth wrote:
> >  I'm trying to package Xen 4.6 (specifically Xen 4.6.3) for
> > use with NetBSD.  I have it mostly done; however, when I try to
> > create a domU, libxl goes into an infinite loop calling the scripts.
> > If I try to create a domU with no network or disk, it works fine
> > (albeit of rather limited use).  Have there been changes between
> > Xen 4.5 and Xen 4.6 in the calling convention for the scripts?  Is
> > there documentation on what is expected somewhere?  Please CC me on
> > any responses.  Here is my domU config file:
> 
> Can you give this patch a try? I don't have netbsd system at hand to
> test it.
> 
> I suspect netbsd doesn't support stubdom because that pile of code is a
> bit Linux centric, but it wouldn't hurt to prepare for it.
> 
> ---8<---
> From 3c64a22f4a5dcf76244c2acff7a26717402ea33c Mon Sep 17 00:00:00 2001
> From: Wei Liu 
> Date: Sat, 2 Jul 2016 12:35:30 +0100
> Subject: [PATCH] libxl/netbsd: check num_exec in hotplug function
> 
> This basically replicates the same logic in libxl_linux.c. Without this
> libxl will loop indefinitely trying to execute hotplug script.
> 
> Reported-by: John Nemeth 
> Signed-off-by: Wei Liu 
> ---
>  tools/libxl/libxl_netbsd.c | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
> index 096c057..92d3c89 100644
> --- a/tools/libxl/libxl_netbsd.c
> +++ b/tools/libxl/libxl_netbsd.c
> @@ -68,7 +68,25 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, 
> libxl__device *dev,
>  
>  switch (dev->backend_kind) {
>  case LIBXL__DEVICE_KIND_VBD:
> +if (num_exec != 0) {
> +LOG(DEBUG, "num_exec %d, not running hotplug scripts", num_exec);
> +rc = 0;
> +goto out;
> +}
> +rc = libxl__hotplug(gc, dev, args, action);
> +if (!rc) rc = 1;
> +break;
>  case LIBXL__DEVICE_KIND_VIF:
> +/*
> + * If domain has a stubdom we don't have to execute hotplug scripts
> + * for emulated interfaces
> + */
> +if ((num_exec > 1) ||

This should be num_exec != 0; NetBSD libxl only executes the network
hotplug script once, because the emulated network card is attached to
the bridge using a script called directly by QEMU, see:

http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libxl/libxl_dm.c;hb=HEAD#l608

And:

http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libxl/libxl_dm.c;hb=HEAD#l27
 

Apart from that the change looks fine, and provided this is fixed:

Acked-by: Roger Pau Monné 

Thanks, Roger.



Re: [Xen-devel] [PATCH] libxl/netbsd: check num_exec in hotplug function

2016-07-04 Thread Roger Pau Monné
On Mon, Jul 04, 2016 at 04:56:33AM -0700, John Nemeth wrote:
> On Jul 4, 12:23pm, Wei Liu wrote:
> } On Mon, Jul 04, 2016 at 12:09:30PM +0100, Wei Liu wrote:
> } > On Mon, Jul 04, 2016 at 01:11:04AM -0700, John Nemeth wrote:
> } > > On Jul 2, 12:35pm, Wei Liu wrote:
> } > > } +/*
> } > > } + * If domain has a stubdom we don't have to execute hotplug 
> scripts
> } > > } + * for emulated interfaces
> } > > } + */
> } > > } +if ((num_exec > 1) ||
> } > > 
> } > >  The function is called with num_exec set to 0 and 1, so this
> } > > should be:
> } > > 
> } > >if ((num_exec != 0) ||
> } > 
> } > AIUI this is related to how network is setup because we would need to
> } > hotplug both the emulated nic in QEMU and the PV nic. Is this line
> } > causing problem for you?
> } > 
> } > Wei.
> 
>  Yes, when the script is called the second time, it fails when
> trying to add the interface to the bridge and returns an error.
> This results in xl aborting the domU creation process and tearing
> it down.  If the script is supposed to be called twice, then there
> needs to be some way to distinguish the calls and what is supposed
> to happen during each call so the script can be adjusted.  The
> change above allowed me to bring up a domU and work with it (including
> network).  However, I only tested a PV domU, not an HVM one, so
> QEMU wasn't involved.

I've already replied to the other thread with the reason why this needs to 
be num_exec != 0 instead of num_exec > 1, and to avoid further confusion I 
would recommend Wei to add a comment here when the patch is updated, since 
this is a NetBSD-specific behavior that's not shared with other Dom0s:

http://lists.xenproject.org/archives/html/xen-devel/2016-07/msg00178.html

Roger.



Re: [Xen-devel] Xen 4.6 script calling conventions

2016-07-04 Thread Wei Liu
On Mon, Jul 04, 2016 at 01:57:20PM +0200, Roger Pau Monné wrote:
> On Sat, Jul 02, 2016 at 12:40:43PM +0100, Wei Liu wrote:
> > On Tue, Jun 28, 2016 at 06:00:49PM -0700, John Nemeth wrote:
> > >  I'm trying to package Xen 4.6 (specifically Xen 4.6.3) for
> > > use with NetBSD.  I have it mostly done; however, when I try to
> > > create a domU, libxl goes into an infinite loop calling the scripts.
> > > If I try to create a domU with no network or disk, it works fine
> > > (albeit of rather limited use).  Have there been changes between
> > > Xen 4.5 and Xen 4.6 in the calling convention for the scripts?  Is
> > > there documentation on what is expected somewhere?  Please CC me on
> > > any responses.  Here is my domU config file:
> > 
> > Can you give this patch a try? I don't have netbsd system at hand to
> > test it.
> > 
> > I suspect netbsd doesn't support stubdom because that pile of code is a
> > bit Linux centric, but it wouldn't hurt to prepare for it.
> > 
> > ---8<---
> > From 3c64a22f4a5dcf76244c2acff7a26717402ea33c Mon Sep 17 00:00:00 2001
> > From: Wei Liu 
> > Date: Sat, 2 Jul 2016 12:35:30 +0100
> > Subject: [PATCH] libxl/netbsd: check num_exec in hotplug function
> > 
> > This basically replicates the same logic in libxl_linux.c. Without this
> > libxl will loop indefinitely trying to execute hotplug script.
> > 
> > Reported-by: John Nemeth 
> > Signed-off-by: Wei Liu 
> > ---
> >  tools/libxl/libxl_netbsd.c | 18 ++
> >  1 file changed, 18 insertions(+)
> > 
> > diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
> > index 096c057..92d3c89 100644
> > --- a/tools/libxl/libxl_netbsd.c
> > +++ b/tools/libxl/libxl_netbsd.c
> > @@ -68,7 +68,25 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, 
> > libxl__device *dev,
> >  
> >  switch (dev->backend_kind) {
> >  case LIBXL__DEVICE_KIND_VBD:
> > +if (num_exec != 0) {
> > +LOG(DEBUG, "num_exec %d, not running hotplug scripts", 
> > num_exec);
> > +rc = 0;
> > +goto out;
> > +}
> > +rc = libxl__hotplug(gc, dev, args, action);
> > +if (!rc) rc = 1;
> > +break;
> >  case LIBXL__DEVICE_KIND_VIF:
> > +/*
> > + * If domain has a stubdom we don't have to execute hotplug scripts
> > + * for emulated interfaces
> > + */
> > +if ((num_exec > 1) ||
> 
> This should be num_exec != 0, NetBSD libxl only executes the network hotplug 
> script once, because the emulated network card is attached to the bridge 
> using a script called directly by QEMU, see:
> 
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libxl/libxl_dm.c;hb=HEAD#l608
> 
> And:
> 
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libxl/libxl_dm.c;hb=HEAD#l27
>  
> 
> Apart from that the change looks fine, and provided this is fixed:
> 
> Acked-by: Roger Pau Monné 
> 

Thanks for your clarification on how netbsd works.

I will update the patch accordingly.

Wei.

> Thanks, Roger.



Re: [Xen-devel] [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.

2016-07-04 Thread Sergej Proskurin
ARM allows the use of concatenated root (first-level) page tables:
there are P2M_ROOT_PAGES consecutive pages that are used for the root
level page table. We need to prevent freeing any of these concatenated
pages during the process of flushing in p2m_flush_table (simply because
new pages might be re-inserted into the page table at a later point in
time).


On 07/04/2016 01:45 PM, Sergej Proskurin wrote:
> The current implementation differentiates between flushing and
> destroying altp2m views. This commit adds the functions
> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
> individual altp2m views without destroying the entire table. In this
> way, altp2m views can be reused at a later point in time.
> 
> In addition, the implementation clears all altp2m entries during the
> process of flushing. The same applies to hostp2m entries, when it is
> destroyed. In this way, further domain and p2m allocations will not
> unintentionally reuse old p2m mappings.
> 
> Signed-off-by: Sergej Proskurin 
> ---
> Cc: Stefano Stabellini 
> Cc: Julien Grall 
> ---
>  xen/arch/arm/p2m.c| 67 
> +++
>  xen/include/asm-arm/p2m.h | 15 ---
>  2 files changed, 78 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4a745fd..ae789e6 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned 
> int idx)
>  return rc;
>  }
>  
> +/* Reset this p2m table to be empty */
> +static void p2m_flush_table(struct p2m_domain *p2m)
> +{
> +struct page_info *top, *pg;
> +mfn_t mfn;
> +unsigned int i;
> +
> +/* Check whether the p2m table has already been flushed before. */
> +if ( p2m->root == NULL)
> +return;
> +
> +spin_lock(&p2m->lock);
> +
> +/*
> + * "Host" p2m tables can have shared entries &c that need a bit more care
> + * when discarding them
> + */
> +ASSERT(!p2m_is_hostp2m(p2m));
> +
> +/* Zap the top level of the trie */
> +top = p2m->root;
> +
> +/* Clear all concatenated first level pages */
> +for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +{
> +mfn = _mfn(page_to_mfn(top + i));
> +clear_domain_page(mfn);
> +}
> +
> +/* Free the rest of the trie pages back to the paging pool */
> +while ( (pg = page_list_remove_head(&p2m->pages)) )
> +if ( pg != top  )
> +{
> +/*
> + * Before freeing the individual pages, we clear them to prevent
> + * reusing old table entries in future p2m allocations.
> + */
> +mfn = _mfn(page_to_mfn(pg));
> +clear_domain_page(mfn);
> +free_domheap_page(pg);
> +}

At this point, we prevent only the first root level page from being
freed. In case there are multiple consecutive first level pages, one of
them will be freed in the loop above (and may potentially crash the
guest if the table is reused at a later point in time). However,
testing for every concatenated page in the if clause of the while loop
would further decrease the flushing performance. Thus, my question is
whether there is a good way to solve this issue.
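
One idea (a rough sketch only, not tested; it assumes the concatenated
root pages come from a single allocation, so their struct page_info
entries are contiguous in the frametable) would be to compare against
the whole root range instead of only against 'top'. The range check is
just two pointer comparisons, so it should not cost much:

    while ( (pg = page_list_remove_head(&p2m->pages)) )
    {
        /* Skip every page of the concatenated root, not only 'top'. */
        if ( pg >= top && pg < (top + P2M_ROOT_PAGES) )
            continue;

        mfn = _mfn(page_to_mfn(pg));
        clear_domain_page(mfn);
        free_domheap_page(pg);
    }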

> +
> +page_list_add(top, &p2m->pages);
> +
> +/* Invalidate VTTBR */
> +p2m->vttbr.vttbr = 0;
> +p2m->vttbr.vttbr_baddr = INVALID_MFN;
> +
> +spin_unlock(&p2m->lock);
> +}
> +
> +void p2m_flush_altp2m(struct domain *d)
> +{
> +unsigned int i;
> +
> +altp2m_lock(d);
> +
> +for ( i = 0; i < MAX_ALTP2M; i++ )
> +{
> +p2m_flush_table(d->arch.altp2m_p2m[i]);
> +flush_tlb();
> +d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +}
> +
> +altp2m_unlock(d);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 8ee78e0..51d784f 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
>  
>  /* Flush all the alternate p2m's for a domain */
> -static inline void p2m_flush_altp2m(struct domain *d)
> -{
> -/* Not supported on ARM. */
> -}
> +void p2m_flush_altp2m(struct domain *d);
>  
>  /* Make a specific alternate p2m valid */
>  int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
> @@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info 
> *page,
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>  
> +static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
> +{
> +return p2m->p2m_class == p2m_host;
> +}
> +
> +static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
> +{
> +return p2m->p2m_class == p2m_alternate;
> +}
> +
>  /* vm_event and mem_access are supported on any ARM guest */
>  static inline bool_t p2m_mem_access_sanity_check(struct dom

[Xen-devel] [PATCH] libxl/netbsd: check num_exec in hotplug function

2016-07-04 Thread Wei Liu
This basically replicates the same logic as in libxl_linux.c but with
one change -- only test num_exec == 0 in the nic hotplug case, because
NetBSD lets QEMU call a script itself. Without this patch libxl will
loop indefinitely trying to execute the hotplug script.

Reported-by: John Nemeth 
Signed-off-by: Wei Liu 
Acked-by: Roger Pau Monné 
---
Cc: Ian Jackson 
Cc: John Nemeth 
Cc: Roger Pau Monné 

Ian, this is a backport candidate.
---
 tools/libxl/libxl_netbsd.c | 21 +
 1 file changed, 21 insertions(+)

diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
index 096c057..a79b8aa 100644
--- a/tools/libxl/libxl_netbsd.c
+++ b/tools/libxl/libxl_netbsd.c
@@ -68,7 +68,28 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
 
 switch (dev->backend_kind) {
 case LIBXL__DEVICE_KIND_VBD:
+if (num_exec != 0) {
+LOG(DEBUG, "num_exec %d, not running hotplug scripts", num_exec);
+rc = 0;
+goto out;
+}
+rc = libxl__hotplug(gc, dev, args, action);
+if (!rc) rc = 1;
+break;
 case LIBXL__DEVICE_KIND_VIF:
+/*
+ * If domain has a stubdom we don't have to execute hotplug scripts
+ * for emulated interfaces
+ *
+ * NetBSD lets QEMU call a script to plug the emulated nic, so
+ * only test if num_exec == 0 in that case.
+ */
+if ((num_exec != 0) ||
+(libxl_get_stubdom_id(CTX, dev->domid) && num_exec)) {
+LOG(DEBUG, "num_exec %d, not running hotplug scripts", num_exec);
+rc = 0;
+goto out;
+}
 rc = libxl__hotplug(gc, dev, args, action);
 if (!rc) rc = 1;
 break;
-- 
2.1.4




Re: [Xen-devel] [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.

2016-07-04 Thread Andrew Cooper
On 04/07/16 12:45, Sergej Proskurin wrote:
> The Xen altp2m subsystem is currently supported only on x86-64 based
> architectures. By utilizing ARM's virtualization extensions, we intend
> to implement altp2m support for ARM architectures and thus further
> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>
> With this commit, Xen is now able to activate altp2m support on ARM by
> means of the command-line argument 'altp2m' (bool).
>
> Signed-off-by: Sergej Proskurin 

In addition, please patch docs/misc/xen-command-line.markdown to
indicate that the altp2m option is available for x86 Intel and ARM.

~Andrew



[Xen-devel] [xen-unstable test] 96611: tolerable FAIL

2016-07-04 Thread osstest service owner
flight 96611 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96611/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale  16 guest-start.2  fail in 96558 pass in 96611
 test-armhf-armhf-xl-credit2 15 guest-start/debian.repeat fail in 96558 pass in 
96611
 test-amd64-amd64-xl-qemuu-win7-amd64 9 windows-install fail in 96558 pass in 
96611
 test-armhf-armhf-xl-rtds 16 guest-start.2   fail pass in 96558

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 96515
 build-amd64-rumpuserxen   6 xen-buildfail   like 96558
 build-i386-rumpuserxen6 xen-buildfail   like 96558
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 96558
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 96558
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 96558
 test-amd64-amd64-xl-rtds  9 debian-install   fail   like 96558

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass

version targeted for testing:
 xen  bb4f41b3dff831faaf5a3248e0ecd123024d7f8f
baseline version:
 xen  bb4f41b3dff831faaf5a3248e0ecd123024d7f8f

Last test of basis    96611  2016-07-04 01:58:03 Z    0 days
Testing same since        0  1970-01-01 00:00:00 Z 16986 days    0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libv

Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Sergej Proskurin


On 07/04/2016 01:45 PM, Sergej Proskurin wrote:
> This commit makes sure that the TLB of a domain considers flushing all
> of the associated altp2m views. Therefore, in case a different domain
> (not the currently active domain) shall flush its TLBs, the current
> implementation loops over all VTTBRs of the different altp2m mappings
> per vCPU and flushes the TLBs. This way, a change of one of the altp2m
> mapping is considered.  At this point, it must be considered that the
> domain --whose TLBs are to be flushed-- is not locked.
> 
> Signed-off-by: Sergej Proskurin 
> ---
> Cc: Stefano Stabellini 
> Cc: Julien Grall 
> ---
>  xen/arch/arm/p2m.c | 71 
> --
>  1 file changed, 63 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7e721f9..019f10e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -15,6 +15,8 @@
>  #include 
>  #include 
>  
> +#include 
> +
>  #ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly p2m_root_order;
>  static unsigned int __read_mostly p2m_root_level;
> @@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>  }
>  
> +static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
> +{
> +struct domain *d = v->domain;
> +uint16_t index = vcpu_altp2m(v).p2midx;
> +
> +if ( index == INVALID_ALTP2M )
> +return INVALID_MFN;
> +
> +BUG_ON(index >= MAX_ALTP2M);
> +
> +return d->arch.altp2m_vttbr[index];
> +}
> +
> +static void p2m_load_altp2m_VTTBR(struct vcpu *v)
> +{
> +struct domain *d = v->domain;
> +uint64_t vttbr = p2m_get_altp2m_vttbr(v);
> +
> +if ( is_idle_domain(d) )
> +return;
> +
> +BUG_ON(vttbr == INVALID_MFN);
> +WRITE_SYSREG64(vttbr, VTTBR_EL2);
> +
> +isb(); /* Ensure update is visible */
> +}
> +
>  static void p2m_load_VTTBR(struct domain *d)
>  {
>  if ( is_idle_domain(d) )
>  return;
> +
>  BUG_ON(!d->arch.vttbr);
>  WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
>  isb(); /* Ensure update is visible */
>  }
>  
> @@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
>  WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>  isb();
>  
> -p2m_load_VTTBR(n->domain);
> +if ( altp2m_active(n->domain) )
> +p2m_load_altp2m_VTTBR(n);
> +else
> +p2m_load_VTTBR(n->domain);
> +
>  isb();
>  
>  if ( is_32bit_domain(n->domain) )
> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>  void flush_tlb_domain(struct domain *d)
>  {
>  unsigned long flags = 0;
> +struct vcpu *v = NULL;
>  
> -/* Update the VTTBR if necessary with the domain d. In this case,
> - * it's only necessary to flush TLBs on every CPUs with the current VMID
> - * (our domain).
> +/*
> + * Update the VTTBR if necessary with the domain d. In this case, it is 
> only
> + * necessary to flush TLBs on every CPUs with the current VMID (our
> + * domain).
>   */
>  if ( d != current->domain )
>  {
>  local_irq_save(flags);
> -p2m_load_VTTBR(d);
> -}
>  
> -flush_tlb();
> +/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
> +if ( altp2m_active(d) )
> +{
> +for_each_vcpu( d, v )
> +{
> +p2m_load_altp2m_VTTBR(v);
> +flush_tlb();
> +}
> +}
> +else
> +{
> +p2m_load_VTTBR(d);
> +flush_tlb();
> +}
> +}
> +else
> +flush_tlb();

Flushing of all vCPUs' TLBs must also be performed on the current
domain. Does it make sense to flush the current domain's TLBs at this
point as well? If the current domain's TLBs were modified and need
flushing, that will be done when the VTTBR is loaded at the next
VMENTRY anyway.
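
If the per-view flush turns out to be wanted for the current domain as
well, one way to express it (a sketch only; the helper name is made up
here, and it assumes flush_tlb() targets the VMID currently programmed
into VTTBR_EL2 and that the caller restores the correct VTTBR
afterwards, as the patch already does for the d != current->domain
case) would be to factor the existing loop into a small helper used by
both paths:

    static void flush_domain_altp2m_tlbs(struct domain *d)
    {
        struct vcpu *v;

        /* Load each vCPU's altp2m view and flush for that VTTBR. */
        for_each_vcpu( d, v )
        {
            p2m_load_altp2m_VTTBR(v);
            flush_tlb();
        }
    }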

>  
>  if ( d != current->domain )
>  {
> -p2m_load_VTTBR(current->domain);
> +/* Make sure altp2m mapping is valid. */
> +if ( altp2m_active(current->domain) )
> +p2m_load_altp2m_VTTBR(current);
> +else
> +p2m_load_VTTBR(current->domain);
>  local_irq_restore(flags);
>  }
>  }
> 

-- 
M.Sc. Sergej Proskurin
Wissenschaftlicher Mitarbeiter

Technische Universität München
Fakultät für Informatik
Lehrstuhl für Sicherheit in der Informatik

Boltzmannstraße 3
85748 Garching (bei München)

Tel. +49 (0)89 289-18592
Fax +49 (0)89 289-18579

prosku...@sec.in.tum.de
www.sec.in.tum.de



[Xen-devel] [xen-unstable-smoke test] 96626: tolerable all pass - PUSHED

2016-07-04 Thread osstest service owner
flight 96626 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96626/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  d0fd9ae54491328b10dee4003656c14b3bf3d3e9
baseline version:
 xen  bb4f41b3dff831faaf5a3248e0ecd123024d7f8f

Last test of basis    96476  2016-06-30 14:01:02 Z    3 days
Testing same since    96626  2016-07-04 10:01:47 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Kevin Tian 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=d0fd9ae54491328b10dee4003656c14b3bf3d3e9
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
d0fd9ae54491328b10dee4003656c14b3bf3d3e9
+ branch=xen-unstable-smoke
+ revision=d0fd9ae54491328b10dee4003656c14b3bf3d3e9
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.7-testing
+ '[' xd0fd9ae54491328b10dee4003656c14b3bf3d3e9 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' xgit://cache:9419/ '!=' x ']'
+++ echo 
'git://cache:9419

Re: [Xen-devel] [PATCH 4/8] x86/vm-event/monitor: turn monitor_write_data.do_write into enum

2016-07-04 Thread Jan Beulich
>>> On 30.06.16 at 20:44,  wrote:
> After trapping a control-register write vm-event and -until- deciding if that
> write is to be permitted or not (VM_EVENT_FLAG_DENY) and doing the actual 
> write,
> there cannot and should not be another trapped control-register write event.

Is that true even for the case where full register state gets updated
for a vCPU? Is that updating-all-context case of no interest to a
monitoring application, now and forever?

> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -259,19 +259,19 @@ struct pv_domain
>  struct cpuidmasks *cpuidmasks;
>  };
>  
> -struct monitor_write_data {
> -struct {
> -unsigned int msr : 1;
> -unsigned int cr0 : 1;
> -unsigned int cr3 : 1;
> -unsigned int cr4 : 1;
> -} do_write;
> +enum monitor_write_status
> +{
> +MWS_NOWRITE = 0,

Please omit the "= 0" here - MWS_NOWRITE will be zero even
without that.
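
A tiny standalone illustration of that point (not part of the patch, just
the C rule that the first enumerator of an enum is 0 by definition):

    #include <assert.h>

    enum monitor_write_status {
        MWS_NOWRITE,        /* first enumerator is 0 without any "= 0" */
        MWS_MSR,
        MWS_CR0,
        MWS_CR3,
        MWS_CR4,
    };

    int main(void)
    {
        assert(MWS_NOWRITE == 0);   /* guaranteed by the C standard */
        return 0;
    }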

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 4/8] x86/vm-event/monitor: turn monitor_write_data.do_write into enum

2016-07-04 Thread Corneliu ZUZU

On 7/4/2016 3:37 PM, Jan Beulich wrote:

On 30.06.16 at 20:44,  wrote:

After trapping a control-register write vm-event and -until- deciding if that
write is to be permitted or not (VM_EVENT_FLAG_DENY) and doing the actual
write,
there cannot and should not be another trapped control-register write event.

Is that true even for the case where full register state gets updated
for a vCPU?


AFAIK, the full register state cannot be updated _at once_, that is: 
after each trapped register update monitor_write_data must _always_ be 
handled _before reentering the vCPU_.
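
To make that invariant concrete, here is a rough sketch of the consumer
side in hvm_do_resume(), using the enum introduced by this patch and the
hvm_set_cr*/hvm_msr_write_intercept calls visible in the quoted hunks of
patch 3/8; it is an illustration, not the literal patch code:

    /*
     * Sketch: apply the single pending deferred write before the vCPU
     * re-enters the guest.  With one enum status instead of independent
     * do_write bits, at most one such write can be outstanding.
     */
    static void monitor_apply_pending_write(struct vcpu *v)
    {
        struct monitor_write_data *w = &v->arch.vm_event->write_data;

        switch ( w->status )
        {
        case MWS_NOWRITE:
            return;
        case MWS_MSR:
            hvm_msr_write_intercept(w->msr, w->value, 0);
            break;
        case MWS_CR0:
            hvm_set_cr0(w->value, 0);
            break;
        case MWS_CR3:
            hvm_set_cr3(w->value, 0);
            break;
        case MWS_CR4:
            hvm_set_cr4(w->value, 0);
            break;
        }

        w->status = MWS_NOWRITE;
    }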



Is that updating-all-context case of no interest to a
monitoring application, now and forever?


As I said above, I don't see how such a case would (ever) be possible.


--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -259,19 +259,19 @@ struct pv_domain
  struct cpuidmasks *cpuidmasks;
  };
  
-struct monitor_write_data {

-struct {
-unsigned int msr : 1;
-unsigned int cr0 : 1;
-unsigned int cr3 : 1;
-unsigned int cr4 : 1;
-} do_write;
+enum monitor_write_status
+{
+MWS_NOWRITE = 0,

Please omit the "= 0" here - MWS_NOWRITE will be zero even
without that.

Jan




Ack.

Thanks,
Corneliu.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup

2016-07-04 Thread Jan Beulich
>>> On 30.06.16 at 20:45,  wrote:
> The arch_vm_event structure is dynamically allocated and freed @
> vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack 
> user
> disables domain monitoring (xc_monitor_disable), which in turn effectively
> discards any information that was in arch_vm_event.write_data.

Isn't that rather a toolstack user bug, not warranting a relatively
extensive (even if mostly mechanical) hypervisor change like this
one? Sane monitor behavior, after all, is required anyway for the
monitored guest to survive.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -473,24 +473,24 @@ void hvm_do_resume(struct vcpu *v)
>  if ( !handle_hvm_io_completion(v) )
>  return;
>  
> -if ( unlikely(v->arch.vm_event) )
> +if ( unlikely(v->arch.vm_event.emulate_flags) )
>  {
> -if ( v->arch.vm_event->emulate_flags )
> -{
> -enum emul_kind kind = EMUL_KIND_NORMAL;
> +enum emul_kind kind;
>  
> -if ( v->arch.vm_event->emulate_flags &
> - VM_EVENT_FLAG_SET_EMUL_READ_DATA )
> -kind = EMUL_KIND_SET_CONTEXT;
> -else if ( v->arch.vm_event->emulate_flags &
> -  VM_EVENT_FLAG_EMULATE_NOWRITE )
> -kind = EMUL_KIND_NOWRITE;
> +ASSERT(v->arch.vm_event.emul_read_data);
>  
> -hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
> -   HVM_DELIVER_NO_ERROR_CODE);
> +kind = EMUL_KIND_NORMAL;

Please keep this being the initializer of the variable.

>  
> -v->arch.vm_event->emulate_flags = 0;
> -}
> +if ( v->arch.vm_event.emulate_flags & 
> VM_EVENT_FLAG_SET_EMUL_READ_DATA )

Long line.

> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -259,21 +259,6 @@ struct pv_domain
>  struct cpuidmasks *cpuidmasks;
>  };
>  
> -enum monitor_write_status
> -{
> -MWS_NOWRITE = 0,
> -MWS_MSR,
> -MWS_CR0,
> -MWS_CR3,
> -MWS_CR4,
> -};
> -
> -struct monitor_write_data {
> -enum monitor_write_status status;
> -uint32_t msr;
> -uint64_t value;
> -};
> -
>  struct arch_domain
>  {
>  struct page_info *perdomain_l3_pg;
> @@ -496,6 +481,31 @@ typedef enum __packed {
>  SMAP_CHECK_DISABLED,/* disable the check */
>  } smap_check_policy_t;
>  
> +enum monitor_write_status
> +{
> +MWS_NOWRITE = 0,
> +MWS_MSR,
> +MWS_CR0,
> +MWS_CR3,
> +MWS_CR4,
> +};
> +
> +struct monitor_write_data {
> +enum monitor_write_status status;
> +uint32_t msr;
> +uint64_t value;
> +};

Instead of moving these around now, may I suggest you put them
into their final place right away in the previous patch?

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM.

2016-07-04 Thread Andrew Cooper
On 04/07/16 12:45, Sergej Proskurin wrote:
> Hello everyone,
>
> Since this is my first contribution to the Xen development mailing list, I
> would like to shortly introduce myself. My name is Sergej Proskurin. I am a 
> PhD
> Student at the Technical University of Munich. My research areas focus on
> Virtual Machine Introspection, Hypervisor/OS Security, and Reverse 
> Engineering.
>
> The following patch series can be found on Github[0] and is part of my
> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
> managed by the organization The Honeynet Project. As part of GSoC, I am being
> supervised by the Xen developer Tamas K. Lengyel , George
> D. Webster, and Steven Maresca.
>
> In this patch series, we provide an implementation of the altp2m subsystem for
> ARM. Our implementation is based on the altp2m subsystem for x86, providing
> additional --alternate-- views on the guest's physical memory by means of the
> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
> extend the p2m subsystem. We extend libxl to support altp2m on ARM and modify
> xen-access to test the suggested functionality.
>
> [0] https://github.com/sergej-proskurin/xen (Branch arm-altp2m-patch)
> [1] https://summerofcode.withgoogle.com/projects/#4970052843470848
>
> Sergej Proskurin (18):
>   arm/altp2m: Add cmd-line support for altp2m on ARM.
>   arm/altp2m: Add first altp2m HVMOP stubs.
>   arm/altp2m: Add HVMOP_altp2m_get_domain_state.
>   arm/altp2m: Add altp2m init/teardown routines.
>   arm/altp2m: Add HVMOP_altp2m_set_domain_state.
>   arm/altp2m: Add a(p2m) table flushing routines.
>   arm/altp2m: Add HVMOP_altp2m_create_p2m.
>   arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
>   arm/altp2m: Add HVMOP_altp2m_switch_p2m.
>   arm/altp2m: Renamed and extended p2m_alloc_table.
>   arm/altp2m: Make flush_tlb_domain ready for altp2m.
>   arm/altp2m: Cosmetic fixes - function prototypes.
>   arm/altp2m: Make get_page_from_gva ready for altp2m.
>   arm/altp2m: Add HVMOP_altp2m_set_mem_access.
>   arm/altp2m: Add altp2m paging mechanism.
>   arm/altp2m: Extended libxl to activate altp2m on ARM.
>   arm/altp2m: Adjust debug information to altp2m.
>   arm/altp2m: Extend xen-access for altp2m on ARM.
>
>  tools/libxl/libxl_create.c  |   1 +
>  tools/libxl/libxl_dom.c |  14 +
>  tools/libxl/libxl_types.idl |   1 +
>  tools/libxl/xl_cmdimpl.c|   5 +
>  tools/tests/xen-access/xen-access.c |  11 +-
>  xen/arch/arm/Makefile   |   1 +
>  xen/arch/arm/altp2m.c   |  68 +++
>  xen/arch/arm/domain.c   |   2 +-
>  xen/arch/arm/guestcopy.c|   6 +-
>  xen/arch/arm/hvm.c  | 145 ++
>  xen/arch/arm/p2m.c  | 930 
> 
>  xen/arch/arm/traps.c| 104 +++-
>  xen/include/asm-arm/altp2m.h|  12 +-
>  xen/include/asm-arm/domain.h|  18 +
>  xen/include/asm-arm/hvm/hvm.h   |  59 +++
>  xen/include/asm-arm/mm.h|   2 +-
>  xen/include/asm-arm/p2m.h   |  72 ++-
>  17 files changed, 1330 insertions(+), 121 deletions(-)
>  create mode 100644 xen/arch/arm/altp2m.c
>  create mode 100644 xen/include/asm-arm/hvm/hvm.h
>

I have taken a quick look over the series, and things seem to be in order.

However, I wonder whether it would be better to have do_altp2m_op()
common between x86 and ARM, to avoid the risk of divergence in the API.
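
For the record, a rough sketch of what such a split could look like; the
arch_altp2m_op() hook is hypothetical (nothing in the series defines it),
and version/permission checks are omitted:

    /* Sketch: common HVMOP_altp2m_* dispatcher, with the genuinely
     * arch-specific sub-ops behind a hypothetical arch hook. */
    static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        struct xen_hvm_altp2m_op a;
        struct domain *d;
        int rc;

        if ( copy_from_guest(&a, arg, 1) )
            return -EFAULT;

        d = rcu_lock_domain_by_any_id(a.domain);
        if ( d == NULL )
            return -ESRCH;

        switch ( a.cmd )
        {
        case HVMOP_altp2m_get_domain_state:
            a.u.domain_state.state = altp2m_active(d);
            rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
            break;

        default:
            rc = arch_altp2m_op(d, &a);     /* hypothetical arch hook */
            break;
        }

        rcu_unlock_domain(d);
        return rc;
    }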

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 1/3] use consistent exit mechanism

2016-07-04 Thread Andrew Cooper
On 04/07/16 12:55, Jan Beulich wrote:
> Similar code should use similar exit mechanisms (return vs goto).
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 2/3] drop pointless and add useful default cases

2016-07-04 Thread Andrew Cooper
On 04/07/16 12:55, Jan Beulich wrote:
> There's no point in having default cases when all possible values have
> respective case statements, or when there's just a "break" statement.
>
> Otoh the two main switch() statements better get default cases added,
> just to cover the case of someone altering one of the two lookup arrays
> without suitably changing these switch statements.
>
> Signed-off-by: Jan Beulich 

Reviewed-by: Andrew Cooper 
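
As a standalone illustration of the commit message's point about
defensive default cases (hypothetical names, not the patched code):

    #include <assert.h>
    #include <stdio.h>

    /* If someone later extends the lookup array without touching the
     * switch, the default case catches the mismatch instead of silently
     * doing nothing. */
    enum action { ACT_READ, ACT_WRITE };

    static const enum action lookup[] = { ACT_READ, ACT_WRITE };

    static void dispatch(enum action a)
    {
        switch ( a )
        {
        case ACT_READ:
            puts("read");
            break;
        case ACT_WRITE:
            puts("write");
            break;
        default:
            assert(!"lookup array and switch statement out of sync");
        }
    }

    int main(void)
    {
        unsigned int i;

        for ( i = 0; i < sizeof(lookup) / sizeof(lookup[0]); i++ )
            dispatch(lookup[i]);
        return 0;
    }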
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.

2016-07-04 Thread Sergej Proskurin


On 07/04/2016 02:15 PM, Andrew Cooper wrote:
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The Xen altp2m subsystem is currently supported only on x86-64 based
>> architectures. By utilizing ARM's virtualization extensions, we intend
>> to implement altp2m support for ARM architectures and thus further
>> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>>
>> With this commit, Xen is now able to activate altp2m support on ARM by
>> means of the command-line argument 'altp2m' (bool).
>>
>> Signed-off-by: Sergej Proskurin 
> 
> In addition, please patch docs/misc/xen-command-line.markdown to
> indicate that the altp2m option is available for x86 Intel and ARM.
> 
> ~Andrew
> 

Will do, thank you.

Cheers,
Sergej

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM.

2016-07-04 Thread Sergej Proskurin


On 07/04/2016 02:52 PM, Andrew Cooper wrote:
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> Hello everyone,
>>
>> Since this is my first contribution to the Xen development mailing list, I
>> would like to shortly introduce myself. My name is Sergej Proskurin. I am a 
>> PhD
>> Student at the Technical University of Munich. My research areas focus on
>> Virtual Machine Introspection, Hypervisor/OS Security, and Reverse 
>> Engineering.
>>
>> The following patch series can be found on Github[0] and is part of my
>> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
>> managed by the organization The Honeynet Project. As part of GSoC, I am being
>> supervised by the Xen developer Tamas K. Lengyel , 
>> George
>> D. Webster, and Steven Maresca.
>>
>> In this patch series, we provide an implementation of the altp2m subsystem 
>> for
>> ARM. Our implementation is based on the altp2m subsystem for x86, providing
>> additional --alternate-- views on the guest's physical memory by means of the
>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
>> extend the p2m subsystem. We extend libxl to support altp2m on ARM and modify
>> xen-access to test the suggested functionality.
>>
>> [0] https://github.com/sergej-proskurin/xen (Branch arm-altp2m-patch)
>> [1] https://summerofcode.withgoogle.com/projects/#4970052843470848
>>
>> Sergej Proskurin (18):
>>   arm/altp2m: Add cmd-line support for altp2m on ARM.
>>   arm/altp2m: Add first altp2m HVMOP stubs.
>>   arm/altp2m: Add HVMOP_altp2m_get_domain_state.
>>   arm/altp2m: Add altp2m init/teardown routines.
>>   arm/altp2m: Add HVMOP_altp2m_set_domain_state.
>>   arm/altp2m: Add a(p2m) table flushing routines.
>>   arm/altp2m: Add HVMOP_altp2m_create_p2m.
>>   arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
>>   arm/altp2m: Add HVMOP_altp2m_switch_p2m.
>>   arm/altp2m: Renamed and extended p2m_alloc_table.
>>   arm/altp2m: Make flush_tlb_domain ready for altp2m.
>>   arm/altp2m: Cosmetic fixes - function prototypes.
>>   arm/altp2m: Make get_page_from_gva ready for altp2m.
>>   arm/altp2m: Add HVMOP_altp2m_set_mem_access.
>>   arm/altp2m: Add altp2m paging mechanism.
>>   arm/altp2m: Extended libxl to activate altp2m on ARM.
>>   arm/altp2m: Adjust debug information to altp2m.
>>   arm/altp2m: Extend xen-access for altp2m on ARM.
>>
>>  tools/libxl/libxl_create.c  |   1 +
>>  tools/libxl/libxl_dom.c |  14 +
>>  tools/libxl/libxl_types.idl |   1 +
>>  tools/libxl/xl_cmdimpl.c|   5 +
>>  tools/tests/xen-access/xen-access.c |  11 +-
>>  xen/arch/arm/Makefile   |   1 +
>>  xen/arch/arm/altp2m.c   |  68 +++
>>  xen/arch/arm/domain.c   |   2 +-
>>  xen/arch/arm/guestcopy.c|   6 +-
>>  xen/arch/arm/hvm.c  | 145 ++
>>  xen/arch/arm/p2m.c  | 930 
>> 
>>  xen/arch/arm/traps.c| 104 +++-
>>  xen/include/asm-arm/altp2m.h|  12 +-
>>  xen/include/asm-arm/domain.h|  18 +
>>  xen/include/asm-arm/hvm/hvm.h   |  59 +++
>>  xen/include/asm-arm/mm.h|   2 +-
>>  xen/include/asm-arm/p2m.h   |  72 ++-
>>  17 files changed, 1330 insertions(+), 121 deletions(-)
>>  create mode 100644 xen/arch/arm/altp2m.c
>>  create mode 100644 xen/include/asm-arm/hvm/hvm.h
>>
> 
> I have taken a quick look over the series, and things seem to be in order.
> 
> However, I wonder whether it would be better to have do_altp2m_op()
> common between x86 and ARM, to avoid the risk of divergence in the API.
> 
> ~Andrew
> 

It definitely makes sense to combine, or rather to pull out, the
arch-independent parts into a common place. We are still discussing what
exactly needs to be moved out, so we are open to suggestions; thank you.

Cheers,
Sergej

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup

2016-07-04 Thread Corneliu ZUZU

On 7/4/2016 3:47 PM, Jan Beulich wrote:

On 30.06.16 at 20:45,  wrote:

The arch_vm_event structure is dynamically allocated and freed @
vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack user
disables domain monitoring (xc_monitor_disable), which in turn effectively
discards any information that was in arch_vm_event.write_data.

Isn't that rather a toolstack user bug, not warranting a relatively
extensive (even if mostly mechanical) hypervisor change like this
one? Sane monitor behavior, after all, is required anyway for the
monitored guest to survive.


Sorry but could you please rephrase this, I don't quite understand what 
you're saying.
The write_data field in arch_vm_event should _not ever_ be invalidated 
as a direct result of a toolstack user's action.



--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -473,24 +473,24 @@ void hvm_do_resume(struct vcpu *v)
  if ( !handle_hvm_io_completion(v) )
  return;
  
-if ( unlikely(v->arch.vm_event) )

+if ( unlikely(v->arch.vm_event.emulate_flags) )
  {
-if ( v->arch.vm_event->emulate_flags )
-{
-enum emul_kind kind = EMUL_KIND_NORMAL;
+enum emul_kind kind;
  
-if ( v->arch.vm_event->emulate_flags &

- VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-kind = EMUL_KIND_SET_CONTEXT;
-else if ( v->arch.vm_event->emulate_flags &
-  VM_EVENT_FLAG_EMULATE_NOWRITE )
-kind = EMUL_KIND_NOWRITE;
+ASSERT(v->arch.vm_event.emul_read_data);
  
-hvm_mem_access_emulate_one(kind, TRAP_invalid_op,

-   HVM_DELIVER_NO_ERROR_CODE);
+kind = EMUL_KIND_NORMAL;

Please keep this being the initializer of the variable.


I put it there because of the ASSERT (to do that before anything else), 
but I will undo if you prefer.




  
-v->arch.vm_event->emulate_flags = 0;

-}
+if ( v->arch.vm_event.emulate_flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA 
)

Long line.


Long but under 80 columns, isn't that the rule? :-)




--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -259,21 +259,6 @@ struct pv_domain
  struct cpuidmasks *cpuidmasks;
  };
  
-enum monitor_write_status

-{
-MWS_NOWRITE = 0,
-MWS_MSR,
-MWS_CR0,
-MWS_CR3,
-MWS_CR4,
-};
-
-struct monitor_write_data {
-enum monitor_write_status status;
-uint32_t msr;
-uint64_t value;
-};
-
  struct arch_domain
  {
  struct page_info *perdomain_l3_pg;
@@ -496,6 +481,31 @@ typedef enum __packed {
  SMAP_CHECK_DISABLED,/* disable the check */
  } smap_check_policy_t;
  
+enum monitor_write_status

+{
+MWS_NOWRITE = 0,
+MWS_MSR,
+MWS_CR0,
+MWS_CR3,
+MWS_CR4,
+};
+
+struct monitor_write_data {
+enum monitor_write_status status;
+uint32_t msr;
+uint64_t value;
+};

Instead of moving these around now, may I suggest you put them
into their final place right away in the previous patch?

Jan




Sounds good, will do.

Corneliu.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 4/8] x86/vm-event/monitor: turn monitor_write_data.do_write into enum

2016-07-04 Thread Jan Beulich
>>> On 04.07.16 at 14:47,  wrote:
> On 7/4/2016 3:37 PM, Jan Beulich wrote:
> On 30.06.16 at 20:44,  wrote:
>>> After trapping a control-register write vm-event and -until- deciding if 
>>> that
>>> write is to be permitted or not (VM_EVENT_FLAG_DENY) and doing the actual
>>> write,
>>> there cannot and should not be another trapped control-register write event.
>> Is that true even for the case where full register state gets updated
>> for a vCPU?
> 
> AFAIK, the full register state cannot be updated _at once_, that is: 
> after each trapped register update monitor_write_data must _always_ be 
> handled _before reentering the vCPU_.

I'm thinking about operations like VCPUOP_initialise here. Of course
operations on current can't possibly update more than one register
at a time.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup

2016-07-04 Thread Jan Beulich
>>> On 04.07.16 at 15:03,  wrote:
> On 7/4/2016 3:47 PM, Jan Beulich wrote:
> On 30.06.16 at 20:45,  wrote:
>>> The arch_vm_event structure is dynamically allocated and freed @
>>> vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack 
>>> user
>>> disables domain monitoring (xc_monitor_disable), which in turn effectively
>>> discards any information that was in arch_vm_event.write_data.
>> Isn't that rather a toolstack user bug, not warranting a relatively
>> extensive (even if mostly mechanical) hypervisor change like this
>> one? Sane monitor behavior, after all, is required anyway for the
>> monitored guest to survive.
> 
> Sorry but could you please rephrase this, I don't quite understand what 
> you're saying.
> The write_data field in arch_vm_event should _not ever_ be invalidated 
> as a direct result of a toolstack user's action.

The monitoring app can cause all kinds of problems to the guest it
monitors. Why would this specific one need taking care of in the
hypervisor, instead of demanding that the app not disable monitoring
at the wrong time?

>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -473,24 +473,24 @@ void hvm_do_resume(struct vcpu *v)
>>>   if ( !handle_hvm_io_completion(v) )
>>>   return;
>>>   
>>> -if ( unlikely(v->arch.vm_event) )
>>> +if ( unlikely(v->arch.vm_event.emulate_flags) )
>>>   {
>>> -if ( v->arch.vm_event->emulate_flags )
>>> -{
>>> -enum emul_kind kind = EMUL_KIND_NORMAL;
>>> +enum emul_kind kind;
>>>   
>>> -if ( v->arch.vm_event->emulate_flags &
>>> - VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>> -kind = EMUL_KIND_SET_CONTEXT;
>>> -else if ( v->arch.vm_event->emulate_flags &
>>> -  VM_EVENT_FLAG_EMULATE_NOWRITE )
>>> -kind = EMUL_KIND_NOWRITE;
>>> +ASSERT(v->arch.vm_event.emul_read_data);
>>>   
>>> -hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
>>> -   HVM_DELIVER_NO_ERROR_CODE);
>>> +kind = EMUL_KIND_NORMAL;
>> Please keep this being the initializer of the variable.
> 
> I put it there because of the ASSERT (to do that before anything else), 
> but I will undo if you prefer.

Since the initializer is (very obviously) independent of the
condition the ASSERT() checks, I indeed would prefer it to remain
the way it is before this change.
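
I.e. (illustrative only) the preferred shape would be:

    enum emul_kind kind = EMUL_KIND_NORMAL;

    /* The assertion does not depend on the initializer, so it can simply
     * follow the declaration. */
    ASSERT(v->arch.vm_event.emul_read_data);

    if ( v->arch.vm_event.emulate_flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
        kind = EMUL_KIND_SET_CONTEXT;
    else if ( v->arch.vm_event.emulate_flags & VM_EVENT_FLAG_EMULATE_NOWRITE )
        kind = EMUL_KIND_NOWRITE;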

>>> -v->arch.vm_event->emulate_flags = 0;
>>> -}
>>> +if ( v->arch.vm_event.emulate_flags & 
>>> VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>> Long line.
> 
> Long but under 80 columns, isn't that the rule? :-)

I've counted 81 here.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 4/8] x86/vm-event/monitor: turn monitor_write_data.do_write into enum

2016-07-04 Thread Corneliu ZUZU

On 7/4/2016 4:07 PM, Jan Beulich wrote:

On 04.07.16 at 14:47,  wrote:

On 7/4/2016 3:37 PM, Jan Beulich wrote:

On 30.06.16 at 20:44,  wrote:

After trapping a control-register write vm-event and -until- deciding if that
write is to be permitted or not (VM_EVENT_FLAG_DENY) and doing the actual
write,
there cannot and should not be another trapped control-register write event.

Is that true even for the case where full register state gets updated
for a vCPU?

AFAIK, the full register state cannot be updated _at once_, that is:
after each trapped register update monitor_write_data must _always_ be
handled _before reentering the vCPU_.

I'm thinking about operations like VCPUOP_initialise here. Of course
operations on current can't possibly update more than one register
at a time.

Jan


Yes but those register update operations happen outside the vm-event 
subsystem, i.e. in those cases the registers get updated directly, not 
by setting bits in monitor_write_data.
Only hypervisor-trapped register updates (e.g. hvm_set_cr0 w/ may_defer 
parameter = 1) which are deferred because of hvm_event_crX returning 
true use monitor_write_data.
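
For context, a condensed sketch of that deferral path, modelled on
hvm_set_cr0(); "old_value" stands for the previous CR0 value, and the two
write_data assignments follow the new enum layout (an inference, since the
setter side is not quoted in this thread):

    /* Sketch: a monitored CR0 write is deferred into monitor_write_data
     * when the subscriber may deny it; hvm_do_resume() performs it later,
     * once the vm-event reply has been handled. */
    if ( may_defer &&
         unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
                  monitor_ctrlreg_bitmask(VM_EVENT_X86_CR0)) )
    {
        ASSERT(v->arch.vm_event);

        if ( hvm_event_crX(CR0, value, old_value) )
        {
            /* Deferred: recorded only, not written yet. */
            v->arch.vm_event->write_data.status = MWS_CR0;
            v->arch.vm_event->write_data.value = value;
            return X86EMUL_OKAY;
        }
    }
    /* ... otherwise the write proceeds immediately ... */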


Corneliu.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 3/8] x86/vm-event/monitor: relocate code-motion more appropriately

2016-07-04 Thread Razvan Cojocaru
On 07/04/16 13:22, Jan Beulich wrote:
 On 30.06.16 at 20:43,  wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -475,8 +475,6 @@ void hvm_do_resume(struct vcpu *v)
>>  
>>  if ( unlikely(v->arch.vm_event) )
>>  {
>> -struct monitor_write_data *w = &v->arch.vm_event->write_data;
>> -
>>  if ( v->arch.vm_event->emulate_flags )
>>  {
>>  enum emul_kind kind = EMUL_KIND_NORMAL;
>> @@ -493,32 +491,10 @@ void hvm_do_resume(struct vcpu *v)
>>  
>>  v->arch.vm_event->emulate_flags = 0;
>>  }
>> -
>> -if ( w->do_write.msr )
>> -{
>> -hvm_msr_write_intercept(w->msr, w->value, 0);
>> -w->do_write.msr = 0;
>> -}
>> -
>> -if ( w->do_write.cr0 )
>> -{
>> -hvm_set_cr0(w->cr0, 0);
>> -w->do_write.cr0 = 0;
>> -}
>> -
>> -if ( w->do_write.cr4 )
>> -{
>> -hvm_set_cr4(w->cr4, 0);
>> -w->do_write.cr4 = 0;
>> -}
>> -
>> -if ( w->do_write.cr3 )
>> -{
>> -hvm_set_cr3(w->cr3, 0);
>> -w->do_write.cr3 = 0;
>> -}
>>  }
>>  
>> +arch_monitor_write_data(v);
> 
> Why does this get moved outside the if(), with the same condition
> getting added inside the function (inverted for bailing early)?
> 
>> @@ -119,6 +156,55 @@ bool_t monitored_msr(const struct domain *d, u32 msr)
>>  return test_bit(msr, bitmap);
>>  }
>>  
>> +static void write_ctrlreg_adjust_traps(struct domain *d)
>> +{
>> +struct vcpu *v;
>> +struct arch_vmx_struct *avmx;
>> +unsigned int cr3_bitmask;
>> +bool_t cr3_vmevent, cr3_ldexit;
>> +
>> +/* Adjust CR3 load-exiting. */
>> +
>> +/* vmx only */
>> +ASSERT(cpu_has_vmx);
>> +
>> +/* non-hap domains trap CR3 writes unconditionally */
>> +if ( !paging_mode_hap(d) )
>> +{
>> +for_each_vcpu ( d, v )
>> +ASSERT(v->arch.hvm_vmx.exec_control & 
>> CPU_BASED_CR3_LOAD_EXITING);
>> +return;
>> +}
>> +
>> +cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
>> +cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);
>> +
>> +for_each_vcpu ( d, v )
>> +{
>> +avmx = &v->arch.hvm_vmx;
>> +cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);
>> +
>> +if ( cr3_vmevent == cr3_ldexit )
>> +continue;
>> +
>> +/*
>> + * If CR0.PE=0, CR3 load exiting must remain enabled.
>> + * See vmx_update_guest_cr code motion for cr = 0.
>> + */
>> +if ( cr3_ldexit && !hvm_paging_enabled(v) && 
>> !vmx_unrestricted_guest(v) 
>> )
>> +continue;
>> +
>> +if ( cr3_vmevent )
>> +avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
>> +else
>> +avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;
>> +
>> +vmx_vmcs_enter(v);
>> +vmx_update_cpu_exec_control(v);
>> +vmx_vmcs_exit(v);
>> +}
>> +}
> 
> While Razvan gave his ack already, I wonder whether it's really a
> good idea to put deeply VMX-specific code outside of a VMX-specific
> file.

Didn't I add "for monitor / vm_event parts Acked-by: ..."? If I didn't,
I meant to. Obviously VMX code maintainers outrank me on these issues.


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.

2016-07-04 Thread Julien Grall

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:

The Xen altp2m subsystem is currently supported only on x86-64 based
architectures. By utilizing ARM's virtualization extensions, we intend
to implement altp2m support for ARM architectures and thus further
extend current Virtual Machine Introspection (VMI) capabilities on ARM.

With this commit, Xen is now able to activate altp2m support on ARM by
means of the command-line argument 'altp2m' (bool).

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
  xen/arch/arm/hvm.c| 22 
  xen/include/asm-arm/hvm/hvm.h | 47 +++
  2 files changed, 69 insertions(+)
  create mode 100644 xen/include/asm-arm/hvm/hvm.h

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index d999bde..3615036 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,28 @@

  #include 

+#include 
+
+/* Xen command-line option enabling altp2m */
+static bool_t __initdata opt_altp2m_enabled = 0;
+boolean_param("altp2m", opt_altp2m_enabled);
+
+struct hvm_function_table hvm_funcs __read_mostly = {
+.name = "ARM_HVM",
+};


I don't see any reason to introduce hvm_function_table on ARM. This
structure is used to know whether the hardware supports altp2m.
However, based on your implementation, this feature will not depend on
the hardware for ARM.



+
+/* Initcall enabling hvm functionality. */
+static int __init hvm_enable(void)
+{
+if ( opt_altp2m_enabled )
+hvm_funcs.altp2m_supported = 1;
+else
+hvm_funcs.altp2m_supported = 0;
+
+return 0;
+}
+presmp_initcall(hvm_enable);
+
  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
  {
  long rc = 0;
diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
new file mode 100644
index 000..96c455c
--- /dev/null
+++ b/xen/include/asm-arm/hvm/hvm.h
@@ -0,0 +1,47 @@
+/*
+ * include/asm-arm/hvm/hvm.h
+ *
+ * Copyright (c) 2016, Sergej Proskurin 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see .
+ */
+
+#ifndef __ASM_ARM_HVM_HVM_H__
+#define __ASM_ARM_HVM_HVM_H__
+
+struct hvm_function_table {
+char *name;
+
+/* Necessary hardware support for alternate p2m's. */
+bool_t altp2m_supported;
+};
+
+extern struct hvm_function_table hvm_funcs;
+
+/* Returns true if hardware supports alternate p2m's */


This comment is not true. The feature does not depend on the hardware 
for ARM.



+static inline bool_t hvm_altp2m_supported(void)
+{
+return hvm_funcs.altp2m_supported;


You could directly use opt_altp2m_enabled here.


+}
+
+#endif /* __ASM_ARM_HVM_HVM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */



Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup

2016-07-04 Thread Corneliu ZUZU

On 7/4/2016 4:11 PM, Jan Beulich wrote:

On 04.07.16 at 15:03,  wrote:

On 7/4/2016 3:47 PM, Jan Beulich wrote:

On 30.06.16 at 20:45,  wrote:

The arch_vm_event structure is dynamically allocated and freed @
vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack user
disables domain monitoring (xc_monitor_disable), which in turn effectively
discards any information that was in arch_vm_event.write_data.

Isn't that rather a toolstack user bug, not warranting a relatively
extensive (even if mostly mechanical) hypervisor change like this
one? Sane monitor behavior, after all, is required anyway for the
monitored guest to survive.

Sorry but could you please rephrase this, I don't quite understand what
you're saying.
The write_data field in arch_vm_event should _not ever_ be invalidated
as a direct result of a toolstack user's action.

The monitoring app can cause all kinds of problems to the guest it
monitors. Why would this specific one need taking care of in the
hypervisor, instead of demanding that the app not disable monitoring
at the wrong time?


Because it wouldn't be the wrong time to disable monitoring.
This is not a case of wrong toolstack usage, but rather a case of 
xc_monitor_disable doing the wrong thing along the way.





--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -473,24 +473,24 @@ void hvm_do_resume(struct vcpu *v)
   if ( !handle_hvm_io_completion(v) )
   return;
   
-if ( unlikely(v->arch.vm_event) )

+if ( unlikely(v->arch.vm_event.emulate_flags) )
   {
-if ( v->arch.vm_event->emulate_flags )
-{
-enum emul_kind kind = EMUL_KIND_NORMAL;
+enum emul_kind kind;
   
-if ( v->arch.vm_event->emulate_flags &

- VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-kind = EMUL_KIND_SET_CONTEXT;
-else if ( v->arch.vm_event->emulate_flags &
-  VM_EVENT_FLAG_EMULATE_NOWRITE )
-kind = EMUL_KIND_NOWRITE;
+ASSERT(v->arch.vm_event.emul_read_data);
   
-hvm_mem_access_emulate_one(kind, TRAP_invalid_op,

-   HVM_DELIVER_NO_ERROR_CODE);
+kind = EMUL_KIND_NORMAL;

Please keep this being the initializer of the variable.

I put it there because of the ASSERT (to do that before anything else),
but I will undo if you prefer.

Since the initializer is (very obviously) independent of the
condition the ASSERT() checks, I indeed would prefer it to remain
the way it is before this change.


-v->arch.vm_event->emulate_flags = 0;
-}
+if ( v->arch.vm_event.emulate_flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA 
)

Long line.

Long but under 80 columns, isn't that the rule? :-)

I've counted 81 here.

Jan


You may have counted the beginning '+' as well. Is the rule "<= 80 
columns in the source file" (in which case you're wrong) or is it "<= 80 
columns in the resulting diff" (in which case I'm wrong)?


Corneliu.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.

2016-07-04 Thread Julien Grall

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:

This commit moves the altp2m-related code from x86 to ARM.


Looking at the code in the follow-up patches, I have the impression that 
the code is very similar (if not exactly) to the x86 code. If so, we 
should move the HVMOP for altp2m in common code rather than duplicating 
the code.



Functions
that are not yet supported notify the caller or print a BUG message
stating their absence.

Also, the struct arch_domain is extended with the altp2m_active
attribute, represeting the current altp2m activity configuration of the


s/represeting/representing/


domain.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
  xen/arch/arm/hvm.c   | 82 
  xen/include/asm-arm/altp2m.h | 22 ++--
  xen/include/asm-arm/domain.h |  3 ++
  3 files changed, 105 insertions(+), 2 deletions(-)


[..]


diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a..16ae9d6 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
   * Alternate p2m
   *
   * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin .
   *
   * This program is free software; you can redistribute it and/or modify it
   * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
  /* Alternate p2m on/off per domain */
  static inline bool_t altp2m_active(const struct domain *d)
  {
-/* Not implemented on ARM. */
-return 0;
+return d->arch.altp2m_active;
  }

  /* Alternate p2m VCPU */
@@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
  return 0;
  }

+static inline void altp2m_vcpu_initialise(struct vcpu *v)
+{
+/* Not implemented on ARM, should not be reached. */
+BUG();
+}
+
+static inline void altp2m_vcpu_destroy(struct vcpu *v)
+{
+/* Not implemented on ARM, should not be reached. */
+BUG();
+}
+
+static inline void altp2m_vcpu_reset(struct vcpu *v)
+{
+/* Not implemented on ARM, should not be reached. */
+BUG();
+}


These 3 helpers are not used anywhere in the code so far, and you
replace them with another implementation in patch #5.

So I would prefer that you introduce the helpers and their
implementation only once they are actually used.



  #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 979f7de..2039f16 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -127,6 +127,9 @@ struct arch_domain
  paddr_t efi_acpi_gpa;
  paddr_t efi_acpi_len;
  #endif
+
+/* altp2m: allow multiple copies of host p2m */
+bool_t altp2m_active;
  }  __cacheline_aligned;

  struct arch_vcpu



Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.

2016-07-04 Thread Razvan Cojocaru
On 07/04/16 14:45, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin 
> ---
> Cc: Razvan Cojocaru 
> Cc: Tamas K Lengyel 
> Cc: Ian Jackson 
> Cc: Wei Liu 
> ---
>  tools/tests/xen-access/xen-access.c | 11 +--
>  1 file changed, 9 insertions(+), 2 deletions(-)

Fair enough, looks like a trivial change.

Acked-by: Razvan Cojocaru 


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.

2016-07-04 Thread Sergej Proskurin
Hello Julien,

On 07/04/2016 03:25 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The Xen altp2m subsystem is currently supported only on x86-64 based
>> architectures. By utilizing ARM's virtualization extensions, we intend
>> to implement altp2m support for ARM architectures and thus further
>> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>>
>> With this commit, Xen is now able to activate altp2m support on ARM by
>> means of the command-line argument 'altp2m' (bool).
>>
>> Signed-off-by: Sergej Proskurin 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> ---
>>   xen/arch/arm/hvm.c| 22 
>>   xen/include/asm-arm/hvm/hvm.h | 47
>> +++
>>   2 files changed, 69 insertions(+)
>>   create mode 100644 xen/include/asm-arm/hvm/hvm.h
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index d999bde..3615036 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -32,6 +32,28 @@
>>
>>   #include 
>>
>> +#include 
>> +
>> +/* Xen command-line option enabling altp2m */
>> +static bool_t __initdata opt_altp2m_enabled = 0;
>> +boolean_param("altp2m", opt_altp2m_enabled);
>> +
>> +struct hvm_function_table hvm_funcs __read_mostly = {
>> +.name = "ARM_HVM",
>> +};
> 
> I don't see any reason to introduce hvm_function_table on ARM. This
> structure is used to know whether the hardware will support altp2m.
> However, based on your implementation, this feature will not depend on
> the hardware for ARM.
> 

This is true: hvm_function_table is not of crucial importance. During
the implementation, we decided to pull the arch-independent parts out of
the x86 and ARM implementations (this still needs to be done) and hence
to reuse as much code as possible. However, this struct can be left out.
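
A minimal sketch of what remains once the struct goes away; note that
opt_altp2m_enabled then has to lose its __initdata annotation (it is
consulted at run time), and the helper either moves out of line or the
variable stops being static:

    /* xen/arch/arm/hvm.c -- sketch only */

    /* No longer __initdata: read at run time, not only during boot. */
    static bool_t __read_mostly opt_altp2m_enabled = 0;
    boolean_param("altp2m", opt_altp2m_enabled);

    bool_t hvm_altp2m_supported(void)
    {
        /* On ARM this is purely a command-line switch, not a hardware
         * feature. */
        return opt_altp2m_enabled;
    }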

>> +
>> +/* Initcall enabling hvm functionality. */
>> +static int __init hvm_enable(void)
>> +{
>> +if ( opt_altp2m_enabled )
>> +hvm_funcs.altp2m_supported = 1;
>> +else
>> +hvm_funcs.altp2m_supported = 0;
>> +
>> +return 0;
>> +}
>> +presmp_initcall(hvm_enable);
>> +
>>   long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>   {
>>   long rc = 0;
>> diff --git a/xen/include/asm-arm/hvm/hvm.h
>> b/xen/include/asm-arm/hvm/hvm.h
>> new file mode 100644
>> index 000..96c455c
>> --- /dev/null
>> +++ b/xen/include/asm-arm/hvm/hvm.h
>> @@ -0,0 +1,47 @@
>> +/*
>> + * include/asm-arm/hvm/hvm.h
>> + *
>> + * Copyright (c) 2016, Sergej Proskurin 
>> + *
>> + * This program is free software; you can redistribute it and/or
>> modify it
>> + * under the terms and conditions of the GNU General Public License,
>> version 2,
>> + * as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but
>> WITHOUT ANY
>> + * WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> FITNESS
>> + * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> more
>> + * details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> along with
>> + * this program; If not, see .
>> + */
>> +
>> +#ifndef __ASM_ARM_HVM_HVM_H__
>> +#define __ASM_ARM_HVM_HVM_H__
>> +
>> +struct hvm_function_table {
>> +char *name;
>> +
>> +/* Necessary hardware support for alternate p2m's. */
>> +bool_t altp2m_supported;
>> +};
>> +
>> +extern struct hvm_function_table hvm_funcs;
>> +
>> +/* Returns true if hardware supports alternate p2m's */
> 
> This comment is not true. The feature does not depend on the hardware
> for ARM.
> 

True. I will change that.

>> +static inline bool_t hvm_altp2m_supported(void)
>> +{
>> +return hvm_funcs.altp2m_supported;
> 
> You could directly use opt_altp2m_enabled here.
> 

Ok, thank you.

>> +}
>> +
>> +#endif /* __ASM_ARM_HVM_HVM_H__ */
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>>
> 
> Regards,
> 

Thank you.

Cheers, Sergej

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.

2016-07-04 Thread Sergej Proskurin
Hello Julien,

On 07/04/2016 03:36 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> This commit moves the altp2m-related code from x86 to ARM.
> 
> Looking at the code in the follow-up patches, I have the impression that
> the code is very similar (if not exactly) to the x86 code. If so, we
> should move the HVMOP for altp2m in common code rather than duplicating
> the code.
> 

You are correct: a big part of the code is very similar to the x86
implementation. We have already started thinking about which parts need
to be pulled out into a common place. Thank you.

>> Functions
>> that are not yet supported notify the caller or print a BUG message
>> stating their absence.
>>
>> Also, the struct arch_domain is extended with the altp2m_active
>> attribute, represeting the current altp2m activity configuration of the
> 
> s/represeting/representing/
> 

Ok.

>> domain.
>>
>> Signed-off-by: Sergej Proskurin 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> ---
>>   xen/arch/arm/hvm.c   | 82
>> 
>>   xen/include/asm-arm/altp2m.h | 22 ++--
>>   xen/include/asm-arm/domain.h |  3 ++
>>   3 files changed, 105 insertions(+), 2 deletions(-)
> 
> [..]
> 
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index a87747a..16ae9d6 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -2,6 +2,7 @@
>>* Alternate p2m
>>*
>>* Copyright (c) 2014, Intel Corporation.
>> + * Copyright (c) 2016, Sergej Proskurin .
>>*
>>* This program is free software; you can redistribute it and/or
>> modify it
>>* under the terms and conditions of the GNU General Public License,
>> @@ -24,8 +25,7 @@
>>   /* Alternate p2m on/off per domain */
>>   static inline bool_t altp2m_active(const struct domain *d)
>>   {
>> -/* Not implemented on ARM. */
>> -return 0;
>> +return d->arch.altp2m_active;
>>   }
>>
>>   /* Alternate p2m VCPU */
>> @@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct
>> vcpu *v)
>>   return 0;
>>   }
>>
>> +static inline void altp2m_vcpu_initialise(struct vcpu *v)
>> +{
>> +/* Not implemented on ARM, should not be reached. */
>> +BUG();
>> +}
>> +
>> +static inline void altp2m_vcpu_destroy(struct vcpu *v)
>> +{
>> +/* Not implemented on ARM, should not be reached. */
>> +BUG();
>> +}
>> +
>> +static inline void altp2m_vcpu_reset(struct vcpu *v)
>> +{
>> +/* Not implemented on ARM, should not be reached. */
>> +BUG();
>> +}
> 
> Those 3 helpers are not used by anyone in the code so far and you
> replace them by another implementation in patch #5.
> 
> So I would prefer if you introduce the helpers and implementation only
> when they will be used.
> 

Will do. Thank you.

>>   #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 979f7de..2039f16 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -127,6 +127,9 @@ struct arch_domain
>>   paddr_t efi_acpi_gpa;
>>   paddr_t efi_acpi_len;
>>   #endif
>> +
>> +/* altp2m: allow multiple copies of host p2m */
>> +bool_t altp2m_active;
>>   }  __cacheline_aligned;
>>
>>   struct arch_vcpu
>>
> 
> Regards,
> 

Thank you.

Best regards,
Sergej

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup

2016-07-04 Thread Razvan Cojocaru
On 07/04/16 16:11, Jan Beulich wrote:
 On 04.07.16 at 15:03,  wrote:
>> On 7/4/2016 3:47 PM, Jan Beulich wrote:
>> On 30.06.16 at 20:45,  wrote:
 The arch_vm_event structure is dynamically allocated and freed @
 vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack 
 user
 disables domain monitoring (xc_monitor_disable), which in turn effectively
 discards any information that was in arch_vm_event.write_data.
>>> Isn't that rather a toolstack user bug, not warranting a relatively
>>> extensive (even if mostly mechanical) hypervisor change like this
>>> one? Sane monitor behavior, after all, is required anyway for the
>>> monitored guest to survive.
>>
>> Sorry but could you please rephrase this, I don't quite understand what 
>> you're saying.
>> The write_data field in arch_vm_event should _not ever_ be invalidated 
>> as a direct result of a toolstack user's action.
> 
> The monitoring app can cause all kinds of problems to the guest it
> monitors. Why would this specific one need taking care of in the
> hypervisor, instead of demanding that the app not disable monitoring
> at the wrong time?

I'm not sure there's a right time here. The problem is that, if I
understand this correctly, a race is possible between the moment the
userspace application responds to the vm_event _and_ calls
xc_monitor_disable(), and the time hvm_do_resume() gets called.

If xc_monitor_disable() happened before hvm_do_resume() springs into
action, we lose a register write. There's no guaranteed way for this not
to happen as far as I can see, although it's true that the race should
pretty much never happen in practice - at least we've never come across
such a case so far.
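
For clarity, the window looks roughly like this (names as used in the
patches under discussion):

    /*
     * Sketch of the ordering problem:
     *
     *   target vCPU                         monitor app / toolstack
     *   -----------                         -----------------------
     *   CR write traps; event sent; the
     *   write is deferred into
     *   v->arch.vm_event->write_data
     *                                       handles the event, replies,
     *                                       then calls xc_monitor_disable()
     *                                         -> vm_event_cleanup_domain()
     *                                            frees arch_vm_event    (A)
     *   scheduler runs the vCPU again;
     *   hvm_do_resume() would now apply
     *   the deferred write, but after (A)
     *   the data is gone and the write is
     *   silently lost.
     */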


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup

2016-07-04 Thread Corneliu ZUZU

On 7/4/2016 4:50 PM, Razvan Cojocaru wrote:

On 07/04/16 16:11, Jan Beulich wrote:

On 04.07.16 at 15:03,  wrote:

On 7/4/2016 3:47 PM, Jan Beulich wrote:

On 30.06.16 at 20:45,  wrote:

The arch_vm_event structure is dynamically allocated and freed @
vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack user
disables domain monitoring (xc_monitor_disable), which in turn effectively
discards any information that was in arch_vm_event.write_data.

Isn't that rather a toolstack user bug, not warranting a relatively
extensive (even if mostly mechanical) hypervisor change like this
one? Sane monitor behavior, after all, is required anyway for the
monitored guest to survive.

Sorry but could you please rephrase this, I don't quite understand what
you're saying.
The write_data field in arch_vm_event should _not ever_ be invalidated
as a direct result of a toolstack user's action.

The monitoring app can cause all kinds of problems to the guest it
monitors. Why would this specific one need taking care of in the
hypervisor, instead of demanding that the app not disable monitoring
at the wrong time?

I'm not sure there's a right time here. The problem is that, if I
understand this correctly, a race is possible between the moment the
userspace application responds to the vm_event _and_ calls
xc_monitor_disable(), and the time hvm_do_resume() gets called.

If xc_monitor_disable() happened before hvm_do_resume() springs into
action, we lose a register write. There's no guaranteed way for this not
to happen as far as I can see, although it's true that the race should
pretty much never happen in practice - at least we've never come across
such a case so far.


Thanks,
Razvan



Well put, thanks. Note that xc_monitor_disable() may indeed happen
before hvm_do_resume(), because the latter only runs _when the scheduler
reschedules the target vCPU_, which may well not happen between the
moment the toolstack user _handles the vm-event_ and the moment it
_calls xc_monitor_disable()_, but only after xc_monitor_disable() has
already been called.


Corneliu.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 3/8] x86/vm-event/monitor: relocate code-motion more appropriately

2016-07-04 Thread Jan Beulich
>>> On 04.07.16 at 15:22,  wrote:
> On 07/04/16 13:22, Jan Beulich wrote:
> On 30.06.16 at 20:43,  wrote:
>>> @@ -119,6 +156,55 @@ bool_t monitored_msr(const struct domain *d, u32 msr)
>>>  return test_bit(msr, bitmap);
>>>  }
>>>  
>>> +static void write_ctrlreg_adjust_traps(struct domain *d)
>>> +{
>>> +struct vcpu *v;
>>> +struct arch_vmx_struct *avmx;
>>> +unsigned int cr3_bitmask;
>>> +bool_t cr3_vmevent, cr3_ldexit;
>>> +
>>> +/* Adjust CR3 load-exiting. */
>>> +
>>> +/* vmx only */
>>> +ASSERT(cpu_has_vmx);
>>> +
>>> +/* non-hap domains trap CR3 writes unconditionally */
>>> +if ( !paging_mode_hap(d) )
>>> +{
>>> +for_each_vcpu ( d, v )
>>> +ASSERT(v->arch.hvm_vmx.exec_control & 
>>> CPU_BASED_CR3_LOAD_EXITING);
>>> +return;
>>> +}
>>> +
>>> +cr3_bitmask = monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3);
>>> +cr3_vmevent = !!(d->arch.monitor.write_ctrlreg_enabled & cr3_bitmask);
>>> +
>>> +for_each_vcpu ( d, v )
>>> +{
>>> +avmx = &v->arch.hvm_vmx;
>>> +cr3_ldexit = !!(avmx->exec_control & CPU_BASED_CR3_LOAD_EXITING);
>>> +
>>> +if ( cr3_vmevent == cr3_ldexit )
>>> +continue;
>>> +
>>> +/*
>>> + * If CR0.PE=0, CR3 load exiting must remain enabled.
>>> + * See vmx_update_guest_cr code motion for cr = 0.
>>> + */
>>> +if ( cr3_ldexit && !hvm_paging_enabled(v) && 
>>> !vmx_unrestricted_guest(v) 
>>> )
>>> +continue;
>>> +
>>> +if ( cr3_vmevent )
>>> +avmx->exec_control |= CPU_BASED_CR3_LOAD_EXITING;
>>> +else
>>> +avmx->exec_control &= ~CPU_BASED_CR3_LOAD_EXITING;
>>> +
>>> +vmx_vmcs_enter(v);
>>> +vmx_update_cpu_exec_control(v);
>>> +vmx_vmcs_exit(v);
>>> +}
>>> +}
>> 
>> While Razvan gave his ack already, I wonder whether it's really a
>> good idea to put deeply VMX-specific code outside of a VMX-specific
>> file.
> 
> Didn't I add "for monitor / vm_event parts Acked-by: ..."? If I didn't,
> I meant to.

Well - this is a monitor file (monitor.c).

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 4/8] x86/vm-event/monitor: turn monitor_write_data.do_write into enum

2016-07-04 Thread Jan Beulich
>>> On 04.07.16 at 15:21,  wrote:
> On 7/4/2016 4:07 PM, Jan Beulich wrote:
> On 04.07.16 at 14:47,  wrote:
>>> On 7/4/2016 3:37 PM, Jan Beulich wrote:
>>> On 30.06.16 at 20:44,  wrote:
> After trapping a control-register write vm-event and -until- deciding if 
> that
> write is to be permitted or not (VM_EVENT_FLAG_DENY) and doing the actual
> write,
> there cannot and should not be another trapped control-register write 
> event.
 Is that true even for the case where full register state gets updated
 for a vCPU?
>>> AFAIK, the full register state cannot be updated _at once_, that is:
>>> after each trapped register update monitor_write_data must _always_ be
>>> handled _before reentering the vCPU_.
>> I'm thinking about operations like VCPUOP_initialise here. Of course
>> operations on current can't possibly update more than one register
>> at a time.
> 
> Yes but those register update operations happen outside the vm-event 
> subsystem, i.e. in those cases the registers get updated directly, not 
> by setting bits in monitor_write_data.
> Only hypervisor-trapped register updates (e.g. hvm_set_cr0 w/ may_defer 
> parameter = 1) which are deferred because of hvm_event_crX returning 
> true use monitor_write_data.

That's why I had added "Is that updating-all-context case of no
interest to a monitoring application, now and forever?" After all I
gave VCPUOP_initialise as an example because the guest itself
can invoke it, and so I had assumed this to be of interest to a
monitoring app.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

