Re: [Xen-devel] [PATCH v2 16/17] libxc/xc_dom_arm: Copy ACPI tables to guest space

2016-06-26 Thread Shannon Zhao


On 2016/6/24 2:46, Julien Grall wrote:
>> +
>>  static int alloc_magic_pages(struct xc_dom_image *dom)
>>  {
> >  int rc, i;
>> @@ -100,6 +141,16 @@ static int alloc_magic_pages(struct xc_dom_image
>> *dom)
>>  xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_EVTCHN,
>>  dom->xenstore_evtchn);
>>
>> +if ( dom->acpitable_blob && dom->acpitable_size > 0 )
>> +{
>> +rc = xc_dom_copy_acpi(dom);
>> +if ( rc != 0 )
>> +{
>> +DOMPRINTF("Unable to copy ACPI tables");
>> +return rc;
>> +}
>> +}
> 
> alloc_magic_pages looks like the wrong place for this function. Any reason
> not to have generic ACPI blob loading in xc_dom_core.c, as we do for the
> devicetree?
It looks like xc_dom_build_image() is used for allocating pages in guest RAM,
while the ACPI blob is not put in guest RAM.

Thanks,
-- 
Shannon




Re: [Xen-devel] [PATCH v2 11/17] libxl/arm: Construct ACPI DSDT table

2016-06-26 Thread Shannon Zhao


On 2016/6/24 1:03, Julien Grall wrote:
> Hi Shannon,
> 
> On 23/06/16 04:16, Shannon Zhao wrote:
> 
> [...]
> 
>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>> index 264b6ef..5347480 100644
>> --- a/tools/libxl/Makefile
>> +++ b/tools/libxl/Makefile
>> @@ -77,7 +77,29 @@ endif
>>
>>   LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o libxl_psr.o
>>   LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
>> libxl_libfdt_compat.o
>> -LIBXL_OBJS-$(CONFIG_ARM) += libxl_arm_acpi.o
>> +LIBXL_OBJS-$(CONFIG_ARM) += libxl_arm_acpi.o libxl_dsdt_anycpu_arm.o
>> +
>> +vpath iasl $(PATH)
>> +libxl_mk_dsdt_arm: libxl_mk_dsdt_arm.c
>> +$(CC) $(CFLAGS) -o $@ libxl_mk_dsdt_arm.c
>> +
>> +libxl_dsdt_anycpu_arm.asl: libxl_empty_dsdt_arm.asl libxl_mk_dsdt_arm
>> +awk 'NR > 1 {print s} {s=$$0}' $< > $@
>> +./libxl_mk_dsdt_arm >> $@
>> +
>> +libxl_dsdt_anycpu_arm.c: %.c: iasl %.asl
>> +iasl -vs -p $* -tc $*.asl
>> +sed -e 's/AmlCode/$*/g' $*.hex >$@
>> +echo "int $*_len=sizeof($*);" >>$@
>> +rm -f $*.aml $*.hex
>> +
> 
> I don't like the idea of adding iasl as a dependency for all ARM platforms.
> For instance, ARMv7 platforms will not use ACPI, but we would still ask
> users to install iasl. So I think we should allow the user to opt in or
> out of ACPI.
> 
> Any opinions?
> 
I agree, but how do we exclude it for ARMv7? I notice there is only the
option CONFIG_ARM, which doesn't distinguish between ARM32 and ARM64.

>> +iasl:
>> +@echo
>> +@echo "ACPI ASL compiler (iasl) is needed"
>> +@echo "Download and install Intel ACPI CA from"
>> +@echo "http://acpica.org/downloads/";
>> +@echo
>> +@exit 1
> 
> It is really a pain to discover the dependency in the middle of a build.
> The check for the presence of iasl should be done by configure.
> 
>>
>>   libxl_arm_acpi.o: libxl_arm_acpi.c
>>   $(CC) -c $(CFLAGS) -I../../xen/include/ -o $@ libxl_arm_acpi.c
>> diff --git a/tools/libxl/libxl_arm_acpi.c b/tools/libxl/libxl_arm_acpi.c
>> index 353d774..45fc354 100644
>> --- a/tools/libxl/libxl_arm_acpi.c
>> +++ b/tools/libxl/libxl_arm_acpi.c
>> @@ -54,6 +54,9 @@ enum {
>>   NUMS,
>>   };
>>
>> +extern unsigned char libxl_dsdt_anycpu_arm[];
>> +extern int libxl_dsdt_anycpu_arm_len;
> 
> Not sure this is the right place to mention it, but I don't find the
> actual declaration.
> 
The definitions are in libxl_dsdt_anycpu_arm.c, which is generated by
iasl. You can see that in the Makefile rules above.
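
For reference, the generated file is roughly of this shape (a sketch only:
the AML byte values are placeholders, and whether iasl emits a const
qualifier depends on the iasl version):

/* Sketch of the generated tools/libxl/libxl_dsdt_anycpu_arm.c after the
 * sed/echo post-processing in the Makefile rule above. */
unsigned char libxl_dsdt_anycpu_arm[] = {
    0x44, 0x53, 0x44, 0x54,   /* "DSDT" table signature, then the AML data */
    /* ... */
};
int libxl_dsdt_anycpu_arm_len = sizeof(libxl_dsdt_anycpu_arm);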

> Both variables should be const and _hidden. Also, the *_len should be at
> least const int.
> 
>> +
>>   struct acpitable {
>>   void *table;
>>   size_t size;
>> @@ -256,6 +259,17 @@ static void make_acpi_fadt(libxl__gc *gc, struct
>> xc_dom_image *dom)
>>   dom->acpitable_size += ROUNDUP(acpitables[FADT].size, 3);
>>   }
>>
>> +static void make_acpi_dsdt(libxl__gc *gc, struct xc_dom_image *dom)
>> +{
>> +acpitables[DSDT].table = libxl__zalloc(gc,
>> libxl_dsdt_anycpu_arm_len);
>> +memcpy(acpitables[DSDT].table, libxl_dsdt_anycpu_arm,
>> +   libxl_dsdt_anycpu_arm_len);
>> +
>> +acpitables[DSDT].size = libxl_dsdt_anycpu_arm_len;
>> +/* Align to 64bit. */
>> +dom->acpitable_size += ROUNDUP(acpitables[DSDT].size, 3);
>> +}
>> +
>>   int libxl__prepare_acpi(libxl__gc *gc, libxl_domain_build_info *info,
>>   libxl__domain_build_state *state,
>>   struct xc_dom_image *dom)
>> @@ -284,6 +298,7 @@ int libxl__prepare_acpi(libxl__gc *gc,
>> libxl_domain_build_info *info,
>>   return rc;
>>
>>   make_acpi_fadt(gc, dom);
>> +make_acpi_dsdt(gc, dom);
>>
>>   return 0;
>>   }
>> diff --git a/tools/libxl/libxl_arm_acpi.h b/tools/libxl/libxl_arm_acpi.h
>> index 9b58de6..b0fd9ce 100644
>> --- a/tools/libxl/libxl_arm_acpi.h
>> +++ b/tools/libxl/libxl_arm_acpi.h
>> @@ -19,6 +19,8 @@
>>
>>   #include 
>>
>> +#define DOMU_MAX_VCPUS 128
>> +
> 
> I would rather define the maximum number of VCPUS in public/arch_arm.h
> to avoid defining the current number of vCPUs supported in multiple places.
> 
>>   int libxl__prepare_acpi(libxl__gc *gc, libxl_domain_build_info *info,
>>   libxl__domain_build_state *state,
>>   struct xc_dom_image *dom);
>> diff --git a/tools/libxl/libxl_empty_dsdt_arm.asl
>> b/tools/libxl/libxl_empty_dsdt_arm.asl
>> new file mode 100644
>> index 000..005fa6a
>> --- /dev/null
>> +++ b/tools/libxl/libxl_empty_dsdt_arm.asl
>> @@ -0,0 +1,22 @@
>> +/**
>>
>> + * DSDT for Xen ARM DomU
>> + *
>> + * Copyright (c) 2004, Intel Corporation.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but
>> WITHOUT
>> + * ANY WARRANTY; without even the i

[Xen-devel] [qemu-upstream-4.3-testing test] 96293: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96293 qemu-upstream-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96293/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 80927
 build-i386-libvirt5 libvirt-build fail REGR. vs. 80927

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 
96290

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail in 96290 like 80927
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 80927

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass

version targeted for testing:
 qemuu12e8fccf5b5460be7aecddc71d27eceaba6e1f15
baseline version:
 qemuu10c1b763c26feb645627a1639e722515f3e1e876

Last test of basis80927  2016-02-06 13:30:02 Z  141 days
Failing since 93977  2016-05-10 11:09:16 Z   47 days  150 attempts
Testing same since95534  2016-06-11 00:59:46 Z   16 days   30 attempts


People who touched revisions under test:
  Anthony PERARD 
  Gerd Hoffmann 
  Ian Jackson 
  Stefano Stabellini 
  Wei Liu 

jobs:
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-credit2  pass
 test-amd64-i386-freebsd10-i386   pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-amd64-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-xl-multivcpupass
 test-amd64-amd64-pairpass
 test-amd64-i386-pair pass
 test-amd64-amd64-pv  pass
 test-amd64-i386-pv   pass
 test-amd64-amd64-amd64-pvgrubpass
 test-amd64-amd64-i386-pvgrub pass
 test-amd64-amd64-pygrub  pass
 test-amd64-amd64-xl-qcow2pass
 test-amd64-i386-xl-raw   pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 pass
 test-amd64-amd64-libvirt-vhd blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 12e8fccf5b5460be7aecddc71d27eceaba6e1f15
Author: Ian Jackson 
Date:   Thu May 26 16:21:56 2016 +0100

main loop: Big hammer to fix logfile disk DoS in Xen setups

Each t

[Xen-devel] [ovmf test] 96282: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96282 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96282/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail REGR. 
vs. 94748
 test-amd64-amd64-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail 
REGR. vs. 94748

version targeted for testing:
 ovmf 9252d67ab3007601ddf983d1278cbe0e4a647f34
baseline version:
 ovmf dc99315b8732b6e3032d01319d3f534d440b43d0

Last test of basis94748  2016-05-24 22:43:25 Z   33 days
Failing since 94750  2016-05-25 03:43:08 Z   32 days   58 attempts
Testing same since96220  2016-06-24 14:13:59 Z2 days4 attempts


People who touched revisions under test:
  Ard Biesheuvel 
  Chao Zhang 
  Cinnamon Shia 
  Cohen, Eugene 
  Dandan Bi 
  Darbin Reyes 
  Eric Dong 
  Eugene Cohen 
  Evan Lloyd 
  Fu Siyuan 
  Fu, Siyuan 
  Gary Li 
  Gary Lin 
  Giri P Mudusuru 
  Hao Wu 
  Hegde Nagaraj P 
  hegdenag 
  Heyi Guo 
  Jan D?bro? 
  Jan Dabros 
  Jeff Fan 
  Jiaxin Wu 
  Jiewen Yao 
  Joe Zhou 
  Katie Dellaquila 
  Laszlo Ersek 
  Liming Gao 
  Lu, ShifeiX A 
  lushifex 
  Marcin Wojtas 
  Marvin H?user 
  Marvin Haeuser 
  Maurice Ma 
  Michael Zimmermann 
  Qiu Shumin 
  Ruiyu Ni 
  Ryan Harkin 
  Sami Mujawar 
  Satya Yarlagadda 
  Sriram Subramanian 
  Star Zeng 
  Tapan Shah 
  Thomas Palmer 
  Yonghong Zhu 
  Zhang Lubo 
  Zhang, Chao B 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2959 lines long.)



Re: [Xen-devel] Elaboration of "Question about sharing spinlock_t among VMs in Xen"

2016-06-26 Thread Dagaen Golomb
I wanted some elaboration on this question and answer posted recently.

On 06/13/2016 01:43 PM, Meng Xu wrote:
>> Hi,
>>
>> I have a quick question about using the Linux spin_lock() in Xen
>> environment to protect some host-wide shared (memory) resource among
>> VMs.
>>
>> *** The question is as follows ***
>> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
>> shared memory) on the same host. Suppose we have one process in
>> each VM. Each process uses the Linux function spin_lock(&lock) [1] to
>> grab & release the lock.
>> Will these two processes in the two VMs race on the shared lock?

> You can't do this: depending on which Linux version you use you will
> find that the kernel uses ticketlocks or queued locks, which keep track of
> who is holding the lock (obviously this information is internal to the VM).
> On top of this on Xen we use pvlocks which add another (internal)
> control layer.

I wanted to see if this can be done with the correct combination of
versions and parameters. We are using 4.1.0 for all domains, which
still has the CONFIG_PARAVIRT_SPINLOCKS option. I've recompiled the
guests with this option set to n, and have also added the boot
parameter xen_nopvspin to both domains and dom0 for good measure. A
basic ticketlock holds all the information inside the struct itself to
order the requests, and I believe this is the version I'm using.

Do you think this *should* work? I am still getting a deadlock issue
but I do not believe it's due to blocking vcpus, especially after the
above changes. Instead, I believe the spinlock struct is getting
corrupted. To be more precise, I only have two competing domains as a
test, both domUs. I print the raw spinlock struct out when I create it
and after a lock/unlock test. I get the following:

Init: [ 00 00 00 00 ]
Lock: [ 00 00 02 00 ]
Unlock: [ 02 00 02 00 ]
Lock: [ 02 00 04 00 ]
Unlock: [ 04 00 04 00 ]

It seems clear from the output and reading I've done that the first 2
bytes are the "currently servicing" number and the next two are the
"next number to draw" value. With only two guests, one should always
be getting serviced while another waits, so I would expect these two
halves to stay nearly the same (within one grab actually) and end with
both values equal when both are done with their locking/unlocking.
Instead, after what appears to be a deadlock, I destroy the VMs and print
the spinlock values, and I get this: [ 11 1e 14 1e ]. Note the 11 and 14:
should these be an odd number apart? The accesses I see keep them
even. Please correct me if I am wrong! It seems that practically every time
this issue occurs, the first pair of bytes is 3 off while the last
pair matches. Could this have something to do with the issue?
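
For reference, my reading of the layout is along these lines (a simplified
generic ticket-lock sketch, not the exact Linux structure; the type and
function names are illustrative):

#include <stdint.h>

/*
 * Simplified ticket lock sketch: "owner" is the ticket currently being
 * served (first two bytes in the dumps above), "next" is the next ticket
 * to hand out (last two bytes).  With increments of 2, as seen in the
 * dumps, owner and next should always stay an even distance apart, so an
 * odd difference would suggest the structure got corrupted.
 */
typedef struct {
    uint16_t owner;
    uint16_t next;
} ticketlock_t;

static void ticket_lock(ticketlock_t *l)
{
    uint16_t me = __atomic_fetch_add(&l->next, 2, __ATOMIC_RELAXED);

    while ( __atomic_load_n(&l->owner, __ATOMIC_ACQUIRE) != me )
        ;   /* spin until our ticket comes up */
}

static void ticket_unlock(ticketlock_t *l)
{
    __atomic_fetch_add(&l->owner, 2, __ATOMIC_RELEASE);
}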

>> My speculation is that there should be a race on the shared lock when
>> the spin_lock() functions in *two VMs* operate on the same lock.
>>
>> We did a quick experiment on this and found that one VM sometimes sees
>> a soft lockup on the lock. But we want to make sure our
>> understanding is correct.
>>
>> We are exploring whether we can use the spin_lock to protect shared
>> resources among VMs, instead of using the PV drivers. If the
>> spin_lock() in Linux can provide host-wide atomicity (which would
>> surprise me, though), that would be great. Otherwise, we probably have
>> to expose the spin_lock in Xen to Linux.

> I'd think this has to be via the hypervisor (or some other third party).
> Otherwise what happens if one of the guests dies while holding the lock?
> -boris

This is a valid point against locking in the guests, but by itself it won't
prevent a spinlock implementation from working! We may move in this
direction for several reasons, but I am interested in why the above is
not working when I've disabled the PV part that sleeps vcpus.

Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania



Re: [Xen-devel] [PATCH v2 11/17] libxl/arm: Construct ACPI DSDT table

2016-06-26 Thread Shannon Zhao


On 2016/6/24 0:42, Julien Grall wrote:
> 
> 
> On 23/06/16 15:50, Stefano Stabellini wrote:
>> On Thu, 23 Jun 2016, Shannon Zhao wrote:
>>> diff --git a/tools/libxl/libxl_empty_dsdt_arm.asl
>>> b/tools/libxl/libxl_empty_dsdt_arm.asl
>>> new file mode 100644
>>> index 000..005fa6a
>>> --- /dev/null
>>> +++ b/tools/libxl/libxl_empty_dsdt_arm.asl
>>> @@ -0,0 +1,22 @@
>>> +/**
>>>
>>> + * DSDT for Xen ARM DomU
>>> + *
>>> + * Copyright (c) 2004, Intel Corporation.
>>> + *
>>> + * This program is free software; you can redistribute it and/or
>>> modify it
>>> + * under the terms and conditions of the GNU General Public License,
>>> + * version 2, as published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope it will be useful, but
>>> WITHOUT
>>> + * ANY WARRANTY; without even the implied warranty of
>>> MERCHANTABILITY or
>>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
>>> License for
>>> + * more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License
>>> along with
>>> + * this program; If not, see .
>>> + */
>>> +
>>> +DefinitionBlock ("DSDT.aml", "DSDT", 3, "XenARM", "Xen DSDT", 1)
>>> +{
>>> +
>>> +}
>>
>> Why do we need C code to generate the "static" asl? Can't we just
>> write the asl code manually here and get rid of libxl_mk_dsdt_arm.c?
> 
> Whilst I agree that manually writing the asl code sounds more appealing,
> we need to write one node per processor. So currently this would be 128
> nodes and this will likely increase in the future.
> 
> Generating the asl has the advantage that a new property can be added to
> the processor node easily, without having to modify all the nodes one
> by one.
Yes, and maybe in the future we will need to add other information to the DSDT.
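
For illustration, the generator only needs one loop over the vCPUs, so adding
a property later means touching a single printf rather than editing 128
hand-written nodes. A rough sketch follows (not the actual
libxl_mk_dsdt_arm.c; the exact nodes emitted are an assumption):

#include <stdio.h>

#define DOMU_MAX_VCPUS 128   /* the limit discussed elsewhere in this series */

int main(void)
{
    unsigned int cpu;

    /* Body appended to libxl_empty_dsdt_arm.asl by the Makefile rule
     * (the awk step strips the closing brace of the DefinitionBlock). */
    printf("Scope (\\_SB)\n{\n");
    for ( cpu = 0; cpu < DOMU_MAX_VCPUS; cpu++ )
    {
        printf("    Device (P%03u)\n    {\n", cpu);
        printf("        Name (_HID, \"ACPI0007\")\n");   /* processor device */
        printf("        Name (_UID, %u)\n", cpu);
        printf("    }\n");
    }
    printf("}\n");   /* close the Scope */
    printf("}\n");   /* re-emit the DefinitionBlock's closing brace */

    return 0;
}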

Thanks,
-- 
Shannon




Re: [Xen-devel] [PATCH v2 07/17] libxl/arm: Construct ACPI GTDT table

2016-06-26 Thread Shannon Zhao


On 2016/6/24 0:26, Julien Grall wrote:
> On 23/06/16 04:16, Shannon Zhao wrote:
>> From: Shannon Zhao 
>>
>> Construct GTDT table with the interrupt information of timers.
>>
>> Signed-off-by: Shannon Zhao 
>> ---
>>   tools/libxl/libxl_arm_acpi.c | 28 
>>   1 file changed, 28 insertions(+)
>>
>> diff --git a/tools/libxl/libxl_arm_acpi.c b/tools/libxl/libxl_arm_acpi.c
>> index d5ffedf..de863f4 100644
>> --- a/tools/libxl/libxl_arm_acpi.c
>> +++ b/tools/libxl/libxl_arm_acpi.c
>> @@ -39,6 +39,9 @@ typedef uint64_t u64;
>>   #define ACPI_BUILD_APPNAME6 "XenARM"
>>   #define ACPI_BUILD_APPNAME4 "Xen "
>>
>> +#define ACPI_LEVEL_SENSITIVE(u8) 0x00
>> +#define ACPI_ACTIVE_LOW (u8) 0x01
>> +
> 
> Why did you not include actypes.h rather than adding these two defines?
If we include actypes.h, there are some compile errors:

../../xen/include/acpi/actypes.h:55:2: error: #error ACPI_MACHINE_WIDTH
not defined
 #error ACPI_MACHINE_WIDTH not defined
  ^
../../xen/include/acpi/actypes.h:130:9: error: unknown type name
'COMPILER_DEPENDENT_UINT64'
 typedef COMPILER_DEPENDENT_UINT64 UINT64;
 ^
../../xen/include/acpi/actypes.h:131:9: error: unknown type name
'COMPILER_DEPENDENT_INT64'
 typedef COMPILER_DEPENDENT_INT64 INT64;
 ^
../../xen/include/acpi/actypes.h:202:2: error: #error unknown
ACPI_MACHINE_WIDTH
 #error unknown ACPI_MACHINE_WIDTH
  ^
../../xen/include/acpi/actypes.h:207:9: error: unknown type name
'acpi_native_uint'
 typedef acpi_native_uint acpi_size;
 ^
../../xen/include/acpi/actypes.h:617:3: error: unknown type name
'acpi_io_address'
   acpi_io_address pblk_address;

Yeah, it could probably be solved by defining ACPI_MACHINE_WIDTH and
COMPILER_DEPENDENT_INT64 here, but since we only need
ACPI_LEVEL_SENSITIVE and ACPI_ACTIVE_LOW, I think it's ok to define them
here.
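
For reference, the alternative being dismissed above would look roughly like
this (a sketch only; whether these exact names and values are enough for the
libxl build is an assumption):

/* Provide the platform/compiler definitions actypes.h expects before
 * including it, instead of defining the two flags locally. */
#define ACPI_MACHINE_WIDTH          64
#define COMPILER_DEPENDENT_INT64    long long
#define COMPILER_DEPENDENT_UINT64   unsigned long long
#include <acpi/actypes.h>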

Thanks,
-- 
Shannon




[Xen-devel] [xen-4.3-testing test] 96291: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96291 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96291/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt5 libvirt-build fail REGR. vs. 87893
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 87893
 build-armhf   5 xen-build fail REGR. vs. 87893

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  9 windows-install  fail pass in 96279

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 87893
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 87893
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 87893

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3 20 leak-check/check fail in 96279 never 
pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass

version targeted for testing:
 xen  0a8c94fae993dd8f2b27fd4cc694f61c21de84bf
baseline version:
 xen  8fa31952e2d08ef63897c43b5e8b33475ebf5d93

Last test of basis87893  2016-03-29 13:49:52 Z   89 days
Failing since 92180  2016-04-20 17:49:21 Z   67 days   33 attempts
Testing same since96017  2016-06-20 17:22:27 Z6 days   12 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony Liguori 
  Anthony PERARD 
  Gerd Hoffmann 
  Ian Jackson 
  Jan Beulich 
  Jim Paris 
  Stefan Hajnoczi 
  Tim Deegan 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  fail
 build-i386   pass
 build-amd64-libvirt  fail
 build-armhf-libvirt  blocked 
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  blocked 
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-rumpuserxen-amd64   blocked 
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-

Re: [Xen-devel] Lenovo X200 IOMMU support through Xen 4.6 iommu=no-igfx switch

2016-06-26 Thread Thierry Laurion
Sorry for the previous post, which was written a bit too fast. Libreboot was
flashed when I wrote it, which is the equivalent of having vt-d
deactivated (iommu=0). Thanks to the user who read that post and wrote to me
personally so I could do my mea culpa. Sorry again for the earlier misleading
post.

Xen on a GM45 chipset with the IGD i915 driver still gets the system
hung when vt-d is activated. I'm willing to lend a machine to any Xen
developer who could fix the iommu=no-igfx code for the GM45 chipset so it
actually works.

A ticket is open here with the current state of things:
https://github.com/QubesOS/qubes-issues/issues/1594#issuecomment-209213917

Sorry about that (and for the repost, since I sent the same misleading post to
two places).
Thierry

On Sun, 28 Feb 2016 at 14:03, Thierry Laurion wrote:

> The problem wasn't with xen iommu support but kms/drm and i915 driver.
>
> Passing to the kernel i915.preliminary_hw_support=1 fixes it all :)
>
> Thanks
>
> On Wed, 6 Jan 2016 at 22:11, Thierry Laurion wrote:
>
>> Nope. That commit is present in 4.6 and results in x200 being able to
>> boot xen.
>>
>> Not having that option makes xen hang at boot.
>>
>> If present, it works until other vm access pass-through devices, which
>> I'm not able to troubleshoot even through amt SOL.
>>
>> See here for debug logs:
>> https://groups.google.com/forum/m/#!topic/qubes-users/bHQHjXqinaU
>>
>> On Wed, 6 Jan 2016 at 09:35, Jan Beulich wrote:
>>
>>> >>> On 22.12.15 at 19:04,  wrote:
>>> > iommu=no-igfx is a gamechanger for Qubes support through 3.1 RC1
>>> release,
>>> > thanks to Xen 4.6 :)
>>> >
>>> > The Lenovo X200 supports vt-x, vt-d and TPM as reported and required by
>>> > Qubes in the HCL attached to this e-mail. The problem is that when
>>> Qubes
>>> > launches it's netvm which uses IOMMU to talk to it's network card, it
>>> > freezes the whole system up. Even when specifying sync_console, I
>>> don't get
>>> > much more verbosity. I ordered a PCMCIA to serial adapter which will be
>>> > shipped to my door late January... Meanwhile, booting with iommu=0
>>> makes
>>> > things work, but a potential hardware component being compromised has
>>> > chances to compromise the whole system since compartmentalization is
>>> not
>>> > guaranteed without IOMMU (vt-d).
>>> >
>>> > A little more love is needed from Xen to make that laptop line supported by
>>> > Qubes and a nice alternative to the costly Librem currently promoted by the
>>> > Qubes-Purism partnership
>>>
>>> Is all of the above and below a quite complicated way of expressing
>>> that you'd like to see commit 146341187a backported to 4.6.x?
>>>
>>> Jan
>>>
>>> > <http://arstechnica.com/gadgets/2015/12/qubes-os-will-ship-pre-installed-on-purisms-security-focused-librem-13-laptop/>
>>> > which suggests that the laptop will be Respect Your Freedom compliant in the
>>> > future with Intel participation in removing ME and AMT, which is not
>>> > guaranteed at all.
>>> > <http://www.phoronix.com/scan.php?page=news_item&px=Purism-Librem-Still-Blobbed>
>>> > If Xen 4.6 can cooperate with the Penryn GM45 chipset, it's all MiniFree
>>> > laptops (and Libreboot support of those) that will be potential
>>> > candidates!
>>> > Please share the love so that the community has a cheap alternative.
>>> >
>>> > Requirements to replicate bug:
>>> > Model: X200 745434U with p8700 CPU running 1067a microcode(important),
>>> > upgrable to 8go
>>> > BIOS: Lenovo 3.22/1.07 (latest from 2013
>>> > )
>>> > Network card supports FLReset+ as requested here
>>> > .
>>> > Bios settings: vt-d and vt-x needs to be enforced.
>>> > Xen command line option required
>>> >  to boot:
>>> > iommu=no-igfx
>>> >
>>> > Here is the current debug trace/status on Qubes side of things
>>> > .
>>> > If you have any hint, please contribute :)
>>> >
>>> > Help me say happy new years to all security conscious people out there
>>> :)
>>> >
>>> > Merry Christmas all,
>>> > Thierry Laurion
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thierry Laurion
>>>
>>>
>>>
>>>


Re: [Xen-devel] pre Sandy bridge IOMMU support (gm45)

2016-06-26 Thread Thierry Laurion
Sorry for the previous post, which was written a bit too fast. Libreboot was
flashed when I wrote it, which is the equivalent of having vt-d
deactivated (iommu=0). Thanks to the user who read that post and wrote to me
personally so I could do my mea culpa. Sorry again for the earlier misleading
post.

Xen on a GM45 chipset with the IGD i915 driver still gets the system
hung when vt-d is activated. I'm willing to lend a machine to any Xen
developer who could fix the iommu=no-igfx code for the GM45 chipset so it
actually works.

A ticket is open here with the current state of things:
https://github.com/QubesOS/qubes-issues/issues/1594#issuecomment-209213917

Sorry about that.
Thierry

On Sun, 28 Feb 2016 at 14:08, Thierry Laurion wrote:

> The problem wasn't with xen iommu support but kms/drm and i915 driver.
>
> Passing to the kernel i915.preliminary_hw_support=1 fixes it all :)
>
> Thanks
>
> On Sat, 20 Feb 2016 at 22:44, Thierry Laurion wrote:
>
>> On Tue, 26 Jan 2016 at 05:52, Jan Beulich wrote:
>>
>>> >>> On 25.01.16 at 22:49,  wrote:
>>> > The case is 1) disabling iommu for IGD, unilaterally since i915 + gm45
>>> > doesn't play well together. Iommu is still desired to isolate usb and
>>> > network devices, so we don't want to disable iommu completely. The side
>>> > effect of this would be to have IGD only for dom0, which would also
>>> > completely make sense in this use case.
>>> >
>>> > The point is the iommu=no-igfx doesn't fix the issue, since remapping
>>> seems
>>> > to still happen for IGD. Does that make sense ?
>>>
>>> It certainly may make sense, just that in what you have written so
>>> far I don't think I've been able to spot any evidence thereof. Since,
>>> as you say, nothing interesting gets logged by Xen, you must be
>>> drawing this conclusion from something (or else you wouldn't say
>>> "doesn't fix the issue").
>>>
>>> Jan
>>>
>>>
>> Here are some interesting lines showing Xen failing without iommu=no-igfx:
>>
>> --- /home/john/Downloads/amtterm/x200_xen_debug-normal-no_ts.txt
>> +++ /home/john/Downloads/amtterm/x200_xen_debug-iommu-no_igfx-no_ts.txt
>> @@ -339,23 +339,10 @@
>> (XEN) [VT-D]iommu.c:1465: d0:PCI: map :00:1f.3
>> (XEN) [VT-D]iommu.c:1453: d0:PCIe: map :03:00.0
>> (XEN) [VT-D]iommu.c:729: iommu_enable_translation: iommu->reg =
>> 82c000205000
>> -(XEN) [VT-D]iommu.c:729: iommu_enable_translation: iommu->reg =
>> 82c000203000
>> +(XEN) [VT-D]iommu.c:719: BIOS did not enable IGD for VT properly.
>> Disabling IGD VT-d engine.
>> (XEN) [VT-D]iommu.c:729: iommu_enable_translation: iommu->reg =
>> 82c000201000
>> (XEN) [VT-D]iommu.c:729: iommu_enable_translation: iommu->reg =
>> 82c000207000
>> (XEN) Scrubbing Free RAM on 1 nodes using 2 CPUs
>> -(XEN) [VT-D]iommu.c:873: iommu_fault_status: Fault Overflow
>> -(XEN) [VT-D]iommu.c:875: iommu_fault_status: Primary Pending Fault
>> -(XEN) [VT-D]DMAR:[DMA Write] Request device [:00:02.0] fault addr
>> ff000, iommu reg = 82c000203000
>> -(XEN) [VT-D]DMAR: reason 05 - PTE Write access is not set
>> -(XEN) print_vtd_entries: iommu 8301363fa7d0 dev :00:02.0 gmfn
>> ff
>> -(XEN) root_entry = 8301363f4000
>> -(XEN) root_entry[0] = 80fa001
>> -(XEN) context = 8300080fa000
>> -(XEN) context[10] = 1_8ae0001
>> -(XEN) l3 = 830008ae
>> -(XEN) l3_index = 3f
>> -(XEN) l3[3f] = 0
>> -(XEN) l3[3f] not present
>> (XEN) done.
>> (XEN) Initial low memory virq threshold set at 0x4000 pages.
>> (XEN) Std. Loglevel: All
>>
>> I restate my understanding.
>> iommu=no-igfx needs to be passed to the hypervisor for it to boot.
>> iommu=dom0-passthrough would also be needed for dom0 to be able to unset
>> iommu usage for itself?
>>
>> For Dom0 to have access to the device, I also understand that the
>> intel_iommu=igfx_off kernel option would need to be passed to the i915 driver
>> of dom0?
>>
>> Doing so still fails without error. Any hint?
>> Doing so by providing dom0-pass
>>
>>


[Xen-devel] [qemu-upstream-4.3-testing test] 96290: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96290 qemu-upstream-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96290/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 80927
 build-i386-libvirt5 libvirt-build fail REGR. vs. 80927

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 80927
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 80927

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass

version targeted for testing:
 qemuu12e8fccf5b5460be7aecddc71d27eceaba6e1f15
baseline version:
 qemuu10c1b763c26feb645627a1639e722515f3e1e876

Last test of basis80927  2016-02-06 13:30:02 Z  141 days
Failing since 93977  2016-05-10 11:09:16 Z   47 days  149 attempts
Testing same since95534  2016-06-11 00:59:46 Z   15 days   29 attempts


People who touched revisions under test:
  Anthony PERARD 
  Gerd Hoffmann 
  Ian Jackson 
  Stefano Stabellini 
  Wei Liu 

jobs:
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-credit2  pass
 test-amd64-i386-freebsd10-i386   pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-amd64-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-xl-multivcpupass
 test-amd64-amd64-pairpass
 test-amd64-i386-pair pass
 test-amd64-amd64-pv  pass
 test-amd64-i386-pv   pass
 test-amd64-amd64-amd64-pvgrubpass
 test-amd64-amd64-i386-pvgrub pass
 test-amd64-amd64-pygrub  pass
 test-amd64-amd64-xl-qcow2pass
 test-amd64-i386-xl-raw   pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 pass
 test-amd64-amd64-libvirt-vhd blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 12e8fccf5b5460be7aecddc71d27eceaba6e1f15
Author: Ian Jackson 
Date:   Thu May 26 16:21:56 2016 +0100

main loop: Big hammer to fix logfile disk DoS in Xen setups

Each time round the main loop, we now fstat stderr.  If it is too big,
we dup2 /dev/null onto it.  This is not a very pretty patch but it is
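
For reference, the check being described boils down to something like this
(a rough sketch, not the actual QEMU change; the helper name and the way the
limit is chosen are placeholders):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Sketch: once per main-loop iteration, if stderr has grown past a limit,
 * point it at /dev/null so a chatty guest cannot fill the logging disk.
 */
static void clamp_stderr_size(off_t limit)
{
    struct stat st;

    if (fstat(STDERR_FILENO, &st) == 0 && st.st_size > limit) {
        int fd = open("/dev/null", O_WRONLY);

        if (fd >= 0) {
            dup2(fd, STDERR_FILENO);
            close(fd);
        }
    }
}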

[Xen-devel] [xen-unstable test] 96278: tolerable FAIL

2016-06-26 Thread osstest service owner
flight 96278 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96278/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 10 guest-start fail pass in 96251

Regressions which are regarded as allowable (not blocking):
 build-amd64-rumpuserxen   6 xen-buildfail   like 96251
 build-i386-rumpuserxen6 xen-buildfail   like 96251
 test-amd64-amd64-xl-rtds  9 debian-install   fail   like 96251
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 96251
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 96251
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 96251
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 96251

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail in 96251 never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass

version targeted for testing:
 xen  8384dc2d95538c5910d98db3df3ff5448bf0af48
baseline version:
 xen  8384dc2d95538c5910d98db3df3ff5448bf0af48

Last test of basis96278  2016-06-26 07:09:59 Z0 days
Testing same since0  1970-01-01 00:00:00 Z 16978 days0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-oldkern  pass
 build-i386-oldkern   pass
 build-amd64-prev  

Re: [Xen-devel] [PATCH v3] xen: arm: Update arm64 image header

2016-06-26 Thread Julien Grall



On 26/06/2016 10:30, Dirk Behme wrote:

Hi Julien,


Hi Dirk,


On 26.06.2016 10:29, Julien Grall wrote:

Hello Dirk,

On 26/06/2016 06:47, Dirk Behme wrote:

On 23.06.2016 17:18, Julien Grall wrote:

On 23/06/16 07:38, Dirk Behme wrote:

+uint64_t res2;
  uint64_t res3;
  uint64_t res4;
-uint64_t res5;
-uint32_t magic1;
-uint32_t res6;
+uint32_t magic;/* Magic number, little endian,
"ARM\x64" */
+uint32_t res5;
  } zimage;
  uint64_t start, end;

@@ -354,20 +353,30 @@ static int kernel_zimage64_probe(struct
kernel_info *info,

  copy_from_paddr(&zimage, addr, sizeof(zimage));

-if ( zimage.magic0 != ZIMAGE64_MAGIC_V0 &&
- zimage.magic1 != ZIMAGE64_MAGIC_V1 )
+if ( zimage.magic != ZIMAGE64_MAGIC ) {
+printk(XENLOG_ERR "No valid magic found in header! Kernel
too old?\n");


I have found why there were no error messages here before. The
function kernel_probe will try the different formats supported one by
one.

So this message will be printed if the kernel is an ARM32 image, which
will confuse the user. So I would print this message only when
zimage.magic0 is equal to ZIMAGE64_MAGIC_V0.



Which we don't have with the recent header format any more.


Well, we control the structure in Xen. So we could re-introduce the
field magic0 through a union.

 > This does

mean I drop this message again, as it doesn't make sense if the
magic is
used for the format detection.


I would still prefer to keep an error message when only MAGIC_V0 is
present. This will avoid people spending time trying to understand why it
does not work anymore.



This way

if ( zimage.magic != ZIMAGE64_MAGIC ) {
  if ( zimage.magic0 == ZIMAGE64_MAGIC_V0 )
printk(XENLOG_ERR "No valid magic found in header! Kernel
too old?\n");
  return -EINVAL;
}

with magic0 being a union with code0?


Yes, although I would drop the question marks at the end of the second 
sentence. We know that the kernel is too old.
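
For reference, the first word of the header could carry both meanings through
a union along these lines (a sketch based on the structure quoted earlier in
this thread, not the final patch):

#include <stdint.h>

struct zimage64_hdr {
    union {
        uint32_t magic0;   /* where the legacy ZIMAGE64_MAGIC_V0 value lived */
        uint32_t code0;    /* first instruction on current kernels */
    };
    uint32_t code1;
    uint64_t text_offset;
    uint64_t image_size;
    uint64_t flags;
    uint64_t res2;
    uint64_t res3;
    uint64_t res4;
    uint32_t magic;        /* magic number, little endian, "ARM\x64" */
    uint32_t res5;
};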


Thank you for doing this.

Regards,

--
Julien Grall



[Xen-devel] [xen-4.3-testing test] 96279: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96279 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96279/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt5 libvirt-build fail REGR. vs. 87893
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 87893
 build-armhf   5 xen-build fail REGR. vs. 87893

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 87893
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 87893
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 87893

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 20 leak-check/checkfail never pass

version targeted for testing:
 xen  0a8c94fae993dd8f2b27fd4cc694f61c21de84bf
baseline version:
 xen  8fa31952e2d08ef63897c43b5e8b33475ebf5d93

Last test of basis87893  2016-03-29 13:49:52 Z   89 days
Failing since 92180  2016-04-20 17:49:21 Z   67 days   32 attempts
Testing same since96017  2016-06-20 17:22:27 Z6 days   11 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony Liguori 
  Anthony PERARD 
  Gerd Hoffmann 
  Ian Jackson 
  Jan Beulich 
  Jim Paris 
  Stefan Hajnoczi 
  Tim Deegan 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  fail
 build-i386   pass
 build-amd64-libvirt  fail
 build-armhf-libvirt  blocked 
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  blocked 
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-rumpuserxen-amd64   blocked 
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i3

[Xen-devel] [qemu-upstream-4.3-testing test] 96275: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96275 qemu-upstream-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96275/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 80927
 build-i386-libvirt5 libvirt-build fail REGR. vs. 80927

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv  17 guest-localmigrate/x10  fail pass in 96257
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 
96257
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail pass in 
96257

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail in 96257 like 80927
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail in 96257 like 80927

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass

version targeted for testing:
 qemuu12e8fccf5b5460be7aecddc71d27eceaba6e1f15
baseline version:
 qemuu10c1b763c26feb645627a1639e722515f3e1e876

Last test of basis80927  2016-02-06 13:30:02 Z  141 days
Failing since 93977  2016-05-10 11:09:16 Z   47 days  148 attempts
Testing same since95534  2016-06-11 00:59:46 Z   15 days   28 attempts


People who touched revisions under test:
  Anthony PERARD 
  Gerd Hoffmann 
  Ian Jackson 
  Stefano Stabellini 
  Wei Liu 

jobs:
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl  pass
 test-amd64-i386-xl   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-credit2  pass
 test-amd64-i386-freebsd10-i386   pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-amd64-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-xl-multivcpupass
 test-amd64-amd64-pairpass
 test-amd64-i386-pair pass
 test-amd64-amd64-pv  fail
 test-amd64-i386-pv   pass
 test-amd64-amd64-amd64-pvgrubpass
 test-amd64-amd64-i386-pvgrub pass
 test-amd64-amd64-pygrub  pass
 test-amd64-amd64-xl-qcow2pass
 test-amd64-i386-xl-raw   pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 pass
 test-amd64-amd64-libvirt-vhd blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 12e8fccf5b5460be7

[Xen-devel] [PATCH V2 00/10] Add support for parsing per CPU Redistributor entry

2016-06-26 Thread Shanker Donthineni
The current driver doesn't support parsing Redistributor entries that
are described in the MADT GICC subtable. Not all GIC implementers place
the Redistributor regions in the always-on power domain. On such
systems, the UEFI firmware should describe the Redistributor base address
in the associated GIC CPU Interface (GICC) entry instead of in a GIC
Redistributor (GICR) subtable.

The mmio handlers and the struct vgic_rdist_region entries that hold
Redistributor addresses are allocated through static arrays with a
hardcoded size. I don't think this is the right approach, and it is not
scalable for implementing features like this, so I have decided to
convert the static allocation to dynamic allocation based on comments
from the link below.

Patch #1 fixes a bug in the current driver.

Patches #2, #3 and #4 add support for parsing Redistributor regions that
are not in the always-on power domain.

Patches #5, #6, #7, #8 and #10 refactor the code and allocate the
memory for mmio handlers and vgic_rdist_region based on the number of
Redistributors required for dom0/domU instead of using hardcoded values.

Patch #9 changes the linear search to a binary search to reduce lookup overhead.
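
For illustration, the lookup in patch #9 amounts to something like the
following (a sketch assuming the handler array is kept sorted by ascending
base address; the type and function names are illustrative, not the actual
Xen code):

#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct mmio_handler {
    paddr_t addr;   /* base of the emulated region */
    paddr_t size;
};

/* Binary search over handlers sorted by base address; returns the handler
 * covering @addr, or NULL if none does. */
static struct mmio_handler *find_mmio_handler(struct mmio_handler *handlers,
                                              size_t count, paddr_t addr)
{
    size_t lo = 0, hi = count;

    while ( lo < hi )
    {
        size_t mid = lo + (hi - lo) / 2;
        struct mmio_handler *h = &handlers[mid];

        if ( addr < h->addr )
            hi = mid;
        else if ( addr >= h->addr + h->size )
            lo = mid + 1;
        else
            return h;
    }

    return NULL;
}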

This patchset is created on top of Julien's branch:
http://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=shortlog;h=refs/heads/irq-routing-acpi-rfc

Shanker Donthineni (10):
  arm/gic-v3: Fix bug in function cmp_rdist()
  arm/gic-v3: Do early GICD ioremap and clean up
  arm/gic-v3: Fold GICR subtable parsing into a new function
  arm/gic-v3: Parse per-cpu redistributor entry in GICC subtable
  xen/arm: vgic: Use dynamic memory allocation for vgic_rdist_region
  arm/gic-v3: Remove an unused macro MAX_RDIST_COUNT
  arm: vgic: Split vgic_domain_init() functionality into two functions
  arm/io: Use separate memory allocation for mmio handlers
  xen/arm: io: Use binary search for mmio handler lookup
  arm/vgic: Change fixed number of mmio handlers to variable number

 xen/arch/arm/domain.c |  13 +++-
 xen/arch/arm/gic-v3.c | 158 --
 xen/arch/arm/io.c |  64 +++
 xen/arch/arm/vgic-v2.c|   7 ++
 xen/arch/arm/vgic-v3.c|  30 +++-
 xen/arch/arm/vgic.c   |  30 +---
 xen/include/asm-arm/domain.h  |   3 +-
 xen/include/asm-arm/gic.h |   2 +-
 xen/include/asm-arm/gic_v3_defs.h |   1 +
 xen/include/asm-arm/mmio.h|   6 +-
 xen/include/asm-arm/vgic.h|   3 +
 11 files changed, 241 insertions(+), 76 deletions(-)

-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project




[Xen-devel] [PATCH V2 10/10] arm/vgic: Change fixed number of mmio handlers to variable number

2016-06-26 Thread Shanker Donthineni
Record the number of mmio handlers that are required for vGICv3/2
in variable 'arch_domain.vgic.mmio_count' in vgic_v3/v2_init().
Add this variable number to the fixed number MAX_IO_HANDLER and
pass the sum to domain_io_init() to allocate enough memory for the handlers.

New code path:
 domain_vgic_register()
   count = MAX_IO_HANDLER + d->arch.vgic.mmio_count;
   domain_io_init(count)
 domain_vgic_init()

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/domain.c| 11 +++
 xen/arch/arm/vgic-v2.c   |  1 +
 xen/arch/arm/vgic-v3.c   | 12 ++--
 xen/arch/arm/vgic.c  |  6 +-
 xen/include/asm-arm/domain.h |  1 +
 xen/include/asm-arm/vgic.h   |  1 +
 6 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 4010ff2..ebc12ac 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -550,10 +550,6 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 share_xen_page_with_guest(
 virt_to_page(d->shared_info), d, XENSHARE_writable);
 
-count = MAX_IO_HANDLER;
-if ( (rc = domain_io_init(d, count)) != 0 )
-goto fail;
-
 if ( (rc = p2m_alloc_table(d)) != 0 )
 goto fail;
 
@@ -590,6 +586,13 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 goto fail;
 }
 
+if ( (rc = domain_vgic_register(d)) != 0 )
+goto fail;
+
+count = MAX_IO_HANDLER + d->arch.vgic.mmio_count;
+if ( (rc = domain_io_init(d, count)) != 0 )
+goto fail;
+
 if ( (rc = domain_vgic_init(d, config->nr_spis)) != 0 )
 goto fail;
 
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index f5778e6..d5367b3 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -721,6 +721,7 @@ int vgic_v2_init(struct domain *d)
 return -ENODEV;
 }
 
+d->arch.vgic.mmio_count = 1; /* Only GICD region */
 register_vgic_ops(d, &vgic_v2_ops);
 
 return 0;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index e877e9e..472deac 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1391,14 +1391,19 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 return 0;
 }
 
+static inline unsigned int vgic_v3_rdist_count(struct domain *d)
+{
+return is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions :
+   GUEST_GICV3_RDIST_REGIONS;
+}
+
 static int vgic_v3_domain_init(struct domain *d)
 {
 struct vgic_rdist_region *rdist_regions;
 int rdist_count, i;
 
 /* Allocate memory for Re-distributor regions */
-rdist_count = is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions :
-   GUEST_GICV3_RDIST_REGIONS;
+rdist_count = vgic_v3_rdist_count(d);
 
 rdist_regions = xzalloc_array(struct vgic_rdist_region, rdist_count);
 if ( !rdist_regions )
@@ -1504,6 +1509,9 @@ int vgic_v3_init(struct domain *d)
 return -ENODEV;
 }
 
+/* GICD region + number of Re-distributors */
+d->arch.vgic.mmio_count = vgic_v3_rdist_count(d) + 1;
+
 register_vgic_ops(d, &v3_ops);
 
 return 0;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7627eff..0658bfc 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -88,7 +88,7 @@ static void vgic_rank_init(struct vgic_irq_rank *rank, 
uint8_t index,
 rank->vcpu[i] = vcpu;
 }
 
-static int domain_vgic_register(struct domain *d)
+int domain_vgic_register(struct domain *d)
 {
 switch ( d->arch.vgic.version )
 {
@@ -124,10 +124,6 @@ int domain_vgic_init(struct domain *d, unsigned int 
nr_spis)
 
 d->arch.vgic.nr_spis = nr_spis;
 
-ret = domain_vgic_register(d);
-if ( ret < 0)
-return ret;
-
 spin_lock_init(&d->arch.vgic.lock);
 
 d->arch.vgic.shared_irqs =
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 29346c6..b205461 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -111,6 +111,7 @@ struct arch_domain
 int nr_regions; /* Number of rdist regions */
 uint32_t rdist_stride;  /* Re-Distributor stride */
 #endif
+uint32_t mmio_count;/* Number of mmio handlers */
 } vgic;
 
 struct vuart {
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index fbb763a..1ce441c 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -307,6 +307,7 @@ extern void register_vgic_ops(struct domain *d, const 
struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d);
 int vgic_v3_init(struct domain *d);
 
+extern int domain_vgic_register(struct domain *d);
 extern int vcpu_vgic_free(struct vcpu *v);
 extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
enum gic_sgi_mode irqmode, int virq,
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project

[Xen-devel] [PATCH V2 03/10] arm/gic-v3: Fold GICR subtable parsing into a new function

2016-06-26 Thread Shanker Donthineni
Add a new function for parsing the GICR subtable and move the code
that is specific to the GICR table into it, without changing the
behavior of gicv3_acpi_init().

Signed-off-by: Shanker Donthineni 
---
Changes since v1:
  Removed the unnecessary GICR ioremap operation inside GICR table parse code.

 xen/arch/arm/gic-v3.c | 61 ---
 1 file changed, 39 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 542c4f3..0471fea 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1282,6 +1282,14 @@ static int gicv3_iomem_deny_access(const struct domain 
*d)
 }
 
 #ifdef CONFIG_ACPI
+static void __init gic_acpi_add_rdist_region(u64 base_addr, u32 size)
+{
+unsigned int idx = gicv3.rdist_count++;
+
+gicv3.rdist_regions[idx].base = base_addr;
+gicv3.rdist_regions[idx].size = size;
+}
+
 static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
 {
 struct acpi_subtable_header *header;
@@ -1387,6 +1395,25 @@ gic_acpi_parse_madt_distributor(struct 
acpi_subtable_header *header,
 
 return 0;
 }
+
+static int __init
+gic_acpi_parse_madt_redistributor(struct acpi_subtable_header *header,
+  const unsigned long end)
+{
+struct acpi_madt_generic_redistributor *rdist;
+
+rdist = (struct acpi_madt_generic_redistributor *)header;
+if ( BAD_MADT_ENTRY(rdist, end) )
+return -EINVAL;
+
+if ( !rdist->base_address || !rdist->length )
+return -EINVAL;
+
+gic_acpi_add_rdist_region(rdist->base_address, rdist->length);
+
+return 0;
+}
+
 static int __init
 gic_acpi_get_madt_redistributor_num(struct acpi_subtable_header *header,
 const unsigned long end)
@@ -1402,7 +1429,7 @@ static void __init gicv3_acpi_init(void)
 struct acpi_table_header *table;
 struct rdist_region *rdist_regs;
 acpi_status status;
-int count, i;
+int count;
 
 status = acpi_get_table(ACPI_SIG_MADT, 0, &table);
 
@@ -1433,37 +1460,27 @@ static void __init gicv3_acpi_init(void)
 if ( count <= 0 )
 panic("GICv3: No valid GICR entries exists");
 
-gicv3.rdist_count = count;
-
-if ( gicv3.rdist_count > MAX_RDIST_COUNT )
+if ( count > MAX_RDIST_COUNT )
 panic("GICv3: Number of redistributor regions is more than"
   "%d (Increase MAX_RDIST_COUNT!!)\n", MAX_RDIST_COUNT);
 
-rdist_regs = xzalloc_array(struct rdist_region, gicv3.rdist_count);
+rdist_regs = xzalloc_array(struct rdist_region, count);
 if ( !rdist_regs )
 panic("GICv3: Failed to allocate memory for rdist regions\n");
 
-for ( i = 0; i < gicv3.rdist_count; i++ )
-{
-struct acpi_subtable_header *header;
-struct acpi_madt_generic_redistributor *gic_rdist;
-
-header = 
acpi_table_get_entry_madt(ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR,
-   i);
-if ( !header )
-panic("GICv3: Can't get GICR entry");
-
-gic_rdist =
-   container_of(header, struct acpi_madt_generic_redistributor, 
header);
-rdist_regs[i].base = gic_rdist->base_address;
-rdist_regs[i].size = gic_rdist->length;
-}
+gicv3.rdist_regions = rdist_regs;
+
+/* Parse always-on power domain Re-distributor entries */
+count = acpi_parse_entries(ACPI_SIG_MADT,
+   sizeof(struct acpi_table_madt),
+   gic_acpi_parse_madt_redistributor, table,
+   ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR, count);
+if ( count <= 0 )
+panic("GICv3: Can't get Redistributor entry");
 
 /* The vGIC code requires the region to be sorted */
 sort(rdist_regs, gicv3.rdist_count, sizeof(*rdist_regs), cmp_rdist, NULL);
 
-gicv3.rdist_regions= rdist_regs;
-
 /* Collect CPU base addresses */
 count = acpi_parse_entries(ACPI_SIG_MADT, sizeof(struct acpi_table_madt),
gic_acpi_parse_madt_cpu, table,
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH V2 07/10] arm: vgic: Split vgic_domain_init() functionality into two functions

2016-06-26 Thread Shanker Donthineni
Separate the code logic that registers the vgic_v3/v2 ops into a new
function, domain_vgic_register(). The intention of this separation is
to record the required mmio count in vgic_v3/v2_init() and pass it to
domain_io_init() in a later patch.

Signed-off-by: Shanker Donthineni 
---
Changes since v1:
  Moved registration of vgic_v3/v2 functionality to a new 
domain_vgic_register().

 xen/arch/arm/vgic.c | 33 +
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 5df5f01..7627eff 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -88,19 +88,8 @@ static void vgic_rank_init(struct vgic_irq_rank *rank, 
uint8_t index,
 rank->vcpu[i] = vcpu;
 }
 
-int domain_vgic_init(struct domain *d, unsigned int nr_spis)
+static int domain_vgic_register(struct domain *d)
 {
-int i;
-int ret;
-
-d->arch.vgic.ctlr = 0;
-
-/* Limit the number of virtual SPIs supported to (1020 - 32) = 988  */
-if ( nr_spis > (1020 - NR_LOCAL_IRQS) )
-return -EINVAL;
-
-d->arch.vgic.nr_spis = nr_spis;
-
 switch ( d->arch.vgic.version )
 {
 #ifdef CONFIG_HAS_GICV3
@@ -119,6 +108,26 @@ int domain_vgic_init(struct domain *d, unsigned int 
nr_spis)
 return -ENODEV;
 }
 
+return 0;
+}
+
+int domain_vgic_init(struct domain *d, unsigned int nr_spis)
+{
+int i;
+int ret;
+
+d->arch.vgic.ctlr = 0;
+
+/* Limit the number of virtual SPIs supported to (1020 - 32) = 988  */
+if ( nr_spis > (1020 - NR_LOCAL_IRQS) )
+return -EINVAL;
+
+d->arch.vgic.nr_spis = nr_spis;
+
+ret = domain_vgic_register(d);
+if ( ret < 0)
+return ret;
+
 spin_lock_init(&d->arch.vgic.lock);
 
 d->arch.vgic.shared_irqs =
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH V2 05/10] xen/arm: vgic: Use dynamic memory allocation for vgic_rdist_region

2016-06-26 Thread Shanker Donthineni
The number of Re-distributor regions allowed for dom0 is hardcoded
to the compile time macro MAX_RDIST_COUNT, which is 4. Some systems,
especially the latest server chips, might have more than 4
redistributors. Either we have to increase MAX_RDIST_COUNT to a
bigger number, or allocate memory based on the number of
redistributors that are found in the MADT table. In the worst case
scenario, the macro MAX_RDIST_COUNT would have to equal
CONFIG_NR_CPUS in order to support per CPU Redistributors.

Increasing MAX_RDIST_COUNT has a side effect: it blows up the
'struct domain' size and hits the BUILD_BUG_ON() in the domain
build code path.

struct domain *alloc_domain_struct(void)
{
    struct domain *d;

    BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
    d = alloc_xenheap_pages(0, 0);
    if ( d == NULL )
        return NULL;
    ...

This patch uses the second approach (dynamic allocation) to avoid
triggering the BUILD_BUG_ON().

Signed-off-by: Shanker Donthineni 
---
Changes since v1:
  Keep 'struct vgic_rdist_region' definition inside 'struct arch_domain'.

 xen/arch/arm/vgic-v2.c   |  6 ++
 xen/arch/arm/vgic-v3.c   | 22 +++---
 xen/arch/arm/vgic.c  |  1 +
 xen/include/asm-arm/domain.h |  2 +-
 xen/include/asm-arm/vgic.h   |  2 ++
 5 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 9adb4a9..f5778e6 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -699,9 +699,15 @@ static int vgic_v2_domain_init(struct domain *d)
 return 0;
 }
 
+static void vgic_v2_domain_free(struct domain *d)
+{
+/* Nothing to be cleanup for this driver */
+}
+
 static const struct vgic_ops vgic_v2_ops = {
 .vcpu_init   = vgic_v2_vcpu_init,
 .domain_init = vgic_v2_domain_init,
+.domain_free = vgic_v2_domain_free,
 .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index b37a7c0..e877e9e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1393,7 +1393,19 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 
 static int vgic_v3_domain_init(struct domain *d)
 {
-int i;
+struct vgic_rdist_region *rdist_regions;
+int rdist_count, i;
+
+/* Allocate memory for Re-distributor regions */
+rdist_count = is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions :
+   GUEST_GICV3_RDIST_REGIONS;
+
+rdist_regions = xzalloc_array(struct vgic_rdist_region, rdist_count);
+if ( !rdist_regions )
+return -ENOMEM;
+
+d->arch.vgic.nr_regions = rdist_count;
+d->arch.vgic.rdist_regions = rdist_regions;
 
 /*
  * Domain 0 gets the hardware address.
@@ -1426,7 +1438,6 @@ static int vgic_v3_domain_init(struct domain *d)
 
 first_cpu += size / d->arch.vgic.rdist_stride;
 }
-d->arch.vgic.nr_regions = vgic_v3_hw.nr_rdist_regions;
 }
 else
 {
@@ -1435,7 +1446,6 @@ static int vgic_v3_domain_init(struct domain *d)
 /* XXX: Only one Re-distributor region mapped for the guest */
 BUILD_BUG_ON(GUEST_GICV3_RDIST_REGIONS != 1);
 
-d->arch.vgic.nr_regions = GUEST_GICV3_RDIST_REGIONS;
 d->arch.vgic.rdist_stride = GUEST_GICV3_RDIST_STRIDE;
 
 /* The first redistributor should contain enough space for all CPUs */
@@ -1467,9 +1477,15 @@ static int vgic_v3_domain_init(struct domain *d)
 return 0;
 }
 
+static void vgic_v3_domain_free(struct domain *d)
+{
+xfree(d->arch.vgic.rdist_regions);
+}
+
 static const struct vgic_ops v3_ops = {
 .vcpu_init   = vgic_v3_vcpu_init,
 .domain_init = vgic_v3_domain_init,
+.domain_free = vgic_v3_domain_free,
 .emulate_sysreg  = vgic_v3_emulate_sysreg,
 /*
  * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3e1c572..5df5f01 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -177,6 +177,7 @@ void domain_vgic_free(struct domain *d)
 }
 }
 
+d->arch.vgic.handler->domain_free(d);
 xfree(d->arch.vgic.shared_irqs);
 xfree(d->arch.vgic.pending_irqs);
 xfree(d->arch.vgic.allocated_irqs);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 370cdeb..29346c6 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -107,7 +107,7 @@ struct arch_domain
 paddr_t base;   /* Base address */
 paddr_t size;   /* Size */
 unsigned int first_cpu; /* First CPU handled */
-} rdist_regions[MAX_RDIST_COUNT];
+} *rdist_regions;
 int nr_regions; /* Number of rdist regions */
 uint32_t rdist_stride;  /* Re-Distributor stride */
 #endif
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index a2fccc0..fbb763a 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -128,6 +128,8 @@ struct vgic_ops {
 int (*vcpu_init)(struct vcpu *v);
 /* Domain specifi

[Xen-devel] [PATCH V2 09/10] xen/arm: io: Use binary search for mmio handler lookup

2016-06-26 Thread Shanker Donthineni
As the number of I/O handlers increases, the overhead associated with
the linear lookup also increases. The system might have a maximum of
144 mmio handlers (assuming CONFIG_NR_CPUS=128). In the worst case
scenario, it would require 144 iterations to find a matching handler.
It is time for us to change from a linear search (complexity O(n)) to
a binary search (complexity O(log n)) to reduce the mmio handler
lookup overhead.

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/io.c | 50 +++---
 1 file changed, 39 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a5b2c2d..abf49fb 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -70,23 +71,38 @@ static int handle_write(const struct mmio_handler *handler, 
struct vcpu *v,
handler->priv);
 }
 
-int handle_mmio(mmio_info_t *info)
+const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t addr)
 {
-struct vcpu *v = current;
-int i;
-const struct mmio_handler *handler = NULL;
 const struct vmmio *vmmio = &v->domain->arch.vmmio;
+const struct mmio_handler *handler = vmmio->handlers;
+unsigned int eidx = vmmio->num_entries;
+unsigned int midx = eidx / 2;
+unsigned int sidx = 0;
 
-for ( i = 0; i < vmmio->num_entries; i++ )
+/* Do binary search for matching mmio handler */
+while ( sidx != midx )
 {
-handler = &vmmio->handlers[i];
-
-if ( (info->gpa >= handler->addr) &&
- (info->gpa < (handler->addr + handler->size)) )
-break;
+if ( addr < handler[midx].addr )
+eidx = midx;
+else
+sidx = midx;
+midx = sidx + (eidx - sidx) / 2;
 }
 
-if ( i == vmmio->num_entries )
+if ( (addr >= handler[sidx].addr) &&
+ (addr < (handler[sidx].addr + handler[sidx].size)) )
+return handler + sidx;
+
+return NULL;
+}
+
+int handle_mmio(mmio_info_t *info)
+{
+const struct mmio_handler *handler;
+struct vcpu *v = current;
+
+handler = find_mmio_handler(v, info->gpa);
+if ( !handler )
 return 0;
 
 if ( info->dabt.write )
@@ -95,6 +111,14 @@ int handle_mmio(mmio_info_t *info)
 return handle_read(handler, v, info);
 }
 
+static int cmp_mmio_handler(const void *key, const void *elem)
+{
+const struct mmio_handler *handler0 = key;
+const struct mmio_handler *handler1 = elem;
+
+return (handler0->addr < handler1->addr) ? -1 : 0;
+}
+
 void register_mmio_handler(struct domain *d,
const struct mmio_handler_ops *ops,
paddr_t addr, paddr_t size, void *priv)
@@ -122,6 +146,10 @@ void register_mmio_handler(struct domain *d,
 
 vmmio->num_entries++;
 
+/* Sort mmio handlers in ascending order based on base address */
+sort(vmmio->handlers, vmmio->num_entries, sizeof(struct mmio_handler),
+ cmp_mmio_handler, NULL);
+
 spin_unlock(&vmmio->lock);
 }
 
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH V2 06/10] arm/gic-v3: Remove an unused macro MAX_RDIST_COUNT

2016-06-26 Thread Shanker Donthineni
The macro MAX_RDIST_COUNT is no longer used after converting the code
to handle the number of redistributors dynamically. So remove it from
the header file, along with the two panic() messages that are not
valid anymore.

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/gic-v3.c | 8 
 xen/include/asm-arm/gic.h | 1 -
 2 files changed, 9 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 3977244..87f4ecf 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1200,10 +1200,6 @@ static void __init gicv3_dt_init(void)
 &gicv3.rdist_count) )
 gicv3.rdist_count = 1;
 
-if ( gicv3.rdist_count > MAX_RDIST_COUNT )
-panic("GICv3: Number of redistributor regions is more than"
-  "%d (Increase MAX_RDIST_COUNT!!)\n", MAX_RDIST_COUNT);
-
 rdist_regs = xzalloc_array(struct rdist_region, gicv3.rdist_count);
 if ( !rdist_regs )
 panic("GICv3: Failed to allocate memory for rdist regions\n");
@@ -1518,10 +1514,6 @@ static void __init gicv3_acpi_init(void)
 gicr_table = false;
 }
 
-if ( count > MAX_RDIST_COUNT )
-panic("GICv3: Number of redistributor regions is more than"
-  "%d (Increase MAX_RDIST_COUNT!!)\n", MAX_RDIST_COUNT);
-
 rdist_regs = xzalloc_array(struct rdist_region, count);
 if ( !rdist_regs )
 panic("GICv3: Failed to allocate memory for rdist regions\n");
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index fedf1fa..db7b2d0 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -20,7 +20,6 @@
 
 #define NR_GIC_LOCAL_IRQS  NR_LOCAL_IRQS
 #define NR_GIC_SGI 16
-#define MAX_RDIST_COUNT4
 
 #define GICD_CTLR   (0x000)
 #define GICD_TYPER  (0x004)
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH V2 04/10] arm/gic-v3: Parse per-cpu redistributor entry in GICC subtable

2016-06-26 Thread Shanker Donthineni
The redistributor address can be specified either as part of the GICC
or the GICR subtable, depending on the power domain. The current driver
doesn't support parsing a redistributor entry that is defined in the
GICC subtable. The GIC CPU subtable entry holds the associated
Redistributor base address if it is not on an always-on power domain.

The per CPU Redistributor size is not defined in the ACPI
specification. Set its size to SZ_256K if the GIC hardware is capable
of the Direct Virtual LPI Injection feature, otherwise SZ_128K.

This patch adds the necessary code to handle both types of
Redistributor base addresses.

Signed-off-by: Shanker Donthineni 
---
Changes since v1:
  Edited commit text and fixed white spaces.
  Added a new function for parsing per CPU Redistributor entry.

 xen/arch/arm/gic-v3.c | 84 ++-
 xen/include/asm-arm/gic.h |  1 +
 xen/include/asm-arm/gic_v3_defs.h |  1 +
 3 files changed, 77 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 0471fea..3977244 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -659,6 +659,10 @@ static int __init gicv3_populate_rdist(void)
 smp_processor_id(), i, ptr);
 return 0;
 }
+
+if ( gicv3.rdist_regions[i].single_rdist )
+break;
+
 if ( gicv3.rdist_stride )
 ptr += gicv3.rdist_stride;
 else
@@ -1282,14 +1286,21 @@ static int gicv3_iomem_deny_access(const struct domain 
*d)
 }
 
 #ifdef CONFIG_ACPI
-static void __init gic_acpi_add_rdist_region(u64 base_addr, u32 size)
+static void __init
+gic_acpi_add_rdist_region(u64 base_addr, u32 size, bool single_rdist)
 {
 unsigned int idx = gicv3.rdist_count++;
 
+gicv3.rdist_regions[idx].single_rdist = single_rdist;
 gicv3.rdist_regions[idx].base = base_addr;
 gicv3.rdist_regions[idx].size = size;
 }
 
+static inline bool gic_dist_supports_dvis(void)
+{
+return !!(readl_relaxed(GICD + GICD_TYPER) & GICD_TYPER_DVIS);
+}
+
 static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
 {
 struct acpi_subtable_header *header;
@@ -1397,6 +1408,42 @@ gic_acpi_parse_madt_distributor(struct 
acpi_subtable_header *header,
 }
 
 static int __init
+gic_acpi_parse_cpu_redistributor(struct acpi_subtable_header *header,
+  const unsigned long end)
+{
+struct acpi_madt_generic_interrupt *processor;
+u32 size;
+
+processor = (struct acpi_madt_generic_interrupt *)header;
+if ( BAD_MADT_ENTRY(processor, end) )
+return -EINVAL;
+
+if ( !processor->gicr_base_address )
+return -EINVAL;
+
+if ( processor->flags & ACPI_MADT_ENABLED )
+{
+size = gic_dist_supports_dvis() ? 4 * SZ_64K : 2 * SZ_64K;
+gic_acpi_add_rdist_region(processor->gicr_base_address, size, true);
+}
+
+return 0;
+}
+
+static int __init
+gic_acpi_get_madt_cpu_num(struct acpi_subtable_header *header,
+const unsigned long end)
+{
+struct acpi_madt_generic_interrupt *cpuif;
+
+cpuif = (struct acpi_madt_generic_interrupt *)header;
+if ( BAD_MADT_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
+return -EINVAL;
+
+return 0;
+}
+
+static int __init
 gic_acpi_parse_madt_redistributor(struct acpi_subtable_header *header,
   const unsigned long end)
 {
@@ -1409,7 +1456,7 @@ gic_acpi_parse_madt_redistributor(struct 
acpi_subtable_header *header,
 if ( !rdist->base_address || !rdist->length )
 return -EINVAL;
 
-gic_acpi_add_rdist_region(rdist->base_address, rdist->length);
+gic_acpi_add_rdist_region(rdist->base_address, rdist->length, false);
 
 return 0;
 }
@@ -1428,6 +1475,7 @@ static void __init gicv3_acpi_init(void)
 {
 struct acpi_table_header *table;
 struct rdist_region *rdist_regs;
+bool gicr_table = true;
 acpi_status status;
 int count;
 
@@ -1457,8 +1505,18 @@ static void __init gicv3_acpi_init(void)
 count = acpi_parse_entries(ACPI_SIG_MADT, sizeof(struct acpi_table_madt),
gic_acpi_get_madt_redistributor_num, table,
ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR, 0);
-if ( count <= 0 )
-panic("GICv3: No valid GICR entries exists");
+
+/* Count the total number of CPU interface entries */
+if (count <= 0) {
+count = acpi_parse_entries(ACPI_SIG_MADT,
+   sizeof(struct acpi_table_madt),
+   gic_acpi_get_madt_cpu_num,
+   table, ACPI_MADT_TYPE_GENERIC_INTERRUPT, 0);
+if (count <= 0)
+panic("GICv3: No valid GICR entries exists");
+
+gicr_table = false;
+}
 
 if ( count > MAX_RDIST_COUNT )
 panic("GICv3: Number of redistributor regions is more than"
@@ -1470,11 +1528,19 @@ static voi

[Xen-devel] [PATCH V2 08/10] arm/io: Use separate memory allocation for mmio handlers

2016-06-26 Thread Shanker Donthineni
The number of mmio handlers is limited by the compile time macro
MAX_IO_HANDLER, which is 16. This number is not at all sufficient to
support per CPU distributor regions. Either it needs to be increased
to a bigger number, at least CONFIG_NR_CPUS+16, or separate memory
for the mmio handlers needs to be allocated dynamically during domain
build.

This patch uses the dynamic allocation strategy instead of static
allocation, to reduce the memory footprint of 'struct domain'.

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/domain.c  |  6 --
 xen/arch/arm/io.c  | 14 --
 xen/include/asm-arm/mmio.h |  6 --
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1365b4a..4010ff2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -527,7 +527,7 @@ void vcpu_destroy(struct vcpu *v)
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
struct xen_arch_domainconfig *config)
 {
-int rc;
+int rc, count;
 
 d->arch.relmem = RELMEM_not_started;
 
@@ -550,7 +550,8 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 share_xen_page_with_guest(
 virt_to_page(d->shared_info), d, XENSHARE_writable);
 
-if ( (rc = domain_io_init(d)) != 0 )
+count = MAX_IO_HANDLER;
+if ( (rc = domain_io_init(d, count)) != 0 )
 goto fail;
 
 if ( (rc = p2m_alloc_table(d)) != 0 )
@@ -644,6 +645,7 @@ void arch_domain_destroy(struct domain *d)
 free_xenheap_pages(d->arch.efi_acpi_table,
get_order_from_bytes(d->arch.efi_acpi_len));
 #endif
+domain_io_free(d);
 }
 
 void arch_domain_shutdown(struct domain *d)
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 0156755..a5b2c2d 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -102,7 +102,7 @@ void register_mmio_handler(struct domain *d,
 struct vmmio *vmmio = &d->arch.vmmio;
 struct mmio_handler *handler;
 
-BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);
+BUG_ON(vmmio->num_entries >= vmmio->max_num_entries);
 
 spin_lock(&vmmio->lock);
 
@@ -125,14 +125,24 @@ void register_mmio_handler(struct domain *d,
 spin_unlock(&vmmio->lock);
 }
 
-int domain_io_init(struct domain *d)
+int domain_io_init(struct domain *d, int max_count)
 {
spin_lock_init(&d->arch.vmmio.lock);
d->arch.vmmio.num_entries = 0;
 
+   d->arch.vmmio.max_num_entries = max_count;
+   d->arch.vmmio.handlers = xzalloc_array(struct mmio_handler, max_count);
+   if ( !d->arch.vmmio.handlers )
+   return -ENOMEM;
+
return 0;
 }
 
+void domain_io_free(struct domain *d)
+{
+xfree(d->arch.vmmio.handlers);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index da1cc2e..276b263 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -51,15 +51,17 @@ struct mmio_handler {
 
 struct vmmio {
 int num_entries;
+int max_num_entries;
 spinlock_t lock;
-struct mmio_handler handlers[MAX_IO_HANDLER];
+struct mmio_handler *handlers;
 };
 
 extern int handle_mmio(mmio_info_t *info);
 void register_mmio_handler(struct domain *d,
const struct mmio_handler_ops *ops,
paddr_t addr, paddr_t size, void *priv);
-int domain_io_init(struct domain *d);
+int domain_io_init(struct domain *d, int max_count);
+void domain_io_free(struct domain *d);
 
 #endif  /* __ASM_ARM_MMIO_H__ */
 
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH V2 01/10] arm/gic-v3: Fix bug in function cmp_rdist()

2016-06-26 Thread Shanker Donthineni
cmp_rdist() always returns zero irrespective of the input
Redistributor base addresses. Both the local variables 'l' and 'r'
point to the first argument 'a', causing the logical expression
'l->base < r->base' to always evaluate to false, which is wrong.

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/gic-v3.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 8d3f149..b89c608 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1133,7 +1133,7 @@ static const hw_irq_controller gicv3_guest_irq_type = {
 
 static int __init cmp_rdist(const void *a, const void *b)
 {
-const struct rdist_region *l = a, *r = a;
+const struct rdist_region *l = a, *r = b;
 
 /* We assume that re-distributor regions can never overlap */
 return ( l->base < r->base) ? -1 : 0;
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH V2 02/10] arm/gic-v3: Do early GICD ioremap and clean up

2016-06-26 Thread Shanker Donthineni
For ACPI based Xen boot, the GICD region needs to be accessed inside
the function gicv3_acpi_init() in a later patch. There is also a
duplicate panic() message, one in the DTS probe path and a second one
in the ACPI probe path. For these two reasons, move the code that
validates the GICD base address and does the region ioremap into a
separate function. The following patch accesses the GICD region inside
gicv3_acpi_init() to find the per CPU Redistributor size.

Signed-off-by: Shanker Donthineni 
---
Changes since v1:
  Edited commit text.

 xen/arch/arm/gic-v3.c | 23 +--
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index b89c608..542c4f3 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1169,6 +1169,17 @@ static void __init gicv3_init_v2(void)
 vgic_v2_setup_hw(dbase, cbase, csize, vbase, 0);
 }
 
+static void __init gicv3_ioremap_distributor(paddr_t dist_paddr)
+{
+if ( dist_paddr & ~PAGE_MASK )
+panic("GICv3:  Found unaligned distributor address %"PRIpaddr"",
+  dbase);
+
+gicv3.map_dbase = ioremap_nocache(dist_paddr, SZ_64K);
+if ( !gicv3.map_dbase )
+panic("GICv3: Failed to ioremap for GIC distributor\n");
+}
+
 static void __init gicv3_dt_init(void)
 {
 struct rdist_region *rdist_regs;
@@ -1179,9 +1190,7 @@ static void __init gicv3_dt_init(void)
 if ( res )
 panic("GICv3: Cannot find a valid distributor address");
 
-if ( (dbase & ~PAGE_MASK) )
-panic("GICv3:  Found unaligned distributor address %"PRIpaddr"",
-  dbase);
+gicv3_ioremap_distributor(dbase);
 
 if ( !dt_property_read_u32(node, "#redistributor-regions",
 &gicv3.rdist_count) )
@@ -1415,9 +1424,7 @@ static void __init gicv3_acpi_init(void)
 if ( count <= 0 )
 panic("GICv3: No valid GICD entries exists");
 
-if ( (dbase & ~PAGE_MASK) )
-panic("GICv3: Found unaligned distributor address %"PRIpaddr"",
-  dbase);
+gicv3_ioremap_distributor(dbase);
 
 /* Get number of redistributor */
 count = acpi_parse_entries(ACPI_SIG_MADT, sizeof(struct acpi_table_madt),
@@ -1491,10 +1498,6 @@ static int __init gicv3_init(void)
 else
 gicv3_acpi_init();
 
-gicv3.map_dbase = ioremap_nocache(dbase, SZ_64K);
-if ( !gicv3.map_dbase )
-panic("GICv3: Failed to ioremap for GIC distributor\n");
-
 reg = readl_relaxed(GICD + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
 if ( reg != GIC_PIDR2_ARCH_GICv3 && reg != GIC_PIDR2_ARCH_GICv4 )
  panic("GICv3: no distributor detected\n");
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc. 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 96272: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96272 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96272/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. 
vs. 94856
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 
94856
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 
94856
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail 
REGR. vs. 94856
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. 
vs. 94856
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 94856
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail 
REGR. vs. 94856
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 94856

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 94856
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 94856
 test-amd64-amd64-xl-rtds  9 debian-install   fail   like 94856

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass

version targeted for testing:
 qemuua01aef5d2f96c334d048f43f0d3573a1152b37ca
baseline version:
 qemuud6550e9ed2e1a60d889dfb721de00d9a4e3bafbe

Last test of basis94856  2016-05-27 20:14:49 Z   29 days
Failing since 94983  2016-05-31 09:40:12 Z   26 days   36 attempts
Testing same since96228  2016-06-24 19:43:51 Z1 days3 attempts


People who touched revisions under test:
  Alberto Garcia 
  Alex Bennée 
  Alex Bligh 
  Alex Williamson 
  Alexander Graf 
  Alexey Kardashevskiy 
  Alistair Francis 
  Amit Shah 
  Andrea Arcangeli 
  Andrew Jeffery 
  Andrew Jones 
  Aneesh Kumar K.V 
  Anthony PERARD 
  Anton Blanchard 
  Ard Biesheuvel 
  Artyom Tarasenko 
  Benjamin Herrenschmidt 
  Bharata B Rao 
  Cao jin 
  Changlong Xie 
  Chao Peng 
  Chen Fan 
  Christian Borntraeger 
  Christophe Lyon 
  Cole Robinson 
  Colin Lord 
  Corey Minyard 
  Cornelia Huck 
  Cédric Le Goater 
  Daniel P. Berrange 
  David Gibson 
  David Hildenbrand 
  Denis V. Lunev 
  Dmitry Fleytman 
  Dmitry Fleytman 
  Dmitry Osipenko 
  Dr. David Alan Gilbert 
  Drew Jones 
  Edgar

Re: [Xen-devel] Discussion about virtual iommu support for Xen guest

2016-06-26 Thread Lan, Tianyu

On 6/8/2016 4:11 PM, Tian, Kevin wrote:

It makes sense... I thought you used this security issue against
placing vIOMMU in Qemu, which made me a bit confused earlier. :-)

We are still thinking feasibility of some staging plan, e.g. first
implementing some vIOMMU features w/o dependency on root-complex in
Xen (HVM only) and then later enabling full vIOMMU feature w/
root-complex in Xen (covering HVMLite). If we can reuse most code
between two stages while shorten time-to-market by half (e.g. from
2yr to 1yr), it's still worthy of pursuing. will report back soon
once the idea is consolidated...

Thanks Kevin



After discussion with Kevin, we have drafted a staging plan for
implementing vIOMMU in Xen based on the Qemu host bridge. Both virtual
devices and passthrough devices use one vIOMMU in Xen. Your comments
are very much appreciated.


1. Enable Q35 support in hvmloader.
In the real world, VT-d support starts from Q35, and an OS may assume
that VT-d only exists on Q35 or newer platforms. Q35 support therefore
seems necessary for vIOMMU support.

Regardless of whether the Q35 host bridge lives in Qemu or in the Xen
hypervisor, hvmloader needs to be compatible with Q35 and build Q35
ACPI tables.

Qemu already has Q35 emulation, so the hvmloader work can start with
Qemu. When the host bridge in Xen is ready, these changes can also be
reused.

2. Implement vIOMMU in Xen based on the Qemu host bridge.
Add a new device type "Xen iommu" in Qemu as a wrapper around the
vIOMMU hypercalls, to communicate with the Xen vIOMMU.

It's in charge of:
1) Querying the vIOMMU capability (e.g. interrupt remapping, DMA
translation, SVM and so on)
2) Creating the vIOMMU with a predefined base address for the IOMMU
unit registers
3) Notifying hvmloader to populate the related content in the ACPI
DMAR table (add vIOMMU info to struct hvm_info_table)
4) Dealing with DMA translation requests from virtual devices and
returning the translated address
5) Attaching/detaching hotplug devices to/from the vIOMMU


New hypercalls for the vIOMMU, which are also necessary when the host
bridge is in Xen (a rough sketch of such an interface is given at the
end of this mail):
1) Query vIOMMU capability
2) Create vIOMMU (IOMMU unit register base as a parameter)
3) Translate a virtual device's DMA
4) Attach/detach a hotplug device to/from the vIOMMU


All IOMMU emulation will be done in Xen:
1) DMA translation
2) Interrupt remapping
3) Shared Virtual Memory (SVM)
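
As a strawman for discussion, here is a minimal sketch of what the
hypercall interface listed above could look like. All names, numbers
and fields below are placeholders invented only for illustration; no
such interface exists in Xen today:

/* Hypothetical sketch only -- every identifier here is made up. */
#include <stdint.h>

#define XEN_VIOMMU_OP_query_caps   0   /* 1) query vIOMMU capability      */
#define XEN_VIOMMU_OP_create       1   /* 2) create vIOMMU                */
#define XEN_VIOMMU_OP_map_dma      2   /* 3) virtual device DMA translate */
#define XEN_VIOMMU_OP_set_device   3   /* 4) attach/detach hotplug device */

#define XEN_VIOMMU_CAP_intremap    (1u << 0)
#define XEN_VIOMMU_CAP_dma_trans   (1u << 1)
#define XEN_VIOMMU_CAP_svm         (1u << 2)

struct xen_viommu_op {
    uint32_t op;            /* one of XEN_VIOMMU_OP_*            */
    uint32_t viommu_id;     /* which vIOMMU instance             */
    union {
        struct {
            uint64_t caps;          /* OUT: XEN_VIOMMU_CAP_* bits    */
        } query_caps;
        struct {
            uint64_t base_addr;     /* IN: IOMMU unit register base  */
            uint64_t caps;          /* IN: capabilities to enable    */
        } create;
        struct {
            uint64_t iova;          /* IN: address from virtual dev  */
            uint64_t translated;    /* OUT: translated guest address */
        } map_dma;
        struct {
            uint32_t sbdf;          /* IN: device to attach/detach   */
            uint32_t attach;        /* IN: 1 = attach, 0 = detach    */
        } set_device;
    } u;
};

The "Xen iommu" wrapper device in Qemu would then simply marshal its
operations into such a structure and issue the hypercall.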

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] xc_domain_maximum_gpfn

2016-06-26 Thread sepanta s
Hi,
What exactly does this function do?
I got about 1 million after calling this function for a domain with 1
Gigabyte of RAM and a page size of 4K. If the output of the function is
the total number of gfns in a domain, shouldn't it be equal to
(ram-size)/(page-size)?
Is there any API to get the memory range of the domain to search
through its pages?
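
For reference, the arithmetic behind my question (just a rough sketch
of my assumptions; the interpretation of the observed value may well
be wrong):

#include <stdio.h>

/* Assumptions only: 1 GiB of guest RAM with 4 KiB pages, and that
 * xc_domain_maximum_gpfn() reports the highest guest frame number
 * rather than a page count. A max gpfn of roughly 0x100000 (about
 * 1 million) would correspond to the 4 GiB boundary, so holes in the
 * guest physical map (MMIO, pages placed high by the toolstack) could
 * explain the difference from (ram-size)/(page-size). */
int main(void)
{
    unsigned long ram_bytes  = 1UL << 30;   /* 1 GiB */
    unsigned long page_bytes = 1UL << 12;   /* 4 KiB */

    printf("expected pages:    %lu\n", ram_bytes / page_bytes); /* 262144 */
    printf("observed max gpfn: ~%lu\n", 1UL << 20);             /* ~1048576 */
    return 0;
}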
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] monitor access to pages with a specific p2m_type_t

2016-06-26 Thread sepanta s
On Fri, Jun 24, 2016 at 8:10 PM, Tamas K Lengyel 
wrote:

>
> On Jun 24, 2016 05:19, "Razvan Cojocaru" 
> wrote:
> >
> > On 06/24/2016 02:05 PM, George Dunlap wrote:
> > > On Wed, Jun 22, 2016 at 12:38 PM, sepanta s 
> wrote:
> > >> Hi,
> > >> Is it possible to monitor the access on the pages withp2m_type_t
> > >> p2m_ram_shared?
> > >
> > > cc'ing Tamas and Razvan
> >
> > Thanks for the CC. Judging by the "if ( npfec.write_access && (p2mt ==
> > p2m_ram_shared) )" line in hvm_hap_nested_page_fault() (from
> > xen/arch/x86/hvm/hvm.c), I'd say it certainly looks possible. But I
> > don't know what the context of the question is.
> >
> >
> > Thanks,
> > Razvan
>
The question is just about getting the gfn and mfn of pages of type
p2m_ram_shared, to see which pages are written to and unshared.

> Yes, p2m_ram_shared type pages can be monitored with mem_access just as
> normal pages. The only part that may be tricky is if you map the page into
> your monitoring application while the page is shared. Your handle will
> continue to be valid even if the page is unshared but it will continue to
> point to the shared page. However, even if you catch write access events to
> the shared page that will lead to unsharing, the mem_access notification is
> sent before unsharing. I just usually do unsharing myself in the mem_access
> callback manually for monitored pages for this reason. I might change the
> flow in 4.8 to send the notification after the unsharing happened to
> simplify this.
>
> Tamas
>
Thanks, but in mem_access, what APIs can be used to see such events?
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable-coverity test] 96284: all pass - PUSHED

2016-06-26 Thread osstest service owner
flight 96284 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96284/

Perfect :-)
All tests in this flight passed
version targeted for testing:
 xen  8384dc2d95538c5910d98db3df3ff5448bf0af48
baseline version:
 xen  c6f7d21747805b50123fc1b8d73518fea2aa9096

Last test of basis96110  2016-06-22 09:20:10 Z4 days
Testing same since96284  2016-06-26 09:26:17 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Dirk Behme 
  Dirk Behme 
  Jan Beulich 
  Julien Grall 
  Kevin Tian 
  Quan Xu 
  Razvan Cojocaru 
  Stefano Stabellini 
  Tamas K Lengyel 
  Wei Liu 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-coverity
+ revision=8384dc2d95538c5910d98db3df3ff5448bf0af48
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push 
xen-unstable-coverity 8384dc2d95538c5910d98db3df3ff5448bf0af48
+ branch=xen-unstable-coverity
+ revision=8384dc2d95538c5910d98db3df3ff5448bf0af48
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-coverity
+ qemuubranch=qemu-upstream-unstable-coverity
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-coverity
+ prevxenbranch=xen-4.7-testing
+ '[' x8384dc2d95538c5910d98db3df3ff5448bf0af48 = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : git
++ : git://xenbits.xen.org/rumpuser-xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/rumpuser-xen.git
+++ besteffort_repo https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ cached_repo https://github.com/rumpkernel/rumpkernel-netbsd-src 
'[fetch=try]'
+++ local repo=https://github.com/rumpkernel/rumpkernel-netbsd-src
+++ local 'options=[fetch=try]'
 getconfig GitCacheProxy
 perl -e '
use Osstest;
readglobalconfig();
print $c{"GitCacheProxy"} or die $!;
'
+++ local cache=git://cache:9419/
+++ '[' xgit://cache:9419/ '!=' x ']'
+++ echo 
'git://cache:9419/https://github.com/rumpkernel/rumpkernel-netbsd-src%20[fetch=try]'
++ : 
'git://cache:9419/https://github.com/rumpkernel/rumpkernel-netbsd-src%20[fetch=try]'
++ : git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://x

Re: [Xen-devel] [PATCH v12 4/6] IOMMU/x86: using a struct pci_dev* instead of SBDF

2016-06-26 Thread Xu, Quan
On June 24, 2016 7:46 PM, Tian, Kevin  wrote:
> > From: Xu, Quan
> > Sent: Friday, June 24, 2016 1:52 PM
> >
> > From: Quan Xu 
> >
> > a struct pci_dev* instead of SBDF is stored inside struct pci_ats_dev
> > and parameter to enable_ats_device().
> >
> > Signed-off-by: Quan Xu 
> 
> Can we unify the naming convention throughout the patch, e.g.
> always using ats_pdev for "struct pci_ats_dev" variable,

Kevin, Is it 'ats_dev'? -Quan

> while pdev for "struct
> pci_dev". It's quite confusing when reading the patch which has both named
> as pdev in various places I know the confusion is also in original code, 
> but
> please take this chance to clean them up. :-)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3] xen: arm: Update arm64 image header

2016-06-26 Thread Dirk Behme

Hi Julien,

On 26.06.2016 10:29, Julien Grall wrote:

Hello Dirk,

On 26/06/2016 06:47, Dirk Behme wrote:

On 23.06.2016 17:18, Julien Grall wrote:

On 23/06/16 07:38, Dirk Behme wrote:

+uint64_t res2;
  uint64_t res3;
  uint64_t res4;
-uint64_t res5;
-uint32_t magic1;
-uint32_t res6;
+uint32_t magic;/* Magic number, little endian,
"ARM\x64" */
+uint32_t res5;
  } zimage;
  uint64_t start, end;

@@ -354,20 +353,30 @@ static int kernel_zimage64_probe(struct
kernel_info *info,

  copy_from_paddr(&zimage, addr, sizeof(zimage));

-if ( zimage.magic0 != ZIMAGE64_MAGIC_V0 &&
- zimage.magic1 != ZIMAGE64_MAGIC_V1 )
+if ( zimage.magic != ZIMAGE64_MAGIC ) {
+printk(XENLOG_ERR "No valid magic found in header! Kernel
too old?\n");


I have found why there were no error messages here before. The
function kernel_probe will try the different formats supported one by
one.

So this message will be printed if the kernel is an ARM32 image, which
will confuse the user. So I would print this message only when
zimage.magic0 is equal to ZIMAGE64_MAGIC_V0.



Which we don't have with the recent header format any more.


Well, we control the structure in Xen. So we could re-introduce the
field magic0 through an union.

 > This does

mean I drop this message again, as it doesn't make sense if the
magic is
used for the format detection.


I would still prefer to keep an error message when only MAGIC_V0 is
present. This will avoid people to spend time understanding why it
does not work anymore.



This way

if ( zimage.magic != ZIMAGE64_MAGIC ) {
    if ( zimage.magic0 == ZIMAGE64_MAGIC_V0 )
        printk(XENLOG_ERR "No valid magic found in header! Kernel too old?\n");

    return -EINVAL;
}

with magic0 being a union with code0?

Best regards

Dirk


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 6/6] vt-d: fix vt-d Device-TLB flush timeout issue

2016-06-26 Thread Xu, Quan
On June 24, 2016 7:55 PM, Tian, Kevin  wrote:
> > From: Xu, Quan
> > Sent: Friday, June 24, 2016 1:52 PM
> > diff --git a/xen/drivers/passthrough/vtd/extern.h
> > b/xen/drivers/passthrough/vtd/extern.h
> > index 45357f2..efaff28 100644
> > --- a/xen/drivers/passthrough/vtd/extern.h
> > +++ b/xen/drivers/passthrough/vtd/extern.h
> > @@ -25,6 +25,7 @@
> >
> >  #define VTDPREFIX "[VT-D]"
> >
> > +struct pci_ats_dev;
> >  extern bool_t rwbf_quirk;
> >
> >  void print_iommu_regs(struct acpi_drhd_unit *drhd); @@ -60,8 +61,8 @@
> > int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
> >   u64 addr, unsigned int size_order, u64
> > type);
> >
> >  int __must_check qinval_device_iotlb_sync(struct iommu *iommu,
> > -  u32 max_invs_pend,
> > -  u16 sid, u16 size, u64 addr);
> > +  struct pci_ats_dev *ats_dev,
> > +  u16 did, u16 size, u64
> > + addr);
> >
> >  unsigned int get_cache_line_size(void);  void cacheline_flush(char
> > *); diff --git a/xen/drivers/passthrough/vtd/qinval.c
> > b/xen/drivers/passthrough/vtd/qinval.c
> > index 4492b29..e4e2771 100644
> > --- a/xen/drivers/passthrough/vtd/qinval.c
> > +++ b/xen/drivers/passthrough/vtd/qinval.c
> > @@ -27,11 +27,11 @@
> >  #include "dmar.h"
> >  #include "vtd.h"
> >  #include "extern.h"
> > +#include "../ats.h"
> 
> Earlier you said:
> >1. a forward declaration struct pci_ats_dev*, instead of
> >   including ats.h.
>

This context is 'in extern.h', but..

> But above you still have ats.h included.
> 

.. I really need to include 'ats.h' here, as the 'struct pci_ats_dev*' is used 
in this file.

> >
> >  #define VTD_QI_TIMEOUT 1
> >
> > -static int __must_check invalidate_sync(struct iommu *iommu,
> > -bool_t flush_dev_iotlb);
> > +static int __must_check invalidate_sync(struct iommu *iommu);
> 
> I don't understand the rationale behind. In earlier patch you introduce a new
> parameter which is however just removed later here
> 
In the earlier patch, I refactored invalidate_sync() to indicate whether we need to
flush the device IOTLB or not.
I change it back here, as I add a specific function - dev_invalidate_sync() - for
device IOTLB invalidation.

Quan





___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [ovmf test] 96258: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96258 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96258/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail REGR. 
vs. 94748
 test-amd64-amd64-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail 
REGR. vs. 94748

version targeted for testing:
 ovmf 9252d67ab3007601ddf983d1278cbe0e4a647f34
baseline version:
 ovmf dc99315b8732b6e3032d01319d3f534d440b43d0

Last test of basis94748  2016-05-24 22:43:25 Z   32 days
Failing since 94750  2016-05-25 03:43:08 Z   32 days   57 attempts
Testing same since96220  2016-06-24 14:13:59 Z1 days3 attempts


People who touched revisions under test:
  Ard Biesheuvel 
  Chao Zhang 
  Cinnamon Shia 
  Cohen, Eugene 
  Dandan Bi 
  Darbin Reyes 
  Eric Dong 
  Eugene Cohen 
  Evan Lloyd 
  Fu Siyuan 
  Fu, Siyuan 
  Gary Li 
  Gary Lin 
  Giri P Mudusuru 
  Hao Wu 
  Hegde Nagaraj P 
  hegdenag 
  Heyi Guo 
  Jan Dąbroś 
  Jan Dabros 
  Jeff Fan 
  Jiaxin Wu 
  Jiewen Yao 
  Joe Zhou 
  Katie Dellaquila 
  Laszlo Ersek 
  Liming Gao 
  Lu, ShifeiX A 
  lushifex 
  Marcin Wojtas 
  Marvin Häuser 
  Marvin Haeuser 
  Maurice Ma 
  Michael Zimmermann 
  Qiu Shumin 
  Ruiyu Ni 
  Ryan Harkin 
  Sami Mujawar 
  Satya Yarlagadda 
  Sriram Subramanian 
  Star Zeng 
  Tapan Shah 
  Thomas Palmer 
  Yonghong Zhu 
  Zhang Lubo 
  Zhang, Chao B 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2959 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 5/6] IOMMU: move the domain crash logic up to the generic IOMMU layer

2016-06-26 Thread Xu, Quan
On June 24, 2016 7:48 PM, Tian, Kevin  wrote:
> > From: Xu, Quan
> > Sent: Friday, June 24, 2016 1:52 PM
> >
> > From: Quan Xu 
> >
> > Signed-off-by: Quan Xu 
> >
> > CC: Julien Grall 
> > CC: Kevin Tian 
> > CC: Feng Wu 
> > CC: Jan Beulich 
> > CC: Suravee Suthikulpanit 
> > ---
> >  xen/drivers/passthrough/iommu.c | 30
> > --
> >  xen/drivers/passthrough/vtd/iommu.c | 11 +++
> >  2 files changed, 39 insertions(+), 2 deletions(-)
> 
> when you say "moving the logic up", I don't see any lines being deleted. Looks
> you are just "adding the domain crash logic"?

Yes, it is 'adding'..

Quan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 4/6] IOMMU/x86: using a struct pci_dev* instead of SBDF

2016-06-26 Thread Xu, Quan
On June 24, 2016 7:46 PM, Tian, Kevin  wrote:
> > From: Xu, Quan
> > Sent: Friday, June 24, 2016 1:52 PM
> >
> > From: Quan Xu 
> >
> > a struct pci_dev* instead of SBDF is stored inside struct pci_ats_dev
> > and parameter to enable_ats_device().
> >
> > Signed-off-by: Quan Xu 
> 
> Can we unify the naming convention throughout the patch, e.g.
> always using ats_pdev for "struct pci_ats_dev" variable, while pdev for 
> "struct
> pci_dev". It's quite confusing when reading the patch which has both named
> as pdev in various places I know the confusion is also in original code, 
> but
> please take this chance to clean them up. :-)

Makes sense. I'll fix it in the next patch soon.

Quan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3] xen: arm: Update arm64 image header

2016-06-26 Thread Julien Grall

Hello Dirk,

On 26/06/2016 06:47, Dirk Behme wrote:

On 23.06.2016 17:18, Julien Grall wrote:

On 23/06/16 07:38, Dirk Behme wrote:

+uint64_t res2;
  uint64_t res3;
  uint64_t res4;
-uint64_t res5;
-uint32_t magic1;
-uint32_t res6;
+uint32_t magic;/* Magic number, little endian,
"ARM\x64" */
+uint32_t res5;
  } zimage;
  uint64_t start, end;

@@ -354,20 +353,30 @@ static int kernel_zimage64_probe(struct
kernel_info *info,

  copy_from_paddr(&zimage, addr, sizeof(zimage));

-if ( zimage.magic0 != ZIMAGE64_MAGIC_V0 &&
- zimage.magic1 != ZIMAGE64_MAGIC_V1 )
+if ( zimage.magic != ZIMAGE64_MAGIC ) {
+printk(XENLOG_ERR "No valid magic found in header! Kernel
too old?\n");


I have found why there were no error messages here before. The
function kernel_probe will try the different formats supported one by
one.

So this message will be printed if the kernel is an ARM32 image, which
will confuse the user. So I would print this message only when
zimage.magic0 is equal to ZIMAGE64_MAGIC_V0.



Which we don't have with the recent header format any more.


Well, we control the structure in Xen. So we could re-introduce the 
field magic0 through a union.


> This does

mean I drop this message again, as it doesn't make sense if the magic is
used for the format detection.


I would still prefer to keep an error message when only MAGIC_V0 is 
present. This will save people from spending time trying to understand 
why it does not work anymore.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [libvirt test] 96270: tolerable FAIL - PUSHED

2016-06-26 Thread osstest service owner
flight 96270 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96270/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  d0a9dbc323118cfd78f078e1fff48207e82d4b8e
baseline version:
 libvirt  f294b83ee632a6330f3a3045fbb5bcb9d9951c03

Last test of basis96237  2016-06-25 04:22:09 Z1 days
Testing same since96270  2016-06-26 04:19:46 Z0 days1 attempts


People who touched revisions under test:
  Erik Skultety 
  Maxim Nestratov 
  Nikolay Shirokovskiy 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt fail
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-armhf-armhf-libvirt-qcow2   fail
 test-armhf-armhf-libvirt-raw fail
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=libvirt
+ revision=d0a9dbc323118cfd78f078e1fff48207e82d4b8e
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push libvirt 
d0a9dbc323118cfd78f078e1fff48207e82d4b8e
+ branch=libvirt
+ 

[Xen-devel] [xen-4.3-testing test] 96262: regressions - FAIL

2016-06-26 Thread osstest service owner
flight 96262 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/96262/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt5 libvirt-build fail REGR. vs. 87893
 build-amd64-libvirt   5 libvirt-build fail REGR. vs. 87893
 build-armhf   5 xen-build fail REGR. vs. 87893

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail in 96248 
pass in 96262
 test-amd64-i386-xend-qemut-winxpsp3 15 guest-localmigrate/x10 fail pass in 
96202
 test-amd64-i386-pv   17 guest-localmigrate/x10  fail pass in 96248
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-localmigrate   fail pass in 96248

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail in 96248 like 87893
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 87893
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 87893

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3 20 leak-check/check fail in 96202 never 
pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install  fail never pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail never pass
 build-amd64-rumpuserxen   6 xen-buildfail   never pass
 build-i386-rumpuserxen6 xen-buildfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass

version targeted for testing:
 xen  0a8c94fae993dd8f2b27fd4cc694f61c21de84bf
baseline version:
 xen  8fa31952e2d08ef63897c43b5e8b33475ebf5d93

Last test of basis87893  2016-03-29 13:49:52 Z   88 days
Failing since 92180  2016-04-20 17:49:21 Z   66 days   31 attempts
Testing same since96017  2016-06-20 17:22:27 Z5 days   10 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony Liguori 
  Anthony PERARD 
  Gerd Hoffmann 
  Ian Jackson 
  Jan Beulich 
  Jim Paris 
  Stefan Hajnoczi 
  Tim Deegan 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  fail
 build-i386   pass
 build-amd64-libvirt  fail
 build-armhf-libvirt  blocked 
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  pass
 test-armhf-armhf-xl  blocked 
 test-amd64-i386-xl   pass
 test-amd64-i386-qemut-rhel6hvm-amd   pass
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64  pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64