Re: [Xen-devel] [PATCH 2/2] xenstore: use temporary memory context for firing watches

2016-07-17 Thread Juergen Gross
On 15/07/16 18:21, Ian Jackson wrote:
> Juergen Gross writes ("[PATCH 2/2] xenstore: use temporary memory context for 
> firing watches"):
>> Use a temporary memory context for memory allocations when firing
> >> watches. This will avoid leaking memory in the case of long-lived
> >> connections and/or xenstore entries.
> 
> Can you please split out the non-functional argument change to
> get_node and get_parent ?  This is very hard to review as-is.

Okay.


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/2] xenstore: call each xenstored command function with temporary context

2016-07-17 Thread Juergen Gross
On 15/07/16 18:19, Ian Jackson wrote:
> Juergen Gross writes ("[PATCH 1/2] xenstore: call each xenstored command 
> function with temporary context"):
>> In order to avoid leaving temporary memory allocated after
>> processing of a command in xenstored, call all command functions with
>> the temporary "in" context. Each function can then use that
>> temporary context for allocating temporary memory instead of either
>> leaving that memory allocated until the connection is dropped (or
>> even until xenstored exits) or freeing the memory itself.
>>
>> This requires modifying the interfaces of the functions that take only
>> one argument from the connection.
> 
> AFAICT this patch has no functional change, and is just the argument
> changes to all these functions ?
> 
> If so please say that in the commit message!

Okay.


Juergen



[Xen-devel] linux-next: manual merge of the xen-tip tree with the tip tree

2016-07-17 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the xen-tip tree got a conflict in:

  arch/arm/xen/enlighten.c

between commit:

  4761adb6f490 ("arm/xen: Convert to hotplug state machine")

from the tip tree and commit:

  ecb23dc6f2ef ("xen: add steal_clock support on x86")

from the xen-tip tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc arch/arm/xen/enlighten.c
index d822e2313950,2f4c3aa540eb..
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@@ -334,8 -414,12 +397,8 @@@ static int __init xen_guest_init(void
return -EINVAL;
}
  
-   pv_time_ops.steal_clock = xen_stolen_accounting;
-   static_key_slow_inc(&paravirt_steal_enabled);
 -  xen_percpu_init();
 -
 -  register_cpu_notifier(&xen_cpu_notifier);
 -
+   xen_time_setup_guest();
+ 
if (xen_initial_domain())
pvclock_gtod_register_notifier(&xen_pvclock_gtod_notifier);
  



Re: [Xen-devel] [PATCH 2/3] xen-scsiback: One function call less in scsiback_device_action() after error detection

2016-07-17 Thread Juergen Gross
On 16/07/16 22:23, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Sat, 16 Jul 2016 21:42:42 +0200
> 
> The kfree() function was called in one case by the
> scsiback_device_action() function during error handling
> even if the passed variable "tmr" contained a null pointer.
> 
> Adjust jump targets according to the Linux coding style convention.
> 
> Signed-off-by: Markus Elfring 
> ---
>  drivers/xen/xen-scsiback.c | 7 ---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
> index 4a48c06..7612bc9 100644
> --- a/drivers/xen/xen-scsiback.c
> +++ b/drivers/xen/xen-scsiback.c
> @@ -606,7 +606,7 @@ static void scsiback_device_action(struct vscsibk_pend 
> *pending_req,
>   tmr = kzalloc(sizeof(struct scsiback_tmr), GFP_KERNEL);
>   if (!tmr) {
>   target_put_sess_cmd(se_cmd);
> - goto err;
> + goto do_resp;
>   }

Hmm, I'm not convinced this is an improvement.

I'd rather rename the new error label to "put_cmd" and get rid of the
braces in the above if statement:

-   if (!tmr) {
-   target_put_sess_cmd(se_cmd);
-   goto err;
-   }
+   if (!tmr)
+   goto put_cmd;

and then in the error path:

-err:
+put_cmd:
+   target_put_sess_cmd(se_cmd);
+free_tmr:
kfree(tmr);
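Taken together, the shape Juergen is suggesting can be sketched as a self-contained toy (the stubs and names are hypothetical stand-ins for target_put_sess_cmd() and scsiback_do_resp_with_sense(), not the actual driver code):

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the kernel calls; they count invocations. */
static int put_calls, free_calls, resp_calls;
static void put_cmd_stub(void) { put_calls++; }
static void do_resp_stub(void) { resp_calls++; }

/* Each label undoes exactly what was acquired before the failure, and
 * earlier labels fall through into later ones; kfree()/free() accepting
 * NULL is what makes the fall-through safe. */
static int device_action(int fail_alloc, int fail_submit)
{
    void *tmr = fail_alloc ? NULL : malloc(16);

    if (!tmr)
        goto put_cmd;       /* nothing allocated yet */

    if (fail_submit)
        goto free_tmr;      /* undo the allocation only */

    free(tmr);
    do_resp_stub();
    return 0;

put_cmd:
    put_cmd_stub();
free_tmr:
    free(tmr);              /* free(NULL) is a no-op */
    free_calls++;
    do_resp_stub();
    return -1;
}
```

The allocation-failure path releases the command and then falls through to the free, mirroring the proposed put_cmd/free_tmr label ordering.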


Juergen

>  
>   init_waitqueue_head(&tmr->tmr_wait);
> @@ -616,7 +616,7 @@ static void scsiback_device_action(struct vscsibk_pend 
> *pending_req,
>  unpacked_lun, tmr, act, GFP_KERNEL,
>  tag, TARGET_SCF_ACK_KREF);
>   if (rc)
> - goto err;
> + goto free_tmr;
>  
>   wait_event(tmr->tmr_wait, atomic_read(&tmr->tmr_complete));
>  
> @@ -626,8 +626,9 @@ static void scsiback_device_action(struct vscsibk_pend 
> *pending_req,
>   scsiback_do_resp_with_sense(NULL, err, 0, pending_req);
>   transport_generic_free_cmd(&pending_req->se_cmd, 1);
>   return;
> -err:
> +free_tmr:
>   kfree(tmr);
> +do_resp:
>   scsiback_do_resp_with_sense(NULL, err, 0, pending_req);
>  }
>  
> 




Re: [Xen-devel] [PATCH 3/3] xen-scsiback: Pass a failure indication as a constant

2016-07-17 Thread Juergen Gross
On 16/07/16 22:24, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Sat, 16 Jul 2016 21:55:01 +0200
> 
> Pass the constant "FAILED" in a function call directly instead of
> using an initialisation for a local variable.
> 
> Signed-off-by: Markus Elfring 

Reviewed-by: Juergen Gross 


Juergen



Re: [Xen-devel] [PATCH 1/3] xen-scsiback: Delete an unnecessary check before the function call "kfree"

2016-07-17 Thread Juergen Gross
On 16/07/16 22:22, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Sat, 16 Jul 2016 21:21:05 +0200
> 
> The kfree() function tests whether its argument is NULL and then
> returns immediately. Thus the test around the call is not needed.
> 
> This issue was detected by using the Coccinelle software.
> 
> Signed-off-by: Markus Elfring 

Reviewed-by: Juergen Gross 


Juergen



[Xen-devel] (no subject)

2016-07-17 Thread 姚 忠将
   Recently, I ran a test comparing I/O performance between Xen
Project and XenServer, and found that XenServer performs much better
than Xen Project.

I wanted to find out why XenServer is better, so I searched via
google.com. On www.xenproject.org I found
http://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance and
http://wiki.xenproject.org/wiki/Network_Throughput_and_Performance_Guide .
I set the parameters as those pages suggest, but it doesn't seem to
work: performance showed no obvious improvement.

So I am sending this mail to ask for some advice. Will you help me? If
so, I would be much obliged.


Sent from Mail for Windows 10



[Xen-devel] [qemu-mainline test] 97504: regressions - trouble: blocked/broken/fail/pass

2016-07-17 Thread osstest service owner
flight 97504 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97504/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops 3 host-install(3) broken REGR. vs. 96791
 test-amd64-amd64-libvirt-xsm 11 guest-start   fail REGR. vs. 96791
 test-amd64-amd64-libvirt-pair 20 guest-start/debian   fail REGR. vs. 96791
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail 
REGR. vs. 96791
 test-amd64-amd64-libvirt 11 guest-start   fail REGR. vs. 96791
 test-amd64-amd64-xl-qcow2 9 debian-di-install fail REGR. vs. 96791
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 96791
 test-amd64-amd64-libvirt-vhd  9 debian-di-install fail REGR. vs. 96791

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale 15 guest-start/debian.repeat fail in 97470 pass in 
97429
 test-amd64-amd64-xl-rtds  6 xen-bootfail pass in 97470

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  9 debian-installfail in 97470 like 96791
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 96791
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 96791

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt 14 guest-saverestore fail in 97470 never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail in 97470 never 
pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail in 97470 never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail in 97470 never 
pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail in 97470 never 
pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail in 97470 never 
pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail in 97470 
never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore fail in 97470 never pass
 test-armhf-armhf-xl  12 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl  13 saverestore-support-check fail in 97470 never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore fail in 97470 never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail in 97470 never 
pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore   fail in 97470 never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check fail in 97470 never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail in 97470 never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail in 97470 never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail in 97470 never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass

version targeted for testing:
 qemuu6b92bbfe812746fe7841a24c24e6460f5359ce72
baseline version:
 qemuu4f4a9ca4a4386c137301b3662faba076455ff15a

Last test of basis96791  2016-07-08 12:20:0

[Xen-devel] [ovmf test] 97509: regressions - FAIL

2016-07-17 Thread osstest service owner
flight 97509 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97509/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail REGR. 
vs. 94748
 test-amd64-amd64-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail 
REGR. vs. 94748

version targeted for testing:
 ovmf e2f5c491d8749c88cbf56168a3493d70ff19a382
baseline version:
 ovmf dc99315b8732b6e3032d01319d3f534d440b43d0

Last test of basis94748  2016-05-24 22:43:25 Z   54 days
Failing since 94750  2016-05-25 03:43:08 Z   53 days  116 attempts
Testing same since97375  2016-07-15 14:23:16 Z2 days5 attempts


People who touched revisions under test:
  Anandakrishnan Loganathan 
  Ard Biesheuvel 
  Bi, Dandan 
  Bret Barkelew 
  Bruce Cran 
  Bruce Cran 
  Chao Zhang 
  Cinnamon Shia 
  Cohen, Eugene 
  Dandan Bi 
  Darbin Reyes 
  david wei 
  Eric Dong 
  Eugene Cohen 
  Evan Lloyd 
  Evgeny Yakovlev 
  Feng Tian 
  Fu Siyuan 
  Fu, Siyuan 
  Gary Li 
  Gary Lin 
  Giri P Mudusuru 
  Graeme Gregory 
  Hao Wu 
  Hegde Nagaraj P 
  Hegde, Nagaraj P 
  hegdenag 
  Heyi Guo 
  Jan Dąbroś 
  Jan Dabros 
  Jeff Fan 
  Jeremy Linton 
  Jiaxin Wu 
  Jiewen Yao 
  Joe Zhou 
  Jordan Justen 
  Katie Dellaquila 
  Laszlo Ersek 
  Liming Gao 
  Lu, ShifeiX A 
  lushifex 
  Marcin Wojtas 
  Mark Rutland 
  Marvin Häuser 
  Marvin Haeuser 
  Maurice Ma 
  Michael Zimmermann 
  Mudusuru, Giri P 
  Ni, Ruiyu 
  Qiu Shumin 
  Ruiyu Ni 
  Ruiyu Ni 
  Ryan Harkin 
  Sami Mujawar 
  Satya Yarlagadda 
  Shannon Zhao 
  Sriram Subramanian 
  Star Zeng 
  Subramanian, Sriram (EG Servers Platform SW) 
  Sunny Wang 
  Tapan Shah 
  Thomas Palmer 
  Yarlagadda, Satya P 
  Yonghong Zhu 
  Zhang Lubo 
  Zhang, Chao B 
  Zhang, Lubo 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 10330 lines long.)



[Xen-devel] [linux-4.1 test] 97496: regressions - FAIL

2016-07-17 Thread osstest service owner
flight 97496 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97496/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail 
REGR. vs. 96211
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. 
vs. 96211
 test-amd64-i386-qemuu-rhel6hvm-intel  9 redhat-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96211
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 
96211
 test-amd64-i386-freebsd10-amd64  9 freebsd-installfail REGR. vs. 96211
 test-amd64-i386-libvirt   9 debian-installfail REGR. vs. 96211
 test-amd64-i386-xl-xsm9 debian-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. 
vs. 96211
 test-amd64-i386-xl9 debian-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemut-winxpsp3  9 windows-install  fail REGR. vs. 96211
 test-amd64-i386-xl-raw9 debian-di-install fail REGR. vs. 96211
 test-armhf-armhf-xl-arndale   9 debian-installfail REGR. vs. 96211
 test-amd64-amd64-xl-multivcpu  6 xen-boot fail REGR. vs. 96211
 test-amd64-i386-freebsd10-i386  9 freebsd-install fail REGR. vs. 96211
 test-amd64-i386-qemuu-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96211
 test-armhf-armhf-xl-xsm   9 debian-installfail REGR. vs. 96211
 test-armhf-armhf-xl   9 debian-installfail REGR. vs. 96211
 test-amd64-i386-libvirt-xsm   9 debian-installfail REGR. vs. 96211
 test-amd64-amd64-i386-pvgrub  6 xen-boot  fail REGR. vs. 96211
 test-amd64-i386-qemut-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96211
 test-armhf-armhf-xl-multivcpu  9 debian-install   fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-win7-amd64  6 xen-boot  fail REGR. vs. 96211
 test-armhf-armhf-libvirt-xsm  9 debian-installfail REGR. vs. 96211
 test-armhf-armhf-libvirt  9 debian-installfail REGR. vs. 96211
 test-armhf-armhf-xl-cubietruck  9 debian-install  fail REGR. vs. 96211
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-credit2   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 
96211
 test-amd64-amd64-xl-qemut-debianhvm-amd64  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-bootfail REGR. vs. 96211
 test-amd64-amd64-xl   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot   fail REGR. vs. 96211
 test-amd64-amd64-pygrub   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-pvh-amd   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qcow2 6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-libvirt-vhd  6 xen-boot  fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 
96211
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-pvh-intel  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-bootfail REGR. vs. 96211
 test-amd64-amd64-libvirt-xsm  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-xsm   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-libvirt  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. 
vs. 96211
 test-armhf-armhf-xl-credit2   9 debian-installfail REGR. vs. 96211
 test-amd64-i386-qemut-rhel6hvm-intel  9 redhat-installfail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-win7-amd64  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-qemuu-nested-amd  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-amd64-pvgrub  6 xen-boot fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-winxpsp3  9 windows-install  fail REGR. vs. 96211
 test-amd64-i386-libvirt-pair 15 debian-install/dst_host   fail REGR. vs. 96211
 test-amd64-i386-pair 15 debian-install/dst_host   fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 96211
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install   fail REGR. vs. 96211
 test-amd64-i386-xl-qemut-win7-amd64  9 windows-installfail REGR. vs. 96211
 test-amd64-amd64-pair  

[Xen-devel] [RFC Design Doc v2] Add vNVDIMM support for Xen

2016-07-17 Thread Haozhong Zhang
Hi,

Following is version 2 of the design doc for supporting vNVDIMM in
Xen. It's basically the summary of discussion on previous v1 design
(https://lists.xenproject.org/archives/html/xen-devel/2016-02/msg6.html).
Any comments are welcome. The corresponding patches are WIP.

Thanks,
Haozhong



vNVDIMM Design v2

Changes in v2:
 - Rewrite the design details based on previous discussion [7].
 - Add Section 3 Usage Example of vNVDIMM in Xen.
 - Remove content about pcommit instruction which has been deprecated [8].

Content
===
1. Background
 1.1 Access Mechanisms: Persistent Memory and Block Window
 1.2 ACPI Support
  1.2.1 NFIT
  1.2.2 _DSM and _FIT
 1.3 Namespace
 1.4 clwb/clflushopt
2. NVDIMM/vNVDIMM Support in Linux Kernel/KVM/QEMU
 2.1 NVDIMM Driver in Linux Kernel
 2.2 vNVDIMM Implementation in KVM/QEMU
3. Usage Example of vNVDIMM in Xen
4. Design of vNVDIMM in Xen
 4.1 Guest clwb/clflushopt Enabling
 4.2 pmem Address Management
  4.2.1 Reserve Storage for Management Structures
  4.2.2 Detection of Host pmem Devices
  4.2.3 Get Host Machine Address (SPA) of Host pmem Files
  4.2.4 Map Host pmem to Guests
  4.2.5 Misc 1: RAS
  4.2.6 Misc 2: hotplug
 4.3 Guest ACPI Emulation
  4.3.1 Building Guest ACPI Tables
  4.3.2 Emulating Guest _DSM
References


Non-Volatile DIMM or NVDIMM is a type of RAM device that provides
persistent storage and retains data across reboot and even power
failures. This document describes the design to provide virtual NVDIMM
devices or vNVDIMM in Xen.

The rest of this document is organized as below.
 - Section 1 introduces the background knowledge of NVDIMM hardware,
   which is used by other parts of this document.

 - Section 2 briefly introduces the current/future NVDIMM/vNVDIMM
   support in Linux kernel/KVM/QEMU. They will affect the vNVDIMM
   design in Xen.

 - Section 3 shows the basic usage example of vNVDIMM in Xen.

 - Section 4 proposes design details of vNVDIMM in Xen.



1. Background

1.1 Access Mechanisms: Persistent Memory and Block Window

 NVDIMM provides two access mechanisms: byte-addressable persistent
 memory (pmem) and block window (pblk). An NVDIMM can contain multiple
 ranges and each range can be accessed through either pmem or pblk
 (but not both).

 The byte-addressable persistent memory mechanism (pmem) maps an NVDIMM
 or ranges of an NVDIMM into the system physical address (SPA) space,
 so that software can access the NVDIMM via normal memory loads and
 stores. If virtual addresses are used, the MMU translates them to
 physical addresses.

 In a virtualized environment, we can pass through a pmem range, or
 part of one, to a guest by mapping it in EPT (i.e. mapping guest
 vNVDIMM physical addresses to host NVDIMM physical addresses), so that
 guest accesses go directly to the host NVDIMM device without the
 hypervisor's interception.

 The block window mechanism (pblk) provides one or more block windows
 (BW). Each BW is composed of a command register, a status register
 and an 8-Kbyte aperture register. Software fills in the direction of
 the transfer (read/write), the start address (LBA) and the size of the
 data to transfer on the NVDIMM. If nothing goes wrong, the transferred
 data can be read/written via the aperture register, and the status and
 errors of the transfer can be obtained from the status register. Other
 vendor-specific commands and status can be implemented for BW as
 well. Details of the block window access mechanism can be found in [3].
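 To make the register-level flow concrete, here is a toy software
 simulation of a BW read (the structure layout, field names and the
 512-byte LBA granularity are invented for illustration; the real
 register formats are specified in [3]):

```c
#include <stdint.h>
#include <string.h>

#define BW_APERTURE_SIZE 8192   /* 8 KB aperture, as described above */
#define BW_DIR_READ      0
#define BW_STATUS_OK     0

/* Invented layout: command register, status register, 8 KB aperture. */
struct bw_window {
    struct { uint64_t lba; uint32_t len; uint8_t dir; } cmd;
    uint32_t status;
    uint8_t  aperture[BW_APERTURE_SIZE];
};

/* Simulated NVDIMM backing media. */
static uint8_t nvdimm_media[1 << 16];

/* "Hardware" side: executes whatever the command register describes. */
static void bw_execute(struct bw_window *bw)
{
    if (bw->cmd.dir == BW_DIR_READ)
        memcpy(bw->aperture, nvdimm_media + bw->cmd.lba * 512, bw->cmd.len);
    else
        memcpy(nvdimm_media + bw->cmd.lba * 512, bw->aperture, bw->cmd.len);
    bw->status = BW_STATUS_OK;
}

/* Driver side: fill direction/LBA/size, kick, check status, read aperture. */
static int bw_read(struct bw_window *bw, uint64_t lba, void *buf, uint32_t len)
{
    bw->cmd.lba = lba;
    bw->cmd.len = len;
    bw->cmd.dir = BW_DIR_READ;
    bw_execute(bw);                 /* on real hardware: doorbell + poll */
    if (bw->status != BW_STATUS_OK)
        return -1;
    memcpy(buf, bw->aperture, len);
    return 0;
}
```

 Emulating this per-guest command/status/aperture handshake is exactly
 the interception overhead mentioned below.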

 In a virtualized environment, different pblk regions on a single
 NVDIMM device may be accessed by different guests, so the hypervisor
 needs to emulate the BW, which would introduce high overhead for
 I/O-intensive workloads.

 Therefore, we are going to implement only pmem for vNVDIMM. The rest
 of this document will mostly concentrate on pmem.


1.2 ACPI Support

 ACPI provides two kinds of support for NVDIMM. First, NVDIMM devices
 are described by firmware (BIOS/EFI) to the OS via the ACPI-defined
 NVDIMM Firmware Interface Table (NFIT). Second, several functions of
 NVDIMM, including operations on namespace labels, S.M.A.R.T and
 hotplug, are provided by ACPI methods (_DSM and _FIT).

1.2.1 NFIT

 NFIT is a new system description table added in ACPI v6 with
 signature "NFIT". It contains a set of structures.

 - System Physical Address Range Structure
   (SPA Range Structure)

   SPA range structure describes system physical address ranges
   occupied by NVDIMMs and types of regions.

   If the Address Range Type GUID field of a SPA range structure is
   "Byte Addressable Persistent Memory (PM) Region", then the structure
   describes an NVDIMM region that is accessed via pmem. The System
   Physical Address Range Base and Length fields give the start
   system physical address and the length occupied by that NVDIMM
   region.

   A SPA range structure is identified by a non-zero SPA range
   structure index.

   Note: [1] reserves E820 type 7: OSPM must comprehend this memory as
 hav

[Xen-devel] [PATCH V2 3/4] xen/arm: io: Use binary search for mmio handler lookup

2016-07-17 Thread Shanker Donthineni
As the number of I/O handlers increases, the overhead associated with
linear lookup also increases. The system might have a maximum of 144
(assuming CONFIG_NR_CPUS=128) mmio handlers. In the worst case it
would take 144 iterations to find a matching handler. It is time for
us to change from linear search (complexity O(n)) to binary search
(complexity O(log n)) to reduce the mmio handler lookup overhead.

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/io.c | 39 ---
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 40330f0..e8aa7fa 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -70,27 +71,31 @@ static int handle_write(const struct mmio_handler *handler, 
struct vcpu *v,
handler->priv);
 }
 
-static const struct mmio_handler *find_mmio_handler(struct domain *d,
-paddr_t gpa)
+/* This function assumes that mmio regions are not overlapped */
+static int cmp_mmio_handler(const void *key, const void *elem)
 {
-const struct mmio_handler *handler;
-unsigned int i;
-struct vmmio *vmmio = &d->arch.vmmio;
+const struct mmio_handler *handler0 = key;
+const struct mmio_handler *handler1 = elem;
 
-read_lock(&vmmio->lock);
+if ( handler0->addr < handler1->addr )
+return -1;
 
-for ( i = 0; i < vmmio->num_entries; i++ )
-{
-handler = &vmmio->handlers[i];
+if ( handler0->addr > (handler1->addr + handler1->size) )
+return 1;
 
-if ( (gpa >= handler->addr) &&
- (gpa < (handler->addr + handler->size)) )
-break;
-}
+return 0;
+}
 
-if ( i == vmmio->num_entries )
-handler = NULL;
+static const struct mmio_handler *find_mmio_handler(struct domain *d,
+paddr_t gpa)
+{
+struct vmmio *vmmio = &d->arch.vmmio;
+struct mmio_handler key = {.addr = gpa};
+const struct mmio_handler *handler;
 
+read_lock(&vmmio->lock);
+handler = bsearch(&key, vmmio->handlers, vmmio->num_entries,
+  sizeof(*handler), cmp_mmio_handler);
 read_unlock(&vmmio->lock);
 
 return handler;
@@ -131,6 +136,10 @@ void register_mmio_handler(struct domain *d,
 
 vmmio->num_entries++;
 
+/* Sort mmio handlers in ascending order based on base address */
+sort(vmmio->handlers, vmmio->num_entries, sizeof(struct mmio_handler),
+ cmp_mmio_handler, NULL);
+
 write_unlock(&vmmio->lock);
 }
 
-- 
Qualcomm Datacenter Technologies, Inc. on behalf of the Qualcomm Technologies, 
Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux 
Foundation Collaborative Project.
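The lookup scheme in this patch can be sketched in plain C with the libc
bsearch(), using a half-open [addr, addr+size) match to mirror the original
linear scan's condition (types simplified; this is not the Xen code):

```c
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t paddr_t;

struct handler {
    paddr_t addr;
    paddr_t size;
};

/* Key matches an element when it falls within [addr, addr + size),
 * i.e. the same half-open test the original linear scan used. */
static int cmp_handler(const void *key, const void *elem)
{
    const struct handler *k = key;
    const struct handler *h = elem;

    if (k->addr < h->addr)
        return -1;
    if (k->addr >= h->addr + h->size)
        return 1;
    return 0;
}

/* Table must be sorted by base address with non-overlapping regions. */
static const struct handler *find_handler(const struct handler *tbl,
                                          size_t n, paddr_t gpa)
{
    struct handler key = { .addr = gpa, .size = 0 };

    return bsearch(&key, tbl, n, sizeof(*tbl), cmp_handler);
}
```

The comparator only returns 0 when the key address falls inside an
element's range, so a miss yields NULL just as the linear scan did.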




[Xen-devel] [PATCH V2 1/4] arm/io: Use separate memory allocation for mmio handlers

2016-07-17 Thread Shanker Donthineni
The number of mmio handlers is limited to a compile-time macro
MAX_IO_HANDLER, which is 16. This number is not at all sufficient
to support per-CPU distributor regions. Either it needs to be
increased to a bigger number, at least CONFIG_NR_CPUS+16, or
separate memory for mmio handlers needs to be allocated dynamically
during domain build.

This patch uses the dynamic allocation strategy to reduce the memory
footprint of 'struct domain' instead of static allocation.

Signed-off-by: Shanker Donthineni 
Acked-by: Julien Grall 
---
 xen/arch/arm/domain.c  |  6 --
 xen/arch/arm/io.c  | 13 +++--
 xen/include/asm-arm/mmio.h |  7 +--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 61fc08e..0170cee 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -546,7 +546,7 @@ void vcpu_destroy(struct vcpu *v)
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
struct xen_arch_domainconfig *config)
 {
-int rc;
+int rc, count;
 
 d->arch.relmem = RELMEM_not_started;
 
@@ -569,7 +569,8 @@ int arch_domain_create(struct domain *d, unsigned int 
domcr_flags,
 share_xen_page_with_guest(
 virt_to_page(d->shared_info), d, XENSHARE_writable);
 
-if ( (rc = domain_io_init(d)) != 0 )
+count = MAX_IO_HANDLER;
+if ( (rc = domain_io_init(d, count)) != 0 )
 goto fail;
 
 if ( (rc = p2m_alloc_table(d)) != 0 )
@@ -663,6 +664,7 @@ void arch_domain_destroy(struct domain *d)
 free_xenheap_pages(d->arch.efi_acpi_table,
get_order_from_bytes(d->arch.efi_acpi_len));
 #endif
+domain_io_free(d);
 }
 
 void arch_domain_shutdown(struct domain *d)
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 5a96836..40330f0 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -118,7 +118,7 @@ void register_mmio_handler(struct domain *d,
 struct vmmio *vmmio = &d->arch.vmmio;
 struct mmio_handler *handler;
 
-BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);
+BUG_ON(vmmio->num_entries >= vmmio->max_num_entries);
 
 write_lock(&vmmio->lock);
 
@@ -134,14 +134,23 @@ void register_mmio_handler(struct domain *d,
 write_unlock(&vmmio->lock);
 }
 
-int domain_io_init(struct domain *d)
+int domain_io_init(struct domain *d, int max_count)
 {
 rwlock_init(&d->arch.vmmio.lock);
 d->arch.vmmio.num_entries = 0;
+d->arch.vmmio.max_num_entries = max_count;
+d->arch.vmmio.handlers = xzalloc_array(struct mmio_handler, max_count);
+if ( !d->arch.vmmio.handlers )
+return -ENOMEM;
 
 return 0;
 }
 
+void domain_io_free(struct domain *d)
+{
+xfree(d->arch.vmmio.handlers);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index 32f10f2..c620eed 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -52,15 +52,18 @@ struct mmio_handler {
 
 struct vmmio {
 int num_entries;
+int max_num_entries;
 rwlock_t lock;
-struct mmio_handler handlers[MAX_IO_HANDLER];
+struct mmio_handler *handlers;
 };
 
 extern int handle_mmio(mmio_info_t *info);
 void register_mmio_handler(struct domain *d,
const struct mmio_handler_ops *ops,
paddr_t addr, paddr_t size, void *priv);
-int domain_io_init(struct domain *d);
+int domain_io_init(struct domain *d, int max_count);
+void domain_io_free(struct domain *d);
+
 
 #endif  /* __ASM_ARM_MMIO_H__ */
 
-- 
Qualcomm Datacenter Technologies, Inc. on behalf of the Qualcomm Technologies, 
Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux 
Foundation Collaborative Project.




[Xen-devel] [PATCH V2 2/4] xen: Add generic implementation of binary search

2016-07-17 Thread Shanker Donthineni
This patch adds a generic implementation of the binary search algorithm,
copied from Linux kernel v4.7-rc7. No functional changes.

Signed-off-by: Shanker Donthineni 
Reviewed-by: Andrew Cooper 
---
Changes since v1:
 Removed the header file xen/include/xen/bsearch.h.
 Defined function bsearch() prototype in the header file xen/lib.h.

 xen/common/Makefile   |  1 +
 xen/common/bsearch.c  | 51 +++
 xen/include/xen/lib.h |  3 +++
 3 files changed, 55 insertions(+)
 create mode 100644 xen/common/bsearch.c

diff --git a/xen/common/Makefile b/xen/common/Makefile
index dbf00c6..f8123c2 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -43,6 +43,7 @@ obj-y += schedule.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
+obj-y += bsearch.o
 obj-y += smp.o
 obj-y += spinlock.o
 obj-y += stop_machine.o
diff --git a/xen/common/bsearch.c b/xen/common/bsearch.c
new file mode 100644
index 000..7090930
--- /dev/null
+++ b/xen/common/bsearch.c
@@ -0,0 +1,51 @@
+/*
+ * A generic implementation of binary search for the Linux kernel
+ *
+ * Copyright (C) 2008-2009 Ksplice, Inc.
+ * Author: Tim Abbott 
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; version 2.
+ */
+
+#include 
+
+/*
+ * bsearch - binary search an array of elements
+ * @key: pointer to item being searched for
+ * @base: pointer to first element to search
+ * @num: number of elements
+ * @size: size of each element
+ * @cmp: pointer to comparison function
+ *
+ * This function does a binary search on the given array.  The
+ * contents of the array should already be in ascending sorted order
+ * under the provided comparison function.
+ *
+ * Note that the key need not have the same type as the elements in
+ * the array, e.g. key could be a string and the comparison function
+ * could compare the string with the struct's name field.  However, if
+ * the key and elements in the array are of the same type, you can use
+ * the same comparison function for both sort() and bsearch().
+ */
+void *bsearch(const void *key, const void *base, size_t num, size_t size,
+ int (*cmp)(const void *key, const void *elt))
+{
+    size_t start = 0, end = num;
+    int result;
+
+    while (start < end) {
+        size_t mid = start + (end - start) / 2;
+
+        result = cmp(key, base + mid * size);
+        if (result < 0)
+            end = mid;
+        else if (result > 0)
+            start = mid + 1;
+        else
+            return (void *)base + mid * size;
+    }
+
+    return NULL;
+}
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index b1b0fb2..b90d582 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -153,4 +153,7 @@ void dump_execstate(struct cpu_user_regs *);
 
 void init_constructors(void);
 
+void *bsearch(const void *key, const void *base, size_t num, size_t size,
+  int (*cmp)(const void *key, const void *elt));
+
 #endif /* __LIB_H__ */




[Xen-devel] [PATCH V2 0/4] Change fixed mmio handlers to a variable number

2016-07-17 Thread Shanker Donthineni
The maximum number of mmio handlers is currently limited by the macro
MAX_IO_HANDLER (16), which is not enough to support per-CPU
Redistributor regions. We need at least MAX_IO_HANDLER+CONFIG_NR_CPUS
mmio handlers in order to support ACPI-based Xen boot.

This patchset allocates the mmio handler array dynamically, sized by
the number of Redistributor regions described in the ACPI MADT table.

Shanker Donthineni (4):
  arm/io: Use separate memory allocation for mmio handlers
  xen: Add generic implementation of binary search
  xen/arm: io: Use binary search for mmio handler lookup
  arm/vgic: Change fixed number of mmio handlers to variable number

 xen/arch/arm/domain.c  | 12 +++
 xen/arch/arm/io.c  | 52 +++---
 xen/arch/arm/vgic-v2.c |  3 ++-
 xen/arch/arm/vgic-v3.c |  5 -
 xen/arch/arm/vgic.c| 10 +++--
 xen/common/Makefile|  1 +
 xen/common/bsearch.c   | 51 +
 xen/include/asm-arm/mmio.h |  7 +--
 xen/include/asm-arm/vgic.h |  5 +++--
 xen/include/xen/lib.h  |  3 +++
 10 files changed, 115 insertions(+), 34 deletions(-)
 create mode 100644 xen/common/bsearch.c





[Xen-devel] [PATCH V2 4/4] arm/vgic: Change fixed number of mmio handlers to variable number

2016-07-17 Thread Shanker Donthineni
Compute the number of mmio handlers required by the vGICv3 and vGICv2
emulation drivers in vgic_v3_init()/vgic_v2_init(). Add the fixed number
MAX_IO_HANDLER to this variable count and pass the sum to
domain_io_init() to allocate enough memory.

New code path:
 domain_vgic_register(&count)
   domain_io_init(count + MAX_IO_HANDLER)
 domain_vgic_init()

Signed-off-by: Shanker Donthineni 
---
 xen/arch/arm/domain.c  | 12 +++-
 xen/arch/arm/vgic-v2.c |  3 ++-
 xen/arch/arm/vgic-v3.c |  5 -
 xen/arch/arm/vgic.c| 10 +++---
 xen/include/asm-arm/vgic.h |  5 +++--
 5 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 0170cee..4e5259b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -546,7 +546,7 @@ void vcpu_destroy(struct vcpu *v)
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
struct xen_arch_domainconfig *config)
 {
-int rc, count;
+int rc, count = 0;
 
 d->arch.relmem = RELMEM_not_started;
 
@@ -569,10 +569,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
 share_xen_page_with_guest(
 virt_to_page(d->shared_info), d, XENSHARE_writable);
 
-count = MAX_IO_HANDLER;
-if ( (rc = domain_io_init(d, count)) != 0 )
-goto fail;
-
 if ( (rc = p2m_alloc_table(d)) != 0 )
 goto fail;
 
@@ -609,6 +605,12 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
 goto fail;
 }
 
+if ( (rc = domain_vgic_register(d, &count)) != 0 )
+goto fail;
+
+if ( (rc = domain_io_init(d, count + MAX_IO_HANDLER)) != 0 )
+goto fail;
+
 if ( (rc = domain_vgic_init(d, config->nr_spis)) != 0 )
 goto fail;
 
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 6a5e67b..c6d280e 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -711,7 +711,7 @@ static const struct vgic_ops vgic_v2_ops = {
 .max_vcpus = 8,
 };
 
-int vgic_v2_init(struct domain *d)
+int vgic_v2_init(struct domain *d, int *mmio_count)
 {
 if ( !vgic_v2_hw.enabled )
 {
@@ -721,6 +721,7 @@ int vgic_v2_init(struct domain *d)
 return -ENODEV;
 }
 
+*mmio_count = 1; /* Only GICD region */
 register_vgic_ops(d, &vgic_v2_ops);
 
 return 0;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index be9a9a3..ec038a3 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1499,7 +1499,7 @@ static const struct vgic_ops v3_ops = {
 .max_vcpus = 4096,
 };
 
-int vgic_v3_init(struct domain *d)
+int vgic_v3_init(struct domain *d, int *mmio_count)
 {
 if ( !vgic_v3_hw.enabled )
 {
@@ -1509,6 +1509,9 @@ int vgic_v3_init(struct domain *d)
 return -ENODEV;
 }
 
+/* GICD region + number of Redistributors */
+*mmio_count = vgic_v3_rdist_count(d) + 1;
+
 register_vgic_ops(d, &v3_ops);
 
 return 0;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index a7ccfe7..768cb91 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -88,18 +88,18 @@ static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
 rank->vcpu[i] = vcpu;
 }
 
-static int domain_vgic_register(struct domain *d)
+int domain_vgic_register(struct domain *d, int *mmio_count)
 {
 switch ( d->arch.vgic.version )
 {
 #ifdef CONFIG_HAS_GICV3
 case GIC_V3:
-if ( vgic_v3_init(d) )
+if ( vgic_v3_init(d, mmio_count) )
return -ENODEV;
 break;
 #endif
 case GIC_V2:
-if ( vgic_v2_init(d) )
+if ( vgic_v2_init(d, mmio_count) )
 return -ENODEV;
 break;
 default:
@@ -124,10 +124,6 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
 
 d->arch.vgic.nr_spis = nr_spis;
 
-ret = domain_vgic_register(d);
-if ( ret < 0 )
-return ret;
-
 spin_lock_init(&d->arch.vgic.lock);
 
 d->arch.vgic.shared_irqs =
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index c3cc4f6..300f461 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -304,9 +304,10 @@ extern int vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
 extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
-int vgic_v2_init(struct domain *d);
-int vgic_v3_init(struct domain *d);
+int vgic_v2_init(struct domain *d, int *mmio_count);
+int vgic_v3_init(struct domain *d, int *mmio_count);
 
+extern int domain_vgic_register(struct domain *d, int *mmio_count);
 extern int vcpu_vgic_free(struct vcpu *v);
 extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
enum gic_sgi_mode irqmode, int virq,

[Xen-devel] [linux-3.18 test] 97487: regressions - trouble: blocked/broken/fail/pass

2016-07-17 Thread osstest service owner
flight 97487 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97487/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 
96188
 test-amd64-amd64-pygrub   6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. 
vs. 96188
 test-armhf-armhf-xl-xsm   9 debian-installfail REGR. vs. 96188
 test-amd64-i386-xl-qemut-win7-amd64  9 windows-installfail REGR. vs. 96188
 test-amd64-i386-xl-qemut-winxpsp3  9 windows-install  fail REGR. vs. 96188
 test-armhf-armhf-libvirt  9 debian-installfail REGR. vs. 96188
 test-amd64-amd64-amd64-pvgrub  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-xl-xsm9 debian-installfail REGR. vs. 96188
 test-amd64-i386-freebsd10-amd64  9 freebsd-installfail REGR. vs. 96188
 test-amd64-amd64-i386-pvgrub  6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-qemut-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96188
 test-armhf-armhf-libvirt-xsm  9 debian-installfail REGR. vs. 96188
 test-amd64-i386-xl9 debian-installfail REGR. vs. 96188
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96188
 test-amd64-amd64-libvirt  6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-bootfail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96188
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. 
vs. 96188
 test-amd64-amd64-xl-xsm   6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install   fail REGR. vs. 96188
 test-amd64-i386-qemuu-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96188
 test-amd64-i386-qemut-rhel6hvm-intel  9 redhat-installfail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-libvirt-xsm   9 debian-installfail REGR. vs. 96188
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail 
REGR. vs. 96188
 test-amd64-amd64-libvirt-vhd  6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-xl-multivcpu  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot   fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-bootfail REGR. vs. 96188
 test-amd64-amd64-xl-pvh-intel  6 xen-boot fail REGR. vs. 96188
 test-armhf-armhf-xl-multivcpu  9 debian-install   fail REGR. vs. 96188
 test-amd64-amd64-xl   6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-libvirt-xsm  6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-win7-amd64  9 windows-installfail REGR. vs. 96188
 test-amd64-i386-libvirt   9 debian-installfail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-win7-amd64  6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-xl-raw9 debian-di-install fail REGR. vs. 96188
 test-amd64-i386-qemuu-rhel6hvm-intel  9 redhat-installfail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. 
vs. 96188
 test-amd64-amd64-xl-pvh-amd   6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 96188
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 
96188
 test-amd64-amd64-xl-credit2   6 xen-boot  fail REGR. vs. 96188
 test-armhf-armhf-xl-cubietruck  9 debian-install  fail REGR. vs. 96188
 test-amd64-amd64-xl-qemuu-win7-amd64  6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-freebsd10-i386  9 freebsd-install fail REGR. vs. 96188
 test-amd64-amd64-xl-qemut-debianhvm-amd64  6 xen-boot fail REGR. vs. 96188
 test-amd64-amd64-xl-qcow2 6 xen-boot  fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 96188
 test-armhf-armhf-xl-vhd   9 debian-di-install fail REGR. vs. 96188
 test-amd64-amd64-qemuu-nested-amd  6 xen-boot fail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 
96188
 test-amd64-i386-libvirt-pair 15 debian-install/dst_host   fail REGR. vs. 96188
 test-armhf-armhf-xl-credit2   9 debian-installfail REGR. vs. 96188
 test-armhf-armhf-xl-arndale   9 debian-installfail REGR. vs. 96188
 test-amd64-i386-xl-qemuu-winxpsp3  9 windows-install  fail REGR. vs. 96188
 test-amd64-i386-pair  

Re: [Xen-devel] [PATCH 01/19] xen: Create a new file xen_pvdev.c

2016-07-17 Thread Emil Condrea
On Jul 17, 2016 10:41, "Quan Xu"  wrote:
>
>
> [Quan:]: comment starts with [Quan:]
>
Thanks, Quan for your comments.

The first patches from this series just move some code from xen_backend to
the xen_pvdev file. I would not group that reorganization with refactoring
in the same patch; the refactoring can be done in a separate patch later.
>
>
>
> The purpose of the new file is to store generic functions shared by frontend
> and backends such as xenstore operations, xendevs.
>
> Signed-off-by: Quan Xu 
> Signed-off-by: Emil Condrea 
> ---
>  hw/xen/Makefile.objs |   2 +-
>  hw/xen/xen_backend.c | 125 +---
>
>  hw/xen/xen_pvdev.c   | 149 +++
>  include/hw/xen/xen_backend.h |  63 +-
>  include/hw/xen/xen_pvdev.h   |  71 +
>  5 files changed, 223 insertions(+), 187 deletions(-)
>  create mode 100644 hw/xen/xen_pvdev.c
>  create mode 100644 include/hw/xen/xen_pvdev.h
>
> diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
> index d367094..591cdc2 100644
> --- a/hw/xen/Makefile.objs
> +++ b/hw/xen/Makefile.objs
> @@ -1,5 +1,5 @@
>  # xen backend driver support
> -common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
>
+common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o xen_pvdev.o
>
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o
> xen_pt_graphics.o xen_pt_msi.o
> diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
> index bab79b1..a251a4a 100644
> --- a/hw/xen/xen_backend.c
> +++ b/hw/xen/xen_backend.c
> @@ -30,6 +30,7 @@
>  #include "sysemu/char.h"
>  #include "qemu/log.h"
>  #include "hw/xen/xen_backend.h"
> +#include "hw/xen/xen_pvdev.h"
>
>  #include 
>
> @@ -56,8 +57,6 @@ static QTAILQ_HEAD(xs_dirs_head, xs_dirs) xs_cleanup =
>  static QTAILQ_HEAD(XenDeviceHead, XenDevice) xendevs =
> QTAILQ_HEAD_INITIALIZER(xendevs);
>  static int debug = 0;
>
> -/* - */
> -
>  static void xenstore_cleanup_dir(char *dir)
>  {
>  struct xs_dirs *d;
> @@ -76,34 +75,6 @@ void xen_config_cleanup(void)
>  }
>  }
>
>
> -int xenstore_write_str(const char *base, const char *node, const char *val)
> -{
> -char abspath[XEN_BUFSIZE];
> -
> -snprintf(abspath, sizeof(abspath), "%s/%s", base, node);
> -if (!xs_write(xenstore, 0, abspath, val, strlen(val))) {
> -return -1;
> -}
> -return 0;
> -}
> -
> -char *xenstore_read_str(const char *base, const char *node)
> -{
> -char abspath[XEN_BUFSIZE];
> -unsigned int len;
> -char *str, *ret = NULL;
> -
> -snprintf(abspath, sizeof(abspath), "%s/%s", base, node);
> -str = xs_read(xenstore, 0, abspath, &len);
> -if (str != NULL) {
> -/* move to qemu-allocated memory to make sure
> - * callers can savely g_free() stuff. */
> -ret = g_strdup(str);
> -free(str);
> -}
> -return ret;
> -}
> -
>  int xenstore_mkdir(char *path, int p)
>  {
>  struct xs_permissions perms[2] = {
> @@ -128,48 +99,6 @@ int xenstore_mkdir(char *path, int p)
>  return 0;
>  }
>
> -int xenstore_write_int(const char *base, const char *node, int ival)
> -{
> -char val[12];
> -
>
> [Quan:]: why 12 ? what about XEN_BUFSIZE?
>
> -snprintf(val, sizeof(val), "%d", ival);
> -return xenstore_write_str(base, node, val);
> -}
> -
>
> -int xenstore_write_int64(const char *base, const char *node, int64_t ival)
> -{
> -char val[21];
> -
>
> [Quan:]: why 21 ? what about XEN_BUFSIZE?
>
>
> -snprintf(val, sizeof(val), "%"PRId64, ival);
> -return xenstore_write_str(base, node, val);
> -}
> -
> -int xenstore_read_int(const char *base, const char *node, int *ival)
> -{
> -char *val;
> -int rc = -1;
> -
> -val = xenstore_read_str(base, node);
>
> [Quan:]:  IMO, it is better to initialize val when declares.  the same
comment for the other 'val'
>
> -if (val && 1 == sscanf(val, "%d", ival)) {
> -rc = 0;
> -}
> -g_free(val);
> -return rc;
> -}
> -
>
> -int xenstore_read_uint64(const char *base, const char *node, uint64_t *uval)
> -{
> -char *val;
> -int rc = -1;
> -
> -val = xenstore_read_str(base, node);
> -if (val && 1 == sscanf(val, "%"SCNu64, uval)) {
> -rc = 0;
> -}
> -g_free(val);
> -return rc;
> -}
> -
>
>  int xenstore_write_be_str(struct XenDevice *xendev, const char *node, const char *val)
>  {
>  return xenstore_write_str(xendev->be, node, val);
>
> @@ -212,20 +141,6 @@ int xenstore_read_fe_uint64(struct XenDevice *xendev, const char *node, uint64_t
>
>  /* - */
>
> -const char *xenbus_strstate(enum xenbus_state state)
> -{
> -static const char *const name[] = {
> -[ XenbusStateUnknown  ] = "Unknown",
> -[ XenbusStateInitialising ] = "Initialising",
> -

[Xen-devel] [xen-unstable test] 97477: tolerable FAIL

2016-07-17 Thread osstest service owner
flight 97477 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97477/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   6 xen-bootfail pass in 97410
 test-amd64-i386-qemuu-rhel6hvm-intel 11 guest-start/redhat.repeat fail pass in 
97410
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat   fail pass in 97410

Regressions which are regarded as allowable (not blocking):
 build-amd64-rumpuserxen   6 xen-buildfail   like 97410
 build-i386-rumpuserxen6 xen-buildfail   like 97410
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 97410
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 97410
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 97410
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 97410
 test-amd64-amd64-xl-rtds  6 xen-boot fail   like 97410

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail in 97410 never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail in 97410 never 
pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  b48be35ac86cd6369124cf06ca3006d086095297
baseline version:
 xen  b48be35ac86cd6369124cf06ca3006d086095297

Last test of basis    97477  2016-07-17 02:01:05 Z    0 days
Testing same since        0  1970-01-01 00:00:00 Z    16999 days    0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-oldkern  

Re: [Xen-devel] [v9 00/19] QEMU:Xen stubdom vTPM for HVM virtual machine(QEMU Part)

2016-07-17 Thread Quan Xu

On 2016 Jul 14 (Thu) 23:34, Stefano Stabellini  wrote:
> Hi Quan,
> 
> thanks for CC'ing me. sstabell...@kernel.org is the right address to
> reach me now.
>
> I am also CC'ing Anthony Perard who is Xen co-maintainer in QEMU.
> 
> Cheers,
>
> Stefano
thanks in advance!! :):)

Quan



Re: [Xen-devel] [PATCH 01/19] xen: Create a new file xen_pvdev.c

2016-07-17 Thread Quan Xu

[Quan:]: comment starts with [Quan:]


The purpose of the new file is to store generic functions shared by frontend
and backends such as xenstore operations, xendevs.

Signed-off-by: Quan Xu 
Signed-off-by: Emil Condrea 
---
 hw/xen/Makefile.objs |   2 +-
 hw/xen/xen_backend.c | 125 +---
 hw/xen/xen_pvdev.c   | 149 +++
 include/hw/xen/xen_backend.h |  63 +-
 include/hw/xen/xen_pvdev.h   |  71 +
 5 files changed, 223 insertions(+), 187 deletions(-)
 create mode 100644 hw/xen/xen_pvdev.c
 create mode 100644 include/hw/xen/xen_pvdev.h

diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index d367094..591cdc2 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -1,5 +1,5 @@
 # xen backend driver support
-common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
+common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o xen_pvdev.o
 
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o 
xen_pt_graphics.o xen_pt_msi.o
diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
index bab79b1..a251a4a 100644
--- a/hw/xen/xen_backend.c
+++ b/hw/xen/xen_backend.c
@@ -30,6 +30,7 @@
 #include "sysemu/char.h"
 #include "qemu/log.h"
 #include "hw/xen/xen_backend.h"
+#include "hw/xen/xen_pvdev.h"
 
 #include 
 
@@ -56,8 +57,6 @@ static QTAILQ_HEAD(xs_dirs_head, xs_dirs) xs_cleanup =
 static QTAILQ_HEAD(XenDeviceHead, XenDevice) xendevs = 
QTAILQ_HEAD_INITIALIZER(xendevs);
 static int debug = 0;
 
-/* - */
-
 static void xenstore_cleanup_dir(char *dir)
 {
 struct xs_dirs *d;
@@ -76,34 +75,6 @@ void xen_config_cleanup(void)
 }
 }
 
-int xenstore_write_str(const char *base, const char *node, const char *val)
-{
-char abspath[XEN_BUFSIZE];
-
-snprintf(abspath, sizeof(abspath), "%s/%s", base, node);
-if (!xs_write(xenstore, 0, abspath, val, strlen(val))) {
-return -1;
-}
-return 0;
-}
-
-char *xenstore_read_str(const char *base, const char *node)
-{
-char abspath[XEN_BUFSIZE];
-unsigned int len;
-char *str, *ret = NULL;
-
-snprintf(abspath, sizeof(abspath), "%s/%s", base, node);
-str = xs_read(xenstore, 0, abspath, &len);
-if (str != NULL) {
-/* move to qemu-allocated memory to make sure
- * callers can savely g_free() stuff. */
-ret = g_strdup(str);
-free(str);
-}
-return ret;
-}
-
 int xenstore_mkdir(char *path, int p)
 {
 struct xs_permissions perms[2] = {
@@ -128,48 +99,6 @@ int xenstore_mkdir(char *path, int p)
 return 0;
 }
 
-int xenstore_write_int(const char *base, const char *node, int ival)
-{
-char val[12];
-
[Quan:]: why 12 ? what about XEN_BUFSIZE? 
-snprintf(val, sizeof(val), "%d", ival);
-return xenstore_write_str(base, node, val);
-}
-
-int xenstore_write_int64(const char *base, const char *node, int64_t ival)
-{
-char val[21];
-
[Quan:]: why 21 ? what about XEN_BUFSIZE?

-snprintf(val, sizeof(val), "%"PRId64, ival);
-return xenstore_write_str(base, node, val);
-}
-
-int xenstore_read_int(const char *base, const char *node, int *ival)
-{
-char *val;
-int rc = -1;
-
-val = xenstore_read_str(base, node);
[Quan:]:  IMO, it is better to initialize val when declares.  the same comment 
for the other 'val'
-if (val && 1 == sscanf(val, "%d", ival)) {
-rc = 0;
-}
-g_free(val);
-return rc;
-}
-
-int xenstore_read_uint64(const char *base, const char *node, uint64_t *uval)
-{
-char *val;
-int rc = -1;
-
-val = xenstore_read_str(base, node);
-if (val && 1 == sscanf(val, "%"SCNu64, uval)) {
-rc = 0;
-}
-g_free(val);
-return rc;
-}
-
 int xenstore_write_be_str(struct XenDevice *xendev, const char *node, const char *val)
 {
 return xenstore_write_str(xendev->be, node, val);
@@ -212,20 +141,6 @@ int xenstore_read_fe_uint64(struct XenDevice *xendev, const char *node, uint64_t
 
 /* - */
 
-const char *xenbus_strstate(enum xenbus_state state)
-{
-static const char *const name[] = {
-[ XenbusStateUnknown  ] = "Unknown",
-[ XenbusStateInitialising ] = "Initialising",
-[ XenbusStateInitWait ] = "InitWait",
-[ XenbusStateInitialised  ] = "Initialised",
-[ XenbusStateConnected] = "Connected",
-[ XenbusStateClosing  ] = "Closing",
-[ XenbusStateClosed   ] = "Closed",
-};
-return (state < ARRAY_SIZE(name)) ? name[state] : "INVALID";
-}
-
 int xen_be_set_state(struct XenDevice *xendev, enum xenbus_state state)
 {
 int rc;
@@ -833,44 +748,6 @@ int xen_be_send_notify(struct XenDevice *xendev)
 return xenevtchn_notify(xendev->evtchndev, xendev->local_port);
 }
 
-/*
- * msg_level:
- *  0

[Xen-devel] [ovmf test] 97478: regressions - FAIL

2016-07-17 Thread osstest service owner
flight 97478 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97478/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail REGR. 
vs. 94748
 test-amd64-amd64-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail 
REGR. vs. 94748

version targeted for testing:
 ovmf e2f5c491d8749c88cbf56168a3493d70ff19a382
baseline version:
 ovmf dc99315b8732b6e3032d01319d3f534d440b43d0

Last test of basis    94748  2016-05-24 22:43:25 Z   53 days
Failing since         94750  2016-05-25 03:43:08 Z   53 days  115 attempts
Testing same since    97375  2016-07-15 14:23:16 Z    1 days    4 attempts


People who touched revisions under test:
  Anandakrishnan Loganathan 
  Ard Biesheuvel 
  Bi, Dandan 
  Bret Barkelew 
  Bruce Cran 
  Bruce Cran 
  Chao Zhang 
  Cinnamon Shia 
  Cohen, Eugene 
  Dandan Bi 
  Darbin Reyes 
  david wei 
  Eric Dong 
  Eugene Cohen 
  Evan Lloyd 
  Evgeny Yakovlev 
  Feng Tian 
  Fu Siyuan 
  Fu, Siyuan 
  Gary Li 
  Gary Lin 
  Giri P Mudusuru 
  Graeme Gregory 
  Hao Wu 
  Hegde Nagaraj P 
  Hegde, Nagaraj P 
  hegdenag 
  Heyi Guo 
  Jan Dąbroś 
  Jan Dabros 
  Jeff Fan 
  Jeremy Linton 
  Jiaxin Wu 
  Jiewen Yao 
  Joe Zhou 
  Jordan Justen 
  Katie Dellaquila 
  Laszlo Ersek 
  Liming Gao 
  Lu, ShifeiX A 
  lushifex 
  Marcin Wojtas 
  Mark Rutland 
  Marvin Häuser 
  Marvin Haeuser 
  Maurice Ma 
  Michael Zimmermann 
  Mudusuru, Giri P 
  Ni, Ruiyu 
  Qiu Shumin 
  Ruiyu Ni 
  Ruiyu Ni 
  Ryan Harkin 
  Sami Mujawar 
  Satya Yarlagadda 
  Shannon Zhao 
  Sriram Subramanian 
  Star Zeng 
  Subramanian, Sriram (EG Servers Platform SW) 
  Sunny Wang 
  Tapan Shah 
  Thomas Palmer 
  Yarlagadda, Satya P 
  Yonghong Zhu 
  Zhang Lubo 
  Zhang, Chao B 
  Zhang, Lubo 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 10330 lines long.)



[Xen-devel] [qemu-mainline test] 97470: regressions - FAIL

2016-07-17 Thread osstest service owner
flight 97470 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97470/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm 11 guest-start   fail REGR. vs. 96791
 test-amd64-amd64-libvirt-pair 20 guest-start/debian   fail REGR. vs. 96791
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail 
REGR. vs. 96791
 test-amd64-amd64-libvirt 11 guest-start   fail REGR. vs. 96791
 test-amd64-amd64-xl-qcow2 9 debian-di-install fail REGR. vs. 96791
 test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 96791
 test-amd64-amd64-libvirt-vhd  9 debian-di-install fail REGR. vs. 96791

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale  15 guest-start/debian.repeat   fail pass in 97429

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 96791
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 96791
 test-amd64-amd64-xl-rtds  9 debian-install   fail   like 96791

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-libvirt 14 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestorefail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestorefail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestorefail never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass

version targeted for testing:
 qemuu  6b92bbfe812746fe7841a24c24e6460f5359ce72
baseline version:
 qemuu  4f4a9ca4a4386c137301b3662faba076455ff15a

 Last test of basis    96791  2016-07-08 12:20:07 Z    8 days
 Failing since         97271  2016-07-13 13:44:26 Z    3 days    7 attempts
 Testing same since    97396  2016-07-15 21:43:55 Z    1 days    3 attempts


People who touched revisions under test:
  Alberto Garcia 
  Alex Bennée 
  Alexander Yarygin 
  Andrew Jones 
  Anthony PERARD 
  Ashok Raj 
  Cao jin 
  Cornelia Huck 
  Cédric Le Goater 
  Daniel P. Berrange 
  David Gibson 
  David Hildenbrand 
  Denis V. Lunev 
  Dmitry Osipenko 
  Eduardo Habkost 
  Eric Blake 
  Eugene (jno) Dvurechenski 
  Evgeny Yakovlev 
  Fam Zheng 
  Gerd Hoffmann 
  Gonglei 
  Haibin Wang 
  Haozhong Zhang 
  Igor Mammedov 
  James Hogan 
  Jarkko Lavinen 
  Jeff Cody 
  Jing Liu 
  Kevin Wolf 
  Laszlo Ersek 
  Leon Alrae 
  Lin Ma 
  Marc Marí 
  Marc-André Lureau 
  Marcin Krzeminski 
  Mark Cave-Ayland 
  Markus Armbruster 
  Max Filippov 
  Max Reitz 
  Paolo Bonzini 
  Paul Burton 
  Peter Lieven 
  Peter Maydell 
  Pierre Morel 
  Reda Sallahi 
  Richard Henderson 
  Richard W.M. Jones 
  Samuel Damashek 
  Sascha Silbe 

[Xen-devel] [xen-unstable-coverity test] 97501: regressions - ALL FAIL

2016-07-17 Thread osstest service owner
flight 97501 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97501/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 coverity-amd645 coverity-buildfail REGR. vs. 96924

version targeted for testing:
 xen  b48be35ac86cd6369124cf06ca3006d086095297
baseline version:
 xen  7da483b0236d8974cc97f81780dcf8e559a63175

 Last test of basis    96924  2016-07-10 09:19:23 Z    7 days
 Testing same since    97501  2016-07-17 09:26:52 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anshul Makkar 
  Corneliu ZUZU 
  Daniel De Graaf 
  Doug Goldstein 
  Elena Ufimtseva 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Juergen Gross 
  Julien Grall 
  Kevin Tian 
  Konrad Rzeszutek Wilk 
  Quan Xu 
  Razvan Cojocaru 
  Shanker Donthineni 
  Stefano Stabellini 
  Tamas K Lengyel 
  Tim Deegan 
  Vitaly Kuznetsov 
  Wei Liu 

jobs:
 coverity-amd64   fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1008 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-4.1 bisection] complete test-amd64-i386-xl-qemuu-winxpsp3

2016-07-17 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-winxpsp3
testid windows-install

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  c5ad33184354260be6d05de57e46a5498692f6d6
  Bug not present: c5bcec6cbcbf520f088dc7939934bbf10c20c5a5
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/97493/


  commit c5ad33184354260be6d05de57e46a5498692f6d6
  Author: Lukasz Odzioba 
  Date:   Fri Jun 24 14:50:01 2016 -0700
  
  mm/swap.c: flush lru pvecs on compound page arrival
  
  [ Upstream commit 8f182270dfec432e93fae14f9208a6b9af01009f ]
  
  Currently we can have compound pages held on per cpu pagevecs, which
  leads to a lot of memory unavailable for reclaim when needed.  In the
  systems with hundreds of processors it can be GBs of memory.
  
  One way of reproducing the problem is to not call munmap
  explicitly on all mapped regions (i.e.  after receiving SIGTERM).  After
  that some pages (with THP enabled also huge pages) may end up on
  lru_add_pvec, example below.
  
void main() {
#pragma omp parallel
{
size_t size = 55 * 1000 * 1000; // smaller than  MEM/CPUS
void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS , -1, 0);
if (p != MAP_FAILED)
memset(p, 0, size);
//munmap(p, size); // uncomment to make the problem go away
}
}
  
  When we run it with THP enabled it will leave significant amount of
  memory on lru_add_pvec.  This memory will be not reclaimed if we hit
  OOM, so when we run above program in a loop:
  
for i in `seq 100`; do ./a.out; done
  
  many processes (95% in my case) will be killed by OOM.
  
  The primary point of the LRU add cache is to save the zone lru_lock
  contention with a hope that more pages will belong to the same zone and
  so their addition can be batched.  The huge page is already a form of
  batched addition (it will add 512 worth of memory in one go) so skipping
  the batching seems like a safer option when compared to a potential
  excess in the caching which can be quite large and much harder to fix
  because lru_add_drain_all is way too expensive and it is not really clear
  what would be a good moment to call it.
  
  Similarly we can reproduce the problem on lru_deactivate_pvec by adding:
  madvise(p, size, MADV_FREE); after memset.
  
  This patch flushes lru pvecs on compound page arrival making the problem
  less severe - after applying it kill rate of above example drops to 0%,
  due to reducing maximum amount of memory held on pvec from 28MB (with
  THP) to 56kB per CPU.
  
  Suggested-by: Michal Hocko 
  Link: http://lkml.kernel.org/r/1466180198-18854-1-git-send-email-lukasz.odzi...@intel.com
  Signed-off-by: Lukasz Odzioba 
  Acked-by: Michal Hocko 
  Cc: Kirill Shutemov 
  Cc: Andrea Arcangeli 
  Cc: Vladimir Davydov 
  Cc: Ming Li 
  Cc: Minchan Kim 
  Cc: 
  Signed-off-by: Andrew Morton 
  Signed-off-by: Linus Torvalds 
  Signed-off-by: Sasha Levin 


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-4.1/test-amd64-i386-xl-qemuu-winxpsp3.windows-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-4.1/test-amd64-i386-xl-qemuu-winxpsp3.windows-install --summary-out=tmp/97493.bisection-summary --basis-template=96211 --blessings=real,real-bisect linux-4.1 test-amd64-i386-xl-qemuu-winxpsp3 windows-install
Searching for failure / basis pass:
 97434 fail [host=huxelrebe0] / 96211 [host=italia1] 96183 [host=fiano0] 96160 [host=elbling1] 95848 [host=chardonnay1] 95818 [host=fiano1] 95591 [host=elbling0] 95517 [host=pinot1] 95455 ok.
Failure / basis pass flights: 97434 / 95455
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5880876e94699ce010554f483ccf0009997955ca c530a75c1e6a472b0eb9558310b518f0dfcd8860 6e20809727261599e8527c456eb078c0e89139a1

[Xen-devel] [linux-4.1 test] 97434: regressions - FAIL

2016-07-17 Thread osstest service owner
flight 97434 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/97434/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96211
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 96211
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 96211
 test-amd64-i386-qemuu-rhel6hvm-intel  9 redhat-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-debianhvm-amd64  6 xen-boot fail REGR. vs. 96211
 test-amd64-i386-libvirt   9 debian-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 96211
 test-amd64-i386-freebsd10-amd64  9 freebsd-installfail REGR. vs. 96211
 test-amd64-i386-xl-xsm9 debian-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 9 debian-hvm-install fail REGR. vs. 96211
 test-amd64-i386-xl9 debian-installfail REGR. vs. 96211
 test-amd64-i386-xl-qemut-winxpsp3  9 windows-install  fail REGR. vs. 96211
 test-amd64-i386-xl-raw9 debian-di-install fail REGR. vs. 96211
 test-armhf-armhf-xl-arndale   9 debian-installfail REGR. vs. 96211
 test-amd64-amd64-xl-multivcpu  6 xen-boot fail REGR. vs. 96211
 test-amd64-i386-freebsd10-i386  9 freebsd-install fail REGR. vs. 96211
 test-amd64-i386-qemuu-rhel6hvm-amd  9 redhat-install  fail REGR. vs. 96211
 test-armhf-armhf-xl   9 debian-installfail REGR. vs. 96211
 test-armhf-armhf-xl-xsm   9 debian-installfail REGR. vs. 96211
 test-armhf-armhf-libvirt-xsm  9 debian-installfail REGR. vs. 96211
 test-amd64-i386-libvirt-xsm   9 debian-installfail REGR. vs. 96211
 test-amd64-amd64-i386-pvgrub  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-win7-amd64  6 xen-boot  fail REGR. vs. 96211
 test-armhf-armhf-xl-multivcpu  9 debian-install   fail REGR. vs. 96211
 test-armhf-armhf-libvirt  9 debian-installfail REGR. vs. 96211
 test-armhf-armhf-xl-cubietruck  9 debian-install  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-credit2   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-winxpsp3  6 xen-bootfail REGR. vs. 96211
 test-amd64-amd64-xl   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-qemuu-nested-intel  6 xen-boot   fail REGR. vs. 96211
 test-amd64-amd64-pygrub   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-pvh-amd   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qcow2 6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-libvirt-vhd  6 xen-boot  fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-debianhvm-amd64 9 debian-hvm-install fail REGR. vs. 96211
 test-amd64-amd64-qemuu-nested-amd  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-ovmf-amd64  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-pvh-intel  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-winxpsp3  6 xen-bootfail REGR. vs. 96211
 test-amd64-amd64-libvirt-xsm  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-xsm   6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  6 xen-boot fail REGR. vs. 96211
 test-amd64-amd64-libvirt  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 96211
 test-armhf-armhf-xl-credit2   9 debian-installfail REGR. vs. 96211
 test-amd64-i386-qemut-rhel6hvm-intel  9 redhat-installfail REGR. vs. 96211
 test-amd64-amd64-xl-qemuu-win7-amd64  6 xen-boot  fail REGR. vs. 96211
 test-amd64-amd64-amd64-pvgrub  6 xen-boot fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-winxpsp3  9 windows-install  fail REGR. vs. 96211
 test-amd64-i386-libvirt-pair 15 debian-install/dst_host   fail REGR. vs. 96211
 test-amd64-i386-pair 15 debian-install/dst_host   fail REGR. vs. 96211
 test-amd64-i386-xl-qemuu-ovmf-amd64  9 debian-hvm-install fail REGR. vs. 96211
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install   fail REGR. vs. 96211
 test-amd64-i386-xl-qemut-win7-amd64  9 windows-installfail REGR. vs. 96211
 test-amd64-amd64-pair