Re: [Xen-devel] [PATCH -next] drm/xen-front: Drop pointless static qualifier in fb_destroy()

2019-02-03 Thread Oleksandr Andrushchenko
On 1/26/19 2:05 PM, YueHaibing wrote:
> There is no need to have the 'struct drm_framebuffer *fb' variable
> static, since a new value is always assigned to it before use.
>
> Signed-off-by: YueHaibing 
> ---
>   drivers/gpu/drm/xen/xen_drm_front_kms.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> index 860da05..c2955d3 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> @@ -54,7 +54,7 @@ static void fb_destroy(struct drm_framebuffer *fb)
> const struct drm_mode_fb_cmd2 *mode_cmd)
>   {
>   struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> - static struct drm_framebuffer *fb;
> + struct drm_framebuffer *fb;
>   struct drm_gem_object *gem_obj;
>   int ret;
>   
Applied to drm-misc-next
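
For background, a minimal standalone sketch (hypothetical code, not the
xen-front driver) of why a 'static' local is a hazard when a new value is
always assigned before use:

    /* Hypothetical illustration only -- not the xen-front code. */
    struct obj { int id; };

    struct obj *init_obj(struct obj *storage, int id)
    {
        /* static struct obj *o;  -- 'static' would make one slot shared by
         * all callers: concurrent calls race on it, and it persists in
         * .bss for no benefit. */
        struct obj *o;   /* automatic: fresh per call, assigned before use */

        o = storage;
        o->id = id;
        return o;
    }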

Re: [Xen-devel] [PATCH] drm/xen-front: Fix mmap attributes for display buffers

2019-02-03 Thread Oleksandr Andrushchenko
On 1/29/19 9:07 PM, Julien Grall wrote:
> Hi Oleksandr,
>
> On 1/29/19 3:04 PM, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> When the GEM backing storage is allocated, those are normal pages,
>> so there is no point in using pgprot_writecombine while mmapping.
>> This fixes a mismatch of the buffer pages' memory attributes between
>> the frontend and backend, which may cause screen artifacts.
>>
>> Fixes: c575b7eeb89f ("drm/xen-front: Add support for Xen PV display frontend")
>>
>> Signed-off-by: Oleksandr Andrushchenko 
>> 
>> Suggested-by: Julien Grall 
>> ---
>>   drivers/gpu/drm/xen/xen_drm_front_gem.c | 5 ++---
>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> index d303a2e17f5e..9d5c03d7668d 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> @@ -235,8 +235,7 @@ static int gem_mmap_obj(struct xen_gem_object *xen_obj,
>>   vma->vm_flags &= ~VM_PFNMAP;
>>   vma->vm_flags |= VM_MIXEDMAP;
>>   vma->vm_pgoff = 0;
>> -    vma->vm_page_prot =
>> - pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>> +    vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>
> The patch looks good to me. It would be worth expanding the comment
> just before it to explain that we overwrite vm_page_prot to use cacheable
> attributes, as required by the Xen ABI.
>
> With the comment updated:
>
> Acked-by: Julien Grall 
>
> Cheers,
>
Applied to drm-misc-next
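
For reference, the committed hunk with the clarifying comment might look
like the sketch below (the comment wording is an assumption, not the
committed text):

    /*
     * The buffer is backed by normal system RAM pages, so the Xen ABI
     * requires cacheable memory attributes here: overwrite vm_page_prot
     * without pgprot_writecombine, or the frontend and backend mappings
     * of the same pages will disagree.
     */
    vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);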

[Xen-devel] [xen-unstable test] 132759: tolerable FAIL - PUSHED

2019-02-03 Thread osstest service owner
flight 132759 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132759/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rumprun-i386 17 rumprun-demo-xenstorels/xenstorels.repeat fail REGR. vs. 132622

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 132622
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 132622
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 132622
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 132622
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 132622
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 132622
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 132622
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 132622
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 132622
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  755eb6403ec722db37f1b8f8b51e0b0ab661c003
baseline version:
 xen  f50dd67950ca9d5a517501af10de7c8d88d1a188

Last test of basis   132622  2019-01-30 11:52:41 Z    4 days
Failing since        132683  2019-01-31 20:13:10 Z    3 days    2 attempts
Testing same since   132759  2019-02-02 21:38:48 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Andrii Anisov 
  Anthony PERARD 
  Brian 

Re: [Xen-devel] [PATCH] tools/misc: Remove obsolete xen-bugtool

2019-02-03 Thread Juergen Gross
On 03/02/2019 21:35, Hans van Kranenburg wrote:
> xen-bugtool relies on code that has been removed in commit 9e8672f1c3
> "tools: remove xend and associated python modules", more than 5 years
> ago. Remove it, since it confuses users.
> 
> -$ /usr/sbin/xen-bugtool
> Traceback (most recent call last):
>   File "/usr/sbin/xen-bugtool", line 9, in <module>
>   from xen.util import bugtool
> ImportError: No module named xen.util
> 
> Signed-off-by: Hans van Kranenburg 
> Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=866380
> Cc: Ian Jackson 
> Cc: Wei Liu 

I think we want that in 4.12. So:

Release-acked-by: Juergen Gross 


Juergen


[Xen-devel] [xen-4.10-testing test] 132762: regressions - FAIL

2019-02-03 Thread osstest service owner
flight 132762 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132762/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64       6 xen-build fail REGR. vs. 132630
 build-i386-xsm    6 xen-build fail REGR. vs. 132630
 build-amd64-xsm   6 xen-build fail REGR. vs. 132630
 build-i386        6 xen-build fail REGR. vs. 132630

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a
 build-i386-rumprun 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds 1 build-check(1) blocked n/a
 test-amd64-i386-xl 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-shadow 1 build-check(1) blocked n/a
 test-amd64-i386-xl-raw 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt 1 build-check(1) blocked n/a
 build-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked n/a
 test-amd64-i386-livepatch 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a
 test-amd64-i386-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1) blocked n/a
 test-amd64-i386-migrupgrade 1 build-check(1) blocked n/a
 test-amd64-amd64-migrupgrade 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-4 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-rumprun-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 build-amd64-rumprun 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-1 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a
 build-i386-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)

[Xen-devel] [ovmf test] 132766: all pass - PUSHED

2019-02-03 Thread osstest service owner
flight 132766 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132766/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 6c61ec4c62b6d1001e8ea6683e83f0e9ec0b3c9b
baseline version:
 ovmf 7381bd3e753c4d3b706c752ec1d4305b3378af35

Last test of basis   132723  2019-02-01 23:53:55 Z    2 days
Testing same since   132766  2019-02-03 02:47:04 Z    1 days    1 attempts


People who touched revisions under test:
  Ard Biesheuvel 
  Bi, Dandan 
  Bob Feng 
  Chen A Chen 
  Dandan Bi 
  Fan, ZhijuX 
  Feng, Bob C 
  Hess Chen 
  Mike Turner 
  Shenglei Zhang 
  Zhiju.Fan 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7381bd3e75..6c61ec4c62  6c61ec4c62b6d1001e8ea6683e83f0e9ec0b3c9b -> xen-tested-master


[Xen-devel] [linux-linus test] 132754: regressions - FAIL

2019-02-03 Thread osstest service owner
flight 132754 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132754/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine  4 memdisk-try-append   fail REGR. vs. 132599

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat fail REGR. vs. 132599

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 12 guest-start/redhat.repeat fail like 132561
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 132599
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 132599
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 132599
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 132599
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 132599
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 132599
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 132599
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 132599
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux    cd984a5be21549273a3f13b52a8b7b84097b32a7
baseline version:
 linux    4aa9fc2a435abe95a1e8d7f8c7b3d6356514b37a

Last test of basis   132599  2019-01-30 01:09:59 Z    5 days
Failing since        132669  2019-01-31 12:06:18 Z    3 days    2 attempts

[Xen-devel] [linux-3.18 test] 132741: regressions - FAIL

2019-02-03 Thread osstest service owner
flight 132741 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132741/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt  7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 128858
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-libvirt-xsm 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-multivcpu 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-pair 10 xen-boot/src_host fail REGR. vs. 128858
 test-amd64-i386-pair 11 xen-boot/dst_host fail REGR. vs. 128858
 test-amd64-i386-xl-raw 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-credit2 7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-freebsd10-amd64 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-qemuu-nested-intel 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-pvshim 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-shadow 7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-freebsd10-i386 7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-xl 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-i386-pvgrub 7 xen-boot fail REGR. vs. 128858
 test-amd64-i386-xl-qemuu-ovmf-amd64 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-libvirt-pair 10 xen-boot/src_host fail REGR. vs. 128858
 test-amd64-amd64-libvirt-pair 11 xen-boot/dst_host fail REGR. vs. 128858
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-qemuu-ovmf-amd64 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-rumprun-amd64 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-amd64-pvgrub 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-xl-xsm 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-libvirt-xsm 7 xen-boot fail REGR. vs. 128858
 test-amd64-amd64-pair 10 xen-boot/src_host fail REGR. vs. 128858
 test-amd64-amd64-pair 11 xen-boot/dst_host fail REGR. vs. 128858

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-pair 10 xen-boot/src_host  fail pass in 132579
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_host  fail pass in 132579
 test-armhf-armhf-libvirt 16 guest-start/debian.repeat  fail pass in 132652
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install fail pass in 132652

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds  7 xen-boot fail REGR. vs. 128858

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail in 132579 like 128858
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail in 132579 never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail in 132579 never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail in 132579 never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail in 132579 never pass
 test-amd64-amd64-examine  4 memdisk-try-append  fail in 132652 like 128807
 test-armhf-armhf-libvirt-raw 10 debian-di-install fail like 128841
 test-armhf-armhf-xl-vhd 10 debian-di-install fail like 128841
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 128858
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 128858
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 128858
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 128858
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 128858
 build-arm64-pvops 6 kernel-build fail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 

[Xen-devel] [linux-4.9 test] 132748: tolerable FAIL - PUSHED

2019-02-03 Thread osstest service owner
flight 132748 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132748/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 132521
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 132521
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 132521
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 132521
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 132521
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux    a4d0a0910e693dafd83311994e12a0a8a0846694
baseline version:
 linux    189b75ad3fc2d4a0d40a818ca298526d254ccdc4

Last test of basis   132521  2019-01-28 07:17:10 Z    6 days
Testing same since   132661  2019-01-31 07:41:22 Z    3 days    2 attempts


[Xen-devel] [linux-3.18 bisection] complete test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm

2019-02-03 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm
testid xen-boot

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  7b8052e19304865477e03a0047062d977309a22f
  Bug not present: d255d18a34a8d53ccc4a019dc07e17b6e8cf6bd1
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/132791/


  commit 7b8052e19304865477e03a0047062d977309a22f
  Author: Jan Beulich 
  Date:   Mon Oct 19 04:23:29 2015 -0600
  
  igb: fix NULL derefs due to skipped SR-IOV enabling
  
  [ Upstream commit be06998f96ecb93938ad2cce46c4289bf7cf45bc ]
  
  The combined effect of commits 6423fc3416 ("igb: do not re-init SR-IOV
  during probe") and ceee3450b3 ("igb: make sure SR-IOV init uses the
  right number of queues") causes VFs no longer getting set up, leading
  to NULL pointer dereferences due to the adapter's ->vf_data being NULL
  while ->vfs_allocated_count is non-zero. The first commit not only
  neglected the side effect of igb_sriov_reinit() that the second commit
  tried to account for, but also that of setting IGB_FLAG_HAS_MSIX,
  without which igb_enable_sriov() is effectively a no-op. Calling
  igb_{,re}set_interrupt_capability() as done here seems to address this,
  but I'm not sure whether this is better than simply reverting the other
  two commits.
  
  Signed-off-by: Jan Beulich 
  Tested-by: Aaron Brown 
  Signed-off-by: Jeff Kirsher 
  Signed-off-by: Sasha Levin 


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-3.18/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-3.18/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot
 --summary-out=tmp/132792.bisection-summary --basis-template=128858 
--blessings=real,real-bisect linux-3.18 
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm xen-boot
Searching for failure / basis pass:
 132741 fail [host=debina0] / 128858 [host=pinot1] 128841 [host=debina1] 128807 
[host=chardonnay0] 128691 [host=albana1] 128258 [host=albana0] 128232 
[host=joubertin0] 128177 [host=rimava1] 128096 [host=elbling0] 127486 
[host=baroque1] 127472 [host=chardonnay1] 127455 [host=italia0] 127296 
[host=rimava1] 127001 [host=albana1] 126926 [host=albana1] 126813 
[host=albana1] 126711 [host=albana1] 126583 [host=albana1] 126472 
[host=albana1] 126362 [host=albana1] 126270 [host=albana1] 126189 [host=albana0] 126042 [host=huxelrebe1] 125899 [host=albana1] 125658 [host=pinot1] 125649 
[host=godello0] 125641 [host=albana1] 125561 [host=joubertin0] 125525 
[host=pinot0] 125505 [host=joubertin1] 125138 [host=fiano1] 125043 
[host=chardonnay1] 124945 [host=baroque0] 124897 [host=godello0] 124855 
[host=albana0] 124173 [host=godello1] 123837 [host=joubertin1] 123803 
[host=godello0] 123683 [host=joubertin0] 123594 [host=baroque1] 123480 
[host=chardonnay1] 123396 [host=fiano1] 123274 [host=huxelrebe0] 123222 [host=italia0] 123190 [host=debina1] 123035 [host=godello1] 122965 
[host=godello0] 122884 [host=baroque1] 122565 [host=elbling0] 122515 
[host=elbling1] 122478 [host=godello0] 122427 [host=godello1] 122388 
[host=chardonnay1] 122286 [host=huxelrebe0] 122273 [host=baroque1] 122180 
[host=pinot0] 122166 [host=chardonnay1] 122145 [host=godello0] 122125 
[host=italia1] 122110 [host=italia0] 122094 [host=godello1] 121320 
[host=pinot0] 121303 [host=pinot1] 121268 [host=fiano0] 121099 
[host=huxelrebe1] 121053 [host=godello1] 120977 [host=godello0] 120911 [host=baroque1] 120780 
[host=huxelrebe0] 120665 [host=rimava0] 120486 [host=baroque0] 120276 
[host=fiano0] 120235 [host=italia1] 120132 [host=godello1] 120090 
[host=baroque1] 120043 [host=godello0] 120010 [host=huxelrebe0] 119432 
[host=godello1] 118730 [host=elbling0] 118666 [host=italia0] 118488 
[host=huxelrebe0] 118186 [host=godello0] 118149 [host=huxelrebe0] 117702 
[host=rimava0] 117641 [host=chardonnay0] 117375 [host=godello1] 117211 [host=godello0] 117131 [host=pinot0] 116920 [host=godello1] 116890 [host=nobling1] 
116862 [host=fiano0] 116760 [host=elbling0] 116728 [host=elbling0] 116501 
[host=godello0] 116475 [host=baroque1] 116308 [host=rimava0] 116193 
[host=nobling1] 116140 [host=pinot1] 116121 [host=huxelrebe1] 116106 
[host=merlot0] 115729 [host=nobling0] 115714 [host=elbling0] 115698 
[host=pinot0] 115688 

[Xen-devel] [PATCH] tools/misc: Remove obsolete xen-bugtool

2019-02-03 Thread Hans van Kranenburg
xen-bugtool relies on code that has been removed in commit 9e8672f1c3
"tools: remove xend and associated python modules", more than 5 years
ago. Remove it, since it confuses users.

-$ /usr/sbin/xen-bugtool
Traceback (most recent call last):
  File "/usr/sbin/xen-bugtool", line 9, in 
from xen.util import bugtool
ImportError: No module named xen.util

Signed-off-by: Hans van Kranenburg 
Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=866380
Cc: Ian Jackson 
Cc: Wei Liu 
---
 docs/README.xen-bugtool | 16 
 tools/misc/Makefile |  2 --
 tools/misc/xen-bugtool  | 17 -
 3 files changed, 35 deletions(-)
 delete mode 100644 docs/README.xen-bugtool
 delete mode 100644 tools/misc/xen-bugtool

diff --git a/docs/README.xen-bugtool b/docs/README.xen-bugtool
deleted file mode 100644
index a7e95ef4ce..00
--- a/docs/README.xen-bugtool
+++ /dev/null
@@ -1,16 +0,0 @@
-xen-bugtool
-===
-
-The xen-bugtool command line application will collate the Xen dmesg output,
-details of the hardware configuration of your machine, information about the
-build of Xen that you are using, plus, if you allow it, various logs.
-
-The information collated can either be posted to a Xen Bugzilla bug (this bug
-must already exist in the system, and you must be a registered user there), or
-it can be saved as a .tar.bz2 for sending or archiving.
-
-The collated logs may contain private information, and if you are at all
-worried about that, you should not use this tool, or you should explicitly
-exclude those logs from the archive.
-
-xen-bugtool is wholly interactive, so simply run it, and answer the questions.
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index eaa28793ef..fd91202950 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -17,7 +17,6 @@ INSTALL_BIN+= xencov_split
 INSTALL_BIN += $(INSTALL_BIN-y)
 
 # Everything to be installed in regular sbin/
-INSTALL_SBIN   += xen-bugtool
 INSTALL_SBIN-$(CONFIG_MIGRATE) += xen-hptool
 INSTALL_SBIN-$(CONFIG_X86) += xen-hvmcrash
 INSTALL_SBIN-$(CONFIG_X86) += xen-hvmctx
@@ -41,7 +40,6 @@ INSTALL_PRIVBIN+= xenpvnetboot
 TARGETS_ALL := $(INSTALL_BIN) $(INSTALL_SBIN) $(INSTALL_PRIVBIN)
 
 # Everything which only needs copying to install
-TARGETS_COPY += xen-bugtool
 TARGETS_COPY += xen-ringwatch
 TARGETS_COPY += xencons
 TARGETS_COPY += xencov_split
diff --git a/tools/misc/xen-bugtool b/tools/misc/xen-bugtool
deleted file mode 100644
index a3742b4787..00
--- a/tools/misc/xen-bugtool
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env python
-
-#  -*- mode: python; -*-
-
-# Copyright (c) 2005, XenSource Ltd.
-
-import sys
-
-from xen.util import bugtool
-
-
-if __name__ == "__main__":
-    try:
-        sys.exit(bugtool.main())
-    except KeyboardInterrupt:
-        print "\nInterrupted."
-        sys.exit(1)
-- 
2.20.1



Re: [Xen-devel] [PATCH v7 10/15] argo: implement the notify op

2019-02-03 Thread Christopher Clark
On Thu, Jan 31, 2019 at 8:45 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 30, 2019 at 08:28:15PM -0800, Christopher Clark wrote:
> > Queries for data about space availability in registered rings and
> > causes notification to be sent when space has become available.
> >
> > The hypercall op populates a supplied data structure with information about
> > ring state and if insufficient space is currently available in a given ring,
> > the hypervisor will record the domain's expressed interest and notify it
> > when it observes that space has become available.
> >
> > Checks for free space occur when this notify op is invoked, so it may be
> > intentionally invoked with no data structure to populate
> > (ie. a NULL argument) to trigger such a check and consequent notifications.
> >
> > Limit the maximum number of notify requests in a single operation to a
> > simple fixed limit of 256.
> >
> > Signed-off-by: Christopher Clark 
> > Tested-by: Chris Patterson 
>
> Reviewed-by: Roger Pau Monné 

Thanks.

>
> Despite the usage of list_for_each_entry_safe instead of
> list_first_entry_or_null.
>
> > +static void
> > +pending_notify(struct list_head *to_notify)
> > +{
> > +struct pending_ent *ent, *next;
> > +
> > +ASSERT(LOCKING_Read_L1);
> > +
> > +/* Sending signals for all ents in this list, draining until it is empty. */
> > +list_for_each_entry_safe(ent, next, to_notify, node)
>
> list_first_entry_or_null would be more suitable here.

ack, applied.
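
For reference, the applied drain might then look like the sketch below
(assumes Xen's list.h; locking is elided and the signalling helper name is
hypothetical):

    struct pending_ent *ent;

    while ( (ent = list_first_entry_or_null(to_notify, struct pending_ent,
                                            node)) != NULL )
    {
        list_del(&ent->node);
        signal_entry(ent);   /* hypothetical stand-in for the VIRQ signal */
        xfree(ent);
    }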

Christopher


Re: [Xen-devel] [PATCH v7 09/15] argo: implement the sendv op; evtchn: expose send_guest_global_virq

2019-02-03 Thread Christopher Clark
On Thu, Jan 31, 2019 at 8:38 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 30, 2019 at 08:28:14PM -0800, Christopher Clark wrote:
> > sendv operation is invoked to perform a synchronous send of buffers
> > contained in iovs to a remote domain's registered ring.
> >
> > It takes:
> >  * A destination address (domid, port) for the ring to send to.
> >It performs a most-specific match lookup, to allow for wildcard.
> >  * A source address, used to inform the destination of where to reply.
> >  * The address of an array of iovs containing the data to send
> >  * .. and the length of that array of iovs
> >  * and a 32-bit message type, available to communicate message context
> >data (eg. kernel-to-kernel, separate from the application data).
> >
> > If insufficient space exists in the destination ring, it will return
> > -EAGAIN and Xen will notify the caller when sufficient space becomes
> > available.
> >
> > Accesses to the ring indices are appropriately atomic. The rings are
> > mapped into Xen's private address space to write as needed and the
> > mappings are retained for later use.
> >
> > Notifications are sent to guests via VIRQ and send_guest_global_virq is
> > exposed in the change to enable argo to call it. VIRQ_ARGO is claimed
> > from the VIRQ previously reserved for this purpose (#11).
> >
> > The VIRQ notification method is used rather than sending events using
> > evtchn functions directly because:
> >
> > * no current event channel type is an exact fit for the intended
> >   behaviour. ECS_IPI is closest, but it disallows migration to
> >   other VCPUs which is not necessarily a requirement for Argo.
> >
> > * at the point of argo_init, allocation of an event channel is
> >   complicated by none of the guest VCPUs being initialized yet
> >   and the event channel logic expects that a valid event channel
> >   has a present VCPU.
> >
> > * at the point of signalling a notification, the VIRQ logic is already
> >   defensive: if d->vcpu[0] is NULL, the notification is just silently
> >   dropped, whereas the evtchn_send logic is not so defensive: vcpu[0]
> >   must not be NULL, otherwise a null pointer dereference occurs.
> >
> > Using a VIRQ removes the need for the guest to query to determine which
> > event channel notifications will be delivered on. This is also likely to
> > simplify establishing future L0/L1 nested hypervisor argo communication.
> >
> > Signed-off-by: Christopher Clark 
> > Tested-by: Chris Patterson 
>
> There's one style nit that I think can be fixed while committing:
>
> Reviewed-by: Roger Pau Monné 

Thanks.

> Despite the usage of the open-coded mask below. As with previous
> patches this is argos code, so I'm not going to oppose, but again I
> think using such open coded masks is bad, and can lead to bugs in the
> code. It can be fixed by a follow up patch.

Have responded with a proposed fix to address this on the other thread.

>
> > +static int
> > +ringbuf_insert(const struct domain *d, struct argo_ring_info *ring_info,
> > +   const struct argo_ring_id *src_id, xen_argo_iov_t *iovs,
> > +   unsigned int niov, uint32_t message_type,
> > +   unsigned long *out_len)
> > +{
> > +xen_argo_ring_t ring;
> > +struct xen_argo_ring_message_header mh = { };
> > +int sp, ret;
> > +unsigned int len = 0;
> > +xen_argo_iov_t *piov;
> > +XEN_GUEST_HANDLE(uint8) NULL_hnd = { };
> > +
> > +ASSERT(LOCKING_L3(d, ring_info));
> > +
> > +/*
> > + * Obtain the total size of data to transmit -- sets the 'len' variable
> > + * -- and sanity check that the iovs conform to size and number limits.
> > + * Enforced below: no more than 'len' bytes of guest data
> > + * (plus the message header) will be sent in this operation.
> > + */
> > +ret = iov_count(iovs, niov, &len);
> > +if ( ret )
> > +return ret;
> > +
> > +/*
> > + * Upper bound check the message len against the ring size.
> > + * The message must not fill the ring; there must be at least one slot
> > + * remaining so we can distinguish a full ring from an empty one.
> > + * iov_count has already verified: len <= MAX_ARGO_MESSAGE_SIZE.
> > + */
> > +if ( (ROUNDUP_MESSAGE(len) + sizeof(struct 
> > xen_argo_ring_message_header))
>missing space ^
> > +>= ring_info->len )
>
> Align of >= also looks weird, should be aligned to the parenthesis
> before ROUNDUP_.

ack
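
As background for the ring-size comment quoted above, a generic sketch (not
the Argo code) of why one slot always stays free:

    /* With tx == rx meaning "empty", a producer that filled the ring
     * completely would also end with tx == rx, making "full" and "empty"
     * indistinguishable; reserving one slot removes the ambiguity. */
    static unsigned int ring_free_space(unsigned int tx, unsigned int rx,
                                        unsigned int len)
    {
        return (rx - tx - 1 + len) % len;   /* never counts the last slot */
    }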

> > @@ -1175,6 +1766,42 @@ do_argo_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg1,
> >  break;
> >  }
> >
> > +case XEN_ARGO_OP_sendv:
> > +{
> > +xen_argo_send_addr_t send_addr;
> > +xen_argo_iov_t iovs[XEN_ARGO_MAXIOV];
> > +unsigned int niov;
> > +
> > +XEN_GUEST_HANDLE_PARAM(xen_argo_send_addr_t) send_addr_hnd =
> > +guest_handle_cast(arg1, xen_argo_send_addr_t);
> > +

Re: [Xen-devel] [PATCH v7 07/15] argo: implement the register op

2019-02-03 Thread Christopher Clark
On Thu, Jan 31, 2019 at 8:19 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 30, 2019 at 08:28:12PM -0800, Christopher Clark wrote:
> > The register op is used by a domain to register a region of memory for
> > receiving messages from either a specified other domain, or, if specifying a
> > wildcard, any domain.
> >
> > This operation creates a mapping within Xen's private address space that
> > will remain resident for the lifetime of the ring. In subsequent commits,
> > the hypervisor will use this mapping to copy data from a sending domain into
> > this registered ring, making it accessible to the domain that registered the
> > ring to receive data.
> >
> > Wildcard any-sender rings are default disabled and registration will be
> > refused with EPERM unless they have been specifically enabled with the
> > new mac-permissive flag that is added to the argo boot option here. The
> > reason why the default for wildcard rings is 'deny' is that there is
> > currently no means to protect the ring from DoS by a noisy domain
> > spamming the ring, affecting other domains ability to send to it. This
> > will be addressed with XSM policy controls in subsequent work.
> >
> > Since denying access to any-sender rings is a significant functional
> > constraint, the new option "mac-permissive" for the argo bootparam
> > enables overriding this. eg: "argo=1,mac-permissive=1"
> >
> > The p2m type of the memory supplied by the guest for the ring must be
> > p2m_ram_rw and the memory will be pinned as PGT_writable_page while the ring
> > is registered.
> >
> > This hypercall op and its interface currently only supports 4K-sized pages.
> >
> > Signed-off-by: Christopher Clark 
> > Tested-by: Chris Patterson 
>
> Just one style issue below that should be fixed before commit, and two
> comments:
>
> Reviewed-by: Roger Pau Monné 

Thanks

> > +static int
> > +ring_map_page(const struct domain *d, struct argo_ring_info *ring_info,
> > +  unsigned int i, void **out_ptr)
> > +{
> > +ASSERT(LOCKING_L3(d, ring_info));
> > +
> > +/*
> > + * FIXME: Investigate using vmap to create a single contiguous virtual
> > + * address space mapping of the ring instead of using the array of 
> > single
> > + * page mappings.
> > + * Affects logic in memcpy_to_guest_ring, the mfn_mapping array data
> > + * structure, and places where ring mappings are added or removed.
> > + */
> > +
> > +if ( i >= ring_info->nmfns )
> > +{
> > +gprintk(XENLOG_ERR,
> > +   "argo: ring (vm%u:%x vm%u) %p attempted to map page %u of 
> > %u\n",
> > +ring_info->id.domain_id, ring_info->id.aport,
> > +ring_info->id.partner_id, ring_info, i, ring_info->nmfns);
> > +return -ENOMEM;
> > +}
> > +i = array_index_nospec(i, ring_info->nmfns);
> > +
> > +if ( !ring_info->mfns || !ring_info->mfn_mapping)
>^ missing space

ack
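
As background, the bounds check followed by array_index_nospec() quoted
above is the usual Spectre-v1 pattern; a generic sketch (assumes
xen/nospec.h and xen/errno.h; the names are illustrative, not the Argo
code):

    static int lookup(const int *table, unsigned int nr_entries,
                      unsigned int i, int *out)
    {
        if ( i >= nr_entries )          /* architectural bounds check first */
            return -ENOMEM;
        /* Clamp the index so mispredicted speculation cannot read past the
         * bound. */
        i = array_index_nospec(i, nr_entries);
        *out = table[i];
        return 0;
    }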

> [...]
> > +static int
> > +copy_gfn_from_handle(XEN_GUEST_HANDLE_PARAM(void) gfn_hnd, bool compat,
> > + unsigned int i, gfn_t *out_gfn)
> > +{
> > +int ret;
> > +
> > +#ifdef CONFIG_COMPAT
> > +if ( compat )
> > +{
> > +XEN_GUEST_HANDLE_PARAM(compat_pfn_t) c_gfn_hnd =
> > +guest_handle_cast(gfn_hnd, compat_pfn_t);
> > +compat_pfn_t c_gfn;
> > +
> > +ret = __copy_from_guest_offset(&c_gfn, c_gfn_hnd, i, 1) ? -EFAULT : 0;
> > +*out_gfn = _gfn(c_gfn);
> > +}
> > +else
>
> AFAICT you could place the #endif here and avoid the one below.

ack, thanks.
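
A sketch of the restructuring suggested above, keeping a single #ifdef by
closing it immediately after the compat branch (the non-compat branch shown
is an assumption about the surrounding code, not the committed text):

    #ifdef CONFIG_COMPAT
        if ( compat )
        {
            compat_pfn_t c_gfn;

            ret = __copy_from_guest_offset(&c_gfn, c_gfn_hnd, i, 1) ? -EFAULT
                                                                    : 0;
            *out_gfn = _gfn(c_gfn);
        }
        else
    #endif
        {
            xen_pfn_t gfn;

            ret = __copy_from_guest_offset(&gfn, gfn_hnd, i, 1) ? -EFAULT : 0;
            *out_gfn = _gfn(gfn);
        }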

> > @@ -579,7 +1105,6 @@ compat_argo_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg1,
> >  argo_dprintk("<-compat_argo_op(%u)=%ld\n", cmd, rc);
> >
> >  return rc;
> > -
>
> This looks like a stray fix that should have gone into a different
> patch.

ack, fixed.

Christopher


Re: [Xen-devel] [PATCH v5 00/15] Argo: hypervisor-mediated interdomain communication

2019-02-03 Thread Christopher Clark
On Thu, Jan 31, 2019 at 5:39 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 30, 2019 at 08:05:30PM -0800, Christopher Clark wrote:
> > On Tue, Jan 22, 2019 at 6:19 AM Roger Pau Monné  wrote:
> > >
> > > On Mon, Jan 21, 2019 at 01:59:40AM -0800, Christopher Clark wrote:
> > > > Version five of this patch series:
> > > >
> > > > * Changes are primarily addressing feedback from the v4 series reviews.
> > > >   Many points noted on the individual commit posts.
> > > >
> > > > * Critical sections have been shrunk, with allocations and frees
> > > >   pulled outside where possible, reordering logic within hypercall ops.
> > > >
> > > > * A new ring hash function implemented, derived from the djb2 string
> > > >   hash function.
> > > >
> > > > * Flags returned by the notify op have been simplified.
> > > >
> > > > * Now uses a single argo boot parameter, taking a list:
> > > >   - top level boolean to enable/disable Argo
> > > >   - mac-permissive option to enable/disable wildcard rings
> > > >   - command line doc edit: no "CONFIG_ARGO" but refers to build config
> > > >
> > > > * Switched to use the standard list data structures used by Xen's
> > > >   common code.
> > >
> > > AFAIK this was not requested by any reviewer, so I wonder why you made
> > > such change. The more that you open coded some of the list_ macros
> > > instead of just doing a s/hlist_/list_/ replacement.
> > > I'm fine with using list instead of hlist,
> >
> > At your request, v7 replaces open coding with Xen's list macros. The
> > hlist macros were not used by any of the common code in Xen.
> >
> > > but I don't understand why
> > > you decided to open code list_for_each and list_for_each_safe instead
> > > of using the macros provided by Xen. Is there an issue with such
> > > macros?
> >
> > As discussed offline:
> >
> > - Using Xen's list macros will expedite Argo's merge for Xen 4.12
> > - List macros in Xen list.h originated in Linux list.h and have diverged
> > - OpenXT has use cases for measured launch and nested virtualization,
> >   which influence downstream performance and security requirements for
> >   Argo and Xen
> > - OpenXT can temporarily patch Xen 4.12 for downstream use
> >
> > > I've made a couple of minor comments, but I think the current status
> > > is good, and fixing those minor comments is going to be trivial.
> >
> > Ack, thanks. Hopefully v7 looks good.
>
> As a note, the common flow of interactions usually involves the
> contributor replying to the comments made by the reviewer in order to
> try to reach an agreement before sending a new version.

Yes, v7 was sent to address Jan and Julien's review comments in parallel
with our ongoing discussion on v5 macros. v7 also provided a checkpoint
for Argo testers to maximize test coverage as the series converges into
a Xen 4.12 merge candidate for Juergen. It addressed:

 - Jan's v6 review comments
 - Julien's v1 review comment
 - most of your xen-devel and offline review comments

> There are comments from v5 that haven't been fixed in v7
> (the mask usage and list_first_entry_or_null for example)
> and the reply to the reviewer's comment was sent at the same time as
> v7, leaving no time for further discussion (and for reaching an
> agreement suitable to both parties) before sending v7.

Code changes from our ongoing discussion will be addressed in v8. A
proposal to address mask usage has been put forward in the parallel
thread. Your proposed usage of list_first_entry_or_null will be made in
v8, subject to the previous offline discussion about list macros
(duplicated here for convenience):

> > As discussed offline:
> >
> > - Using Xen's list macros will expedite Argo's merge for Xen 4.12
> > - List macros in Xen list.h originated in Linux list.h and have diverged
> > - OpenXT has use cases for measured launch and nested virtualization,
> >   which influence downstream performance and security requirements for
> >   Argo and Xen
> > - OpenXT can temporarily patch Xen 4.12 for downstream use

Christopher


Re: [Xen-devel] [PATCH v7 04/15] argo: init, destroy and soft-reset, with enable command line opt

2019-02-03 Thread Christopher Clark
On Thu, Jan 31, 2019 at 6:49 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 30, 2019 at 08:28:09PM -0800, Christopher Clark wrote:
> > Initialises basic data structures and performs teardown of argo state
> > for domain shutdown.
> >
> > Inclusion of the Argo implementation is dependent on CONFIG_ARGO.
> >
> > Introduces a new Xen command line parameter 'argo': bool to enable/disable
> > the argo hypercall. Defaults to disabled.
> >
> > New headers:
> >   public/argo.h: with definions of addresses and ring structure, including
> >   indexes for atomic update for communication between domain and hypervisor.
> >
> >   xen/argo.h: to expose the hooks for integration into domain lifecycle:
> > argo_init: per-domain init of argo data structures for domain_create.
> > argo_destroy: teardown for domain_destroy and the error exit
> >   path of domain_create.
> > argo_soft_reset: reset of domain state for domain_soft_reset.
> >
> > Adds a new field to struct domain: struct argo_domain *argo;
> >
> > In accordance with recent work on _domain_destroy, argo_destroy is
> > idempotent. It will tear down: all rings registered by this domain, all
> > rings where this domain is the single sender (ie. specified partner,
> > non-wildcard rings), and all pending notifications where this domain is
> > awaiting signal about available space in the rings of other domains.
> >
> > A count will be maintained of the number of rings that a domain has
> > registered in order to limit it below the fixed maximum limit defined here.
> >
> > Macros are defined to verify the internal locking state within the argo
> > implementation. The macros are ASSERTed on entry to functions to validate
> > and document the required lock state prior to calling.
> >
> > The hash function for the hashtables that hold ring state is derived from
> > the string hashing function djb2 (http://www.cse.yorku.ca/~oz/hash.html)
> > by Daniel J. Bernstein. Basic testing with a limited number of domains and
> > ports has shown reasonable distribution for the table size.
> >
> > The software license on the public header is the BSD license, standard
> > procedure for the public Xen headers. The public header was originally
> > posted under a GPL license at: [1]:
> > https://lists.xenproject.org/archives/html/xen-devel/2013-05/msg02710.html
> >
> > The following ACK by Lars Kurth is to confirm that only people being
> > employees of Citrix contributed to the header files in the series posted at
> > [1] and that thus the copyright of the files in question is fully owned by
> > Citrix. The ACK also confirms that Citrix is happy for the header files to
> > be published under a BSD license in this series (which is based on [1]).
> >
> > Signed-off-by: Christopher Clark 
> > Acked-by: Lars Kurth 
> > Reviewed-by: Ross Philipson 
> > Tested-by: Chris Patterson 
>
> Reviewed-by: Roger Pau Monné 

ack
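
As background on the hash mentioned in the commit message, djb2 (Daniel J.
Bernstein, http://www.cse.yorku.ca/~oz/hash.html) is this classic string
hash; the Argo ring hash is only derived from it, so treat this as a sketch
rather than the in-tree function:

    static unsigned long djb2(const unsigned char *str)
    {
        unsigned long hash = 5381;
        int c;

        while ( (c = *str++) != 0 )
            hash = ((hash << 5) + hash) + c;   /* hash * 33 + c */
        return hash;
    }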

> I have some comments below that could be fixed upon commit if the
> committer agrees.

I've applied all the requested changes below for the next series submission.

>
> > +static struct argo_ring_info *
> > +find_ring_info(const struct domain *d, const struct argo_ring_id *id)
> > +{
> > +struct argo_ring_info *ring_info;
> > +const struct list_head *bucket;
> > +
> > +ASSERT(LOCKING_Read_rings_L2(d));
> > +
> > +/* List is not modified here. Search and return the match if found. */
> > +bucket = &d->argo->ring_hash[hash_index(id)];
> > +
> > +list_for_each_entry(ring_info, bucket, node)
>
> I'm not sure what's the policy regarding list_ macros, should spaces
> be added between the parentheses?
>
> list_for_each_entry ( ring_info, bucket, node )
>
> I don't have a strong opinion either way.
>
> [...]
> > +static void
> > +pending_remove_all(const struct domain *d, struct argo_ring_info *ring_info)
> > +{
> > +struct pending_ent *ent, *next;
> > +
> > +ASSERT(LOCKING_L3(d, ring_info));
> > +
> > +/* Delete all pending notifications from this ring's list. */
> > +list_for_each_entry_safe(ent, next, &ring_info->pending, node)
>
> list_first_entry_or_null and you can get rid of next.

ack

> > +{
> > +/* For wildcard rings, remove each from their wildcard list too. */
> > +if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
> > +wildcard_pending_list_remove(ent->domain_id, ent);
> > +list_del(&ent->node);
> > +xfree(ent);
> > +}
> > +ring_info->npending = 0;
> > +}
> > +
> > +static void
> > +wildcard_rings_pending_remove(struct domain *d)
> > +{
> > +struct pending_ent *ent, *next;
> > +
> > +ASSERT(LOCKING_Write_L1);
> > +
> > +/* Delete all pending signals to the domain about wildcard rings. */
> > +list_for_each_entry_safe(ent, next, &d->argo->wildcard_pend_list, node)
>
> list_first_entry_or_null and you can get rid of next.

ack

>
> > +{
> > +/*
> > + * The ent->node deleted here, and the npending value decreased,
> > + 

Re: [Xen-devel] [PATCH v5 09/15] argo: implement the sendv op; evtchn: expose send_guest_global_virq

2019-02-03 Thread Christopher Clark
On Thu, Jan 31, 2019 at 3:01 AM Roger Pau Monné  wrote:
>
> On Thu, Jan 31, 2019 at 03:35:23AM -0700, Jan Beulich wrote:
> > >>> On 31.01.19 at 11:18,  wrote:
> > > On Wed, Jan 30, 2019 at 08:10:28PM -0800, Christopher Clark wrote:
> > >> On Tue, Jan 22, 2019 at 4:08 AM Roger Pau Monné  
> > >> wrote:
> > >> > On Mon, Jan 21, 2019 at 01:59:49AM -0800, Christopher Clark wrote:
> > >> > > +/*
> > >> > > + * Check padding is zeroed. Reject niov above limit or message_types
> > >> > > + * that are outside 32 bit range.
> > >> > > + */
> > >> > > +if ( unlikely(send_addr.src.pad || send_addr.dst.pad ||
> > >> > > +  (arg3 > XEN_ARGO_MAXIOV) || (arg4 & ~0xffffffffUL)) )
> > >> >
> > >> > arg4 & ~(GB(4) - 1)
> > >> >
> > >> > Is clearer IMO, or:
> > >> >
> > >> > arg4 > UINT32_MAX
> > >>
> > >> I've left the code unchanged, as the mask constant is used multiple
> > >> places elsewhere in Xen. UINT32_MAX is only used as a threshold value.
> > >
> > > The fact that other parts of the code could be improved is not an
> > > excuse to follow suit. I'm having a hard time believing that you find
> > > "arg4 & ~0xffffffffUL" easier to read than "arg4 & ~(GB(4) - 1)" or
> > > even "arg4 >= GB(4)".


Below, I propose an alternative way of achieving our correctness and
readability goals.

On the topic of readability, this self-contained definition
does stand out: ~0xffffffffUL,
encouraging caution and careful counting of 'f's. However, no other
source files are involved, making the code independent of changes in
(macro) definitions in other files.

In comparison, to understand GB, I have to find the external definition,
and then parse this:

#define GB(_gb) (_AC(_gb, ULL) << 30)

(which seems to have a different type? ULL vs UL?) and then find and
understand this, in another file:

#ifdef __ASSEMBLY__
#define _AC(X,Y)    X
#define _AT(T,X)    X
#else
#define __AC(X,Y)   (X##Y)
#define _AC(X,Y)    __AC(X,Y)
#define _AT(T,X)    ((T)(X))
#endif

so I'm saying: it's at least somewhat arguable which is easier to understand.
Regardless, I think there's a better option than either.
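For reference, a worked expansion (mine, not from the thread) shows that the
two forms test the same condition:

    GB(4)  ->  (_AC(4, ULL) << 30)
           ->  (4ULL << 30)
           ->  0x100000000ULL   /* 4 GiB, i.e. UINT32_MAX + 1 */

    arg4 >= GB(4)           /* true exactly when arg4 needs more than 32 bits */
    arg4 & ~0xffffffffUL    /* nonzero under the same condition on 64-bit */

so the choice between them is purely one of readability, plus the ULL vs UL
type difference noted above.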

> > > IMO it's much more likely to miss an 'f' in the first construct, and
> > > thus get the value wrong and introduce a bug.
> >
> > I agree with this last statement, but I'm having trouble seeing how
> > message _type_ is related to a size construct like GB(4) is. I see
> > only UINT32_MAX as a viable alternative for something that's not
> > expressing the size of anything.
>
> I've suggested the GB construct as an alternative because the comment
> above mentions the 32bit range. IMO anything that avoids using
> 0xffffffffUL is fine.

Jan and Andrew have employed a useful technique in recent changes where such a
test was required.  This could work:

(arg4 != (uint32_t)arg4)

It is self-contained, readable and clearly expresses the intent of the check
being performed. I have tested a series with this applied, and have it ready
to post if you approve.
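As a quick illustration of why the truncation comparison works (my sketch, not
part of the patch; the helper name is invented):

    #include <stdint.h>
    #include <stdbool.h>

    /* True iff 'v' does not fit in 32 bits: the cast keeps only the low
     * 32 bits, so the comparison fails exactly when any higher bit is
     * set -- the same condition as (v & ~0xffffffffUL) on 64-bit. */
    static bool exceeds_32bit(unsigned long v)
    {
        return v != (uint32_t)v;
    }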

Christopher

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-4.9-testing test] 132747: regressions - FAIL

2019-02-03 Thread osstest service owner
flight 132747 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132747/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386        6 xen-build    fail REGR. vs. 130954
 build-amd64-xsm   6 xen-build    fail REGR. vs. 130954
 build-amd64       6 xen-build    fail REGR. vs. 130954
 build-i386-xsm    6 xen-build    fail REGR. vs. 130954
 test-armhf-armhf-xl   7 xen-boot fail REGR. vs. 130954

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 17 guest-start.2    fail REGR. vs. 130851

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win10-i386  1 build-check(1)  blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 build-i386-rumprun1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)   blocked n/a
 test-xtf-amd64-amd64-21 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-11 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-51 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-31 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-livepatch 1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 build-amd64-rumprun   1 build-check(1)   blocked  n/a
 test-xtf-amd64-amd64-41 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)   blocked n/a
 

[Xen-devel] [xen-4.11-testing test] 132736: regressions - FAIL

2019-02-03 Thread osstest service owner
flight 132736 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132736/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore fail REGR. vs. 132647

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10   fail REGR. vs. 132647

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-pvshim    12 guest-start  fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install    fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install    fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  e2e3a1d75798781a8031feec0050e6e1c98187ca
baseline version:
 xen  df1debf494ac38c95abb602b2b3057613de06b47

Last test of basis   132647  2019-01-31 03:32:06 Z    3 days
Testing same since   132736  2019-02-02 05:49:09 Z    1 days    1 attempts

[Xen-devel] [libvirt test] 132745: tolerable all pass - PUSHED

2019-02-03 Thread osstest service owner
flight 132745 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132745/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-check    fail  like 132664
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 132664
 test-arm64-arm64-libvirt 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check    fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check    fail   never pass

version targeted for testing:
 libvirt  18795687447dd55a281d33d24ae0a06c6bc081ee
baseline version:
 libvirt  d56afb8e3997ae19fd7449f773065a2b997dc7c1

Last test of basis   132664  2019-01-31 09:40:55 Z    3 days
Testing same since   132745  2019-02-02 11:36:21 Z    1 days    1 attempts


People who touched revisions under test:
  Andrea Bolognani 
  Casey Callendrello 
  Daniel P. Berrangé 
  Erik Skultety 
  John Ferlan 
  Ján Tomko 
  Laine Stump 
  Laine Stump 
  Michal Privoznik 
  Peter Krempa 
  Roman Bogorodskiy 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   d56afb8e39..1879568744  18795687447dd55a281d33d24ae0a06c6bc081ee -> xen-tested-master


[Xen-devel] [qemu-mainline test] 132737: regressions - FAIL

2019-02-03 Thread osstest service owner
flight 132737 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132737/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   6 xen-install  fail REGR. vs. 131842
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 131842

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop    fail like 131842
 test-armhf-armhf-libvirt 14 saverestore-support-check    fail  like 131842
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 131842
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop    fail like 131842
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 131842
 test-amd64-i386-xl-pvshim    12 guest-start  fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install    fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 qemuu    b3fc0af1ff5e922d4dd7c875394dbd26dc7313b4
baseline version:
 qemuu    147923b1a901a0370f83a0f4c58ec1baffef22f0

Last test of basis   131842  2019-01-09 00:37:22 Z   25 days
Failing since        131892  2019-01-09 23:37:00 Z   24 days   21 attempts
Testing same since   132737  2019-02-02 06:42:28 Z    1 days    1 attempts


People who touched revisions under test:
  Aaron Lindsay 
  Aaron Lindsay 
  Aaron Lindsay 
  Aaron Lindsay OS 
  Alberto Garcia 
  Aleksandar Markovic 
  Alex Bennée 
  Alex Williamson 
  Alexander Graf 
  Alexander Kanavin 
  Alexandro Sanchez Bach 
  Alexey Kardashevskiy 
  Alistair Francis 
  Andrew Jeffery 
  Anthony PERARD 
  BALATON Zoltan 
  Bandan Das 
  Bastian Koppelmann 
  Borislav Petkov 
  Christian Borntraeger 
  Christophe Fergeau 
  Cleber Rosa 
  Collin Walling 
  Cornelia Huck 
  Cédric Le Goater 
  Daniel P. Berrangé 
  David Gibson 
  David Hildenbrand 
  Dongli Zhang 
  Dr. David Alan Gilbert 
  Edgar E. Iglesias 
  

[Xen-devel] [linux-4.19 test] 132732: regressions - trouble: blocked/broken/fail/pass

2019-02-03 Thread osstest service owner
flight 132732 linux-4.19 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132732/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt  broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 129313
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 129313
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 129313
 test-amd64-amd64-xl-shadow    7 xen-boot fail REGR. vs. 129313
 test-amd64-amd64-amd64-pvgrub  7 xen-boot    fail REGR. vs. 129313
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-xl-raw    7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-xl-shadow 7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-libvirt   7 xen-boot fail REGR. vs. 129313
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 129313
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 129313
 test-amd64-i386-xl    7 xen-boot fail REGR. vs. 129313
 test-amd64-amd64-examine  8 reboot   fail REGR. vs. 129313
 build-armhf-libvirt   5 host-build-prep  fail REGR. vs. 129313
 test-amd64-i386-qemut-rhel6hvm-amd 12 guest-start/redhat.repeat fail REGR. vs. 129313
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 129313

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat fail REGR. vs. 129313

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  7 xen-boot   fail never pass
 test-amd64-i386-xl-pvshim    12 guest-start  fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl  13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass

[Xen-devel] [xen-unstable-coverity test] 132767: regressions - ALL FAIL

2019-02-03 Thread osstest service owner
flight 132767 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132767/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 coverity-amd64    7 coverity-upload  fail REGR. vs. 132424

version targeted for testing:
 xen  755eb6403ec722db37f1b8f8b51e0b0ab661c003
baseline version:
 xen  08b908ba63dee8bc313983c5e412852cbcbcda85

Last test of basis   132424  2019-01-23 09:19:14 Z   11 days
Failing since        132506  2019-01-27 09:18:42 Z    7 days    3 attempts
Testing same since   132767  2019-02-03 09:18:42 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Andrii Anisov 
  Anthony PERARD 
  Brian Woods 
  Doug Goldstein 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Norbert Manthey 
  Roger Pau Monne 
  Roger Pau Monné 
  Tamas K Lengyel 
  Wei Liu 

jobs:
 coverity-amd64   fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 713 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 10/21] memblock: refactor internal allocation functions

2019-02-03 Thread Mike Rapoport
On Sun, Feb 03, 2019 at 08:39:20PM +1100, Michael Ellerman wrote:
> Mike Rapoport  writes:
> 
> > Currently, memblock has several internal functions with overlapping
> > functionality. They all call memblock_find_in_range_node() to find free
> > memory and then reserve the allocated range and mark it with kmemleak.
> > However, there is a difference in the allocation constraints and in the
> > fallback strategies.
> >
> > The allocations returning physical address first attempt to find free
> > memory on the specified node within mirrored memory regions, then retry on
> > the same node without the requirement for memory mirroring and finally fall
> > back to all available memory.
> >
> > The allocations returning virtual address start with clamping the allowed
> > range to memblock.current_limit, attempt to allocate from the specified
> > node from regions with mirroring and with user defined minimal address. If
> > such an allocation fails, the next attempt is made with the node
> > restriction lifted.
> > Next, the allocation is retried with minimal address reset to zero and at
> > last without the requirement for mirrored regions.
> >
> > Let's consolidate the handling of the various fallbacks and make it more
> > consistent for the physical and virtual variants. Most of the fallback
> > handling is moved
> > to memblock_alloc_range_nid() and it now handles node and mirror fallbacks.
> >
> > The memblock_alloc_internal() uses memblock_alloc_range_nid() to get a
> > physical address of the allocated range and converts it to virtual address.
> >
> > The fallback for allocation below the specified minimal address remains in
> > memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA
> > with exact requirement for lower bounds.
> 
> This is causing problems on some of my machines.
> 
> I see NODE_DATA allocations falling back to node 0 when they shouldn't,
> or didn't previously.
> 
> eg, before:
> 
> 57990190: (116011251): numa:   NODE_DATA [mem 0xfffe4980-0xfffebfff]
> 58152042: (116373087): numa:   NODE_DATA [mem 0x8fff90980-0x8fff97fff]
> 
> after:
> 
> 16356872061562: (6296877055): numa:   NODE_DATA [mem 0xfffe4980-0xfffebfff]
> 16356872079279: (6296894772): numa:   NODE_DATA [mem 0xfffcd300-0xfffd497f]
> 16356872096376: (6296911869): numa: NODE_DATA(1) on node 0
> 
> 
> On some of my other systems it does that, and then panics because it
> can't allocate anything at all:
> 
> [0.00] numa:   NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
> [0.00] numa:   NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
> [0.00] numa: NODE_DATA(1) on node 0
> [0.00] Kernel panic - not syncing: Cannot allocate 20864 bytes for 
> node 16 data
> [0.00] CPU: 0 PID: 0 Comm: swapper Not tainted 
> 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
> [0.00] Call Trace:
> [0.00] [c11cfca0] [c0c11044] dump_stack+0xe8/0x164 
> (unreliable)
> [0.00] [c11cfcf0] [c00fdd6c] panic+0x17c/0x3e0
> [0.00] [c11cfd90] [c0f61bc8] initmem_init+0x128/0x260
> [0.00] [c11cfe60] [c0f57940] setup_arch+0x398/0x418
> [0.00] [c11cfee0] [c0f50a94] start_kernel+0xa0/0x684
> [0.00] [c11cff90] [c000af70] 
> start_here_common+0x1c/0x52c
> [0.00] Rebooting in 180 seconds..
> 
> 
> So there's something going wrong there; I haven't had time to dig into
> it though (Sunday night here).

I'll try to see if I can reproduce it with qemu.
 
> cheers
> 

-- 
Sincerely yours,
Mike.


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 10/21] memblock: refactor internal allocation functions

2019-02-03 Thread Michael Ellerman
Mike Rapoport  writes:

> Currently, memblock has several internal functions with overlapping
> functionality. They all call memblock_find_in_range_node() to find free
> memory and then reserve the allocated range and mark it with kmemleak.
> However, there is a difference in the allocation constraints and in the
> fallback strategies.
>
> The allocations returning physical address first attempt to find free
> memory on the specified node within mirrored memory regions, then retry on
> the same node without the requirement for memory mirroring and finally fall
> back to all available memory.
>
> The allocations returning virtual address start with clamping the allowed
> range to memblock.current_limit, attempt to allocate from the specified
> node from regions with mirroring and with user defined minimal address. If
> such an allocation fails, the next attempt is made with the node
> restriction lifted.
> Next, the allocation is retried with minimal address reset to zero and at
> last without the requirement for mirrored regions.
>
> Let's consolidate the handling of the various fallbacks and make it more
> consistent for the physical and virtual variants. Most of the fallback
> handling is moved
> to memblock_alloc_range_nid() and it now handles node and mirror fallbacks.
>
> The memblock_alloc_internal() uses memblock_alloc_range_nid() to get a
> physical address of the allocated range and converts it to virtual address.
>
> The fallback for allocation below the specified minimal address remains in
> memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA
> with exact requirement for lower bounds.
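Schematically, the fallback order described in the commit message above works
out to something like the sketch below. This paraphrases the text only;
find_free() and reserve_and_mark() are invented stand-ins for memblock
internals, and this is not the actual mm/memblock.c code.

    typedef unsigned long long phys_addr_t;   /* stand-in for the kernel type */
    #define NUMA_NO_NODE  (-1)
    enum { ANY_REGION = 0, MIRRORED_ONLY = 1 };

    /* Hypothetical stand-ins for memblock internals (declarations only). */
    extern phys_addr_t find_free(phys_addr_t size, phys_addr_t align,
                                 phys_addr_t start, phys_addr_t end,
                                 int nid, int flags);
    extern void reserve_and_mark(phys_addr_t base, phys_addr_t size);

    static phys_addr_t alloc_range_nid(phys_addr_t size, phys_addr_t align,
                                       phys_addr_t start, phys_addr_t end,
                                       int nid)
    {
        /* 1. Requested node, mirrored regions only. */
        phys_addr_t found = find_free(size, align, start, end, nid,
                                      MIRRORED_ONLY);

        /* 2. Same node, mirroring requirement dropped. */
        if (!found)
            found = find_free(size, align, start, end, nid, ANY_REGION);

        /* 3. Finally, fall back to all available memory. */
        if (!found && nid != NUMA_NO_NODE)
            found = find_free(size, align, start, end, NUMA_NO_NODE,
                              ANY_REGION);

        /* Reserve the range and register it with kmemleak. */
        if (found)
            reserve_and_mark(found, size);

        return found;
    }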

This is causing problems on some of my machines.

I see NODE_DATA allocations falling back to node 0 when they shouldn't,
or didn't previously.

eg, before:

57990190: (116011251): numa:   NODE_DATA [mem 0xfffe4980-0xfffebfff]
58152042: (116373087): numa:   NODE_DATA [mem 0x8fff90980-0x8fff97fff]

after:

16356872061562: (6296877055): numa:   NODE_DATA [mem 0xfffe4980-0xfffebfff]
16356872079279: (6296894772): numa:   NODE_DATA [mem 0xfffcd300-0xfffd497f]
16356872096376: (6296911869): numa: NODE_DATA(1) on node 0


On some of my other systems it does that, and then panics because it
can't allocate anything at all:

[0.00] numa:   NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
[0.00] numa:   NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
[0.00] numa: NODE_DATA(1) on node 0
[0.00] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 
16 data
[0.00] CPU: 0 PID: 0 Comm: swapper Not tainted 
5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
[0.00] Call Trace:
[0.00] [c11cfca0] [c0c11044] dump_stack+0xe8/0x164 
(unreliable)
[0.00] [c11cfcf0] [c00fdd6c] panic+0x17c/0x3e0
[0.00] [c11cfd90] [c0f61bc8] initmem_init+0x128/0x260
[0.00] [c11cfe60] [c0f57940] setup_arch+0x398/0x418
[0.00] [c11cfee0] [c0f50a94] start_kernel+0xa0/0x684
[0.00] [c11cff90] [c000af70] 
start_here_common+0x1c/0x52c
[0.00] Rebooting in 180 seconds..


So there's something going wrong there; I haven't had time to dig into
it though (Sunday night here).

cheers

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel