[Xen-devel] [xen-unstable test] 101242: tolerable FAIL

2016-10-02 Thread osstest service owner
flight 101242 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101242/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2  11 guest-start  fail in 101235 pass in 101242
 test-armhf-armhf-xl-arndale  15 guest-start/debian.repeat  fail in 101235 pass in 101242
 test-armhf-armhf-xl-rtds     15 guest-start/debian.repeat  fail pass in 101235
 test-amd64-amd64-xl-qemut-debianhvm-amd64  9 debian-hvm-install  fail pass in 101235

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64  16 guest-stop      fail  like 101235
 test-amd64-i386-xl-qemuu-win7-amd64  16 guest-stop      fail  like 101235
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop      fail  like 101235
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop      fail  like 101235
 test-amd64-amd64-xl-rtds              9 debian-install  fail  like 101235

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)             blocked  n/a
 test-amd64-i386-rumprun-i386    1 build-check(1)             blocked  n/a
 build-amd64-rumprun             5 rumprun-build              fail   never pass
 test-armhf-armhf-libvirt-xsm   12 migrate-support-check      fail   never pass
 test-armhf-armhf-libvirt-xsm   14 guest-saverestore          fail   never pass
 build-i386-rumprun              5 rumprun-build              fail   never pass
 test-armhf-armhf-libvirt       12 migrate-support-check      fail   never pass
 test-armhf-armhf-libvirt       14 guest-saverestore          fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check      fail   never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore          fail   never pass
 test-armhf-armhf-xl            12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl            13 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-credit2    12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-credit2    13 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-vhd        11 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-vhd        12 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-arndale    12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-arndale    13 saverestore-support-check  fail   never pass
 test-armhf-armhf-libvirt-raw   11 migrate-support-check      fail   never pass
 test-armhf-armhf-libvirt-raw   13 guest-saverestore          fail   never pass
 test-armhf-armhf-xl-rtds       12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-rtds       13 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-multivcpu  12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-multivcpu  13 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-xsm        12 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-xsm        13 saverestore-support-check  fail   never pass
 test-amd64-amd64-libvirt-vhd   11 migrate-support-check      fail   never pass
 test-amd64-amd64-libvirt-xsm   12 migrate-support-check      fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt       12 migrate-support-check      fail   never pass
 test-amd64-amd64-xl-pvh-amd    11 guest-start                fail   never pass
 test-amd64-i386-libvirt-xsm    12 migrate-support-check      fail   never pass
 test-amd64-i386-libvirt        12 migrate-support-check      fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail  never pass
 test-amd64-amd64-xl-pvh-intel  11 guest-start                fail   never pass

version targeted for testing:
 xen  b3d54cead6459567d9786ad415149868ee7f2f5b
baseline version:
 xen  b3d54cead6459567d9786ad415149868ee7f2f5b

Last test of basis   101242  2016-10-02 02:01:24 Z      0 days
Testing same since        0  1970-01-01 00:00:00 Z  17076 days    0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvir

[Xen-devel] [xen-unstable-coverity test] 101243: all pass - PUSHED

2016-10-02 Thread osstest service owner
flight 101243 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101243/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen  b3d54cead6459567d9786ad415149868ee7f2f5b
baseline version:
 xen  1e75ed8b64bc1a9b47e540e6f100f17ec6d97f1b

Last test of basis   101181  2016-09-28 09:19:11 Z    4 days
Testing same since   101243  2016-10-02 09:19:38 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Boris Ostrovsky 
  Daniel Kiper 
  Dario Faggioli 
  George Dunlap 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Jan Beulich  [for non-ARM parts]
  Jan Beulich  [non-arm parts]
  Juergen Gross 
  Julien Grall 
  Keir Fraser 
  Kevin Tian 
  Konrad Rzeszutek Wilk 
  Konrad Rzeszutek Wilk  [for Oracle, VirtualIron and Sun contributions]
  Kouya Shimura 
  Lars Kurth 
  Mihai Donțu 
  Paul Lai 
  Razvan Cojocaru 
  Shannon Zhao 
  Simon Horman 
  Stefan Berger 
  Stefano Stabellini 
  Tamas K Lengyel 
  Wei Liu 
  Zhi Wang 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-coverity
+ revision=b3d54cead6459567d9786ad415149868ee7f2f5b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-coverity b3d54cead6459567d9786ad415149868ee7f2f5b
+ branch=xen-unstable-coverity
+ revision=b3d54cead6459567d9786ad415149868ee7f2f5b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-coverity
+ qemuubranch=qemu-upstream-unstable-coverity
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-coverity
+ prevxenbranch=xen-4.7-testing
+ '[' xb3d54cead6459567d9786ad415149868ee7f2f5b = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/ovmf.git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osst...@xenbits.xen.org:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : 

[Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Sander Eikelenboom

Hi All,

Since the new merge window is emerging, I took the liberty of testing a
Linux 4.8-rc8 tree with the Xen for-linus-4.9 branch pulled on top.
Unfortunately this crashes dom0 early in boot under Xen.
On bare metal the same kernel boots fine.
Under Xen, a Linux 4.8-rc8 kernel without the Xen branch pulled on top
also boots fine.


Hypervisor is a recentish xen-unstable build.

The serial log is below.

--
Sander

(ASCII-art "Xen 4.8" boot banner)


(XEN) Xen version 4.8-unstable (r...@dyndns.org) (gcc-4.9.real (Debian 4.9.2-10) 4.9.2) debug=y  Sat Oct  1 21:59:54 CEST 2016

(XEN) Latest ChangeSet: Wed Sep 28 12:12:13 2016 +0100 git:da9207c-dirty
(XEN) Bootloader: GRUB 2.02~beta2-22+deb8u1
(XEN) Command line: dom0_mem=2048M,max:2048M loglvl=all loglvl_guest=all 
console_timestamps=datems vga=gfx-1280x1024x32 no-cpuidle cpufreq=xen 
com1=38400,8n1 console=vga,com1 ivrs_ioapic[6]=00:14.0 
iommu=on,verbose,debug,amd-iommu-debug conring_size=128k ucode=-1

(XEN) Video information:
(XEN)  VGA is graphics mode 1280x1024, 32 bpp
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)   - 00099400 (usable)
(XEN)  00099400 - 000a (reserved)
(XEN)  000e4000 - 0010 (reserved)
(XEN)  0010 - 9ff9 (usable)
(XEN)  9ff9 - 9ff9e000 (ACPI data)
(XEN)  9ff9e000 - 9ffe (ACPI NVS)
(XEN)  9ffe - a000 (reserved)
(XEN)  ffe0 - 0001 (reserved)
(XEN)  0001 - 00056000 (usable)
(XEN) ACPI: RSDP 000FB100, 0014 (r0 ACPIAM)
(XEN) ACPI: RSDT 9FF9, 0048 (r1 MSIOEMSLIC  20100913 MSFT   
97)
(XEN) ACPI: FACP 9FF90200, 0084 (r1 7640MS A7640100 20100913 MSFT   
97)
(XEN) ACPI: DSDT 9FF905E0, 9427 (r1  A7640 A7640100  100 INTL 
20051117)

(XEN) ACPI: FACS 9FF9E000, 0040
(XEN) ACPI: APIC 9FF90390, 0088 (r1 7640MS A7640100 20100913 MSFT   
97)
(XEN) ACPI: MCFG 9FF90420, 003C (r1 7640MS OEMMCFG  20100913 MSFT   
97)
(XEN) ACPI: SLIC 9FF90460, 0176 (r1 MSIOEMSLIC  20100913 MSFT   
97)
(XEN) ACPI: OEMB 9FF9E040, 0072 (r1 7640MS A7640100 20100913 MSFT   
97)
(XEN) ACPI: SRAT 9FF9A5E0, 0108 (r3 AMDFAM_F_102 AMD 
1)
(XEN) ACPI: HPET 9FF9A6F0, 0038 (r1 7640MS OEMHPET  20100913 MSFT   
97)
(XEN) ACPI: IVRS 9FF9A730, 0110 (r1  AMD RD890S   202031 AMD 
0)
(XEN) ACPI: SSDT 9FF9A840, 0DA4 (r1 A M I  POWERNOW1 AMD 
1)

(XEN) System RAM: 20479MB (20970660kB)
(XEN) SRAT: PXM 0 -> APIC 00 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 01 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 02 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 03 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 04 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 05 -> Node 0
(XEN) SRAT: Node 0 PXM 0 0-a
(XEN) SRAT: Node 0 PXM 0 10-a000
(XEN) SRAT: Node 0 PXM 0 1-56000
(XEN) NUMA: Allocated memnodemap from 55c75c000 - 55c762000
(XEN) NUMA: Using 8 for the hash shift.
(XEN) Domain heap initialised
(XEN) Allocated console ring of 128 KiB.
(XEN) vesafb: framebuffer at 0xd000, mapped to 0x82c000201000, 
using 6144k, total 16384k

(XEN) vesafb: mode is 1280x1024x32, linelength=5120, font 8x16
(XEN) vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0
(XEN) CPU Vendor: AMD, Family 16 (0x10), Model 10 (0xa), Stepping 0 (raw 
00100fa0)

(XEN) found SMP MP-table at 000ff780
(XEN) DMI present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808 (24 bits)
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1:804,1:0], pm1x_evt[1:800,1:0]
(XEN) ACPI: wakeup_vec[9ff9e00c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee0
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
(XEN) ACPI: IOAPIC (id[0x06] address[0xfec0] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 6, version 33, address 0xfec0, GSI 0-23
(XEN) ACPI: IOAPIC (id[0x07] address[0xfec2] gsi_base[24])
(XEN) IOAPIC[1]: apic_id 7, version 33, address 0xfec2, GSI 24-55
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  

Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Andrew Cooper
On 02/10/2016 12:46, Sander Eikelenboom wrote:
> Hi All,
>
> Since the new merge window is emerging I took the liberty of testing a
> linux 4.8-rc8 tree with
> the Xen for-linus-4.9 branch pulled on top.
> Unfortunately this crashes dom0 early in boot under Xen.
> On bare-metal the same kernel boots fine.
> Under Xen a linux 4.8-rc8 kernel without the Xen branch pulled on top,
> also boots fine.

So this looks to be a regression in the Xen for-linus-4.9 branch.

>
> Hypervisor is a recentish xen-unstable build.
>
> The serial log is below.
> 
> (XEN) [2016-10-02 11:31:53.106] Scrubbing Free RAM on 1 nodes using 6
> CPUs
> (XEN) [2016-10-02 11:31:53.217] .done.
> (XEN) [2016-10-02 11:31:56.242] Initial low memory virq threshold set
> at 0x4000 pages.
> (XEN) [2016-10-02 11:31:56.260] Std. Loglevel: All
> (XEN) [2016-10-02 11:31:56.277] Guest Loglevel: All
> (XEN) [2016-10-02 11:31:56.295] Xen is relinquishing VGA console.
> (XEN) [2016-10-02 11:31:56.396] *** Serial input -> DOM0 (type
> 'CTRL-a' three times to switch input to Xen)
> (XEN) [2016-10-02 11:31:56.396] Freed 308kB init memory
> (XEN) [2016-10-02 11:31:56.397] d0v0: unhandled page fault (ec=)
> (XEN) [2016-10-02 11:31:56.397] Pagetable walk from 0001:
> (XEN) [2016-10-02 11:31:56.397]  L4[0x000] = 
> 
> (XEN) [2016-10-02 11:31:56.397] domain_crash_sync called from entry.S:
> fault at 82d080244960 entry.o#create_bounce_frame+0x145/0x154
> (XEN) [2016-10-02 11:31:56.397] Domain 0 (vcpu#0) crashed on cpu#0:
> (XEN) [2016-10-02 11:31:56.397] [ Xen-4.8-unstable  x86_64 
> debug=y   Not tainted ]
> (XEN) [2016-10-02 11:31:56.397] CPU:0
> (XEN) [2016-10-02 11:31:56.397] RIP:e033:[]
> (XEN) [2016-10-02 11:31:56.397] RFLAGS: 0286   EM: 1  
> CONTEXT: pv guest (d0v0)
> (XEN) [2016-10-02 11:31:56.397] rax:    rbx:
> 82248bb0   rcx: 8101bc10
> (XEN) [2016-10-02 11:31:56.397] rdx: 0001   rsi:
> 81f0aa50   rdi: 82248bb0
> (XEN) [2016-10-02 11:31:56.397] rbp: 82203e50   rsp:
> 82203dc0   r8:  8101b550
> (XEN) [2016-10-02 11:31:56.397] r9:     r10:
>    r11: 80802001
> (XEN) [2016-10-02 11:31:56.397] r12:    r13:
>    r14: 82215580
> (XEN) [2016-10-02 11:31:56.397] r15:    cr0:
> 8005003b   cr4: 06e0
> (XEN) [2016-10-02 11:31:56.397] cr3: 00054a601000   cr2:
> 0001
> (XEN) [2016-10-02 11:31:56.397] ds:    es:    fs:    gs:
>    ss: e02b   cs: e033
> (XEN) [2016-10-02 11:31:56.397] Guest stack trace from
> rsp=82203dc0:
> (XEN) [2016-10-02 11:31:56.397]8101bc10 80802001
>  8101fdb9
> (XEN) [2016-10-02 11:31:56.397]0001e030 00010086
> 82203e00 e02b
> (XEN) [2016-10-02 11:31:56.397]82203e50 8101fcb5
> 80802001 
> (XEN) [2016-10-02 11:31:56.397] 8101b550
> 82248bb0 81f0aa50
> (XEN) [2016-10-02 11:31:56.397]0001 8101bc10
> 82203eb8 81b7e9f4
> (XEN) [2016-10-02 11:31:56.397] 82203ea8
> 80802001 0004
> (XEN) [2016-10-02 11:31:56.397]8101baa2 82203f40
> 82248bb0 
> (XEN) [2016-10-02 11:31:56.397] 8101bc10
>  82203ed0
> (XEN) [2016-10-02 11:31:56.397]81b7ed45 0013
> 82203ee0 810cc127
> (XEN) [2016-10-02 11:31:56.397]82203f28 810ccdab
> 81f0aa50 8101b550
> (XEN) [2016-10-02 11:31:56.397]82203f60 82203f5c
>  
> (XEN) [2016-10-02 11:31:56.397] 82203f40
> 810c83be 8260
> (XEN) [2016-10-02 11:31:56.397]82203ff8 8232946a
> 00100fa0 8080200100060800
> (XEN) [2016-10-02 11:31:56.397]1789c3f5 
>  
> (XEN) [2016-10-02 11:31:56.397] 
>  
> (XEN) [2016-10-02 11:31:56.397] 
>  
> (XEN) [2016-10-02 11:31:56.397] 
>  
> (XEN) [2016-10-02 11:31:56.397] 
>  
> (XEN) [2016-10-02 11:31:56.397]0f0060c0c748 c305
>  
> (XEN) [2016-10-02 11:31:56.397] 
>  
> (XEN) [2016-10-02 11:31:56.397] Hardware Dom0 crashed: rebooting
> machine in 5 seconds.

Something in Linux at 8101fdb9 followed a NULL pointer.  Can you
see what it was with the linux debug symbols?

~Andrew

Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Sander Eikelenboom
addr2line -e vmlinux-4.8.0-rc8-20161002-linus-xennext+ 8101fdb9
/usr/src/new/linux-linus/arch/x86/xen/irq.c:34

asmlinkage __visible unsigned long xen_save_fl(void)
{
struct vcpu_info *vcpu;
unsigned long flags;

vcpu = this_cpu_read(xen_vcpu);

/* flag has opposite sense of mask */
flags = !vcpu->evtchn_upcall_mask;   <== WHICH IS HERE

/* convert to IF type flag
   -0 -> 0x00000000
   -1 -> 0xffffffff
*/
return (-flags) & X86_EFLAGS_IF;
}

--
Sander

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Boris Ostrovsky
00 82203f40
>>> 810c83be 8260
>>> (XEN) [2016-10-02 11:31:56.397]82203ff8 8232946a
>>> 00100fa0 8080200100060800
>>> (XEN) [2016-10-02 11:31:56.397]1789c3f5 
>>>  
>>> (XEN) [2016-10-02 11:31:56.397] 
>>>  
>>> (XEN) [2016-10-02 11:31:56.397] 
>>>  
>>> (XEN) [2016-10-02 11:31:56.397] 
>>>  
>>> (XEN) [2016-10-02 11:31:56.397] 
>>>  
>>> (XEN) [2016-10-02 11:31:56.397]0f0060c0c748 c305
>>>  
>>> (XEN) [2016-10-02 11:31:56.397] 
>>>  
>>> (XEN) [2016-10-02 11:31:56.397] Hardware Dom0 crashed: rebooting
>>> machine in 5 seconds.
>>
>> Something in Linux at 8101fdb9 followed a NULL pointer.  Can you
>> see what it was with the linux debug symbols?
>>
>> ~Andrew
>
> Sure thing:
> addr2line -e vmlinux-4.8.0-rc8-20161002-linus-xennext+ 8101fdb9
> /usr/src/new/linux-linus/arch/x86/xen/irq.c:34
>
> asmlinkage __visible unsigned long xen_save_fl(void)
> {
> struct vcpu_info *vcpu;
> unsigned long flags;
>
> vcpu = this_cpu_read(xen_vcpu);
>
> /* flag has opposite sense of mask */
> flags = !vcpu->evtchn_upcall_mask;   <== WHICH IS HERE
>
> /* convert to IF type flag
>-0 -> 0x
>-1 -> 0x
> */
> return (-flags) & X86_EFLAGS_IF;
> }


Can you post your .config file?

Our tests seem to have run fine yesterday with those branches, until
an apparent electrical problem shut down the whole test farm this morning.

-boris




Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Boris Ostrovsky
On 10/02/2016 08:50 AM, Sander Eikelenboom wrote:
>
>>> Sure thing:
>>> addr2line -e vmlinux-4.8.0-rc8-20161002-linus-xennext+ 8101fdb9
>>> /usr/src/new/linux-linus/arch/x86/xen/irq.c:34
>>>
>>> asmlinkage __visible unsigned long xen_save_fl(void)
>>> {
>>> struct vcpu_info *vcpu;
>>> unsigned long flags;
>>>
>>> vcpu = this_cpu_read(xen_vcpu);
>>>
>>> /* flag has opposite sense of mask */
>>> flags = !vcpu->evtchn_upcall_mask;   <== WHICH IS HERE
>>>
>>> /* convert to IF type flag
>>>-0 -> 0x
>>>-1 -> 0x
>>> */
>>> return (-flags) & X86_EFLAGS_IF;
>>> }
>>
>>
>> Can you post your .config file?
>>
>> Our tests seem to have run fine yesterday with those branches, until
>> apparently electrical problem shut down the whole test farm this
>> morning.
>>
>> -boris
>
> Sure, it's attached.


I can reproduce this and will send a patch soon.

-boris




Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Sander Eikelenboom

On 2016-10-02 18:53, Boris Ostrovsky wrote:

On 10/02/2016 08:50 AM, Sander Eikelenboom wrote:



Sure thing:
addr2line -e vmlinux-4.8.0-rc8-20161002-linus-xennext+ 
8101fdb9

/usr/src/new/linux-linus/arch/x86/xen/irq.c:34

asmlinkage __visible unsigned long xen_save_fl(void)
{
struct vcpu_info *vcpu;
unsigned long flags;

vcpu = this_cpu_read(xen_vcpu);

/* flag has opposite sense of mask */
flags = !vcpu->evtchn_upcall_mask;   <== WHICH IS HERE

/* convert to IF type flag
   -0 -> 0x
   -1 -> 0x
*/
return (-flags) & X86_EFLAGS_IF;
}



Can you post your .config file?

Our tests seem to have run fine yesterday with those branches, until
apparently electrical problem shut down the whole test farm this
morning.

-boris


Sure, it's attached.



I can reproduce this and will send a patch soon.

-boris


Hi Boris,

Thx, just wondering though what the specific difference in .config was :)


--
Sander
Thx



Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Boris Ostrovsky
On 10/02/2016 12:57 PM, Sander Eikelenboom wrote:
> On 2016-10-02 18:53, Boris Ostrovsky wrote:
>> On 10/02/2016 08:50 AM, Sander Eikelenboom wrote:
>>>
>>>>> Sure thing:
>>>>> addr2line -e vmlinux-4.8.0-rc8-20161002-linus-xennext+
>>>>> 8101fdb9
>>>>> /usr/src/new/linux-linus/arch/x86/xen/irq.c:34
>>>>>
>>>>> asmlinkage __visible unsigned long xen_save_fl(void)
>>>>> {
>>>>> struct vcpu_info *vcpu;
>>>>> unsigned long flags;
>>>>>
>>>>> vcpu = this_cpu_read(xen_vcpu);
>>>>>
>>>>> /* flag has opposite sense of mask */
>>>>> flags = !vcpu->evtchn_upcall_mask;   <== WHICH IS HERE
>>>>>
>>>>> /* convert to IF type flag
>>>>>-0 -> 0x
>>>>>-1 -> 0x
>>>>> */
>>>>> return (-flags) & X86_EFLAGS_IF;
>>>>> }
>>>>
>>>>
>>>> Can you post your .config file?
>>>>
>>>> Our tests seem to have run fine yesterday with those branches, until
>>>> apparently electrical problem shut down the whole test farm this
>>>> morning.
>>>>
>>>> -boris
>>>
>>> Sure, it's attached.
>>
>>
>> I can reproduce this and will send a patch soon.
>>
>> -boris
>
> Hi Boris,
>
> Thx, just wondering though what the specific difference in .config was :)
>

That's what I am trying to understand.

The fix is probably to move

per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];

in enlighten.c:xen_start_kernel() up a little. I am trying to understand
what's changed to trip this error.

-boris




Re: [Xen-devel] Linux 4.8-rc8 with Xen for-linus-4.9 branch: dom0 crashes on boot

2016-10-02 Thread Boris Ostrovsky
On 10/02/2016 12:57 PM, Sander Eikelenboom wrote:
> On 2016-10-02 18:53, Boris Ostrovsky wrote:
>> On 10/02/2016 08:50 AM, Sander Eikelenboom wrote:
>>>
>>>>> Sure thing:
>>>>> addr2line -e vmlinux-4.8.0-rc8-20161002-linus-xennext+
>>>>> 8101fdb9
>>>>> /usr/src/new/linux-linus/arch/x86/xen/irq.c:34
>>>>>
>>>>> asmlinkage __visible unsigned long xen_save_fl(void)
>>>>> {
>>>>> struct vcpu_info *vcpu;
>>>>> unsigned long flags;
>>>>>
>>>>> vcpu = this_cpu_read(xen_vcpu);
>>>>>
>>>>> /* flag has opposite sense of mask */
>>>>> flags = !vcpu->evtchn_upcall_mask;   <== WHICH IS HERE
>>>>>
>>>>> /* convert to IF type flag
>>>>>-0 -> 0x
>>>>>-1 -> 0x
>>>>> */
>>>>> return (-flags) & X86_EFLAGS_IF;
>>>>> }
>>>>
>>>>
>>>> Can you post your .config file?
>>>>
>>>> Our tests seem to have run fine yesterday with those branches, until
>>>> apparently electrical problem shut down the whole test farm this
>>>> morning.
>>>>
>>>> -boris
>>>
>>> Sure, it's attached.
>>
>>
>> I can reproduce this and will send a patch soon.
>>
>> -boris
>
> Hi Boris,
>
> Thx, just wondering though what the specific difference in .config was :)


It's CONFIG_DEBUG_MUTEXES.

When it is defined, spin_lock_mutex() does local_irq_save(), which ends
up calling xen_save_fl(). And the latter requires per_cpu(xen_vcpu, 0)
to be set, which it is not.

Until now we hadn't called mutex_lock() at this point, but now that
we are registering with the hotplug infrastructure we do.


-boris







[Xen-devel] [PATCH] xen/x86: Initialize per_cpu(xen_vcpu, 0) a little earlier

2016-10-02 Thread Boris Ostrovsky
xen_cpuhp_setup() calls mutex_lock() which, when CONFIG_DEBUG_MUTEXES
is defined, ends up calling xen_save_fl(). That routine expects
per_cpu(xen_vcpu, 0) to be already initialized.

Signed-off-by: Boris Ostrovsky 
Reported-by: Sander Eikelenboom 
---
Sander, please see if this fixes the problem. Thanks.


 arch/x86/xen/enlighten.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 366b6ae..96c2dea 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1644,7 +1644,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
xen_initial_gdt = &per_cpu(gdt_page, 0);
 
xen_smp_init();
-   WARN_ON(xen_cpuhp_setup());
 
 #ifdef CONFIG_ACPI_NUMA
/*
@@ -1658,6 +1657,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
   possible map and a non-dummy shared_info. */
per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
 
+   WARN_ON(xen_cpuhp_setup());
+
local_irq_disable();
early_boot_irqs_disabled = true;
 
-- 
1.8.3.1




[Xen-devel] linux-next: manual merge of the xen-tip tree with the tip tree

2016-10-02 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the xen-tip tree got a conflict in:

  include/linux/cpuhotplug.h

between commit:

  dfc616d8b3df ("cpuidle/coupled: Convert to hotplug state machine")
  68e694dcef24 ("powerpc/powermac: Convert to hotplug state machine")
  da3ed6519b19 ("powerpc/mmu nohash: Convert to hotplug state machine")

from the tip tree and commits:

  4d737042d6c4 ("xen/x86: Convert to hotplug state machine")
  c8761e2016aa ("xen/events: Convert to hotplug state machine")

from the xen-tip tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc include/linux/cpuhotplug.h
index a8ffc405f915,5f603166831c..
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@@ -36,20 -20,11 +36,22 @@@ enum cpuhp_state 
CPUHP_PROFILE_PREPARE,
CPUHP_X2APIC_PREPARE,
CPUHP_SMPCFD_PREPARE,
 +  CPUHP_RELAY_PREPARE,
 +  CPUHP_SLAB_PREPARE,
 +  CPUHP_MD_RAID5_PREPARE,
CPUHP_RCUTREE_PREP,
 +  CPUHP_CPUIDLE_COUPLED_PREPARE,
 +  CPUHP_POWERPC_PMAC_PREPARE,
 +  CPUHP_POWERPC_MMU_CTX_PREPARE,
+   CPUHP_XEN_PREPARE,
+   CPUHP_XEN_EVTCHN_PREPARE,
CPUHP_NOTIFY_PREPARE,
 +  CPUHP_ARM_SHMOBILE_SCU_PREPARE,
 +  CPUHP_SH_SH3X_PREPARE,
 +  CPUHP_BLK_MQ_PREPARE,
CPUHP_TIMERS_DEAD,
 +  CPUHP_NOTF_ERR_INJ_PREPARE,
 +  CPUHP_MIPS_SOC_PREPARE,
CPUHP_BRINGUP_CPU,
CPUHP_AP_IDLE_DEAD,
CPUHP_AP_OFFLINE,



Re: [Xen-devel] New Outreachy Applicant

2016-10-02 Thread Ronald Rojas
On Tue, Sep 20, 2016 at 04:24:47PM +0100, George Dunlap wrote:
> Thanks for your interest in the Xen Project!  Sorry for the delay in
> responding -- somehow your mail either never made it to my personal
> inbox or I accidentally deleted it instead of filing it properly.  I
> saw your question on IRC and now found your mail here on xen-devel.
> 
> First, I want to emphasize that Outreachy internships should be
> considered a full-time job.  As part of the application process you
> will be asked to confirm that you will not be taking any classes, nor
> have any other significant commitments (such as another job) during
> the period of the internship.

I'll confirm now that I won't be taking any classes or working during the
majority of the program; however, the fall semester at my college will
end around the 20th of December.
> 
> Now on to the bite-sized task.  We've actually found that one of the
> difficult parts of getting going with our project is making sure that
> you understand how to get your whole system and environment set up.
> And another thing we want to see is to what degree you can balance
> figuring things out, finding the answers on the web, and asking for
> help when you need it.
> 
> So with that in mind, we've started experimenting with tasks which
> don't contribute very much to the project directly, but provide a
> really solid base of knowledge to do further contributions.
> 
> So here's my challenge for you.
> 
> ---
> OUTCOME
> 
> Write a simple go program that will list the current cpu pools,
> similar to the output of "xl cpupool-list".  No need to handle extra
> arguments or modify libxl.go (beyond what may be needed to compile it).

RESULTS:
I've managed to get Xen running on my local machine. I am running Linux natively
on Ubuntu with Xen 4.8-unstable and it is running reasonably well.
There are two VMs running on my computer: one is named tutorial-pv-guest, which I
created by following the Xen beginners guide; the other is called ubuntu1 and
was created through a custom configuration file that I wrote. I wasn't having
much luck with the Outreachy tutorial provided, so after some googling the link
below really helped me get through the process.

http://www.virtuatopia.com/index.php/Building_a_Xen_Guest_Domain_using_Xen-Tools

> 
> Please post a copy of your .go program, along with the results of
> output *when more than one VM is running*.

The output when I run "sudo go run libxl.go" is pasted below. The output was
made when the two VMs mentioned above were running. It prints out the
appropriate output for Pool Name, Scheduler, and Domain Count.
All the changes in libxl.go are contained within the main() method, and the
libxl.go file is pasted below as well.


CONCERNS/QUESTIONS:
I believe I was only able to print out the information for Name, Sched, and
Domain count because the other information, such as Active and CPUs, is not
stored within CpupoolInfo. Was there a way I could obtain that information? I
don't think the code provided had that functionality. Maybe the next task could
be to add that functionality?
It also seems that Domain Count is off. The command "xl cpupool-list" lists 3
domains (which are Dom0, tutorial-pv-guest, and ubuntu1), but my program returns
11 domains.
Would you like me to look into the issue? I'm using the information called from
Ctx.ListCpupool(), so I may have to take a closer look at that code.

Thank You!
Ronald Rojas
Name    Sched   Domain count
Pool-0  credit  11
/*
 * Copyright (C) 2016 George W. Dunlap, Citrix Systems UK Ltd
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; version 2 of the
 * License only.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
 * 02110-1301, USA.
 */
package main

/*
#cgo LDFLAGS: -lyajl -lxenlight
#include <stdlib.h>
#include <libxl.h>
*/
import "C"

import (
"unsafe"
"fmt"
"time"
)

type Context struct {
ctx *C.libxl_ctx
}

var Ctx Context

func (Ctx *Context) IsOpen() bool {
return Ctx.ctx != nil
}

func (Ctx *Context) Open() (err error) {
if Ctx.ctx != nil {
return
}

ret := C.libxl_ctx_alloc(unsafe.Pointer(&Ctx.ctx), C.LIBXL_VERSION, 0, nil)

if ret != 0 {
err = fmt.Errorf("Allocating libxl context: %d", ret)
}
return
}

func (Ctx *Context) Close() (err error) {
ret := C.libxl_ctx_free(unsafe.Pointer(Ctx.ctx))
Ctx