flight 116237 xen-4.8-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116237/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 115205
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop       fail REGR. vs. 115205

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 116221 pass in 116237
 test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat  fail pass in 116221

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop       fail REGR. vs. 115205

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-5 49 xtf/test-hvm64-lbr-tsx-vmentry fail in 116221 like 115205
 test-xtf-amd64-amd64-4      49 xtf/test-hvm64-lbr-tsx-vmentry fail like 115185
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 115185
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 115185
 test-xtf-amd64-amd64-3      49 xtf/test-hvm64-lbr-tsx-vmentry fail like 115205
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 115205
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 115205
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 115205
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 115205
 build-i386-prev               7 xen-build/dist-test          fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 build-amd64-prev              7 xen-build/dist-test          fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check        fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install        fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install        fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install         fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install         fail never pass

version targeted for testing:
 xen                  9ba6783e47db71379c5120039b878f605bdf31f3
baseline version:
 xen                  03af24c35ed38967ab8151fdb53da3f6f6cc0872

Last test of basis   115205  2017-10-25 06:31:45 Z   23 days
Testing same since   116221  2017-11-16 11:17:40 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.coop...@citrix.com>
  Eric Chanudet <chanud...@ainfosec.com>
  George Dunlap <george.dun...@citrix.com>
  Jan Beulich <jbeul...@suse.com>
  Min He <min...@intel.com>
  Yi Zhang <yi.z.zh...@intel.com>
  Yu Zhang <yu.c.zh...@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-i386-libvirt-qcow2                                pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9ba6783e47db71379c5120039b878f605bdf31f3
Author: Andrew Cooper <andrew.coop...@citrix.com>
Date:   Thu Nov 16 11:55:47 2017 +0100

    x86/shadow: correct SH_LINEAR mapping detection in sh_guess_wrmap()
    
    The fix for XSA-243 / CVE-2017-15592 (c/s bf2b4eadcf379) introduced a change
    in behaviour for sh_guess_wrmap(), where it had to cope with no shadow
    linear mapping being present.
    
    As the name suggests, guest_vtable is a mapping of the guest's pagetable,
    not Xen's pagetable, meaning that it isn't the pagetable we need to check
    for the shadow linear slot in.
    
    The practical upshot is that a shadow HVM vcpu which switches into 4-level
    paging mode, with an L4 pagetable that contains a mapping which aliases
    Xen's SH_LINEAR_PT_VIRT_START, will fool the safety check for whether a
    SHADOW_LINEAR mapping is present.  As the check passes (when it should have
    failed), Xen subsequently falls over the missing mapping with a pagefault
    such as:
    
        (XEN) Pagetable walk from ffff8140a0503880:
        (XEN)  L4[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L3[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L2[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L1[0x103] = 0000000000000000 ffffffffffffffff
    
    This is part of XSA-243.
    
    Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
    Reviewed-by: Tim Deegan <t...@xen.org>
    master commit: d20daf4294adbdb9316850566013edb98db7bfbc
    master date: 2017-11-16 10:38:14 +0100
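
The distinction above boils down to which L4 table the safety check consults. A
minimal standalone sketch of the corrected check follows; the types, slot index
and table pointer are simplified stand-ins, not the actual Xen definitions:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t l4_pgentry_t;            /* simplified PTE */
    #define _PAGE_PRESENT   0x1ULL
    #define SH_LINEAR_SLOT  258               /* illustrative slot index */

    /*
     * The check must ask: "does the pagetable Xen is currently running on
     * have the shadow-linear slot populated?".  Consulting the guest's own
     * L4 (guest_vtable) instead yields a false positive whenever the guest
     * happens to have anything mapped at that index, which is exactly the
     * aliasing case described above.
     */
    static bool sh_linear_mapping_present(const l4_pgentry_t *active_l4)
    {
        return (active_l4[SH_LINEAR_SLOT] & _PAGE_PRESENT) != 0;
    }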

commit bc244b70fec092ce9fc31e83da4e07572d4407ae
Author: Jan Beulich <jbeul...@suse.com>
Date:   Thu Nov 16 11:55:16 2017 +0100

    x86: don't wrongly trigger linear page table assertion
    
    _put_page_type() may do multiple iterations until its cmpxchg()
    succeeds. It invokes set_tlbflush_timestamp() on the first
    iteration, however. Code inside the function takes care of this, but
    - the assertion in _put_final_page_type() would trigger on the second
      iteration if time stamps in a debug build are permitted to be
      sufficiently wider than the default 6 bits (see WRAP_MASK in
      flushtlb.c),
    - it returning -EINTR (for a continuation to be scheduled) would leave
      the page in an inconsistent state (until the re-invocation completes).
    Make the set_tlbflush_timestamp() invocation conditional, bypassing it
    (for now) only in the case where we really can't tolerate the stamp
    being stored.
    
    This is part of XSA-240.
    
    Signed-off-by: Jan Beulich <jbeul...@suse.com>
    Reviewed-by: George Dunlap <george.dun...@citrix.com>
    master commit: 2c458dfcb59f3d9d8a35fc5ffbf780b6ed7a26a6
    master date: 2017-11-16 10:37:29 +0100
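
A compact, self-contained illustration of the pattern the fix moves towards:
inside a cmpxchg() retry loop, perform the timestamping side effect only when
the transition actually requires it, rather than unconditionally on the first
pass. The types and the need_stamp condition are simplified stand-ins, not the
real _put_page_type() logic:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct page_sketch {
        _Atomic uint32_t type_info;           /* type + reference count */
        uint32_t tlbflush_timestamp;
    };

    static uint32_t tlbflush_clock;           /* stand-in for the global clock */

    static void set_tlbflush_timestamp(struct page_sketch *pg)
    {
        pg->tlbflush_timestamp = tlbflush_clock;
    }

    static int put_type(struct page_sketch *pg, bool need_stamp)
    {
        uint32_t x = atomic_load(&pg->type_info);

        for ( ; ; )
        {
            uint32_t nx = x - 1;              /* drop one type reference */

            /* Stamp only when this transition really needs it, so a retry
             * or a rescheduled continuation never sees a premature stamp. */
            if ( need_stamp && nx == 0 )
                set_tlbflush_timestamp(pg);

            if ( atomic_compare_exchange_weak(&pg->type_info, &x, nx) )
                return 0;
            /* x was reloaded by the failed cmpxchg; go around again. */
        }
    }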

commit 13eb73f0f045a685712786f14fd692eab29751a8
Author: Yu Zhang <yu.c.zh...@linux.intel.com>
Date:   Thu Nov 16 11:54:44 2017 +0100

    x86/mm: fix race condition in modify_xen_mappings()
    
    In modify_xen_mappings(), an L1/L2 page table shall be freed if all
    entries of this page table are empty, and the corresponding L2/L3 PTE
    needs to be cleared in that scenario.
    
    However, concurrent paging structure modifications on different CPUs
    may cause the L2/L3 PTEs to already be cleared, or to be set to
    reference a superpage.
    
    Therefore the logic to enumerate the L1/L2 page table and to reset the
    corresponding L2/L3 PTE needs to be protected with a spinlock, and the
    _PAGE_PRESENT and _PAGE_PSE flags need to be checked after the lock is
    obtained.
    
    Suggested-by: Jan Beulich <jbeul...@suse.com>
    Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
    Reviewed-by: Jan Beulich <jbeul...@suse.com>
    master commit: b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
    master date: 2017-11-14 17:11:26 +0100
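
A standalone sketch of the locking pattern the commit describes: take the lock
first, then re-read the higher-level entry and re-check _PAGE_PRESENT and
_PAGE_PSE before touching the L1 table. Everything here (the pthread mutex and
the helper stubs) is a simplified illustration, not the hypervisor's actual
code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define _PAGE_PRESENT 0x001ULL
    #define _PAGE_PSE     0x080ULL

    typedef uint64_t pte_t;                   /* simplified PTE */
    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-ins for the real teardown helpers. */
    static bool l1_table_is_empty(const pte_t *l1) { (void)l1; return true; }
    static void free_l1_table(pte_t *l1)           { (void)l1; }
    static pte_t *l1_table_of(pte_t e)
    {
        return (pte_t *)(uintptr_t)(e & ~0xfffULL);
    }

    static void maybe_free_l1(pte_t *pl2e)
    {
        pthread_mutex_lock(&map_lock);

        /* Re-read and re-check under the lock: another CPU may have cleared
         * the entry or turned it into a superpage in the meantime. */
        pte_t ol2e = *pl2e;
        if ( (ol2e & _PAGE_PRESENT) && !(ol2e & _PAGE_PSE) )
        {
            pte_t *l1 = l1_table_of(ol2e);
            if ( l1_table_is_empty(l1) )
            {
                *pl2e = 0;                    /* clear the L2 entry */
                free_l1_table(l1);
            }
        }

        pthread_mutex_unlock(&map_lock);
    }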

commit 6183d537ce4aca01eed3fd015a2fcf458ce2748f
Author: Min He <min...@intel.com>
Date:   Thu Nov 16 11:54:14 2017 +0100

    x86/mm: fix race conditions in map_pages_to_xen()
    
    In map_pages_to_xen(), an L2 page table entry may be reset to point to
    a superpage, and its corresponding L1 page table needs to be freed in
    that scenario, when the L1 page table entries map consecutive page
    frames with the same mapping flags.
    
    However, the variable `pl1e` is not protected by the lock before the L1
    page table is enumerated. A race condition may occur if this code path
    is invoked simultaneously on different CPUs.
    
    For example, `pl1e` on CPU0 may hold an obsolete value, pointing to a
    page which has just been freed on CPU1. Moreover, before this page is
    reused, it will still hold the old PTEs, referencing consecutive page
    frames. Consequently, the `free_xen_pagetable(l2e_to_l1e(ol2e))` call
    will be triggered on CPU0, resulting in the unexpected freeing of a
    normal page.
    
    This patch fixes the above problem by protecting the `pl1e` with the lock.
    
    Also, there are other potential race conditions. For instance, the
    L2/L3 entry may be modified concurrently on different CPUs by routines
    such as map_pages_to_xen(), modify_xen_mappings(), etc. To fix this,
    the patch checks the _PAGE_PRESENT and _PAGE_PSE flags of the
    corresponding L2/L3 entry after the spinlock is obtained.
    
    Signed-off-by: Min He <min...@intel.com>
    Signed-off-by: Yi Zhang <yi.z.zh...@intel.com>
    Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
    Reviewed-by: Jan Beulich <jbeul...@suse.com>
    master commit: a5114662297ad03efc36b52ad365ffa05fb357b7
    master date: 2017-11-14 17:10:56 +0100
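
The `pl1e` race is the same shape, with the extra wrinkle that a pointer
derived from the L2 entry before the lock may already refer to a freed table.
A simplified, self-contained sketch (stand-in names again, not Xen's code) of
deriving the pointer only after the lock is held:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define _PAGE_PRESENT 0x001ULL
    #define _PAGE_PSE     0x080ULL

    typedef uint64_t pte_t;
    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

    static pte_t *l1_table_of(pte_t e) { return (pte_t *)(uintptr_t)(e & ~0xfffULL); }
    static bool  l1_is_contiguous(const pte_t *l1) { (void)l1; return false; }
    static pte_t make_superpage(pte_t e)           { return e | _PAGE_PSE; }
    static void  free_l1_table(pte_t *l1)          { (void)l1; }

    static void try_consolidate_to_superpage(pte_t *pl2e)
    {
        pthread_mutex_lock(&map_lock);

        pte_t ol2e = *pl2e;
        if ( (ol2e & _PAGE_PRESENT) && !(ol2e & _PAGE_PSE) )
        {
            /* Derive pl1e only now, under the lock; a value computed before
             * taking the lock could point at a table another CPU has freed. */
            pte_t *pl1e = l1_table_of(ol2e);

            if ( l1_is_contiguous(pl1e) )
            {
                *pl2e = make_superpage(ol2e);
                free_l1_table(pl1e);
            }
        }

        pthread_mutex_unlock(&map_lock);
    }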

commit 1ac3ab78cf2f6338f4f1d58cdd89464f73d13b48
Author: Eric Chanudet <chanud...@ainfosec.com>
Date:   Thu Nov 16 11:53:46 2017 +0100

    x86/hvm: do not register hpet mmio during s3 cycle
    
    Do it once at domain creation (hpet_init).
    
    Sleep -> Resume cycles will end up crashing an HVM guest with an HPET,
    as the sequence during resume takes the path:
    -> hvm_s3_suspend
      -> hpet_reset
        -> hpet_deinit
        -> hpet_init
          -> register_mmio_handler
            -> hvm_next_io_handler
    
    register_mmio_handler() consumes a new I/O handler slot each time, until
    it eventually reaches NR_IO_HANDLERS, at which point hvm_next_io_handler()
    calls domain_crash().
    
    Signed-off-by: Eric Chanudet <chanud...@ainfosec.com>
    Reviewed-by: Jan Beulich <jbeul...@suse.com>
    master commit: 015d6738ddff4074668c1d4887bbffd507ed1a7f
    master date: 2017-11-14 17:09:50 +0100
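
A shape-only sketch of the resulting flow: the MMIO handler is registered
exactly once, when the domain's HPET is first set up, while the S3 reset path
only reinitialises internal state. The structure and names below are
illustrative, not the actual patch:

    #include <stdbool.h>

    struct hpet_sketch {
        bool mmio_registered;
        /* counters, comparators, ... */
    };

    static void register_mmio_handler(struct hpet_sketch *h)
    {
        /* In the hypervisor this consumes one of a fixed number of
         * I/O handler slots (NR_IO_HANDLERS); it must not be repeated. */
        h->mmio_registered = true;
    }

    static void reset_state(struct hpet_sketch *h) { (void)h; }

    /* Called once at domain creation. */
    static void hpet_init(struct hpet_sketch *h)
    {
        reset_state(h);
        if ( !h->mmio_registered )
            register_mmio_handler(h);
    }

    /* S3 suspend/resume path: resets state but never registers again,
     * so repeated sleep/resume cycles cannot exhaust the handler pool. */
    static void hpet_reset(struct hpet_sketch *h)
    {
        reset_state(h);
    }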

commit e1fa1c6ee152105c9adf5fb5ff4507028a87d2a3
Author: George Dunlap <george.dun...@citrix.com>
Date:   Thu Nov 16 11:52:45 2017 +0100

    x86/mm: Make PV linear pagetables optional
    
    Allowing pagetables to point to other pagetables of the same level
    (often called 'linear pagetables') has been included in Xen since its
    inception; but recently it has been the source of a number of subtle
    reference-counting bugs.
    
    It is not used by Linux or MiniOS; but it is used by NetBSD and Novell
    Netware.  There are significant numbers of people who are never going
    to use the feature, along with significant numbers who need the
    feature.
    
    Add a Kconfig option for the feature (default to 'y').  Also add a
    command-line option to control whether PV linear pagetables are
    allowed (default to 'true').
    
    NB that we leave linear_pt_count in the page struct.  It's in a union,
    so its presence doesn't increase the size of the data struct.
    Changing the layout of the other elements based on configuration
    options is asking for trouble however; so we'll just leave it there
    and ASSERT that it's zero.
    
    Reported-by: Jann Horn <ja...@google.com>
    Signed-off-by: George Dunlap <george.dun...@citrix.com>
    Reviewed-by: Jan Beulich <jbeul...@suse.com>
    master commit: 3285e75dea89afb0ef5b3ee39bd15194bd7cc110
    master date: 2017-10-27 14:36:45 +0100
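
In C terms, the two knobs look roughly like the following: a build-time default
from Kconfig and a boolean command-line override consulted in the pagetable
validation path. This is a simplified sketch; in Xen the run-time option would
be wired up through the boolean parameter machinery, and the exact option and
symbol names here are assumptions:

    #include <stdbool.h>

    /* Build-time default, mirroring the Kconfig choice (default 'y'). */
    #ifndef CONFIG_PV_LINEAR_PT
    #define CONFIG_PV_LINEAR_PT 1
    #endif

    /* Run-time override, mirroring the command-line option (default 'true').
     * In Xen this would be registered as a boolean parameter, e.g. something
     * along the lines of: boolean_param("pv-linear-pt", opt_pv_linear_pt); */
    static bool opt_pv_linear_pt = CONFIG_PV_LINEAR_PT;

    /* Consulted where a pagetable entry pointing at a pagetable of the same
     * level is validated: if the feature is disabled, validation fails. */
    static bool pv_linear_pt_allowed(void)
    {
        return opt_pv_linear_pt;
    }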

commit 96e76d8b66a786e2cc6847ff918584a4bab9c52a
Author: Jan Beulich <jbeul...@suse.com>
Date:   Thu Nov 16 11:52:17 2017 +0100

    x86: fix asm() constraint for GS selector update
    
    Exception fixup code may alter the operand, which ought to be reflected
    in the constraint.
    
    Signed-off-by: Jan Beulich <jbeul...@suse.com>
    Reviewed-by: Andrew Cooper <andrew.coop...@citrix.com>
    master commit: 65ab53de34851243fb7793ebf12fd92a65f84ddd
    master date: 2017-10-27 13:49:10 +0100
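
A small self-contained example of the constraint issue in GCC extended asm:
when exception-fixup code may overwrite the register backing an operand, the
operand must be declared read-write ("+r") rather than input-only ("r"), or the
compiler may assume the register still holds the original value. The asm body
is left empty here; the real code loads the selector and relies on an
exception-table fixup:

    #include <stdio.h>

    static unsigned int load_selector(unsigned int sel)
    {
        /*
         * Real code would load 'sel' into %gs and attach an exception-table
         * fixup that may clear the register on a fault.  Because the fixup
         * can alter the operand, "+r" (read-write) is required; plain "r"
         * would let the compiler reuse the register as if unchanged.
         */
        asm volatile ( "" : "+r" (sel) );
        return sel;
    }

    int main(void)
    {
        printf("selector after load attempt: %#x\n", load_selector(0x2b));
        return 0;
    }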

commit 651d839afa0f2f8dea1379a7207c22232c75ca63
Author: Jan Beulich <jbeul...@suse.com>
Date:   Thu Nov 16 11:51:46 2017 +0100

    x86: don't latch wrong (stale) GS base addresses
    
    load_segments() writes selector registers before doing any of the base
    address updates. Any of these selector loads can cause a page fault if
    it references the LDT and the LDT page accessed was only recently
    installed. In such a case the call tree map_ldt_shadow_page() ->
    guest_get_eff_kern_l1e() -> toggle_guest_mode() would wrongly latch the
    outgoing vCPU's GS.base into the incoming vCPU's recorded state.
    
    Split page table toggling from GS handling - neither
    guest_get_eff_kern_l1e() nor guest_io_okay() needs more than the page
    tables being the kernel ones for the memory access they want to do.
    
    Signed-off-by: Jan Beulich <jbeul...@suse.com>
    Reviewed-by: Andrew Cooper <andrew.coop...@citrix.com>
    master commit: a711f6f24a7157ae70d1cc32e61b98f23dc0c584
    master date: 2017-10-27 13:49:10 +0100
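
A conceptual sketch of the split (all names and fields are stand-ins, not the
actual Xen helpers): the fault-path helpers only need the kernel page tables,
so they get a routine that flips page tables without touching any segment-base
bookkeeping:

    #include <stdbool.h>

    struct vcpu_sketch {
        bool          kernel_mode;
        unsigned long user_cr3, kernel_cr3;
        unsigned long recorded_gs_base_kernel;
    };

    static void write_cr3_sketch(unsigned long cr3) { (void)cr3; }
    static unsigned long read_hw_gs_base(void)      { return 0; } /* RDMSR stub */

    /* Page-table toggle only: safe from guest_get_eff_kern_l1e() and
     * guest_io_okay(), since no segment state is sampled or latched. */
    static void toggle_guest_pt(struct vcpu_sketch *v)
    {
        v->kernel_mode = !v->kernel_mode;
        write_cr3_sketch(v->kernel_mode ? v->kernel_cr3 : v->user_cr3);
    }

    /* Full mode toggle: also samples the current hardware GS base.  Doing
     * this from a fault taken inside load_segments() is what could latch
     * the outgoing vCPU's GS.base into the incoming vCPU's state. */
    static void toggle_guest_mode(struct vcpu_sketch *v)
    {
        v->recorded_gs_base_kernel = read_hw_gs_base();
        toggle_guest_pt(v);
    }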

commit 14826e327b3dc4e7dfbda7671b6a052e29da374c
Author: Jan Beulich <jbeul...@suse.com>
Date:   Thu Nov 16 11:51:11 2017 +0100

    x86: also show FS/GS base addresses when dumping registers
    
    Their state may be important to figure out the reason for a crash.  To
    avoid further growing duplicate code, break out a helper function.
    
    I realize that (ab)using the control register array here may not be
    considered the nicest solution, but it seems easier (and less overall
    overhead) to do so compared to the alternative of introducing another
    helper structure.
    
    Signed-off-by: Jan Beulich <jbeul...@suse.com>
    Reviewed-by: Andrew Cooper <andrew.coop...@citrix.com>
    master commit: be7f60b5a39741eab0a8fea0324f7be0cb724cfb
    master date: 2017-10-24 18:13:13 +0200
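
A sketch of the broken-out helper, with userspace stubs in place of the MSR
reads (the real code would read the FS base, GS base and shadow GS base from
the hypervisor):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stubs standing in for the MSR reads done in the hypervisor. */
    static uint64_t read_fs_base(void)   { return 0; }
    static uint64_t read_gs_base(void)   { return 0; }
    static uint64_t read_gs_shadow(void) { return 0; }

    /* One helper so the crash path and the register-dump path print the
     * same information without duplicating the formatting code. */
    static void dump_segment_bases(void)
    {
        printf("fs base %016" PRIx64 "  gs base %016" PRIx64
               "  gs base shadow %016" PRIx64 "\n",
               read_fs_base(), read_gs_base(), read_gs_shadow());
    }

    int main(void)
    {
        dump_segment_bases();
        return 0;
    }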

commit 814e065d661fcb208db7a3d42bcdb4129c53fd46
Author: Jan Beulich <jbeul...@suse.com>
Date:   Thu Nov 16 11:50:31 2017 +0100

    x86: fix GS-base-dirty determination
    
    load_segments() writes the two MSRs in their "canonical" positions
    (GS_BASE for the user base, SHADOW_GS_BASE for the kernel one) and uses
    SWAPGS to switch them around if the incoming vCPU is in kernel mode. In
    order to not leave a stale kernel address in GS_BASE when the incoming
    guest is in user mode, the check on the outgoing vCPU needs to be
    dependent upon the mode it is currently in, rather than blindly looking
    at the user base.
    
    Signed-off-by: Jan Beulich <jbeul...@suse.com>
    Reviewed-by: Andrew Cooper <andrew.coop...@citrix.com>
    master commit: 91f85280b9b80852352fcad73d94ed29fafb88da
    master date: 2017-10-24 18:12:31 +0200
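
The key observation can be expressed as a small predicate: which recorded base
is currently sitting in GS_BASE depends on whether the outgoing vCPU last ran
in kernel mode (SWAPGS swaps the two), so the dirtiness test must compare
against that one. Purely illustrative types and names:

    #include <stdbool.h>
    #include <stdint.h>

    struct vcpu_sketch {
        bool     kernel_mode;      /* mode the outgoing vCPU is in */
        uint64_t gs_base_user;     /* recorded user GS base        */
        uint64_t gs_base_kernel;   /* recorded kernel GS base      */
    };

    /*
     * While the vCPU runs in kernel mode, SWAPGS has put the kernel base
     * into GS_BASE; in user mode it is the user base.  Checking the user
     * base unconditionally (the old behaviour) can miss a stale kernel
     * address left behind in GS_BASE.
     */
    static bool gs_base_needs_reload(const struct vcpu_sketch *prev)
    {
        uint64_t in_gs_base = prev->kernel_mode ? prev->gs_base_kernel
                                                : prev->gs_base_user;
        return in_gs_base != 0;
    }
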
(qemu changes not included)
