Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-09 Thread Ian Campbell
On Tue, 2015-09-08 at 17:13 +0100, Ian Jackson wrote:
> Jan Beulich writes ("Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL"):
> > That's what I was about to say in a reply to your earlier mail. To me
> > this still means the guest booted up successfully (or else it wouldn't
> > have accepted the TCP connection). I agree its "ssh server" service
> > is unavailable (or unreliable), which is bad, but which then again we
> > have no real handle to deal with without someone running into it
> > outside of osstest.
> 
> Right.
> 
> > Whether to use this as justification for a force push I'm not sure
> > anyway.
> 
> I have looked at the histories of various `debianhvm' tests in many
> other osstest `branches' and there do seem to be occasional similar
> failures elsewhere.  So Ian C wasn't right to remember that this test

was ???

> step might be unreliable.
> 
> I think based on this, and Wei's analysis, a force push is justified.

From the original report:
> version targeted for testing:
>  xen  a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
> baseline version:
>  xen  801ab48e5556cb54f67e3cb57f077f47e8663ced

Therefore I have force pushed a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d.

Ian.

$ OSSTEST_CONFIG=production-config ./ap-push xen-unstable a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
+ branch=xen-unstable
+ revision=a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
+ branch=xen-unstable
+ revision=a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' x = x ']'
+ qemuubranch=qemu-upstream-unstable
+ select_prevxenbranch
+ local b
+ local p
++ ./mg-list-all-branches
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ '[' xxen-4.0-testing = xxen-unstable ']'
+ p=xen-4.0-testing
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ '[' xxen-4.1-testing = xxen-unstable ']'
+ p=xen-4.1-testing
+ for b in '$(./mg-list-all-branches)'
+ case "$b" in
+ '[' xxen-4.2-testing = xxen-unstable ']'
+ p=xen-4.2-testing
+ for b in '$(.

Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-09 Thread Ian Jackson
Ian Campbell writes ("Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL"):
> On Tue, 2015-09-08 at 17:13 +0100, Ian Jackson wrote:
> > I have looked at the histories of various `debianhvm' tests in many
> > other osstest `branches' and there do seem to be occasional similar
> > failures elsewhere.  So Ian C wasn't right to remember that this test
> 
> was ???

Err, yes, "was", sorry!

> From the original report:
> > version targeted for testing:
> >  xen  a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
> > baseline version:
> >  xen  801ab48e5556cb54f67e3cb57f077f47e8663ced
> 
> Therefore I have force pushed a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d.

Thanks,
Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread osstest service owner
flight 61521 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/61521/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64 19 guest-start/debianhvm.repeat fail REGR. vs. 61059

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 15 guest-start.2    fail blocked in 61059
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 15 guest-localmigrate.2 fail blocked in 61059
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail like 61059
 test-amd64-amd64-rumpuserxen-amd64 15 rumpuserxen-demo-xenstorels/xenstorels.repeat fail like 61059
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 61059
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 61059

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-armhf-armhf-xl-raw   9 debian-di-install    fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install    fail   never pass
 test-armhf-armhf-xl-qcow2 9 debian-di-install    fail   never pass
 test-armhf-armhf-xl-vhd   9 debian-di-install    fail   never pass
 test-armhf-armhf-libvirt-vhd  9 debian-di-install    fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore    fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore    fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install    fail never pass
 test-amd64-amd64-libvirt-pair 21 guest-migrate/src_host/dst_host fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-pair 21 guest-migrate/src_host/dst_host fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  11 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 11 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check    fail  never pass
 test-amd64-i386-libvirt-qcow2 11 migrate-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-i386-libvirt-vhd  11 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-libvirt-qcow2 11 migrate-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check    fail   never pass

version targeted for testing:
 xen  a7b39c8bd6cba3fe1c8012987b9e28bdbac7e92d
baseline version:
 xen  801ab48e5556cb54f67e3cb57f077f47e8663ced

 Last test of basis    61059  2015-08-30 15:26:08 Z    8 days
 Failing since         61248  2015-09-01 10:13:44 Z    7 days    3 attempts
 Testing same since    61521  2015-09-07 11:46:37 Z    1 days    1 attempts


People who touched revisions under test:
  Daniel De Graaf 
  David Scott 
  David Scott 
  David Scott 
  Doug Goldstein 

Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Ian Campbell
On Tue, 2015-09-08 at 15:52 +0100, Wei Liu wrote:
> On Tue, Sep 08, 2015 at 02:30:30PM +, osstest service owner wrote:
> > flight 61521 xen-unstable real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/61521/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-amd64-xl-qemut-debianhvm-amd64 19 guest-start/debianhvm.repeat fail REGR. vs. 61059
> > 
> 
> I suspect this failure is due to an infrastructure problem.

Actually I think it is more likely that the qemut-debianhvm test is just
not 100% reliable.

Unless you have observed something in the logs which points specifically
towards an infrastructure failure? In which case you should have mentioned
it here.

> 
> Same test job passed in
> http://logs.test-lab.xenproject.org/osstest/logs/61306/
> 
> The differences of xen-unstable between 61306 and this flight are:
> 
> a7b39c8bd libxc: add assertion to avoid setting same bit more than once
> e8e9f830d libxc: don't populate same pfn more than once in populate_pfns
> e00f8a1a7 libxc: fix indentation
> 76d75222a libxc: migration v2 prefix Memory -> Frames
> 477619384 libxc: clearer migration v2 debug message
> 
> Tested in migration jobs and passed.
> 
> b2700877a public/io/netif.h: move and amend multicast control documentation
> 09e2a619a MAINTAINERS: tools/ocaml: update David Scott's email address
> 06925643c build: update top-level make help
> 
> Doc update, irrelevant to failure.
> 
> f42f2cbe5 xen/dt: Handle correctly node without interrupt-map in dt_for_each_irq_map
> 
> ARM, irrelevant to failure.
> 
> d9be0990f tools/xen-access: use PRI_xen_pfn
> 
> Not tested in OSSTest
> 
> 0a7167d9b xen/arm64: do not (incorrectly) limit size of xenheap
> 
> ARM, irrelevant to failure.
> 
> 8747dba3b Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging
> 076cd5065 tmem: Spelling and full stop surgery.
> fee800b94 tmem: Remove extra spaces at end and some hard tabbing.
> 880699a8b tmem: Use 'struct xen_tmem_oid' in tmem_handle and move it to sysctl header.
> b657a8ad1 tmem: Use 'struct xen_tmem_oid' for every user.
> 987e64e05 tmem: Make the uint64_t oid[3] a proper structure: xen_tmem_oid
> ad1b7a139 tmem: Remove the old tmem control XSM checks as it is part of sysctl hypercall.
> d0edc15a6 tmem: Move TMEM_CONTROL subop of tmem hypercall to sysctl.
> 54a51b176 tmem: Remove xc_tmem_control mystical arg3
> 1682d2fdf tmem: Remove in xc_tmem_control_oid duplicate set_xen_guest_handle call
> 7646da32f tmem: Add ASSERT in obj_rb_insert for pool->rwlock lock.
> 7ec4b2f89 tmem: Don't crash/hang/leak hypervisor when using shared pools within an guest.
> 
> Not tested in OSSTest.
> 
> 16181cbb1 tools: Honor Config.mk debug value, rather than setting our own
> 
> Passed, and irrelevant to failure.
> 
> Wei.


Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Jan Beulich
>>> On 08.09.15 at 16:52,  wrote:
> On Tue, Sep 08, 2015 at 02:30:30PM +, osstest service owner wrote:
>> flight 61521 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/61521/ 
>> 
>> Regressions :-(
>> 
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-amd64-amd64-xl-qemut-debianhvm-amd64 19 guest-start/debianhvm.repeat fail REGR. vs. 61059
>> 
> 
> I suspect this failure is due to an infrastructure problem.

This or some random (but recurring) failure. Not so long ago I
already mentioned that for the purpose of just determining whether
a guest is up

ssh: connect to host 172.16.145.6 port 22: Connection refused

is sufficient, as that means there was a response (albeit a
negative one). But whether the refusal is due to something
getting corrupted in the networking stack or due to an
infrastructure problem is rather hard to tell.
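
Jan's distinction — a connection refusal is still a response from the guest's
TCP stack, whereas silence is not — can be sketched as a small probe. This is
a hypothetical helper for illustration, not part of osstest:

```python
import socket

def classify_probe(host, port, timeout=5):
    """Classify a TCP connect attempt, roughly matching Jan's two cases:
    'open'      -> handshake completed, something is listening;
    'refused'   -> the host's TCP stack answered with RST, so the guest's
                   networking is up even though the service is not;
    'no-answer' -> nothing responded within the timeout at all."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"
    except OSError:
        return "no-answer"
```

By this reading, the `Connection refused` in the log would classify as
"refused", i.e. evidence the guest booted, which is exactly the point under
debate.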

Jan



Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Ian Jackson
Ian Jackson writes ("Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL"):
> A more likely explanation is that the host sometimes fails to complete
> its bootup within a reasonable time.

Actually, looking at a bit more of the osstest transcript:

2015-09-08 09:14:32 Z guest debianhvm.guest.osstest 5a:36:0e:51:00:2a 22 link/ip/tcp: ok. (18s)
2015-09-08 09:14:33 Z executing ssh ... root@172.16.145.6 echo guest debianhvm.guest.osstest: ok
ssh: connect to host 172.16.145.6 port 22: Connection refused

The first message means that
  nc -n -v -z -w $interval $ho->{Ip} $ho->{TcpCheckPort}
succeeded.  The IP address is the one from DHCP and the port is 22
as you see printed.  $interval is 5.

So what this means is that the osstest controller successfully made a
tcp connection to the guest's port 22.  It then went on to the next
check which is actually to try ssh'ing to the host.  But ssh got
ECONNREFUSED.

This means that the guest was accepting connections on 172.16.145.6:22
and then stopped doing so (perhaps only briefly).
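
The first of those two steps can be approximated as follows — a sketch only,
not the actual osstest code (which drives nc and ssh from Perl); the
hypothetical `tcp_port_open` mirrors the `nc -n -v -z -w` probe:

```python
import socket

def tcp_port_open(host, port, timeout=5):
    # Roughly what `nc -n -v -z -w <timeout> <host> <port>` tests: can we
    # complete a TCP handshake to the port within the timeout?  Passing
    # this says nothing about whether the service behind the port will
    # still be accepting connections for the ssh check that follows.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A guest can pass this probe and then refuse the very next connection, as in
the transcript above, if for example sshd is still starting or restarting;
hence the separate ssh step.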

Ian.


Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Jan Beulich
>>> On 08.09.15 at 17:45, <ian.jack...@eu.citrix.com> wrote:
> Ian Jackson writes ("Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL"):
>> A more likely explanation is that the host sometimes fails to complete
>> its bootup within a reasonable time.
> 
> Actually, looking at a bit more of the osstest transcript:
> 
> 2015-09-08 09:14:32 Z guest debianhvm.guest.osstest 5a:36:0e:51:00:2a 22 link/ip/tcp: ok. (18s)
> 2015-09-08 09:14:33 Z executing ssh ... root@172.16.145.6 echo guest debianhvm.guest.osstest: ok
> ssh: connect to host 172.16.145.6 port 22: Connection refused
> 
> The first message means that
>   nc -n -v -z -w $interval $ho->{Ip} $ho->{TcpCheckPort}
> succeeded.  The IP address is the one from DHCP and the port is 22
> as you see printed.  $interval is 5.
> 
> So what this means is that the osstest controller successfully made a
> tcp connection to the guest's port 22.  It then went on to the next
> check which is actually to try ssh'ing to the host.  But ssh got
> ECONNREFUSED.
> 
> This means that the guest was accepting connections on 172.16.145.6:22
> and then stopped doing so (perhaps only briefly).

That's what I was about to say in a reply to your earlier mail. To me
this still means the guest booted up successfully (or else it wouldn't
have accepted the TCP connection). I agree its "ssh server" service
is unavailable (or unreliable), which is bad, but which then again we
have no real handle to deal with without someone running into it
outside of osstest.

Whether to use this as justification for a force push I'm not sure
anyway.

Jan



Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Ian Jackson
Jan Beulich writes ("Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL"):
> That's what I was about to say in a reply to your earlier mail. To me
> this still means the guest booted up successfully (or else it wouldn't
> have accepted the TCP connection). I agree its "ssh server" service
> is unavailable (or unreliable), which is bad, but which then again we
> have no real handle to deal with without someone running into it
> outside of osstest.

Right.

> Whether to use this as justification for a force push I'm not sure
> anyway.

I have looked at the histories of various `debianhvm' tests in many
other osstest `branches' and there do seem to be occasional similar
failures elsewhere.  So Ian C wasn't right to remember that this test
step might be unreliable.

I think based on this, and Wei's analysis, a force push is justified.

Ian.


Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Ian Jackson
Jan Beulich writes ("Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL"):
> This or some random (but recurring) failure.

I agree with Ian C that this isn't likely to be an infrastructure
problem.  It's probably an actual bug.

> Not so long ago I already mentioned that for the purpose of just
> determining whether a guest is up
> 
> ssh: connect to host 172.16.145.6 port 22: Connection refused
> 
> is sufficient, as that means there was a response (albeit a
> negative one).

That depends on what you mean by `up'.  The guest is not `up' in the
sense that it is not providing the services it is supposed to provide
(which include, in this case, an ssh server).

So I think that osstest is correct to regard this as a test failure.

>  But whether the refusal is due to something getting corrupted in
> the networking stack or due to an infrastructure problem is rather
> hard to tell.

Both of these seem unlikely as explanations.

A more likely explanation is that the host sometimes fails to complete
its bootup within a reasonable time.

Possible root causes include but are not limited to:

 * Something wrong with the network frontend/backend causes
   a delay to startup

 * Something else wrong means that the guest sometimes has
   very poor performance

 * The guest has actually locked up or partially locked up during
   boot and currently only kernel interrupt processing is being done

Infrastructure problems are possible of course but I don't think they
are likely.  When we had that BSD lost-gratuitous-arp bug I
investigated the networking in the colo in some depth and there was no
evidence of any packet loss, for example.

Ian.


Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Wei Liu
On Tue, Sep 08, 2015 at 02:30:30PM +, osstest service owner wrote:
> flight 61521 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/61521/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemut-debianhvm-amd64 19 guest-start/debianhvm.repeat fail REGR. vs. 61059
> 

I suspect this failure is due to an infrastructure problem.

Same test job passed in
http://logs.test-lab.xenproject.org/osstest/logs/61306/

The differences of xen-unstable between 61306 and this flight are:

a7b39c8bd libxc: add assertion to avoid setting same bit more than once
e8e9f830d libxc: don't populate same pfn more than once in populate_pfns
e00f8a1a7 libxc: fix indentation
76d75222a libxc: migration v2 prefix Memory -> Frames
477619384 libxc: clearer migration v2 debug message

Tested in migration jobs and passed.

b2700877a public/io/netif.h: move and amend multicast control documentation
09e2a619a MAINTAINERS: tools/ocaml: update David Scott's email address
06925643c build: update top-level make help

Doc update, irrelevant to failure.

f42f2cbe5 xen/dt: Handle correctly node without interrupt-map in dt_for_each_irq_map

ARM, irrelevant to failure.

d9be0990f tools/xen-access: use PRI_xen_pfn

Not tested in OSSTest

0a7167d9b xen/arm64: do not (incorrectly) limit size of xenheap

ARM, irrelevant to failure.

8747dba3b Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging
076cd5065 tmem: Spelling and full stop surgery.
fee800b94 tmem: Remove extra spaces at end and some hard tabbing.
880699a8b tmem: Use 'struct xen_tmem_oid' in tmem_handle and move it to sysctl header.
b657a8ad1 tmem: Use 'struct xen_tmem_oid' for every user.
987e64e05 tmem: Make the uint64_t oid[3] a proper structure: xen_tmem_oid
ad1b7a139 tmem: Remove the old tmem control XSM checks as it is part of sysctl hypercall.
d0edc15a6 tmem: Move TMEM_CONTROL subop of tmem hypercall to sysctl.
54a51b176 tmem: Remove xc_tmem_control mystical arg3
1682d2fdf tmem: Remove in xc_tmem_control_oid duplicate set_xen_guest_handle call
7646da32f tmem: Add ASSERT in obj_rb_insert for pool->rwlock lock.
7ec4b2f89 tmem: Don't crash/hang/leak hypervisor when using shared pools within an guest.

Not tested in OSSTest.

16181cbb1 tools: Honor Config.mk debug value, rather than setting our own

Passed, and irrelevant to failure.

Wei.


Re: [Xen-devel] [xen-unstable test] 61521: regressions - FAIL

2015-09-08 Thread Wei Liu
On Tue, Sep 08, 2015 at 04:13:06PM +0100, Ian Campbell wrote:
> On Tue, 2015-09-08 at 15:52 +0100, Wei Liu wrote:
> > On Tue, Sep 08, 2015 at 02:30:30PM +, osstest service owner wrote:
> > > flight 61521 xen-unstable real [real]
> > > http://logs.test-lab.xenproject.org/osstest/logs/61521/
> > > 
> > > Regressions :-(
> > > 
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > >  test-amd64-amd64-xl-qemut-debianhvm-amd64 19 guest-start/debianhvm.repeat fail REGR. vs. 61059
> > > 
> > 
> > I suspect this failure is due to an infrastructure problem.
> 
> Actually I think it is more likely that the qemut-debianhvm test is just
> not 100% reliable.
> 
> Unless you have observed something in the logs which points specifically
> towards an infrastructure failure? In which case you should have mentioned
> it here.
> 

Hmm... I don't think I spotted that kind of thing.

Wei.
