oVirt infra daily report - unstable production jobs - 84

2016-09-21 Thread jenkins
Good morning!

Attached is the HTML page with the Jenkins status report. You can also see it
here:
 - 
http://jenkins.ovirt.org/job/system_jenkins-report/84//artifact/exported-artifacts/upstream_report.html

Cheers,
Jenkins


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


vdsm_master_build-artifacts-el7-ppc64le failure - [Errno 256] No more mirrors to try.

2016-09-21 Thread Nir Soffer
The job fails consistently with:

DEBUG util.py:421:  Install  25 Packages (+133 Dependent packages)
DEBUG util.py:421:  Total download size: 115 M
DEBUG util.py:421:  Installed size: 482 M
DEBUG util.py:421:
http://mirror.centos.org/altarch/7/updates/ppc64le/Packages/ca-certificates-2015.2.6-70.1.el7_2.noarch.rpm:
[Errno -1] Package does not match intended download. Suggestion: run yum
--enablerepo=el-updates clean metadata
DEBUG util.py:421:  Trying other mirror.
DEBUG util.py:421:
http://mirror.centos.org/altarch/7/updates/ppc64le/Packages/tzdata-2016f-1.el7.noarch.rpm:
[Errno -1] Package does not match intended download. Suggestion: run yum
--enablerepo=el-updates clean metadata
DEBUG util.py:421:  Trying other mirror.
DEBUG util.py:421:  Error downloading packages:
DEBUG util.py:421:tzdata-2016f-1.el7.noarch: [Errno 256] No more
mirrors to try.
DEBUG util.py:421:ca-certificates-2015.2.6-70.1.el7_2.noarch: [Errno
256] No more mirrors to try.
DEBUG util.py:557:  Child return code was: 1
DEBUG util.py:180:  kill orphans

See
http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-ppc64le/279/artifact/exported-artifacts/logs.tgz

In ./vdsm/logs/mocker-epel-7-ppc64le.el7.init/root.log
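For context, "[Errno -1] Package does not match intended download" means the downloaded RPM failed yum's checksum verification against the repo metadata; once every mirror fails the same check, yum gives up with [Errno 256]. A minimal sketch of that verification step (the function name and sample data are illustrative, not yum's actual API):

```python
import hashlib

def verify_checksum(data: bytes, expected_sha256: str) -> bool:
    """Return True if the payload hashes to the digest recorded in the
    repo metadata. A mismatch is what produces the
    "[Errno -1] Package does not match intended download" error above;
    after all mirrors fail, it becomes "[Errno 256] No more mirrors to try".
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Illustrative payloads, not real RPM contents:
good = b"rpm payload"
digest = hashlib.sha256(good).hexdigest()
print(verify_checksum(good, digest))                 # True
print(verify_checksum(b"corrupted payload", digest)) # False
```

Since the packages themselves are fine on other arches, this usually points at stale cached repo metadata, which is why the log suggests running yum clean metadata (or scrubbing the mock cache) rather than blaming the mirror content.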

Nir


[JIRA] (OVIRT-721) Re: gerrit - Prevent pushing directly to origin/branches

2016-09-21 Thread eyal edri [Administrator] (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri [Administrator] updated OVIRT-721:

Blocked By: OVIRT-487
Status: Blocked  (was: In Progress)

> Re: gerrit - Prevent pushing directly to origin/branches
> 
>
> Key: OVIRT-721
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-721
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: eyal edri [Administrator]
>Assignee: Shlomo Ben David
>Priority: Highest
>
> Opening a ticket on it.
> On Tue, Sep 6, 2016 at 11:13 AM, Roy Golan  wrote:
> > Using ```git push origin HEAD:origin/ovirt-engine-3.6``` I'm able to
> > merge my work and bypass any review. This needs to be prevented for
> > unprivileged people, i.e. only stable branch maintainers should be able
> > to do it.
> >
> > See example in https://gerrit.ovirt.org/#/c/62425/
> >
> -- 
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
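The restriction Roy asks for could be expressed in Gerrit's per-project ACLs. A sketch of a project.config fragment, assuming a group named "Stable Branch Maintainers" exists (the group name and branch pattern are hypothetical):

```
[access "refs/heads/ovirt-engine-*"]
  exclusiveGroupPermissions = push
  push = group Stable Branch Maintainers
```

The exclusiveGroupPermissions line prevents inherited push grants from applying, so everyone else must go through refs/for/* and normal review.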



--
This message was sent by Atlassian JIRA
(v1000.350.2#100014)


[JIRA] (OVIRT-736) soft lockup on el7-vm25

2016-09-21 Thread Evgheni Dereveanchin (oVirt JIRA)
Evgheni Dereveanchin created OVIRT-736:
--

 Summary: soft lockup on el7-vm25
 Key: OVIRT-736
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-736
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: Evgheni Dereveanchin
Assignee: infra


I've noticed some slaves going offline in Jenkins with 100% CPU reported on the 
Engine. They eventually return to a normal state. I checked the logs on 
el7-vm25.phx.ovirt.org, which had these symptoms, and there seems to be a soft 
lockup caused by the qemu-kvm process:

Sep 21 04:57:18 el7-vm25 kernel: BUG: soft lockup - CPU#0 stuck for 22s! 
[qemu-kvm:13768]
Sep 21 04:57:18 el7-vm25 kernel: Modules linked in: nls_utf8 isofs loop dm_mod 
xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun ip6t_rpfilter ip6t_REJECT 
ipt_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc 
ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 
nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter 
ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat 
nf_conntrack iptable_mangle iptable_security iptable_raw iptable_filter 
aesni_intel lrw gf128mul glue_helper ppdev ablk_helper cryptd sg pcspkr 
parport_pc parport i2c_piix4 kvm_intel nfsd kvm auth_rpcgss nfs_acl lockd grace 
sunrpc ip_tables ext4 mbcache jbd2 sr_mod cdrom ata_generic pata_acpi 
virtio_blk virtio_console virtio_scsi virtio_net qxl syscopyarea sysfillrect 
sysimgblt drm_kms_helper
Sep 21 04:57:18 el7-vm25 kernel: ttm ata_piix crc32c_intel libata serio_raw 
virtio_pci virtio_ring virtio drm i2c_core floppy
Sep 21 04:57:18 el7-vm25 kernel: CPU: 0 PID: 13768 Comm: qemu-kvm Not tainted 
3.10.0-327.28.3.el7.x86_64 #1
Sep 21 04:57:18 el7-vm25 kernel: Hardware name: oVirt oVirt Node, BIOS 0.5.1 
01/01/2011
Sep 21 04:57:18 el7-vm25 kernel: task: 880210017300 ti: 8800363f8000 
task.ti: 8800363f8000
Sep 21 04:57:18 el7-vm25 kernel: RIP: 0010:[]  
[] generic_exec_single+0xfa/0x1a0
Sep 21 04:57:18 el7-vm25 kernel: RSP: 0018:8800363fbc40  EFLAGS: 0202
Sep 21 04:57:18 el7-vm25 kernel: RAX: 0020 RBX: 8800363fbc10 
RCX: 0020
Sep 21 04:57:18 el7-vm25 kernel: RDX:  RSI: 0020 
RDI: 0282
Sep 21 04:57:18 el7-vm25 kernel: RBP: 8800363fbc88 R08: 8165fbe0 
R09: ea000357c4c0
Sep 21 04:57:18 el7-vm25 kernel: R10: 3496 R11: 0206 
R12: 880210017300
Sep 21 04:57:18 el7-vm25 kernel: R13: 880210017300 R14: 0001 
R15: 880210017300
Sep 21 04:57:18 el7-vm25 kernel: FS:  7fe7b288e700() 
GS:880216e0() knlGS:
Sep 21 04:57:18 el7-vm25 kernel: CS:  0010 DS:  ES:  CR0: 
8005003b
Sep 21 04:57:18 el7-vm25 kernel: CR2:  CR3: 000211bcb000 
CR4: 26f0
Sep 21 04:57:18 el7-vm25 kernel: DR0:  DR1:  
DR2: 
Sep 21 04:57:18 el7-vm25 kernel: DR3:  DR6: 0ff0 
DR7: 0400
Sep 21 04:57:18 el7-vm25 kernel: Stack:
Sep 21 04:57:18 el7-vm25 kernel:   
81065c90 8800363fbd10
Sep 21 04:57:18 el7-vm25 kernel: 0003 9347ffdf 
0001 81065c90
Sep 21 04:57:18 el7-vm25 kernel: 81065c90 8800363fbcb8 
810e6adf 8800363fbcb8
Sep 21 04:57:18 el7-vm25 kernel: Call Trace:
Sep 21 04:57:18 el7-vm25 kernel: [] ? leave_mm+0x70/0x70
Sep 21 04:57:18 el7-vm25 kernel: [] ? leave_mm+0x70/0x70
Sep 21 04:57:18 el7-vm25 kernel: [] ? leave_mm+0x70/0x70
Sep 21 04:57:18 el7-vm25 kernel: [] 
smp_call_function_single+0x5f/0xa0
Sep 21 04:57:18 el7-vm25 kernel: [] ? 
cpumask_next_and+0x35/0x50
Sep 21 04:57:18 el7-vm25 kernel: [] 
smp_call_function_many+0x223/0x260
Sep 21 04:57:18 el7-vm25 kernel: [] 
native_flush_tlb_others+0xb8/0xc0
Sep 21 04:57:18 el7-vm25 kernel: [] 
flush_tlb_mm_range+0x66/0x140
Sep 21 04:57:18 el7-vm25 kernel: [] 
tlb_flush_mmu.part.54+0x33/0xc0
Sep 21 04:57:18 el7-vm25 kernel: [] tlb_finish_mmu+0x55/0x60
Sep 21 04:57:18 el7-vm25 kernel: [] zap_page_range+0x12a/0x170
Sep 21 04:57:18 el7-vm25 kernel: [] SyS_madvise+0x394/0x820
Sep 21 04:57:18 el7-vm25 kernel: [] ? 
hrtimer_nanosleep+0xad/0x170
Sep 21 04:57:18 el7-vm25 kernel: [] ? SyS_futex+0x80/0x180
Sep 21 04:57:18 el7-vm25 kernel: [] 
system_call_fastpath+0x16/0x1b
Sep 21 04:57:18 el7-vm25 kernel: Code: 80 72 01 00 48 89 de 48 03 14 c5 20 c9 
a5 81 48 89 df e8 7a 03 22 00 84 c0 75 46 45 85 ed 74 11 f6 43 20 01 74 0b 0f 
1f 00 f3 90  43 20 01 75 f8 31 c0 48 8b 7c 24 28 65 48 33 3c 25 28 00 00 


We need to find the root cause and fix the issue, as this is negatively 
affecting running jobs.
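To check how widespread this is across the slaves, the lockup lines can be pulled out of the kernel logs mechanically. A small sketch (the regex is derived from the message format above; the log source is up to whoever runs it):

```python
import re

# Matches e.g. "BUG: soft lockup - CPU#0 stuck for 22s! [qemu-kvm:13768]"
LOCKUP_RE = re.compile(
    r"BUG: soft lockup - CPU#(?P<cpu>\d+) stuck for (?P<secs>\d+)s! "
    r"\[(?P<comm>[^:\]]+):(?P<pid>\d+)\]"
)

def parse_lockups(log_lines):
    """Extract (cpu, seconds, command, pid) tuples from kernel log lines."""
    out = []
    for line in log_lines:
        m = LOCKUP_RE.search(line)
        if m:
            out.append((int(m.group("cpu")), int(m.group("secs")),
                        m.group("comm"), int(m.group("pid"))))
    return out

sample = ["Sep 21 04:57:18 el7-vm25 kernel: BUG: soft lockup - "
          "CPU#0 stuck for 22s! [qemu-kvm:13768]"]
print(parse_lockups(sample))  # [(0, 22, 'qemu-kvm', 13768)]
```

Running this over journalctl output from all slaves would show whether el7-vm25 is an outlier or whether the whole hypervisor is oversubscribed.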




[JIRA] (OVIRT-736) soft lockup on el7-vm25

2016-09-21 Thread Evgheni Dereveanchin (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgheni Dereveanchin reassigned OVIRT-736:
--

Assignee: Evgheni Dereveanchin  (was: infra)

> soft lockup on el7-vm25
> ---
>
> Key: OVIRT-736
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-736
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Evgheni Dereveanchin
>Assignee: Evgheni Dereveanchin
>
> I've noticed some slaves going offline in Jenkins with 100% CPU reported on 
> the Engine. They eventually return to a normal state. I checked the logs on 
> el7-vm25.phx.ovirt.org, which had these symptoms, and there seems to be a soft 
> lockup caused by the qemu-kvm process:
> Sep 21 04:57:18 el7-vm25 kernel: BUG: soft lockup - CPU#0 stuck for 22s! 
> [qemu-kvm:13768]