[JIRA] (OVIRT-989) Ensure backup.phx.ovirt.org has proper data retention cycles

2016-12-27 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-989:
--

 Summary: Ensure backup.phx.ovirt.org has proper data retention cycles
 Key: OVIRT-989
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-989
 Project: oVirt - virtualization made easy
  Issue Type: Improvement
  Components: General
Reporter: Barak Korren
Assignee: infra
Priority: High


backup.phx.ovirt.org started running out of space, so the disk was expanded
from 250G to 500G (while keeping the actual LV at 300G).

We need to ensure that old data eventually gets erased and the disk isn't just
filling up.



--
This message was sent by Atlassian JIRA
(v1000.621.5#100023)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra
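The retention cycle the ticket asks for could be a small cron-driven cleanup. A minimal sketch follows; the directory layout, function name, and 30-day window are assumptions, not taken from the ticket:

```shell
#!/bin/bash
# Hypothetical retention helper: delete backup files older than a given
# window, then prune directories left empty. Paths and the retention
# period are illustrative assumptions.
prune_old_backups() {
    local backup_dir=$1
    local retention_days=$2
    # Delete regular files whose mtime is older than the window...
    find "$backup_dir" -type f -mtime "+$retention_days" -delete
    # ...then remove any now-empty subdirectories.
    find "$backup_dir" -mindepth 1 -type d -empty -delete
}
```

Installed as a script and run from cron (e.g. nightly with `prune_old_backups /srv/backups 30`), this would give the host a predictable retention cycle instead of ad-hoc disk expansions.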


[JIRA] (OVIRT-989) Ensure backup.phx.ovirt.org has proper data retention cycles

2016-12-27 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-989:
---
Epic Link: OVIRT-403

> Ensure backup.phx.ovirt.org has proper data retention cycles
> 
>
> Key: OVIRT-989
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-989
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: General
>Reporter: Barak Korren
>Assignee: infra
>Priority: High
>
> backup.phx.ovirt.org started running out of space, so the disk was expanded
> from 250G to 500G (while keeping the actual LV at 300G).
> We need to ensure that old data eventually gets erased and the disk isn't
> just filling up.





Build failed in Jenkins: ovirt_3.6_he-system-tests #791

2016-12-27 Thread jenkins
See 

Changes:

[Sandro Bonazzola] 3.6 repo: drop double *-debuginfo

[Juan Hernandez] Build and check API metamodel from branch 1.1

--
[...truncated 608 lines...]
+ local ID=3
+ lvcreate -L20G -n lun3_bdev vg1_storage
+ targetcli /backstores/block create name=lun3_bdev 
dev=/dev/vg1_storage/lun3_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create 
/backstores/block/lun3_bdev
+ for I in '$(seq $NUM_LUNS)'
+ create_lun 4
+ local ID=4
+ lvcreate -L20G -n lun4_bdev vg1_storage
+ targetcli /backstores/block create name=lun4_bdev 
dev=/dev/vg1_storage/lun4_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create 
/backstores/block/lun4_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1 set auth userid=username 
password=password
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1 set attribute 
demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1 
default_cmdsn_depth=64
+ targetcli saveconfig
+ systemctl enable target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service 
to /usr/lib/systemd/system/target.service.
+ systemctl start target
+ sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = 
CHAP/g' /etc/iscsi/iscsid.conf
+ sed -i 's/#node.session.auth.username = username/node.session.auth.username = 
username/g' /etc/iscsi/iscsid.conf
+ sed -i 's/#node.session.auth.password = password/node.session.auth.password = 
password/g' /etc/iscsi/iscsid.conf
+ iscsiadm -m discovery -t sendtargets -p 127.0.0.1
+ iscsiadm -m node -L all
+ rescan-scsi-bus.sh
which: no multipath in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
+ lsscsi -i
+ grep 36
+ sort
+ awk '{print $NF}'
+ iscsiadm -m node -U all
+ iscsiadm -m node -o delete
+ systemctl stop iscsi.service
+ systemctl disable iscsi.service
Removed symlink /etc/systemd/system/sysinit.target.wants/iscsi.service.
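The `+` xtrace lines above come from a loop along the lines of `for I in $(seq $NUM_LUNS); do create_lun $I; done`. Reconstructed from the trace, the helper presumably looks roughly like this (the function body is an inference, not the actual suite source):

```shell
# Reconstructed from the xtrace: back each LUN with a 20G LV and expose it
# through targetcli. Assumes vg1_storage exists and targetcli is installed.
create_lun() {
    local ID=$1
    # Carve out a 20G logical volume to back the LUN...
    lvcreate -L20G -n "lun${ID}_bdev" vg1_storage
    # ...register it as a block backstore...
    targetcli /backstores/block create \
        name="lun${ID}_bdev" dev="/dev/vg1_storage/lun${ID}_bdev"
    # ...and attach it to the target portal group.
    targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create \
        "/backstores/block/lun${ID}_bdev"
}
```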
+ install_deps_389ds
+ yum install -y --downloaddir=/dev/shm 389-ds-base
+ setup_389ds
+ DOMAIN=lago.local
+ PASSWORD=12345678
++ hostname
++ sed s/_/-/g
+ HOSTNAME=lago-he-basic-suite-3-6-storage.lago.local
++ /sbin/ip -4 -o addr show dev eth0
++ awk '{split($4,a,"."); print a[1] "." a[2] "." a[3] "." a[4]}'
++ awk -F/ '{print $1}'
+ ADDR=192.168.200.2
+ cat
+ sed -i s/@HOSTNAME@/lago-he-basic-suite-3-6-storage.lago.local/g 
answer_file.inf
+ sed -i s/@PASSWORD@/12345678/g answer_file.inf
+ sed -i s/@DOMAIN@/lago.local/g answer_file.inf
+ cat
+ /usr/sbin/setup-ds.pl --silent --file=answer_file.inf
Warning: using root as the server user id.  You are strongly encouraged to use 
a non-root user.

* [Thread-2] Deploy VM lago-he-basic-suite-3-6-storage: ERROR (in 0:02:15)
Error while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in 
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1242, in 
_deploy_host
host.name(),
RuntimeError: 

 failed with status 1 on lago-he-basic-suite-3-6-storage
* [Thread-5] Deploy VM lago-he-basic-suite-3-6-host1: ERROR (in 0:08:40)
* [Thread-4] Deploy VM lago-he-basic-suite-3-6-host0: ERROR (in 0:09:44)
  # Deploy environment: ERROR (in 0:09:44)
@ Deploy oVirt environment: ERROR (in 0:09:44)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 281, in do_run
self.cli_plugins[args.ovirtverb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
do_run
self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in wrapper
return func(*args, prefix=prefix, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 187, in 
do_deploy
prefix.deploy()
  File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 621, in 
wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 68, in 
wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/__init__.py", line 198, in 
deploy
return super(OvirtPrefix, self).deploy()
  File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 621, in 
wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1249, in deploy
self._deploy_host, self.virt_env.get_vms().values()
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 97, in 

Build failed in Jenkins: ovirt_4.1_publish-rpms_nightly #25

2016-12-27 Thread jenkins
See 

--
Started by timer
[EnvInject] - Loading node environment variables.
Building on master in workspace 

[WS-CLEANUP] Deleting project workspace...
[workspace] $ /bin/bash -xe /tmp/hudson5307849611683268207.sh
+ rm -rf 

+ mkdir 

Copied 5 artifacts from "ovirt-host-deploy_4.1_build-artifacts-el7-x86_64" 
build number 5
Copied 5 artifacts from "ovirt-host-deploy_4.1_build-artifacts-fc24-x86_64" 
build number 5
Copied 7 artifacts from "otopi_master_build-artifacts-el7-x86_64" build number 
33
Copied 7 artifacts from "otopi_master_build-artifacts-fc24-x86_64" build number 
21
Copied 5 artifacts from "ovirt-vmconsole_master_build-artifacts-el7-x86_64" 
build number 2
Copied 6 artifacts from "ovirt-vmconsole_master_build-artifacts-fc24-x86_64" 
build number 1
Copied 10 artifacts from "ovirt-imageio_4.1_build-artifacts-el7-x86_64" build 
number 3
Copied 10 artifacts from "ovirt-imageio_4.1_build-artifacts-fc24-x86_64" build 
number 3
Copied 4 artifacts from "ovirt-iso-uploader_master_build-artifacts-el7-x86_64" 
build number 9
Copied 4 artifacts from "ovirt-iso-uploader_master_build-artifacts-fc24-x86_64" 
build number 5
Copied 3 artifacts from "ovirt-log-collector_4.1_build-artifacts-el7-x86_64" 
build number 1
Copied 3 artifacts from "ovirt-log-collector_4.1_build-artifacts-fc24-x86_64" 
build number 1
Copied 4 artifacts from "ovirt-engine-cli_3.6_build-artifacts-el7-x86_64" build 
number 23
Copied 4 artifacts from "ovirt-engine-cli_3.6_build-artifacts-fc24-x86_64" 
build number 13
Copied 7 artifacts from 
"ovirt-engine-extension-aaa-ldap_master_create-rpms-el7-x86_64_merged" build 
number 31
Copied 7 artifacts from 
"ovirt-engine-extension-aaa-ldap_master_create-rpms-fc24-x86_64_merged" build 
number 14
Copied 7 artifacts from 
"ovirt-engine-extension-aaa-misc_master_create-rpms-el7-x86_64_merged" build 
number 5
Copied 7 artifacts from 
"ovirt-engine-extension-aaa-misc_master_create-rpms-fc24-x86_64_merged" build 
number 1
Copied 7 artifacts from 
"ovirt-engine-extension-logger-log4j_master_create-rpms-el7-x86_64_merged" 
build number 4
Copied 7 artifacts from 
"ovirt-engine-extension-logger-log4j_master_create-rpms-fc24-x86_64_merged" 
build number 3
Copied 4 artifacts from "ovirt-dwh_master_build-artifacts-el7-x86_64" build 
number 32
Copied 4 artifacts from "ovirt-dwh_master_build-artifacts-fc24-x86_64" build 
number 12
Copied 7 artifacts from 
"ovirt-engine-extension-aaa-jdbc_master_create-rpms-el7-x86_64_merged" build 
number 15
Copied 7 artifacts from 
"ovirt-engine-extension-aaa-jdbc_master_create-rpms-fc24-x86_64_merged" build 
number 10
Copied 3 artifacts from "ovirt-setup-lib_master_build-artifacts-el7-x86_64" 
build number 12
Copied 3 artifacts from "ovirt-setup-lib_master_build-artifacts-fc24-x86_64" 
build number 8
Copied 4 artifacts from "vdsm-jsonrpc-java_master_build-artifacts-el7-x86_64" 
build number 50
Copied 4 artifacts from "vdsm-jsonrpc-java_master_build-artifacts-fc24-x86_64" 
build number 29
Copied 3 artifacts from "ovirt-engine-sdk_master_build-artifacts-el7-x86_64" 
build number 101
Copied 5 artifacts from "ovirt-engine-sdk_master_build-artifacts-fc24-x86_64" 
build number 76
Copied 4 artifacts from 
"python-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64" build number 8
Copied 5 artifacts from 
"python-ovirt-engine-sdk4_master_build-artifacts-fc24-x86_64" build number 8
Copied 4 artifacts from "ovirt-engine-sdk_master_build-artifacts-el7-ppc64le" 
build number 30
Copied 5 artifacts from "ovirt-engine-sdk_master_build-artifacts-fc24-ppc64le" 
build number 30
Copied 3 artifacts from 
"ovirt-engine-sdk-java_master_build-artifacts-el7-x86_64" build number 74
Copied 3 artifacts from 
"ovirt-engine-sdk-java_master_build-artifacts-fc24-x86_64" build number 54
Copied 4 artifacts from 
"java-ovirt-engine-sdk4_master_build-artifacts-el7-x86_64" build number 6
Copied 3 artifacts from 
"java-ovirt-engine-sdk4_master_build-artifacts-fc24-x86_64" build number 10
Copied 4 artifacts from 
"ovirt-scheduler-proxy_master_build-artifacts-el7-x86_64" build number 5
Copied 4 artifacts from 
"ovirt-scheduler-proxy_master_build-artifacts-fc24-x86_64" build number 1
Copied 6 artifacts from "ovirt-optimizer_master_build-artifacts-el7-x86_64" 
build number 22
Copied 7 artifacts from "ovirt-optimizer_master_build-artifacts-fc24-x86_64" 
build number 21
Copied 5 artifacts from 
"ovirt-jboss-modules-maven-plugin_master_build-artifacts-el7-x86_64" build 
number 3
Copied 5 artifacts from 
"ovirt-jboss-modules-maven-plugin_master_build-artifacts-fc24-x86_64" build 
number 2
Copied 3 artifacts from 
"ovirt-engine-dashboard_master_build-artifacts-el7-x86_64" build number 71
Copied 3 artifacts 

oVirt infra daily report - unstable production jobs - 182

2016-12-27 Thread jenkins
Good morning!

Attached is the HTML page with the jenkins status report. You can see it also 
here:
 - 
http://jenkins.ovirt.org/job/system_jenkins-report/182//artifact/exported-artifacts/upstream_report.html

Cheers,
Jenkins

 
 
 
 RHEVM CI Jenkins Daily Report - 27/12/2016
 
00 Unstable Critical
 
   
   deploy-to-ovirt_experimental_4.1
   
   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the 
jenkins repo.

   
   
   
   ovirt-node-ng_ovirt-master-experimental_build-artifacts-el7-x86_64
   
   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the 
jenkins repo.

   
   
   
   ovirt_3.6_he-system-tests
   
   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the 
jenkins repo.

   
   
   
   ovirt_3.6_image-ng-system-tests
   
   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the 
jenkins repo.

   
   
   
   ovirt_master_system-tests_per_patch
   
   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the 
jenkins repo.

   
   
   
   system-backup_jenkins_old_ovirt_org
   
   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the 
jenkins repo.

Job disabled - The old Jenkins is probably not up any more
   


Jenkins build is back to normal : ovirt_master_system-tests #886

2016-12-27 Thread jenkins
See 



[JIRA] (OVIRT-985) Fix the way repoman deploy to experimental

2016-12-27 Thread eyal edri [Administrator] (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=25018#comment-25018
 ] 

eyal edri [Administrator] commented on OVIRT-985:
-

We need longer history; please update the deploy experimental job to keep
history for 14 days (other projects were already updated).

> Fix the way repoman deploy to experimental
> --
>
> Key: OVIRT-985
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-985
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Gil Shinar
>Assignee: infra
>
> Please consult with Eyal on that.
> No idea what should be done.
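Assuming the job is managed by jenkins-job-builder (as the infra jobs are), the 14-day history request could be expressed with the `build-discarder` property; the job name below is illustrative:

```yaml
# Hypothetical JJB fragment; only the build-discarder stanza matters here.
- job:
    name: deploy-to-ovirt_experimental_4.1
    properties:
      - build-discarder:
          days-to-keep: 14
```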





[JIRA] (OVIRT-982) Fwd: http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.1_el7_created/431

2016-12-27 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=25017#comment-25017
 ] 

Barak Korren commented on OVIRT-982:


[~gshinar] please link the relevant patch here, and also reference this ticket
in the patch's commit message.

> Fwd: 
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.1_el7_created/431
> --
>
> Key: OVIRT-982
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-982
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yevgeny Zaspitsky
>Assignee: infra
>
> Looks like a lot of jobs fail on a similar problem.
> From the job console log:
> git -c core.askpass=true fetch --tags --progress
> git://gerrit.ovirt.org/ovirt-engine.git refs/changes/93/67593/22
> --prune
> ERROR: Error fetching remote repo
> 'origin'hudson.plugins.git.GitException
> :
> Failed to fetch from git://gerrit.ovirt.org/ovirt-engine.git
>   at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
> 
>   at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
> 
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
> 
>   at 
> org.jenkinsci.plugins.multiplescms.MultiSCM.checkout(MultiSCM.java:129)
> 
>   at hudson.scm.SCM.checkout(SCM.java:485)
> 
>   at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
> 
>   at 
> hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
> 
>   at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
> 
>   at 
> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
> 
>   at hudson.model.Run.execute(Run.java:1738)
> 
>   at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> 
>   at hudson.model.ResourceController.execute(ResourceController.java:98)
> 
>   at hudson.model.Executor.run(Executor.java:410)
> 
> Caused by: hudson.plugins.git.GitException
> :
> Command "git -c core.askpass=true fetch --tags --progress
> git://gerrit.ovirt.org/ovirt-engine.git refs/changes/93/67593/22
> --prune" returned status code 128:
> stdout:
> stderr: fatal: read error: Connection reset by peer
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1388)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:152)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:50)
>   at hudson.remoting.Request$2.run(Request.java:332)
>   at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   
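The failure above ("fatal: read error: Connection reset by peer") is a transient network error. A hedged sketch of a retry wrapper a job shell step could use follows; the helper name and retry policy are made up, not an existing Jenkins facility:

```shell
# Retry a command up to a fixed number of attempts, pausing between tries.
# Useful for transient failures such as connection resets during git fetch.
retry() {
    local attempts=$1; shift
    local i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0          # success: stop retrying
        echo "attempt $i/$attempts failed: $*" >&2
        sleep 1
    done
    return 1                      # all attempts exhausted
}

# Illustrative usage with the fetch from the log:
# retry 3 git -c core.askpass=true fetch --tags --progress \
#     git://gerrit.ovirt.org/ovirt-engine.git refs/changes/93/67593/22 --prune
```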

Build failed in Jenkins: ovirt_master_system-tests_per_patch #4

2016-12-27 Thread jenkins
See 

Changes:

[Yaniv Kaul] Fixes and changes to storage tests

[Juan Hernandez] Build and check API model from branch 4.1

--
Started by user Eyal Edri
[EnvInject] - Loading node environment variables.
Building remotely on ovirt-srv18.phx.ovirt.org (phx integ-tests physical) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://gerrit.ovirt.org/ovirt-system-tests.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/ovirt-system-tests.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/ovirt-system-tests.git refs/heads/master --prune
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
Checking out Revision e266107c53eaa88c58e1609f0679924d56161183 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e266107c53eaa88c58e1609f0679924d56161183
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list 9febbf1d562baec55ffaaa18b06f94aabbcc5848 # timeout=10
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/APIV4^{commit} # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* 
 > --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 93fecbcde573cf5428e056942b899919583b15a1 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 93fecbcde573cf5428e056942b899919583b15a1
 > git rev-list 90b9a38a52522f18189f267eba3d88f1a3c6e109 # timeout=10
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
[ovirt_master_system-tests_per_patch] $ /bin/bash -e 
/tmp/hudson9015366531611428089.sh
shell-scripts/cleanup_slave.sh
###
#Cleaning up slave#
###
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
---
Cleaning up postgres databases
Postgres installation not found, skipping
Cleaning up journal logs (if any)
Redirecting to /bin/systemctl restart  systemd-journald.service
Cleaning up /var/tmp
done
Emptying some common logs
/var/log/wtmp
Done
Making sure there are no device mappings...
Removing the used loop devices...
Redirecting to /bin/systemctl restart  libvirtd.service
---
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
###
#Slave cleanup done   #
###
[ovirt_master_system-tests_per_patch] $ /bin/bash -xe 
/tmp/hudson113706959738959.sh
+ echo shell-scripts/system_tests.sh
shell-scripts/system_tests.sh
+ VERSION=master
+ SUITE_TYPE=basic
+ 
WORKSPACE=
+ OVIRT_SUITE=basic_suite_master
+ 
PREFIX=
+ OVIRT_SUITE_DIR=basic-suite-master
+ echo rec:
/tmp/hudson113706959738959.sh: line 18: 

Re: ost host addition failure

2016-12-27 Thread Yaniv Kaul
On Dec 27, 2016 3:01 PM, "Gil Shinar"  wrote:

After the following fix was merged, we still had an issue with vm_run, but
that has been fixed as well.
Master experimental is now working properly.


Excellent news!
1. Can we publish it?
2. 4.1 branch?
Y.


Thanks Dan
Gil

On Tue, Dec 27, 2016 at 10:24 AM, Dan Kenigsberg  wrote:

> On Tue, Dec 27, 2016 at 9:59 AM, Eyal Edri  wrote:
> >
> >
> > On Tue, Dec 27, 2016 at 9:56 AM, Eyal Edri  wrote:
> >>
> >> Any updates?
> >> The tests are still failing since Sunday because vdsmd won't start; master
> >> repos haven't been refreshed for a few days due to this.
> >>
> >> from host deploy log: [1]
> >> basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/
> ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
> >> the job links [2]
> >>
> >>
> >>
> >>
> >>
> >> [1]
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
> ster/lastCompletedBuild/artifact/exported-artifacts/basic_
> suite_master.sh-el7/exported-artifacts/test_logs/basic-
> suite-master/post-002_bootstrap.py/lago-
> >
> >
> > Now with the full link:
> > http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
> ster/lastCompletedBuild/artifact/exported-artifacts/basic_
> suite_master.sh-el7/exported-artifacts/test_logs/basic-
> suite-master/post-002_bootstrap.py/lago-basic-suite-master-
> engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-
> 20161227012930-192.168.201.4-14af2bf0.log
> >
> >>
> >>
> >>
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> >> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
> >> 'vdsmd.service') stdout:
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> >> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
> >> 'vdsmd.service') stderr:
> >> A dependency job for vdsmd.service failed. See 'journalctl -xe' for
> >> details.
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142
> method
> >> exception
> >> Traceback (most recent call last):
> >>   File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in
> >> _executeMethod
> >> method['method']()
> >>   File
> >> "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/vdsm/
> packages.py",
> >> line 209, in _start
> >> self.services.state('vdsmd', True)
> >>   File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py",
> >> line 141, in state
> >> service=name,
> >> RuntimeError: Failed to start service 'vdsmd'
> >> 2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151
> Failed
> >> to execute stage 'Closing up': Failed to start service 'vdsmd'
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760
> >> ENVIRONMENT DUMP - BEGIN
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
> >> BASE/error=bool:'True'
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
> >> BASE/excep
> >>
> >> [2]
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
> ster/lastCompletedBuild/testReport/
>
>
> In the log I see
>
> Processing package vdsm-4.20.0-7.gitf851d1b.el7.centos.x86_64
>
> which is from Dec 22 (last Thursday). This is because we are missing a
> master-branch tag: v4.20.0 was wrongly tagged on the same commit as
> v4.19.1, removed, and never placed properly.
>
> I've re-pushed v4.20.0 properly, and now merged a patch to trigger
> build-artifacts in master.
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64/1544/
>
> When this is done, could you use it to take the artifacts and try again?
>
> Regards,
> Dan.
>
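The re-push Dan describes (delete the misplaced v4.20.0 tag, recreate it on the right commit, push again) corresponds roughly to the following sketch; the helper function is illustrative, only the tag name comes from the thread:

```shell
# Move an annotated tag to the correct commit, locally and on the remote.
move_tag() {
    local tag=$1 commit=$2 remote=${3:-origin}
    git tag -d "$tag" 2>/dev/null || true   # drop the misplaced local tag
    git push "$remote" ":refs/tags/$tag"    # drop it on the remote too
    git tag -a "$tag" -m "re-push $tag" "$commit"
    git push "$remote" "$tag"               # publish the corrected tag
}

# e.g. move_tag v4.20.0 <correct-commit-sha>
```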


Build failed in Jenkins: ovirt_3.6_he-system-tests #790

2016-12-27 Thread jenkins
See 

Changes:

[Yaniv Kaul] Fixes and changes to storage tests

[Dan Kenigsberg] vdsm: exclude f23 from 4.1 branch

[Yaniv Bronhaim] Update Jenkins slaves for vdsm to f24 and f25 and remove 
excludes

[Eyal Edri] update vdsm jobs to use 4.1 jobs

[ngoldin] Add fc25 lago jobs

[Juan Hernandez] Build and check API model from branch 4.1

--
[...truncated 608 lines...]
+ local ID=3
+ lvcreate -L20G -n lun3_bdev vg1_storage
+ targetcli /backstores/block create name=lun3_bdev 
dev=/dev/vg1_storage/lun3_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create 
/backstores/block/lun3_bdev
+ for I in '$(seq $NUM_LUNS)'
+ create_lun 4
+ local ID=4
+ lvcreate -L20G -n lun4_bdev vg1_storage
+ targetcli /backstores/block create name=lun4_bdev 
dev=/dev/vg1_storage/lun4_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create 
/backstores/block/lun4_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1 set auth userid=username 
password=password
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1 set attribute 
demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1 
default_cmdsn_depth=64
+ targetcli saveconfig
+ systemctl enable target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service 
to /usr/lib/systemd/system/target.service.
+ systemctl start target
+ sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = 
CHAP/g' /etc/iscsi/iscsid.conf
+ sed -i 's/#node.session.auth.username = username/node.session.auth.username = 
username/g' /etc/iscsi/iscsid.conf
+ sed -i 's/#node.session.auth.password = password/node.session.auth.password = 
password/g' /etc/iscsi/iscsid.conf
+ iscsiadm -m discovery -t sendtargets -p 127.0.0.1
+ iscsiadm -m node -L all
+ rescan-scsi-bus.sh
which: no multipath in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
+ lsscsi -i
+ grep 36
+ awk '{print $NF}'
+ sort
+ iscsiadm -m node -U all
+ iscsiadm -m node -o delete
+ systemctl stop iscsi.service
+ systemctl disable iscsi.service
Removed symlink /etc/systemd/system/sysinit.target.wants/iscsi.service.
+ install_deps_389ds
+ yum install -y --downloaddir=/dev/shm 389-ds-base
+ setup_389ds
+ DOMAIN=lago.local
+ PASSWORD=12345678
++ sed s/_/-/g
++ hostname
+ HOSTNAME=lago-he-basic-suite-3-6-storage.lago.local
++ /sbin/ip -4 -o addr show dev eth0
++ awk '{split($4,a,"."); print a[1] "." a[2] "." a[3] "." a[4]}'
++ awk -F/ '{print $1}'
+ ADDR=192.168.200.2
+ cat
+ sed -i s/@HOSTNAME@/lago-he-basic-suite-3-6-storage.lago.local/g 
answer_file.inf
+ sed -i s/@PASSWORD@/12345678/g answer_file.inf
+ sed -i s/@DOMAIN@/lago.local/g answer_file.inf
+ cat
+ /usr/sbin/setup-ds.pl --silent --file=answer_file.inf
Warning: using root as the server user id.  You are strongly encouraged to use 
a non-root user.

* [Thread-2] Deploy VM lago-he-basic-suite-3-6-storage: ERROR (in 0:02:27)
Error while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in 
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1242, in 
_deploy_host
host.name(),
RuntimeError: 

 failed with status 1 on lago-he-basic-suite-3-6-storage
* [Thread-4] Deploy VM lago-he-basic-suite-3-6-host0: ERROR (in 0:08:50)
* [Thread-5] Deploy VM lago-he-basic-suite-3-6-host1: ERROR (in 0:08:51)
  # Deploy environment: ERROR (in 0:08:51)
@ Deploy oVirt environment: ERROR (in 0:08:51)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 281, in do_run
self.cli_plugins[args.ovirtverb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
do_run
self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in wrapper
return func(*args, prefix=prefix, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 187, in 
do_deploy
prefix.deploy()
  File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 621, in 
wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 68, in 
wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/__init__.py", line 198, in 
deploy
return super(OvirtPrefix, self).deploy()
  File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 621, in 
wrapper
return func(*args, **kwargs)
  File 

[JIRA] (OVIRT-988) standard-CI artifacts are not collected when mock runner fails

2016-12-27 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-988:
---
Epic Link: OVIRT-400

> standard-CI artifacts are not collected when mock runner fails
> --
>
> Key: OVIRT-988
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-988
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Barak Korren
>Assignee: infra
>
> Currently, when a standard-CI script fails (which makes mock_runner.sh fail), 
> the exported-artifacts are not taken from the project directory but only from 
> $WORKSPACE.
> It is desirable to take the project's artifacts too, even if the CI script
> failed, to allow debugging.
> It seems that pipeline-based jobs (Lago's CI, experimental, OST) do not
> exhibit this behaviour.





[JIRA] (OVIRT-988) standard-CI artifacts are not collected when mock runner fails

2016-12-27 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-988:
--

Assignee: Barak Korren  (was: infra)

> standard-CI artifacts are not collected when mock runner fails
> --
>
> Key: OVIRT-988
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-988
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Barak Korren
>Assignee: Barak Korren
>
> Currently, when a standard-CI script fails (which makes mock_runner.sh fail), 
> the exported-artifacts are not taken from the project directory but only from 
> $WORKSPACE.
> It is desirable to take the project's artifacts too, even if the CI script
> failed, to allow debugging.
> It seems that pipeline-based jobs (Lago's CI, experimental, OST) do not
> exhibit this behaviour.





[JIRA] (OVIRT-988) standard-CI artifacts are not collected when mock runner fails

2016-12-27 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-988:
--

 Summary: standard-CI artifacts are not collected when mock runner 
fails
 Key: OVIRT-988
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-988
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: Barak Korren
Assignee: infra


Currently, when a standard-CI script fails (which makes mock_runner.sh fail), 
the exported-artifacts are not taken from the project directory but only from 
$WORKSPACE.

It is desirable to take the project's artifacts too, even if the CI script failed,
to allow debugging.

It seems that pipeline-based jobs (Lago's CI, experimental, OST) do not
exhibit this behaviour.
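The collection behaviour asked for here can be sketched as a small wrapper. This is an illustrative assumption, not the actual mock_runner.sh code: the project layout and the `./automation/check-patch.sh` path are examples.

```shell
#!/bin/bash
# Hypothetical sketch: run a standard-CI script, then collect the
# project's exported-artifacts into the workspace even when the script
# failed, so the failure can be debugged. Paths are illustrative.

collect_artifacts() {
    local src_dir="${1:?}" dst_dir="${2:?}"
    mkdir -p "$dst_dir"
    # Copy whatever the script managed to produce; a missing dir is fine
    if [[ -d "$src_dir/exported-artifacts" ]]; then
        cp -r "$src_dir/exported-artifacts/." "$dst_dir/"
    fi
}

run_with_collection() {
    local project_dir="${1:?}" workspace="${2:?}" rc=0
    # '||' preserves the failure code without aborting under 'set -e'
    ( cd "$project_dir" && ./automation/check-patch.sh ) || rc=$?
    collect_artifacts "$project_dir" "$workspace/exported-artifacts"
    return $rc
}
```

The point is that artifact collection runs unconditionally, after the script's exit code has already been captured.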





Build failed in Jenkins: ovirt_master_system-tests_per_patch #3

2016-12-27 Thread jenkins
See 

Changes:

[Yaniv Kaul] Fixes and changes to storage tests

--
Started by user Eyal Edri
[EnvInject] - Loading node environment variables.
Building remotely on ovirt-srv18.phx.ovirt.org (phx integ-tests physical) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://gerrit.ovirt.org/ovirt-system-tests.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/ovirt-system-tests.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/ovirt-system-tests.git refs/heads/master --prune
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
Checking out Revision e266107c53eaa88c58e1609f0679924d56161183 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e266107c53eaa88c58e1609f0679924d56161183
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list 9febbf1d562baec55ffaaa18b06f94aabbcc5848 # timeout=10
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/APIV4^{commit} # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* 
 > --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 90b9a38a52522f18189f267eba3d88f1a3c6e109 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 90b9a38a52522f18189f267eba3d88f1a3c6e109
 > git rev-list 90b9a38a52522f18189f267eba3d88f1a3c6e109 # timeout=10
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
[ovirt_master_system-tests_per_patch] $ /bin/bash -e 
/tmp/hudson1762005761609787543.sh
shell-scripts/cleanup_slave.sh
###
#Cleaning up slave#
###
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
---
Cleaning up postgres databases
Postgres installation not found, skipping
Cleaning up journal logs (if any)
Redirecting to /bin/systemctl restart  systemd-journald.service
Cleaning up /var/tmp
done
Emptying some common logs
/var/log/wtmp
Done
Making sure there are no device mappings...
Removing the used loop devices...
Redirecting to /bin/systemctl restart  libvirtd.service
---
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
###
#Slave cleanup done   #
###
[ovirt_master_system-tests_per_patch] $ /bin/bash -xe 
/tmp/hudson9020561732978837815.sh
+ echo shell-scripts/system_tests.sh
shell-scripts/system_tests.sh
+ VERSION=master
+ SUITE_TYPE=basic
+ 
WORKSPACE=
+ OVIRT_SUITE=basic_suite_master
+ 
PREFIX=
+ touch extra_sources
+ echo rec:
/tmp/hudson9020561732978837815.sh: line 18: 

Build failed in Jenkins: ovirt_master_system-tests_per_patch #2

2016-12-27 Thread jenkins
See 

Changes:

[Yaniv Kaul] Fixes and changes to storage tests

--
Started by user Eyal Edri
[EnvInject] - Loading node environment variables.
Building remotely on ovirt-srv18.phx.ovirt.org (phx integ-tests physical) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://gerrit.ovirt.org/ovirt-system-tests.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/ovirt-system-tests.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/ovirt-system-tests.git refs/heads/master --prune
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
Checking out Revision e266107c53eaa88c58e1609f0679924d56161183 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e266107c53eaa88c58e1609f0679924d56161183
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list 9febbf1d562baec55ffaaa18b06f94aabbcc5848 # timeout=10
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/APIV4^{commit} # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* 
 > --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 90b9a38a52522f18189f267eba3d88f1a3c6e109 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 90b9a38a52522f18189f267eba3d88f1a3c6e109
 > git rev-list 90b9a38a52522f18189f267eba3d88f1a3c6e109 # timeout=10
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
[ovirt_master_system-tests_per_patch] $ /bin/bash -e 
/tmp/hudson5195721722187719295.sh
shell-scripts/cleanup_slave.sh
###
#Cleaning up slave#
###
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
---
Cleaning up postgres databases
Postgres installation not found, skipping
Cleaning up journal logs (if any)
Redirecting to /bin/systemctl restart  systemd-journald.service
Cleaning up /var/tmp
done
Emptying some common logs
/var/log/wtmp
Done
Making sure there are no device mappings...
Removing the used loop devices...
Redirecting to /bin/systemctl restart  libvirtd.service
---
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
###
#Slave cleanup done   #
###
[ovirt_master_system-tests_per_patch] $ /bin/bash -xe 
/tmp/hudson8486222969270923559.sh
+ echo shell-scripts/system_tests.sh
shell-scripts/system_tests.sh
+ VERSION=master
+ SUITE_TYPE=basic
+ 
WORKSPACE=
+ OVIRT_SUITE=basic_suite_master
+ 
PREFIX=
+ echo rec:
/tmp/hudson8486222969270923559.sh: line 17: 

Build failed in Jenkins: ovirt_master_system-tests_per_patch #1

2016-12-27 Thread jenkins
See 

--
Started by user Eyal Edri
[EnvInject] - Loading node environment variables.
Building remotely on ovirt-srv18.phx.ovirt.org (phx integ-tests physical) in 
workspace 
Cloning the remote Git repository
Cloning repository git://gerrit.ovirt.org/ovirt-system-tests.git
 > git init 
 > 
 >  # timeout=10
Fetching upstream changes from git://gerrit.ovirt.org/ovirt-system-tests.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/ovirt-system-tests.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url git://gerrit.ovirt.org/ovirt-system-tests.git # 
 > timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url git://gerrit.ovirt.org/ovirt-system-tests.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
No valid HEAD. Skipping the resetting
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/ovirt-system-tests.git
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/ovirt-system-tests.git refs/heads/master --prune
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
Checking out Revision e266107c53eaa88c58e1609f0679924d56161183 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e266107c53eaa88c58e1609f0679924d56161183
First time build. Skipping changelog.
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/APIV4^{commit} # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
Cloning the remote Git repository
Cloning repository git://gerrit.ovirt.org/jenkins.git
 > git init 
 > 
 >  # timeout=10
Fetching upstream changes from git://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url git://gerrit.ovirt.org/jenkins.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url git://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
No valid HEAD. Skipping the resetting
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from git://gerrit.ovirt.org/jenkins.git
 > git -c core.askpass=true fetch --tags --progress 
 > git://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* 
 > --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 90b9a38a52522f18189f267eba3d88f1a3c6e109 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 90b9a38a52522f18189f267eba3d88f1a3c6e109
First time build. Skipping changelog.
 > git branch -a # timeout=10
 > git rev-parse remotes/origin/master^{commit} # timeout=10
[ovirt_master_system-tests_per_patch] $ /bin/bash -e 
/tmp/hudson8013611588156676195.sh
shell-scripts/cleanup_slave.sh
###
#Cleaning up slave#
###
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  897G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% /boot
tmpfs   4.8G 0  4.8G   0% /run/user/1012
---
Cleaning up postgres databases
Postgres installation not found, skipping
Cleaning up journal logs (if any)
Redirecting to /bin/systemctl restart  systemd-journald.service
Cleaning up /var/tmp
done
Emptying some common logs
/var/log/wtmp
Done
/home/jenkins/workspace/ovirt-engine_master_build-artifacts-fc25-x86_64
Making sure there are no device mappings...
Removing the used loop devices...
Redirecting to /bin/systemctl restart  libvirtd.service
---
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs 24G 0   24G   0% /dev
tmpfs24G 0   24G   0% /dev/shm
tmpfs24G  1.4M   24G   1% /run
tmpfs24G 0   24G   0% /sys/fs/cgroup
/dev/sda3   908G   11G  898G   2% /
tmpfs24G  336K   24G   1% /tmp
/dev/sda1   253M  125M  129M  50% 

Jenkins build is back to normal : ovirt_4.1_system-tests #20

2016-12-27 Thread jenkins
See 



Re: ost host addition failure

2016-12-27 Thread Gil Shinar
After the following fix had been merged, we still had an issue with vm_run,
but that has been fixed as well.
Master experimental is now working properly.

Thanks Dan
Gil

On Tue, Dec 27, 2016 at 10:24 AM, Dan Kenigsberg  wrote:

> On Tue, Dec 27, 2016 at 9:59 AM, Eyal Edri  wrote:
> >
> >
> > On Tue, Dec 27, 2016 at 9:56 AM, Eyal Edri  wrote:
> >>
> >> Any updates?
> >> The tests are still failing because vdsmd won't start, since Sunday... master
> >> repos haven't been refreshed for a few days due to this.
> >>
> >> from host deploy log: [1]
> >> basic-suite-master-engine/_var_log_ovirt-engine/host-
> deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
> >> the job links [2]
> >>
> >>
> >>
> >>
> >>
> >> [1]
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/lastCompletedBuild/artifact/exported-artifacts/
> basic_suite_master.sh-el7/exported-artifacts/test_logs/
> basic-suite-master/post-002_bootstrap.py/lago-
> >
> >
> > Now with the full link:
> > http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/lastCompletedBuild/artifact/exported-artifacts/
> basic_suite_master.sh-el7/exported-artifacts/test_logs/
> basic-suite-master/post-002_bootstrap.py/lago-basic-suite-
> master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-
> deploy-20161227012930-192.168.201.4-14af2bf0.log
> >
> >>
> >>
> >>
> >>
> >> 016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> >> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
> >> 'vdsmd.service') stdout:
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> >> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
> >> 'vdsmd.service') stderr:
> >> A dependency job for vdsmd.service failed. See 'journalctl -xe' for
> >> details.
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142
> method
> >> exception
> >> Traceback (most recent call last):
> >>   File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in
> >> _executeMethod
> >> method['method']()
> >>   File
> >> "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/
> vdsm/packages.py",
> >> line 209, in _start
> >> self.services.state('vdsmd', True)
> >>   File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py",
> >> line 141, in state
> >> service=name,
> >> RuntimeError: Failed to start service 'vdsmd'
> >> 2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151
> Failed
> >> to execute stage 'Closing up': Failed to start service 'vdsmd'
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760
> >> ENVIRONMENT DUMP - BEGIN
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
> >> BASE/error=bool:'True'
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
> >> BASE/excep
> >>
> >> [2]
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/lastCompletedBuild/testReport/
>
>
> In the log I see
>
> Processing package vdsm-4.20.0-7.gitf851d1b.el7.centos.x86_64
>
> which is from Dec 22 (last Thursday). This is because of us missing a
> master-branch tag. v4.20.0 wrongly tagged on the same commit as that
> of v4.19.1, removed, and never placed properly.
>
> I've re-pushed v4.20.0 properly, and now merged a patch to trigger
> build-artifacts in master.
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64/1544/
>
> When this is done, could you use it to take the artifacts and try again?
>
> Regards,
> Dan.
>
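The tag fix Dan describes above can be sketched with plain git commands. This is a rough sketch, not the exact procedure used: the remote-push step is commented out since it depends on gerrit permissions, and the tag name is taken from the thread.

```shell
# Rough sketch of re-placing a misplaced release tag on the right commit.
# retag() only touches the local repo; pushing to gerrit (commented) is
# an assumption and needs the appropriate permissions.

retag() {
    local tag="${1:?}" commit="${2:?}"
    git tag -d "$tag" 2>/dev/null || true    # drop the misplaced tag, if present
    git tag -a "$tag" -m "$tag" "$commit"    # re-create it on the right commit
    # git push origin ":refs/tags/$tag" && git push origin "$tag"
}
```

Once the tag points at the right commit, version-from-tag packaging (as vdsm's build-artifacts does) picks up the intended version again.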


[JIRA] (OVIRT-982) Fwd: http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.1_el7_created/431

2016-12-27 Thread Gil Shinar (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=25014#comment-25014
 ] 

Gil Shinar commented on OVIRT-982:
--

We are currently checking whether moving from the git protocol to http solves
this issue.
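The protocol switch being checked amounts to rewriting the fetch URL. A minimal sketch, assuming gerrit serves the same repository paths over https (which should be verified):

```shell
# Minimal sketch: rewrite a git:// clone URL to https:// to avoid the
# flaky git-daemon "Connection reset by peer" errors seen in the trace.
to_https() {
    local url="${1:?}"
    if [[ "$url" == git://* ]]; then
        printf '%s\n' "https://${url#git://}"
    else
        printf '%s\n' "$url"   # leave non-git:// URLs untouched
    fi
}

# e.g.: to_https git://gerrit.ovirt.org/ovirt-engine.git
```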

> Fwd: 
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.1_el7_created/431
> --
>
> Key: OVIRT-982
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-982
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yevgeny Zaspitsky
>Assignee: infra
>
> Looks like a lot of jobs fail with a similar problem.
> From the job console log:
> git -c core.askpass=true fetch --tags --progress
> git://gerrit.ovirt.org/ovirt-engine.git refs/changes/93/67593/22
> --prune
> ERROR: Error fetching remote repo
> 'origin'hudson.plugins.git.GitException
> :
> Failed to fetch from git://gerrit.ovirt.org/ovirt-engine.git
>   at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
> 
>   at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
> 
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
> 
>   at 
> org.jenkinsci.plugins.multiplescms.MultiSCM.checkout(MultiSCM.java:129)
> 
>   at hudson.scm.SCM.checkout(SCM.java:485)
> 
>   at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
> 
>   at 
> hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
> 
>   at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
> 
>   at 
> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
> 
>   at hudson.model.Run.execute(Run.java:1738)
> 
>   at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> 
>   at hudson.model.ResourceController.execute(ResourceController.java:98)
> 
>   at hudson.model.Executor.run(Executor.java:410)
> 
> Caused by: hudson.plugins.git.GitException
> :
> Command "git -c core.askpass=true fetch --tags --progress
> git://gerrit.ovirt.org/ovirt-engine.git refs/changes/93/67593/22
> --prune" returned status code 128:
> stdout:
> stderr: fatal: read error: Connection reset by peer
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1388)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:152)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:50)
>   at hudson.remoting.Request$2.run(Request.java:332)
>   at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   

[JIRA] (OVIRT-987) Infra issues in Jenkins aren't reported properly on most of Jenkins jobs

2016-12-27 Thread Yevgeny Zaspitsky (oVirt JIRA)
Yevgeny Zaspitsky created OVIRT-987:
---

 Summary: Infra issues in Jenkins aren't reported properly on most 
of Jenkins jobs
 Key: OVIRT-987
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-987
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Yevgeny Zaspitsky
Assignee: infra


The Findbugs jobs report an infra problem to gerrit properly: "There was an
infra issue, please contact infra@ovirt.org"

It'd be good to have the same behavior on the rest of the Jenkins jobs.
Currently there is no differentiation between a failure caused by an infra
issue and a real patch problem when reporting to gerrit.

Regards,
Yevgeny
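The differentiation asked for above could be sketched as a log-classification step before posting the gerrit comment. The marker list below is purely an illustrative assumption, not the real Jenkins configuration:

```shell
# Illustrative sketch only: grep the build log for known infra-error
# markers and pick the gerrit comment accordingly.
classify_failure() {
    local log="${1:?}"
    if grep -qE 'Error fetching remote repo|Connection reset by peer|No space left on device' "$log"
    then
        echo 'There was an infra issue, please contact infra@ovirt.org'
    else
        echo 'Build failed, please check the job logs'
    fi
}
```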





[JIRA] (OVIRT-449) Add junit report to ovirt-engine standard ci

2016-12-27 Thread eyal edri [Administrator] (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri [Administrator] reassigned OVIRT-449:
---

Assignee: eyal edri [Administrator]  (was: infra)

> Add junit report to ovirt-engine standard ci
> 
>
> Key: OVIRT-449
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-449
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: David Caro Estevez
>Assignee: eyal edri [Administrator]
>Priority: Low
>
> Currently there are two ways of doing it:
> * Using a special job template for ovirt-engine (easy to do now, hard to 
> maintain/reuse)
> * Using jenkins pipeline plugin to dynamically add it to any job that 
> generates a report (hard to do right now, easy to maintain/reuse)





[JIRA] (OVIRT-986) remove gerrit hooks from infra repos

2016-12-27 Thread eyal edri [Administrator] (oVirt JIRA)
eyal edri [Administrator] created OVIRT-986:
---

 Summary: remove gerrit hooks from infra repos
 Key: OVIRT-986
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-986
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: eyal edri [Administrator]
Assignee: infra


We don't need to run oVirt gerrit hooks on infra repos like gerrit-admin or 
jenkins since we don't use bugzilla for tracking issues.

We might want to add support for a JIRA ticket referenced in a commit message
for infra repos, but that would require another ticket; for now we just need to
remove the hooks from projects where they are not relevant, like
jenkins, infra-docs, infra-puppet, gerrit-admin and others.
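The removal step could look roughly like the sketch below. This is a hypothetical illustration: the gerrit site layout (`$site/git/<project>.git/hooks`) and the idea of keeping git's `*.sample` files are assumptions to verify against the actual server.

```shell
# Hypothetical cleanup sketch: drop custom hook scripts from the listed
# infra projects' bare repos, keeping git's default *.sample files.
remove_custom_hooks() {
    local site="${1:?}"; shift
    local project hooks_dir
    for project in "$@"; do
        hooks_dir="$site/git/${project}.git/hooks"
        [[ -d "$hooks_dir" ]] || continue
        # delete only custom hook scripts, keep the *.sample placeholders
        find "$hooks_dir" -maxdepth 1 -type f ! -name '*.sample' -delete
    done
}

# e.g.: remove_custom_hooks /srv/gerrit jenkins infra-docs infra-puppet gerrit-admin
```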





[JIRA] (OVIRT-986) remove gerrit hooks from infra repos

2016-12-27 Thread eyal edri [Administrator] (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri [Administrator] updated OVIRT-986:

Epic Link: OVIRT-411

> remove gerrit hooks from infra repos
> 
>
> Key: OVIRT-986
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-986
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: eyal edri [Administrator]
>Assignee: infra
>
> We don't need to run oVirt gerrit hooks on infra repos like gerrit-admin or 
> jenkins since we don't use bugzilla for tracking issues.
> We might want to add support for a JIRA ticket referenced in a commit message
> for infra repos, but that would require another ticket; for now we just need
> to remove the hooks from projects where they are not relevant, like
> jenkins, infra-docs, infra-puppet, gerrit-admin and others.





[oVirt Jenkins] test-repo_ovirt_experimental_master - Build #4448 - SUCCESS!

2016-12-27 Thread jenkins
Build: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4448/,
Build Number: 4448,
Build Status: SUCCESS


[oVirt Jenkins] test-repo_ovirt_experimental_master - Build #4447 - FAILURE!

2016-12-27 Thread jenkins
Build: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4447/,
Build Number: 4447,
Build Status: FAILURE


Re: vdsm ppc64 build fails

2016-12-27 Thread Eyal Edri
Adding Danken,
We currently don't have ppc64le builds for vdsm 4.1 due to this.

On Sun, Dec 25, 2016 at 4:54 PM, Karanbir Singh  wrote:

> Altarch content is released to http://mirror.centos.org/altarch
>
> Regards
>
>
> On 25 December 2016 14:26:25 GMT+00:00, Eyal Edri 
> wrote:
>>
>> Can you help with links to the alterarch so we can use them in the
>> meantime?
>>
>> On Sun, Dec 25, 2016 at 3:50 PM, Karanbir Singh 
>> wrote:
>>
>>> We pushed ppc64le altarch 2 days back. I don't believe the SIG content is
>>> there as yet. Just getting the arch's content lined up into cbs has proven
>>> to be a challenge.
>>>
>>> Regards
>>>
>>>
>>> On 25 December 2016 13:21:27 GMT+00:00, Sandro Bonazzola <
>>> sbona...@redhat.com> wrote:

 Ppc64le is under the altarch tree, not the centos subdirectory. About the virt
 SIG packages, adding Karanbir: afaik both ppc64le and aarch64 are not yet
 pushed to the mirrors.

 Il 25/Dic/2016 01:40 PM, "Eyal Edri"  ha scritto:

> missing ppc64le repo on centos:
>
> DEBUG util.py:421: http://mirror.centos.org/cento
> s/7/virt/ppc64le/ovirt-4.0/repodata/repomd.xml: [Errno 14] HTTP Error
> 404 - Not Found
>
> If there is an alternate repo, please update it on
> automation/.packages files in vdsm repo.
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>

>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
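The repo-URL fix discussed in this thread boils down to arch-dependent base paths. A sketch under stated assumptions: the altarch/centos split is from Karanbir's and Sandro's replies, while the exact `7/virt/<arch>/ovirt-4.0/` path is taken from the 404 above and may need adjusting for what the SIG actually publishes:

```shell
# Sketch: pick the mirror tree by architecture; ppc64le/aarch64 content
# lives under /altarch, x86_64 under /centos (paths are assumptions).
repo_url_for_arch() {
    local arch="${1:?}"
    case "$arch" in
        ppc64le|aarch64) echo "http://mirror.centos.org/altarch/7/virt/${arch}/ovirt-4.0/" ;;
        *)               echo "http://mirror.centos.org/centos/7/virt/${arch}/ovirt-4.0/" ;;
    esac
}
```

Something like this is what the automation/*.packages repo references in vdsm would need to encode per arch.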


[oVirt Jenkins] test-repo_ovirt_experimental_master - Build #4446 - FAILURE!

2016-12-27 Thread jenkins
Build: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4446/,
Build Number: 4446,
Build Status: FAILURE


Build failed in Jenkins: ovirt_4.1_system-tests #19

2016-12-27 Thread jenkins
See 

Changes:

[Yaniv Kaul] Fixes and changes to storage tests

--
[...truncated 890 lines...]
+ [[ -e 

 ]]
+ echo '--- Cleaning with lago'
--- Cleaning with lago
+ lago --workdir 

 destroy --yes --all-prefixes
+ echo '--- Cleaning with lago done'
--- Cleaning with lago done
+ [[ 0 != \0 ]]
+ echo ' Cleanup done'
 Cleanup done
+ exit 0
+ exit
Took 1667 seconds
===
##!
##! ERROR ^^
##!
##
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -xe
echo 'shell_scripts/system_tests.collect_logs.sh'

#
# Required jjb vars:
#version
#
VERSION=4.1
SUITE_TYPE=

WORKSPACE="$PWD"
OVIRT_SUITE="$SUITE_TYPE_suite_$VERSION"
TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"

rm -rf "$WORKSPACE/exported-artifacts"
mkdir -p "$WORKSPACE/exported-artifacts"

if [[ -d "$TESTS_LOGS" ]]; then
mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
fi

[ovirt_4.1_system-tests] $ /bin/bash -xe /tmp/hudson6219167849919206198.sh
+ echo shell_scripts/system_tests.collect_logs.sh
shell_scripts/system_tests.collect_logs.sh
+ VERSION=4.1
+ SUITE_TYPE=
+ WORKSPACE=
+ OVIRT_SUITE=4.1
+ 
TESTS_LOGS=
+ rm -rf 

+ mkdir -p 

+ [[ -d 

 ]]
+ mv 

 

 

 

 

 

 

 

POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -x
echo "shell-scripts/mock_cleanup.sh"
# Make clear this is the cleanup, helps reading the jenkins logs
cat < 0 ]] && sleep "$UMOUNT_RETRY_DELAY"
# Try to umount
sudo umount --lazy "$mount" && return 0
# See if the mount is already not there despite failing
findmnt --kernel --first "$mount" > /dev/null && return 0
done
echo "ERROR:  Failed to umount $mount."
return 1
}

# restore the permissions in the working dir, as sometimes it leaves files
# owned by root and then the 'cleanup workspace' from jenkins job fails to
# clean and breaks the jobs
sudo chown -R "$USER" "$WORKSPACE"

# Archive the logs, we want them anyway
logs=(
./*log
./*/logs
)

if [[ "$logs" ]]; then
for log in "${logs[@]}"; do
[[ "$log" = ./exported-artifacts/* ]] && continue
echo "Copying ${log} to exported-artifacts"
mv $log exported-artifacts/
done
fi

# stop any processes running inside the chroot
failed=false
mock_confs=("$WORKSPACE"/*/mocker*)
# Clean current jobs mockroot if any
for mock_conf_file in "${mock_confs[@]}"; do
[[ "$mock_conf_file" ]] || continue
echo "Cleaning up mock $mock_conf"
mock_root="${mock_conf_file##*/}"
mock_root="${mock_root%.*}"
my_mock="/usr/bin/mock"
my_mock+=" --configdir=${mock_conf_file%/*}"
my_mock+=" --root=${mock_root}"
my_mock+=" --resultdir=$WORKSPACE"

#TODO: investigate why mock --clean fails to umount certain dirs sometimes,
#so we can use it instead of manually doing all this.
echo "Killing all mock orphan processes, if any."
$my_mock \
--orphanskill \
|| {
echo "ERROR:  Failed to kill orphans on 

[JIRA] (OVIRT-984) Add emails to deploy to experimental jobs

2016-12-27 Thread eyal edri [Administrator] (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri [Administrator] reassigned OVIRT-984:
---

Assignee: Pavel Zhukov  (was: infra)

> Add emails to deploy to experimental jobs
> -
>
> Key: OVIRT-984
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-984
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Gil Shinar
>Assignee: Pavel Zhukov
>
> When deploy to experimental fails, we need to alert.





[JIRA] (OVIRT-985) Fix the way repoman deploy to experimental

2016-12-27 Thread eyal edri [Administrator] (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=25003#comment-25003
 ] 

eyal edri [Administrator] commented on OVIRT-985:
-

[~gshinar] please add the relevant failing job (mark it to keep forever) and
paste the exception here.

> Fix the way repoman deploy to experimental
> --
>
> Key: OVIRT-985
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-985
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Gil Shinar
>Assignee: infra
>
> Please consult with Eyal on that.
> No idea what should be done.





[oVirt Jenkins] test-repo_ovirt_experimental_master - Build #4445 - FAILURE!

2016-12-27 Thread jenkins
Build: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4445/,
Build Number: 4445,
Build Status: FAILURE


[JIRA] (OVIRT-985) Fix the way repoman deploy to experimental

2016-12-27 Thread Gil Shinar (oVirt JIRA)
Gil Shinar created OVIRT-985:


 Summary: Fix the way repoman deploy to experimental
 Key: OVIRT-985
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-985
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Gil Shinar
Assignee: infra


Please consult with Eyal on that.
No idea what should be done.





[JIRA] (OVIRT-984) Add emails to deploy to experimental jobs

2016-12-27 Thread Gil Shinar (oVirt JIRA)
Gil Shinar created OVIRT-984:


 Summary: Add emails to deploy to experimental jobs
 Key: OVIRT-984
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-984
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Gil Shinar
Assignee: infra


When deploy to experimental fails, we need to alert.





[JIRA] (OVIRT-983) Improve CI output for Jenkins repo

2016-12-27 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-983:
--

 Summary: Improve CI output for Jenkins repo
 Key: OVIRT-983
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-983
 Project: oVirt - virtualization made easy
  Issue Type: Improvement
  Components: Jenkins
Reporter: Barak Korren
Assignee: infra


Since we've added more CI jobs to the {{jenkins}} repo, the CI output has become 
unclear. When it was one job, it was easy to understand that it failed because of 
YAML issues. Now, when it passes on all platforms but el7, it's easy to mistake 
this for an infra failure on el7 rather than a real problem with the patch.

The following improvements should be made to the job output:
# The YAML failure message should be placed directly in the Gerrit comment; 
this can be done with the "Unsuccessful Message File" feature of the Gerrit 
plugin.
# When the YAML check succeeds, there should be a message in Gerrit saying so, 
indicating where the DIFF output can be seen (will "Unsuccessful Message File" 
also work on success?)
# The {{differences.html}} file should probably be called {{index.html}} to get 
it placed in the job status page.
# Maybe add some javascript to {{differences.html}} to allow collapsing 
individual differences.
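Items 1 and 3 above could be wired up in the job's shell step roughly as follows (the file and directory names are assumptions for illustration, not the actual job code):

```shell
# Sketch: write the Gerrit "Unsuccessful Message File" and publish the
# diff report as index.html so Jenkins serves it from the job status page.
# Paths and filenames below are assumed, not taken from the real job.
set -eu
artifacts="exported-artifacts"
mkdir -p "$artifacts"
# Item 1: short failure text the Gerrit trigger plugin posts as the comment.
printf 'YAML check failed; full diff: %s\n' "$artifacts/index.html" \
    > gerrit_failure_message.txt
# Item 3: rename the diff report so it becomes the default artifact page.
echo '<html><body>diff output goes here</body></html>' \
    > "$artifacts/differences.html"
mv "$artifacts/differences.html" "$artifacts/index.html"
```

The Gerrit plugin's "Unsuccessful Message File" option would then be pointed at `gerrit_failure_message.txt`.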





[JIRA] (OVIRT-983) Improve CI output for Jenkins repo

2016-12-27 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-983:
---
Epic Link: OVIRT-400

> Improve CI output for Jenkins repo
> --
>
> Key: OVIRT-983
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-983
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: Jenkins
>Reporter: Barak Korren
>Assignee: infra
>
> Since we've added more CI jobs to the {{jenkins}} repo, the CI output has 
> become unclear. When it was one job, it was easy to understand that it failed 
> because of YAML issues. Now, when it passes on all platforms but el7, it's 
> easy to mistake this for an infra failure on el7 rather than a real problem 
> with the patch.
> The following improvements should be made to the job output:
> # The YAML failure message should be placed directly in the Gerrit comment; 
> this can be done with the "Unsuccessful Message File" feature of the Gerrit 
> plugin.
> # When the YAML check succeeds, there should be a message in Gerrit saying 
> so, indicating where the DIFF output can be seen (will "Unsuccessful Message 
> File" also work on success?)
> # The {{differences.html}} file should probably be called {{index.html}} to 
> get it placed in the job status page.
> # Maybe add some javascript to {{differences.html}} to allow collapsing 
> individual differences.





Re: ost host addition failure

2016-12-27 Thread Dan Kenigsberg
On Tue, Dec 27, 2016 at 9:59 AM, Eyal Edri  wrote:
>
>
> On Tue, Dec 27, 2016 at 9:56 AM, Eyal Edri  wrote:
>>
>> Any updates?
>> The tests have been failing since Sunday because vdsmd won't start; the
>> master repos haven't been refreshed for a few days due to this.
>>
>> from host deploy log: [1]
>> basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
>> the job links [2]
>>
>>
>>
>>
>>
>> [1]
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-
>
>
> Now with the full link:
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
>
>>
>>
>>
>>
>> 016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stdout:
>>
>> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stderr:
>> A dependency job for vdsmd.service failed. See 'journalctl -xe' for
>> details.
>>
>> 2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142 method
>> exception
>> Traceback (most recent call last):
>>   File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in
>> _executeMethod
>> method['method']()
>>   File
>> "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
>> line 209, in _start
>> self.services.state('vdsmd', True)
>>   File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py",
>> line 141, in state
>> service=name,
>> RuntimeError: Failed to start service 'vdsmd'
>> 2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151 Failed
>> to execute stage 'Closing up': Failed to start service 'vdsmd'
>> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760
>> ENVIRONMENT DUMP - BEGIN
>> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
>> BASE/error=bool:'True'
>> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
>> BASE/excep
>>
>> [2]
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/testReport/
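The "A dependency job for vdsmd.service failed" line in the quoted log means a unit that vdsmd depends on failed to start, not vdsmd itself. A minimal triage sketch using standard systemd tooling (run on the affected host; guarded so it degrades gracefully on machines without systemd):

```shell
# Sketch: find which dependency of vdsmd.service actually failed.
# Standard systemctl/journalctl invocations; run on the affected host.
unit=vdsmd.service
if command -v systemctl >/dev/null 2>&1; then
    systemctl --failed --no-pager || true            # any failed units at all?
    systemctl list-dependencies "$unit" --no-pager || true
    journalctl -xe -u "$unit" --no-pager | tail -n 40 || true
else
    echo "systemd tooling not available here; run this on the host itself"
fi
```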


In the log I see

Processing package vdsm-4.20.0-7.gitf851d1b.el7.centos.x86_64

which is from Dec 22 (last Thursday). This is because we are missing a
master-branch tag: v4.20.0 was wrongly tagged on the same commit as v4.19.1,
then removed, and never placed properly.

I've re-pushed v4.20.0 properly, and now merged a patch to trigger
build-artifacts in master.
http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64/1544/

When this is done, could you use it to take the artifacts and try again?

Regards,
Dan.
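The effect of the missing tag is easy to reproduce: `git describe`, which packaging typically uses to derive the version string, names the nearest reachable tag, so until v4.20.0 was re-pushed, builds kept reporting v4.19.x-based versions. A throwaway-repo sketch (no real vdsm history involved):

```shell
# Demonstrate how a missing tag keeps version strings stuck on the old tag.
# Builds a throwaway repo; requires only git itself.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
git tag v4.19.1
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second"
git describe --tags     # still derived from v4.19.1 (v4.19.1-1-g<sha>)
git tag v4.20.0         # the re-pushed tag lands on the new commit
git describe --tags     # now reports v4.20.0
```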


Re: ost host addition failure

2016-12-27 Thread Eyal Edri
On Tue, Dec 27, 2016 at 9:56 AM, Eyal Edri  wrote:

> Any updates?
> The tests have been failing since Sunday because vdsmd won't start; the
> master repos haven't been refreshed for a few days due to this.
>
> from host deploy log: [1] 
> basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
> the job links [2]
>
>
>
>
>
> [1] 
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-
>

Now with the full link:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log


>
>
>
> 016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
> 'vdsmd.service') stdout:
>
> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd 
> plugin.execute:926 execute-output: ('/bin/systemctl', 'start', 
> 'vdsmd.service') stderr:
> A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
>
> 2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142 method 
> exception
> Traceback (most recent call last):
>   File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in 
> _executeMethod
> method['method']()
>   File 
> "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/vdsm/packages.py", 
> line 209, in _start
> self.services.state('vdsmd', True)
>   File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py", line 
> 141, in state
> service=name,
> RuntimeError: Failed to start service 'vdsmd'
> 2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151 Failed to 
> execute stage 'Closing up': Failed to start service 'vdsmd'
> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760 
> ENVIRONMENT DUMP - BEGIN
> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV 
> BASE/error=bool:'True'
> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV 
> BASE/excep
>
> [2] 
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/testReport/
>
>
>
>
> On Sun, Dec 25, 2016 at 11:31 AM, Eyal Edri  wrote:
>
>> We should see it fixed here hopefully [1]
>>
>>
>> [1] 
>> http://jenkins.ovirt.org/view/All%20Running%20jobs/job/test-repo_ovirt_experimental_master/4412/console
>>
>> On Sun, Dec 25, 2016 at 11:19 AM, Dan Kenigsberg 
>> wrote:
>>
>>> On Sun, Dec 25, 2016 at 10:28 AM, Yaniv Kaul  wrote:
>>> >
>>> >
>>> > On Sun, Dec 25, 2016 at 9:47 AM, Dan Kenigsberg 
>>> wrote:
>>> >>
>>> >> Correct. https://gerrit.ovirt.org/#/c/69052/
>>> >>
>>> >> Can you try adding
>>> >> lago shell "$vm_name" -c "mkdir -p /var/log/ovirt-imageio-daemon/ &&
>>> >> chown vdsm:kvm /var/log/ovirt-imageio-daemon/"
>>> >
>>> >
>>> > How will it know what is the vdsm user before installing vdsm?
>>>
>>> You're right. a hack would have to `chmod a+rwx
>>> /var/log/ovirt-imageio-daemon/` instead.
>>>
>>> > Why not either:
>>> > 1. Fix it
>>>
>>> yes, that's why we've opened
>>> https://bugzilla.redhat.com/show_bug.cgi?id=143 ; now a fix is
>>> getting merged. I don't know when it is going to be ready in lago's
>>> repos.
>>>
>>> > -or-
>>> > 2. Revert the offending patch?
>>>
>>> I'm not aware of such a patch. It's a race that has been there forever,
>>> and I don't know why it suddenly pops up so often.
>>>
>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


Re: [VDSM] Testing vdsm master on Fedora 23?

2016-12-27 Thread Dan Kenigsberg
On Sun, Dec 25, 2016 at 11:57 AM, Eyal Edri  wrote:
> It should be removed from the standard yml file for vdsm on Jenkins repo.

Do you refer to jobs/confs/projects/vdsm/vdsm_standard.yaml ?

> Any maintainer can send a patch to exclude it; if help is needed, someone
> from infra can guide you to the right file.
>
> On Thu, Dec 22, 2016 at 5:20 PM, Nir Soffer  wrote:
>>
>> Hi all,
>>
>> For some reason we have a new job testing vdsm master/4.1 on Fedora 23.
>> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc23-x86_64/2/
>>
>> Fedora 23 has not been supported on vdsm master for a long time. Please
>> remove this job.

On master and 4.1 we should happily skip to f25.