[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1651 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1651/
Build Number: 1651
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51727

-
Changes Since Last Success:
-
Changes for Build #1647
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1648
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1649
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1650
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1651
[Maor Lipchuk] core: delete Cinder snapshot failover.




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Logwatch for linode01.ovirt.org (Linux)

2016-01-13 Thread logwatch

 ### Logwatch 7.3.6 (05/19/07)  
Processing Initiated: Wed Jan 13 03:19:19 2016
Date Range Processed: yesterday
  ( 2016-Jan-12 )
  Period is day.
  Detail Level of Output: 0
  Type of Output: unformatted
   Logfiles for Host: linode01.ovirt.org
  ## 
 
 - httpd Begin  

 Requests with error response codes
400 Bad Request
   /index.php?option=com_jce&task=plugin&plug ... 86d0dd595c8e20b: 2 Time(s)
   /pipermail/infra/2015-February/009227.html ... 86d0dd595c8e20b: 2 Time(s)
   /pipermail/infra/2015-March/009296.html&am ... 86d0dd595c8e20b: 2 Time(s)
404 Not Found
   /: 1 Time(s)
   //index.php?option=com_jdownloads&Itemid=0&view=upload: 5 Time(s)
   /__%2A%2Amailman/listinfo/users%3Chttp%3A/ ... /listinfo/users: 1 Time(s)
   /mailman/listinfo/users: 1 Time(s)
   /__mailman/listinfo/users: 1 Time(s)
   /admin.php: 5 Time(s)
   /admin/: 6 Time(s)
   /admin/board: 1 Time(s)
   /admin/login.php: 6 Time(s)
   /adminaccess/welcome.aspx: 1 Time(s)
   /administrator/components/com_civicrm/civi ... pload_image.php: 1 Time(s)
   /administrator/components/com_extplorer/uploadhandler.php: 1 Time(s)
   /administrator/components/com_maianmedia/u ... pload_image.php: 1 Time(s)
   /administrator/components/com_rokdownloads ... loadhandler.php: 1 Time(s)
   /administrator/components/com_simplephotog ... /uploadFile.php: 1 Time(s)
   /administrator/index.php: 6 Time(s)
   /bitrix/admin/index.php?lang=en: 6 Time(s)
   /blog/: 1 Time(s)
   /blog/robots.txt: 1 Time(s)
   /blog/wp-admin/: 5 Time(s)
   /blog/wp-login.php: 1 Time(s)
   /board: 2 Time(s)
   /category/news/feed: 1 Time(s)
   /category/news/feed/: 12 Time(s)
   /comment/class: 1 Time(s)
   /components/com_agileplmform/views/agilepl ... s/uploadify.php: 1 Time(s)
   /components/com_creativecontactform/fileupload/index.php: 1 Time(s)
   /components/com_joomleague/assets/classes/ ... pload_image.php: 1 Time(s)
   /components/com_joomsport/includes/imgres.php: 1 Time(s)
   /components/com_pinboard/popup/popup.php?option=showupload: 1 Time(s)
   /favicon.ico: 1191 Time(s)
   /forum/wp-login.php: 1 Time(s)
   /index.php?option=com_adsmanager&task=upload&tmpl=component: 1 Time(s)
   /index.php?option=com_easyblog&view=dashboard&layout=write: 1 Time(s)
   /index.php?option=com_jce&task=plugin&plug ... ion=1576&cid=20: 2 Time(s)
   /index.php?option=com_myblog&task=ajaxupload: 1 Time(s)
   /index.php?option=com_simpleimageupload&vi ... ent&e_name=desc: 1 Time(s)
   /index.php?option=com_users&view=registration: 2 Time(s)
   /index.php?route=product/product/upload: 1 Time(s)
   /listinfo/board: 1 Time(s)
   /mailman/xxx: 1 Time(s)
   /media/uploadify/uploadify.php: 1 Time(s)
   /mobile/pipermail/users/2015-January/030762.html: 1 Time(s)
   /modules/mod_artuploader/upload.php: 1 Time(s)
   /modules/pm_advancedsearch4/js/uploadify/uploadify.php: 1 Time(s)
   /news-and-events/workshop-1-to-3-november-2011: 1 Time(s)
   /old/wp-admin/: 5 Time(s)
   /phpmyadmin/scripts/setup.php: 1 Time(s)
   /pipermail/arch/2011-December/000129.html/trackback/: 1 Time(s)
   /pipermail/engine-devel/2011-November/000166.html/trackback/: 1 Time(s)
   /pipermail/engine-devel/2012-October/002636.html/trackback/: 1 Time(s)
   /pipermail/engine-patches/2012-April/011975.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/012815.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/013043.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/013069.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/013517.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/014346.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/014989.html: 1 Time(s)
   /pipermail/engine-patches/2012-April/015054.html: 1 Time(s)
   /pipermail/engine-patches/2012-August/029206.html: 1 Time(s)
   /pipermail/engine-patches/2012-August/029241.html: 1 Time(s)
   /pipermail/engine-patches/2012-August/032130.html: 1 Time(s)
   /pipermail/engine-patches/2012-December/043944.html: 1 Time(s)
   /pipermail/engine-patches/2012-December/044011.html: 1 Time(s)
   /pipermail/engine-patches/2012-December/044130.html: 1 Time(s)
   /pipermail/engine-patches/2012-December/044352.html: 1 Time(s)
   /pipermail/engine-patches/2012-December/044528.html: 1 Time(s)
   /pipermail/engine-patches/2012-December/047492.html: 1 Time(s)
   /pipermail/engine-patches/2012-February/005905.html: 1 Time(s)
   /pipermail/engine-patches/2012-February/006479.html: 1 Time(s)
   /pipermail/engine-patches/2012-February

[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 1742 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/1742/
Build Number: 1742
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51761

-
Changes Since Last Success:
-
Changes for Build #1737
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1738
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1739
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1740
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1741
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1742
[Alona Kaplan] engine: fix find bug error




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1652 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1652/
Build Number: 1652
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51761

-
Changes Since Last Success:
-
Changes for Build #1647
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1648
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1649
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1650
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1651
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1652
[Alona Kaplan] engine: fix find bug error




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: removing 3.6.2 jobs

2016-01-13 Thread Eyal Edri
My question was why we need the build-artifacts jobs, for example.
I'm OK with the check-patch jobs that verify compilation, but are
we actually using the 3.6.2 build-artifacts jobs?

e.

On Wed, Jan 13, 2016 at 9:59 AM, Sandro Bonazzola 
wrote:

>
>
> On Wed, Jan 13, 2016 at 8:22 AM, Eyal Edri  wrote:
>
>> I can't recall which 3.6.2 jobs we said we'll keep and which we'll drop.
>> IIRC, we don't need the build artifacts jobs [1]
>>
>> Tal, you mentioned there were some jobs you do want to see running on the
>> version branch,
>> Do you recall which?
>>
>> A reminder that running these temporary jobs is an overhead, since they are
>> short-lived and are a subset of the 3.6 jobs, so the chance of hitting something
>> there that didn't fail on 3.6 is slim.
>>
>> [1]
>> http://jenkins.ovirt.org/job/ovirt-engine_3.6.2_build-artifacts-fc23-x86_64/
>>
>>
> 3.6.2 jobs are needed only until Jan 26th when we'll release 3.6.2 GA.
> Then we can get rid of all of them.
>
>
>
>
>> --
>> Eyal Edri
>> Associate Manager
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Eyal Edri
Associate Manager
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 1743 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/1743/
Build Number: 1743
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51627

-
Changes Since Last Success:
-
Changes for Build #1737
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1738
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1739
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1740
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1741
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1742
[Alona Kaplan] engine: fix find bug error


Changes for Build #1743
[Arik Hadas] core: fetch vm statistics from vm analyzers




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 770 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/770/
Build Number: 770
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51660

-
Changes Since Last Success:
-
Changes for Build #770
[Milan Zamazal] core: Video RAM size settings reworked




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1653 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1653/
Build Number: 1653
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51627

-
Changes Since Last Success:
-
Changes for Build #1647
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1648
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1649
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1650
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1651
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1652
[Alona Kaplan] engine: fix find bug error


Changes for Build #1653
[Arik Hadas] core: fetch vm statistics from vm analyzers




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Fwd: Request to mirror the open source project oVirt.org

2016-01-13 Thread Nadav Goldin
Great :)
I'll track it for a few days to see it's all good, and update our wiki
pages/mirror list afterwards.

thanks,

Nadav.
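
For reference, a minimal sketch of the compression-free pull suggested in the
quoted thread below (the mirror user and source path are taken from the rsync
log further down; the cron schedule is only an illustration of the daily
update Lior mentions):

# dropping -z avoids the "inflate returned -3" zlib failure seen with the
# compressed stream
rsync -e /usr/bin/ssh -rltHvvP \
    mirror@resources.ovirt.org:/var/www/html/pub mirror/ovirt/

# a daily update could then be a cron entry such as:
# 0 4 * * * rsync -e /usr/bin/ssh -rltHP mirror@resources.ovirt.org:/var/www/html/pub mirror/ovirt/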

On Tue, Jan 12, 2016 at 11:04 AM, Lior Kaplan  wrote:

> Bingo (:
>
> http://mirror.isoc.org.il/pub/ovirt/
>
>
> I'll set a daily update.
>
> Kaplan
>
> On Mon, Jan 11, 2016 at 4:13 PM, Nadav Goldin  wrote:
>
>> Hey Lior, sorry sent this to the wrong address:
>> can you please try again without the z flag:
>> rsync -rltHvvP mir...@resources.ovirt.org:/var/www/html mirror/ovirt
>>
>>
>> On Mon, Jan 11, 2016 at 4:09 PM, Lior Kaplan 
>> wrote:
>>
>>> Hi Guys,
>>>
>>> Any news?
>>>
>>> On Wed, Jan 6, 2016 at 3:53 PM, Lior Kaplan 
>>> wrote:
>>>
 $ rsync -e /usr/bin/ssh -rltHvvzP   
 mir...@resources.ovirt.org:/var/www/html/pub
 mirror/ovirt/
 opening connection using: /usr/bin/ssh -l mirror resources.ovirt.org
 rsync --server --sender -vvlHtrze.iLsf . /var/www/html/pub
 receiving file list ...
 [Receiver] expand file_list pointer array to 262144 bytes, did move
 64162 files to consider
 delta-transmission enabled
 ovirt-node-base-stable is uptodate
 ovirt-3.3/rpm/el6Server is uptodate
 ovirt-3.4/rpm/el6Server is uptodate
 ovirt-3.4/rpm/el7Server is uptodate
 ovirt-3.5-snapshot-static/rpm/el6.6 is uptodate
 ovirt-3.5-snapshot-static/rpm/el6Server is uptodate
 ovirt-3.5-snapshot-static/rpm/el6Workstation is uptodate
 ovirt-3.5-snapshot-static/rpm/el7Server is uptodate
 ovirt-3.5-snapshot-static/rpm/el7Workstation is uptodate
 ovirt-3.5-snapshot/rpm/el6.6 is uptodate
 ovirt-3.5-snapshot/rpm/el6Server is uptodate
 ovirt-3.5-snapshot/rpm/el6Workstation is uptodate
 ovirt-3.5-snapshot/rpm/el7Server is uptodate
 ovirt-3.5-snapshot/rpm/el7Workstation is uptodate
 ovirt-3.5/rpm/el6.6 is uptodate
 ovirt-3.5/rpm/el6Server is uptodate
 ovirt-3.5/rpm/el6Workstation is uptodate
 ovirt-3.5/rpm/el7Server is uptodate
 ovirt-3.5/rpm/el7Workstation is uptodate
 ./
 ovirt-3.6-pre/rpm/el6.7 is uptodate
 ovirt-3.6-pre/rpm/el6Server is uptodate
 ovirt-3.6-pre/rpm/el6Workstation is uptodate
 ovirt-3.6-pre/rpm/el7Server is uptodate
 ovirt-3.6-pre/rpm/el7Workstation is uptodate
 keys/
 keys/RPM-GPG-ovirt
0   0%0.00kB/s0:00:00
 ovirt-3.6-snapshot-static/rpm/el6.6 is uptodate
 ovirt-3.6-snapshot-static/rpm/el6Server is uptodate
 ovirt-3.6-snapshot-static/rpm/el6Workstation is uptodate
 ovirt-3.6-snapshot-static/rpm/el7Server is uptodate
 ovirt-3.6-snapshot-static/rpm/el7Workstation is uptodate

 inflate returned -3 (0 bytes)
 rsync error: error in rsync protocol data stream (code 12) at
 token.c(548) [receiver=3.0.9]
 rsync: connection unexpectedly closed (3651558 bytes received so far)
 [generator]
 rsync error: error in rsync protocol data stream (code 12) at io.c(605)
 [generator=3.0.9]


 On Wed, Jan 6, 2016 at 3:44 PM, Sagi Shnaidman 
 wrote:

> Hi,
>
> try please adding  "-e /usr/bin/ssh" to rsync options.
> rsync -e /usr/bin/ssh -rltHvvzP  ...
>
> "/usr/bin/ssh" should be your SSH path.
>
> tell me please if it succeeds.
>
> thanks
>
> On 01/06/2016 11:33 AM, Lior Kaplan wrote:
>
> *rsync error: error in rsync protocol data stream*
>
>
>

>>>
>>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-375) add gamification to oVirt infra

2016-01-13 Thread eyal edri [Administrator] (oVirt JIRA)
eyal edri [Administrator] created an issue

oVirt - virtualization made easy / OVIRT-375
add gamification to oVirt infra

Issue Type: Improvement
Assignee: infra
Components: General
Created: 13/Jan/16 12:17 PM
Priority: Medium
Reporter: eyal edri [Administrator]

Think on options to add gamification options to oVirt and which services can
be augmented. The following site might be interesting to use to get a working
service:

https://ovirt.getbadges.io/manager/project/677/app/new


Random jenkins failures

2016-01-13 Thread Vinzenz Feenstra
I have just submitted a set of 4 patches, and the unit tests for 1 patch failed with
the text pasted below. Those patches are absolutely unrelated to these failures.

Please check into those issues - Thanks 

http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2168/console


12:15:21 
12:15:21 ==
12:15:21 ERROR: testLoopMount (mountTests.MountTests)
12:15:21 --
12:15:21 Traceback (most recent call last):
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
 line 128, in testLoopMount
12:15:21 m.mount(mntOpts="loop")
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 225, in mount
12:15:21 return self._runcmd(cmd, timeout)
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 241, in _runcmd
12:15:21 raise MountError(rc, ";".join((out, err)))
12:15:21 MountError: (32, ';mount: /tmp/tmpsDTh9u: failed to setup loop device: 
No such file or directory\n')
12:15:21  >> begin captured logging << 
12:15:21 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/sbin/mkfs.ext2 -F /tmp/tmpsDTh9u (cwd None)
12:15:21 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 
(17-May-2015)\n';  = 0
12:15:21 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/usr/bin/mount -o loop /tmp/tmpsDTh9u /var/tmp/tmpTPj7t6 (cwd None)
12:15:21 - >> end captured logging << -
12:15:21 
12:15:21 ==
12:15:21 ERROR: testSymlinkMount (mountTests.MountTests)
12:15:21 --
12:15:21 Traceback (most recent call last):
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
 line 150, in testSymlinkMount
12:15:21 m.mount(mntOpts="loop")
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 225, in mount
12:15:21 return self._runcmd(cmd, timeout)
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 241, in _runcmd
12:15:21 raise MountError(rc, ";".join((out, err)))
12:15:21 MountError: (32, ';mount: /var/tmp/tmpBpSrGA/backing.img: failed to 
setup loop device: No such file or directory\n')
12:15:21  >> begin captured logging << 
12:15:21 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/sbin/mkfs.ext2 -F /var/tmp/tmpBpSrGA/backing.img (cwd None)
12:15:21 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 
(17-May-2015)\n';  = 0
12:15:21 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/usr/bin/mount -o loop /var/tmp/tmpBpSrGA/link_to_image 
/var/tmp/tmpBpSrGA/mountpoint (cwd None)
12:15:21 - >> end captured logging << -
12:15:21 
12:15:21 ==
12:15:21 ERROR: test_getDevicePartedInfo (parted_utils_tests.PartedUtilsTests)
12:15:21 --
12:15:21 Traceback (most recent call last):
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/testValidation.py",
 line 97, in wrapper
12:15:21 return f(*args, **kwargs)
12:15:21   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/parted_utils_tests.py",
 line 61, in setUp
12:15:21 self.assertEquals(rc, 0)
12:15:21 AssertionError: 1 != 0
12:15:21  >> begin captured logging << 
12:15:21 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 dd if=/dev/zero 
of=/tmp/tmpNOvAvX bs=100M count=1 (cwd None)
12:15:21 root: DEBUG: SUCCESS:  = '1+0 records in\n1+0 records 
out\n104857600 bytes (105 MB) copied, 0.373591 s, 281 MB/s\n';  = 0
12:15:21 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 losetup -f --show 
/tmp/tmpNOvAvX (cwd None)
12:15:21 root: DEBUG: FAILED:  = 'losetup: /tmp/tmpNOvAvX: failed to set 
up loop device: No such file or directory\n';  = 1
12:15:21 - >> end captured logging << -
12:15:21

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 771 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/771/
Build Number: 771
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51723

-
Changes Since Last Success:
-
Changes for Build #770
[Milan Zamazal] core: Video RAM size settings reworked


Changes for Build #771
[Maor Lipchuk] core: Support revert of new Cinder snapshot.




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 1745 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/1745/
Build Number: 1745
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51628

-
Changes Since Last Success:
-


-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 1744 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/1744/
Build Number: 1744
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51714

-
Changes Since Last Success:
-
Changes for Build #1737
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1738
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1739
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1740
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1741
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1742
[Alona Kaplan] engine: fix find bug error


Changes for Build #1743
[Arik Hadas] core: fetch vm statistics from vm analyzers


Changes for Build #1744
[Jakub Niedermertl] core: Losing graphical protocol fix




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1654 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1654/
Build Number: 1654
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51714

-
Changes Since Last Success:
-
Changes for Build #1647
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1648
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1649
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1650
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1651
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1652
[Alona Kaplan] engine: fix find bug error


Changes for Build #1653
[Arik Hadas] core: fetch vm statistics from vm analyzers


Changes for Build #1654
[Jakub Niedermertl] core: Losing graphical protocol fix




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-375) add gamification to oVirt infra

2016-01-13 Thread bkorren (oVirt JIRA)
bkorren commented on OVIRT-375

Re: add gamification to oVirt infra

Not sure I understand the idea; what do you want to game-ify exactly?

This message was sent by Atlassian JIRA (v7.1.0-OD-04-012#71001-sha1:dd0493d)

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Random jenkins failures

2016-01-13 Thread Vinzenz Feenstra

> On Jan 13, 2016, at 1:20 PM, Vinzenz Feenstra  wrote:
> 
> I have just submitted a set of 4 patches where 1 patch unit tests failed with 
> the pasted text below. Those patches are absolutely unrelated to those 
> failures.
> 
> Please check into those issues - Thanks 

It happened again 
http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2186/console

13:06:47 ==
13:06:47 ERROR: testLoopMount (mountTests.MountTests)
13:06:47 --
13:06:47 Traceback (most recent call last):
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
 line 128, in testLoopMount
13:06:47 m.mount(mntOpts="loop")
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 225, in mount
13:06:47 return self._runcmd(cmd, timeout)
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 241, in _runcmd
13:06:47 raise MountError(rc, ";".join((out, err)))
13:06:47 MountError: (32, ';mount: /tmp/tmpl2jG_h: failed to setup loop device: 
No such file or directory\n')
13:06:47  >> begin captured logging << 
13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/sbin/mkfs.ext2 -F /tmp/tmpl2jG_h (cwd None)
13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 
(17-May-2015)\n';  = 0
13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/usr/bin/mount -o loop /tmp/tmpl2jG_h /var/tmp/tmpRslb5M (cwd None)
13:06:47 - >> end captured logging << -
13:06:47 
13:06:47 ==
13:06:47 ERROR: testSymlinkMount (mountTests.MountTests)
13:06:47 --
13:06:47 Traceback (most recent call last):
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
 line 150, in testSymlinkMount
13:06:47 m.mount(mntOpts="loop")
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 225, in mount
13:06:47 return self._runcmd(cmd, timeout)
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
 line 241, in _runcmd
13:06:47 raise MountError(rc, ";".join((out, err)))
13:06:47 MountError: (32, ';mount: /var/tmp/tmpTeUZUl/backing.img: failed to 
setup loop device: No such file or directory\n')
13:06:47  >> begin captured logging << 
13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/sbin/mkfs.ext2 -F /var/tmp/tmpTeUZUl/backing.img (cwd None)
13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 
(17-May-2015)\n';  = 0
13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 
/usr/bin/mount -o loop /var/tmp/tmpTeUZUl/link_to_image 
/var/tmp/tmpTeUZUl/mountpoint (cwd None)
13:06:47 - >> end captured logging << -
13:06:47 
13:06:47 ==
13:06:47 ERROR: test_getDevicePartedInfo (parted_utils_tests.PartedUtilsTests)
13:06:47 --
13:06:47 Traceback (most recent call last):
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/testValidation.py",
 line 97, in wrapper
13:06:47 return f(*args, **kwargs)
13:06:47   File 
"/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/parted_utils_tests.py",
 line 61, in setUp
13:06:47 self.assertEquals(rc, 0)
13:06:47 AssertionError: 1 != 0
13:06:47  >> begin captured logging << 
13:06:47 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 dd if=/dev/zero 
of=/tmp/tmp7dS7VS bs=100M count=1 (cwd None)
13:06:47 root: DEBUG: SUCCESS:  = '1+0 records in\n1+0 records 
out\n104857600 bytes (105 MB) copied, 0.350029 s, 300 MB/s\n';  = 0
13:06:47 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 losetup -f --show 
/tmp/tmp7dS7VS (cwd None)
13:06:47 root: DEBUG: FAILED:  = 'losetup: /tmp/tmp7dS7VS: failed to set 
up loop device: No such file or directory\n';  = 1


> 
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2168/console 
> 
> 
> 
> [truncated quote of the traceback from the first message snipped]

Re: Random jenkins failures

2016-01-13 Thread Eyal Edri
Looks like loop device issues.
Nir - didn't you say you have a patch to fix this?

In any case I think rebooting the slave fixes this.

E.

On Wed, Jan 13, 2016 at 3:08 PM, Vinzenz Feenstra 
wrote:

>
> On Jan 13, 2016, at 1:20 PM, Vinzenz Feenstra  wrote:
>
> I have just submitted a set of 4 patches where 1 patch unit tests failed with 
> the pasted text below. Those patches are absolutely unrelated to those 
> failures.
>
>
> Please check into those issues - Thanks
>
>
> It happened again
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2186/console
>
> [quoted mountTests and parted_utils_tests tracebacks snipped; they repeat
> the console output pasted in the message above]
>
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2168/console
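
For anyone hitting this on a slave, a quick diagnostic sketch using standard
util-linux tools (the max_loop value is just an example, and whether the loop
driver is built as a module on the fc23 slaves is an assumption):

# list currently attached loop devices and their backing files
losetup -a

# detach loop devices that are no longer needed (run as root); this often
# avoids a full slave reboot
losetup -D

# if /dev/loop* nodes are missing entirely and loop is built as a module,
# reload it with a higher device count
modprobe -r loop && modprobe loop max_loop=64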

DB upgrade jobs are not using engine-setup

2016-01-13 Thread Eli Mesika
Hi

Currently the DB upgrade jobs use the schema.sh script and not engine-setup.
This may cause CI upgrade tests to pass while they actually fail when run
through engine-setup.
Since we use engine-setup to upgrade the database, CI tests must use the exact
same method in order to be reliable.
This week I hit 2 different cases where my patches passed CI but actually
failed in engine-setup, causing a loss of time for me and for others who had
already rebased on a problematic patch.


Thanks
Eli Mesika

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1655 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1655/
Build Number: 1655
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51628

-
Changes Since Last Success:
-
Changes for Build #1647
[Eli Mesika] core: rename vds group to cluster


Changes for Build #1648
[Eli Mesika] core: stop upgrade when shell script fail


Changes for Build #1649
[Roy Golan] core: storage: make storagServerConnectin compensatable


Changes for Build #1650
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #1651
[Maor Lipchuk] core: delete Cinder snapshot failover.


Changes for Build #1652
[Alona Kaplan] engine: fix find bug error


Changes for Build #1653
[Arik Hadas] core: fetch vm statistics from vm analyzers


Changes for Build #1654
[Jakub Niedermertl] core: Losing graphical protocol fix


Changes for Build #1655
[Arik Hadas] core: fetch vm nic statistics from vm analyzers




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: DB upgrade jobs are not using engine-setup

2016-01-13 Thread Eyal Edri
Hi,

The scripts used to run the job can be found here [1].
Specifically:

ovirt-engine_upgrade-db.cleanup.sh
ovirt-engine_upgrade-db.sh

If you can submit a patch to replace the logic to use engine-setup instead,
the jobs will update automatically via yaml to use the new logic.
infra team can help with verification if needed.

Eyal.

[1]
https://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;f=jobs/confs/shell-scripts;h=a919553fd30c2da2c145ed70f25d1c3000144501;hb=refs/heads/master
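
As a rough illustration only (not the actual job scripts), an
engine-setup-driven DB upgrade step could look something like the following;
the answer-file path is an assumption:

# upgrade the setup packages, then let engine-setup run the DB upgrade the
# same way users do
yum -y update 'ovirt-engine-setup*'
engine-setup --offline --accept-defaults \
    --config-append=/root/upgrade-answers.conf || exit 1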

On Wed, Jan 13, 2016 at 4:36 PM, Eli Mesika  wrote:

> Hi
>
> Currently DB upgrade jobs are using schema.sh script and not engine-setup
> This may cause CI upgrade tests to be successful while they are actually
> failed when running from engine-setup
> Since we are using engine-setup to upgrade the database, CI tests must use
> the same exact method in order to be reliable
> I had faced this week 2 different cases when my patches passed CI but
> failed actually engine-setup causing a loose of time to me and to other
> that already rebased on a problematic patch.
>
>
> Thanks
> Eli Mesika
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
>


-- 
Eyal Edri
Associate Manager
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: DB upgrade jobs are not using engine-setup

2016-01-13 Thread Eyal Edri
Also replacing support-infra with infra-support, which is the right email for
opening a ticket.

e.

On Wed, Jan 13, 2016 at 4:49 PM, Eyal Edri  wrote:

> Hi,
>
> The scripts used to run the job can be found here [1].
> Specifically:
>
> ovirt-engine_upgrade-db.cleanup.sh
> ovirt-engine_upgrade-db.sh
>
> If you can submit a patch to replace the logic to use engine-setup
> instead, the jobs will update automatically via yaml to use the new logic.
> infra team can help with verification if needed.
>
> Eyal.
>
> [1]
> https://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;f=jobs/confs/shell-scripts;h=a919553fd30c2da2c145ed70f25d1c3000144501;hb=refs/heads/master
>
> On Wed, Jan 13, 2016 at 4:36 PM, Eli Mesika  wrote:
>
>> Hi
>>
>> Currently DB upgrade jobs are using schema.sh script and not engine-setup
>> This may cause CI upgrade tests to be successful while they are actually
>> failed when running from engine-setup
>> Since we are using engine-setup to upgrade the database, CI tests must
>> use the same exact method in order to be reliable
>> I had faced this week 2 different cases when my patches passed CI but
>> failed actually engine-setup causing a loose of time to me and to other
>> that already rebased on a problematic patch.
>>
>>
>> Thanks
>> Eli Mesika
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 
Eyal Edri
Associate Manager
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: DB upgrade jobs are not using engine-setup

2016-01-13 Thread Barak Korren
I wonder if the upgrade jobs could somehow be converted into standard-CI.
Are they running on every merge atm?

On 13 January 2016 at 16:52, Eyal Edri  wrote:
> Also replacing support-infra with infra-support which is the right email to
> open a ticket.
>
> e.
>
> On Wed, Jan 13, 2016 at 4:49 PM, Eyal Edri  wrote:
>>
>> Hi,
>>
>> The scripts used to run the job can be found here [1].
>> Specifically:
>>
>> ovirt-engine_upgrade-db.cleanup.sh
>> ovirt-engine_upgrade-db.sh
>>
>> If you can submit a patch to replace the logic to use engine-setup
>> instead, the jobs will update automatically via yaml to use the new logic.
>> infra team can help with verification if needed.
>>
>> Eyal.
>>
>> [1]
>> https://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;f=jobs/confs/shell-scripts;h=a919553fd30c2da2c145ed70f25d1c3000144501;hb=refs/heads/master
>>
>> On Wed, Jan 13, 2016 at 4:36 PM, Eli Mesika  wrote:
>>>
>>> Hi
>>>
>>> Currently DB upgrade jobs are using schema.sh script and not engine-setup
>>> This may cause CI upgrade tests to be successful while they are actually
>>> failed when running from engine-setup
>>> Since we are using engine-setup to upgrade the database, CI tests must
>>> use the same exact method in order to be reliable
>>> I had faced this week 2 different cases when my patches passed CI but
>>> failed actually engine-setup causing a loose of time to me and to other that
>>> already rebased on a problematic patch.
>>>
>>>
>>> Thanks
>>> Eli Mesika
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
>
>
>
> --
> Eyal Edri
> Associate Manager
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>



-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 773 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/773/
Build Number: 773
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51747

-
Changes Since Last Success:
-
Changes for Build #773
[Maor Lipchuk] core: delete Cinder snapshot failover.

[Ryan Barry] Move ovirt-node-ng builders to the node-fc21 builder'




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


My ssh public key

2016-01-13 Thread Lev Veyde
Hi,

This is my key.

Please grant me access to resource.ovirt.org server.

Thanks in advance,
Lev Veyde.

lveyde.pub
Description: Binary data
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: My ssh public key

2016-01-13 Thread Sandro Bonazzola
On Wed, Jan 13, 2016 at 5:00 PM, Lev Veyde  wrote:

> Hi,
>
> This is my key.
>
> Please grant me access to resource.ovirt.org server.
>


+1


>
> Thanks in advance,
> Lev Veyde.




-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Using oVirt VM pools in oVirt infra

2016-01-13 Thread Barak Korren
VM Pools are a nice feature of oVirt.
A VM pool lets you quickly create a pool of stateless VMs all based on
the same template.
A VM pool also seems to currently be the only way to create
template-based thin QCOW2 VMs in oVirt. (Cloning from template creates
a thick copy, this is why its relatively slow)
With the autostart [1] feature, you can have the VMs auto-started when
the pool is started, it also means VMs get started automatically a few
minutes after they are shut down.
What this comes down to is that if you run 'shutdown' in a VM from a
pool, you will automatically get back a clean VM a few minutes later.

Unfortunately VM pools are not without their shortcomings; I've
documented two of these in BZ#1298235 [2] and BZ#1298232 [3].
What this means in essence is that oVirt does not give you a way to
predictably assign names or IPs to VMs in a pool.

So how do we solve this?

Since the ultimate goal for VMs in a pool is to become Jenkins slaves,
one solution is to use the swarm plugin [4].
With the swarm plugin, the actual name and address of the slave VM
becomes not very important.
We could quite easily setup the cloud-init invoked for VMs in the pool
to download the swarm plugin client and then run it to register to
Jenkins while setting labels according to various system properties.
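
A minimal sketch of what that cloud-init step could run on first boot (the
Jenkins URL is ours, but the jar location under /swarm/, the label list and
the fsroot are assumptions):

# fetch the swarm client served by the Jenkins master and register this VM
# as a slave, labelling it from local system properties
curl -sLo /usr/local/bin/swarm-client.jar \
    http://jenkins.ovirt.org/swarm/swarm-client.jar
java -jar /usr/local/bin/swarm-client.jar \
    -master http://jenkins.ovirt.org \
    -executors 2 \
    -labels "fc23 $(uname -m) pool-vm" \
    -fsroot /home/jenkins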

The question remains how to assign IP addresses and names to the pool VMs.
We will probably need a range of IP addresses that is pre-assigned to
a range of DNS records and that will be assigned to pool VMs as they
boot up.

Currently our DHCP and DNS servers in PHX are managed by Foreman in a
semi-random fashion.
As we've seen in the past, this is subject to various failures, such as
the MAC address of the Foreman record getting out of sync with that
of the VM (for example due to Facter reporting a bad address after a
particularly nasty VDSM test run), or the DNS record going out of
sync with the VM's host name and address in the DHCP.
At this point I think we've enough evidence against Foreman's style of
managing DNS and DHCP, I suggest we will:
1. Cease from creating new VMs in PHX via Foreman for a while.
2. Shutdown the PHX foreman proxy to disconnect it from managing the
DNS and DHCP.
3. Map out our currently active MAC->IP->HOSTNAME combinations and
create static DNS and DHCP configuration files (I suggest we also
migrate from BIND+ISC DHCPD to Dnsmasq which is far easier to
configure and provides very tight DNS, DHCP and TFTP integration)
4. Add configuration for a dynamically assigned IP range as described above.
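
To illustrate item 3, a sketch of what the static mapping could look like
under Dnsmasq (the file path is the standard dnsmasq.d drop-in directory;
host names, MACs and addresses are made up):

cat > /etc/dnsmasq.d/phx-static.conf <<'EOF'
# one entry per known slave: MAC, hostname, fixed IP
dhcp-host=52:54:00:aa:bb:01,slave01.phx.ovirt.org,10.0.0.101
dhcp-host=52:54:00:aa:bb:02,slave02.phx.ovirt.org,10.0.0.102
# dynamically assigned range for the pool VMs (item 4)
dhcp-range=10.0.0.200,10.0.0.250,12h
EOF
systemctl restart dnsmasq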

Another way to resolve the current problem of coming up with a
dynamically assignable range of IPs, is to create a new VLAN in PHX
for the new pools of VMs.

One more issue we need to consider is how to use Puppet on the pool
VMs; we will probably still want Puppet to run in order to set up SSH
access for us, as well as other things needed on the slave.
Possibly we would also like for the swarm plugin client to be actually
installed and activated by Puppet, as that would grant us easy access
to Facter facts for determining the labels the slave should have while
also ensuring the slave will not become available to Jenkins until it
is actually ready for use.
It is easy enough to get Puppet running via a cloud-init script, but
the issue here is how to select classes for the new VMs.
Since they are not created in Foreman, they will not get assigned to
hostgroups, and therefore class assignment by way of hostgroup
membership will not work.
I see a few ways to resolve this:
1. Add a 'node' entry in 'site.pp' to detect pool VMs (with a name
regex) and assign classes to them
2. Use 'hiera_include' [5] in 'site.pp' to assign classes by facts via Hiera
3. Use a combination of the two methods above to ensure
'hiera_include' gets applied to and only to pool VMs.
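
As a sketch of option 3 above, combining a name regex with 'hiera_include'
(the VM name pattern, the site.pp path and the 'classes' Hiera key are
assumptions):

cat >> /etc/puppet/manifests/site.pp <<'EOF'
# pool VMs are not in Foreman, so let Hiera decide their classes
node /^pool-vm-\d+/ {
  hiera_include('classes')
}
EOF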

These are my thoughts about this so far, I am working on building a
POC for this, but I would be happy to hear other thoughts and opinions
at this point.


[1]: http://www.ovirt.org/Features/PrestartedVm
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1298235
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1298232
[4]: https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin
[5]: 
https://docs.puppetlabs.com/hiera/1/puppet.html#assigning-classes-to-nodes-with-hiera-hierainclude

-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: My ssh public key

2016-01-13 Thread Barak Korren
>>
>> Please grant me access to resource.ovirt.org server.
>
Dear integration team,
Please send patches like this: https://gerrit.ovirt.org/#/c/51673/1
instead of emails (but check it out and base on it, as it creates the
file you need to add to).

This will streamline the process of getting the access you need.
Also, please remember to add the password hash; it is needed for sudo
access, and accounts will not be created without it.
Right now the patch above is not getting merged because Didi's
password hash is missing.


-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: My ssh public key

2016-01-13 Thread Sandro Bonazzola
On Wed, Jan 13, 2016 at 5:42 PM, Barak Korren  wrote:

> >>
> >> Please grant me access to resource.ovirt.org server.
> >
> Dear integration team,
> Please send patches like this: https://gerrit.ovirt.org/#/c/51673/1


I have no rights to see that patch.



>
> instead of emails (But check it out and base on it as it creates the
> file you need to add to).
>
> This will streamline the process of you getting the access you need,
> also please remember to add the password hash, it is needed for sudo
> access and accounts will not be created without it.
> Right now the patch above is not getting merged because Didi's
> password hash is missing.
>
>
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: My ssh public key

2016-01-13 Thread Sandro Bonazzola
On Wed, Jan 13, 2016 at 5:47 PM, Sandro Bonazzola 
wrote:

>
>
> On Wed, Jan 13, 2016 at 5:42 PM, Barak Korren  wrote:
>
>> >>
>> >> Please grant me access to resource.ovirt.org server.
>> >
>> Dear integration team,
>> Please send patches like this: https://gerrit.ovirt.org/#/c/51673/1
>
>
> I have no rights to see that patch.
>

Full error I see is:
Code Review - Error
The page you requested was not found, or you do not have permission to view
this page.




>
>
>>
>> instead of emails (But check it out and base on it as it creates the
>> file you need to add to).
>>
>> This will streamline the process of you getting the access you need,
>> also please remember to add the password hash, it is needed for sudo
>> access and accounts will not be created without it.
>> Right now the patch above is not getting merged because Didi's
>> password hash is missing.
>>
>>
>> --
>> Barak Korren
>> bkor...@redhat.com
>> RHEV-CI Team
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Using oVirt VM pools in oVirt infra

2016-01-13 Thread David Caro
On 01/13 18:23, Barak Korren wrote:
> VM Pools are a nice feature of oVirt.
> A VM pool lets you quickly create a pool of stateles VMs all based on
> the same template.
> A VM pool also seems to currently be the only way to create
> template-based thin QCOW2 VMs in oVirt. (Cloning from template creates
> a thick copy, this is why its relatively slow)
> With the autostart [1] feature, you can have the VMs auto-started when
> the pool is started, it also means VMs get started automatically a few
> minutes after they are shut down.
> What this comes down to is that if you run 'shutdown' in a VM from a
> pool, you will automatically get back a clean VM a few minutes later.
>

Is there an easy way to do so from a Jenkins job without failing the job
with a slave connection error? Most projects I know that use ephemeral
slaves have to work around it by having a job that starts/creates a
slave tag and provisions the slave, and removes it at the end; if we
can skip that extra job level, better for us.

> Unfortunately VM pools are not without their short comings, I've
> documented two of these in BZ#1298235 [2] and BZ#1298232 [3].
> When this means in essence is that oVirt does not give you a way to
> predictably assign names or IPs to VMs in a pool.
> 
> So how do we solve this?
> 
> Since the ultimate goal for VMs in a pool is to become Jenkins slaves,
> one solution is to use the swarm plugin [4].
> With the swarm plugin, the actual name and address of the slave VM
> becomes not very important.
> We could quite easily setup the cloud-init invoked for VMs in the pool
> to download the swarm plugin client and then run it to register to
> Jenkins while setting labels according to various system properties.
>

IIRC the puppet manifest for jenkins already has integration with the
swarm plugin; we can use that instead.

> The question remains how to assign IP addresses and names, to the pool VMs.
> We will probably need a range of IP addresses that is pre-assigned to
> a range of DNS records an that will be assigned to pool VMs as they
> boot up.
> 
> Currently our DHCP and DNS servers in PHX is managed by Foreman in a
> semi-random fashion.
> As we've seen in the past this is subject to various failures such as
> the MAC address of the foreman record getting out of sync with the one
> of the VM (for example due to Facter reporting a bad address after a
> particularity nasty VDSM test run), or the DNS record going out of
> sync with the VM's host name and address in the DHCP.
> At this point I think we've enough evidence against Foreman's style of
> managing DNS and DHCP, I suggest we will:
> 1. Cease from creating new VMs in PHX via Foreman for a while.
> 2. Shutdown the PHX foreman proxy to disconnect it from managing the
> DNS and DHCP.
> 3. Map out our currently active MAC->IP->HOSTNAME combinations and
> create static DNS and DHCP configuration files (I suggest we also
> migrate from BIND+ISC DHCPD to Dnsmasq which is far easier to
> configure and provides very tight DNS, DHCP and TFTP integration)
> 4. Add configuration for a dynamically assigned IP range as described above.
>

Can't we just use a reserved range for those machines instead? There's
no need to remove them from Foreman; it can work with machines it does not
provision.

> Another way to resolve the current problem of coming up with a
> dynamically assignable range of IPs, is to create a new VLAN in PHX
> for the new pools of VMs.
>

I'm in favor of using an internal network for the Jenkins slaves; if
they are the ones connecting to the master, there's no need for
externally addressable IPs, so no need for public IPs. Though I recall
that it was not so easy to set up; better to discuss with the hosting

> One more issue we need to consider is how to use Puppet on the pool
> VMs, we will probably still like Puppet to run in order to setup SSH
> access for us, as well as other things needed on the slave.
> Possibly we would also like for the swarm plugin client to be actually
> installed and activated by Puppet, as that would grant us easy access
> to Facter facts for determining the labels the slave should have while
> also ensuring the slave will not become available to Jenkins until it
> is actually ready for use.
> It is easy enough to get Puppet running via a cloud-init script, but
> the issue here is how to select classes for the new VMs.
> Since they are not created in Foreman, they will not get assigned to
> hostgroups, and therefore class assignment by way of hostgroup
> membership will not work.

Can't you just auto-assign a hostgroup on creation in Foreman or
something?
A quick search turns up a plugin that might do the trick:
  https://github.com/GregSutcliffe/foreman_default_hostgroup

+1 on moving any data aside from the hostgroup assignment to hiera
though, so it can be versioned and peer reviewed.

> I see a few ways to resolve this:
> 1. An a 'node' entry in 'site.pp' to detect pool VMs (with a name
> regex) and assgin classes to them
> 2. Use 'hiera_inc

Re: My ssh public key

2016-01-13 Thread Barak Korren
hmm... right this is the 'secret' repo... never mind then, keep
emailing...  unless david says otherwise...


On 13 January 2016 at 18:49, Sandro Bonazzola  wrote:
>
>
> On Wed, Jan 13, 2016 at 5:47 PM, Sandro Bonazzola 
> wrote:
>>
>>
>>
>> On Wed, Jan 13, 2016 at 5:42 PM, Barak Korren  wrote:
>>>
>>> >>
>>> >> Please grant me access to resource.ovirt.org server.
>>> >
>>> Dear integration team,
>>> Please send patches like this: https://gerrit.ovirt.org/#/c/51673/1
>>
>>
>> I have no rights to see that patch.
>
>
> Full error I see is:
> Code Review - Error
> The page you requested was not found, or you do not have permission to view
> this page.
>
>
>
>>
>>
>>>
>>>
>>> instead of emails (But check it out and base on it as it creates the
>>> file you need to add to).
>>>
>>> This will streamline the process of you getting the access you need,
>>> also please remember to add the password hash, it is needed for sudo
>>> access and accounts will not be created without it.
>>> Right now the patch above is not getting merged because Didi's
>>> password hash is missing.
>>>
>>>
>>> --
>>> Barak Korren
>>> bkor...@redhat.com
>>> RHEV-CI Team
>>
>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: My ssh public key

2016-01-13 Thread David Caro
On 01/13 18:53, Barak Korren wrote:
> hmm... right this is the 'secret' repo... never mind then, keep
> emailing...  unless david says otherwise...
> 

I think sandro is a good candidate to also send patches to that repo :)

> 
> On 13 January 2016 at 18:49, Sandro Bonazzola  wrote:
> >
> >
> > On Wed, Jan 13, 2016 at 5:47 PM, Sandro Bonazzola 
> > wrote:
> >>
> >>
> >>
> >> On Wed, Jan 13, 2016 at 5:42 PM, Barak Korren  wrote:
> >>>
> >>> >>
> >>> >> Please grant me access to resource.ovirt.org server.
> >>> >
> >>> Dear integration team,
> >>> Please send patches like this: https://gerrit.ovirt.org/#/c/51673/1
> >>
> >>
> >> I have no rights to see that patch.
> >
> >
> > Full error I see is:
> > Code Review - Error
> > The page you requested was not found, or you do not have permission to view
> > this page.
> >
> >
> >
> >>
> >>
> >>>
> >>>
> >>> instead of emails (But check it out and base on it as it creates the
> >>> file you need to add to).
> >>>
> >>> This will streamline the process of you getting the access you need,
> >>> also please remember to add the password hash, it is needed for sudo
> >>> access and accounts will not be created without it.
> >>> Right now the patch above is not getting merged because Didi's
> >>> password hash is missing.
> >>>
> >>>
> >>> --
> >>> Barak Korren
> >>> bkor...@redhat.com
> >>> RHEV-CI Team
> >>
> >>
> >>
> >>
> >> --
> >> Sandro Bonazzola
> >> Better technology. Faster innovation. Powered by community collaboration.
> >> See how it works at redhat.com
> >
> >
> >
> >
> > --
> > Sandro Bonazzola
> > Better technology. Faster innovation. Powered by community collaboration.
> > See how it works at redhat.com
> 
> 
> 
> -- 
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Using oVirt VM pools in oVirt infra

2016-01-13 Thread Anton Marchukov
Hello All.

> What this comes down to is that if you run 'shutdown' in a VM from a
> > pool, you will automatically get back a clean VM a few minutes later.
> >
>
> Is there an easy way to do so from jenknis job without failing the job
> with slave connection error? Most projects I know that use ephemeral
>

But why do we need it here? Do we really need to target ephemeral
slaves, or is the UI management of pooled servers not good enough in
oVirt?

> 1. Cease from creating new VMs in PHX via Foreman for a while.
> > 2. Shutdown the PHX foreman proxy to disconnect it from managing the
> > DNS and DHCP.
> > 3. Map out our currently active MAC->IP->HOSTNAME combinations and
> > create static DNS and DHCP configuration files (I suggest we also
> > migrate from BIND+ISC DHCPD to Dnsmasq which is far easier to
> > configure and provides very tight DNS, DHCP and TFTP integration)
> > 4. Add configuration for a dynamically assigned IP range as described
> above.
> >
> Can't we just use a reserved range for those machines instead? there's
> no need to remove from foreman, it can work with machines it does not
> provision.
>

As I understand it, the problem here is that in one VLAN we can
obviously have only one DHCP server, and if it is managed by Foreman it
may not be possible to have a range there that Foreman does not touch.
But that depends on how Foreman manages the DHCP config.


> > Another way to resolve the current problem of coming up with a
> > dynamically assignable range of IPs, is to create a new VLAN in PHX
> > for the new pools of VMs.
> >
> I'm in favor of using an internal network for the jenkins slaves, if
> they are the ones connecting to the master there's no need for
> externally addressable ips, so no need for public ips, though I recall
> that it was not so easy to set up, better discuss with the hosting
>

I think that if we want to scale, public IPv4 addresses might indeed be
quite wasteful. I thought about using IPv6, since we can just have one
prefix and there is no need for DHCP, so such VMs could live in the same
VLAN as Foreman with no problem if needed. But as I understand it, we
need IPv4 addressing on the slaves for the tests; do I get that right?


> Can't you just autoasign a hostgroup on creation on formean or
> something?
> Quick search throws a plugin that might do the trick:
>   https://github.com/GregSutcliffe/foreman_default_hostgroup
>
> +1 on moving any data aside from the hostgroup assignation to hiera
> though, so it can be versioned and peer reviewed.
>

Can we somehow utilize cloud-init for this?

Also, do we really want to use vanilla OS templates for this, instead of
building our own images based on vanilla but with the configuration
settings we need already applied? I think that would also speed up slave
creation, although since they are not ephemeral this will not gain much.

-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Using oVirt VM pools in oVirt infra

2016-01-13 Thread David Caro
On 01/13 18:02, Anton Marchukov wrote:
> Hello All.
> 
> > What this comes down to is that if you run 'shutdown' in a VM from a
> > > pool, you will automatically get back a clean VM a few minutes later.
> > >
> >
> > Is there an easy way to do so from jenknis job without failing the job
> > with slave connection error? Most projects I know that use ephemeral
> >
> 
> But why we need it here. Do we really need to target ephemeral slaves or UI
> management of pool servers is not good enough in ovirt?

The issue is being able to recycle the slaves without breaking any
Jenkins jobs and, if possible, automatically. IIUC the key idea of those
slaves is that they are ephemeral, so we can create/destroy them on
demand really easily.
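
One possible way to do that from the job itself without tripping the
slave-connection error (an untested sketch, assuming systemd on the
slave and passwordless sudo for the jenkins user) is to schedule the
poweroff slightly in the future, as the last build step:

# hypothetical delay: long enough for the job to finish reporting
# and for the agent to disconnect cleanly before the VM goes down
sudo systemd-run --on-active=2min /usr/bin/systemctl poweroff

The transient timer fires after the build has completed, the VM shuts
down, and the pool then hands back a clean one as described above.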

> 
> > 1. Cease from creating new VMs in PHX via Foreman for a while.
> > > 2. Shutdown the PHX foreman proxy to disconnect it from managing the
> > > DNS and DHCP.
> > > 3. Map out our currently active MAC->IP->HOSTNAME combinations and
> > > create static DNS and DHCP configuration files (I suggest we also
> > > migrate from BIND+ISC DHCPD to Dnsmasq which is far easier to
> > > configure and provides very tight DNS, DHCP and TFTP integration)
> > > 4. Add configuration for a dynamically assigned IP range as described
> > above.
> > >
> > Can't we just use a reserved range for those machines instead? there's
> > no need to remove from foreman, it can work with machines it does not
> > provision.
> >
> 
> As I understand the problem here is that in one VLAN we obviously can have
> only one DHCP and if it is managed by foreman it may not be possible to
> have a range there that is not touchable by foreman. But it depends on how
> foreman touches DHCP config.

We already have reserved IPs and ranges in the same DHCP server that is
managed by Foreman.

> 
> 
> > > Another way to resolve the current problem of coming up with a
> > > dynamically assignable range of IPs, is to create a new VLAN in PHX
> > > for the new pools of VMs.
> > >
> > I'm in favor of using an internal network for the jenkins slaves, if
> > they are the ones connecting to the master there's no need for
> > externally addressable ips, so no need for public ips, though I recall
> > that it was not so easy to set up, better discuss with the hosting
> >
> 
> I think if we want to scale that public IPv4 IPs might be indeed quite
> wasteful. I though about using IPv6 since e.g. we can just have one prefix
> and there is no need for DHCP so such VMs will live in the same VLAN as
> foreman if needed with no problem. But as I understand we need IPv4
> addressing on the slave for the tests, do I get it correct?
>

I'm not really sure, but if we are using Lago for the functional tests,
maybe there's no need for them. I'm not really familiar with IPv6;
maybe it's time to get to know it :)

> 
> > Can't you just autoasign a hostgroup on creation on formean or
> > something?
> > Quick search throws a plugin that might do the trick:
> >   https://github.com/GregSutcliffe/foreman_default_hostgroup
> >
> > +1 on moving any data aside from the hostgroup assignation to hiera
> > though, so it can be versioned and peer reviewed.
> >
> 
> Can we somehow utilize cloud init for this.

I don't like the slaves explicitly registering themselves into Foreman;
that makes the provisioning totally coupled with it from the slave's
perspective.

> 
> Also do we really want to use vanilla OS templates for this instead of
> building our own based on vanialla but with configuration setting needed
> for us. I think it will also fasten slave creation although since they are
> not ephemeral this will not give much.
> 
> -- 
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Random jenkins failures

2016-01-13 Thread David Caro
On 01/13 15:37, Eyal Edri wrote:
> Looks like loop device issues.
> Nir - didn't you say you have a patch to fix this?
> 
> In any case i think rebooting the slave fix this.

I just logged in to the slave and I see some strange loop devices that
don't look right:

[dcaro@fc21-vm09 ~]$ losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         1  0 /var/lib/docker/devicemapper/devicemapper/data
/dev/loop1         0      0         1  0 /var/lib/docker/devicemapper/devicemapper/metadata
/dev/loop2         0      0         0  0 /tmp/ngnodeJ0pwVC/disklono5e1i.img (deleted)
/dev/loop3         0      0         0  0 /tmp/ngnode1AGZcM/diskq59_0u5b.img (deleted)
/dev/loop4         0      0         0  0 /tmp/ngnode4sZfrq/disk7f9xd4m_.img (deleted)


Not sure why there's a docker dir there at all :/; anyhow, still investigating
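
For the (deleted) ones, something along these lines (run as root on the
slave) should detach them without a reboot; just a sketch, of course:

# detach every loop device whose backing file has already been deleted
losetup -l | awk '/\(deleted\)/ {print $1}' | xargs -r -n1 losetup -d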

> 
> E.
> 
> On Wed, Jan 13, 2016 at 3:08 PM, Vinzenz Feenstra 
> wrote:
> 
> >
> > On Jan 13, 2016, at 1:20 PM, Vinzenz Feenstra  wrote:
> >
> > I have just submitted a set of 4 patches where 1 patch unit tests failed 
> > with the pasted text below. Those patches are absolutely unrelated to those 
> > failures.
> >
> >
> > Please check into those issues - Thanks
> >
> >
> > It happened again
> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2186/console
> >
> > 13:06:47 ======================================================================
> > 13:06:47 ERROR: testLoopMount (mountTests.MountTests)
> > 13:06:47 ----------------------------------------------------------------------
> > 13:06:47 Traceback (most recent call last):
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py", line 128, in testLoopMount
> > 13:06:47     m.mount(mntOpts="loop")
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 225, in mount
> > 13:06:47     return self._runcmd(cmd, timeout)
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 241, in _runcmd
> > 13:06:47     raise MountError(rc, ";".join((out, err)))
> > 13:06:47 MountError: (32, ';mount: /tmp/tmpl2jG_h: failed to setup loop device: No such file or directory\n')
> > 13:06:47  >> begin captured logging <<
> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /sbin/mkfs.ext2 -F /tmp/tmpl2jG_h (cwd None)
> > 13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 (17-May-2015)\n';  = 0
> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -o loop /tmp/tmpl2jG_h /var/tmp/tmpRslb5M (cwd None)
> > 13:06:47  >> end captured logging <<
> > 13:06:47
> > 13:06:47 ======================================================================
> > 13:06:47 ERROR: testSymlinkMount (mountTests.MountTests)
> > 13:06:47 ----------------------------------------------------------------------
> > 13:06:47 Traceback (most recent call last):
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py", line 150, in testSymlinkMount
> > 13:06:47     m.mount(mntOpts="loop")
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 225, in mount
> > 13:06:47     return self._runcmd(cmd, timeout)
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 241, in _runcmd
> > 13:06:47     raise MountError(rc, ";".join((out, err)))
> > 13:06:47 MountError: (32, ';mount: /var/tmp/tmpTeUZUl/backing.img: failed to setup loop device: No such file or directory\n')
> > 13:06:47  >> begin captured logging <<
> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /sbin/mkfs.ext2 -F /var/tmp/tmpTeUZUl/backing.img (cwd None)
> > 13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 (17-May-2015)\n';  = 0
> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -o loop /var/tmp/tmpTeUZUl/link_to_image /var/tmp/tmpTeUZUl/mountpoint (cwd None)
> > 13:06:47  >> end captured logging <<
> > 13:06:47
> > 13:06:47 ======================================================================
> > 13:06:47 ERROR: test_getDevicePartedInfo (parted_utils_tests.PartedUtilsTests)
> > 13:06:47 ----------------------------------------------------------------------
> > 13:06:47 Traceback (most recent call last):
> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/testValidation.py", line 97, in wrapper
> > 13:06:47     return f(*args, **kwargs)
> > 13:06:47   File

[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1660 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1660/
Build Number: 1660
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51720

-
Changes Since Last Success:
-
Changes for Build #1660
[Idan Shaby] frontend: fix FcpStorageView faults




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.5_el6_merged - Build # 790 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/790/
Build Number: 790
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51417

-
Changes Since Last Success:
-
Changes for Build #790
[Roy Golan] core: hosted-engine: Lock the sd import exclusively




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 778 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/778/
Build Number: 778
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51419

-
Changes Since Last Success:
-
Changes for Build #778
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 779 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/779/
Build Number: 779
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51715

-
Changes Since Last Success:
-
Changes for Build #778
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #779
[Jakub Niedermertl] core: Losing graphical protocol fix




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 780 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/780/
Build Number: 780
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51639

-
Changes Since Last Success:
-
Changes for Build #778
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #779
[Jakub Niedermertl] core: Losing graphical protocol fix


Changes for Build #780
[Roman Mohr] webui: Fix numa pinning dialog cancel button




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Random jenkins failures

2016-01-13 Thread Nir Soffer
On Wed, Jan 13, 2016 at 3:37 PM, Eyal Edri  wrote:
> Looks like loop device issues.
> Nir - didn't you say you have a patch to fix this?

The patch was merged today:
https://gerrit.ovirt.org/51614/

It fixes an incorrect umount in two tests that could leave stale loop
devices affecting other tests on the same slave.


>
> In any case i think rebooting the slave fix this.
>
> E.
>
> On Wed, Jan 13, 2016 at 3:08 PM, Vinzenz Feenstra 
> wrote:
>>
>>
>> On Jan 13, 2016, at 1:20 PM, Vinzenz Feenstra  wrote:
>>
>> I have just submitted a set of 4 patches where 1 patch unit tests failed
>> with the pasted text below. Those patches are absolutely unrelated to those
>> failures.
>>
>>
>> Please check into those issues - Thanks
>>
>>
>> It happened again
>> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2186/console
>>
>> 13:06:47
>> ==
>> 13:06:47 ERROR: testLoopMount (mountTests.MountTests)
>> 13:06:47
>> --
>> 13:06:47 Traceback (most recent call last):
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
>> line 128, in testLoopMount
>> 13:06:47 m.mount(mntOpts="loop")
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
>> line 225, in mount
>> 13:06:47 return self._runcmd(cmd, timeout)
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
>> line 241, in _runcmd
>> 13:06:47 raise MountError(rc, ";".join((out, err)))
>> 13:06:47 MountError: (32, ';mount: /tmp/tmpl2jG_h: failed to setup loop
>> device: No such file or directory\n')
>> 13:06:47  >> begin captured logging <<
>> 
>> 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
>> /sbin/mkfs.ext2 -F /tmp/tmpl2jG_h (cwd None)
>> 13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13
>> (17-May-2015)\n';  = 0
>> 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
>> /usr/bin/mount -o loop /tmp/tmpl2jG_h /var/tmp/tmpRslb5M (cwd None)
>> 13:06:47 - >> end captured logging <<
>> -
>> 13:06:47
>> 13:06:47
>> ==
>> 13:06:47 ERROR: testSymlinkMount (mountTests.MountTests)
>> 13:06:47
>> --
>> 13:06:47 Traceback (most recent call last):
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
>> line 150, in testSymlinkMount
>> 13:06:47 m.mount(mntOpts="loop")
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
>> line 225, in mount
>> 13:06:47 return self._runcmd(cmd, timeout)
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
>> line 241, in _runcmd
>> 13:06:47 raise MountError(rc, ";".join((out, err)))
>> 13:06:47 MountError: (32, ';mount: /var/tmp/tmpTeUZUl/backing.img: failed
>> to setup loop device: No such file or directory\n')
>> 13:06:47  >> begin captured logging <<
>> 
>> 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
>> /sbin/mkfs.ext2 -F /var/tmp/tmpTeUZUl/backing.img (cwd None)
>> 13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13
>> (17-May-2015)\n';  = 0
>> 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
>> /usr/bin/mount -o loop /var/tmp/tmpTeUZUl/link_to_image
>> /var/tmp/tmpTeUZUl/mountpoint (cwd None)
>> 13:06:47 - >> end captured logging <<
>> -
>> 13:06:47
>> 13:06:47
>> ==
>> 13:06:47 ERROR: test_getDevicePartedInfo
>> (parted_utils_tests.PartedUtilsTests)
>> 13:06:47
>> --
>> 13:06:47 Traceback (most recent call last):
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/testValidation.py",
>> line 97, in wrapper
>> 13:06:47 return f(*args, **kwargs)
>> 13:06:47   File
>> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/parted_utils_tests.py",
>> line 61, in setUp
>> 13:06:47 self.assertEquals(rc, 0)
>> 13:06:47 AssertionError: 1 != 0
>> 13:06:47  >> begin captured logging <<
>> 
>> 13:06:47 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 dd if=/dev/zero
>> of=/tmp/tmp7dS7VS bs=100M count=1 (cwd None)
>> 13:06:47 root: DEBUG: SUCCESS:  = '1+0 records in\n1+0 records
>> out\n104857600 bytes (105 MB) copied, 0.350029 s, 300 MB/s\n';  = 0
>> 13:06:47 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 losetup -f --sho

[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 781 - Still Failing!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/781/
Build Number: 781
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51801

-
Changes Since Last Success:
-
Changes for Build #778
[Roy Golan] core: hosted-engine: Add connection details explicitly for NFS


Changes for Build #779
[Jakub Niedermertl] core: Losing graphical protocol fix


Changes for Build #780
[Roman Mohr] webui: Fix numa pinning dialog cancel button


Changes for Build #781
[Alexander Wels] webadmin: increase default nodes in cell tree




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.5_el6_merged - Build # 796 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/796/
Build Number: 796
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51801

-
Changes Since Last Success:
-
Changes for Build #796
[Alexander Wels] webadmin: increase default nodes in cell tree




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


"ovirt-engine_3.6_upgrade-from-3.5_el6_merged" job failure

2016-01-13 Thread Einav Cohen
none of the errors in the console log are related to Alexander's
patch, contrary to what the original jenkins e-mail (forwarded below)
implies:

...
ERROR with rpm_check_debug vs depsolve:
ovirt-engine-sdk-python >= 3.5.2.1 is needed by (installed) 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.2.1 is needed by (installed) 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.1.0 is needed by (installed) 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-iso-uploader-3.6.1-0.0.master.20151006154303.gitd2aea1a.el6.noarch
ovirt-engine-sdk-python >= 3.5.1.0 is needed by (installed) 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.1.0 is needed by (installed) 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.2.1 is needed by (installed) 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.1.0 is needed by (installed) 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-iso-uploader-3.6.1-0.0.master.20151006154303.gitd2aea1a.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-iso-uploader-3.6.1-0.0.master.20151006154303.gitd2aea1a.el6.noarch
ovirt-engine-sdk-python >= 3.5.1.0 is needed by (installed) 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.2.1 is needed by (installed) 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.2.1 is needed by (installed) 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-sdk-python >= 3.5.1.0 is needed by (installed) 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-engine-sdk-python >= 3.5.2.1 is needed by (installed) 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-engine-sdk-python >= 3.6.0.2 is needed by (installed) 
ovirt-iso-uploader-3.6.1-0.0.master.20151006154303.gitd2aea1a.el6.noarch
** Found 28 pre-existing rpmdb problem(s), 'yum check' output follows:
ovirt-image-uploader-3.5.1-1.el6.noarch is a duplicate with 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch is a duplicate with 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch is a duplicate with 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch is a duplicate with 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch is a duplicate with 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch is a duplicate with 
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch is a 
duplicate with ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch is a 
duplicate with 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch is a 
duplicate with 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch is a 
duplicate with 
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch has 
missing requires of ovirt-engine-sdk-python >= ('0', '3.6.0.2', None)
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch has 
missing requires of ovirt-engine-sdk-python >= ('0', '3.6.0.2', None)
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch has 
missing requires of ovirt-engine-sdk-python >= ('0', '3.6.0.2', None)
ovirt-image-uploader-3.6.1-0.0.master.20151006154122.git95ce637.el6.noarch has 
missing requires of ovirt-engine-sdk-python >= ('0', '3.6.0.2', None)
ovirt-iso-uploader-3.5.2-1.el6.noarch is a duplicate with 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch is a duplicate with 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch is a duplicate with 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch is a duplicate with 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch is a duplicate with 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch is a duplicate with 
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-iso-uploader-3.6.1-0.0.mast
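
For reference, rpmdb duplicates like the above usually come from an
interrupted yum run on the slave; assuming yum-utils is installed
there, a cleanup along these lines is the usual fix (sketch only):

# finish or roll back any interrupted yum transaction first
yum-complete-transaction
# list, then remove, the older duplicate packages
package-cleanup --dupes
package-cleanup --cleandupes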

[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 1667 - Failure!

2016-01-13 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/1667/
Build Number: 1667
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/51257

-
Changes Since Last Success:
-
Changes for Build #1667
[Allon Mureinik] engine: One statement per line checkstyle




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Random jenkins failures

2016-01-13 Thread Nir Soffer
On Wed, Jan 13, 2016 at 7:57 PM, David Caro  wrote:
> On 01/13 15:37, Eyal Edri wrote:
>> Looks like loop device issues.
>> Nir - didn't you say you have a patch to fix this?
>>
>> In any case i think rebooting the slave fix this.
>
> I just logged in to the slave and I see some strange loop devices that
> don't look right:
>
> [dcaro@fc21-vm09 ~]$ losetup
> NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
> /dev/loop0         0      0         1  0 /var/lib/docker/devicemapper/devicemapper/data
> /dev/loop1         0      0         1  0 /var/lib/docker/devicemapper/devicemapper/metadata
> /dev/loop2         0      0         0  0 /tmp/ngnodeJ0pwVC/disklono5e1i.img (deleted)
> /dev/loop3         0      0         0  0 /tmp/ngnode1AGZcM/diskq59_0u5b.img (deleted)
> /dev/loop4         0      0         0  0 /tmp/ngnode4sZfrq/disk7f9xd4m_.img (deleted)

Fabian, these devices look like leftovers from the node tests - can you look into it?

>
>
> Not sure why there's a docker dir at all there :/, anyhow, stil investigating
>
>>
>> E.
>>
>> On Wed, Jan 13, 2016 at 3:08 PM, Vinzenz Feenstra 
>> wrote:
>>
>> >
>> > On Jan 13, 2016, at 1:20 PM, Vinzenz Feenstra  wrote:
>> >
>> > I have just submitted a set of 4 patches where 1 patch unit tests failed 
>> > with the pasted text below. Those patches are absolutely unrelated to 
>> > those failures.
>> >
>> >
>> > Please check into those issues - Thanks
>> >
>> >
>> > It happened again
>> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/2186/console
>> >
>> > 13:06:47 ======================================================================
>> > 13:06:47 ERROR: testLoopMount (mountTests.MountTests)
>> > 13:06:47 ----------------------------------------------------------------------
>> > 13:06:47 Traceback (most recent call last):
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py", line 128, in testLoopMount
>> > 13:06:47     m.mount(mntOpts="loop")
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 225, in mount
>> > 13:06:47     return self._runcmd(cmd, timeout)
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 241, in _runcmd
>> > 13:06:47     raise MountError(rc, ";".join((out, err)))
>> > 13:06:47 MountError: (32, ';mount: /tmp/tmpl2jG_h: failed to setup loop device: No such file or directory\n')
>> > 13:06:47  >> begin captured logging <<
>> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /sbin/mkfs.ext2 -F /tmp/tmpl2jG_h (cwd None)
>> > 13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 (17-May-2015)\n';  = 0
>> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -o loop /tmp/tmpl2jG_h /var/tmp/tmpRslb5M (cwd None)
>> > 13:06:47  >> end captured logging <<
>> > 13:06:47
>> > 13:06:47 ======================================================================
>> > 13:06:47 ERROR: testSymlinkMount (mountTests.MountTests)
>> > 13:06:47 ----------------------------------------------------------------------
>> > 13:06:47 Traceback (most recent call last):
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py", line 150, in testSymlinkMount
>> > 13:06:47     m.mount(mntOpts="loop")
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 225, in mount
>> > 13:06:47     return self._runcmd(cmd, timeout)
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py", line 241, in _runcmd
>> > 13:06:47     raise MountError(rc, ";".join((out, err)))
>> > 13:06:47 MountError: (32, ';mount: /var/tmp/tmpTeUZUl/backing.img: failed to setup loop device: No such file or directory\n')
>> > 13:06:47  >> begin captured logging <<
>> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /sbin/mkfs.ext2 -F /var/tmp/tmpTeUZUl/backing.img (cwd None)
>> > 13:06:47 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13 (17-May-2015)\n';  = 0
>> > 13:06:47 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -o loop /var/tmp/tmpTeUZUl/link_to_image /var/tmp/tmpTeUZUl/mountpoint (cwd None)
>> > 13:06:47  >> end captured logging <<
>> > 13:06:47
>> > 13:06:47 ======================================================================
>> > 13:06:47 ERROR: test_getDevicePartedInfo (parted_utils_tests.PartedUtilsTests)
>> > 13:06:47 ----------------------------------------------------------------------
>> > 13:06:47 Traceback (most recent call last):
>> > 13:06:47   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/testValidation.py", line 97, in wrapper