[CQ]: 83894, 20 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2017-11-20 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
83894,20 (vdsm), failed. However, this change does not appear to be the root
cause of the failure. Change 84343,3 (vdsm), which this change depends on or is
based on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 84343,3 (vdsm) is fixed and
this change is updated to refer to or rebased on the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/83894/20

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/84343/3

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3939/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build became unstable: ovirt_master_publish-rpms_nightly #779

2017-11-20 Thread jenkins
See 


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Build failed in Jenkins: system-sync_mirrors-epel-el6-x86_64 #932

2017-11-20 Thread jenkins
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git 
 > +refs/heads/*:refs/remotes/origin/* --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 5417869e41154e9c513d882f2a781eba65449fca (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5417869e41154e9c513d882f2a781eba65449fca
Commit message: "Move useful stuff from test_usrc.py to conftest.py"
 > git rev-list 5417869e41154e9c513d882f2a781eba65449fca # timeout=10
[system-sync_mirrors-epel-el6-x86_64] $ /bin/bash -xe 
/tmp/jenkins7371068000987969008.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror epel-el6 x86_64 
jenkins/data/mirrors-reposync.conf
Checking if mirror needs a resync
Traceback (most recent call last):
  File "/usr/bin/reposync", line 343, in <module>
    main()
  File "/usr/bin/reposync", line 175, in main
    my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in doRepoSetup
    return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in _getRepos
    self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
    self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in retrieveAllMD
    dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1465, in _commonLoadRepoXML
    local  = self.cachedir + '/repomd.xml'
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 774, in <lambda>
    cachedir = property(lambda self: self._dirGetAttr('cachedir'))
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 757, in _dirGetAttr
    self.dirSetup()
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 735, in dirSetup
    self._dirSetupMkdir_p(dir)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 712, in _dirSetupMkdir_p
    raise Errors.RepoError, msg
yum.Errors.RepoError: Error making cache directory: /home/jenkins/mirrors_cache/centos-updates-el7 error was: [Errno 17] File exists: '/home/jenkins/mirrors_cache/centos-updates-el7'
Build step 'Execute shell' marked build as failure
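
The [Errno 17] traceback above is the classic mkdir race: presumably two sync
jobs sharing /home/jenkins/mirrors_cache tried to create the same cache
directory at the same time. A minimal sketch of the usual EAFP guard, using
the path from the traceback; this is an illustration, not a patch to yum:

    import errno
    import os

    def mkdir_p(path):
        """Create path like 'mkdir -p', tolerating a concurrent creator.

        If another process creates the directory between the check and
        the mkdir, os.makedirs() raises OSError(EEXIST); treating that
        case as success removes the race seen in the traceback above.
        """
        try:
            os.makedirs(path)
        except OSError as e:
            if e.errno != errno.EEXIST or not os.path.isdir(path):
                raise

    mkdir_p('/home/jenkins/mirrors_cache/centos-updates-el7')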
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 83841, 24 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2017-11-20 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
83841,24 (vdsm), failed. However, this change does not appear to be the root
cause of the failure. Change 84343,3 (vdsm), which this change depends on or is
based on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 84343,3 (vdsm) is fixed and
this change is updated to refer to or rebased on the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/83841/24

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/84343/3

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3934/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1775) system-sync_mirrors-fedora-updates-fc25-x86_64 job sometimes takes 3+ hours to complete

2017-11-20 Thread Evgheni Dereveanchin (oVirt JIRA)
Evgheni Dereveanchin created OVIRT-1775:
---

 Summary: system-sync_mirrors-fedora-updates-fc25-x86_64 job 
sometimes takes 3+ hours to complete
 Key: OVIRT-1775
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1775
 Project: oVirt - virtualization made easy
  Issue Type: Bug
  Components: oVirt Infra
Reporter: Evgheni Dereveanchin
Assignee: infra
Priority: Low


The fc25 mirror sync job takes hours to complete:
http://jenkins.ovirt.org/view/All%20Running%20jobs/job/system-sync_mirrors-fedora-updates-fc25-x86_64/buildTimeTrend
 

Builds #801, #813, #822, #825, #831, #839 took more than 3 hours to complete.
Looking at the mirror server during build #839, the reposync and
genpkgmetadata.py processes were consuming resources and constantly waiting in
D state, with I/O wait at around 70%.

Opening a ticket to investigate the root cause of this slowdown.
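
For reference, uninterruptible sleep can be spotted without extra tooling by
scanning /proc; a minimal sketch of that kind of check, assuming nothing about
the mirror scripts themselves (the naive split would need refining for process
names that contain spaces):

    import os

    def d_state_processes():
        """Yield (pid, comm) for processes in uninterruptible sleep ('D').

        The third field of /proc/<pid>/stat is the process state; 'D'
        normally means blocked on I/O, matching the reposync and
        genpkgmetadata.py behaviour described above.
        """
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                with open('/proc/%s/stat' % pid) as f:
                    fields = f.read().split()
            except (IOError, OSError):  # process exited mid-scan
                continue
            if fields[2] == 'D':
                yield pid, fields[1].strip('()')

    for pid, comm in d_state_processes():
        print('%s %s' % (pid, comm))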



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 83300, 28 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2017-11-20 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
83300,28 (vdsm), failed. However, this change does not appear to be the root
cause of the failure. Change 84343,3 (vdsm), which this change depends on or is
based on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 84343,3 (vdsm) is fixed and
this change is updated to refer to or rebased on the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/83300/28

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/84343/3

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3932/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 84382, 3 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2017-11-20 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
84382,3 (vdsm), failed. However, this change does not appear to be the root
cause of the failure. Change 84343,3 (vdsm), which this change depends on or is
based on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 84343,3 (vdsm) is fixed and
this change is updated to refer to or rebased on the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/84382/3

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/84343/3

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3930/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 84346, 4 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2017-11-20 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
84346,4 (vdsm), failed. However, this change does not appear to be the root
cause of the failure. Change 84343,3 (vdsm), which this change depends on or is
based on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 84343,3 (vdsm) is fixed and
this change is updated to refer to or rebased on the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/84346/4

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/84343/3

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3926/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1709) provision Persistent Storage for OpenShift

2017-11-20 Thread Evgheni Dereveanchin (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgheni Dereveanchin reassigned OVIRT-1709:
---

Assignee: Evgheni Dereveanchin  (was: infra)

> provision Persistent Storage for OpenShift
> --
>
> Key: OVIRT-1709
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1709
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OpenShift
>Reporter: Evgheni Dereveanchin
>Assignee: Evgheni Dereveanchin
>Priority: High
>  Labels: openshift
>
> The OpenShift instance in PHX currently does not have any Persistent Storage
> assigned to it, which is needed for things like databases and other important
> data. Opening this ticket to track how many volumes we may need and to attach
> them.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 84379,1 (vdsm-jsonrpc-java) failed "ovirt-4.1" system tests

2017-11-20 Thread oVirt Jenkins
Change 84379,1 (vdsm-jsonrpc-java) is probably the reason behind recent system
test failures in the "ovirt-4.1" change queue and needs to be fixed.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/84379/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1350/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2017-11-20 Thread eyal edri (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri updated OVIRT-886:

Status: To Do  (was: In Progress)

> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue), if one of the
> required packages is missing because a repository in .repos hadn't been
> added, yum will not fail and the package will not be installed.
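
A common workaround for this class of problem is to verify the install with
rpm instead of trusting yum's exit status; a minimal sketch, for illustration
only (this is not what mock_runner.sh actually does):

    import subprocess

    def install_and_verify(packages):
        """Install packages with yum, then confirm each one with rpm -q.

        As described in this ticket, yum can exit 0 even when a requested
        package was never found, so the rpm query is what actually decides
        success or failure.
        """
        subprocess.call(['yum', '-y', 'install'] + packages)
        missing = [p for p in packages
                   if subprocess.call(['rpm', '-q', p]) != 0]
        if missing:
            raise RuntimeError('packages not installed: %s' % ', '.join(missing))

    # hypothetical usage inside a CI script:
    # install_and_verify(['python-devel', 'libvirt-client'])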



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1774) Upstream source collector can fail to push on multi-branch projects

2017-11-20 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1774:

Epic Link: OVIRT-400

> Upstream source collector can fail to push on multi-branch projects
> ---
>
> Key: OVIRT-1774
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1774
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: Barak Korren
>Assignee: infra
>  Labels: poll-upstream-sources, upstream-source-collector
>
> When the upstream source collection code looks for similar patches in order
> to avoid pushing a new patch, it searches for patches that contain the same
> change to the '{{upstream-sources.yaml}}' file, regardless of the branch they
> may belong to.
> The code currently ignores the possibility that similar changes might be
> required for different branches, in which case different patches may need to
> be pushed.
> The way of detecting similar patches should be changed so that either the
> branch name is included in the checksum that is used to identify a patch, or
> the query is limited to patches of the branch being handled.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1774) Upstream source collector can fail to push on multi-branch projects

2017-11-20 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-1774:
---

 Summary: Upstream source collector can fail to push on 
multi-branch projects
 Key: OVIRT-1774
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1774
 Project: oVirt - virtualization made easy
  Issue Type: Bug
  Components: oVirt CI
Reporter: Barak Korren
Assignee: infra


When the upstream source collection code looks for similar patches in order to
avoid pushing a new patch, it searches for patches that contain the same change
to the '{{upstream-sources.yaml}}' file, regardless of the branch they may
belong to.

The code currently ignores the possibility that similar changes might be
required for different branches, in which case different patches may need to be
pushed.

The way of detecting similar patches should be changed so that either the
branch name is included in the checksum that is used to identify a patch, or
the query is limited to patches of the branch being handled.
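
A minimal sketch of the first option (folding the branch name into the
checksum that identifies a patch); the function and parameter names here are
hypothetical, not the actual collector code:

    import hashlib

    def patch_checksum(branch, upstream_sources_diff):
        """Identify a patch by its content and its target branch.

        Hashing the branch name together with the upstream-sources.yaml
        change gives the same change two distinct identities on two
        branches, so an existing patch on one branch no longer masks
        the need to push the same change to another.
        """
        h = hashlib.sha1()
        h.update(branch.encode('utf-8'))
        h.update(b'\x00')  # separator so branch/diff boundaries cannot collide
        h.update(upstream_sources_diff.encode('utf-8'))
        return h.hexdigest()

    # Same diff, different branches -> different checksums:
    assert patch_checksum('master', 'a-diff') != patch_checksum('ovirt-4.1', 'a-diff')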



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [004_basic_sanity.vm_run ]

2017-11-20 Thread Dan Kenigsberg
Francesco is on it: https://gerrit.ovirt.org/#/c/84382/

On Mon, Nov 20, 2017 at 3:43 PM, Dafna Ron  wrote:
> Hi,
>
> We have a failure in OST on test 004_basic_sanity.vm_run.
>
> It seems to be an error in the VM type, which is related to the patch reported.
>
>
> Link to suspected patches: https://gerrit.ovirt.org/#/c/84343/
>
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922
>
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922/artifact
>
>
> (Relevant) error snippet from the log:
>
> 
>
>
> vdsm log:
>
> 2017-11-20 07:40:12,779-0500 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:611)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606,
> in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in
> _dynamicMethod
> result = fn(*methodArgs)
>   File "", line 2, in getAllVmStats
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1341, in
> getAllVmStats
> statsList = self._cif.getAllVmStats()
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 508, in
> getAllVmStats
> return [v.getStats() for v in self.vmContainer.values()]
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1664, in
> getStats
> stats.update(self._getConfigVmStats())
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1703, in
> _getConfigVmStats
> 'vmType': self.conf['vmType'],
> KeyError: 'vmType'
>
>
> engine log:
>
> 2017-11-20 07:43:07,675-05 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
> Message received: {"jsonrpc": "2.0", "id":
> "5bf12e5a-4a09-4999-a6ce-a7dd639d3833", "error": {"message": "Internal
> JSON-RPC error:
>  {'reason': \"'vmType'\"}", "code": -32603}}
> 2017-11-20 07:43:07,676-05 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Unexpected return
> value: Status [code=-32603, message=Internal JSON-RPC error: {'r
> eason': "'vmType'"}]
> 2017-11-20 07:43:07,676-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Failed in
> 'GetAllVmStatsVDS' method
> 2017-11-20 07:43:07,676-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Command
> 'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-0, VdsIdV
> DSCommandParametersBase:{hostId='1af28f2c-79db-4069-aa53-5bb46528c5e9'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> GetAllVmStatsVDS, error = Internal JSON-RPC error: {'reason': "'vmType'"},
> code = -32603
> 2017-11-20 07:43:07,676-05 DEBUG
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Exception:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGeneric
> Exception: VDSErrorException: Failed to GetAllVmStatsVDS, error = Internal
> JSON-RPC error: {'reason': "'vmType'"}, code = -32603
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:81)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:223)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:193)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:112)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
> [dal.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
> Source) [vdsbroker.jar:]
> at sun.reflect.GeneratedMethodAccessor247.invoke(Unknown Source)
> [:1.8.0_151]
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_151]
> at 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [ 002_bootstrap.verify_add_all_hosts ]

2017-11-20 Thread Dafna Ron
On 11/20/2017 01:16 PM, Yaniv Kaul wrote:
>
>
> On Mon, Nov 20, 2017 at 3:10 PM, Dafna Ron wrote:
>
> Hi,
>
> We had a failure in OST for test 002_bootstrap.verify_add_all_hosts.
>
> From the logs I can see that vdsm on host0 was reporting that it
> cannot find the physical volume but eventually the storage was
> created and is reported as responsive.
>
> However, Host1 is reported to have become non-operational with a storage
> domain does not exist error, and I think that there is a race.
>
>
> I've opened https://bugzilla.redhat.com/show_bug.cgi?id=1514906 on this. 
>  
>
> I think that we create the storage domain while host1 is being
> installed and if the domain is not created and reported as
> activated in time, host1 will become nonOperational.
>
>
> And based on the above description, this is exactly the issue I've
> described in the BZ.
> Y.
>  

+1, this seems to be exactly the same issue.

> are we starting installation of host1 before host0 and storage are
> active?
>
>
> Link to suspected patches: I do not think that the patch reported
> is related to the error
>
> https://gerrit.ovirt.org/#/c/84133/
>
>
> Link to Job:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/
>
>
> Link to all logs:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/artifact/
>
>
> (Relevant) error snippet from the log:
>
>
> Lago log:
>
> 2017-11-18
> 11:15:25,472::log_utils.py::end_log_task::670::nose::INFO::  #
> add_master_storage_domain: ESC[32mSuccessESC[0m (in 0:01:09)
> 2017-11-18
> 11:15:25,472::log_utils.py::start_log_task::655::nose::INFO::  #
> add_secondary_storage_domains: ESC[0mESC[0m
> 2017-11-18
> 11:16:47,455::log_utils.py::end_log_task::670::nose::INFO::  #
> add_secondary_storage_domains: ESC[32mSuccessESC[0m (in 0:01:21)
> 2017-11-18
> 11:16:47,456::log_utils.py::start_log_task::655::nose::INFO::  #
> import_templates: ESC[0mESC[0m
> 2017-11-18
> 11:16:47,513::testlib.py::stopTest::198::nose::INFO::*
> SKIPPED: Exported domain generation not supported yet
> 2017-11-18
> 11:16:47,514::log_utils.py::end_log_task::670::nose::INFO::  #
> import_templates: ESC[32mSuccessESC[0m (in 0:00:00)
> 2017-11-18
> 11:16:47,514::log_utils.py::start_log_task::655::nose::INFO::  #
> verify_add_all_hosts: ESC[0mESC[0m
> 2017-11-18
> 
> 11:16:47,719::testlib.py::assert_equals_within::227::ovirtlago.testlib::ERROR::
>
> * Unhandled exception in <function <lambda> at 0x2909230>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> line 219, in assert_equals_within
> res = func()
>   File
> 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 430, in 
> lambda: _all_hosts_up(hosts_service, total_hosts)
>   File
> 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 129, in _all_hosts_up
> _check_problematic_hosts(hosts_service)
>   File
> 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 149, in _check_problematic_hosts
> raise RuntimeError(dump_hosts)
> RuntimeError: 1 hosts failed installation:
> lago-basic-suite-master-host-1: non_operational
>
> 2017-11-18
> 11:16:47,722::utils.py::wrapper::480::lago.utils::DEBUG::Looking
> for a workdir
> 2017-11-18
> 
> 11:16:47,722::workdir.py::resolve_workdir_path::361::lago.workdir::DEBUG::Checking
> if /dev/shm/ost/deployment-basic-suite-master is a workdir
> 2017-11-18
> 11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::   
> * Collect artifacts: ESC[0mESC[0m
> 2017-11-18
> 11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::   
> * Collect artifacts: ESC[0mESC[0m
>
> vdsm host0:
>
> 2017-11-18 06:14:23,980-0500 INFO  (jsonrpc/0) [vdsm.api] START
> getDeviceList(storageType=3,
> guids=[u'360014059618895272774e97a2aaf5dd6'], checkStatus=False,
> options={}) from=:::192.168.201.4,45636,
> flow_id=ed8310a1-a7af-4a67-b351-8ff
> 364766b8a, task_id=6ced0092-34cd-49f0-aa0f-6aae498af37f (api:46)
> 2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.LVM] lvm
> pvs failed: 5 [] ['  Failed to find physical 

[ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [004_basic_sanity.vm_run ]

2017-11-20 Thread Dafna Ron
Hi,

We have a failure in OST on test 004_basic_sanity.vm_run.

It seems to be an error in the VM type, which is related to the patch reported.


Link to suspected patches: https://gerrit.ovirt.org/#/c/84343/

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922


Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922/artifact


(Relevant) error snippet from the log:




vdsm log:

2017-11-20 07:40:12,779-0500 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer]
Internal server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
606, in _handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
in _dynamicMethod
result = fn(*methodArgs)
  File "", line 2, in getAllVmStats
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48,
in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1341, in
getAllVmStats
statsList = self._cif.getAllVmStats()
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 508, in
getAllVmStats
return [v.getStats() for v in self.vmContainer.values()]
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1664, in
getStats
stats.update(self._getConfigVmStats())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1703, in
_getConfigVmStats
'vmType': self.conf['vmType'],
KeyError: 'vmType'
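
The traceback ends at a bare self.conf['vmType'] lookup, so any VM whose conf
dict lacks that key takes the whole getAllVmStats call down. A minimal sketch
of the obvious defensive pattern; this is not necessarily the fix that was
merged, and the 'kvm' fallback is an assumption:

    DEFAULT_VM_TYPE = 'kvm'  # assumed fallback, not vdsm's actual default

    def config_vm_stats(conf):
        """Build the config-derived stats dict without assuming 'vmType'.

        conf.get() with a default replaces the bare conf['vmType'] lookup
        that raised the KeyError above.
        """
        return {'vmType': conf.get('vmType', DEFAULT_VM_TYPE)}

    print(config_vm_stats({}))  # {'vmType': 'kvm'} instead of a KeyError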

engine log:

2017-11-20 07:43:07,675-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker)
[] Message received: {"jsonrpc": "2.0", "id":
"5bf12e5a-4a09-4999-a6ce-a7dd639d3833", "error": {"message": "Internal
JSON-RPC error:
 {'reason': \"'vmType'\"}", "code": -32603}}
2017-11-20 07:43:07,676-05 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Unexpected return
value: Status [code=-32603, message=Internal JSON-RPC error: {'r
eason': "'vmType'"}]
2017-11-20 07:43:07,676-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Failed in
'GetAllVmStatsVDS' method
2017-11-20 07:43:07,676-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Command
'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-0, VdsIdV
DSCommandParametersBase:{hostId='1af28f2c-79db-4069-aa53-5bb46528c5e9'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
GetAllVmStatsVDS, error = Internal JSON-RPC error: {'reason':
"'vmType'"}, code = -32603
2017-11-20 07:43:07,676-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGeneric
Exception: VDSErrorException: Failed to GetAllVmStatsVDS, error =
Internal JSON-RPC error: {'reason': "'vmType'"}, code = -32603
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:81)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:223)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:193)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:112)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
Source) [vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor247.invoke(Unknown Source)
[:1.8.0_151]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_151]
at
org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at

Re: RGD:: Dev Role Permission

2017-11-20 Thread Eyal Edri
You should have the dev role; can you confirm that you can run tests?

On Mon, Nov 20, 2017 at 11:37 AM, Dhanjal Parth  wrote:

> Hey!
>
> I wanted to build a patch with custom parameters. And as this page
> 
> suggests I should have the 'dev role' permission to do so.
> Can you please help with the same?
>
> Regards
> Parth Dhanjal
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [ 002_bootstrap.verify_add_all_hosts ]

2017-11-20 Thread Yaniv Kaul
On Mon, Nov 20, 2017 at 3:10 PM, Dafna Ron  wrote:

> Hi,
>
> We had a failure in OST for test 002_bootstrap.verify_add_all_hosts.
>
> From the logs I can see that vdsm on host0 was reporting that it cannot
> find the physical volume but eventually the storage was created and is
> reported as responsive.
>
> However, Host1 is reported to have become non-operational with a storage
> domain does not exist error, and I think that there is a race.
>

I've opened https://bugzilla.redhat.com/show_bug.cgi?id=1514906 on this.


> I think that we create the storage domain while host1 is being installed
> and if the domain is not created and reported as activated in time, host1
> will become nonOperational.
>

And based on the above description, this is exactly the issue I've
described in the BZ.
Y.


> are we starting installation of host1 before host0 and storage are active?
>
> Link to suspected patches: I do not think that the patch reported is
> related to the error
>
> https://gerrit.ovirt.org/#/c/84133/
>
> Link to Job:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/
>
> Link to all logs:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/artifact/
>
>
> (Relevant) error snippet from the log:
>
>
>
> Lago log:
>
> 2017-11-18 11:15:25,472::log_utils.py::end_log_task::670::nose::INFO::  #
> add_master_storage_domain: ESC[32mSuccessESC[0m (in 0:01:09)
> 2017-11-18 11:15:25,472::log_utils.py::start_log_task::655::nose::INFO::
> # add_secondary_storage_domains: ESC[0mESC[0m
> 2017-11-18 11:16:47,455::log_utils.py::end_log_task::670::nose::INFO::  #
> add_secondary_storage_domains: ESC[32mSuccessESC[0m (in 0:01:21)
> 2017-11-18 11:16:47,456::log_utils.py::start_log_task::655::nose::INFO::
> # import_templates: ESC[0mESC[0m
> 2017-11-18 11:16:47,513::testlib.py::stopTest::198::nose::INFO::*
> SKIPPED: Exported domain generation not supported yet
> 2017-11-18 11:16:47,514::log_utils.py::end_log_task::670::nose::INFO::  #
> import_templates: ESC[32mSuccessESC[0m (in 0:00:00)
> 2017-11-18 11:16:47,514::log_utils.py::start_log_task::655::nose::INFO::
> # verify_add_all_hosts: ESC[0mESC[0m
> 2017-11-18 
> 11:16:47,719::testlib.py::assert_equals_within::227::ovirtlago.testlib::ERROR::
> * Unhandled exception in <function <lambda> at 0x2909230>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219,
> in assert_equals_within
> res = func()
>   File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 430, in 
> lambda: _all_hosts_up(hosts_service, total_hosts)
>   File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 129, in _all_hosts_up
> _check_problematic_hosts(hosts_service)
>   File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 149, in _check_problematic_hosts
> raise RuntimeError(dump_hosts)
> RuntimeError: 1 hosts failed installation:
> lago-basic-suite-master-host-1: non_operational
>
> 2017-11-18 11:16:47,722::utils.py::wrapper::480::lago.utils::DEBUG::Looking
> for a workdir
> 2017-11-18 
> 11:16:47,722::workdir.py::resolve_workdir_path::361::lago.workdir::DEBUG::Checking
> if /dev/shm/ost/deployment-basic-suite-master is a workdir
> 2017-11-18 11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::
> * Collect artifacts: ESC[0mESC[0m
> 2017-11-18 11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::
> * Collect artifacts: ESC[0mESC[0m
>
> vdsm host0:
>
> 2017-11-18 06:14:23,980-0500 INFO  (jsonrpc/0) [vdsm.api] START
> getDeviceList(storageType=3, guids=[u'360014059618895272774e97a2aaf5dd6'],
> checkStatus=False, options={}) from=:::192.168.201.4,45636,
> flow_id=ed8310a1-a7af-4a67-b351-8ff
> 364766b8a, task_id=6ced0092-34cd-49f0-aa0f-6aae498af37f (api:46)
> 2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.LVM] lvm pvs
> failed: 5 [] ['  Failed to find physical volume "/dev/mapper/
> 360014059618895272774e97a2aaf5dd6".'] (lvm:322)
> 2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.HSM] getPV failed
> for guid: 360014059618895272774e97a2aaf5dd6 (hsm:1973)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1970,
> in _getDeviceList
> pv = lvm.getPV(guid)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 852,
> in getPV
> raise se.InaccessiblePhysDev((pvName,))
> InaccessiblePhysDev: Multipath cannot access physical device(s):
> "devices=(u'360014059618895272774e97a2aaf5dd6',)"
> 2017-11-18 

Re: Default assignee for bugzilla component

2017-11-20 Thread Eyal Edri
Yaniv,
Who has permissions to change that on the oVirt product?

On Mon, Nov 20, 2017 at 3:12 PM, Sahina Bose  wrote:

> Hi!
>
> Could you change the default assignee on Product: cockpit-ovirt
> Component:gdeploy to go...@redhat.com?
>
> thanks
> sahina
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Default assignee for bugzilla component

2017-11-20 Thread Sahina Bose
Hi!

Could you change the default assignee on Product: cockpit-ovirt
Component:gdeploy to go...@redhat.com?

thanks
sahina
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [ 002_bootstrap.verify_add_all_hosts ]

2017-11-20 Thread Dafna Ron
Hi,

We had a failure in OST for test 002_bootstrap.verify_add_all_hosts.

From the logs I can see that vdsm on host0 was reporting that it cannot
find the physical volume, but eventually the storage was created and is
reported as responsive.

However, Host1 is reported to have become non-operational with a storage
domain does not exist error, and I think that there is a race.

I think that we create the storage domain while host1 is being installed
and if the domain is not created and reported as activated in time,
host1 will become nonOperational.

are we starting installation of host1 before host0 and storage are active?
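
If the ordering theory above is right, the usual remedy is to block until the
storage domain is actually active before adding the second host. A minimal
polling sketch in the spirit of ovirtlago.testlib's assert_equals_within; the
helper names in the usage comment are hypothetical:

    import time

    def wait_for(condition, timeout=600, interval=5):
        """Poll condition() until it returns True or the timeout expires.

        Sequencing 'storage domain is active' before 'add host1' with a
        helper like this would close the suspected race.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return
            time.sleep(interval)
        raise RuntimeError('condition not met within %s seconds' % timeout)

    # hypothetical usage in the suite:
    # wait_for(lambda: master_storage_domain_status() == 'active')
    # add_host('lago-basic-suite-master-host-1')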

Link to suspected patches: I do not think that the patch reported is
related to the error

https://gerrit.ovirt.org/#/c/84133/


Link to Job:

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/


Link to all logs:

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/artifact/


(Relevant) error snippet from the log:


Lago log:

2017-11-18 11:15:25,472::log_utils.py::end_log_task::670::nose::INFO:: 
# add_master_storage_domain: ESC[32mSuccessESC[0m (in 0:01:09)
2017-11-18
11:15:25,472::log_utils.py::start_log_task::655::nose::INFO::  #
add_secondary_storage_domains: ESC[0mESC[0m
2017-11-18 11:16:47,455::log_utils.py::end_log_task::670::nose::INFO:: 
# add_secondary_storage_domains: ESC[32mSuccessESC[0m (in 0:01:21)
2017-11-18
11:16:47,456::log_utils.py::start_log_task::655::nose::INFO::  #
import_templates: ESC[0mESC[0m
2017-11-18 11:16:47,513::testlib.py::stopTest::198::nose::INFO::*
SKIPPED: Exported domain generation not supported yet
2017-11-18 11:16:47,514::log_utils.py::end_log_task::670::nose::INFO:: 
# import_templates: ESC[32mSuccessESC[0m (in 0:00:00)
2017-11-18
11:16:47,514::log_utils.py::start_log_task::655::nose::INFO::  #
verify_add_all_hosts: ESC[0mESC[0m
2017-11-18
11:16:47,719::testlib.py::assert_equals_within::227::ovirtlago.testlib::ERROR::
* Unhandled exception in <function <lambda> at 0x2909230>
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
219, in assert_equals_within
res = func()
  File
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 430, in 
lambda: _all_hosts_up(hosts_service, total_hosts)
  File
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 129, in _all_hosts_up
_check_problematic_hosts(hosts_service)
  File
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 149, in _check_problematic_hosts
raise RuntimeError(dump_hosts)
RuntimeError: 1 hosts failed installation:
lago-basic-suite-master-host-1: non_operational

2017-11-18
11:16:47,722::utils.py::wrapper::480::lago.utils::DEBUG::Looking for a
workdir
2017-11-18
11:16:47,722::workdir.py::resolve_workdir_path::361::lago.workdir::DEBUG::Checking
if /dev/shm/ost/deployment-basic-suite-master is a workdir
2017-11-18
11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::*
Collect artifacts: ESC[0mESC[0m
2017-11-18
11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::*
Collect artifacts: ESC[0mESC[0m

vdsm host0:

2017-11-18 06:14:23,980-0500 INFO  (jsonrpc/0) [vdsm.api] START
getDeviceList(storageType=3,
guids=[u'360014059618895272774e97a2aaf5dd6'], checkStatus=False,
options={}) from=:::192.168.201.4,45636,
flow_id=ed8310a1-a7af-4a67-b351-8ff
364766b8a, task_id=6ced0092-34cd-49f0-aa0f-6aae498af37f (api:46)
2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.LVM] lvm pvs
failed: 5 [] ['  Failed to find physical volume
"/dev/mapper/360014059618895272774e97a2aaf5dd6".'] (lvm:322)
2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.HSM] getPV
failed for guid: 360014059618895272774e97a2aaf5dd6 (hsm:1973)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line
1970, in _getDeviceList
pv = lvm.getPV(guid)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 852,
in getPV
raise se.InaccessiblePhysDev((pvName,))
InaccessiblePhysDev: Multipath cannot access physical device(s):
"devices=(u'360014059618895272774e97a2aaf5dd6',)"
2017-11-18 06:14:24,389-0500 INFO  (jsonrpc/0) [vdsm.api] FINISH
getDeviceList return={'devList': [{'status': 'unknown', 'vendorID':
'LIO-ORG', 'capacity': '21474836480', 'fwrev': '4.0',
'discard_zeroes_data': 0, 'vgUUID': '', 'pvsize': '', 'pathlist':
[{'initiatorname': u'default', 'connection': u'192.168.200.4', 'iqn':
u'iqn.2014-07.org.ovirt:storage', 'portal': '1', 'user': u'username',
'password': '', 'port': '3260'}, {'initiatorname': u'default',
'connection': u'192.168.201.4', 'iqn': u'iqn.2014-07.org.ovirt:storage',
'portal': '1', 'user': u'username', 'password': '', 'port':
'3260'}], 

[CQ]: 84343,3 (vdsm) failed "ovirt-master" system tests

2017-11-20 Thread oVirt Jenkins
Change 84343,3 (vdsm) is probably the reason behind recent system test failures
in the "ovirt-master" change queue and needs to be fixed.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/84343/3

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1773) Fwd: RGD:: Dev Role Permission

2017-11-20 Thread Daniel Belenky (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=35364#comment-35364
 ] 

Daniel Belenky commented on OVIRT-1773:
---

[~gbenh...@redhat.com] can you please reply to his mail?

> Fwd: RGD:: Dev Role Permission
> ---
>
> Key: OVIRT-1773
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1773
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Daniel Belenky
>Assignee: infra
>
> -- Forwarded message --
> From: Dhanjal Parth 
> Date: Mon, Nov 20, 2017 at 11:37 AM
> Subject: RGD:: Dev Role Permission
> To: infra@ovirt.org
> Hey!
> I wanted to build a patch with custom parameters. And as this page
> 
> suggests I should have the 'dev role' permission to do so.
> Can you please help with the same?
> Regards
> Parth Dhanjal
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> DANIEL BELENKY
> RHV DEVOPS
> EMEA VIRTUALIZATION R&D
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1773) Fwd: RGD:: Dev Role Permission

2017-11-20 Thread Gal Ben Haim (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gal Ben Haim updated OVIRT-1773:

Resolution: Fixed
Status: Done  (was: To Do)

> Fwd: RGD:: Dev Role Permission
> ---
>
> Key: OVIRT-1773
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1773
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Daniel Belenky
>Assignee: infra
>
> -- Forwarded message --
> From: Dhanjal Parth 
> Date: Mon, Nov 20, 2017 at 11:37 AM
> Subject: RGD:: Dev Role Permission
> To: infra@ovirt.org
> Hey!
> I wanted to build a patch with custom parameters. And as this page
> 
> suggests I should have the 'dev role' permission to do so.
> Can you please help with the same?
> Regards
> Parth Dhanjal
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> DANIEL BELENKY
> RHV DEVOPS
> EMEA VIRTUALIZATION R&D
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1773) Fwd: RGD:: Dev Role Permission

2017-11-20 Thread Gal Ben Haim (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=35363#comment-35363
 ] 

Gal Ben Haim commented on OVIRT-1773:
-

I've added the dev role to user "dparth".

> Fwd: RGD:: Dev Role Permission
> ---
>
> Key: OVIRT-1773
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1773
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Daniel Belenky
>Assignee: infra
>
> -- Forwarded message --
> From: Dhanjal Parth 
> Date: Mon, Nov 20, 2017 at 11:37 AM
> Subject: RGD:: Dev Role Permission
> To: infra@ovirt.org
> Hey!
> I wanted to build a patch with custom parameters. And as this page
> 
> suggests I should have the 'dev role' permission to do so.
> Can you please help with the same?
> Regards
> Parth Dhanjal
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> DANIEL BELENKY
> RHV DEVOPS
> EMEA VIRTUALIZATION R&D
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1773) Fwd: RGD:: Dev Role Permission

2017-11-20 Thread Daniel Belenky (oVirt JIRA)
Daniel Belenky created OVIRT-1773:
-

 Summary: Fwd: RGD:: Dev Role Permission
 Key: OVIRT-1773
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1773
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Daniel Belenky
Assignee: infra


-- Forwarded message --
From: Dhanjal Parth 
Date: Mon, Nov 20, 2017 at 11:37 AM
Subject: RGD:: Dev Role Permission
To: infra@ovirt.org


Hey!

I wanted to build a patch with custom parameters. And as this page

suggests I should have the 'dev role' permission to do so.
Can you please help with the same?

Regards
Parth Dhanjal

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra




-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R&D




--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100072)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


RGD:: Dev Role Permission

2017-11-20 Thread Dhanjal Parth
Hey!

I wanted to build a patch with custom parameters. And as this page

suggests I should have the 'dev role' permission to do so.
Can you please help with the same?

Regards
Parth Dhanjal
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 84359, 1 (ovirt-engine-api-model) failed "ovirt-4.2" system tests

2017-11-20 Thread oVirt Jenkins
Change 84359,1 (ovirt-engine-api-model) is probably the reason behind recent
system test failures in the "ovirt-4.2" change queue and needs to be fixed.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/84359/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/73/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra