Re: [ovirt-devel] [vdsm][RFC] reconsidering branching out ovirt-4.2

2018-02-01 Thread Yaniv Bronhaim
On Mon, Jan 29, 2018 at 9:40 AM Francesco Romani  wrote:

> Hi all,
>
>
> It is time again to reconsider branching out the 4.2 stable branch.
>
> So far we decided to *not* branch out, and we are taking tags for ovirt
> 4.2 releases from master branch.
>
> This means we are merging safe and/or stabilization patches only in master.
>
>
> I think it is time to reconsider this decision and branch out for 4.2,
> because of two reasons:
>
> 1. it sends a clearer signal that 4.2 is going in stabilization mode
>
> 2. we have requests from virt team, which wants to start working on the
> next cycle features.
>

Is this the only reason to branch out? Shouldn't the "next cycle features"
be part of 4.2 as well?
Do other teams also plan to push new features to 4.2 that are not yet
stable?
If not, I don't see any reason to branch out or to backport "not stable"
patches to the 4.2 branch. We can keep it stable and avoid this branch-out,
unless we want to add a big new feature that might cause regressions in the
current stable 4.2 code.


>
> If we decide to branch out, I'd start the new branch on monday, February
> 5 (1 week from now).
>
>
> The discussion is open, please share your acks/nacks for branching out,
> and for the branching date.
>
>
> I myself am inclined to branch out, so if no one chimes in (!!) I'll
> execute the above plan.
>
>
> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
-- 
Yaniv Bronhaim.

[ovirt-devel] [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 01-02-2018 ] [ 002_bootstrap.verify_add_all_hosts ]

2018-02-01 Thread Dafna Ron
Hi,

We failed the CQ test 002_bootstrap.verify_add_all_hosts for the master
branch of the vdsm project.

Looking at the log, vdsm cannot find the master storage domain, and engine
puts the host into a non-operational state.

Although on the surface the patch seems related, the master storage domain
is iSCSI while the patch is related to gluster.

I do not think there is a connection between the patch and the failure, but
can you please have a look to make sure?

*Link and headline of suspected patch:*
https://gerrit.ovirt.org/#/c/69668/ - gluster: Fix error when brick is on a btrfs subvolume

*Link to Job:*
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5180/

*Link to all logs:*
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5180/artifact/

*(Relevant) error snippet from the log:*

vdsm:

2018-02-01 03:13:49,211-0500 INFO  (jsonrpc/4) [vdsm.api] START createStorageDomain(storageType=3, sdUUID=u'077add35-9171-45d5-b6de-79cc5a853c36', domainName=u'iscsi', typeSpecificArg=u'IdW3HG-K1Af-e0d3-u2O3-rGle-8fk5-ACNk6C', domClass=1, domVersion=u'4', options=None) from=:::192.168.201.4,58530, flow_id=22d4ffd8, task_id=2ce6dd52-3d28-4532-abbf-d78d52af6cda (api:46)

2018-02-01 03:14:40,223-0500 INFO  (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u'2570c0c9-f872-4e49-964a-ee533a79c3f2', hostID=1, msdUUID=u'077add35-9171-45d5-b6de-79cc5a853c36', masterVersion=1, domainsMap={u'077add35-9171-45d5-b6de-79cc5a853c36': u'active'}, options=None) from=:::192.168.201.4,36310, flow_id=19e9aa89, task_id=878419a0-c5ce-4e35-aed5-b27d56b2886e (api:46)

2018-02-01 03:14:40,225-0500 INFO  (jsonrpc/7) [storage.StoragePoolMemoryBackend] new storage pool master version 1 and domains map {u'077add35-9171-45d5-b6de-79cc5a853c36': u'Active'} (spbackends:449)

2018-02-01 03:14:40,225-0500 INFO  (jsonrpc/7) [storage.StoragePool] updating pool 2570c0c9-f872-4e49-964a-ee533a79c3f2 backend from type NoneType instance 0x7f45919e3f20 to type StoragePoolMemoryBackend instance 0x45411b0 (sp:157)

2018-02-01 03:14:40,226-0500 INFO  (jsonrpc/7) [storage.StoragePool] Connect host #1 to the storage pool 2570c0c9-f872-4e49-964a-ee533a79c3f2 with master domain: 077add35-9171-45d5-b6de-79cc5a853c36 (ver = 1) (sp:692)

2018-02-01 03:14:40,462-0500 INFO  (jsonrpc/7) [vdsm.api] FINISH connectStoragePool error=Cannot find master domain: u'spUUID=2570c0c9-f872-4e49-964a-ee533a79c3f2, msdUUID=077add35-9171-45d5-b6de-79cc5a853c36' from=:::192.168.201.4,36310, flow_id=19e9aa89, task_id=878419a0-c5ce-4e35-aed5-b27d56b2886e (api:50)

2018-02-01 03:14:40,462-0500 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='878419a0-c5ce-4e35-aed5-b27d56b2886e') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "", line 2, in connectStoragePool
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1032, in connectStoragePool
    spUUID, hostID, msdUUID, masterVersion, domainsMap)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1094, in _connectStoragePool
    res = pool.connect(hostID, msdUUID, masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 704, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1275, in __rebuild
    self.setMasterDomain(msdUUID, masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1488, in setMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: u'spUUID=2570c0c9-f872-4e49-964a-ee533a79c3f2, msdUUID=077add35-9171-45d5-b6de-79cc5a853c36'

2018-02-01 03:14:40,466-0500 INFO  (jsonrpc/7) [storage.TaskManager.Task] (Task='878419a0-c5ce-4e35-aed5-b27d56b2886e') aborting: Task is aborted: "Cannot find master domain: u'spUUID=2570c0c9-f872-4e49-964a-ee533a79c3f2, msdUUID=077add35-9171-45d5-b6de-79cc5a853c36'" - code 304 (task:1181)

2018-02-01 03:14:40,467-0500 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH connectStoragePool error=Cannot find master domain: u'spUUID=2570c0c9-f872-4e49-964a-ee533a79c3f2, msdUUID=077add35-9171-45d5-b6de-79cc5a853c36' (dispatcher:82)

2018-02-01 03:14:40,467-0500 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.connect failed (error 304) in 0.25 seconds (__init__:573)
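For readers unfamiliar with the code path in the traceback, here is a heavily simplified sketch of the connect / __rebuild / setMasterDomain chain. This is not vdsm's actual implementation; the class bodies and the `attached_domains` attribute are illustrative assumptions, only the method names and the raised exception come from the traceback above:

```python
# Simplified sketch of the failing call chain: StoragePool.connect()
# rebuilds its view of the pool and raises StoragePoolMasterNotFound
# when the requested master domain UUID is not among the domains the
# host can actually see.

class StoragePoolMasterNotFound(Exception):
    code = 304  # error code reported by the dispatcher

    def __init__(self, spUUID, msdUUID):
        super().__init__(
            "Cannot find master domain: u'spUUID=%s, msdUUID=%s'"
            % (spUUID, msdUUID))


class StoragePool:
    def __init__(self, spUUID, attached_domains):
        self.spUUID = spUUID
        # Hypothetical map of sdUUID -> domain object visible to the host.
        self.attached_domains = attached_domains

    def connect(self, hostID, msdUUID, masterVersion):
        # Mirrors sp.py connect() -> __rebuild() from the traceback.
        self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)

    def __rebuild(self, msdUUID, masterVersion):
        self.setMasterDomain(msdUUID, masterVersion)

    def setMasterDomain(self, msdUUID, masterVersion):
        # If the host cannot see the master domain (e.g. the iSCSI LUN
        # never became visible), the connect fails with error 304.
        if msdUUID not in self.attached_domains:
            raise StoragePoolMasterNotFound(self.spUUID, msdUUID)
        self.masterDomain = self.attached_domains[msdUUID]
```

In the failed run, the host reported the domain map as active, yet the lookup for msdUUID=077add35-... still failed, which is why the failure looks like a storage-visibility problem rather than anything gluster-related.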

*engine:*

2018-02-01 03:14:40,603-05 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-70) [ba52086] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host lago-basic-suite-master-host-

[ovirt-devel] [ OST Failure Report ] [ oVirt Master (otopi) ] [ 01-02-2018 ] [ 001_initialize_engine.initialize_engine/001_upgrade_engine.test_initialize_engine ]

2018-02-01 Thread Dafna Ron
Hi,

We are failing to initialize the engine on both the basic and upgrade suites.

Can you please check?

*Link and headline of suspected patch:*
https://gerrit.ovirt.org/#/c/86679/ - core: Check Sequence before/after

*Link to Job:*
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5187/

*Link to all logs:*
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5187/artifact/

*(Relevant) error snippet from the log:*

2018-02-01 10:38:27,057-0500 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Version: otopi-1.7.7_master (otopi-1.7.7-0.0.master.20180201063428.git81ce9b7.el7.centos)

2018-02-01 10:38:27,058-0500 ERROR otopi.context context.check:833 "before" parameter of method otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_provider is a string, should probably be a tuple. Perhaps a missing comma?

2018-02-01 10:38:27,058-0500 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND methodinfo: {'priority': 5000, 'name': None, 'before': 'osetup.ovn.provider.service.restart', 'after': ('osetup.pki.ca.available', 'osetup.ovn.services.restart'), 'method': >, 'condition':  of >, 'stage': 11}

2018-02-01 10:38:27,059-0500 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/otopi/plugins/otopi/core/misc.py", line 61, in _setup
    self.context.checkSequence()
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 844, in checkSequence
    raise RuntimeError(_('Found bad "before" or "after" parameters'))
RuntimeError: Found bad "before" or "after" parameters

2018-02-01 10:38:27,059-0500 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Environment setup': Found bad "before" or "after" parameters
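The root cause flagged by context.check is a classic Python pitfall: parentheses without a trailing comma do not create a one-element tuple, so the 'before' value ends up as a plain string. A minimal, self-contained reproduction of this kind of check (the `check_sequence` function and the dict literals are illustrative, not otopi's actual API):

```python
# Sketch of a sequence check like the one otopi performs: 'before' and
# 'after' must be tuples (or lists) of event names, never bare strings.

def check_sequence(methodinfo):
    for key in ("before", "after"):
        value = methodinfo.get(key)
        if value is not None and not isinstance(value, (tuple, list)):
            # Mirrors the error seen in the log above.
            raise RuntimeError('Found bad "before" or "after" parameters')


# Buggy: ('osetup.ovn.provider.service.restart') is just a parenthesized
# string - without a comma, the parentheses do not make a tuple.
buggy = {"before": ("osetup.ovn.provider.service.restart")}

# Fixed: the trailing comma turns it into a one-element tuple.
fixed = {"before": ("osetup.ovn.provider.service.restart",)}
```

Running `check_sequence(buggy)` raises the RuntimeError from the log, while `check_sequence(fixed)` passes, which matches the log's hint "Perhaps a missing comma?".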