[ovirt-devel] engine-setup upgrade issue: insert or update on table "cluster" violates foreign key constraint "cluster_default_network_provider_id_fkey"

2017-09-28 Thread Dominik Holler
Hi all,
I introduced a possible upgrade issue in engine-setup.
If engine-setup fails while upgrading the database with:

[ ERROR ] Failed to execute stage 'Misc configuration': insert or
update on table "cluster" violates foreign key constraint
"cluster_default_network_provider_id_fkey" DETAIL:  Key
(default_network_provider_id)=(b32f7988-ca21-4d1b-b116-55d3f5794534) is
not present in table "providers".

please create the missing provider in SQL with:

select InsertProvider(
v_id:='b32f7988-ca21-4d1b-b116-55d3f5794534',
v_name:='ovirt-provider-ovn',
v_description:='oVirt network provider for OVN',
v_url:='https://localhost:9696',
v_provider_type:='EXTERNAL_NETWORK',
v_auth_required:=False,
v_auth_username:=null,
v_auth_password:=null,
v_custom_properties:=null,
v_auth_url:=null
);

Please change the id of the new provider to the id logged in the error
message.
A suitable SQL prompt can be created by

sudo su - postgres -c "psql -U postgres engine"

I am going to post a patch which will fix this issue.

This issue arises if the ovirt-provider-ovn provider created by
engine-setup is gone, e.g. was manually removed.
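When several installations are affected, it can be handy to generate the statement instead of editing the id by hand. Below is a minimal Python sketch, not part of engine-setup; the regex and the helper name are illustrative, and the template simply reproduces the InsertProvider call above:

```python
import re

# Sketch: extract the missing provider id from the engine-setup error text
# and render the InsertProvider call, so the id need not be copied by hand.
ERROR_RE = re.compile(r"\(default_network_provider_id\)=\(([0-9a-f-]{36})\)")

SQL_TEMPLATE = """select InsertProvider(
v_id:='{provider_id}',
v_name:='ovirt-provider-ovn',
v_description:='oVirt network provider for OVN',
v_url:='https://localhost:9696',
v_provider_type:='EXTERNAL_NETWORK',
v_auth_required:=False,
v_auth_username:=null,
v_auth_password:=null,
v_custom_properties:=null,
v_auth_url:=null
);"""

def insert_provider_sql(error_text):
    match = ERROR_RE.search(error_text)
    if match is None:
        raise ValueError("no default_network_provider_id found in error text")
    return SQL_TEMPLATE.format(provider_id=match.group(1))

error = ('Key (default_network_provider_id)='
         '(b32f7988-ca21-4d1b-b116-55d3f5794534) is not present in table '
         '"providers".')
print(insert_provider_sql(error).splitlines()[1])
```

The rendered statement can then be pasted into the psql prompt described above.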

Dominik
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Missing ovirtmgmt (was: Change in ovirt-system-tests[master]: he: Run hosted-engine --vm-status)

2017-10-02 Thread Dominik Holler
My point of view is the following; please point out any weak points.

The error says that the network ovirtmgmt is expected but not found
on host1. As far as I understand the scenario, this error message is
correct, because ovirtmgmt should be available on host1:

host1 is in cluster Default:

[root@hc-engine ~]# su - postgres -c "psql -U postgres engine -c 'select vds_name,cluster_name,cluster_id from vds;'"
             vds_name             | cluster_name |          cluster_id
----------------------------------+--------------+------------------------------
 lago-hc-basic-suite-master-host2 | Default      | 59d1f257-012b-01e6-0019-0133
 lago-hc-basic-suite-master-host1 | Default      | 59d1f257-012b-01e6-0019-0133
 lago_basic_suite_hc_host0        | Default      | 59d1f257-012b-01e6-0019-0133
(3 rows)


[root@hc-engine ~]# su - postgres -c "psql -U postgres engine -c 'select id,name from network;'"
              id              |   name
------------------------------+-----------
 ----0009                     | ovirtmgmt
(1 row)

ovirtmgmt (id 9) is required and is the management network in cluster Default:

[root@hc-engine ~]# su - postgres -c "psql -U postgres engine -c 'select network_id,cluster_id,status,required,management from network_cluster;'"
 network_id | cluster_id                   | status | required | management
------------+------------------------------+--------+----------+------------
 ----0009   | 59d1f257-012b-01e6-0019-0133 |      1 | t        | t
(1 row)


but ovirtmgmt is not available on host1: 

[root@hc-engine ~]# ssh lago-hc-basic-suite-master-host1 "python -c \"from vdsm.network.api import network_caps; print network_caps()['networks']\""
root@lago-hc-basic-suite-master-host1's password:
{}


If this reasoning is correct, the next step is to analyze why
ovirtmgmt is not available on host1.
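The host-side check that produces the NetworkNotFoundError in the traceback further below can be sketched like this; note this is a simplified stand-in mirroring the names in the traceback, not the real vdsm code:

```python
# Sketch of the lookup that fails on host1: vdsm's ovn_config raises
# NetworkNotFoundError when the requested network is absent from the
# network_caps() result. Simplified stand-in, not vdsm itself.
class NetworkNotFoundError(Exception):
    pass

def get_network(caps, net_name):
    networks = caps.get('networks', {})
    if net_name not in networks:
        raise NetworkNotFoundError(net_name)
    return networks[net_name]

# host1 reports an empty 'networks' dict, so the lookup fails:
caps_host1 = {'networks': {}}
try:
    get_network(caps_host1, 'ovirtmgmt')
except NetworkNotFoundError as e:
    print('missing: %s' % e)
```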




On Sun, 1 Oct 2017 16:48:14 +0300
Dan Kenigsberg  wrote:

> On Sun, Oct 1, 2017 at 12:19 PM, Yedidyah Bar David 
> wrote:
> > Hi all,
> >
> > On Sun, Oct 1, 2017 at 10:15 AM, Code Review 
> > wrote:  
> >> Jenkins CI posted comments on this change.
> >>
> >> View Change
> >>
> >> Patch set 1:Continuous-Integration -1
> >>
> >> Build Failed
> >>
> >> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/1886/
> >> : FAILURE  
> >
> > Above was triggered by [1].
> >
> > It failed with [2]:
> >
> > TASK [ovirt-provider-ovn-driver : Configure OVN for oVirt]
> > * fatal: [lago-he-basic-suite-master-host1]:
> > FAILED! => {"changed": true, "cmd": ["vdsm-tool", "ovn-config",
> > "192.168.200.99", "ovirtmgmt"], "delta": "0:00:00.623565", "end":
> > "2017-10-01 03:15:11.179717", "failed": true, "msg": "non-zero
> > return code", "rc": 1, "start": "2017-10-01 03:15:10.556152",
> > "stderr": "Traceback (most recent call last):\n  File
> > \"/usr/bin/vdsm-tool\", line 219, in main\n return
> > tool_command[cmd][\"command\"](*args)\n  File
> > \"/usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py\", line
> > 58, in ovn_config\nip_address =
> > get_ip_addr(get_network(network_caps(), net_name))\n  File
> > \"/usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py\", line
> > 79, in get_network\nraise
> > NetworkNotFoundError(net_name)\nNetworkNotFoundError: ovirtmgmt",
> > "stderr_lines": ["Traceback (most recent call last):", "  File
> > \"/usr/bin/vdsm-tool\", line 219, in main", "return
> > tool_command[cmd][\"command\"](*args)", "  File
> > \"/usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py\", line
> > 58, in ovn_config", "ip_address =
> > get_ip_addr(get_network(network_caps(), net_name))", "  File
> > \"/usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py\", line
> > 79, in get_network", "raise NetworkNotFoundError(net_name)",
> > "NetworkNotFoundError: ovirtmgmt"], "stdout": "", "stdout_lines":
> > []}
> >
> > Meaning, 'ovirtmgmt' is missing.
> > In host-deploy [3] of this host, I see that the engine asked
> > host-deploy to configure ovirtmgmt:
> >
> > 2017-10-01 03:14:36,036-0400 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND   ### Customization phase, use 'install' to proceed
> > 2017-10-01 03:14:36,036-0400 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND   ### COMMAND>
> > 2017-10-01 03:14:36,036-0400 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND   **%QHidden: FALSE
> > 2017-10-01 03:14:36,037-0400 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND   ***Q:STRING CUSTOMIZATION_COMMAND
> > 2017-10-01 03:14:36,037-0400 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND   **%QEnd: CUSTOMIZATION_COMMAND
> > 2017-10-01 03:1

Re: [ovirt-devel] Missing ovirtmgmt (was: Change in ovirt-system-tests[master]: he: Run hosted-engine --vm-status)

2017-10-02 Thread Dominik Holler
During install, the management network is configured after the Ansible
hostDeploy playbook has run. So there might be no management network
while the Ansible hostDeploy playbook is running; thanks for pointing
me there.
[1] uses the IP address instead of the network name during install,
which should improve the behavior.

Unfortunately, there is currently another problem in running
he-basic-suite-master, which blocks verification.


[1]
  https://gerrit.ovirt.org/#/c/82487/


On Mon, 2 Oct 2017 11:05:35 +0200
Dominik Holler  wrote:

> My point of view is the following; please point out any weak points.
> 
> The error says that the network ovirtmgmt is expected but not found
> on host1. As far as I understand the scenario, this error message is
> correct, because ovirtmgmt should be available on host1:
> 
> host1 is in cluster Default:
> 
> [root@hc-engine ~]# su - postgres -c "psql -U postgres engine -c 'select vds_name,cluster_name,cluster_id from vds;'"
>              vds_name             | cluster_name |          cluster_id
> ----------------------------------+--------------+------------------------------
>  lago-hc-basic-suite-master-host2 | Default      | 59d1f257-012b-01e6-0019-0133
>  lago-hc-basic-suite-master-host1 | Default      | 59d1f257-012b-01e6-0019-0133
>  lago_basic_suite_hc_host0        | Default      | 59d1f257-012b-01e6-0019-0133
> (3 rows)
> 
> 
> [root@hc-engine ~]# su - postgres -c "psql -U postgres engine -c 'select id,name from network;'"
>               id              |   name
> ------------------------------+-----------
>  ----0009                     | ovirtmgmt
> (1 row)
> 
> ovirtmgmt (id 9) is required and is the management network in cluster Default:
> 
> [root@hc-engine ~]# su - postgres -c "psql -U postgres engine -c 'select network_id,cluster_id,status,required,management from network_cluster;'"
>  network_id | cluster_id                   | status | required | management
> ------------+------------------------------+--------+----------+------------
>  ----0009   | 59d1f257-012b-01e6-0019-0133 |      1 | t        | t
> (1 row)
> 
> 
> but ovirtmgmt is not available on host1: 
> 
> [root@hc-engine ~]# ssh lago-hc-basic-suite-master-host1 "python -c \"from vdsm.network.api import network_caps; print network_caps()['networks']\""
> root@lago-hc-basic-suite-master-host1's password:
> {}
> 
> 
> If this reasoning is correct, the next step is to analyze why
> ovirtmgmt is not available on host1.
> 
> 
> 
> 
> On Sun, 1 Oct 2017 16:48:14 +0300
> Dan Kenigsberg  wrote:
> 
> > On Sun, Oct 1, 2017 at 12:19 PM, Yedidyah Bar David
> >  wrote:  
> > > Hi all,
> > >
> > > On Sun, Oct 1, 2017 at 10:15 AM, Code Review 
> > > wrote:
> > >> Jenkins CI posted comments on this change.
> > >>
> > >> View Change
> > >>
> > >> Patch set 1:Continuous-Integration -1
> > >>
> > >> Build Failed
> > >>
> > >> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/1886/
> > >> : FAILURE
> > >
> > > Above was triggered by [1].
> > >
> > > It failed with [2]:
> > >
> > > TASK [ovirt-provider-ovn-driver : Configure OVN for oVirt]
> > > * fatal: [lago-he-basic-suite-master-host1]:
> > > FAILED! => {"changed": true, "cmd": ["vdsm-tool", "ovn-config",
> > > "192.168.200.99", "ovirtmgmt"], "delta": "0:00:00.623565", "end":
> > > "2017-10-01 03:15:11.179717", "failed": true, "msg": "non-zero
> > > return code", "rc": 1, "start": "2017-10-01 03:15:10.556152",
> > > "stderr": "Traceback (most recent call last):\n  File
> > > \"/usr/bin/vdsm-tool\", line 219, in main\n return
> > > tool_command[cmd][\"command\"](*args)\n  File
> > > \"/usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py\", line
> > > 58, in ovn_config\nip_address =
> > > get_ip_addr(get_network(network_caps(), net_name))\n  File
> > > \"/usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py\", line
> > > 79, in get_network\nraise
> > > NetworkNotFoundError(net_name)\nNetworkNotFoundError: ovirtmgmt",
> > > "stderr_lines": ["Trac

[ovirt-devel] OST Failure he-basic

2017-10-02 Thread Dominik Holler
Hi all,
OST he-basic currently fails [1] during hosted-engine setup while
connecting the storage pool, with the message:

Failed to execute stage 'Misc configuration': 'functools.partial' object has no
attribute 'getAllTasksStatuses'

Since the lago-he-basic-suite-master-engine VM is inaccessible after
the failure, I have no idea how to locate the problem.
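The error class itself is easy to reproduce in isolation: a functools.partial object does not carry the attributes of the object it wraps. The sketch below is a generic illustration (the Pool class and method names are invented for the example, not the actual hosted-engine or vdsm code):

```python
import functools

# Generic reproduction of the error above: wrapping an object's method in
# functools.partial and then treating the partial as if it were the
# original object loses every other attribute, such as
# getAllTasksStatuses here.
class Pool(object):
    def connect(self, name):
        return 'connected %s' % name

    def getAllTasksStatuses(self):
        return {}

pool = Pool()
wrapped = functools.partial(pool.connect, 'storage')

print(wrapped())                                # the call itself works
print(hasattr(wrapped, 'getAllTasksStatuses'))  # the attribute is gone
```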

Dominik


[1]
  
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/1279/console


Re: [ovirt-devel] Unable to add host in oVirt 4.2 (clean install)

2017-10-05 Thread Dominik Holler
Please give me a hint if the issue occurs with current master.

On Wed, 4 Oct 2017 19:05:24 +0200
Martin Sivak  wrote:

> Hi,
> 
> I ran engine OST this afternoon and it passed. But I also saw Andrej's
> tracebacks and Jenny reported the same thing yesterday. No idea what
> is wrong, but both were related to hosted engine setup flow.
> 
> Martin
> 
> On Wed, Oct 4, 2017 at 11:56 AM, Marek Libra 
> wrote:
> 
> > Since Andrej ran into a similar issue today, I'm posting for others:
> >
> > When installing oVirt 4.2 first alpha release from scratch on fresh
> > Centos 7 minimal, adding host failed. Unfortunately, I don't have
> > log messages anymore, but I remember last errors were misleadingly
> > related to ovn (no matter the ovn was not configured on the engine).
> >
> > The issue was resolved by manual installation of python-netaddr
> > package on the host. Subsequently, adding the host in webadmin
> > passed.
> >
> > Which package should require python-netaddr?
> > ovirt-provider-ovn-driver?
> >
> > I hope it helps,
> > Marek
> >
> >



Re: [ovirt-devel] [oVirt 4.2 Localization Question #2] ACTION_TYPE_FAILED_INVALID_NIC_FILTER_PARAMETER_INTERFACE_VM

2017-10-24 Thread Dominik Holler
Please find the fix in https://gerrit.ovirt.org/#/c/83120/ .

On Tue, 24 Oct 2017 08:35:37 +1000
Yuko Katabami  wrote:

> On Tue, Oct 24, 2017 at 7:48 AM, Greg Sheremeta 
> wrote:
> 
> > I'm not sure, but from the English, I'm guessing neither. It looks
> > like the word "found" should be deleted or moved in that second
> > sentence. As is, it's not grammatically correct. So my guess is
> > "found" should go here: "not found on".
> >  
> 
> ​Thank you very much, Greg. That makes perfect sense.
> I will apply this interpretation to my translation for now, but hope
> it will be fixed in the source later on.
> 
> Kind regards,
> 
> Yuko​
> 
> >
> > Greg
> >
> > On Oct 23, 2017 5:40 PM, "Yuko Katabami" 
> > wrote:
> >
> > Hello again.
> >
> > Here is our second question.
> >
> >
> > *File: *AppErrors
> > *Resource ID:
> > *ACTION_TYPE_FAILED_INVALID_NIC_FILTER_PARAMETER_INTERFACE_VM
> >
> > *String: *Cannot ${action} ${type}. The network interface (id
> > '${INTERFACE_ID}') is not on virtual machine (id '${VM_ID}') found.
> > *Question:* Which one of the following interpretations is correct
> > for the second sentence?
> > 1) The network interface (id '${INTERFACE_ID}') is not on the
> > virtual machine (id '${VM_ID}') which was found
> > or
> > 2) The network interface (id '${INTERFACE_ID}') which is not on the
> > virtual machine (id '${VM_ID}') is found
> >
> > Kind regards,
> >
> > Yuko
> >
> >
> >
> >  


Re: [ovirt-devel] Host deploy fails due to missing Ansible playbook (engine dev env)

2017-11-05 Thread Dominik Holler
As an additional step, ovirt-ansible has to be installed in the
development installation; please find instructions in
https://github.com/oVirt/ovirt-engine/blob/master/README.adoc#host-deploy-via-ansible

On Sun, 5 Nov 2017 16:58:15 +0200
Fred Rolland  wrote:

> Hi,
> 
> When I try to add a new host in an engine running in a dev
> environment I get the following error:
> 
> 2017-11-05 16:05:28,544+02 WARN
> [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
> (EE-ManagedThreadFactory-engine-Thread-18)
> [350cd700-493b-4259-8d7a-0e63a06f6cd8] Playbook
> '/home/frolland/ovirt-engine/share/ovirt-engine/../ovirt-ansible-roles/playbooks/ovirt-host-deploy.yml'
> does not exist, please ensure that ovirt-ansible-roles package is
> properly installed.
> 
> Installing 'ovirt-ansible-roles' does not solve anything as the
> host-deploy process tries to run the playbook on a location relative
> to where the engine is running.
> 
> I manually copied the playbook as a workaround.
> 
> Can we have a more robust solution when running engine in dev mode?
> 
> Thanks,
> Freddy



Re: [ovirt-devel] oVirt System Test configuration

2017-12-18 Thread Dominik Holler
On Mon, 18 Dec 2017 12:51:33 +0200
Eyal Edri  wrote:

> On Mon, Dec 18, 2017 at 12:43 PM, Sandro Bonazzola
>  wrote:
> 
> > Hi, I'd like to discuss what's being tested by oVirt System Test.
> >
> > I'm investigating a sanlock issue that affects the hosted engine hc
> > suite. I installed a CentOS minimal VM and set repositories as in
> > http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/128/
> > artifact/exported-artifacts/reposync-config.repo
> >
> > Upgrade from CentOS 1708 (7.4) minimal is:
> >
> > Aggiornamento:
> >
> > meaning this environment is not receiving updates to core packages
> > like the kernel.
> >
> > Restricting to libvirt: with the repos used in the job, the libvirt
> > packages don't even exist, making yum install libvirt simply fail.
> >
> >
> > I think you already know I'm against filtering packages from the
> > repos even if I understand it saves a huge amount of space and
> > download time. I may be wrong, but I tend to not trust OST results
> > since it's not testing real life environments. Any chance we can
> > improve OST to match what users are going to have on their systems?
> >  
> 
> Why do you think this is not testing real life environments? What
> guarantees users are yum upgrading their hosts all the time?
> I actually think this represents real life more than forcing yum
> update all the time; if we need a newer pkg, then the spec file
> requirements should be updated.
> 
> One option is to refresh the OS images on a regular basis, and I
> believe Gal is working on automating the image creation flow, which
> would help with that.
> 

I want to take the chance to advertise the idea of using CentOS cloud
images, which are updated from time to time. This way we would have
well-defined but regularly updated base images.

Another idea for hosts is to create disk images from
http://jenkins.ovirt.org/job/ovirt-node-ng_master_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/
This would include ovirt-node in OST and speed up adding hosts.
The drawbacks are that the disk images are big and the current workflow
would still be required to test vdsm patches.
The disk images could be created by
https://gist.github.com/dominikholler/73dfbd9179ad89a002c669a936cd97e4



> What would you suggest needs to be changed in OST to reflect a more
> real life scenario?
> 
> 
> >
> >
> > --
> >
> > SANDRO BONAZZOLA
> >
> > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> >
> > Red Hat EMEA 
> > 
> > TRIED. TESTED. TRUSTED. 
> >
> >
> >  
> 
> 
> 



Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 24/12/2017 ] [use_ovn_provider]

2017-12-25 Thread Dominik Holler
A helpful hint is in

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4492/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log :

Caused by: org.jboss.resteasy.spi.ReaderException:
org.codehaus.jackson.map.JsonMappingException: Can not construct instance of
java.util.Calendar from String value '2017-12-27 13:19:51Z': not a valid
representation (error: Can not parse date "2017-12-27 13:19:51Z": not
compatible with any of standard forms ("yyyy-MM-dd'T'HH:mm:ss.SSSZ",
"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "EEE, dd MMM yyyy HH:mm:ss zzz", "yyyy-MM-dd"))
 at [Source:
org.jboss.resteasy.client.core.BaseClientResponse$InputStreamWrapper@72c184c5;
line: 1, column: 23] (through reference chain:
com.woorea.openstack.keystone.model.Access["token"]->com.woorea.openstack.keystone.model.Token["expires"])


This problem was introduced by 
https://gerrit.ovirt.org/#/c/85702/

I created a fix:
https://gerrit.ovirt.org/85734
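The incompatibility can be checked outside of Java as well: '2017-12-27 13:19:51Z' uses a space instead of the 'T' separator that the ISO-8601 forms accepted by Jackson require. A small Python sketch of the same comparison (an illustration, not the engine or Jackson code):

```python
from datetime import datetime

# Sketch of the mismatch: the token 'expires' value uses a space separator,
# while the accepted ISO-8601 form requires 'T'. strptime with the ISO
# pattern rejects the first string and accepts the second.
value = '2017-12-27 13:19:51Z'
iso_pattern = '%Y-%m-%dT%H:%M:%S%z'  # roughly yyyy-MM-dd'T'HH:mm:ssZ

def parses(text, pattern):
    try:
        datetime.strptime(text, pattern)
        return True
    except ValueError:
        return False

print(parses(value, iso_pattern))                    # space separator fails
print(parses(value.replace(' ', 'T'), iso_pattern))  # 'T' separator parses
```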



On Mon, 25 Dec 2017 11:30:19 +0200
Eyal Edri  wrote:

> Adding ovn maintainers.
> 
> On Sun, Dec 24, 2017 at 9:25 AM, Barak Korren 
> wrote:
> 
> > Test failed: [ 098_ovirt_provider_ovn.use_ovn_provider ]
> >
> > Link to suspected patches:
> >
> > - Linked test failed on:
> >https://gerrit.ovirt.org/#/c/85703/3
> > - It seems OVN patches had been failing tests ever since:
> >https://gerrit.ovirt.org/#/c/85645/2
> >
> > Link to Job:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4492/
> >
> > Link to all logs:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> > tester/4492/artifact/exported-artifacts/basic-suit-master-
> > el7/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/
> >
> > Error snippet from log:
> >
> > 
> >
> > Fault reason is "Operation Failed". Fault detail is "Failed to
> > communicate with the external provider, see log for additional
> > details.". HTTP response code is 400.  
> >  >> begin captured logging <<
> >  requests.packages.urllib3.connectionpool:
> > INFO: * Starting new HTTPS connection (1): 192.168.201.4
> > py.warnings: WARNING: * Unverified HTTPS request is being made.
> > Adding certificate verification is strongly advised. See:
> > https://urllib3.readthedocs.org/en/latest/security.html
> > requests.packages.urllib3.connectionpool: DEBUG: "POST /v2.0/tokens/
> > HTTP/1.1" 200 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG:
> > "GET /v2.0/networks/ HTTP/1.1" 200 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/ports/
> > HTTP/1.1" 200 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/subnets/
> > HTTP/1.1" 200 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG:
> > "POST /v2.0/networks/ HTTP/1.1" 201 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG:
> > "POST /v2.0/subnets/ HTTP/1.1" 201 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG: "POST /v2.0/ports/
> > HTTP/1.1" 201 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG:
> > "GET /v2.0/networks/ HTTP/1.1" 200 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/ports/
> > HTTP/1.1" 200 None
> > requests.packages.urllib3.connectionpool: INFO: * Starting new
> > HTTPS connection (1): 192.168.201.4
> > requests.packages.urllib3.connectionpool: DEBUG: "GET /v2.0/subnets/
> > HTTP/1.1" 200 None  
> > - >> end captured logging <<
> > -  
> >
> >
> >
> > 
> >
> >
> >
> > --
> > Barak Korren
> > RHV DevOps team , RHCE, RHCi
> > Red Hat EMEA
> > redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> >
> >
> >  
> 
> 



Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 24/12/2017 ] [use_ovn_provider]

2017-12-25 Thread Dominik Holler
On Mon, 25 Dec 2017 14:14:36 +0200
Dan Kenigsberg  wrote:

> On Mon, Dec 25, 2017 at 2:09 PM, Dominik Holler 
> wrote:
> > A helpful hint is in
> >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4492/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >  :
> > Caused by: org.jboss.resteasy.spi.ReaderException:
> > org.codehaus.jackson.map.JsonMappingException: Can not construct
> > instance of java.util.Calendar from String value '2017-12-27
> > 13:19:51Z': not a valid representation (error: Can not parse date
> > "2017-12-27 13:19:51Z": not compatible with any of standard forms
> > ("yyyy-MM-dd'T'HH:mm:ss.SSSZ", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'",
> > "EEE, dd MMM yyyy HH:mm:ss zzz", "yyyy-MM-dd")) at [Source:
> > org.jboss.resteasy.client.core.BaseClientResponse$InputStreamWrapper@72c184c5;
> > line: 1, column: 23] (through reference chain:
> > com.woorea.openstack.keystone.model.Access["token"]->com.woorea.openstack.keystone.model.Token["expires"])
> >
> >
> > This problem was introduced by
> > https://gerrit.ovirt.org/#/c/85702/
> >
> > I created a fix:
> > https://gerrit.ovirt.org/85734  
> 
> Thanks for the quick fix.
> 
> Is the new format acceptable to other users of the keystone-like API
> (such as the neutron CLI)?


Yes, I verified with the ovirt-engine webadmin, the neutron CLI, and
Ansible on the command line:

[user@fedora-25-gui ovirt-system-tests]$ cat createNetwok.yml 
---
- hosts: localhost
  tasks:
  - os_network:
  auth:
auth_url: http://0.0.0.0:35357/v2.0
username: admin@internal
password: 123456
  state: present
  name: myNewAnsibleNet

[user@fedora-25-gui ovirt-system-tests]$ ansible-playbook createNetwok.yml 
 [WARNING]: Could not match supplied host pattern, ignoring: all

 [WARNING]: provided hosts list is empty, only localhost is available


PLAY [localhost] 
***

TASK [Gathering Facts] 
*
ok: [localhost]

TASK [os_network] 
**
changed: [localhost]

PLAY RECAP 
*
localhost  : ok=2changed=1unreachable=0failed=0   

[user@fedora-25-gui ovirt-system-tests]$ ansible-playbook createNetwok.yml 
 [WARNING]: Could not match supplied host pattern, ignoring: all

 [WARNING]: provided hosts list is empty, only localhost is available


PLAY [localhost] 
***

TASK [Gathering Facts] 
*
ok: [localhost]

TASK [os_network] 
**
ok: [localhost]

PLAY RECAP 
*
localhost  : ok=2changed=0unreachable=0failed=0
[user@fedora-25-gui ovirt-system-tests]$ OS_USERNAME=admin@internal 
OS_PASSWORD=123456 OS_AUTH_URL=http://0.0.0.0:35357/v2.0 neutron net-list
Failed to discover available identity versions when contacting 
http://0.0.0.0:35357/v2.0. Attempting to parse version from URL.
+--+-+
| id   | name|
+--+-+
| 97b653b0-623e-4b5d-a7a0-e05c6d95fdf2 | ansibleNet2 |
| e1f36f9b-bfb2-4779-880f-d8b8f8d9c64a | myNewAnsibleNet |
| 31172fec-1d6e-42eb-acb4-ab5bf77a1296 | osnet   |
| 05e680b8-544a-4278-9ac0-403fb5e83af2 | test.json   |
| 60f74925-adb9-4ae2-9751-2a3f1315bd2e | net877  |
| 18687d84-0923-4e1a-b349-4030c6f9c11e | net111  |
| c20b5484-dde1-4729-bae4-5f073c3e14ef | net1114 |
| ddd9741b-6874-4075-abba-615fb1777b62 | ansibleNet  |
| 2b913120-260f-4750-9fc2-c0e44f3d51e9 | net11149|
| a3db332f-5b2b-478c-a90e-73ee5fbee3ce | net412  |
+--+-+





[ovirt-devel] Warnings in ovirt-engine build process

2018-01-03 Thread Dominik Holler
If a code change adds a warning to the ovirt-engine build process,
shouldn't CI mark the change as unstable?


Re: [ovirt-devel] Warnings in ovirt-engine build process

2018-01-03 Thread Dominik Holler
On Wed, 3 Jan 2018 11:41:05 +0200
Yedidyah Bar David  wrote:

> On Wed, Jan 3, 2018 at 11:36 AM, Dominik Holler 
> wrote:
> 
> > If a code change adds a warning to ovirt-engine build process,
> > shouldn't CI mark the change as unstable?
> >  
> 
> Can you give an example (e.g. a link to a jenkins build with such a
> warning)?
> 

I created the change https://gerrit.ovirt.org/#/c/85925/ to produce
different kinds of warnings.
The CI result page
http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/35302/
says:
"Plug-in Result:Unstable - 6 warnings exceed the threshold of 0 by 6
(Reference build: #35300)"
Looks like CI checks for added findbugs warnings, but not for javac
warnings.

> I do not know if we currently emit any warnings. If we do, and you
> want to ignore only the existing ones and fail new ones, I'd say this
> is a bit hard and non-maintainable.
> 

The findbugs check seems to use something smart like this.

> If you want to treat all warnings as fatal errors, I guess this can
> be done.
> 

I expect there are many warnings if they are currently not managed.
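The reference-build thresholding reported above ("6 warnings exceed the threshold of 0") can be sketched generically. This is not the Jenkins plug-in's code, just an illustration of gating on warnings added relative to a baseline instead of requiring zero warnings overall:

```python
# Illustration of the reference-build comparison: only warnings added
# relative to the baseline count against the threshold, so a legacy code
# base with many existing warnings can still gate newly added ones.
def added_warnings(reference, current):
    return [w for w in current if w not in reference]

def verdict(reference, current, threshold=0):
    added = added_warnings(reference, current)
    if len(added) > threshold:
        return 'UNSTABLE: %d warning(s) exceed the threshold of %d' % (
            len(added), threshold)
    return 'STABLE'

baseline = {'Foo.java:12 unchecked cast', 'Bar.java:40 deprecation'}
build = {'Foo.java:12 unchecked cast', 'Bar.java:40 deprecation',
         'Baz.java:7 raw type'}
print(verdict(baseline, build))
```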



Re: [ovirt-devel] [ OST Failure Report ] [ oVirtMaster (otopi) ] [ 01-02-2018 ] [ 001_initialize_engine.initialize_engine/001_upgrade_engine.test_initialize_engine ]

2018-02-02 Thread Dominik Holler
On Thu, 1 Feb 2018 15:57:46 +
Dafna Ron  wrote:

> Hi,
> 
> We are failing initialize engine on both basic and upgrade suites.
> 
> Can you please check?
> 
> *Link and headline of suspected patches:*
> https://gerrit.ovirt.org/#/c/86679/ - core: Check Sequence before/after
> 
> *Link to Job:*
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5187/
> 
> *Link to all logs:*
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5187/artifact/
> 
> *(Relevant) error snippet from the log:*
> 
> 2018-02-01 10:38:27,057-0500 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Version: otopi-1.7.7_master (otopi-1.7.7-0.0.master.20180201063428.git81ce9b7.el7.centos)
> 2018-02-01 10:38:27,058-0500 ERROR otopi.context context.check:833 "before" parameter of method otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_provider is a string, should probably be a tuple. Perhaps a missing comma?
> 2018-02-01 10:38:27,058-0500 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND methodinfo: {'priority': 5000, 'name': None, 'before': 'osetup.ovn.provider.service.restart', 'after': ('osetup.pki.ca.available', 'osetup.ovn.services.restart'), 'method': <bound method ?._misc_configure_provider of <... object at 0x2edf6d0>>, 'condition': <bound method ? of <... object at 0x2edf6d0>>, 'stage': 11}
> 2018-02-01 10:38:27,059-0500 DEBUG otopi.context context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
>     method['method']()
>   File "/usr/share/otopi/plugins/otopi/core/misc.py", line 61, in _setup
>     self.context.checkSequence()
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 844, in checkSequence
>     raise RuntimeError(_('Found bad "before" or "after" parameters'))
> RuntimeError: Found bad "before" or "after" parameters
> 2018-02-01 10:38:27,059-0500 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Environment setup': Found bad "before" or "after" parameters

Seems like the newly introduced check of
https://gerrit.ovirt.org/#/c/86679/ works.
I posted https://gerrit.ovirt.org/#/c/87045/ to fix this. Locally it
works for me, but I still have to test this change in OST on Jenkins.
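The class of bug the new check catches can be illustrated in a few lines: a plug-in method's "before"/"after" parameters must be tuples of event names, and a bare string slips through naive iteration because a string iterates per character. This is a simplified stand-in for otopi's checkSequence, not the real code:

```python
# Simplified stand-in for the otopi checkSequence validation: a bare
# string in 'before'/'after' would be iterated character by character,
# so the check insists on a tuple or list of event names.
def check_sequence_param(name, value):
    if isinstance(value, str):
        raise RuntimeError(
            '"%s" parameter is a string, should probably be a tuple. '
            'Perhaps a missing comma?' % name)
    return tuple(value)

# The missing trailing comma turns an intended 1-tuple into a plain string:
broken = ('osetup.ovn.provider.service.restart')   # a str, not a tuple
fixed = ('osetup.ovn.provider.service.restart',)   # a 1-tuple

try:
    check_sequence_param('before', broken)
except RuntimeError as e:
    print(e)
print(check_sequence_param('before', fixed))
```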



Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (cockpit-ovirt) ] [ 15-03-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2018-03-15 Thread Dominik Holler
On Thu, 15 Mar 2018 16:24:10 +
Dafna Ron  wrote:

> Hi,
> 
> We have a failure on master for test
> 098_ovirt_provider_ovn.use_ovn_provider in project cockpit-ovirt.
> This seems to be a race because the object is locked. Also, the actual
> failure is logged as WARN and not ERROR.
> 
> I don't think the patch is actually related to the failure, but I
> think the test should be fixed.
> Can you please review to make sure we do not have an actual
> regression, and let me know if we need to open a bz to fix the test?
> 
> 
> *Link and headline of suspected patches:*
> https://gerrit.ovirt.org/#/c/89020/2 - wizard: Enable scroll on start page for low-res screens
> 
> *Link to Job:*
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374
> 
> *Link to all logs:*
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374/artifacts
> 
> *(Relevant) error snippet from the log:*
> 
> 2018-03-15 10:05:00,160-04 DEBUG [org.ovirt.engine.core.sso.servlets.OAuthTokenInfoServlet] (default task-10) [] Sending json response
> 2018-03-15 10:05:00,160-04 DEBUG [org.ovirt.engine.core.sso.utils.TokenCleanupUtility] (default task-10) [] Not cleaning up expired tokens
> 2018-03-15 10:05:00,169-04 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23] Lock Acquired to object 'EngineLock:{exclusiveLocks='[c38a67ec-0b48-4e6f-be85-70c700df5483=PROVIDER]', sharedLocks=''}'
> 2018-03-15 10:05:00,184-04 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23] Running command: SyncNetworkProviderCommand internal: true.
> 2018-03-15 10:05:00,228-04 DEBUG [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall] (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Compiled stored procedure. Call string is [{call getdcidbyexternalnetworkid(?)}]
> 2018-03-15 10:05:00,228-04 DEBUG [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall] (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] SqlCall for procedure [GetDcIdByExternalNetworkId] compiled
> 2018-03-15 10:05:00,229-04 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] method: runQuery, params: [GetAllExternalNetworksOnProvider, IdQueryParameters:{refresh='false', filtered='false'}], timeElapsed: 353ms
> 2018-03-15 10:05:00,239-04 INFO [org.ovirt.engine.core.bll.network.dc.AddNetworkCommand] (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[network_1=NETWORK, c38a67ec-0b48-4e6f-be85-70c700df5483=PROVIDER]', sharedLocks=''}'
> 2018-03-15 10:05:00,239-04 WARN [org.ovirt.engine.core.bll.network.dc.AddNetworkCommand] (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Validation of action 'AddNetwork' failed for user admin@internal-authz. Reasons: VAR__TYPE__NETWORK,VAR__ACTION__ADD,ACTION_TYPE_FAILED_PROVIDER_LOCKED,$providerId c38a67ec-0b48-4e6f-be85-70c700df5483
> 2018-03-15 10:05:00,240-04 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] method: runAction, params: [AddNetwork, AddNetworkStoragePoolParameters:{commandId='61b365ec-27c1-49af-ad72-f907df8befcd',
> user='null', commandType='Unknown'}], timeElapsed: 10ms2018-03-15
> 10:05:00,250-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-13) [] Operation Failed: [Cannot add Network. Related
> operation on provider with the id
> c38a67ec-0b48-4e6f-be85-70c700df5483 is currently in progress. Please
> try again later.]2018-03-15 10:05:00,254-04 DEBUG
> [org.ovirt.engine.core.utils.servlet.LocaleFilter] (default task-14)
> [] Incoming locale 'en-US'. Filter determined locale to be
> 'en-US'2018-03-15 10:05:00,254-04 DEBUG
> [org.ovirt.engine.core.sso.servlets.OAuthTokenServlet] (default
> task-14) [] Entered OAuthTokenServlet Query String: null,
> Parameters : password = ***, grant_type = password, scope =
> ovirt-app-api ovirt-ext=token-info:validate, username =
> admin@internal, *

I will take care of this.
The problem is that SyncNetworkProviderCommand is running in the
background and holding the provider lock, which blocks the lock
acquisition of the tested AddNetworkCommand.
The related changes are
core: Add locking for Add and RemoveNetworkCommand
https://gerrit.ovirt.org/#/c/85480/
and
core: Add SyncNetworkProviderCommand
https://gerrit.ovirt.org/#/c/85134/
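The race described above can be illustrated in plain Python (a minimal sketch, not the engine's actual Java locking code; the thread and lock names are only stand-ins for the commands involved): while a background thread holds the provider lock, a non-blocking acquire, like the engine's trylock, fails immediately instead of waiting.

```python
import threading

provider_lock = threading.Lock()
lock_held = threading.Event()
release = threading.Event()

def sync_network_provider():
    # stands in for SyncNetworkProviderCommand holding the provider lock
    with provider_lock:
        lock_held.set()
        release.wait()

t = threading.Thread(target=sync_network_provider)
t.start()
lock_held.wait()  # make sure the background sync really holds the lock

# AddNetworkCommand-style trylock: gives up immediately instead of waiting
acquired = provider_lock.acquire(blocking=False)
print("AddNetwork got the provider lock:", acquired)  # -> False

release.set()
t.join()
```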



Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (cockpit-ovirt) ] [ 15-03-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2018-03-16 Thread Dominik Holler
I have created
https://bugzilla.redhat.com/show_bug.cgi?id=1557419 and
https://bugzilla.redhat.com/show_bug.cgi?id=1557424
to discuss how failing to acquire locks in engine should be handled in
engine's REST-API, in ovirt-sdk or in application (OST in this case).


On Fri, 16 Mar 2018 09:32:47 +
Dafna Ron  wrote:

> Thank you for the fast reply and help.
> 
> On Thu, Mar 15, 2018 at 8:21 PM, Dominik Holler 
> wrote:
> 
> > On Thu, 15 Mar 2018 16:24:10 +
> > Dafna Ron  wrote:
> >  
> > > Hi,
> > >
> > > We have a failure on master for test
> > > 098_ovirt_provider_ovn.use_ovn_provider in project cockpit-ovirt.
> > > This seems to be a race because object is locked. also, the actual
> > > failure is logged as WARN and not ERROR.
> > >
> > > I don't think the patch is actually related to the failure but I
> > > think the test should be fixed.
> > > can you please review to make sure we do not have an actual
> > > regression and let me know if we need to open a bz to fix the
> > > test?
> > >
> > >
> > > *Link and headline of suspected patches: *
> > > *https://gerrit.ovirt.org/#/c/89020/2
> > > <https://gerrit.ovirt.org/#/c/89020/2> - *
> > > *wizard: Enable scroll on start page for low-res screensLink to
> > > Job:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374
> > > <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374>Link
> > > to all
> > > logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-  
> > tester/6374/artifacts  
> > > <http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> > tester/6374/artifacts>(Relevant)
> > > error snippet from the log: 2018-03-15 10:05:00,160-04
> > > DEBUG [org.ovirt.engine.core.sso.servlets.OAuthTokenInfoServlet]
> > > (default task-10) [] Sending json response2018-03-15
> > > 10:05:00,160-04 DEBUG
> > > [org.ovirt.engine.core.sso.utils.TokenCleanupUtility] (default
> > > task-10) [] Not cleaning up expired tokens2018-03-15
> > > 10:05:00,169-04 INFO
> > > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> > > (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23]
> > > Lock Acquired to object
> > > 'EngineLock:{exclusiveLocks='[c38a67ec-0b48-4e6f-be85-  
> > 70c700df5483=PROVIDER]',  
> > > sharedLocks=''}'2018-03-15 10:05:00,184-04 INFO
> > > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> > > (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23]
> > > Running command: SyncNetworkProviderCommand internal:
> > > true.2018-03-15 10:05:00,228-04 DEBUG
> > > [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$  
> > PostgresSimpleJdbcCall]  
> > > (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Compiled
> > > stored procedure. Call string is [{call
> > > getdcidbyexternalnetworkid(?)}]2018-03-15 10:05:00,228-04 DEBUG
> > > [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$  
> > PostgresSimpleJdbcCall]  
> > > (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] SqlCall
> > > for procedure [GetDcIdByExternalNetworkId] compiled2018-03-15
> > > 10:05:00,229-04 DEBUG
> > > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] method:
> > > runQuery, params: [GetAllExternalNetworksOnProvider,
> > > IdQueryParameters:{refresh='false', filtered='false'}],
> > > timeElapsed: 353ms2018-03-15 10:05:00,239-04 INFO
> > > [org.ovirt.engine.core.bll.network.dc.AddNetworkCommand] (default
> > > task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Failed to Acquire
> > > Lock to object 'EngineLock:{exclusiveLocks='[network_1=NETWORK,
> > > c38a67ec-0b48-4e6f-be85-70c700df5483=PROVIDER]',
> > > sharedLocks=''}'2018-03-15 10:05:00,239-04 WARN
> > > [org.ovirt.engine.core.bll.network.dc.AddNetworkCommand] (default
> > > task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Validation of
> > >

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (cockpit-ovirt) ] [ 15-03-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2018-03-19 Thread Dominik Holler
On Sat, 17 Mar 2018 16:16:49 +0200
Dan Kenigsberg  wrote:

> Thanks for filing those, but let us not keep them secret:
> 
> Bug 1557419 - Recommend that a call should be tried again
> sounds most reasonable. Providing more information to the client
> makes sense
> 
> Bug 1557424 - Automatically retry call failed because he failed to
> acquire a lock
> is more tricky. when would REST perform the retry? 

If the command returns a machine-readable hint that the conflict is
temporary.

> how often? We must make sure we never cause a livelock.
> 

I agree that it is a good idea to have an upper limit on the number of
retries to prevent a livelock.

> Can we alternatively take a normal blocking lock when we create an
> external network (instead of a trylock)?
> 

This might work, but currently no command is doing this.
(The lock is acquired in CommandBase.acquireLockInternal() which could
be overridden for all commands using an external network provider.)
But I would prefer a solution which solves the issue for other commands,
too.

> On Fri, Mar 16, 2018 at 4:52 PM, Dominik Holler 
> wrote:
> > I have created
> > https://bugzilla.redhat.com/show_bug.cgi?id=1557419 and
> > https://bugzilla.redhat.com/show_bug.cgi?id=1557424
> > to discuss how failing to acquire locks in engine should be handled
> > in engine's REST-API, in ovirt-sdk or in application (OST in this
> > case).
> >
> >
> > On Fri, 16 Mar 2018 09:32:47 +
> > Dafna Ron  wrote:
> >  
> >> Thank you for the fast reply and help.
> >>
> >> On Thu, Mar 15, 2018 at 8:21 PM, Dominik Holler
> >>  wrote:
> >>  
> >> > On Thu, 15 Mar 2018 16:24:10 +
> >> > Dafna Ron  wrote:
> >> >  
> >> > > Hi,
> >> > >
> >> > > We have a failure on master for test
> >> > > 098_ovirt_provider_ovn.use_ovn_provider in project
> >> > > cockpit-ovirt. This seems to be a race because object is
> >> > > locked. also, the actual failure is logged as WARN and not
> >> > > ERROR.
> >> > >
> >> > > I don't think the patch is actually related to the failure but
> >> > > I think the test should be fixed.
> >> > > can you please review to make sure we do not have an actual
> >> > > regression and let me know if we need to open a bz to fix the
> >> > > test?
> >> > >
> >> > >
> >> > > *Link and headline of suspected patches: *
> >> > > *https://gerrit.ovirt.org/#/c/89020/2
> >> > > <https://gerrit.ovirt.org/#/c/89020/2> - *
> >> > > *wizard: Enable scroll on start page for low-res screensLink to
> >> > > Job:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374
> >> > > <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374>Link
> >> > > to all
> >> > > logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-  
> >> > tester/6374/artifacts  
> >> > > <http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> >> > tester/6374/artifacts>(Relevant)
> >> > > error snippet from the log: 2018-03-15 10:05:00,160-04
> >> > > DEBUG
> >> > > [org.ovirt.engine.core.sso.servlets.OAuthTokenInfoServlet]
> >> > > (default task-10) [] Sending json response2018-03-15
> >> > > 10:05:00,160-04 DEBUG
> >> > > [org.ovirt.engine.core.sso.utils.TokenCleanupUtility] (default
> >> > > task-10) [] Not cleaning up expired tokens2018-03-15
> >> > > 10:05:00,169-04 INFO
> >> > > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> >> > > (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23]
> >> > > Lock Acquired to object
> >> > > 'EngineLock:{exclusiveLocks='[c38a67ec-0b48-4e6f-be85-  
> >> > 70c700df5483=PROVIDER]',  
> >> > 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (cockpit-ovirt) ] [ 15-03-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2018-03-19 Thread Dominik Holler
I have overlooked that many other commands handle this problem by using
a waiting lock, so the related commands in this issue should do the same.
I created
Adding a new external network fails during auto-sync is running
https://bugzilla.redhat.com/show_bug.cgi?id=1558054
to track this.

From my point of view the other two related bugs I created are not
required anymore, because other commands seem to use a waiting lock,
too.
Should we close 1557419 and 1557424?
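The waiting-lock behavior mentioned above can be sketched in Python as well (again only an illustration of the locking pattern, not the engine's implementation): a blocking acquire queues up behind the running sync instead of failing fast.

```python
import threading

provider_lock = threading.Lock()
sync_started = threading.Event()
events = []

def sync_network_provider():
    with provider_lock:
        sync_started.set()
        events.append("sync done")  # lock is still held here

t = threading.Thread(target=sync_network_provider)
t.start()
sync_started.wait()

# a waiting (blocking) acquire simply waits for the sync to release the
# lock, instead of failing like acquire(blocking=False) would
with provider_lock:
    events.append("network added")

t.join()
print(events)  # -> ['sync done', 'network added']
```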


On Mon, 19 Mar 2018 09:40:32 +0100
Dominik Holler  wrote:

> On Sat, 17 Mar 2018 16:16:49 +0200
> Dan Kenigsberg  wrote:
> 
> > Thanks for filing those, but let us not keep them secret:
> > 
> > Bug 1557419 - Recommend that a call should be tried again
> > sounds most reasonable. Providing more information to the client
> > makes sense
> > 
> > Bug 1557424 - Automatically retry call failed because he failed to
> > acquire a lock
> > is more tricky. when would REST perform the retry?   
> 
> If the command returns a machine-readable hint that the conflict is
> temporary.
> 
> > how often? We must make sure we never cause a livelock.
> >   
> 
> I agree that it is a good idea to have an upper limit for the count of
> retries to prevent a livelock.
> 
> > Can we alternatively take a normal blocking lock when we create an
> > external network (instead of a trylock)?
> >   
> 
> This might work, but currently no command is doing this.
> (The lock is acquired in CommandBase.acquireLockInternal() which could
> be overridden for all commands using an external network provider.)
> But I would prefer a solution which solve the issue for other
> commands, too.
> 
> > On Fri, Mar 16, 2018 at 4:52 PM, Dominik Holler 
> > wrote:  
> > > I have created
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1557419 and
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1557424
> > > to discuss how failing to acquire locks in engine should be
> > > handled in engine's REST-API, in ovirt-sdk or in application (OST
> > > in this case).
> > >
> > >
> > > On Fri, 16 Mar 2018 09:32:47 +
> > > Dafna Ron  wrote:
> > >
> > >> Thank you for the fast reply and help.
> > >>
> > >> On Thu, Mar 15, 2018 at 8:21 PM, Dominik Holler
> > >>  wrote:
> > >>
> > >> > On Thu, 15 Mar 2018 16:24:10 +
> > >> > Dafna Ron  wrote:
> > >> >
> > >> > > Hi,
> > >> > >
> > >> > > We have a failure on master for test
> > >> > > 098_ovirt_provider_ovn.use_ovn_provider in project
> > >> > > cockpit-ovirt. This seems to be a race because object is
> > >> > > locked. also, the actual failure is logged as WARN and not
> > >> > > ERROR.
> > >> > >
> > >> > > I don't think the patch is actually related to the failure
> > >> > > but I think the test should be fixed.
> > >> > > can you please review to make sure we do not have an actual
> > >> > > regression and let me know if we need to open a bz to fix the
> > >> > > test?
> > >> > >
> > >> > >
> > >> > > *Link and headline of suspected patches: *
> > >> > > *https://gerrit.ovirt.org/#/c/89020/2
> > >> > > <https://gerrit.ovirt.org/#/c/89020/2> - *
> > >> > > *wizard: Enable scroll on start page for low-res screensLink
> > >> > > to
> > >> > > Job:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374
> > >> > > <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374>Link
> > >> > > to all
> > >> > > logs:http://jenkins.ovirt.org/job/ovirt-master_chan

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (otopi+imgbased) ] [ 11-05-2018 ] [ 001_initialize_engine.test_initialize_engine ]

2018-05-11 Thread Dominik Holler
On Fri, 11 May 2018 13:05:54 +0300
Dafna Ron  wrote:

> Hi,
> 
> We are failing in 001_initialize_engine.test_initialize_engine in the
> upgrade suite.
> the issue seems to be related to ovn configuration.
> 
> The changes reported by CQ are not the cause of this failure and I
> may be mistaken but I suspect it may be related to one of the below
> changes.
> 
> *Link and headline of suspected patches: *
> 
> 
> 
> *https://gerrit.ovirt.org/#/c/90784/
> 
> - network: default ovn provider client is returned by
> fixturehttps://gerrit.ovirt.org/#/c/90327/
>  - backend, packing: Add default
> MTU for tunnelled networksLink to Job:*
> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7492/
> *
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7488/
> 
> 
> 
> 
> *Link to all
> logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7492/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-001_initialize_engine.py/
> (Relevant)
> error snippet from the log: *
> 
> 2018-05-11 04:14:34,940-0400 DEBUG otopi.context
> context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133,
> in _executeMethod
> method['method']()
>   File
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py",
> line 779, in _customization self._query_install_ovn()
>   File
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py",
> line 399, in _query_install_ovn default=True
>   File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/dialog.py",
> line 47, in queryBoolean
> default=true if default else false,
>   File "/usr/share/otopi/plugins/otopi/dialog/human.py", line 211, in
> queryString
> value = self._readline(hidden=hidden)
>   File "/usr/lib/python2.7/site-packages/otopi/dialog.py", line 248,
> in _readline
> raise IOError(_('End of file'))
> IOError: End of file
> 2018-05-11 04:14:34,942-0400 ERROR otopi.context
> context._executeMethod:152 Failed to execute stage 'Environment
> customization': End of file
> 2018-05-11 04:14:34,972-0400 DEBUG
> otopi.plugins.otopi.debug.debug_failure.debug_failure
> debug_failure._notification:100 tcp connections:
> id uid local foreign state pid exe
> 0: 0 0.0.0.0:111 0.0.0.0:0 LISTEN 1829 /usr/sbin/rpcbind
> 1: 29 0.0.0.0:662 0.0.0.0:0 LISTEN 1868 /usr/sbin/rpc.statd
> 2: 0 0.0.0.0:22 0.0.0.0:0 LISTEN 970 /usr/sbin/sshd
> 3: 0 192.168.201.2:3260 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 4: 0 192.168.200.2:3260 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 5: 0 0.0.0.0:892 0.0.0.0:0 LISTEN 1874 /usr/sbin/rpc.mountd
> 6: 0 0.0.0.0:2049 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 7: 0 0.0.0.0:32803 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 8: 0 192.168.201.2:22 192.168.201.1:8 ESTABLISHED
> 5544 /usr/sbin/sshd 2018-05-11 04:14:34,973-0400 DEBUG otopi.context
> context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
> 2018-05-11 04:14:34,973-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
> 2018-05-11 04:14:34,973-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.IOError'>, IOError('End of file',), <traceback object at 0x239ab90>)]'
> 2018-05-11 04:14:34,974-0400 DEBUG otopi.context
> context.dumpEnvironment:873 ENVIRONMENT DUMP - END
> 2018-05-11 04:14:34,975-0400 INFO otopi.context
> context.runSequence:741 Stage: Clean up
> 2018-05-11 04:14:34,975-0400 DEBUG otopi.context
> context.runSequence:745 STAGE cleanup
> 2018-05-11 04:14:34,976-0400 DEBUG otopi.context
> context._executeMethod:128 Stage cleanup METHOD
> otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file
> 2018-05-11 04:14:34,977-0400 DEBUG otopi.context
> context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
> 2018-05-11 04:14:34,977-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI
> answer file, generated by human dialog
> [environment:default]
> '
> 
> 
> 
> 
> *Thanks, Dafna*


In the upgrade suite it looks like the change of the initial oVirt
version from 4.1 to 4.2 is not yet complete.
https://gerrit.ovirt.org/#/c/91172/ fixes this issue, but the next one
seems to be
[ INFO  ] Configuring WebSocket Proxy\n
[ INFO  ] Backing up database localhost:engine to
\'/var/lib/ovirt-engine/backups/engine-20180511123046.d7PUoD.dump\'.\n
[ INFO  ] Creating/refreshing Engine database schema\n
[ ERROR ] schema.sh: FATAL:
Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbs

[ovirt-devel] Re: Propose Dominik Holler as a Network-Backend Maintainer

2018-05-13 Thread Dominik Holler
On Sun, 13 May 2018 09:36:32 +0300
Barak Korren  wrote:

> On 13 May 2018 at 09:28, Alona Kaplan  wrote:
> 
> >
> >
> > On Sun, May 13, 2018 at 9:25 AM, Barak Korren 
> > wrote: 
> >> Eyal will not be available this week, please forward such requests
> >> to infra-support next time.
> >>
> >> Just to be sure - which project are we talking about here? Is it
> >> vdsm? 
> >
> > No. ovirt-engine.
> >  
> 
> All right, added 'dhol...@redhat.com'  to the ovirt-engine-maintainers
> group.
> 
> Good luck Dominik, and please be nice to the CI team and don't merge
> huge patch streams on Thursdays and Fridays like some people seem to
> like to do... ;)
> 

Thank you for putting so much trust in me.
I will try my very best to avoid surprises during the weekend.

> 
> >  
> >> On 13 May 2018 at 09:03, Alona Kaplan  wrote:
> >>  
> >>> Hi Eyal,
> >>>
> >>> Please grant +2 powers to Dominik.
> >>>
> >>> Thanks,
> >>> Alona.
> >>>
> >>> On Thu, May 3, 2018 at 2:48 PM, Sandro Bonazzola
> >>>  wrote:
> >>>  
> >>>>
> >>>>
> >>>> 2018-05-01 9:54 GMT+02:00 Alona Kaplan :
> >>>>  
> >>>>> Hi all,
> >>>>>
> >>>>> Dominik Holler has been working on the oVirt project for more
> >>>>> than 1.5 years.
> >>>>>
> >>>>> To share some of Dominik's great stats -
> >>>>> ~ 120 patches related to the network backend/ui
> >>>>> ~ 95 patches for ovirt-provider-ovn
> >>>>> ~ 44 vdsm patches
> >>>>> ~ 80 bug fixes
> >>>>>
> >>>>> He was the feature owner of 'auto sync network provider',
> >>>>> 'lldp-reporting' and 'network-filter-parameters'.
> >>>>>
> >>>>> For the last few months Dominik is helping review
> >>>>> network-backend related patches and is doing a great and
> >>>>> thorough work. Dominik showed a deep understanding of all the
> >>>>> parts of code that he touched or reviewed.
> >>>>> He learns fast, thorough and uncompromising.
> >>>>>
> >>>>> I've reviewed most of Dominik's engine related work (code and
> >>>>> reviews). I trust his opinion and think he will be a good
> >>>>> addition to the maintainers team.
> >>>>>
> >>>>> I would like to propose Dominik as a Network backend maintainer.
> >>>>>  
> >>>>
> >>>> I think you already got enough +1 but if needed, +1 from me as
> >>>> well.
> >>>>
> >>>>
> >>>>  
> >>>>>
> >>>>>
> >>>>> Thanks,
> >>>>> Alona.
> >>>>>
> >>>>> ___
> >>>>> Devel mailing list
> >>>>> Devel@ovirt.org
> >>>>> http://lists.ovirt.org/mailman/listinfo/devel
> >>>>>  
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>>
> >>>> SANDRO BONAZZOLA
> >>>>
> >>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION
> >>>> R&D
> >>>>
> >>>> Red Hat EMEA <https://www.redhat.com/>
> >>>>
> >>>> sbona...@redhat.com
> >>>> <https://red.ht/sig>
> >>>> <https://redhat.com/summit>
> >>>>  
> >>>
> >>>
> >>> ___
> >>> Devel mailing list -- devel@ovirt.org
> >>> To unsubscribe send an email to devel-le...@ovirt.org
> >>>  
> >>
> >>
> >>
> >> --
> >> Barak Korren
> >> RHV DevOps team , RHCE, RHCi
> >> Red Hat EMEA
> >> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> >>  
> >
> >  
> 
> 

[ovirt-devel] Re: Adding Fedora 28 host to engine 4.2 (NotImplementedError: Packager install not implemented)

2018-06-22 Thread Dominik Holler
On Fri, 22 Jun 2018 08:31:16 +0200
Sandro Bonazzola  wrote:

> 2018-06-21 21:42 GMT+02:00 Nir Soffer :
> 
> > On Wed, Jun 20, 2018 at 9:25 PM Nir Soffer 
> > wrote: 
> >> On Wed, Jun 20, 2018 at 11:06 AM Sandro Bonazzola
> >>  wrote:
> >>  
> >>> 2018-06-20 0:21 GMT+02:00 Nir Soffer :
> >>>  
>  I'm trying to add a host running Fedora 28 to engine 4.2, and
>  installation
>  fails with:
> 
>  2018-06-20 01:14:26,137+0300 DEBUG otopi.context
>  context._executeMethod:143 method exception
>  Traceback (most recent call last):
>    File "/tmp/ovirt-Z5BGYej3Qa/pythonlib/otopi/context.py", line
>  133, in _executeMethod
>  method['method']()
>    File
>  "/tmp/ovirt-Z5BGYej3Qa/otopi-plugins/ovirt-host-deploy/vdsm/vdsmid.py",
>  line 84, in _packages self.packager.install(('dmidecode',))
>    File "/tmp/ovirt-Z5BGYej3Qa/pythonlib/otopi/packager.py", line
>  102, in install
>  raise NotImplementedError(_('Packager install not
>  implemented')) NotImplementedError: Packager install not
>  implemented 2018-06-20 01:14:26,138+0300 ERROR otopi.context
>  context._executeMethod:152 Failed to execute stage 'Environment
>  packages setup': Packager install not implemented
> 
>   
> >>> Can you please send output of "rpm -qa|grep otopi" ? Please note
> >>> that otopi and ovirt-host-deploy are installed on the 4.2 engine
> >>> host and executed with ssh on the fedora 28 host. So you'll need
> >>> otopi and ovirt-host-deploy from master installed on the 4.2
> >>> engine host. 
> >>
> >> On the 4.2 engine host I'm using latest 4.2 release and repos.
> >>  
> >
> > Tried again with engine master
> > (2e3a05ffc83611a2cb18c2ca7268be2d489834f9) on CentOS 7.5 (1804).
> >
> > $ rpm -qa | egrep 'otopi|ovit-host'
> > otopi-common-1.8.0-0.0.master.20180614102257.git6c66781.el7.noarch
> > python2-otopi-1.8.0-0.0.master.20180614102257.git6c66781.el7.noarch
> >
> > $ rpm -qa | egrep 'ovirt-release'
> > ovirt-release-master-4.3.0-0.1.master.2018062053.git025660e.el7.noarch
> >
> > Building and installing engine was great pain. Developer
> > documentation is in the same poor state it was 5 years ago.  Thanks
> > Daniel for the help!
> >
> > Adding Fedora 28 host fails with (in host deploy log):
> >
> > AttributeError: 'str' object has no attribute 'decode'
> > 2018-06-21 02:12:18,864+0300 ERROR otopi.context
> > context._executeMethod:152 Failed to execute stage 'Initializing':
> > 'str' object has no attribute 'decode'
> >
> > After fixing this we fail with (in host deploy log):
> >
> > TypeError: a bytes-like object is required, not 'str'
> > 2018-06-21 02:21:50,148+0300 ERROR otopi.context
> > context._executeMethod:152 Failed to execute stage 'Setup
> > validation': a bytes-like object is required, not 'str'
> >
> > Both issues fixed in https://gerrit.ovirt.org/#/c/92437/
> >
> > With this patch we reach the next failure in otopi, fixed in
> > https://gerrit.ovirt.org/#/c/92435/
> >
> >  
> Thanks for the patches, Didi can you please review / merge if good?
> 
> 
> 
> > Next failure is in TASK [ovirt-provider-ovn-driver : Install
> > ovirt-provider-ovn-driver]:
> > The conditional check 'ovn_central | ipaddr' failed. The error was:
> > The ipaddr filter requires python-netaddr be installed on the
> > ansible controller
> >  
> 
> > After installing python-netaddr on the engine host, we passed
> > this step.
> >  
> 
> 
> Ondra, is one of the role rpms missing the dependency?
> 
> 
> >
> > I don't know why this ansible playbook is running, I answered NO
> > when engine-setup asked about OVN.
> >

The Ansible role is always triggered; the issue comes up while
deciding whether the role should be executed or skipped.

> > I'm not sure where the dependency on python-netaddr should be.
> >

The issue is in the file
/usr/share/ovirt-engine/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml
which belongs to the package ovirt-engine-tools.

Ondra, can we add the dependency to ovirt-engine-tools?
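For illustration, the `ipaddr` filter used in that conditional essentially checks whether `ovn_central` holds a valid IP address. A rough stdlib-based stand-in (a sketch only; Ansible's real filter is backed by python-netaddr and supports more input forms) would be:

```python
import ipaddress

def looks_like_ip(value):
    """Rough stand-in for Ansible's `ipaddr` filter: truthy only
    when the value is a valid IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

print(looks_like_ip("192.168.200.2"))  # True  -> role would run
print(looks_like_ip("not-an-ip"))      # False -> role is skipped
```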

> > Next failure is in TASK [ovirt-host-deploy-firewalld : Enable SSH
> > port] unsupported version of firewalld, requires >= 0.2.11
> >
> > # rpm -q firewalld
> > firewalld-0.5.2-2.fc28.noarch
> >
> > Obviously the complain is incorrect, "0.5.2" > "0.2.11".
> >
> >  
> Reopened https://bugzilla.redhat.com/show_bug.cgi?id=1381135
> 
> 
> 
> 
> > I worked around this by disabling firewall configuration when adding
> > a host.
> >
> > The host was added but was not reachable.
> >
> > To fix this, I disabled the firewall on the host using:
> > iptables -F
> >
> > The next issue is missing ovirtmgmt bridge on the host, using setup
> > networks
> > fixed the issue - and the host became UP.
> >  
> 
> This should have been done by ovirt-host-deploy, can you please share
> the host deploy logs?
> 
> 
> 
> >
> > I tried to add storage, and found that:
> >
> > - engine "New Domain" dialog is very broken now. See attached
> > screenshots.
> >
> > - block stora

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt Master (ALL) ] [ 27-07-2018 ] [ 002_bootstrap.list_glance_images ]

2018-07-27 Thread Dominik Holler
On Fri, 27 Jul 2018 15:37:23 +0100
Dafna Ron  wrote:

> There are two issues here:
> 1. OST is exiting with wring error due to the local function not
> working which Gal has a patch for
> 2. there is an actual code regression which we suspects comes from
> the SDK
> 
> We suspect the issue is a new sdk package build yesterday
> https://cbs.centos.org/koji/buildinfo?buildID=23581
> 
> Here is a link to the fist failure's logs:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8800/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> 
> Dominic is currently looking at the issue.
> 

One reason was an incompatibility at the bytecode level between the
openstack-java-sdk used at compile time and the one used at runtime.
Fix posted on https://gerrit.ovirt.org/93352

> 
> 
> 2018-07-26 13:47:37,745-04 DEBUG
> [org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [3283d2df]
> Got: ***L:INFO Yum install: 217/529: libosinfo-1.0.0-1.el7.x86_64
> 2018-07-26 13:47:37,745-04 DEBUG
> [org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [3283d2df]
> nextEvent: Log INFO Yum install: 217/529: libosinfo-1.0.0-1.el7.x86_64
> 2018-07-26 13:47:37,754-04 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-2) [b4915769-1cf0-4526-9214-e932d078cf07] method:
> runAction, params: [TestProviderConnectivity,
> ProviderParameters:{comma
> ndId='d29721ff-3dd7-4932-b43d-eee819f1afee', user='null',
> commandType='Unknown'}], timeElapsed: 33ms 2018-07-26 13:47:37,766-04
> INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [3283d2df] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
> Installing Host lago-basic-suite-master-host-1. Yum install: 217/529:
> l ibosinfo-1.0.0-1.el7.x86_64. 2018-07-26 13:47:37,782-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-2) [] Operation Failed: WFLYEJB0442: Unexpected Error
> 2018-07-26 13:47:37,782-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-2) [] Exception: javax.ejb.EJBException: WFLYEJB0442:
> Unexpected Error at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:218)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:418)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:148)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at
> org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
> at
> org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>   

[ovirt-devel] Re: failure in ost test - Invalid argument - help in debugging issue

2018-09-10 Thread Dominik Holler
Looks like the problem is network-related; we will take a deeper look.

On Mon, 10 Sep 2018 10:01:28 +0100
Dafna Ron  wrote:

> Hi,
> 
> can someone please have a look at this ost failure?
> it is not related to the change that failed and I think its probably a
> race.
> 
> you can find the logs here:
> 
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10175/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> 
> The error I can see is this:
> 
> https://pastebin.com/pm6x0W62
> 
> Thanks,
> Dafna
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3OQQYVK4YHEHKTFEO74CHJ3A5VHTGUPM/


[ovirt-devel] Re: issue in vdsm monitoring

2018-09-11 Thread Dominik Holler
On Tue, 11 Sep 2018 10:26:02 +0100
Dafna Ron  wrote:

> Hi,
> 
> I have been seeing random failures of tests in different projects
> caused by vdsm monitoring.
> 
> I need someone from vdsm to please help debug this issue.
> 

Petr, is this the same problem as yesterday in

[ovirt-devel] failure in ost test - Invalid argument - help in debugging issue

> From what I can see, the test suspend/resume vm failed because we
> could not query the status of the vm on the host.
> 
> you can see full log from failed tests here:
> 
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10208/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> 
> Here are the errors that I can see in the vdsm which seem to suggest
> that there is an issue getting stats on the vm which was suspended
> and caused the failure of the test:
> 
> 
> 
> 2018-09-11 00:40:05,896-0400 INFO  (monitor/c1fe6e6)
> [storage.StorageDomain] Removing remnants of deleted images []
> (fileSD:734) 2018-09-11 00:40:07,957-0400 DEBUG (qgapoller/1) [vds]
> Not sending QEMU-GA command 'guest-get-users' to
> vm_id='8214433a-f233-4aaa-aeda-2ce1d31c78dc', command is not
> supported (qemuguestagent:192) 2018-09-11 00:40:08,068-0400 DEBUG
> (periodic/3) [virt.sampling.VMBulkstatsMonitor] sampled timestamp
> 4296118.49 elapsed 0.010 acquired True domains all (sampling:443)
> 2018-09-11 00:40:08,271-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer]
> Calling 'Image.prepare' in bridge with {u'allowIllegal': True,
> u'storagepoolID': u'e80a56d9-74da-498a-b010-4a9df287f11d', u'imageID':
> u'd4c831e6-02d2-4d89-b516-0ec45975e024', u'volumeID':
> u'15b07af1-625b-42e3-b62a-8e7c7a120a56',
> u'storagedomainID': u'f1744940-41b6-4d35-b7bf-870c4e07d995'}
> (__init__:329)
> 
> 
> 2018-09-11 00:40:10,846-0400 DEBUG (vmchannels) [virt.vm]
> (vmId='8214433a-f233-4aaa-aeda-2ce1d31c78dc') Guest connection timed
> out (guestagent:556)
> 2018-09-11 00:40:11,637-0400 DEBUG (jsonrpc/5) [jsonrpc.JsonRpcServer]
> Calling 'Host.getStats' in bridge with {} (__init__:329)
> 2018-09-11 00:40:11,637-0400 INFO  (jsonrpc/5) [api.host] START
> getStats() from=:::192.168.201.4,49184 (api:47)
> 2018-09-11 00:40:11,643-0400 DEBUG (jsonrpc/5) [root] cannot read eth0
> speed (nic:42)
> 2018-09-11 00:40:11,645-0400 DEBUG (jsonrpc/5) [root] cannot read eth1
> speed (nic:42)
> 2018-09-11 00:40:11,647-0400 DEBUG (jsonrpc/5) [root] cannot read eth2
> speed (nic:42)
> 2018-09-11 00:40:11,649-0400 DEBUG (jsonrpc/5) [root] cannot read eth3
> speed (nic:42)
> 2018-09-11 00:40:11,667-0400 INFO  (jsonrpc/5) [api.host] FINISH
> getStats error=[Errno 22] Invalid argument
> from=:::192.168.201.4,49184 (api:51) 2018-09-11 00:40:11,667-0400
> ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal server error
> (__init__:350) Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 345, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line
> 202, in _dynamicMethod
> result = fn(*methodArgs)
>   File "", line 2, in getStats
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> 49, in method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1407, in
> getStats
> multipath=True)}
>   File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 49,
> in get_stats
> decStats = stats.produce(first_sample, last_sample)
>   File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line
> 71, in produce
> stats.update(get_interfaces_stats())
>   File "/usr/lib/python2.7/site-packages/vdsm/host/stats.py", line
> 153, in get_interfaces_stats
> return net_api.network_stats()
>   File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line
> 63, in network_stats
> return netstats.report()
>   File "/usr/lib/python2.7/site-packages/vdsm/network/netstats.py",
> line 31, in report
> stats = link_stats.report()
>   File "/usr/lib/python2.7/site-packages/vdsm/network/link/stats.py",
> line 41, in report
> speed = vlan.speed(i.device)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/link/vlan.py",
> line 36, in speed
> dev_speed = nic.read_speed_using_sysfs(dev_name)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/link/nic.py",
> line 48, in read_speed_using_sysfs
> s = int(f.read())
> IOError: [Errno 22] Invalid argument
> 2018-09-11 00:40:11,669-0400 INFO  (jsonrpc/5)
> [jsonrpc.JsonRpcServer] RPC call Host.getStats failed (error -32603)
> in 0.03 seconds (__init__:312) 2018-09-11 00:40:14,656-0400 DEBUG
> (jsonrpc/2) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmStats' in
> bridge with {} (__init__:329) 2018-09-11 00:40:14,657-0400 INFO
> (jsonrpc/2) [api.host] START getAllVmStats() from=::1,46772 (api:47)
> 2018-09-11 00:40:14,659-0400 INFO  (jsonrpc/2) [api.host] FINISH
> getAllVmStats return={'status': {'message': 'Done', 'code': 0},
> 'stats

[ovirt-devel] Re: issue in vdsm monitoring

2018-09-11 Thread Dominik Holler
On Tue, 11 Sep 2018 12:22:21 +0100
Dafna Ron  wrote:

> Can someone take ownership to fix it?
> 

I will take ownership and track the fix.


> On Tue, Sep 11, 2018 at 12:04 PM, Petr Horacek 
> wrote:
> 
> > vdsm.log is the same issue, supervdsm.log seems unrelated.
> >
> > 2018-09-11 11:59 GMT+02:00 Dominik Holler :
> >  
> >> On Tue, 11 Sep 2018 10:26:02 +0100
> >> Dafna Ron  wrote:
> >>  
> >> > Hi,
> >> >
> >> > I have been seeing random failures of tests in different projects
> >> > caused by vdsm monitoring.
> >> >
> >> > I need someone from vdsm to please help debug this issue.
> >> >  
> >>
> >> Petr, is this the same problem like yesterday in
> >>
> >> [ovirt-devel] failure in ost test - Invalid argument - help in
> >> debugging issue
> >>  
> >> > From what I can see, the test suspend/resume vm failed because we
> >> > could not query the status of the vm on the host.
> >> >
> >> > you can see full log from failed tests here:
> >> >
> >> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-test  
> >> er/10208/artifact/basic-suite.el7.x86_64/test_logs/basic-
> >> suite-master/post-004_basic_sanity.py/  
> >> >
> >> > Here are the errors that I can see in the vdsm which seem to
> >> > suggest that there is an issue getting stats on the vm which was
> >> > suspended and caused the failure of the test:
> >> >
> >> >
> >> >
> >> > 2018-09-11 00:40:05,896-0400 INFO  (monitor/c1fe6e6)
> >> > [storage.StorageDomain] Removing remnants of deleted images []
> >> > (fileSD:734) 2018-09-11 00:40:07,957-0400 DEBUG (qgapoller/1)
> >> > [vds] Not sending QEMU-GA command 'guest-get-users' to
> >> > vm_id='8214433a-f233-4aaa-aeda-2ce1d31c78dc', command is not
> >> > supported (qemuguestagent:192) 2018-09-11 00:40:08,068-0400 DEBUG
> >> > (periodic/3) [virt.sampling.VMBulkstatsMonitor] sampled timestamp
> >> > 4296118.49 elapsed 0.010 acquired True domains all (sampling:443)
> >> > 2018-09-11 00:40:08,271-0400 DEBUG (jsonrpc/1)
> >> > [jsonrpc.JsonRpcServer] Calling 'Image.prepare' in bridge with
> >> > {u'allowIllegal': True, u'storagepoolID':
> >> > u'e80a56d9-74da-498a-b010-4a9df287f11d', u'imageID':
> >> > u'd4c831e6-02d2-4d89-b516-0ec4597 5e024', u'volumeID':
> >> > u'15b07af1-625b-42e3-b62a-8e7c7a120a56', u'storagedomainID':
> >> > u'f1744940-41b6-4d35-b7bf-870c4e07d995'} (__init__:329)
> >> >
> >> >
> >> > 2018-09-11 00:40:10,846-0400 DEBUG (vmchannels) [virt.vm]
> >> > (vmId='8214433a-f233-4aaa-aeda-2ce1d31c78dc') Guest connection
> >> > timed out (guestagent:556)
> >> > 2018-09-11 00:40:11,637-0400 DEBUG (jsonrpc/5)
> >> > [jsonrpc.JsonRpcServer] Calling 'Host.getStats' in bridge with
> >> > {} (__init__:329) 2018-09-11 00:40:11,637-0400 INFO  (jsonrpc/5)
> >> > [api.host] START getStats() from=:::192.168.201.4,49184
> >> > (api:47) 2018-09-11 00:40:11,643-0400 DEBUG (jsonrpc/5) [root]
> >> > cannot read eth0 speed (nic:42)
> >> > 2018-09-11 00:40:11,645-0400 DEBUG (jsonrpc/5) [root] cannot
> >> > read eth1 speed (nic:42)
> >> > 2018-09-11 00:40:11,647-0400 DEBUG (jsonrpc/5) [root] cannot
> >> > read eth2 speed (nic:42)
> >> > 2018-09-11 00:40:11,649-0400 DEBUG (jsonrpc/5) [root] cannot
> >> > read eth3 speed (nic:42)
> >> > 2018-09-11 00:40:11,667-0400 INFO  (jsonrpc/5) [api.host] FINISH
> >> > getStats error=[Errno 22] Invalid argument
> >> > from=:::192.168.201.4,49184 (api:51) 2018-09-11
> >> > 00:40:11,667-0400 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
> >> > Internal server error (__init__:350) Traceback (most recent call
> >> > last): File
> >> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> >> > 345, in _handle_request res = method(**params)
> >> >   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
> >> > line 202, in _dynamicMethod
> >> > result = fn(*methodArgs)
> >> >   File "", line 2, in getStats
> >> >   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
&

[ovirt-devel] Re: issue in vdsm monitoring

2018-09-11 Thread Dominik Holler
On Tue, 11 Sep 2018 17:18:15 +0200
Dominik Holler  wrote:

> On Tue, 11 Sep 2018 12:22:21 +0100
> Dafna Ron  wrote:
> 
> > Can someone take ownership to fix it?
> >   
> 
> I will take ownership and track the fix.
> 
> 
> > On Tue, Sep 11, 2018 at 12:04 PM, Petr Horacek 
> > wrote:
> >   
> > > vdsm.log is the same issue, supervdsm.log seems unrelated.
> > >
> > > 2018-09-11 11:59 GMT+02:00 Dominik Holler :
> > >
> > >> On Tue, 11 Sep 2018 10:26:02 +0100
> > >> Dafna Ron  wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > I have been seeing random failures of tests in different
> > >> > projects caused by vdsm monitoring.
> > >> >
> > >> > I need someone from vdsm to please help debug this issue.
> > >> >
> > >>
> > >> Petr, is this the same problem like yesterday in
> > >>
> > >> [ovirt-devel] failure in ost test - Invalid argument - help in
> > >> debugging issue
> > >>
> > >> > From what I can see, the test suspend/resume vm failed because
> > >> > we could not query the status of the vm on the host.
> > >> >
> > >> > you can see full log from failed tests here:
> > >> >
> > >> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-test
> > >> er/10208/artifact/basic-suite.el7.x86_64/test_logs/basic-
> > >> suite-master/post-004_basic_sanity.py/
> > >> >
> > >> > Here are the errors that I can see in the vdsm which seem to
> > >> > suggest that there is an issue getting stats on the vm which
> > >> > was suspended and caused the failure of the test:
> > >> >
> > >> >
> > >> >
> > >> > 2018-09-11 00:40:05,896-0400 INFO  (monitor/c1fe6e6)
> > >> > [storage.StorageDomain] Removing remnants of deleted images []
> > >> > (fileSD:734) 2018-09-11 00:40:07,957-0400 DEBUG (qgapoller/1)
> > >> > [vds] Not sending QEMU-GA command 'guest-get-users' to
> > >> > vm_id='8214433a-f233-4aaa-aeda-2ce1d31c78dc', command is not
> > >> > supported (qemuguestagent:192) 2018-09-11 00:40:08,068-0400
> > >> > DEBUG (periodic/3) [virt.sampling.VMBulkstatsMonitor] sampled
> > >> > timestamp 4296118.49 elapsed 0.010 acquired True domains all
> > >> > (sampling:443) 2018-09-11 00:40:08,271-0400 DEBUG (jsonrpc/1)
> > >> > [jsonrpc.JsonRpcServer] Calling 'Image.prepare' in bridge with
> > >> > {u'allowIllegal': True, u'storagepoolID':
> > >> > u'e80a56d9-74da-498a-b010-4a9df287f11d', u'imageID':
> > >> > u'd4c831e6-02d2-4d89-b516-0ec4597 5e024', u'volumeID':
> > >> > u'15b07af1-625b-42e3-b62a-8e7c7a120a56', u'storagedomainID':
> > >> > u'f1744940-41b6-4d35-b7bf-870c4e07d995'} (__init__:329)
> > >> >
> > >> >
> > >> > 2018-09-11 00:40:10,846-0400 DEBUG (vmchannels) [virt.vm]
> > >> > (vmId='8214433a-f233-4aaa-aeda-2ce1d31c78dc') Guest connection
> > >> > timed out (guestagent:556)
> > >> > 2018-09-11 00:40:11,637-0400 DEBUG (jsonrpc/5)
> > >> > [jsonrpc.JsonRpcServer] Calling 'Host.getStats' in bridge with
> > >> > {} (__init__:329) 2018-09-11 00:40:11,637-0400 INFO
> > >> > (jsonrpc/5) [api.host] START getStats()
> > >> > from=:::192.168.201.4,49184 (api:47) 2018-09-11
> > >> > 00:40:11,643-0400 DEBUG (jsonrpc/5) [root] cannot read eth0
> > >> > speed (nic:42) 2018-09-11 00:40:11,645-0400 DEBUG (jsonrpc/5)
> > >> > [root] cannot read eth1 speed (nic:42)
> > >> > 2018-09-11 00:40:11,647-0400 DEBUG (jsonrpc/5) [root] cannot
> > >> > read eth2 speed (nic:42)
> > >> > 2018-09-11 00:40:11,649-0400 DEBUG (jsonrpc/5) [root] cannot
> > >> > read eth3 speed (nic:42)
> > >> > 2018-09-11 00:40:11,667-0400 INFO  (jsonrpc/5) [api.host]
> > >> > FINISH getStats error=[Errno 22] Invalid argument
> > >> > from=:::192.168.201.4,49184 (api:51) 2018-09-11
> > >> > 00:40:11,667-0400 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
> > >> > Internal server error (__init__:350) Traceback (most recent
> > >> > call last): File

[ovirt-devel] Re: issue in vdsm monitoring

2018-09-13 Thread Dominik Holler
On Wed, 12 Sep 2018 08:17:46 +0100
Dafna Ron  wrote:

> Thanks Dominic.
> Can you please review the network tests to fix this race between the
> two tests?
> 
> 


Dafna, why do you think the current behavior should be changed, and
how do you think it should be improved?

> On Tue, Sep 11, 2018 at 9:53 PM, Dominik Holler 
> wrote:
> 
> > On Tue, 11 Sep 2018 17:18:15 +0200
> > Dominik Holler  wrote:
> >  
> > > On Tue, 11 Sep 2018 12:22:21 +0100
> > > Dafna Ron  wrote:
> > >  
> > > > Can someone take ownership to fix it?
> > > >  
> > >
> > > I will take ownership and track the fix.
> > >
> > >  
> > > > On Tue, Sep 11, 2018 at 12:04 PM, Petr Horacek
> > > >  wrote:
> > > >  
> > > > > vdsm.log is the same issue, supervdsm.log seems unrelated.
> > > > >
> > > > > 2018-09-11 11:59 GMT+02:00 Dominik Holler
> > > > > : 
> > > > >> On Tue, 11 Sep 2018 10:26:02 +0100
> > > > >> Dafna Ron  wrote:
> > > > >>  
> > > > >> > Hi,
> > > > >> >
> > > > >> > I have been seeing random failures of tests in different
> > > > >> > projects caused by vdsm monitoring.
> > > > >> >
> > > > >> > I need someone from vdsm to please help debug this issue.
> > > > >> >  
> > > > >>
> > > > >> Petr, is this the same problem like yesterday in
> > > > >>
> > > > >> [ovirt-devel] failure in ost test - Invalid argument - help
> > > > >> in debugging issue
> > > > >>  
> > > > >> > From what I can see, the test suspend/resume vm failed
> > > > >> > because we could not query the status of the vm on the
> > > > >> > host.
> > > > >> >
> > > > >> > you can see full log from failed tests here:
> > > > >> >
> > > > >> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-test  
> > > > >> er/10208/artifact/basic-suite.el7.x86_64/test_logs/basic-
> > > > >> suite-master/post-004_basic_sanity.py/  
> > > > >> >
> > > > >> > Here are the errors that I can see in the vdsm which seem
> > > > >> > to suggest that there is an issue getting stats on the vm
> > > > >> > which was suspended and caused the failure of the test:
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > 2018-09-11 00:40:05,896-0400 INFO  (monitor/c1fe6e6)
> > > > >> > [storage.StorageDomain] Removing remnants of deleted
> > > > >> > images [] (fileSD:734) 2018-09-11 00:40:07,957-0400 DEBUG
> > > > >> > (qgapoller/1) [vds] Not sending QEMU-GA command
> > > > >> > 'guest-get-users' to
> > > > >> > vm_id='8214433a-f233-4aaa-aeda-2ce1d31c78dc', command is
> > > > >> > not supported (qemuguestagent:192) 2018-09-11
> > > > >> > 00:40:08,068-0400 DEBUG (periodic/3)
> > > > >> > [virt.sampling.VMBulkstatsMonitor] sampled timestamp
> > > > >> > 4296118.49 elapsed 0.010 acquired True domains all
> > > > >> > (sampling:443) 2018-09-11 00:40:08,271-0400 DEBUG
> > > > >> > (jsonrpc/1) [jsonrpc.JsonRpcServer] Calling
> > > > >> > 'Image.prepare' in bridge with {u'allowIllegal': True,
> > > > >> > u'storagepoolID': u'e80a56d9-74da-498a-b010-4a9df287f11d',
> > > > >> > u'imageID': u'd4c831e6-02d2-4d89-b516-0ec4597 5e024',
> > > > >> > u'volumeID': u'15b07af1-625b-42e3-b62a-8e7c7a120a56',
> > > > >> > u'storagedomainID':
> > > > >> > u'f1744940-41b6-4d35-b7bf-870c4e07d995'} (__init__:329)
> > > > >> >
> > > > >> >
> > > > >> > 2018-09-11 00:40:10,846-0400 DEBUG (vmchannels) [virt.vm]
> > > > >> > (vmId='8214433a-f233-4aaa-aeda-2ce1d31c78dc') Guest
> > > > >> > connection timed out (guestagent:556)
> > > > >> > 2018-09-11 00:40:11,637-0400 DEBUG (jsonrpc/5)
> > > > >> > [jsonr

[ovirt-devel] Re: issue in vdsm monitoring

2018-09-13 Thread Dominik Holler
On Thu, 13 Sep 2018 10:32:12 +0100
Dafna Ron  wrote:

> Hi Dominic,
> 
> should detach_vm_network_from_host_0 be running while
> modify_host_0_ip_to_dhcp set VM_NETWORK is running?
> 

modify_host_0_ip_to_dhcp just starts the DHCP client asynchronously in
the background on the host and returns control to the test environment.


> 
> 
> On Thu, Sep 13, 2018 at 8:05 AM, Dominik Holler 
> wrote:
> 
> > On Wed, 12 Sep 2018 08:17:46 +0100
> > Dafna Ron  wrote:
> >  
> > > Thanks Dominic.
> > > Can you please review the network tests to fix this race between
> > > the two tests?
> > >
> > >  
> >
> >
> > Dafna, why do you think the current behavior should be changed, or
> > how the current behavior should be improved?
> >  
> > > On Tue, Sep 11, 2018 at 9:53 PM, Dominik Holler
> > >  wrote:
> > >  
> > > > On Tue, 11 Sep 2018 17:18:15 +0200
> > > > Dominik Holler  wrote:
> > > >  
> > > > > On Tue, 11 Sep 2018 12:22:21 +0100
> > > > > Dafna Ron  wrote:
> > > > >  
> > > > > > Can someone take ownership to fix it?
> > > > > >  
> > > > >
> > > > > I will take ownership and track the fix.
> > > > >
> > > > >  
> > > > > > On Tue, Sep 11, 2018 at 12:04 PM, Petr Horacek
> > > > > >  wrote:
> > > > > >  
> > > > > > > vdsm.log is the same issue, supervdsm.log seems unrelated.
> > > > > > >
> > > > > > > 2018-09-11 11:59 GMT+02:00 Dominik Holler
> > > > > > > :  
> > > > > > >> On Tue, 11 Sep 2018 10:26:02 +0100
> > > > > > >> Dafna Ron  wrote:
> > > > > > >>  
> > > > > > >> > Hi,
> > > > > > >> >
> > > > > > >> > I have been seeing random failures of tests in
> > > > > > >> > different projects caused by vdsm monitoring.
> > > > > > >> >
> > > > > > >> > I need someone from vdsm to please help debug this
> > > > > > >> > issue. 
> > > > > > >>
> > > > > > >> Petr, is this the same problem like yesterday in
> > > > > > >>
> > > > > > >> [ovirt-devel] failure in ost test - Invalid argument -
> > > > > > >> help in debugging issue
> > > > > > >>  
> > > > > > >> > From what I can see, the test suspend/resume vm failed
> > > > > > >> > because we could not query the status of the vm on the
> > > > > > >> > host.
> > > > > > >> >
> > > > > > >> > you can see full log from failed tests here:
> > > > > > >> >
> > > > > > >> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-test  
> >  
> > > > > > >> er/10208/artifact/basic-suite.el7.x86_64/test_logs/basic-
> > > > > > >> suite-master/post-004_basic_sanity.py/  
> > > > > > >> >
> > > > > > >> > Here are the errors that I can see in the vdsm which
> > > > > > >> > seem to suggest that there is an issue getting stats
> > > > > > >> > on the vm which was suspended and caused the failure
> > > > > > >> > of the test:
> > > > > > >> >
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > 2018-09-11 00:40:05,896-0400 INFO  (monitor/c1fe6e6)
> > > > > > >> > [storage.StorageDomain] Removing remnants of deleted
> > > > > > >> > images [] (fileSD:734) 2018-09-11 00:40:07,957-0400
> > > > > > >> > DEBUG (qgapoller/1) [vds] Not sending QEMU-GA command
> > > > > > >> > 'guest-get-users' to
> > > > > > >> > vm_id='8214433a-f233-4aaa-aeda-2ce1d31c78dc', command
> > > > > > >> > is not supported (qemuguestagent:192) 2018-09-11
> > > > > > >> > 00:40:08,068-0400 DEBUG (periodic/3)
> > > > > > >> > [virt.sampling.VMBulkstatsMonitor] sampled timestamp
> > > > > > >> > 4296118.49 elapsed 0.010 acq

[ovirt-devel] Re: issue in vdsm monitoring

2018-09-13 Thread Dominik Holler
On Thu, 13 Sep 2018 10:53:20 +0100
Dafna Ron  wrote:

> but this test is currently failing no?
> 

No, only the issue related to vlan.speed() failed.
This is already addressed in "failure in ost test - Invalid argument -
help in debugging issue", and Petr provided a fix for master and
ovirt-4.2 in https://gerrit.ovirt.org/#/c/94301/.

So I am not aware of any network-related OST failure.

> On Thu, Sep 13, 2018 at 10:49 AM, Dominik Holler 
> wrote:
> 
> > On Thu, 13 Sep 2018 10:32:12 +0100
> > Dafna Ron  wrote:
> >  
> > > Hi Dominic,
> > >
> > > should detach_vm_network_from_host_0 be running while
> > > modify_host_0_ip_to_dhcp set VM_NETWORK is running?
> > >  
> >
> > modify_host_0_ip_to_dhcp just initiates the async dhcp client in the
> > background on the host and returns in the test environment.
> >
> >  
> > >
> > >
> > > On Thu, Sep 13, 2018 at 8:05 AM, Dominik Holler
> > >  wrote:
> > >  
> > > > On Wed, 12 Sep 2018 08:17:46 +0100
> > > > Dafna Ron  wrote:
> > > >  
> > > > > Thanks Dominic.
> > > > > Can you please review the network tests to fix this race
> > > > > between the two tests?
> > > > >
> > > > >  
> > > >
> > > >
> > > > Dafna, why do you think the current behavior should be changed,
> > > > or how the current behavior should be improved?
> > > >  
> > > > > On Tue, Sep 11, 2018 at 9:53 PM, Dominik Holler
> > > > >  wrote:
> > > > >  
> > > > > > On Tue, 11 Sep 2018 17:18:15 +0200
> > > > > > Dominik Holler  wrote:
> > > > > >  
> > > > > > > On Tue, 11 Sep 2018 12:22:21 +0100
> > > > > > > Dafna Ron  wrote:
> > > > > > >  
> > > > > > > > Can someone take ownership to fix it?
> > > > > > > >  
> > > > > > >
> > > > > > > I will take ownership and track the fix.
> > > > > > >
> > > > > > >  
> > > > > > > > On Tue, Sep 11, 2018 at 12:04 PM, Petr Horacek
> > > > > > > >  wrote:
> > > > > > > >  
> > > > > > > > > vdsm.log is the same issue, supervdsm.log seems
> > > > > > > > > unrelated.
> > > > > > > > >
> > > > > > > > > 2018-09-11 11:59 GMT+02:00 Dominik Holler
> > > > > > > > > :  
> > > > > > > > >> On Tue, 11 Sep 2018 10:26:02 +0100
> > > > > > > > >> Dafna Ron  wrote:
> > > > > > > > >>  
> > > > > > > > >> > Hi,
> > > > > > > > >> >
> > > > > > > > >> > I have been seeing random failures of tests in
> > > > > > > > >> > different projects caused by vdsm monitoring.
> > > > > > > > >> >
> > > > > > > > >> > I need someone from vdsm to please help debug this
> > > > > > > > >> > issue.  
> > > > > > > > >>
> > > > > > > > >> Petr, is this the same problem like yesterday in
> > > > > > > > >>
> > > > > > > > >> [ovirt-devel] failure in ost test - Invalid argument
> > > > > > > > >> - help in debugging issue
> > > > > > > > >>  
> > > > > > > > >> > From what I can see, the test suspend/resume vm
> > > > > > > > >> > failed because we could not query the status of
> > > > > > > > >> > the vm on the host.
> > > > > > > > >> >
> > > > > > > > >> > you can see full log from failed tests here:
> > > > > > > > >> >
> > > > > > > > >> > https://jenkins.ovirt.org/job/  
> > ovirt-master_change-queue-test  
> > > >  
> > > > > > > > >> er/10208/artifact/basic-suite.el7.x86_64/test_logs/basic-
> > > > > > > > >> suite-master/post-004_basic_sanity.py/  
> > > > > > > > >> >
> > > > > > > > >> > Here are the e

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 27-09-2018 ] [ initialize_engine ]

2018-09-28 Thread Dominik Holler
On Thu, 27 Sep 2018 15:28:02 +0100
Dafna Ron  wrote:

> Hi,
> 
> we are failing on ovirt-engine 4.1 on the upgrade suite.
> 
> The issue seems to be related to this change:
> https://gerrit.ovirt.org/#/c/94551/ - packaging: Generate random MAC pool
> instead of hardcoded one
> 
> Can you please have a look and issue a fix?
> 
> Build log:
> 
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3234/testReport/junit/(root)/001_upgrade_engine/running_tests___upgrade_from_prevrelease_suite_el7_x86_64___test_initialize_engine/
> 
> error:
> 
> [ INFO  ] Yum Verify: 100/100: ovirt-engine-tools.noarch
> 0:4.1.9.1-1.el7.centos - ud
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Upgrading CA
> [ INFO  ] Installing PostgreSQL uuid-ossp extension into database
> [ INFO  ] Creating/refreshing DWH database schema
> [ INFO  ] Configuring WebSocket Proxy
> [ INFO  ] Creating/refreshing Engine database schema
> [ INFO  ] Creating/refreshing Engine 'internal' domain database schema
>   Unregistering existing client registration info.
> [ INFO  ] Creating default mac pool
> [ ERROR ] Failed to execute stage 'Misc configuration': insert or
> update on table "mac_pool_ranges" violates foreign key constraint
> "mac_pool_ranges_mac_pool_id_fkey"
>  DETAIL:  Key
> (mac_pool_id)=(58ca604b-017d-0374-0220-014e) is not present in
> table "mac_pools".
>  CONTEXT:  SQL statement "INSERT INTO mac_pool_ranges (
>  mac_pool_id,
>  from_mac,
>  to_mac
>  )
>  VALUES (
>  v_mac_pool_id,
>  v_from_mac,
>  v_to_mac
>  )"
>  PL/pgSQL function insertmacpoolrange(uuid,character
> varying,character varying) line 3 at SQL statement
> 
> [ INFO  ] Rolling back to the previous PostgreSQL instance (postgresql).
> [ INFO  ] Stage: Clean up
>   Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180927090017-97fd5u.log
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20180927090149-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
> ('FATAL Internal error (main): insert or update on table
> "mac_pool_ranges" violates foreign key constraint
> "mac_pool_ranges_mac_pool_id_fkey"\nDETAIL:  Key
> (mac_pool_id)=(58ca604b-017d-0374-0220-014e) is not present in
> table "mac_pools".\nCONTEXT:  SQL statement "INSERT INTO
> mac_pool_ranges (\nmac_pool_id,\nfrom_mac,\n
> to_mac\n)\nVALUES (\nv_mac_pool_id,\n
> v_from_mac,\nv_to_mac\n)"\nPL/pgSQL function
> insertmacpoolrange(uuid,character varying,character varying) line 3 at
> SQL statement\n',)
> 
> lago.ssh: DEBUG: Command 483aadd2 on
> lago-upgrade-from-prevrelease-suite-4-2-engine  errors:
>  Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/__main__.py", line 88, in main
> installer.execute()
>   File "/usr/lib/python2.7/site-packages/otopi/main.py", line 157, in execute
> self.context.runSequence()
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 771,
> in runSequence
> util.raiseExceptionInformation(infos[0])
>   File "/usr/lib/python2.7/site-packages/otopi/util.py", line 81, in
> raiseExceptionInformation
> exec('raise info[1], None, info[2]')
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133,
> in _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> line 98, in _misc_db_entries
> self._create_new_mac_pool_range(range_prefix)
>   File 
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> line 73, in _create_new_mac_pool_range
> to_mac=range_prefix + ':ff:ff',
>   File 
> "/usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/database.py",
> line 266, in execute
> args,
> IntegrityError: insert or update on table "mac_pool_ranges" violates
> foreign key constraint "mac_pool_ranges_mac_pool_id_fkey"
> DETAIL:  Key (mac_pool_id)=(58ca604b-017d-0374-0220-014e) is
> not present in table "mac_pools".
> CONTEXT:  SQL statement "INSERT INTO mac_pool_ranges (
> mac_pool_id,
> from_mac,
> to_mac
> )
> VALUES (
> v_mac_pool_id,
> v_from_mac,
> v_to_mac
> )"
> PL/pgSQL function insertmacpoolrange(uuid,character varying,character
> varying) line 3 at SQL statement
> 
> 
> Thanks,
> 
> Dafna

I will have a look.
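
[Editor's note: the FK violation above fires because engine-setup inserts the range unconditionally, even when the referenced mac_pools row does not exist. A minimal Python sketch of the guard idea, with illustrative names rather than the actual engine-setup API:

```python
def create_default_mac_range(existing_pool_ids, default_pool_id, insert_range):
    # Insert the range only when the pool row it references exists;
    # otherwise skip, so the mac_pool_ranges_mac_pool_id_fkey
    # constraint from the traceback can never be violated.
    if default_pool_id not in existing_pool_ids:
        return False
    insert_range(default_pool_id)
    return True
```

Here insert_range stands in for the insertmacpoolrange SQL call shown in the traceback.]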

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 27-09-2018 ] [ initialize_engine ]

2018-09-28 Thread Dominik Holler
https://gerrit.ovirt.org/#/c/94582/ fixes the issue

On Fri, 28 Sep 2018 20:04:21 +0100
Dafna Ron  wrote:

> Thanks
> Please note that ovirt-engine on 4.2 is broken and we had 2 more changes
> fail on this issue.
> 
> thanks,
> Dafna
> 
> 
> On Fri, Sep 28, 2018 at 3:10 PM Dominik Holler  wrote:
> 
> > On Thu, 27 Sep 2018 15:28:02 +0100
> > Dafna Ron  wrote:
> >  
>  [...]  
> > pool  
>  [...]  
> > https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3234/testReport/junit/(root)/001_upgrade_engine/running_tests___upgrade_from_prevrelease_suite_el7_x86_64___test_initialize_engine/
> >   
>  [...]  
> > main  
>  [...]  
> > execute  
>  [...]  
> > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> >   
>  [...]  
> > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> >   
>  [...]  
> > "/usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/database.py",
> >   
>  [...]  
> >
> > I will have a look.
> >  
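
[Editor's note: the patch blamed in this thread, "packaging: Generate random MAC pool instead of hardcoded one", implies picking a random range prefix at setup time. A hedged sketch of how such a prefix can be generated safely (locally administered, unicast); this is an illustration, not the actual engine-setup code:

```python
import random


def random_mac_range_prefix(rnd=None):
    # Four random octets; the first has the locally-administered bit
    # set (0x02) and the multicast bit cleared (0x01), so the range
    # cannot collide with vendor-assigned unicast MACs.
    rnd = rnd or random.Random()
    first = (rnd.randrange(256) | 0x02) & 0xFE
    octets = [first] + [rnd.randrange(256) for _ in range(3)]
    return ':'.join('%02x' % o for o in octets)


def mac_range_from_prefix(prefix):
    # engine-setup builds the range as prefix + ':00:00' .. ':ff:ff',
    # as visible in the traceback (to_mac=range_prefix + ':ff:ff').
    return prefix + ':00:00', prefix + ':ff:ff'
```
]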


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 01-10-2018 ] [ 002_bootstrap.download_engine_certs ]

2018-10-01 Thread Dominik Holler
On Mon, 1 Oct 2018 08:50:34 +0100
Dafna Ron  wrote:

> Hi,
> 
> We are failing project ovirt-engine on master branch.
> The issue seems to be related to the reported patch
> Dominik, can you please take a look?
> 

Thanks,
https://gerrit.ovirt.org/#/c/94585/ should be the fix, but it is not yet 
sufficiently verified.



> https://gerrit.ovirt.org/#/c/94582/ - packaging: Add MAC Pool range only if
> MAC Pool exists
> 
> full logs can be found here:
> 
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10442/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
> 
> Error:
> 
>  [...]  
> 2018-09-29 12:37:58,209-04 INFO
> [org.ovirt.engine.core.bll.network.macpool.MacPoolUsingRanges]
> (ServerService Thread Pool -- 43) [] Initializing
> MacPoolUsingRanges:{id='58ca604b-017d-0374-0220-014e'}
> 2018-09-29 12:37:58,220-04 ERROR
> [org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster]
> (ServerService Thread Pool -- 43) [] Error initializing: EngineException:
> MAC_POOL_INITIALIZATION_FAILED (Failed with error
> MAC_POOL_INITIALIZATION_FAILED and code 5010)
> 2018-09-29 12:37:58,237-04 ERROR [org.ovirt.engine.core.bll.Backend]
> (ServerService Thread Pool -- 43) [] Error during initialization:
> javax.ejb.EJBException: java.lang.IllegalStateException: WFLYEE0042: Failed
> to construct component instance
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:246)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.required(CMTTxInterceptor.java:362)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:144)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at
> org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:72)
> [weld-ejb-3.0.4.Final.jar:3.0.4.Final]
> at
> org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at
> org.jboss.as.ejb3.component.singleton.ContainerManagedConcurrencyInterceptor.processInvocation(ContainerManagedConcurrencyInterceptor.java:106)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
> at
> org.wildfly.security.ma

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-12 Thread Dominik Holler
On Sun, 11 Nov 2018 19:04:40 +0200
Dan Kenigsberg  wrote:

> On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri  wrote:
> >
> >
> >
> > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri  wrote:  
> >>
> >>
> >>
> >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg  wrote:  
> >>>
> >>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi  wrote:  
> >>> >
> >>> > Hey,
> >>> > I've seen that CQ Master is not passing ovirt-engine for 10 days and 
> >>> > fails on test suite called restore_vm0_networking
> >>> > here's a snap error regarding it:
> >>> >
> >>> > https://pastebin.com/7msEYqKT
> >>> >
> >>> > Link to a sample job with the error:
> >>> >
> >>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/004_basic_sanity.py.junit.xml
> >>> >   
> >>>
> >>> I cannot follow this link because I'm 4 minutes too late
> >>>
> >>> jenkins.ovirt.org uses an invalid security certificate. The
> >>> certificate expired on November 11, 2018, 5:13:25 PM GMT+2. The
> >>> current time is November 11, 2018, 5:17 PM.  
> >>
> >>
> >> Yes, we're looking into that issue now.  
> >
> >
> > Fixed, you should be able to access it now.  
> 
> OST fails during restore_vm0_networking in line 101 of
> 004_basic_sanity.py while comparing
> vm_service.get().status == state
> 
> It seems that instead of reporting back the VM status, Engine set garbage
> "The response content type 'text/html; charset=iso-8859-1' isn't the
> expected XML"
> 

The relevant line in
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/httpd/ssl_access_log/*view*/
seems to be
192.168.201.1 - - [11/Nov/2018:04:27:43 -0500] "GET 
/ovirt-engine/api/vms/26088164-d1a0-4254-a377-5d3c242c8105 HTTP/1.1" 503 299
and I guess the 503 error message is sent in HTML instead of XML.
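For illustration only, a minimal sketch of how such a transient 503 could be tolerated by the test instead of failing on the first non-XML reply. The `get_vm_status` callable and `TransientApiError` are hypothetical stand-ins for the SDK call (`vm_service.get().status`) and the error it raises on an unexpected content type; they are not part of OST or the oVirt SDK:

```python
import time

class TransientApiError(Exception):
    """Stand-in for the SDK error raised when the response is not the expected XML."""

def wait_for_vm_state(get_vm_status, state, timeout=120, interval=3):
    """Poll get_vm_status() until it returns `state`, retrying transient errors.

    A 503 served as text/html would surface here as TransientApiError
    and be retried, rather than aborting the whole suite.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            if get_vm_status() == state:
                return True
        except TransientApiError as err:
            last_error = err  # engine briefly answered 503; keep polling
        time.sleep(interval)
    raise AssertionError(
        "VM did not reach %r (last error: %s)" % (state, last_error))
```

This is only a sketch of the retry idea; the actual fix in the engine would still be needed, since masking 503s can hide real regressions.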

If I run manually
https://gerrit.ovirt.org/#/c/95354/
with latest build of engine-master
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8074/
basic suite seems to be happy:
https://jenkins.ovirt.org/view/oVirt system 
tests/job/ovirt-system-tests_manual/3484/


> I do not know what could cause that, and engine.log does not mention
> it. But it seems like a problem in engine API hence +Martin Perina and
> +Ondra Machacek .
> 
> 
> 
> >  
> >>
> >>
> >>  
> >>>
> >>>  
> >>> >
> >>> > Can some1 have a look at it and help to resolve the issue?
> >>> >
> >>> >
> >>> > ___
> >>> > Infra mailing list -- in...@ovirt.org
> >>> > To unsubscribe send an email to infra-le...@ovirt.org
> >>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>> > oVirt Code of Conduct: 
> >>> > https://www.ovirt.org/community/about/community-guidelines/
> >>> > List Archives: 
> >>> > https://lists.ovirt.org/archives/list/in...@ovirt.org/message/ZQAYWTLZJKGPJ25F33E6ICVDXQDYSKSQ/
> >>> >   
> >>> ___
> >>> Devel mailing list -- devel@ovirt.org
> >>> To unsubscribe send an email to devel-le...@ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>> oVirt Code of Conduct: 
> >>> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives: 
> >>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/R5LOJH73XCLLFOUTKPM5GUCS6PNNKGTE/
> >>>   
> >>
> >>
> >>
> >> --
> >>
> >> Eyal edri
> >>
> >>
> >> MANAGER
> >>
> >> RHV/CNV DevOps
> >>
> >> EMEA VIRTUALIZATION R&D
> >>
> >>
> >> Red Hat EMEA
> >>
> >> TRIED. TESTED. TRUSTED.
> >> phone: +972-9-7692018
> >> irc: eedri (on #tlv #rhev-dev #rhev-integ)  
> >
> >
> >
> > --
> >
> > Eyal edri
> >
> >
> > MANAGER
> >
> > RHV/CNV DevOps
> >
> > EMEA VIRTUALIZATION R&D
> >
> >
> > Red Hat EMEA
> >
> > TRIED. TESTED. TRUSTED.
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)  
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DA6Q5RE5JO3FYIKN2QLKLWMCUBQA2HBX/


[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-12 Thread Dominik Holler
On Mon, 12 Nov 2018 12:29:17 +0100
Martin Perina  wrote:

> On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> 
> > There are currently two issues failing ovirt-engine on CQ ovirt master:
> >
> > 1. edit vm pool is causing failure in different tests. it has a patch 
> > *waiting
> > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> >  
> 
> Merged
> 
> >
> > 2. we have a failure in upgrade suite as well to run vm but this seems to
> > be related to the tests as well:
> > 2018-11-12 05:41:07,831-05 WARN
> > [org.ovirt.engine.core.bll.validator.VirtIoRngValidator] (default task-1)
> > [] Random number source URANDOM is not supported in cluster 'test-cluster'
> > compatibility version 4.0.
> >
> > here is the full error from the upgrade suite failure in run vm:
> > https://pastebin.com/XLHtWGGx
> >
> > Here is the latest failure:
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/8/
> >  
> 
> I will try to take a look later today
> 

I suspect this might be related to 
https://gerrit.ovirt.org/#/c/95377/ , and I am checking in 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3485/console
 , but I have to stop now; if it is not solved by then, I can continue later today.

> >
> >
> > Thanks,
> > Dafna
> >
> >
> >
> >
> > On Mon, Nov 12, 2018 at 9:23 AM Dominik Holler  wrote:
> >  
> >> On Sun, 11 Nov 2018 19:04:40 +0200
> >> Dan Kenigsberg  wrote:
> >>  
> >> > On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri  wrote:  
> >> > >
> >> > >
> >> > >
> >> > > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri  wrote:  
> >> > >>
> >> > >>
> >> > >>
> >> > >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg   
> >> wrote:  
> >> > >>>
> >> > >>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi   
> >> wrote:  
> >> > >>> >
> >> > >>> > Hey,
> >> > >>> > I've seen that CQ Master is not passing ovirt-engine for 10 days  
> >> and fails on test suite called restore_vm0_networking  
> >> > >>> > here's a snap error regarding it:
> >> > >>> >
> >> > >>> > https://pastebin.com/7msEYqKT
> >> > >>> >
> >> > >>> > Link to a sample job with the error:
> >> > >>> >
> >> > >>> >  
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/004_basic_sanity.py.junit.xml
> >>  
> >> > >>>
> >> > >>> I cannot follow this link because I'm 4 minutes too late
> >> > >>>
> >> > >>> jenkins.ovirt.org uses an invalid security certificate. The
> >> > >>> certificate expired on November 11, 2018, 5:13:25 PM GMT+2. The
> >> > >>> current time is November 11, 2018, 5:17 PM.  
> >> > >>
> >> > >>
> >> > >> Yes, we're looking into that issue now.  
> >> > >
> >> > >
> >> > > Fixed, you should be able to access it now.  
> >> >
> >> > OST fails during restore_vm0_networking in line 101 of
> >> > 004_basic_sanity.py while comparing
> >> > vm_service.get().status == state
> >> >
> >> > It seems that instead of reporting back the VM status, Engine set  
> >> garbage  
> >> > "The response content type 'text/html; charset=iso-8859-1' isn't the
> >> > expected XML"
> >> >  
> >>
> >> The relevant line in
> >>
> >> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/httpd/ssl_access_log/*view*/
> >> seems to be
> >> 192.168.201.1 - - [11/Nov/2018:04:27:43 -0500] "GET
> >> /ovirt-engine/api/vms/26088164-d1a0-4254-a377-5d3c242c8105 HTTP/1.1" 503 
> >> 299
> >> and I guess the 503 error message is sent in HTML instead of XML.
> >>
> >> If I run manually
> >> https://gerrit.ovirt.org/#/c/95354/
> >> with latest build of engine-master
> >>
> >> http://jenkins.ovirt.org/job/ovirt-engine_master_b

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-12 Thread Dominik Holler
On Mon, 12 Nov 2018 13:45:54 +0100
Martin Perina  wrote:

> On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler  wrote:
> 
> > On Mon, 12 Nov 2018 12:29:17 +0100
> > Martin Perina  wrote:
> >  
> > > On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> > >  
> > > > There are currently two issues failing ovirt-engine on CQ ovirt master:
> > > >
> > > > 1. edit vm pool is causing failure in different tests. it has a patch  
> > *waiting  
> > > > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> > > >  
> > >
> > > Merged
> > >  
> > > >
> > > > 2. we have a failure in upgrade suite as well to run vm but this seems  
> > to  
> > > > be related to the tests as well:
> > > > 2018-11-12 05:41:07,831-05 WARN
> > > > [org.ovirt.engine.core.bll.validator.VirtIoRngValidator] (default  
> > task-1)  
> > > > [] Random number source URANDOM is not supported in cluster  
> > 'test-cluster'  
> > > > compatibility version 4.0.
> > > >
> > > > here is the full error from the upgrade suite failure in run vm:
> > > > https://pastebin.com/XLHtWGGx
> > > >
> > > > Here is the latest failure:
> > > >  
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/8/
> >   
> > > >  
> > >
> > > I will try to take a look later today
> > >  
> >
> > I have the idea that this might be related to
> > https://gerrit.ovirt.org/#/c/95377/ , and I check in
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3485/console
> > , but I have to stop now, if not solved I can go on later today.
> >  
> 
> OK, both CI and above manual OST job went fine, so I've just merged the
> revert patch. I will take a look at it later in detail, we should really be
> testing 4.3 on master and not 4.2
> 

Ack.

Now
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
is failing on
File 
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
 line 698, in run_vms
api.vms.get(VM0_NAME).start(start_params)
status: 400
reason: Bad Request

2018-11-12 10:06:30,722-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3) 
[b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host 
'lago-basic-suite-master-host-1' ('dbfe1b0c-f940-4dba-8fb1-0cfe5ca7ddfc') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
2018-11-12 10:06:30,722-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3) 
[b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host 
'lago-basic-suite-master-host-0' ('e83a63ca-381e-40db-acb2-65a3e7953e11') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
2018-11-12 10:06:30,723-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-3) [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Can't find VDS to run 
the VM '57a66eff-8cbf-4643-b045-43d4dda80c66' on, so this VM will not be run.

Is this related to
https://gerrit.ovirt.org/#/c/95310/
?
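The log above shows every candidate host being dropped by the 'CPU-Level' scheduling filter, after which no VDS remains and the run fails. A simplified sketch of that filtering step (not the actual SchedulingManager code; the host dicts and numeric levels are made-up values for illustration):

```python
def cpu_level_filter(hosts, required_level):
    """Keep only hosts whose CPU compatibility level covers the cluster's.

    Simplified stand-in for the engine's 'CPU-Level' scheduling filter:
    if every host is filtered out, no VDS is left to run the VM, which
    is exactly the "Can't find VDS to run the VM" error in the log.
    """
    return [h for h in hosts if h["cpu_level"] >= required_level]

hosts = [
    {"name": "lago-basic-suite-master-host-0", "cpu_level": 2},
    {"name": "lago-basic-suite-master-host-1", "cpu_level": 2},
]
# A cluster requiring a higher level leaves no candidates -> scheduling fails.
candidates = cpu_level_filter(hosts, required_level=3)
```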



> >  
> > > >
> > > >
> > > > Thanks,
> > > > Dafna
> > > >
> > > >
> > > >
> > > >
> > > > On Mon, Nov 12, 2018 at 9:23 AM Dominik Holler   
> > wrote:  
> > > >  
> > > >> On Sun, 11 Nov 2018 19:04:40 +0200
> > > >> Dan Kenigsberg  wrote:
> > > >>  
> > > >> > On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri   
> > wrote:  
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri   
> > wrote:  
> > > >> > >>
> > > >> > >>
> > > >> > >>
> > > >> > >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg <  
> > dan...@redhat.com>  
> > > >> wrote:  
> > > >> > >>>
> > > >> > >>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi  
> > > >> > >>>  
> >  
> > > >> wrote:  
> > > >> > >>> >
> > > >> > >>> > 

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-13 Thread Dominik Holler
On Tue, 13 Nov 2018 11:56:37 +0100
Martin Perina  wrote:

> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> 
> > Martin? can you please look at the patch that Dominik sent?
> > We need to resolve this as we have not had an engine build for the last 11
> > days
> >  
> 
> Yesterday I've merged Dominik's revert patch https://gerrit.ovirt.org/95377
> which should switch cluster level back to 4.2. Below mentioned change
> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> right Michal?
> 
> The build mentioned
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> is from yesterday. Are we sure that it was executed only after #95377 was
> merged? I'd like to see the results from latest
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> but unfortunately it already waits more than an hour for available hosts ...
> 




https://gerrit.ovirt.org/#/c/95283/ results in the build 
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
which, used in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
results in run_vms succeeding.

The next merged change, https://gerrit.ovirt.org/#/c/95310/ , results in the build
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
which, used in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
results in run_vms failing with
2018-11-12 17:35:10,109-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
(default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: 
RunVmOnceCommand internal: false. Entities affected :  ID: 
d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role type 
USER
2018-11-12 17:35:10,113-05 DEBUG 
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default 
task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: getVmManager, params: 
[d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed: 4ms
2018-11-12 17:35:10,128-05 DEBUG 
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default 
task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: 
getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260, Up], 
timeElapsed: 7ms
2018-11-12 17:35:10,129-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
[6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
6930b632-5593-4481-bf2a-a1d8b14a583a)
2018-11-12 17:35:10,129-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
[6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
6930b632-5593-4481-bf2a-a1d8b14a583a)
2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to run 
the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not be run.
in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/

Is this helpful for you?
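The procedure above — running the suite against each consecutive build to find the first failing change — is a linear bisection. As a sketch, with a hypothetical `suite_passes` predicate standing in for a manual ovirt-system-tests run:

```python
def first_failing_build(builds, suite_passes):
    """Return the first build for which the suite fails, or None if all pass.

    `builds` is ordered by merge time; `suite_passes` is a hypothetical
    predicate standing in for triggering and checking a manual OST job.
    """
    for build in builds:
        if not suite_passes(build):
            return build
    return None

# Mirrors the observation above: build 8071 passes, 8072 fails,
# so the change merged between them is the suspect.
builds = ["8071", "8072"]
culprit = first_failing_build(builds, lambda b: b == "8071")
```

For long build ranges, `git bisect` on the merged changes would reduce the number of suite runs from linear to logarithmic.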

> 
> > On Mon, Nov 12, 2018 at 3:58 PM Dominik Holler  wrote:
> >  
> >> On Mon, 12 Nov 2018 13:45:54 +0100
> >> Martin Perina  wrote:
> >>  
> >> > On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler   
> >> wrote:  
> >> >  
> >> > > On Mon, 12 Nov 2018 12:29:17 +0100
> >> > > Martin Perina  wrote:
> >> > >  
> >> > > > On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> >> > > >  
> >> > > > > There are currently two issues failing ovirt-engine on CQ ovirt  
> >> master:  
> >> > > > >
> >> > > > > 1. edit vm pool is causing failure in different tests. it has a  
> >> patch  
> >> > > *waiting  
> >> > > > > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> >> > > > >  
> >> > > >
> >> > > > Merged
> >> > > >  
> >> > > > >
> >> > > > > 2. we have a failure in upgrade suite as well to run vm but this  
> >> seems  

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dominik Holler
On Tue, 13 Nov 2018 13:01:09 +0100
Martin Perina  wrote:

> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> wrote:
> 
> >
> >
> > On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> >
> > On Tue, 13 Nov 2018 11:56:37 +0100
> > Martin Perina  wrote:
> >
> > On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> >
> > Martin? can you please look at the patch that Dominik sent?
> > We need to resolve this as we have not had an engine build for the last 11
> > days
> >
> >
> > Yesterday I've merged Dominik's revert patch
> > https://gerrit.ovirt.org/95377
> > which should switch cluster level back to 4.2. Below mentioned change
> > https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> > right Michal?
> >
> > The build mentioned
> >
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > is from yesterday. Are we sure that it was executed only after #95377 was
> > merged? I'd like to see the results from latest
> >
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> > but unfortunately it already waits more than an hour for available hosts
> > ...
> >
> >
> >
> >
> >
> > https://gerrit.ovirt.org/#/c/95283/ results in
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> > which is used in
> >
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> > results in run_vms succeeding.
> >
> > The next merged change
> > https://gerrit.ovirt.org/#/c/95310/ results in
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> > which is used in
> >
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> > results in run_vms failing with
> > 2018-11-12 17:35:10,109-05 INFO
> >  [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> > [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> > internal: false. Entities affected :  ID:
> > d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role
> > type USER
> > 2018-11-12 17:35:10,113-05 DEBUG
> > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed:
> > 4ms
> > 2018-11-12 17:35:10,128-05 DEBUG
> > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> > Up], timeElapsed: 7ms
> > 2018-11-12 17:35:10,129-05 INFO
> >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > 2018-11-12 17:35:10,129-05 INFO
> >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8')
> > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to
> > run the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not
> > be run.
> > in
> >
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/
> >
> > Is this helpful for you?
> >
> >
> >
> > actually, there ire two issues
> > 1) cluster is still 4.3 even after Martin’s revert.
> >  
> 
> https://gerrit.ovirt.org/#/c/95409/ should align cluster level with dc level
> 

This change aligns the cluster level, but
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dominik Holler
On Wed, 14 Nov 2018 09:27:39 +0100
Dominik Holler  wrote:

> On Tue, 13 Nov 2018 13:01:09 +0100
> Martin Perina  wrote:
> 
> > On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> > wrote:
> >   
> > >
> > >
> > > On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> > >
> > > On Tue, 13 Nov 2018 11:56:37 +0100
> > > Martin Perina  wrote:
> > >
> > > On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> > >
> > > Martin? can you please look at the patch that Dominik sent?
> > > We need to resolve this as we have not had an engine build for the last 11
> > > days
> > >
> > >
> > > Yesterday I've merged Dominik's revert patch
> > > https://gerrit.ovirt.org/95377
> > > which should switch cluster level back to 4.2. Below mentioned change
> > > https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> > > right Michal?
> > >
> > > The build mentioned
> > >
> > > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > > is from yesterday. Are we sure that it was executed only after #95377 was
> > > merged? I'd like to see the results from latest
> > >
> > > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> > > but unfortunately it already waits more than an hour for available hosts
> > > ...
> > >
> > >
> > >
> > >
> > >
> > > https://gerrit.ovirt.org/#/c/95283/ results in
> > >
> > > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> > > which is used in
> > >
> > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> > > results in run_vms succeeding.
> > >
> > > The next merged change
> > > https://gerrit.ovirt.org/#/c/95310/ results in
> > >
> > > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> > > which is used in
> > >
> > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> > > results in run_vms failing with
> > > 2018-11-12 17:35:10,109-05 INFO
> > >  [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> > > [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> > > internal: false. Entities affected :  ID:
> > > d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role
> > > type USER
> > > 2018-11-12 17:35:10,113-05 DEBUG
> > > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > > getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed:
> > > 4ms
> > > 2018-11-12 17:35:10,128-05 DEBUG
> > > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > > getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> > > Up], timeElapsed: 7ms
> > > 2018-11-12 17:35:10,129-05 INFO
> > >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > > 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> > > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > > 2018-11-12 17:35:10,129-05 INFO
> > >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > > 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8')
> > > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > > 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> > > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to
> > > run the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not
> > > be run.
> > > in
> > >
> > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-art

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dominik Holler
On Wed, 14 Nov 2018 11:24:10 +0100
Michal Skrivanek  wrote:

> > On 14 Nov 2018, at 10:50, Dominik Holler  wrote:
> > 
> > On Wed, 14 Nov 2018 09:27:39 +0100
> > Dominik Holler  wrote:
> >   
> >> On Tue, 13 Nov 2018 13:01:09 +0100
> >> Martin Perina  wrote:
> >>   
> >>> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> >>> wrote:
> >>>   
> >>>> 
> >>>> 
> >>>> On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> >>>> 
> >>>> On Tue, 13 Nov 2018 11:56:37 +0100
> >>>> Martin Perina  wrote:
> >>>> 
> >>>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> >>>> 
> >>>> Martin? can you please look at the patch that Dominik sent?
> >>>> We need to resolve this as we have not had an engine build for the last 
> >>>> 11
> >>>> days
> >>>> 
> >>>> 
> >>>> Yesterday I've merged Dominik's revert patch
> >>>> https://gerrit.ovirt.org/95377
> >>>> which should switch cluster level back to 4.2. Below mentioned change
> >>>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am 
> >>>> I
> >>>> right Michal?
> >>>> 
> >>>> The build mentioned
> >>>> 
> >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> >>>> is from yesterday. Are we sure that it was executed only after #95377 was
> >>>> merged? I'd like to see the results from latest
> >>>> 
> >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> >>>> but unfortunately it already waits more than an hour for available hosts
> >>>> ...
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> https://gerrit.ovirt.org/#/c/95283/ results in
> >>>> 
> >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> >>>> which is used in
> >>>> 
> >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> >>>> results in run_vms succeeding.
> >>>> 
> >>>> The next merged change
> >>>> https://gerrit.ovirt.org/#/c/95310/ results in
> >>>> 
> >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> >>>> which is used in
> >>>> 
> >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> >>>> results in run_vms failing with
> >>>> 2018-11-12 17:35:10,109-05 INFO
> >>>> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> >>>> [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> >>>> internal: false. Entities affected :  ID:
> >>>> d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with 
> >>>> role
> >>>> type USER
> >>>> 2018-11-12 17:35:10,113-05 DEBUG
> >>>> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> >>>> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> >>>> getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], 
> >>>> timeElapsed:
> >>>> 4ms
> >>>> 2018-11-12 17:35:10,128-05 DEBUG
> >>>> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> >>>> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> >>>> getAllForClusterWithStatus, params: 
> >>>> [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> >>>> Up], timeElapsed: 7ms
> >>>> 2018-11-12 17:35:10,129-05 INFO
> >>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> >>>> [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> >>>> 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> >>>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> >>>> (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> >>>> 2018-11-12 17:35:10,129-05 INFO
> >>>> [org.ovirt.engine.core.b

[ovirt-devel] Re: Failing OST patches

2018-12-04 Thread Dominik Holler
On Thu, 29 Nov 2018 14:48:47 +0200
Eyal Edri  wrote:

> Galit/Dafna,
> Do we know of any existing add host failures that are being fixed/handled?
> Can you help review these errors?
> 

I am able to add hosts to oVirt master if I run
"yum update" after installing the
https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
and before adding the host to Engine.


> On Thu, Nov 29, 2018 at 2:35 PM Kaustav Majumder 
> wrote:
> 
> > Recently I have pushed 2 patches for OST [1][2]; both are failing in CI.
> > Though the failures seem unrelated, I see errors in the "add_hosts" test because
> > the host is not operational within the asserted time. I have added the link to
> > the Jenkins log [3].
> >
> > [1] https://gerrit.ovirt.org/#/c/95327/
> > [2] https://gerrit.ovirt.org/#/c/95347/
> > [3] https://pastebin.com/MVpws4Et
> >
> > Any advice?
> > --
> >
> > KAUSTAV MAJUMDER
> >
> > ASSOCIATE SOFTWARE ENGINEER
> >
> > Red Hat India PVT LTD. 
> >
> > kmajum...@redhat.com  M: 08981884037  IRC: kmajumder
> > 
> > 
> > ___
> > Devel mailing list -- devel@ovirt.org
> > To unsubscribe send an email to devel-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KHLOVT3GJHFH7WVVSFH32S32OZ6JJZRG/
> >  
> 
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VKXD2BS4GHONIHE3SN6MPYDOCOT24HGD/


[ovirt-devel] Re: ovirt-system-tests_compat-4.2-suite-master is failing since November 27

2018-12-10 Thread Dominik Holler
On Mon, 10 Dec 2018 12:02:21 -0500
Greg Sheremeta  wrote:

> On Mon, Dec 10, 2018 at 11:53 AM Sandro Bonazzola 
> wrote:
> 
> > hi, ovirt-system-tests_compat-4.2-suite-master is failing since November
> > 27 with following error:
> >
> > Error: Fault reason is "Operation Failed". Fault detail is "[Bond name 
> > 'bond_fancy0' does not follow naming criteria. For host compatibility 
> > version 4.3 or greater, custom bond names must begin with the prefix 'bond' 
> >  followed by 'a-z', 'A-Z', '0-9', or '_' characters. For host compatibility 
> > version 4.2 and lower, custom bond names must begin with the prefix 'bond' 
> > followed by a number.]". HTTP response code is 400.
> >
> >
> > I think that if the scope of the test is to check that 4.2 still works
> > with 4.3 engine, bond_fancy0 works for 4.3 but it's clearly not good for
> > 4.2 and the test needs a fix.
> >  
> 

Gal, what is the most correct way to fix this?
Should the BOND_NAME be derived from versioning.cluster_version()?
Or should we add an extra test, guarded by require_version(4,3), which
uses the fancy bond name?

> Not sure about the test, but yes, it needs to be bond[\d+] for 4.2.
> https://gerrit.ovirt.org/#/c/95163/
> 

Yes.
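
The two naming rules quoted in the error message above can be sketched as
regular expressions. Note the patterns below are inferred from the message
text, not taken from the engine source:

```python
import re

# 4.3+: 'bond' followed by 'a-z', 'A-Z', '0-9' or '_' characters
BOND_NAME_4_3 = re.compile(r'^bond[a-zA-Z0-9_]+$')
# 4.2 and lower: 'bond' followed by a number
BOND_NAME_4_2 = re.compile(r'^bond\d+$')

assert BOND_NAME_4_3.match('bond_fancy0')       # accepted on cluster level 4.3
assert not BOND_NAME_4_2.match('bond_fancy0')   # rejected on 4.2, hence the failure
assert BOND_NAME_4_2.match('bond0')             # valid on both levels
```

A version-aware test could pick 'bond_fancy0' only when the cluster version
is at least 4.3 and fall back to a numeric bond name otherwise.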

> 
> >
> > --
> >
> > SANDRO BONAZZOLA
> >
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> >
> > Red Hat EMEA 
> >
> > sbona...@redhat.com
> > 
> > ___
> > Devel mailing list -- devel@ovirt.org
> > To unsubscribe send an email to devel-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PFJ2Y2PD4SIGLV4BSYLXOQANO5OJTQGT/
> >  
> 
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DX6H5RUQNIO53UUXTSGTVHDH6LNDAULC/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (ovirt-provider-ovn) ] [ 18-01-2019 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2019-01-18 Thread Dominik Holler
On Fri, 18 Jan 2019 11:13:25 +
Dafna Ron  wrote:

> Hi,
> 
> We have a failure in ovn tests in branch 4.2. Marcin/Miguel, can you please
> take a look?
> 

https://gerrit.ovirt.org/#/c/97072/ is ready to be merged.

> Jira opened: https://ovirt-jira.atlassian.net/browse/OVIRT-2655
> 
> Link and headline of suspected patches:
> 
> https://gerrit.ovirt.org/#/c/96926/ - ip_version is mandatory on POSTs
> 
> Link to Job:
> 
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3742/
> 
> Link to all logs:
> 
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3742/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-098_ovirt_provider_ovn.py/
> 
> (Relevant) error snippet from the log:
> 
> 
> 
> 2019-01-18 00:14:30,591 root Starting server
> 2019-01-18 00:14:30,592 root Version: 1.2.19-0.20190117180529.gite1d4195
> 2019-01-18 00:14:30,592 root Build date: 20190117180529
> 2019-01-18 00:14:30,592 root Githash: e1d4195
> 2019-01-18 00:20:39,394 ovsdbapp.backend.ovs_idl.vlog ssl:127.0.0.1:6641:
> no response to inactivity probe after 5.01 seconds, disconnecting
> 2019-01-18 00:45:01,435 root From: :::192.168.200.1:49008 Request: POST
> /v2.0/subnets/
> 2019-01-18 00:45:01,435 root Request body:
> {"subnet": {"network_id": "99c260ec-dad4-40b9-8732-df32dd54bd00",
> "dns_nameservers": ["8.8.8.8"], "cidr": "1.1.1.0/24", "gateway_ip":
> "1.1.1.1", "name": "subnet_1"}}
> 2019-01-18 00:45:01,435 root Missing 'ip_version' attribute
> Traceback (most recent call last):
>   File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 134,
> in _handle_request
> method, path_parts, content
>   File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
> 175, in handle_request
> return self.call_response_handler(handler, content, parameters)
>   File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, in
> call_response_handler
> return response_handler(ovn_north, content, parameters)
>   File "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", line
> 154, in post_subnets
> subnet = nb_db.add_subnet(received_subnet)
>   File "/usr/share/ovirt-provider-ovn/neutron/neutron_api_mappers.py", line
> 74, in wrapper
> validate_rest_input(rest_data)
>   File "/usr/share/ovirt-provider-ovn/neutron/neutron_api_mappers.py", line
> 596, in validate_add_rest_input
> raise BadRequestError('Missing \'ip_version\' attribute')
> BadRequestError: Missing 'ip_version' attribute
> 
> 
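
For context, the validation seen in the traceback can be sketched like this
(a simplified, hypothetical stand-in; the real check lives in
ovirt-provider-ovn's neutron_api_mappers.validate_add_rest_input):

```python
class BadRequestError(Exception):
    pass

def validate_add_rest_input(subnet):
    # reject subnet POSTs that lack the now-mandatory 'ip_version'
    if 'ip_version' not in subnet:
        raise BadRequestError("Missing 'ip_version' attribute")

# the request body from the log above, which lacks 'ip_version'
subnet = {"network_id": "99c260ec-dad4-40b9-8732-df32dd54bd00",
          "cidr": "1.1.1.0/24", "gateway_ip": "1.1.1.1", "name": "subnet_1"}
try:
    validate_add_rest_input(subnet)
    raised = False
except BadRequestError:
    raised = True
assert raised

subnet["ip_version"] = 4      # what a fixed client would send
validate_add_rest_input(subnet)
```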
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XCOOAPHNOYLMPMBE4UK2QLPGZHLHUBNM/


[ovirt-devel] Re: Proposing Yedidyah Bar David (didi) as a integration maintainer for oVirt Engine

2019-02-05 Thread Dominik Holler
+1 from me, too.

On Tue, 5 Feb 2019 09:41:04 +0100
Sandro Bonazzola  wrote:

> Hi,
> Didi is maintaining OTOPI and wrote with me most of the code in
> engine-setup back into oVirt 3.3 release. He maintained that code across
> the following releases along with the backup and restore tool and the
> rename tool. He's the most active contributor within integration team for
> ovirt-engine related code.
> 
> I would like to propose Didi as integration maintainer for oVirt Engine.
> 
> Thanks,
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LMWXMGZ23IIHJCHEQMY7WXDVTNCQIDUA/


[ovirt-devel] Password for database in ovirt-system-tests/basic_suite_3.6

2016-10-06 Thread Dominik Holler
Hi,
I want to have a look at the database in
ovirt-system-tests/basic_suite_3.6, but unfortunately I do not know the
password for the database. I already tried various ones with:
[root@lago_basic_suite_3_6_engine ~]# psql -U engine -p 5432 -h 127.0.0.1

Can anyone give me a hint how to get the password for the database?

Thanks,
Dominik
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Password for database in ovirt-system-tests/basic_suite_3.6

2016-10-06 Thread Dominik Holler
On Thu, Oct 06, 2016 at 10:54:53AM +0200, Ondra Machacek wrote:
> On 10/06/2016 10:33 AM, Dominik Holler wrote:
> > Hi,
> > I want to have a look at the database in
> > ovirt-system-tests/basic_suite_3.6, but unfortunately I do not know the
> > password for the database. I tried already various ones by:
> > [root@lago_basic_suite_3_6_engine ~]# psql -U engine -p 5432 -h 127.0.0.1
> > 
> > Can anyone give me a hint how to get the password for the database?
> 
> It's in file /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
> 
> $ grep PASSWORD /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
> ENGINE_DB_PASSWORD=""
>

Thanks, I can confirm this works.
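
The lookup above can be scripted; a minimal sketch that parses the
KEY="value" lines of a 10-setup-database.conf-style file (the sample
content and the resulting command are illustrative):

```python
import re

def read_setup_conf(text):
    """Return a dict of KEY -> value from KEY="value" lines."""
    return dict(re.findall(r'^(\w+)="([^"]*)"$', text, flags=re.MULTILINE))

sample = 'ENGINE_DB_USER="engine"\nENGINE_DB_PASSWORD="s3cret"\n'
conf = read_setup_conf(sample)
assert conf['ENGINE_DB_PASSWORD'] == 's3cret'

# build the psql invocation shown earlier in the thread
cmd = 'PGPASSWORD=%s psql -U %s -p 5432 -h 127.0.0.1 engine' % (
    conf['ENGINE_DB_PASSWORD'], conf['ENGINE_DB_USER'])
assert 'psql -U engine' in cmd
```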

> > 
> > Thanks,
> > Dominik
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> > 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] engine-setup: ***L:ERROR Internal error: No module named dwh

2016-10-12 Thread Dominik Holler
Hi,
I got the same error. Are there any findings about this by now?
Dominik

On Mon, 10 Oct 2016 15:48:08 -0400 (EDT)
Jakub Niedermertl  wrote:

> Hi all,
> 
> does anyone else experience following error of `engine-setup`?
> 
> $ ~/target/bin/engine-setup
> ***L:ERROR Internal error: No module named dwh
> 
> I have a suspicion it might be related to commit '221c7ed packaging:
> setup: Remove constants duplication'.
> 
> Jakub
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [OST Failure Report] [oVirt master] [09.02.2017] [test-repo_ovirt_experimental_master]

2017-02-21 Thread Dominik Holler
A deep analysis of the logfiles gives details about the unexpected
behavior, but unfortunately I cannot yet identify the fault causing the
unexpected behavior.

To find this fault, the help of someone familiar with
org.ovirt.vdsm.jsonrpc.client.JsonRpcClient is needed.

In the failing test "assign_labeled_network" a (labeled) network is
assigned to the cluster. For this reason the network has to be added to
the hosts. After that, the test "assign_labeled_network" checks whether the
engine acknowledges that the hosts are in the labeled network. This
execution of the test failed because this acknowledgment from the engine
was still missing after 180 seconds [3].

There are two hosts lago-basic-suite-master-host0 and
lago-basic-suite-master-host1 in the scenario.
lago-basic-suite-master-host1 fails and  
lago-basic-suite-master-host0 succeeds, so only 
lago-basic-suite-master-host1 is analyzed below.

Please find here the most relevant steps causing this error:
1. The engine sends Host.setupNetworks to the hosts in 
   line 40279 - 40295 in [1] with
   "id":"02298344-165f-47e4-9ea4-7c17a55d37f8".
2. The host executes the Host.setupNetworks RPC call successfully in
   line 1286 in [2].
3. The engine receives the acknowledgment of the successful execution
   in line 40716 and 40717 in [1].
4. The error occurs in line 40718: 
   '[org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) []
   Not able to update response for
   "02298344-165f-47e4-9ea4-7c17a55d37f8"'. This means the engine can
   not process the acknowledgment of the successful execution.
5. The command HostSetupNetworksVDS is aborted.
   So Host.getCapabilities is skipped and the engine database is not
   updated with the new network configuration of the host.
6. Since the test script relies on the information from the database
   about the host network configuration, it does not see that
   Host.setupNetworks was successfully executed and stops with the
   error "False != True after 180 seconds" [3].
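
The error message in step 6 comes from a poll-until-timeout assertion in
the test library; a minimal sketch of that pattern (the helper name is
illustrative, not necessarily OST's actual testlib API):

```python
import time

def assert_true_within(predicate, timeout, interval=0.01):
    """Poll predicate until it returns True or the timeout expires."""
    end = time.monotonic() + timeout
    while time.monotonic() < end:
        if predicate():
            return
        time.sleep(interval)
    raise AssertionError('False != True after %s seconds' % timeout)

# succeeds immediately when the condition already holds
assert_true_within(lambda: True, timeout=1)
```

The test gives up after 180 seconds because the engine database was never
updated, not because Host.setupNetworks failed on the host.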

So the fault happens before or in step 4 and is around the jsonrpc
communication.

It is an open action item to pinpoint the exact location of the fault.
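
One way to narrow such a fault down is to correlate the JSON-RPC request id
across the engine log; a small sketch (hypothetical helper, using the id
from step 1 with abbreviated log lines):

```python
def trace_request(log_lines, request_id):
    """Return all log lines mentioning the given JSON-RPC request id."""
    return [line for line in log_lines if request_id in line]

engine_log = [
    'INFO  ... SetupNetworksVDSCommand ... "id":"02298344-165f-47e4-9ea4-7c17a55d37f8"',
    'ERROR ... [JsonRpcClient] Not able to update response for '
    '"02298344-165f-47e4-9ea4-7c17a55d37f8"',
    'INFO  ... unrelated line',
]
hits = trace_request(engine_log, '02298344-165f-47e4-9ea4-7c17a55d37f8')
assert len(hits) == 2
assert any('Not able to update response' in line for line in hits)
```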



[1]
  
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-005_network_by_label.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log

[2]
  
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-005_network_by_label.py/lago-basic-suite-master-host1/_var_log/vdsm/vdsm.log

[3]
  
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/testReport/junit/(root)/005_network_by_label/assign_labeled_network/



On Thu, 9 Feb 2017 14:52:52 +0200
Shlomo Ben David  wrote:

> Hi,
> 
> 
> *Test failed:* [test-repo_ovirt_experimental_master]
> 
> *Link to suspected patches:* n/a
> 
> *Link to Job:*
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217
> 
> *Link to all logs:*
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-005_network_by_label.py/
> 
> *Error snippet from the log: *
> 
> 
> 
> ifup/VLAN100_Network::ERROR::2017-02-09
> 06:21:15,236::concurrent::189::root::(run) FINISH thread
>  failed
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line
> 185, in run ret = func(*args, **kwargs)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
> line 949, in _exec_ifup _exec_ifup_by_name(iface.name, cgroup)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
> line 935, in _exec_ifup_by_name raise
> ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
> ConfigNetworkError: (29, 'Determining IPv6 information for
> VLAN100_Network... failed.')
> 
> 
> 
> Best Regards,
> 
> Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
> RHCSA | RHCVA | RHCE
> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
> 
> OPEN SOURCE - 1 4 011 && 011 4 1

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] problem about setting up ovirt-engine java IDE developemnt environment under centos 7

2017-04-28 Thread Dominik Holler
Please find my comments below.

On Thu, 27 Apr 2017 15:54:46 +0800
yuening  wrote:

> 
>   After installing CentOS 7 (CentOS-7-x86_64-DVD-1511.iso) with a
> Minimal installation, I followed
> 
> http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/
>  
> (I use ovirt-engine 4.1 version)
> 
> I can compile and run ovirt-engine on CentOS 7, and the ovirt-engine web page
> can be accessed normally. Moreover, after installing the GNOME desktop
> on CentOS 7 and setting up a Java IDE development environment (I use IntelliJ
> IDEA-IC-171.4073.35 Community Edition), I ran into some questions about
> using the IntelliJ IDEA IDE to debug ovirt-engine,

There are two different ways to debug ovirt-engine.
If you want to debug the backend, please follow the instructions from
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob_plain;f=README.adoc
and use port 8787 for debugging.

> the following is my
> problem description:
> 
> I  refer  the following two links to build the developement
> environment
> 
> https://www.ovirt.org/develop/developer-guide/engine/building/ide/
> 
>https://www.ovirt.org/develop/developer-guide/debugfrontend/
> 

If you want to debug the frontend, try firefox version 26 as browser.

>   Firstly, in the ovirt-engine source directory, after making the
> following changes, I can attach to port 8000 from the IntelliJ IDEA
> IDE, but debugging cannot be started.
> 


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST failing in 004_basic_sanity add_filter_parameter

2017-06-28 Thread Dominik Holler
I will take care of this. Is there a broken Jenkins build available?

On Wed, 28 Jun 2017 13:14:17 +0300
Yaniv Kaul  wrote:

> On Wed, Jun 28, 2017 at 12:14 PM, Milan Zamazal 
> wrote:
> 
> > Daniel Belenky  writes:
> >  
> > > Can you please provide more logs? lago.log and the test_logs dir
> > > will assist to get down to the issue  
> >
> > Hi, I've hit the problem too.  It seems to be as simple as that the
> > given method is not defined, we don't know what's missing.  Here is
> > the traceback:
> >
> >   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> > testMethod()
> >   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
> > runTest
> > self.test(*self.arg)
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> > line 129, in wrapped_test
> > test()
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> > line 59, in wrapper
> > return func(get_test_prefix(), *args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> > line 78, in wrapper
> > prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
> >   File "/home/pdm/ovirt/lago/ovirt-system-tests/basic-suite-
> > master/test-scenarios/004_basic_sanity.py", line 414, in
> > add_filter_parameter
> > ovirt_api4.system_service())
> >   File "/home/pdm/ovirt/lago/ovirt-system-tests/basic-suite-
> > master/test-scenarios/004_basic_sanity.py", line 74, in
> > _get_network_fiter_parameters_service
> > return nics_service.nic_service(id=nic.id)\
> > AttributeError: 'VmNicService' object has no attribute
> > 'network_filter_parameters_service'
> >  
> 
> Dominik, can you please take a look?
> Y.
> 
> 
> >
> > I'm attaching lago.log, I'll send you test_logs privately not to
> > pollute the mailing list.
> >
> >
> >  
> > > On Tue, Jun 27, 2017 at 6:16 PM, Valentina Makarova <
> > makarovav...@gmail.com>
> > > wrote:
> > >  
> > >> Hello!
> > >>
> > >> After pull last master branch of ovirt_system_tests 004 test
> > >> fails in add_filter_parameter  in basic-suite-master with error:
> > >>
> > >> AttributeError: 'VmNicService' object has no attribute
> > >> 'network_filter_parameters_service'
> > >>
> > >> After first running fail I updated all yum packages using yum
> > >> upgrade, updated lago, lago-ovirt, ovirt-sdk-python
> > >> using pip install --upgrade. And I have next version of this
> > >> packages: lago (0.39.0), lago-ovirt (0.41.0)
> > >> ovirt-engine-sdk-python (4.1.5)
> > >> But the error is still open. (I run ./run_suite without -s, and it
> > >> should update internal repo)
> > >>
> > >> Do I forget to update something else?
> > >>
> > >> Sincerely, Valentina Makarova
> > >>
> > >> ___
> > >> Devel mailing list
> > >> Devel@ovirt.org
> > >> http://lists.ovirt.org/mailman/listinfo/devel
> > >>  
> >
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >  

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] OST failing in 004_basic_sanity add_filter_parameter

2017-06-28 Thread Dominik Holler
On Wed, 28 Jun 2017 12:21:29 +0200
Juan Hernández  wrote:

> On 06/28/2017 11:14 AM, Milan Zamazal wrote:
> > Daniel Belenky  writes:
> >   
> >> Can you please provide more logs? lago.log and the test_logs dir
> >> will assist to get down to the issue  
> > 
> > Hi, I've hit the problem too.  It seems to be as simple as that the
> > given method is not defined, we don't know what's missing.  Here is
> > the traceback:
> > 
> >   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> > testMethod()
> >   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197,
> > in runTest self.test(*self.arg)
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> > line 129, in wrapped_test test()
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> > line 59, in wrapper return func(get_test_prefix(), *args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
> > line 78, in wrapper prefix.virt_env.engine_vm().get_api(api_ver=4),
> > *args, **kwargs File
> > "/home/pdm/ovirt/lago/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
> > line 414, in add_filter_parameter ovirt_api4.system_service()) File
> > "/home/pdm/ovirt/lago/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
> > line 74, in _get_network_fiter_parameters_service return
> > nics_service.nic_service(id=nic.id)\ AttributeError: 'VmNicService'
> > object has no attribute 'network_filter_parameters_service'
> > 
> > I'm attaching lago.log, I'll send you test_logs privately not to
> > pollute the mailing list.
> >   
> 
> The network filter parameters concept was added in version 4.2.9 of
> the specification of the API. The python SDK is currently using
> version 4.2.6 of that specification, so that method will simply not
> be there. We need to update the SDK to use the newer version of the
> model, and then you need to use that new version of the SDK.
> 

The change Juan refers to was just a renaming related to the network
filter parameters. The network filter parameters concept is available
in python-ovirt-engine-sdk4 with version 4.2.

The test requires python-ovirt-engine-sdk4 version 4.2 to be
installed.

python-ovirt-engine-sdk4 version > 4.2 is available in the
ovirt-master-snapshot repository, which could be added by installing
http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm .
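
A suite could fail fast with a clearer message by gating the test on the
SDK version; a minimal sketch (the helper names are illustrative, not
OST's API):

```python
def parse_version(version):
    """'4.2.0' -> (4, 2, 0), for tuple-wise comparison."""
    return tuple(int(part) for part in version.split('.'))

def sdk_has_network_filter_parameters(sdk_version):
    # the concept is available since python-ovirt-engine-sdk4 4.2,
    # per the reply above
    return parse_version(sdk_version) >= (4, 2)

assert sdk_has_network_filter_parameters('4.2.0')
assert not sdk_has_network_filter_parameters('4.1.5')   # the reporter's version
```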

> > 
> > 
> >   
> >> On Tue, Jun 27, 2017 at 6:16 PM, Valentina Makarova
> >>  wrote:
> >>  
> >>> Hello!
> >>>
> >>> After pull last master branch of ovirt_system_tests 004 test fails
> >>> in add_filter_parameter  in basic-suite-master with error:
> >>>
> >>> AttributeError: 'VmNicService' object has no attribute
> >>> 'network_filter_parameters_service'
> >>>
> >>> After first running fail I updated all yum packages using yum
> >>> upgrade, updated lago, lago-ovirt, ovirt-sdk-python
> >>> using pip install --upgrade. And I have next version of this
> >>> packages: lago (0.39.0), lago-ovirt (0.41.0)
> >>> ovirt-engine-sdk-python (4.1.5)
> >>> But the error is still open. (I run ./run_suite without -s, and it
> >>> should update internal repo)
> >>>
> >>> Do I forget to update something else?
> >>>
> >>> Sincerely, Valentina Makarova
> >>>
> >>> ___
> >>> Devel mailing list
> >>> Devel@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/devel
> 
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ ovirt-devel ] [ OST Failure Report ] [ oVirt Master ] [ 002_bootstrap ] [ 17/08/17 ]

2017-08-17 Thread Dominik Holler
From my point of view, the snippet from the logs does not point to the
reason why the test failed.


On Thu, 17 Aug 2017 12:26:23 +0300
Daniel Belenky  wrote:

> Failed test: basic_suite_master/002_bootstrap
> 
> Version: oVirt master
> 
> Link to failed job (Jenkins): ovirt-master_change-queue-tester/1817/
> 
> 
> Link to logs (Jenkins): link
> 
> 
> Suspected patch: Gerrit 80481/10
> 
> 
> 
> Error snippet from logs:
> 
> *From host0*
> 
> MainThread::DEBUG::2017-08-17
> 05:03:20,501::cmd::63::root::(exec_sync_bytes) FAILED:  = '';
>  = 1
> MainThread::ERROR::2017-08-17
> 05:03:20,502::initializer::53::root::(_lldp_init) Failed to enable
> LLDP on eth0
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/network/initializer.py",
> line 51, in _lldp_init
> Lldp.enable_lldp_on_iface(device)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/lldp/lldpad.py",
> line 30, in enable_lldp_on_iface
> lldptool.enable_lldp_on_iface(iface, rx_only)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/lldpad/lldptool.py",
> line 46, in enable_lldp_on_iface raise EnableLldpError(rc, out, err,
> iface) EnableLldpError: (1,
> "timeout\n'M0001C304000c04ethbadminStatus0002rx' command
> timed out.\n", '', 'eth0')
> 
> 

This error is expected [1] and will be fixed in RHEL 7.4.1 [2].
This error just blocks collecting LLDP information and should not
influence anything else.

[1]
  https://bugzilla.redhat.com/show_bug.cgi?id=1472722

[2]
  https://bugzilla.redhat.com/show_bug.cgi?id=1479767



___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-08-30 ] [add_hosts]

2017-08-30 Thread Dominik Holler
On Wed, 30 Aug 2017 14:18:49 +0300
Dan Kenigsberg  wrote:

> On Wed, Aug 30, 2017 at 1:40 PM, Barak Korren 
> wrote:
> > Test failed: [ 002_bootstrap.add_hosts ]
> >
> > Link to suspected patches:
> >
> > We suspect this is due to change to LLDPAD in upstream CentOS repos.
> > We can't tell the exact point it was introduced because other
> > ovirt-engine regressions introduced too much noise into the system.
> >
> > It also seems that the failure is not 100% reproducible since we have
> > runs that do not encounter it.
> >
> > Link to Job:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151
> >
> > (This job checked a specific patch to ovirt-log-collector but
> > failure seems unrelated, and also happens on vanilla 'tested' repo
> > right now).
> >
> > Link to all logs:
> > VDSM logs seem to be most relevant here:
> >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/vdsm/
> >
> > Error snippet from log:
> >
> > - From VDSM logs (note it exits very quickly):
> >
> > 
> >
> > 2017-08-30 05:55:52,208-0400 INFO  (MainThread) [vds] (PID: 4594) I
> > am the actual vdsm 4.20.2-133.git0ce4485.el7.centos
> > lago-basic-suite-master-host-0 (3.10.0-514.2.2.el7.x86_64)
> > (vdsmd:148) 2017-08-30 05:55:52,209-0400 INFO  (MainThread) [vds]
> > VDSM will run with cpu affinity: frozenset([1]) (vdsmd:254)
> > 2017-08-30 05:55:52,253-0400 INFO  (MainThread) [storage.check]
> > Starting check service (check:92)
> > 2017-08-30 05:55:52,257-0400 INFO  (MainThread) [storage.Dispatcher]
> > Starting StorageDispatcher... (dispatcher:48)
> > 2017-08-30 05:55:52,257-0400 INFO  (check/loop) [storage.asyncevent]
> > Starting 
> > (asyncevent:125)
> > 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [vdsm.api] START
> > registerDomainStateChangeCallback(callbackFunc= > object at 0x2b47b50>) from=internal,
> > task_id=2cebe9ef-358e-47d7-81a6-8c4b54b9cd6d (api:46)
> > 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [vdsm.api] FINISH
> > registerDomainStateChangeCallback return=None from=internal,
> > task_id=2cebe9ef-358e-47d7-81a6-8c4b54b9cd6d (api:52)
> > 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [MOM] Preparing MOM
> > interface (momIF:53)
> > 2017-08-30 05:55:52,289-0400 INFO  (MainThread) [MOM] Using named
> > unix socket /var/run/vdsm/mom-vdsm.sock (momIF:62)
> > 2017-08-30 05:55:52,289-0400 INFO  (MainThread) [root] Unregistering
> > all secrets (secret:91)
> > 2017-08-30 05:55:52,307-0400 INFO  (MainThread) [vds] Setting
> > channels' timeout to 30 seconds. (vmchannels:221)
> > 2017-08-30 05:55:52,309-0400 INFO  (vmrecovery) [vds] recovery:
> > completed in 0s (clientIF:516)
> > 2017-08-30 05:55:52,310-0400 INFO  (MainThread)
> > [vds.MultiProtocolAcceptor] Listening at :::54321
> > (protocoldetector:196)
> > 2017-08-30 05:55:52,496-0400 INFO  (http) [vds] Server running
> > (http:58) 2017-08-30 05:55:52,742-0400 INFO  (periodic/1)
> > [vdsm.api] START repoStats(domains=()) from=internal,
> > task_id=85768015-8ecb-48e3-9307-f671bfc33c65 (api:46)
> > 2017-08-30 05:55:52,743-0400 INFO  (periodic/1) [vdsm.api] FINISH
> > repoStats return={} from=internal,
> > task_id=85768015-8ecb-48e3-9307-f671bfc33c65 (api:52)
> > 2017-08-30 05:55:52,743-0400 WARN  (periodic/1) [throttled] MOM not
> > available. (throttledlog:103)
> > 2017-08-30 05:55:52,744-0400 WARN  (periodic/1) [throttled] MOM not
> > available, KSM stats will be missing. (throttledlog:103)
> > 2017-08-30 05:55:55,043-0400 INFO  (MainThread) [vds] Received
> > signal 15, shutting down (vdsmd:67)
> > 2017-08-30 05:55:55,045-0400 INFO  (MainThread)
> > [jsonrpc.JsonRpcServer] Stopping JsonRPC Server (__init__:759)
> > 2017-08-30 05:55:55,049-0400 INFO  (MainThread) [vds] Stopping http
> > server (http:79)
> > 2017-08-30 05:55:55,049-0400 INFO  (http) [vds] Server stopped
> > (http:69) 2017-08-30 05:55:55,050-0400 INFO  (MainThread) [root]
> > Unregistering all secrets (secret:91)
> > 2017-08-30 05:55:55,052-0400 INFO  (MainThread) [vdsm.api] START
> > prepareForShutdown(options=None) from=internal,
> > task_id=ffde5caa-fa44-49ab-bdd1-df81519680a3 (api:46)
> > 2017-08-30 05:55:55,089-0400 INFO  (MainThread) [storage.Monitor]
> > Shutting down domain monitors (monitor:222)
> > 2017-08-30 05:55:55,090-0400 INFO  (MainThread) [storage.check]
> > Stopping check service (check:105)
> > 2017-08-30 05:55:55,090-0400 INFO  (check/loop) [storage.asyncevent]
> > Stopping 
> > (asyncevent:220)
> > 2017-08-30 05:55:55,090-0400 INFO  (MainThread) [vdsm.api] FINISH
> > prepareForShutdown return=None from=internal,
> > task_id=ffde5caa-fa44-49ab-bdd1-df81519680a3 (api:52)
> > 2017-08-30 05:55:55,091-0400 INFO  (MainThread) [vds] Stopping
> > threads (vdsmd:159)
> > 2017-08-30 05:55:55,091-0400 INFO  (MainThread) [vds] Exiting
> > (vdsmd:170)
> >
> > 
> >
> >
> > - From SuperVDSM logs:
> >
> > 
> >
> > MainThre

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-08-30 ] [add_hosts]

2017-08-30 Thread Dominik Holler
On Wed, 30 Aug 2017 13:31:38 +0200
Dominik Holler  wrote:

> On Wed, 30 Aug 2017 14:18:49 +0300
> Dan Kenigsberg  wrote:
> 
> > On Wed, Aug 30, 2017 at 1:40 PM, Barak Korren 
> > wrote:  
> > > Test failed: [ 002_bootstrap.add_hosts ]
> > >
> > > Link to suspected patches:
> > >
> > > We suspect this is due to a change to LLDPAD in upstream CentOS
> > > repos. We can't tell the exact point it was introduced because
> > > other ovirt-engine regressions introduced too much noise into the
> > > system.
> > >
> > > It also seems that the failure is not 100% reproducible, since we
> > > have runs that do not encounter it.
> > >
> > > Link to Job:
> > > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151
> > >
> > > (This job checked a specific patch to ovirt-log-collector but
> > > the failure seems unrelated, and also happens on the vanilla 'tested' repo
> > > right now).
> > >
> > > Link to all logs:
> > > VDSM logs seem to be most relevant here:
> > >
> > > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/vdsm/
> > >
> > > Error snippet from log:
> > >
> > > - From VDSM logs (note it exits very quickly):
> > >
> > > 
> > >
> > > 2017-08-30 05:55:52,208-0400 INFO  (MainThread) [vds] (PID: 4594)
> > > I am the actual vdsm 4.20.2-133.git0ce4485.el7.centos
> > > lago-basic-suite-master-host-0 (3.10.0-514.2.2.el7.x86_64)
> > > (vdsmd:148) 2017-08-30 05:55:52,209-0400 INFO  (MainThread) [vds]
> > > VDSM will run with cpu affinity: frozenset([1]) (vdsmd:254)
> > > 2017-08-30 05:55:52,253-0400 INFO  (MainThread) [storage.check]
> > > Starting check service (check:92)
> > > 2017-08-30 05:55:52,257-0400 INFO  (MainThread)
> > > [storage.Dispatcher] Starting StorageDispatcher... (dispatcher:48)
> > > 2017-08-30 05:55:52,257-0400 INFO  (check/loop)
> > > [storage.asyncevent] Starting  > > closed=False at 0x45317520> (asyncevent:125)
> > > 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [vdsm.api] START
> > > registerDomainStateChangeCallback(callbackFunc= > > object at 0x2b47b50>) from=internal,
> > > task_id=2cebe9ef-358e-47d7-81a6-8c4b54b9cd6d (api:46)
> > > 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [vdsm.api] FINISH
> > > registerDomainStateChangeCallback return=None from=internal,
> > > task_id=2cebe9ef-358e-47d7-81a6-8c4b54b9cd6d (api:52)
> > > 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [MOM] Preparing
> > > MOM interface (momIF:53)
> > > 2017-08-30 05:55:52,289-0400 INFO  (MainThread) [MOM] Using named
> > > unix socket /var/run/vdsm/mom-vdsm.sock (momIF:62)
> > > 2017-08-30 05:55:52,289-0400 INFO  (MainThread) [root]
> > > Unregistering all secrets (secret:91)
> > > 2017-08-30 05:55:52,307-0400 INFO  (MainThread) [vds] Setting
> > > channels' timeout to 30 seconds. (vmchannels:221)
> > > 2017-08-30 05:55:52,309-0400 INFO  (vmrecovery) [vds] recovery:
> > > completed in 0s (clientIF:516)
> > > 2017-08-30 05:55:52,310-0400 INFO  (MainThread)
> > > [vds.MultiProtocolAcceptor] Listening at :::54321
> > > (protocoldetector:196)
> > > 2017-08-30 05:55:52,496-0400 INFO  (http) [vds] Server running
> > > (http:58) 2017-08-30 05:55:52,742-0400 INFO  (periodic/1)
> > > [vdsm.api] START repoStats(domains=()) from=internal,
> > > task_id=85768015-8ecb-48e3-9307-f671bfc33c65 (api:46)
> > > 2017-08-30 05:55:52,743-0400 INFO  (periodic/1) [vdsm.api] FINISH
> > > repoStats return={} from=internal,
> > > task_id=85768015-8ecb-48e3-9307-f671bfc33c65 (api:52)
> > > 2017-08-30 05:55:52,743-0400 WARN  (periodic/1) [throttled] MOM
> > > not available. (throttledlog:103)
> > > 2017-08-30 05:55:52,744-0400 WARN  (periodic/1) [throttled] MOM
> > > not available, KSM stats will be missing. (throttledlog:103)
> > > 2017-08-30 05:55:55,043-0400 INFO  (MainThread) [vds] Received
> > > signal 15, shutting down (vdsmd:67)
> > > 2017-08-30 05:55:55,045-0400 INFO  (MainThread)
> > > [jsonrpc.JsonRpcServer] Stopping JsonRPC Server (__init__:759)
> > > 2017-08-30 05:55:55,049-0400 INFO  (MainThread) [vds] Stopping
> > > http server (http:79)
> > > 2017-08-30 05:55:55,049-0400 INFO  (http) [vds] Server stopped
> > > (http

[ovirt-devel] Re: FYI - Many nightly suites, including Network, HE and HC are failing since 2-3 days ago

2019-03-25 Thread Dominik Holler
This issue seems to be solved now, please let me know if not.

The build
https://jenkins.ovirt.org/view/oVirt system tests/job/ovirt-system-tests_manual/4408/
with ovirt-engine d0a215d862eb819f0bbdd51fed012f9b972c1bdf
which includes Ahmed's commit a236c90d54652503d43d8315582effb74050d22e
succeeded.
Also the build
https://jenkins.ovirt.org/view/oVirt system tests/job/ovirt-system-tests_manual/4407/
with default repos succeeded.

On Mon, 25 Mar 2019 08:58:25 +0200
Eitan Raviv  wrote:

> the patch was merged on 4.3 but test run [1] has build [2] which is one
> patch before Ahmed's merge...
> 
> 
> [1] http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.3/21/
> [2] ovirt-engine-4.3.2.2-0.0.master.20190324105929.git8b0969c.el7.noarch.rpm
> 
> 
> On Mon, Mar 25, 2019 at 8:42 AM Dan Kenigsberg  wrote:
> 
> > but http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.3/21/
> > is still failing
> > Was your patch merged?
> >
> > On Sun, Mar 24, 2019 at 10:14 AM Ahmad Khiet  wrote:
> >
> >> patched 4.3!
> >>
> >> On Sun, Mar 24, 2019 at 9:06 AM Eitan Raviv  wrote:
> >>
> >>> After some offline discussions it seems that the change that should be
> >>> implemented in order to solve the original problem (remove host fails due
> >>> to disconnect storage in progress) is to leave the host in 'preparing for
> >>> maintenance' until all relevant storage operations are completed.
> >>>
> >>> On Sat, Mar 23, 2019 at 11:36 PM Dan Kenigsberg 
> >>> wrote:
> >>>
>  Unfortunately, the network suite is still failing on
> 
>  Cannot edit Host. Related operation is currently in progress. Please
>  try again later.
> 
>  Can you check if that's the same issue? Did you revert from 4.3 too?
> 
>  http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.3/19/
> 
> 
> 
>  On Sat, 23 Mar 2019, 21:27 Benny Zlotnik,  wrote:
> 
> > The patch was reverted on Thursday
> >
> > On Sat, Mar 23, 2019 at 8:48 PM Dan Kenigsberg 
> > wrote:
> >
> >> I was told that intervening in the host state machine is delicate, but
> >> I think that this is the only correct approach.
> >>
> >> Benny, Ahmad, Tal: do you have a plan to resolve this? We are entering
> >> a third week with this constant failure.
> >>
> >>
> >> On Wed, Mar 20, 2019 at 2:42 PM Eitan Raviv 
> >> wrote:
> >> >
> >> > I am not sure that locking both groups would be sufficient, because
> >> there is still a chance that the removeNetworks request will start and
> >> acquire the lock before the DisconnectStorage operation starts.
> >> > So probably the correct and foolproof solution is to not move the
> >> host to maintenance until all related storage ops terminate.
> >> >
> >> >
> >> > On Wed, Mar 20, 2019 at 2:07 PM Simone Tiraboschi <
> >> stira...@redhat.com> wrote:
> >> >>
> >> >>
> >> >>
> >> >> On Sun, Mar 17, 2019 at 3:04 PM Eyal Edri 
> >> wrote:
> >> >>>
> >> >>> Not sure if all the same issue, but seems to failing around the
> >> same time:
> >> >>>
> >> >>>   ovirt-system-tests_hc-basic-suite-4.2 1 day 12 hr - #824 12
> >> hr - #825 57 min  integ-tests
> >> >>>   ovirt-system-tests_hc-basic-suite-master 2 days 12 hr - #1043
> >> 12 hr - #1045 51 min  integ-tests
> >> >>>   ovirt-system-tests_he-basic-ansible-suite-4.3 N/A 11 hr - #11
> >> 24 sec  integ-tests
> >> >>>   ovirt-system-tests_he-basic-ipv6-suite-master N/A 12 hr - #11
> >> 10 min  integ-tests
> >> >>>   ovirt-system-tests_he-basic-iscsi-suite-master 2 days 10 hr -
> >> #871 10 hr - #873 1 hr 36 min  integ-tests
> >> >>>   ovirt-system-tests_he-basic-suite-master 2 days 11 hr - #1147
> >> 11 hr - #1149 1 hr 26 min  integ-tests
> >> >>>   ovirt-system-tests_he-node-ng-suite-master 2 days 12 hr -
> >> #727 12 hr - #729 1 hr 54 min  integ-tests
> >> >>
> >> >>
> >> >> It's a NullPointerException on engine side:
> >> >> I opened a bug here:
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1690159 which is not on
> >> POST
> >> >>
> >> >>
> >> >>>
> >> >>>   ovirt-system-tests_network-suite-master 3 days 12 hr - #926
> >> 12 hr - #929 45 min  integ-tests
> >> >>>   ovirt-system-tests_openshift-on-ovirt-suite-4.2 3 days 10 hr
> >> - #187 10 hr - #190 45 min  integ-tests
> >> >>>
> >> >>> Links to jobs can be found here: https://jenkins.ovirt.org/
> >> >>>
> >> >>> --
> >> >>>
> >> >>> Eyal edri
> >> >>>
> >> >>>
> >> >>> MANAGER
> >> >>>
> >> >>> RHV/CNV DevOps
> >> >>>
> >> >>> EMEA VIRTUALIZATION R&D
> >> >>>
> >> >>>
> >> >>> Red Hat EMEA
> >> >>>
> >> >>> TRIED. TESTED. TRUSTED.
> >> >>> phone: +972-9-7692018
> >> >>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >> >>
> >> >> 

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
On Mon, 25 Mar 2019 17:30:53 -0400
Ryan Barry  wrote:

> It may be virt, but I'm looking...
> 
> I'm very suspicious of this happening immediately after hotplugging a NIC,
> especially since the bug attached to https://gerrit.ovirt.org/#/c/98765/
> talks about dropping packets. Dominik, did anything else change here?
> 

No, nothing I am aware of.

Is there already a pattern in the failed runs detected, or does it fail
randomly?

> On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov 
> wrote:
> 
> > Which team is it? Is it Virt? Just checking who should open a bug in
> > libvirt as suggested.
> >
> > > On 22 Mar 2019, at 20:52, Nir Soffer  wrote:
> > >
> > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron  wrote:
> > > Hi,
> > >
> > > We are failing ovirt-engine master on test 004_basic_sanity.hotplug_cpu
> > > looking at the logs, we can see that for some reason, libvirt
> > reports a vm as non-responsive, which fails the test.
> > >
> > > CQ first failure was for patch:
> > > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for mdevs,
> > use nodisplay to override
> > > But I do not think this is the cause of failure.
> > >
> > > Adding Marcin, Milan and Dan as well, as I think it may be network
> > related.
> > >
> > > You can see the libvirt log here:
> > >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > >
> > > you can see the full logs here:
> > >
> > >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> > >
> > > Evgheni and I confirmed this is not an infra issue and the problem is
> > ssh connection to the internal vm
> > >
> > > Thanks,
> > > Dafna
> > >
> > >
> > > error:
> > > 2019-03-22 15:08:22.658+: 22068: warning : qemuDomainObjTaint:7521 :
> > Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> > tainted: hook-script
> > >
> > > Why our vm is tainted?
> > >
> > > 2019-03-22 15:08:22.693+: 22068: error :
> > virProcessRunInMountNamespace:1159 : internal error: child reported: unable
> > to set security context 'system_u:object_r:virt_content_t:s0' on
> > '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> > No such file or directory
> > >
> > > This should not happen, libvirt is not adding labels to files in
> > /rhev/data-center. It is using using its own mount
> > > namespace and adding there the devices used by the VM. Since libvirt
> > create the devices in its namespace
> > > it should not complain about missing paths in /rhev/data-center.
> > >
> > > I think we should file a libvirt bug for this.
> > >
> > > 2019-03-22 15:08:28.168+: 22070: error :
> > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> > agent is not connected
> > > 2019-03-22 15:08:58.193+: 22070: error :
> > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> > agent is not connected
> > > 2019-03-22 15:13:58.179+: 22071: error :
> > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> > agent is not connected
> > >
> > > Do we have guest agent in the test VMs?
> > >
> > > Nir
> >
> > --
> > Anton Marchukov
> > Associate Manager - RHV DevOps - Red Hat
> >
> >
> >
> >
> >
> > ___
> > Infra mailing list -- in...@ovirt.org
> > To unsubscribe send an email to infra-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/in...@ovirt.org/message/B44Q3AZA7JUPMW4IDWZAS3RYMAFQ56VG/
> >
> 
> 
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7XYIPXZLPHRRI53QDC24TY6J2ZL2JWSH/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
On Tue, 26 Mar 2019 10:58:22 +
Dafna Ron  wrote:

> This is still failing randomly
> 

I created https://gerrit.ovirt.org/#/c/98906/ to help to understand
which action is crashing the guest.

> 
> On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler  wrote:
> 
> > On Mon, 25 Mar 2019 17:30:53 -0400
> > Ryan Barry  wrote:
> >
> > > It may be virt, but I'm looking...
> > >
> > > I'm very suspicious of this happening immediately after hotplugging a
> > NIC,
> > > especially since the bug attached to https://gerrit.ovirt.org/#/c/98765/
> > > talks about dropping packets. Dominik, did anything else change here?
> > >
> >
> > No, nothing I am aware of.
> >
> > Is there already a pattern in the failed runs detected, or does it fail
> > randomly?
> >
> > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov 
> > > wrote:
> > >
> > > > Which team is it? Is it Virt? Just checking who should open a bug in
> > > > libvirt as suggested.
> > > >
> > > > > On 22 Mar 2019, at 20:52, Nir Soffer  wrote:
> > > > >
> > > > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron  wrote:
> > > > > Hi,
> > > > >
> > > > > We are failing ovirt-engine master on test
> > 004_basic_sanity.hotplug_cpu
> > > > > looking at the logs, we can see that for some reason, libvirt
> > > > reports a vm as non-responsive, which fails the test.
> > > > >
> > > > > CQ first failure was for patch:
> > > > > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for
> > mdevs,
> > > > use nodisplay to override
> > > > > But I do not think this is the cause of failure.
> > > > >
> > > > > Adding Marcin, Milan and Dan as well, as I think it may be network
> > > > related.
> > > > >
> > > > > You can see the libvirt log here:
> > > > >
> > > >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > > > >
> > > > > you can see the full logs here:
> > > > >
> > > > >
> > > >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> > > > >
> > > > > Evgheni and I confirmed this is not an infra issue and the problem is
> > > > ssh connection to the internal vm
> > > > >
> > > > > Thanks,
> > > > > Dafna
> > > > >
> > > > >
> > > > > error:
> > > > > 2019-03-22 15:08:22.658+: 22068: warning :
> > qemuDomainObjTaint:7521 :
> > > > Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> > > > tainted: hook-script
> > > > >
> > > > > Why our vm is tainted?
> > > > >
> > > > > 2019-03-22 15:08:22.693+: 22068: error :
> > > > virProcessRunInMountNamespace:1159 : internal error: child reported:
> > unable
> > > > to set security context 'system_u:object_r:virt_content_t:s0' on
> > > >
> > '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> > > > No such file or directory
> > > > >
> > > > > This should not happen, libvirt is not adding labels to files in
> > > > /rhev/data-center. It is using its own mount
> > > > > namespace and adding there the devices used by the VM. Since libvirt
> > > > creates the devices in its namespace
> > > > > it should not complain about missing paths in /rhev/data-center.
> > > > >
> > > > > I think we should file a libvirt bug for this.
> > > > >
> > > > > 2019-03-22 15:08:28.168+: 22070: error :
> > > > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
> > guest
> > > > agent is not connected
> > > > > 2019-03-22 15:08:58.193+: 22070: error :
> > > > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
> > guest
> > > > agent is not connected
> > > > > 2019-03-22 15:13:58.17

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
On Tue, 26 Mar 2019 12:31:36 +0100
Dominik Holler  wrote:

> On Tue, 26 Mar 2019 10:58:22 +
> Dafna Ron  wrote:
> 
> > This is still failing randomly
> > 
> 
> I created https://gerrit.ovirt.org/#/c/98906/ to help to understand
> which action is crashing the guest.
> 

I was not able to reproduce the failure with the change above.
We could merge the change to have better information on the next
failure.


> > 
> > On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler  wrote:
> > 
> > > On Mon, 25 Mar 2019 17:30:53 -0400
> > > Ryan Barry  wrote:
> > >
> > > > It may be virt, but I'm looking...
> > > >
> > > > I'm very suspicious of this happening immediately after hotplugging a
> > > NIC,
> > > > especially since the bug attached to https://gerrit.ovirt.org/#/c/98765/
> > > > talks about dropping packets. Dominik, did anything else change here?
> > > >
> > >
> > > No, nothing I am aware of.
> > >
> > > Is there already a pattern in the failed runs detected, or does it fail
> > > randomly?
> > >
> > > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov 
> > > > wrote:
> > > >
> > > > > Which team is it? Is it Virt? Just checking who should open a bug in
> > > > > libvirt as suggested.
> > > > >
> > > > > > On 22 Mar 2019, at 20:52, Nir Soffer  wrote:
> > > > > >
> > > > > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron  wrote:
> > > > > > Hi,
> > > > > >
> > > > > > We are failing ovirt-engine master on test
> > > 004_basic_sanity.hotplug_cpu
> > > > > > looking at the logs, we can see that for some reason, libvirt
> > > > > reports a vm as non-responsive, which fails the test.
> > > > > >
> > > > > > CQ first failure was for patch:
> > > > > > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for
> > > mdevs,
> > > > > use nodisplay to override
> > > > > > But I do not think this is the cause of failure.
> > > > > >
> > > > > > Adding Marcin, Milan and Dan as well, as I think it may be network
> > > > > related.
> > > > > >
> > > > > > You can see the libvirt log here:
> > > > > >
> > > > >
> > > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > > > > >
> > > > > > you can see the full logs here:
> > > > > >
> > > > > >
> > > > >
> > > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> > > > > >
> > > > > > Evgheni and I confirmed this is not an infra issue and the problem 
> > > > > > is
> > > > > ssh connection to the internal vm
> > > > > >
> > > > > > Thanks,
> > > > > > Dafna
> > > > > >
> > > > > >
> > > > > > error:
> > > > > > 2019-03-22 15:08:22.658+: 22068: warning :
> > > qemuDomainObjTaint:7521 :
> > > > > Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> > > > > tainted: hook-script
> > > > > >
> > > > > > Why our vm is tainted?
> > > > > >
> > > > > > 2019-03-22 15:08:22.693+: 22068: error :
> > > > > virProcessRunInMountNamespace:1159 : internal error: child reported:
> > > unable
> > > > > to set security context 'system_u:object_r:virt_content_t:s0' on
> > > > >
> > > '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> > > > > No such file or directory
> > > > > >
> > > > > > This should not happen, libvirt is not adding labels to files in
> > > > > /rhev/data-center. It is using its own mount
> > > > > > namespace and adding there the devices used by the VM. Since libvirt
> > > > > creates the devices in its namespace
&g

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
I added in 
https://gerrit.ovirt.org/#/c/98925/
a ping directly before the ssh.
The ping succeeds, but the ssh fails.
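For what it's worth, the two checks test different layers: a successful ICMP ping only shows the guest's network stack is up, while ssh additionally needs a listening sshd on TCP port 22. A minimal sketch of a TCP probe that separates the two cases (host and port are placeholders, not taken from the suite):

```python
import socket

def ssh_port_open(host, port=22, timeout=10):
    """Return True if a TCP connection to the given port succeeds.

    A successful ping only proves the guest answers ICMP; this probe
    additionally requires a listening TCP server (e.g. sshd).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we open ourselves instead of a real guest:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()
print(ssh_port_open(host, port))   # True: something is listening
srv.close()
print(ssh_port_open(host, port))   # False: connection is refused now
```

Running such a probe right after the successful ping would show whether the TCP layer is the part that fails.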


On Tue, 26 Mar 2019 17:07:45 +0100
Sandro Bonazzola  wrote:

> Il giorno mar 26 mar 2019 alle ore 16:48 Ryan Barry  ha
> scritto:
> 
> > +1 from me
> >
> 
> Merged. I have 2 patches constantly failing on it, rebased them, you can
> follow on:
> https://gerrit.ovirt.org/#/c/98863/ and https://gerrit.ovirt.org/98862
> 

still failing on jenkins, but at least one succeeds locally for me

> 
> 
> >
> > On Tue, Mar 26, 2019 at 11:13 AM Dominik Holler 
> > wrote:
> > >
> > > On Tue, 26 Mar 2019 12:31:36 +0100
> > > Dominik Holler  wrote:
> > >
> > > > On Tue, 26 Mar 2019 10:58:22 +
> > > > Dafna Ron  wrote:
> > > >
> > > > > This is still failing randomly
> > > > >
> > > >
> > > > I created https://gerrit.ovirt.org/#/c/98906/ to help to understand
> > > > which action is crashing the guest.
> > > >
> > >
> > > I was not able to reproduce the failure with the change above.
> > > We could merge the change to have better information on the next
> > > failure.
> > >
> > >
> > > > >
> > > > > On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler 
> > wrote:
> > > > >
> > > > > > On Mon, 25 Mar 2019 17:30:53 -0400
> > > > > > Ryan Barry  wrote:
> > > > > >
> > > > > > > It may be virt, but I'm looking...
> > > > > > >
> > > > > > > I'm very suspicious of this happening immediately after
> > hotplugging a
> > > > > > NIC,
> > > > > > > especially since the bug attached to
> > https://gerrit.ovirt.org/#/c/98765/
> > > > > > > talks about dropping packets. Dominik, did anything else change
> > here?
> > > > > > >
> > > > > >
> > > > > > No, nothing I am aware of.
> > > > > >
> > > > > > Is there already a pattern in the failed runs detected, or does it
> > fail
> > > > > > randomly?
> > > > > >
> > > > > > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov <
> > amarc...@redhat.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Which team is it? Is it Virt? Just checking who should open a
> > bug in
> > > > > > > > libvirt as suggested.
> > > > > > > >
> > > > > > > > > On 22 Mar 2019, at 20:52, Nir Soffer 
> > wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron 
> > wrote:
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > We are failing ovirt-engine master on test
> > > > > > 004_basic_sanity.hotplug_cpu
> > > > > > > > > looking at the logs, we can see that for some reason,
> > libvirt
> > > > > > > > reports a vm as non-responsive, which fails the test.
> > > > > > > > >
> > > > > > > > > CQ first failure was for patch:
> > > > > > > > > https://gerrit.ovirt.org/#/c/98553/ - core: Add
> > display="on" for
> > > > > > mdevs,
> > > > > > > > use nodisplay to override
> > > > > > > > > But I do not think this is the cause of failure.
> > > > > > > > >
> > > > > > > > > Adding Marcin, Milan and Dan as well as I think it may be
> > network
> > > > > > > > related.
> > > > > > > > >
> > > > > > > > > You can see the libvirt log here:
> > > > > > > > >
> > > > > > > >
> > > > > >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > > > > > > > >
> > > > > > > > > you can see the full logs here:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > >

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-27 Thread Dominik Holler
On Wed, 27 Mar 2019 10:07:16 +0200
Eyal Edri  wrote:

> On Wed, Mar 27, 2019 at 3:06 AM Ryan Barry  wrote:
> 
> > On Tue, Mar 26, 2019 at 4:07 PM Dominik Holler  wrote:
> > >
> > > I added in
> > > https://gerrit.ovirt.org/#/c/98925/
> > > a ping directly before the ssh.
> > > The ping succeeds, but the ssh fails.
> > >
> > >
> > > On Tue, 26 Mar 2019 17:07:45 +0100
> > > Sandro Bonazzola  wrote:
> > >
> > > > Il giorno mar 26 mar 2019 alle ore 16:48 Ryan Barry 
> > ha
> > > > scritto:
> > > >
> > > > > +1 from me
> > > > >
> > > >
> > > > Merged. I have 2 patches constantly failing on it, rebased them, you
> > can
> > > > follow on:
> > > > https://gerrit.ovirt.org/#/c/98863/ and https://gerrit.ovirt.org/98862
> > > >
> > >
> > > still failing on jenkins, but at least one succeeds locally for me
> >
> > Succeeds locally for me also.
> >
> > Dafna, are we sure there's not an infra issue?
> >
> 
> I think since it's a race (and we've seen failures on this test in the
> past, also a race I think), it's probably hard to reproduce locally.
> Also, we probably need to make sure the same libvirt version is used.
> The upstream servers are quite old; it can also be that the local run ends
> up being faster and not hitting the same issues (as we've seen in the past).
> 
> Could it be a bug in the ssh client ( paramiko? )
> 


Probably a wrong idea, but worth asking:
Any idea which ssh_timeout is used, or how to modify it?

If 100 tries, each including a time.sleep(1), take 100 seconds,
either the timeout is not the expected 10 seconds, or the guest refuses
the connection.
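The arithmetic above can be made explicit with a sketch of such a loop (names and defaults are hypothetical, shaped like the description, not the actual OST helper). With a real 10 s connect timeout, 100 timing-out attempts plus a 1 s sleep each would take roughly 1100 s; if the whole loop finishes in about 100 s, each attempt must be failing near-instantly, i.e. the connection is refused rather than timing out:

```python
import errno
import socket
import time

def try_ssh(host, port=22, timeout=10, attempts=100, delay=1):
    """Hypothetical retry loop shaped like the one described above.

    Returns (connected, elapsed_seconds, last_errno).  An attempt that
    times out costs the full `timeout`; a refused one returns in
    milliseconds -- which is exactly the difference in question.
    """
    start = time.monotonic()
    last_errno = None
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True, time.monotonic() - start, None
        except OSError as exc:
            last_errno = exc.errno
        time.sleep(delay)
    return False, time.monotonic() - start, last_errno

# Demo: probe a local port nothing listens on -> refused instantly,
# so 3 attempts finish in far less than 3 * timeout seconds.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()
ok, elapsed, err = try_ssh("127.0.0.1", port=free_port, attempts=3, delay=0.01)
print(ok, err == errno.ECONNREFUSED, elapsed < 1.0)  # False True True
```

Logging the last errno in the suite would distinguish ECONNREFUSED (guest up, nothing listening) from ETIMEDOUT (packets dropped on the way).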


> Barak, Gal, Galit, Evgheni - any thoughts on something on infra that can
> cause this? ( other than slow servers )
> 
> 
> >
> > >
> > > >
> > > >
> > > > >
> > > > > On Tue, Mar 26, 2019 at 11:13 AM Dominik Holler 
> > > > > wrote:
> > > > > >
> > > > > > On Tue, 26 Mar 2019 12:31:36 +0100
> > > > > > Dominik Holler  wrote:
> > > > > >
> > > > > > > On Tue, 26 Mar 2019 10:58:22 +
> > > > > > > Dafna Ron  wrote:
> > > > > > >
> > > > > > > > This is still failing randomly
> > > > > > > >
> > > > > > >
> > > > > > > I created https://gerrit.ovirt.org/#/c/98906/ to help to
> > understand
> > > > > > > which action is crashing the guest.
> > > > > > >
> > > > > >
> > > > > > I was not able to reproduce the failure with the change above.
> > > > > > We could merge the change to have better information on the next
> > > > > > failure.
> > > > > >
> > > > > >
> > > > > > > >
> > > > > > > > On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler <
> > dhol...@redhat.com>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > On Mon, 25 Mar 2019 17:30:53 -0400
> > > > > > > > > Ryan Barry  wrote:
> > > > > > > > >
> > > > > > > > > > It may be virt, but I'm looking...
> > > > > > > > > >
> > > > > > > > > > I'm very suspicious of this happening immediately after
> > > > > hotplugging a
> > > > > > > > > NIC,
> > > > > > > > > > especially since the bug attached to
> > > > > https://gerrit.ovirt.org/#/c/98765/
> > > > > > > > > > talks about dropping packets. Dominik, did anything else
> > change
> > > > > here?
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > No, nothing I am aware of.
> > > > > > > > >
> > > > > > > > > Is there already a pattern in the failed runs detected, or
> > does it
> > > > > fail
> > > > > > > > > randomly?
> > > > > > > > >
> > > > > > > > > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov <
> > > > > amarc...@redhat.com>
> > > > > > > > > > wro

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-04-02 Thread Dominik Holler
On Tue, 2 Apr 2019 12:24:45 +0300
Galit Rosenthal  wrote:

> Hi
> 
> I had a failure on my laptop when running the hotplug cpu test in mock.
> (When testing a change, I got the same error on Jenkins.)
> Dominik asked me to make a video of the vm0 dmesg output.
> 
> 
> https://drive.google.com/file/d/1Kr6r4SMhnVsWBvWD6E6JIZI4ddivo2pv/view?usp=sharing
> 

Galit, thank you very much for the video!
The video shows that the guest is in serious trouble and a dropbear
process is killed by the oom killer.
Galit is currently checking whether it is possible to give the guest VM
more memory.
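The verdict from the video could also be confirmed non-interactively by scanning the guest's kernel log for oom-killer messages. A small parsing sketch (the sample line follows the usual oom-killer format and is illustrative, not taken from the actual run):

```python
import re

def oom_victims(dmesg_text):
    """Return the names of processes the kernel oom-killer reports killing."""
    return re.findall(
        r"Out of memory: Kill(?:ed)? process \d+ \(([^)]+)\)",
        dmesg_text,
        re.IGNORECASE,
    )

# Illustrative line in the usual oom-killer format (not from the actual run):
sample = "Out of memory: Kill process 1234 (dropbear) score 95 or sacrifice child"
print(oom_victims(sample))  # ['dropbear']
```

A killed dropbear would explain refused ssh connections while ping still succeeds.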



> 
> Regards,
> Galit
> 
> 
> On Wed, Mar 27, 2019 at 2:39 PM Sandro Bonazzola 
> wrote:
> 
> >
> >
> > Il giorno mer 27 mar 2019 alle ore 09:54 Dominik Holler <
> > dhol...@redhat.com> ha scritto:
> >
> >> On Wed, 27 Mar 2019 10:07:16 +0200
> >> Eyal Edri  wrote:
> >>
> >> > On Wed, Mar 27, 2019 at 3:06 AM Ryan Barry  wrote:
> >> >
> >> > > On Tue, Mar 26, 2019 at 4:07 PM Dominik Holler 
> >> wrote:
> >> > > >
> >> > > > I added in
> >> > > > https://gerrit.ovirt.org/#/c/98925/
> >> > > > a ping directly before the ssh.
> >> > > > The ping succeeds, but the ssh fails.
> >> > > >
> >> > > >
> >> > > > On Tue, 26 Mar 2019 17:07:45 +0100
> >> > > > Sandro Bonazzola  wrote:
> >> > > >
> >> > > > > Il giorno mar 26 mar 2019 alle ore 16:48 Ryan Barry <
> >> rba...@redhat.com>
> >> > > ha
> >> > > > > scritto:
> >> > > > >
> >> > > > > > +1 from me
> >> > > > > >
> >> > > > >
> >> > > > > Merged. I have 2 patches constantly failing on it, rebased them,
> >> you
> >> > > can
> >> > > > > follow on:
> >> > > > > https://gerrit.ovirt.org/#/c/98863/ and
> >> https://gerrit.ovirt.org/98862
> >> > > > >
> >> > > >
> >> > > > still failing on jenkins, but at least one succeeds locally for me
> >> > >
> >> > > Succeeds locally for me also.
> >> > >
> >> > > Dafna, are we sure there's not an infra issue?
> >> > >
> >> >
> >> > I think since it's a race (and we've seen failures on this test in the
> >> > past, also a race I think), it's probably hard to reproduce locally.
> >> > Also, we probably need to make sure the same Libvirt version is used.
> >> > The upstream servers are quite old; it can also be that the local run
> >> > ends up being faster and not hitting the same issues (as we've seen in
> >> > the past).
> >> >
> >> > Could it be a bug in the ssh client ( paramiko? )
> >> >
> >>
> >>
> >> Probably a wrong idea, but worth asking:
> >> Any idea which ssh_timeout is used, or how to modify it?
> >>
> >> If 100 tries, each including a time.sleep(1), take 100 seconds,
> >> either the timeout is not the expected 10 seconds, or the guest refuses
> >> the connection.
> >>
> >>
> > I'm looking into a similar failure and found this on host1 logs at the
> > time of the ssh failure:
> > https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/3905/artifact/check-patch.basic_suite_master.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/messages
> >
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered blocking state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered disabled state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: device vnet1
> > entered promiscuous mode
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered blocking state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered forwarding state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 NetworkManager[2667]:
> >   [1553672120.9133] manager: (vnet1): new Tun device
> > (/org/freedesktop/NetworkManager/Devices/44)
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 lldpad: recvfrom(Event
> > interface): No buffer space available
> > Mar 27 03:35:20 lago-basic-suite-mast
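The timeout arithmetic raised above is telling: with a real 10-second timeout per attempt, 100 failed attempts plus the sleeps would take around 1100 seconds, not 100, so the guest is most likely refusing the connection instantly. A minimal sketch of such a retry loop (host and port are assumptions, not the code OST actually runs):

```python
import socket
import time

def wait_for_ssh(host, port=22, attempts=100, timeout=10):
    """Retry a TCP connect until it succeeds or the attempts run out."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # A refused connection fails immediately, so the whole loop
            # takes roughly `attempts` seconds (just the sleeps), not
            # attempts * timeout.
            time.sleep(1)
    return False
```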

[ovirt-devel] Suspend resume and DHCP

2019-04-03 Thread Dominik Holler
Hello,
would you help me understand whether the DHCP client in an oVirt guest
should refresh its DHCP configuration after the guest is resumed?
If so, how should this be triggered?

The reason I ask is that if a VM suspends on a first host and
resumes on a second one, libvirt's nwfilter loses the IP address of
the guest, which means that the guest is not reachable until it
refreshes its DHCP config, if the clean-traffic filter with
CTRL_IP_LEARNING=dhcp is used.
This scenario might happen in OST basic-suite-master and
basic-suite-4.3 in verify_suspend_resume_vm0.
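For context, the scenario corresponds roughly to a guest interface defined like this (a sketch; the bridge name is an assumption):

```xml
<interface type='bridge'>
  <source bridge='ovirtmgmt'/>
  <filterref filter='clean-traffic'>
    <!-- the filter learns the guest IP by snooping its DHCP traffic;
         after a cross-host resume there is no new DHCP exchange to
         snoop, so the learned address is lost -->
    <parameter name='CTRL_IP_LEARNING' value='dhcp'/>
  </filterref>
</interface>
```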
Thanks
Dominik
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/I5OTF745RRRB3WUJ6T6KK4HGPE5VRDTF/


[ovirt-devel] Re: Network Labels and API Access (Error in documentation)

2019-04-08 Thread Dominik Holler
On Mon, 08 Apr 2019 17:37:24 -
j...@streamguys.com wrote:

> The self-hosted documentation for network labels appears to be inaccurate.
> https://{{ MY_DOMAIN 
> }}/ovirt-engine/apidoc/#/services/network_labels/methods/add
> 
> The documentation asks the user to post to the following URI:
> POST /ovirt-engine/api/networks/123/labels
> 
> However that results in a 404. The following URL should be posted to instead 
> for a successful response:
> POST /ovirt-engine/api/networks/123/networklabels
> 
> I discovered the discrepancy when trying to automate adding networks to my 
> cluster with a curl loop, which resulted in 404s. The 'links' section of the 
> specific network actually does contain the right URI to hit to get the 
> proper label tagging:
>   "link": [
> {
>   "href": 
> "/ovirt-engine/api/networks/929fec34-7a34-4c1b-9451-e6abf6733ac6/networklabels",
>   "rel": "networklabels"
> },
> {
>   "href": 
> "/ovirt-engine/api/networks/929fec34-7a34-4c1b-9451-e6abf6733ac6/permissions",
>   "rel": "permissions"
> },
> {
>   "href": 
> "/ovirt-engine/api/networks/929fec34-7a34-4c1b-9451-e6abf6733ac6/vnicprofiles",
>   "rel": "vnicprofiles"
> }
>   ]
> 
> The hosted documentation also has confusing information with networklabels 
> being used in the results of some requests, but this section here also just 
> says 'labels':
> https://ovirt.github.io/ovirt-engine-api-model/4.1/#services/network_labels/methods/add

Thanks, does
https://gerrit.ovirt.org/#/c/99271/
correct all wrong occurrences you noticed?
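For anyone scripting against this, the working endpoint can be sketched as follows (the engine URL and credentials in the comment are assumptions; the /networklabels path segment is the point):

```python
NET_ID = "929fec34-7a34-4c1b-9451-e6abf6733ac6"

def network_labels_url(engine, net_id):
    """Build the URI that accepts the POST (.../networklabels, not .../labels)."""
    return "{}/ovirt-engine/api/networks/{}/networklabels".format(engine, net_id)

print(network_labels_url("https://engine.example.com", NET_ID))
# The request itself could then be sent with e.g.:
#   curl -k -u admin@internal:secret -H 'Content-Type: application/xml' \
#        -d '<network_label><id>mylabel</id></network_label>' "$URL"
```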




[ovirt-devel] Merging change into OST network-suite

2019-04-09 Thread Dominik Holler
Hello,
currently OST network-suite-4.2 is broken by a test
executed unintentionally in network-suite-4.2.
To unbreak it,
https://gerrit.ovirt.org/#/c/99289/
can be used.
Because Edy is not available to merge, it would be helpful if someone
else with merge rights would merge the change.
Thanks
Dominik 


[ovirt-devel] Re: Nominating Miguel Barosso as ovirt-provider-ovn maintainer

2019-04-28 Thread Dominik Holler
+1
From my point of view, Miguel is an excellent fit.

On Mon, Apr 29, 2019 at 7:36 AM Moshe Sheena  wrote:

> +1
> I can think of no one else better fit to maintain the project and keep
> raising it to new heights.
>
> On Wed, Apr 24, 2019 at 2:55 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno mer 24 apr 2019 alle ore 09:52 Marcin Mirecki <
>> mmire...@redhat.com> ha scritto:
>>
>>> I would like to propose Miguel Barosso as a maintainer for
>>> ovirt-provider-ovn.
>>>
>>> Miguel has now been working on the project for almost a year, and for the
>>> last few months has been practically the only active contributor to the
>>> project. He has successfully implemented new features right from the design
>>> stage, added a new integration test framework, fixed an endless number of
>>> bugs and contributed over 200 patches to the project.
>>> Currently the only maintainer (me) is no longer actively working on the
>>> project, which causes a review bottleneck.
>>>
>>>
>> +1 on my side
>>
>>
>>
>>> Thanks,
>>> Marcin
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VQ443KDJT75ZDV7IE754B35GHEIA4GWO/
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/NKKFGHIGUITFU7V3MJS5XERTS5V4ZQGY/
>>
>
>
> --
> Kind regards,
> Moshe.
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MMQW6V2CCYSS4YR25B3V7VNMST2HJBL3/
>


[ovirt-devel] Weakness of repos in OST

2019-06-25 Thread Dominik Holler
Hi,
from my point of view, we are not testing the repos in OST,
because we manage the packages manually.
The clean way would be installing something like
https://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
But this would download each package multiple times each run.

Maybe a way to test the repos would be an OST run which bypasses
lago's repo management.
What is your view on this?
Dominik


[ovirt-devel] Re: Failure adding host due to missing 'ovirt_host_deploy'

2019-08-21 Thread Dominik Holler
Looks like a py2/py3 issue.
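A quick way to check which interpreter can see the module (a sketch; on the setup below, the fc29 package ships only a python2 module while the fc30 otopi plugin runs under python3):

```python
import importlib.util
import sys

def has_module(name):
    """True if this interpreter can import the given top-level module."""
    return importlib.util.find_spec(name) is not None

# Run under python3 on the affected host; 'ovirt_host_deploy' would be
# reported missing there:
print(sys.executable, has_module("ovirt_host_deploy"))
```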

On Wed, Aug 21, 2019 at 8:51 AM Eyal Shenitzky  wrote:

> Hi all,
>
> I am failing to add a new host to oVirt environment, the engine runs on
> fedora-30.
>
> ovirt_host_deploy isn't found under -
> http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc30
>
> I installed the fc29 version.
>
> *Error:*
> File
> "/tmp/ovirt-biYslZJETn/otopi-plugins/ovirt-host-deploy/kdump/packages.py",
> line 37, in  from ovirt_host_deploy import constants as odeploycons
> otopi.main.PluginLoadException: No module named 'ovirt_host_deploy
>
> *Installed packages:*
> ovirt-host-deploy-common-1.9.0-0.0.master.20190722100027.git138fb90.fc29.noarch
> python2-ovirt-host-deploy-1.9.0-0.0.master.20190722100027.git138fb90.fc29.noarch
>
> ovirt-host-deploy-common-1.9.0-0.0.master.20190722100027.git138fb90.fc29.noarch
>
> Does someone encounter that issue?
>
> Thanks,
>
> --
> Regards,
> Eyal Shenitzky
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/I33ZZVBY5SAIL5GZZJ3OZSO6MQCNO6DW/
>


[ovirt-devel] OST upgrade-from-release-suite-master update_cluster_versions failing

2019-08-27 Thread Dominik Holler
Hi,
for me OST upgrade-from-release-suite-master update_cluster_versions
is failing with

004_basic_sanity.update_cluster_versions (from nosetests)
Failing for the past 1 build (Since Failed#5442 )
Took 1 min 16 sec.
Error Message

Fault reason is "Operation Failed". Fault detail is "[Update of cluster 
compatibility version failed because there are VMs/Templates [vm-with-iface, 
vm-with-iface-template, vm0, vm1] with incorrect configuration. To fix the 
issue, please go to each of them, edit, change the Custom Compatibility Version 
of the VM/Template to the cluster level you want to update the cluster to and 
press OK. If the save does not pass, fix the dialog validation. After 
successful cluster update, you can revert your Custom Compatibility Version 
change.]". HTTP response code is 500.

Stacktrace

Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in 
wrapped_test
test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in 
wrapper
return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in 
wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File 
"/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/upgrade-from-release-suite-master/test-scenarios-after-upgrade/004_basic_sanity.py",
 line 184, in update_cluster_versions
minor=minor
  File 
"/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/upgrade-from-release-suite-master/test-scenarios-after-upgrade/004_basic_sanity.py",
 line 130, in _update_cluster_version
version=new_version
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 3943, 
in update
return self._internal_update(cluster, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 253, in 
_internal_update
return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in 
wait
return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 250, in 
callback
self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132, in 
_check_fault
self._raise_error(response, body)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, in 
_raise_error
raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Update of cluster 
compatibility version failed because there are VMs/Templates [vm-with-iface, 
vm-with-iface-template, vm0, vm1] with incorrect configuration. To fix the 
issue, please go to each of them, edit, change the Custom Compatibility Version 
of the VM/Template to the cluster level you want to update the cluster to and 
press OK. If the save does not pass, fix the dialog validation. After 
successful cluster update, you can revert your Custom Compatibility Version 
change.]". HTTP response code is 500.

Please find an example run in
https://jenkins.ovirt.org/view/oVirt system 
tests/job/ovirt-system-tests_manual/5442

Is this a known error, or is someone already addressing this issue?
Thanks
Dominik


[ovirt-devel] Re: OST upgrade-from-release-suite-master update_cluster_versions failing

2019-08-27 Thread Dominik Holler
On Tue, 27 Aug 2019 16:36:45 +0300
Dafna Ron  wrote:

> passed with latest package:
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5454/
> 

I don't get it. Isn't 5454 failing?


> 
> On Tue, Aug 27, 2019 at 3:57 PM Dafna Ron  wrote:
> 
> > Dominik,
> > seems it passes with no changes:
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5452/
> > the issue, if it exists, is in the engine before merge (unless we will see
> > a failed CQ soon)
> > I ran another manual test on the latest built package of engine, so let's
> > see if we have an issue that has not been identified yet.
> >
> > adding Dusan in case CQ starts failing
> >
> >
> > On Tue, Aug 27, 2019 at 2:55 PM Dafna Ron  wrote:
> >
> >> Running it without a new package to see if this reproduces on packages
> >> from 'tested' with no changes.
> >>
> >>
> >> On Tue, Aug 27, 2019 at 1:49 PM Dominik Holler 
> >> wrote:
> >>
> >>> Hi,
> >>> for me OST upgrade-from-release-suite-master update_cluster_versions
> >>> is failing with
> >>>
> >>> 004_basic_sanity.update_cluster_versions (from nosetests)
> >>> Failing for the past 1 build (Since Failed#5442 )
> >>> Took 1 min 16 sec.
> >>> add description
> >>> Error Message
> >>>
> >>> Fault reason is "Operation Failed". Fault detail is "[Update of cluster
> >>> compatibility version failed because there are VMs/Templates
> >>> [vm-with-iface, vm-with-iface-template, vm0, vm1] with incorrect
> >>> configuration. To fix the issue, please go to each of them, edit, change
> >>> the Custom Compatibility Version of the VM/Template to the cluster level
> >>> you want to update the cluster to and press OK. If the save does not pass,
> >>> fix the dialog validation. After successful cluster update, you can revert
> >>> your Custom Compatibility Version change.]". HTTP response code is 500.
> >>>
> >>> Stacktrace
> >>>
> >>> Traceback (most recent call last):
> >>>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> >>> testMethod()
> >>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
> >>> runTest
> >>> self.test(*self.arg)
> >>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> >>> 142, in wrapped_test
> >>> test()
> >>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60,
> >>> in wrapper
> >>> return func(get_test_prefix(), *args, **kwargs)
> >>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79,
> >>> in wrapper
> >>> prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
> >>>   File
> >>> "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/upgrade-from-release-suite-master/test-scenarios-after-upgrade/004_basic_sanity.py",
> >>> line 184, in update_cluster_versions
> >>> minor=minor
> >>>   File
> >>> "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/upgrade-from-release-suite-master/test-scenarios-after-upgrade/004_basic_sanity.py",
> >>> line 130, in _update_cluster_version
> >>> version=new_version
> >>>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line
> >>> 3943, in update
> >>> return self._internal_update(cluster, headers, query, wait)
> >>>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> >>> 253, in _internal_update
> >>> return future.wait() if wait else future
> >>>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> >>> 55, in wait
> >>> return self._code(response)
> >>>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> >>> 250, in callback
> >>> self._check_fault(response)
> >>>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> >>> 132, in _check_fault
> >>> self._raise_error(response, body)
> >>>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
> >>> 118, in _raise_error
> &

[ovirt-devel] Re: 'Misc configuration' failed in engine-setup

2019-09-04 Thread Dominik Holler
On Thu, Sep 5, 2019 at 8:01 AM Yedidyah Bar David  wrote:

> On Wed, Sep 4, 2019 at 5:55 PM Shmuel Melamud  wrote:
> >
> > Attached.
> >
> > On Wed, Sep 4, 2019 at 5:06 PM Yedidyah Bar David 
> wrote:
> > >
> > > On Wed, Sep 4, 2019 at 2:25 PM Shmuel Melamud 
> wrote:
> > > >
> > > > Hi!
> > > >
> > > > I've tried several minutes ago to perform engine-setup on the latest
> > > > master and it failed with the message:
> > > >
> > > > [ INFO  ] Stage: Misc configuration
> > > > [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 2] No
> > > > such file or directory:
> > > > '/etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf'
>
> OK. This is a real bug, introduced very recently by a patch from dholler,
> which I helped review. Sorry we didn't catch it :-(
>
> This should fix:
>
> https://gerrit.ovirt.org/103117
>
> That said, no idea how it happened. The flow you ran into is:
>
> 1. Install and setup ovirt-provider-ovn
> 2. Create a provider, AFAICT via engine-setup, but:
> 3. Do not create a config file for it
>
>
This happens in dev mode, but only in dev mode, because there it is up
to the developer to create the file manually according to the
instructions printed by engine-setup.


> No idea how.
>
> > > >
> > > > Anybody knows what may cause the error?
> > >
> > > Please share the setup log. Thanks.
> > >
> > > Best regards,
> > > --
> > > Didi
>
>
>
> --
> Didi
>


[ovirt-devel] ovirt-release43-pre is not installable

2019-09-17 Thread Dominik Holler
Hello,
please note that
yum install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
yum install -y ovirt-host
is currently failing with
Error: Package: vdsm-4.30.30-1.el7.x86_64 (ovirt-4.3-pre)
   Requires: sanlock-python >= 3.7.3
   Installing: sanlock-python-3.6.0-1.el7.x86_64 (base)
   sanlock-python = 3.6.0-1.el7
Error: Package: ovirt-hosted-engine-ha-2.3.5-1.el7.noarch (ovirt-4.3-pre)
   Requires: sanlock-python >= 3.7.3
   Installing: sanlock-python-3.6.0-1.el7.x86_64 (base)
   sanlock-python = 3.6.0-1.el7
Error: Package: ovirt-hosted-engine-ha-2.3.5-1.el7.noarch (ovirt-4.3-pre)
   Requires: sanlock >= 3.7.3
   Installing: sanlock-3.6.0-1.el7.x86_64 (base)
   sanlock = 3.6.0-1.el7

Is someone already fixing this?
Dominik
[user@tarox0 images]$  ~/scripts/newCloudVm.sh ovirt-43-host10
+ '[' -n ovirt-43-host10 ']'
+ name=ovirt-43-host10
+ rootpassword=123456
+ osvariant=centos7.0
+ partition=sda1
+ '[' centos7.0 == fedora27 ']'
+ '[' centos7.0 == test ']'
+ '[' centos7.0 == centos7.0 ']'
+ version=7
+ mirror=https://cloud.centos.org/centos/7/images/
+ origfile=CentOS-7-x86_64-GenericCloud.qcow2c
+ osvariant=rhel7.6
+ URI=qemu:///system
+ wget -nc 
https://cloud.centos.org/centos/7/images//CentOS-7-x86_64-GenericCloud.qcow2c
File ‘CentOS-7-x86_64-GenericCloud.qcow2c’ already there; not retrieving.

+ virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2c --update
[   0.0] Examining the guest ...
[   1.9] Setting a random seed
[   1.9] Updating packages
[   2.4] Finishing off
+ image=ovirt-43-host10.img
+ truncate -s 40G ovirt-43-host10.img
+ virt-resize --expand /dev/sda1 CentOS-7-x86_64-GenericCloud.qcow2c 
ovirt-43-host10.img
[   0.0] Examining CentOS-7-x86_64-GenericCloud.qcow2c
**

Summary of changes:

/dev/sda1: This partition will be resized from 8.0G to 40.0G.  The 
filesystem xfs on /dev/sda1 will be expanded using the ‘xfs_growfs’ 
method.

**
[   1.6] Setting up initial partition table on ovirt-43-host10.img
[   1.7] Copying /dev/sda1
 100% 
⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧
 00:00
[  25.8] Expanding /dev/sda1 using the ‘xfs_growfs’ method

Resize operation completed with no errors.  Before deleting the old disk, 
carefully check that the resized disk boots and works correctly.
+ virt-customize -a ovirt-43-host10.img --root-password password:123456 
--ssh-inject root --selinux-relabel --hostname ovirt-43-host10 --timezone 
Europe/Berlin --uninstall cloud-init,kexec-tools,postfix
[   0.0] Examining the guest ...
[   1.8] Setting a random seed
[   1.8] SSH key inject: root
[   2.6] Setting the hostname: ovirt-43-host10
[   2.6] Setting the timezone: Europe/Berlin
[   2.6] Uninstalling packages: cloud-init kexec-tools postfix
[   4.8] Setting passwords
[   5.5] SELinux relabelling
[  11.7] Finishing off
+ [[ -n '' ]]
+ mem=2048,maxmemory=8192
+ sleep 13
+ virt-install --name ovirt-43-host10 --os-type=linux --os-variant=rhel7.6 
--vcpus 2,maxvcpus=4 --cpu host --memory 2048,maxmemory=8192 --rng /dev/urandom 
--import --disk ovirt-43-host10.img --noautoconsole --network network=default 
--connect qemu:///system

Starting install...
Domain creation completed.
+ virsh -c qemu:///system domifaddr ovirt-43-host10
 Name   MAC address  Protocol Address
---
 vnet5  52:54:00:36:0b:2bipv4 192.168.122.30/24

[user@tarox0 images]$ ssh -o StrictHostKeyChecking=no root@192.168.122.30
@@@
@WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:aloIq1j/uLOoseDbLzSh3jniP2KfCZbHgTnbliznPwA.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/user/.ssh/known_hosts:244
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle 
attacks.
[root@ovirt-43-host10 ~]# yum install -y 
https://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
Loaded plugins: fastestmirror
ovirt-release43-pre.r

[ovirt-devel] Re: Using SDK to find VM Network Role

2019-09-26 Thread Dominik Holler
On Thu, Sep 26, 2019 at 4:37 PM Jamie Holohan 
wrote:

> I am trying to verify whether a Networks VM Network checkbox is checked
> using the SDK, but I cannot see any methods or variable with a similar name.
>
> Is it possible to find out if a network has a VM Network Role using the
> SDK?
>

Please note that the network roles exist only in the context of a cluster:

# 'connection' is an ovirtsdk4.Connection to a running engine
system_service = connection.system_service()
clusters_service = system_service.clusters_service()
cluster = clusters_service.list()[0]
cluster_service = clusters_service.cluster_service(cluster.id)
networks_service = cluster_service.networks_service()
for network in networks_service.list():
    print('{}: {}'.format(network.name, network.usages))
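To test for the VM role specifically, the check would be against types.NetworkUsage.VM in network.usages; a stand-in sketch with plain strings in place of the enum values:

```python
def has_vm_role(usages):
    """True if the list of cluster-network usages includes the VM role.

    With ovirtsdk4 the entries are types.NetworkUsage values; plain
    strings stand in for them here.
    """
    return any(str(u) == "vm" for u in usages)

print(has_vm_role(["vm", "display"]))  # True
print(has_vm_role(["display"]))        # False
```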


___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HUOLCB5CM5SYFO7KDQF25GDXTMSMUHKM/
>


[ovirt-devel] Re: ovirt-system-tests network suite

2019-10-22 Thread Dominik Holler
On Tue, Oct 22, 2019 at 2:51 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> I'm trying to run OST network suite locally, and it is failing w/ the
> following error [0].
>
> The build can be found in [1].
>
> Any guidance / clue ?
>
>
Does
echo 2 | sudo tee /proc/sys/net/ipv6/conf/yourphysicalnic/accept_ra
fix the issue?
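(accept_ra=2 tells the kernel to keep accepting router advertisements on the NIC even when IPv6 forwarding is enabled. To make the setting survive a reboot, a sysctl.d fragment along these lines could be used; the file name and interface name are assumptions:)

```
# /etc/sysctl.d/90-accept-ra.conf
net.ipv6.conf.eth0.accept_ra = 2
```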


> [0] - http://pastebin.test.redhat.com/807736
> [1] - https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5835/
>


[ovirt-devel] Re: OST: Failing migrations on el8

2019-10-25 Thread Dominik Holler
On Fri, Oct 25, 2019 at 3:08 PM Milan Zamazal  wrote:

> Hi, I looked at the failing migrations in OST on el8, when running
> basic-suite-master with https://gerrit.ovirt.org/#/c/103888/31.  The
> migration fails even before started, when Vdsm tries to talk to the
> remote Vdsm and can't reach it.  Indeed, there seems to be a networking
> problem between the hosts:
>
>   [root@lago-basic-suite-master-host-1 ~]# ping -c 1
> lago-basic-suite-master-host-0
>   PING
> lago-basic-suite-master-host-0(lago-basic-suite-master-host-0.lago.local
> (fd8f:1391:3a82:201::c0a8:c902)) 56 data bytes
>   From lago-basic-suite-master-host-1 (fd8f:1391:3a82:200::c0a8:c899):
> icmp_seq=1 Destination unreachable: Address unreachable
>
>
Maybe it is related to the default IP version being flipped from IPv4 in
CentOS 7 to IPv6 in CentOS 8.
This means that if IPv6 is enabled in the DNS, it is required to be enabled
on both the source and destination hosts.
I will have a detailed look.


> Regards,
> Milan
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/E3Z3RDMSYQ7GTXO7R2DTJCDG5RLZFPZZ/
>


[ovirt-devel] Re: OST: Failing migrations on el8

2019-10-29 Thread Dominik Holler
On Tue, Oct 29, 2019 at 9:35 AM Galit Rosenthal  wrote:

> Hi Dominik,
>
> Any updates on this?
>
>
Looks like my suspicion was right.
The mapping between the IPv6 addresses of ovirtmgmt and the IPv6 addresses
which are configured to be resolved to hostnames does not match in OST.
One way would be to disable the resolution to IPv6 entirely.
Another way would be to fix the mapping.



> Regards,
> Galit
>
> On Fri, Oct 25, 2019 at 5:34 PM Dominik Holler  wrote:
>
>> On Fri, Oct 25, 2019 at 3:08 PM Milan Zamazal 
>> wrote:
>>
>>> Hi, I looked at the failing migrations in OST on el8, when running
>>> basic-suite-master with https://gerrit.ovirt.org/#/c/103888/31.  The
>>> migration fails even before started, when Vdsm tries to talk to the
>>> remote Vdsm and can't reach it.  Indeed, there seems to be a networking
>>> problem between the hosts:
>>>
>>>   [root@lago-basic-suite-master-host-1 ~]# ping -c 1
>>> lago-basic-suite-master-host-0
>>>   PING
>>> lago-basic-suite-master-host-0(lago-basic-suite-master-host-0.lago.local
>>> (fd8f:1391:3a82:201::c0a8:c902)) 56 data bytes
>>>   From lago-basic-suite-master-host-1 (fd8f:1391:3a82:200::c0a8:c899):
>>> icmp_seq=1 Destination unreachable: Address unreachable
>>>
>>>
>> Maybe it is related that the default IP version is flipped from IPv4 in
>> CentOS7 to IPv6 in CentOS8.
>> This means that if IPv6 is enabled in the dns, it is required to be
>> enabled on the source and destination host.
>> I will have a detailed look.
>>
>>
>>> Regards,
>>> Milan
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/E3Z3RDMSYQ7GTXO7R2DTJCDG5RLZFPZZ/
>>>
>>
>
> --
>
> GALIT ROSENTHAL
>
> SOFTWARE ENGINEER
>
> Red Hat
>
> <https://www.redhat.com/>
>
> ga...@redhat.comT: 972-9-7692230
> <https://red.ht/sig>
>


[ovirt-devel] gate pipeline TIMED_OUT

2019-11-14 Thread Dominik Holler
Hello,
how can I interpret the state of
https://gerrit.ovirt.org/#/c/103739/
and what is the way to merge it?
Should I just ask Edy to override, or is there a cleaner way in which I can
solve this myself?
Thanks


[ovirt-devel] Re: gate pipeline TIMED_OUT

2019-11-14 Thread Dominik Holler
On Thu, Nov 14, 2019 at 4:18 PM Nir Soffer  wrote:

> On Thu, Nov 14, 2019 at 4:45 PM Dominik Holler  wrote:
> >
> > Hello,
> > how can I interpret the state of
> > https://gerrit.ovirt.org/#/c/103739/
> > and what is the way to merge it?
> > Just ask Edy to overwrite, or is there a more clean way possible how I
> can solve this myself?
> > Thanks
>
> You are using some non-standard marks:
>
> @suite.XFAIL_SUITE_MASTER('TODO')
>
> So I don't know what you do, but given the time out for tests that
> could not work,
> maybe you are missing run=False in the xfail:
>
> pytest.mark.xfail(on_centos("8"), reason="...", run=False)
>
>
This is in
https://gerrit.ovirt.org/#/c/103739/28/network-suite-master/testlib/suite.py@37


> With this the test is mark as expected failure without running it.
>
>
I am sure that the code change is fine.
I wrote this mail to learn about the new gating process.


> Nir
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SWFLAPKF3C3MDOCJZU7LQ54GAB5RVLBE/


[ovirt-devel] Re: gate pipeline TIMED_OUT

2019-11-14 Thread Dominik Holler
On Thu, Nov 14, 2019 at 4:36 PM Nir Soffer  wrote:

> On Thu, Nov 14, 2019 at 5:24 PM Dominik Holler  wrote:
> >> You are using some non-standard marks:
> >>
> >> @suite.XFAIL_SUITE_MASTER('TODO')
> >>
> >> So I don't know what you do, but given the time out for tests that
> >> could not work,
> >> maybe you are missing run=False in the xfail:
> >>
> >> pytest.mark.xfail(on_centos("8"), reason="...", run=False)
> >>
> >
> > This is in
> https://gerrit.ovirt.org/#/c/103739/28/network-suite-master/testlib/suite.py@37
>
> Strange usage of UPPERCASE for mark functions, but the xfail() looks
> correct.
>
>
Ack, makes sense to handle decorators as functions.
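The run=False usage discussed above can be sketched like this (a minimal, hypothetical example; XFAIL_SUITE_MASTER and the suite-detection helper are stand-ins, not the actual OST testlib code):

```python
import pytest

CURRENT_SUITE = "master"  # hypothetical stand-in for the suite detection in OST


def on_suite(version):
    # True when the running suite matches the given version
    return CURRENT_SUITE == version


# An UPPERCASE module-level name used as a decorator factory, mirroring
# the @suite.XFAIL_SUITE_MASTER('TODO') style discussed in this thread.
def XFAIL_SUITE_MASTER(reason):
    # run=False marks the test as an expected failure *without* executing it,
    # so a test that would hang cannot cause a suite timeout
    return pytest.mark.xfail(on_suite("master"), reason=reason, run=False)


@XFAIL_SUITE_MASTER("TODO: not yet stable on the master suite")
def test_something_unstable():
    raise RuntimeError("never executed when run=False applies")
```

With run=False, pytest reports the test as xfailed without calling it, which avoids the kind of timeout described here.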


> > I wrote this mail to learn about the new gating process.
>
> Hopefully the maintainer of this code can merge the change without
> waiting for the
> gating machine.
>
>

The problem might be that the gating fails if any dependency in any test
suite fails.
In this case, it would be helpful to have some feedback on why the
gating failed, e.g. on which suite.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JLBC5Z7VO7O4M2CG3XGSSWK5SH343X46/


[ovirt-devel] Re: No engine logs when adding a Network to a host NIC using SDK

2019-11-19 Thread Dominik Holler
On Mon, Nov 18, 2019 at 3:44 PM Jamie Holohan 
wrote:

> I am having an issue. I am building a test case and one step involves
> adding a network to a NIC object. When I perform this step in the UI by
> navigating from Hosts -> hostToAttachNetwork -> Network Interfaces -> Setup
> Host Networks . And I drag the required network to NIC object and press ok,
> after the operation is complete, an event log is registered in the
> engine.log containing this text: "Network changes were saved on host"
>
> When I perform the same step in the automated test case, using the
> following code:
>
>  hostService.setupNetworks().modifiedNetworkAttachments(
> networkAttachment()
> .network(
> network()
> .id(netToAdd.id())
> .name(netToAdd.name())
> )
> .hostNic(
>  hostNic()
>  .id(hsNicId)
> ))
> .send();
>
> The operation is performed correctly, the network is attached to the
> appropriate NIC, but there is no log registered in the engine.log.
>
> I am wondering is there a better way to perform the above task that will
> register an event in the log file. Or is it possible that this a bug in the
> SDK?
>
>
This would be a bug in Engine.
Can you please re-check the engine.log, and also check the related vdsm.log
and supervdsm.log for traces of this action?



> Thanks
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SCB33M33KV6EQVVWJMQRLBUPE2RJMXTB/


[ovirt-devel] Re: gathering builds for oVirt 4.4.0 Alpha release

2019-11-21 Thread Dominik Holler
Is there a change to include the nmstate repos, like in
https://gerrit.ovirt.org/#/c/104143/23/common/yum-repos/ovirt-master-host-cq.repo.in
?

Even if nmstate is not enabled by default in the alpha, if the repos
were enabled, the user could comfortably enable nmstate with
engine-config --set VdsmUseNmstate=true

On Thu, Nov 21, 2019 at 3:29 PM Sandro Bonazzola 
wrote:

> While waiting for CentOS Linux 8.1 to be released, we are now preparing an
> alpha compose that will be made available on top of CentOS Linux 8.0.
> I'm starting gathering the builds to be included in the alpha release with
> the first commit here: https://gerrit.ovirt.org/#/c/104880/
> If you want a specific version to be included in alpha please push a
> patch updating the release configuration file.
> For needed packages that won't be included in the release file, latest
> build passing CI tests will be included in the compose.
>
> Thanks,
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XVLVRN2CYKLCPRDEYDW4V2C4GSBNBWVN/


[ovirt-devel] Re: gathering builds for oVirt 4.4.0 Alpha release

2019-11-21 Thread Dominik Holler
On Thu, Nov 21, 2019 at 4:43 PM Sandro Bonazzola 
wrote:

>
>
> On Thu, 21 Nov 2019 at 16:31, Dominik Holler <
> dhol...@redhat.com> wrote:
>
>> Is there a change to include to nmstate repos, like in
>>
>> https://gerrit.ovirt.org/#/c/104143/23/common/yum-repos/ovirt-master-host-cq.repo.in
>> ?
>>
>>
> I think you're looking for https://gerrit.ovirt.org/104825
>
>

Yes, thanks!


> Even nmstate might be not enabled by default in the alpha, but if the
>> repos would be enabled, the user could comfortably
>> enable nmstate by
>> engine-config --set VdsmUseNmstate=true
>>
>> On Thu, Nov 21, 2019 at 3:29 PM Sandro Bonazzola 
>> wrote:
>>
>>> While waiting for CentOS Linux 8.1 to be released, we are now preparing
>>> an alpha compose that will be made available on top of CentOS Linux 8.0.
>>> I'm starting gathering the builds to be included in the alpha release
>>> with the first commit here: https://gerrit.ovirt.org/#/c/104880/
>>> If you want a specific version to be included in alpha please push a
>>> patch updating the release configuration file.
>>> For needed packages that won't be included in the release file, latest
>>> build passing CI tests will be included in the compose.
>>>
>>> Thanks,
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://www.redhat.com/>*Red Hat respects your work life balance.
>>> Therefore there is no need to answer this email out of your office hours.*
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EO3NJSHP3SQA4ZNOEW3UVCSU2VSRLTCV/


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-21 Thread Dominik Holler
On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:

> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek 
> wrote:
> >
> > Hi,
> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails
> with
> >
> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error
> occured:
> > \n Problem 1: cannot install the best update candidate for package vdsm-
> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides nmstate
> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> vdsm-network
> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
> installed\n
> > - cannot install the best update candidate for package vdsm-
> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>
> nmstate should be provided by copr repo enabled by ovirt-release-master.
>


I re-triggered as
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
maybe
https://gerrit.ovirt.org/#/c/104825/
was missing



> Who installs this rpm in OST?
>
>
I do not understand the question.


> > [...]
> >
> > See [2] for full error.
> >
> > Can someone please take a look?
> > Thanks
> > Vojta
> >
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
> > [2]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
> > exported-artifacts/test_logs/basic-suite-master/
> post-002_bootstrap.py/lago-
> >
> basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/URPZHR2IKQYBCMZVN5EBAXWTATCXBEKP/


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  wrote:

>
>
> On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:
>
>> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek 
>> wrote:
>> >
>> > Hi,
>> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails
>> with
>> >
>> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error
>> occured:
>> > \n Problem 1: cannot install the best update candidate for package vdsm-
>> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
>> nmstate
>> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
>> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
>> vdsm-network
>> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
>> installed\n
>> > - cannot install the best update candidate for package vdsm-
>> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
>> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>
>> nmstate should be provided by copr repo enabled by ovirt-release-master.
>>
>
>
> I re-triggered as
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> maybe
> https://gerrit.ovirt.org/#/c/104825/
> was missing
>
>
Looks like
https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.

Miguel, do you think merging

https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-host-cq.repo.in

would solve this?


>
>
>> Who installs this rpm in OST?
>>
>>
> I do not understand the question.
>
>
>> > [...]
>> >
>> > See [2] for full error.
>> >
>> > Can someone please take a look?
>> > Thanks
>> > Vojta
>> >
>> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
>> > [2]
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
>> > exported-artifacts/test_logs/basic-suite-master/
>> post-002_bootstrap.py/lago-
>> >
>> basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AXXKBFNWEUWVUKCMSHZTJTZGJ6KVXZ4W/


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek  wrote:
> >
> > On Friday, 22 November 2019, 9:56:56 CET Miguel Duarte de Mora Barroso wrote:
> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek  wrote:
> > > >
> > > > On Friday, 22 November 2019, 9:41:26 CET Dominik Holler wrote:
> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  wrote:
> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:
> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek  wrote:
> > > > > >> > Hi,
> > > > > >> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails with
> > > > > >> >
> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error occured:
> > > > > >> > \n Problem 1: cannot install the best update candidate for package
> > > > > >> > vdsm-network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides nmstate
> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires vdsm-network
> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be installed\n
> > > > > >> > - cannot install the best update candidate for package
> > > > > >> > vdsm-python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > > >>
> > > > > >> nmstate should be provided by copr repo enabled by ovirt-release-master.
> > > > > >
> > > > > > I re-triggered as
> > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > > > > > maybe
> > > > > > https://gerrit.ovirt.org/#/c/104825/
> > > > > > was missing
> > > > >
> > > > > Looks like
> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
> > > >
> > > > maybe not. You re-triggered with [1], which really missed this patch.
> > > > I did a rebase and now running with this patch in build #6132 [2].
> > > > Let's wait for it to see if gerrit #104825 helps.
> > > >
> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
> > > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler  wrote:

>
> [...]

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:

>
>
> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:
>
>>
>>
>> On 11/22/19 4:54 PM, Martin Perina wrote:
>>
>> [...]

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  wrote:

>
> [...]

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler  wrote:

>
> [...]

[ovirt-devel] Re: urgent: last passing engine is 7 days old

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 9:20 AM Sandro Bonazzola 
wrote:

> Hi,
> last engine passing OST is
> 71338988 - vdsbroker: logger statement fixed (7 days ago) 
> we need to get newer engine passing OST in order to compose Alpha.
> Can you please help fixing this?
>
>

Is it already known which test is failing, or can you add a link to a
failing test?
I expect that it is not obvious to most developers why the change queue is
failing, if the reason is the change queue at all.



> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/OMGND6VJQ5MQE5JMWWQQAWBLRTZ2A7BM/


[ovirt-devel] Re: urgent: last passing vdsm is 8 days old

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 9:22 AM Sandro Bonazzola 
wrote:

> Hi,
> last passing vdsm is
> * 63ea8cb8b - net, nmstate: Fix getting wrong attribute from setup
> networks (8 days ago) 
>
> can you please help getting new vdsm pass OST?
>


Is it already known which test is failing, or can you add a link to a
failing test?
I expect that it is not obvious to most developers why the change queue is
failing, if the reason is the change queue at all.


> Thanks,
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/M557E5HJ2UN7X2H3ME6HPS2X7U2P3A2H/


[ovirt-devel] Re: urgent: last passing engine is 7 days old

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 10:22 AM Sandro Bonazzola 
wrote:

>
>
> Il giorno lun 25 nov 2019 alle ore 10:03 Dominik Holler <
> dhol...@redhat.com> ha scritto:
>
>>
>>
>> On Mon, Nov 25, 2019 at 9:20 AM Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> last engine passing OST is
>>> 71338988 - vdsbroker: logger statement fixed (7 days ago) 
>>> we need to get newer engine passing OST in order to compose Alpha.
>>> Can you please help fixing this?
>>>
>>>
>>
>> Is it already known which test is failing, or could you add a link to a failing test?
>>
>
> Latest jenkins email about engine failure is:
> https://lists.ovirt.org/archives/list/in...@ovirt.org/thread/MFG6N5PLKWYWUWE3PA7BNH46A4CECSYZ/
> which contains links to the failing job.
>


Which is
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/17315/
which says:
upgrade-from-release-suite.el7.x86_64 / 004_basic_sanity.upgrade_hosts
is failing. Is this test expected to pass?
From my understanding, upgrading from RHEL 7 hosts will not be supported and is
expected to fail.
If the test is testing this, we should disable the test.


>
>
>> I expect that it is not obvious to most developers why the change queue is
>> failing, if the change queue itself is the reason.
>>
>>
>
> Sorry, no capacity on my side for digging into the failure right now. @Dafna
> Ron  , @Anton Marchukov  ?
>
>
>>
>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://www.redhat.com/>*Red Hat respects your work life balance.
>>> Therefore there is no need to answer this email out of your office hours.*
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/F2A4F4YU74PRYGG4ZQ555HJKLEZX4P2K/
>>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2YSKPBSB6G7IPVNY7IINC2KABYQQ35ID/


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler 
> wrote:
> >
> >
> >
> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler 
> wrote:
> >>
> >>
> >>
> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler 
> wrote:
> >>>
> >>>
> >>>
> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:
> >>>>
> >>>>
> >>>>
> >>>> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk 
> wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 11/22/19 4:54 PM, Martin Perina wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler 
> wrote:
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> >>>>>>>>
> >>>>>>>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >>>>>>>> >
> >>>>>>>> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora
> Barroso wrote:
> >>>>>>>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >>>>>>>> > > wrote:
> >>>>>>>> > > >
> >>>>>>>> > > >
> >>>>>>>> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler
> wrote:
> >>>>>>>> > > >
> >>>>>>>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >>>>>>>> > > > > wrote:
> >>>>>>>> > > > >
> >>>>>>>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >>>>>>>> > > > > > wrote:
> >>>>>>>> > > > > >
> >>>>>>>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> >>>>>>>> > > > > >> 
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >> wrote:
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >> > Hi,
> >>>>>>>> > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> >>>>>>>> > > > > >> > fails
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >> with
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >> >  FAILED! => {"changed": false, "failures": [],
> "msg": "Depsolve
> >>>>>>>> > > > > >> >  Error
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >> occured:
> >>>>>>>> > > > > >>
> >>>>>>>> > > > > >> > \n Problem 1: cannot install the best update
> candidate for package
> >>>>>>>> > > > > >> > vdsm-
> >>>>>>>> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  -
> nothing provides
> >>>>>>>> > > > > >>
> >>>>>>

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:
> >>
> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler 
> wrote:
> >> >
> >> >
> >> >
> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler 
> wrote:
> >> >>
> >> >>
> >> >>
> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler 
> wrote:
> >> >>>
> >> >>>
> >> >>>
> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer 
> wrote:
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk 
> wrote:
> >> >>>>>
> >> >>>>>
> >> >>>>>
> >> >>>>> On 11/22/19 4:54 PM, Martin Perina wrote:
> >> >>>>>
> >> >>>>>
> >> >>>>>
> >> >>>>> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> >> >>>>>>>>
> >> >>>>>>>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >> >>>>>>>> >
> >> >>>>>>>> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de
> Mora Barroso wrote:
> >> >>>>>>>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >> >>>>>>>> > > wrote:
> >> >>>>>>>> > > >
> >> >>>>>>>> > > >
> >> >>>>>>>> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler
> wrote:
> >> >>>>>>>> > > >
> >> >>>>>>>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >> >>>>>>>> > > > > wrote:
> >> >>>>>>>> > > > >
> >> >>>>>>>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >> >>>>>>>> > > > > > wrote:
> >> >>>>>>>> > > > > >
> >> >>>>>>>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> >> >>>>>>>> > > > > >> 
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >> wrote:
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >> > Hi,
> >> >>>>>>>> > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> >> >>>>>>>> > > > > >> > fails
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >> with
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >>
> >> >>>>>>>> > > > > >> >  FAILED! => {"changed": false, "failures": [],
> "msg": "Depsolve
> >> >>>>>>>> > > > > &g

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer  wrote:
> >>
> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler 
> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
> wrote:
> >> >>
> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler 
> wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler 
> wrote:
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer 
> wrote:
> >> >> >>>>
> >> >> >>>>
> >> >> >>>>
> >> >> >>>> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk 
> wrote:
> >> >> >>>>>
> >> >> >>>>>
> >> >> >>>>>
> >> >> >>>>> On 11/22/19 4:54 PM, Martin Perina wrote:
> >> >> >>>>>
> >> >> >>>>>
> >> >> >>>>>
> >> >> >>>>> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >>>>>>
> >> >> >>>>>>
> >> >> >>>>>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >>>>>>>
> >> >> >>>>>>>
> >> >> >>>>>>>
> >> >> >>>>>>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora
> Barroso  wrote:
> >> >> >>>>>>>>
> >> >> >>>>>>>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >> >> >>>>>>>> >
> >> >> >>>>>>>> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de
> Mora Barroso wrote:
> >> >> >>>>>>>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >> >> >>>>>>>> > > wrote:
> >> >> >>>>>>>> > > >
> >> >> >>>>>>>> > > >
> >> >> >>>>>>>> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik
> Holler wrote:
> >> >> >>>>>>>> > > >
> >> >> >>>>>>>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >> >> >>>>>>>> > > > > wrote:
> >> >> >>>>>>>> > > > >
> >> >> >>>>>>>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >> >> >>>>>>>> > > > > > wrote:
> >> >> >>>>>>>> > > > > >
> >> >> >>>>>>>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> >> >> >>>>>>>> > > > > >> 
> >> >> >>>>>>>> > > > > >>
> >> >> >>>>>>>> > > > > >>
> >> >> >>>>>>>> > > > > >>
> >> >> >>>>>>>> > > > > >> wrote:
> >> >> >>>>>>>> > > > > >>
> >> >> >>>>>>>> > > > > >> > Hi,
> >> >> >>>>>>>> > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> >> >> >>>>>>>> > > > > >> > fails
> >> >

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-26 Thread Dominik Holler
On Mon, Nov 25, 2019 at 7:12 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:
> >>
> >> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler 
> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer 
> wrote:
> >> >>
> >> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler 
> wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
> wrote:
> >> >> >>
> >> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer <
> nsof...@redhat.com> wrote:
> >> >> >> >>>>
> >> >> >> >>>>
> >> >> >> >>>>
> >> >> >> >>>> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk <
> msobc...@redhat.com> wrote:
> >> >> >> >>>>>
> >> >> >> >>>>>
> >> >> >> >>>>>
> >> >> >> >>>>> On 11/22/19 4:54 PM, Martin Perina wrote:
> >> >> >> >>>>>
> >> >> >> >>>>>
> >> >> >> >>>>>
> >> >> >> >>>>> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>>>>>
> >> >> >> >>>>>>
> >> >> >> >>>>>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>>>>>>
> >> >> >> >>>>>>>
> >> >> >> >>>>>>>
> >> >> >> >>>>>>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora
> Barroso  wrote:
> >> >> >> >>>>>>>>
> >> >> >> >>>>>>>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >> >> >> >>>>>>>> >
> >> >> >> >>>>>>>> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte
> de Mora Barroso wrote:
> >> >> >> >>>>>>>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >> >> >> >>>>>>>> > > wrote:
> >> >> >> >>>>>>>> > > >
> >> >> >> >>>>>>>> > > >
> >> >> >> >>>>>>>> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik
> Holler wrote:
> >> >> >> >>>>>>>> > > >
> >> >> >> >>>>>>>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >> >> >> >>>>>>>> > > > > wrote:
> >> >> >> >>>>>>>> > > > >
> >> >> >> >>>>>>>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >> >> >> >>>>>>>> > > > > > wrote:
> >> >> >> >>>>>>>> > > > > >
> >> >> >> >>>>>>>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech
> Juranek
> >> >> >> >>>>

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-27 Thread Dominik Holler
 vers 4 proc 1
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> #1/3: 53 (OP_SEQUENCE)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
>>> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
>>> slotid 0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
>>> enter. seqid 406 slot_seqid 405
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> 9042fc202080 opcnt 3 #1: 53: status 0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> #2/3: 22 (OP_PUTFH)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd:
>>> fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request
>>> from insecure port 192.168.200.1, port=51529!
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> 9042fc202080 opcnt 3 #2: 22: status 1
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound
>>> returned 1
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: -->
>>> nfsd4_store_cache_entry slot 9042c4d97000
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client
>>> (clientid 5dde5a1f/cc80daed)
>>>
>>> Regards, Marcin
>>>
>>> On 11/26/19 8:40 PM, Martin Perina wrote:
>>>
>>> I've just merged https://gerrit.ovirt.org/105111 which only silence the
>>> issue, but we really need to unblock OST, as it's suffering from this for
>>> more than 2 weeks now.
>>>
>>> Tal/Nir, could someone really investigate why the storage become
>>> unavailable after some time? It may be caused by recent switch of hosts to
>>> CentOS 8, but may be not related
>>>
>>> Thanks,
>>> Martin
>>>
>>>
>>> On Tue, Nov 26, 2019 at 9:17 AM Dominik Holler 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Nov 25, 2019 at 7:12 PM Nir Soffer  wrote:
>>>>
>>>>> On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler 
>>>>> wrote:
>>>>> >
>>>>> >
>>>>> >
>>>>> > On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer 
>>>>> wrote:
>>>>> >>
>>>>> >> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler 
>>>>> wrote:
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer 
>>>>> wrote:
>>>>> >> >>
>>>>> >> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler <
>>>>> dhol...@redhat.com> wrote:
>>>>> >> >> >
>>>>> >> >> >
>>>>> >> >> >
>>>>> >> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
>>>>> wrote:
>>>>> >> >> >>
>>>>> >> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler <
>>>>> dhol...@redhat.com> wrote:
>>>>> >> >> >> >
>>>>> >> >> >> >
>>>>> >> >> >> >
>>>>> >> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler <
>>>>> dhol...@redhat.com> wrote:
>>>>> >> >> >> >>
>>>>> >> >> >> >>
>>>>> >> >> >> >>
>>>>> >> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
>>>>> dhol...@redhat.com> wrote:
>>>>> >> >> >> >>>
>>>>> >> >> >> >>>
>>>>> >> >> >> >>>
>>>>> >> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer <
>>>>> nsof...@redhat.com> wrote:
>>>>> >> >> >> >>>>
>>>>> >> >> >> >>>>
>>>>> >> >> >> >>>>
>>>>> >> >> >> >>>> On F

[ovirt-devel] ovirt-engine unit tests ticket.TicketTest

2019-12-01 Thread Dominik Holler
Hi,
for me, the following ovirt-engine unit tests
 org.ovirt.engine.core.uutils.crypto.ticket.TicketTest.testByEKU

 org.ovirt.engine.core.uutils.crypto.ticket.TicketTest.testByCertificate

are failing. Is someone already investigating?
Dominik
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TJO2VRAF2ZAUIUIILWXWOFF5ANYCEQQB/


[ovirt-devel] Re: New dependency for development environment

2019-12-11 Thread Dominik Holler
On Wed, Nov 27, 2019 at 8:37 AM Ondra Machacek  wrote:

> Hello,
>
> we are going to merge a series of patches to master branch, which
> integrates ansible-runner with oVirt engine. When the patches will be
> merged you will need to install new package called ansible-runner-
> service-dev, and follow instructions so your dev-env will keep working
> smoothly(all relevant info will be also in README.adoc):
>
> 1) sudo dnf update ovirt-release-master
>
> 2) sudo dnf install -y ansible-runner-service-dev
>
>
"dnf install -y ansible-runner-service-dev" did not work for me on Fedora
29.
I manually created the file /etc/yum.repos.d/centos.repo:
[centos-ovirt44-testing]
name=CentOS-7 - oVirt 4.4
baseurl=http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/
gpgcheck=0
enabled=1

which made ansible-runner-service-dev available.
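If the workaround file above needs to be scripted rather than hand-edited, here is a minimal sketch that generates the same repo-file content; the snippet is my own and not part of any oVirt tooling:

```python
# Hypothetical generator for the workaround repo file above; just a sketch
# reproducing the same content, not part of any oVirt tooling.
from configparser import ConfigParser
import io

repo = ConfigParser()
repo["centos-ovirt44-testing"] = {
    "name": "CentOS-7 - oVirt 4.4",
    "baseurl": "http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/",
    "gpgcheck": "0",
    "enabled": "1",
}

# Render the INI content; in practice this would be written to
# /etc/yum.repos.d/centos.repo with root privileges.
buf = io.StringIO()
repo.write(buf)
print(buf.getvalue())
```

Note that `$basearch` is expanded by dnf at repo-resolution time, not by this script, so it must be written out literally.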


> 3) Edit `/etc/ansible-runner-service/config.yaml` file as follows:
>
>---
>playbooks_root_dir:
> '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
>ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
>port: 50001
>target_user: root
>
> Where `$PREFIX` is the prefix of your development environment prefix,
> which you've specified during the compilation of the engine.
>
> 4) Restart and enable ansible-runner-service:
>
># systemctl restart ansible-runner-service
># systemctl enable ansible-runner-service
>
> That's it, your dev-env should start using the ansible-runner-service
> for host-deployment etc.
>
> Please note that only Fedora 30/31 and Centos7 was packaged, and are
> natively supported!
>
> Thanks,
> Ondra
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AFKGTV4WDNONLND63RR6YMSMV4FJQM4L/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RMNEQG7KNFWSQX4REN3JN34ED4KTGYRH/


[ovirt-devel] Re: vdsm [check-patch] failures - error with nmstate repo

2019-12-19 Thread Dominik Holler
Does rebasing the change solve the problem?

On Thu, Dec 19, 2019 at 2:41 PM Liran Rotenberg  wrote:

> Hi,
> VDSM check patch keep failing,
> Console output taken from[2]:
>
> *15:33:30*  failure: repodata/repomd.xml from nmstate: [Errno 256] No more 
> mirrors to try.*15:33:30*  
> https://copr-be.cloud.fedoraproject.org/results/nmstate/nmstate-git-fedora/fedora-30-x86_64/repodata/repomd.xml:
>  [Errno 14] HTTPS Error 404 - Not Found
>
>
> *15:33:33*  ERROR: Command failed: *15:33:33*   # /usr/bin/dnf --installroot 
> /var/lib/mock/fedora-30-x86_64-b4b1fa43be798f48f41c1f0af664447e-6272/root/ 
> --releasever 30 --setopt=deltarpm=False --allowerasing --disableplugin=local 
> --disableplugin=spacewalk install @buildsys-build autoconf automake 
> createrepo dnf dnf-utils e2fsprogs gcc gdb git iproute-tc 
> iscsi-initiator-utils libguestfs-tools-c lshw make mom openvswitch 
> ovirt-imageio-common python3-augeas python3-blivet python3-coverage 
> python3-dateutil python3-dbus python3-decorator python3-devel 
> python3-dmidecode python3-inotify python3-ioprocess-1.3.0 python3-libselinux 
> python3-libvirt python3-magic python3-netaddr python3-nose python3-pip 
> python3-policycoreutils python3-pyudev python3-pyyaml python3-requests 
> python3-sanlock python3-six python3-yaml rpm-build rpmlint sudo xfsprogs 
> --setopt=tsflags=nocontexts*15:33:33*  No matches found for the following 
> disable plugin patterns: local, spacewalk*15:33:33*  Custom 
> fc30-updates-debuginfo   1.8 MB/s |  13 MB 00:07
> *15:33:33*  Custom tested16 MB/s | 570 kB 
> 00:00*15:33:33*  Custom vdo   61 
> kB/s | 8.1 kB 00:00*15:33:33*  Custom virt-preview
>  770 kB/s | 139 kB 00:00*15:33:33*  Custom nmstate
>   4.7 kB/s | 341  B 00:00*15:33:33*  Error: 
> Failed to download metadata for repo 'nmstate': Cannot download repomd.xml: 
> Cannot download repodata/repomd.xml: All mirrors were tried
>
>
> [1] https://jenkins.ovirt.org/job/vdsm_standard-check-patch/16055/
>
> [2] https://jenkins.ovirt.org/job/vdsm_standard-check-patch/16058/
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WMLSY6AP54EQGZEZ6LXEEDZW4VYVTXJ6/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DME7Z4TKFPP7F3KKI7AG3DAZXOONRPQP/


[ovirt-devel] migration on migration network failed

2020-01-22 Thread Dominik Holler
Hello,
is live migration via a migration network on ovirt-master currently expected
to work?
Currently, it fails in OST network-suite-master with

2020-01-21 22:49:20,991-0500 INFO  (jsonrpc/2) [api.virt] START
migrate(params={'abortOnError': 'true', 'autoConverge': 'true', 'dst': '
192.168.201.3:54321', 'method': 'online', 'vmId':
'736dea3b-64be-427f-9ebf-d1e758b6f68e', 'src': '192.168.201.4', 'dstqemu':
'192.0.3.1', 'convergenceSchedule': {'init': [{'name': 'setDowntime',
'params': ['100']}], 'stalling': [{'limit': 1, 'action': {'name':
'setDowntime', 'params': ['150']}}, {'limit': 2, 'action': {'name':
'setDowntime', 'params': ['200']}}, {'limit': 3, 'action': {'name':
'setDowntime', 'params': ['300']}}, {'limit': 4, 'action': {'name':
'setDowntime', 'params': ['400']}}, {'limit': 6, 'action': {'name':
'setDowntime', 'params': ['500']}}, {'limit': -1, 'action': {'name':
'abort', 'params': []}}]}, 'outgoingLimit': 2, 'enableGuestEvents': True,
'tunneled': 'false', 'encrypted': False, 'compressed': 'false',
'incomingLimit': 2}) from=:::192.168.201.2,43782,
flow_id=5c0e0e0a-8d5f-4b66-bda3-acca1e626a41,
vmId=736dea3b-64be-427f-9ebf-d1e758b6f68e (api:48)
2020-01-21 22:49:20,997-0500 INFO  (jsonrpc/2) [api.virt] FINISH migrate
return={'status': {'code': 0, 'message': 'Migration in progress'},
'progress': 0} from=:::192.168.201.2,43782,
flow_id=5c0e0e0a-8d5f-4b66-bda3-acca1e626a41,
vmId=736dea3b-64be-427f-9ebf-d1e758b6f68e (api:54)
2020-01-21 22:49:20,997-0500 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
call VM.migrate succeeded in 0.01 seconds (__init__:312)
2020-01-21 22:49:21,099-0500 INFO  (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Migration semaphore:
acquiring (migration:405)
2020-01-21 22:49:21,099-0500 INFO  (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Migration semaphore: acquired
(migration:407)
2020-01-21 22:49:21,837-0500 INFO  (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Creation of destination VM
took: 0 seconds (migration:459)
2020-01-21 22:49:21,838-0500 INFO  (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') starting migration to
qemu+tls://192.168.201.3/system with miguri tcp://192.0.3.1 (migration:525)
2020-01-21 22:49:21,870-0500 ERROR (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') operation failed: Failed to
connect to remote libvirt URI qemu+tls://192.168.201.3/system: unable to
connect to server at '192.168.201.3:16514': Connection refused
(migration:278)
2020-01-21 22:49:22,816-0500 ERROR (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Failed to migrate
(migration:441)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 422,
in _regular_run
time.time(), machineParams
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 528,
in _startUnderlyingMigration
self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 609,
in _perform_with_conv_schedule
self._perform_migration(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 545,
in _perform_migration
self._migration_flags)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101,
in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94,
in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1838, in
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt
URI qemu+tls://192.168.201.3/system: unable to connect to server at '
192.168.201.3:16514': Connection refused
2020-01-21 22:49:22,989-0500 INFO  (jsonrpc/7) [api.host] START
getAllVmStats() from=:::192.168.201.2,43782 (api:48)
2020-01-21 22:49:22,992-0500 INFO  (jsonrpc/7) [throttled] Current
getAllVmStats: {'736dea3b-64be-427f-9ebf-d1e758b6f68e': 'Up'}
(throttledlog:104)
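For readers unfamiliar with the two addresses in the log above: vdsm derives a libvirt control URI from the 'dst' parameter (the destination libvirtd, whose TLS listener is on port 16514) and a separate qemu data URI from 'dstqemu' (the address on the migration network). A minimal sketch of that split follows; the function name and dict shape are my own, not vdsm's internal API:

```python
# Sketch of how the two URIs in the log relate to the 'dst' and 'dstqemu'
# migrate() parameters; names are mine, not vdsm's API.
def migration_uris(params):
    # 'dst' carries the destination vdsm address plus its port; only the
    # host part goes into the libvirt control URI (libvirtd TLS uses 16514).
    dst_host = params["dst"].rsplit(":", 1)[0]
    duri = "qemu+tls://%s/system" % dst_host
    # 'dstqemu' is the address on the migration network used for qemu data.
    muri = "tcp://%s" % params["dstqemu"]
    return duri, muri

duri, muri = migration_uris({"dst": "192.168.201.3:54321", "dstqemu": "192.0.3.1"})
print(duri)  # qemu+tls://192.168.201.3/system
print(muri)  # tcp://192.0.3.1
```

Note that the "Connection refused" in the traceback is on 192.168.201.3:16514, i.e. the control connection to the destination libvirtd TLS listener, not on the migration-network address 192.0.3.1.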


Please find details in
https://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/1245/
.


Dominik
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/53JJBD4HRM7IFFSSTSUU52W3QXV33I3U/


[ovirt-devel] OST network suite is failing

2020-02-06 Thread Dominik Holler
Hello Tal and Andrej,
can you please have a look at the failing run:
https://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/1260/

2020-02-05 21:03:12,798-05 ERROR
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
task-2) [03a09eb0-7ccd-44db-a8fd-c01dc8ce82c7] Error during
ValidateFailure.: java.lang.NullPointerException
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.validator.storage.DiskVmElementValidator.isVirtIoScsiValid(DiskVmElementValidator.java:54)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.AddDiskCommand.checkIfImageDiskCanBeAdded(AddDiskCommand.java:299)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.AddDiskCommand.validate(AddDiskCommand.java:195)



Thanks,

Dominik
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/V42PCHW23M5XVSQMQ3IRJ4ZYTEPLMJQC/


[ovirt-devel] Re: net_mlx5 libibverbs.so.1 noise

2020-02-19 Thread Dominik Holler
On Thu, Feb 20, 2020 at 8:10 AM Yedidyah Bar David  wrote:

> Hi all,
>
> I upgraded my CentOS8 engine machine, and now whenever I run a shell
> (e.g. ssh, or another window in tmux), I get:
>
> net_mlx5: cannot load glue library: libibverbs.so.1: cannot open
> shared object file: No such file or directory
> net_mlx5: cannot initialize PMD due to missing run-time dependency on
> rdma-core libraries (libibverbs, libmlx5)
> PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open
> shared object file: No such file or directory
> PMD: net_mlx4: cannot initialize PMD due to missing run-time
> dependency on rdma-core libraries (libibverbs, libmlx4)
> net_mlx5: cannot load glue library: libibverbs.so.1: cannot open
> shared object file: No such file or directory
> net_mlx5: cannot initialize PMD due to missing run-time dependency on
> rdma-core libraries (libibverbs, libmlx5)
> PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open
> shared object file: No such file or directory
> PMD: net_mlx4: cannot initialize PMD due to missing run-time
> dependency on rdma-core libraries (libibverbs, libmlx4)
>
> Not sure if this is a new known issue. It happened in the past, and
> was fixed, but apparently got broken again. Now commented about this
> on the old bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1642945#c9
>
>
This means that the bug was fixed downstream; I am unsure whether anyone has
checked if this is already fixed in the upstream RPMs.
I am also unsure whether oVirt can leech the recent OVN 2.11 from FDP 20, which
might be required for CentOS 8.2, or if we have to create our own RPM.
I will keep an eye on this, and I would be happy about any hint.



> Workaround:
>
> dnf install libibverbs.so.1 libmlx5
>


Please note that OVS just generates these warnings by mistake, because OVS is
not directly required on the engine's host.
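A quick way to check whether the missing glue-library dependency is resolvable by the dynamic loader, before or after applying the workaround, is a ctypes probe; the helper name below is my own, not part of any oVirt or DPDK tool:

```python
# Sketch: ask the dynamic loader whether a shared library is resolvable.
# The helper name is mine; this is not part of any oVirt or DPDK tool.
import ctypes.util

def has_shared_lib(name):
    # e.g. has_shared_lib("ibverbs") searches for libibverbs.so.* via
    # ldconfig (or the compiler) and returns True only if it is found
    return ctypes.util.find_library(name) is not None

print(has_shared_lib("ibverbs"))
```

On the engine machine above this would flip to True for "ibverbs" once libibverbs is installed.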


>
> Best regards,
> --
> Didi
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DO7QX2RN2UAUJFCRJM4D75WFSNSKI72S/

